Compare commits

..

16 Commits

Author SHA1 Message Date
kevin zhao
34809c9880 timezones 2025-10-30 18:17:52 -04:00
kevin zhao
325e35388c adding system date to env context 2025-10-30 18:00:32 -04:00
Bernard Niset
ff6d4cec6b fix: Update seatbelt policy for java on macOS (#3987)
# Summary

This PR is related to Issue #3978 and contains a fix to the seatbelt
profile for macOS that allows running java/jdk tooling from the sandbox.
I found that the included change is the minimum needed to make it run on
my machine.

There is a unit test that Codex added while making this fix. I wonder
whether it is useful, since Java needs to be installed on the target
machine for it to be relevant. I can remove it if that is better.

Fixes #3978
2025-10-30 14:25:04 -07:00
Celia Chen
6ef658a9f9 [Hygiene] Remove include_view_image_tool config (#5976)
There's still some debate about whether we want to expose
`tools.view_image` or `feature.view_image`, so those are left unchanged
for now, but this old `include_view_image_tool` config is good to go.
Also updated the docs to reflect that the `view_image` tool now defaults
to true.
2025-10-30 13:23:24 -07:00
Brad M. Harris
8b8be343a7 Documentation improvement: add missing period (#3754)
Pull request template, minimal:

---

### **What?**

Minor change (low-hanging fruit).

### **Why?**

To improve code quality or documentation with minimal risk and effort.

### **How?**

Edited directly via VSCode Editor.

---

**Checklist (pre-PR):**

* [x] I have read the CLA Document and hereby sign the CLA.
* [x] I reviewed the “Contributing” markdown file for this project.

*This template meets standard external (non-OpenAI) PR requirements and
signals compliance for maintainers.*

Co-authored-by: Eric Traut <etraut@openai.com>
2025-10-30 13:01:33 -07:00
Owen Lin
89c00611c2 [app-server] remove serde(skip_serializing_if = "Option::is_none") annotations (#5939)
We had this annotation everywhere in the app-server APIs, which made
fields serialize as `field?: T`, meaning that if the field was `None` we
would omit it from the payload. Removing the annotation changes this so
that we return `field: T | null` instead, which aligns codex
app-server's API more closely with the convention of public OpenAI APIs
like Responses.

Separately, this removes the `#[ts(optional_fields = nullable)]`
annotations that were recently added, which made all the TS types become
`field?: T | null`; that is not great, since clients then need to handle
both undefined and null.

I think generally it'll be best to have optional types be either:
- `field: T | null` (preferred, aligned with public OpenAI APIs)
- `field?: T` where we have to, such as types generated from the MCP
schema:
https://github.com/modelcontextprotocol/modelcontextprotocol/blob/main/schema/2025-06-18/schema.ts
(see changes to `mcp-types/`)

I updated @etraut-openai's unit test to check that all generated TS
types are one or the other, not both (so it will error if we ever emit a
type with `field?: T | null`). I don't think there's currently a good
use case for that, but we can always revisit.
2025-10-30 18:18:53 +00:00
Anton Panasenko
9572cfc782 [codex] add developer instructions (#5897)
We are using developer instructions for code reviews, so we need to be
able to pass them in the CLI as well.
2025-10-30 11:18:31 -07:00
Dylan Hurd
4a55646a02 chore: testing on freeform apply_patch (#5952)
## Summary
Duplicates the tests in `apply_patch_cli.rs`, but exercises the freeform
apply_patch tool rather than the function-call path. The good news is
that all the tests pass with zero logical changes, with the exception of
the heredoc test, which doesn't really make sense in the freeform tool
context anyway.

@jif-oai, since you wrote the original tests in #5557, I'd love your
opinion on the right way to DRY these test cases between the two. Happy
to set up a more sophisticated harness, but I didn't want to go down the
rabbit hole until we agree on the right pattern.

## Testing
- [x] These are tests
2025-10-30 10:40:48 -07:00
jif-oai
209af68611 nit: log rmcp_client (#5978) 2025-10-30 17:40:38 +00:00
jif-oai
f4f9695978 feat: compaction prompt configurable (#5959)
```
 codex -c compact_prompt="Summarize in bullet points"
 ```
2025-10-30 14:24:24 +00:00
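Since `-c key=value` overrides map onto config keys, the same prompt should also be settable persistently; a sketch, assuming the standard `-c`-to-TOML mapping:

```toml
# CODEX_HOME/config.toml — assumed equivalent of the -c override above
compact_prompt = "Summarize in bullet points"
```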
Ahmed Ibrahim
5fcc380bd9 Pass initial history as an optional to codex delegate (#5950)
This will give us more freedom in controlling the delegation, e.g. we
can fork our history and run `compact`.
2025-10-30 07:22:42 -07:00
jif-oai
aa76003e28 chore: unify config crates (#5958) 2025-10-30 10:28:32 +00:00
Ahmed Ibrahim
fac548e430 Send delegate header (#5942)
Send delegate type header
2025-10-30 09:49:40 +00:00
Ahmed Ibrahim
9bd3453592 Add debug-only slash command for rollout path (#5936)
## Summary
- add a debug-only `/rollout` slash command that prints the rollout file
path or reports when none is known
- surface the new command in the slash command metadata and cover it
with unit tests

<img width="539" height="99" alt="image"
src="https://github.com/user-attachments/assets/688e1334-8a06-4576-abb8-ada33b458661"
/>
2025-10-30 03:51:00 +00:00
zhao-oai
b34efde2f3 asdf (#5940)
.
2025-10-30 01:10:41 +00:00
Ahmed Ibrahim
7aa46ab5fc ignore agent message deltas for the review mode (#5937)
The deltas produce the whole JSON output, so ignore them.
2025-10-30 00:47:55 +00:00
74 changed files with 2235 additions and 638 deletions

View File

@@ -1 +1 @@
The changelog can be found on the [releases page](https://github.com/openai/codex/releases)
The changelog can be found on the [releases page](https://github.com/openai/codex/releases).

View File

@@ -545,7 +545,7 @@ mod tests {
use uuid::Uuid;
#[test]
fn generated_ts_omits_undefined_unions_for_optionals() -> Result<()> {
fn generated_ts_has_no_optional_nullable_fields() -> Result<()> {
let output_dir = std::env::temp_dir().join(format!("codex_ts_types_{}", Uuid::now_v7()));
fs::create_dir(&output_dir)?;
@@ -562,7 +562,7 @@ mod tests {
generate_ts(&output_dir, None)?;
let mut undefined_offenders = Vec::new();
let mut missing_optional_marker = BTreeSet::new();
let mut optional_nullable_offenders = BTreeSet::new();
let mut stack = vec![output_dir];
while let Some(dir) = stack.pop() {
for entry in fs::read_dir(&dir)? {
@@ -591,27 +591,80 @@ mod tests {
let mut search_start = 0;
while let Some(idx) = contents[search_start..].find("| null") {
let abs_idx = search_start + idx;
let Some(colon_idx) = contents[..abs_idx].rfind(':') else {
// Find the property-colon for this field by scanning forward
// from the start of the segment and ignoring nested braces,
// brackets, and parens. This avoids colons inside nested
// type literals like `{ [k in string]?: string }`.
let line_start_idx =
contents[..abs_idx].rfind('\n').map(|i| i + 1).unwrap_or(0);
let mut segment_start_idx = line_start_idx;
if let Some(rel_idx) = contents[line_start_idx..abs_idx].rfind(',') {
segment_start_idx = segment_start_idx.max(line_start_idx + rel_idx + 1);
}
if let Some(rel_idx) = contents[line_start_idx..abs_idx].rfind('{') {
segment_start_idx = segment_start_idx.max(line_start_idx + rel_idx + 1);
}
if let Some(rel_idx) = contents[line_start_idx..abs_idx].rfind('}') {
segment_start_idx = segment_start_idx.max(line_start_idx + rel_idx + 1);
}
// Scan forward for the colon that separates the field name from its type.
let mut level_brace = 0_i32;
let mut level_brack = 0_i32;
let mut level_paren = 0_i32;
let mut in_single = false;
let mut in_double = false;
let mut escape = false;
let mut prop_colon_idx = None;
for (i, ch) in contents[segment_start_idx..abs_idx].char_indices() {
let idx_abs = segment_start_idx + i;
if escape {
escape = false;
continue;
}
match ch {
'\\' => {
// Only treat as escape when inside a string.
if in_single || in_double {
escape = true;
}
}
'\'' => {
if !in_double {
in_single = !in_single;
}
}
'"' => {
if !in_single {
in_double = !in_double;
}
}
'{' if !in_single && !in_double => level_brace += 1,
'}' if !in_single && !in_double => level_brace -= 1,
'[' if !in_single && !in_double => level_brack += 1,
']' if !in_single && !in_double => level_brack -= 1,
'(' if !in_single && !in_double => level_paren += 1,
')' if !in_single && !in_double => level_paren -= 1,
':' if !in_single
&& !in_double
&& level_brace == 0
&& level_brack == 0
&& level_paren == 0 =>
{
prop_colon_idx = Some(idx_abs);
break;
}
_ => {}
}
}
let Some(colon_idx) = prop_colon_idx else {
search_start = abs_idx + 5;
continue;
};
let line_start_idx = contents[..colon_idx]
.rfind('\n')
.map(|i| i + 1)
.unwrap_or(0);
let mut segment_start_idx = line_start_idx;
if let Some(rel_idx) = contents[line_start_idx..colon_idx].rfind(',') {
segment_start_idx = segment_start_idx.max(line_start_idx + rel_idx + 1);
}
if let Some(rel_idx) = contents[line_start_idx..colon_idx].rfind('{') {
segment_start_idx = segment_start_idx.max(line_start_idx + rel_idx + 1);
}
if let Some(rel_idx) = contents[line_start_idx..colon_idx].rfind('}') {
segment_start_idx = segment_start_idx.max(line_start_idx + rel_idx + 1);
}
let mut field_prefix = contents[segment_start_idx..colon_idx].trim();
if field_prefix.is_empty() {
search_start = abs_idx + 5;
@@ -640,25 +693,26 @@ mod tests {
continue;
}
// If the last non-whitespace before ':' is '?', then this is an
// optional field with a nullable type (i.e., "?: T | null"),
// which we explicitly disallow.
if field_prefix.chars().rev().find(|c| !c.is_whitespace()) == Some('?') {
search_start = abs_idx + 5;
continue;
let line_number =
contents[..abs_idx].chars().filter(|c| *c == '\n').count() + 1;
let offending_line_end = contents[line_start_idx..]
.find('\n')
.map(|i| line_start_idx + i)
.unwrap_or(contents.len());
let offending_snippet =
contents[line_start_idx..offending_line_end].trim();
optional_nullable_offenders.insert(format!(
"{}:{}: {offending_snippet}",
path.display(),
line_number
));
}
let line_number =
contents[..abs_idx].chars().filter(|c| *c == '\n').count() + 1;
let offending_line_end = contents[line_start_idx..]
.find('\n')
.map(|i| line_start_idx + i)
.unwrap_or(contents.len());
let offending_snippet = contents[line_start_idx..offending_line_end].trim();
missing_optional_marker.insert(format!(
"{}:{}: {offending_snippet}",
path.display(),
line_number
));
search_start = abs_idx + 5;
}
}
@@ -670,12 +724,12 @@ mod tests {
"Generated TypeScript still includes unions with `undefined` in {undefined_offenders:?}"
);
// If this test fails, it means that a struct field that is `Option<T>` in Rust
// is being generated as `T | null` in TypeScript, without the optional marker
// (`?`). To fix this, add #[ts(optional_fields = nullable)] to the struct definition.
// If this assertion fails, it means a field was generated as
// "?: T | null" — i.e., both optional (undefined) and nullable (null).
// We only want either "?: T" or ": T | null".
assert!(
missing_optional_marker.is_empty(),
"Generated TypeScript has nullable fields without an optional marker: {missing_optional_marker:?}"
optional_nullable_offenders.is_empty(),
"Generated TypeScript has optional fields with nullable types (disallowed '?: T | null'), add #[ts(optional)] to fix:\n{optional_nullable_offenders:?}"
);
Ok(())

View File

@@ -30,20 +30,20 @@ pub enum JSONRPCMessage {
/// A request that expects a response.
#[derive(Debug, Clone, PartialEq, Deserialize, Serialize, JsonSchema, TS)]
#[ts(optional_fields = nullable)]
pub struct JSONRPCRequest {
pub id: RequestId,
pub method: String,
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub params: Option<serde_json::Value>,
}
/// A notification which does not expect a response.
#[derive(Debug, Clone, PartialEq, Deserialize, Serialize, JsonSchema, TS)]
#[ts(optional_fields = nullable)]
pub struct JSONRPCNotification {
pub method: String,
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub params: Option<serde_json::Value>,
}
@@ -62,10 +62,10 @@ pub struct JSONRPCError {
}
#[derive(Debug, Clone, PartialEq, Deserialize, Serialize, JsonSchema, TS)]
#[ts(optional_fields = nullable)]
pub struct JSONRPCErrorError {
pub code: i64,
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub data: Option<serde_json::Value>,
pub message: String,
}

View File

@@ -248,7 +248,6 @@ pub enum Account {
#[serde(rename = "chatgpt", rename_all = "camelCase")]
#[ts(rename = "chatgpt", rename_all = "camelCase")]
ChatGpt {
#[ts(optional = nullable)]
email: Option<String>,
plan_type: PlanType,
},
@@ -267,11 +266,9 @@ pub struct InitializeParams {
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Default, JsonSchema, TS)]
#[ts(optional_fields = nullable)]
#[serde(rename_all = "camelCase")]
pub struct ClientInfo {
pub name: String,
#[serde(skip_serializing_if = "Option::is_none")]
pub title: Option<String>,
pub version: String,
}
@@ -283,68 +280,62 @@ pub struct InitializeResponse {
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Default, JsonSchema, TS)]
#[ts(optional_fields = nullable)]
#[serde(rename_all = "camelCase")]
pub struct NewConversationParams {
/// Optional override for the model name (e.g. "o3", "o4-mini").
#[serde(skip_serializing_if = "Option::is_none")]
pub model: Option<String>,
/// Override the model provider to use for this session.
#[serde(skip_serializing_if = "Option::is_none")]
pub model_provider: Option<String>,
/// Configuration profile from config.toml to specify default options.
#[serde(skip_serializing_if = "Option::is_none")]
pub profile: Option<String>,
/// Working directory for the session. If relative, it is resolved against
/// the server process's current working directory.
#[serde(skip_serializing_if = "Option::is_none")]
pub cwd: Option<String>,
/// Approval policy for shell commands generated by the model:
/// `untrusted`, `on-failure`, `on-request`, `never`.
#[serde(skip_serializing_if = "Option::is_none")]
pub approval_policy: Option<AskForApproval>,
/// Sandbox mode: `read-only`, `workspace-write`, or `danger-full-access`.
#[serde(skip_serializing_if = "Option::is_none")]
pub sandbox: Option<SandboxMode>,
/// Individual config settings that will override what is in
/// CODEX_HOME/config.toml.
#[serde(skip_serializing_if = "Option::is_none")]
pub config: Option<HashMap<String, serde_json::Value>>,
/// The set of instructions to use instead of the default ones.
#[serde(skip_serializing_if = "Option::is_none")]
pub base_instructions: Option<String>,
/// Whether to include the apply patch tool in the conversation.
/// Developer instructions that will be sent as a `developer` role message.
#[serde(skip_serializing_if = "Option::is_none")]
pub developer_instructions: Option<String>,
/// Prompt used during conversation compaction.
#[serde(skip_serializing_if = "Option::is_none")]
pub compact_prompt: Option<String>,
/// Whether to include the apply patch tool in the conversation.
pub include_apply_patch_tool: Option<bool>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[ts(optional_fields = nullable)]
#[serde(rename_all = "camelCase")]
pub struct NewConversationResponse {
pub conversation_id: ConversationId,
pub model: String,
/// Note this could be ignored by the model.
#[serde(skip_serializing_if = "Option::is_none")]
pub reasoning_effort: Option<ReasoningEffort>,
pub rollout_path: PathBuf,
}
#[derive(Serialize, Deserialize, Debug, Clone, JsonSchema, TS)]
#[ts(optional_fields = nullable)]
#[serde(rename_all = "camelCase")]
pub struct ResumeConversationResponse {
pub conversation_id: ConversationId,
pub model: String,
#[serde(skip_serializing_if = "Option::is_none")]
pub initial_messages: Option<Vec<EventMsg>>,
pub rollout_path: PathBuf,
}
@@ -373,57 +364,46 @@ pub struct GetConversationSummaryResponse {
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Default, JsonSchema, TS)]
#[ts(optional_fields = nullable)]
#[serde(rename_all = "camelCase")]
pub struct ListConversationsParams {
/// Optional page size; defaults to a reasonable server-side value.
#[serde(skip_serializing_if = "Option::is_none")]
pub page_size: Option<usize>,
/// Opaque pagination cursor returned by a previous call.
#[serde(skip_serializing_if = "Option::is_none")]
pub cursor: Option<String>,
/// Optional model provider filter (matches against session metadata).
/// - None => filter by the server's default model provider
/// - Some([]) => no filtering, include all providers
/// - Some([...]) => only include sessions with one of the specified providers
#[serde(skip_serializing_if = "Option::is_none")]
pub model_providers: Option<Vec<String>>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[ts(optional_fields = nullable)]
#[serde(rename_all = "camelCase")]
pub struct ConversationSummary {
pub conversation_id: ConversationId,
pub path: PathBuf,
pub preview: String,
/// RFC3339 timestamp string for the session start, if available.
#[serde(skip_serializing_if = "Option::is_none")]
pub timestamp: Option<String>,
/// Model provider recorded for the session (resolved when absent in metadata).
pub model_provider: String,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[ts(optional_fields = nullable)]
#[serde(rename_all = "camelCase")]
pub struct ListConversationsResponse {
pub items: Vec<ConversationSummary>,
/// Opaque cursor to pass to the next call to continue after the last item.
/// if None, there are no more items to return.
#[serde(skip_serializing_if = "Option::is_none")]
pub next_cursor: Option<String>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Default, JsonSchema, TS)]
#[ts(optional_fields = nullable)]
#[serde(rename_all = "camelCase")]
pub struct ListModelsParams {
/// Optional page size; defaults to a reasonable server-side value.
#[serde(skip_serializing_if = "Option::is_none")]
pub page_size: Option<usize>,
/// Opaque pagination cursor returned by a previous call.
#[serde(skip_serializing_if = "Option::is_none")]
pub cursor: Option<String>,
}
@@ -448,24 +428,19 @@ pub struct ReasoningEffortOption {
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[ts(optional_fields = nullable)]
#[serde(rename_all = "camelCase")]
pub struct ListModelsResponse {
pub items: Vec<Model>,
/// Opaque cursor to pass to the next call to continue after the last item.
/// if None, there are no more items to return.
#[serde(skip_serializing_if = "Option::is_none")]
pub next_cursor: Option<String>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[ts(optional_fields = nullable)]
#[serde(rename_all = "camelCase")]
pub struct UploadFeedbackParams {
pub classification: String,
#[serde(skip_serializing_if = "Option::is_none")]
pub reason: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub conversation_id: Option<ConversationId>,
pub include_logs: bool,
}
@@ -493,7 +468,6 @@ pub enum LoginAccountParams {
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[ts(optional_fields = nullable)]
#[serde(rename_all = "camelCase")]
pub struct LoginAccountResponse {
/// Only set if the login method is ChatGPT.
@@ -510,20 +484,15 @@ pub struct LoginAccountResponse {
pub struct LogoutAccountResponse {}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[ts(optional_fields = nullable)]
#[serde(rename_all = "camelCase")]
pub struct ResumeConversationParams {
/// Absolute path to the rollout JSONL file, when explicitly resuming a known rollout.
#[serde(skip_serializing_if = "Option::is_none")]
pub path: Option<PathBuf>,
/// If the rollout path is not known, it can be discovered via the conversation id at the cost of extra latency.
#[serde(skip_serializing_if = "Option::is_none")]
pub conversation_id: Option<ConversationId>,
/// if the rollout path or conversation id is not known, it can be resumed from given history
#[serde(skip_serializing_if = "Option::is_none")]
pub history: Option<Vec<ResponseItem>>,
/// Optional overrides to apply when spawning the resumed session.
#[serde(skip_serializing_if = "Option::is_none")]
pub overrides: Option<NewConversationParams>,
}
@@ -602,19 +571,15 @@ pub struct LogoutChatGptParams {}
pub struct LogoutChatGptResponse {}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[ts(optional_fields = nullable)]
#[serde(rename_all = "camelCase")]
pub struct GetAuthStatusParams {
/// If true, include the current auth token (if available) in the response.
#[serde(skip_serializing_if = "Option::is_none")]
pub include_token: Option<bool>,
/// If true, attempt to refresh the token before returning status.
#[serde(skip_serializing_if = "Option::is_none")]
pub refresh_token: Option<bool>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[ts(optional_fields = nullable)]
#[serde(rename_all = "camelCase")]
pub struct ExecOneOffCommandParams {
/// Command argv to execute.
@@ -623,10 +588,8 @@ pub struct ExecOneOffCommandParams {
/// If not specified, a sensible default is used server-side.
pub timeout_ms: Option<u64>,
/// Optional working directory for the process. Defaults to server config cwd.
#[serde(skip_serializing_if = "Option::is_none")]
pub cwd: Option<PathBuf>,
/// Optional explicit sandbox policy overriding the server default.
#[serde(skip_serializing_if = "Option::is_none")]
pub sandbox_policy: Option<SandboxPolicy>,
}
@@ -646,17 +609,13 @@ pub struct GetAccountRateLimitsResponse {
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(optional_fields = nullable)]
pub struct GetAuthStatusResponse {
#[serde(skip_serializing_if = "Option::is_none")]
pub auth_method: Option<AuthMode>,
#[serde(skip_serializing_if = "Option::is_none")]
pub auth_token: Option<String>,
// Indicates that auth method must be valid to use the server.
// This can be false if using a custom provider that is configured
// with requires_openai_auth == false.
#[serde(skip_serializing_if = "Option::is_none")]
pub requires_openai_auth: Option<bool>,
}
@@ -667,7 +626,6 @@ pub struct GetUserAgentResponse {
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[ts(optional_fields = nullable)]
#[serde(rename_all = "camelCase")]
pub struct UserInfoResponse {
/// Note: `alleged_user_email` is not currently verified. We read it from
@@ -684,15 +642,12 @@ pub struct GetUserSavedConfigResponse {
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[ts(optional_fields = nullable)]
#[serde(rename_all = "camelCase")]
pub struct SetDefaultModelParams {
/// If set to None, this means `model` should be cleared in config.toml.
#[serde(skip_serializing_if = "Option::is_none")]
pub model: Option<String>,
/// If set to None, this means `model_reasoning_effort` should be cleared
/// in config.toml.
#[serde(skip_serializing_if = "Option::is_none")]
pub reasoning_effort: Option<ReasoningEffort>,
}
@@ -704,46 +659,32 @@ pub struct SetDefaultModelResponse {}
/// client-configurable settings that can be specified in the NewConversation
/// and SendUserTurn requests.
#[derive(Deserialize, Debug, Clone, PartialEq, Serialize, JsonSchema, TS)]
#[ts(optional_fields = nullable)]
#[serde(rename_all = "camelCase")]
pub struct UserSavedConfig {
/// Approvals
#[serde(skip_serializing_if = "Option::is_none")]
pub approval_policy: Option<AskForApproval>,
#[serde(skip_serializing_if = "Option::is_none")]
pub sandbox_mode: Option<SandboxMode>,
#[serde(skip_serializing_if = "Option::is_none")]
pub sandbox_settings: Option<SandboxSettings>,
#[serde(skip_serializing_if = "Option::is_none")]
pub forced_chatgpt_workspace_id: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub forced_login_method: Option<ForcedLoginMethod>,
/// Model-specific configuration
#[serde(skip_serializing_if = "Option::is_none")]
pub model: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub model_reasoning_effort: Option<ReasoningEffort>,
#[serde(skip_serializing_if = "Option::is_none")]
pub model_reasoning_summary: Option<ReasoningSummary>,
#[serde(skip_serializing_if = "Option::is_none")]
pub model_verbosity: Option<Verbosity>,
/// Tools
#[serde(skip_serializing_if = "Option::is_none")]
pub tools: Option<Tools>,
/// Profiles
#[serde(skip_serializing_if = "Option::is_none")]
pub profile: Option<String>,
#[serde(default)]
pub profiles: HashMap<String, Profile>,
}
/// MCP representation of a [`codex_core::config_profile::ConfigProfile`].
#[derive(Deserialize, Debug, Clone, PartialEq, Serialize, JsonSchema, TS)]
#[ts(optional_fields = nullable)]
#[serde(rename_all = "camelCase")]
pub struct Profile {
pub model: Option<String>,
@@ -756,29 +697,23 @@ pub struct Profile {
pub model_verbosity: Option<Verbosity>,
pub chatgpt_base_url: Option<String>,
}
/// MCP representation of a [`codex_core::config::ToolsToml`].
#[derive(Deserialize, Debug, Clone, PartialEq, Serialize, JsonSchema, TS)]
#[ts(optional_fields = nullable)]
#[serde(rename_all = "camelCase")]
pub struct Tools {
#[serde(skip_serializing_if = "Option::is_none")]
pub web_search: Option<bool>,
#[serde(skip_serializing_if = "Option::is_none")]
pub view_image: Option<bool>,
}
/// MCP representation of a [`codex_core::config_types::SandboxWorkspaceWrite`].
/// MCP representation of a [`codex_core::config::types::SandboxWorkspaceWrite`].
#[derive(Deserialize, Debug, Clone, PartialEq, Serialize, JsonSchema, TS)]
#[ts(optional_fields = nullable)]
#[serde(rename_all = "camelCase")]
pub struct SandboxSettings {
#[serde(default)]
pub writable_roots: Vec<PathBuf>,
#[serde(skip_serializing_if = "Option::is_none")]
pub network_access: Option<bool>,
#[serde(skip_serializing_if = "Option::is_none")]
pub exclude_tmpdir_env_var: Option<bool>,
#[serde(skip_serializing_if = "Option::is_none")]
pub exclude_slash_tmp: Option<bool>,
}
@@ -790,7 +725,6 @@ pub struct SendUserMessageParams {
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[ts(optional_fields = nullable)]
#[serde(rename_all = "camelCase")]
pub struct SendUserTurnParams {
pub conversation_id: ConversationId,
@@ -799,7 +733,6 @@ pub struct SendUserTurnParams {
pub approval_policy: AskForApproval,
pub sandbox_policy: SandboxPolicy,
pub model: String,
#[serde(skip_serializing_if = "Option::is_none")]
pub effort: Option<ReasoningEffort>,
pub summary: ReasoningSummary,
}
@@ -934,7 +867,6 @@ server_request_definitions! {
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[ts(optional_fields = nullable)]
#[serde(rename_all = "camelCase")]
pub struct ApplyPatchApprovalParams {
pub conversation_id: ConversationId,
@@ -943,16 +875,13 @@ pub struct ApplyPatchApprovalParams {
pub call_id: String,
pub file_changes: HashMap<PathBuf, FileChange>,
/// Optional explanatory reason (e.g. request for extra write access).
#[serde(skip_serializing_if = "Option::is_none")]
pub reason: Option<String>,
/// When set, the agent is asking the user to allow writes under this root
/// for the remainder of the session (unclear if this is honored today).
#[serde(skip_serializing_if = "Option::is_none")]
pub grant_root: Option<PathBuf>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[ts(optional_fields = nullable)]
#[serde(rename_all = "camelCase")]
pub struct ExecCommandApprovalParams {
pub conversation_id: ConversationId,
@@ -961,9 +890,7 @@ pub struct ExecCommandApprovalParams {
pub call_id: String,
pub command: Vec<String>,
pub cwd: PathBuf,
#[serde(skip_serializing_if = "Option::is_none")]
pub reason: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub risk: Option<SandboxCommandAssessment>,
pub parsed_cmd: Vec<ParsedCommand>,
}
@@ -979,26 +906,22 @@ pub struct ApplyPatchApprovalResponse {
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[ts(optional_fields = nullable)]
#[serde(rename_all = "camelCase")]
#[ts(rename_all = "camelCase")]
pub struct FuzzyFileSearchParams {
pub query: String,
pub roots: Vec<String>,
// if provided, will cancel any previous request that used the same value
#[serde(skip_serializing_if = "Option::is_none")]
pub cancellation_token: Option<String>,
}
/// Superset of [`codex_file_search::FileMatch`]
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[ts(optional_fields = nullable)]
pub struct FuzzyFileSearchResult {
pub root: String,
pub path: String,
pub file_name: String,
pub score: u32,
#[serde(skip_serializing_if = "Option::is_none")]
pub indices: Option<Vec<u32>>,
}
@@ -1008,18 +931,15 @@ pub struct FuzzyFileSearchResponse {
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[ts(optional_fields = nullable)]
#[serde(rename_all = "camelCase")]
pub struct LoginChatGptCompleteNotification {
#[schemars(with = "String")]
pub login_id: Uuid,
pub success: bool,
#[serde(skip_serializing_if = "Option::is_none")]
pub error: Option<String>,
}
#[derive(Serialize, Deserialize, Debug, Clone, JsonSchema, TS)]
#[ts(optional_fields = nullable)]
#[serde(rename_all = "camelCase")]
pub struct SessionConfiguredNotification {
/// Name left as session_id instead of conversation_id for backwards compatibility.
@@ -1029,7 +949,6 @@ pub struct SessionConfiguredNotification {
pub model: String,
/// The effort the model is putting into reasoning about the user's request.
#[serde(skip_serializing_if = "Option::is_none")]
pub reasoning_effort: Option<ReasoningEffort>,
/// Identifier of the history log file (inode on Unix, 0 otherwise).
@@ -1041,18 +960,15 @@ pub struct SessionConfiguredNotification {
/// Optional initial messages (as events) for resumed sessions.
/// When present, UIs can use these to seed the history.
#[serde(skip_serializing_if = "Option::is_none")]
pub initial_messages: Option<Vec<EventMsg>>,
pub rollout_path: PathBuf,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[ts(optional_fields = nullable)]
#[serde(rename_all = "camelCase")]
pub struct AuthStatusChangeNotification {
/// Current authentication method; omitted if signed out.
#[serde(skip_serializing_if = "Option::is_none")]
pub auth_method: Option<AuthMode>,
}
@@ -1125,6 +1041,8 @@ mod tests {
sandbox: None,
config: None,
base_instructions: None,
developer_instructions: None,
compact_prompt: None,
include_apply_patch_tool: None,
},
};
@@ -1134,7 +1052,14 @@ mod tests {
"id": 42,
"params": {
"model": "gpt-5-codex",
"approvalPolicy": "on-request"
"modelProvider": null,
"profile": null,
"cwd": null,
"approvalPolicy": "on-request",
"sandbox": null,
"config": null,
"baseInstructions": null,
"includeApplyPatchTool": null
}
}),
serde_json::to_value(&request)?,
@@ -1207,6 +1132,7 @@ mod tests {
"command": ["echo", "hello"],
"cwd": "/tmp",
"reason": "because tests",
"risk": null,
"parsedCmd": [
{
"type": "unknown",
@@ -1351,7 +1277,10 @@ mod tests {
json!({
"method": "model/list",
"id": 6,
"params": {}
"params": {
"pageSize": null,
"cursor": null
}
}),
serde_json::to_value(&request)?,
);

View File

@@ -73,8 +73,8 @@ use codex_core::auth::login_with_api_key;
use codex_core::config::Config;
use codex_core::config::ConfigOverrides;
use codex_core::config::ConfigToml;
use codex_core::config::load_config_as_toml;
use codex_core::config_edit::ConfigEditsBuilder;
use codex_core::config::edit::ConfigEditsBuilder;
use codex_core::config_loader::load_config_as_toml;
use codex_core::default_client::get_codex_user_agent;
use codex_core::exec::ExecParams;
use codex_core::exec_env::create_env;
@@ -1760,6 +1760,8 @@ async fn derive_config_from_params(
sandbox: sandbox_mode,
config: cli_overrides,
base_instructions,
developer_instructions,
compact_prompt,
include_apply_patch_tool,
} = params;
let overrides = ConfigOverrides {
@@ -1772,8 +1774,9 @@ async fn derive_config_from_params(
model_provider,
codex_linux_sandbox_exe,
base_instructions,
developer_instructions,
compact_prompt,
include_apply_patch_tool,
include_view_image_tool: None,
show_raw_agent_reasoning: None,
tools_web_search_request: None,
experimental_sandbox_command_assessment: None,


@@ -166,6 +166,7 @@ mod tests {
"params": {
"loginId": Uuid::nil(),
"success": true,
"error": null,
},
}),
serde_json::to_value(jsonrpc_notification)


@@ -44,7 +44,9 @@ async fn test_send_message_success() -> Result<()> {
// Start a conversation using the new wire API.
let new_conv_id = mcp
.send_new_conversation_request(NewConversationParams::default())
.send_new_conversation_request(NewConversationParams {
..Default::default()
})
.await?;
let new_conv_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
@@ -143,7 +145,10 @@ async fn test_send_message_raw_notifications_opt_in() -> Result<()> {
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
let new_conv_id = mcp
.send_new_conversation_request(NewConversationParams::default())
.send_new_conversation_request(NewConversationParams {
developer_instructions: Some("Use the test harness tools.".to_string()),
..Default::default()
})
.await?;
let new_conv_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
@@ -177,6 +182,9 @@ async fn test_send_message_raw_notifications_opt_in() -> Result<()> {
})
.await?;
let developer = read_raw_response_item(&mut mcp, conversation_id).await;
assert_developer_message(&developer, "Use the test harness tools.");
let instructions = read_raw_response_item(&mut mcp, conversation_id).await;
assert_instructions_message(&instructions);
@@ -316,6 +324,21 @@ fn assert_instructions_message(item: &ResponseItem) {
}
}
fn assert_developer_message(item: &ResponseItem, expected_text: &str) {
match item {
ResponseItem::Message { role, content, .. } => {
assert_eq!(role, "developer");
let texts = content_texts(content);
assert_eq!(
texts,
vec![expected_text],
"expected developer instructions message, got {texts:?}"
);
}
other => panic!("expected developer instructions message, got {other:?}"),
}
}
fn assert_environment_message(item: &ResponseItem) {
match item {
ResponseItem::Message { role, content, .. } => {


@@ -9,11 +9,11 @@ use codex_common::CliConfigOverrides;
use codex_common::format_env_display::format_env_display;
use codex_core::config::Config;
use codex_core::config::ConfigOverrides;
use codex_core::config::edit::ConfigEditsBuilder;
use codex_core::config::find_codex_home;
use codex_core::config::load_global_mcp_servers;
use codex_core::config_edit::ConfigEditsBuilder;
use codex_core::config_types::McpServerConfig;
use codex_core::config_types::McpServerTransportConfig;
use codex_core::config::types::McpServerConfig;
use codex_core::config::types::McpServerTransportConfig;
use codex_core::features::Feature;
use codex_core::mcp::auth::compute_auth_statuses;
use codex_core::protocol::McpAuthStatus;
@@ -347,9 +347,7 @@ async fn run_login(config_overrides: &CliConfigOverrides, login_args: LoginArgs)
.context("failed to load configuration")?;
if !config.features.enabled(Feature::RmcpClient) {
bail!(
"OAuth login is only supported when experimental_use_rmcp_client is true in config.toml."
);
bail!("OAuth login is only supported when [feature].rmcp_client is true in config.toml.");
}
let LoginArgs { name, scopes } = login_args;


@@ -2,7 +2,7 @@ use std::path::Path;
use anyhow::Result;
use codex_core::config::load_global_mcp_servers;
use codex_core::config_types::McpServerTransportConfig;
use codex_core::config::types::McpServerTransportConfig;
use predicates::str::contains;
use pretty_assertions::assert_eq;
use tempfile::TempDir;


@@ -1,9 +1,9 @@
use std::path::Path;
use anyhow::Result;
use codex_core::config::edit::ConfigEditsBuilder;
use codex_core::config::load_global_mcp_servers;
use codex_core::config_edit::ConfigEditsBuilder;
use codex_core::config_types::McpServerTransportConfig;
use codex_core::config::types::McpServerTransportConfig;
use predicates::prelude::PredicateBooleanExt;
use predicates::str::contains;
use pretty_assertions::assert_eq;


@@ -21,6 +21,7 @@ use codex_protocol::models::FunctionCallOutputContentItem;
use codex_protocol::models::ReasoningItemContent;
use codex_protocol::models::ResponseItem;
use codex_protocol::protocol::SessionSource;
use codex_protocol::protocol::SubAgentSource;
use eventsource_stream::Eventsource;
use futures::Stream;
use futures::StreamExt;
@@ -347,13 +348,18 @@ pub(crate) async fn stream_chat_completions(
let mut req_builder = provider.create_request_builder(client, &None).await?;
// Include session source for backend telemetry and routing.
let task_type = match serde_json::to_value(session_source) {
Ok(serde_json::Value::String(s)) => s,
Ok(other) => other.to_string(),
Err(_) => "unknown".to_string(),
};
req_builder = req_builder.header("Codex-Task-Type", task_type);
// Include subagent header only for subagent sessions.
if let SessionSource::SubAgent(sub) = session_source.clone() {
let subagent = if let SubAgentSource::Other(label) = sub {
label
} else {
serde_json::to_value(&sub)
.ok()
.and_then(|v| v.as_str().map(std::string::ToString::to_string))
.unwrap_or_else(|| "other".to_string())
};
req_builder = req_builder.header("x-openai-subagent", subagent);
}
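The header-value fallback above can be summarized with a stand-in enum (the real `SubAgentSource` and its serde names are assumptions here): a free-form `Other` label passes through verbatim, while known variants fall back to a serialized variant name, with `"other"` as the last resort when serialization yields no string.

```rust
// Stand-in for the real `SubAgentSource`; the string names mimic what
// serde's rename rules are assumed to produce. The real code also
// falls back to "other" when serialization fails.
enum SubAgentSourceSketch {
    Review,
    Compact,
    Other(String),
}

fn subagent_header_value(sub: &SubAgentSourceSketch) -> String {
    match sub {
        // Free-form labels are forwarded as-is in the header.
        SubAgentSourceSketch::Other(label) => label.clone(),
        SubAgentSourceSketch::Review => "review".to_string(),
        SubAgentSourceSketch::Compact => "compact".to_string(),
    }
}

fn main() {
    assert_eq!(
        subagent_header_value(&SubAgentSourceSketch::Other("my-agent".to_string())),
        "my-agent"
    );
    assert_eq!(subagent_header_value(&SubAgentSourceSketch::Compact), "compact");
    println!("ok");
}
```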
let res = otel_event_manager
.log_request(attempt, || {


@@ -303,13 +303,18 @@ impl ModelClient {
.await
.map_err(StreamAttemptError::Fatal)?;
// Include session source for backend telemetry and routing.
let task_type = match serde_json::to_value(&self.session_source) {
Ok(serde_json::Value::String(s)) => s,
Ok(other) => other.to_string(),
Err(_) => "unknown".to_string(),
};
req_builder = req_builder.header("Codex-Task-Type", task_type);
// Include subagent header only for subagent sessions.
if let SessionSource::SubAgent(sub) = &self.session_source {
let subagent = if let crate::protocol::SubAgentSource::Other(label) = sub {
label.clone()
} else {
serde_json::to_value(sub)
.ok()
.and_then(|v| v.as_str().map(std::string::ToString::to_string))
.unwrap_or_else(|| "other".to_string())
};
req_builder = req_builder.header("x-openai-subagent", subagent);
}
req_builder = req_builder
// Send session_id for compatibility.


@@ -56,8 +56,8 @@ use crate::client::ModelClient;
use crate::client_common::Prompt;
use crate::client_common::ResponseEvent;
use crate::config::Config;
use crate::config_types::McpServerTransportConfig;
use crate::config_types::ShellEnvironmentPolicy;
use crate::config::types::McpServerTransportConfig;
use crate::config::types::ShellEnvironmentPolicy;
use crate::conversation_history::ConversationHistory;
use crate::environment_context::EnvironmentContext;
use crate::error::CodexErr;
@@ -112,9 +112,11 @@ use crate::tools::spec::ToolsConfig;
use crate::tools::spec::ToolsConfigParams;
use crate::turn_diff_tracker::TurnDiffTracker;
use crate::unified_exec::UnifiedExecSessionManager;
use crate::user_instructions::DeveloperInstructions;
use crate::user_instructions::UserInstructions;
use crate::user_notification::UserNotification;
use crate::util::backoff;
use chrono::Local;
use codex_async_utils::OrCancelExt;
use codex_otel::otel_event_manager::OtelEventManager;
use codex_protocol::config_types::ReasoningEffort as ReasoningEffortConfig;
@@ -171,8 +173,10 @@ impl Codex {
model: config.model.clone(),
model_reasoning_effort: config.model_reasoning_effort,
model_reasoning_summary: config.model_reasoning_summary,
developer_instructions: config.developer_instructions.clone(),
user_instructions,
base_instructions: config.base_instructions.clone(),
compact_prompt: config.compact_prompt.clone(),
approval_policy: config.approval_policy,
sandbox_policy: config.sandbox_policy.clone(),
cwd: config.cwd.clone(),
@@ -264,7 +268,10 @@ pub(crate) struct TurnContext {
/// the model as well as sandbox policies are resolved against this path
/// instead of `std::env::current_dir()`.
pub(crate) cwd: PathBuf,
pub(crate) local_date_with_timezone: Option<String>,
pub(crate) developer_instructions: Option<String>,
pub(crate) base_instructions: Option<String>,
pub(crate) compact_prompt: Option<String>,
pub(crate) user_instructions: Option<String>,
pub(crate) approval_policy: AskForApproval,
pub(crate) sandbox_policy: SandboxPolicy,
@@ -281,6 +288,12 @@ impl TurnContext {
.map(PathBuf::from)
.map_or_else(|| self.cwd.clone(), |p| self.cwd.join(p))
}
pub(crate) fn compact_prompt(&self) -> &str {
self.compact_prompt
.as_deref()
.unwrap_or(compact::SUMMARIZATION_PROMPT)
}
}
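The new `compact_prompt()` accessor is the standard "override or default" pattern: borrow an `Option<String>` as `&str` and fall back to a compiled-in default. A self-contained sketch, with `DEFAULT_PROMPT` standing in for the real `compact::SUMMARIZATION_PROMPT`:

```rust
// Stand-in for `compact::SUMMARIZATION_PROMPT` (the real prompt is a
// template file included at compile time).
const DEFAULT_PROMPT: &str = "Summarize the conversation so far.";

struct TurnContextSketch {
    compact_prompt: Option<String>,
}

impl TurnContextSketch {
    // `as_deref` turns `&Option<String>` into `Option<&str>` without
    // allocating, so the default and the override share one `&str` type.
    fn compact_prompt(&self) -> &str {
        self.compact_prompt.as_deref().unwrap_or(DEFAULT_PROMPT)
    }
}

fn main() {
    let defaulted = TurnContextSketch { compact_prompt: None };
    let overridden = TurnContextSketch { compact_prompt: Some("Be terse.".to_string()) };
    assert_eq!(defaulted.compact_prompt(), DEFAULT_PROMPT);
    assert_eq!(overridden.compact_prompt(), "Be terse.");
    println!("ok");
}
```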
#[allow(dead_code)]
@@ -295,12 +308,18 @@ pub(crate) struct SessionConfiguration {
model_reasoning_effort: Option<ReasoningEffortConfig>,
model_reasoning_summary: ReasoningSummaryConfig,
/// Developer instructions that supplement the base instructions.
developer_instructions: Option<String>,
/// Model instructions that are appended to the base instructions.
user_instructions: Option<String>,
/// Base instructions override.
base_instructions: Option<String>,
/// Compact prompt override.
compact_prompt: Option<String>,
/// When to escalate for approval for execution
approval_policy: AskForApproval,
/// How to sandbox commands executed in the system
@@ -406,7 +425,10 @@ impl Session {
sub_id,
client,
cwd: session_configuration.cwd.clone(),
local_date_with_timezone: Some(Local::now().format("%Y-%m-%d %:z").to_string()),
developer_instructions: session_configuration.developer_instructions.clone(),
base_instructions: session_configuration.base_instructions.clone(),
compact_prompt: session_configuration.compact_prompt.clone(),
user_instructions: session_configuration.user_instructions.clone(),
approval_policy: session_configuration.approval_policy,
sandbox_policy: session_configuration.sandbox_policy.clone(),
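In the `Local::now().format("%Y-%m-%d %:z")` call above, chrono's `%:z` specifier renders the local UTC offset with a colon, e.g. `-04:00`. A std-only sketch of just that piece, taking the offset in seconds east of UTC (the real code relies on chrono):

```rust
// Formats a UTC offset the way chrono's `%:z` does: sign, two-digit
// hours, colon, two-digit minutes.
fn format_offset_colon_z(offset_secs: i32) -> String {
    let sign = if offset_secs < 0 { '-' } else { '+' };
    let abs = offset_secs.abs();
    format!("{sign}{:02}:{:02}", abs / 3600, (abs % 3600) / 60)
}

fn main() {
    // US Eastern daylight time is UTC-4, matching timestamps like
    // `2025-10-30 -04:00` in the commit log above.
    assert_eq!(format_offset_colon_z(-4 * 3600), "-04:00");
    // Half-hour offsets render the minutes component.
    assert_eq!(format_offset_colon_z(5 * 3600 + 30 * 60), "+05:30");
    assert_eq!(format_offset_colon_z(0), "+00:00");
    println!("ok");
}
```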
@@ -979,12 +1001,16 @@ impl Session {
}
pub(crate) fn build_initial_context(&self, turn_context: &TurnContext) -> Vec<ResponseItem> {
let mut items = Vec::<ResponseItem>::with_capacity(2);
let mut items = Vec::<ResponseItem>::with_capacity(3);
if let Some(developer_instructions) = turn_context.developer_instructions.as_deref() {
items.push(DeveloperInstructions::new(developer_instructions.to_string()).into());
}
if let Some(user_instructions) = turn_context.user_instructions.as_deref() {
items.push(UserInstructions::new(user_instructions.to_string()).into());
}
items.push(ResponseItem::from(EnvironmentContext::new(
Some(turn_context.cwd.clone()),
turn_context.local_date_with_timezone.clone(),
Some(turn_context.approval_policy),
Some(turn_context.sandbox_policy.clone()),
Some(self.user_shell().clone()),
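The hunk above fixes an ordering and capacity detail: `build_initial_context` now emits developer instructions first (when set), then user instructions (when set), then the environment context — hence the capacity bump from 2 to 3. A sketch of that ordering, with strings standing in for the real `ResponseItem` values:

```rust
// Strings stand in for `ResponseItem`s; only the ordering and the
// conditional inclusion are the point here.
fn build_initial_context_sketch(
    developer_instructions: Option<&str>,
    user_instructions: Option<&str>,
) -> Vec<String> {
    let mut items = Vec::with_capacity(3);
    if let Some(d) = developer_instructions {
        items.push(format!("developer: {d}"));
    }
    if let Some(u) = user_instructions {
        items.push(format!("user: {u}"));
    }
    // The environment context is always last and always present.
    items.push("environment_context".to_string());
    items
}

fn main() {
    let items = build_initial_context_sketch(Some("Use the harness."), None);
    assert_eq!(items, vec!["developer: Use the harness.", "environment_context"]);
    assert_eq!(build_initial_context_sketch(None, None).len(), 1);
    println!("ok");
}
```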
@@ -1313,7 +1339,7 @@ mod handlers {
use crate::codex::Session;
use crate::codex::SessionSettingsUpdate;
use crate::codex::TurnContext;
use crate::codex::compact;
use crate::codex::spawn_review_thread;
use crate::config::Config;
use crate::mcp::auth::compute_auth_statuses;
@@ -1540,7 +1566,7 @@ mod handlers {
// Attempt to inject input into current task
if let Err(items) = sess
.inject_input(vec![UserInput::Text {
text: compact::SUMMARIZATION_PROMPT.to_string(),
text: turn_context.compact_prompt().to_string(),
}])
.await
{
@@ -1662,12 +1688,15 @@ async fn spawn_review_thread(
sub_id: sub_id.to_string(),
client,
tools_config,
developer_instructions: None,
user_instructions: None,
base_instructions: Some(base_instructions.clone()),
compact_prompt: parent_turn_context.compact_prompt.clone(),
approval_policy: parent_turn_context.approval_policy,
sandbox_policy: parent_turn_context.sandbox_policy.clone(),
shell_environment_policy: parent_turn_context.shell_environment_policy.clone(),
cwd: parent_turn_context.cwd.clone(),
local_date_with_timezone: parent_turn_context.local_date_with_timezone.clone(),
final_output_json_schema: None,
codex_linux_sandbox_exe: parent_turn_context.codex_linux_sandbox_exe.clone(),
tool_call_gate: Arc::new(ReadinessFlag::new()),
@@ -2277,8 +2306,8 @@ mod tests {
use super::*;
use crate::config::ConfigOverrides;
use crate::config::ConfigToml;
use crate::config_types::McpServerConfig;
use crate::config_types::McpServerTransportConfig;
use crate::config::types::McpServerConfig;
use crate::config::types::McpServerTransportConfig;
use crate::exec::ExecToolCallOutput;
use crate::mcp::auth::McpAuthStatusEntry;
use crate::tools::format_exec_output_str;
@@ -2498,8 +2527,10 @@ mod tests {
model: config.model.clone(),
model_reasoning_effort: config.model_reasoning_effort,
model_reasoning_summary: config.model_reasoning_summary,
developer_instructions: config.developer_instructions.clone(),
user_instructions: config.user_instructions.clone(),
base_instructions: config.base_instructions.clone(),
compact_prompt: config.compact_prompt.clone(),
approval_policy: config.approval_policy,
sandbox_policy: config.sandbox_policy.clone(),
cwd: config.cwd.clone(),
@@ -2572,8 +2603,10 @@ mod tests {
model: config.model.clone(),
model_reasoning_effort: config.model_reasoning_effort,
model_reasoning_summary: config.model_reasoning_summary,
developer_instructions: config.developer_instructions.clone(),
user_instructions: config.user_instructions.clone(),
base_instructions: config.base_instructions.clone(),
compact_prompt: config.compact_prompt.clone(),
approval_policy: config.approval_policy,
sandbox_policy: config.sandbox_policy.clone(),
cwd: config.cwd.clone(),


@@ -2,6 +2,7 @@ use std::sync::Arc;
use super::Session;
use super::TurnContext;
use super::get_last_assistant_message_from_turn;
use crate::Prompt;
use crate::client_common::ResponseEvent;
use crate::error::CodexErr;
@@ -19,19 +20,11 @@ use codex_protocol::items::TurnItem;
use codex_protocol::models::ContentItem;
use codex_protocol::models::ResponseInputItem;
use codex_protocol::models::ResponseItem;
use codex_protocol::protocol::InitialHistory;
use codex_protocol::protocol::RolloutItem;
use codex_protocol::protocol::SubAgentSource;
use codex_protocol::user_input::UserInput;
use futures::prelude::*;
use tokio_util::sync::CancellationToken;
use tracing::error;
use crate::codex_delegate::run_codex_conversation_with_history_one_shot;
use crate::features::Feature;
use crate::protocol::Event;
use crate::protocol::EventMsg as Ev;
pub const SUMMARIZATION_PROMPT: &str = include_str!("../../templates/compact/prompt.md");
const COMPACT_USER_MESSAGE_MAX_TOKENS: usize = 20_000;
@@ -46,10 +39,9 @@ pub(crate) async fn run_inline_auto_compact_task(
sess: Arc<Session>,
turn_context: Arc<TurnContext>,
) {
let input = vec![UserInput::Text {
text: SUMMARIZATION_PROMPT.to_string(),
}];
run_compact_task_inner_delegate(sess, turn_context, input).await;
let prompt = turn_context.compact_prompt().to_string();
let input = vec![UserInput::Text { text: prompt }];
run_compact_task_inner(sess, turn_context, input).await;
}
pub(crate) async fn run_compact_task(
@@ -61,7 +53,7 @@ pub(crate) async fn run_compact_task(
model_context_window: turn_context.client.get_model_context_window(),
});
sess.send_event(&turn_context, start_event).await;
run_compact_task_inner_delegate(sess.clone(), turn_context, input).await;
run_compact_task_inner(sess.clone(), turn_context, input).await;
None
}
@@ -70,7 +62,7 @@ async fn run_compact_task_inner(
turn_context: Arc<TurnContext>,
input: Vec<UserInput>,
) {
let initial_input_for_turn: ResponseInputItem = ResponseInputItem::from(input.clone());
let initial_input_for_turn: ResponseInputItem = ResponseInputItem::from(input);
let mut history = sess.clone_history().await;
history.record_items(&[initial_input_for_turn.into()]);
@@ -155,17 +147,7 @@ async fn run_compact_task_inner(
}
let history_snapshot = sess.clone_history().await.get_history();
let summary_text =
super::get_last_assistant_message_from_turn(&history_snapshot).unwrap_or_default();
finish_compact(sess, turn_context, summary_text, history_snapshot).await;
}
async fn finish_compact(
sess: Arc<Session>,
turn_context: Arc<TurnContext>,
summary_text: String,
history_snapshot: Vec<ResponseItem>,
) {
let summary_text = get_last_assistant_message_from_turn(&history_snapshot).unwrap_or_default();
let user_messages = collect_user_messages(&history_snapshot);
let initial_context = sess.build_initial_context(turn_context.as_ref());
let mut new_history = build_compacted_history(initial_context, &user_messages, &summary_text);
@@ -188,166 +170,6 @@ async fn finish_compact(
sess.send_event(&turn_context, event).await;
}
async fn run_compact_task_inner_delegate(
sess: Arc<Session>,
turn_context: Arc<TurnContext>,
input: Vec<UserInput>,
) {
let initial_input_for_turn: ResponseInputItem = ResponseInputItem::from(input.clone());
let mut history = sess.clone_history().await;
history.record_items(&[initial_input_for_turn.into()]);
let mut truncated_count = 0usize;
let max_retries = turn_context.client.get_provider().stream_max_retries();
let mut retries = 0;
let rollout_item = RolloutItem::TurnContext(TurnContextItem {
cwd: turn_context.cwd.clone(),
approval_policy: turn_context.approval_policy,
sandbox_policy: turn_context.sandbox_policy.clone(),
model: turn_context.client.get_model(),
effort: turn_context.client.get_reasoning_effort(),
summary: turn_context.client.get_reasoning_summary(),
});
sess.persist_rollout_items(&[rollout_item]).await;
// Delegate the model run to a sub-agent while preserving retries and trim behavior.
let mut last_agent_message: Option<String> = None;
loop {
let turn_input = history.get_history_for_prompt();
// Use the full prompt history as initial context for the delegate, and
// pass the summarization trigger as the one-shot user input.
let prefix_items = turn_input.clone();
let input_for_delegate = input.clone();
// Build initial history for the delegate from the prompt prefix.
let initial = InitialHistory::Forked(
prefix_items
.into_iter()
.map(RolloutItem::ResponseItem)
.collect(),
);
// Clone config and run a one-shot sub-agent turn labeled as Compact.
let mut sub_config = turn_context.client.config().as_ref().clone();
// Disable tools for the summarization run to match Prompt::default() semantics.
sub_config
.features
.disable(Feature::WebSearchRequest)
.disable(Feature::ViewImageTool)
.disable(Feature::StreamableShell)
.disable(Feature::UnifiedExec)
.disable(Feature::ApplyPatchFreeform);
let cancel = CancellationToken::new();
let run = run_codex_conversation_with_history_one_shot(
sub_config,
Arc::clone(&sess.services.auth_manager),
initial,
SubAgentSource::Compact,
input_for_delegate,
sess.clone(),
turn_context.clone(),
cancel.clone(),
)
.await;
let Ok(io) = run else {
if retries < max_retries {
retries += 1;
let delay = backoff(retries);
sess.notify_stream_error(
turn_context.as_ref(),
format!("Reconnecting... {retries}/{max_retries}"),
)
.await;
tokio::time::sleep(delay).await;
continue;
} else {
let event = EventMsg::Error(ErrorEvent {
message: "delegate failed to start".to_string(),
});
sess.send_event(&turn_context, event).await;
return;
}
};
// Process delegate events; forward to parent and interpret errors for retry/trim.
let mut saw_error: Option<String> = None;
while let Ok(Event { msg, .. }) = io.next_event().await {
match msg.clone() {
Ev::TaskComplete(done) => {
last_agent_message = done.last_agent_message;
saw_error = None;
break;
}
Ev::TurnAborted(_) => {
return;
}
Ev::Error(err_event) => {
saw_error = Some(err_event.message);
}
other => {
// Forward all other events for UI continuity (streaming text, token counts, etc.)
sess.send_event(&turn_context, other).await;
}
}
}
if let Some(message) = saw_error {
// Treat context window errors specially: trim oldest and retry.
if message == CodexErr::ContextWindowExceeded.to_string() {
if turn_input.len() > 1 {
error!(
"Context window exceeded while compacting; removing oldest history item. Error: {message}"
);
history.remove_first_item();
truncated_count += 1;
retries = 0;
continue;
}
sess.set_total_tokens_full(turn_context.as_ref()).await;
let event = EventMsg::Error(ErrorEvent { message });
sess.send_event(&turn_context, event).await;
return;
}
if retries < max_retries {
retries += 1;
let delay = backoff(retries);
sess.notify_stream_error(
turn_context.as_ref(),
format!("Reconnecting... {retries}/{max_retries}"),
)
.await;
tokio::time::sleep(delay).await;
continue;
} else {
let event = EventMsg::Error(ErrorEvent { message });
sess.send_event(&turn_context, event).await;
return;
}
} else {
if truncated_count > 0 {
sess.notify_background_event(
turn_context.as_ref(),
format!(
"Trimmed {truncated_count} older conversation item(s) before compacting so the prompt fits the model context window."
),
)
.await;
}
break;
}
}
let history_snapshot = sess.clone_history().await.get_history();
let summary_text = last_agent_message.unwrap_or_default();
finish_compact(sess, turn_context, summary_text, history_snapshot).await;
}
pub fn content_items_to_text(content: &[ContentItem]) -> Option<String> {
let mut pieces = Vec::new();
for item in content {


@@ -36,31 +36,7 @@ pub(crate) async fn run_codex_conversation_interactive(
parent_session: Arc<Session>,
parent_ctx: Arc<TurnContext>,
cancel_token: CancellationToken,
) -> Result<Codex, CodexErr> {
run_codex_conversation_interactive_with_history(
config,
auth_manager,
InitialHistory::New,
SubAgentSource::Review,
parent_session,
parent_ctx,
cancel_token,
)
.await
}
/// Start an interactive sub-Codex conversation with a provided initial history and source.
///
/// Mirrors `run_codex_conversation_interactive` but allows the caller to seed history and
/// specify a sub-agent source label for rollout provenance.
pub(crate) async fn run_codex_conversation_interactive_with_history(
config: Config,
auth_manager: Arc<AuthManager>,
initial_history: InitialHistory,
sub_source: SubAgentSource,
parent_session: Arc<Session>,
parent_ctx: Arc<TurnContext>,
cancel_token: CancellationToken,
initial_history: Option<InitialHistory>,
) -> Result<Codex, CodexErr> {
let (tx_sub, rx_sub) = async_channel::bounded(SUBMISSION_CHANNEL_CAPACITY);
let (tx_ops, rx_ops) = async_channel::bounded(SUBMISSION_CHANNEL_CAPACITY);
@@ -68,8 +44,8 @@ pub(crate) async fn run_codex_conversation_interactive_with_history(
let CodexSpawnOk { codex, .. } = Codex::spawn(
config,
auth_manager,
initial_history,
SessionSource::SubAgent(sub_source),
initial_history.unwrap_or(InitialHistory::New),
SessionSource::SubAgent(SubAgentSource::Review),
)
.await?;
let codex = Arc::new(codex);
@@ -118,42 +94,18 @@ pub(crate) async fn run_codex_conversation_one_shot(
parent_session: Arc<Session>,
parent_ctx: Arc<TurnContext>,
cancel_token: CancellationToken,
) -> Result<Codex, CodexErr> {
run_codex_conversation_with_history_one_shot(
config,
auth_manager,
InitialHistory::New,
SubAgentSource::Review,
input,
parent_session,
parent_ctx,
cancel_token,
)
.await
}
/// Convenience wrapper for one-time use with initial history and explicit sub-agent source.
pub(crate) async fn run_codex_conversation_with_history_one_shot(
config: Config,
auth_manager: Arc<AuthManager>,
initial_history: InitialHistory,
sub_source: SubAgentSource,
input: Vec<UserInput>,
parent_session: Arc<Session>,
parent_ctx: Arc<TurnContext>,
cancel_token: CancellationToken,
initial_history: Option<InitialHistory>,
) -> Result<Codex, CodexErr> {
// Use a child token so we can stop the delegate after completion without
// requiring the caller to cancel the parent token.
let child_cancel = cancel_token.child_token();
let io = run_codex_conversation_interactive_with_history(
let io = run_codex_conversation_interactive(
config,
auth_manager,
initial_history,
sub_source,
parent_session,
parent_ctx,
child_cancel.clone(),
initial_history,
)
.await?;


@@ -1,6 +1,6 @@
use crate::config::CONFIG_TOML_FILE;
use crate::config_types::McpServerConfig;
use crate::config_types::Notice;
use crate::config::types::McpServerConfig;
use crate::config::types::Notice;
use anyhow::Context;
use codex_protocol::config_types::ReasoningEffort;
use std::collections::BTreeMap;
@@ -41,8 +41,8 @@ pub enum ConfigEdit {
// TODO(jif) move to a dedicated file
mod document_helpers {
use crate::config_types::McpServerConfig;
use crate::config_types::McpServerTransportConfig;
use crate::config::types::McpServerConfig;
use crate::config::types::McpServerTransportConfig;
use toml_edit::Array as TomlArray;
use toml_edit::InlineTable;
use toml_edit::Item as TomlItem;
@@ -509,7 +509,7 @@ impl ConfigEditsBuilder {
#[cfg(test)]
mod tests {
use super::*;
use crate::config_types::McpServerTransportConfig;
use crate::config::types::McpServerTransportConfig;
use codex_protocol::config_types::ReasoningEffort;
use pretty_assertions::assert_eq;
use tempfile::tempdir;


@@ -1,23 +1,22 @@
use crate::auth::AuthCredentialsStoreMode;
use crate::config::types::DEFAULT_OTEL_ENVIRONMENT;
use crate::config::types::History;
use crate::config::types::McpServerConfig;
use crate::config::types::Notice;
use crate::config::types::Notifications;
use crate::config::types::OtelConfig;
use crate::config::types::OtelConfigToml;
use crate::config::types::OtelExporterKind;
use crate::config::types::ReasoningSummaryFormat;
use crate::config::types::SandboxWorkspaceWrite;
use crate::config::types::ShellEnvironmentPolicy;
use crate::config::types::ShellEnvironmentPolicyToml;
use crate::config::types::Tui;
use crate::config::types::UriBasedFileOpener;
use crate::config_loader::LoadedConfigLayers;
pub use crate::config_loader::load_config_as_toml;
use crate::config_loader::load_config_as_toml;
use crate::config_loader::load_config_layers_with_overrides;
use crate::config_loader::merge_toml_values;
use crate::config_profile::ConfigProfile;
use crate::config_types::DEFAULT_OTEL_ENVIRONMENT;
use crate::config_types::History;
use crate::config_types::McpServerConfig;
use crate::config_types::Notice;
use crate::config_types::Notifications;
use crate::config_types::OtelConfig;
use crate::config_types::OtelConfigToml;
use crate::config_types::OtelExporterKind;
use crate::config_types::ReasoningSummaryFormat;
use crate::config_types::SandboxWorkspaceWrite;
use crate::config_types::ShellEnvironmentPolicy;
use crate::config_types::ShellEnvironmentPolicyToml;
use crate::config_types::Tui;
use crate::config_types::UriBasedFileOpener;
use crate::features::Feature;
use crate::features::FeatureOverrides;
use crate::features::Features;
@@ -51,9 +50,14 @@ use std::io::ErrorKind;
use std::path::Path;
use std::path::PathBuf;
use crate::config::profile::ConfigProfile;
use toml::Value as TomlValue;
use toml_edit::DocumentMut;
pub mod edit;
pub mod profile;
pub mod types;
#[cfg(target_os = "windows")]
pub const OPENAI_DEFAULT_MODEL: &str = "gpt-5";
#[cfg(not(target_os = "windows"))]
@@ -124,6 +128,12 @@ pub struct Config {
/// Base instructions override.
pub base_instructions: Option<String>,
/// Developer instructions override injected as a separate message.
pub developer_instructions: Option<String>,
/// Compact prompt override.
pub compact_prompt: Option<String>,
/// Optional external notifier command. When set, Codex will spawn this
/// program after each completed *turn* (i.e. when the agent finishes
/// processing a user submission). The value must be the full command
@@ -240,9 +250,6 @@ pub struct Config {
/// https://github.com/modelcontextprotocol/rust-sdk
pub use_experimental_use_rmcp_client: bool,
/// Include the `view_image` tool that lets the agent attach a local image path to context.
pub include_view_image_tool: bool,
/// Centralized feature flags; source of truth for feature gating.
pub features: Features,
@@ -265,7 +272,7 @@ pub struct Config {
pub disable_paste_burst: bool,
/// OTEL configuration (exporter type, endpoint, headers, etc.).
pub otel: crate::config_types::OtelConfig,
pub otel: crate::config::types::OtelConfig,
}
impl Config {
@@ -448,7 +455,7 @@ pub(crate) fn set_project_trusted_inner(
/// Patch `CODEX_HOME/config.toml` project state.
/// Use with caution.
pub fn set_project_trusted(codex_home: &Path, project_path: &Path) -> anyhow::Result<()> {
use crate::config_edit::ConfigEditsBuilder;
use crate::config::edit::ConfigEditsBuilder;
ConfigEditsBuilder::new(codex_home)
.set_project_trusted(project_path)
@@ -537,6 +544,13 @@ pub struct ConfigToml {
/// System instructions.
pub instructions: Option<String>,
/// Developer instructions inserted as a `developer` role message.
#[serde(default)]
pub developer_instructions: Option<String>,
/// Compact prompt used for history compaction.
pub compact_prompt: Option<String>,
/// When set, restricts ChatGPT login to a specific workspace identifier.
#[serde(default)]
pub forced_chatgpt_workspace_id: Option<String>,
@@ -629,17 +643,18 @@ pub struct ConfigToml {
pub disable_paste_burst: Option<bool>,
/// OTEL configuration.
pub otel: Option<crate::config_types::OtelConfigToml>,
pub otel: Option<crate::config::types::OtelConfigToml>,
/// Tracks whether the Windows onboarding screen has been acknowledged.
pub windows_wsl_setup_acknowledged: Option<bool>,
/// Collection of in-product notices (different from notifications)
/// See [`crate::config_types::Notices`] for more details
/// See [`crate::config::types::Notices`] for more details
pub notice: Option<Notice>,
/// Legacy, now use features
pub experimental_instructions_file: Option<PathBuf>,
pub experimental_compact_prompt_file: Option<PathBuf>,
pub experimental_use_exec_command_tool: Option<bool>,
pub experimental_use_unified_exec_tool: Option<bool>,
pub experimental_use_rmcp_client: Option<bool>,
@@ -820,8 +835,9 @@ pub struct ConfigOverrides {
pub config_profile: Option<String>,
pub codex_linux_sandbox_exe: Option<PathBuf>,
pub base_instructions: Option<String>,
pub developer_instructions: Option<String>,
pub compact_prompt: Option<String>,
pub include_apply_patch_tool: Option<bool>,
pub include_view_image_tool: Option<bool>,
pub show_raw_agent_reasoning: Option<bool>,
pub tools_web_search_request: Option<bool>,
pub experimental_sandbox_command_assessment: Option<bool>,
@@ -850,8 +866,9 @@ impl Config {
config_profile: config_profile_key,
codex_linux_sandbox_exe,
base_instructions,
developer_instructions,
compact_prompt,
include_apply_patch_tool: include_apply_patch_tool_override,
include_view_image_tool: include_view_image_tool_override,
show_raw_agent_reasoning,
tools_web_search_request: override_tools_web_search_request,
experimental_sandbox_command_assessment: sandbox_command_assessment_override,
@@ -878,7 +895,6 @@ impl Config {
let feature_overrides = FeatureOverrides {
include_apply_patch_tool: include_apply_patch_tool_override,
include_view_image_tool: include_view_image_tool_override,
web_search_request: override_tools_web_search_request,
experimental_sandbox_command_assessment: sandbox_command_assessment_override,
};
@@ -976,7 +992,6 @@ impl Config {
let history = cfg.history.unwrap_or_default();
let include_apply_patch_tool_flag = features.enabled(Feature::ApplyPatchFreeform);
let include_view_image_tool_flag = features.enabled(Feature::ViewImageTool);
let tools_web_search_request = features.enabled(Feature::WebSearchRequest);
let use_experimental_streamable_shell_tool = features.enabled(Feature::StreamableShell);
let use_experimental_unified_exec_tool = features.enabled(Feature::UnifiedExec);
@@ -1026,6 +1041,15 @@ impl Config {
.and_then(|info| info.auto_compact_token_limit)
});
let compact_prompt = compact_prompt.or(cfg.compact_prompt).and_then(|value| {
let trimmed = value.trim();
if trimmed.is_empty() {
None
} else {
Some(trimmed.to_string())
}
});
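The closure above normalizes the override so a whitespace-only `compact_prompt = ""` in config.toml behaves like an unset one. The same `Option::and_then` shape, extracted into a standalone function for illustration:

```rust
// Collapses empty or whitespace-only override values to `None`,
// trimming surrounding whitespace from the ones that are kept.
fn normalize_override(value: Option<String>) -> Option<String> {
    value.and_then(|v| {
        let trimmed = v.trim();
        if trimmed.is_empty() {
            None
        } else {
            Some(trimmed.to_string())
        }
    })
}

fn main() {
    assert_eq!(normalize_override(Some("  ".to_string())), None);
    assert_eq!(normalize_override(Some(" be terse ".to_string())), Some("be terse".to_string()));
    assert_eq!(normalize_override(None), None);
    println!("ok");
}
```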
// Load base instructions override from a file if specified. If the
// path is relative, resolve it against the effective cwd so the
// behaviour matches other path-like config values.
@@ -1033,9 +1057,24 @@ impl Config {
.experimental_instructions_file
.as_ref()
.or(cfg.experimental_instructions_file.as_ref());
let file_base_instructions =
Self::get_base_instructions(experimental_instructions_path, &resolved_cwd)?;
let file_base_instructions = Self::load_override_from_file(
experimental_instructions_path,
&resolved_cwd,
"experimental instructions file",
)?;
let base_instructions = base_instructions.or(file_base_instructions);
let developer_instructions = developer_instructions.or(cfg.developer_instructions);
let experimental_compact_prompt_path = config_profile
.experimental_compact_prompt_file
.as_ref()
.or(cfg.experimental_compact_prompt_file.as_ref());
let file_compact_prompt = Self::load_override_from_file(
experimental_compact_prompt_path,
&resolved_cwd,
"experimental compact prompt file",
)?;
let compact_prompt = compact_prompt.or(file_compact_prompt);
// Default review model when not set in config; allow CLI override to take precedence.
let review_model = override_review_model
@@ -1060,6 +1099,8 @@ impl Config {
notify: cfg.notify,
user_instructions,
base_instructions,
developer_instructions,
compact_prompt,
// The config.toml omits "_mode" because it's a config file. However, "_mode"
// is important in code to differentiate the mode from the store implementation.
cli_auth_credentials_store_mode: cfg.cli_auth_credentials_store.unwrap_or_default(),
@@ -1112,7 +1153,6 @@ impl Config {
use_experimental_streamable_shell_tool,
use_experimental_unified_exec_tool,
use_experimental_use_rmcp_client,
include_view_image_tool: include_view_image_tool_flag,
features,
active_profile: active_profile_name,
active_project,
@@ -1156,18 +1196,15 @@ impl Config {
None
}
fn get_base_instructions(
fn load_override_from_file(
path: Option<&PathBuf>,
cwd: &Path,
description: &str,
) -> std::io::Result<Option<String>> {
let p = match path.as_ref() {
None => return Ok(None),
Some(p) => p,
let Some(p) = path else {
return Ok(None);
};
// Resolve relative paths against the provided cwd to make CLI
// overrides consistent regardless of where the process was launched
// from.
let full_path = if p.is_relative() {
cwd.join(p)
} else {
@@ -1177,10 +1214,7 @@ impl Config {
let contents = std::fs::read_to_string(&full_path).map_err(|e| {
std::io::Error::new(
e.kind(),
format!(
"failed to read experimental instructions file {}: {e}",
full_path.display()
),
format!("failed to read {description} {}: {e}", full_path.display()),
)
})?;
@@ -1188,10 +1222,7 @@ impl Config {
if s.is_empty() {
Err(std::io::Error::new(
std::io::ErrorKind::InvalidData,
format!(
"experimental instructions file is empty: {}",
full_path.display()
),
format!("{description} is empty: {}", full_path.display()),
))
} else {
Ok(Some(s))
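Read in sequence, the hunks above replace the instructions-specific `get_base_instructions` with a generic loader reused for both the instructions file and the new compact-prompt file. A standalone sketch of that behavior, assuming only `std` and lifting the method out of `Config` for illustration:

```rust
use std::io;
use std::path::{Path, PathBuf};

// Sketch of the generalized override loader (lifted out of `Config` for
// illustration): resolve a possibly-relative path against `cwd`, read the
// file, and reject whitespace-only contents with an error tagged by
// `description`.
fn load_override_from_file(
    path: Option<&PathBuf>,
    cwd: &Path,
    description: &str,
) -> io::Result<Option<String>> {
    let Some(p) = path else {
        return Ok(None);
    };
    // Relative paths are resolved against the effective cwd so CLI overrides
    // behave the same regardless of where the process was launched.
    let full_path = if p.is_relative() { cwd.join(p) } else { p.clone() };
    let contents = std::fs::read_to_string(&full_path).map_err(|e| {
        io::Error::new(
            e.kind(),
            format!("failed to read {description} {}: {e}", full_path.display()),
        )
    })?;
    let s = contents.trim().to_string();
    if s.is_empty() {
        Err(io::Error::new(
            io::ErrorKind::InvalidData,
            format!("{description} is empty: {}", full_path.display()),
        ))
    } else {
        Ok(Some(s))
    }
}
```

Trimming before the emptiness check means a whitespace-only override file fails loudly instead of silently injecting an empty prompt, which matches the `loads_compact_prompt_from_file` test below expecting the trimmed string.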
@@ -1244,12 +1275,12 @@ pub fn log_dir(cfg: &Config) -> std::io::Result<PathBuf> {
#[cfg(test)]
mod tests {
use crate::config_edit::ConfigEdit;
use crate::config_edit::ConfigEditsBuilder;
use crate::config_edit::apply_blocking;
use crate::config_types::HistoryPersistence;
use crate::config_types::McpServerTransportConfig;
use crate::config_types::Notifications;
use crate::config::edit::ConfigEdit;
use crate::config::edit::ConfigEditsBuilder;
use crate::config::edit::apply_blocking;
use crate::config::types::HistoryPersistence;
use crate::config::types::McpServerTransportConfig;
use crate::config::types::Notifications;
use crate::features::Feature;
use super::*;
@@ -1556,7 +1587,7 @@ trust_level = "trusted"
profiles.insert(
"work".to_string(),
ConfigProfile {
include_view_image_tool: Some(false),
tools_view_image: Some(false),
..Default::default()
},
);
@@ -1573,7 +1604,6 @@ trust_level = "trusted"
)?;
assert!(!config.features.enabled(Feature::ViewImageTool));
assert!(!config.include_view_image_tool);
Ok(())
}
@@ -2649,6 +2679,61 @@ model = "gpt-5-codex"
}
}
#[test]
fn cli_override_sets_compact_prompt() -> std::io::Result<()> {
let codex_home = TempDir::new()?;
let overrides = ConfigOverrides {
compact_prompt: Some("Use the compact override".to_string()),
..Default::default()
};
let config = Config::load_from_base_config_with_overrides(
ConfigToml::default(),
overrides,
codex_home.path().to_path_buf(),
)?;
assert_eq!(
config.compact_prompt.as_deref(),
Some("Use the compact override")
);
Ok(())
}
#[test]
fn loads_compact_prompt_from_file() -> std::io::Result<()> {
let codex_home = TempDir::new()?;
let workspace = codex_home.path().join("workspace");
std::fs::create_dir_all(&workspace)?;
let prompt_path = workspace.join("compact_prompt.txt");
std::fs::write(&prompt_path, " summarize differently ")?;
let cfg = ConfigToml {
experimental_compact_prompt_file: Some(PathBuf::from("compact_prompt.txt")),
..Default::default()
};
let overrides = ConfigOverrides {
cwd: Some(workspace),
..Default::default()
};
let config = Config::load_from_base_config_with_overrides(
cfg,
overrides,
codex_home.path().to_path_buf(),
)?;
assert_eq!(
config.compact_prompt.as_deref(),
Some("summarize differently")
);
Ok(())
}
fn create_test_fixture() -> std::io::Result<PrecedenceTestFixture> {
let toml = r#"
model = "o3"
@@ -2804,6 +2889,8 @@ model_verbosity = "high"
model_verbosity: None,
chatgpt_base_url: "https://chatgpt.com/backend-api/".to_string(),
base_instructions: None,
developer_instructions: None,
compact_prompt: None,
forced_chatgpt_workspace_id: None,
forced_login_method: None,
include_apply_patch_tool: false,
@@ -2812,7 +2899,6 @@ model_verbosity = "high"
use_experimental_streamable_shell_tool: false,
use_experimental_unified_exec_tool: false,
use_experimental_use_rmcp_client: false,
include_view_image_tool: true,
features: Features::with_defaults(),
active_profile: Some("o3".to_string()),
active_project: ProjectConfig { trust_level: None },
@@ -2875,6 +2961,8 @@ model_verbosity = "high"
model_verbosity: None,
chatgpt_base_url: "https://chatgpt.com/backend-api/".to_string(),
base_instructions: None,
developer_instructions: None,
compact_prompt: None,
forced_chatgpt_workspace_id: None,
forced_login_method: None,
include_apply_patch_tool: false,
@@ -2883,7 +2971,6 @@ model_verbosity = "high"
use_experimental_streamable_shell_tool: false,
use_experimental_unified_exec_tool: false,
use_experimental_use_rmcp_client: false,
include_view_image_tool: true,
features: Features::with_defaults(),
active_profile: Some("gpt3".to_string()),
active_project: ProjectConfig { trust_level: None },
@@ -2961,6 +3048,8 @@ model_verbosity = "high"
model_verbosity: None,
chatgpt_base_url: "https://chatgpt.com/backend-api/".to_string(),
base_instructions: None,
developer_instructions: None,
compact_prompt: None,
forced_chatgpt_workspace_id: None,
forced_login_method: None,
include_apply_patch_tool: false,
@@ -2969,7 +3058,6 @@ model_verbosity = "high"
use_experimental_streamable_shell_tool: false,
use_experimental_unified_exec_tool: false,
use_experimental_use_rmcp_client: false,
include_view_image_tool: true,
features: Features::with_defaults(),
active_profile: Some("zdr".to_string()),
active_project: ProjectConfig { trust_level: None },
@@ -3033,6 +3121,8 @@ model_verbosity = "high"
model_verbosity: Some(Verbosity::High),
chatgpt_base_url: "https://chatgpt.com/backend-api/".to_string(),
base_instructions: None,
developer_instructions: None,
compact_prompt: None,
forced_chatgpt_workspace_id: None,
forced_login_method: None,
include_apply_patch_tool: false,
@@ -3041,7 +3131,6 @@ model_verbosity = "high"
use_experimental_streamable_shell_tool: false,
use_experimental_unified_exec_tool: false,
use_experimental_use_rmcp_client: false,
include_view_image_tool: true,
features: Features::with_defaults(),
active_profile: Some("gpt5".to_string()),
active_project: ProjectConfig { trust_level: None },
@@ -3174,7 +3263,7 @@ trust_level = "trusted"
#[cfg(test)]
mod notifications_tests {
use crate::config_types::Notifications;
use crate::config::types::Notifications;
use assert_matches::assert_matches;
use serde::Deserialize;


@@ -22,8 +22,8 @@ pub struct ConfigProfile {
pub model_verbosity: Option<Verbosity>,
pub chatgpt_base_url: Option<String>,
pub experimental_instructions_file: Option<PathBuf>,
pub experimental_compact_prompt_file: Option<PathBuf>,
pub include_apply_patch_tool: Option<bool>,
pub include_view_image_tool: Option<bool>,
pub experimental_use_unified_exec_tool: Option<bool>,
pub experimental_use_exec_command_tool: Option<bool>,
pub experimental_use_rmcp_client: Option<bool>,


@@ -24,6 +24,7 @@ pub enum NetworkAccess {
#[serde(rename = "environment_context", rename_all = "snake_case")]
pub(crate) struct EnvironmentContext {
pub cwd: Option<PathBuf>,
pub local_date: Option<String>,
pub approval_policy: Option<AskForApproval>,
pub sandbox_mode: Option<SandboxMode>,
pub network_access: Option<NetworkAccess>,
@@ -34,12 +35,14 @@ pub(crate) struct EnvironmentContext {
impl EnvironmentContext {
pub fn new(
cwd: Option<PathBuf>,
local_date: Option<String>,
approval_policy: Option<AskForApproval>,
sandbox_policy: Option<SandboxPolicy>,
shell: Option<Shell>,
) -> Self {
Self {
cwd,
local_date,
approval_policy,
sandbox_mode: match sandbox_policy {
Some(SandboxPolicy::DangerFullAccess) => Some(SandboxMode::DangerFullAccess),
@@ -79,6 +82,7 @@ impl EnvironmentContext {
pub fn equals_except_shell(&self, other: &EnvironmentContext) -> bool {
let EnvironmentContext {
cwd,
local_date,
approval_policy,
sandbox_mode,
network_access,
@@ -88,6 +92,7 @@ impl EnvironmentContext {
} = other;
self.cwd == *cwd
&& self.local_date == *local_date
&& self.approval_policy == *approval_policy
&& self.sandbox_mode == *sandbox_mode
&& self.network_access == *network_access
@@ -100,6 +105,11 @@ impl EnvironmentContext {
} else {
None
};
let local_date = if before.local_date_with_timezone != after.local_date_with_timezone {
after.local_date_with_timezone.clone()
} else {
None
};
let approval_policy = if before.approval_policy != after.approval_policy {
Some(after.approval_policy)
} else {
@@ -110,7 +120,7 @@ impl EnvironmentContext {
} else {
None
};
EnvironmentContext::new(cwd, approval_policy, sandbox_policy, None)
EnvironmentContext::new(cwd, local_date, approval_policy, sandbox_policy, None)
}
}
@@ -118,6 +128,7 @@ impl From<&TurnContext> for EnvironmentContext {
fn from(turn_context: &TurnContext) -> Self {
Self::new(
Some(turn_context.cwd.clone()),
turn_context.local_date_with_timezone.clone(),
Some(turn_context.approval_policy),
Some(turn_context.sandbox_policy.clone()),
// Shell is not configurable from turn to turn
@@ -134,6 +145,7 @@ impl EnvironmentContext {
/// ```xml
/// <environment_context>
/// <cwd>...</cwd>
/// <local_date>...</local_date>
/// <approval_policy>...</approval_policy>
/// <sandbox_mode>...</sandbox_mode>
/// <writable_roots>...</writable_roots>
@@ -146,6 +158,9 @@ impl EnvironmentContext {
if let Some(cwd) = self.cwd {
lines.push(format!(" <cwd>{}</cwd>", cwd.to_string_lossy()));
}
if let Some(local_date) = self.local_date {
lines.push(format!(" <local_date>{local_date}</local_date>"));
}
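For reference, the new `<local_date>` element slots in directly after `<cwd>`. A minimal free-function sketch of that serialization order (field names assumed from the hunk above; the real method renders many more elements, e.g. `approval_policy`, `sandbox_mode`, `writable_roots`, `shell`):

```rust
// Sketch of the element ordering only: optional elements are emitted in a
// fixed order, and absent fields are simply skipped.
fn serialize_environment_context(cwd: Option<&str>, local_date: Option<&str>) -> String {
    let mut lines = vec!["<environment_context>".to_string()];
    if let Some(cwd) = cwd {
        lines.push(format!("  <cwd>{cwd}</cwd>"));
    }
    // New in this change: the local date (with timezone) follows cwd.
    if let Some(local_date) = local_date {
        lines.push(format!("  <local_date>{local_date}</local_date>"));
    }
    lines.push("</environment_context>".to_string());
    lines.join("\n")
}
```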
if let Some(approval_policy) = self.approval_policy {
lines.push(format!(
" <approval_policy>{approval_policy}</approval_policy>"
@@ -212,6 +227,7 @@ mod tests {
fn serialize_workspace_write_environment_context() {
let context = EnvironmentContext::new(
Some(PathBuf::from("/repo")),
Some("2025-01-01 +00:00".to_string()),
Some(AskForApproval::OnRequest),
Some(workspace_write_policy(vec!["/repo", "/tmp"], false)),
None,
@@ -219,6 +235,7 @@ mod tests {
let expected = r#"<environment_context>
<cwd>/repo</cwd>
<local_date>2025-01-01 +00:00</local_date>
<approval_policy>on-request</approval_policy>
<sandbox_mode>workspace-write</sandbox_mode>
<network_access>restricted</network_access>
@@ -235,12 +252,14 @@ mod tests {
fn serialize_read_only_environment_context() {
let context = EnvironmentContext::new(
None,
Some("2025-01-01 +00:00".to_string()),
Some(AskForApproval::Never),
Some(SandboxPolicy::ReadOnly),
None,
);
let expected = r#"<environment_context>
<local_date>2025-01-01 +00:00</local_date>
<approval_policy>never</approval_policy>
<sandbox_mode>read-only</sandbox_mode>
<network_access>restricted</network_access>
@@ -253,12 +272,14 @@ mod tests {
fn serialize_full_access_environment_context() {
let context = EnvironmentContext::new(
None,
Some("2025-01-01 +00:00".to_string()),
Some(AskForApproval::OnFailure),
Some(SandboxPolicy::DangerFullAccess),
None,
);
let expected = r#"<environment_context>
<local_date>2025-01-01 +00:00</local_date>
<approval_policy>on-failure</approval_policy>
<sandbox_mode>danger-full-access</sandbox_mode>
<network_access>enabled</network_access>
@@ -272,12 +293,14 @@ mod tests {
// Approval policy
let context1 = EnvironmentContext::new(
Some(PathBuf::from("/repo")),
Some("2025-01-01 +00:00".to_string()),
Some(AskForApproval::OnRequest),
Some(workspace_write_policy(vec!["/repo"], false)),
None,
);
let context2 = EnvironmentContext::new(
Some(PathBuf::from("/repo")),
Some("2025-01-01 +00:00".to_string()),
Some(AskForApproval::Never),
Some(workspace_write_policy(vec!["/repo"], true)),
None,
@@ -289,12 +312,14 @@ mod tests {
fn equals_except_shell_compares_sandbox_policy() {
let context1 = EnvironmentContext::new(
Some(PathBuf::from("/repo")),
Some("2025-01-01 +00:00".to_string()),
Some(AskForApproval::OnRequest),
Some(SandboxPolicy::new_read_only_policy()),
None,
);
let context2 = EnvironmentContext::new(
Some(PathBuf::from("/repo")),
Some("2025-01-01 +00:00".to_string()),
Some(AskForApproval::OnRequest),
Some(SandboxPolicy::new_workspace_write_policy()),
None,
@@ -307,12 +332,14 @@ mod tests {
fn equals_except_shell_compares_workspace_write_policy() {
let context1 = EnvironmentContext::new(
Some(PathBuf::from("/repo")),
Some("2025-01-01 +00:00".to_string()),
Some(AskForApproval::OnRequest),
Some(workspace_write_policy(vec!["/repo", "/tmp", "/var"], false)),
None,
);
let context2 = EnvironmentContext::new(
Some(PathBuf::from("/repo")),
Some("2025-01-01 +00:00".to_string()),
Some(AskForApproval::OnRequest),
Some(workspace_write_policy(vec!["/repo", "/tmp"], true)),
None,
@@ -325,6 +352,7 @@ mod tests {
fn equals_except_shell_ignores_shell() {
let context1 = EnvironmentContext::new(
Some(PathBuf::from("/repo")),
Some("2025-01-01 +00:00".to_string()),
Some(AskForApproval::OnRequest),
Some(workspace_write_policy(vec!["/repo"], false)),
Some(Shell::Bash(BashShell {
@@ -334,6 +362,7 @@ mod tests {
);
let context2 = EnvironmentContext::new(
Some(PathBuf::from("/repo")),
Some("2025-01-01 +00:00".to_string()),
Some(AskForApproval::OnRequest),
Some(workspace_write_policy(vec!["/repo"], false)),
Some(Shell::Zsh(ZshShell {


@@ -253,7 +253,7 @@ impl std::fmt::Display for UsageLimitReachedError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let message = match self.plan_type.as_ref() {
Some(PlanType::Known(KnownPlan::Plus)) => format!(
"You've hit your usage limit. Upgrade to Pro (https://openai.com/chatgpt/pricing){}",
"You've hit your usage limit. Upgrade to Pro (https://openai.com/chatgpt/pricing), visit chatgpt.com/codex/settings/usage to purchase more credits{}",
retry_suffix_after_or(self.resets_at.as_ref())
),
Some(PlanType::Known(KnownPlan::Team)) | Some(PlanType::Known(KnownPlan::Business)) => {
@@ -266,8 +266,11 @@ impl std::fmt::Display for UsageLimitReachedError {
"You've hit your usage limit. Upgrade to Plus to continue using Codex (https://openai.com/chatgpt/pricing)."
.to_string()
}
Some(PlanType::Known(KnownPlan::Pro))
| Some(PlanType::Known(KnownPlan::Enterprise))
Some(PlanType::Known(KnownPlan::Pro)) => format!(
"You've hit your usage limit. Visit chatgpt.com/codex/settings/usage to purchase more credits{}",
retry_suffix_after_or(self.resets_at.as_ref())
),
Some(PlanType::Known(KnownPlan::Enterprise))
| Some(PlanType::Known(KnownPlan::Edu)) => format!(
"You've hit your usage limit.{}",
retry_suffix(self.resets_at.as_ref())
@@ -467,7 +470,7 @@ mod tests {
};
assert_eq!(
err.to_string(),
"You've hit your usage limit. Upgrade to Pro (https://openai.com/chatgpt/pricing) or try again later."
"You've hit your usage limit. Upgrade to Pro (https://openai.com/chatgpt/pricing), visit chatgpt.com/codex/settings/usage to purchase more credits or try again later."
);
}
@@ -597,7 +600,7 @@ mod tests {
#[test]
fn usage_limit_reached_error_formats_default_for_other_plans() {
let err = UsageLimitReachedError {
plan_type: Some(PlanType::Known(KnownPlan::Pro)),
plan_type: Some(PlanType::Known(KnownPlan::Enterprise)),
resets_at: None,
rate_limits: Some(rate_limit_snapshot()),
};
@@ -607,6 +610,23 @@ mod tests {
);
}
#[test]
fn usage_limit_reached_error_formats_pro_plan_with_reset() {
let base = Utc.with_ymd_and_hms(2024, 1, 1, 0, 0, 0).unwrap();
let resets_at = base + ChronoDuration::hours(1);
with_now_override(base, move || {
let err = UsageLimitReachedError {
plan_type: Some(PlanType::Known(KnownPlan::Pro)),
resets_at: Some(resets_at),
rate_limits: Some(rate_limit_snapshot()),
};
assert_eq!(
err.to_string(),
"You've hit your usage limit. Visit chatgpt.com/codex/settings/usage to purchase more credits or try again in 1 hour."
);
});
}
#[test]
fn usage_limit_reached_includes_minutes_when_available() {
let base = Utc.with_ymd_and_hms(2024, 1, 1, 0, 0, 0).unwrap();
@@ -636,7 +656,7 @@ mod tests {
};
assert_eq!(
err.to_string(),
"You've hit your usage limit. Upgrade to Pro (https://openai.com/chatgpt/pricing) or try again in 3 hours 32 minutes."
"You've hit your usage limit. Upgrade to Pro (https://openai.com/chatgpt/pricing), visit chatgpt.com/codex/settings/usage to purchase more credits or try again in 3 hours 32 minutes."
);
});
}


@@ -1,6 +1,6 @@
use crate::config_types::EnvironmentVariablePattern;
use crate::config_types::ShellEnvironmentPolicy;
use crate::config_types::ShellEnvironmentPolicyInherit;
use crate::config::types::EnvironmentVariablePattern;
use crate::config::types::ShellEnvironmentPolicy;
use crate::config::types::ShellEnvironmentPolicyInherit;
use std::collections::HashMap;
use std::collections::HashSet;
@@ -71,7 +71,7 @@ where
#[cfg(test)]
mod tests {
use super::*;
use crate::config_types::ShellEnvironmentPolicyInherit;
use crate::config::types::ShellEnvironmentPolicyInherit;
use maplit::hashmap;
fn make_vars(pairs: &[(&str, &str)]) -> Vec<(String, String)> {


@@ -6,7 +6,7 @@
//! container attached to `Config`.
use crate::config::ConfigToml;
use crate::config_profile::ConfigProfile;
use crate::config::profile::ConfigProfile;
use serde::Deserialize;
use std::collections::BTreeMap;
use std::collections::BTreeSet;
@@ -82,7 +82,6 @@ pub struct Features {
#[derive(Debug, Clone, Default)]
pub struct FeatureOverrides {
pub include_apply_patch_tool: Option<bool>,
pub include_view_image_tool: Option<bool>,
pub web_search_request: Option<bool>,
pub experimental_sandbox_command_assessment: Option<bool>,
}
@@ -91,7 +90,6 @@ impl FeatureOverrides {
fn apply(self, features: &mut Features) {
LegacyFeatureToggles {
include_apply_patch_tool: self.include_apply_patch_tool,
include_view_image_tool: self.include_view_image_tool,
tools_web_search: self.web_search_request,
..Default::default()
}
@@ -193,7 +191,6 @@ impl Features {
let profile_legacy = LegacyFeatureToggles {
include_apply_patch_tool: config_profile.include_apply_patch_tool,
include_view_image_tool: config_profile.include_view_image_tool,
experimental_sandbox_command_assessment: config_profile
.experimental_sandbox_command_assessment,
experimental_use_freeform_apply_patch: config_profile


@@ -33,10 +33,6 @@ const ALIASES: &[Alias] = &[
legacy_key: "include_apply_patch_tool",
feature: Feature::ApplyPatchFreeform,
},
Alias {
legacy_key: "include_view_image_tool",
feature: Feature::ViewImageTool,
},
Alias {
legacy_key: "web_search",
feature: Feature::WebSearchRequest,
@@ -56,7 +52,6 @@ pub(crate) fn feature_for_key(key: &str) -> Option<Feature> {
#[derive(Debug, Default)]
pub struct LegacyFeatureToggles {
pub include_apply_patch_tool: Option<bool>,
pub include_view_image_tool: Option<bool>,
pub experimental_sandbox_command_assessment: Option<bool>,
pub experimental_use_freeform_apply_patch: Option<bool>,
pub experimental_use_exec_command_tool: Option<bool>,
@@ -110,12 +105,6 @@ impl LegacyFeatureToggles {
self.tools_web_search,
"tools.web_search",
);
set_if_some(
features,
Feature::ViewImageTool,
self.include_view_image_tool,
"include_view_image_tool",
);
set_if_some(
features,
Feature::ViewImageTool,


@@ -17,10 +17,7 @@ pub use codex_conversation::CodexConversation;
mod codex_delegate;
mod command_safety;
pub mod config;
pub mod config_edit;
pub mod config_loader;
pub mod config_profile;
pub mod config_types;
mod conversation_history;
pub mod custom_prompts;
mod environment_context;


@@ -7,8 +7,8 @@ use codex_rmcp_client::determine_streamable_http_auth_status;
use futures::future::join_all;
use tracing::warn;
use crate::config_types::McpServerConfig;
use crate::config_types::McpServerTransportConfig;
use crate::config::types::McpServerConfig;
use crate::config::types::McpServerTransportConfig;
#[derive(Debug, Clone)]
pub struct McpAuthStatusEntry {


@@ -37,8 +37,8 @@ use tokio::task::JoinSet;
use tracing::info;
use tracing::warn;
use crate::config_types::McpServerConfig;
use crate::config_types::McpServerTransportConfig;
use crate::config::types::McpServerConfig;
use crate::config::types::McpServerTransportConfig;
/// Delimiter used to separate the server name from the tool name in a fully
/// qualified tool name.


@@ -28,7 +28,7 @@ use tokio::fs;
use tokio::io::AsyncReadExt;
use crate::config::Config;
use crate::config_types::HistoryPersistence;
use crate::config::types::HistoryPersistence;
use codex_protocol::ConversationId;
#[cfg(unix)]


@@ -1,4 +1,4 @@
use crate::config_types::ReasoningSummaryFormat;
use crate::config::types::ReasoningSummaryFormat;
use crate::tools::handlers::apply_patch::ApplyPatchToolType;
/// The `instructions` field in the payload sent to a model should always start


@@ -1,6 +1,6 @@
use crate::config::Config;
use crate::config_types::OtelExporterKind as Kind;
use crate::config_types::OtelHttpProtocol as Protocol;
use crate::config::types::OtelExporterKind as Kind;
use crate::config::types::OtelHttpProtocol as Protocol;
use crate::default_client::originator;
use codex_otel::config::OtelExporter;
use codex_otel::config::OtelHttpProtocol;


@@ -71,6 +71,10 @@
(sysctl-name-prefix "net.routetable.")
)
; Allow Java to set CPU type grade when required
(allow sysctl-write
(sysctl-name "kern.grade_cputype"))
; IOKit
(allow iokit-open
(iokit-registry-entry-class "RootDomainUserClient")


@@ -4,6 +4,8 @@ use async_trait::async_trait;
use codex_protocol::items::TurnItem;
use codex_protocol::models::ContentItem;
use codex_protocol::models::ResponseItem;
use codex_protocol::protocol::AgentMessageContentDeltaEvent;
use codex_protocol::protocol::AgentMessageDeltaEvent;
use codex_protocol::protocol::Event;
use codex_protocol::protocol::EventMsg;
use codex_protocol::protocol::ExitedReviewModeEvent;
@@ -88,6 +90,7 @@ async fn start_review_conversation(
session.clone_session(),
ctx.clone(),
cancellation_token,
None,
)
.await)
.ok()
@@ -111,13 +114,15 @@ async fn process_review_events(
}
prev_agent_message = Some(event);
}
// Suppress ItemCompleted for assistant messages: forwarding it would
// trigger legacy AgentMessage via as_legacy_events(), which this
// Suppress ItemCompleted only for assistant messages: forwarding it
// would trigger legacy AgentMessage via as_legacy_events(), which this
// review flow intentionally hides in favor of structured output.
EventMsg::ItemCompleted(ItemCompletedEvent {
item: TurnItem::AgentMessage(_),
..
}) => {}
})
| EventMsg::AgentMessageDelta(AgentMessageDeltaEvent { .. })
| EventMsg::AgentMessageContentDelta(AgentMessageContentDeltaEvent { .. }) => {}
EventMsg::TaskComplete(task_complete) => {
// Parse review output from the last agent message (if present).
let out = task_complete


@@ -40,3 +40,31 @@ impl From<UserInstructions> for ResponseItem {
}
}
}
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
#[serde(rename = "developer_instructions", rename_all = "snake_case")]
pub(crate) struct DeveloperInstructions {
text: String,
}
impl DeveloperInstructions {
pub fn new<T: Into<String>>(text: T) -> Self {
Self { text: text.into() }
}
pub fn into_text(self) -> String {
self.text
}
}
impl From<DeveloperInstructions> for ResponseItem {
fn from(di: DeveloperInstructions) -> Self {
ResponseItem::Message {
id: None,
role: "developer".to_string(),
content: vec![ContentItem::InputText {
text: di.into_text(),
}],
}
}
}


@@ -240,6 +240,30 @@ impl TestCodexHarness {
.expect("output string")
.to_string()
}
pub async fn custom_tool_call_output(&self, call_id: &str) -> String {
let bodies = self.request_bodies().await;
custom_tool_call_output(&bodies, call_id)
.get("output")
.and_then(Value::as_str)
.expect("output string")
.to_string()
}
}
fn custom_tool_call_output<'a>(bodies: &'a [Value], call_id: &str) -> &'a Value {
for body in bodies {
if let Some(items) = body.get("input").and_then(Value::as_array) {
for item in items {
if item.get("type").and_then(Value::as_str) == Some("custom_tool_call_output")
&& item.get("call_id").and_then(Value::as_str) == Some(call_id)
{
return item;
}
}
}
}
panic!("custom_tool_call_output {call_id} not found");
}
fn function_call_output<'a>(bodies: &'a [Value], call_id: &str) -> &'a Value {


@@ -18,7 +18,7 @@ use tempfile::TempDir;
use wiremock::matchers::header;
#[tokio::test]
async fn responses_stream_includes_task_type_header() {
async fn responses_stream_includes_subagent_header_on_review() {
core_test_support::skip_if_no_network!();
let server = responses::start_mock_server().await;
@@ -27,9 +27,12 @@ async fn responses_stream_includes_task_type_header() {
responses::ev_completed("resp-1"),
]);
let request_recorder =
responses::mount_sse_once_match(&server, header("Codex-Task-Type", "exec"), response_body)
.await;
let request_recorder = responses::mount_sse_once_match(
&server,
header("x-openai-subagent", "review"),
response_body,
)
.await;
let provider = ModelProviderInfo {
name: "mock".into(),
@@ -76,7 +79,7 @@ async fn responses_stream_includes_task_type_header() {
effort,
summary,
conversation_id,
SessionSource::Exec,
SessionSource::SubAgent(codex_protocol::protocol::SubAgentSource::Review),
);
let mut prompt = Prompt::default();
@@ -96,5 +99,98 @@ async fn responses_stream_includes_task_type_header() {
}
let request = request_recorder.single_request();
assert_eq!(request.header("Codex-Task-Type").as_deref(), Some("exec"));
assert_eq!(
request.header("x-openai-subagent").as_deref(),
Some("review")
);
}
#[tokio::test]
async fn responses_stream_includes_subagent_header_on_other() {
core_test_support::skip_if_no_network!();
let server = responses::start_mock_server().await;
let response_body = responses::sse(vec![
responses::ev_response_created("resp-1"),
responses::ev_completed("resp-1"),
]);
let request_recorder = responses::mount_sse_once_match(
&server,
header("x-openai-subagent", "my-task"),
response_body,
)
.await;
let provider = ModelProviderInfo {
name: "mock".into(),
base_url: Some(format!("{}/v1", server.uri())),
env_key: None,
env_key_instructions: None,
experimental_bearer_token: None,
wire_api: WireApi::Responses,
query_params: None,
http_headers: None,
env_http_headers: None,
request_max_retries: Some(0),
stream_max_retries: Some(0),
stream_idle_timeout_ms: Some(5_000),
requires_openai_auth: false,
};
let codex_home = TempDir::new().expect("failed to create TempDir");
let mut config = load_default_config_for_test(&codex_home);
config.model_provider_id = provider.name.clone();
config.model_provider = provider.clone();
let effort = config.model_reasoning_effort;
let summary = config.model_reasoning_summary;
let config = Arc::new(config);
let conversation_id = ConversationId::new();
let otel_event_manager = OtelEventManager::new(
conversation_id,
config.model.as_str(),
config.model_family.slug.as_str(),
None,
Some("test@test.com".to_string()),
Some(AuthMode::ChatGPT),
false,
"test".to_string(),
);
let client = ModelClient::new(
Arc::clone(&config),
None,
otel_event_manager,
provider,
effort,
summary,
conversation_id,
SessionSource::SubAgent(codex_protocol::protocol::SubAgentSource::Other(
"my-task".to_string(),
)),
);
let mut prompt = Prompt::default();
prompt.input = vec![ResponseItem::Message {
id: None,
role: "user".into(),
content: vec![ContentItem::InputText {
text: "hello".into(),
}],
}];
let mut stream = client.stream(&prompt).await.expect("stream failed");
while let Some(event) = stream.next().await {
if matches!(event, Ok(ResponseEvent::Completed { .. })) {
break;
}
}
let request = request_recorder.single_request();
assert_eq!(
request.header("x-openai-subagent").as_deref(),
Some("my-task")
);
}

(File diff suppressed because it is too large.)


@@ -58,6 +58,18 @@ fn assert_message_role(request_body: &serde_json::Value, role: &str) {
assert_eq!(request_body["role"].as_str().unwrap(), role);
}
#[expect(clippy::expect_used)]
fn assert_message_equals(request_body: &serde_json::Value, text: &str) {
let content = request_body["content"][0]["text"]
.as_str()
.expect("invalid message content");
assert_eq!(
content, text,
"expected message content '{content}' to equal '{text}'"
);
}
#[expect(clippy::expect_used)]
fn assert_message_starts_with(request_body: &serde_json::Value, text: &str) {
let content = request_body["content"][0]["text"]
@@ -608,6 +620,64 @@ async fn includes_user_instructions_message_in_request() {
assert_message_ends_with(&request_body["input"][1], "</environment_context>");
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn includes_developer_instructions_message_in_request() {
skip_if_no_network!();
let server = MockServer::start().await;
let resp_mock =
responses::mount_sse_once_match(&server, path("/v1/responses"), sse_completed("resp1"))
.await;
let model_provider = ModelProviderInfo {
base_url: Some(format!("{}/v1", server.uri())),
..built_in_model_providers()["openai"].clone()
};
let codex_home = TempDir::new().unwrap();
let mut config = load_default_config_for_test(&codex_home);
config.model_provider = model_provider;
config.user_instructions = Some("be nice".to_string());
config.developer_instructions = Some("be useful".to_string());
let conversation_manager =
ConversationManager::with_auth(CodexAuth::from_api_key("Test API Key"));
let codex = conversation_manager
.new_conversation(config)
.await
.expect("create new conversation")
.conversation;
codex
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "hello".into(),
}],
})
.await
.unwrap();
wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
let request = resp_mock.single_request();
let request_body = request.body_json();
assert!(
!request_body["instructions"]
.as_str()
.unwrap()
.contains("be nice")
);
assert_message_role(&request_body["input"][0], "developer");
assert_message_equals(&request_body["input"][0], "be useful");
assert_message_role(&request_body["input"][1], "user");
assert_message_starts_with(&request_body["input"][1], "<user_instructions>");
assert_message_ends_with(&request_body["input"][1], "</user_instructions>");
assert_message_role(&request_body["input"][2], "user");
assert_message_starts_with(&request_body["input"][2], "<environment_context>");
assert_message_ends_with(&request_body["input"][2], "</environment_context>");
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn azure_responses_request_includes_store_and_reasoning_ids() {
skip_if_no_network!();


@@ -261,6 +261,65 @@ async fn summarize_context_three_requests_and_instructions() {
);
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn manual_compact_uses_custom_prompt() {
skip_if_no_network!();
let server = start_mock_server().await;
let sse_stream = sse(vec![ev_completed("r1")]);
mount_sse_once(&server, sse_stream).await;
let custom_prompt = "Use this compact prompt instead";
let model_provider = ModelProviderInfo {
base_url: Some(format!("{}/v1", server.uri())),
..built_in_model_providers()["openai"].clone()
};
let home = TempDir::new().unwrap();
let mut config = load_default_config_for_test(&home);
config.model_provider = model_provider;
config.compact_prompt = Some(custom_prompt.to_string());
let conversation_manager = ConversationManager::with_auth(CodexAuth::from_api_key("dummy"));
let codex = conversation_manager
.new_conversation(config)
.await
.expect("create conversation")
.conversation;
codex.submit(Op::Compact).await.expect("trigger compact");
wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
let requests = server.received_requests().await.expect("collect requests");
let body = requests
.iter()
.find_map(|req| req.body_json::<serde_json::Value>().ok())
.expect("summary request body");
let input = body
.get("input")
.and_then(|v| v.as_array())
.expect("input array");
let mut found_custom_prompt = false;
let mut found_default_prompt = false;
for item in input {
if item["type"].as_str() != Some("message") {
continue;
}
let text = item["content"][0]["text"].as_str().unwrap_or_default();
if text == custom_prompt {
found_custom_prompt = true;
}
if text == SUMMARIZATION_PROMPT {
found_default_prompt = true;
}
}
assert!(found_custom_prompt, "custom prompt should be injected");
assert!(!found_default_prompt, "default prompt should be replaced");
}
// Windows CI only: bump to 4 workers to prevent SSE/event starvation and test timeouts.
#[cfg_attr(windows, tokio::test(flavor = "multi_thread", worker_threads = 4))]
#[cfg_attr(not(windows), tokio::test(flavor = "multi_thread", worker_threads = 2))]

View File

@@ -146,20 +146,7 @@ async fn compact_resume_and_fork_preserve_model_history_view() {
.unwrap_or_default()
.to_string();
let tool_calls = json!(requests[0]["tools"].as_array());
// Allow sub-agent runs to use a distinct prompt_cache_key; capture per-index keys.
let pk0 = requests[0]["prompt_cache_key"]
.as_str()
.unwrap_or_default()
.to_string();
let pk1 = requests[1]["prompt_cache_key"]
.as_str()
.unwrap_or_default()
.to_string();
let pk2 = requests[2]["prompt_cache_key"]
.as_str()
.unwrap_or_default()
.to_string();
let pk3 = requests[3]["prompt_cache_key"]
let prompt_cache_key = requests[0]["prompt_cache_key"]
.as_str()
.unwrap_or_default()
.to_string();
@@ -215,7 +202,7 @@ async fn compact_resume_and_fork_preserve_model_history_view() {
"include": [
"reasoning.encrypted_content"
],
"prompt_cache_key": pk0
"prompt_cache_key": prompt_cache_key
});
let compact_1 = json!(
{
@@ -284,7 +271,7 @@ async fn compact_resume_and_fork_preserve_model_history_view() {
"include": [
"reasoning.encrypted_content"
],
"prompt_cache_key": pk1
"prompt_cache_key": prompt_cache_key
});
let user_turn_2_after_compact = json!(
{
@@ -349,7 +336,7 @@ SUMMARY_ONLY_CONTEXT"
"include": [
"reasoning.encrypted_content"
],
"prompt_cache_key": pk2
"prompt_cache_key": prompt_cache_key
});
let user_turn_3_after_resume = json!(
{
@@ -434,7 +421,7 @@ SUMMARY_ONLY_CONTEXT"
"include": [
"reasoning.encrypted_content"
],
"prompt_cache_key": pk3
"prompt_cache_key": prompt_cache_key
});
let user_turn_3_after_fork = json!(
{

View File

@@ -5,6 +5,8 @@ mod abort_tasks;
#[cfg(not(target_os = "windows"))]
mod apply_patch_cli;
#[cfg(not(target_os = "windows"))]
mod apply_patch_freeform;
#[cfg(not(target_os = "windows"))]
mod approvals;
mod cli_stream;
mod client;

View File

@@ -1,5 +1,6 @@
#![allow(clippy::unwrap_used)]
use chrono::Local;
use codex_core::CodexAuth;
use codex_core::ConversationManager;
use codex_core::ModelProviderInfo;
@@ -40,9 +41,11 @@ fn text_user_input(text: String) -> serde_json::Value {
}
fn default_env_context_str(cwd: &str, shell: &Shell) -> String {
let local_date = Local::now().format("%Y-%m-%d %:z").to_string();
format!(
r#"<environment_context>
<cwd>{}</cwd>
<local_date>{local_date}</local_date>
<approval_policy>on-request</approval_policy>
<sandbox_mode>read-only</sandbox_mode>
<network_access>restricted</network_access>
@@ -344,19 +347,23 @@ async fn prefixes_context_and_instructions_once_and_consistently_across_requests
let shell = default_user_shell().await;
let expected_env_text = format!(
r#"<environment_context>
let expected_env_text = {
let local_date = Local::now().format("%Y-%m-%d %:z").to_string();
format!(
r#"<environment_context>
<cwd>{}</cwd>
<local_date>{local_date}</local_date>
<approval_policy>on-request</approval_policy>
<sandbox_mode>read-only</sandbox_mode>
<network_access>restricted</network_access>
{}</environment_context>"#,
cwd.path().to_string_lossy(),
match shell.name() {
Some(name) => format!(" <shell>{name}</shell>\n"),
None => String::new(),
}
);
cwd.path().to_string_lossy(),
match shell.name() {
Some(name) => format!(" <shell>{name}</shell>\n"),
None => String::new(),
}
)
};
let expected_ui_text =
"<user_instructions>\n\nbe consistent and helpful\n\n</user_instructions>";

View File

@@ -204,6 +204,85 @@ async fn review_op_with_plain_text_emits_review_fallback() {
server.verify().await;
}
/// Ensure review flow suppresses assistant-specific streaming/completion events:
/// - AgentMessageContentDelta
/// - AgentMessageDelta (legacy)
/// - ItemCompleted for TurnItem::AgentMessage
// Windows CI only: bump to 4 workers to prevent SSE/event starvation and test timeouts.
#[cfg_attr(windows, tokio::test(flavor = "multi_thread", worker_threads = 4))]
#[cfg_attr(not(windows), tokio::test(flavor = "multi_thread", worker_threads = 2))]
async fn review_filters_agent_message_related_events() {
skip_if_no_network!();
// Stream simulating a typing assistant message with deltas and finalization.
let sse_raw = r#"[
{"type":"response.output_item.added", "item":{
"type":"message", "role":"assistant", "id":"msg-1",
"content":[{"type":"output_text","text":""}]
}},
{"type":"response.output_text.delta", "delta":"Hi"},
{"type":"response.output_text.delta", "delta":" there"},
{"type":"response.output_item.done", "item":{
"type":"message", "role":"assistant", "id":"msg-1",
"content":[{"type":"output_text","text":"Hi there"}]
}},
{"type":"response.completed", "response": {"id": "__ID__"}}
]"#;
let server = start_responses_server_with_sse(sse_raw, 1).await;
let codex_home = TempDir::new().unwrap();
let codex = new_conversation_for_server(&server, &codex_home, |_| {}).await;
codex
.submit(Op::Review {
review_request: ReviewRequest {
prompt: "Filter streaming events".to_string(),
user_facing_hint: "Filter streaming events".to_string(),
},
})
.await
.unwrap();
let mut saw_entered = false;
let mut saw_exited = false;
// Drain until TaskComplete; assert filtered events never surface.
wait_for_event_with_timeout(
&codex,
|event| match event {
EventMsg::TaskComplete(_) => true,
EventMsg::EnteredReviewMode(_) => {
saw_entered = true;
false
}
EventMsg::ExitedReviewMode(_) => {
saw_exited = true;
false
}
// The following must be filtered by the review flow.
EventMsg::AgentMessageContentDelta(_) => {
panic!("unexpected AgentMessageContentDelta surfaced during review")
}
EventMsg::AgentMessageDelta(_) => {
panic!("unexpected AgentMessageDelta surfaced during review")
}
EventMsg::ItemCompleted(ev) => match &ev.item {
codex_protocol::items::TurnItem::AgentMessage(_) => {
panic!(
"unexpected ItemCompleted for TurnItem::AgentMessage surfaced during review"
)
}
_ => false,
},
_ => false,
},
tokio::time::Duration::from_secs(5),
)
.await;
assert!(saw_entered && saw_exited, "missing review lifecycle events");
server.verify().await;
}
/// When the model returns structured JSON in a review, ensure no AgentMessage
/// is emitted; the UI consumes the structured result via ExitedReviewMode.
// Windows CI only: bump to 4 workers to prevent SSE/event starvation and test timeouts.

View File

@@ -8,8 +8,8 @@ use std::time::Duration;
use std::time::SystemTime;
use std::time::UNIX_EPOCH;
use codex_core::config_types::McpServerConfig;
use codex_core::config_types::McpServerTransportConfig;
use codex_core::config::types::McpServerConfig;
use codex_core::config::types::McpServerTransportConfig;
use codex_core::features::Feature;
use codex_core::protocol::AskForApproval;

View File

@@ -203,6 +203,69 @@ async fn python_getpwuid_works_under_seatbelt() {
assert!(status.success(), "python exited with {status:?}");
}
#[tokio::test]
async fn java_home_finds_runtime_under_seatbelt() {
if std::env::var(CODEX_SANDBOX_ENV_VAR) == Ok("seatbelt".to_string()) {
eprintln!("{CODEX_SANDBOX_ENV_VAR} is set to 'seatbelt', skipping test.");
return;
}
let java_home_path = Path::new("/usr/libexec/java_home");
if !java_home_path.exists() {
eprintln!("/usr/libexec/java_home is not present, skipping test.");
return;
}
let baseline_output = tokio::process::Command::new(java_home_path)
.env_remove("JAVA_HOME")
.output()
.await
.expect("should be able to invoke java_home outside seatbelt");
if !baseline_output.status.success() {
eprintln!(
"java_home exited with {:?} outside seatbelt, skipping test",
baseline_output.status
);
return;
}
let policy = SandboxPolicy::ReadOnly;
let command_cwd = std::env::current_dir().expect("getcwd");
let sandbox_cwd = command_cwd.clone();
let mut env: HashMap<String, String> = std::env::vars().collect();
env.remove("JAVA_HOME");
env.remove(CODEX_SANDBOX_ENV_VAR);
let child = spawn_command_under_seatbelt(
vec![java_home_path.to_string_lossy().to_string()],
command_cwd,
&policy,
sandbox_cwd.as_path(),
StdioPolicy::RedirectForShellTool,
env,
)
.await
.expect("should be able to spawn java_home under seatbelt");
let output = child
.wait_with_output()
.await
.expect("should be able to wait for java_home child");
assert!(
output.status.success(),
"java_home under seatbelt exited with {:?}, stderr: {}",
output.status,
String::from_utf8_lossy(&output.stderr)
);
let stdout = String::from_utf8_lossy(&output.stdout);
assert!(
!stdout.trim().is_empty(),
"java_home stdout unexpectedly empty under seatbelt"
);
}
#[expect(clippy::expect_used)]
fn create_test_scenario(tmp: &TempDir) -> TestScenario {
let repo_parent = tmp.path().to_path_buf();

View File

@@ -219,8 +219,8 @@ async fn mcp_tool_call_output_exceeds_limit_truncated_for_model() -> Result<()>
config.features.enable(Feature::RmcpClient);
config.mcp_servers.insert(
server_name.to_string(),
codex_core::config_types::McpServerConfig {
transport: codex_core::config_types::McpServerTransportConfig::Stdio {
codex_core::config::types::McpServerConfig {
transport: codex_core::config::types::McpServerTransportConfig::Stdio {
command: rmcp_test_server_bin,
args: Vec::new(),
env: None,

View File

@@ -61,6 +61,7 @@ Request `newConversation` params (subset):
- `sandbox`: `read-only` | `workspace-write` | `danger-full-access`
- `config`: map of additional config overrides
- `baseInstructions`: optional instruction override
- `compactPrompt`: optional replacement for the default compaction prompt
- `includePlanTool` / `includeApplyPatchTool`: booleans
Response: `{ conversationId, model, reasoningEffort?, rolloutPath }`
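
A minimal request using the new field might look like the following (a sketch only: the JSON-RPC envelope shape and the literal values are illustrative, not taken from this diff):

```json
{
  "method": "newConversation",
  "params": {
    "sandbox": "read-only",
    "compactPrompt": "Summarize this conversation, preserving open TODOs.",
    "includePlanTool": true
  }
}
```

When `compactPrompt` is omitted, the default compaction prompt is used, matching the fallback behavior exercised by `manual_compact_uses_custom_prompt` above.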

View File

@@ -174,8 +174,9 @@ pub async fn run_main(cli: Cli, codex_linux_sandbox_exe: Option<PathBuf>) -> any
model_provider,
codex_linux_sandbox_exe,
base_instructions: None,
developer_instructions: None,
compact_prompt: None,
include_apply_patch_tool: None,
include_view_image_tool: None,
show_raw_agent_reasoning: oss.then_some(true),
tools_web_search_request: None,
experimental_sandbox_command_assessment: None,

View File

@@ -1,5 +1,5 @@
#![cfg(target_os = "linux")]
use codex_core::config_types::ShellEnvironmentPolicy;
use codex_core::config::types::ShellEnvironmentPolicy;
use codex_core::error::CodexErr;
use codex_core::error::SandboxErr;
use codex_core::exec::ExecParams;

View File

@@ -49,6 +49,14 @@ pub struct CodexToolCallParam {
/// The set of instructions to use instead of the default ones.
#[serde(default, skip_serializing_if = "Option::is_none")]
pub base_instructions: Option<String>,
/// Developer instructions that should be injected as a developer role message.
#[serde(default, skip_serializing_if = "Option::is_none")]
pub developer_instructions: Option<String>,
/// Prompt used when compacting the conversation.
#[serde(default, skip_serializing_if = "Option::is_none")]
pub compact_prompt: Option<String>,
}
/// Custom enum mirroring [`AskForApproval`], but has an extra dependency on
@@ -141,6 +149,8 @@ impl CodexToolCallParam {
sandbox,
config: cli_overrides,
base_instructions,
developer_instructions,
compact_prompt,
} = self;
// Build the `ConfigOverrides` recognized by codex-core.
@@ -154,8 +164,9 @@ impl CodexToolCallParam {
model_provider: None,
codex_linux_sandbox_exe,
base_instructions,
developer_instructions,
compact_prompt,
include_apply_patch_tool: None,
include_view_image_tool: None,
show_raw_agent_reasoning: None,
tools_web_search_request: None,
experimental_sandbox_command_assessment: None,
@@ -288,6 +299,14 @@ mod tests {
"description": "The set of instructions to use instead of the default ones.",
"type": "string"
},
"developer-instructions": {
"description": "Developer instructions that should be injected as a developer role message.",
"type": "string"
},
"compact-prompt": {
"description": "Prompt used when compacting the conversation.",
"type": "string"
},
},
"required": [
"prompt"

View File

@@ -341,6 +341,7 @@ async fn codex_tool_passes_base_instructions() -> anyhow::Result<()> {
.send_codex_tool_call(CodexToolCallParam {
prompt: "How are you?".to_string(),
base_instructions: Some("You are a helpful assistant.".to_string()),
developer_instructions: Some("Foreshadow upcoming tool calls.".to_string()),
..Default::default()
})
.await?;
@@ -367,10 +368,28 @@ async fn codex_tool_passes_base_instructions() -> anyhow::Result<()> {
);
let requests = server.received_requests().await.unwrap();
let request = requests[0].body_json::<serde_json::Value>().unwrap();
let request = requests[0].body_json::<serde_json::Value>()?;
let instructions = request["messages"][0]["content"].as_str().unwrap();
assert!(instructions.starts_with("You are a helpful assistant."));
let developer_msg = request["messages"]
.as_array()
.and_then(|messages| {
messages
.iter()
.find(|msg| msg.get("role").and_then(|role| role.as_str()) == Some("developer"))
})
.unwrap();
let developer_content = developer_msg
.get("content")
.and_then(|value| value.as_str())
.unwrap();
assert!(
!developer_content.contains('<'),
"expected developer instructions without XML tags, got `{developer_content}`"
);
assert_eq!(developer_content, "Foreshadow upcoming tool calls.");
Ok(())
}

View File

@@ -332,6 +332,7 @@ class StructField:
name: str
type_name: str
serde: str | None = None
ts: str | None = None
comment: str | None = None
def append(self, out: list[str], supports_const: bool) -> None:
@@ -339,6 +340,8 @@ class StructField:
out.append(f" // {self.comment}\n")
if self.serde:
out.append(f" {self.serde}\n")
if self.ts:
out.append(f" {self.ts}\n")
if self.viz == "const":
if supports_const:
out.append(f" const {self.name}: {self.type_name};\n")
@@ -378,9 +381,9 @@ def define_struct(
prop_type = f"Option<{prop_type}>"
rs_prop = rust_prop_name(prop_name, is_optional)
if prop_type.startswith("&'static str"):
fields.append(StructField("const", rs_prop.name, prop_type, rs_prop.serde))
fields.append(StructField("const", rs_prop.name, prop_type, rs_prop.serde, rs_prop.ts))
else:
fields.append(StructField("pub", rs_prop.name, prop_type, rs_prop.serde))
fields.append(StructField("pub", rs_prop.name, prop_type, rs_prop.serde, rs_prop.ts))
# Special-case: add Codex-specific user_agent to Implementation
if name == "Implementation":
@@ -390,6 +393,7 @@ def define_struct(
"user_agent",
"Option<String>",
'#[serde(default, skip_serializing_if = "Option::is_none")]',
'#[ts(optional)]',
"This is an extra field that the Codex MCP server sends as part of InitializeResult.",
)
)
@@ -474,7 +478,6 @@ def define_string_enum(
out.append(f" {capitalize(value)},\n")
out.append("}\n\n")
return out
def define_untagged_enum(name: str, type_list: list[str], out: list[str]) -> None:
@@ -590,7 +593,7 @@ def get_serde_annotation_for_anyof_type(type_name: str) -> str | None:
def map_type(
typedef: dict[str, any],
typedef: dict[str, Any],
prop_name: str | None = None,
struct_name: str | None = None,
) -> str:
@@ -665,7 +668,8 @@ class RustProp:
name: str
# serde annotation, if necessary
serde: str | None = None
# ts annotation, if necessary
ts: str | None = None
def rust_prop_name(name: str, is_optional: bool) -> RustProp:
"""Convert a JSON property name to a Rust property name."""
@@ -684,6 +688,7 @@ def rust_prop_name(name: str, is_optional: bool) -> RustProp:
prop_name = name
serde_annotations = []
ts_str = None
if is_rename:
serde_annotations.append(f'rename = "{name}"')
if is_optional:
@@ -691,13 +696,18 @@ def rust_prop_name(name: str, is_optional: bool) -> RustProp:
serde_annotations.append('skip_serializing_if = "Option::is_none"')
if serde_annotations:
# Also mark optional fields for ts-rs generation.
serde_str = f"#[serde({', '.join(serde_annotations)})]"
else:
serde_str = None
return RustProp(prop_name, serde_str)
if is_optional and serde_str:
ts_str = "#[ts(optional)]"
return RustProp(prop_name, serde_str, ts_str)
def to_snake_case(name: str) -> str:
def to_snake_case(name: str) -> str | None:
"""Convert a camelCase or PascalCase name to snake_case."""
snake_case = name[0].lower() + "".join("_" + c.lower() if c.isupper() else c for c in name[1:])
if snake_case != name:

View File

@@ -37,14 +37,17 @@ fn default_jsonrpc() -> String {
#[derive(Debug, Clone, PartialEq, Deserialize, Serialize, JsonSchema, TS)]
pub struct Annotations {
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub audience: Option<Vec<Role>>,
#[serde(
rename = "lastModified",
default,
skip_serializing_if = "Option::is_none"
)]
#[ts(optional)]
pub last_modified: Option<String>,
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub priority: Option<f64>,
}
@@ -52,6 +55,7 @@ pub struct Annotations {
#[derive(Debug, Clone, PartialEq, Deserialize, Serialize, JsonSchema, TS)]
pub struct AudioContent {
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub annotations: Option<Annotations>,
pub data: String,
#[serde(rename = "mimeType")]
@@ -64,6 +68,7 @@ pub struct AudioContent {
pub struct BaseMetadata {
pub name: String,
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub title: Option<String>,
}
@@ -71,6 +76,7 @@ pub struct BaseMetadata {
pub struct BlobResourceContents {
pub blob: String,
#[serde(rename = "mimeType", default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub mime_type: Option<String>,
pub uri: String,
}
@@ -78,10 +84,13 @@ pub struct BlobResourceContents {
#[derive(Debug, Clone, PartialEq, Deserialize, Serialize, JsonSchema, TS)]
pub struct BooleanSchema {
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub default: Option<bool>,
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub description: Option<String>,
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub title: Option<String>,
pub r#type: String, // &'static str = "boolean"
}
@@ -98,6 +107,7 @@ impl ModelContextProtocolRequest for CallToolRequest {
#[derive(Debug, Clone, PartialEq, Deserialize, Serialize, JsonSchema, TS)]
pub struct CallToolRequestParams {
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub arguments: Option<serde_json::Value>,
pub name: String,
}
@@ -107,12 +117,14 @@ pub struct CallToolRequestParams {
pub struct CallToolResult {
pub content: Vec<ContentBlock>,
#[serde(rename = "isError", default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub is_error: Option<bool>,
#[serde(
rename = "structuredContent",
default,
skip_serializing_if = "Option::is_none"
)]
#[ts(optional)]
pub structured_content: Option<serde_json::Value>,
}
@@ -135,6 +147,7 @@ impl ModelContextProtocolNotification for CancelledNotification {
#[derive(Debug, Clone, PartialEq, Deserialize, Serialize, JsonSchema, TS)]
pub struct CancelledNotificationParams {
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub reason: Option<String>,
#[serde(rename = "requestId")]
pub request_id: RequestId,
@@ -144,12 +157,16 @@ pub struct CancelledNotificationParams {
#[derive(Debug, Clone, PartialEq, Deserialize, Serialize, JsonSchema, TS)]
pub struct ClientCapabilities {
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub elicitation: Option<serde_json::Value>,
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub experimental: Option<serde_json::Value>,
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub roots: Option<ClientCapabilitiesRoots>,
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub sampling: Option<serde_json::Value>,
}
@@ -161,6 +178,7 @@ pub struct ClientCapabilitiesRoots {
default,
skip_serializing_if = "Option::is_none"
)]
#[ts(optional)]
pub list_changed: Option<bool>,
}
@@ -228,6 +246,7 @@ impl ModelContextProtocolRequest for CompleteRequest {
pub struct CompleteRequestParams {
pub argument: CompleteRequestParamsArgument,
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub context: Option<CompleteRequestParamsContext>,
pub r#ref: CompleteRequestParamsRef,
}
@@ -236,6 +255,7 @@ pub struct CompleteRequestParams {
#[derive(Debug, Clone, PartialEq, Deserialize, Serialize, JsonSchema, TS)]
pub struct CompleteRequestParamsContext {
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub arguments: Option<serde_json::Value>,
}
@@ -262,8 +282,10 @@ pub struct CompleteResult {
#[derive(Debug, Clone, PartialEq, Deserialize, Serialize, JsonSchema, TS)]
pub struct CompleteResultCompletion {
#[serde(rename = "hasMore", default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub has_more: Option<bool>,
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub total: Option<i64>,
pub values: Vec<String>,
}
@@ -302,31 +324,37 @@ pub struct CreateMessageRequestParams {
default,
skip_serializing_if = "Option::is_none"
)]
#[ts(optional)]
pub include_context: Option<String>,
#[serde(rename = "maxTokens")]
pub max_tokens: i64,
pub messages: Vec<SamplingMessage>,
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub metadata: Option<serde_json::Value>,
#[serde(
rename = "modelPreferences",
default,
skip_serializing_if = "Option::is_none"
)]
#[ts(optional)]
pub model_preferences: Option<ModelPreferences>,
#[serde(
rename = "stopSequences",
default,
skip_serializing_if = "Option::is_none"
)]
#[ts(optional)]
pub stop_sequences: Option<Vec<String>>,
#[serde(
rename = "systemPrompt",
default,
skip_serializing_if = "Option::is_none"
)]
#[ts(optional)]
pub system_prompt: Option<String>,
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub temperature: Option<f64>,
}
@@ -341,6 +369,7 @@ pub struct CreateMessageResult {
default,
skip_serializing_if = "Option::is_none"
)]
#[ts(optional)]
pub stop_reason: Option<String>,
}
@@ -385,6 +414,7 @@ pub struct ElicitRequestParams {
pub struct ElicitRequestParamsRequestedSchema {
pub properties: serde_json::Value,
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub required: Option<Vec<String>>,
pub r#type: String, // &'static str = "object"
}
@@ -394,6 +424,7 @@ pub struct ElicitRequestParamsRequestedSchema {
pub struct ElicitResult {
pub action: String,
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub content: Option<serde_json::Value>,
}
@@ -412,6 +443,7 @@ impl From<ElicitResult> for serde_json::Value {
#[derive(Debug, Clone, PartialEq, Deserialize, Serialize, JsonSchema, TS)]
pub struct EmbeddedResource {
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub annotations: Option<Annotations>,
pub resource: EmbeddedResourceResource,
pub r#type: String, // &'static str = "resource"
@@ -429,11 +461,14 @@ pub type EmptyResult = Result;
#[derive(Debug, Clone, PartialEq, Deserialize, Serialize, JsonSchema, TS)]
pub struct EnumSchema {
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub description: Option<String>,
pub r#enum: Vec<String>,
#[serde(rename = "enumNames", default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub enum_names: Option<Vec<String>>,
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub title: Option<String>,
pub r#type: String, // &'static str = "string"
}
@@ -450,6 +485,7 @@ impl ModelContextProtocolRequest for GetPromptRequest {
#[derive(Debug, Clone, PartialEq, Deserialize, Serialize, JsonSchema, TS)]
pub struct GetPromptRequestParams {
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub arguments: Option<serde_json::Value>,
pub name: String,
}
@@ -458,6 +494,7 @@ pub struct GetPromptRequestParams {
#[derive(Debug, Clone, PartialEq, Deserialize, Serialize, JsonSchema, TS)]
pub struct GetPromptResult {
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub description: Option<String>,
pub messages: Vec<PromptMessage>,
}
@@ -474,6 +511,7 @@ impl From<GetPromptResult> for serde_json::Value {
#[derive(Debug, Clone, PartialEq, Deserialize, Serialize, JsonSchema, TS)]
pub struct ImageContent {
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub annotations: Option<Annotations>,
pub data: String,
#[serde(rename = "mimeType")]
@@ -486,10 +524,12 @@ pub struct ImageContent {
pub struct Implementation {
pub name: String,
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub title: Option<String>,
pub version: String,
// This is an extra field that the Codex MCP server sends as part of InitializeResult.
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub user_agent: Option<String>,
}
@@ -516,6 +556,7 @@ pub struct InitializeRequestParams {
pub struct InitializeResult {
pub capabilities: ServerCapabilities,
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub instructions: Option<String>,
#[serde(rename = "protocolVersion")]
pub protocol_version: String,
@@ -552,6 +593,7 @@ pub struct JSONRPCError {
pub struct JSONRPCErrorError {
pub code: i64,
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub data: Option<serde_json::Value>,
pub message: String,
}
@@ -573,6 +615,7 @@ pub struct JSONRPCNotification {
pub jsonrpc: String,
pub method: String,
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub params: Option<serde_json::Value>,
}
@@ -584,6 +627,7 @@ pub struct JSONRPCRequest {
pub jsonrpc: String,
pub method: String,
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub params: Option<serde_json::Value>,
}
@@ -608,6 +652,7 @@ impl ModelContextProtocolRequest for ListPromptsRequest {
#[derive(Debug, Clone, PartialEq, Deserialize, Serialize, JsonSchema, TS)]
pub struct ListPromptsRequestParams {
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub cursor: Option<String>,
}
@@ -619,6 +664,7 @@ pub struct ListPromptsResult {
default,
skip_serializing_if = "Option::is_none"
)]
#[ts(optional)]
pub next_cursor: Option<String>,
pub prompts: Vec<Prompt>,
}
@@ -643,6 +689,7 @@ impl ModelContextProtocolRequest for ListResourceTemplatesRequest {
#[derive(Debug, Clone, PartialEq, Deserialize, Serialize, JsonSchema, TS)]
pub struct ListResourceTemplatesRequestParams {
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub cursor: Option<String>,
}
@@ -654,6 +701,7 @@ pub struct ListResourceTemplatesResult {
default,
skip_serializing_if = "Option::is_none"
)]
#[ts(optional)]
pub next_cursor: Option<String>,
#[serde(rename = "resourceTemplates")]
pub resource_templates: Vec<ResourceTemplate>,
@@ -679,6 +727,7 @@ impl ModelContextProtocolRequest for ListResourcesRequest {
#[derive(Debug, Clone, PartialEq, Deserialize, Serialize, JsonSchema, TS)]
pub struct ListResourcesRequestParams {
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub cursor: Option<String>,
}
@@ -690,6 +739,7 @@ pub struct ListResourcesResult {
default,
skip_serializing_if = "Option::is_none"
)]
#[ts(optional)]
pub next_cursor: Option<String>,
pub resources: Vec<Resource>,
}
@@ -739,6 +789,7 @@ impl ModelContextProtocolRequest for ListToolsRequest {
#[derive(Debug, Clone, PartialEq, Deserialize, Serialize, JsonSchema, TS)]
pub struct ListToolsRequestParams {
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub cursor: Option<String>,
}
@@ -750,6 +801,7 @@ pub struct ListToolsResult {
default,
skip_serializing_if = "Option::is_none"
)]
#[ts(optional)]
pub next_cursor: Option<String>,
pub tools: Vec<Tool>,
}
@@ -799,6 +851,7 @@ pub struct LoggingMessageNotificationParams {
pub data: serde_json::Value,
pub level: LoggingLevel,
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub logger: Option<String>,
}
@@ -809,6 +862,7 @@ pub struct LoggingMessageNotificationParams {
#[derive(Debug, Clone, PartialEq, Deserialize, Serialize, JsonSchema, TS)]
pub struct ModelHint {
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub name: Option<String>,
}
@@ -830,20 +884,24 @@ pub struct ModelPreferences {
default,
skip_serializing_if = "Option::is_none"
)]
#[ts(optional)]
pub cost_priority: Option<f64>,
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub hints: Option<Vec<ModelHint>>,
#[serde(
rename = "intelligencePriority",
default,
skip_serializing_if = "Option::is_none"
)]
#[ts(optional)]
pub intelligence_priority: Option<f64>,
#[serde(
rename = "speedPriority",
default,
skip_serializing_if = "Option::is_none"
)]
#[ts(optional)]
pub speed_priority: Option<f64>,
}
@@ -851,18 +909,23 @@ pub struct ModelPreferences {
pub struct Notification {
pub method: String,
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub params: Option<serde_json::Value>,
}
#[derive(Debug, Clone, PartialEq, Deserialize, Serialize, JsonSchema, TS)]
pub struct NumberSchema {
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub description: Option<String>,
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub maximum: Option<i64>,
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub minimum: Option<i64>,
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub title: Option<String>,
pub r#type: String,
}
@@ -871,12 +934,14 @@ pub struct NumberSchema {
pub struct PaginatedRequest {
pub method: String,
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub params: Option<PaginatedRequestParams>,
}
#[derive(Debug, Clone, PartialEq, Deserialize, Serialize, JsonSchema, TS)]
pub struct PaginatedRequestParams {
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub cursor: Option<String>,
}
@@ -887,6 +952,7 @@ pub struct PaginatedResult {
default,
skip_serializing_if = "Option::is_none"
)]
#[ts(optional)]
pub next_cursor: Option<String>,
}
@@ -929,11 +995,13 @@ impl ModelContextProtocolNotification for ProgressNotification {
#[derive(Debug, Clone, PartialEq, Deserialize, Serialize, JsonSchema, TS)]
pub struct ProgressNotificationParams {
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub message: Option<String>,
pub progress: f64,
#[serde(rename = "progressToken")]
pub progress_token: ProgressToken,
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub total: Option<f64>,
}
@@ -948,11 +1016,14 @@ pub enum ProgressToken {
#[derive(Debug, Clone, PartialEq, Deserialize, Serialize, JsonSchema, TS)]
pub struct Prompt {
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub arguments: Option<Vec<PromptArgument>>,
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub description: Option<String>,
pub name: String,
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub title: Option<String>,
}
@@ -960,11 +1031,14 @@ pub struct Prompt {
#[derive(Debug, Clone, PartialEq, Deserialize, Serialize, JsonSchema, TS)]
pub struct PromptArgument {
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub description: Option<String>,
pub name: String,
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub required: Option<bool>,
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub title: Option<String>,
}
@@ -991,6 +1065,7 @@ pub struct PromptMessage {
pub struct PromptReference {
pub name: String,
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub title: Option<String>,
pub r#type: String, // &'static str = "ref/prompt"
}
@@ -1034,6 +1109,7 @@ impl From<ReadResourceResult> for serde_json::Value {
pub struct Request {
pub method: String,
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub params: Option<serde_json::Value>,
}
@@ -1048,15 +1124,20 @@ pub enum RequestId {
#[derive(Debug, Clone, PartialEq, Deserialize, Serialize, JsonSchema, TS)]
pub struct Resource {
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub annotations: Option<Annotations>,
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub description: Option<String>,
#[serde(rename = "mimeType", default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub mime_type: Option<String>,
pub name: String,
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub size: Option<i64>,
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub title: Option<String>,
pub uri: String,
}
@@ -1065,6 +1146,7 @@ pub struct Resource {
#[derive(Debug, Clone, PartialEq, Deserialize, Serialize, JsonSchema, TS)]
pub struct ResourceContents {
#[serde(rename = "mimeType", default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub mime_type: Option<String>,
pub uri: String,
}
@@ -1075,15 +1157,20 @@ pub struct ResourceContents {
#[derive(Debug, Clone, PartialEq, Deserialize, Serialize, JsonSchema, TS)]
pub struct ResourceLink {
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub annotations: Option<Annotations>,
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub description: Option<String>,
#[serde(rename = "mimeType", default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub mime_type: Option<String>,
pub name: String,
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub size: Option<i64>,
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub title: Option<String>,
pub r#type: String, // &'static str = "resource_link"
pub uri: String,
@@ -1101,13 +1188,17 @@ impl ModelContextProtocolNotification for ResourceListChangedNotification {
#[derive(Debug, Clone, PartialEq, Deserialize, Serialize, JsonSchema, TS)]
pub struct ResourceTemplate {
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub annotations: Option<Annotations>,
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub description: Option<String>,
#[serde(rename = "mimeType", default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub mime_type: Option<String>,
pub name: String,
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub title: Option<String>,
#[serde(rename = "uriTemplate")]
pub uri_template: String,
@@ -1148,6 +1239,7 @@ pub enum Role {
#[derive(Debug, Clone, PartialEq, Deserialize, Serialize, JsonSchema, TS)]
pub struct Root {
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub name: Option<String>,
pub uri: String,
}
@@ -1179,16 +1271,22 @@ pub enum SamplingMessageContent {
#[derive(Debug, Clone, PartialEq, Deserialize, Serialize, JsonSchema, TS)]
pub struct ServerCapabilities {
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub completions: Option<serde_json::Value>,
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub experimental: Option<serde_json::Value>,
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub logging: Option<serde_json::Value>,
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub prompts: Option<ServerCapabilitiesPrompts>,
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub resources: Option<ServerCapabilitiesResources>,
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub tools: Option<ServerCapabilitiesTools>,
}
@@ -1200,6 +1298,7 @@ pub struct ServerCapabilitiesTools {
default,
skip_serializing_if = "Option::is_none"
)]
#[ts(optional)]
pub list_changed: Option<bool>,
}
@@ -1211,8 +1310,10 @@ pub struct ServerCapabilitiesResources {
default,
skip_serializing_if = "Option::is_none"
)]
#[ts(optional)]
pub list_changed: Option<bool>,
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub subscribe: Option<bool>,
}
@@ -1224,6 +1325,7 @@ pub struct ServerCapabilitiesPrompts {
default,
skip_serializing_if = "Option::is_none"
)]
#[ts(optional)]
pub list_changed: Option<bool>,
}
@@ -1298,14 +1400,19 @@ pub struct SetLevelRequestParams {
#[derive(Debug, Clone, PartialEq, Deserialize, Serialize, JsonSchema, TS)]
pub struct StringSchema {
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub description: Option<String>,
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub format: Option<String>,
#[serde(rename = "maxLength", default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub max_length: Option<i64>,
#[serde(rename = "minLength", default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub min_length: Option<i64>,
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub title: Option<String>,
pub r#type: String, // &'static str = "string"
}
@@ -1328,6 +1435,7 @@ pub struct SubscribeRequestParams {
#[derive(Debug, Clone, PartialEq, Deserialize, Serialize, JsonSchema, TS)]
pub struct TextContent {
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub annotations: Option<Annotations>,
pub text: String,
pub r#type: String, // &'static str = "text"
@@ -1336,6 +1444,7 @@ pub struct TextContent {
#[derive(Debug, Clone, PartialEq, Deserialize, Serialize, JsonSchema, TS)]
pub struct TextResourceContents {
#[serde(rename = "mimeType", default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub mime_type: Option<String>,
pub text: String,
pub uri: String,
@@ -1345,8 +1454,10 @@ pub struct TextResourceContents {
#[derive(Debug, Clone, PartialEq, Deserialize, Serialize, JsonSchema, TS)]
pub struct Tool {
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub annotations: Option<ToolAnnotations>,
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub description: Option<String>,
#[serde(rename = "inputSchema")]
pub input_schema: ToolInputSchema,
@@ -1356,8 +1467,10 @@ pub struct Tool {
default,
skip_serializing_if = "Option::is_none"
)]
#[ts(optional)]
pub output_schema: Option<ToolOutputSchema>,
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub title: Option<String>,
}
@@ -1366,8 +1479,10 @@ pub struct Tool {
#[derive(Debug, Clone, PartialEq, Deserialize, Serialize, JsonSchema, TS)]
pub struct ToolOutputSchema {
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub properties: Option<serde_json::Value>,
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub required: Option<Vec<String>>,
pub r#type: String, // &'static str = "object"
}
@@ -1376,8 +1491,10 @@ pub struct ToolOutputSchema {
#[derive(Debug, Clone, PartialEq, Deserialize, Serialize, JsonSchema, TS)]
pub struct ToolInputSchema {
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub properties: Option<serde_json::Value>,
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub required: Option<Vec<String>>,
pub r#type: String, // &'static str = "object"
}
@@ -1397,26 +1514,31 @@ pub struct ToolAnnotations {
default,
skip_serializing_if = "Option::is_none"
)]
#[ts(optional)]
pub destructive_hint: Option<bool>,
#[serde(
rename = "idempotentHint",
default,
skip_serializing_if = "Option::is_none"
)]
#[ts(optional)]
pub idempotent_hint: Option<bool>,
#[serde(
rename = "openWorldHint",
default,
skip_serializing_if = "Option::is_none"
)]
#[ts(optional)]
pub open_world_hint: Option<bool>,
#[serde(
rename = "readOnlyHint",
default,
skip_serializing_if = "Option::is_none"
)]
#[ts(optional)]
pub read_only_hint: Option<bool>,
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub title: Option<String>,
}

View File

@@ -61,7 +61,6 @@ impl SandboxRiskCategory {
}
#[derive(Debug, Clone, Deserialize, Serialize, JsonSchema, TS)]
#[ts(optional_fields = nullable)]
pub struct ExecApprovalRequestEvent {
/// Identifier for the associated exec call, if available.
pub call_id: String,
@@ -79,7 +78,6 @@ pub struct ExecApprovalRequestEvent {
}
#[derive(Debug, Clone, Deserialize, Serialize, JsonSchema, TS)]
#[ts(optional_fields = nullable)]
pub struct ApplyPatchApprovalRequestEvent {
/// Responses API call id for the associated patch apply call, if available.
pub call_id: String,

View File

@@ -11,7 +11,6 @@ use ts_rs::TS;
pub const PROMPTS_CMD_PREFIX: &str = "prompts";
#[derive(Serialize, Deserialize, Debug, Clone, JsonSchema, TS)]
#[ts(optional_fields = nullable)]
pub struct CustomPrompt {
pub name: String,
pub path: PathBuf,

View File

@@ -48,36 +48,35 @@ pub enum ContentItem {
#[serde(tag = "type", rename_all = "snake_case")]
pub enum ResponseItem {
Message {
#[serde(skip_serializing)]
#[ts(optional = nullable)]
#[serde(default, skip_serializing)]
#[ts(skip)]
id: Option<String>,
role: String,
content: Vec<ContentItem>,
},
Reasoning {
#[serde(default, skip_serializing)]
#[ts(skip)]
id: String,
summary: Vec<ReasoningItemReasoningSummary>,
#[serde(default, skip_serializing_if = "should_serialize_reasoning_content")]
#[ts(optional = nullable)]
#[ts(optional)]
content: Option<Vec<ReasoningItemContent>>,
#[ts(optional = nullable)]
encrypted_content: Option<String>,
},
LocalShellCall {
/// Set when using the chat completions API.
#[serde(skip_serializing)]
#[ts(optional = nullable)]
#[serde(default, skip_serializing)]
#[ts(skip)]
id: Option<String>,
/// Set when using the Responses API.
#[ts(optional = nullable)]
call_id: Option<String>,
status: LocalShellStatus,
action: LocalShellAction,
},
FunctionCall {
#[serde(skip_serializing)]
#[ts(optional = nullable)]
#[serde(default, skip_serializing)]
#[ts(skip)]
id: Option<String>,
name: String,
// The Responses API returns the function call arguments as a *string* that contains
@@ -97,11 +96,11 @@ pub enum ResponseItem {
output: FunctionCallOutputPayload,
},
CustomToolCall {
#[serde(skip_serializing)]
#[ts(optional = nullable)]
#[serde(default, skip_serializing)]
#[ts(skip)]
id: Option<String>,
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional = nullable)]
#[ts(optional)]
status: Option<String>,
call_id: String,
@@ -121,11 +120,11 @@ pub enum ResponseItem {
// "action": {"type":"search","query":"weather: San Francisco, CA"}
// }
WebSearchCall {
#[serde(skip_serializing)]
#[ts(optional = nullable)]
#[serde(default, skip_serializing)]
#[ts(skip)]
id: Option<String>,
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional = nullable)]
#[ts(optional)]
status: Option<String>,
action: WebSearchAction,
},
@@ -203,7 +202,6 @@ pub enum LocalShellAction {
}
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, JsonSchema, TS)]
#[ts(optional_fields = nullable)]
pub struct LocalShellExecAction {
pub command: Vec<String>,
pub timeout_ms: Option<u64>,
@@ -296,7 +294,6 @@ impl From<Vec<UserInput>> for ResponseInputItem {
/// If the `name` of a `ResponseItem::FunctionCall` is either `container.exec`
or `shell`, the `arguments` field should deserialize to this struct.
#[derive(Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[ts(optional_fields = nullable)]
pub struct ShellToolCallParams {
pub command: Vec<String>,
pub workdir: Option<String>,
@@ -329,7 +326,6 @@ pub enum FunctionCallOutputContentItem {
/// `content_items` with the structured form that the Responses/Chat
/// Completions APIs understand.
#[derive(Debug, Default, Clone, PartialEq, JsonSchema, TS)]
#[ts(optional_fields = nullable)]
pub struct FunctionCallOutputPayload {
pub content: String,
#[serde(skip_serializing_if = "Option::is_none")]

View File

@@ -18,14 +18,11 @@ pub enum ParsedCommand {
},
ListFiles {
cmd: String,
#[ts(optional = nullable)]
path: Option<String>,
},
Search {
cmd: String,
#[ts(optional = nullable)]
query: Option<String>,
#[ts(optional = nullable)]
path: Option<String>,
},
Unknown {

View File

@@ -21,7 +21,6 @@ pub struct PlanItemArg {
#[derive(Debug, Clone, Serialize, Deserialize, JsonSchema, TS)]
#[serde(deny_unknown_fields)]
#[ts(optional_fields = nullable)]
pub struct UpdatePlanArgs {
#[serde(default)]
pub explanation: Option<String>,

View File

@@ -661,7 +661,6 @@ impl HasLegacyEvent for EventMsg {
}
#[derive(Debug, Clone, Deserialize, Serialize, JsonSchema, TS)]
#[ts(optional_fields = nullable)]
pub struct ExitedReviewModeEvent {
pub review_output: Option<ReviewOutputEvent>,
}
@@ -674,13 +673,11 @@ pub struct ErrorEvent {
}
#[derive(Debug, Clone, Deserialize, Serialize, JsonSchema, TS)]
#[ts(optional_fields = nullable)]
pub struct TaskCompleteEvent {
pub last_agent_message: Option<String>,
}
#[derive(Debug, Clone, Deserialize, Serialize, JsonSchema, TS)]
#[ts(optional_fields = nullable)]
pub struct TaskStartedEvent {
pub model_context_window: Option<i64>,
}
@@ -700,11 +697,9 @@ pub struct TokenUsage {
}
#[derive(Debug, Clone, Deserialize, Serialize, JsonSchema, TS)]
#[ts(optional_fields = nullable)]
pub struct TokenUsageInfo {
pub total_token_usage: TokenUsage,
pub last_token_usage: TokenUsage,
#[ts(optional = nullable)]
#[ts(type = "number | null")]
pub model_context_window: Option<i64>,
}
@@ -765,30 +760,25 @@ impl TokenUsageInfo {
}
#[derive(Debug, Clone, Deserialize, Serialize, JsonSchema, TS)]
#[ts(optional_fields = nullable)]
pub struct TokenCountEvent {
pub info: Option<TokenUsageInfo>,
pub rate_limits: Option<RateLimitSnapshot>,
}
#[derive(Debug, Clone, PartialEq, Deserialize, Serialize, JsonSchema, TS)]
#[ts(optional_fields = nullable)]
pub struct RateLimitSnapshot {
pub primary: Option<RateLimitWindow>,
pub secondary: Option<RateLimitWindow>,
}
#[derive(Debug, Clone, PartialEq, Deserialize, Serialize, JsonSchema, TS)]
#[ts(optional_fields = nullable)]
pub struct RateLimitWindow {
/// Percentage (0-100) of the window that has been consumed.
pub used_percent: f64,
/// Rolling window duration, in minutes.
#[ts(optional = nullable)]
#[ts(type = "number | null")]
pub window_minutes: Option<i64>,
/// Unix timestamp (seconds since epoch) when the window resets.
#[ts(optional = nullable)]
#[ts(type = "number | null")]
pub resets_at: Option<i64>,
}
@@ -902,7 +892,6 @@ pub struct AgentMessageEvent {
}
#[derive(Debug, Clone, Deserialize, Serialize, JsonSchema, TS)]
#[ts(optional_fields = nullable)]
pub struct UserMessageEvent {
pub message: String,
#[serde(skip_serializing_if = "Option::is_none")]
@@ -938,7 +927,6 @@ pub struct AgentReasoningDeltaEvent {
}
#[derive(Debug, Clone, Deserialize, Serialize, JsonSchema, TS, PartialEq)]
#[ts(optional_fields = nullable)]
pub struct McpInvocation {
/// Name of the MCP server as defined in the config.
pub server: String,
@@ -1058,6 +1046,8 @@ pub enum SessionSource {
}
#[derive(Serialize, Deserialize, Clone, Debug, PartialEq, Eq, JsonSchema, TS)]
#[serde(rename_all = "snake_case")]
#[ts(rename_all = "snake_case")]
pub enum SubAgentSource {
Review,
Compact,
@@ -1065,7 +1055,6 @@ pub enum SubAgentSource {
}
#[derive(Serialize, Deserialize, Clone, Debug, JsonSchema, TS)]
#[ts(optional_fields = nullable)]
pub struct SessionMeta {
pub id: ConversationId,
pub timestamp: String,
@@ -1094,7 +1083,6 @@ impl Default for SessionMeta {
}
#[derive(Serialize, Deserialize, Debug, Clone, JsonSchema, TS)]
#[ts(optional_fields = nullable)]
pub struct SessionMetaLine {
#[serde(flatten)]
pub meta: SessionMeta,
@@ -1130,7 +1118,6 @@ impl From<CompactedItem> for ResponseItem {
}
#[derive(Serialize, Deserialize, Clone, Debug, JsonSchema, TS)]
#[ts(optional_fields = nullable)]
pub struct TurnContextItem {
pub cwd: PathBuf,
pub approval_policy: AskForApproval,
@@ -1149,7 +1136,6 @@ pub struct RolloutLine {
}
#[derive(Serialize, Deserialize, Clone, Debug, JsonSchema, TS)]
#[ts(optional_fields = nullable)]
pub struct GitInfo {
/// Current commit hash (SHA)
#[serde(skip_serializing_if = "Option::is_none")]
@@ -1283,7 +1269,6 @@ pub struct BackgroundEventEvent {
}
#[derive(Debug, Clone, Deserialize, Serialize, JsonSchema, TS)]
#[ts(optional_fields = nullable)]
pub struct DeprecationNoticeEvent {
/// Concise summary of what is deprecated.
pub summary: String,
@@ -1293,14 +1278,12 @@ pub struct DeprecationNoticeEvent {
}
#[derive(Debug, Clone, Deserialize, Serialize, JsonSchema, TS)]
#[ts(optional_fields = nullable)]
pub struct UndoStartedEvent {
#[serde(skip_serializing_if = "Option::is_none")]
pub message: Option<String>,
}
#[derive(Debug, Clone, Deserialize, Serialize, JsonSchema, TS)]
#[ts(optional_fields = nullable)]
pub struct UndoCompletedEvent {
pub success: bool,
#[serde(skip_serializing_if = "Option::is_none")]
@@ -1345,7 +1328,6 @@ pub struct TurnDiffEvent {
}
#[derive(Debug, Clone, Deserialize, Serialize, JsonSchema, TS)]
#[ts(optional_fields = nullable)]
pub struct GetHistoryEntryResponseEvent {
pub offset: usize,
pub log_id: u64,
@@ -1395,7 +1377,6 @@ pub struct ListCustomPromptsResponseEvent {
}
#[derive(Debug, Default, Clone, Deserialize, Serialize, JsonSchema, TS)]
#[ts(optional_fields = nullable)]
pub struct SessionConfiguredEvent {
/// Name left as session_id instead of conversation_id for backwards compatibility.
pub session_id: ConversationId,
@@ -1456,7 +1437,6 @@ pub enum FileChange {
},
Update {
unified_diff: String,
#[ts(optional = nullable)]
move_path: Option<PathBuf>,
},
}

View File

@@ -17,7 +17,7 @@ use codex_ansi_escape::ansi_escape_line;
use codex_core::AuthManager;
use codex_core::ConversationManager;
use codex_core::config::Config;
use codex_core::config_edit::ConfigEditsBuilder;
use codex_core::config::edit::ConfigEditsBuilder;
use codex_core::model_family::find_family_for_model;
use codex_core::protocol::SessionSource;
use codex_core::protocol::TokenUsage;

View File

@@ -4,7 +4,7 @@ use std::path::PathBuf;
use std::sync::Arc;
use codex_core::config::Config;
use codex_core::config_types::Notifications;
use codex_core::config::types::Notifications;
use codex_core::git_info::current_branch_name;
use codex_core::git_info::local_git_branches;
use codex_core::project_doc::DEFAULT_PROJECT_DOC_FILENAME;
@@ -1270,7 +1270,16 @@ impl ChatWidget {
SlashCommand::Mcp => {
self.add_mcp_output();
}
#[cfg(debug_assertions)]
SlashCommand::Rollout => {
if let Some(path) = self.rollout_path() {
self.add_info_message(
format!("Current rollout path: {}", path.display()),
None,
);
} else {
self.add_info_message("Rollout path is not available yet.".to_string(), None);
}
}
SlashCommand::TestApproval => {
use codex_core::protocol::EventMsg;
use std::collections::HashMap;

View File

@@ -863,6 +863,42 @@ fn slash_undo_sends_op() {
}
}
#[test]
fn slash_rollout_displays_current_path() {
let (mut chat, mut rx, _op_rx) = make_chatwidget_manual();
let rollout_path = PathBuf::from("/tmp/codex-test-rollout.jsonl");
chat.current_rollout_path = Some(rollout_path.clone());
chat.dispatch_command(SlashCommand::Rollout);
let cells = drain_insert_history(&mut rx);
assert_eq!(cells.len(), 1, "expected info message for rollout path");
let rendered = lines_to_single_string(&cells[0]);
assert!(
rendered.contains(&rollout_path.display().to_string()),
"expected rollout path to be shown: {rendered}"
);
}
#[test]
fn slash_rollout_handles_missing_path() {
let (mut chat, mut rx, _op_rx) = make_chatwidget_manual();
chat.dispatch_command(SlashCommand::Rollout);
let cells = drain_insert_history(&mut rx);
assert_eq!(
cells.len(),
1,
"expected info message explaining missing path"
);
let rendered = lines_to_single_string(&cells[0]);
assert!(
rendered.contains("not available"),
"expected missing rollout path message: {rendered}"
);
}
#[test]
fn undo_success_events_render_info_messages() {
let (mut chat, mut rx, _op_rx) = make_chatwidget_manual();

View File

@@ -23,8 +23,8 @@ use crate::wrapping::word_wrap_lines;
use base64::Engine;
use codex_common::format_env_display::format_env_display;
use codex_core::config::Config;
use codex_core::config_types::McpServerTransportConfig;
use codex_core::config_types::ReasoningSummaryFormat;
use codex_core::config::types::McpServerTransportConfig;
use codex_core::config::types::ReasoningSummaryFormat;
use codex_core::protocol::FileChange;
use codex_core::protocol::McpAuthStatus;
use codex_core::protocol::McpInvocation;
@@ -1450,8 +1450,8 @@ mod tests {
use codex_core::config::Config;
use codex_core::config::ConfigOverrides;
use codex_core::config::ConfigToml;
use codex_core::config_types::McpServerConfig;
use codex_core::config_types::McpServerTransportConfig;
use codex_core::config::types::McpServerConfig;
use codex_core::config::types::McpServerTransportConfig;
use codex_core::protocol::McpAuthStatus;
use codex_protocol::parse_command::ParsedCommand;
use dirs::home_dir;

View File

@@ -144,8 +144,9 @@ pub async fn run_main(
config_profile: cli.config_profile.clone(),
codex_linux_sandbox_exe,
base_instructions: None,
developer_instructions: None,
compact_prompt: None,
include_apply_patch_tool: None,
include_view_image_tool: None,
show_raw_agent_reasoning: cli.oss.then_some(true),
tools_web_search_request: cli.web_search.then_some(true),
experimental_sandbox_command_assessment: None,

View File

@@ -1,6 +1,6 @@
use std::path::PathBuf;
use codex_core::config_edit::ConfigEditsBuilder;
use codex_core::config::edit::ConfigEditsBuilder;
use crossterm::event::KeyCode;
use crossterm::event::KeyEvent;
use crossterm::event::KeyEventKind;

View File

@@ -26,7 +26,7 @@ pub enum SlashCommand {
Logout,
Quit,
Feedback,
#[cfg(debug_assertions)]
Rollout,
TestApproval,
}
@@ -48,7 +48,7 @@ impl SlashCommand {
SlashCommand::Approvals => "choose what Codex can do without approval",
SlashCommand::Mcp => "list configured MCP tools",
SlashCommand::Logout => "log out of Codex",
#[cfg(debug_assertions)]
SlashCommand::Rollout => "print the rollout file path",
SlashCommand::TestApproval => "test approval request",
}
}
@@ -76,14 +76,23 @@ impl SlashCommand {
| SlashCommand::Mcp
| SlashCommand::Feedback
| SlashCommand::Quit => true,
#[cfg(debug_assertions)]
SlashCommand::Rollout => true,
SlashCommand::TestApproval => true,
}
}
fn is_visible(self) -> bool {
match self {
SlashCommand::Rollout | SlashCommand::TestApproval => cfg!(debug_assertions),
_ => true,
}
}
}
/// Return all built-in commands in a Vec paired with their command string.
pub fn built_in_slash_commands() -> Vec<(&'static str, SlashCommand)> {
SlashCommand::iter().map(|c| (c.command(), c)).collect()
SlashCommand::iter()
.filter(|command| command.is_visible())
.map(|c| (c.command(), c))
.collect()
}
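The change above swaps per-variant `#[cfg(debug_assertions)]` attributes for a runtime `is_visible` filter: the variants exist in every build, and `cfg!(debug_assertions)` (a boolean, not an attribute) decides whether they are listed. A self-contained sketch of the pattern with a hypothetical two-variant enum:

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
enum Cmd {
    Quit,
    Rollout, // intended to be debug-only
}

impl Cmd {
    fn is_visible(self) -> bool {
        match self {
            // cfg! evaluates to true in debug builds, false in release builds,
            // without removing the variant from the type.
            Cmd::Rollout => cfg!(debug_assertions),
            _ => true,
        }
    }
}

fn visible() -> Vec<Cmd> {
    [Cmd::Quit, Cmd::Rollout]
        .into_iter()
        .filter(|c| c.is_visible())
        .collect()
}

fn main() {
    // Quit is always listed; Rollout appears only in debug builds.
    assert!(visible().contains(&Cmd::Quit));
    println!("{:?}", visible());
}
```

Keeping the variants unconditionally compiled avoids `cfg`-mismatch errors in `match` arms and trait impls, which is why the diff also drops the scattered `#[cfg(debug_assertions)]` attributes.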

View File

@@ -31,6 +31,8 @@ use super::rate_limits::StatusRateLimitRow;
use super::rate_limits::compose_rate_limit_data;
use super::rate_limits::format_status_limit_summary;
use super::rate_limits::render_status_limit_progress_bar;
use crate::wrapping::RtOptions;
use crate::wrapping::word_wrap_lines;
#[derive(Debug, Clone)]
struct StatusContextWindowData {
@@ -195,13 +197,7 @@ impl StatusHistoryCell {
lines
}
StatusRateLimitData::Missing => {
vec![formatter.line(
"Limits",
vec![
Span::from("visit ").dim(),
"chatgpt.com/codex/settings/usage".cyan().underlined(),
],
)]
vec![formatter.line("Limits", vec![Span::from("data not available yet").dim()])]
}
}
}
@@ -315,6 +311,21 @@ impl HistoryCell for StatusHistoryCell {
let formatter = FieldFormatter::from_labels(labels.iter().map(String::as_str));
let value_width = formatter.value_width(available_inner_width);
let note_first_line = Line::from(vec![
Span::from("Visit ").cyan(),
"chatgpt.com/codex/settings/usage".cyan().underlined(),
Span::from(" for up-to-date").cyan(),
]);
let note_second_line = Line::from(vec![
Span::from("information on rate limits and credits").cyan(),
]);
let note_lines = word_wrap_lines(
[note_first_line, note_second_line],
RtOptions::new(available_inner_width),
);
lines.extend(note_lines);
lines.push(Line::from(Vec::<Span<'static>>::new()));
let mut model_spans = vec![Span::from(self.model_name.clone())];
if !self.model_details.is_empty() {
model_spans.push(Span::from(" (").dim());

View File

@@ -7,6 +7,9 @@ expression: sanitized
╭────────────────────────────────────────────────────────────────────────────╮
│ >_ OpenAI Codex (v0.0.0) │
│ │
│ Visit chatgpt.com/codex/settings/usage for up-to-date │
│ information on rate limits and credits │
│ │
│ Model: gpt-5-codex (reasoning none, summaries auto) │
│ Directory: [[workspace]] │
│ Approval: on-request │

View File

@@ -7,6 +7,9 @@ expression: sanitized
╭─────────────────────────────────────────────────────────────────────╮
│ >_ OpenAI Codex (v0.0.0) │
│ │
│ Visit chatgpt.com/codex/settings/usage for up-to-date │
│ information on rate limits and credits │
│ │
│ Model: gpt-5-codex (reasoning high, summaries detailed) │
│ Directory: [[workspace]] │
│ Approval: on-request │

View File

@@ -7,6 +7,9 @@ expression: sanitized
╭─────────────────────────────────────────────────────────────────╮
│ >_ OpenAI Codex (v0.0.0) │
│ │
│ Visit chatgpt.com/codex/settings/usage for up-to-date │
│ information on rate limits and credits │
│ │
│ Model: gpt-5-codex (reasoning none, summaries auto) │
│ Directory: [[workspace]] │
│ Approval: on-request │

View File

@@ -7,6 +7,9 @@ expression: sanitized
╭─────────────────────────────────────────────────────────────────╮
│ >_ OpenAI Codex (v0.0.0) │
│ │
│ Visit chatgpt.com/codex/settings/usage for up-to-date │
│ information on rate limits and credits │
│ │
│ Model: gpt-5-codex (reasoning none, summaries auto) │
│ Directory: [[workspace]] │
│ Approval: on-request │
@@ -15,5 +18,5 @@ expression: sanitized
│ │
│ Token usage: 750 total (500 input + 250 output) │
│ Context window: 100% left (750 used / 272K) │
│ Limits: visit chatgpt.com/codex/settings/usage
│ Limits: data not available yet
╰─────────────────────────────────────────────────────────────────╯

View File

@@ -7,6 +7,9 @@ expression: sanitized
╭─────────────────────────────────────────────────────────────────────╮
│ >_ OpenAI Codex (v0.0.0) │
│ │
│ Visit chatgpt.com/codex/settings/usage for up-to-date │
│ information on rate limits and credits │
│ │
│ Model: gpt-5-codex (reasoning none, summaries auto) │
│ Directory: [[workspace]] │
│ Approval: on-request │

View File

@@ -7,6 +7,10 @@ expression: sanitized
╭────────────────────────────────────────────╮
│ >_ OpenAI Codex (v0.0.0) │
│ │
│ Visit chatgpt.com/codex/settings/usage for │
│ up-to-date │
│ information on rate limits and credits │
│ │
│ Model: gpt-5-codex (reasoning │
│ Directory: [[workspace]] │
│ Approval: on-request │

View File

@@ -28,7 +28,6 @@ type CommitID = String;
/// Details of a ghost commit created from a repository state.
#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize, JsonSchema, TS)]
#[ts(optional_fields = nullable)]
pub struct GhostCommit {
id: CommitID,
parent: Option<CommitID>,

View File

@@ -312,12 +312,12 @@ Though using this option may also be necessary if you try to use Codex in enviro
### tools.\*
Use the optional `[tools]` table to toggle built-in tools that the agent may call. Both keys default to `false` (tools stay disabled) unless you opt in:
Use the optional `[tools]` table to toggle built-in tools that the agent may call. `web_search` stays off unless you opt in, while `view_image` is now enabled by default:
```toml
[tools]
web_search = true # allow Codex to issue first-party web searches without prompting you
view_image = true # let Codex attach local images (paths in your workspace) to the model request
view_image = false # disable image uploads (they're enabled by default)
```
`web_search` is also recognized under the legacy name `web_search_request`. The `view_image` toggle is useful when you want to include screenshots or diagrams from your repo without pasting them manually. Codex still respects sandboxing: it can only attach files inside the workspace roots you allow.
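For the legacy spelling mentioned above, a minimal config fragment (key names as described in this doc):

```toml
[tools]
web_search_request = true  # legacy alias, still recognized as tools.web_search
view_image = true          # explicit, though this is already the default
```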
@@ -926,7 +926,7 @@ Valid values:
| `experimental_use_exec_command_tool` | boolean | Use experimental exec command tool. |
| `projects.<path>.trust_level` | string | Mark project/worktree as trusted (only `"trusted"` is recognized). |
| `tools.web_search` | boolean | Enable web search tool (alias: `web_search_request`) (default: false). |
| `tools.view_image` | boolean | Enable or disable the `view_image` tool so Codex can attach local image files from the workspace (default: true). |
| `forced_login_method` | `chatgpt` \| `api` | Only allow Codex to be used with ChatGPT or API keys. |
| `forced_chatgpt_workspace_id` | string (uuid) | Only allow Codex to be used with the specified ChatGPT workspace. |
| `cli_auth_credentials_store` | `file` \| `keyring` \| `auto` | Where to store CLI login credentials (default: `file`). |
| `tools.view_image` | boolean | Enable the `view_image` tool so Codex can attach local image files from the workspace (default: false). |