Compare commits


43 Commits

Author SHA1 Message Date
Ahmed Ibrahim
570639cf98 tests 2025-09-12 15:46:49 -04:00
Ahmed Ibrahim
1c50fbb8a7 tests 2025-09-12 14:15:16 -04:00
Ahmed Ibrahim
3316d04ed4 rebase 2025-09-12 14:02:12 -04:00
Ahmed Ibrahim
67a8566f59 Merge branch 'patch-tools' of https://github.com/openai/codex into patch-tools 2025-09-12 14:01:34 -04:00
Ahmed Ibrahim
2d36621f48 Merge branch 'main' into patch-tools 2025-09-12 14:01:22 -04:00
Ahmed Ibrahim
0a70810fc0 squash: 19 commit(s) since origin/main
- fmt + clippy: codex-core deterministic shell tool tests, conflict cleanup
- patch-tools
- fix tests
- add tests
- add tests
- patch-tools
- patch-tools
- patch-tools
- patch-tools
- patch-tools
- patch-tools
- patch-tools
- patch-tools
- patch-tools
- clippy
- add tests
- clippy
- clippy
- add tests
2025-09-12 14:00:10 -04:00
Ahmed Ibrahim
b5cf9e09ff add tests 2025-09-12 13:54:38 -04:00
Ahmed Ibrahim
b2067c73d9 clippy 2025-09-12 13:50:23 -04:00
Ahmed Ibrahim
13e8771ee9 clippy 2025-09-12 13:50:16 -04:00
jif-oai
bba567cee9 bug: fix model save (#3525)
Fix these two behaviors:
1. The model does not get saved unless we press CTRL + S.
2. The reasoning effort gets saved.
2025-09-12 10:38:12 -07:00
Ahmed Ibrahim
6577197fa4 add tests 2025-09-12 13:29:31 -04:00
Ahmed Ibrahim
fd1e12f34e clippy 2025-09-12 12:27:20 -04:00
Ahmed Ibrahim
ba6af23cb6 Add spacing to timer duration formats (#3471)
<img width="426" height="28" alt="image"
src="https://github.com/user-attachments/assets/b281aca3-3c8d-4b88-a017-5d2f8ea9f3d5"
/>
2025-09-12 12:05:57 -04:00
Charlie Weems
f805d17930 MCP Documentation Changes Requests in Code Review (#3507)
Adds the review changes from @bolinfest that were dropped due to
auto-merge (#3345).
2025-09-12 09:04:49 -07:00
Ahmed Ibrahim
9580603fed patch-tools 2025-09-12 11:53:34 -04:00
Ahmed Ibrahim
da38a8f56a patch-tools 2025-09-12 11:51:08 -04:00
Ahmed Ibrahim
552a438cc9 patch-tools 2025-09-12 11:50:18 -04:00
Ahmed Ibrahim
a36a273d4e patch-tools 2025-09-12 11:49:46 -04:00
Ahmed Ibrahim
6884c6ccf6 patch-tools 2025-09-12 11:49:14 -04:00
Ahmed Ibrahim
1e5a613c55 patch-tools 2025-09-12 11:48:01 -04:00
Michael Bolin
90965fbc84 chore: add just test, which runs cargo nextest (#3508)
Since I can never seem to remember to add `--no-fail-fast` when running
`cargo nextest run`, let's just create an alias for it.
2025-09-12 08:44:44 -07:00
Ahmed Ibrahim
4fee2ca3fd patch-tools 2025-09-12 11:44:40 -04:00
Ahmed Ibrahim
3318cf9369 patch-tools 2025-09-12 11:40:23 -04:00
Ahmed Ibrahim
5ba0bcf035 patch-tools 2025-09-12 11:36:57 -04:00
Ahmed Ibrahim
6d55ef62f9 add tests 2025-09-12 11:30:10 -04:00
Ahmed Ibrahim
cecf3a82a6 add tests 2025-09-12 11:29:25 -04:00
Michael Bolin
c172e8e997 feat: added SetDefaultModel to JSON-RPC server (#3512)
This adds `SetDefaultModel`, which takes `model` and `reasoning_effort`
as optional fields. If set, the field will overwrite what is in the
user's `config.toml`.

This reuses logic that was added to support the `/model` command in the
TUI: https://github.com/openai/codex/pull/2799.
2025-09-11 23:44:17 -07:00
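A minimal sketch of what such a request payload might look like, built with `serde_json`. Only the optional `model` and `reasoning_effort` fields come from the PR description; the envelope shape (method name casing, `id`, `params`) is an assumption for illustration.

```rust
use serde_json::json;

fn main() {
    // Hypothetical SetDefaultModel request; the envelope shape is assumed,
    // the optional `model` / `reasoning_effort` params are from the PR.
    let request = json!({
        "id": 1,
        "method": "setDefaultModel",
        "params": {
            "model": "gpt-5",
            "reasoning_effort": "high"
        }
    });
    // Omitting a field leaves the corresponding config.toml value untouched.
    println!("{request}");
}
```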
Michael Bolin
9bbeb75361 feat: include reasoning_effort in NewConversationResponse (#3506)
`ClientRequest::NewConversation` picks up the reasoning level from the user's defaults in `config.toml`, so it should be reported in `NewConversationResponse`.
2025-09-11 21:04:40 -07:00
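A sketch of the shape this adds to the response, again via `serde_json`; `reasoning_effort` is the newly reported field, while the other field names and values here are illustrative assumptions.

```rust
use serde_json::json;

fn main() {
    // Hypothetical NewConversationResponse body; `reasoning_effort` now
    // reflects the default picked up from the user's config.toml.
    let response = json!({
        "conversation_id": "c0ffee00-0000-0000-0000-000000000000",
        "model": "gpt-5",
        "reasoning_effort": "medium"
    });
    println!("{response}");
}
```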
Fouad Matin
6ccd32c601 add(readme): IDE (#3494)
Update the README copy to add a link for installing Codex in your IDE.
2025-09-11 17:46:20 -07:00
pakrym-oai
3b5a5412bb Log cf-ray header in client traces (#3488)
## Summary
- log the `cf-ray` header when tracing HTTP responses in the Codex
client
- keep existing response status logging unchanged

## Testing
- just fmt
- just fix -p codex-core
- cargo test -p codex-core *(fails:
suite::client::azure_overrides_assign_properties_used_for_responses_url,
suite::client::env_var_overrides_loaded_auth)*

------
https://chatgpt.com/codex/tasks/task_i_68c31640dacc83209be131baf91611cd
2025-09-11 17:42:44 -07:00
jif-oai
44bb53df1e bug: default to image (#3501)
Default the MIME type to image
2025-09-11 23:10:24 +00:00
Dylan Hurd
9a7266a33f fix tests 2025-09-11 15:56:27 -07:00
Ahmed Ibrahim
2abad8fece patch-tools 2025-09-11 18:47:38 -04:00
Ahmed Ibrahim
0d4a25b981 fmt + clippy: codex-core deterministic shell tool tests, conflict cleanup 2025-09-11 15:40:25 -07:00
jif-oai
8453915e02 feat: TUI onboarding (#3398)
An example of what onboarding could look like.
2025-09-11 15:04:29 -07:00
Ahmed Ibrahim
44587c2443 Use PlanType enum when formatting usage-limit CTA (#3495)
- Started using the PlanType enum
- Added a CTA for team/business plans
- Refactored a bit to unify the logic
2025-09-11 22:01:25 +00:00
Charlie Weems
8f7b22b652 Add more detailed documentation on MCP server usage (#3345)
Adds further information on how to get started with `codex mcp`:
- Tool details and parameter references
- Quickstart with an example using the MCP Inspector.
2025-09-11 14:38:24 -07:00
Dylan
027944c64e fix: improve handle_sandbox_error timeouts (#3435)
## Summary
Handle timeouts the same way, regardless of approval mode. There's more
to do here, but this is simple and should be zero-regret.

## Testing
- [x] existing tests pass
- [x] test locally and verify rollout
2025-09-11 12:09:20 -07:00
Michael Bolin
bec51f6c05 chore: enable clippy::redundant_clone (#3489)
Created this PR by:

- adding `redundant_clone` to `[workspace.lints.clippy]` in
`codex-rs/Cargo.toml`
- running `cargo clippy --tests --fix`
- running `just fmt`

Though I had to clean up one instance of the following that resulted:

```rust
let codex = codex;
```
2025-09-11 11:59:37 -07:00
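For context, a minimal (hypothetical) example of the pattern this lint denies — a `.clone()` that is the value's last use:

```rust
fn main() {
    let name = String::from("codex");
    // `name` is never used again, so this clone is redundant; with
    // `redundant_clone = "deny"` clippy rejects it, and passing `name`
    // directly (by move) is enough.
    let owned = name.clone();
    println!("{owned}");
}
```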
pakrym-oai
66967500bb Assign the entire gpt-5 model family same characteristics (#3490)
So the context size indicator is displayed.
2025-09-11 18:56:49 +00:00
Ahmed Ibrahim
167b4f0e25 Clear composer on fork (#3445)
Fixes this

<img width="344" height="51" alt="image"
src="https://github.com/user-attachments/assets/f227d338-b044-4f8d-bf07-87499b4230d8"
/>
2025-09-11 11:45:17 -07:00
Michael Bolin
167154178b fix: use -F instead of -f for force=true in gh call (#3486)
Apparently `-F` is the correct thing to use: `-f` always sends the field as a raw string, while `-F` parses values like `true` into typed JSON, which the update-a-reference endpoint expects for `force`. From the code sample at
https://docs.github.com/en/rest/git/refs?apiVersion=2022-11-28#update-a-reference:

```shell
gh api \
  --method PATCH \
  -H "Accept: application/vnd.github+json" \
  -H "X-GitHub-Api-Version: 2022-11-28" \
  /repos/OWNER/REPO/git/refs/REF \
   -f 'sha=aa218f56b14c9653891f9e74264a383fa43fefbd' -F "force=true"
```

Also, I ran the following locally and verified it worked:

```shell
export GITHUB_REPOSITORY=openai/codex
export GITHUB_SHA=305252b2fb2d57bb40a9e4bad269db9a761f7099
gh api \
  repos/${GITHUB_REPOSITORY}/git/refs/heads/latest-alpha-cli \
  -X PATCH \
  -f sha="${GITHUB_SHA}" \
  -F force=true
```

`$GITHUB_REPOSITORY` and `$GITHUB_SHA` should already be available as
environment variables for the `run` step without having to be redeclared
in the `env` section.
2025-09-11 11:32:47 -07:00
Ahmed Ibrahim
674e3d3c90 Add Compact and Turn Context to the rollout items (#3444)
Adding compact and turn context to the rollout items

based on #3440
2025-09-11 18:08:51 +00:00
74 changed files with 1840 additions and 455 deletions

View File

@@ -237,4 +237,4 @@ jobs:
repos/${GITHUB_REPOSITORY}/git/refs/heads/latest-alpha-cli \
-X PATCH \
-f sha="${GITHUB_SHA}" \
-f force=true
-F force=true

View File

@@ -2,7 +2,10 @@
<p align="center"><code>npm i -g @openai/codex</code><br />or <code>brew install codex</code></p>
<p align="center"><strong>Codex CLI</strong> is a coding agent from OpenAI that runs locally on your computer.</br>If you are looking for the <em>cloud-based agent</em> from OpenAI, <strong>Codex Web</strong>, see <a href="https://chatgpt.com/codex">chatgpt.com/codex</a>.</p>
<p align="center"><strong>Codex CLI</strong> is a coding agent from OpenAI that runs locally on your computer.
</br>
</br>If you want Codex in your code editor (VS Code, Cursor, Windsurf), <a href="https://developers.openai.com/codex/ide">install in your IDE</a>
</br>If you are looking for the <em>cloud-based agent</em> from OpenAI, <strong>Codex Web</strong>, go to <a href="https://chatgpt.com/codex">chatgpt.com/codex</a></p>
<p align="center">
<img src="./.github/codex-cli-splash.png" alt="Codex CLI splash" width="80%" />

View File

@@ -34,6 +34,7 @@ rust = {}
[workspace.lints.clippy]
expect_used = "deny"
redundant_clone = "deny"
uninlined_format_args = "deny"
unwrap_used = "deny"

View File

@@ -617,7 +617,7 @@ fn test_parse_patch_lenient() {
assert_eq!(
parse_patch_text(&patch_text_in_double_quoted_heredoc, ParseMode::Lenient),
Ok(ApplyPatchArgs {
hunks: expected_patch.clone(),
hunks: expected_patch,
patch: patch_text.to_string(),
workdir: None,
})
@@ -637,7 +637,7 @@ fn test_parse_patch_lenient() {
"<<EOF\n*** Begin Patch\n*** Update File: file2.py\nEOF\n".to_string();
assert_eq!(
parse_patch_text(&patch_text_with_missing_closing_heredoc, ParseMode::Strict),
Err(expected_error.clone())
Err(expected_error)
);
assert_eq!(
parse_patch_text(&patch_text_with_missing_closing_heredoc, ParseMode::Lenient),

View File

@@ -2,7 +2,7 @@ use std::time::Duration;
use std::time::Instant;
/// Returns a string representing the elapsed time since `start_time` like
/// "1m15s" or "1.50s".
/// "1m 15s" or "1.50s".
pub fn format_elapsed(start_time: Instant) -> String {
format_duration(start_time.elapsed())
}
@@ -12,7 +12,7 @@ pub fn format_elapsed(start_time: Instant) -> String {
/// Formatting rules:
/// * < 1 s -> "{milli}ms"
/// * < 60 s -> "{sec:.2}s" (two decimal places)
/// * >= 60 s -> "{min}m{sec:02}s"
/// * >= 60 s -> "{min}m {sec:02}s"
pub fn format_duration(duration: Duration) -> String {
let millis = duration.as_millis() as i64;
format_elapsed_millis(millis)
@@ -26,7 +26,7 @@ fn format_elapsed_millis(millis: i64) -> String {
} else {
let minutes = millis / 60_000;
let seconds = (millis % 60_000) / 1000;
format!("{minutes}m{seconds:02}s")
format!("{minutes}m {seconds:02}s")
}
}
@@ -61,12 +61,18 @@ mod tests {
fn test_format_duration_minutes() {
// Durations ≥ 1 minute should be printed mmss.
let dur = Duration::from_millis(75_000); // 1m15s
assert_eq!(format_duration(dur), "1m15s");
assert_eq!(format_duration(dur), "1m 15s");
let dur_exact = Duration::from_millis(60_000); // 1m0s
assert_eq!(format_duration(dur_exact), "1m00s");
assert_eq!(format_duration(dur_exact), "1m 00s");
let dur_long = Duration::from_millis(3_601_000);
assert_eq!(format_duration(dur_long), "60m01s");
assert_eq!(format_duration(dur_long), "60m 01s");
}
#[test]
fn test_format_duration_one_hour_has_space() {
let dur_hour = Duration::from_millis(3_600_000);
assert_eq!(format_duration(dur_hour), "60m 00s");
}
}

View File

@@ -49,6 +49,13 @@ pub fn builtin_model_presets() -> &'static [ModelPreset] {
model: "gpt-5",
effort: ReasoningEffort::High,
},
ModelPreset {
id: "gpt-5-high-new",
label: "gpt-5 high new",
description: "— our latest release tuned to rely on the model's built-in reasoning defaults",
model: "gpt-5-high-new",
effort: ReasoningEffort::Medium,
},
];
PRESETS
}

View File

@@ -17,6 +17,7 @@ use std::time::Duration;
use codex_protocol::mcp_protocol::AuthMode;
use crate::token_data::PlanType;
use crate::token_data::TokenData;
use crate::token_data::parse_id_token;
@@ -131,13 +132,12 @@ impl CodexAuth {
}
pub fn get_account_id(&self) -> Option<String> {
self.get_current_token_data()
.and_then(|t| t.account_id.clone())
self.get_current_token_data().and_then(|t| t.account_id)
}
pub fn get_plan_type(&self) -> Option<String> {
pub(crate) fn get_plan_type(&self) -> Option<PlanType> {
self.get_current_token_data()
.and_then(|t| t.id_token.chatgpt_plan_type.as_ref().map(|p| p.as_string()))
.and_then(|t| t.id_token.chatgpt_plan_type)
}
fn get_current_auth_json(&self) -> Option<AuthDotJson> {
@@ -146,7 +146,7 @@ impl CodexAuth {
}
fn get_current_token_data(&self) -> Option<TokenData> {
self.get_current_auth_json().and_then(|t| t.tokens.clone())
self.get_current_auth_json().and_then(|t| t.tokens)
}
/// Consider this private to integration tests.
@@ -291,10 +291,10 @@ async fn update_tokens(
let tokens = auth_dot_json.tokens.get_or_insert_with(TokenData::default);
tokens.id_token = parse_id_token(&id_token).map_err(std::io::Error::other)?;
if let Some(access_token) = access_token {
tokens.access_token = access_token.to_string();
tokens.access_token = access_token;
}
if let Some(refresh_token) = refresh_token {
tokens.refresh_token = refresh_token.to_string();
tokens.refresh_token = refresh_token;
}
auth_dot_json.last_refresh = Some(Utc::now());
write_auth_json(auth_file, &auth_dot_json)?;

View File

@@ -41,6 +41,7 @@ use crate::model_provider_info::WireApi;
use crate::openai_model_info::get_model_info;
use crate::openai_tools::create_tools_json_for_responses_api;
use crate::protocol::TokenUsage;
use crate::token_data::PlanType;
use crate::util::backoff;
use codex_protocol::config_types::ReasoningEffort as ReasoningEffortConfig;
use codex_protocol::config_types::ReasoningSummary as ReasoningSummaryConfig;
@@ -60,7 +61,7 @@ struct Error {
message: Option<String>,
// Optional fields available on "usage_limit_reached" and "usage_not_included" errors
plan_type: Option<String>,
plan_type: Option<PlanType>,
resets_in_seconds: Option<u64>,
}
@@ -239,10 +240,10 @@ impl ModelClient {
let res = req_builder.send().await;
if let Ok(resp) = &res {
trace!(
"Response status: {}, request-id: {}",
"Response status: {}, cf-ray: {}",
resp.status(),
resp.headers()
.get("x-request-id")
.get("cf-ray")
.map(|v| v.to_str().unwrap_or_default())
.unwrap_or_default()
);
@@ -304,7 +305,7 @@ impl ModelClient {
// token.
let plan_type = error
.plan_type
.or_else(|| auth.and_then(|a| a.get_plan_type()));
.or_else(|| auth.as_ref().and_then(|a| a.get_plan_type()));
let resets_in_seconds = error.resets_in_seconds;
return Err(CodexErr::UsageLimitReached(UsageLimitReachedError {
plan_type,
@@ -1037,4 +1038,37 @@ mod tests {
let delay = try_parse_retry_after(&err);
assert_eq!(delay, Some(Duration::from_secs_f64(1.898)));
}
#[test]
fn error_response_deserializes_old_schema_known_plan_type_and_serializes_back() {
use crate::token_data::KnownPlan;
use crate::token_data::PlanType;
let json = r#"{"error":{"type":"usage_limit_reached","plan_type":"pro","resets_in_seconds":3600}}"#;
let resp: ErrorResponse =
serde_json::from_str(json).expect("should deserialize old schema");
assert!(matches!(
resp.error.plan_type,
Some(PlanType::Known(KnownPlan::Pro))
));
let plan_json = serde_json::to_string(&resp.error.plan_type).expect("serialize plan_type");
assert_eq!(plan_json, "\"pro\"");
}
#[test]
fn error_response_deserializes_old_schema_unknown_plan_type_and_serializes_back() {
use crate::token_data::PlanType;
let json =
r#"{"error":{"type":"usage_limit_reached","plan_type":"vip","resets_in_seconds":60}}"#;
let resp: ErrorResponse =
serde_json::from_str(json).expect("should deserialize old schema");
assert!(matches!(resp.error.plan_type, Some(PlanType::Unknown(ref s)) if s == "vip"));
let plan_json = serde_json::to_string(&resp.error.plan_type).expect("serialize plan_type");
assert_eq!(plan_json, "\"vip\"");
}
}

View File

@@ -9,9 +9,6 @@ use std::sync::atomic::AtomicU64;
use std::time::Duration;
use crate::AuthManager;
use crate::config_edit::CONFIG_KEY_EFFORT;
use crate::config_edit::CONFIG_KEY_MODEL;
use crate::config_edit::persist_non_null_overrides;
use crate::event_mapping::map_response_item_to_event_messages;
use async_channel::Receiver;
use async_channel::Sender;
@@ -19,11 +16,13 @@ use codex_apply_patch::ApplyPatchAction;
use codex_apply_patch::MaybeApplyPatchVerified;
use codex_apply_patch::maybe_parse_apply_patch_verified;
use codex_protocol::mcp_protocol::ConversationId;
use codex_protocol::protocol::CompactedItem;
use codex_protocol::protocol::ConversationPathResponseEvent;
use codex_protocol::protocol::RolloutItem;
use codex_protocol::protocol::TaskStartedEvent;
use codex_protocol::protocol::TurnAbortReason;
use codex_protocol::protocol::TurnAbortedEvent;
use codex_protocol::protocol::TurnContextItem;
use futures::prelude::*;
use mcp_types::CallToolResult;
use serde::Deserialize;
@@ -212,12 +211,7 @@ impl Codex {
let conversation_id = session.conversation_id;
// This task will run until Op::Shutdown is received.
tokio::spawn(submission_loop(
session.clone(),
turn_context,
config,
rx_sub,
));
tokio::spawn(submission_loop(session, turn_context, config, rx_sub));
let codex = Codex {
next_id: AtomicU64::new(0),
tx_sub,
@@ -468,7 +462,6 @@ impl Session {
tools_config: ToolsConfig::new(&ToolsConfigParams {
model_family: &config.model_family,
approval_policy,
sandbox_policy: sandbox_policy.clone(),
include_plan_tool: config.include_plan_tool,
include_apply_patch_tool: config.include_apply_patch_tool,
include_web_search_request: config.tools_web_search_request,
@@ -508,6 +501,7 @@ impl Session {
msg: EventMsg::SessionConfigured(SessionConfiguredEvent {
session_id: conversation_id,
model,
reasoning_effort: model_reasoning_effort,
history_log_id,
history_entry_count,
initial_messages,
@@ -691,12 +685,6 @@ impl Session {
if let Some(user_instructions) = turn_context.user_instructions.as_deref() {
items.push(UserInstructions::new(user_instructions.to_string()).into());
}
items.push(ResponseItem::from(EnvironmentContext::new(
Some(turn_context.cwd.clone()),
Some(turn_context.approval_policy),
Some(turn_context.sandbox_policy.clone()),
Some(self.user_shell.clone()),
)));
items
}
@@ -1072,7 +1060,7 @@ impl AgentTask {
id: self.sub_id,
msg: EventMsg::TurnAborted(TurnAbortedEvent { reason }),
};
let sess = self.sess.clone();
let sess = self.sess;
tokio::spawn(async move {
sess.send_event(event).await;
});
@@ -1148,7 +1136,6 @@ async fn submission_loop(
let tools_config = ToolsConfig::new(&ToolsConfigParams {
model_family: &effective_family,
approval_policy: new_approval_policy,
sandbox_policy: new_sandbox_policy.clone(),
include_plan_tool: config.include_plan_tool,
include_apply_patch_tool: config.include_apply_patch_tool,
include_web_search_request: config.tools_web_search_request,
@@ -1170,42 +1157,18 @@ async fn submission_loop(
// Install the new persistent context for subsequent tasks/turns.
turn_context = Arc::new(new_turn_context);
// Optionally persist changes to model / effort
let effort_str = effort.map(|_| effective_effort.to_string());
if let Err(e) = persist_non_null_overrides(
&config.codex_home,
config.active_profile.as_deref(),
&[
(&[CONFIG_KEY_MODEL], model.as_deref()),
(&[CONFIG_KEY_EFFORT], effort_str.as_deref()),
],
)
.await
{
warn!("failed to persist overrides: {e:#}");
}
if cwd.is_some() || approval_policy.is_some() || sandbox_policy.is_some() {
sess.record_conversation_items(&[ResponseItem::from(EnvironmentContext::new(
cwd,
approval_policy,
sandbox_policy,
// Shell is not configurable from turn to turn
None,
))])
.await;
}
}
Op::UserInput { items } => {
// attempt to inject input into current task
if let Err(items) = sess.inject_input(items) {
// no current task, spawn a new one
let task =
AgentTask::spawn(sess.clone(), Arc::clone(&turn_context), sub.id, items);
sess.set_task(task);
}
submit_user_input(
turn_context.cwd.clone(),
turn_context.approval_policy,
turn_context.sandbox_policy.clone(),
&sess,
&turn_context,
sub.id.clone(),
items,
)
.await;
}
Op::UserTurn {
items,
@@ -1250,7 +1213,6 @@ async fn submission_loop(
tools_config: ToolsConfig::new(&ToolsConfigParams {
model_family: &model_family,
approval_policy,
sandbox_policy: sandbox_policy.clone(),
include_plan_tool: config.include_plan_tool,
include_apply_patch_tool: config.include_apply_patch_tool,
include_web_search_request: config.tools_web_search_request,
@@ -1267,11 +1229,16 @@ async fn submission_loop(
shell_environment_policy: turn_context.shell_environment_policy.clone(),
cwd,
};
// TODO: record the new environment context in the conversation history
// no current task, spawn a new one with the per-turn context
let task =
AgentTask::spawn(sess.clone(), Arc::new(fresh_turn_context), sub.id, items);
sess.set_task(task);
submit_user_input(
fresh_turn_context.cwd.clone(),
fresh_turn_context.approval_policy,
fresh_turn_context.sandbox_policy.clone(),
&sess,
&Arc::new(fresh_turn_context),
sub.id.clone(),
items,
)
.await;
}
}
Op::ExecApproval { id, decision } => match decision {
@@ -1770,7 +1737,7 @@ async fn try_run_turn(
}
})
.map(|call_id| ResponseItem::CustomToolCallOutput {
call_id: call_id.clone(),
call_id,
output: "aborted".to_string(),
})
.collect::<Vec<_>>()
@@ -1786,6 +1753,15 @@ async fn try_run_turn(
})
};
let rollout_item = RolloutItem::TurnContext(TurnContextItem {
cwd: turn_context.cwd.clone(),
approval_policy: turn_context.approval_policy,
sandbox_policy: turn_context.sandbox_policy.clone(),
model: turn_context.client.get_model(),
effort: turn_context.client.get_reasoning_effort(),
summary: turn_context.client.get_reasoning_summary(),
});
sess.persist_rollout_items(&[rollout_item]).await;
let mut stream = turn_context.client.clone().stream(&prompt).await?;
let mut output = Vec::new();
@@ -1968,10 +1944,14 @@ async fn run_compact_task(
sess.remove_task(&sub_id);
{
let rollout_item = {
let mut state = sess.state.lock_unchecked();
state.history.keep_last_messages(1);
}
RolloutItem::Compacted(CompactedItem {
message: state.history.last_agent_message(),
})
};
sess.persist_rollout_items(&[rollout_item]).await;
let event = Event {
id: sub_id.clone(),
@@ -2708,6 +2688,20 @@ async fn handle_sandbox_error(
let sub_id = exec_command_context.sub_id.clone();
let cwd = exec_command_context.cwd.clone();
// if the command timed out, we can simply return this failure to the model
if matches!(error, SandboxErr::Timeout) {
return ResponseInputItem::FunctionCallOutput {
call_id,
output: FunctionCallOutputPayload {
content: format!(
"command timed out after {} milliseconds",
params.timeout_duration().as_millis()
),
success: Some(false),
},
};
}
// Early out if either the user never wants to be asked for approval, or
// we're letting the model manage escalation requests. Otherwise, continue
match turn_context.approval_policy {
@@ -2725,20 +2719,6 @@ async fn handle_sandbox_error(
AskForApproval::UnlessTrusted | AskForApproval::OnFailure => (),
}
// similarly, if the command timed out, we can simply return this failure to the model
if matches!(error, SandboxErr::Timeout) {
return ResponseInputItem::FunctionCallOutput {
call_id,
output: FunctionCallOutputPayload {
content: format!(
"command timed out after {} milliseconds",
params.timeout_duration().as_millis()
),
success: Some(false),
},
};
}
// Note that when `error` is `SandboxErr::Denied`, it could be a false
// positive. That is, it may have exited with a non-zero exit code, not
// because the sandbox denied it, but because that is its expected behavior,
@@ -2833,6 +2813,29 @@ async fn handle_sandbox_error(
}
}
async fn submit_user_input(
cwd: PathBuf,
approval_policy: AskForApproval,
sandbox_policy: SandboxPolicy,
sess: &Arc<Session>,
turn_context: &Arc<TurnContext>,
sub_id: String,
items: Vec<InputItem>,
) {
sess.record_conversation_items(&[ResponseItem::from(EnvironmentContext::new(
Some(cwd),
Some(approval_policy),
Some(sandbox_policy),
Some(sess.user_shell.clone()),
))])
.await;
if let Err(items) = sess.inject_input(items) {
// no current task, spawn a new one
let task = AgentTask::spawn(Arc::clone(sess), Arc::clone(turn_context), sub_id, items);
sess.set_task(task);
}
}
fn format_exec_output_str(exec_output: &ExecToolCallOutput) -> String {
let ExecToolCallOutput {
aggregated_output, ..
@@ -2997,6 +3000,15 @@ async fn drain_to_completed(
sub_id: &str,
prompt: &Prompt,
) -> CodexResult<()> {
let rollout_item = RolloutItem::TurnContext(TurnContextItem {
cwd: turn_context.cwd.clone(),
approval_policy: turn_context.approval_policy,
sandbox_policy: turn_context.sandbox_policy.clone(),
model: turn_context.client.get_model(),
effort: turn_context.client.get_reasoning_effort(),
summary: turn_context.client.get_reasoning_summary(),
});
sess.persist_rollout_items(&[rollout_item]).await;
let mut stream = turn_context.client.clone().stream(prompt).await?;
loop {
let maybe_event = stream.next().await;
@@ -3130,7 +3142,7 @@ mod tests {
exit_code: 0,
stdout: StreamOutput::new(String::new()),
stderr: StreamOutput::new(String::new()),
aggregated_output: StreamOutput::new(full.clone()),
aggregated_output: StreamOutput::new(full),
duration: StdDuration::from_secs(1),
};
@@ -3164,7 +3176,7 @@ mod tests {
fn model_truncation_respects_byte_budget() {
// Construct a large output (about 100kB) so byte budget dominates
let big_line = "x".repeat(100);
let full = std::iter::repeat_n(big_line.clone(), 1000)
let full = std::iter::repeat_n(big_line, 1000)
.collect::<Vec<_>>()
.join("\n");

View File

@@ -15,6 +15,7 @@ use crate::model_provider_info::built_in_model_providers;
use crate::openai_model_info::get_model_info;
use crate::protocol::AskForApproval;
use crate::protocol::SandboxPolicy;
use anyhow::Context;
use codex_protocol::config_types::ReasoningEffort;
use codex_protocol::config_types::ReasoningSummary;
use codex_protocol::config_types::SandboxMode;
@@ -31,6 +32,7 @@ use toml::Value as TomlValue;
use toml_edit::DocumentMut;
const OPENAI_DEFAULT_MODEL: &str = "gpt-5";
pub const GPT5_HIGH_MODEL: &str = "gpt-5-high";
/// Maximum number of bytes of the documentation that will be embedded. Larger
/// files are *silently truncated* to this size so we do not take up too much of
@@ -128,9 +130,6 @@ pub struct Config {
/// output will be hyperlinked using the specified URI scheme.
pub file_opener: UriBasedFileOpener,
/// Collection of settings that are specific to the TUI.
pub tui: Tui,
/// Path to the `codex-linux-sandbox` executable. This must be set if
/// [`crate::exec::SandboxType::LinuxSeccomp`] is used. Note that this
/// cannot be set in the config file: it must be set in code via
@@ -351,6 +350,107 @@ pub fn set_project_trusted(codex_home: &Path, project_path: &Path) -> anyhow::Re
Ok(())
}
fn ensure_profile_table<'a>(
doc: &'a mut DocumentMut,
profile_name: &str,
) -> anyhow::Result<&'a mut toml_edit::Table> {
let mut created_profiles_table = false;
{
let root = doc.as_table_mut();
let needs_table = !root.contains_key("profiles")
|| root
.get("profiles")
.and_then(|item| item.as_table())
.is_none();
if needs_table {
root.insert("profiles", toml_edit::table());
created_profiles_table = true;
}
}
let Some(profiles_table) = doc["profiles"].as_table_mut() else {
return Err(anyhow::anyhow!(
"profiles table missing after initialization"
));
};
if created_profiles_table {
profiles_table.set_implicit(true);
}
let needs_profile_table = !profiles_table.contains_key(profile_name)
|| profiles_table
.get(profile_name)
.and_then(|item| item.as_table())
.is_none();
if needs_profile_table {
profiles_table.insert(profile_name, toml_edit::table());
}
let Some(profile_table) = profiles_table
.get_mut(profile_name)
.and_then(|item| item.as_table_mut())
else {
return Err(anyhow::anyhow!(format!(
"profile table missing for {profile_name}"
)));
};
profile_table.set_implicit(false);
Ok(profile_table)
}
// TODO(jif) refactor config persistence.
pub async fn persist_model_selection(
codex_home: &Path,
active_profile: Option<&str>,
model: &str,
effort: Option<ReasoningEffort>,
) -> anyhow::Result<()> {
let config_path = codex_home.join(CONFIG_TOML_FILE);
let serialized = match tokio::fs::read_to_string(&config_path).await {
Ok(contents) => contents,
Err(err) if err.kind() == std::io::ErrorKind::NotFound => String::new(),
Err(err) => return Err(err.into()),
};
let mut doc = if serialized.is_empty() {
DocumentMut::new()
} else {
serialized.parse::<DocumentMut>()?
};
if let Some(profile_name) = active_profile {
let profile_table = ensure_profile_table(&mut doc, profile_name)?;
profile_table["model"] = toml_edit::value(model);
if let Some(effort) = effort {
profile_table["model_reasoning_effort"] = toml_edit::value(effort.to_string());
}
} else {
let table = doc.as_table_mut();
table["model"] = toml_edit::value(model);
if let Some(effort) = effort {
table["model_reasoning_effort"] = toml_edit::value(effort.to_string());
}
}
// TODO(jif) refactor the home creation
tokio::fs::create_dir_all(codex_home)
.await
.with_context(|| {
format!(
"failed to create Codex home directory at {}",
codex_home.display()
)
})?;
tokio::fs::write(&config_path, doc.to_string())
.await
.with_context(|| format!("failed to persist config.toml at {}", config_path.display()))?;
Ok(())
}
/// Apply a single dotted-path override onto a TOML value.
fn apply_toml_override(root: &mut TomlValue, path: &str, value: TomlValue) {
use toml::value::Table;
@@ -395,7 +495,7 @@ fn apply_toml_override(root: &mut TomlValue, path: &str, value: TomlValue) {
}
/// Base config deserialized from ~/.codex/config.toml.
#[derive(Deserialize, Debug, Clone, Default)]
#[derive(Deserialize, Debug, Clone, Default, PartialEq)]
pub struct ConfigToml {
/// Optional override of model selection.
pub model: Option<String>,
@@ -527,7 +627,7 @@ pub struct ProjectConfig {
pub trust_level: Option<String>,
}
#[derive(Deserialize, Debug, Clone, Default)]
#[derive(Deserialize, Debug, Clone, Default, PartialEq)]
pub struct ToolsToml {
#[serde(default, alias = "web_search_request")]
pub web_search: Option<bool>,
@@ -804,7 +904,6 @@ impl Config {
codex_home,
history,
file_opener: cfg.file_opener.unwrap_or(UriBasedFileOpener::VsCode),
tui: cfg.tui.unwrap_or_default(),
codex_linux_sandbox_exe,
hide_agent_reasoning: cfg.hide_agent_reasoning.unwrap_or(false),
@@ -948,6 +1047,7 @@ mod tests {
use super::*;
use pretty_assertions::assert_eq;
use tempfile::TempDir;
#[test]
@@ -1038,6 +1138,145 @@ exclude_slash_tmp = true
);
}
#[tokio::test]
async fn persist_model_selection_updates_defaults() -> anyhow::Result<()> {
let codex_home = TempDir::new()?;
persist_model_selection(
codex_home.path(),
None,
"gpt-5-high-new",
Some(ReasoningEffort::High),
)
.await?;
let serialized =
tokio::fs::read_to_string(codex_home.path().join(CONFIG_TOML_FILE)).await?;
let parsed: ConfigToml = toml::from_str(&serialized)?;
assert_eq!(parsed.model.as_deref(), Some("gpt-5-high-new"));
assert_eq!(parsed.model_reasoning_effort, Some(ReasoningEffort::High));
Ok(())
}
#[tokio::test]
async fn persist_model_selection_overwrites_existing_model() -> anyhow::Result<()> {
let codex_home = TempDir::new()?;
let config_path = codex_home.path().join(CONFIG_TOML_FILE);
tokio::fs::write(
&config_path,
r#"
model = "gpt-5"
model_reasoning_effort = "medium"
[profiles.dev]
model = "gpt-4.1"
"#,
)
.await?;
persist_model_selection(
codex_home.path(),
None,
"o4-mini",
Some(ReasoningEffort::High),
)
.await?;
let serialized = tokio::fs::read_to_string(config_path).await?;
let parsed: ConfigToml = toml::from_str(&serialized)?;
assert_eq!(parsed.model.as_deref(), Some("o4-mini"));
assert_eq!(parsed.model_reasoning_effort, Some(ReasoningEffort::High));
assert_eq!(
parsed
.profiles
.get("dev")
.and_then(|profile| profile.model.as_deref()),
Some("gpt-4.1"),
);
Ok(())
}
#[tokio::test]
async fn persist_model_selection_updates_profile() -> anyhow::Result<()> {
let codex_home = TempDir::new()?;
persist_model_selection(
codex_home.path(),
Some("dev"),
"gpt-5-high-new",
Some(ReasoningEffort::Low),
)
.await?;
let serialized =
tokio::fs::read_to_string(codex_home.path().join(CONFIG_TOML_FILE)).await?;
let parsed: ConfigToml = toml::from_str(&serialized)?;
let profile = parsed
.profiles
.get("dev")
.expect("profile should be created");
assert_eq!(profile.model.as_deref(), Some("gpt-5-high-new"));
assert_eq!(profile.model_reasoning_effort, Some(ReasoningEffort::Low));
Ok(())
}
#[tokio::test]
async fn persist_model_selection_updates_existing_profile() -> anyhow::Result<()> {
let codex_home = TempDir::new()?;
let config_path = codex_home.path().join(CONFIG_TOML_FILE);
tokio::fs::write(
&config_path,
r#"
[profiles.dev]
model = "gpt-4"
model_reasoning_effort = "medium"
[profiles.prod]
model = "gpt-5"
"#,
)
.await?;
persist_model_selection(
codex_home.path(),
Some("dev"),
"o4-high",
Some(ReasoningEffort::Medium),
)
.await?;
let serialized = tokio::fs::read_to_string(config_path).await?;
let parsed: ConfigToml = toml::from_str(&serialized)?;
let dev_profile = parsed
.profiles
.get("dev")
.expect("dev profile should survive updates");
assert_eq!(dev_profile.model.as_deref(), Some("o4-high"));
assert_eq!(
dev_profile.model_reasoning_effort,
Some(ReasoningEffort::Medium)
);
assert_eq!(
parsed
.profiles
.get("prod")
.and_then(|profile| profile.model.as_deref()),
Some("gpt-5"),
);
Ok(())
}
struct PrecedenceTestFixture {
cwd: TempDir,
codex_home: TempDir,
@@ -1196,7 +1435,6 @@ model_verbosity = "high"
codex_home: fixture.codex_home(),
history: History::default(),
file_opener: UriBasedFileOpener::VsCode,
tui: Tui::default(),
codex_linux_sandbox_exe: None,
hide_agent_reasoning: false,
show_raw_agent_reasoning: false,
@@ -1253,7 +1491,6 @@ model_verbosity = "high"
codex_home: fixture.codex_home(),
history: History::default(),
file_opener: UriBasedFileOpener::VsCode,
tui: Tui::default(),
codex_linux_sandbox_exe: None,
hide_agent_reasoning: false,
show_raw_agent_reasoning: false,
@@ -1325,7 +1562,6 @@ model_verbosity = "high"
codex_home: fixture.codex_home(),
history: History::default(),
file_opener: UriBasedFileOpener::VsCode,
tui: Tui::default(),
codex_linux_sandbox_exe: None,
hide_agent_reasoning: false,
show_raw_agent_reasoning: false,
@@ -1383,7 +1619,6 @@ model_verbosity = "high"
codex_home: fixture.codex_home(),
history: History::default(),
file_opener: UriBasedFileOpener::VsCode,
tui: Tui::default(),
codex_linux_sandbox_exe: None,
hide_agent_reasoning: false,
show_raw_agent_reasoning: false,

View File

@@ -1,3 +1,4 @@
use codex_protocol::models::ContentItem;
use codex_protocol::models::ResponseItem;
/// Transcript of conversation history
@@ -59,6 +60,26 @@ impl ConversationHistory {
kept.reverse();
self.items = kept;
}
pub(crate) fn last_agent_message(&self) -> String {
for item in self.items.iter().rev() {
if let ResponseItem::Message { role, content, .. } = item
&& role == "assistant"
{
return content
.iter()
.find_map(|ci| {
if let ContentItem::OutputText { text } = ci {
Some(text.clone())
} else {
None
}
})
.unwrap_or_default();
}
}
String::new()
}
}
/// Anything that is not a system message or "reasoning" message is considered

View File

@@ -63,7 +63,7 @@ impl EnvironmentContext {
if writable_roots.is_empty() {
None
} else {
Some(writable_roots.clone())
Some(writable_roots)
}
}
_ => None,

View File

@@ -1,3 +1,5 @@
use crate::token_data::KnownPlan;
use crate::token_data::PlanType;
use codex_protocol::mcp_protocol::ConversationId;
use reqwest::StatusCode;
use serde_json;
@@ -127,38 +129,58 @@ pub enum CodexErr {
#[derive(Debug)]
pub struct UsageLimitReachedError {
pub plan_type: Option<String>,
pub resets_in_seconds: Option<u64>,
pub(crate) plan_type: Option<PlanType>,
pub(crate) resets_in_seconds: Option<u64>,
}
impl std::fmt::Display for UsageLimitReachedError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
// Base message differs slightly for legacy ChatGPT Plus plan users.
if let Some(plan_type) = &self.plan_type
&& plan_type == "plus"
{
write!(
f,
"You've hit your usage limit. Upgrade to Pro (https://openai.com/chatgpt/pricing) or try again"
)?;
if let Some(secs) = self.resets_in_seconds {
let reset_duration = format_reset_duration(secs);
write!(f, " in {reset_duration}.")?;
} else {
write!(f, " later.")?;
let message = match self.plan_type.as_ref() {
Some(PlanType::Known(KnownPlan::Plus)) => format!(
"You've hit your usage limit. Upgrade to Pro (https://openai.com/chatgpt/pricing){}",
retry_suffix_after_or(self.resets_in_seconds)
),
Some(PlanType::Known(KnownPlan::Team)) | Some(PlanType::Known(KnownPlan::Business)) => {
format!(
"You've hit your usage limit. To get more access now, send a request to your admin{}",
retry_suffix_after_or(self.resets_in_seconds)
)
}
} else {
write!(f, "You've hit your usage limit.")?;
if let Some(secs) = self.resets_in_seconds {
let reset_duration = format_reset_duration(secs);
write!(f, " Try again in {reset_duration}.")?;
} else {
write!(f, " Try again later.")?;
Some(PlanType::Known(KnownPlan::Free)) => {
"To use Codex with your ChatGPT plan, upgrade to Plus: https://openai.com/chatgpt/pricing."
.to_string()
}
}
Some(PlanType::Known(KnownPlan::Pro))
| Some(PlanType::Known(KnownPlan::Enterprise))
| Some(PlanType::Known(KnownPlan::Edu)) => format!(
"You've hit your usage limit.{}",
retry_suffix(self.resets_in_seconds)
),
Some(PlanType::Unknown(_)) | None => format!(
"You've hit your usage limit.{}",
retry_suffix(self.resets_in_seconds)
),
};
Ok(())
write!(f, "{message}")
}
}
fn retry_suffix(resets_in_seconds: Option<u64>) -> String {
if let Some(secs) = resets_in_seconds {
let reset_duration = format_reset_duration(secs);
format!(" Try again in {reset_duration}.")
} else {
" Try again later.".to_string()
}
}
fn retry_suffix_after_or(resets_in_seconds: Option<u64>) -> String {
if let Some(secs) = resets_in_seconds {
let reset_duration = format_reset_duration(secs);
format!(" or try again in {reset_duration}.")
} else {
" or try again later.".to_string()
}
}
@@ -237,7 +259,7 @@ mod tests {
#[test]
fn usage_limit_reached_error_formats_plus_plan() {
let err = UsageLimitReachedError {
plan_type: Some("plus".to_string()),
plan_type: Some(PlanType::Known(KnownPlan::Plus)),
resets_in_seconds: None,
};
assert_eq!(
@@ -246,6 +268,18 @@ mod tests {
);
}
#[test]
fn usage_limit_reached_error_formats_free_plan() {
let err = UsageLimitReachedError {
plan_type: Some(PlanType::Known(KnownPlan::Free)),
resets_in_seconds: Some(3600),
};
assert_eq!(
err.to_string(),
"To use Codex with your ChatGPT plan, upgrade to Plus: https://openai.com/chatgpt/pricing."
);
}
#[test]
fn usage_limit_reached_error_formats_default_when_none() {
let err = UsageLimitReachedError {
@@ -258,10 +292,34 @@ mod tests {
);
}
#[test]
fn usage_limit_reached_error_formats_team_plan() {
let err = UsageLimitReachedError {
plan_type: Some(PlanType::Known(KnownPlan::Team)),
resets_in_seconds: Some(3600),
};
assert_eq!(
err.to_string(),
"You've hit your usage limit. To get more access now, send a request to your admin or try again in 1 hour."
);
}
#[test]
fn usage_limit_reached_error_formats_business_plan_without_reset() {
let err = UsageLimitReachedError {
plan_type: Some(PlanType::Known(KnownPlan::Business)),
resets_in_seconds: None,
};
assert_eq!(
err.to_string(),
"You've hit your usage limit. To get more access now, send a request to your admin or try again later."
);
}
#[test]
fn usage_limit_reached_error_formats_default_for_other_plans() {
let err = UsageLimitReachedError {
plan_type: Some("pro".to_string()),
plan_type: Some(PlanType::Known(KnownPlan::Pro)),
resets_in_seconds: None,
};
assert_eq!(
@@ -285,7 +343,7 @@ mod tests {
#[test]
fn usage_limit_reached_includes_hours_and_minutes() {
let err = UsageLimitReachedError {
plan_type: Some("plus".to_string()),
plan_type: Some(PlanType::Known(KnownPlan::Plus)),
resets_in_seconds: Some(3 * 3600 + 32 * 60),
};
assert_eq!(

View File

@@ -159,7 +159,7 @@ mod tests {
EventMsg::UserMessage(user) => {
assert_eq!(user.message, "Hello world");
assert!(matches!(user.kind, Some(InputMessageKind::Plain)));
assert_eq!(user.images, Some(vec![img1.clone(), img2.clone()]));
assert_eq!(user.images, Some(vec![img1, img2]));
}
other => panic!("expected UserMessage, got {other:?}"),
}

View File

@@ -802,7 +802,7 @@ mod tests {
async fn resolve_root_git_project_for_trust_regular_repo_returns_repo_root() {
let temp_dir = TempDir::new().expect("Failed to create temp dir");
let repo_path = create_test_git_repo(&temp_dir).await;
let expected = std::fs::canonicalize(&repo_path).unwrap().to_path_buf();
let expected = std::fs::canonicalize(&repo_path).unwrap();
assert_eq!(
resolve_root_git_project_for_trust(&repo_path),
@@ -810,10 +810,7 @@ mod tests {
);
let nested = repo_path.join("sub/dir");
std::fs::create_dir_all(&nested).unwrap();
assert_eq!(
resolve_root_git_project_for_trust(&nested),
Some(expected.clone())
);
assert_eq!(resolve_root_git_project_for_trust(&nested), Some(expected));
}
#[tokio::test]

View File

@@ -0,0 +1,68 @@
use anyhow::Context;
use serde::Deserialize;
use serde::Serialize;
use std::path::Path;
use std::path::PathBuf;
pub(crate) const INTERNAL_STORAGE_FILE: &str = "internal_storage.json";
#[derive(Debug, Default, Clone, Serialize, Deserialize)]
pub struct InternalStorage {
#[serde(skip)]
storage_path: PathBuf,
#[serde(default)]
pub gpt_5_high_model_prompt_seen: bool,
}
// TODO(jif) generalise all the file writers and build proper async channel inserters.
impl InternalStorage {
pub fn load(codex_home: &Path) -> Self {
let storage_path = codex_home.join(INTERNAL_STORAGE_FILE);
match std::fs::read_to_string(&storage_path) {
Ok(serialized) => match serde_json::from_str::<Self>(&serialized) {
Ok(mut storage) => {
storage.storage_path = storage_path;
storage
}
Err(error) => {
tracing::warn!("failed to parse internal storage: {error:?}");
Self::empty(storage_path)
}
},
Err(error) => {
tracing::warn!("failed to read internal storage: {error:?}");
Self::empty(storage_path)
}
}
}
fn empty(storage_path: PathBuf) -> Self {
Self {
storage_path,
..Default::default()
}
}
pub async fn persist(&self) -> anyhow::Result<()> {
let serialized = serde_json::to_string_pretty(self)?;
if let Some(parent) = self.storage_path.parent() {
tokio::fs::create_dir_all(parent).await.with_context(|| {
format!(
"failed to create internal storage directory at {}",
parent.display()
)
})?;
}
tokio::fs::write(&self.storage_path, serialized)
.await
.with_context(|| {
format!(
"failed to persist internal storage at {}",
self.storage_path.display()
)
})
}
}
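
A hypothetical usage sketch for this new module, assuming a tokio runtime, the `anyhow` crate, and a resolved `codex_home` path; the flow (load, flip the flag, persist) is inferred from the API above, not taken from the diff.

```rust
use std::path::Path;

use codex_core::internal_storage::InternalStorage;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // Assumed location; in Codex this would be the resolved CODEX_HOME.
    let codex_home = Path::new("/tmp/codex-home");
    let mut storage = InternalStorage::load(codex_home);
    if !storage.gpt_5_high_model_prompt_seen {
        // ... show the one-time gpt-5-high prompt here (hypothetical) ...
        storage.gpt_5_high_model_prompt_seen = true;
        // Writes internal_storage.json under codex_home.
        storage.persist().await?;
    }
    Ok(())
}
```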

View File

@@ -28,6 +28,7 @@ mod exec_command;
pub mod exec_env;
mod flags;
pub mod git_info;
pub mod internal_storage;
mod is_safe_command;
pub mod landlock;
mod mcp_connection_manager;
@@ -74,6 +75,7 @@ pub use rollout::list::ConversationsPage;
pub use rollout::list::Cursor;
mod user_notification;
pub mod util;
pub use apply_patch::CODEX_APPLY_PATCH_ARG1;
pub use safety::get_platform_sandbox;
// Re-export the protocol types from the standalone `codex-protocol` crate so existing

View File

@@ -78,7 +78,7 @@ pub(crate) fn get_model_info(model_family: &ModelFamily) -> Option<ModelInfo> {
max_output_tokens: 4_096,
}),
"gpt-5" => Some(ModelInfo {
_ if slug.starts_with("gpt-5") => Some(ModelInfo {
context_window: 272_000,
max_output_tokens: 128_000,
}),

View File

@@ -8,7 +8,6 @@ use std::collections::HashMap;
use crate::model_family::ModelFamily;
use crate::plan_tool::PLAN_TOOL;
use crate::protocol::AskForApproval;
use crate::protocol::SandboxPolicy;
use crate::tool_apply_patch::ApplyPatchToolType;
use crate::tool_apply_patch::create_apply_patch_freeform_tool;
use crate::tool_apply_patch::create_apply_patch_json_tool;
@@ -58,7 +57,7 @@ pub(crate) enum OpenAiTool {
#[derive(Debug, Clone)]
pub enum ConfigShellToolType {
DefaultShell,
ShellWithRequest { sandbox_policy: SandboxPolicy },
ShellWithRequest,
LocalShell,
StreamableShell,
}
@@ -76,7 +75,6 @@ pub(crate) struct ToolsConfig {
pub(crate) struct ToolsConfigParams<'a> {
pub(crate) model_family: &'a ModelFamily,
pub(crate) approval_policy: AskForApproval,
pub(crate) sandbox_policy: SandboxPolicy,
pub(crate) include_plan_tool: bool,
pub(crate) include_apply_patch_tool: bool,
pub(crate) include_web_search_request: bool,
@@ -90,7 +88,6 @@ impl ToolsConfig {
let ToolsConfigParams {
model_family,
approval_policy,
sandbox_policy,
include_plan_tool,
include_apply_patch_tool,
include_web_search_request,
@@ -106,9 +103,7 @@ impl ToolsConfig {
ConfigShellToolType::DefaultShell
};
if matches!(approval_policy, AskForApproval::OnRequest) && !use_streamable_shell_tool {
shell_type = ConfigShellToolType::ShellWithRequest {
sandbox_policy: sandbox_policy.clone(),
}
shell_type = ConfigShellToolType::ShellWithRequest;
}
let apply_patch_tool_type = match model_family.apply_patch_tool_type {
@@ -251,7 +246,9 @@ fn create_unified_exec_tool() -> OpenAiTool {
})
}
fn create_shell_tool_for_sandbox(sandbox_policy: &SandboxPolicy) -> OpenAiTool {
const SHELL_TOOL_DESCRIPTION: &str = r#"Runs a shell command and returns its output"#;
fn create_shell_tool_for_request() -> OpenAiTool {
let mut properties = BTreeMap::new();
properties.insert(
"command".to_string(),
@@ -263,79 +260,29 @@ fn create_shell_tool_for_sandbox(sandbox_policy: &SandboxPolicy) -> OpenAiTool {
properties.insert(
"workdir".to_string(),
JsonSchema::String {
description: Some("The working directory to execute the command in".to_string()),
description: Some("Working directory to execute the command in.".to_string()),
},
);
properties.insert(
"timeout_ms".to_string(),
JsonSchema::Number {
description: Some("The timeout for the command in milliseconds".to_string()),
description: Some("Timeout for the command in milliseconds.".to_string()),
},
);
if matches!(sandbox_policy, SandboxPolicy::WorkspaceWrite { .. }) {
properties.insert(
properties.insert(
"with_escalated_permissions".to_string(),
JsonSchema::Boolean {
description: Some("Whether to request escalated permissions. Set to true if command needs to be run without sandbox restrictions".to_string()),
description: Some("Request escalated permissions, only for when a command would otherwise be blocked by the sandbox.".to_string()),
},
);
properties.insert(
properties.insert(
"justification".to_string(),
JsonSchema::String {
description: Some("Only set if with_escalated_permissions is true. 1-sentence explanation of why we want to run this command.".to_string()),
description: Some("Required if and only if with_escalated_permissions == true. One sentence explaining why escalation is needed (e.g., write outside CWD, network fetch, git commit).".to_string()),
},
);
}
let description = match sandbox_policy {
SandboxPolicy::WorkspaceWrite {
network_access,
..
} => {
let network_line = if !network_access {
"\n - Commands that require network access"
} else {
""
};
format!(
r#"
The shell tool is used to execute shell commands.
- When invoking the shell tool, your call will be running in a sandbox, and some shell commands will require escalated privileges:
- Types of actions that require escalated privileges:
- Writing files other than those in the writable roots (see the environment context for the allowed directories){network_line}
- Examples of commands that require escalated privileges:
- git commit
- npm install or pnpm install
- cargo build
- cargo test
- When invoking a command that will require escalated privileges:
- Provide the with_escalated_permissions parameter with the boolean value true
- Include a short, 1 sentence explanation for why we need to run with_escalated_permissions in the justification parameter."#,
)
}
SandboxPolicy::DangerFullAccess => {
"Runs a shell command and returns its output.".to_string()
}
SandboxPolicy::ReadOnly => {
r#"
The shell tool is used to execute shell commands.
- When invoking the shell tool, your call will be running in a sandbox, and some shell commands (including apply_patch) will require escalated permissions:
- Types of actions that require escalated privileges:
- Writing files
- Applying patches
- Examples of commands that require escalated privileges:
- apply_patch
- git commit
- npm install or pnpm install
- cargo build
- cargo test
- When invoking a command that will require escalated privileges:
- Provide the with_escalated_permissions parameter with the boolean value true
- Include a short, 1 sentence explanation for why we need to run with_escalated_permissions in the justification parameter"#.to_string()
}
};
let description = SHELL_TOOL_DESCRIPTION.to_string();
OpenAiTool::Function(ResponsesApiTool {
name: "shell".to_string(),
@@ -348,7 +295,6 @@ The shell tool is used to execute shell commands.
},
})
}
fn create_view_image_tool() -> OpenAiTool {
// Support only local filesystem path.
let mut properties = BTreeMap::new();
@@ -589,8 +535,8 @@ pub(crate) fn get_openai_tools(
ConfigShellToolType::DefaultShell => {
tools.push(create_shell_tool());
}
ConfigShellToolType::ShellWithRequest { sandbox_policy } => {
tools.push(create_shell_tool_for_sandbox(sandbox_policy));
ConfigShellToolType::ShellWithRequest => {
tools.push(create_shell_tool_for_request());
}
ConfigShellToolType::LocalShell => {
tools.push(OpenAiTool::LocalShell {});
@@ -686,7 +632,6 @@ mod tests {
let config = ToolsConfig::new(&ToolsConfigParams {
model_family: &model_family,
approval_policy: AskForApproval::Never,
sandbox_policy: SandboxPolicy::ReadOnly,
include_plan_tool: true,
include_apply_patch_tool: false,
include_web_search_request: true,
@@ -708,7 +653,6 @@ mod tests {
let config = ToolsConfig::new(&ToolsConfigParams {
model_family: &model_family,
approval_policy: AskForApproval::Never,
sandbox_policy: SandboxPolicy::ReadOnly,
include_plan_tool: true,
include_apply_patch_tool: false,
include_web_search_request: true,
@@ -730,7 +674,6 @@ mod tests {
let config = ToolsConfig::new(&ToolsConfigParams {
model_family: &model_family,
approval_policy: AskForApproval::Never,
sandbox_policy: SandboxPolicy::ReadOnly,
include_plan_tool: false,
include_apply_patch_tool: false,
include_web_search_request: true,
@@ -836,7 +779,6 @@ mod tests {
let config = ToolsConfig::new(&ToolsConfigParams {
model_family: &model_family,
approval_policy: AskForApproval::Never,
sandbox_policy: SandboxPolicy::ReadOnly,
include_plan_tool: false,
include_apply_patch_tool: false,
include_web_search_request: false,
@@ -914,7 +856,6 @@ mod tests {
let config = ToolsConfig::new(&ToolsConfigParams {
model_family: &model_family,
approval_policy: AskForApproval::Never,
sandbox_policy: SandboxPolicy::ReadOnly,
include_plan_tool: false,
include_apply_patch_tool: false,
include_web_search_request: true,
@@ -977,7 +918,6 @@ mod tests {
let config = ToolsConfig::new(&ToolsConfigParams {
model_family: &model_family,
approval_policy: AskForApproval::Never,
sandbox_policy: SandboxPolicy::ReadOnly,
include_plan_tool: false,
include_apply_patch_tool: false,
include_web_search_request: true,
@@ -1035,7 +975,6 @@ mod tests {
let config = ToolsConfig::new(&ToolsConfigParams {
model_family: &model_family,
approval_policy: AskForApproval::Never,
sandbox_policy: SandboxPolicy::ReadOnly,
include_plan_tool: false,
include_apply_patch_tool: false,
include_web_search_request: true,
@@ -1096,7 +1035,6 @@ mod tests {
let config = ToolsConfig::new(&ToolsConfigParams {
model_family: &model_family,
approval_policy: AskForApproval::Never,
sandbox_policy: SandboxPolicy::ReadOnly,
include_plan_tool: false,
include_apply_patch_tool: false,
include_web_search_request: true,
@@ -1150,13 +1088,7 @@ mod tests {
#[test]
fn test_shell_tool_for_sandbox_workspace_write() {
let sandbox_policy = SandboxPolicy::WorkspaceWrite {
writable_roots: vec!["workspace".into()],
network_access: false,
exclude_tmpdir_env_var: false,
exclude_slash_tmp: false,
};
let tool = super::create_shell_tool_for_sandbox(&sandbox_policy);
let tool = super::create_shell_tool_for_request();
let OpenAiTool::Function(ResponsesApiTool {
description, name, ..
}) = &tool
@@ -1165,26 +1097,13 @@ mod tests {
};
assert_eq!(name, "shell");
let expected = r#"
The shell tool is used to execute shell commands.
- When invoking the shell tool, your call will be running in a sandbox, and some shell commands will require escalated privileges:
- Types of actions that require escalated privileges:
- Writing files other than those in the writable roots (see the environment context for the allowed directories)
- Commands that require network access
- Examples of commands that require escalated privileges:
- git commit
- npm install or pnpm install
- cargo build
- cargo test
- When invoking a command that will require escalated privileges:
- Provide the with_escalated_permissions parameter with the boolean value true
- Include a short, 1 sentence explanation for why we need to run with_escalated_permissions in the justification parameter."#;
let expected = super::SHELL_TOOL_DESCRIPTION;
assert_eq!(description, expected);
}
#[test]
fn test_shell_tool_for_sandbox_readonly() {
let tool = super::create_shell_tool_for_sandbox(&SandboxPolicy::ReadOnly);
let tool = super::create_shell_tool_for_request();
let OpenAiTool::Function(ResponsesApiTool {
description, name, ..
}) = &tool
@@ -1193,27 +1112,13 @@ The shell tool is used to execute shell commands.
};
assert_eq!(name, "shell");
let expected = r#"
The shell tool is used to execute shell commands.
- When invoking the shell tool, your call will be running in a sandbox, and some shell commands (including apply_patch) will require escalated permissions:
- Types of actions that require escalated privileges:
- Writing files
- Applying patches
- Examples of commands that require escalated privileges:
- apply_patch
- git commit
- npm install or pnpm install
- cargo build
- cargo test
- When invoking a command that will require escalated privileges:
- Provide the with_escalated_permissions parameter with the boolean value true
- Include a short, 1 sentence explanation for why we need to run with_escalated_permissions in the justification parameter"#;
let expected = super::SHELL_TOOL_DESCRIPTION;
assert_eq!(description, expected);
}
#[test]
fn test_shell_tool_for_sandbox_danger_full_access() {
let tool = super::create_shell_tool_for_sandbox(&SandboxPolicy::DangerFullAccess);
let tool = super::create_shell_tool_for_request();
let OpenAiTool::Function(ResponsesApiTool {
description, name, ..
}) = &tool
@@ -1222,6 +1127,7 @@ The shell tool is used to execute shell commands.
};
assert_eq!(name, "shell");
assert_eq!(description, "Runs a shell command and returns its output.");
let expected = super::SHELL_TOOL_DESCRIPTION;
assert_eq!(description, expected);
}
}

View File

@@ -868,7 +868,7 @@ pub fn parse_command_impl(command: &[String]) -> Vec<ParsedCommand> {
let parts = if contains_connectors(&normalized) {
split_on_connectors(&normalized)
} else {
vec![normalized.clone()]
vec![normalized]
};
// Preserve left-to-right execution order for all commands, including bash -c/-lc
@@ -1201,10 +1201,7 @@ fn parse_bash_lc_commands(original: &[String]) -> Option<Vec<ParsedCommand>> {
name,
}
} else {
ParsedCommand::Read {
cmd: cmd.clone(),
name,
}
ParsedCommand::Read { cmd, name }
}
} else {
ParsedCommand::Read {
@@ -1215,10 +1212,7 @@ fn parse_bash_lc_commands(original: &[String]) -> Option<Vec<ParsedCommand>> {
}
ParsedCommand::ListFiles { path, cmd, .. } => {
if had_connectors {
ParsedCommand::ListFiles {
cmd: cmd.clone(),
path,
}
ParsedCommand::ListFiles { cmd, path }
} else {
ParsedCommand::ListFiles {
cmd: shlex_join(&script_tokens),
@@ -1230,11 +1224,7 @@ fn parse_bash_lc_commands(original: &[String]) -> Option<Vec<ParsedCommand>> {
query, path, cmd, ..
} => {
if had_connectors {
ParsedCommand::Search {
cmd: cmd.clone(),
query,
path,
}
ParsedCommand::Search { cmd, query, path }
} else {
ParsedCommand::Search {
cmd: shlex_join(&script_tokens),

View File

@@ -26,7 +26,7 @@ const PROJECT_DOC_SEPARATOR: &str = "\n\n--- project-doc ---\n\n";
/// Combines `Config::instructions` and `AGENTS.md` (if present) into a single
/// string of instructions.
pub(crate) async fn get_user_instructions(config: &Config) -> Option<String> {
pub async fn get_user_instructions(config: &Config) -> Option<String> {
match read_project_docs(config).await {
Ok(Some(project_doc)) => match &config.user_instructions {
Some(original_instructions) => Some(format!(
@@ -115,7 +115,7 @@ pub fn discover_project_doc_paths(config: &Config) -> std::io::Result<Vec<PathBu
// Build chain from cwd upwards and detect git root.
let mut chain: Vec<PathBuf> = vec![dir.clone()];
let mut git_root: Option<PathBuf> = None;
let mut cursor = dir.clone();
let mut cursor = dir;
while let Some(parent) = cursor.parent() {
let git_marker = cursor.join(".git");
let git_exists = match std::fs::metadata(&git_marker) {

View File

@@ -318,6 +318,12 @@ async fn read_head_and_flags(
head.push(val);
}
}
RolloutItem::TurnContext(_) => {
// Not included in `head`; skip.
}
RolloutItem::Compacted(_) => {
// Not included in `head`; skip.
}
RolloutItem::EventMsg(ev) => {
if matches!(ev, EventMsg::UserMessage(_)) {
saw_user_event = true;

View File

@@ -8,8 +8,10 @@ pub(crate) fn is_persisted_response_item(item: &RolloutItem) -> bool {
match item {
RolloutItem::ResponseItem(item) => should_persist_response_item(item),
RolloutItem::EventMsg(ev) => should_persist_event_msg(ev),
// Always persist session meta
RolloutItem::SessionMeta(_) => true,
// Persist Codex executive markers so we can analyze flows (e.g., compaction, API turns).
RolloutItem::Compacted(_) | RolloutItem::TurnContext(_) | RolloutItem::SessionMeta(_) => {
true
}
}
}

View File

@@ -238,6 +238,12 @@ impl RolloutRecorder {
RolloutItem::ResponseItem(item) => {
items.push(RolloutItem::ResponseItem(item));
}
RolloutItem::Compacted(item) => {
items.push(RolloutItem::Compacted(item));
}
RolloutItem::TurnContext(item) => {
items.push(RolloutItem::TurnContext(item));
}
RolloutItem::EventMsg(ev) => {
items.push(RolloutItem::EventMsg(ev));
}

View File

@@ -305,7 +305,7 @@ async fn test_pagination_cursor() {
path: p1,
head: head_1,
}],
next_cursor: Some(expected_cursor3.clone()),
next_cursor: Some(expected_cursor3),
num_scanned_files: 5, // scanned 05, 04 (anchor), 03, 02 (anchor), 01
reached_scan_cap: false,
};
@@ -344,7 +344,7 @@ async fn test_get_conversation_contents() {
let expected_cursor: Cursor = serde_json::from_str(&format!("\"{ts}|{uuid}\"")).unwrap();
let expected_page = ConversationsPage {
items: vec![ConversationItem {
path: expected_path.clone(),
path: expected_path,
head: expected_head,
}],
next_cursor: Some(expected_cursor),
@@ -437,7 +437,7 @@ async fn test_stable_ordering_same_second_pagination() {
path: p1,
head: head(u1),
}],
next_cursor: Some(expected_cursor2.clone()),
next_cursor: Some(expected_cursor2),
num_scanned_files: 3, // scanned u3, u2 (anchor), u1
reached_scan_cap: false,
};

View File

@@ -293,7 +293,7 @@ mod tests {
// With the parent dir explicitly added as a writable root, the
// outside write should be permitted.
let policy_with_parent = SandboxPolicy::WorkspaceWrite {
writable_roots: vec![parent.clone()],
writable_roots: vec![parent],
network_access: false,
exclude_tmpdir_env_var: true,
exclude_slash_tmp: true,

View File

@@ -153,7 +153,7 @@ mod tests {
// Build a policy that only includes the two test roots as writable and
// does not automatically include defaults TMPDIR or /tmp.
let policy = SandboxPolicy::WorkspaceWrite {
writable_roots: vec![root_with_git.clone(), root_without_git.clone()],
writable_roots: vec![root_with_git, root_without_git],
network_access: false,
exclude_tmpdir_env_var: true,
exclude_slash_tmp: true,

View File

@@ -326,10 +326,7 @@ mod tests {
.format_default_shell_invocation(input.iter().map(|s| s.to_string()).collect());
let expected_cmd = expected_cmd
.iter()
.map(|s| {
s.replace("BASHRC_PATH", bashrc_path.to_str().unwrap())
.to_string()
})
.map(|s| s.replace("BASHRC_PATH", bashrc_path.to_str().unwrap()))
.collect();
assert_eq!(actual_cmd, Some(expected_cmd));
@@ -435,10 +432,7 @@ mod macos_tests {
.format_default_shell_invocation(input.iter().map(|s| s.to_string()).collect());
let expected_cmd = expected_cmd
.iter()
.map(|s| {
s.replace("ZSHRC_PATH", zshrc_path.to_str().unwrap())
.to_string()
})
.map(|s| s.replace("ZSHRC_PATH", zshrc_path.to_str().unwrap()))
.collect();
assert_eq!(actual_cmd, Some(expected_cmd));

View File

@@ -47,15 +47,6 @@ pub(crate) enum PlanType {
Unknown(String),
}
impl PlanType {
pub fn as_string(&self) -> String {
match self {
Self::Known(known) => format!("{known:?}").to_lowercase(),
Self::Unknown(s) => s.clone(),
}
}
}
#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
pub(crate) enum KnownPlan {

View File

@@ -678,7 +678,7 @@ index {left_oid}..{right_oid}
let dest = dir.path().join("dest.txt");
let mut acc = TurnDiffTracker::new();
let mv = HashMap::from([(
src.clone(),
src,
FileChange::Update {
unified_diff: "".into(),
move_path: Some(dest.clone()),

View File

@@ -586,26 +586,6 @@ mod tests {
Ok(())
}
#[cfg(unix)]
#[tokio::test]
async fn correct_path_resolution() -> Result<(), UnifiedExecError> {
let manager = UnifiedExecSessionManager::default();
let result = manager
.handle_request(UnifiedExecRequest {
session_id: None,
input_chunks: &["echo".to_string(), "codex".to_string()],
timeout_ms: Some(1_500),
})
.await?;
assert!(result.session_id.is_none());
assert!(result.output.contains("codex"));
assert!(manager.sessions.lock().await.is_empty());
Ok(())
}
#[cfg(unix)]
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn reusing_completed_session_returns_unknown_session() -> Result<(), UnifiedExecError> {

View File

@@ -4,9 +4,11 @@ use codex_core::ModelProviderInfo;
use codex_core::NewConversation;
use codex_core::WireApi;
use codex_core::built_in_model_providers;
use codex_core::project_doc::get_user_instructions;
use codex_core::protocol::EventMsg;
use codex_core::protocol::InputItem;
use codex_core::protocol::Op;
use codex_core::shell::default_user_shell;
use codex_core::spawn::CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR;
use core_test_support::load_default_config_for_test;
use core_test_support::load_sse_fixture_with_id;
@@ -221,6 +223,8 @@ async fn resume_includes_initial_messages_and_sends_prior_items() {
};
let codex_home = TempDir::new().unwrap();
let mut config = load_default_config_for_test(&codex_home);
let cwd = TempDir::new().unwrap();
config.cwd = cwd.path().to_path_buf();
config.model_provider = model_provider;
config.experimental_resume = Some(session_path.clone());
// Also configure user instructions to ensure they are NOT delivered on resume.
@@ -259,6 +263,29 @@ async fn resume_includes_initial_messages_and_sends_prior_items() {
let request = &server.received_requests().await.unwrap()[0];
let request_body = request.body_json::<serde_json::Value>().unwrap();
// Build expected environment context for this turn.
let shell = default_user_shell().await;
let shell_line = match shell.name() {
Some(name) => format!(" <shell>{name}</shell>\n"),
None => String::new(),
};
let expected_env_text_turn = format!(
r#"<environment_context>
<cwd>{}</cwd>
<approval_policy>on-request</approval_policy>
<sandbox_mode>read-only</sandbox_mode>
<network_access>restricted</network_access>
{}</environment_context>"#,
cwd.path().to_string_lossy(),
shell_line.as_str(),
);
let expected_env_msg_turn = json!({
"type": "message",
"role": "user",
"content": [ { "type": "input_text", "text": expected_env_text_turn } ]
});
let expected_input = json!([
{
"type": "message",
@@ -270,12 +297,14 @@ async fn resume_includes_initial_messages_and_sends_prior_items() {
"role": "assistant",
"content": [{ "type": "output_text", "text": "resumed assistant message" }]
},
expected_env_msg_turn,
{
"type": "message",
"role": "user",
"content": [{ "type": "input_text", "text": "hello" }]
}
]);
assert_eq!(request_body["input"], expected_input);
}
@@ -838,7 +867,7 @@ async fn history_dedupes_streamed_and_final_messages_across_turns() {
conversation: codex,
..
} = conversation_manager
.new_conversation(config)
.new_conversation(config.clone())
.await
.expect("create new conversation");
@@ -873,34 +902,49 @@ async fn history_dedupes_streamed_and_final_messages_across_turns() {
let requests = server.received_requests().await.unwrap();
assert_eq!(requests.len(), 3, "expected 3 requests (one per turn)");
// Replace full-array compare with tail-only raw JSON compare using a single hard-coded value.
let r3_tail_expected = json!([
{
"type": "message",
"role": "user",
"content": [{"type":"input_text","text":"U1"}]
},
{
"type": "message",
"role": "assistant",
"content": [{"type":"output_text","text":"Hey there!\n"}]
},
{
"type": "message",
"role": "user",
"content": [{"type":"input_text","text":"U2"}]
},
{
"type": "message",
"role": "assistant",
"content": [{"type":"output_text","text":"Hey there!\n"}]
},
{
"type": "message",
"role": "user",
"content": [{"type":"input_text","text":"U3"}]
}
]);
// Build expected environment context dynamically to avoid OS-dependent flakiness.
let user_instructions = get_user_instructions(&config).await;
let shell = default_user_shell().await;
let shell_line = match shell.name() {
Some(name) => format!(" <shell>{name}</shell>\n"),
None => String::new(),
};
let expected_env_text = format!(
r#"<environment_context>
<cwd>{}</cwd>
<approval_policy>on-request</approval_policy>
<sandbox_mode>read-only</sandbox_mode>
<network_access>restricted</network_access>
{}</environment_context>"#,
std::env::current_dir().unwrap().to_string_lossy(),
shell_line.as_str(),
);
let expected_env_msg = json!({
"type": "message",
"role": "user",
"content": [ { "type": "input_text", "text": expected_env_text } ]
});
// Wrap user instructions in the XML container to match the raw/ingest view
let expected_ui_text = format!(
"<user_instructions>\n\n{}\n\n</user_instructions>",
user_instructions.clone().unwrap()
);
let expected_ui_msg = json!({
"type": "message",
"role": "user",
"content": [ { "type": "input_text", "text": expected_ui_text } ]
});
let expected_full = json!([
expected_ui_msg,
expected_env_msg.clone(),
{"type":"message","role":"user","content":[{"type":"input_text","text":"U1"}]},
{"type":"message","role":"assistant","content":[{"type":"output_text","text":"Hey there!\n"}]},
expected_env_msg.clone(),
{"type":"message","role":"user","content":[{"type":"input_text","text":"U2"}]},
{"type":"message","role":"assistant","content":[{"type":"output_text","text":"Hey there!\n"}]},
expected_env_msg,
{"type":"message","role":"user","content":[{"type":"input_text","text":"U3"}]}]);
let r3_input_array = requests[2]
.body_json::<serde_json::Value>()
@@ -909,12 +953,6 @@ async fn history_dedupes_streamed_and_final_messages_across_turns() {
.and_then(|v| v.as_array())
.cloned()
.expect("r3 missing input array");
// skipping earlier context and developer messages
let tail_len = r3_tail_expected.as_array().unwrap().len();
let actual_tail = &r3_input_array[r3_input_array.len() - tail_len..];
assert_eq!(
serde_json::Value::Array(actual_tail.to_vec()),
r3_tail_expected,
"request 3 tail mismatch",
);
assert_eq!(json!(r3_input_array), expected_full);
}

View File

@@ -3,10 +3,13 @@
use codex_core::CodexAuth;
use codex_core::ConversationManager;
use codex_core::ModelProviderInfo;
use codex_core::NewConversation;
use codex_core::built_in_model_providers;
use codex_core::protocol::EventMsg;
use codex_core::protocol::InputItem;
use codex_core::protocol::Op;
use codex_core::protocol::RolloutItem;
use codex_core::protocol::RolloutLine;
use codex_core::spawn::CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR;
use core_test_support::load_default_config_for_test;
use core_test_support::wait_for_event;
@@ -142,11 +145,12 @@ async fn summarize_context_three_requests_and_instructions() {
let mut config = load_default_config_for_test(&home);
config.model_provider = model_provider;
let conversation_manager = ConversationManager::with_auth(CodexAuth::from_api_key("dummy"));
let codex = conversation_manager
.new_conversation(config)
.await
.unwrap()
.conversation;
let NewConversation {
conversation: codex,
session_configured,
..
} = conversation_manager.new_conversation(config).await.unwrap();
let rollout_path = session_configured.rollout_path;
// 1) Normal user input should hit server once.
codex
@@ -248,4 +252,47 @@ async fn summarize_context_three_requests_and_instructions() {
!messages.iter().any(|(_, t)| t.contains(SUMMARIZE_TRIGGER)),
"third request should not include the summarize trigger"
);
// Shut down Codex to flush rollout entries before inspecting the file.
codex.submit(Op::Shutdown).await.unwrap();
wait_for_event(&codex, |ev| matches!(ev, EventMsg::ShutdownComplete)).await;
// Verify the rollout contains a TurnContext entry for each API call and a Compacted entry.
let text = std::fs::read_to_string(&rollout_path).unwrap_or_else(|e| {
panic!(
"failed to read rollout file {}: {e}",
rollout_path.display()
)
});
let mut api_turn_count = 0usize;
let mut saw_compacted_summary = false;
for line in text.lines() {
let trimmed = line.trim();
if trimmed.is_empty() {
continue;
}
let Ok(entry): Result<RolloutLine, _> = serde_json::from_str(trimmed) else {
continue;
};
match entry.item {
RolloutItem::TurnContext(_) => {
api_turn_count += 1;
}
RolloutItem::Compacted(ci) => {
if ci.message == SUMMARY_TEXT {
saw_compacted_summary = true;
}
}
_ => {}
}
}
assert_eq!(
api_turn_count, 3,
"expected three TurnContext entries in rollout"
);
assert!(
saw_compacted_summary,
"expected a Compacted entry containing the summarizer output"
);
}

View File

@@ -7,6 +7,7 @@ mod exec;
mod exec_stream_events;
mod fork_conversation;
mod live_cli;
mod model_overrides;
mod prompt_caching;
mod seatbelt;
mod stream_error_allows_next_turn;

View File

@@ -0,0 +1,92 @@
use codex_core::CodexAuth;
use codex_core::ConversationManager;
use codex_core::protocol::EventMsg;
use codex_core::protocol::Op;
use codex_core::protocol_config_types::ReasoningEffort;
use core_test_support::load_default_config_for_test;
use core_test_support::wait_for_event;
use pretty_assertions::assert_eq;
use tempfile::TempDir;
const CONFIG_TOML: &str = "config.toml";
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn override_turn_context_does_not_persist_when_config_exists() {
let codex_home = TempDir::new().unwrap();
let config_path = codex_home.path().join(CONFIG_TOML);
let initial_contents = "model = \"gpt-4o\"\n";
tokio::fs::write(&config_path, initial_contents)
.await
.expect("seed config.toml");
let mut config = load_default_config_for_test(&codex_home);
config.model = "gpt-4o".to_string();
let conversation_manager =
ConversationManager::with_auth(CodexAuth::from_api_key("Test API Key"));
let codex = conversation_manager
.new_conversation(config)
.await
.expect("create conversation")
.conversation;
codex
.submit(Op::OverrideTurnContext {
cwd: None,
approval_policy: None,
sandbox_policy: None,
model: Some("o3".to_string()),
effort: Some(ReasoningEffort::High),
summary: None,
})
.await
.expect("submit override");
codex.submit(Op::Shutdown).await.expect("request shutdown");
wait_for_event(&codex, |ev| matches!(ev, EventMsg::ShutdownComplete)).await;
let contents = tokio::fs::read_to_string(&config_path)
.await
.expect("read config.toml after override");
assert_eq!(contents, initial_contents);
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn override_turn_context_does_not_create_config_file() {
let codex_home = TempDir::new().unwrap();
let config_path = codex_home.path().join(CONFIG_TOML);
assert!(
!config_path.exists(),
"test setup should start without config"
);
let config = load_default_config_for_test(&codex_home);
let conversation_manager =
ConversationManager::with_auth(CodexAuth::from_api_key("Test API Key"));
let codex = conversation_manager
.new_conversation(config)
.await
.expect("create conversation")
.conversation;
codex
.submit(Op::OverrideTurnContext {
cwd: None,
approval_policy: None,
sandbox_policy: None,
model: Some("o3".to_string()),
effort: Some(ReasoningEffort::Medium),
summary: None,
})
.await
.expect("submit override");
codex.submit(Op::Shutdown).await.expect("request shutdown");
wait_for_event(&codex, |ev| matches!(ev, EventMsg::ShutdownComplete)).await;
assert!(
!config_path.exists(),
"override should not create config.toml"
);
}

View File

@@ -270,8 +270,13 @@ async fn prefixes_context_and_instructions_once_and_consistently_across_requests
assert_eq!(requests.len(), 2, "expected two POST requests");
let shell = default_user_shell().await;
let shell_line = match shell.name() {
Some(name) => format!(" <shell>{name}</shell>\n"),
None => String::new(),
};
let expected_env_text = format!(
// Per-turn environment context includes the shell tag.
let expected_env_text_turn = format!(
r#"<environment_context>
<cwd>{}</cwd>
<approval_policy>on-request</approval_policy>
@@ -279,18 +284,15 @@ async fn prefixes_context_and_instructions_once_and_consistently_across_requests
<network_access>restricted</network_access>
{}</environment_context>"#,
cwd.path().to_string_lossy(),
match shell.name() {
Some(name) => format!(" <shell>{name}</shell>\n"),
None => String::new(),
}
shell_line.as_str(),
);
let expected_ui_text =
"<user_instructions>\n\nbe consistent and helpful\n\n</user_instructions>";
let expected_env_msg = serde_json::json!({
let expected_env_msg_turn = serde_json::json!({
"type": "message",
"role": "user",
"content": [ { "type": "input_text", "text": expected_env_text } ]
"content": [ { "type": "input_text", "text": expected_env_text_turn } ]
});
let expected_ui_msg = serde_json::json!({
"type": "message",
@@ -304,11 +306,29 @@ async fn prefixes_context_and_instructions_once_and_consistently_across_requests
"content": [ { "type": "input_text", "text": "hello 1" } ]
});
let body1 = requests[0].body_json::<serde_json::Value>().unwrap();
let body1_input = body1["input"].as_array().unwrap();
assert_eq!(
body1["input"],
serde_json::json!([expected_ui_msg, expected_env_msg, expected_user_message_1])
serde_json::json!([
expected_ui_msg,
expected_env_msg_turn,
expected_user_message_1
])
);
let env_texts: Vec<&str> = body1_input
.iter()
.filter_map(|msg| {
msg.get("content")
.and_then(|content| content.as_array())
.and_then(|content| content.first())
.and_then(|item| item.get("text"))
.and_then(|text| text.as_str())
})
.filter(|text| text.starts_with("<environment_context>"))
.collect();
assert_eq!(env_texts, vec![expected_env_text_turn.as_str()]);
let expected_user_message_2 = serde_json::json!({
"type": "message",
"role": "user",
@@ -318,7 +338,7 @@ async fn prefixes_context_and_instructions_once_and_consistently_across_requests
let expected_body2 = serde_json::json!(
[
body1["input"].as_array().unwrap().as_slice(),
[expected_user_message_2].as_slice(),
[expected_env_msg_turn, expected_user_message_2].as_slice(),
]
.concat()
);
@@ -423,19 +443,28 @@ async fn overrides_turn_context_but_keeps_cached_prefix_and_key_constant() {
"role": "user",
"content": [ { "type": "input_text", "text": "hello 2" } ]
});
let shell = default_user_shell().await;
let shell_line = match shell.name() {
Some(name) => format!(" <shell>{name}</shell>\n"),
None => String::new(),
};
// After overriding the turn context, the environment context should be emitted again,
// reflecting the new approval policy and sandbox settings. The cwd is unchanged but
// still included.
let expected_env_text_2 = format!(
r#"<environment_context>
<cwd>{}</cwd>
<approval_policy>never</approval_policy>
<sandbox_mode>workspace-write</sandbox_mode>
<network_access>enabled</network_access>
<writable_roots>
<root>{}</root>
</writable_roots>
</environment_context>"#,
writable.path().to_string_lossy()
{}</environment_context>"#,
cwd.path().to_string_lossy(),
writable.path().to_string_lossy(),
shell_line.as_str()
);
let expected_env_msg_2 = serde_json::json!({
"type": "message",
@@ -546,12 +575,165 @@ async fn per_turn_overrides_keep_cached_prefix_and_key_constant() {
"role": "user",
"content": [ { "type": "input_text", "text": "hello 2" } ]
});
let shell = default_user_shell().await;
let shell_line = match shell.name() {
Some(name) => format!(" <shell>{name}</shell>\n"),
None => String::new(),
};
let expected_env_text_2 = format!(
r#"<environment_context>
<cwd>{}</cwd>
<approval_policy>never</approval_policy>
<sandbox_mode>workspace-write</sandbox_mode>
<network_access>enabled</network_access>
<writable_roots>
<root>{}</root>
</writable_roots>
{}</environment_context>"#,
new_cwd.path().to_string_lossy(),
writable.path().to_string_lossy(),
shell_line.as_str()
);
let expected_env_msg_2 = serde_json::json!({
"type": "message",
"role": "user",
"content": [ { "type": "input_text", "text": expected_env_text_2 } ]
});
let expected_body2 = serde_json::json!(
[
body1["input"].as_array().unwrap().as_slice(),
[expected_user_message_2].as_slice(),
[expected_env_msg_2, expected_user_message_2].as_slice(),
]
.concat()
);
assert_eq!(body2["input"], expected_body2);
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn tools_stable_across_all_approval_policy_transitions() {
use pretty_assertions::assert_eq;
let server = MockServer::start().await;
let sse = sse_completed("resp");
let template = ResponseTemplate::new(200)
.insert_header("content-type", "text/event-stream")
.set_body_raw(sse, "text/event-stream");
// Build all transitions from each policy to every other policy (excluding self-transitions).
let policies = vec![
AskForApproval::UnlessTrusted,
AskForApproval::OnFailure,
AskForApproval::OnRequest,
AskForApproval::Never,
];
let mut transitions: Vec<(AskForApproval, AskForApproval)> = Vec::new();
for &from in &policies {
for &to in &policies {
if from != to {
transitions.push((from, to));
}
}
}
// Expect 2 POSTs per transition
Mock::given(method("POST"))
.and(path("/v1/responses"))
.respond_with(template)
.expect((transitions.len() * 2) as u64)
.mount(&server)
.await;
let model_provider = ModelProviderInfo {
base_url: Some(format!("{}/v1", server.uri())),
..built_in_model_providers()["openai"].clone()
};
let cwd = TempDir::new().unwrap();
let codex_home = TempDir::new().unwrap();
let mut config = load_default_config_for_test(&codex_home);
config.cwd = cwd.path().to_path_buf();
config.model_provider = model_provider;
config.user_instructions = Some("be consistent and helpful".to_string());
// Keep tools stable and minimal
config.include_plan_tool = false;
config.include_apply_patch_tool = false;
config.tools_web_search_request = false;
config.use_experimental_unified_exec_tool = true; // policy-independent tool
let conversation_manager =
ConversationManager::with_auth(CodexAuth::from_api_key("Test API Key"));
let codex = conversation_manager
.new_conversation(config)
.await
.expect("create new conversation")
.conversation;
for (i, (from, to)) in transitions.iter().enumerate() {
// Ensure a known starting policy for this pair
codex
.submit(Op::OverrideTurnContext {
cwd: None,
approval_policy: Some(*from),
sandbox_policy: None,
model: None,
effort: None,
summary: None,
})
.await
.unwrap();
codex
.submit(Op::UserInput {
items: vec![InputItem::Text {
text: format!("turn {i}-a"),
}],
})
.await
.unwrap();
wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
// Override to the target policy and send next turn
codex
.submit(Op::OverrideTurnContext {
cwd: None,
approval_policy: Some(*to),
sandbox_policy: None,
model: None,
effort: None,
summary: None,
})
.await
.unwrap();
codex
.submit(Op::UserInput {
items: vec![InputItem::Text {
text: format!("turn {i}-b"),
}],
})
.await
.unwrap();
wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
}
// Verify tool arrays are identical across each pair of requests
let requests = server.received_requests().await.unwrap();
assert_eq!(
requests.len(),
transitions.len() * 2,
"expected 2 requests per transition"
);
for i in 0..transitions.len() {
let body_a = requests[2 * i].body_json::<serde_json::Value>().unwrap();
let body_b = requests[2 * i + 1]
.body_json::<serde_json::Value>()
.unwrap();
assert_eq!(
body_a["tools"], body_b["tools"],
"tools changed between requests for transition #{i}: {:?}",
transitions[i]
);
}
}

View File
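To make the expected payload concrete, here is a standalone rendering of the per-turn environment context these assertions rebuild. The cwd and shell values are hypothetical placeholders:

fn main() {
    let cwd = "/tmp/work"; // placeholder
    let shell_line = " <shell>zsh</shell>\n"; // empty string when no shell is detected
    let env_text = format!(
        r#"<environment_context>
<cwd>{cwd}</cwd>
<approval_policy>on-request</approval_policy>
<sandbox_mode>read-only</sandbox_mode>
<network_access>restricted</network_access>
{shell_line}</environment_context>"#
    );
    println!("{env_text}");
}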

@@ -280,7 +280,7 @@ impl EventProcessor for EventProcessorWithHumanOutput {
parsed_cmd: _,
}) => {
self.call_id_to_command.insert(
call_id.clone(),
call_id,
ExecCommandBegin {
command: command.clone(),
},
@@ -382,7 +382,7 @@ impl EventProcessor for EventProcessorWithHumanOutput {
// Store metadata so we can calculate duration later when we
// receive the corresponding PatchApplyEnd event.
self.call_id_to_patch.insert(
call_id.clone(),
call_id,
PatchApplyBegin {
start_time: Instant::now(),
auto_approved,
@@ -520,6 +520,7 @@ impl EventProcessor for EventProcessorWithHumanOutput {
let SessionConfiguredEvent {
session_id: conversation_id,
model,
reasoning_effort: _,
history_log_id: _,
history_entry_count: _,
initial_messages: _,

View File

@@ -146,7 +146,7 @@ pub async fn run_main(cli: Cli, codex_linux_sandbox_exe: Option<PathBuf>) -> any
model_provider,
codex_linux_sandbox_exe,
base_instructions: None,
include_plan_tool: Some(true),
include_plan_tool: None,
include_apply_patch_tool: None,
include_view_image_tool: None,
show_raw_agent_reasoning: oss.then_some(true),

View File

@@ -61,7 +61,7 @@ pub(crate) async fn run_e2e_exec_test(cwd: &Path, response_streams: Vec<String>)
.context("should find binary for codex-exec")
.expect("should find binary for codex-exec")
.current_dir(cwd.clone())
.env("CODEX_HOME", cwd.clone())
.env("CODEX_HOME", cwd)
.env("OPENAI_API_KEY", "dummy")
.env("OPENAI_BASE_URL", format!("{uri}/v1"))
.arg("--skip-git-repo-check")

View File

@@ -88,7 +88,7 @@ impl ExecvChecker {
let mut program = valid_exec.program.to_string();
for system_path in valid_exec.system_path {
if is_executable_file(&system_path) {
program = system_path.to_string();
program = system_path;
break;
}
}
@@ -196,7 +196,7 @@ system_path=[{fake_cp:?}]
let checker = setup(&fake_cp);
let exec_call = ExecCall {
program: "cp".into(),
args: vec![source.clone(), dest.clone()],
args: vec![source, dest.clone()],
};
let valid_exec = match checker.r#match(&exec_call)? {
MatchedExec::Match { exec } => exec,
@@ -207,7 +207,7 @@ system_path=[{fake_cp:?}]
assert_eq!(
checker.check(valid_exec.clone(), &cwd, &[], &[]),
Err(ReadablePathNotInReadableFolders {
file: source_path.clone(),
file: source_path,
folders: vec![]
}),
);
@@ -229,7 +229,7 @@ system_path=[{fake_cp:?}]
// Both readable and writeable folders specified.
assert_eq!(
checker.check(
valid_exec.clone(),
valid_exec,
&cwd,
std::slice::from_ref(&root_path),
std::slice::from_ref(&root_path)
@@ -241,7 +241,7 @@ system_path=[{fake_cp:?}]
// folders.
let exec_call_folders_as_args = ExecCall {
program: "cp".into(),
args: vec![root.clone(), root.clone()],
args: vec![root.clone(), root],
};
let valid_exec_call_folders_as_args = match checker.r#match(&exec_call_folders_as_args)? {
MatchedExec::Match { exec } => exec,
@@ -254,7 +254,7 @@ system_path=[{fake_cp:?}]
std::slice::from_ref(&root_path),
std::slice::from_ref(&root_path)
),
Ok(cp.clone()),
Ok(cp),
);
// Specify a parent of a readable folder as input.

View File

@@ -104,7 +104,7 @@ impl PolicyBuilder {
info!("adding program spec: {program_spec:?}");
let name = program_spec.program.clone();
let mut programs = self.programs.borrow_mut();
programs.insert(name.clone(), program_spec);
programs.insert(name, program_spec);
}
fn add_forbidden_substrings(&self, substrings: &[String]) {

View File

@@ -31,6 +31,13 @@ install:
rustup show active-toolchain
cargo fetch
# Run `cargo nextest` since it's faster than `cargo test`, though including
# --no-fail-fast is important to ensure all tests are run.
#
# Run `cargo install cargo-nextest` if you don't have it installed.
test:
cargo nextest run --no-fail-fast
# Run the MCP server
mcp-server-run *args:
cargo run -p codex-mcp-server -- "$@"

View File

@@ -42,7 +42,7 @@ impl ServerOptions {
pub fn new(codex_home: PathBuf, client_id: String) -> Self {
Self {
codex_home,
client_id: client_id.to_string(),
client_id,
issuer: DEFAULT_ISSUER.to_string(),
port: DEFAULT_PORT,
open_browser: true,
@@ -126,7 +126,7 @@ pub fn run_login_server(opts: ServerOptions) -> io::Result<LoginServer> {
let shutdown_notify = Arc::new(tokio::sync::Notify::new());
let server_handle = {
let shutdown_notify = shutdown_notify.clone();
let server = server.clone();
let server = server;
tokio::spawn(async move {
let result = loop {
tokio::select! {

View File

@@ -18,6 +18,9 @@ use codex_core::config::Config;
use codex_core::config::ConfigOverrides;
use codex_core::config::ConfigToml;
use codex_core::config::load_config_as_toml;
use codex_core::config_edit::CONFIG_KEY_EFFORT;
use codex_core::config_edit::CONFIG_KEY_MODEL;
use codex_core::config_edit::persist_non_null_overrides;
use codex_core::default_client::get_codex_user_agent;
use codex_core::exec::ExecParams;
use codex_core::exec_env::create_env;
@@ -71,6 +74,8 @@ use codex_protocol::mcp_protocol::SendUserMessageResponse;
use codex_protocol::mcp_protocol::SendUserTurnParams;
use codex_protocol::mcp_protocol::SendUserTurnResponse;
use codex_protocol::mcp_protocol::ServerNotification;
use codex_protocol::mcp_protocol::SetDefaultModelParams;
use codex_protocol::mcp_protocol::SetDefaultModelResponse;
use codex_protocol::mcp_protocol::UserInfoResponse;
use codex_protocol::mcp_protocol::UserSavedConfig;
use codex_protocol::models::ContentItem;
@@ -192,6 +197,9 @@ impl CodexMessageProcessor {
ClientRequest::GetUserSavedConfig { request_id } => {
self.get_user_saved_config(request_id).await;
}
ClientRequest::SetDefaultModel { request_id, params } => {
self.set_default_model(request_id, params).await;
}
ClientRequest::GetUserAgent { request_id } => {
self.get_user_agent(request_id).await;
}
@@ -499,6 +507,40 @@ impl CodexMessageProcessor {
self.outgoing.send_response(request_id, response).await;
}
async fn set_default_model(&self, request_id: RequestId, params: SetDefaultModelParams) {
let SetDefaultModelParams {
model,
reasoning_effort,
} = params;
let effort_str = reasoning_effort.map(|effort| effort.to_string());
let overrides: [(&[&str], Option<&str>); 2] = [
(&[CONFIG_KEY_MODEL], model.as_deref()),
(&[CONFIG_KEY_EFFORT], effort_str.as_deref()),
];
match persist_non_null_overrides(
&self.config.codex_home,
self.config.active_profile.as_deref(),
&overrides,
)
.await
{
Ok(()) => {
let response = SetDefaultModelResponse {};
self.outgoing.send_response(request_id, response).await;
}
Err(err) => {
let error = JSONRPCErrorError {
code: INTERNAL_ERROR_CODE,
message: format!("failed to persist overrides: {err}"),
data: None,
};
self.outgoing.send_error(request_id, error).await;
}
}
}
async fn exec_one_off_command(&self, request_id: RequestId, params: ExecOneOffCommandParams) {
tracing::debug!("ExecOneOffCommand params: {params:?}");
@@ -593,6 +635,7 @@ impl CodexMessageProcessor {
let response = NewConversationResponse {
conversation_id,
model: session_configured.model,
reasoning_effort: session_configured.reasoning_effort,
rollout_path: session_configured.rollout_path,
};
self.outgoing.send_response(request_id, response).await;

View File

@@ -222,7 +222,7 @@ async fn run_codex_tool_session_inner(
}
EventMsg::TaskComplete(TaskCompleteEvent { last_agent_message }) => {
let text = match last_agent_message {
Some(msg) => msg.clone(),
Some(msg) => msg,
None => "".to_string(),
};
let result = CallToolResult {

View File

@@ -531,7 +531,6 @@ impl MessageProcessor {
// Spawn the long-running reply handler.
tokio::spawn({
let codex = codex.clone();
let outgoing = outgoing.clone();
let prompt = prompt.clone();
let running_requests_id_to_codex_uuid = running_requests_id_to_codex_uuid.clone();

View File

@@ -258,6 +258,7 @@ pub(crate) struct OutgoingError {
mod tests {
use codex_core::protocol::EventMsg;
use codex_core::protocol::SessionConfiguredEvent;
use codex_protocol::config_types::ReasoningEffort;
use codex_protocol::mcp_protocol::ConversationId;
use codex_protocol::mcp_protocol::LoginChatGptCompleteNotification;
use pretty_assertions::assert_eq;
@@ -279,6 +280,7 @@ mod tests {
msg: EventMsg::SessionConfigured(SessionConfiguredEvent {
session_id: conversation_id,
model: "gpt-4o".to_string(),
reasoning_effort: ReasoningEffort::default(),
history_log_id: 1,
history_entry_count: 1000,
initial_messages: None,
@@ -299,7 +301,7 @@ mod tests {
let Ok(expected_params) = serde_json::to_value(&event) else {
panic!("Event must serialize");
};
assert_eq!(params, Some(expected_params.clone()));
assert_eq!(params, Some(expected_params));
}
#[tokio::test]
@@ -312,6 +314,7 @@ mod tests {
let session_configured_event = SessionConfiguredEvent {
session_id: conversation_id,
model: "gpt-4o".to_string(),
reasoning_effort: ReasoningEffort::default(),
history_log_id: 1,
history_entry_count: 1000,
initial_messages: None,
@@ -342,6 +345,7 @@ mod tests {
"msg": {
"session_id": session_configured_event.session_id,
"model": session_configured_event.model,
"reasoning_effort": session_configured_event.reasoning_effort,
"history_log_id": session_configured_event.history_log_id,
"history_entry_count": session_configured_event.history_entry_count,
"type": "session_configured",

View File

@@ -24,6 +24,7 @@ use codex_protocol::mcp_protocol::RemoveConversationListenerParams;
use codex_protocol::mcp_protocol::ResumeConversationParams;
use codex_protocol::mcp_protocol::SendUserMessageParams;
use codex_protocol::mcp_protocol::SendUserTurnParams;
use codex_protocol::mcp_protocol::SetDefaultModelParams;
use mcp_types::CallToolRequestParams;
use mcp_types::ClientCapabilities;
@@ -301,6 +302,15 @@ impl McpProcess {
self.send_request("userInfo", None).await
}
/// Send a `setDefaultModel` JSON-RPC request.
pub async fn send_set_default_model_request(
&mut self,
params: SetDefaultModelParams,
) -> anyhow::Result<i64> {
let params = Some(serde_json::to_value(params)?);
self.send_request("setDefaultModel", params).await
}
/// Send a `listConversations` JSON-RPC request.
pub async fn send_list_conversations_request(
&mut self,

View File

@@ -90,6 +90,7 @@ async fn test_codex_jsonrpc_conversation_flow() {
let NewConversationResponse {
conversation_id,
model,
reasoning_effort: _,
rollout_path: _,
} = new_conv_resp;
assert_eq!(model, "mock-model");

View File

@@ -59,6 +59,7 @@ async fn test_conversation_create_and_send_message_ok() {
let NewConversationResponse {
conversation_id,
model,
reasoning_effort: _,
rollout_path: _,
} = to_response::<NewConversationResponse>(new_conv_resp)
.expect("deserialize newConversation response");

View File

@@ -9,5 +9,6 @@ mod interrupt;
mod list_resume;
mod login;
mod send_message;
mod set_default_model;
mod user_agent;
mod user_info;

View File

@@ -0,0 +1,62 @@
use codex_core::config::ConfigToml;
use codex_protocol::config_types::ReasoningEffort;
use codex_protocol::mcp_protocol::SetDefaultModelParams;
use codex_protocol::mcp_protocol::SetDefaultModelResponse;
use mcp_test_support::McpProcess;
use mcp_test_support::to_response;
use mcp_types::JSONRPCResponse;
use mcp_types::RequestId;
use pretty_assertions::assert_eq;
use tempfile::TempDir;
use tokio::time::timeout;
const DEFAULT_READ_TIMEOUT: std::time::Duration = std::time::Duration::from_secs(10);
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn set_default_model_persists_overrides() {
let codex_home = TempDir::new().unwrap_or_else(|e| panic!("create tempdir: {e}"));
let mut mcp = McpProcess::new(codex_home.path())
.await
.expect("spawn mcp process");
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize())
.await
.expect("init timeout")
.expect("init failed");
let params = SetDefaultModelParams {
model: Some("o4-mini".to_string()),
reasoning_effort: Some(ReasoningEffort::High),
};
let request_id = mcp
.send_set_default_model_request(params)
.await
.expect("send setDefaultModel");
let resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(request_id)),
)
.await
.expect("setDefaultModel timeout")
.expect("setDefaultModel response");
let _: SetDefaultModelResponse =
to_response(resp).expect("deserialize setDefaultModel response");
let config_path = codex_home.path().join("config.toml");
let config_contents = tokio::fs::read_to_string(&config_path)
.await
.expect("read config.toml");
let config_toml: ConfigToml = toml::from_str(&config_contents).expect("parse config.toml");
assert_eq!(
ConfigToml {
model: Some("o4-mini".to_string()),
model_reasoning_effort: Some(ReasoningEffort::High),
..Default::default()
},
config_toml,
);
}

View File
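The config.toml asserted above should come out roughly as follows; a sketch assuming ConfigToml's field names map one-to-one onto TOML keys and ReasoningEffort serializes lowercase (requires the toml crate):

fn main() {
    let expected = r#"
model = "o4-mini"
model_reasoning_effort = "high"
"#;
    let value: toml::Value = toml::from_str(expected).expect("valid TOML");
    assert_eq!(value["model"].as_str(), Some("o4-mini"));
    assert_eq!(value["model_reasoning_effort"].as_str(), Some("high"));
}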

@@ -40,6 +40,7 @@ pub fn generate_ts(out_dir: &Path, prettier: Option<&Path>) -> Result<()> {
codex_protocol::mcp_protocol::ApplyPatchApprovalResponse::export_all_to(out_dir)?;
codex_protocol::mcp_protocol::ExecCommandApprovalResponse::export_all_to(out_dir)?;
codex_protocol::mcp_protocol::GetUserSavedConfigResponse::export_all_to(out_dir)?;
codex_protocol::mcp_protocol::SetDefaultModelResponse::export_all_to(out_dir)?;
codex_protocol::mcp_protocol::GetUserAgentResponse::export_all_to(out_dir)?;
codex_protocol::mcp_protocol::UserInfoResponse::export_all_to(out_dir)?;

View File

@@ -153,6 +153,11 @@ pub enum ClientRequest {
#[serde(rename = "id")]
request_id: RequestId,
},
SetDefaultModel {
#[serde(rename = "id")]
request_id: RequestId,
params: SetDefaultModelParams,
},
GetUserAgent {
#[serde(rename = "id")]
request_id: RequestId,
@@ -217,6 +222,8 @@ pub struct NewConversationParams {
pub struct NewConversationResponse {
pub conversation_id: ConversationId,
pub model: String,
/// Note that the model may ignore this.
pub reasoning_effort: ReasoningEffort,
pub rollout_path: PathBuf,
}
@@ -414,6 +421,19 @@ pub struct GetUserSavedConfigResponse {
pub config: UserSavedConfig,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct SetDefaultModelParams {
#[serde(skip_serializing_if = "Option::is_none")]
pub model: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub reasoning_effort: Option<ReasoningEffort>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct SetDefaultModelResponse {}
/// UserSavedConfig contains a subset of the config. It is meant to expose MCP
/// client-configurable settings that can be specified in the NewConversation
/// and SendUserTurn requests.

View File
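For client authors, a sketch of the resulting wire shape. The method name setDefaultModel and the camelCase params come from the definitions above; the struct here is a local mirror for illustration, not the crate's real type:

use serde::Serialize;

#[derive(Serialize)]
#[serde(rename_all = "camelCase")]
struct SetDefaultModelParams {
    #[serde(skip_serializing_if = "Option::is_none")]
    model: Option<String>,
    #[serde(skip_serializing_if = "Option::is_none")]
    reasoning_effort: Option<String>,
}

fn main() {
    let params = SetDefaultModelParams {
        model: Some("o4-mini".to_string()),
        reasoning_effort: Some("high".to_string()),
    };
    // The params object serializes as {"model":"o4-mini","reasoningEffort":"high"}.
    let request = serde_json::json!({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "setDefaultModel",
        "params": params,
    });
    println!("{request}");
}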

@@ -219,7 +219,7 @@ impl From<Vec<InputItem>> for ResponseInputItem {
let mime = mime_guess::from_path(&path)
.first()
.map(|m| m.essence_str().to_owned())
.unwrap_or_else(|| "application/octet-stream".to_string());
.unwrap_or_else(|| "image".to_string());
let encoded = base64::engine::general_purpose::STANDARD.encode(bytes);
Some(ContentItem::InputImage {
image_url: format!("data:{mime};base64,{encoded}"),

View File
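The effect of the fallback change above: when mime_guess cannot resolve a type, the data URL is now prefixed data:image;... instead of data:application/octet-stream;.... A minimal sketch with placeholder bytes (requires the base64 crate):

use base64::Engine;

fn main() {
    let bytes: &[u8] = &[0x89, b'P', b'N', b'G']; // placeholder image bytes
    let mime = "image"; // fallback when mime_guess finds no match
    let encoded = base64::engine::general_purpose::STANDARD.encode(bytes);
    let image_url = format!("data:{mime};base64,{encoded}");
    assert!(image_url.starts_with("data:image;base64,"));
}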

@@ -897,9 +897,26 @@ pub struct SessionMetaLine {
pub enum RolloutItem {
SessionMeta(SessionMetaLine),
ResponseItem(ResponseItem),
Compacted(CompactedItem),
TurnContext(TurnContextItem),
EventMsg(EventMsg),
}
#[derive(Serialize, Deserialize, Clone, Debug, TS)]
pub struct CompactedItem {
pub message: String,
}
#[derive(Serialize, Deserialize, Clone, Debug, TS)]
pub struct TurnContextItem {
pub cwd: PathBuf,
pub approval_policy: AskForApproval,
pub sandbox_policy: SandboxPolicy,
pub model: String,
pub effort: ReasoningEffortConfig,
pub summary: ReasoningSummaryConfig,
}
#[derive(Serialize, Deserialize, Clone)]
pub struct RolloutLine {
pub timestamp: String,
@@ -1064,6 +1081,9 @@ pub struct SessionConfiguredEvent {
/// Tell the client what model is being queried.
pub model: String,
/// The effort the model is putting into reasoning about the user's request.
pub reasoning_effort: ReasoningEffortConfig,
/// Identifier of the history log file (inode on Unix, 0 otherwise).
pub history_log_id: u64,
@@ -1152,6 +1172,7 @@ mod tests {
msg: EventMsg::SessionConfigured(SessionConfiguredEvent {
session_id: conversation_id,
model: "codex-mini-latest".to_string(),
reasoning_effort: ReasoningEffortConfig::default(),
history_log_id: 0,
history_entry_count: 0,
initial_messages: None,
@@ -1165,6 +1186,7 @@ mod tests {
"type": "session_configured",
"session_id": "67e55044-10b1-426f-9247-bb680e5fe0c8",
"model": "codex-mini-latest",
"reasoning_effort": "medium",
"history_log_id": 0,
"history_entry_count": 0,
"rollout_path": format!("{}", rollout_file.path().display()),

View File

@@ -11,7 +11,10 @@ use codex_ansi_escape::ansi_escape_line;
use codex_core::AuthManager;
use codex_core::ConversationManager;
use codex_core::config::Config;
use codex_core::config::persist_model_selection;
use codex_core::model_family::find_family_for_model;
use codex_core::protocol::TokenUsage;
use codex_core::protocol_config_types::ReasoningEffort as ReasoningEffortConfig;
use color_eyre::eyre::Result;
use color_eyre::eyre::WrapErr;
use crossterm::event::KeyCode;
@@ -37,6 +40,9 @@ pub(crate) struct App {
/// Config is stored here so we can recreate ChatWidgets as needed.
pub(crate) config: Config,
pub(crate) active_profile: Option<String>,
model_saved_to_profile: bool,
model_saved_to_global: bool,
pub(crate) file_search: FileSearchManager,
@@ -61,6 +67,7 @@ impl App {
tui: &mut tui::Tui,
auth_manager: Arc<AuthManager>,
config: Config,
active_profile: Option<String>,
initial_prompt: Option<String>,
initial_images: Vec<PathBuf>,
resume_selection: ResumeSelection,
@@ -119,6 +126,9 @@ impl App {
app_event_tx,
chat_widget,
config,
active_profile,
model_saved_to_profile: false,
model_saved_to_global: false,
file_search,
enhanced_keys_supported,
transcript_lines: Vec::new(),
@@ -288,10 +298,17 @@ impl App {
self.chat_widget.apply_file_search_result(query, matches);
}
AppEvent::UpdateReasoningEffort(effort) => {
self.chat_widget.set_reasoning_effort(effort);
self.on_update_reasoning_effort(effort);
}
AppEvent::UpdateModel(model) => {
self.chat_widget.set_model(model);
self.chat_widget.set_model(model.clone());
self.config.model = model.clone();
if let Some(family) = find_family_for_model(&model) {
self.config.model_family = family;
}
self.model_saved_to_profile = false;
self.model_saved_to_global = false;
self.show_model_save_hint();
}
AppEvent::UpdateAskForApprovalPolicy(policy) => {
self.chat_widget.set_approval_policy(policy);
@@ -304,7 +321,107 @@ impl App {
}
pub(crate) fn token_usage(&self) -> codex_core::protocol::TokenUsage {
self.chat_widget.token_usage().clone()
self.chat_widget.token_usage()
}
fn show_model_save_hint(&mut self) {
let model = self.config.model.clone();
if self.active_profile.is_some() {
self.chat_widget.add_info_message(format!(
"Model switched to {model}. Press Ctrl+S to save it for this profile, then press Ctrl+S again to set it as your global default."
));
} else {
self.chat_widget.add_info_message(format!(
"Model switched to {model}. Press Ctrl+S to save it as your global default."
));
}
}
fn on_update_reasoning_effort(&mut self, effort: ReasoningEffortConfig) {
let changed = self.config.model_reasoning_effort != effort;
self.chat_widget.set_reasoning_effort(effort);
self.config.model_reasoning_effort = effort;
if changed {
let show_hint = self.model_saved_to_profile || self.model_saved_to_global;
self.model_saved_to_profile = false;
self.model_saved_to_global = false;
if show_hint {
self.show_model_save_hint();
}
}
}
async fn persist_model_shortcut(&mut self) {
enum SaveScope<'a> {
Profile(&'a str),
Global,
AlreadySaved,
}
let scope = if let Some(profile) = self
.active_profile
.as_deref()
.filter(|_| !self.model_saved_to_profile)
{
SaveScope::Profile(profile)
} else if !self.model_saved_to_global {
SaveScope::Global
} else {
SaveScope::AlreadySaved
};
let model = self.config.model.clone();
let effort = self.config.model_reasoning_effort;
let codex_home = self.config.codex_home.clone();
match scope {
SaveScope::Profile(profile) => {
match persist_model_selection(&codex_home, Some(profile), &model, Some(effort))
.await
{
Ok(()) => {
self.model_saved_to_profile = true;
self.chat_widget.add_info_message(format!(
"Saved model {model} ({effort}) for profile `{profile}`. Press Ctrl+S again to make this your global default."
));
}
Err(err) => {
tracing::error!(
error = %err,
"failed to persist model selection via shortcut"
);
self.chat_widget.add_error_message(format!(
"Failed to save model preference for profile `{profile}`: {err}"
));
}
}
}
SaveScope::Global => {
match persist_model_selection(&codex_home, None, &model, Some(effort)).await {
Ok(()) => {
self.model_saved_to_global = true;
self.chat_widget.add_info_message(format!(
"Saved model {model} ({effort}) as your global default."
));
}
Err(err) => {
tracing::error!(
error = %err,
"failed to persist global model selection via shortcut"
);
self.chat_widget.add_error_message(format!(
"Failed to save global model preference: {err}"
));
}
}
}
SaveScope::AlreadySaved => {
self.chat_widget.add_info_message(
"Model preference already saved globally; no further action needed."
.to_string(),
);
}
}
}
async fn handle_key_event(&mut self, tui: &mut tui::Tui, key_event: KeyEvent) {
@@ -320,6 +437,14 @@ impl App {
self.overlay = Some(Overlay::new_transcript(self.transcript_lines.clone()));
tui.frame_requester().schedule_frame();
}
KeyEvent {
code: KeyCode::Char('s'),
modifiers: crossterm::event::KeyModifiers::CONTROL,
kind: KeyEventKind::Press,
..
} => {
self.persist_model_shortcut().await;
}
// Esc primes/advances backtracking only in normal (not working) mode
// with an empty composer. In any other state, forward Esc so the
// active UI (e.g. status indicator, modals, popups) handles it.
@@ -366,3 +491,67 @@ impl App {
};
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::app_backtrack::BacktrackState;
use crate::chatwidget::tests::make_chatwidget_manual_with_sender;
use crate::file_search::FileSearchManager;
use codex_core::CodexAuth;
use codex_core::ConversationManager;
use ratatui::text::Line;
use std::sync::Arc;
use std::sync::atomic::AtomicBool;
fn make_test_app() -> App {
let (chat_widget, app_event_tx, _rx, _op_rx) = make_chatwidget_manual_with_sender();
let config = chat_widget.config_ref().clone();
let server = Arc::new(ConversationManager::with_auth(CodexAuth::from_api_key(
"Test API Key",
)));
let file_search = FileSearchManager::new(config.cwd.clone(), app_event_tx.clone());
App {
server,
app_event_tx,
chat_widget,
config,
active_profile: None,
model_saved_to_profile: false,
model_saved_to_global: false,
file_search,
transcript_lines: Vec::<Line<'static>>::new(),
overlay: None,
deferred_history_lines: Vec::new(),
has_emitted_history_lines: false,
enhanced_keys_supported: false,
commit_anim_running: Arc::new(AtomicBool::new(false)),
backtrack: BacktrackState::default(),
}
}
#[test]
fn update_reasoning_effort_updates_config_and_resets_flags() {
let mut app = make_test_app();
app.model_saved_to_profile = true;
app.model_saved_to_global = true;
app.config.model_reasoning_effort = ReasoningEffortConfig::Medium;
app.chat_widget
.set_reasoning_effort(ReasoningEffortConfig::Medium);
app.on_update_reasoning_effort(ReasoningEffortConfig::High);
assert_eq!(
app.config.model_reasoning_effort,
ReasoningEffortConfig::High
);
assert_eq!(
app.chat_widget.config_ref().model_reasoning_effort,
ReasoningEffortConfig::High
);
assert!(!app.model_saved_to_profile);
assert!(!app.model_saved_to_global);
}
}

View File
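The two-stage Ctrl+S behavior above boils down to a small scope decision: the first press saves to the active profile (if any), the next press saves globally, and a further press is a no-op. A self-contained sketch that mirrors, rather than reuses, the private SaveScope logic:

#[derive(Debug, PartialEq)]
enum SaveScope<'a> {
    Profile(&'a str),
    Global,
    AlreadySaved,
}

fn scope<'a>(profile: Option<&'a str>, saved_to_profile: bool, saved_to_global: bool) -> SaveScope<'a> {
    if let Some(p) = profile.filter(|_| !saved_to_profile) {
        SaveScope::Profile(p)
    } else if !saved_to_global {
        SaveScope::Global
    } else {
        SaveScope::AlreadySaved
    }
}

fn main() {
    assert_eq!(scope(Some("work"), false, false), SaveScope::Profile("work"));
    assert_eq!(scope(Some("work"), true, false), SaveScope::Global);
    assert_eq!(scope(Some("work"), true, true), SaveScope::AlreadySaved);
}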

@@ -335,7 +335,7 @@ impl App {
self.trim_transcript_for_backtrack(drop_count);
self.render_transcript_once(tui);
if !prefill.is_empty() {
self.chat_widget.insert_str(prefill);
self.chat_widget.set_composer_text(prefill.to_string());
}
tui.frame_requester().schedule_frame();
}

View File

@@ -243,6 +243,10 @@ impl ChatComposer {
/// Replace the entire composer content with `text` and reset cursor.
pub(crate) fn set_text_content(&mut self, text: String) {
// Clear any existing content, placeholders, and attachments first.
self.textarea.set_text("");
self.pending_pastes.clear();
self.attached_images.clear();
self.textarea.set_text(&text);
self.textarea.set_cursor(0);
self.sync_command_popup();
@@ -483,7 +487,7 @@ impl ChatComposer {
} => {
// Hide popup without modifying text, remember token to avoid immediate reopen.
if let Some(tok) = Self::current_at_token(&self.textarea) {
self.dismissed_file_popup_token = Some(tok.to_string());
self.dismissed_file_popup_token = Some(tok);
}
self.active_popup = ActivePopup::None;
(InputResult::None, true)
@@ -542,7 +546,7 @@ impl ChatComposer {
Some(ext) if ext == "jpg" || ext == "jpeg" => "JPEG",
_ => "IMG",
};
self.attach_image(path_buf.clone(), w, h, format_label);
self.attach_image(path_buf, w, h, format_label);
// Add a trailing space to keep typing fluid.
self.textarea.insert_str(" ");
} else {
@@ -2119,7 +2123,7 @@ mod tests {
// Re-add and test backspace in middle: should break the placeholder string
// and drop the image mapping (same as text placeholder behavior).
composer.attach_image(path.clone(), 20, 10, "PNG");
composer.attach_image(path, 20, 10, "PNG");
let placeholder2 = composer.attached_images[0].placeholder.clone();
// Move cursor to roughly middle of placeholder
if let Some(start_pos) = composer.textarea.text().find(&placeholder2) {
@@ -2178,7 +2182,7 @@ mod tests {
let path1 = PathBuf::from("/tmp/image_dup1.png");
let path2 = PathBuf::from("/tmp/image_dup2.png");
composer.attach_image(path1.clone(), 10, 5, "PNG");
composer.attach_image(path1, 10, 5, "PNG");
// separate placeholders with a space for clarity
composer.handle_paste(" ".into());
composer.attach_image(path2.clone(), 10, 5, "PNG");
@@ -2227,7 +2231,7 @@ mod tests {
assert!(composer.textarea.text().starts_with("[image 3x2 PNG] "));
let imgs = composer.take_recent_submission_images();
assert_eq!(imgs, vec![tmp_path.clone()]);
assert_eq!(imgs, vec![tmp_path]);
}
#[test]

View File

@@ -564,7 +564,7 @@ mod tests {
let (tx_raw, rx) = unbounded_channel::<AppEvent>();
let tx = AppEventSender::new(tx_raw);
let mut pane = BottomPane::new(BottomPaneParams {
app_event_tx: tx.clone(),
app_event_tx: tx,
frame_requester: FrameRequester::test_dummy(),
has_input_focus: true,
enhanced_keys_supported: false,

View File

@@ -649,9 +649,7 @@ impl TextArea {
}
fn add_element(&mut self, range: Range<usize>) {
let elem = TextElement {
range: range.clone(),
};
let elem = TextElement { range };
self.elements.push(elem);
self.elements.sort_by_key(|e| e.range.start);
}

View File

@@ -574,14 +574,14 @@ impl ChatWidget {
self.active_exec_cell = Some(history_cell::new_active_exec_command(
ev.call_id.clone(),
ev.command.clone(),
ev.parsed_cmd.clone(),
ev.parsed_cmd,
));
}
} else {
self.active_exec_cell = Some(history_cell::new_active_exec_command(
ev.call_id.clone(),
ev.command.clone(),
ev.parsed_cmd.clone(),
ev.parsed_cmd,
));
}
@@ -804,7 +804,7 @@ impl ChatWidget {
"attach_image path={path:?} width={width} height={height} format={format_label}",
);
self.bottom_pane
.attach_image(path.clone(), width, height, format_label);
.attach_image(path, width, height, format_label);
self.request_redraw();
}
@@ -986,7 +986,7 @@ impl ChatWidget {
// Only show the text portion in conversation history.
if !text.is_empty() {
self.add_to_history(history_cell::new_user_prompt(text.clone()));
self.add_to_history(history_cell::new_user_prompt(text));
}
}
@@ -1055,10 +1055,10 @@ impl ChatWidget {
EventMsg::PlanUpdate(update) => self.on_plan_update(update),
EventMsg::ExecApprovalRequest(ev) => {
// For replayed events, synthesize an empty id (these should not occur).
self.on_exec_approval_request(id.clone().unwrap_or_default(), ev)
self.on_exec_approval_request(id.unwrap_or_default(), ev)
}
EventMsg::ApplyPatchApprovalRequest(ev) => {
self.on_apply_patch_approval_request(id.clone().unwrap_or_default(), ev)
self.on_apply_patch_approval_request(id.unwrap_or_default(), ev)
}
EventMsg::ExecCommandBegin(ev) => self.on_exec_command_begin(ev),
EventMsg::ExecCommandOutputDelta(delta) => self.on_exec_command_output_delta(delta),
@@ -1207,7 +1207,7 @@ impl ChatWidget {
self.bottom_pane.show_selection_view(
"Select model and reasoning level".to_string(),
Some("Switch between OpenAI models for this and future Codex CLI session".to_string()),
Some("Press Enter to confirm or Esc to go back".to_string()),
Some("Press Enter to confirm, Esc to go back, Ctrl+S to save".to_string()),
items,
);
}
@@ -1273,6 +1273,16 @@ impl ChatWidget {
self.config.model = model;
}
pub(crate) fn add_info_message(&mut self, message: String) {
self.add_to_history(history_cell::new_info_event(message));
self.request_redraw();
}
pub(crate) fn add_error_message(&mut self, message: String) {
self.add_to_history(history_cell::new_error_event(message));
self.request_redraw();
}
pub(crate) fn add_mcp_output(&mut self) {
if self.config.mcp_servers.is_empty() {
self.add_to_history(history_cell::empty_mcp_output());
@@ -1316,6 +1326,11 @@ impl ChatWidget {
self.bottom_pane.insert_str(text);
}
/// Replace the composer content with the provided text and reset cursor.
pub(crate) fn set_composer_text(&mut self, text: String) {
self.bottom_pane.set_composer_text(text);
}
pub(crate) fn show_esc_backtrack_hint(&mut self) {
self.bottom_pane.show_esc_backtrack_hint();
}
@@ -1436,4 +1451,4 @@ fn extract_first_bold(s: &str) -> Option<String> {
}
#[cfg(test)]
mod tests;
pub(crate) mod tests;

View File

@@ -20,7 +20,7 @@ pub(crate) fn spawn_agent(
) -> UnboundedSender<Op> {
let (codex_op_tx, mut codex_op_rx) = unbounded_channel::<Op>();
let app_event_tx_clone = app_event_tx.clone();
let app_event_tx_clone = app_event_tx;
tokio::spawn(async move {
let NewConversation {
conversation_id: _,
@@ -71,7 +71,7 @@ pub(crate) fn spawn_agent_from_existing(
) -> UnboundedSender<Op> {
let (codex_op_tx, mut codex_op_rx) = unbounded_channel::<Op>();
let app_event_tx_clone = app_event_tx.clone();
let app_event_tx_clone = app_event_tx;
tokio::spawn(async move {
// Forward the captured `SessionConfigured` event so it can be rendered in the UI.
let ev = codex_core::protocol::Event {

View File

@@ -138,6 +138,7 @@ fn resumed_initial_messages_render_history() {
let configured = codex_core::protocol::SessionConfiguredEvent {
session_id: conversation_id,
model: "test-model".to_string(),
reasoning_effort: ReasoningEffortConfig::default(),
history_log_id: 0,
history_entry_count: 0,
initial_messages: Some(vec![
@@ -246,6 +247,17 @@ fn make_chatwidget_manual() -> (
(widget, rx, op_rx)
}
pub(crate) fn make_chatwidget_manual_with_sender() -> (
ChatWidget,
AppEventSender,
tokio::sync::mpsc::UnboundedReceiver<AppEvent>,
tokio::sync::mpsc::UnboundedReceiver<Op>,
) {
let (widget, rx, op_rx) = make_chatwidget_manual();
let app_event_tx = widget.app_event_tx.clone();
(widget, app_event_tx, rx, op_rx)
}
fn drain_insert_history(
rx: &mut tokio::sync::mpsc::UnboundedReceiver<AppEvent>,
) -> Vec<Vec<ratatui::text::Line<'static>>> {
@@ -352,7 +364,7 @@ fn exec_approval_decision_truncates_multiline_and_long_commands() {
let long = format!("echo {}", "a".repeat(200));
let ev_long = ExecApprovalRequestEvent {
call_id: "call-long".into(),
command: vec!["bash".into(), "-lc".into(), long.clone()],
command: vec!["bash".into(), "-lc".into(), long],
cwd: std::env::current_dir().unwrap_or_else(|_| PathBuf::from(".")),
reason: None,
};

View File

@@ -737,10 +737,10 @@ mod tests {
let mut changes: HashMap<PathBuf, FileChange> = HashMap::new();
changes.insert(
abs_old.clone(),
abs_old,
FileChange::Update {
unified_diff: patch,
move_path: Some(abs_new.clone()),
move_path: Some(abs_new),
},
);

View File

@@ -601,6 +601,7 @@ pub(crate) fn new_session_info(
) -> PlainHistoryCell {
let SessionConfiguredEvent {
model,
reasoning_effort: _,
session_id: _,
history_log_id: _,
history_entry_count: _,
@@ -697,7 +698,7 @@ fn spinner(start_time: Option<Instant>) -> Span<'static> {
pub(crate) fn new_active_mcp_tool_call(invocation: McpInvocation) -> PlainHistoryCell {
let title_line = Line::from(vec!["tool".magenta(), " running...".dim()]);
let lines: Vec<Line> = vec![title_line, format_mcp_invocation(invocation.clone())];
let lines: Vec<Line> = vec![title_line, format_mcp_invocation(invocation)];
PlainHistoryCell { lines }
}
@@ -1052,6 +1053,12 @@ pub(crate) fn new_mcp_tools_output(
PlainHistoryCell { lines }
}
pub(crate) fn new_info_event(message: String) -> PlainHistoryCell {
let lines: Vec<Line<'static>> =
vec![vec![padded_emoji("💾").green(), " ".into(), message.into()].into()];
PlainHistoryCell { lines }
}
pub(crate) fn new_error_event(message: String) -> PlainHistoryCell {
// Use a hair space (U+200A) to create a subtle, near-invisible separation
// before the text. VS16 is intentionally omitted to keep spacing tighter
@@ -1324,7 +1331,7 @@ fn format_mcp_invocation<'a>(invocation: McpInvocation) -> Line<'a> {
let invocation_spans = vec![
invocation.server.clone().cyan(),
".".into(),
invocation.tool.clone().cyan(),
invocation.tool.cyan(),
"(".into(),
args_str.dim(),
")".into(),

View File

@@ -11,8 +11,10 @@ use codex_core::RolloutRecorder;
use codex_core::config::Config;
use codex_core::config::ConfigOverrides;
use codex_core::config::ConfigToml;
use codex_core::config::GPT5_HIGH_MODEL;
use codex_core::config::find_codex_home;
use codex_core::config::load_config_as_toml_with_cli_overrides;
use codex_core::config::persist_model_selection;
use codex_core::protocol::AskForApproval;
use codex_core::protocol::SandboxPolicy;
use codex_ollama::DEFAULT_OSS_MODEL;
@@ -47,6 +49,7 @@ pub mod live_wrap;
mod markdown;
mod markdown_render;
mod markdown_stream;
mod new_model_popup;
pub mod onboarding;
mod pager_overlay;
mod render;
@@ -65,12 +68,14 @@ mod wrapping;
#[cfg(not(debug_assertions))]
mod updates;
pub use cli::Cli;
use crate::new_model_popup::ModelUpgradeDecision;
use crate::new_model_popup::run_model_upgrade_popup;
use crate::onboarding::TrustDirectorySelection;
use crate::onboarding::onboarding_screen::OnboardingScreenArgs;
use crate::onboarding::onboarding_screen::run_onboarding_app;
use crate::tui::Tui;
pub use cli::Cli;
use codex_core::internal_storage::InternalStorage;
// (tests access modules directly within the crate)
@@ -174,14 +179,21 @@ pub async fn run_main(
}
};
let cli_profile_override = cli.config_profile.clone();
let active_profile = cli_profile_override
.clone()
.or_else(|| config_toml.profile.clone());
let should_show_trust_screen = determine_repo_trust_state(
&mut config,
&config_toml,
approval_policy,
sandbox_mode,
cli.config_profile.clone(),
cli_profile_override,
)?;
let internal_storage = InternalStorage::load(&config.codex_home);
let log_dir = codex_core::config::log_dir(&config)?;
std::fs::create_dir_all(&log_dir)?;
// Open (or create) your log file, appending to it.
@@ -224,14 +236,22 @@ pub async fn run_main(
let _ = tracing_subscriber::registry().with(file_layer).try_init();
- run_ratatui_app(cli, config, should_show_trust_screen)
-     .await
-     .map_err(|err| std::io::Error::other(err.to_string()))
+ run_ratatui_app(
+     cli,
+     config,
+     internal_storage,
+     active_profile,
+     should_show_trust_screen,
+ )
+ .await
+ .map_err(|err| std::io::Error::other(err.to_string()))
}
async fn run_ratatui_app(
cli: Cli,
config: Config,
mut internal_storage: InternalStorage,
active_profile: Option<String>,
should_show_trust_screen: bool,
) -> color_eyre::Result<codex_core::protocol::TokenUsage> {
let mut config = config;
@@ -300,14 +320,6 @@ async fn run_ratatui_app(
// Initialize high-fidelity session event logging if enabled.
session_log::maybe_init(&config);
- let Cli {
-     prompt,
-     images,
-     resume,
-     r#continue,
-     ..
- } = cli;
let auth_manager = AuthManager::shared(config.codex_home.clone());
let login_status = get_login_status(&config);
let should_show_onboarding =
@@ -330,7 +342,7 @@ async fn run_ratatui_app(
}
}
- let resume_selection = if r#continue {
+ let resume_selection = if cli.r#continue {
match RolloutRecorder::list_conversations(&config.codex_home, 1, None).await {
Ok(page) => page
.items
@@ -339,7 +351,7 @@ async fn run_ratatui_app(
.unwrap_or(resume_picker::ResumeSelection::StartFresh),
Err(_) => resume_picker::ResumeSelection::StartFresh,
}
- } else if resume {
+ } else if cli.resume {
match resume_picker::run_resume_picker(&mut tui, &config.codex_home).await? {
resume_picker::ResumeSelection::Exit => {
restore();
@@ -352,10 +364,42 @@ async fn run_ratatui_app(
resume_picker::ResumeSelection::StartFresh
};
if should_show_model_rollout_prompt(
&cli,
&config,
active_profile.as_deref(),
internal_storage.gpt_5_high_model_prompt_seen,
) {
internal_storage.gpt_5_high_model_prompt_seen = true;
if let Err(e) = internal_storage.persist().await {
error!("Failed to persist internal storage: {e:?}");
}
let upgrade_decision = run_model_upgrade_popup(&mut tui).await?;
let switch_to_new_model = upgrade_decision == ModelUpgradeDecision::Switch;
if switch_to_new_model {
config.model = GPT5_HIGH_MODEL.to_owned();
if let Err(e) = persist_model_selection(
&config.codex_home,
active_profile.as_deref(),
&config.model,
None,
)
.await
{
error!("Failed to persist model selection: {e:?}");
}
}
}
let Cli { prompt, images, .. } = cli;
let app_result = App::run(
&mut tui,
auth_manager,
config,
active_profile,
prompt,
images,
resume_selection,
@@ -463,11 +507,44 @@ fn should_show_login_screen(login_status: LoginStatus, config: &Config) -> bool
login_status == LoginStatus::NotAuthenticated
}
fn should_show_model_rollout_prompt(
cli: &Cli,
config: &Config,
active_profile: Option<&str>,
gpt_5_high_model_prompt_seen: bool,
) -> bool {
// TODO(jif) drop.
let debug_high_enabled = std::env::var("DEBUG_HIGH")
.map(|v| v.eq_ignore_ascii_case("1"))
.unwrap_or(false);
active_profile.is_none()
&& debug_high_enabled
&& cli.model.is_none()
&& !gpt_5_high_model_prompt_seen
&& config.model_provider.requires_openai_auth
&& !cli.oss
}
#[cfg(test)]
mod tests {
use super::*;
use clap::Parser;
use std::sync::Once;
fn enable_debug_high_env() {
static DEBUG_HIGH_ONCE: Once = Once::new();
DEBUG_HIGH_ONCE.call_once(|| {
// SAFETY: Tests run in a controlled environment and require this env variable to
// opt into the GPT-5 High rollout prompt gating. We only set it once.
unsafe {
std::env::set_var("DEBUG_HIGH", "1");
}
});
}
fn make_config() -> Config {
enable_debug_high_env();
Config::load_from_base_config_with_overrides(
ConfigToml::default(),
ConfigOverrides::default(),
@@ -484,4 +561,37 @@ mod tests {
&cfg
));
}
#[test]
fn shows_model_rollout_prompt_for_default_model() {
let cli = Cli::parse_from(["codex"]);
let cfg = make_config();
assert!(should_show_model_rollout_prompt(&cli, &cfg, None, false));
}
#[test]
fn hides_model_rollout_prompt_when_marked_seen() {
let cli = Cli::parse_from(["codex"]);
let cfg = make_config();
assert!(!should_show_model_rollout_prompt(&cli, &cfg, None, true));
}
#[test]
fn hides_model_rollout_prompt_when_cli_overrides_model() {
let cli = Cli::parse_from(["codex", "--model", "gpt-4.1"]);
let cfg = make_config();
assert!(!should_show_model_rollout_prompt(&cli, &cfg, None, false));
}
#[test]
fn hides_model_rollout_prompt_when_profile_active() {
let cli = Cli::parse_from(["codex"]);
let cfg = make_config();
assert!(!should_show_model_rollout_prompt(
&cli,
&cfg,
Some("gpt5"),
false,
));
}
}

View File

@@ -0,0 +1,155 @@
use crate::tui::FrameRequester;
use crate::tui::Tui;
use crate::tui::TuiEvent;
use codex_core::config::GPT5_HIGH_MODEL;
use color_eyre::eyre::Result;
use crossterm::event::KeyCode;
use crossterm::event::KeyEvent;
use ratatui::buffer::Buffer;
use ratatui::layout::Rect;
use ratatui::prelude::Widget;
use ratatui::style::Stylize;
use ratatui::text::Line;
use ratatui::widgets::Clear;
use ratatui::widgets::Paragraph;
use ratatui::widgets::WidgetRef;
use ratatui::widgets::Wrap;
use tokio_stream::StreamExt;
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub(crate) enum ModelUpgradeDecision {
Switch,
KeepCurrent,
}
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
enum ModelUpgradeOption {
TryNewModel,
KeepCurrent,
}
struct ModelUpgradePopup {
highlighted: ModelUpgradeOption,
decision: Option<ModelUpgradeDecision>,
request_frame: FrameRequester,
}
impl ModelUpgradePopup {
fn new(request_frame: FrameRequester) -> Self {
Self {
highlighted: ModelUpgradeOption::TryNewModel,
decision: None,
request_frame,
}
}
fn handle_key_event(&mut self, key_event: KeyEvent) {
match key_event.code {
KeyCode::Up | KeyCode::Char('k') => self.highlight(ModelUpgradeOption::TryNewModel),
KeyCode::Down | KeyCode::Char('j') => self.highlight(ModelUpgradeOption::KeepCurrent),
KeyCode::Char('1') => self.select(ModelUpgradeOption::TryNewModel),
KeyCode::Char('2') => self.select(ModelUpgradeOption::KeepCurrent),
KeyCode::Enter => self.select(self.highlighted),
KeyCode::Esc => self.select(ModelUpgradeOption::KeepCurrent),
_ => {}
}
}
fn highlight(&mut self, option: ModelUpgradeOption) {
if self.highlighted != option {
self.highlighted = option;
self.request_frame.schedule_frame();
}
}
fn select(&mut self, option: ModelUpgradeOption) {
self.decision = Some(option.into());
self.request_frame.schedule_frame();
}
}
impl From<ModelUpgradeOption> for ModelUpgradeDecision {
fn from(option: ModelUpgradeOption) -> Self {
match option {
ModelUpgradeOption::TryNewModel => ModelUpgradeDecision::Switch,
ModelUpgradeOption::KeepCurrent => ModelUpgradeDecision::KeepCurrent,
}
}
}
impl WidgetRef for &ModelUpgradePopup {
fn render_ref(&self, area: Rect, buf: &mut Buffer) {
Clear.render(area, buf);
let mut lines: Vec<Line> = vec![
Line::from(vec![
"> ".into(),
format!("Try {GPT5_HIGH_MODEL} as your default model").bold(),
]),
format!(" {GPT5_HIGH_MODEL} is our latest model tuned for coding workflows.").into(),
" Switch now or keep your current default; you can change models any time.".into(),
"".into(),
];
let create_option =
|index: usize, option: ModelUpgradeOption, text: &str| -> Line<'static> {
if self.highlighted == option {
Line::from(vec![
format!("> {}. ", index + 1).cyan(),
text.to_owned().cyan(),
])
} else {
format!(" {}. {text}", index + 1).into()
}
};
lines.push(create_option(
0,
ModelUpgradeOption::TryNewModel,
&format!("Yes, switch me to {GPT5_HIGH_MODEL}"),
));
lines.push(create_option(
1,
ModelUpgradeOption::KeepCurrent,
"Not right now",
));
lines.push("".into());
lines.push(
" Press Enter to confirm or Esc to keep your current model"
.dim()
.into(),
);
Paragraph::new(lines)
.wrap(Wrap { trim: false })
.render(area, buf);
}
}
pub(crate) async fn run_model_upgrade_popup(tui: &mut Tui) -> Result<ModelUpgradeDecision> {
let mut popup = ModelUpgradePopup::new(tui.frame_requester());
tui.draw(u16::MAX, |frame| {
frame.render_widget_ref(&popup, frame.area());
})?;
let events = tui.event_stream();
tokio::pin!(events);
while popup.decision.is_none() {
if let Some(event) = events.next().await {
match event {
TuiEvent::Key(key_event) => popup.handle_key_event(key_event),
TuiEvent::Draw => {
let _ = tui.draw(u16::MAX, |frame| {
frame.render_widget_ref(&popup, frame.area());
});
}
_ => {}
}
} else {
break;
}
}
Ok(popup.decision.unwrap_or(ModelUpgradeDecision::KeepCurrent))
}

View File

@@ -432,7 +432,7 @@ impl AuthModeWidget {
match &mut *guard {
SignInState::ApiKeyEntry(state) => {
if state.value.is_empty() {
- if let Some(prefill) = prefill_from_env.clone() {
+ if let Some(prefill) = prefill_from_env {
state.value = prefill;
state.prepopulated_from_env = true;
} else {

View File

@@ -71,7 +71,7 @@ impl OnboardingScreen {
config,
} = args;
let cwd = config.cwd.clone();
- let codex_home = config.codex_home.clone();
+ let codex_home = config.codex_home;
let mut steps: Vec<Step> = vec![Step::Welcome(WelcomeWidget {
is_logged_in: !matches!(login_status, LoginStatus::NotAuthenticated),
})];

View File

@@ -32,7 +32,7 @@ pub(crate) struct StatusIndicatorWidget {
}
// Format elapsed seconds into a compact human-friendly form used by the status line.
- // Examples: 0s, 59s, 1m00s, 59m59s, 1h00m00s, 2h03m09s
+ // Examples: 0s, 59s, 1m 00s, 59m 59s, 1h 00m 00s, 2h 03m 09s
fn fmt_elapsed_compact(elapsed_secs: u64) -> String {
if elapsed_secs < 60 {
return format!("{elapsed_secs}s");
@@ -40,12 +40,12 @@ fn fmt_elapsed_compact(elapsed_secs: u64) -> String {
if elapsed_secs < 3600 {
let minutes = elapsed_secs / 60;
let seconds = elapsed_secs % 60;
return format!("{minutes}m{seconds:02}s");
return format!("{minutes}m {seconds:02}s");
}
let hours = elapsed_secs / 3600;
let minutes = (elapsed_secs % 3600) / 60;
let seconds = elapsed_secs % 60;
format!("{hours}h{minutes:02}m{seconds:02}s")
format!("{hours}h {minutes:02}m {seconds:02}s")
}
impl StatusIndicatorWidget {
@@ -209,13 +209,13 @@ mod tests {
assert_eq!(fmt_elapsed_compact(0), "0s");
assert_eq!(fmt_elapsed_compact(1), "1s");
assert_eq!(fmt_elapsed_compact(59), "59s");
- assert_eq!(fmt_elapsed_compact(60), "1m00s");
- assert_eq!(fmt_elapsed_compact(61), "1m01s");
- assert_eq!(fmt_elapsed_compact(3 * 60 + 5), "3m05s");
- assert_eq!(fmt_elapsed_compact(59 * 60 + 59), "59m59s");
- assert_eq!(fmt_elapsed_compact(3600), "1h00m00s");
- assert_eq!(fmt_elapsed_compact(3600 + 60 + 1), "1h01m01s");
- assert_eq!(fmt_elapsed_compact(25 * 3600 + 2 * 60 + 3), "25h02m03s");
+ assert_eq!(fmt_elapsed_compact(60), "1m 00s");
+ assert_eq!(fmt_elapsed_compact(61), "1m 01s");
+ assert_eq!(fmt_elapsed_compact(3 * 60 + 5), "3m 05s");
+ assert_eq!(fmt_elapsed_compact(59 * 60 + 59), "59m 59s");
+ assert_eq!(fmt_elapsed_compact(3600), "1h 00m 00s");
+ assert_eq!(fmt_elapsed_compact(3600 + 60 + 1), "1h 01m 01s");
+ assert_eq!(fmt_elapsed_compact(25 * 3600 + 2 * 60 + 3), "25h 02m 03s");
}
#[test]

View File

@@ -244,7 +244,7 @@ impl UserApprovalWidget {
"You ".into(),
"approved".bold(),
" codex to run ".into(),
- snippet.clone().dim(),
+ snippet.dim(),
" this time".bold(),
]);
}
@@ -254,7 +254,7 @@ impl UserApprovalWidget {
"You ".into(),
"approved".bold(),
" codex to run ".into(),
- snippet.clone().dim(),
+ snippet.dim(),
" every time this session".bold(),
]);
}
@@ -264,7 +264,7 @@ impl UserApprovalWidget {
"You ".into(),
"did not approve".bold(),
" codex to run ".into(),
- snippet.clone().dim(),
+ snippet.dim(),
]);
}
ReviewDecision::Abort => {
@@ -273,7 +273,7 @@ impl UserApprovalWidget {
"You ".into(),
"canceled".bold(),
" the request to run ".into(),
- snippet.clone().dim(),
+ snippet.dim(),
]);
}
}

View File

@@ -1,4 +1,4 @@
{"ts":"2025-08-09T15:51:04.827Z","dir":"meta","kind":"session_start","cwd":"/Users/easong/code/codex/codex-rs","model":"gpt-5","model_provider_id":"openai","model_provider_name":"OpenAI"}
{"ts":"2025-08-09T15:51:04.827Z","dir":"meta","kind":"session_start","cwd":"/Users/easong/code/codex/codex-rs","model":"gpt-5","reasoning_effort":"medium","model_provider_id":"openai","model_provider_name":"OpenAI"}
{"ts":"2025-08-09T15:51:04.827Z","dir":"to_tui","kind":"key_event","event":"KeyEvent { code: Char('c'), modifiers: KeyModifiers(0x0), kind: Press, state: KeyEventState(0x0) }"}
{"ts":"2025-08-09T15:51:04.827Z","dir":"to_tui","kind":"key_event","event":"KeyEvent { code: Char('o'), modifiers: KeyModifiers(0x0), kind: Press, state: KeyEventState(0x0) }"}
{"ts":"2025-08-09T15:51:04.827Z","dir":"to_tui","kind":"key_event","event":"KeyEvent { code: Char('m'), modifiers: KeyModifiers(0x0), kind: Press, state: KeyEventState(0x0) }"}
@@ -34,7 +34,7 @@
{"ts":"2025-08-09T15:51:04.829Z","dir":"to_tui","kind":"app_event","variant":"RequestRedraw"}
{"ts":"2025-08-09T15:51:04.829Z","dir":"to_tui","kind":"log_line","line":"[INFO codex_core::codex] resume_path: None"}
{"ts":"2025-08-09T15:51:04.830Z","dir":"to_tui","kind":"app_event","variant":"Redraw"}
{"ts":"2025-08-09T15:51:04.856Z","dir":"to_tui","kind":"codex_event","payload":{"id":"0","msg":{"type":"session_configured","session_id":"d126e3d0-80ed-480a-be8c-09d97ff602cf","model":"gpt-5","history_log_id":2532619,"history_entry_count":339,"rollout_path":"/tmp/codex-test-rollout.jsonl"}}}
{"ts":"2025-08-09T15:51:04.856Z","dir":"to_tui","kind":"codex_event","payload":{"id":"0","msg":{"type":"session_configured","session_id":"d126e3d0-80ed-480a-be8c-09d97ff602cf","model":"gpt-5","reasoning_effort":"medium","history_log_id":2532619,"history_entry_count":339,"rollout_path":"/tmp/codex-test-rollout.jsonl"}}}
{"ts":"2025-08-09T15:51:04.856Z","dir":"to_tui","kind":"insert_history","lines":9}
{"ts":"2025-08-09T15:51:04.857Z","dir":"to_tui","kind":"app_event","variant":"RequestRedraw"}
{"ts":"2025-08-09T15:51:04.857Z","dir":"to_tui","kind":"app_event","variant":"RequestRedraw"}
@@ -16447,7 +16447,7 @@
{"ts":"2025-08-09T16:06:58.083Z","dir":"to_tui","kind":"app_event","variant":"RequestRedraw"}
{"ts":"2025-08-09T16:06:58.085Z","dir":"to_tui","kind":"app_event","variant":"Redraw"}
{"ts":"2025-08-09T16:06:58.085Z","dir":"to_tui","kind":"log_line","line":"[INFO codex_core::codex] resume_path: None"}
{"ts":"2025-08-09T16:06:58.136Z","dir":"to_tui","kind":"codex_event","payload":{"id":"0","msg":{"type":"session_configured","session_id":"c7df96da-daec-4fe9-aed9-3cd19b7a6192","model":"gpt-5","history_log_id":2532619,"history_entry_count":342,"rollout_path":"/tmp/codex-test-rollout.jsonl"}}}
{"ts":"2025-08-09T16:06:58.136Z","dir":"to_tui","kind":"codex_event","payload":{"id":"0","msg":{"type":"session_configured","session_id":"c7df96da-daec-4fe9-aed9-3cd19b7a6192","model":"gpt-5","reasoning_effort":"medium","history_log_id":2532619,"history_entry_count":342,"rollout_path":"/tmp/codex-test-rollout.jsonl"}}}
{"ts":"2025-08-09T16:06:58.136Z","dir":"to_tui","kind":"insert_history","lines":9}
{"ts":"2025-08-09T16:06:58.136Z","dir":"to_tui","kind":"app_event","variant":"RequestRedraw"}
{"ts":"2025-08-09T16:06:58.136Z","dir":"to_tui","kind":"app_event","variant":"RequestRedraw"}

View File

@@ -38,5 +38,49 @@ args = ["-y", "mcp-server"]
env = { "API_KEY" = "value" }
```
## Using Codex as an MCP Server
The Codex CLI can also be run as an MCP _server_ via `codex mcp`. For example, you can use `codex mcp` to make Codex available as a tool inside of a multi-agent framework like the OpenAI [Agents SDK](https://platform.openai.com/docs/guides/agents).
### Codex MCP Server Quickstart
You can launch a Codex MCP server with the [Model Context Protocol Inspector](https://modelcontextprotocol.io/legacy/tools/inspector):
```bash
npx @modelcontextprotocol/inspector codex mcp
```
Send a `tools/list` request and you will see that there are two tools available:
**`codex`** - Run a Codex session. Accepts configuration parameters matching the Codex Config struct. The `codex` tool takes the following properties:
Property | Type | Description
-------------------|----------|----------------------------------------------------------------------------------------------------------
**`prompt`** (required) | string | The initial user prompt to start the Codex conversation.
`approval-policy` | string | Approval policy for shell commands generated by the model: `untrusted`, `on-failure`, `never`.
`base-instructions` | string | The set of instructions to use instead of the default ones.
`config` | object | Individual [config settings](https://github.com/openai/codex/blob/main/docs/config.md#config) that will override what is in `$CODEX_HOME/config.toml`.
`cwd` | string | Working directory for the session. If relative, resolved against the server process's current directory.
`include-plan-tool` | boolean | Whether to include the plan tool in the conversation.
`model` | string | Optional override for the model name (e.g. `o3`, `o4-mini`).
`profile` | string | Configuration profile from `config.toml` to specify default options.
`sandbox` | string | Sandbox mode: `read-only`, `workspace-write`, or `danger-full-access`.
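For orientation, here is a sketch of what a `tools/call` request for the `codex` tool could look like on the wire; the argument values below are purely illustrative, not defaults:
```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "codex",
    "arguments": {
      "prompt": "Write a hello-world script in Python.",
      "approval-policy": "never",
      "sandbox": "workspace-write",
      "config": { "model": "o4-mini" }
    }
  }
}
```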
**`codex-reply`** - Continue a Codex session by providing the conversation id and prompt. The `codex-reply` tool takes the following properties:
Property | Type | Description
-----------|--------|---------------------------------------------------------------
**`prompt`** (required) | string | The next user prompt to continue the Codex conversation.
**`conversationId`** (required) | string | The id of the conversation to continue.
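To continue a session, pass the conversation id from the initial `codex` call back through `codex-reply`; a minimal sketch (the id shown is a placeholder):
```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "codex-reply",
    "arguments": {
      "conversationId": "00000000-0000-0000-0000-000000000000",
      "prompt": "Now add a dark mode toggle."
    }
  }
}
```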
### Trying it Out
> [!TIP]
> The Codex MCP server is still somewhat experimental. Feel free to play around with it and provide feedback via GitHub issues.
> Codex often takes a few minutes to run. To accommodate this, adjust the MCP inspector's Request and Total timeouts to 600000ms (10 minutes) under ⛭ Configuration.
Use the MCP inspector and `codex mcp` to build a simple tic-tac-toe game with the following settings:
**approval-policy:** never
**prompt:** Implement a simple tic-tac-toe game with HTML, Javascript, and CSS. Write the game in a single file called index.html.
**sandbox:** workspace-write
Click "Run Tool" and you should see a list of events emitted from the Codex MCP server as it builds the game.