Compare commits

...

18 Commits

Author SHA1 Message Date
Dylan Hurd
72d8bde988 [apply-patch] Handle multiple context lines 2025-08-25 15:09:37 -07:00
Dylan
7f7d1e30f3 [exec] Clean up apply-patch tests (#2648)
## Summary
These tests were getting a bit unwieldy, and they're starting to become
load-bearing. Let's clean them up and get them working solidly so we
can easily expand this harness with new tests.

## Test Plan
- [x] Tests continue to pass
2025-08-25 15:08:01 -07:00
Michael Bolin
568d6f819f fix: use backslash as path separator on Windows (#2684)
I noticed that when running `/status` on Windows, I saw something like:

```
Path: ~/src\codex
```

so now it should be:

```
Path: ~\src\codex
```

Admittedly, `~` is understood by PowerShell but not by Windows in
general; still, it's much less verbose than `%USERPROFILE%`.
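
A rough sketch of the idea (the helper name is hypothetical; the real fix lives in the `/status` rendering code):

```rust
use std::path::MAIN_SEPARATOR_STR;

/// Normalize a display path to the platform separator so the whole string
/// is consistent: `~\src\codex` on Windows, `~/src/codex` elsewhere.
fn normalize_for_display(path: &str) -> String {
    path.replace(['/', '\\'], MAIN_SEPARATOR_STR)
}
```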
2025-08-25 14:47:17 -07:00
Jeremy Rose
251c4c2ba9 tui: queue messages (#2637)
https://github.com/user-attachments/assets/44349aa6-3b97-4029-99e1-5484e9a8775f
2025-08-25 21:38:38 +00:00
Odysseas Yiakoumis
a6c346b9e1 avoid error when /compact response has no token_usage (#2417) (#2640)
**Context**  
When running `/compact`, `drain_to_completed` would return an error if
`token_usage` was `None` in `ResponseEvent::Completed`. This made the
command fail even though everything else had succeeded.

**What changed**  
- Instead of erroring, we now just check `if let Some(token_usage)`
before sending the event.
- If it’s missing, we skip it and move on.  

**Why**  
This makes `AgentTask::compact()` behave in the same way as
`AgentTask::spawn()`, which also doesn’t error out when `token_usage`
isn’t available. Keeps things consistent and avoids unnecessary
failures.
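
A minimal sketch of the described check, with simplified stand-in types (`TokenUsage` and `send_token_count` are illustrative, not the real API):

```rust
#[derive(Default)]
struct TokenUsage {
    total_tokens: u64,
}

// Stand-in for sending the TokenCount event over the session channel.
fn send_token_count(usage: TokenUsage) {
    println!("tokens used: {}", usage.total_tokens);
}

fn on_completed(token_usage: Option<TokenUsage>) {
    // Only emit the event when the provider reported usage; `None` is no
    // longer an error, matching `AgentTask::spawn()`.
    if let Some(usage) = token_usage {
        send_token_count(usage);
    }
}
```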

**Fixes**  
Closes #2417

---------

Co-authored-by: Ahmed Ibrahim <aibrahim@openai.com>
2025-08-25 18:42:22 +00:00
Gabriel Peal
e307040f10 Index file (#2678) 2025-08-25 13:23:32 -04:00
dependabot[bot]
7d67e54628 chore(deps): bump toml_edit from 0.23.3 to 0.23.4 in /codex-rs (#2665) 2025-08-25 08:20:30 -07:00
Michael Bolin
295ca27e98 fix: Scope ExecSessionManager to Session instead of using global singleton (#2664)
The `SessionManager` in `exec_command` owns a number of
`ExecCommandSession` objects where `ExecCommandSession` has a
non-trivial implementation of `Drop`, so we want to be able to drop an
individual `SessionManager` to help ensure things get cleaned up in a
timely fashion. To that end, we should have one `SessionManager` per
session rather than one global one for the lifetime of the CLI process.
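
A sketch of the ownership change, with bodies elided:

```rust
// `ExecCommandSession` has a non-trivial `Drop` in the real code.
struct ExecCommandSession;

#[derive(Default)]
struct ExecSessionManager {
    sessions: Vec<ExecCommandSession>,
}

// One manager per Session (instead of a global `LazyLock` singleton), so
// dropping the Session drops its `ExecCommandSession`s promptly.
struct Session {
    session_manager: ExecSessionManager,
}
```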
2025-08-24 22:52:49 -07:00
Michael Bolin
7b20db942a fix: build is broken on main; introduce ToolsConfigParams to help fix (#2663)
`ToolsConfig::new()` taking a large number of boolean params was hard to
manage and it finally bit us (see
https://github.com/openai/codex/pull/2660). This changes
`ToolsConfig::new()` so that it takes a struct (and also reduces the
visibility of some members, where possible).
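
A sketch of the pattern (fields abridged from the real `ToolsConfigParams`):

```rust
// Named struct fields instead of positional bools keep call sites
// self-describing and make adding a new toggle a one-line change.
struct ToolsConfigParams {
    include_plan_tool: bool,
    include_apply_patch_tool: bool,
    use_streamable_shell_tool: bool,
}

struct ToolsConfig {
    plan_tool: bool,
}

impl ToolsConfig {
    fn new(params: &ToolsConfigParams) -> Self {
        Self {
            plan_tool: params.include_plan_tool,
        }
    }
}
```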
2025-08-24 22:43:42 -07:00
Uhyeon Park
ee2ccb5cb6 Fix cache hit rate by making MCP tools order deterministic (#2611)
Fixes https://github.com/openai/codex/issues/2610

This PR sorts the tools in `get_openai_tools` by name to ensure a
consistent MCP tool order.

Currently, MCP servers are stored in a HashMap, which does not guarantee
ordering. As a result, the tool order changes across turns, effectively
breaking prompt caching in multi-turn sessions.

An alternative solution would be to replace the HashMap with an ordered
structure, but that would require a much larger code change. Given that
it is unrealistic to have so many MCP tools that sorting would cause
performance issues, this lightweight fix is chosen instead.
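
In essence (a simplified sketch, not the actual `get_openai_tools` signature):

```rust
// Sort tool names after collecting them from the HashMap-backed MCP
// connection manager, so the order (and therefore the prompt prefix the
// cache keys on) is identical across turns.
fn deterministic_tool_order(mut tool_names: Vec<String>) -> Vec<String> {
    tool_names.sort();
    tool_names
}
```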

By ensuring deterministic tool order, this change should significantly
improve cache hit rates and prevent users from hitting usage limits too
quickly. (For reference, my own sessions last week reached the limit
unusually fast, with cache hit rates falling below 1%.)

## Result

After this fix, sessions with MCP servers now show caching behavior
almost identical to sessions without MCP servers.
Without MCP | With MCP
:-------------------------:|:-------------------------:
<img width="1368" height="1634" alt="image" src="https://github.com/user-attachments/assets/26edab45-7be8-4d6a-b471-558016615fc8" /> | <img width="1356" height="1632" alt="image" src="https://github.com/user-attachments/assets/5f3634e0-3888-420b-9aaf-deefd9397b40" />
2025-08-24 19:56:24 -07:00
ae
8b49346657 fix: update gpt-5 stats (#2649)
- To match what's on <https://platform.openai.com/docs/models/gpt-5>.
2025-08-24 16:45:41 -07:00
dependabot[bot]
e49116a4c5 chore(deps): bump whoami from 1.6.0 to 1.6.1 in /codex-rs (#2497)
Bumps [whoami](https://github.com/ardaku/whoami) from 1.6.0 to 1.6.1.
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a
href="https://github.com/ardaku/whoami/commits">compare view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=whoami&package-manager=cargo&previous-version=1.6.0&new-version=1.6.1)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


---


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-24 14:38:30 -07:00
Michael Bolin
517ffd00c6 feat: use the arg0 trick with apply_patch (#2646)
Historically, Codex CLI has treated `apply_patch` (and its occasional
misspelling, `applypatch`) as a "virtual CLI," intercepting it when it
appears as the first arg to `command` for the `"container.exec"`,
`"shell"`, or `"local_shell"` tools.

This approach has a known limitation: if, say, the model creates a
Python script that runs `apply_patch` and then tries to run that script,
we have no insight into what the model is trying to do, and the script
fails because `apply_patch` was never really on the `PATH`.

One way to solve this problem is to require users to install an
`apply_patch` executable alongside the `codex` executable (or at least
put it someplace where Codex can discover it). However, to keep Codex CLI
a standalone executable, we exploit "the arg0 trick": we create a
temporary directory with an entry named `apply_patch` and prepend that
directory to the `PATH` for the duration of the invocation of Codex.

- On UNIX, `apply_patch` is a symlink to `codex`, which now changes its
behavior to behave like `apply_patch` if arg0 is `apply_patch` (or
`applypatch`)
- On Windows, `apply_patch.bat` is a batch script that runs `codex
--codex-run-as-apply-patch %*`, as Codex also changes its behavior if
the first argument is `--codex-run-as-apply-patch`.
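
A simplified sketch of the arg0 dispatch (the real entry point appears in the diff below):

```rust
fn main() {
    // Inspect the executable name (arg0) before normal CLI parsing.
    let exe_name = std::env::args()
        .next()
        .map(|argv0| {
            std::path::Path::new(&argv0)
                .file_stem()
                .and_then(|s| s.to_str())
                .unwrap_or_default()
                .to_owned()
        })
        .unwrap_or_default();
    if exe_name == "apply_patch" || exe_name == "applypatch" {
        // In the real binary this path never returns; here we just bail.
        eprintln!("would run as apply_patch");
        return;
    }
    // ...otherwise fall through to the regular codex CLI...
}
```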
2025-08-24 14:35:51 -07:00
Dylan
4157788310 [apply_patch] disable default freeform tool (#2643)
## Summary
We're seeing some issues in the freeform tool - let's disable it by
default until it stabilizes.

## Testing
- [x] Ran locally, confirmed codex-cli could make edits
2025-08-24 11:12:37 -07:00
Jeremy Rose
32bbbbad61 test: faster test execution in codex-core (#2633)
This dramatically improves the time to run `cargo test -p codex-core`
(~25x speedup).

before:
```
cargo test -p codex-core  35.96s user 68.63s system 19% cpu 8:49.80 total
```

after:
```
cargo test -p codex-core  5.51s user 8.16s system 63% cpu 21.407 total
```

Both runs were measured "hot", i.e. on a 2nd run with no filesystem
changes, to exclude compile times.

The approach is inspired by [Delete Cargo Integration
Tests](https://matklad.github.io/2021/02/27/delete-cargo-integration-tests.html):
we move all test cases in tests/ into a single suite so there is a single
test binary, since each test binary executed carries significant overhead
and test execution is only parallelized within a single binary.
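
The resulting layout, as added in this diff:

```rust
// tests/all.rs: the single integration-test entry point. Each former
// standalone test file becomes a submodule under tests/suite/, so cargo
// compiles and runs exactly one test binary instead of one per file.
mod suite;
```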
2025-08-24 11:10:53 -07:00
Ahmed Ibrahim
c6a52d611c Resume conversation from an earlier point in history (#2607)
Fixes the merge conflict of #2588.


https://github.com/user-attachments/assets/392c7c37-cf8f-4ed6-952e-8215e8c57bc4
2025-08-23 23:23:15 -07:00
Reuben Narad
363636f5eb Add web search tool (#2371)
Adds a web_search tool, enabling the model to use the Responses API
web_search tool.
- Disabled by default; enabled with the --search flag.
- When --search is passed, a web_search_request function tool is exposed
to the model, which triggers user approval. When approved, the model can
use the web_search tool for the remainder of the turn.
<img width="1033" height="294" alt="image"
src="https://github.com/user-attachments/assets/62ac6563-b946-465c-ba5d-9325af28b28f"
/>

---------

Co-authored-by: easong-openai <easong@openai.com>
2025-08-23 22:58:56 -07:00
Ahmed Ibrahim
957d44918d send-aggregated output (#2364)
We want to send an aggregated output of stdout and stderr so we don't
have to combine stderr+stdout after the fact, which sometimes loses the
interleaving order.
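
A minimal sketch of the ordering argument, using threads and a channel (the real code uses tokio tasks):

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    // Both readers send chunks into one channel, so the combined buffer
    // preserves arrival order; concatenating separate stdout/stderr
    // buffers after the fact cannot recover that interleaving.
    let (tx, rx) = mpsc::channel::<Vec<u8>>();
    let tx_err = tx.clone();
    thread::spawn(move || tx.send(b"stdout: hello\n".to_vec()).unwrap());
    thread::spawn(move || tx_err.send(b"stderr: oops\n".to_vec()).unwrap());
    let mut aggregated = Vec::new();
    for chunk in rx {
        // The loop ends once both senders have been dropped.
        aggregated.extend_from_slice(&chunk);
    }
    print!("{}", String::from_utf8_lossy(&aggregated));
}
```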

---------

Co-authored-by: Gabriel Peal <gpeal@users.noreply.github.com>
2025-08-23 16:54:31 +00:00
126 changed files with 3105 additions and 1118 deletions

codex-rs/Cargo.lock generated
View File

@@ -635,6 +635,7 @@ name = "codex-apply-patch"
version = "0.0.0"
dependencies = [
"anyhow",
"assert_cmd",
"pretty_assertions",
"similar",
"tempfile",
@@ -652,6 +653,7 @@ dependencies = [
"codex-core",
"codex-linux-sandbox",
"dotenvy",
"tempfile",
"tokio",
]
@@ -752,7 +754,7 @@ dependencies = [
"tokio-test",
"tokio-util",
"toml 0.9.5",
"toml_edit 0.23.3",
"toml_edit 0.23.4",
"tracing",
"tree-sitter",
"tree-sitter-bash",
@@ -2720,6 +2722,7 @@ checksum = "4488594b9328dee448adb906d8b126d9b7deb7cf5c22161ee591610bb1be83c0"
dependencies = [
"bitflags 2.9.1",
"libc",
"redox_syscall",
]
[[package]]
@@ -5192,9 +5195,9 @@ dependencies = [
[[package]]
name = "toml_edit"
version = "0.23.3"
version = "0.23.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "17d3b47e6b7a040216ae5302712c94d1cf88c95b47efa80e2c59ce96c878267e"
checksum = "7211ff1b8f0d3adae1663b7da9ffe396eabe1ca25f0b0bee42b0da29a9ddce93"
dependencies = [
"indexmap 2.10.0",
"toml_datetime 0.7.0",
@@ -5775,11 +5778,11 @@ dependencies = [
[[package]]
name = "whoami"
version = "1.6.0"
version = "1.6.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6994d13118ab492c3c80c1f81928718159254c53c472bf9ce36f8dae4add02a7"
checksum = "5d4a4db5077702ca3015d3d02d74974948aba2ad9e12ab7df718ee64ccd7e97d"
dependencies = [
"redox_syscall",
"libredox",
"wasite",
"web-sys",
]

View File

@@ -43,6 +43,12 @@ To run Codex non-interactively, run `codex exec PROMPT` (you can also pass the p
Typing `@` triggers a fuzzy-filename search over the workspace root. Use up/down to select among the results and Tab or Enter to replace the `@` with the selected path. You can use Esc to cancel the search.
### EscEsc to edit a previous message
When the chat composer is empty, press Esc to prime “backtrack” mode. Press Esc again to open a transcript preview highlighting the last user message; press Esc repeatedly to step to older user messages. Press Enter to confirm and Codex will fork the conversation from that point, trim the visible transcript accordingly, and prefill the composer with the selected user message so you can edit and resubmit it.
In the transcript preview, the footer shows an `Esc edit prev` hint while editing is active.
### `--cd`/`-C` flag
Sometimes it is not convenient to `cd` to the directory you want Codex to use as the "working root" before running Codex. Fortunately, `codex` supports a `--cd` option so you can specify whatever folder you want. You can confirm that Codex is honoring `--cd` by double-checking the **workdir** it reports in the TUI at the start of a new session.

View File

@@ -7,6 +7,10 @@ version = { workspace = true }
name = "codex_apply_patch"
path = "src/lib.rs"
[[bin]]
name = "apply_patch"
path = "src/main.rs"
[lints]
workspace = true
@@ -18,5 +22,6 @@ tree-sitter = "0.25.8"
tree-sitter-bash = "0.25.0"
[dev-dependencies]
assert_cmd = "2"
pretty_assertions = "1.4.1"
tempfile = "3.13.0"

View File

@@ -1,5 +1,6 @@
mod parser;
mod seek_sequence;
mod standalone_executable;
use std::collections::HashMap;
use std::path::Path;
@@ -19,6 +20,8 @@ use tree_sitter::LanguageError;
use tree_sitter::Parser;
use tree_sitter_bash::LANGUAGE as BASH;
pub use standalone_executable::main;
/// Detailed instructions for gpt-4.1 on how to use the `apply_patch` tool.
pub const APPLY_PATCH_TOOL_INSTRUCTIONS: &str = include_str!("../apply_patch_tool_instructions.md");
@@ -529,32 +532,51 @@ fn compute_replacements(
let mut line_index: usize = 0;
for chunk in chunks {
// If a chunk has a `change_context`, we use seek_sequence to find it, then
// adjust our `line_index` to continue from there.
if let Some(ctx_line) = &chunk.change_context {
if let Some(idx) = seek_sequence::seek_sequence(
original_lines,
std::slice::from_ref(ctx_line),
line_index,
false,
) {
line_index = idx + 1;
} else {
return Err(ApplyPatchError::ComputeReplacements(format!(
"Failed to find context '{}' in {}",
ctx_line,
path.display()
)));
// If a chunk has context lines, we use seek_sequence to find each in order,
// then adjust our `line_index` to continue from there.
if !chunk.context_lines.is_empty() {
let total = chunk.context_lines.len();
for (i, ctx_line) in chunk.context_lines.iter().enumerate() {
if let Some(idx) = seek_sequence::seek_sequence(
original_lines,
std::slice::from_ref(ctx_line),
line_index,
false,
) {
line_index = idx + 1;
} else {
return Err(ApplyPatchError::ComputeReplacements(format!(
"Failed to find context {}/{}: '{}' in {}",
i + 1,
total,
ctx_line,
path.display()
)));
}
}
}
if chunk.old_lines.is_empty() {
// Pure addition (no old lines). We'll add them at the end or just
// before the final empty line if one exists.
let insertion_idx = if original_lines.last().is_some_and(|s| s.is_empty()) {
original_lines.len() - 1
// Pure addition (no old lines).
// Prefer to insert at the matched context anchor if one exists and
// the hunk is not explicitly marked as end-of-file.
let insertion_idx = if chunk.is_end_of_file {
if original_lines.last().is_some_and(|s| s.is_empty()) {
original_lines.len() - 1
} else {
original_lines.len()
}
} else if !chunk.context_lines.is_empty() {
// Insert immediately after the last matched context line.
line_index
} else {
original_lines.len()
// No context provided: fall back to appending at the end (before
// the trailing empty line if present).
if original_lines.last().is_some_and(|s| s.is_empty()) {
original_lines.len() - 1
} else {
original_lines.len()
}
};
replacements.push((insertion_idx, 0, chunk.new_lines.clone()));
continue;
@@ -1267,6 +1289,57 @@ g
);
}
#[test]
fn test_insert_addition_after_single_context_anchor() {
let dir = tempdir().unwrap();
let path = dir.path().join("single_ctx.txt");
fs::write(&path, "class BaseClass:\n def method():\nline1\nline2\n").unwrap();
let patch = wrap_patch(&format!(
r#"*** Update File: {}
@@ class BaseClass:
+INSERTED
"#,
path.display()
));
let mut stdout = Vec::new();
let mut stderr = Vec::new();
apply_patch(&patch, &mut stdout, &mut stderr).unwrap();
let contents = fs::read_to_string(path).unwrap();
assert_eq!(
contents,
"class BaseClass:\nINSERTED\n def method():\nline1\nline2\n"
);
}
#[test]
fn test_insert_addition_after_multi_context_anchor() {
let dir = tempdir().unwrap();
let path = dir.path().join("multi_ctx.txt");
fs::write(&path, "class BaseClass:\n def method():\nline1\nline2\n").unwrap();
let patch = wrap_patch(&format!(
r#"*** Update File: {}
@@ class BaseClass:
@@ def method():
+INSERTED
"#,
path.display()
));
let mut stdout = Vec::new();
let mut stderr = Vec::new();
apply_patch(&patch, &mut stdout, &mut stderr).unwrap();
let contents = fs::read_to_string(path).unwrap();
assert_eq!(
contents,
"class BaseClass:\n def method():\nINSERTED\nline1\nline2\n"
);
}
#[test]
fn test_apply_patch_should_resolve_absolute_paths_in_cwd() {
let session_dir = tempdir().unwrap();

View File

@@ -0,0 +1,3 @@
pub fn main() -> ! {
codex_apply_patch::main()
}

View File

@@ -69,7 +69,7 @@ pub enum Hunk {
path: PathBuf,
move_path: Option<PathBuf>,
/// Chunks should be in order, i.e. the `change_context` of one chunk
/// Chunks should be in order, i.e. the first context line of one chunk
/// should occur later in the file than the previous chunk.
chunks: Vec<UpdateFileChunk>,
},
@@ -89,12 +89,13 @@ use Hunk::*;
#[derive(Debug, PartialEq, Clone)]
pub struct UpdateFileChunk {
/// A single line of context used to narrow down the position of the chunk
/// (this is usually a class, method, or function definition.)
pub change_context: Option<String>,
/// Context lines used to narrow down the position of the chunk.
/// Each entry is searched sequentially to progressively restrict the
/// search to the desired region (e.g. class → method).
pub context_lines: Vec<String>,
/// A contiguous block of lines that should be replaced with `new_lines`.
/// `old_lines` must occur strictly after `change_context`.
/// `old_lines` must occur strictly after the context.
pub old_lines: Vec<String>,
pub new_lines: Vec<String>,
@@ -344,32 +345,38 @@ fn parse_update_file_chunk(
line_number,
});
}
// If we see an explicit context marker @@ or @@ <context>, consume it; otherwise, optionally
// allow treating the chunk as starting directly with diff lines.
let (change_context, start_index) = if lines[0] == EMPTY_CHANGE_CONTEXT_MARKER {
(None, 1)
} else if let Some(context) = lines[0].strip_prefix(CHANGE_CONTEXT_MARKER) {
(Some(context.to_string()), 1)
} else {
if !allow_missing_context {
return Err(InvalidHunkError {
message: format!(
"Expected update hunk to start with a @@ context marker, got: '{}'",
lines[0]
),
line_number,
});
let mut context_lines = Vec::new();
let mut start_index = 0;
let mut saw_context_marker = false;
while start_index < lines.len() {
if lines[start_index] == EMPTY_CHANGE_CONTEXT_MARKER {
saw_context_marker = true;
start_index += 1;
} else if let Some(context) = lines[start_index].strip_prefix(CHANGE_CONTEXT_MARKER) {
saw_context_marker = true;
context_lines.push(context.to_string());
start_index += 1;
} else {
break;
}
(None, 0)
};
}
if !saw_context_marker && !allow_missing_context {
return Err(InvalidHunkError {
message: format!(
"Expected update hunk to start with a @@ context marker, got: '{}'",
lines[0]
),
line_number,
});
}
if start_index >= lines.len() {
return Err(InvalidHunkError {
message: "Update hunk does not contain any lines".to_string(),
line_number: line_number + 1,
line_number: line_number + start_index,
});
}
let mut chunk = UpdateFileChunk {
change_context,
context_lines,
old_lines: Vec::new(),
new_lines: Vec::new(),
is_end_of_file: false,
@@ -381,7 +388,7 @@ fn parse_update_file_chunk(
if parsed_lines == 0 {
return Err(InvalidHunkError {
message: "Update hunk does not contain any lines".to_string(),
line_number: line_number + 1,
line_number: line_number + start_index,
});
}
chunk.is_end_of_file = true;
@@ -411,7 +418,7 @@ fn parse_update_file_chunk(
message: format!(
"Unexpected line found in update hunk: '{line_contents}'. Every line should start with ' ' (context line), '+' (added line), or '-' (removed line)"
),
line_number: line_number + 1,
line_number: line_number + start_index,
});
}
// Assume this is the start of the next hunk.
@@ -491,7 +498,7 @@ fn test_parse_patch() {
path: PathBuf::from("path/update.py"),
move_path: Some(PathBuf::from("path/update2.py")),
chunks: vec![UpdateFileChunk {
change_context: Some("def f():".to_string()),
context_lines: vec!["def f():".to_string()],
old_lines: vec![" pass".to_string()],
new_lines: vec![" return 123".to_string()],
is_end_of_file: false
@@ -518,7 +525,7 @@ fn test_parse_patch() {
path: PathBuf::from("file.py"),
move_path: None,
chunks: vec![UpdateFileChunk {
change_context: None,
context_lines: Vec::new(),
old_lines: vec![],
new_lines: vec!["line".to_string()],
is_end_of_file: false
@@ -548,7 +555,7 @@ fn test_parse_patch() {
path: PathBuf::from("file2.py"),
move_path: None,
chunks: vec![UpdateFileChunk {
change_context: None,
context_lines: Vec::new(),
old_lines: vec!["import foo".to_string()],
new_lines: vec!["import foo".to_string(), "bar".to_string()],
is_end_of_file: false,
@@ -568,7 +575,7 @@ fn test_parse_patch_lenient() {
path: PathBuf::from("file2.py"),
move_path: None,
chunks: vec![UpdateFileChunk {
change_context: None,
context_lines: Vec::new(),
old_lines: vec!["import foo".to_string()],
new_lines: vec!["import foo".to_string(), "bar".to_string()],
is_end_of_file: false,
@@ -701,7 +708,7 @@ fn test_update_file_chunk() {
),
Ok((
(UpdateFileChunk {
change_context: Some("change_context".to_string()),
context_lines: vec!["change_context".to_string()],
old_lines: vec![
"".to_string(),
"context".to_string(),
@@ -723,7 +730,7 @@ fn test_update_file_chunk() {
parse_update_file_chunk(&["@@", "+line", "*** End of File"], 123, false),
Ok((
(UpdateFileChunk {
change_context: None,
context_lines: Vec::new(),
old_lines: vec![],
new_lines: vec!["line".to_string()],
is_end_of_file: true
@@ -731,4 +738,29 @@ fn test_update_file_chunk() {
3
))
);
assert_eq!(
parse_update_file_chunk(
&[
"@@ class BaseClass",
"@@ def method()",
" context",
"-old",
"+new",
],
123,
false
),
Ok((
(UpdateFileChunk {
context_lines: vec![
"class BaseClass".to_string(),
" def method()".to_string()
],
old_lines: vec!["context".to_string(), "old".to_string()],
new_lines: vec!["context".to_string(), "new".to_string()],
is_end_of_file: false
}),
5
))
);
}

View File

@@ -0,0 +1,59 @@
use std::io::Read;
use std::io::Write;
pub fn main() -> ! {
let exit_code = run_main();
std::process::exit(exit_code);
}
/// We would prefer to return `std::process::ExitCode`, but its `exit_process()`
/// method is still a nightly API and we want main() to return !.
pub fn run_main() -> i32 {
// Expect either one argument (the full apply_patch payload) or read it from stdin.
let mut args = std::env::args_os();
let _argv0 = args.next();
let patch_arg = match args.next() {
Some(arg) => match arg.into_string() {
Ok(s) => s,
Err(_) => {
eprintln!("Error: apply_patch requires a UTF-8 PATCH argument.");
return 1;
}
},
None => {
// No argument provided; attempt to read the patch from stdin.
let mut buf = String::new();
match std::io::stdin().read_to_string(&mut buf) {
Ok(_) => {
if buf.is_empty() {
eprintln!("Usage: apply_patch 'PATCH'\n echo 'PATCH' | apply-patch");
return 2;
}
buf
}
Err(err) => {
eprintln!("Error: Failed to read PATCH from stdin.\n{err}");
return 1;
}
}
}
};
// Refuse extra args to avoid ambiguity.
if args.next().is_some() {
eprintln!("Error: apply_patch accepts exactly one argument.");
return 2;
}
let mut stdout = std::io::stdout();
let mut stderr = std::io::stderr();
match crate::apply_patch(&patch_arg, &mut stdout, &mut stderr) {
Ok(()) => {
// Flush to ensure output ordering when used in pipelines.
let _ = stdout.flush();
0
}
Err(_) => 1,
}
}

View File

@@ -0,0 +1,3 @@
// Single integration test binary that aggregates all test modules.
// The submodules live in `tests/suite/`.
mod suite;

View File

@@ -0,0 +1,90 @@
use assert_cmd::prelude::*;
use std::fs;
use std::process::Command;
use tempfile::tempdir;
#[test]
fn test_apply_patch_cli_add_and_update() -> anyhow::Result<()> {
let tmp = tempdir()?;
let file = "cli_test.txt";
let absolute_path = tmp.path().join(file);
// 1) Add a file
let add_patch = format!(
r#"*** Begin Patch
*** Add File: {file}
+hello
*** End Patch"#
);
Command::cargo_bin("apply_patch")
.expect("should find apply_patch binary")
.arg(add_patch)
.current_dir(tmp.path())
.assert()
.success()
.stdout(format!("Success. Updated the following files:\nA {file}\n"));
assert_eq!(fs::read_to_string(&absolute_path)?, "hello\n");
// 2) Update the file
let update_patch = format!(
r#"*** Begin Patch
*** Update File: {file}
@@
-hello
+world
*** End Patch"#
);
Command::cargo_bin("apply_patch")
.expect("should find apply_patch binary")
.arg(update_patch)
.current_dir(tmp.path())
.assert()
.success()
.stdout(format!("Success. Updated the following files:\nM {file}\n"));
assert_eq!(fs::read_to_string(&absolute_path)?, "world\n");
Ok(())
}
#[test]
fn test_apply_patch_cli_stdin_add_and_update() -> anyhow::Result<()> {
let tmp = tempdir()?;
let file = "cli_test_stdin.txt";
let absolute_path = tmp.path().join(file);
// 1) Add a file via stdin
let add_patch = format!(
r#"*** Begin Patch
*** Add File: {file}
+hello
*** End Patch"#
);
let mut cmd =
assert_cmd::Command::cargo_bin("apply_patch").expect("should find apply_patch binary");
cmd.current_dir(tmp.path());
cmd.write_stdin(add_patch)
.assert()
.success()
.stdout(format!("Success. Updated the following files:\nA {file}\n"));
assert_eq!(fs::read_to_string(&absolute_path)?, "hello\n");
// 2) Update the file via stdin
let update_patch = format!(
r#"*** Begin Patch
*** Update File: {file}
@@
-hello
+world
*** End Patch"#
);
let mut cmd =
assert_cmd::Command::cargo_bin("apply_patch").expect("should find apply_patch binary");
cmd.current_dir(tmp.path());
cmd.write_stdin(update_patch)
.assert()
.success()
.stdout(format!("Success. Updated the following files:\nM {file}\n"));
assert_eq!(fs::read_to_string(&absolute_path)?, "world\n");
Ok(())
}

View File

@@ -0,0 +1 @@
mod cli;

View File

@@ -16,4 +16,5 @@ codex-apply-patch = { path = "../apply-patch" }
codex-core = { path = "../core" }
codex-linux-sandbox = { path = "../linux-sandbox" }
dotenvy = "0.15.7"
tempfile = "3"
tokio = { version = "1", features = ["rt-multi-thread"] }

View File

@@ -3,6 +3,13 @@ use std::path::Path;
use std::path::PathBuf;
use codex_core::CODEX_APPLY_PATCH_ARG1;
#[cfg(unix)]
use std::os::unix::fs::symlink;
use tempfile::TempDir;
const LINUX_SANDBOX_ARG0: &str = "codex-linux-sandbox";
const APPLY_PATCH_ARG0: &str = "apply_patch";
const MISSPELLED_APPLY_PATCH_ARG0: &str = "applypatch";
/// While we want to deploy the Codex CLI as a single executable for simplicity,
/// we also want to expose some of its functionality as distinct CLIs, so we use
@@ -39,9 +46,11 @@ where
.and_then(|s| s.to_str())
.unwrap_or("");
if exe_name == "codex-linux-sandbox" {
if exe_name == LINUX_SANDBOX_ARG0 {
// Safety: [`run_main`] never returns.
codex_linux_sandbox::run_main();
} else if exe_name == APPLY_PATCH_ARG0 || exe_name == MISSPELLED_APPLY_PATCH_ARG0 {
codex_apply_patch::main();
}
let argv1 = args.next().unwrap_or_default();
@@ -68,6 +77,19 @@ where
// before creating any threads/the Tokio runtime.
load_dotenv();
// Retain the TempDir so it exists for the lifetime of the invocation of
// this executable. Admittedly, we could invoke `keep()` on it, but it
// would be nice to avoid leaving temporary directories behind, if possible.
let _path_entry = match prepend_path_entry_for_apply_patch() {
Ok(path_entry) => Some(path_entry),
Err(err) => {
// It is possible that Codex will proceed successfully even if
// updating the PATH fails, so warn the user and move on.
eprintln!("WARNING: proceeding, even though we could not update PATH: {err}");
None
}
};
// Regular invocations create a Tokio runtime and execute the provided
// async entry-point.
let runtime = tokio::runtime::Runtime::new()?;
@@ -113,3 +135,67 @@ where
}
}
}
/// Creates a temporary directory with either:
///
/// - UNIX: `apply_patch` symlink to the current executable
/// - WINDOWS: `apply_patch.bat` batch script to invoke the current executable
/// with the "secret" --codex-run-as-apply-patch flag.
///
/// This temporary directory is prepended to the PATH environment variable so
/// that `apply_patch` can be on the PATH without requiring the user to
/// install a separate `apply_patch` executable, simplifying the deployment of
/// Codex CLI.
///
/// IMPORTANT: This function modifies the PATH environment variable, so it MUST
/// be called before multiple threads are spawned.
fn prepend_path_entry_for_apply_patch() -> std::io::Result<TempDir> {
let temp_dir = TempDir::new()?;
let path = temp_dir.path();
for filename in &[APPLY_PATCH_ARG0, MISSPELLED_APPLY_PATCH_ARG0] {
let exe = std::env::current_exe()?;
#[cfg(unix)]
{
let link = path.join(filename);
symlink(&exe, &link)?;
}
#[cfg(windows)]
{
let batch_script = path.join(format!("{filename}.bat"));
std::fs::write(
&batch_script,
format!(
r#"@echo off
"{}" {CODEX_APPLY_PATCH_ARG1} %*
"#,
exe.display()
),
)?;
}
}
#[cfg(unix)]
const PATH_SEPARATOR: &str = ":";
#[cfg(windows)]
const PATH_SEPARATOR: &str = ";";
let path_element = path.display();
let updated_path_env_var = match std::env::var("PATH") {
Ok(existing_path) => {
format!("{path_element}{PATH_SEPARATOR}{existing_path}")
}
Err(_) => {
format!("{path_element}")
}
};
unsafe {
std::env::set_var("PATH", updated_path_env_var);
}
Ok(temp_dir)
}

View File

@@ -0,0 +1,3 @@
// Single integration test binary that aggregates all test modules.
// The submodules live in `tests/suite/`.
mod suite;

View File

@@ -0,0 +1,2 @@
// Aggregates all former standalone integration tests as modules.
mod apply_command_e2e;

View File

@@ -6,6 +6,7 @@ version = { workspace = true }
[lib]
name = "codex_core"
path = "src/lib.rs"
doctest = false
[lints]
workspace = true
@@ -51,12 +52,12 @@ tokio = { version = "1", features = [
] }
tokio-util = "0.7.16"
toml = "0.9.5"
toml_edit = "0.23.3"
toml_edit = "0.23.4"
tracing = { version = "0.1.41", features = ["log"] }
tree-sitter = "0.25.8"
tree-sitter-bash = "0.25.0"
uuid = { version = "1", features = ["serde", "v4"] }
whoami = "1.6.0"
whoami = "1.6.1"
wildmatch = "2.4.0"

View File

@@ -623,6 +623,12 @@ where
Poll::Ready(Some(Ok(ResponseEvent::ReasoningSummaryPartAdded))) => {
continue;
}
Poll::Ready(Some(Ok(ResponseEvent::WebSearchCallBegin { .. }))) => {
return Poll::Ready(Some(Ok(ResponseEvent::WebSearchCallBegin {
call_id: String::new(),
query: None,
})));
}
}
}
}

View File

@@ -149,7 +149,21 @@ impl ModelClient {
let store = prompt.store && auth_mode != Some(AuthMode::ChatGPT);
let full_instructions = prompt.get_full_instructions(&self.config.model_family);
let tools_json = create_tools_json_for_responses_api(&prompt.tools)?;
let mut tools_json = create_tools_json_for_responses_api(&prompt.tools)?;
// ChatGPT backend expects the preview name for web search.
if auth_mode == Some(AuthMode::ChatGPT) {
for tool in &mut tools_json {
if let Some(map) = tool.as_object_mut()
&& map.get("type").and_then(|v| v.as_str()) == Some("web_search")
{
map.insert(
"type".to_string(),
serde_json::Value::String("web_search_preview".to_string()),
);
}
}
}
let reasoning = create_reasoning_param_for_request(
&self.config.model_family,
self.effort,
@@ -466,7 +480,8 @@ async fn process_sse<S>(
}
};
trace!("SSE event: {}", sse.data);
let raw = sse.data.clone();
trace!("SSE event: {}", raw);
let event: SseEvent = match serde_json::from_str(&sse.data) {
Ok(event) => event,
@@ -580,8 +595,24 @@ async fn process_sse<S>(
| "response.in_progress"
| "response.output_item.added"
| "response.output_text.done" => {
// Currently, we ignore this event, but we handle it
// separately to skip the logging message in the `other` case.
if event.kind == "response.output_item.added"
&& let Some(item) = event.item.as_ref()
{
// Detect web_search_call begin and forward a synthetic event upstream.
if let Some(ty) = item.get("type").and_then(|v| v.as_str())
&& ty == "web_search_call"
{
let call_id = item
.get("id")
.and_then(|v| v.as_str())
.unwrap_or("")
.to_string();
let ev = ResponseEvent::WebSearchCallBegin { call_id, query: None };
if tx_event.send(Ok(ev)).await.is_err() {
return;
}
}
}
}
"response.reasoning_summary_part.added" => {
// Boundary between reasoning summary sections (e.g., titles).
@@ -591,7 +622,7 @@ async fn process_sse<S>(
}
}
"response.reasoning_summary_text.done" => {}
other => debug!(other, "sse event"),
_ => {}
}
}
}

View File

@@ -93,6 +93,10 @@ pub enum ResponseEvent {
ReasoningSummaryDelta(String),
ReasoningContentDelta(String),
ReasoningSummaryPartAdded,
WebSearchCallBegin {
call_id: String,
query: Option<String>,
},
}
#[derive(Debug, Serialize)]

View File

@@ -55,7 +55,7 @@ use crate::exec::StreamOutput;
use crate::exec::process_exec_tool_call;
use crate::exec_command::EXEC_COMMAND_TOOL_NAME;
use crate::exec_command::ExecCommandParams;
use crate::exec_command::SESSION_MANAGER;
use crate::exec_command::ExecSessionManager;
use crate::exec_command::WRITE_STDIN_TOOL_NAME;
use crate::exec_command::WriteStdinParams;
use crate::exec_env::create_env;
@@ -64,6 +64,7 @@ use crate::mcp_tool_call::handle_mcp_tool_call;
use crate::model_family::find_family_for_model;
use crate::openai_tools::ApplyPatchToolArgs;
use crate::openai_tools::ToolsConfig;
use crate::openai_tools::ToolsConfigParams;
use crate::openai_tools::get_openai_tools;
use crate::parse_command::parse_command;
use crate::plan_tool::handle_update_plan;
@@ -96,6 +97,7 @@ use crate::protocol::StreamErrorEvent;
use crate::protocol::Submission;
use crate::protocol::TaskCompleteEvent;
use crate::protocol::TurnDiffEvent;
use crate::protocol::WebSearchBeginEvent;
use crate::rollout::RolloutRecorder;
use crate::safety::SafetyCheck;
use crate::safety::assess_command_safety;
@@ -147,6 +149,14 @@ pub struct CodexSpawnOk {
}
pub(crate) const INITIAL_SUBMIT_ID: &str = "";
pub(crate) const SUBMISSION_CHANNEL_CAPACITY: usize = 64;
// Model-formatting limits: clients get full streams; only content sent to the model is truncated.
pub(crate) const MODEL_FORMAT_MAX_BYTES: usize = 10 * 1024; // 10 KiB
pub(crate) const MODEL_FORMAT_MAX_LINES: usize = 256; // lines
pub(crate) const MODEL_FORMAT_HEAD_LINES: usize = MODEL_FORMAT_MAX_LINES / 2;
pub(crate) const MODEL_FORMAT_TAIL_LINES: usize = MODEL_FORMAT_MAX_LINES - MODEL_FORMAT_HEAD_LINES; // 128
pub(crate) const MODEL_FORMAT_HEAD_BYTES: usize = MODEL_FORMAT_MAX_BYTES / 2;
impl Codex {
/// Spawn a new [`Codex`] and initialize the session.
@@ -155,7 +165,7 @@ impl Codex {
auth_manager: Arc<AuthManager>,
initial_history: Option<Vec<ResponseItem>>,
) -> CodexResult<CodexSpawnOk> {
let (tx_sub, rx_sub) = async_channel::bounded(64);
let (tx_sub, rx_sub) = async_channel::bounded(SUBMISSION_CHANNEL_CAPACITY);
let (tx_event, rx_event) = async_channel::unbounded();
let user_instructions = get_user_instructions(&config).await;
@@ -259,6 +269,7 @@ pub(crate) struct Session {
/// Manager for external MCP servers/tools.
mcp_connection_manager: McpConnectionManager,
session_manager: ExecSessionManager,
/// External notifier command (will be passed as args to exec()). When
/// `None` this feature is disabled.
@@ -497,14 +508,15 @@ impl Session {
);
let turn_context = TurnContext {
client,
tools_config: ToolsConfig::new(
&config.model_family,
tools_config: ToolsConfig::new(&ToolsConfigParams {
model_family: &config.model_family,
approval_policy,
sandbox_policy.clone(),
config.include_plan_tool,
config.include_apply_patch_tool,
config.use_experimental_streamable_shell_tool,
),
sandbox_policy: sandbox_policy.clone(),
include_plan_tool: config.include_plan_tool,
include_apply_patch_tool: config.include_apply_patch_tool,
include_web_search_request: config.tools_web_search_request,
use_streamable_shell_tool: config.use_experimental_streamable_shell_tool,
}),
user_instructions,
base_instructions,
approval_policy,
@@ -517,6 +529,7 @@ impl Session {
session_id,
tx_event: tx_event.clone(),
mcp_connection_manager,
session_manager: ExecSessionManager::default(),
notify,
state: Mutex::new(state),
rollout: Mutex::new(rollout_recorder),
@@ -728,15 +741,15 @@ impl Session {
let ExecToolCallOutput {
stdout,
stderr,
aggregated_output,
duration,
exit_code,
} = output;
// Because stdout and stderr could each be up to 100 KiB, we send
// truncated versions.
const MAX_STREAM_OUTPUT: usize = 5 * 1024; // 5KiB
let stdout = stdout.text.chars().take(MAX_STREAM_OUTPUT).collect();
let stderr = stderr.text.chars().take(MAX_STREAM_OUTPUT).collect();
// Send full stdout/stderr to clients; do not truncate.
let stdout = stdout.text.clone();
let stderr = stderr.text.clone();
let formatted_output = format_exec_output_str(output);
let aggregated_output: String = aggregated_output.text.clone();
let msg = if is_apply_patch {
EventMsg::PatchApplyEnd(PatchApplyEndEvent {
@@ -750,9 +763,10 @@ impl Session {
call_id: call_id.to_string(),
stdout,
stderr,
formatted_output,
duration: *duration,
aggregated_output,
exit_code: *exit_code,
duration: *duration,
formatted_output,
})
};
@@ -810,6 +824,7 @@ impl Session {
exit_code: -1,
stdout: StreamOutput::new(String::new()),
stderr: StreamOutput::new(get_error_message_ui(e)),
aggregated_output: StreamOutput::new(get_error_message_ui(e)),
duration: Duration::default(),
};
&output_stderr
@@ -1080,14 +1095,15 @@ async fn submission_loop(
.unwrap_or(prev.sandbox_policy.clone());
let new_cwd = cwd.clone().unwrap_or_else(|| prev.cwd.clone());
let tools_config = ToolsConfig::new(
&effective_family,
new_approval_policy,
new_sandbox_policy.clone(),
config.include_plan_tool,
config.include_apply_patch_tool,
config.use_experimental_streamable_shell_tool,
);
let tools_config = ToolsConfig::new(&ToolsConfigParams {
model_family: &effective_family,
approval_policy: new_approval_policy,
sandbox_policy: new_sandbox_policy.clone(),
include_plan_tool: config.include_plan_tool,
include_apply_patch_tool: config.include_apply_patch_tool,
include_web_search_request: config.tools_web_search_request,
use_streamable_shell_tool: config.use_experimental_streamable_shell_tool,
});
let new_turn_context = TurnContext {
client,
@@ -1159,14 +1175,16 @@ async fn submission_loop(
let fresh_turn_context = TurnContext {
client,
tools_config: ToolsConfig::new(
&model_family,
tools_config: ToolsConfig::new(&ToolsConfigParams {
model_family: &model_family,
approval_policy,
sandbox_policy.clone(),
config.include_plan_tool,
config.include_apply_patch_tool,
config.use_experimental_streamable_shell_tool,
),
sandbox_policy: sandbox_policy.clone(),
include_plan_tool: config.include_plan_tool,
include_apply_patch_tool: config.include_apply_patch_tool,
include_web_search_request: config.tools_web_search_request,
use_streamable_shell_tool: config
.use_experimental_streamable_shell_tool,
}),
user_instructions: turn_context.user_instructions.clone(),
base_instructions: turn_context.base_instructions.clone(),
approval_policy,
@@ -1677,6 +1695,7 @@ async fn try_run_turn(
let mut stream = turn_context.client.clone().stream(&prompt).await?;
let mut output = Vec::new();
loop {
// Poll the next item from the model stream. We must inspect *both* Ok and Err
// cases so that transient stream failures (e.g., dropped SSE connection before
@@ -1713,6 +1732,16 @@ async fn try_run_turn(
.await?;
output.push(ProcessedResponseItem { item, response });
}
ResponseEvent::WebSearchCallBegin { call_id, query } => {
let q = query.unwrap_or_else(|| "Searching Web...".to_string());
let _ = sess
.tx_event
.send(Event {
id: sub_id.to_string(),
msg: EventMsg::WebSearchBegin(WebSearchBeginEvent { call_id, query: q }),
})
.await;
}
ResponseEvent::Completed {
response_id: _,
token_usage,
@@ -2085,7 +2114,8 @@ async fn handle_function_call(
};
}
};
let result = SESSION_MANAGER
let result = sess
.session_manager
.handle_exec_command_request(exec_params)
.await;
let function_call_output = crate::exec_command::result_into_payload(result);
@@ -2107,7 +2137,8 @@ async fn handle_function_call(
};
}
};
let result = SESSION_MANAGER
let result = sess
.session_manager
.handle_write_stdin_request(write_stdin_params)
.await;
let function_call_output: FunctionCallOutputPayload =
@@ -2604,23 +2635,103 @@ async fn handle_sandbox_error(
fn format_exec_output_str(exec_output: &ExecToolCallOutput) -> String {
let ExecToolCallOutput {
exit_code,
stdout,
stderr,
..
aggregated_output, ..
} = exec_output;
let is_success = *exit_code == 0;
let output = if is_success { stdout } else { stderr };
// Head+tail truncation for the model: show the beginning and end with an elision.
// Clients still receive full streams; only this formatted summary is capped.
let mut formatted_output = output.text.clone();
if let Some(truncated_after_lines) = output.truncated_after_lines {
formatted_output.push_str(&format!(
"\n\n[Output truncated after {truncated_after_lines} lines: too many lines or bytes.]",
));
let s = aggregated_output.text.as_str();
let total_lines = s.lines().count();
if s.len() <= MODEL_FORMAT_MAX_BYTES && total_lines <= MODEL_FORMAT_MAX_LINES {
return s.to_string();
}
formatted_output
let lines: Vec<&str> = s.lines().collect();
let head_take = MODEL_FORMAT_HEAD_LINES.min(lines.len());
let tail_take = MODEL_FORMAT_TAIL_LINES.min(lines.len().saturating_sub(head_take));
let omitted = lines.len().saturating_sub(head_take + tail_take);
// Join head and tail blocks (lines() strips newlines; reinsert them)
let head_block = lines
.iter()
.take(head_take)
.cloned()
.collect::<Vec<_>>()
.join("\n");
let tail_block = if tail_take > 0 {
lines[lines.len() - tail_take..].join("\n")
} else {
String::new()
};
let marker = format!("\n[... omitted {omitted} of {total_lines} lines ...]\n\n");
// Byte budgets for head/tail around the marker
let mut head_budget = MODEL_FORMAT_HEAD_BYTES.min(MODEL_FORMAT_MAX_BYTES);
let tail_budget = MODEL_FORMAT_MAX_BYTES.saturating_sub(head_budget + marker.len());
if tail_budget == 0 && marker.len() >= MODEL_FORMAT_MAX_BYTES {
// Degenerate case: marker alone exceeds budget; return a clipped marker
return take_bytes_at_char_boundary(&marker, MODEL_FORMAT_MAX_BYTES).to_string();
}
if tail_budget == 0 {
// Make room for the marker by shrinking head
head_budget = MODEL_FORMAT_MAX_BYTES.saturating_sub(marker.len());
}
// Enforce line-count cap by trimming head/tail lines
let head_lines_text = head_block;
let tail_lines_text = tail_block;
// Build final string respecting byte budgets
let head_part = take_bytes_at_char_boundary(&head_lines_text, head_budget);
let mut result = String::with_capacity(MODEL_FORMAT_MAX_BYTES.min(s.len()));
result.push_str(head_part);
result.push_str(&marker);
let remaining = MODEL_FORMAT_MAX_BYTES.saturating_sub(result.len());
let tail_budget_final = remaining;
let tail_part = take_last_bytes_at_char_boundary(&tail_lines_text, tail_budget_final);
result.push_str(tail_part);
result
}
// Truncate a &str to a byte budget at a char boundary (prefix)
#[inline]
fn take_bytes_at_char_boundary(s: &str, maxb: usize) -> &str {
if s.len() <= maxb {
return s;
}
let mut last_ok = 0;
for (i, ch) in s.char_indices() {
let nb = i + ch.len_utf8();
if nb > maxb {
break;
}
last_ok = nb;
}
&s[..last_ok]
}
// Take a suffix of a &str within a byte budget at a char boundary
#[inline]
fn take_last_bytes_at_char_boundary(s: &str, maxb: usize) -> &str {
if s.len() <= maxb {
return s;
}
let mut start = s.len();
let mut used = 0usize;
for (i, ch) in s.char_indices().rev() {
let nb = ch.len_utf8();
if used + nb > maxb {
break;
}
start = i;
used += nb;
if start == 0 {
break;
}
}
&s[start..]
}
/// Exec output is a pre-serialized JSON payload
@@ -2705,15 +2816,9 @@ async fn drain_to_completed(
response_id: _,
token_usage,
}) => {
let token_usage = match token_usage {
Some(usage) => usage,
None => {
return Err(CodexErr::Stream(
"token_usage was None in ResponseEvent::Completed".into(),
None,
));
}
};
// some providers don't return token usage, so we default
// TODO: consider approximate token usage
let token_usage = token_usage.unwrap_or_default();
sess.tx_event
.send(Event {
id: sub_id.to_string(),
@@ -2721,6 +2826,7 @@ async fn drain_to_completed(
})
.await
.ok();
return Ok(());
}
Ok(_) => continue,
@@ -2771,6 +2877,7 @@ mod tests {
use mcp_types::TextContent;
use pretty_assertions::assert_eq;
use serde_json::json;
use std::time::Duration as StdDuration;
fn text_block(s: &str) -> ContentBlock {
ContentBlock::TextContent(TextContent {
@@ -2805,6 +2912,82 @@ mod tests {
assert_eq!(expected, got);
}
#[test]
fn model_truncation_head_tail_by_lines() {
// Build 400 short lines so line-count limit, not byte budget, triggers truncation
let lines: Vec<String> = (1..=400).map(|i| format!("line{i}")).collect();
let full = lines.join("\n");
let exec = ExecToolCallOutput {
exit_code: 0,
stdout: StreamOutput::new(String::new()),
stderr: StreamOutput::new(String::new()),
aggregated_output: StreamOutput::new(full.clone()),
duration: StdDuration::from_secs(1),
};
let out = format_exec_output_str(&exec);
// Expect elision marker with correct counts
let omitted = 400 - MODEL_FORMAT_MAX_LINES; // 144
let marker = format!("\n[... omitted {omitted} of 400 lines ...]\n\n");
assert!(out.contains(&marker), "missing marker: {out}");
// Validate head and tail
let parts: Vec<&str> = out.split(&marker).collect();
assert_eq!(parts.len(), 2, "expected one marker split");
let head = parts[0];
let tail = parts[1];
let expected_head: String = (1..=MODEL_FORMAT_HEAD_LINES)
.map(|i| format!("line{i}"))
.collect::<Vec<_>>()
.join("\n");
assert!(head.starts_with(&expected_head), "head mismatch");
let expected_tail: String = ((400 - MODEL_FORMAT_TAIL_LINES + 1)..=400)
.map(|i| format!("line{i}"))
.collect::<Vec<_>>()
.join("\n");
assert!(tail.ends_with(&expected_tail), "tail mismatch");
}
#[test]
fn model_truncation_respects_byte_budget() {
// Construct a large output (about 100kB) so byte budget dominates
let big_line = "x".repeat(100);
let full = std::iter::repeat_n(big_line.clone(), 1000)
.collect::<Vec<_>>()
.join("\n");
let exec = ExecToolCallOutput {
exit_code: 0,
stdout: StreamOutput::new(String::new()),
stderr: StreamOutput::new(String::new()),
aggregated_output: StreamOutput::new(full.clone()),
duration: StdDuration::from_secs(1),
};
let out = format_exec_output_str(&exec);
assert!(out.len() <= MODEL_FORMAT_MAX_BYTES, "exceeds byte budget");
assert!(out.contains("omitted"), "should contain elision marker");
// Ensure head and tail are drawn from the original
assert!(full.starts_with(out.chars().take(8).collect::<String>().as_str()));
assert!(
full.ends_with(
out.chars()
.rev()
.take(8)
.collect::<String>()
.chars()
.rev()
.collect::<String>()
.as_str()
)
);
}
#[test]
fn falls_back_to_content_when_structured_is_null() {
let ctr = CallToolResult {

View File

@@ -169,6 +169,8 @@ pub struct Config {
/// model family's default preference.
pub include_apply_patch_tool: bool,
pub tools_web_search_request: bool,
/// The value for the `originator` header included with Responses API requests.
pub responses_originator_header: String,
@@ -480,6 +482,9 @@ pub struct ConfigToml {
/// If set to `true`, the API key will be signed with the `originator` header.
pub preferred_auth_method: Option<AuthMode>,
/// Nested tools section for feature toggles
pub tools: Option<ToolsToml>,
}
#[derive(Deserialize, Debug, Clone, PartialEq, Eq)]
@@ -487,6 +492,13 @@ pub struct ProjectConfig {
pub trust_level: Option<String>,
}
#[derive(Deserialize, Debug, Clone, Default)]
pub struct ToolsToml {
// Renamed from `web_search_request`; keep alias for backwards compatibility.
#[serde(default, alias = "web_search_request")]
pub web_search: Option<bool>,
}
impl ConfigToml {
/// Derive the effective sandbox policy from the configuration.
fn derive_sandbox_policy(&self, sandbox_mode_override: Option<SandboxMode>) -> SandboxPolicy {
@@ -576,6 +588,7 @@ pub struct ConfigOverrides {
pub include_apply_patch_tool: Option<bool>,
pub disable_response_storage: Option<bool>,
pub show_raw_agent_reasoning: Option<bool>,
pub tools_web_search_request: Option<bool>,
}
impl Config {
@@ -602,6 +615,7 @@ impl Config {
include_apply_patch_tool,
disable_response_storage,
show_raw_agent_reasoning,
tools_web_search_request: override_tools_web_search_request,
} = overrides;
let config_profile = match config_profile_key.as_ref().or(cfg.profile.as_ref()) {
@@ -640,7 +654,7 @@ impl Config {
})?
.clone();
let shell_environment_policy = cfg.shell_environment_policy.into();
let shell_environment_policy = cfg.shell_environment_policy.clone().into();
let resolved_cwd = {
use std::env;
@@ -661,7 +675,11 @@ impl Config {
}
};
let history = cfg.history.unwrap_or_default();
let history = cfg.history.clone().unwrap_or_default();
let tools_web_search_request = override_tools_web_search_request
.or(cfg.tools.as_ref().and_then(|t| t.web_search))
.unwrap_or(false);
let model = model
.or(config_profile.model)
@@ -735,7 +753,7 @@ impl Config {
codex_home,
history,
file_opener: cfg.file_opener.unwrap_or(UriBasedFileOpener::VsCode),
tui: cfg.tui.unwrap_or_default(),
tui: cfg.tui.clone().unwrap_or_default(),
codex_linux_sandbox_exe,
hide_agent_reasoning: cfg.hide_agent_reasoning.unwrap_or(false),
@@ -754,12 +772,13 @@ impl Config {
model_verbosity: config_profile.model_verbosity.or(cfg.model_verbosity),
chatgpt_base_url: config_profile
.chatgpt_base_url
.or(cfg.chatgpt_base_url)
.or(cfg.chatgpt_base_url.clone())
.unwrap_or("https://chatgpt.com/backend-api/".to_string()),
experimental_resume,
include_plan_tool: include_plan_tool.unwrap_or(false),
include_apply_patch_tool: include_apply_patch_tool.unwrap_or(false),
tools_web_search_request,
responses_originator_header,
preferred_auth_method: cfg.preferred_auth_method.unwrap_or(AuthMode::ChatGPT),
use_experimental_streamable_shell_tool: cfg
@@ -1129,6 +1148,7 @@ disable_response_storage = true
base_instructions: None,
include_plan_tool: false,
include_apply_patch_tool: false,
tools_web_search_request: false,
responses_originator_header: "codex_cli_rs".to_string(),
preferred_auth_method: AuthMode::ChatGPT,
use_experimental_streamable_shell_tool: false,
@@ -1184,6 +1204,7 @@ disable_response_storage = true
base_instructions: None,
include_plan_tool: false,
include_apply_patch_tool: false,
tools_web_search_request: false,
responses_originator_header: "codex_cli_rs".to_string(),
preferred_auth_method: AuthMode::ChatGPT,
use_experimental_streamable_shell_tool: false,
@@ -1254,6 +1275,7 @@ disable_response_storage = true
base_instructions: None,
include_plan_tool: false,
include_apply_patch_tool: false,
tools_web_search_request: false,
responses_originator_header: "codex_cli_rs".to_string(),
preferred_auth_method: AuthMode::ChatGPT,
use_experimental_streamable_shell_tool: false,

View File

@@ -28,18 +28,17 @@ use crate::spawn::StdioPolicy;
use crate::spawn::spawn_child_async;
use serde_bytes::ByteBuf;
// Maximum we send for each stream, which is either:
// - 10KiB OR
// - 256 lines
const MAX_STREAM_OUTPUT: usize = 10 * 1024;
const MAX_STREAM_OUTPUT_LINES: usize = 256;
const DEFAULT_TIMEOUT_MS: u64 = 10_000;
// Hardcode these since it does not seem worth including the libc crate just
// for these.
const SIGKILL_CODE: i32 = 9;
const TIMEOUT_CODE: i32 = 64;
const EXIT_CODE_SIGNAL_BASE: i32 = 128; // conventional shell: 128 + signal
// I/O buffer sizing
const READ_CHUNK_SIZE: usize = 8192; // bytes per read
const AGGREGATE_BUFFER_INITIAL_CAPACITY: usize = 8 * 1024; // 8 KiB
#[derive(Debug, Clone)]
pub struct ExecParams {
@@ -153,6 +152,7 @@ pub async fn process_exec_tool_call(
exit_code,
stdout,
stderr,
aggregated_output: raw_output.aggregated_output.from_utf8_lossy(),
duration,
})
}
@@ -189,10 +189,11 @@ pub struct StreamOutput<T> {
pub truncated_after_lines: Option<u32>,
}
#[derive(Debug)]
pub struct RawExecToolCallOutput {
struct RawExecToolCallOutput {
pub exit_status: ExitStatus,
pub stdout: StreamOutput<Vec<u8>>,
pub stderr: StreamOutput<Vec<u8>>,
pub aggregated_output: StreamOutput<Vec<u8>>,
}
impl StreamOutput<String> {
@@ -213,11 +214,17 @@ impl StreamOutput<Vec<u8>> {
}
}
#[inline]
fn append_all(dst: &mut Vec<u8>, src: &[u8]) {
dst.extend_from_slice(src);
}
#[derive(Debug)]
pub struct ExecToolCallOutput {
pub exit_code: i32,
pub stdout: StreamOutput<String>,
pub stderr: StreamOutput<String>,
pub aggregated_output: StreamOutput<String>,
pub duration: Duration,
}
@@ -253,7 +260,7 @@ async fn exec(
/// Consumes the output of a child process, truncating it so it is suitable for
/// use as the output of a `shell` tool call. Also enforces specified timeout.
pub(crate) async fn consume_truncated_output(
async fn consume_truncated_output(
mut child: Child,
timeout: Duration,
stdout_stream: Option<StdoutStream>,
@@ -273,19 +280,19 @@ pub(crate) async fn consume_truncated_output(
))
})?;
let (agg_tx, agg_rx) = async_channel::unbounded::<Vec<u8>>();
let stdout_handle = tokio::spawn(read_capped(
BufReader::new(stdout_reader),
MAX_STREAM_OUTPUT,
MAX_STREAM_OUTPUT_LINES,
stdout_stream.clone(),
false,
Some(agg_tx.clone()),
));
let stderr_handle = tokio::spawn(read_capped(
BufReader::new(stderr_reader),
MAX_STREAM_OUTPUT,
MAX_STREAM_OUTPUT_LINES,
stdout_stream.clone(),
true,
Some(agg_tx.clone()),
));
let exit_status = tokio::select! {
@@ -297,38 +304,48 @@ pub(crate) async fn consume_truncated_output(
// timeout
child.start_kill()?;
// Debatable whether `child.wait().await` should be called here.
synthetic_exit_status(128 + TIMEOUT_CODE)
synthetic_exit_status(EXIT_CODE_SIGNAL_BASE + TIMEOUT_CODE)
}
}
}
_ = tokio::signal::ctrl_c() => {
child.start_kill()?;
synthetic_exit_status(128 + SIGKILL_CODE)
synthetic_exit_status(EXIT_CODE_SIGNAL_BASE + SIGKILL_CODE)
}
};
let stdout = stdout_handle.await??;
let stderr = stderr_handle.await??;
drop(agg_tx);
let mut combined_buf = Vec::with_capacity(AGGREGATE_BUFFER_INITIAL_CAPACITY);
while let Ok(chunk) = agg_rx.recv().await {
append_all(&mut combined_buf, &chunk);
}
let aggregated_output = StreamOutput {
text: combined_buf,
truncated_after_lines: None,
};
Ok(RawExecToolCallOutput {
exit_status,
stdout,
stderr,
aggregated_output,
})
}
async fn read_capped<R: AsyncRead + Unpin + Send + 'static>(
mut reader: R,
max_output: usize,
max_lines: usize,
stream: Option<StdoutStream>,
is_stderr: bool,
aggregate_tx: Option<Sender<Vec<u8>>>,
) -> io::Result<StreamOutput<Vec<u8>>> {
let mut buf = Vec::with_capacity(max_output.min(8 * 1024));
let mut tmp = [0u8; 8192];
let mut buf = Vec::with_capacity(AGGREGATE_BUFFER_INITIAL_CAPACITY);
let mut tmp = [0u8; READ_CHUNK_SIZE];
let mut remaining_bytes = max_output;
let mut remaining_lines = max_lines;
// No caps: append all bytes
loop {
let n = reader.read(&mut tmp).await?;
@@ -355,33 +372,17 @@ async fn read_capped<R: AsyncRead + Unpin + Send + 'static>(
let _ = stream.tx_event.send(event).await;
}
// Copy into the buffer only while we still have byte and line budget.
if remaining_bytes > 0 && remaining_lines > 0 {
let mut copy_len = 0;
for &b in &tmp[..n] {
if remaining_bytes == 0 || remaining_lines == 0 {
break;
}
copy_len += 1;
remaining_bytes -= 1;
if b == b'\n' {
remaining_lines -= 1;
}
}
buf.extend_from_slice(&tmp[..copy_len]);
if let Some(tx) = &aggregate_tx {
let _ = tx.send(tmp[..n].to_vec()).await;
}
// Continue reading to EOF to avoid back-pressure, but discard once caps are hit.
}
let truncated = remaining_lines == 0 || remaining_bytes == 0;
append_all(&mut buf, &tmp[..n]);
// Continue reading to EOF to avoid back-pressure
}
Ok(StreamOutput {
text: buf,
truncated_after_lines: if truncated {
Some((max_lines - remaining_lines) as u32)
} else {
None
},
truncated_after_lines: None,
})
}

View File

@@ -10,5 +10,5 @@ pub use responses_api::EXEC_COMMAND_TOOL_NAME;
pub use responses_api::WRITE_STDIN_TOOL_NAME;
pub use responses_api::create_exec_command_tool_for_responses_api;
pub use responses_api::create_write_stdin_tool_for_responses_api;
pub use session_manager::SESSION_MANAGER;
pub use session_manager::SessionManager as ExecSessionManager;
pub use session_manager::result_into_payload;

View File

@@ -2,7 +2,6 @@ use std::collections::HashMap;
use std::io::ErrorKind;
use std::io::Read;
use std::sync::Arc;
use std::sync::LazyLock;
use std::sync::Mutex as StdMutex;
use std::sync::atomic::AtomicU32;
@@ -22,8 +21,6 @@ use crate::exec_command::exec_command_session::ExecCommandSession;
use crate::exec_command::session_id::SessionId;
use codex_protocol::models::FunctionCallOutputPayload;
pub static SESSION_MANAGER: LazyLock<SessionManager> = LazyLock::new(SessionManager::default);
#[derive(Debug, Default)]
pub struct SessionManager {
next_session_id: AtomicU32,

View File

@@ -90,7 +90,6 @@ pub fn find_family_for_model(slug: &str) -> Option<ModelFamily> {
model_family!(
slug, slug,
supports_reasoning_summaries: true,
apply_patch_tool_type: Some(ApplyPatchToolType::Freeform),
)
} else if slug.starts_with("gpt-4.1") {
model_family!(
@@ -107,7 +106,6 @@ pub fn find_family_for_model(slug: &str) -> Option<ModelFamily> {
model_family!(
slug, "gpt-5",
supports_reasoning_summaries: true,
apply_patch_tool_type: Some(ApplyPatchToolType::Freeform),
)
} else {
None

View File

@@ -79,13 +79,13 @@ pub(crate) fn get_model_info(model_family: &ModelFamily) -> Option<ModelInfo> {
}),
"gpt-5" => Some(ModelInfo {
context_window: 200_000,
max_output_tokens: 100_000,
context_window: 400_000,
max_output_tokens: 128_000,
}),
_ if slug.starts_with("codex-") => Some(ModelInfo {
context_window: 200_000,
max_output_tokens: 100_000,
context_window: 400_000,
max_output_tokens: 128_000,
}),
_ => None,

View File

@@ -47,6 +47,8 @@ pub(crate) enum OpenAiTool {
Function(ResponsesApiTool),
#[serde(rename = "local_shell")]
LocalShell {},
#[serde(rename = "web_search")]
WebSearch {},
#[serde(rename = "custom")]
Freeform(FreeformTool),
}
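A quick check on the wire shape of the new variant, assuming `OpenAiTool` is internally tagged on `type` (which the `serde(rename = ...)` attributes above suggest):

```
use serde::Serialize;

#[derive(Serialize)]
#[serde(tag = "type")]
enum Tool {
    #[serde(rename = "web_search")]
    WebSearch {},
}

fn main() {
    // An empty struct variant serializes to just its tag.
    let json = serde_json::to_string(&Tool::WebSearch {}).unwrap();
    assert_eq!(json, r#"{"type":"web_search"}"#);
}
```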
@@ -60,22 +62,35 @@ pub enum ConfigShellToolType {
}
#[derive(Debug, Clone)]
pub struct ToolsConfig {
pub(crate) struct ToolsConfig {
pub shell_type: ConfigShellToolType,
pub plan_tool: bool,
pub apply_patch_tool_type: Option<ApplyPatchToolType>,
pub web_search_request: bool,
}
pub(crate) struct ToolsConfigParams<'a> {
pub(crate) model_family: &'a ModelFamily,
pub(crate) approval_policy: AskForApproval,
pub(crate) sandbox_policy: SandboxPolicy,
pub(crate) include_plan_tool: bool,
pub(crate) include_apply_patch_tool: bool,
pub(crate) include_web_search_request: bool,
pub(crate) use_streamable_shell_tool: bool,
}
impl ToolsConfig {
pub fn new(
model_family: &ModelFamily,
approval_policy: AskForApproval,
sandbox_policy: SandboxPolicy,
include_plan_tool: bool,
include_apply_patch_tool: bool,
use_streamable_shell_tool: bool,
) -> Self {
let mut shell_type = if use_streamable_shell_tool {
pub fn new(params: &ToolsConfigParams) -> Self {
let ToolsConfigParams {
model_family,
approval_policy,
sandbox_policy,
include_plan_tool,
include_apply_patch_tool,
include_web_search_request,
use_streamable_shell_tool,
} = params;
let mut shell_type = if *use_streamable_shell_tool {
ConfigShellToolType::StreamableShell
} else if model_family.uses_local_shell_tool {
ConfigShellToolType::LocalShell
@@ -92,7 +107,7 @@ impl ToolsConfig {
Some(ApplyPatchToolType::Freeform) => Some(ApplyPatchToolType::Freeform),
Some(ApplyPatchToolType::Function) => Some(ApplyPatchToolType::Function),
None => {
if include_apply_patch_tool {
if *include_apply_patch_tool {
Some(ApplyPatchToolType::Freeform)
} else {
None
@@ -102,8 +117,9 @@ impl ToolsConfig {
Self {
shell_type,
plan_tool: include_plan_tool,
plan_tool: *include_plan_tool,
apply_patch_tool_type,
web_search_request: *include_web_search_request,
}
}
}
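One property of this refactor worth noting: the exhaustive `let ToolsConfigParams { .. } = params;` destructure means adding a field fails to compile until `new` and every construction site handle it, which is exactly the class of mistake the old positional-bool signature let through. A minimal sketch of the pattern with hypothetical field names:

```
struct Params {
    include_plan_tool: bool,
    include_web_search_request: bool,
}

fn build(params: &Params) -> (bool, bool) {
    // Exhaustive destructure: adding a field to Params breaks this line
    // (and every `Params { .. }` literal) until the new field is handled.
    let Params {
        include_plan_tool,
        include_web_search_request,
    } = params;
    (*include_plan_tool, *include_web_search_request)
}

fn main() {
    let p = Params { include_plan_tool: true, include_web_search_request: false };
    assert_eq!(build(&p), (true, false));
}
```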
@@ -521,8 +537,17 @@ pub(crate) fn get_openai_tools(
}
}
if config.web_search_request {
tools.push(OpenAiTool::WebSearch {});
}
if let Some(mcp_tools) = mcp_tools {
for (name, tool) in mcp_tools {
// Ensure deterministic ordering to maximize prompt cache hits.
// HashMap iteration order is non-deterministic, so sort by fully-qualified tool name.
let mut entries: Vec<(String, mcp_types::Tool)> = mcp_tools.into_iter().collect();
entries.sort_by(|a, b| a.0.cmp(&b.0));
for (name, tool) in entries.into_iter() {
match mcp_tool_to_openai_tool(name.clone(), tool.clone()) {
Ok(converted_tool) => tools.push(OpenAiTool::Function(converted_tool)),
Err(e) => {
@@ -549,6 +574,7 @@ mod tests {
.map(|tool| match tool {
OpenAiTool::Function(ResponsesApiTool { name, .. }) => name,
OpenAiTool::LocalShell {} => "local_shell",
OpenAiTool::WebSearch {} => "web_search",
OpenAiTool::Freeform(FreeformTool { name, .. }) => name,
})
.collect::<Vec<_>>();
@@ -570,46 +596,49 @@ mod tests {
fn test_get_openai_tools() {
let model_family = find_family_for_model("codex-mini-latest")
.expect("codex-mini-latest should be a valid model family");
let config = ToolsConfig::new(
&model_family,
AskForApproval::Never,
SandboxPolicy::ReadOnly,
true,
false,
/*use_experimental_streamable_shell_tool*/ false,
);
let config = ToolsConfig::new(&ToolsConfigParams {
model_family: &model_family,
approval_policy: AskForApproval::Never,
sandbox_policy: SandboxPolicy::ReadOnly,
include_plan_tool: true,
include_apply_patch_tool: false,
include_web_search_request: true,
use_streamable_shell_tool: false,
});
let tools = get_openai_tools(&config, Some(HashMap::new()));
assert_eq_tool_names(&tools, &["local_shell", "update_plan"]);
assert_eq_tool_names(&tools, &["local_shell", "update_plan", "web_search"]);
}
#[test]
fn test_get_openai_tools_default_shell() {
let model_family = find_family_for_model("o3").expect("o3 should be a valid model family");
let config = ToolsConfig::new(
&model_family,
AskForApproval::Never,
SandboxPolicy::ReadOnly,
true,
false,
/*use_experimental_streamable_shell_tool*/ false,
);
let config = ToolsConfig::new(&ToolsConfigParams {
model_family: &model_family,
approval_policy: AskForApproval::Never,
sandbox_policy: SandboxPolicy::ReadOnly,
include_plan_tool: true,
include_apply_patch_tool: false,
include_web_search_request: true,
use_streamable_shell_tool: false,
});
let tools = get_openai_tools(&config, Some(HashMap::new()));
assert_eq_tool_names(&tools, &["shell", "update_plan"]);
assert_eq_tool_names(&tools, &["shell", "update_plan", "web_search"]);
}
#[test]
fn test_get_openai_tools_mcp_tools() {
let model_family = find_family_for_model("o3").expect("o3 should be a valid model family");
let config = ToolsConfig::new(
&model_family,
AskForApproval::Never,
SandboxPolicy::ReadOnly,
false,
false,
/*use_experimental_streamable_shell_tool*/ false,
);
let config = ToolsConfig::new(&ToolsConfigParams {
model_family: &model_family,
approval_policy: AskForApproval::Never,
sandbox_policy: SandboxPolicy::ReadOnly,
include_plan_tool: false,
include_apply_patch_tool: false,
include_web_search_request: true,
use_streamable_shell_tool: false,
});
let tools = get_openai_tools(
&config,
Some(HashMap::from([(
@@ -631,8 +660,8 @@ mod tests {
"number_property": { "type": "number" },
},
"required": [
"string_property",
"number_property"
"string_property".to_string(),
"number_property".to_string()
],
"additionalProperties": Some(false),
},
@@ -648,10 +677,13 @@ mod tests {
)])),
);
assert_eq_tool_names(&tools, &["shell", "test_server/do_something_cool"]);
assert_eq_tool_names(
&tools,
&["shell", "web_search", "test_server/do_something_cool"],
);
assert_eq!(
tools[1],
tools[2],
OpenAiTool::Function(ResponsesApiTool {
name: "test_server/do_something_cool".to_string(),
parameters: JsonSchema::Object {
@@ -694,17 +726,93 @@ mod tests {
);
}
#[test]
fn test_get_openai_tools_mcp_tools_sorted_by_name() {
let model_family = find_family_for_model("o3").expect("o3 should be a valid model family");
let config = ToolsConfig::new(&ToolsConfigParams {
model_family: &model_family,
approval_policy: AskForApproval::Never,
sandbox_policy: SandboxPolicy::ReadOnly,
include_plan_tool: false,
include_apply_patch_tool: false,
include_web_search_request: false,
use_streamable_shell_tool: false,
});
// Intentionally construct the map with keys inserted out of alphabetical order so the sort is observable.
let tools_map: HashMap<String, mcp_types::Tool> = HashMap::from([
(
"test_server/do".to_string(),
mcp_types::Tool {
name: "a".to_string(),
input_schema: ToolInputSchema {
properties: Some(serde_json::json!({})),
required: None,
r#type: "object".to_string(),
},
output_schema: None,
title: None,
annotations: None,
description: Some("a".to_string()),
},
),
(
"test_server/something".to_string(),
mcp_types::Tool {
name: "b".to_string(),
input_schema: ToolInputSchema {
properties: Some(serde_json::json!({})),
required: None,
r#type: "object".to_string(),
},
output_schema: None,
title: None,
annotations: None,
description: Some("b".to_string()),
},
),
(
"test_server/cool".to_string(),
mcp_types::Tool {
name: "c".to_string(),
input_schema: ToolInputSchema {
properties: Some(serde_json::json!({})),
required: None,
r#type: "object".to_string(),
},
output_schema: None,
title: None,
annotations: None,
description: Some("c".to_string()),
},
),
]);
let tools = get_openai_tools(&config, Some(tools_map));
// Expect shell first, followed by MCP tools sorted by fully-qualified name.
assert_eq_tool_names(
&tools,
&[
"shell",
"test_server/cool",
"test_server/do",
"test_server/something",
],
);
}
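The test above pins the behavior; for intuition, a standalone sketch of why the sort is needed at all — `HashMap` iteration order is randomized per process, so serializing tools straight from the map yields a different prompt prefix on each run:

```
use std::collections::HashMap;

fn main() {
    let tools: HashMap<&str, &str> = HashMap::from([
        ("test_server/do", "a"),
        ("test_server/something", "b"),
        ("test_server/cool", "c"),
    ]);
    // Iteration order here is unspecified and varies between processes...
    let mut names: Vec<&str> = tools.keys().copied().collect();
    // ...so sort by fully-qualified name before building the prompt.
    names.sort();
    assert_eq!(names, ["test_server/cool", "test_server/do", "test_server/something"]);
}
```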
#[test]
fn test_mcp_tool_property_missing_type_defaults_to_string() {
let model_family = find_family_for_model("o3").expect("o3 should be a valid model family");
let config = ToolsConfig::new(
&model_family,
AskForApproval::Never,
SandboxPolicy::ReadOnly,
false,
false,
/*use_experimental_streamable_shell_tool*/ false,
);
let config = ToolsConfig::new(&ToolsConfigParams {
model_family: &model_family,
approval_policy: AskForApproval::Never,
sandbox_policy: SandboxPolicy::ReadOnly,
include_plan_tool: false,
include_apply_patch_tool: false,
include_web_search_request: true,
use_streamable_shell_tool: false,
});
let tools = get_openai_tools(
&config,
@@ -729,10 +837,10 @@ mod tests {
)])),
);
assert_eq_tool_names(&tools, &["shell", "dash/search"]);
assert_eq_tool_names(&tools, &["shell", "web_search", "dash/search"]);
assert_eq!(
tools[1],
tools[2],
OpenAiTool::Function(ResponsesApiTool {
name: "dash/search".to_string(),
parameters: JsonSchema::Object {
@@ -754,14 +862,15 @@ mod tests {
#[test]
fn test_mcp_tool_integer_normalized_to_number() {
let model_family = find_family_for_model("o3").expect("o3 should be a valid model family");
let config = ToolsConfig::new(
&model_family,
AskForApproval::Never,
SandboxPolicy::ReadOnly,
false,
false,
/*use_experimental_streamable_shell_tool*/ false,
);
let config = ToolsConfig::new(&ToolsConfigParams {
model_family: &model_family,
approval_policy: AskForApproval::Never,
sandbox_policy: SandboxPolicy::ReadOnly,
include_plan_tool: false,
include_apply_patch_tool: false,
include_web_search_request: true,
use_streamable_shell_tool: false,
});
let tools = get_openai_tools(
&config,
@@ -784,9 +893,9 @@ mod tests {
)])),
);
assert_eq_tool_names(&tools, &["shell", "dash/paginate"]);
assert_eq_tool_names(&tools, &["shell", "web_search", "dash/paginate"]);
assert_eq!(
tools[1],
tools[2],
OpenAiTool::Function(ResponsesApiTool {
name: "dash/paginate".to_string(),
parameters: JsonSchema::Object {
@@ -806,14 +915,15 @@ mod tests {
#[test]
fn test_mcp_tool_array_without_items_gets_default_string_items() {
let model_family = find_family_for_model("o3").expect("o3 should be a valid model family");
let config = ToolsConfig::new(
&model_family,
AskForApproval::Never,
SandboxPolicy::ReadOnly,
false,
false,
/*use_experimental_streamable_shell_tool*/ false,
);
let config = ToolsConfig::new(&ToolsConfigParams {
model_family: &model_family,
approval_policy: AskForApproval::Never,
sandbox_policy: SandboxPolicy::ReadOnly,
include_plan_tool: false,
include_apply_patch_tool: false,
include_web_search_request: true,
use_streamable_shell_tool: false,
});
let tools = get_openai_tools(
&config,
@@ -836,9 +946,9 @@ mod tests {
)])),
);
assert_eq_tool_names(&tools, &["shell", "dash/tags"]);
assert_eq_tool_names(&tools, &["shell", "web_search", "dash/tags"]);
assert_eq!(
tools[1],
tools[2],
OpenAiTool::Function(ResponsesApiTool {
name: "dash/tags".to_string(),
parameters: JsonSchema::Object {
@@ -861,14 +971,15 @@ mod tests {
#[test]
fn test_mcp_tool_anyof_defaults_to_string() {
let model_family = find_family_for_model("o3").expect("o3 should be a valid model family");
let config = ToolsConfig::new(
&model_family,
AskForApproval::Never,
SandboxPolicy::ReadOnly,
false,
false,
/*use_experimental_streamable_shell_tool*/ false,
);
let config = ToolsConfig::new(&ToolsConfigParams {
model_family: &model_family,
approval_policy: AskForApproval::Never,
sandbox_policy: SandboxPolicy::ReadOnly,
include_plan_tool: false,
include_apply_patch_tool: false,
include_web_search_request: true,
use_streamable_shell_tool: false,
});
let tools = get_openai_tools(
&config,
@@ -891,9 +1002,9 @@ mod tests {
)])),
);
assert_eq_tool_names(&tools, &["shell", "dash/value"]);
assert_eq_tool_names(&tools, &["shell", "web_search", "dash/value"]);
assert_eq!(
tools[1],
tools[2],
OpenAiTool::Function(ResponsesApiTool {
name: "dash/value".to_string(),
parameters: JsonSchema::Object {

View File

@@ -0,0 +1,3 @@
// Single integration test binary that aggregates all test modules.
// The submodules live in `tests/all/`.
mod suite;

View File

@@ -70,12 +70,12 @@ async fn truncates_output_lines() {
let output = run_test_cmd(tmp, cmd).await.unwrap();
let expected_output = (1..=256)
let expected_output = (1..=300)
.map(|i| format!("{i}\n"))
.collect::<Vec<_>>()
.join("");
assert_eq!(output.stdout.text, expected_output);
assert_eq!(output.stdout.truncated_after_lines, Some(256));
assert_eq!(output.stdout.truncated_after_lines, None);
}
/// Command succeeds with exit code 0 normally
@@ -91,8 +91,8 @@ async fn truncates_output_bytes() {
let output = run_test_cmd(tmp, cmd).await.unwrap();
assert_eq!(output.stdout.text.len(), 10240);
assert_eq!(output.stdout.truncated_after_lines, Some(10));
assert!(output.stdout.text.len() >= 15000);
assert_eq!(output.stdout.truncated_after_lines, None);
}
/// Command not found returns exit code 127, this is not considered a sandbox error

View File

@@ -139,3 +139,34 @@ async fn test_exec_stderr_stream_events_echo() {
}
assert_eq!(String::from_utf8_lossy(&err), "oops\n");
}
#[tokio::test]
async fn test_aggregated_output_interleaves_in_order() {
// Spawn a shell that alternates stdout and stderr with sleeps to enforce order.
let cmd = vec![
"/bin/sh".to_string(),
"-c".to_string(),
"printf 'O1\\n'; sleep 0.01; printf 'E1\\n' 1>&2; sleep 0.01; printf 'O2\\n'; sleep 0.01; printf 'E2\\n' 1>&2".to_string(),
];
let params = ExecParams {
command: cmd,
cwd: std::env::current_dir().unwrap_or_else(|_| PathBuf::from(".")),
timeout_ms: Some(5_000),
env: HashMap::new(),
with_escalated_permissions: None,
justification: None,
};
let policy = SandboxPolicy::new_read_only_policy();
let result = process_exec_tool_call(params, SandboxType::None, &policy, &None, None)
.await
.expect("process_exec_tool_call");
assert_eq!(result.exit_code, 0);
assert_eq!(result.stdout.text, "O1\nO2\n");
assert_eq!(result.stderr.text, "E1\nE2\n");
assert_eq!(result.aggregated_output.text, "O1\nE1\nO2\nE2\n");
assert_eq!(result.aggregated_output.truncated_after_lines, None);
}

View File

@@ -0,0 +1,12 @@
// Aggregates all former standalone integration tests as modules.
mod cli_stream;
mod client;
mod compact;
mod exec;
mod exec_stream_events;
mod live_cli;
mod prompt_caching;
mod seatbelt;
mod stream_error_allows_next_turn;
mod stream_no_completed;

View File

@@ -107,8 +107,8 @@ async fn codex_mini_latest_tools() {
assert_eq!(requests.len(), 2, "expected two POST requests");
let expected_instructions = [
include_str!("../prompt.md"),
include_str!("../../apply-patch/apply_patch_tool_instructions.md"),
include_str!("../../prompt.md"),
include_str!("../../../apply-patch/apply_patch_tool_instructions.md"),
]
.join("\n");
@@ -188,7 +188,7 @@ async fn prompt_tools_are_consistent_across_requests() {
let requests = server.received_requests().await.unwrap();
assert_eq!(requests.len(), 2, "expected two POST requests");
let expected_instructions: &str = include_str!("../prompt.md");
let expected_instructions: &str = include_str!("../../prompt.md");
// our internal implementation is responsible for keeping tools in sync
// with the OpenAI schema, so we just verify the tool presence here
let expected_tools_names: &[&str] = &["shell", "update_plan", "apply_patch"];

View File

@@ -24,6 +24,7 @@ use codex_core::protocol::StreamErrorEvent;
use codex_core::protocol::TaskCompleteEvent;
use codex_core::protocol::TurnAbortReason;
use codex_core::protocol::TurnDiffEvent;
use codex_core::protocol::WebSearchBeginEvent;
use owo_colors::OwoColorize;
use owo_colors::Style;
use shlex::try_join;
@@ -287,8 +288,7 @@ impl EventProcessor for EventProcessorWithHumanOutput {
EventMsg::ExecCommandOutputDelta(_) => {}
EventMsg::ExecCommandEnd(ExecCommandEndEvent {
call_id,
stdout,
stderr,
aggregated_output,
duration,
exit_code,
..
@@ -304,8 +304,7 @@ impl EventProcessor for EventProcessorWithHumanOutput {
("".to_string(), format!("exec('{call_id}')"))
};
let output = if exit_code == 0 { stdout } else { stderr };
let truncated_output = output
let truncated_output = aggregated_output
.lines()
.take(MAX_OUTPUT_LINES_FOR_EXEC_TOOL_CALL)
.collect::<Vec<_>>()
@@ -363,6 +362,9 @@ impl EventProcessor for EventProcessorWithHumanOutput {
}
}
}
EventMsg::WebSearchBegin(WebSearchBeginEvent { call_id: _, query }) => {
ts_println!(self, "🌐 {query}");
}
EventMsg::PatchApplyBegin(PatchApplyBeginEvent {
call_id,
auto_approved,

View File

@@ -150,6 +150,7 @@ pub async fn run_main(cli: Cli, codex_linux_sandbox_exe: Option<PathBuf>) -> any
include_apply_patch_tool: None,
disable_response_storage: oss.then_some(true),
show_raw_agent_reasoning: oss.then_some(true),
tools_web_search_request: None,
};
// Parse `-c` overrides.
let cli_kv_overrides = match config_overrides.parse_overrides() {

View File

@@ -0,0 +1,3 @@
// Single integration test binary that aggregates all test modules.
// The submodules live in `tests/suite/`.
mod suite;

View File

@@ -1,339 +0,0 @@
#![allow(clippy::expect_used, clippy::unwrap_used)]
use anyhow::Context;
use assert_cmd::prelude::*;
use codex_core::CODEX_APPLY_PATCH_ARG1;
use std::fs;
use std::process::Command;
use tempfile::tempdir;
/// While we may add an `apply-patch` subcommand to the `codex` CLI multitool
/// at some point, we must ensure that the smaller `codex-exec` CLI can still
/// emulate the `apply_patch` CLI.
#[test]
fn test_standalone_exec_cli_can_use_apply_patch() -> anyhow::Result<()> {
let tmp = tempdir()?;
let relative_path = "source.txt";
let absolute_path = tmp.path().join(relative_path);
fs::write(&absolute_path, "original content\n")?;
Command::cargo_bin("codex-exec")
.context("should find binary for codex-exec")?
.arg(CODEX_APPLY_PATCH_ARG1)
.arg(
r#"*** Begin Patch
*** Update File: source.txt
@@
-original content
+modified by apply_patch
*** End Patch"#,
)
.current_dir(tmp.path())
.assert()
.success()
.stdout("Success. Updated the following files:\nM source.txt\n")
.stderr(predicates::str::is_empty());
assert_eq!(
fs::read_to_string(absolute_path)?,
"modified by apply_patch\n"
);
Ok(())
}
#[cfg(not(target_os = "windows"))]
#[tokio::test]
async fn test_apply_patch_tool() -> anyhow::Result<()> {
use core_test_support::load_sse_fixture_with_id_from_str;
use tempfile::TempDir;
use wiremock::Mock;
use wiremock::MockServer;
use wiremock::ResponseTemplate;
use wiremock::matchers::method;
use wiremock::matchers::path;
const SSE_TOOL_CALL_ADD: &str = r#"[
{
"type": "response.output_item.done",
"item": {
"type": "function_call",
"name": "apply_patch",
"arguments": "{\n \"input\": \"*** Begin Patch\\n*** Add File: test.md\\n+Hello world\\n*** End Patch\"\n}",
"call_id": "__ID__"
}
},
{
"type": "response.completed",
"response": {
"id": "__ID__",
"usage": {
"input_tokens": 0,
"input_tokens_details": null,
"output_tokens": 0,
"output_tokens_details": null,
"total_tokens": 0
},
"output": []
}
}
]"#;
const SSE_TOOL_CALL_UPDATE: &str = r#"[
{
"type": "response.output_item.done",
"item": {
"type": "function_call",
"name": "apply_patch",
"arguments": "{\n \"input\": \"*** Begin Patch\\n*** Update File: test.md\\n@@\\n-Hello world\\n+Final text\\n*** End Patch\"\n}",
"call_id": "__ID__"
}
},
{
"type": "response.completed",
"response": {
"id": "__ID__",
"usage": {
"input_tokens": 0,
"input_tokens_details": null,
"output_tokens": 0,
"output_tokens_details": null,
"total_tokens": 0
},
"output": []
}
}
]"#;
const SSE_TOOL_CALL_COMPLETED: &str = r#"[
{
"type": "response.completed",
"response": {
"id": "__ID__",
"usage": {
"input_tokens": 0,
"input_tokens_details": null,
"output_tokens": 0,
"output_tokens_details": null,
"total_tokens": 0
},
"output": []
}
}
]"#;
// Start a mock model server
let server = MockServer::start().await;
// First response: model calls apply_patch to create test.md
let first = ResponseTemplate::new(200)
.insert_header("content-type", "text/event-stream")
.set_body_raw(
load_sse_fixture_with_id_from_str(SSE_TOOL_CALL_ADD, "call1"),
"text/event-stream",
);
Mock::given(method("POST"))
// .and(path("/v1/responses"))
.respond_with(first)
.up_to_n_times(1)
.mount(&server)
.await;
// Second response: model calls apply_patch to update test.md
let second = ResponseTemplate::new(200)
.insert_header("content-type", "text/event-stream")
.set_body_raw(
load_sse_fixture_with_id_from_str(SSE_TOOL_CALL_UPDATE, "call2"),
"text/event-stream",
);
Mock::given(method("POST"))
.and(path("/v1/responses"))
.respond_with(second)
.up_to_n_times(1)
.mount(&server)
.await;
let final_completed = ResponseTemplate::new(200)
.insert_header("content-type", "text/event-stream")
.set_body_raw(
load_sse_fixture_with_id_from_str(SSE_TOOL_CALL_COMPLETED, "resp3"),
"text/event-stream",
);
Mock::given(method("POST"))
.and(path("/v1/responses"))
.respond_with(final_completed)
.expect(1)
.mount(&server)
.await;
let tmp_cwd = TempDir::new().unwrap();
Command::cargo_bin("codex-exec")
.context("should find binary for codex-exec")?
.current_dir(tmp_cwd.path())
.env("CODEX_HOME", tmp_cwd.path())
.env("OPENAI_API_KEY", "dummy")
.env("OPENAI_BASE_URL", format!("{}/v1", server.uri()))
.arg("--skip-git-repo-check")
.arg("-s")
.arg("workspace-write")
.arg("foo")
.assert()
.success();
// Verify final file contents
let final_path = tmp_cwd.path().join("test.md");
let contents = std::fs::read_to_string(&final_path)
.unwrap_or_else(|e| panic!("failed reading {}: {e}", final_path.display()));
assert_eq!(contents, "Final text\n");
Ok(())
}
#[cfg(not(target_os = "windows"))]
#[tokio::test]
async fn test_apply_patch_freeform_tool() -> anyhow::Result<()> {
use core_test_support::load_sse_fixture_with_id_from_str;
use tempfile::TempDir;
use wiremock::Mock;
use wiremock::MockServer;
use wiremock::ResponseTemplate;
use wiremock::matchers::method;
use wiremock::matchers::path;
const SSE_TOOL_CALL_ADD: &str = r#"[
{
"type": "response.output_item.done",
"item": {
"type": "custom_tool_call",
"name": "apply_patch",
"input": "*** Begin Patch\n*** Add File: test.md\n+Hello world\n*** End Patch",
"call_id": "__ID__"
}
},
{
"type": "response.completed",
"response": {
"id": "__ID__",
"usage": {
"input_tokens": 0,
"input_tokens_details": null,
"output_tokens": 0,
"output_tokens_details": null,
"total_tokens": 0
},
"output": []
}
}
]"#;
const SSE_TOOL_CALL_UPDATE: &str = r#"[
{
"type": "response.output_item.done",
"item": {
"type": "custom_tool_call",
"name": "apply_patch",
"input": "*** Begin Patch\n*** Update File: test.md\n@@\n-Hello world\n+Final text\n*** End Patch",
"call_id": "__ID__"
}
},
{
"type": "response.completed",
"response": {
"id": "__ID__",
"usage": {
"input_tokens": 0,
"input_tokens_details": null,
"output_tokens": 0,
"output_tokens_details": null,
"total_tokens": 0
},
"output": []
}
}
]"#;
const SSE_TOOL_CALL_COMPLETED: &str = r#"[
{
"type": "response.completed",
"response": {
"id": "__ID__",
"usage": {
"input_tokens": 0,
"input_tokens_details": null,
"output_tokens": 0,
"output_tokens_details": null,
"total_tokens": 0
},
"output": []
}
}
]"#;
// Start a mock model server
let server = MockServer::start().await;
// First response: model calls apply_patch to create test.md
let first = ResponseTemplate::new(200)
.insert_header("content-type", "text/event-stream")
.set_body_raw(
load_sse_fixture_with_id_from_str(SSE_TOOL_CALL_ADD, "call1"),
"text/event-stream",
);
Mock::given(method("POST"))
// .and(path("/v1/responses"))
.respond_with(first)
.up_to_n_times(1)
.mount(&server)
.await;
// Second response: model calls apply_patch to update test.md
let second = ResponseTemplate::new(200)
.insert_header("content-type", "text/event-stream")
.set_body_raw(
load_sse_fixture_with_id_from_str(SSE_TOOL_CALL_UPDATE, "call2"),
"text/event-stream",
);
Mock::given(method("POST"))
.and(path("/v1/responses"))
.respond_with(second)
.up_to_n_times(1)
.mount(&server)
.await;
let final_completed = ResponseTemplate::new(200)
.insert_header("content-type", "text/event-stream")
.set_body_raw(
load_sse_fixture_with_id_from_str(SSE_TOOL_CALL_COMPLETED, "resp3"),
"text/event-stream",
);
Mock::given(method("POST"))
// .and(path("/v1/responses"))
.respond_with(final_completed)
.expect(1)
.mount(&server)
.await;
let tmp_cwd = TempDir::new().unwrap();
Command::cargo_bin("codex-exec")
.context("should find binary for codex-exec")?
.current_dir(tmp_cwd.path())
.env("CODEX_HOME", tmp_cwd.path())
.env("OPENAI_API_KEY", "dummy")
.env("OPENAI_BASE_URL", format!("{}/v1", server.uri()))
.arg("--skip-git-repo-check")
.arg("-s")
.arg("workspace-write")
.arg("foo")
.assert()
.success();
// Verify final file contents
let final_path = tmp_cwd.path().join("test.md");
let contents = std::fs::read_to_string(&final_path)
.unwrap_or_else(|e| panic!("failed reading {}: {e}", final_path.display()));
assert_eq!(contents, "Final text\n");
Ok(())
}

View File

@@ -0,0 +1,4 @@
class BaseClass:
def method():
return True

View File

@@ -0,0 +1,25 @@
[
{
"type": "response.output_item.done",
"item": {
"type": "custom_tool_call",
"name": "apply_patch",
"input": "*** Begin Patch\n*** Add File: test.md\n+Hello world\n*** End Patch",
"call_id": "__ID__"
}
},
{
"type": "response.completed",
"response": {
"id": "__ID__",
"usage": {
"input_tokens": 0,
"input_tokens_details": null,
"output_tokens": 0,
"output_tokens_details": null,
"total_tokens": 0
},
"output": []
}
}
]

View File

@@ -0,0 +1,25 @@
[
{
"type": "response.output_item.done",
"item": {
"type": "custom_tool_call",
"name": "apply_patch",
"input": "*** Begin Patch\n*** Update File: app.py\n@@ class BaseClass:\n@@ def method():\n- return False\n+ return True\n*** End Patch",
"call_id": "__ID__"
}
},
{
"type": "response.completed",
"response": {
"id": "__ID__",
"usage": {
"input_tokens": 0,
"input_tokens_details": null,
"output_tokens": 0,
"output_tokens_details": null,
"total_tokens": 0
},
"output": []
}
}
]
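Unescaped, the patch in this fixture reads as below. The stacked `@@` context markers scope the hunk first to `class BaseClass:` and then to `def method():`, so `- return False` is matched inside that method rather than at its first occurrence in the file; this is the multiple-context-line matching that `test_apply_patch_context` exercises:

```
*** Begin Patch
*** Update File: app.py
@@ class BaseClass:
@@ def method():
- return False
+ return True
*** End Patch
```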

View File

@@ -0,0 +1,25 @@
[
{
"type": "response.output_item.done",
"item": {
"type": "custom_tool_call",
"name": "apply_patch",
"input": "*** Begin Patch\n*** Add File: app.py\n+class BaseClass:\n+ def method():\n+ return False\n*** End Patch",
"call_id": "__ID__"
}
},
{
"type": "response.completed",
"response": {
"id": "__ID__",
"usage": {
"input_tokens": 0,
"input_tokens_details": null,
"output_tokens": 0,
"output_tokens_details": null,
"total_tokens": 0
},
"output": []
}
}
]

View File

@@ -0,0 +1,25 @@
[
{
"type": "response.output_item.done",
"item": {
"type": "custom_tool_call",
"name": "apply_patch",
"input": "*** Begin Patch\n*** Update File: app.py\n@@ def method():\n- return False\n+\n+ return True\n*** End Patch",
"call_id": "__ID__"
}
},
{
"type": "response.completed",
"response": {
"id": "__ID__",
"usage": {
"input_tokens": 0,
"input_tokens_details": null,
"output_tokens": 0,
"output_tokens_details": null,
"total_tokens": 0
},
"output": []
}
}
]

View File

@@ -0,0 +1,25 @@
[
{
"type": "response.output_item.done",
"item": {
"type": "function_call",
"name": "apply_patch",
"arguments": "{\n \"input\": \"*** Begin Patch\\n*** Update File: test.md\\n@@\\n-Hello world\\n+Final text\\n*** End Patch\"\n}",
"call_id": "__ID__"
}
},
{
"type": "response.completed",
"response": {
"id": "__ID__",
"usage": {
"input_tokens": 0,
"input_tokens_details": null,
"output_tokens": 0,
"output_tokens_details": null,
"total_tokens": 0
},
"output": []
}
}
]

View File

@@ -0,0 +1,16 @@
[
{
"type": "response.completed",
"response": {
"id": "__ID__",
"usage": {
"input_tokens": 0,
"input_tokens_details": null,
"output_tokens": 0,
"output_tokens_details": null,
"total_tokens": 0
},
"output": []
}
}
]

View File

@@ -0,0 +1,146 @@
#![allow(clippy::expect_used, clippy::unwrap_used)]
use anyhow::Context;
use assert_cmd::prelude::*;
use codex_core::CODEX_APPLY_PATCH_ARG1;
use std::fs;
use std::process::Command;
use tempfile::tempdir;
/// While we may add an `apply-patch` subcommand to the `codex` CLI multitool
/// at some point, we must ensure that the smaller `codex-exec` CLI can still
/// emulate the `apply_patch` CLI.
#[test]
fn test_standalone_exec_cli_can_use_apply_patch() -> anyhow::Result<()> {
let tmp = tempdir()?;
let relative_path = "source.txt";
let absolute_path = tmp.path().join(relative_path);
fs::write(&absolute_path, "original content\n")?;
Command::cargo_bin("codex-exec")
.context("should find binary for codex-exec")?
.arg(CODEX_APPLY_PATCH_ARG1)
.arg(
r#"*** Begin Patch
*** Update File: source.txt
@@
-original content
+modified by apply_patch
*** End Patch"#,
)
.current_dir(tmp.path())
.assert()
.success()
.stdout("Success. Updated the following files:\nM source.txt\n")
.stderr(predicates::str::is_empty());
assert_eq!(
fs::read_to_string(absolute_path)?,
"modified by apply_patch\n"
);
Ok(())
}
#[cfg(not(target_os = "windows"))]
#[tokio::test(flavor = "multi_thread", worker_threads = 4)]
async fn test_apply_patch_tool() -> anyhow::Result<()> {
use crate::suite::common::run_e2e_exec_test;
use codex_core::spawn::CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR;
if std::env::var(CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR).is_ok() {
println!(
"Skipping test because it cannot execute when network is disabled in a Codex sandbox."
);
return Ok(());
}
let tmp_cwd = tempdir().expect("failed to create temp dir");
let tmp_path = tmp_cwd.path().to_path_buf();
run_e2e_exec_test(
tmp_cwd.path(),
vec![
include_str!("../fixtures/sse_apply_patch_add.json").to_string(),
include_str!("../fixtures/sse_apply_patch_update.json").to_string(),
include_str!("../fixtures/sse_response_completed.json").to_string(),
],
)
.await;
let final_path = tmp_path.join("test.md");
let contents = std::fs::read_to_string(&final_path)
.unwrap_or_else(|e| panic!("failed reading {}: {e}", final_path.display()));
assert_eq!(contents, "Final text\n");
Ok(())
}
#[cfg(not(target_os = "windows"))]
#[tokio::test(flavor = "multi_thread", worker_threads = 4)]
async fn test_apply_patch_freeform_tool() -> anyhow::Result<()> {
use crate::suite::common::run_e2e_exec_test;
use codex_core::spawn::CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR;
if std::env::var(CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR).is_ok() {
println!(
"Skipping test because it cannot execute when network is disabled in a Codex sandbox."
);
return Ok(());
}
let tmp_cwd = tempdir().expect("failed to create temp dir");
run_e2e_exec_test(
tmp_cwd.path(),
vec![
include_str!("../fixtures/sse_apply_patch_freeform_add.json").to_string(),
include_str!("../fixtures/sse_apply_patch_freeform_update.json").to_string(),
include_str!("../fixtures/sse_response_completed.json").to_string(),
],
)
.await;
// Verify final file contents
let final_path = tmp_cwd.path().join("app.py");
let contents = std::fs::read_to_string(&final_path)
.unwrap_or_else(|e| panic!("failed reading {}: {e}", final_path.display()));
assert_eq!(
contents,
include_str!("../fixtures/apply_patch_freeform_final.txt")
);
Ok(())
}
#[cfg(not(target_os = "windows"))]
#[tokio::test]
async fn test_apply_patch_context() -> anyhow::Result<()> {
use crate::suite::common::run_e2e_exec_test;
use codex_core::spawn::CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR;
if std::env::var(CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR).is_ok() {
println!(
"Skipping test because it cannot execute when network is disabled in a Codex sandbox."
);
return Ok(());
}
let tmp_cwd = tempdir().expect("failed to create temp dir");
run_e2e_exec_test(
tmp_cwd.path(),
vec![
include_str!("../fixtures/sse_apply_patch_freeform_add.json").to_string(),
include_str!("../fixtures/sse_apply_patch_context_update.json").to_string(),
include_str!("../fixtures/sse_response_completed.json").to_string(),
],
)
.await;
// Verify final file contents
let final_path = tmp_cwd.path().join("app.py");
let contents = std::fs::read_to_string(&final_path)
.unwrap_or_else(|e| panic!("failed reading {}: {e}", final_path.display()));
assert_eq!(
contents,
r#"class BaseClass:
def method():
return True
"#
);
Ok(())
}

View File

@@ -0,0 +1,73 @@
// this file is only used for e2e tests which are currently disabled on windows
#![cfg(not(target_os = "windows"))]
#![allow(clippy::expect_used)]
use anyhow::Context;
use assert_cmd::prelude::*;
use core_test_support::load_sse_fixture_with_id_from_str;
use std::path::Path;
use std::process::Command;
use std::sync::atomic::AtomicUsize;
use std::sync::atomic::Ordering;
use wiremock::Mock;
use wiremock::MockServer;
use wiremock::matchers::method;
use wiremock::matchers::path;
use wiremock::Respond;
struct SeqResponder {
num_calls: AtomicUsize,
responses: Vec<String>,
}
impl Respond for SeqResponder {
fn respond(&self, _: &wiremock::Request) -> wiremock::ResponseTemplate {
let call_num = self.num_calls.fetch_add(1, Ordering::SeqCst);
match self.responses.get(call_num) {
Some(body) => wiremock::ResponseTemplate::new(200)
.insert_header("content-type", "text/event-stream")
.set_body_raw(
load_sse_fixture_with_id_from_str(body, &format!("request_{}", call_num)),
"text/event-stream",
),
None => panic!("no response for {call_num}"),
}
}
}
/// Helper function to run an E2E test of a codex-exec call. Starts a wiremock
/// server that serves the given response streams in order, one per API call,
/// then runs the codex-exec command with the wiremock server as the model server.
pub(crate) async fn run_e2e_exec_test(cwd: &Path, response_streams: Vec<String>) {
let server = MockServer::start().await;
let num_calls = response_streams.len();
let seq_responder = SeqResponder {
num_calls: AtomicUsize::new(0),
responses: response_streams,
};
Mock::given(method("POST"))
.and(path("/v1/responses"))
.respond_with(seq_responder)
.expect(num_calls as u64)
.mount(&server)
.await;
let cwd = cwd.to_path_buf();
let uri = server.uri();
Command::cargo_bin("codex-exec")
.context("should find binary for codex-exec")
.expect("should find binary for codex-exec")
.current_dir(cwd.clone())
.env("CODEX_HOME", cwd.clone())
.env("OPENAI_API_KEY", "dummy")
.env("OPENAI_BASE_URL", format!("{}/v1", uri))
.arg("--skip-git-repo-check")
.arg("-s")
.arg("danger-full-access")
.arg("foo")
.assert()
.success();
}

View File

@@ -0,0 +1,4 @@
// Aggregates all former standalone integration tests as modules.
mod apply_patch;
mod common;
mod sandbox;

View File

@@ -0,0 +1,3 @@
// Single integration test binary that aggregates all test modules.
// The submodules live in `tests/suite/`.
mod suite;

View File

@@ -0,0 +1,10 @@
// Aggregates all former standalone integration tests as modules.
mod bad;
mod cp;
mod good;
mod head;
mod literal;
mod ls;
mod parse_sed_command;
mod pwd;
mod sed;

View File

@@ -0,0 +1,3 @@
// Single integration test binary that aggregates all test modules.
// The submodules live in `tests/suite/`.
mod suite;

View File

@@ -0,0 +1,2 @@
// Aggregates all former standalone integration tests as modules.
mod landlock;

View File

@@ -0,0 +1,3 @@
// Single integration test binary that aggregates all test modules.
// The submodules live in `tests/suite/`.
mod suite;

View File

@@ -0,0 +1,2 @@
// Aggregates all former standalone integration tests as modules.
mod login_server_e2e;

View File

@@ -738,6 +738,7 @@ fn derive_config_from_params(
include_apply_patch_tool,
disable_response_storage: None,
show_raw_agent_reasoning: None,
tools_web_search_request: None,
};
let cli_overrides = cli_overrides

View File

@@ -163,6 +163,7 @@ impl CodexToolCallParam {
include_apply_patch_tool: None,
disable_response_storage: None,
show_raw_agent_reasoning: None,
tools_web_search_request: None,
};
let cli_overrides = cli_overrides

View File

@@ -272,6 +272,7 @@ async fn run_codex_tool_session_inner(
| EventMsg::PatchApplyBegin(_)
| EventMsg::PatchApplyEnd(_)
| EventMsg::TurnDiff(_)
| EventMsg::WebSearchBegin(_)
| EventMsg::GetHistoryEntryResponse(_)
| EventMsg::PlanUpdate(_)
| EventMsg::TurnAborted(_)

View File

@@ -0,0 +1,3 @@
// Single integration test binary that aggregates all test modules.
// The submodules live in `tests/suite/`.
mod suite;

View File

@@ -0,0 +1,8 @@
// Aggregates all former standalone integration tests as modules.
mod auth;
mod codex_message_processor_flow;
mod codex_tool;
mod create_conversation;
mod interrupt;
mod login;
mod send_message;

View File

@@ -0,0 +1,3 @@
// Single integration test binary that aggregates all test modules.
// The submodules live in `tests/suite/`.
mod suite;

View File

@@ -0,0 +1,3 @@
// Aggregates all former standalone integration tests as modules.
mod initialize;
mod progress_notification;

View File

@@ -48,6 +48,8 @@ pub fn generate_ts(out_dir: &Path, prettier: Option<&Path>) -> Result<()> {
codex_protocol::mcp_protocol::ExecCommandApprovalResponse::export_all_to(out_dir)?;
codex_protocol::mcp_protocol::ServerNotification::export_all_to(out_dir)?;
generate_index_ts(out_dir)?;
// Prepend header to each generated .ts file
let ts_files = ts_files_in(out_dir)?;
for file in &ts_files {
@@ -109,5 +111,39 @@ fn ts_files_in(dir: &Path) -> Result<Vec<PathBuf>> {
files.push(path);
}
}
files.sort();
Ok(files)
}
/// Generate an index.ts file that re-exports all generated types.
/// This allows consumers to import all types from a single file.
fn generate_index_ts(out_dir: &Path) -> Result<PathBuf> {
let mut entries: Vec<String> = Vec::new();
let mut stems: Vec<String> = ts_files_in(out_dir)?
.into_iter()
.filter_map(|p| {
let stem = p.file_stem()?.to_string_lossy().into_owned();
if stem == "index" { None } else { Some(stem) }
})
.collect();
stems.sort();
stems.dedup();
for name in stems {
entries.push(format!("export type {{ {name} }} from \"./{name}\";\n"));
}
let mut content =
String::with_capacity(HEADER.len() + entries.iter().map(|s| s.len()).sum::<usize>());
content.push_str(HEADER);
for line in &entries {
content.push_str(line);
}
let index_path = out_dir.join("index.ts");
let mut f = fs::File::create(&index_path)
.with_context(|| format!("Failed to create {}", index_path.display()))?;
f.write_all(content.as_bytes())
.with_context(|| format!("Failed to write {}", index_path.display()))?;
Ok(index_path)
}

View File

@@ -437,6 +437,8 @@ pub enum EventMsg {
McpToolCallEnd(McpToolCallEndEvent),
WebSearchBegin(WebSearchBeginEvent),
/// Notification that the server is about to execute a command.
ExecCommandBegin(ExecCommandBeginEvent),
@@ -658,6 +660,12 @@ impl McpToolCallEndEvent {
}
}
#[derive(Debug, Clone, Deserialize, Serialize)]
pub struct WebSearchBeginEvent {
pub call_id: String,
pub query: String,
}
/// Response payload for `Op::GetHistory` containing the current session's
/// in-memory transcript.
#[derive(Debug, Clone, Deserialize, Serialize)]
@@ -685,6 +693,9 @@ pub struct ExecCommandEndEvent {
pub stdout: String,
/// Captured stderr
pub stderr: String,
/// Captured aggregated output
#[serde(default)]
pub aggregated_output: String,
/// The command's exit code.
pub exit_code: i32,
/// The duration of the command execution.
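The `#[serde(default)]` on `aggregated_output` keeps older serialized events deserializable: a payload recorded before the field existed simply gets an empty string. A minimal sketch with the fields shown in this hunk:

```
use serde::Deserialize;

#[derive(Deserialize)]
struct ExecCommandEndEvent {
    stdout: String,
    stderr: String,
    #[serde(default)]
    aggregated_output: String,
    exit_code: i32,
}

fn main() {
    // Payload written before `aggregated_output` existed.
    let old = r#"{"stdout":"hi\n","stderr":"","exit_code":0}"#;
    let ev: ExecCommandEndEvent = serde_json::from_str(old).unwrap();
    assert_eq!(ev.aggregated_output, ""); // missing field falls back to default
}
```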

View File

@@ -1,3 +1,4 @@
use crate::app_backtrack::BacktrackState;
use crate::app_event::AppEvent;
use crate::app_event_sender::AppEventSender;
use crate::chatwidget::ChatWidget;
@@ -25,27 +26,31 @@ use std::thread;
use std::time::Duration;
use tokio::select;
use tokio::sync::mpsc::unbounded_channel;
// use uuid::Uuid;
pub(crate) struct App {
server: Arc<ConversationManager>,
app_event_tx: AppEventSender,
chat_widget: ChatWidget,
pub(crate) server: Arc<ConversationManager>,
pub(crate) app_event_tx: AppEventSender,
pub(crate) chat_widget: ChatWidget,
/// Config is stored here so we can recreate ChatWidgets as needed.
config: Config,
pub(crate) config: Config,
file_search: FileSearchManager,
pub(crate) file_search: FileSearchManager,
transcript_lines: Vec<Line<'static>>,
pub(crate) transcript_lines: Vec<Line<'static>>,
// Transcript overlay state
transcript_overlay: Option<TranscriptApp>,
deferred_history_lines: Vec<Line<'static>>,
pub(crate) transcript_overlay: Option<TranscriptApp>,
pub(crate) deferred_history_lines: Vec<Line<'static>>,
enhanced_keys_supported: bool,
pub(crate) enhanced_keys_supported: bool,
/// Controls the animation thread that sends CommitTick events.
commit_anim_running: Arc<AtomicBool>,
pub(crate) commit_anim_running: Arc<AtomicBool>,
// Esc-backtracking state grouped
pub(crate) backtrack: crate::app_backtrack::BacktrackState,
}
impl App {
@@ -87,6 +92,7 @@ impl App {
transcript_overlay: None,
deferred_history_lines: Vec::new(),
commit_anim_running: Arc::new(AtomicBool::new(false)),
backtrack: BacktrackState::default(),
};
let tui_events = tui.event_stream();
@@ -96,7 +102,7 @@ impl App {
while select! {
Some(event) = app_event_rx.recv() => {
app.handle_event(tui, event)?
app.handle_event(tui, event).await?
}
Some(event) = tui_events.next() => {
app.handle_tui_event(tui, event).await?
@@ -111,18 +117,8 @@ impl App {
tui: &mut tui::Tui,
event: TuiEvent,
) -> Result<bool> {
if let Some(overlay) = &mut self.transcript_overlay {
overlay.handle_event(tui, event)?;
if overlay.is_done {
// Exit alternate screen and restore viewport.
let _ = tui.leave_alt_screen();
if !self.deferred_history_lines.is_empty() {
let lines = std::mem::take(&mut self.deferred_history_lines);
tui.insert_history_lines(lines);
}
self.transcript_overlay = None;
tui.frame_requester().schedule_frame();
}
if self.transcript_overlay.is_some() {
let _ = self.handle_backtrack_overlay_event(tui, event).await?;
} else {
match event {
TuiEvent::Key(key_event) => {
@@ -161,7 +157,7 @@ impl App {
Ok(true)
}
fn handle_event(&mut self, tui: &mut tui::Tui, event: AppEvent) -> Result<bool> {
async fn handle_event(&mut self, tui: &mut tui::Tui, event: AppEvent) -> Result<bool> {
match event {
AppEvent::NewSession => {
self.chat_widget = ChatWidget::new(
@@ -227,6 +223,9 @@ impl App {
AppEvent::CodexEvent(event) => {
self.chat_widget.handle_codex_event(event);
}
AppEvent::ConversationHistory(ev) => {
self.on_conversation_history_for_backtrack(tui, ev).await?;
}
AppEvent::ExitRequest => {
return Ok(false);
}
@@ -304,10 +303,36 @@ impl App {
self.transcript_overlay = Some(TranscriptApp::new(self.transcript_lines.clone()));
tui.frame_requester().schedule_frame();
}
// Esc primes/advances backtracking when composer is empty.
KeyEvent {
code: KeyCode::Esc,
kind: KeyEventKind::Press | KeyEventKind::Repeat,
..
} => {
self.handle_backtrack_esc_key(tui);
}
// Enter confirms backtrack when primed + count > 0. Otherwise pass to widget.
KeyEvent {
code: KeyCode::Enter,
kind: KeyEventKind::Press,
..
} if self.backtrack.primed
&& self.backtrack.count > 0
&& self.chat_widget.composer_is_empty() =>
{
// Delegate to helper for clarity; preserves behavior.
self.confirm_backtrack_from_main();
}
KeyEvent {
kind: KeyEventKind::Press | KeyEventKind::Repeat,
..
} => {
// Any non-Esc key press should cancel a primed backtrack.
// This avoids stale "Esc-primed" state after the user starts typing
// (even if they later backspace to empty).
if key_event.code != KeyCode::Esc && self.backtrack.primed {
self.reset_backtrack_state();
}
self.chat_widget.handle_key_event(key_event);
}
_ => {

View File

@@ -0,0 +1,349 @@
use crate::app::App;
use crate::backtrack_helpers;
use crate::transcript_app::TranscriptApp;
use crate::tui;
use crate::tui::TuiEvent;
use codex_core::protocol::ConversationHistoryResponseEvent;
use color_eyre::eyre::Result;
use crossterm::event::KeyCode;
use crossterm::event::KeyEvent;
use crossterm::event::KeyEventKind;
/// Aggregates all backtrack-related state used by the App.
#[derive(Default)]
pub(crate) struct BacktrackState {
/// True when Esc has primed backtrack mode in the main view.
pub(crate) primed: bool,
/// Session id of the base conversation to fork from.
pub(crate) base_id: Option<uuid::Uuid>,
/// Current step count (Nth last user message).
pub(crate) count: usize,
/// True when the transcript overlay is showing a backtrack preview.
pub(crate) overlay_preview_active: bool,
/// Pending fork request: (base_id, drop_count, prefill).
pub(crate) pending: Option<(uuid::Uuid, usize, String)>,
}
impl App {
/// Route overlay events when the transcript overlay is active.
/// - If backtrack preview is active: Esc steps the selection; Enter confirms.
/// - Otherwise: Esc begins a preview; all other events are forwarded to the overlay.
pub(crate) async fn handle_backtrack_overlay_event(
&mut self,
tui: &mut tui::Tui,
event: TuiEvent,
) -> Result<bool> {
if self.backtrack.overlay_preview_active {
match event {
TuiEvent::Key(KeyEvent {
code: KeyCode::Esc,
kind: KeyEventKind::Press | KeyEventKind::Repeat,
..
}) => {
self.overlay_step_backtrack(tui, event)?;
Ok(true)
}
TuiEvent::Key(KeyEvent {
code: KeyCode::Enter,
kind: KeyEventKind::Press,
..
}) => {
self.overlay_confirm_backtrack(tui);
Ok(true)
}
// Catchall: forward any other events to the overlay widget.
_ => {
self.overlay_forward_event(tui, event)?;
Ok(true)
}
}
} else if let TuiEvent::Key(KeyEvent {
code: KeyCode::Esc,
kind: KeyEventKind::Press | KeyEventKind::Repeat,
..
}) = event
{
// First Esc in transcript overlay: begin backtrack preview at latest user message.
self.begin_overlay_backtrack_preview(tui);
Ok(true)
} else {
// Not in backtrack mode: forward events to the overlay widget.
self.overlay_forward_event(tui, event)?;
Ok(true)
}
}
/// Handle global Esc presses for backtracking when no overlay is present.
pub(crate) fn handle_backtrack_esc_key(&mut self, tui: &mut tui::Tui) {
// Only handle backtracking when composer is empty to avoid clobbering edits.
if self.chat_widget.composer_is_empty() {
if !self.backtrack.primed {
self.prime_backtrack();
} else if self.transcript_overlay.is_none() {
self.open_backtrack_preview(tui);
} else if self.backtrack.overlay_preview_active {
self.step_backtrack_and_highlight(tui);
}
}
}
/// Stage a backtrack and request conversation history from the agent.
pub(crate) fn request_backtrack(
&mut self,
prefill: String,
base_id: uuid::Uuid,
drop_last_messages: usize,
) {
self.backtrack.pending = Some((base_id, drop_last_messages, prefill));
self.app_event_tx.send(crate::app_event::AppEvent::CodexOp(
codex_core::protocol::Op::GetHistory,
));
}
/// Open transcript overlay (enters alternate screen and shows full transcript).
pub(crate) fn open_transcript_overlay(&mut self, tui: &mut tui::Tui) {
let _ = tui.enter_alt_screen();
self.transcript_overlay = Some(TranscriptApp::new(self.transcript_lines.clone()));
tui.frame_requester().schedule_frame();
}
/// Close transcript overlay and restore normal UI.
pub(crate) fn close_transcript_overlay(&mut self, tui: &mut tui::Tui) {
let _ = tui.leave_alt_screen();
let was_backtrack = self.backtrack.overlay_preview_active;
if !self.deferred_history_lines.is_empty() {
let lines = std::mem::take(&mut self.deferred_history_lines);
tui.insert_history_lines(lines);
}
self.transcript_overlay = None;
self.backtrack.overlay_preview_active = false;
if was_backtrack {
// Ensure backtrack state is fully reset when overlay closes (e.g. via 'q').
self.reset_backtrack_state();
}
}
/// Re-render the full transcript into the terminal scrollback in one call.
/// Useful when switching sessions to ensure prior history remains visible.
pub(crate) fn render_transcript_once(&mut self, tui: &mut tui::Tui) {
if !self.transcript_lines.is_empty() {
tui.insert_history_lines(self.transcript_lines.clone());
}
}
/// Initialize backtrack state and show composer hint.
fn prime_backtrack(&mut self) {
self.backtrack.primed = true;
self.backtrack.count = 0;
self.backtrack.base_id = self.chat_widget.session_id();
self.chat_widget.show_esc_backtrack_hint();
}
/// Open overlay and begin backtrack preview flow (first step + highlight).
fn open_backtrack_preview(&mut self, tui: &mut tui::Tui) {
self.open_transcript_overlay(tui);
self.backtrack.overlay_preview_active = true;
// Composer is hidden by overlay; clear its hint.
self.chat_widget.clear_esc_backtrack_hint();
self.step_backtrack_and_highlight(tui);
}
/// When overlay is already open, begin preview mode and select latest user message.
fn begin_overlay_backtrack_preview(&mut self, tui: &mut tui::Tui) {
self.backtrack.primed = true;
self.backtrack.base_id = self.chat_widget.session_id();
self.backtrack.overlay_preview_active = true;
let sel = self.compute_backtrack_selection(tui, 1);
self.apply_backtrack_selection(sel);
tui.frame_requester().schedule_frame();
}
/// Step selection to the next older user message and update overlay.
fn step_backtrack_and_highlight(&mut self, tui: &mut tui::Tui) {
let next = self.backtrack.count.saturating_add(1);
let sel = self.compute_backtrack_selection(tui, next);
self.apply_backtrack_selection(sel);
tui.frame_requester().schedule_frame();
}
/// Compute normalized target, scroll offset, and highlight for requested step.
fn compute_backtrack_selection(
&self,
tui: &tui::Tui,
requested_n: usize,
) -> (usize, Option<usize>, Option<(usize, usize)>) {
let nth = backtrack_helpers::normalize_backtrack_n(&self.transcript_lines, requested_n);
let header_idx =
backtrack_helpers::find_nth_last_user_header_index(&self.transcript_lines, nth);
let offset = header_idx.map(|idx| {
backtrack_helpers::wrapped_offset_before(
&self.transcript_lines,
idx,
tui.terminal.viewport_area.width,
)
});
let hl = backtrack_helpers::highlight_range_for_nth_last_user(&self.transcript_lines, nth);
(nth, offset, hl)
}
/// Apply a computed backtrack selection to the overlay and internal counter.
fn apply_backtrack_selection(
&mut self,
selection: (usize, Option<usize>, Option<(usize, usize)>),
) {
let (nth, offset, hl) = selection;
self.backtrack.count = nth;
if let Some(overlay) = &mut self.transcript_overlay {
if let Some(off) = offset {
overlay.scroll_offset = off;
}
overlay.set_highlight_range(hl);
}
}
/// Forward any event to the overlay and close it if done.
fn overlay_forward_event(&mut self, tui: &mut tui::Tui, event: TuiEvent) -> Result<()> {
if let Some(overlay) = &mut self.transcript_overlay {
overlay.handle_event(tui, event)?;
if overlay.is_done {
self.close_transcript_overlay(tui);
}
}
tui.frame_requester().schedule_frame();
Ok(())
}
/// Handle Enter in overlay backtrack preview: confirm selection and reset state.
fn overlay_confirm_backtrack(&mut self, tui: &mut tui::Tui) {
if let Some(base_id) = self.backtrack.base_id {
let drop_last_messages = self.backtrack.count;
let prefill =
backtrack_helpers::nth_last_user_text(&self.transcript_lines, drop_last_messages)
.unwrap_or_default();
self.close_transcript_overlay(tui);
self.request_backtrack(prefill, base_id, drop_last_messages);
}
self.reset_backtrack_state();
}
/// Handle Esc in overlay backtrack preview: step selection if armed, else forward.
fn overlay_step_backtrack(&mut self, tui: &mut tui::Tui, event: TuiEvent) -> Result<()> {
if self.backtrack.base_id.is_some() {
self.step_backtrack_and_highlight(tui);
} else {
self.overlay_forward_event(tui, event)?;
}
Ok(())
}
/// Confirm a primed backtrack from the main view (no overlay visible).
/// Computes the prefill from the selected user message and requests history.
pub(crate) fn confirm_backtrack_from_main(&mut self) {
if let Some(base_id) = self.backtrack.base_id {
let drop_last_messages = self.backtrack.count;
let prefill =
backtrack_helpers::nth_last_user_text(&self.transcript_lines, drop_last_messages)
.unwrap_or_default();
self.request_backtrack(prefill, base_id, drop_last_messages);
}
self.reset_backtrack_state();
}
/// Clear all backtrack-related state and composer hints.
pub(crate) fn reset_backtrack_state(&mut self) {
self.backtrack.primed = false;
self.backtrack.base_id = None;
self.backtrack.count = 0;
// In case a hint is somehow still visible (e.g., race with overlay open/close).
self.chat_widget.clear_esc_backtrack_hint();
}
/// Handle a ConversationHistory response while a backtrack is pending.
/// If it matches the primed base session, fork and switch to the new conversation.
pub(crate) async fn on_conversation_history_for_backtrack(
&mut self,
tui: &mut tui::Tui,
ev: ConversationHistoryResponseEvent,
) -> Result<()> {
if let Some((base_id, _, _)) = self.backtrack.pending.as_ref()
&& ev.conversation_id == *base_id
&& let Some((_, drop_count, prefill)) = self.backtrack.pending.take()
{
self.fork_and_switch_to_new_conversation(tui, ev, drop_count, prefill)
.await;
}
Ok(())
}
/// Fork the conversation using provided history and switch UI/state accordingly.
async fn fork_and_switch_to_new_conversation(
&mut self,
tui: &mut tui::Tui,
ev: ConversationHistoryResponseEvent,
drop_count: usize,
prefill: String,
) {
let cfg = self.chat_widget.config_ref().clone();
// Perform the fork via a thin wrapper for clarity/testability.
let result = self
.perform_fork(ev.entries.clone(), drop_count, cfg.clone())
.await;
match result {
Ok(new_conv) => {
self.install_forked_conversation(tui, cfg, new_conv, drop_count, &prefill)
}
Err(e) => tracing::error!("error forking conversation: {e:#}"),
}
}
/// Thin wrapper around ConversationManager::fork_conversation.
async fn perform_fork(
&self,
entries: Vec<codex_protocol::models::ResponseItem>,
drop_count: usize,
cfg: codex_core::config::Config,
) -> codex_core::error::Result<codex_core::NewConversation> {
self.server
.fork_conversation(entries, drop_count, cfg)
.await
}
/// Install a forked conversation into the ChatWidget and update UI to reflect selection.
fn install_forked_conversation(
&mut self,
tui: &mut tui::Tui,
cfg: codex_core::config::Config,
new_conv: codex_core::NewConversation,
drop_count: usize,
prefill: &str,
) {
let conv = new_conv.conversation;
let session_configured = new_conv.session_configured;
self.chat_widget = crate::chatwidget::ChatWidget::new_from_existing(
cfg,
conv,
session_configured,
tui.frame_requester(),
self.app_event_tx.clone(),
self.enhanced_keys_supported,
);
// Trim transcript up to the selected user message and re-render it.
self.trim_transcript_for_backtrack(drop_count);
self.render_transcript_once(tui);
if !prefill.is_empty() {
self.chat_widget.insert_str(prefill);
}
tui.frame_requester().schedule_frame();
}
/// Trim transcript_lines to preserve only content up to the selected user message.
fn trim_transcript_for_backtrack(&mut self, drop_count: usize) {
if let Some(cut_idx) =
backtrack_helpers::find_nth_last_user_header_index(&self.transcript_lines, drop_count)
{
self.transcript_lines.truncate(cut_idx);
} else {
self.transcript_lines.clear();
}
}
}
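Taken together, the flow reads: the first Esc primes, the next opens the transcript preview, further presses step to older user messages, and Enter forks at the current selection. A deliberately simplified sketch of just the counter logic (not the real event loop, and omitting the wrap-around that `normalize_backtrack_n` adds):

```
#[derive(Default)]
struct BacktrackState {
    primed: bool,
    count: usize, // Nth-last user message currently selected
}

fn on_esc(s: &mut BacktrackState) {
    if !s.primed {
        s.primed = true; // first Esc: prime, nothing selected yet
    } else {
        s.count += 1; // subsequent Esc: step to an older user message
    }
}

fn on_enter(s: &BacktrackState) -> Option<usize> {
    // Mirrors the `primed && count > 0` guard in the main-view key handler.
    (s.primed && s.count > 0).then_some(s.count)
}

fn main() {
    let mut s = BacktrackState::default();
    on_esc(&mut s); // prime
    on_esc(&mut s); // select the last user message
    on_esc(&mut s); // select the second-to-last
    assert_eq!(on_enter(&s), Some(2)); // fork, dropping the last 2 messages
}
```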

View File

@@ -1,3 +1,4 @@
use codex_core::protocol::ConversationHistoryResponseEvent;
use codex_core::protocol::Event;
use codex_file_search::FileMatch;
use ratatui::text::Line;
@@ -57,4 +58,7 @@ pub(crate) enum AppEvent {
/// Update the current sandbox policy in the running app and widget.
UpdateSandboxPolicy(SandboxPolicy),
/// Forwarded conversation history snapshot from the current conversation.
ConversationHistory(ConversationHistoryResponseEvent),
}

View File

@@ -0,0 +1,154 @@
use ratatui::text::Line;
/// Convenience: compute the highlight range for the Nth last user message.
pub(crate) fn highlight_range_for_nth_last_user(
lines: &[Line<'_>],
n: usize,
) -> Option<(usize, usize)> {
let header = find_nth_last_user_header_index(lines, n)?;
Some(highlight_range_from_header(lines, header))
}
/// Compute the wrapped display-line offset before `header_idx`, for a given width.
pub(crate) fn wrapped_offset_before(lines: &[Line<'_>], header_idx: usize, width: u16) -> usize {
let before = &lines[0..header_idx];
crate::insert_history::word_wrap_lines(before, width).len()
}
/// Find the header index for the Nth last user message in the transcript.
/// Returns `None` if `n == 0` or there are fewer than `n` user messages.
pub(crate) fn find_nth_last_user_header_index(lines: &[Line<'_>], n: usize) -> Option<usize> {
if n == 0 {
return None;
}
let mut found = 0usize;
for (idx, line) in lines.iter().enumerate().rev() {
let content: String = line
.spans
.iter()
.map(|s| s.content.as_ref())
.collect::<Vec<_>>()
.join("");
if content.trim() == "user" {
found += 1;
if found == n {
return Some(idx);
}
}
}
None
}
/// Normalize a requested backtrack step `n` against the available user messages.
/// - Returns `0` if there are no user messages.
/// - Returns `n` if the Nth last user message exists.
/// - Otherwise wraps to `1` (the most recent user message).
pub(crate) fn normalize_backtrack_n(lines: &[Line<'_>], n: usize) -> usize {
if n == 0 {
return 0;
}
if find_nth_last_user_header_index(lines, n).is_some() {
return n;
}
if find_nth_last_user_header_index(lines, 1).is_some() {
1
} else {
0
}
}
/// Extract the text content of the Nth last user message.
/// The message body is considered to be the lines following the "user" header
/// until the first blank line.
pub(crate) fn nth_last_user_text(lines: &[Line<'_>], n: usize) -> Option<String> {
let header_idx = find_nth_last_user_header_index(lines, n)?;
extract_message_text_after_header(lines, header_idx)
}
/// Extract message text starting after `header_idx` until the first blank line.
fn extract_message_text_after_header(lines: &[Line<'_>], header_idx: usize) -> Option<String> {
let start = header_idx + 1;
let mut out: Vec<String> = Vec::new();
for line in lines.iter().skip(start) {
let is_blank = line
.spans
.iter()
.all(|s| s.content.as_ref().trim().is_empty());
if is_blank {
break;
}
let text: String = line
.spans
.iter()
.map(|s| s.content.as_ref())
.collect::<Vec<_>>()
.join("");
out.push(text);
}
if out.is_empty() {
None
} else {
Some(out.join("\n"))
}
}
/// Given a header index, return the half-open range `[header_idx, end)` for
/// the message block, where `end` is the first blank line after the header or
/// the end of the transcript.
fn highlight_range_from_header(lines: &[Line<'_>], header_idx: usize) -> (usize, usize) {
let mut end = header_idx + 1;
while end < lines.len() {
let is_blank = lines[end]
.spans
.iter()
.all(|s| s.content.as_ref().trim().is_empty());
if is_blank {
break;
}
end += 1;
}
(header_idx, end)
}
#[cfg(test)]
mod tests {
use super::*;
use ratatui::text::Span;
fn line(s: &str) -> Line<'static> {
Line::from(Span::raw(s.to_string()))
}
fn transcript_with_users(count: usize) -> Vec<Line<'static>> {
// Build a transcript with `count` user messages, each followed by one body line and a blank line.
let mut v = Vec::new();
for i in 0..count {
v.push(line("user"));
v.push(line(&format!("message {i}")));
v.push(line(""));
}
v
}
#[test]
fn normalize_wraps_to_one_when_past_oldest() {
let lines = transcript_with_users(2);
assert_eq!(normalize_backtrack_n(&lines, 1), 1);
assert_eq!(normalize_backtrack_n(&lines, 2), 2);
// Requesting 3rd when only 2 exist wraps to 1
assert_eq!(normalize_backtrack_n(&lines, 3), 1);
}
#[test]
fn normalize_returns_zero_when_no_user_messages() {
let lines = transcript_with_users(0);
assert_eq!(normalize_backtrack_n(&lines, 1), 0);
assert_eq!(normalize_backtrack_n(&lines, 5), 0);
}
#[test]
fn normalize_keeps_valid_n() {
let lines = transcript_with_users(3);
assert_eq!(normalize_backtrack_n(&lines, 2), 2);
}
}
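
For reference, a standalone sketch (outside the diff) of the reverse scan that `find_nth_last_user_header_index` performs, using the same transcript shape as the `transcript_with_users` fixture:

```rust
use ratatui::text::{Line, Span};

// Standalone sketch of the "user" header convention: header line, body
// line(s), blank separator. Finds the most recent header (n = 1).
fn main() {
    let transcript: Vec<Line<'static>> = vec![
        Line::from(Span::raw("user")),
        Line::from(Span::raw("message 0")),
        Line::from(Span::raw("")),
        Line::from(Span::raw("user")),
        Line::from(Span::raw("message 1")),
        Line::from(Span::raw("")),
    ];
    // Scan from the end for a line whose joined span content trims to "user".
    let last_header = transcript.iter().enumerate().rev().find_map(|(idx, line)| {
        let content: String = line.spans.iter().map(|s| s.content.as_ref()).collect();
        (content.trim() == "user").then_some(idx)
    });
    assert_eq!(last_header, Some(3));
}
```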

View File

@@ -28,16 +28,6 @@ pub(crate) trait BottomPaneView {
/// Render the view: this will be displayed in place of the composer.
fn render(&self, area: Rect, buf: &mut Buffer);
/// Update the status indicator animated header. Default no-op.
fn update_status_header(&mut self, _header: String) {
// no-op
}
/// Called when task completes to check if the view should be hidden.
fn should_hide_when_task_is_done(&mut self) -> bool {
false
}
/// Try to handle approval request; return the original value if not
/// consumed.
fn try_consume_approval_request(
@@ -46,8 +36,4 @@ pub(crate) trait BottomPaneView {
) -> Option<ApprovalRequest> {
Some(request)
}
/// Optional hook for views that expose a live status line. Views that do not
/// support this can ignore the call.
fn update_status_text(&mut self, _text: String) {}
}

View File

@@ -81,6 +81,7 @@ pub(crate) struct ChatComposer {
app_event_tx: AppEventSender,
history: ChatComposerHistory,
ctrl_c_quit_hint: bool,
esc_backtrack_hint: bool,
use_shift_enter_hint: bool,
dismissed_file_popup_token: Option<String>,
current_file_query: Option<String>,
@@ -121,6 +122,7 @@ impl ChatComposer {
app_event_tx,
history: ChatComposerHistory::new(),
ctrl_c_quit_hint: false,
esc_backtrack_hint: false,
use_shift_enter_hint,
dismissed_file_popup_token: None,
current_file_query: None,
@@ -153,7 +155,7 @@ impl ChatComposer {
ActivePopup::None => 1,
};
let [textarea_rect, _] =
Layout::vertical([Constraint::Min(0), Constraint::Max(popup_height)]).areas(area);
Layout::vertical([Constraint::Min(1), Constraint::Max(popup_height)]).areas(area);
let mut textarea_rect = textarea_rect;
textarea_rect.width = textarea_rect.width.saturating_sub(1);
textarea_rect.x += 1;
@@ -230,6 +232,20 @@ impl ChatComposer {
true
}
/// Replace the entire composer content with `text` and reset cursor.
pub(crate) fn set_text_content(&mut self, text: String) {
self.textarea.set_text(&text);
self.textarea.set_cursor(0);
self.sync_command_popup();
self.sync_file_search_popup();
}
/// Get the current composer text.
#[cfg(test)]
pub(crate) fn current_text(&self) -> String {
self.textarea.text().to_string()
}
pub fn attach_image(&mut self, path: PathBuf, width: u32, height: u32, format_label: &str) {
let placeholder = format!("[image {width}x{height} {format_label}]");
// Insert as an element to match large paste placeholder behavior:
@@ -1091,9 +1107,13 @@ impl ChatComposer {
fn set_has_focus(&mut self, has_focus: bool) {
self.has_focus = has_focus;
}
pub(crate) fn set_esc_backtrack_hint(&mut self, show: bool) {
self.esc_backtrack_hint = show;
}
}
impl WidgetRef for &ChatComposer {
impl WidgetRef for ChatComposer {
fn render_ref(&self, area: Rect, buf: &mut Buffer) {
let popup_height = match &self.active_popup {
ActivePopup::Command(popup) => popup.calculate_required_height(),
@@ -1101,7 +1121,7 @@ impl WidgetRef for &ChatComposer {
ActivePopup::None => 1,
};
let [textarea_rect, popup_rect] =
Layout::vertical([Constraint::Min(0), Constraint::Max(popup_height)]).areas(area);
Layout::vertical([Constraint::Min(1), Constraint::Max(popup_height)]).areas(area);
match &self.active_popup {
ActivePopup::Command(popup) => {
popup.render_ref(popup_rect, buf);
@@ -1137,6 +1157,12 @@ impl WidgetRef for &ChatComposer {
]
};
if !self.ctrl_c_quit_hint && self.esc_backtrack_hint {
hint.push(Span::from(" "));
hint.push("Esc".set_style(key_hint_style));
hint.push(Span::from(" edit prev"));
}
// Append token/context usage info to the footer hints when available.
if let Some(token_usage_info) = &self.token_usage_info {
let token_usage = &token_usage_info.total_token_usage;
@@ -1484,7 +1510,7 @@ mod tests {
}
terminal
.draw(|f| f.render_widget_ref(&composer, f.area()))
.draw(|f| f.render_widget_ref(composer, f.area()))
.unwrap_or_else(|e| panic!("Failed to draw {name} composer: {e}"));
assert_snapshot!(name, terminal.backend());

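A minimal model of `set_text_content`'s contract, with a hypothetical struct standing in for the real textarea: the buffer is replaced wholesale and the cursor returns to the start, which is what lets Alt+Up drop a queued message straight into the composer for editing.

```rust
// Hypothetical stand-in for ChatComposer; mirrors set_text_content's
// behavior of replacing the buffer and resetting the cursor to 0.
struct Composer {
    text: String,
    cursor: usize,
}

impl Composer {
    fn set_text_content(&mut self, text: String) {
        self.text = text;
        self.cursor = 0;
    }
}

fn main() {
    let mut c = Composer { text: "draft in progress".into(), cursor: 17 };
    c.set_text_content("queued message to edit".into());
    assert_eq!(c.cursor, 0);
    assert_eq!(c.text, "queued message to edit");
}
```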
View File

@@ -24,7 +24,6 @@ mod list_selection_view;
mod popup_consts;
mod scroll_state;
mod selection_popup_common;
mod status_indicator_view;
mod textarea;
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
@@ -36,10 +35,10 @@ pub(crate) enum CancellationEvent {
pub(crate) use chat_composer::ChatComposer;
pub(crate) use chat_composer::InputResult;
use crate::status_indicator_widget::StatusIndicatorWidget;
use approval_modal_view::ApprovalModalView;
pub(crate) use list_selection_view::SelectionAction;
pub(crate) use list_selection_view::SelectionItem;
use status_indicator_view::StatusIndicatorView;
/// Pane displayed in the lower half of the chat UI.
pub(crate) struct BottomPane {
@@ -47,7 +46,7 @@ pub(crate) struct BottomPane {
/// input state is retained when the view is closed.
composer: ChatComposer,
/// If present, this is displayed instead of the `composer`.
/// If present, this is displayed instead of the `composer` (e.g. modals).
active_view: Option<Box<dyn BottomPaneView>>,
app_event_tx: AppEventSender,
@@ -56,10 +55,12 @@ pub(crate) struct BottomPane {
has_input_focus: bool,
is_task_running: bool,
ctrl_c_quit_hint: bool,
esc_backtrack_hint: bool,
/// True if the active view is the StatusIndicatorView that replaces the
/// composer during a running task.
status_view_active: bool,
/// Inline status indicator shown above the composer while a task is running.
status: Option<StatusIndicatorWidget>,
/// Queued user messages to show under the status indicator.
queued_user_messages: Vec<String>,
}
pub(crate) struct BottomPaneParams {
@@ -87,41 +88,60 @@ impl BottomPane {
has_input_focus: params.has_input_focus,
is_task_running: false,
ctrl_c_quit_hint: false,
status_view_active: false,
status: None,
queued_user_messages: Vec::new(),
esc_backtrack_hint: false,
}
}
pub fn desired_height(&self, width: u16) -> u16 {
let view_height = if let Some(view) = self.active_view.as_ref() {
let top_margin = if self.active_view.is_some() { 0 } else { 1 };
// Base height depends on whether a modal/overlay is active.
let mut base = if let Some(view) = self.active_view.as_ref() {
view.desired_height(width)
} else {
self.composer.desired_height(width)
};
let top_pad = if self.active_view.is_none() || self.status_view_active {
1
} else {
0
};
view_height
.saturating_add(Self::BOTTOM_PAD_LINES)
.saturating_add(top_pad)
// If a status indicator is active and no modal is covering the composer,
// include its height above the composer.
if self.active_view.is_none()
&& let Some(status) = self.status.as_ref()
{
base = base.saturating_add(status.desired_height(width));
}
// Account for bottom padding rows. Top spacing is handled in layout().
base.saturating_add(Self::BOTTOM_PAD_LINES)
.saturating_add(top_margin)
}
fn layout(&self, area: Rect) -> Rect {
let top = if self.active_view.is_none() || self.status_view_active {
1
fn layout(&self, area: Rect) -> [Rect; 2] {
// Prefer showing the status header when space is extremely tight.
// Drop the top spacer if there is only one row available.
let mut top_margin = if self.active_view.is_some() { 0 } else { 1 };
if area.height <= 1 {
top_margin = 0;
}
let status_height = if self.active_view.is_none() {
if let Some(status) = self.status.as_ref() {
status.desired_height(area.width)
} else {
0
}
} else {
0
};
let [_, content, _] = Layout::vertical([
Constraint::Max(top),
let [_, status, content, _] = Layout::vertical([
Constraint::Max(top_margin),
Constraint::Max(status_height),
Constraint::Min(1),
Constraint::Max(BottomPane::BOTTOM_PAD_LINES),
])
.areas(area);
content
[status, content]
}
pub fn cursor_pos(&self, area: Rect) -> Option<(u16, u16)> {
@@ -129,10 +149,10 @@ impl BottomPane {
// status indicator shown while a task is running, or approval modal).
// In these states the textarea is not interactable, so we should not
// show its caret.
if self.active_view.is_some() || self.status_view_active {
if self.active_view.is_some() {
None
} else {
let content = self.layout(area);
let [_, content] = self.layout(area);
self.composer.cursor_pos(content)
}
}
@@ -143,18 +163,21 @@ impl BottomPane {
view.handle_key_event(self, key_event);
if !view.is_complete() {
self.active_view = Some(view);
} else if self.is_task_running {
let mut v = StatusIndicatorView::new(
self.app_event_tx.clone(),
self.frame_requester.clone(),
);
v.update_text("waiting for model".to_string());
self.active_view = Some(Box::new(v));
self.status_view_active = true;
}
self.request_redraw();
InputResult::None
} else {
// If a task is running and a status line is visible, allow Esc to
// send an interrupt even while the composer has focus.
if matches!(key_event.code, crossterm::event::KeyCode::Esc)
&& self.is_task_running
&& let Some(status) = &self.status
{
// Send Op::Interrupt
status.interrupt();
self.request_redraw();
return InputResult::None;
}
let (input_result, needs_redraw) = self.composer.handle_key_event(key_event);
if needs_redraw {
self.request_redraw();
@@ -176,15 +199,6 @@ impl BottomPane {
CancellationEvent::Handled => {
if !view.is_complete() {
self.active_view = Some(view);
} else if self.is_task_running {
// Modal aborted but task still running; restore the status indicator.
let mut v = StatusIndicatorView::new(
self.app_event_tx.clone(),
self.frame_requester.clone(),
);
v.update_text("waiting for model".to_string());
self.active_view = Some(Box::new(v));
self.status_view_active = true;
}
self.show_ctrl_c_quit_hint();
}
@@ -209,13 +223,24 @@ impl BottomPane {
self.request_redraw();
}
/// Replace the composer text with `text`.
pub(crate) fn set_composer_text(&mut self, text: String) {
self.composer.set_text_content(text);
self.request_redraw();
}
/// Get the current composer text (for tests and programmatic checks).
#[cfg(test)]
pub(crate) fn composer_text(&self) -> String {
self.composer.current_text()
}
/// Update the animated header shown to the left of the brackets in the
/// status indicator (defaults to "Working"). This will update the active
/// StatusIndicatorView if present; otherwise, if a live overlay is active,
/// it will update that. If neither is present, this call is a no-op.
/// status indicator (defaults to "Working"). No-ops if the status
/// indicator is not active.
pub(crate) fn update_status_header(&mut self, header: String) {
if let Some(view) = self.active_view.as_mut() {
view.update_status_header(header.clone());
if let Some(status) = self.status.as_mut() {
status.update_header(header);
self.request_redraw();
}
}
@@ -240,27 +265,39 @@ impl BottomPane {
self.ctrl_c_quit_hint
}
pub(crate) fn show_esc_backtrack_hint(&mut self) {
self.esc_backtrack_hint = true;
self.composer.set_esc_backtrack_hint(true);
self.request_redraw();
}
pub(crate) fn clear_esc_backtrack_hint(&mut self) {
if self.esc_backtrack_hint {
self.esc_backtrack_hint = false;
self.composer.set_esc_backtrack_hint(false);
self.request_redraw();
}
}
// esc_backtrack_hint_visible removed; hints are controlled internally.
pub fn set_task_running(&mut self, running: bool) {
self.is_task_running = running;
if running {
if self.active_view.is_none() {
self.active_view = Some(Box::new(StatusIndicatorView::new(
if self.status.is_none() {
self.status = Some(StatusIndicatorWidget::new(
self.app_event_tx.clone(),
self.frame_requester.clone(),
)));
self.status_view_active = true;
));
}
if let Some(status) = self.status.as_mut() {
status.set_queued_messages(self.queued_user_messages.clone());
}
self.request_redraw();
} else {
// Drop the status view when a task completes, but keep other
// modal views (e.g. approval dialogs).
if let Some(mut view) = self.active_view.take() {
if !view.should_hide_when_task_is_done() {
self.active_view = Some(view);
}
self.status_view_active = false;
}
// Hide the status indicator when a task completes, but keep other modal views.
self.status = None;
}
}
@@ -280,21 +317,16 @@ impl BottomPane {
self.app_event_tx.clone(),
);
self.active_view = Some(Box::new(view));
self.status_view_active = false;
self.request_redraw();
}
/// Update the live status text shown while a task is running.
/// If a modal view is active (i.e., not the status indicator), this is a no-op.
pub(crate) fn update_status_text(&mut self, text: String) {
if !self.is_task_running || !self.status_view_active {
return;
}
if let Some(mut view) = self.active_view.take() {
view.update_status_text(text);
self.active_view = Some(view);
self.request_redraw();
/// Update the queued messages shown under the status header.
pub(crate) fn set_queued_user_messages(&mut self, queued: Vec<String>) {
self.queued_user_messages = queued.clone();
if let Some(status) = self.status.as_mut() {
status.set_queued_messages(queued);
}
self.request_redraw();
}
pub(crate) fn composer_is_empty(&self) -> bool {
@@ -335,7 +367,6 @@ impl BottomPane {
// Otherwise create a new approval modal overlay.
let modal = ApprovalModalView::new(request, self.app_event_tx.clone());
self.active_view = Some(Box::new(modal));
self.status_view_active = false;
self.request_redraw()
}
@@ -391,12 +422,20 @@ impl BottomPane {
impl WidgetRef for &BottomPane {
fn render_ref(&self, area: Rect, buf: &mut Buffer) {
let content = self.layout(area);
let [status_area, content] = self.layout(area);
// When a modal view is active, it owns the whole content area.
if let Some(view) = &self.active_view {
view.render(content, buf);
} else {
(&self.composer).render_ref(content, buf);
// No active modal:
// If a status indicator is active, render it above the composer.
if let Some(status) = &self.status {
status.render_ref(status_area, buf);
}
// Render the composer in the remaining area.
self.composer.render_ref(content, buf);
}
}
}
@@ -467,7 +506,7 @@ mod tests {
}
#[test]
fn composer_not_shown_after_denied_if_task_running() {
fn composer_shown_after_denied_while_task_running() {
let (tx_raw, rx) = unbounded_channel::<AppEvent>();
let tx = AppEventSender::new(tx_raw);
let mut pane = BottomPane::new(BottomPaneParams {
@@ -478,7 +517,7 @@ mod tests {
placeholder_text: "Ask Codex to do anything".to_string(),
});
// Start a running task so the status indicator replaces the composer.
// Start a running task so the status indicator is active above the composer.
pane.set_task_running(true);
// Push an approval modal (e.g., command approval) which should hide the status view.
@@ -490,16 +529,17 @@ mod tests {
use crossterm::event::KeyModifiers;
pane.handle_key_event(KeyEvent::new(KeyCode::Char('n'), KeyModifiers::NONE));
// After denial, since the task is still running, the status indicator
// should be restored as the active view; the composer should NOT be visible.
// After denial, since the task is still running, the status indicator should be
// visible above the composer. The modal should be gone.
assert!(
pane.status_view_active,
"status view should be active after denial"
pane.active_view.is_none(),
"no active modal view after denial"
);
assert!(pane.active_view.is_some(), "active view should be present");
// Render and ensure the top row includes the Working header instead of the composer.
let area = Rect::new(0, 0, 40, 3);
// Render and ensure the top row includes the Working header and a composer line below.
// Give the animation thread a moment to tick.
std::thread::sleep(std::time::Duration::from_millis(120));
let area = Rect::new(0, 0, 40, 6);
let mut buf = Buffer::empty(area);
(&pane).render_ref(area, &mut buf);
let mut row1 = String::new();
@@ -511,6 +551,23 @@ mod tests {
"expected Working header after denial on row 1: {row1:?}"
);
// Composer placeholder should be visible somewhere below.
let mut found_composer = false;
for y in 1..area.height.saturating_sub(2) {
let mut row = String::new();
for x in 0..area.width {
row.push(buf[(x, y)].symbol().chars().next().unwrap_or(' '));
}
if row.contains("Ask Codex") {
found_composer = true;
break;
}
}
assert!(
found_composer,
"expected composer visible under status line"
);
// Drain the channel to avoid unused warnings.
drop(rx);
}
@@ -530,7 +587,8 @@ mod tests {
// Begin a task: show initial status.
pane.set_task_running(true);
let area = Rect::new(0, 0, 40, 3);
// Use a height that allows the status line to be visible above the composer.
let area = Rect::new(0, 0, 40, 6);
let mut buf = Buffer::empty(area);
(&pane).render_ref(area, &mut buf);
@@ -545,7 +603,7 @@ mod tests {
}
#[test]
fn bottom_padding_present_for_status_view() {
fn bottom_padding_present_with_status_above_composer() {
let (tx_raw, _rx) = unbounded_channel::<AppEvent>();
let tx = AppEventSender::new(tx_raw);
let mut pane = BottomPane::new(BottomPaneParams {
@@ -574,19 +632,29 @@ mod tests {
for x in 0..area.width {
top.push(buf[(x, 1)].symbol().chars().next().unwrap_or(' '));
}
assert_eq!(buf[(0, 1)].symbol().chars().next().unwrap_or(' '), '▌');
assert!(
top.trim_start().starts_with("Working"),
"expected top row to start with 'Working': {top:?}"
);
assert!(
top.contains("Working"),
"expected Working header on top row: {top:?}"
);
// Bottom two rows are blank padding
// Next row (spacer) is blank, and bottom two rows are blank padding
let mut spacer = String::new();
let mut r_last = String::new();
let mut r_last2 = String::new();
for x in 0..area.width {
// Spacer row immediately below the status header lives at y=2.
spacer.push(buf[(x, 2)].symbol().chars().next().unwrap_or(' '));
r_last.push(buf[(x, height - 1)].symbol().chars().next().unwrap_or(' '));
r_last2.push(buf[(x, height - 2)].symbol().chars().next().unwrap_or(' '));
}
assert!(
spacer.trim().is_empty(),
"expected spacer line blank: {spacer:?}"
);
assert!(
r_last.trim().is_empty(),
"expected last row blank: {r_last:?}"
@@ -611,7 +679,7 @@ mod tests {
pane.set_task_running(true);
// Height=2 → with spacer, spinner on row 1; no bottom padding.
// Height=2 → composer visible; status is hidden to preserve composer. Spacer may collapse.
let area2 = Rect::new(0, 0, 20, 2);
let mut buf2 = Buffer::empty(area2);
(&pane).render_ref(area2, &mut buf2);
@@ -621,13 +689,17 @@ mod tests {
row0.push(buf2[(x, 0)].symbol().chars().next().unwrap_or(' '));
row1.push(buf2[(x, 1)].symbol().chars().next().unwrap_or(' '));
}
assert!(row0.trim().is_empty(), "expected spacer on row 0: {row0:?}");
let has_composer = row0.contains("Ask Codex") || row1.contains("Ask Codex");
assert!(
row1.contains("Working"),
"expected Working on row 1: {row1:?}"
has_composer,
"expected composer to be visible on one of the rows: row0={row0:?}, row1={row1:?}"
);
assert!(
!row0.contains("Working") && !row1.contains("Working"),
"status header should be hidden when height=2"
);
// Height=1 → no padding; single row is the spinner.
// Height=1 → no padding; single row is the composer (status hidden).
let area1 = Rect::new(0, 0, 20, 1);
let mut buf1 = Buffer::empty(area1);
(&pane).render_ref(area1, &mut buf1);
@@ -636,8 +708,8 @@ mod tests {
only.push(buf1[(x, 0)].symbol().chars().next().unwrap_or(' '));
}
assert!(
only.contains("Working"),
"expected Working header with no padding: {only:?}"
only.contains("Ask Codex"),
"expected composer with no padding: {only:?}"
);
}
}
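
The row stack above can be exercised in isolation. A sketch with hypothetical heights, showing how `Min(1)` keeps the composer alive while the spacer, status, and padding rows stay capped:

```rust
use ratatui::layout::{Constraint, Layout, Rect};

// Sketch of the BottomPane row stack (hypothetical heights): top spacer,
// status indicator, composer (at least one row), bottom padding.
fn main() {
    let area = Rect::new(0, 0, 40, 6);
    let [spacer, status, content, pad] = Layout::vertical([
        Constraint::Max(1), // top margin, dropped when area.height <= 1
        Constraint::Max(1), // status indicator height
        Constraint::Min(1), // composer always gets at least one row
        Constraint::Max(2), // BOTTOM_PAD_LINES
    ])
    .areas(area);
    assert!(content.height >= 1);
    assert!(spacer.height <= 1 && status.height <= 1 && pad.height <= 2);
}
```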

View File

@@ -1,59 +0,0 @@
use crossterm::event::KeyCode;
use crossterm::event::KeyEvent;
use ratatui::buffer::Buffer;
use ratatui::widgets::WidgetRef;
use crate::app_event_sender::AppEventSender;
use crate::bottom_pane::BottomPane;
use crate::status_indicator_widget::StatusIndicatorWidget;
use crate::tui::FrameRequester;
use super::BottomPaneView;
pub(crate) struct StatusIndicatorView {
view: StatusIndicatorWidget,
}
impl StatusIndicatorView {
pub fn new(app_event_tx: AppEventSender, frame_requester: FrameRequester) -> Self {
Self {
view: StatusIndicatorWidget::new(app_event_tx, frame_requester),
}
}
pub fn update_text(&mut self, text: String) {
self.view.update_text(text);
}
pub fn update_header(&mut self, header: String) {
self.view.update_header(header);
}
}
impl BottomPaneView for StatusIndicatorView {
fn update_status_header(&mut self, header: String) {
self.update_header(header);
}
fn should_hide_when_task_is_done(&mut self) -> bool {
true
}
fn desired_height(&self, width: u16) -> u16 {
self.view.desired_height(width)
}
fn render(&self, area: ratatui::layout::Rect, buf: &mut Buffer) {
self.view.render_ref(area, buf);
}
fn handle_key_event(&mut self, _pane: &mut BottomPane, key_event: KeyEvent) {
if key_event.code == KeyCode::Esc {
self.view.interrupt();
}
}
fn update_status_text(&mut self, text: String) {
self.update_text(text);
}
}

View File

@@ -1,4 +1,5 @@
use std::collections::HashMap;
use std::collections::VecDeque;
use std::path::PathBuf;
use std::sync::Arc;
@@ -28,9 +29,12 @@ use codex_core::protocol::TaskCompleteEvent;
use codex_core::protocol::TokenUsage;
use codex_core::protocol::TurnAbortReason;
use codex_core::protocol::TurnDiffEvent;
use codex_core::protocol::WebSearchBeginEvent;
use codex_protocol::parse_command::ParsedCommand;
use crossterm::event::KeyCode;
use crossterm::event::KeyEvent;
use crossterm::event::KeyEventKind;
use crossterm::event::KeyModifiers;
use rand::Rng;
use ratatui::buffer::Buffer;
use ratatui::layout::Constraint;
@@ -63,6 +67,7 @@ mod interrupts;
use self::interrupts::InterruptManager;
mod agent;
use self::agent::spawn_agent;
use self::agent::spawn_agent_from_existing;
use crate::streaming::controller::AppEventHistorySink;
use crate::streaming::controller::StreamController;
use codex_common::approval_presets::ApprovalPreset;
@@ -98,15 +103,17 @@ pub(crate) struct ChatWidget {
task_complete_pending: bool,
// Queue of interruptive UI events deferred during an active write cycle
interrupts: InterruptManager,
// Whether a redraw is needed after handling the current event
needs_redraw: bool,
// Accumulates the current reasoning block text to extract a header
reasoning_buffer: String,
// Accumulates full reasoning content for transcript-only recording
full_reasoning_buffer: String,
session_id: Option<Uuid>,
frame_requester: FrameRequester,
// Whether to include the initial welcome banner on session configured
show_welcome_banner: bool,
last_history_was_exec: bool,
// User messages queued while a turn is in progress
queued_user_messages: VecDeque<UserMessage>,
}
struct UserMessage {
@@ -132,10 +139,6 @@ fn create_initial_user_message(text: String, image_paths: Vec<PathBuf>) -> Optio
}
impl ChatWidget {
#[inline]
fn mark_needs_redraw(&mut self) {
self.needs_redraw = true;
}
fn flush_answer_stream_with_separator(&mut self) {
let sink = AppEventHistorySink(self.app_event_tx.clone());
let _ = self.stream.finalize(true, &sink);
@@ -145,18 +148,22 @@ impl ChatWidget {
self.bottom_pane
.set_history_metadata(event.history_log_id, event.history_entry_count);
self.session_id = Some(event.session_id);
self.add_to_history(history_cell::new_session_info(&self.config, event, true));
self.add_to_history(history_cell::new_session_info(
&self.config,
event,
self.show_welcome_banner,
));
if let Some(user_message) = self.initial_user_message.take() {
self.submit_user_message(user_message);
}
self.mark_needs_redraw();
self.request_redraw();
}
fn on_agent_message(&mut self, message: String) {
let sink = AppEventHistorySink(self.app_event_tx.clone());
let finished = self.stream.apply_final_answer(&message, &sink);
self.handle_if_stream_finished(finished);
self.mark_needs_redraw();
self.request_redraw();
}
fn on_agent_message_delta(&mut self, delta: String) {
@@ -175,7 +182,7 @@ impl ChatWidget {
} else {
// Fallback while we don't yet have a bold header: leave existing header as-is.
}
self.mark_needs_redraw();
self.request_redraw();
}
fn on_agent_reasoning_final(&mut self) {
@@ -189,7 +196,7 @@ impl ChatWidget {
}
self.reasoning_buffer.clear();
self.full_reasoning_buffer.clear();
self.mark_needs_redraw();
self.request_redraw();
}
fn on_reasoning_section_break(&mut self) {
@@ -207,7 +214,7 @@ impl ChatWidget {
self.stream.reset_headers_for_new_turn();
self.full_reasoning_buffer.clear();
self.reasoning_buffer.clear();
self.mark_needs_redraw();
self.request_redraw();
}
fn on_task_complete(&mut self) {
@@ -220,7 +227,10 @@ impl ChatWidget {
// Mark task stopped and request redraw now that all content is in history.
self.bottom_pane.set_task_running(false);
self.running_commands.clear();
self.mark_needs_redraw();
self.request_redraw();
// If there is a queued user message, send exactly one now to begin the next turn.
self.maybe_send_next_queued_input();
}
fn on_token_count(&mut self, token_usage: TokenUsage) {
@@ -238,7 +248,10 @@ impl ChatWidget {
self.bottom_pane.set_task_running(false);
self.running_commands.clear();
self.stream.clear_all();
self.mark_needs_redraw();
self.request_redraw();
// After an error ends the turn, try sending the next queued input.
self.maybe_send_next_queued_input();
}
fn on_plan_update(&mut self, update: codex_core::plan_tool::UpdatePlanArgs) {
@@ -308,6 +321,11 @@ impl ChatWidget {
self.defer_or_handle(|q| q.push_mcp_end(ev), |s| s.handle_mcp_end_now(ev2));
}
fn on_web_search_begin(&mut self, ev: WebSearchBeginEvent) {
self.flush_answer_stream_with_separator();
self.add_to_history(history_cell::new_web_search_call(ev.query));
}
fn on_get_history_entry_response(
&mut self,
event: codex_core::protocol::GetHistoryEntryResponseEvent,
@@ -336,7 +354,7 @@ impl ChatWidget {
fn on_stream_error(&mut self, message: String) {
// Show stream errors in the transcript so users see retry/backoff info.
self.add_to_history(history_cell::new_stream_error_event(message));
self.mark_needs_redraw();
self.request_redraw();
}
/// Periodic tick to commit at most one queued line to history with a small delay,
/// animating the output.
@@ -390,7 +408,7 @@ impl ChatWidget {
let sink = AppEventHistorySink(self.app_event_tx.clone());
self.stream.begin(&sink);
self.stream.push_and_maybe_commit(&delta, &sink);
self.mark_needs_redraw();
self.request_redraw();
}
pub(crate) fn handle_exec_end_now(&mut self, ev: ExecCommandEndEvent) {
@@ -448,7 +466,7 @@ impl ChatWidget {
reason: ev.reason,
};
self.bottom_pane.push_approval_request(request);
self.mark_needs_redraw();
self.request_redraw();
}
pub(crate) fn handle_apply_patch_approval_now(
@@ -468,7 +486,7 @@ impl ChatWidget {
grant_root: ev.grant_root,
};
self.bottom_pane.push_approval_request(request);
self.mark_needs_redraw();
self.request_redraw();
}
pub(crate) fn handle_exec_begin_now(&mut self, ev: ExecCommandBeginEvent) {
@@ -496,7 +514,7 @@ impl ChatWidget {
}
// Request a redraw so the working header and command list are visible immediately.
self.mark_needs_redraw();
self.request_redraw();
}
pub(crate) fn handle_mcp_begin_now(&mut self, ev: McpToolCallBeginEvent) {
@@ -576,11 +594,57 @@ impl ChatWidget {
pending_exec_completions: Vec::new(),
task_complete_pending: false,
interrupts: InterruptManager::new(),
needs_redraw: false,
reasoning_buffer: String::new(),
full_reasoning_buffer: String::new(),
session_id: None,
last_history_was_exec: false,
queued_user_messages: VecDeque::new(),
show_welcome_banner: true,
}
}
/// Create a ChatWidget attached to an existing conversation (e.g., a fork).
pub(crate) fn new_from_existing(
config: Config,
conversation: std::sync::Arc<codex_core::CodexConversation>,
session_configured: codex_core::protocol::SessionConfiguredEvent,
frame_requester: FrameRequester,
app_event_tx: AppEventSender,
enhanced_keys_supported: bool,
) -> Self {
let mut rng = rand::rng();
let placeholder = EXAMPLE_PROMPTS[rng.random_range(0..EXAMPLE_PROMPTS.len())].to_string();
let codex_op_tx =
spawn_agent_from_existing(conversation, session_configured, app_event_tx.clone());
Self {
app_event_tx: app_event_tx.clone(),
frame_requester: frame_requester.clone(),
codex_op_tx,
bottom_pane: BottomPane::new(BottomPaneParams {
frame_requester,
app_event_tx,
has_input_focus: true,
enhanced_keys_supported,
placeholder_text: placeholder,
}),
active_exec_cell: None,
config: config.clone(),
initial_user_message: None,
total_token_usage: TokenUsage::default(),
last_token_usage: TokenUsage::default(),
stream: StreamController::new(config),
running_commands: HashMap::new(),
pending_exec_completions: Vec::new(),
task_complete_pending: false,
interrupts: InterruptManager::new(),
reasoning_buffer: String::new(),
full_reasoning_buffer: String::new(),
session_id: None,
last_history_was_exec: false,
queued_user_messages: VecDeque::new(),
show_welcome_banner: false,
}
}
@@ -597,13 +661,39 @@ impl ChatWidget {
self.bottom_pane.clear_ctrl_c_quit_hint();
}
// Alt+Up: Edit the most recent queued user message (if any).
if matches!(
key_event,
KeyEvent {
code: KeyCode::Up,
modifiers: KeyModifiers::ALT,
kind: KeyEventKind::Press,
..
}
) && !self.queued_user_messages.is_empty()
{
// Prefer the most recently queued item.
if let Some(user_message) = self.queued_user_messages.pop_back() {
self.bottom_pane.set_composer_text(user_message.text);
self.refresh_queued_user_messages();
self.request_redraw();
}
return;
}
match self.bottom_pane.handle_key_event(key_event) {
InputResult::Submitted(text) => {
let images = self.bottom_pane.take_recent_submission_images();
self.submit_user_message(UserMessage {
// If a task is running, queue the user input to be sent after the turn completes.
let user_message = UserMessage {
text,
image_paths: images,
});
image_paths: self.bottom_pane.take_recent_submission_images(),
};
if self.bottom_pane.is_task_running() {
self.queued_user_messages.push_back(user_message);
self.refresh_queued_user_messages();
} else {
self.submit_user_message(user_message);
}
}
InputResult::Command(cmd) => {
self.dispatch_command(cmd);
@@ -789,8 +879,6 @@ impl ChatWidget {
}
pub(crate) fn handle_codex_event(&mut self, event: Event) {
// Reset redraw flag for this dispatch
self.needs_redraw = false;
let Event { id, msg } = event;
match msg {
@@ -839,6 +927,7 @@ impl ChatWidget {
EventMsg::ExecCommandEnd(ev) => self.on_exec_command_end(ev),
EventMsg::McpToolCallBegin(ev) => self.on_mcp_tool_call_begin(ev),
EventMsg::McpToolCallEnd(ev) => self.on_mcp_tool_call_end(ev),
EventMsg::WebSearchBegin(ev) => self.on_web_search_begin(ev),
EventMsg::GetHistoryEntryResponse(ev) => self.on_get_history_entry_response(ev),
EventMsg::McpListToolsResponse(ev) => self.on_list_mcp_tools(ev),
EventMsg::ShutdownComplete => self.on_shutdown_complete(),
@@ -847,12 +936,11 @@ impl ChatWidget {
self.on_background_event(message)
}
EventMsg::StreamError(StreamErrorEvent { message }) => self.on_stream_error(message),
EventMsg::ConversationHistory(_) => {}
}
// Coalesce redraws: issue at most one after handling the event
if self.needs_redraw {
self.request_redraw();
self.needs_redraw = false;
EventMsg::ConversationHistory(ev) => {
// Forward to App so it can process backtrack flows.
self.app_event_tx
.send(crate::app_event::AppEvent::ConversationHistory(ev));
}
}
}
@@ -860,16 +948,34 @@ impl ChatWidget {
self.frame_requester.schedule_frame();
}
// If idle and there are queued inputs, submit exactly one to start the next turn.
fn maybe_send_next_queued_input(&mut self) {
if self.bottom_pane.is_task_running() {
return;
}
if let Some(user_message) = self.queued_user_messages.pop_front() {
self.submit_user_message(user_message);
}
// Update the list to reflect the remaining queued messages (if any).
self.refresh_queued_user_messages();
}
/// Rebuild and update the queued user messages from the current queue.
fn refresh_queued_user_messages(&mut self) {
let messages: Vec<String> = self
.queued_user_messages
.iter()
.map(|m| m.text.clone())
.collect();
self.bottom_pane.set_queued_user_messages(messages);
}
pub(crate) fn add_diff_in_progress(&mut self) {
self.bottom_pane.set_task_running(true);
self.bottom_pane
.update_status_text("computing diff".to_string());
self.request_redraw();
}
pub(crate) fn on_diff_complete(&mut self) {
self.bottom_pane.set_task_running(false);
self.mark_needs_redraw();
self.request_redraw();
}
pub(crate) fn add_status_output(&mut self) {
@@ -1022,6 +1128,14 @@ impl ChatWidget {
pub(crate) fn insert_str(&mut self, text: &str) {
self.bottom_pane.insert_str(text);
}
pub(crate) fn show_esc_backtrack_hint(&mut self) {
self.bottom_pane.show_esc_backtrack_hint();
}
pub(crate) fn clear_esc_backtrack_hint(&mut self) {
self.bottom_pane.clear_esc_backtrack_hint();
}
/// Forward an `Op` directly to codex.
pub(crate) fn submit_op(&self, op: Op) {
// Record outbound operation for session replay fidelity.
@@ -1049,6 +1163,16 @@ impl ChatWidget {
&self.total_token_usage
}
pub(crate) fn session_id(&self) -> Option<Uuid> {
self.session_id
}
/// Return a reference to the widget's current config (includes any
/// runtime overrides applied via TUI, e.g., model or approval policy).
pub(crate) fn config_ref(&self) -> &Config {
&self.config
}
pub(crate) fn clear_token_usage(&mut self) {
self.total_token_usage = TokenUsage::default();
self.bottom_pane.set_token_usage(

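The queueing discipline in the widget above: submissions made while a task is running go to the back of `queued_user_messages`, Alt+Up pops from the back for editing, and turn completion (or an error) pops exactly one message from the front to start the next turn. A standalone sketch with hypothetical types:

```rust
use std::collections::VecDeque;

// Hypothetical stand-in for ChatWidget's queueing of user input.
struct Widget {
    task_running: bool,
    queued: VecDeque<String>,
    sent: Vec<String>,
}

impl Widget {
    fn submit(&mut self, text: String) {
        if self.task_running {
            self.queued.push_back(text); // defer until the turn completes
        } else {
            self.task_running = true;
            self.sent.push(text);
        }
    }
    fn on_task_complete(&mut self) {
        self.task_running = false;
        if let Some(next) = self.queued.pop_front() {
            self.task_running = true;
            self.sent.push(next); // send exactly one queued message
        }
    }
}

fn main() {
    let mut w = Widget { task_running: false, queued: VecDeque::new(), sent: vec![] };
    w.submit("first".into());
    w.submit("second".into()); // queued while "first" runs
    w.submit("third".into());  // also queued
    w.on_task_complete();      // only "second" is sent
    assert_eq!(w.sent, vec!["first", "second"]);
    assert_eq!(w.queued, VecDeque::from(vec!["third".to_string()]));
}
```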
View File

@@ -1,5 +1,6 @@
use std::sync::Arc;
use codex_core::CodexConversation;
use codex_core::ConversationManager;
use codex_core::NewConversation;
use codex_core::config::Config;
@@ -59,3 +60,40 @@ pub(crate) fn spawn_agent(
codex_op_tx
}
/// Spawn agent loops for an existing conversation (e.g., a forked conversation).
/// Sends the provided `SessionConfiguredEvent` immediately, then forwards subsequent
/// events and accepts Ops for submission.
pub(crate) fn spawn_agent_from_existing(
conversation: std::sync::Arc<CodexConversation>,
session_configured: codex_core::protocol::SessionConfiguredEvent,
app_event_tx: AppEventSender,
) -> UnboundedSender<Op> {
let (codex_op_tx, mut codex_op_rx) = unbounded_channel::<Op>();
let app_event_tx_clone = app_event_tx.clone();
tokio::spawn(async move {
// Forward the captured `SessionConfigured` event so it can be rendered in the UI.
let ev = codex_core::protocol::Event {
id: "".to_string(),
msg: codex_core::protocol::EventMsg::SessionConfigured(session_configured),
};
app_event_tx_clone.send(AppEvent::CodexEvent(ev));
let conversation_clone = conversation.clone();
tokio::spawn(async move {
while let Some(op) = codex_op_rx.recv().await {
let id = conversation_clone.submit(op).await;
if let Err(e) = id {
tracing::error!("failed to submit op: {e}");
}
}
});
while let Ok(event) = conversation.next_event().await {
app_event_tx_clone.send(AppEvent::CodexEvent(event));
}
});
codex_op_tx
}

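A sketch of the bridge pattern in `spawn_agent_from_existing`, with hypothetical stand-ins for `Op` and the event type: the captured session-configured event is replayed first so the UI can render the banner, then one task drains submitted ops while the caller forwards events.

```rust
use tokio::sync::mpsc::unbounded_channel;

// Hypothetical stand-in for SessionConfiguredEvent.
#[derive(Debug, PartialEq)]
struct SessionConfigured;

#[tokio::main]
async fn main() {
    let (op_tx, mut op_rx) = unbounded_channel::<&'static str>();
    let (ev_tx, mut ev_rx) = unbounded_channel::<SessionConfigured>();

    // Replay the captured event before anything else.
    ev_tx.send(SessionConfigured).unwrap();

    tokio::spawn(async move {
        while let Some(op) = op_rx.recv().await {
            // The real code calls conversation.submit(op).await here.
            println!("submitting op: {op}");
        }
    });

    op_tx.send("user turn").unwrap();
    assert_eq!(ev_rx.recv().await, Some(SessionConfigured));
}
```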
View File

@@ -0,0 +1,13 @@
---
source: tui/src/chatwidget/tests.rs
expression: terminal.backend()
---
"? Codex wants to run echo hello world "
" "
"Model wants to run a command "
" "
"▌Allow command? "
"▌ Yes Always No No, provide feedback "
"▌ Approve and run the command "
" "
" "

Some files were not shown because too many files have changed in this diff.