Compare commits

...

36 Commits

Author SHA1 Message Date
Ahmed Ibrahim
4e6bc85fc4 oss 2025-09-02 15:19:29 -07:00
Ahmed Ibrahim
8bb57afc84 oss 2025-09-02 15:18:04 -07:00
Ahmed Ibrahim
a8324c5d94 progress 2025-09-02 14:52:44 -07:00
Jeremy Rose
46e35a2345 tui: fix extra blank lines in streamed agent messages (#3065)
Fixes excessive blank lines appearing during agent message streaming.

- Only insert a separator blank line for new, non-streaming history
cells.
- Streaming continuations now append without adding a spacer,
eliminating extra gaps between chunks.

Affected area: TUI display of agent messages (tui/src/app.rs).
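A minimal sketch of that rule, using plain strings in place of the real history-cell types (which are assumed, not shown here):

```rust
/// Sketch: a separator blank line is only added before a brand-new,
/// non-streaming cell; streamed continuations append with no spacer.
fn append_cell(lines: &mut Vec<String>, cell: &str, is_streaming_continuation: bool) {
    if !lines.is_empty() && !is_streaming_continuation {
        lines.push(String::new()); // spacer between distinct history cells
    }
    lines.push(cell.to_string());
}

fn main() {
    let mut lines = Vec::new();
    append_cell(&mut lines, "agent: first chunk", false);
    append_cell(&mut lines, "agent: second chunk", true); // continuation: no spacer
    append_cell(&mut lines, "user: next message", false); // new cell: spacer
    assert_eq!(
        lines,
        vec!["agent: first chunk", "agent: second chunk", "", "user: next message"]
    );
}
```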
2025-09-02 13:45:51 -07:00
Reuben Narad
7bcdc5cc7c fix config reference table (#3063)
3 quick fixes to docs/config.md

- Fix the reference table so option lists render correctly
- Correct the default `stream_max_retries` to 5 (old: 10)
- Update the example `approval_policy` to `untrusted` (old: `unless-allow-listed`)
2025-09-02 13:03:11 -07:00
Michael Bolin
4b426f7e1e fix: leverage windows-11-arm for Windows ARM builds (#3062)
This is in support of https://github.com/openai/codex/issues/2979.

Once we have a release out, we can update the npm module and the VS Code
extension to take advantage of this.
2025-09-02 12:56:09 -07:00
Jeremy Rose
fcb62a0fa5 tui: hide '/init' suggestion when AGENTS.md exists (#3038)
Hide the “/init” suggestion in the new-session banner when an
`AGENTS.md` exists anywhere from the repo root down to the current
working directory.

Changes
- Conditional suggestion: use `discover_project_doc_paths(config)` to
suppress `/init` when agents docs are present.
- TUI style cleanup: switch banner construction to `Stylize` helpers
(`.bold()`, `.dim()`, `.into()`), avoiding `Span::styled`/`Span::raw`.
- Fixture update: remove `/init` line in
`tui/tests/fixtures/ideal-binary-response.txt` to match the new banner.

Validation
- Ran formatting and scoped lint fixes: `just fmt` and `just fix -p
codex-tui`.
- Tests: `cargo test -p codex-tui` passed (`176 passed, 0 failed`).

Notes
- No change to the `/init` command itself; only the welcome banner now
adapts based on presence of `AGENTS.md`.
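A minimal sketch of the check, with `Config` stubbed and the signature of `discover_project_doc_paths` assumed (the real helper lives in the config code):

```rust
use std::path::PathBuf;

struct Config; // stub for this sketch

// Assumed signature: returns every AGENTS.md found from the repo root down to cwd.
fn discover_project_doc_paths(_config: &Config) -> Vec<PathBuf> {
    Vec::new()
}

/// Only show the `/init` hint in the banner when no agents doc was discovered.
fn should_suggest_init(config: &Config) -> bool {
    discover_project_doc_paths(config).is_empty()
}

fn main() {
    assert!(should_suggest_init(&Config));
}
```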
2025-09-02 12:04:32 -07:00
Ahmed Ibrahim
eb40fe3451 Add logs to know when users are changing the model (#3060) 2025-09-02 17:59:07 +00:00
Jeremy Rose
b32c79e371 tui: fix laggy typing (#2922)
we were checking every typed character to see if it was an image. this
involved going to disk, which was slow.

this was a bad interaction between image paste support and burst-paste
detection.
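A rough sketch of the idea, with names and types invented for illustration: the filesystem check only runs for paste events, never per keystroke:

```rust
use std::path::Path;

enum InputEvent {
    Key(char),
    Paste(String),
}

fn handle_event(buffer: &mut String, event: InputEvent) {
    match event {
        // Typed characters are inserted directly; no disk I/O per keystroke.
        InputEvent::Key(c) => buffer.push(c),
        // Pastes are rare, so checking whether the pasted text is a real file
        // path (a possible image attachment) is acceptable here.
        InputEvent::Paste(text) => {
            if Path::new(&text).is_file() {
                buffer.push_str("[image attached]");
            } else {
                buffer.push_str(&text);
            }
        }
    }
}

fn main() {
    let mut buf = String::new();
    handle_event(&mut buf, InputEvent::Key('h'));
    handle_event(&mut buf, InputEvent::Paste("i".to_string()));
    assert_eq!(buf, "hi");
}
```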
2025-09-02 10:35:29 -07:00
Jeremy Rose
e442ecedab rework message styling (#2877)
https://github.com/user-attachments/assets/cf07f62b-1895-44bb-b9c3-7a12032eb371
2025-09-02 17:29:58 +00:00
Lionel Cheng
3f8d6021ac Fix: Adapt pr template with correct link following doc refacto (#2982)
This PR fixes the link of contributing page in Pull Request template to
the right one following the migration of the section to a dedicated
file.

Signed-off-by: lionelchg <lionel.cheng@hotmail.fr>
2025-09-02 13:05:52 -04:00
Uhyeon Park
7ac6194c22 Bug fix: ignore Enter on empty input to avoid queuing blank messages (#3047)
## Summary
Pressing Enter with an empty composer was treated as a submission, which
queued a blank message while a task was running. This PR suppresses
submission when there is no text and no attachments.

## Root Cause

- ChatComposer returned Submitted even when the trimmed text was empty.
ChatWidget then queued it during a running task, leading to an empty
item appearing in the queued list and being popped later with no effect.

## Changes
- ChatComposer Enter handling: if trimmed text is empty and there are no
attached images, return None instead of Submitted.
- No changes to ChatWidget; behavior naturally stops queuing blanks at
the source.
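A minimal sketch of the new guard, with the surrounding composer types reduced to a free function for illustration:

```rust
use std::path::PathBuf;

enum InputResult {
    Submitted(String),
    None,
}

/// Suppress submission when the trimmed text is empty and nothing is attached.
fn handle_enter(text: &str, attached_images: &[PathBuf]) -> InputResult {
    let trimmed = text.trim();
    if trimmed.is_empty() && attached_images.is_empty() {
        return InputResult::None;
    }
    InputResult::Submitted(trimmed.to_string())
}

fn main() {
    assert!(matches!(handle_enter("   ", &[]), InputResult::None));
    assert!(matches!(handle_enter("hello", &[]), InputResult::Submitted(_)));
}
```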

## Code Paths

- Modified: `tui/src/bottom_pane/chat_composer.rs`
- Tests added:
    - `tui/src/bottom_pane/chat_composer.rs`: `empty_enter_returns_none`
- `tui/src/chatwidget/tests.rs`:
`empty_enter_during_task_does_not_queue`

## Result

### Before


https://github.com/user-attachments/assets/a40e2f6d-42ba-4a82-928b-8f5458f5884d

### After



https://github.com/user-attachments/assets/958900b7-a566-44fc-b16c-b80380739c92
2025-09-02 13:05:45 -04:00
Jeremy Rose
619436c58f remove extra quote from disabled-command message (#3035)
there was an extra ' floating around for some reason.
2025-09-02 09:46:41 -07:00
dependabot[bot]
1cc6b97227 chore(deps): bump regex-lite from 0.1.6 to 0.1.7 in /codex-rs (#3010)
Bumps [regex-lite](https://github.com/rust-lang/regex) from 0.1.6 to
0.1.7.
Changelog (sourced from [regex-lite's changelog](https://github.com/rust-lang/regex/blob/master/CHANGELOG.md)):

0.1.79
- Require regex-syntax 0.3.8.

0.1.78
- [PR #290](https://redirect.github.com/rust-lang/regex/pull/290): Fixes bug [#289](https://redirect.github.com/rust-lang/regex/issues/289), which caused some regexes with a certain combination of literals to match incorrectly.

0.1.77
- [PR #281](https://redirect.github.com/rust-lang/regex/pull/281): Fixes bug [#280](https://redirect.github.com/rust-lang/regex/issues/280) by disabling all literal optimizations when a pattern is partially anchored.

0.1.76
- Tweak criteria for using the Teddy literal matcher.

0.1.75
- [PR #275](https://redirect.github.com/rust-lang/regex/pull/275): Improves match verification performance in the Teddy SIMD searcher.
- [PR #278](https://redirect.github.com/rust-lang/regex/pull/278): Replaces slow substring loop in the Teddy SIMD searcher with Aho-Corasick.
- Implemented DoubleEndedIterator on regex set match iterators.

0.1.74
- Release regex-syntax 0.3.5 with a minor bug fix.
- Fix bug [#272](https://redirect.github.com/rust-lang/regex/issues/272).
- Fix bug [#277](https://redirect.github.com/rust-lang/regex/issues/277).
- [PR #270](https://redirect.github.com/rust-lang/regex/pull/270): Fixes bugs [#264](https://redirect.github.com/rust-lang/regex/issues/264), [#268](https://redirect.github.com/rust-lang/regex/issues/268) and an unreported bug where the DFA cache size could be drastically underestimated in some cases (leading to high unexpected memory usage).

0.1.73
- Release `regex-syntax 0.3.4`.
- Bump `regex-syntax` dependency version for `regex` to `0.3.4`.

0.1.72
- [PR #262](https://redirect.github.com/rust-lang/regex/pull/262): Fixes a number of small bugs caught by fuzz testing (AFL).

0.1.71
... (truncated)

Commits:
- `45c3da7` regex-lite-0.1.7
- `873ed80` regex-automata-0.4.10
- `ea834f8` regex-syntax-0.8.6
- `86836fb` changelog: 1.11.2
- `63a26c1` cargo: ensure that 'perf' doesn't enable 'std' implicitly ([#1150](https://redirect.github.com/rust-lang/regex/issues/1150))
- `dd96592` doc: clarify CRLF mode effect
- `931dae0` cargo: point `repository` metadata to clonable URLs
- `a66fde6` doc: remove references to non-existent parameters
- `1873e96` automata: add `DFA::set_prefilter` method to the DFA types
- `89ff153` doc: fix misspelling typo
- Additional commits viewable in the [compare view](https://github.com/rust-lang/regex/compare/regex-lite-0.1.6...regex-lite-0.1.7)


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=regex-lite&package-manager=cargo&previous-version=0.1.6&new-version=0.1.7)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.


---

Dependabot commands and options:

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)



Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-02 09:09:17 -07:00
Michael Bolin
7eee69d821 fix: try to populate the Windows cache for release builds when PRs are put up for review (#2884)
Windows release builds take close to 12 minutes whereas Mac/Linux are
closer to 5. Let's see if this speeds things up?
2025-08-28 23:48:29 -07:00
Michael Bolin
65636802f7 fix: drop Mutexes earlier in MCP server (#2878) 2025-08-28 22:50:16 -07:00
Michael Bolin
c988ce28fe fix: drop Mutex before calling tx_approve.send() (#2876) 2025-08-28 22:49:29 -07:00
Michael Bolin
cb2f952143 fix: remove unnecessary flush() calls (#2873)
Because we are writing to a pipe, these `flush()` calls are unnecessary,
so removing these saves us one syscall per write in these two cases.
2025-08-28 22:41:10 -07:00
Jeremy Rose
7d734bff65 suggest just fix -p in agents.md (#2881) 2025-08-28 22:32:53 -07:00
Michael Bolin
970e466ab3 fix: switch to unbounded channel (#2874)
#2747 encouraged me to audit our codebase for similar issues, as now I
am particularly suspicious that our flaky tests are due to a racy
deadlock.

I asked Codex to audit our code, and one of its suggestions was this:

> **High-Risk Patterns**
>
> All `send_*` methods await on a bounded
`mpsc::Sender<OutgoingMessage>`. If the writer blocks, the channel fills
and the processor task blocks on send, stops draining incoming requests,
and stdin reader eventually blocks on its send. This creates a
backpressure deadlock cycle across the three tasks.
>
> **Recommendations**
> * Server outgoing path: break the backpressure cycle
> * Option A (minimal risk): Change `OutgoingMessageSender` to use an
unbounded channel to decouple producer from stdout. Add rate logging so
floods are visible.
> * Option B (bounded + drop policy): Change `send_*` to try_send and
drop messages (or coalesce) when the queue is full, logging a warning.
This prevents processor stalls at the cost of losing messages under
extreme backpressure.
> * Option C (two-stage buffer): Keep bounded channel, but have a
dedicated “egress” task that drains an unbounded internal queue, writing
to stdout with retries and a shutdown timeout. This centralizes
backpressure policy.

So this PR is Option A.
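A minimal sketch of Option A (the message type and struct name are simplified here): with an unbounded channel, `send` never awaits, so producer tasks cannot be stalled by a slow stdout writer.

```rust
use tokio::sync::mpsc;

struct OutgoingMessageSender {
    tx: mpsc::UnboundedSender<String>,
}

impl OutgoingMessageSender {
    fn send(&self, msg: String) {
        // Unbounded send is synchronous; it only fails if the receiver is gone.
        let _ = self.tx.send(msg);
    }
}

#[tokio::main]
async fn main() {
    let (tx, mut rx) = mpsc::unbounded_channel();
    let sender = OutgoingMessageSender { tx };
    sender.send("hello".to_string());
    drop(sender); // closing all senders lets the drain loop below terminate
    while let Some(msg) = rx.recv().await {
        println!("write to stdout: {msg}");
    }
}
```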

Indeed, we previously used a bounded channel with a capacity of `128`,
but as we discovered recently with #2776, there are certainly cases
where we can get flooded with events.

That said, `test_shell_command_approval_triggers_elicitation` just
failed on one build when I put up this PR, so clearly we are not out of
the woods yet...

**Update:** I think I found the true source of the deadlock! See
https://github.com/openai/codex/pull/2876
2025-08-28 22:20:10 -07:00
Michael Bolin
5d2d3002ef fix: specify --profile to cargo clippy in CI (#2871)
Today we had a breakage in the release build that went unnoticed by CI.
Here is what happened:

- https://github.com/openai/codex/pull/2242 originally added some logic
to do release builds to prevent this from happening
- https://github.com/openai/codex/pull/2276 undid that change to try to
speed things up by removing the step to build all the individual crates
in release mode, assuming the `cargo check` call was sufficient
coverage, which it would have been, had it specified `--profile`

This PR adds `--profile` to the `cargo check` step so we should get the
desired coverage from our build matrix.

Indeed, enabling this in our CI uncovered a warning that is only present
in release mode that was going unnoticed.
2025-08-28 21:43:40 -07:00
agro
bb30996f7c Bugfix: Prevents `brew install codex` in comment from being executed (#2868)
The default install command causes unexpected code to be executed:

```
npm install -g @openai/codex # Alternatively: `brew install codex`
```

The problem is that some environments treat `#` as a literal string rather
than the start of a comment. Therefore the user ends up executing this
instead (because it's in backticks):

```
brew install codex
```

And then the npm command errors (because it tries to install the package `#`).
2025-08-28 21:40:28 -07:00
dedrisian-oai
3f8184034f Fix CI release build (#2864) 2025-08-29 03:06:10 +00:00
unship
f7cb2f87a0 Bug fix: clone of incoming_tx can lead to deadlock (#2747)
POC code

```rust
use tokio::sync::mpsc;
use std::time::Duration;

#[tokio::main]
async fn main() {
    println!("=== Test 1: Simulating original MCP server pattern ===");
    test_original_pattern().await;
}

async fn test_original_pattern() {
    println!("Testing the original pattern from MCP server...");
    
    // Create channel - this simulates the original incoming_tx/incoming_rx
    let (tx, mut rx) = mpsc::channel::<String>(10);
    
    // Task 1: Simulates stdin reader that will naturally terminate
    let stdin_task = tokio::spawn({
        let tx_clone = tx.clone();
        async move {
            println!("  stdin_task: Started, will send 3 messages then exit");
            for i in 0..3 {
                let msg = format!("Message {}", i);
                if tx_clone.send(msg.clone()).await.is_err() {
                    println!("  stdin_task: Receiver dropped, exiting");
                    break;
                }
                println!("  stdin_task: Sent {}", msg);
                tokio::time::sleep(Duration::from_millis(300)).await;
            }
            println!("  stdin_task: Finished (simulating EOF)");
            // tx_clone is dropped here
        }
    });
    
    // Task 2: Simulates message processor
    let processor_task = tokio::spawn(async move {
        println!("  processor_task: Started, waiting for messages");
        while let Some(msg) = rx.recv().await {
            println!("  processor_task: Processing {}", msg);
            tokio::time::sleep(Duration::from_millis(100)).await;
        }
        println!("  processor_task: Finished (channel closed)");
    });
    
    // Task 3: Simulates stdout writer or other background task
    let background_task = tokio::spawn(async move {
        for i in 0..2 {
            tokio::time::sleep(Duration::from_millis(500)).await;
            println!("  background_task: Tick {}", i);
        }
        println!("  background_task: Finished");
    });
    
    println!("  main: Original tx is still alive here");
    println!("  main: About to call tokio::join! - will this deadlock?");
    
    // This is the pattern from the original code
    let _ = tokio::join!(stdin_task, processor_task, background_task);
}

```

---------

Co-authored-by: Michael Bolin <bolinfest@gmail.com>
2025-08-28 19:28:17 -07:00
Ahmed Ibrahim
9dbe7284d2 Following up on #2371 post commit feedback (#2852)
- Introduce a websearch end event to complement the begin event
- Move the logic of adding the websearch tool to
`create_tools_json_for_responses_api`
- Make it the client's responsibility to toggle the tool on or off
- Other misc items from the #2371 post-commit feedback
- Show the query:

<img width="1392" height="151" alt="image"
src="https://github.com/user-attachments/assets/8457f1a6-f851-44cf-bcca-0d4fe460ce89"
/>
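For reference, the begin/end event shapes this lands on, as they appear in the protocol diff further down (derives and the surrounding enum omitted):

```rust
pub struct WebSearchBeginEvent {
    pub call_id: String,
}

pub struct WebSearchEndEvent {
    pub call_id: String,
    pub query: String, // the query is surfaced on the end event
}
```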
2025-08-28 19:24:38 -07:00
dedrisian-oai
b8e8454b3f Custom /prompts (#2696)
Adds custom `/prompts` to `~/.codex/prompts/<command>.md`.

<img width="239" height="107" alt="Screenshot 2025-08-25 at 6 22 42 PM"
src="https://github.com/user-attachments/assets/fe6ebbaa-1bf6-49d3-95f9-fdc53b752679"
/>

---

Details:

1. Adds `Op::ListCustomPrompts` to core.
2. Returns `ListCustomPromptsResponse` with list of `CustomPrompt`
(name, content).
3. TUI calls the operation on load, and populates the custom prompts
(excluding prompts that collide with builtins).
4. Selecting the custom prompt automatically sends the prompt to the
agent.
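The shapes involved, as far as they can be read from the diff below (exact derives and module paths omitted):

```rust
use std::path::PathBuf;

pub struct CustomPrompt {
    pub name: String,    // file stem, e.g. "review" for ~/.codex/prompts/review.md
    pub path: PathBuf,
    pub content: String, // markdown body sent to the agent when selected
}

pub struct ListCustomPromptsResponseEvent {
    pub custom_prompts: Vec<CustomPrompt>,
}
```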
2025-08-29 02:16:39 +00:00
HaxagonusD
bbcfd63aba UI: Make slash commands bold in welcome message (#2762)
## What
Make slash commands (/init, /status, /approvals, /model) bold and white
in the welcome message for better visibility.
<img width="990" height="286" alt="image"
src="https://github.com/user-attachments/assets/13f90e96-b84a-4659-aab4-576d84a31af7"
/>


## Why
The current welcome message displays all text in a dimmed style, making
the slash commands less prominent. Users need to quickly identify
available commands when starting Codex.

## How
Modified `tui/src/history_cell.rs` in the `new_session_info` function
to:
- Split each command line into separate spans
- Apply bold white styling to command text (`/init`, `/status`, etc.)
- Keep descriptions dimmed for visual contrast
- Maintain existing layout and spacing
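A minimal sketch of the span construction described above, using ratatui's `Stylize` helpers (the real `new_session_info` builds more than one line):

```rust
use ratatui::style::Stylize;
use ratatui::text::Line;

/// One welcome-banner row: bold white command, dimmed description.
fn command_line(cmd: &'static str, desc: &'static str) -> Line<'static> {
    Line::from(vec![cmd.bold().white(), " ".into(), desc.dim()])
}

fn main() {
    let line = command_line("/init", "create an AGENTS.md file");
    assert_eq!(line.spans.len(), 3);
}
```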

## Test plan
- [ ] Run the TUI and verify commands appear bold in the welcome message
- [ ] Ensure descriptions remain dimmed for readability
- [ ] Confirm all existing tests pass
2025-08-28 18:12:41 -07:00
Eric Traut
6209d49520 Changed OAuth success screen to use the string "Codex" rather than "Codex CLI" (#2737) 2025-08-28 21:21:10 +00:00
Gabriel Peal
c3a8b96a60 Add a VS Code Extension issue template (#2853)
Template mostly copied from the bug template
2025-08-28 16:56:52 -04:00
Ahmed Ibrahim
c9ca63dc1e burst paste edge cases (#2683)
This PR fixes two edge cases in managing burst paste (mainly on PowerShell).
Bugs:
- Needed a key event after paste to render the pasted items

> ChatComposer::flush_paste_burst_if_due() flushes on timeout. Called:
>     - Pre-render in App on TuiEvent::Draw.
>     - Via a delayed frame
>
BottomPane::request_redraw_in(ChatComposer::recommended_paste_flush_delay()).

- Parsed two key events separately before starting to parse the paste burst

> When threshold is crossed, pull preceding burst chars out of the
textarea and prepend to paste_burst_buffer, then keep buffering.

- Integrates with #2567 to bring image pasting to Windows.
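A rough sketch of the buffering idea (the threshold, names, and flush wiring are illustrative only; the real logic lives in `ChatComposer`):

```rust
use std::time::{Duration, Instant};

const BURST_GAP: Duration = Duration::from_millis(10); // illustrative threshold

#[derive(Default)]
struct PasteBurst {
    buffer: String,
    last_key: Option<Instant>,
}

impl PasteBurst {
    /// Returns a char to insert directly, or buffers it as part of a burst.
    fn on_char(&mut self, now: Instant, c: char) -> Option<char> {
        let bursting = self
            .last_key
            .is_some_and(|t| now.duration_since(t) < BURST_GAP);
        self.last_key = Some(now);
        if bursting {
            self.buffer.push(c);
            None
        } else {
            Some(c)
        }
    }

    /// Called pre-render (and via a delayed frame): flush once the burst goes quiet.
    fn flush_if_due(&mut self, now: Instant) -> Option<String> {
        let quiet = self
            .last_key
            .is_some_and(|t| now.duration_since(t) >= BURST_GAP);
        if quiet && !self.buffer.is_empty() {
            Some(std::mem::take(&mut self.buffer))
        } else {
            None
        }
    }
}

fn main() {
    let mut pb = PasteBurst::default();
    let t = Instant::now();
    assert_eq!(pb.on_char(t, 'a'), Some('a')); // first char inserts normally
}
```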
2025-08-28 12:54:12 -07:00
Ahmed Ibrahim
ed06f90fb3 Race condition in compact (#2746)
This fixes the flakiness in
`summarize_context_three_requests_and_instructions`: we now trim history
before sending the task-complete event.
2025-08-28 12:53:00 -07:00
Michael Bolin
f09170b574 chore: print stderr from MCP server to test output using eprintln! (#2849)
Related to https://github.com/openai/codex/pull/2848, I don't see the
stderr from `codex mcp` colocated with the other stderr from
`test_shell_command_approval_triggers_elicitation()` when it fails even
though we have `RUST_LOG=debug` set when we spawn `codex mcp`:


1e9e703b96/codex-rs/mcp-server/tests/common/mcp_process.rs (L65)

Let's try this new logic which should be more explicit.
2025-08-28 12:43:13 -07:00
Michael Bolin
1e9e703b96 chore: try to make it easier to debug the flakiness of test_shell_command_approval_triggers_elicitation (#2848)
`test_shell_command_approval_triggers_elicitation()` is one of a number
of integration tests that we have observed to be flaky on GitHub CI, so
this PR tries to reduce the flakiness _and_ to provide us with more
information when it flakes. Specifically:

- Changed the command that we use to trigger the elicitation from `git
init` to `python3 -c 'import pathlib; pathlib.Path(r"{}").touch()'`
because running `git` seems more likely to invite variance.
- Increased the timeout to wait for the task response from 10s to 20s.
- Added more logging.
2025-08-28 12:33:33 -07:00
Michael Bolin
74d2741729 chore: require uninlined_format_args from clippy (#2845)
- added `uninlined_format_args` to `[workspace.lints.clippy]` in the
`Cargo.toml` for the workspace
- ran `cargo clippy --tests --fix`
- ran `just fmt`
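The lint in question rejects positional format arguments that could be inlined into the format string; much of the diff below is exactly this mechanical change. For example:

```rust
fn main() {
    let days = 3;
    let unit = "days";
    // Rejected under `clippy::uninlined_format_args`:
    // println!("{} {}", days, unit);
    // Accepted:
    println!("{days} {unit}");
}
```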
2025-08-28 11:25:23 -07:00
Jeremy Rose
e5611aab07 disallow some slash commands while a task is running (#2792)
/new, /init, /models, /approvals, etc. don't work correctly during a
turn. disable them.
2025-08-28 10:15:59 -07:00
dedrisian-oai
4e9ad23864 Add "View Image" tool (#2723)
Adds a "View Image" tool so Codex can find and see images by itself:

<img width="1772" height="420" alt="Screenshot 2025-08-26 at 10 40
04 AM"
src="https://github.com/user-attachments/assets/7a459c7b-0b86-4125-82d9-05fbb35ade03"
/>
2025-08-27 17:41:23 -07:00
126 changed files with 4548 additions and 2537 deletions

View File

@@ -0,0 +1,50 @@
name: 🧑‍💻 VS Code Extension
description: Report an issue with the VS Code extension
labels:
- extension
- needs triage
body:
- type: markdown
attributes:
value: |
Before submitting a new issue, please search for existing issues to see if your issue has already been reported.
If it has, please add a 👍 reaction (no need to leave a comment) to the existing issue instead of creating a new one.
- type: input
id: version
attributes:
label: What version of the VS Code extension are you using?
- type: input
id: ide
attributes:
label: Which IDE are you using?
description: Like `VS Code`, `Cursor`, `Windsurf`, etc.
- type: input
id: platform
attributes:
label: What platform is your computer?
description: |
For MacOS and Linux: copy the output of `uname -mprs`
For Windows: copy the output of `"$([Environment]::OSVersion | ForEach-Object VersionString) $(if ([Environment]::Is64BitOperatingSystem) { "x64" } else { "x86" })"` in the PowerShell console
- type: textarea
id: steps
attributes:
label: What steps can reproduce the bug?
description: Explain the bug and provide a code snippet that can reproduce it.
validations:
required: true
- type: textarea
id: expected
attributes:
label: What is the expected behavior?
description: If possible, please provide text instead of a screenshot.
- type: textarea
id: actual
attributes:
label: What do you see instead?
description: If possible, please provide text instead of a screenshot.
- type: textarea
id: notes
attributes:
label: Additional information
description: Is there anything else you think we should know?

View File

@@ -21,6 +21,10 @@
"windows-x86_64": {
"regex": "^codex-x86_64-pc-windows-msvc\\.exe\\.zst$",
"path": "codex.exe"
},
"windows-aarch64": {
"regex": "^codex-aarch64-pc-windows-msvc\\.exe\\.zst$",
"path": "codex.exe"
}
}
}

View File

@@ -1,6 +1,6 @@
# External (non-OpenAI) Pull Request Requirements
Before opening this Pull Request, please read the "Contributing" section of the README or your PR may be closed:
https://github.com/openai/codex#contributing
Before opening this Pull Request, please read the dedicated "Contributing" markdown file or your PR may be closed:
https://github.com/openai/codex/blob/main/docs/contributing.md
If your PR conforms to our contribution guidelines, replace this text with a detailed and high quality description of your changes.

View File

@@ -100,15 +100,26 @@ jobs:
- runner: windows-latest
target: x86_64-pc-windows-msvc
profile: dev
- runner: windows-11-arm
target: aarch64-pc-windows-msvc
profile: dev
# Also run representative release builds on Mac and Linux because
# there could be release-only build errors we want to catch.
# Hopefully this also pre-populates the build cache to speed up
# releases.
- runner: macos-14
target: aarch64-apple-darwin
profile: release
- runner: ubuntu-24.04
target: x86_64-unknown-linux-musl
profile: release
- runner: windows-latest
target: x86_64-pc-windows-msvc
profile: release
- runner: windows-11-arm
target: aarch64-pc-windows-msvc
profile: release
steps:
- uses: actions/checkout@v5
@@ -134,7 +145,7 @@ jobs:
- name: cargo clippy
id: clippy
run: cargo clippy --target ${{ matrix.target }} --all-features --tests -- -D warnings
run: cargo clippy --target ${{ matrix.target }} --all-features --tests --profile ${{ matrix.profile }} -- -D warnings
# Running `cargo build` from the workspace root builds the workspace using
# the union of all features from third-party crates. This can mask errors

View File

@@ -72,6 +72,8 @@ jobs:
target: aarch64-unknown-linux-gnu
- runner: windows-latest
target: x86_64-pc-windows-msvc
- runner: windows-11-arm
target: aarch64-pc-windows-msvc
steps:
- uses: actions/checkout@v5
@@ -87,7 +89,7 @@ jobs:
~/.cargo/registry/cache/
~/.cargo/git/db/
${{ github.workspace }}/codex-rs/target/
key: cargo-release-${{ matrix.runner }}-${{ matrix.target }}-release-${{ hashFiles('**/Cargo.lock') }}
key: cargo-${{ matrix.runner }}-${{ matrix.target }}-release-${{ hashFiles('**/Cargo.lock') }}
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl'}}
name: Install musl build tools

View File

@@ -8,7 +8,7 @@ In the codex-rs folder where the rust code lives:
- You operate in a sandbox where `CODEX_SANDBOX_NETWORK_DISABLED=1` will be set whenever you use the `shell` tool. Any existing code that uses `CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR` was authored with this fact in mind. It is often used to early exit out of tests that the author knew you would not be able to run given your sandbox limitations.
- Similarly, when you spawn a process using Seatbelt (`/usr/bin/sandbox-exec`), `CODEX_SANDBOX=seatbelt` will be set on the child process. Integration tests that want to run Seatbelt themselves cannot be run under Seatbelt, so checks for `CODEX_SANDBOX=seatbelt` are also often used to early exit out of tests, as appropriate.
Before finalizing a change to `codex-rs`, run `just fmt` (in `codex-rs` directory) to format the code and `just fix -p <project>` (in `codex-rs` directory) to fix any linter issues in the code. Additionally, run the tests:
Before finalizing a change to `codex-rs`, run `just fmt` (in `codex-rs` directory) to format the code and `just fix -p <project>` (in `codex-rs` directory) to fix any linter issues in the code. Prefer scoping with `-p` to avoid slow workspacewide Clippy builds; only run `just fix` without `-p` if you changed shared crates. Additionally, run the tests:
1. Run the test for the specific project that was changed. For example, if changes were made in `codex-rs/tui`, run `cargo test -p codex-tui`.
2. Once those pass, if any changes were made in common, core, or protocol, run the complete test suite with `cargo test --all-features`.
When running interactively, ask the user before running these commands to finalize.

View File

@@ -14,10 +14,16 @@
### Installing and running Codex CLI
Install globally with your preferred package manager:
Install globally with your preferred package manager. If you use npm:
```shell
npm install -g @openai/codex # Alternatively: `brew install codex`
npm install -g @openai/codex
```
Alternatively, if you use Homebrew:
```shell
brew install codex
```
Then simply run `codex` to get started:

12
codex-rs/Cargo.lock generated
View File

@@ -973,11 +973,13 @@ dependencies = [
"diffy",
"image",
"insta",
"itertools 0.14.0",
"lazy_static",
"libc",
"mcp-types",
"once_cell",
"path-clean",
"pathdiff",
"pretty_assertions",
"rand 0.9.2",
"ratatui",
@@ -3377,6 +3379,12 @@ dependencies = [
"once_cell",
]
[[package]]
name = "pathdiff"
version = "0.2.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "df94ce210e5bc13cb6651479fa48d14f601d9858cfe0467f43ae157023b938d3"
[[package]]
name = "percent-encoding"
version = "2.3.1"
@@ -3907,9 +3915,9 @@ dependencies = [
[[package]]
name = "regex-lite"
version = "0.1.6"
version = "0.1.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "53a49587ad06b26609c52e423de037e7f57f20d53535d66e08c695f347df952a"
checksum = "943f41321c63ef1c92fd763bfe054d2668f7f225a5c29f0105903dc2fc04ba30"
[[package]]
name = "regex-syntax"

View File

@@ -34,6 +34,7 @@ rust = {}
[workspace.lints.clippy]
expect_used = "deny"
uninlined_format_args = "deny"
unwrap_used = "deny"
[profile.release]

View File

@@ -116,7 +116,9 @@ pub enum ApplyPatchFileChange {
Add {
content: String,
},
Delete,
Delete {
content: String,
},
Update {
unified_diff: String,
move_path: Option<PathBuf>,
@@ -210,7 +212,18 @@ pub fn maybe_parse_apply_patch_verified(argv: &[String], cwd: &Path) -> MaybeApp
changes.insert(path, ApplyPatchFileChange::Add { content: contents });
}
Hunk::DeleteFile { .. } => {
changes.insert(path, ApplyPatchFileChange::Delete);
let content = match std::fs::read_to_string(&path) {
Ok(content) => content,
Err(e) => {
return MaybeApplyPatchVerified::CorrectnessError(
ApplyPatchError::IoError(IoError {
context: format!("Failed to read {}", path.display()),
source: e,
}),
);
}
};
changes.insert(path, ApplyPatchFileChange::Delete { content });
}
Hunk::UpdateFile {
move_path, chunks, ..

View File

@@ -31,7 +31,7 @@ mime_guess = "2.0"
os_info = "3.12.0"
portable-pty = "0.9.0"
rand = "0.9"
regex-lite = "0.1.6"
regex-lite = "0.1.7"
reqwest = { version = "0.12", features = ["json", "stream"] }
serde = { version = "1", features = ["derive"] }
serde_bytes = "0.11"

View File

@@ -109,7 +109,9 @@ pub(crate) fn convert_apply_patch_to_protocol(
ApplyPatchFileChange::Add { content } => FileChange::Add {
content: content.clone(),
},
ApplyPatchFileChange::Delete => FileChange::Delete,
ApplyPatchFileChange::Delete { content } => FileChange::Delete {
content: content.clone(),
},
ApplyPatchFileChange::Update {
unified_diff,
move_path,

View File

@@ -19,6 +19,7 @@ use crate::ModelProviderInfo;
use crate::client_common::Prompt;
use crate::client_common::ResponseEvent;
use crate::client_common::ResponseStream;
use crate::config::Config;
use crate::error::CodexErr;
use crate::error::Result;
use crate::model_family::ModelFamily;
@@ -34,6 +35,7 @@ pub(crate) async fn stream_chat_completions(
model_family: &ModelFamily,
client: &reqwest::Client,
provider: &ModelProviderInfo,
config: &Config,
) -> Result<ResponseStream> {
// Build messages array
let mut messages = Vec::<serde_json::Value>::new();
@@ -129,8 +131,26 @@ pub(crate) async fn stream_chat_completions(
"content": output,
}));
}
ResponseItem::Reasoning { .. } | ResponseItem::Other => {
// Omit these items from the conversation history.
ResponseItem::Reasoning {
id: _,
summary,
content,
encrypted_content: _,
} => {
if !config.skip_reasoning_in_chat_completions {
// There is no clear way of sending reasoning items over chat completions.
// We are sending it as an assistant message.
tracing::info!("reasoning item: {:?}", item);
let reasoning =
format!("Reasoning Summary: {summary:?}, Reasoning Content: {content:?}");
messages.push(json!({
"role": "assistant",
"content": reasoning,
}));
}
}
ResponseItem::WebSearchCall { .. } | ResponseItem::Other => {
tracing::info!("omitting item from chat completions: {:?}", item);
continue;
}
}
@@ -348,6 +368,8 @@ async fn process_chat_sse<S>(
}
if let Some(reasoning) = maybe_text {
// Accumulate so we can emit a terminal Reasoning item at end-of-turn.
reasoning_text.push_str(&reasoning);
let _ = tx_event
.send(Ok(ResponseEvent::ReasoningContentDelta(reasoning)))
.await;
@@ -623,11 +645,8 @@ where
Poll::Ready(Some(Ok(ResponseEvent::ReasoningSummaryPartAdded))) => {
continue;
}
Poll::Ready(Some(Ok(ResponseEvent::WebSearchCallBegin { .. }))) => {
return Poll::Ready(Some(Ok(ResponseEvent::WebSearchCallBegin {
call_id: String::new(),
query: None,
})));
Poll::Ready(Some(Ok(ResponseEvent::WebSearchCallBegin { call_id }))) => {
return Poll::Ready(Some(Ok(ResponseEvent::WebSearchCallBegin { call_id })));
}
}
}

View File

@@ -110,6 +110,7 @@ impl ModelClient {
&self.config.model_family,
&self.client,
&self.provider,
&self.config,
)
.await?;
@@ -160,21 +161,7 @@ impl ModelClient {
let store = prompt.store && auth_mode != Some(AuthMode::ChatGPT);
let full_instructions = prompt.get_full_instructions(&self.config.model_family);
let mut tools_json = create_tools_json_for_responses_api(&prompt.tools)?;
// ChatGPT backend expects the preview name for web search.
if auth_mode == Some(AuthMode::ChatGPT) {
for tool in &mut tools_json {
if let Some(map) = tool.as_object_mut()
&& map.get("type").and_then(|v| v.as_str()) == Some("web_search")
{
map.insert(
"type".to_string(),
serde_json::Value::String("web_search_preview".to_string()),
);
}
}
}
let tools_json = create_tools_json_for_responses_api(&prompt.tools)?;
let reasoning = create_reasoning_param_for_request(
&self.config.model_family,
self.effort,
@@ -607,11 +594,9 @@ async fn process_sse<S>(
| "response.custom_tool_call_input.delta"
| "response.custom_tool_call_input.done" // also emitted as response.output_item.done
| "response.in_progress"
| "response.output_item.added"
| "response.output_text.done" => {
if event.kind == "response.output_item.added"
&& let Some(item) = event.item.as_ref()
{
| "response.output_text.done" => {}
"response.output_item.added" => {
if let Some(item) = event.item.as_ref() {
// Detect web_search_call begin and forward a synthetic event upstream.
if let Some(ty) = item.get("type").and_then(|v| v.as_str())
&& ty == "web_search_call"
@@ -621,7 +606,7 @@ async fn process_sse<S>(
.and_then(|v| v.as_str())
.unwrap_or("")
.to_string();
let ev = ResponseEvent::WebSearchCallBegin { call_id, query: None };
let ev = ResponseEvent::WebSearchCallBegin { call_id };
if tx_event.send(Ok(ev)).await.is_err() {
return;
}

View File

@@ -95,7 +95,6 @@ pub enum ResponseEvent {
ReasoningSummaryPartAdded,
WebSearchCallBegin {
call_id: String,
query: Option<String>,
},
}

View File

@@ -89,6 +89,7 @@ use crate::protocol::ExecCommandBeginEvent;
use crate::protocol::ExecCommandEndEvent;
use crate::protocol::FileChange;
use crate::protocol::InputItem;
use crate::protocol::ListCustomPromptsResponseEvent;
use crate::protocol::Op;
use crate::protocol::PatchApplyBeginEvent;
use crate::protocol::PatchApplyEndEvent;
@@ -100,6 +101,7 @@ use crate::protocol::Submission;
use crate::protocol::TaskCompleteEvent;
use crate::protocol::TurnDiffEvent;
use crate::protocol::WebSearchBeginEvent;
use crate::protocol::WebSearchEndEvent;
use crate::rollout::RolloutRecorder;
use crate::safety::SafetyCheck;
use crate::safety::assess_command_safety;
@@ -110,6 +112,7 @@ use crate::user_notification::UserNotification;
use crate::util::backoff;
use codex_protocol::config_types::ReasoningEffort as ReasoningEffortConfig;
use codex_protocol::config_types::ReasoningSummary as ReasoningSummaryConfig;
use codex_protocol::custom_prompts::CustomPrompt;
use codex_protocol::models::ContentItem;
use codex_protocol::models::FunctionCallOutputPayload;
use codex_protocol::models::LocalShellAction;
@@ -118,6 +121,7 @@ use codex_protocol::models::ReasoningItemReasoningSummary;
use codex_protocol::models::ResponseInputItem;
use codex_protocol::models::ResponseItem;
use codex_protocol::models::ShellToolCallParams;
use codex_protocol::models::WebSearchAction;
// A convenience extension trait for acquiring mutex locks where poisoning is
// unrecoverable and should abort the program. This avoids scattered `.unwrap()`
@@ -518,6 +522,7 @@ impl Session {
include_apply_patch_tool: config.include_apply_patch_tool,
include_web_search_request: config.tools_web_search_request,
use_streamable_shell_tool: config.use_experimental_streamable_shell_tool,
include_view_image_tool: config.include_view_image_tool,
}),
user_instructions,
base_instructions,
@@ -652,9 +657,17 @@ impl Session {
}
pub fn notify_approval(&self, sub_id: &str, decision: ReviewDecision) {
let mut state = self.state.lock_unchecked();
if let Some(tx_approve) = state.pending_approvals.remove(sub_id) {
tx_approve.send(decision).ok();
let entry = {
let mut state = self.state.lock_unchecked();
state.pending_approvals.remove(sub_id)
};
match entry {
Some(tx_approve) => {
tx_approve.send(decision).ok();
}
None => {
warn!("No pending approval found for sub_id: {sub_id}");
}
}
}
@@ -1108,6 +1121,7 @@ async fn submission_loop(
include_apply_patch_tool: config.include_apply_patch_tool,
include_web_search_request: config.tools_web_search_request,
use_streamable_shell_tool: config.use_experimental_streamable_shell_tool,
include_view_image_tool: config.include_view_image_tool,
});
let new_turn_context = TurnContext {
@@ -1193,6 +1207,7 @@ async fn submission_loop(
include_web_search_request: config.tools_web_search_request,
use_streamable_shell_tool: config
.use_experimental_streamable_shell_tool,
include_view_image_tool: config.include_view_image_tool,
}),
user_instructions: turn_context.user_instructions.clone(),
base_instructions: turn_context.base_instructions.clone(),
@@ -1283,6 +1298,27 @@ async fn submission_loop(
warn!("failed to send McpListToolsResponse event: {e}");
}
}
Op::ListCustomPrompts => {
let tx_event = sess.tx_event.clone();
let sub_id = sub.id.clone();
let custom_prompts: Vec<CustomPrompt> =
if let Some(dir) = crate::custom_prompts::default_prompts_dir() {
crate::custom_prompts::discover_prompts_in(&dir).await
} else {
Vec::new()
};
let event = Event {
id: sub_id,
msg: EventMsg::ListCustomPromptsResponse(ListCustomPromptsResponseEvent {
custom_prompts,
}),
};
if let Err(e) = tx_event.send(event).await {
warn!("failed to send ListCustomPromptsResponse event: {e}");
}
}
Op::Compact => {
// Create a summarization request as user input
const SUMMARIZATION_PROMPT: &str = include_str!("prompt_for_compact_command.md");
@@ -1743,13 +1779,12 @@ async fn try_run_turn(
.await?;
output.push(ProcessedResponseItem { item, response });
}
ResponseEvent::WebSearchCallBegin { call_id, query } => {
let q = query.unwrap_or_else(|| "Searching Web...".to_string());
ResponseEvent::WebSearchCallBegin { call_id } => {
let _ = sess
.tx_event
.send(Event {
id: sub_id.to_string(),
msg: EventMsg::WebSearchBegin(WebSearchBeginEvent { call_id, query: q }),
msg: EventMsg::WebSearchBegin(WebSearchBeginEvent { call_id }),
})
.await;
}
@@ -1881,6 +1916,12 @@ async fn run_compact_task(
}
sess.remove_task(&sub_id);
{
let mut state = sess.state.lock_unchecked();
state.history.keep_last_messages(1);
}
let event = Event {
id: sub_id.clone(),
msg: EventMsg::AgentMessage(AgentMessageEvent {
@@ -1895,9 +1936,6 @@ async fn run_compact_task(
}),
};
sess.send_event(event).await;
let mut state = sess.state.lock_unchecked();
state.history.keep_last_messages(1);
}
async fn handle_response_item(
@@ -2045,6 +2083,17 @@ async fn handle_response_item(
debug!("unexpected CustomToolCallOutput from stream");
None
}
ResponseItem::WebSearchCall { id, action, .. } => {
if let WebSearchAction::Search { query } = action {
let call_id = id.unwrap_or_else(|| "".to_string());
let event = Event {
id: sub_id.to_string(),
msg: EventMsg::WebSearchEnd(WebSearchEndEvent { call_id, query }),
};
sess.tx_event.send(event).await.ok();
}
None
}
ResponseItem::Other => None,
};
Ok(output)
@@ -2077,6 +2126,36 @@ async fn handle_function_call(
)
.await
}
"view_image" => {
#[derive(serde::Deserialize)]
struct SeeImageArgs {
path: String,
}
let args = match serde_json::from_str::<SeeImageArgs>(&arguments) {
Ok(a) => a,
Err(e) => {
return ResponseInputItem::FunctionCallOutput {
call_id,
output: FunctionCallOutputPayload {
content: format!("failed to parse function arguments: {e}"),
success: Some(false),
},
};
}
};
let abs = turn_context.resolve_path(Some(args.path));
let output = match sess.inject_input(vec![InputItem::LocalImage { path: abs }]) {
Ok(()) => FunctionCallOutputPayload {
content: "attached local image path".to_string(),
success: Some(true),
},
Err(_) => FunctionCallOutputPayload {
content: "unable to attach image (no active task)".to_string(),
success: Some(false),
},
};
ResponseInputItem::FunctionCallOutput { call_id, output }
}
"apply_patch" => {
let args = match serde_json::from_str::<ApplyPatchToolArgs>(&arguments) {
Ok(a) => a,

View File

@@ -178,6 +178,17 @@ pub struct Config {
pub preferred_auth_method: AuthMode,
pub use_experimental_streamable_shell_tool: bool,
/// Include the `view_image` tool that lets the agent attach a local image path to context.
pub include_view_image_tool: bool,
/// When true, disables burst-paste detection for typed input entirely.
/// All characters are inserted as they are received, and no buffering
/// or placeholder replacement will occur for fast keypress bursts.
pub disable_paste_burst: bool,
/// When `true`, reasoning items in Chat Completions input will be skipped.
/// Defaults to `false`.
pub skip_reasoning_in_chat_completions: bool,
}
impl Config {
@@ -485,6 +496,15 @@ pub struct ConfigToml {
/// Nested tools section for feature toggles
pub tools: Option<ToolsToml>,
/// When true, disables burst-paste detection for typed input entirely.
/// All characters are inserted as they are received, and no buffering
/// or placeholder replacement will occur for fast keypress bursts.
pub disable_paste_burst: Option<bool>,
/// When set to `true`, reasoning items will be skipped from Chat Completions input.
/// Defaults to `false`.
pub skip_reasoning_in_chat_completions: Option<bool>,
}
#[derive(Deserialize, Debug, Clone, PartialEq, Eq)]
@@ -494,9 +514,12 @@ pub struct ProjectConfig {
#[derive(Deserialize, Debug, Clone, Default)]
pub struct ToolsToml {
// Renamed from `web_search_request`; keep alias for backwards compatibility.
#[serde(default, alias = "web_search_request")]
pub web_search: Option<bool>,
/// Enable the `view_image` tool that lets the agent attach local images.
#[serde(default)]
pub view_image: Option<bool>,
}
impl ConfigToml {
@@ -586,6 +609,7 @@ pub struct ConfigOverrides {
pub base_instructions: Option<String>,
pub include_plan_tool: Option<bool>,
pub include_apply_patch_tool: Option<bool>,
pub include_view_image_tool: Option<bool>,
pub disable_response_storage: Option<bool>,
pub show_raw_agent_reasoning: Option<bool>,
pub tools_web_search_request: Option<bool>,
@@ -613,6 +637,7 @@ impl Config {
base_instructions,
include_plan_tool,
include_apply_patch_tool,
include_view_image_tool,
disable_response_storage,
show_raw_agent_reasoning,
tools_web_search_request: override_tools_web_search_request,
@@ -654,7 +679,7 @@ impl Config {
})?
.clone();
let shell_environment_policy = cfg.shell_environment_policy.clone().into();
let shell_environment_policy = cfg.shell_environment_policy.into();
let resolved_cwd = {
use std::env;
@@ -675,12 +700,16 @@ impl Config {
}
};
let history = cfg.history.clone().unwrap_or_default();
let history = cfg.history.unwrap_or_default();
let tools_web_search_request = override_tools_web_search_request
.or(cfg.tools.as_ref().and_then(|t| t.web_search))
.unwrap_or(false);
let include_view_image_tool = include_view_image_tool
.or(cfg.tools.as_ref().and_then(|t| t.view_image))
.unwrap_or(true);
let model = model
.or(config_profile.model)
.or(cfg.model)
@@ -753,7 +782,7 @@ impl Config {
codex_home,
history,
file_opener: cfg.file_opener.unwrap_or(UriBasedFileOpener::VsCode),
tui: cfg.tui.clone().unwrap_or_default(),
tui: cfg.tui.unwrap_or_default(),
codex_linux_sandbox_exe,
hide_agent_reasoning: cfg.hide_agent_reasoning.unwrap_or(false),
@@ -772,7 +801,7 @@ impl Config {
model_verbosity: config_profile.model_verbosity.or(cfg.model_verbosity),
chatgpt_base_url: config_profile
.chatgpt_base_url
.or(cfg.chatgpt_base_url.clone())
.or(cfg.chatgpt_base_url)
.unwrap_or("https://chatgpt.com/backend-api/".to_string()),
experimental_resume,
@@ -784,6 +813,11 @@ impl Config {
use_experimental_streamable_shell_tool: cfg
.experimental_use_exec_command_tool
.unwrap_or(false),
include_view_image_tool,
disable_paste_burst: cfg.disable_paste_burst.unwrap_or(false),
skip_reasoning_in_chat_completions: cfg
.skip_reasoning_in_chat_completions
.unwrap_or(false),
};
Ok(config)
}
@@ -1152,6 +1186,9 @@ disable_response_storage = true
responses_originator_header: "codex_cli_rs".to_string(),
preferred_auth_method: AuthMode::ChatGPT,
use_experimental_streamable_shell_tool: false,
include_view_image_tool: true,
disable_paste_burst: false,
skip_reasoning_in_chat_completions: false,
},
o3_profile_config
);
@@ -1208,6 +1245,9 @@ disable_response_storage = true
responses_originator_header: "codex_cli_rs".to_string(),
preferred_auth_method: AuthMode::ChatGPT,
use_experimental_streamable_shell_tool: false,
include_view_image_tool: true,
disable_paste_burst: false,
skip_reasoning_in_chat_completions: false,
};
assert_eq!(expected_gpt3_profile_config, gpt3_profile_config);
@@ -1279,6 +1319,9 @@ disable_response_storage = true
responses_originator_header: "codex_cli_rs".to_string(),
preferred_auth_method: AuthMode::ChatGPT,
use_experimental_streamable_shell_tool: false,
include_view_image_tool: true,
disable_paste_burst: false,
skip_reasoning_in_chat_completions: false,
};
assert_eq!(expected_zdr_profile_config, zdr_profile_config);
@@ -1300,9 +1343,9 @@ disable_response_storage = true
let raw_path = project_dir.path().to_string_lossy();
let path_str = if raw_path.contains('\\') {
format!("'{}'", raw_path)
format!("'{raw_path}'")
} else {
format!("\"{}\"", raw_path)
format!("\"{raw_path}\"")
};
let expected = format!(
r#"[projects.{path_str}]
@@ -1323,9 +1366,9 @@ trust_level = "trusted"
let config_path = codex_home.path().join(CONFIG_TOML_FILE);
let raw_path = project_dir.path().to_string_lossy();
let path_str = if raw_path.contains('\\') {
format!("'{}'", raw_path)
format!("'{raw_path}'")
} else {
format!("\"{}\"", raw_path)
format!("\"{raw_path}\"")
};
// Use a quoted key so backslashes don't require escaping on Windows
let initial = format!(

View File

@@ -72,7 +72,7 @@ fn is_api_message(message: &ResponseItem) -> bool {
| ResponseItem::CustomToolCallOutput { .. }
| ResponseItem::LocalShellCall { .. }
| ResponseItem::Reasoning { .. } => true,
ResponseItem::Other => false,
ResponseItem::WebSearchCall { .. } | ResponseItem::Other => false,
}
}

View File

@@ -0,0 +1,127 @@
use codex_protocol::custom_prompts::CustomPrompt;
use std::collections::HashSet;
use std::path::Path;
use std::path::PathBuf;
use tokio::fs;
/// Return the default prompts directory: `$CODEX_HOME/prompts`.
/// If `CODEX_HOME` cannot be resolved, returns `None`.
pub fn default_prompts_dir() -> Option<PathBuf> {
crate::config::find_codex_home()
.ok()
.map(|home| home.join("prompts"))
}
/// Discover prompt files in the given directory, returning entries sorted by name.
/// Non-files are ignored. If the directory does not exist or cannot be read, returns empty.
pub async fn discover_prompts_in(dir: &Path) -> Vec<CustomPrompt> {
discover_prompts_in_excluding(dir, &HashSet::new()).await
}
/// Discover prompt files in the given directory, excluding any with names in `exclude`.
/// Returns entries sorted by name. Non-files are ignored. Missing/unreadable dir yields empty.
pub async fn discover_prompts_in_excluding(
dir: &Path,
exclude: &HashSet<String>,
) -> Vec<CustomPrompt> {
let mut out: Vec<CustomPrompt> = Vec::new();
let mut entries = match fs::read_dir(dir).await {
Ok(entries) => entries,
Err(_) => return out,
};
while let Ok(Some(entry)) = entries.next_entry().await {
let path = entry.path();
let is_file = entry
.file_type()
.await
.map(|ft| ft.is_file())
.unwrap_or(false);
if !is_file {
continue;
}
// Only include Markdown files with a .md extension.
let is_md = path
.extension()
.and_then(|s| s.to_str())
.map(|ext| ext.eq_ignore_ascii_case("md"))
.unwrap_or(false);
if !is_md {
continue;
}
let Some(name) = path
.file_stem()
.and_then(|s| s.to_str())
.map(|s| s.to_string())
else {
continue;
};
if exclude.contains(&name) {
continue;
}
let content = match fs::read_to_string(&path).await {
Ok(s) => s,
Err(_) => continue,
};
out.push(CustomPrompt {
name,
path,
content,
});
}
out.sort_by(|a, b| a.name.cmp(&b.name));
out
}
#[cfg(test)]
mod tests {
use super::*;
use std::fs;
use tempfile::tempdir;
#[tokio::test]
async fn empty_when_dir_missing() {
let tmp = tempdir().expect("create TempDir");
let missing = tmp.path().join("nope");
let found = discover_prompts_in(&missing).await;
assert!(found.is_empty());
}
#[tokio::test]
async fn discovers_and_sorts_files() {
let tmp = tempdir().expect("create TempDir");
let dir = tmp.path();
fs::write(dir.join("b.md"), b"b").unwrap();
fs::write(dir.join("a.md"), b"a").unwrap();
fs::create_dir(dir.join("subdir")).unwrap();
let found = discover_prompts_in(dir).await;
let names: Vec<String> = found.into_iter().map(|e| e.name).collect();
assert_eq!(names, vec!["a", "b"]);
}
#[tokio::test]
async fn excludes_builtins() {
let tmp = tempdir().expect("create TempDir");
let dir = tmp.path();
fs::write(dir.join("init.md"), b"ignored").unwrap();
fs::write(dir.join("foo.md"), b"ok").unwrap();
let mut exclude = HashSet::new();
exclude.insert("init".to_string());
let found = discover_prompts_in_excluding(dir, &exclude).await;
let names: Vec<String> = found.into_iter().map(|e| e.name).collect();
assert_eq!(names, vec!["foo"]);
}
#[tokio::test]
async fn skips_non_utf8_files() {
let tmp = tempdir().expect("create TempDir");
let dir = tmp.path();
// Valid UTF-8 file
fs::write(dir.join("good.md"), b"hello").unwrap();
// Invalid UTF-8 content in .md file (e.g., lone 0xFF byte)
fs::write(dir.join("bad.md"), vec![0xFF, 0xFE, b'\n']).unwrap();
let found = discover_prompts_in(dir).await;
let names: Vec<String> = found.into_iter().map(|e| e.name).collect();
assert_eq!(names, vec!["good"]);
}
}

View File

@@ -85,23 +85,21 @@ impl EnvironmentContext {
}
if let Some(approval_policy) = self.approval_policy {
lines.push(format!(
" <approval_policy>{}</approval_policy>",
approval_policy
" <approval_policy>{approval_policy}</approval_policy>"
));
}
if let Some(sandbox_mode) = self.sandbox_mode {
lines.push(format!(" <sandbox_mode>{}</sandbox_mode>", sandbox_mode));
lines.push(format!(" <sandbox_mode>{sandbox_mode}</sandbox_mode>"));
}
if let Some(network_access) = self.network_access {
lines.push(format!(
" <network_access>{}</network_access>",
network_access
" <network_access>{network_access}</network_access>"
));
}
if let Some(shell) = self.shell
&& let Some(shell_name) = shell.name()
{
lines.push(format!(" <shell>{}</shell>", shell_name));
lines.push(format!(" <shell>{shell_name}</shell>"));
}
lines.push(ENVIRONMENT_CONTEXT_END.to_string());
lines.join("\n")

View File

@@ -170,15 +170,15 @@ fn format_reset_duration(total_secs: u64) -> String {
let mut parts: Vec<String> = Vec::new();
if days > 0 {
let unit = if days == 1 { "day" } else { "days" };
parts.push(format!("{} {}", days, unit));
parts.push(format!("{days} {unit}"));
}
if hours > 0 {
let unit = if hours == 1 { "hour" } else { "hours" };
parts.push(format!("{} {}", hours, unit));
parts.push(format!("{hours} {unit}"));
}
if minutes > 0 {
let unit = if minutes == 1 { "minute" } else { "minutes" };
parts.push(format!("{} {}", minutes, unit));
parts.push(format!("{minutes} {unit}"));
}
if parts.is_empty() {

View File

@@ -359,10 +359,7 @@ fn truncate_middle(s: &str, max_bytes: usize) -> (String, Option<u64>) {
let est_tokens = (s.len() as u64).div_ceil(4);
if max_bytes == 0 {
// Cannot keep any content; still return a full marker (never truncated).
return (
format!("{} tokens truncated…", est_tokens),
Some(est_tokens),
);
return (format!("{est_tokens} tokens truncated…"), Some(est_tokens));
}
// Helper to truncate a string to a given byte length on a char boundary.
@@ -406,16 +403,13 @@ fn truncate_middle(s: &str, max_bytes: usize) -> (String, Option<u64>) {
// Refine marker length and budgets until stable. Marker is never truncated.
let mut guess_tokens = est_tokens; // worst-case: everything truncated
for _ in 0..4 {
let marker = format!("{} tokens truncated…", guess_tokens);
let marker = format!("{guess_tokens} tokens truncated…");
let marker_len = marker.len();
let keep_budget = max_bytes.saturating_sub(marker_len);
if keep_budget == 0 {
// No room for any content within the cap; return a full, untruncated marker
// that reflects the entire truncated content.
return (
format!("{} tokens truncated…", est_tokens),
Some(est_tokens),
);
return (format!("{est_tokens} tokens truncated…"), Some(est_tokens));
}
let left_budget = keep_budget / 2;
@@ -441,14 +435,11 @@ fn truncate_middle(s: &str, max_bytes: usize) -> (String, Option<u64>) {
}
// Fallback: use last guess to build output.
let marker = format!("{} tokens truncated…", guess_tokens);
let marker = format!("{guess_tokens} tokens truncated…");
let marker_len = marker.len();
let keep_budget = max_bytes.saturating_sub(marker_len);
if keep_budget == 0 {
return (
format!("{} tokens truncated…", est_tokens),
Some(est_tokens),
);
return (format!("{est_tokens} tokens truncated…"), Some(est_tokens));
}
let left_budget = keep_budget / 2;
let right_budget = keep_budget - left_budget;

View File

@@ -10,7 +10,35 @@ use tokio::process::Command;
use tokio::time::Duration as TokioDuration;
use tokio::time::timeout;
use crate::util::is_inside_git_repo;
/// Return `true` if the project folder specified by the `Config` is inside a
/// Git repository.
///
/// The check walks up the directory hierarchy looking for a `.git` file or
/// directory (note `.git` can be a file that contains a `gitdir` entry). This
/// approach does **not** require the `git` binary or the `git2` crate and is
/// therefore fairly lightweight.
///
/// Note that this does **not** detect *worktrees* created with
/// `git worktree add` where the checkout lives outside the main repository
/// directory. If you need Codex to work from such a checkout simply pass the
/// `--allow-no-git-exec` CLI flag that disables the repo requirement.
pub fn get_git_repo_root(base_dir: &Path) -> Option<PathBuf> {
let mut dir = base_dir.to_path_buf();
loop {
if dir.join(".git").exists() {
return Some(dir);
}
// Pop one component (go up one directory). `pop` returns false when
// we have reached the filesystem root.
if !dir.pop() {
break;
}
}
None
}
/// Timeout for git commands to prevent freezing on large repositories
const GIT_COMMAND_TIMEOUT: TokioDuration = TokioDuration::from_secs(5);
@@ -94,9 +122,7 @@ pub async fn collect_git_info(cwd: &Path) -> Option<GitInfo> {
/// Returns the closest git sha to HEAD that is on a remote as well as the diff to that sha.
pub async fn git_diff_to_remote(cwd: &Path) -> Option<GitDiffToRemote> {
if !is_inside_git_repo(cwd) {
return None;
}
get_git_repo_root(cwd)?;
let remotes = get_git_remotes(cwd).await?;
let branches = branch_ancestry(cwd).await?;
@@ -440,7 +466,7 @@ async fn diff_against_sha(cwd: &Path, sha: &GitSha) -> Option<String> {
}
/// Resolve the path that should be used for trust checks. Similar to
/// `[utils::is_inside_git_repo]`, but resolves to the root of the main
/// `[get_git_repo_root]`, but resolves to the root of the main
/// repository. Handles worktrees.
pub fn resolve_root_git_project_for_trust(cwd: &Path) -> Option<PathBuf> {
let base = if cwd.is_dir() { cwd } else { cwd.parent()? };

View File

@@ -17,6 +17,7 @@ pub mod config;
pub mod config_profile;
pub mod config_types;
mod conversation_history;
pub mod custom_prompts;
mod environment_context;
pub mod error;
pub mod exec;

View File

@@ -47,7 +47,9 @@ pub(crate) enum OpenAiTool {
Function(ResponsesApiTool),
#[serde(rename = "local_shell")]
LocalShell {},
#[serde(rename = "web_search")]
// TODO: Understand why we get an error on web_search although the API docs say it's supported.
// https://platform.openai.com/docs/guides/tools-web-search?api-mode=responses#:~:text=%7B%20type%3A%20%22web_search%22%20%7D%2C
#[serde(rename = "web_search_preview")]
WebSearch {},
#[serde(rename = "custom")]
Freeform(FreeformTool),
@@ -67,6 +69,7 @@ pub(crate) struct ToolsConfig {
pub plan_tool: bool,
pub apply_patch_tool_type: Option<ApplyPatchToolType>,
pub web_search_request: bool,
pub include_view_image_tool: bool,
}
pub(crate) struct ToolsConfigParams<'a> {
@@ -77,6 +80,7 @@ pub(crate) struct ToolsConfigParams<'a> {
pub(crate) include_apply_patch_tool: bool,
pub(crate) include_web_search_request: bool,
pub(crate) use_streamable_shell_tool: bool,
pub(crate) include_view_image_tool: bool,
}
impl ToolsConfig {
@@ -89,6 +93,7 @@ impl ToolsConfig {
include_apply_patch_tool,
include_web_search_request,
use_streamable_shell_tool,
include_view_image_tool,
} = params;
let mut shell_type = if *use_streamable_shell_tool {
ConfigShellToolType::StreamableShell
@@ -120,6 +125,7 @@ impl ToolsConfig {
plan_tool: *include_plan_tool,
apply_patch_tool_type,
web_search_request: *include_web_search_request,
include_view_image_tool: *include_view_image_tool,
}
}
}
@@ -292,6 +298,30 @@ The shell tool is used to execute shell commands.
},
})
}
fn create_view_image_tool() -> OpenAiTool {
// Support only local filesystem path.
let mut properties = BTreeMap::new();
properties.insert(
"path".to_string(),
JsonSchema::String {
description: Some("Local filesystem path to an image file".to_string()),
},
);
OpenAiTool::Function(ResponsesApiTool {
name: "view_image".to_string(),
description:
"Attach a local image (by filesystem path) to the conversation context for this turn."
.to_string(),
strict: false,
parameters: JsonSchema::Object {
properties,
required: Some(vec!["path".to_string()]),
additional_properties: Some(false),
},
})
}
/// TODO(dylan): deprecate once we get rid of json tool
#[derive(Serialize, Deserialize)]
pub(crate) struct ApplyPatchToolArgs {
@@ -307,12 +337,12 @@ pub fn create_tools_json_for_responses_api(
let mut tools_json = Vec::new();
for tool in tools {
tools_json.push(serde_json::to_value(tool)?);
let json = serde_json::to_value(tool)?;
tools_json.push(json);
}
Ok(tools_json)
}
/// Returns JSON values that are compatible with Function Calling in the
/// Chat Completions API:
/// https://platform.openai.com/docs/guides/function-calling?api-mode=chat
@@ -541,6 +571,11 @@ pub(crate) fn get_openai_tools(
tools.push(OpenAiTool::WebSearch {});
}
// Include the view_image tool so the agent can attach images to context.
if config.include_view_image_tool {
tools.push(create_view_image_tool());
}
if let Some(mcp_tools) = mcp_tools {
// Ensure deterministic ordering to maximize prompt cache hits.
// HashMap iteration order is non-deterministic, so sort by fully-qualified tool name.
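The comment above describes the determinism trick; a generic sketch (tool type simplified to `T`, not the crate's code) of sorting map entries by fully-qualified name before appending:

```rust
use std::collections::HashMap;

/// Sketch only: HashMap iteration order is arbitrary, so collect the entries
/// and sort them by name to keep the tool list stable (and therefore
/// prompt-cache friendly) across runs.
fn sort_tools_by_name<T>(mcp_tools: HashMap<String, T>) -> Vec<(String, T)> {
    let mut entries: Vec<(String, T)> = mcp_tools.into_iter().collect();
    entries.sort_by(|(a, _), (b, _)| a.cmp(b));
    entries
}
```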
@@ -604,10 +639,14 @@ mod tests {
include_apply_patch_tool: false,
include_web_search_request: true,
use_streamable_shell_tool: false,
include_view_image_tool: true,
});
let tools = get_openai_tools(&config, Some(HashMap::new()));
assert_eq_tool_names(&tools, &["local_shell", "update_plan", "web_search"]);
assert_eq_tool_names(
&tools,
&["local_shell", "update_plan", "web_search", "view_image"],
);
}
#[test]
@@ -621,10 +660,14 @@ mod tests {
include_apply_patch_tool: false,
include_web_search_request: true,
use_streamable_shell_tool: false,
include_view_image_tool: true,
});
let tools = get_openai_tools(&config, Some(HashMap::new()));
assert_eq_tool_names(&tools, &["shell", "update_plan", "web_search"]);
assert_eq_tool_names(
&tools,
&["shell", "update_plan", "web_search", "view_image"],
);
}
#[test]
@@ -638,6 +681,7 @@ mod tests {
include_apply_patch_tool: false,
include_web_search_request: true,
use_streamable_shell_tool: false,
include_view_image_tool: true,
});
let tools = get_openai_tools(
&config,
@@ -660,8 +704,8 @@ mod tests {
"number_property": { "type": "number" },
},
"required": [
"string_property".to_string(),
"number_property".to_string()
"string_property",
"number_property",
],
"additionalProperties": Some(false),
},
@@ -679,11 +723,16 @@ mod tests {
assert_eq_tool_names(
&tools,
&["shell", "web_search", "test_server/do_something_cool"],
&[
"shell",
"web_search",
"view_image",
"test_server/do_something_cool",
],
);
assert_eq!(
tools[2],
tools[3],
OpenAiTool::Function(ResponsesApiTool {
name: "test_server/do_something_cool".to_string(),
parameters: JsonSchema::Object {
@@ -737,6 +786,7 @@ mod tests {
include_apply_patch_tool: false,
include_web_search_request: false,
use_streamable_shell_tool: false,
include_view_image_tool: true,
});
// Intentionally construct a map with keys that would sort alphabetically.
@@ -794,6 +844,7 @@ mod tests {
&tools,
&[
"shell",
"view_image",
"test_server/cool",
"test_server/do",
"test_server/something",
@@ -812,6 +863,7 @@ mod tests {
include_apply_patch_tool: false,
include_web_search_request: true,
use_streamable_shell_tool: false,
include_view_image_tool: true,
});
let tools = get_openai_tools(
@@ -837,10 +889,13 @@ mod tests {
)])),
);
assert_eq_tool_names(&tools, &["shell", "web_search", "dash/search"]);
assert_eq_tool_names(
&tools,
&["shell", "web_search", "view_image", "dash/search"],
);
assert_eq!(
tools[2],
tools[3],
OpenAiTool::Function(ResponsesApiTool {
name: "dash/search".to_string(),
parameters: JsonSchema::Object {
@@ -870,6 +925,7 @@ mod tests {
include_apply_patch_tool: false,
include_web_search_request: true,
use_streamable_shell_tool: false,
include_view_image_tool: true,
});
let tools = get_openai_tools(
@@ -893,9 +949,12 @@ mod tests {
)])),
);
assert_eq_tool_names(&tools, &["shell", "web_search", "dash/paginate"]);
assert_eq_tool_names(
&tools,
&["shell", "web_search", "view_image", "dash/paginate"],
);
assert_eq!(
tools[2],
tools[3],
OpenAiTool::Function(ResponsesApiTool {
name: "dash/paginate".to_string(),
parameters: JsonSchema::Object {
@@ -923,6 +982,7 @@ mod tests {
include_apply_patch_tool: false,
include_web_search_request: true,
use_streamable_shell_tool: false,
include_view_image_tool: true,
});
let tools = get_openai_tools(
@@ -946,9 +1006,9 @@ mod tests {
)])),
);
assert_eq_tool_names(&tools, &["shell", "web_search", "dash/tags"]);
assert_eq_tool_names(&tools, &["shell", "web_search", "view_image", "dash/tags"]);
assert_eq!(
tools[2],
tools[3],
OpenAiTool::Function(ResponsesApiTool {
name: "dash/tags".to_string(),
parameters: JsonSchema::Object {
@@ -979,6 +1039,7 @@ mod tests {
include_apply_patch_tool: false,
include_web_search_request: true,
use_streamable_shell_tool: false,
include_view_image_tool: true,
});
let tools = get_openai_tools(
@@ -1002,9 +1063,9 @@ mod tests {
)])),
);
assert_eq_tool_names(&tools, &["shell", "web_search", "dash/value"]);
assert_eq_tool_names(&tools, &["shell", "web_search", "view_image", "dash/value"]);
assert_eq!(
tools[2],
tools[3],
OpenAiTool::Function(ResponsesApiTool {
name: "dash/value".to_string(),
parameters: JsonSchema::Object {

View File

@@ -20,22 +20,6 @@ pub enum ParsedCommand {
query: Option<String>,
path: Option<String>,
},
Format {
cmd: String,
tool: Option<String>,
targets: Option<Vec<String>>,
},
Test {
cmd: String,
},
Lint {
cmd: String,
tool: Option<String>,
targets: Option<Vec<String>>,
},
Noop {
cmd: String,
},
Unknown {
cmd: String,
},
@@ -50,10 +34,6 @@ impl From<ParsedCommand> for codex_protocol::parse_command::ParsedCommand {
ParsedCommand::Read { cmd, name } => P::Read { cmd, name },
ParsedCommand::ListFiles { cmd, path } => P::ListFiles { cmd, path },
ParsedCommand::Search { cmd, query, path } => P::Search { cmd, query, path },
ParsedCommand::Format { cmd, tool, targets } => P::Format { cmd, tool, targets },
ParsedCommand::Test { cmd } => P::Test { cmd },
ParsedCommand::Lint { cmd, tool, targets } => P::Lint { cmd, tool, targets },
ParsedCommand::Noop { cmd } => P::Noop { cmd },
ParsedCommand::Unknown { cmd } => P::Unknown { cmd },
}
}
@@ -122,7 +102,7 @@ mod tests {
assert_parsed(
&vec_str(&["bash", "-lc", inner]),
vec![ParsedCommand::Unknown {
cmd: "git status | wc -l".to_string(),
cmd: "git status".to_string(),
}],
);
}
@@ -244,6 +224,17 @@ mod tests {
);
}
#[test]
fn cd_then_cat_is_single_read() {
assert_parsed(
&shlex_split_safe("cd foo && cat foo.txt"),
vec![ParsedCommand::Read {
cmd: "cat foo.txt".to_string(),
name: "foo.txt".to_string(),
}],
);
}
#[test]
fn supports_ls_with_pipe() {
let inner = "ls -la | sed -n '1,120p'";
@@ -315,27 +306,6 @@ mod tests {
);
}
#[test]
fn supports_npm_run_with_forwarded_args() {
assert_parsed(
&vec_str(&[
"npm",
"run",
"lint",
"--",
"--max-warnings",
"0",
"--format",
"json",
]),
vec![ParsedCommand::Lint {
cmd: "npm run lint -- --max-warnings 0 --format json".to_string(),
tool: Some("npm-script:lint".to_string()),
targets: None,
}],
);
}
#[test]
fn supports_grep_recursive_current_dir() {
assert_parsed(
@@ -396,173 +366,10 @@ mod tests {
fn supports_cd_and_rg_files() {
assert_parsed(
&shlex_split_safe("cd codex-rs && rg --files"),
vec![
ParsedCommand::Unknown {
cmd: "cd codex-rs".to_string(),
},
ParsedCommand::Search {
cmd: "rg --files".to_string(),
query: None,
path: None,
},
],
);
}
#[test]
fn echo_then_cargo_test_sequence() {
assert_parsed(
&shlex_split_safe("echo Running tests... && cargo test --all-features --quiet"),
vec![ParsedCommand::Test {
cmd: "cargo test --all-features --quiet".to_string(),
}],
);
}
#[test]
fn supports_cargo_fmt_and_test_with_config() {
assert_parsed(
&shlex_split_safe(
"cargo fmt -- --config imports_granularity=Item && cargo test -p core --all-features",
),
vec![
ParsedCommand::Format {
cmd: "cargo fmt -- --config 'imports_granularity=Item'".to_string(),
tool: Some("cargo fmt".to_string()),
targets: None,
},
ParsedCommand::Test {
cmd: "cargo test -p core --all-features".to_string(),
},
],
);
}
#[test]
fn recognizes_rustfmt_and_clippy() {
assert_parsed(
&shlex_split_safe("rustfmt src/main.rs"),
vec![ParsedCommand::Format {
cmd: "rustfmt src/main.rs".to_string(),
tool: Some("rustfmt".to_string()),
targets: Some(vec!["src/main.rs".to_string()]),
}],
);
assert_parsed(
&shlex_split_safe("cargo clippy -p core --all-features -- -D warnings"),
vec![ParsedCommand::Lint {
cmd: "cargo clippy -p core --all-features -- -D warnings".to_string(),
tool: Some("cargo clippy".to_string()),
targets: None,
}],
);
}
#[test]
fn recognizes_pytest_go_and_tools() {
assert_parsed(
&shlex_split_safe(
"pytest -k 'Login and not slow' tests/test_login.py::TestLogin::test_ok",
),
vec![ParsedCommand::Test {
cmd: "pytest -k 'Login and not slow' tests/test_login.py::TestLogin::test_ok"
.to_string(),
}],
);
assert_parsed(
&shlex_split_safe("go fmt ./..."),
vec![ParsedCommand::Format {
cmd: "go fmt ./...".to_string(),
tool: Some("go fmt".to_string()),
targets: Some(vec!["./...".to_string()]),
}],
);
assert_parsed(
&shlex_split_safe("go test ./pkg -run TestThing"),
vec![ParsedCommand::Test {
cmd: "go test ./pkg -run TestThing".to_string(),
}],
);
assert_parsed(
&shlex_split_safe("eslint . --max-warnings 0"),
vec![ParsedCommand::Lint {
cmd: "eslint . --max-warnings 0".to_string(),
tool: Some("eslint".to_string()),
targets: Some(vec![".".to_string()]),
}],
);
assert_parsed(
&shlex_split_safe("prettier -w ."),
vec![ParsedCommand::Format {
cmd: "prettier -w .".to_string(),
tool: Some("prettier".to_string()),
targets: Some(vec![".".to_string()]),
}],
);
}
#[test]
fn recognizes_jest_and_vitest_filters() {
assert_parsed(
&shlex_split_safe("jest -t 'should work' src/foo.test.ts"),
vec![ParsedCommand::Test {
cmd: "jest -t 'should work' src/foo.test.ts".to_string(),
}],
);
assert_parsed(
&shlex_split_safe("vitest -t 'runs' src/foo.test.tsx"),
vec![ParsedCommand::Test {
cmd: "vitest -t runs src/foo.test.tsx".to_string(),
}],
);
}
#[test]
fn recognizes_npx_and_scripts() {
assert_parsed(
&shlex_split_safe("npx eslint src"),
vec![ParsedCommand::Lint {
cmd: "npx eslint src".to_string(),
tool: Some("eslint".to_string()),
targets: Some(vec!["src".to_string()]),
}],
);
assert_parsed(
&shlex_split_safe("npx prettier -c ."),
vec![ParsedCommand::Format {
cmd: "npx prettier -c .".to_string(),
tool: Some("prettier".to_string()),
targets: Some(vec![".".to_string()]),
}],
);
assert_parsed(
&shlex_split_safe("pnpm run lint -- --max-warnings 0"),
vec![ParsedCommand::Lint {
cmd: "pnpm run lint -- --max-warnings 0".to_string(),
tool: Some("pnpm-script:lint".to_string()),
targets: None,
}],
);
assert_parsed(
&shlex_split_safe("npm test"),
vec![ParsedCommand::Test {
cmd: "npm test".to_string(),
}],
);
assert_parsed(
&shlex_split_safe("yarn test"),
vec![ParsedCommand::Test {
cmd: "yarn test".to_string(),
vec![ParsedCommand::Search {
cmd: "rg --files".to_string(),
query: None,
path: None,
}],
);
}
@@ -770,6 +577,51 @@ mod tests {
);
}
#[test]
fn parses_mixed_sequence_with_pipes_semicolons_and_or() {
// Provided long command sequence combining sequencing, pipelines, and ORs.
let inner = "pwd; ls -la; rg --files -g '!target' | wc -l; rg -n '^\\[workspace\\]' -n Cargo.toml || true; rg -n '^\\[package\\]' -n */Cargo.toml || true; cargo --version; rustc --version; cargo clippy --workspace --all-targets --all-features -q";
let args = vec_str(&["bash", "-lc", inner]);
let expected = vec![
ParsedCommand::Unknown {
cmd: "pwd".to_string(),
},
ParsedCommand::ListFiles {
cmd: shlex_join(&shlex_split_safe("ls -la")),
path: None,
},
ParsedCommand::Search {
cmd: shlex_join(&shlex_split_safe("rg --files -g '!target'")),
query: None,
path: Some("!target".to_string()),
},
ParsedCommand::Search {
cmd: shlex_join(&shlex_split_safe("rg -n '^\\[workspace\\]' -n Cargo.toml")),
query: Some("^\\[workspace\\]".to_string()),
path: Some("Cargo.toml".to_string()),
},
ParsedCommand::Search {
cmd: shlex_join(&shlex_split_safe("rg -n '^\\[package\\]' -n */Cargo.toml")),
query: Some("^\\[package\\]".to_string()),
path: Some("Cargo.toml".to_string()),
},
ParsedCommand::Unknown {
cmd: shlex_join(&shlex_split_safe("cargo --version")),
},
ParsedCommand::Unknown {
cmd: shlex_join(&shlex_split_safe("rustc --version")),
},
ParsedCommand::Unknown {
cmd: shlex_join(&shlex_split_safe(
"cargo clippy --workspace --all-targets --all-features -q",
)),
},
];
assert_parsed(&args, expected);
}
#[test]
fn strips_true_in_sequence() {
// `true` should be dropped from parsed sequences
@@ -867,159 +719,6 @@ mod tests {
);
}
#[test]
fn pnpm_test_is_parsed_as_test() {
assert_parsed(
&shlex_split_safe("pnpm test"),
vec![ParsedCommand::Test {
cmd: "pnpm test".to_string(),
}],
);
}
#[test]
fn pnpm_exec_vitest_is_unknown() {
// From commands_combined: cd codex-cli && pnpm exec vitest run tests/... --threads=false --passWithNoTests
let inner = "cd codex-cli && pnpm exec vitest run tests/file-tag-utils.test.ts --threads=false --passWithNoTests";
assert_parsed(
&shlex_split_safe(inner),
vec![
ParsedCommand::Unknown {
cmd: "cd codex-cli".to_string(),
},
ParsedCommand::Unknown {
cmd: "pnpm exec vitest run tests/file-tag-utils.test.ts '--threads=false' --passWithNoTests".to_string(),
},
],
);
}
#[test]
fn cargo_test_with_crate() {
assert_parsed(
&shlex_split_safe("cargo test -p codex-core parse_command::"),
vec![ParsedCommand::Test {
cmd: "cargo test -p codex-core parse_command::".to_string(),
}],
);
}
#[test]
fn cargo_test_with_crate_2() {
assert_parsed(
&shlex_split_safe(
"cd core && cargo test -q parse_command::tests::bash_dash_c_pipeline_parsing parse_command::tests::fd_file_finder_variants",
),
vec![ParsedCommand::Test {
cmd: "cargo test -q parse_command::tests::bash_dash_c_pipeline_parsing parse_command::tests::fd_file_finder_variants".to_string(),
}],
);
}
#[test]
fn cargo_test_with_crate_3() {
assert_parsed(
&shlex_split_safe("cd core && cargo test -q parse_command::tests"),
vec![ParsedCommand::Test {
cmd: "cargo test -q parse_command::tests".to_string(),
}],
);
}
#[test]
fn cargo_test_with_crate_4() {
assert_parsed(
&shlex_split_safe("cd core && cargo test --all-features parse_command -- --nocapture"),
vec![ParsedCommand::Test {
cmd: "cargo test --all-features parse_command -- --nocapture".to_string(),
}],
);
}
// Additional coverage for other common tools/frameworks
#[test]
fn recognizes_black_and_ruff() {
// black formats Python code
assert_parsed(
&shlex_split_safe("black src"),
vec![ParsedCommand::Format {
cmd: "black src".to_string(),
tool: Some("black".to_string()),
targets: Some(vec!["src".to_string()]),
}],
);
// ruff check is a linter; ensure we collect targets
assert_parsed(
&shlex_split_safe("ruff check ."),
vec![ParsedCommand::Lint {
cmd: "ruff check .".to_string(),
tool: Some("ruff".to_string()),
targets: Some(vec![".".to_string()]),
}],
);
// ruff format is a formatter
assert_parsed(
&shlex_split_safe("ruff format pkg/"),
vec![ParsedCommand::Format {
cmd: "ruff format pkg/".to_string(),
tool: Some("ruff".to_string()),
targets: Some(vec!["pkg/".to_string()]),
}],
);
}
#[test]
fn recognizes_pnpm_monorepo_test_and_npm_format_script() {
// pnpm -r test in a monorepo should still parse as a test action
assert_parsed(
&shlex_split_safe("pnpm -r test"),
vec![ParsedCommand::Test {
cmd: "pnpm -r test".to_string(),
}],
);
// npm run format should be recognized as a format action
assert_parsed(
&shlex_split_safe("npm run format -- -w ."),
vec![ParsedCommand::Format {
cmd: "npm run format -- -w .".to_string(),
tool: Some("npm-script:format".to_string()),
targets: None,
}],
);
}
#[test]
fn yarn_test_is_parsed_as_test() {
assert_parsed(
&shlex_split_safe("yarn test"),
vec![ParsedCommand::Test {
cmd: "yarn test".to_string(),
}],
);
}
#[test]
fn pytest_file_only_and_go_run_regex() {
// pytest invoked with a file path should be captured as a filter
assert_parsed(
&shlex_split_safe("pytest tests/test_example.py"),
vec![ParsedCommand::Test {
cmd: "pytest tests/test_example.py".to_string(),
}],
);
// go test with -run regex should capture the filter
assert_parsed(
&shlex_split_safe("go test ./... -run '^TestFoo$'"),
vec![ParsedCommand::Test {
cmd: "go test ./... -run '^TestFoo$'".to_string(),
}],
);
}
#[test]
fn grep_with_query_and_path() {
assert_parsed(
@@ -1090,30 +789,6 @@ mod tests {
);
}
#[test]
fn eslint_with_config_path_and_target() {
assert_parsed(
&shlex_split_safe("eslint -c .eslintrc.json src"),
vec![ParsedCommand::Lint {
cmd: "eslint -c .eslintrc.json src".to_string(),
tool: Some("eslint".to_string()),
targets: Some(vec!["src".to_string()]),
}],
);
}
#[test]
fn npx_eslint_with_config_path_and_target() {
assert_parsed(
&shlex_split_safe("npx eslint -c .eslintrc src"),
vec![ParsedCommand::Lint {
cmd: "npx eslint -c .eslintrc src".to_string(),
tool: Some("eslint".to_string()),
targets: Some(vec!["src".to_string()]),
}],
);
}
#[test]
fn fd_file_finder_variants() {
assert_parsed(
@@ -1202,16 +877,13 @@ fn simplify_once(commands: &[ParsedCommand]) -> Option<Vec<ParsedCommand>> {
return Some(commands[1..].to_vec());
}
// cd foo && [any Test command] => [any Test command]
// cd foo && [any command] => [any command] (keep non-cd when a cd is followed by something)
if let Some(idx) = commands.iter().position(|pc| match pc {
ParsedCommand::Unknown { cmd } => {
shlex_split(cmd).is_some_and(|t| t.first().map(|s| s.as_str()) == Some("cd"))
}
_ => false,
}) && commands
.iter()
.skip(idx + 1)
.any(|pc| matches!(pc, ParsedCommand::Test { .. }))
}) && commands.len() > idx + 1
{
let mut out = Vec::with_capacity(commands.len() - 1);
out.extend_from_slice(&commands[..idx]);
@@ -1220,10 +892,10 @@ fn simplify_once(commands: &[ParsedCommand]) -> Option<Vec<ParsedCommand>> {
}
// cmd || true => cmd
if let Some(idx) = commands.iter().position(|pc| match pc {
ParsedCommand::Noop { cmd } => cmd == "true",
_ => false,
}) {
if let Some(idx) = commands
.iter()
.position(|pc| matches!(pc, ParsedCommand::Unknown { cmd } if cmd == "true"))
{
let mut out = Vec::with_capacity(commands.len() - 1);
out.extend_from_slice(&commands[..idx]);
out.extend_from_slice(&commands[idx + 1..]);
@@ -1377,75 +1049,6 @@ fn skip_flag_values<'a>(args: &'a [String], flags_with_vals: &[&str]) -> Vec<&'a
out
}
/// Common flags for ESLint that take a following value and should not be
/// considered positional targets.
const ESLINT_FLAGS_WITH_VALUES: &[&str] = &[
"-c",
"--config",
"--parser",
"--parser-options",
"--rulesdir",
"--plugin",
"--max-warnings",
"--format",
];
fn collect_non_flag_targets(args: &[String]) -> Option<Vec<String>> {
let mut targets = Vec::new();
let mut skip_next = false;
for (i, a) in args.iter().enumerate() {
if a == "--" {
break;
}
if skip_next {
skip_next = false;
continue;
}
if a == "-p"
|| a == "--package"
|| a == "--features"
|| a == "-C"
|| a == "--config"
|| a == "--config-path"
|| a == "--out-dir"
|| a == "-o"
|| a == "--run"
|| a == "--max-warnings"
|| a == "--format"
{
if i + 1 < args.len() {
skip_next = true;
}
continue;
}
if a.starts_with('-') {
continue;
}
targets.push(a.clone());
}
if targets.is_empty() {
None
} else {
Some(targets)
}
}
fn collect_non_flag_targets_with_flags(
args: &[String],
flags_with_vals: &[&str],
) -> Option<Vec<String>> {
let targets: Vec<String> = skip_flag_values(args, flags_with_vals)
.into_iter()
.filter(|a| !a.starts_with('-'))
.cloned()
.collect();
if targets.is_empty() {
None
} else {
Some(targets)
}
}
fn is_pathish(s: &str) -> bool {
s == "."
|| s == ".."
@@ -1514,47 +1117,6 @@ fn parse_find_query_and_path(tail: &[String]) -> (Option<String>, Option<String>
(query, path)
}
fn classify_npm_like(tool: &str, tail: &[String], full_cmd: &[String]) -> Option<ParsedCommand> {
let mut r = tail;
if tool == "pnpm" && r.first().map(|s| s.as_str()) == Some("-r") {
r = &r[1..];
}
let mut script_name: Option<String> = None;
if r.first().map(|s| s.as_str()) == Some("run") {
script_name = r.get(1).cloned();
} else {
let is_test_cmd = (tool == "npm" && r.first().map(|s| s.as_str()) == Some("t"))
|| ((tool == "npm" || tool == "pnpm" || tool == "yarn")
&& r.first().map(|s| s.as_str()) == Some("test"));
if is_test_cmd {
script_name = Some("test".to_string());
}
}
if let Some(name) = script_name {
let lname = name.to_lowercase();
if lname == "test" || lname == "unit" || lname == "jest" || lname == "vitest" {
return Some(ParsedCommand::Test {
cmd: shlex_join(full_cmd),
});
}
if lname == "lint" || lname == "eslint" {
return Some(ParsedCommand::Lint {
cmd: shlex_join(full_cmd),
tool: Some(format!("{tool}-script:{name}")),
targets: None,
});
}
if lname == "format" || lname == "fmt" || lname == "prettier" {
return Some(ParsedCommand::Format {
cmd: shlex_join(full_cmd),
tool: Some(format!("{tool}-script:{name}")),
targets: None,
});
}
}
None
}
fn parse_bash_lc_commands(original: &[String]) -> Option<Vec<ParsedCommand>> {
let [bash, flag, script] = original else {
return None;
@@ -1586,7 +1148,7 @@ fn parse_bash_lc_commands(original: &[String]) -> Option<Vec<ParsedCommand>> {
.map(|tokens| summarize_main_tokens(&tokens))
.collect();
if commands.len() > 1 {
commands.retain(|pc| !matches!(pc, ParsedCommand::Noop { .. }));
commands.retain(|pc| !matches!(pc, ParsedCommand::Unknown { cmd } if cmd == "true"));
}
if commands.len() == 1 {
// If we reduced to a single command, attribute the full original script
@@ -1655,27 +1217,7 @@ fn parse_bash_lc_commands(original: &[String]) -> Option<Vec<ParsedCommand>> {
}
}
}
ParsedCommand::Format {
tool, targets, cmd, ..
} => ParsedCommand::Format {
cmd: cmd.clone(),
tool,
targets,
},
ParsedCommand::Test { cmd, .. } => ParsedCommand::Test { cmd: cmd.clone() },
ParsedCommand::Lint {
tool, targets, cmd, ..
} => ParsedCommand::Lint {
cmd: cmd.clone(),
tool,
targets,
},
ParsedCommand::Unknown { .. } => ParsedCommand::Unknown {
cmd: script.clone(),
},
ParsedCommand::Noop { .. } => ParsedCommand::Noop {
cmd: script.clone(),
},
other => other,
})
.collect();
}
@@ -1728,124 +1270,6 @@ fn drop_small_formatting_commands(mut commands: Vec<Vec<String>>) -> Vec<Vec<Str
fn summarize_main_tokens(main_cmd: &[String]) -> ParsedCommand {
match main_cmd.split_first() {
Some((head, tail)) if head == "true" && tail.is_empty() => ParsedCommand::Noop {
cmd: shlex_join(main_cmd),
},
// (sed-specific logic handled below in dedicated arm returning Read)
Some((head, tail))
if head == "cargo" && tail.first().map(|s| s.as_str()) == Some("fmt") =>
{
ParsedCommand::Format {
cmd: shlex_join(main_cmd),
tool: Some("cargo fmt".to_string()),
targets: collect_non_flag_targets(&tail[1..]),
}
}
Some((head, tail))
if head == "cargo" && tail.first().map(|s| s.as_str()) == Some("clippy") =>
{
ParsedCommand::Lint {
cmd: shlex_join(main_cmd),
tool: Some("cargo clippy".to_string()),
targets: collect_non_flag_targets(&tail[1..]),
}
}
Some((head, tail))
if head == "cargo" && tail.first().map(|s| s.as_str()) == Some("test") =>
{
ParsedCommand::Test {
cmd: shlex_join(main_cmd),
}
}
Some((head, tail)) if head == "rustfmt" => ParsedCommand::Format {
cmd: shlex_join(main_cmd),
tool: Some("rustfmt".to_string()),
targets: collect_non_flag_targets(tail),
},
Some((head, tail)) if head == "go" && tail.first().map(|s| s.as_str()) == Some("fmt") => {
ParsedCommand::Format {
cmd: shlex_join(main_cmd),
tool: Some("go fmt".to_string()),
targets: collect_non_flag_targets(&tail[1..]),
}
}
Some((head, tail)) if head == "go" && tail.first().map(|s| s.as_str()) == Some("test") => {
ParsedCommand::Test {
cmd: shlex_join(main_cmd),
}
}
Some((head, _)) if head == "pytest" => ParsedCommand::Test {
cmd: shlex_join(main_cmd),
},
Some((head, tail)) if head == "eslint" => {
// Treat configuration flags with values (e.g. `-c .eslintrc`) as non-targets.
let targets = collect_non_flag_targets_with_flags(tail, ESLINT_FLAGS_WITH_VALUES);
ParsedCommand::Lint {
cmd: shlex_join(main_cmd),
tool: Some("eslint".to_string()),
targets,
}
}
Some((head, tail)) if head == "prettier" => ParsedCommand::Format {
cmd: shlex_join(main_cmd),
tool: Some("prettier".to_string()),
targets: collect_non_flag_targets(tail),
},
Some((head, tail)) if head == "black" => ParsedCommand::Format {
cmd: shlex_join(main_cmd),
tool: Some("black".to_string()),
targets: collect_non_flag_targets(tail),
},
Some((head, tail))
if head == "ruff" && tail.first().map(|s| s.as_str()) == Some("check") =>
{
ParsedCommand::Lint {
cmd: shlex_join(main_cmd),
tool: Some("ruff".to_string()),
targets: collect_non_flag_targets(&tail[1..]),
}
}
Some((head, tail))
if head == "ruff" && tail.first().map(|s| s.as_str()) == Some("format") =>
{
ParsedCommand::Format {
cmd: shlex_join(main_cmd),
tool: Some("ruff".to_string()),
targets: collect_non_flag_targets(&tail[1..]),
}
}
Some((head, _)) if (head == "jest" || head == "vitest") => ParsedCommand::Test {
cmd: shlex_join(main_cmd),
},
Some((head, tail))
if head == "npx" && tail.first().map(|s| s.as_str()) == Some("eslint") =>
{
let targets = collect_non_flag_targets_with_flags(&tail[1..], ESLINT_FLAGS_WITH_VALUES);
ParsedCommand::Lint {
cmd: shlex_join(main_cmd),
tool: Some("eslint".to_string()),
targets,
}
}
Some((head, tail))
if head == "npx" && tail.first().map(|s| s.as_str()) == Some("prettier") =>
{
ParsedCommand::Format {
cmd: shlex_join(main_cmd),
tool: Some("prettier".to_string()),
targets: collect_non_flag_targets(&tail[1..]),
}
}
// NPM-like scripts including yarn
Some((tool, tail)) if (tool == "pnpm" || tool == "npm" || tool == "yarn") => {
if let Some(cmd) = classify_npm_like(tool, tail, main_cmd) {
cmd
} else {
ParsedCommand::Unknown {
cmd: shlex_join(main_cmd),
}
}
}
Some((head, tail)) if head == "ls" => {
// Avoid treating option values as paths (e.g., ls -I "*.test.js").
let candidates = skip_flag_values(

View File

@@ -135,7 +135,7 @@ impl RolloutRecorder {
| ResponseItem::CustomToolCall { .. }
| ResponseItem::CustomToolCallOutput { .. }
| ResponseItem::Reasoning { .. } => filtered.push(item.clone()),
ResponseItem::Other => {
ResponseItem::WebSearchCall { .. } | ResponseItem::Other => {
// These should never be serialized.
continue;
}
@@ -199,7 +199,7 @@ impl RolloutRecorder {
| ResponseItem::CustomToolCall { .. }
| ResponseItem::CustomToolCallOutput { .. }
| ResponseItem::Reasoning { .. } => items.push(item),
ResponseItem::Other => {}
ResponseItem::WebSearchCall { .. } | ResponseItem::Other => {}
},
Err(e) => {
warn!("failed to parse item: {v:?}, error: {e}");
@@ -326,7 +326,7 @@ async fn rollout_writer(
| ResponseItem::Reasoning { .. } => {
writer.write_line(&item).await?;
}
ResponseItem::Other => {}
ResponseItem::WebSearchCall { .. } | ResponseItem::Other => {}
}
}
}

View File

@@ -222,7 +222,7 @@ fn is_write_patch_constrained_to_writable_paths(
for (path, change) in action.changes() {
match change {
ApplyPatchFileChange::Add { .. } | ApplyPatchFileChange::Delete => {
ApplyPatchFileChange::Add { .. } | ApplyPatchFileChange::Delete { .. } => {
if !is_path_writable(path) {
return false;
}

View File

@@ -578,7 +578,12 @@ index {ZERO_OID}..{right_oid}
fs::write(&file, "x\n").unwrap();
let mut acc = TurnDiffTracker::new();
let del_changes = HashMap::from([(file.clone(), FileChange::Delete)]);
let del_changes = HashMap::from([(
file.clone(),
FileChange::Delete {
content: "x\n".to_string(),
},
)]);
acc.on_patch_begin(&del_changes);
// Simulate apply: delete the file from disk.
@@ -741,7 +746,12 @@ index {left_oid}..{right_oid}
assert_eq!(first, expected_first);
// Next: introduce a brand-new path b.txt into baseline snapshots via a delete change.
let del_b = HashMap::from([(b.clone(), FileChange::Delete)]);
let del_b = HashMap::from([(
b.clone(),
FileChange::Delete {
content: "z\n".to_string(),
},
)]);
acc.on_patch_begin(&del_b);
// Simulate apply: delete b.txt.
let baseline_mode = file_mode_for_path(&b).unwrap_or(FileMode::Regular);

View File

@@ -1,4 +1,3 @@
use std::path::Path;
use std::time::Duration;
use rand::Rng;
@@ -12,33 +11,3 @@ pub(crate) fn backoff(attempt: u64) -> Duration {
let jitter = rand::rng().random_range(0.9..1.1);
Duration::from_millis((base as f64 * jitter) as u64)
}
/// Return `true` if the project folder specified by the `Config` is inside a
/// Git repository.
///
/// The check walks up the directory hierarchy looking for a `.git` file or
/// directory (note `.git` can be a file that contains a `gitdir` entry). This
/// approach does **not** require the `git` binary or the `git2` crate and is
/// therefore fairly lightweight.
///
/// Note that this does **not** detect *worktrees* created with
/// `git worktree add` where the checkout lives outside the main repository
/// directory. If you need Codex to work from such a checkout simply pass the
/// `--allow-no-git-exec` CLI flag that disables the repo requirement.
pub fn is_inside_git_repo(base_dir: &Path) -> bool {
let mut dir = base_dir.to_path_buf();
loop {
if dir.join(".git").exists() {
return true;
}
// Pop one component (go up one directory). `pop` returns false when
// we have reached the filesystem root.
if !dir.pop() {
break;
}
}
false
}

View File

@@ -418,7 +418,7 @@ async fn prefers_chatgpt_token_when_config_prefers_chatgpt() {
match CodexAuth::from_codex_home(codex_home.path(), config.preferred_auth_method) {
Ok(Some(auth)) => codex_login::AuthManager::from_auth_for_testing(auth),
Ok(None) => panic!("No CodexAuth found in codex_home"),
Err(e) => panic!("Failed to load CodexAuth: {}", e),
Err(e) => panic!("Failed to load CodexAuth: {e}"),
};
let conversation_manager = ConversationManager::new(auth_manager);
let NewConversation {
@@ -499,7 +499,7 @@ async fn prefers_apikey_when_config_prefers_apikey_even_with_chatgpt_tokens() {
match CodexAuth::from_codex_home(codex_home.path(), config.preferred_auth_method) {
Ok(Some(auth)) => codex_login::AuthManager::from_auth_for_testing(auth),
Ok(None) => panic!("No CodexAuth found in codex_home"),
Err(e) => panic!("Failed to load CodexAuth: {}", e),
Err(e) => panic!("Failed to load CodexAuth: {e}"),
};
let conversation_manager = ConversationManager::new(auth_manager);
let NewConversation {

View File

@@ -191,7 +191,7 @@ async fn prompt_tools_are_consistent_across_requests() {
let expected_instructions: &str = include_str!("../../prompt.md");
// our internal implementation is responsible for keeping tools in sync
// with the OpenAI schema, so we just verify the tool presence here
let expected_tools_names: &[&str] = &["shell", "update_plan", "apply_patch"];
let expected_tools_names: &[&str] = &["shell", "update_plan", "apply_patch", "view_image"];
let body0 = requests[0].body_json::<serde_json::Value>().unwrap();
assert_eq!(
body0["instructions"],
@@ -280,7 +280,7 @@ async fn prefixes_context_and_instructions_once_and_consistently_across_requests
{}</environment_context>"#,
cwd.path().to_string_lossy(),
match shell.name() {
Some(name) => format!(" <shell>{}</shell>\n", name),
Some(name) => format!(" <shell>{name}</shell>\n"),
None => String::new(),
}
);

View File

@@ -25,6 +25,7 @@ use codex_core::protocol::TaskCompleteEvent;
use codex_core::protocol::TurnAbortReason;
use codex_core::protocol::TurnDiffEvent;
use codex_core::protocol::WebSearchBeginEvent;
use codex_core::protocol::WebSearchEndEvent;
use owo_colors::OwoColorize;
use owo_colors::Style;
use shlex::try_join;
@@ -362,8 +363,9 @@ impl EventProcessor for EventProcessorWithHumanOutput {
}
}
}
EventMsg::WebSearchBegin(WebSearchBeginEvent { call_id: _, query }) => {
ts_println!(self, "🌐 {query}");
EventMsg::WebSearchBegin(WebSearchBeginEvent { call_id: _ }) => {}
EventMsg::WebSearchEnd(WebSearchEndEvent { call_id: _, query }) => {
ts_println!(self, "🌐 Searched: {query}");
}
EventMsg::PatchApplyBegin(PatchApplyBeginEvent {
call_id,
@@ -402,13 +404,16 @@ impl EventProcessor for EventProcessorWithHumanOutput {
println!("{}", line.style(self.green));
}
}
FileChange::Delete => {
FileChange::Delete { content } => {
let header = format!(
"{} {}",
format_file_change(change),
path.to_string_lossy()
);
println!("{}", header.style(self.magenta));
for line in content.lines() {
println!("{}", line.style(self.red));
}
}
FileChange::Update {
unified_diff,
@@ -533,6 +538,9 @@ impl EventProcessor for EventProcessorWithHumanOutput {
EventMsg::McpListToolsResponse(_) => {
// Currently ignored in exec output.
}
EventMsg::ListCustomPromptsResponse(_) => {
// Currently ignored in exec output.
}
EventMsg::TurnAborted(abort_reason) => match abort_reason.reason {
TurnAbortReason::Interrupted => {
ts_println!(self, "task interrupted");
@@ -555,7 +563,7 @@ fn escape_command(command: &[String]) -> String {
fn format_file_change(change: &FileChange) -> &'static str {
match change {
FileChange::Add { .. } => "A",
FileChange::Delete => "D",
FileChange::Delete { .. } => "D",
FileChange::Update {
move_path: Some(_), ..
} => "R",

View File

@@ -13,13 +13,13 @@ use codex_core::ConversationManager;
use codex_core::NewConversation;
use codex_core::config::Config;
use codex_core::config::ConfigOverrides;
use codex_core::git_info::get_git_repo_root;
use codex_core::protocol::AskForApproval;
use codex_core::protocol::Event;
use codex_core::protocol::EventMsg;
use codex_core::protocol::InputItem;
use codex_core::protocol::Op;
use codex_core::protocol::TaskCompleteEvent;
use codex_core::util::is_inside_git_repo;
use codex_login::AuthManager;
use codex_ollama::DEFAULT_OSS_MODEL;
use codex_protocol::config_types::SandboxMode;
@@ -148,6 +148,7 @@ pub async fn run_main(cli: Cli, codex_linux_sandbox_exe: Option<PathBuf>) -> any
base_instructions: None,
include_plan_tool: None,
include_apply_patch_tool: None,
include_view_image_tool: None,
disable_response_storage: oss.then_some(true),
show_raw_agent_reasoning: oss.then_some(true),
tools_web_search_request: None,
@@ -182,7 +183,7 @@ pub async fn run_main(cli: Cli, codex_linux_sandbox_exe: Option<PathBuf>) -> any
// is using.
event_processor.print_config_summary(&config, &prompt);
if !skip_git_repo_check && !is_inside_git_repo(&config.cwd.to_path_buf()) {
if !skip_git_repo_check && get_git_repo_root(&config.cwd.to_path_buf()).is_none() {
eprintln!("Not inside a trusted directory and --skip-git-repo-check was not specified.");
std::process::exit(1);
}

View File

@@ -28,7 +28,7 @@ impl Respond for SeqResponder {
Some(body) => wiremock::ResponseTemplate::new(200)
.insert_header("content-type", "text/event-stream")
.set_body_raw(
load_sse_fixture_with_id_from_str(body, &format!("request_{}", call_num)),
load_sse_fixture_with_id_from_str(body, &format!("request_{call_num}")),
"text/event-stream",
),
None => panic!("no response for {call_num}"),
@@ -63,7 +63,7 @@ pub(crate) async fn run_e2e_exec_test(cwd: &Path, response_streams: Vec<String>)
.current_dir(cwd.clone())
.env("CODEX_HOME", cwd.clone())
.env("OPENAI_API_KEY", "dummy")
.env("OPENAI_BASE_URL", format!("{}/v1", uri))
.env("OPENAI_BASE_URL", format!("{uri}/v1"))
.arg("--skip-git-repo-check")
.arg("-s")
.arg("danger-full-access")

View File

@@ -2,7 +2,7 @@
<html lang="en">
<head>
<meta charset="utf-8" />
<title>Sign into Codex CLI</title>
<title>Sign into Codex</title>
<link rel="icon" href='data:image/svg+xml,%3Csvg xmlns="http://www.w3.org/2000/svg" width="32" height="32" fill="none" viewBox="0 0 32 32"%3E%3Cpath stroke="%23000" stroke-linecap="round" stroke-width="2.484" d="M22.356 19.797H17.17M9.662 12.29l1.979 3.576a.511.511 0 0 1-.005.504l-1.974 3.409M30.758 16c0 8.15-6.607 14.758-14.758 14.758-8.15 0-14.758-6.607-14.758-14.758C1.242 7.85 7.85 1.242 16 1.242c8.15 0 14.758 6.608 14.758 14.758Z"/%3E%3C/svg%3E' type="image/svg+xml">
<style>
.container {
@@ -135,7 +135,7 @@
<div class="logo">
<svg xmlns="http://www.w3.org/2000/svg" width="32" height="32" fill="none" viewBox="0 0 32 32"><path stroke="#000" stroke-linecap="round" stroke-width="2.484" d="M22.356 19.797H17.17M9.662 12.29l1.979 3.576a.511.511 0 0 1-.005.504l-1.974 3.409M30.758 16c0 8.15-6.607 14.758-14.758 14.758-8.15 0-14.758-6.607-14.758-14.758C1.242 7.85 7.85 1.242 16 1.242c8.15 0 14.758 6.608 14.758 14.758Z"></path></svg>
</div>
<div class="title">Signed in to Codex CLI</div>
<div class="title">Signed in to Codex</div>
</div>
<div class="close-box" style="display: none;">
<div class="setup-description">You may now close this page</div>

View File

@@ -129,10 +129,7 @@ impl McpClient {
error!("failed to write newline to child stdin");
break;
}
if stdin.flush().await.is_err() {
error!("failed to flush child stdin");
break;
}
// No explicit flush needed on a pipe; write_all is sufficient.
}
Err(e) => error!("failed to serialize JSONRPCMessage: {e}"),
}
@@ -365,7 +362,11 @@ impl McpClient {
}
};
if let Some(tx) = pending.lock().await.remove(&id) {
let tx_opt = {
let mut guard = pending.lock().await;
guard.remove(&id)
};
if let Some(tx) = tx_opt {
// Ignore send errors; the receiver might have been dropped.
let _ = tx.send(JSONRPCMessage::Response(resp));
} else {
@@ -383,7 +384,11 @@ impl McpClient {
RequestId::String(_) => return, // see comment above
};
if let Some(tx) = pending.lock().await.remove(&id) {
let tx_opt = {
let mut guard = pending.lock().await;
guard.remove(&id)
};
if let Some(tx) = tx_opt {
let _ = tx.send(JSONRPCMessage::Error(err));
}
}
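The two hunks above narrow the lock scope; a standalone sketch of the same pattern (types simplified) in which the `Mutex` guard is dropped before the oneshot send:

```rust
use std::collections::HashMap;
use std::sync::Arc;
use tokio::sync::Mutex;
use tokio::sync::oneshot;

/// Sketch only: take the pending sender out while holding the lock, then drop
/// the guard before delivering the message, so the map is not locked for the
/// rest of the branch.
async fn deliver(
    pending: &Arc<Mutex<HashMap<i64, oneshot::Sender<String>>>>,
    id: i64,
    payload: String,
) {
    let tx_opt = {
        let mut guard = pending.lock().await;
        guard.remove(&id)
    }; // guard dropped here
    if let Some(tx) = tx_opt {
        // Ignore send errors; the receiver may have been dropped.
        let _ = tx.send(payload);
    }
}
```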

View File

@@ -798,6 +798,7 @@ fn derive_config_from_params(
base_instructions,
include_plan_tool,
include_apply_patch_tool,
include_view_image_tool: None,
disable_response_storage: None,
show_raw_agent_reasoning: None,
tools_web_search_request: None,

View File

@@ -161,6 +161,7 @@ impl CodexToolCallParam {
base_instructions,
include_plan_tool,
include_apply_patch_tool: None,
include_view_image_tool: None,
disable_response_storage: None,
show_raw_agent_reasoning: None,
tools_web_search_request: None,

View File

@@ -264,6 +264,7 @@ async fn run_codex_tool_session_inner(
| EventMsg::McpToolCallBegin(_)
| EventMsg::McpToolCallEnd(_)
| EventMsg::McpListToolsResponse(_)
| EventMsg::ListCustomPromptsResponse(_)
| EventMsg::ExecCommandBegin(_)
| EventMsg::ExecCommandOutputDelta(_)
| EventMsg::ExecCommandEnd(_)
@@ -273,6 +274,7 @@ async fn run_codex_tool_session_inner(
| EventMsg::PatchApplyEnd(_)
| EventMsg::TurnDiff(_)
| EventMsg::WebSearchBegin(_)
| EventMsg::WebSearchEnd(_)
| EventMsg::GetHistoryEntryResponse(_)
| EventMsg::PlanUpdate(_)
| EventMsg::TurnAborted(_)

View File

@@ -59,11 +59,10 @@ pub async fn run_main(
// Set up channels.
let (incoming_tx, mut incoming_rx) = mpsc::channel::<JSONRPCMessage>(CHANNEL_CAPACITY);
let (outgoing_tx, mut outgoing_rx) = mpsc::channel::<OutgoingMessage>(CHANNEL_CAPACITY);
let (outgoing_tx, mut outgoing_rx) = mpsc::unbounded_channel::<OutgoingMessage>();
// Task: read from stdin, push to `incoming_tx`.
let stdin_reader_handle = tokio::spawn({
let incoming_tx = incoming_tx.clone();
async move {
let stdin = io::stdin();
let reader = BufReader::new(stdin);
@@ -135,10 +134,6 @@ pub async fn run_main(
error!("Failed to write newline to stdout: {e}");
break;
}
if let Err(e) = stdout.flush().await {
error!("Failed to flush stdout: {e}");
break;
}
}
Err(e) => error!("Failed to serialize JSONRPCMessage: {e}"),
}

View File

@@ -24,12 +24,12 @@ use crate::error_code::INTERNAL_ERROR_CODE;
/// Sends messages to the client and manages request callbacks.
pub(crate) struct OutgoingMessageSender {
next_request_id: AtomicI64,
sender: mpsc::Sender<OutgoingMessage>,
sender: mpsc::UnboundedSender<OutgoingMessage>,
request_id_to_callback: Mutex<HashMap<RequestId, oneshot::Sender<Result>>>,
}
impl OutgoingMessageSender {
pub(crate) fn new(sender: mpsc::Sender<OutgoingMessage>) -> Self {
pub(crate) fn new(sender: mpsc::UnboundedSender<OutgoingMessage>) -> Self {
Self {
next_request_id: AtomicI64::new(0),
sender,
@@ -55,7 +55,7 @@ impl OutgoingMessageSender {
method: method.to_string(),
params,
});
let _ = self.sender.send(outgoing_message).await;
let _ = self.sender.send(outgoing_message);
rx_approve
}
@@ -81,7 +81,7 @@ impl OutgoingMessageSender {
match serde_json::to_value(response) {
Ok(result) => {
let outgoing_message = OutgoingMessage::Response(OutgoingResponse { id, result });
let _ = self.sender.send(outgoing_message).await;
let _ = self.sender.send(outgoing_message);
}
Err(err) => {
self.send_error(
@@ -123,24 +123,24 @@ impl OutgoingMessageSender {
}
pub(crate) async fn send_server_notification(&self, notification: ServerNotification) {
let method = format!("codex/event/{}", notification);
let method = format!("codex/event/{notification}");
let params = match serde_json::to_value(&notification) {
Ok(serde_json::Value::Object(mut map)) => map.remove("data"),
_ => None,
};
let outgoing_message =
OutgoingMessage::Notification(OutgoingNotification { method, params });
let _ = self.sender.send(outgoing_message).await;
let _ = self.sender.send(outgoing_message);
}
pub(crate) async fn send_notification(&self, notification: OutgoingNotification) {
let outgoing_message = OutgoingMessage::Notification(notification);
let _ = self.sender.send(outgoing_message).await;
let _ = self.sender.send(outgoing_message);
}
pub(crate) async fn send_error(&self, id: RequestId, error: JSONRPCErrorError) {
let outgoing_message = OutgoingMessage::Error(OutgoingError { id, error });
let _ = self.sender.send(outgoing_message).await;
let _ = self.sender.send(outgoing_message);
}
}
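For reference, a minimal sketch of the behavioral difference this hunk leans on: a bounded sender's `send` is async and can wait for capacity, while an unbounded sender's `send` returns immediately and never applies backpressure.

```rust
use tokio::sync::mpsc;

// Sketch only: contrast bounded vs. unbounded tokio channels.
async fn channel_contrast() {
    let (btx, mut brx) = mpsc::channel::<String>(16);
    btx.send("hello".to_string()).await.unwrap(); // may await when the buffer is full
    assert_eq!(brx.recv().await.as_deref(), Some("hello"));

    let (utx, mut urx) = mpsc::unbounded_channel::<String>();
    utx.send("world".to_string()).unwrap(); // synchronous send
    assert_eq!(urx.recv().await.as_deref(), Some("world"));
}
```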
@@ -250,7 +250,7 @@ mod tests {
#[tokio::test]
async fn test_send_event_as_notification() {
let (outgoing_tx, mut outgoing_rx) = mpsc::channel::<OutgoingMessage>(2);
let (outgoing_tx, mut outgoing_rx) = mpsc::unbounded_channel::<OutgoingMessage>();
let outgoing_message_sender = OutgoingMessageSender::new(outgoing_tx);
let event = Event {
@@ -281,7 +281,7 @@ mod tests {
#[tokio::test]
async fn test_send_event_as_notification_with_meta() {
let (outgoing_tx, mut outgoing_rx) = mpsc::channel::<OutgoingMessage>(2);
let (outgoing_tx, mut outgoing_rx) = mpsc::unbounded_channel::<OutgoingMessage>();
let outgoing_message_sender = OutgoingMessageSender::new(outgoing_tx);
let session_configured_event = SessionConfiguredEvent {

View File

@@ -61,6 +61,7 @@ impl McpProcess {
cmd.stdin(Stdio::piped());
cmd.stdout(Stdio::piped());
cmd.stderr(Stdio::piped());
cmd.env("CODEX_HOME", codex_home);
cmd.env("RUST_LOG", "debug");
@@ -77,6 +78,17 @@ impl McpProcess {
.take()
.ok_or_else(|| anyhow::format_err!("mcp should have stdout fd"))?;
let stdout = BufReader::new(stdout);
// Forward child's stderr to our stderr so failures are visible even
// when stdout/stderr are captured by the test harness.
if let Some(stderr) = process.stderr.take() {
let mut stderr_reader = BufReader::new(stderr).lines();
tokio::spawn(async move {
while let Ok(Some(line)) = stderr_reader.next_line().await {
eprintln!("[mcp stderr] {line}");
}
});
}
Ok(Self {
next_request_id: AtomicI64::new(0),
process,
@@ -283,6 +295,7 @@ impl McpProcess {
}
async fn send_jsonrpc_message(&mut self, message: JSONRPCMessage) -> anyhow::Result<()> {
eprintln!("writing message to stdin: {message:?}");
let payload = serde_json::to_string(&message)?;
self.stdin.write_all(payload.as_bytes()).await?;
self.stdin.write_all(b"\n").await?;
@@ -294,13 +307,15 @@ impl McpProcess {
let mut line = String::new();
self.stdout.read_line(&mut line).await?;
let message = serde_json::from_str::<JSONRPCMessage>(&line)?;
eprintln!("read message from stdout: {message:?}");
Ok(message)
}
pub async fn read_stream_until_request_message(&mut self) -> anyhow::Result<JSONRPCRequest> {
eprintln!("in read_stream_until_request_message()");
loop {
let message = self.read_jsonrpc_message().await?;
eprint!("message: {message:?}");
match message {
JSONRPCMessage::Notification(_) => {
@@ -323,10 +338,10 @@ impl McpProcess {
&mut self,
request_id: RequestId,
) -> anyhow::Result<JSONRPCResponse> {
eprintln!("in read_stream_until_response_message({request_id:?})");
loop {
let message = self.read_jsonrpc_message().await?;
eprint!("message: {message:?}");
match message {
JSONRPCMessage::Notification(_) => {
eprintln!("notification: {message:?}");
@@ -352,8 +367,6 @@ impl McpProcess {
) -> anyhow::Result<mcp_types::JSONRPCError> {
loop {
let message = self.read_jsonrpc_message().await?;
eprint!("message: {message:?}");
match message {
JSONRPCMessage::Notification(_) => {
eprintln!("notification: {message:?}");
@@ -377,10 +390,10 @@ impl McpProcess {
&mut self,
method: &str,
) -> anyhow::Result<JSONRPCNotification> {
eprintln!("in read_stream_until_notification_message({method})");
loop {
let message = self.read_jsonrpc_message().await?;
eprint!("message: {message:?}");
match message {
JSONRPCMessage::Notification(notification) => {
if notification.method == method {
@@ -405,10 +418,10 @@ impl McpProcess {
pub async fn read_stream_until_legacy_task_complete_notification(
&mut self,
) -> anyhow::Result<JSONRPCNotification> {
eprintln!("in read_stream_until_legacy_task_complete_notification()");
loop {
let message = self.read_jsonrpc_message().await?;
eprint!("message: {message:?}");
match message {
JSONRPCMessage::Notification(notification) => {
let is_match = if notification.method == "codex/event" {
@@ -427,6 +440,8 @@ impl McpProcess {
if is_match {
return Ok(notification);
} else {
eprintln!("ignoring notification: {notification:?}");
}
}
JSONRPCMessage::Request(_) => {

View File

@@ -30,7 +30,8 @@ use mcp_test_support::create_final_assistant_message_sse_response;
use mcp_test_support::create_mock_chat_completions_server;
use mcp_test_support::create_shell_sse_response;
const DEFAULT_READ_TIMEOUT: std::time::Duration = std::time::Duration::from_secs(10);
// Allow ample time on slower CI or under load to avoid flakes.
const DEFAULT_READ_TIMEOUT: std::time::Duration = std::time::Duration::from_secs(20);
/// Test that a shell command that is not on the "trusted" list triggers an
/// elicitation request to the MCP and that sending the approval runs the
@@ -52,9 +53,22 @@ async fn test_shell_command_approval_triggers_elicitation() {
}
async fn shell_command_approval_triggers_elicitation() -> anyhow::Result<()> {
// We use `git init` because it will not be on the "trusted" list.
let shell_command = vec!["git".to_string(), "init".to_string()];
// Use a simple, untrusted command that creates a file so we can
// observe a side-effect.
//
// Cross-platform approach: run a tiny Python snippet to touch the file
// using `python3 -c ...` on all platforms.
let workdir_for_shell_function_call = TempDir::new()?;
let created_filename = "created_by_shell_tool.txt";
let created_file = workdir_for_shell_function_call
.path()
.join(created_filename);
let shell_command = vec![
"python3".to_string(),
"-c".to_string(),
format!("import pathlib; pathlib.Path('{created_filename}').touch()"),
];
let McpHandle {
process: mut mcp_process,
@@ -67,7 +81,7 @@ async fn shell_command_approval_triggers_elicitation() -> anyhow::Result<()> {
Some(5_000),
"call1234",
)?,
create_final_assistant_message_sse_response("Enjoy your new git repo!")?,
create_final_assistant_message_sse_response("File created!")?,
])
.await?;
@@ -122,8 +136,7 @@ async fn shell_command_approval_triggers_elicitation() -> anyhow::Result<()> {
.expect("task_complete_notification timeout")
.expect("task_complete_notification resp");
// Verify the original `codex` tool call completes and that `git init` ran
// successfully.
// Verify the original `codex` tool call completes and that the file was created.
let codex_response = timeout(
DEFAULT_READ_TIMEOUT,
mcp_process.read_stream_until_response_message(RequestId::Integer(codex_request_id)),
@@ -136,7 +149,7 @@ async fn shell_command_approval_triggers_elicitation() -> anyhow::Result<()> {
result: json!({
"content": [
{
"text": "Enjoy your new git repo!",
"text": "File created!",
"type": "text"
}
]
@@ -145,10 +158,7 @@ async fn shell_command_approval_triggers_elicitation() -> anyhow::Result<()> {
codex_response
);
assert!(
workdir_for_shell_function_call.path().join(".git").is_dir(),
".git folder should have been created"
);
assert!(created_file.is_file(), "created file should exist");
Ok(())
}

View File

@@ -0,0 +1,10 @@
use serde::Deserialize;
use serde::Serialize;
use std::path::PathBuf;
#[derive(Serialize, Deserialize, Debug, Clone)]
pub struct CustomPrompt {
pub name: String,
pub path: PathBuf,
pub content: String,
}

View File

@@ -1,4 +1,5 @@
pub mod config_types;
pub mod custom_prompts;
pub mod mcp_protocol;
pub mod message_history;
pub mod models;

View File

@@ -95,6 +95,22 @@ pub enum ResponseItem {
call_id: String,
output: String,
},
// Emitted by the Responses API when the agent triggers a web search.
// Example payload (from SSE `response.output_item.done`):
// {
// "id":"ws_...",
// "type":"web_search_call",
// "status":"completed",
// "action": {"type":"search","query":"weather: San Francisco, CA"}
// }
WebSearchCall {
#[serde(default, skip_serializing_if = "Option::is_none")]
id: Option<String>,
#[serde(default, skip_serializing_if = "Option::is_none")]
status: Option<String>,
action: WebSearchAction,
},
#[serde(other)]
Other,
}
@@ -162,6 +178,16 @@ pub struct LocalShellExecAction {
pub user: Option<String>,
}
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
#[serde(tag = "type", rename_all = "snake_case")]
pub enum WebSearchAction {
Search {
query: String,
},
#[serde(other)]
Other,
}
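A quick sketch (assuming `serde_json` is in scope) showing that the tagged `action` object from the example payload above maps onto the `Search` variant:

```rust
// Sketch only: the internally tagged representation means the `action` object
// deserializes straight into `WebSearchAction::Search { query }`.
fn parse_example_action() -> serde_json::Result<WebSearchAction> {
    let json = r#"{"type":"search","query":"weather: San Francisco, CA"}"#;
    serde_json::from_str::<WebSearchAction>(json)
}
```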
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
#[serde(tag = "type", rename_all = "snake_case")]
pub enum ReasoningItemReasoningSummary {

View File

@@ -17,22 +17,6 @@ pub enum ParsedCommand {
query: Option<String>,
path: Option<String>,
},
Format {
cmd: String,
tool: Option<String>,
targets: Option<Vec<String>>,
},
Test {
cmd: String,
},
Lint {
cmd: String,
tool: Option<String>,
targets: Option<Vec<String>>,
},
Noop {
cmd: String,
},
Unknown {
cmd: String,
},

View File

@@ -10,6 +10,7 @@ use std::path::PathBuf;
use std::str::FromStr;
use std::time::Duration;
use crate::custom_prompts::CustomPrompt;
use mcp_types::CallToolResult;
use mcp_types::Tool as McpTool;
use serde::Deserialize;
@@ -146,6 +147,9 @@ pub enum Op {
/// Reply is delivered via `EventMsg::McpListToolsResponse`.
ListMcpTools,
/// Request the list of available custom prompts.
ListCustomPrompts,
/// Request the agent to summarize the current conversation context.
/// The agent will use its existing context (either conversation history or previous response id)
/// to generate a summary which will be returned as an AgentMessage event.
@@ -439,6 +443,8 @@ pub enum EventMsg {
WebSearchBegin(WebSearchBeginEvent),
WebSearchEnd(WebSearchEndEvent),
/// Notification that the server is about to execute a command.
ExecCommandBegin(ExecCommandBeginEvent),
@@ -472,6 +478,9 @@ pub enum EventMsg {
/// List of MCP tools available to the agent.
McpListToolsResponse(McpListToolsResponseEvent),
/// List of custom prompts available to the agent.
ListCustomPromptsResponse(ListCustomPromptsResponseEvent),
PlanUpdate(UpdatePlanArgs),
TurnAborted(TurnAbortedEvent),
@@ -668,6 +677,11 @@ impl McpToolCallEndEvent {
#[derive(Debug, Clone, Deserialize, Serialize)]
pub struct WebSearchBeginEvent {
pub call_id: String,
}
#[derive(Debug, Clone, Deserialize, Serialize)]
pub struct WebSearchEndEvent {
pub call_id: String,
pub query: String,
}
@@ -806,6 +820,12 @@ pub struct McpListToolsResponseEvent {
pub tools: std::collections::HashMap<String, McpTool>,
}
/// Response payload for `Op::ListCustomPrompts`.
#[derive(Debug, Clone, Deserialize, Serialize)]
pub struct ListCustomPromptsResponseEvent {
pub custom_prompts: Vec<CustomPrompt>,
}
#[derive(Debug, Default, Clone, Deserialize, Serialize)]
pub struct SessionConfiguredEvent {
/// Unique id for this session.
@@ -849,7 +869,9 @@ pub enum FileChange {
Add {
content: String,
},
Delete,
Delete {
content: String,
},
Update {
unified_diff: String,
move_path: Option<PathBuf>,

View File

@@ -49,6 +49,7 @@ image = { version = "^0.25.6", default-features = false, features = [
"jpeg",
"png",
] }
itertools = "0.14.0"
lazy_static = "1"
mcp-types = { path = "../mcp-types" }
once_cell = "1"
@@ -87,6 +88,7 @@ unicode-segmentation = "1.12.0"
unicode-width = "0.1"
url = "2"
uuid = "1"
pathdiff = "0.2"
[target.'cfg(unix)'.dependencies]
libc = "0.2"

View File

@@ -43,6 +43,7 @@ pub(crate) struct App {
// Pager overlay state (Transcript or Static like Diff)
pub(crate) overlay: Option<Overlay>,
pub(crate) deferred_history_lines: Vec<Line<'static>>,
has_emitted_history_lines: bool,
pub(crate) enhanced_keys_supported: bool,
@@ -91,6 +92,7 @@ impl App {
transcript_lines: Vec::new(),
overlay: None,
deferred_history_lines: Vec::new(),
has_emitted_history_lines: false,
commit_anim_running: Arc::new(AtomicBool::new(false)),
backtrack: BacktrackState::default(),
};
@@ -133,6 +135,12 @@ impl App {
self.chat_widget.handle_paste(pasted);
}
TuiEvent::Draw => {
if self
.chat_widget
.handle_paste_burst_tick(tui.frame_requester())
{
return Ok(true);
}
tui.draw(
self.chat_widget.desired_height(tui.terminal.size()?.width),
|frame| {
@@ -171,27 +179,28 @@ impl App {
);
tui.frame_requester().schedule_frame();
}
AppEvent::InsertHistoryLines(lines) => {
if let Some(Overlay::Transcript(t)) = &mut self.overlay {
t.insert_lines(lines.clone());
tui.frame_requester().schedule_frame();
}
self.transcript_lines.extend(lines.clone());
if self.overlay.is_some() {
self.deferred_history_lines.extend(lines);
} else {
tui.insert_history_lines(lines);
}
}
AppEvent::InsertHistoryCell(cell) => {
let cell_transcript = cell.transcript_lines();
let mut cell_transcript = cell.transcript_lines();
if !cell.is_stream_continuation() && !self.transcript_lines.is_empty() {
cell_transcript.insert(0, Line::from(""));
}
if let Some(Overlay::Transcript(t)) = &mut self.overlay {
t.insert_lines(cell_transcript.clone());
tui.frame_requester().schedule_frame();
}
self.transcript_lines.extend(cell_transcript.clone());
let display = cell.display_lines();
let mut display = cell.display_lines(tui.terminal.last_known_screen_size.width);
if !display.is_empty() {
// Only insert a separating blank line for new cells that are not
// part of an ongoing stream. Streaming continuations should not
// accrue extra blank lines between chunks.
if !cell.is_stream_continuation() {
if self.has_emitted_history_lines {
display.insert(0, Line::from(""));
} else {
self.has_emitted_history_lines = true;
}
}
if self.overlay.is_some() {
self.deferred_history_lines.extend(display);
} else {

View File

@@ -1,7 +1,6 @@
use codex_core::protocol::ConversationHistoryResponseEvent;
use codex_core::protocol::Event;
use codex_file_search::FileMatch;
use ratatui::text::Line;
use crate::history_cell::HistoryCell;
@@ -40,7 +39,6 @@ pub(crate) enum AppEvent {
/// Result of computing a `/diff` command.
DiffResult(String),
InsertHistoryLines(Vec<Line<'static>>),
InsertHistoryCell(Box<dyn HistoryCell>),
StartCommitAnimation,

View File

@@ -100,6 +100,7 @@ mod tests {
has_input_focus: true,
enhanced_keys_supported: false,
placeholder_text: "Ask Codex to do anything".to_string(),
disable_paste_burst: false,
});
assert_eq!(CancellationEvent::Handled, view.on_ctrl_c(&mut pane));
assert!(view.queue.is_empty());

View File

@@ -22,9 +22,14 @@ use ratatui::widgets::StatefulWidgetRef;
use ratatui::widgets::WidgetRef;
use super::chat_composer_history::ChatComposerHistory;
use super::command_popup::CommandItem;
use super::command_popup::CommandPopup;
use super::file_search_popup::FileSearchPopup;
use super::paste_burst::CharDecision;
use super::paste_burst::PasteBurst;
use crate::bottom_pane::paste_burst::FlushResult;
use crate::slash_command::SlashCommand;
use codex_protocol::custom_prompts::CustomPrompt;
use crate::app_event::AppEvent;
use crate::app_event_sender::AppEventSender;
@@ -40,16 +45,12 @@ use std::path::PathBuf;
use std::time::Duration;
use std::time::Instant;
// Heuristic thresholds for detecting paste-like input bursts.
const PASTE_BURST_MIN_CHARS: u16 = 3;
const PASTE_BURST_CHAR_INTERVAL: Duration = Duration::from_millis(8);
const PASTE_ENTER_SUPPRESS_WINDOW: Duration = Duration::from_millis(120);
/// If the pasted content exceeds this number of characters, replace it with a
/// placeholder in the UI.
const LARGE_PASTE_CHAR_THRESHOLD: usize = 1000;
/// Result returned when the user interacts with the text area.
#[derive(Debug, PartialEq)]
pub enum InputResult {
Submitted(String),
Command(SlashCommand),
@@ -93,13 +94,11 @@ pub(crate) struct ChatComposer {
has_focus: bool,
attached_images: Vec<AttachedImage>,
placeholder_text: String,
// Heuristic state to detect non-bracketed paste bursts.
last_plain_char_time: Option<Instant>,
consecutive_plain_char_burst: u16,
paste_burst_until: Option<Instant>,
// Buffer to accumulate characters during a detected non-bracketed paste burst.
paste_burst_buffer: String,
in_paste_burst_mode: bool,
// Non-bracketed paste burst tracker.
paste_burst: PasteBurst,
// When true, disables paste-burst logic and inserts characters immediately.
disable_paste_burst: bool,
custom_prompts: Vec<CustomPrompt>,
}
/// Popup state at most one can be visible at any time.
@@ -115,10 +114,11 @@ impl ChatComposer {
app_event_tx: AppEventSender,
enhanced_keys_supported: bool,
placeholder_text: String,
disable_paste_burst: bool,
) -> Self {
let use_shift_enter_hint = enhanced_keys_supported;
Self {
let mut this = Self {
textarea: TextArea::new(),
textarea_state: RefCell::new(TextAreaState::default()),
active_popup: ActivePopup::None,
@@ -134,12 +134,13 @@ impl ChatComposer {
has_focus: has_input_focus,
attached_images: Vec::new(),
placeholder_text,
last_plain_char_time: None,
consecutive_plain_char_burst: 0,
paste_burst_until: None,
paste_burst_buffer: String::new(),
in_paste_burst_mode: false,
}
paste_burst: PasteBurst::default(),
disable_paste_burst: false,
custom_prompts: Vec::new(),
};
// Apply configuration via the setter to keep side-effects centralized.
this.set_disable_paste_burst(disable_paste_burst);
this
}
pub fn desired_height(&self, width: u16) -> u16 {
@@ -223,17 +224,21 @@ impl ChatComposer {
let placeholder = format!("[Pasted Content {char_count} chars]");
self.textarea.insert_element(&placeholder);
self.pending_pastes.push((placeholder, pasted));
} else if self.handle_paste_image_path(pasted.clone()) {
} else if char_count > 1 && self.handle_paste_image_path(pasted.clone()) {
self.textarea.insert_str(" ");
} else {
self.textarea.insert_str(&pasted);
}
// Explicit paste events should not trigger Enter suppression.
self.last_plain_char_time = None;
self.consecutive_plain_char_burst = 0;
self.paste_burst_until = None;
self.paste_burst.clear_after_explicit_paste();
// Keep popup sync consistent with key handling: prefer slash popup; only
// sync file popup when slash popup is NOT active.
self.sync_command_popup();
self.sync_file_search_popup();
if matches!(self.active_popup, ActivePopup::Command(_)) {
self.dismissed_file_popup_token = None;
} else {
self.sync_file_search_popup();
}
true
}
@@ -256,6 +261,14 @@ impl ChatComposer {
}
}
pub(crate) fn set_disable_paste_burst(&mut self, disabled: bool) {
let was_disabled = self.disable_paste_burst;
self.disable_paste_burst = disabled;
if disabled && !was_disabled {
self.paste_burst.clear_window_after_non_char();
}
}
/// Replace the entire composer content with `text` and reset cursor.
pub(crate) fn set_text_content(&mut self, text: String) {
self.textarea.set_text(&text);
@@ -270,6 +283,7 @@ impl ChatComposer {
self.textarea.text().to_string()
}
/// Attempt to start a burst by retro-capturing recent chars before the cursor.
pub fn attach_image(&mut self, path: PathBuf, width: u32, height: u32, format_label: &str) {
let placeholder = format!("[image {width}x{height} {format_label}]");
// Insert as an element to match large paste placeholder behavior:
@@ -284,6 +298,18 @@ impl ChatComposer {
images.into_iter().map(|img| img.path).collect()
}
pub(crate) fn flush_paste_burst_if_due(&mut self) -> bool {
self.handle_paste_burst_flush(Instant::now())
}
pub(crate) fn is_in_paste_burst(&self) -> bool {
self.paste_burst.is_active()
}
pub(crate) fn recommended_paste_flush_delay() -> Duration {
PasteBurst::recommended_flush_delay()
}
/// Integrate results from an asynchronous file search.
pub(crate) fn on_file_search_result(&mut self, query: String, matches: Vec<FileMatch>) {
// Only apply if user is still editing a token starting with `query`.
@@ -366,16 +392,29 @@ impl ChatComposer {
KeyEvent {
code: KeyCode::Tab, ..
} => {
if let Some(cmd) = popup.selected_command() {
let first_line = self.textarea.text().lines().next().unwrap_or("");
let starts_with_cmd = first_line
.trim_start()
.starts_with(&format!("/{}", cmd.command()));
if !starts_with_cmd {
self.textarea.set_text(&format!("/{} ", cmd.command()));
self.textarea.set_cursor(self.textarea.text().len());
// Ensure popup filtering/selection reflects the latest composer text
// before applying completion.
let first_line = self.textarea.text().lines().next().unwrap_or("");
popup.on_composer_text_change(first_line.to_string());
if let Some(sel) = popup.selected_item() {
match sel {
CommandItem::Builtin(cmd) => {
let starts_with_cmd = first_line
.trim_start()
.starts_with(&format!("/{}", cmd.command()));
if !starts_with_cmd {
self.textarea.set_text(&format!("/{} ", cmd.command()));
}
}
CommandItem::UserPrompt(idx) => {
if let Some(name) = popup.prompt_name(idx) {
let starts_with_cmd =
first_line.trim_start().starts_with(&format!("/{name}"));
if !starts_with_cmd {
self.textarea.set_text(&format!("/{name} "));
}
}
}
}
// After completing the command, move cursor to the end.
if !self.textarea.text().is_empty() {
@@ -390,16 +429,30 @@ impl ChatComposer {
modifiers: KeyModifiers::NONE,
..
} => {
if let Some(cmd) = popup.selected_command() {
if let Some(sel) = popup.selected_item() {
// Clear textarea so no residual text remains.
self.textarea.set_text("");
let result = (InputResult::Command(*cmd), true);
// Hide popup since the command has been dispatched.
// Capture any needed data from popup before clearing it.
let prompt_content = match sel {
CommandItem::UserPrompt(idx) => {
popup.prompt_content(idx).map(|s| s.to_string())
}
_ => None,
};
// Hide popup since an action has been dispatched.
self.active_popup = ActivePopup::None;
return result;
match sel {
CommandItem::Builtin(cmd) => {
return (InputResult::Command(cmd), true);
}
CommandItem::UserPrompt(_) => {
if let Some(contents) = prompt_content {
return (InputResult::Submitted(contents), true);
}
return (InputResult::None, true);
}
}
}
// Fallback to default newline handling if no command selected.
self.handle_key_event_without_popup(key_event)
@@ -423,9 +476,7 @@ impl ChatComposer {
#[inline]
fn handle_non_ascii_char(&mut self, input: KeyEvent) -> (InputResult, bool) {
if !self.paste_burst_buffer.is_empty() || self.in_paste_burst_mode {
let pasted = std::mem::take(&mut self.paste_burst_buffer);
self.in_paste_burst_mode = false;
if let Some(pasted) = self.paste_burst.flush_before_modified_input() {
self.handle_paste(pasted);
}
self.textarea.input(input);
@@ -740,14 +791,11 @@ impl ChatComposer {
.next()
.unwrap_or("")
.starts_with('/');
if (self.in_paste_burst_mode || !self.paste_burst_buffer.is_empty())
&& !in_slash_context
{
self.paste_burst_buffer.push('\n');
if self.paste_burst.is_active() && !in_slash_context {
let now = Instant::now();
// Keep the window alive so subsequent lines are captured too.
self.paste_burst_until = Some(now + PASTE_ENTER_SUPPRESS_WINDOW);
return (InputResult::None, true);
if self.paste_burst.append_newline_if_active(now) {
return (InputResult::None, true);
}
}
// If we have pending placeholder pastes, submit immediately to expand them.
if !self.pending_pastes.is_empty() {
@@ -768,19 +816,12 @@ impl ChatComposer {
// During a paste-like burst, treat Enter as a newline instead of submit.
let now = Instant::now();
let tight_after_char = self
.last_plain_char_time
.is_some_and(|t| now.duration_since(t) <= PASTE_BURST_CHAR_INTERVAL);
let recent_after_char = self
.last_plain_char_time
.is_some_and(|t| now.duration_since(t) <= PASTE_ENTER_SUPPRESS_WINDOW);
let burst_by_count =
recent_after_char && self.consecutive_plain_char_burst >= PASTE_BURST_MIN_CHARS;
let in_burst_window = self.paste_burst_until.is_some_and(|until| now <= until);
if tight_after_char || burst_by_count || in_burst_window {
if self
.paste_burst
.newline_should_insert_instead_of_submit(now)
{
self.textarea.insert_str("\n");
self.paste_burst_until = Some(now + PASTE_ENTER_SUPPRESS_WINDOW);
self.paste_burst.extend_window(now);
return (InputResult::None, true);
}
let mut text = self.textarea.text().to_string();
@@ -794,7 +835,12 @@ impl ChatComposer {
}
self.pending_pastes.clear();
// If there is neither text nor attachments, suppress submission entirely.
let has_attachments = !self.attached_images.is_empty();
text = text.trim().to_string();
if text.is_empty() && !has_attachments {
return (InputResult::None, true);
}
if !text.is_empty() {
self.history.record_local_submission(&text);
}
@@ -805,27 +851,42 @@ impl ChatComposer {
}
}
fn handle_paste_burst_flush(&mut self, now: Instant) -> bool {
match self.paste_burst.flush_if_due(now) {
FlushResult::Paste(pasted) => {
self.handle_paste(pasted);
true
}
FlushResult::Typed(ch) => {
// Mirror insert_str() behavior so popups stay in sync when a
// pending fast char flushes as normal typed input.
self.textarea.insert_str(ch.to_string().as_str());
// Keep popup sync consistent with key handling: prefer slash popup; only
// sync file popup when slash popup is NOT active.
self.sync_command_popup();
if matches!(self.active_popup, ActivePopup::Command(_)) {
self.dismissed_file_popup_token = None;
} else {
self.sync_file_search_popup();
}
true
}
FlushResult::None => false,
}
}
/// Handle generic Input events that modify the textarea content.
fn handle_input_basic(&mut self, input: KeyEvent) -> (InputResult, bool) {
// If we have a buffered non-bracketed paste burst and enough time has
// elapsed since the last char, flush it before handling a new input.
let now = Instant::now();
let timed_out = self
.last_plain_char_time
.is_some_and(|t| now.duration_since(t) > PASTE_BURST_CHAR_INTERVAL);
if timed_out && (!self.paste_burst_buffer.is_empty() || self.in_paste_burst_mode) {
let pasted = std::mem::take(&mut self.paste_burst_buffer);
self.in_paste_burst_mode = false;
// Reuse normal paste path (handles large-paste placeholders).
self.handle_paste(pasted);
}
self.handle_paste_burst_flush(now);
// If we're capturing a burst and receive Enter, accumulate it instead of inserting.
if matches!(input.code, KeyCode::Enter)
&& (self.in_paste_burst_mode || !self.paste_burst_buffer.is_empty())
&& self.paste_burst.is_active()
&& self.paste_burst.append_newline_if_active(now)
{
self.paste_burst_buffer.push('\n');
self.paste_burst_until = Some(now + PASTE_ENTER_SUPPRESS_WINDOW);
return (InputResult::None, true);
}
@@ -840,65 +901,50 @@ impl ChatComposer {
modifiers.contains(KeyModifiers::CONTROL) || modifiers.contains(KeyModifiers::ALT);
if !has_ctrl_or_alt {
// Non-ASCII characters (e.g., from IMEs) can arrive in quick bursts and be
// misclassified by our non-bracketed paste heuristic. To avoid leaving
// residual buffered content or misdetecting a paste, flush any burst buffer
// and insert non-ASCII characters directly.
// misclassified by paste heuristics. Flush any active burst buffer and insert
// non-ASCII characters directly.
if !ch.is_ascii() {
return self.handle_non_ascii_char(input);
}
// Update burst heuristics.
match self.last_plain_char_time {
Some(prev) if now.duration_since(prev) <= PASTE_BURST_CHAR_INTERVAL => {
self.consecutive_plain_char_burst =
self.consecutive_plain_char_burst.saturating_add(1);
}
_ => {
self.consecutive_plain_char_burst = 1;
}
}
self.last_plain_char_time = Some(now);
// If we're already buffering, capture the char into the buffer.
if self.in_paste_burst_mode {
self.paste_burst_buffer.push(ch);
// Keep the window alive while we receive the burst.
self.paste_burst_until = Some(now + PASTE_ENTER_SUPPRESS_WINDOW);
return (InputResult::None, true);
} else if self.consecutive_plain_char_burst >= PASTE_BURST_MIN_CHARS {
// Do not start burst buffering while typing a slash command (first line starts with '/').
let first_line = self.textarea.text().lines().next().unwrap_or("");
if first_line.starts_with('/') {
// Keep heuristics but do not buffer.
self.paste_burst_until = Some(now + PASTE_ENTER_SUPPRESS_WINDOW);
// Insert normally.
self.textarea.input(input);
let text_after = self.textarea.text();
self.pending_pastes
.retain(|(placeholder, _)| text_after.contains(placeholder));
match self.paste_burst.on_plain_char(ch, now) {
CharDecision::BufferAppend => {
self.paste_burst.append_char_to_buffer(ch, now);
return (InputResult::None, true);
}
CharDecision::BeginBuffer { retro_chars } => {
let cur = self.textarea.cursor();
let txt = self.textarea.text();
let safe_cur = Self::clamp_to_char_boundary(txt, cur);
let before = &txt[..safe_cur];
if let Some(grab) =
self.paste_burst
.decide_begin_buffer(now, before, retro_chars as usize)
{
if !grab.grabbed.is_empty() {
self.textarea.replace_range(grab.start_byte..safe_cur, "");
}
self.paste_burst.begin_with_retro_grabbed(grab.grabbed, now);
self.paste_burst.append_char_to_buffer(ch, now);
return (InputResult::None, true);
}
// If decide_begin_buffer opted not to start buffering,
// fall through to normal insertion below.
}
CharDecision::BeginBufferFromPending => {
// First char was held; now append the current one.
self.paste_burst.append_char_to_buffer(ch, now);
return (InputResult::None, true);
}
CharDecision::RetainFirstChar => {
// Keep the first fast char pending momentarily.
return (InputResult::None, true);
}
// Begin buffering from this character onward.
self.paste_burst_buffer.push(ch);
self.in_paste_burst_mode = true;
// Keep the window alive to continue capturing.
self.paste_burst_until = Some(now + PASTE_ENTER_SUPPRESS_WINDOW);
return (InputResult::None, true);
}
// Not buffering: insert normally and continue.
self.textarea.input(input);
let text_after = self.textarea.text();
self.pending_pastes
.retain(|(placeholder, _)| text_after.contains(placeholder));
return (InputResult::None, true);
} else {
// Modified char ends any burst: flush buffered content before applying.
if !self.paste_burst_buffer.is_empty() || self.in_paste_burst_mode {
let pasted = std::mem::take(&mut self.paste_burst_buffer);
self.in_paste_burst_mode = false;
self.handle_paste(pasted);
}
}
if let Some(pasted) = self.paste_burst.flush_before_modified_input() {
self.handle_paste(pasted);
}
}
// For non-char inputs (or after flushing), handle normally.
@@ -925,25 +971,15 @@ impl ChatComposer {
let has_ctrl_or_alt = modifiers.contains(KeyModifiers::CONTROL)
|| modifiers.contains(KeyModifiers::ALT);
if has_ctrl_or_alt {
// Modified char: clear burst window.
self.consecutive_plain_char_burst = 0;
self.last_plain_char_time = None;
self.paste_burst_until = None;
self.in_paste_burst_mode = false;
self.paste_burst_buffer.clear();
self.paste_burst.clear_window_after_non_char();
}
// Plain chars handled above.
}
KeyCode::Enter => {
// Keep burst window alive (supports blank lines in paste).
}
_ => {
// Other keys: clear burst window and any buffer (after flushing earlier).
self.consecutive_plain_char_burst = 0;
self.last_plain_char_time = None;
self.paste_burst_until = None;
self.in_paste_burst_mode = false;
// Do not clear paste_burst_buffer here; it should have been flushed above.
// Other keys: clear burst window (buffer should have been flushed above if needed).
self.paste_burst.clear_window_after_non_char();
}
}
@@ -1135,7 +1171,7 @@ impl ChatComposer {
}
_ => {
if input_starts_with_slash {
let mut command_popup = CommandPopup::new();
let mut command_popup = CommandPopup::new(self.custom_prompts.clone());
command_popup.on_composer_text_change(first_line.to_string());
self.active_popup = ActivePopup::Command(command_popup);
}
@@ -1143,6 +1179,13 @@ impl ChatComposer {
}
}
pub(crate) fn set_custom_prompts(&mut self, prompts: Vec<CustomPrompt>) {
self.custom_prompts = prompts.clone();
if let ActivePopup::Command(popup) = &mut self.active_popup {
popup.set_prompts(prompts);
}
}
/// Synchronize `self.file_search_popup` with the current text in the textarea.
/// Note this is only called when self.active_popup is NOT Command.
fn sync_file_search_popup(&mut self) {
@@ -1480,8 +1523,13 @@ mod tests {
let (tx, _rx) = unbounded_channel::<AppEvent>();
let sender = AppEventSender::new(tx);
let mut composer =
ChatComposer::new(true, sender, false, "Ask Codex to do anything".to_string());
let mut composer = ChatComposer::new(
true,
sender,
false,
"Ask Codex to do anything".to_string(),
false,
);
let needs_redraw = composer.handle_paste("hello".to_string());
assert!(needs_redraw);
@@ -1496,6 +1544,33 @@ mod tests {
}
}
#[test]
fn empty_enter_returns_none() {
use crossterm::event::KeyCode;
use crossterm::event::KeyEvent;
use crossterm::event::KeyModifiers;
let (tx, _rx) = unbounded_channel::<AppEvent>();
let sender = AppEventSender::new(tx);
let mut composer = ChatComposer::new(
true,
sender,
false,
"Ask Codex to do anything".to_string(),
false,
);
// Ensure composer is empty and press Enter.
assert!(composer.textarea.text().is_empty());
let (result, _needs_redraw) =
composer.handle_key_event(KeyEvent::new(KeyCode::Enter, KeyModifiers::NONE));
match result {
InputResult::None => {}
other => panic!("expected None for empty enter, got: {other:?}"),
}
}
#[test]
fn handle_paste_large_uses_placeholder_and_replaces_on_submit() {
use crossterm::event::KeyCode;
@@ -1504,8 +1579,13 @@ mod tests {
let (tx, _rx) = unbounded_channel::<AppEvent>();
let sender = AppEventSender::new(tx);
let mut composer =
ChatComposer::new(true, sender, false, "Ask Codex to do anything".to_string());
let mut composer = ChatComposer::new(
true,
sender,
false,
"Ask Codex to do anything".to_string(),
false,
);
let large = "x".repeat(LARGE_PASTE_CHAR_THRESHOLD + 10);
let needs_redraw = composer.handle_paste(large.clone());
@@ -1534,8 +1614,13 @@ mod tests {
let large = "y".repeat(LARGE_PASTE_CHAR_THRESHOLD + 1);
let (tx, _rx) = unbounded_channel::<AppEvent>();
let sender = AppEventSender::new(tx);
let mut composer =
ChatComposer::new(true, sender, false, "Ask Codex to do anything".to_string());
let mut composer = ChatComposer::new(
true,
sender,
false,
"Ask Codex to do anything".to_string(),
false,
);
composer.handle_paste(large);
assert_eq!(composer.pending_pastes.len(), 1);
@@ -1576,6 +1661,7 @@ mod tests {
sender.clone(),
false,
"Ask Codex to do anything".to_string(),
false,
);
if let Some(text) = input {
@@ -1605,6 +1691,78 @@ mod tests {
}
}
#[test]
fn slash_popup_model_first_for_mo_ui() {
use insta::assert_snapshot;
use ratatui::Terminal;
use ratatui::backend::TestBackend;
let (tx, _rx) = unbounded_channel::<AppEvent>();
let sender = AppEventSender::new(tx);
let mut composer = ChatComposer::new(
true,
sender,
false,
"Ask Codex to do anything".to_string(),
false,
);
// Type "/mo" humanlike so paste-burst doesn't interfere.

type_chars_humanlike(&mut composer, &['/', 'm', 'o']);
let mut terminal = match Terminal::new(TestBackend::new(60, 4)) {
Ok(t) => t,
Err(e) => panic!("Failed to create terminal: {e}"),
};
terminal
.draw(|f| f.render_widget_ref(composer, f.area()))
.unwrap_or_else(|e| panic!("Failed to draw composer: {e}"));
// Visual snapshot should show the slash popup with /model as the first entry.
assert_snapshot!("slash_popup_mo", terminal.backend());
}
#[test]
fn slash_popup_model_first_for_mo_logic() {
use super::super::command_popup::CommandItem;
let (tx, _rx) = unbounded_channel::<AppEvent>();
let sender = AppEventSender::new(tx);
let mut composer = ChatComposer::new(
true,
sender,
false,
"Ask Codex to do anything".to_string(),
false,
);
type_chars_humanlike(&mut composer, &['/', 'm', 'o']);
match &composer.active_popup {
ActivePopup::Command(popup) => match popup.selected_item() {
Some(CommandItem::Builtin(cmd)) => {
assert_eq!(cmd.command(), "model")
}
Some(CommandItem::UserPrompt(_)) => {
panic!("unexpected prompt selected for '/mo'")
}
None => panic!("no selected command for '/mo'"),
},
_ => panic!("slash popup not active after typing '/mo'"),
}
}
// Test helper: simulate human typing with a brief delay and flush the paste-burst buffer
fn type_chars_humanlike(composer: &mut ChatComposer, chars: &[char]) {
use crossterm::event::KeyCode;
use crossterm::event::KeyEvent;
use crossterm::event::KeyModifiers;
for &ch in chars {
let _ = composer.handle_key_event(KeyEvent::new(KeyCode::Char(ch), KeyModifiers::NONE));
std::thread::sleep(ChatComposer::recommended_paste_flush_delay());
let _ = composer.flush_paste_burst_if_due();
}
}
#[test]
fn slash_init_dispatches_command_and_does_not_submit_literal_text() {
use crossterm::event::KeyCode;
@@ -1613,15 +1771,16 @@ mod tests {
let (tx, _rx) = unbounded_channel::<AppEvent>();
let sender = AppEventSender::new(tx);
let mut composer =
ChatComposer::new(true, sender, false, "Ask Codex to do anything".to_string());
let mut composer = ChatComposer::new(
true,
sender,
false,
"Ask Codex to do anything".to_string(),
false,
);
// Type the slash command.
for ch in [
'/', 'i', 'n', 'i', 't', // "/init"
] {
let _ = composer.handle_key_event(KeyEvent::new(KeyCode::Char(ch), KeyModifiers::NONE));
}
type_chars_humanlike(&mut composer, &['/', 'i', 'n', 'i', 't']);
// Press Enter to dispatch the selected command.
let (result, _needs_redraw) =
@@ -1649,12 +1808,15 @@ mod tests {
let (tx, _rx) = unbounded_channel::<AppEvent>();
let sender = AppEventSender::new(tx);
let mut composer =
ChatComposer::new(true, sender, false, "Ask Codex to do anything".to_string());
let mut composer = ChatComposer::new(
true,
sender,
false,
"Ask Codex to do anything".to_string(),
false,
);
for ch in ['/', 'c'] {
let _ = composer.handle_key_event(KeyEvent::new(KeyCode::Char(ch), KeyModifiers::NONE));
}
type_chars_humanlike(&mut composer, &['/', 'c']);
let (_result, _needs_redraw) =
composer.handle_key_event(KeyEvent::new(KeyCode::Tab, KeyModifiers::NONE));
@@ -1671,12 +1833,15 @@ mod tests {
let (tx, _rx) = unbounded_channel::<AppEvent>();
let sender = AppEventSender::new(tx);
let mut composer =
ChatComposer::new(true, sender, false, "Ask Codex to do anything".to_string());
let mut composer = ChatComposer::new(
true,
sender,
false,
"Ask Codex to do anything".to_string(),
false,
);
for ch in ['/', 'm', 'e', 'n', 't', 'i', 'o', 'n'] {
let _ = composer.handle_key_event(KeyEvent::new(KeyCode::Char(ch), KeyModifiers::NONE));
}
type_chars_humanlike(&mut composer, &['/', 'm', 'e', 'n', 't', 'i', 'o', 'n']);
let (result, _needs_redraw) =
composer.handle_key_event(KeyEvent::new(KeyCode::Enter, KeyModifiers::NONE));
@@ -1703,8 +1868,13 @@ mod tests {
let (tx, _rx) = unbounded_channel::<AppEvent>();
let sender = AppEventSender::new(tx);
let mut composer =
ChatComposer::new(true, sender, false, "Ask Codex to do anything".to_string());
let mut composer = ChatComposer::new(
true,
sender,
false,
"Ask Codex to do anything".to_string(),
false,
);
// Define test cases: (paste content, is_large)
let test_cases = [
@@ -1777,8 +1947,13 @@ mod tests {
let (tx, _rx) = unbounded_channel::<AppEvent>();
let sender = AppEventSender::new(tx);
let mut composer =
ChatComposer::new(true, sender, false, "Ask Codex to do anything".to_string());
let mut composer = ChatComposer::new(
true,
sender,
false,
"Ask Codex to do anything".to_string(),
false,
);
// Define test cases: (content, is_large)
let test_cases = [
@@ -1844,8 +2019,13 @@ mod tests {
let (tx, _rx) = unbounded_channel::<AppEvent>();
let sender = AppEventSender::new(tx);
let mut composer =
ChatComposer::new(true, sender, false, "Ask Codex to do anything".to_string());
let mut composer = ChatComposer::new(
true,
sender,
false,
"Ask Codex to do anything".to_string(),
false,
);
// Define test cases: (cursor_position_from_end, expected_pending_count)
let test_cases = [
@@ -1887,8 +2067,13 @@ mod tests {
fn attach_image_and_submit_includes_image_paths() {
let (tx, _rx) = unbounded_channel::<AppEvent>();
let sender = AppEventSender::new(tx);
let mut composer =
ChatComposer::new(true, sender, false, "Ask Codex to do anything".to_string());
let mut composer = ChatComposer::new(
true,
sender,
false,
"Ask Codex to do anything".to_string(),
false,
);
let path = PathBuf::from("/tmp/image1.png");
composer.attach_image(path.clone(), 32, 16, "PNG");
composer.handle_paste(" hi".into());
@@ -1906,8 +2091,13 @@ mod tests {
fn attach_image_without_text_submits_empty_text_and_images() {
let (tx, _rx) = unbounded_channel::<AppEvent>();
let sender = AppEventSender::new(tx);
let mut composer =
ChatComposer::new(true, sender, false, "Ask Codex to do anything".to_string());
let mut composer = ChatComposer::new(
true,
sender,
false,
"Ask Codex to do anything".to_string(),
false,
);
let path = PathBuf::from("/tmp/image2.png");
composer.attach_image(path.clone(), 10, 5, "PNG");
let (result, _) =
@@ -1926,8 +2116,13 @@ mod tests {
fn image_placeholder_backspace_behaves_like_text_placeholder() {
let (tx, _rx) = unbounded_channel::<AppEvent>();
let sender = AppEventSender::new(tx);
let mut composer =
ChatComposer::new(true, sender, false, "Ask Codex to do anything".to_string());
let mut composer = ChatComposer::new(
true,
sender,
false,
"Ask Codex to do anything".to_string(),
false,
);
let path = PathBuf::from("/tmp/image3.png");
composer.attach_image(path.clone(), 20, 10, "PNG");
let placeholder = composer.attached_images[0].placeholder.clone();
@@ -1962,8 +2157,13 @@ mod tests {
let (tx, _rx) = unbounded_channel::<AppEvent>();
let sender = AppEventSender::new(tx);
let mut composer =
ChatComposer::new(true, sender, false, "Ask Codex to do anything".to_string());
let mut composer = ChatComposer::new(
true,
sender,
false,
"Ask Codex to do anything".to_string(),
false,
);
// Insert an image placeholder at the start
let path = PathBuf::from("/tmp/image_multibyte.png");
@@ -1983,8 +2183,13 @@ mod tests {
fn deleting_one_of_duplicate_image_placeholders_removes_matching_entry() {
let (tx, _rx) = unbounded_channel::<AppEvent>();
let sender = AppEventSender::new(tx);
let mut composer =
ChatComposer::new(true, sender, false, "Ask Codex to do anything".to_string());
let mut composer = ChatComposer::new(
true,
sender,
false,
"Ask Codex to do anything".to_string(),
false,
);
let path1 = PathBuf::from("/tmp/image_dup1.png");
let path2 = PathBuf::from("/tmp/image_dup2.png");
@@ -2025,8 +2230,13 @@ mod tests {
let (tx, _rx) = unbounded_channel::<AppEvent>();
let sender = AppEventSender::new(tx);
let mut composer =
ChatComposer::new(true, sender, false, "Ask Codex to do anything".to_string());
let mut composer = ChatComposer::new(
true,
sender,
false,
"Ask Codex to do anything".to_string(),
false,
);
let needs_redraw = composer.handle_paste(tmp_path.to_string_lossy().to_string());
assert!(needs_redraw);
@@ -2035,4 +2245,136 @@ mod tests {
let imgs = composer.take_recent_submission_images();
assert_eq!(imgs, vec![tmp_path.clone()]);
}
#[test]
fn selecting_custom_prompt_submits_file_contents() {
let prompt_text = "Hello from saved prompt";
let (tx, _rx) = unbounded_channel::<AppEvent>();
let sender = AppEventSender::new(tx);
let mut composer = ChatComposer::new(
true,
sender,
false,
"Ask Codex to do anything".to_string(),
false,
);
// Inject prompts as if received via event.
composer.set_custom_prompts(vec![CustomPrompt {
name: "my-prompt".to_string(),
path: "/tmp/my-prompt.md".to_string().into(),
content: prompt_text.to_string(),
}]);
type_chars_humanlike(
&mut composer,
&['/', 'm', 'y', '-', 'p', 'r', 'o', 'm', 'p', 't'],
);
let (result, _needs_redraw) =
composer.handle_key_event(KeyEvent::new(KeyCode::Enter, KeyModifiers::NONE));
assert_eq!(InputResult::Submitted(prompt_text.to_string()), result);
}
#[test]
fn burst_paste_fast_small_buffers_and_flushes_on_stop() {
use crossterm::event::KeyCode;
use crossterm::event::KeyEvent;
use crossterm::event::KeyModifiers;
let (tx, _rx) = unbounded_channel::<AppEvent>();
let sender = AppEventSender::new(tx);
let mut composer = ChatComposer::new(
true,
sender,
false,
"Ask Codex to do anything".to_string(),
false,
);
let count = 32;
for _ in 0..count {
let _ =
composer.handle_key_event(KeyEvent::new(KeyCode::Char('a'), KeyModifiers::NONE));
assert!(
composer.is_in_paste_burst(),
"expected active paste burst during fast typing"
);
assert!(
composer.textarea.text().is_empty(),
"text should not appear during burst"
);
}
assert!(
composer.textarea.text().is_empty(),
"text should remain empty until flush"
);
std::thread::sleep(ChatComposer::recommended_paste_flush_delay());
let flushed = composer.flush_paste_burst_if_due();
assert!(flushed, "expected buffered text to flush after stop");
assert_eq!(composer.textarea.text(), "a".repeat(count));
assert!(
composer.pending_pastes.is_empty(),
"no placeholder for small burst"
);
}
#[test]
fn burst_paste_fast_large_inserts_placeholder_on_flush() {
use crossterm::event::KeyCode;
use crossterm::event::KeyEvent;
use crossterm::event::KeyModifiers;
let (tx, _rx) = unbounded_channel::<AppEvent>();
let sender = AppEventSender::new(tx);
let mut composer = ChatComposer::new(
true,
sender,
false,
"Ask Codex to do anything".to_string(),
false,
);
let count = LARGE_PASTE_CHAR_THRESHOLD + 1; // > threshold to trigger placeholder
for _ in 0..count {
let _ =
composer.handle_key_event(KeyEvent::new(KeyCode::Char('x'), KeyModifiers::NONE));
}
// Nothing should appear until we stop and flush
assert!(composer.textarea.text().is_empty());
std::thread::sleep(ChatComposer::recommended_paste_flush_delay());
let flushed = composer.flush_paste_burst_if_due();
assert!(flushed, "expected flush after stopping fast input");
let expected_placeholder = format!("[Pasted Content {count} chars]");
assert_eq!(composer.textarea.text(), expected_placeholder);
assert_eq!(composer.pending_pastes.len(), 1);
assert_eq!(composer.pending_pastes[0].0, expected_placeholder);
assert_eq!(composer.pending_pastes[0].1.len(), count);
assert!(composer.pending_pastes[0].1.chars().all(|c| c == 'x'));
}
#[test]
fn humanlike_typing_1000_chars_appears_live_no_placeholder() {
let (tx, _rx) = unbounded_channel::<AppEvent>();
let sender = AppEventSender::new(tx);
let mut composer = ChatComposer::new(
true,
sender,
false,
"Ask Codex to do anything".to_string(),
false,
);
let count = LARGE_PASTE_CHAR_THRESHOLD; // 1000 in current config
let chars: Vec<char> = vec!['z'; count];
type_chars_humanlike(&mut composer, &chars);
assert_eq!(composer.textarea.text(), "z".repeat(count));
assert!(composer.pending_pastes.is_empty());
}
}

View File

@@ -9,22 +9,58 @@ use super::selection_popup_common::render_rows;
use crate::slash_command::SlashCommand;
use crate::slash_command::built_in_slash_commands;
use codex_common::fuzzy_match::fuzzy_match;
use codex_protocol::custom_prompts::CustomPrompt;
use std::collections::HashSet;
/// A selectable item in the popup: either a built-in command or a user prompt.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub(crate) enum CommandItem {
Builtin(SlashCommand),
// Index into `prompts`
UserPrompt(usize),
}
pub(crate) struct CommandPopup {
command_filter: String,
all_commands: Vec<(&'static str, SlashCommand)>,
builtins: Vec<(&'static str, SlashCommand)>,
prompts: Vec<CustomPrompt>,
state: ScrollState,
}
impl CommandPopup {
pub(crate) fn new() -> Self {
pub(crate) fn new(mut prompts: Vec<CustomPrompt>) -> Self {
let builtins = built_in_slash_commands();
// Exclude prompts that collide with builtin command names and sort by name.
let exclude: HashSet<String> = builtins.iter().map(|(n, _)| (*n).to_string()).collect();
prompts.retain(|p| !exclude.contains(&p.name));
prompts.sort_by(|a, b| a.name.cmp(&b.name));
Self {
command_filter: String::new(),
all_commands: built_in_slash_commands(),
builtins,
prompts,
state: ScrollState::new(),
}
}
pub(crate) fn set_prompts(&mut self, mut prompts: Vec<CustomPrompt>) {
let exclude: HashSet<String> = self
.builtins
.iter()
.map(|(n, _)| (*n).to_string())
.collect();
prompts.retain(|p| !exclude.contains(&p.name));
prompts.sort_by(|a, b| a.name.cmp(&b.name));
self.prompts = prompts;
}
pub(crate) fn prompt_name(&self, idx: usize) -> Option<&str> {
self.prompts.get(idx).map(|p| p.name.as_str())
}
pub(crate) fn prompt_content(&self, idx: usize) -> Option<&str> {
self.prompts.get(idx).map(|p| p.content.as_str())
}
/// Update the filter string based on the current composer text. The text
/// passed in is expected to start with a leading '/'. Everything after the
/// *first* '/" on the *first* line becomes the active filter that is used
@@ -50,7 +86,7 @@ impl CommandPopup {
}
// Reset or clamp selected index based on new filtered list.
let matches_len = self.filtered_commands().len();
let matches_len = self.filtered_items().len();
self.state.clamp_selection(matches_len);
self.state
.ensure_visible(matches_len, MAX_POPUP_ROWS.min(matches_len));
@@ -59,56 +95,76 @@ impl CommandPopup {
/// Determine the preferred height of the popup. This is the number of
/// rows required to show at most MAX_POPUP_ROWS commands.
pub(crate) fn calculate_required_height(&self) -> u16 {
self.filtered_commands().len().clamp(1, MAX_POPUP_ROWS) as u16
self.filtered_items().len().clamp(1, MAX_POPUP_ROWS) as u16
}
/// Compute fuzzy-filtered matches paired with optional highlight indices and score.
/// Sorted by ascending score, then by command name for stability.
fn filtered(&self) -> Vec<(&SlashCommand, Option<Vec<usize>>, i32)> {
/// Compute fuzzy-filtered matches over built-in commands and user prompts,
/// paired with optional highlight indices and score. Sorted by ascending
/// score, then by name for stability.
fn filtered(&self) -> Vec<(CommandItem, Option<Vec<usize>>, i32)> {
let filter = self.command_filter.trim();
let mut out: Vec<(&SlashCommand, Option<Vec<usize>>, i32)> = Vec::new();
let mut out: Vec<(CommandItem, Option<Vec<usize>>, i32)> = Vec::new();
if filter.is_empty() {
for (_, cmd) in self.all_commands.iter() {
out.push((cmd, None, 0));
// Built-ins first, in presentation order.
for (_, cmd) in self.builtins.iter() {
out.push((CommandItem::Builtin(*cmd), None, 0));
}
// Then prompts, already sorted by name.
for idx in 0..self.prompts.len() {
out.push((CommandItem::UserPrompt(idx), None, 0));
}
// Keep the original presentation order when no filter is applied.
return out;
} else {
for (_, cmd) in self.all_commands.iter() {
if let Some((indices, score)) = fuzzy_match(cmd.command(), filter) {
out.push((cmd, Some(indices), score));
}
}
for (_, cmd) in self.builtins.iter() {
if let Some((indices, score)) = fuzzy_match(cmd.command(), filter) {
out.push((CommandItem::Builtin(*cmd), Some(indices), score));
}
}
// When filtering, sort by ascending score and then by command for stability.
out.sort_by(|a, b| a.2.cmp(&b.2).then_with(|| a.0.command().cmp(b.0.command())));
for (idx, p) in self.prompts.iter().enumerate() {
if let Some((indices, score)) = fuzzy_match(&p.name, filter) {
out.push((CommandItem::UserPrompt(idx), Some(indices), score));
}
}
// When filtering, sort by ascending score and then by name for stability.
out.sort_by(|a, b| {
a.2.cmp(&b.2).then_with(|| {
let an = match a.0 {
CommandItem::Builtin(c) => c.command(),
CommandItem::UserPrompt(i) => &self.prompts[i].name,
};
let bn = match b.0 {
CommandItem::Builtin(c) => c.command(),
CommandItem::UserPrompt(i) => &self.prompts[i].name,
};
an.cmp(bn)
})
});
out
}
fn filtered_commands(&self) -> Vec<&SlashCommand> {
fn filtered_items(&self) -> Vec<CommandItem> {
self.filtered().into_iter().map(|(c, _, _)| c).collect()
}
/// Move the selection cursor one step up.
pub(crate) fn move_up(&mut self) {
let matches = self.filtered_commands();
let len = matches.len();
let len = self.filtered_items().len();
self.state.move_up_wrap(len);
self.state.ensure_visible(len, MAX_POPUP_ROWS.min(len));
}
/// Move the selection cursor one step down.
pub(crate) fn move_down(&mut self) {
let matches = self.filtered_commands();
let matches_len = matches.len();
let matches_len = self.filtered_items().len();
self.state.move_down_wrap(matches_len);
self.state
.ensure_visible(matches_len, MAX_POPUP_ROWS.min(matches_len));
}
/// Return currently selected command, if any.
pub(crate) fn selected_command(&self) -> Option<&SlashCommand> {
let matches = self.filtered_commands();
pub(crate) fn selected_item(&self) -> Option<CommandItem> {
let matches = self.filtered_items();
self.state
.selected_idx
.and_then(|idx| matches.get(idx).copied())
@@ -123,11 +179,19 @@ impl WidgetRef for CommandPopup {
} else {
matches
.into_iter()
.map(|(cmd, indices, _)| GenericDisplayRow {
name: format!("/{}", cmd.command()),
match_indices: indices.map(|v| v.into_iter().map(|i| i + 1).collect()),
is_current: false,
description: Some(cmd.description().to_string()),
.map(|(item, indices, _)| match item {
CommandItem::Builtin(cmd) => GenericDisplayRow {
name: format!("/{}", cmd.command()),
match_indices: indices.map(|v| v.into_iter().map(|i| i + 1).collect()),
is_current: false,
description: Some(cmd.description().to_string()),
},
CommandItem::UserPrompt(i) => GenericDisplayRow {
name: format!("/{}", self.prompts[i].name),
match_indices: indices.map(|v| v.into_iter().map(|i| i + 1).collect()),
is_current: false,
description: Some("send saved prompt".to_string()),
},
})
.collect()
};
@@ -141,31 +205,96 @@ mod tests {
#[test]
fn filter_includes_init_when_typing_prefix() {
let mut popup = CommandPopup::new();
let mut popup = CommandPopup::new(Vec::new());
// Simulate the composer line starting with '/in' so the popup filters
// matching commands by prefix.
popup.on_composer_text_change("/in".to_string());
// Access the filtered list via the selected command and ensure that
// one of the matches is the new "init" command.
let matches = popup.filtered_commands();
let matches = popup.filtered_items();
let has_init = matches.iter().any(|item| match item {
CommandItem::Builtin(cmd) => cmd.command() == "init",
CommandItem::UserPrompt(_) => false,
});
assert!(
matches.iter().any(|cmd| cmd.command() == "init"),
has_init,
"expected '/init' to appear among filtered commands"
);
}
#[test]
fn selecting_init_by_exact_match() {
let mut popup = CommandPopup::new();
let mut popup = CommandPopup::new(Vec::new());
popup.on_composer_text_change("/init".to_string());
// When an exact match exists, the selected command should be that
// command by default.
let selected = popup.selected_command();
let selected = popup.selected_item();
match selected {
Some(cmd) => assert_eq!(cmd.command(), "init"),
Some(CommandItem::Builtin(cmd)) => assert_eq!(cmd.command(), "init"),
Some(CommandItem::UserPrompt(_)) => panic!("unexpected prompt selected for '/init'"),
None => panic!("expected a selected command for exact match"),
}
}
#[test]
fn model_is_first_suggestion_for_mo() {
let mut popup = CommandPopup::new(Vec::new());
popup.on_composer_text_change("/mo".to_string());
let matches = popup.filtered_items();
match matches.first() {
Some(CommandItem::Builtin(cmd)) => assert_eq!(cmd.command(), "model"),
Some(CommandItem::UserPrompt(_)) => {
panic!("unexpected prompt ranked before '/model' for '/mo'")
}
None => panic!("expected at least one match for '/mo'"),
}
}
#[test]
fn prompt_discovery_lists_custom_prompts() {
let prompts = vec![
CustomPrompt {
name: "foo".to_string(),
path: "/tmp/foo.md".to_string().into(),
content: "hello from foo".to_string(),
},
CustomPrompt {
name: "bar".to_string(),
path: "/tmp/bar.md".to_string().into(),
content: "hello from bar".to_string(),
},
];
let popup = CommandPopup::new(prompts);
let items = popup.filtered_items();
let mut prompt_names: Vec<String> = items
.into_iter()
.filter_map(|it| match it {
CommandItem::UserPrompt(i) => popup.prompt_name(i).map(|s| s.to_string()),
_ => None,
})
.collect();
prompt_names.sort();
assert_eq!(prompt_names, vec!["bar".to_string(), "foo".to_string()]);
}
#[test]
fn prompt_name_collision_with_builtin_is_ignored() {
// Create a prompt named like a builtin (e.g. "init").
let popup = CommandPopup::new(vec![CustomPrompt {
name: "init".to_string(),
path: "/tmp/init.md".to_string().into(),
content: "should be ignored".to_string(),
}]);
let items = popup.filtered_items();
let has_collision_prompt = items.into_iter().any(|it| match it {
CommandItem::UserPrompt(i) => popup.prompt_name(i) == Some("init"),
_ => false,
});
assert!(
!has_collision_prompt,
"prompt with builtin name should be ignored"
);
}
}

View File

@@ -13,6 +13,7 @@ use ratatui::layout::Constraint;
use ratatui::layout::Layout;
use ratatui::layout::Rect;
use ratatui::widgets::WidgetRef;
use std::time::Duration;
mod approval_modal_view;
mod bottom_pane_view;
@@ -21,6 +22,7 @@ mod chat_composer_history;
mod command_popup;
mod file_search_popup;
mod list_selection_view;
mod paste_burst;
mod popup_consts;
mod scroll_state;
mod selection_popup_common;
@@ -34,6 +36,7 @@ pub(crate) enum CancellationEvent {
pub(crate) use chat_composer::ChatComposer;
pub(crate) use chat_composer::InputResult;
use codex_protocol::custom_prompts::CustomPrompt;
use crate::status_indicator_widget::StatusIndicatorWidget;
use approval_modal_view::ApprovalModalView;
@@ -69,6 +72,7 @@ pub(crate) struct BottomPaneParams {
pub(crate) has_input_focus: bool,
pub(crate) enhanced_keys_supported: bool,
pub(crate) placeholder_text: String,
pub(crate) disable_paste_burst: bool,
}
impl BottomPane {
@@ -81,6 +85,7 @@ impl BottomPane {
params.app_event_tx.clone(),
enhanced_keys_supported,
params.placeholder_text,
params.disable_paste_burst,
),
active_view: None,
app_event_tx: params.app_event_tx,
@@ -95,53 +100,47 @@ impl BottomPane {
}
pub fn desired_height(&self, width: u16) -> u16 {
let top_margin = if self.active_view.is_some() { 0 } else { 1 };
// Always reserve one blank row above the pane for visual spacing.
let top_margin = 1;
// Base height depends on whether a modal/overlay is active.
let mut base = if let Some(view) = self.active_view.as_ref() {
view.desired_height(width)
} else {
self.composer.desired_height(width)
let base = match self.active_view.as_ref() {
Some(view) => view.desired_height(width),
None => self.composer.desired_height(width).saturating_add(
self.status
.as_ref()
.map_or(0, |status| status.desired_height(width)),
),
};
// If a status indicator is active and no modal is covering the composer,
// include its height above the composer.
if self.active_view.is_none()
&& let Some(status) = self.status.as_ref()
{
base = base.saturating_add(status.desired_height(width));
}
// Account for bottom padding rows. Top spacing is handled in layout().
base.saturating_add(Self::BOTTOM_PAD_LINES)
.saturating_add(top_margin)
}
fn layout(&self, area: Rect) -> [Rect; 2] {
// Prefer showing the status header when space is extremely tight.
// Drop the top spacer if there is only one row available.
let mut top_margin = if self.active_view.is_some() { 0 } else { 1 };
if area.height <= 1 {
top_margin = 0;
}
let status_height = if self.active_view.is_none() {
if let Some(status) = self.status.as_ref() {
status.desired_height(area.width)
} else {
0
}
// At small heights, bottom pane takes the entire height.
let (top_margin, bottom_margin) = if area.height <= BottomPane::BOTTOM_PAD_LINES + 1 {
(0, 0)
} else {
0
(1, BottomPane::BOTTOM_PAD_LINES)
};
let [_, status, content, _] = Layout::vertical([
Constraint::Max(top_margin),
Constraint::Max(status_height),
Constraint::Min(1),
Constraint::Max(BottomPane::BOTTOM_PAD_LINES),
])
.areas(area);
[status, content]
let area = Rect {
x: area.x,
y: area.y + top_margin,
width: area.width,
height: area.height - top_margin - bottom_margin,
};
match self.active_view.as_ref() {
Some(_) => [Rect::ZERO, area],
None => {
let status_height = self
.status
.as_ref()
.map_or(0, |status| status.desired_height(area.width));
Layout::vertical([Constraint::Max(status_height), Constraint::Min(1)]).areas(area)
}
}
}
pub fn cursor_pos(&self, area: Rect) -> Option<(u16, u16)> {
@@ -182,6 +181,9 @@ impl BottomPane {
if needs_redraw {
self.request_redraw();
}
if self.composer.is_in_paste_burst() {
self.request_redraw_in(ChatComposer::recommended_paste_flush_delay());
}
input_result
}
}
@@ -329,6 +331,12 @@ impl BottomPane {
self.request_redraw();
}
/// Update custom prompts available for the slash popup.
pub(crate) fn set_custom_prompts(&mut self, prompts: Vec<CustomPrompt>) {
self.composer.set_custom_prompts(prompts);
self.request_redraw();
}
pub(crate) fn composer_is_empty(&self) -> bool {
self.composer.is_empty()
}
@@ -382,12 +390,24 @@ impl BottomPane {
self.frame_requester.schedule_frame();
}
pub(crate) fn request_redraw_in(&self, dur: Duration) {
self.frame_requester.schedule_frame_in(dur);
}
// --- History helpers ---
pub(crate) fn set_history_metadata(&mut self, log_id: u64, entry_count: usize) {
self.composer.set_history_metadata(log_id, entry_count);
}
pub(crate) fn flush_paste_burst_if_due(&mut self) -> bool {
self.composer.flush_paste_burst_if_due()
}
pub(crate) fn is_in_paste_burst(&self) -> bool {
self.composer.is_in_paste_burst()
}
pub(crate) fn on_history_entry_response(
&mut self,
log_id: u64,
@@ -473,6 +493,7 @@ mod tests {
has_input_focus: true,
enhanced_keys_supported: false,
placeholder_text: "Ask Codex to do anything".to_string(),
disable_paste_burst: false,
});
pane.push_approval_request(exec_request());
assert_eq!(CancellationEvent::Handled, pane.on_ctrl_c());
@@ -492,6 +513,7 @@ mod tests {
has_input_focus: true,
enhanced_keys_supported: false,
placeholder_text: "Ask Codex to do anything".to_string(),
disable_paste_burst: false,
});
// Create an approval modal (active view).
@@ -522,6 +544,7 @@ mod tests {
has_input_focus: true,
enhanced_keys_supported: false,
placeholder_text: "Ask Codex to do anything".to_string(),
disable_paste_burst: false,
});
// Start a running task so the status indicator is active above the composer.
@@ -589,6 +612,7 @@ mod tests {
has_input_focus: true,
enhanced_keys_supported: false,
placeholder_text: "Ask Codex to do anything".to_string(),
disable_paste_burst: false,
});
// Begin a task: show initial status.
@@ -619,6 +643,7 @@ mod tests {
has_input_focus: true,
enhanced_keys_supported: false,
placeholder_text: "Ask Codex to do anything".to_string(),
disable_paste_burst: false,
});
// Activate spinner (status view replaces composer) with no live ring.
@@ -669,11 +694,12 @@ mod tests {
has_input_focus: true,
enhanced_keys_supported: false,
placeholder_text: "Ask Codex to do anything".to_string(),
disable_paste_burst: false,
});
pane.set_task_running(true);
// Height=2 → composer visible; status is hidden to preserve composer. Spacer may collapse.
// Height=2 → status on one row, composer on the other.
let area2 = Rect::new(0, 0, 20, 2);
let mut buf2 = Buffer::empty(area2);
(&pane).render_ref(area2, &mut buf2);
@@ -689,8 +715,8 @@ mod tests {
"expected composer to be visible on one of the rows: row0={row0:?}, row1={row1:?}"
);
assert!(
!row0.contains("Working") && !row1.contains("Working"),
"status header should be hidden when height=2"
row0.contains("Working") || row1.contains("Working"),
"expected status header to be visible at height=2: row0={row0:?}, row1={row1:?}"
);
// Height=1 → no padding; single row is the composer (status hidden).

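The reworked layout() collapses both margins when the pane is extremely short. A minimal sketch of that rule, where margins is a hypothetical free function and bottom_pad stands in for BottomPane::BOTTOM_PAD_LINES (its value is not shown in this diff):

// Sketch of the margin rule in the reworked layout() above.
fn margins(area_height: u16, bottom_pad: u16) -> (u16, u16) {
    if area_height <= bottom_pad + 1 {
        (0, 0) // too tight: the pane takes the whole area
    } else {
        (1, bottom_pad) // one spacer row above, padding rows below
    }
}

fn main() {
    assert_eq!(margins(1, 2), (0, 0));  // height 1: no spacer, no padding
    assert_eq!(margins(3, 2), (0, 0));  // still within bottom_pad + 1
    assert_eq!(margins(10, 2), (1, 2)); // normal case: reserve both margins
}

Assuming BOTTOM_PAD_LINES is at least 1, a height-2 area keeps both margins collapsed, which is what lets the updated test above expect the "Working" header and the composer to share the two rows.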
View File

@@ -0,0 +1,252 @@
use std::time::Duration;
use std::time::Instant;
// Heuristic thresholds for detecting paste-like input bursts.
// Detect quickly to avoid showing typed prefix before paste is recognized
const PASTE_BURST_MIN_CHARS: u16 = 3;
const PASTE_BURST_CHAR_INTERVAL: Duration = Duration::from_millis(8);
const PASTE_ENTER_SUPPRESS_WINDOW: Duration = Duration::from_millis(120);
#[derive(Default)]
pub(crate) struct PasteBurst {
last_plain_char_time: Option<Instant>,
consecutive_plain_char_burst: u16,
burst_window_until: Option<Instant>,
buffer: String,
active: bool,
// Hold first fast char briefly to avoid rendering flicker
pending_first_char: Option<(char, Instant)>,
}
pub(crate) enum CharDecision {
/// Start buffering and retroactively capture some already-inserted chars.
BeginBuffer { retro_chars: u16 },
/// We are currently buffering; append the current char into the buffer.
BufferAppend,
/// Do not insert/render this char yet; temporarily save the first fast
/// char while we wait to see if a paste-like burst follows.
RetainFirstChar,
/// Begin buffering using the previously saved first char (no retro grab needed).
BeginBufferFromPending,
}
pub(crate) struct RetroGrab {
pub start_byte: usize,
pub grabbed: String,
}
pub(crate) enum FlushResult {
Paste(String),
Typed(char),
None,
}
impl PasteBurst {
/// Recommended delay to wait between simulated keypresses (or before
/// scheduling a UI tick) so that a pending fast keystroke is flushed
/// out of the burst detector as normal typed input.
///
/// Primarily used by tests and by the TUI to reliably cross the
/// paste-burst timing threshold.
pub fn recommended_flush_delay() -> Duration {
PASTE_BURST_CHAR_INTERVAL + Duration::from_millis(1)
}
/// Entry point: decide how to treat a plain char with current timing.
pub fn on_plain_char(&mut self, ch: char, now: Instant) -> CharDecision {
match self.last_plain_char_time {
Some(prev) if now.duration_since(prev) <= PASTE_BURST_CHAR_INTERVAL => {
self.consecutive_plain_char_burst =
self.consecutive_plain_char_burst.saturating_add(1)
}
_ => self.consecutive_plain_char_burst = 1,
}
self.last_plain_char_time = Some(now);
if self.active {
self.burst_window_until = Some(now + PASTE_ENTER_SUPPRESS_WINDOW);
return CharDecision::BufferAppend;
}
// If we already held a first char and receive a second fast char,
// start buffering without retro-grabbing (we never rendered the first).
if let Some((held, held_at)) = self.pending_first_char
&& now.duration_since(held_at) <= PASTE_BURST_CHAR_INTERVAL
{
self.active = true;
// take() to clear pending; we already captured the held char above
let _ = self.pending_first_char.take();
self.buffer.push(held);
self.burst_window_until = Some(now + PASTE_ENTER_SUPPRESS_WINDOW);
return CharDecision::BeginBufferFromPending;
}
if self.consecutive_plain_char_burst >= PASTE_BURST_MIN_CHARS {
return CharDecision::BeginBuffer {
retro_chars: self.consecutive_plain_char_burst.saturating_sub(1),
};
}
// Save the first fast char very briefly to see if a burst follows.
self.pending_first_char = Some((ch, now));
CharDecision::RetainFirstChar
}
/// Flush the buffered burst if the inter-key timeout has elapsed.
///
/// Returns `FlushResult::Paste(text)` when we were actively buffering
/// paste-like input and the buffer is now emitted as a single pasted string.
///
/// Returns `FlushResult::Typed(ch)` when a single fast first-char had been
/// saved, no burst followed, and it is now emitted as normal typed input.
///
/// Returns `FlushResult::None` if the timeout has not elapsed or there is
/// nothing to flush.
pub fn flush_if_due(&mut self, now: Instant) -> FlushResult {
let timed_out = self
.last_plain_char_time
.is_some_and(|t| now.duration_since(t) > PASTE_BURST_CHAR_INTERVAL);
if timed_out && self.is_active_internal() {
self.active = false;
let out = std::mem::take(&mut self.buffer);
FlushResult::Paste(out)
} else if timed_out {
// If we were saving a single fast char and no burst followed,
// flush it as normal typed input.
if let Some((ch, _at)) = self.pending_first_char.take() {
FlushResult::Typed(ch)
} else {
FlushResult::None
}
} else {
FlushResult::None
}
}
/// While bursting: accumulate a newline into the buffer instead of
/// submitting the textarea.
///
/// Returns true if a newline was appended (we are in a burst context),
/// false otherwise.
pub fn append_newline_if_active(&mut self, now: Instant) -> bool {
if self.is_active() {
self.buffer.push('\n');
self.burst_window_until = Some(now + PASTE_ENTER_SUPPRESS_WINDOW);
true
} else {
false
}
}
/// Decide if Enter should insert a newline (burst context) vs submit.
pub fn newline_should_insert_instead_of_submit(&self, now: Instant) -> bool {
let in_burst_window = self.burst_window_until.is_some_and(|until| now <= until);
self.is_active() || in_burst_window
}
/// Keep the burst window alive.
pub fn extend_window(&mut self, now: Instant) {
self.burst_window_until = Some(now + PASTE_ENTER_SUPPRESS_WINDOW);
}
/// Begin buffering with retroactively grabbed text.
pub fn begin_with_retro_grabbed(&mut self, grabbed: String, now: Instant) {
if !grabbed.is_empty() {
self.buffer.push_str(&grabbed);
}
self.active = true;
self.burst_window_until = Some(now + PASTE_ENTER_SUPPRESS_WINDOW);
}
/// Append a char into the burst buffer.
pub fn append_char_to_buffer(&mut self, ch: char, now: Instant) {
self.buffer.push(ch);
self.burst_window_until = Some(now + PASTE_ENTER_SUPPRESS_WINDOW);
}
/// Decide whether to begin buffering by retroactively capturing recent
/// chars from the slice before the cursor.
///
/// Heuristic: if the retro-grabbed slice contains any whitespace or is
/// sufficiently long (>= 16 characters), treat it as paste-like to avoid
/// rendering the typed prefix momentarily before the paste is recognized.
/// This favors responsiveness and prevents flicker for typical pastes
/// (URLs, file paths, multiline text) while not triggering on short words.
///
/// Returns Some(RetroGrab) with the start byte and grabbed text when we
/// decide to buffer retroactively; otherwise None.
pub fn decide_begin_buffer(
&mut self,
now: Instant,
before: &str,
retro_chars: usize,
) -> Option<RetroGrab> {
let start_byte = retro_start_index(before, retro_chars);
let grabbed = before[start_byte..].to_string();
let looks_pastey =
grabbed.chars().any(|c| c.is_whitespace()) || grabbed.chars().count() >= 16;
if looks_pastey {
// Note: caller is responsible for removing this slice from UI text.
self.begin_with_retro_grabbed(grabbed.clone(), now);
Some(RetroGrab {
start_byte,
grabbed,
})
} else {
None
}
}
/// Before applying modified/non-char input: flush buffered burst immediately.
pub fn flush_before_modified_input(&mut self) -> Option<String> {
if self.is_active() {
self.active = false;
Some(std::mem::take(&mut self.buffer))
} else {
None
}
}
/// Clear only the timing window and any pending first-char.
///
/// Does not emit or clear the buffered text itself; callers should have
/// already flushed (if needed) via one of the flush methods above.
pub fn clear_window_after_non_char(&mut self) {
self.consecutive_plain_char_burst = 0;
self.last_plain_char_time = None;
self.burst_window_until = None;
self.active = false;
self.pending_first_char = None;
}
/// Returns true if we are in any paste-burst related transient state
/// (actively buffering, have a non-empty buffer, or have saved the first
/// fast char while waiting for a potential burst).
pub fn is_active(&self) -> bool {
self.is_active_internal() || self.pending_first_char.is_some()
}
fn is_active_internal(&self) -> bool {
self.active || !self.buffer.is_empty()
}
pub fn clear_after_explicit_paste(&mut self) {
self.last_plain_char_time = None;
self.consecutive_plain_char_burst = 0;
self.burst_window_until = None;
self.active = false;
self.buffer.clear();
self.pending_first_char = None;
}
}
pub(crate) fn retro_start_index(before: &str, retro_chars: usize) -> usize {
if retro_chars == 0 {
return before.len();
}
before
.char_indices()
.rev()
.nth(retro_chars.saturating_sub(1))
.map(|(idx, _)| idx)
.unwrap_or(0)
}
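Taken together, the detector is driven as a small state machine: the caller feeds it plain chars, honors the returned CharDecision, and later polls flush_if_due. A sketch of that flow as a module-local test (the test itself is hypothetical and only exercises the public methods shown above; timings lean on the constants defined in this file):

// Hypothetical module-local test sketching the driving sequence.
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn fast_chars_buffer_then_flush_as_one_paste() {
        let mut burst = PasteBurst::default();
        let t0 = Instant::now();

        // The first fast char is held back so nothing flickers into the textarea.
        assert!(matches!(burst.on_plain_char('a', t0), CharDecision::RetainFirstChar));

        // A second char inside the interval promotes us into buffering; the held
        // 'a' is already in the buffer, so the caller only appends 'b'.
        let t1 = t0 + Duration::from_millis(1);
        assert!(matches!(
            burst.on_plain_char('b', t1),
            CharDecision::BeginBufferFromPending
        ));
        burst.append_char_to_buffer('b', t1);

        // Further fast chars keep appending.
        let t2 = t1 + Duration::from_millis(1);
        assert!(matches!(burst.on_plain_char('c', t2), CharDecision::BufferAppend));
        burst.append_char_to_buffer('c', t2);
        assert!(burst.is_active());

        // Once the inter-key gap exceeds the threshold, everything flushes as a paste.
        match burst.flush_if_due(t2 + PasteBurst::recommended_flush_delay()) {
            FlushResult::Paste(text) => assert_eq!(text, "abc"),
            _ => panic!("expected the buffered chars to flush as a single paste"),
        }

        // retro_start_index grabs the last N chars on a char boundary.
        assert_eq!(retro_start_index("hello", 3), 2); // &"hello"[2..] == "llo"
    }
}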

View File

@@ -0,0 +1,8 @@
---
source: tui/src/bottom_pane/chat_composer.rs
expression: terminal.backend()
---
"▌/mo "
"▌ "
"▌/model choose what model and reasoning effort to use "
"▌/mention mention a file "

View File

@@ -245,7 +245,6 @@ impl TextArea {
} => self.delete_backward_word(),
KeyEvent {
code: KeyCode::Backspace,
modifiers: KeyModifiers::NONE,
..
}
| KeyEvent {

View File

@@ -19,6 +19,7 @@ use codex_core::protocol::ExecApprovalRequestEvent;
use codex_core::protocol::ExecCommandBeginEvent;
use codex_core::protocol::ExecCommandEndEvent;
use codex_core::protocol::InputItem;
use codex_core::protocol::ListCustomPromptsResponseEvent;
use codex_core::protocol::McpListToolsResponseEvent;
use codex_core::protocol::McpToolCallBeginEvent;
use codex_core::protocol::McpToolCallEndEvent;
@@ -30,6 +31,7 @@ use codex_core::protocol::TokenUsage;
use codex_core::protocol::TurnAbortReason;
use codex_core::protocol::TurnDiffEvent;
use codex_core::protocol::WebSearchBeginEvent;
use codex_core::protocol::WebSearchEndEvent;
use codex_protocol::parse_command::ParsedCommand;
use crossterm::event::KeyCode;
use crossterm::event::KeyEvent;
@@ -99,7 +101,6 @@ pub(crate) struct ChatWidget {
// Stream lifecycle controller
stream: StreamController,
running_commands: HashMap<String, RunningCommand>,
pending_exec_completions: Vec<(Vec<String>, Vec<ParsedCommand>, CommandOutput)>,
task_complete_pending: bool,
// Queue of interruptive UI events deferred during an active write cycle
interrupts: InterruptManager,
@@ -111,7 +112,6 @@ pub(crate) struct ChatWidget {
frame_requester: FrameRequester,
// Whether to include the initial welcome banner on session configured
show_welcome_banner: bool,
last_history_was_exec: bool,
// User messages queued while a turn is in progress
queued_user_messages: VecDeque<UserMessage>,
}
@@ -153,6 +153,8 @@ impl ChatWidget {
event,
self.show_welcome_banner,
));
// Ask codex-core to enumerate custom prompts for this session.
self.submit_op(Op::ListCustomPrompts);
if let Some(user_message) = self.initial_user_message.take() {
self.submit_user_message(user_message);
}
@@ -329,6 +331,7 @@ impl ChatWidget {
auto_approved: event.auto_approved,
},
event.changes,
&self.config.cwd,
));
}
@@ -355,9 +358,16 @@ impl ChatWidget {
self.defer_or_handle(|q| q.push_mcp_end(ev), |s| s.handle_mcp_end_now(ev2));
}
fn on_web_search_begin(&mut self, ev: WebSearchBeginEvent) {
fn on_web_search_begin(&mut self, _ev: WebSearchBeginEvent) {
self.flush_answer_stream_with_separator();
self.add_to_history(history_cell::new_web_search_call(ev.query));
}
fn on_web_search_end(&mut self, ev: WebSearchEndEvent) {
self.flush_answer_stream_with_separator();
self.add_to_history(history_cell::new_web_search_call(format!(
"Searched: {}",
ev.query
)));
}
fn on_get_history_entry_response(
@@ -431,14 +441,14 @@ impl ChatWidget {
self.task_complete_pending = false;
}
// A completed stream indicates non-exec content was just inserted.
// Reset the exec header grouping so the next exec shows its header.
self.last_history_was_exec = false;
self.flush_interrupt_queue();
}
}
#[inline]
fn handle_streaming_delta(&mut self, delta: String) {
// Before streaming agent content, flush any active exec cell group.
self.flush_active_exec_cell();
let sink = AppEventHistorySink(self.app_event_tx.clone());
self.stream.begin(&sink);
self.stream.push_and_maybe_commit(&delta, &sink);
@@ -451,31 +461,29 @@ impl ChatWidget {
Some(rc) => (rc.command, rc.parsed_cmd),
None => (vec![ev.call_id.clone()], Vec::new()),
};
self.pending_exec_completions.push((
command,
parsed,
CommandOutput {
exit_code: ev.exit_code,
stdout: ev.stdout.clone(),
stderr: ev.stderr.clone(),
formatted_output: ev.formatted_output.clone(),
},
));
if self.running_commands.is_empty() {
self.active_exec_cell = None;
let pending = std::mem::take(&mut self.pending_exec_completions);
for (command, parsed, output) in pending {
let include_header = !self.last_history_was_exec;
let cell = history_cell::new_completed_exec_command(
command,
parsed,
output,
include_header,
ev.duration,
);
self.add_to_history(cell);
self.last_history_was_exec = true;
if self.active_exec_cell.is_none() {
// This should have been created by handle_exec_begin_now, but in case it wasn't,
// create it now.
self.active_exec_cell = Some(history_cell::new_active_exec_command(
ev.call_id.clone(),
command,
parsed,
));
}
if let Some(cell) = self.active_exec_cell.as_mut() {
cell.complete_call(
&ev.call_id,
CommandOutput {
exit_code: ev.exit_code,
stdout: ev.stdout.clone(),
stderr: ev.stderr.clone(),
formatted_output: ev.formatted_output.clone(),
},
ev.duration,
);
if cell.should_flush() {
self.flush_active_exec_cell();
}
}
}
@@ -484,9 +492,9 @@ impl ChatWidget {
&mut self,
event: codex_core::protocol::PatchApplyEndEvent,
) {
if event.success {
self.add_to_history(history_cell::new_patch_apply_success(event.stdout));
} else {
// If the patch was successful, just let the "Edited" block stand.
// Otherwise, add a failure block.
if !event.success {
self.add_to_history(history_cell::new_patch_apply_failure(event.stderr));
}
}
@@ -512,6 +520,7 @@ impl ChatWidget {
self.add_to_history(history_cell::new_patch_event(
PatchEventType::ApprovalRequest,
ev.changes.clone(),
&self.config.cwd,
));
let request = ApprovalRequest::ApplyPatch {
@@ -532,19 +541,28 @@ impl ChatWidget {
parsed_cmd: ev.parsed_cmd.clone(),
},
);
// Accumulate parsed commands into a single active Exec cell so they stack
match self.active_exec_cell.as_mut() {
Some(exec) => {
exec.parsed.extend(ev.parsed_cmd);
}
_ => {
let include_header = !self.last_history_was_exec;
if let Some(exec) = &self.active_exec_cell {
if let Some(new_exec) = exec.with_added_call(
ev.call_id.clone(),
ev.command.clone(),
ev.parsed_cmd.clone(),
) {
self.active_exec_cell = Some(new_exec);
} else {
// Make a new cell.
self.flush_active_exec_cell();
self.active_exec_cell = Some(history_cell::new_active_exec_command(
ev.command,
ev.parsed_cmd,
include_header,
ev.call_id.clone(),
ev.command.clone(),
ev.parsed_cmd.clone(),
));
}
} else {
self.active_exec_cell = Some(history_cell::new_active_exec_command(
ev.call_id.clone(),
ev.command.clone(),
ev.parsed_cmd.clone(),
));
}
// Request a redraw so the working header and command list are visible immediately.
@@ -574,7 +592,7 @@ impl ChatWidget {
Constraint::Max(
self.active_exec_cell
.as_ref()
.map_or(0, |c| c.desired_height(area.width)),
.map_or(0, |c| c.desired_height(area.width) + 1),
),
Constraint::Min(self.bottom_pane.desired_height(area.width)),
])
@@ -604,6 +622,7 @@ impl ChatWidget {
has_input_focus: true,
enhanced_keys_supported,
placeholder_text: placeholder,
disable_paste_burst: config.disable_paste_burst,
}),
active_exec_cell: None,
config: config.clone(),
@@ -615,13 +634,11 @@ impl ChatWidget {
last_token_usage: TokenUsage::default(),
stream: StreamController::new(config),
running_commands: HashMap::new(),
pending_exec_completions: Vec::new(),
task_complete_pending: false,
interrupts: InterruptManager::new(),
reasoning_buffer: String::new(),
full_reasoning_buffer: String::new(),
session_id: None,
last_history_was_exec: false,
queued_user_messages: VecDeque::new(),
show_welcome_banner: true,
}
@@ -652,6 +669,7 @@ impl ChatWidget {
has_input_focus: true,
enhanced_keys_supported,
placeholder_text: placeholder,
disable_paste_burst: config.disable_paste_burst,
}),
active_exec_cell: None,
config: config.clone(),
@@ -660,13 +678,11 @@ impl ChatWidget {
last_token_usage: TokenUsage::default(),
stream: StreamController::new(config),
running_commands: HashMap::new(),
pending_exec_completions: Vec::new(),
task_complete_pending: false,
interrupts: InterruptManager::new(),
reasoning_buffer: String::new(),
full_reasoning_buffer: String::new(),
session_id: None,
last_history_was_exec: false,
queued_user_messages: VecDeque::new(),
show_welcome_banner: false,
}
@@ -677,7 +693,7 @@ impl ChatWidget {
+ self
.active_exec_cell
.as_ref()
.map_or(0, |c| c.desired_height(width))
.map_or(0, |c| c.desired_height(width) + 1)
}
pub(crate) fn handle_key_event(&mut self, key_event: KeyEvent) {
@@ -751,12 +767,20 @@ impl ChatWidget {
}
fn dispatch_command(&mut self, cmd: SlashCommand) {
if !cmd.available_during_task() && self.bottom_pane.is_task_running() {
let message = format!(
"'/{}' is disabled while a task is in progress.",
cmd.command()
);
self.add_to_history(history_cell::new_error_event(message));
self.request_redraw();
return;
}
match cmd {
SlashCommand::New => {
self.app_event_tx.send(AppEvent::NewSession);
}
SlashCommand::Init => {
// Guard: do not run if a task is active.
const INIT_PROMPT: &str = include_str!("../prompt_for_init_command.md");
self.submit_text_message(INIT_PROMPT.to_string());
}
@@ -850,20 +874,35 @@ impl ChatWidget {
self.bottom_pane.handle_paste(text);
}
// Returns true if the caller should skip rendering this frame (a follow-up frame is already scheduled).
pub(crate) fn handle_paste_burst_tick(&mut self, frame_requester: FrameRequester) -> bool {
if self.bottom_pane.flush_paste_burst_if_due() {
// A paste just flushed; request an immediate redraw and skip this frame.
self.request_redraw();
true
} else if self.bottom_pane.is_in_paste_burst() {
// While capturing a burst, schedule a follow-up tick and skip this frame
// to avoid redundant renders between ticks.
frame_requester.schedule_frame_in(
crate::bottom_pane::ChatComposer::recommended_paste_flush_delay(),
);
true
} else {
false
}
}
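A hedged sketch of the calling convention this implies for the redraw path; only `handle_paste_burst_tick` comes from this diff, while the wrapper function and its name are hypothetical:
// Hypothetical caller: skip drawing whenever the widget reports that a
// follow-up frame is already scheduled (burst still buffering, or a paste just flushed).
fn should_render_this_frame(widget: &mut ChatWidget, frame_requester: FrameRequester) -> bool {
    if widget.handle_paste_burst_tick(frame_requester) {
        return false; // a scheduled future frame will render the flushed text
    }
    true
}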
fn flush_active_exec_cell(&mut self) {
if let Some(active) = self.active_exec_cell.take() {
self.last_history_was_exec = true;
self.app_event_tx
.send(AppEvent::InsertHistoryCell(Box::new(active)));
}
}
fn add_to_history(&mut self, cell: impl HistoryCell + 'static) {
// Only break exec grouping if the cell renders visible lines.
let has_display_lines = !cell.display_lines().is_empty();
self.flush_active_exec_cell();
if has_display_lines {
self.last_history_was_exec = false;
if !cell.display_lines(u16::MAX).is_empty() {
// Only break exec grouping if the cell renders visible lines.
self.flush_active_exec_cell();
}
self.app_event_tx
.send(AppEvent::InsertHistoryCell(Box::new(cell)));
@@ -961,8 +1000,10 @@ impl ChatWidget {
EventMsg::McpToolCallBegin(ev) => self.on_mcp_tool_call_begin(ev),
EventMsg::McpToolCallEnd(ev) => self.on_mcp_tool_call_end(ev),
EventMsg::WebSearchBegin(ev) => self.on_web_search_begin(ev),
EventMsg::WebSearchEnd(ev) => self.on_web_search_end(ev),
EventMsg::GetHistoryEntryResponse(ev) => self.on_get_history_entry_response(ev),
EventMsg::McpListToolsResponse(ev) => self.on_list_mcp_tools(ev),
EventMsg::ListCustomPromptsResponse(ev) => self.on_list_custom_prompts(ev),
EventMsg::ShutdownComplete => self.on_shutdown_complete(),
EventMsg::TurnDiff(TurnDiffEvent { unified_diff }) => self.on_turn_diff(unified_diff),
EventMsg::BackgroundEvent(BackgroundEventEvent { message }) => {
@@ -987,7 +1028,6 @@ impl ChatWidget {
let cell = cell.into_failed();
// Insert finalized exec into history and keep grouping consistent.
self.add_to_history(cell);
self.last_history_was_exec = true;
}
}
@@ -1042,6 +1082,7 @@ impl ChatWidget {
let is_current = preset.model == current_model && preset.effort == current_effort;
let model_slug = preset.model.to_string();
let effort = preset.effort;
let current_model = current_model.clone();
let actions: Vec<SelectionAction> = vec![Box::new(move |tx| {
tx.send(AppEvent::CodexOp(Op::OverrideTurnContext {
cwd: None,
@@ -1053,6 +1094,13 @@ impl ChatWidget {
}));
tx.send(AppEvent::UpdateModel(model_slug.clone()));
tx.send(AppEvent::UpdateReasoningEffort(effort));
tracing::info!(
"New model: {}, New effort: {}, Current model: {}, Current effort: {}",
model_slug.clone(),
effort,
current_model,
current_effort
);
})];
items.push(SelectionItem {
name,
@@ -1192,6 +1240,13 @@ impl ChatWidget {
self.add_to_history(history_cell::new_mcp_tools_output(&self.config, ev.tools));
}
fn on_list_custom_prompts(&mut self, ev: ListCustomPromptsResponseEvent) {
let len = ev.custom_prompts.len();
debug!("received {len} custom prompts");
// Forward to bottom pane so the slash popup can show them now.
self.bottom_pane.set_custom_prompts(ev.custom_prompts);
}
/// Programmatically submit a user text message as if typed in the
/// composer. The text will be added to conversation history and sent to
/// the agent.
@@ -1236,6 +1291,9 @@ impl WidgetRef for &ChatWidget {
let [active_cell_area, bottom_pane_area] = self.layout_areas(area);
(&self.bottom_pane).render(bottom_pane_area, buf);
if let Some(cell) = &self.active_exec_cell {
let mut active_cell_area = active_cell_area;
active_cell_area.y += 1;
active_cell_area.height -= 1;
cell.render_ref(active_cell_area, buf);
}
}
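The one-row offset applied in this render path pairs with the `+ 1` added to `desired_height` in the layout code above: the extra row reserved by the layout becomes a blank spacer line above the active exec cell. A small arithmetic sketch with illustrative numbers (not taken from the diff):
#[test]
fn spacer_row_arithmetic_sketch() {
    let cell_height: u16 = 3; // hypothetical desired_height(width) of the active cell
    let reserved = cell_height + 1; // the layout constraint now reserves one extra row
    let mut area_y: u16 = 10; // hypothetical top of active_cell_area
    let mut area_height = reserved;
    area_y += 1; // skip the blank spacer row
    area_height -= 1; // the cell still gets its full 3 rows
    assert_eq!(area_height, cell_height);
    assert_eq!(area_y, 11);
}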


@@ -0,0 +1,5 @@
---
source: tui/src/chatwidget/tests.rs
expression: lines_to_single_string(&approved_lines)
---
• Change Approved foo.txt (+1 -0)


@@ -0,0 +1,6 @@
---
source: tui/src/chatwidget/tests.rs
expression: lines_to_single_string(&proposed_lines)
---
• Proposed Change foo.txt (+1 -0)
1 +hello


@@ -3,6 +3,7 @@ source: tui/src/chatwidget/tests.rs
assertion_line: 728
expression: terminal.backend()
---
" "
"? Codex wants to run echo hello world "
" "
"Model wants to run a command "


@@ -0,0 +1,11 @@
---
source: tui/src/chatwidget/tests.rs
expression: terminal.backend()
---
" "
"? Codex wants to run echo hello world "
" "
"▌Allow command? "
"▌ Yes Always No, provide feedback "
"▌ Approve and run the command "
" "


@@ -1,8 +1,9 @@
---
source: tui/src/chatwidget/tests.rs
assertion_line: 763
assertion_line: 794
expression: terminal.backend()
---
" "
"The model wants to apply changes "
" "
"This will grant write access to /tmp for the remainder of this session. "


@@ -0,0 +1,57 @@
---
source: tui/src/chatwidget/tests.rs
expression: visible_after
---
> I’m going to scan the workspace and Cargo manifests to see build profiles and
dependencies that impact binary size. Then I’ll summarize the main causes.
• Explored
└ List ls -la
Read Cargo.toml
• Ran
└ for d in ansi-escape apply-patch arg0 cli common core exec execpolicy
file-search linux-sandbox login mcp-client mcp-server mcp-types ollama
tui; do echo "--- $d/Cargo.toml"; sed -n '1,200p' $d/Cargo.toml; echo;
done
• Explored
└ Read Cargo.toml
> Here’s what’s driving size in this workspace’s binaries.
Main Causes
- Static linking style: Each bin (codex, codex-tui, codex-exec,
codex-mcp-server, etc.) statically links its full dependency graph, so common
code isn’t shared at runtime across executables.
- Heavy deps (HTTP/TLS): reqwest brings in Hyper, HTTP/2, compressors,
and a TLS stack (rustls by default; OpenSSL on musl). In core, login, tui,
and ollama you enable reqwest with json/stream, which still pulls a large
transitive set.
- Image/terminal stack: codex-tui includes image (with jpeg), ratatui,
crossterm, and ratatui-image, which together contribute significant code size
(decoders, pixel ops, terminal backends).
- Parsers/VMs: tree-sitter + tree-sitter-bash (in core and apply-patch) and
starlark (in execpolicy) include sizeable parser/VM tables and runtimes.
- Tokio runtime: Broad tokio features (rt-multi-thread, macros, process,
signal) across many crates inflate code size even if only subsets are used
per bin.
- Panic + backtraces: Default panic = unwind and backtrace support keep
unwinding tables and symbols that add weight.
- Per-target OpenSSL (musl): For *-unknown-linux-musl, core enables
openssl-sys with vendored, compiling OpenSSL into the binary—this adds
multiple megabytes per executable.
Build-Mode Notes
- Release settings: You use lto = "fat" and codegen-units = 1 (good for size),
but strip = "symbols" keeps debuginfo. Debuginfo is often the largest single
contributor; if you build in release with that setting, binaries can still
be large.
- Debug builds: cargo build (dev profile) includes full debuginfo, no LTO, and
assertions—outputs are much larger than cargo build --release.
If you want, I can outline targeted trims (e.g., strip = "debuginfo",
opt-level = "z", panic abort, tighter tokio/reqwest features) and estimate
impact per binary.


@@ -1,7 +1,6 @@
---
source: tui/src/chatwidget/tests.rs
assertion_line: 779
expression: terminal.backend()
---
"▌ Ask Codex to do anything "
" "
" ⏎ send Ctrl+J newline Ctrl+T transc"


@@ -1,6 +1,5 @@
---
source: tui/src/chatwidget/tests.rs
assertion_line: 779
expression: terminal.backend()
---
" "


@@ -1,7 +1,6 @@
---
source: tui/src/chatwidget/tests.rs
assertion_line: 807
expression: terminal.backend()
---
" Thinking (0s • Esc to interrupt) "
"▌ Ask Codex to do anything "
" "


@@ -1,6 +1,5 @@
---
source: tui/src/chatwidget/tests.rs
assertion_line: 807
expression: terminal.backend()
---
" "


@@ -0,0 +1,15 @@
---
source: tui/src/chatwidget/tests.rs
expression: visual
---
> I’m going to search the repo for where “Change Approved” is rendered to update
that view.
• Explored
└ Search Change Approved
Read diff_render.rs
Investigating rendering code (0s • Esc to interrupt)
▌Summarize recent commits
⏎ send Ctrl+J newline Ctrl+T transcript Ctrl+C quit


@@ -2,5 +2,4 @@
source: tui/src/chatwidget/tests.rs
expression: combined
---
codex
Here is the result.
> Here is the result.


@@ -0,0 +1,5 @@
---
source: tui/src/chatwidget/tests.rs
expression: blob
---
🖐  '/model' is disabled while a task is in progress.


@@ -0,0 +1,6 @@
---
source: tui/src/chatwidget/tests.rs
expression: blob1
---
⠋ Exploring
└ List ls -la


@@ -0,0 +1,6 @@
---
source: tui/src/chatwidget/tests.rs
expression: blob2
---
• Explored
└ List ls -la


@@ -0,0 +1,7 @@
---
source: tui/src/chatwidget/tests.rs
expression: blob3
---
⠋ Exploring
└ List ls -la
Read foo.txt


@@ -0,0 +1,7 @@
---
source: tui/src/chatwidget/tests.rs
expression: blob4
---
• Explored
└ List ls -la
Read foo.txt


@@ -0,0 +1,7 @@
---
source: tui/src/chatwidget/tests.rs
expression: blob5
---
• Explored
└ List ls -la
Read foo.txt


@@ -0,0 +1,7 @@
---
source: tui/src/chatwidget/tests.rs
expression: blob6
---
• Explored
└ List ls -la
Read foo.txt, bar.txt


@@ -2,5 +2,4 @@
source: tui/src/chatwidget/tests.rs
expression: combined
---
codex
Here is the result.
> Here is the result.


@@ -2,5 +2,4 @@
source: tui/src/chatwidget/tests.rs
expression: exec_blob
---
>_
✗ ⌨sleep 1
• Ran sleep 1


@@ -1,6 +1,5 @@
---
source: tui/src/chatwidget/tests.rs
assertion_line: 878
expression: terminal.backend()
---
" "


@@ -1,8 +1,9 @@
---
source: tui/src/chatwidget/tests.rs
assertion_line: 851
assertion_line: 921
expression: terminal.backend()
---
" "
"? Codex wants to run echo 'hello world' "
" "
"Codex wants to run a command "

File diff suppressed because it is too large.


@@ -1 +0,0 @@
pub(crate) const DEFAULT_WRAP_COLS: u16 = 80;


@@ -1,16 +1,17 @@
use crossterm::terminal;
use ratatui::style::Color;
use ratatui::style::Modifier;
use ratatui::style::Style;
use ratatui::style::Stylize;
use ratatui::text::Line as RtLine;
use ratatui::text::Span as RtSpan;
use std::collections::HashMap;
use std::path::Path;
use std::path::PathBuf;
use crate::common::DEFAULT_WRAP_COLS;
use codex_core::protocol::FileChange;
use crate::exec_command::relativize_to_home;
use crate::history_cell::PatchEventType;
use codex_core::git_info::get_git_repo_root;
use codex_core::protocol::FileChange;
const SPACES_AFTER_LINE_NUMBER: usize = 6;
@@ -22,205 +23,199 @@ enum DiffLineType {
}
pub(crate) fn create_diff_summary(
title: &str,
changes: &HashMap<PathBuf, FileChange>,
event_type: PatchEventType,
cwd: &Path,
wrap_cols: usize,
) -> Vec<RtLine<'static>> {
struct FileSummary {
display_path: String,
added: usize,
removed: usize,
}
let count_from_unified = |diff: &str| -> (usize, usize) {
if let Ok(patch) = diffy::Patch::from_str(diff) {
patch
.hunks()
.iter()
.flat_map(|h| h.lines())
.fold((0, 0), |(a, d), l| match l {
diffy::Line::Insert(_) => (a + 1, d),
diffy::Line::Delete(_) => (a, d + 1),
_ => (a, d),
})
} else {
// Fallback: manual scan to preserve counts even for unparsable diffs
let mut adds = 0usize;
let mut dels = 0usize;
for l in diff.lines() {
if l.starts_with("+++") || l.starts_with("---") || l.starts_with("@@") {
continue;
}
match l.as_bytes().first() {
Some(b'+') => adds += 1,
Some(b'-') => dels += 1,
_ => {}
}
let rows = collect_rows(changes);
let header_kind = match event_type {
PatchEventType::ApplyBegin { auto_approved } => {
if auto_approved {
HeaderKind::Edited
} else {
HeaderKind::ChangeApproved
}
(adds, dels)
}
PatchEventType::ApprovalRequest => HeaderKind::ProposedChange,
};
let mut files: Vec<FileSummary> = Vec::new();
for (path, change) in changes.iter() {
match change {
FileChange::Add { content } => files.push(FileSummary {
display_path: path.display().to_string(),
added: content.lines().count(),
removed: 0,
}),
FileChange::Delete => files.push(FileSummary {
display_path: path.display().to_string(),
added: 0,
removed: std::fs::read_to_string(path)
.ok()
.map(|s| s.lines().count())
.unwrap_or(0),
}),
FileChange::Update {
unified_diff,
move_path,
} => {
let (added, removed) = count_from_unified(unified_diff);
let display_path = if let Some(new_path) = move_path {
format!("{} → {}", path.display(), new_path.display())
} else {
path.display().to_string()
};
files.push(FileSummary {
display_path,
added,
removed,
});
}
}
}
let file_count = files.len();
let total_added: usize = files.iter().map(|f| f.added).sum();
let total_removed: usize = files.iter().map(|f| f.removed).sum();
let noun = if file_count == 1 { "file" } else { "files" };
let mut out: Vec<RtLine<'static>> = Vec::new();
// Header
let mut header_spans: Vec<RtSpan<'static>> = Vec::new();
header_spans.push(RtSpan::styled(
title.to_owned(),
Style::default()
.fg(Color::Magenta)
.add_modifier(Modifier::BOLD),
));
header_spans.push(RtSpan::raw(" to "));
header_spans.push(RtSpan::raw(format!("{file_count} {noun} ")));
header_spans.push(RtSpan::raw("("));
header_spans.push(RtSpan::styled(
format!("+{total_added}"),
Style::default().fg(Color::Green),
));
header_spans.push(RtSpan::raw(" "));
header_spans.push(RtSpan::styled(
format!("-{total_removed}"),
Style::default().fg(Color::Red),
));
header_spans.push(RtSpan::raw(")"));
out.push(RtLine::from(header_spans));
// Dimmed per-file lines with prefix
for (idx, f) in files.iter().enumerate() {
let mut spans: Vec<RtSpan<'static>> = Vec::new();
spans.push(RtSpan::raw(f.display_path.clone()));
// Show per-file +/- counts only when there are multiple files
if file_count > 1 {
spans.push(RtSpan::raw(" ("));
spans.push(RtSpan::styled(
format!("+{}", f.added),
Style::default().fg(Color::Green),
));
spans.push(RtSpan::raw(" "));
spans.push(RtSpan::styled(
format!("-{}", f.removed),
Style::default().fg(Color::Red),
));
spans.push(RtSpan::raw(")"));
}
let mut line = RtLine::from(spans);
let prefix = if idx == 0 { "" } else { " " };
line.spans.insert(0, prefix.into());
line.spans
.iter_mut()
.for_each(|span| span.style = span.style.add_modifier(Modifier::DIM));
out.push(line);
}
let show_details = matches!(
event_type,
PatchEventType::ApplyBegin {
auto_approved: true
} | PatchEventType::ApprovalRequest
);
if show_details {
out.extend(render_patch_details(changes));
}
out
render_changes_block(rows, wrap_cols, header_kind, cwd)
}
fn render_patch_details(changes: &HashMap<PathBuf, FileChange>) -> Vec<RtLine<'static>> {
let mut out: Vec<RtLine<'static>> = Vec::new();
let term_cols: usize = terminal::size()
.map(|(w, _)| w as usize)
.unwrap_or(DEFAULT_WRAP_COLS.into());
// Shared row for per-file presentation
#[derive(Clone)]
struct Row {
#[allow(dead_code)]
path: PathBuf,
move_path: Option<PathBuf>,
added: usize,
removed: usize,
change: FileChange,
}
for (index, (path, change)) in changes.iter().enumerate() {
let is_first_file = index == 0;
// Add separator only between files (not at the very start)
if !is_first_file {
out.push(RtLine::from(vec![
RtSpan::raw(" "),
RtSpan::styled("...", style_dim()),
]));
fn collect_rows(changes: &HashMap<PathBuf, FileChange>) -> Vec<Row> {
let mut rows: Vec<Row> = Vec::new();
for (path, change) in changes.iter() {
let (added, removed) = match change {
FileChange::Add { content } => (content.lines().count(), 0),
FileChange::Delete { content } => (0, content.lines().count()),
FileChange::Update { unified_diff, .. } => calculate_add_remove_from_diff(unified_diff),
};
let move_path = match change {
FileChange::Update {
move_path: Some(new),
..
} => Some(new.clone()),
_ => None,
};
rows.push(Row {
path: path.clone(),
move_path,
added,
removed,
change: change.clone(),
});
}
rows.sort_by_key(|r| r.path.clone());
rows
}
enum HeaderKind {
ProposedChange,
Edited,
ChangeApproved,
}
fn render_changes_block(
rows: Vec<Row>,
wrap_cols: usize,
header_kind: HeaderKind,
cwd: &Path,
) -> Vec<RtLine<'static>> {
let mut out: Vec<RtLine<'static>> = Vec::new();
let term_cols = wrap_cols;
fn render_line_count_summary(added: usize, removed: usize) -> Vec<RtSpan<'static>> {
let mut spans = Vec::new();
spans.push("(".into());
spans.push(format!("+{added}").green());
spans.push(" ".into());
spans.push(format!("-{removed}").red());
spans.push(")".into());
spans
}
let render_path = |row: &Row| -> Vec<RtSpan<'static>> {
let mut spans = Vec::new();
spans.push(display_path_for(&row.path, cwd).into());
if let Some(move_path) = &row.move_path {
spans.push(format!(" → {}", display_path_for(move_path, cwd)).into());
}
match change {
spans
};
// Header
let total_added: usize = rows.iter().map(|r| r.added).sum();
let total_removed: usize = rows.iter().map(|r| r.removed).sum();
let file_count = rows.len();
let noun = if file_count == 1 { "file" } else { "files" };
let mut header_spans: Vec<RtSpan<'static>> = vec!["• ".into()];
match header_kind {
HeaderKind::ProposedChange => {
header_spans.push("Proposed Change".bold());
if let [row] = &rows[..] {
header_spans.push(" ".into());
header_spans.extend(render_path(row));
header_spans.push(" ".into());
header_spans.extend(render_line_count_summary(row.added, row.removed));
} else {
header_spans.push(format!(" to {file_count} {noun} ").into());
header_spans.extend(render_line_count_summary(total_added, total_removed));
}
}
HeaderKind::Edited => {
if let [row] = &rows[..] {
let verb = match &row.change {
FileChange::Add { .. } => "Added",
FileChange::Delete { .. } => "Deleted",
_ => "Edited",
};
header_spans.push(verb.bold());
header_spans.push(" ".into());
header_spans.extend(render_path(row));
header_spans.push(" ".into());
header_spans.extend(render_line_count_summary(row.added, row.removed));
} else {
header_spans.push("Edited".bold());
header_spans.push(format!(" {file_count} {noun} ").into());
header_spans.extend(render_line_count_summary(total_added, total_removed));
}
}
HeaderKind::ChangeApproved => {
header_spans.push("Change Approved".bold());
if let [row] = &rows[..] {
header_spans.push(" ".into());
header_spans.extend(render_path(row));
header_spans.push(" ".into());
header_spans.extend(render_line_count_summary(row.added, row.removed));
} else {
header_spans.push(format!(" {file_count} {noun} ").into());
header_spans.extend(render_line_count_summary(total_added, total_removed));
}
}
}
out.push(RtLine::from(header_spans));
// For Change Approved, we only show the header summary and no per-file/diff details.
if matches!(header_kind, HeaderKind::ChangeApproved) {
return out;
}
for (idx, r) in rows.into_iter().enumerate() {
// Insert a blank separator between file chunks (except before the first)
if idx > 0 {
out.push("".into());
}
// File header line (skip when single-file header already shows the name)
let skip_file_header =
matches!(header_kind, HeaderKind::ProposedChange | HeaderKind::Edited)
&& file_count == 1;
if !skip_file_header {
let mut header: Vec<RtSpan<'static>> = Vec::new();
header.push("  └ ".dim());
header.extend(render_path(&r));
header.push(" ".into());
header.extend(render_line_count_summary(r.added, r.removed));
out.push(RtLine::from(header));
}
match r.change {
FileChange::Add { content } => {
for (i, raw) in content.lines().enumerate() {
let ln = i + 1;
out.extend(push_wrapped_diff_line(
ln,
i + 1,
DiffLineType::Insert,
raw,
term_cols,
));
}
}
FileChange::Delete => {
let original = std::fs::read_to_string(path).unwrap_or_default();
for (i, raw) in original.lines().enumerate() {
let ln = i + 1;
FileChange::Delete { content } => {
for (i, raw) in content.lines().enumerate() {
out.extend(push_wrapped_diff_line(
ln,
i + 1,
DiffLineType::Delete,
raw,
term_cols,
));
}
}
FileChange::Update {
unified_diff,
move_path: _,
} => {
if let Ok(patch) = diffy::Patch::from_str(unified_diff) {
FileChange::Update { unified_diff, .. } => {
if let Ok(patch) = diffy::Patch::from_str(&unified_diff) {
let mut is_first_hunk = true;
for h in patch.hunks() {
// Render a simple separator between non-contiguous hunks
// instead of diff-style @@ headers.
if !is_first_hunk {
out.push(RtLine::from(vec![
RtSpan::raw(" "),
RtSpan::styled("", style_dim()),
]));
out.push(RtLine::from(vec![" ".into(), "⋮".dim()]));
}
is_first_hunk = false;
@@ -265,13 +260,41 @@ fn render_patch_details(changes: &HashMap<PathBuf, FileChange>) -> Vec<RtLine<'s
}
}
}
out.push(RtLine::from(RtSpan::raw("")));
}
out
}
fn display_path_for(path: &Path, cwd: &Path) -> String {
let path_in_same_repo = match (get_git_repo_root(cwd), get_git_repo_root(path)) {
(Some(cwd_repo), Some(path_repo)) => cwd_repo == path_repo,
_ => false,
};
let chosen = if path_in_same_repo {
pathdiff::diff_paths(path, cwd).unwrap_or_else(|| path.to_path_buf())
} else {
relativize_to_home(path).unwrap_or_else(|| path.to_path_buf())
};
chosen.display().to_string()
}
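For the same-repo branch above, `pathdiff::diff_paths(path, base)` yields `path` relative to `base`; a hedged illustration with made-up paths:
#[test]
fn display_path_sketch_same_repo() {
    use std::path::Path;
    use std::path::PathBuf;
    // Illustrative only: a file in the same repo as the cwd is shown relative to the cwd.
    let rel = pathdiff::diff_paths(Path::new("/repo/src/lib.rs"), Path::new("/repo"));
    assert_eq!(rel, Some(PathBuf::from("src/lib.rs")));
}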
fn calculate_add_remove_from_diff(diff: &str) -> (usize, usize) {
if let Ok(patch) = diffy::Patch::from_str(diff) {
patch
.hunks()
.iter()
.flat_map(|h| h.lines())
.fold((0, 0), |(a, d), l| match l {
diffy::Line::Insert(_) => (a + 1, d),
diffy::Line::Delete(_) => (a, d + 1),
diffy::Line::Context(_) => (a, d),
})
} else {
// For unparsable diffs, return 0 for both counts.
(0, 0)
}
}
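As a quick sanity check of the counting logic (a sketch, not from the test suite), a one-line replacement parses as one insert plus one delete:
#[test]
fn add_remove_counts_sketch() {
    // diffy renders this as one Delete line and one Insert line in a single hunk.
    let patch = diffy::create_patch("one\n", "one changed\n").to_string();
    assert_eq!(calculate_add_remove_from_diff(&patch), (1, 1));
}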
fn push_wrapped_diff_line(
line_number: usize,
kind: DiffLineType,
@@ -290,10 +313,10 @@ fn push_wrapped_diff_line(
let prefix_cols = indent.len() + ln_str.len() + gap_after_ln;
let mut first = true;
let (sign_opt, line_style) = match kind {
DiffLineType::Insert => (Some('+'), Some(style_add())),
DiffLineType::Delete => (Some('-'), Some(style_del())),
DiffLineType::Context => (None, None),
let (sign_char, line_style) = match kind {
DiffLineType::Insert => ('+', style_add()),
DiffLineType::Delete => ('-', style_del()),
DiffLineType::Context => (' ', style_context()),
};
let mut lines: Vec<RtLine<'static>> = Vec::new();
@@ -301,9 +324,7 @@ fn push_wrapped_diff_line(
// Fit the content for the current terminal row:
// compute how many columns are available after the prefix, then split
// at a UTF-8 character boundary so this row's chunk fits exactly.
let available_content_cols = term_cols
.saturating_sub(if first { prefix_cols + 1 } else { prefix_cols })
.max(1);
let available_content_cols = term_cols.saturating_sub(prefix_cols + 1).max(1);
let split_at_byte_index = remaining_text
.char_indices()
.nth(available_content_cols)
@@ -313,41 +334,22 @@ fn push_wrapped_diff_line(
remaining_text = rest;
if first {
let mut spans: Vec<RtSpan<'static>> = Vec::new();
spans.push(RtSpan::raw(indent));
spans.push(RtSpan::styled(ln_str.clone(), style_dim()));
spans.push(RtSpan::raw(" ".repeat(gap_after_ln)));
// Always include a sign character at the start of the displayed chunk
// ('+' for insert, '-' for delete, ' ' for context) so gutters align.
let sign_char = sign_opt.unwrap_or(' ');
let display_chunk = format!("{sign_char}{chunk}");
let content_span = match line_style {
Some(style) => RtSpan::styled(display_chunk, style),
None => RtSpan::raw(display_chunk),
};
spans.push(content_span);
let mut line = RtLine::from(spans);
if let Some(style) = line_style {
line.style = line.style.patch(style);
}
lines.push(line);
// Build gutter (indent + line number + spacing) as a dimmed span
let gutter = format!("{indent}{ln_str}{}", " ".repeat(gap_after_ln));
// Content with a sign ('+'/'-'/' ') styled per diff kind
let content = format!("{sign_char}{chunk}");
lines.push(RtLine::from(vec![
RtSpan::styled(gutter, style_gutter()),
RtSpan::styled(content, line_style),
]));
first = false;
} else {
// Continuation lines keep a space for the sign column so content aligns
let hang_prefix = format!(
"{indent}{}{} ",
" ".repeat(ln_str.len()),
" ".repeat(gap_after_ln)
);
let content_span = match line_style {
Some(style) => RtSpan::styled(chunk.to_string(), style),
None => RtSpan::raw(chunk.to_string()),
};
let mut line = RtLine::from(vec![RtSpan::raw(hang_prefix), content_span]);
if let Some(style) = line_style {
line.style = line.style.patch(style);
}
lines.push(line);
let gutter = format!("{indent}{} ", " ".repeat(ln_str.len() + gap_after_ln));
lines.push(RtLine::from(vec![
RtSpan::styled(gutter, style_gutter()),
RtSpan::styled(chunk.to_string(), line_style),
]));
}
if remaining_text.is_empty() {
break;
@@ -356,10 +358,14 @@ fn push_wrapped_diff_line(
lines
}
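The width budget above can be made concrete with hypothetical gutter sizes (the 4 + 2 + 6 split below is illustrative, not taken from the code):
#[test]
fn wrap_budget_sketch() {
    // Hypothetical gutter: 4 indent columns + a 2-digit line number + 6 spacing columns.
    let term_cols = 80usize;
    let prefix_cols = 4 + 2 + 6;
    // One extra column is reserved for the sign ('+', '-', or ' ') on every row.
    let available_content_cols = term_cols.saturating_sub(prefix_cols + 1).max(1);
    assert_eq!(available_content_cols, 67);
}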
fn style_dim() -> Style {
fn style_gutter() -> Style {
Style::default().add_modifier(Modifier::DIM)
}
fn style_context() -> Style {
Style::default()
}
fn style_add() -> Style {
Style::default().fg(Color::Green)
}
@@ -378,6 +384,12 @@ mod tests {
use ratatui::widgets::Paragraph;
use ratatui::widgets::WidgetRef;
use ratatui::widgets::Wrap;
fn diff_summary_for_tests(
changes: &HashMap<PathBuf, FileChange>,
event_type: PatchEventType,
) -> Vec<RtLine<'static>> {
create_diff_summary(changes, event_type, &PathBuf::from("/"), 80)
}
fn snapshot_lines(name: &str, lines: Vec<RtLine<'static>>, width: u16, height: u16) {
let mut terminal = Terminal::new(TestBackend::new(width, height)).expect("terminal");
@@ -391,6 +403,23 @@ mod tests {
assert_snapshot!(name, terminal.backend());
}
fn snapshot_lines_text(name: &str, lines: &[RtLine<'static>]) {
// Convert Lines to plain text rows and trim trailing spaces so it's
// easier to validate indentation visually in snapshots.
let text = lines
.iter()
.map(|l| {
l.spans
.iter()
.map(|s| s.content.as_ref())
.collect::<String>()
})
.map(|s| s.trim_end().to_string())
.collect::<Vec<_>>()
.join("\n");
assert_snapshot!(name, text);
}
#[test]
fn ui_snapshot_add_details() {
let mut changes: HashMap<PathBuf, FileChange> = HashMap::new();
@@ -401,8 +430,7 @@ mod tests {
},
);
let lines =
create_diff_summary("proposed patch", &changes, PatchEventType::ApprovalRequest);
let lines = diff_summary_for_tests(&changes, PatchEventType::ApprovalRequest);
snapshot_lines("add_details", lines, 80, 10);
}
@@ -423,8 +451,7 @@ mod tests {
},
);
let lines =
create_diff_summary("proposed patch", &changes, PatchEventType::ApprovalRequest);
let lines = diff_summary_for_tests(&changes, PatchEventType::ApprovalRequest);
snapshot_lines("update_details_with_rename", lines, 80, 12);
}
@@ -435,11 +462,10 @@ mod tests {
let long_line = "this is a very long line that should wrap across multiple terminal columns and continue";
// Call the wrapping function directly so we can precisely control the width
let lines =
push_wrapped_diff_line(1, DiffLineType::Insert, long_line, DEFAULT_WRAP_COLS.into());
let lines = push_wrapped_diff_line(1, DiffLineType::Insert, long_line, 80);
// Render into a small terminal to capture the visual layout
snapshot_lines("wrap_behavior_insert", lines, DEFAULT_WRAP_COLS + 10, 8);
snapshot_lines("wrap_behavior_insert", lines, 90, 8);
}
#[test]
@@ -458,8 +484,7 @@ mod tests {
},
);
let lines =
create_diff_summary("proposed patch", &changes, PatchEventType::ApprovalRequest);
let lines = diff_summary_for_tests(&changes, PatchEventType::ApprovalRequest);
snapshot_lines("single_line_replacement_counts", lines, 80, 8);
}
@@ -480,8 +505,7 @@ mod tests {
},
);
let lines =
create_diff_summary("proposed patch", &changes, PatchEventType::ApprovalRequest);
let lines = diff_summary_for_tests(&changes, PatchEventType::ApprovalRequest);
snapshot_lines("blank_context_line", lines, 80, 10);
}
@@ -503,10 +527,232 @@ mod tests {
},
);
let lines =
create_diff_summary("proposed patch", &changes, PatchEventType::ApprovalRequest);
let lines = diff_summary_for_tests(&changes, PatchEventType::ApprovalRequest);
// Height is large enough to show both hunks and the separator
snapshot_lines("vertical_ellipsis_between_hunks", lines, 80, 16);
}
#[test]
fn ui_snapshot_apply_update_block() {
let mut changes: HashMap<PathBuf, FileChange> = HashMap::new();
let original = "line one\nline two\nline three\n";
let modified = "line one\nline two changed\nline three\n";
let patch = diffy::create_patch(original, modified).to_string();
changes.insert(
PathBuf::from("example.txt"),
FileChange::Update {
unified_diff: patch,
move_path: None,
},
);
for (name, auto_approved) in [
("apply_update_block", true),
("apply_update_block_manual", false),
] {
let lines =
diff_summary_for_tests(&changes, PatchEventType::ApplyBegin { auto_approved });
snapshot_lines(name, lines, 80, 12);
}
}
#[test]
fn ui_snapshot_apply_update_with_rename_block() {
let mut changes: HashMap<PathBuf, FileChange> = HashMap::new();
let original = "A\nB\nC\n";
let modified = "A\nB changed\nC\n";
let patch = diffy::create_patch(original, modified).to_string();
changes.insert(
PathBuf::from("old_name.rs"),
FileChange::Update {
unified_diff: patch,
move_path: Some(PathBuf::from("new_name.rs")),
},
);
let lines = diff_summary_for_tests(
&changes,
PatchEventType::ApplyBegin {
auto_approved: true,
},
);
snapshot_lines("apply_update_with_rename_block", lines, 80, 12);
}
#[test]
fn ui_snapshot_apply_multiple_files_block() {
// Two files: one update and one add, to exercise combined header and per-file rows
let mut changes: HashMap<PathBuf, FileChange> = HashMap::new();
// File a.txt: single-line replacement (one delete, one insert)
let patch_a = diffy::create_patch("one\n", "one changed\n").to_string();
changes.insert(
PathBuf::from("a.txt"),
FileChange::Update {
unified_diff: patch_a,
move_path: None,
},
);
// File b.txt: newly added with one line
changes.insert(
PathBuf::from("b.txt"),
FileChange::Add {
content: "new\n".to_string(),
},
);
let lines = diff_summary_for_tests(
&changes,
PatchEventType::ApplyBegin {
auto_approved: true,
},
);
snapshot_lines("apply_multiple_files_block", lines, 80, 14);
}
#[test]
fn ui_snapshot_apply_add_block() {
let mut changes: HashMap<PathBuf, FileChange> = HashMap::new();
changes.insert(
PathBuf::from("new_file.txt"),
FileChange::Add {
content: "alpha\nbeta\n".to_string(),
},
);
let lines = diff_summary_for_tests(
&changes,
PatchEventType::ApplyBegin {
auto_approved: true,
},
);
snapshot_lines("apply_add_block", lines, 80, 10);
}
#[test]
fn ui_snapshot_apply_delete_block() {
// Write a temporary file so the delete renderer can read original content
let tmp_path = PathBuf::from("tmp_delete_example.txt");
std::fs::write(&tmp_path, "first\nsecond\nthird\n").expect("write tmp file");
let mut changes: HashMap<PathBuf, FileChange> = HashMap::new();
changes.insert(
tmp_path.clone(),
FileChange::Delete {
content: "first\nsecond\nthird\n".to_string(),
},
);
let lines = diff_summary_for_tests(
&changes,
PatchEventType::ApplyBegin {
auto_approved: true,
},
);
// Cleanup best-effort; rendering has already read the file
let _ = std::fs::remove_file(&tmp_path);
snapshot_lines("apply_delete_block", lines, 80, 12);
}
#[test]
fn ui_snapshot_apply_update_block_wraps_long_lines() {
// Create a patch with a long modified line to force wrapping
let original = "line 1\nshort\nline 3\n";
let modified = "line 1\nshort this_is_a_very_long_modified_line_that_should_wrap_across_multiple_terminal_columns_and_continue_even_further_beyond_eighty_columns_to_force_multiple_wraps\nline 3\n";
let patch = diffy::create_patch(original, modified).to_string();
let mut changes: HashMap<PathBuf, FileChange> = HashMap::new();
changes.insert(
PathBuf::from("long_example.txt"),
FileChange::Update {
unified_diff: patch,
move_path: None,
},
);
let lines = create_diff_summary(
&changes,
PatchEventType::ApplyBegin {
auto_approved: true,
},
&PathBuf::from("/"),
72,
);
// Render with backend width wider than wrap width to avoid Paragraph auto-wrap.
snapshot_lines("apply_update_block_wraps_long_lines", lines, 80, 12);
}
#[test]
fn ui_snapshot_apply_update_block_wraps_long_lines_text() {
// This mirrors the desired layout example: sign only on first inserted line,
// subsequent wrapped pieces start aligned under the line number gutter.
let original = "1\n2\n3\n4\n";
let modified = "1\nadded long line which wraps and_if_there_is_a_long_token_it_will_be_broken\n3\n4 context line which also wraps across\n";
let patch = diffy::create_patch(original, modified).to_string();
let mut changes: HashMap<PathBuf, FileChange> = HashMap::new();
changes.insert(
PathBuf::from("wrap_demo.txt"),
FileChange::Update {
unified_diff: patch,
move_path: None,
},
);
let mut lines = create_diff_summary(
&changes,
PatchEventType::ApplyBegin {
auto_approved: true,
},
&PathBuf::from("/"),
28,
);
// Drop the combined header for this text-only snapshot
if !lines.is_empty() {
lines.remove(0);
}
snapshot_lines_text("apply_update_block_wraps_long_lines_text", &lines);
}
#[test]
fn ui_snapshot_apply_update_block_relativizes_path() {
let cwd = std::env::current_dir().unwrap_or_else(|_| PathBuf::from("/"));
let abs_old = cwd.join("abs_old.rs");
let abs_new = cwd.join("abs_new.rs");
let original = "X\nY\n";
let modified = "X changed\nY\n";
let patch = diffy::create_patch(original, modified).to_string();
let mut changes: HashMap<PathBuf, FileChange> = HashMap::new();
changes.insert(
abs_old.clone(),
FileChange::Update {
unified_diff: patch,
move_path: Some(abs_new.clone()),
},
);
let lines = create_diff_summary(
&changes,
PatchEventType::ApplyBegin {
auto_approved: true,
},
&cwd,
80,
);
snapshot_lines("apply_update_block_relativizes_path", lines, 80, 10);
}
}

File diff suppressed because it is too large.


@@ -21,7 +21,8 @@ use ratatui::text::Span;
use textwrap::Options as TwOptions;
use textwrap::WordSplitter;
/// Insert `lines` above the viewport.
/// Insert `lines` above the viewport using the terminal's backend writer
/// (avoids direct stdout references).
pub(crate) fn insert_history_lines(terminal: &mut tui::Terminal, lines: Vec<Line>) {
let mut out = std::io::stdout();
insert_history_lines_to_writer(terminal, &mut out, lines);
@@ -262,7 +263,10 @@ where
}
/// Word-aware wrapping for a list of `Line`s preserving styles.
pub(crate) fn word_wrap_lines(lines: &[Line], width: u16) -> Vec<Line<'static>> {
pub(crate) fn word_wrap_lines<'a, I>(lines: I, width: u16) -> Vec<Line<'static>>
where
I: IntoIterator<Item = &'a Line<'a>>,
{
let mut out = Vec::new();
let w = width.max(1) as usize;
for line in lines {
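With the broadened `IntoIterator` bound, callers can pass a `&Vec<Line>` or a slice without collecting into a new vector first; a hedged usage sketch (the input lines are illustrative):
#[test]
fn word_wrap_lines_accepts_any_line_iterator() {
    use ratatui::text::Line;
    // Illustrative inputs only; both call forms satisfy I: IntoIterator<Item = &Line>.
    let lines: Vec<Line<'static>> = vec!["hello world".into(), "second line".into()];
    let from_vec = word_wrap_lines(&lines, 5);
    let from_slice = word_wrap_lines(lines.as_slice(), 5);
    assert_eq!(from_vec.len(), from_slice.len());
}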


@@ -34,7 +34,6 @@ mod chatwidget;
mod citation_regex;
mod cli;
mod clipboard_paste;
mod common;
pub mod custom_terminal;
mod diff_render;
mod exec_command;
@@ -64,8 +63,6 @@ mod chatwidget_stream_tests;
#[cfg(not(debug_assertions))]
mod updates;
#[cfg(not(debug_assertions))]
use color_eyre::owo_colors::OwoColorize;
pub use cli::Cli;
@@ -128,6 +125,7 @@ pub async fn run_main(
base_instructions: None,
include_plan_tool: Some(true),
include_apply_patch_tool: None,
include_view_image_tool: None,
disable_response_storage: cli.oss.then_some(true),
show_raw_agent_reasoning: cli.oss.then_some(true),
tools_web_search_request: cli.web_search.then_some(true),


@@ -1,4 +1,4 @@
use codex_core::util::is_inside_git_repo;
use codex_core::git_info::get_git_repo_root;
use codex_login::AuthManager;
use crossterm::event::KeyCode;
use crossterm::event::KeyEvent;
@@ -88,7 +88,7 @@ impl OnboardingScreen {
auth_manager,
}))
}
let is_git_repo = is_inside_git_repo(&cwd);
let is_git_repo = get_git_repo_root(&cwd).is_some();
let highlighted = if is_git_repo {
TrustDirectorySelection::Trust
} else {


@@ -140,16 +140,6 @@ pub(crate) fn log_inbound_app_event(event: &AppEvent) {
});
LOGGER.write_json_line(value);
}
// Internal UI events; still log for fidelity, but avoid heavy payloads.
AppEvent::InsertHistoryLines(lines) => {
let value = json!({
"ts": now_ts(),
"dir": "to_tui",
"kind": "insert_history",
"lines": lines.len(),
});
LOGGER.write_json_line(value);
}
AppEvent::InsertHistoryCell(cell) => {
let value = json!({
"ts": now_ts(),


@@ -52,6 +52,26 @@ impl SlashCommand {
pub fn command(self) -> &'static str {
self.into()
}
/// Whether this command can be run while a task is in progress.
pub fn available_during_task(self) -> bool {
match self {
SlashCommand::New
| SlashCommand::Init
| SlashCommand::Compact
| SlashCommand::Model
| SlashCommand::Approvals
| SlashCommand::Logout => false,
SlashCommand::Diff
| SlashCommand::Mention
| SlashCommand::Status
| SlashCommand::Mcp
| SlashCommand::Quit => true,
#[cfg(debug_assertions)]
SlashCommand::TestApproval => true,
}
}
}
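The '/model' guard exercised in the chatwidget diff above keys off this table; two illustrative assertions (a sketch, not from the upstream tests):
#[test]
fn availability_during_task_sketch() {
    // Matches the "'/model' is disabled while a task is in progress." snapshot above.
    assert!(!SlashCommand::Model.available_during_task());
    assert!(SlashCommand::Status.available_during_task());
}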
/// Return all built-in commands in a Vec paired with their command string.


@@ -1,9 +1,9 @@
---
source: tui/src/diff_render.rs
assertion_line: 765
expression: terminal.backend()
---
"proposed patch to 1 file (+2 -0) "
" └ README.md "
"• Proposed Change README.md (+2 -0) "
" 1 +first line "
" 2 +second line "
" "
@@ -12,3 +12,4 @@ expression: terminal.backend()
" "
" "
" "
" "


@@ -0,0 +1,14 @@
---
source: tui/src/diff_render.rs
expression: terminal.backend()
---
"• Added new_file.txt (+2 -0) "
" 1 +alpha "
" 2 +beta "
" "
" "
" "
" "
" "
" "
" "


@@ -0,0 +1,16 @@
---
source: tui/src/diff_render.rs
expression: terminal.backend()
---
"• Deleted tmp_delete_example.txt (+0 -3) "
" 1 -first "
" 2 -second "
" 3 -third "
" "
" "
" "
" "
" "
" "
" "
" "


@@ -0,0 +1,18 @@
---
source: tui/src/diff_render.rs
expression: terminal.backend()
---
"• Edited 2 files (+2 -1) "
" └ a.txt (+1 -1) "
" 1 -one "
" 1 +one changed "
" "
" └ b.txt (+1 -0) "
" 1 +new "
" "
" "
" "
" "
" "
" "
" "


@@ -0,0 +1,17 @@
---
source: tui/src/diff_render.rs
assertion_line: 748
expression: terminal.backend()
---
"• Edited example.txt (+1 -1) "
" 1 line one "
" 2 -line two "
" 2 +line two changed "
" 3 line three "
" "
" "
" "
" "
" "
" "
" "


@@ -0,0 +1,16 @@
---
source: tui/src/diff_render.rs
expression: terminal.backend()
---
"• Change Approved example.txt (+1 -1) "
" "
" "
" "
" "
" "
" "
" "
" "
" "
" "
" "


@@ -0,0 +1,15 @@
---
source: tui/src/diff_render.rs
assertion_line: 748
expression: terminal.backend()
---
"• Edited abs_old.rs → abs_new.rs (+1 -1) "
" 1 -X "
" 1 +X changed "
" 2 Y "
" "
" "
" "
" "
" "
" "


@@ -0,0 +1,17 @@
---
source: tui/src/diff_render.rs
assertion_line: 748
expression: terminal.backend()
---
"• Edited long_example.txt (+1 -1) "
" 1 line 1 "
" 2 -short "
" 2 +short this_is_a_very_long_modified_line_that_should_wrap_acro "
" ss_multiple_terminal_columns_and_continue_even_further_beyond "
" _eighty_columns_to_force_multiple_wraps "
" 3 line 3 "
" "
" "
" "
" "
" "

Some files were not shown because too many files have changed in this diff.