Compare commits

...

148 Commits
pr2500 ... oss

Author SHA1 Message Date
Ahmed Ibrahim
4e6bc85fc4 oss 2025-09-02 15:19:29 -07:00
Ahmed Ibrahim
8bb57afc84 oss 2025-09-02 15:18:04 -07:00
Ahmed Ibrahim
a8324c5d94 progress 2025-09-02 14:52:44 -07:00
Jeremy Rose
46e35a2345 tui: fix extra blank lines in streamed agent messages (#3065)
Fixes excessive blank lines appearing during agent message streaming.

- Only insert a separator blank line for new, non-streaming history
cells.
- Streaming continuations now append without adding a spacer,
eliminating extra gaps between chunks.

Affected area: TUI display of agent messages (tui/src/app.rs).
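
A minimal sketch of the separator rule, with all names assumed (the actual change is in `tui/src/app.rs`):

```rust
// Hypothetical sketch; names are illustrative, not the real types.
fn push_history_cell(history: &mut Vec<String>, cell: Vec<String>, is_streaming_continuation: bool) {
    // Only brand-new, non-streaming cells get a blank-line spacer;
    // streaming continuations append directly so chunks stay contiguous.
    if !is_streaming_continuation && !history.is_empty() {
        history.push(String::new());
    }
    history.extend(cell);
}
```
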
2025-09-02 13:45:51 -07:00
Reuben Narad
7bcdc5cc7c fix config reference table (#3063)
3 quick fixes to docs/config.md

- Fix the reference table so option lists render correctly
- Correct the default `stream_max_retries` to 5 (old: 10)
- Update the example `approval_policy` to `untrusted` (old: `unless-allow-listed`)
2025-09-02 13:03:11 -07:00
Michael Bolin
4b426f7e1e fix: leverage windows-11-arm for Windows ARM builds (#3062)
This is in support of https://github.com/openai/codex/issues/2979.

Once we have a release out, we can update the npm module and the VS Code
extension to take advantage of this.
2025-09-02 12:56:09 -07:00
Jeremy Rose
fcb62a0fa5 tui: hide '/init' suggestion when AGENTS.md exists (#3038)
Hide the “/init” suggestion in the new-session banner when an
`AGENTS.md` exists anywhere from the repo root down to the current
working directory.

Changes
- Conditional suggestion: use `discover_project_doc_paths(config)` to
suppress `/init` when agents docs are present.
- TUI style cleanup: switch banner construction to `Stylize` helpers
(`.bold()`, `.dim()`, `.into()`), avoiding `Span::styled`/`Span::raw`.
- Fixture update: remove `/init` line in
`tui/tests/fixtures/ideal-binary-response.txt` to match the new banner.

Validation
- Ran formatting and scoped lint fixes: `just fmt` and `just fix -p
codex-tui`.
- Tests: `cargo test -p codex-tui` passed (`176 passed, 0 failed`).

Notes
- No change to the `/init` command itself; only the welcome banner now
adapts based on presence of `AGENTS.md`.
2025-09-02 12:04:32 -07:00
Ahmed Ibrahim
eb40fe3451 Add logs to know when users are changing the model (#3060) 2025-09-02 17:59:07 +00:00
Jeremy Rose
b32c79e371 tui: fix laggy typing (#2922)
we were checking every typed character to see if it was an image. this
involved going to disk, which was slow.

this was a bad interaction between image paste support and burst-paste
detection.
2025-09-02 10:35:29 -07:00
Jeremy Rose
e442ecedab rework message styling (#2877)
https://github.com/user-attachments/assets/cf07f62b-1895-44bb-b9c3-7a12032eb371
2025-09-02 17:29:58 +00:00
Lionel Cheng
3f8d6021ac Fix: Adapt pr template with correct link following doc refacto (#2982)
This PR fixes the link of contributing page in Pull Request template to
the right one following the migration of the section to a dedicated
file.

Signed-off-by: lionelchg <lionel.cheng@hotmail.fr>
2025-09-02 13:05:52 -04:00
Uhyeon Park
7ac6194c22 Bug fix: ignore Enter on empty input to avoid queuing blank messages (#3047)
## Summary
Pressing Enter with an empty composer was treated as a submission, which
queued a blank message while a task was running. This PR suppresses
submission when there is no text and no attachments.

## Root Cause

- ChatComposer returned Submitted even when the trimmed text was empty.
ChatWidget then queued it during a running task, leading to an empty
item appearing in the queued list and being popped later with no effect.

## Changes
- ChatComposer Enter handling: if trimmed text is empty and there are no
attached images, return None instead of Submitted.
- No changes to ChatWidget; behavior naturally stops queuing blanks at
the source.
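
A minimal sketch of that guard, with the types and names assumed:

```rust
use std::path::PathBuf;

// Hypothetical stand-in for the composer's input result.
enum InputResult {
    Submitted(String),
    None,
}

fn handle_enter(text: &str, attached_images: &[PathBuf]) -> InputResult {
    let trimmed = text.trim();
    // Nothing to send: no text and no attachments, so suppress the
    // submission entirely and no blank message is queued upstream.
    if trimmed.is_empty() && attached_images.is_empty() {
        return InputResult::None;
    }
    InputResult::Submitted(trimmed.to_string())
}
```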

## Code Paths

- Modified: `tui/src/bottom_pane/chat_composer.rs`
- Tests added:
    - `tui/src/bottom_pane/chat_composer.rs`: `empty_enter_returns_none`
- `tui/src/chatwidget/tests.rs`:
`empty_enter_during_task_does_not_queue`

## Result

### Before


https://github.com/user-attachments/assets/a40e2f6d-42ba-4a82-928b-8f5458f5884d

### After



https://github.com/user-attachments/assets/958900b7-a566-44fc-b16c-b80380739c92
2025-09-02 13:05:45 -04:00
Jeremy Rose
619436c58f remove extra quote from disabled-command message (#3035)
there was an extra ' floating around for some reason.
2025-09-02 09:46:41 -07:00
dependabot[bot]
1cc6b97227 chore(deps): bump regex-lite from 0.1.6 to 0.1.7 in /codex-rs (#3010)
Bumps [regex-lite](https://github.com/rust-lang/regex) from 0.1.6 to
0.1.7.
Changelog (sourced from [regex-lite's changelog](https://github.com/rust-lang/regex/blob/master/CHANGELOG.md)):

- 0.1.79: Require regex-syntax 0.3.8.
- 0.1.78: PR rust-lang/regex#290: fixes bug #289, which caused some regexes with a certain combination of literals to match incorrectly.
- 0.1.77: PR rust-lang/regex#281: fixes bug #280 by disabling all literal optimizations when a pattern is partially anchored.
- 0.1.76: Tweak criteria for using the Teddy literal matcher.
- 0.1.75: PR rust-lang/regex#275: improves match verification performance in the Teddy SIMD searcher. PR rust-lang/regex#278: replaces a slow substring loop in the Teddy SIMD searcher with Aho-Corasick. Implemented DoubleEndedIterator on regex set match iterators.
- 0.1.74: Release regex-syntax 0.3.5 with a minor bug fix. Fix bugs #272 and #277. PR rust-lang/regex#270: fixes bugs #264, #268, and an unreported bug where the DFA cache size could be drastically underestimated in some cases (leading to high unexpected memory usage).
- 0.1.73: Release `regex-syntax 0.3.4`. Bump the `regex-syntax` dependency version for `regex` to `0.3.4`.
- 0.1.72: PR rust-lang/regex#262: fixes a number of small bugs caught by fuzz testing (AFL).
- 0.1.71: ... (truncated)
Commits:
- 45c3da7: regex-lite-0.1.7
- 873ed80: regex-automata-0.4.10
- ea834f8: regex-syntax-0.8.6
- 86836fb: changelog: 1.11.2
- 63a26c1: cargo: ensure that 'perf' doesn't enable 'std' implicitly (#1150)
- dd96592: doc: clarify CRLF mode effect
- 931dae0: cargo: point `repository` metadata to clonable URLs
- a66fde6: doc: remove references to non-existent parameters
- 1873e96: automata: add `DFA::set_prefilter` method to the DFA types
- 89ff153: doc: fix misspelling typo
- Additional commits viewable in the [compare view](https://github.com/rust-lang/regex/compare/regex-lite-0.1.6...regex-lite-0.1.7)

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-02 09:09:17 -07:00
Michael Bolin
7eee69d821 fix: try to populate the Windows cache for release builds when PRs are put up for review (#2884)
Windows release builds take close to 12 minutes whereas Mac/Linux are
closer to 5. Let's see if this speeds things up?
2025-08-28 23:48:29 -07:00
Michael Bolin
65636802f7 fix: drop Mutexes earlier in MCP server (#2878) 2025-08-28 22:50:16 -07:00
Michael Bolin
c988ce28fe fix: drop Mutex before calling tx_approve.send() (#2876) 2025-08-28 22:49:29 -07:00
Michael Bolin
cb2f952143 fix: remove unnecessary flush() calls (#2873)
Because we are writing to a pipe, these `flush()` calls are unnecessary,
so removing these saves us one syscall per write in these two cases.
2025-08-28 22:41:10 -07:00
Jeremy Rose
7d734bff65 suggest just fix -p in agents.md (#2881) 2025-08-28 22:32:53 -07:00
Michael Bolin
970e466ab3 fix: switch to unbounded channel (#2874)
#2747 encouraged me to audit our codebase for similar issues, as now I
am particularly suspicious that our flaky tests are due to a racy
deadlock.

I asked Codex to audit our code, and one of its suggestions was this:

> **High-Risk Patterns**
>
> All `send_*` methods await on a bounded
`mpsc::Sender<OutgoingMessage>`. If the writer blocks, the channel fills
and the processor task blocks on send, stops draining incoming requests,
and stdin reader eventually blocks on its send. This creates a
backpressure deadlock cycle across the three tasks.
>
> **Recommendations**
> * Server outgoing path: break the backpressure cycle
> * Option A (minimal risk): Change `OutgoingMessageSender` to use an
unbounded channel to decouple producer from stdout. Add rate logging so
floods are visible.
> * Option B (bounded + drop policy): Change `send_*` to try_send and
drop messages (or coalesce) when the queue is full, logging a warning.
This prevents processor stalls at the cost of losing messages under
extreme backpressure.
> * Option C (two-stage buffer): Keep bounded channel, but have a
dedicated “egress” task that drains an unbounded internal queue, writing
to stdout with retries and a shutdown timeout. This centralizes
backpressure policy.

So this PR is Option A.
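
A minimal sketch of Option A under assumed names (not the actual `OutgoingMessageSender` code):

```rust
use tokio::sync::mpsc;

#[tokio::main]
async fn main() {
    // An unbounded channel decouples producers from the stdout writer,
    // so a slow writer can no longer back-pressure the processor task.
    let (tx, mut rx) = mpsc::unbounded_channel::<String>();

    // A dedicated task drains the queue and writes to stdout.
    let writer = tokio::spawn(async move {
        while let Some(msg) = rx.recv().await {
            println!("{msg}");
        }
    });

    // send() is synchronous and non-blocking; it only fails once the
    // receiver is dropped, so producers never stall on a full queue.
    tx.send("hello".to_string()).expect("writer alive");
    drop(tx); // dropping the last sender lets the writer task finish
    writer.await.unwrap();
}
```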

Indeed, we previously used a bounded channel with a capacity of `128`,
but as we discovered recently with #2776, there are certainly cases
where we can get flooded with events.

That said, `test_shell_command_approval_triggers_elicitation` failed on
one build just as I put up this PR, so clearly we are not out of the
woods yet...

**Update:** I think I found the true source of the deadlock! See
https://github.com/openai/codex/pull/2876
2025-08-28 22:20:10 -07:00
Michael Bolin
5d2d3002ef fix: specify --profile to cargo clippy in CI (#2871)
Today we had a breakage in the release build that went unnoticed by CI.
Here is what happened:

- https://github.com/openai/codex/pull/2242 originally added some logic
to do release builds to prevent this from happening
- https://github.com/openai/codex/pull/2276 undid that change to try to
speed things up by removing the step to build all the individual crates
in release mode, assuming the `cargo check` call was sufficient
coverage, which it would have been, had it specified `--profile`

This PR adds `--profile` to the `cargo check` step so we should get the
desired coverage from our build matrix.

Indeed, enabling this in our CI uncovered a warning that is only present
in release mode that was going unnoticed.
2025-08-28 21:43:40 -07:00
agro
bb30996f7c Bugfix: Prevent `brew install codex` in comment from being executed (#2868)
The default install command causes unexpected code to be executed:

```
npm install -g @openai/codex # Alternatively: `brew install codex`
```

The problem is that some environments treat `#` as a literal string
rather than the start of a comment, so the user ends up executing this
instead (because it's in backticks):

```
brew install codex
```

And then the npm command errors (because it tries to install the
package `#`).
2025-08-28 21:40:28 -07:00
dedrisian-oai
3f8184034f Fix CI release build (#2864) 2025-08-29 03:06:10 +00:00
unship
f7cb2f87a0 Bug fix: clone of incoming_tx can lead to deadlock (#2747)
POC code

```rust
use tokio::sync::mpsc;
use std::time::Duration;

#[tokio::main]
async fn main() {
    println!("=== Test 1: Simulating original MCP server pattern ===");
    test_original_pattern().await;
}

async fn test_original_pattern() {
    println!("Testing the original pattern from MCP server...");
    
    // Create channel - this simulates the original incoming_tx/incoming_rx
    let (tx, mut rx) = mpsc::channel::<String>(10);
    
    // Task 1: Simulates stdin reader that will naturally terminate
    let stdin_task = tokio::spawn({
        let tx_clone = tx.clone();
        async move {
            println!("  stdin_task: Started, will send 3 messages then exit");
            for i in 0..3 {
                let msg = format!("Message {}", i);
                if tx_clone.send(msg.clone()).await.is_err() {
                    println!("  stdin_task: Receiver dropped, exiting");
                    break;
                }
                println!("  stdin_task: Sent {}", msg);
                tokio::time::sleep(Duration::from_millis(300)).await;
            }
            println!("  stdin_task: Finished (simulating EOF)");
            // tx_clone is dropped here
        }
    });
    
    // Task 2: Simulates message processor
    let processor_task = tokio::spawn(async move {
        println!("  processor_task: Started, waiting for messages");
        while let Some(msg) = rx.recv().await {
            println!("  processor_task: Processing {}", msg);
            tokio::time::sleep(Duration::from_millis(100)).await;
        }
        println!("  processor_task: Finished (channel closed)");
    });
    
    // Task 3: Simulates stdout writer or other background task
    let background_task = tokio::spawn(async move {
        for i in 0..2 {
            tokio::time::sleep(Duration::from_millis(500)).await;
            println!("  background_task: Tick {}", i);
        }
        println!("  background_task: Finished");
    });
    
    println!("  main: Original tx is still alive here");
    println!("  main: About to call tokio::join! - will this deadlock?");
    
    // This is the pattern from the original code
    let _ = tokio::join!(stdin_task, processor_task, background_task);
}

```

---------

Co-authored-by: Michael Bolin <bolinfest@gmail.com>
2025-08-28 19:28:17 -07:00
Ahmed Ibrahim
9dbe7284d2 Following up on #2371 post commit feedback (#2852)
- Introduce a websearch end event to complement the begin event
- Move the logic of adding the websearch tool into
`create_tools_json_for_responses_api`
- Make it the client's responsibility to toggle the tool on or off
- Other misc items from #2371 post-commit feedback
- Show the query:

<img width="1392" height="151" alt="image"
src="https://github.com/user-attachments/assets/8457f1a6-f851-44cf-bcca-0d4fe460ce89"
/>
2025-08-28 19:24:38 -07:00
dedrisian-oai
b8e8454b3f Custom /prompts (#2696)
Adds custom `/prompts` to `~/.codex/prompts/<command>.md`.

<img width="239" height="107" alt="Screenshot 2025-08-25 at 6 22 42 PM"
src="https://github.com/user-attachments/assets/fe6ebbaa-1bf6-49d3-95f9-fdc53b752679"
/>

---

Details:

1. Adds `Op::ListCustomPrompts` to core.
2. Returns `ListCustomPromptsResponse` with list of `CustomPrompt`
(name, content).
3. TUI calls the operation on load, and populates the custom prompts
(excluding prompts that collide with builtins).
4. Selecting the custom prompt automatically sends the prompt to the
agent.
2025-08-29 02:16:39 +00:00
HaxagonusD
bbcfd63aba UI: Make slash commands bold in welcome message (#2762)
## What
Make slash commands (/init, /status, /approvals, /model) bold and white
in the welcome message for better visibility.
<img width="990" height="286" alt="image"
src="https://github.com/user-attachments/assets/13f90e96-b84a-4659-aab4-576d84a31af7"
/>


## Why
The current welcome message displays all text in a dimmed style, making
the slash commands less prominent. Users need to quickly identify
available commands when starting Codex.

## How
Modified `tui/src/history_cell.rs` in the `new_session_info` function
to:
- Split each command line into separate spans
- Apply bold white styling to command text (`/init`, `/status`, etc.)
- Keep descriptions dimmed for visual contrast
- Maintain existing layout and spacing

## Test plan
- [ ] Run the TUI and verify commands appear bold in the welcome message
- [ ] Ensure descriptions remain dimmed for readability
- [ ] Confirm all existing tests pass
2025-08-28 18:12:41 -07:00
Eric Traut
6209d49520 Changed OAuth success screen to use the string "Codex" rather than "Codex CLI" (#2737) 2025-08-28 21:21:10 +00:00
Gabriel Peal
c3a8b96a60 Add a VS Code Extension issue template (#2853)
Template mostly copied from the bug template
2025-08-28 16:56:52 -04:00
Ahmed Ibrahim
c9ca63dc1e burst paste edge cases (#2683)
This PR fixes two edge cases in managing burst paste (mainly on
PowerShell).
Bugs:
- Needed a key event after paste to render the pasted items

> ChatComposer::flush_paste_burst_if_due() flushes on timeout. Called:
>     - Pre-render in App on TuiEvent::Draw.
>     - Via a delayed frame
>
BottomPane::request_redraw_in(ChatComposer::recommended_paste_flush_delay()).

- Parses two key events separately before starting parsing burst paste

> When threshold is crossed, pull preceding burst chars out of the
textarea and prepend to paste_burst_buffer, then keep buffering.

- Integrates with #2567 to bring image pasting to windows.
2025-08-28 12:54:12 -07:00
Ahmed Ibrahim
ed06f90fb3 Race condition in compact (#2746)
This fixes the flakiness in
`summarize_context_three_requests_and_instructions` because we should
trim history before sending task complete.
2025-08-28 12:53:00 -07:00
Michael Bolin
f09170b574 chore: print stderr from MCP server to test output using eprintln! (#2849)
Related to https://github.com/openai/codex/pull/2848, I don't see the
stderr from `codex mcp` colocated with the other stderr from
`test_shell_command_approval_triggers_elicitation()` when it fails even
though we have `RUST_LOG=debug` set when we spawn `codex mcp`:


1e9e703b96/codex-rs/mcp-server/tests/common/mcp_process.rs (L65)

Let's try this new logic which should be more explicit.
2025-08-28 12:43:13 -07:00
Michael Bolin
1e9e703b96 chore: try to make it easier to debug the flakiness of test_shell_command_approval_triggers_elicitation (#2848)
`test_shell_command_approval_triggers_elicitation()` is one of a number
of integration tests that we have observed to be flaky on GitHub CI, so
this PR tries to reduce the flakiness _and_ to provide us with more
information when it flakes. Specifically:

- Changed the command that we use to trigger the elicitation from `git
init` to `python3 -c 'import pathlib; pathlib.Path(r"{}").touch()'`
because running `git` seems more likely to invite variance.
- Increased the timeout to wait for the task response from 10s to 20s.
- Added more logging.
2025-08-28 12:33:33 -07:00
Michael Bolin
74d2741729 chore: require uninlined_format_args from clippy (#2845)
- added `uninlined_format_args` to `[workspace.lints.clippy]` in the
`Cargo.toml` for the workspace
- ran `cargo clippy --tests --fix`
- ran `just fmt`
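
For illustration, this is the shape the lint rejects and the inlined form it requires:

```rust
fn main() {
    let name = "codex";
    // Flagged by clippy::uninlined_format_args:
    // println!("hello, {}", name);
    // Accepted inlined form:
    println!("hello, {name}");
}
```
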
2025-08-28 11:25:23 -07:00
Jeremy Rose
e5611aab07 disallow some slash commands while a task is running (#2792)
/new, /init, /models, /approvals, etc. don't work correctly during a
turn. disable them.
2025-08-28 10:15:59 -07:00
dedrisian-oai
4e9ad23864 Add "View Image" tool (#2723)
Adds a "View Image" tool so Codex can find and see images by itself:

<img width="1772" height="420" alt="Screenshot 2025-08-26 at 10 40
04 AM"
src="https://github.com/user-attachments/assets/7a459c7b-0b86-4125-82d9-05fbb35ade03"
/>
2025-08-27 17:41:23 -07:00
Jeremy Rose
3e309805ae fix cursor after suspend (#2690)
This was supposed to be fixed by #2569, but I think the actual fix got
lost in the refactoring.

Intended behavior: pressing ^Z moves the cursor below the viewport
before suspending.
2025-08-27 14:17:10 -07:00
Jeremy Rose
488a40211a fix (most) doubled lines and hanging list markers (#2789)
This was mostly written by codex under heavy guidance via test cases
drawn from logged session data and fuzzing. It also uncovered some bugs
in tui_markdown, which will in some cases split a list marker from the
list item content. We're not addressing those bugs for now.
2025-08-27 13:55:59 -07:00
Gabriel Peal
903178eeeb Point the CHANGELOG to the releases page (#2780)
The typescript changelog is misleading and unhelpful
2025-08-27 11:45:40 -07:00
Reuben Narad
6e4c9d5243 Added back codex-rs/config.md to link to new location (#2778)
Quick fix: point old config.md to new location
2025-08-27 18:37:41 +00:00
Reuben Narad
459363e17b README / docs refactor (#2724)
This PR cleans up the monolithic README by breaking it into a set of
navigable pages under docs/ (install, getting started, configuration,
authentication, sandboxing and approvals, platform details, FAQ, ZDR,
contributing, license). The top‑level README is now more concise and
intuitive, with corrected screenshots.

It also consolidates overlapping content from codex-rs/README.md into
the top‑level docs and updates links accordingly. The codex-rs README
remains in place for now as a pointer and for continuity.

Finally, added an extensive config reference table at the bottom of
docs/config.md.

---------

Co-authored-by: easong-openai <easong@openai.com>
2025-08-27 10:30:39 -07:00
Michael Bolin
ffe585387b fix: for now, limit the number of deltas sent back to the UI (#2776)
This is a stopgap solution, but today, we are seeing the client get
flooded with events. Since we already truncate the output we send to the
model, it feels reasonable to limit how many deltas we send to the
client.
2025-08-27 10:23:25 -07:00
Dylan
0cec0770e2 [mcp-server] Add GetConfig endpoint (#2725)
## Summary
Adds a GetConfig request to the MCP Protocol, so MCP clients can
evaluate the resolved config.toml settings which the harness is using.

## Testing
- [x] Added an end to end test of the endpoint
2025-08-27 09:59:03 -07:00
Ahmed Ibrahim
2d2f66f9c5 Bug fix: deduplicate assistant messages (#2758)
We were treating assistant messages differently from other messages,
which resulted in duplicated history.

See #2698
2025-08-27 01:29:16 -07:00
Ahmed Ibrahim
d0e06f74e2 send context window with task started (#2752)
- Send context window with task started
- Accounting for changing the model per turn
2025-08-27 00:04:21 -07:00
Gabriel Peal
4b6c6ce98f Make git_diff_against_sha more robust (#2749)
1. Ignore custom git diff drivers users may have set
2. Allow diffing against filenames that start with a dash
2025-08-27 01:53:00 -04:00
easong-openai
5df04c8a13 Cache transcript wraps (#2739)
Previously long transcripts would become unusable.
2025-08-26 22:20:09 -07:00
ae
3d8bca7814 feat: decrease testing when running interactively (#2707) 2025-08-26 19:57:04 -07:00
Ahmed Ibrahim
3eb11c10d0 Don't send Exec deltas on apply patch (#2742)
We are currently sending exec deltas on apply patch, which doesn't make sense.
2025-08-26 19:16:51 -07:00
mattsu
bd65c4db87 Fix crash when backspacing placeholders adjacent to multibyte text (#2674)
Prevented panics when deleting placeholders near multibyte characters by
clamping the cursor to a valid boundary and using get-based slicing.

Added a regression test to ensure backspacing after multibyte text
leaves placeholders intact without crashing.

---------

Co-authored-by: Ahmed Ibrahim <aibrahim@openai.com>
2025-08-26 18:31:49 -07:00
Jeremy Rose
b367790d9b fix emoji spacing (#2735)
before:
<img width="295" height="266" alt="Screenshot 2025-08-26 at 5 05 03 PM"
src="https://github.com/user-attachments/assets/3e876f08-26d0-407e-a995-28fd072e288f"
/>

after:
<img width="295" height="129" alt="Screenshot 2025-08-26 at 5 05 30 PM"
src="https://github.com/user-attachments/assets/2a019d52-19ed-40ef-8155-4f02c400796a"
/>
2025-08-26 17:34:24 -07:00
Jeremy Rose
435154ce93 fix transcript lines being added to diff view (#2721)
This fixes a bug where, if you ran /diff while a turn was running,
transcript lines would be added to the end of the diff view. Also
refactors to make this kind of issue less likely in the future.
2025-08-27 00:03:11 +00:00
vinaybantupalli
fb3f6456cf fix issue #2713: adding support for alt+ctrl+h to delete backward word (#2717)
This pr addresses the fix for
https://github.com/openai/codex/issues/2713

### Changes:
  - Added key handler for `Alt+Ctrl+H` → `delete_backward_word()`
- Added test coverage in `delete_backward_word_alt_keys()` that verifies
both:
    - Standard `Alt+Backspace` binding continues to work
- New `Alt+Ctrl+H` binding works correctly for backward word deletion

### Testing:
  The test ensures both key combinations produce identical behavior:
  - Delete the previous word from "hello world" → "hello "
  - Cursor positioned correctly after deletion

###  Backward Compatibility:
This change is backward compatible - existing `Alt+Backspace`
functionality remains unchanged while adding support for the
terminal-specific `Alt+Ctrl+H` variant
2025-08-26 16:37:46 -07:00
Jeremy Rose
f2603a4e50 Esc while there are queued messages drops the messages back into the composer (#2687)
https://github.com/user-attachments/assets/bbb427c4-cdc7-4997-a4ef-8156e8170742
2025-08-26 16:26:50 -07:00
Jeremy Rose
eb161116f0 tui: render keyboard icon with emoji variation selector (⌨️) (#2728)
Use emoji variation selector (VS16) for the keyboard icon so it
consistently renders as emoji (⌨️) rather than text (⌨) across
terminals.

Touches TUI command rendering for unknown parsed commands. No behavior
change beyond display.
2025-08-26 16:11:21 -07:00
Wang
c229a67312 feat(core): Add remove_conversation to ConversationManager for ma… (#2613)
### What this PR does

This PR introduces a new public method,
remove_conversation(conversation_id: Uuid), to the ConversationManager.
This allows consumers of the codex-core library to manually remove a
conversation from the manager's in-memory storage.

### Why this change is needed
I am currently adapting the Codex client to run as a long-lived server
application. In this server environment, ConversationManager instances
persist for extended periods, and new conversations are created for each
incoming user request.

The current implementation of ConversationManager stores all created
conversations in a HashMap indefinitely, with no mechanism for removal.
This leads to unbounded memory growth in a server context, as every new
conversation permanently occupies memory.

While an automatic TTL-based cleanup mechanism could be one solution, a
simpler, more direct remove_conversation method provides the necessary
control for my use case. It allows my server application to explicitly
manage the lifecycle of conversations, such as cleaning them up after a
request is fully processed or after a period of inactivity is detected
at the application level.

This change provides a minimal, non-intrusive way to address the memory
management issue for server-like applications built on top of
codex-core, giving developers the flexibility to implement their own
cleanup logic.
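
A hedged sketch of the new method (field and type names assumed; `Uuid` from the `uuid` crate, as in the description):

```rust
use std::collections::HashMap;
use uuid::Uuid;

struct CodexConversation; // stand-in for the real conversation type

struct ConversationManager {
    conversations: HashMap<Uuid, CodexConversation>,
}

impl ConversationManager {
    /// Explicitly drop a conversation so a long-lived server can bound
    /// its memory instead of accumulating entries forever.
    pub fn remove_conversation(&mut self, conversation_id: Uuid) -> Option<CodexConversation> {
        self.conversations.remove(&conversation_id)
    }
}
```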

Signed-off-by: M4n5ter <m4n5terrr@gmail.com>
Co-authored-by: Michael Bolin <mbolin@openai.com>
2025-08-26 15:16:43 -07:00
Michael Bolin
aa5fc5855d feat: remove the GitHub action that runs Codex for now (#2729)
There are some design issues with this action, so until we work them
out, we'll remove this code from the repository to avoid folks from
taking a dependency on it.
2025-08-26 13:44:23 -07:00
Jeremy Rose
db98d2ce25 enable alternate scroll in transcript mode (#2686)
this allows the mouse wheel to scroll the transcript / diff views.
2025-08-26 11:47:00 -07:00
ae
274d9b413f [feat] Simplify command approval UI (#2708)
- Removed the plain "No" option, which confused the model,
  since we already have the "No, provide feedback" option,
  which works better.

# Before

<img width="476" height="168" alt="image"
src="https://github.com/user-attachments/assets/6e783d9f-dec9-4610-9cad-8442eb377a90"
/>

# After

<img width="553" height="175" alt="image"
src="https://github.com/user-attachments/assets/3cdae582-3366-47bc-9753-288930df2324"
/>
2025-08-26 10:08:06 -07:00
ae
8192cf147e [chore] Tweak AGENTS.md so agent doesn't always have to test (#2706) 2025-08-26 00:27:19 -07:00
Eric Traut
d32e4f25cf Added caps on retry config settings (#2701)
The CLI supports config settings `stream_max_retries` and
`request_max_retries` that allow users to override the default retry
counts (4 and 5, respectively). However, there's currently no cap placed
on these values. In theory, a user could configure an effectively
infinite retry count which could hammer the server. This PR adds a
reasonable cap (currently 100) to both of these values.
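
A minimal sketch of the cap (constant name assumed):

```rust
// Hypothetical constant; the PR caps both retry settings at 100.
const MAX_RETRY_LIMIT: u64 = 100;

fn effective_retries(configured: u64) -> u64 {
    // Clamp user-provided retry counts so a misconfigured value cannot
    // produce an effectively infinite retry loop against the server.
    configured.min(MAX_RETRY_LIMIT)
}

fn main() {
    assert_eq!(effective_retries(5), 5);
    assert_eq!(effective_retries(10_000), 100);
}
```
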
2025-08-25 22:51:01 -07:00
ae
a4d34235bc [fix] emoji padding (#2702)
- We use emojis as bullet icons of sorts, and in some common terminals
like Terminal or iTerm, these can render with insufficient padding
between the emoji and following text.
- This PR makes emoji look better in Terminal and iTerm, at the expense
of Ghostty. (All default fonts.)

# Terminal

<img width="420" height="123" alt="image"
src="https://github.com/user-attachments/assets/93590703-e35a-4781-a697-881d7ec95598"
/>

# iTerm

<img width="465" height="163" alt="image"
src="https://github.com/user-attachments/assets/f11e6558-d2db-4727-bb7e-2b61eed0a3b1"
/>

# Ghostty

<img width="485" height="142" alt="image"
src="https://github.com/user-attachments/assets/7a7b021f-5238-4672-8066-16cd1da32dc6"
/>
2025-08-25 22:49:19 -07:00
ae
d085f73a2a [feat] reduce bottom padding to 1 line (#2704) 2025-08-25 22:47:26 -07:00
Eric Traut
ab9250e714 Improved user message for rate-limit errors (#2695)
This PR improves the error message presented to the user when logged in
with ChatGPT and a rate-limit error occurs. In particular, it provides
the user with information about when the rate limit will be reset. It
removes older code that attempted to do the same but relied on parsing
of error messages that are not generated by the ChatGPT endpoint. The
new code uses newly-added error fields.
2025-08-25 21:42:10 -07:00
Jeremy Rose
e5283b6126 single control flow for both Esc and Ctrl+C (#2691)
Esc and Ctrl+C while a task is running should do the same thing. There
were some cases where pressing Esc would leave a "stuck" widget in the
history; this fixes that and cleans up the logic so there's just one
path for interrupting the task. Also clean up some subtly mishandled key
events (e.g. Ctrl+D would quit the app while an approval modal was
showing if the textarea was empty).

---------

Co-authored-by: Ahmed Ibrahim <aibrahim@openai.com>
2025-08-25 20:15:38 -07:00
Eric Traut
d63e44ae29 Fixed a bug that causes token refresh to not work in a seamless manner (#2699)
This PR fixes a bug in the token refresh logic. Token refresh is
performed in a retry loop so if we receive a 401 error, we refresh the
token, then we go around the loop again and reissue the fetch with a
fresh token. The bug is that we're not using the updated token on the
second and subsequent times through the loop. The result is that we'll
try to refresh the token a few more times until we hit the retry limit
(default of 4). The 401 error is then passed back up to the caller.
Subsequent calls will use the refreshed token, so the problem clears
itself up.

The fix is straightforward — make sure we use the updated auth
information each time through the retry loop.
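
A sketch of the fix with all names assumed: re-read the current auth state on every pass through the loop instead of capturing the token once up front.

```rust
#[derive(Debug)]
enum FetchError {
    Unauthorized,
}

struct Auth {
    token: String,
}

impl Auth {
    fn current_token(&self) -> String {
        self.token.clone()
    }
    fn refresh_token(&mut self) {
        self.token = "fresh-token".to_string();
    }
}

// Toy stand-in for the real request: only the refreshed token succeeds.
fn do_fetch(token: &str) -> Result<String, FetchError> {
    if token == "fresh-token" {
        Ok("200 OK".to_string())
    } else {
        Err(FetchError::Unauthorized)
    }
}

fn fetch_with_retry(auth: &mut Auth, max_retries: u32) -> Result<String, FetchError> {
    for _ in 0..=max_retries {
        // The bug: the token was read once before the loop, so a refresh
        // never took effect on retries. The fix: re-read it each attempt.
        let token = auth.current_token();
        match do_fetch(&token) {
            Err(FetchError::Unauthorized) => auth.refresh_token(),
            ok => return ok,
        }
    }
    Err(FetchError::Unauthorized)
}

fn main() {
    let mut auth = Auth { token: "stale-token".to_string() };
    assert_eq!(fetch_with_retry(&mut auth, 4).unwrap(), "200 OK");
}
```
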
2025-08-25 19:18:16 -07:00
Jeremy Rose
17e5077507 do not show timeouts as "sandbox error"s (#2587)
🙅🫸
```
✗ Failed (exit -1)
  └ 🧪 cargo test --all-features -q
    sandbox error: command timed out
```

😌👉
```
✗ Failed (exit -1)
  └ 🧪 cargo test --all-features -q
    error: command timed out
```
2025-08-25 17:52:23 -07:00
Jeremy Rose
b1079187e4 queued messages rendered italic (#2693)
<img width="416" height="215" alt="Screenshot 2025-08-25 at 5 29 53 PM"
src="https://github.com/user-attachments/assets/0f4178c9-6997-4e7a-bb30-0817b98d9748"
/>
2025-08-26 00:36:05 +00:00
Jeremy Rose
ae8f772ef2 do not schedule frames for Tui::Draw events in backtrack (#2692)
this was causing continuous rerendering when a transcript overlay was
present
2025-08-26 00:29:24 +00:00
dedrisian-oai
468a8b4c38 Copying / Dragging image files (MacOS Terminal + iTerm) (#2567)
In this PR:

- [x] Add support for dragging / copying image files into chat.
- [x] Don't remove image placeholders when submitting.
- [x] Add tests.

Works for:

- Image Files
- Dragging MacOS Screenshots (Terminal, iTerm)

Todos:

- [ ] In some terminals (VSCode, WIndows Powershell, and remote
SSH-ing), copy-pasting a file streams the escaped filepath as individual
key events rather than a single Paste event. We'll need to have a
function (in a separate PR) for detecting these paste events.
2025-08-25 16:39:42 -07:00
Gabriel Peal
cb32f9c64e Add auth to send_user_turn (#2688)
It is there for send_user_message but was omitted from send_user_turn.
Presumably this was a mistake
2025-08-25 18:57:20 -04:00
Ahmed Ibrahim
907afc9425 Fix esc (#2661)
Esc should have other functionality when it's not used in a
backtracking situation, i.e. to cancel the pop-up menu when selecting
model/approvals or to interrupt an active turn.
2025-08-25 15:38:46 -07:00
Dylan
7f7d1e30f3 [exec] Clean up apply-patch tests (#2648)
## Summary
These tests were getting a bit unwieldy, and they're starting to become
load-bearing. Let's clean them up, and get them working solidly so we
can easily expand this harness with new tests.

## Test Plan
- [x] Tests continue to pass
2025-08-25 15:08:01 -07:00
Michael Bolin
568d6f819f fix: use backslash as path separator on Windows (#2684)
I noticed that when running `/status` on Windows, I saw something like:

```
Path: ~/src\codex
```

so now it should be:

```
Path: ~\src\codex
```

Admittedly, `~` is understood by PowerShell but not by Windows in
general; still, it's much less verbose than `%USERPROFILE%`.
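
An illustrative sketch of the idea (helper name assumed, not the actual `/status` code):

```rust
use std::path::{Path, MAIN_SEPARATOR};

// Shorten a path under the home directory using the platform's native
// separator: on Windows MAIN_SEPARATOR is '\', yielding `~\src\codex`
// rather than the mixed `~/src\codex`.
fn pretty_path(path: &Path, home: &Path) -> String {
    match path.strip_prefix(home) {
        Ok(rest) => format!("~{}{}", MAIN_SEPARATOR, rest.display()),
        Err(_) => path.display().to_string(),
    }
}

fn main() {
    let home = Path::new(r"C:\Users\me");
    let path = Path::new(r"C:\Users\me\src\codex");
    println!("{}", pretty_path(path, home)); // ~\src\codex on Windows
}
```
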
2025-08-25 14:47:17 -07:00
Jeremy Rose
251c4c2ba9 tui: queue messages (#2637)
https://github.com/user-attachments/assets/44349aa6-3b97-4029-99e1-5484e9a8775f
2025-08-25 21:38:38 +00:00
Odysseas Yiakoumis
a6c346b9e1 avoid error when /compact response has no token_usage (#2417) (#2640)
**Context**  
When running `/compact`, `drain_to_completed` would throw an error if
`token_usage` was `None` in `ResponseEvent::Completed`. This made the
command fail even though everything else had succeeded.

**What changed**  
- Instead of erroring, we now just check `if let Some(token_usage)`
before sending the event.
- If it’s missing, we skip it and move on.  
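
A minimal sketch of the guard (event and type names assumed):

```rust
struct TokenUsage {
    total_tokens: u64,
}

// Previously a missing token_usage in the completed event was treated
// as an error; now the usage event is simply skipped when it is absent.
fn on_completed(token_usage: Option<TokenUsage>) {
    if let Some(token_usage) = token_usage {
        send_token_usage_event(token_usage);
    }
}

fn send_token_usage_event(usage: TokenUsage) {
    println!("total tokens: {}", usage.total_tokens);
}

fn main() {
    on_completed(None); // no longer an error
    on_completed(Some(TokenUsage { total_tokens: 42 }));
}
```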

**Why**  
This makes `AgentTask::compact()` behave in the same way as
`AgentTask::spawn()`, which also doesn’t error out when `token_usage`
isn’t available. Keeps things consistent and avoids unnecessary
failures.

**Fixes**  
Closes #2417

---------

Co-authored-by: Ahmed Ibrahim <aibrahim@openai.com>
2025-08-25 18:42:22 +00:00
Gabriel Peal
e307040f10 Index file (#2678) 2025-08-25 13:23:32 -04:00
dependabot[bot]
7d67e54628 chore(deps): bump toml_edit from 0.23.3 to 0.23.4 in /codex-rs (#2665) 2025-08-25 08:20:30 -07:00
Michael Bolin
295ca27e98 fix: Scope ExecSessionManager to Session instead of using global singleton (#2664)
The `SessionManager` in `exec_command` owns a number of
`ExecCommandSession` objects where `ExecCommandSession` has a
non-trivial implementation of `Drop`, so we want to be able to drop an
individual `SessionManager` to help ensure things get cleaned up in a
timely fashion. To that end, we should have one `SessionManager` per
session rather than one global one for the lifetime of the CLI process.
2025-08-24 22:52:49 -07:00
Michael Bolin
7b20db942a fix: build is broken on main; introduce ToolsConfigParams to help fix (#2663)
`ToolsConfig::new()` taking a large number of boolean params was hard to
manage and it finally bit us (see
https://github.com/openai/codex/pull/2660). This changes
`ToolsConfig::new()` so that it takes a struct (and also reduces the
visibility of some members, where possible).
2025-08-24 22:43:42 -07:00
Uhyeon Park
ee2ccb5cb6 Fix cache hit rate by making MCP tools order deterministic (#2611)
Fixes https://github.com/openai/codex/issues/2610

This PR sorts the tools in `get_openai_tools` by name to ensure a
consistent MCP tool order.

Currently, MCP servers are stored in a HashMap, which does not guarantee
ordering. As a result, the tool order changes across turns, effectively
breaking prompt caching in multi-turn sessions.

An alternative solution would be to replace the HashMap with an ordered
structure, but that would require a much larger code change. Given that
it is unrealistic to have so many MCP tools that sorting would cause
performance issues, this lightweight fix is chosen instead.

By ensuring deterministic tool order, this change should significantly
improve cache hit rates and prevent users from hitting usage limits too
quickly. (For reference, my own sessions last week reached the limit
unusually fast, with cache hit rates falling below 1%.)
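
A minimal sketch of the fix (struct name assumed):

```rust
struct OpenAiTool {
    name: String,
}

// HashMap iteration order is unstable across runs, which breaks prompt
// caching; sorting by name makes the tool list deterministic.
fn deterministic_tools(mut tools: Vec<OpenAiTool>) -> Vec<OpenAiTool> {
    tools.sort_by(|a, b| a.name.cmp(&b.name));
    tools
}

fn main() {
    let tools = vec![
        OpenAiTool { name: "web_search".to_string() },
        OpenAiTool { name: "shell".to_string() },
    ];
    for tool in deterministic_tools(tools) {
        println!("{}", tool.name);
    }
}
```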

## Result

After this fix, sessions with MCP servers now show caching behavior
almost identical to sessions without MCP servers.
Without MCP             |  With MCP
:-------------------------:|:-------------------------:
<img width="1368" height="1634" alt="image"
src="https://github.com/user-attachments/assets/26edab45-7be8-4d6a-b471-558016615fc8"
/> | <img width="1356" height="1632" alt="image"
src="https://github.com/user-attachments/assets/5f3634e0-3888-420b-9aaf-deefd9397b40"
/>
2025-08-24 19:56:24 -07:00
ae
8b49346657 fix: update gpt-5 stats (#2649)
- To match what's on <https://platform.openai.com/docs/models/gpt-5>.
2025-08-24 16:45:41 -07:00
dependabot[bot]
e49116a4c5 chore(deps): bump whoami from 1.6.0 to 1.6.1 in /codex-rs (#2497)
Bumps [whoami](https://github.com/ardaku/whoami) from 1.6.0 to 1.6.1.
Commits: see the full diff in the [compare view](https://github.com/ardaku/whoami/commits).

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-24 14:38:30 -07:00
Michael Bolin
517ffd00c6 feat: use the arg0 trick with apply_patch (#2646)
Historically, Codex CLI has treated `apply_patch` (and its occasional
misspelling, `applypatch`) as a "virtual CLI," intercepting it when it
appears as the first arg to `command` for the `"container.exec"`,
`"shell"`, or `"local_shell"` tools.

This approach has a known limitation: if, say, the model created a
Python script that runs `apply_patch` and then tried to run that
script, we would have no insight into what the model is trying to do,
and the script would fail because `apply_patch` was never really on the
`PATH`.

One way to solve this problem is to require users to install an
`apply_patch` executable alongside the `codex` executable (or at least
put it someplace where Codex can discover it). Though to keep Codex CLI
as a standalone executable, we exploit "the arg0 trick" where we create
a temporary directory with an entry named `apply_patch` and prepend that
directory to the `PATH` for the duration of the invocation of Codex.

- On UNIX, `apply_patch` is a symlink to `codex`, which now changes its
behavior to behave like `apply_patch` if arg0 is `apply_patch` (or
`applypatch`)
- On Windows, `apply_patch.bat` is a batch script that runs `codex
--codex-run-as-apply-patch %*`, as Codex also changes its behavior if
the first argument is `--codex-run-as-apply-patch`.
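
A hedged sketch of the arg0 dispatch (not the actual code; function names are placeholders):

```rust
use std::path::Path;

fn main() {
    // argv[0] reflects the name we were invoked under, e.g. the
    // `apply_patch` symlink that points back at the codex binary.
    let arg0 = std::env::args().next().unwrap_or_default();
    let exe_name = Path::new(&arg0)
        .file_stem()
        .and_then(|s| s.to_str())
        .unwrap_or("");
    if matches!(exe_name, "apply_patch" | "applypatch") {
        run_apply_patch();
    } else {
        run_codex();
    }
}

fn run_apply_patch() { /* behave as the apply_patch CLI */ }
fn run_codex() { /* normal codex behavior */ }
```
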
2025-08-24 14:35:51 -07:00
Dylan
4157788310 [apply_patch] disable default freeform tool (#2643)
## Summary
We're seeing some issues in the freeform tool - let's disable by default
until it stabilizes.

## Testing
- [x] Ran locally, confirmed codex-cli could make edits
2025-08-24 11:12:37 -07:00
Jeremy Rose
32bbbbad61 test: faster test execution in codex-core (#2633)
this dramatically improves time to run `cargo test -p codex-core` (~25x
speedup).

before:
```
cargo test -p codex-core  35.96s user 68.63s system 19% cpu 8:49.80 total
```

after:
```
cargo test -p codex-core  5.51s user 8.16s system 63% cpu 21.407 total
```

both tests measured "hot", i.e. on a 2nd run with no filesystem changes,
to exclude compile times.

approach inspired by [Delete Cargo Integration
Tests](https://matklad.github.io/2021/02/27/delete-cargo-integration-tests.html),
we move all test cases in tests/ into a single suite in order to have a
single binary, as there is significant overhead for each test binary
executed, and because test execution is only parallelized with a single
binary.
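
A sketch of the resulting layout (module and test names are hypothetical):

```rust
// tests/all.rs — every former tests/*.rs file becomes a module of one
// integration-test crate, so cargo compiles and links a single binary
// and the harness can parallelize across all cases.

mod exec_suite {
    #[test]
    fn runs_in_the_shared_binary() {
        assert_eq!(1 + 1, 2);
    }
}

mod compact_suite {
    #[test]
    fn also_runs_in_the_shared_binary() {
        assert_eq!(2 * 2, 4);
    }
}
```
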
2025-08-24 11:10:53 -07:00
Ahmed Ibrahim
c6a52d611c Resume conversation from an earlier point in history (#2607)
Fixing merge conflict of this: #2588


https://github.com/user-attachments/assets/392c7c37-cf8f-4ed6-952e-8215e8c57bc4
2025-08-23 23:23:15 -07:00
Reuben Narad
363636f5eb Add web search tool (#2371)
Adds the web_search tool, enabling the model to use the Responses API
web_search tool.
- Disabled by default, enabled by --search flag
- When --search is passed, exposes web_search_request function tool to
the model, which triggers user approval. When approved, the model can
use the web_search tool for the remainder of the turn
<img width="1033" height="294" alt="image"
src="https://github.com/user-attachments/assets/62ac6563-b946-465c-ba5d-9325af28b28f"
/>

---------

Co-authored-by: easong-openai <easong@openai.com>
2025-08-23 22:58:56 -07:00
Ahmed Ibrahim
957d44918d send-aggregated output (#2364)
We want to send an aggregated stream of stdout and stderr so consumers
don't have to combine the two streams themselves, which sometimes loses
ordering.

---------

Co-authored-by: Gabriel Peal <gpeal@users.noreply.github.com>
2025-08-23 16:54:31 +00:00
easong-openai
eca97d8559 transcript hint (#2605)
Adds a hint to use ctrl-t to view transcript for more details

<img width="475" height="49" alt="image"
src="https://github.com/user-attachments/assets/6ff650eb-ed54-4699-be04-3c50f0f8f631"
/>
2025-08-23 01:06:22 -07:00
easong-openai
09819d9b47 Add the ability to interrupt and provide feedback to the model (#2381) 2025-08-22 20:34:43 -07:00
Michael Bolin
e3b03eaccb feat: StreamableShell with exec_command and write_stdin tools (#2574) 2025-08-22 18:10:55 -07:00
Ahmed Ibrahim
311ad0ce26 fork conversation from a previous message (#2575)
This can be the underlying logic for starting a conversation from a
previous message. It will need some love in the UI.

Base for building this: #2588
2025-08-22 17:06:09 -07:00
Jeremy Rose
5fa7d46ddf tui: fix resize on wezterm (#2600)
WezTerm doesn't respond to cursor queries during a synchronized update,
so resizing was broken there.
2025-08-22 16:33:18 -07:00
Jeremy Rose
d994019f3f tui: coalesce command output; show unabridged commands in transcript (#2590)
https://github.com/user-attachments/assets/effec7c7-732a-4b61-a2ae-3cb297b6b19b
2025-08-22 16:32:31 -07:00
Jeremy Rose
6de9541f0a tui: open transcript mode at the bottom (#2592)
this got lost when we switched /diff to use the pager.
2025-08-22 16:06:41 -07:00
wkrettek
85099017fd Fix typo in AGENTS.md (#2518)
- Change `examole` to `example`
2025-08-22 16:05:39 -07:00
dependabot[bot]
a5b2ebb49b chore(deps): bump reqwest from 0.12.22 to 0.12.23 in /codex-rs (#2492)
Bumps [reqwest](https://github.com/seanmonstar/reqwest) from 0.12.22 to
0.12.23.
Release notes (sourced from [reqwest's releases](https://github.com/seanmonstar/reqwest/releases)):

v0.12.23 tl;dr:
- Add `ClientBuilder::unix_socket(path)` option that will force all requests over that Unix Domain Socket.
- Add `ClientBuilder::retries(policy)` and `reqwest::retry::Builder` to configure [automatic retries](https://seanmonstar.com/blog/reqwest-retries/).
- Add `ClientBuilder::dns_resolver2()` with more ergonomic argument bounds, allowing more resolver implementations.
- Add `http3_*` options to `blocking::ClientBuilder`.
- Fix default TCP timeout values to be enabled and faster.
- Fix SOCKS proxies to default to port 1080.
- (wasm) Add cache methods to `RequestBuilder`.

What's changed:
- Minimize package size by @weiznich in seanmonstar/reqwest#2759
- chore(dev-dependencies): bump brotli by @seanmonstar in seanmonstar/reqwest#2760
- upgrade hickory-dns to 0.25 by @seanmonstar in seanmonstar/reqwest#2761
- Re-expose http3 options in blocking::ClientBuilder by @ducaale in seanmonstar/reqwest#2770
- fix(proxy): restore default port 1080 for SOCKS proxies without explicit port by @0x676e67 in seanmonstar/reqwest#2771
- ci: use msrv-aware cargo in msrv job by @seanmonstar in seanmonstar/reqwest#2779
- feat: add request cache option for wasm by @Spxg in seanmonstar/reqwest#2775
- style(client): use `std::task::ready!` macro to simplify `Poll` branch match by @0x676e67 in seanmonstar/reqwest#2781
- fix: add default tcp keepalive and user_timeout values by @seanmonstar in seanmonstar/reqwest#2780
- feat: add unix_socket() option to client builder by @seanmonstar in seanmonstar/reqwest#2624
- Add retry policies by @seanmonstar in seanmonstar/reqwest#2763
- refactor: loosen retry `for_host` parameter bounds by @Enduriel in seanmonstar/reqwest#2792
- feat: add dns_resolver2 that is more ergonomic and flexible by @seanmonstar in seanmonstar/reqwest#2793
- Prepare v0.12.23 by @seanmonstar in seanmonstar/reqwest#2795

New contributors: @weiznich (#2759), @Spxg (#2775), @Enduriel (#2792).

Full changelog: https://github.com/seanmonstar/reqwest/compare/v0.12.22...v0.12.23
Commits:
- ae7375b: v0.12.23
- 9aacdc1: feat: add dns_resolver2 that is more ergonomic and flexible (#2793)
- 221be11: refactor: loosen retry `for_host` parameter bounds (#2792)
- acd1b05: feat: add reqwest::retry policies (#2763)
- 54b6022: feat: add `ClientBuilder::unix_socket()` option (#2624)
- 6358cef: fix: add default tcp keepalive and user_timeout values (#2780)
- 21226a5: style(client): use `std::task::ready!` macro to simplify Poll branch matching...
- 82086e7: feat: add request cache options for wasm (#2775)
- 2a0f7a3: ci: use msrv-aware cargo in msrv job (#2779)
- f186803: fix(proxy): restore default port 1080 for SOCKS proxies without explicit port...
- Additional commits viewable in the [compare view](https://github.com/seanmonstar/reqwest/compare/v0.12.22...v0.12.23)

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-22 15:57:33 -07:00
Gabriel Peal
697c7cf4bf Fix flakiness in shell command approval test (#2547)
## Summary
- read the shell exec approval request's actual id instead of assuming
it is always 0
- use that id when validating and responding in the test

## Testing
- `cargo test -p codex-mcp-server
test_shell_command_approval_triggers_elicitation`

------
https://chatgpt.com/codex/tasks/task_i_68a6ab9c732c832c81522cbf11812be0
2025-08-22 18:46:35 -04:00
dependabot[bot]
34ac698bef chore(deps): bump serde_json from 1.0.142 to 1.0.143 in /codex-rs (#2498)
Bumps [serde_json](https://github.com/serde-rs/json) from 1.0.142 to
1.0.143.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/serde-rs/json/releases">serde_json's
releases</a>.</em></p>
<blockquote>
<h2>v1.0.143</h2>
<ul>
<li>Implement Clone and Debug for serde_json::Map iterators (<a
href="https://redirect.github.com/serde-rs/json/issues/1264">#1264</a>,
thanks <a
href="https://github.com/xlambein"><code>@​xlambein</code></a>)</li>
<li>Implement Default for CompactFormatter (<a
href="https://redirect.github.com/serde-rs/json/issues/1268">#1268</a>,
thanks <a href="https://github.com/SOF3"><code>@​SOF3</code></a>)</li>
<li>Implement FromStr for serde_json::Map (<a
href="https://redirect.github.com/serde-rs/json/issues/1271">#1271</a>,
thanks <a
href="https://github.com/mickvangelderen"><code>@​mickvangelderen</code></a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="10102c49bf"><code>10102c4</code></a>
Release 1.0.143</li>
<li><a
href="2a5b85312c"><code>2a5b853</code></a>
Replace super::super with absolute path within crate</li>
<li><a
href="447170bd38"><code>447170b</code></a>
Merge pull request 1271 from
mickvangelderen/mick/impl-from-str-for-map</li>
<li><a
href="ec190d6dfd"><code>ec190d6</code></a>
Merge pull request <a
href="https://redirect.github.com/serde-rs/json/issues/1264">#1264</a>
from xlambein/master</li>
<li><a
href="8be664752f"><code>8be6647</code></a>
Merge pull request <a
href="https://redirect.github.com/serde-rs/json/issues/1268">#1268</a>
from SOF3/compact-default</li>
<li><a
href="ba5b3cccea"><code>ba5b3cc</code></a>
Revert &quot;Pin nightly toolchain used for miri job&quot;</li>
<li><a
href="fd35a02901"><code>fd35a02</code></a>
Implement FromStr for Map&lt;String, Value&gt;</li>
<li><a
href="bea0fe6b3e"><code>bea0fe6</code></a>
Implement Default for CompactFormatter</li>
<li><a
href="0c0e9f6bfa"><code>0c0e9f6</code></a>
Add Clone and Debug impls to map iterators</li>
<li>See full diff in <a
href="https://github.com/serde-rs/json/compare/v1.0.142...v1.0.143">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=serde_json&package-manager=cargo&previous-version=1.0.142&new-version=1.0.143)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-22 15:45:14 -07:00
Ahmed Ibrahim
097782c775 Move models.rs to protocol (#2595)
Moving models.rs to protocol so we can use them in `Codex` operations
2025-08-22 22:18:54 +00:00
Michael Bolin
8ba8089592 fix: prefer sending MCP structuredContent as the function call response, if available (#2594)
Prior to this change, when we got a `CallToolResult` from an MCP server,
we JSON-serialized its `content` field as the `content` to send back to
the model as part of the function call output that we send back to the
model. This meant that we were dropping the `structuredContent` on the
floor.

Though reading
https://modelcontextprotocol.io/specification/2025-06-18/schema#tool, it
appears that if `outputSchema` is specified, then `structuredContent`
should be set, which seems to be a "higher-fidelity" response to the
function call. This PR updates our handling of `CallToolResult` to
prefer using the JSON-serialization of `structuredContent`, if present,
using `content` as a fallback.

Also, it appears that the sense of `success` was inverted prior to this
PR!
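
A minimal sketch of the preference order described here (the struct is reduced to the relevant fields and is not the actual codex type):

```rust
use serde_json::Value;

// Reduced, illustrative shape of an MCP CallToolResult.
struct CallToolResult {
    content: Value,
    structured_content: Option<Value>,
}

/// Prefer the higher-fidelity `structuredContent` when the server set it,
/// falling back to serializing the plain `content` blocks.
fn function_call_output(result: &CallToolResult) -> String {
    match &result.structured_content {
        Some(structured) => structured.to_string(),
        None => result.content.to_string(),
    }
}
```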
2025-08-22 14:10:18 -07:00
Jeremy Rose
57c498159a test: simplify tests in config.rs (#2586)
this is much easier to read, thanks @bolinfest for the suggestion.
2025-08-22 14:04:21 -07:00
Jeremy Rose
bbf42f4e12 improve performance of 'cargo test -p codex-tui' (#2593)
before:

```
$ time cargo test -p codex-tui -q
[...]
cargo test -p codex-tui -q  39.89s user 10.77s system 98% cpu 51.328 total
```

after:

```
$ time cargo test -p codex-tui -q
[...]
cargo test -p codex-tui -q  1.37s user 0.64s system 29% cpu 6.699 total
```

the major offenders were the textarea fuzz test and the custom_terminal
doctests. (i think the doctests were being recompiled every time which
made them extra slow?)
2025-08-22 14:03:58 -07:00
Dylan
6f0b499594 [config] Detect git worktrees for project trust (#2585)
## Summary
When resolving our current directory as a project, we want to be a
little bit more clever:
1. If we're in a sub-directory of a git repo, resolve our project
against the root of the git repo
2. If we're in a git worktree, resolve the project against the root of
the git repo

## Testing
- [x] Added unit tests
- [x] Confirmed locally with a git worktree (the one i was using for
this feature)
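
A best-effort sketch of the resolution rule described above, shelling out to git (an illustration, not the actual codex implementation; `--git-common-dir` points at the primary `.git` directory even from inside a linked worktree, and `--path-format=absolute` needs git 2.31+):

```rust
use std::path::{Path, PathBuf};
use std::process::Command;

fn resolve_project_root(cwd: &Path) -> PathBuf {
    let out = Command::new("git")
        .args(["rev-parse", "--path-format=absolute", "--git-common-dir"])
        .current_dir(cwd)
        .output();
    match out {
        Ok(o) if o.status.success() => {
            // Parent of the main `.git` dir is the main repo root, even
            // when `cwd` is a subdirectory or a linked worktree.
            let git_dir = PathBuf::from(String::from_utf8_lossy(&o.stdout).trim());
            git_dir
                .parent()
                .map(Path::to_path_buf)
                .unwrap_or_else(|| cwd.to_path_buf())
        }
        // Not inside a git repo: fall back to the directory itself.
        _ => cwd.to_path_buf(),
    }
}
```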
2025-08-22 13:54:51 -07:00
Dylan
236c4f76a6 [apply_patch] freeform apply_patch tool (#2576)
## Summary
GPT-5 introduced the concept of [custom
tools](https://platform.openai.com/docs/guides/function-calling#custom-tools),
which allow the model to send a raw string result back, simplifying
json-escape issues. We are migrating gpt-5 to use this by default.

However, gpt-oss models do not support custom tools, only normal
functions. So we keep both tool definitions, and provide whichever one
the model family supports.

## Testing
- [x] Tested locally with various models
- [x] Unit tests pass
2025-08-22 13:42:34 -07:00
Eric Traut
dc42ec0eb4 Add AuthManager and enhance GetAuthStatus command (#2577)
This PR adds a central `AuthManager` struct that manages the auth
information used across conversations and the MCP server. Prior to this,
each conversation and the MCP server got their own private snapshots of
the auth information, and changes to one (such as a logout or token
refresh) were not seen by others.

This is especially problematic when multiple instances of the CLI are
run. For example, consider the case where you start CLI 1 and log in to
ChatGPT account X and then start CLI 2 and log out and then log in to
ChatGPT account Y. The conversation in CLI 1 is still using account X,
but if you create a new conversation, it will suddenly (and
unexpectedly) switch to account Y.

With the `AuthManager`, auth information is read from disk at the time
the `ConversationManager` is constructed, and it is cached in memory.
All new conversations use this same auth information, as do any token
refreshes.
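
The shape of the idea, as a hedged sketch (names and fields are illustrative, not the actual codex types):

```rust
use std::sync::{Arc, RwLock};

#[derive(Clone)]
struct CodexAuth {
    access_token: String, // illustrative field
}

/// One shared, cached view of auth state: every conversation and the MCP
/// server hold a clone, so logins, logouts, and refreshes are seen by all.
#[derive(Clone)]
struct AuthManager {
    inner: Arc<RwLock<Option<CodexAuth>>>,
}

impl AuthManager {
    /// Read auth from disk once at construction time, then cache it.
    fn new(initial: Option<CodexAuth>) -> Self {
        Self { inner: Arc::new(RwLock::new(initial)) }
    }

    fn auth(&self) -> Option<CodexAuth> {
        self.inner.read().unwrap().clone()
    }

    /// Called on login, logout, or token refresh.
    fn replace(&self, auth: Option<CodexAuth>) {
        *self.inner.write().unwrap() = auth;
    }
}
```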

The `AuthManager` is also used by the MCP server's GetAuthStatus
command, which now returns the auth method currently used by the MCP
server.

This PR also includes an enhancement to the GetAuthStatus command. It
now accepts two new (optional) input parameters: `include_token` and
`refresh_token`. Callers can use this to request the in-use auth token
and can optionally request to refresh the token.

The PR also adds tests for the login and auth APIs that I recently added
to the MCP server.
2025-08-22 13:10:11 -07:00
Ahmed Ibrahim
cdc77c10fb Fix/tui windows multiline paste (#2544)
Introduce a minimal paste-burst heuristic in the chat composer so Enter
is treated as a newline during paste-like bursts (plain chars arriving
in very short intervals), avoiding premature submit after the first line
on Windows consoles that lack bracketed paste.

- Detect tight sequences of plain Char events; open a short window where
Enter inserts a newline instead of submitting.
- Extend the window on newline to handle blank lines in pasted content.
- No behavior change for terminals that already emit Event::Paste; no
OS/env toggles added.
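
A sketch of the heuristic (threshold and field names are illustrative, not the tui's actual values):

```rust
use std::time::{Duration, Instant};

struct PasteBurst {
    last_plain_char: Option<Instant>,
    window: Duration,
}

impl PasteBurst {
    fn new() -> Self {
        Self { last_plain_char: None, window: Duration::from_millis(10) }
    }

    /// Record each plain Char event as it arrives.
    fn on_char(&mut self) {
        self.last_plain_char = Some(Instant::now());
    }

    /// Inside a tight burst, Enter inserts a newline instead of submitting;
    /// a newline also extends the window so blank lines in a paste don't
    /// trigger a premature submit.
    fn enter_inserts_newline(&mut self) -> bool {
        let in_burst = self
            .last_plain_char
            .map(|t| t.elapsed() <= self.window)
            .unwrap_or(false);
        if in_burst {
            self.last_plain_char = Some(Instant::now());
        }
        in_burst
    }
}
```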
2025-08-22 12:23:58 -07:00
pap-openai
c5d21a4564 ctrl+v image + @file accepts images (#1695)
allow ctrl+v in the TUI for images; @file references that are images are
appended as raw files (and read by the model) rather than pasted as a
path that cannot be read by the model.

Re-used components and same interface we're using for copying pasted
content in
72504f1d9c.
@aibrahim-oai as you've implemented this, mind having a look at this
one?


https://github.com/user-attachments/assets/c6c1153b-6b32-4558-b9a2-f8c57d2be710

---------

Co-authored-by: easong-openai <easong@openai.com>
Co-authored-by: Daniel Edrisian <dedrisian@openai.com>
Co-authored-by: Michael Bolin <mbolin@openai.com>
2025-08-22 17:05:43 +00:00
Jeremy Rose
59f6b1654f improve suspend behavior (#2569)
This is a somewhat roundabout way to fix the issue that pressing ^Z
would put the shell prompt in the wrong place (overwriting some of the
status area below the composer). While I'm at it, clean up the suspend
logic and fix some suspend-while-in-alt-screen behavior too.
2025-08-22 09:41:15 -07:00
vjain419
80b00a193e feat(gpt5): add model_verbosity for GPT‑5 via Responses API (#2108)
**Summary**
- Adds `model_verbosity` config (values: low, medium, high).
- Sends `text.verbosity` only for GPT‑5 family models via the Responses
API.
- Updates docs and adds serialization tests.

**Motivation**
- GPT‑5 introduces a verbosity control to steer output length/detail
without prompt surgery.
- Exposing it as a config knob keeps prompts stable and makes behavior
explicit
and repeatable.

**Changes**
- Config:
  - Added `Verbosity` enum (low|medium|high).
- Added optional `model_verbosity` to `ConfigToml`, `Config`, and
`ConfigProfile`.
- Request wiring:
  - Extended `ResponsesApiRequest` with optional `text` object.
- Populates `text.verbosity` only when model family is `gpt-5`; omitted
otherwise.
- Tests:
- Verifies `text.verbosity` serializes when set and is omitted when not
set.
- Docs:
  - Added “GPT‑5 Verbosity” section in `codex-rs/README.md`.
  - Added `model_verbosity` section to `codex-rs/config.md`.

**Usage**
- In `~/.codex/config.toml`:
  - `model = "gpt-5"`
  - `model_verbosity = "low"` (or `"medium"` default, `"high"`)
- CLI override example:
  - `codex -c model="gpt-5" -c model_verbosity="high"`

**API Impact**
- Requests to GPT‑5 via Responses API include: `text: { verbosity:
"low|medium|high" }` when configured.
- For legacy models or Chat Completions providers, `text` is omitted.

**Backward Compatibility**
- Default behavior unchanged when `model_verbosity` is not set (server
default “medium”).

**Testing**
- Added unit tests for serialization/omission of `text.verbosity`.
- Ran `cargo fmt` and `cargo test --all-features` (all green).

**Docs**
- `README.md`: new “GPT‑5 Verbosity” note under Config with example.
- `config.md`: new `model_verbosity` section.

**Out of Scope**
- No changes to temperature/top_p or other GPT‑5 parameters.
- No changes to Chat Completions wiring.

**Risks / Notes**
- If OpenAI changes the wire shape for verbosity, we may need to update
`ResponsesApiRequest`.
- Behavior gated to `gpt-5` model family to avoid unexpected effects
elsewhere.

**Checklist**
- [x] Code gated to GPT‑5 family only
- [x] Docs updated (`README.md`, `config.md`)
- [x] Tests added and passing
- [x] Formatting applied

Release note: Add `model_verbosity` config to control GPT‑5 output verbosity via the Responses API (low|medium|high).
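
A hedged sketch of the request wiring (struct shapes are illustrative; the real `ResponsesApiRequest` has more fields):

```rust
use serde::Serialize;

#[derive(Serialize, Clone, Copy)]
#[serde(rename_all = "lowercase")]
enum Verbosity { Low, Medium, High }

#[derive(Serialize)]
struct TextControls { verbosity: Verbosity }

#[derive(Serialize)]
struct ResponsesApiRequest<'a> {
    model: &'a str,
    // Omitted from the wire entirely when unset (non-GPT-5 families).
    #[serde(skip_serializing_if = "Option::is_none")]
    text: Option<TextControls>,
}

fn main() {
    let req = ResponsesApiRequest {
        model: "gpt-5",
        text: Some(TextControls { verbosity: Verbosity::Low }),
    };
    // Prints: {"model":"gpt-5","text":{"verbosity":"low"}}
    println!("{}", serde_json::to_string(&req).unwrap());
}
```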
2025-08-22 09:12:10 -07:00
Jeremy Rose
76dc3f6054 show diff output in the pager (#2568)
this shows `/diff` output in an overlay like the transcript, instead of
dumping it into history.



https://github.com/user-attachments/assets/48e79b65-7f66-45dd-97b3-d5c627ac7349
2025-08-22 08:24:13 -07:00
Dylan
e4c275d615 [apply-patch] Clean up apply-patch tool definitions (#2539)
## Summary
We've experienced a bit of drift in system prompting for `apply_patch`:
- As pointed out in #2030 , our prettier formatting started altering
prompt.md in a few ways
- We introduced a separate markdown file for apply_patch instructions in
#993, but currently duplicate them in the prompt.md file
- We added a first-class apply_patch tool in #2303, which has yet
another definition

This PR starts to consolidate our logic in a few ways:
- We now only use
[`apply_patch_tool_instructions.md`](https://github.com/openai/codex/compare/dh--apply-patch-tool-definition?expand=1#diff-d4fffee5f85cb1975d3f66143a379e6c329de40c83ed5bf03ffd3829df985bea)
for system instructions
- We no longer include apply_patch system instructions if the tool is
specified

I'm leaving the definition in openai_tools.rs as duplicated text for now
because we're going to be iterating on the first-class tool soon.

## Testing
- [x] Added integration tests to verify prompt stability
- [x] Tested locally with several different models (gpt-5, gpt-oss,
o4-mini)
2025-08-21 20:07:41 -07:00
Dylan
9f71dcbf57 [shell_tool] Small updates to ensure shell consistency (#2571)
## Summary
Small update to hopefully improve some shell edge cases, and make the
function clearer to the model what is going on. Keeping `timeout` as an
alias means that calls with the previous name will still work.

## Test Plan
- [x] Tested locally, model still works
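
One way the alias can work, as a sketch (field names here are illustrative): serde's `alias` keeps old call sites valid after a rename.

```rust
use serde::Deserialize;

#[derive(Deserialize)]
struct ShellToolCallParams {
    command: Vec<String>,
    // Accept both the new name and the previous `timeout` name.
    #[serde(default, alias = "timeout")]
    timeout_ms: Option<u64>,
}

fn main() {
    let old: ShellToolCallParams =
        serde_json::from_str(r#"{"command":["ls"],"timeout":5000}"#).unwrap();
    assert_eq!(old.timeout_ms, Some(5000));
}
```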
2025-08-21 19:58:07 -07:00
Jeremy Rose
750ca9e21d core: write explicit [projects] tables for trusted projects (#2523)
all of my trust_level settings in my ~/.codex/config.toml were on one
line.
2025-08-21 13:20:36 -07:00
Jeremy Rose
5fac7b2566 tweak thresholds for shimmer on non-true-color terminals (#2533)
https://github.com/user-attachments/assets/dc7bf820-eeec-4b78-aba9-231e1337921c
2025-08-21 11:44:18 -07:00
khai-oai
24c7be7da0 Update README.md (#2564)
Adding some notes that MCP tool calls do not run within the sandbox
2025-08-21 11:26:37 -07:00
Jeremy Rose
4b4aa2a774 tui: transcript mode updates live (#2562)
moves TranscriptApp to be an "overlay", and continue to pump AppEvents
while the transcript is active, but forward all tui handling to the
transcript screen.
2025-08-21 11:17:29 -07:00
Jeremy Rose
16d16a4ddc refactor: move slash command handling into chatwidget (#2536)
no functional change, just moving the code that handles /foo into
chatwidget, since most commands just do things with chatwidget.
2025-08-21 10:36:58 -07:00
Jeremy Rose
9604671678 tui: show diff hunk headers to separate sections (#2488)
<img width="906" height="350" alt="Screenshot 2025-08-20 at 2 38 29 PM"
src="https://github.com/user-attachments/assets/272c43c2-dfa8-497f-afa0-cea31e26ca1f"
/>
2025-08-21 08:54:11 -07:00
Jeremy Rose
db934e438e read all AGENTS.md up to git root (#2532)
This updates our logic for AGENTS.md to match documented behavior, which
is to read all AGENTS.md files from cwd up to git root.
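
A sketch of that lookup, assuming the git root is already known (illustrative, not the exact codex code):

```rust
use std::path::{Path, PathBuf};

/// Collect every AGENTS.md from the git root down to `cwd`.
fn discover_agents_docs(cwd: &Path, git_root: &Path) -> Vec<PathBuf> {
    let mut dirs = Vec::new();
    let mut dir = cwd;
    loop {
        dirs.push(dir.to_path_buf());
        if dir == git_root {
            break;
        }
        match dir.parent() {
            Some(parent) => dir = parent,
            None => break,
        }
    }
    dirs.reverse(); // git root first, cwd last
    dirs.into_iter()
        .map(|d| d.join("AGENTS.md"))
        .filter(|p| p.is_file())
        .collect()
}
```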
2025-08-21 08:52:17 -07:00
Jeremy Rose
5f6e1af1a5 scroll instead of clear on boot (#2535)
this actually works fine already in iterm without this change, but
Terminal.app adds a bunch of excess whitespace when we clear all.


https://github.com/user-attachments/assets/c5bd1809-c2ed-4daa-a148-944d2df52876
2025-08-21 08:51:26 -07:00
easong-openai
8ad56be06e Parse and expose stream errors (#2540) 2025-08-21 01:15:24 -07:00
Dylan
d2b2a6d13a [prompt] xml-format EnvironmentContext (#2272)
## Summary
Before we land #2243, let's start printing environment_context in our
preferred format. This struct will evolve over time with new
information; XML gives us a balance of being human-readable without too
much parsing, LLM-readable, and extensible.

Also moves us over to an Option-based struct, so we can easily provide
diffs to the model.

## Testing
- [x] Updated tests to reflect new format
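
A sketch of the Option-based, XML-rendered shape (tag and field names are illustrative):

```rust
#[derive(Default)]
struct EnvironmentContext {
    cwd: Option<String>,
    shell_name: Option<String>,
}

impl EnvironmentContext {
    /// Only set fields are emitted, which makes it cheap to send the model
    /// a diff-like update when something changes.
    fn to_xml(&self) -> String {
        let mut out = String::from("<environment_context>\n");
        if let Some(cwd) = &self.cwd {
            out.push_str(&format!("  <cwd>{cwd}</cwd>\n"));
        }
        if let Some(shell) = &self.shell_name {
            out.push_str(&format!("  <shell_name>{shell}</shell_name>\n"));
        }
        out.push_str("</environment_context>");
        out
    }
}
```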
2025-08-20 23:45:16 -07:00
Gabriel Peal
74683bab91 Add a serde tag to ParsedItem (#2546) 2025-08-21 01:34:46 -04:00
Eric Traut
dacff9675a Added new auth-related methods and events to mcp server (#2496)
This PR adds the following:
* A getAuthStatus method on the mcp server. This returns the auth method
currently in use (chatgpt or apikey) or none if the user is not
authenticated. It also returns the "preferred auth method" which
reflects the `preferred_auth_method` value in the config.
* A logout method on the mcp server. If called, it logs out the user and
deletes the `auth.json` file — the same behavior in the cli's `/logout`
command.
* An `authStatusChange` event notification that is sent when the auth
status changes due to successful login or logout operations.
* Logic to pass command-line config overrides to the mcp server at
startup time. This allows use cases like `codex mcp -c
preferred_auth_method=apikey`.
2025-08-20 20:36:34 -07:00
Jeremy Rose
697b4ce100 tui: show upgrade banner in history (#2537)
previously the upgrade banner was disappearing into scrollback when we
cleared the screen to start the tui.
2025-08-20 19:41:49 -07:00
Jeremy Rose
9193eb6b53 show thinking in transcript (#2538)
record the full reasoning trace and show it in transcript mode
2025-08-20 17:09:46 -07:00
Jeremy Rose
e95cad1946 hide CoT by default; show headers in status indicator (#2316)
Plan is for full CoT summaries to be visible in a "transcript view" when
we implement that, but for now they're hidden.


https://github.com/user-attachments/assets/e8a1b0ef-8f2a-48ff-9625-9c3c67d92cdb
2025-08-20 16:58:56 -07:00
Jeremy Rose
2ec5a28528 add transcript mode (#2525)
this adds a new 'transcript mode' that shows the full event history in a
"pager"-style interface.


https://github.com/user-attachments/assets/52df7a14-adb2-4ea7-a0f9-7f5eb8235182
2025-08-20 16:57:35 -07:00
eddy-win
050b9baeb6 Bridge command generation to powershell when on Windows (#2319)
## What? Why? How?
- When running on Windows, codex often tries to invoke bash commands,
which commonly fail (unless WSL is installed)
- Fix: Detect if powershell is available and, if so, route commands to
it
- Also add a shell_name property to environmental context for codex to
default to powershell commands when running in that environment

## Testing
- Tested within WSL and powershell (e.g. get top 5 largest files within
a folder and validated that commands generated were powershell commands)
- Tested within Zsh
- Updated unit tests
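
A sketch of the detection-and-routing idea described above (probe commands and shell names are illustrative, not the actual codex logic):

```rust
use std::process::Command;

/// Prefer PowerShell on Windows when one is available; otherwise fall
/// back to bash.
fn shell_invocation(script: &str) -> Command {
    if cfg!(windows) {
        for exe in ["pwsh", "powershell"] {
            let probe = Command::new(exe)
                .args(["-NoProfile", "-Command", "exit 0"])
                .output();
            if probe.map(|o| o.status.success()).unwrap_or(false) {
                let mut cmd = Command::new(exe);
                cmd.args(["-NoProfile", "-Command", script]);
                return cmd;
            }
        }
    }
    let mut cmd = Command::new("bash");
    cmd.args(["-lc", script]);
    cmd
}
```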

---------

Co-authored-by: Eddy Escardo <eddy@openai.com>
2025-08-20 16:30:34 -07:00
Michael Bolin
5ab30c73f3 fix: update build cache key in .github/workflows/codex.yml (#2534)
Change to match `.github/workflows/rust-ci.yml`, which was updated in
https://github.com/openai/codex/pull/2242:


250ae37c84/.github/workflows/rust-ci.yml (L120-L128)
2025-08-20 15:57:33 -07:00
ae
250ae37c84 tui: link docs when no MCP servers configured (#2516) 2025-08-20 14:58:04 -07:00
Ahmed Ibrahim
c579ae41ae Fix login for internal employees (#2528)
This PR:
- fixes login for internal employees, because we currently want to
prefer SIWC for them.
- fixes retrying forever on unauthorized access; we need to break
eventually on max retries.
2025-08-20 14:05:20 -07:00
Jeremy Rose
0d12380c3b refactor onboarding screen to a separate "app" (#2524)
this is in preparation for adding more separate "modes" to the tui, in
particular, a "transcript mode" to view a full history once #2316 lands.

1. split apart "tui events" from "app events".
2. remove onboarding-related events from AppEvent.
3. move several general drawing tools out of App and into a new Tui
class
2025-08-20 20:47:24 +00:00
Dylan
1a1516a80b [apply-patch] Fix applypatch for heredocs (#2477)
## Summary
Follow up to #2186 for #2072 - we added handling for `applypatch` in
default commands, but forgot to add detection to the heredocs logic.

## Testing
- [x] Added unit tests
2025-08-20 12:16:01 -07:00
Jeremy Rose
61bbabe7d9 tui: switch to using tokio + EventStream for processing crossterm events (#2489)
bringing the tui more into tokio-land to make it easier to factorize.

fyi @bolinfest
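
The resulting event-loop shape, sketched with crossterm's `event-stream` feature plus tokio (the real tui multiplexes more sources via `select!`):

```rust
use crossterm::event::{Event, EventStream, KeyCode};
use futures::StreamExt;

#[tokio::main]
async fn main() {
    // Requires crossterm built with the "event-stream" feature.
    let mut events = EventStream::new();
    while let Some(Ok(event)) = events.next().await {
        match event {
            Event::Key(key) if key.code == KeyCode::Char('q') => break,
            other => println!("{other:?}"),
        }
    }
}
```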
2025-08-20 17:11:09 +00:00
Jeremy Rose
8481eb4c6e tui: tab-completing a command moves the cursor to the end (#2362)
also tweak agents.md for faster `just fix`
2025-08-20 09:57:55 -07:00
Jeremy Rose
0ad4e11c84 detect terminal and include in request headers (#2437)
This adds the terminal version to the UA header.
2025-08-20 16:54:26 +00:00
ae
ee8c4ad23a feat: copy tweaks (#2502)
- For selectable options, use sentences starting in lowercase and not
ending with periods. To be honest I don't love this style, but better to
be consistent for now.
- Tweak some other strings.
- Put in more compelling suggestions on launch. Excited to put `/mcp` in
there next.
2025-08-20 07:26:14 +00:00
Ahmed Ibrahim
202af12926 Add a slash command to control permissions (#2474)
A slash command to control permissions



https://github.com/user-attachments/assets/c0edafcd-2085-4e09-8009-ba69c4f1c153

---------

Co-authored-by: ae <ae@openai.com>
2025-08-20 05:34:37 +00:00
Michael Bolin
ce434b1219 fix: prefer config var to env var (#2495) 2025-08-20 04:51:59 +00:00
Ahmed Ibrahim
d1f1e36836 Refresh ChatGPT auth token (#2484)
ChatGPT token's live for only 1 hour. If the session is longer we don't
refresh the token. We should get the expiry timestamp and attempt to
refresh before it.
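
The rule this implies, as a small sketch (field names are illustrative): refresh proactively a bit before the recorded expiry rather than only reacting to failures.

```rust
use std::time::{Duration, SystemTime};

struct TokenState {
    expires_at: SystemTime,
}

impl TokenState {
    /// True once we are within `margin` of expiry (or past it).
    fn needs_refresh(&self, margin: Duration) -> bool {
        SystemTime::now() + margin >= self.expires_at
    }
}

fn main() {
    let tok = TokenState {
        expires_at: SystemTime::now() + Duration::from_secs(3600),
    };
    // With a 5-minute margin, a fresh 1-hour token is not refreshed yet.
    assert!(!tok.needs_refresh(Duration::from_secs(300)));
}
```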
2025-08-19 21:01:31 -07:00
Gabriel Peal
eaae56a1b0 Client headers (#2487) 2025-08-19 23:32:15 -04:00
Gabriel Peal
77148a5c61 Diff command (#2476) 2025-08-19 22:50:28 -04:00
Jamie Magee
17c98a7fd3 Enable Dependabot updates for Rust toolchain (#2460)
This change allows Dependabot to update the Rust toolchain version
defined in `rust-toolchain.toml`. See [Dependabot now supports Rust
toolchain updates - GitHub
Changelog](https://github.blog/changelog/2025-08-19-dependabot-now-supports-rust-toolchain-updates/)
for more details.
2025-08-19 18:07:21 -07:00
Ahmed Ibrahim
bc298e47ca Fix: Sign in appear even if using other providers. (#2475)
We shouldn't show the login screen when using other providers.
2025-08-19 23:56:49 +00:00
Ahmed Ibrahim
0d6678936f fix apply patch when only one file is rendered (#2468)
<img width="809" height="87" alt="image"
src="https://github.com/user-attachments/assets/6fe69643-10d7-4420-bbf2-e30c092b800f"
/>
2025-08-19 23:49:08 +00:00
317 changed files with 17929 additions and 8358 deletions


@@ -0,0 +1,50 @@
name: 🧑‍💻 VS Code Extension
description: Report an issue with the VS Code extension
labels:
- extension
- needs triage
body:
- type: markdown
attributes:
value: |
Before submitting a new issue, please search for existing issues to see if your issue has already been reported.
If it has, please add a 👍 reaction (no need to leave a comment) to the existing issue instead of creating a new one.
- type: input
id: version
attributes:
label: What version of the VS Code extension are you using?
- type: input
id: ide
attributes:
label: Which IDE are you using?
description: Like `VS Code`, `Cursor`, `Windsurf`, etc.
- type: input
id: platform
attributes:
label: What platform is your computer?
description: |
For MacOS and Linux: copy the output of `uname -mprs`
For Windows: copy the output of `"$([Environment]::OSVersion | ForEach-Object VersionString) $(if ([Environment]::Is64BitOperatingSystem) { "x64" } else { "x86" })"` in the PowerShell console
- type: textarea
id: steps
attributes:
label: What steps can reproduce the bug?
description: Explain the bug and provide a code snippet that can reproduce it.
validations:
required: true
- type: textarea
id: expected
attributes:
label: What is the expected behavior?
description: If possible, please provide text instead of a screenshot.
- type: textarea
id: actual
attributes:
label: What do you see instead?
description: If possible, please provide text instead of a screenshot.
- type: textarea
id: notes
attributes:
label: Additional information
description: Is there anything else you think we should know?


@@ -1 +0,0 @@
/node_modules/


@@ -1,8 +0,0 @@
printWidth = 80
quoteProps = "consistent"
semi = true
tabWidth = 2
trailingComma = "all"
# Preserve existing behavior for markdown/text wrapping.
proseWrap = "preserve"


@@ -1,140 +0,0 @@
# openai/codex-action
`openai/codex-action` is a GitHub Action that facilitates the use of [Codex](https://github.com/openai/codex) on GitHub issues and pull requests. Using the action, associate **labels** to run Codex with the appropriate prompt for the given context. Codex will respond by posting comments or creating PRs, whichever you specify!
Here is a sample workflow that uses `openai/codex-action`:
```yaml
name: Codex
on:
issues:
types: [opened, labeled]
pull_request:
branches: [main]
types: [labeled]
jobs:
codex:
if: ... # optional, but can be effective in conserving CI resources
runs-on: ubuntu-latest
# TODO(mbolin): Need to verify if/when `write` is necessary.
permissions:
contents: write
issues: write
pull-requests: write
steps:
# By default, Codex runs network disabled using --full-auto, so perform
# any setup that requires network (such as installing dependencies)
# before openai/codex-action.
- name: Checkout repository
uses: actions/checkout@v4
- name: Run Codex
uses: openai/codex-action@latest
with:
openai_api_key: ${{ secrets.CODEX_OPENAI_API_KEY }}
github_token: ${{ secrets.GITHUB_TOKEN }}
```
See sample usage in [`codex.yml`](../../workflows/codex.yml).
## Triggering the Action
Using the sample workflow above, we have:
```yaml
on:
issues:
types: [opened, labeled]
pull_request:
branches: [main]
types: [labeled]
```
which means our workflow will be triggered when any of the following events occur:
- a label is added to an issue
- a label is added to a pull request against the `main` branch
### Label-Based Triggers
To define a GitHub label that should trigger Codex, create a file named `.github/codex/labels/LABEL-NAME.md` in your repository where `LABEL-NAME` is the name of the label. The content of the file is the prompt template to use when the label is added (see more on [Prompt Template Variables](#prompt-template-variables) below).
For example, if the file `.github/codex/labels/codex-review.md` exists, then:
- Adding the `codex-review` label will trigger the workflow containing the `openai/codex-action` GitHub Action.
- When `openai/codex-action` starts, it will replace the `codex-review` label with `codex-review-in-progress`.
- When `openai/codex-action` is finished, it will replace the `codex-review-in-progress` label with `codex-review-completed`.
If Codex sees that either `codex-review-in-progress` or `codex-review-completed` is already present, it will not perform the action.
As determined by the [default config](./src/default-label-config.ts), Codex will act on the following labels by default:
- Adding the `codex-review` label to a pull request will have Codex review the PR and add it to the PR as a comment.
- Adding the `codex-triage` label to an issue will have Codex investigate the issue and report its findings as a comment.
- Adding the `codex-issue-fix` label to an issue will have Codex attempt to fix the issue and create a PR with the fix, if any.
## Action Inputs
The `openai/codex-action` GitHub Action takes the following inputs
### `openai_api_key` (required)
Set your `OPENAI_API_KEY` as a [repository secret](https://docs.github.com/en/actions/security-for-github-actions/security-guides/using-secrets-in-github-actions). See **Secrets and variables**, then **Actions**, in the settings for your GitHub repo.
Note that the secret name does not have to be `OPENAI_API_KEY`. For example, you might want to name it `CODEX_OPENAI_API_KEY` and then configure it on `openai/codex-action` as follows:
```yaml
openai_api_key: ${{ secrets.CODEX_OPENAI_API_KEY }}
```
### `github_token` (required)
This is required so that Codex can post a comment or create a PR. Set this value on the action as follows:
```yaml
github_token: ${{ secrets.GITHUB_TOKEN }}
```
### `codex_args`
A whitespace-delimited list of arguments to pass to Codex. Defaults to `--full-auto`, but if you want to override the default model to use `o3`:
```yaml
codex_args: "--full-auto --model o3"
```
For more complex configurations, use the `codex_home` input.
### `codex_home`
If set, the value to use for the `$CODEX_HOME` environment variable when running Codex. As explained [in the docs](https://github.com/openai/codex/tree/main/codex-rs#readme), this folder can contain the `config.toml` to configure Codex, custom instructions, and log files.
This should be a relative path within your repo.
## Prompt Template Variables
As shown above, `"prompt"` and `"promptPath"` are used to define prompt templates that will be populated and passed to Codex in response to certain events. All template variables are of the form `{CODEX_ACTION_...}` and the supported values are defined below.
### `CODEX_ACTION_ISSUE_TITLE`
If the action was triggered on a GitHub issue, this is the issue title.
Specifically it is read as the `.issue.title` from the `$GITHUB_EVENT_PATH`.
### `CODEX_ACTION_ISSUE_BODY`
If the action was triggered on a GitHub issue, this is the issue body.
Specifically it is read as the `.issue.body` from the `$GITHUB_EVENT_PATH`.
### `CODEX_ACTION_GITHUB_EVENT_PATH`
The value of the `$GITHUB_EVENT_PATH` environment variable, which is the path to the file that contains the JSON payload for the event that triggered the workflow. Codex can use `jq` to read only the fields of interest from this file.
### `CODEX_ACTION_PR_DIFF`
If the action was triggered on a pull request, this is the diff between the base and head commits of the PR. It is the output from `git diff`.
Note that the content of the diff could be quite large, so it is generally safer to point Codex at `CODEX_ACTION_GITHUB_EVENT_PATH` and let it decide how it wants to explore the change.


@@ -1,125 +0,0 @@
name: "Codex [reusable action]"
description: "A reusable action that runs a Codex model."
inputs:
openai_api_key:
description: "The value to use as the OPENAI_API_KEY environment variable when running Codex."
required: true
trigger_phrase:
description: "Text to trigger Codex from a PR/issue body or comment."
required: false
default: ""
github_token:
description: "Token so Codex can comment on the PR or issue."
required: true
codex_args:
description: "A whitespace-delimited list of arguments to pass to Codex. Due to limitations in YAML, arguments with spaces are not supported. For more complex configurations, use the `codex_home` input."
required: false
default: "--config hide_agent_reasoning=true --full-auto"
codex_home:
description: "Value to use as the CODEX_HOME environment variable when running Codex."
required: false
codex_release_tag:
description: "The release tag of the Codex model to run, e.g., 'rust-v0.3.0'. Defaults to the latest release."
required: false
default: ""
runs:
using: "composite"
steps:
# Do this in Bash so we do not even bother to install Bun if the sender does
# not have write access to the repo.
- name: Verify user has write access to the repo.
env:
GH_TOKEN: ${{ github.token }}
shell: bash
run: |
set -euo pipefail
PERMISSION=$(gh api \
"/repos/${GITHUB_REPOSITORY}/collaborators/${{ github.event.sender.login }}/permission" \
| jq -r '.permission')
if [[ "$PERMISSION" != "admin" && "$PERMISSION" != "write" ]]; then
exit 1
fi
- name: Download Codex
env:
GH_TOKEN: ${{ github.token }}
shell: bash
run: |
set -euo pipefail
# Determine OS/arch and corresponding Codex artifact name.
uname_s=$(uname -s)
uname_m=$(uname -m)
case "$uname_s" in
Linux*) os="linux" ;;
Darwin*) os="apple-darwin" ;;
*) echo "Unsupported operating system: $uname_s"; exit 1 ;;
esac
case "$uname_m" in
x86_64*) arch="x86_64" ;;
arm64*|aarch64*) arch="aarch64" ;;
*) echo "Unsupported architecture: $uname_m"; exit 1 ;;
esac
# linux builds differentiate between musl and gnu.
if [[ "$os" == "linux" ]]; then
if [[ "$arch" == "x86_64" ]]; then
triple="${arch}-unknown-linux-musl"
else
# Only other supported linux build is aarch64 gnu.
triple="${arch}-unknown-linux-gnu"
fi
else
# macOS
triple="${arch}-apple-darwin"
fi
# Note that if we start baking version numbers into the artifact name,
# we will need to update this action.yml file to match.
artifact="codex-${triple}.tar.gz"
TAG_ARG="${{ inputs.codex_release_tag }}"
# The usage is `gh release download [<tag>] [flags]`, so if TAG_ARG
# is empty, we do not pass it so we can default to the latest release.
gh release download ${TAG_ARG:+$TAG_ARG} --repo openai/codex \
--pattern "$artifact" --output - \
| tar xzO > /usr/local/bin/codex
chmod +x /usr/local/bin/codex
# Display Codex version to confirm binary integrity.
codex --version
- name: Install Bun
uses: oven-sh/setup-bun@v2
with:
bun-version: 1.2.11
- name: Install dependencies
shell: bash
run: |
cd ${{ github.action_path }}
bun install --production
- name: Run Codex
shell: bash
run: bun run ${{ github.action_path }}/src/main.ts
# Process args plus environment variables often have a max of 128 KiB,
# so we should fit within that limit?
env:
INPUT_CODEX_ARGS: ${{ inputs.codex_args || '' }}
INPUT_CODEX_HOME: ${{ inputs.codex_home || ''}}
INPUT_TRIGGER_PHRASE: ${{ inputs.trigger_phrase || '' }}
OPENAI_API_KEY: ${{ inputs.openai_api_key }}
GITHUB_TOKEN: ${{ inputs.github_token }}
GITHUB_EVENT_ACTION: ${{ github.event.action || '' }}
GITHUB_EVENT_LABEL_NAME: ${{ github.event.label.name || '' }}
GITHUB_EVENT_ISSUE_NUMBER: ${{ github.event.issue.number || '' }}
GITHUB_EVENT_ISSUE_BODY: ${{ github.event.issue.body || '' }}
GITHUB_EVENT_REVIEW_BODY: ${{ github.event.review.body || '' }}
GITHUB_EVENT_COMMENT_BODY: ${{ github.event.comment.body || '' }}


@@ -1,91 +0,0 @@
{
"lockfileVersion": 1,
"workspaces": {
"": {
"name": "codex-action",
"dependencies": {
"@actions/core": "^1.11.1",
"@actions/github": "^6.0.1",
},
"devDependencies": {
"@types/bun": "^1.2.20",
"@types/node": "^24.3.0",
"prettier": "^3.6.2",
"typescript": "^5.9.2",
},
},
},
"packages": {
"@actions/core": ["@actions/core@1.11.1", "", { "dependencies": { "@actions/exec": "^1.1.1", "@actions/http-client": "^2.0.1" } }, "sha512-hXJCSrkwfA46Vd9Z3q4cpEpHB1rL5NG04+/rbqW9d3+CSvtB1tYe8UTpAlixa1vj0m/ULglfEK2UKxMGxCxv5A=="],
"@actions/exec": ["@actions/exec@1.1.1", "", { "dependencies": { "@actions/io": "^1.0.1" } }, "sha512-+sCcHHbVdk93a0XT19ECtO/gIXoxvdsgQLzb2fE2/5sIZmWQuluYyjPQtrtTHdU1YzTZ7bAPN4sITq2xi1679w=="],
"@actions/github": ["@actions/github@6.0.1", "", { "dependencies": { "@actions/http-client": "^2.2.0", "@octokit/core": "^5.0.1", "@octokit/plugin-paginate-rest": "^9.2.2", "@octokit/plugin-rest-endpoint-methods": "^10.4.0", "@octokit/request": "^8.4.1", "@octokit/request-error": "^5.1.1", "undici": "^5.28.5" } }, "sha512-xbZVcaqD4XnQAe35qSQqskb3SqIAfRyLBrHMd/8TuL7hJSz2QtbDwnNM8zWx4zO5l2fnGtseNE3MbEvD7BxVMw=="],
"@actions/http-client": ["@actions/http-client@2.2.3", "", { "dependencies": { "tunnel": "^0.0.6", "undici": "^5.25.4" } }, "sha512-mx8hyJi/hjFvbPokCg4uRd4ZX78t+YyRPtnKWwIl+RzNaVuFpQHfmlGVfsKEJN8LwTCvL+DfVgAM04XaHkm6bA=="],
"@actions/io": ["@actions/io@1.1.3", "", {}, "sha512-wi9JjgKLYS7U/z8PPbco+PvTb/nRWjeoFlJ1Qer83k/3C5PHQi28hiVdeE2kHXmIL99mQFawx8qt/JPjZilJ8Q=="],
"@fastify/busboy": ["@fastify/busboy@2.1.1", "", {}, "sha512-vBZP4NlzfOlerQTnba4aqZoMhE/a9HY7HRqoOPaETQcSQuWEIyZMHGfVu6w9wGtGK5fED5qRs2DteVCjOH60sA=="],
"@octokit/auth-token": ["@octokit/auth-token@4.0.0", "", {}, "sha512-tY/msAuJo6ARbK6SPIxZrPBms3xPbfwBrulZe0Wtr/DIY9lje2HeV1uoebShn6mx7SjCHif6EjMvoREj+gZ+SA=="],
"@octokit/core": ["@octokit/core@5.2.1", "", { "dependencies": { "@octokit/auth-token": "^4.0.0", "@octokit/graphql": "^7.1.0", "@octokit/request": "^8.4.1", "@octokit/request-error": "^5.1.1", "@octokit/types": "^13.0.0", "before-after-hook": "^2.2.0", "universal-user-agent": "^6.0.0" } }, "sha512-dKYCMuPO1bmrpuogcjQ8z7ICCH3FP6WmxpwC03yjzGfZhj9fTJg6+bS1+UAplekbN2C+M61UNllGOOoAfGCrdQ=="],
"@octokit/endpoint": ["@octokit/endpoint@9.0.6", "", { "dependencies": { "@octokit/types": "^13.1.0", "universal-user-agent": "^6.0.0" } }, "sha512-H1fNTMA57HbkFESSt3Y9+FBICv+0jFceJFPWDePYlR/iMGrwM5ph+Dd4XRQs+8X+PUFURLQgX9ChPfhJ/1uNQw=="],
"@octokit/graphql": ["@octokit/graphql@7.1.1", "", { "dependencies": { "@octokit/request": "^8.4.1", "@octokit/types": "^13.0.0", "universal-user-agent": "^6.0.0" } }, "sha512-3mkDltSfcDUoa176nlGoA32RGjeWjl3K7F/BwHwRMJUW/IteSa4bnSV8p2ThNkcIcZU2umkZWxwETSSCJf2Q7g=="],
"@octokit/openapi-types": ["@octokit/openapi-types@24.2.0", "", {}, "sha512-9sIH3nSUttelJSXUrmGzl7QUBFul0/mB8HRYl3fOlgHbIWG+WnYDXU3v/2zMtAvuzZ/ed00Ei6on975FhBfzrg=="],
"@octokit/plugin-paginate-rest": ["@octokit/plugin-paginate-rest@9.2.2", "", { "dependencies": { "@octokit/types": "^12.6.0" }, "peerDependencies": { "@octokit/core": "5" } }, "sha512-u3KYkGF7GcZnSD/3UP0S7K5XUFT2FkOQdcfXZGZQPGv3lm4F2Xbf71lvjldr8c1H3nNbF+33cLEkWYbokGWqiQ=="],
"@octokit/plugin-rest-endpoint-methods": ["@octokit/plugin-rest-endpoint-methods@10.4.1", "", { "dependencies": { "@octokit/types": "^12.6.0" }, "peerDependencies": { "@octokit/core": "5" } }, "sha512-xV1b+ceKV9KytQe3zCVqjg+8GTGfDYwaT1ATU5isiUyVtlVAO3HNdzpS4sr4GBx4hxQ46s7ITtZrAsxG22+rVg=="],
"@octokit/request": ["@octokit/request@8.4.1", "", { "dependencies": { "@octokit/endpoint": "^9.0.6", "@octokit/request-error": "^5.1.1", "@octokit/types": "^13.1.0", "universal-user-agent": "^6.0.0" } }, "sha512-qnB2+SY3hkCmBxZsR/MPCybNmbJe4KAlfWErXq+rBKkQJlbjdJeS85VI9r8UqeLYLvnAenU8Q1okM/0MBsAGXw=="],
"@octokit/request-error": ["@octokit/request-error@5.1.1", "", { "dependencies": { "@octokit/types": "^13.1.0", "deprecation": "^2.0.0", "once": "^1.4.0" } }, "sha512-v9iyEQJH6ZntoENr9/yXxjuezh4My67CBSu9r6Ve/05Iu5gNgnisNWOsoJHTP6k0Rr0+HQIpnH+kyammu90q/g=="],
"@octokit/types": ["@octokit/types@13.10.0", "", { "dependencies": { "@octokit/openapi-types": "^24.2.0" } }, "sha512-ifLaO34EbbPj0Xgro4G5lP5asESjwHracYJvVaPIyXMuiuXLlhic3S47cBdTb+jfODkTE5YtGCLt3Ay3+J97sA=="],
"@types/bun": ["@types/bun@1.2.20", "", { "dependencies": { "bun-types": "1.2.20" } }, "sha512-dX3RGzQ8+KgmMw7CsW4xT5ITBSCrSbfHc36SNT31EOUg/LA9JWq0VDdEXDRSe1InVWpd2yLUM1FUF/kEOyTzYA=="],
"@types/node": ["@types/node@24.3.0", "", { "dependencies": { "undici-types": "~7.10.0" } }, "sha512-aPTXCrfwnDLj4VvXrm+UUCQjNEvJgNA8s5F1cvwQU+3KNltTOkBm1j30uNLyqqPNe7gE3KFzImYoZEfLhp4Yow=="],
"@types/react": ["@types/react@19.1.8", "", { "dependencies": { "csstype": "^3.0.2" } }, "sha512-AwAfQ2Wa5bCx9WP8nZL2uMZWod7J7/JSplxbTmBQ5ms6QpqNYm672H0Vu9ZVKVngQ+ii4R/byguVEUZQyeg44g=="],
"before-after-hook": ["before-after-hook@2.2.3", "", {}, "sha512-NzUnlZexiaH/46WDhANlyR2bXRopNg4F/zuSA3OpZnllCUgRaOF2znDioDWrmbNVsuZk6l9pMquQB38cfBZwkQ=="],
"bun-types": ["bun-types@1.2.20", "", { "dependencies": { "@types/node": "*" }, "peerDependencies": { "@types/react": "^19" } }, "sha512-pxTnQYOrKvdOwyiyd/7sMt9yFOenN004Y6O4lCcCUoKVej48FS5cvTw9geRaEcB9TsDZaJKAxPTVvi8tFsVuXA=="],
"csstype": ["csstype@3.1.3", "", {}, "sha512-M1uQkMl8rQK/szD0LNhtqxIPLpimGm8sOBwU7lLnCpSbTyY3yeU1Vc7l4KT5zT4s/yOxHH5O7tIuuLOCnLADRw=="],
"deprecation": ["deprecation@2.3.1", "", {}, "sha512-xmHIy4F3scKVwMsQ4WnVaS8bHOx0DmVwRywosKhaILI0ywMDWPtBSku2HNxRvF7jtwDRsoEwYQSfbxj8b7RlJQ=="],
"once": ["once@1.4.0", "", { "dependencies": { "wrappy": "1" } }, "sha512-lNaJgI+2Q5URQBkccEKHTQOPaXdUxnZZElQTZY0MFUAuaEqe1E+Nyvgdz/aIyNi6Z9MzO5dv1H8n58/GELp3+w=="],
"prettier": ["prettier@3.6.2", "", { "bin": { "prettier": "bin/prettier.cjs" } }, "sha512-I7AIg5boAr5R0FFtJ6rCfD+LFsWHp81dolrFD8S79U9tb8Az2nGrJncnMSnys+bpQJfRUzqs9hnA81OAA3hCuQ=="],
"tunnel": ["tunnel@0.0.6", "", {}, "sha512-1h/Lnq9yajKY2PEbBadPXj3VxsDDu844OnaAo52UVmIzIvwwtBPIuNvkjuzBlTWpfJyUbG3ez0KSBibQkj4ojg=="],
"typescript": ["typescript@5.9.2", "", { "bin": { "tsc": "bin/tsc", "tsserver": "bin/tsserver" } }, "sha512-CWBzXQrc/qOkhidw1OzBTQuYRbfyxDXJMVJ1XNwUHGROVmuaeiEm3OslpZ1RV96d7SKKjZKrSJu3+t/xlw3R9A=="],
"undici": ["undici@5.29.0", "", { "dependencies": { "@fastify/busboy": "^2.0.0" } }, "sha512-raqeBD6NQK4SkWhQzeYKd1KmIG6dllBOTt55Rmkt4HtI9mwdWtJljnrXjAFUBLTSN67HWrOIZ3EPF4kjUw80Bg=="],
"undici-types": ["undici-types@7.10.0", "", {}, "sha512-t5Fy/nfn+14LuOc2KNYg75vZqClpAiqscVvMygNnlsHBFpSXdJaYtXMcdNLpl/Qvc3P2cB3s6lOV51nqsFq4ag=="],
"universal-user-agent": ["universal-user-agent@6.0.1", "", {}, "sha512-yCzhz6FN2wU1NiiQRogkTQszlQSlpWaw8SvVegAc+bDxbzHgh1vX8uIe8OYyMH6DwH+sdTJsgMl36+mSMdRJIQ=="],
"wrappy": ["wrappy@1.0.2", "", {}, "sha512-l4Sp/DRseor9wL6EvV2+TuQn63dMkPjZ/sp9XkghTEbV9KlPS1xUsZ3u7/IQO4wxtcFB4bgpQPRcR3QCvezPcQ=="],
"@octokit/plugin-paginate-rest/@octokit/types": ["@octokit/types@12.6.0", "", { "dependencies": { "@octokit/openapi-types": "^20.0.0" } }, "sha512-1rhSOfRa6H9w4YwK0yrf5faDaDTb+yLyBUKOCV4xtCDB5VmIPqd/v9yr9o6SAzOAlRxMiRiCic6JVM1/kunVkw=="],
"@octokit/plugin-rest-endpoint-methods/@octokit/types": ["@octokit/types@12.6.0", "", { "dependencies": { "@octokit/openapi-types": "^20.0.0" } }, "sha512-1rhSOfRa6H9w4YwK0yrf5faDaDTb+yLyBUKOCV4xtCDB5VmIPqd/v9yr9o6SAzOAlRxMiRiCic6JVM1/kunVkw=="],
"bun-types/@types/node": ["@types/node@24.2.1", "", { "dependencies": { "undici-types": "~7.10.0" } }, "sha512-DRh5K+ka5eJic8CjH7td8QpYEV6Zo10gfRkjHCO3weqZHWDtAaSTFtl4+VMqOJ4N5jcuhZ9/l+yy8rVgw7BQeQ=="],
"@octokit/plugin-paginate-rest/@octokit/types/@octokit/openapi-types": ["@octokit/openapi-types@20.0.0", "", {}, "sha512-EtqRBEjp1dL/15V7WiX5LJMIxxkdiGJnabzYx5Apx4FkQIFgAfKumXeYAqqJCj1s+BMX4cPFIFC4OLCR6stlnA=="],
"@octokit/plugin-rest-endpoint-methods/@octokit/types/@octokit/openapi-types": ["@octokit/openapi-types@20.0.0", "", {}, "sha512-EtqRBEjp1dL/15V7WiX5LJMIxxkdiGJnabzYx5Apx4FkQIFgAfKumXeYAqqJCj1s+BMX4cPFIFC4OLCR6stlnA=="],
}
}


@@ -1,21 +0,0 @@
{
"name": "codex-action",
"version": "0.0.0",
"private": true,
"scripts": {
"format": "prettier --check src",
"format:fix": "prettier --write src",
"test": "bun test",
"typecheck": "tsc"
},
"dependencies": {
"@actions/core": "^1.11.1",
"@actions/github": "^6.0.1"
},
"devDependencies": {
"@types/bun": "^1.2.20",
"@types/node": "^24.3.0",
"prettier": "^3.6.2",
"typescript": "^5.9.2"
}
}


@@ -1,85 +0,0 @@
import * as github from "@actions/github";
import type { EnvContext } from "./env-context";
/**
* Add an "eyes" reaction to the entity (issue, issue comment, or pull request
* review comment) that triggered the current Codex invocation.
*
* The purpose is to provide immediate feedback to the user similar to the
* *-in-progress label flow indicating that the bot has acknowledged the
* request and is working on it.
*
* We attempt to add the reaction best suited for the current GitHub event:
*
* • issues → POST /repos/{owner}/{repo}/issues/{issue_number}/reactions
* • issue_comment → POST /repos/{owner}/{repo}/issues/comments/{comment_id}/reactions
* • pull_request_review_comment → POST /repos/{owner}/{repo}/pulls/comments/{comment_id}/reactions
*
* If the specific target is unavailable (e.g. unexpected payload shape) we
* silently skip instead of failing the whole action because the reaction is
* merely cosmetic.
*/
export async function addEyesReaction(ctx: EnvContext): Promise<void> {
const octokit = ctx.getOctokit();
const { owner, repo } = github.context.repo;
const eventName = github.context.eventName;
try {
switch (eventName) {
case "issue_comment": {
const commentId = (github.context.payload as any)?.comment?.id;
if (commentId) {
await octokit.rest.reactions.createForIssueComment({
owner,
repo,
comment_id: commentId,
content: "eyes",
});
return;
}
break;
}
case "pull_request_review_comment": {
const commentId = (github.context.payload as any)?.comment?.id;
if (commentId) {
await octokit.rest.reactions.createForPullRequestReviewComment({
owner,
repo,
comment_id: commentId,
content: "eyes",
});
return;
}
break;
}
case "issues": {
const issueNumber = github.context.issue.number;
if (issueNumber) {
await octokit.rest.reactions.createForIssue({
owner,
repo,
issue_number: issueNumber,
content: "eyes",
});
return;
}
break;
}
default: {
// Fallback: try to react to the issue/PR if we have a number.
const issueNumber = github.context.issue.number;
if (issueNumber) {
await octokit.rest.reactions.createForIssue({
owner,
repo,
issue_number: issueNumber,
content: "eyes",
});
}
}
}
} catch (error) {
// Do not fail the action if reaction creation fails; log and continue.
console.warn(`Failed to add \"eyes\" reaction: ${error}`);
}
}


@@ -1,53 +0,0 @@
import type { EnvContext } from "./env-context";
import { runCodex } from "./run-codex";
import { postComment } from "./post-comment";
import { addEyesReaction } from "./add-reaction";
/**
* Handle `issue_comment` and `pull_request_review_comment` events once we know
* the action is supported.
*/
export async function onComment(ctx: EnvContext): Promise<void> {
const triggerPhrase = ctx.tryGet("INPUT_TRIGGER_PHRASE");
if (!triggerPhrase) {
console.warn("Empty trigger phrase: skipping.");
return;
}
// Attempt to get the body of the comment from the environment. Depending on
// the event type either `GITHUB_EVENT_COMMENT_BODY` (issue & PR comments) or
// `GITHUB_EVENT_REVIEW_BODY` (PR reviews) is set.
const commentBody =
ctx.tryGetNonEmpty("GITHUB_EVENT_COMMENT_BODY") ??
ctx.tryGetNonEmpty("GITHUB_EVENT_REVIEW_BODY") ??
ctx.tryGetNonEmpty("GITHUB_EVENT_ISSUE_BODY");
if (!commentBody) {
console.warn("Comment body not found in environment: skipping.");
return;
}
// Check if the trigger phrase is present.
if (!commentBody.includes(triggerPhrase)) {
console.log(
`Trigger phrase '${triggerPhrase}' not found: nothing to do for this comment.`,
);
return;
}
// Derive the prompt by removing the trigger phrase. Remove only the first
// occurrence to keep any additional occurrences that might be meaningful.
const prompt = commentBody.replace(triggerPhrase, "").trim();
if (prompt.length === 0) {
console.warn("Prompt is empty after removing trigger phrase: skipping");
return;
}
// Provide immediate feedback that we are working on the request.
await addEyesReaction(ctx);
// Run Codex and post the response as a new comment.
const lastMessage = await runCodex(prompt, ctx);
await postComment(lastMessage, ctx);
}


@@ -1,11 +0,0 @@
import { readdirSync, statSync } from "fs";
import * as path from "path";
export interface Config {
labels: Record<string, LabelConfig>;
}
export interface LabelConfig {
/** Returns the prompt template. */
getPromptTemplate(): string;
}


@@ -1,44 +0,0 @@
import type { Config } from "./config";
export function getDefaultConfig(): Config {
return {
labels: {
"codex-investigate-issue": {
getPromptTemplate: () =>
`
Troubleshoot whether the reported issue is valid.
Provide a concise and respectful comment summarizing the findings.
### {CODEX_ACTION_ISSUE_TITLE}
{CODEX_ACTION_ISSUE_BODY}
`.trim(),
},
"codex-code-review": {
getPromptTemplate: () =>
`
Review this PR and respond with a very concise final message, formatted in Markdown.
There should be a summary of the changes (1-2 sentences) and a few bullet points if necessary.
Then provide the **review** (1-2 sentences plus bullet points, friendly tone).
{CODEX_ACTION_GITHUB_EVENT_PATH} contains the JSON that triggered this GitHub workflow. It contains the \`base\` and \`head\` refs that define this PR. Both refs are available locally.
`.trim(),
},
"codex-attempt-fix": {
getPromptTemplate: () =>
`
Attempt to solve the reported issue.
If a code change is required, create a new branch, commit the fix, and open a pull-request that resolves the problem.
### {CODEX_ACTION_ISSUE_TITLE}
{CODEX_ACTION_ISSUE_BODY}
`.trim(),
},
},
};
}


@@ -1,116 +0,0 @@
/*
* Centralised access to environment variables used by the Codex GitHub
* Action.
*
* To enable proper unit-testing we avoid reading from `process.env` at module
* initialisation time. Instead a `EnvContext` object is created (usually from
* the real `process.env`) and passed around explicitly or where that is not
* yet practical imported as the shared `defaultContext` singleton. Tests can
* create their own context backed by a stubbed map of variables without having
* to mutate global state.
*/
import { fail } from "./fail";
import * as github from "@actions/github";
export interface EnvContext {
/**
* Return the value for a given environment variable or terminate the action
* via `fail` if it is missing / empty.
*/
get(name: string): string;
/**
* Attempt to read an environment variable. Returns the value when present;
* otherwise returns undefined (does not call `fail`).
*/
tryGet(name: string): string | undefined;
/**
* Attempt to read an environment variable. Returns non-empty string value or
* null if unset or empty string.
*/
tryGetNonEmpty(name: string): string | null;
/**
* Return a memoised Octokit instance authenticated via the token resolved
* from the provided argument (when defined) or the environment variables
* `GITHUB_TOKEN`/`GH_TOKEN`.
*
* Subsequent calls return the same cached instance to avoid spawning
* multiple REST clients within a single action run.
*/
getOctokit(token?: string): ReturnType<typeof github.getOctokit>;
}
/** Internal helper *not* exported. */
function _getRequiredEnv(
name: string,
env: Record<string, string | undefined>,
): string | undefined {
const value = env[name];
// Avoid leaking secrets into logs while still logging non-secret variables.
if (name.endsWith("KEY") || name.endsWith("TOKEN")) {
if (value) {
console.log(`value for ${name} was found`);
}
} else {
console.log(`${name}=${value}`);
}
return value;
}
/** Create a context backed by the supplied environment map (defaults to `process.env`). */
export function createEnvContext(
env: Record<string, string | undefined> = process.env,
): EnvContext {
// Lazily instantiated Octokit client shared across this context.
let cachedOctokit: ReturnType<typeof github.getOctokit> | null = null;
return {
get(name: string): string {
const value = _getRequiredEnv(name, env);
if (value == null) {
fail(`Missing required environment variable: ${name}`);
}
return value;
},
tryGet(name: string): string | undefined {
return _getRequiredEnv(name, env);
},
tryGetNonEmpty(name: string): string | null {
const value = _getRequiredEnv(name, env);
return value == null || value === "" ? null : value;
},
getOctokit(token?: string) {
if (cachedOctokit) {
return cachedOctokit;
}
// Determine the token to authenticate with.
const githubToken = token ?? env["GITHUB_TOKEN"] ?? env["GH_TOKEN"];
if (!githubToken) {
fail(
"Unable to locate a GitHub token. `github_token` should have been set on the action.",
);
}
cachedOctokit = github.getOctokit(githubToken!);
return cachedOctokit;
},
};
}
/**
* Shared context built from the actual `process.env`. Production code that is
* not yet refactored to receive a context explicitly may import and use this
* singleton. Tests should avoid the singleton and instead pass their own
* context to the functions they exercise.
*/
export const defaultContext: EnvContext = createEnvContext();


@@ -1,4 +0,0 @@
export function fail(message: string): never {
console.error(message);
process.exit(1);
}


@@ -1,149 +0,0 @@
import { spawnSync } from "child_process";
import * as github from "@actions/github";
import { EnvContext } from "./env-context";
function runGit(args: string[], silent = true): string {
console.info(`Running git ${args.join(" ")}`);
const res = spawnSync("git", args, {
encoding: "utf8",
stdio: silent ? ["ignore", "pipe", "pipe"] : "inherit",
});
if (res.error) {
throw res.error;
}
if (res.status !== 0) {
// Return stderr so caller may handle; else throw.
throw new Error(
`git ${args.join(" ")} failed with code ${res.status}: ${res.stderr}`,
);
}
return res.stdout.trim();
}
function stageAllChanges() {
runGit(["add", "-A"]);
}
function hasStagedChanges(): boolean {
const res = spawnSync("git", ["diff", "--cached", "--quiet", "--exit-code"]);
return res.status !== 0;
}
function ensureOnBranch(
issueNumber: number,
protectedBranches: string[],
suggestedSlug?: string,
): string {
let branch = "";
try {
branch = runGit(["symbolic-ref", "--short", "-q", "HEAD"]);
} catch {
branch = "";
}
// If detached HEAD or on a protected branch, create a new branch.
if (!branch || protectedBranches.includes(branch)) {
if (suggestedSlug) {
const safeSlug = suggestedSlug
.toLowerCase()
.replace(/[^\w\s-]/g, "")
.trim()
.replace(/\s+/g, "-");
branch = `codex-fix-${issueNumber}-${safeSlug}`;
} else {
branch = `codex-fix-${issueNumber}-${Date.now()}`;
}
runGit(["switch", "-c", branch]);
}
return branch;
}
function commitIfNeeded(issueNumber: number) {
if (hasStagedChanges()) {
runGit([
"commit",
"-m",
`fix: automated fix for #${issueNumber} via Codex`,
]);
}
}
function pushBranch(branch: string, githubToken: string, ctx: EnvContext) {
const repoSlug = ctx.get("GITHUB_REPOSITORY"); // owner/repo
const remoteUrl = `https://x-access-token:${githubToken}@github.com/${repoSlug}.git`;
runGit(["push", "--force-with-lease", "-u", remoteUrl, `HEAD:${branch}`]);
}
/**
* If this returns a string, it is the URL of the created PR.
*/
export async function maybePublishPRForIssue(
issueNumber: number,
lastMessage: string,
ctx: EnvContext,
): Promise<string | undefined> {
// Only proceed if GITHUB_TOKEN available.
const githubToken =
ctx.tryGetNonEmpty("GITHUB_TOKEN") ?? ctx.tryGetNonEmpty("GH_TOKEN");
if (!githubToken) {
console.warn("No GitHub token - skipping PR creation.");
return undefined;
}
// Print `git status` for debugging.
runGit(["status"]);
// Stage any remaining changes so they can be committed and pushed.
stageAllChanges();
const octokit = ctx.getOctokit(githubToken);
const { owner, repo } = github.context.repo;
// Determine default branch to treat as protected.
let defaultBranch = "main";
try {
const repoInfo = await octokit.rest.repos.get({ owner, repo });
defaultBranch = repoInfo.data.default_branch ?? "main";
} catch (e) {
console.warn(`Failed to get default branch, assuming 'main': ${e}`);
}
const sanitizedMessage = lastMessage.replace(/\u2022/g, "-");
const [summaryLine] = sanitizedMessage.split(/\r?\n/);
const branch = ensureOnBranch(issueNumber, [defaultBranch, "master"], summaryLine);
commitIfNeeded(issueNumber);
pushBranch(branch, githubToken, ctx);
// Try to find existing PR for this branch
const headParam = `${owner}:${branch}`;
const existing = await octokit.rest.pulls.list({
owner,
repo,
head: headParam,
state: "open",
});
if (existing.data.length > 0) {
return existing.data[0].html_url;
}
// Reuse the default branch resolved above as the PR base.
const baseBranch = defaultBranch;
const pr = await octokit.rest.pulls.create({
owner,
repo,
title: summaryLine,
head: branch,
base: baseBranch,
body: sanitizedMessage,
});
return pr.data.html_url;
}

View File

@@ -1,16 +0,0 @@
export function setGitHubActionsUser(): void {
const commands = [
["git", "config", "--global", "user.name", "github-actions[bot]"],
[
"git",
"config",
"--global",
"user.email",
"41898282+github-actions[bot]@users.noreply.github.com",
],
];
for (const command of commands) {
Bun.spawnSync(command);
}
}

View File

@@ -1,11 +0,0 @@
import * as pathMod from "path";
import { EnvContext } from "./env-context";
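/** e.g. resolveWorkspacePath("docs/config.md", ctx) ➜ "<GITHUB_WORKSPACE>/docs/config.md" */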
export function resolveWorkspacePath(path: string, ctx: EnvContext): string {
if (pathMod.isAbsolute(path)) {
return path;
} else {
const workspace = ctx.get("GITHUB_WORKSPACE");
return pathMod.join(workspace, path);
}
}

View File

@@ -1,56 +0,0 @@
import type { Config, LabelConfig } from "./config";
import { getDefaultConfig } from "./default-label-config";
import { readFileSync, readdirSync, statSync } from "fs";
import * as path from "path";
/**
* Build an in-memory configuration object by scanning the repository for
* Markdown templates located in `.github/codex/labels`.
*
* Each `*.md` file in that directory represents a label that can trigger the
* Codex GitHub Action. The filename **without** the extension is interpreted
* as the label name, e.g. `codex-review.md` ➜ `codex-review`.
*
* For every such label we derive the corresponding `doneLabel` by appending
* the suffix `-completed`.
*/
export function loadConfig(workspace: string): Config {
const labelsDir = path.join(workspace, ".github", "codex", "labels");
let entries: string[];
try {
entries = readdirSync(labelsDir);
} catch {
// If the directory is missing, return the default configuration.
return getDefaultConfig();
}
const labels: Record<string, LabelConfig> = {};
for (const entry of entries) {
if (!entry.endsWith(".md")) {
continue;
}
const fullPath = path.join(labelsDir, entry);
if (!statSync(fullPath).isFile()) {
continue;
}
const labelName = entry.slice(0, -3); // trim ".md"
labels[labelName] = new FileLabelConfig(fullPath);
}
return { labels };
}
class FileLabelConfig implements LabelConfig {
constructor(private readonly promptPath: string) {}
getPromptTemplate(): string {
return readFileSync(this.promptPath, "utf8");
}
}

View File

@@ -1,80 +0,0 @@
#!/usr/bin/env bun
import type { Config } from "./config";
import { defaultContext, EnvContext } from "./env-context";
import { loadConfig } from "./load-config";
import { setGitHubActionsUser } from "./git-user";
import { onLabeled } from "./process-label";
import { ensureBaseAndHeadCommitsForPRAreAvailable } from "./prompt-template";
import { performAdditionalValidation } from "./verify-inputs";
import { onComment } from "./comment";
import { onReview } from "./review";
async function main(): Promise<void> {
const ctx: EnvContext = defaultContext;
// Build the configuration dynamically by scanning `.github/codex/labels`.
const GITHUB_WORKSPACE = ctx.get("GITHUB_WORKSPACE");
const config: Config = loadConfig(GITHUB_WORKSPACE);
// Optionally perform additional validation of prompt template files.
performAdditionalValidation(config, GITHUB_WORKSPACE);
const GITHUB_EVENT_NAME = ctx.get("GITHUB_EVENT_NAME");
const GITHUB_EVENT_ACTION = ctx.get("GITHUB_EVENT_ACTION");
// Set user.name and user.email to a bot before Codex runs, just in case it
// creates a commit.
setGitHubActionsUser();
switch (GITHUB_EVENT_NAME) {
case "issues": {
if (GITHUB_EVENT_ACTION === "labeled") {
await onLabeled(config, ctx);
return;
} else if (GITHUB_EVENT_ACTION === "opened") {
await onComment(ctx);
return;
}
break;
}
case "issue_comment": {
if (GITHUB_EVENT_ACTION === "created") {
await onComment(ctx);
return;
}
break;
}
case "pull_request": {
if (GITHUB_EVENT_ACTION === "labeled") {
await ensureBaseAndHeadCommitsForPRAreAvailable(ctx);
await onLabeled(config, ctx);
return;
}
break;
}
case "pull_request_review": {
await ensureBaseAndHeadCommitsForPRAreAvailable(ctx);
if (GITHUB_EVENT_ACTION === "submitted") {
await onReview(ctx);
return;
}
break;
}
case "pull_request_review_comment": {
await ensureBaseAndHeadCommitsForPRAreAvailable(ctx);
if (GITHUB_EVENT_ACTION === "created") {
await onComment(ctx);
return;
}
break;
}
}
console.warn(
`Unsupported action '${GITHUB_EVENT_ACTION}' for event '${GITHUB_EVENT_NAME}'.`,
);
}
main();

View File

@@ -1,62 +0,0 @@
import { fail } from "./fail";
import * as github from "@actions/github";
import { EnvContext } from "./env-context";
/**
* Post a comment to the issue / pull request currently in scope.
*
* Provide the environment context so that token lookup (inside getOctokit) does
* not rely on global state.
*/
export async function postComment(
commentBody: string,
ctx: EnvContext,
): Promise<void> {
// Append a footer with a link back to the workflow run, if available.
const footer = buildWorkflowRunFooter(ctx);
const bodyWithFooter = footer ? `${commentBody}${footer}` : commentBody;
const octokit = ctx.getOctokit();
console.info("Got Octokit instance for posting comment");
const { owner, repo } = github.context.repo;
const issueNumber = github.context.issue.number;
if (!issueNumber) {
console.warn(
"No issue or pull_request number found in GitHub context; skipping comment creation.",
);
return;
}
try {
console.info("Calling octokit.rest.issues.createComment()");
await octokit.rest.issues.createComment({
owner,
repo,
issue_number: issueNumber,
body: bodyWithFooter,
});
} catch (error) {
fail(`Failed to create comment via GitHub API: ${error}`);
}
}
/**
* Helper to build a Markdown fragment linking back to the workflow run that
* generated the current comment. Returns `undefined` if required environment
* variables are missing (e.g. when running outside of GitHub Actions) so we
* can gracefully skip the footer in those cases.
*/
function buildWorkflowRunFooter(ctx: EnvContext): string | undefined {
const serverUrl =
ctx.tryGetNonEmpty("GITHUB_SERVER_URL") ?? "https://github.com";
const repository = ctx.tryGetNonEmpty("GITHUB_REPOSITORY");
const runId = ctx.tryGetNonEmpty("GITHUB_RUN_ID");
if (!repository || !runId) {
return undefined;
}
const url = `${serverUrl}/${repository}/actions/runs/${runId}`;
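// e.g. "https://github.com/<owner>/<repo>/actions/runs/<run-id>"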
return `\n\n---\n*[_View workflow run_](${url})*`;
}

View File

@@ -1,226 +0,0 @@
import { fail } from "./fail";
import { EnvContext } from "./env-context";
import { renderPromptTemplate } from "./prompt-template";
import { postComment } from "./post-comment";
import { runCodex } from "./run-codex";
import * as github from "@actions/github";
import { Config, LabelConfig } from "./config";
import { maybePublishPRForIssue } from "./git-helpers";
export async function onLabeled(
config: Config,
ctx: EnvContext,
): Promise<void> {
const GITHUB_EVENT_LABEL_NAME = ctx.get("GITHUB_EVENT_LABEL_NAME");
const labelConfig = config.labels[GITHUB_EVENT_LABEL_NAME] as
| LabelConfig
| undefined;
if (!labelConfig) {
fail(
`Label \`${GITHUB_EVENT_LABEL_NAME}\` not found in config: ${JSON.stringify(config)}`,
);
}
await processLabelConfig(ctx, GITHUB_EVENT_LABEL_NAME, labelConfig);
}
/**
* Wrapper that handles `-in-progress` and `-completed` semantics around the core lint/fix/review
* processing. It will:
*
* - Skip execution if the `-in-progress` or `-completed` label is already present.
* - Mark the PR/issue as `-in-progress`.
* - After successful execution, mark the PR/issue as `-completed`.
*/
async function processLabelConfig(
ctx: EnvContext,
label: string,
labelConfig: LabelConfig,
): Promise<void> {
const octokit = ctx.getOctokit();
const { owner, repo, issueNumber, labelNames } =
await getCurrentLabels(octokit);
const inProgressLabel = `${label}-in-progress`;
const completedLabel = `${label}-completed`;
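// e.g. for label "codex-review": "codex-review-in-progress" / "codex-review-completed"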
for (const markerLabel of [inProgressLabel, completedLabel]) {
if (labelNames.includes(markerLabel)) {
console.log(
`Label '${markerLabel}' already present on issue/PR #${issueNumber}. Skipping Codex action.`,
);
// Clean up: remove the triggering label to avoid confusion and re-runs.
await addAndRemoveLabels(octokit, {
owner,
repo,
issueNumber,
remove: markerLabel,
});
return;
}
}
// Mark the PR/issue as in progress.
await addAndRemoveLabels(octokit, {
owner,
repo,
issueNumber,
add: inProgressLabel,
remove: label,
});
// Run the core Codex processing.
await processLabel(ctx, label, labelConfig);
// Mark the PR/issue as completed.
await addAndRemoveLabels(octokit, {
owner,
repo,
issueNumber,
add: completedLabel,
remove: inProgressLabel,
});
}
async function processLabel(
ctx: EnvContext,
label: string,
labelConfig: LabelConfig,
): Promise<void> {
const template = labelConfig.getPromptTemplate();
// If this is a review label, prepend explicit PR-diff scoping guidance to
// reduce out-of-scope feedback. Do this before rendering so placeholders in
// the guidance (e.g., {CODEX_ACTION_GITHUB_EVENT_PATH}) are substituted.
const isReview = label.toLowerCase().includes("review");
const reviewScopeGuidance = `
PR Diff Scope
- Only review changes between the PR's merge-base and head; do not comment on commits or files outside this range.
- Derive the base/head SHAs from the event JSON at {CODEX_ACTION_GITHUB_EVENT_PATH}, then compute and use the PR diff for all analysis and comments.
Commands to determine scope
- Resolve SHAs:
- BASE_SHA=$(jq -r '.pull_request.base.sha // .pull_request.base.ref' "{CODEX_ACTION_GITHUB_EVENT_PATH}")
- HEAD_SHA=$(jq -r '.pull_request.head.sha // .pull_request.head.ref' "{CODEX_ACTION_GITHUB_EVENT_PATH}")
- BASE_SHA=$(git rev-parse "$BASE_SHA")
- HEAD_SHA=$(git rev-parse "$HEAD_SHA")
- Prefer triple-dot (merge-base) semantics for PR diffs:
- Changed commits: git log --oneline "$BASE_SHA...$HEAD_SHA"
- Changed files: git diff --name-status "$BASE_SHA...$HEAD_SHA"
- Review hunks: git diff -U0 "$BASE_SHA...$HEAD_SHA"
Review rules
- Anchor every comment to a file and hunk present in git diff "$BASE_SHA...$HEAD_SHA".
- If you mention context outside the diff, label it as "Follow-up (outside this PR scope)" and keep it brief (<=2 bullets).
- Do not critique commits or files not reachable in the PR range (merge-base(base, head) → head).
`.trim();
const effectiveTemplate = isReview
? `${reviewScopeGuidance}\n\n${template}`
: template;
const populatedTemplate = await renderPromptTemplate(effectiveTemplate, ctx);
// Always run Codex and post the resulting message as a comment.
let commentBody = await runCodex(populatedTemplate, ctx);
// Current heuristic: only try to create a PR if "attempt" or "fix" is in the
// label name. (Yes, we plan to evolve this.)
if (label.includes("fix") || label.includes("attempt")) {
console.info(`label ${label} indicates we should attempt to create a PR`);
const prUrl = await maybeFixIssue(ctx, commentBody);
if (prUrl) {
commentBody += `\n\n---\nOpened pull request: ${prUrl}`;
}
} else {
console.info(
`label ${label} does not indicate we should attempt to create a PR`,
);
}
await postComment(commentBody, ctx);
}
async function maybeFixIssue(
ctx: EnvContext,
lastMessage: string,
): Promise<string | undefined> {
// Attempt to create a PR out of any changes Codex produced.
const issueNumber = github.context.issue.number!; // exists for issues triggering this path
try {
return await maybePublishPRForIssue(issueNumber, lastMessage, ctx);
} catch (e) {
console.warn(`Failed to publish PR: ${e}`);
}
}
async function getCurrentLabels(
octokit: ReturnType<typeof github.getOctokit>,
): Promise<{
owner: string;
repo: string;
issueNumber: number;
labelNames: Array<string>;
}> {
const { owner, repo } = github.context.repo;
const issueNumber = github.context.issue.number;
if (!issueNumber) {
fail("No issue or pull_request number found in GitHub context.");
}
const { data: issueData } = await octokit.rest.issues.get({
owner,
repo,
issue_number: issueNumber,
});
const labelNames =
issueData.labels?.map((label: any) =>
typeof label === "string" ? label : label.name,
) ?? [];
return { owner, repo, issueNumber, labelNames };
}
async function addAndRemoveLabels(
octokit: ReturnType<typeof github.getOctokit>,
opts: {
owner: string;
repo: string;
issueNumber: number;
add?: string;
remove?: string;
},
): Promise<void> {
const { owner, repo, issueNumber, add, remove } = opts;
if (add) {
try {
await octokit.rest.issues.addLabels({
owner,
repo,
issue_number: issueNumber,
labels: [add],
});
} catch (error) {
console.warn(`Failed to add label '${add}': ${error}`);
}
}
if (remove) {
try {
await octokit.rest.issues.removeLabel({
owner,
repo,
issue_number: issueNumber,
name: remove,
});
} catch (error) {
console.warn(`Failed to remove label '${remove}': ${error}`);
}
}
}

View File

@@ -1,284 +0,0 @@
/*
* Utilities to render Codex prompt templates.
*
* A template is a Markdown (or plain-text) file that may contain one or more
* placeholders of the form `{CODEX_ACTION_<NAME>}`. At runtime these
* placeholders are substituted with dynamically generated content. Each
* placeholder is resolved **exactly once** even if it appears multiple times
* in the same template.
*/
import { readFile } from "fs/promises";
import { EnvContext } from "./env-context";
// ---------------------------------------------------------------------------
// Helpers
// ---------------------------------------------------------------------------
/**
* Lazily caches parsed `$GITHUB_EVENT_PATH` contents keyed by the file path so
* we only hit the filesystem once per unique event payload.
*/
const githubEventDataCache: Map<string, Promise<any>> = new Map();
function getGitHubEventData(ctx: EnvContext): Promise<any> {
const eventPath = ctx.get("GITHUB_EVENT_PATH");
let cached = githubEventDataCache.get(eventPath);
if (!cached) {
cached = readFile(eventPath, "utf8").then((raw) => JSON.parse(raw));
githubEventDataCache.set(eventPath, cached);
}
return cached;
}
async function runCommand(args: Array<string>): Promise<string> {
const result = Bun.spawnSync(args, {
stdout: "pipe",
stderr: "pipe",
});
if (result.success) {
return result.stdout.toString();
}
console.error(`Error running ${JSON.stringify(args)}: ${result.stderr}`);
return "";
}
// ---------------------------------------------------------------------------
// Public API
// ---------------------------------------------------------------------------
// Regex that captures the variable name without the surrounding { } braces.
const VAR_REGEX = /\{(CODEX_ACTION_[A-Z0-9_]+)\}/g;
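// e.g. "{CODEX_ACTION_PR_DIFF}" matches, capturing "CODEX_ACTION_PR_DIFF".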
// Cache individual placeholder values so each one is resolved at most once per
// process even if many templates reference it.
const placeholderCache: Map<string, Promise<string>> = new Map();
/**
* Parse a template string, resolve all placeholders and return the rendered
* result.
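*
* e.g. renderPromptTemplate("Diff:\n{CODEX_ACTION_PR_DIFF}", ctx) returns the
* template with the placeholder replaced by the PR diff.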
*/
export async function renderPromptTemplate(
template: string,
ctx: EnvContext,
): Promise<string> {
// ---------------------------------------------------------------------
// 1) Gather all *unique* placeholders present in the template.
// ---------------------------------------------------------------------
const variables = new Set<string>();
for (const match of template.matchAll(VAR_REGEX)) {
variables.add(match[1]);
}
// ---------------------------------------------------------------------
// 2) Kick off (or reuse) async resolution for each variable.
// ---------------------------------------------------------------------
for (const variable of variables) {
if (!placeholderCache.has(variable)) {
placeholderCache.set(variable, resolveVariable(variable, ctx));
}
}
// ---------------------------------------------------------------------
// 3) Await completion so we can perform a simple synchronous replace below.
// ---------------------------------------------------------------------
const resolvedEntries: [string, string][] = [];
for (const [key, promise] of placeholderCache.entries()) {
resolvedEntries.push([key, await promise]);
}
const resolvedMap = new Map<string, string>(resolvedEntries);
// ---------------------------------------------------------------------
// 4) Replace each occurrence. We use replace with a callback to ensure
// correct substitution even if variable names overlap (they shouldn't,
// but better safe than sorry).
// ---------------------------------------------------------------------
return template.replace(VAR_REGEX, (_, varName: string) => {
return resolvedMap.get(varName) ?? "";
});
}
export async function ensureBaseAndHeadCommitsForPRAreAvailable(
ctx: EnvContext,
): Promise<{ baseSha: string; headSha: string } | null> {
const prShas = await getPrShas(ctx);
if (prShas == null) {
console.warn("Unable to resolve PR branches");
return null;
}
const event = await getGitHubEventData(ctx);
const pr = event.pull_request;
if (!pr) {
console.warn("event.pull_request is not defined - unexpected");
return null;
}
const workspace = ctx.get("GITHUB_WORKSPACE");
// Refs (branch names)
const baseRef: string | undefined = pr.base?.ref;
const headRef: string | undefined = pr.head?.ref;
// Clone URLs
const baseRemoteUrl: string | undefined = pr.base?.repo?.clone_url;
const headRemoteUrl: string | undefined = pr.head?.repo?.clone_url;
if (!baseRef || !headRef || !baseRemoteUrl || !headRemoteUrl) {
console.warn(
"Missing PR ref or remote URL information - cannot fetch commits",
);
return null;
}
// Ensure we have the base branch.
await runCommand([
"git",
"-C",
workspace,
"fetch",
"--no-tags",
"origin",
baseRef,
]);
// Ensure we have the head branch.
if (headRemoteUrl === baseRemoteUrl) {
// Same repository: the commit is available from `origin`.
await runCommand([
"git",
"-C",
workspace,
"fetch",
"--no-tags",
"origin",
headRef,
]);
} else {
// Fork: make sure a `pr` remote exists that points at the fork. Attempting
// to add a remote that already exists causes git to error, so we swallow
// any non-zero exit codes from that specific command.
await runCommand([
"git",
"-C",
workspace,
"remote",
"add",
"pr",
headRemoteUrl,
]);
// Whether adding succeeded or the remote already existed, attempt to fetch
// the head ref from the `pr` remote.
await runCommand([
"git",
"-C",
workspace,
"fetch",
"--no-tags",
"pr",
headRef,
]);
}
return prShas;
}
// ---------------------------------------------------------------------------
// Internal helpers still exported for use by other modules.
// ---------------------------------------------------------------------------
export async function resolvePrDiff(ctx: EnvContext): Promise<string> {
const prShas = await ensureBaseAndHeadCommitsForPRAreAvailable(ctx);
if (prShas == null) {
console.warn("Unable to resolve PR branches");
return "";
}
const workspace = ctx.get("GITHUB_WORKSPACE");
const { baseSha, headSha } = prShas;
return runCommand([
"git",
"-C",
workspace,
"diff",
"--color=never",
`${baseSha}..${headSha}`,
]);
}
// ---------------------------------------------------------------------------
// Placeholder resolution
// ---------------------------------------------------------------------------
async function resolveVariable(name: string, ctx: EnvContext): Promise<string> {
switch (name) {
case "CODEX_ACTION_ISSUE_TITLE": {
const event = await getGitHubEventData(ctx);
const issue = event.issue ?? event.pull_request;
return issue?.title ?? "";
}
case "CODEX_ACTION_ISSUE_BODY": {
const event = await getGitHubEventData(ctx);
const issue = event.issue ?? event.pull_request;
return issue?.body ?? "";
}
case "CODEX_ACTION_GITHUB_EVENT_PATH": {
return ctx.get("GITHUB_EVENT_PATH");
}
case "CODEX_ACTION_BASE_REF": {
const event = await getGitHubEventData(ctx);
return event?.pull_request?.base?.ref ?? "";
}
case "CODEX_ACTION_HEAD_REF": {
const event = await getGitHubEventData(ctx);
return event?.pull_request?.head?.ref ?? "";
}
case "CODEX_ACTION_PR_DIFF": {
return resolvePrDiff(ctx);
}
// -------------------------------------------------------------------
// Add new template variables here.
// -------------------------------------------------------------------
default: {
// Unknown variable: leave it blank to avoid leaking placeholders into the
// final prompt. The alternative would be to `fail()` here, but silently
// ignoring unknown placeholders is more forgiving and better matches the
// behaviour of typical template engines.
console.warn(`Unknown template variable: ${name}`);
return "";
}
}
}
async function getPrShas(
ctx: EnvContext,
): Promise<{ baseSha: string; headSha: string } | null> {
const event = await getGitHubEventData(ctx);
const pr = event.pull_request;
if (!pr) {
console.warn("event.pull_request is not defined");
return null;
}
// Prefer explicit SHAs if available to avoid relying on local branch names.
const baseSha: string | undefined = pr.base?.sha;
const headSha: string | undefined = pr.head?.sha;
if (!baseSha || !headSha) {
console.warn("one of base or head is not defined on event.pull_request");
return null;
}
return { baseSha, headSha };
}

View File

@@ -1,42 +0,0 @@
import type { EnvContext } from "./env-context";
import { runCodex } from "./run-codex";
import { postComment } from "./post-comment";
import { addEyesReaction } from "./add-reaction";
/**
* Handle `pull_request_review` events. We treat the review body the same way
* as a normal comment.
*/
export async function onReview(ctx: EnvContext): Promise<void> {
const triggerPhrase = ctx.tryGet("INPUT_TRIGGER_PHRASE");
if (!triggerPhrase) {
console.warn("Empty trigger phrase: skipping.");
return;
}
const reviewBody = ctx.tryGet("GITHUB_EVENT_REVIEW_BODY");
if (!reviewBody) {
console.warn("Review body not found in environment: skipping.");
return;
}
if (!reviewBody.includes(triggerPhrase)) {
console.log(
`Trigger phrase '${triggerPhrase}' not found: nothing to do for this review.`,
);
return;
}
const prompt = reviewBody.replace(triggerPhrase, "").trim();
if (prompt.length === 0) {
console.warn("Prompt is empty after removing trigger phrase: skipping.");
return;
}
await addEyesReaction(ctx);
const lastMessage = await runCodex(prompt, ctx);
await postComment(lastMessage, ctx);
}

View File

@@ -1,58 +0,0 @@
import { fail } from "./fail";
import { EnvContext } from "./env-context";
import { tmpdir } from "os";
import { join } from "node:path";
import { readFile, mkdtemp } from "fs/promises";
import { resolveWorkspacePath } from "./github-workspace";
/**
* Runs the Codex CLI with the provided prompt and returns the output written
* to the "last message" file.
*/
export async function runCodex(
prompt: string,
ctx: EnvContext,
): Promise<string> {
const OPENAI_API_KEY = ctx.get("OPENAI_API_KEY");
const tempDirPath = await mkdtemp(join(tmpdir(), "codex-"));
const lastMessageOutput = join(tempDirPath, "codex-prompt.md");
// Use the unified CLI and its `exec` subcommand instead of the old
// standalone `codex-exec` binary.
const args = ["/usr/local/bin/codex", "exec"];
const inputCodexArgs = ctx.tryGet("INPUT_CODEX_ARGS")?.trim();
if (inputCodexArgs) {
args.push(...inputCodexArgs.split(/\s+/));
}
args.push("--output-last-message", lastMessageOutput, prompt);
// process.env values are possibly undefined; cast so the spread satisfies Record<string, string>.
const env: Record<string, string> = { ...process.env, OPENAI_API_KEY } as Record<string, string>;
const INPUT_CODEX_HOME = ctx.tryGet("INPUT_CODEX_HOME");
if (INPUT_CODEX_HOME) {
env.CODEX_HOME = resolveWorkspacePath(INPUT_CODEX_HOME, ctx);
}
console.log(`Running Codex: ${JSON.stringify(args)}`);
const result = Bun.spawnSync(args, {
stdout: "inherit",
stderr: "inherit",
env,
});
if (!result.success) {
fail(`Codex failed: see above for details.`);
}
// Read the output generated by Codex.
let lastMessage: string;
try {
lastMessage = await readFile(lastMessageOutput, "utf8");
} catch (err) {
fail(`Failed to read Codex output at '${lastMessageOutput}': ${err}`);
}
return lastMessage;
}

View File

@@ -1,33 +0,0 @@
// Validate the inputs passed to the composite action.
// The script currently ensures that the provided configuration file exists and
// matches the expected schema.
import type { Config } from "./config";
import { existsSync } from "fs";
import * as path from "path";
import { fail } from "./fail";
export function performAdditionalValidation(config: Config, workspace: string) {
// Additional validation: ensure referenced prompt files exist and are Markdown.
for (const [label, details] of Object.entries(config.labels)) {
// Determine which prompt key is present (the schema guarantees exactly one).
const promptPathStr =
(details as any).prompt ?? (details as any).promptPath;
if (promptPathStr) {
const promptPath = path.isAbsolute(promptPathStr)
? promptPathStr
: path.join(workspace, promptPathStr);
if (!existsSync(promptPath)) {
fail(`Prompt file for label '${label}' not found: ${promptPath}`);
}
if (!promptPath.endsWith(".md")) {
fail(
`Prompt file for label '${label}' must be a .md file (got ${promptPathStr}).`,
);
}
}
}
}

View File

@@ -1,15 +0,0 @@
{
"compilerOptions": {
"lib": ["ESNext"],
"target": "ESNext",
"module": "ESNext",
"moduleDetection": "force",
"moduleResolution": "bundler",
"noEmit": true,
"strict": true,
"skipLibCheck": true
},
"include": ["src"]
}

Binary file not shown. Before: 410 KiB ➜ After: 2.9 MiB

Binary file not shown. Before: 412 KiB ➜ After: 3.1 MiB

View File

@@ -24,3 +24,7 @@ updates:
directory: /
schedule:
interval: weekly
- package-ecosystem: rust-toolchain
directory: codex-rs
schedule:
interval: weekly

View File

@@ -21,6 +21,10 @@
"windows-x86_64": {
"regex": "^codex-x86_64-pc-windows-msvc\\.exe\\.zst$",
"path": "codex.exe"
},
"windows-aarch64": {
"regex": "^codex-aarch64-pc-windows-msvc\\.exe\\.zst$",
"path": "codex.exe"
}
}
}

View File

@@ -1,6 +1,6 @@
# External (non-OpenAI) Pull Request Requirements
Before opening this Pull Request, please read the "Contributing" section of the README or your PR may be closed:
https://github.com/openai/codex#contributing
Before opening this Pull Request, please read the dedicated "Contributing" markdown file or your PR may be closed:
https://github.com/openai/codex/blob/main/docs/contributing.md
If your PR conforms to our contribution guidelines, replace this text with a detailed and high quality description of your changes.

View File

@@ -1,64 +0,0 @@
name: Codex
on:
issues:
types: [opened, labeled]
pull_request:
branches: [main]
types: [labeled]
jobs:
codex:
# This `if` check provides complex filtering logic to avoid running Codex
# on every PR. Admittedly, one thing this does not verify is whether the
# sender has write access to the repo: that must be done as part of a
# runtime step.
#
# Note the label values should match the ones in the .github/codex/labels
# folder.
if: |
(github.event_name == 'issues' && (
(github.event.action == 'labeled' && (github.event.label.name == 'codex-attempt' || github.event.label.name == 'codex-triage'))
)) ||
(github.event_name == 'pull_request' && github.event.action == 'labeled' && (github.event.label.name == 'codex-review' || github.event.label.name == 'codex-rust-review'))
runs-on: ubuntu-latest
permissions:
contents: write # can push or create branches
issues: write # for comments + labels on issues/PRs
pull-requests: write # for PR comments/labels
steps:
# TODO: Consider adding an optional mode (--dry-run?) to actions/codex
# that verifies whether Codex should actually be run for this event.
# (For example, it may be rejected because the sender does not have
# write access to the repo.) The benefit would be two-fold:
# 1. As the first step of this job, it gives us a chance to add a reaction
# or comment to the PR/issue ASAP to "ack" the request.
# 2. It saves resources by skipping the clone and setup steps below if
# Codex is not going to run.
- name: Checkout repository
uses: actions/checkout@v5
- uses: dtolnay/rust-toolchain@1.89
with:
targets: x86_64-unknown-linux-gnu
components: clippy
- uses: actions/cache@v4
with:
path: |
~/.cargo/bin/
~/.cargo/registry/index/
~/.cargo/registry/cache/
~/.cargo/git/db/
${{ github.workspace }}/codex-rs/target/
key: cargo-ubuntu-24.04-x86_64-unknown-linux-gnu-${{ hashFiles('**/Cargo.lock') }}
# Note it is possible that the `verify` step internal to Run Codex will
# fail, in which case the work to setup the repo was worthless :(
- name: Run Codex
uses: ./.github/actions/codex
with:
openai_api_key: ${{ secrets.CODEX_OPENAI_API_KEY }}
github_token: ${{ secrets.GITHUB_TOKEN }}
codex_home: ./.github/codex/home

View File

@@ -100,15 +100,26 @@ jobs:
- runner: windows-latest
target: x86_64-pc-windows-msvc
profile: dev
- runner: windows-11-arm
target: aarch64-pc-windows-msvc
profile: dev
# Also run representative release builds on Mac and Linux because
# there could be release-only build errors we want to catch.
# Hopefully this also pre-populates the build cache to speed up
# releases.
- runner: macos-14
target: aarch64-apple-darwin
profile: release
- runner: ubuntu-24.04
target: x86_64-unknown-linux-musl
profile: release
- runner: windows-latest
target: x86_64-pc-windows-msvc
profile: release
- runner: windows-11-arm
target: aarch64-pc-windows-msvc
profile: release
steps:
- uses: actions/checkout@v5
@@ -134,7 +145,7 @@ jobs:
- name: cargo clippy
id: clippy
run: cargo clippy --target ${{ matrix.target }} --all-features --tests -- -D warnings
run: cargo clippy --target ${{ matrix.target }} --all-features --tests --profile ${{ matrix.profile }} -- -D warnings
# Running `cargo build` from the workspace root builds the workspace using
# the union of all features from third-party crates. This can mask errors

View File

@@ -72,6 +72,8 @@ jobs:
target: aarch64-unknown-linux-gnu
- runner: windows-latest
target: x86_64-pc-windows-msvc
- runner: windows-11-arm
target: aarch64-pc-windows-msvc
steps:
- uses: actions/checkout@v5
@@ -87,7 +89,7 @@ jobs:
~/.cargo/registry/cache/
~/.cargo/git/db/
${{ github.workspace }}/codex-rs/target/
key: cargo-release-${{ matrix.runner }}-${{ matrix.target }}-release-${{ hashFiles('**/Cargo.lock') }}
key: cargo-${{ matrix.runner }}-${{ matrix.target }}-release-${{ hashFiles('**/Cargo.lock') }}
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl'}}
name: Install musl build tools

View File

@@ -1,3 +1,7 @@
/codex-cli/dist
/codex-cli/node_modules
pnpm-lock.yaml
prompt.md
*_prompt.md
*_instructions.md

View File

@@ -2,15 +2,16 @@
In the codex-rs folder where the rust code lives:
- Crate names are prefixed with `codex-`. For examole, the `core` folder's crate is named `codex-core`
- Crate names are prefixed with `codex-`. For example, the `core` folder's crate is named `codex-core`
- When using format! and you can inline variables into {}, always do that.
- Never add or modify any code related to `CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR` or `CODEX_SANDBOX_ENV_VAR`.
- You operate in a sandbox where `CODEX_SANDBOX_NETWORK_DISABLED=1` will be set whenever you use the `shell` tool. Any existing code that uses `CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR` was authored with this fact in mind. It is often used to early exit out of tests that the author knew you would not be able to run given your sandbox limitations.
- Similarly, when you spawn a process using Seatbelt (`/usr/bin/sandbox-exec`), `CODEX_SANDBOX=seatbelt` will be set on the child process. Integration tests that want to run Seatbelt themselves cannot be run under Seatbelt, so checks for `CODEX_SANDBOX=seatbelt` are also often used to early exit out of tests, as appropriate.
Before finalizing a change to `codex-rs`, run `just fmt` (in `codex-rs` directory) to format the code and `just fix` (in `codex-rs` directory) to fix any linter issues in the code. Additionally, run the tests:
Before finalizing a change to `codex-rs`, run `just fmt` (in `codex-rs` directory) to format the code and `just fix -p <project>` (in `codex-rs` directory) to fix any linter issues in the code. Prefer scoping with `-p` to avoid slow workspace-wide Clippy builds; only run `just fix` without `-p` if you changed shared crates. Additionally, run the tests:
1. Run the test for the specific project that was changed. For example, if changes were made in `codex-rs/tui`, run `cargo test -p codex-tui`.
2. Once those pass, if any changes were made in common, core, or protocol, run the complete test suite with `cargo test --all-features`.
When running interactively, ask the user before running these commands to finalize.
## TUI style conventions

View File

@@ -1,211 +1 @@
# Changelog
You can install any of these versions: `npm install -g codex@version`
## `0.1.2505172129`
### 🪲 Bug Fixes
- Add node version check (#1007)
- Persist token after refresh (#1006)
## `0.1.2505171619`
- `codex --login` + `codex --free` (#998)
## `0.1.2505161800`
- Sign in with chatgpt credits (#974)
- Add support for OpenAI tool type, local_shell (#961)
## `0.1.2505161243`
- Sign in with chatgpt (#963)
- Session history viewer (#912)
- Apply patch issue when using different cwd (#942)
- Diff command for filenames with special characters (#954)
## `0.1.2505160811`
- `codex-mini-latest` (#951)
## `0.1.2505140839`
### 🪲 Bug Fixes
- Gpt-4.1 apply_patch handling (#930)
- Add support for fileOpener in config.json (#911)
- Patch in #366 and #367 for marked-terminal (#916)
- Remember to set lastIndex = 0 on shared RegExp (#918)
- Always load version from package.json at runtime (#909)
- Tweak the label for citations for better rendering (#919)
- Tighten up some logic around session timestamps and ids (#922)
- Change EventMsg enum so every variant takes a single struct (#925)
- Reasoning default to medium, show workdir when supplied (#931)
- Test_dev_null_write() was not using echo as intended (#923)
## `0.1.2504301751`
### 🚀 Features
- User config api key (#569)
- `@mention` files in codex (#701)
- Add `--reasoning` CLI flag (#314)
- Lower default retry wait time and increase number of tries (#720)
- Add common package registries domains to allowed-domains list (#414)
### 🪲 Bug Fixes
- Insufficient quota message (#758)
- Input keyboard shortcut opt+delete (#685)
- `/diff` should include untracked files (#686)
- Only allow running without sandbox if explicitly marked in safe container (#699)
- Tighten up check for /usr/bin/sandbox-exec (#710)
- Check if sandbox-exec is available (#696)
- Duplicate messages in quiet mode (#680)
## `0.1.2504251709`
### 🚀 Features
- Add openai model info configuration (#551)
- Added provider to run quiet mode function (#571)
- Create parent directories when creating new files (#552)
- Print bug report URL in terminal instead of opening browser (#510) (#528)
- Add support for custom provider configuration in the user config (#537)
- Add support for OpenAI-Organization and OpenAI-Project headers (#626)
- Add specific instructions for creating API keys in error msg (#581)
- Enhance toCodePoints to prevent potential unicode 14 errors (#615)
- More native keyboard navigation in multiline editor (#655)
- Display error on selection of invalid model (#594)
### 🪲 Bug Fixes
- Model selection (#643)
- Nits in apply patch (#640)
- Input keyboard shortcuts (#676)
- `apply_patch` unicode characters (#625)
- Don't clear turn input before retries (#611)
- More loosely match context for apply_patch (#610)
- Update bug report template - there is no --revision flag (#614)
- Remove outdated copy of text input and external editor feature (#670)
- Remove unreachable "disableResponseStorage" logic flow introduced in #543 (#573)
- Non-openai mode - fix for gemini content: null, fix 429 to throw before stream (#563)
- Only allow going up in history when not already in history if input is empty (#654)
- Do not grant "node" user sudo access when using run_in_container.sh (#627)
- Update scripts/build_container.sh to use pnpm instead of npm (#631)
- Update lint-staged config to use pnpm --filter (#582)
- Non-openai mode - don't default temp and top_p (#572)
- Fix error catching when checking for updates (#597)
- Close stdin when running an exec tool call (#636)
## `0.1.2504221401`
### 🚀 Features
- Show actionable errors when api keys are missing (#523)
- Add CLI `--version` flag (#492)
### 🪲 Bug Fixes
- Agent loop for ZDR (`disableResponseStorage`) (#543)
- Fix relative `workdir` check for `apply_patch` (#556)
- Minimal mid-stream #429 retry loop using existing back-off (#506)
- Inconsistent usage of base URL and API key (#507)
- Remove requirement for api key for ollama (#546)
- Support `[provider]_BASE_URL` (#542)
## `0.1.2504220136`
### 🚀 Features
- Add support for ZDR orgs (#481)
- Include fractional portion of chunk that exceeds stdout/stderr limit (#497)
## `0.1.2504211509`
### 🚀 Features
- Support multiple providers via Responses-Completion transformation (#247)
- Add user-defined safe commands configuration and approval logic #380 (#386)
- Allow switching approval modes when prompted to approve an edit/command (#400)
- Add support for `/diff` command autocomplete in TerminalChatInput (#431)
- Auto-open model selector if user selects deprecated model (#427)
- Read approvalMode from config file (#298)
- `/diff` command to view git diff (#426)
- Tab completions for file paths (#279)
- Add /command autocomplete (#317)
- Allow multi-line input (#438)
### 🪲 Bug Fixes
- `full-auto` support in quiet mode (#374)
- Enable shell option for child process execution (#391)
- Configure husky and lint-staged for pnpm monorepo (#384)
- Command pipe execution by improving shell detection (#437)
- Name of the file not matching the name of the component (#354)
- Allow proper exit from new Switch approval mode dialog (#453)
- Ensure /clear resets context and exclude system messages from approximateTokenUsed count (#443)
- `/clear` now clears terminal screen and resets context left indicator (#425)
- Correct fish completion function name in CLI script (#485)
- Auto-open model-selector when model is not found (#448)
- Remove unnecessary isLoggingEnabled() checks (#420)
- Improve test reliability for `raw-exec` (#434)
- Unintended tear down of agent loop (#483)
- Remove extraneous type casts (#462)
## `0.1.2504181820`
### 🚀 Features
- Add `/bug` report command (#312)
- Notify when a newer version is available (#333)
### 🪲 Bug Fixes
- Update context left display logic in TerminalChatInput component (#307)
- Improper spawn of sh on Windows Powershell (#318)
- `/bug` report command, thinking indicator (#381)
- Include pnpm lock file (#377)
## `0.1.2504172351`
### 🚀 Features
- Add Nix flake for reproducible development environments (#225)
### 🪲 Bug Fixes
- Handle invalid commands (#304)
- Raw-exec-process-group.test improve reliability and error handling (#280)
- Canonicalize the writeable paths used in seatbelt policy (#275)
## `0.1.2504172304`
### 🚀 Features
- Add shell completion subcommand (#138)
- Add command history persistence (#152)
- Shell command explanation option (#173)
- Support bun fallback runtime for codex CLI (#282)
- Add notifications for MacOS using Applescript (#160)
- Enhance image path detection in input processing (#189)
- `--config`/`-c` flag to open global instructions in nvim (#158)
- Update position of cursor when navigating input history with arrow keys to the end of the text (#255)
### 🪲 Bug Fixes
- Correct word deletion logic for trailing spaces (Ctrl+Backspace) (#131)
- Improve Windows compatibility for CLI commands and sandbox (#261)
- Correct typos in thinking texts (transcendent & parroting) (#108)
- Add empty vite config file to prevent resolving to parent (#273)
- Update regex to better match the retry error messages (#266)
- Add missing "as" in prompt prefix in agent loop (#186)
- Allow continuing after interrupting assistant (#178)
- Standardize filename to kebab-case 🐍➡️🥙 (#302)
- Small update to bug report template (#288)
- Duplicated message on model change (#276)
- Typos in prompts and comments (#195)
- Check workdir before spawn (#221)
<!-- generated - do not edit -->
The changelog can be found on the [releases page](https://github.com/openai/codex/releases)

README.md
View File

@@ -5,73 +5,25 @@
<p align="center"><strong>Codex CLI</strong> is a coding agent from OpenAI that runs locally on your computer.</br>If you are looking for the <em>cloud-based agent</em> from OpenAI, <strong>Codex Web</strong>, see <a href="https://chatgpt.com/codex">chatgpt.com/codex</a>.</p>
<p align="center">
<img src="./.github/codex-cli-splash.png" alt="Codex CLI splash" width="50%" />
<img src="./.github/codex-cli-splash.png" alt="Codex CLI splash" width="80%" />
</p>
---
<details>
<summary><strong>Table of contents</strong></summary>
<!-- Begin ToC -->
- [Quickstart](#quickstart)
- [Installing and running Codex CLI](#installing-and-running-codex-cli)
- [Using Codex with your ChatGPT plan](#using-codex-with-your-chatgpt-plan)
- [Connecting on a "Headless" Machine](#connecting-on-a-headless-machine)
- [Authenticate locally and copy your credentials to the "headless" machine](#authenticate-locally-and-copy-your-credentials-to-the-headless-machine)
- [Connecting through VPS or remote](#connecting-through-vps-or-remote)
- [Usage-based billing alternative: Use an OpenAI API key](#usage-based-billing-alternative-use-an-openai-api-key)
- [Forcing a specific auth method (advanced)](#forcing-a-specific-auth-method-advanced)
- [Choosing Codex's level of autonomy](#choosing-codexs-level-of-autonomy)
- [**1. Read/write**](#1-readwrite)
- [**2. Read-only**](#2-read-only)
- [**3. Advanced configuration**](#3-advanced-configuration)
- [Can I run without ANY approvals?](#can-i-run-without-any-approvals)
- [Fine-tuning in `config.toml`](#fine-tuning-in-configtoml)
- [Example prompts](#example-prompts)
- [Running with a prompt as input](#running-with-a-prompt-as-input)
- [Using Open Source Models](#using-open-source-models)
- [Platform sandboxing details](#platform-sandboxing-details)
- [Experimental technology disclaimer](#experimental-technology-disclaimer)
- [System requirements](#system-requirements)
- [CLI reference](#cli-reference)
- [Memory & project docs](#memory--project-docs)
- [Non-interactive / CI mode](#non-interactive--ci-mode)
- [Model Context Protocol (MCP)](#model-context-protocol-mcp)
- [Tracing / verbose logging](#tracing--verbose-logging)
- [DotSlash](#dotslash)
- [Configuration](#configuration)
- [FAQ](#faq)
- [Zero data retention (ZDR) usage](#zero-data-retention-zdr-usage)
- [Codex open source fund](#codex-open-source-fund)
- [Contributing](#contributing)
- [Development workflow](#development-workflow)
- [Writing high-impact code changes](#writing-high-impact-code-changes)
- [Opening a pull request](#opening-a-pull-request)
- [Review process](#review-process)
- [Community values](#community-values)
- [Getting help](#getting-help)
- [Contributor license agreement (CLA)](#contributor-license-agreement-cla)
- [Quick fixes](#quick-fixes)
- [Releasing `codex`](#releasing-codex)
- [Security & responsible AI](#security--responsible-ai)
- [License](#license)
<!-- End ToC -->
</details>
---
## Quickstart
### Installing and running Codex CLI
Install globally with your preferred package manager:
Install globally with your preferred package manager. If you use npm:
```shell
npm install -g @openai/codex # Alternatively: `brew install codex`
npm install -g @openai/codex
```
Alternatively, if you use Homebrew:
```shell
brew install codex
```
Then simply run `codex` to get started:
@@ -99,600 +51,52 @@ Each archive contains a single entry with the platform baked into the name (e.g.
### Using Codex with your ChatGPT plan
<p align="center">
<img src="./.github/codex-cli-login.png" alt="Codex CLI login" width="50%" />
<img src="./.github/codex-cli-login.png" alt="Codex CLI login" width="80%" />
</p>
Run `codex` and select **Sign in with ChatGPT**. You'll need a Plus, Pro, or Team ChatGPT account, and will get access to our latest models, including `gpt-5`, at no extra cost to your plan. (Enterprise is coming soon.)
Run `codex` and select **Sign in with ChatGPT**. We recommend signing into your ChatGPT account to use Codex as part of your Plus, Pro, Team, Edu, or Enterprise plan. [Learn more about what's included in your ChatGPT plan](https://help.openai.com/en/articles/11369540-codex-in-chatgpt).
> Important: If you've used the Codex CLI before, follow these steps to migrate from usage-based billing with your API key:
>
> 1. Update the CLI and ensure `codex --version` is `0.20.0` or later
> 2. Delete `~/.codex/auth.json` (this should be `C:\Users\USERNAME\.codex\auth.json` on Windows)
> 3. Run `codex login` again
You can also use Codex with an API key, but this requires [additional setup](./docs/authentication.md#usage-based-billing-alternative-use-an-openai-api-key). If you previously used an API key for usage-based billing, see the [migration steps](./docs/authentication.md#migrating-from-usage-based-billing-api-key). If you're having trouble with login, please comment on [this issue](https://github.com/openai/codex/issues/1243).
If you encounter problems with the login flow, please comment on [this issue](https://github.com/openai/codex/issues/1243).
### Model Context Protocol (MCP)
### Connecting on a "Headless" Machine
Codex CLI supports [MCP servers](./docs/advanced.md#model-context-protocol-mcp). Enable by adding an `mcp_servers` section to your `~/.codex/config.toml`.
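A minimal entry might look like the following sketch (the server name and launch command here are illustrative placeholders, not a recommendation):
```toml
[mcp_servers.example]
command = "npx"
args = ["-y", "some-mcp-server"]
```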
Today, the login process entails running a server on `localhost:1455`. If you are on a "headless" server, such as a Docker container or are `ssh`'d into a remote machine, loading `localhost:1455` in the browser on your local machine will not automatically connect to the webserver running on the _headless_ machine, so you must use one of the following workarounds:
#### Authenticate locally and copy your credentials to the "headless" machine
### Configuration
The easiest solution is likely to run through the `codex login` process on your local machine such that `localhost:1455` _is_ accessible in your web browser. When you complete the authentication process, an `auth.json` file should be available at `$CODEX_HOME/auth.json` (on Mac/Linux, `$CODEX_HOME` defaults to `~/.codex` whereas on Windows, it defaults to `%USERPROFILE%\.codex`).
Because the `auth.json` file is not tied to a specific host, once you complete the authentication flow locally, you can copy the `$CODEX_HOME/auth.json` file to the headless machine and then `codex` should "just work" on that machine. Note: to copy a file to a Docker container, you can do:
```shell
# substitute MY_CONTAINER with the name or id of your Docker container:
CONTAINER_HOME=$(docker exec MY_CONTAINER printenv HOME)
docker exec MY_CONTAINER mkdir -p "$CONTAINER_HOME/.codex"
docker cp auth.json MY_CONTAINER:"$CONTAINER_HOME/.codex/auth.json"
```
whereas if you are `ssh`'d into a remote machine, you likely want to use [`scp`](https://en.wikipedia.org/wiki/Secure_copy_protocol):
```shell
ssh user@remote 'mkdir -p ~/.codex'
scp ~/.codex/auth.json user@remote:~/.codex/auth.json
```
or try this one-liner:
```shell
ssh user@remote 'mkdir -p ~/.codex && cat > ~/.codex/auth.json' < ~/.codex/auth.json
```
#### Connecting through VPS or remote
If you run Codex on a remote machine (VPS/server) without a local browser, the login helper starts a server on `localhost:1455` on the remote host. To complete login in your local browser, forward that port to your machine before starting the login flow:
```bash
# From your local machine
ssh -L 1455:localhost:1455 <user>@<remote-host>
```
Then, in that SSH session, run `codex` and select "Sign in with ChatGPT". When prompted, open the printed URL (it will be `http://localhost:1455/...`) in your local browser. The traffic will be tunneled to the remote server.
### Usage-based billing alternative: Use an OpenAI API key
If you prefer to pay-as-you-go, you can still authenticate with your OpenAI API key by setting it as an environment variable:
```shell
export OPENAI_API_KEY="your-api-key-here"
```
Notes:
- This command only sets the key for your current terminal session, which we recommend. To set it for all future sessions, you can also add the `export` line to your shell's configuration file (e.g., `~/.zshrc`).
- If you have signed in with ChatGPT, Codex will default to using your ChatGPT credits. If you wish to use your API key, use the `/logout` command to clear your ChatGPT authentication.
#### Forcing a specific auth method (advanced)
You can explicitly choose which authentication Codex should prefer when both are available.
- To always use your API key (even when ChatGPT auth exists), set:
```toml
# ~/.codex/config.toml
preferred_auth_method = "apikey"
```
Or override ad-hoc via CLI:
```bash
codex --config preferred_auth_method="apikey"
```
- To prefer ChatGPT auth (default), set:
```toml
# ~/.codex/config.toml
preferred_auth_method = "chatgpt"
```
Notes:
- When `preferred_auth_method = "apikey"` and an API key is available, the login screen is skipped.
- When `preferred_auth_method = "chatgpt"` (default), Codex prefers ChatGPT auth if present; if only an API key is present, it will use the API key. Certain account types may also require API-key mode.
### Choosing Codex's level of autonomy
We always recommend running Codex in its default sandbox that gives you strong guardrails around what the agent can do. The default sandbox prevents it from editing files outside its workspace, or from accessing the network.
When you launch Codex in a new folder, it detects whether the folder is version controlled and recommends one of two levels of autonomy:
#### **1. Read/write**
- Codex can run commands and write files in the workspace without approval.
- To write files in other folders, access network, update git or perform other actions protected by the sandbox, Codex will need your permission.
- By default, the workspace includes the current directory, as well as temporary directories like `/tmp`. You can see what directories are in the workspace with the `/status` command. See the docs for how to customize this behavior.
- Advanced: You can manually specify this configuration by running `codex --sandbox workspace-write --ask-for-approval on-request`
- This is the recommended default for version-controlled folders.
#### **2. Read-only**
- Codex can run read-only commands without approval.
- To edit files, access network, or perform other actions protected by the sandbox, Codex will need your permission.
- Advanced: You can manually specify this configuration by running `codex --sandbox read-only --ask-for-approval on-request`
- This is the recommended default for non-version-controlled folders.
#### **3. Advanced configuration**
Codex gives you fine-grained control over the sandbox with the `--sandbox` option, and over when it requests approval with the `--ask-for-approval` option. Run `codex help` for more on these options.
#### Can I run without ANY approvals?
Yes, run `codex` non-interactively with `--ask-for-approval never`. This option works with all `--sandbox` options, so you still have full control over Codex's level of autonomy. It will make its best attempt with whatever constraints you provide. For example:
- Use `codex --ask-for-approval never --sandbox read-only` when you are running many agents to answer questions in parallel in the same workspace.
- Use `codex --ask-for-approval never --sandbox workspace-write` when you want the agent to non-interactively take time to produce the best outcome, with strong guardrails around its behavior.
- Use `codex --ask-for-approval never --sandbox danger-full-access` to dangerously give the agent full autonomy. Because this disables important safety mechanisms, we recommend against using this unless running Codex in an isolated environment.
#### Fine-tuning in `config.toml`
```toml
# approval mode
approval_policy = "untrusted"
sandbox_mode = "read-only"
# full-auto mode
approval_policy = "on-request"
sandbox_mode = "workspace-write"
# Optional: allow network in workspace-write mode
[sandbox_workspace_write]
network_access = true
```
You can also save presets as **profiles**:
```toml
[profiles.full_auto]
approval_policy = "on-request"
sandbox_mode = "workspace-write"
[profiles.readonly_quiet]
approval_policy = "never"
sandbox_mode = "read-only"
```
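Launch with a preset applied by name, e.g. `codex --profile full_auto`.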
### Example prompts
Below are a few bite-size examples you can copy-paste. Replace the text in quotes with your own task. See the [prompting guide](https://github.com/openai/codex/blob/main/codex-cli/examples/prompting_guide.md) for more tips and usage patterns.
| ✨ | What you type | What happens |
| --- | ------------------------------------------------------------------------------- | -------------------------------------------------------------------------- |
| 1 | `codex "Refactor the Dashboard component to React Hooks"` | Codex rewrites the class component, runs `npm test`, and shows the diff. |
| 2 | `codex "Generate SQL migrations for adding a users table"` | Infers your ORM, creates migration files, and runs them in a sandboxed DB. |
| 3 | `codex "Write unit tests for utils/date.ts"` | Generates tests, executes them, and iterates until they pass. |
| 4 | `codex "Bulk-rename *.jpeg -> *.jpg with git mv"` | Safely renames files and updates imports/usages. |
| 5 | `codex "Explain what this regex does: ^(?=.*[A-Z]).{8,}$"` | Outputs a step-by-step human explanation. |
| 6 | `codex "Carefully review this repo, and propose 3 high impact well-scoped PRs"` | Suggests impactful PRs in the current codebase. |
| 7 | `codex "Look for vulnerabilities and create a security review report"` | Finds and explains security bugs. |
## Running with a prompt as input
You can also run Codex CLI with a prompt as input:
```shell
codex "explain this codebase to me"
```
```shell
codex --full-auto "create the fanciest todo-list app"
```
That's it - Codex will scaffold a file, run it inside a sandbox, install any
missing dependencies, and show you the live result. Approve the changes and
they'll be committed to your working directory.
## Using Open Source Models
<details>
<summary><strong>Use <code>--profile</code> to use other models</strong></summary>
Codex also allows you to use other providers that support the OpenAI Chat Completions (or Responses) API.
To do so, you must first define custom [providers](./config.md#model_providers) in `~/.codex/config.toml`. For example, the provider for a standard Ollama setup would be defined as follows:
```toml
[model_providers.ollama]
name = "Ollama"
base_url = "http://localhost:11434/v1"
```
The `base_url` will have `/chat/completions` appended to it to build the full URL for the request.
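For example, a `base_url` of `http://localhost:11434/v1` results in requests to `http://localhost:11434/v1/chat/completions`.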
For providers that also require an `Authorization` header of the form `Bearer: SECRET`, an `env_key` can be specified, which indicates the environment variable to read to use as the value of `SECRET` when making a request:
```toml
[model_providers.openrouter]
name = "OpenRouter"
base_url = "https://openrouter.ai/api/v1"
env_key = "OPENROUTER_API_KEY"
```
Providers that speak the Responses API are also supported by adding `wire_api = "responses"` as part of the definition. Accessing OpenAI models via Azure is an example of such a provider, though it also requires specifying additional `query_params` that need to be appended to the request URL:
```toml
[model_providers.azure]
name = "Azure"
# Make sure you set the appropriate subdomain for this URL.
base_url = "https://YOUR_PROJECT_NAME.openai.azure.com/openai"
env_key = "AZURE_OPENAI_API_KEY" # Or "OPENAI_API_KEY", whichever you use.
# Newer versions appear to support the responses API, see https://github.com/openai/codex/pull/1321
query_params = { api-version = "2025-04-01-preview" }
wire_api = "responses"
```
Once you have defined a provider you wish to use, you can configure it as your default provider as follows:
```toml
model_provider = "azure"
```
> [!TIP]
> If you find yourself experimenting with a variety of models and providers, then you likely want to invest in defining a _profile_ for each configuration like so:
```toml
[profiles.o3]
model_provider = "azure"
model = "o3"
[profiles.mistral]
model_provider = "ollama"
model = "mistral"
```
This way, you can specify one command-line argument (e.g., `--profile o3` or `--profile mistral`) to override multiple settings together.
</details>
Codex can run fully locally against an OpenAI-compatible OSS host (like Ollama) using the `--oss` flag:
- Interactive UI:
  - `codex --oss`
- Non-interactive (programmatic) mode:
  - `echo "Refactor utils" | codex exec --oss`
Model selection when using `--oss`:
- If you omit `-m/--model`, Codex defaults to `-m gpt-oss:20b` and will verify it exists locally (downloading it if needed).
- To pick a different size, pass one of:
  - `-m "gpt-oss:20b"`
  - `-m "gpt-oss:120b"`
Point Codex at your own OSS host:
- By default, `--oss` talks to `http://localhost:11434/v1`.
- To use a different host, set one of these environment variables before running Codex:
  - `CODEX_OSS_BASE_URL`, for example:
    - `CODEX_OSS_BASE_URL="http://my-ollama.example.com:11434/v1" codex --oss -m gpt-oss:20b`
  - or `CODEX_OSS_PORT` (when the host is localhost):
    - `CODEX_OSS_PORT=11434 codex --oss`
Advanced: you can persist this in your config instead of environment variables by overriding the built-in `oss` provider in `~/.codex/config.toml`:
```toml
[model_providers.oss]
name = "Open Source"
base_url = "http://my-ollama.example.com:11434/v1"
```
Codex CLI supports a rich set of configuration options, with preferences stored in `~/.codex/config.toml`. For full configuration options, see [Configuration](./docs/config.md).
---
### Platform sandboxing details
The mechanism Codex uses to implement the sandbox policy depends on your OS:
- **macOS 12+** uses **Apple Seatbelt** and runs commands using `sandbox-exec` with a profile (`-p`) that corresponds to the `--sandbox` that was specified.
- **Linux** uses a combination of Landlock/seccomp APIs to enforce the `sandbox` configuration.
Note that when running Linux in a containerized environment such as Docker, sandboxing may not work if the host/container configuration does not support the necessary Landlock/seccomp APIs. In such cases, we recommend configuring your Docker container so that it provides the sandbox guarantees you are looking for and then running `codex` with `--sandbox danger-full-access` (or, more simply, the `--dangerously-bypass-approvals-and-sandbox` flag) within your container.
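For example, inside a Docker container that already provides the isolation guarantees you need, you might run (the prompt is illustrative):
```shell
codex --sandbox danger-full-access "run the test suite and fix any failures"
```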
---
## Experimental technology disclaimer
Codex CLI is an experimental project under active development. It is not yet stable, may contain bugs, incomplete features, or undergo breaking changes. We're building it in the open with the community and welcome:
- Bug reports
- Feature requests
- Pull requests
- Good vibes
Help us improve by filing issues or submitting PRs (see the section below for how to contribute)!
---
## System requirements
| Requirement | Details |
| --------------------------- | --------------------------------------------------------------- |
| Operating systems | macOS 12+, Ubuntu 20.04+/Debian 10+, or Windows 11 **via WSL2** |
| Git (optional, recommended) | 2.23+ for built-in PR helpers |
| RAM                         | 4 GB minimum (8 GB recommended)                                 |
---
## CLI reference
| Command | Purpose | Example |
| ------------------ | ---------------------------------- | ------------------------------- |
| `codex` | Interactive TUI | `codex` |
| `codex "..."` | Initial prompt for interactive TUI | `codex "fix lint errors"` |
| `codex exec "..."` | Non-interactive "automation mode" | `codex exec "explain utils.ts"` |
Key flags: `--model/-m`, `--ask-for-approval/-a`.
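For example, combining both key flags (the approval value shown is illustrative; run `codex --help` for the accepted values):
```shell
codex --model o3 --ask-for-approval untrusted "fix lint errors"
```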
---
## Memory & project docs
You can give Codex extra instructions and guidance using `AGENTS.md` files. Codex looks for `AGENTS.md` files in the following places, and merges them top-down:
1. `~/.codex/AGENTS.md` - personal global guidance
2. `AGENTS.md` at repo root - shared project notes
3. `AGENTS.md` in the current working directory - sub-folder/feature specifics
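A repo-root `AGENTS.md` can be as simple as a few bullet points; the contents below are purely illustrative:
```markdown
# Notes for Codex

- Run `cargo test` before proposing a change.
- Keep diffs small and focused; follow the existing code style.
```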
---
## Non-interactive / CI mode
Run Codex head-less in pipelines. Example GitHub Action step:
```yaml
- name: Update changelog via Codex
run: |
npm install -g @openai/codex
export OPENAI_API_KEY="${{ secrets.OPENAI_KEY }}"
codex exec --full-auto "update CHANGELOG for next release"
```
## Model Context Protocol (MCP)
The Codex CLI can be configured to leverage MCP servers by defining an [`mcp_servers`](./codex-rs/config.md#mcp_servers) section in `~/.codex/config.toml`. It is intended to mirror how tools such as Claude and Cursor define `mcpServers` in their respective JSON config files, though the Codex format is slightly different since it uses TOML rather than JSON, e.g.:
```toml
# IMPORTANT: the top-level key is `mcp_servers` rather than `mcpServers`.
[mcp_servers.server-name]
command = "npx"
args = ["-y", "mcp-server"]
env = { "API_KEY" = "value" }
```
> [!TIP]
> It is somewhat experimental, but the Codex CLI can also be run as an MCP _server_ via `codex mcp`. If you launch it with an MCP client such as `npx @modelcontextprotocol/inspector codex mcp` and send it a `tools/list` request, you will see that there is only one tool, `codex`, that accepts a grab-bag of inputs, including a catch-all `config` map for anything you might want to override. Feel free to play around with it and provide feedback via GitHub issues.
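For example, to experiment with the server using the MCP Inspector command mentioned above:
```shell
npx @modelcontextprotocol/inspector codex mcp
```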
## Tracing / verbose logging
Because Codex is written in Rust, it honors the `RUST_LOG` environment variable to configure its logging behavior.
The TUI defaults to `RUST_LOG=codex_core=info,codex_tui=info` and log messages are written to `~/.codex/log/codex-tui.log`, so you can leave the following running in a separate terminal to monitor log messages as they are written:
```shell
tail -F ~/.codex/log/codex-tui.log
```
By comparison, the non-interactive mode (`codex exec`) defaults to `RUST_LOG=error`, but messages are printed inline, so there is no need to monitor a separate file.
See the Rust documentation on [`RUST_LOG`](https://docs.rs/env_logger/latest/env_logger/#enabling-logging) for more information on the configuration options.
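For example, to get more detail from the TUI than the defaults described above, you can set the filter yourself (the module levels shown are just one possibility):
```shell
RUST_LOG=codex_core=debug,codex_tui=debug codex
```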
---
### DotSlash
The GitHub Release also contains a [DotSlash](https://dotslash-cli.com/) file for the Codex CLI named `codex`. Committing a DotSlash file to source control is a lightweight way to ensure all contributors use the same version of an executable, regardless of the platform they use for development.
</details>
<details>
<summary><strong>Build from source</strong></summary>
```bash
# Clone the repository and navigate to the root of the Cargo workspace.
git clone https://github.com/openai/codex.git
cd codex/codex-rs
# Install the Rust toolchain, if necessary.
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
source "$HOME/.cargo/env"
rustup component add rustfmt
rustup component add clippy
# Build Codex.
cargo build
# Launch the TUI with a sample prompt.
cargo run --bin codex -- "explain this codebase to me"
# After making changes, ensure the code is clean.
cargo fmt -- --config imports_granularity=Item
cargo clippy --tests
# Run the tests.
cargo test
```
</details>
---
## Configuration
Codex supports a rich set of configuration options documented in [`codex-rs/config.md`](./codex-rs/config.md).
By default, Codex loads its configuration from `~/.codex/config.toml`.
You can also use `--config` to set or override ad-hoc config values for individual invocations of `codex`.
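For example, to override the model for a single invocation without touching `config.toml`:
```shell
codex --config model="o3" "explain this codebase to me"
```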
---
## FAQ
<details>
<summary>OpenAI released a model called Codex in 2021 - is this related?</summary>
In 2021, OpenAI released Codex, an AI system designed to generate code from natural language prompts. That original Codex model was deprecated as of March 2023 and is separate from the CLI tool.
</details>
<details>
<summary>Which models are supported?</summary>
Any model available with [Responses API](https://platform.openai.com/docs/api-reference/responses). The default is `o4-mini`, but pass `--model gpt-4.1` or set `model: gpt-4.1` in your config file to override.
</details>
<details>
<summary>Why does <code>o3</code> or <code>o4-mini</code> not work for me?</summary>
It's possible that your [API account needs to be verified](https://help.openai.com/en/articles/10910291-api-organization-verification) in order to start streaming responses and seeing chain of thought summaries from the API. If you're still running into issues, please let us know!
</details>
<details>
<summary>How do I stop Codex from editing my files?</summary>
Codex runs model-generated commands in a sandbox. If a proposed command or file change doesn't look right, you can simply type **n** to deny the command or give the model feedback.
</details>
<details>
<summary>Does it work on Windows?</summary>
Not directly. It requires [Windows Subsystem for Linux (WSL2)](https://learn.microsoft.com/en-us/windows/wsl/install) - Codex has been tested on macOS and Linux with Node 22.
</details>
---
## Zero data retention (ZDR) usage
Codex CLI **does** support OpenAI organizations with [Zero Data Retention (ZDR)](https://platform.openai.com/docs/guides/your-data#zero-data-retention) enabled. If your OpenAI organization has Zero Data Retention enabled and you still encounter errors such as:
```
OpenAI rejected the request. Error details: Status: 400, Code: unsupported_parameter, Type: invalid_request_error, Message: 400 Previous response cannot be used for this organization due to Zero Data Retention.
```
Ensure you are running `codex` with `--config disable_response_storage=true` or add this line to `~/.codex/config.toml` to avoid specifying the command line option each time:
```toml
disable_response_storage = true
```
See [the configuration documentation on `disable_response_storage`](./codex-rs/config.md#disable_response_storage) for details.
---
## Codex open source fund
We're excited to launch a **$1 million initiative** supporting open source projects that use Codex CLI and other OpenAI models.
- Grants of up to **$25,000** in API credits are awarded.
- Applications are reviewed **on a rolling basis**.
**Interested? [Apply here](https://openai.com/form/codex-open-source-fund/).**
---
## Contributing
This project is under active development and the code will likely change pretty significantly.
**At the moment, we only plan to prioritize reviewing external contributions for bugs or security fixes.**
If you want to add a new feature or change the behavior of an existing one, please open an issue proposing the feature and get approval from an OpenAI team member before spending time building it.
**New contributions that don't go through this process may be closed** if they aren't aligned with our current roadmap or conflict with other priorities/upcoming features.
### Development workflow
- Create a _topic branch_ from `main` - e.g. `feat/interactive-prompt`.
- Keep your changes focused. Multiple unrelated fixes should be opened as separate PRs.
- Following the [development setup](#development-workflow) instructions above, ensure your change is free of lint warnings and test failures.
### Writing high-impact code changes
1. **Start with an issue.** Open a new one or comment on an existing discussion so we can agree on the solution before code is written.
2. **Add or update tests.** Every new feature or bug-fix should come with test coverage that fails before your change and passes afterwards. 100% coverage is not required, but aim for meaningful assertions.
3. **Document behaviour.** If your change affects user-facing behaviour, update the README, inline help (`codex --help`), or relevant example projects.
4. **Keep commits atomic.** Each commit should compile and the tests should pass. This makes reviews and potential rollbacks easier.
### Opening a pull request
- Fill in the PR template (or include similar information) - **What? Why? How?**
- Run **all** checks locally (`cargo test && cargo clippy --tests && cargo fmt -- --config imports_granularity=Item`). CI failures that could have been caught locally slow down the process.
- Make sure your branch is up-to-date with `main` and that you have resolved merge conflicts.
- Mark the PR as **Ready for review** only when you believe it is in a merge-able state.
### Review process
1. One maintainer will be assigned as a primary reviewer.
2. If your PR adds a new feature that was not previously discussed and approved, we may choose to close your PR (see [Contributing](#contributing)).
3. We may ask for changes - please do not take this personally. We value the work, but we also value consistency and long-term maintainability.
4. When there is consensus that the PR meets the bar, a maintainer will squash-and-merge.
### Community values
- **Be kind and inclusive.** Treat others with respect; we follow the [Contributor Covenant](https://www.contributor-covenant.org/).
- **Assume good intent.** Written communication is hard - err on the side of generosity.
- **Teach & learn.** If you spot something confusing, open an issue or PR with improvements.
### Getting help
If you run into problems setting up the project, would like feedback on an idea, or just want to say _hi_ - please open a Discussion or jump into the relevant issue. We are happy to help.
Together we can make Codex CLI an incredible tool. **Happy hacking!** :rocket:
### Contributor license agreement (CLA)
All contributors **must** accept the CLA. The process is lightweight:
1. Open your pull request.
2. Paste the following comment (or reply `recheck` if you've signed before):
```text
I have read the CLA Document and I hereby sign the CLA
```
3. The CLA-Assistant bot records your signature in the repo and marks the status check as passed.
No special Git commands, email attachments, or commit footers required.
#### Quick fixes
| Scenario | Command |
| ----------------- | ------------------------------------------------ |
| Amend last commit | `git commit --amend -s --no-edit && git push -f` |
The **DCO check** blocks merges until every commit in the PR carries the footer (with squash this is just the one).
### Releasing `codex`
_For admins only._
Make sure you are on `main` and have no local changes. Then run:
```shell
VERSION=0.2.0 # Can also be 0.2.0-alpha.1 or any valid Rust version.
./codex-rs/scripts/create_github_release.sh "$VERSION"
```
This will make a local commit on top of `main` with `version` set to `$VERSION` in `codex-rs/Cargo.toml` (note that on `main`, we leave the version as `version = "0.0.0"`).
This will push the commit using the tag `rust-v${VERSION}`, which in turn kicks off [the release workflow](.github/workflows/rust-release.yml). This will create a new GitHub Release named `$VERSION`.
If everything looks good in the generated GitHub Release, uncheck the **pre-release** box so it is the latest release.
Create a PR to update [`Formula/c/codex.rb`](https://github.com/Homebrew/homebrew-core/blob/main/Formula/c/codex.rb) on Homebrew.
---
## Security & responsible AI
Have you discovered a vulnerability or have concerns about model output? Please e-mail **security@openai.com** and we will respond promptly.
### Docs & FAQ
- [**Getting started**](./docs/getting-started.md)
- [CLI usage](./docs/getting-started.md#cli-usage)
- [Running with a prompt as input](./docs/getting-started.md#running-with-a-prompt-as-input)
- [Example prompts](./docs/getting-started.md#example-prompts)
- [Memory with AGENTS.md](./docs/getting-started.md#memory--project-docs)
- [Configuration](./docs/config.md)
- [**Sandbox & approvals**](./docs/sandbox.md)
- [**Authentication**](./docs/authentication.md)
- [Auth methods](./docs/authentication.md#forcing-a-specific-auth-method-advanced)
- [Login on a "Headless" machine](./docs/authentication.md#connecting-on-a-headless-machine)
- [**Advanced**](./docs/advanced.md)
- [Non-interactive / CI mode](./docs/advanced.md#non-interactive--ci-mode)
- [Tracing / verbose logging](./docs/advanced.md#tracing--verbose-logging)
- [Model Context Protocol (MCP)](./docs/advanced.md#model-context-protocol-mcp)
- [**Zero data retention (ZDR)**](./docs/zdr.md)
- [**Contributing**](./docs/contributing.md)
- [**Install & build**](./docs/install.md)
- [System Requirements](./docs/install.md#system-requirements)
- [DotSlash](./docs/install.md#dotslash)
- [Build from source](./docs/install.md#build-from-source)
- [**FAQ**](./docs/faq.md)
- [**Open source fund**](./docs/open-source-fund.md)
---
## License
This repository is licensed under the [Apache-2.0 License](LICENSE).

codex-rs/Cargo.lock generated
View File

@@ -186,6 +186,26 @@ version = "1.4.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "dde20b3d026af13f561bdd0f15edf01fc734f0dafcedbaf42bba506a9517f223"
[[package]]
name = "arboard"
version = "3.6.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "55f533f8e0af236ffe5eb979b99381df3258853f00ba2e44b6e1955292c75227"
dependencies = [
"clipboard-win",
"image",
"log",
"objc2",
"objc2-app-kit",
"objc2-core-foundation",
"objc2-core-graphics",
"objc2-foundation",
"parking_lot",
"percent-encoding",
"windows-sys 0.59.0",
"x11rb",
]
[[package]]
name = "arg_enum_proc_macro"
version = "0.3.4"
@@ -615,6 +635,7 @@ name = "codex-apply-patch"
version = "0.0.0"
dependencies = [
"anyhow",
"assert_cmd",
"pretty_assertions",
"similar",
"tempfile",
@@ -632,6 +653,7 @@ dependencies = [
"codex-core",
"codex-linux-sandbox",
"dotenvy",
"tempfile",
"tokio",
]
@@ -711,6 +733,7 @@ dependencies = [
"mime_guess",
"openssl-sys",
"os_info",
"portable-pty",
"predicates",
"pretty_assertions",
"rand 0.9.2",
@@ -731,12 +754,13 @@ dependencies = [
"tokio-test",
"tokio-util",
"toml 0.9.5",
"toml_edit 0.23.3",
"toml_edit 0.23.4",
"tracing",
"tree-sitter",
"tree-sitter-bash",
"uuid",
"walkdir",
"which",
"whoami",
"wildmatch",
"wiremock",
@@ -753,6 +777,7 @@ dependencies = [
"codex-arg0",
"codex-common",
"codex-core",
"codex-login",
"codex-ollama",
"codex-protocol",
"core_test_support",
@@ -822,6 +847,7 @@ version = "0.0.0"
dependencies = [
"base64 0.22.1",
"chrono",
"codex-protocol",
"pretty_assertions",
"rand 0.8.5",
"reqwest",
@@ -900,13 +926,16 @@ dependencies = [
name = "codex-protocol"
version = "0.0.0"
dependencies = [
"base64 0.22.1",
"mcp-types",
"mime_guess",
"pretty_assertions",
"serde",
"serde_bytes",
"serde_json",
"strum 0.27.2",
"strum_macros 0.27.2",
"tracing",
"ts-rs",
"uuid",
]
@@ -926,6 +955,8 @@ name = "codex-tui"
version = "0.0.0"
dependencies = [
"anyhow",
"arboard",
"async-stream",
"base64 0.22.1",
"chrono",
"clap",
@@ -942,11 +973,13 @@ dependencies = [
"diffy",
"image",
"insta",
"itertools 0.14.0",
"lazy_static",
"libc",
"mcp-types",
"once_cell",
"path-clean",
"pathdiff",
"pretty_assertions",
"rand 0.9.2",
"ratatui",
@@ -959,8 +992,10 @@ dependencies = [
"strum 0.27.2",
"strum_macros 0.27.2",
"supports-color",
"tempfile",
"textwrap 0.16.2",
"tokio",
"tokio-stream",
"tracing",
"tracing-appender",
"tracing-subscriber",
@@ -968,6 +1003,7 @@ dependencies = [
"tui-markdown",
"unicode-segmentation",
"unicode-width 0.1.14",
"url",
"uuid",
"vt100",
]
@@ -1161,6 +1197,7 @@ checksum = "829d955a0bb380ef178a640b91779e3987da38c9aea133b20614cfed8cdea9c6"
dependencies = [
"bitflags 2.9.1",
"crossterm_winapi",
"futures-core",
"mio",
"parking_lot",
"rustix 0.38.44",
@@ -1405,6 +1442,16 @@ dependencies = [
"winapi",
]
[[package]]
name = "dispatch2"
version = "0.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "89a09f22a6c6069a18470eb92d2298acf25463f14256d24778e1230d789a2aec"
dependencies = [
"bitflags 2.9.1",
"objc2",
]
[[package]]
name = "display_container"
version = "0.9.0"
@@ -1438,6 +1485,12 @@ version = "0.15.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1aaf95b3e5c8f23aa320147307562d361db0ae0d51242340f558153b4eb2439b"
[[package]]
name = "downcast-rs"
version = "1.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "75b325c5dbd37f80359721ad39aca5a29fb04c89279657cffdda8736d0c0b9d2"
[[package]]
name = "dupe"
version = "0.9.1"
@@ -1683,6 +1736,17 @@ dependencies = [
"simd-adler32",
]
[[package]]
name = "filedescriptor"
version = "0.8.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e40758ed24c9b2eeb76c35fb0aebc66c626084edd827e07e1552279814c6682d"
dependencies = [
"libc",
"thiserror 1.0.69",
"winapi",
]
[[package]]
name = "fixedbitset"
version = "0.4.2"
@@ -1858,6 +1922,16 @@ dependencies = [
"version_check",
]
[[package]]
name = "gethostname"
version = "0.4.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0176e0459c2e4a1fe232f984bca6890e681076abb9934f6cea7c326f3fc47818"
dependencies = [
"libc",
"windows-targets 0.48.5",
]
[[package]]
name = "getopts"
version = "0.2.23"
@@ -2651,6 +2725,7 @@ checksum = "4488594b9328dee448adb906d8b126d9b7deb7cf5c22161ee591610bb1be83c0"
dependencies = [
"bitflags 2.9.1",
"libc",
"redox_syscall",
]
[[package]]
@@ -3054,6 +3129,42 @@ dependencies = [
"objc2-encode",
]
[[package]]
name = "objc2-app-kit"
version = "0.3.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e6f29f568bec459b0ddff777cec4fe3fd8666d82d5a40ebd0ff7e66134f89bcc"
dependencies = [
"bitflags 2.9.1",
"objc2",
"objc2-core-graphics",
"objc2-foundation",
]
[[package]]
name = "objc2-core-foundation"
version = "0.3.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1c10c2894a6fed806ade6027bcd50662746363a9589d3ec9d9bef30a4e4bc166"
dependencies = [
"bitflags 2.9.1",
"dispatch2",
"objc2",
]
[[package]]
name = "objc2-core-graphics"
version = "0.3.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "989c6c68c13021b5c2d6b71456ebb0f9dc78d752e86a98da7c716f4f9470f5a4"
dependencies = [
"bitflags 2.9.1",
"dispatch2",
"objc2",
"objc2-core-foundation",
"objc2-io-surface",
]
[[package]]
name = "objc2-encode"
version = "4.1.0"
@@ -3068,6 +3179,18 @@ checksum = "900831247d2fe1a09a683278e5384cfb8c80c79fe6b166f9d14bfdde0ea1b03c"
dependencies = [
"bitflags 2.9.1",
"objc2",
"objc2-core-foundation",
]
[[package]]
name = "objc2-io-surface"
version = "0.3.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7282e9ac92529fa3457ce90ebb15f4ecbc383e8338060960760fa2cf75420c3c"
dependencies = [
"bitflags 2.9.1",
"objc2",
"objc2-core-foundation",
]
[[package]]
@@ -3256,6 +3379,12 @@ dependencies = [
"once_cell",
]
[[package]]
name = "pathdiff"
version = "0.2.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "df94ce210e5bc13cb6651479fa48d14f601d9858cfe0467f43ae157023b938d3"
[[package]]
name = "percent-encoding"
version = "2.3.1"
@@ -3340,6 +3469,27 @@ dependencies = [
"portable-atomic",
]
[[package]]
name = "portable-pty"
version = "0.9.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b4a596a2b3d2752d94f51fac2d4a96737b8705dddd311a32b9af47211f08671e"
dependencies = [
"anyhow",
"bitflags 1.3.2",
"downcast-rs",
"filedescriptor",
"lazy_static",
"libc",
"log",
"nix",
"serial2",
"shared_library",
"shell-words",
"winapi",
"winreg",
]
[[package]]
name = "potential_utf"
version = "0.1.2"
@@ -3765,9 +3915,9 @@ dependencies = [
[[package]]
name = "regex-lite"
version = "0.1.6"
version = "0.1.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "53a49587ad06b26609c52e423de037e7f57f20d53535d66e08c695f347df952a"
checksum = "943f41321c63ef1c92fd763bfe054d2668f7f225a5c29f0105903dc2fc04ba30"
[[package]]
name = "regex-syntax"
@@ -3789,9 +3939,9 @@ checksum = "ba39f3699c378cd8970968dcbff9c43159ea4cfbd88d43c00b22f2ef10a435d2"
[[package]]
name = "reqwest"
version = "0.12.22"
version = "0.12.23"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "cbc931937e6ca3a06e3b6c0aa7841849b160a90351d6ab467a8b9b9959767531"
checksum = "d429f34c8092b2d42c7c93cec323bb4adeb7c67698f70839adec842ec10c7ceb"
dependencies = [
"base64 0.22.1",
"bytes",
@@ -4183,9 +4333,9 @@ dependencies = [
[[package]]
name = "serde_json"
version = "1.0.142"
version = "1.0.143"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "030fedb782600dcbd6f02d479bf0d817ac3bb40d644745b769d6a96bc3afc5a7"
checksum = "d401abef1d108fbd9cbaebc3e46611f4b1021f714a0597a71f41ee463f5f4a5a"
dependencies = [
"indexmap 2.10.0",
"itoa",
@@ -4267,6 +4417,17 @@ dependencies = [
"syn 2.0.104",
]
[[package]]
name = "serial2"
version = "0.2.31"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "26e1e5956803a69ddd72ce2de337b577898801528749565def03515f82bad5bb"
dependencies = [
"cfg-if",
"libc",
"winapi",
]
[[package]]
name = "sha1"
version = "0.10.6"
@@ -4298,6 +4459,22 @@ dependencies = [
"lazy_static",
]
[[package]]
name = "shared_library"
version = "0.1.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5a9e7e0f2bfae24d8a5b5a66c5b257a83c7412304311512a0c054cd5e619da11"
dependencies = [
"lazy_static",
"libc",
]
[[package]]
name = "shell-words"
version = "1.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "24188a676b6ae68c3b2cb3a01be17fbf7240ce009799bb56d5b1409051e78fde"
[[package]]
name = "shlex"
version = "1.3.0"
@@ -5027,9 +5204,9 @@ dependencies = [
[[package]]
name = "toml_edit"
version = "0.23.3"
version = "0.23.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "17d3b47e6b7a040216ae5302712c94d1cf88c95b47efa80e2c59ce96c878267e"
checksum = "7211ff1b8f0d3adae1663b7da9ffe396eabe1ca25f0b0bee42b0da29a9ddce93"
dependencies = [
"indexmap 2.10.0",
"toml_datetime 0.7.0",
@@ -5597,12 +5774,24 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a751b3277700db47d3e574514de2eced5e54dc8a5436a3bf7a0b248b2cee16f3"
[[package]]
name = "whoami"
version = "1.6.0"
name = "which"
version = "6.0.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6994d13118ab492c3c80c1f81928718159254c53c472bf9ce36f8dae4add02a7"
checksum = "b4ee928febd44d98f2f459a4a79bd4d928591333a494a10a868418ac1b39cf1f"
dependencies = [
"redox_syscall",
"either",
"home",
"rustix 0.38.44",
"winsafe",
]
[[package]]
name = "whoami"
version = "1.6.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5d4a4db5077702ca3015d3d02d74974948aba2ad9e12ab7df718ee64ccd7e97d"
dependencies = [
"libredox",
"wasite",
"web-sys",
]
@@ -5829,6 +6018,21 @@ dependencies = [
"windows_x86_64_msvc 0.42.2",
]
[[package]]
name = "windows-targets"
version = "0.48.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9a2fa6e2155d7247be68c096456083145c183cbbbc2764150dda45a87197940c"
dependencies = [
"windows_aarch64_gnullvm 0.48.5",
"windows_aarch64_msvc 0.48.5",
"windows_i686_gnu 0.48.5",
"windows_i686_msvc 0.48.5",
"windows_x86_64_gnu 0.48.5",
"windows_x86_64_gnullvm 0.48.5",
"windows_x86_64_msvc 0.48.5",
]
[[package]]
name = "windows-targets"
version = "0.52.6"
@@ -5867,6 +6071,12 @@ version = "0.42.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "597a5118570b68bc08d8d59125332c54f1ba9d9adeedeef5b99b02ba2b0698f8"
[[package]]
name = "windows_aarch64_gnullvm"
version = "0.48.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2b38e32f0abccf9987a4e3079dfb67dcd799fb61361e53e2882c3cbaf0d905d8"
[[package]]
name = "windows_aarch64_gnullvm"
version = "0.52.6"
@@ -5885,6 +6095,12 @@ version = "0.42.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e08e8864a60f06ef0d0ff4ba04124db8b0fb3be5776a5cd47641e942e58c4d43"
[[package]]
name = "windows_aarch64_msvc"
version = "0.48.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "dc35310971f3b2dbbf3f0690a219f40e2d9afcf64f9ab7cc1be722937c26b4bc"
[[package]]
name = "windows_aarch64_msvc"
version = "0.52.6"
@@ -5903,6 +6119,12 @@ version = "0.42.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c61d927d8da41da96a81f029489353e68739737d3beca43145c8afec9a31a84f"
[[package]]
name = "windows_i686_gnu"
version = "0.48.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a75915e7def60c94dcef72200b9a8e58e5091744960da64ec734a6c6e9b3743e"
[[package]]
name = "windows_i686_gnu"
version = "0.52.6"
@@ -5933,6 +6155,12 @@ version = "0.42.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "44d840b6ec649f480a41c8d80f9c65108b92d89345dd94027bfe06ac444d1060"
[[package]]
name = "windows_i686_msvc"
version = "0.48.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8f55c233f70c4b27f66c523580f78f1004e8b5a8b659e05a4eb49d4166cca406"
[[package]]
name = "windows_i686_msvc"
version = "0.52.6"
@@ -5951,6 +6179,12 @@ version = "0.42.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8de912b8b8feb55c064867cf047dda097f92d51efad5b491dfb98f6bbb70cb36"
[[package]]
name = "windows_x86_64_gnu"
version = "0.48.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "53d40abd2583d23e4718fddf1ebec84dbff8381c07cae67ff7768bbf19c6718e"
[[package]]
name = "windows_x86_64_gnu"
version = "0.52.6"
@@ -5969,6 +6203,12 @@ version = "0.42.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "26d41b46a36d453748aedef1486d5c7a85db22e56aff34643984ea85514e94a3"
[[package]]
name = "windows_x86_64_gnullvm"
version = "0.48.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0b7b52767868a23d5bab768e390dc5f5c55825b6d30b86c844ff2dc7414044cc"
[[package]]
name = "windows_x86_64_gnullvm"
version = "0.52.6"
@@ -5987,6 +6227,12 @@ version = "0.42.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9aec5da331524158c6d1a4ac0ab1541149c0b9505fde06423b02f5ef0106b9f0"
[[package]]
name = "windows_x86_64_msvc"
version = "0.48.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ed94fce61571a4006852b7389a063ab983c02eb1bb37b47f8272ce92d06d9538"
[[package]]
name = "windows_x86_64_msvc"
version = "0.52.6"
@@ -6008,6 +6254,21 @@ dependencies = [
"memchr",
]
[[package]]
name = "winreg"
version = "0.10.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "80d0f4e272c85def139476380b12f9ac60926689dd2e01d4923222f40580869d"
dependencies = [
"winapi",
]
[[package]]
name = "winsafe"
version = "0.0.19"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d135d17ab770252ad95e9a872d365cf3090e3be864a34ab46f48555993efc904"
[[package]]
name = "wiremock"
version = "0.6.4"
@@ -6047,6 +6308,23 @@ version = "0.6.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ea2f10b9bb0928dfb1b42b65e1f9e36f7f54dbdf08457afefb38afcdec4fa2bb"
[[package]]
name = "x11rb"
version = "0.13.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5d91ffca73ee7f68ce055750bf9f6eca0780b8c85eff9bc046a3b0da41755e12"
dependencies = [
"gethostname",
"rustix 0.38.44",
"x11rb-protocol",
]
[[package]]
name = "x11rb-protocol"
version = "0.13.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ec107c4503ea0b4a98ef47356329af139c0a4f7750e621cf2973cd3385ebcb3d"
[[package]]
name = "yaml-rust"
version = "0.4.5"

View File

@@ -34,6 +34,7 @@ rust = {}
[workspace.lints.clippy]
expect_used = "deny"
uninlined_format_args = "deny"
unwrap_used = "deny"
[profile.release]

View File

@@ -19,11 +19,11 @@ While we are [working to close the gap between the TypeScript and Rust implement
### Config
Codex supports a rich set of configuration options. Note that the Rust CLI uses `config.toml` instead of `config.json`. See [`config.md`](./config.md) for details.
Codex supports a rich set of configuration options. Note that the Rust CLI uses `config.toml` instead of `config.json`. See [`docs/config.md`](../docs/config.md) for details.
### Model Context Protocol Support
Codex CLI functions as an MCP client that can connect to MCP servers on startup. See the [`mcp_servers`](./config.md#mcp_servers) section in the configuration documentation for details.
Codex CLI functions as an MCP client that can connect to MCP servers on startup. See the [`mcp_servers`](../docs/config.md#mcp_servers) section in the configuration documentation for details.
It is still experimental, but you can also launch Codex as an MCP _server_ by running `codex mcp`. Use the [`@modelcontextprotocol/inspector`](https://github.com/modelcontextprotocol/inspector) to try it out:
@@ -33,7 +33,7 @@ npx @modelcontextprotocol/inspector codex mcp
### Notifications
You can enable notifications by configuring a script that is run whenever the agent finishes a turn. The [notify documentation](./config.md#notify) includes a detailed example that explains how to get desktop notifications via [terminal-notifier](https://github.com/julienXX/terminal-notifier) on macOS.
You can enable notifications by configuring a script that is run whenever the agent finishes a turn. The [notify documentation](../docs/config.md#notify) includes a detailed example that explains how to get desktop notifications via [terminal-notifier](https://github.com/julienXX/terminal-notifier) on macOS.
### `codex exec` to run Codex programmatically/non-interactively
@@ -43,6 +43,12 @@ To run Codex non-interactively, run `codex exec PROMPT` (you can also pass the p
Typing `@` triggers a fuzzy-filename search over the workspace root. Use up/down to select among the results and Tab or Enter to replace the `@` with the selected path. You can use Esc to cancel the search.
### EscEsc to edit a previous message
When the chat composer is empty, press Esc to prime “backtrack” mode. Press Esc again to open a transcript preview highlighting the last user message; press Esc repeatedly to step to older user messages. Press Enter to confirm and Codex will fork the conversation from that point, trim the visible transcript accordingly, and prefill the composer with the selected user message so you can edit and resubmit it.
In the transcript preview, the footer shows an `Esc edit prev` hint while editing is active.
### `--cd`/`-C` flag
Sometimes it is not convenient to `cd` to the directory you want Codex to use as the "working root" before running Codex. Fortunately, `codex` supports a `--cd` option so you can specify whatever folder you want. You can confirm that Codex is honoring `--cd` by double-checking the **workdir** it reports in the TUI at the start of a new session.

View File

@@ -7,6 +7,10 @@ version = { workspace = true }
name = "codex_apply_patch"
path = "src/lib.rs"
[[bin]]
name = "apply_patch"
path = "src/main.rs"
[lints]
workspace = true
@@ -18,5 +22,6 @@ tree-sitter = "0.25.8"
tree-sitter-bash = "0.25.0"
[dev-dependencies]
assert_cmd = "2"
pretty_assertions = "1.4.1"
tempfile = "3.13.0"

View File

@@ -1,17 +1,23 @@
To edit files, ALWAYS use the `shell` tool with `apply_patch` CLI. `apply_patch` effectively allows you to execute a diff/patch against a file, but the format of the diff specification is unique to this task, so pay careful attention to these instructions. To use the `apply_patch` CLI, you should call the shell tool with the following structure:
## `apply_patch`
```bash
{"cmd": ["apply_patch", "<<'EOF'\\n*** Begin Patch\\n[YOUR_PATCH]\\n*** End Patch\\nEOF\\n"], "workdir": "..."}
```
Use the `apply_patch` shell command to edit files.
Your patch language is a stripped-down, file-oriented diff format designed to be easy to parse and safe to apply. You can think of it as a high-level envelope:
Where [YOUR_PATCH] is the actual content of your patch, specified in the following V4A diff format.
*** Begin Patch
[ one or more file sections ]
*** End Patch
*** [ACTION] File: [path/to/file] -> ACTION can be one of Add, Update, or Delete.
For each snippet of code that needs to be changed, repeat the following:
[context_before] -> See below for further instructions on context.
- [old_code] -> Precede the old code with a minus sign.
+ [new_code] -> Precede the new, replacement code with a plus sign.
[context_after] -> See below for further instructions on context.
Within that envelope, you get a sequence of file operations.
You MUST include a header to specify the action you are taking.
Each operation starts with one of three headers:
*** Add File: <path> - create a new file. Every following line is a + line (the initial contents).
*** Delete File: <path> - remove an existing file. Nothing follows.
*** Update File: <path> - patch an existing file in place (optionally with a rename).
May be immediately followed by *** Move to: <new path> if you want to rename the file.
Then one or more “hunks”, each introduced by @@ (optionally followed by a hunk header).
Within a hunk each line starts with:
For instructions on [context_before] and [context_after]:
- By default, show 3 lines of code immediately above and 3 lines immediately below each change. If a change is within 3 lines of a previous change, do NOT duplicate the first change's [context_after] lines in the second change's [context_before] lines.
@@ -25,16 +31,45 @@ For instructions on [context_before] and [context_after]:
- If a code block is repeated so many times in a class or function such that even a single `@@` statement and 3 lines of context cannot uniquely identify the snippet of code, you can use multiple `@@` statements to jump to the right context. For instance:
@@ class BaseClass
@@ def method():
@@ def method():
[3 lines of pre-context]
- [old_code]
+ [new_code]
[3 lines of post-context]
Note, then, that we do not use line numbers in this diff format, as the context is enough to uniquely identify code. An example of a message that you might pass as "input" to this function, in order to apply a patch, is shown below.
The full grammar definition is below:
Patch := Begin { FileOp } End
Begin := "*** Begin Patch" NEWLINE
End := "*** End Patch" NEWLINE
FileOp := AddFile | DeleteFile | UpdateFile
AddFile := "*** Add File: " path NEWLINE { "+" line NEWLINE }
DeleteFile := "*** Delete File: " path NEWLINE
UpdateFile := "*** Update File: " path NEWLINE [ MoveTo ] { Hunk }
MoveTo := "*** Move to: " newPath NEWLINE
Hunk := "@@" [ header ] NEWLINE { HunkLine } [ "*** End of File" NEWLINE ]
HunkLine := (" " | "-" | "+") text NEWLINE
A full patch can combine several operations:
*** Begin Patch
*** Add File: hello.txt
+Hello world
*** Update File: src/app.py
*** Move to: src/main.py
@@ def greet():
-print("Hi")
+print("Hello, world!")
*** Delete File: obsolete.txt
*** End Patch
It is important to remember:
- You must include a header with your intended action (Add/Delete/Update)
- You must prefix new lines with `+` even when creating a new file
- File references can only be relative, NEVER ABSOLUTE.
You can invoke apply_patch like:
```bash
{"cmd": ["apply_patch", "<<'EOF'\\n*** Begin Patch\\n*** Update File: pygorithm/searching/binary_search.py\\n@@ class BaseClass\\n@@ def search():\\n- pass\\n+ raise NotImplementedError()\\n@@ class Subclass\\n@@ def search():\\n- pass\\n+ raise NotImplementedError()\\n*** End Patch\\nEOF\\n"], "workdir": "..."}
```
File references can only be relative, NEVER ABSOLUTE. After the apply_patch command is run, it will always say "Done!", regardless of whether the patch was successfully applied or not. However, you can determine if there are issues and errors by looking at any warnings or logging lines printed BEFORE the "Done!" is output.
shell {"command":["apply_patch","*** Begin Patch\n*** Add File: hello.txt\n+Hello, world!\n*** End Patch\n"]}
```

View File

@@ -1,5 +1,6 @@
mod parser;
mod seek_sequence;
mod standalone_executable;
use std::collections::HashMap;
use std::path::Path;
@@ -19,9 +20,13 @@ use tree_sitter::LanguageError;
use tree_sitter::Parser;
use tree_sitter_bash::LANGUAGE as BASH;
pub use standalone_executable::main;
/// Detailed instructions for gpt-4.1 on how to use the `apply_patch` tool.
pub const APPLY_PATCH_TOOL_INSTRUCTIONS: &str = include_str!("../apply_patch_tool_instructions.md");
const APPLY_PATCH_COMMANDS: [&str; 2] = ["apply_patch", "applypatch"];
#[derive(Debug, Error, PartialEq)]
pub enum ApplyPatchError {
#[error(transparent)]
@@ -82,7 +87,6 @@ pub struct ApplyPatchArgs {
}
pub fn maybe_parse_apply_patch(argv: &[String]) -> MaybeApplyPatch {
const APPLY_PATCH_COMMANDS: [&str; 2] = ["apply_patch", "applypatch"];
match argv {
[cmd, body] if APPLY_PATCH_COMMANDS.contains(&cmd.as_str()) => match parse_patch(body) {
Ok(source) => MaybeApplyPatch::Body(source),
@@ -91,7 +95,9 @@ pub fn maybe_parse_apply_patch(argv: &[String]) -> MaybeApplyPatch {
[bash, flag, script]
if bash == "bash"
&& flag == "-lc"
&& script.trim_start().starts_with("apply_patch") =>
&& APPLY_PATCH_COMMANDS
.iter()
.any(|cmd| script.trim_start().starts_with(cmd)) =>
{
match extract_heredoc_body_from_apply_patch_command(script) {
Ok(body) => match parse_patch(&body) {
@@ -110,7 +116,9 @@ pub enum ApplyPatchFileChange {
Add {
content: String,
},
Delete,
Delete {
content: String,
},
Update {
unified_diff: String,
move_path: Option<PathBuf>,
@@ -204,7 +212,18 @@ pub fn maybe_parse_apply_patch_verified(argv: &[String], cwd: &Path) -> MaybeApp
changes.insert(path, ApplyPatchFileChange::Add { content: contents });
}
Hunk::DeleteFile { .. } => {
changes.insert(path, ApplyPatchFileChange::Delete);
let content = match std::fs::read_to_string(&path) {
Ok(content) => content,
Err(e) => {
return MaybeApplyPatchVerified::CorrectnessError(
ApplyPatchError::IoError(IoError {
context: format!("Failed to read {}", path.display()),
source: e,
}),
);
}
};
changes.insert(path, ApplyPatchFileChange::Delete { content });
}
Hunk::UpdateFile {
move_path, chunks, ..
@@ -262,7 +281,10 @@ pub fn maybe_parse_apply_patch_verified(argv: &[String], cwd: &Path) -> MaybeApp
fn extract_heredoc_body_from_apply_patch_command(
src: &str,
) -> std::result::Result<String, ExtractHeredocError> {
if !src.trim_start().starts_with("apply_patch") {
if !APPLY_PATCH_COMMANDS
.iter()
.any(|cmd| src.trim_start().starts_with(cmd))
{
return Err(ExtractHeredocError::CommandDidNotStartWithApplyPatch);
}
@@ -773,6 +795,33 @@ PATCH"#,
}
}
#[test]
fn test_heredoc_applypatch() {
let args = strs_to_strings(&[
"bash",
"-lc",
r#"applypatch <<'PATCH'
*** Begin Patch
*** Add File: foo
+hi
*** End Patch
PATCH"#,
]);
match maybe_parse_apply_patch(&args) {
MaybeApplyPatch::Body(ApplyPatchArgs { hunks, patch: _ }) => {
assert_eq!(
hunks,
vec![Hunk::AddFile {
path: PathBuf::from("foo"),
contents: "hi\n".to_string()
}]
);
}
result => panic!("expected MaybeApplyPatch::Body got {result:?}"),
}
}
#[test]
fn test_add_file_hunk_creates_file_with_contents() {
let dir = tempdir().unwrap();

View File

@@ -0,0 +1,3 @@
pub fn main() -> ! {
codex_apply_patch::main()
}

View File

@@ -0,0 +1,59 @@
use std::io::Read;
use std::io::Write;
pub fn main() -> ! {
let exit_code = run_main();
std::process::exit(exit_code);
}
/// We would prefer to return `std::process::ExitCode`, but its `exit_process()`
/// method is still a nightly API and we want main() to return !.
pub fn run_main() -> i32 {
// Expect either one argument (the full apply_patch payload) or read it from stdin.
let mut args = std::env::args_os();
let _argv0 = args.next();
let patch_arg = match args.next() {
Some(arg) => match arg.into_string() {
Ok(s) => s,
Err(_) => {
eprintln!("Error: apply_patch requires a UTF-8 PATCH argument.");
return 1;
}
},
None => {
// No argument provided; attempt to read the patch from stdin.
let mut buf = String::new();
match std::io::stdin().read_to_string(&mut buf) {
Ok(_) => {
if buf.is_empty() {
eprintln!("Usage: apply_patch 'PATCH'\n echo 'PATCH' | apply-patch");
return 2;
}
buf
}
Err(err) => {
eprintln!("Error: Failed to read PATCH from stdin.\n{err}");
return 1;
}
}
}
};
// Refuse extra args to avoid ambiguity.
if args.next().is_some() {
eprintln!("Error: apply_patch accepts exactly one argument.");
return 2;
}
let mut stdout = std::io::stdout();
let mut stderr = std::io::stderr();
match crate::apply_patch(&patch_arg, &mut stdout, &mut stderr) {
Ok(()) => {
// Flush to ensure output ordering when used in pipelines.
let _ = stdout.flush();
0
}
Err(_) => 1,
}
}

View File

@@ -0,0 +1,3 @@
// Single integration test binary that aggregates all test modules.
// The submodules live in `tests/suite/`.
mod suite;

View File

@@ -0,0 +1,90 @@
use assert_cmd::prelude::*;
use std::fs;
use std::process::Command;
use tempfile::tempdir;
#[test]
fn test_apply_patch_cli_add_and_update() -> anyhow::Result<()> {
let tmp = tempdir()?;
let file = "cli_test.txt";
let absolute_path = tmp.path().join(file);
// 1) Add a file
let add_patch = format!(
r#"*** Begin Patch
*** Add File: {file}
+hello
*** End Patch"#
);
Command::cargo_bin("apply_patch")
.expect("should find apply_patch binary")
.arg(add_patch)
.current_dir(tmp.path())
.assert()
.success()
.stdout(format!("Success. Updated the following files:\nA {file}\n"));
assert_eq!(fs::read_to_string(&absolute_path)?, "hello\n");
// 2) Update the file
let update_patch = format!(
r#"*** Begin Patch
*** Update File: {file}
@@
-hello
+world
*** End Patch"#
);
Command::cargo_bin("apply_patch")
.expect("should find apply_patch binary")
.arg(update_patch)
.current_dir(tmp.path())
.assert()
.success()
.stdout(format!("Success. Updated the following files:\nM {file}\n"));
assert_eq!(fs::read_to_string(&absolute_path)?, "world\n");
Ok(())
}
#[test]
fn test_apply_patch_cli_stdin_add_and_update() -> anyhow::Result<()> {
let tmp = tempdir()?;
let file = "cli_test_stdin.txt";
let absolute_path = tmp.path().join(file);
// 1) Add a file via stdin
let add_patch = format!(
r#"*** Begin Patch
*** Add File: {file}
+hello
*** End Patch"#
);
let mut cmd =
assert_cmd::Command::cargo_bin("apply_patch").expect("should find apply_patch binary");
cmd.current_dir(tmp.path());
cmd.write_stdin(add_patch)
.assert()
.success()
.stdout(format!("Success. Updated the following files:\nA {file}\n"));
assert_eq!(fs::read_to_string(&absolute_path)?, "hello\n");
// 2) Update the file via stdin
let update_patch = format!(
r#"*** Begin Patch
*** Update File: {file}
@@
-hello
+world
*** End Patch"#
);
let mut cmd =
assert_cmd::Command::cargo_bin("apply_patch").expect("should find apply_patch binary");
cmd.current_dir(tmp.path());
cmd.write_stdin(update_patch)
.assert()
.success()
.stdout(format!("Success. Updated the following files:\nM {file}\n"));
assert_eq!(fs::read_to_string(&absolute_path)?, "world\n");
Ok(())
}

View File

@@ -0,0 +1 @@
mod cli;

View File

@@ -16,4 +16,5 @@ codex-apply-patch = { path = "../apply-patch" }
codex-core = { path = "../core" }
codex-linux-sandbox = { path = "../linux-sandbox" }
dotenvy = "0.15.7"
tempfile = "3"
tokio = { version = "1", features = ["rt-multi-thread"] }

View File

@@ -3,6 +3,13 @@ use std::path::Path;
use std::path::PathBuf;
use codex_core::CODEX_APPLY_PATCH_ARG1;
#[cfg(unix)]
use std::os::unix::fs::symlink;
use tempfile::TempDir;
const LINUX_SANDBOX_ARG0: &str = "codex-linux-sandbox";
const APPLY_PATCH_ARG0: &str = "apply_patch";
const MISSPELLED_APPLY_PATCH_ARG0: &str = "applypatch";
/// While we want to deploy the Codex CLI as a single executable for simplicity,
/// we also want to expose some of its functionality as distinct CLIs, so we use
@@ -39,9 +46,11 @@ where
.and_then(|s| s.to_str())
.unwrap_or("");
if exe_name == "codex-linux-sandbox" {
if exe_name == LINUX_SANDBOX_ARG0 {
// Safety: [`run_main`] never returns.
codex_linux_sandbox::run_main();
} else if exe_name == APPLY_PATCH_ARG0 || exe_name == MISSPELLED_APPLY_PATCH_ARG0 {
codex_apply_patch::main();
}
let argv1 = args.next().unwrap_or_default();
@@ -68,6 +77,19 @@ where
// before creating any threads/the Tokio runtime.
load_dotenv();
// Retain the TempDir so it exists for the lifetime of the invocation of
// this executable. Admittedly, we could invoke `keep()` on it, but it
// would be nice to avoid leaving temporary directories behind, if possible.
let _path_entry = match prepend_path_entry_for_apply_patch() {
Ok(path_entry) => Some(path_entry),
Err(err) => {
// It is possible that Codex will proceed successfully even if
// updating the PATH fails, so warn the user and move on.
eprintln!("WARNING: proceeding, even though we could not update PATH: {err}");
None
}
};
// Regular invocation create a Tokio runtime and execute the provided
// async entry-point.
let runtime = tokio::runtime::Runtime::new()?;
@@ -113,3 +135,67 @@ where
}
}
}
/// Creates a temporary directory with either:
///
/// - UNIX: `apply_patch` symlink to the current executable
/// - WINDOWS: `apply_patch.bat` batch script to invoke the current executable
/// with the "secret" --codex-run-as-apply-patch flag.
///
/// This temporary directory is prepended to the PATH environment variable so
/// that `apply_patch` can be on the PATH without requiring the user to
/// install a separate `apply_patch` executable, simplifying the deployment of
/// Codex CLI.
///
/// IMPORTANT: This function modifies the PATH environment variable, so it MUST
/// be called before multiple threads are spawned.
fn prepend_path_entry_for_apply_patch() -> std::io::Result<TempDir> {
let temp_dir = TempDir::new()?;
let path = temp_dir.path();
for filename in &[APPLY_PATCH_ARG0, MISSPELLED_APPLY_PATCH_ARG0] {
let exe = std::env::current_exe()?;
#[cfg(unix)]
{
let link = path.join(filename);
symlink(&exe, &link)?;
}
#[cfg(windows)]
{
let batch_script = path.join(format!("{filename}.bat"));
std::fs::write(
&batch_script,
format!(
r#"@echo off
"{}" {CODEX_APPLY_PATCH_ARG1} %*
"#,
exe.display()
),
)?;
}
}
#[cfg(unix)]
const PATH_SEPARATOR: &str = ":";
#[cfg(windows)]
const PATH_SEPARATOR: &str = ";";
let path_element = path.display();
let updated_path_env_var = match std::env::var("PATH") {
Ok(existing_path) => {
format!("{path_element}{PATH_SEPARATOR}{existing_path}")
}
Err(_) => {
format!("{path_element}")
}
};
unsafe {
std::env::set_var("PATH", updated_path_env_var);
}
Ok(temp_dir)
}

View File

@@ -0,0 +1,3 @@
// Single integration test binary that aggregates all test modules.
// The submodules live in `tests/suite/`.
mod suite;

View File

@@ -0,0 +1,2 @@
// Aggregates all former standalone integration tests as modules.
mod apply_command_e2e;

View File

@@ -159,7 +159,7 @@ async fn cli_main(codex_linux_sandbox_exe: Option<PathBuf>) -> anyhow::Result<()
codex_exec::run_main(exec_cli, codex_linux_sandbox_exe).await?;
}
Some(Subcommand::Mcp) => {
codex_mcp_server::run_main(codex_linux_sandbox_exe).await?;
codex_mcp_server::run_main(codex_linux_sandbox_exe, cli.config_overrides).await?;
}
Some(Subcommand::Login(mut login_cli)) => {
prepend_config_flags(&mut login_cli.config_overrides, cli.config_overrides);

View File

@@ -9,6 +9,7 @@ use codex_core::config::ConfigOverrides;
use codex_core::protocol::Event;
use codex_core::protocol::EventMsg;
use codex_core::protocol::Submission;
use codex_login::AuthManager;
use tokio::io::AsyncBufReadExt;
use tokio::io::BufReader;
use tracing::error;
@@ -36,7 +37,10 @@ pub async fn run_main(opts: ProtoCli) -> anyhow::Result<()> {
let config = Config::load_with_cli_overrides(overrides_vec, ConfigOverrides::default())?;
// Use conversation_manager API to start a conversation
let conversation_manager = ConversationManager::default();
let conversation_manager = ConversationManager::new(AuthManager::shared(
config.codex_home.clone(),
config.preferred_auth_method,
));
let NewConversation {
conversation_id: _,
conversation,

View File

@@ -0,0 +1,46 @@
use codex_core::protocol::AskForApproval;
use codex_core::protocol::SandboxPolicy;
/// A simple preset pairing an approval policy with a sandbox policy.
#[derive(Debug, Clone)]
pub struct ApprovalPreset {
/// Stable identifier for the preset.
pub id: &'static str,
/// Display label shown in UIs.
pub label: &'static str,
/// Short human description shown next to the label in UIs.
pub description: &'static str,
/// Approval policy to apply.
pub approval: AskForApproval,
/// Sandbox policy to apply.
pub sandbox: SandboxPolicy,
}
/// Built-in list of approval presets that pair approval and sandbox policy.
///
/// Keep this UI-agnostic so it can be reused by both TUI and MCP server.
pub fn builtin_approval_presets() -> Vec<ApprovalPreset> {
vec![
ApprovalPreset {
id: "read-only",
label: "Read Only",
description: "Codex can read files and answer questions. Codex requires approval to make edits, run commands, or access network",
approval: AskForApproval::OnRequest,
sandbox: SandboxPolicy::ReadOnly,
},
ApprovalPreset {
id: "auto",
label: "Auto",
description: "Codex can read files, make edits, and run commands in the workspace. Codex requires approval to work outside the workspace or access network",
approval: AskForApproval::OnRequest,
sandbox: SandboxPolicy::new_workspace_write_policy(),
},
ApprovalPreset {
id: "full-access",
label: "Full Access",
description: "Codex can read files, make edits, and run commands with network access, without approval. Exercise caution",
approval: AskForApproval::Never,
sandbox: SandboxPolicy::DangerFullAccess,
},
]
}

View File

@@ -31,3 +31,6 @@ pub use config_summary::create_config_summary_entries;
pub mod fuzzy_match;
// Shared model presets used by TUI and MCP server
pub mod model_presets;
// Shared approval presets (AskForApproval + Sandbox) used by TUI and MCP server
// Not to be confused with AskForApproval, which we should probably rename to EscalationPolicy.
pub mod approval_presets;

View File

@@ -24,28 +24,28 @@ pub fn builtin_model_presets() -> &'static [ModelPreset] {
ModelPreset {
id: "gpt-5-minimal",
label: "gpt-5 minimal",
description: "Fastest responses with very limited reasoning; ideal for coding, instructions, or lightweight tasks.",
description: "fastest responses with limited reasoning; ideal for coding, instructions, or lightweight tasks",
model: "gpt-5",
effort: ReasoningEffort::Minimal,
},
ModelPreset {
id: "gpt-5-low",
label: "gpt-5 low",
description: "Balances speed with some reasoning; useful for straightforward queries and short explanations.",
description: "balances speed with some reasoning; useful for straightforward queries and short explanations",
model: "gpt-5",
effort: ReasoningEffort::Low,
},
ModelPreset {
id: "gpt-5-medium",
label: "gpt-5 medium",
description: "Default setting; provides a solid balance of reasoning depth and latency for general-purpose tasks.",
description: "default setting; provides a solid balance of reasoning depth and latency for general-purpose tasks",
model: "gpt-5",
effort: ReasoningEffort::Medium,
},
ModelPreset {
id: "gpt-5-high",
label: "gpt-5 high",
description: "Maximizes reasoning depth for complex or ambiguous problems.",
description: "maximizes reasoning depth for complex or ambiguous problems",
model: "gpt-5",
effort: ReasoningEffort::High,
},

View File

@@ -1,535 +1,6 @@
# Config
# Configuration docs moved
Codex supports several mechanisms for setting config values:
This file has moved. Please see the latest configuration documentation here: [`docs/config.md`](../docs/config.md)
- Config-specific command-line flags, such as `--model o3` (highest precedence).
- A generic `-c`/`--config` flag that takes a `key=value` pair, such as `--config model="o3"`.
- The key can contain dots to set a value deeper than the root, e.g. `--config model_providers.openai.wire_api="chat"`.
- Values can contain objects, such as `--config shell_environment_policy.include_only=["PATH", "HOME", "USER"]`.
- For consistency with `config.toml`, values are in TOML format rather than JSON format, so use `{a = 1, b = 2}` rather than `{"a": 1, "b": 2}`.
- If `value` cannot be parsed as a valid TOML value, it is treated as a string value. This means that both `-c model="o3"` and `-c model=o3` are equivalent.
- The `$CODEX_HOME/config.toml` configuration file where the `CODEX_HOME` environment value defaults to `~/.codex`. (Note `CODEX_HOME` will also be where logs and other Codex-related information are stored.)
Both the `--config` flag and the `config.toml` file support the following options:
## model
The model that Codex should use.
```toml
model = "o3" # overrides the default of "gpt-5"
```
## model_providers
This option lets you override and amend the default set of model providers bundled with Codex. This value is a map where the key is the value to use with `model_provider` to select the corresponding provider.
For example, if you wanted to add a provider that uses the OpenAI 4o model via the chat completions API, then you could add the following configuration:
```toml
# Recall that in TOML, root keys must be listed before tables.
model = "gpt-4o"
model_provider = "openai-chat-completions"
[model_providers.openai-chat-completions]
# Name of the provider that will be displayed in the Codex UI.
name = "OpenAI using Chat Completions"
# The path `/chat/completions` will be appended to this URL to make the POST
# request for the chat completions.
base_url = "https://api.openai.com/v1"
# If `env_key` is set, identifies an environment variable that must be set when
# using Codex with this provider. The value of the environment variable must be
# non-empty and will be used in the `Bearer TOKEN` HTTP header for the POST request.
env_key = "OPENAI_API_KEY"
# Valid values for wire_api are "chat" and "responses". Defaults to "chat" if omitted.
wire_api = "chat"
# If necessary, extra query params that need to be added to the URL.
# See the Azure example below.
query_params = {}
```
Note this makes it possible to use Codex CLI with non-OpenAI models, so long as they use a wire API that is compatible with the OpenAI chat completions API. For example, you could define the following provider to use Codex CLI with Ollama running locally:
```toml
[model_providers.ollama]
name = "Ollama"
base_url = "http://localhost:11434/v1"
```
Or a third-party provider (using a distinct environment variable for the API key):
```toml
[model_providers.mistral]
name = "Mistral"
base_url = "https://api.mistral.ai/v1"
env_key = "MISTRAL_API_KEY"
```
Note that Azure requires `api-version` to be passed as a query parameter, so be sure to specify it as part of `query_params` when defining the Azure provider:
```toml
[model_providers.azure]
name = "Azure"
# Make sure you set the appropriate subdomain for this URL.
base_url = "https://YOUR_PROJECT_NAME.openai.azure.com/openai"
env_key = "AZURE_OPENAI_API_KEY" # Or "OPENAI_API_KEY", whichever you use.
query_params = { api-version = "2025-04-01-preview" }
```
It is also possible to configure a provider to include extra HTTP headers with a request. These can be hardcoded values (`http_headers`) or values read from environment variables (`env_http_headers`):
```toml
[model_providers.example]
# name, base_url, ...
# This will add the HTTP header `X-Example-Header` with value `example-value`
# to each request to the model provider.
http_headers = { "X-Example-Header" = "example-value" }
# This will add the HTTP header `X-Example-Features` with the value of the
# `EXAMPLE_FEATURES` environment variable to each request to the model provider
# _if_ the environment variable is set and its value is non-empty.
env_http_headers = { "X-Example-Features" = "EXAMPLE_FEATURES" }
```
### Per-provider network tuning
The following optional settings control retry behavior and streaming idle timeouts **per model provider**. They must be specified inside the corresponding `[model_providers.<id>]` block in `config.toml`. (Older releases accepted top-level keys; those are now ignored.)
Example:
```toml
[model_providers.openai]
name = "OpenAI"
base_url = "https://api.openai.com/v1"
env_key = "OPENAI_API_KEY"
# network tuning overrides (all optional; falls back to builtin defaults)
request_max_retries = 4 # retry failed HTTP requests
stream_max_retries = 10 # retry dropped SSE streams
stream_idle_timeout_ms = 300000 # 5m idle timeout
```
#### request_max_retries
How many times Codex will retry a failed HTTP request to the model provider. Defaults to `4`.
#### stream_max_retries
Number of times Codex will attempt to reconnect when a streaming response is interrupted. Defaults to `5`.
#### stream_idle_timeout_ms
How long Codex will wait for activity on a streaming response before treating the connection as lost. Defaults to `300_000` (5 minutes).
## model_provider
Identifies which provider to use from the `model_providers` map. Defaults to `"openai"`. You can override the `base_url` for the built-in `openai` provider via the `OPENAI_BASE_URL` environment variable.
Note that if you override `model_provider`, then you likely want to override
`model`, as well. For example, if you are running ollama with Mistral locally,
then you would need to add the following to your config in addition to the new entry in the `model_providers` map:
```toml
model_provider = "ollama"
model = "mistral"
```
## approval_policy
Determines when the user should be prompted to approve whether Codex can execute a command:
```toml
# Codex has hardcoded logic that defines a set of "trusted" commands.
# Setting the approval_policy to `untrusted` means that Codex will prompt the
# user before running a command not in the "trusted" set.
#
# See https://github.com/openai/codex/issues/1260 for the plan to enable
# end-users to define their own trusted commands.
approval_policy = "untrusted"
```
If you want to be notified whenever a command fails, use "on-failure":
```toml
# If the command fails when run in the sandbox, Codex asks for permission to
# retry the command outside the sandbox.
approval_policy = "on-failure"
```
If you want the model to run until it decides that it needs to ask you for escalated permissions, use "on-request":
```toml
# The model decides when to escalate
approval_policy = "on-request"
```
Alternatively, you can have the model run until it is done, and never ask to run a command with escalated permissions:
```toml
# User is never prompted: if the command fails, Codex will automatically try
# something out. Note the `exec` subcommand always uses this mode.
approval_policy = "never"
```
## profiles
A _profile_ is a collection of configuration values that can be set together. Multiple profiles can be defined in `config.toml` and you can specify the one you
want to use at runtime via the `--profile` flag.
Here is an example of a `config.toml` that defines multiple profiles:
```toml
model = "o3"
approval_policy = "unless-allow-listed"
disable_response_storage = false
# Setting `profile` is equivalent to specifying `--profile o3` on the command
# line, though the `--profile` flag can still be used to override this value.
profile = "o3"
[model_providers.openai-chat-completions]
name = "OpenAI using Chat Completions"
base_url = "https://api.openai.com/v1"
env_key = "OPENAI_API_KEY"
wire_api = "chat"
[profiles.o3]
model = "o3"
model_provider = "openai"
approval_policy = "never"
model_reasoning_effort = "high"
model_reasoning_summary = "detailed"
[profiles.gpt3]
model = "gpt-3.5-turbo"
model_provider = "openai-chat-completions"
[profiles.zdr]
model = "o3"
model_provider = "openai"
approval_policy = "on-failure"
disable_response_storage = true
```
Users can specify config values at multiple levels. Order of precedence is as follows:
1. custom command-line argument, e.g., `--model o3`
2. as part of a profile, where the `--profile` is specified via a CLI (or in the config file itself)
3. as an entry in `config.toml`, e.g., `model = "o3"`
4. the default value that comes with Codex CLI (i.e., Codex CLI defaults to `gpt-5`)
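As a concrete sketch of that ordering (the profile name and models here are illustrative):

```toml
model = "gpt-4o"        # (3) root-level entry

[profiles.fast]
model = "o4-mini"       # (2) applies when run with `--profile fast`
```

With `--profile fast`, Codex uses `o4-mini`; adding `--model o3` on the command line (1) overrides both; and if none of these were set, the built-in default `gpt-5` (4) would apply.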
## model_reasoning_effort
If the selected model is known to support reasoning (for example: `o3`, `o4-mini`, `codex-*`, `gpt-5`), reasoning is enabled by default when using the Responses API. As explained in the [OpenAI Platform documentation](https://platform.openai.com/docs/guides/reasoning?api-mode=responses#get-started-with-reasoning), this can be set to:
- `"minimal"`
- `"low"`
- `"medium"` (default)
- `"high"`
Note: to minimize reasoning, choose `"minimal"`.
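For example, a minimal snippet that lowers reasoning depth for quick tasks:

```toml
model_reasoning_effort = "low"  # or "minimal" for the least reasoning
```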
## model_reasoning_summary
If the model name starts with `"o"` (as in `"o3"` or `"o4-mini"`) or `"codex"`, reasoning is enabled by default when using the Responses API. As explained in the [OpenAI Platform documentation](https://platform.openai.com/docs/guides/reasoning?api-mode=responses#reasoning-summaries), this can be set to:
- `"auto"` (default)
- `"concise"`
- `"detailed"`
To disable reasoning summaries, set `model_reasoning_summary` to `"none"` in your config:
```toml
model_reasoning_summary = "none" # disable reasoning summaries
```
## model_supports_reasoning_summaries
By default, `reasoning` is only set on requests to OpenAI models that are known to support it. To force `reasoning` to be set on requests to the current model, set the following in `config.toml`:
```toml
model_supports_reasoning_summaries = true
```
## sandbox_mode
Codex executes model-generated shell commands inside an OS-level sandbox.
In most cases you can pick the desired behavior with a single option:
```toml
# same as `--sandbox read-only`
sandbox_mode = "read-only"
```
The default policy is `read-only`, which means commands can read any file on
disk, but attempts to write a file or access the network will be blocked.
A more relaxed policy is `workspace-write`. When specified, the current working directory for the Codex task will be writable (as well as `$TMPDIR` on macOS). Note that the CLI defaults to using the directory where it was spawned as `cwd`, though this can be overridden using `--cwd/-C`.
On macOS (and soon Linux), all writable roots (including `cwd`) that contain a `.git/` folder _as an immediate child_ will have the `.git/` folder configured as read-only while the rest of the Git repository remains writable. This means that commands like `git commit` will fail by default (as they entail writing to `.git/`) and will require Codex to ask for permission.
```toml
# same as `--sandbox workspace-write`
sandbox_mode = "workspace-write"
# Extra settings that only apply when `sandbox = "workspace-write"`.
[sandbox_workspace_write]
# By default, the cwd for the Codex session will be writable as well as $TMPDIR
# (if set) and /tmp (if it exists). Setting the respective options to `true`
# will override those defaults.
exclude_tmpdir_env_var = false
exclude_slash_tmp = false
# Optional list of _additional_ writable roots beyond $TMPDIR and /tmp.
writable_roots = ["/Users/YOU/.pyenv/shims"]
# Allow the command being run inside the sandbox to make outbound network
# requests. Disabled by default.
network_access = false
```
To disable sandboxing altogether, specify `danger-full-access` like so:
```toml
# same as `--sandbox danger-full-access`
sandbox_mode = "danger-full-access"
```
This is reasonable if Codex is running in an environment that provides its own sandboxing (such as a Docker container), making further sandboxing unnecessary.
This option may also be necessary when Codex runs in environments where its native sandboxing mechanisms are unsupported, such as older Linux kernels or Windows.
## mcp_servers
Defines the list of MCP servers that Codex can consult for tool use. Currently, only servers that are launched by executing a program that communicates over stdio are supported. For servers that use the SSE transport, consider an adapter like [mcp-proxy](https://github.com/sparfenyuk/mcp-proxy).
**Note:** Codex may cache the list of tools and resources from an MCP server so that Codex can include this information in context at startup without spawning all the servers. This is designed to save resources by loading MCP servers lazily.
This config option is comparable to how Claude and Cursor define `mcpServers` in their respective JSON config files, though because Codex uses TOML for its config language, the format is slightly different. For example, the following config in JSON:
```json
{
  "mcpServers": {
    "server-name": {
      "command": "npx",
      "args": ["-y", "mcp-server"],
      "env": {
        "API_KEY": "value"
      }
    }
  }
}
```
Should be represented as follows in `~/.codex/config.toml`:
```toml
# IMPORTANT: the top-level key is `mcp_servers` rather than `mcpServers`.
[mcp_servers.server-name]
command = "npx"
args = ["-y", "mcp-server"]
env = { "API_KEY" = "value" }
```
## disable_response_storage
Currently, customers whose accounts are set to use Zero Data Retention (ZDR) must set `disable_response_storage` to `true` so that Codex uses an alternative to the Responses API that works with ZDR:
```toml
disable_response_storage = true
```
## shell_environment_policy
Codex spawns subprocesses (e.g. when executing a `local_shell` tool-call suggested by the assistant). By default it now passes **your full environment** to those subprocesses. You can tune this behavior via the **`shell_environment_policy`** block in `config.toml`:
```toml
[shell_environment_policy]
# inherit can be "all" (default), "core", or "none"
inherit = "core"
# set to true to *skip* the filter for `"*KEY*"` and `"*TOKEN*"`
ignore_default_excludes = false
# exclude patterns (case-insensitive globs)
exclude = ["AWS_*", "AZURE_*"]
# force-set / override values
set = { CI = "1" }
# if provided, *only* vars matching these patterns are kept
include_only = ["PATH", "HOME"]
```
| Field | Type | Default | Description |
| ------------------------- | -------------------------- | ------- | ----------------------------------------------------------------------------------------------------------------------------------------------- |
| `inherit` | string | `all` | Starting template for the environment:<br>`all` (clone full parent env), `core` (`HOME`, `PATH`, `USER`, …), or `none` (start empty). |
| `ignore_default_excludes` | boolean | `false` | When `false`, Codex removes any var whose **name** contains `KEY`, `SECRET`, or `TOKEN` (case-insensitive) before other rules run. |
| `exclude` | array&lt;string&gt; | `[]` | Case-insensitive glob patterns to drop after the default filter.<br>Examples: `"AWS_*"`, `"AZURE_*"`. |
| `set` | table&lt;string,string&gt; | `{}` | Explicit key/value overrides or additions always win over inherited values. |
| `include_only` | array&lt;string&gt; | `[]` | If non-empty, a whitelist of patterns; only variables that match _one_ pattern survive the final step. (Generally used with `inherit = "all"`.) |
The patterns are **glob style**, not full regular expressions: `*` matches any
number of characters, `?` matches exactly one, and character classes like
`[A-Z]`/`[^0-9]` are supported. Matching is always **case-insensitive**. This
syntax is documented in code as `EnvironmentVariablePattern` (see
`core/src/config_types.rs`).
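For instance, a hypothetical policy combining all three pattern forms (the variable names are made up for illustration):

```toml
[shell_environment_policy]
inherit = "all"
# `*` spans any run of characters, `?` matches exactly one, and
# `[0-9]` is a character class; matching is case-insensitive.
exclude = ["AWS_*", "TEMP?", "LEGACY_[0-9]*"]
```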
If you just need a clean slate with a few custom entries you can write:
```toml
[shell_environment_policy]
inherit = "none"
set = { PATH = "/usr/bin", MY_FLAG = "1" }
```
Currently, `CODEX_SANDBOX_NETWORK_DISABLED=1` is also added to the environment, assuming network is disabled. This is not configurable.
## notify
Specify a program that will be executed to get notified about events generated by Codex. Note that the program will receive the notification argument as a string of JSON, e.g.:
```json
{
"type": "agent-turn-complete",
"turn-id": "12345",
"input-messages": ["Rename `foo` to `bar` and update the callsites."],
"last-assistant-message": "Rename complete and verified `cargo build` succeeds."
}
```
The `"type"` property will always be set. Currently, `"agent-turn-complete"` is the only notification type that is supported.
As an example, here is a Python script that parses the JSON and decides whether to show a desktop push notification using [terminal-notifier](https://github.com/julienXX/terminal-notifier) on macOS:
```python
#!/usr/bin/env python3

import json
import subprocess
import sys


def main() -> int:
    if len(sys.argv) != 2:
        print("Usage: notify.py <NOTIFICATION_JSON>")
        return 1

    try:
        notification = json.loads(sys.argv[1])
    except json.JSONDecodeError:
        return 1

    match notification_type := notification.get("type"):
        case "agent-turn-complete":
            assistant_message = notification.get("last-assistant-message")
            if assistant_message:
                title = f"Codex: {assistant_message}"
            else:
                title = "Codex: Turn Complete!"

            # Note: the payload key is hyphenated, matching the JSON example above.
            input_messages = notification.get("input-messages", [])
            message = " ".join(input_messages)
            title += message
        case _:
            print(f"not sending a push notification for: {notification_type}")
            return 0

    subprocess.check_output(
        [
            "terminal-notifier",
            "-title",
            title,
            "-message",
            message,
            "-group",
            "codex",
            "-ignoreDnD",
            "-activate",
            "com.googlecode.iterm2",
        ]
    )

    return 0


if __name__ == "__main__":
    sys.exit(main())
```
To have Codex use this script for notifications, you would configure it via `notify` in `~/.codex/config.toml` using the appropriate path to `notify.py` on your computer:
```toml
notify = ["python3", "/Users/mbolin/.codex/notify.py"]
```
## history
By default, Codex CLI records messages sent to the model in `$CODEX_HOME/history.jsonl`. Note that on UNIX, the file permissions are set to `o600`, so it should only be readable and writable by the owner.
To disable this behavior, configure `[history]` as follows:
```toml
[history]
persistence = "none" # "save-all" is the default value
```
## file_opener
Identifies the editor/URI scheme to use for hyperlinking citations in model output. If set, citations to files in the model output will be hyperlinked using the specified URI scheme so they can be ctrl/cmd-clicked from the terminal to open them.
For example, if the model output includes a reference such as `【F:/home/user/project/main.py†L42-L50】`, then this would be rewritten to link to the URI `vscode://file/home/user/project/main.py:42`.
Note this is **not** a general editor setting (like `$EDITOR`), as it only accepts a fixed set of values:
- `"vscode"` (default)
- `"vscode-insiders"`
- `"windsurf"`
- `"cursor"`
- `"none"` to explicitly disable this feature
Currently, `"vscode"` is the default, though Codex does not verify VS Code is installed. As such, `file_opener` may default to `"none"` or something else in the future.
## hide_agent_reasoning
Codex intermittently emits "reasoning" events that show the model's internal "thinking" before it produces a final answer. Some users may find these events distracting, especially in CI logs or minimal terminal output.
Setting `hide_agent_reasoning` to `true` suppresses these events in **both** the TUI as well as the headless `exec` sub-command:
```toml
hide_agent_reasoning = true # defaults to false
```
## show_raw_agent_reasoning
Surfaces the model's raw chain-of-thought ("raw reasoning content") when available.
Notes:
- Only takes effect if the selected model/provider actually emits raw reasoning content. Many models do not. When unsupported, this option has no visible effect.
- Raw reasoning may include intermediate thoughts or sensitive context. Enable only if acceptable for your workflow.
Example:
```toml
show_raw_agent_reasoning = true # defaults to false
```
## model_context_window
The size of the context window for the model, in tokens.
In general, Codex knows the context window for the most common OpenAI models, but if you are using a new model with an old version of the Codex CLI, then you can use `model_context_window` to tell Codex what value to use to determine how much context is left during a conversation.
## model_max_output_tokens
This is analogous to `model_context_window`, but for the maximum number of output tokens for the model.
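A small sketch of both overrides together; the token counts below are placeholders, not real model limits:

```toml
model_context_window = 400000    # total tokens the conversation can hold
model_max_output_tokens = 32000  # cap on tokens generated per response
```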
## project_doc_max_bytes
Maximum number of bytes to read from an `AGENTS.md` file to include in the instructions sent with the first turn of a session. Defaults to 32 KiB.
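For example, to double the limit (the value is in bytes):

```toml
project_doc_max_bytes = 65536  # 64 KiB; the default is 32 KiB
```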
## tui
Options that are specific to the TUI.
```toml
[tui]
# More to come here
```
+- Full config docs: [docs/config.md](../docs/config.md)
+- MCP servers section: [docs/config.md#mcp_servers](../docs/config.md#mcp_servers)


@@ -6,6 +6,7 @@ version = { workspace = true }
[lib]
name = "codex_core"
path = "src/lib.rs"
doctest = false
[lints]
workspace = true
@@ -28,8 +29,9 @@ libc = "0.2.175"
mcp-types = { path = "../mcp-types" }
mime_guess = "2.0"
os_info = "3.12.0"
portable-pty = "0.9.0"
rand = "0.9"
regex-lite = "0.1.6"
regex-lite = "0.1.7"
reqwest = { version = "0.12", features = ["json", "stream"] }
serde = { version = "1", features = ["derive"] }
serde_bytes = "0.11"
@@ -50,12 +52,12 @@ tokio = { version = "1", features = [
] }
tokio-util = "0.7.16"
toml = "0.9.5"
toml_edit = "0.23.3"
toml_edit = "0.23.4"
tracing = { version = "0.1.41", features = ["log"] }
tree-sitter = "0.25.8"
tree-sitter-bash = "0.25.0"
uuid = { version = "1", features = ["serde", "v4"] }
whoami = "1.6.0"
whoami = "1.6.1"
wildmatch = "2.4.0"
@@ -71,6 +73,9 @@ openssl-sys = { version = "*", features = ["vendored"] }
[target.aarch64-unknown-linux-musl.dependencies]
openssl-sys = { version = "*", features = ["vendored"] }
[target.'cfg(target_os = "windows")'.dependencies]
which = "6"
[dev-dependencies]
assert_cmd = "2"
core_test_support = { path = "tests/common" }


@@ -134,14 +134,6 @@ If completing the user's task requires writing or modifying files, your code and
- Do not use one-letter variable names unless explicitly requested.
- NEVER output inline citations like "【F:README.md†L5-L14】" in your outputs. The CLI is not able to render these so they will just be broken in the UI. Instead, if you output valid filepaths, users will be able to click on them to open the files in their editor.
## Testing your work
If the codebase has tests or the ability to build or run, you should use them to verify that your work is complete. Generally, your testing philosophy should be to start as specific as possible to the code you changed so that you can catch issues efficiently, then make your way to broader tests as you build confidence. If there's no test for the code you changed, and if the adjacent patterns in the codebases show that there's a logical place for you to add a test, you may do so. However, do not add tests to codebases with no tests, or where the patterns don't indicate so.
Once you're confident in correctness, use formatting commands to ensure that your code is well formatted. These commands can take time so you should run them on as precise a target as possible. If there are issues you can iterate up to 3 times to get formatting right, but if you still can't manage it's better to save the user time and present them a correct solution where you call out the formatting in your final message. If the codebase does not have a formatter configured, do not add one.
For all of testing, running, building, and formatting, do not attempt to fix unrelated bugs. It is not your responsibility to fix them. (You may mention them to the user in your final message though.)
## Sandbox and approvals
The Codex CLI harness supports several different sandboxing and approval configurations that the user can choose from.
@@ -177,6 +169,22 @@ Note that when sandboxing is set to read-only, you'll need to request approval f
You will be told what filesystem sandboxing, network sandboxing, and approval mode are active in a developer or user message. If you are not told about this, assume that you are running with workspace-write, network sandboxing ON, and approval on-failure.
## Validating your work
If the codebase has tests or the ability to build or run, consider using them to verify that your work is complete.
When testing, your philosophy should be to start as specific as possible to the code you changed so that you can catch issues efficiently, then make your way to broader tests as you build confidence. If there's no test for the code you changed, and if the adjacent patterns in the codebases show that there's a logical place for you to add a test, you may do so. However, do not add tests to codebases with no tests.
Similarly, once you're confident in correctness, you can suggest or use formatting commands to ensure that your code is well formatted. If there are issues you can iterate up to 3 times to get formatting right, but if you still can't manage it's better to save the user time and present them a correct solution where you call out the formatting in your final message. If the codebase does not have a formatter configured, do not add one.
For all of testing, running, building, and formatting, do not attempt to fix unrelated bugs. It is not your responsibility to fix them. (You may mention them to the user in your final message though.)
Be mindful of whether to run validation commands proactively. In the absence of behavioral guidance:
- When running in non-interactive approval modes like **never** or **on-failure**, proactively run tests, lint and do whatever you need to ensure you've completed the task.
- When working in interactive approval modes like **untrusted**, or **on-request**, hold off on running tests or lint commands until the user is ready for you to finalize your output, because these commands take time to run and slow down iteration. Instead suggest what you want to do next, and let the user confirm first.
- When working on test-related tasks, such as adding tests, fixing tests, or reproducing a bug to verify behavior, you may proactively run tests regardless of approval mode. Use your judgement to decide whether this is a test-related task.
## Ambition vs. precision
For tasks that have no prior context (i.e. the user is starting something brand new), you should feel free to be ambitious and demonstrate creativity with your implementation.
@@ -270,67 +278,6 @@ When using the shell, you must adhere to the following guidelines:
- When searching for text or files, prefer using `rg` or `rg --files` respectively because `rg` is much faster than alternatives like `grep`. (If the `rg` command is not found, then use alternatives.)
- Read files in chunks with a max chunk size of 250 lines. Do not use python scripts to attempt to output larger chunks of a file. Command line output will be truncated after 10 kilobytes or 256 lines of output, regardless of the command used.
## `apply_patch`
Your patch language is a stripped-down, file-oriented diff format designed to be easy to parse and safe to apply. You can think of it as a high-level envelope:

*** Begin Patch
[ one or more file sections ]
*** End Patch

Within that envelope, you get a sequence of file operations.
You MUST include a header to specify the action you are taking.
Each operation starts with one of three headers:

*** Add File: <path> - create a new file. Every following line is a + line (the initial contents).
*** Delete File: <path> - remove an existing file. Nothing follows.
*** Update File: <path> - patch an existing file in place (optionally with a rename).

May be immediately followed by *** Move to: <new path> if you want to rename the file.
Then one or more “hunks”, each introduced by @@ (optionally followed by a hunk header).
Within a hunk each line starts with:
+ for inserted text,
- for removed text, or
space ( ) for context.
At the end of a truncated hunk you can emit *** End of File.

Patch := Begin { FileOp } End
Begin := "*** Begin Patch" NEWLINE
End := "*** End Patch" NEWLINE
FileOp := AddFile | DeleteFile | UpdateFile
AddFile := "*** Add File: " path NEWLINE { "+" line NEWLINE }
DeleteFile := "*** Delete File: " path NEWLINE
UpdateFile := "*** Update File: " path NEWLINE [ MoveTo ] { Hunk }
MoveTo := "*** Move to: " newPath NEWLINE
Hunk := "@@" [ header ] NEWLINE { HunkLine } [ "*** End of File" NEWLINE ]
HunkLine := (" " | "-" | "+") text NEWLINE

A full patch can combine several operations:

*** Begin Patch
*** Add File: hello.txt
+Hello world
*** Update File: src/app.py
*** Move to: src/main.py
@@ def greet():
-print("Hi")
+print("Hello, world!")
*** Delete File: obsolete.txt
*** End Patch
It is important to remember:
- You must include a header with your intended action (Add/Delete/Update)
- You must prefix new lines with `+` even when creating a new file
You can invoke apply_patch like:
```
shell {"command":["apply_patch","*** Begin Patch\n*** Add File: hello.txt\n+Hello, world!\n*** End Patch\n"]}
```
## `update_plan`
A tool named `update_plan` is available to you. You can use it to keep an up-to-date, step-by-step plan for the task.


@@ -1,13 +1,13 @@
use crate::codex::Session;
use crate::codex::TurnContext;
use crate::models::FunctionCallOutputPayload;
use crate::models::ResponseInputItem;
use crate::protocol::FileChange;
use crate::protocol::ReviewDecision;
use crate::safety::SafetyCheck;
use crate::safety::assess_patch_safety;
use codex_apply_patch::ApplyPatchAction;
use codex_apply_patch::ApplyPatchFileChange;
use codex_protocol::models::FunctionCallOutputPayload;
use codex_protocol::models::ResponseInputItem;
use std::collections::HashMap;
use std::path::PathBuf;
@@ -109,7 +109,9 @@ pub(crate) fn convert_apply_patch_to_protocol(
ApplyPatchFileChange::Add { content } => FileChange::Add {
content: content.clone(),
},
ApplyPatchFileChange::Delete => FileChange::Delete,
ApplyPatchFileChange::Delete { content } => FileChange::Delete {
content: content.clone(),
},
ApplyPatchFileChange::Update {
unified_diff,
move_path,


@@ -19,14 +19,15 @@ use crate::ModelProviderInfo;
use crate::client_common::Prompt;
use crate::client_common::ResponseEvent;
use crate::client_common::ResponseStream;
use crate::config::Config;
use crate::error::CodexErr;
use crate::error::Result;
use crate::model_family::ModelFamily;
use crate::models::ContentItem;
use crate::models::ReasoningItemContent;
use crate::models::ResponseItem;
use crate::openai_tools::create_tools_json_for_chat_completions_api;
use crate::util::backoff;
use codex_protocol::models::ContentItem;
use codex_protocol::models::ReasoningItemContent;
use codex_protocol::models::ResponseItem;
/// Implementation for the classic Chat Completions API.
pub(crate) async fn stream_chat_completions(
@@ -34,6 +35,7 @@ pub(crate) async fn stream_chat_completions(
model_family: &ModelFamily,
client: &reqwest::Client,
provider: &ModelProviderInfo,
config: &Config,
) -> Result<ResponseStream> {
// Build messages array
let mut messages = Vec::<serde_json::Value>::new();
@@ -102,8 +104,53 @@ pub(crate) async fn stream_chat_completions(
"content": output.content,
}));
}
ResponseItem::Reasoning { .. } | ResponseItem::Other => {
// Omit these items from the conversation history.
ResponseItem::CustomToolCall {
id,
call_id: _,
name,
input,
status: _,
} => {
messages.push(json!({
"role": "assistant",
"content": null,
"tool_calls": [{
"id": id,
"type": "custom",
"custom": {
"name": name,
"input": input,
}
}]
}));
}
ResponseItem::CustomToolCallOutput { call_id, output } => {
messages.push(json!({
"role": "tool",
"tool_call_id": call_id,
"content": output,
}));
}
ResponseItem::Reasoning {
id: _,
summary,
content,
encrypted_content: _,
} => {
if !config.skip_reasoning_in_chat_completions {
// There is no clear way of sending reasoning items over chat completions.
// We are sending it as an assistant message.
tracing::info!("reasoning item: {:?}", item);
let reasoning =
format!("Reasoning Summary: {summary:?}, Reasoning Content: {content:?}");
messages.push(json!({
"role": "assistant",
"content": reasoning,
}));
}
}
ResponseItem::WebSearchCall { .. } | ResponseItem::Other => {
tracing::info!("omitting item from chat completions: {:?}", item);
continue;
}
}
@@ -321,6 +368,8 @@ async fn process_chat_sse<S>(
}
if let Some(reasoning) = maybe_text {
// Accumulate so we can emit a terminal Reasoning item at end-of-turn.
reasoning_text.push_str(&reasoning);
let _ = tx_event
.send(Ok(ResponseEvent::ReasoningContentDelta(reasoning)))
.await;
@@ -482,16 +531,19 @@ where
// do NOT emit yet. Forward any other item (e.g. FunctionCall) right
// away so downstream consumers see it.
let is_assistant_delta = matches!(&item, crate::models::ResponseItem::Message { role, .. } if role == "assistant");
let is_assistant_delta = matches!(&item, codex_protocol::models::ResponseItem::Message { role, .. } if role == "assistant");
if is_assistant_delta {
// Only use the final assistant message if we have not
// seen any deltas; otherwise, deltas already built the
// cumulative text and this would duplicate it.
if this.cumulative.is_empty()
&& let crate::models::ResponseItem::Message { content, .. } = &item
&& let codex_protocol::models::ResponseItem::Message { content, .. } =
&item
&& let Some(text) = content.iter().find_map(|c| match c {
crate::models::ContentItem::OutputText { text } => Some(text),
codex_protocol::models::ContentItem::OutputText { text } => {
Some(text)
}
_ => None,
})
{
@@ -515,26 +567,27 @@ where
if !this.cumulative_reasoning.is_empty()
&& matches!(this.mode, AggregateMode::AggregatedOnly)
{
let aggregated_reasoning = crate::models::ResponseItem::Reasoning {
id: String::new(),
summary: Vec::new(),
content: Some(vec![
crate::models::ReasoningItemContent::ReasoningText {
text: std::mem::take(&mut this.cumulative_reasoning),
},
]),
encrypted_content: None,
};
let aggregated_reasoning =
codex_protocol::models::ResponseItem::Reasoning {
id: String::new(),
summary: Vec::new(),
content: Some(vec![
codex_protocol::models::ReasoningItemContent::ReasoningText {
text: std::mem::take(&mut this.cumulative_reasoning),
},
]),
encrypted_content: None,
};
this.pending
.push_back(ResponseEvent::OutputItemDone(aggregated_reasoning));
emitted_any = true;
}
if !this.cumulative.is_empty() {
let aggregated_message = crate::models::ResponseItem::Message {
let aggregated_message = codex_protocol::models::ResponseItem::Message {
id: None,
role: "assistant".to_string(),
content: vec![crate::models::ContentItem::OutputText {
content: vec![codex_protocol::models::ContentItem::OutputText {
text: std::mem::take(&mut this.cumulative),
}],
};
@@ -592,6 +645,9 @@ where
Poll::Ready(Some(Ok(ResponseEvent::ReasoningSummaryPartAdded))) => {
continue;
}
Poll::Ready(Some(Ok(ResponseEvent::WebSearchCallBegin { call_id }))) => {
return Poll::Ready(Some(Ok(ResponseEvent::WebSearchCallBegin { call_id })));
}
}
}
}


@@ -1,14 +1,12 @@
use std::io::BufRead;
use std::path::Path;
use std::sync::OnceLock;
use std::time::Duration;
use bytes::Bytes;
use codex_login::AuthManager;
use codex_login::AuthMode;
use codex_login::CodexAuth;
use eventsource_stream::Eventsource;
use futures::prelude::*;
use regex_lite::Regex;
use reqwest::StatusCode;
use serde::Deserialize;
use serde::Serialize;
@@ -28,6 +26,7 @@ use crate::client_common::ResponseEvent;
use crate::client_common::ResponseStream;
use crate::client_common::ResponsesApiRequest;
use crate::client_common::create_reasoning_param_for_request;
use crate::client_common::create_text_param_for_request;
use crate::config::Config;
use crate::error::CodexErr;
use crate::error::Result;
@@ -36,13 +35,14 @@ use crate::flags::CODEX_RS_SSE_FIXTURE;
use crate::model_family::ModelFamily;
use crate::model_provider_info::ModelProviderInfo;
use crate::model_provider_info::WireApi;
use crate::models::ResponseItem;
use crate::openai_model_info::get_model_info;
use crate::openai_tools::create_tools_json_for_responses_api;
use crate::protocol::TokenUsage;
use crate::user_agent::get_codex_user_agent;
use crate::util::backoff;
use codex_protocol::config_types::ReasoningEffort as ReasoningEffortConfig;
use codex_protocol::config_types::ReasoningSummary as ReasoningSummaryConfig;
use codex_protocol::models::ResponseItem;
use std::sync::Arc;
#[derive(Debug, Deserialize)]
@@ -53,14 +53,17 @@ struct ErrorResponse {
#[derive(Debug, Deserialize)]
struct Error {
r#type: Option<String>,
code: Option<String>,
message: Option<String>,
// Optional fields available on "usage_limit_reached" and "usage_not_included" errors
plan_type: Option<String>,
resets_in_seconds: Option<u64>,
}
#[derive(Debug, Clone)]
pub struct ModelClient {
config: Arc<Config>,
auth: Option<CodexAuth>,
auth_manager: Option<Arc<AuthManager>>,
client: reqwest::Client,
provider: ModelProviderInfo,
session_id: Uuid,
@@ -71,7 +74,7 @@ pub struct ModelClient {
impl ModelClient {
pub fn new(
config: Arc<Config>,
auth: Option<CodexAuth>,
auth_manager: Option<Arc<AuthManager>>,
provider: ModelProviderInfo,
effort: ReasoningEffortConfig,
summary: ReasoningSummaryConfig,
@@ -79,7 +82,7 @@ impl ModelClient {
) -> Self {
Self {
config,
auth,
auth_manager,
client: reqwest::Client::new(),
provider,
session_id,
@@ -88,6 +91,12 @@ impl ModelClient {
}
}
pub fn get_model_context_window(&self) -> Option<u64> {
self.config
.model_context_window
.or_else(|| get_model_info(&self.config.model_family).map(|info| info.context_window))
}
/// Dispatches to either the Responses or Chat implementation depending on
/// the provider config. Public callers always invoke `stream()` the
/// specialised helpers are private to avoid accidental misuse.
@@ -101,6 +110,7 @@ impl ModelClient {
&self.config.model_family,
&self.client,
&self.provider,
&self.config,
)
.await?;
@@ -140,9 +150,13 @@ impl ModelClient {
return stream_from_fixture(path, self.provider.clone()).await;
}
let auth = self.auth.clone();
let auth_manager = self.auth_manager.clone();
let auth_mode = auth.as_ref().map(|a| a.mode);
let auth_mode = auth_manager
.as_ref()
.and_then(|m| m.auth())
.as_ref()
.map(|a| a.mode);
let store = prompt.store && auth_mode != Some(AuthMode::ChatGPT);
@@ -164,6 +178,19 @@ impl ModelClient {
let input_with_instructions = prompt.get_formatted_input();
// Only include `text.verbosity` for GPT-5 family models
let text = if self.config.model_family.family == "gpt-5" {
create_text_param_for_request(self.config.model_verbosity)
} else {
if self.config.model_verbosity.is_some() {
warn!(
"model_verbosity is set but ignored for non-gpt-5 model family: {}",
self.config.model_family.family
);
}
None
};
let payload = ResponsesApiRequest {
model: &self.config.model,
instructions: &full_instructions,
@@ -176,20 +203,24 @@ impl ModelClient {
stream: true,
include,
prompt_cache_key: Some(self.session_id.to_string()),
text,
};
let mut attempt = 0;
let max_retries = self.provider.request_max_retries();
trace!(
"POST to {}: {}",
self.provider.get_full_url(&auth),
serde_json::to_string(&payload)?
);
loop {
attempt += 1;
// Always fetch the latest auth in case a prior attempt refreshed the token.
let auth = auth_manager.as_ref().and_then(|m| m.auth());
trace!(
"POST to {}: {}",
self.provider.get_full_url(&auth),
serde_json::to_string(&payload)?
);
let mut req_builder = self
.provider
.create_request_builder(&self.client, &auth)
@@ -208,11 +239,7 @@ impl ModelClient {
req_builder = req_builder.header("chatgpt-account-id", account_id);
}
let originator = self
.config
.internal_originator
.as_deref()
.unwrap_or("codex_cli_rs");
let originator = &self.config.responses_originator_header;
req_builder = req_builder.header("originator", originator);
req_builder = req_builder.header("User-Agent", get_codex_user_agent(Some(originator)));
@@ -252,6 +279,13 @@ impl ModelClient {
.and_then(|v| v.to_str().ok())
.and_then(|s| s.parse::<u64>().ok());
if status == StatusCode::UNAUTHORIZED
&& let Some(manager) = auth_manager.as_ref()
&& manager.auth().is_some()
{
let _ = manager.refresh_token().await;
}
// The OpenAI Responses endpoint returns structured JSON bodies even for 4xx/5xx
// errors. When we bubble early with only the HTTP status the caller sees an opaque
// "unexpected status 400 Bad Request" which makes debugging nearly impossible.
@@ -259,7 +293,10 @@ impl ModelClient {
// exact error message (e.g. "Unknown parameter: 'input[0].metadata'"). The body is
// small and this branch only runs on error paths so the extra allocation is
// negligible.
if !(status == StatusCode::TOO_MANY_REQUESTS || status.is_server_error()) {
if !(status == StatusCode::TOO_MANY_REQUESTS
|| status == StatusCode::UNAUTHORIZED
|| status.is_server_error())
{
// Surface the error body to callers. Use `unwrap_or_default` per Clippy.
let body = res.text().await.unwrap_or_default();
return Err(CodexErr::UnexpectedStatus(status, body));
@@ -267,19 +304,20 @@ impl ModelClient {
if status == StatusCode::TOO_MANY_REQUESTS {
let body = res.json::<ErrorResponse>().await.ok();
if let Some(ErrorResponse {
error:
Error {
r#type: Some(error_type),
..
},
}) = body
{
if error_type == "usage_limit_reached" {
if let Some(ErrorResponse { error }) = body {
if error.r#type.as_deref() == Some("usage_limit_reached") {
// Prefer the plan_type provided in the error message if present
// because it's more up to date than the one encoded in the auth
// token.
let plan_type = error
.plan_type
.or_else(|| auth.and_then(|a| a.get_plan_type()));
let resets_in_seconds = error.resets_in_seconds;
return Err(CodexErr::UsageLimitReached(UsageLimitReachedError {
plan_type: auth.and_then(|a| a.get_plan_type()),
plan_type,
resets_in_seconds,
}));
} else if error_type == "usage_not_included" {
} else if error.r#type.as_deref() == Some("usage_not_included") {
return Err(CodexErr::UsageNotIncluded);
}
}
@@ -333,8 +371,8 @@ impl ModelClient {
self.summary
}
pub fn get_auth(&self) -> Option<CodexAuth> {
self.auth.clone()
pub fn get_auth_manager(&self) -> Option<Arc<AuthManager>> {
self.auth_manager.clone()
}
}
@@ -444,7 +482,8 @@ async fn process_sse<S>(
}
};
trace!("SSE event: {}", sse.data);
let raw = sse.data.clone();
trace!("SSE event: {}", raw);
let event: SseEvent = match serde_json::from_str(&sse.data) {
Ok(event) => event,
@@ -526,9 +565,8 @@ async fn process_sse<S>(
if let Some(error) = error {
match serde_json::from_value::<Error>(error.clone()) {
Ok(error) => {
let delay = try_parse_retry_after(&error);
let message = error.message.unwrap_or_default();
response_error = Some(CodexErr::Stream(message, delay));
response_error = Some(CodexErr::Stream(message, None));
}
Err(e) => {
debug!("failed to parse ErrorResponse: {e}");
@@ -553,11 +591,27 @@ async fn process_sse<S>(
}
"response.content_part.done"
| "response.function_call_arguments.delta"
| "response.custom_tool_call_input.delta"
| "response.custom_tool_call_input.done" // also emitted as response.output_item.done
| "response.in_progress"
| "response.output_item.added"
| "response.output_text.done" => {
// Currently, we ignore this event, but we handle it
// separately to skip the logging message in the `other` case.
| "response.output_text.done" => {}
"response.output_item.added" => {
if let Some(item) = event.item.as_ref() {
// Detect web_search_call begin and forward a synthetic event upstream.
if let Some(ty) = item.get("type").and_then(|v| v.as_str())
&& ty == "web_search_call"
{
let call_id = item
.get("id")
.and_then(|v| v.as_str())
.unwrap_or("")
.to_string();
let ev = ResponseEvent::WebSearchCallBegin { call_id };
if tx_event.send(Ok(ev)).await.is_err() {
return;
}
}
}
}
"response.reasoning_summary_part.added" => {
// Boundary between reasoning summary sections (e.g., titles).
@@ -567,7 +621,7 @@ async fn process_sse<S>(
}
}
"response.reasoning_summary_text.done" => {}
other => debug!(other, "sse event"),
_ => {}
}
}
}
@@ -598,40 +652,6 @@ async fn stream_from_fixture(
Ok(ResponseStream { rx_event })
}
fn rate_limit_regex() -> &'static Regex {
static RE: OnceLock<Regex> = OnceLock::new();
#[expect(clippy::unwrap_used)]
RE.get_or_init(|| Regex::new(r"Please try again in (\d+(?:\.\d+)?)(s|ms)").unwrap())
}
fn try_parse_retry_after(err: &Error) -> Option<Duration> {
if err.code != Some("rate_limit_exceeded".to_string()) {
return None;
}
// parse the Please try again in 1.898s format using regex
let re = rate_limit_regex();
if let Some(message) = &err.message
&& let Some(captures) = re.captures(message)
{
let seconds = captures.get(1);
let unit = captures.get(2);
if let (Some(value), Some(unit)) = (seconds, unit) {
let value = value.as_str().parse::<f64>().ok()?;
let unit = unit.as_str();
if unit == "s" {
return Some(Duration::from_secs_f64(value));
} else if unit == "ms" {
return Some(Duration::from_millis(value as u64));
}
}
}
None
}
#[cfg(test)]
mod tests {
use super::*;
@@ -852,7 +872,7 @@ mod tests {
msg,
"Rate limit reached for gpt-5 in organization org-AAA on tokens per min (TPM): Limit 30000, Used 22999, Requested 12528. Please try again in 11.054s. Visit https://platform.openai.com/account/rate-limits to learn more."
);
assert_eq!(*delay, Some(Duration::from_secs_f64(11.054)));
assert_eq!(*delay, None);
}
other => panic!("unexpected second event: {other:?}"),
}
@@ -956,27 +976,4 @@ mod tests {
);
}
}
#[test]
fn test_try_parse_retry_after() {
let err = Error {
r#type: None,
message: Some("Rate limit reached for gpt-5 in organization org- on tokens per min (TPM): Limit 1, Used 1, Requested 19304. Please try again in 28ms. Visit https://platform.openai.com/account/rate-limits to learn more.".to_string()),
code: Some("rate_limit_exceeded".to_string()),
};
let delay = try_parse_retry_after(&err);
assert_eq!(delay, Some(Duration::from_millis(28)));
}
#[test]
fn test_try_parse_retry_after_no_delay() {
let err = Error {
r#type: None,
message: Some("Rate limit reached for gpt-5 in organization <ORG> on tokens per min (TPM): Limit 30000, Used 6899, Requested 24050. Please try again in 1.898s. Visit https://platform.openai.com/account/rate-limits to learn more.".to_string()),
code: Some("rate_limit_exceeded".to_string()),
};
let delay = try_parse_retry_after(&err);
assert_eq!(delay, Some(Duration::from_secs_f64(1.898)));
}
}


@@ -1,12 +1,13 @@
use crate::config_types::Verbosity as VerbosityConfig;
use crate::error::Result;
use crate::model_family::ModelFamily;
use crate::models::ContentItem;
use crate::models::ResponseItem;
use crate::openai_tools::OpenAiTool;
use crate::protocol::TokenUsage;
use codex_apply_patch::APPLY_PATCH_TOOL_INSTRUCTIONS;
use codex_protocol::config_types::ReasoningEffort as ReasoningEffortConfig;
use codex_protocol::config_types::ReasoningSummary as ReasoningSummaryConfig;
use codex_protocol::models::ContentItem;
use codex_protocol::models::ResponseItem;
use futures::Stream;
use serde::Serialize;
use std::borrow::Cow;
@@ -47,7 +48,18 @@ impl Prompt {
.as_deref()
.unwrap_or(BASE_INSTRUCTIONS);
let mut sections: Vec<&str> = vec![base];
if model.needs_special_apply_patch_instructions {
// When there are no custom instructions, add apply_patch_tool_instructions if either:
// - the model needs special instructions (4.1), or
// - there is no apply_patch tool present
let is_apply_patch_tool_present = self.tools.iter().any(|tool| match tool {
OpenAiTool::Function(f) => f.name == "apply_patch",
OpenAiTool::Freeform(f) => f.name == "apply_patch",
_ => false,
});
if self.base_instructions_override.is_none()
&& (model.needs_special_apply_patch_instructions || !is_apply_patch_tool_present)
{
sections.push(APPLY_PATCH_TOOL_INSTRUCTIONS);
}
Cow::Owned(sections.join("\n"))
@@ -81,6 +93,9 @@ pub enum ResponseEvent {
ReasoningSummaryDelta(String),
ReasoningContentDelta(String),
ReasoningSummaryPartAdded,
WebSearchCallBegin {
call_id: String,
},
}
#[derive(Debug, Serialize)]
@@ -89,6 +104,32 @@ pub(crate) struct Reasoning {
pub(crate) summary: ReasoningSummaryConfig,
}
/// Controls under the `text` field in the Responses API for GPT-5.
#[derive(Debug, Serialize, Default, Clone, Copy)]
pub(crate) struct TextControls {
#[serde(skip_serializing_if = "Option::is_none")]
pub(crate) verbosity: Option<OpenAiVerbosity>,
}
#[derive(Debug, Serialize, Default, Clone, Copy)]
#[serde(rename_all = "lowercase")]
pub(crate) enum OpenAiVerbosity {
Low,
#[default]
Medium,
High,
}
impl From<VerbosityConfig> for OpenAiVerbosity {
fn from(v: VerbosityConfig) -> Self {
match v {
VerbosityConfig::Low => OpenAiVerbosity::Low,
VerbosityConfig::Medium => OpenAiVerbosity::Medium,
VerbosityConfig::High => OpenAiVerbosity::High,
}
}
}
/// Request object that is serialized as JSON and POST'ed when using the
/// Responses API.
#[derive(Debug, Serialize)]
@@ -109,6 +150,8 @@ pub(crate) struct ResponsesApiRequest<'a> {
pub(crate) include: Vec<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub(crate) prompt_cache_key: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub(crate) text: Option<TextControls>,
}
pub(crate) fn create_reasoning_param_for_request(
@@ -123,6 +166,14 @@ pub(crate) fn create_reasoning_param_for_request(
}
}
pub(crate) fn create_text_param_for_request(
verbosity: Option<VerbosityConfig>,
) -> Option<TextControls> {
verbosity.map(|v| TextControls {
verbosity: Some(v.into()),
})
}
pub(crate) struct ResponseStream {
pub(crate) rx_event: mpsc::Receiver<Result<ResponseEvent>>,
}
@@ -151,4 +202,57 @@ mod tests {
let full = prompt.get_full_instructions(&model_family);
assert_eq!(full, expected);
}
#[test]
fn serializes_text_verbosity_when_set() {
let input: Vec<ResponseItem> = vec![];
let tools: Vec<serde_json::Value> = vec![];
let req = ResponsesApiRequest {
model: "gpt-5",
instructions: "i",
input: &input,
tools: &tools,
tool_choice: "auto",
parallel_tool_calls: false,
reasoning: None,
store: true,
stream: true,
include: vec![],
prompt_cache_key: None,
text: Some(TextControls {
verbosity: Some(OpenAiVerbosity::Low),
}),
};
let v = serde_json::to_value(&req).expect("json");
assert_eq!(
v.get("text")
.and_then(|t| t.get("verbosity"))
.and_then(|s| s.as_str()),
Some("low")
);
}
#[test]
fn omits_text_when_not_set() {
let input: Vec<ResponseItem> = vec![];
let tools: Vec<serde_json::Value> = vec![];
let req = ResponsesApiRequest {
model: "gpt-5",
instructions: "i",
input: &input,
tools: &tools,
tool_choice: "auto",
parallel_tool_calls: false,
reasoning: None,
store: true,
stream: true,
include: vec![],
prompt_cache_key: None,
text: None,
};
let v = serde_json::to_value(&req).expect("json");
assert!(v.get("text").is_none());
}
}

File diff suppressed because it is too large.


@@ -6,6 +6,8 @@ use crate::config_types::ShellEnvironmentPolicy;
use crate::config_types::ShellEnvironmentPolicyToml;
use crate::config_types::Tui;
use crate::config_types::UriBasedFileOpener;
use crate::config_types::Verbosity;
use crate::git_info::resolve_root_git_project_for_trust;
use crate::model_family::ModelFamily;
use crate::model_family::find_family_for_model;
use crate::model_provider_info::ModelProviderInfo;
@@ -35,6 +37,8 @@ pub(crate) const PROJECT_DOC_MAX_BYTES: usize = 32 * 1024; // 32 KiB
const CONFIG_TOML_FILE: &str = "config.toml";
const DEFAULT_RESPONSES_ORIGINATOR_HEADER: &str = "codex_cli_rs";
/// Application configuration loaded from disk and merged with overrides.
#[derive(Debug, Clone, PartialEq)]
pub struct Config {
@@ -148,6 +152,9 @@ pub struct Config {
/// request using the Responses API.
pub model_reasoning_summary: ReasoningSummary,
/// Optional verbosity control for GPT-5 models (Responses API `text.verbosity`).
pub model_verbosity: Option<Verbosity>,
/// Base URL for requests to ChatGPT (as opposed to the OpenAI API).
pub chatgpt_base_url: String,
@@ -162,11 +169,26 @@ pub struct Config {
/// model family's default preference.
pub include_apply_patch_tool: bool,
pub tools_web_search_request: bool,
/// The value for the `originator` header included with Responses API requests.
pub internal_originator: Option<String>,
pub responses_originator_header: String,
/// If set to `true`, the API key will be signed with the `originator` header.
pub preferred_auth_method: AuthMode,
pub use_experimental_streamable_shell_tool: bool,
/// Include the `view_image` tool that lets the agent attach a local image path to context.
pub include_view_image_tool: bool,
/// When true, disables burst-paste detection for typed input entirely.
/// All characters are inserted as they are received, and no buffering
/// or placeholder replacement will occur for fast keypress bursts.
pub disable_paste_burst: bool,
/// When `true`, reasoning items in Chat Completions input will be skipped.
/// Defaults to `false`.
pub skip_reasoning_in_chat_completions: bool,
}
impl Config {
@@ -257,10 +279,61 @@ pub fn set_project_trusted(codex_home: &Path, project_path: &Path) -> anyhow::Re
Err(e) => return Err(e.into()),
};
// Mark the project as trusted. toml_edit is very good at handling
// missing properties
// Ensure we render a human-friendly structure:
//
// [projects]
// [projects."/path/to/project"]
// trust_level = "trusted"
//
// rather than inline tables like:
//
// [projects]
// "/path/to/project" = { trust_level = "trusted" }
let project_key = project_path.to_string_lossy().to_string();
doc["projects"][project_key.as_str()]["trust_level"] = toml_edit::value("trusted");
// Ensure top-level `projects` exists as a non-inline, explicit table. If it
// exists but was previously represented as a non-table (e.g., inline),
// replace it with an explicit table.
let mut created_projects_table = false;
{
let root = doc.as_table_mut();
let needs_table = !root.contains_key("projects")
|| root.get("projects").and_then(|i| i.as_table()).is_none();
if needs_table {
root.insert("projects", toml_edit::table());
created_projects_table = true;
}
}
let Some(projects_tbl) = doc["projects"].as_table_mut() else {
return Err(anyhow::anyhow!(
"projects table missing after initialization"
));
};
// If we created the `projects` table ourselves, keep it implicit so we
// don't render a standalone `[projects]` header.
if created_projects_table {
projects_tbl.set_implicit(true);
}
// Ensure the per-project entry is its own explicit table. If it exists but
// is not a table (e.g., an inline table), replace it with an explicit table.
let needs_proj_table = !projects_tbl.contains_key(project_key.as_str())
|| projects_tbl
.get(project_key.as_str())
.and_then(|i| i.as_table())
.is_none();
if needs_proj_table {
projects_tbl.insert(project_key.as_str(), toml_edit::table());
}
let Some(proj_tbl) = projects_tbl
.get_mut(project_key.as_str())
.and_then(|i| i.as_table_mut())
else {
return Err(anyhow::anyhow!("project table missing for {}", project_key));
};
proj_tbl.set_implicit(false);
proj_tbl["trust_level"] = toml_edit::value("trusted");
// ensure codex_home exists
std::fs::create_dir_all(codex_home)?;
@@ -396,6 +469,8 @@ pub struct ConfigToml {
pub model_reasoning_effort: Option<ReasoningEffort>,
pub model_reasoning_summary: Option<ReasoningSummary>,
/// Optional verbosity control for GPT-5 models (Responses API `text.verbosity`).
pub model_verbosity: Option<Verbosity>,
/// Override to force-enable reasoning summaries for the configured model.
pub model_supports_reasoning_summaries: Option<bool>,
@@ -409,13 +484,27 @@ pub struct ConfigToml {
/// Experimental path to a file whose contents replace the built-in BASE_INSTRUCTIONS.
pub experimental_instructions_file: Option<PathBuf>,
pub experimental_use_exec_command_tool: Option<bool>,
/// The value for the `originator` header included with Responses API requests.
pub internal_originator: Option<String>,
pub responses_originator_header_internal_override: Option<String>,
pub projects: Option<HashMap<String, ProjectConfig>>,
/// If set to `true`, the API key will be signed with the `originator` header.
pub preferred_auth_method: Option<AuthMode>,
/// Nested tools section for feature toggles
pub tools: Option<ToolsToml>,
/// When true, disables burst-paste detection for typed input entirely.
/// All characters are inserted as they are received, and no buffering
/// or placeholder replacement will occur for fast keypress bursts.
pub disable_paste_burst: Option<bool>,
/// When set to `true`, reasoning items will be skipped from Chat Completions input.
/// Defaults to `false`.
pub skip_reasoning_in_chat_completions: Option<bool>,
}
#[derive(Deserialize, Debug, Clone, PartialEq, Eq)]
@@ -423,6 +512,16 @@ pub struct ProjectConfig {
pub trust_level: Option<String>,
}
#[derive(Deserialize, Debug, Clone, Default)]
pub struct ToolsToml {
#[serde(default, alias = "web_search_request")]
pub web_search: Option<bool>,
/// Enable the `view_image` tool that lets the agent attach local images.
#[serde(default)]
pub view_image: Option<bool>,
}
impl ConfigToml {
/// Derive the effective sandbox policy from the configuration.
fn derive_sandbox_policy(&self, sandbox_mode_override: Option<SandboxMode>) -> SandboxPolicy {
@@ -452,10 +551,27 @@ impl ConfigToml {
pub fn is_cwd_trusted(&self, resolved_cwd: &Path) -> bool {
let projects = self.projects.clone().unwrap_or_default();
projects
.get(&resolved_cwd.to_string_lossy().to_string())
.map(|p| p.trust_level.clone().unwrap_or("".to_string()) == "trusted")
.unwrap_or(false)
let is_path_trusted = |path: &Path| {
let path_str = path.to_string_lossy().to_string();
projects
.get(&path_str)
.map(|p| p.trust_level.as_deref() == Some("trusted"))
.unwrap_or(false)
};
// Fast path: exact cwd match
if is_path_trusted(resolved_cwd) {
return true;
}
// If cwd lives inside a git worktree, check whether the root git project
// (the primary repository working directory) is trusted. This lets
// worktrees inherit trust from the main project.
if let Some(root_project) = resolve_root_git_project_for_trust(resolved_cwd) {
return is_path_trusted(&root_project);
}
false
}
pub fn get_config_profile(
@@ -493,8 +609,10 @@ pub struct ConfigOverrides {
pub base_instructions: Option<String>,
pub include_plan_tool: Option<bool>,
pub include_apply_patch_tool: Option<bool>,
pub include_view_image_tool: Option<bool>,
pub disable_response_storage: Option<bool>,
pub show_raw_agent_reasoning: Option<bool>,
pub tools_web_search_request: Option<bool>,
}
impl Config {
@@ -519,8 +637,10 @@ impl Config {
base_instructions,
include_plan_tool,
include_apply_patch_tool,
include_view_image_tool,
disable_response_storage,
show_raw_agent_reasoning,
tools_web_search_request: override_tools_web_search_request,
} = overrides;
let config_profile = match config_profile_key.as_ref().or(cfg.profile.as_ref()) {
@@ -582,6 +702,14 @@ impl Config {
let history = cfg.history.unwrap_or_default();
let tools_web_search_request = override_tools_web_search_request
.or(cfg.tools.as_ref().and_then(|t| t.web_search))
.unwrap_or(false);
let include_view_image_tool = include_view_image_tool
.or(cfg.tools.as_ref().and_then(|t| t.view_image))
.unwrap_or(true);
let model = model
.or(config_profile.model)
.or(cfg.model)
@@ -595,7 +723,7 @@ impl Config {
needs_special_apply_patch_instructions: false,
supports_reasoning_summaries,
uses_local_shell_tool: false,
uses_apply_patch_tool: false,
apply_patch_tool_type: None,
}
});
@@ -622,8 +750,9 @@ impl Config {
Self::get_base_instructions(experimental_instructions_path, &resolved_cwd)?;
let base_instructions = base_instructions.or(file_base_instructions);
let include_apply_patch_tool_val =
include_apply_patch_tool.unwrap_or(model_family.uses_apply_patch_tool);
let responses_originator_header: String = cfg
.responses_originator_header_internal_override
.unwrap_or(DEFAULT_RESPONSES_ORIGINATOR_HEADER.to_owned());
let config = Self {
model,
@@ -669,7 +798,7 @@ impl Config {
.model_reasoning_summary
.or(cfg.model_reasoning_summary)
.unwrap_or_default(),
model_verbosity: config_profile.model_verbosity.or(cfg.model_verbosity),
chatgpt_base_url: config_profile
.chatgpt_base_url
.or(cfg.chatgpt_base_url)
@@ -677,9 +806,18 @@ impl Config {
experimental_resume,
include_plan_tool: include_plan_tool.unwrap_or(false),
include_apply_patch_tool: include_apply_patch_tool_val,
internal_originator: cfg.internal_originator,
include_apply_patch_tool: include_apply_patch_tool.unwrap_or(false),
tools_web_search_request,
responses_originator_header,
preferred_auth_method: cfg.preferred_auth_method.unwrap_or(AuthMode::ChatGPT),
use_experimental_streamable_shell_tool: cfg
.experimental_use_exec_command_tool
.unwrap_or(false),
include_view_image_tool,
disable_paste_burst: cfg.disable_paste_burst.unwrap_or(false),
skip_reasoning_in_chat_completions: cfg
.skip_reasoning_in_chat_completions
.unwrap_or(false),
};
Ok(config)
}
@@ -1038,13 +1176,19 @@ disable_response_storage = true
show_raw_agent_reasoning: false,
model_reasoning_effort: ReasoningEffort::High,
model_reasoning_summary: ReasoningSummary::Detailed,
model_verbosity: None,
chatgpt_base_url: "https://chatgpt.com/backend-api/".to_string(),
experimental_resume: None,
base_instructions: None,
include_plan_tool: false,
include_apply_patch_tool: false,
internal_originator: None,
tools_web_search_request: false,
responses_originator_header: "codex_cli_rs".to_string(),
preferred_auth_method: AuthMode::ChatGPT,
use_experimental_streamable_shell_tool: false,
include_view_image_tool: true,
disable_paste_burst: false,
skip_reasoning_in_chat_completions: false,
},
o3_profile_config
);
@@ -1091,13 +1235,19 @@ disable_response_storage = true
show_raw_agent_reasoning: false,
model_reasoning_effort: ReasoningEffort::default(),
model_reasoning_summary: ReasoningSummary::default(),
model_verbosity: None,
chatgpt_base_url: "https://chatgpt.com/backend-api/".to_string(),
experimental_resume: None,
base_instructions: None,
include_plan_tool: false,
include_apply_patch_tool: false,
internal_originator: None,
tools_web_search_request: false,
responses_originator_header: "codex_cli_rs".to_string(),
preferred_auth_method: AuthMode::ChatGPT,
use_experimental_streamable_shell_tool: false,
include_view_image_tool: true,
disable_paste_burst: false,
skip_reasoning_in_chat_completions: false,
};
assert_eq!(expected_gpt3_profile_config, gpt3_profile_config);
@@ -1159,17 +1309,93 @@ disable_response_storage = true
show_raw_agent_reasoning: false,
model_reasoning_effort: ReasoningEffort::default(),
model_reasoning_summary: ReasoningSummary::default(),
model_verbosity: None,
chatgpt_base_url: "https://chatgpt.com/backend-api/".to_string(),
experimental_resume: None,
base_instructions: None,
include_plan_tool: false,
include_apply_patch_tool: false,
internal_originator: None,
tools_web_search_request: false,
responses_originator_header: "codex_cli_rs".to_string(),
preferred_auth_method: AuthMode::ChatGPT,
use_experimental_streamable_shell_tool: false,
include_view_image_tool: true,
disable_paste_burst: false,
skip_reasoning_in_chat_completions: false,
};
assert_eq!(expected_zdr_profile_config, zdr_profile_config);
Ok(())
}
#[test]
fn test_set_project_trusted_writes_explicit_tables() -> anyhow::Result<()> {
let codex_home = TempDir::new().unwrap();
let project_dir = TempDir::new().unwrap();
// Call the function under test
set_project_trusted(codex_home.path(), project_dir.path())?;
// Read back the generated config.toml and assert exact contents
let config_path = codex_home.path().join(CONFIG_TOML_FILE);
let contents = std::fs::read_to_string(&config_path)?;
let raw_path = project_dir.path().to_string_lossy();
let path_str = if raw_path.contains('\\') {
format!("'{raw_path}'")
} else {
format!("\"{raw_path}\"")
};
let expected = format!(
r#"[projects.{path_str}]
trust_level = "trusted"
"#
);
assert_eq!(contents, expected);
Ok(())
}
#[test]
fn test_set_project_trusted_converts_inline_to_explicit() -> anyhow::Result<()> {
let codex_home = TempDir::new().unwrap();
let project_dir = TempDir::new().unwrap();
// Seed config.toml with an inline project entry under [projects]
let config_path = codex_home.path().join(CONFIG_TOML_FILE);
let raw_path = project_dir.path().to_string_lossy();
let path_str = if raw_path.contains('\\') {
format!("'{raw_path}'")
} else {
format!("\"{raw_path}\"")
};
// Use a quoted key so backslashes don't require escaping on Windows
let initial = format!(
r#"[projects]
{path_str} = {{ trust_level = "untrusted" }}
"#
);
std::fs::create_dir_all(codex_home.path())?;
std::fs::write(&config_path, initial)?;
// Run the function; it should convert to explicit tables and set trusted
set_project_trusted(codex_home.path(), project_dir.path())?;
let contents = std::fs::read_to_string(&config_path)?;
// Assert exact output after conversion to explicit table
let expected = format!(
r#"[projects]
[projects.{path_str}]
trust_level = "trusted"
"#
);
assert_eq!(contents, expected);
Ok(())
}
// No test enforcing the presence of a standalone [projects] header.
}

View File

@@ -1,6 +1,7 @@
use serde::Deserialize;
use std::path::PathBuf;
use crate::config_types::Verbosity;
use crate::protocol::AskForApproval;
use codex_protocol::config_types::ReasoningEffort;
use codex_protocol::config_types::ReasoningSummary;
@@ -17,6 +18,7 @@ pub struct ConfigProfile {
pub disable_response_storage: Option<bool>,
pub model_reasoning_effort: Option<ReasoningEffort>,
pub model_reasoning_summary: Option<ReasoningSummary>,
pub model_verbosity: Option<Verbosity>,
pub chatgpt_base_url: Option<String>,
pub experimental_instructions_file: Option<PathBuf>,
}

View File

@@ -8,6 +8,8 @@ use std::path::PathBuf;
use wildmatch::WildMatchPattern;
use serde::Deserialize;
use serde::Serialize;
use strum_macros::Display;
#[derive(Deserialize, Debug, Clone, PartialEq)]
pub struct McpServerConfig {
@@ -183,3 +185,43 @@ impl From<ShellEnvironmentPolicyToml> for ShellEnvironmentPolicy {
}
}
}
/// See https://platform.openai.com/docs/guides/reasoning?api-mode=responses#get-started-with-reasoning
#[derive(Debug, Serialize, Deserialize, Default, Clone, Copy, PartialEq, Eq, Display)]
#[serde(rename_all = "lowercase")]
#[strum(serialize_all = "lowercase")]
pub enum ReasoningEffort {
Low,
#[default]
Medium,
High,
/// Option to disable reasoning.
None,
}
/// A summary of the reasoning performed by the model. This can be useful for
/// debugging and understanding the model's reasoning process.
/// See https://platform.openai.com/docs/guides/reasoning?api-mode=responses#reasoning-summaries
#[derive(Debug, Serialize, Deserialize, Default, Clone, Copy, PartialEq, Eq, Display)]
#[serde(rename_all = "lowercase")]
#[strum(serialize_all = "lowercase")]
pub enum ReasoningSummary {
#[default]
Auto,
Concise,
Detailed,
/// Option to disable reasoning summaries.
None,
}
/// Controls output length/detail on GPT-5 models via the Responses API.
/// Serialized with lowercase values to match the OpenAI API.
#[derive(Debug, Serialize, Deserialize, Default, Clone, Copy, PartialEq, Eq, Display)]
#[serde(rename_all = "lowercase")]
#[strum(serialize_all = "lowercase")]
pub enum Verbosity {
Low,
#[default]
Medium,
High,
}
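// A small sketch (assuming `serde_json` as a dev-dependency): with both serde
// and strum told to lowercase variant names, the JSON wire form and the
// Display form agree.
#[cfg(test)]
mod lowercase_forms_sketch {
    use super::Verbosity;

    #[test]
    fn wire_and_display_forms_agree() {
        // serde writes the lowercase name expected by the OpenAI API...
        assert_eq!(serde_json::to_string(&Verbosity::High).unwrap(), "\"high\"");
        // ...and strum's Display yields the same token for logs/UI.
        assert_eq!(Verbosity::High.to_string(), "high");
        // Deserialization accepts the lowercase form; Medium is the default.
        let v: Verbosity = serde_json::from_str("\"medium\"").unwrap();
        assert_eq!(v, Verbosity::Medium);
        assert_eq!(Verbosity::default(), Verbosity::Medium);
    }
}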

View File

@@ -1,4 +1,4 @@
use crate::models::ResponseItem;
use codex_protocol::models::ResponseItem;
/// Transcript of conversation history
#[derive(Debug, Clone, Default)]
@@ -28,49 +28,7 @@ impl ConversationHistory {
continue;
}
// Merge adjacent assistant messages into a single history entry.
// This prevents duplicates when a partial assistant message was
// streamed into history earlier in the turn and the final full
// message is recorded at turn end.
match (&*item, self.items.last_mut()) {
(
ResponseItem::Message {
role: new_role,
content: new_content,
..
},
Some(ResponseItem::Message {
role: last_role,
content: last_content,
..
}),
) if new_role == "assistant" && last_role == "assistant" => {
append_text_content(last_content, new_content);
}
_ => {
self.items.push(item.clone());
}
}
}
}
/// Append a text `delta` to the latest assistant message, creating a new
/// assistant entry if none exists yet (e.g. first delta for this turn).
pub(crate) fn append_assistant_text(&mut self, delta: &str) {
match self.items.last_mut() {
Some(ResponseItem::Message { role, content, .. }) if role == "assistant" => {
append_text_delta(content, delta);
}
_ => {
// Start a new assistant message with the delta.
self.items.push(ResponseItem::Message {
id: None,
role: "assistant".to_string(),
content: vec![crate::models::ContentItem::OutputText {
text: delta.to_string(),
}],
});
}
self.items.push(item.clone());
}
}
@@ -110,44 +68,18 @@ fn is_api_message(message: &ResponseItem) -> bool {
ResponseItem::Message { role, .. } => role.as_str() != "system",
ResponseItem::FunctionCallOutput { .. }
| ResponseItem::FunctionCall { .. }
| ResponseItem::CustomToolCall { .. }
| ResponseItem::CustomToolCallOutput { .. }
| ResponseItem::LocalShellCall { .. }
| ResponseItem::Reasoning { .. } => true,
ResponseItem::Other => false,
}
}
/// Helper to append the textual content from `src` into `dst` in place.
fn append_text_content(
dst: &mut Vec<crate::models::ContentItem>,
src: &Vec<crate::models::ContentItem>,
) {
for c in src {
if let crate::models::ContentItem::OutputText { text } = c {
append_text_delta(dst, text);
}
}
}
/// Append a single text delta to the last OutputText item in `content`, or
/// push a new OutputText item if none exists.
fn append_text_delta(content: &mut Vec<crate::models::ContentItem>, delta: &str) {
if let Some(crate::models::ContentItem::OutputText { text }) = content
.iter_mut()
.rev()
.find(|c| matches!(c, crate::models::ContentItem::OutputText { .. }))
{
text.push_str(delta);
} else {
content.push(crate::models::ContentItem::OutputText {
text: delta.to_string(),
});
ResponseItem::WebSearchCall { .. } | ResponseItem::Other => false,
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::models::ContentItem;
use codex_protocol::models::ContentItem;
fn assistant_msg(text: &str) -> ResponseItem {
ResponseItem::Message {
@@ -169,49 +101,6 @@ mod tests {
}
}
#[test]
fn merges_adjacent_assistant_messages() {
let mut h = ConversationHistory::default();
let a1 = assistant_msg("Hello");
let a2 = assistant_msg(", world!");
h.record_items([&a1, &a2]);
let items = h.contents();
assert_eq!(
items,
vec![ResponseItem::Message {
id: None,
role: "assistant".to_string(),
content: vec![ContentItem::OutputText {
text: "Hello, world!".to_string()
}]
}]
);
}
#[test]
fn append_assistant_text_creates_and_appends() {
let mut h = ConversationHistory::default();
h.append_assistant_text("Hello");
h.append_assistant_text(", world");
// Now record a final full assistant message and verify it merges.
let final_msg = assistant_msg("!");
h.record_items([&final_msg]);
let items = h.contents();
assert_eq!(
items,
vec![ResponseItem::Message {
id: None,
role: "assistant".to_string(),
content: vec![ContentItem::OutputText {
text: "Hello, world!".to_string()
}]
}]
);
}
#[test]
fn filters_non_api_messages() {
let mut h = ConversationHistory::default();

View File

@@ -1,6 +1,7 @@
use std::collections::HashMap;
use std::sync::Arc;
use codex_login::AuthManager;
use codex_login::CodexAuth;
use tokio::sync::RwLock;
use uuid::Uuid;
@@ -15,6 +16,7 @@ use crate::error::Result as CodexResult;
use crate::protocol::Event;
use crate::protocol::EventMsg;
use crate::protocol::SessionConfiguredEvent;
use codex_protocol::models::ResponseItem;
/// Represents a newly created Codex conversation, including the first event
/// (which is [`EventMsg::SessionConfigured`]).
@@ -28,34 +30,48 @@ pub struct NewConversation {
/// maintaining them in memory.
pub struct ConversationManager {
conversations: Arc<RwLock<HashMap<Uuid, Arc<CodexConversation>>>>,
}
impl Default for ConversationManager {
fn default() -> Self {
Self {
conversations: Arc::new(RwLock::new(HashMap::new())),
}
}
auth_manager: Arc<AuthManager>,
}
impl ConversationManager {
pub async fn new_conversation(&self, config: Config) -> CodexResult<NewConversation> {
let auth = CodexAuth::from_codex_home(&config.codex_home, config.preferred_auth_method)?;
self.new_conversation_with_auth(config, auth).await
pub fn new(auth_manager: Arc<AuthManager>) -> Self {
Self {
conversations: Arc::new(RwLock::new(HashMap::new())),
auth_manager,
}
}
/// Used for integration tests: should not be used by ordinary business
/// logic.
pub async fn new_conversation_with_auth(
/// Construct with a dummy AuthManager containing the provided CodexAuth.
/// Used for integration tests: should not be used by ordinary business logic.
pub fn with_auth(auth: CodexAuth) -> Self {
Self::new(codex_login::AuthManager::from_auth_for_testing(auth))
}
pub async fn new_conversation(&self, config: Config) -> CodexResult<NewConversation> {
self.spawn_conversation(config, self.auth_manager.clone())
.await
}
async fn spawn_conversation(
&self,
config: Config,
auth: Option<CodexAuth>,
auth_manager: Arc<AuthManager>,
) -> CodexResult<NewConversation> {
let CodexSpawnOk {
codex,
session_id: conversation_id,
} = Codex::spawn(config, auth).await?;
} = {
let initial_history = None;
Codex::spawn(config, auth_manager, initial_history).await?
};
self.finalize_spawn(codex, conversation_id).await
}
async fn finalize_spawn(
&self,
codex: Codex,
conversation_id: Uuid,
) -> CodexResult<NewConversation> {
// The first event must be `SessionConfigured`. Validate and forward it
// to the caller so that they can display it in the conversation
// history.
@@ -93,4 +109,124 @@ impl ConversationManager {
.cloned()
.ok_or_else(|| CodexErr::ConversationNotFound(conversation_id))
}
pub async fn remove_conversation(&self, conversation_id: Uuid) {
self.conversations.write().await.remove(&conversation_id);
}
/// Fork an existing conversation by dropping the last `num_messages_to_drop`
/// user messages from its transcript (plus everything after them) and starting
/// a new conversation with identical configuration (unless overridden by the
/// caller's `config`). The new conversation will have a fresh id.
pub async fn fork_conversation(
&self,
conversation_history: Vec<ResponseItem>,
num_messages_to_drop: usize,
config: Config,
) -> CodexResult<NewConversation> {
// Compute the prefix up to the cut point.
let truncated_history =
truncate_after_dropping_last_messages(conversation_history, num_messages_to_drop);
// Spawn a new conversation with the computed initial history.
let auth_manager = self.auth_manager.clone();
let CodexSpawnOk {
codex,
session_id: conversation_id,
} = Codex::spawn(config, auth_manager, Some(truncated_history)).await?;
self.finalize_spawn(codex, conversation_id).await
}
}
/// Return a prefix of `items` obtained by dropping the last `n` user messages
/// and all items that follow them.
fn truncate_after_dropping_last_messages(items: Vec<ResponseItem>, n: usize) -> Vec<ResponseItem> {
if n == 0 || items.is_empty() {
return items;
}
// Walk backwards counting only `user` Message items, find cut index.
let mut count = 0usize;
let mut cut_index = 0usize;
for (idx, item) in items.iter().enumerate().rev() {
if let ResponseItem::Message { role, .. } = item
&& role == "user"
{
count += 1;
if count == n {
// Cut everything from this user message to the end.
cut_index = idx;
break;
}
}
}
if count < n {
// If fewer than n messages exist, drop everything.
Vec::new()
} else {
items.into_iter().take(cut_index).collect()
}
}
#[cfg(test)]
mod tests {
use super::*;
use codex_protocol::models::ContentItem;
use codex_protocol::models::ReasoningItemReasoningSummary;
use codex_protocol::models::ResponseItem;
fn user_msg(text: &str) -> ResponseItem {
ResponseItem::Message {
id: None,
role: "user".to_string(),
content: vec![ContentItem::OutputText {
text: text.to_string(),
}],
}
}
fn assistant_msg(text: &str) -> ResponseItem {
ResponseItem::Message {
id: None,
role: "assistant".to_string(),
content: vec![ContentItem::OutputText {
text: text.to_string(),
}],
}
}
#[test]
fn drops_from_last_user_only() {
let items = vec![
user_msg("u1"),
assistant_msg("a1"),
assistant_msg("a2"),
user_msg("u2"),
assistant_msg("a3"),
ResponseItem::Reasoning {
id: "r1".to_string(),
summary: vec![ReasoningItemReasoningSummary::SummaryText {
text: "s".to_string(),
}],
content: None,
encrypted_content: None,
},
ResponseItem::FunctionCall {
id: None,
name: "tool".to_string(),
arguments: "{}".to_string(),
call_id: "c1".to_string(),
},
assistant_msg("a4"),
];
let truncated = truncate_after_dropping_last_messages(items.clone(), 1);
assert_eq!(
truncated,
vec![items[0].clone(), items[1].clone(), items[2].clone()]
);
let truncated2 = truncate_after_dropping_last_messages(items, 2);
assert!(truncated2.is_empty());
}
}

View File

@@ -0,0 +1,127 @@
use codex_protocol::custom_prompts::CustomPrompt;
use std::collections::HashSet;
use std::path::Path;
use std::path::PathBuf;
use tokio::fs;
/// Return the default prompts directory: `$CODEX_HOME/prompts`.
/// If `CODEX_HOME` cannot be resolved, returns `None`.
pub fn default_prompts_dir() -> Option<PathBuf> {
crate::config::find_codex_home()
.ok()
.map(|home| home.join("prompts"))
}
/// Discover prompt files in the given directory, returning entries sorted by name.
/// Non-files are ignored. If the directory does not exist or cannot be read, returns empty.
pub async fn discover_prompts_in(dir: &Path) -> Vec<CustomPrompt> {
discover_prompts_in_excluding(dir, &HashSet::new()).await
}
/// Discover prompt files in the given directory, excluding any with names in `exclude`.
/// Returns entries sorted by name. Non-files are ignored. Missing/unreadable dir yields empty.
pub async fn discover_prompts_in_excluding(
dir: &Path,
exclude: &HashSet<String>,
) -> Vec<CustomPrompt> {
let mut out: Vec<CustomPrompt> = Vec::new();
let mut entries = match fs::read_dir(dir).await {
Ok(entries) => entries,
Err(_) => return out,
};
while let Ok(Some(entry)) = entries.next_entry().await {
let path = entry.path();
let is_file = entry
.file_type()
.await
.map(|ft| ft.is_file())
.unwrap_or(false);
if !is_file {
continue;
}
// Only include Markdown files with a .md extension.
let is_md = path
.extension()
.and_then(|s| s.to_str())
.map(|ext| ext.eq_ignore_ascii_case("md"))
.unwrap_or(false);
if !is_md {
continue;
}
let Some(name) = path
.file_stem()
.and_then(|s| s.to_str())
.map(|s| s.to_string())
else {
continue;
};
if exclude.contains(&name) {
continue;
}
let content = match fs::read_to_string(&path).await {
Ok(s) => s,
Err(_) => continue,
};
out.push(CustomPrompt {
name,
path,
content,
});
}
out.sort_by(|a, b| a.name.cmp(&b.name));
out
}
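// A possible call-site sketch (the built-in names here are hypothetical):
// hide any user prompt whose name collides with a built-in slash command.
async fn load_user_prompts_sketch() -> Vec<CustomPrompt> {
    let Some(dir) = default_prompts_dir() else {
        return Vec::new();
    };
    // Hypothetical built-ins to exclude; the real set lives elsewhere.
    let exclude: HashSet<String> = ["init", "compact"]
        .into_iter()
        .map(str::to_string)
        .collect();
    discover_prompts_in_excluding(&dir, &exclude).await
}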
#[cfg(test)]
mod tests {
use super::*;
use std::fs;
use tempfile::tempdir;
#[tokio::test]
async fn empty_when_dir_missing() {
let tmp = tempdir().expect("create TempDir");
let missing = tmp.path().join("nope");
let found = discover_prompts_in(&missing).await;
assert!(found.is_empty());
}
#[tokio::test]
async fn discovers_and_sorts_files() {
let tmp = tempdir().expect("create TempDir");
let dir = tmp.path();
fs::write(dir.join("b.md"), b"b").unwrap();
fs::write(dir.join("a.md"), b"a").unwrap();
fs::create_dir(dir.join("subdir")).unwrap();
let found = discover_prompts_in(dir).await;
let names: Vec<String> = found.into_iter().map(|e| e.name).collect();
assert_eq!(names, vec!["a", "b"]);
}
#[tokio::test]
async fn excludes_builtins() {
let tmp = tempdir().expect("create TempDir");
let dir = tmp.path();
fs::write(dir.join("init.md"), b"ignored").unwrap();
fs::write(dir.join("foo.md"), b"ok").unwrap();
let mut exclude = HashSet::new();
exclude.insert("init".to_string());
let found = discover_prompts_in_excluding(dir, &exclude).await;
let names: Vec<String> = found.into_iter().map(|e| e.name).collect();
assert_eq!(names, vec!["foo"]);
}
#[tokio::test]
async fn skips_non_utf8_files() {
let tmp = tempdir().expect("create TempDir");
let dir = tmp.path();
// Valid UTF-8 file
fs::write(dir.join("good.md"), b"hello").unwrap();
// Invalid UTF-8 content in .md file (e.g., lone 0xFF byte)
fs::write(dir.join("bad.md"), vec![0xFF, 0xFE, b'\n']).unwrap();
let found = discover_prompts_in(dir).await;
let names: Vec<String> = found.into_iter().map(|e| e.name).collect();
assert_eq!(names, vec!["good"]);
}
}

View File

@@ -2,16 +2,16 @@ use serde::Deserialize;
use serde::Serialize;
use strum_macros::Display as DeriveDisplay;
use crate::models::ContentItem;
use crate::models::ResponseItem;
use crate::protocol::AskForApproval;
use crate::protocol::SandboxPolicy;
use crate::shell::Shell;
use codex_protocol::config_types::SandboxMode;
use std::fmt::Display;
use codex_protocol::models::ContentItem;
use codex_protocol::models::ResponseItem;
use std::path::PathBuf;
/// Wraps the environment context message in a tag so the model can parse it more easily.
pub(crate) const ENVIRONMENT_CONTEXT_START: &str = "<environment_context>\n";
pub(crate) const ENVIRONMENT_CONTEXT_START: &str = "<environment_context>";
pub(crate) const ENVIRONMENT_CONTEXT_END: &str = "</environment_context>";
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, DeriveDisplay)]
@@ -24,52 +24,85 @@ pub enum NetworkAccess {
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
#[serde(rename = "environment_context", rename_all = "snake_case")]
pub(crate) struct EnvironmentContext {
pub cwd: PathBuf,
pub approval_policy: AskForApproval,
pub sandbox_mode: SandboxMode,
pub network_access: NetworkAccess,
pub cwd: Option<PathBuf>,
pub approval_policy: Option<AskForApproval>,
pub sandbox_mode: Option<SandboxMode>,
pub network_access: Option<NetworkAccess>,
pub shell: Option<Shell>,
}
impl EnvironmentContext {
pub fn new(
cwd: PathBuf,
approval_policy: AskForApproval,
sandbox_policy: SandboxPolicy,
cwd: Option<PathBuf>,
approval_policy: Option<AskForApproval>,
sandbox_policy: Option<SandboxPolicy>,
shell: Option<Shell>,
) -> Self {
Self {
cwd,
approval_policy,
sandbox_mode: match sandbox_policy {
SandboxPolicy::DangerFullAccess => SandboxMode::DangerFullAccess,
SandboxPolicy::ReadOnly => SandboxMode::ReadOnly,
SandboxPolicy::WorkspaceWrite { .. } => SandboxMode::WorkspaceWrite,
Some(SandboxPolicy::DangerFullAccess) => Some(SandboxMode::DangerFullAccess),
Some(SandboxPolicy::ReadOnly) => Some(SandboxMode::ReadOnly),
Some(SandboxPolicy::WorkspaceWrite { .. }) => Some(SandboxMode::WorkspaceWrite),
None => None,
},
network_access: match sandbox_policy {
SandboxPolicy::DangerFullAccess => NetworkAccess::Enabled,
SandboxPolicy::ReadOnly => NetworkAccess::Restricted,
SandboxPolicy::WorkspaceWrite { network_access, .. } => {
Some(SandboxPolicy::DangerFullAccess) => Some(NetworkAccess::Enabled),
Some(SandboxPolicy::ReadOnly) => Some(NetworkAccess::Restricted),
Some(SandboxPolicy::WorkspaceWrite { network_access, .. }) => {
if network_access {
NetworkAccess::Enabled
Some(NetworkAccess::Enabled)
} else {
NetworkAccess::Restricted
Some(NetworkAccess::Restricted)
}
}
None => None,
},
shell,
}
}
}
impl Display for EnvironmentContext {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
writeln!(
f,
"Current working directory: {}",
self.cwd.to_string_lossy()
)?;
writeln!(f, "Approval policy: {}", self.approval_policy)?;
writeln!(f, "Sandbox mode: {}", self.sandbox_mode)?;
writeln!(f, "Network access: {}", self.network_access)?;
Ok(())
impl EnvironmentContext {
/// Serializes the environment context to XML. Libraries like `quick-xml`
/// require custom macros to handle enums with newtypes, so we just do it
/// manually to keep things simple. Output looks like:
///
/// ```xml
/// <environment_context>
/// <cwd>...</cwd>
/// <approval_policy>...</approval_policy>
/// <sandbox_mode>...</sandbox_mode>
/// <network_access>...</network_access>
/// <shell>...</shell>
/// </environment_context>
/// ```
pub fn serialize_to_xml(self) -> String {
let mut lines = vec![ENVIRONMENT_CONTEXT_START.to_string()];
if let Some(cwd) = self.cwd {
lines.push(format!(" <cwd>{}</cwd>", cwd.to_string_lossy()));
}
if let Some(approval_policy) = self.approval_policy {
lines.push(format!(
" <approval_policy>{approval_policy}</approval_policy>"
));
}
if let Some(sandbox_mode) = self.sandbox_mode {
lines.push(format!(" <sandbox_mode>{sandbox_mode}</sandbox_mode>"));
}
if let Some(network_access) = self.network_access {
lines.push(format!(
" <network_access>{network_access}</network_access>"
));
}
if let Some(shell) = self.shell
&& let Some(shell_name) = shell.name()
{
lines.push(format!(" <shell>{shell_name}</shell>"));
}
lines.push(ENVIRONMENT_CONTEXT_END.to_string());
lines.join("\n")
}
}
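// A crate-internal sketch of the serializer's behavior: `None` fields are
// omitted from the XML entirely rather than rendered as empty tags, so the
// model only sees what is actually configured.
#[cfg(test)]
mod serialize_to_xml_sketch {
    use super::*;
    use std::path::PathBuf;

    #[test]
    fn unset_fields_are_omitted() {
        let ec = EnvironmentContext::new(
            Some(PathBuf::from("/repo")),
            None, // approval_policy unset
            None, // sandbox policy unset => no sandbox_mode/network_access
            None, // shell unset
        );
        let xml = ec.serialize_to_xml();
        assert!(xml.starts_with(ENVIRONMENT_CONTEXT_START));
        assert!(xml.contains("<cwd>/repo</cwd>"));
        assert!(!xml.contains("<approval_policy>"));
        assert!(!xml.contains("<network_access>"));
        assert!(xml.ends_with(ENVIRONMENT_CONTEXT_END));
    }
}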
@@ -79,7 +112,7 @@ impl From<EnvironmentContext> for ResponseItem {
id: None,
role: "user".to_string(),
content: vec![ContentItem::InputText {
text: format!("{ENVIRONMENT_CONTEXT_START}{ec}{ENVIRONMENT_CONTEXT_END}"),
text: ec.serialize_to_xml(),
}],
}
}

View File

@@ -128,27 +128,70 @@ pub enum CodexErr {
#[derive(Debug)]
pub struct UsageLimitReachedError {
pub plan_type: Option<String>,
pub resets_in_seconds: Option<u64>,
}
impl std::fmt::Display for UsageLimitReachedError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
// Base message differs slightly for legacy ChatGPT Plus plan users.
if let Some(plan_type) = &self.plan_type
&& plan_type == "plus"
{
write!(
f,
"You've hit your usage limit. Upgrade to Pro (https://openai.com/chatgpt/pricing), or wait for limits to reset (every 5h and every week.)."
"You've hit your usage limit. Upgrade to Pro (https://openai.com/chatgpt/pricing) or try again"
)?;
if let Some(secs) = self.resets_in_seconds {
let reset_duration = format_reset_duration(secs);
write!(f, " in {reset_duration}.")?;
} else {
write!(f, " later.")?;
}
} else {
write!(
f,
"You've hit your usage limit. Limits reset every 5h and every week."
)?;
write!(f, "You've hit your usage limit.")?;
if let Some(secs) = self.resets_in_seconds {
let reset_duration = format_reset_duration(secs);
write!(f, " Try again in {reset_duration}.")?;
} else {
write!(f, " Try again later.")?;
}
}
Ok(())
}
}
fn format_reset_duration(total_secs: u64) -> String {
let days = total_secs / 86_400;
let hours = (total_secs % 86_400) / 3_600;
let minutes = (total_secs % 3_600) / 60;
let mut parts: Vec<String> = Vec::new();
if days > 0 {
let unit = if days == 1 { "day" } else { "days" };
parts.push(format!("{days} {unit}"));
}
if hours > 0 {
let unit = if hours == 1 { "hour" } else { "hours" };
parts.push(format!("{hours} {unit}"));
}
if minutes > 0 {
let unit = if minutes == 1 { "minute" } else { "minutes" };
parts.push(format!("{minutes} {unit}"));
}
if parts.is_empty() {
return "less than a minute".to_string();
}
match parts.len() {
1 => parts[0].clone(),
2 => format!("{} {}", parts[0], parts[1]),
_ => format!("{} {} {}", parts[0], parts[1], parts[2]),
}
}
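// A standalone worked check of the day/hour/minute buckets used by
// format_reset_duration above, for a sample reset window.
fn main() {
    let total_secs: u64 = 2 * 86_400 + 3 * 3_600 + 5 * 60; // 183_900 s
    let days = total_secs / 86_400;
    let hours = (total_secs % 86_400) / 3_600;
    let minutes = (total_secs % 3_600) / 60;
    assert_eq!((days, hours, minutes), (2, 3, 5));
    // format_reset_duration renders this as "2 days 3 hours 5 minutes".
}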
#[derive(Debug)]
pub struct EnvVarError {
/// Name of the environment variable that is missing.
@@ -181,6 +224,8 @@ impl CodexErr {
pub fn get_error_message_ui(e: &CodexErr) -> String {
match e {
CodexErr::Sandbox(SandboxErr::Denied(_, _, stderr)) => stderr.to_string(),
// Timeouts are not sandbox errors from a UX perspective; present them plainly
CodexErr::Sandbox(SandboxErr::Timeout) => "error: command timed out".to_string(),
_ => e.to_string(),
}
}
@@ -193,19 +238,23 @@ mod tests {
fn usage_limit_reached_error_formats_plus_plan() {
let err = UsageLimitReachedError {
plan_type: Some("plus".to_string()),
resets_in_seconds: None,
};
assert_eq!(
err.to_string(),
"You've hit your usage limit. Upgrade to Pro (https://openai.com/chatgpt/pricing), or wait for limits to reset (every 5h and every week.)."
"You've hit your usage limit. Upgrade to Pro (https://openai.com/chatgpt/pricing) or try again later."
);
}
#[test]
fn usage_limit_reached_error_formats_default_when_none() {
let err = UsageLimitReachedError { plan_type: None };
let err = UsageLimitReachedError {
plan_type: None,
resets_in_seconds: None,
};
assert_eq!(
err.to_string(),
"You've hit your usage limit. Limits reset every 5h and every week."
"You've hit your usage limit. Try again later."
);
}
@@ -213,10 +262,59 @@ mod tests {
fn usage_limit_reached_error_formats_default_for_other_plans() {
let err = UsageLimitReachedError {
plan_type: Some("pro".to_string()),
resets_in_seconds: None,
};
assert_eq!(
err.to_string(),
"You've hit your usage limit. Limits reset every 5h and every week."
"You've hit your usage limit. Try again later."
);
}
#[test]
fn usage_limit_reached_includes_minutes_when_available() {
let err = UsageLimitReachedError {
plan_type: None,
resets_in_seconds: Some(5 * 60),
};
assert_eq!(
err.to_string(),
"You've hit your usage limit. Try again in 5 minutes."
);
}
#[test]
fn usage_limit_reached_includes_hours_and_minutes() {
let err = UsageLimitReachedError {
plan_type: Some("plus".to_string()),
resets_in_seconds: Some(3 * 3600 + 32 * 60),
};
assert_eq!(
err.to_string(),
"You've hit your usage limit. Upgrade to Pro (https://openai.com/chatgpt/pricing) or try again in 3 hours 32 minutes."
);
}
#[test]
fn usage_limit_reached_includes_days_hours_minutes() {
let err = UsageLimitReachedError {
plan_type: None,
resets_in_seconds: Some(2 * 86_400 + 3 * 3600 + 5 * 60),
};
assert_eq!(
err.to_string(),
"You've hit your usage limit. Try again in 2 days 3 hours 5 minutes."
);
}
#[test]
fn usage_limit_reached_less_than_minute() {
let err = UsageLimitReachedError {
plan_type: None,
resets_in_seconds: Some(30),
};
assert_eq!(
err.to_string(),
"You've hit your usage limit. Try again in less than a minute."
);
}
}

View File

@@ -28,18 +28,21 @@ use crate::spawn::StdioPolicy;
use crate::spawn::spawn_child_async;
use serde_bytes::ByteBuf;
// Maximum we send for each stream, which is either:
// - 10KiB OR
// - 256 lines
const MAX_STREAM_OUTPUT: usize = 10 * 1024;
const MAX_STREAM_OUTPUT_LINES: usize = 256;
const DEFAULT_TIMEOUT_MS: u64 = 10_000;
// Hardcode these since it does not seem worth including the libc crate just
// for these.
const SIGKILL_CODE: i32 = 9;
const TIMEOUT_CODE: i32 = 64;
const EXIT_CODE_SIGNAL_BASE: i32 = 128; // conventional shell: 128 + signal
// I/O buffer sizing
const READ_CHUNK_SIZE: usize = 8192; // bytes per read
const AGGREGATE_BUFFER_INITIAL_CAPACITY: usize = 8 * 1024; // 8 KiB
/// Limit the number of ExecCommandOutputDelta events emitted per exec call.
/// Aggregation still collects full output; only the live event stream is capped.
pub(crate) const MAX_EXEC_OUTPUT_DELTAS_PER_CALL: usize = 10_000;
#[derive(Debug, Clone)]
pub struct ExecParams {
@@ -153,6 +156,7 @@ pub async fn process_exec_tool_call(
exit_code,
stdout,
stderr,
aggregated_output: raw_output.aggregated_output.from_utf8_lossy(),
duration,
})
}
@@ -189,10 +193,11 @@ pub struct StreamOutput<T> {
pub truncated_after_lines: Option<u32>,
}
#[derive(Debug)]
pub struct RawExecToolCallOutput {
struct RawExecToolCallOutput {
pub exit_status: ExitStatus,
pub stdout: StreamOutput<Vec<u8>>,
pub stderr: StreamOutput<Vec<u8>>,
pub aggregated_output: StreamOutput<Vec<u8>>,
}
impl StreamOutput<String> {
@@ -213,11 +218,17 @@ impl StreamOutput<Vec<u8>> {
}
}
#[inline]
fn append_all(dst: &mut Vec<u8>, src: &[u8]) {
dst.extend_from_slice(src);
}
#[derive(Debug)]
pub struct ExecToolCallOutput {
pub exit_code: i32,
pub stdout: StreamOutput<String>,
pub stderr: StreamOutput<String>,
pub aggregated_output: StreamOutput<String>,
pub duration: Duration,
}
@@ -253,7 +264,7 @@ async fn exec(
/// Consumes the output of a child process, truncating it so it is suitable for
/// use as the output of a `shell` tool call. Also enforces specified timeout.
pub(crate) async fn consume_truncated_output(
async fn consume_truncated_output(
mut child: Child,
timeout: Duration,
stdout_stream: Option<StdoutStream>,
@@ -273,19 +284,19 @@ pub(crate) async fn consume_truncated_output(
))
})?;
let (agg_tx, agg_rx) = async_channel::unbounded::<Vec<u8>>();
let stdout_handle = tokio::spawn(read_capped(
BufReader::new(stdout_reader),
MAX_STREAM_OUTPUT,
MAX_STREAM_OUTPUT_LINES,
stdout_stream.clone(),
false,
Some(agg_tx.clone()),
));
let stderr_handle = tokio::spawn(read_capped(
BufReader::new(stderr_reader),
MAX_STREAM_OUTPUT,
MAX_STREAM_OUTPUT_LINES,
stdout_stream.clone(),
true,
Some(agg_tx.clone()),
));
let exit_status = tokio::select! {
@@ -297,38 +308,49 @@ pub(crate) async fn consume_truncated_output(
// timeout
child.start_kill()?;
// Debatable whether `child.wait().await` should be called here.
synthetic_exit_status(128 + TIMEOUT_CODE)
synthetic_exit_status(EXIT_CODE_SIGNAL_BASE + TIMEOUT_CODE)
}
}
}
_ = tokio::signal::ctrl_c() => {
child.start_kill()?;
synthetic_exit_status(128 + SIGKILL_CODE)
synthetic_exit_status(EXIT_CODE_SIGNAL_BASE + SIGKILL_CODE)
}
};
let stdout = stdout_handle.await??;
let stderr = stderr_handle.await??;
drop(agg_tx);
let mut combined_buf = Vec::with_capacity(AGGREGATE_BUFFER_INITIAL_CAPACITY);
while let Ok(chunk) = agg_rx.recv().await {
append_all(&mut combined_buf, &chunk);
}
let aggregated_output = StreamOutput {
text: combined_buf,
truncated_after_lines: None,
};
Ok(RawExecToolCallOutput {
exit_status,
stdout,
stderr,
aggregated_output,
})
}
async fn read_capped<R: AsyncRead + Unpin + Send + 'static>(
mut reader: R,
max_output: usize,
max_lines: usize,
stream: Option<StdoutStream>,
is_stderr: bool,
aggregate_tx: Option<Sender<Vec<u8>>>,
) -> io::Result<StreamOutput<Vec<u8>>> {
let mut buf = Vec::with_capacity(max_output.min(8 * 1024));
let mut tmp = [0u8; 8192];
let mut buf = Vec::with_capacity(AGGREGATE_BUFFER_INITIAL_CAPACITY);
let mut tmp = [0u8; READ_CHUNK_SIZE];
let mut emitted_deltas: usize = 0;
let mut remaining_bytes = max_output;
let mut remaining_lines = max_lines;
// No caps: append all bytes
loop {
let n = reader.read(&mut tmp).await?;
@@ -336,7 +358,9 @@ async fn read_capped<R: AsyncRead + Unpin + Send + 'static>(
break;
}
if let Some(stream) = &stream {
if let Some(stream) = &stream
&& emitted_deltas < MAX_EXEC_OUTPUT_DELTAS_PER_CALL
{
let chunk = tmp[..n].to_vec();
let msg = EventMsg::ExecCommandOutputDelta(ExecCommandOutputDeltaEvent {
call_id: stream.call_id.clone(),
@@ -353,35 +377,20 @@ async fn read_capped<R: AsyncRead + Unpin + Send + 'static>(
};
#[allow(clippy::let_unit_value)]
let _ = stream.tx_event.send(event).await;
emitted_deltas += 1;
}
// Copy into the buffer only while we still have byte and line budget.
if remaining_bytes > 0 && remaining_lines > 0 {
let mut copy_len = 0;
for &b in &tmp[..n] {
if remaining_bytes == 0 || remaining_lines == 0 {
break;
}
copy_len += 1;
remaining_bytes -= 1;
if b == b'\n' {
remaining_lines -= 1;
}
}
buf.extend_from_slice(&tmp[..copy_len]);
if let Some(tx) = &aggregate_tx {
let _ = tx.send(tmp[..n].to_vec()).await;
}
// Continue reading to EOF to avoid back-pressure, but discard once caps are hit.
append_all(&mut buf, &tmp[..n]);
// Continue reading to EOF to avoid back-pressure
}
let truncated = remaining_lines == 0 || remaining_bytes == 0;
Ok(StreamOutput {
text: buf,
truncated_after_lines: if truncated {
Some((max_lines - remaining_lines) as u32)
} else {
None
},
truncated_after_lines: None,
})
}
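// The essence of the new path in read_capped, as a hypothetical standalone
// helper: everything is aggregated; only the live delta callbacks are capped
// so a chatty child cannot flood the event channel.
use tokio::io::{AsyncRead, AsyncReadExt};

async fn read_aggregated_with_capped_deltas<R: AsyncRead + Unpin>(
    mut reader: R,
    mut on_delta: impl FnMut(&[u8]),
) -> std::io::Result<Vec<u8>> {
    const CHUNK: usize = 8192; // mirrors READ_CHUNK_SIZE
    const MAX_DELTAS: usize = 10_000; // mirrors MAX_EXEC_OUTPUT_DELTAS_PER_CALL
    let mut aggregated = Vec::new();
    let mut tmp = [0u8; CHUNK];
    let mut emitted = 0usize;
    loop {
        let n = reader.read(&mut tmp).await?;
        if n == 0 {
            break; // EOF
        }
        if emitted < MAX_DELTAS {
            on_delta(&tmp[..n]); // the live event stream is capped...
            emitted += 1;
        }
        aggregated.extend_from_slice(&tmp[..n]); // ...aggregation never is
    }
    Ok(aggregated)
}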

View File

@@ -0,0 +1,57 @@
use serde::Deserialize;
use serde::Serialize;
use crate::exec_command::session_id::SessionId;
#[derive(Debug, Clone, Deserialize)]
pub struct ExecCommandParams {
pub(crate) cmd: String,
#[serde(default = "default_yield_time")]
pub(crate) yield_time_ms: u64,
#[serde(default = "max_output_tokens")]
pub(crate) max_output_tokens: u64,
#[serde(default = "default_shell")]
pub(crate) shell: String,
#[serde(default = "default_login")]
pub(crate) login: bool,
}
fn default_yield_time() -> u64 {
10_000
}
fn max_output_tokens() -> u64 {
10_000
}
fn default_login() -> bool {
true
}
fn default_shell() -> String {
"/bin/bash".to_string()
}
#[derive(Debug, Deserialize, Serialize)]
pub struct WriteStdinParams {
pub(crate) session_id: SessionId,
pub(crate) chars: String,
#[serde(default = "write_stdin_default_yield_time_ms")]
pub(crate) yield_time_ms: u64,
#[serde(default = "write_stdin_default_max_output_tokens")]
pub(crate) max_output_tokens: u64,
}
fn write_stdin_default_yield_time_ms() -> u64 {
250
}
fn write_stdin_default_max_output_tokens() -> u64 {
10_000
}
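// Every field except `cmd` carries a serde default, so a minimal tool-call
// payload deserializes cleanly. A crate-internal sketch (assuming
// `serde_json` as a dev-dependency):
#[cfg(test)]
mod exec_command_params_defaults_sketch {
    use super::*;

    #[test]
    fn minimal_payload_fills_defaults() {
        let params: ExecCommandParams =
            serde_json::from_str(r#"{"cmd":"echo hi"}"#).expect("deserialize");
        assert_eq!(params.cmd, "echo hi");
        assert_eq!(params.yield_time_ms, 10_000);
        assert_eq!(params.max_output_tokens, 10_000);
        assert_eq!(params.shell, "/bin/bash");
        assert!(params.login);
    }
}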

View File

@@ -0,0 +1,83 @@
use std::sync::Mutex as StdMutex;
use tokio::sync::broadcast;
use tokio::sync::mpsc;
use tokio::task::JoinHandle;
#[derive(Debug)]
pub(crate) struct ExecCommandSession {
/// Queue for writing bytes to the process stdin (PTY master write side).
writer_tx: mpsc::Sender<Vec<u8>>,
/// Broadcast stream of output chunks read from the PTY. New subscribers
/// receive only chunks emitted after they subscribe.
output_tx: broadcast::Sender<Vec<u8>>,
/// Child killer handle for termination on drop (can signal independently
/// of a thread blocked in `.wait()`).
killer: StdMutex<Option<Box<dyn portable_pty::ChildKiller + Send + Sync>>>,
/// JoinHandle for the blocking PTY reader task.
reader_handle: StdMutex<Option<JoinHandle<()>>>,
/// JoinHandle for the stdin writer task.
writer_handle: StdMutex<Option<JoinHandle<()>>>,
/// JoinHandle for the child wait task.
wait_handle: StdMutex<Option<JoinHandle<()>>>,
}
impl ExecCommandSession {
pub(crate) fn new(
writer_tx: mpsc::Sender<Vec<u8>>,
output_tx: broadcast::Sender<Vec<u8>>,
killer: Box<dyn portable_pty::ChildKiller + Send + Sync>,
reader_handle: JoinHandle<()>,
writer_handle: JoinHandle<()>,
wait_handle: JoinHandle<()>,
) -> Self {
Self {
writer_tx,
output_tx,
killer: StdMutex::new(Some(killer)),
reader_handle: StdMutex::new(Some(reader_handle)),
writer_handle: StdMutex::new(Some(writer_handle)),
wait_handle: StdMutex::new(Some(wait_handle)),
}
}
pub(crate) fn writer_sender(&self) -> mpsc::Sender<Vec<u8>> {
self.writer_tx.clone()
}
pub(crate) fn output_receiver(&self) -> broadcast::Receiver<Vec<u8>> {
self.output_tx.subscribe()
}
}
impl Drop for ExecCommandSession {
fn drop(&mut self) {
// Best-effort: terminate child first so blocking tasks can complete.
if let Ok(mut killer_opt) = self.killer.lock()
&& let Some(mut killer) = killer_opt.take()
{
let _ = killer.kill();
}
// Abort background tasks; they may already have exited after kill.
if let Ok(mut h) = self.reader_handle.lock()
&& let Some(handle) = h.take()
{
handle.abort();
}
if let Ok(mut h) = self.writer_handle.lock()
&& let Some(handle) = h.take()
{
handle.abort();
}
if let Ok(mut h) = self.wait_handle.lock()
&& let Some(handle) = h.take()
{
handle.abort();
}
}
}
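// A crate-internal sketch (hypothetical helper) of a session's channel
// surface: output subscribers only observe chunks produced after they
// subscribe, which is what lets write_stdin return output "from now".
async fn poke_session_sketch(session: &ExecCommandSession) {
    let stdin = session.writer_sender();
    let mut output = session.output_receiver(); // sees only future chunks
    let _ = stdin.send(b"echo hi\n".to_vec()).await;
    if let Ok(chunk) = output.recv().await {
        let _ = String::from_utf8_lossy(&chunk); // inspect the latest output
    }
}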

View File

@@ -0,0 +1,14 @@
mod exec_command_params;
mod exec_command_session;
mod responses_api;
mod session_id;
mod session_manager;
pub use exec_command_params::ExecCommandParams;
pub use exec_command_params::WriteStdinParams;
pub use responses_api::EXEC_COMMAND_TOOL_NAME;
pub use responses_api::WRITE_STDIN_TOOL_NAME;
pub use responses_api::create_exec_command_tool_for_responses_api;
pub use responses_api::create_write_stdin_tool_for_responses_api;
pub use session_manager::SessionManager as ExecSessionManager;
pub use session_manager::result_into_payload;

View File

@@ -0,0 +1,98 @@
use std::collections::BTreeMap;
use crate::openai_tools::JsonSchema;
use crate::openai_tools::ResponsesApiTool;
pub const EXEC_COMMAND_TOOL_NAME: &str = "exec_command";
pub const WRITE_STDIN_TOOL_NAME: &str = "write_stdin";
pub fn create_exec_command_tool_for_responses_api() -> ResponsesApiTool {
let mut properties = BTreeMap::<String, JsonSchema>::new();
properties.insert(
"cmd".to_string(),
JsonSchema::String {
description: Some("The shell command to execute.".to_string()),
},
);
properties.insert(
"yield_time_ms".to_string(),
JsonSchema::Number {
description: Some("The maximum time in milliseconds to wait for output.".to_string()),
},
);
properties.insert(
"max_output_tokens".to_string(),
JsonSchema::Number {
description: Some("The maximum number of tokens to output.".to_string()),
},
);
properties.insert(
"shell".to_string(),
JsonSchema::String {
description: Some("The shell to use. Defaults to \"/bin/bash\".".to_string()),
},
);
properties.insert(
"login".to_string(),
JsonSchema::Boolean {
description: Some(
"Whether to run the command as a login shell. Defaults to true.".to_string(),
),
},
);
ResponsesApiTool {
name: EXEC_COMMAND_TOOL_NAME.to_owned(),
description: r#"Execute shell commands on the local machine with streaming output."#
.to_string(),
strict: false,
parameters: JsonSchema::Object {
properties,
required: Some(vec!["cmd".to_string()]),
additional_properties: Some(false),
},
}
}
pub fn create_write_stdin_tool_for_responses_api() -> ResponsesApiTool {
let mut properties = BTreeMap::<String, JsonSchema>::new();
properties.insert(
"session_id".to_string(),
JsonSchema::Number {
description: Some("The ID of the exec_command session.".to_string()),
},
);
properties.insert(
"chars".to_string(),
JsonSchema::String {
description: Some("The characters to write to stdin.".to_string()),
},
);
properties.insert(
"yield_time_ms".to_string(),
JsonSchema::Number {
description: Some(
"The maximum time in milliseconds to wait for output after writing.".to_string(),
),
},
);
properties.insert(
"max_output_tokens".to_string(),
JsonSchema::Number {
description: Some("The maximum number of tokens to output.".to_string()),
},
);
ResponsesApiTool {
name: WRITE_STDIN_TOOL_NAME.to_owned(),
description: r#"Write characters to an exec session's stdin. Returns all stdout+stderr received within yield_time_ms.
Can write control characters (\u0003 for Ctrl-C), or an empty string to just poll stdout+stderr."#
.to_string(),
strict: false,
parameters: JsonSchema::Object {
properties,
required: Some(vec!["session_id".to_string(), "chars".to_string()]),
additional_properties: Some(false),
},
}
}

View File

@@ -0,0 +1,5 @@
use serde::Deserialize;
use serde::Serialize;
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, Serialize, Deserialize)]
pub(crate) struct SessionId(pub u32);

View File

@@ -0,0 +1,665 @@
use std::collections::HashMap;
use std::io::ErrorKind;
use std::io::Read;
use std::sync::Arc;
use std::sync::Mutex as StdMutex;
use std::sync::atomic::AtomicU32;
use portable_pty::CommandBuilder;
use portable_pty::PtySize;
use portable_pty::native_pty_system;
use tokio::sync::Mutex;
use tokio::sync::mpsc;
use tokio::sync::oneshot;
use tokio::time::Duration;
use tokio::time::Instant;
use tokio::time::timeout;
use crate::exec_command::exec_command_params::ExecCommandParams;
use crate::exec_command::exec_command_params::WriteStdinParams;
use crate::exec_command::exec_command_session::ExecCommandSession;
use crate::exec_command::session_id::SessionId;
use codex_protocol::models::FunctionCallOutputPayload;
#[derive(Debug, Default)]
pub struct SessionManager {
next_session_id: AtomicU32,
sessions: Mutex<HashMap<SessionId, ExecCommandSession>>,
}
#[derive(Debug)]
pub struct ExecCommandOutput {
wall_time: Duration,
exit_status: ExitStatus,
original_token_count: Option<u64>,
output: String,
}
impl ExecCommandOutput {
fn to_text_output(&self) -> String {
let wall_time_secs = self.wall_time.as_secs_f32();
let termination_status = match self.exit_status {
ExitStatus::Exited(code) => format!("Process exited with code {code}"),
ExitStatus::Ongoing(session_id) => {
format!("Process running with session ID {}", session_id.0)
}
};
let truncation_status = match self.original_token_count {
Some(tokens) => {
format!("\nWarning: truncated output (original token count: {tokens})")
}
None => "".to_string(),
};
format!(
r#"Wall time: {wall_time_secs:.3} seconds
{termination_status}{truncation_status}
Output:
{output}"#,
output = self.output
)
}
}
#[derive(Debug)]
pub enum ExitStatus {
Exited(i32),
Ongoing(SessionId),
}
pub fn result_into_payload(result: Result<ExecCommandOutput, String>) -> FunctionCallOutputPayload {
match result {
Ok(output) => FunctionCallOutputPayload {
content: output.to_text_output(),
success: Some(true),
},
Err(err) => FunctionCallOutputPayload {
content: err,
success: Some(false),
},
}
}
impl SessionManager {
/// Processes the request and returns the collected output for the caller to relay.
pub async fn handle_exec_command_request(
&self,
params: ExecCommandParams,
) -> Result<ExecCommandOutput, String> {
// Allocate a session id.
let session_id = SessionId(
self.next_session_id
.fetch_add(1, std::sync::atomic::Ordering::SeqCst),
);
let (session, mut exit_rx) =
create_exec_command_session(params.clone())
.await
.map_err(|err| {
format!(
"failed to create exec command session for session id {}: {err}",
session_id.0
)
})?;
// Insert into session map.
let mut output_rx = session.output_receiver();
self.sessions.lock().await.insert(session_id, session);
// Collect output until either timeout expires or process exits.
// Do not cap during collection; truncate at the end if needed.
// Use a modest initial capacity to avoid large preallocation.
let cap_bytes_u64 = params.max_output_tokens.saturating_mul(4);
let cap_bytes: usize = cap_bytes_u64.min(usize::MAX as u64) as usize;
let mut collected: Vec<u8> = Vec::with_capacity(4096);
let start_time = Instant::now();
let deadline = start_time + Duration::from_millis(params.yield_time_ms);
let mut exit_code: Option<i32> = None;
loop {
if Instant::now() >= deadline {
break;
}
let remaining = deadline.saturating_duration_since(Instant::now());
tokio::select! {
biased;
exit = &mut exit_rx => {
exit_code = exit.ok();
// Small grace period to pull remaining buffered output
let grace_deadline = Instant::now() + Duration::from_millis(25);
while Instant::now() < grace_deadline {
match timeout(Duration::from_millis(1), output_rx.recv()).await {
Ok(Ok(chunk)) => {
collected.extend_from_slice(&chunk);
}
Ok(Err(tokio::sync::broadcast::error::RecvError::Lagged(_))) => {
// Skip missed messages; keep trying within grace period.
continue;
}
Ok(Err(tokio::sync::broadcast::error::RecvError::Closed)) => break,
Err(_) => break,
}
}
break;
}
chunk = timeout(remaining, output_rx.recv()) => {
match chunk {
Ok(Ok(chunk)) => {
collected.extend_from_slice(&chunk);
}
Ok(Err(tokio::sync::broadcast::error::RecvError::Lagged(_))) => {
// Skip missed messages; continue collecting fresh output.
}
Ok(Err(tokio::sync::broadcast::error::RecvError::Closed)) => { break; }
Err(_) => { break; }
}
}
}
}
let output = String::from_utf8_lossy(&collected).to_string();
let exit_status = if let Some(code) = exit_code {
ExitStatus::Exited(code)
} else {
ExitStatus::Ongoing(session_id)
};
// If output exceeds cap, truncate the middle and record original token estimate.
let (output, original_token_count) = truncate_middle(&output, cap_bytes);
Ok(ExecCommandOutput {
wall_time: Instant::now().duration_since(start_time),
exit_status,
original_token_count,
output,
})
}
/// Write characters to a session's stdin and collect combined output for up to `yield_time_ms`.
pub async fn handle_write_stdin_request(
&self,
params: WriteStdinParams,
) -> Result<ExecCommandOutput, String> {
let WriteStdinParams {
session_id,
chars,
yield_time_ms,
max_output_tokens,
} = params;
// Grab handles without holding the sessions lock across await points.
let (writer_tx, mut output_rx) = {
let sessions = self.sessions.lock().await;
match sessions.get(&session_id) {
Some(session) => (session.writer_sender(), session.output_receiver()),
None => {
return Err(format!("unknown session id {}", session_id.0));
}
}
};
// Write stdin if provided.
if !chars.is_empty() && writer_tx.send(chars.into_bytes()).await.is_err() {
return Err("failed to write to stdin".to_string());
}
// Collect output up to yield_time_ms, truncating to max_output_tokens bytes.
let mut collected: Vec<u8> = Vec::with_capacity(4096);
let start_time = Instant::now();
let deadline = start_time + Duration::from_millis(yield_time_ms);
loop {
let now = Instant::now();
if now >= deadline {
break;
}
let remaining = deadline - now;
match timeout(remaining, output_rx.recv()).await {
Ok(Ok(chunk)) => {
// Collect all output within the time budget; truncate at the end.
collected.extend_from_slice(&chunk);
}
Ok(Err(tokio::sync::broadcast::error::RecvError::Lagged(_))) => {
// Skip missed messages; continue collecting fresh output.
}
Ok(Err(tokio::sync::broadcast::error::RecvError::Closed)) => break,
Err(_) => break, // timeout
}
}
// Return structured output, truncating middle if over cap.
let output = String::from_utf8_lossy(&collected).to_string();
let cap_bytes_u64 = max_output_tokens.saturating_mul(4);
let cap_bytes: usize = cap_bytes_u64.min(usize::MAX as u64) as usize;
let (output, original_token_count) = truncate_middle(&output, cap_bytes);
Ok(ExecCommandOutput {
wall_time: Instant::now().duration_since(start_time),
exit_status: ExitStatus::Ongoing(session_id),
original_token_count,
output,
})
}
}
/// Spawn PTY and child process per spawn_exec_command_session logic.
async fn create_exec_command_session(
params: ExecCommandParams,
) -> anyhow::Result<(ExecCommandSession, oneshot::Receiver<i32>)> {
let ExecCommandParams {
cmd,
yield_time_ms: _,
max_output_tokens: _,
shell,
login,
} = params;
// Use the native pty implementation for the system
let pty_system = native_pty_system();
// Create a new pty
let pair = pty_system.openpty(PtySize {
rows: 24,
cols: 80,
pixel_width: 0,
pixel_height: 0,
})?;
// Spawn a shell into the pty
let mut command_builder = CommandBuilder::new(shell);
let shell_mode_opt = if login { "-lc" } else { "-c" };
command_builder.arg(shell_mode_opt);
command_builder.arg(cmd);
let mut child = pair.slave.spawn_command(command_builder)?;
// Obtain a killer that can signal the process independently of `.wait()`.
let killer = child.clone_killer();
// Channel to forward write requests to the PTY writer.
let (writer_tx, mut writer_rx) = mpsc::channel::<Vec<u8>>(128);
// Broadcast for streaming PTY output to readers: subscribers receive from subscription time.
let (output_tx, _) = tokio::sync::broadcast::channel::<Vec<u8>>(256);
// Reader task: drain PTY and forward chunks to output channel.
let mut reader = pair.master.try_clone_reader()?;
let output_tx_clone = output_tx.clone();
let reader_handle = tokio::task::spawn_blocking(move || {
let mut buf = [0u8; 8192];
loop {
match reader.read(&mut buf) {
Ok(0) => break, // EOF
Ok(n) => {
// Forward to broadcast; best-effort if there are subscribers.
let _ = output_tx_clone.send(buf[..n].to_vec());
}
Err(ref e) if e.kind() == ErrorKind::Interrupted => {
// Retry on EINTR
continue;
}
Err(ref e) if e.kind() == ErrorKind::WouldBlock => {
// We're in a blocking thread; back off briefly and retry.
std::thread::sleep(Duration::from_millis(5));
continue;
}
Err(_) => break,
}
}
});
// Writer task: apply stdin writes to the PTY writer.
let writer = pair.master.take_writer()?;
let writer = Arc::new(StdMutex::new(writer));
let writer_handle = tokio::spawn({
let writer = writer.clone();
async move {
while let Some(bytes) = writer_rx.recv().await {
let writer = writer.clone();
// Perform blocking write on a blocking thread.
let _ = tokio::task::spawn_blocking(move || {
if let Ok(mut guard) = writer.lock() {
use std::io::Write;
let _ = guard.write_all(&bytes);
let _ = guard.flush();
}
})
.await;
}
}
});
// Keep the child alive until it exits, then signal exit code.
let (exit_tx, exit_rx) = oneshot::channel::<i32>();
let wait_handle = tokio::task::spawn_blocking(move || {
let code = match child.wait() {
Ok(status) => status.exit_code() as i32,
Err(_) => -1,
};
let _ = exit_tx.send(code);
});
// Create and store the session with channels.
let session = ExecCommandSession::new(
writer_tx,
output_tx,
killer,
reader_handle,
writer_handle,
wait_handle,
);
Ok((session, exit_rx))
}
/// Truncate the middle of a UTF-8 string to at most `max_bytes` bytes,
/// preserving the beginning and the end. Returns the possibly truncated
/// string and `Some(original_token_count)` (estimated at 4 bytes/token)
/// if truncation occurred; otherwise returns the original string and `None`.
fn truncate_middle(s: &str, max_bytes: usize) -> (String, Option<u64>) {
// No truncation needed
if s.len() <= max_bytes {
return (s.to_string(), None);
}
let est_tokens = (s.len() as u64).div_ceil(4);
if max_bytes == 0 {
// Cannot keep any content; return just the marker (the marker itself is never truncated).
return (format!("{est_tokens} tokens truncated…"), Some(est_tokens));
}
// Helper to truncate a string to a given byte length on a char boundary.
fn truncate_on_boundary(input: &str, max_len: usize) -> &str {
if input.len() <= max_len {
return input;
}
let mut end = max_len;
while end > 0 && !input.is_char_boundary(end) {
end -= 1;
}
&input[..end]
}
// Given a left/right budget, prefer newline boundaries; otherwise fall back
// to UTF-8 char boundaries.
fn pick_prefix_end(s: &str, left_budget: usize) -> usize {
if let Some(head) = s.get(..left_budget)
&& let Some(i) = head.rfind('\n')
{
return i + 1; // keep the newline so suffix starts on a fresh line
}
truncate_on_boundary(s, left_budget).len()
}
fn pick_suffix_start(s: &str, right_budget: usize) -> usize {
let start_tail = s.len().saturating_sub(right_budget);
if let Some(tail) = s.get(start_tail..)
&& let Some(i) = tail.find('\n')
{
return start_tail + i + 1; // start after newline
}
// Fall back to a char boundary at or after start_tail.
let mut idx = start_tail.min(s.len());
while idx < s.len() && !s.is_char_boundary(idx) {
idx += 1;
}
idx
}
// Refine marker length and budgets until stable. Marker is never truncated.
let mut guess_tokens = est_tokens; // worst-case: everything truncated
for _ in 0..4 {
let marker = format!("{guess_tokens} tokens truncated…");
let marker_len = marker.len();
let keep_budget = max_bytes.saturating_sub(marker_len);
if keep_budget == 0 {
// No room for any content within the cap; return a full, untruncated marker
// that reflects the entire truncated content.
return (format!("{est_tokens} tokens truncated…"), Some(est_tokens));
}
let left_budget = keep_budget / 2;
let right_budget = keep_budget - left_budget;
let prefix_end = pick_prefix_end(s, left_budget);
let mut suffix_start = pick_suffix_start(s, right_budget);
if suffix_start < prefix_end {
suffix_start = prefix_end;
}
let kept_content_bytes = prefix_end + (s.len() - suffix_start);
let truncated_content_bytes = s.len().saturating_sub(kept_content_bytes);
let new_tokens = (truncated_content_bytes as u64).div_ceil(4);
if new_tokens == guess_tokens {
let mut out = String::with_capacity(marker_len + kept_content_bytes + 1);
out.push_str(&s[..prefix_end]);
out.push_str(&marker);
// Place marker on its own line for symmetry when we keep line boundaries.
out.push('\n');
out.push_str(&s[suffix_start..]);
return (out, Some(est_tokens));
}
guess_tokens = new_tokens;
}
// Fallback: use last guess to build output.
let marker = format!("{guess_tokens} tokens truncated…");
let marker_len = marker.len();
let keep_budget = max_bytes.saturating_sub(marker_len);
if keep_budget == 0 {
return (format!("{est_tokens} tokens truncated…"), Some(est_tokens));
}
let left_budget = keep_budget / 2;
let right_budget = keep_budget - left_budget;
let prefix_end = pick_prefix_end(s, left_budget);
let suffix_start = pick_suffix_start(s, right_budget);
let mut out = String::with_capacity(marker_len + prefix_end + (s.len() - suffix_start) + 1);
out.push_str(&s[..prefix_end]);
out.push_str(&marker);
out.push('\n');
out.push_str(&s[suffix_start..]);
(out, Some(est_tokens))
}
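// A small worked sketch of the byte budget above: the cap assumes ~4 bytes
// per token, so a 16-token cap becomes a 64-byte budget split between the
// kept prefix, the marker line, and the kept suffix.
#[test]
fn truncate_middle_budget_sketch() {
    // 200 bytes of content against a 64-byte cap (16 tokens * 4 bytes/token).
    let s = "x".repeat(200);
    let (out, original) = truncate_middle(&s, 64);
    assert!(out.contains("tokens truncated"));
    assert_eq!(original, Some(50)); // ceil(200 / 4) tokens in the original
}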
#[cfg(test)]
mod tests {
use super::*;
use crate::exec_command::session_id::SessionId;
/// Test that verifies that [`SessionManager::handle_exec_command_request()`]
/// and [`SessionManager::handle_write_stdin_request()`] work as expected
/// in the presence of a process that never terminates (but produces
/// output continuously).
#[cfg(unix)]
#[allow(clippy::print_stderr)]
#[tokio::test(flavor = "multi_thread", worker_threads = 4)]
async fn session_manager_streams_and_truncates_from_now() {
use crate::exec_command::exec_command_params::ExecCommandParams;
use crate::exec_command::exec_command_params::WriteStdinParams;
use tokio::time::sleep;
let session_manager = SessionManager::default();
// Long-running loop that prints an increasing counter every ~100ms.
// Use Python for a portable, reliable sleep across shells/PTYs.
let cmd = r#"python3 - <<'PY'
import sys, time
count = 0
while True:
print(count)
sys.stdout.flush()
count += 100
time.sleep(0.1)
PY"#
.to_string();
// Start the session and collect ~3s of output.
let params = ExecCommandParams {
cmd,
yield_time_ms: 3_000,
max_output_tokens: 1_000, // large enough to avoid truncation here
shell: "/bin/bash".to_string(),
login: false,
};
let initial_output = match session_manager
.handle_exec_command_request(params.clone())
.await
{
Ok(v) => v,
Err(e) => {
// PTY may be restricted in some sandboxes; skip in that case.
if e.contains("openpty") || e.contains("Operation not permitted") {
eprintln!("skipping test due to restricted PTY: {e}");
return;
}
panic!("exec request failed unexpectedly: {e}");
}
};
eprintln!("initial output: {initial_output:?}");
// Should be ongoing (we launched a never-ending loop).
let session_id = match initial_output.exit_status {
ExitStatus::Ongoing(id) => id,
_ => panic!("expected ongoing session"),
};
// Parse the numeric lines and get the max observed value in the first window.
let first_nums = extract_monotonic_numbers(&initial_output.output);
assert!(
!first_nums.is_empty(),
"expected some output from first window"
);
let first_max = *first_nums.iter().max().unwrap();
// Wait ~4s so counters progress while we're not reading.
sleep(Duration::from_millis(4_000)).await;
// Now read ~3s of output "from now" only.
// Use a small token cap so truncation occurs and we test middle truncation.
let write_params = WriteStdinParams {
session_id,
chars: String::new(),
yield_time_ms: 3_000,
max_output_tokens: 16, // 16 tokens ~= 64 bytes -> likely truncation
};
let second = session_manager
.handle_write_stdin_request(write_params)
.await
.expect("write stdin should succeed");
// Verify truncation metadata and size bound (cap is tokens*4 bytes).
assert!(second.original_token_count.is_some());
let cap_bytes = (16u64 * 4) as usize;
assert!(second.output.len() <= cap_bytes);
// New middle marker should be present.
assert!(
second.output.contains("tokens truncated") && second.output.contains('…'),
"expected truncation marker in output, got: {}",
second.output
);
// Minimal freshness check: the earliest number we see in the second window
// should be significantly larger than the last from the first window.
let second_nums = extract_monotonic_numbers(&second.output);
assert!(
!second_nums.is_empty(),
"expected some numeric output from second window"
);
let second_min = *second_nums.iter().min().unwrap();
// We slept 4 seconds (~40 ticks at 100ms/tick, each +100), so expect
// an increase of roughly 4000 or more. Allow a generous margin.
assert!(
second_min >= first_max + 2000,
"second_min={second_min} first_max={first_max}",
);
}
#[cfg(unix)]
fn extract_monotonic_numbers(s: &str) -> Vec<i64> {
s.lines()
.filter_map(|line| {
if !line.is_empty()
&& line.chars().all(|c| c.is_ascii_digit())
&& let Ok(n) = line.parse::<i64>()
{
// Our generator increments by 100; ignore spurious fragments.
if n % 100 == 0 {
return Some(n);
}
}
None
})
.collect()
}
#[test]
fn to_text_output_exited_no_truncation() {
let out = ExecCommandOutput {
wall_time: Duration::from_millis(1234),
exit_status: ExitStatus::Exited(0),
original_token_count: None,
output: "hello".to_string(),
};
let text = out.to_text_output();
let expected = r#"Wall time: 1.234 seconds
Process exited with code 0
Output:
hello"#;
assert_eq!(expected, text);
}
#[test]
fn to_text_output_ongoing_with_truncation() {
let out = ExecCommandOutput {
wall_time: Duration::from_millis(500),
exit_status: ExitStatus::Ongoing(SessionId(42)),
original_token_count: Some(1000),
output: "abc".to_string(),
};
let text = out.to_text_output();
let expected = r#"Wall time: 0.500 seconds
Process running with session ID 42
Warning: truncated output (original token count: 1000)
Output:
abc"#;
assert_eq!(expected, text);
}
#[test]
fn truncate_middle_no_newlines_fallback() {
// A long string with no newlines that exceeds the cap.
let s = "abcdefghijklmnopqrstuvwxyz0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ";
let max_bytes = 16; // force truncation
let (out, original) = truncate_middle(s, max_bytes);
// For very small caps, we return the full, untruncated marker,
// even if it exceeds the cap.
assert_eq!(out, "…16 tokens truncated…");
// Original string length is 62 bytes => ceil(62/4) = 16 tokens.
assert_eq!(original, Some(16));
}
#[test]
fn truncate_middle_prefers_newline_boundaries() {
// Build a multi-line string of 20 numbered lines (each "NNN\n").
let mut s = String::new();
for i in 1..=20 {
s.push_str(&format!("{i:03}\n"));
}
// Total length: 20 lines * 4 bytes per line = 80 bytes.
assert_eq!(s.len(), 80);
// Choose a cap that forces truncation while leaving room for
// a few lines on each side after accounting for the marker.
let max_bytes = 64;
// Expect exact output: first 4 lines, marker, last 4 lines, and correct token estimate (80/4 = 20).
assert_eq!(
truncate_middle(&s, max_bytes),
(
r#"001
002
003
004
…12 tokens truncated…
017
018
019
020
"#
.to_string(),
Some(20)
)
);
}
}

View File

@@ -1,11 +1,45 @@
use std::collections::HashSet;
use std::path::Path;
use std::path::PathBuf;
use codex_protocol::mcp_protocol::GitSha;
use futures::future::join_all;
use serde::Deserialize;
use serde::Serialize;
use tokio::process::Command;
use tokio::time::Duration as TokioDuration;
use tokio::time::timeout;
/// Return the root of the Git repository that contains `base_dir`, or `None`
/// if `base_dir` is not inside a Git repository.
///
/// The check walks up the directory hierarchy looking for a `.git` file or
/// directory (note `.git` can be a file that contains a `gitdir` entry). This
/// approach does **not** require the `git` binary or the `git2` crate and is
/// therefore fairly lightweight.
///
/// Note that this does **not** detect *worktrees* created with
/// `git worktree add` where the checkout lives outside the main repository
/// directory. If you need Codex to work from such a checkout, simply pass the
/// `--allow-no-git-exec` CLI flag, which disables the repo requirement.
pub fn get_git_repo_root(base_dir: &Path) -> Option<PathBuf> {
let mut dir = base_dir.to_path_buf();
loop {
if dir.join(".git").exists() {
return Some(dir);
}
// Pop one component (go up one directory). `pop` returns false when
// we have reached the filesystem root.
if !dir.pop() {
break;
}
}
None
}
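// Illustrative usage (a sketch, not part of this change): callers can gate
// repo-dependent behavior on the ancestor walk above.
#[allow(dead_code)]
fn example_is_inside_git_repo(cwd: &Path) -> bool {
    // Any ancestor containing a `.git` file or directory counts.
    get_git_repo_root(cwd).is_some()
}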
/// Timeout for git commands to prevent freezing on large repositories
const GIT_COMMAND_TIMEOUT: TokioDuration = TokioDuration::from_secs(5);
@@ -22,6 +56,12 @@ pub struct GitInfo {
pub repository_url: Option<String>,
}
#[derive(Serialize, Deserialize, Clone, Debug)]
pub struct GitDiffToRemote {
pub sha: GitSha,
pub diff: String,
}
/// Collect git repository information from the given working directory using command-line git.
/// Returns None if no git repository is found or if git operations fail.
/// Uses timeouts to prevent freezing on large repositories.
@@ -80,6 +120,21 @@ pub async fn collect_git_info(cwd: &Path) -> Option<GitInfo> {
Some(git_info)
}
/// Returns the closest git sha to HEAD that is on a remote as well as the diff to that sha.
pub async fn git_diff_to_remote(cwd: &Path) -> Option<GitDiffToRemote> {
get_git_repo_root(cwd)?;
let remotes = get_git_remotes(cwd).await?;
let branches = branch_ancestry(cwd).await?;
let base_sha = find_closest_sha(cwd, &branches, &remotes).await?;
let diff = diff_against_sha(cwd, &base_sha).await?;
Some(GitDiffToRemote {
sha: base_sha,
diff,
})
}
/// Run a git command with a timeout to prevent blocking on large repositories
async fn run_git_command_with_timeout(args: &[&str], cwd: &Path) -> Option<std::process::Output> {
let result = timeout(
@@ -94,6 +149,354 @@ async fn run_git_command_with_timeout(args: &[&str], cwd: &Path) -> Option<std::
}
}
async fn get_git_remotes(cwd: &Path) -> Option<Vec<String>> {
let output = run_git_command_with_timeout(&["remote"], cwd).await?;
if !output.status.success() {
return None;
}
let mut remotes: Vec<String> = String::from_utf8(output.stdout)
.ok()?
.lines()
.map(|s| s.to_string())
.collect();
if let Some(pos) = remotes.iter().position(|r| r == "origin") {
let origin = remotes.remove(pos);
remotes.insert(0, origin);
}
Some(remotes)
}
/// Attempt to determine the repository's default branch name.
///
/// Preference order:
/// 1) The symbolic ref at `refs/remotes/<remote>/HEAD` for the first remote (origin prioritized)
/// 2) `git remote show <remote>` parsed for "HEAD branch: <name>"
/// 3) Local fallback to existing `main` or `master` if present
async fn get_default_branch(cwd: &Path) -> Option<String> {
// Prefer the first remote (with origin prioritized)
let remotes = get_git_remotes(cwd).await.unwrap_or_default();
for remote in remotes {
// Try symbolic-ref, which returns something like: refs/remotes/origin/main
if let Some(symref_output) = run_git_command_with_timeout(
&[
"symbolic-ref",
"--quiet",
&format!("refs/remotes/{remote}/HEAD"),
],
cwd,
)
.await
&& symref_output.status.success()
&& let Ok(sym) = String::from_utf8(symref_output.stdout)
{
let trimmed = sym.trim();
if let Some((_, name)) = trimmed.rsplit_once('/') {
return Some(name.to_string());
}
}
// Fall back to parsing `git remote show <remote>` output
if let Some(show_output) =
run_git_command_with_timeout(&["remote", "show", &remote], cwd).await
&& show_output.status.success()
&& let Ok(text) = String::from_utf8(show_output.stdout)
{
for line in text.lines() {
let line = line.trim();
if let Some(rest) = line.strip_prefix("HEAD branch:") {
let name = rest.trim();
if !name.is_empty() {
return Some(name.to_string());
}
}
}
}
}
// No remote-derived default; try common local defaults if they exist
for candidate in ["main", "master"] {
if let Some(verify) = run_git_command_with_timeout(
&[
"rev-parse",
"--verify",
"--quiet",
&format!("refs/heads/{candidate}"),
],
cwd,
)
.await
&& verify.status.success()
{
return Some(candidate.to_string());
}
}
None
}
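// Minimal sketch (illustrative, not part of this change) of the symref parsing
// above: "refs/remotes/origin/main" yields "main", the text after the last '/'.
#[allow(dead_code)]
fn example_branch_from_symref(symref: &str) -> Option<&str> {
    symref.trim().rsplit_once('/').map(|(_, name)| name)
}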
/// Build an ancestry of branches starting at the current branch and ending at the
/// repository's default branch (if determinable).
async fn branch_ancestry(cwd: &Path) -> Option<Vec<String>> {
// Discover current branch (ignore detached HEAD by treating it as None)
let current_branch = run_git_command_with_timeout(&["rev-parse", "--abbrev-ref", "HEAD"], cwd)
.await
.and_then(|o| {
if o.status.success() {
String::from_utf8(o.stdout).ok()
} else {
None
}
})
.map(|s| s.trim().to_string())
.filter(|s| s != "HEAD");
// Discover default branch
let default_branch = get_default_branch(cwd).await;
let mut ancestry: Vec<String> = Vec::new();
let mut seen: HashSet<String> = HashSet::new();
if let Some(cb) = current_branch.clone() {
seen.insert(cb.clone());
ancestry.push(cb);
}
if let Some(db) = default_branch
&& !seen.contains(&db)
{
seen.insert(db.clone());
ancestry.push(db);
}
// Expand candidates: include any remote branches that already contain HEAD.
// This addresses cases where we're on a new local-only branch forked from a
// remote branch that isn't the repository default. We prioritize remotes in
// the order returned by get_git_remotes (origin first).
let remotes = get_git_remotes(cwd).await.unwrap_or_default();
for remote in remotes {
if let Some(output) = run_git_command_with_timeout(
&[
"for-each-ref",
"--format=%(refname:short)",
"--contains=HEAD",
&format!("refs/remotes/{remote}"),
],
cwd,
)
.await
&& output.status.success()
&& let Ok(text) = String::from_utf8(output.stdout)
{
for line in text.lines() {
let short = line.trim();
// Expect format like: "origin/feature"; extract the branch path after "remote/"
if let Some(stripped) = short.strip_prefix(&format!("{remote}/"))
&& !stripped.is_empty()
&& !seen.contains(stripped)
{
seen.insert(stripped.to_string());
ancestry.push(stripped.to_string());
}
}
}
}
// Ensure we return Some vector, even if empty, to allow caller logic to proceed
Some(ancestry)
}
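// Illustrative scenario (assumed, not from this change): on a local-only
// branch `wip` forked from `origin/feature`, in a repo whose default branch is
// `main`, the ancestry comes out as ["wip", "main", "feature"]: the current
// branch first, then the default branch, then remote branches that already
// contain HEAD.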
// Helper for a single branch: return the remote SHA if the branch is present
// on any remote, plus the distance (the number of commits HEAD is ahead of
// that branch). The first item is None if the branch is not present on any
// remote. Returns None if the distance could not be computed due to git
// errors/timeouts.
async fn branch_remote_and_distance(
cwd: &Path,
branch: &str,
remotes: &[String],
) -> Option<(Option<GitSha>, usize)> {
// Try to find the first remote ref that exists for this branch (origin prioritized by caller).
let mut found_remote_sha: Option<GitSha> = None;
let mut found_remote_ref: Option<String> = None;
for remote in remotes {
let remote_ref = format!("refs/remotes/{remote}/{branch}");
let Some(verify_output) =
run_git_command_with_timeout(&["rev-parse", "--verify", "--quiet", &remote_ref], cwd)
.await
else {
// Mirror previous behavior: if the verify call times out/fails at the process level,
// treat the entire branch as unusable.
return None;
};
if !verify_output.status.success() {
continue;
}
let Ok(sha) = String::from_utf8(verify_output.stdout) else {
// Mirror previous behavior and skip the entire branch on parse failure.
return None;
};
found_remote_sha = Some(GitSha::new(sha.trim()));
found_remote_ref = Some(remote_ref);
break;
}
// Compute distance as the number of commits HEAD is ahead of the branch.
// Prefer local branch name if it exists; otherwise fall back to the remote ref (if any).
let count_output = if let Some(local_count) =
run_git_command_with_timeout(&["rev-list", "--count", &format!("{branch}..HEAD")], cwd)
.await
{
if local_count.status.success() {
local_count
} else if let Some(remote_ref) = &found_remote_ref {
match run_git_command_with_timeout(
&["rev-list", "--count", &format!("{remote_ref}..HEAD")],
cwd,
)
.await
{
Some(remote_count) => remote_count,
None => return None,
}
} else {
return None;
}
} else if let Some(remote_ref) = &found_remote_ref {
match run_git_command_with_timeout(
&["rev-list", "--count", &format!("{remote_ref}..HEAD")],
cwd,
)
.await
{
Some(remote_count) => remote_count,
None => return None,
}
} else {
return None;
};
if !count_output.status.success() {
return None;
}
let Ok(distance_str) = String::from_utf8(count_output.stdout) else {
return None;
};
let Ok(distance) = distance_str.trim().parse::<usize>() else {
return None;
};
Some((found_remote_sha, distance))
}
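// Worked example (illustrative): with HEAD two commits ahead of a `main` that
// exists on `origin`, the verify step resolves `refs/remotes/origin/main` to a
// SHA and `git rev-list --count main..HEAD` prints "2", so this returns
// Some((Some(<sha of origin/main>), 2)).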
// Finds the closest SHA that exists on any of the branches and is also present on any of the remotes.
async fn find_closest_sha(cwd: &Path, branches: &[String], remotes: &[String]) -> Option<GitSha> {
// A sha and how many commits away from HEAD it is.
let mut closest_sha: Option<(GitSha, usize)> = None;
for branch in branches {
let Some((maybe_remote_sha, distance)) =
branch_remote_and_distance(cwd, branch, remotes).await
else {
continue;
};
let Some(remote_sha) = maybe_remote_sha else {
// Preserve existing behavior: skip branches that are not present on a remote.
continue;
};
match &closest_sha {
None => closest_sha = Some((remote_sha, distance)),
Some((_, best_distance)) if distance < *best_distance => {
closest_sha = Some((remote_sha, distance));
}
_ => {}
}
}
closest_sha.map(|(sha, _)| sha)
}
async fn diff_against_sha(cwd: &Path, sha: &GitSha) -> Option<String> {
let output =
run_git_command_with_timeout(&["diff", "--no-textconv", "--no-ext-diff", &sha.0], cwd)
.await?;
// 0 is success and no diff.
// 1 is success but there is a diff.
let exit_ok = output.status.code().is_some_and(|c| c == 0 || c == 1);
if !exit_ok {
return None;
}
let mut diff = String::from_utf8(output.stdout).ok()?;
if let Some(untracked_output) =
run_git_command_with_timeout(&["ls-files", "--others", "--exclude-standard"], cwd).await
&& untracked_output.status.success()
{
let untracked: Vec<String> = String::from_utf8(untracked_output.stdout)
.ok()?
.lines()
.map(|s| s.to_string())
.filter(|s| !s.is_empty())
.collect();
if !untracked.is_empty() {
// Use platform-appropriate null device and guard paths with `--`.
let null_device: &str = if cfg!(windows) { "NUL" } else { "/dev/null" };
let futures_iter = untracked.into_iter().map(|file| async move {
let file_owned = file;
let args_vec: Vec<&str> = vec![
"diff",
"--no-textconv",
"--no-ext-diff",
"--binary",
"--no-index",
// -- ensures that filenames that start with - are not treated as options.
"--",
null_device,
&file_owned,
];
run_git_command_with_timeout(&args_vec, cwd).await
});
let results = join_all(futures_iter).await;
for extra in results.into_iter().flatten() {
if extra.status.code().is_some_and(|c| c == 0 || c == 1)
&& let Ok(s) = String::from_utf8(extra.stdout)
{
diff.push_str(&s);
}
}
}
}
Some(diff)
}
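// Example of the untracked handling above (illustrative): for a new file
// `notes.txt`, `git diff --no-index -- /dev/null notes.txt` exits with code 1
// and prints an all-additions diff, which gets appended to the tracked diff so
// untracked files show up in the final output.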
/// Resolve the path that should be used for trust checks. Similar to
/// [`get_git_repo_root`], but resolves to the root of the main
/// repository. Handles worktrees.
pub fn resolve_root_git_project_for_trust(cwd: &Path) -> Option<PathBuf> {
let base = if cwd.is_dir() { cwd } else { cwd.parent()? };
// TODO: we should make this async, but it's primarily used deep in
// callstacks of sync code, and should almost always be fast
let git_dir_out = std::process::Command::new("git")
.args(["rev-parse", "--git-common-dir"])
.current_dir(base)
.output()
.ok()?;
if !git_dir_out.status.success() {
return None;
}
let git_dir_s = String::from_utf8(git_dir_out.stdout)
.ok()?
.trim()
.to_string();
let git_dir_path_raw = if Path::new(&git_dir_s).is_absolute() {
PathBuf::from(&git_dir_s)
} else {
base.join(&git_dir_s)
};
// Normalize to handle macOS /var vs /private/var and resolve ".." segments.
let git_dir_path = std::fs::canonicalize(&git_dir_path_raw).unwrap_or(git_dir_path_raw);
git_dir_path.parent().map(Path::to_path_buf)
}
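// Worked example (illustrative): inside a linked worktree created via
// `git worktree add ../wt`, `git rev-parse --git-common-dir` points at the
// main repository's `.git` directory, so the parent returned here is the main
// repository root rather than the worktree checkout (see the worktree test
// below).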
#[cfg(test)]
mod tests {
use super::*;
@@ -104,7 +507,8 @@ mod tests {
// Helper function to create a test git repository
async fn create_test_git_repo(temp_dir: &TempDir) -> PathBuf {
let repo_path = temp_dir.path().to_path_buf();
let repo_path = temp_dir.path().join("repo");
fs::create_dir(&repo_path).expect("Failed to create repo dir");
let envs = vec![
("GIT_CONFIG_GLOBAL", "/dev/null"),
("GIT_CONFIG_NOSYSTEM", "1"),
@@ -159,6 +563,41 @@ mod tests {
repo_path
}
async fn create_test_git_repo_with_remote(temp_dir: &TempDir) -> (PathBuf, String) {
let repo_path = create_test_git_repo(temp_dir).await;
let remote_path = temp_dir.path().join("remote.git");
Command::new("git")
.args(["init", "--bare", remote_path.to_str().unwrap()])
.output()
.await
.expect("Failed to init bare remote");
Command::new("git")
.args(["remote", "add", "origin", remote_path.to_str().unwrap()])
.current_dir(&repo_path)
.output()
.await
.expect("Failed to add remote");
let output = Command::new("git")
.args(["rev-parse", "--abbrev-ref", "HEAD"])
.current_dir(&repo_path)
.output()
.await
.expect("Failed to get branch");
let branch = String::from_utf8(output.stdout).unwrap().trim().to_string();
Command::new("git")
.args(["push", "-u", "origin", &branch])
.current_dir(&repo_path)
.output()
.await
.expect("Failed to push initial commit");
(repo_path, branch)
}
#[tokio::test]
async fn test_collect_git_info_non_git_directory() {
let temp_dir = TempDir::new().expect("Failed to create temp dir");
@@ -272,6 +711,210 @@ mod tests {
assert_eq!(git_info.branch, Some("feature-branch".to_string()));
}
#[tokio::test]
async fn test_get_git_working_tree_state_clean_repo() {
let temp_dir = TempDir::new().expect("Failed to create temp dir");
let (repo_path, branch) = create_test_git_repo_with_remote(&temp_dir).await;
let remote_sha = Command::new("git")
.args(["rev-parse", &format!("origin/{branch}")])
.current_dir(&repo_path)
.output()
.await
.expect("Failed to rev-parse remote");
let remote_sha = String::from_utf8(remote_sha.stdout)
.unwrap()
.trim()
.to_string();
let state = git_diff_to_remote(&repo_path)
.await
.expect("Should collect working tree state");
assert_eq!(state.sha, GitSha::new(&remote_sha));
assert!(state.diff.is_empty());
}
#[tokio::test]
async fn test_get_git_working_tree_state_with_changes() {
let temp_dir = TempDir::new().expect("Failed to create temp dir");
let (repo_path, branch) = create_test_git_repo_with_remote(&temp_dir).await;
let tracked = repo_path.join("test.txt");
fs::write(&tracked, "modified").unwrap();
fs::write(repo_path.join("untracked.txt"), "new").unwrap();
let remote_sha = Command::new("git")
.args(["rev-parse", &format!("origin/{branch}")])
.current_dir(&repo_path)
.output()
.await
.expect("Failed to rev-parse remote");
let remote_sha = String::from_utf8(remote_sha.stdout)
.unwrap()
.trim()
.to_string();
let state = git_diff_to_remote(&repo_path)
.await
.expect("Should collect working tree state");
assert_eq!(state.sha, GitSha::new(&remote_sha));
assert!(state.diff.contains("test.txt"));
assert!(state.diff.contains("untracked.txt"));
}
#[tokio::test]
async fn test_get_git_working_tree_state_branch_fallback() {
let temp_dir = TempDir::new().expect("Failed to create temp dir");
let (repo_path, _branch) = create_test_git_repo_with_remote(&temp_dir).await;
Command::new("git")
.args(["checkout", "-b", "feature"])
.current_dir(&repo_path)
.output()
.await
.expect("Failed to create feature branch");
Command::new("git")
.args(["push", "-u", "origin", "feature"])
.current_dir(&repo_path)
.output()
.await
.expect("Failed to push feature branch");
Command::new("git")
.args(["checkout", "-b", "local-branch"])
.current_dir(&repo_path)
.output()
.await
.expect("Failed to create local branch");
let remote_sha = Command::new("git")
.args(["rev-parse", "origin/feature"])
.current_dir(&repo_path)
.output()
.await
.expect("Failed to rev-parse remote");
let remote_sha = String::from_utf8(remote_sha.stdout)
.unwrap()
.trim()
.to_string();
let state = git_diff_to_remote(&repo_path)
.await
.expect("Should collect working tree state");
assert_eq!(state.sha, GitSha::new(&remote_sha));
}
#[test]
fn resolve_root_git_project_for_trust_returns_none_outside_repo() {
let tmp = TempDir::new().expect("tempdir");
assert!(resolve_root_git_project_for_trust(tmp.path()).is_none());
}
#[tokio::test]
async fn resolve_root_git_project_for_trust_regular_repo_returns_repo_root() {
let temp_dir = TempDir::new().expect("Failed to create temp dir");
let repo_path = create_test_git_repo(&temp_dir).await;
let expected = std::fs::canonicalize(&repo_path).unwrap().to_path_buf();
assert_eq!(
resolve_root_git_project_for_trust(&repo_path),
Some(expected.clone())
);
let nested = repo_path.join("sub/dir");
std::fs::create_dir_all(&nested).unwrap();
assert_eq!(
resolve_root_git_project_for_trust(&nested),
Some(expected.clone())
);
}
#[tokio::test]
async fn resolve_root_git_project_for_trust_detects_worktree_and_returns_main_root() {
let temp_dir = TempDir::new().expect("Failed to create temp dir");
let repo_path = create_test_git_repo(&temp_dir).await;
// Create a linked worktree
let wt_root = temp_dir.path().join("wt");
let _ = std::process::Command::new("git")
.args([
"worktree",
"add",
wt_root.to_str().unwrap(),
"-b",
"feature/x",
])
.current_dir(&repo_path)
.output()
.expect("git worktree add");
let expected = std::fs::canonicalize(&repo_path).ok();
let got = resolve_root_git_project_for_trust(&wt_root)
.and_then(|p| std::fs::canonicalize(p).ok());
assert_eq!(got, expected);
let nested = wt_root.join("nested/sub");
std::fs::create_dir_all(&nested).unwrap();
let got_nested =
resolve_root_git_project_for_trust(&nested).and_then(|p| std::fs::canonicalize(p).ok());
assert_eq!(got_nested, expected);
}
#[test]
fn resolve_root_git_project_for_trust_non_worktrees_gitdir_returns_none() {
let tmp = TempDir::new().expect("tempdir");
let proj = tmp.path().join("proj");
std::fs::create_dir_all(proj.join("nested")).unwrap();
// `.git` is a file but does not point to a worktrees path
std::fs::write(
proj.join(".git"),
format!(
"gitdir: {}\n",
tmp.path().join("some/other/location").display()
),
)
.unwrap();
assert!(resolve_root_git_project_for_trust(&proj).is_none());
assert!(resolve_root_git_project_for_trust(&proj.join("nested")).is_none());
}
#[tokio::test]
async fn test_get_git_working_tree_state_unpushed_commit() {
let temp_dir = TempDir::new().expect("Failed to create temp dir");
let (repo_path, branch) = create_test_git_repo_with_remote(&temp_dir).await;
let remote_sha = Command::new("git")
.args(["rev-parse", &format!("origin/{branch}")])
.current_dir(&repo_path)
.output()
.await
.expect("Failed to rev-parse remote");
let remote_sha = String::from_utf8(remote_sha.stdout)
.unwrap()
.trim()
.to_string();
fs::write(repo_path.join("test.txt"), "updated").unwrap();
Command::new("git")
.args(["add", "test.txt"])
.current_dir(&repo_path)
.output()
.await
.expect("Failed to add file");
Command::new("git")
.args(["commit", "-m", "local change"])
.current_dir(&repo_path)
.output()
.await
.expect("Failed to commit");
let state = git_diff_to_remote(&repo_path)
.await
.expect("Should collect working tree state");
assert_eq!(state.sha, GitSha::new(&remote_sha));
assert!(state.diff.contains("updated"));
}
#[test]
fn test_git_info_serialization() {
let git_info = GitInfo {

View File

@@ -17,9 +17,11 @@ pub mod config;
pub mod config_profile;
pub mod config_types;
mod conversation_history;
pub mod custom_prompts;
mod environment_context;
pub mod error;
pub mod exec;
mod exec_command;
pub mod exec_env;
mod flags;
pub mod git_info;
@@ -39,16 +41,17 @@ mod conversation_manager;
pub use conversation_manager::ConversationManager;
pub use conversation_manager::NewConversation;
pub mod model_family;
mod models;
mod openai_model_info;
mod openai_tools;
pub mod plan_tool;
mod project_doc;
pub mod project_doc;
mod rollout;
pub(crate) mod safety;
pub mod seatbelt;
pub mod shell;
pub mod spawn;
pub mod terminal;
mod tool_apply_patch;
pub mod turn_diff_tracker;
pub mod user_agent;
mod user_notification;

View File

@@ -4,13 +4,13 @@ use std::time::Instant;
use tracing::error;
use crate::codex::Session;
use crate::models::FunctionCallOutputPayload;
use crate::models::ResponseInputItem;
use crate::protocol::Event;
use crate::protocol::EventMsg;
use crate::protocol::McpInvocation;
use crate::protocol::McpToolCallBeginEvent;
use crate::protocol::McpToolCallEndEvent;
use codex_protocol::models::FunctionCallOutputPayload;
use codex_protocol::models::ResponseInputItem;
/// Handles the specified tool call and dispatches the appropriate
/// `McpToolCallBegin` and `McpToolCallEnd` events to the `Session`.

View File

@@ -1,3 +1,5 @@
use crate::tool_apply_patch::ApplyPatchToolType;
/// A model family is a group of models that share certain characteristics.
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
pub struct ModelFamily {
@@ -24,9 +26,9 @@ pub struct ModelFamily {
// See https://platform.openai.com/docs/guides/tools-local-shell
pub uses_local_shell_tool: bool,
/// True if the model performs better when `apply_patch` is provided as
/// a tool call instead of just a bash command.
pub uses_apply_patch_tool: bool,
/// Present if the model performs better when `apply_patch` is provided as
/// a tool call instead of just a bash command.
pub apply_patch_tool_type: Option<ApplyPatchToolType>,
}
macro_rules! model_family {
@@ -40,7 +42,7 @@ macro_rules! model_family {
needs_special_apply_patch_instructions: false,
supports_reasoning_summaries: false,
uses_local_shell_tool: false,
uses_apply_patch_tool: false,
apply_patch_tool_type: None,
};
// apply overrides
$(
@@ -60,7 +62,7 @@ macro_rules! simple_model_family {
needs_special_apply_patch_instructions: false,
supports_reasoning_summaries: false,
uses_local_shell_tool: false,
uses_apply_patch_tool: false,
apply_patch_tool_type: None,
})
}};
}
@@ -95,7 +97,7 @@ pub fn find_family_for_model(slug: &str) -> Option<ModelFamily> {
needs_special_apply_patch_instructions: true,
)
} else if slug.starts_with("gpt-oss") {
model_family!(slug, "gpt-oss", uses_apply_patch_tool: true)
model_family!(slug, "gpt-oss", apply_patch_tool_type: Some(ApplyPatchToolType::Function))
} else if slug.starts_with("gpt-4o") {
simple_model_family!(slug, "gpt-4o")
} else if slug.starts_with("gpt-3.5") {

View File

@@ -17,6 +17,10 @@ use crate::error::EnvVarError;
const DEFAULT_STREAM_IDLE_TIMEOUT_MS: u64 = 300_000;
const DEFAULT_STREAM_MAX_RETRIES: u64 = 5;
const DEFAULT_REQUEST_MAX_RETRIES: u64 = 4;
/// Hard cap for user-configured `stream_max_retries`.
const MAX_STREAM_MAX_RETRIES: u64 = 100;
/// Hard cap for user-configured `request_max_retries`.
const MAX_REQUEST_MAX_RETRIES: u64 = 100;
/// Wire protocol that the provider speaks. Most third-party services only
/// implement the classic OpenAI Chat Completions JSON schema, whereas OpenAI
@@ -207,12 +211,14 @@ impl ModelProviderInfo {
pub fn request_max_retries(&self) -> u64 {
self.request_max_retries
.unwrap_or(DEFAULT_REQUEST_MAX_RETRIES)
.min(MAX_REQUEST_MAX_RETRIES)
}
/// Effective maximum number of stream reconnection attempts for this provider.
pub fn stream_max_retries(&self) -> u64 {
self.stream_max_retries
.unwrap_or(DEFAULT_STREAM_MAX_RETRIES)
.min(MAX_STREAM_MAX_RETRIES)
}
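// Worked examples (illustrative): `stream_max_retries = Some(500)` is clamped
// to MAX_STREAM_MAX_RETRIES (100), while `None` falls back to
// DEFAULT_STREAM_MAX_RETRIES (5); `request_max_retries` behaves the same way
// with its own default (4) and cap (100).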
/// Effective idle timeout for streaming responses.

View File

@@ -79,13 +79,13 @@ pub(crate) fn get_model_info(model_family: &ModelFamily) -> Option<ModelInfo> {
}),
"gpt-5" => Some(ModelInfo {
context_window: 200_000,
max_output_tokens: 100_000,
context_window: 400_000,
max_output_tokens: 128_000,
}),
_ if slug.starts_with("codex-") => Some(ModelInfo {
context_window: 200_000,
max_output_tokens: 100_000,
context_window: 400_000,
max_output_tokens: 128_000,
}),
_ => None,

View File

@@ -9,6 +9,9 @@ use crate::model_family::ModelFamily;
use crate::plan_tool::PLAN_TOOL;
use crate::protocol::AskForApproval;
use crate::protocol::SandboxPolicy;
use crate::tool_apply_patch::ApplyPatchToolType;
use crate::tool_apply_patch::create_apply_patch_freeform_tool;
use crate::tool_apply_patch::create_apply_patch_json_tool;
#[derive(Debug, Clone, Serialize, PartialEq)]
pub struct ResponsesApiTool {
@@ -21,6 +24,20 @@ pub struct ResponsesApiTool {
pub(crate) parameters: JsonSchema,
}
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
pub struct FreeformTool {
pub(crate) name: String,
pub(crate) description: String,
pub(crate) format: FreeformToolFormat,
}
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
pub struct FreeformToolFormat {
pub(crate) r#type: String,
pub(crate) syntax: String,
pub(crate) definition: String,
}
/// When serialized as JSON, this produces a valid "Tool" in the OpenAI
/// Responses API.
#[derive(Debug, Clone, Serialize, PartialEq)]
@@ -30,6 +47,12 @@ pub(crate) enum OpenAiTool {
Function(ResponsesApiTool),
#[serde(rename = "local_shell")]
LocalShell {},
// TODO: Understand why we get an error on web_search although the API docs say it's supported.
// https://platform.openai.com/docs/guides/tools-web-search?api-mode=responses#:~:text=%7B%20type%3A%20%22web_search%22%20%7D%2C
#[serde(rename = "web_search_preview")]
WebSearch {},
#[serde(rename = "custom")]
Freeform(FreeformTool),
}
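// Sketch of the wire shape (an assumption; the exact layout depends on serde
// container attributes that are outside this hunk): given the variant renames
// above, a Freeform tool would serialize along the lines of
// {"type": "custom", "name": "…", "description": "…",
//  "format": {"type": "…", "syntax": "…", "definition": "…"}}.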
#[derive(Debug, Clone)]
@@ -37,38 +60,72 @@ pub enum ConfigShellToolType {
DefaultShell,
ShellWithRequest { sandbox_policy: SandboxPolicy },
LocalShell,
StreamableShell,
}
#[derive(Debug, Clone)]
pub struct ToolsConfig {
pub(crate) struct ToolsConfig {
pub shell_type: ConfigShellToolType,
pub plan_tool: bool,
pub apply_patch_tool: bool,
pub apply_patch_tool_type: Option<ApplyPatchToolType>,
pub web_search_request: bool,
pub include_view_image_tool: bool,
}
pub(crate) struct ToolsConfigParams<'a> {
pub(crate) model_family: &'a ModelFamily,
pub(crate) approval_policy: AskForApproval,
pub(crate) sandbox_policy: SandboxPolicy,
pub(crate) include_plan_tool: bool,
pub(crate) include_apply_patch_tool: bool,
pub(crate) include_web_search_request: bool,
pub(crate) use_streamable_shell_tool: bool,
pub(crate) include_view_image_tool: bool,
}
impl ToolsConfig {
pub fn new(
model_family: &ModelFamily,
approval_policy: AskForApproval,
sandbox_policy: SandboxPolicy,
include_plan_tool: bool,
include_apply_patch_tool: bool,
) -> Self {
let mut shell_type = if model_family.uses_local_shell_tool {
pub fn new(params: &ToolsConfigParams) -> Self {
let ToolsConfigParams {
model_family,
approval_policy,
sandbox_policy,
include_plan_tool,
include_apply_patch_tool,
include_web_search_request,
use_streamable_shell_tool,
include_view_image_tool,
} = params;
let mut shell_type = if *use_streamable_shell_tool {
ConfigShellToolType::StreamableShell
} else if model_family.uses_local_shell_tool {
ConfigShellToolType::LocalShell
} else {
ConfigShellToolType::DefaultShell
};
if matches!(approval_policy, AskForApproval::OnRequest) {
if matches!(approval_policy, AskForApproval::OnRequest) && !use_streamable_shell_tool {
shell_type = ConfigShellToolType::ShellWithRequest {
sandbox_policy: sandbox_policy.clone(),
}
}
let apply_patch_tool_type = match model_family.apply_patch_tool_type {
Some(ApplyPatchToolType::Freeform) => Some(ApplyPatchToolType::Freeform),
Some(ApplyPatchToolType::Function) => Some(ApplyPatchToolType::Function),
None => {
if *include_apply_patch_tool {
Some(ApplyPatchToolType::Freeform)
} else {
None
}
}
};
Self {
shell_type,
plan_tool: include_plan_tool,
apply_patch_tool: include_apply_patch_tool || model_family.uses_apply_patch_tool,
plan_tool: *include_plan_tool,
apply_patch_tool_type,
web_search_request: *include_web_search_request,
include_view_image_tool: *include_view_image_tool,
}
}
}
@@ -115,16 +172,20 @@ fn create_shell_tool() -> OpenAiTool {
"command".to_string(),
JsonSchema::Array {
items: Box::new(JsonSchema::String { description: None }),
description: None,
description: Some("The command to execute".to_string()),
},
);
properties.insert(
"workdir".to_string(),
JsonSchema::String { description: None },
JsonSchema::String {
description: Some("The working directory to execute the command in".to_string()),
},
);
properties.insert(
"timeout".to_string(),
JsonSchema::Number { description: None },
"timeout_ms".to_string(),
JsonSchema::Number {
description: Some("The timeout for the command in milliseconds".to_string()),
},
);
OpenAiTool::Function(ResponsesApiTool {
@@ -155,7 +216,7 @@ fn create_shell_tool_for_sandbox(sandbox_policy: &SandboxPolicy) -> OpenAiTool {
},
);
properties.insert(
"timeout".to_string(),
"timeout_ms".to_string(),
JsonSchema::Number {
description: Some("The timeout for the command in milliseconds".to_string()),
},
@@ -171,7 +232,7 @@ fn create_shell_tool_for_sandbox(sandbox_policy: &SandboxPolicy) -> OpenAiTool {
properties.insert(
"justification".to_string(),
JsonSchema::String {
description: Some("Only set if ask_for_escalated_permissions is true. 1-sentence explanation of why we want to run this command.".to_string()),
description: Some("Only set if with_escalated_permissions is true. 1-sentence explanation of why we want to run this command.".to_string()),
},
);
}
@@ -238,102 +299,50 @@ The shell tool is used to execute shell commands.
})
}
fn create_view_image_tool() -> OpenAiTool {
// Support only local filesystem path.
let mut properties = BTreeMap::new();
properties.insert(
"path".to_string(),
JsonSchema::String {
description: Some("Local filesystem path to an image file".to_string()),
},
);
OpenAiTool::Function(ResponsesApiTool {
name: "view_image".to_string(),
description:
"Attach a local image (by filesystem path) to the conversation context for this turn."
.to_string(),
strict: false,
parameters: JsonSchema::Object {
properties,
required: Some(vec!["path".to_string()]),
additional_properties: Some(false),
},
})
}
/// TODO(dylan): deprecate once we get rid of json tool
#[derive(Serialize, Deserialize)]
pub(crate) struct ApplyPatchToolArgs {
pub(crate) input: String,
}
fn create_apply_patch_tool() -> OpenAiTool {
// Minimal schema: one required string argument containing the patch body
let mut properties = BTreeMap::new();
properties.insert(
"input".to_string(),
JsonSchema::String {
description: Some(r#"The entire contents of the apply_patch command"#.to_string()),
},
);
OpenAiTool::Function(ResponsesApiTool {
name: "apply_patch".to_string(),
description: r#"Use this tool to edit files.
Your patch language is a stripped-down, file-oriented diff format designed to be easy to parse and safe to apply. You can think of it as a high-level envelope:
*** Begin Patch
[ one or more file sections ]
*** End Patch
Within that envelope, you get a sequence of file operations.
You MUST include a header to specify the action you are taking.
Each operation starts with one of three headers:
*** Add File: <path> - create a new file. Every following line is a + line (the initial contents).
*** Delete File: <path> - remove an existing file. Nothing follows.
*** Update File: <path> - patch an existing file in place (optionally with a rename).
May be immediately followed by *** Move to: <new path> if you want to rename the file.
Then one or more “hunks”, each introduced by @@ (optionally followed by a hunk header).
Within a hunk each line starts with:
+ for inserted text,
- for removed text, or
space ( ) for context.
At the end of a truncated hunk you can emit *** End of File.
Patch := Begin { FileOp } End
Begin := "*** Begin Patch" NEWLINE
End := "*** End Patch" NEWLINE
FileOp := AddFile | DeleteFile | UpdateFile
AddFile := "*** Add File: " path NEWLINE { "+" line NEWLINE }
DeleteFile := "*** Delete File: " path NEWLINE
UpdateFile := "*** Update File: " path NEWLINE [ MoveTo ] { Hunk }
MoveTo := "*** Move to: " newPath NEWLINE
Hunk := "@@" [ header ] NEWLINE { HunkLine } [ "*** End of File" NEWLINE ]
HunkLine := (" " | "-" | "+") text NEWLINE
A full patch can combine several operations:
*** Begin Patch
*** Add File: hello.txt
+Hello world
*** Update File: src/app.py
*** Move to: src/main.py
@@ def greet():
-print("Hi")
+print("Hello, world!")
*** Delete File: obsolete.txt
*** End Patch
It is important to remember:
- You must include a header with your intended action (Add/Delete/Update)
- You must prefix new lines with `+` even when creating a new file
.to_string(),
strict: false,
parameters: JsonSchema::Object {
properties,
required: Some(vec!["input".to_string()]),
additional_properties: Some(false),
},
})
}
/// Returns JSON values that are compatible with Function Calling in the
/// Responses API:
/// https://platform.openai.com/docs/guides/function-calling?api-mode=responses
pub(crate) fn create_tools_json_for_responses_api(
pub fn create_tools_json_for_responses_api(
tools: &Vec<OpenAiTool>,
) -> crate::error::Result<Vec<serde_json::Value>> {
let mut tools_json = Vec::new();
for tool in tools {
tools_json.push(serde_json::to_value(tool)?);
let json = serde_json::to_value(tool)?;
tools_json.push(json);
}
Ok(tools_json)
}
/// Returns JSON values that are compatible with Function Calling in the
/// Chat Completions API:
/// https://platform.openai.com/docs/guides/function-calling?api-mode=chat
@@ -533,18 +542,47 @@ pub(crate) fn get_openai_tools(
ConfigShellToolType::LocalShell => {
tools.push(OpenAiTool::LocalShell {});
}
ConfigShellToolType::StreamableShell => {
tools.push(OpenAiTool::Function(
crate::exec_command::create_exec_command_tool_for_responses_api(),
));
tools.push(OpenAiTool::Function(
crate::exec_command::create_write_stdin_tool_for_responses_api(),
));
}
}
if config.plan_tool {
tools.push(PLAN_TOOL.clone());
}
if config.apply_patch_tool {
tools.push(create_apply_patch_tool());
if let Some(apply_patch_tool_type) = &config.apply_patch_tool_type {
match apply_patch_tool_type {
ApplyPatchToolType::Freeform => {
tools.push(create_apply_patch_freeform_tool());
}
ApplyPatchToolType::Function => {
tools.push(create_apply_patch_json_tool());
}
}
}
if config.web_search_request {
tools.push(OpenAiTool::WebSearch {});
}
// Include the view_image tool so the agent can attach images to context.
if config.include_view_image_tool {
tools.push(create_view_image_tool());
}
if let Some(mcp_tools) = mcp_tools {
for (name, tool) in mcp_tools {
// Ensure deterministic ordering to maximize prompt cache hits.
// HashMap iteration order is non-deterministic, so sort by fully-qualified tool name.
let mut entries: Vec<(String, mcp_types::Tool)> = mcp_tools.into_iter().collect();
entries.sort_by(|a, b| a.0.cmp(&b.0));
for (name, tool) in entries.into_iter() {
match mcp_tool_to_openai_tool(name.clone(), tool.clone()) {
Ok(converted_tool) => tools.push(OpenAiTool::Function(converted_tool)),
Err(e) => {
@@ -571,6 +609,8 @@ mod tests {
.map(|tool| match tool {
OpenAiTool::Function(ResponsesApiTool { name, .. }) => name,
OpenAiTool::LocalShell {} => "local_shell",
OpenAiTool::WebSearch {} => "web_search",
OpenAiTool::Freeform(FreeformTool { name, .. }) => name,
})
.collect::<Vec<_>>();
@@ -591,43 +631,58 @@ mod tests {
fn test_get_openai_tools() {
let model_family = find_family_for_model("codex-mini-latest")
.expect("codex-mini-latest should be a valid model family");
let config = ToolsConfig::new(
&model_family,
AskForApproval::Never,
SandboxPolicy::ReadOnly,
true,
model_family.uses_apply_patch_tool,
);
let config = ToolsConfig::new(&ToolsConfigParams {
model_family: &model_family,
approval_policy: AskForApproval::Never,
sandbox_policy: SandboxPolicy::ReadOnly,
include_plan_tool: true,
include_apply_patch_tool: false,
include_web_search_request: true,
use_streamable_shell_tool: false,
include_view_image_tool: true,
});
let tools = get_openai_tools(&config, Some(HashMap::new()));
assert_eq_tool_names(&tools, &["local_shell", "update_plan"]);
assert_eq_tool_names(
&tools,
&["local_shell", "update_plan", "web_search", "view_image"],
);
}
#[test]
fn test_get_openai_tools_default_shell() {
let model_family = find_family_for_model("o3").expect("o3 should be a valid model family");
let config = ToolsConfig::new(
&model_family,
AskForApproval::Never,
SandboxPolicy::ReadOnly,
true,
model_family.uses_apply_patch_tool,
);
let config = ToolsConfig::new(&ToolsConfigParams {
model_family: &model_family,
approval_policy: AskForApproval::Never,
sandbox_policy: SandboxPolicy::ReadOnly,
include_plan_tool: true,
include_apply_patch_tool: false,
include_web_search_request: true,
use_streamable_shell_tool: false,
include_view_image_tool: true,
});
let tools = get_openai_tools(&config, Some(HashMap::new()));
assert_eq_tool_names(&tools, &["shell", "update_plan"]);
assert_eq_tool_names(
&tools,
&["shell", "update_plan", "web_search", "view_image"],
);
}
#[test]
fn test_get_openai_tools_mcp_tools() {
let model_family = find_family_for_model("o3").expect("o3 should be a valid model family");
let config = ToolsConfig::new(
&model_family,
AskForApproval::Never,
SandboxPolicy::ReadOnly,
false,
model_family.uses_apply_patch_tool,
);
let config = ToolsConfig::new(&ToolsConfigParams {
model_family: &model_family,
approval_policy: AskForApproval::Never,
sandbox_policy: SandboxPolicy::ReadOnly,
include_plan_tool: false,
include_apply_patch_tool: false,
include_web_search_request: true,
use_streamable_shell_tool: false,
include_view_image_tool: true,
});
let tools = get_openai_tools(
&config,
Some(HashMap::from([(
@@ -650,7 +705,7 @@ mod tests {
},
"required": [
"string_property",
"number_property"
"number_property",
],
"additionalProperties": Some(false),
},
@@ -666,10 +721,18 @@ mod tests {
)])),
);
assert_eq_tool_names(&tools, &["shell", "test_server/do_something_cool"]);
assert_eq_tool_names(
&tools,
&[
"shell",
"web_search",
"view_image",
"test_server/do_something_cool",
],
);
assert_eq!(
tools[1],
tools[3],
OpenAiTool::Function(ResponsesApiTool {
name: "test_server/do_something_cool".to_string(),
parameters: JsonSchema::Object {
@@ -712,16 +775,96 @@ mod tests {
);
}
#[test]
fn test_get_openai_tools_mcp_tools_sorted_by_name() {
let model_family = find_family_for_model("o3").expect("o3 should be a valid model family");
let config = ToolsConfig::new(&ToolsConfigParams {
model_family: &model_family,
approval_policy: AskForApproval::Never,
sandbox_policy: SandboxPolicy::ReadOnly,
include_plan_tool: false,
include_apply_patch_tool: false,
include_web_search_request: false,
use_streamable_shell_tool: false,
include_view_image_tool: true,
});
// Intentionally construct a map with keys that would sort alphabetically.
let tools_map: HashMap<String, mcp_types::Tool> = HashMap::from([
(
"test_server/do".to_string(),
mcp_types::Tool {
name: "a".to_string(),
input_schema: ToolInputSchema {
properties: Some(serde_json::json!({})),
required: None,
r#type: "object".to_string(),
},
output_schema: None,
title: None,
annotations: None,
description: Some("a".to_string()),
},
),
(
"test_server/something".to_string(),
mcp_types::Tool {
name: "b".to_string(),
input_schema: ToolInputSchema {
properties: Some(serde_json::json!({})),
required: None,
r#type: "object".to_string(),
},
output_schema: None,
title: None,
annotations: None,
description: Some("b".to_string()),
},
),
(
"test_server/cool".to_string(),
mcp_types::Tool {
name: "c".to_string(),
input_schema: ToolInputSchema {
properties: Some(serde_json::json!({})),
required: None,
r#type: "object".to_string(),
},
output_schema: None,
title: None,
annotations: None,
description: Some("c".to_string()),
},
),
]);
let tools = get_openai_tools(&config, Some(tools_map));
// Expect shell first, followed by MCP tools sorted by fully-qualified name.
assert_eq_tool_names(
&tools,
&[
"shell",
"view_image",
"test_server/cool",
"test_server/do",
"test_server/something",
],
);
}
#[test]
fn test_mcp_tool_property_missing_type_defaults_to_string() {
let model_family = find_family_for_model("o3").expect("o3 should be a valid model family");
let config = ToolsConfig::new(
&model_family,
AskForApproval::Never,
SandboxPolicy::ReadOnly,
false,
model_family.uses_apply_patch_tool,
);
let config = ToolsConfig::new(&ToolsConfigParams {
model_family: &model_family,
approval_policy: AskForApproval::Never,
sandbox_policy: SandboxPolicy::ReadOnly,
include_plan_tool: false,
include_apply_patch_tool: false,
include_web_search_request: true,
use_streamable_shell_tool: false,
include_view_image_tool: true,
});
let tools = get_openai_tools(
&config,
@@ -746,10 +889,13 @@ mod tests {
)])),
);
assert_eq_tool_names(&tools, &["shell", "dash/search"]);
assert_eq_tool_names(
&tools,
&["shell", "web_search", "view_image", "dash/search"],
);
assert_eq!(
tools[1],
tools[3],
OpenAiTool::Function(ResponsesApiTool {
name: "dash/search".to_string(),
parameters: JsonSchema::Object {
@@ -771,13 +917,16 @@ mod tests {
#[test]
fn test_mcp_tool_integer_normalized_to_number() {
let model_family = find_family_for_model("o3").expect("o3 should be a valid model family");
let config = ToolsConfig::new(
&model_family,
AskForApproval::Never,
SandboxPolicy::ReadOnly,
false,
model_family.uses_apply_patch_tool,
);
let config = ToolsConfig::new(&ToolsConfigParams {
model_family: &model_family,
approval_policy: AskForApproval::Never,
sandbox_policy: SandboxPolicy::ReadOnly,
include_plan_tool: false,
include_apply_patch_tool: false,
include_web_search_request: true,
use_streamable_shell_tool: false,
include_view_image_tool: true,
});
let tools = get_openai_tools(
&config,
@@ -800,9 +949,12 @@ mod tests {
)])),
);
assert_eq_tool_names(&tools, &["shell", "dash/paginate"]);
assert_eq_tool_names(
&tools,
&["shell", "web_search", "view_image", "dash/paginate"],
);
assert_eq!(
tools[1],
tools[3],
OpenAiTool::Function(ResponsesApiTool {
name: "dash/paginate".to_string(),
parameters: JsonSchema::Object {
@@ -822,13 +974,16 @@ mod tests {
#[test]
fn test_mcp_tool_array_without_items_gets_default_string_items() {
let model_family = find_family_for_model("o3").expect("o3 should be a valid model family");
let config = ToolsConfig::new(
&model_family,
AskForApproval::Never,
SandboxPolicy::ReadOnly,
false,
model_family.uses_apply_patch_tool,
);
let config = ToolsConfig::new(&ToolsConfigParams {
model_family: &model_family,
approval_policy: AskForApproval::Never,
sandbox_policy: SandboxPolicy::ReadOnly,
include_plan_tool: false,
include_apply_patch_tool: false,
include_web_search_request: true,
use_streamable_shell_tool: false,
include_view_image_tool: true,
});
let tools = get_openai_tools(
&config,
@@ -851,9 +1006,9 @@ mod tests {
)])),
);
assert_eq_tool_names(&tools, &["shell", "dash/tags"]);
assert_eq_tool_names(&tools, &["shell", "web_search", "view_image", "dash/tags"]);
assert_eq!(
tools[1],
tools[3],
OpenAiTool::Function(ResponsesApiTool {
name: "dash/tags".to_string(),
parameters: JsonSchema::Object {
@@ -876,13 +1031,16 @@ mod tests {
#[test]
fn test_mcp_tool_anyof_defaults_to_string() {
let model_family = find_family_for_model("o3").expect("o3 should be a valid model family");
let config = ToolsConfig::new(
&model_family,
AskForApproval::Never,
SandboxPolicy::ReadOnly,
false,
model_family.uses_apply_patch_tool,
);
let config = ToolsConfig::new(&ToolsConfigParams {
model_family: &model_family,
approval_policy: AskForApproval::Never,
sandbox_policy: SandboxPolicy::ReadOnly,
include_plan_tool: false,
include_apply_patch_tool: false,
include_web_search_request: true,
use_streamable_shell_tool: false,
include_view_image_tool: true,
});
let tools = get_openai_tools(
&config,
@@ -905,9 +1063,9 @@ mod tests {
)])),
);
assert_eq_tool_names(&tools, &["shell", "dash/value"]);
assert_eq_tool_names(&tools, &["shell", "web_search", "view_image", "dash/value"]);
assert_eq!(
tools[1],
tools[3],
OpenAiTool::Function(ResponsesApiTool {
name: "dash/value".to_string(),
parameters: JsonSchema::Object {

View File

@@ -20,22 +20,6 @@ pub enum ParsedCommand {
query: Option<String>,
path: Option<String>,
},
Format {
cmd: String,
tool: Option<String>,
targets: Option<Vec<String>>,
},
Test {
cmd: String,
},
Lint {
cmd: String,
tool: Option<String>,
targets: Option<Vec<String>>,
},
Noop {
cmd: String,
},
Unknown {
cmd: String,
},
@@ -50,10 +34,6 @@ impl From<ParsedCommand> for codex_protocol::parse_command::ParsedCommand {
ParsedCommand::Read { cmd, name } => P::Read { cmd, name },
ParsedCommand::ListFiles { cmd, path } => P::ListFiles { cmd, path },
ParsedCommand::Search { cmd, query, path } => P::Search { cmd, query, path },
ParsedCommand::Format { cmd, tool, targets } => P::Format { cmd, tool, targets },
ParsedCommand::Test { cmd } => P::Test { cmd },
ParsedCommand::Lint { cmd, tool, targets } => P::Lint { cmd, tool, targets },
ParsedCommand::Noop { cmd } => P::Noop { cmd },
ParsedCommand::Unknown { cmd } => P::Unknown { cmd },
}
}
@@ -122,7 +102,7 @@ mod tests {
assert_parsed(
&vec_str(&["bash", "-lc", inner]),
vec![ParsedCommand::Unknown {
cmd: "git status | wc -l".to_string(),
cmd: "git status".to_string(),
}],
);
}
@@ -244,6 +224,17 @@ mod tests {
);
}
#[test]
fn cd_then_cat_is_single_read() {
assert_parsed(
&shlex_split_safe("cd foo && cat foo.txt"),
vec![ParsedCommand::Read {
cmd: "cat foo.txt".to_string(),
name: "foo.txt".to_string(),
}],
);
}
#[test]
fn supports_ls_with_pipe() {
let inner = "ls -la | sed -n '1,120p'";
@@ -315,27 +306,6 @@ mod tests {
);
}
#[test]
fn supports_npm_run_with_forwarded_args() {
assert_parsed(
&vec_str(&[
"npm",
"run",
"lint",
"--",
"--max-warnings",
"0",
"--format",
"json",
]),
vec![ParsedCommand::Lint {
cmd: "npm run lint -- --max-warnings 0 --format json".to_string(),
tool: Some("npm-script:lint".to_string()),
targets: None,
}],
);
}
#[test]
fn supports_grep_recursive_current_dir() {
assert_parsed(
@@ -396,173 +366,10 @@ mod tests {
fn supports_cd_and_rg_files() {
assert_parsed(
&shlex_split_safe("cd codex-rs && rg --files"),
vec![
ParsedCommand::Unknown {
cmd: "cd codex-rs".to_string(),
},
ParsedCommand::Search {
cmd: "rg --files".to_string(),
query: None,
path: None,
},
],
);
}
#[test]
fn echo_then_cargo_test_sequence() {
assert_parsed(
&shlex_split_safe("echo Running tests... && cargo test --all-features --quiet"),
vec![ParsedCommand::Test {
cmd: "cargo test --all-features --quiet".to_string(),
}],
);
}
#[test]
fn supports_cargo_fmt_and_test_with_config() {
assert_parsed(
&shlex_split_safe(
"cargo fmt -- --config imports_granularity=Item && cargo test -p core --all-features",
),
vec![
ParsedCommand::Format {
cmd: "cargo fmt -- --config 'imports_granularity=Item'".to_string(),
tool: Some("cargo fmt".to_string()),
targets: None,
},
ParsedCommand::Test {
cmd: "cargo test -p core --all-features".to_string(),
},
],
);
}
#[test]
fn recognizes_rustfmt_and_clippy() {
assert_parsed(
&shlex_split_safe("rustfmt src/main.rs"),
vec![ParsedCommand::Format {
cmd: "rustfmt src/main.rs".to_string(),
tool: Some("rustfmt".to_string()),
targets: Some(vec!["src/main.rs".to_string()]),
}],
);
assert_parsed(
&shlex_split_safe("cargo clippy -p core --all-features -- -D warnings"),
vec![ParsedCommand::Lint {
cmd: "cargo clippy -p core --all-features -- -D warnings".to_string(),
tool: Some("cargo clippy".to_string()),
targets: None,
}],
);
}
#[test]
fn recognizes_pytest_go_and_tools() {
assert_parsed(
&shlex_split_safe(
"pytest -k 'Login and not slow' tests/test_login.py::TestLogin::test_ok",
),
vec![ParsedCommand::Test {
cmd: "pytest -k 'Login and not slow' tests/test_login.py::TestLogin::test_ok"
.to_string(),
}],
);
assert_parsed(
&shlex_split_safe("go fmt ./..."),
vec![ParsedCommand::Format {
cmd: "go fmt ./...".to_string(),
tool: Some("go fmt".to_string()),
targets: Some(vec!["./...".to_string()]),
}],
);
assert_parsed(
&shlex_split_safe("go test ./pkg -run TestThing"),
vec![ParsedCommand::Test {
cmd: "go test ./pkg -run TestThing".to_string(),
}],
);
assert_parsed(
&shlex_split_safe("eslint . --max-warnings 0"),
vec![ParsedCommand::Lint {
cmd: "eslint . --max-warnings 0".to_string(),
tool: Some("eslint".to_string()),
targets: Some(vec![".".to_string()]),
}],
);
assert_parsed(
&shlex_split_safe("prettier -w ."),
vec![ParsedCommand::Format {
cmd: "prettier -w .".to_string(),
tool: Some("prettier".to_string()),
targets: Some(vec![".".to_string()]),
}],
);
}
#[test]
fn recognizes_jest_and_vitest_filters() {
assert_parsed(
&shlex_split_safe("jest -t 'should work' src/foo.test.ts"),
vec![ParsedCommand::Test {
cmd: "jest -t 'should work' src/foo.test.ts".to_string(),
}],
);
assert_parsed(
&shlex_split_safe("vitest -t 'runs' src/foo.test.tsx"),
vec![ParsedCommand::Test {
cmd: "vitest -t runs src/foo.test.tsx".to_string(),
}],
);
}
#[test]
fn recognizes_npx_and_scripts() {
assert_parsed(
&shlex_split_safe("npx eslint src"),
vec![ParsedCommand::Lint {
cmd: "npx eslint src".to_string(),
tool: Some("eslint".to_string()),
targets: Some(vec!["src".to_string()]),
}],
);
assert_parsed(
&shlex_split_safe("npx prettier -c ."),
vec![ParsedCommand::Format {
cmd: "npx prettier -c .".to_string(),
tool: Some("prettier".to_string()),
targets: Some(vec![".".to_string()]),
}],
);
assert_parsed(
&shlex_split_safe("pnpm run lint -- --max-warnings 0"),
vec![ParsedCommand::Lint {
cmd: "pnpm run lint -- --max-warnings 0".to_string(),
tool: Some("pnpm-script:lint".to_string()),
targets: None,
}],
);
assert_parsed(
&shlex_split_safe("npm test"),
vec![ParsedCommand::Test {
cmd: "npm test".to_string(),
}],
);
assert_parsed(
&shlex_split_safe("yarn test"),
vec![ParsedCommand::Test {
cmd: "yarn test".to_string(),
vec![ParsedCommand::Search {
cmd: "rg --files".to_string(),
query: None,
path: None,
}],
);
}
@@ -770,6 +577,51 @@ mod tests {
);
}
#[test]
fn parses_mixed_sequence_with_pipes_semicolons_and_or() {
// A long command sequence combining sequencing, pipelines, and OR fallbacks.
let inner = "pwd; ls -la; rg --files -g '!target' | wc -l; rg -n '^\\[workspace\\]' -n Cargo.toml || true; rg -n '^\\[package\\]' -n */Cargo.toml || true; cargo --version; rustc --version; cargo clippy --workspace --all-targets --all-features -q";
let args = vec_str(&["bash", "-lc", inner]);
let expected = vec![
ParsedCommand::Unknown {
cmd: "pwd".to_string(),
},
ParsedCommand::ListFiles {
cmd: shlex_join(&shlex_split_safe("ls -la")),
path: None,
},
ParsedCommand::Search {
cmd: shlex_join(&shlex_split_safe("rg --files -g '!target'")),
query: None,
path: Some("!target".to_string()),
},
ParsedCommand::Search {
cmd: shlex_join(&shlex_split_safe("rg -n '^\\[workspace\\]' -n Cargo.toml")),
query: Some("^\\[workspace\\]".to_string()),
path: Some("Cargo.toml".to_string()),
},
ParsedCommand::Search {
cmd: shlex_join(&shlex_split_safe("rg -n '^\\[package\\]' -n */Cargo.toml")),
query: Some("^\\[package\\]".to_string()),
path: Some("Cargo.toml".to_string()),
},
ParsedCommand::Unknown {
cmd: shlex_join(&shlex_split_safe("cargo --version")),
},
ParsedCommand::Unknown {
cmd: shlex_join(&shlex_split_safe("rustc --version")),
},
ParsedCommand::Unknown {
cmd: shlex_join(&shlex_split_safe(
"cargo clippy --workspace --all-targets --all-features -q",
)),
},
];
assert_parsed(&args, expected);
}
#[test]
fn strips_true_in_sequence() {
// `true` should be dropped from parsed sequences
@@ -867,159 +719,6 @@ mod tests {
);
}
#[test]
fn pnpm_test_is_parsed_as_test() {
assert_parsed(
&shlex_split_safe("pnpm test"),
vec![ParsedCommand::Test {
cmd: "pnpm test".to_string(),
}],
);
}
#[test]
fn pnpm_exec_vitest_is_unknown() {
// From commands_combined: cd codex-cli && pnpm exec vitest run tests/... --threads=false --passWithNoTests
let inner = "cd codex-cli && pnpm exec vitest run tests/file-tag-utils.test.ts --threads=false --passWithNoTests";
assert_parsed(
&shlex_split_safe(inner),
vec![
ParsedCommand::Unknown {
cmd: "cd codex-cli".to_string(),
},
ParsedCommand::Unknown {
cmd: "pnpm exec vitest run tests/file-tag-utils.test.ts '--threads=false' --passWithNoTests".to_string(),
},
],
);
}
#[test]
fn cargo_test_with_crate() {
assert_parsed(
&shlex_split_safe("cargo test -p codex-core parse_command::"),
vec![ParsedCommand::Test {
cmd: "cargo test -p codex-core parse_command::".to_string(),
}],
);
}
#[test]
fn cargo_test_with_crate_2() {
assert_parsed(
&shlex_split_safe(
"cd core && cargo test -q parse_command::tests::bash_dash_c_pipeline_parsing parse_command::tests::fd_file_finder_variants",
),
vec![ParsedCommand::Test {
cmd: "cargo test -q parse_command::tests::bash_dash_c_pipeline_parsing parse_command::tests::fd_file_finder_variants".to_string(),
}],
);
}
#[test]
fn cargo_test_with_crate_3() {
assert_parsed(
&shlex_split_safe("cd core && cargo test -q parse_command::tests"),
vec![ParsedCommand::Test {
cmd: "cargo test -q parse_command::tests".to_string(),
}],
);
}
#[test]
fn cargo_test_with_crate_4() {
assert_parsed(
&shlex_split_safe("cd core && cargo test --all-features parse_command -- --nocapture"),
vec![ParsedCommand::Test {
cmd: "cargo test --all-features parse_command -- --nocapture".to_string(),
}],
);
}
// Additional coverage for other common tools/frameworks
#[test]
fn recognizes_black_and_ruff() {
// black formats Python code
assert_parsed(
&shlex_split_safe("black src"),
vec![ParsedCommand::Format {
cmd: "black src".to_string(),
tool: Some("black".to_string()),
targets: Some(vec!["src".to_string()]),
}],
);
// ruff check is a linter; ensure we collect targets
assert_parsed(
&shlex_split_safe("ruff check ."),
vec![ParsedCommand::Lint {
cmd: "ruff check .".to_string(),
tool: Some("ruff".to_string()),
targets: Some(vec![".".to_string()]),
}],
);
// ruff format is a formatter
assert_parsed(
&shlex_split_safe("ruff format pkg/"),
vec![ParsedCommand::Format {
cmd: "ruff format pkg/".to_string(),
tool: Some("ruff".to_string()),
targets: Some(vec!["pkg/".to_string()]),
}],
);
}
#[test]
fn recognizes_pnpm_monorepo_test_and_npm_format_script() {
// pnpm -r test in a monorepo should still parse as a test action
assert_parsed(
&shlex_split_safe("pnpm -r test"),
vec![ParsedCommand::Test {
cmd: "pnpm -r test".to_string(),
}],
);
// npm run format should be recognized as a format action
assert_parsed(
&shlex_split_safe("npm run format -- -w ."),
vec![ParsedCommand::Format {
cmd: "npm run format -- -w .".to_string(),
tool: Some("npm-script:format".to_string()),
targets: None,
}],
);
}
#[test]
fn yarn_test_is_parsed_as_test() {
assert_parsed(
&shlex_split_safe("yarn test"),
vec![ParsedCommand::Test {
cmd: "yarn test".to_string(),
}],
);
}
#[test]
fn pytest_file_only_and_go_run_regex() {
// pytest invoked with a file path should be captured as a filter
assert_parsed(
&shlex_split_safe("pytest tests/test_example.py"),
vec![ParsedCommand::Test {
cmd: "pytest tests/test_example.py".to_string(),
}],
);
// go test with -run regex should capture the filter
assert_parsed(
&shlex_split_safe("go test ./... -run '^TestFoo$'"),
vec![ParsedCommand::Test {
cmd: "go test ./... -run '^TestFoo$'".to_string(),
}],
);
}
#[test]
fn grep_with_query_and_path() {
assert_parsed(
@@ -1090,30 +789,6 @@ mod tests {
);
}
#[test]
fn eslint_with_config_path_and_target() {
assert_parsed(
&shlex_split_safe("eslint -c .eslintrc.json src"),
vec![ParsedCommand::Lint {
cmd: "eslint -c .eslintrc.json src".to_string(),
tool: Some("eslint".to_string()),
targets: Some(vec!["src".to_string()]),
}],
);
}
#[test]
fn npx_eslint_with_config_path_and_target() {
assert_parsed(
&shlex_split_safe("npx eslint -c .eslintrc src"),
vec![ParsedCommand::Lint {
cmd: "npx eslint -c .eslintrc src".to_string(),
tool: Some("eslint".to_string()),
targets: Some(vec!["src".to_string()]),
}],
);
}
#[test]
fn fd_file_finder_variants() {
assert_parsed(
@@ -1202,16 +877,13 @@ fn simplify_once(commands: &[ParsedCommand]) -> Option<Vec<ParsedCommand>> {
return Some(commands[1..].to_vec());
}
// cd foo && [any command] => [any command] (keep non-cd when a cd is followed by something)
if let Some(idx) = commands.iter().position(|pc| match pc {
    ParsedCommand::Unknown { cmd } => {
        shlex_split(cmd).is_some_and(|t| t.first().map(|s| s.as_str()) == Some("cd"))
    }
    _ => false,
}) && commands.len() > idx + 1
{
let mut out = Vec::with_capacity(commands.len() - 1);
out.extend_from_slice(&commands[..idx]);
@@ -1220,10 +892,10 @@ fn simplify_once(commands: &[ParsedCommand]) -> Option<Vec<ParsedCommand>> {
}
// cmd || true => cmd
if let Some(idx) = commands
    .iter()
    .position(|pc| matches!(pc, ParsedCommand::Unknown { cmd } if cmd == "true"))
{
let mut out = Vec::with_capacity(commands.len() - 1);
out.extend_from_slice(&commands[..idx]);
out.extend_from_slice(&commands[idx + 1..]);
@@ -1377,75 +1049,6 @@ fn skip_flag_values<'a>(args: &'a [String], flags_with_vals: &[&str]) -> Vec<&'a
out
}
/// Common flags for ESLint that take a following value and should not be
/// considered positional targets.
const ESLINT_FLAGS_WITH_VALUES: &[&str] = &[
"-c",
"--config",
"--parser",
"--parser-options",
"--rulesdir",
"--plugin",
"--max-warnings",
"--format",
];
fn collect_non_flag_targets(args: &[String]) -> Option<Vec<String>> {
let mut targets = Vec::new();
let mut skip_next = false;
for (i, a) in args.iter().enumerate() {
if a == "--" {
break;
}
if skip_next {
skip_next = false;
continue;
}
if a == "-p"
|| a == "--package"
|| a == "--features"
|| a == "-C"
|| a == "--config"
|| a == "--config-path"
|| a == "--out-dir"
|| a == "-o"
|| a == "--run"
|| a == "--max-warnings"
|| a == "--format"
{
if i + 1 < args.len() {
skip_next = true;
}
continue;
}
if a.starts_with('-') {
continue;
}
targets.push(a.clone());
}
if targets.is_empty() {
None
} else {
Some(targets)
}
}
fn collect_non_flag_targets_with_flags(
args: &[String],
flags_with_vals: &[&str],
) -> Option<Vec<String>> {
let targets: Vec<String> = skip_flag_values(args, flags_with_vals)
.into_iter()
.filter(|a| !a.starts_with('-'))
.cloned()
.collect();
if targets.is_empty() {
None
} else {
Some(targets)
}
}
fn is_pathish(s: &str) -> bool {
s == "."
|| s == ".."
@@ -1514,47 +1117,6 @@ fn parse_find_query_and_path(tail: &[String]) -> (Option<String>, Option<String>
(query, path)
}
fn classify_npm_like(tool: &str, tail: &[String], full_cmd: &[String]) -> Option<ParsedCommand> {
let mut r = tail;
if tool == "pnpm" && r.first().map(|s| s.as_str()) == Some("-r") {
r = &r[1..];
}
let mut script_name: Option<String> = None;
if r.first().map(|s| s.as_str()) == Some("run") {
script_name = r.get(1).cloned();
} else {
let is_test_cmd = (tool == "npm" && r.first().map(|s| s.as_str()) == Some("t"))
|| ((tool == "npm" || tool == "pnpm" || tool == "yarn")
&& r.first().map(|s| s.as_str()) == Some("test"));
if is_test_cmd {
script_name = Some("test".to_string());
}
}
if let Some(name) = script_name {
let lname = name.to_lowercase();
if lname == "test" || lname == "unit" || lname == "jest" || lname == "vitest" {
return Some(ParsedCommand::Test {
cmd: shlex_join(full_cmd),
});
}
if lname == "lint" || lname == "eslint" {
return Some(ParsedCommand::Lint {
cmd: shlex_join(full_cmd),
tool: Some(format!("{tool}-script:{name}")),
targets: None,
});
}
if lname == "format" || lname == "fmt" || lname == "prettier" {
return Some(ParsedCommand::Format {
cmd: shlex_join(full_cmd),
tool: Some(format!("{tool}-script:{name}")),
targets: None,
});
}
}
None
}
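// Illustrative behavior of `classify_npm_like` (examples only; `cmd` fields
// abbreviated, argument vectors shown inline):
//   ("pnpm", ["-r", "test"])  => Some(ParsedCommand::Test { cmd: "pnpm -r test" })
//   ("npm",  ["run", "lint"]) => Some(ParsedCommand::Lint { tool: Some("npm-script:lint"), targets: None, .. })
//   ("yarn", ["build"])       => None (unrecognized scripts fall back to Unknown at the call site)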
fn parse_bash_lc_commands(original: &[String]) -> Option<Vec<ParsedCommand>> {
let [bash, flag, script] = original else {
return None;
@@ -1586,7 +1148,7 @@ fn parse_bash_lc_commands(original: &[String]) -> Option<Vec<ParsedCommand>> {
.map(|tokens| summarize_main_tokens(&tokens))
.collect();
if commands.len() > 1 {
commands.retain(|pc| !matches!(pc, ParsedCommand::Unknown { cmd } if cmd == "true"));
}
if commands.len() == 1 {
// If we reduced to a single command, attribute the full original script
@@ -1655,27 +1217,7 @@ fn parse_bash_lc_commands(original: &[String]) -> Option<Vec<ParsedCommand>> {
}
}
}
other => other,
})
.collect();
}
@@ -1728,124 +1270,6 @@ fn drop_small_formatting_commands(mut commands: Vec<Vec<String>>) -> Vec<Vec<Str
fn summarize_main_tokens(main_cmd: &[String]) -> ParsedCommand {
match main_cmd.split_first() {
Some((head, tail)) if head == "true" && tail.is_empty() => ParsedCommand::Noop {
cmd: shlex_join(main_cmd),
},
// (sed-specific logic handled below in dedicated arm returning Read)
Some((head, tail))
if head == "cargo" && tail.first().map(|s| s.as_str()) == Some("fmt") =>
{
ParsedCommand::Format {
cmd: shlex_join(main_cmd),
tool: Some("cargo fmt".to_string()),
targets: collect_non_flag_targets(&tail[1..]),
}
}
Some((head, tail))
if head == "cargo" && tail.first().map(|s| s.as_str()) == Some("clippy") =>
{
ParsedCommand::Lint {
cmd: shlex_join(main_cmd),
tool: Some("cargo clippy".to_string()),
targets: collect_non_flag_targets(&tail[1..]),
}
}
Some((head, tail))
if head == "cargo" && tail.first().map(|s| s.as_str()) == Some("test") =>
{
ParsedCommand::Test {
cmd: shlex_join(main_cmd),
}
}
Some((head, tail)) if head == "rustfmt" => ParsedCommand::Format {
cmd: shlex_join(main_cmd),
tool: Some("rustfmt".to_string()),
targets: collect_non_flag_targets(tail),
},
Some((head, tail)) if head == "go" && tail.first().map(|s| s.as_str()) == Some("fmt") => {
ParsedCommand::Format {
cmd: shlex_join(main_cmd),
tool: Some("go fmt".to_string()),
targets: collect_non_flag_targets(&tail[1..]),
}
}
Some((head, tail)) if head == "go" && tail.first().map(|s| s.as_str()) == Some("test") => {
ParsedCommand::Test {
cmd: shlex_join(main_cmd),
}
}
Some((head, _)) if head == "pytest" => ParsedCommand::Test {
cmd: shlex_join(main_cmd),
},
Some((head, tail)) if head == "eslint" => {
// Treat configuration flags with values (e.g. `-c .eslintrc`) as non-targets.
let targets = collect_non_flag_targets_with_flags(tail, ESLINT_FLAGS_WITH_VALUES);
ParsedCommand::Lint {
cmd: shlex_join(main_cmd),
tool: Some("eslint".to_string()),
targets,
}
}
Some((head, tail)) if head == "prettier" => ParsedCommand::Format {
cmd: shlex_join(main_cmd),
tool: Some("prettier".to_string()),
targets: collect_non_flag_targets(tail),
},
Some((head, tail)) if head == "black" => ParsedCommand::Format {
cmd: shlex_join(main_cmd),
tool: Some("black".to_string()),
targets: collect_non_flag_targets(tail),
},
Some((head, tail))
if head == "ruff" && tail.first().map(|s| s.as_str()) == Some("check") =>
{
ParsedCommand::Lint {
cmd: shlex_join(main_cmd),
tool: Some("ruff".to_string()),
targets: collect_non_flag_targets(&tail[1..]),
}
}
Some((head, tail))
if head == "ruff" && tail.first().map(|s| s.as_str()) == Some("format") =>
{
ParsedCommand::Format {
cmd: shlex_join(main_cmd),
tool: Some("ruff".to_string()),
targets: collect_non_flag_targets(&tail[1..]),
}
}
Some((head, _)) if (head == "jest" || head == "vitest") => ParsedCommand::Test {
cmd: shlex_join(main_cmd),
},
Some((head, tail))
if head == "npx" && tail.first().map(|s| s.as_str()) == Some("eslint") =>
{
let targets = collect_non_flag_targets_with_flags(&tail[1..], ESLINT_FLAGS_WITH_VALUES);
ParsedCommand::Lint {
cmd: shlex_join(main_cmd),
tool: Some("eslint".to_string()),
targets,
}
}
Some((head, tail))
if head == "npx" && tail.first().map(|s| s.as_str()) == Some("prettier") =>
{
ParsedCommand::Format {
cmd: shlex_join(main_cmd),
tool: Some("prettier".to_string()),
targets: collect_non_flag_targets(&tail[1..]),
}
}
// NPM-like scripts including yarn
Some((tool, tail)) if (tool == "pnpm" || tool == "npm" || tool == "yarn") => {
if let Some(cmd) = classify_npm_like(tool, tail, main_cmd) {
cmd
} else {
ParsedCommand::Unknown {
cmd: shlex_join(main_cmd),
}
}
}
Some((head, tail)) if head == "ls" => {
// Avoid treating option values as paths (e.g., ls -I "*.test.js").
let candidates = skip_flag_values(

View File

@@ -2,13 +2,13 @@ use std::collections::BTreeMap;
use std::sync::LazyLock;
use crate::codex::Session;
use crate::openai_tools::JsonSchema;
use crate::openai_tools::OpenAiTool;
use crate::openai_tools::ResponsesApiTool;
use crate::protocol::Event;
use crate::protocol::EventMsg;
use codex_protocol::models::FunctionCallOutputPayload;
use codex_protocol::models::ResponseInputItem;
// Use the canonical plan tool types from the protocol crate to ensure
// type-identity matches events transported via `codex_protocol`.

View File

@@ -1,18 +1,19 @@
//! Project-level documentation discovery.
//!
//! Project-level documentation can be stored in a file named `AGENTS.md`.
//! Project-level documentation can be stored in files named `AGENTS.md`.
//! We include the concatenation of all files found along the path from the
//! repository root to the current working directory as follows:
//!
//! 1. Determine the Git repository root by walking upwards from the current
//!    working directory until a `.git` directory or file is found. If no Git
//!    root is found, only the current working directory is considered.
//! 2. Collect every `AGENTS.md` found from the repository root down to the
//!    current working directory (inclusive) and concatenate their contents in
//!    that order.
//! 3. We do **not** walk past the Git root.
use crate::config::Config;
use std::path::Path;
use std::path::PathBuf;
use tokio::io::AsyncReadExt;
use tracing::error;
@@ -26,7 +27,7 @@ const PROJECT_DOC_SEPARATOR: &str = "\n\n--- project-doc ---\n\n";
/// Combines `Config::instructions` and `AGENTS.md` (if present) into a single
/// string of instructions.
pub(crate) async fn get_user_instructions(config: &Config) -> Option<String> {
match read_project_docs(config).await {
Ok(Some(project_doc)) => match &config.user_instructions {
Some(original_instructions) => Some(format!(
"{original_instructions}{PROJECT_DOC_SEPARATOR}{project_doc}"
@@ -41,95 +42,135 @@ pub(crate) async fn get_user_instructions(config: &Config) -> Option<String> {
}
}
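// For example (illustrative values): with user instructions "be terse" and a
// single discovered AGENTS.md containing "use rg", the combined result is
// "be terse\n\n--- project-doc ---\n\nuse rg".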
/// Attempt to locate and load the project documentation.
///
/// On success returns `Ok(Some(contents))` where `contents` is the
/// concatenation of all discovered docs. If no documentation file is found the
/// function returns `Ok(None)`. Unexpected I/O failures bubble up as `Err` so
/// callers can decide how to handle them.
pub async fn read_project_docs(config: &Config) -> std::io::Result<Option<String>> {
    let max_total = config.project_doc_max_bytes;
    if max_total == 0 {
        return Ok(None);
    }
    let paths = discover_project_doc_paths(config)?;
    if paths.is_empty() {
        return Ok(None);
    }
    let mut remaining: u64 = max_total as u64;
    let mut parts: Vec<String> = Vec::new();
for p in paths {
if remaining == 0 {
break;
}
let file = match tokio::fs::File::open(&p).await {
Ok(f) => f,
Err(e) if e.kind() == std::io::ErrorKind::NotFound => continue,
Err(e) => return Err(e),
};
let size = file.metadata().await?.len();
let mut reader = tokio::io::BufReader::new(file).take(remaining);
let mut data: Vec<u8> = Vec::new();
reader.read_to_end(&mut data).await?;
if size > remaining {
tracing::warn!(
"Project doc `{}` exceeds remaining budget ({} bytes) - truncating.",
p.display(),
remaining,
);
}
let text = String::from_utf8_lossy(&data).to_string();
if !text.trim().is_empty() {
parts.push(text);
remaining = remaining.saturating_sub(data.len() as u64);
}
}
if parts.is_empty() {
Ok(None)
} else {
Ok(Some(parts.join("\n\n")))
}
}
/// Discover the list of AGENTS.md files using the same search rules as
/// `read_project_docs`, but return the file paths instead of concatenated
/// contents. The list is ordered from repository root to the current working
/// directory (inclusive). Symlinks are allowed. When `project_doc_max_bytes`
/// is zero, returns an empty list.
pub fn discover_project_doc_paths(config: &Config) -> std::io::Result<Vec<PathBuf>> {
let mut dir = config.cwd.clone();
// Canonicalize the path so that we do not end up in an infinite loop when
// `cwd` contains `..` components.
if let Ok(canon) = dir.canonicalize() {
dir = canon;
}
// Build chain from cwd upwards and detect git root.
let mut chain: Vec<PathBuf> = vec![dir.clone()];
let mut git_root: Option<PathBuf> = None;
let mut cursor = dir.clone();
while let Some(parent) = cursor.parent() {
    // `.git` can be a *file* (for worktrees or submodules) or a *dir*.
    let git_marker = cursor.join(".git");
    let git_exists = match std::fs::metadata(&git_marker) {
Ok(_) => true,
Err(e) if e.kind() == std::io::ErrorKind::NotFound => false,
Err(e) => return Err(e),
};
    if git_exists {
        git_root = Some(cursor.clone());
        break;
    }
    chain.push(parent.to_path_buf());
    cursor = parent.to_path_buf();
}
let search_dirs: Vec<PathBuf> = if let Some(root) = git_root {
let mut dirs: Vec<PathBuf> = Vec::new();
let mut saw_root = false;
for p in chain.iter().rev() {
if !saw_root {
if p == &root {
saw_root = true;
} else {
continue;
}
}
dirs.push(p.clone());
}
dirs
} else {
vec![config.cwd.clone()]
};
let mut found: Vec<PathBuf> = Vec::new();
for d in search_dirs {
for name in CANDIDATE_FILENAMES {
let candidate = d.join(name);
match std::fs::symlink_metadata(&candidate) {
Ok(md) => {
let ft = md.file_type();
// Allow regular files and symlinks; opening will later fail for dangling links.
if ft.is_file() || ft.is_symlink() {
found.push(candidate);
break;
}
}
Err(e) if e.kind() == std::io::ErrorKind::NotFound => continue,
Err(e) => return Err(e),
}
}
    }
Ok(found)
}
#[cfg(test)]
@@ -278,4 +319,32 @@ mod tests {
assert_eq!(res, Some(INSTRUCTIONS.to_string()));
}
/// When both the repository root and the working directory contain
/// AGENTS.md files, their contents are concatenated from root to cwd.
#[tokio::test]
async fn concatenates_root_and_cwd_docs() {
let repo = tempfile::tempdir().expect("tempdir");
// Simulate a git repository.
std::fs::write(
repo.path().join(".git"),
"gitdir: /path/to/actual/git/dir\n",
)
.unwrap();
// Repo root doc.
fs::write(repo.path().join("AGENTS.md"), "root doc").unwrap();
// Nested working directory with its own doc.
let nested = repo.path().join("workspace/crate_a");
std::fs::create_dir_all(&nested).unwrap();
fs::write(nested.join("AGENTS.md"), "crate doc").unwrap();
let mut cfg = make_config(&repo, 4096, None);
cfg.cwd = nested;
let res = get_user_instructions(&cfg).await.expect("doc expected");
assert_eq!(res, "root doc\n\ncrate doc");
}
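/// Additional sketch, not part of the diff (assumes `make_config`'s third
/// argument is the optional user instructions): a zero `project_doc_max_bytes`
/// budget disables discovery entirely, so no instructions are produced.
#[tokio::test]
async fn zero_byte_budget_disables_project_docs() {
    let repo = tempfile::tempdir().expect("tempdir");
    std::fs::write(repo.path().join("AGENTS.md"), "root doc").unwrap();
    let cfg = make_config(&repo, 0, None);
    assert_eq!(get_user_instructions(&cfg).await, None);
}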
}

View File

@@ -22,7 +22,7 @@ use uuid::Uuid;
use crate::config::Config;
use crate::git_info::GitInfo;
use crate::git_info::collect_git_info;
use codex_protocol::models::ResponseItem;
const SESSIONS_SUBDIR: &str = "sessions";
@@ -132,8 +132,10 @@ impl RolloutRecorder {
| ResponseItem::LocalShellCall { .. }
| ResponseItem::FunctionCall { .. }
| ResponseItem::FunctionCallOutput { .. }
| ResponseItem::CustomToolCall { .. }
| ResponseItem::CustomToolCallOutput { .. }
| ResponseItem::Reasoning { .. } => filtered.push(item.clone()),
ResponseItem::WebSearchCall { .. } | ResponseItem::Other => {
// These should never be serialized.
continue;
}
@@ -194,8 +196,10 @@ impl RolloutRecorder {
| ResponseItem::LocalShellCall { .. }
| ResponseItem::FunctionCall { .. }
| ResponseItem::FunctionCallOutput { .. }
| ResponseItem::CustomToolCall { .. }
| ResponseItem::CustomToolCallOutput { .. }
| ResponseItem::Reasoning { .. } => items.push(item),
ResponseItem::WebSearchCall { .. } | ResponseItem::Other => {}
},
Err(e) => {
warn!("failed to parse item: {v:?}, error: {e}");
@@ -317,10 +321,12 @@ async fn rollout_writer(
| ResponseItem::LocalShellCall { .. }
| ResponseItem::FunctionCall { .. }
| ResponseItem::FunctionCallOutput { .. }
| ResponseItem::CustomToolCall { .. }
| ResponseItem::CustomToolCallOutput { .. }
| ResponseItem::Reasoning { .. } => {
writer.write_line(&item).await?;
}
ResponseItem::WebSearchCall { .. } | ResponseItem::Other => {}
}
}
}

View File

@@ -222,7 +222,7 @@ fn is_write_patch_constrained_to_writable_paths(
for (path, change) in action.changes() {
match change {
ApplyPatchFileChange::Add { .. } | ApplyPatchFileChange::Delete { .. } => {
if !is_path_writable(path) {
return false;
}

View File

@@ -1,14 +1,24 @@
use serde::Deserialize;
use serde::Serialize;
use shlex;
use std::path::PathBuf;
#[derive(Debug, PartialEq, Eq, Clone, Serialize, Deserialize)]
pub struct ZshShell {
shell_path: String,
zshrc_path: String,
}
#[derive(Debug, PartialEq, Eq, Clone, Serialize, Deserialize)]
pub struct PowerShellConfig {
exe: String, // Executable name or path, e.g. "pwsh" or "powershell.exe".
bash_exe_fallback: Option<PathBuf>, // In case the model generates a bash command.
}
#[derive(Debug, PartialEq, Eq, Clone, Serialize, Deserialize)]
pub enum Shell {
Zsh(ZshShell),
PowerShell(PowerShellConfig),
Unknown,
}
@@ -33,6 +43,61 @@ impl Shell {
}
Some(result)
}
Shell::PowerShell(ps) => {
// If the model generated a bash command, prefer a detected bash fallback.
if let Some(script) = strip_bash_lc(&command) {
return match &ps.bash_exe_fallback {
Some(bash) => Some(vec![
bash.to_string_lossy().to_string(),
"-lc".to_string(),
script,
]),
// No bash fallback → run the script under PowerShell.
// It will likely fail (except for some simple commands), but the error
// should give the model a clue that it is running under PowerShell so it
// can fix the command on retry.
None => Some(vec![
ps.exe.clone(),
"-NoProfile".to_string(),
"-Command".to_string(),
script,
]),
};
}
// Not a bash command. If the model did not generate a PowerShell
// command, wrap it in a PowerShell invocation.
let first = command.first().map(String::as_str);
if first != Some(ps.exe.as_str()) {
// TODO (CODEX_2900): Handle escaping newlines.
if command.iter().any(|a| a.contains('\n') || a.contains('\r')) {
return Some(command);
}
let joined = shlex::try_join(command.iter().map(|s| s.as_str())).ok();
return joined.map(|arg| {
vec![
ps.exe.clone(),
"-NoProfile".to_string(),
"-Command".to_string(),
arg,
]
});
}
// Model generated a PowerShell command. Run it.
Some(command)
}
Shell::Unknown => None,
}
}
pub fn name(&self) -> Option<String> {
match self {
Shell::Zsh(zsh) => std::path::Path::new(&zsh.shell_path)
.file_name()
.map(|s| s.to_string_lossy().to_string()),
Shell::PowerShell(ps) => Some(ps.exe.clone()),
Shell::Unknown => None,
}
}
@@ -86,11 +151,51 @@ pub async fn default_user_shell() -> Shell {
}
}
#[cfg(all(not(target_os = "macos"), not(target_os = "windows")))]
pub async fn default_user_shell() -> Shell {
Shell::Unknown
}
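// Sketch of the `strip_bash_lc` helper used by `format_default_shell_invocation`
// above (assumed shape; the actual definition is not shown in this diff). It
// yields the script only for an exact `bash -lc <script>` invocation:
//
// fn strip_bash_lc(command: &[String]) -> Option<String> {
//     match command {
//         [first, second, third] if first == "bash" && second == "-lc" => Some(third.clone()),
//         _ => None,
//     }
// }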
#[cfg(target_os = "windows")]
pub async fn default_user_shell() -> Shell {
use tokio::process::Command;
// Prefer PowerShell 7+ (`pwsh`) if available, otherwise fall back to Windows PowerShell.
let has_pwsh = Command::new("pwsh")
.arg("-NoLogo")
.arg("-NoProfile")
.arg("-Command")
.arg("$PSVersionTable.PSVersion.Major")
.output()
.await
.map(|o| o.status.success())
.unwrap_or(false);
let bash_exe = if Command::new("bash.exe")
.arg("--version")
.output()
.await
.ok()
.map(|o| o.status.success())
.unwrap_or(false)
{
which::which("bash.exe").ok()
} else {
None
};
if has_pwsh {
Shell::PowerShell(PowerShellConfig {
exe: "pwsh.exe".to_string(),
bash_exe_fallback: bash_exe,
})
} else {
Shell::PowerShell(PowerShellConfig {
exe: "powershell.exe".to_string(),
bash_exe_fallback: bash_exe,
})
}
}
#[cfg(test)]
#[cfg(target_os = "macos")]
mod tests {
@@ -231,3 +336,97 @@ mod tests {
}
}
}
#[cfg(test)]
#[cfg(target_os = "windows")]
mod tests_windows {
use super::*;
#[test]
fn test_format_default_shell_invocation_powershell() {
let cases = vec![
(
Shell::PowerShell(PowerShellConfig {
exe: "pwsh.exe".to_string(),
bash_exe_fallback: None,
}),
vec!["bash", "-lc", "echo hello"],
vec!["pwsh.exe", "-NoProfile", "-Command", "echo hello"],
),
(
Shell::PowerShell(PowerShellConfig {
exe: "powershell.exe".to_string(),
bash_exe_fallback: None,
}),
vec!["bash", "-lc", "echo hello"],
vec!["powershell.exe", "-NoProfile", "-Command", "echo hello"],
),
(
Shell::PowerShell(PowerShellConfig {
exe: "pwsh.exe".to_string(),
bash_exe_fallback: Some(PathBuf::from("bash.exe")),
}),
vec!["bash", "-lc", "echo hello"],
vec!["bash.exe", "-lc", "echo hello"],
),
(
Shell::PowerShell(PowerShellConfig {
exe: "pwsh.exe".to_string(),
bash_exe_fallback: Some(PathBuf::from("bash.exe")),
}),
vec![
"bash",
"-lc",
"apply_patch <<'EOF'\n*** Begin Patch\n*** Update File: destination_file.txt\n-original content\n+modified content\n*** End Patch\nEOF",
],
vec![
"bash.exe",
"-lc",
"apply_patch <<'EOF'\n*** Begin Patch\n*** Update File: destination_file.txt\n-original content\n+modified content\n*** End Patch\nEOF",
],
),
(
Shell::PowerShell(PowerShellConfig {
exe: "pwsh.exe".to_string(),
bash_exe_fallback: Some(PathBuf::from("bash.exe")),
}),
vec!["echo", "hello"],
vec!["pwsh.exe", "-NoProfile", "-Command", "echo hello"],
),
(
Shell::PowerShell(PowerShellConfig {
exe: "pwsh.exe".to_string(),
bash_exe_fallback: Some(PathBuf::from("bash.exe")),
}),
vec!["pwsh.exe", "-NoProfile", "-Command", "echo hello"],
vec!["pwsh.exe", "-NoProfile", "-Command", "echo hello"],
),
(
// TODO (CODEX_2900): Handle escaping newlines for powershell invocation.
Shell::PowerShell(PowerShellConfig {
exe: "powershell.exe".to_string(),
bash_exe_fallback: Some(PathBuf::from("bash.exe")),
}),
vec![
"codex-mcp-server.exe",
"--codex-run-as-apply-patch",
"*** Begin Patch\n*** Update File: C:\\Users\\person\\destination_file.txt\n-original content\n+modified content\n*** End Patch",
],
vec![
"codex-mcp-server.exe",
"--codex-run-as-apply-patch",
"*** Begin Patch\n*** Update File: C:\\Users\\person\\destination_file.txt\n-original content\n+modified content\n*** End Patch",
],
),
];
for (shell, input, expected_cmd) in cases {
let actual_cmd = shell
.format_default_shell_invocation(input.iter().map(|s| s.to_string()).collect());
assert_eq!(
actual_cmd,
Some(expected_cmd.iter().map(|s| s.to_string()).collect())
);
}
}
}

View File

@@ -0,0 +1,72 @@
use std::sync::OnceLock;
static TERMINAL: OnceLock<String> = OnceLock::new();
pub fn user_agent() -> String {
TERMINAL.get_or_init(detect_terminal).to_string()
}
fn is_valid_header_value_char(c: char) -> bool {
    c.is_ascii_alphanumeric() || c == '-' || c == '_' || c == '.' || c == '/'
}
/// Sanitize a header value to be used in a User-Agent string.
///
/// This function replaces any characters that are not allowed in a User-Agent string with an underscore.
///
/// # Arguments
///
/// * `value` - The value to sanitize.
fn sanitize_header_value(value: String) -> String {
    value.replace(|c| !is_valid_header_value_char(c), "_")
}
fn detect_terminal() -> String {
sanitize_header_value(
if let Ok(tp) = std::env::var("TERM_PROGRAM")
&& !tp.trim().is_empty()
{
let ver = std::env::var("TERM_PROGRAM_VERSION").ok();
match ver {
Some(v) if !v.trim().is_empty() => format!("{tp}/{v}"),
_ => tp,
}
} else if let Ok(v) = std::env::var("WEZTERM_VERSION") {
if !v.trim().is_empty() {
format!("WezTerm/{v}")
} else {
"WezTerm".to_string()
}
} else if std::env::var("KITTY_WINDOW_ID").is_ok()
|| std::env::var("TERM")
.map(|t| t.contains("kitty"))
.unwrap_or(false)
{
"kitty".to_string()
} else if std::env::var("ALACRITTY_SOCKET").is_ok()
|| std::env::var("TERM")
.map(|t| t == "alacritty")
.unwrap_or(false)
{
"Alacritty".to_string()
} else if let Ok(v) = std::env::var("KONSOLE_VERSION") {
if !v.trim().is_empty() {
format!("Konsole/{v}")
} else {
"Konsole".to_string()
}
} else if std::env::var("GNOME_TERMINAL_SCREEN").is_ok() {
return "gnome-terminal".to_string();
} else if let Ok(v) = std::env::var("VTE_VERSION") {
if !v.trim().is_empty() {
format!("VTE/{v}")
} else {
"VTE".to_string()
}
} else if std::env::var("WT_SESSION").is_ok() {
return "WindowsTerminal".to_string();
} else {
std::env::var("TERM").unwrap_or_else(|_| "unknown".to_string())
},
)
}
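// Sanity sketch, not part of the diff: every value `user_agent()` can return
// is header-safe, either via `sanitize_header_value` or because the
// early-returned constants only use allowed characters.
#[cfg(test)]
mod sanity_tests {
    use super::*;

    #[test]
    fn user_agent_is_header_safe() {
        assert!(user_agent().chars().all(is_valid_header_value_char));
    }
}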

View File

@@ -0,0 +1,145 @@
use serde::Deserialize;
use serde::Serialize;
use std::collections::BTreeMap;
use crate::openai_tools::FreeformTool;
use crate::openai_tools::FreeformToolFormat;
use crate::openai_tools::JsonSchema;
use crate::openai_tools::OpenAiTool;
use crate::openai_tools::ResponsesApiTool;
#[derive(Serialize, Deserialize)]
pub(crate) struct ApplyPatchToolArgs {
pub(crate) input: String,
}
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq, Hash)]
#[serde(rename_all = "snake_case")]
pub enum ApplyPatchToolType {
Freeform,
Function,
}
/// Returns a custom tool that can be used to edit files. Well-suited for GPT-5 models
/// https://platform.openai.com/docs/guides/function-calling#custom-tools
pub(crate) fn create_apply_patch_freeform_tool() -> OpenAiTool {
OpenAiTool::Freeform(FreeformTool {
name: "apply_patch".to_string(),
description: "Use the `apply_patch` tool to edit files".to_string(),
format: FreeformToolFormat {
r#type: "grammar".to_string(),
syntax: "lark".to_string(),
definition: r#"start: begin_patch hunk+ end_patch
begin_patch: "*** Begin Patch" LF
end_patch: "*** End Patch" LF?
hunk: add_hunk | delete_hunk | update_hunk
add_hunk: "*** Add File: " filename LF add_line+
delete_hunk: "*** Delete File: " filename LF
update_hunk: "*** Update File: " filename LF change_move? change?
filename: /(.+)/
add_line: "+" /(.+)/ LF -> line
change_move: "*** Move to: " filename LF
change: (change_context | change_line)+ eof_line?
change_context: ("@@" | "@@ " /(.+)/) LF
change_line: ("+" | "-" | " ") /(.+)/ LF
eof_line: "*** End of File" LF
%import common.LF
"#
.to_string(),
},
})
}
/// Returns a json tool that can be used to edit files. Should only be used with gpt-oss models
pub(crate) fn create_apply_patch_json_tool() -> OpenAiTool {
let mut properties = BTreeMap::new();
properties.insert(
"input".to_string(),
JsonSchema::String {
description: Some(r#"The entire contents of the apply_patch command"#.to_string()),
},
);
OpenAiTool::Function(ResponsesApiTool {
name: "apply_patch".to_string(),
description: r#"Use the `apply_patch` tool to edit files.
Your patch language is a stripped-down, file-oriented diff format designed to be easy to parse and safe to apply. You can think of it as a high-level envelope:
*** Begin Patch
[ one or more file sections ]
*** End Patch
Within that envelope, you get a sequence of file operations.
You MUST include a header to specify the action you are taking.
Each operation starts with one of three headers:
*** Add File: <path> - create a new file. Every following line is a + line (the initial contents).
*** Delete File: <path> - remove an existing file. Nothing follows.
*** Update File: <path> - patch an existing file in place (optionally with a rename).
May be immediately followed by *** Move to: <new path> if you want to rename the file.
Then one or more “hunks”, each introduced by @@ (optionally followed by a hunk header).
Within a hunk each line starts with "+" (inserted line), "-" (removed line), or a space (context line), per the HunkLine rule in the grammar below.
For instructions on [context_before] and [context_after]:
- By default, show 3 lines of code immediately above and 3 lines immediately below each change. If a change is within 3 lines of a previous change, do NOT duplicate the first change's [context_after] lines in the second change's [context_before] lines.
- If 3 lines of context is insufficient to uniquely identify the snippet of code within the file, use the @@ operator to indicate the class or function to which the snippet belongs. For instance, we might have:
@@ class BaseClass
[3 lines of pre-context]
- [old_code]
+ [new_code]
[3 lines of post-context]
- If a code block is repeated so many times in a class or function such that even a single `@@` statement and 3 lines of context cannot uniquely identify the snippet of code, you can use multiple `@@` statements to jump to the right context. For instance:
@@ class BaseClass
@@ def method():
[3 lines of pre-context]
- [old_code]
+ [new_code]
[3 lines of post-context]
The full grammar definition is below:
Patch := Begin { FileOp } End
Begin := "*** Begin Patch" NEWLINE
End := "*** End Patch" NEWLINE
FileOp := AddFile | DeleteFile | UpdateFile
AddFile := "*** Add File: " path NEWLINE { "+" line NEWLINE }
DeleteFile := "*** Delete File: " path NEWLINE
UpdateFile := "*** Update File: " path NEWLINE [ MoveTo ] { Hunk }
MoveTo := "*** Move to: " newPath NEWLINE
Hunk := "@@" [ header ] NEWLINE { HunkLine } [ "*** End of File" NEWLINE ]
HunkLine := (" " | "-" | "+") text NEWLINE
A full patch can combine several operations:
*** Begin Patch
*** Add File: hello.txt
+Hello world
*** Update File: src/app.py
*** Move to: src/main.py
@@ def greet():
-print("Hi")
+print("Hello, world!")
*** Delete File: obsolete.txt
*** End Patch
It is important to remember:
- You must include a header with your intended action (Add/Delete/Update)
- You must prefix new lines with `+` even when creating a new file
- File references can only be relative, NEVER ABSOLUTE.
"#
.to_string(),
strict: false,
parameters: JsonSchema::Object {
properties,
required: Some(vec!["input".to_string()]),
additional_properties: Some(false),
},
})
}
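// Quick round-trip sketch, not part of the diff (assumes `serde_json` is a
// dependency of this crate): the JSON tool variant receives the whole patch
// via the single required `input` field declared above.
#[cfg(test)]
mod apply_patch_args_tests {
    use super::*;

    #[test]
    fn apply_patch_args_round_trip() {
        let args = ApplyPatchToolArgs {
            input: "*** Begin Patch\n*** Add File: hello.txt\n+Hello world\n*** End Patch"
                .to_string(),
        };
        let json = serde_json::to_string(&args).expect("serialize");
        let back: ApplyPatchToolArgs = serde_json::from_str(&json).expect("deserialize");
        assert_eq!(back.input, args.input);
    }
}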

View File

@@ -578,7 +578,12 @@ index {ZERO_OID}..{right_oid}
fs::write(&file, "x\n").unwrap();
let mut acc = TurnDiffTracker::new();
let del_changes = HashMap::from([(
file.clone(),
FileChange::Delete {
content: "x\n".to_string(),
},
)]);
acc.on_patch_begin(&del_changes);
// Simulate apply: delete the file from disk.
@@ -741,7 +746,12 @@ index {left_oid}..{right_oid}
assert_eq!(first, expected_first);
// Next: introduce a brand-new path b.txt into baseline snapshots via a delete change.
let del_b = HashMap::from([(
b.clone(),
FileChange::Delete {
content: "z\n".to_string(),
},
)]);
acc.on_patch_begin(&del_b);
// Simulate apply: delete b.txt.
let baseline_mode = file_mode_for_path(&b).unwrap_or(FileMode::Regular);

View File

@@ -4,11 +4,12 @@ pub fn get_codex_user_agent(originator: Option<&str>) -> String {
let build_version = env!("CARGO_PKG_VERSION");
let os_info = os_info::get();
format!(
"{}/{build_version} ({} {}; {})",
"{}/{build_version} ({} {}; {}) {}",
originator.unwrap_or(DEFAULT_ORIGINATOR),
os_info.os_type(),
os_info.version(),
os_info.architecture().unwrap_or("unknown"),
crate::terminal::user_agent()
)
}
@@ -27,9 +28,10 @@ mod tests {
fn test_macos() {
use regex_lite::Regex;
let user_agent = get_codex_user_agent(None);
let re = Regex::new(
r"^codex_cli_rs/\d+\.\d+\.\d+ \(Mac OS \d+\.\d+\.\d+; (x86_64|arm64)\) (\S+)$",
)
.unwrap();
assert!(re.is_match(&user_agent));
}
}

View File

@@ -1,4 +1,3 @@
use std::path::Path;
use std::time::Duration;
use rand::Rng;
@@ -12,33 +11,3 @@ pub(crate) fn backoff(attempt: u64) -> Duration {
let jitter = rand::rng().random_range(0.9..1.1);
Duration::from_millis((base as f64 * jitter) as u64)
}
/// Return `true` if the project folder specified by the `Config` is inside a
/// Git repository.
///
/// The check walks up the directory hierarchy looking for a `.git` file or
/// directory (note `.git` can be a file that contains a `gitdir` entry). This
/// approach does **not** require the `git` binary or the `git2` crate and is
/// therefore fairly lightweight.
///
/// Note that this does **not** detect *worktrees* created with
/// `git worktree add` where the checkout lives outside the main repository
/// directory. If you need Codex to work from such a checkout, simply pass the
/// `--allow-no-git-exec` CLI flag, which disables the repo requirement.
pub fn is_inside_git_repo(base_dir: &Path) -> bool {
let mut dir = base_dir.to_path_buf();
loop {
if dir.join(".git").exists() {
return true;
}
// Pop one component (go up one directory). `pop` returns false when
// we have reached the filesystem root.
if !dir.pop() {
break;
}
}
false
}

View File

@@ -0,0 +1,3 @@
// Single integration test binary that aggregates all test modules.
// The submodules live in `tests/all/`.
mod suite;

Some files were not shown because too many files have changed in this diff.