Compare commits

...

155 Commits

Author SHA1 Message Date
Ahmed Ibrahim
ab5a129a7d Revamp status UI with rate limit reset times 2025-09-24 13:26:45 -07:00
Jeremy Rose
7bff8df10e hide the status indicator when the answer stream starts (#4101)
This eliminates a "bounce" at the end of streaming where we hide the
status indicator at the end of the turn and the composer moves up two
lines.

Also, simplify streaming further by removing the HistorySink and
inverting control, and collapsing a few single-element structures.
2025-09-24 11:51:48 -07:00
pakrym-oai
addc946d13 Simplify tool implementations (#4160)
Use Result<String, FunctionCallError> for all tool handling code and
rely on error propagation instead of creating failed items everywhere.
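
For illustration, a minimal sketch of this pattern; `FunctionCallError` exists in codex-rs, but the variants and handler below are assumptions, not the actual definitions:

```
use std::fs;

// Illustrative error type; the real codex-rs FunctionCallError differs.
#[derive(Debug)]
enum FunctionCallError {
    BadArguments(String),
    Io(std::io::Error),
}

impl From<std::io::Error> for FunctionCallError {
    fn from(e: std::io::Error) -> Self {
        FunctionCallError::Io(e)
    }
}

// Handlers return Result and use `?` to propagate failures instead of
// hand-constructing failed response items at every call site.
fn handle_read_file(path: &str) -> Result<String, FunctionCallError> {
    if path.is_empty() {
        return Err(FunctionCallError::BadArguments("empty path".into()));
    }
    Ok(fs::read_to_string(path)?)
}
```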
2025-09-24 17:27:35 +00:00
dependabot[bot]
bffdbec2c5 chore(deps): bump chrono from 0.4.41 to 0.4.42 in /codex-rs (#4028)
Bumps [chrono](https://github.com/chronotope/chrono) from 0.4.41 to
0.4.42.
Release notes (from chrono's releases), 0.4.42:
- Add fuzzer for DateTime::parse_from_str by @tyler92 (chronotope/chrono#1700)
- Fix wrong amount of micro/milliseconds by @nmlt (chronotope/chrono#1703)
- Add warning about MappedLocalTime and wasm by @lutzky (chronotope/chrono#1702)
- Fix incorrect parsing of fixed-length second fractions by @chris-leach (chronotope/chrono#1705)
- Fix cfgs for `wasm32-linux` support by @arjunr2 (chronotope/chrono#1707)
- Fix OpenHarmony's `tzdata` parsing by @ldm0 (chronotope/chrono#1679)
- Convert NaiveDate to/from days since unix epoch by @findepi (chronotope/chrono#1715)
- Add `?Sized` bound to related methods of `DelayedFormat::write_to` by @Huliiiiii (chronotope/chrono#1721)
- Add `from_timestamp_secs` method to `DateTime` by @jasonaowen (chronotope/chrono#1719)
- Migrate to core::error::Error by @benbrittain (chronotope/chrono#1704)
- Upgrade to windows-bindgen 0.63 by @djc (chronotope/chrono#1730)
- strftime: simplify error handling by @djc (chronotope/chrono#1731)

Commits:
- `f3fd15f` Bump version to 0.4.42
- `5cf5603` strftime: add regression test case
- `a623170` strftime: simplify error handling
- `36fbfb1` strftime: move specifier handling out of match to reduce rightward drift
- `7f413c3` strftime: yield None early
- `9d5dfe1` strftime: outline constants
- `e5f6be7` strftime: move error() method below caller
- `d516c27` strftime: merge impl blocks
- `0ee2172` strftime: re-order items to keep impls together
- `757a8b0` Upgrade to windows-bindgen 0.63
- Additional commits viewable in the compare view: https://github.com/chronotope/chrono/compare/v0.4.41...v0.4.42


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=chrono&package-manager=cargo&previous-version=0.4.41&new-version=0.4.42)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-24 16:53:26 +00:00
dependabot[bot]
353a5c2046 chore(deps): bump unicode-width from 0.1.14 to 0.2.1 in /codex-rs (#2156)
Bumps [unicode-width](https://github.com/unicode-rs/unicode-width) from
0.1.14 to 0.2.1.
Commits:
- `0085e91` Publish 0.2.1
- `6db0c14` Remove `compiler-builtins` from `rustc-dep-of-std` dependencies (#77)
- `0bccd3f` update copyright year (#76)
- `7a7fcdc` Support Unicode 16 (#74)
- `82d7136` Advertise and enforce MSRV (#73)
- `e77b292` Make characters with `Line_Break=Ambiguous` ambiguous (#61)
- `5a7fced` Update version number in Readme (#70)
- `79eab0d` Publish 0.2.0 with newlines treated as width 1 (#68)
- See full diff in the compare view: https://github.com/unicode-rs/unicode-width/compare/v0.1.14...v0.2.1


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=unicode-width&package-manager=cargo&previous-version=0.1.14&new-version=0.2.1)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


> **Note**
> Automatic rebases have been disabled on this pull request as it has
been open for over 30 days.

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-24 16:33:46 +00:00
Tien Nguyen
00c7f7a16c chore: remove once_cell dependency from multiple crates (#4154)
This commit removes the `once_cell` dependency from `Cargo.toml` files
in the `codex-rs` and `apply-patch` directories, replacing its usage
with `std::sync::LazyLock` and `std::sync::OnceLock` where applicable.
This change simplifies the dependency tree and utilizes standard library
features for lazy initialization.
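
As a hedged illustration (not code from the PR), the mechanical shape of such a replacement looks like this:

```
use std::sync::LazyLock;

// Before, with once_cell:
//   static GREETING: once_cell::sync::Lazy<String> =
//       once_cell::sync::Lazy::new(|| format!("hello {}", "codex"));

// After, std-only (LazyLock is stable since Rust 1.80):
static GREETING: LazyLock<String> = LazyLock::new(|| format!("hello {}", "codex"));

fn main() {
    // The closure runs once, on first access.
    println!("{}", *GREETING);
}
```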

2025-09-24 09:15:57 -07:00
Michael Bolin
82e65975b2 fix: add tolerance for ambiguous behavior in gh run list (#4162)
I am not sure what is going on, as
https://github.com/openai/codex/pull/3660 introduced this new logic and
I swear that CI was green before I merged that PR, but I am seeing
failures in this CI job this morning. This feels like a
non-backwards-compatible change in `gh`, but that feels unlikely...

Nevertheless, this is what I currently see on my laptop:

```
$ gh --version
gh version 2.76.2 (2025-07-30)
https://github.com/cli/cli/releases/tag/v2.76.2
$ gh run list --workflow .github/workflows/rust-release.yml --branch rust-v0.40.0 --json workflowName,url,headSha --jq 'first(.[])'
{
  "headSha": "5268705a69713752adcbd8416ef9e84a683f7aa3",
  "url": "https://github.com/openai/codex/actions/runs/17952349351",
  "workflowName": ".github/workflows/rust-release.yml"
}
```

Looking at sample output from an old GitHub issue
(https://github.com/cli/cli/issues/6678), it appears that, at least at
one point in time, the `workflowName` was _not_ the path to the
workflow.
2025-09-24 09:15:03 -07:00
Michael Bolin
639a6fd2f3 chore: upgrade to Rust 1.90 (#4124)
Inspired by Dependabot's attempt to do this:
https://github.com/openai/codex/pull/4029

The new version of Clippy found some unused structs that are removed in
this PR.

Though nothing stood out to me in the Release Notes in terms of things
we should start to take advantage of:
https://blog.rust-lang.org/2025/09/18/Rust-1.90.0/.
2025-09-24 08:32:00 -07:00
jif-oai
db4aa6f916 nit: 350k tokens (#4156)
Set gpt-5-codex auto-compaction to 350k tokens and update comments with a
better description.
2025-09-24 15:31:27 +00:00
Ahmed Ibrahim
cb96f4f596 Add reset times for rate limits (#4111)
- Parse the headers.
- Reorganize the struct because it's getting too long.
- Show the reset times in the TUI.

<img width="324" height="79" alt="image"
src="https://github.com/user-attachments/assets/ca15cd48-f112-4556-91ab-1e3a9bc4683d"
/>
2025-09-24 15:31:08 +00:00
jif-oai
5b910f1f05 chore: extract readiness in a dedicated utils crate (#4140)
Create a `utils` directory for the small utils crates.
2025-09-24 10:15:54 +00:00
jif-oai
af6304c641 nit: drop instruction override for auto-compact (#4137)
drop instruction override for auto-compact as this is not used and
dangerous as it invalidates the cache
2025-09-24 10:47:12 +01:00
jif-oai
b90eeabd74 nit: update auto compact to 250k (#4135)
update auto compact for gpt-5-codex to 250k
2025-09-24 09:41:33 +00:00
dependabot[bot]
f7d2f3e54d chore(deps): bump tempfile from 3.20.0 to 3.22.0 in /codex-rs (#4030)
Bumps [tempfile](https://github.com/Stebalien/tempfile) from 3.20.0 to
3.22.0.
Changelog (from tempfile's changelog):

3.22.0:
- Updated `windows-sys` requirement to allow version 0.61.x
- Remove `unstable-windows-keep-open-tempfile` feature.

3.21.0:
- Updated `windows-sys` requirement to allow version 0.60.x

Commits:
- `f720dbe` chore: release 3.22.0
- `55d742c` chore: remove deprecated unstable feature flag
- `bc41a0b` build(deps): update windows-sys requirement from >=0.52, <0.61 to >=0.52, <0....
- `3c55387` test: make sure we don't drop tempdirs early (#373)
- `17bf644` doc(builder): clarify permissions (#372)
- `c7423f1` doc(env): document the alternative to setting the tempdir (#371)
- `5af60ca` test(wasi): run a few tests that shouldn't have been disabled (#370)
- `6c0c561` fix(doc): temp_dir doesn't check if writable
- `48bff5f` test(tempdir): configure tempdir on wasi
- `704a1d2` test(tempdir): cleanup tempdir tests and run more tests on wasi
- Additional commits viewable in the compare view: https://github.com/Stebalien/tempfile/compare/v3.20.0...v3.22.0


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=tempfile&package-manager=cargo&previous-version=3.20.0&new-version=3.22.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-23 23:41:35 -07:00
dependabot[bot]
3fe3b6328b chore(deps): bump log from 0.4.27 to 0.4.28 in /codex-rs (#4027)

Bumps [log](https://github.com/rust-lang/log) from 0.4.27 to 0.4.28.
Release notes (from log's releases), 0.4.28:
- ci: drop really old trick and ensure MSRV for all feature combos by @tisonkun in rust-lang/log#676
- chore: fix some typos in comment by @xixishidibei in rust-lang/log#677
- Unhide `#[derive(Debug)]` in example by @ZylosLumen in rust-lang/log#688
- Chore: delete compare_exchange method for AtomicUsize on platforms without atomics by @HaoliangXu in rust-lang/log#690
- Add `increment_severity()` and `decrement_severity()` methods for `Level` and `LevelFilter` by @nebkor in rust-lang/log#692
- Prepare for 0.4.28 release by @KodrAus in rust-lang/log#695

New contributors: @xixishidibei (rust-lang/log#677), @ZylosLumen (rust-lang/log#688), @HaoliangXu (rust-lang/log#690), @nebkor (rust-lang/log#692).

Notable change from the changelog: MSRV is bumped to 1.61.0 in rust-lang/log#676.

Full changelog: https://github.com/rust-lang/log/compare/0.4.27...0.4.28

Commits:
- `6e17355` Merge pull request #695 from rust-lang/cargo/0.4.28
- `57719db` focus on user-facing source changes in the changelog
- `e0630c6` prepare for 0.4.28 release
- `60829b1` Merge pull request #692 from nebkor/up-and-down
- `95d44f8` change names of log-level-changing methods to be more descriptive
- `2b63dfa` Add `up()` and `down()` methods for `Level` and `LevelFilter`
- `3aa1359` Merge pull request #690 from HaoliangXu/master
- `1091f2c` Chore: delete compare_exchange method for AtomicUsize on platforms
- `24c5f44` Merge pull request #688 from ZylosLumen/patch-1
- `4498495` Unhide `#[derive(Debug)]` in example
- Additional commits viewable in the compare view: https://github.com/rust-lang/log/compare/0.4.27...0.4.28


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=log&package-manager=cargo&previous-version=0.4.27&new-version=0.4.28)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-23 23:07:54 -07:00
dependabot[bot]
8144ddb3da chore(deps): bump serde from 1.0.224 to 1.0.226 in /codex-rs (#4031)

Bumps [serde](https://github.com/serde-rs/serde) from 1.0.224 to
1.0.226.
Release notes (from serde's releases):

v1.0.226:
- Deduplicate variant matching logic inside generated Deserialize impl for adjacently tagged enums (#2935, thanks @Mingun)

v1.0.225:
- Avoid triggering a deprecation warning in derived Serialize and Deserialize impls for a data structure that contains its own deprecations (#2879, thanks @rcrisanti)

Commits:
- `1799547` Release 1.0.226
- `2dbeefb` Merge pull request #2935 from Mingun/dedupe-adj-enums
- `8a3c29f` Merge pull request #2986 from dtolnay/didnotwork
- `defc24d` Remove "did not work" comment from test suite
- `2316610` Merge pull request #2929 from Mingun/flatten-enum-tests
- `c09e2bd` Add tests for flatten unit variant in adjacently tagged (tag + content) enums
- `fe7dcc4` Test all possible orders of map entries for enum-flatten-in-struct representa...
- `a20e66e` Check serialization in flatten::enum_::internally_tagged::unit_enum_with_unkn...
- `1c1a5d9` Reorder struct_ and newtype tests of adjacently_tagged enums to match order i...
- `ee3c237` Opt in to generate-macro-expansion when building on docs.rs
- Additional commits viewable in the compare view: https://github.com/serde-rs/serde/compare/v1.0.224...v1.0.226


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=serde&package-manager=cargo&previous-version=1.0.224&new-version=1.0.226)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-23 23:06:30 -07:00
Michael Bolin
9336f2b84b fix: npm publish --tag alpha when building an alpha release (#4112)
This updates our release process so that when we build an alpha of the
Codex CLI (as determined by pushing a tag of the format
`rust-v<cli-version>-alpha.<alpha-version>`), we will now publish the
corresponding npm module publicly, but under the `alpha` tag. As you can
see, this PR adds `--tag alpha` to the `npm publish` command, as
appropriate.
2025-09-23 23:03:43 -07:00
Michael Bolin
af37785bca fix: vendor ripgrep in the npm module (#3660)
We try to ensure ripgrep (`rg`) is provided with Codex.

- For `brew`, we declare it as a dependency of our formula:

08d82d8b00/Formula/c/codex.rb (L24)
- For `npm`, we declare `@vscode/ripgrep` as a dependency, which
installs the platform-specific binary as part of a `postinstall` script:

fdb8dadcae/codex-cli/package.json (L22)
- Users who download the CLI directly from GitHub Releases are on their
own.

In practice, I have seen `@vscode/ripgrep` fail on occasion. Here is a
trace from a GitHub workflow:

```
npm error code 1
npm error path /Users/runner/hostedtoolcache/node/20.19.5/arm64/lib/node_modules/@openai/codex/node_modules/@vscode/ripgrep
npm error command failed
npm error command sh -c node ./lib/postinstall.js
npm error Finding release for v13.0.0-13
npm error GET https://api.github.com/repos/microsoft/ripgrep-prebuilt/releases/tags/v13.0.0-13
npm error Deleting invalid download cache
npm error Download attempt 1 failed, retrying in 2 seconds...
npm error Finding release for v13.0.0-13
npm error GET https://api.github.com/repos/microsoft/ripgrep-prebuilt/releases/tags/v13.0.0-13
npm error Deleting invalid download cache
npm error Download attempt 2 failed, retrying in 4 seconds...
npm error Finding release for v13.0.0-13
npm error GET https://api.github.com/repos/microsoft/ripgrep-prebuilt/releases/tags/v13.0.0-13
npm error Deleting invalid download cache
npm error Download attempt 3 failed, retrying in 8 seconds...
npm error Finding release for v13.0.0-13
npm error GET https://api.github.com/repos/microsoft/ripgrep-prebuilt/releases/tags/v13.0.0-13
npm error Deleting invalid download cache
npm error Download attempt 4 failed, retrying in 16 seconds...
npm error Finding release for v13.0.0-13
npm error GET https://api.github.com/repos/microsoft/ripgrep-prebuilt/releases/tags/v13.0.0-13
npm error Deleting invalid download cache
npm error Error: Request failed: 403
```

To eliminate this error, this PR changes things so that we vendor the
`rg` binary into https://www.npmjs.com/package/@openai/codex so it is
guaranteed to be included when a user runs `npm i -g @openai/codex`.

The downside of this approach is the increase in package size: we
include the `rg` executable for six architectures (in addition to the
six copies of `codex` we already include). In a follow-up, I plan to add
support for "slices" of our npm module, so that soon users will be able
to do:

```
npm install -g @openai/codex@aarch64-apple-darwin
```

Admittedly, this is a sizable change and I tried to clean some things up
in the process:

- `install_native_deps.sh` has been replaced by `install_native_deps.py`
- `stage_release.sh` and `stage_rust_release.py` have been replaced by
`build_npm_package.py`

We now vendor in a DotSlash file for ripgrep (as a modest attempt to
facilitate local testing) and then build up the extension by:

- creating a temp directory and copying `package.json` over to it with
the target value for `"version"`
- finding the GitHub workflow that corresponds to the
`--release-version` and copying the various `codex` artifacts to the
respective `vendor/TARGET_TRIPLE/codex` folders
- downloading the `rg` artifacts specified in the DotSlash file and
copying them over to the respective `vendor/TARGET_TRIPLE/path` folder
- if `--pack-output` is specified, runs `npm pack` on the temp directory

To test, I downloaded the artifact produced by this CI job:


https://github.com/openai/codex/actions/runs/17961595388/job/51085840022?pr=3660

and verified that `node ./bin/codex.js 'which -a rg'` worked as
intended.
2025-09-23 23:00:33 -07:00
Dylan
594248f415 [exec] add include-plan-tool flag and print it nicely (#3461)
### Summary
Sometimes in exec runs, we want to allow the model to use the
`update_plan` tool, but that's not easily configurable. This change adds
a feature flag for this and formats the output so it's human-readable.

## Test Plan
<img width="1280" height="354" alt="Screenshot 2025-09-11 at 12 39
44 AM"
src="https://github.com/user-attachments/assets/72e11070-fb98-47f5-a784-5123ca7333d9"
/>
2025-09-23 16:50:59 -07:00
Ahmed Ibrahim
8227a5ba1b Send limits when getting rate limited (#4102)
Users need visibility on rate limits when they are rate limited.
2025-09-23 22:56:34 +00:00
pakrym-oai
fdb8dadcae Add exec output-schema parameter (#4079)
Adds structured output to `exec` via the `--structured-output`
parameter.
2025-09-23 13:59:16 -07:00
pakrym-oai
0f9a796617 Use anyhow::Result in tests for error propagation (#4105) 2025-09-23 13:31:36 -07:00
Ahmed Ibrahim
c6e8671b2a Refactor codex card layout (#4069)
Refactor it to be used in status
2025-09-23 17:37:14 +00:00
jif-oai
b84a920067 chore: compact do not modify instructions (#4088)
Keep the developer instruction and insert the summarisation message as a
user message instead
2025-09-23 17:59:17 +01:00
jif-oai
6cd5309d91 feat: readiness tool (#4090)
Readiness flag with token-based subscription and async wait function
that waits for all the subscribers to be ready
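
A rough sketch of how such a readiness utility could look; the names and the `watch`-based design below are assumptions for illustration, not the crate's actual API:

```
use std::sync::Arc;
use tokio::sync::watch;

// Tracks how many subscribed tokens have not yet reported ready.
#[derive(Clone)]
struct Readiness {
    tx: Arc<watch::Sender<usize>>,
}

struct ReadyToken {
    tx: Arc<watch::Sender<usize>>,
}

impl Readiness {
    fn new() -> Self {
        let (tx, _rx) = watch::channel(0);
        Self { tx: Arc::new(tx) }
    }

    // Each subscriber holds a token and flips it exactly once.
    fn subscribe(&self) -> ReadyToken {
        self.tx.send_modify(|pending| *pending += 1);
        ReadyToken { tx: Arc::clone(&self.tx) }
    }

    // Resolves once every outstanding token has called `ready()`.
    // (A production version would also handle tokens dropped unfired.)
    async fn wait_all_ready(&self) {
        let mut rx = self.tx.subscribe();
        while *rx.borrow_and_update() != 0 {
            if rx.changed().await.is_err() {
                break;
            }
        }
    }
}

impl ReadyToken {
    fn ready(self) {
        self.tx.send_modify(|pending| *pending -= 1);
    }
}
```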
2025-09-23 17:27:20 +01:00
Ahmed Ibrahim
664ee07540 Rate limits warning (#4075)
Only show the highest-severity rate-limit warning, and change the
warning threshold.
2025-09-23 09:15:16 -07:00
ae
51c465bddc fix: usage data tweaks (#4082)
- Only show the usage data section when signed in with ChatGPT. (Tested
with Chat auth and API auth.)
- Friendlier string change.
- Also removed `.dim()` on the string, since it was the only string in
`/status` that was dim.
2025-09-23 09:14:02 -07:00
jif-oai
e0fbc112c7 feat: git tooling for undo (#3914)
## Summary
Introduces a “ghost commit” workflow that snapshots the tree without
touching refs.
1. git commit-tree writes an unreferenced commit object from the current
index, optionally pointing to the current HEAD as its parent.
2. We then stash that commit id and use git restore --source <ghost> to
roll the worktree (and index) back to the recorded snapshot later on.

## Details
- Ghost commits live only as loose objects—we never update branches or
tags—so the repo history stays untouched while still giving us a full
tree snapshot.
- Force-included paths let us stage otherwise ignored files before
capturing the tree.
- Restoration rehydrates both tracked and force-included files while
leaving untracked/ignored files alone.
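
The underlying git plumbing can be sketched as follows; this is a simplified illustration, while the actual implementation also handles force-included paths, a temporary index, and error cases:

```
use std::process::Command;

// Run git and return trimmed stdout (error handling elided for brevity).
fn git(args: &[&str]) -> std::io::Result<String> {
    let out = Command::new("git").args(args).output()?;
    Ok(String::from_utf8_lossy(&out.stdout).trim().to_string())
}

// Snapshot: write the current index as a tree, then wrap it in a commit
// that no branch or tag references (a loose "ghost" object).
fn ghost_snapshot() -> std::io::Result<String> {
    let tree = git(&["write-tree"])?;
    git(&["commit-tree", &tree, "-p", "HEAD", "-m", "ghost snapshot"])
}

// Restore: roll the worktree and index back to the recorded snapshot.
fn ghost_restore(ghost: &str) -> std::io::Result<()> {
    git(&["restore", "--source", ghost, "--worktree", "--staged", "."])?;
    Ok(())
}
```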
2025-09-23 16:59:52 +01:00
pakrym-oai
76ecbb3d8e Use TestCodex builder in stream retry tests (#4096)
## Summary
- refactor the stream retry integration tests to construct conversations
through `TestCodex`
- remove bespoke config and tempdir setup now handled by the shared
builder

## Testing
- cargo test -p codex-core --test all
stream_error_allows_next_turn::continue_after_stream_error
- cargo test -p codex-core --test all
stream_no_completed::retries_on_early_close

------
https://chatgpt.com/codex/tasks/task_i_68d2b94d83888320bc75a0bc3bd77b49
2025-09-23 08:57:08 -07:00
jif-oai
2451b19d13 chore: enable auto-compaction for gpt-5-codex (#4093)
enable auto-compaction for `gpt-5-codex` at 220k tokens
2025-09-23 16:12:36 +01:00
pakrym-oai
5c7d9e27b1 Add notifier tests (#4064)
Proposal:
1. Use anyhow for tests and avoid unwrap
2. Extract a helper for starting a test instance of codex
2025-09-23 14:25:46 +00:00
Thibault Sottiaux
c93e77b68b feat: update default (#4076)
Changes:
- Default model and docs now use gpt-5-codex. 
- Disables the GPT-5 Codex NUX by default.
- Keeps presets available for API key users.
2025-09-22 20:10:52 -07:00
dedrisian-oai
c415827ac2 Truncate potentially long user messages in compact message. (#4068)
If a prior user message is massive, any future `/compact` task would
fail because we're verbatim copying the user message into the new chat.
2025-09-22 23:12:26 +00:00
Jeremy Rose
4e0550b995 fix codex resume message at end of session (#3957)
This was only being printed when running the codex-tui executable
directly, not via the codex-cli wrapper.
2025-09-22 22:24:31 +00:00
Jeremy Rose
f54a49157b Fix pager overlay clear between pages (#3952)
should fix characters sometimes hanging around while scrolling the
transcript.
2025-09-22 15:12:29 -07:00
Ahmed Ibrahim
dd56750612 Change headers and struct of rate limits (#4060) 2025-09-22 21:06:20 +00:00
dedrisian-oai
8bc73a2bfd Fix branch mode prompt for /review (#4061)
Updates `/review` branch mode to review against a branch's upstream.
2025-09-22 12:34:08 -07:00
jif-oai
be366a31ab chore: clippy on redundant closure (#4058)
Add redundant closure clippy rules and let Codex fix it by minimising
FQP
2025-09-22 19:30:16 +00:00
Ahmed Ibrahim
c75920a071 Change limits warning copy (#4059) 2025-09-22 18:52:45 +00:00
dedrisian-oai
8daba53808 feat: Add view stack to BottomPane (#4026)
Adds a "View Stack" to the bottom pane to allow for pushing/popping
bottom panels.

`esc` will go back instead of dismissing.

Benefit: We retain the "selection state" of a parent panel (e.g. the
review panel).
2025-09-22 11:29:39 -07:00
Ahmed Ibrahim
d2940bd4c3 Remove /limits after moving to /status (#4055)
Moved to /status #4053
2025-09-22 18:23:05 +00:00
friel-openai
76a9b11678 Tui: fix backtracking (#4020)
Backtracking multiple times could drop earlier turns. We now derive the
active user-turn positions from the transcript on demand (keying off the
latest session header) instead of caching state. This keeps the replayed
context intact during repeated edits and adds a regression test.
2025-09-22 11:16:25 -07:00
Jeremy Rose
fa80bbb587 simplify StreamController (#3928)
no intended functional change, just simplifying the code.
2025-09-22 11:14:04 -07:00
Ahmed Ibrahim
434eb4fd49 Add limits to /status (#4053)
Add limits to status

<img width="579" height="430" alt="image"
src="https://github.com/user-attachments/assets/d3794d92-ffca-47be-8011-b4452223cc89"
/>
2025-09-22 18:13:34 +00:00
Jeremy Rose
19f46439ae timeouts for mcp tool calls (#3959)
Defaults to 60 seconds, overridable with `MCP_TOOL_TIMEOUT` or on a
per-server basis in the config.
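
A minimal sketch of this behavior; the env-var lookup, units, and names are assumptions about the wiring, not the exact codex-rs code:

```
use std::time::Duration;

// Wrap a tool-call future with the configured timeout:
// MCP_TOOL_TIMEOUT (assumed here to be seconds) if set, else 60s.
async fn with_tool_timeout<T>(
    fut: impl std::future::Future<Output = anyhow::Result<T>>,
) -> anyhow::Result<T> {
    let secs = std::env::var("MCP_TOOL_TIMEOUT")
        .ok()
        .and_then(|v| v.parse::<u64>().ok())
        .unwrap_or(60);
    tokio::time::timeout(Duration::from_secs(secs), fut)
        .await
        .map_err(|_| anyhow::anyhow!("MCP tool call timed out after {secs}s"))?
}
```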
2025-09-22 10:30:59 -07:00
jif-oai
e258ca61b4 chore: more clippy rules 2 (#4057)
The only file to watch is the `Cargo.toml`; all the others come from
`just fix` plus a few small manual fixes.

The set of rules was taken somewhat arbitrarily from the list of clippy
rules, trying to optimise the learning and style of the code while
limiting the loss of productivity.
2025-09-22 17:16:02 +00:00
jif-oai
e5fe50d3ce chore: unify cargo versions (#4044)
Unify cargo versions at root
2025-09-22 16:47:01 +00:00
pakrym-oai
14a115d488 Add non_sandbox_test helper (#3880)
Makes tests shorter
2025-09-22 14:50:41 +00:00
dedrisian-oai
5996ee0e5f feat: Add more /review options (#3961)
Adds the following options:

1. Review current changes
2. Review a specific commit
3. Review against a base branch (PR style)
4. Custom instructions

<img width="487" height="330" alt="Screenshot 2025-09-20 at 2 11 36 PM"
src="https://github.com/user-attachments/assets/edb0aaa5-5747-47fa-881f-cc4c4f7fe8bc"
/>

---

Also adds the following UI helpers:

1. Makes list selection searchable
2. Adds navigation to the bottom pane, so you could add a stack of
popups
3. Basic custom prompt view
2025-09-21 20:18:35 -07:00
Ahmed Ibrahim
a4ebd069e5 Tui: Rate limits (#3977)
### /limits: show rate limits graph

<img width="442" height="287" alt="image"
src="https://github.com/user-attachments/assets/3e29a241-a4b0-4df8-bf71-43dc4dd805ca"
/>

### Warning on close to rate limits:

<img width="507" height="96" alt="image"
src="https://github.com/user-attachments/assets/732a958b-d240-4a89-8289-caa92de83537"
/>

Based on #3965
2025-09-21 10:20:49 -07:00
Ahmed Ibrahim
04504d8218 Forward Rate limits to the UI (#3965)
We currently get information about rate limits in the response headers.
We want to forward them to the clients to have better transparency.
UI/UX plans have been discussed and this information is needed.
2025-09-20 21:26:16 -07:00
Jeremy Rose
42d335deb8 Cache keyboard enhancement detection before event streams (#3950)
Hopefully fixes incorrectly showing ^J instead of Shift+Enter in the key
hints occasionally.
2025-09-19 21:38:36 +00:00
Jeremy Rose
ad0c2b4db3 don't clear screen on startup (#3925) 2025-09-19 14:22:58 -07:00
Jeremy Rose
ff389dc52f fix alignment in slash command popup (#3937) 2025-09-19 19:08:04 +00:00
pakrym-oai
9b18875a42 Use helpers instead of fixtures (#3888)
Move to using test helper method everywhere.
2025-09-19 06:46:25 -07:00
pakrym-oai
881c7978f1 Move responses mocking helpers to a shared lib (#3878)
These are generally useful
2025-09-18 17:53:14 -07:00
Ahmed Ibrahim
a7fda70053 Use a unified shell tool to not break cache (#3814)
Currently, we change the tool description according to the sandbox
policy and approval policy. This breaks the cache when the user hits
`/approvals`. This PR does the following:
- Always use the shell tool with an escalation parameter: remove
`create_shell_tool_for_sandbox` and always use the unified tool via
`create_shell_tool`.
- Reject the function call when the model requests escalation but is
not permitted to.
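
Sketched in outline; the field and function names here are illustrative, not the codex-rs definitions:

```
struct ShellCall {
    command: Vec<String>,
    with_escalated_permissions: bool,
}

// The single unified tool is always advertised; escalation requests are
// validated at call time instead of changing the tool description.
fn validate(call: &ShellCall, escalation_allowed: bool) -> Result<(), String> {
    if call.with_escalated_permissions && !escalation_allowed {
        return Err(
            "escalated permissions are not available under the current approval policy"
                .to_string(),
        );
    }
    Ok(())
}
```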
2025-09-19 00:08:28 +00:00
Michael Bolin
de64f5f007 fix: update try_parse_word_only_commands_sequence() to return commands in order (#3881)
Incidentally, we had a test for this in
`accepts_multiple_commands_with_allowed_operators()`, but it was
verifying the bad behavior. Oops!
2025-09-18 16:07:38 -07:00
Michael Bolin
8595237505 fix: ensure cwd for conversation and sandbox are separate concerns (#3874)
Previous to this PR, both of these functions take a single `cwd`:


71038381aa/codex-rs/core/src/seatbelt.rs (L19-L25)


71038381aa/codex-rs/core/src/landlock.rs (L16-L23)

whereas `cwd` and `sandbox_cwd` should be set independently (fixed in
this PR).

Added `sandbox_distinguishes_command_and_policy_cwds()` to
`codex-rs/exec/tests/suite/sandbox.rs` to verify this.
2025-09-18 14:37:06 -07:00
dedrisian-oai
62258df92f feat: /review (#3774)
Adds `/review` action in TUI

<img width="637" height="370" alt="Screenshot 2025-09-17 at 12 41 19 AM"
src="https://github.com/user-attachments/assets/b1979a6e-844a-4b97-ab20-107c185aec1d"
/>
2025-09-18 14:14:16 -07:00
Jeremy Rose
b34e906396 Reland "refactor transcript view to handle HistoryCells" (#3753)
Reland of #3538
2025-09-18 20:55:53 +00:00
Jeremy Rose
71038381aa fix error on missing notifications in [tui] (#3867)
Fixes #3811.
2025-09-18 11:25:09 -07:00
jif-oai
277fc6254e chore: use tokio mutex and async function to prevent blocking a worker (#3850)
### Why Use `tokio::sync::Mutex`

`std::sync::Mutex` is not _async-aware_: it blocks the entire thread
instead of just yielding the task, and it can be poisoned, which is not
the case for the `tokio` mutex. The async-aware lock lets the Tokio
runtime keep running other tasks while waiting for the lock, preventing
deadlocks and performance bottlenecks.

In general, it is preferred in an async environment.
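
For illustration, a generic example of the async-aware lock (not code from this PR):

```
use std::sync::Arc;
use tokio::sync::Mutex;

async fn append_line(log: Arc<Mutex<Vec<String>>>, line: String) {
    // .lock().await yields to the runtime while waiting,
    // so other tasks on this worker thread keep making progress.
    let mut entries = log.lock().await;
    entries.push(line);
    // Unlike std::sync::MutexGuard, this guard may be held across
    // .await points, and the lock cannot be poisoned by a panic.
}
```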
2025-09-18 18:21:52 +01:00
jif-oai
992b531180 fix: some nit Rust reference issues (#3849)
Fix some small reference issues. No behavioural change; just making the
code cleaner.
2025-09-18 18:18:06 +01:00
Jeremy Rose
84a0ba9bf5 hint for codex resume on tui exit (#3757)
<img width="931" height="438" alt="Screenshot 2025-09-16 at 4 25 19 PM"
src="https://github.com/user-attachments/assets/ccfb8df1-feaf-45b4-8f7f-56100de916d5"
/>
2025-09-18 09:28:32 -07:00
jif-oai
4a5d6f7c71 Make ESC button work during auto-compaction (#3857)
Only emit a task-finished event when the compaction comes from a `/compact`.
2025-09-18 15:34:16 +00:00
jif-oai
1b3c8b8e94 Unify animations (#3729)
Unify the animations in a single piece of code and add Ctrl+. to the
onboarding.
2025-09-18 16:27:15 +01:00
pakrym-oai
d4aba772cb Switch to uuid_v7 and tighten ConversationId usage (#3819)
Make sure conversations have a timestamp.
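
For reference, UUIDv7 embeds a Unix-millisecond timestamp in its high bits, so ids sort by creation time. With the `uuid` crate this looks like:

```
use uuid::Uuid;

fn main() {
    // Requires the `v7` feature of the uuid crate.
    let id = Uuid::now_v7();
    println!("{id}"); // lexicographically time-ordered
}
```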
2025-09-18 14:37:03 +00:00
jif-oai
4c97eeb32a bug: Ignore tests for now (#3777)
Ignore flaky / long tests for now
2025-09-18 10:43:45 +01:00
Thibault Sottiaux
c9505488a1 chore: update "Codex CLI harness, sandboxing, and approvals" section (#3822) 2025-09-17 16:48:20 -07:00
Jeremy Rose
530382db05 Use agent reply text in turn notifications (#3756)
Instead of "Agent turn complete", turn-complete notifications now
include the first handful of chars from the agent's final message.
2025-09-17 11:23:46 -07:00
Abhishek Bhardwaj
208089e58e AGENTS.md: Add instruction to install missing commands (#3807)
This change instructs the model to install any missing command.
Otherwise, tokens are wasted when it repeatedly tries to run commands
that aren't available before installing them.
2025-09-17 11:06:59 -07:00
Michael Bolin
e5fdb5b0fd fix: specify --repo when calling gh (#3806)
Often, `gh` infers `--repo` when it is run from a Git clone, but our
`publish-npm` step is designed to avoid the overhead of cloning the
repo, so add the `--repo` option explicitly to fix things.
2025-09-17 11:05:22 -07:00
Michael Bolin
5332f6e215 fix: make publish-npm its own job with specific permissions (#3767)
The build for `v0.37.0-alpha.3` failed on the `Create GitHub Release`
step:

https://github.com/openai/codex/actions/runs/17786866086/job/50556513221

with:

```
⚠️ GitHub release failed with status: 403
{"message":"Resource not accessible by integration","documentation_url":"https://docs.github.com/rest/releases/releases#create-a-release","status":"403"}
Skip retry — your GitHub token/PAT does not have the required permission to create a release
```

I believe I should have not introduced a top-level `permissions` for the
workflow in https://github.com/openai/codex/pull/3431 because that
affected the `permissions` for each job in the workflow.

This PR introduces `publish-npm` as its own job, which allows us to:

- consolidate all the Node.js-related steps required for publishing
- limit the reach of the `id-token: write` permission
- skip it altogether if it is an alpha build

With this PR, each of `release`, `publish-npm`, and `update-branch` has
an explicit `permissions` block.
2025-09-16 22:55:53 -07:00
Michael Bolin
5d87f5d24a fix: ensure pnpm is installed before running npm install (#3763)
Note we do the same thing in `ci.yml`:


791d7b125f/.github/workflows/ci.yml (L17-L25)
2025-09-16 21:36:13 -07:00
Michael Bolin
791d7b125f fix: make GitHub Action publish to npm using trusted publishing (#3431) 2025-09-16 20:33:59 -07:00
dedrisian-oai
72733e34c4 Add dev message upon review out (#3758)
Proposal: We want to record a dev message like so:

```
{
  "type": "message",
  "role": "user",
  "content": [
    {
      "type": "input_text",
      "text": "<user_action>
  <context>User initiated a review task. Here's the full review output from reviewer model. User may select one or more comments to resolve.</context>
  <action>review</action>
  <results>
  {findings_str}
  </results>
</user_action>"
    }
  ]
}
```

Without showing in the chat transcript.

Rough idea, but it fixes an issue where the user finishes a review
thread and asks the parent to "fix the rest of the review issues",
thinking that the parent knows about it.

### Question: Why not a tool call?

Because the agent didn't make the call; a human did. Also, we haven't
implemented sub-agents yet, and we'll need to think about the way we
represent these human-led tool calls for the agent.
2025-09-16 18:43:32 -07:00
Jeremy Rose
b8d2b1a576 restyle thinking outputs (#3755)
<img width="1205" height="930" alt="Screenshot 2025-09-16 at 2 23 18 PM"
src="https://github.com/user-attachments/assets/bb2494f1-dd59-4bc9-9c4e-740605c999fd"
/>
2025-09-16 16:42:43 -07:00
dedrisian-oai
7fe4021f95 Review mode core updates (#3701)
1. Adds the environment prompt (including cwd) to review thread
2. Prepends the review prompt as a user message (temporary fix so the
instructions are not replaced on backend)
3. Sets reasoning to low
4. Sets default review model to `gpt-5-codex`
2025-09-16 13:36:51 -07:00
Dylan
11285655c4 fix: Record EnvironmentContext in SendUserTurn (#3678)
## Summary
SendUserTurn has not been correctly handling updates to policies. While
the TUI protocol handles this in `Op::OverrideTurnContext`,
SendUserTurn should append `EnvironmentContext` messages when the
sandbox settings change. MCP client behavior should match the CLI
behavior, so we update the `SendUserTurn` message to match.

## Testing
- [x] Added prompt caching tests
2025-09-16 11:32:20 -07:00
Ahmed Ibrahim
244687303b Persist search items (#3745)
Let's record the search items because they are part of the history.
2025-09-16 18:02:15 +00:00
pakrym-oai
5e2c4f7e35 Update azure model provider example (#3680)
Make the section linkable.
2025-09-16 08:43:29 -07:00
Dylan
a8026d3846 fix: read-only escalations (#3673)
## Summary
Splitting out this smaller fix from #2694 - fixes the sandbox
permissions so the Chat / read-only mode tool definition matches
expectations.

## Testing 
- [x] Tested locally

<img width="1271" height="629" alt="Screenshot 2025-09-15 at 2 51 19 PM"
src="https://github.com/user-attachments/assets/fcb247e4-30b6-4199-80d7-a2876d79ad7d"
/>
2025-09-15 19:01:10 -07:00
easong-openai
45bccd36b0 fix permissions alignment 2025-09-15 17:34:04 -07:00
dependabot[bot]
404c126fc3 chore(deps): bump wildmatch from 2.4.0 to 2.5.0 in /codex-rs (#3619)
Bumps [wildmatch](https://github.com/becheran/wildmatch) from 2.4.0 to
2.5.0.
Release notes (from wildmatch's releases), v2.5.0: becheran/wildmatch#27

Commits:
- `b39902c` chore: Release wildmatch version 2.5.0
- `87a8cf4` Merge pull request #28 from smichaku/micha/fix-unicode-case-insensitive-matching
- `a3ab490` fix: Fix unicode matching for non-ASCII characters
- See full diff in the compare view: https://github.com/becheran/wildmatch/compare/v2.4.0...v2.5.0


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=wildmatch&package-manager=cargo&previous-version=2.4.0&new-version=2.5.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-15 12:57:17 -07:00
dependabot[bot]
88027552dd chore(deps): bump serde from 1.0.219 to 1.0.223 in /codex-rs (#3618)
Bumps [serde](https://github.com/serde-rs/serde) from 1.0.219 to
1.0.223.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/serde-rs/serde/releases">serde's
releases</a>.</em></p>
<blockquote>
<h2>v1.0.223</h2>
<ul>
<li>Fix serde_core documentation links (<a
href="https://redirect.github.com/serde-rs/serde/issues/2978">#2978</a>)</li>
</ul>
<h2>v1.0.222</h2>
<ul>
<li>Make <code>serialize_with</code> attribute produce code that works
if respanned to 2024 edition (<a
href="https://redirect.github.com/serde-rs/serde/issues/2950">#2950</a>,
thanks <a href="https://github.com/aytey"><code>@​aytey</code></a>)</li>
</ul>
<h2>v1.0.221</h2>
<ul>
<li>Documentation improvements (<a
href="https://redirect.github.com/serde-rs/serde/issues/2973">#2973</a>)</li>
<li>Deprecate <code>serde_if_integer128!</code> macro (<a
href="https://redirect.github.com/serde-rs/serde/issues/2975">#2975</a>)</li>
</ul>
<h2>v1.0.220</h2>
<ul>
<li>Add a way for data formats to depend on serde traits without waiting
for serde_derive compilation: <a
href="https://docs.rs/serde_core">https://docs.rs/serde_core</a> (<a
href="https://redirect.github.com/serde-rs/serde/issues/2608">#2608</a>,
thanks <a
href="https://github.com/osiewicz"><code>@​osiewicz</code></a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="6c316d7cb5"><code>6c316d7</code></a>
Release 1.0.223</li>
<li><a
href="a4ac0c2bc6"><code>a4ac0c2</code></a>
Merge pull request <a
href="https://redirect.github.com/serde-rs/serde/issues/2978">#2978</a>
from dtolnay/htmlrooturl</li>
<li><a
href="ed76364f87"><code>ed76364</code></a>
Change serde_core's html_root_url to docs.rs/serde_core</li>
<li><a
href="57e21a1afa"><code>57e21a1</code></a>
Release 1.0.222</li>
<li><a
href="bb58726133"><code>bb58726</code></a>
Merge pull request <a
href="https://redirect.github.com/serde-rs/serde/issues/2950">#2950</a>
from aytey/fix_lifetime_issue_2024</li>
<li><a
href="3f6925125b"><code>3f69251</code></a>
Delete unneeded field of MapDeserializer</li>
<li><a
href="fd4decf2fe"><code>fd4decf</code></a>
Merge pull request <a
href="https://redirect.github.com/serde-rs/serde/issues/2976">#2976</a>
from dtolnay/content</li>
<li><a
href="00b1b6b2b5"><code>00b1b6b</code></a>
Move Content's Deserialize impl from serde_core to serde</li>
<li><a
href="cf141aa8c7"><code>cf141aa</code></a>
Move Content's Clone impl from serde_core to serde</li>
<li><a
href="ff3aee490a"><code>ff3aee4</code></a>
Release 1.0.221</li>
<li>Additional commits viewable in <a
href="https://github.com/serde-rs/serde/compare/v1.0.219...v1.0.223">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=serde&package-manager=cargo&previous-version=1.0.219&new-version=1.0.223)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-15 12:56:20 -07:00
Michael Bolin
ca8bd09d56 chore: simplify dep so serde=1 in Cargo.toml (#3664)
With this change, dependabot should just have to update `Cargo.lock` for
`serde`, e.g.:

- https://github.com/openai/codex/pull/3617
- https://github.com/openai/codex/pull/3618
2025-09-15 19:22:29 +00:00
dependabot[bot]
39ed8a7d26 chore(deps): bump serde_json from 1.0.143 to 1.0.145 in /codex-rs (#3617)
Bumps [serde_json](https://github.com/serde-rs/json) from 1.0.143 to
1.0.145.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/serde-rs/json/releases">serde_json's
releases</a>.</em></p>
<blockquote>
<h2>v1.0.145</h2>
<ul>
<li>Raise serde version requirement to &gt;=1.0.220</li>
</ul>
<h2>v1.0.144</h2>
<ul>
<li>Switch serde dependency to serde_core (<a
href="https://redirect.github.com/serde-rs/json/issues/1285">#1285</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="efa66e3a1d"><code>efa66e3</code></a>
Release 1.0.145</li>
<li><a
href="23679e2b9d"><code>23679e2</code></a>
Add serde version constraint</li>
<li><a
href="fc27bafbf7"><code>fc27baf</code></a>
Release 1.0.144</li>
<li><a
href="caef3c6ea6"><code>caef3c6</code></a>
Ignore uninlined_format_args pedantic clippy lint</li>
<li><a
href="81ba3aaaff"><code>81ba3aa</code></a>
Merge pull request <a
href="https://redirect.github.com/serde-rs/json/issues/1285">#1285</a>
from dtolnay/serdecore</li>
<li><a
href="d21e8ce7a7"><code>d21e8ce</code></a>
Switch serde dependency to serde_core</li>
<li><a
href="6beb6cd596"><code>6beb6cd</code></a>
Merge pull request <a
href="https://redirect.github.com/serde-rs/json/issues/1286">#1286</a>
from dtolnay/up</li>
<li><a
href="1dbc803749"><code>1dbc803</code></a>
Raise required compiler to Rust 1.61</li>
<li><a
href="0bf5d87003"><code>0bf5d87</code></a>
Enforce trybuild &gt;= 1.0.108</li>
<li><a
href="d12e943590"><code>d12e943</code></a>
Update actions/checkout@v4 -&gt; v5</li>
<li>See full diff in <a
href="https://github.com/serde-rs/json/compare/v1.0.143...v1.0.145">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=serde_json&package-manager=cargo&previous-version=1.0.143&new-version=1.0.145)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-15 11:58:57 -07:00
Michael Bolin
2df7f7efe5 chore: restore prerelease logic in rust-release.yml (#3659)
Revert #3645.
2025-09-15 17:52:49 +00:00
Jeremy Rose
0560079c41 notifications on approvals and turn end (#3329)
Uses OSC 9 to notify when a turn ends or approval is required. Won't
work in VS Code or Terminal.app, but iTerm2/Kitty/WezTerm support it :)
2025-09-15 10:22:02 -07:00
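
For reference, OSC 9 is a single escape sequence; a minimal Rust sketch of emitting it (the helper and message here are illustrative, not this PR's actual code):

```rust
use std::io::{self, Write};

// Emits an OSC 9 desktop notification: ESC ] 9 ; <text> BEL.
// Supported by iTerm2/kitty/WezTerm; terminals without the extension ignore it.
fn notify(message: &str) -> io::Result<()> {
    let mut out = io::stdout().lock();
    write!(out, "\x1b]9;{message}\x07")?;
    out.flush()
}

fn main() -> io::Result<()> {
    notify("codex: turn complete")
}
```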
Michael Bolin
0de154194d fix: change MIN_ANIMATION_HEIGHT so show_animation is calculated correctly (#3656)
Reported height was `20` instead of `21`, so `area.height >=
MIN_ANIMATION_HEIGHT` was `false` and therefore `show_animation` was
`false`, so the animation never displayed.
2025-09-15 10:02:53 -07:00
ae
5c583fe89b feat: tweak onboarding strings (#3650) 2025-09-15 08:49:37 -07:00
easong-openai
cf63cbf153 fix stray login url characters persisting in login (#3639)
<img width="885" height="177" alt="image"
src="https://github.com/user-attachments/assets/d396e0a5-f303-494f-bab1-f7af57b88a3e"
/>


Fixes this.
2025-09-15 15:44:53 +00:00
pakrym-oai
b1c291e2bb Add file reference guidelines to gpt-5 prompt (#3651) 2025-09-15 08:35:30 -07:00
Thibault Sottiaux
934d728946 feat: skip animations on small terminals (#3647)
Changes:
- skip the welcome animation when the terminal area is below 60x21
- skip the model upgrade animation when the terminal area is below 60x24
to avoid clipping

---------

Co-authored-by: Michael Bolin <mbolin@openai.com>
2025-09-15 08:30:32 -07:00
Michael Bolin
f037b2fd56 chore: rename (#3648) 2025-09-15 08:17:13 -07:00
Thibault Sottiaux
d60cbed691 fix: add references (#3633) 2025-09-15 07:48:22 -07:00
Michael Bolin
6aafe37752 chore: set prerelease:true for now (#3645) 2025-09-15 07:17:46 -07:00
jimmyfraiture2
d555b68469 fix: race condition unified exec (#3644)
Fix race condition without storing an rx in the session
2025-09-15 06:52:39 -07:00
ae
9baa5c33da feat: update splash (#3631)
- Update splash styling.
- Add center truncation for long paths.
  (Uses new `center_truncate_path` utility; see the sketch after this entry.)
- Update the suggested commands.


## New splash
<img width="560" height="326" alt="image"
src="https://github.com/user-attachments/assets/b80d7075-f376-4019-a464-b96a78b0676d"
/>

## Example with truncation:
<img width="524" height="317" alt="image"
src="https://github.com/user-attachments/assets/b023c5cc-0bf0-4d21-9b98-bfea85546eda"
/>
2025-09-15 06:44:40 -07:00
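
A rough sketch of center truncation (the real `center_truncate_path` utility may differ; this keeps the head and tail of the path and elides the middle):

```rust
fn center_truncate(s: &str, max_chars: usize) -> String {
    let chars: Vec<char> = s.chars().collect();
    if chars.len() <= max_chars || max_chars < 2 {
        return s.to_string();
    }
    let keep = max_chars - 1; // reserve one slot for the ellipsis
    let head = keep / 2;
    let tail = keep - head;
    let mut out: String = chars[..head].iter().collect();
    out.push('…');
    out.extend(&chars[chars.len() - tail..]);
    out
}

fn main() {
    println!("{}", center_truncate("/Users/me/projects/very/deep/path/codex", 20));
}
```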
dependabot[bot]
fdf4a68646 chore(deps): bump tracing-subscriber from 0.3.19 to 0.3.20 in /codex-rs (#3620)
Bumps [tracing-subscriber](https://github.com/tokio-rs/tracing) from
0.3.19 to 0.3.20.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/tokio-rs/tracing/releases">tracing-subscriber's
releases</a>.</em></p>
<blockquote>
<h2>tracing-subscriber 0.3.20</h2>
<p><strong>Security Fix</strong>: ANSI Escape Sequence Injection
(CVE-TBD)</p>
<h2>Impact</h2>
<p>Previous versions of tracing-subscriber were vulnerable to ANSI
escape sequence injection attacks. Untrusted user input containing ANSI
escape sequences could be injected into terminal output when logged,
potentially allowing attackers to:</p>
<ul>
<li>Manipulate terminal title bars</li>
<li>Clear screens or modify terminal display</li>
<li>Potentially mislead users through terminal manipulation</li>
</ul>
<p>In isolation, the impact is minimal; however, security issues have been
found in terminal emulators that enabled an attacker to use ANSI escape
sequences via logs to exploit vulnerabilities in the terminal
emulator.</p>
<h2>Solution</h2>
<p>Version 0.3.20 fixes this vulnerability by escaping ANSI control
characters when writing events to destinations that may be printed to
the terminal.</p>
<h2>Affected Versions</h2>
<p>All versions of tracing-subscriber prior to 0.3.20 are affected by
this vulnerability.</p>
<h2>Recommendations</h2>
<p>Immediate Action Required: We recommend upgrading to
tracing-subscriber 0.3.20 immediately, especially if your
application:</p>
<ul>
<li>Logs user-provided input (form data, HTTP headers, query parameters,
etc.)</li>
<li>Runs in environments where terminal output is displayed to
users</li>
</ul>
<h2>Migration</h2>
<p>This is a patch release with no breaking API changes. Simply update
your Cargo.toml:</p>
<pre lang="toml"><code>[dependencies]
tracing-subscriber = &quot;0.3.20&quot;
</code></pre>
<h2>Acknowledgments</h2>
<p>We would like to thank <a href="http://github.com/zefr0x">zefr0x</a>
who responsibly reported the issue at
<code>security@tokio.rs</code>.</p>
<p>If you believe you have found a security vulnerability in any
tokio-rs project, please email us at <code>security@tokio.rs</code>.</p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="4c52ca5266"><code>4c52ca5</code></a>
fmt: fix ANSI escape sequence injection vulnerability (<a
href="https://redirect.github.com/tokio-rs/tracing/issues/3368">#3368</a>)</li>
<li><a
href="f71cebe41e"><code>f71cebe</code></a>
subscriber: impl Clone for EnvFilter (<a
href="https://redirect.github.com/tokio-rs/tracing/issues/3360">#3360</a>)</li>
<li><a
href="3a1f571102"><code>3a1f571</code></a>
Fix CI (<a
href="https://redirect.github.com/tokio-rs/tracing/issues/3361">#3361</a>)</li>
<li><a
href="e63ef57f3d"><code>e63ef57</code></a>
chore: prepare tracing-attributes 0.1.30 (<a
href="https://redirect.github.com/tokio-rs/tracing/issues/3316">#3316</a>)</li>
<li><a
href="6e59a13b1a"><code>6e59a13</code></a>
attributes: fix tracing::instrument regression around shadowing (<a
href="https://redirect.github.com/tokio-rs/tracing/issues/3311">#3311</a>)</li>
<li><a
href="e4df761275"><code>e4df761</code></a>
tracing: update core to 0.1.34 and attributes to 0.1.29 (<a
href="https://redirect.github.com/tokio-rs/tracing/issues/3305">#3305</a>)</li>
<li><a
href="643f392ebb"><code>643f392</code></a>
chore: prepare tracing-attributes 0.1.29 (<a
href="https://redirect.github.com/tokio-rs/tracing/issues/3304">#3304</a>)</li>
<li><a
href="d08e7a6eea"><code>d08e7a6</code></a>
chore: prepare tracing-core 0.1.34 (<a
href="https://redirect.github.com/tokio-rs/tracing/issues/3302">#3302</a>)</li>
<li><a
href="6e70c571d3"><code>6e70c57</code></a>
tracing-subscriber: count numbers of enters in <code>Timings</code> (<a
href="https://redirect.github.com/tokio-rs/tracing/issues/2944">#2944</a>)</li>
<li><a
href="c01d4fd9de"><code>c01d4fd</code></a>
fix docs and enable CI on <code>main</code> branch (<a
href="https://redirect.github.com/tokio-rs/tracing/issues/3295">#3295</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/tokio-rs/tracing/compare/tracing-subscriber-0.3.19...tracing-subscriber-0.3.20">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=tracing-subscriber&package-manager=cargo&previous-version=0.3.19&new-version=0.3.20)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-15 00:51:33 -07:00
dependabot[bot]
adc9e1526b chore(deps): bump slab from 0.4.10 to 0.4.11 in /codex-rs (#3635)
Bumps [slab](https://github.com/tokio-rs/slab) from 0.4.10 to 0.4.11.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/tokio-rs/slab/releases">slab's
releases</a>.</em></p>
<blockquote>
<h2>v0.4.11</h2>
<ul>
<li>Fix <code>Slab::get_disjoint_mut</code> out of bounds (<a
href="https://redirect.github.com/tokio-rs/slab/issues/152">#152</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/tokio-rs/slab/blob/master/CHANGELOG.md">slab's
changelog</a>.</em></p>
<blockquote>
<h1>0.4.11 (August 8, 2025)</h1>
<ul>
<li>Fix <code>Slab::get_disjoint_mut</code> out of bounds (<a
href="https://redirect.github.com/tokio-rs/slab/issues/152">#152</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="2e5779f8eb"><code>2e5779f</code></a>
Release v0.4.11 (<a
href="https://redirect.github.com/tokio-rs/slab/issues/153">#153</a>)</li>
<li><a
href="2d65c514bc"><code>2d65c51</code></a>
Fix get_disjoint_mut error condition (<a
href="https://redirect.github.com/tokio-rs/slab/issues/152">#152</a>)</li>
<li>See full diff in <a
href="https://github.com/tokio-rs/slab/compare/v0.4.10...v0.4.11">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=slab&package-manager=cargo&previous-version=0.4.10&new-version=0.4.11)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-15 00:48:53 -07:00
Ed Bayes
b9af1d2b16 Login flow polish (#3632)
# Description
- Update sign in flow

# Tests
- Passes CI

---------

Co-authored-by: Michael Bolin <mbolin@openai.com>
2025-09-15 00:42:53 -07:00
Ahmed Ibrahim
2d52e3b40a Fix codex resume so flags (cd, model, search, etc.) still work (#3625)
Bug: flags/config values could only be added before `resume`.

`codex -m gpt-5 resume` works.

However, `codex resume -m gpt-5` should also work.

This PR follows this
[approach](https://stackoverflow.com/questions/76408952/rust-clap-re-use-same-arguments-in-different-subcommand).

I didn't make those flags global because `codex login`
shouldn't accept them.
2025-09-15 06:16:17 +00:00
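
A minimal sketch of that approach (names are illustrative): one shared `Args` struct is flattened both at the top level and into the subcommand, so the flag parses in either position:

```rust
use clap::{Args, Parser, Subcommand};

#[derive(Args, Debug)]
struct CommonFlags {
    /// Model to use (illustrative; the real CLI shares more flags).
    #[arg(short = 'm', long = "model")]
    model: Option<String>,
}

#[derive(Subcommand, Debug)]
enum Command {
    /// Resume a previous session.
    Resume {
        #[command(flatten)]
        flags: CommonFlags,
        session_id: Option<String>,
    },
}

#[derive(Parser, Debug)]
struct Cli {
    #[command(flatten)]
    flags: CommonFlags,
    #[command(subcommand)]
    command: Option<Command>,
}

fn main() {
    // Both `codex -m gpt-5 resume` and `codex resume -m gpt-5` now parse.
    let cli = Cli::parse();
    println!("{cli:?}");
}
```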
Thibault Sottiaux
6039f8a126 feat: tighten preset filter, tame storage load logs, enable rollout prompt by default (#3628)
Summary
- common: use exact equality for Swiftfox exclusion to avoid hiding
future slugs that merely contain the substring
- core: treat missing internal_storage.json as expected (debug), warn
only on real IO/parse errors
- tui: drop DEBUG_HIGH gate; always consider showing rollout prompt, but
suppress under ApiKey auth mode
2025-09-14 23:05:41 -07:00
Ahmed Ibrahim
50262a44ce Show abort in the resume (#3629)
Show abort error when resuming a session
2025-09-15 05:24:30 +00:00
Ed Bayes
839b2ae7cf Change animation frames (#3627)
## Description
- Changes animation frames to be smaller
- Cleans up file names and popup logic

## Tests
- Passes local CI
2025-09-15 04:36:34 +00:00
easong-openai
6a8e743d57 initial mcp add interface (#3543)
Adds `codex mcp add`, `codex mcp list`, `codex mcp remove`. Currently writes to global config.
2025-09-15 04:30:56 +00:00
Thibault Sottiaux
a797051921 chore: update swiftfox_prompt.md (#3624) 2025-09-15 04:10:35 +00:00
Thibault Sottiaux
d7d9d96d6c feat: add reasoning level to header (#3622) 2025-09-15 03:59:22 +00:00
Ahmed Ibrahim
26f1246a89 Revert "refactor transcript view to handle HistoryCells" (#3614)
Reverts openai/codex#3538
It panics when forking the first message, and it calculates the index
incorrectly.
2025-09-15 03:39:36 +00:00
Ahmed Ibrahim
6581da9b57 Show the header when resuming a conversation (#3615) 2025-09-15 03:31:08 +00:00
Eric Traut
900bb01486 When logging in using ChatGPT, make sure to overwrite API key (#3611)
When logging in with ChatGPT via the `codex login` command, a
successful login should write a new `auth.json` file with the ChatGPT
token information. The old code attempted to retain the API key and
merge the token information into the existing `auth.json` file. With the
new simplified login mechanism, `auth.json` should hold auth information
for only ChatGPT or an API key, not both.

The `codex login --api-key <key>` code path was already doing the right
thing here, but the `codex login` command was incorrect. This PR fixes
the problem and adds test cases for both commands.
2025-09-14 19:48:18 -07:00
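
A sketch of the corrected write path (the struct and field names are assumptions, not the crate's actual definitions):

```rust
#[derive(Debug)]
struct AuthDotJson {
    chatgpt_tokens: Option<String>,
    openai_api_key: Option<String>,
}

// On a successful `codex login`, write a fresh auth.json holding only the
// ChatGPT tokens, instead of merging them into an existing file that may
// still contain an API key.
fn auth_json_after_chatgpt_login(tokens: String) -> AuthDotJson {
    AuthDotJson {
        chatgpt_tokens: Some(tokens),
        openai_api_key: None, // deliberately dropped: ChatGPT or API key, never both
    }
}

fn main() {
    println!("{:?}", auth_json_after_chatgpt_login("id-token...".to_string()));
}
```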
Ahmed Ibrahim
2ad6a37192 Don't show the model for apikey (#3607) 2025-09-15 01:32:18 +00:00
Eric Traut
e5dd7f0934 Fix get_auth_status response when using custom provider (#3581)
This PR addresses an edge-case bug that appears in the VS Code extension
in the following situation:
1. Log in using ChatGPT (using either the CLI or extension). This will
create an `auth.json` file.
2. Manually modify `config.toml` to specify a custom provider.
3. Start a fresh copy of the VS Code extension.

The profile menu in the VS Code extension will indicate that you are
logged in using ChatGPT even though you're not.

This is caused by the `get_auth_status` method returning an
`auth_method: 'chatgpt'` when a custom provider is configured and it
doesn't use OpenAI auth (i.e. `requires_openai_auth` is false). The
method should always return `auth_method: None` if
`requires_openai_auth` is false.

The same bug also causes the NUX (new user experience) screen to be
displayed in the VSCE in this situation.
2025-09-14 18:27:02 -07:00
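
A sketch of the fixed rule (enum and parameter names are assumptions):

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
enum AuthMethod {
    ChatGpt,
    ApiKey,
}

// If the configured provider doesn't use OpenAI auth, whatever sits in
// auth.json is irrelevant, so report no auth method at all.
fn auth_method_for_status(
    requires_openai_auth: bool,
    stored: Option<AuthMethod>,
) -> Option<AuthMethod> {
    if !requires_openai_auth {
        return None;
    }
    stored
}

fn main() {
    // Custom provider + stale ChatGPT tokens: status must be None.
    assert_eq!(auth_method_for_status(false, Some(AuthMethod::ChatGpt)), None);
}
```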
Dylan
b6673838e8 fix: model family and apply_patch consistency (#3603)
## Summary
Resolves a merge conflict between #3597 and #3560, and adds tests to
double check our apply_patch configuration.

## Testing
- [x] Added unit tests

---------

Co-authored-by: dedrisian-oai <dedrisian@openai.com>
2025-09-14 18:20:37 -07:00
Fouad Matin
1823906215 fix(tui): update full-auto to default preset (#3608)
Update `--full-auto` to use default preset
2025-09-14 18:14:11 -07:00
Fouad Matin
5185d69f13 fix(core): flaky test completed_commands_do_not_persist_sessions (#3596)
Fix flaky test:
```
        FAIL [   2.641s] codex-core unified_exec::tests::completed_commands_do_not_persist_sessions
  stdout ───

    running 1 test
    test unified_exec::tests::completed_commands_do_not_persist_sessions ... FAILED

    failures:

    failures:
        unified_exec::tests::completed_commands_do_not_persist_sessions

    test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 235 filtered out; finished in 2.63s
    
  stderr ───

    thread 'unified_exec::tests::completed_commands_do_not_persist_sessions' panicked at core/src/unified_exec/mod.rs:582:9:
    assertion failed: result.output.contains("codex")
```
2025-09-14 18:04:05 -07:00
pakrym-oai
4dffa496ac Skip frames files in codespell (#3606)
Fixes CI
2025-09-14 18:00:23 -07:00
Ahmed Ibrahim
ce984b2c71 Add session header to chat widget (#3592)
<img width="570" height="332" alt="image"
src="https://github.com/user-attachments/assets/ca6dfcb0-f3a1-4b3e-978d-4f844ba77527"
/>
2025-09-14 17:53:50 -07:00
pakrym-oai
c47febf221 Append full raw reasoning event text (#3605)
We don't emit correct delta events and only get full reasoning back.
Append it to history.
2025-09-14 17:50:06 -07:00
jimmyfraiture2
76c37c5493 feat: UI animation (#3590)
Add NUX animation

---------

Co-authored-by: Thibault Sottiaux <tibo@openai.com>
2025-09-14 17:42:17 -07:00
dedrisian-oai
2aa84b8891 Fix EventMsg Optional (#3604) 2025-09-15 00:34:33 +00:00
pakrym-oai
9177bdae5e Only one branch for swiftfox (#3601)
Make each model family have a single branch.
2025-09-14 16:56:22 -07:00
Ahmed Ibrahim
a30e5e40ee enable-resume (#3537)
Adding the ability to resume conversations.
We have one verb: `resume`.

Behavior:

`tui`:
`codex resume`: opens session picker
`codex resume --last`: continue last message
`codex resume <session id>`: continue conversation with `session id`

`exec`:
`codex resume --last`: continue last conversation
`codex resume <session id>`: continue conversation with `session id`

Implementation:
- I added a function to find the path in `~/.codex/sessions/` with a
`UUID`. This is helpful in resuming with session id.
- Added the above mentioned flags
- Added lots of testing
2025-09-14 19:33:19 -04:00
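
A minimal sketch of the UUID lookup, assuming rollout filenames embed the session UUID (the real directory layout and names may differ):

```rust
use std::path::{Path, PathBuf};

// Walk ~/.codex/sessions recursively and return the first file whose
// name contains the session UUID.
fn find_session_by_uuid(sessions_dir: &Path, uuid: &str) -> Option<PathBuf> {
    let mut stack = vec![sessions_dir.to_path_buf()];
    while let Some(dir) = stack.pop() {
        for entry in std::fs::read_dir(&dir).ok()?.flatten() {
            let path = entry.path();
            if path.is_dir() {
                stack.push(path);
            } else if path.file_name()?.to_str()?.contains(uuid) {
                return Some(path);
            }
        }
    }
    None
}

fn main() {
    let home = std::env::var("HOME").unwrap_or_default();
    let dir = Path::new(&home).join(".codex/sessions");
    println!("{:?}", find_session_by_uuid(&dir, "1234"));
}
```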
jimmyfraiture2
99e1d33bd1 feat: update model save (#3589)
Update model save so that, by default, the model is saved globally or on
the profile, depending on the session
2025-09-14 16:25:43 -07:00
dedrisian-oai
b2f6fc3b9a Fix flaky windows test (#3564)
There are exactly 4 types of flaky tests on Windows x86 right now:

1. `review_input_isolated_from_parent_history` => Times out waiting for
closing events
2. `review_does_not_emit_agent_message_on_structured_output` => Times
out waiting for closing events
3. `auto_compact_runs_after_token_limit_hit` => Times out waiting for
closing events
4. `auto_compact_runs_after_token_limit_hit` => Also has a problem where
auto compact should add a third request, but receives 4 requests.

1, 2, and 3 seem to be solved by increasing the threads on the Windows
runner from 2 -> 4.

Don't know yet why #4 is happening, but it's probably also WireMock
issues on Windows causing races.
2025-09-14 23:20:25 +00:00
pakrym-oai
51f88fd04a Fix swiftfox model selector (#3598)
The model shouldn't be saved with a suffix. The effort is a separate
field.
2025-09-14 23:12:21 +00:00
pakrym-oai
916fdc2a37 Add per-model-family prompts (#3597)
Allows more flexibility in defining prompts.
2025-09-14 22:45:15 +00:00
pakrym-oai
863d9c237e Include command output when sending timeout to model (#3576)
Being able to see the output helps the model decide how to handle the
timeout.
2025-09-14 14:38:26 -07:00
Ahmed Ibrahim
7e1543f5d8 Align user history message prefix width (#3467)
<img width="798" height="340" alt="image"
src="https://github.com/user-attachments/assets/fdd63f40-9c94-4e3a-bce5-2d2f333a384f"
/>
2025-09-14 20:51:08 +00:00
Ahmed Ibrahim
d701eb32d7 Gate model upgrade prompt behind ChatGPT auth (#3586)
- refresh the login_state after onboarding.
- should be on chatgpt for upgrade
2025-09-14 13:08:24 -07:00
Michael Bolin
9baae77533 chore: update output_lines() to take a struct instead of a sequence of bools (#3591)
I found the boolean literals hard to follow.
2025-09-14 13:07:38 -07:00
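
Roughly the shape of the refactor (struct and field names here are assumptions):

```rust
struct OutputLinesParams {
    only_err: bool,
    include_angle_pipe: bool,
    include_prefix: bool,
}

fn output_lines(stdout: &str, stderr: &str, p: OutputLinesParams) -> Vec<String> {
    let source = if p.only_err { stderr } else { stdout };
    let pipe = if p.include_angle_pipe { "> " } else { "" };
    let prefix = if p.include_prefix { "    " } else { "" };
    source
        .lines()
        .map(|line| format!("{prefix}{pipe}{line}"))
        .collect()
}

fn main() {
    // Call sites now name each flag instead of passing bare `true`/`false`:
    let lines = output_lines(
        "ok",
        "boom",
        OutputLinesParams { only_err: true, include_angle_pipe: true, include_prefix: false },
    );
    println!("{lines:?}");
}
```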
Ahmed Ibrahim
e932722292 Add spacing before queued status indicator messages (#3474)
<img width="687" height="174" alt="image"
src="https://github.com/user-attachments/assets/e68f5a29-cb2d-4aa6-9cbd-f492878d8d0a"
/>
2025-09-14 15:37:28 -04:00
Ahmed Ibrahim
bbea6bbf7e Handle resuming/forking after compact (#3533)
We need to construct the history differently when a compact happens. For
this, we only consider the history after the compact and convert the
compact into a response item.

This needs to change and use `build_compact_history` when this #3446 is
merged.
2025-09-14 13:23:31 +00:00
Jeremy Rose
4891ee29c5 refactor transcript view to handle HistoryCells (#3538)
No (intended) functional change.

This refactors the transcript view to hold a list of HistoryCells
instead of a list of Lines. This simplifies and makes much of the logic
more robust, as well as laying the groundwork for future changes, e.g.
live-updating history cells in the transcript.

Similar to #2879 in goal. Fixes #2755.
2025-09-13 19:23:14 -07:00
Thibault Sottiaux
bac8a427f3 chore: default swiftfox models to experimental reasoning summaries (#3560) 2025-09-13 23:40:54 +00:00
Thibault Sottiaux
14ab1063a7 chore: rename 2025-09-12 23:17:41 -07:00
Thibault Sottiaux
a77364bbaa chore: remove descriptions 2025-09-12 22:55:40 -07:00
Thibault Sottiaux
19b4ed3c96 w 2025-09-12 22:44:05 -07:00
pakrym-oai
3d4acbaea0 Preserve IDs for more item types in azure (#3542)
https://github.com/openai/codex/issues/3509
2025-09-13 01:09:56 +00:00
pakrym-oai
414b8be8b6 Always request encrypted cot (#3539)
Otherwise future requests will fail with 500
2025-09-12 23:51:30 +00:00
dedrisian-oai
90a0fd342f Review Mode (Core) (#3401)
## 📝 Review Mode -- Core

This PR introduces the Core implementation for Review mode:

- New op `Op::Review { prompt: String }`: spawns a child review task
with isolated context, a review‑specific system prompt, and a
`Config.review_model`.
- `EnteredReviewMode`: emitted when the child review session starts.
Every event from this point onwards reflects the review session.
- `ExitedReviewMode(Option<ReviewOutputEvent>)`: emitted when the review
finishes or is interrupted, with optional structured findings:

```json
{
  "findings": [
    {
      "title": "<≤ 80 chars, imperative>",
      "body": "<valid Markdown explaining *why* this is a problem; cite files/lines/functions>",
      "confidence_score": <float 0.0-1.0>,
      "priority": <int 0-3>,
      "code_location": {
        "absolute_file_path": "<file path>",
        "line_range": {"start": <int>, "end": <int>}
      }
    }
  ],
  "overall_correctness": "patch is correct" | "patch is incorrect",
  "overall_explanation": "<1-3 sentence explanation justifying the overall_correctness verdict>",
  "overall_confidence_score": <float 0.0-1.0>
}
```

## Questions

### Why separate out its own message history?

We want the review thread to match the training of our review models as
much as possible -- that means using a custom prompt, removing user
instructions, and starting a clean chat history.

We also want to make sure the review thread doesn't leak into the parent
thread.

### Why do this as a mode, vs. sub-agents?

1. We want review to be a synchronous task, so it's fine for now to do a
bespoke implementation.
2. We're still unclear about the final structure for sub-agents. We'd
prefer to land this quickly and then refactor into sub-agents without
rushing that implementation.
2025-09-12 23:25:10 +00:00
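
One way to read that payload in Rust, as a sketch with serde (these are not necessarily the crate's actual type definitions):

```rust
use serde::Deserialize;

#[derive(Deserialize, Debug)]
struct ReviewOutputEvent {
    findings: Vec<Finding>,
    overall_correctness: String, // "patch is correct" | "patch is incorrect"
    overall_explanation: String,
    overall_confidence_score: f32,
}

#[derive(Deserialize, Debug)]
struct Finding {
    title: String,
    body: String,
    confidence_score: f32,
    priority: u8, // 0-3
    code_location: CodeLocation,
}

#[derive(Deserialize, Debug)]
struct CodeLocation {
    absolute_file_path: String,
    line_range: LineRange,
}

#[derive(Deserialize, Debug)]
struct LineRange {
    start: u32,
    end: u32,
}
```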
jif-oai
8d56d2f655 fix: NIT None reasoning effort (#3536)
Fix the reasoning effort not being set to None in the UI
2025-09-12 21:17:49 +00:00
jif-oai
8408f3e8ed Fix NUX UI (#3534)
Fix NUX UI
2025-09-12 14:09:31 -07:00
Jeremy Rose
b8ccfe9b65 core: expand default sandbox (#3483)
This adds some more capabilities to the default sandbox which I feel are
safe. Most are in the
[renderer.sb](https://source.chromium.org/chromium/chromium/src/+/main:sandbox/policy/mac/renderer.sb)
sandbox for Chrome renderers, which I feel is fair game for codex
commands.

Specific changes:

1. Allow processes in the sandbox to send signals to any other process
in the same sandbox (e.g. child processes or daemonized processes),
instead of just themselves.
2. Allow user-preference-read
3. Allow process-info* to anything in the same sandbox. This is a bit
wider than Chromium allows, but it seems OK to me to allow anything in
the sandbox to get details about other processes in the same sandbox.
Bazel uses these to e.g. wait for another process to exit.
4. Allow all CPU feature detection; this seems harmless to me. It's
wider than Chromium, but Chromium is concerned about fingerprinting and
tightly controls which CPU features they actually care about, and we
have neither that restriction nor that advantage.
5. Allow new sysctl-reads:
   ```
     (sysctl-name "vm.loadavg")
     (sysctl-name-prefix "kern.proc.pgrp.")
     (sysctl-name-prefix "kern.proc.pid.")
     (sysctl-name-prefix "net.routetable.")
   ```
Bazel needs these for waiting on child processes and for communicating
with its local build server, I believe. I wonder if we should just allow
all (sysctl-read), as reading any arbitrary info about the system seems
fine to me.
6. Allow iokit-open on RootDomainUserClient. This has to do with power
management I believe, and Chromium allows renderers to do this, so okay.
Bazel needs it to boot successfully, possibly for sleep/wake callbacks?
7. Mach lookup to `com.apple.system.opendirectoryd.libinfo`, which has
to do with user data, and which Chrome allows.
8. Mach lookup to `com.apple.PowerManagement.control`. Chromium allows
its GPU process to do this, but not its renderers. Bazel needs this to
boot, probably related to sleep/wake stuff.
2025-09-12 14:03:02 -07:00
pakrym-oai
e3c6903199 Add Azure Responses API workaround (#3528)
Azure Responses API doesn't work well with store:false and response
items.

If store = false and an ID is sent, an error is thrown that the ID is
not found.
If store = false and no ID is sent, an error is thrown that an ID is
required.

Add detection for Azure URLs and a workaround to preserve reasoning
item IDs and send store:true.
2025-09-12 13:52:15 -07:00
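
A sketch of the workaround's shape (the detection and option names are assumptions):

```rust
struct ResponsesOptions {
    store: bool,
    preserve_reasoning_item_ids: bool,
}

// Azure Responses endpoints reject store:false both with and without item
// IDs, so detect them and flip to store:true while keeping reasoning IDs.
fn options_for(base_url: &str) -> ResponsesOptions {
    let is_azure = base_url.contains("azure");
    ResponsesOptions {
        store: is_azure,
        preserve_reasoning_item_ids: is_azure,
    }
}

fn main() {
    let opts = options_for("https://myresource.openai.azure.com/openai/v1");
    println!("store={} preserve_ids={}", opts.store, opts.preserve_reasoning_item_ids);
}
```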
Jeremy Rose
5f6e95b592 if a command parses as a patch, do not attempt to run it (#3382)
Sometimes the model forgets to actually invoke `apply_patch` and puts a
patch as the script body. Trying to execute this as bash sometimes
creates files named `,` or `{` or does other unexpected things, so catch
this situation and return an error to the model.
2025-09-12 13:47:41 -07:00
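
A minimal sketch of the guard (the real parser is more thorough; `*** Begin Patch` is the apply_patch envelope header):

```rust
// If a `bash -lc` script body is actually a patch envelope, refuse to run
// it and tell the model to call apply_patch explicitly.
fn looks_like_bare_patch(script: &str) -> bool {
    script.trim_start().starts_with("*** Begin Patch")
}

fn guard_command(argv: &[String]) -> Result<(), String> {
    if let [shell, flag, script] = argv {
        if shell == "bash" && flag == "-lc" && looks_like_bare_patch(script) {
            return Err(
                "patch detected without explicit call to apply_patch".to_string(),
            );
        }
    }
    Ok(())
}

fn main() {
    let argv = vec![
        "bash".to_string(),
        "-lc".to_string(),
        "*** Begin Patch\n*** End Patch".to_string(),
    ];
    assert!(guard_command(&argv).is_err());
}
```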
Ahmed Ibrahim
a2e9cc5530 Update interruption error message styling (#3470)
<img width="497" height="76" alt="image"
src="https://github.com/user-attachments/assets/a1ad279d-1d01-41cd-ac14-b3343a392563"
/>

<img width="493" height="74" alt="image"
src="https://github.com/user-attachments/assets/baf487ba-430e-40fe-8944-2071ec052962"
/>
2025-09-12 16:17:02 -04:00
jif-oai
ea225df22e feat: context compaction (#3446)
## Compact feature:
1. Stops the model when the context window becomes too large
2. Adds a user turn asking the model to summarize
3. Builds a bridge that contains all the previous user messages + the
summary, rendered from a template
4. Starts sampling again from a clean conversation with only that bridge
2025-09-12 13:07:10 -07:00
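
Schematically (names invented; the real bridge is rendered from a template):

```rust
struct Turn {
    role: &'static str,
    text: String,
}

fn compact(history: Vec<Turn>, summarize: impl Fn(&[Turn]) -> String) -> Vec<Turn> {
    // 1-2. The model was stopped because the context window grew too large;
    // a user turn asked it to summarize, producing `summary`.
    let summary = summarize(&history);

    // 3. Build a bridge: all previous user messages plus the summary.
    let mut bridge = String::new();
    for turn in history.iter().filter(|t| t.role == "user") {
        bridge.push_str(&turn.text);
        bridge.push('\n');
    }
    bridge.push_str(&summary);

    // 4. Sampling restarts from a clean conversation holding only the bridge.
    vec![Turn { role: "user", text: bridge }]
}
```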
Ahmed Ibrahim
d4848e558b Add spacing before composer footer hints (#3469)
<img width="647" height="82" alt="image"
src="https://github.com/user-attachments/assets/867eb5d9-3076-4018-846e-260a50408185"
/>
2025-09-12 15:31:24 -04:00
Ahmed Ibrahim
1a6a95fb2a Add spacing between dropdown headers and items (#3472)
<img width="927" height="194" alt="image"
src="https://github.com/user-attachments/assets/f4cb999b-16c3-448a-aed4-060bed8b96dd"
/>

<img width="1246" height="205" alt="image"
src="https://github.com/user-attachments/assets/5d9ba5bd-0c02-46da-a809-b583a176528a"
/>
2025-09-12 15:31:15 -04:00
jif-oai
c6fd056aa6 feat: reasoning effort as optional (#3527)
Allow the reasoning effort to be optional
2025-09-12 12:06:33 -07:00
Michael Bolin
abdcb40f4c feat: change the behavior of SetDefaultModel RPC so None clears the value. (#3529)
It turns out that we want slightly different behavior for the
`SetDefaultModel` RPC because some models do not work with reasoning
(like GPT-4.1), so we should be able to explicitly clear this value.

Verified in `codex-rs/mcp-server/tests/suite/set_default_model.rs`.
2025-09-12 11:35:51 -07:00
Dylan
4ae6b9787a standardize shell description (#3514)
## Summary
Standardizes the shell description across sandbox_types, since we cover
this in the prompt, and have moved necessary details (like
network_access and writeable workspace roots) to EnvironmentContext
messages.

## Test Plan
- [x] updated unit tests
2025-09-12 14:24:09 -04:00
590 changed files with 22347 additions and 3832 deletions

View File

@@ -25,3 +25,4 @@ jobs:
uses: codespell-project/actions-codespell@406322ec52dd7b488e48c1c4b82e2a8b3a1bf630 # v2
with:
ignore_words_file: .codespellignore
skip: frame*.txt

View File

@@ -167,6 +167,12 @@ jobs:
needs: build
name: release
runs-on: ubuntu-latest
permissions:
contents: write
actions: read
outputs:
version: ${{ steps.release_name.outputs.name }}
tag: ${{ github.ref_name }}
steps:
- name: Checkout repository
@@ -220,6 +226,48 @@ jobs:
tag: ${{ github.ref_name }}
config: .github/dotslash-config.json
# Publish to npm using OIDC authentication.
# July 31, 2025: https://github.blog/changelog/2025-07-31-npm-trusted-publishing-with-oidc-is-generally-available/
# npm docs: https://docs.npmjs.com/trusted-publishers
publish-npm:
# Skip this step for pre-releases (alpha/beta).
if: ${{ !contains(needs.release.outputs.version, '-') }}
name: publish-npm
needs: release
runs-on: ubuntu-latest
permissions:
id-token: write # Required for OIDC
contents: read
steps:
- name: Setup Node.js
uses: actions/setup-node@v5
with:
node-version: 22
registry-url: "https://registry.npmjs.org"
scope: "@openai"
# Trusted publishing requires npm CLI version 11.5.1 or later.
- name: Update npm
run: npm install -g npm@latest
- name: Download npm tarball from release
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
set -euo pipefail
version="${{ needs.release.outputs.version }}"
tag="${{ needs.release.outputs.tag }}"
mkdir -p dist/npm
gh release download "$tag" \
--repo "${GITHUB_REPOSITORY}" \
--pattern "codex-npm-${version}.tgz" \
--dir dist/npm
# No NODE_AUTH_TOKEN needed because we use OIDC.
- name: Publish to npm
run: npm publish "${GITHUB_WORKSPACE}/dist/npm/codex-npm-${{ needs.release.outputs.version }}.tgz"
update-branch:
name: Update latest-alpha-cli branch
permissions:

View File

@@ -4,6 +4,7 @@ In the codex-rs folder where the rust code lives:
- Crate names are prefixed with `codex-`. For example, the `core` folder's crate is named `codex-core`
- When using format! and you can inline variables into {}, always do that.
- Install any commands the repo relies on (for example `just`, `rg`, or `cargo-insta`) if they aren't already available before running instructions here.
- Never add or modify any code related to `CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR` or `CODEX_SANDBOX_ENV_VAR`.
- You operate in a sandbox where `CODEX_SANDBOX_NETWORK_DISABLED=1` will be set whenever you use the `shell` tool. Any existing code that uses `CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR` was authored with this fact in mind. It is often used to early exit out of tests that the author knew you would not be able to run given your sandbox limitations.
- Similarly, when you spawn a process using Seatbelt (`/usr/bin/sandbox-exec`), `CODEX_SANDBOX=seatbelt` will be set on the child process. Integration tests that want to run Seatbelt themselves cannot be run under Seatbelt, so checks for `CODEX_SANDBOX=seatbelt` are also often used to early exit out of tests, as appropriate.

View File

@@ -15,7 +15,8 @@
],
"repository": {
"type": "git",
"url": "git+https://github.com/openai/codex.git"
"url": "git+https://github.com/openai/codex.git",
"directory": "codex-cli"
},
"dependencies": {
"@vscode/ripgrep": "^1.15.14"

codex-rs/Cargo.lock generated
View File

@@ -212,6 +212,50 @@ dependencies = [
"term",
]
[[package]]
name = "askama"
version = "0.12.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b79091df18a97caea757e28cd2d5fda49c6cd4bd01ddffd7ff01ace0c0ad2c28"
dependencies = [
"askama_derive",
"askama_escape",
"humansize",
"num-traits",
"percent-encoding",
]
[[package]]
name = "askama_derive"
version = "0.12.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "19fe8d6cb13c4714962c072ea496f3392015f0989b1a2847bb4b2d9effd71d83"
dependencies = [
"askama_parser",
"basic-toml",
"mime",
"mime_guess",
"proc-macro2",
"quote",
"serde",
"syn 2.0.104",
]
[[package]]
name = "askama_escape"
version = "0.10.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "619743e34b5ba4e9703bba34deac3427c72507c7159f5fd030aea8cac0cfe341"
[[package]]
name = "askama_parser"
version = "0.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "acb1161c6b64d1c3d83108213c2a2533a342ac225aabd0bda218278c2ddb00c0"
dependencies = [
"nom",
]
[[package]]
name = "assert-json-diff"
version = "2.0.2"
@@ -272,6 +316,17 @@ dependencies = [
"syn 2.0.104",
]
[[package]]
name = "async-trait"
version = "0.1.89"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9035ad2d096bed7955a320ee7e2230574d28fd3c3a0f186cbea1ff3c7eed5dbb"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.104",
]
[[package]]
name = "atomic-waker"
version = "1.1.2"
@@ -305,6 +360,15 @@ version = "0.22.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "72b3254f16251a8381aa12e40e3c4d2f0199f8c6508fbecb9d91f575e0fbb8c6"
[[package]]
name = "basic-toml"
version = "0.1.10"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ba62675e8242a4c4e806d12f11d136e626e6c8361d6b829310732241652a178a"
dependencies = [
"serde",
]
[[package]]
name = "beef"
version = "0.5.2"
@@ -354,7 +418,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "234113d19d0d7d613b40e86fb654acf958910802bcceab913a4f9e7cda03b1a4"
dependencies = [
"memchr",
"regex-automata 0.4.9",
"regex-automata",
"serde",
]
@@ -572,6 +636,7 @@ name = "codex-cli"
version = "0.0.0"
dependencies = [
"anyhow",
"assert_cmd",
"clap",
"clap_complete",
"codex-arg0",
@@ -584,7 +649,12 @@ dependencies = [
"codex-protocol",
"codex-protocol-ts",
"codex-tui",
"owo-colors",
"predicates",
"pretty_assertions",
"serde_json",
"supports-color",
"tempfile",
"tokio",
"tracing",
"tracing-subscriber",
@@ -594,10 +664,13 @@ dependencies = [
name = "codex-common"
version = "0.0.0"
dependencies = [
"async-trait",
"clap",
"codex-core",
"codex-protocol",
"serde",
"thiserror 2.0.16",
"tokio",
"toml",
]
@@ -606,12 +679,14 @@ name = "codex-core"
version = "0.0.0"
dependencies = [
"anyhow",
"askama",
"assert_cmd",
"async-channel",
"base64",
"bytes",
"chrono",
"codex-apply-patch",
"codex-file-search",
"codex-mcp-client",
"codex-protocol",
"core_test_support",
@@ -628,7 +703,7 @@ dependencies = [
"portable-pty",
"predicates",
"pretty_assertions",
"rand 0.9.2",
"rand",
"regex-lite",
"reqwest",
"seccompiler",
@@ -679,6 +754,8 @@ dependencies = [
"tokio",
"tracing",
"tracing-subscriber",
"uuid",
"walkdir",
"wiremock",
]
@@ -715,6 +792,16 @@ dependencies = [
"tokio",
]
[[package]]
name = "codex-git-tooling"
version = "0.0.0"
dependencies = [
"pretty_assertions",
"tempfile",
"thiserror 2.0.16",
"walkdir",
]
[[package]]
name = "codex-linux-sandbox"
version = "0.0.0"
@@ -732,11 +819,13 @@ dependencies = [
name = "codex-login"
version = "0.0.0"
dependencies = [
"anyhow",
"base64",
"chrono",
"codex-core",
"codex-protocol",
"rand 0.8.5",
"core_test_support",
"rand",
"reqwest",
"serde",
"serde_json",
@@ -774,6 +863,7 @@ dependencies = [
"codex-core",
"codex-login",
"codex-protocol",
"core_test_support",
"mcp-types",
"mcp_test_support",
"os_info",
@@ -810,6 +900,7 @@ dependencies = [
name = "codex-protocol"
version = "0.0.0"
dependencies = [
"anyhow",
"base64",
"icu_decimal",
"icu_locale_core",
@@ -854,12 +945,14 @@ dependencies = [
"codex-common",
"codex-core",
"codex-file-search",
"codex-git-tooling",
"codex-login",
"codex-ollama",
"codex-protocol",
"color-eyre",
"crossterm",
"diffy",
"dirs",
"image",
"insta",
"itertools 0.14.0",
@@ -871,7 +964,7 @@ dependencies = [
"pathdiff",
"pretty_assertions",
"pulldown-cmark",
"rand 0.9.2",
"rand",
"ratatui",
"regex-lite",
"serde",
@@ -1010,10 +1103,12 @@ checksum = "773648b94d0e5d620f64f280777445740e61fe701025087ec8b57f45c791888b"
name = "core_test_support"
version = "0.0.0"
dependencies = [
"anyhow",
"codex-core",
"serde_json",
"tempfile",
"tokio",
"wiremock",
]
[[package]]
@@ -1266,7 +1361,7 @@ version = "0.4.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b545b8c50194bdd008283985ab0b31dba153cfd5b3066a92770634fbc0d7d291"
dependencies = [
"nu-ansi-term 0.50.1",
"nu-ansi-term",
]
[[package]]
@@ -1848,7 +1943,7 @@ dependencies = [
"aho-corasick",
"bstr",
"log",
"regex-automata 0.4.9",
"regex-automata",
"regex-syntax 0.8.5",
]
@@ -1981,6 +2076,15 @@ version = "1.0.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "df3b46402a9d5adb4c86a0cf463f42e19994e3ee891101b1841f30a545cb49a9"
[[package]]
name = "humansize"
version = "2.1.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6cb51c9a029ddc91b07a787f1d86b53ccfa49b0e86688c946ebe8d3555685dd7"
dependencies = [
"libm",
]
[[package]]
name = "hyper"
version = "1.7.0"
@@ -2254,7 +2358,7 @@ dependencies = [
"globset",
"log",
"memchr",
"regex-automata 0.4.9",
"regex-automata",
"same-file",
"walkdir",
"winapi-util",
@@ -2536,6 +2640,12 @@ version = "0.2.175"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6a82ae493e598baaea5209805c49bbf2ea7de956d50d7da0da1164f9c6d28543"
[[package]]
name = "libm"
version = "0.2.15"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f9fbbcab51052fe104eb5e5d351cf728d30a5be1fe14d9be8a3b097481fb97de"
[[package]]
name = "libredox"
version = "0.1.6"
@@ -2633,11 +2743,11 @@ checksum = "3e2e65a1a2e43cfcb47a895c4c8b10d1f4a61097f9f254f183aee60cad9c651d"
[[package]]
name = "matchers"
version = "0.1.0"
version = "0.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8263075bb86c5a1b1427b5ae862e8889656f126e9f77c484496e8b47cf5c5558"
checksum = "d1525a2a28c7f4fa0fc98bb91ae755d1e2d1505079e05539e35bc876b5d65ae9"
dependencies = [
"regex-automata 0.1.10",
"regex-automata",
]
[[package]]
@@ -2811,16 +2921,6 @@ version = "0.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "61807f77802ff30975e01f4f071c8ba10c022052f98b3294119f3e615d13e5be"
[[package]]
name = "nu-ansi-term"
version = "0.46.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "77a8165726e8236064dbb45459242600304b42a5ea24ee2948e18e023bf7ba84"
dependencies = [
"overload",
"winapi",
]
[[package]]
name = "nu-ansi-term"
version = "0.50.1"
@@ -3059,12 +3159,6 @@ dependencies = [
"windows-sys 0.52.0",
]
[[package]]
name = "overload"
version = "0.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b15813163c1d831bf4a13c3610c05c0d03b39feb07f7e09fa234dac9b15aaf39"
[[package]]
name = "owo-colors"
version = "4.2.2"
@@ -3389,35 +3483,14 @@ dependencies = [
"nibble_vec",
]
[[package]]
name = "rand"
version = "0.8.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "34af8d1a0e25924bc5b7c43c079c942339d8f0a8b57c39049bef581b46327404"
dependencies = [
"libc",
"rand_chacha 0.3.1",
"rand_core 0.6.4",
]
[[package]]
name = "rand"
version = "0.9.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6db2770f06117d490610c7488547d543617b21bfa07796d7a12f6f1bd53850d1"
dependencies = [
"rand_chacha 0.9.0",
"rand_core 0.9.3",
]
[[package]]
name = "rand_chacha"
version = "0.3.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e6c10a63a0fa32252be49d21e7709d4d4baf8d231c2dbce1eaa8141b9b127d88"
dependencies = [
"ppv-lite86",
"rand_core 0.6.4",
"rand_chacha",
"rand_core",
]
[[package]]
@@ -3427,16 +3500,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d3022b5f1df60f26e1ffddd6c66e8aa15de382ae63b3a0c1bfc0e4d3e3f325cb"
dependencies = [
"ppv-lite86",
"rand_core 0.9.3",
]
[[package]]
name = "rand_core"
version = "0.6.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ec0be4795e2f6a28069bec0b5ff3e2ac9bafc99e6a9a7dc3547996c5c816922c"
dependencies = [
"getrandom 0.2.16",
"rand_core",
]
[[package]]
@@ -3527,19 +3591,10 @@ checksum = "b544ef1b4eac5dc2db33ea63606ae9ffcfac26c1416a2806ae0bf5f56b201191"
dependencies = [
"aho-corasick",
"memchr",
"regex-automata 0.4.9",
"regex-automata",
"regex-syntax 0.8.5",
]
[[package]]
name = "regex-automata"
version = "0.1.10"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6c230d73fb8d8c1b9c0b3135c5142a8acee3a0558fb8db5cf1cb65f8d7862132"
dependencies = [
"regex-syntax 0.6.29",
]
[[package]]
name = "regex-automata"
version = "0.4.9"
@@ -3874,18 +3929,28 @@ dependencies = [
[[package]]
name = "serde"
version = "1.0.219"
version = "1.0.224"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5f0e2c6ed6606019b4e29e69dbaba95b11854410e5347d525002456dbbb786b6"
checksum = "6aaeb1e94f53b16384af593c71e20b095e958dab1d26939c1b70645c5cfbcc0b"
dependencies = [
"serde_core",
"serde_derive",
]
[[package]]
name = "serde_core"
version = "1.0.224"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "32f39390fa6346e24defbcdd3d9544ba8a19985d0af74df8501fbfe9a64341ab"
dependencies = [
"serde_derive",
]
[[package]]
name = "serde_derive"
version = "1.0.219"
version = "1.0.224"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5b0276cf7f2c73365f7157c8123c21cd9a50fbbd844757af28ca1f5925fc2a00"
checksum = "87ff78ab5e8561c9a675bfc1785cb07ae721f0ee53329a595cefd8c04c2ac4e0"
dependencies = [
"proc-macro2",
"quote",
@@ -3905,15 +3970,16 @@ dependencies = [
[[package]]
name = "serde_json"
version = "1.0.143"
version = "1.0.145"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d401abef1d108fbd9cbaebc3e46611f4b1021f714a0597a71f41ee463f5f4a5a"
checksum = "402a6f66d8c709116cf22f558eab210f5a50187f702eb4d7e5ef38d9a7f1c79c"
dependencies = [
"indexmap 2.10.0",
"itoa",
"memchr",
"ryu",
"serde",
"serde_core",
]
[[package]]
@@ -4100,9 +4166,9 @@ checksum = "56199f7ddabf13fe5074ce809e7d3f42b42ae711800501b5b16ea82ad029c39d"
[[package]]
name = "slab"
version = "0.4.10"
version = "0.4.11"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "04dc19736151f35336d325007ac991178d504a119863a2fcb3758cdb5e52c50d"
checksum = "7a2ae44ef20feb57a68b23d846850f861394c2e02dc425a50098ae8c90267589"
[[package]]
name = "smallvec"
@@ -4834,14 +4900,14 @@ dependencies = [
[[package]]
name = "tracing-subscriber"
version = "0.3.19"
version = "0.3.20"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e8189decb5ac0fa7bc8b96b7cb9b2701d60d48805aca84a238004d665fcc4008"
checksum = "2054a14f5307d601f88daf0553e1cbf472acc4f2c51afab632431cdcd72124d5"
dependencies = [
"matchers",
"nu-ansi-term 0.46.0",
"nu-ansi-term",
"once_cell",
"regex",
"regex-automata",
"sharded-slab",
"smallvec",
"thread_local",
@@ -5229,9 +5295,9 @@ dependencies = [
[[package]]
name = "wildmatch"
version = "2.4.0"
version = "2.5.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "68ce1ab1f8c62655ebe1350f589c61e505cf94d385bc6a12899442d9081e71fd"
checksum = "39b7d07a236abaef6607536ccfaf19b396dbe3f5110ddb73d39f4562902ed382"
[[package]]
name = "winapi"

View File

@@ -9,6 +9,7 @@ members = [
"exec",
"execpolicy",
"file-search",
"git-tooling",
"linux-sandbox",
"login",
"mcp-client",
@@ -29,15 +30,169 @@ version = "0.0.0"
# edition.
edition = "2024"
[workspace.dependencies]
# Internal
codex-ansi-escape = { path = "ansi-escape" }
codex-apply-patch = { path = "apply-patch" }
codex-arg0 = { path = "arg0" }
codex-chatgpt = { path = "chatgpt" }
codex-common = { path = "common" }
codex-core = { path = "core" }
codex-exec = { path = "exec" }
codex-file-search = { path = "file-search" }
codex-git-tooling = { path = "git-tooling" }
codex-linux-sandbox = { path = "linux-sandbox" }
codex-login = { path = "login" }
codex-mcp-client = { path = "mcp-client" }
codex-mcp-server = { path = "mcp-server" }
codex-ollama = { path = "ollama" }
codex-protocol = { path = "protocol" }
codex-protocol-ts = { path = "protocol-ts" }
codex-tui = { path = "tui" }
core_test_support = { path = "core/tests/common" }
mcp-types = { path = "mcp-types" }
mcp_test_support = { path = "mcp-server/tests/common" }
# External
allocative = "0.3.3"
ansi-to-tui = "7.0.0"
anyhow = "1"
arboard = "3"
askama = "0.12"
assert_cmd = "2"
async-channel = "2.3.1"
async-stream = "0.3.6"
async-trait = "0.1.89"
base64 = "0.22.1"
bytes = "1.10.1"
chrono = "0.4.40"
clap = "4"
clap_complete = "4"
color-eyre = "0.6.3"
crossterm = "0.28.1"
derive_more = "2"
diffy = "0.4.2"
dirs = "6"
dotenvy = "0.15.7"
env-flags = "0.1.1"
env_logger = "0.11.5"
eventsource-stream = "0.2.3"
futures = "0.3"
icu_decimal = "2.0.0"
icu_locale_core = "2.0.0"
ignore = "0.4.23"
image = { version = "^0.25.8", default-features = false }
insta = "1.43.2"
itertools = "0.14.0"
landlock = "0.4.1"
lazy_static = "1"
libc = "0.2.175"
log = "0.4"
maplit = "1.0.2"
mime_guess = "2.0.5"
multimap = "0.10.0"
nucleo-matcher = "0.3.1"
once_cell = "1"
openssl-sys = "*"
os_info = "3.12.0"
owo-colors = "4.2.0"
path-absolutize = "3.1.1"
path-clean = "1.0.1"
pathdiff = "0.2"
portable-pty = "0.9.0"
predicates = "3"
pretty_assertions = "1.4.1"
pulldown-cmark = "0.10"
rand = "0.9"
ratatui = "0.29.0"
regex-lite = "0.1.7"
reqwest = "0.12"
schemars = "0.8.22"
seccompiler = "0.5.0"
serde = "1"
serde_json = "1"
serde_with = "3.14"
sha1 = "0.10.6"
sha2 = "0.10"
shlex = "1.3.0"
similar = "2.7.0"
starlark = "0.13.0"
strum = "0.27.2"
strum_macros = "0.27.2"
supports-color = "3.0.2"
sys-locale = "0.3.2"
tempfile = "3.13.0"
textwrap = "0.16.2"
thiserror = "2.0.16"
time = "0.3"
tiny_http = "0.12"
tokio = "1"
tokio-stream = "0.1.17"
tokio-test = "0.4"
tokio-util = "0.7.16"
toml = "0.9.5"
toml_edit = "0.23.4"
tracing = "0.1.41"
tracing-appender = "0.2.3"
tracing-subscriber = "0.3.20"
tree-sitter = "0.25.9"
tree-sitter-bash = "0.25.0"
ts-rs = "11"
unicode-segmentation = "1.12.0"
unicode-width = "0.1"
url = "2"
urlencoding = "2.1"
uuid = "1"
vt100 = "0.16.2"
walkdir = "2.5.0"
webbrowser = "1.0"
which = "6"
wildmatch = "2.5.0"
wiremock = "0.6"
[workspace.lints]
rust = {}
[workspace.lints.clippy]
expect_used = "deny"
identity_op = "deny"
manual_clamp = "deny"
manual_filter = "deny"
manual_find = "deny"
manual_flatten = "deny"
manual_map = "deny"
manual_memcpy = "deny"
manual_non_exhaustive = "deny"
manual_ok_or = "deny"
manual_range_contains = "deny"
manual_retain = "deny"
manual_strip = "deny"
manual_try_fold = "deny"
manual_unwrap_or = "deny"
needless_borrow = "deny"
needless_borrowed_reference = "deny"
needless_collect = "deny"
needless_late_init = "deny"
needless_option_as_deref = "deny"
needless_question_mark = "deny"
needless_update = "deny"
redundant_clone = "deny"
redundant_closure = "deny"
redundant_closure_for_method_calls = "deny"
redundant_static_lifetimes = "deny"
trivially_copy_pass_by_ref = "deny"
uninlined_format_args = "deny"
unnecessary_filter_map = "deny"
unnecessary_lazy_evaluations = "deny"
unnecessary_sort_by = "deny"
unnecessary_to_owned = "deny"
unwrap_used = "deny"
# cargo-shear cannot see the platform-specific openssl-sys usage, so we
# silence the false positive here instead of deleting a real dependency.
[workspace.metadata.cargo-shear]
ignored = ["openssl-sys"]
[profile.release]
lto = "fat"
# Because we bundle some of these executables with the TypeScript CLI, we


@@ -8,9 +8,9 @@ name = "codex_ansi_escape"
path = "src/lib.rs"
[dependencies]
ansi-to-tui = "7.0.0"
ratatui = { version = "0.29.0", features = [
ansi-to-tui = { workspace = true }
ratatui = { workspace = true, features = [
"unstable-rendered-line-info",
"unstable-widget-ref",
] }
tracing = { version = "0.1.41", features = ["log"] }
tracing = { workspace = true, features = ["log"] }


@@ -15,14 +15,14 @@ path = "src/main.rs"
workspace = true
[dependencies]
anyhow = "1"
similar = "2.7.0"
thiserror = "2.0.16"
tree-sitter = "0.25.9"
tree-sitter-bash = "0.25.0"
once_cell = "1"
anyhow = { workspace = true }
similar = { workspace = true }
thiserror = { workspace = true }
tree-sitter = { workspace = true }
tree-sitter-bash = { workspace = true }
once_cell = { workspace = true }
[dev-dependencies]
assert_cmd = "2"
pretty_assertions = "1.4.1"
tempfile = "3.13.0"
assert_cmd = { workspace = true }
pretty_assertions = { workspace = true }
tempfile = { workspace = true }


@@ -40,6 +40,11 @@ pub enum ApplyPatchError {
/// Error that occurs while computing replacements when applying patch chunks
#[error("{0}")]
ComputeReplacements(String),
/// A raw patch body was provided without an explicit `apply_patch` invocation.
#[error(
"patch detected without explicit call to apply_patch. Rerun as [\"apply_patch\", \"<patch>\"]"
)]
ImplicitInvocation,
}
impl From<std::io::Error> for ApplyPatchError {
@@ -93,10 +98,12 @@ pub struct ApplyPatchArgs {
pub fn maybe_parse_apply_patch(argv: &[String]) -> MaybeApplyPatch {
match argv {
// Direct invocation: apply_patch <patch>
[cmd, body] if APPLY_PATCH_COMMANDS.contains(&cmd.as_str()) => match parse_patch(body) {
Ok(source) => MaybeApplyPatch::Body(source),
Err(e) => MaybeApplyPatch::PatchParseError(e),
},
// Bash heredoc form: (optional `cd <path> &&`) apply_patch <<'EOF' ...
[bash, flag, script] if bash == "bash" && flag == "-lc" => {
match extract_apply_patch_from_bash(script) {
Ok((body, workdir)) => match parse_patch(&body) {
@@ -207,6 +214,26 @@ impl ApplyPatchAction {
/// cwd must be an absolute path so that we can resolve relative paths in the
/// patch.
pub fn maybe_parse_apply_patch_verified(argv: &[String], cwd: &Path) -> MaybeApplyPatchVerified {
// Detect a raw patch body passed directly as the command or as the body of a bash -lc
// script. In these cases, report an explicit error rather than applying the patch.
match argv {
[body] => {
if parse_patch(body).is_ok() {
return MaybeApplyPatchVerified::CorrectnessError(
ApplyPatchError::ImplicitInvocation,
);
}
}
[bash, flag, script] if bash == "bash" && flag == "-lc" => {
if parse_patch(script).is_ok() {
return MaybeApplyPatchVerified::CorrectnessError(
ApplyPatchError::ImplicitInvocation,
);
}
}
_ => {}
}
match maybe_parse_apply_patch(argv) {
MaybeApplyPatch::Body(ApplyPatchArgs {
patch,
@@ -621,21 +648,18 @@ fn derive_new_contents_from_chunks(
}
};
let mut original_lines: Vec<String> = original_contents
.split('\n')
.map(|s| s.to_string())
.collect();
let mut original_lines: Vec<String> = original_contents.split('\n').map(String::from).collect();
// Drop the trailing empty element that results from the final newline so
// that line counts match the behaviour of standard `diff`.
if original_lines.last().is_some_and(|s| s.is_empty()) {
if original_lines.last().is_some_and(String::is_empty) {
original_lines.pop();
}
let replacements = compute_replacements(&original_lines, path, chunks)?;
let new_lines = apply_replacements(original_lines, &replacements);
let mut new_lines = new_lines;
if !new_lines.last().is_some_and(|s| s.is_empty()) {
if !new_lines.last().is_some_and(String::is_empty) {
new_lines.push(String::new());
}
let new_contents = new_lines.join("\n");
@@ -679,7 +703,7 @@ fn compute_replacements(
if chunk.old_lines.is_empty() {
// Pure addition (no old lines). We'll add them at the end or just
// before the final empty line if one exists.
let insertion_idx = if original_lines.last().is_some_and(|s| s.is_empty()) {
let insertion_idx = if original_lines.last().is_some_and(String::is_empty) {
original_lines.len() - 1
} else {
original_lines.len()
@@ -705,11 +729,11 @@ fn compute_replacements(
let mut new_slice: &[String] = &chunk.new_lines;
if found.is_none() && pattern.last().is_some_and(|s| s.is_empty()) {
if found.is_none() && pattern.last().is_some_and(String::is_empty) {
// Retry without the trailing empty line which represents the final
// newline in the file.
pattern = &pattern[..pattern.len() - 1];
if new_slice.last().is_some_and(|s| s.is_empty()) {
if new_slice.last().is_some_and(String::is_empty) {
new_slice = &new_slice[..new_slice.len() - 1];
}
@@ -821,6 +845,7 @@ mod tests {
use super::*;
use pretty_assertions::assert_eq;
use std::fs;
use std::string::ToString;
use tempfile::tempdir;
/// Helper to construct a patch with the given body.
@@ -829,7 +854,7 @@ mod tests {
}
fn strs_to_strings(strs: &[&str]) -> Vec<String> {
strs.iter().map(|s| s.to_string()).collect()
strs.iter().map(ToString::to_string).collect()
}
// Test helpers to reduce repetition when building bash -lc heredoc scripts
@@ -875,6 +900,28 @@ mod tests {
));
}
#[test]
fn test_implicit_patch_single_arg_is_error() {
let patch = "*** Begin Patch\n*** Add File: foo\n+hi\n*** End Patch".to_string();
let args = vec![patch];
let dir = tempdir().unwrap();
assert!(matches!(
maybe_parse_apply_patch_verified(&args, dir.path()),
MaybeApplyPatchVerified::CorrectnessError(ApplyPatchError::ImplicitInvocation)
));
}
#[test]
fn test_implicit_patch_bash_script_is_error() {
let script = "*** Begin Patch\n*** Add File: foo\n+hi\n*** End Patch";
let args = args_bash(script);
let dir = tempdir().unwrap();
assert!(matches!(
maybe_parse_apply_patch_verified(&args, dir.path()),
MaybeApplyPatchVerified::CorrectnessError(ApplyPatchError::ImplicitInvocation)
));
}
#[test]
fn test_literal() {
let args = strs_to_strings(&[

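As a minimal self-contained sketch of the guard above (with a stand-in `looks_like_patch` in place of the real `parse_patch`, which is an assumption here), the new argv classification behaves roughly like this:

// Sketch only: `looks_like_patch` approximates `parse_patch(..).is_ok()`.
fn looks_like_patch(s: &str) -> bool {
    s.trim_start().starts_with("*** Begin Patch") && s.trim_end().ends_with("*** End Patch")
}

fn classify(argv: &[String]) -> &'static str {
    match argv {
        // A raw patch body as the whole command is now rejected.
        [body] if looks_like_patch(body) => "CorrectnessError(ImplicitInvocation)",
        // A bash -lc script that *is* a patch body is rejected too.
        [bash, flag, script] if bash == "bash" && flag == "-lc" && looks_like_patch(script) => {
            "CorrectnessError(ImplicitInvocation)"
        }
        // The explicit form stays valid: ["apply_patch", "<patch>"].
        [cmd, body] if cmd == "apply_patch" && looks_like_patch(body) => "Body(..)",
        _ => "NotApplyPatch",
    }
}

fn main() {
    let raw = "*** Begin Patch\n*** Add File: foo\n+hi\n*** End Patch".to_string();
    assert_eq!(classify(std::slice::from_ref(&raw)), "CorrectnessError(ImplicitInvocation)");
    assert_eq!(classify(&["apply_patch".to_string(), raw]), "Body(..)");
}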

@@ -112,9 +112,10 @@ pub(crate) fn seek_sequence(
#[cfg(test)]
mod tests {
use super::seek_sequence;
use std::string::ToString;
fn to_vec(strings: &[&str]) -> Vec<String> {
strings.iter().map(|s| s.to_string()).collect()
strings.iter().map(ToString::to_string).collect()
}
#[test]


@@ -11,10 +11,10 @@ path = "src/lib.rs"
workspace = true
[dependencies]
anyhow = "1"
codex-apply-patch = { path = "../apply-patch" }
codex-core = { path = "../core" }
codex-linux-sandbox = { path = "../linux-sandbox" }
dotenvy = "0.15.7"
tempfile = "3"
tokio = { version = "1", features = ["rt-multi-thread"] }
anyhow = { workspace = true }
codex-apply-patch = { workspace = true }
codex-core = { workspace = true }
codex-linux-sandbox = { workspace = true }
dotenvy = { workspace = true }
tempfile = { workspace = true }
tokio = { workspace = true, features = ["rt-multi-thread"] }


@@ -54,7 +54,7 @@ where
let argv1 = args.next().unwrap_or_default();
if argv1 == CODEX_APPLY_PATCH_ARG1 {
let patch_arg = args.next().and_then(|s| s.to_str().map(|s| s.to_owned()));
let patch_arg = args.next().and_then(|s| s.to_str().map(str::to_owned));
let exit_code = match patch_arg {
Some(patch_arg) => {
let mut stdout = std::io::stdout();


@@ -7,13 +7,13 @@ version = { workspace = true }
workspace = true
[dependencies]
anyhow = "1"
clap = { version = "4", features = ["derive"] }
codex-common = { path = "../common", features = ["cli"] }
codex-core = { path = "../core" }
serde = { version = "1", features = ["derive"] }
serde_json = "1"
tokio = { version = "1", features = ["full"] }
anyhow = { workspace = true }
clap = { workspace = true, features = ["derive"] }
codex-common = { workspace = true, features = ["cli"] }
codex-core = { workspace = true }
serde = { workspace = true, features = ["derive"] }
serde_json = { workspace = true }
tokio = { workspace = true, features = ["full"] }
[dev-dependencies]
tempfile = "3"
tempfile = { workspace = true }


@@ -15,26 +15,34 @@ path = "src/lib.rs"
workspace = true
[dependencies]
anyhow = "1"
clap = { version = "4", features = ["derive"] }
clap_complete = "4"
codex-arg0 = { path = "../arg0" }
codex-chatgpt = { path = "../chatgpt" }
codex-common = { path = "../common", features = ["cli"] }
codex-core = { path = "../core" }
codex-exec = { path = "../exec" }
codex-login = { path = "../login" }
codex-mcp-server = { path = "../mcp-server" }
codex-protocol = { path = "../protocol" }
codex-tui = { path = "../tui" }
serde_json = "1"
tokio = { version = "1", features = [
anyhow = { workspace = true }
clap = { workspace = true, features = ["derive"] }
clap_complete = { workspace = true }
codex-arg0 = { workspace = true }
codex-chatgpt = { workspace = true }
codex-common = { workspace = true, features = ["cli"] }
codex-core = { workspace = true }
codex-exec = { workspace = true }
codex-login = { workspace = true }
codex-mcp-server = { workspace = true }
codex-protocol = { workspace = true }
codex-protocol-ts = { workspace = true }
codex-tui = { workspace = true }
owo-colors = { workspace = true }
serde_json = { workspace = true }
supports-color = { workspace = true }
tokio = { workspace = true, features = [
"io-std",
"macros",
"process",
"rt-multi-thread",
"signal",
] }
tracing = "0.1.41"
tracing-subscriber = "0.3.19"
codex-protocol-ts = { path = "../protocol-ts" }
tracing = { workspace = true }
tracing-subscriber = { workspace = true }
[dev-dependencies]
assert_cmd = { workspace = true }
predicates = { workspace = true }
pretty_assertions = { workspace = true }
tempfile = { workspace = true }


@@ -64,7 +64,6 @@ async fn run_command_under_sandbox(
sandbox_type: SandboxType,
) -> anyhow::Result<()> {
let sandbox_mode = create_sandbox_mode(full_auto);
let cwd = std::env::current_dir()?;
let config = Config::load_with_cli_overrides(
config_overrides
.parse_overrides()
@@ -75,13 +74,29 @@ async fn run_command_under_sandbox(
..Default::default()
},
)?;
// In practice, this should be `std::env::current_dir()` because this CLI
// does not support `--cwd`, but let's use the config value for consistency.
let cwd = config.cwd.clone();
// For now, we always use the same cwd for both the command and the
// sandbox policy. In the future, we could add a CLI option to set them
// separately.
let sandbox_policy_cwd = cwd.clone();
let stdio_policy = StdioPolicy::Inherit;
let env = create_env(&config.shell_environment_policy);
let mut child = match sandbox_type {
SandboxType::Seatbelt => {
spawn_command_under_seatbelt(command, &config.sandbox_policy, cwd, stdio_policy, env)
.await?
spawn_command_under_seatbelt(
command,
cwd,
&config.sandbox_policy,
sandbox_policy_cwd.as_path(),
stdio_policy,
env,
)
.await?
}
SandboxType::Landlock => {
#[expect(clippy::expect_used)]
@@ -91,8 +106,9 @@ async fn run_command_under_sandbox(
spawn_command_under_linux_sandbox(
codex_linux_sandbox_exe,
command,
&config.sandbox_policy,
cwd,
&config.sandbox_policy,
sandbox_policy_cwd.as_path(),
stdio_policy,
env,
)


@@ -14,9 +14,15 @@ use codex_cli::login::run_logout;
use codex_cli::proto;
use codex_common::CliConfigOverrides;
use codex_exec::Cli as ExecCli;
use codex_tui::AppExitInfo;
use codex_tui::Cli as TuiCli;
use owo_colors::OwoColorize;
use std::path::PathBuf;
use supports_color::Stream;
mod mcp_cmd;
use crate::mcp_cmd::McpCli;
use crate::proto::ProtoCli;
/// Codex CLI
@@ -56,8 +62,8 @@ enum Subcommand {
/// Remove stored authentication credentials.
Logout(LogoutCommand),
/// Experimental: run Codex as an MCP server.
Mcp,
/// [experimental] Run Codex as an MCP server and manage MCP servers.
Mcp(McpCli),
/// Run the Protocol stream via stdin/stdout
#[clap(visible_alias = "p")]
@@ -73,6 +79,9 @@ enum Subcommand {
#[clap(visible_alias = "a")]
Apply(ApplyCommand),
/// Resume a previous interactive session (picker by default; use --last to continue the most recent).
Resume(ResumeCommand),
/// Internal: generate TypeScript protocol bindings.
#[clap(hide = true)]
GenerateTs(GenerateTsCommand),
@@ -85,6 +94,21 @@ struct CompletionCommand {
shell: Shell,
}
#[derive(Debug, Parser)]
struct ResumeCommand {
/// Conversation/session id (UUID). When provided, resumes this session.
/// If omitted, use --last to pick the most recent recorded session.
#[arg(value_name = "SESSION_ID")]
session_id: Option<String>,
/// Continue the most recent session without showing the picker.
#[arg(long = "last", default_value_t = false, conflicts_with = "session_id")]
last: bool,
#[clap(flatten)]
config_overrides: TuiCli,
}
#[derive(Debug, Parser)]
struct DebugArgs {
#[command(subcommand)]
@@ -135,6 +159,41 @@ struct GenerateTsCommand {
prettier: Option<PathBuf>,
}
fn format_exit_messages(exit_info: AppExitInfo, color_enabled: bool) -> Vec<String> {
let AppExitInfo {
token_usage,
conversation_id,
} = exit_info;
if token_usage.is_zero() {
return Vec::new();
}
let mut lines = vec![format!(
"{}",
codex_core::protocol::FinalOutput::from(token_usage)
)];
if let Some(session_id) = conversation_id {
let resume_cmd = format!("codex resume {session_id}");
let command = if color_enabled {
resume_cmd.cyan().to_string()
} else {
resume_cmd
};
lines.push(format!("To continue this session, run {command}."));
}
lines
}
fn print_exit_messages(exit_info: AppExitInfo) {
let color_enabled = supports_color::on(Stream::Stdout).is_some();
for line in format_exit_messages(exit_info, color_enabled) {
println!("{line}");
}
}
fn main() -> anyhow::Result<()> {
arg0_dispatch_or_else(|codex_linux_sandbox_exe| async move {
cli_main(codex_linux_sandbox_exe).await?;
@@ -143,26 +202,52 @@ fn main() -> anyhow::Result<()> {
}
async fn cli_main(codex_linux_sandbox_exe: Option<PathBuf>) -> anyhow::Result<()> {
let cli = MultitoolCli::parse();
let MultitoolCli {
config_overrides: root_config_overrides,
mut interactive,
subcommand,
} = MultitoolCli::parse();
match cli.subcommand {
match subcommand {
None => {
let mut tui_cli = cli.interactive;
prepend_config_flags(&mut tui_cli.config_overrides, cli.config_overrides);
let usage = codex_tui::run_main(tui_cli, codex_linux_sandbox_exe).await?;
if !usage.is_zero() {
println!("{}", codex_core::protocol::FinalOutput::from(usage));
}
prepend_config_flags(
&mut interactive.config_overrides,
root_config_overrides.clone(),
);
let exit_info = codex_tui::run_main(interactive, codex_linux_sandbox_exe).await?;
print_exit_messages(exit_info);
}
Some(Subcommand::Exec(mut exec_cli)) => {
prepend_config_flags(&mut exec_cli.config_overrides, cli.config_overrides);
prepend_config_flags(
&mut exec_cli.config_overrides,
root_config_overrides.clone(),
);
codex_exec::run_main(exec_cli, codex_linux_sandbox_exe).await?;
}
Some(Subcommand::Mcp) => {
codex_mcp_server::run_main(codex_linux_sandbox_exe, cli.config_overrides).await?;
Some(Subcommand::Mcp(mut mcp_cli)) => {
// Propagate any root-level config overrides (e.g. `-c key=value`).
prepend_config_flags(&mut mcp_cli.config_overrides, root_config_overrides.clone());
mcp_cli.run(codex_linux_sandbox_exe).await?;
}
Some(Subcommand::Resume(ResumeCommand {
session_id,
last,
config_overrides,
})) => {
interactive = finalize_resume_interactive(
interactive,
root_config_overrides.clone(),
session_id,
last,
config_overrides,
);
codex_tui::run_main(interactive, codex_linux_sandbox_exe).await?;
}
Some(Subcommand::Login(mut login_cli)) => {
prepend_config_flags(&mut login_cli.config_overrides, cli.config_overrides);
prepend_config_flags(
&mut login_cli.config_overrides,
root_config_overrides.clone(),
);
match login_cli.action {
Some(LoginSubcommand::Status) => {
run_login_status(login_cli.config_overrides).await;
@@ -177,11 +262,17 @@ async fn cli_main(codex_linux_sandbox_exe: Option<PathBuf>) -> anyhow::Result<()
}
}
Some(Subcommand::Logout(mut logout_cli)) => {
prepend_config_flags(&mut logout_cli.config_overrides, cli.config_overrides);
prepend_config_flags(
&mut logout_cli.config_overrides,
root_config_overrides.clone(),
);
run_logout(logout_cli.config_overrides).await;
}
Some(Subcommand::Proto(mut proto_cli)) => {
prepend_config_flags(&mut proto_cli.config_overrides, cli.config_overrides);
prepend_config_flags(
&mut proto_cli.config_overrides,
root_config_overrides.clone(),
);
proto::run_main(proto_cli).await?;
}
Some(Subcommand::Completion(completion_cli)) => {
@@ -189,7 +280,10 @@ async fn cli_main(codex_linux_sandbox_exe: Option<PathBuf>) -> anyhow::Result<()
}
Some(Subcommand::Debug(debug_args)) => match debug_args.cmd {
DebugCommand::Seatbelt(mut seatbelt_cli) => {
prepend_config_flags(&mut seatbelt_cli.config_overrides, cli.config_overrides);
prepend_config_flags(
&mut seatbelt_cli.config_overrides,
root_config_overrides.clone(),
);
codex_cli::debug_sandbox::run_command_under_seatbelt(
seatbelt_cli,
codex_linux_sandbox_exe,
@@ -197,7 +291,10 @@ async fn cli_main(codex_linux_sandbox_exe: Option<PathBuf>) -> anyhow::Result<()
.await?;
}
DebugCommand::Landlock(mut landlock_cli) => {
prepend_config_flags(&mut landlock_cli.config_overrides, cli.config_overrides);
prepend_config_flags(
&mut landlock_cli.config_overrides,
root_config_overrides.clone(),
);
codex_cli::debug_sandbox::run_command_under_landlock(
landlock_cli,
codex_linux_sandbox_exe,
@@ -206,7 +303,10 @@ async fn cli_main(codex_linux_sandbox_exe: Option<PathBuf>) -> anyhow::Result<()
}
},
Some(Subcommand::Apply(mut apply_cli)) => {
prepend_config_flags(&mut apply_cli.config_overrides, cli.config_overrides);
prepend_config_flags(
&mut apply_cli.config_overrides,
root_config_overrides.clone(),
);
run_apply_command(apply_cli, None).await?;
}
Some(Subcommand::GenerateTs(gen_cli)) => {
@@ -228,8 +328,256 @@ fn prepend_config_flags(
.splice(0..0, cli_config_overrides.raw_overrides);
}
/// Build the final `TuiCli` for a `codex resume` invocation.
fn finalize_resume_interactive(
mut interactive: TuiCli,
root_config_overrides: CliConfigOverrides,
session_id: Option<String>,
last: bool,
resume_cli: TuiCli,
) -> TuiCli {
// Start with the parsed interactive CLI so resume shares the same
// configuration surface area as `codex` without additional flags.
let resume_session_id = session_id;
interactive.resume_picker = resume_session_id.is_none() && !last;
interactive.resume_last = last;
interactive.resume_session_id = resume_session_id;
// Merge resume-scoped flags and overrides with highest precedence.
merge_resume_cli_flags(&mut interactive, resume_cli);
// Propagate any root-level config overrides (e.g. `-c key=value`).
prepend_config_flags(&mut interactive.config_overrides, root_config_overrides);
interactive
}
/// Merge flags provided to `codex resume` so they take precedence over any
/// root-level flags. Only overrides fields explicitly set on the resume-scoped
/// CLI. Also appends `-c key=value` overrides with highest precedence.
fn merge_resume_cli_flags(interactive: &mut TuiCli, resume_cli: TuiCli) {
if let Some(model) = resume_cli.model {
interactive.model = Some(model);
}
if resume_cli.oss {
interactive.oss = true;
}
if let Some(profile) = resume_cli.config_profile {
interactive.config_profile = Some(profile);
}
if let Some(sandbox) = resume_cli.sandbox_mode {
interactive.sandbox_mode = Some(sandbox);
}
if let Some(approval) = resume_cli.approval_policy {
interactive.approval_policy = Some(approval);
}
if resume_cli.full_auto {
interactive.full_auto = true;
}
if resume_cli.dangerously_bypass_approvals_and_sandbox {
interactive.dangerously_bypass_approvals_and_sandbox = true;
}
if let Some(cwd) = resume_cli.cwd {
interactive.cwd = Some(cwd);
}
if resume_cli.web_search {
interactive.web_search = true;
}
if !resume_cli.images.is_empty() {
interactive.images = resume_cli.images;
}
if let Some(prompt) = resume_cli.prompt {
interactive.prompt = Some(prompt);
}
interactive
.config_overrides
.raw_overrides
.extend(resume_cli.config_overrides.raw_overrides);
}
fn print_completion(cmd: CompletionCommand) {
let mut app = MultitoolCli::command();
let name = "codex";
generate(cmd.shell, &mut app, name, &mut std::io::stdout());
}
#[cfg(test)]
mod tests {
use super::*;
use codex_core::protocol::TokenUsage;
use codex_protocol::mcp_protocol::ConversationId;
fn finalize_from_args(args: &[&str]) -> TuiCli {
let cli = MultitoolCli::try_parse_from(args).expect("parse");
let MultitoolCli {
interactive,
config_overrides: root_overrides,
subcommand,
} = cli;
let Subcommand::Resume(ResumeCommand {
session_id,
last,
config_overrides: resume_cli,
}) = subcommand.expect("resume present")
else {
unreachable!()
};
finalize_resume_interactive(interactive, root_overrides, session_id, last, resume_cli)
}
fn sample_exit_info(conversation: Option<&str>) -> AppExitInfo {
let token_usage = TokenUsage {
output_tokens: 2,
total_tokens: 2,
..Default::default()
};
AppExitInfo {
token_usage,
conversation_id: conversation
.map(ConversationId::from_string)
.map(Result::unwrap),
}
}
#[test]
fn format_exit_messages_skips_zero_usage() {
let exit_info = AppExitInfo {
token_usage: TokenUsage::default(),
conversation_id: None,
};
let lines = format_exit_messages(exit_info, false);
assert!(lines.is_empty());
}
#[test]
fn format_exit_messages_includes_resume_hint_without_color() {
let exit_info = sample_exit_info(Some("123e4567-e89b-12d3-a456-426614174000"));
let lines = format_exit_messages(exit_info, false);
assert_eq!(
lines,
vec![
"Token usage: total=2 input=0 output=2".to_string(),
"To continue this session, run codex resume 123e4567-e89b-12d3-a456-426614174000."
.to_string(),
]
);
}
#[test]
fn format_exit_messages_applies_color_when_enabled() {
let exit_info = sample_exit_info(Some("123e4567-e89b-12d3-a456-426614174000"));
let lines = format_exit_messages(exit_info, true);
assert_eq!(lines.len(), 2);
assert!(lines[1].contains("\u{1b}[36m"));
}
#[test]
fn resume_model_flag_applies_when_no_root_flags() {
let interactive = finalize_from_args(["codex", "resume", "-m", "gpt-5-test"].as_ref());
assert_eq!(interactive.model.as_deref(), Some("gpt-5-test"));
assert!(interactive.resume_picker);
assert!(!interactive.resume_last);
assert_eq!(interactive.resume_session_id, None);
}
#[test]
fn resume_picker_logic_none_and_not_last() {
let interactive = finalize_from_args(["codex", "resume"].as_ref());
assert!(interactive.resume_picker);
assert!(!interactive.resume_last);
assert_eq!(interactive.resume_session_id, None);
}
#[test]
fn resume_picker_logic_last() {
let interactive = finalize_from_args(["codex", "resume", "--last"].as_ref());
assert!(!interactive.resume_picker);
assert!(interactive.resume_last);
assert_eq!(interactive.resume_session_id, None);
}
#[test]
fn resume_picker_logic_with_session_id() {
let interactive = finalize_from_args(["codex", "resume", "1234"].as_ref());
assert!(!interactive.resume_picker);
assert!(!interactive.resume_last);
assert_eq!(interactive.resume_session_id.as_deref(), Some("1234"));
}
#[test]
fn resume_merges_option_flags_and_full_auto() {
let interactive = finalize_from_args(
[
"codex",
"resume",
"sid",
"--oss",
"--full-auto",
"--search",
"--sandbox",
"workspace-write",
"--ask-for-approval",
"on-request",
"-m",
"gpt-5-test",
"-p",
"my-profile",
"-C",
"/tmp",
"-i",
"/tmp/a.png,/tmp/b.png",
]
.as_ref(),
);
assert_eq!(interactive.model.as_deref(), Some("gpt-5-test"));
assert!(interactive.oss);
assert_eq!(interactive.config_profile.as_deref(), Some("my-profile"));
assert!(matches!(
interactive.sandbox_mode,
Some(codex_common::SandboxModeCliArg::WorkspaceWrite)
));
assert!(matches!(
interactive.approval_policy,
Some(codex_common::ApprovalModeCliArg::OnRequest)
));
assert!(interactive.full_auto);
assert_eq!(
interactive.cwd.as_deref(),
Some(std::path::Path::new("/tmp"))
);
assert!(interactive.web_search);
let has_a = interactive
.images
.iter()
.any(|p| p == std::path::Path::new("/tmp/a.png"));
let has_b = interactive
.images
.iter()
.any(|p| p == std::path::Path::new("/tmp/b.png"));
assert!(has_a && has_b);
assert!(!interactive.resume_picker);
assert!(!interactive.resume_last);
assert_eq!(interactive.resume_session_id.as_deref(), Some("sid"));
}
#[test]
fn resume_merges_dangerously_bypass_flag() {
let interactive = finalize_from_args(
[
"codex",
"resume",
"--dangerously-bypass-approvals-and-sandbox",
]
.as_ref(),
);
assert!(interactive.dangerously_bypass_approvals_and_sandbox);
assert!(interactive.resume_picker);
assert!(!interactive.resume_last);
assert_eq!(interactive.resume_session_id, None);
}
}
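Distilled from `finalize_resume_interactive` above, the picker/last/session-id resolution reduces to a three-way decision (a sketch, not the shipped code; clap's `conflicts_with` already rules out combining an id with `--last`):

// Sketch of the resume-mode decision implemented above.
fn resume_mode(session_id: Option<&str>, last: bool) -> &'static str {
    match (session_id, last) {
        (Some(_), _) => "resume the given session",
        (None, true) => "resume the most recent session",
        (None, false) => "show the interactive picker",
    }
}

In other words: `codex resume` opens the picker, `codex resume --last` skips straight to the most recent session, and `codex resume <SESSION_ID>` resumes a specific one, with resume-scoped flags (e.g. `-m`) overriding root-level ones.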

codex-rs/cli/src/mcp_cmd.rs (new file, 384 lines)

@@ -0,0 +1,384 @@
use std::collections::BTreeMap;
use std::collections::HashMap;
use std::path::PathBuf;
use anyhow::Context;
use anyhow::Result;
use anyhow::anyhow;
use anyhow::bail;
use codex_common::CliConfigOverrides;
use codex_core::config::Config;
use codex_core::config::ConfigOverrides;
use codex_core::config::find_codex_home;
use codex_core::config::load_global_mcp_servers;
use codex_core::config::write_global_mcp_servers;
use codex_core::config_types::McpServerConfig;
/// [experimental] Launch Codex as an MCP server or manage configured MCP servers.
///
/// Subcommands:
/// - `serve` — run the MCP server on stdio
/// - `list` — list configured servers (with `--json`)
/// - `get` — show a single server (with `--json`)
/// - `add` — add a server launcher entry to `~/.codex/config.toml`
/// - `remove` — delete a server entry
#[derive(Debug, clap::Parser)]
pub struct McpCli {
#[clap(flatten)]
pub config_overrides: CliConfigOverrides,
#[command(subcommand)]
pub cmd: Option<McpSubcommand>,
}
#[derive(Debug, clap::Subcommand)]
pub enum McpSubcommand {
/// [experimental] Run the Codex MCP server (stdio transport).
Serve,
/// [experimental] List configured MCP servers.
List(ListArgs),
/// [experimental] Show details for a configured MCP server.
Get(GetArgs),
/// [experimental] Add a global MCP server entry.
Add(AddArgs),
/// [experimental] Remove a global MCP server entry.
Remove(RemoveArgs),
}
#[derive(Debug, clap::Parser)]
pub struct ListArgs {
/// Output the configured servers as JSON.
#[arg(long)]
pub json: bool,
}
#[derive(Debug, clap::Parser)]
pub struct GetArgs {
/// Name of the MCP server to display.
pub name: String,
/// Output the server configuration as JSON.
#[arg(long)]
pub json: bool,
}
#[derive(Debug, clap::Parser)]
pub struct AddArgs {
/// Name for the MCP server configuration.
pub name: String,
/// Environment variables to set when launching the server.
#[arg(long, value_parser = parse_env_pair, value_name = "KEY=VALUE")]
pub env: Vec<(String, String)>,
/// Command to launch the MCP server.
#[arg(trailing_var_arg = true, num_args = 1..)]
pub command: Vec<String>,
}
#[derive(Debug, clap::Parser)]
pub struct RemoveArgs {
/// Name of the MCP server configuration to remove.
pub name: String,
}
impl McpCli {
pub async fn run(self, codex_linux_sandbox_exe: Option<PathBuf>) -> Result<()> {
let McpCli {
config_overrides,
cmd,
} = self;
let subcommand = cmd.unwrap_or(McpSubcommand::Serve);
match subcommand {
McpSubcommand::Serve => {
codex_mcp_server::run_main(codex_linux_sandbox_exe, config_overrides).await?;
}
McpSubcommand::List(args) => {
run_list(&config_overrides, args)?;
}
McpSubcommand::Get(args) => {
run_get(&config_overrides, args)?;
}
McpSubcommand::Add(args) => {
run_add(&config_overrides, args)?;
}
McpSubcommand::Remove(args) => {
run_remove(&config_overrides, args)?;
}
}
Ok(())
}
}
fn run_add(config_overrides: &CliConfigOverrides, add_args: AddArgs) -> Result<()> {
// Validate any provided overrides even though they are not currently applied.
config_overrides.parse_overrides().map_err(|e| anyhow!(e))?;
let AddArgs { name, env, command } = add_args;
validate_server_name(&name)?;
let mut command_parts = command.into_iter();
let command_bin = command_parts
.next()
.ok_or_else(|| anyhow!("command is required"))?;
let command_args: Vec<String> = command_parts.collect();
let env_map = if env.is_empty() {
None
} else {
let mut map = HashMap::new();
for (key, value) in env {
map.insert(key, value);
}
Some(map)
};
let codex_home = find_codex_home().context("failed to resolve CODEX_HOME")?;
let mut servers = load_global_mcp_servers(&codex_home)
.with_context(|| format!("failed to load MCP servers from {}", codex_home.display()))?;
let new_entry = McpServerConfig {
command: command_bin,
args: command_args,
env: env_map,
startup_timeout_sec: None,
tool_timeout_sec: None,
};
servers.insert(name.clone(), new_entry);
write_global_mcp_servers(&codex_home, &servers)
.with_context(|| format!("failed to write MCP servers to {}", codex_home.display()))?;
println!("Added global MCP server '{name}'.");
Ok(())
}
fn run_remove(config_overrides: &CliConfigOverrides, remove_args: RemoveArgs) -> Result<()> {
config_overrides.parse_overrides().map_err(|e| anyhow!(e))?;
let RemoveArgs { name } = remove_args;
validate_server_name(&name)?;
let codex_home = find_codex_home().context("failed to resolve CODEX_HOME")?;
let mut servers = load_global_mcp_servers(&codex_home)
.with_context(|| format!("failed to load MCP servers from {}", codex_home.display()))?;
let removed = servers.remove(&name).is_some();
if removed {
write_global_mcp_servers(&codex_home, &servers)
.with_context(|| format!("failed to write MCP servers to {}", codex_home.display()))?;
}
if removed {
println!("Removed global MCP server '{name}'.");
} else {
println!("No MCP server named '{name}' found.");
}
Ok(())
}
fn run_list(config_overrides: &CliConfigOverrides, list_args: ListArgs) -> Result<()> {
let overrides = config_overrides.parse_overrides().map_err(|e| anyhow!(e))?;
let config = Config::load_with_cli_overrides(overrides, ConfigOverrides::default())
.context("failed to load configuration")?;
let mut entries: Vec<_> = config.mcp_servers.iter().collect();
entries.sort_by(|(a, _), (b, _)| a.cmp(b));
if list_args.json {
let json_entries: Vec<_> = entries
.into_iter()
.map(|(name, cfg)| {
let env = cfg.env.as_ref().map(|env| {
env.iter()
.map(|(k, v)| (k.clone(), v.clone()))
.collect::<BTreeMap<_, _>>()
});
serde_json::json!({
"name": name,
"command": cfg.command,
"args": cfg.args,
"env": env,
"startup_timeout_sec": cfg
.startup_timeout_sec
.map(|timeout| timeout.as_secs_f64()),
"tool_timeout_sec": cfg
.tool_timeout_sec
.map(|timeout| timeout.as_secs_f64()),
})
})
.collect();
let output = serde_json::to_string_pretty(&json_entries)?;
println!("{output}");
return Ok(());
}
if entries.is_empty() {
println!("No MCP servers configured yet. Try `codex mcp add my-tool -- my-command`.");
return Ok(());
}
let mut rows: Vec<[String; 4]> = Vec::new();
for (name, cfg) in entries {
let args = if cfg.args.is_empty() {
"-".to_string()
} else {
cfg.args.join(" ")
};
let env = match cfg.env.as_ref() {
None => "-".to_string(),
Some(map) if map.is_empty() => "-".to_string(),
Some(map) => {
let mut pairs: Vec<_> = map.iter().collect();
pairs.sort_by(|(a, _), (b, _)| a.cmp(b));
pairs
.into_iter()
.map(|(k, v)| format!("{k}={v}"))
.collect::<Vec<_>>()
.join(", ")
}
};
rows.push([name.clone(), cfg.command.clone(), args, env]);
}
let mut widths = ["Name".len(), "Command".len(), "Args".len(), "Env".len()];
for row in &rows {
for (i, cell) in row.iter().enumerate() {
widths[i] = widths[i].max(cell.len());
}
}
println!(
"{:<name_w$} {:<cmd_w$} {:<args_w$} {:<env_w$}",
"Name",
"Command",
"Args",
"Env",
name_w = widths[0],
cmd_w = widths[1],
args_w = widths[2],
env_w = widths[3],
);
for row in rows {
println!(
"{:<name_w$} {:<cmd_w$} {:<args_w$} {:<env_w$}",
row[0],
row[1],
row[2],
row[3],
name_w = widths[0],
cmd_w = widths[1],
args_w = widths[2],
env_w = widths[3],
);
}
Ok(())
}
fn run_get(config_overrides: &CliConfigOverrides, get_args: GetArgs) -> Result<()> {
let overrides = config_overrides.parse_overrides().map_err(|e| anyhow!(e))?;
let config = Config::load_with_cli_overrides(overrides, ConfigOverrides::default())
.context("failed to load configuration")?;
let Some(server) = config.mcp_servers.get(&get_args.name) else {
bail!("No MCP server named '{name}' found.", name = get_args.name);
};
if get_args.json {
let env = server.env.as_ref().map(|env| {
env.iter()
.map(|(k, v)| (k.clone(), v.clone()))
.collect::<BTreeMap<_, _>>()
});
let output = serde_json::to_string_pretty(&serde_json::json!({
"name": get_args.name,
"command": server.command,
"args": server.args,
"env": env,
"startup_timeout_sec": server
.startup_timeout_sec
.map(|timeout| timeout.as_secs_f64()),
"tool_timeout_sec": server
.tool_timeout_sec
.map(|timeout| timeout.as_secs_f64()),
}))?;
println!("{output}");
return Ok(());
}
println!("{}", get_args.name);
println!(" command: {}", server.command);
let args = if server.args.is_empty() {
"-".to_string()
} else {
server.args.join(" ")
};
println!(" args: {args}");
let env_display = match server.env.as_ref() {
None => "-".to_string(),
Some(map) if map.is_empty() => "-".to_string(),
Some(map) => {
let mut pairs: Vec<_> = map.iter().collect();
pairs.sort_by(|(a, _), (b, _)| a.cmp(b));
pairs
.into_iter()
.map(|(k, v)| format!("{k}={v}"))
.collect::<Vec<_>>()
.join(", ")
}
};
println!(" env: {env_display}");
if let Some(timeout) = server.startup_timeout_sec {
println!(" startup_timeout_sec: {}", timeout.as_secs_f64());
}
if let Some(timeout) = server.tool_timeout_sec {
println!(" tool_timeout_sec: {}", timeout.as_secs_f64());
}
println!(" remove: codex mcp remove {}", get_args.name);
Ok(())
}
fn parse_env_pair(raw: &str) -> Result<(String, String), String> {
let mut parts = raw.splitn(2, '=');
let key = parts
.next()
.map(str::trim)
.filter(|s| !s.is_empty())
.ok_or_else(|| "environment entries must be in KEY=VALUE form".to_string())?;
let value = parts
.next()
.map(str::to_string)
.ok_or_else(|| "environment entries must be in KEY=VALUE form".to_string())?;
Ok((key.to_string(), value))
}
fn validate_server_name(name: &str) -> Result<()> {
let is_valid = !name.is_empty()
&& name
.chars()
.all(|c| c.is_ascii_alphanumeric() || c == '-' || c == '_');
if is_valid {
Ok(())
} else {
bail!("invalid server name '{name}' (use letters, numbers, '-', '_')");
}
}
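One detail worth calling out: `parse_env_pair` splits with `splitn(2, '=')`, so only the first `=` separates key from value. A standalone sketch reusing the function body above illustrates the edge cases:

fn parse_env_pair(raw: &str) -> Result<(String, String), String> {
    let mut parts = raw.splitn(2, '=');
    let key = parts
        .next()
        .map(str::trim)
        .filter(|s| !s.is_empty())
        .ok_or_else(|| "environment entries must be in KEY=VALUE form".to_string())?;
    let value = parts
        .next()
        .map(str::to_string)
        .ok_or_else(|| "environment entries must be in KEY=VALUE form".to_string())?;
    Ok((key.to_string(), value))
}

fn main() {
    // Later '=' characters stay in the value.
    assert_eq!(
        parse_env_pair("URL=https://x?a=b"),
        Ok(("URL".into(), "https://x?a=b".into()))
    );
    // An empty value is allowed; a missing '=' or an empty key is not.
    assert_eq!(parse_env_pair("FOO="), Ok(("FOO".into(), String::new())));
    assert!(parse_env_pair("FOO").is_err());
    assert!(parse_env_pair("=bar").is_err());
}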


@@ -0,0 +1,86 @@
use std::path::Path;
use anyhow::Result;
use codex_core::config::load_global_mcp_servers;
use predicates::str::contains;
use pretty_assertions::assert_eq;
use tempfile::TempDir;
fn codex_command(codex_home: &Path) -> Result<assert_cmd::Command> {
let mut cmd = assert_cmd::Command::cargo_bin("codex")?;
cmd.env("CODEX_HOME", codex_home);
Ok(cmd)
}
#[test]
fn add_and_remove_server_updates_global_config() -> Result<()> {
let codex_home = TempDir::new()?;
let mut add_cmd = codex_command(codex_home.path())?;
add_cmd
.args(["mcp", "add", "docs", "--", "echo", "hello"])
.assert()
.success()
.stdout(contains("Added global MCP server 'docs'."));
let servers = load_global_mcp_servers(codex_home.path())?;
assert_eq!(servers.len(), 1);
let docs = servers.get("docs").expect("server should exist");
assert_eq!(docs.command, "echo");
assert_eq!(docs.args, vec!["hello".to_string()]);
assert!(docs.env.is_none());
let mut remove_cmd = codex_command(codex_home.path())?;
remove_cmd
.args(["mcp", "remove", "docs"])
.assert()
.success()
.stdout(contains("Removed global MCP server 'docs'."));
let servers = load_global_mcp_servers(codex_home.path())?;
assert!(servers.is_empty());
let mut remove_again_cmd = codex_command(codex_home.path())?;
remove_again_cmd
.args(["mcp", "remove", "docs"])
.assert()
.success()
.stdout(contains("No MCP server named 'docs' found."));
let servers = load_global_mcp_servers(codex_home.path())?;
assert!(servers.is_empty());
Ok(())
}
#[test]
fn add_with_env_preserves_key_order_and_values() -> Result<()> {
let codex_home = TempDir::new()?;
let mut add_cmd = codex_command(codex_home.path())?;
add_cmd
.args([
"mcp",
"add",
"envy",
"--env",
"FOO=bar",
"--env",
"ALPHA=beta",
"--",
"python",
"server.py",
])
.assert()
.success();
let servers = load_global_mcp_servers(codex_home.path())?;
let envy = servers.get("envy").expect("server should exist");
let env = envy.env.as_ref().expect("env should be present");
assert_eq!(env.len(), 2);
assert_eq!(env.get("FOO"), Some(&"bar".to_string()));
assert_eq!(env.get("ALPHA"), Some(&"beta".to_string()));
Ok(())
}


@@ -0,0 +1,106 @@
use std::path::Path;
use anyhow::Result;
use predicates::str::contains;
use pretty_assertions::assert_eq;
use serde_json::Value as JsonValue;
use tempfile::TempDir;
fn codex_command(codex_home: &Path) -> Result<assert_cmd::Command> {
let mut cmd = assert_cmd::Command::cargo_bin("codex")?;
cmd.env("CODEX_HOME", codex_home);
Ok(cmd)
}
#[test]
fn list_shows_empty_state() -> Result<()> {
let codex_home = TempDir::new()?;
let mut cmd = codex_command(codex_home.path())?;
let output = cmd.args(["mcp", "list"]).output()?;
assert!(output.status.success());
let stdout = String::from_utf8(output.stdout)?;
assert!(stdout.contains("No MCP servers configured yet."));
Ok(())
}
#[test]
fn list_and_get_render_expected_output() -> Result<()> {
let codex_home = TempDir::new()?;
let mut add = codex_command(codex_home.path())?;
add.args([
"mcp",
"add",
"docs",
"--env",
"TOKEN=secret",
"--",
"docs-server",
"--port",
"4000",
])
.assert()
.success();
let mut list_cmd = codex_command(codex_home.path())?;
let list_output = list_cmd.args(["mcp", "list"]).output()?;
assert!(list_output.status.success());
let stdout = String::from_utf8(list_output.stdout)?;
assert!(stdout.contains("Name"));
assert!(stdout.contains("docs"));
assert!(stdout.contains("docs-server"));
assert!(stdout.contains("TOKEN=secret"));
let mut list_json_cmd = codex_command(codex_home.path())?;
let json_output = list_json_cmd.args(["mcp", "list", "--json"]).output()?;
assert!(json_output.status.success());
let stdout = String::from_utf8(json_output.stdout)?;
let parsed: JsonValue = serde_json::from_str(&stdout)?;
let array = parsed.as_array().expect("expected array");
assert_eq!(array.len(), 1);
let entry = &array[0];
assert_eq!(entry.get("name"), Some(&JsonValue::String("docs".into())));
assert_eq!(
entry.get("command"),
Some(&JsonValue::String("docs-server".into()))
);
let args = entry
.get("args")
.and_then(|v| v.as_array())
.expect("args array");
assert_eq!(
args,
&vec![
JsonValue::String("--port".into()),
JsonValue::String("4000".into())
]
);
let env = entry
.get("env")
.and_then(|v| v.as_object())
.expect("env map");
assert_eq!(env.get("TOKEN"), Some(&JsonValue::String("secret".into())));
let mut get_cmd = codex_command(codex_home.path())?;
let get_output = get_cmd.args(["mcp", "get", "docs"]).output()?;
assert!(get_output.status.success());
let stdout = String::from_utf8(get_output.stdout)?;
assert!(stdout.contains("docs"));
assert!(stdout.contains("command: docs-server"));
assert!(stdout.contains("args: --port 4000"));
assert!(stdout.contains("env: TOKEN=secret"));
assert!(stdout.contains("remove: codex mcp remove docs"));
let mut get_json_cmd = codex_command(codex_home.path())?;
get_json_cmd
.args(["mcp", "get", "docs", "--json"])
.assert()
.success()
.stdout(contains("\"name\": \"docs\""));
Ok(())
}
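For the `docs` server configured in these tests, the width computation in `run_list` pads each column to its longest cell, so the human-readable output looks approximately like:

Name  Command      Args         Env
docs  docs-server  --port 4000  TOKEN=secret

(`codex mcp list --json` emits the same data as the JSON array asserted above.)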


@@ -7,11 +7,14 @@ version = { workspace = true }
workspace = true
[dependencies]
clap = { version = "4", features = ["derive", "wrap_help"], optional = true }
codex-core = { path = "../core" }
codex-protocol = { path = "../protocol" }
serde = { version = "1", optional = true }
toml = { version = "0.9", optional = true }
async-trait = { workspace = true }
clap = { workspace = true, features = ["derive", "wrap_help"], optional = true }
codex-core = { workspace = true }
codex-protocol = { workspace = true }
serde = { workspace = true, optional = true }
toml = { workspace = true, optional = true }
thiserror = { workspace = true }
tokio = { workspace = true }
[features]
# Separate feature so that `clap` is not a mandatory dependency.


@@ -17,7 +17,10 @@ pub fn create_config_summary_entries(config: &Config) -> Vec<(&'static str, Stri
{
entries.push((
"reasoning effort",
config.model_reasoning_effort.to_string(),
config
.model_reasoning_effort
.map(|effort| effort.to_string())
.unwrap_or_else(|| "none".to_string()),
));
entries.push((
"reasoning summaries",


@@ -34,3 +34,5 @@ pub mod model_presets;
// Shared approval presets (AskForApproval + Sandbox) used by TUI and MCP server
// Not to be confused with AskForApproval, which we should probably rename to EscalationPolicy.
pub mod approval_presets;
// Readiness flag with token-based authorization and async waiting (Tokio).
pub mod readiness;


@@ -1,4 +1,5 @@
use codex_core::protocol_config_types::ReasoningEffort;
use codex_protocol::mcp_protocol::AuthMode;
/// A simple preset pairing a model slug with a reasoning effort.
#[derive(Debug, Clone, Copy)]
@@ -12,50 +13,61 @@ pub struct ModelPreset {
/// Model slug (e.g., "gpt-5").
pub model: &'static str,
/// Reasoning effort to apply for this preset.
pub effort: ReasoningEffort,
pub effort: Option<ReasoningEffort>,
}
/// Built-in list of model presets that pair a model with a reasoning effort.
///
/// Keep this UI-agnostic so it can be reused by both TUI and MCP server.
pub fn builtin_model_presets() -> &'static [ModelPreset] {
// Order reflects effort from minimal to high.
const PRESETS: &[ModelPreset] = &[
ModelPreset {
id: "gpt-5-minimal",
label: "gpt-5 minimal",
description: "— fastest responses with limited reasoning; ideal for coding, instructions, or lightweight tasks",
model: "gpt-5",
effort: ReasoningEffort::Minimal,
},
ModelPreset {
id: "gpt-5-low",
label: "gpt-5 low",
description: "— balances speed with some reasoning; useful for straightforward queries and short explanations",
model: "gpt-5",
effort: ReasoningEffort::Low,
},
ModelPreset {
id: "gpt-5-medium",
label: "gpt-5 medium",
description: "— default setting; provides a solid balance of reasoning depth and latency for general-purpose tasks",
model: "gpt-5",
effort: ReasoningEffort::Medium,
},
ModelPreset {
id: "gpt-5-high",
label: "gpt-5 high",
description: "— maximizes reasoning depth for complex or ambiguous problems",
model: "gpt-5",
effort: ReasoningEffort::High,
},
ModelPreset {
id: "gpt-5-high-new",
label: "gpt-5 high new",
description: "— our latest release tuned to rely on the model's built-in reasoning defaults",
model: "gpt-5-high-new",
effort: ReasoningEffort::Medium,
},
];
PRESETS
const PRESETS: &[ModelPreset] = &[
ModelPreset {
id: "gpt-5-codex-low",
label: "gpt-5-codex low",
description: "",
model: "gpt-5-codex",
effort: Some(ReasoningEffort::Low),
},
ModelPreset {
id: "gpt-5-codex-medium",
label: "gpt-5-codex medium",
description: "",
model: "gpt-5-codex",
effort: None,
},
ModelPreset {
id: "gpt-5-codex-high",
label: "gpt-5-codex high",
description: "",
model: "gpt-5-codex",
effort: Some(ReasoningEffort::High),
},
ModelPreset {
id: "gpt-5-minimal",
label: "gpt-5 minimal",
description: "— fastest responses with limited reasoning; ideal for coding, instructions, or lightweight tasks",
model: "gpt-5",
effort: Some(ReasoningEffort::Minimal),
},
ModelPreset {
id: "gpt-5-low",
label: "gpt-5 low",
description: "— balances speed with some reasoning; useful for straightforward queries and short explanations",
model: "gpt-5",
effort: Some(ReasoningEffort::Low),
},
ModelPreset {
id: "gpt-5-medium",
label: "gpt-5 medium",
description: "— default setting; provides a solid balance of reasoning depth and latency for general-purpose tasks",
model: "gpt-5",
effort: Some(ReasoningEffort::Medium),
},
ModelPreset {
id: "gpt-5-high",
label: "gpt-5 high",
description: "— maximizes reasoning depth for complex or ambiguous problems",
model: "gpt-5",
effort: Some(ReasoningEffort::High),
},
];
pub fn builtin_model_presets(_auth_mode: Option<AuthMode>) -> Vec<ModelPreset> {
PRESETS.to_vec()
}
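Since `effort` is now `Option<ReasoningEffort>`, `None` defers to the model's built-in default (as the gpt-5-codex medium preset above does). A consumer renders it along these lines (a sketch; the crate path and the `Display` impl are assumptions inferred from the `pub mod model_presets;` context and the config-summary hunk earlier):

use codex_common::model_presets::builtin_model_presets;

fn print_presets() {
    for preset in builtin_model_presets(None) {
        // `None` now means "use the model's default effort".
        let effort = preset
            .effort
            .map(|e| e.to_string())
            .unwrap_or_else(|| "model default".to_string());
        println!("{}: {effort}", preset.id);
    }
}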


@@ -0,0 +1,249 @@
//! Readiness flag with token-based authorization and async waiting (Tokio).
use std::collections::HashSet;
use std::sync::atomic::AtomicBool;
use std::sync::atomic::AtomicI32;
use std::sync::atomic::Ordering;
use std::time::Duration;
use tokio::sync::Mutex;
use tokio::sync::watch;
use tokio::time;
/// Opaque subscription token returned by `subscribe()`.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
pub struct Token(i32);
const LOCK_TIMEOUT: Duration = Duration::from_millis(1000);
#[async_trait::async_trait]
pub trait Readiness: Send + Sync + 'static {
/// Returns true if the flag is currently marked ready. This can only happen
/// after some subscribed token has successfully marked the flag ready; once
/// `true`, it never reverts.
fn is_ready(&self) -> bool;
/// Subscribe to readiness and receive an authorization token.
///
/// If the flag is already ready, returns `FlagAlreadyReady`.
async fn subscribe(&self) -> Result<Token, errors::ReadinessError>;
/// Attempt to mark the flag ready, validated by the provided token.
///
/// Returns `true` iff:
/// - `token` is currently subscribed, and
/// - the flag was not already ready.
async fn mark_ready(&self, token: Token) -> Result<bool, errors::ReadinessError>;
/// Asynchronously wait until the flag becomes ready.
async fn wait_ready(&self);
}
pub struct ReadinessFlag {
/// Atomic for cheap reads.
ready: AtomicBool,
/// Used to generate the next i32 token.
next_id: AtomicI32,
/// Set of active subscriptions.
tokens: Mutex<HashSet<Token>>,
/// Broadcasts readiness to async waiters.
tx: watch::Sender<bool>,
}
impl ReadinessFlag {
/// Create a new, not-yet-ready flag.
pub fn new() -> Self {
let (tx, _rx) = watch::channel(false);
Self {
ready: AtomicBool::new(false),
next_id: AtomicI32::new(1), // Reserve 0.
tokens: Mutex::new(HashSet::new()),
tx,
}
}
async fn with_tokens<R>(
&self,
f: impl FnOnce(&mut HashSet<Token>) -> R,
) -> Result<R, errors::ReadinessError> {
let mut guard = time::timeout(LOCK_TIMEOUT, self.tokens.lock())
.await
.map_err(|_| errors::ReadinessError::TokenLockFailed)?;
Ok(f(&mut guard))
}
}
impl Default for ReadinessFlag {
fn default() -> Self {
Self::new()
}
}
#[async_trait::async_trait]
impl Readiness for ReadinessFlag {
fn is_ready(&self) -> bool {
self.ready.load(Ordering::Acquire)
}
async fn subscribe(&self) -> Result<Token, errors::ReadinessError> {
if self.is_ready() {
return Err(errors::ReadinessError::FlagAlreadyReady);
}
// Generate a token; ensure it's not 0.
let token = Token(self.next_id.fetch_add(1, Ordering::Relaxed));
// Recheck readiness while holding the lock so mark_ready can't flip the flag between the
// check above and inserting the token.
let inserted = self
.with_tokens(|tokens| {
if self.is_ready() {
return false;
}
tokens.insert(token);
true
})
.await?;
if !inserted {
return Err(errors::ReadinessError::FlagAlreadyReady);
}
Ok(token)
}
async fn mark_ready(&self, token: Token) -> Result<bool, errors::ReadinessError> {
if self.is_ready() {
return Ok(false);
}
if token.0 == 0 {
return Ok(false); // Never authorize.
}
let marked = self
.with_tokens(|set| {
if !set.remove(&token) {
return false; // invalid or already used
}
self.ready.store(true, Ordering::Release);
set.clear(); // no further tokens needed once ready
true
})
.await?;
if !marked {
return Ok(false);
}
// Best-effort broadcast; ignore error if there are no receivers.
let _ = self.tx.send(true);
Ok(true)
}
async fn wait_ready(&self) {
if self.is_ready() {
return;
}
let mut rx = self.tx.subscribe();
// Fast-path check before awaiting.
if *rx.borrow() {
return;
}
// Await changes until true is observed.
while rx.changed().await.is_ok() {
if *rx.borrow() {
break;
}
}
}
}
mod errors {
use thiserror::Error;
#[derive(Debug, Error)]
pub enum ReadinessError {
#[error("Failed to acquire readiness token lock")]
TokenLockFailed,
#[error("Flag is already ready. Impossible to subscribe")]
FlagAlreadyReady,
}
}
#[cfg(test)]
mod tests {
use std::sync::Arc;
use super::Readiness;
use super::ReadinessFlag;
use super::Token;
use super::errors::ReadinessError;
#[tokio::test]
async fn subscribe_and_mark_ready_roundtrip() -> Result<(), ReadinessError> {
let flag = ReadinessFlag::new();
let token = flag.subscribe().await?;
assert!(flag.mark_ready(token).await?);
assert!(flag.is_ready());
Ok(())
}
#[tokio::test]
async fn subscribe_after_ready_returns_err() -> Result<(), ReadinessError> {
let flag = ReadinessFlag::new();
let token = flag.subscribe().await?;
assert!(flag.mark_ready(token).await?);
assert!(flag.subscribe().await.is_err());
Ok(())
}
#[tokio::test]
async fn mark_ready_rejects_unknown_token() -> Result<(), ReadinessError> {
let flag = ReadinessFlag::new();
assert!(!flag.mark_ready(Token(42)).await?);
assert!(!flag.is_ready());
Ok(())
}
#[tokio::test]
async fn wait_ready_unblocks_after_mark_ready() -> Result<(), ReadinessError> {
let flag = Arc::new(ReadinessFlag::new());
let token = flag.subscribe().await?;
let waiter = {
let flag = Arc::clone(&flag);
tokio::spawn(async move {
flag.wait_ready().await;
})
};
assert!(flag.mark_ready(token).await?);
waiter.await.expect("waiting task should not panic");
Ok(())
}
#[tokio::test]
async fn mark_ready_twice_uses_single_token() -> Result<(), ReadinessError> {
let flag = ReadinessFlag::new();
let token = flag.subscribe().await?;
assert!(flag.mark_ready(token).await?);
assert!(!flag.mark_ready(token).await?);
Ok(())
}
#[tokio::test]
async fn subscribe_returns_error_when_lock_is_held() {
let flag = ReadinessFlag::new();
let _guard = flag
.tokens
.try_lock()
.expect("initial lock acquisition should succeed");
let err = flag
.subscribe()
.await
.expect_err("contended subscribe should report a lock failure");
assert!(matches!(err, ReadinessError::TokenLockFailed));
}
}
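As a usage sketch (assuming the module is exported as `codex_common::readiness`, per the `pub mod readiness;` hunk above), the subscribe, wait, and mark-ready flow looks like:

use std::sync::Arc;

use codex_common::readiness::Readiness;
use codex_common::readiness::ReadinessFlag;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let flag = Arc::new(ReadinessFlag::new());
    let token = flag.subscribe().await?;

    // Waiters park until some subscribed token marks the flag ready.
    let waiter = {
        let flag = Arc::clone(&flag);
        tokio::spawn(async move { flag.wait_ready().await })
    };

    assert!(flag.mark_ready(token).await?);
    waiter.await?;
    assert!(flag.is_ready());

    // The transition is one-way: a used token cannot fire again, and new
    // subscriptions are rejected once the flag is ready.
    assert!(!flag.mark_ready(token).await?);
    assert!(flag.subscribe().await.is_err());
    Ok(())
}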


@@ -4,82 +4,89 @@ name = "codex-core"
version = { workspace = true }
[lib]
doctest = false
name = "codex_core"
path = "src/lib.rs"
doctest = false
[lints]
workspace = true
[dependencies]
anyhow = "1"
async-channel = "2.3.1"
base64 = "0.22"
bytes = "1.10.1"
chrono = { version = "0.4", features = ["serde"] }
codex-apply-patch = { path = "../apply-patch" }
codex-mcp-client = { path = "../mcp-client" }
codex-protocol = { path = "../protocol" }
dirs = "6"
env-flags = "0.1.1"
eventsource-stream = "0.2.3"
futures = "0.3"
libc = "0.2.175"
mcp-types = { path = "../mcp-types" }
os_info = "3.12.0"
portable-pty = "0.9.0"
rand = "0.9"
regex-lite = "0.1.7"
reqwest = { version = "0.12", features = ["json", "stream"] }
serde = { version = "1", features = ["derive"] }
serde_json = "1"
sha1 = "0.10.6"
shlex = "1.3.0"
similar = "2.7.0"
strum_macros = "0.27.2"
tempfile = "3"
thiserror = "2.0.16"
time = { version = "0.3", features = ["formatting", "parsing", "local-offset", "macros"] }
tokio = { version = "1", features = [
anyhow = { workspace = true }
askama = { workspace = true }
async-channel = { workspace = true }
base64 = { workspace = true }
bytes = { workspace = true }
chrono = { workspace = true, features = ["serde"] }
codex-apply-patch = { workspace = true }
codex-file-search = { workspace = true }
codex-mcp-client = { workspace = true }
codex-protocol = { workspace = true }
dirs = { workspace = true }
env-flags = { workspace = true }
eventsource-stream = { workspace = true }
futures = { workspace = true }
libc = { workspace = true }
mcp-types = { workspace = true }
os_info = { workspace = true }
portable-pty = { workspace = true }
rand = { workspace = true }
regex-lite = { workspace = true }
reqwest = { workspace = true, features = ["json", "stream"] }
serde = { workspace = true, features = ["derive"] }
serde_json = { workspace = true }
sha1 = { workspace = true }
shlex = { workspace = true }
similar = { workspace = true }
strum_macros = { workspace = true }
tempfile = { workspace = true }
thiserror = { workspace = true }
time = { workspace = true, features = [
"formatting",
"parsing",
"local-offset",
"macros",
] }
tokio = { workspace = true, features = [
"io-std",
"macros",
"process",
"rt-multi-thread",
"signal",
] }
tokio-util = "0.7.16"
toml = "0.9.5"
toml_edit = "0.23.4"
tracing = { version = "0.1.41", features = ["log"] }
tree-sitter = "0.25.9"
tree-sitter-bash = "0.25.0"
uuid = { version = "1", features = ["serde", "v4"] }
which = "6"
wildmatch = "2.4.0"
tokio-util = { workspace = true }
toml = { workspace = true }
toml_edit = { workspace = true }
tracing = { workspace = true, features = ["log"] }
tree-sitter = { workspace = true }
tree-sitter-bash = { workspace = true }
uuid = { workspace = true, features = ["serde", "v4"] }
which = { workspace = true }
wildmatch = { workspace = true }
[target.'cfg(target_os = "linux")'.dependencies]
landlock = "0.4.1"
seccompiler = "0.5.0"
landlock = { workspace = true }
seccompiler = { workspace = true }
# Build OpenSSL from source for musl builds.
[target.x86_64-unknown-linux-musl.dependencies]
openssl-sys = { version = "*", features = ["vendored"] }
openssl-sys = { workspace = true, features = ["vendored"] }
# Build OpenSSL from source for musl builds.
[target.aarch64-unknown-linux-musl.dependencies]
openssl-sys = { version = "*", features = ["vendored"] }
openssl-sys = { workspace = true, features = ["vendored"] }
[dev-dependencies]
assert_cmd = "2"
core_test_support = { path = "tests/common" }
maplit = "1.0.2"
predicates = "3"
pretty_assertions = "1.4.1"
tempfile = "3"
tokio-test = "0.4"
walkdir = "2.5.0"
wiremock = "0.6"
assert_cmd = { workspace = true }
core_test_support = { workspace = true }
maplit = { workspace = true }
predicates = { workspace = true }
pretty_assertions = { workspace = true }
tempfile = { workspace = true }
tokio-test = { workspace = true }
walkdir = { workspace = true }
wiremock = { workspace = true }
[package.metadata.cargo-shear]
ignored = ["openssl-sys"]


@@ -0,0 +1,104 @@
You are Codex, based on GPT-5. You are running as a coding agent in the Codex CLI on a user's computer.
## General
- The arguments to `shell` will be passed to execvp(). Most terminal commands should be prefixed with ["bash", "-lc"].
- Always set the `workdir` param when using the shell function. Do not use `cd` unless absolutely necessary.
- When searching for text or files, prefer using `rg` or `rg --files` respectively because `rg` is much faster than alternatives like `grep`. (If the `rg` command is not found, then use alternatives.)
## Editing constraints
- Default to ASCII when editing or creating files. Only introduce non-ASCII or other Unicode characters when there is a clear justification and the file already uses them.
- Add succinct code comments that explain what is going on if code is not self-explanatory. You should not add comments like "Assigns the value to the variable", but a brief comment might be useful ahead of a complex code block that the user would otherwise have to spend time parsing out. Usage of these comments should be rare.
- You may be in a dirty git worktree.
* NEVER revert existing changes you did not make unless explicitly requested, since these changes were made by the user.
* If asked to make a commit or code edits and there are unrelated changes to your work or changes that you didn't make in those files, don't revert those changes.
* If the changes are in files you've touched recently, you should read carefully and understand how you can work with the changes rather than reverting them.
* If the changes are in unrelated files, just ignore them and don't revert them.
- While you are working, you might notice unexpected changes that you didn't make. If this happens, STOP IMMEDIATELY and ask the user how they would like to proceed.
## Plan tool
When using the planning tool:
- Skip using the planning tool for straightforward tasks (roughly the easiest 25%).
- Do not make single-step plans.
- When you have made a plan, update it after performing one of the sub-tasks that you shared in the plan.
## Codex CLI harness, sandboxing, and approvals
The Codex CLI harness supports several different configurations for sandboxing and escalation approvals that the user can choose from.
Filesystem sandboxing defines which files can be read or written. The options for `sandbox_mode` are:
- **read-only**: The sandbox only permits reading files.
- **workspace-write**: The sandbox permits reading files, and editing files in `cwd` and `writable_roots`. Editing files in other directories requires approval.
- **danger-full-access**: No filesystem sandboxing - all commands are permitted.
Network sandboxing defines whether network can be accessed without approval. Options for `network_access` are:
- **restricted**: Requires approval
- **enabled**: No approval needed
Approvals are your mechanism to get user consent to run shell commands without the sandbox. Possible configuration options for `approval_policy` are:
- **untrusted**: The harness will escalate most commands for user approval, apart from a limited allowlist of safe "read" commands.
- **on-failure**: The harness will allow all commands to run in the sandbox (if enabled), and failures will be escalated to the user for approval to run again without the sandbox.
- **on-request**: Commands will be run in the sandbox by default, and you can specify in your tool call if you want to escalate a command to run without sandboxing. (Note that this mode is not always available. If it is, you'll see parameters for it in the `shell` command description.)
- **never**: This is a non-interactive mode where you may NEVER ask the user for approval to run commands. Instead, you must always persist and work around constraints to solve the task for the user. You MUST do your utmost best to finish the task and validate your work before yielding. If this mode is paired with `danger-full-access`, take advantage of it to deliver the best outcome for the user. Further, in this mode, your default testing philosophy is overridden: Even if you don't see local patterns for testing, you may add tests and scripts to validate your work. Just remove them before yielding.
When you are running with `approval_policy == on-request`, and sandboxing enabled, here are scenarios where you'll need to request approval:
- You need to run a command that writes to a directory that requires it (e.g. running tests that write to /var)
- You need to run a GUI app (e.g., open/xdg-open/osascript) to open browsers or files.
- You are running sandboxed and need to run a command that requires network access (e.g. installing packages)
- If you run a command that is important to solving the user's query, but it fails because of sandboxing, rerun the command with approval. ALWAYS proceed to use the `with_escalated_permissions` and `justification` parameters - do not message the user before requesting approval for the command.
- You are about to take a potentially destructive action such as an `rm` or `git reset` that the user did not explicitly ask for
- (for all of these, you should weigh alternative paths that do not require approval)
When `sandbox_mode` is set to read-only, you'll need to request approval for any command that isn't a read.
You will be told what filesystem sandboxing, network sandboxing, and approval mode are active in a developer or user message. If you are not told about this, assume that you are running with workspace-write, network sandboxing enabled, and approval on-failure.
Although they introduce friction to the user because your work is paused until the user responds, you should leverage them when necessary to accomplish important work. If completing the task requires escalated permissions, do not let these settings or the sandbox deter you from attempting to accomplish the user's task unless it is set to "never", in which case never ask for approvals.
When requesting approval to execute a command that will require escalated privileges:
- Provide the `with_escalated_permissions` parameter with the boolean value true
- Include a short, 1 sentence explanation for why you need to enable `with_escalated_permissions` in the justification parameter
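For orientation, the option space above can be modeled as two small enums plus the documented fallback defaults. This is a hedged sketch; the crate's real `AskForApproval` and `SandboxMode` types may spell these differently:

```rust
/// Sketch of the documented `approval_policy` values.
#[derive(Debug, Clone, Copy)]
enum ApprovalPolicy {
    Untrusted,
    OnFailure,
    OnRequest,
    Never,
}

/// Sketch of the documented `sandbox_mode` values.
#[derive(Debug, Clone, Copy)]
enum SandboxMode {
    ReadOnly,
    WorkspaceWrite,
    DangerFullAccess,
}

/// The defaults the prompt tells the agent to assume when no
/// developer/user message states them: workspace-write sandboxing
/// with on-failure approvals (and network sandboxing enabled).
fn assumed_defaults() -> (SandboxMode, ApprovalPolicy) {
    (SandboxMode::WorkspaceWrite, ApprovalPolicy::OnFailure)
}

fn main() {
    let (sandbox, approvals) = assumed_defaults();
    println!("{sandbox:?} + {approvals:?}");
}
```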
## Special user requests
- If the user makes a simple request (such as asking for the time) which you can fulfill by running a terminal command (such as `date`), you should do so.
- If the user asks for a "review", default to a code review mindset: prioritise identifying bugs, risks, behavioural regressions, and missing tests. Findings must be the primary focus of the response - keep summaries or overviews brief and only after enumerating the issues. Present findings first (ordered by severity with file/line references), follow with open questions or assumptions, and offer a change-summary only as a secondary detail. If no findings are discovered, state that explicitly and mention any residual risks or testing gaps.
## Presenting your work and final message
You are producing plain text that will later be styled by the CLI. Follow these rules exactly. Formatting should make results easy to scan, but not feel mechanical. Use judgment to decide how much structure adds value.
- Default: be very concise; friendly coding teammate tone.
- Ask only when needed; suggest ideas; mirror the user's style.
- For substantial work, summarize clearly; follow final-answer formatting.
- Skip heavy formatting for simple confirmations.
- Don't dump large files you've written; reference paths only.
- No "save/copy this file" - User is on the same machine.
- Offer logical next steps (tests, commits, build) briefly; add verify steps if you couldn't do something.
- For code changes:
* Lead with a quick explanation of the change, and then give more details on the context covering where and why a change was made. Do not start this explanation with "summary", just jump right in.
* If there are natural next steps the user may want to take, suggest them at the end of your response. Do not make suggestions if there are no natural next steps.
* When suggesting multiple options, use numeric lists for the suggestions so the user can quickly respond with a single number.
- The user does not see command execution outputs. When asked to show the output of a command (e.g. `git show`), relay the important details in your answer or summarize the key lines so the user understands the result.
### Final answer structure and style guidelines
- Plain text; CLI handles styling. Use structure only when it helps scanability.
- Headers: optional; short Title Case (1-3 words) wrapped in **…**; no blank line before the first bullet; add only if they truly help.
- Bullets: use - ; merge related points; keep to one line when possible; 4-6 per list ordered by importance; keep phrasing consistent.
- Monospace: backticks for commands/paths/env vars/code ids and inline examples; use for literal keyword bullets; never combine with **.
- Code samples or multi-line snippets should be wrapped in fenced code blocks; add a language hint whenever obvious.
- Structure: group related bullets; order sections general → specific → supporting; for subsections, start with a bolded keyword bullet, then items; match complexity to the task.
- Tone: collaborative, concise, factual; present tense, active voice; self-contained; no "above/below"; parallel wording.
- Don'ts: no nested bullets/hierarchies; no ANSI codes; don't cram unrelated keywords; keep keyword lists short—wrap/reformat if long; avoid naming formatting styles in answers.
- Adaptation: code explanations → precise, structured with code refs; simple tasks → lead with outcome; big changes → logical walkthrough + rationale + next actions; casual one-offs → plain sentences, no headers/bullets.
- File References: When referencing files in your response, make sure to include the relevant start line and always follow the below rules:
* Use inline code to make file paths clickable.
* Each reference should have a standalone path, even if it's the same file.
* Accepted: absolute, workspace-relative, a/ or b/ diff prefixes, or bare filename/suffix.
* Line/column (1-based, optional): :line[:column] or #Lline[Ccolumn] (column defaults to 1).
* Do not use URIs like file://, vscode://, or https://.
* Do not provide a range of lines.
* Examples: src/app.ts, src/app.ts:42, b/server/index.js#L10, C:\repo\project\main.rs:12:5
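As a quick sanity check on the reference grammar above, the optional `:line[:column]` and `#Lline[Ccolumn]` suffixes can be matched with a short regex. An illustrative sketch only, not the CLI's actual parser (it uses `regex_lite`, which appears elsewhere in this changeset):

```rust
use regex_lite::Regex;

fn main() {
    // Optional trailing ":line[:column]" or "#Lline[Ccolumn]" suffix;
    // lines and columns are 1-based per the rules above. A bare path
    // with no suffix is also valid.
    let suffix = Regex::new(r"(:\d+(:\d+)?|#L\d+(C\d+)?)$").unwrap();
    for candidate in [
        "src/app.ts",
        "src/app.ts:42",
        "b/server/index.js#L10",
        r"C:\repo\project\main.rs:12:5",
    ] {
        println!("{candidate} -> suffix present: {}", suffix.is_match(candidate));
    }
}
```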


@@ -251,6 +251,16 @@ You are producing plain text that will later be styled by the CLI. Follow these
- Apply to inline examples and to bullet keywords if the keyword itself is a literal file/command.
- Never mix monospace and bold markers; choose one based on whether it's a keyword (`**`) or inline code/path (`` ` ``).
**File References**
When referencing files in your response, make sure to include the relevant start line and always follow the below rules:
* Use inline code to make file paths clickable.
* Each reference should have a standalone path, even if it's the same file.
* Accepted: absolute, workspace-relative, a/ or b/ diff prefixes, or bare filename/suffix.
* Line/column (1-based, optional): :line[:column] or #Lline[Ccolumn] (column defaults to 1).
* Do not use URIs like file://, vscode://, or https://.
* Do not provide a range of lines.
* Examples: src/app.ts, src/app.ts:42, b/server/index.js#L10, C:\repo\project\main.rs:12:5
**Structure**
- Place related bullets together; don't mix unrelated concepts in the same section.


@@ -0,0 +1,87 @@
# Review guidelines:
You are acting as a reviewer for a proposed code change made by another engineer.
Below are some default guidelines for determining whether the original author would appreciate the issue being flagged.
These are not the final word in determining whether an issue is a bug. In many cases, you will encounter other, more specific guidelines. These may be present elsewhere in a developer message, a user message, a file, or even elsewhere in this system message.
Those guidelines should be considered to override these general instructions.
Here are the general guidelines for determining whether something is a bug and should be flagged.
1. It meaningfully impacts the accuracy, performance, security, or maintainability of the code.
2. The bug is discrete and actionable (i.e. not a general issue with the codebase or a combination of multiple issues).
3. Fixing the bug does not demand a level of rigor that is not present in the rest of the codebase (e.g. one doesn't need very detailed comments and input validation in a repository of one-off scripts in personal projects)
4. The bug was introduced in the commit (pre-existing bugs should not be flagged).
5. The author of the original PR would likely fix the issue if they were made aware of it.
6. The bug does not rely on unstated assumptions about the codebase or author's intent.
7. It is not enough to speculate that a change may disrupt another part of the codebase; to be considered a bug, one must identify the other parts of the code that are provably affected.
8. The bug is clearly not just an intentional change by the original author.
When flagging a bug, you will also provide an accompanying comment. Once again, these guidelines are not the final word on how to construct a comment -- defer to any subsequent guidelines that you encounter.
1. The comment should be clear about why the issue is a bug.
2. The comment should appropriately communicate the severity of the issue. It should not claim that an issue is more severe than it actually is.
3. The comment should be brief. The body should be at most 1 paragraph. It should not introduce line breaks within the natural language flow unless it is necessary for the code fragment.
4. The comment should not include any chunks of code longer than 3 lines. Any code chunks should be wrapped in markdown inline code tags or a code block.
5. The comment should clearly and explicitly communicate the scenarios, environments, or inputs that are necessary for the bug to arise. The comment should immediately indicate that the issue's severity depends on these factors.
6. The comment's tone should be matter-of-fact and not accusatory or overly positive. It should read as a helpful AI assistant suggestion without sounding too much like a human reviewer.
7. The comment should be written such that the original author can immediately grasp the idea without close reading.
8. The comment should avoid excessive flattery and comments that are not helpful to the original author. The comment should avoid phrasing like "Great job ...", "Thanks for ...".
Below are some more detailed guidelines that you should apply to this specific review.
HOW MANY FINDINGS TO RETURN:
Output all findings that the original author would fix if they knew about them. If there is no finding that a person would definitely love to see and fix, prefer outputting no findings. Do not stop at the first qualifying finding. Continue until you've listed every qualifying finding.
GUIDELINES:
- Ignore trivial style unless it obscures meaning or violates documented standards.
- Use one comment per distinct issue (or a multi-line range if necessary).
- Use ```suggestion blocks ONLY for concrete replacement code (minimal lines; no commentary inside the block).
- In every ```suggestion block, preserve the exact leading whitespace of the replaced lines (spaces vs tabs, number of spaces).
- Do NOT introduce or remove outer indentation levels unless that is the actual fix.
The comments will be presented in the code review as inline comments. You should avoid providing unnecessary location details in the comment body. Always keep the line range as short as possible for interpreting the issue. Avoid ranges longer than 5-10 lines; instead, choose the most suitable subrange that pinpoints the problem.
At the beginning of the finding title, tag the bug with priority level. For example "[P1] Un-padding slices along wrong tensor dimensions". [P0] Drop everything to fix. Blocking release, operations, or major usage. Only use for universal issues that do not depend on any assumptions about the inputs. · [P1] Urgent. Should be addressed in the next cycle · [P2] Normal. To be fixed eventually · [P3] Low. Nice to have.
Additionally, include a numeric priority field in the JSON output for each finding: set "priority" to 0 for P0, 1 for P1, 2 for P2, or 3 for P3. If a priority cannot be determined, omit the field or use null.
At the end of your findings, output an "overall correctness" verdict of whether or not the patch should be considered "correct".
Correct implies that existing code and tests will not break, and the patch is free of bugs and other blocking issues.
Ignore non-blocking issues such as style, formatting, typos, documentation, and other nits.
FORMATTING GUIDELINES:
The finding description should be one paragraph.
OUTPUT FORMAT:
## Output schema — MUST MATCH *exactly*
```json
{
"findings": [
{
"title": "<≤ 80 chars, imperative>",
"body": "<valid Markdown explaining *why* this is a problem; cite files/lines/functions>",
"confidence_score": <float 0.0-1.0>,
"priority": <int 0-3, optional>,
"code_location": {
"absolute_file_path": "<file path>",
"line_range": {"start": <int>, "end": <int>}
}
}
],
"overall_correctness": "patch is correct" | "patch is incorrect",
"overall_explanation": "<1-3 sentence explanation justifying the overall_correctness verdict>",
"overall_confidence_score": <float 0.0-1.0>
}
```
* **Do not** wrap the JSON in markdown fences or extra prose.
* The code_location field is required and must include absolute_file_path and line_range.
* Line ranges must be as short as possible for interpreting the issue (avoid ranges over 5-10 lines; pick the most suitable subrange).
* The code_location should overlap with the diff.
* Do not generate a PR fix.
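A hedged sketch of Rust types that would deserialize output matching this schema; the field names follow the schema above, but this is not claimed to be the reviewer's actual implementation:

```rust
use serde::Deserialize;

#[derive(Debug, Deserialize)]
struct ReviewOutput {
    findings: Vec<Finding>,
    /// "patch is correct" | "patch is incorrect"
    overall_correctness: String,
    overall_explanation: String,
    overall_confidence_score: f64,
}

#[derive(Debug, Deserialize)]
struct Finding {
    title: String,
    body: String,
    confidence_score: f64,
    /// Omitted or null when a priority cannot be determined.
    #[serde(default)]
    priority: Option<u8>,
    code_location: CodeLocation,
}

#[derive(Debug, Deserialize)]
struct CodeLocation {
    absolute_file_path: String,
    line_range: LineRange,
}

#[derive(Debug, Deserialize)]
struct LineRange {
    start: u32,
    end: u32,
}

fn main() {
    let raw = r#"{
        "findings": [],
        "overall_correctness": "patch is correct",
        "overall_explanation": "No blocking issues found.",
        "overall_confidence_score": 0.9
    }"#;
    let parsed: ReviewOutput = serde_json::from_str(raw).expect("schema-shaped JSON");
    println!("{} findings", parsed.findings.len());
}
```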


@@ -408,6 +408,32 @@ mod tests {
assert_eq!(auth_dot_json, same_auth_dot_json);
}
#[test]
fn login_with_api_key_overwrites_existing_auth_json() {
let dir = tempdir().unwrap();
let auth_path = dir.path().join("auth.json");
let stale_auth = json!({
"OPENAI_API_KEY": "sk-old",
"tokens": {
"id_token": "stale.header.payload",
"access_token": "stale-access",
"refresh_token": "stale-refresh",
"account_id": "stale-acc"
}
});
std::fs::write(
&auth_path,
serde_json::to_string_pretty(&stale_auth).unwrap(),
)
.unwrap();
super::login_with_api_key(dir.path(), "sk-new").expect("login_with_api_key should succeed");
let auth = super::try_read_auth_json(&auth_path).expect("auth.json should parse");
assert_eq!(auth.openai_api_key.as_deref(), Some("sk-new"));
assert!(auth.tokens.is_none(), "tokens should be cleared");
}
#[tokio::test]
async fn pro_account_with_no_api_key_uses_chatgpt_auth() {
let codex_home = tempdir().unwrap();


@@ -1,3 +1,4 @@
use tree_sitter::Node;
use tree_sitter::Parser;
use tree_sitter::Tree;
use tree_sitter_bash::LANGUAGE as BASH;
@@ -73,6 +74,9 @@ pub fn try_parse_word_only_commands_sequence(tree: &Tree, src: &str) -> Option<V
}
}
// Walk uses a stack (LIFO), so re-sort by position to restore source order.
command_nodes.sort_by_key(Node::start_byte);
let mut commands = Vec::new();
for node in command_nodes {
if let Some(words) = parse_plain_command_from_node(node, src) {
@@ -150,10 +154,10 @@ mod tests {
let src = "ls && pwd; echo 'hi there' | wc -l";
let cmds = parse_seq(src).unwrap();
let expected: Vec<Vec<String>> = vec![
vec!["wc".to_string(), "-l".to_string()],
vec!["echo".to_string(), "hi there".to_string()],
vec!["pwd".to_string()],
vec!["ls".to_string()],
vec!["pwd".to_string()],
vec!["echo".to_string(), "hi there".to_string()],
vec!["wc".to_string(), "-l".to_string()],
];
assert_eq!(cmds, expected);
}
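The flipped expectations in this test follow directly from the new sort: a stack-based walk pops nodes in LIFO order, so without re-sorting the commands come out back-to-front. A toy illustration of the same idea, with `(start_byte, text)` tuples standing in for tree-sitter nodes:

```rust
fn main() {
    // Byte offsets of each command in "ls && pwd; echo 'hi there' | wc -l".
    let mut stack = vec![(0, "ls"), (6, "pwd"), (11, "echo 'hi there'"), (29, "wc -l")];
    let mut visited = Vec::new();
    // A stack-based walk yields the last-pushed node first...
    while let Some(node) = stack.pop() {
        visited.push(node);
    }
    assert_eq!(visited[0].1, "wc -l"); // reversed relative to the source
    // ...so re-sorting by start byte restores source order.
    visited.sort_by_key(|node| node.0);
    let in_order: Vec<&str> = visited.iter().map(|node| node.1).collect();
    assert_eq!(in_order, ["ls", "pwd", "echo 'hi there'", "wc -l"]);
}
```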


@@ -35,6 +35,12 @@ pub(crate) async fn stream_chat_completions(
client: &reqwest::Client,
provider: &ModelProviderInfo,
) -> Result<ResponseStream> {
if prompt.output_schema.is_some() {
return Err(CodexErr::UnsupportedOperation(
"output_schema is not supported for Chat Completions API".to_string(),
));
}
// Build messages array
let mut messages = Vec::<serde_json::Value>::new();
@@ -462,7 +468,7 @@ async fn process_chat_sse<S>(
if let Some(reasoning_val) = choice.get("delta").and_then(|d| d.get("reasoning")) {
let mut maybe_text = reasoning_val
.as_str()
.map(|s| s.to_string())
.map(str::to_string)
.filter(|s| !s.is_empty());
if maybe_text.is_none() && reasoning_val.is_object() {
@@ -716,6 +722,9 @@ where
// Not an assistant message; forward immediately.
return Poll::Ready(Some(Ok(ResponseEvent::OutputItemDone(item))));
}
Poll::Ready(Some(Ok(ResponseEvent::RateLimits(snapshot)))) => {
return Poll::Ready(Some(Ok(ResponseEvent::RateLimits(snapshot))));
}
Poll::Ready(Some(Ok(ResponseEvent::Completed {
response_id,
token_usage,


@@ -4,6 +4,7 @@ use std::sync::OnceLock;
use std::time::Duration;
use crate::AuthManager;
use crate::auth::CodexAuth;
use bytes::Bytes;
use codex_protocol::mcp_protocol::AuthMode;
use codex_protocol::mcp_protocol::ConversationId;
@@ -11,6 +12,7 @@ use eventsource_stream::Eventsource;
use futures::prelude::*;
use regex_lite::Regex;
use reqwest::StatusCode;
use reqwest::header::HeaderMap;
use serde::Deserialize;
use serde::Serialize;
use serde_json::Value;
@@ -40,6 +42,8 @@ use crate::model_provider_info::ModelProviderInfo;
use crate::model_provider_info::WireApi;
use crate::openai_model_info::get_model_info;
use crate::openai_tools::create_tools_json_for_responses_api;
use crate::protocol::RateLimitSnapshot;
use crate::protocol::RateLimitWindow;
use crate::protocol::TokenUsage;
use crate::token_data::PlanType;
use crate::util::backoff;
@@ -72,7 +76,7 @@ pub struct ModelClient {
client: reqwest::Client,
provider: ModelProviderInfo,
conversation_id: ConversationId,
effort: ReasoningEffortConfig,
effort: Option<ReasoningEffortConfig>,
summary: ReasoningSummaryConfig,
}
@@ -81,7 +85,7 @@ impl ModelClient {
config: Arc<Config>,
auth_manager: Option<Arc<AuthManager>>,
provider: ModelProviderInfo,
effort: ReasoningEffortConfig,
effort: Option<ReasoningEffortConfig>,
summary: ReasoningSummaryConfig,
conversation_id: ConversationId,
) -> Self {
@@ -104,6 +108,12 @@ impl ModelClient {
.or_else(|| get_model_info(&self.config.model_family).map(|info| info.context_window))
}
pub fn get_auto_compact_token_limit(&self) -> Option<i64> {
self.config.model_auto_compact_token_limit.or_else(|| {
get_model_info(&self.config.model_family).and_then(|info| info.auto_compact_token_limit)
})
}
/// Dispatches to either the Responses or Chat implementation depending on
/// the provider config. Public callers always invoke `stream()`; the
/// specialised helpers are private to avoid accidental misuse.
@@ -176,7 +186,7 @@ impl ModelClient {
// Only include `text.verbosity` for GPT-5 family models
let text = if self.config.model_family.family == "gpt-5" {
create_text_param_for_request(self.config.model_verbosity)
create_text_param_for_request(self.config.model_verbosity, &prompt.output_schema)
} else {
if self.config.model_verbosity.is_some() {
warn!(
@@ -187,6 +197,15 @@ impl ModelClient {
None
};
// In general, we want to explicitly send `store: false` when using the Responses API,
// but in practice, the Azure Responses API rejects `store: false`:
//
// - If store = false and an id is sent, an error is thrown that the ID is not found
// - If store = false and an id is not sent, an error is thrown that an ID is required
//
// For Azure, we send `store: true` and preserve reasoning item IDs.
let azure_workaround = self.provider.is_azure_responses_endpoint();
let payload = ResponsesApiRequest {
model: &self.config.model,
instructions: &full_instructions,
@@ -195,13 +214,19 @@ impl ModelClient {
tool_choice: "auto",
parallel_tool_calls: false,
reasoning,
store: false,
store: azure_workaround,
stream: true,
include,
prompt_cache_key: Some(self.conversation_id.to_string()),
text,
};
let mut payload_json = serde_json::to_value(&payload)?;
if azure_workaround {
attach_item_ids(&mut payload_json, &input_with_instructions);
}
let payload_body = serde_json::to_string(&payload_json)?;
let mut attempt = 0;
let max_retries = self.provider.request_max_retries();
@@ -214,7 +239,7 @@ impl ModelClient {
trace!(
"POST to {}: {}",
self.provider.get_full_url(&auth),
serde_json::to_string(&payload)?
payload_body.as_str()
);
let mut req_builder = self
@@ -228,7 +253,7 @@ impl ModelClient {
.header("conversation_id", self.conversation_id.to_string())
.header("session_id", self.conversation_id.to_string())
.header(reqwest::header::ACCEPT, "text/event-stream")
.json(&payload);
.json(&payload_json);
if let Some(auth) = auth.as_ref()
&& auth.mode == AuthMode::ChatGPT
@@ -253,6 +278,15 @@ impl ModelClient {
Ok(resp) if resp.status().is_success() => {
let (tx_event, rx_event) = mpsc::channel::<Result<ResponseEvent>>(1600);
if let Some(snapshot) = parse_rate_limit_snapshot(resp.headers())
&& tx_event
.send(Ok(ResponseEvent::RateLimits(snapshot)))
.await
.is_err()
{
debug!("receiver dropped rate limit snapshot event");
}
// spawn task to process SSE
let stream = resp.bytes_stream().map_err(CodexErr::Reqwest);
tokio::spawn(process_sse(
@@ -297,6 +331,7 @@ impl ModelClient {
}
if status == StatusCode::TOO_MANY_REQUESTS {
let rate_limit_snapshot = parse_rate_limit_snapshot(res.headers());
let body = res.json::<ErrorResponse>().await.ok();
if let Some(ErrorResponse { error }) = body {
if error.r#type.as_deref() == Some("usage_limit_reached") {
@@ -305,11 +340,12 @@ impl ModelClient {
// token.
let plan_type = error
.plan_type
.or_else(|| auth.as_ref().and_then(|a| a.get_plan_type()));
.or_else(|| auth.as_ref().and_then(CodexAuth::get_plan_type));
let resets_in_seconds = error.resets_in_seconds;
return Err(CodexErr::UsageLimitReached(UsageLimitReachedError {
plan_type,
resets_in_seconds,
rate_limits: rate_limit_snapshot,
}));
} else if error.r#type.as_deref() == Some("usage_not_included") {
return Err(CodexErr::UsageNotIncluded);
@@ -356,7 +392,7 @@ impl ModelClient {
}
/// Returns the current reasoning effort setting.
pub fn get_reasoning_effort(&self) -> ReasoningEffortConfig {
pub fn get_reasoning_effort(&self) -> Option<ReasoningEffortConfig> {
self.effort
}
@@ -425,6 +461,85 @@ struct ResponseCompletedOutputTokensDetails {
reasoning_tokens: u64,
}
fn attach_item_ids(payload_json: &mut Value, original_items: &[ResponseItem]) {
let Some(input_value) = payload_json.get_mut("input") else {
return;
};
let serde_json::Value::Array(items) = input_value else {
return;
};
for (value, item) in items.iter_mut().zip(original_items.iter()) {
if let ResponseItem::Reasoning { id, .. }
| ResponseItem::Message { id: Some(id), .. }
| ResponseItem::WebSearchCall { id: Some(id), .. }
| ResponseItem::FunctionCall { id: Some(id), .. }
| ResponseItem::LocalShellCall { id: Some(id), .. }
| ResponseItem::CustomToolCall { id: Some(id), .. } = item
{
if id.is_empty() {
continue;
}
if let Some(obj) = value.as_object_mut() {
obj.insert("id".to_string(), Value::String(id.clone()));
}
}
}
}
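To picture the Azure workaround in action: `attach_item_ids` zips the serialized `input` array with the original response items and re-inserts any non-empty `id`. A standalone sketch of that zip-and-insert shape, using plain `serde_json` values and hypothetical ids:

```rust
use serde_json::{Value, json};

fn main() {
    let mut payload = json!({
        "input": [
            { "type": "reasoning" },
            { "type": "message", "role": "assistant" }
        ]
    });
    // Ids carried on the original items; empty ids would be skipped.
    let ids = ["rs_123", "msg_456"];
    if let Some(Value::Array(items)) = payload.get_mut("input") {
        for (value, id) in items.iter_mut().zip(ids) {
            if id.is_empty() {
                continue;
            }
            if let Some(obj) = value.as_object_mut() {
                obj.insert("id".to_string(), Value::String(id.to_string()));
            }
        }
    }
    println!("{payload}");
}
```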
fn parse_rate_limit_snapshot(headers: &HeaderMap) -> Option<RateLimitSnapshot> {
let primary = parse_rate_limit_window(
headers,
"x-codex-primary-used-percent",
"x-codex-primary-window-minutes",
"x-codex-primary-reset-after-seconds",
);
let secondary = parse_rate_limit_window(
headers,
"x-codex-secondary-used-percent",
"x-codex-secondary-window-minutes",
"x-codex-secondary-reset-after-seconds",
);
if primary.is_none() && secondary.is_none() {
return None;
}
Some(RateLimitSnapshot { primary, secondary })
}
fn parse_rate_limit_window(
headers: &HeaderMap,
used_percent_header: &str,
window_minutes_header: &str,
resets_header: &str,
) -> Option<RateLimitWindow> {
let used_percent = parse_header_f64(headers, used_percent_header)?;
Some(RateLimitWindow {
used_percent,
window_minutes: parse_header_u64(headers, window_minutes_header),
resets_in_seconds: parse_header_u64(headers, resets_header),
})
}
fn parse_header_f64(headers: &HeaderMap, name: &str) -> Option<f64> {
parse_header_str(headers, name)?
.parse::<f64>()
.ok()
.filter(|v| v.is_finite())
}
fn parse_header_u64(headers: &HeaderMap, name: &str) -> Option<u64> {
parse_header_str(headers, name)?.parse::<u64>().ok()
}
fn parse_header_str<'a>(headers: &'a HeaderMap, name: &str) -> Option<&'a str> {
headers.get(name)?.to_str().ok()
}
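A usage sketch for the header plumbing above, re-inlining the `f64` helper so it runs standalone; the header names match the diff, the value is illustrative:

```rust
use reqwest::header::{HeaderMap, HeaderValue};

fn parse_header_f64(headers: &HeaderMap, name: &str) -> Option<f64> {
    headers
        .get(name)?
        .to_str()
        .ok()?
        .parse::<f64>()
        .ok()
        .filter(|v| v.is_finite())
}

fn main() {
    let mut headers = HeaderMap::new();
    headers.insert(
        "x-codex-primary-used-percent",
        HeaderValue::from_static("42.5"),
    );
    // A window is only reported when its used-percent header parses
    // cleanly; missing or malformed headers simply yield None.
    assert_eq!(
        parse_header_f64(&headers, "x-codex-primary-used-percent"),
        Some(42.5)
    );
    assert_eq!(
        parse_header_f64(&headers, "x-codex-secondary-used-percent"),
        None
    );
    println!("ok");
}
```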
async fn process_sse<S>(
stream: S,
tx_event: mpsc::Sender<Result<ResponseEvent>>,


@@ -1,6 +1,7 @@
use crate::error::Result;
use crate::model_family::ModelFamily;
use crate::openai_tools::OpenAiTool;
use crate::protocol::RateLimitSnapshot;
use crate::protocol::TokenUsage;
use codex_apply_patch::APPLY_PATCH_TOOL_INSTRUCTIONS;
use codex_protocol::config_types::ReasoningEffort as ReasoningEffortConfig;
@@ -9,15 +10,16 @@ use codex_protocol::config_types::Verbosity as VerbosityConfig;
use codex_protocol::models::ResponseItem;
use futures::Stream;
use serde::Serialize;
use serde_json::Value;
use std::borrow::Cow;
use std::ops::Deref;
use std::pin::Pin;
use std::task::Context;
use std::task::Poll;
use tokio::sync::mpsc;
/// The `instructions` field in the payload sent to a model should always start
/// with this content.
const BASE_INSTRUCTIONS: &str = include_str!("../prompt.md");
/// Review thread system prompt. Edit `core/src/review_prompt.md` to customize.
pub const REVIEW_PROMPT: &str = include_str!("../review_prompt.md");
/// API request payload for a single model turn
#[derive(Default, Debug, Clone)]
@@ -31,18 +33,20 @@ pub struct Prompt {
/// Optional override for the built-in BASE_INSTRUCTIONS.
pub base_instructions_override: Option<String>,
/// Optional output schema for the model's response.
pub output_schema: Option<Value>,
}
impl Prompt {
pub(crate) fn get_full_instructions(&self, model: &ModelFamily) -> Cow<'_, str> {
pub(crate) fn get_full_instructions<'a>(&'a self, model: &'a ModelFamily) -> Cow<'a, str> {
let base = self
.base_instructions_override
.as_deref()
.unwrap_or(BASE_INSTRUCTIONS);
let mut sections: Vec<&str> = vec![base];
// When there are no custom instructions, add apply_patch_tool_instructions if either:
// - the model needs special instructions (4.1), or
.unwrap_or(model.base_instructions.deref());
// When there are no custom instructions, add apply_patch_tool_instructions if:
// - the model needs special instructions (4.1)
// AND
// - there is no apply_patch tool present
let is_apply_patch_tool_present = self.tools.iter().any(|tool| match tool {
OpenAiTool::Function(f) => f.name == "apply_patch",
@@ -50,11 +54,13 @@ impl Prompt {
_ => false,
});
if self.base_instructions_override.is_none()
&& (model.needs_special_apply_patch_instructions || !is_apply_patch_tool_present)
&& model.needs_special_apply_patch_instructions
&& !is_apply_patch_tool_present
{
sections.push(APPLY_PATCH_TOOL_INSTRUCTIONS);
Cow::Owned(format!("{base}\n{APPLY_PATCH_TOOL_INSTRUCTIONS}"))
} else {
Cow::Borrowed(base)
}
Cow::Owned(sections.join("\n"))
}
pub(crate) fn get_formatted_input(&self) -> Vec<ResponseItem> {
@@ -77,22 +83,42 @@ pub enum ResponseEvent {
WebSearchCallBegin {
call_id: String,
},
RateLimits(RateLimitSnapshot),
}
#[derive(Debug, Serialize)]
pub(crate) struct Reasoning {
pub(crate) effort: ReasoningEffortConfig,
pub(crate) summary: ReasoningSummaryConfig,
#[serde(skip_serializing_if = "Option::is_none")]
pub(crate) effort: Option<ReasoningEffortConfig>,
#[serde(skip_serializing_if = "Option::is_none")]
pub(crate) summary: Option<ReasoningSummaryConfig>,
}
#[derive(Debug, Serialize, Default, Clone)]
#[serde(rename_all = "snake_case")]
pub(crate) enum TextFormatType {
#[default]
JsonSchema,
}
#[derive(Debug, Serialize, Default, Clone)]
pub(crate) struct TextFormat {
pub(crate) r#type: TextFormatType,
pub(crate) strict: bool,
pub(crate) schema: Value,
pub(crate) name: String,
}
/// Controls under the `text` field in the Responses API for GPT-5.
#[derive(Debug, Serialize, Default, Clone, Copy)]
#[derive(Debug, Serialize, Default, Clone)]
pub(crate) struct TextControls {
#[serde(skip_serializing_if = "Option::is_none")]
pub(crate) verbosity: Option<OpenAiVerbosity>,
#[serde(skip_serializing_if = "Option::is_none")]
pub(crate) format: Option<TextFormat>,
}
#[derive(Debug, Serialize, Default, Clone, Copy)]
#[derive(Debug, Serialize, Default, Clone)]
#[serde(rename_all = "lowercase")]
pub(crate) enum OpenAiVerbosity {
Low,
@@ -136,21 +162,35 @@ pub(crate) struct ResponsesApiRequest<'a> {
pub(crate) fn create_reasoning_param_for_request(
model_family: &ModelFamily,
effort: ReasoningEffortConfig,
effort: Option<ReasoningEffortConfig>,
summary: ReasoningSummaryConfig,
) -> Option<Reasoning> {
if model_family.supports_reasoning_summaries {
Some(Reasoning { effort, summary })
} else {
None
if !model_family.supports_reasoning_summaries {
return None;
}
Some(Reasoning {
effort,
summary: Some(summary),
})
}
pub(crate) fn create_text_param_for_request(
verbosity: Option<VerbosityConfig>,
output_schema: &Option<Value>,
) -> Option<TextControls> {
verbosity.map(|v| TextControls {
verbosity: Some(v.into()),
if verbosity.is_none() && output_schema.is_none() {
return None;
}
Some(TextControls {
verbosity: verbosity.map(std::convert::Into::into),
format: output_schema.as_ref().map(|schema| TextFormat {
r#type: TextFormatType::JsonSchema,
strict: true,
schema: schema.clone(),
name: "codex_output_schema".to_string(),
}),
})
}
@@ -169,18 +209,64 @@ impl Stream for ResponseStream {
#[cfg(test)]
mod tests {
use crate::model_family::find_family_for_model;
use pretty_assertions::assert_eq;
use super::*;
struct InstructionsTestCase {
pub slug: &'static str,
pub expects_apply_patch_instructions: bool,
}
#[test]
fn get_full_instructions_no_user_content() {
let prompt = Prompt {
..Default::default()
};
let expected = format!("{BASE_INSTRUCTIONS}\n{APPLY_PATCH_TOOL_INSTRUCTIONS}");
let model_family = find_family_for_model("gpt-4.1").expect("known model slug");
let full = prompt.get_full_instructions(&model_family);
assert_eq!(full, expected);
let test_cases = vec![
InstructionsTestCase {
slug: "gpt-3.5",
expects_apply_patch_instructions: true,
},
InstructionsTestCase {
slug: "gpt-4.1",
expects_apply_patch_instructions: true,
},
InstructionsTestCase {
slug: "gpt-4o",
expects_apply_patch_instructions: true,
},
InstructionsTestCase {
slug: "gpt-5",
expects_apply_patch_instructions: true,
},
InstructionsTestCase {
slug: "codex-mini-latest",
expects_apply_patch_instructions: true,
},
InstructionsTestCase {
slug: "gpt-oss:120b",
expects_apply_patch_instructions: false,
},
InstructionsTestCase {
slug: "gpt-5-codex",
expects_apply_patch_instructions: false,
},
];
for test_case in test_cases {
let model_family = find_family_for_model(test_case.slug).expect("known model slug");
let expected = if test_case.expects_apply_patch_instructions {
format!(
"{}\n{}",
model_family.clone().base_instructions,
APPLY_PATCH_TOOL_INSTRUCTIONS
)
} else {
model_family.clone().base_instructions
};
let full = prompt.get_full_instructions(&model_family);
assert_eq!(full, expected);
}
}
#[test]
@@ -201,6 +287,7 @@ mod tests {
prompt_cache_key: None,
text: Some(TextControls {
verbosity: Some(OpenAiVerbosity::Low),
format: None,
}),
};
@@ -213,6 +300,52 @@ mod tests {
);
}
#[test]
fn serializes_text_schema_with_strict_format() {
let input: Vec<ResponseItem> = vec![];
let tools: Vec<serde_json::Value> = vec![];
let schema = serde_json::json!({
"type": "object",
"properties": {
"answer": {"type": "string"}
},
"required": ["answer"],
});
let text_controls =
create_text_param_for_request(None, &Some(schema.clone())).expect("text controls");
let req = ResponsesApiRequest {
model: "gpt-5",
instructions: "i",
input: &input,
tools: &tools,
tool_choice: "auto",
parallel_tool_calls: false,
reasoning: None,
store: false,
stream: true,
include: vec![],
prompt_cache_key: None,
text: Some(text_controls),
};
let v = serde_json::to_value(&req).expect("json");
let text = v.get("text").expect("text field");
assert!(text.get("verbosity").is_none());
let format = text.get("format").expect("format field");
assert_eq!(
format.get("name"),
Some(&serde_json::Value::String("codex_output_schema".into()))
);
assert_eq!(
format.get("type"),
Some(&serde_json::Value::String("json_schema".into()))
);
assert_eq!(format.get("strict"), Some(&serde_json::Value::Bool(true)));
assert_eq!(format.get("schema"), Some(&schema));
}
#[test]
fn omits_text_when_not_set() {
let input: Vec<ResponseItem> = vec![];

File diff suppressed because it is too large.


@@ -0,0 +1,425 @@
use std::sync::Arc;
use super::AgentTask;
use super::Session;
use super::TurnContext;
use super::get_last_assistant_message_from_turn;
use crate::Prompt;
use crate::client_common::ResponseEvent;
use crate::error::CodexErr;
use crate::error::Result as CodexResult;
use crate::protocol::AgentMessageEvent;
use crate::protocol::CompactedItem;
use crate::protocol::ErrorEvent;
use crate::protocol::Event;
use crate::protocol::EventMsg;
use crate::protocol::InputItem;
use crate::protocol::InputMessageKind;
use crate::protocol::TaskCompleteEvent;
use crate::protocol::TaskStartedEvent;
use crate::protocol::TurnContextItem;
use crate::truncate::truncate_middle;
use crate::util::backoff;
use askama::Template;
use codex_protocol::models::ContentItem;
use codex_protocol::models::ResponseInputItem;
use codex_protocol::models::ResponseItem;
use codex_protocol::protocol::RolloutItem;
use futures::prelude::*;
pub const SUMMARIZATION_PROMPT: &str = include_str!("../../templates/compact/prompt.md");
const COMPACT_USER_MESSAGE_MAX_TOKENS: usize = 20_000;
#[derive(Template)]
#[template(path = "compact/history_bridge.md", escape = "none")]
struct HistoryBridgeTemplate<'a> {
user_messages_text: &'a str,
summary_text: &'a str,
}
pub(super) async fn spawn_compact_task(
sess: Arc<Session>,
turn_context: Arc<TurnContext>,
sub_id: String,
input: Vec<InputItem>,
) {
let task = AgentTask::compact(sess.clone(), turn_context, sub_id, input);
sess.set_task(task).await;
}
pub(super) async fn run_inline_auto_compact_task(
sess: Arc<Session>,
turn_context: Arc<TurnContext>,
) {
let sub_id = sess.next_internal_sub_id();
let input = vec![InputItem::Text {
text: SUMMARIZATION_PROMPT.to_string(),
}];
run_compact_task_inner(sess, turn_context, sub_id, input, None, false).await;
}
pub(super) async fn run_compact_task(
sess: Arc<Session>,
turn_context: Arc<TurnContext>,
sub_id: String,
input: Vec<InputItem>,
) {
let start_event = Event {
id: sub_id.clone(),
msg: EventMsg::TaskStarted(TaskStartedEvent {
model_context_window: turn_context.client.get_model_context_window(),
}),
};
sess.send_event(start_event).await;
run_compact_task_inner(
sess.clone(),
turn_context,
sub_id.clone(),
input,
None,
true,
)
.await;
let event = Event {
id: sub_id,
msg: EventMsg::TaskComplete(TaskCompleteEvent {
last_agent_message: None,
}),
};
sess.send_event(event).await;
}
async fn run_compact_task_inner(
sess: Arc<Session>,
turn_context: Arc<TurnContext>,
sub_id: String,
input: Vec<InputItem>,
instructions_override: Option<String>,
remove_task_on_completion: bool,
) {
let initial_input_for_turn: ResponseInputItem = ResponseInputItem::from(input);
let turn_input = sess
.turn_input_with_history(vec![initial_input_for_turn.clone().into()])
.await;
let prompt = Prompt {
input: turn_input,
tools: Vec::new(),
base_instructions_override: instructions_override,
output_schema: None,
};
let max_retries = turn_context.client.get_provider().stream_max_retries();
let mut retries = 0;
let rollout_item = RolloutItem::TurnContext(TurnContextItem {
cwd: turn_context.cwd.clone(),
approval_policy: turn_context.approval_policy,
sandbox_policy: turn_context.sandbox_policy.clone(),
model: turn_context.client.get_model(),
effort: turn_context.client.get_reasoning_effort(),
summary: turn_context.client.get_reasoning_summary(),
});
sess.persist_rollout_items(&[rollout_item]).await;
loop {
let attempt_result = drain_to_completed(&sess, turn_context.as_ref(), &prompt).await;
match attempt_result {
Ok(()) => {
break;
}
Err(CodexErr::Interrupted) => {
return;
}
Err(e) => {
if retries < max_retries {
retries += 1;
let delay = backoff(retries);
sess.notify_stream_error(
&sub_id,
format!(
"stream error: {e}; retrying {retries}/{max_retries} in {delay:?}"
),
)
.await;
tokio::time::sleep(delay).await;
continue;
} else {
let event = Event {
id: sub_id.clone(),
msg: EventMsg::Error(ErrorEvent {
message: e.to_string(),
}),
};
sess.send_event(event).await;
return;
}
}
}
}
if remove_task_on_completion {
sess.remove_task(&sub_id).await;
}
let history_snapshot = {
let state = sess.state.lock().await;
state.history.contents()
};
let summary_text = get_last_assistant_message_from_turn(&history_snapshot).unwrap_or_default();
let user_messages = collect_user_messages(&history_snapshot);
let initial_context = sess.build_initial_context(turn_context.as_ref());
let new_history = build_compacted_history(initial_context, &user_messages, &summary_text);
{
let mut state = sess.state.lock().await;
state.history.replace(new_history);
}
let rollout_item = RolloutItem::Compacted(CompactedItem {
message: summary_text.clone(),
});
sess.persist_rollout_items(&[rollout_item]).await;
let event = Event {
id: sub_id.clone(),
msg: EventMsg::AgentMessage(AgentMessageEvent {
message: "Compact task completed".to_string(),
}),
};
sess.send_event(event).await;
}
pub fn content_items_to_text(content: &[ContentItem]) -> Option<String> {
let mut pieces = Vec::new();
for item in content {
match item {
ContentItem::InputText { text } | ContentItem::OutputText { text } => {
if !text.is_empty() {
pieces.push(text.as_str());
}
}
ContentItem::InputImage { .. } => {}
}
}
if pieces.is_empty() {
None
} else {
Some(pieces.join("\n"))
}
}
pub(crate) fn collect_user_messages(items: &[ResponseItem]) -> Vec<String> {
items
.iter()
.filter_map(|item| match item {
ResponseItem::Message { role, content, .. } if role == "user" => {
content_items_to_text(content)
}
_ => None,
})
.filter(|text| !is_session_prefix_message(text))
.collect()
}
pub fn is_session_prefix_message(text: &str) -> bool {
matches!(
InputMessageKind::from(("user", text)),
InputMessageKind::UserInstructions | InputMessageKind::EnvironmentContext
)
}
pub(crate) fn build_compacted_history(
initial_context: Vec<ResponseItem>,
user_messages: &[String],
summary_text: &str,
) -> Vec<ResponseItem> {
let mut history = initial_context;
let mut user_messages_text = if user_messages.is_empty() {
"(none)".to_string()
} else {
user_messages.join("\n\n")
};
// Truncate the concatenated prior user messages so the bridge message
// stays well under the context window (approx. 4 bytes/token).
let max_bytes = COMPACT_USER_MESSAGE_MAX_TOKENS * 4;
if user_messages_text.len() > max_bytes {
user_messages_text = truncate_middle(&user_messages_text, max_bytes).0;
}
let summary_text = if summary_text.is_empty() {
"(no summary available)".to_string()
} else {
summary_text.to_string()
};
let Ok(bridge) = HistoryBridgeTemplate {
user_messages_text: &user_messages_text,
summary_text: &summary_text,
}
.render() else {
return vec![];
};
history.push(ResponseItem::Message {
id: None,
role: "user".to_string(),
content: vec![ContentItem::InputText { text: bridge }],
});
history
}
async fn drain_to_completed(
sess: &Session,
turn_context: &TurnContext,
prompt: &Prompt,
) -> CodexResult<()> {
let mut stream = turn_context.client.clone().stream(prompt).await?;
loop {
let maybe_event = stream.next().await;
let Some(event) = maybe_event else {
return Err(CodexErr::Stream(
"stream closed before response.completed".into(),
None,
));
};
match event {
Ok(ResponseEvent::OutputItemDone(item)) => {
let mut state = sess.state.lock().await;
state.history.record_items(std::slice::from_ref(&item));
}
Ok(ResponseEvent::Completed { .. }) => {
return Ok(());
}
Ok(_) => continue,
Err(e) => return Err(e),
}
}
}
#[cfg(test)]
mod tests {
use super::*;
use pretty_assertions::assert_eq;
#[test]
fn content_items_to_text_joins_non_empty_segments() {
let items = vec![
ContentItem::InputText {
text: "hello".to_string(),
},
ContentItem::OutputText {
text: String::new(),
},
ContentItem::OutputText {
text: "world".to_string(),
},
];
let joined = content_items_to_text(&items);
assert_eq!(Some("hello\nworld".to_string()), joined);
}
#[test]
fn content_items_to_text_ignores_image_only_content() {
let items = vec![ContentItem::InputImage {
image_url: "file://image.png".to_string(),
}];
let joined = content_items_to_text(&items);
assert_eq!(None, joined);
}
#[test]
fn collect_user_messages_extracts_user_text_only() {
let items = vec![
ResponseItem::Message {
id: Some("assistant".to_string()),
role: "assistant".to_string(),
content: vec![ContentItem::OutputText {
text: "ignored".to_string(),
}],
},
ResponseItem::Message {
id: Some("user".to_string()),
role: "user".to_string(),
content: vec![
ContentItem::InputText {
text: "first".to_string(),
},
ContentItem::OutputText {
text: "second".to_string(),
},
],
},
ResponseItem::Other,
];
let collected = collect_user_messages(&items);
assert_eq!(vec!["first\nsecond".to_string()], collected);
}
#[test]
fn collect_user_messages_filters_session_prefix_entries() {
let items = vec![
ResponseItem::Message {
id: None,
role: "user".to_string(),
content: vec![ContentItem::InputText {
text: "<user_instructions>do things</user_instructions>".to_string(),
}],
},
ResponseItem::Message {
id: None,
role: "user".to_string(),
content: vec![ContentItem::InputText {
text: "<ENVIRONMENT_CONTEXT>cwd=/tmp</ENVIRONMENT_CONTEXT>".to_string(),
}],
},
ResponseItem::Message {
id: None,
role: "user".to_string(),
content: vec![ContentItem::InputText {
text: "real user message".to_string(),
}],
},
];
let collected = collect_user_messages(&items);
assert_eq!(vec!["real user message".to_string()], collected);
}
#[test]
fn build_compacted_history_truncates_overlong_user_messages() {
// Prepare a very large prior user message so the aggregated
// `user_messages_text` exceeds the truncation threshold used by
// `build_compacted_history` (80k bytes).
let big = "X".repeat(200_000);
let history = build_compacted_history(Vec::new(), std::slice::from_ref(&big), "SUMMARY");
// Expect exactly one bridge message added to history (plus any initial context we provided, which is none).
assert_eq!(history.len(), 1);
// Extract the text content of the bridge message.
let bridge_text = match &history[0] {
ResponseItem::Message { role, content, .. } if role == "user" => {
content_items_to_text(content).unwrap_or_default()
}
other => panic!("unexpected item in history: {other:?}"),
};
// The bridge should contain the truncation marker and not the full original payload.
assert!(
bridge_text.contains("tokens truncated"),
"expected truncation marker in bridge message"
);
assert!(
!bridge_text.contains(&big),
"bridge should not include the full oversized user text"
);
assert!(
bridge_text.contains("SUMMARY"),
"bridge should include the provided summary text"
);
}
}


@@ -1,6 +1,7 @@
use crate::config_profile::ConfigProfile;
use crate::config_types::History;
use crate::config_types::McpServerConfig;
use crate::config_types::Notifications;
use crate::config_types::ReasoningSummaryFormat;
use crate::config_types::SandboxWorkspaceWrite;
use crate::config_types::ShellEnvironmentPolicy;
@@ -9,6 +10,7 @@ use crate::config_types::Tui;
use crate::config_types::UriBasedFileOpener;
use crate::git_info::resolve_root_git_project_for_trust;
use crate::model_family::ModelFamily;
use crate::model_family::derive_default_model_family;
use crate::model_family::find_family_for_model;
use crate::model_provider_info::ModelProviderInfo;
use crate::model_provider_info::built_in_model_providers;
@@ -24,15 +26,20 @@ use codex_protocol::mcp_protocol::Tools;
use codex_protocol::mcp_protocol::UserSavedConfig;
use dirs::home_dir;
use serde::Deserialize;
use std::collections::BTreeMap;
use std::collections::HashMap;
use std::path::Path;
use std::path::PathBuf;
use tempfile::NamedTempFile;
use toml::Value as TomlValue;
use toml_edit::Array as TomlArray;
use toml_edit::DocumentMut;
use toml_edit::Item as TomlItem;
use toml_edit::Table as TomlTable;
const OPENAI_DEFAULT_MODEL: &str = "gpt-5";
pub const GPT5_HIGH_MODEL: &str = "gpt-5-high";
const OPENAI_DEFAULT_MODEL: &str = "gpt-5-codex";
const OPENAI_DEFAULT_REVIEW_MODEL: &str = "gpt-5-codex";
pub const GPT_5_CODEX_MEDIUM_MODEL: &str = "gpt-5-codex";
/// Maximum number of bytes of the documentation that will be embedded. Larger
/// files are *silently truncated* to this size so we do not take up too much of
@@ -47,6 +54,9 @@ pub struct Config {
/// Optional override of model selection.
pub model: String,
/// Model used specifically for review sessions. Defaults to "gpt-5-codex".
pub review_model: String,
pub model_family: ModelFamily,
/// Size of the context window for the model, in tokens.
@@ -55,6 +65,9 @@ pub struct Config {
/// Maximum number of output tokens.
pub model_max_output_tokens: Option<u64>,
/// Token usage threshold triggering auto-compaction of conversation history.
pub model_auto_compact_token_limit: Option<i64>,
/// Key into the model_providers map that specifies which provider to use.
pub model_provider_id: String,
@@ -105,6 +118,10 @@ pub struct Config {
/// If unset the feature is disabled.
pub notify: Option<Vec<String>>,
/// TUI notifications preference. When set, the TUI will send OSC 9 notifications on approvals
/// and turn completions when not focused.
pub tui_notifications: Notifications,
/// The directory that should be treated as the current working directory
/// for the session. All relative paths inside the business-logic layer are
/// resolved against this path.
@@ -140,7 +157,7 @@ pub struct Config {
/// Value to use for `reasoning.effort` when making a request using the
/// Responses API.
pub model_reasoning_effort: ReasoningEffort,
pub model_reasoning_effort: Option<ReasoningEffort>,
/// If not "none", the value to use for `reasoning.summary` when making a
/// request using the Responses API.
@@ -152,9 +169,6 @@ pub struct Config {
/// Base URL for requests to ChatGPT (as opposed to the OpenAI API).
pub chatgpt_base_url: String,
/// Experimental rollout resume path (absolute path to .jsonl; undocumented).
pub experimental_resume: Option<PathBuf>,
/// Include an experimental plan tool that the model can use to update its current plan and status of each step.
pub include_plan_tool: bool,
@@ -259,6 +273,86 @@ pub fn load_config_as_toml(codex_home: &Path) -> std::io::Result<TomlValue> {
}
}
pub fn load_global_mcp_servers(
codex_home: &Path,
) -> std::io::Result<BTreeMap<String, McpServerConfig>> {
let root_value = load_config_as_toml(codex_home)?;
let Some(servers_value) = root_value.get("mcp_servers") else {
return Ok(BTreeMap::new());
};
servers_value
.clone()
.try_into()
.map_err(|e| std::io::Error::new(std::io::ErrorKind::InvalidData, e))
}
pub fn write_global_mcp_servers(
codex_home: &Path,
servers: &BTreeMap<String, McpServerConfig>,
) -> std::io::Result<()> {
let config_path = codex_home.join(CONFIG_TOML_FILE);
let mut doc = match std::fs::read_to_string(&config_path) {
Ok(contents) => contents
.parse::<DocumentMut>()
.map_err(|e| std::io::Error::new(std::io::ErrorKind::InvalidData, e))?,
Err(e) if e.kind() == std::io::ErrorKind::NotFound => DocumentMut::new(),
Err(e) => return Err(e),
};
doc.as_table_mut().remove("mcp_servers");
if !servers.is_empty() {
let mut table = TomlTable::new();
table.set_implicit(true);
doc["mcp_servers"] = TomlItem::Table(table);
for (name, config) in servers {
let mut entry = TomlTable::new();
entry.set_implicit(false);
entry["command"] = toml_edit::value(config.command.clone());
if !config.args.is_empty() {
let mut args = TomlArray::new();
for arg in &config.args {
args.push(arg.clone());
}
entry["args"] = TomlItem::Value(args.into());
}
if let Some(env) = &config.env
&& !env.is_empty()
{
let mut env_table = TomlTable::new();
env_table.set_implicit(false);
let mut pairs: Vec<_> = env.iter().collect();
pairs.sort_by(|(a, _), (b, _)| a.cmp(b));
for (key, value) in pairs {
env_table.insert(key, toml_edit::value(value.clone()));
}
entry["env"] = TomlItem::Table(env_table);
}
if let Some(timeout) = config.startup_timeout_sec {
entry["startup_timeout_sec"] = toml_edit::value(timeout.as_secs_f64());
}
if let Some(timeout) = config.tool_timeout_sec {
entry["tool_timeout_sec"] = toml_edit::value(timeout.as_secs_f64());
}
doc["mcp_servers"][name.as_str()] = TomlItem::Table(entry);
}
}
std::fs::create_dir_all(codex_home)?;
let tmp_file = NamedTempFile::new_in(codex_home)?;
std::fs::write(tmp_file.path(), doc.to_string())?;
tmp_file.persist(config_path).map_err(|err| err.error)?;
Ok(())
}
fn set_project_trusted_inner(doc: &mut DocumentMut, project_path: &Path) -> anyhow::Result<()> {
// Ensure we render a human-friendly structure:
//
@@ -423,14 +517,24 @@ pub async fn persist_model_selection(
if let Some(profile_name) = active_profile {
let profile_table = ensure_profile_table(&mut doc, profile_name)?;
profile_table["model"] = toml_edit::value(model);
if let Some(effort) = effort {
profile_table["model_reasoning_effort"] = toml_edit::value(effort.to_string());
match effort {
Some(effort) => {
profile_table["model_reasoning_effort"] = toml_edit::value(effort.to_string());
}
None => {
profile_table.remove("model_reasoning_effort");
}
}
} else {
let table = doc.as_table_mut();
table["model"] = toml_edit::value(model);
if let Some(effort) = effort {
table["model_reasoning_effort"] = toml_edit::value(effort.to_string());
match effort {
Some(effort) => {
table["model_reasoning_effort"] = toml_edit::value(effort.to_string());
}
None => {
table.remove("model_reasoning_effort");
}
}
}
@@ -499,6 +603,8 @@ fn apply_toml_override(root: &mut TomlValue, path: &str, value: TomlValue) {
pub struct ConfigToml {
/// Optional override of model selection.
pub model: Option<String>,
/// Review model override used by the `/review` feature.
pub review_model: Option<String>,
/// Provider to use from the model_providers map.
pub model_provider: Option<String>,
@@ -509,6 +615,9 @@ pub struct ConfigToml {
/// Maximum number of output tokens.
pub model_max_output_tokens: Option<u64>,
/// Token usage threshold triggering auto-compaction of conversation history.
pub model_auto_compact_token_limit: Option<i64>,
/// Default approval policy for executing commands.
pub approval_policy: Option<AskForApproval>,
@@ -579,9 +688,6 @@ pub struct ConfigToml {
/// Base URL for requests to ChatGPT (as opposed to the OpenAI API).
pub chatgpt_base_url: Option<String>,
/// Experimental rollout resume path (absolute path to .jsonl; undocumented).
pub experimental_resume: Option<PathBuf>,
/// Experimental path to a file whose contents replace the built-in BASE_INSTRUCTIONS.
pub experimental_instructions_file: Option<PathBuf>,
@@ -724,6 +830,7 @@ impl ConfigToml {
#[derive(Default, Debug, Clone)]
pub struct ConfigOverrides {
pub model: Option<String>,
pub review_model: Option<String>,
pub cwd: Option<PathBuf>,
pub approval_policy: Option<AskForApproval>,
pub sandbox_mode: Option<SandboxMode>,
@@ -751,6 +858,7 @@ impl Config {
// Destructure ConfigOverrides fully to ensure all overrides are applied.
let ConfigOverrides {
model,
review_model: override_review_model,
cwd,
approval_policy,
sandbox_mode,
@@ -841,15 +949,8 @@ impl Config {
.or(cfg.model)
.unwrap_or_else(default_model);
let mut model_family = find_family_for_model(&model).unwrap_or_else(|| ModelFamily {
slug: model.clone(),
family: model.clone(),
needs_special_apply_patch_instructions: false,
supports_reasoning_summaries: false,
reasoning_summary_format: ReasoningSummaryFormat::None,
uses_local_shell_tool: false,
apply_patch_tool_type: None,
});
let mut model_family =
find_family_for_model(&model).unwrap_or_else(|| derive_default_model_family(&model));
if let Some(supports_reasoning_summaries) = cfg.model_supports_reasoning_summaries {
model_family.supports_reasoning_summaries = supports_reasoning_summaries;
@@ -867,8 +968,11 @@ impl Config {
.as_ref()
.map(|info| info.max_output_tokens)
});
let experimental_resume = cfg.experimental_resume;
let model_auto_compact_token_limit = cfg.model_auto_compact_token_limit.or_else(|| {
openai_model_info
.as_ref()
.and_then(|info| info.auto_compact_token_limit)
});
// Load base instructions override from a file if specified. If the
// path is relative, resolve it against the effective cwd so the
@@ -881,11 +985,18 @@ impl Config {
Self::get_base_instructions(experimental_instructions_path, &resolved_cwd)?;
let base_instructions = base_instructions.or(file_base_instructions);
// Default review model when not set in config; allow CLI override to take precedence.
let review_model = override_review_model
.or(cfg.review_model)
.unwrap_or_else(default_review_model);
let config = Self {
model,
review_model,
model_family,
model_context_window,
model_max_output_tokens,
model_auto_compact_token_limit,
model_provider_id,
model_provider,
cwd: resolved_cwd,
@@ -913,8 +1024,7 @@ impl Config {
.unwrap_or(false),
model_reasoning_effort: config_profile
.model_reasoning_effort
.or(cfg.model_reasoning_effort)
.unwrap_or_default(),
.or(cfg.model_reasoning_effort),
model_reasoning_summary: config_profile
.model_reasoning_summary
.or(cfg.model_reasoning_summary)
@@ -924,8 +1034,6 @@ impl Config {
.chatgpt_base_url
.or(cfg.chatgpt_base_url)
.unwrap_or("https://chatgpt.com/backend-api/".to_string()),
experimental_resume,
include_plan_tool: include_plan_tool.unwrap_or(false),
include_apply_patch_tool: include_apply_patch_tool.unwrap_or(false),
tools_web_search_request,
@@ -938,6 +1046,11 @@ impl Config {
include_view_image_tool,
active_profile: active_profile_name,
disable_paste_burst: cfg.disable_paste_burst.unwrap_or(false),
tui_notifications: cfg
.tui
.as_ref()
.map(|t| t.notifications.clone())
.unwrap_or_default(),
};
Ok(config)
}
@@ -1006,6 +1119,10 @@ fn default_model() -> String {
OPENAI_DEFAULT_MODEL.to_string()
}
fn default_review_model() -> String {
OPENAI_DEFAULT_REVIEW_MODEL.to_string()
}
/// Returns the path to the Codex configuration directory, which can be
/// specified by the `CODEX_HOME` environment variable. If not set, defaults to
/// `~/.codex`.
@@ -1044,10 +1161,12 @@ pub fn log_dir(cfg: &Config) -> std::io::Result<PathBuf> {
#[cfg(test)]
mod tests {
use crate::config_types::HistoryPersistence;
use crate::config_types::Notifications;
use super::*;
use pretty_assertions::assert_eq;
use std::time::Duration;
use tempfile::TempDir;
#[test]
@@ -1082,6 +1201,19 @@ persistence = "none"
);
}
#[test]
fn tui_config_missing_notifications_field_defaults_to_disabled() {
let cfg = r#"
[tui]
"#;
let parsed = toml::from_str::<ConfigToml>(cfg)
.expect("TUI config without notifications should succeed");
let tui = parsed.tui.expect("config should include tui section");
assert_eq!(tui.notifications, Notifications::Enabled(false));
}
#[test]
fn test_sandbox_config_parsing() {
let sandbox_full_access = r#"
@@ -1138,6 +1270,72 @@ exclude_slash_tmp = true
);
}
#[test]
fn load_global_mcp_servers_returns_empty_if_missing() -> anyhow::Result<()> {
let codex_home = TempDir::new()?;
let servers = load_global_mcp_servers(codex_home.path())?;
assert!(servers.is_empty());
Ok(())
}
#[test]
fn write_global_mcp_servers_round_trips_entries() -> anyhow::Result<()> {
let codex_home = TempDir::new()?;
let mut servers = BTreeMap::new();
servers.insert(
"docs".to_string(),
McpServerConfig {
command: "echo".to_string(),
args: vec!["hello".to_string()],
env: None,
startup_timeout_sec: Some(Duration::from_secs(3)),
tool_timeout_sec: Some(Duration::from_secs(5)),
},
);
write_global_mcp_servers(codex_home.path(), &servers)?;
let loaded = load_global_mcp_servers(codex_home.path())?;
assert_eq!(loaded.len(), 1);
let docs = loaded.get("docs").expect("docs entry");
assert_eq!(docs.command, "echo");
assert_eq!(docs.args, vec!["hello".to_string()]);
assert_eq!(docs.startup_timeout_sec, Some(Duration::from_secs(3)));
assert_eq!(docs.tool_timeout_sec, Some(Duration::from_secs(5)));
let empty = BTreeMap::new();
write_global_mcp_servers(codex_home.path(), &empty)?;
let loaded = load_global_mcp_servers(codex_home.path())?;
assert!(loaded.is_empty());
Ok(())
}
#[test]
fn load_global_mcp_servers_accepts_legacy_ms_field() -> anyhow::Result<()> {
let codex_home = TempDir::new()?;
let config_path = codex_home.path().join(CONFIG_TOML_FILE);
std::fs::write(
&config_path,
r#"
[mcp_servers]
[mcp_servers.docs]
command = "echo"
startup_timeout_ms = 2500
"#,
)?;
let servers = load_global_mcp_servers(codex_home.path())?;
let docs = servers.get("docs").expect("docs entry");
assert_eq!(docs.startup_timeout_sec, Some(Duration::from_millis(2500)));
Ok(())
}
#[tokio::test]
async fn persist_model_selection_updates_defaults() -> anyhow::Result<()> {
let codex_home = TempDir::new()?;
@@ -1145,7 +1343,7 @@ exclude_slash_tmp = true
persist_model_selection(
codex_home.path(),
None,
"gpt-5-high-new",
"gpt-5-codex",
Some(ReasoningEffort::High),
)
.await?;
@@ -1154,7 +1352,7 @@ exclude_slash_tmp = true
tokio::fs::read_to_string(codex_home.path().join(CONFIG_TOML_FILE)).await?;
let parsed: ConfigToml = toml::from_str(&serialized)?;
assert_eq!(parsed.model.as_deref(), Some("gpt-5-high-new"));
assert_eq!(parsed.model.as_deref(), Some("gpt-5-codex"));
assert_eq!(parsed.model_reasoning_effort, Some(ReasoningEffort::High));
Ok(())
@@ -1168,7 +1366,7 @@ exclude_slash_tmp = true
tokio::fs::write(
&config_path,
r#"
model = "gpt-5"
model = "gpt-5-codex"
model_reasoning_effort = "medium"
[profiles.dev]
@@ -1208,8 +1406,8 @@ model = "gpt-4.1"
persist_model_selection(
codex_home.path(),
Some("dev"),
"gpt-5-high-new",
Some(ReasoningEffort::Low),
"gpt-5-codex",
Some(ReasoningEffort::Medium),
)
.await?;
@@ -1221,8 +1419,11 @@ model = "gpt-4.1"
.get("dev")
.expect("profile should be created");
assert_eq!(profile.model.as_deref(), Some("gpt-5-high-new"));
assert_eq!(profile.model_reasoning_effort, Some(ReasoningEffort::Low));
assert_eq!(profile.model.as_deref(), Some("gpt-5-codex"));
assert_eq!(
profile.model_reasoning_effort,
Some(ReasoningEffort::Medium)
);
Ok(())
}
@@ -1240,7 +1441,7 @@ model = "gpt-4"
model_reasoning_effort = "medium"
[profiles.prod]
model = "gpt-5"
model = "gpt-5-codex"
"#,
)
.await?;
@@ -1271,7 +1472,7 @@ model = "gpt-5"
.profiles
.get("prod")
.and_then(|profile| profile.model.as_deref()),
Some("gpt-5"),
Some("gpt-5-codex"),
);
Ok(())
@@ -1418,9 +1619,11 @@ model_verbosity = "high"
assert_eq!(
Config {
model: "o3".to_string(),
review_model: OPENAI_DEFAULT_REVIEW_MODEL.to_string(),
model_family: find_family_for_model("o3").expect("known model slug"),
model_context_window: Some(200_000),
model_max_output_tokens: Some(100_000),
model_auto_compact_token_limit: None,
model_provider_id: "openai".to_string(),
model_provider: fixture.openai_provider.clone(),
approval_policy: AskForApproval::Never,
@@ -1438,11 +1641,10 @@ model_verbosity = "high"
codex_linux_sandbox_exe: None,
hide_agent_reasoning: false,
show_raw_agent_reasoning: false,
model_reasoning_effort: ReasoningEffort::High,
model_reasoning_effort: Some(ReasoningEffort::High),
model_reasoning_summary: ReasoningSummary::Detailed,
model_verbosity: None,
chatgpt_base_url: "https://chatgpt.com/backend-api/".to_string(),
experimental_resume: None,
base_instructions: None,
include_plan_tool: false,
include_apply_patch_tool: false,
@@ -1452,6 +1654,7 @@ model_verbosity = "high"
include_view_image_tool: true,
active_profile: Some("o3".to_string()),
disable_paste_burst: false,
tui_notifications: Default::default(),
},
o3_profile_config
);
@@ -1474,9 +1677,11 @@ model_verbosity = "high"
)?;
let expected_gpt3_profile_config = Config {
model: "gpt-3.5-turbo".to_string(),
review_model: OPENAI_DEFAULT_REVIEW_MODEL.to_string(),
model_family: find_family_for_model("gpt-3.5-turbo").expect("known model slug"),
model_context_window: Some(16_385),
model_max_output_tokens: Some(4_096),
model_auto_compact_token_limit: None,
model_provider_id: "openai-chat-completions".to_string(),
model_provider: fixture.openai_chat_completions_provider.clone(),
approval_policy: AskForApproval::UnlessTrusted,
@@ -1494,11 +1699,10 @@ model_verbosity = "high"
codex_linux_sandbox_exe: None,
hide_agent_reasoning: false,
show_raw_agent_reasoning: false,
model_reasoning_effort: ReasoningEffort::default(),
model_reasoning_effort: None,
model_reasoning_summary: ReasoningSummary::default(),
model_verbosity: None,
chatgpt_base_url: "https://chatgpt.com/backend-api/".to_string(),
experimental_resume: None,
base_instructions: None,
include_plan_tool: false,
include_apply_patch_tool: false,
@@ -1508,6 +1712,7 @@ model_verbosity = "high"
include_view_image_tool: true,
active_profile: Some("gpt3".to_string()),
disable_paste_burst: false,
tui_notifications: Default::default(),
};
assert_eq!(expected_gpt3_profile_config, gpt3_profile_config);
@@ -1545,9 +1750,11 @@ model_verbosity = "high"
)?;
let expected_zdr_profile_config = Config {
model: "o3".to_string(),
review_model: OPENAI_DEFAULT_REVIEW_MODEL.to_string(),
model_family: find_family_for_model("o3").expect("known model slug"),
model_context_window: Some(200_000),
model_max_output_tokens: Some(100_000),
model_auto_compact_token_limit: None,
model_provider_id: "openai".to_string(),
model_provider: fixture.openai_provider.clone(),
approval_policy: AskForApproval::OnFailure,
@@ -1565,11 +1772,10 @@ model_verbosity = "high"
codex_linux_sandbox_exe: None,
hide_agent_reasoning: false,
show_raw_agent_reasoning: false,
model_reasoning_effort: ReasoningEffort::default(),
model_reasoning_effort: None,
model_reasoning_summary: ReasoningSummary::default(),
model_verbosity: None,
chatgpt_base_url: "https://chatgpt.com/backend-api/".to_string(),
experimental_resume: None,
base_instructions: None,
include_plan_tool: false,
include_apply_patch_tool: false,
@@ -1579,6 +1785,7 @@ model_verbosity = "high"
include_view_image_tool: true,
active_profile: Some("zdr".to_string()),
disable_paste_burst: false,
tui_notifications: Default::default(),
};
assert_eq!(expected_zdr_profile_config, zdr_profile_config);
@@ -1602,9 +1809,11 @@ model_verbosity = "high"
)?;
let expected_gpt5_profile_config = Config {
model: "gpt-5".to_string(),
review_model: OPENAI_DEFAULT_REVIEW_MODEL.to_string(),
model_family: find_family_for_model("gpt-5").expect("known model slug"),
model_context_window: Some(272_000),
model_max_output_tokens: Some(128_000),
model_auto_compact_token_limit: None,
model_provider_id: "openai".to_string(),
model_provider: fixture.openai_provider.clone(),
approval_policy: AskForApproval::OnFailure,
@@ -1622,11 +1831,10 @@ model_verbosity = "high"
codex_linux_sandbox_exe: None,
hide_agent_reasoning: false,
show_raw_agent_reasoning: false,
model_reasoning_effort: ReasoningEffort::High,
model_reasoning_effort: Some(ReasoningEffort::High),
model_reasoning_summary: ReasoningSummary::Detailed,
model_verbosity: Some(Verbosity::High),
chatgpt_base_url: "https://chatgpt.com/backend-api/".to_string(),
experimental_resume: None,
base_instructions: None,
include_plan_tool: false,
include_apply_patch_tool: false,
@@ -1636,6 +1844,7 @@ model_verbosity = "high"
include_view_image_tool: true,
active_profile: Some("gpt5".to_string()),
disable_paste_burst: false,
tui_notifications: Default::default(),
};
assert_eq!(expected_gpt5_profile_config, gpt5_profile_config);
@@ -1739,3 +1948,46 @@ trust_level = "trusted"
Ok(())
}
}
#[cfg(test)]
mod notifications_tests {
use crate::config_types::Notifications;
use serde::Deserialize;
#[derive(Deserialize, Debug, PartialEq)]
struct TuiTomlTest {
notifications: Notifications,
}
#[derive(Deserialize, Debug, PartialEq)]
struct RootTomlTest {
tui: TuiTomlTest,
}
#[test]
fn test_tui_notifications_true() {
let toml = r#"
[tui]
notifications = true
"#;
let parsed: RootTomlTest = toml::from_str(toml).expect("deserialize notifications=true");
assert!(matches!(
parsed.tui.notifications,
Notifications::Enabled(true)
));
}
#[test]
fn test_tui_notifications_custom_array() {
let toml = r#"
[tui]
notifications = ["foo"]
"#;
let parsed: RootTomlTest =
toml::from_str(toml).expect("deserialize notifications=[\"foo\"]");
assert!(matches!(
parsed.tui.notifications,
Notifications::Custom(ref v) if v == &vec!["foo".to_string()]
));
}
}

View File

@@ -7,6 +7,12 @@ use toml_edit::DocumentMut;
pub const CONFIG_KEY_MODEL: &str = "model";
pub const CONFIG_KEY_EFFORT: &str = "model_reasoning_effort";
#[derive(Copy, Clone)]
enum NoneBehavior {
Skip,
Remove,
}
/// Persist overrides into `config.toml` using explicit key segments per
/// override. This avoids ambiguity with keys that contain dots or spaces.
pub async fn persist_overrides(
@@ -14,47 +20,12 @@ pub async fn persist_overrides(
profile: Option<&str>,
overrides: &[(&[&str], &str)],
) -> Result<()> {
let config_path = codex_home.join(CONFIG_TOML_FILE);
let with_options: Vec<(&[&str], Option<&str>)> = overrides
.iter()
.map(|(segments, value)| (*segments, Some(*value)))
.collect();
let mut doc = match tokio::fs::read_to_string(&config_path).await {
Ok(s) => s.parse::<DocumentMut>()?,
Err(e) if e.kind() == std::io::ErrorKind::NotFound => {
tokio::fs::create_dir_all(codex_home).await?;
DocumentMut::new()
}
Err(e) => return Err(e.into()),
};
let effective_profile = if let Some(p) = profile {
Some(p.to_owned())
} else {
doc.get("profile")
.and_then(|i| i.as_str())
.map(|s| s.to_string())
};
for (segments, val) in overrides.iter().copied() {
let value = toml_edit::value(val);
if let Some(ref name) = effective_profile {
if segments.first().copied() == Some("profiles") {
apply_toml_edit_override_segments(&mut doc, segments, value);
} else {
let mut seg_buf: Vec<&str> = Vec::with_capacity(2 + segments.len());
seg_buf.push("profiles");
seg_buf.push(name.as_str());
seg_buf.extend_from_slice(segments);
apply_toml_edit_override_segments(&mut doc, &seg_buf, value);
}
} else {
apply_toml_edit_override_segments(&mut doc, segments, value);
}
}
let tmp_file = NamedTempFile::new_in(codex_home)?;
tokio::fs::write(tmp_file.path(), doc.to_string()).await?;
tmp_file.persist(config_path)?;
Ok(())
persist_overrides_with_behavior(codex_home, profile, &with_options, NoneBehavior::Skip).await
}
/// Persist overrides where values may be optional. Any entries with `None`
@@ -65,16 +36,17 @@ pub async fn persist_non_null_overrides(
profile: Option<&str>,
overrides: &[(&[&str], Option<&str>)],
) -> Result<()> {
let filtered: Vec<(&[&str], &str)> = overrides
.iter()
.filter_map(|(k, v)| v.map(|vv| (*k, vv)))
.collect();
persist_overrides_with_behavior(codex_home, profile, overrides, NoneBehavior::Skip).await
}
if filtered.is_empty() {
return Ok(());
}
persist_overrides(codex_home, profile, &filtered).await
/// Persist overrides where `None` values clear any existing values from the
/// configuration file.
pub async fn persist_overrides_and_clear_if_none(
codex_home: &Path,
profile: Option<&str>,
overrides: &[(&[&str], Option<&str>)],
) -> Result<()> {
persist_overrides_with_behavior(codex_home, profile, overrides, NoneBehavior::Remove).await
}
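To make the `NoneBehavior` split concrete, a usage sketch (hypothetical call sites; `codex_home` assumed in scope in an async context):
// Skip: a `None` entry is ignored, leaving any existing value untouched.
persist_non_null_overrides(
    codex_home,
    None,
    &[(&["model_reasoning_effort"], None)],
)
.await?;
// Remove: the same `None` entry deletes `model_reasoning_effort` from config.toml.
persist_overrides_and_clear_if_none(
    codex_home,
    None,
    &[(&["model_reasoning_effort"], None)],
)
.await?;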
/// Apply a single override onto a `toml_edit` document while preserving
@@ -121,6 +93,125 @@ fn apply_toml_edit_override_segments(
current[last] = value;
}
async fn persist_overrides_with_behavior(
codex_home: &Path,
profile: Option<&str>,
overrides: &[(&[&str], Option<&str>)],
none_behavior: NoneBehavior,
) -> Result<()> {
if overrides.is_empty() {
return Ok(());
}
let should_skip = match none_behavior {
NoneBehavior::Skip => overrides.iter().all(|(_, value)| value.is_none()),
NoneBehavior::Remove => false,
};
if should_skip {
return Ok(());
}
let config_path = codex_home.join(CONFIG_TOML_FILE);
let read_result = tokio::fs::read_to_string(&config_path).await;
let mut doc = match read_result {
Ok(contents) => contents.parse::<DocumentMut>()?,
Err(e) if e.kind() == std::io::ErrorKind::NotFound => {
if overrides
.iter()
.all(|(_, value)| value.is_none() && matches!(none_behavior, NoneBehavior::Remove))
{
return Ok(());
}
tokio::fs::create_dir_all(codex_home).await?;
DocumentMut::new()
}
Err(e) => return Err(e.into()),
};
let effective_profile = if let Some(p) = profile {
Some(p.to_owned())
} else {
doc.get("profile")
.and_then(|i| i.as_str())
.map(str::to_string)
};
let mut mutated = false;
for (segments, value) in overrides.iter().copied() {
let mut seg_buf: Vec<&str> = Vec::new();
let segments_to_apply: &[&str];
if let Some(ref name) = effective_profile {
if segments.first().copied() == Some("profiles") {
segments_to_apply = segments;
} else {
seg_buf.reserve(2 + segments.len());
seg_buf.push("profiles");
seg_buf.push(name.as_str());
seg_buf.extend_from_slice(segments);
segments_to_apply = seg_buf.as_slice();
}
} else {
segments_to_apply = segments;
}
match value {
Some(v) => {
let item_value = toml_edit::value(v);
apply_toml_edit_override_segments(&mut doc, segments_to_apply, item_value);
mutated = true;
}
None => {
if matches!(none_behavior, NoneBehavior::Remove)
&& remove_toml_edit_segments(&mut doc, segments_to_apply)
{
mutated = true;
}
}
}
}
if !mutated {
return Ok(());
}
let tmp_file = NamedTempFile::new_in(codex_home)?;
tokio::fs::write(tmp_file.path(), doc.to_string()).await?;
tmp_file.persist(config_path)?;
Ok(())
}
fn remove_toml_edit_segments(doc: &mut DocumentMut, segments: &[&str]) -> bool {
use toml_edit::Item;
if segments.is_empty() {
return false;
}
let mut current = doc.as_table_mut();
for seg in &segments[..segments.len() - 1] {
let Some(item) = current.get_mut(seg) else {
return false;
};
match item {
Item::Table(table) => {
current = table;
}
_ => {
return false;
}
}
}
current.remove(segments[segments.len() - 1]).is_some()
}
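For intuition, the same removal expressed directly against `toml_edit` (a sketch; the helper above additionally refuses to traverse non-table items):
let mut doc: toml_edit::DocumentMut = "[profiles.team]\nmodel = \"o3\"\n".parse().expect("valid toml");
// Equivalent to remove_toml_edit_segments(&mut doc, &["profiles", "team", "model"]).
if let Some(team) = doc["profiles"]["team"].as_table_mut() {
    team.remove("model");
}
assert_eq!(doc.to_string(), "[profiles.team]\n");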
#[cfg(test)]
mod tests {
use super::*;
@@ -137,7 +228,7 @@ mod tests {
codex_home,
None,
&[
(&[CONFIG_KEY_MODEL], "gpt-5"),
(&[CONFIG_KEY_MODEL], "gpt-5-codex"),
(&[CONFIG_KEY_EFFORT], "high"),
],
)
@@ -145,7 +236,7 @@ mod tests {
.expect("persist");
let contents = read_config(codex_home).await;
let expected = r#"model = "gpt-5"
let expected = r#"model = "gpt-5-codex"
model_reasoning_effort = "high"
"#;
assert_eq!(contents, expected);
@@ -257,7 +348,7 @@ model_reasoning_effort = "high"
&[
(&["a", "b", "c"], "v"),
(&["x"], "y"),
(&["profiles", "p1", CONFIG_KEY_MODEL], "gpt-5"),
(&["profiles", "p1", CONFIG_KEY_MODEL], "gpt-5-codex"),
],
)
.await
@@ -270,7 +361,7 @@ model_reasoning_effort = "high"
c = "v"
[profiles.p1]
model = "gpt-5"
model = "gpt-5-codex"
"#;
assert_eq!(contents, expected);
}
@@ -363,7 +454,7 @@ existing = "keep"
codex_home,
None,
&[
(&[CONFIG_KEY_MODEL], "gpt-5"),
(&[CONFIG_KEY_MODEL], "gpt-5-codex"),
(&[CONFIG_KEY_EFFORT], "minimal"),
],
)
@@ -375,7 +466,7 @@ existing = "keep"
# should be preserved
existing = "keep"
model = "gpt-5"
model = "gpt-5-codex"
model_reasoning_effort = "minimal"
"#;
assert_eq!(contents, expected);
@@ -433,7 +524,7 @@ model = "o3"
let codex_home = tmpdir.path();
// Seed with a model value only
let seed = "model = \"gpt-5\"\n";
let seed = "model = \"gpt-5-codex\"\n";
tokio::fs::write(codex_home.join(CONFIG_TOML_FILE), seed)
.await
.expect("seed write");
@@ -444,7 +535,7 @@ model = "o3"
.expect("persist");
let contents = read_config(codex_home).await;
let expected = r#"model = "gpt-5"
let expected = r#"model = "gpt-5-codex"
model_reasoning_effort = "high"
"#;
assert_eq!(contents, expected);
@@ -488,7 +579,7 @@ model = "o4-mini"
// No active profile key; we'll target an explicit override
let seed = r#"[profiles.team]
model = "gpt-5"
model = "gpt-5-codex"
"#;
tokio::fs::write(codex_home.join(CONFIG_TOML_FILE), seed)
.await
@@ -504,7 +595,7 @@ model = "gpt-5"
let contents = read_config(codex_home).await;
let expected = r#"[profiles.team]
model = "gpt-5"
model = "gpt-5-codex"
model_reasoning_effort = "minimal"
"#;
assert_eq!(contents, expected);
@@ -520,7 +611,7 @@ model_reasoning_effort = "minimal"
codex_home,
None,
&[
(&[CONFIG_KEY_MODEL], Some("gpt-5")),
(&[CONFIG_KEY_MODEL], Some("gpt-5-codex")),
(&[CONFIG_KEY_EFFORT], None),
],
)
@@ -528,7 +619,7 @@ model_reasoning_effort = "minimal"
.expect("persist");
let contents = read_config(codex_home).await;
let expected = "model = \"gpt-5\"\n";
let expected = "model = \"gpt-5-codex\"\n";
assert_eq!(contents, expected);
}
@@ -574,6 +665,81 @@ model = "o3"
assert_eq!(contents, expected);
}
#[tokio::test]
async fn persist_clear_none_removes_top_level_value() {
let tmpdir = tempdir().expect("tmp");
let codex_home = tmpdir.path();
let seed = r#"model = "gpt-5-codex"
model_reasoning_effort = "medium"
"#;
tokio::fs::write(codex_home.join(CONFIG_TOML_FILE), seed)
.await
.expect("seed write");
persist_overrides_and_clear_if_none(
codex_home,
None,
&[
(&[CONFIG_KEY_MODEL], None),
(&[CONFIG_KEY_EFFORT], Some("high")),
],
)
.await
.expect("persist");
let contents = read_config(codex_home).await;
let expected = "model_reasoning_effort = \"high\"\n";
assert_eq!(contents, expected);
}
#[tokio::test]
async fn persist_clear_none_respects_active_profile() {
let tmpdir = tempdir().expect("tmp");
let codex_home = tmpdir.path();
let seed = r#"profile = "team"
[profiles.team]
model = "gpt-4"
model_reasoning_effort = "minimal"
"#;
tokio::fs::write(codex_home.join(CONFIG_TOML_FILE), seed)
.await
.expect("seed write");
persist_overrides_and_clear_if_none(
codex_home,
None,
&[
(&[CONFIG_KEY_MODEL], None),
(&[CONFIG_KEY_EFFORT], Some("high")),
],
)
.await
.expect("persist");
let contents = read_config(codex_home).await;
let expected = r#"profile = "team"
[profiles.team]
model_reasoning_effort = "high"
"#;
assert_eq!(contents, expected);
}
#[tokio::test]
async fn persist_clear_none_noop_when_file_missing() {
let tmpdir = tempdir().expect("tmp");
let codex_home = tmpdir.path();
persist_overrides_and_clear_if_none(codex_home, None, &[(&[CONFIG_KEY_MODEL], None)])
.await
.expect("persist");
assert!(!codex_home.join(CONFIG_TOML_FILE).exists());
}
// Test helper moved to bottom per review guidance.
async fn read_config(codex_home: &Path) -> String {
let p = codex_home.join(CONFIG_TOML_FILE);

View File

@@ -5,11 +5,15 @@
use std::collections::HashMap;
use std::path::PathBuf;
use std::time::Duration;
use wildmatch::WildMatchPattern;
use serde::Deserialize;
use serde::Deserializer;
use serde::Serialize;
use serde::de::Error as SerdeError;
#[derive(Deserialize, Debug, Clone, PartialEq)]
#[derive(Serialize, Debug, Clone, PartialEq)]
pub struct McpServerConfig {
pub command: String,
@@ -19,9 +23,84 @@ pub struct McpServerConfig {
#[serde(default)]
pub env: Option<HashMap<String, String>>,
/// Startup timeout in milliseconds for initializing MCP server & initially listing tools.
#[serde(default)]
pub startup_timeout_ms: Option<u64>,
/// Startup timeout in seconds for initializing MCP server & initially listing tools.
#[serde(
default,
with = "option_duration_secs",
skip_serializing_if = "Option::is_none"
)]
pub startup_timeout_sec: Option<Duration>,
/// Default timeout for MCP tool calls initiated via this server.
#[serde(default, with = "option_duration_secs")]
pub tool_timeout_sec: Option<Duration>,
}
impl<'de> Deserialize<'de> for McpServerConfig {
fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
where
D: Deserializer<'de>,
{
#[derive(Deserialize)]
struct RawMcpServerConfig {
command: String,
#[serde(default)]
args: Vec<String>,
#[serde(default)]
env: Option<HashMap<String, String>>,
#[serde(default)]
startup_timeout_sec: Option<f64>,
#[serde(default)]
startup_timeout_ms: Option<u64>,
#[serde(default, with = "option_duration_secs")]
tool_timeout_sec: Option<Duration>,
}
let raw = RawMcpServerConfig::deserialize(deserializer)?;
let startup_timeout_sec = match (raw.startup_timeout_sec, raw.startup_timeout_ms) {
(Some(sec), _) => {
let duration = Duration::try_from_secs_f64(sec).map_err(SerdeError::custom)?;
Some(duration)
}
(None, Some(ms)) => Some(Duration::from_millis(ms)),
(None, None) => None,
};
Ok(Self {
command: raw.command,
args: raw.args,
env: raw.env,
startup_timeout_sec,
tool_timeout_sec: raw.tool_timeout_sec,
})
}
}
mod option_duration_secs {
use serde::Deserialize;
use serde::Deserializer;
use serde::Serializer;
use std::time::Duration;
pub fn serialize<S>(value: &Option<Duration>, serializer: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
match value {
Some(duration) => serializer.serialize_some(&duration.as_secs_f64()),
None => serializer.serialize_none(),
}
}
pub fn deserialize<'de, D>(deserializer: D) -> Result<Option<Duration>, D::Error>
where
D: Deserializer<'de>,
{
let secs = Option::<f64>::deserialize(deserializer)?;
secs.map(|secs| Duration::try_from_secs_f64(secs).map_err(serde::de::Error::custom))
.transpose()
}
}
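A round-trip sketch of the fractional-seconds encoding (assumes the `toml` crate, as in the config tests elsewhere in this change):
let cfg: McpServerConfig = toml::from_str(
    r#"
command = "echo"
startup_timeout_sec = 2.5
"#,
)
.expect("fractional seconds parse via Duration::try_from_secs_f64");
assert_eq!(cfg.startup_timeout_sec, Some(Duration::from_millis(2500)));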
#[derive(Deserialize, Debug, Copy, Clone, PartialEq)]
@@ -76,9 +155,27 @@ pub enum HistoryPersistence {
None,
}
#[derive(Debug, Clone, PartialEq, Eq, Deserialize)]
#[serde(untagged)]
pub enum Notifications {
Enabled(bool),
Custom(Vec<String>),
}
impl Default for Notifications {
fn default() -> Self {
Self::Enabled(false)
}
}
/// Collection of settings that are specific to the TUI.
#[derive(Deserialize, Debug, Clone, PartialEq, Default)]
pub struct Tui {}
pub struct Tui {
/// Enable desktop notifications from the TUI when the terminal is unfocused.
/// Defaults to `false`.
#[serde(default)]
pub notifications: Notifications,
}
#[derive(Deserialize, Debug, Clone, PartialEq, Default)]
pub struct SandboxWorkspaceWrite {

View File

@@ -1,4 +1,3 @@
use codex_protocol::models::ContentItem;
use codex_protocol::models::ResponseItem;
/// Transcript of conversation history
@@ -33,52 +32,8 @@ impl ConversationHistory {
}
}
pub(crate) fn keep_last_messages(&mut self, n: usize) {
if n == 0 {
self.items.clear();
return;
}
// Collect the last N message items (assistant/user), newest to oldest.
let mut kept: Vec<ResponseItem> = Vec::with_capacity(n);
for item in self.items.iter().rev() {
if let ResponseItem::Message { role, content, .. } = item {
kept.push(ResponseItem::Message {
// we need to remove the id or the model will complain that messages are sent without
// their reasonings
id: None,
role: role.clone(),
content: content.clone(),
});
if kept.len() == n {
break;
}
}
}
// Preserve chronological order (oldest to newest) within the kept slice.
kept.reverse();
self.items = kept;
}
pub(crate) fn last_agent_message(&self) -> String {
for item in self.items.iter().rev() {
if let ResponseItem::Message { role, content, .. } = item
&& role == "assistant"
{
return content
.iter()
.find_map(|ci| {
if let ContentItem::OutputText { text } = ci {
Some(text.clone())
} else {
None
}
})
.unwrap_or_default();
}
}
String::new()
pub(crate) fn replace(&mut self, items: Vec<ResponseItem>) {
self.items = items;
}
}
@@ -92,8 +47,9 @@ fn is_api_message(message: &ResponseItem) -> bool {
| ResponseItem::CustomToolCall { .. }
| ResponseItem::CustomToolCallOutput { .. }
| ResponseItem::LocalShellCall { .. }
| ResponseItem::Reasoning { .. } => true,
ResponseItem::WebSearchCall { .. } | ResponseItem::Other => false,
| ResponseItem::Reasoning { .. }
| ResponseItem::WebSearchCall { .. } => true,
ResponseItem::Other => false,
}
}

View File

@@ -3,6 +3,8 @@ use crate::CodexAuth;
use crate::codex::Codex;
use crate::codex::CodexSpawnOk;
use crate::codex::INITIAL_SUBMIT_ID;
use crate::codex::compact::content_items_to_text;
use crate::codex::compact::is_session_prefix_message;
use crate::codex_conversation::CodexConversation;
use crate::config::Config;
use crate::error::CodexErr;
@@ -59,21 +61,11 @@ impl ConversationManager {
config: Config,
auth_manager: Arc<AuthManager>,
) -> CodexResult<NewConversation> {
// TO BE REFACTORED: use the config experimental_resume field until we have a mainstream way.
if let Some(resume_path) = config.experimental_resume.as_ref() {
let initial_history = RolloutRecorder::get_rollout_history(resume_path).await?;
let CodexSpawnOk {
codex,
conversation_id,
} = Codex::spawn(config, auth_manager, initial_history).await?;
self.finalize_spawn(codex, conversation_id).await
} else {
let CodexSpawnOk {
codex,
conversation_id,
} = Codex::spawn(config, auth_manager, InitialHistory::New).await?;
self.finalize_spawn(codex, conversation_id).await
}
let CodexSpawnOk {
codex,
conversation_id,
} = Codex::spawn(config, auth_manager, InitialHistory::New).await?;
self.finalize_spawn(codex, conversation_id).await
}
async fn finalize_spawn(
@@ -144,19 +136,19 @@ impl ConversationManager {
self.conversations.write().await.remove(conversation_id)
}
/// Fork an existing conversation by dropping the last `drop_last_messages`
/// user/assistant messages from its transcript and starting a new
/// Fork an existing conversation by taking messages up to the given position
/// (not including the message at the given position) and starting a new
/// conversation with identical configuration (unless overridden by the
/// caller's `config`). The new conversation will have a fresh id.
pub async fn fork_conversation(
&self,
num_messages_to_drop: usize,
nth_user_message: usize,
config: Config,
path: PathBuf,
) -> CodexResult<NewConversation> {
// Compute the prefix up to the cut point.
let history = RolloutRecorder::get_rollout_history(&path).await?;
let history = truncate_after_dropping_last_messages(history, num_messages_to_drop);
let history = truncate_before_nth_user_message(history, nth_user_message);
// Spawn a new conversation with the computed initial history.
let auth_manager = self.auth_manager.clone();
@@ -169,33 +161,30 @@ impl ConversationManager {
}
}
/// Return a prefix of `items` obtained by dropping the last `n` user messages
/// and all items that follow them.
fn truncate_after_dropping_last_messages(history: InitialHistory, n: usize) -> InitialHistory {
if n == 0 {
return InitialHistory::Forked(history.get_rollout_items());
}
// Work directly on rollout items, and cut the vector at the nth-from-last user message input.
/// Return a prefix of `items` obtained by cutting strictly before the nth user message
/// (0-based), dropping that message and all items that follow it.
fn truncate_before_nth_user_message(history: InitialHistory, n: usize) -> InitialHistory {
// Work directly on rollout items, and cut the vector at the nth user message input.
let items: Vec<RolloutItem> = history.get_rollout_items();
// Find indices of user message inputs in rollout order.
let mut user_positions: Vec<usize> = Vec::new();
for (idx, item) in items.iter().enumerate() {
if let RolloutItem::ResponseItem(ResponseItem::Message { role, .. }) = item
if let RolloutItem::ResponseItem(ResponseItem::Message { role, content, .. }) = item
&& role == "user"
&& content_items_to_text(content).is_some_and(|text| !is_session_prefix_message(&text))
{
user_positions.push(idx);
}
}
// If fewer than n user messages exist, treat as empty.
if user_positions.len() < n {
// If there are at most n user messages, index n is out of range; treat as empty.
if user_positions.len() <= n {
return InitialHistory::New;
}
// Cut strictly before the nth-from-last user message (do not keep the nth itself).
let cut_idx = user_positions[user_positions.len() - n];
// Cut strictly before the nth user message (do not keep the nth itself).
let cut_idx = user_positions[n];
let rolled: Vec<RolloutItem> = items.into_iter().take(cut_idx).collect();
if rolled.is_empty() {
@@ -208,9 +197,11 @@ fn truncate_after_dropping_last_messages(history: InitialHistory, n: usize) -> I
#[cfg(test)]
mod tests {
use super::*;
use crate::codex::make_session_and_context;
use codex_protocol::models::ContentItem;
use codex_protocol::models::ReasoningItemReasoningSummary;
use codex_protocol::models::ResponseItem;
use pretty_assertions::assert_eq;
fn user_msg(text: &str) -> ResponseItem {
ResponseItem::Message {
@@ -262,7 +253,7 @@ mod tests {
.cloned()
.map(RolloutItem::ResponseItem)
.collect();
let truncated = truncate_after_dropping_last_messages(InitialHistory::Forked(initial), 1);
let truncated = truncate_before_nth_user_message(InitialHistory::Forked(initial), 1);
let got_items = truncated.get_rollout_items();
let expected_items = vec![
RolloutItem::ResponseItem(items[0].clone()),
@@ -279,7 +270,37 @@ mod tests {
.cloned()
.map(RolloutItem::ResponseItem)
.collect();
let truncated2 = truncate_after_dropping_last_messages(InitialHistory::Forked(initial2), 2);
let truncated2 = truncate_before_nth_user_message(InitialHistory::Forked(initial2), 2);
assert!(matches!(truncated2, InitialHistory::New));
}
#[test]
fn ignores_session_prefix_messages_when_truncating() {
let (session, turn_context) = make_session_and_context();
let mut items = session.build_initial_context(&turn_context);
items.push(user_msg("feature request"));
items.push(assistant_msg("ack"));
items.push(user_msg("second question"));
items.push(assistant_msg("answer"));
let rollout_items: Vec<RolloutItem> = items
.iter()
.cloned()
.map(RolloutItem::ResponseItem)
.collect();
let truncated = truncate_before_nth_user_message(InitialHistory::Forked(rollout_items), 1);
let got_items = truncated.get_rollout_items();
let expected: Vec<RolloutItem> = vec![
RolloutItem::ResponseItem(items[0].clone()),
RolloutItem::ResponseItem(items[1].clone()),
RolloutItem::ResponseItem(items[2].clone()),
];
assert_eq!(
serde_json::to_value(&got_items).unwrap(),
serde_json::to_value(&expected).unwrap()
);
}
}

View File

@@ -52,7 +52,7 @@ pub async fn discover_prompts_in_excluding(
let Some(name) = path
.file_stem()
.and_then(|s| s.to_str())
.map(|s| s.to_string())
.map(str::to_string)
else {
continue;
};

View File

@@ -2,6 +2,7 @@ use serde::Deserialize;
use serde::Serialize;
use strum_macros::Display as DeriveDisplay;
use crate::codex::TurnContext;
use crate::protocol::AskForApproval;
use crate::protocol::SandboxPolicy;
use crate::shell::Shell;
@@ -71,6 +72,39 @@ impl EnvironmentContext {
shell,
}
}
/// Compares two environment contexts, ignoring the shell. Useful when
/// comparing turn to turn: the initial environment_context includes the
/// shell, which is not configurable from turn to turn.
pub fn equals_except_shell(&self, other: &EnvironmentContext) -> bool {
let EnvironmentContext {
cwd,
approval_policy,
sandbox_mode,
network_access,
writable_roots,
// should compare all fields except shell
shell: _,
} = other;
self.cwd == *cwd
&& self.approval_policy == *approval_policy
&& self.sandbox_mode == *sandbox_mode
&& self.network_access == *network_access
&& self.writable_roots == *writable_roots
}
}
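The field-by-field destructure above is deliberate; a sketch of the idiom, which turns any newly added field into a compile error until it is compared or explicitly ignored:
struct Ctx {
    a: u32,
    b: u32,
}
fn eq_except_b(x: &Ctx, y: &Ctx) -> bool {
    // Adding a field to `Ctx` makes this pattern non-exhaustive and fails to compile.
    let Ctx { a, b: _ } = y;
    x.a == *a
}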
impl From<&TurnContext> for EnvironmentContext {
fn from(turn_context: &TurnContext) -> Self {
Self::new(
Some(turn_context.cwd.clone()),
Some(turn_context.approval_policy),
Some(turn_context.sandbox_policy.clone()),
// Shell is not configurable from turn to turn
None,
)
}
}
impl EnvironmentContext {
@@ -140,6 +174,9 @@ impl From<EnvironmentContext> for ResponseItem {
#[cfg(test)]
mod tests {
use crate::shell::BashShell;
use crate::shell::ZshShell;
use super::*;
use pretty_assertions::assert_eq;
@@ -210,4 +247,82 @@ mod tests {
assert_eq!(context.serialize_to_xml(), expected);
}
#[test]
fn equals_except_shell_compares_approval_policy() {
// Approval policy
let context1 = EnvironmentContext::new(
Some(PathBuf::from("/repo")),
Some(AskForApproval::OnRequest),
Some(workspace_write_policy(vec!["/repo"], false)),
None,
);
let context2 = EnvironmentContext::new(
Some(PathBuf::from("/repo")),
Some(AskForApproval::Never),
Some(workspace_write_policy(vec!["/repo"], true)),
None,
);
assert!(!context1.equals_except_shell(&context2));
}
#[test]
fn equals_except_shell_compares_sandbox_policy() {
let context1 = EnvironmentContext::new(
Some(PathBuf::from("/repo")),
Some(AskForApproval::OnRequest),
Some(SandboxPolicy::new_read_only_policy()),
None,
);
let context2 = EnvironmentContext::new(
Some(PathBuf::from("/repo")),
Some(AskForApproval::OnRequest),
Some(SandboxPolicy::new_workspace_write_policy()),
None,
);
assert!(!context1.equals_except_shell(&context2));
}
#[test]
fn equals_except_shell_compares_workspace_write_policy() {
let context1 = EnvironmentContext::new(
Some(PathBuf::from("/repo")),
Some(AskForApproval::OnRequest),
Some(workspace_write_policy(vec!["/repo", "/tmp", "/var"], false)),
None,
);
let context2 = EnvironmentContext::new(
Some(PathBuf::from("/repo")),
Some(AskForApproval::OnRequest),
Some(workspace_write_policy(vec!["/repo", "/tmp"], true)),
None,
);
assert!(!context1.equals_except_shell(&context2));
}
#[test]
fn equals_except_shell_ignores_shell() {
let context1 = EnvironmentContext::new(
Some(PathBuf::from("/repo")),
Some(AskForApproval::OnRequest),
Some(workspace_write_policy(vec!["/repo"], false)),
Some(Shell::Bash(BashShell {
shell_path: "/bin/bash".into(),
bashrc_path: "/home/user/.bashrc".into(),
})),
);
let context2 = EnvironmentContext::new(
Some(PathBuf::from("/repo")),
Some(AskForApproval::OnRequest),
Some(workspace_write_policy(vec!["/repo"], false)),
Some(Shell::Zsh(ZshShell {
shell_path: "/bin/zsh".into(),
zshrc_path: "/home/user/.zshrc".into(),
})),
);
assert!(context1.equals_except_shell(&context2));
}
}

View File

@@ -1,6 +1,8 @@
use crate::exec::ExecToolCallOutput;
use crate::token_data::KnownPlan;
use crate::token_data::PlanType;
use codex_protocol::mcp_protocol::ConversationId;
use codex_protocol::protocol::RateLimitSnapshot;
use reqwest::StatusCode;
use serde_json;
use std::io;
@@ -13,8 +15,11 @@ pub type Result<T> = std::result::Result<T, CodexErr>;
#[derive(Error, Debug)]
pub enum SandboxErr {
/// Error from sandbox execution
#[error("sandbox denied exec error, exit code: {0}, stdout: {1}, stderr: {2}")]
Denied(i32, String, String),
#[error(
"sandbox denied exec error, exit code: {}, stdout: {}, stderr: {}",
.output.exit_code, .output.stdout.text, .output.stderr.text
)]
Denied { output: Box<ExecToolCallOutput> },
/// Error from linux seccomp filter setup
#[cfg(target_os = "linux")]
@@ -28,7 +33,7 @@ pub enum SandboxErr {
/// Command timed out
#[error("command timed out")]
Timeout,
Timeout { output: Box<ExecToolCallOutput> },
/// Command was killed by a signal
#[error("command was killed by a signal")]
@@ -100,6 +105,9 @@ pub enum CodexErr {
#[error("codex-linux-sandbox was required but not provided")]
LandlockSandboxExecutableNotProvided,
#[error("unsupported operation: {0}")]
UnsupportedOperation(String),
// -----------------------------------------------------------------
// Automatic conversions for common external error types
// -----------------------------------------------------------------
@@ -131,6 +139,7 @@ pub enum CodexErr {
pub struct UsageLimitReachedError {
pub(crate) plan_type: Option<PlanType>,
pub(crate) resets_in_seconds: Option<u64>,
pub(crate) rate_limits: Option<RateLimitSnapshot>,
}
impl std::fmt::Display for UsageLimitReachedError {
@@ -245,9 +254,12 @@ impl CodexErr {
pub fn get_error_message_ui(e: &CodexErr) -> String {
match e {
CodexErr::Sandbox(SandboxErr::Denied(_, _, stderr)) => stderr.to_string(),
CodexErr::Sandbox(SandboxErr::Denied { output }) => output.stderr.text.clone(),
// Timeouts are not sandbox errors from a UX perspective; present them plainly
CodexErr::Sandbox(SandboxErr::Timeout) => "error: command timed out".to_string(),
CodexErr::Sandbox(SandboxErr::Timeout { output }) => format!(
"error: command timed out after {} ms",
output.duration.as_millis()
),
_ => e.to_string(),
}
}
@@ -255,12 +267,29 @@ pub fn get_error_message_ui(e: &CodexErr) -> String {
#[cfg(test)]
mod tests {
use super::*;
use codex_protocol::protocol::RateLimitWindow;
fn rate_limit_snapshot() -> RateLimitSnapshot {
RateLimitSnapshot {
primary: Some(RateLimitWindow {
used_percent: 50.0,
window_minutes: Some(60),
resets_in_seconds: Some(3600),
}),
secondary: Some(RateLimitWindow {
used_percent: 30.0,
window_minutes: Some(120),
resets_in_seconds: Some(7200),
}),
}
}
#[test]
fn usage_limit_reached_error_formats_plus_plan() {
let err = UsageLimitReachedError {
plan_type: Some(PlanType::Known(KnownPlan::Plus)),
resets_in_seconds: None,
rate_limits: Some(rate_limit_snapshot()),
};
assert_eq!(
err.to_string(),
@@ -273,6 +302,7 @@ mod tests {
let err = UsageLimitReachedError {
plan_type: Some(PlanType::Known(KnownPlan::Free)),
resets_in_seconds: Some(3600),
rate_limits: Some(rate_limit_snapshot()),
};
assert_eq!(
err.to_string(),
@@ -285,6 +315,7 @@ mod tests {
let err = UsageLimitReachedError {
plan_type: None,
resets_in_seconds: None,
rate_limits: Some(rate_limit_snapshot()),
};
assert_eq!(
err.to_string(),
@@ -297,6 +328,7 @@ mod tests {
let err = UsageLimitReachedError {
plan_type: Some(PlanType::Known(KnownPlan::Team)),
resets_in_seconds: Some(3600),
rate_limits: Some(rate_limit_snapshot()),
};
assert_eq!(
err.to_string(),
@@ -309,6 +341,7 @@ mod tests {
let err = UsageLimitReachedError {
plan_type: Some(PlanType::Known(KnownPlan::Business)),
resets_in_seconds: None,
rate_limits: Some(rate_limit_snapshot()),
};
assert_eq!(
err.to_string(),
@@ -321,6 +354,7 @@ mod tests {
let err = UsageLimitReachedError {
plan_type: Some(PlanType::Known(KnownPlan::Pro)),
resets_in_seconds: None,
rate_limits: Some(rate_limit_snapshot()),
};
assert_eq!(
err.to_string(),
@@ -333,6 +367,7 @@ mod tests {
let err = UsageLimitReachedError {
plan_type: None,
resets_in_seconds: Some(5 * 60),
rate_limits: Some(rate_limit_snapshot()),
};
assert_eq!(
err.to_string(),
@@ -345,6 +380,7 @@ mod tests {
let err = UsageLimitReachedError {
plan_type: Some(PlanType::Known(KnownPlan::Plus)),
resets_in_seconds: Some(3 * 3600 + 32 * 60),
rate_limits: Some(rate_limit_snapshot()),
};
assert_eq!(
err.to_string(),
@@ -357,6 +393,7 @@ mod tests {
let err = UsageLimitReachedError {
plan_type: None,
resets_in_seconds: Some(2 * 86_400 + 3 * 3600 + 5 * 60),
rate_limits: Some(rate_limit_snapshot()),
};
assert_eq!(
err.to_string(),
@@ -369,6 +406,7 @@ mod tests {
let err = UsageLimitReachedError {
plan_type: None,
resets_in_seconds: Some(30),
rate_limits: Some(rate_limit_snapshot()),
};
assert_eq!(
err.to_string(),

View File

@@ -3,6 +3,7 @@ use std::os::unix::process::ExitStatusExt;
use std::collections::HashMap;
use std::io;
use std::path::Path;
use std::path::PathBuf;
use std::process::ExitStatus;
use std::time::Duration;
@@ -34,6 +35,7 @@ const DEFAULT_TIMEOUT_MS: u64 = 10_000;
const SIGKILL_CODE: i32 = 9;
const TIMEOUT_CODE: i32 = 64;
const EXIT_CODE_SIGNAL_BASE: i32 = 128; // conventional shell: 128 + signal
const EXEC_TIMEOUT_EXIT_CODE: i32 = 124; // conventional timeout exit code
// I/O buffer sizing
const READ_CHUNK_SIZE: usize = 8192; // bytes per read
@@ -43,7 +45,7 @@ const AGGREGATE_BUFFER_INITIAL_CAPACITY: usize = 8 * 1024; // 8 KiB
/// Aggregation still collects full output; only the live event stream is capped.
pub(crate) const MAX_EXEC_OUTPUT_DELTAS_PER_CALL: usize = 10_000;
#[derive(Debug, Clone)]
#[derive(Clone, Debug)]
pub struct ExecParams {
pub command: Vec<String>,
pub cwd: PathBuf,
@@ -81,33 +83,41 @@ pub async fn process_exec_tool_call(
params: ExecParams,
sandbox_type: SandboxType,
sandbox_policy: &SandboxPolicy,
sandbox_cwd: &Path,
codex_linux_sandbox_exe: &Option<PathBuf>,
stdout_stream: Option<StdoutStream>,
) -> Result<ExecToolCallOutput> {
let start = Instant::now();
let timeout_duration = params.timeout_duration();
let raw_output_result: std::result::Result<RawExecToolCallOutput, CodexErr> = match sandbox_type
{
SandboxType::None => exec(params, sandbox_policy, stdout_stream.clone()).await,
SandboxType::MacosSeatbelt => {
let timeout = params.timeout_duration();
let ExecParams {
command, cwd, env, ..
command,
cwd: command_cwd,
env,
..
} = params;
let child = spawn_command_under_seatbelt(
command,
command_cwd,
sandbox_policy,
cwd,
sandbox_cwd,
StdioPolicy::RedirectForShellTool,
env,
)
.await?;
consume_truncated_output(child, timeout, stdout_stream.clone()).await
consume_truncated_output(child, timeout_duration, stdout_stream.clone()).await
}
SandboxType::LinuxSeccomp => {
let timeout = params.timeout_duration();
let ExecParams {
command, cwd, env, ..
command,
cwd: command_cwd,
env,
..
} = params;
let codex_linux_sandbox_exe = codex_linux_sandbox_exe
@@ -116,48 +126,64 @@ pub async fn process_exec_tool_call(
let child = spawn_command_under_linux_sandbox(
codex_linux_sandbox_exe,
command,
command_cwd,
sandbox_policy,
cwd,
sandbox_cwd,
StdioPolicy::RedirectForShellTool,
env,
)
.await?;
consume_truncated_output(child, timeout, stdout_stream).await
consume_truncated_output(child, timeout_duration, stdout_stream).await
}
};
let duration = start.elapsed();
match raw_output_result {
Ok(raw_output) => {
let stdout = raw_output.stdout.from_utf8_lossy();
let stderr = raw_output.stderr.from_utf8_lossy();
#[allow(unused_mut)]
let mut timed_out = raw_output.timed_out;
#[cfg(target_family = "unix")]
match raw_output.exit_status.signal() {
Some(TIMEOUT_CODE) => return Err(CodexErr::Sandbox(SandboxErr::Timeout)),
Some(signal) => {
return Err(CodexErr::Sandbox(SandboxErr::Signal(signal)));
{
if let Some(signal) = raw_output.exit_status.signal() {
if signal == TIMEOUT_CODE {
timed_out = true;
} else {
return Err(CodexErr::Sandbox(SandboxErr::Signal(signal)));
}
}
None => {}
}
let exit_code = raw_output.exit_status.code().unwrap_or(-1);
if exit_code != 0 && is_likely_sandbox_denied(sandbox_type, exit_code) {
return Err(CodexErr::Sandbox(SandboxErr::Denied(
exit_code,
stdout.text,
stderr.text,
)));
let mut exit_code = raw_output.exit_status.code().unwrap_or(-1);
if timed_out {
exit_code = EXEC_TIMEOUT_EXIT_CODE;
}
Ok(ExecToolCallOutput {
let stdout = raw_output.stdout.from_utf8_lossy();
let stderr = raw_output.stderr.from_utf8_lossy();
let aggregated_output = raw_output.aggregated_output.from_utf8_lossy();
let exec_output = ExecToolCallOutput {
exit_code,
stdout,
stderr,
aggregated_output: raw_output.aggregated_output.from_utf8_lossy(),
aggregated_output,
duration,
})
timed_out,
};
if timed_out {
return Err(CodexErr::Sandbox(SandboxErr::Timeout {
output: Box::new(exec_output),
}));
}
if exit_code != 0 && is_likely_sandbox_denied(sandbox_type, exit_code) {
return Err(CodexErr::Sandbox(SandboxErr::Denied {
output: Box::new(exec_output),
}));
}
Ok(exec_output)
}
Err(err) => {
tracing::error!("exec error: {err}");
@@ -197,6 +223,7 @@ struct RawExecToolCallOutput {
pub stdout: StreamOutput<Vec<u8>>,
pub stderr: StreamOutput<Vec<u8>>,
pub aggregated_output: StreamOutput<Vec<u8>>,
pub timed_out: bool,
}
impl StreamOutput<String> {
@@ -229,6 +256,7 @@ pub struct ExecToolCallOutput {
pub stderr: StreamOutput<String>,
pub aggregated_output: StreamOutput<String>,
pub duration: Duration,
pub timed_out: bool,
}
async fn exec(
@@ -298,22 +326,24 @@ async fn consume_truncated_output(
Some(agg_tx.clone()),
));
let exit_status = tokio::select! {
let (exit_status, timed_out) = tokio::select! {
result = tokio::time::timeout(timeout, child.wait()) => {
match result {
Ok(Ok(exit_status)) => exit_status,
Ok(e) => e?,
Ok(status_result) => {
let exit_status = status_result?;
(exit_status, false)
}
Err(_) => {
// timeout
child.start_kill()?;
// Debatable whether `child.wait().await` should be called here.
synthetic_exit_status(EXIT_CODE_SIGNAL_BASE + TIMEOUT_CODE)
(synthetic_exit_status(EXIT_CODE_SIGNAL_BASE + TIMEOUT_CODE), true)
}
}
}
_ = tokio::signal::ctrl_c() => {
child.start_kill()?;
synthetic_exit_status(EXIT_CODE_SIGNAL_BASE + SIGKILL_CODE)
(synthetic_exit_status(EXIT_CODE_SIGNAL_BASE + SIGKILL_CODE), false)
}
};
@@ -336,6 +366,7 @@ async fn consume_truncated_output(
stdout,
stderr,
aggregated_output,
timed_out,
})
}
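With `timed_out` threaded through, a caller-side sketch (hypothetical `params`, `policy`, and `cwd` values) of distinguishing a timeout while keeping the captured output:
match process_exec_tool_call(params, SandboxType::None, &policy, &cwd, &None, None).await {
    Err(CodexErr::Sandbox(SandboxErr::Timeout { output })) => {
        // Output is preserved on timeout; exit_code is EXEC_TIMEOUT_EXIT_CODE (124).
        eprintln!("timed out after {} ms", output.duration.as_millis());
    }
    Err(err) => eprintln!("{}", get_error_message_ui(&err)),
    Ok(output) => println!("exit code {}", output.exit_code),
}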

View File

@@ -38,16 +38,20 @@ impl ExecCommandSession {
writer_handle: JoinHandle<()>,
wait_handle: JoinHandle<()>,
exit_status: std::sync::Arc<std::sync::atomic::AtomicBool>,
) -> Self {
Self {
writer_tx,
output_tx,
killer: StdMutex::new(Some(killer)),
reader_handle: StdMutex::new(Some(reader_handle)),
writer_handle: StdMutex::new(Some(writer_handle)),
wait_handle: StdMutex::new(Some(wait_handle)),
exit_status,
}
) -> (Self, broadcast::Receiver<Vec<u8>>) {
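// Subscribe before constructing the session: broadcast subscribers only see
// output from subscription time, so this guarantees the first reader does not
// miss early chunks.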
let initial_output_rx = output_tx.subscribe();
(
Self {
writer_tx,
output_tx,
killer: StdMutex::new(Some(killer)),
reader_handle: StdMutex::new(Some(reader_handle)),
writer_handle: StdMutex::new(Some(writer_handle)),
wait_handle: StdMutex::new(Some(wait_handle)),
exit_status,
},
initial_output_rx,
)
}
pub(crate) fn writer_sender(&self) -> mpsc::Sender<Vec<u8>> {

View File

@@ -93,18 +93,16 @@ impl SessionManager {
.fetch_add(1, std::sync::atomic::Ordering::SeqCst),
);
let (session, mut exit_rx) =
create_exec_command_session(params.clone())
.await
.map_err(|err| {
format!(
"failed to create exec command session for session id {}: {err}",
session_id.0
)
})?;
let (session, mut output_rx, mut exit_rx) = create_exec_command_session(params.clone())
.await
.map_err(|err| {
format!(
"failed to create exec command session for session id {}: {err}",
session_id.0
)
})?;
// Insert into session map.
let mut output_rx = session.output_receiver();
self.sessions.lock().await.insert(session_id, session);
// Collect output until either timeout expires or process exits.
@@ -245,7 +243,11 @@ impl SessionManager {
/// Spawn PTY and child process per spawn_exec_command_session logic.
async fn create_exec_command_session(
params: ExecCommandParams,
) -> anyhow::Result<(ExecCommandSession, oneshot::Receiver<i32>)> {
) -> anyhow::Result<(
ExecCommandSession,
tokio::sync::broadcast::Receiver<Vec<u8>>,
oneshot::Receiver<i32>,
)> {
let ExecCommandParams {
cmd,
yield_time_ms: _,
@@ -279,7 +281,6 @@ async fn create_exec_command_session(
let (writer_tx, mut writer_rx) = mpsc::channel::<Vec<u8>>(128);
// Broadcast for streaming PTY output to readers: subscribers receive from subscription time.
let (output_tx, _) = tokio::sync::broadcast::channel::<Vec<u8>>(256);
// Reader task: drain PTY and forward chunks to output channel.
let mut reader = pair.master.try_clone_reader()?;
let output_tx_clone = output_tx.clone();
@@ -341,7 +342,7 @@ async fn create_exec_command_session(
});
// Create and store the session with channels.
let session = ExecCommandSession::new(
let (session, initial_output_rx) = ExecCommandSession::new(
writer_tx,
output_tx,
killer,
@@ -350,7 +351,7 @@ async fn create_exec_command_session(
wait_handle,
exit_status,
);
Ok((session, exit_rx))
Ok((session, initial_output_rx, exit_rx))
}
#[cfg(test)]

View File

@@ -108,6 +108,61 @@ pub async fn collect_git_info(cwd: &Path) -> Option<GitInfo> {
Some(git_info)
}
/// A minimal commit summary entry used for pickers (subject + timestamp + sha).
#[derive(Clone, Debug, Serialize, Deserialize)]
pub struct CommitLogEntry {
pub sha: String,
/// Unix timestamp (seconds since epoch) of the commit time (committer time).
pub timestamp: i64,
/// Single-line subject of the commit message.
pub subject: String,
}
/// Return the last `limit` commits reachable from HEAD for the current branch.
/// Each entry contains the SHA, commit timestamp (seconds), and subject line.
/// Returns an empty vector if not in a git repo or on error/timeout.
pub async fn recent_commits(cwd: &Path, limit: usize) -> Vec<CommitLogEntry> {
// Ensure we're in a git repo first to avoid noisy errors.
let Some(out) = run_git_command_with_timeout(&["rev-parse", "--git-dir"], cwd).await else {
return Vec::new();
};
if !out.status.success() {
return Vec::new();
}
let fmt = "%H%x1f%ct%x1f%s"; // <sha> <US> <commit_time> <US> <subject>
let n = limit.max(1).to_string();
let Some(log_out) =
run_git_command_with_timeout(&["log", "-n", &n, &format!("--pretty=format:{fmt}")], cwd)
.await
else {
return Vec::new();
};
if !log_out.status.success() {
return Vec::new();
}
let text = String::from_utf8_lossy(&log_out.stdout);
let mut entries: Vec<CommitLogEntry> = Vec::new();
for line in text.lines() {
let mut parts = line.split('\u{001f}');
let sha = parts.next().unwrap_or("").trim();
let ts_s = parts.next().unwrap_or("").trim();
let subject = parts.next().unwrap_or("").trim();
if sha.is_empty() || ts_s.is_empty() {
continue;
}
let timestamp = ts_s.parse::<i64>().unwrap_or(0);
entries.push(CommitLogEntry {
sha: sha.to_string(),
timestamp,
subject: subject.to_string(),
});
}
entries
}
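A minimal usage sketch (async context; assumes the current directory is a repository with full 40-character SHAs):
let commits = recent_commits(Path::new("."), 5).await;
for entry in &commits {
    println!("{} {} {}", &entry.sha[..7], entry.timestamp, entry.subject);
}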
/// Returns the closest git sha to HEAD that is on a remote as well as the diff to that sha.
pub async fn git_diff_to_remote(cwd: &Path) -> Option<GitDiffToRemote> {
get_git_repo_root(cwd)?;
@@ -145,7 +200,7 @@ async fn get_git_remotes(cwd: &Path) -> Option<Vec<String>> {
let mut remotes: Vec<String> = String::from_utf8(output.stdout)
.ok()?
.lines()
.map(|s| s.to_string())
.map(str::to_string)
.collect();
if let Some(pos) = remotes.iter().position(|r| r == "origin") {
let origin = remotes.remove(pos);
@@ -202,6 +257,11 @@ async fn get_default_branch(cwd: &Path) -> Option<String> {
}
// No remote-derived default; try common local defaults if they exist
get_default_branch_local(cwd).await
}
/// Attempt to determine the repository's default branch name from local branches.
async fn get_default_branch_local(cwd: &Path) -> Option<String> {
for candidate in ["main", "master"] {
if let Some(verify) = run_git_command_with_timeout(
&[
@@ -417,7 +477,7 @@ async fn diff_against_sha(cwd: &Path, sha: &GitSha) -> Option<String> {
let untracked: Vec<String> = String::from_utf8(untracked_output.stdout)
.ok()?
.lines()
.map(|s| s.to_string())
.map(str::to_string)
.filter(|s| !s.is_empty())
.collect();
@@ -485,6 +545,46 @@ pub fn resolve_root_git_project_for_trust(cwd: &Path) -> Option<PathBuf> {
git_dir_path.parent().map(Path::to_path_buf)
}
/// Returns a list of local git branches.
/// Includes the default branch at the beginning of the list, if it exists.
pub async fn local_git_branches(cwd: &Path) -> Vec<String> {
let mut branches: Vec<String> = if let Some(out) =
run_git_command_with_timeout(&["branch", "--format=%(refname:short)"], cwd).await
&& out.status.success()
{
String::from_utf8_lossy(&out.stdout)
.lines()
.map(|s| s.trim().to_string())
.filter(|s| !s.is_empty())
.collect()
} else {
Vec::new()
};
branches.sort_unstable();
if let Some(base) = get_default_branch_local(cwd).await
&& let Some(pos) = branches.iter().position(|name| name == &base)
{
let base_branch = branches.remove(pos);
branches.insert(0, base_branch);
}
branches
}
/// Returns the current checked out branch name.
pub async fn current_branch_name(cwd: &Path) -> Option<String> {
let out = run_git_command_with_timeout(&["branch", "--show-current"], cwd).await?;
if !out.status.success() {
return None;
}
String::from_utf8(out.stdout)
.ok()
.map(|s| s.trim().to_string())
.filter(|name| !name.is_empty())
}
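The branch helpers compose similarly (sketch, same assumptions):
let branches = local_git_branches(Path::new(".")).await; // default branch first, when present
if let Some(current) = current_branch_name(Path::new(".")).await {
    println!("on {current} ({} local branches)", branches.len());
}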
#[cfg(test)]
mod tests {
use super::*;
@@ -551,6 +651,80 @@ mod tests {
repo_path
}
#[tokio::test]
async fn test_recent_commits_non_git_directory_returns_empty() {
let temp_dir = TempDir::new().expect("Failed to create temp dir");
let entries = recent_commits(temp_dir.path(), 10).await;
assert!(entries.is_empty(), "expected no commits outside a git repo");
}
#[tokio::test]
async fn test_recent_commits_orders_and_limits() {
use tokio::time::Duration;
use tokio::time::sleep;
let temp_dir = TempDir::new().expect("Failed to create temp dir");
let repo_path = create_test_git_repo(&temp_dir).await;
// Make three distinct commits with small delays to ensure ordering by timestamp.
fs::write(repo_path.join("file.txt"), "one").unwrap();
Command::new("git")
.args(["add", "file.txt"])
.current_dir(&repo_path)
.output()
.await
.expect("git add");
Command::new("git")
.args(["commit", "-m", "first change"])
.current_dir(&repo_path)
.output()
.await
.expect("git commit 1");
sleep(Duration::from_millis(1100)).await;
fs::write(repo_path.join("file.txt"), "two").unwrap();
Command::new("git")
.args(["add", "file.txt"])
.current_dir(&repo_path)
.output()
.await
.expect("git add 2");
Command::new("git")
.args(["commit", "-m", "second change"])
.current_dir(&repo_path)
.output()
.await
.expect("git commit 2");
sleep(Duration::from_millis(1100)).await;
fs::write(repo_path.join("file.txt"), "three").unwrap();
Command::new("git")
.args(["add", "file.txt"])
.current_dir(&repo_path)
.output()
.await
.expect("git add 3");
Command::new("git")
.args(["commit", "-m", "third change"])
.current_dir(&repo_path)
.output()
.await
.expect("git commit 3");
// Request the latest 3 commits; should be our three changes in reverse time order.
let entries = recent_commits(&repo_path, 3).await;
assert_eq!(entries.len(), 3);
assert_eq!(entries[0].subject, "third change");
assert_eq!(entries[1].subject, "second change");
assert_eq!(entries[2].subject, "first change");
// Basic sanity on SHA formatting
for e in entries {
assert!(e.sha.len() >= 7 && e.sha.chars().all(|c| c.is_ascii_hexdigit()));
}
}
async fn create_test_git_repo_with_remote(temp_dir: &TempDir) -> (PathBuf, String) {
let repo_path = create_test_git_repo(temp_dir).await;
let remote_path = temp_dir.path().join("remote.git");

View File

@@ -1,17 +1,31 @@
use anyhow::Context;
use serde::Deserialize;
use serde::Serialize;
use std::io::ErrorKind;
use std::path::Path;
use std::path::PathBuf;
pub(crate) const INTERNAL_STORAGE_FILE: &str = "internal_storage.json";
#[derive(Debug, Default, Clone, Serialize, Deserialize)]
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct InternalStorage {
#[serde(skip)]
storage_path: PathBuf,
#[serde(default)]
pub gpt_5_high_model_prompt_seen: bool,
#[serde(default = "default_gpt_5_codex_model_prompt_seen")]
pub gpt_5_codex_model_prompt_seen: bool,
}
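// Files written before this flag existed omit it; defaulting to `true`
// treats the prompt as already seen instead of surfacing it again.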
const fn default_gpt_5_codex_model_prompt_seen() -> bool {
true
}
impl Default for InternalStorage {
fn default() -> Self {
Self {
storage_path: PathBuf::new(),
gpt_5_codex_model_prompt_seen: default_gpt_5_codex_model_prompt_seen(),
}
}
}
// TODO(jif) generalise all the file writers and build proper async channel inserters.
@@ -31,7 +45,14 @@ impl InternalStorage {
}
},
Err(error) => {
tracing::warn!("failed to read internal storage: {error:?}");
if error.kind() == ErrorKind::NotFound {
tracing::debug!(
"internal storage not found at {}; initializing defaults",
storage_path.display()
);
} else {
tracing::warn!("failed to read internal storage: {error:?}");
}
Self::empty(storage_path)
}
}

View File

@@ -160,9 +160,10 @@ fn is_valid_sed_n_arg(arg: Option<&str>) -> bool {
#[cfg(test)]
mod tests {
use super::*;
use std::string::ToString;
fn vec_str(args: &[&str]) -> Vec<String> {
args.iter().map(|s| s.to_string()).collect()
args.iter().map(ToString::to_string).collect()
}
#[test]

View File

@@ -16,21 +16,22 @@ use tokio::process::Child;
pub async fn spawn_command_under_linux_sandbox<P>(
codex_linux_sandbox_exe: P,
command: Vec<String>,
command_cwd: PathBuf,
sandbox_policy: &SandboxPolicy,
cwd: PathBuf,
sandbox_policy_cwd: &Path,
stdio_policy: StdioPolicy,
env: HashMap<String, String>,
) -> std::io::Result<Child>
where
P: AsRef<Path>,
{
let args = create_linux_sandbox_command_args(command, sandbox_policy, &cwd);
let args = create_linux_sandbox_command_args(command, sandbox_policy, sandbox_policy_cwd);
let arg0 = Some("codex-linux-sandbox");
spawn_child_async(
codex_linux_sandbox_exe.as_ref().to_path_buf(),
args,
arg0,
cwd,
command_cwd,
sandbox_policy,
stdio_policy,
env,
@@ -42,10 +43,13 @@ where
fn create_linux_sandbox_command_args(
command: Vec<String>,
sandbox_policy: &SandboxPolicy,
cwd: &Path,
sandbox_policy_cwd: &Path,
) -> Vec<String> {
#[expect(clippy::expect_used)]
let sandbox_policy_cwd = cwd.to_str().expect("cwd must be valid UTF-8").to_string();
let sandbox_policy_cwd = sandbox_policy_cwd
.to_str()
.expect("cwd must be valid UTF-8")
.to_string();
#[expect(clippy::expect_used)]
let sandbox_policy_json =

View File

@@ -46,6 +46,7 @@ pub use model_provider_info::built_in_model_providers;
pub use model_provider_info::create_oss_provider_with_base_url;
mod conversation_manager;
mod event_mapping;
pub mod review_format;
pub use codex_protocol::protocol::InitialHistory;
pub use conversation_manager::ConversationManager;
pub use conversation_manager::NewConversation;
@@ -70,6 +71,7 @@ pub use rollout::ARCHIVED_SESSIONS_SUBDIR;
pub use rollout::RolloutRecorder;
pub use rollout::SESSIONS_SUBDIR;
pub use rollout::SessionMeta;
pub use rollout::find_conversation_path_by_id_str;
pub use rollout::list::ConversationItem;
pub use rollout::list::ConversationsPage;
pub use rollout::list::Cursor;
@@ -87,8 +89,11 @@ pub use codex_protocol::config_types as protocol_config_types;
pub use client::ModelClient;
pub use client_common::Prompt;
pub use client_common::REVIEW_PROMPT;
pub use client_common::ResponseEvent;
pub use client_common::ResponseStream;
pub use codex::compact::content_items_to_text;
pub use codex::compact::is_session_prefix_message;
pub use codex_protocol::models::ContentItem;
pub use codex_protocol::models::LocalShellAction;
pub use codex_protocol::models::LocalShellExecAction;

View File

@@ -40,6 +40,9 @@ const MAX_TOOL_NAME_LENGTH: usize = 64;
/// Default timeout for initializing MCP server & initially listing tools.
const DEFAULT_STARTUP_TIMEOUT: Duration = Duration::from_secs(10);
/// Default timeout for individual tool calls.
const DEFAULT_TOOL_TIMEOUT: Duration = Duration::from_secs(60);
/// Map that holds a startup error for every MCP server that could **not** be
/// spawned successfully.
pub type ClientStartErrors = HashMap<String, anyhow::Error>;
@@ -85,6 +88,7 @@ struct ToolInfo {
struct ManagedClient {
client: Arc<McpClient>,
startup_timeout: Duration,
tool_timeout: Option<Duration>,
}
/// A thin wrapper around a set of running [`McpClient`] instances.
@@ -132,10 +136,9 @@ impl McpConnectionManager {
continue;
}
let startup_timeout = cfg.startup_timeout_sec.unwrap_or(DEFAULT_STARTUP_TIMEOUT);
let tool_timeout = cfg.tool_timeout_sec.unwrap_or(DEFAULT_TOOL_TIMEOUT);
join_set.spawn(async move {
let McpServerConfig {
@@ -171,19 +174,19 @@ impl McpConnectionManager {
protocol_version: mcp_types::MCP_SCHEMA_VERSION.to_owned(),
};
let initialize_notification_params = None;
let init_result = client
.initialize(
params,
initialize_notification_params,
Some(startup_timeout),
)
.await;
(
(server_name, tool_timeout),
init_result.map(|_| (client, startup_timeout)),
)
}
Err(e) => ((server_name, tool_timeout), Err(e.into())),
}
});
}
@@ -191,8 +194,8 @@ impl McpConnectionManager {
let mut clients: HashMap<String, ManagedClient> = HashMap::with_capacity(join_set.len());
while let Some(res) = join_set.join_next().await {
let ((server_name, tool_timeout), client_res) = match res {
Ok(result) => result,
Err(e) => {
warn!("Task panic when starting MCP server: {e:#}");
continue;
@@ -206,6 +209,7 @@ impl McpConnectionManager {
ManagedClient {
client: Arc::new(client),
startup_timeout,
tool_timeout: Some(tool_timeout),
},
);
}
@@ -243,14 +247,13 @@ impl McpConnectionManager {
server: &str,
tool: &str,
arguments: Option<serde_json::Value>,
) -> Result<mcp_types::CallToolResult> {
let managed = self
.clients
.get(server)
.ok_or_else(|| anyhow!("unknown MCP server '{server}'"))?;
let client = managed.client.clone();
let timeout = managed.tool_timeout;
client
.call_tool(tool.to_string(), arguments, timeout)

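Timeouts are now resolved once per server when the connection manager starts, instead of being threaded through every tool call. A standalone sketch of the fallback logic (`ServerCfg` is a stand-in for the real `McpServerConfig`):

```rust
use std::time::Duration;

const DEFAULT_STARTUP_TIMEOUT: Duration = Duration::from_secs(10);
const DEFAULT_TOOL_TIMEOUT: Duration = Duration::from_secs(60);

// Stand-in for the `startup_timeout_sec` / `tool_timeout_sec` config fields.
struct ServerCfg {
    startup_timeout_sec: Option<Duration>,
    tool_timeout_sec: Option<Duration>,
}

fn resolve_timeouts(cfg: &ServerCfg) -> (Duration, Duration) {
    (
        cfg.startup_timeout_sec.unwrap_or(DEFAULT_STARTUP_TIMEOUT),
        cfg.tool_timeout_sec.unwrap_or(DEFAULT_TOOL_TIMEOUT),
    )
}

fn main() {
    let cfg = ServerCfg {
        startup_timeout_sec: None,
        tool_timeout_sec: Some(Duration::from_secs(120)),
    };
    // Unset fields fall back to the defaults above.
    assert_eq!(
        resolve_timeouts(&cfg),
        (DEFAULT_STARTUP_TIMEOUT, Duration::from_secs(120))
    );
}
```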
View File

@@ -1,4 +1,3 @@
use std::time::Instant;
use tracing::error;
@@ -21,7 +20,6 @@ pub(crate) async fn handle_mcp_tool_call(
server: String,
tool_name: String,
arguments: String,
) -> ResponseInputItem {
// Parse the `arguments` as JSON. An empty string is OK, but invalid JSON
// is not.
@@ -58,7 +56,7 @@ pub(crate) async fn handle_mcp_tool_call(
let start = Instant::now();
// Perform the tool call.
let result = sess
.call_tool(&server, &tool_name, arguments_value.clone())
.await
.map_err(|e| format!("tool call error: {e}"));
let tool_call_end_event = EventMsg::McpToolCallEnd(McpToolCallEndEvent {

View File

@@ -1,6 +1,11 @@
use crate::config_types::ReasoningSummaryFormat;
use crate::tool_apply_patch::ApplyPatchToolType;
/// The `instructions` field in the payload sent to a model should always start
/// with this content.
const BASE_INSTRUCTIONS: &str = include_str!("../prompt.md");
const GPT_5_CODEX_INSTRUCTIONS: &str = include_str!("../gpt_5_codex_prompt.md");
/// A model family is a group of models that share certain characteristics.
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
pub struct ModelFamily {
@@ -33,6 +38,9 @@ pub struct ModelFamily {
/// Present if the model performs better when `apply_patch` is provided as
/// a tool call instead of just a bash command
pub apply_patch_tool_type: Option<ApplyPatchToolType>,
// Instructions to use for querying the model
pub base_instructions: String,
}
macro_rules! model_family {
@@ -48,6 +56,7 @@ macro_rules! model_family {
reasoning_summary_format: ReasoningSummaryFormat::None,
uses_local_shell_tool: false,
apply_patch_tool_type: None,
base_instructions: BASE_INSTRUCTIONS.to_string(),
};
// apply overrides
$(
@@ -57,22 +66,6 @@ macro_rules! model_family {
}};
}
/// Returns a `ModelFamily` for the given model slug, or `None` if the slug
/// does not match any known model family.
pub fn find_family_for_model(slug: &str) -> Option<ModelFamily> {
@@ -80,23 +73,20 @@ pub fn find_family_for_model(slug: &str) -> Option<ModelFamily> {
model_family!(
slug, "o3",
supports_reasoning_summaries: true,
needs_special_apply_patch_instructions: true,
)
} else if slug.starts_with("o4-mini") {
model_family!(
slug, "o4-mini",
supports_reasoning_summaries: true,
needs_special_apply_patch_instructions: true,
)
} else if slug.starts_with("codex-mini-latest") {
model_family!(
slug, "codex-mini-latest",
supports_reasoning_summaries: true,
uses_local_shell_tool: true,
)
} else if slug.starts_with("codex-") {
model_family!(
slug, slug,
supports_reasoning_summaries: true,
reasoning_summary_format: ReasoningSummaryFormat::Experimental,
needs_special_apply_patch_instructions: true,
)
} else if slug.starts_with("gpt-4.1") {
model_family!(
@@ -106,15 +96,36 @@ pub fn find_family_for_model(slug: &str) -> Option<ModelFamily> {
} else if slug.starts_with("gpt-oss") || slug.starts_with("openai/gpt-oss") {
model_family!(slug, "gpt-oss", apply_patch_tool_type: Some(ApplyPatchToolType::Function))
} else if slug.starts_with("gpt-4o") {
simple_model_family!(slug, "gpt-4o")
model_family!(slug, "gpt-4o", needs_special_apply_patch_instructions: true)
} else if slug.starts_with("gpt-3.5") {
simple_model_family!(slug, "gpt-3.5")
model_family!(slug, "gpt-3.5", needs_special_apply_patch_instructions: true)
} else if slug.starts_with("codex-") || slug.starts_with("gpt-5-codex") {
model_family!(
slug, slug,
supports_reasoning_summaries: true,
reasoning_summary_format: ReasoningSummaryFormat::Experimental,
base_instructions: GPT_5_CODEX_INSTRUCTIONS.to_string(),
)
} else if slug.starts_with("gpt-5") {
model_family!(
slug, "gpt-5",
supports_reasoning_summaries: true,
needs_special_apply_patch_instructions: true,
)
} else {
None
}
}
pub fn derive_default_model_family(model: &str) -> ModelFamily {
ModelFamily {
slug: model.to_string(),
family: model.to_string(),
needs_special_apply_patch_instructions: false,
supports_reasoning_summaries: false,
reasoning_summary_format: ReasoningSummaryFormat::None,
uses_local_shell_tool: false,
apply_patch_tool_type: None,
base_instructions: BASE_INSTRUCTIONS.to_string(),
}
}
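With `base_instructions` carried on the family itself, prompt selection reduces to a slug-prefix check, with `derive_default_model_family` covering unknown slugs. A simplified, self-contained sketch (placeholder strings stand in for the `include_str!` prompts, and the earlier, more specific branches such as `codex-mini-latest` are ignored):

```rust
const BASE_INSTRUCTIONS: &str = "<contents of prompt.md>";
const GPT_5_CODEX_INSTRUCTIONS: &str = "<contents of gpt_5_codex_prompt.md>";

fn base_instructions_for(slug: &str) -> &'static str {
    // Branch order matters: "gpt-5-codex" also starts with "gpt-5",
    // so this check must come before any plain "gpt-5" check.
    if slug.starts_with("codex-") || slug.starts_with("gpt-5-codex") {
        GPT_5_CODEX_INSTRUCTIONS
    } else {
        BASE_INSTRUCTIONS
    }
}

fn main() {
    assert_eq!(base_instructions_for("gpt-5-codex"), GPT_5_CODEX_INSTRUCTIONS);
    assert_eq!(base_instructions_for("gpt-5"), BASE_INSTRUCTIONS);
    // Unknown slugs get the default prompt, matching derive_default_model_family.
    assert_eq!(base_instructions_for("some-unknown-model"), BASE_INSTRUCTIONS);
}
```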

View File

@@ -162,6 +162,21 @@ impl ModelProviderInfo {
}
}
pub(crate) fn is_azure_responses_endpoint(&self) -> bool {
if self.wire_api != WireApi::Responses {
return false;
}
if self.name.eq_ignore_ascii_case("azure") {
return true;
}
self.base_url
.as_ref()
.map(|base| matches_azure_responses_base_url(base))
.unwrap_or(false)
}
/// Apply provider-specific HTTP headers (both static and environment-based)
/// onto an existing `reqwest::RequestBuilder` and return the updated
/// builder.
@@ -329,6 +344,18 @@ pub fn create_oss_provider_with_base_url(base_url: &str) -> ModelProviderInfo {
}
}
fn matches_azure_responses_base_url(base_url: &str) -> bool {
let base = base_url.to_ascii_lowercase();
const AZURE_MARKERS: [&str; 5] = [
"openai.azure.",
"cognitiveservices.azure.",
"aoai.azure.",
"azure-api.",
"azurefd.",
];
AZURE_MARKERS.iter().any(|marker| base.contains(marker))
}
#[cfg(test)]
mod tests {
use super::*;
@@ -419,4 +446,69 @@ env_http_headers = { "X-Example-Env-Header" = "EXAMPLE_ENV_VAR" }
let provider: ModelProviderInfo = toml::from_str(azure_provider_toml).unwrap();
assert_eq!(expected_provider, provider);
}
#[test]
fn detects_azure_responses_base_urls() {
fn provider_for(base_url: &str) -> ModelProviderInfo {
ModelProviderInfo {
name: "test".into(),
base_url: Some(base_url.into()),
env_key: None,
env_key_instructions: None,
wire_api: WireApi::Responses,
query_params: None,
http_headers: None,
env_http_headers: None,
request_max_retries: None,
stream_max_retries: None,
stream_idle_timeout_ms: None,
requires_openai_auth: false,
}
}
let positive_cases = [
"https://foo.openai.azure.com/openai",
"https://foo.openai.azure.us/openai/deployments/bar",
"https://foo.cognitiveservices.azure.cn/openai",
"https://foo.aoai.azure.com/openai",
"https://foo.openai.azure-api.net/openai",
"https://foo.z01.azurefd.net/",
];
for base_url in positive_cases {
let provider = provider_for(base_url);
assert!(
provider.is_azure_responses_endpoint(),
"expected {base_url} to be detected as Azure"
);
}
let named_provider = ModelProviderInfo {
name: "Azure".into(),
base_url: Some("https://example.com".into()),
env_key: None,
env_key_instructions: None,
wire_api: WireApi::Responses,
query_params: None,
http_headers: None,
env_http_headers: None,
request_max_retries: None,
stream_max_retries: None,
stream_idle_timeout_ms: None,
requires_openai_auth: false,
};
assert!(named_provider.is_azure_responses_endpoint());
let negative_cases = [
"https://api.openai.com/v1",
"https://example.com/openai",
"https://myproxy.azurewebsites.net/openai",
];
for base_url in negative_cases {
let provider = provider_for(base_url);
assert!(
!provider.is_azure_responses_endpoint(),
"expected {base_url} not to be detected as Azure"
);
}
}
}

View File

@@ -12,6 +12,19 @@ pub(crate) struct ModelInfo {
/// Maximum number of output tokens that can be generated for the model.
pub(crate) max_output_tokens: u64,
/// Token threshold where we should automatically compact conversation history.
pub(crate) auto_compact_token_limit: Option<i64>,
}
impl ModelInfo {
const fn new(context_window: u64, max_output_tokens: u64) -> Self {
Self {
context_window,
max_output_tokens,
auto_compact_token_limit: None,
}
}
}
pub(crate) fn get_model_info(model_family: &ModelFamily) -> Option<ModelInfo> {
@@ -20,73 +33,43 @@ pub(crate) fn get_model_info(model_family: &ModelFamily) -> Option<ModelInfo> {
// OSS models have a 128k shared token pool.
// Arbitrarily splitting it: 3/4 input context, 1/4 output.
// https://openai.com/index/gpt-oss-model-card/
"gpt-oss-20b" => Some(ModelInfo {
context_window: 96_000,
max_output_tokens: 32_000,
}),
"gpt-oss-120b" => Some(ModelInfo {
context_window: 96_000,
max_output_tokens: 32_000,
}),
"gpt-oss-20b" => Some(ModelInfo::new(96_000, 32_000)),
"gpt-oss-120b" => Some(ModelInfo::new(96_000, 32_000)),
// https://platform.openai.com/docs/models/o3
"o3" => Some(ModelInfo {
context_window: 200_000,
max_output_tokens: 100_000,
}),
"o3" => Some(ModelInfo::new(200_000, 100_000)),
// https://platform.openai.com/docs/models/o4-mini
"o4-mini" => Some(ModelInfo {
context_window: 200_000,
max_output_tokens: 100_000,
}),
"o4-mini" => Some(ModelInfo::new(200_000, 100_000)),
// https://platform.openai.com/docs/models/codex-mini-latest
"codex-mini-latest" => Some(ModelInfo {
context_window: 200_000,
max_output_tokens: 100_000,
}),
"codex-mini-latest" => Some(ModelInfo::new(200_000, 100_000)),
// As of Jun 25, 2025, gpt-4.1 defaults to gpt-4.1-2025-04-14.
// https://platform.openai.com/docs/models/gpt-4.1
"gpt-4.1" | "gpt-4.1-2025-04-14" => Some(ModelInfo {
context_window: 1_047_576,
max_output_tokens: 32_768,
}),
"gpt-4.1" | "gpt-4.1-2025-04-14" => Some(ModelInfo::new(1_047_576, 32_768)),
// As of Jun 25, 2025, gpt-4o defaults to gpt-4o-2024-08-06.
// https://platform.openai.com/docs/models/gpt-4o
"gpt-4o" | "gpt-4o-2024-08-06" => Some(ModelInfo {
context_window: 128_000,
max_output_tokens: 16_384,
}),
"gpt-4o" | "gpt-4o-2024-08-06" => Some(ModelInfo::new(128_000, 16_384)),
// https://platform.openai.com/docs/models/gpt-4o?snapshot=gpt-4o-2024-05-13
"gpt-4o-2024-05-13" => Some(ModelInfo {
context_window: 128_000,
max_output_tokens: 4_096,
}),
"gpt-4o-2024-05-13" => Some(ModelInfo::new(128_000, 4_096)),
// https://platform.openai.com/docs/models/gpt-4o?snapshot=gpt-4o-2024-11-20
"gpt-4o-2024-11-20" => Some(ModelInfo {
context_window: 128_000,
max_output_tokens: 16_384,
}),
"gpt-4o-2024-11-20" => Some(ModelInfo::new(128_000, 16_384)),
// https://platform.openai.com/docs/models/gpt-3.5-turbo
"gpt-3.5-turbo" => Some(ModelInfo {
context_window: 16_385,
max_output_tokens: 4_096,
}),
"gpt-3.5-turbo" => Some(ModelInfo::new(16_385, 4_096)),
_ if slug.starts_with("gpt-5") => Some(ModelInfo {
_ if slug.starts_with("gpt-5-codex") => Some(ModelInfo {
context_window: 272_000,
max_output_tokens: 128_000,
auto_compact_token_limit: Some(220_000),
}),
_ if slug.starts_with("codex-") => Some(ModelInfo {
context_window: 272_000,
max_output_tokens: 128_000,
}),
_ if slug.starts_with("gpt-5") => Some(ModelInfo::new(272_000, 128_000)),
_ if slug.starts_with("codex-") => Some(ModelInfo::new(272_000, 128_000)),
_ => None,
}
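The `const fn new` constructor collapses the repeated struct literals into one-liners and leaves `auto_compact_token_limit` at `None`; only `gpt-5-codex` opts in, at 220,000 tokens. A sketch of how a caller might act on the limit (the `should_auto_compact` helper is hypothetical, not the crate's API):

```rust
#[allow(dead_code)]
struct ModelInfo {
    context_window: u64,
    max_output_tokens: u64,
    auto_compact_token_limit: Option<i64>,
}

impl ModelInfo {
    const fn new(context_window: u64, max_output_tokens: u64) -> Self {
        Self {
            context_window,
            max_output_tokens,
            auto_compact_token_limit: None,
        }
    }
}

// Hypothetical: compact history only for models that define a limit.
fn should_auto_compact(info: &ModelInfo, tokens_used: i64) -> bool {
    info.auto_compact_token_limit
        .is_some_and(|limit| tokens_used >= limit)
}

fn main() {
    let gpt_5 = ModelInfo::new(272_000, 128_000);
    let gpt_5_codex = ModelInfo {
        auto_compact_token_limit: Some(220_000),
        ..ModelInfo::new(272_000, 128_000)
    };
    assert!(!should_auto_compact(&gpt_5, 230_000));
    assert!(should_auto_compact(&gpt_5_codex, 230_000));
}
```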

View File

@@ -7,8 +7,6 @@ use std::collections::HashMap;
use crate::model_family::ModelFamily;
use crate::plan_tool::PLAN_TOOL;
use crate::tool_apply_patch::ApplyPatchToolType;
use crate::tool_apply_patch::create_apply_patch_freeform_tool;
use crate::tool_apply_patch::create_apply_patch_json_tool;
@@ -57,10 +55,9 @@ pub(crate) enum OpenAiTool {
#[derive(Debug, Clone)]
pub enum ConfigShellToolType {
Default,
Local,
Streamable,
}
#[derive(Debug, Clone)]
@@ -75,8 +72,6 @@ pub(crate) struct ToolsConfig {
pub(crate) struct ToolsConfigParams<'a> {
pub(crate) model_family: &'a ModelFamily,
pub(crate) include_plan_tool: bool,
pub(crate) include_apply_patch_tool: bool,
pub(crate) include_web_search_request: bool,
@@ -89,8 +84,6 @@ impl ToolsConfig {
pub fn new(params: &ToolsConfigParams) -> Self {
let ToolsConfigParams {
model_family,
include_plan_tool,
include_apply_patch_tool,
include_web_search_request,
@@ -98,18 +91,13 @@ impl ToolsConfig {
include_view_image_tool,
experimental_unified_exec_tool,
} = params;
let shell_type = if *use_streamable_shell_tool {
ConfigShellToolType::Streamable
} else if model_family.uses_local_shell_tool {
ConfigShellToolType::Local
} else {
ConfigShellToolType::Default
};
let apply_patch_tool_type = match model_family.apply_patch_tool_type {
Some(ApplyPatchToolType::Freeform) => Some(ApplyPatchToolType::Freeform),
@@ -170,40 +158,6 @@ pub(crate) enum JsonSchema {
},
}
fn create_unified_exec_tool() -> OpenAiTool {
let mut properties = BTreeMap::new();
properties.insert(
@@ -251,7 +205,7 @@ fn create_unified_exec_tool() -> OpenAiTool {
})
}
fn create_shell_tool() -> OpenAiTool {
let mut properties = BTreeMap::new();
properties.insert(
"command".to_string(),
@@ -273,73 +227,22 @@ fn create_shell_tool_for_sandbox(sandbox_policy: &SandboxPolicy) -> OpenAiTool {
},
);
properties.insert(
"with_escalated_permissions".to_string(),
JsonSchema::Boolean {
description: Some("Whether to request escalated permissions. Set to true if command needs to be run without sandbox restrictions".to_string()),
},
);
properties.insert(
"justification".to_string(),
JsonSchema::String {
description: Some("Only set if with_escalated_permissions is true. 1-sentence explanation of why we want to run this command.".to_string()),
},
);
OpenAiTool::Function(ResponsesApiTool {
name: "shell".to_string(),
description: "Runs a shell command and returns its output.".to_string(),
strict: false,
parameters: JsonSchema::Object {
properties,
@@ -382,7 +285,7 @@ pub(crate) struct ApplyPatchToolArgs {
/// Responses API:
/// https://platform.openai.com/docs/guides/function-calling?api-mode=responses
pub fn create_tools_json_for_responses_api(
tools: &[OpenAiTool],
) -> crate::error::Result<Vec<serde_json::Value>> {
let mut tools_json = Vec::new();
@@ -397,7 +300,7 @@ pub fn create_tools_json_for_responses_api(
/// Chat Completions API:
/// https://platform.openai.com/docs/guides/function-calling?api-mode=chat
pub(crate) fn create_tools_json_for_chat_completions_api(
tools: &[OpenAiTool],
) -> crate::error::Result<Vec<serde_json::Value>> {
// We start with the JSON for the Responses API and then rewrite it to match
// the chat completions tool call format.
@@ -497,10 +400,7 @@ fn sanitize_json_schema(value: &mut JsonValue) {
}
// Normalize/ensure type
let mut ty = map.get("type").and_then(|v| v.as_str()).map(str::to_string);
// If type is an array (union), pick first supported; else leave to inference
if ty.is_none()
@@ -586,16 +486,13 @@ pub(crate) fn get_openai_tools(
tools.push(create_unified_exec_tool());
} else {
match &config.shell_type {
ConfigShellToolType::Default => {
tools.push(create_shell_tool());
}
ConfigShellToolType::Local => {
tools.push(OpenAiTool::LocalShell {});
}
ConfigShellToolType::Streamable => {
tools.push(OpenAiTool::Function(
crate::exec_command::create_exec_command_tool_for_responses_api(),
));
@@ -685,8 +582,6 @@ mod tests {
.expect("codex-mini-latest should be a valid model family");
let config = ToolsConfig::new(&ToolsConfigParams {
model_family: &model_family,
include_plan_tool: true,
include_apply_patch_tool: false,
include_web_search_request: true,
@@ -707,8 +602,6 @@ mod tests {
let model_family = find_family_for_model("o3").expect("o3 should be a valid model family");
let config = ToolsConfig::new(&ToolsConfigParams {
model_family: &model_family,
include_plan_tool: true,
include_apply_patch_tool: false,
include_web_search_request: true,
@@ -729,8 +622,6 @@ mod tests {
let model_family = find_family_for_model("o3").expect("o3 should be a valid model family");
let config = ToolsConfig::new(&ToolsConfigParams {
model_family: &model_family,
include_plan_tool: false,
include_apply_patch_tool: false,
include_web_search_request: true,
@@ -835,8 +726,6 @@ mod tests {
let model_family = find_family_for_model("o3").expect("o3 should be a valid model family");
let config = ToolsConfig::new(&ToolsConfigParams {
model_family: &model_family,
include_plan_tool: false,
include_apply_patch_tool: false,
include_web_search_request: false,
@@ -913,8 +802,6 @@ mod tests {
let model_family = find_family_for_model("o3").expect("o3 should be a valid model family");
let config = ToolsConfig::new(&ToolsConfigParams {
model_family: &model_family,
include_plan_tool: false,
include_apply_patch_tool: false,
include_web_search_request: true,
@@ -976,8 +863,6 @@ mod tests {
let model_family = find_family_for_model("o3").expect("o3 should be a valid model family");
let config = ToolsConfig::new(&ToolsConfigParams {
model_family: &model_family,
include_plan_tool: false,
include_apply_patch_tool: false,
include_web_search_request: true,
@@ -1034,8 +919,6 @@ mod tests {
let model_family = find_family_for_model("o3").expect("o3 should be a valid model family");
let config = ToolsConfig::new(&ToolsConfigParams {
model_family: &model_family,
include_plan_tool: false,
include_apply_patch_tool: false,
include_web_search_request: true,
@@ -1095,8 +978,6 @@ mod tests {
let model_family = find_family_for_model("o3").expect("o3 should be a valid model family");
let config = ToolsConfig::new(&ToolsConfigParams {
model_family: &model_family,
include_plan_tool: false,
include_apply_patch_tool: false,
include_web_search_request: true,
@@ -1149,14 +1030,8 @@ mod tests {
}
#[test]
fn test_shell_tool() {
let tool = super::create_shell_tool();
let OpenAiTool::Function(ResponsesApiTool {
description, name, ..
}) = &tool
@@ -1165,63 +1040,7 @@ mod tests {
};
assert_eq!(name, "shell");
let expected = r#"
The shell tool is used to execute shell commands.
- When invoking the shell tool, your call will be running in a sandbox, and some shell commands will require escalated privileges:
- Types of actions that require escalated privileges:
- Writing files other than those in the writable roots (see the environment context for the allowed directories)
- Commands that require network access
- Examples of commands that require escalated privileges:
- git commit
- npm install or pnpm install
- cargo build
- cargo test
- When invoking a command that will require escalated privileges:
- Provide the with_escalated_permissions parameter with the boolean value true
- Include a short, 1 sentence explanation for why we need to run with_escalated_permissions in the justification parameter."#;
let expected = "Runs a shell command and returns its output.";
assert_eq!(description, expected);
}
}
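After this change every policy sees one and the same `shell` schema; escalation is requested through the `with_escalated_permissions` and `justification` parameters rather than policy-specific descriptions. An approximation of the JSON the Responses API receives (the exact serialization is assumed, not verified):

```rust
use serde_json::json;

fn main() {
    let shell_tool = json!({
        "type": "function",
        "name": "shell",
        "description": "Runs a shell command and returns its output.",
        "strict": false,
        "parameters": {
            "type": "object",
            "properties": {
                "command": {
                    "type": "array",
                    "items": { "type": "string" },
                    "description": "The command to execute"
                },
                "workdir": {
                    "type": "string",
                    "description": "The working directory to execute the command in"
                },
                "timeout_ms": {
                    "type": "number",
                    "description": "The timeout for the command in milliseconds"
                },
                "with_escalated_permissions": { "type": "boolean" },
                "justification": { "type": "string" }
            },
            "required": ["command"],
            "additionalProperties": false
        }
    });
    println!("{}", serde_json::to_string_pretty(&shell_tool).unwrap());
}
```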

View File

@@ -40,7 +40,7 @@ impl From<ParsedCommand> for codex_protocol::parse_command::ParsedCommand {
}
fn shlex_join(tokens: &[String]) -> String {
shlex_try_join(tokens.iter().map(String::as_str))
.unwrap_or_else(|_| "<command included NUL byte>".to_string())
}
@@ -72,13 +72,14 @@ pub fn parse_command(command: &[String]) -> Vec<ParsedCommand> {
/// Tests are at the top to encourage using TDD + Codex to fix the implementation.
mod tests {
use super::*;
use std::string::ToString;
fn shlex_split_safe(s: &str) -> Vec<String> {
shlex_split(s).unwrap_or_else(|| s.split_whitespace().map(ToString::to_string).collect())
}
fn vec_str(args: &[&str]) -> Vec<String> {
args.iter().map(ToString::to_string).collect()
}
fn assert_parsed(args: &[String], expected: Vec<ParsedCommand>) {
@@ -894,7 +895,7 @@ fn simplify_once(commands: &[ParsedCommand]) -> Option<Vec<ParsedCommand>> {
// echo ... && ...rest => ...rest
if let ParsedCommand::Unknown { cmd } = &commands[0]
&& shlex_split(cmd).is_some_and(|t| t.first().map(String::as_str) == Some("echo"))
{
return Some(commands[1..].to_vec());
}
@@ -902,7 +903,7 @@ fn simplify_once(commands: &[ParsedCommand]) -> Option<Vec<ParsedCommand>> {
// cd foo && [any command] => [any command] (keep non-cd when a cd is followed by something)
if let Some(idx) = commands.iter().position(|pc| match pc {
ParsedCommand::Unknown { cmd } => {
shlex_split(cmd).is_some_and(|t| t.first().map(String::as_str) == Some("cd"))
}
_ => false,
}) && commands.len() > idx + 1
@@ -1035,7 +1036,7 @@ fn short_display_path(path: &str) -> String {
});
parts
.next()
.map(str::to_string)
.unwrap_or_else(|| trimmed.to_string())
}
@@ -1156,10 +1157,8 @@ fn parse_bash_lc_commands(original: &[String]) -> Option<Vec<ParsedCommand>> {
// bias toward the primary command when pipelines are present.
// First, drop obvious small formatting helpers (e.g., wc/awk/etc).
let had_multiple_commands = all_commands.len() > 1;
// Commands arrive in source order; drop formatting helpers while preserving it.
let filtered_commands = drop_small_formatting_commands(all_commands);
if filtered_commands.is_empty() {
return Some(vec![ParsedCommand::Unknown {
cmd: script.clone(),
@@ -1192,8 +1191,8 @@ fn parse_bash_lc_commands(original: &[String]) -> Option<Vec<ParsedCommand>> {
if had_connectors {
let has_pipe = script_tokens.iter().any(|t| t == "|");
let has_sed_n = script_tokens.windows(2).any(|w| {
w.first().map(|s| s.as_str()) == Some("sed")
&& w.get(1).map(|s| s.as_str()) == Some("-n")
w.first().map(String::as_str) == Some("sed")
&& w.get(1).map(String::as_str) == Some("-n")
});
if has_pipe && has_sed_n {
ParsedCommand::Read {
@@ -1273,7 +1272,7 @@ fn is_small_formatting_command(tokens: &[String]) -> bool {
// Keep `sed -n <range> file` (treated as a file read elsewhere);
// otherwise consider it a formatting helper in a pipeline.
tokens.len() < 4
|| !(tokens[1] == "-n" && is_valid_sed_n_arg(tokens.get(2).map(|s| s.as_str())))
|| !(tokens[1] == "-n" && is_valid_sed_n_arg(tokens.get(2).map(String::as_str)))
}
_ => false,
}
@@ -1320,7 +1319,7 @@ fn summarize_main_tokens(main_cmd: &[String]) -> ParsedCommand {
(None, non_flags.first().map(|s| short_display_path(s)))
} else {
(
non_flags.first().cloned().map(String::from),
non_flags.get(1).map(|s| short_display_path(s)),
)
};
@@ -1355,7 +1354,7 @@ fn summarize_main_tokens(main_cmd: &[String]) -> ParsedCommand {
.collect();
// Do not shorten the query: grep patterns may legitimately contain slashes
// and should be preserved verbatim. Only paths should be shortened.
let query = non_flags.first().cloned().map(String::from);
let path = non_flags.get(1).map(|s| short_display_path(s));
ParsedCommand::Search {
cmd: shlex_join(main_cmd),
@@ -1365,7 +1364,7 @@ fn summarize_main_tokens(main_cmd: &[String]) -> ParsedCommand {
}
Some((head, tail)) if head == "cat" => {
// Support both `cat <file>` and `cat -- <file>` forms.
let effective_tail: &[String] = if tail.first().map(String::as_str) == Some("--") {
&tail[1..]
} else {
tail
@@ -1481,7 +1480,7 @@ fn summarize_main_tokens(main_cmd: &[String]) -> ParsedCommand {
if head == "sed"
&& tail.len() >= 3
&& tail[0] == "-n"
&& is_valid_sed_n_arg(tail.get(1).map(String::as_str)) =>
{
if let Some(path) = tail.get(2) {
let name = short_display_path(path);

View File

@@ -1,21 +0,0 @@
You are a summarization assistant. A conversation follows between a user and a coding-focused AI (Codex). Your task is to generate a clear summary capturing:
• High-level objective or problem being solved
• Key instructions or design decisions given by the user
• Main code actions or behaviors from the AI
• Important variables, functions, modules, or outputs discussed
• Any unresolved questions or next steps
Produce the summary in a structured format like:
**Objective:**
**User instructions:** … (bulleted)
**AI actions / code behavior:** … (bulleted)
**Important entities:** … (e.g. function names, variables, files)
**Open issues / next steps:** … (if any)
**Summary (concise):** (one or two sentences)

View File

@@ -0,0 +1,55 @@
use crate::protocol::ReviewFinding;
// Note: We keep this module UI-agnostic. It returns plain strings that
// higher layers (e.g., TUI) may style as needed.
fn format_location(item: &ReviewFinding) -> String {
let path = item.code_location.absolute_file_path.display();
let start = item.code_location.line_range.start;
let end = item.code_location.line_range.end;
format!("{path}:{start}-{end}")
}
/// Format a full review findings block as plain text lines.
///
/// - When `selection` is `Some`, each item line includes a checkbox marker:
/// "[x]" for selected items and "[ ]" for unselected. Missing indices
/// default to selected.
/// - When `selection` is `None`, the marker is omitted and a simple bullet is
/// rendered ("- Title — path:start-end").
pub fn format_review_findings_block(
findings: &[ReviewFinding],
selection: Option<&[bool]>,
) -> String {
let mut lines: Vec<String> = Vec::new();
lines.push(String::new());
// Header
if findings.len() > 1 {
lines.push("Full review comments:".to_string());
} else {
lines.push("Review comment:".to_string());
}
for (idx, item) in findings.iter().enumerate() {
lines.push(String::new());
let title = &item.title;
let location = format_location(item);
if let Some(flags) = selection {
// Default to selected if index is out of bounds.
let checked = flags.get(idx).copied().unwrap_or(true);
let marker = if checked { "[x]" } else { "[ ]" };
lines.push(format!("- {marker} {title}{location}"));
} else {
lines.push(format!("- {title}{location}"));
}
for body_line in item.body.lines() {
lines.push(format!(" {body_line}"));
}
}
lines.join("\n")
}
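A hypothetical usage sketch; the `ReviewFinding` values (`finding_a`, `finding_b`) are assumed to be constructed elsewhere with the `title`, `body`, and `code_location` fields referenced above:

```rust
let findings = vec![finding_a, finding_b];

// Checkbox rendering for an interactive picker: item 0 selected, item 1 not.
let block = format_review_findings_block(&findings, Some(&[true, false]));
assert!(block.contains("Full review comments:"));
assert!(block.contains("- [x] "));
assert!(block.contains("- [ ] "));

// Plain bullets for final, non-interactive output.
let plain = format_review_findings_block(&findings, None);
assert!(!plain.contains("[x]"));
```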

View File

@@ -3,6 +3,10 @@ use std::io::{self};
use std::path::Path;
use std::path::PathBuf;
use codex_file_search as file_search;
use std::num::NonZero;
use std::sync::Arc;
use std::sync::atomic::AtomicBool;
use time::OffsetDateTime;
use time::PrimitiveDateTime;
use time::format_description::FormatItem;
@@ -334,3 +338,48 @@ async fn read_head_and_flags(
Ok((head, saw_session_meta, saw_user_event))
}
/// Locate a recorded conversation rollout file by its UUID string using the existing
/// paginated listing implementation. Returns `Ok(Some(path))` if found, `Ok(None)` if not present
/// or the id is invalid.
pub async fn find_conversation_path_by_id_str(
codex_home: &Path,
id_str: &str,
) -> io::Result<Option<PathBuf>> {
// Validate UUID format early.
if Uuid::parse_str(id_str).is_err() {
return Ok(None);
}
let mut root = codex_home.to_path_buf();
root.push(SESSIONS_SUBDIR);
if !root.exists() {
return Ok(None);
}
// This is safe because we know the values are valid.
#[allow(clippy::unwrap_used)]
let limit = NonZero::new(1).unwrap();
// This is safe because we know the values are valid.
#[allow(clippy::unwrap_used)]
let threads = NonZero::new(2).unwrap();
let cancel = Arc::new(AtomicBool::new(false));
let exclude: Vec<String> = Vec::new();
let compute_indices = false;
let results = file_search::run(
id_str,
limit,
&root,
exclude,
threads,
cancel,
compute_indices,
)
.map_err(|e| io::Error::other(format!("file search failed: {e}")))?;
Ok(results
.matches
.into_iter()
.next()
.map(|m| root.join(m.path)))
}
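A minimal usage sketch, assuming the `codex_core::find_conversation_path_by_id_str` re-export shown earlier in this diff; the home path and UUID are placeholders:

```rust
#[tokio::main]
async fn main() -> std::io::Result<()> {
    let codex_home = std::path::Path::new("/home/user/.codex");
    let id = "67e55044-10b1-426f-9247-bb680e5fe0c8";
    match codex_core::find_conversation_path_by_id_str(codex_home, id).await? {
        Some(path) => println!("rollout file: {}", path.display()),
        None => println!("no recorded conversation with that id"),
    }
    Ok(())
}
```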

View File

@@ -8,6 +8,7 @@ pub(crate) mod policy;
pub mod recorder;
pub use codex_protocol::protocol::SessionMeta;
pub use list::find_conversation_path_by_id_str;
pub use recorder::RolloutRecorder;
pub use recorder::RolloutRecorderParams;

View File

@@ -25,8 +25,9 @@ pub(crate) fn should_persist_response_item(item: &ResponseItem) -> bool {
| ResponseItem::FunctionCall { .. }
| ResponseItem::FunctionCallOutput { .. }
| ResponseItem::CustomToolCall { .. }
| ResponseItem::CustomToolCallOutput { .. }
| ResponseItem::WebSearchCall { .. } => true,
ResponseItem::Other => false,
}
}
@@ -38,7 +39,10 @@ pub(crate) fn should_persist_event_msg(ev: &EventMsg) -> bool {
| EventMsg::AgentMessage(_)
| EventMsg::AgentReasoning(_)
| EventMsg::AgentReasoningRawContent(_)
| EventMsg::TokenCount(_)
| EventMsg::EnteredReviewMode(_)
| EventMsg::ExitedReviewMode(_)
| EventMsg::TurnAborted(_) => true,
EventMsg::Error(_)
| EventMsg::TaskStarted(_)
| EventMsg::TaskComplete(_)
@@ -65,7 +69,6 @@ pub(crate) fn should_persist_event_msg(ev: &EventMsg) -> bool {
| EventMsg::McpListToolsResponse(_)
| EventMsg::ListCustomPromptsResponse(_)
| EventMsg::PlanUpdate(_)
| EventMsg::ShutdownComplete
| EventMsg::ConversationPath(_) => false,
}

View File

@@ -204,7 +204,6 @@ impl RolloutRecorder {
pub(crate) async fn get_rollout_history(path: &Path) -> std::io::Result<InitialHistory> {
info!("Resuming rollout from {path:?}");
tracing::error!("Resuming rollout from {path:?}");
let text = tokio::fs::read_to_string(path).await?;
if text.trim().is_empty() {
return Err(IoError::other("empty session file"));
@@ -254,7 +253,7 @@ impl RolloutRecorder {
}
}
info!(
"Resumed rollout with {} items, conversation ID: {:?}",
items.len(),
conversation_id

View File

@@ -18,19 +18,20 @@ const MACOS_PATH_TO_SEATBELT_EXECUTABLE: &str = "/usr/bin/sandbox-exec";
pub async fn spawn_command_under_seatbelt(
command: Vec<String>,
command_cwd: PathBuf,
sandbox_policy: &SandboxPolicy,
sandbox_policy_cwd: &Path,
stdio_policy: StdioPolicy,
mut env: HashMap<String, String>,
) -> std::io::Result<Child> {
let args = create_seatbelt_command_args(command, sandbox_policy, sandbox_policy_cwd);
let arg0 = None;
env.insert(CODEX_SANDBOX_ENV_VAR.to_string(), "seatbelt".to_string());
spawn_child_async(
PathBuf::from(MACOS_PATH_TO_SEATBELT_EXECUTABLE),
args,
arg0,
command_cwd,
sandbox_policy,
stdio_policy,
env,
@@ -41,7 +42,7 @@ pub async fn spawn_command_under_seatbelt(
fn create_seatbelt_command_args(
command: Vec<String>,
sandbox_policy: &SandboxPolicy,
sandbox_policy_cwd: &Path,
) -> Vec<String> {
let (file_write_policy, extra_cli_args) = {
if sandbox_policy.has_full_disk_write_access() {
@@ -51,7 +52,7 @@ fn create_seatbelt_command_args(
Vec::<String>::new(),
)
} else {
let writable_roots = sandbox_policy.get_writable_roots_with_cwd(sandbox_policy_cwd);
let mut writable_folder_policies: Vec<String> = Vec::new();
let mut cli_args: Vec<String> = Vec::new();

View File

@@ -2,6 +2,7 @@
; inspired by Chrome's sandbox policy:
; https://source.chromium.org/chromium/chromium/src/+/main:sandbox/policy/mac/common.sb;l=273-319;drc=7b3962fe2e5fc9e2ee58000dc8fbf3429d84d3bd
; https://source.chromium.org/chromium/chromium/src/+/main:sandbox/policy/mac/renderer.sb;l=64;drc=7b3962fe2e5fc9e2ee58000dc8fbf3429d84d3bd
; start with closed-by-default
(deny default)
@@ -9,7 +10,13 @@
; child processes inherit the policy of their parent
(allow process-exec)
(allow process-fork)
(allow signal (target self))
(allow signal (target same-sandbox))
; Allow cf prefs to work.
(allow user-preference-read)
; process-info
(allow process-info* (target same-sandbox))
(allow file-write-data
(require-all
@@ -32,28 +39,22 @@
(sysctl-name "hw.l3cachesize_compat")
(sysctl-name "hw.logicalcpu_max")
(sysctl-name "hw.machine")
(sysctl-name "hw.memsize")
(sysctl-name "hw.ncpu")
(sysctl-name "hw.nperflevels")
(sysctl-name "hw.optional.arm.FEAT_BF16")
(sysctl-name "hw.optional.arm.FEAT_DotProd")
(sysctl-name "hw.optional.arm.FEAT_FCMA")
(sysctl-name "hw.optional.arm.FEAT_FHM")
(sysctl-name "hw.optional.arm.FEAT_FP16")
(sysctl-name "hw.optional.arm.FEAT_I8MM")
(sysctl-name "hw.optional.arm.FEAT_JSCVT")
(sysctl-name "hw.optional.arm.FEAT_LSE")
(sysctl-name "hw.optional.arm.FEAT_RDM")
(sysctl-name "hw.optional.arm.FEAT_SHA512")
(sysctl-name "hw.optional.armv8_2_sha512")
(sysctl-name "hw.memsize")
(sysctl-name "hw.pagesize")
; Chrome locks this CPU feature detection down a bit more tightly,
; but mostly for fingerprinting concerns, which aren't an issue for codex.
(sysctl-name-prefix "hw.optional.arm.")
(sysctl-name-prefix "hw.optional.armv8_")
(sysctl-name "hw.packages")
(sysctl-name "hw.pagesize_compat")
(sysctl-name "hw.pagesize")
(sysctl-name "hw.physicalcpu_max")
(sysctl-name "hw.tbfrequency_compat")
(sysctl-name "hw.vectorunit")
(sysctl-name "kern.hostname")
(sysctl-name "kern.maxfilesperproc")
(sysctl-name "kern.maxproc")
(sysctl-name "kern.osproductversion")
(sysctl-name "kern.osrelease")
(sysctl-name "kern.ostype")
@@ -63,14 +64,27 @@
(sysctl-name "kern.usrstack64")
(sysctl-name "kern.version")
(sysctl-name "sysctl.proc_cputype")
(sysctl-name "vm.loadavg")
(sysctl-name-prefix "hw.perflevel")
(sysctl-name-prefix "kern.proc.pgrp.")
(sysctl-name-prefix "kern.proc.pid.")
(sysctl-name-prefix "net.routetable.")
)
; IOKit
(allow iokit-open
(iokit-registry-entry-class "RootDomainUserClient")
)
; Added on top of Chrome profile
; Needed for python multiprocessing on MacOS for the SemLock
(allow ipc-posix-sem)
; needed to look up user info, see https://crbug.com/792228
(allow mach-lookup
(global-name "com.apple.system.opendirectoryd.libinfo")
(global-name "com.apple.PowerManagement.control")
)

View File

@@ -5,20 +5,20 @@ use std::path::PathBuf;
#[derive(Debug, PartialEq, Eq, Clone, Serialize, Deserialize)]
pub struct ZshShell {
pub(crate) shell_path: String,
pub(crate) zshrc_path: String,
}
#[derive(Debug, PartialEq, Eq, Clone, Serialize, Deserialize)]
pub struct BashShell {
pub(crate) shell_path: String,
pub(crate) bashrc_path: String,
}
#[derive(Debug, PartialEq, Eq, Clone, Serialize, Deserialize)]
pub struct PowerShellConfig {
pub(crate) exe: String, // Executable name or path, e.g. "pwsh" or "powershell.exe".
pub(crate) bash_exe_fallback: Option<PathBuf>, // In case the model generates a bash command.
}
#[derive(Debug, PartialEq, Eq, Clone, Serialize, Deserialize)]
@@ -32,15 +32,19 @@ pub enum Shell {
impl Shell {
pub fn format_default_shell_invocation(&self, command: Vec<String>) -> Option<Vec<String>> {
match self {
Shell::Zsh(zsh) => format_shell_invocation_with_rc(
command.as_slice(),
&zsh.shell_path,
&zsh.zshrc_path,
),
Shell::Bash(bash) => format_shell_invocation_with_rc(
command.as_slice(),
&bash.shell_path,
&bash.bashrc_path,
),
Shell::PowerShell(ps) => {
// If model generated a bash command, prefer a detected bash fallback
if let Some(script) = strip_bash_lc(command.as_slice()) {
return match &ps.bash_exe_fallback {
Some(bash) => Some(vec![
bash.to_string_lossy().to_string(),
@@ -69,7 +73,7 @@ impl Shell {
return Some(command);
}
let joined = shlex::try_join(command.iter().map(String::as_str)).ok();
return joined.map(|arg| {
vec![
ps.exe.clone(),
@@ -102,12 +106,12 @@ impl Shell {
}
fn format_shell_invocation_with_rc(
command: &[String],
shell_path: &str,
rc_path: &str,
) -> Option<Vec<String>> {
let joined = strip_bash_lc(command)
.or_else(|| shlex::try_join(command.iter().map(String::as_str)).ok())?;
let rc_command = if std::path::Path::new(rc_path).exists() {
format!("source {rc_path} && ({joined})")
@@ -118,8 +122,8 @@ fn format_shell_invocation_with_rc(
Some(vec![shell_path.to_string(), "-lc".to_string(), rc_command])
}
fn strip_bash_lc(command: &[String]) -> Option<String> {
match command {
// exactly three items
[first, second, third]
// first two must be "bash", "-lc"
@@ -220,6 +224,7 @@ pub async fn default_user_shell() -> Shell {
mod tests {
use super::*;
use std::process::Command;
use std::string::ToString;
#[tokio::test]
async fn test_current_shell_detects_zsh() {
@@ -323,7 +328,7 @@ mod tests {
});
let actual_cmd = shell
.format_default_shell_invocation(input.iter().map(ToString::to_string).collect());
let expected_cmd = expected_cmd
.iter()
.map(|s| s.replace("BASHRC_PATH", bashrc_path.to_str().unwrap()))
@@ -345,6 +350,7 @@ mod tests {
},
SandboxType::None,
&SandboxPolicy::DangerFullAccess,
temp_home.path(),
&None,
None,
)
@@ -366,6 +372,7 @@ mod tests {
#[cfg(target_os = "macos")]
mod macos_tests {
use super::*;
use std::string::ToString;
#[tokio::test]
async fn test_run_with_profile_escaping_and_execution() {
@@ -429,7 +436,7 @@ mod macos_tests {
});
let actual_cmd = shell
.format_default_shell_invocation(input.iter().map(ToString::to_string).collect());
let expected_cmd = expected_cmd
.iter()
.map(|s| s.replace("ZSHRC_PATH", zshrc_path.to_str().unwrap()))
@@ -451,6 +458,7 @@ mod macos_tests {
},
SandboxType::None,
&SandboxPolicy::DangerFullAccess,
temp_home.path(),
&None,
None,
)
@@ -553,10 +561,10 @@ mod tests_windows {
for (shell, input, expected_cmd) in cases {
let actual_cmd = shell
.format_default_shell_invocation(input.iter().map(|s| (*s).to_string()).collect());
assert_eq!(
actual_cmd,
Some(expected_cmd.iter().map(|s| (*s).to_string()).collect())
);
}
}
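Since the fields are now `pub(crate)`, crate-internal callers can build shells directly. A sketch of the rc-sourcing behavior (paths hypothetical):

```rust
let shell = Shell::Zsh(ZshShell {
    shell_path: "/bin/zsh".to_string(),
    zshrc_path: "/Users/me/.zshrc".to_string(),
});
let invocation = shell.format_default_shell_invocation(vec![
    "bash".to_string(),
    "-lc".to_string(),
    "echo hello".to_string(),
]);
// If /Users/me/.zshrc exists, the bash -lc payload is re-wrapped so user
// aliases and PATH tweaks load:
// Some(["/bin/zsh", "-lc", "source /Users/me/.zshrc && (echo hello)"])
```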

View File

@@ -65,7 +65,7 @@ impl TurnDiffTracker {
let baseline_file_info = if path.exists() {
let mode = file_mode_for_path(path);
let mode_val = mode.unwrap_or(FileMode::Regular);
let content = blob_bytes(path, mode_val).unwrap_or_default();
let oid = if mode == Some(FileMode::Symlink) {
format!("{:x}", git_blob_sha1_hex_bytes(&content))
} else {
@@ -266,7 +266,7 @@ impl TurnDiffTracker {
};
let current_mode = file_mode_for_path(&current_external_path).unwrap_or(FileMode::Regular);
let right_bytes = blob_bytes(&current_external_path, current_mode);
// Compute displays with &mut self before borrowing any baseline content.
let left_display = self.relative_to_git_root_str(&baseline_external_path);
@@ -388,7 +388,7 @@ enum FileMode {
}
impl FileMode {
fn as_str(self) -> &'static str {
match self {
FileMode::Regular => "100644",
#[cfg(unix)]
@@ -427,9 +427,9 @@ fn file_mode_for_path(_path: &Path) -> Option<FileMode> {
Some(FileMode::Regular)
}
fn blob_bytes(path: &Path, mode: FileMode) -> Option<Vec<u8>> {
if path.exists() {
let contents = if mode == FileMode::Symlink {
symlink_blob_bytes(path)
.ok_or_else(|| anyhow!("failed to read symlink target for {}", path.display()))
} else {

View File

@@ -100,10 +100,13 @@ type OutputBuffer = Arc<Mutex<OutputBufferState>>;
type OutputHandles = (OutputBuffer, Arc<Notify>);
impl ManagedUnifiedExecSession {
fn new(
session: ExecCommandSession,
initial_output_rx: tokio::sync::broadcast::Receiver<Vec<u8>>,
) -> Self {
let output_buffer = Arc::new(Mutex::new(OutputBufferState::default()));
let output_notify = Arc::new(Notify::new());
let mut receiver = initial_output_rx;
let buffer_clone = Arc::clone(&output_buffer);
let notify_clone = Arc::clone(&output_notify);
let output_task = tokio::spawn(async move {
@@ -193,8 +196,8 @@ impl UnifiedExecSessionManager {
} else {
let command = request.input_chunks.to_vec();
let new_id = self.next_session_id.fetch_add(1, Ordering::SeqCst);
let (session, initial_output_rx) = create_unified_exec_session(&command).await?;
let managed_session = ManagedUnifiedExecSession::new(session, initial_output_rx);
let (buffer, notify) = managed_session.output_handles();
writer_tx = managed_session.writer_sender();
output_buffer = buffer;
@@ -297,7 +300,13 @@ impl UnifiedExecSessionManager {
async fn create_unified_exec_session(
command: &[String],
) -> Result<
(
ExecCommandSession,
tokio::sync::broadcast::Receiver<Vec<u8>>,
),
UnifiedExecError,
> {
if command.is_empty() {
return Err(UnifiedExecError::MissingCommandLine);
}
@@ -380,7 +389,7 @@ async fn create_unified_exec_session(
wait_exit_status.store(true, Ordering::SeqCst);
});
let (session, initial_output_rx) = ExecCommandSession::new(
writer_tx,
output_tx,
killer,
@@ -388,7 +397,8 @@ async fn create_unified_exec_session(
writer_handle,
wait_handle,
exit_status,
);
Ok((session, initial_output_rx))
}
#[cfg(test)]
@@ -421,7 +431,7 @@ mod tests {
.handle_request(UnifiedExecRequest {
session_id: None,
input_chunks: &["bash".to_string(), "-i".to_string()],
timeout_ms: Some(2_500),
})
.await?;
let session_id = open_shell.session_id.expect("expected session_id");
@@ -441,7 +451,7 @@ mod tests {
.handle_request(UnifiedExecRequest {
session_id: Some(session_id),
input_chunks: &["echo $CODEX_INTERACTIVE_SHELL_VAR\n".to_string()],
timeout_ms: Some(2_500),
})
.await?;
assert!(out_2.output.contains("codex"));
@@ -458,7 +468,7 @@ mod tests {
.handle_request(UnifiedExecRequest {
session_id: None,
input_chunks: &["/bin/bash".to_string(), "-i".to_string()],
timeout_ms: Some(2_500),
})
.await?;
let session_a = shell_a.session_id.expect("expected session id");
@@ -467,7 +477,7 @@ mod tests {
.handle_request(UnifiedExecRequest {
session_id: Some(session_a),
input_chunks: &["export CODEX_INTERACTIVE_SHELL_VAR=codex\n".to_string()],
timeout_ms: Some(2_500),
})
.await?;
@@ -478,7 +488,7 @@ mod tests {
"echo".to_string(),
"$CODEX_INTERACTIVE_SHELL_VAR\n".to_string(),
],
timeout_ms: Some(2_500),
})
.await?;
assert!(!out_2.output.contains("codex"));
@@ -487,7 +497,7 @@ mod tests {
.handle_request(UnifiedExecRequest {
session_id: Some(session_a),
input_chunks: &["echo $CODEX_INTERACTIVE_SHELL_VAR\n".to_string()],
timeout_ms: Some(2_500),
})
.await?;
assert!(out_3.output.contains("codex"));
@@ -504,7 +514,7 @@ mod tests {
.handle_request(UnifiedExecRequest {
session_id: None,
input_chunks: &["bash".to_string(), "-i".to_string()],
timeout_ms: Some(2_500),
})
.await?;
let session_id = open_shell.session_id.expect("expected session id");
@@ -516,7 +526,7 @@ mod tests {
"export".to_string(),
"CODEX_INTERACTIVE_SHELL_VAR=codex\n".to_string(),
],
timeout_ms: Some(2_500),
})
.await?;
@@ -547,6 +557,7 @@ mod tests {
#[cfg(unix)]
#[tokio::test]
#[ignore] // Ignored while we have a better way to test this.
async fn requests_with_large_timeout_are_capped() -> Result<(), UnifiedExecError> {
let manager = UnifiedExecSessionManager::default();
@@ -568,13 +579,14 @@ mod tests {
#[cfg(unix)]
#[tokio::test]
#[ignore] // Ignored while we have a better way to test this.
async fn completed_commands_do_not_persist_sessions() -> Result<(), UnifiedExecError> {
let manager = UnifiedExecSessionManager::default();
let result = manager
.handle_request(UnifiedExecRequest {
session_id: None,
input_chunks: &["/bin/echo".to_string(), "codex".to_string()],
timeout_ms: Some(2_500),
})
.await?;
@@ -595,7 +607,7 @@ mod tests {
.handle_request(UnifiedExecRequest {
session_id: None,
input_chunks: &["/bin/bash".to_string(), "-i".to_string()],
timeout_ms: Some(2_500),
})
.await?;
let session_id = open_shell.session_id.expect("expected session id");
@@ -604,7 +616,7 @@ mod tests {
.handle_request(UnifiedExecRequest {
session_id: Some(session_id),
input_chunks: &["exit\n".to_string()],
timeout_ms: Some(1_500),
timeout_ms: Some(2_500),
})
.await?;

View File

@@ -1,4 +1,45 @@
use serde::Serialize;
use tracing::error;
use tracing::warn;
#[derive(Debug, Default)]
pub(crate) struct UserNotifier {
notify_command: Option<Vec<String>>,
}
impl UserNotifier {
pub(crate) fn notify(&self, notification: &UserNotification) {
if let Some(notify_command) = &self.notify_command
&& !notify_command.is_empty()
{
self.invoke_notify(notify_command, notification)
}
}
fn invoke_notify(&self, notify_command: &[String], notification: &UserNotification) {
let Ok(json) = serde_json::to_string(&notification) else {
error!("failed to serialise notification payload");
return;
};
let mut command = std::process::Command::new(&notify_command[0]);
if notify_command.len() > 1 {
command.args(&notify_command[1..]);
}
command.arg(json);
// Fire-and-forget: we do not wait for completion.
if let Err(e) = command.spawn() {
warn!("failed to spawn notifier '{}': {e}", notify_command[0]);
}
}
pub(crate) fn new(notify: Option<Vec<String>>) -> Self {
Self {
notify_command: notify,
}
}
}
/// User can configure a program that will receive notifications. Each
/// notification is serialized as JSON and passed as an argument to the
@@ -21,9 +62,10 @@ pub(crate) enum UserNotification {
#[cfg(test)]
mod tests {
use super::*;
use anyhow::Result;
#[test]
fn test_user_notification() -> Result<()> {
let notification = UserNotification::AgentTurnComplete {
turn_id: "12345".to_string(),
input_messages: vec!["Rename `foo` to `bar` and update the callsites.".to_string()],
@@ -31,10 +73,11 @@ mod tests {
"Rename complete and verified `cargo build` succeeds.".to_string(),
),
};
let serialized = serde_json::to_string(&notification)?;
assert_eq!(
serialized,
r#"{"type":"agent-turn-complete","turn-id":"12345","input-messages":["Rename `foo` to `bar` and update the callsites."],"last-assistant-message":"Rename complete and verified `cargo build` succeeds."}"#
);
Ok(())
}
}
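On the receiving end, the configured notify program gets the JSON payload appended as its final argument. A minimal hypothetical consumer:

```rust
use serde_json::Value;

fn main() {
    // Codex invokes: <notify-program> [configured args...] <json-payload>
    let payload = std::env::args().last().expect("expected a JSON argument");
    let v: Value = serde_json::from_str(&payload).expect("payload should be JSON");
    if v["type"] == "agent-turn-complete" {
        println!(
            "turn {} done: {}",
            v["turn-id"],
            v["last-assistant-message"].as_str().unwrap_or("")
        );
    }
}
```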

View File

@@ -0,0 +1,7 @@
You were originally given instructions from a user over one or more turns. Here were the user messages:
{{ user_messages_text }}
Another language model started to solve this problem and produced a summary of its thinking process. You also have access to the state of the tools that were used by that language model. Use this to build on the work that has already been done and avoid duplicating work. Here is the summary produced by the other language model; use the information in this summary to assist with your own analysis:
{{ summary_text }}

View File

@@ -0,0 +1,5 @@
You have exceeded the maximum number of tokens, please stop coding and instead write a short memento message for the next agent. Your note should:
- Summarize what you finished and what still needs work. If there was a recent update_plan call, repeat its steps verbatim.
- List outstanding TODOs with file paths / line numbers so they're easy to find.
- Flag code that needs more tests (edge cases, performance, integration, etc.).
- Record any open bugs, quirks, or setup steps that will make it easier for the next agent to pick up where you left off.

View File

@@ -1,13 +1,15 @@
[package]
edition = "2024"
name = "core_test_support"
version = { workspace = true }
edition = "2024"
[lib]
path = "lib.rs"
[dependencies]
anyhow = { workspace = true }
codex-core = { workspace = true }
serde_json = { workspace = true }
tempfile = { workspace = true }
tokio = { workspace = true, features = ["time"] }
wiremock = { workspace = true }

View File

@@ -7,6 +7,9 @@ use codex_core::config::Config;
use codex_core::config::ConfigOverrides;
use codex_core::config::ConfigToml;
pub mod responses;
pub mod test_codex;
/// Returns a default `Config` whose on-disk state is confined to the provided
/// temporary directory. Using a per-test directory keeps tests hermetic and
/// avoids clobbering a developer's real `~/.codex`.
@@ -124,3 +127,21 @@ where
}
}
}
#[macro_export]
macro_rules! non_sandbox_test {
// For tests that return ()
() => {{
if ::std::env::var("CODEX_SANDBOX_NETWORK_DISABLED").is_ok() {
println!("Skipping test because it cannot execute when network is disabled in a Codex sandbox.");
return;
}
}};
// For tests that return Result<(), _>
(result $(,)?) => {{
if ::std::env::var("CODEX_SANDBOX_NETWORK_DISABLED").is_ok() {
println!("Skipping test because it cannot execute when network is disabled in a Codex sandbox.");
return ::core::result::Result::Ok(());
}
}};
}
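The call sites updated below invoke the macro as the first statement of the test body; the plain form early-returns `()` on skip and the `result` form early-returns `Ok(())`. A sketch (test names illustrative):
#[tokio::test]
async fn plain_test() {
    non_sandbox_test!(); // returns `()` when the sandbox disables network
    // ... test body ...
}
#[tokio::test]
async fn fallible_test() -> anyhow::Result<()> {
    non_sandbox_test!(result); // returns `Ok(())` when skipped
    // ... test body ...
    Ok(())
}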

View File

@@ -0,0 +1,133 @@
use serde_json::Value;
use wiremock::BodyPrintLimit;
use wiremock::Mock;
use wiremock::MockServer;
use wiremock::ResponseTemplate;
use wiremock::matchers::method;
use wiremock::matchers::path;
/// Build an SSE stream body from a list of JSON events.
pub fn sse(events: Vec<Value>) -> String {
use std::fmt::Write as _;
let mut out = String::new();
for ev in events {
let kind = ev.get("type").and_then(|v| v.as_str()).unwrap();
writeln!(&mut out, "event: {kind}").unwrap();
if !ev.as_object().map(|o| o.len() == 1).unwrap_or(false) {
write!(&mut out, "data: {ev}\n\n").unwrap();
} else {
out.push('\n');
}
}
out
}
/// Convenience: SSE event for a completed response with a specific id.
pub fn ev_completed(id: &str) -> Value {
serde_json::json!({
"type": "response.completed",
"response": {
"id": id,
"usage": {"input_tokens":0,"input_tokens_details":null,"output_tokens":0,"output_tokens_details":null,"total_tokens":0}
}
})
}
pub fn ev_completed_with_tokens(id: &str, total_tokens: u64) -> Value {
serde_json::json!({
"type": "response.completed",
"response": {
"id": id,
"usage": {
"input_tokens": total_tokens,
"input_tokens_details": null,
"output_tokens": 0,
"output_tokens_details": null,
"total_tokens": total_tokens
}
}
})
}
/// Convenience: SSE event for a single assistant message output item.
pub fn ev_assistant_message(id: &str, text: &str) -> Value {
serde_json::json!({
"type": "response.output_item.done",
"item": {
"type": "message",
"role": "assistant",
"id": id,
"content": [{"type": "output_text", "text": text}]
}
})
}
pub fn ev_function_call(call_id: &str, name: &str, arguments: &str) -> Value {
serde_json::json!({
"type": "response.output_item.done",
"item": {
"type": "function_call",
"call_id": call_id,
"name": name,
"arguments": arguments
}
})
}
/// Convenience: SSE event for an `apply_patch` custom tool call with raw patch
/// text. This mirrors the payload produced by the Responses API when the model
/// invokes `apply_patch` directly (before we convert it to a function call).
pub fn ev_apply_patch_custom_tool_call(call_id: &str, patch: &str) -> Value {
serde_json::json!({
"type": "response.output_item.done",
"item": {
"type": "custom_tool_call",
"name": "apply_patch",
"input": patch,
"call_id": call_id
}
})
}
/// Convenience: SSE event for an `apply_patch` function call. The Responses API
/// wraps the patch content in a JSON string under the `input` key; we recreate
/// the same structure so downstream code exercises the full parsing path.
pub fn ev_apply_patch_function_call(call_id: &str, patch: &str) -> Value {
let arguments = serde_json::json!({ "input": patch });
let arguments = serde_json::to_string(&arguments).expect("serialize apply_patch arguments");
serde_json::json!({
"type": "response.output_item.done",
"item": {
"type": "function_call",
"name": "apply_patch",
"arguments": arguments,
"call_id": call_id
}
})
}
pub fn sse_response(body: String) -> ResponseTemplate {
ResponseTemplate::new(200)
.insert_header("content-type", "text/event-stream")
.set_body_raw(body, "text/event-stream")
}
pub async fn mount_sse_once<M>(server: &MockServer, matcher: M, body: String)
where
M: wiremock::Match + Send + Sync + 'static,
{
Mock::given(method("POST"))
.and(path("/v1/responses"))
.and(matcher)
.respond_with(sse_response(body))
.mount(server)
.await;
}
pub async fn start_mock_server() -> MockServer {
MockServer::builder()
.body_print_limit(BodyPrintLimit::Limited(80_000))
.start()
.await
}
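Taken together, these helpers let a test stand up a mock Responses endpoint in a few lines. A minimal sketch, with `wiremock::matchers::any()` standing in for a real request matcher:
use core_test_support::responses::ev_assistant_message;
use core_test_support::responses::ev_completed;
use core_test_support::responses::mount_sse_once;
use core_test_support::responses::sse;
use core_test_support::responses::start_mock_server;
use wiremock::matchers::any;

let server = start_mock_server().await;
let body = sse(vec![
    ev_assistant_message("m1", "hello"),
    ev_completed("r1"),
]);
// `mount_sse_once` already matches POST /v1/responses; `any()` adds no extra filtering.
mount_sse_once(&server, any(), body).await;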

View File

@@ -0,0 +1,75 @@
use std::mem::swap;
use std::sync::Arc;
use codex_core::CodexAuth;
use codex_core::CodexConversation;
use codex_core::ConversationManager;
use codex_core::ModelProviderInfo;
use codex_core::NewConversation;
use codex_core::built_in_model_providers;
use codex_core::config::Config;
use codex_core::protocol::SessionConfiguredEvent;
use tempfile::TempDir;
use crate::load_default_config_for_test;
type ConfigMutator = dyn FnOnce(&mut Config);
pub struct TestCodexBuilder {
config_mutators: Vec<Box<ConfigMutator>>,
}
impl TestCodexBuilder {
pub fn with_config<T>(mut self, mutator: T) -> Self
where
T: FnOnce(&mut Config) + 'static,
{
self.config_mutators.push(Box::new(mutator));
self
}
pub async fn build(&mut self, server: &wiremock::MockServer) -> anyhow::Result<TestCodex> {
// Build config pointing to the mock server and spawn Codex.
let model_provider = ModelProviderInfo {
base_url: Some(format!("{}/v1", server.uri())),
..built_in_model_providers()["openai"].clone()
};
let home = TempDir::new()?;
let cwd = TempDir::new()?;
let mut config = load_default_config_for_test(&home);
config.cwd = cwd.path().to_path_buf();
config.model_provider = model_provider;
let mut mutators = vec![];
swap(&mut self.config_mutators, &mut mutators);
for mutator in mutators {
mutator(&mut config)
}
let conversation_manager = ConversationManager::with_auth(CodexAuth::from_api_key("dummy"));
let NewConversation {
conversation,
session_configured,
..
} = conversation_manager.new_conversation(config).await?;
Ok(TestCodex {
home,
cwd,
codex: conversation,
session_configured,
})
}
}
pub struct TestCodex {
pub home: TempDir,
pub cwd: TempDir,
pub codex: Arc<CodexConversation>,
pub session_configured: SessionConfiguredEvent,
}
pub fn test_codex() -> TestCodexBuilder {
TestCodexBuilder {
config_mutators: vec![],
}
}
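A sketch of the builder in use, mirroring the usage-limit test later in this change; the config tweak is illustrative, and any `Config` field can be adjusted before the conversation is spawned:
let server = start_mock_server().await;
let mut builder = test_codex().with_config(|config| {
    // Illustrative mutation; runs against the default test config.
    config.model_auto_compact_token_limit = Some(200_000);
});
let fixture = builder.build(&server).await?;
fixture
    .codex
    .submit(Op::UserInput {
        items: vec![InputItem::Text {
            text: "hello".into(),
        }],
    })
    .await?;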

View File

@@ -1,7 +1,7 @@
use assert_cmd::Command as AssertCommand;
use codex_core::RolloutRecorder;
use codex_core::protocol::GitInfo;
use codex_core::spawn::CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR;
use core_test_support::non_sandbox_test;
use std::time::Duration;
use std::time::Instant;
use tempfile::TempDir;
@@ -21,12 +21,7 @@ use wiremock::matchers::path;
/// 4. Ensures the response is received exactly once and contains "hi"
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn chat_mode_stream_cli() {
if std::env::var(CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR).is_ok() {
println!(
"Skipping test because it cannot execute when network is disabled in a Codex sandbox."
);
return;
}
non_sandbox_test!();
let server = MockServer::start().await;
let sse = concat!(
@@ -102,12 +97,7 @@ async fn chat_mode_stream_cli() {
/// received by a mock OpenAI Responses endpoint.
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn exec_cli_applies_experimental_instructions_file() {
if std::env::var(CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR).is_ok() {
println!(
"Skipping test because it cannot execute when network is disabled in a Codex sandbox."
);
return;
}
non_sandbox_test!();
// Start mock server which will capture the request and return a minimal
// SSE stream for a single turn.
@@ -195,12 +185,7 @@ async fn exec_cli_applies_experimental_instructions_file() {
/// 4. Ensures the fixture content is correctly streamed through the CLI
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn responses_api_stream_cli() {
if std::env::var(CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR).is_ok() {
println!(
"Skipping test because it cannot execute when network is disabled in a Codex sandbox."
);
return;
}
non_sandbox_test!();
let fixture =
std::path::Path::new(env!("CARGO_MANIFEST_DIR")).join("tests/cli_responses_fixture.sse");
@@ -232,12 +217,7 @@ async fn responses_api_stream_cli() {
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn integration_creates_and_checks_session_file() {
// Honor sandbox network restrictions for CI parity with the other tests.
if std::env::var(CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR).is_ok() {
println!(
"Skipping test because it cannot execute when network is disabled in a Codex sandbox."
);
return;
}
non_sandbox_test!();
// 1. Temp home so we read/write isolated session files.
let home = TempDir::new().unwrap();
@@ -420,12 +400,6 @@ async fn integration_creates_and_checks_session_file() {
// Second run: resume should update the existing file.
let marker2 = format!("integration-resume-{}", Uuid::new_v4());
let prompt2 = format!("echo {marker2}");
// Crossplatform safe resume override. On Windows, backslashes in a TOML string must be escaped
// or the parse will fail and the raw literal (including quotes) may be preserved all the way down
// to Config, which in turn breaks resume because the path is invalid. Normalize to forward slashes
// to sidestep the issue.
let resume_path_str = path.to_string_lossy().replace('\\', "/");
let resume_override = format!("experimental_resume=\"{resume_path_str}\"");
let mut cmd2 = AssertCommand::new("cargo");
cmd2.arg("run")
.arg("-p")
@@ -434,11 +408,11 @@ async fn integration_creates_and_checks_session_file() {
.arg("--")
.arg("exec")
.arg("--skip-git-repo-check")
.arg("-c")
.arg(&resume_override)
.arg("-C")
.arg(env!("CARGO_MANIFEST_DIR"))
.arg(&prompt2);
.arg(&prompt2)
.arg("resume")
.arg("--last");
cmd2.env("CODEX_HOME", home.path())
.env("OPENAI_API_KEY", "dummy")
.env("CODEX_RS_SSE_FIXTURE", &fixture)

View File

@@ -1,18 +1,34 @@
use codex_core::CodexAuth;
use codex_core::ContentItem;
use codex_core::ConversationManager;
use codex_core::LocalShellAction;
use codex_core::LocalShellExecAction;
use codex_core::LocalShellStatus;
use codex_core::ModelClient;
use codex_core::ModelProviderInfo;
use codex_core::NewConversation;
use codex_core::Prompt;
use codex_core::ReasoningItemContent;
use codex_core::ResponseEvent;
use codex_core::ResponseItem;
use codex_core::WireApi;
use codex_core::built_in_model_providers;
use codex_core::protocol::EventMsg;
use codex_core::protocol::InputItem;
use codex_core::protocol::Op;
use codex_core::spawn::CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR;
use codex_protocol::mcp_protocol::ConversationId;
use codex_protocol::models::ReasoningItemReasoningSummary;
use codex_protocol::models::WebSearchAction;
use core_test_support::load_default_config_for_test;
use core_test_support::load_sse_fixture_with_id;
use core_test_support::non_sandbox_test;
use core_test_support::responses;
use core_test_support::test_codex::test_codex;
use core_test_support::wait_for_event;
use futures::StreamExt;
use serde_json::json;
use std::io::Write;
use std::sync::Arc;
use tempfile::TempDir;
use uuid::Uuid;
use wiremock::Mock;
@@ -111,12 +127,7 @@ fn write_auth_json(
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn resume_includes_initial_messages_and_sends_prior_items() {
if std::env::var(CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR).is_ok() {
println!(
"Skipping test because it cannot execute when network is disabled in a Codex sandbox."
);
return;
}
non_sandbox_test!();
// Create a fake rollout session file with prior user + system + assistant messages.
let tmpdir = TempDir::new().unwrap();
@@ -222,20 +233,21 @@ async fn resume_includes_initial_messages_and_sends_prior_items() {
let codex_home = TempDir::new().unwrap();
let mut config = load_default_config_for_test(&codex_home);
config.model_provider = model_provider;
config.experimental_resume = Some(session_path.clone());
// Also configure user instructions to ensure they are NOT delivered on resume.
config.user_instructions = Some("be nice".to_string());
let conversation_manager =
ConversationManager::with_auth(CodexAuth::from_api_key("Test API Key"));
let auth_manager =
codex_core::AuthManager::from_auth_for_testing(CodexAuth::from_api_key("Test API Key"));
let NewConversation {
conversation: codex,
session_configured,
..
} = conversation_manager
.new_conversation(config)
.resume_conversation_from_rollout(config, session_path.clone(), auth_manager)
.await
.expect("create new conversation");
.expect("resume conversation");
// 1) Assert initial_messages only includes existing EventMsg entries; response items are not converted
let initial_msgs = session_configured
@@ -281,12 +293,7 @@ async fn resume_includes_initial_messages_and_sends_prior_items() {
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn includes_conversation_id_and_model_headers_in_request() {
if std::env::var(CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR).is_ok() {
println!(
"Skipping test because it cannot execute when network is disabled in a Codex sandbox."
);
return;
}
non_sandbox_test!();
// Mock server
let server = MockServer::start().await;
@@ -411,12 +418,7 @@ async fn includes_base_instructions_override_in_request() {
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn chatgpt_auth_sends_correct_request() {
if std::env::var(CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR).is_ok() {
println!(
"Skipping test because it cannot execute when network is disabled in a Codex sandbox."
);
return;
}
non_sandbox_test!();
// Mock server
let server = MockServer::start().await;
@@ -490,12 +492,7 @@ async fn chatgpt_auth_sends_correct_request() {
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn prefers_apikey_when_config_prefers_apikey_even_with_chatgpt_tokens() {
if std::env::var(CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR).is_ok() {
println!(
"Skipping test because it cannot execute when network is disabled in a Codex sandbox."
);
return;
}
non_sandbox_test!();
// Mock server
let server = MockServer::start().await;
@@ -620,6 +617,365 @@ async fn includes_user_instructions_message_in_request() {
assert_message_ends_with(&request_body["input"][1], "</environment_context>");
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn azure_responses_request_includes_store_and_reasoning_ids() {
non_sandbox_test!();
let server = MockServer::start().await;
let sse_body = concat!(
"data: {\"type\":\"response.created\",\"response\":{}}\n\n",
"data: {\"type\":\"response.completed\",\"response\":{\"id\":\"resp_1\"}}\n\n",
);
let template = ResponseTemplate::new(200)
.insert_header("content-type", "text/event-stream")
.set_body_raw(sse_body, "text/event-stream");
Mock::given(method("POST"))
.and(path("/openai/responses"))
.respond_with(template)
.expect(1)
.mount(&server)
.await;
let provider = ModelProviderInfo {
name: "azure".into(),
base_url: Some(format!("{}/openai", server.uri())),
env_key: None,
env_key_instructions: None,
wire_api: WireApi::Responses,
query_params: None,
http_headers: None,
env_http_headers: None,
request_max_retries: Some(0),
stream_max_retries: Some(0),
stream_idle_timeout_ms: Some(5_000),
requires_openai_auth: false,
};
let codex_home = TempDir::new().unwrap();
let mut config = load_default_config_for_test(&codex_home);
config.model_provider_id = provider.name.clone();
config.model_provider = provider.clone();
let effort = config.model_reasoning_effort;
let summary = config.model_reasoning_summary;
let config = Arc::new(config);
let client = ModelClient::new(
Arc::clone(&config),
None,
provider,
effort,
summary,
ConversationId::new(),
);
let mut prompt = Prompt::default();
prompt.input.push(ResponseItem::Reasoning {
id: "reasoning-id".into(),
summary: vec![ReasoningItemReasoningSummary::SummaryText {
text: "summary".into(),
}],
content: Some(vec![ReasoningItemContent::ReasoningText {
text: "content".into(),
}]),
encrypted_content: None,
});
prompt.input.push(ResponseItem::Message {
id: Some("message-id".into()),
role: "assistant".into(),
content: vec![ContentItem::OutputText {
text: "message".into(),
}],
});
prompt.input.push(ResponseItem::WebSearchCall {
id: Some("web-search-id".into()),
status: Some("completed".into()),
action: WebSearchAction::Search {
query: "weather".into(),
},
});
prompt.input.push(ResponseItem::FunctionCall {
id: Some("function-id".into()),
name: "do_thing".into(),
arguments: "{}".into(),
call_id: "function-call-id".into(),
});
prompt.input.push(ResponseItem::LocalShellCall {
id: Some("local-shell-id".into()),
call_id: Some("local-shell-call-id".into()),
status: LocalShellStatus::Completed,
action: LocalShellAction::Exec(LocalShellExecAction {
command: vec!["echo".into(), "hello".into()],
timeout_ms: None,
working_directory: None,
env: None,
user: None,
}),
});
prompt.input.push(ResponseItem::CustomToolCall {
id: Some("custom-tool-id".into()),
status: Some("completed".into()),
call_id: "custom-tool-call-id".into(),
name: "custom_tool".into(),
input: "{}".into(),
});
let mut stream = client
.stream(&prompt)
.await
.expect("responses stream to start");
while let Some(event) = stream.next().await {
if let Ok(ResponseEvent::Completed { .. }) = event {
break;
}
}
let requests = server
.received_requests()
.await
.expect("mock server collected requests");
assert_eq!(requests.len(), 1, "expected a single request");
let body: serde_json::Value = requests[0]
.body_json()
.expect("request body to be valid JSON");
assert_eq!(body["store"], serde_json::Value::Bool(true));
assert_eq!(body["stream"], serde_json::Value::Bool(true));
assert_eq!(body["input"].as_array().map(Vec::len), Some(6));
assert_eq!(body["input"][0]["id"].as_str(), Some("reasoning-id"));
assert_eq!(body["input"][1]["id"].as_str(), Some("message-id"));
assert_eq!(body["input"][2]["id"].as_str(), Some("web-search-id"));
assert_eq!(body["input"][3]["id"].as_str(), Some("function-id"));
assert_eq!(body["input"][4]["id"].as_str(), Some("local-shell-id"));
assert_eq!(body["input"][5]["id"].as_str(), Some("custom-tool-id"));
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn token_count_includes_rate_limits_snapshot() {
let server = MockServer::start().await;
let sse_body = responses::sse(vec![responses::ev_completed_with_tokens("resp_rate", 123)]);
let response = ResponseTemplate::new(200)
.insert_header("content-type", "text/event-stream")
.insert_header("x-codex-primary-used-percent", "12.5")
.insert_header("x-codex-secondary-used-percent", "40.0")
.insert_header("x-codex-primary-window-minutes", "10")
.insert_header("x-codex-secondary-window-minutes", "60")
.insert_header("x-codex-primary-reset-after-seconds", "1800")
.insert_header("x-codex-secondary-reset-after-seconds", "7200")
.set_body_raw(sse_body, "text/event-stream");
Mock::given(method("POST"))
.and(path("/v1/responses"))
.respond_with(response)
.expect(1)
.mount(&server)
.await;
let mut provider = built_in_model_providers()["openai"].clone();
provider.base_url = Some(format!("{}/v1", server.uri()));
let home = TempDir::new().unwrap();
let mut config = load_default_config_for_test(&home);
config.model_provider = provider;
let conversation_manager = ConversationManager::with_auth(CodexAuth::from_api_key("test"));
let codex = conversation_manager
.new_conversation(config)
.await
.expect("create conversation")
.conversation;
codex
.submit(Op::UserInput {
items: vec![InputItem::Text {
text: "hello".into(),
}],
})
.await
.unwrap();
let first_token_event =
wait_for_event(&codex, |msg| matches!(msg, EventMsg::TokenCount(_))).await;
let rate_limit_only = match first_token_event {
EventMsg::TokenCount(ev) => ev,
_ => unreachable!(),
};
let rate_limit_json = serde_json::to_value(&rate_limit_only).unwrap();
pretty_assertions::assert_eq!(
rate_limit_json,
json!({
"info": null,
"rate_limits": {
"primary": {
"used_percent": 12.5,
"window_minutes": 10,
"resets_in_seconds": 1800
},
"secondary": {
"used_percent": 40.0,
"window_minutes": 60,
"resets_in_seconds": 7200
}
}
})
);
let token_event = wait_for_event(
&codex,
|msg| matches!(msg, EventMsg::TokenCount(ev) if ev.info.is_some()),
)
.await;
let final_payload = match token_event {
EventMsg::TokenCount(ev) => ev,
_ => unreachable!(),
};
// Assert full JSON for the final token count event (usage + rate limits)
let final_json = serde_json::to_value(&final_payload).unwrap();
pretty_assertions::assert_eq!(
final_json,
json!({
"info": {
"total_token_usage": {
"input_tokens": 123,
"cached_input_tokens": 0,
"output_tokens": 0,
"reasoning_output_tokens": 0,
"total_tokens": 123
},
"last_token_usage": {
"input_tokens": 123,
"cached_input_tokens": 0,
"output_tokens": 0,
"reasoning_output_tokens": 0,
"total_tokens": 123
},
// Default model is gpt-5-codex in tests → 272000 context window
"model_context_window": 272000
},
"rate_limits": {
"primary": {
"used_percent": 12.5,
"window_minutes": 10,
"resets_in_seconds": 1800
},
"secondary": {
"used_percent": 40.0,
"window_minutes": 60,
"resets_in_seconds": 7200
}
}
})
);
let usage = final_payload
.info
.expect("token usage info should be recorded after completion");
assert_eq!(usage.total_token_usage.total_tokens, 123);
let final_snapshot = final_payload
.rate_limits
.expect("latest rate limit snapshot should be retained");
assert_eq!(
final_snapshot
.primary
.as_ref()
.map(|window| window.used_percent),
Some(12.5)
);
assert_eq!(
final_snapshot
.primary
.as_ref()
.and_then(|window| window.resets_in_seconds),
Some(1800)
);
wait_for_event(&codex, |msg| matches!(msg, EventMsg::TaskComplete(_))).await;
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn usage_limit_error_emits_rate_limit_event() -> anyhow::Result<()> {
let server = MockServer::start().await;
let response = ResponseTemplate::new(429)
.insert_header("x-codex-primary-used-percent", "100.0")
.insert_header("x-codex-secondary-used-percent", "87.5")
.insert_header("x-codex-primary-over-secondary-limit-percent", "95.0")
.insert_header("x-codex-primary-window-minutes", "15")
.insert_header("x-codex-secondary-window-minutes", "60")
.set_body_json(json!({
"error": {
"type": "usage_limit_reached",
"message": "limit reached",
"resets_in_seconds": 42,
"plan_type": "pro"
}
}));
Mock::given(method("POST"))
.and(path("/v1/responses"))
.respond_with(response)
.expect(1)
.mount(&server)
.await;
let mut builder = test_codex();
let codex_fixture = builder.build(&server).await?;
let codex = codex_fixture.codex.clone();
let expected_limits = json!({
"primary": {
"used_percent": 100.0,
"window_minutes": 15,
"resets_in_seconds": null
},
"secondary": {
"used_percent": 87.5,
"window_minutes": 60,
"resets_in_seconds": null
}
});
let submission_id = codex
.submit(Op::UserInput {
items: vec![InputItem::Text {
text: "hello".into(),
}],
})
.await
.expect("submission should succeed while emitting usage limit error events");
let token_event = wait_for_event(&codex, |msg| matches!(msg, EventMsg::TokenCount(_))).await;
let EventMsg::TokenCount(event) = token_event else {
unreachable!();
};
let event_json = serde_json::to_value(&event).expect("serialize token count event");
pretty_assertions::assert_eq!(
event_json,
json!({
"info": null,
"rate_limits": expected_limits
})
);
let error_event = wait_for_event(&codex, |msg| matches!(msg, EventMsg::Error(_))).await;
let EventMsg::Error(error_event) = error_event else {
unreachable!();
};
assert!(
error_event.message.to_lowercase().contains("usage limit"),
"unexpected error message for submission {submission_id}: {}",
error_event.message
);
Ok(())
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn azure_overrides_assign_properties_used_for_responses_url() {
let existing_env_var_with_random_value = if cfg!(windows) { "USERNAME" } else { "USER" };
@@ -785,12 +1141,7 @@ fn create_dummy_codex_auth() -> CodexAuth {
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn history_dedupes_streamed_and_final_messages_across_turns() {
// Skip under Codex sandbox network restrictions (mirrors other tests).
if std::env::var(CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR).is_ok() {
println!(
"Skipping test because it cannot execute when network is disabled in a Codex sandbox."
);
return;
}
non_sandbox_test!();
// Mock server that will receive three sequential requests and return the same SSE stream
// each time: a few deltas, then a final assistant message, then completed.

View File

@@ -1,105 +1,62 @@
#![expect(clippy::unwrap_used)]
use codex_core::CodexAuth;
use codex_core::ConversationManager;
use codex_core::ModelProviderInfo;
use codex_core::NewConversation;
use codex_core::built_in_model_providers;
use codex_core::protocol::ErrorEvent;
use codex_core::protocol::EventMsg;
use codex_core::protocol::InputItem;
use codex_core::protocol::Op;
use codex_core::protocol::RolloutItem;
use codex_core::protocol::RolloutLine;
use codex_core::spawn::CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR;
use core_test_support::load_default_config_for_test;
use core_test_support::wait_for_event;
use serde_json::Value;
use tempfile::TempDir;
use wiremock::Mock;
use wiremock::MockServer;
use wiremock::Request;
use wiremock::Respond;
use wiremock::ResponseTemplate;
use wiremock::matchers::method;
use wiremock::matchers::path;
use codex_core::codex::compact::SUMMARIZATION_PROMPT;
use core_test_support::non_sandbox_test;
use core_test_support::responses::ev_assistant_message;
use core_test_support::responses::ev_completed;
use core_test_support::responses::ev_completed_with_tokens;
use core_test_support::responses::ev_function_call;
use core_test_support::responses::mount_sse_once;
use core_test_support::responses::sse;
use core_test_support::responses::sse_response;
use core_test_support::responses::start_mock_server;
use pretty_assertions::assert_eq;
use std::sync::Arc;
use std::sync::Mutex;
use std::sync::atomic::AtomicUsize;
use std::sync::atomic::Ordering;
// --- Test helpers -----------------------------------------------------------
/// Build an SSE stream body from a list of JSON events.
fn sse(events: Vec<Value>) -> String {
use std::fmt::Write as _;
let mut out = String::new();
for ev in events {
let kind = ev.get("type").and_then(|v| v.as_str()).unwrap();
writeln!(&mut out, "event: {kind}").unwrap();
if !ev.as_object().map(|o| o.len() == 1).unwrap_or(false) {
write!(&mut out, "data: {ev}\n\n").unwrap();
} else {
out.push('\n');
}
}
out
}
/// Convenience: SSE event for a completed response with a specific id.
fn ev_completed(id: &str) -> Value {
serde_json::json!({
"type": "response.completed",
"response": {
"id": id,
"usage": {"input_tokens":0,"input_tokens_details":null,"output_tokens":0,"output_tokens_details":null,"total_tokens":0}
}
})
}
/// Convenience: SSE event for a single assistant message output item.
fn ev_assistant_message(id: &str, text: &str) -> Value {
serde_json::json!({
"type": "response.output_item.done",
"item": {
"type": "message",
"role": "assistant",
"id": id,
"content": [{"type": "output_text", "text": text}]
}
})
}
fn sse_response(body: String) -> ResponseTemplate {
ResponseTemplate::new(200)
.insert_header("content-type", "text/event-stream")
.set_body_raw(body, "text/event-stream")
}
async fn mount_sse_once<M>(server: &MockServer, matcher: M, body: String)
where
M: wiremock::Match + Send + Sync + 'static,
{
Mock::given(method("POST"))
.and(path("/v1/responses"))
.and(matcher)
.respond_with(sse_response(body))
.expect(1)
.mount(server)
.await;
}
const FIRST_REPLY: &str = "FIRST_REPLY";
const SUMMARY_TEXT: &str = "SUMMARY_ONLY_CONTEXT";
const SUMMARIZE_TRIGGER: &str = "Start Summarization";
pub(super) const FIRST_REPLY: &str = "FIRST_REPLY";
pub(super) const SUMMARY_TEXT: &str = "SUMMARY_ONLY_CONTEXT";
const THIRD_USER_MSG: &str = "next turn";
const AUTO_SUMMARY_TEXT: &str = "AUTO_SUMMARY";
const FIRST_AUTO_MSG: &str = "token limit start";
const SECOND_AUTO_MSG: &str = "token limit push";
const STILL_TOO_BIG_REPLY: &str = "STILL_TOO_BIG";
const MULTI_AUTO_MSG: &str = "multi auto";
const SECOND_LARGE_REPLY: &str = "SECOND_LARGE_REPLY";
const FIRST_AUTO_SUMMARY: &str = "FIRST_AUTO_SUMMARY";
const SECOND_AUTO_SUMMARY: &str = "SECOND_AUTO_SUMMARY";
const FINAL_REPLY: &str = "FINAL_REPLY";
const DUMMY_FUNCTION_NAME: &str = "unsupported_tool";
const DUMMY_CALL_ID: &str = "call-multi-auto";
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn summarize_context_three_requests_and_instructions() {
if std::env::var(CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR).is_ok() {
println!(
"Skipping test because it cannot execute when network is disabled in a Codex sandbox."
);
return;
}
non_sandbox_test!();
// Set up a mock server that we can inspect after the run.
let server = MockServer::start().await;
let server = start_mock_server().await;
// SSE 1: assistant replies normally so it is recorded in history.
let sse1 = sse(vec![
@@ -120,13 +77,13 @@ async fn summarize_context_three_requests_and_instructions() {
let first_matcher = |req: &wiremock::Request| {
let body = std::str::from_utf8(&req.body).unwrap_or("");
body.contains("\"text\":\"hello world\"")
&& !body.contains(&format!("\"text\":\"{SUMMARIZE_TRIGGER}\""))
&& !body.contains("You have exceeded the maximum number of tokens")
};
mount_sse_once(&server, first_matcher, sse1).await;
let second_matcher = |req: &wiremock::Request| {
let body = std::str::from_utf8(&req.body).unwrap_or("");
body.contains(&format!("\"text\":\"{SUMMARIZE_TRIGGER}\""))
body.contains("You have exceeded the maximum number of tokens")
};
mount_sse_once(&server, second_matcher, sse2).await;
@@ -144,6 +101,7 @@ async fn summarize_context_three_requests_and_instructions() {
let home = TempDir::new().unwrap();
let mut config = load_default_config_for_test(&home);
config.model_provider = model_provider;
config.model_auto_compact_token_limit = Some(200_000);
let conversation_manager = ConversationManager::with_auth(CodexAuth::from_api_key("dummy"));
let NewConversation {
conversation: codex,
@@ -163,7 +121,7 @@ async fn summarize_context_three_requests_and_instructions() {
.unwrap();
wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
// 2) Summarize second hit with summarization instructions.
// 2) Summarize second hit should include the summarization prompt.
codex.submit(Op::Compact).await.unwrap();
wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
@@ -190,16 +148,12 @@ async fn summarize_context_three_requests_and_instructions() {
let body2 = req2.body_json::<serde_json::Value>().unwrap();
let body3 = req3.body_json::<serde_json::Value>().unwrap();
// System instructions should change for the summarization turn.
// Manual compact should keep the baseline developer instructions.
let instr1 = body1.get("instructions").and_then(|v| v.as_str()).unwrap();
let instr2 = body2.get("instructions").and_then(|v| v.as_str()).unwrap();
assert_ne!(
assert_eq!(
instr1, instr2,
"summarization should override base instructions"
);
assert!(
instr2.contains("You are a summarization assistant"),
"summarization instructions not applied"
"manual compact should keep the standard developer instructions"
);
// The summarization request should include the injected user input marker.
@@ -209,14 +163,17 @@ async fn summarize_context_three_requests_and_instructions() {
assert_eq!(last2.get("type").unwrap().as_str().unwrap(), "message");
assert_eq!(last2.get("role").unwrap().as_str().unwrap(), "user");
let text2 = last2["content"][0]["text"].as_str().unwrap();
assert!(text2.contains(SUMMARIZE_TRIGGER));
assert_eq!(
text2, SUMMARIZATION_PROMPT,
"expected summarize trigger, got `{text2}`"
);
// Third request must contain only the summary from step 2 as prior history plus new user msg.
// Third request must contain the refreshed instructions, bridge summary message and new user msg.
let input3 = body3.get("input").and_then(|v| v.as_array()).unwrap();
println!("third request body: {body3}");
assert!(
input3.len() >= 2,
"expected summary + new user message in third request"
input3.len() >= 3,
"expected refreshed context and new user message in third request"
);
// Collect all (role, text) message tuples.
@@ -232,24 +189,35 @@ async fn summarize_context_three_requests_and_instructions() {
}
}
// Exactly one assistant message should remain after compaction and the new user message is present.
// No previous assistant messages should remain and the new user message is present.
let assistant_count = messages.iter().filter(|(r, _)| r == "assistant").count();
assert_eq!(
assistant_count, 1,
"exactly one assistant message should remain after compaction"
);
assert_eq!(assistant_count, 0, "assistant history should be cleared");
assert!(
messages
.iter()
.any(|(r, t)| r == "user" && t == THIRD_USER_MSG),
"third request should include the new user message"
);
let Some((_, bridge_text)) = messages.iter().find(|(role, text)| {
role == "user"
&& (text.contains("Here were the user messages")
|| text.contains("Here are all the user messages"))
&& text.contains(SUMMARY_TEXT)
}) else {
panic!("expected a bridge message containing the summary");
};
assert!(
!messages.iter().any(|(_, t)| t.contains("hello world")),
"third request should not include the original user input"
bridge_text.contains("hello world"),
"bridge should capture earlier user messages"
);
assert!(
!messages.iter().any(|(_, t)| t.contains(SUMMARIZE_TRIGGER)),
!bridge_text.contains(SUMMARIZATION_PROMPT),
"bridge text should not echo the summarize trigger"
);
assert!(
!messages
.iter()
.any(|(_, text)| text.contains(SUMMARIZATION_PROMPT)),
"third request should not include the summarize trigger"
);
@@ -258,6 +226,7 @@ async fn summarize_context_three_requests_and_instructions() {
wait_for_event(&codex, |ev| matches!(ev, EventMsg::ShutdownComplete)).await;
// Verify rollout contains APITurn entries for each API call and a Compacted entry.
println!("rollout path: {}", rollout_path.display());
let text = std::fs::read_to_string(&rollout_path).unwrap_or_else(|e| {
panic!(
"failed to read rollout file {}: {e}",
@@ -296,3 +265,566 @@ async fn summarize_context_three_requests_and_instructions() {
"expected a Compacted entry containing the summarizer output"
);
}
// Windows CI only: bump to 4 workers to prevent SSE/event starvation and test timeouts.
#[cfg_attr(windows, tokio::test(flavor = "multi_thread", worker_threads = 4))]
#[cfg_attr(not(windows), tokio::test(flavor = "multi_thread", worker_threads = 2))]
async fn auto_compact_runs_after_token_limit_hit() {
non_sandbox_test!();
let server = start_mock_server().await;
let sse1 = sse(vec![
ev_assistant_message("m1", FIRST_REPLY),
ev_completed_with_tokens("r1", 70_000),
]);
let sse2 = sse(vec![
ev_assistant_message("m2", "SECOND_REPLY"),
ev_completed_with_tokens("r2", 330_000),
]);
let sse3 = sse(vec![
ev_assistant_message("m3", AUTO_SUMMARY_TEXT),
ev_completed_with_tokens("r3", 200),
]);
let first_matcher = |req: &wiremock::Request| {
let body = std::str::from_utf8(&req.body).unwrap_or("");
body.contains(FIRST_AUTO_MSG)
&& !body.contains(SECOND_AUTO_MSG)
&& !body.contains("You have exceeded the maximum number of tokens")
};
Mock::given(method("POST"))
.and(path("/v1/responses"))
.and(first_matcher)
.respond_with(sse_response(sse1))
.mount(&server)
.await;
let second_matcher = |req: &wiremock::Request| {
let body = std::str::from_utf8(&req.body).unwrap_or("");
body.contains(SECOND_AUTO_MSG)
&& body.contains(FIRST_AUTO_MSG)
&& !body.contains("You have exceeded the maximum number of tokens")
};
Mock::given(method("POST"))
.and(path("/v1/responses"))
.and(second_matcher)
.respond_with(sse_response(sse2))
.mount(&server)
.await;
let third_matcher = |req: &wiremock::Request| {
let body = std::str::from_utf8(&req.body).unwrap_or("");
body.contains("You have exceeded the maximum number of tokens")
};
Mock::given(method("POST"))
.and(path("/v1/responses"))
.and(third_matcher)
.respond_with(sse_response(sse3))
.mount(&server)
.await;
let model_provider = ModelProviderInfo {
base_url: Some(format!("{}/v1", server.uri())),
..built_in_model_providers()["openai"].clone()
};
let home = TempDir::new().unwrap();
let mut config = load_default_config_for_test(&home);
config.model_provider = model_provider;
config.model_auto_compact_token_limit = Some(200_000);
let conversation_manager = ConversationManager::with_auth(CodexAuth::from_api_key("dummy"));
let codex = conversation_manager
.new_conversation(config)
.await
.unwrap()
.conversation;
codex
.submit(Op::UserInput {
items: vec![InputItem::Text {
text: FIRST_AUTO_MSG.into(),
}],
})
.await
.unwrap();
wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
codex
.submit(Op::UserInput {
items: vec![InputItem::Text {
text: SECOND_AUTO_MSG.into(),
}],
})
.await
.unwrap();
wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
let requests = server.received_requests().await.unwrap();
assert!(
requests.len() >= 3,
"auto compact should add at least a third request, got {}",
requests.len()
);
let is_auto_compact = |req: &wiremock::Request| {
std::str::from_utf8(&req.body)
.unwrap_or("")
.contains("You have exceeded the maximum number of tokens")
};
let auto_compact_count = requests.iter().filter(|req| is_auto_compact(req)).count();
assert_eq!(
auto_compact_count, 1,
"expected exactly one auto compact request"
);
let auto_compact_index = requests
.iter()
.enumerate()
.find_map(|(idx, req)| is_auto_compact(req).then_some(idx))
.expect("auto compact request missing");
assert_eq!(
auto_compact_index, 2,
"auto compact should add a third request"
);
let body_first = requests[0].body_json::<serde_json::Value>().unwrap();
let body3 = requests[auto_compact_index]
.body_json::<serde_json::Value>()
.unwrap();
let instructions = body3
.get("instructions")
.and_then(|v| v.as_str())
.unwrap_or_default();
let baseline_instructions = body_first
.get("instructions")
.and_then(|v| v.as_str())
.unwrap_or_default()
.to_string();
assert_eq!(
instructions, baseline_instructions,
"auto compact should keep the standard developer instructions",
);
let input3 = body3.get("input").and_then(|v| v.as_array()).unwrap();
let last3 = input3
.last()
.expect("auto compact request should append a user message");
assert_eq!(last3.get("type").and_then(|v| v.as_str()), Some("message"));
assert_eq!(last3.get("role").and_then(|v| v.as_str()), Some("user"));
let last_text = last3
.get("content")
.and_then(|v| v.as_array())
.and_then(|items| items.first())
.and_then(|item| item.get("text"))
.and_then(|text| text.as_str())
.unwrap_or_default();
assert_eq!(
last_text, SUMMARIZATION_PROMPT,
"auto compact should send the summarization prompt as a user message",
);
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn auto_compact_persists_rollout_entries() {
non_sandbox_test!();
let server = start_mock_server().await;
let sse1 = sse(vec![
ev_assistant_message("m1", FIRST_REPLY),
ev_completed_with_tokens("r1", 70_000),
]);
let sse2 = sse(vec![
ev_assistant_message("m2", "SECOND_REPLY"),
ev_completed_with_tokens("r2", 330_000),
]);
let sse3 = sse(vec![
ev_assistant_message("m3", AUTO_SUMMARY_TEXT),
ev_completed_with_tokens("r3", 200),
]);
let first_matcher = |req: &wiremock::Request| {
let body = std::str::from_utf8(&req.body).unwrap_or("");
body.contains(FIRST_AUTO_MSG)
&& !body.contains(SECOND_AUTO_MSG)
&& !body.contains("You have exceeded the maximum number of tokens")
};
Mock::given(method("POST"))
.and(path("/v1/responses"))
.and(first_matcher)
.respond_with(sse_response(sse1))
.mount(&server)
.await;
let second_matcher = |req: &wiremock::Request| {
let body = std::str::from_utf8(&req.body).unwrap_or("");
body.contains(SECOND_AUTO_MSG)
&& body.contains(FIRST_AUTO_MSG)
&& !body.contains("You have exceeded the maximum number of tokens")
};
Mock::given(method("POST"))
.and(path("/v1/responses"))
.and(second_matcher)
.respond_with(sse_response(sse2))
.mount(&server)
.await;
let third_matcher = |req: &wiremock::Request| {
let body = std::str::from_utf8(&req.body).unwrap_or("");
body.contains("You have exceeded the maximum number of tokens")
};
Mock::given(method("POST"))
.and(path("/v1/responses"))
.and(third_matcher)
.respond_with(sse_response(sse3))
.mount(&server)
.await;
let model_provider = ModelProviderInfo {
base_url: Some(format!("{}/v1", server.uri())),
..built_in_model_providers()["openai"].clone()
};
let home = TempDir::new().unwrap();
let mut config = load_default_config_for_test(&home);
config.model_provider = model_provider;
let conversation_manager = ConversationManager::with_auth(CodexAuth::from_api_key("dummy"));
let NewConversation {
conversation: codex,
session_configured,
..
} = conversation_manager.new_conversation(config).await.unwrap();
codex
.submit(Op::UserInput {
items: vec![InputItem::Text {
text: FIRST_AUTO_MSG.into(),
}],
})
.await
.unwrap();
wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
codex
.submit(Op::UserInput {
items: vec![InputItem::Text {
text: SECOND_AUTO_MSG.into(),
}],
})
.await
.unwrap();
wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
codex.submit(Op::Shutdown).await.unwrap();
wait_for_event(&codex, |ev| matches!(ev, EventMsg::ShutdownComplete)).await;
let rollout_path = session_configured.rollout_path;
let text = std::fs::read_to_string(&rollout_path).unwrap_or_else(|e| {
panic!(
"failed to read rollout file {}: {e}",
rollout_path.display()
)
});
let mut turn_context_count = 0usize;
for line in text.lines() {
let trimmed = line.trim();
if trimmed.is_empty() {
continue;
}
let Ok(entry): Result<RolloutLine, _> = serde_json::from_str(trimmed) else {
continue;
};
match entry.item {
RolloutItem::TurnContext(_) => {
turn_context_count += 1;
}
RolloutItem::Compacted(_) => {}
_ => {}
}
}
assert!(
turn_context_count >= 2,
"expected at least two turn context entries, got {turn_context_count}"
);
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn auto_compact_stops_after_failed_attempt() {
non_sandbox_test!();
let server = start_mock_server().await;
let sse1 = sse(vec![
ev_assistant_message("m1", FIRST_REPLY),
ev_completed_with_tokens("r1", 500),
]);
let sse2 = sse(vec![
ev_assistant_message("m2", SUMMARY_TEXT),
ev_completed_with_tokens("r2", 50),
]);
let sse3 = sse(vec![
ev_assistant_message("m3", STILL_TOO_BIG_REPLY),
ev_completed_with_tokens("r3", 500),
]);
let first_matcher = |req: &wiremock::Request| {
let body = std::str::from_utf8(&req.body).unwrap_or("");
body.contains(FIRST_AUTO_MSG)
&& !body.contains("You have exceeded the maximum number of tokens")
};
Mock::given(method("POST"))
.and(path("/v1/responses"))
.and(first_matcher)
.respond_with(sse_response(sse1.clone()))
.mount(&server)
.await;
let second_matcher = |req: &wiremock::Request| {
let body = std::str::from_utf8(&req.body).unwrap_or("");
body.contains("You have exceeded the maximum number of tokens")
};
Mock::given(method("POST"))
.and(path("/v1/responses"))
.and(second_matcher)
.respond_with(sse_response(sse2.clone()))
.mount(&server)
.await;
let third_matcher = |req: &wiremock::Request| {
let body = std::str::from_utf8(&req.body).unwrap_or("");
!body.contains("You have exceeded the maximum number of tokens")
&& body.contains(SUMMARY_TEXT)
};
Mock::given(method("POST"))
.and(path("/v1/responses"))
.and(third_matcher)
.respond_with(sse_response(sse3.clone()))
.mount(&server)
.await;
let model_provider = ModelProviderInfo {
base_url: Some(format!("{}/v1", server.uri())),
..built_in_model_providers()["openai"].clone()
};
let home = TempDir::new().unwrap();
let mut config = load_default_config_for_test(&home);
config.model_provider = model_provider;
config.model_auto_compact_token_limit = Some(200);
let conversation_manager = ConversationManager::with_auth(CodexAuth::from_api_key("dummy"));
let codex = conversation_manager
.new_conversation(config)
.await
.unwrap()
.conversation;
codex
.submit(Op::UserInput {
items: vec![InputItem::Text {
text: FIRST_AUTO_MSG.into(),
}],
})
.await
.unwrap();
let error_event = wait_for_event(&codex, |ev| matches!(ev, EventMsg::Error(_))).await;
let EventMsg::Error(ErrorEvent { message }) = error_event else {
panic!("expected error event");
};
assert!(
message.contains("limit"),
"error message should include limit information: {message}"
);
wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
let requests = server.received_requests().await.unwrap();
assert_eq!(
requests.len(),
3,
"auto compact should attempt at most one summarization before erroring"
);
let last_body = requests[2].body_json::<serde_json::Value>().unwrap();
let input = last_body
.get("input")
.and_then(|v| v.as_array())
.unwrap_or_else(|| panic!("unexpected request format: {last_body}"));
let contains_prompt = input.iter().any(|item| {
item.get("type").and_then(|v| v.as_str()) == Some("message")
&& item.get("role").and_then(|v| v.as_str()) == Some("user")
&& item
.get("content")
.and_then(|v| v.as_array())
.and_then(|items| items.first())
.and_then(|entry| entry.get("text"))
.and_then(|text| text.as_str())
.map(|text| text == SUMMARIZATION_PROMPT)
.unwrap_or(false)
});
assert!(
!contains_prompt,
"third request should be the follow-up turn, not another summarization",
);
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn auto_compact_allows_multiple_attempts_when_interleaved_with_other_turn_events() {
non_sandbox_test!();
let server = start_mock_server().await;
let sse1 = sse(vec![
ev_assistant_message("m1", FIRST_REPLY),
ev_completed_with_tokens("r1", 500),
]);
let sse2 = sse(vec![
ev_assistant_message("m2", FIRST_AUTO_SUMMARY),
ev_completed_with_tokens("r2", 50),
]);
let sse3 = sse(vec![
ev_function_call(DUMMY_CALL_ID, DUMMY_FUNCTION_NAME, "{}"),
ev_completed_with_tokens("r3", 150),
]);
let sse4 = sse(vec![
ev_assistant_message("m4", SECOND_LARGE_REPLY),
ev_completed_with_tokens("r4", 450),
]);
let sse5 = sse(vec![
ev_assistant_message("m5", SECOND_AUTO_SUMMARY),
ev_completed_with_tokens("r5", 60),
]);
let sse6 = sse(vec![
ev_assistant_message("m6", FINAL_REPLY),
ev_completed_with_tokens("r6", 120),
]);
#[derive(Clone)]
struct SeqResponder {
bodies: Arc<Vec<String>>,
calls: Arc<AtomicUsize>,
requests: Arc<Mutex<Vec<Vec<u8>>>>,
}
impl SeqResponder {
fn new(bodies: Vec<String>) -> Self {
Self {
bodies: Arc::new(bodies),
calls: Arc::new(AtomicUsize::new(0)),
requests: Arc::new(Mutex::new(Vec::new())),
}
}
fn recorded_requests(&self) -> Vec<Vec<u8>> {
self.requests.lock().unwrap().clone()
}
}
impl Respond for SeqResponder {
fn respond(&self, req: &Request) -> ResponseTemplate {
let idx = self.calls.fetch_add(1, Ordering::SeqCst);
self.requests.lock().unwrap().push(req.body.clone());
let body = self
.bodies
.get(idx)
.unwrap_or_else(|| panic!("unexpected request index {idx}"))
.clone();
ResponseTemplate::new(200)
.insert_header("content-type", "text/event-stream")
.set_body_raw(body, "text/event-stream")
}
}
let responder = SeqResponder::new(vec![sse1, sse2, sse3, sse4, sse5, sse6]);
Mock::given(method("POST"))
.and(path("/v1/responses"))
.respond_with(responder.clone())
.expect(6)
.mount(&server)
.await;
let model_provider = ModelProviderInfo {
base_url: Some(format!("{}/v1", server.uri())),
..built_in_model_providers()["openai"].clone()
};
let home = TempDir::new().unwrap();
let mut config = load_default_config_for_test(&home);
config.model_provider = model_provider;
config.model_auto_compact_token_limit = Some(200);
let conversation_manager = ConversationManager::with_auth(CodexAuth::from_api_key("dummy"));
let codex = conversation_manager
.new_conversation(config)
.await
.unwrap()
.conversation;
codex
.submit(Op::UserInput {
items: vec![InputItem::Text {
text: MULTI_AUTO_MSG.into(),
}],
})
.await
.unwrap();
let mut auto_compact_lifecycle_events = Vec::new();
loop {
let event = codex.next_event().await.unwrap();
if event.id.starts_with("auto-compact-")
&& matches!(
event.msg,
EventMsg::TaskStarted(_) | EventMsg::TaskComplete(_)
)
{
auto_compact_lifecycle_events.push(event);
continue;
}
if let EventMsg::TaskComplete(_) = &event.msg
&& !event.id.starts_with("auto-compact-")
{
break;
}
}
assert!(
auto_compact_lifecycle_events.is_empty(),
"auto compact should not emit task lifecycle events"
);
let request_bodies: Vec<String> = responder
.recorded_requests()
.into_iter()
.map(|body| String::from_utf8(body).unwrap_or_default())
.collect();
assert_eq!(
request_bodies.len(),
6,
"expected six requests including two auto compactions"
);
assert!(
request_bodies[0].contains(MULTI_AUTO_MSG),
"first request should contain the user input"
);
assert!(
request_bodies[1].contains("You have exceeded the maximum number of tokens"),
"first auto compact request should include the summarization prompt"
);
assert!(
request_bodies[3].contains(&format!("unsupported call: {DUMMY_FUNCTION_NAME}")),
"function call output should be sent before the second auto compact"
);
assert!(
request_bodies[4].contains("You have exceeded the maximum number of tokens"),
"second auto compact request should include the summarization prompt"
);
}

View File

@@ -0,0 +1,837 @@
#![allow(clippy::expect_used)]
//! Integration tests that cover compacting, resuming, and forking conversations.
//!
//! Each test sets up a mocked SSE conversation and drives the conversation through
//! a specific sequence of operations. After every operation we capture the
//! request payload that Codex would send to the model and assert that the
//! model-visible history matches the expected sequence of messages.
use super::compact::FIRST_REPLY;
use super::compact::SUMMARY_TEXT;
use codex_core::CodexAuth;
use codex_core::CodexConversation;
use codex_core::ConversationManager;
use codex_core::ModelProviderInfo;
use codex_core::NewConversation;
use codex_core::built_in_model_providers;
use codex_core::codex::compact::SUMMARIZATION_PROMPT;
use codex_core::config::Config;
use codex_core::protocol::ConversationPathResponseEvent;
use codex_core::protocol::EventMsg;
use codex_core::protocol::InputItem;
use codex_core::protocol::Op;
use codex_core::spawn::CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR;
use core_test_support::load_default_config_for_test;
use core_test_support::responses::ev_assistant_message;
use core_test_support::responses::ev_completed;
use core_test_support::responses::mount_sse_once;
use core_test_support::responses::sse;
use core_test_support::wait_for_event;
use pretty_assertions::assert_eq;
use serde_json::Value;
use serde_json::json;
use std::sync::Arc;
use tempfile::TempDir;
use wiremock::MockServer;
const AFTER_SECOND_RESUME: &str = "AFTER_SECOND_RESUME";
fn network_disabled() -> bool {
std::env::var(CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR).is_ok()
}
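The assertions below compare full request payloads captured by `gather_request_bodies`, a helper presumably defined later in this file. An illustrative reconstruction, built from the same wiremock calls used elsewhere in these tests:
async fn gather_request_bodies(server: &MockServer) -> Vec<Value> {
    server
        .received_requests()
        .await
        .expect("mock server should be recording requests")
        .iter()
        .map(|req| {
            req.body_json::<Value>()
                .expect("request body should be valid JSON")
        })
        .collect()
}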
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
/// Scenario: compact an initial conversation, resume it, fork one turn back, and
/// ensure the model-visible history matches expectations at each request.
async fn compact_resume_and_fork_preserve_model_history_view() {
if network_disabled() {
println!("Skipping test because network is disabled in this sandbox");
return;
}
// 1. Arrange mocked SSE responses for the initial compact/resume/fork flow.
let server = MockServer::start().await;
mount_initial_flow(&server).await;
// 2. Start a new conversation and drive it through the compact/resume/fork steps.
let (_home, config, manager, base) = start_test_conversation(&server).await;
user_turn(&base, "hello world").await;
compact_conversation(&base).await;
user_turn(&base, "AFTER_COMPACT").await;
let base_path = fetch_conversation_path(&base, "base conversation").await;
assert!(
base_path.exists(),
"compact+resume test expects base path {base_path:?} to exist",
);
let resumed = resume_conversation(&manager, &config, base_path).await;
user_turn(&resumed, "AFTER_RESUME").await;
let resumed_path = fetch_conversation_path(&resumed, "resumed conversation").await;
assert!(
resumed_path.exists(),
"compact+resume test expects resumed path {resumed_path:?} to exist",
);
let forked = fork_conversation(&manager, &config, resumed_path, 2).await;
user_turn(&forked, "AFTER_FORK").await;
// 3. Capture the requests to the model and validate the history slices.
let requests = gather_request_bodies(&server).await;
// input after compact is a prefix of input after resume/fork
let input_after_compact = json!(requests[requests.len() - 3]["input"]);
let input_after_resume = json!(requests[requests.len() - 2]["input"]);
let input_after_fork = json!(requests[requests.len() - 1]["input"]);
let compact_arr = input_after_compact
.as_array()
.expect("input after compact should be an array");
let resume_arr = input_after_resume
.as_array()
.expect("input after resume should be an array");
let fork_arr = input_after_fork
.as_array()
.expect("input after fork should be an array");
assert!(
compact_arr.len() <= resume_arr.len(),
"after-resume input should have at least as many items as after-compact",
);
assert_eq!(compact_arr.as_slice(), &resume_arr[..compact_arr.len()]);
assert!(
compact_arr.len() <= fork_arr.len(),
"after-fork input should have at least as many items as after-compact",
);
assert_eq!(
&compact_arr.as_slice()[..compact_arr.len()],
&fork_arr[..compact_arr.len()]
);
let prompt = requests[0]["instructions"]
.as_str()
.unwrap_or_default()
.to_string();
let user_instructions = requests[0]["input"][0]["content"][0]["text"]
.as_str()
.unwrap_or_default()
.to_string();
let environment_context = requests[0]["input"][1]["content"][0]["text"]
.as_str()
.unwrap_or_default()
.to_string();
let tool_calls = json!(requests[0]["tools"].as_array());
let prompt_cache_key = requests[0]["prompt_cache_key"]
.as_str()
.unwrap_or_default()
.to_string();
let fork_prompt_cache_key = requests[requests.len() - 1]["prompt_cache_key"]
.as_str()
.unwrap_or_default()
.to_string();
let user_turn_1 = json!(
{
"model": "gpt-5-codex",
"instructions": prompt,
"input": [
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": user_instructions
}
]
},
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": environment_context
}
]
},
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": "hello world"
}
]
}
],
"tools": tool_calls,
"tool_choice": "auto",
"parallel_tool_calls": false,
"reasoning": {
"summary": "auto"
},
"store": false,
"stream": true,
"include": [
"reasoning.encrypted_content"
],
"prompt_cache_key": prompt_cache_key
});
let compact_1 = json!(
{
"model": "gpt-5-codex",
"instructions": prompt,
"input": [
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": user_instructions
}
]
},
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": environment_context
}
]
},
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": "hello world"
}
]
},
{
"type": "message",
"role": "assistant",
"content": [
{
"type": "output_text",
"text": "FIRST_REPLY"
}
]
},
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": SUMMARIZATION_PROMPT
}
]
}
],
"tools": [],
"tool_choice": "auto",
"parallel_tool_calls": false,
"reasoning": {
"summary": "auto"
},
"store": false,
"stream": true,
"include": [
"reasoning.encrypted_content"
],
"prompt_cache_key": prompt_cache_key
});
let user_turn_2_after_compact = json!(
{
"model": "gpt-5-codex",
"instructions": prompt,
"input": [
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": user_instructions
}
]
},
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": environment_context
}
]
},
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": "You were originally given instructions from a user over one or more turns. Here were the user messages:
hello world
Another language model started to solve this problem and produced a summary of its thinking process. You also have access to the state of the tools that were used by that language model. Use this to build on the work that has already been done and avoid duplicating work. Here is the summary produced by the other language model, use the information in this summary to assist with your own analysis:
SUMMARY_ONLY_CONTEXT"
}
]
},
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": "AFTER_COMPACT"
}
]
}
],
"tools": tool_calls,
"tool_choice": "auto",
"parallel_tool_calls": false,
"reasoning": {
"summary": "auto"
},
"store": false,
"stream": true,
"include": [
"reasoning.encrypted_content"
],
"prompt_cache_key": prompt_cache_key
});
let user_turn_3_after_resume = json!(
{
"model": "gpt-5-codex",
"instructions": prompt,
"input": [
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": user_instructions
}
]
},
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": environment_context
}
]
},
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": "You were originally given instructions from a user over one or more turns. Here were the user messages:
hello world
Another language model started to solve this problem and produced a summary of its thinking process. You also have access to the state of the tools that were used by that language model. Use this to build on the work that has already been done and avoid duplicating work. Here is the summary produced by the other language model, use the information in this summary to assist with your own analysis:
SUMMARY_ONLY_CONTEXT"
}
]
},
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": "AFTER_COMPACT"
}
]
},
{
"type": "message",
"role": "assistant",
"content": [
{
"type": "output_text",
"text": "AFTER_COMPACT_REPLY"
}
]
},
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": "AFTER_RESUME"
}
]
}
],
"tools": tool_calls,
"tool_choice": "auto",
"parallel_tool_calls": false,
"reasoning": {
"summary": "auto"
},
"store": false,
"stream": true,
"include": [
"reasoning.encrypted_content"
],
"prompt_cache_key": prompt_cache_key
});
let user_turn_3_after_fork = json!(
{
"model": "gpt-5-codex",
"instructions": prompt,
"input": [
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": user_instructions
}
]
},
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": environment_context
}
]
},
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": "You were originally given instructions from a user over one or more turns. Here were the user messages:
hello world
Another language model started to solve this problem and produced a summary of its thinking process. You also have access to the state of the tools that were used by that language model. Use this to build on the work that has already been done and avoid duplicating work. Here is the summary produced by the other language model, use the information in this summary to assist with your own analysis:
SUMMARY_ONLY_CONTEXT"
}
]
},
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": "AFTER_COMPACT"
}
]
},
{
"type": "message",
"role": "assistant",
"content": [
{
"type": "output_text",
"text": "AFTER_COMPACT_REPLY"
}
]
},
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": "AFTER_FORK"
}
]
}
],
"tools": tool_calls,
"tool_choice": "auto",
"parallel_tool_calls": false,
"reasoning": {
"summary": "auto"
},
"store": false,
"stream": true,
"include": [
"reasoning.encrypted_content"
],
"prompt_cache_key": fork_prompt_cache_key
});
let mut expected = json!([
user_turn_1,
compact_1,
user_turn_2_after_compact,
user_turn_3_after_resume,
user_turn_3_after_fork
]);
normalize_line_endings(&mut expected);
assert_eq!(requests.len(), 5);
assert_eq!(json!(requests), expected);
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
/// Scenario: after the forked branch is compacted, resuming again should reuse
/// the compacted history and only append the new user message.
async fn compact_resume_after_second_compaction_preserves_history() {
if network_disabled() {
println!("Skipping test because network is disabled in this sandbox");
return;
}
// 1. Arrange mocked SSE responses for the initial flow plus the second compact.
let server = MockServer::start().await;
mount_initial_flow(&server).await;
mount_second_compact_flow(&server).await;
// 2. Drive the conversation through compact -> resume -> fork -> compact -> resume.
let (_home, config, manager, base) = start_test_conversation(&server).await;
user_turn(&base, "hello world").await;
compact_conversation(&base).await;
user_turn(&base, "AFTER_COMPACT").await;
let base_path = fetch_conversation_path(&base, "base conversation").await;
assert!(
base_path.exists(),
"second compact test expects base path {base_path:?} to exist",
);
let resumed = resume_conversation(&manager, &config, base_path).await;
user_turn(&resumed, "AFTER_RESUME").await;
let resumed_path = fetch_conversation_path(&resumed, "resumed conversation").await;
assert!(
resumed_path.exists(),
"second compact test expects resumed path {resumed_path:?} to exist",
);
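// fork_conversation truncates the rollout strictly before the nth user message (0-based).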
let forked = fork_conversation(&manager, &config, resumed_path, 3).await;
user_turn(&forked, "AFTER_FORK").await;
compact_conversation(&forked).await;
user_turn(&forked, "AFTER_COMPACT_2").await;
let forked_path = fetch_conversation_path(&forked, "forked conversation").await;
assert!(
forked_path.exists(),
"second compact test expects forked path {forked_path:?} to exist",
);
let resumed_again = resume_conversation(&manager, &config, forked_path).await;
user_turn(&resumed_again, AFTER_SECOND_RESUME).await;
let requests = gather_request_bodies(&server).await;
let input_after_compact = json!(requests[requests.len() - 2]["input"]);
let input_after_resume = json!(requests[requests.len() - 1]["input"]);
// The input sent after compact (before resume) must be a prefix of the input sent after resume.
let compact_input_array = input_after_compact
.as_array()
.expect("input after compact should be an array");
let resume_input_array = input_after_resume
.as_array()
.expect("input after resume should be an array");
assert!(
compact_input_array.len() <= resume_input_array.len(),
"after-resume input should have at least as many items as after-compact"
);
assert_eq!(
compact_input_array.as_slice(),
&resume_input_array[..compact_input_array.len()]
);
// Hard-coded expectation for the final request after the second compaction.
let prompt = requests[0]["instructions"]
.as_str()
.unwrap_or_default()
.to_string();
let user_instructions = requests[0]["input"][0]["content"][0]["text"]
.as_str()
.unwrap_or_default()
.to_string();
let environment_instructions = requests[0]["input"][1]["content"][0]["text"]
.as_str()
.unwrap_or_default()
.to_string();
let mut expected = json!([
{
"instructions": prompt,
"input": [
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": user_instructions
}
]
},
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": environment_instructions
}
]
},
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": "You were originally given instructions from a user over one or more turns. Here were the user messages:\n\nAFTER_FORK\n\nAnother language model started to solve this problem and produced a summary of its thinking process. You also have access to the state of the tools that were used by that language model. Use this to build on the work that has already been done and avoid duplicating work. Here is the summary produced by the other language model, use the information in this summary to assist with your own analysis:\n\nSUMMARY_ONLY_CONTEXT"
}
]
},
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": "AFTER_COMPACT_2"
}
]
},
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": "AFTER_SECOND_RESUME"
}
]
}
],
}
]);
normalize_line_endings(&mut expected);
let last_request_after_2_compacts = json!([{
"instructions": requests[requests.len() - 1]["instructions"],
"input": requests[requests.len() - 1]["input"],
}]);
assert_eq!(expected, last_request_after_2_compacts);
}
fn normalize_line_endings(value: &mut Value) {
match value {
Value::String(text) => {
if text.contains('\r') {
*text = text.replace("\r\n", "\n").replace('\r', "\n");
}
}
Value::Array(items) => {
for item in items {
normalize_line_endings(item);
}
}
Value::Object(map) => {
for item in map.values_mut() {
normalize_line_endings(item);
}
}
_ => {}
}
}
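// Illustrative sketch (not part of the original change): demonstrates the
// intended effect of normalize_line_endings on a nested value, so bodies
// captured on Windows compare equal to the expected JSON.
#[test]
fn normalize_line_endings_flattens_cr_and_crlf() {
    let mut value = serde_json::json!({ "text": "a\r\nb\rc", "nested": ["d\r\n"] });
    normalize_line_endings(&mut value);
    assert_eq!(
        value,
        serde_json::json!({ "text": "a\nb\nc", "nested": ["d\n"] })
    );
}
/// Collects every request body received by the mock server, as normalized JSON.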
async fn gather_request_bodies(server: &MockServer) -> Vec<Value> {
server
.received_requests()
.await
.expect("mock server should not fail")
.into_iter()
.map(|req| {
let mut value = req.body_json::<Value>().expect("valid JSON body");
normalize_line_endings(&mut value);
value
})
.collect()
}
async fn mount_initial_flow(server: &MockServer) {
let sse1 = sse(vec![
ev_assistant_message("m1", FIRST_REPLY),
ev_completed("r1"),
]);
let sse2 = sse(vec![
ev_assistant_message("m2", SUMMARY_TEXT),
ev_completed("r2"),
]);
let sse3 = sse(vec![
ev_assistant_message("m3", "AFTER_COMPACT_REPLY"),
ev_completed("r3"),
]);
let sse4 = sse(vec![ev_completed("r4")]);
let sse5 = sse(vec![ev_completed("r5")]);
let match_first = |req: &wiremock::Request| {
let body = std::str::from_utf8(&req.body).unwrap_or("");
body.contains("\"text\":\"hello world\"")
&& !body.contains("You have exceeded the maximum number of tokens")
&& !body.contains(&format!("\"text\":\"{SUMMARY_TEXT}\""))
&& !body.contains("\"text\":\"AFTER_COMPACT\"")
&& !body.contains("\"text\":\"AFTER_RESUME\"")
&& !body.contains("\"text\":\"AFTER_FORK\"")
};
mount_sse_once(server, match_first, sse1).await;
let match_compact = |req: &wiremock::Request| {
let body = std::str::from_utf8(&req.body).unwrap_or("");
body.contains("You have exceeded the maximum number of tokens")
};
mount_sse_once(server, match_compact, sse2).await;
let match_after_compact = |req: &wiremock::Request| {
let body = std::str::from_utf8(&req.body).unwrap_or("");
body.contains("\"text\":\"AFTER_COMPACT\"")
&& !body.contains("\"text\":\"AFTER_RESUME\"")
&& !body.contains("\"text\":\"AFTER_FORK\"")
};
mount_sse_once(server, match_after_compact, sse3).await;
let match_after_resume = |req: &wiremock::Request| {
let body = std::str::from_utf8(&req.body).unwrap_or("");
body.contains("\"text\":\"AFTER_RESUME\"")
};
mount_sse_once(server, match_after_resume, sse4).await;
let match_after_fork = |req: &wiremock::Request| {
let body = std::str::from_utf8(&req.body).unwrap_or("");
body.contains("\"text\":\"AFTER_FORK\"")
};
mount_sse_once(server, match_after_fork, sse5).await;
}
async fn mount_second_compact_flow(server: &MockServer) {
let sse6 = sse(vec![
ev_assistant_message("m4", SUMMARY_TEXT),
ev_completed("r6"),
]);
let sse7 = sse(vec![ev_completed("r7")]);
let match_second_compact = |req: &wiremock::Request| {
let body = std::str::from_utf8(&req.body).unwrap_or("");
body.contains("You have exceeded the maximum number of tokens")
&& body.contains("AFTER_FORK")
};
mount_sse_once(server, match_second_compact, sse6).await;
let match_after_second_resume = |req: &wiremock::Request| {
let body = std::str::from_utf8(&req.body).unwrap_or("");
body.contains(&format!("\"text\":\"{AFTER_SECOND_RESUME}\""))
};
mount_sse_once(server, match_after_second_resume, sse7).await;
}
async fn start_test_conversation(
server: &MockServer,
) -> (TempDir, Config, ConversationManager, Arc<CodexConversation>) {
let model_provider = ModelProviderInfo {
base_url: Some(format!("{}/v1", server.uri())),
..built_in_model_providers()["openai"].clone()
};
let home = TempDir::new().expect("create temp dir");
let mut config = load_default_config_for_test(&home);
config.model_provider = model_provider;
let manager = ConversationManager::with_auth(CodexAuth::from_api_key("dummy"));
let NewConversation { conversation, .. } = manager
.new_conversation(config.clone())
.await
.expect("create conversation");
(home, config, manager, conversation)
}
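// Note: callers keep the returned TempDir alive (bound as _home above) so the temporary CODEX_HOME outlives the conversation.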
async fn user_turn(conversation: &Arc<CodexConversation>, text: &str) {
conversation
.submit(Op::UserInput {
items: vec![InputItem::Text { text: text.into() }],
})
.await
.expect("submit user turn");
wait_for_event(conversation, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
}
async fn compact_conversation(conversation: &Arc<CodexConversation>) {
conversation
.submit(Op::Compact)
.await
.expect("compact conversation");
wait_for_event(conversation, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
}
async fn fetch_conversation_path(
conversation: &Arc<CodexConversation>,
context: &str,
) -> std::path::PathBuf {
conversation
.submit(Op::GetPath)
.await
.expect("request conversation path");
match wait_for_event(conversation, |ev| {
matches!(ev, EventMsg::ConversationPath(_))
})
.await
{
EventMsg::ConversationPath(ConversationPathResponseEvent { path, .. }) => path,
_ => panic!("expected ConversationPath event for {context}"),
}
}
async fn resume_conversation(
manager: &ConversationManager,
config: &Config,
path: std::path::PathBuf,
) -> Arc<CodexConversation> {
let auth_manager =
codex_core::AuthManager::from_auth_for_testing(CodexAuth::from_api_key("dummy"));
let NewConversation { conversation, .. } = manager
.resume_conversation_from_rollout(config.clone(), path, auth_manager)
.await
.expect("resume conversation");
conversation
}
#[cfg(test)]
async fn fork_conversation(
manager: &ConversationManager,
config: &Config,
path: std::path::PathBuf,
nth_user_message: usize,
) -> Arc<CodexConversation> {
let NewConversation { conversation, .. } = manager
.fork_conversation(nth_user_message, config.clone(), path)
.await
.expect("fork conversation");
conversation
}

View File

@@ -1,6 +1,7 @@
#![cfg(target_os = "macos")]
use std::collections::HashMap;
use std::string::ToString;
use codex_core::exec::ExecParams;
use codex_core::exec::ExecToolCallOutput;
@@ -29,7 +30,7 @@ async fn run_test_cmd(tmp: TempDir, cmd: Vec<&str>) -> Result<ExecToolCallOutput
assert_eq!(sandbox_type, SandboxType::MacosSeatbelt);
let params = ExecParams {
- command: cmd.iter().map(|s| s.to_string()).collect(),
+ command: cmd.iter().map(ToString::to_string).collect(),
cwd: tmp.path().to_path_buf(),
timeout_ms: Some(1000),
env: HashMap::new(),
@@ -39,7 +40,7 @@ async fn run_test_cmd(tmp: TempDir, cmd: Vec<&str>) -> Result<ExecToolCallOutput
let policy = SandboxPolicy::new_read_only_policy();
process_exec_tool_call(params, sandbox_type, &policy, &None, None).await
process_exec_tool_call(params, sandbox_type, &policy, tmp.path(), &None, None).await
}
/// Command succeeds with exit code 0 normally

View File

@@ -2,8 +2,11 @@
use std::collections::HashMap;
use std::path::PathBuf;
use std::time::Duration;
use async_channel::Receiver;
use codex_core::error::CodexErr;
use codex_core::error::SandboxErr;
use codex_core::exec::ExecParams;
use codex_core::exec::SandboxType;
use codex_core::exec::StdoutStream;
@@ -46,9 +49,10 @@ async fn test_exec_stdout_stream_events_echo() {
"printf 'hello-world\n'".to_string(),
];
let cwd = std::env::current_dir().unwrap_or_else(|_| PathBuf::from("."));
let params = ExecParams {
command: cmd,
cwd: std::env::current_dir().unwrap_or_else(|_| PathBuf::from(".")),
cwd: cwd.clone(),
timeout_ms: Some(5_000),
env: HashMap::new(),
with_escalated_permissions: None,
@@ -61,6 +65,7 @@ async fn test_exec_stdout_stream_events_echo() {
params,
SandboxType::None,
&policy,
cwd.as_path(),
&None,
Some(stdout_stream),
)
@@ -96,9 +101,10 @@ async fn test_exec_stderr_stream_events_echo() {
"printf 'oops\n' 1>&2".to_string(),
];
let cwd = std::env::current_dir().unwrap_or_else(|_| PathBuf::from("."));
let params = ExecParams {
command: cmd,
cwd: std::env::current_dir().unwrap_or_else(|_| PathBuf::from(".")),
cwd: cwd.clone(),
timeout_ms: Some(5_000),
env: HashMap::new(),
with_escalated_permissions: None,
@@ -111,6 +117,7 @@ async fn test_exec_stderr_stream_events_echo() {
params,
SandboxType::None,
&policy,
cwd.as_path(),
&None,
Some(stdout_stream),
)
@@ -149,9 +156,10 @@ async fn test_aggregated_output_interleaves_in_order() {
"printf 'O1\\n'; sleep 0.01; printf 'E1\\n' 1>&2; sleep 0.01; printf 'O2\\n'; sleep 0.01; printf 'E2\\n' 1>&2".to_string(),
];
let cwd = std::env::current_dir().unwrap_or_else(|_| PathBuf::from("."));
let params = ExecParams {
command: cmd,
cwd: std::env::current_dir().unwrap_or_else(|_| PathBuf::from(".")),
cwd: cwd.clone(),
timeout_ms: Some(5_000),
env: HashMap::new(),
with_escalated_permissions: None,
@@ -160,9 +168,16 @@ async fn test_aggregated_output_interleaves_in_order() {
let policy = SandboxPolicy::new_read_only_policy();
let result = process_exec_tool_call(params, SandboxType::None, &policy, &None, None)
.await
.expect("process_exec_tool_call");
let result = process_exec_tool_call(
params,
SandboxType::None,
&policy,
cwd.as_path(),
&None,
None,
)
.await
.expect("process_exec_tool_call");
assert_eq!(result.exit_code, 0);
assert_eq!(result.stdout.text, "O1\nO2\n");
@@ -170,3 +185,45 @@ async fn test_aggregated_output_interleaves_in_order() {
assert_eq!(result.aggregated_output.text, "O1\nE1\nO2\nE2\n");
assert_eq!(result.aggregated_output.truncated_after_lines, None);
}
#[tokio::test]
async fn test_exec_timeout_returns_partial_output() {
let cmd = vec![
"/bin/sh".to_string(),
"-c".to_string(),
"printf 'before\\n'; sleep 2; printf 'after\\n'".to_string(),
];
let cwd = std::env::current_dir().unwrap_or_else(|_| PathBuf::from("."));
let params = ExecParams {
command: cmd,
cwd: cwd.clone(),
timeout_ms: Some(200),
env: HashMap::new(),
with_escalated_permissions: None,
justification: None,
};
let policy = SandboxPolicy::new_read_only_policy();
let result = process_exec_tool_call(
params,
SandboxType::None,
&policy,
cwd.as_path(),
&None,
None,
)
.await;
let Err(CodexErr::Sandbox(SandboxErr::Timeout { output })) = result else {
panic!("expected timeout error");
};
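// 124 is the conventional timeout exit status (as used by GNU timeout).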
assert_eq!(output.exit_code, 124);
assert_eq!(output.stdout.text, "before\n");
assert!(output.stderr.text.is_empty());
assert_eq!(output.aggregated_output.text, "before\n");
assert!(output.duration >= Duration::from_millis(200));
assert!(output.timed_out);
}

View File

@@ -5,6 +5,8 @@ use codex_core::ModelProviderInfo;
use codex_core::NewConversation;
use codex_core::ResponseItem;
use codex_core::built_in_model_providers;
use codex_core::content_items_to_text;
use codex_core::is_session_prefix_message;
use codex_core::protocol::ConversationPathResponseEvent;
use codex_core::protocol::EventMsg;
use codex_core::protocol::InputItem;
@@ -104,13 +106,16 @@ async fn fork_conversation_twice_drops_to_first_message() {
items
};
- // Compute expected prefixes after each fork by truncating base rollout at nth-from-last user input.
+ // Compute expected prefixes after each fork by truncating base rollout
+ // strictly before the nth user input (0-based).
let base_items = read_items(&base_path);
let find_user_input_positions = |items: &[RolloutItem]| -> Vec<usize> {
let mut pos = Vec::new();
for (i, it) in items.iter().enumerate() {
if let RolloutItem::ResponseItem(ResponseItem::Message { role, content, .. }) = it
&& role == "user"
&& content_items_to_text(content)
.is_some_and(|text| !is_session_prefix_message(&text))
{
// Consider any user message as an input boundary; recorder stores both EventMsg and ResponseItem.
// We specifically look for input items, which are represented as ContentItem::InputText.
@@ -126,11 +131,8 @@ async fn fork_conversation_twice_drops_to_first_message() {
};
let user_inputs = find_user_input_positions(&base_items);
- // After dropping last user input (n=1), cut strictly before that input if present, else empty.
- let cut1 = user_inputs
-     .get(user_inputs.len().saturating_sub(1))
-     .copied()
-     .unwrap_or(0);
+ // After cutting at nth user input (n=1 → second user message), cut strictly before that input.
+ let cut1 = user_inputs.get(1).copied().unwrap_or(0);
let expected_after_first: Vec<RolloutItem> = base_items[..cut1].to_vec();
// After dropping again (n=1 on fork1), compute expected relative to fork1's rollout.
@@ -161,12 +163,12 @@ async fn fork_conversation_twice_drops_to_first_message() {
serde_json::to_value(&expected_after_first).unwrap()
);
- // Fork again with n=1 → drops the (new) last user message, leaving only the first.
+ // Fork again with n=0 → drops the (new) last user message, leaving only the first.
let NewConversation {
conversation: codex_fork2,
..
} = conversation_manager
- .fork_conversation(1, config_for_fork.clone(), fork1_path.clone())
+ .fork_conversation(0, config_for_fork.clone(), fork1_path.clone())
.await
.expect("fork 2");

View File

@@ -0,0 +1,97 @@
#![cfg(not(target_os = "windows"))]
use codex_core::protocol::AskForApproval;
use codex_core::protocol::EventMsg;
use codex_core::protocol::InputItem;
use codex_core::protocol::Op;
use codex_core::protocol::SandboxPolicy;
use codex_protocol::config_types::ReasoningSummary;
use core_test_support::non_sandbox_test;
use core_test_support::responses;
use core_test_support::test_codex::TestCodex;
use core_test_support::test_codex::test_codex;
use core_test_support::wait_for_event;
use pretty_assertions::assert_eq;
use responses::ev_assistant_message;
use responses::ev_completed;
use responses::sse;
use responses::start_mock_server;
const SCHEMA: &str = r#"
{
"type": "object",
"properties": {
"explanation": { "type": "string" },
"final_answer": { "type": "string" }
},
"required": ["explanation", "final_answer"],
"additionalProperties": false
}
"#;
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn codex_returns_json_result() -> anyhow::Result<()> {
non_sandbox_test!(result);
let server = start_mock_server().await;
let sse1 = sse(vec![
ev_assistant_message(
"m2",
r#"{"explanation": "explanation", "final_answer": "final_answer"}"#,
),
ev_completed("r1"),
]);
let expected_schema: serde_json::Value = serde_json::from_str(SCHEMA)?;
let match_json_text_param = move |req: &wiremock::Request| {
let body: serde_json::Value = serde_json::from_slice(&req.body).unwrap_or_default();
let Some(text) = body.get("text") else {
return false;
};
let Some(format) = text.get("format") else {
return false;
};
format.get("name") == Some(&serde_json::Value::String("codex_output_schema".into()))
&& format.get("type") == Some(&serde_json::Value::String("json_schema".into()))
&& format.get("strict") == Some(&serde_json::Value::Bool(true))
&& format.get("schema") == Some(&expected_schema)
};
responses::mount_sse_once(&server, match_json_text_param, sse1).await;
let TestCodex { codex, cwd, .. } = test_codex().build(&server).await?;
// 1) Normal user input should hit server once.
codex
.submit(Op::UserTurn {
items: vec![InputItem::Text {
text: "hello world".into(),
}],
final_output_json_schema: Some(serde_json::from_str(SCHEMA)?),
cwd: cwd.path().to_path_buf(),
approval_policy: AskForApproval::Never,
sandbox_policy: SandboxPolicy::DangerFullAccess,
model: "gpt-5".to_string(),
effort: None,
summary: ReasoningSummary::Auto,
})
.await?;
let message = wait_for_event(&codex, |ev| matches!(ev, EventMsg::AgentMessage(_))).await;
if let EventMsg::AgentMessage(message) = message {
let json: serde_json::Value = serde_json::from_str(&message.message)?;
assert_eq!(
json.get("explanation"),
Some(&serde_json::Value::String("explanation".into()))
);
assert_eq!(
json.get("final_answer"),
Some(&serde_json::Value::String("final_answer".into()))
);
} else {
anyhow::bail!("expected agent message event");
}
Ok(())
}

View File

@@ -3,12 +3,17 @@
mod cli_stream;
mod client;
mod compact;
mod compact_resume_fork;
mod exec;
mod exec_stream_events;
mod fork_conversation;
mod json_result;
mod live_cli;
mod model_overrides;
mod prompt_caching;
mod review;
mod rollout_list_find;
mod seatbelt;
mod stream_error_allows_next_turn;
mod stream_no_completed;
mod user_notification;

View File

@@ -36,7 +36,7 @@ async fn override_turn_context_does_not_persist_when_config_exists() {
approval_policy: None,
sandbox_policy: None,
model: Some("o3".to_string()),
- effort: Some(ReasoningEffort::High),
+ effort: Some(Some(ReasoningEffort::High)),
summary: None,
})
.await
@@ -76,7 +76,7 @@ async fn override_turn_context_does_not_create_config_file() {
approval_policy: None,
sandbox_policy: None,
model: Some("o3".to_string()),
- effort: Some(ReasoningEffort::Medium),
+ effort: Some(Some(ReasoningEffort::Medium)),
summary: None,
})
.await

View File

@@ -12,6 +12,7 @@ use codex_core::protocol::Op;
use codex_core::protocol::SandboxPolicy;
use codex_core::protocol_config_types::ReasoningEffort;
use codex_core::protocol_config_types::ReasoningSummary;
use codex_core::shell::Shell;
use codex_core::shell::default_user_shell;
use core_test_support::load_default_config_for_test;
use core_test_support::load_sse_fixture_with_id;
@@ -23,6 +24,30 @@ use wiremock::ResponseTemplate;
use wiremock::matchers::method;
use wiremock::matchers::path;
fn text_user_input(text: String) -> serde_json::Value {
serde_json::json!({
"type": "message",
"role": "user",
"content": [ { "type": "input_text", "text": text } ]
})
}
fn default_env_context_str(cwd: &str, shell: &Shell) -> String {
format!(
r#"<environment_context>
<cwd>{}</cwd>
<approval_policy>on-request</approval_policy>
<sandbox_mode>read-only</sandbox_mode>
<network_access>restricted</network_access>
{}</environment_context>"#,
cwd,
match shell.name() {
Some(name) => format!(" <shell>{name}</shell>\n"),
None => String::new(),
}
)
}
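// Example rendering (hypothetical cwd and a bash shell):
// <environment_context>
//   <cwd>/tmp/demo</cwd>
//   <approval_policy>on-request</approval_policy>
//   <sandbox_mode>read-only</sandbox_mode>
//   <network_access>restricted</network_access>
//   <shell>bash</shell>
// </environment_context>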
/// Build minimal SSE stream with completed marker using the JSON fixture.
fn sse_completed(id: &str) -> String {
load_sse_fixture_with_id("tests/fixtures/completed_template.json", id)
@@ -159,6 +184,7 @@ async fn prompt_tools_are_consistent_across_requests() {
let conversation_manager =
ConversationManager::with_auth(CodexAuth::from_api_key("Test API Key"));
+ let expected_instructions = config.model_family.base_instructions.clone();
let codex = conversation_manager
.new_conversation(config)
.await
@@ -188,7 +214,6 @@ async fn prompt_tools_are_consistent_across_requests() {
let requests = server.received_requests().await.unwrap();
assert_eq!(requests.len(), 2, "expected two POST requests");
let expected_instructions: &str = include_str!("../../prompt.md");
// our internal implementation is responsible for keeping tools in sync
// with the OpenAI schema, so we just verify the tool presence here
let expected_tools_names: &[&str] = &["shell", "update_plan", "apply_patch", "view_image"];
@@ -387,7 +412,7 @@ async fn overrides_turn_context_but_keeps_cached_prefix_and_key_constant() {
exclude_slash_tmp: true,
}),
model: Some("o3".to_string()),
- effort: Some(ReasoningEffort::High),
+ effort: Some(Some(ReasoningEffort::High)),
summary: Some(ReasoningSummary::Detailed),
})
.await
@@ -519,8 +544,9 @@ async fn per_turn_overrides_keep_cached_prefix_and_key_constant() {
exclude_slash_tmp: true,
},
model: "o3".to_string(),
- effort: ReasoningEffort::High,
+ effort: Some(ReasoningEffort::High),
summary: ReasoningSummary::Detailed,
final_output_json_schema: None,
})
.await
.unwrap();
@@ -546,12 +572,266 @@ async fn per_turn_overrides_keep_cached_prefix_and_key_constant() {
"role": "user",
"content": [ { "type": "input_text", "text": "hello 2" } ]
});
let expected_env_text_2 = format!(
r#"<environment_context>
<cwd>{}</cwd>
<approval_policy>never</approval_policy>
<sandbox_mode>workspace-write</sandbox_mode>
<network_access>enabled</network_access>
<writable_roots>
<root>{}</root>
</writable_roots>
</environment_context>"#,
new_cwd.path().to_string_lossy(),
writable.path().to_string_lossy(),
);
let expected_env_msg_2 = serde_json::json!({
"type": "message",
"role": "user",
"content": [ { "type": "input_text", "text": expected_env_text_2 } ]
});
let expected_body2 = serde_json::json!(
[
body1["input"].as_array().unwrap().as_slice(),
- [expected_user_message_2].as_slice(),
+ [expected_env_msg_2, expected_user_message_2].as_slice(),
]
.concat()
);
assert_eq!(body2["input"], expected_body2);
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn send_user_turn_with_no_changes_does_not_send_environment_context() {
use pretty_assertions::assert_eq;
let server = MockServer::start().await;
let sse = sse_completed("resp");
let template = ResponseTemplate::new(200)
.insert_header("content-type", "text/event-stream")
.set_body_raw(sse, "text/event-stream");
Mock::given(method("POST"))
.and(path("/v1/responses"))
.respond_with(template)
.expect(2)
.mount(&server)
.await;
let model_provider = ModelProviderInfo {
base_url: Some(format!("{}/v1", server.uri())),
..built_in_model_providers()["openai"].clone()
};
let cwd = TempDir::new().unwrap();
let codex_home = TempDir::new().unwrap();
let mut config = load_default_config_for_test(&codex_home);
config.cwd = cwd.path().to_path_buf();
config.model_provider = model_provider;
config.user_instructions = Some("be consistent and helpful".to_string());
let default_cwd = config.cwd.clone();
let default_approval_policy = config.approval_policy;
let default_sandbox_policy = config.sandbox_policy.clone();
let default_model = config.model.clone();
let default_effort = config.model_reasoning_effort;
let default_summary = config.model_reasoning_summary;
let conversation_manager =
ConversationManager::with_auth(CodexAuth::from_api_key("Test API Key"));
let codex = conversation_manager
.new_conversation(config)
.await
.expect("create new conversation")
.conversation;
codex
.submit(Op::UserTurn {
items: vec![InputItem::Text {
text: "hello 1".into(),
}],
cwd: default_cwd.clone(),
approval_policy: default_approval_policy,
sandbox_policy: default_sandbox_policy.clone(),
model: default_model.clone(),
effort: default_effort,
summary: default_summary,
final_output_json_schema: None,
})
.await
.unwrap();
wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
codex
.submit(Op::UserTurn {
items: vec![InputItem::Text {
text: "hello 2".into(),
}],
cwd: default_cwd.clone(),
approval_policy: default_approval_policy,
sandbox_policy: default_sandbox_policy.clone(),
model: default_model.clone(),
effort: default_effort,
summary: default_summary,
final_output_json_schema: None,
})
.await
.unwrap();
wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
let requests = server.received_requests().await.unwrap();
assert_eq!(requests.len(), 2, "expected two POST requests");
let body1 = requests[0].body_json::<serde_json::Value>().unwrap();
let body2 = requests[1].body_json::<serde_json::Value>().unwrap();
let shell = default_user_shell().await;
let expected_ui_text =
"<user_instructions>\n\nbe consistent and helpful\n\n</user_instructions>";
let expected_ui_msg = text_user_input(expected_ui_text.to_string());
let expected_env_msg_1 = text_user_input(default_env_context_str(
&cwd.path().to_string_lossy(),
&shell,
));
let expected_user_message_1 = text_user_input("hello 1".to_string());
let expected_input_1 = serde_json::Value::Array(vec![
expected_ui_msg.clone(),
expected_env_msg_1.clone(),
expected_user_message_1.clone(),
]);
assert_eq!(body1["input"], expected_input_1);
let expected_user_message_2 = text_user_input("hello 2".to_string());
let expected_input_2 = serde_json::Value::Array(vec![
expected_ui_msg,
expected_env_msg_1,
expected_user_message_1,
expected_user_message_2,
]);
assert_eq!(body2["input"], expected_input_2);
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn send_user_turn_with_changes_sends_environment_context() {
use pretty_assertions::assert_eq;
let server = MockServer::start().await;
let sse = sse_completed("resp");
let template = ResponseTemplate::new(200)
.insert_header("content-type", "text/event-stream")
.set_body_raw(sse, "text/event-stream");
Mock::given(method("POST"))
.and(path("/v1/responses"))
.respond_with(template)
.expect(2)
.mount(&server)
.await;
let model_provider = ModelProviderInfo {
base_url: Some(format!("{}/v1", server.uri())),
..built_in_model_providers()["openai"].clone()
};
let cwd = TempDir::new().unwrap();
let codex_home = TempDir::new().unwrap();
let mut config = load_default_config_for_test(&codex_home);
config.cwd = cwd.path().to_path_buf();
config.model_provider = model_provider;
config.user_instructions = Some("be consistent and helpful".to_string());
let default_cwd = config.cwd.clone();
let default_approval_policy = config.approval_policy;
let default_sandbox_policy = config.sandbox_policy.clone();
let default_model = config.model.clone();
let default_effort = config.model_reasoning_effort;
let default_summary = config.model_reasoning_summary;
let conversation_manager =
ConversationManager::with_auth(CodexAuth::from_api_key("Test API Key"));
let codex = conversation_manager
.new_conversation(config.clone())
.await
.expect("create new conversation")
.conversation;
codex
.submit(Op::UserTurn {
items: vec![InputItem::Text {
text: "hello 1".into(),
}],
cwd: default_cwd.clone(),
approval_policy: default_approval_policy,
sandbox_policy: default_sandbox_policy.clone(),
model: default_model,
effort: default_effort,
summary: default_summary,
final_output_json_schema: None,
})
.await
.unwrap();
wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
codex
.submit(Op::UserTurn {
items: vec![InputItem::Text {
text: "hello 2".into(),
}],
cwd: default_cwd.clone(),
approval_policy: AskForApproval::Never,
sandbox_policy: SandboxPolicy::DangerFullAccess,
model: "o3".to_string(),
effort: Some(ReasoningEffort::High),
summary: ReasoningSummary::Detailed,
final_output_json_schema: None,
})
.await
.unwrap();
wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
let requests = server.received_requests().await.unwrap();
assert_eq!(requests.len(), 2, "expected two POST requests");
let body1 = requests[0].body_json::<serde_json::Value>().unwrap();
let body2 = requests[1].body_json::<serde_json::Value>().unwrap();
let shell = default_user_shell().await;
let expected_ui_text =
"<user_instructions>\n\nbe consistent and helpful\n\n</user_instructions>";
let expected_ui_msg = serde_json::json!({
"type": "message",
"role": "user",
"content": [ { "type": "input_text", "text": expected_ui_text } ]
});
let expected_env_text_1 = default_env_context_str(&default_cwd.to_string_lossy(), &shell);
let expected_env_msg_1 = text_user_input(expected_env_text_1);
let expected_user_message_1 = text_user_input("hello 1".to_string());
let expected_input_1 = serde_json::Value::Array(vec![
expected_ui_msg.clone(),
expected_env_msg_1.clone(),
expected_user_message_1.clone(),
]);
assert_eq!(body1["input"], expected_input_1);
let expected_env_msg_2 = text_user_input(format!(
r#"<environment_context>
<cwd>{}</cwd>
<approval_policy>never</approval_policy>
<sandbox_mode>danger-full-access</sandbox_mode>
<network_access>enabled</network_access>
</environment_context>"#,
default_cwd.to_string_lossy()
));
let expected_user_message_2 = text_user_input("hello 2".to_string());
let expected_input_2 = serde_json::Value::Array(vec![
expected_ui_msg,
expected_env_msg_1,
expected_user_message_1,
expected_env_msg_2,
expected_user_message_2,
]);
assert_eq!(body2["input"], expected_input_2);
}

View File

@@ -0,0 +1,671 @@
use codex_core::CodexAuth;
use codex_core::CodexConversation;
use codex_core::ContentItem;
use codex_core::ConversationManager;
use codex_core::ModelProviderInfo;
use codex_core::REVIEW_PROMPT;
use codex_core::ResponseItem;
use codex_core::built_in_model_providers;
use codex_core::config::Config;
use codex_core::protocol::ConversationPathResponseEvent;
use codex_core::protocol::ENVIRONMENT_CONTEXT_OPEN_TAG;
use codex_core::protocol::EventMsg;
use codex_core::protocol::ExitedReviewModeEvent;
use codex_core::protocol::InputItem;
use codex_core::protocol::Op;
use codex_core::protocol::ReviewCodeLocation;
use codex_core::protocol::ReviewFinding;
use codex_core::protocol::ReviewLineRange;
use codex_core::protocol::ReviewOutputEvent;
use codex_core::protocol::ReviewRequest;
use codex_core::protocol::RolloutItem;
use codex_core::protocol::RolloutLine;
use core_test_support::load_default_config_for_test;
use core_test_support::load_sse_fixture_with_id_from_str;
use core_test_support::non_sandbox_test;
use core_test_support::wait_for_event;
use pretty_assertions::assert_eq;
use std::path::PathBuf;
use std::sync::Arc;
use tempfile::TempDir;
use tokio::io::AsyncWriteExt as _;
use uuid::Uuid;
use wiremock::Mock;
use wiremock::MockServer;
use wiremock::ResponseTemplate;
use wiremock::matchers::method;
use wiremock::matchers::path;
/// Verify that submitting `Op::Review` spawns a child task and emits
/// EnteredReviewMode -> ExitedReviewMode(None) -> TaskComplete
/// in that order when the model returns a structured review JSON payload.
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn review_op_emits_lifecycle_and_review_output() {
// Skip under Codex sandbox network restrictions.
non_sandbox_test!();
// Start mock Responses API server. Return a single assistant message whose
// text is a JSON-encoded ReviewOutputEvent.
let review_json = serde_json::json!({
"findings": [
{
"title": "Prefer Stylize helpers",
"body": "Use .dim()/.bold() chaining instead of manual Style where possible.",
"confidence_score": 0.9,
"priority": 1,
"code_location": {
"absolute_file_path": "/tmp/file.rs",
"line_range": {"start": 10, "end": 20}
}
}
],
"overall_correctness": "good",
"overall_explanation": "All good with some improvements suggested.",
"overall_confidence_score": 0.8
})
.to_string();
let sse_template = r#"[
{"type":"response.output_item.done", "item":{
"type":"message", "role":"assistant",
"content":[{"type":"output_text","text":__REVIEW__}]
}},
{"type":"response.completed", "response": {"id": "__ID__"}}
]"#;
let review_json_escaped = serde_json::to_string(&review_json).unwrap();
let sse_raw = sse_template.replace("__REVIEW__", &review_json_escaped);
let server = start_responses_server_with_sse(&sse_raw, 1).await;
let codex_home = TempDir::new().unwrap();
let codex = new_conversation_for_server(&server, &codex_home, |_| {}).await;
// Submit review request.
codex
.submit(Op::Review {
review_request: ReviewRequest {
prompt: "Please review my changes".to_string(),
user_facing_hint: "my changes".to_string(),
},
})
.await
.unwrap();
// Verify lifecycle: Entered -> Exited(Some(review)) -> TaskComplete.
let _entered = wait_for_event(&codex, |ev| matches!(ev, EventMsg::EnteredReviewMode(_))).await;
let closed = wait_for_event(&codex, |ev| matches!(ev, EventMsg::ExitedReviewMode(_))).await;
let review = match closed {
EventMsg::ExitedReviewMode(ev) => ev
.review_output
.expect("expected ExitedReviewMode with Some(review_output)"),
other => panic!("expected ExitedReviewMode(..), got {other:?}"),
};
// Deep compare full structure using PartialEq (floats are f32 on both sides).
let expected = ReviewOutputEvent {
findings: vec![ReviewFinding {
title: "Prefer Stylize helpers".to_string(),
body: "Use .dim()/.bold() chaining instead of manual Style where possible.".to_string(),
confidence_score: 0.9,
priority: 1,
code_location: ReviewCodeLocation {
absolute_file_path: PathBuf::from("/tmp/file.rs"),
line_range: ReviewLineRange { start: 10, end: 20 },
},
}],
overall_correctness: "good".to_string(),
overall_explanation: "All good with some improvements suggested.".to_string(),
overall_confidence_score: 0.8,
};
assert_eq!(expected, review);
let _complete = wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
// Also verify that a user message with the header and a formatted finding
// was recorded back in the parent session's rollout.
codex.submit(Op::GetPath).await.unwrap();
let history_event =
wait_for_event(&codex, |ev| matches!(ev, EventMsg::ConversationPath(_))).await;
let path = match history_event {
EventMsg::ConversationPath(ConversationPathResponseEvent { path, .. }) => path,
other => panic!("expected ConversationPath event, got {other:?}"),
};
let text = std::fs::read_to_string(&path).expect("read rollout file");
let mut saw_header = false;
let mut saw_finding_line = false;
for line in text.lines() {
if line.trim().is_empty() {
continue;
}
let v: serde_json::Value = serde_json::from_str(line).expect("jsonl line");
let rl: RolloutLine = serde_json::from_value(v).expect("rollout line");
if let RolloutItem::ResponseItem(ResponseItem::Message { role, content, .. }) = rl.item
&& role == "user"
{
for c in content {
if let ContentItem::InputText { text } = c {
if text.contains("full review output from reviewer model") {
saw_header = true;
}
if text.contains("- Prefer Stylize helpers — /tmp/file.rs:10-20") {
saw_finding_line = true;
}
}
}
}
}
assert!(saw_header, "user header missing from rollout");
assert!(
saw_finding_line,
"formatted finding line missing from rollout"
);
server.verify().await;
}
/// When the model returns plain text that is not JSON, ensure the child
/// lifecycle still occurs and the plain text is surfaced via
/// ExitedReviewMode(Some(..)) as the overall_explanation.
// Windows CI only: bump to 4 workers to prevent SSE/event starvation and test timeouts.
#[cfg_attr(windows, tokio::test(flavor = "multi_thread", worker_threads = 4))]
#[cfg_attr(not(windows), tokio::test(flavor = "multi_thread", worker_threads = 2))]
async fn review_op_with_plain_text_emits_review_fallback() {
non_sandbox_test!();
let sse_raw = r#"[
{"type":"response.output_item.done", "item":{
"type":"message", "role":"assistant",
"content":[{"type":"output_text","text":"just plain text"}]
}},
{"type":"response.completed", "response": {"id": "__ID__"}}
]"#;
let server = start_responses_server_with_sse(sse_raw, 1).await;
let codex_home = TempDir::new().unwrap();
let codex = new_conversation_for_server(&server, &codex_home, |_| {}).await;
codex
.submit(Op::Review {
review_request: ReviewRequest {
prompt: "Plain text review".to_string(),
user_facing_hint: "plain text review".to_string(),
},
})
.await
.unwrap();
let _entered = wait_for_event(&codex, |ev| matches!(ev, EventMsg::EnteredReviewMode(_))).await;
let closed = wait_for_event(&codex, |ev| matches!(ev, EventMsg::ExitedReviewMode(_))).await;
let review = match closed {
EventMsg::ExitedReviewMode(ev) => ev
.review_output
.expect("expected ExitedReviewMode with Some(review_output)"),
other => panic!("expected ExitedReviewMode(..), got {other:?}"),
};
// Expect a structured fallback carrying the plain text.
let expected = ReviewOutputEvent {
overall_explanation: "just plain text".to_string(),
..Default::default()
};
assert_eq!(expected, review);
let _complete = wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
server.verify().await;
}
/// When the model returns structured JSON in a review, ensure no AgentMessage
/// is emitted; the UI consumes the structured result via ExitedReviewMode.
// Windows CI only: bump to 4 workers to prevent SSE/event starvation and test timeouts.
#[cfg_attr(windows, tokio::test(flavor = "multi_thread", worker_threads = 4))]
#[cfg_attr(not(windows), tokio::test(flavor = "multi_thread", worker_threads = 2))]
async fn review_does_not_emit_agent_message_on_structured_output() {
non_sandbox_test!();
let review_json = serde_json::json!({
"findings": [
{
"title": "Example",
"body": "Structured review output.",
"confidence_score": 0.5,
"priority": 1,
"code_location": {
"absolute_file_path": "/tmp/file.rs",
"line_range": {"start": 1, "end": 2}
}
}
],
"overall_correctness": "ok",
"overall_explanation": "ok",
"overall_confidence_score": 0.5
})
.to_string();
let sse_template = r#"[
{"type":"response.output_item.done", "item":{
"type":"message", "role":"assistant",
"content":[{"type":"output_text","text":__REVIEW__}]
}},
{"type":"response.completed", "response": {"id": "__ID__"}}
]"#;
let review_json_escaped = serde_json::to_string(&review_json).unwrap();
let sse_raw = sse_template.replace("__REVIEW__", &review_json_escaped);
let server = start_responses_server_with_sse(&sse_raw, 1).await;
let codex_home = TempDir::new().unwrap();
let codex = new_conversation_for_server(&server, &codex_home, |_| {}).await;
codex
.submit(Op::Review {
review_request: ReviewRequest {
prompt: "check structured".to_string(),
user_facing_hint: "check structured".to_string(),
},
})
.await
.unwrap();
// Drain events until TaskComplete; ensure none are AgentMessage.
use tokio::time::Duration;
use tokio::time::timeout;
let mut saw_entered = false;
let mut saw_exited = false;
loop {
let ev = timeout(Duration::from_secs(5), codex.next_event())
.await
.expect("timeout waiting for event")
.expect("stream ended unexpectedly");
match ev.msg {
EventMsg::TaskComplete(_) => break,
EventMsg::AgentMessage(_) => {
panic!("unexpected AgentMessage during review with structured output")
}
EventMsg::EnteredReviewMode(_) => saw_entered = true,
EventMsg::ExitedReviewMode(_) => saw_exited = true,
_ => {}
}
}
assert!(saw_entered && saw_exited, "missing review lifecycle events");
server.verify().await;
}
/// Ensure that when a custom `review_model` is set in the config, the review
/// request uses that model (and not the main chat model).
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn review_uses_custom_review_model_from_config() {
non_sandbox_test!();
// Minimal stream: just a completed event
let sse_raw = r#"[
{"type":"response.completed", "response": {"id": "__ID__"}}
]"#;
let server = start_responses_server_with_sse(sse_raw, 1).await;
let codex_home = TempDir::new().unwrap();
// Choose a review model different from the main model; ensure it is used.
let codex = new_conversation_for_server(&server, &codex_home, |cfg| {
cfg.model = "gpt-4.1".to_string();
cfg.review_model = "gpt-5".to_string();
})
.await;
codex
.submit(Op::Review {
review_request: ReviewRequest {
prompt: "use custom model".to_string(),
user_facing_hint: "use custom model".to_string(),
},
})
.await
.unwrap();
// Wait for completion
let _entered = wait_for_event(&codex, |ev| matches!(ev, EventMsg::EnteredReviewMode(_))).await;
let _closed = wait_for_event(&codex, |ev| {
matches!(
ev,
EventMsg::ExitedReviewMode(ExitedReviewModeEvent {
review_output: None
})
)
})
.await;
let _complete = wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
// Assert the request body model equals the configured review model
let request = &server.received_requests().await.unwrap()[0];
let body = request.body_json::<serde_json::Value>().unwrap();
assert_eq!(body["model"].as_str().unwrap(), "gpt-5");
server.verify().await;
}
/// When a review session begins, it must not prepend prior chat history from
/// the parent session. The request `input` should contain only the review
/// prompt from the user.
// Windows CI only: bump to 4 workers to prevent SSE/event starvation and test timeouts.
#[cfg_attr(windows, tokio::test(flavor = "multi_thread", worker_threads = 4))]
#[cfg_attr(not(windows), tokio::test(flavor = "multi_thread", worker_threads = 2))]
async fn review_input_isolated_from_parent_history() {
non_sandbox_test!();
// Mock server for the single review request
let sse_raw = r#"[
{"type":"response.completed", "response": {"id": "__ID__"}}
]"#;
let server = start_responses_server_with_sse(sse_raw, 1).await;
// Seed a parent session history via resume file with both user + assistant items.
let codex_home = TempDir::new().unwrap();
let mut config = load_default_config_for_test(&codex_home);
config.model_provider = ModelProviderInfo {
base_url: Some(format!("{}/v1", server.uri())),
..built_in_model_providers()["openai"].clone()
};
let session_file = codex_home.path().join("resume.jsonl");
{
let mut f = tokio::fs::File::create(&session_file).await.unwrap();
let convo_id = Uuid::new_v4();
// Proper session_meta line (enveloped) with a conversation id
let meta_line = serde_json::json!({
"timestamp": "2024-01-01T00:00:00.000Z",
"type": "session_meta",
"payload": {
"id": convo_id,
"timestamp": "2024-01-01T00:00:00Z",
"instructions": null,
"cwd": ".",
"originator": "test_originator",
"cli_version": "test_version"
}
});
f.write_all(format!("{meta_line}\n").as_bytes())
.await
.unwrap();
// Prior user message (enveloped response_item)
let user = codex_protocol::models::ResponseItem::Message {
id: None,
role: "user".to_string(),
content: vec![codex_protocol::models::ContentItem::InputText {
text: "parent: earlier user message".to_string(),
}],
};
let user_json = serde_json::to_value(&user).unwrap();
let user_line = serde_json::json!({
"timestamp": "2024-01-01T00:00:01.000Z",
"type": "response_item",
"payload": user_json
});
f.write_all(format!("{user_line}\n").as_bytes())
.await
.unwrap();
// Prior assistant message (enveloped response_item)
let assistant = codex_protocol::models::ResponseItem::Message {
id: None,
role: "assistant".to_string(),
content: vec![codex_protocol::models::ContentItem::OutputText {
text: "parent: assistant reply".to_string(),
}],
};
let assistant_json = serde_json::to_value(&assistant).unwrap();
let assistant_line = serde_json::json!({
"timestamp": "2024-01-01T00:00:02.000Z",
"type": "response_item",
"payload": assistant_json
});
f.write_all(format!("{assistant_line}\n").as_bytes())
.await
.unwrap();
}
let codex =
resume_conversation_for_server(&server, &codex_home, session_file.clone(), |_| {}).await;
// Submit review request; it must start fresh (no parent history in `input`).
let review_prompt = "Please review only this".to_string();
codex
.submit(Op::Review {
review_request: ReviewRequest {
prompt: review_prompt.clone(),
user_facing_hint: review_prompt.clone(),
},
})
.await
.unwrap();
let _entered = wait_for_event(&codex, |ev| matches!(ev, EventMsg::EnteredReviewMode(_))).await;
let _closed = wait_for_event(&codex, |ev| {
matches!(
ev,
EventMsg::ExitedReviewMode(ExitedReviewModeEvent {
review_output: None
})
)
})
.await;
let _complete = wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
// Assert the request `input` contains the environment context followed by the review prompt.
let request = &server.received_requests().await.unwrap()[0];
let body = request.body_json::<serde_json::Value>().unwrap();
let input = body["input"].as_array().expect("input array");
assert_eq!(
input.len(),
2,
"expected environment context and review prompt"
);
let env_msg = &input[0];
assert_eq!(env_msg["type"].as_str().unwrap(), "message");
assert_eq!(env_msg["role"].as_str().unwrap(), "user");
let env_text = env_msg["content"][0]["text"].as_str().expect("env text");
assert!(
env_text.starts_with(ENVIRONMENT_CONTEXT_OPEN_TAG),
"environment context must be the first item"
);
assert!(
env_text.contains("<cwd>"),
"environment context should include cwd"
);
let review_msg = &input[1];
assert_eq!(review_msg["type"].as_str().unwrap(), "message");
assert_eq!(review_msg["role"].as_str().unwrap(), "user");
assert_eq!(
review_msg["content"][0]["text"].as_str().unwrap(),
format!("{REVIEW_PROMPT}\n\n---\n\nNow, here's your task: Please review only this",)
);
// Also verify that a user interruption note was recorded in the rollout.
codex.submit(Op::GetPath).await.unwrap();
let history_event =
wait_for_event(&codex, |ev| matches!(ev, EventMsg::ConversationPath(_))).await;
let path = match history_event {
EventMsg::ConversationPath(ConversationPathResponseEvent { path, .. }) => path,
other => panic!("expected ConversationPath event, got {other:?}"),
};
let text = std::fs::read_to_string(&path).expect("read rollout file");
let mut saw_interruption_message = false;
for line in text.lines() {
if line.trim().is_empty() {
continue;
}
let v: serde_json::Value = serde_json::from_str(line).expect("jsonl line");
let rl: RolloutLine = serde_json::from_value(v).expect("rollout line");
if let RolloutItem::ResponseItem(ResponseItem::Message { role, content, .. }) = rl.item
&& role == "user"
{
for c in content {
if let ContentItem::InputText { text } = c
&& text.contains("User initiated a review task, but was interrupted.")
{
saw_interruption_message = true;
break;
}
}
}
if saw_interruption_message {
break;
}
}
assert!(
saw_interruption_message,
"expected user interruption message in rollout"
);
server.verify().await;
}
/// After a review thread finishes, its conversation should not leak into the
/// parent session. A subsequent parent turn must not include any review
/// messages in its request `input`.
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn review_history_does_not_leak_into_parent_session() {
non_sandbox_test!();
// Respond to both the review request and the subsequent parent request.
let sse_raw = r#"[
{"type":"response.output_item.done", "item":{
"type":"message", "role":"assistant",
"content":[{"type":"output_text","text":"review assistant output"}]
}},
{"type":"response.completed", "response": {"id": "__ID__"}}
]"#;
let server = start_responses_server_with_sse(sse_raw, 2).await;
let codex_home = TempDir::new().unwrap();
let codex = new_conversation_for_server(&server, &codex_home, |_| {}).await;
// 1) Run a review turn that produces an assistant message (isolated in child).
codex
.submit(Op::Review {
review_request: ReviewRequest {
prompt: "Start a review".to_string(),
user_facing_hint: "Start a review".to_string(),
},
})
.await
.unwrap();
let _entered = wait_for_event(&codex, |ev| matches!(ev, EventMsg::EnteredReviewMode(_))).await;
let _closed = wait_for_event(&codex, |ev| {
matches!(
ev,
EventMsg::ExitedReviewMode(ExitedReviewModeEvent {
review_output: Some(_)
})
)
})
.await;
let _complete = wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
// 2) Continue in the parent session; request input must not include any review items.
let followup = "back to parent".to_string();
codex
.submit(Op::UserInput {
items: vec![InputItem::Text {
text: followup.clone(),
}],
})
.await
.unwrap();
let _complete = wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
// Inspect the second request (parent turn) input contents.
// Parent turns include session initial messages (user_instructions, environment_context).
// Critically, no messages from the review thread should appear.
let requests = server.received_requests().await.unwrap();
assert_eq!(requests.len(), 2);
let body = requests[1].body_json::<serde_json::Value>().unwrap();
let input = body["input"].as_array().expect("input array");
// Must include the followup as the last item for this turn
let last = input.last().expect("at least one item in input");
assert_eq!(last["role"].as_str().unwrap(), "user");
let last_text = last["content"][0]["text"].as_str().unwrap();
assert_eq!(last_text, followup);
// Ensure no review-thread content leaked into the parent request
let contains_review_prompt = input
.iter()
.any(|msg| msg["content"][0]["text"].as_str().unwrap_or_default() == "Start a review");
let contains_review_assistant = input.iter().any(|msg| {
msg["content"][0]["text"].as_str().unwrap_or_default() == "review assistant output"
});
assert!(
!contains_review_prompt,
"review prompt leaked into parent turn input"
);
assert!(
!contains_review_assistant,
"review assistant output leaked into parent turn input"
);
server.verify().await;
}
/// Start a mock Responses API server and mount the given SSE stream body.
async fn start_responses_server_with_sse(sse_raw: &str, expected_requests: usize) -> MockServer {
let server = MockServer::start().await;
let sse = load_sse_fixture_with_id_from_str(sse_raw, &Uuid::new_v4().to_string());
Mock::given(method("POST"))
.and(path("/v1/responses"))
.respond_with(
ResponseTemplate::new(200)
.insert_header("content-type", "text/event-stream")
.set_body_raw(sse.clone(), "text/event-stream"),
)
.expect(expected_requests as u64)
.mount(&server)
.await;
server
}
/// Create a conversation configured to talk to the provided mock server.
#[expect(clippy::expect_used)]
async fn new_conversation_for_server<F>(
server: &MockServer,
codex_home: &TempDir,
mutator: F,
) -> Arc<CodexConversation>
where
F: FnOnce(&mut Config),
{
let model_provider = ModelProviderInfo {
base_url: Some(format!("{}/v1", server.uri())),
..built_in_model_providers()["openai"].clone()
};
let mut config = load_default_config_for_test(codex_home);
config.model_provider = model_provider;
mutator(&mut config);
let conversation_manager =
ConversationManager::with_auth(CodexAuth::from_api_key("Test API Key"));
conversation_manager
.new_conversation(config)
.await
.expect("create conversation")
.conversation
}
/// Create a conversation resuming from a rollout file, configured to talk to the provided mock server.
#[expect(clippy::expect_used)]
async fn resume_conversation_for_server<F>(
server: &MockServer,
codex_home: &TempDir,
resume_path: std::path::PathBuf,
mutator: F,
) -> Arc<CodexConversation>
where
F: FnOnce(&mut Config),
{
let model_provider = ModelProviderInfo {
base_url: Some(format!("{}/v1", server.uri())),
..built_in_model_providers()["openai"].clone()
};
let mut config = load_default_config_for_test(codex_home);
config.model_provider = model_provider;
mutator(&mut config);
let conversation_manager =
ConversationManager::with_auth(CodexAuth::from_api_key("Test API Key"));
let auth_manager =
codex_core::AuthManager::from_auth_for_testing(CodexAuth::from_api_key("Test API Key"));
conversation_manager
.resume_conversation_from_rollout(config, resume_path, auth_manager)
.await
.expect("resume conversation")
.conversation
}

View File

@@ -0,0 +1,50 @@
#![allow(clippy::unwrap_used, clippy::expect_used)]
use std::io::Write;
use std::path::PathBuf;
use codex_core::find_conversation_path_by_id_str;
use tempfile::TempDir;
use uuid::Uuid;
/// Create sessions/YYYY/MM/DD and write a minimal rollout file containing the
/// provided conversation id in the SessionMeta line. Returns the absolute path.
fn write_minimal_rollout_with_id(codex_home: &TempDir, id: Uuid) -> PathBuf {
let sessions = codex_home.path().join("sessions/2024/01/01");
std::fs::create_dir_all(&sessions).unwrap();
let file = sessions.join(format!("rollout-2024-01-01T00-00-00-{id}.jsonl"));
let mut f = std::fs::File::create(&file).unwrap();
// Minimal first line: session_meta with the id so content search can find it
writeln!(
f,
"{}",
serde_json::json!({
"timestamp": "2024-01-01T00:00:00.000Z",
"type": "session_meta",
"payload": {
"id": id,
"timestamp": "2024-01-01T00:00:00Z",
"instructions": null,
"cwd": ".",
"originator": "test",
"cli_version": "test"
}
})
)
.unwrap();
file
}
#[tokio::test]
async fn find_locates_rollout_file_by_id() {
let home = TempDir::new().unwrap();
let id = Uuid::new_v4();
let expected = write_minimal_rollout_with_id(&home, id);
let found = find_conversation_path_by_id_str(home.path(), &id.to_string())
.await
.unwrap();
assert_eq!(found.unwrap(), expected);
}

View File

@@ -171,6 +171,8 @@ async fn python_getpwuid_works_under_seatbelt() {
// ReadOnly is sufficient here since we are only exercising user lookup.
let policy = SandboxPolicy::ReadOnly;
let command_cwd = std::env::current_dir().expect("getcwd");
let sandbox_cwd = command_cwd.clone();
let mut child = spawn_command_under_seatbelt(
vec![
@@ -179,8 +181,9 @@ async fn python_getpwuid_works_under_seatbelt() {
// Print the passwd struct; success implies lookup worked.
"import pwd, os; print(pwd.getpwuid(os.getuid()))".to_string(),
],
command_cwd,
&policy,
std::env::current_dir().expect("should be able to get current dir"),
sandbox_cwd.as_path(),
StdioPolicy::RedirectForShellTool,
HashMap::new(),
)
@@ -216,13 +219,16 @@ fn create_test_scenario(tmp: &TempDir) -> TestScenario {
/// Note that `path` must be absolute.
async fn touch(path: &Path, policy: &SandboxPolicy) -> bool {
assert!(path.is_absolute(), "Path must be absolute: {path:?}");
let command_cwd = std::env::current_dir().expect("getcwd");
let sandbox_cwd = command_cwd.clone();
let mut child = spawn_command_under_seatbelt(
vec![
"/usr/bin/touch".to_string(),
path.to_string_lossy().to_string(),
],
command_cwd,
policy,
std::env::current_dir().expect("should be able to get current dir"),
sandbox_cwd.as_path(),
StdioPolicy::RedirectForShellTool,
HashMap::new(),
)

View File

@@ -1,17 +1,15 @@
use std::time::Duration;
use codex_core::CodexAuth;
use codex_core::ConversationManager;
use codex_core::ModelProviderInfo;
use codex_core::WireApi;
use codex_core::protocol::EventMsg;
use codex_core::protocol::InputItem;
use codex_core::protocol::Op;
use codex_core::spawn::CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR;
use core_test_support::load_default_config_for_test;
use core_test_support::load_sse_fixture_with_id;
use core_test_support::non_sandbox_test;
use core_test_support::test_codex::TestCodex;
use core_test_support::test_codex::test_codex;
use core_test_support::wait_for_event_with_timeout;
use tempfile::TempDir;
use wiremock::Mock;
use wiremock::MockServer;
use wiremock::ResponseTemplate;
@@ -25,12 +23,7 @@ fn sse_completed(id: &str) -> String {
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn continue_after_stream_error() {
if std::env::var(CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR).is_ok() {
println!(
"Skipping test because it cannot execute when network is disabled in a Codex sandbox."
);
return;
}
non_sandbox_test!();
let server = MockServer::start().await;
@@ -83,18 +76,14 @@ async fn continue_after_stream_error() {
requires_openai_auth: false,
};
let home = TempDir::new().unwrap();
let mut config = load_default_config_for_test(&home);
config.base_instructions = Some("You are a helpful assistant".to_string());
config.model_provider = provider;
let conversation_manager =
ConversationManager::with_auth(CodexAuth::from_api_key("Test API Key"));
let codex = conversation_manager
.new_conversation(config)
let TestCodex { codex, .. } = test_codex()
.with_config(move |config| {
config.base_instructions = Some("You are a helpful assistant".to_string());
config.model_provider = provider;
})
.build(&server)
.await
.unwrap()
.conversation;
.unwrap();
codex
.submit(Op::UserInput {

View File

@@ -3,17 +3,16 @@
use std::time::Duration;
use codex_core::CodexAuth;
use codex_core::ConversationManager;
use codex_core::ModelProviderInfo;
use codex_core::WireApi;
use codex_core::protocol::EventMsg;
use codex_core::protocol::InputItem;
use codex_core::protocol::Op;
use codex_core::spawn::CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR;
use core_test_support::load_default_config_for_test;
use core_test_support::load_sse_fixture;
use core_test_support::load_sse_fixture_with_id;
use tempfile::TempDir;
use core_test_support::non_sandbox_test;
use core_test_support::test_codex::TestCodex;
use core_test_support::test_codex::test_codex;
use tokio::time::timeout;
use wiremock::Mock;
use wiremock::MockServer;
@@ -33,12 +32,7 @@ fn sse_completed(id: &str) -> String {
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn retries_on_early_close() {
if std::env::var(CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR).is_ok() {
println!(
"Skipping test because it cannot execute when network is disabled in a Codex sandbox."
);
return;
}
non_sandbox_test!();
let server = MockServer::start().await;
@@ -79,7 +73,7 @@ async fn retries_on_early_close() {
// provider is not set.
env_key: Some("PATH".into()),
env_key_instructions: None,
wire_api: codex_core::WireApi::Responses,
wire_api: WireApi::Responses,
query_params: None,
http_headers: None,
env_http_headers: None,
@@ -90,16 +84,13 @@ async fn retries_on_early_close() {
requires_openai_auth: false,
};
let codex_home = TempDir::new().unwrap();
let mut config = load_default_config_for_test(&codex_home);
config.model_provider = model_provider;
let conversation_manager =
ConversationManager::with_auth(CodexAuth::from_api_key("Test API Key"));
let codex = conversation_manager
.new_conversation(config)
let TestCodex { codex, .. } = test_codex()
.with_config(move |config| {
config.model_provider = model_provider;
})
.build(&server)
.await
.unwrap()
.conversation;
.unwrap();
codex
.submit(Op::UserInput {

View File

@@ -0,0 +1,73 @@
#![cfg(not(target_os = "windows"))]
use std::os::unix::fs::PermissionsExt;
use codex_core::protocol::EventMsg;
use codex_core::protocol::InputItem;
use codex_core::protocol::Op;
use core_test_support::non_sandbox_test;
use core_test_support::responses;
use core_test_support::test_codex::TestCodex;
use core_test_support::test_codex::test_codex;
use core_test_support::wait_for_event;
use tempfile::TempDir;
use wiremock::matchers::any;
use responses::ev_assistant_message;
use responses::ev_completed;
use responses::sse;
use responses::start_mock_server;
use tokio::time::Duration;
use tokio::time::sleep;
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn summarize_context_three_requests_and_instructions() -> anyhow::Result<()> {
non_sandbox_test!(result);
let server = start_mock_server().await;
let sse1 = sse(vec![ev_assistant_message("m1", "Done"), ev_completed("r1")]);
responses::mount_sse_once(&server, any(), sse1).await;
let notify_dir = TempDir::new()?;
// Write a notify script that records its last argument in a file next to itself.
let notify_script = notify_dir.path().join("notify.sh");
std::fs::write(
&notify_script,
r#"#!/bin/bash
set -e
echo -n "${@: -1}" > "$(dirname "${0}")/notify.txt""#,
)?;
std::fs::set_permissions(&notify_script, std::fs::Permissions::from_mode(0o755))?;
let notify_file = notify_dir.path().join("notify.txt");
let notify_script_str = notify_script.to_str().unwrap().to_string();
let TestCodex { codex, .. } = test_codex()
.with_config(move |cfg| cfg.notify = Some(vec![notify_script_str]))
.build(&server)
.await?;
// 1) Normal user input should hit server once.
codex
.submit(Op::UserInput {
items: vec![InputItem::Text {
text: "hello world".into(),
}],
})
.await?;
wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
// The notify script is spawned as a separate process, so wait for it to write the file.
for _ in 0..100u32 {
if notify_file.exists() {
break;
}
sleep(Duration::from_millis(100)).await;
}
assert!(notify_file.exists());
Ok(())
}

View File

@@ -0,0 +1,122 @@
# Codex MCP Interface [experimental]
This document describes Codex's experimental MCP interface: a JSON-RPC API that runs over the Model Context Protocol (MCP) transport to control a local Codex engine.
- Status: experimental and subject to change without notice
- Server binary: `codex mcp` (or `codex-mcp-server`)
- Transport: standard MCP over stdio (JSON-RPC 2.0, line-delimited)
## Overview
Codex exposes a small set of MCP-compatible methods to create and manage conversations, send user input, receive live events, and handle approval prompts. The types are defined in `protocol/src/mcp_protocol.rs` and reused by the MCP server implementation in `mcp-server/`.
At a glance:
- Conversations
- `newConversation` → start a Codex session
- `sendUserMessage` / `sendUserTurn` → send user input into a conversation
- `interruptConversation` → stop the current turn
- `listConversations`, `resumeConversation`, `archiveConversation`
- Configuration and info
- `getUserSavedConfig`, `setDefaultModel`, `getUserAgent`, `userInfo`
- Auth
- `loginApiKey`, `loginChatGpt`, `cancelLoginChatGpt`, `logoutChatGpt`, `getAuthStatus`
- Utilities
- `gitDiffToRemote`, `execOneOffCommand`
- Approvals (server → client requests)
- `applyPatchApproval`, `execCommandApproval`
- Notifications (server → client)
- `loginChatGptComplete`, `authStatusChange`
- `codex/event` stream with agent events
See code for full type definitions and exact shapes: `protocol/src/mcp_protocol.rs`.
## Starting the server
Run Codex as an MCP server and connect an MCP client:
```bash
codex mcp | your_mcp_client
```
For a simple inspection UI, you can also try:
```bash
npx @modelcontextprotocol/inspector codex mcp
```
## Conversations
Start a new session with optional overrides:
Request `newConversation` params (subset):
- `model`: string model id (e.g. "o3", "gpt-5", "gpt-5-codex")
- `profile`: optional named profile
- `cwd`: optional working directory
- `approvalPolicy`: `untrusted` | `on-request` | `on-failure` | `never`
- `sandbox`: `read-only` | `workspace-write` | `danger-full-access`
- `config`: map of additional config overrides
- `baseInstructions`: optional instruction override
- `includePlanTool` / `includeApplyPatchTool`: booleans
Response: `{ conversationId, model, reasoningEffort?, rolloutPath }`
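For illustration, a request that overrides several defaults might look like the following (the values are illustrative only, and `config` keys follow the regular Codex configuration):
```json
{ "jsonrpc": "2.0", "id": 1, "method": "newConversation", "params": { "model": "gpt-5-codex", "cwd": "/path/to/repo", "approvalPolicy": "never", "sandbox": "workspace-write", "config": { "model_reasoning_effort": "high" } } }
```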
Send input to the active turn:
- `sendUserMessage` → enqueue items to the conversation
- `sendUserTurn` → structured turn with explicit `cwd`, `approvalPolicy`, `sandboxPolicy`, `model`, optional `effort`, and `summary`
Interrupt a running turn: `interruptConversation`.
List/resume/archive: `listConversations`, `resumeConversation`, `archiveConversation`.
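As a sketch of `sendUserTurn` (the exact field shapes, in particular `sandboxPolicy`, are defined in `protocol/src/mcp_protocol.rs`; the values here are illustrative):
```json
{ "jsonrpc": "2.0", "id": 3, "method": "sendUserTurn", "params": { "conversationId": "c7b0…", "items": [{ "type": "text", "text": "Run the tests" }], "cwd": "/path/to/repo", "approvalPolicy": "on-request", "sandboxPolicy": { "mode": "workspace-write" }, "model": "gpt-5", "effort": "medium", "summary": "auto" } }
```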
## Event stream
While a conversation runs, the server sends notifications:
- `codex/event` with the serialized Codex event payload. The shape matches the `Event` and `EventMsg` types in `core/src/protocol.rs`. Some notifications include a `_meta.requestId` to correlate with the originating request.
- Auth notifications via method names `loginChatGptComplete` and `authStatusChange`.
Clients should render events and, when present, surface approval requests (see next section).
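An illustrative notification (the payload mirrors `Event`/`EventMsg`; the field values here are made up):
```json
{ "jsonrpc": "2.0", "method": "codex/event", "params": { "_meta": { "requestId": 2 }, "id": "0", "msg": { "type": "agent_message", "message": "Hello from Codex" } } }
```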
## Approvals (server → client)
When Codex needs approval to apply changes or run commands, the server issues JSON-RPC requests to the client:
- `applyPatchApproval { conversationId, callId, fileChanges, reason?, grantRoot? }`
- `execCommandApproval { conversationId, callId, command, cwd, reason? }`
The client must reply with `{ decision: "allow" | "deny" }` for each request.
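For example, a command approval round trip might look like this (shapes abbreviated; see `protocol/src/mcp_protocol.rs` for the authoritative types):
```json
{ "jsonrpc": "2.0", "id": 7, "method": "execCommandApproval", "params": { "conversationId": "c7b0…", "callId": "call_1", "command": ["git", "status"], "cwd": "/path/to/repo" } }
```
and the client's reply:
```json
{ "jsonrpc": "2.0", "id": 7, "result": { "decision": "allow" } }
```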
## Auth helpers
For ChatGPT or API-key based auth flows, the server exposes helpers:
- `loginApiKey { apiKey }`
- `loginChatGpt` → returns `{ loginId, authUrl }`; browser completes flow; then `loginChatGptComplete` notification follows
- `cancelLoginChatGpt { loginId }`, `logoutChatGpt`, `getAuthStatus { includeToken?, refreshToken? }`
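A sketch of the ChatGPT flow (the identifiers and URL are placeholders):
```json
{ "jsonrpc": "2.0", "id": 5, "method": "loginChatGpt", "params": {} }
```
```json
{ "jsonrpc": "2.0", "id": 5, "result": { "loginId": "3f2a…", "authUrl": "https://…" } }
```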
## Example: start and send a message
```json
{ "jsonrpc": "2.0", "id": 1, "method": "newConversation", "params": { "model": "gpt-5", "approvalPolicy": "on-request" } }
```
Server responds:
```json
{ "jsonrpc": "2.0", "id": 1, "result": { "conversationId": "c7b0…", "model": "gpt-5", "rolloutPath": "/path/to/rollout.jsonl" } }
```
Then send input:
```json
{ "jsonrpc": "2.0", "id": 2, "method": "sendUserMessage", "params": { "conversationId": "c7b0…", "items": [{ "type": "text", "text": "Hello Codex" }] } }
```
While processing, the server emits `codex/event` notifications containing agent output, approvals, and status updates.
## Compatibility and stability
This interface is experimental. Method names, fields, and event shapes may evolve. For the authoritative schema, consult `protocol/src/mcp_protocol.rs` and the corresponding server wiring in `mcp-server/`.

View File

@@ -15,35 +15,37 @@ path = "src/lib.rs"
workspace = true
[dependencies]
anyhow = "1"
chrono = "0.4.40"
clap = { version = "4", features = ["derive"] }
codex-arg0 = { path = "../arg0" }
codex-common = { path = "../common", features = [
anyhow = { workspace = true }
chrono = { workspace = true }
clap = { workspace = true, features = ["derive"] }
codex-arg0 = { workspace = true }
codex-common = { workspace = true, features = [
"cli",
"elapsed",
"sandbox_summary",
] }
codex-core = { path = "../core" }
codex-ollama = { path = "../ollama" }
codex-protocol = { path = "../protocol" }
owo-colors = "4.2.0"
serde_json = "1"
shlex = "1.3.0"
tokio = { version = "1", features = [
codex-core = { workspace = true }
codex-ollama = { workspace = true }
codex-protocol = { workspace = true }
owo-colors = { workspace = true }
serde_json = { workspace = true }
shlex = { workspace = true }
tokio = { workspace = true, features = [
"io-std",
"macros",
"process",
"rt-multi-thread",
"signal",
] }
tracing = { version = "0.1.41", features = ["log"] }
tracing-subscriber = { version = "0.3.19", features = ["env-filter"] }
tracing = { workspace = true, features = ["log"] }
tracing-subscriber = { workspace = true, features = ["env-filter"] }
[dev-dependencies]
assert_cmd = "2"
core_test_support = { path = "../core/tests/common" }
libc = "0.2"
predicates = "3"
tempfile = "3.13.0"
wiremock = "0.6"
assert_cmd = { workspace = true }
core_test_support = { workspace = true }
libc = { workspace = true }
predicates = { workspace = true }
tempfile = { workspace = true }
uuid = { workspace = true }
walkdir = { workspace = true }
wiremock = { workspace = true }

View File

@@ -6,6 +6,10 @@ use std::path::PathBuf;
#[derive(Parser, Debug)]
#[command(version)]
pub struct Cli {
/// Action to perform. If omitted, runs a new non-interactive session.
#[command(subcommand)]
pub command: Option<Command>,
/// Optional image(s) to attach to the initial prompt.
#[arg(long = "image", short = 'i', value_name = "FILE", value_delimiter = ',', num_args = 1..)]
pub images: Vec<PathBuf>,
@@ -48,6 +52,10 @@ pub struct Cli {
#[arg(long = "skip-git-repo-check", default_value_t = false)]
pub skip_git_repo_check: bool,
/// Path to a JSON Schema file describing the model's final response shape.
#[arg(long = "output-schema", value_name = "FILE")]
pub output_schema: Option<PathBuf>,
#[clap(skip)]
pub config_overrides: CliConfigOverrides,
@@ -69,6 +77,28 @@ pub struct Cli {
pub prompt: Option<String>,
}
#[derive(Debug, clap::Subcommand)]
pub enum Command {
/// Resume a previous session by id or pick the most recent with --last.
Resume(ResumeArgs),
}
#[derive(Parser, Debug)]
pub struct ResumeArgs {
/// Conversation/session id (UUID). When provided, resumes this session.
/// If omitted, use --last to pick the most recent recorded session.
#[arg(value_name = "SESSION_ID")]
pub session_id: Option<String>,
/// Resume the most recent recorded session (newest) without specifying an id.
#[arg(long = "last", default_value_t = false, conflicts_with = "session_id")]
pub last: bool,
/// Prompt to send after resuming the session. If `-` is used, read from stdin.
#[arg(value_name = "PROMPT")]
pub prompt: Option<String>,
}
#[derive(Debug, Clone, Copy, Default, PartialEq, Eq, ValueEnum)]
#[value(rename_all = "kebab-case")]
pub enum Color {

View File

@@ -558,17 +558,22 @@ impl EventProcessor for EventProcessorWithHumanOutput {
TurnAbortReason::Replaced => {
ts_println!(self, "task aborted: replaced by a new task");
}
TurnAbortReason::ReviewEnded => {
ts_println!(self, "task aborted: review ended");
}
},
EventMsg::ShutdownComplete => return CodexStatus::Shutdown,
EventMsg::ConversationPath(_) => {}
EventMsg::UserMessage(_) => {}
EventMsg::EnteredReviewMode(_) => {}
EventMsg::ExitedReviewMode(_) => {}
}
CodexStatus::Running
}
}
fn escape_command(command: &[String]) -> String {
try_join(command.iter().map(|s| s.as_str())).unwrap_or_else(|_| command.join(" "))
try_join(command.iter().map(String::as_str)).unwrap_or_else(|_| command.join(" "))
}
fn format_file_change(change: &FileChange) -> &'static str {

View File

@@ -25,16 +25,20 @@ use codex_ollama::DEFAULT_OSS_MODEL;
use codex_protocol::config_types::SandboxMode;
use event_processor_with_human_output::EventProcessorWithHumanOutput;
use event_processor_with_json_output::EventProcessorWithJsonOutput;
use serde_json::Value;
use tracing::debug;
use tracing::error;
use tracing::info;
use tracing_subscriber::EnvFilter;
use crate::cli::Command as ExecCommand;
use crate::event_processor::CodexStatus;
use crate::event_processor::EventProcessor;
use codex_core::find_conversation_path_by_id_str;
pub async fn run_main(cli: Cli, codex_linux_sandbox_exe: Option<PathBuf>) -> anyhow::Result<()> {
let Cli {
command,
images,
model: model_cli_arg,
oss,
@@ -48,11 +52,19 @@ pub async fn run_main(cli: Cli, codex_linux_sandbox_exe: Option<PathBuf>) -> any
json: json_mode,
sandbox_mode: sandbox_mode_cli_arg,
prompt,
output_schema: output_schema_path,
config_overrides,
} = cli;
// Determine the prompt based on CLI arg and/or stdin.
let prompt = match prompt {
// Determine the prompt source (parent or subcommand) and read from stdin if needed.
let prompt_arg = match &command {
// Allow prompt before the subcommand by falling back to the parent-level prompt
// when the Resume subcommand did not provide its own prompt.
Some(ExecCommand::Resume(args)) => args.prompt.clone().or(prompt),
None => prompt,
};
let prompt = match prompt_arg {
Some(p) if p != "-" => p,
// Either `-` was passed or no positional arg.
maybe_dash => {
@@ -86,6 +98,8 @@ pub async fn run_main(cli: Cli, codex_linux_sandbox_exe: Option<PathBuf>) -> any
}
};
let output_schema = load_output_schema(output_schema_path);
let (stdout_with_ansi, stderr_with_ansi) = match color {
cli::Color::Always => (true, true),
cli::Color::Never => (false, false),
@@ -137,6 +151,7 @@ pub async fn run_main(cli: Cli, codex_linux_sandbox_exe: Option<PathBuf>) -> any
// Load configuration and determine approval policy
let overrides = ConfigOverrides {
model,
review_model: None,
config_profile,
// This CLI is intended to be headless and has no affordances for asking
// the user for approval.
@@ -182,18 +197,43 @@ pub async fn run_main(cli: Cli, codex_linux_sandbox_exe: Option<PathBuf>) -> any
// is using.
event_processor.print_config_summary(&config, &prompt);
if !skip_git_repo_check && get_git_repo_root(&config.cwd.to_path_buf()).is_none() {
let default_cwd = config.cwd.to_path_buf();
let default_approval_policy = config.approval_policy;
let default_sandbox_policy = config.sandbox_policy.clone();
let default_model = config.model.clone();
let default_effort = config.model_reasoning_effort;
let default_summary = config.model_reasoning_summary;
if !skip_git_repo_check && get_git_repo_root(&default_cwd).is_none() {
eprintln!("Not inside a trusted directory and --skip-git-repo-check was not specified.");
std::process::exit(1);
}
let conversation_manager =
ConversationManager::new(AuthManager::shared(config.codex_home.clone()));
// Handle resume subcommand by resolving a rollout path and using explicit resume API.
let NewConversation {
conversation_id: _,
conversation,
session_configured,
} = conversation_manager.new_conversation(config).await?;
} = if let Some(ExecCommand::Resume(args)) = command {
let resume_path = resolve_resume_path(&config, &args).await?;
if let Some(path) = resume_path {
conversation_manager
.resume_conversation_from_rollout(
config.clone(),
path,
AuthManager::shared(config.codex_home.clone()),
)
.await?
} else {
conversation_manager.new_conversation(config).await?
}
} else {
conversation_manager.new_conversation(config).await?
};
info!("Codex initialized with event: {session_configured:?}");
let (tx, mut rx) = tokio::sync::mpsc::unbounded_channel::<Event>();
@@ -259,7 +299,18 @@ pub async fn run_main(cli: Cli, codex_linux_sandbox_exe: Option<PathBuf>) -> any
// Send the prompt.
let items: Vec<InputItem> = vec![InputItem::Text { text: prompt }];
let initial_prompt_task_id = conversation.submit(Op::UserInput { items }).await?;
let initial_prompt_task_id = conversation
.submit(Op::UserTurn {
items,
cwd: default_cwd,
approval_policy: default_approval_policy,
sandbox_policy: default_sandbox_policy,
model: default_model,
effort: default_effort,
summary: default_summary,
final_output_json_schema: output_schema,
})
.await?;
info!("Sent prompt with event ID: {initial_prompt_task_id}");
// Run the loop until the task is complete.
@@ -278,3 +329,49 @@ pub async fn run_main(cli: Cli, codex_linux_sandbox_exe: Option<PathBuf>) -> any
Ok(())
}
async fn resolve_resume_path(
config: &Config,
args: &crate::cli::ResumeArgs,
) -> anyhow::Result<Option<PathBuf>> {
if args.last {
match codex_core::RolloutRecorder::list_conversations(&config.codex_home, 1, None).await {
Ok(page) => Ok(page.items.first().map(|it| it.path.clone())),
Err(e) => {
error!("Error listing conversations: {e}");
Ok(None)
}
}
} else if let Some(id_str) = args.session_id.as_deref() {
let path = find_conversation_path_by_id_str(&config.codex_home, id_str).await?;
Ok(path)
} else {
Ok(None)
}
}
fn load_output_schema(path: Option<PathBuf>) -> Option<Value> {
let path = path?;
let schema_str = match std::fs::read_to_string(&path) {
Ok(contents) => contents,
Err(err) => {
eprintln!(
"Failed to read output schema file {}: {err}",
path.display()
);
std::process::exit(1);
}
};
match serde_json::from_str::<Value>(&schema_str) {
Ok(value) => Some(value),
Err(err) => {
eprintln!(
"Output schema file {} is not valid JSON: {err}",
path.display()
);
std::process::exit(1);
}
}
}

View File

@@ -0,0 +1,10 @@
event: response.created
data: {"type":"response.created","response":{"id":"resp1"}}

event: response.output_item.done
data: {"type":"response.output_item.done","item":{"type":"message","role":"assistant","content":[{"type":"output_text","text":"fixture hello"}]}}

event: response.completed
data: {"type":"response.completed","response":{"id":"resp1","output":[]}}

View File

@@ -1,25 +0,0 @@
[
{
"type": "response.output_item.done",
"item": {
"type": "custom_tool_call",
"name": "apply_patch",
"input": "*** Begin Patch\n*** Add File: test.md\n+Hello world\n*** End Patch",
"call_id": "__ID__"
}
},
{
"type": "response.completed",
"response": {
"id": "__ID__",
"usage": {
"input_tokens": 0,
"input_tokens_details": null,
"output_tokens": 0,
"output_tokens_details": null,
"total_tokens": 0
},
"output": []
}
}
]

Some files were not shown because too many files have changed in this diff.