Compare commits

...

49 Commits

Author SHA1 Message Date
Ahmed Ibrahim
a4825ae5f2 small edit 2025-08-25 10:12:06 -07:00
Odysseas YIAKOUMIS
dc56984853 avoid error when /compact response has no token_usage (#2417) 2025-08-24 18:00:57 +04:00
Ahmed Ibrahim
c6a52d611c Resume conversation from an earlier point in history (#2607)
Fixes the merge conflict from #2588


https://github.com/user-attachments/assets/392c7c37-cf8f-4ed6-952e-8215e8c57bc4
2025-08-23 23:23:15 -07:00
Reuben Narad
363636f5eb Add web search tool (#2371)
Adds a web_search tool, enabling the model to use the Responses API
web_search tool.
- Disabled by default, enabled by --search flag
- When --search is passed, exposes web_search_request function tool to
the model, which triggers user approval. When approved, the model can
use the web_search tool for the remainder of the turn
<img width="1033" height="294" alt="image"
src="https://github.com/user-attachments/assets/62ac6563-b946-465c-ba5d-9325af28b28f"
/>

---------

Co-authored-by: easong-openai <easong@openai.com>
2025-08-23 22:58:56 -07:00
Ahmed Ibrahim
957d44918d send-aggregated output (#2364)
We want to send an aggregated output of stderr and stdout so we don't
have to merge stderr+stdout after the fact, where ordering is sometimes lost.

---------

Co-authored-by: Gabriel Peal <gpeal@users.noreply.github.com>
2025-08-23 16:54:31 +00:00
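An editor's sketch of the idea above: collect stdout and stderr into a single arrival-ordered stream at the source, since interleaving them after the fact loses ordering. This is not the codex implementation; the helper name and structure are illustrative.

```rust
use std::io::{BufRead, BufReader};
use std::process::{Command, Stdio};
use std::sync::mpsc;
use std::thread;

/// Run a command and return stdout and stderr interleaved in arrival
/// order, instead of as two separately buffered streams.
fn run_aggregated(cmd: &str, args: &[&str]) -> std::io::Result<String> {
    let mut child = Command::new(cmd)
        .args(args)
        .stdout(Stdio::piped())
        .stderr(Stdio::piped())
        .spawn()?;

    let (tx, rx) = mpsc::channel::<String>();
    let readers: Vec<Box<dyn BufRead + Send>> = vec![
        Box::new(BufReader::new(child.stdout.take().expect("piped stdout"))),
        Box::new(BufReader::new(child.stderr.take().expect("piped stderr"))),
    ];
    for reader in readers {
        let tx = tx.clone();
        thread::spawn(move || {
            for line in reader.lines().map_while(Result::ok) {
                // Lines are forwarded as they arrive, so ordering across
                // the two pipes is preserved on a best-effort basis.
                let _ = tx.send(line);
            }
        });
    }
    drop(tx); // the channel closes once both reader threads finish

    let aggregated: Vec<String> = rx.into_iter().collect();
    child.wait()?;
    Ok(aggregated.join("\n"))
}
```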
easong-openai
eca97d8559 transcript hint (#2605)
Adds a hint to use ctrl-t to view the transcript for more details

<img width="475" height="49" alt="image"
src="https://github.com/user-attachments/assets/6ff650eb-ed54-4699-be04-3c50f0f8f631"
/>
2025-08-23 01:06:22 -07:00
easong-openai
09819d9b47 Add the ability to interrupt and provide feedback to the model (#2381) 2025-08-22 20:34:43 -07:00
Michael Bolin
e3b03eaccb feat: StreamableShell with exec_command and write_stdin tools (#2574) 2025-08-22 18:10:55 -07:00
Ahmed Ibrahim
311ad0ce26 fork conversation from a previous message (#2575)
This can be the underlying logic for starting a conversation from a
previous message. Will need some love in the UI.

Base for building this: #2588
2025-08-22 17:06:09 -07:00
Jeremy Rose
5fa7d46ddf tui: fix resize on wezterm (#2600)
WezTerm doesn't respond to cursor queries during a synchronized update,
so resizing was broken there.
2025-08-22 16:33:18 -07:00
Jeremy Rose
d994019f3f tui: coalesce command output; show unabridged commands in transcript (#2590)
https://github.com/user-attachments/assets/effec7c7-732a-4b61-a2ae-3cb297b6b19b
2025-08-22 16:32:31 -07:00
Jeremy Rose
6de9541f0a tui: open transcript mode at the bottom (#2592)
this got lost when we switched /diff to use the pager.
2025-08-22 16:06:41 -07:00
wkrettek
85099017fd Fix typo in AGENTS.md (#2518)
- Change `examole` to `example`
2025-08-22 16:05:39 -07:00
dependabot[bot]
a5b2ebb49b chore(deps): bump reqwest from 0.12.22 to 0.12.23 in /codex-rs (#2492)
Bumps [reqwest](https://github.com/seanmonstar/reqwest) from 0.12.22 to
0.12.23.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/seanmonstar/reqwest/releases">reqwest's
releases</a>.</em></p>
<blockquote>
<h2>v0.12.23</h2>
<h2>tl;dr</h2>
<ul>
<li>🇺🇩🇸 Add <code>ClientBuilder::unix_socket(path)</code> option that
will force all requests over that Unix Domain Socket.</li>
<li>🔁 Add <code>ClientBuilder::retries(policy)</code> and
<code>reqwest::retry::Builder</code> to configure <a
href="https://seanmonstar.com/blog/reqwest-retries/">automatic
retries</a>.</li>
<li>Add <code>ClientBuilder::dns_resolver2()</code> with more ergonomic
argument bounds, allowing more resolver implementations.</li>
<li>Add <code>http3_*</code> options to
<code>blocking::ClientBuilder</code>.</li>
<li>Fix default TCP timeout values to enabled and faster.</li>
<li>Fix SOCKS proxies to default to port 1080</li>
<li>(wasm) Add cache methods to <code>RequestBuilder</code>.</li>
</ul>
<h2>What's Changed</h2>
<ul>
<li>Minimize package size by <a
href="https://github.com/weiznich"><code>@​weiznich</code></a> in <a
href="https://redirect.github.com/seanmonstar/reqwest/pull/2759">seanmonstar/reqwest#2759</a></li>
<li>chore(dev-dependencies): bump brotli by <a
href="https://github.com/seanmonstar"><code>@​seanmonstar</code></a> in
<a
href="https://redirect.github.com/seanmonstar/reqwest/pull/2760">seanmonstar/reqwest#2760</a></li>
<li>upgrade hickory-dns to 0.25 by <a
href="https://github.com/seanmonstar"><code>@​seanmonstar</code></a> in
<a
href="https://redirect.github.com/seanmonstar/reqwest/pull/2761">seanmonstar/reqwest#2761</a></li>
<li>Re-expose http3 options in blocking::clientBuilder by <a
href="https://github.com/ducaale"><code>@​ducaale</code></a> in <a
href="https://redirect.github.com/seanmonstar/reqwest/pull/2770">seanmonstar/reqwest#2770</a></li>
<li>fix(proxy): restore default port 1080 for SOCKS proxies without
explicit port by <a
href="https://github.com/0x676e67"><code>@​0x676e67</code></a> in <a
href="https://redirect.github.com/seanmonstar/reqwest/pull/2771">seanmonstar/reqwest#2771</a></li>
<li>ci: use msrv-aware cargo in msrv job by <a
href="https://github.com/seanmonstar"><code>@​seanmonstar</code></a> in
<a
href="https://redirect.github.com/seanmonstar/reqwest/pull/2779">seanmonstar/reqwest#2779</a></li>
<li>feat: add request cache option for wasm by <a
href="https://github.com/Spxg"><code>@​Spxg</code></a> in <a
href="https://redirect.github.com/seanmonstar/reqwest/pull/2775">seanmonstar/reqwest#2775</a></li>
<li>style(client): use <code>std::task::ready!</code> macro to simplify
<code>Poll</code> branch match by <a
href="https://github.com/0x676e67"><code>@​0x676e67</code></a> in <a
href="https://redirect.github.com/seanmonstar/reqwest/pull/2781">seanmonstar/reqwest#2781</a></li>
<li>fix: add default tcp keepalive and user_timeout values by <a
href="https://github.com/seanmonstar"><code>@​seanmonstar</code></a> in
<a
href="https://redirect.github.com/seanmonstar/reqwest/pull/2780">seanmonstar/reqwest#2780</a></li>
<li>feat: add unix_socket() option to client builder by <a
href="https://github.com/seanmonstar"><code>@​seanmonstar</code></a> in
<a
href="https://redirect.github.com/seanmonstar/reqwest/pull/2624">seanmonstar/reqwest#2624</a></li>
<li>Add retry policies by <a
href="https://github.com/seanmonstar"><code>@​seanmonstar</code></a> in
<a
href="https://redirect.github.com/seanmonstar/reqwest/pull/2763">seanmonstar/reqwest#2763</a></li>
<li>refactor: loosen retry <code>for_host</code> parameter bounds by <a
href="https://github.com/Enduriel"><code>@​Enduriel</code></a> in <a
href="https://redirect.github.com/seanmonstar/reqwest/pull/2792">seanmonstar/reqwest#2792</a></li>
<li>feat: add dns_resolver2 that is more ergonomic and flexible by <a
href="https://github.com/seanmonstar"><code>@​seanmonstar</code></a> in
<a
href="https://redirect.github.com/seanmonstar/reqwest/pull/2793">seanmonstar/reqwest#2793</a></li>
<li>Prepare v0.12.23 by <a
href="https://github.com/seanmonstar"><code>@​seanmonstar</code></a> in
<a
href="https://redirect.github.com/seanmonstar/reqwest/pull/2795">seanmonstar/reqwest#2795</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/weiznich"><code>@​weiznich</code></a>
made their first contribution in <a
href="https://redirect.github.com/seanmonstar/reqwest/pull/2759">seanmonstar/reqwest#2759</a></li>
<li><a href="https://github.com/Spxg"><code>@​Spxg</code></a> made their
first contribution in <a
href="https://redirect.github.com/seanmonstar/reqwest/pull/2775">seanmonstar/reqwest#2775</a></li>
<li><a href="https://github.com/Enduriel"><code>@​Enduriel</code></a>
made their first contribution in <a
href="https://redirect.github.com/seanmonstar/reqwest/pull/2792">seanmonstar/reqwest#2792</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/seanmonstar/reqwest/compare/v0.12.22...v0.12.23">https://github.com/seanmonstar/reqwest/compare/v0.12.22...v0.12.23</a></p>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/seanmonstar/reqwest/blob/master/CHANGELOG.md">reqwest's
changelog</a>.</em></p>
<blockquote>
<h2>v0.12.23</h2>
<ul>
<li>Add <code>ClientBuilder::unix_socket(path)</code> option that will
force all requests over that Unix Domain Socket.</li>
<li>Add <code>ClientBuilder::retries(policy)</code> and
<code>reqwest::retry::Builder</code> to configure automatic
retries.</li>
<li>Add <code>ClientBuilder::dns_resolver2()</code> with more ergonomic
argument bounds, allowing more resolver implementations.</li>
<li>Add <code>http3_*</code> options to
<code>blocking::ClientBuilder</code>.</li>
<li>Fix default TCP timeout values to enabled and faster.</li>
<li>Fix SOCKS proxies to default to port 1080</li>
<li>(wasm) Add cache methods to <code>RequestBuilder</code>.</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="ae7375b547"><code>ae7375b</code></a>
v0.12.23</li>
<li><a
href="9aacdc1e2b"><code>9aacdc1</code></a>
feat: add dns_resolver2 that is more ergonomic and flexible (<a
href="https://redirect.github.com/seanmonstar/reqwest/issues/2793">#2793</a>)</li>
<li><a
href="221be11bc6"><code>221be11</code></a>
refactor: loosen retry <code>for_host</code> parameter bounds (<a
href="https://redirect.github.com/seanmonstar/reqwest/issues/2792">#2792</a>)</li>
<li><a
href="acd1b05994"><code>acd1b05</code></a>
feat: add reqwest::retry policies (<a
href="https://redirect.github.com/seanmonstar/reqwest/issues/2763">#2763</a>)</li>
<li><a
href="54b6022b0f"><code>54b6022</code></a>
feat: add <code>ClientBuilder::unix_socket()</code> option (<a
href="https://redirect.github.com/seanmonstar/reqwest/issues/2624">#2624</a>)</li>
<li><a
href="6358cefd24"><code>6358cef</code></a>
fix: add default tcp keepalive and user_timeout values (<a
href="https://redirect.github.com/seanmonstar/reqwest/issues/2780">#2780</a>)</li>
<li><a
href="21226a5bc3"><code>21226a5</code></a>
style(client): use <code>std::task::ready!</code> macro to simplify Poll
branch matching...</li>
<li><a
href="82086e796b"><code>82086e7</code></a>
feat: add request cache options for wasm (<a
href="https://redirect.github.com/seanmonstar/reqwest/issues/2775">#2775</a>)</li>
<li><a
href="2a0f7a3670"><code>2a0f7a3</code></a>
ci: use msrv-aware cargo in msrv job (<a
href="https://redirect.github.com/seanmonstar/reqwest/issues/2779">#2779</a>)</li>
<li><a
href="f1868036ca"><code>f186803</code></a>
fix(proxy): restore default port 1080 for SOCKS proxies without explicit
port...</li>
<li>Additional commits viewable in <a
href="https://github.com/seanmonstar/reqwest/compare/v0.12.22...v0.12.23">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=reqwest&package-manager=cargo&previous-version=0.12.22&new-version=0.12.23)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-22 15:57:33 -07:00
Gabriel Peal
697c7cf4bf Fix flakiness in shell command approval test (#2547)
## Summary
- read the shell exec approval request's actual id instead of assuming
it is always 0
- use that id when validating and responding in the test

## Testing
- `cargo test -p codex-mcp-server
test_shell_command_approval_triggers_elicitation`

------
https://chatgpt.com/codex/tasks/task_i_68a6ab9c732c832c81522cbf11812be0
2025-08-22 18:46:35 -04:00
dependabot[bot]
34ac698bef chore(deps): bump serde_json from 1.0.142 to 1.0.143 in /codex-rs (#2498)
Bumps [serde_json](https://github.com/serde-rs/json) from 1.0.142 to
1.0.143.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/serde-rs/json/releases">serde_json's
releases</a>.</em></p>
<blockquote>
<h2>v1.0.143</h2>
<ul>
<li>Implement Clone and Debug for serde_json::Map iterators (<a
href="https://redirect.github.com/serde-rs/json/issues/1264">#1264</a>,
thanks <a
href="https://github.com/xlambein"><code>@​xlambein</code></a>)</li>
<li>Implement Default for CompactFormatter (<a
href="https://redirect.github.com/serde-rs/json/issues/1268">#1268</a>,
thanks <a href="https://github.com/SOF3"><code>@​SOF3</code></a>)</li>
<li>Implement FromStr for serde_json::Map (<a
href="https://redirect.github.com/serde-rs/json/issues/1271">#1271</a>,
thanks <a
href="https://github.com/mickvangelderen"><code>@​mickvangelderen</code></a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="10102c49bf"><code>10102c4</code></a>
Release 1.0.143</li>
<li><a
href="2a5b85312c"><code>2a5b853</code></a>
Replace super::super with absolute path within crate</li>
<li><a
href="447170bd38"><code>447170b</code></a>
Merge pull request 1271 from
mickvangelderen/mick/impl-from-str-for-map</li>
<li><a
href="ec190d6dfd"><code>ec190d6</code></a>
Merge pull request <a
href="https://redirect.github.com/serde-rs/json/issues/1264">#1264</a>
from xlambein/master</li>
<li><a
href="8be664752f"><code>8be6647</code></a>
Merge pull request <a
href="https://redirect.github.com/serde-rs/json/issues/1268">#1268</a>
from SOF3/compact-default</li>
<li><a
href="ba5b3cccea"><code>ba5b3cc</code></a>
Revert &quot;Pin nightly toolchain used for miri job&quot;</li>
<li><a
href="fd35a02901"><code>fd35a02</code></a>
Implement FromStr for Map&lt;String, Value&gt;</li>
<li><a
href="bea0fe6b3e"><code>bea0fe6</code></a>
Implement Default for CompactFormatter</li>
<li><a
href="0c0e9f6bfa"><code>0c0e9f6</code></a>
Add Clone and Debug impls to map iterators</li>
<li>See full diff in <a
href="https://github.com/serde-rs/json/compare/v1.0.142...v1.0.143">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=serde_json&package-manager=cargo&previous-version=1.0.142&new-version=1.0.143)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-22 15:45:14 -07:00
Ahmed Ibrahim
097782c775 Move models.rs to protocol (#2595)
Moving models.rs to protocol so we can use its types in `Codex` operations
2025-08-22 22:18:54 +00:00
Michael Bolin
8ba8089592 fix: prefer sending MCP structuredContent as the function call response, if available (#2594)
Prior to this change, when we got a `CallToolResult` from an MCP server,
we JSON-serialized its `content` field and sent that as the `content` of
the function call output returned to the model. This meant that we were
dropping the `structuredContent` on the floor.

Though reading
https://modelcontextprotocol.io/specification/2025-06-18/schema#tool, it
appears that if `outputSchema` is specified, then `structuredContent`
should be set, which seems to be a "higher-fidelity" response to the
function call. This PR updates our handling of `CallToolResult` to
prefer using the JSON-serialization of `structuredContent`, if present,
using `content` as a fallback.

Also, it appears that the sense of `success` was inverted prior to this
PR!
2025-08-22 14:10:18 -07:00
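A minimal sketch of the preference logic described in #2594, assuming a simplified stand-in for MCP's `CallToolResult` (field and function names are illustrative, not the codex API):

```rust
use serde_json::Value;

/// Simplified stand-in for MCP's CallToolResult.
struct CallToolResult {
    content: Vec<Value>,
    structured_content: Option<Value>,
    is_error: Option<bool>,
}

/// Prefer the JSON serialization of `structuredContent` when present,
/// falling back to `content`.
fn function_call_output(result: &CallToolResult) -> (String, bool) {
    let body = match &result.structured_content {
        Some(structured) => structured.to_string(),
        None => serde_json::to_string(&result.content).unwrap_or_default(),
    };
    // `success` is true unless the server flagged an error; the PR
    // notes the old code had this sense inverted.
    let success = !result.is_error.unwrap_or(false);
    (body, success)
}
```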
Jeremy Rose
57c498159a test: simplify tests in config.rs (#2586)
this is much easier to read, thanks @bolinfest for the suggestion.
2025-08-22 14:04:21 -07:00
Jeremy Rose
bbf42f4e12 improve performance of 'cargo test -p codex-tui' (#2593)
before:

```
$ time cargo test -p codex-tui -q
[...]
cargo test -p codex-tui -q  39.89s user 10.77s system 98% cpu 51.328 total
```

after:

```
$ time cargo test -p codex-tui -q
[...]
cargo test -p codex-tui -q  1.37s user 0.64s system 29% cpu 6.699 total
```

the major offenders were the textarea fuzz test and the custom_terminal
doctests. (i think the doctests were being recompiled every time which
made them extra slow?)
2025-08-22 14:03:58 -07:00
Dylan
6f0b499594 [config] Detect git worktrees for project trust (#2585)
## Summary
When resolving our current directory as a project, we want to be a
little bit more clever:
1. If we're in a sub-directory of a git repo, resolve our project
against the root of the git repo
2. If we're in a git worktree, resolve the project against the root of
the git repo

## Testing
- [x] Added unit tests
- [x] Confirmed locally with a git worktree (the one i was using for
this feature)
2025-08-22 13:54:51 -07:00
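A sketch of the resolution rule in #2585, shelling out to git (the real implementation may use different plumbing; the helper name is illustrative):

```rust
use std::path::{Path, PathBuf};
use std::process::Command;

/// Resolve the directory used as the "project" for trust decisions:
/// the repo root when inside a repo or worktree, else the cwd itself.
fn project_root(cwd: &Path) -> PathBuf {
    let output = Command::new("git")
        .args(["rev-parse", "--show-toplevel"])
        .current_dir(cwd)
        .output();
    match output {
        // `--show-toplevel` reports the checkout's root, which handles
        // sub-directories; for a linked worktree, the main repository
        // can additionally be located via `git rev-parse --git-common-dir`.
        Ok(out) if out.status.success() => {
            PathBuf::from(String::from_utf8_lossy(&out.stdout).trim_end())
        }
        _ => cwd.to_path_buf(),
    }
}
```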
Dylan
236c4f76a6 [apply_patch] freeform apply_patch tool (#2576)
## Summary
GPT-5 introduced the concept of [custom
tools](https://platform.openai.com/docs/guides/function-calling#custom-tools),
which allow the model to send a raw string result back, simplifying
json-escape issues. We are migrating gpt-5 to use this by default.

However, gpt-oss models do not support custom tools, only normal
functions. So we keep both tool definitions, and provide whichever one
the model family supports.

## Testing
- [x] Tested locally with various models
- [x] Unit tests pass
2025-08-22 13:42:34 -07:00
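A sketch of the dual-definition approach described in #2576, with illustrative types (`ToolSpec` and its variants are not the actual codex names):

```rust
use serde_json::json;

/// A tool definition offered to the model.
enum ToolSpec {
    /// Freeform "custom" tool: the model sends a raw string payload,
    /// sidestepping JSON-escaping issues (used for GPT-5).
    Custom { name: String, description: String },
    /// Classic JSON-schema function tool (the gpt-oss fallback).
    Function { name: String, parameters: serde_json::Value },
}

fn apply_patch_tool(model_supports_custom_tools: bool) -> ToolSpec {
    if model_supports_custom_tools {
        ToolSpec::Custom {
            name: "apply_patch".into(),
            description: "Apply a V4A patch to files in the workspace.".into(),
        }
    } else {
        ToolSpec::Function {
            name: "apply_patch".into(),
            parameters: json!({
                "type": "object",
                "properties": { "input": { "type": "string" } },
                "required": ["input"]
            }),
        }
    }
}
```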
Eric Traut
dc42ec0eb4 Add AuthManager and enhance GetAuthStatus command (#2577)
This PR adds a central `AuthManager` struct that manages the auth
information used across conversations and the MCP server. Prior to this,
each conversation and the MCP server got their own private snapshots of
the auth information, and changes to one (such as a logout or token
refresh) were not seen by others.

This is especially problematic when multiple instances of the CLI are
run. For example, consider the case where you start CLI 1 and log in to
ChatGPT account X and then start CLI 2 and log out and then log in to
ChatGPT account Y. The conversation in CLI 1 is still using account X,
but if you create a new conversation, it will suddenly (and
unexpectedly) switch to account Y.

With the `AuthManager`, auth information is read from disk at the time
the `ConversationManager` is constructed, and it is cached in memory.
All new conversations use this same auth information, as do any token
refreshes.

The `AuthManager` is also used by the MCP server's GetAuthStatus
command, which now returns the auth method currently used by the MCP
server.

This PR also includes an enhancement to the GetAuthStatus command. It
now accepts two new (optional) input parameters: `include_token` and
`refresh_token`. Callers can use this to request the in-use auth token
and can optionally request to refresh the token.

The PR also adds tests for the login and auth APIs that I recently added
to the MCP server.
2025-08-22 13:10:11 -07:00
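The shape of such a shared cache, sketched with standard-library primitives (the real `AuthManager` lives in codex-login; `AuthInfo` and the disk loader here are hypothetical stand-ins):

```rust
use std::sync::{Arc, RwLock};

#[derive(Clone, Default)]
struct AuthInfo { /* tokens, account, auth method, ... */ }

fn load_auth_from_disk() -> AuthInfo {
    AuthInfo::default() // placeholder for reading auth.json
}

struct AuthManager {
    cached: RwLock<AuthInfo>,
}

impl AuthManager {
    /// Read auth from disk once; every conversation shares the result.
    fn shared() -> Arc<Self> {
        Arc::new(Self { cached: RwLock::new(load_auth_from_disk()) })
    }

    fn current(&self) -> AuthInfo {
        self.cached.read().expect("auth lock poisoned").clone()
    }

    /// A token refresh or logout updates the shared cache, so all
    /// conversations observe it instead of keeping private snapshots.
    fn update(&self, fresh: AuthInfo) {
        *self.cached.write().expect("auth lock poisoned") = fresh;
    }
}
```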
Ahmed Ibrahim
cdc77c10fb Fix/tui windows multiline paste (#2544)
Introduce a minimal paste-burst heuristic in the chat composer so Enter
is treated as a newline during paste-like bursts (plain chars arriving
in very short intervals), avoiding premature submit after the first line
on Windows consoles that lack bracketed paste.

- Detect tight sequences of plain Char events; open a short window where
Enter inserts a newline instead of submitting.
- Extend the window on newline to handle blank lines in pasted content.
- No behavior change for terminals that already emit Event::Paste; no
OS/env toggles added.
2025-08-22 12:23:58 -07:00
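The heuristic reduces to a small timing check; a sketch (the window length and names are illustrative):

```rust
use std::time::{Duration, Instant};

/// Treat Enter as a newline while plain characters are arriving in
/// paste-like bursts, instead of submitting on the first line break.
struct PasteBurst {
    last_char: Option<Instant>,
    window: Duration,
}

impl PasteBurst {
    fn new() -> Self {
        Self { last_char: None, window: Duration::from_millis(10) }
    }

    /// Record a plain Char event.
    fn on_char(&mut self) {
        self.last_char = Some(Instant::now());
    }

    /// On Enter: true means "insert a newline", false means "submit".
    fn enter_inserts_newline(&mut self) -> bool {
        let in_burst =
            matches!(self.last_char, Some(t) if t.elapsed() < self.window);
        if in_burst {
            // Extend the window on newline so blank lines inside the
            // pasted content don't end the burst prematurely.
            self.on_char();
        }
        in_burst
    }
}
```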
pap-openai
c5d21a4564 ctrl+v image + @file accepts images (#1695)
Allow ctrl+v in the TUI for images; @file references that are images are
appended as raw files (and read by the model) rather than pasted as a
path that the model cannot read.

Re-used the same components and interface we're using for copying pasted
content in
72504f1d9c.
@aibrahim-oai as you've implemented this, mind having a look at this
one?


https://github.com/user-attachments/assets/c6c1153b-6b32-4558-b9a2-f8c57d2be710

---------

Co-authored-by: easong-openai <easong@openai.com>
Co-authored-by: Daniel Edrisian <dedrisian@openai.com>
Co-authored-by: Michael Bolin <mbolin@openai.com>
2025-08-22 17:05:43 +00:00
Jeremy Rose
59f6b1654f improve suspend behavior (#2569)
This is a somewhat roundabout way to fix the issue that pressing ^Z
would put the shell prompt in the wrong place (overwriting some of the
status area below the composer). While I'm at it, clean up the suspend
logic and fix some suspend-while-in-alt-screen behavior too.
2025-08-22 09:41:15 -07:00
vjain419
80b00a193e feat(gpt5): add model_verbosity for GPT‑5 via Responses API (#2108)
**Summary**
- Adds `model_verbosity` config (values: low, medium, high).
- Sends `text.verbosity` only for GPT‑5 family models via the Responses API.
- Updates docs and adds serialization tests.

**Motivation**
- GPT‑5 introduces a verbosity control to steer output length/detail without prompt surgery.
- Exposing it as a config knob keeps prompts stable and makes behavior explicit and repeatable.

**Changes**
- Config:
  - Added `Verbosity` enum (low|medium|high).
  - Added optional `model_verbosity` to `ConfigToml`, `Config`, and `ConfigProfile`.
- Request wiring:
  - Extended `ResponsesApiRequest` with optional `text` object.
  - Populates `text.verbosity` only when model family is `gpt-5`; omitted otherwise.
- Tests:
  - Verifies `text.verbosity` serializes when set and is omitted when not set.
- Docs:
  - Added “GPT‑5 Verbosity” section in `codex-rs/README.md`.
  - Added `model_verbosity` section to `codex-rs/config.md`.

**Usage**
- In `~/.codex/config.toml`:
  - `model = "gpt-5"`
  - `model_verbosity = "low"` (or `"medium"` default, `"high"`)
- CLI override example:
  - `codex -c model="gpt-5" -c model_verbosity="high"`

**API Impact**
- Requests to GPT‑5 via Responses API include: `text: { verbosity: "low|medium|high" }` when configured.
- For legacy models or Chat Completions providers, `text` is omitted.

**Backward Compatibility**
- Default behavior unchanged when `model_verbosity` is not set (server default “medium”).

**Testing**
- Added unit tests for serialization/omission of `text.verbosity`.
- Ran `cargo fmt` and `cargo test --all-features` (all green).

**Docs**
- `README.md`: new “GPT‑5 Verbosity” note under Config with example.
- `config.md`: new `model_verbosity` section.

**Out of Scope**
- No changes to temperature/top_p or other GPT‑5 parameters.
- No changes to Chat Completions wiring.

**Risks / Notes**
- If OpenAI changes the wire shape for verbosity, we may need to update `ResponsesApiRequest`.
- Behavior gated to `gpt-5` model family to avoid unexpected effects elsewhere.

**Checklist**
- [x] Code gated to GPT‑5 family only
- [x] Docs updated (`README.md`, `config.md`)
- [x] Tests added and passing
- [x] Formatting applied

Release note: Add `model_verbosity` config to control GPT‑5 output verbosity via the Responses API (low|medium|high).
2025-08-22 09:12:10 -07:00
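The request wiring amounts to an optional, conditionally populated field; a sketch with serde (struct and field names are illustrative simplifications):

```rust
use serde::Serialize;

#[derive(Serialize)]
#[serde(rename_all = "lowercase")]
enum Verbosity { Low, Medium, High }

#[derive(Serialize)]
struct TextControls { verbosity: Verbosity }

#[derive(Serialize)]
struct ResponsesApiRequest {
    model: String,
    /// Omitted from the JSON payload entirely when None, so legacy
    /// models and Chat Completions providers are unaffected.
    #[serde(skip_serializing_if = "Option::is_none")]
    text: Option<TextControls>,
}

fn build_request(model: &str, verbosity: Option<Verbosity>) -> ResponsesApiRequest {
    ResponsesApiRequest {
        model: model.to_string(),
        // Gate to the gpt-5 family to avoid unexpected effects elsewhere.
        text: if model.starts_with("gpt-5") {
            verbosity.map(|v| TextControls { verbosity: v })
        } else {
            None
        },
    }
}
```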
Jeremy Rose
76dc3f6054 show diff output in the pager (#2568)
this shows `/diff` output in an overlay like the transcript, instead of
dumping it into history.



https://github.com/user-attachments/assets/48e79b65-7f66-45dd-97b3-d5c627ac7349
2025-08-22 08:24:13 -07:00
Dylan
e4c275d615 [apply-patch] Clean up apply-patch tool definitions (#2539)
## Summary
We've experienced a bit of drift in system prompting for `apply_patch`:
- As pointed out in #2030 , our prettier formatting started altering
prompt.md in a few ways
- We introduced a separate markdown file for apply_patch instructions in
#993, but currently duplicate them in the prompt.md file
- We added a first-class apply_patch tool in #2303, which has yet
another definition

This PR starts to consolidate our logic in a few ways:
- We now only use
[`apply_patch_tool_instructions.md`](https://github.com/openai/codex/compare/dh--apply-patch-tool-definition?expand=1#diff-d4fffee5f85cb1975d3f66143a379e6c329de40c83ed5bf03ffd3829df985bea)
for system instructions
- We no longer include apply_patch system instructions if the tool is
specified

I'm leaving the definition in openai_tools.rs as duplicated text for now
because we're going to be iterating on the first-class tool soon.

## Testing
- [x] Added integration tests to verify prompt stability
- [x] Tested locally with several different models (gpt-5, gpt-oss,
o4-mini)
2025-08-21 20:07:41 -07:00
Dylan
9f71dcbf57 [shell_tool] Small updates to ensure shell consistency (#2571)
## Summary
Small update to hopefully improve some shell edge cases and make it
clearer to the model what the function is doing. Keeping `timeout` as an
alias means that calls using the previous name will still work.

## Test Plan
- [x] Tested locally, model still works
2025-08-21 19:58:07 -07:00
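The alias trick can be done declaratively with serde; a sketch with illustrative field names:

```rust
use serde::Deserialize;

#[derive(Deserialize)]
struct ShellToolCall {
    command: Vec<String>,
    /// Accepts both the new name and the previous `timeout` name, so
    /// older call shapes keep deserializing.
    #[serde(default, alias = "timeout")]
    timeout_ms: Option<u64>,
}

fn main() -> Result<(), serde_json::Error> {
    // A call using the old parameter name still parses.
    let call: ShellToolCall =
        serde_json::from_str(r#"{"command": ["ls"], "timeout": 5000}"#)?;
    assert_eq!(call.timeout_ms, Some(5000));
    Ok(())
}
```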
Jeremy Rose
750ca9e21d core: write explicit [projects] tables for trusted projects (#2523)
all of my trust_level settings in my ~/.codex/config.toml were on one
line.
2025-08-21 13:20:36 -07:00
Jeremy Rose
5fac7b2566 tweak thresholds for shimmer on non-true-color terminals (#2533)
https://github.com/user-attachments/assets/dc7bf820-eeec-4b78-aba9-231e1337921c
2025-08-21 11:44:18 -07:00
khai-oai
24c7be7da0 Update README.md (#2564)
Adding some notes about how MCP tool calls are not run within the
sandbox
2025-08-21 11:26:37 -07:00
Jeremy Rose
4b4aa2a774 tui: transcript mode updates live (#2562)
moves TranscriptApp to be an "overlay", and continues to pump AppEvents
while the transcript is active, forwarding all tui handling to the
transcript screen.
2025-08-21 11:17:29 -07:00
Jeremy Rose
16d16a4ddc refactor: move slash command handling into chatwidget (#2536)
no functional change, just moving the code that handles /foo into
chatwidget, since most commands just do things with chatwidget.
2025-08-21 10:36:58 -07:00
Jeremy Rose
9604671678 tui: show diff hunk headers to separate sections (#2488)
<img width="906" height="350" alt="Screenshot 2025-08-20 at 2 38 29 PM"
src="https://github.com/user-attachments/assets/272c43c2-dfa8-497f-afa0-cea31e26ca1f"
/>
2025-08-21 08:54:11 -07:00
Jeremy Rose
db934e438e read all AGENTS.md up to git root (#2532)
This updates our logic for AGENTS.md to match documented behavior, which
is to read all AGENTS.md files from cwd up to git root.
2025-08-21 08:52:17 -07:00
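The documented behavior is a simple upward walk; a sketch (it assumes the git root was discovered elsewhere):

```rust
use std::path::{Path, PathBuf};

/// Collect every AGENTS.md from the git root down to `cwd`,
/// outermost first.
fn collect_agents_md(cwd: &Path, git_root: &Path) -> Vec<PathBuf> {
    let mut found = Vec::new();
    let mut dir = Some(cwd);
    while let Some(d) = dir {
        let candidate = d.join("AGENTS.md");
        if candidate.is_file() {
            found.push(candidate);
        }
        if d == git_root {
            break; // stop at the repository root
        }
        dir = d.parent();
    }
    found.reverse(); // apply the git-root file first, cwd's last
    found
}
```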
Jeremy Rose
5f6e1af1a5 scroll instead of clear on boot (#2535)
this actually works fine already in iterm without this change, but
Terminal.app adds a bunch of excess whitespace when we clear all.


https://github.com/user-attachments/assets/c5bd1809-c2ed-4daa-a148-944d2df52876
2025-08-21 08:51:26 -07:00
easong-openai
8ad56be06e Parse and expose stream errors (#2540) 2025-08-21 01:15:24 -07:00
Dylan
d2b2a6d13a [prompt] xml-format EnvironmentContext (#2272)
## Summary
Before we land #2243, let's start printing environment_context in our
preferred format. This struct will evolve over time with new
information; XML gives us a format that is human-readable without too
much parsing, readable by the model, and extensible.

Also moves us over to an Option-based struct, so we can easily provide
diffs to the model.

## Testing
- [x] Updated tests to reflect new format
2025-08-20 23:45:16 -07:00
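A sketch of what the XML rendering might look like, with Option-based fields so future diffs only serialize what changed (field names are illustrative):

```rust
struct EnvironmentContext {
    cwd: Option<String>,
    sandbox_mode: Option<String>,
}

fn render(ctx: &EnvironmentContext) -> String {
    let mut out = String::from("<environment_context>\n");
    if let Some(cwd) = &ctx.cwd {
        out.push_str(&format!("  <cwd>{cwd}</cwd>\n"));
    }
    if let Some(mode) = &ctx.sandbox_mode {
        out.push_str(&format!("  <sandbox_mode>{mode}</sandbox_mode>\n"));
    }
    out.push_str("</environment_context>");
    out
}
```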
Gabriel Peal
74683bab91 Add a serde tag to ParsedItem (#2546) 2025-08-21 01:34:46 -04:00
Eric Traut
dacff9675a Added new auth-related methods and events to mcp server (#2496)
This PR adds the following:
* A getAuthStatus method on the mcp server. This returns the auth method
currently in use (chatgpt or apikey) or none if the user is not
authenticated. It also returns the "preferred auth method" which
reflects the `preferred_auth_method` value in the config.
* A logout method on the mcp server. If called, it logs out the user and
deletes the `auth.json` file — the same behavior in the cli's `/logout`
command.
* An `authStatusChange` event notification that is sent when the auth
status changes due to successful login or logout operations.
* Logic to pass command-line config overrides to the mcp server at
startup time. This allows use cases like `codex mcp -c
preferred_auth_method=apikey`.
2025-08-20 20:36:34 -07:00
Jeremy Rose
697b4ce100 tui: show upgrade banner in history (#2537)
previously the upgrade banner was disappearing into scrollback when we
cleared the screen to start the tui.
2025-08-20 19:41:49 -07:00
Jeremy Rose
9193eb6b53 show thinking in transcript (#2538)
record the full reasoning trace and show it in transcript mode
2025-08-20 17:09:46 -07:00
Jeremy Rose
e95cad1946 hide CoT by default; show headers in status indicator (#2316)
Plan is for full CoT summaries to be visible in a "transcript view" when
we implement that, but for now they're hidden.


https://github.com/user-attachments/assets/e8a1b0ef-8f2a-48ff-9625-9c3c67d92cdb
2025-08-20 16:58:56 -07:00
Jeremy Rose
2ec5a28528 add transcript mode (#2525)
this adds a new 'transcript mode' that shows the full event history in a
"pager"-style interface.


https://github.com/user-attachments/assets/52df7a14-adb2-4ea7-a0f9-7f5eb8235182
2025-08-20 16:57:35 -07:00
eddy-win
050b9baeb6 Bridge command generation to powershell when on Windows (#2319)
## What? Why? How?
- When running on Windows, codex often tries to invoke bash commands,
which commonly fail (unless WSL is installed)
- Fix: Detect if powershell is available and, if so, route commands to
it
- Also add a shell_name property to the environment context so codex
defaults to powershell commands when running in that environment

## Testing
- Tested within WSL and powershell (e.g. asked for the top 5 largest
files within a folder and validated that the generated commands were
powershell commands)
- Tested within Zsh
- Updated unit tests

---------

Co-authored-by: Eddy Escardo <eddy@openai.com>
2025-08-20 16:30:34 -07:00
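A sketch of the availability check (the lookup strategy here is illustrative; per the diffs below, the crate actually pulls in the `which` dependency on Windows):

```rust
use std::process::Command;

/// Prefer PowerShell on Windows when it is available, so generated
/// commands run natively instead of assuming bash exists.
fn default_windows_shell() -> Option<&'static str> {
    for shell in ["pwsh.exe", "powershell.exe"] {
        // `where.exe` is the Windows analogue of `which`.
        let found = Command::new("where.exe")
            .arg(shell)
            .output()
            .map(|o| o.status.success())
            .unwrap_or(false);
        if found {
            return Some(shell);
        }
    }
    None
}
```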
Michael Bolin
5ab30c73f3 fix: update build cache key in .github/workflows/codex.yml (#2534)
Change to match `.github/workflows/rust-ci.yml`, which was updated in
https://github.com/openai/codex/pull/2242:


250ae37c84/.github/workflows/rust-ci.yml (L120-L128)
2025-08-20 15:57:33 -07:00
ae
250ae37c84 tui: link docs when no MCP servers configured (#2516) 2025-08-20 14:58:04 -07:00
121 changed files with 7899 additions and 1696 deletions

View File

@@ -52,7 +52,7 @@ jobs:
~/.cargo/registry/cache/
~/.cargo/git/db/
${{ github.workspace }}/codex-rs/target/
-key: cargo-ubuntu-24.04-x86_64-unknown-linux-gnu-${{ hashFiles('**/Cargo.lock') }}
+key: cargo-ubuntu-24.04-x86_64-unknown-linux-gnu-dev-${{ hashFiles('**/Cargo.lock') }}
# Note it is possible that the `verify` step internal to Run Codex will
# fail, in which case the work to setup the repo was worthless :(

View File

@@ -1,3 +1,7 @@
/codex-cli/dist
/codex-cli/node_modules
pnpm-lock.yaml
prompt.md
*_prompt.md
*_instructions.md

View File

@@ -2,7 +2,7 @@
In the codex-rs folder where the rust code lives:
-- Crate names are prefixed with `codex-`. For examole, the `core` folder's crate is named `codex-core`
+- Crate names are prefixed with `codex-`. For example, the `core` folder's crate is named `codex-core`
- When using format! and you can inline variables into {}, always do that.
- Never add or modify any code related to `CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR` or `CODEX_SANDBOX_ENV_VAR`.
- You operate in a sandbox where `CODEX_SANDBOX_NETWORK_DISABLED=1` will be set whenever you use the `shell` tool. Any existing code that uses `CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR` was authored with this fact in mind. It is often used to early exit out of tests that the author knew you would not be able to run given your sandbox limitations.

View File

@@ -383,6 +383,13 @@ base_url = "http://my-ollama.example.com:11434/v1"
### Platform sandboxing details
By default, Codex CLI runs code and shell commands inside a restricted sandbox to protect your system.
> [!IMPORTANT]
> Not all tool calls are sandboxed. Specifically, **trusted Model Context Protocol (MCP) tool calls** are executed outside of the sandbox.
> This is intentional: MCP tools are explicitly configured and trusted by you, and they often need to connect to **external applications or services** (e.g. issue trackers, databases, messaging systems).
> Running them outside the sandbox allows Codex to integrate with these external systems without being blocked by sandbox restrictions.
The mechanism Codex uses to implement the sandbox policy depends on your OS:
- **macOS 12+** uses **Apple Seatbelt** and runs commands using `sandbox-exec` with a profile (`-p`) that corresponds to the `--sandbox` that was specified.

codex-rs/Cargo.lock (generated, 271 lines changed)
View File

@@ -186,6 +186,26 @@ version = "1.4.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "dde20b3d026af13f561bdd0f15edf01fc734f0dafcedbaf42bba506a9517f223"
[[package]]
name = "arboard"
version = "3.6.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "55f533f8e0af236ffe5eb979b99381df3258853f00ba2e44b6e1955292c75227"
dependencies = [
"clipboard-win",
"image",
"log",
"objc2",
"objc2-app-kit",
"objc2-core-foundation",
"objc2-core-graphics",
"objc2-foundation",
"parking_lot",
"percent-encoding",
"windows-sys 0.59.0",
"x11rb",
]
[[package]]
name = "arg_enum_proc_macro"
version = "0.3.4"
@@ -711,6 +731,7 @@ dependencies = [
"mime_guess",
"openssl-sys",
"os_info",
"portable-pty",
"predicates",
"pretty_assertions",
"rand 0.9.2",
@@ -737,6 +758,7 @@ dependencies = [
"tree-sitter-bash",
"uuid",
"walkdir",
"which",
"whoami",
"wildmatch",
"wiremock",
@@ -753,6 +775,7 @@ dependencies = [
"codex-arg0",
"codex-common",
"codex-core",
"codex-login",
"codex-ollama",
"codex-protocol",
"core_test_support",
@@ -822,6 +845,7 @@ version = "0.0.0"
dependencies = [
"base64 0.22.1",
"chrono",
"codex-protocol",
"pretty_assertions",
"rand 0.8.5",
"reqwest",
@@ -900,13 +924,16 @@ dependencies = [
name = "codex-protocol"
version = "0.0.0"
dependencies = [
"base64 0.22.1",
"mcp-types",
"mime_guess",
"pretty_assertions",
"serde",
"serde_bytes",
"serde_json",
"strum 0.27.2",
"strum_macros 0.27.2",
"tracing",
"ts-rs",
"uuid",
]
@@ -926,6 +953,7 @@ name = "codex-tui"
version = "0.0.0"
dependencies = [
"anyhow",
"arboard",
"async-stream",
"base64 0.22.1",
"chrono",
@@ -960,6 +988,7 @@ dependencies = [
"strum 0.27.2",
"strum_macros 0.27.2",
"supports-color",
"tempfile",
"textwrap 0.16.2",
"tokio",
"tokio-stream",
@@ -1408,6 +1437,16 @@ dependencies = [
"winapi",
]
[[package]]
name = "dispatch2"
version = "0.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "89a09f22a6c6069a18470eb92d2298acf25463f14256d24778e1230d789a2aec"
dependencies = [
"bitflags 2.9.1",
"objc2",
]
[[package]]
name = "display_container"
version = "0.9.0"
@@ -1441,6 +1480,12 @@ version = "0.15.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1aaf95b3e5c8f23aa320147307562d361db0ae0d51242340f558153b4eb2439b"
[[package]]
name = "downcast-rs"
version = "1.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "75b325c5dbd37f80359721ad39aca5a29fb04c89279657cffdda8736d0c0b9d2"
[[package]]
name = "dupe"
version = "0.9.1"
@@ -1686,6 +1731,17 @@ dependencies = [
"simd-adler32",
]
[[package]]
name = "filedescriptor"
version = "0.8.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e40758ed24c9b2eeb76c35fb0aebc66c626084edd827e07e1552279814c6682d"
dependencies = [
"libc",
"thiserror 1.0.69",
"winapi",
]
[[package]]
name = "fixedbitset"
version = "0.4.2"
@@ -1861,6 +1917,16 @@ dependencies = [
"version_check",
]
[[package]]
name = "gethostname"
version = "0.4.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0176e0459c2e4a1fe232f984bca6890e681076abb9934f6cea7c326f3fc47818"
dependencies = [
"libc",
"windows-targets 0.48.5",
]
[[package]]
name = "getopts"
version = "0.2.23"
@@ -3057,6 +3123,42 @@ dependencies = [
"objc2-encode",
]
[[package]]
name = "objc2-app-kit"
version = "0.3.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e6f29f568bec459b0ddff777cec4fe3fd8666d82d5a40ebd0ff7e66134f89bcc"
dependencies = [
"bitflags 2.9.1",
"objc2",
"objc2-core-graphics",
"objc2-foundation",
]
[[package]]
name = "objc2-core-foundation"
version = "0.3.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1c10c2894a6fed806ade6027bcd50662746363a9589d3ec9d9bef30a4e4bc166"
dependencies = [
"bitflags 2.9.1",
"dispatch2",
"objc2",
]
[[package]]
name = "objc2-core-graphics"
version = "0.3.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "989c6c68c13021b5c2d6b71456ebb0f9dc78d752e86a98da7c716f4f9470f5a4"
dependencies = [
"bitflags 2.9.1",
"dispatch2",
"objc2",
"objc2-core-foundation",
"objc2-io-surface",
]
[[package]]
name = "objc2-encode"
version = "4.1.0"
@@ -3071,6 +3173,18 @@ checksum = "900831247d2fe1a09a683278e5384cfb8c80c79fe6b166f9d14bfdde0ea1b03c"
dependencies = [
"bitflags 2.9.1",
"objc2",
"objc2-core-foundation",
]
[[package]]
name = "objc2-io-surface"
version = "0.3.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7282e9ac92529fa3457ce90ebb15f4ecbc383e8338060960760fa2cf75420c3c"
dependencies = [
"bitflags 2.9.1",
"objc2",
"objc2-core-foundation",
]
[[package]]
@@ -3343,6 +3457,27 @@ dependencies = [
"portable-atomic",
]
[[package]]
name = "portable-pty"
version = "0.9.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b4a596a2b3d2752d94f51fac2d4a96737b8705dddd311a32b9af47211f08671e"
dependencies = [
"anyhow",
"bitflags 1.3.2",
"downcast-rs",
"filedescriptor",
"lazy_static",
"libc",
"log",
"nix",
"serial2",
"shared_library",
"shell-words",
"winapi",
"winreg",
]
[[package]]
name = "potential_utf"
version = "0.1.2"
@@ -3792,9 +3927,9 @@ checksum = "ba39f3699c378cd8970968dcbff9c43159ea4cfbd88d43c00b22f2ef10a435d2"
[[package]]
name = "reqwest"
version = "0.12.22"
version = "0.12.23"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "cbc931937e6ca3a06e3b6c0aa7841849b160a90351d6ab467a8b9b9959767531"
checksum = "d429f34c8092b2d42c7c93cec323bb4adeb7c67698f70839adec842ec10c7ceb"
dependencies = [
"base64 0.22.1",
"bytes",
@@ -4186,9 +4321,9 @@ dependencies = [
[[package]]
name = "serde_json"
version = "1.0.142"
version = "1.0.143"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "030fedb782600dcbd6f02d479bf0d817ac3bb40d644745b769d6a96bc3afc5a7"
checksum = "d401abef1d108fbd9cbaebc3e46611f4b1021f714a0597a71f41ee463f5f4a5a"
dependencies = [
"indexmap 2.10.0",
"itoa",
@@ -4270,6 +4405,17 @@ dependencies = [
"syn 2.0.104",
]
[[package]]
name = "serial2"
version = "0.2.31"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "26e1e5956803a69ddd72ce2de337b577898801528749565def03515f82bad5bb"
dependencies = [
"cfg-if",
"libc",
"winapi",
]
[[package]]
name = "sha1"
version = "0.10.6"
@@ -4301,6 +4447,22 @@ dependencies = [
"lazy_static",
]
[[package]]
name = "shared_library"
version = "0.1.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5a9e7e0f2bfae24d8a5b5a66c5b257a83c7412304311512a0c054cd5e619da11"
dependencies = [
"lazy_static",
"libc",
]
[[package]]
name = "shell-words"
version = "1.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "24188a676b6ae68c3b2cb3a01be17fbf7240ce009799bb56d5b1409051e78fde"
[[package]]
name = "shlex"
version = "1.3.0"
@@ -5599,6 +5761,18 @@ version = "0.1.10"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a751b3277700db47d3e574514de2eced5e54dc8a5436a3bf7a0b248b2cee16f3"
[[package]]
name = "which"
version = "6.0.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b4ee928febd44d98f2f459a4a79bd4d928591333a494a10a868418ac1b39cf1f"
dependencies = [
"either",
"home",
"rustix 0.38.44",
"winsafe",
]
[[package]]
name = "whoami"
version = "1.6.0"
@@ -5832,6 +6006,21 @@ dependencies = [
"windows_x86_64_msvc 0.42.2",
]
[[package]]
name = "windows-targets"
version = "0.48.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9a2fa6e2155d7247be68c096456083145c183cbbbc2764150dda45a87197940c"
dependencies = [
"windows_aarch64_gnullvm 0.48.5",
"windows_aarch64_msvc 0.48.5",
"windows_i686_gnu 0.48.5",
"windows_i686_msvc 0.48.5",
"windows_x86_64_gnu 0.48.5",
"windows_x86_64_gnullvm 0.48.5",
"windows_x86_64_msvc 0.48.5",
]
[[package]]
name = "windows-targets"
version = "0.52.6"
@@ -5870,6 +6059,12 @@ version = "0.42.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "597a5118570b68bc08d8d59125332c54f1ba9d9adeedeef5b99b02ba2b0698f8"
[[package]]
name = "windows_aarch64_gnullvm"
version = "0.48.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2b38e32f0abccf9987a4e3079dfb67dcd799fb61361e53e2882c3cbaf0d905d8"
[[package]]
name = "windows_aarch64_gnullvm"
version = "0.52.6"
@@ -5888,6 +6083,12 @@ version = "0.42.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e08e8864a60f06ef0d0ff4ba04124db8b0fb3be5776a5cd47641e942e58c4d43"
[[package]]
name = "windows_aarch64_msvc"
version = "0.48.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "dc35310971f3b2dbbf3f0690a219f40e2d9afcf64f9ab7cc1be722937c26b4bc"
[[package]]
name = "windows_aarch64_msvc"
version = "0.52.6"
@@ -5906,6 +6107,12 @@ version = "0.42.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c61d927d8da41da96a81f029489353e68739737d3beca43145c8afec9a31a84f"
[[package]]
name = "windows_i686_gnu"
version = "0.48.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a75915e7def60c94dcef72200b9a8e58e5091744960da64ec734a6c6e9b3743e"
[[package]]
name = "windows_i686_gnu"
version = "0.52.6"
@@ -5936,6 +6143,12 @@ version = "0.42.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "44d840b6ec649f480a41c8d80f9c65108b92d89345dd94027bfe06ac444d1060"
[[package]]
name = "windows_i686_msvc"
version = "0.48.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8f55c233f70c4b27f66c523580f78f1004e8b5a8b659e05a4eb49d4166cca406"
[[package]]
name = "windows_i686_msvc"
version = "0.52.6"
@@ -5954,6 +6167,12 @@ version = "0.42.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8de912b8b8feb55c064867cf047dda097f92d51efad5b491dfb98f6bbb70cb36"
[[package]]
name = "windows_x86_64_gnu"
version = "0.48.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "53d40abd2583d23e4718fddf1ebec84dbff8381c07cae67ff7768bbf19c6718e"
[[package]]
name = "windows_x86_64_gnu"
version = "0.52.6"
@@ -5972,6 +6191,12 @@ version = "0.42.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "26d41b46a36d453748aedef1486d5c7a85db22e56aff34643984ea85514e94a3"
[[package]]
name = "windows_x86_64_gnullvm"
version = "0.48.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0b7b52767868a23d5bab768e390dc5f5c55825b6d30b86c844ff2dc7414044cc"
[[package]]
name = "windows_x86_64_gnullvm"
version = "0.52.6"
@@ -5990,6 +6215,12 @@ version = "0.42.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9aec5da331524158c6d1a4ac0ab1541149c0b9505fde06423b02f5ef0106b9f0"
[[package]]
name = "windows_x86_64_msvc"
version = "0.48.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ed94fce61571a4006852b7389a063ab983c02eb1bb37b47f8272ce92d06d9538"
[[package]]
name = "windows_x86_64_msvc"
version = "0.52.6"
@@ -6011,6 +6242,21 @@ dependencies = [
"memchr",
]
[[package]]
name = "winreg"
version = "0.10.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "80d0f4e272c85def139476380b12f9ac60926689dd2e01d4923222f40580869d"
dependencies = [
"winapi",
]
[[package]]
name = "winsafe"
version = "0.0.19"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d135d17ab770252ad95e9a872d365cf3090e3be864a34ab46f48555993efc904"
[[package]]
name = "wiremock"
version = "0.6.4"
@@ -6050,6 +6296,23 @@ version = "0.6.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ea2f10b9bb0928dfb1b42b65e1f9e36f7f54dbdf08457afefb38afcdec4fa2bb"
[[package]]
name = "x11rb"
version = "0.13.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5d91ffca73ee7f68ce055750bf9f6eca0780b8c85eff9bc046a3b0da41755e12"
dependencies = [
"gethostname",
"rustix 0.38.44",
"x11rb-protocol",
]
[[package]]
name = "x11rb-protocol"
version = "0.13.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ec107c4503ea0b4a98ef47356329af139c0a4f7750e621cf2973cd3385ebcb3d"
[[package]]
name = "yaml-rust"
version = "0.4.5"

View File

@@ -43,6 +43,12 @@ To run Codex non-interactively, run `codex exec PROMPT` (you can also pass the p
Typing `@` triggers a fuzzy-filename search over the workspace root. Use up/down to select among the results and Tab or Enter to replace the `@` with the selected path. You can use Esc to cancel the search.
### EscEsc to edit a previous message
When the chat composer is empty, press Esc to prime “backtrack” mode. Press Esc again to open a transcript preview highlighting the last user message; press Esc repeatedly to step to older user messages. Press Enter to confirm and Codex will fork the conversation from that point, trim the visible transcript accordingly, and prefill the composer with the selected user message so you can edit and resubmit it.
In the transcript preview, the footer shows an `Esc edit prev` hint while editing is active.
### `--cd`/`-C` flag
Sometimes it is not convenient to `cd` to the directory you want Codex to use as the "working root" before running Codex. Fortunately, `codex` supports a `--cd` option so you can specify whatever folder you want. You can confirm that Codex is honoring `--cd` by double-checking the **workdir** it reports in the TUI at the start of a new session.

View File

@@ -1,17 +1,23 @@
To edit files, ALWAYS use the `shell` tool with `apply_patch` CLI. `apply_patch` effectively allows you to execute a diff/patch against a file, but the format of the diff specification is unique to this task, so pay careful attention to these instructions. To use the `apply_patch` CLI, you should call the shell tool with the following structure:
## `apply_patch`
```bash
{"cmd": ["apply_patch", "<<'EOF'\\n*** Begin Patch\\n[YOUR_PATCH]\\n*** End Patch\\nEOF\\n"], "workdir": "..."}
```
Use the `apply_patch` shell command to edit files.
Your patch language is a stripped-down, file-oriented diff format designed to be easy to parse and safe to apply. You can think of it as a high-level envelope:
Where [YOUR_PATCH] is the actual content of your patch, specified in the following V4A diff format.
*** Begin Patch
[ one or more file sections ]
*** End Patch
*** [ACTION] File: [path/to/file] -> ACTION can be one of Add, Update, or Delete.
For each snippet of code that needs to be changed, repeat the following:
[context_before] -> See below for further instructions on context.
- [old_code] -> Precede the old code with a minus sign.
+ [new_code] -> Precede the new, replacement code with a plus sign.
[context_after] -> See below for further instructions on context.
Within that envelope, you get a sequence of file operations.
You MUST include a header to specify the action you are taking.
Each operation starts with one of three headers:
*** Add File: <path> - create a new file. Every following line is a + line (the initial contents).
*** Delete File: <path> - remove an existing file. Nothing follows.
*** Update File: <path> - patch an existing file in place (optionally with a rename).
May be immediately followed by *** Move to: <new path> if you want to rename the file.
Then one or more “hunks”, each introduced by @@ (optionally followed by a hunk header).
Within a hunk each line starts with:
For instructions on [context_before] and [context_after]:
- By default, show 3 lines of code immediately above and 3 lines immediately below each change. If a change is within 3 lines of a previous change, do NOT duplicate the first change's [context_after] lines in the second change's [context_before] lines.
@@ -25,16 +31,45 @@ For instructions on [context_before] and [context_after]:
- If a code block is repeated so many times in a class or function such that even a single `@@` statement and 3 lines of context cannot uniquely identify the snippet of code, you can use multiple `@@` statements to jump to the right context. For instance:
@@ class BaseClass
@@ def method():
@@ def method():
[3 lines of pre-context]
- [old_code]
+ [new_code]
[3 lines of post-context]
Note, then, that we do not use line numbers in this diff format, as the context is enough to uniquely identify code. An example of a message that you might pass as "input" to this function, in order to apply a patch, is shown below.
The full grammar definition is below:
Patch := Begin { FileOp } End
Begin := "*** Begin Patch" NEWLINE
End := "*** End Patch" NEWLINE
FileOp := AddFile | DeleteFile | UpdateFile
AddFile := "*** Add File: " path NEWLINE { "+" line NEWLINE }
DeleteFile := "*** Delete File: " path NEWLINE
UpdateFile := "*** Update File: " path NEWLINE [ MoveTo ] { Hunk }
MoveTo := "*** Move to: " newPath NEWLINE
Hunk := "@@" [ header ] NEWLINE { HunkLine } [ "*** End of File" NEWLINE ]
HunkLine := (" " | "-" | "+") text NEWLINE
A full patch can combine several operations:
*** Begin Patch
*** Add File: hello.txt
+Hello world
*** Update File: src/app.py
*** Move to: src/main.py
@@ def greet():
-print("Hi")
+print("Hello, world!")
*** Delete File: obsolete.txt
*** End Patch
It is important to remember:
- You must include a header with your intended action (Add/Delete/Update)
- You must prefix new lines with `+` even when creating a new file
- File references can only be relative, NEVER ABSOLUTE.
You can invoke apply_patch like:
```bash
{"cmd": ["apply_patch", "<<'EOF'\\n*** Begin Patch\\n*** Update File: pygorithm/searching/binary_search.py\\n@@ class BaseClass\\n@@ def search():\\n- pass\\n+ raise NotImplementedError()\\n@@ class Subclass\\n@@ def search():\\n- pass\\n+ raise NotImplementedError()\\n*** End Patch\\nEOF\\n"], "workdir": "..."}
```
File references can only be relative, NEVER ABSOLUTE. After the apply_patch command is run, it will always say "Done!", regardless of whether the patch was successfully applied or not. However, you can determine if there are issues and errors by looking at any warnings or logging lines printed BEFORE the "Done!" is output.
shell {"command":["apply_patch","*** Begin Patch\n*** Add File: hello.txt\n+Hello, world!\n*** End Patch\n"]}
```

View File

@@ -159,7 +159,7 @@ async fn cli_main(codex_linux_sandbox_exe: Option<PathBuf>) -> anyhow::Result<()
codex_exec::run_main(exec_cli, codex_linux_sandbox_exe).await?;
}
Some(Subcommand::Mcp) => {
-codex_mcp_server::run_main(codex_linux_sandbox_exe).await?;
+codex_mcp_server::run_main(codex_linux_sandbox_exe, cli.config_overrides).await?;
}
Some(Subcommand::Login(mut login_cli)) => {
prepend_config_flags(&mut login_cli.config_overrides, cli.config_overrides);

View File

@@ -9,6 +9,7 @@ use codex_core::config::ConfigOverrides;
use codex_core::protocol::Event;
use codex_core::protocol::EventMsg;
use codex_core::protocol::Submission;
+use codex_login::AuthManager;
use tokio::io::AsyncBufReadExt;
use tokio::io::BufReader;
use tracing::error;
@@ -36,7 +37,10 @@ pub async fn run_main(opts: ProtoCli) -> anyhow::Result<()> {
let config = Config::load_with_cli_overrides(overrides_vec, ConfigOverrides::default())?;
// Use conversation_manager API to start a conversation
-let conversation_manager = ConversationManager::default();
+let conversation_manager = ConversationManager::new(AuthManager::shared(
+    config.codex_home.clone(),
+    config.preferred_auth_method,
+));
let NewConversation {
conversation_id: _,
conversation,

View File

@@ -243,6 +243,25 @@ To disable reasoning summaries, set `model_reasoning_summary` to `"none"` in you
model_reasoning_summary = "none" # disable reasoning summaries
```
## model_verbosity
Controls output length/detail on GPT-5 family models when using the Responses API. Supported values:
- `"low"`
- `"medium"` (default when omitted)
- `"high"`
When set, Codex includes a `text` object in the request payload with the configured verbosity, for example: `"text": { "verbosity": "low" }`.
Example:
```toml
model = "gpt-5"
model_verbosity = "low"
```
Note: This applies only to providers using the Responses API. Chat Completions providers are unaffected.
## model_supports_reasoning_summaries
By default, `reasoning` is only set on requests to OpenAI models that are known to support it. To force `reasoning` to be set on requests to the current model, set the following in `config.toml`:

View File

@@ -28,6 +28,7 @@ libc = "0.2.175"
mcp-types = { path = "../mcp-types" }
mime_guess = "2.0"
os_info = "3.12.0"
+portable-pty = "0.9.0"
rand = "0.9"
regex-lite = "0.1.6"
reqwest = { version = "0.12", features = ["json", "stream"] }
@@ -71,6 +72,9 @@ openssl-sys = { version = "*", features = ["vendored"] }
[target.aarch64-unknown-linux-musl.dependencies]
openssl-sys = { version = "*", features = ["vendored"] }
+[target.'cfg(target_os = "windows")'.dependencies]
+which = "6"
[dev-dependencies]
assert_cmd = "2"
core_test_support = { path = "tests/common" }

View File

@@ -270,67 +270,6 @@ When using the shell, you must adhere to the following guidelines:
- When searching for text or files, prefer using `rg` or `rg --files` respectively because `rg` is much faster than alternatives like `grep`. (If the `rg` command is not found, then use alternatives.)
- Read files in chunks with a max chunk size of 250 lines. Do not use python scripts to attempt to output larger chunks of a file. Command line output will be truncated after 10 kilobytes or 256 lines of output, regardless of the command used.
## `apply_patch`
Your patch language is a stripped-down, file-oriented diff format designed to be easy to parse and safe to apply. You can think of it as a high-level envelope:
*** Begin Patch
[ one or more file sections ]
*** End Patch
Within that envelope, you get a sequence of file operations.
You MUST include a header to specify the action you are taking.
Each operation starts with one of three headers:
*** Add File: <path> - create a new file. Every following line is a + line (the initial contents).
*** Delete File: <path> - remove an existing file. Nothing follows.
*** Update File: <path> - patch an existing file in place (optionally with a rename).
May be immediately followed by *** Move to: <new path> if you want to rename the file.
Then one or more “hunks”, each introduced by @@ (optionally followed by a hunk header).
Within a hunk each line starts with:
+ for inserted text,
- for removed text, or
space ( ) for context.
At the end of a truncated hunk you can emit *** End of File.
Patch := Begin { FileOp } End
Begin := "*** Begin Patch" NEWLINE
End := "*** End Patch" NEWLINE
FileOp := AddFile | DeleteFile | UpdateFile
AddFile := "*** Add File: " path NEWLINE { "+" line NEWLINE }
DeleteFile := "*** Delete File: " path NEWLINE
UpdateFile := "*** Update File: " path NEWLINE [ MoveTo ] { Hunk }
MoveTo := "*** Move to: " newPath NEWLINE
Hunk := "@@" [ header ] NEWLINE { HunkLine } [ "*** End of File" NEWLINE ]
HunkLine := (" " | "-" | "+") text NEWLINE
A full patch can combine several operations:
*** Begin Patch
*** Add File: hello.txt
+Hello world
*** Update File: src/app.py
*** Move to: src/main.py
@@ def greet():
-print("Hi")
+print("Hello, world!")
*** Delete File: obsolete.txt
*** End Patch
It is important to remember:
- You must include a header with your intended action (Add/Delete/Update)
- You must prefix new lines with `+` even when creating a new file
You can invoke apply_patch like:
```
shell {"command":["apply_patch","*** Begin Patch\n*** Add File: hello.txt\n+Hello, world!\n*** End Patch\n"]}
```
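For illustration, a minimal Rust sketch that classifies individual patch lines according to the grammar above. This is a hypothetical reading aid, not the actual parser from the `codex_apply_patch` crate:
```rust
/// Hypothetical line classifier for the apply_patch envelope; a reading aid
/// only, not the real `codex_apply_patch` implementation.
#[derive(Debug, PartialEq)]
enum PatchLine<'a> {
    Begin,
    End,
    EndOfFile,
    AddFile(&'a str),
    DeleteFile(&'a str),
    UpdateFile(&'a str),
    MoveTo(&'a str),
    HunkHeader(&'a str), // text after "@@", may be empty
    Insert(&'a str),     // "+" line
    Remove(&'a str),     // "-" line
    Context(&'a str),    // " " line
}

fn classify(line: &str) -> Option<PatchLine<'_>> {
    match line {
        "*** Begin Patch" => Some(PatchLine::Begin),
        "*** End Patch" => Some(PatchLine::End),
        "*** End of File" => Some(PatchLine::EndOfFile),
        _ => {
            if let Some(p) = line.strip_prefix("*** Add File: ") {
                Some(PatchLine::AddFile(p))
            } else if let Some(p) = line.strip_prefix("*** Delete File: ") {
                Some(PatchLine::DeleteFile(p))
            } else if let Some(p) = line.strip_prefix("*** Update File: ") {
                Some(PatchLine::UpdateFile(p))
            } else if let Some(p) = line.strip_prefix("*** Move to: ") {
                Some(PatchLine::MoveTo(p))
            } else if let Some(h) = line.strip_prefix("@@") {
                Some(PatchLine::HunkHeader(h.trim()))
            } else if let Some(t) = line.strip_prefix('+') {
                Some(PatchLine::Insert(t))
            } else if let Some(t) = line.strip_prefix('-') {
                Some(PatchLine::Remove(t))
            } else if let Some(t) = line.strip_prefix(' ') {
                Some(PatchLine::Context(t))
            } else {
                None // malformed line; a real parser would report a diagnostic
            }
        }
    }
}

fn main() {
    assert_eq!(
        classify("*** Add File: hello.txt"),
        Some(PatchLine::AddFile("hello.txt"))
    );
    assert_eq!(classify("+Hello world"), Some(PatchLine::Insert("Hello world")));
}
```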
## `update_plan`
A tool named `update_plan` is available to you. You can use it to keep an up-to-date, step-by-step plan for the task.

View File

@@ -1,13 +1,13 @@
use crate::codex::Session;
use crate::codex::TurnContext;
use crate::models::FunctionCallOutputPayload;
use crate::models::ResponseInputItem;
use crate::protocol::FileChange;
use crate::protocol::ReviewDecision;
use crate::safety::SafetyCheck;
use crate::safety::assess_patch_safety;
use codex_apply_patch::ApplyPatchAction;
use codex_apply_patch::ApplyPatchFileChange;
use codex_protocol::models::FunctionCallOutputPayload;
use codex_protocol::models::ResponseInputItem;
use std::collections::HashMap;
use std::path::PathBuf;

View File

@@ -22,11 +22,11 @@ use crate::client_common::ResponseStream;
use crate::error::CodexErr;
use crate::error::Result;
use crate::model_family::ModelFamily;
use crate::models::ContentItem;
use crate::models::ReasoningItemContent;
use crate::models::ResponseItem;
use crate::openai_tools::create_tools_json_for_chat_completions_api;
use crate::util::backoff;
use codex_protocol::models::ContentItem;
use codex_protocol::models::ReasoningItemContent;
use codex_protocol::models::ResponseItem;
/// Implementation for the classic Chat Completions API.
pub(crate) async fn stream_chat_completions(
@@ -102,6 +102,33 @@ pub(crate) async fn stream_chat_completions(
"content": output.content,
}));
}
ResponseItem::CustomToolCall {
id,
call_id: _,
name,
input,
status: _,
} => {
messages.push(json!({
"role": "assistant",
"content": null,
"tool_calls": [{
"id": id,
"type": "custom",
"custom": {
"name": name,
"input": input,
}
}]
}));
}
ResponseItem::CustomToolCallOutput { call_id, output } => {
messages.push(json!({
"role": "tool",
"tool_call_id": call_id,
"content": output,
}));
}
ResponseItem::Reasoning { .. } | ResponseItem::Other => {
// Omit these items from the conversation history.
continue;
@@ -482,16 +509,19 @@ where
// do NOT emit yet. Forward any other item (e.g. FunctionCall) right
// away so downstream consumers see it.
let is_assistant_delta = matches!(&item, crate::models::ResponseItem::Message { role, .. } if role == "assistant");
let is_assistant_delta = matches!(&item, codex_protocol::models::ResponseItem::Message { role, .. } if role == "assistant");
if is_assistant_delta {
// Only use the final assistant message if we have not
// seen any deltas; otherwise, deltas already built the
// cumulative text and this would duplicate it.
if this.cumulative.is_empty()
&& let crate::models::ResponseItem::Message { content, .. } = &item
&& let codex_protocol::models::ResponseItem::Message { content, .. } =
&item
&& let Some(text) = content.iter().find_map(|c| match c {
crate::models::ContentItem::OutputText { text } => Some(text),
codex_protocol::models::ContentItem::OutputText { text } => {
Some(text)
}
_ => None,
})
{
@@ -515,26 +545,27 @@ where
if !this.cumulative_reasoning.is_empty()
&& matches!(this.mode, AggregateMode::AggregatedOnly)
{
let aggregated_reasoning = crate::models::ResponseItem::Reasoning {
id: String::new(),
summary: Vec::new(),
content: Some(vec![
crate::models::ReasoningItemContent::ReasoningText {
text: std::mem::take(&mut this.cumulative_reasoning),
},
]),
encrypted_content: None,
};
let aggregated_reasoning =
codex_protocol::models::ResponseItem::Reasoning {
id: String::new(),
summary: Vec::new(),
content: Some(vec![
codex_protocol::models::ReasoningItemContent::ReasoningText {
text: std::mem::take(&mut this.cumulative_reasoning),
},
]),
encrypted_content: None,
};
this.pending
.push_back(ResponseEvent::OutputItemDone(aggregated_reasoning));
emitted_any = true;
}
if !this.cumulative.is_empty() {
let aggregated_message = crate::models::ResponseItem::Message {
let aggregated_message = codex_protocol::models::ResponseItem::Message {
id: None,
role: "assistant".to_string(),
content: vec![crate::models::ContentItem::OutputText {
content: vec![codex_protocol::models::ContentItem::OutputText {
text: std::mem::take(&mut this.cumulative),
}],
};
@@ -592,6 +623,12 @@ where
Poll::Ready(Some(Ok(ResponseEvent::ReasoningSummaryPartAdded))) => {
continue;
}
Poll::Ready(Some(Ok(ResponseEvent::WebSearchCallBegin { .. }))) => {
return Poll::Ready(Some(Ok(ResponseEvent::WebSearchCallBegin {
call_id: String::new(),
query: None,
})));
}
}
}
}

View File

@@ -4,8 +4,8 @@ use std::sync::OnceLock;
use std::time::Duration;
use bytes::Bytes;
use codex_login::AuthManager;
use codex_login::AuthMode;
use codex_login::CodexAuth;
use eventsource_stream::Eventsource;
use futures::prelude::*;
use regex_lite::Regex;
@@ -28,6 +28,7 @@ use crate::client_common::ResponseEvent;
use crate::client_common::ResponseStream;
use crate::client_common::ResponsesApiRequest;
use crate::client_common::create_reasoning_param_for_request;
use crate::client_common::create_text_param_for_request;
use crate::config::Config;
use crate::error::CodexErr;
use crate::error::Result;
@@ -36,13 +37,13 @@ use crate::flags::CODEX_RS_SSE_FIXTURE;
use crate::model_family::ModelFamily;
use crate::model_provider_info::ModelProviderInfo;
use crate::model_provider_info::WireApi;
use crate::models::ResponseItem;
use crate::openai_tools::create_tools_json_for_responses_api;
use crate::protocol::TokenUsage;
use crate::user_agent::get_codex_user_agent;
use crate::util::backoff;
use codex_protocol::config_types::ReasoningEffort as ReasoningEffortConfig;
use codex_protocol::config_types::ReasoningSummary as ReasoningSummaryConfig;
use codex_protocol::models::ResponseItem;
use std::sync::Arc;
#[derive(Debug, Deserialize)]
@@ -60,7 +61,7 @@ struct Error {
#[derive(Debug, Clone)]
pub struct ModelClient {
config: Arc<Config>,
auth: Option<CodexAuth>,
auth_manager: Option<Arc<AuthManager>>,
client: reqwest::Client,
provider: ModelProviderInfo,
session_id: Uuid,
@@ -71,7 +72,7 @@ pub struct ModelClient {
impl ModelClient {
pub fn new(
config: Arc<Config>,
auth: Option<CodexAuth>,
auth_manager: Option<Arc<AuthManager>>,
provider: ModelProviderInfo,
effort: ReasoningEffortConfig,
summary: ReasoningSummaryConfig,
@@ -79,7 +80,7 @@ impl ModelClient {
) -> Self {
Self {
config,
auth,
auth_manager,
client: reqwest::Client::new(),
provider,
session_id,
@@ -140,14 +141,29 @@ impl ModelClient {
return stream_from_fixture(path, self.provider.clone()).await;
}
let auth = self.auth.clone();
let auth_manager = self.auth_manager.clone();
let auth = auth_manager.as_ref().and_then(|m| m.auth());
let auth_mode = auth.as_ref().map(|a| a.mode);
let store = prompt.store && auth_mode != Some(AuthMode::ChatGPT);
let full_instructions = prompt.get_full_instructions(&self.config.model_family);
let tools_json = create_tools_json_for_responses_api(&prompt.tools)?;
let mut tools_json = create_tools_json_for_responses_api(&prompt.tools)?;
// ChatGPT backend expects the preview name for web search.
if auth_mode == Some(AuthMode::ChatGPT) {
for tool in &mut tools_json {
if let Some(map) = tool.as_object_mut()
&& map.get("type").and_then(|v| v.as_str()) == Some("web_search")
{
map.insert(
"type".to_string(),
serde_json::Value::String("web_search_preview".to_string()),
);
}
}
}
let reasoning = create_reasoning_param_for_request(
&self.config.model_family,
self.effort,
@@ -164,6 +180,19 @@ impl ModelClient {
let input_with_instructions = prompt.get_formatted_input();
// Only include `text.verbosity` for GPT-5 family models
let text = if self.config.model_family.family == "gpt-5" {
create_text_param_for_request(self.config.model_verbosity)
} else {
if self.config.model_verbosity.is_some() {
warn!(
"model_verbosity is set but ignored for non-gpt-5 model family: {}",
self.config.model_family.family
);
}
None
};
let payload = ResponsesApiRequest {
model: &self.config.model,
instructions: &full_instructions,
@@ -176,6 +205,7 @@ impl ModelClient {
stream: true,
include,
prompt_cache_key: Some(self.session_id.to_string()),
text,
};
let mut attempt = 0;
@@ -249,9 +279,10 @@ impl ModelClient {
.and_then(|s| s.parse::<u64>().ok());
if status == StatusCode::UNAUTHORIZED
&& let Some(a) = auth.as_ref()
&& let Some(manager) = auth_manager.as_ref()
&& manager.auth().is_some()
{
let _ = a.refresh_token().await;
let _ = manager.refresh_token().await;
}
// The OpenAI Responses endpoint returns structured JSON bodies even for 4xx/5xx
@@ -338,8 +369,8 @@ impl ModelClient {
self.summary
}
pub fn get_auth(&self) -> Option<CodexAuth> {
self.auth.clone()
pub fn get_auth_manager(&self) -> Option<Arc<AuthManager>> {
self.auth_manager.clone()
}
}
@@ -449,7 +480,8 @@ async fn process_sse<S>(
}
};
trace!("SSE event: {}", sse.data);
let raw = sse.data.clone();
trace!("SSE event: {}", raw);
let event: SseEvent = match serde_json::from_str(&sse.data) {
Ok(event) => event,
@@ -558,11 +590,29 @@ async fn process_sse<S>(
}
"response.content_part.done"
| "response.function_call_arguments.delta"
| "response.custom_tool_call_input.delta"
| "response.custom_tool_call_input.done" // also emitted as response.output_item.done
| "response.in_progress"
| "response.output_item.added"
| "response.output_text.done" => {
// Currently, we ignore this event, but we handle it
// separately to skip the logging message in the `other` case.
if event.kind == "response.output_item.added"
&& let Some(item) = event.item.as_ref()
{
// Detect web_search_call begin and forward a synthetic event upstream.
if let Some(ty) = item.get("type").and_then(|v| v.as_str())
&& ty == "web_search_call"
{
let call_id = item
.get("id")
.and_then(|v| v.as_str())
.unwrap_or("")
.to_string();
let ev = ResponseEvent::WebSearchCallBegin { call_id, query: None };
if tx_event.send(Ok(ev)).await.is_err() {
return;
}
}
}
}
"response.reasoning_summary_part.added" => {
// Boundary between reasoning summary sections (e.g., titles).
@@ -572,7 +622,7 @@ async fn process_sse<S>(
}
}
"response.reasoning_summary_text.done" => {}
other => debug!(other, "sse event"),
_ => {}
}
}
}

View File

@@ -1,12 +1,13 @@
use crate::config_types::Verbosity as VerbosityConfig;
use crate::error::Result;
use crate::model_family::ModelFamily;
use crate::models::ContentItem;
use crate::models::ResponseItem;
use crate::openai_tools::OpenAiTool;
use crate::protocol::TokenUsage;
use codex_apply_patch::APPLY_PATCH_TOOL_INSTRUCTIONS;
use codex_protocol::config_types::ReasoningEffort as ReasoningEffortConfig;
use codex_protocol::config_types::ReasoningSummary as ReasoningSummaryConfig;
use codex_protocol::models::ContentItem;
use codex_protocol::models::ResponseItem;
use futures::Stream;
use serde::Serialize;
use std::borrow::Cow;
@@ -47,7 +48,18 @@ impl Prompt {
.as_deref()
.unwrap_or(BASE_INSTRUCTIONS);
let mut sections: Vec<&str> = vec![base];
if model.needs_special_apply_patch_instructions {
// When there are no custom instructions, add apply_patch_tool_instructions if either:
// - the model needs special instructions (4.1), or
// - there is no apply_patch tool present
let is_apply_patch_tool_present = self.tools.iter().any(|tool| match tool {
OpenAiTool::Function(f) => f.name == "apply_patch",
OpenAiTool::Freeform(f) => f.name == "apply_patch",
_ => false,
});
if self.base_instructions_override.is_none()
&& (model.needs_special_apply_patch_instructions || !is_apply_patch_tool_present)
{
sections.push(APPLY_PATCH_TOOL_INSTRUCTIONS);
}
Cow::Owned(sections.join("\n"))
@@ -81,6 +93,10 @@ pub enum ResponseEvent {
ReasoningSummaryDelta(String),
ReasoningContentDelta(String),
ReasoningSummaryPartAdded,
WebSearchCallBegin {
call_id: String,
query: Option<String>,
},
}
#[derive(Debug, Serialize)]
@@ -89,6 +105,32 @@ pub(crate) struct Reasoning {
pub(crate) summary: ReasoningSummaryConfig,
}
/// Controls under the `text` field in the Responses API for GPT-5.
#[derive(Debug, Serialize, Default, Clone, Copy)]
pub(crate) struct TextControls {
#[serde(skip_serializing_if = "Option::is_none")]
pub(crate) verbosity: Option<OpenAiVerbosity>,
}
#[derive(Debug, Serialize, Default, Clone, Copy)]
#[serde(rename_all = "lowercase")]
pub(crate) enum OpenAiVerbosity {
Low,
#[default]
Medium,
High,
}
impl From<VerbosityConfig> for OpenAiVerbosity {
fn from(v: VerbosityConfig) -> Self {
match v {
VerbosityConfig::Low => OpenAiVerbosity::Low,
VerbosityConfig::Medium => OpenAiVerbosity::Medium,
VerbosityConfig::High => OpenAiVerbosity::High,
}
}
}
/// Request object that is serialized as JSON and POST'ed when using the
/// Responses API.
#[derive(Debug, Serialize)]
@@ -109,6 +151,8 @@ pub(crate) struct ResponsesApiRequest<'a> {
pub(crate) include: Vec<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub(crate) prompt_cache_key: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub(crate) text: Option<TextControls>,
}
pub(crate) fn create_reasoning_param_for_request(
@@ -123,6 +167,14 @@ pub(crate) fn create_reasoning_param_for_request(
}
}
pub(crate) fn create_text_param_for_request(
verbosity: Option<VerbosityConfig>,
) -> Option<TextControls> {
verbosity.map(|v| TextControls {
verbosity: Some(v.into()),
})
}
pub(crate) struct ResponseStream {
pub(crate) rx_event: mpsc::Receiver<Result<ResponseEvent>>,
}
@@ -151,4 +203,57 @@ mod tests {
let full = prompt.get_full_instructions(&model_family);
assert_eq!(full, expected);
}
#[test]
fn serializes_text_verbosity_when_set() {
let input: Vec<ResponseItem> = vec![];
let tools: Vec<serde_json::Value> = vec![];
let req = ResponsesApiRequest {
model: "gpt-5",
instructions: "i",
input: &input,
tools: &tools,
tool_choice: "auto",
parallel_tool_calls: false,
reasoning: None,
store: true,
stream: true,
include: vec![],
prompt_cache_key: None,
text: Some(TextControls {
verbosity: Some(OpenAiVerbosity::Low),
}),
};
let v = serde_json::to_value(&req).expect("json");
assert_eq!(
v.get("text")
.and_then(|t| t.get("verbosity"))
.and_then(|s| s.as_str()),
Some("low")
);
}
#[test]
fn omits_text_when_not_set() {
let input: Vec<ResponseItem> = vec![];
let tools: Vec<serde_json::Value> = vec![];
let req = ResponsesApiRequest {
model: "gpt-5",
instructions: "i",
input: &input,
tools: &tools,
tool_choice: "auto",
parallel_tool_calls: false,
reasoning: None,
store: true,
stream: true,
include: vec![],
prompt_cache_key: None,
text: None,
};
let v = serde_json::to_value(&req).expect("json");
assert!(v.get("text").is_none());
}
}

View File

@@ -13,7 +13,8 @@ use async_channel::Sender;
use codex_apply_patch::ApplyPatchAction;
use codex_apply_patch::MaybeApplyPatchVerified;
use codex_apply_patch::maybe_parse_apply_patch_verified;
use codex_login::CodexAuth;
use codex_login::AuthManager;
use codex_protocol::protocol::ConversationHistoryResponseEvent;
use codex_protocol::protocol::TurnAbortReason;
use codex_protocol::protocol::TurnAbortedEvent;
use futures::prelude::*;
@@ -52,18 +53,15 @@ use crate::exec::SandboxType;
use crate::exec::StdoutStream;
use crate::exec::StreamOutput;
use crate::exec::process_exec_tool_call;
use crate::exec_command::EXEC_COMMAND_TOOL_NAME;
use crate::exec_command::ExecCommandParams;
use crate::exec_command::SESSION_MANAGER;
use crate::exec_command::WRITE_STDIN_TOOL_NAME;
use crate::exec_command::WriteStdinParams;
use crate::exec_env::create_env;
use crate::mcp_connection_manager::McpConnectionManager;
use crate::mcp_tool_call::handle_mcp_tool_call;
use crate::model_family::find_family_for_model;
use crate::models::ContentItem;
use crate::models::FunctionCallOutputPayload;
use crate::models::LocalShellAction;
use crate::models::ReasoningItemContent;
use crate::models::ReasoningItemReasoningSummary;
use crate::models::ResponseInputItem;
use crate::models::ResponseItem;
use crate::models::ShellToolCallParams;
use crate::openai_tools::ApplyPatchToolArgs;
use crate::openai_tools::ToolsConfig;
use crate::openai_tools::get_openai_tools;
@@ -94,9 +92,11 @@ use crate::protocol::PatchApplyEndEvent;
use crate::protocol::ReviewDecision;
use crate::protocol::SandboxPolicy;
use crate::protocol::SessionConfiguredEvent;
use crate::protocol::StreamErrorEvent;
use crate::protocol::Submission;
use crate::protocol::TaskCompleteEvent;
use crate::protocol::TurnDiffEvent;
use crate::protocol::WebSearchBeginEvent;
use crate::rollout::RolloutRecorder;
use crate::safety::SafetyCheck;
use crate::safety::assess_command_safety;
@@ -107,6 +107,14 @@ use crate::user_notification::UserNotification;
use crate::util::backoff;
use codex_protocol::config_types::ReasoningEffort as ReasoningEffortConfig;
use codex_protocol::config_types::ReasoningSummary as ReasoningSummaryConfig;
use codex_protocol::models::ContentItem;
use codex_protocol::models::FunctionCallOutputPayload;
use codex_protocol::models::LocalShellAction;
use codex_protocol::models::ReasoningItemContent;
use codex_protocol::models::ReasoningItemReasoningSummary;
use codex_protocol::models::ResponseInputItem;
use codex_protocol::models::ResponseItem;
use codex_protocol::models::ShellToolCallParams;
// A convenience extension trait for acquiring mutex locks where poisoning is
// unrecoverable and should abort the program. This avoids scattered `.unwrap()`
@@ -140,11 +148,23 @@ pub struct CodexSpawnOk {
}
pub(crate) const INITIAL_SUBMIT_ID: &str = "";
pub(crate) const SUBMISSION_CHANNEL_CAPACITY: usize = 64;
// Model-formatting limits: clients get full streams; only content sent to the model is truncated.
pub(crate) const MODEL_FORMAT_MAX_BYTES: usize = 10 * 1024; // 10 KiB
pub(crate) const MODEL_FORMAT_MAX_LINES: usize = 256; // lines
pub(crate) const MODEL_FORMAT_HEAD_LINES: usize = MODEL_FORMAT_MAX_LINES / 2;
pub(crate) const MODEL_FORMAT_TAIL_LINES: usize = MODEL_FORMAT_MAX_LINES - MODEL_FORMAT_HEAD_LINES; // 128
pub(crate) const MODEL_FORMAT_HEAD_BYTES: usize = MODEL_FORMAT_MAX_BYTES / 2;
impl Codex {
/// Spawn a new [`Codex`] and initialize the session.
pub async fn spawn(config: Config, auth: Option<CodexAuth>) -> CodexResult<CodexSpawnOk> {
let (tx_sub, rx_sub) = async_channel::bounded(64);
pub async fn spawn(
config: Config,
auth_manager: Arc<AuthManager>,
initial_history: Option<Vec<ResponseItem>>,
) -> CodexResult<CodexSpawnOk> {
let (tx_sub, rx_sub) = async_channel::bounded(SUBMISSION_CHANNEL_CAPACITY);
let (tx_event, rx_event) = async_channel::unbounded();
let user_instructions = get_user_instructions(&config).await;
@@ -168,17 +188,27 @@ impl Codex {
};
// Generate a unique ID for the lifetime of this Codex session.
let (session, turn_context) =
Session::new(configure_session, config.clone(), auth, tx_event.clone())
.await
.map_err(|e| {
error!("Failed to create session: {e:#}");
CodexErr::InternalAgentDied
})?;
let (session, turn_context) = Session::new(
configure_session,
config.clone(),
auth_manager.clone(),
tx_event.clone(),
initial_history,
)
.await
.map_err(|e| {
error!("Failed to create session: {e:#}");
CodexErr::InternalAgentDied
})?;
let session_id = session.session_id;
// This task will run until Op::Shutdown is received.
tokio::spawn(submission_loop(session, turn_context, config, rx_sub));
tokio::spawn(submission_loop(
session.clone(),
turn_context,
config,
rx_sub,
));
let codex = Codex {
next_id: AtomicU64::new(0),
tx_sub,
@@ -322,8 +352,9 @@ impl Session {
async fn new(
configure_session: ConfigureSession,
config: Arc<Config>,
auth: Option<CodexAuth>,
auth_manager: Arc<AuthManager>,
tx_event: Sender<Event>,
initial_history: Option<Vec<ResponseItem>>,
) -> anyhow::Result<(Arc<Self>, TurnContext)> {
let ConfigureSession {
provider,
@@ -383,14 +414,15 @@ impl Session {
}
let rollout_result = match rollout_res {
Ok((session_id, maybe_saved, recorder)) => {
let restored_items: Option<Vec<ResponseItem>> =
let restored_items: Option<Vec<ResponseItem>> = initial_history.or_else(|| {
maybe_saved.and_then(|saved_session| {
if saved_session.items.is_empty() {
None
} else {
Some(saved_session.items)
}
});
})
});
RolloutResult {
session_id,
rollout_recorder: Some(recorder),
@@ -466,7 +498,7 @@ impl Session {
// construct the model client.
let client = ModelClient::new(
config.clone(),
auth.clone(),
Some(auth_manager.clone()),
provider.clone(),
model_reasoning_effort,
model_reasoning_summary,
@@ -480,6 +512,8 @@ impl Session {
sandbox_policy.clone(),
config.include_plan_tool,
config.include_apply_patch_tool,
config.tools_web_search_request,
config.use_experimental_streamable_shell_tool,
),
user_instructions,
base_instructions,
@@ -508,9 +542,10 @@ impl Session {
conversation_items.push(Prompt::format_user_instructions_message(user_instructions));
}
conversation_items.push(ResponseItem::from(EnvironmentContext::new(
turn_context.cwd.to_path_buf(),
turn_context.approval_policy,
turn_context.sandbox_policy.clone(),
Some(turn_context.cwd.clone()),
Some(turn_context.approval_policy),
Some(turn_context.sandbox_policy.clone()),
Some(sess.user_shell.clone()),
)));
sess.record_conversation_items(&conversation_items).await;
@@ -692,7 +727,6 @@ impl Session {
let _ = self.tx_event.send(event).await;
}
#[allow(clippy::too_many_arguments)]
async fn on_exec_command_end(
&self,
turn_diff_tracker: &mut TurnDiffTracker,
@@ -704,14 +738,15 @@ impl Session {
let ExecToolCallOutput {
stdout,
stderr,
aggregated_output,
duration,
exit_code,
} = output;
// Because stdout and stderr could each be up to 100 KiB, we send
// truncated versions.
const MAX_STREAM_OUTPUT: usize = 5 * 1024; // 5KiB
let stdout = stdout.text.chars().take(MAX_STREAM_OUTPUT).collect();
let stderr = stderr.text.chars().take(MAX_STREAM_OUTPUT).collect();
// Send full stdout/stderr to clients; do not truncate.
let stdout = stdout.text.clone();
let stderr = stderr.text.clone();
let formatted_output = format_exec_output_str(output);
let aggregated_output: String = aggregated_output.text.clone();
let msg = if is_apply_patch {
EventMsg::PatchApplyEnd(PatchApplyEndEvent {
@@ -725,8 +760,10 @@ impl Session {
call_id: call_id.to_string(),
stdout,
stderr,
duration: *duration,
aggregated_output,
exit_code: *exit_code,
duration: *duration,
formatted_output,
})
};
@@ -784,6 +821,7 @@ impl Session {
exit_code: -1,
stdout: StreamOutput::new(String::new()),
stderr: StreamOutput::new(get_error_message_ui(e)),
aggregated_output: StreamOutput::new(get_error_message_ui(e)),
duration: Duration::default(),
};
&output_stderr
@@ -814,6 +852,16 @@ impl Session {
let _ = self.tx_event.send(event).await;
}
async fn notify_stream_error(&self, sub_id: &str, message: impl Into<String>) {
let event = Event {
id: sub_id.to_string(),
msg: EventMsg::StreamError(StreamErrorEvent {
message: message.into(),
}),
};
let _ = self.tx_event.send(event).await;
}
/// Build the full turn input by concatenating the current conversation
/// history with additional items for this turn.
pub fn turn_input_with_history(&self, extra: Vec<ResponseItem>) -> Vec<ResponseItem> {
@@ -1022,7 +1070,8 @@ async fn submission_loop(
let effective_effort = effort.unwrap_or(prev.client.get_reasoning_effort());
let effective_summary = summary.unwrap_or(prev.client.get_reasoning_summary());
let auth = prev.client.get_auth();
let auth_manager = prev.client.get_auth_manager();
// Build updated config for the client
let mut updated_config = (*config).clone();
updated_config.model = effective_model.clone();
@@ -1030,7 +1079,7 @@ async fn submission_loop(
let client = ModelClient::new(
Arc::new(updated_config),
auth,
auth_manager,
provider,
effective_effort,
effective_summary,
@@ -1049,6 +1098,8 @@ async fn submission_loop(
new_sandbox_policy.clone(),
config.include_plan_tool,
config.include_apply_patch_tool,
config.tools_web_search_request,
config.use_experimental_streamable_shell_tool,
);
let new_turn_context = TurnContext {
@@ -1067,9 +1118,11 @@ async fn submission_loop(
turn_context = Arc::new(new_turn_context);
if cwd.is_some() || approval_policy.is_some() || sandbox_policy.is_some() {
sess.record_conversation_items(&[ResponseItem::from(EnvironmentContext::new(
new_cwd,
new_approval_policy,
new_sandbox_policy,
cwd,
approval_policy,
sandbox_policy,
// Shell is not configurable from turn to turn
None,
))])
.await;
}
@@ -1125,6 +1178,8 @@ async fn submission_loop(
sandbox_policy.clone(),
config.include_plan_tool,
config.include_apply_patch_tool,
config.tools_web_search_request,
config.use_experimental_streamable_shell_tool,
),
user_instructions: turn_context.user_instructions.clone(),
base_instructions: turn_context.base_instructions.clone(),
@@ -1263,6 +1318,21 @@ async fn submission_loop(
}
break;
}
Op::GetHistory => {
let tx_event = sess.tx_event.clone();
let sub_id = sub.id.clone();
let event = Event {
id: sub_id.clone(),
msg: EventMsg::ConversationHistory(ConversationHistoryResponseEvent {
conversation_id: sess.session_id,
entries: sess.state.lock_unchecked().history.contents(),
}),
};
if let Err(e) = tx_event.send(event).await {
warn!("failed to send ConversationHistory event: {e}");
}
}
_ => {
// Ignore unknown ops; enum is non_exhaustive to allow extensions.
}
@@ -1384,29 +1454,38 @@ async fn run_task(
},
);
}
(
ResponseItem::CustomToolCall { .. },
Some(ResponseInputItem::CustomToolCallOutput { call_id, output }),
) => {
items_to_record_in_conversation_history.push(item);
items_to_record_in_conversation_history.push(
ResponseItem::CustomToolCallOutput {
call_id: call_id.clone(),
output: output.clone(),
},
);
}
(
ResponseItem::FunctionCall { .. },
Some(ResponseInputItem::McpToolCallOutput { call_id, result }),
) => {
items_to_record_in_conversation_history.push(item);
let (content, success): (String, Option<bool>) = match result {
Ok(CallToolResult {
content,
is_error,
structured_content: _,
}) => match serde_json::to_string(content) {
Ok(content) => (content, *is_error),
Err(e) => {
warn!("Failed to serialize MCP tool call output: {e}");
(e.to_string(), Some(true))
}
let output = match result {
Ok(call_tool_result) => {
convert_call_tool_result_to_function_call_output_payload(
call_tool_result,
)
}
Err(err) => FunctionCallOutputPayload {
content: err.clone(),
success: Some(false),
},
Err(e) => (e.clone(), Some(true)),
};
items_to_record_in_conversation_history.push(
ResponseItem::FunctionCallOutput {
call_id: call_id.clone(),
output: FunctionCallOutputPayload { content, success },
output,
},
);
}
@@ -1520,7 +1599,7 @@ async fn run_turn(
// Surface retry information to any UI/frontend so the
// user understands what is happening instead of staring
// at a seemingly frozen screen.
sess.notify_background_event(
sess.notify_stream_error(
&sub_id,
format!(
"stream error: {e}; retrying {retries}/{max_retries} in {delay:?}"
@@ -1564,6 +1643,7 @@ async fn try_run_turn(
call_id: Some(call_id),
..
} => Some(call_id),
ResponseItem::CustomToolCallOutput { call_id, .. } => Some(call_id),
_ => None,
})
.collect::<Vec<_>>();
@@ -1581,6 +1661,7 @@ async fn try_run_turn(
call_id: Some(call_id),
..
} => Some(call_id),
ResponseItem::CustomToolCall { call_id, .. } => Some(call_id),
_ => None,
})
.filter_map(|call_id| {
@@ -1590,12 +1671,9 @@ async fn try_run_turn(
Some(call_id.clone())
}
})
.map(|call_id| ResponseItem::FunctionCallOutput {
.map(|call_id| ResponseItem::CustomToolCallOutput {
call_id: call_id.clone(),
output: FunctionCallOutputPayload {
content: "aborted".to_string(),
success: Some(false),
},
output: "aborted".to_string(),
})
.collect::<Vec<_>>()
};
@@ -1613,6 +1691,7 @@ async fn try_run_turn(
let mut stream = turn_context.client.clone().stream(&prompt).await?;
let mut output = Vec::new();
loop {
// Poll the next item from the model stream. We must inspect *both* Ok and Err
// cases so that transient stream failures (e.g., dropped SSE connection before
@@ -1649,6 +1728,16 @@ async fn try_run_turn(
.await?;
output.push(ProcessedResponseItem { item, response });
}
ResponseEvent::WebSearchCallBegin { call_id, query } => {
let q = query.unwrap_or_else(|| "Searching Web...".to_string());
let _ = sess
.tx_event
.send(Event {
id: sub_id.to_string(),
msg: EventMsg::WebSearchBegin(WebSearchBeginEvent { call_id, query: q }),
})
.await;
}
ResponseEvent::Completed {
response_id: _,
token_usage,
@@ -1755,7 +1844,7 @@ async fn run_compact_task(
if retries < max_retries {
retries += 1;
let delay = backoff(retries);
sess.notify_background_event(
sess.notify_stream_error(
&sub_id,
format!(
"stream error: {e}; retrying {retries}/{max_retries} in {delay:?}"
@@ -1860,7 +1949,7 @@ async fn handle_response_item(
call_id,
..
} => {
info!("FunctionCall: {arguments}");
info!("FunctionCall: {name}({arguments})");
Some(
handle_function_call(
sess,
@@ -1917,10 +2006,32 @@ async fn handle_response_item(
.await,
)
}
ResponseItem::CustomToolCall {
id: _,
call_id,
name,
input,
status: _,
} => Some(
handle_custom_tool_call(
sess,
turn_context,
turn_diff_tracker,
sub_id.to_string(),
name,
input,
call_id,
)
.await,
),
ResponseItem::FunctionCallOutput { .. } => {
debug!("unexpected FunctionCallOutput from stream");
None
}
ResponseItem::CustomToolCallOutput { .. } => {
debug!("unexpected CustomToolCallOutput from stream");
None
}
ResponseItem::Other => None,
};
Ok(output)
@@ -1985,6 +2096,52 @@ async fn handle_function_call(
.await
}
"update_plan" => handle_update_plan(sess, arguments, sub_id, call_id).await,
EXEC_COMMAND_TOOL_NAME => {
// TODO(mbolin): Sandbox check.
let exec_params = match serde_json::from_str::<ExecCommandParams>(&arguments) {
Ok(params) => params,
Err(e) => {
return ResponseInputItem::FunctionCallOutput {
call_id,
output: FunctionCallOutputPayload {
content: format!("failed to parse function arguments: {e}"),
success: Some(false),
},
};
}
};
let result = SESSION_MANAGER
.handle_exec_command_request(exec_params)
.await;
let function_call_output = crate::exec_command::result_into_payload(result);
ResponseInputItem::FunctionCallOutput {
call_id,
output: function_call_output,
}
}
WRITE_STDIN_TOOL_NAME => {
let write_stdin_params = match serde_json::from_str::<WriteStdinParams>(&arguments) {
Ok(params) => params,
Err(e) => {
return ResponseInputItem::FunctionCallOutput {
call_id,
output: FunctionCallOutputPayload {
content: format!("failed to parse function arguments: {e}"),
success: Some(false),
},
};
}
};
let result = SESSION_MANAGER
.handle_write_stdin_request(write_stdin_params)
.await;
let function_call_output: FunctionCallOutputPayload =
crate::exec_command::result_into_payload(result);
ResponseInputItem::FunctionCallOutput {
call_id,
output: function_call_output,
}
}
_ => {
match sess.mcp_connection_manager.parse_tool_name(&name) {
Some((server, tool_name)) => {
@@ -2010,6 +2167,58 @@ async fn handle_function_call(
}
}
async fn handle_custom_tool_call(
sess: &Session,
turn_context: &TurnContext,
turn_diff_tracker: &mut TurnDiffTracker,
sub_id: String,
name: String,
input: String,
call_id: String,
) -> ResponseInputItem {
info!("CustomToolCall: {name} {input}");
match name.as_str() {
"apply_patch" => {
let exec_params = ExecParams {
command: vec!["apply_patch".to_string(), input.clone()],
cwd: turn_context.cwd.clone(),
timeout_ms: None,
env: HashMap::new(),
with_escalated_permissions: None,
justification: None,
};
let resp = handle_container_exec_with_params(
exec_params,
sess,
turn_context,
turn_diff_tracker,
sub_id,
call_id,
)
.await;
// Convert function-call style output into a custom tool call output
match resp {
ResponseInputItem::FunctionCallOutput { call_id, output } => {
ResponseInputItem::CustomToolCallOutput {
call_id,
output: output.content,
}
}
// Pass through if already a custom tool output or other variant
other => other,
}
}
_ => {
debug!("unexpected CustomToolCall from stream");
ResponseInputItem::CustomToolCallOutput {
call_id,
output: format!("unsupported custom tool call: {name}"),
}
}
}
}
fn to_exec_params(params: ShellToolCallParams, turn_context: &TurnContext) -> ExecParams {
ExecParams {
command: params.command,
@@ -2051,18 +2260,20 @@ pub struct ExecInvokeArgs<'a> {
pub stdout_stream: Option<StdoutStream>,
}
fn maybe_run_with_user_profile(
fn maybe_translate_shell_command(
params: ExecParams,
sess: &Session,
turn_context: &TurnContext,
) -> ExecParams {
if turn_context.shell_environment_policy.use_profile {
let command = sess
let should_translate = matches!(sess.user_shell, crate::shell::Shell::PowerShell(_))
|| turn_context.shell_environment_policy.use_profile;
if should_translate
&& let Some(command) = sess
.user_shell
.format_default_shell_invocation(params.command.clone());
if let Some(command) = command {
return ExecParams { command, ..params };
}
.format_default_shell_invocation(params.command.clone())
{
return ExecParams { command, ..params };
}
params
}
@@ -2227,7 +2438,7 @@ async fn handle_container_exec_with_params(
),
};
let params = maybe_run_with_user_profile(params, sess, turn_context);
let params = maybe_translate_shell_command(params, sess, turn_context);
let output_result = sess
.run_exec_with_events(
turn_diff_tracker,
@@ -2251,7 +2462,7 @@ async fn handle_container_exec_with_params(
let ExecToolCallOutput { exit_code, .. } = &output;
let is_success = *exit_code == 0;
let content = format_exec_output(output);
let content = format_exec_output(&output);
ResponseInputItem::FunctionCallOutput {
call_id: call_id.clone(),
output: FunctionCallOutputPayload {
@@ -2384,7 +2595,7 @@ async fn handle_sandbox_error(
let ExecToolCallOutput { exit_code, .. } = &retry_output;
let is_success = *exit_code == 0;
let content = format_exec_output(retry_output);
let content = format_exec_output(&retry_output);
ResponseInputItem::FunctionCallOutput {
call_id: call_id.clone(),
@@ -2416,13 +2627,113 @@ async fn handle_sandbox_error(
}
}
fn format_exec_output_str(exec_output: &ExecToolCallOutput) -> String {
let ExecToolCallOutput {
aggregated_output, ..
} = exec_output;
// Head+tail truncation for the model: show the beginning and end with an elision.
// Clients still receive full streams; only this formatted summary is capped.
let s = aggregated_output.text.as_str();
let total_lines = s.lines().count();
if s.len() <= MODEL_FORMAT_MAX_BYTES && total_lines <= MODEL_FORMAT_MAX_LINES {
return s.to_string();
}
let lines: Vec<&str> = s.lines().collect();
let head_take = MODEL_FORMAT_HEAD_LINES.min(lines.len());
let tail_take = MODEL_FORMAT_TAIL_LINES.min(lines.len().saturating_sub(head_take));
let omitted = lines.len().saturating_sub(head_take + tail_take);
// Join head and tail blocks (lines() strips newlines; reinsert them)
let head_block = lines
.iter()
.take(head_take)
.cloned()
.collect::<Vec<_>>()
.join("\n");
let tail_block = if tail_take > 0 {
lines[lines.len() - tail_take..].join("\n")
} else {
String::new()
};
let marker = format!("\n[... omitted {omitted} of {total_lines} lines ...]\n\n");
// Byte budgets for head/tail around the marker
let mut head_budget = MODEL_FORMAT_HEAD_BYTES.min(MODEL_FORMAT_MAX_BYTES);
let tail_budget = MODEL_FORMAT_MAX_BYTES.saturating_sub(head_budget + marker.len());
if tail_budget == 0 && marker.len() >= MODEL_FORMAT_MAX_BYTES {
// Degenerate case: marker alone exceeds budget; return a clipped marker
return take_bytes_at_char_boundary(&marker, MODEL_FORMAT_MAX_BYTES).to_string();
}
if tail_budget == 0 {
// Make room for the marker by shrinking head
head_budget = MODEL_FORMAT_MAX_BYTES.saturating_sub(marker.len());
}
// Enforce line-count cap by trimming head/tail lines
let head_lines_text = head_block;
let tail_lines_text = tail_block;
// Build final string respecting byte budgets
let head_part = take_bytes_at_char_boundary(&head_lines_text, head_budget);
let mut result = String::with_capacity(MODEL_FORMAT_MAX_BYTES.min(s.len()));
result.push_str(head_part);
result.push_str(&marker);
let remaining = MODEL_FORMAT_MAX_BYTES.saturating_sub(result.len());
let tail_budget_final = remaining;
let tail_part = take_last_bytes_at_char_boundary(&tail_lines_text, tail_budget_final);
result.push_str(tail_part);
result
}
// Truncate a &str to a byte budget at a char boundary (prefix)
#[inline]
fn take_bytes_at_char_boundary(s: &str, maxb: usize) -> &str {
if s.len() <= maxb {
return s;
}
let mut last_ok = 0;
for (i, ch) in s.char_indices() {
let nb = i + ch.len_utf8();
if nb > maxb {
break;
}
last_ok = nb;
}
&s[..last_ok]
}
// Take a suffix of a &str within a byte budget at a char boundary
#[inline]
fn take_last_bytes_at_char_boundary(s: &str, maxb: usize) -> &str {
if s.len() <= maxb {
return s;
}
let mut start = s.len();
let mut used = 0usize;
for (i, ch) in s.char_indices().rev() {
let nb = ch.len_utf8();
if used + nb > maxb {
break;
}
start = i;
used += nb;
if start == 0 {
break;
}
}
&s[start..]
}
/// Exec output is a pre-serialized JSON payload
fn format_exec_output(exec_output: ExecToolCallOutput) -> String {
fn format_exec_output(exec_output: &ExecToolCallOutput) -> String {
let ExecToolCallOutput {
exit_code,
stdout,
stderr,
duration,
..
} = exec_output;
#[derive(Serialize)]
@@ -2440,20 +2751,12 @@ fn format_exec_output(exec_output: ExecToolCallOutput) -> String {
// round to 1 decimal place
let duration_seconds = ((duration.as_secs_f32()) * 10.0).round() / 10.0;
let is_success = exit_code == 0;
let output = if is_success { stdout } else { stderr };
let mut formatted_output = output.text;
if let Some(truncated_after_lines) = output.truncated_after_lines {
formatted_output.push_str(&format!(
"\n\n[Output truncated after {truncated_after_lines} lines: too many lines or bytes.]",
));
}
let formatted_output = format_exec_output_str(exec_output);
let payload = ExecOutput {
output: &formatted_output,
metadata: ExecMetadata {
exit_code,
exit_code: *exit_code,
duration_seconds,
},
};
@@ -2507,15 +2810,7 @@ async fn drain_to_completed(
response_id: _,
token_usage,
}) => {
let token_usage = match token_usage {
Some(usage) => usage,
None => {
return Err(CodexErr::Stream(
"token_usage was None in ResponseEvent::Completed".into(),
None,
));
}
};
let token_usage = token_usage.unwrap_or_default();
sess.tx_event
.send(Event {
id: sub_id.to_string(),
@@ -2523,6 +2818,7 @@ async fn drain_to_completed(
})
.await
.ok();
return Ok(());
}
Ok(_) => continue,
@@ -2530,3 +2826,209 @@ async fn drain_to_completed(
}
}
}
fn convert_call_tool_result_to_function_call_output_payload(
call_tool_result: &CallToolResult,
) -> FunctionCallOutputPayload {
let CallToolResult {
content,
is_error,
structured_content,
} = call_tool_result;
// In terms of what to send back to the model, we prefer structured_content
// when available, and fall back to content otherwise.
let mut is_success = is_error != &Some(true);
let content = if let Some(structured_content) = structured_content
&& structured_content != &serde_json::Value::Null
&& let Ok(serialized_structured_content) = serde_json::to_string(&structured_content)
{
serialized_structured_content
} else {
match serde_json::to_string(&content) {
Ok(serialized_content) => serialized_content,
Err(err) => {
// If we could not serialize either content or structured_content to
// JSON, flag this as an error.
is_success = false;
err.to_string()
}
}
};
FunctionCallOutputPayload {
content,
success: Some(is_success),
}
}
#[cfg(test)]
mod tests {
use super::*;
use mcp_types::ContentBlock;
use mcp_types::TextContent;
use pretty_assertions::assert_eq;
use serde_json::json;
use std::time::Duration as StdDuration;
fn text_block(s: &str) -> ContentBlock {
ContentBlock::TextContent(TextContent {
annotations: None,
text: s.to_string(),
r#type: "text".to_string(),
})
}
#[test]
fn prefers_structured_content_when_present() {
let ctr = CallToolResult {
// Content present but should be ignored because structured_content is set.
content: vec![text_block("ignored")],
is_error: None,
structured_content: Some(json!({
"ok": true,
"value": 42
})),
};
let got = convert_call_tool_result_to_function_call_output_payload(&ctr);
let expected = FunctionCallOutputPayload {
content: serde_json::to_string(&json!({
"ok": true,
"value": 42
}))
.unwrap(),
success: Some(true),
};
assert_eq!(expected, got);
}
#[test]
fn model_truncation_head_tail_by_lines() {
// Build 400 short lines so line-count limit, not byte budget, triggers truncation
let lines: Vec<String> = (1..=400).map(|i| format!("line{i}")).collect();
let full = lines.join("\n");
let exec = ExecToolCallOutput {
exit_code: 0,
stdout: StreamOutput::new(String::new()),
stderr: StreamOutput::new(String::new()),
aggregated_output: StreamOutput::new(full.clone()),
duration: StdDuration::from_secs(1),
};
let out = format_exec_output_str(&exec);
// Expect elision marker with correct counts
let omitted = 400 - MODEL_FORMAT_MAX_LINES; // 144
let marker = format!("\n[... omitted {omitted} of 400 lines ...]\n\n");
assert!(out.contains(&marker), "missing marker: {out}");
// Validate head and tail
let parts: Vec<&str> = out.split(&marker).collect();
assert_eq!(parts.len(), 2, "expected one marker split");
let head = parts[0];
let tail = parts[1];
let expected_head: String = (1..=MODEL_FORMAT_HEAD_LINES)
.map(|i| format!("line{i}"))
.collect::<Vec<_>>()
.join("\n");
assert!(head.starts_with(&expected_head), "head mismatch");
let expected_tail: String = ((400 - MODEL_FORMAT_TAIL_LINES + 1)..=400)
.map(|i| format!("line{i}"))
.collect::<Vec<_>>()
.join("\n");
assert!(tail.ends_with(&expected_tail), "tail mismatch");
}
#[test]
fn model_truncation_respects_byte_budget() {
// Construct a large output (about 100kB) so byte budget dominates
let big_line = "x".repeat(100);
let full = std::iter::repeat_n(big_line.clone(), 1000)
.collect::<Vec<_>>()
.join("\n");
let exec = ExecToolCallOutput {
exit_code: 0,
stdout: StreamOutput::new(String::new()),
stderr: StreamOutput::new(String::new()),
aggregated_output: StreamOutput::new(full.clone()),
duration: StdDuration::from_secs(1),
};
let out = format_exec_output_str(&exec);
assert!(out.len() <= MODEL_FORMAT_MAX_BYTES, "exceeds byte budget");
assert!(out.contains("omitted"), "should contain elision marker");
// Ensure head and tail are drawn from the original
assert!(full.starts_with(out.chars().take(8).collect::<String>().as_str()));
assert!(
full.ends_with(
out.chars()
.rev()
.take(8)
.collect::<String>()
.chars()
.rev()
.collect::<String>()
.as_str()
)
);
}
#[test]
fn falls_back_to_content_when_structured_is_null() {
let ctr = CallToolResult {
content: vec![text_block("hello"), text_block("world")],
is_error: None,
structured_content: Some(serde_json::Value::Null),
};
let got = convert_call_tool_result_to_function_call_output_payload(&ctr);
let expected = FunctionCallOutputPayload {
content: serde_json::to_string(&vec![text_block("hello"), text_block("world")])
.unwrap(),
success: Some(true),
};
assert_eq!(expected, got);
}
#[test]
fn success_flag_reflects_is_error_true() {
let ctr = CallToolResult {
content: vec![text_block("unused")],
is_error: Some(true),
structured_content: Some(json!({ "message": "bad" })),
};
let got = convert_call_tool_result_to_function_call_output_payload(&ctr);
let expected = FunctionCallOutputPayload {
content: serde_json::to_string(&json!({ "message": "bad" })).unwrap(),
success: Some(false),
};
assert_eq!(expected, got);
}
#[test]
fn success_flag_true_with_no_error_and_content_used() {
let ctr = CallToolResult {
content: vec![text_block("alpha")],
is_error: Some(false),
structured_content: None,
};
let got = convert_call_tool_result_to_function_call_output_payload(&ctr);
let expected = FunctionCallOutputPayload {
content: serde_json::to_string(&vec![text_block("alpha")]).unwrap(),
success: Some(true),
};
assert_eq!(expected, got);
}
}

View File

@@ -6,6 +6,8 @@ use crate::config_types::ShellEnvironmentPolicy;
use crate::config_types::ShellEnvironmentPolicyToml;
use crate::config_types::Tui;
use crate::config_types::UriBasedFileOpener;
use crate::config_types::Verbosity;
use crate::git_info::resolve_root_git_project_for_trust;
use crate::model_family::ModelFamily;
use crate::model_family::find_family_for_model;
use crate::model_provider_info::ModelProviderInfo;
@@ -150,6 +152,9 @@ pub struct Config {
/// request using the Responses API.
pub model_reasoning_summary: ReasoningSummary,
/// Optional verbosity control for GPT-5 models (Responses API `text.verbosity`).
pub model_verbosity: Option<Verbosity>,
/// Base URL for requests to ChatGPT (as opposed to the OpenAI API).
pub chatgpt_base_url: String,
@@ -164,11 +169,15 @@ pub struct Config {
/// model family's default preference.
pub include_apply_patch_tool: bool,
pub tools_web_search_request: bool,
/// The value for the `originator` header included with Responses API requests.
pub responses_originator_header: String,
/// If set to `true`, the API key will be signed with the `originator` header.
pub preferred_auth_method: AuthMode,
pub use_experimental_streamable_shell_tool: bool,
}
impl Config {
@@ -259,10 +268,61 @@ pub fn set_project_trusted(codex_home: &Path, project_path: &Path) -> anyhow::Re
Err(e) => return Err(e.into()),
};
// Mark the project as trusted. toml_edit is very good at handling
// missing properties
// Ensure we render a human-friendly structure:
//
// [projects]
// [projects."/path/to/project"]
// trust_level = "trusted"
//
// rather than inline tables like:
//
// [projects]
// "/path/to/project" = { trust_level = "trusted" }
let project_key = project_path.to_string_lossy().to_string();
doc["projects"][project_key.as_str()]["trust_level"] = toml_edit::value("trusted");
// Ensure top-level `projects` exists as a non-inline, explicit table. If it
// exists but was previously represented as a non-table (e.g., inline),
// replace it with an explicit table.
let mut created_projects_table = false;
{
let root = doc.as_table_mut();
let needs_table = !root.contains_key("projects")
|| root.get("projects").and_then(|i| i.as_table()).is_none();
if needs_table {
root.insert("projects", toml_edit::table());
created_projects_table = true;
}
}
let Some(projects_tbl) = doc["projects"].as_table_mut() else {
return Err(anyhow::anyhow!(
"projects table missing after initialization"
));
};
// If we created the `projects` table ourselves, keep it implicit so we
// don't render a standalone `[projects]` header.
if created_projects_table {
projects_tbl.set_implicit(true);
}
// Ensure the per-project entry is its own explicit table. If it exists but
// is not a table (e.g., an inline table), replace it with an explicit table.
let needs_proj_table = !projects_tbl.contains_key(project_key.as_str())
|| projects_tbl
.get(project_key.as_str())
.and_then(|i| i.as_table())
.is_none();
if needs_proj_table {
projects_tbl.insert(project_key.as_str(), toml_edit::table());
}
let Some(proj_tbl) = projects_tbl
.get_mut(project_key.as_str())
.and_then(|i| i.as_table_mut())
else {
return Err(anyhow::anyhow!("project table missing for {}", project_key));
};
proj_tbl.set_implicit(false);
proj_tbl["trust_level"] = toml_edit::value("trusted");
// ensure codex_home exists
std::fs::create_dir_all(codex_home)?;
@@ -398,6 +458,8 @@ pub struct ConfigToml {
pub model_reasoning_effort: Option<ReasoningEffort>,
pub model_reasoning_summary: Option<ReasoningSummary>,
/// Optional verbosity control for GPT-5 models (Responses API `text.verbosity`).
pub model_verbosity: Option<Verbosity>,
/// Override to force-enable reasoning summaries for the configured model.
pub model_supports_reasoning_summaries: Option<bool>,
@@ -411,6 +473,8 @@ pub struct ConfigToml {
/// Experimental path to a file whose contents replace the built-in BASE_INSTRUCTIONS.
pub experimental_instructions_file: Option<PathBuf>,
pub experimental_use_exec_command_tool: Option<bool>,
/// The value for the `originator` header included with Responses API requests.
pub responses_originator_header_internal_override: Option<String>,
@@ -418,6 +482,9 @@ pub struct ConfigToml {
/// If set to `true`, the API key will be signed with the `originator` header.
pub preferred_auth_method: Option<AuthMode>,
/// Nested tools section for feature toggles
pub tools: Option<ToolsToml>,
}
#[derive(Deserialize, Debug, Clone, PartialEq, Eq)]
@@ -425,6 +492,13 @@ pub struct ProjectConfig {
pub trust_level: Option<String>,
}
#[derive(Deserialize, Debug, Clone, Default)]
pub struct ToolsToml {
// Renamed from `web_search_request`; keep alias for backwards compatibility.
#[serde(default, alias = "web_search_request")]
pub web_search: Option<bool>,
}
impl ConfigToml {
/// Derive the effective sandbox policy from the configuration.
fn derive_sandbox_policy(&self, sandbox_mode_override: Option<SandboxMode>) -> SandboxPolicy {
@@ -454,10 +528,27 @@ impl ConfigToml {
pub fn is_cwd_trusted(&self, resolved_cwd: &Path) -> bool {
let projects = self.projects.clone().unwrap_or_default();
projects
.get(&resolved_cwd.to_string_lossy().to_string())
.map(|p| p.trust_level.clone().unwrap_or("".to_string()) == "trusted")
.unwrap_or(false)
let is_path_trusted = |path: &Path| {
let path_str = path.to_string_lossy().to_string();
projects
.get(&path_str)
.map(|p| p.trust_level.as_deref() == Some("trusted"))
.unwrap_or(false)
};
// Fast path: exact cwd match
if is_path_trusted(resolved_cwd) {
return true;
}
// If cwd lives inside a git worktree, check whether the root git project
// (the primary repository working directory) is trusted. This lets
// worktrees inherit trust from the main project.
if let Some(root_project) = resolve_root_git_project_for_trust(resolved_cwd) {
return is_path_trusted(&root_project);
}
false
}
pub fn get_config_profile(
@@ -497,6 +588,7 @@ pub struct ConfigOverrides {
pub include_apply_patch_tool: Option<bool>,
pub disable_response_storage: Option<bool>,
pub show_raw_agent_reasoning: Option<bool>,
pub tools_web_search_request: Option<bool>,
}
impl Config {
@@ -523,6 +615,7 @@ impl Config {
include_apply_patch_tool,
disable_response_storage,
show_raw_agent_reasoning,
tools_web_search_request: override_tools_web_search_request,
} = overrides;
let config_profile = match config_profile_key.as_ref().or(cfg.profile.as_ref()) {
@@ -561,7 +654,7 @@ impl Config {
})?
.clone();
let shell_environment_policy = cfg.shell_environment_policy.into();
let shell_environment_policy = cfg.shell_environment_policy.clone().into();
let resolved_cwd = {
use std::env;
@@ -582,7 +675,11 @@ impl Config {
}
};
let history = cfg.history.unwrap_or_default();
let history = cfg.history.clone().unwrap_or_default();
let tools_web_search_request = override_tools_web_search_request
.or(cfg.tools.as_ref().and_then(|t| t.web_search))
.unwrap_or(false);
let model = model
.or(config_profile.model)
@@ -597,7 +694,7 @@ impl Config {
needs_special_apply_patch_instructions: false,
supports_reasoning_summaries,
uses_local_shell_tool: false,
uses_apply_patch_tool: false,
apply_patch_tool_type: None,
}
});
@@ -624,9 +721,6 @@ impl Config {
Self::get_base_instructions(experimental_instructions_path, &resolved_cwd)?;
let base_instructions = base_instructions.or(file_base_instructions);
let include_apply_patch_tool_val =
include_apply_patch_tool.unwrap_or(model_family.uses_apply_patch_tool);
let responses_originator_header: String = cfg
.responses_originator_header_internal_override
.unwrap_or(DEFAULT_RESPONSES_ORIGINATOR_HEADER.to_owned());
@@ -659,7 +753,7 @@ impl Config {
codex_home,
history,
file_opener: cfg.file_opener.unwrap_or(UriBasedFileOpener::VsCode),
tui: cfg.tui.unwrap_or_default(),
tui: cfg.tui.clone().unwrap_or_default(),
codex_linux_sandbox_exe,
hide_agent_reasoning: cfg.hide_agent_reasoning.unwrap_or(false),
@@ -675,17 +769,21 @@ impl Config {
.model_reasoning_summary
.or(cfg.model_reasoning_summary)
.unwrap_or_default(),
model_verbosity: config_profile.model_verbosity.or(cfg.model_verbosity),
chatgpt_base_url: config_profile
.chatgpt_base_url
.or(cfg.chatgpt_base_url)
.or(cfg.chatgpt_base_url.clone())
.unwrap_or("https://chatgpt.com/backend-api/".to_string()),
experimental_resume,
include_plan_tool: include_plan_tool.unwrap_or(false),
include_apply_patch_tool: include_apply_patch_tool_val,
include_apply_patch_tool: include_apply_patch_tool.unwrap_or(false),
tools_web_search_request,
responses_originator_header,
preferred_auth_method: cfg.preferred_auth_method.unwrap_or(AuthMode::ChatGPT),
use_experimental_streamable_shell_tool: cfg
.experimental_use_exec_command_tool
.unwrap_or(false),
};
Ok(config)
}
@@ -1044,13 +1142,16 @@ disable_response_storage = true
show_raw_agent_reasoning: false,
model_reasoning_effort: ReasoningEffort::High,
model_reasoning_summary: ReasoningSummary::Detailed,
model_verbosity: None,
chatgpt_base_url: "https://chatgpt.com/backend-api/".to_string(),
experimental_resume: None,
base_instructions: None,
include_plan_tool: false,
include_apply_patch_tool: false,
tools_web_search_request: false,
responses_originator_header: "codex_cli_rs".to_string(),
preferred_auth_method: AuthMode::ChatGPT,
use_experimental_streamable_shell_tool: false,
},
o3_profile_config
);
@@ -1097,13 +1198,16 @@ disable_response_storage = true
show_raw_agent_reasoning: false,
model_reasoning_effort: ReasoningEffort::default(),
model_reasoning_summary: ReasoningSummary::default(),
model_verbosity: None,
chatgpt_base_url: "https://chatgpt.com/backend-api/".to_string(),
experimental_resume: None,
base_instructions: None,
include_plan_tool: false,
include_apply_patch_tool: false,
tools_web_search_request: false,
responses_originator_header: "codex_cli_rs".to_string(),
preferred_auth_method: AuthMode::ChatGPT,
use_experimental_streamable_shell_tool: false,
};
assert_eq!(expected_gpt3_profile_config, gpt3_profile_config);
@@ -1165,17 +1269,90 @@ disable_response_storage = true
show_raw_agent_reasoning: false,
model_reasoning_effort: ReasoningEffort::default(),
model_reasoning_summary: ReasoningSummary::default(),
model_verbosity: None,
chatgpt_base_url: "https://chatgpt.com/backend-api/".to_string(),
experimental_resume: None,
base_instructions: None,
include_plan_tool: false,
include_apply_patch_tool: false,
tools_web_search_request: false,
responses_originator_header: "codex_cli_rs".to_string(),
preferred_auth_method: AuthMode::ChatGPT,
use_experimental_streamable_shell_tool: false,
};
assert_eq!(expected_zdr_profile_config, zdr_profile_config);
Ok(())
}
#[test]
fn test_set_project_trusted_writes_explicit_tables() -> anyhow::Result<()> {
let codex_home = TempDir::new().unwrap();
let project_dir = TempDir::new().unwrap();
// Call the function under test
set_project_trusted(codex_home.path(), project_dir.path())?;
// Read back the generated config.toml and assert exact contents
let config_path = codex_home.path().join(CONFIG_TOML_FILE);
let contents = std::fs::read_to_string(&config_path)?;
let raw_path = project_dir.path().to_string_lossy();
let path_str = if raw_path.contains('\\') {
format!("'{}'", raw_path)
} else {
format!("\"{}\"", raw_path)
};
let expected = format!(
r#"[projects.{path_str}]
trust_level = "trusted"
"#
);
assert_eq!(contents, expected);
Ok(())
}
#[test]
fn test_set_project_trusted_converts_inline_to_explicit() -> anyhow::Result<()> {
let codex_home = TempDir::new().unwrap();
let project_dir = TempDir::new().unwrap();
// Seed config.toml with an inline project entry under [projects]
let config_path = codex_home.path().join(CONFIG_TOML_FILE);
let raw_path = project_dir.path().to_string_lossy();
let path_str = if raw_path.contains('\\') {
format!("'{}'", raw_path)
} else {
format!("\"{}\"", raw_path)
};
// Use a quoted key so backslashes don't require escaping on Windows
let initial = format!(
r#"[projects]
{path_str} = {{ trust_level = "untrusted" }}
"#
);
std::fs::create_dir_all(codex_home.path())?;
std::fs::write(&config_path, initial)?;
// Run the function; it should convert to explicit tables and set trusted
set_project_trusted(codex_home.path(), project_dir.path())?;
let contents = std::fs::read_to_string(&config_path)?;
// Assert exact output after conversion to explicit table
let expected = format!(
r#"[projects]
[projects.{path_str}]
trust_level = "trusted"
"#
);
assert_eq!(contents, expected);
Ok(())
}
// No test enforcing the presence of a standalone [projects] header.
}

View File

@@ -1,6 +1,7 @@
use serde::Deserialize;
use std::path::PathBuf;
use crate::config_types::Verbosity;
use crate::protocol::AskForApproval;
use codex_protocol::config_types::ReasoningEffort;
use codex_protocol::config_types::ReasoningSummary;
@@ -17,6 +18,7 @@ pub struct ConfigProfile {
pub disable_response_storage: Option<bool>,
pub model_reasoning_effort: Option<ReasoningEffort>,
pub model_reasoning_summary: Option<ReasoningSummary>,
pub model_verbosity: Option<Verbosity>,
pub chatgpt_base_url: Option<String>,
pub experimental_instructions_file: Option<PathBuf>,
}

View File

@@ -8,6 +8,8 @@ use std::path::PathBuf;
use wildmatch::WildMatchPattern;
use serde::Deserialize;
use serde::Serialize;
use strum_macros::Display;
#[derive(Deserialize, Debug, Clone, PartialEq)]
pub struct McpServerConfig {
@@ -183,3 +185,43 @@ impl From<ShellEnvironmentPolicyToml> for ShellEnvironmentPolicy {
}
}
}
/// See https://platform.openai.com/docs/guides/reasoning?api-mode=responses#get-started-with-reasoning
#[derive(Debug, Serialize, Deserialize, Default, Clone, Copy, PartialEq, Eq, Display)]
#[serde(rename_all = "lowercase")]
#[strum(serialize_all = "lowercase")]
pub enum ReasoningEffort {
Low,
#[default]
Medium,
High,
/// Option to disable reasoning.
None,
}
/// A summary of the reasoning performed by the model. This can be useful for
/// debugging and understanding the model's reasoning process.
/// See https://platform.openai.com/docs/guides/reasoning?api-mode=responses#reasoning-summaries
#[derive(Debug, Serialize, Deserialize, Default, Clone, Copy, PartialEq, Eq, Display)]
#[serde(rename_all = "lowercase")]
#[strum(serialize_all = "lowercase")]
pub enum ReasoningSummary {
#[default]
Auto,
Concise,
Detailed,
/// Option to disable reasoning summaries.
None,
}
/// Controls output length/detail on GPT-5 models via the Responses API.
/// Serialized with lowercase values to match the OpenAI API.
#[derive(Debug, Serialize, Deserialize, Default, Clone, Copy, PartialEq, Eq, Display)]
#[serde(rename_all = "lowercase")]
#[strum(serialize_all = "lowercase")]
pub enum Verbosity {
Low,
#[default]
Medium,
High,
}
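A quick aside on the three enums above: serde and strum are configured to agree on the lowercase wire form. A self-contained sketch of the round-trip (assuming `serde`, `serde_json`, and `strum_macros` as dependencies; `Verbosity` re-declared locally):
```rust
use serde::{Deserialize, Serialize};
use strum_macros::Display;

// Local copy of the derive pattern used by ReasoningEffort/ReasoningSummary/Verbosity.
#[derive(Debug, Serialize, Deserialize, Default, Clone, Copy, PartialEq, Eq, Display)]
#[serde(rename_all = "lowercase")]
#[strum(serialize_all = "lowercase")]
enum Verbosity {
    Low,
    #[default]
    Medium,
    High,
}

fn main() {
    // serde emits and accepts the lowercase names on the wire...
    assert_eq!(serde_json::to_string(&Verbosity::High).unwrap(), "\"high\"");
    let v: Verbosity = serde_json::from_str("\"low\"").unwrap();
    assert_eq!(v, Verbosity::Low);
    // ...and strum's Display produces the same strings for logs/UI.
    assert_eq!(Verbosity::Medium.to_string(), "medium");
}
```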

View File

@@ -1,4 +1,4 @@
use crate::models::ResponseItem;
use codex_protocol::models::ResponseItem;
/// Transcript of conversation history
#[derive(Debug, Clone, Default)]
@@ -66,7 +66,7 @@ impl ConversationHistory {
self.items.push(ResponseItem::Message {
id: None,
role: "assistant".to_string(),
content: vec![crate::models::ContentItem::OutputText {
content: vec![codex_protocol::models::ContentItem::OutputText {
text: delta.to_string(),
}],
});
@@ -110,6 +110,8 @@ fn is_api_message(message: &ResponseItem) -> bool {
ResponseItem::Message { role, .. } => role.as_str() != "system",
ResponseItem::FunctionCallOutput { .. }
| ResponseItem::FunctionCall { .. }
| ResponseItem::CustomToolCall { .. }
| ResponseItem::CustomToolCallOutput { .. }
| ResponseItem::LocalShellCall { .. }
| ResponseItem::Reasoning { .. } => true,
ResponseItem::Other => false,
@@ -118,11 +120,11 @@ fn is_api_message(message: &ResponseItem) -> bool {
/// Helper to append the textual content from `src` into `dst` in place.
fn append_text_content(
dst: &mut Vec<crate::models::ContentItem>,
src: &Vec<crate::models::ContentItem>,
dst: &mut Vec<codex_protocol::models::ContentItem>,
src: &Vec<codex_protocol::models::ContentItem>,
) {
for c in src {
if let crate::models::ContentItem::OutputText { text } = c {
if let codex_protocol::models::ContentItem::OutputText { text } = c {
append_text_delta(dst, text);
}
}
@@ -130,15 +132,15 @@ fn append_text_content(
/// Append a single text delta to the last OutputText item in `content`, or
/// push a new OutputText item if none exists.
fn append_text_delta(content: &mut Vec<crate::models::ContentItem>, delta: &str) {
if let Some(crate::models::ContentItem::OutputText { text }) = content
fn append_text_delta(content: &mut Vec<codex_protocol::models::ContentItem>, delta: &str) {
if let Some(codex_protocol::models::ContentItem::OutputText { text }) = content
.iter_mut()
.rev()
.find(|c| matches!(c, crate::models::ContentItem::OutputText { .. }))
.find(|c| matches!(c, codex_protocol::models::ContentItem::OutputText { .. }))
{
text.push_str(delta);
} else {
content.push(crate::models::ContentItem::OutputText {
content.push(codex_protocol::models::ContentItem::OutputText {
text: delta.to_string(),
});
}
@@ -147,7 +149,7 @@ fn append_text_delta(content: &mut Vec<crate::models::ContentItem>, delta: &str)
#[cfg(test)]
mod tests {
use super::*;
use crate::models::ContentItem;
use codex_protocol::models::ContentItem;
fn assistant_msg(text: &str) -> ResponseItem {
ResponseItem::Message {

View File

@@ -1,6 +1,7 @@
use std::collections::HashMap;
use std::sync::Arc;
use codex_login::AuthManager;
use codex_login::CodexAuth;
use tokio::sync::RwLock;
use uuid::Uuid;
@@ -15,6 +16,7 @@ use crate::error::Result as CodexResult;
use crate::protocol::Event;
use crate::protocol::EventMsg;
use crate::protocol::SessionConfiguredEvent;
use codex_protocol::models::ResponseItem;
/// Represents a newly created Codex conversation, including the first event
/// (which is [`EventMsg::SessionConfigured`]).
@@ -28,34 +30,48 @@ pub struct NewConversation {
/// maintaining them in memory.
pub struct ConversationManager {
conversations: Arc<RwLock<HashMap<Uuid, Arc<CodexConversation>>>>,
}
impl Default for ConversationManager {
fn default() -> Self {
Self {
conversations: Arc::new(RwLock::new(HashMap::new())),
}
}
auth_manager: Arc<AuthManager>,
}
impl ConversationManager {
pub async fn new_conversation(&self, config: Config) -> CodexResult<NewConversation> {
let auth = CodexAuth::from_codex_home(&config.codex_home, config.preferred_auth_method)?;
self.new_conversation_with_auth(config, auth).await
pub fn new(auth_manager: Arc<AuthManager>) -> Self {
Self {
conversations: Arc::new(RwLock::new(HashMap::new())),
auth_manager,
}
}
/// Used for integration tests: should not be used by ordinary business
/// logic.
pub async fn new_conversation_with_auth(
/// Construct with a dummy AuthManager containing the provided CodexAuth.
/// Used for integration tests: should not be used by ordinary business logic.
pub fn with_auth(auth: CodexAuth) -> Self {
Self::new(codex_login::AuthManager::from_auth_for_testing(auth))
}
pub async fn new_conversation(&self, config: Config) -> CodexResult<NewConversation> {
self.spawn_conversation(config, self.auth_manager.clone())
.await
}
async fn spawn_conversation(
&self,
config: Config,
auth: Option<CodexAuth>,
auth_manager: Arc<AuthManager>,
) -> CodexResult<NewConversation> {
let CodexSpawnOk {
codex,
session_id: conversation_id,
} = Codex::spawn(config, auth).await?;
} = {
let initial_history = None;
Codex::spawn(config, auth_manager, initial_history).await?
};
self.finalize_spawn(codex, conversation_id).await
}
async fn finalize_spawn(
&self,
codex: Codex,
conversation_id: Uuid,
) -> CodexResult<NewConversation> {
// The first event must be `SessionInitialized`. Validate and forward it
// to the caller so that they can display it in the conversation
// history.
@@ -93,4 +109,120 @@ impl ConversationManager {
.cloned()
.ok_or_else(|| CodexErr::ConversationNotFound(conversation_id))
}
/// Fork an existing conversation by dropping the last `num_messages_to_drop`
/// user messages from its transcript and starting a new
/// conversation with identical configuration (unless overridden by the
/// caller's `config`). The new conversation will have a fresh id.
pub async fn fork_conversation(
&self,
conversation_history: Vec<ResponseItem>,
num_messages_to_drop: usize,
config: Config,
) -> CodexResult<NewConversation> {
// Compute the prefix up to the cut point.
let truncated_history =
truncate_after_dropping_last_messages(conversation_history, num_messages_to_drop);
// Spawn a new conversation with the computed initial history.
let auth_manager = self.auth_manager.clone();
let CodexSpawnOk {
codex,
session_id: conversation_id,
} = Codex::spawn(config, auth_manager, Some(truncated_history)).await?;
self.finalize_spawn(codex, conversation_id).await
}
}
/// Return a prefix of `items` obtained by dropping the last `n` user messages
/// and all items that follow them.
fn truncate_after_dropping_last_messages(items: Vec<ResponseItem>, n: usize) -> Vec<ResponseItem> {
if n == 0 || items.is_empty() {
return items;
}
// Walk backwards counting only `user` Message items, find cut index.
let mut count = 0usize;
let mut cut_index = 0usize;
for (idx, item) in items.iter().enumerate().rev() {
if let ResponseItem::Message { role, .. } = item
&& role == "user"
{
count += 1;
if count == n {
// Cut everything from this user message to the end.
cut_index = idx;
break;
}
}
}
if count < n {
// If fewer than n user messages exist, drop everything.
Vec::new()
} else {
items.into_iter().take(cut_index).collect()
}
}
#[cfg(test)]
mod tests {
use super::*;
use codex_protocol::models::ContentItem;
use codex_protocol::models::ReasoningItemReasoningSummary;
use codex_protocol::models::ResponseItem;
fn user_msg(text: &str) -> ResponseItem {
ResponseItem::Message {
id: None,
role: "user".to_string(),
content: vec![ContentItem::OutputText {
text: text.to_string(),
}],
}
}
fn assistant_msg(text: &str) -> ResponseItem {
ResponseItem::Message {
id: None,
role: "assistant".to_string(),
content: vec![ContentItem::OutputText {
text: text.to_string(),
}],
}
}
#[test]
fn drops_from_last_user_only() {
let items = vec![
user_msg("u1"),
assistant_msg("a1"),
assistant_msg("a2"),
user_msg("u2"),
assistant_msg("a3"),
ResponseItem::Reasoning {
id: "r1".to_string(),
summary: vec![ReasoningItemReasoningSummary::SummaryText {
text: "s".to_string(),
}],
content: None,
encrypted_content: None,
},
ResponseItem::FunctionCall {
id: None,
name: "tool".to_string(),
arguments: "{}".to_string(),
call_id: "c1".to_string(),
},
assistant_msg("a4"),
];
let truncated = truncate_after_dropping_last_messages(items.clone(), 1);
assert_eq!(
truncated,
vec![items[0].clone(), items[1].clone(), items[2].clone()]
);
let truncated2 = truncate_after_dropping_last_messages(items, 2);
assert!(truncated2.is_empty());
}
}

View File

@@ -2,16 +2,16 @@ use serde::Deserialize;
use serde::Serialize;
use strum_macros::Display as DeriveDisplay;
use crate::models::ContentItem;
use crate::models::ResponseItem;
use crate::protocol::AskForApproval;
use crate::protocol::SandboxPolicy;
use crate::shell::Shell;
use codex_protocol::config_types::SandboxMode;
use std::fmt::Display;
use codex_protocol::models::ContentItem;
use codex_protocol::models::ResponseItem;
use std::path::PathBuf;
/// Wraps the environment context message in a tag for the model to parse more easily.
pub(crate) const ENVIRONMENT_CONTEXT_START: &str = "<environment_context>\n";
pub(crate) const ENVIRONMENT_CONTEXT_START: &str = "<environment_context>";
pub(crate) const ENVIRONMENT_CONTEXT_END: &str = "</environment_context>";
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, DeriveDisplay)]
@@ -24,52 +24,87 @@ pub enum NetworkAccess {
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
#[serde(rename = "environment_context", rename_all = "snake_case")]
pub(crate) struct EnvironmentContext {
pub cwd: PathBuf,
pub approval_policy: AskForApproval,
pub sandbox_mode: SandboxMode,
pub network_access: NetworkAccess,
pub cwd: Option<PathBuf>,
pub approval_policy: Option<AskForApproval>,
pub sandbox_mode: Option<SandboxMode>,
pub network_access: Option<NetworkAccess>,
pub shell: Option<Shell>,
}
impl EnvironmentContext {
pub fn new(
cwd: PathBuf,
approval_policy: AskForApproval,
sandbox_policy: SandboxPolicy,
cwd: Option<PathBuf>,
approval_policy: Option<AskForApproval>,
sandbox_policy: Option<SandboxPolicy>,
shell: Option<Shell>,
) -> Self {
Self {
cwd,
approval_policy,
sandbox_mode: match sandbox_policy {
SandboxPolicy::DangerFullAccess => SandboxMode::DangerFullAccess,
SandboxPolicy::ReadOnly => SandboxMode::ReadOnly,
SandboxPolicy::WorkspaceWrite { .. } => SandboxMode::WorkspaceWrite,
Some(SandboxPolicy::DangerFullAccess) => Some(SandboxMode::DangerFullAccess),
Some(SandboxPolicy::ReadOnly) => Some(SandboxMode::ReadOnly),
Some(SandboxPolicy::WorkspaceWrite { .. }) => Some(SandboxMode::WorkspaceWrite),
None => None,
},
network_access: match sandbox_policy {
SandboxPolicy::DangerFullAccess => NetworkAccess::Enabled,
SandboxPolicy::ReadOnly => NetworkAccess::Restricted,
SandboxPolicy::WorkspaceWrite { network_access, .. } => {
Some(SandboxPolicy::DangerFullAccess) => Some(NetworkAccess::Enabled),
Some(SandboxPolicy::ReadOnly) => Some(NetworkAccess::Restricted),
Some(SandboxPolicy::WorkspaceWrite { network_access, .. }) => {
if network_access {
NetworkAccess::Enabled
Some(NetworkAccess::Enabled)
} else {
NetworkAccess::Restricted
Some(NetworkAccess::Restricted)
}
}
None => None,
},
shell,
}
}
}
impl Display for EnvironmentContext {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
writeln!(
f,
"Current working directory: {}",
self.cwd.to_string_lossy()
)?;
writeln!(f, "Approval policy: {}", self.approval_policy)?;
writeln!(f, "Sandbox mode: {}", self.sandbox_mode)?;
writeln!(f, "Network access: {}", self.network_access)?;
Ok(())
impl EnvironmentContext {
/// Serializes the environment context to XML. Libraries like `quick-xml`
/// require custom macros to handle Enums with newtypes, so we just do it
/// manually, to keep things simple. Output looks like:
///
/// ```xml
/// <environment_context>
/// <cwd>...</cwd>
/// <approval_policy>...</approval_policy>
/// <sandbox_mode>...</sandbox_mode>
/// <network_access>...</network_access>
/// <shell>...</shell>
/// </environment_context>
/// ```
pub fn serialize_to_xml(self) -> String {
let mut lines = vec![ENVIRONMENT_CONTEXT_START.to_string()];
if let Some(cwd) = self.cwd {
lines.push(format!(" <cwd>{}</cwd>", cwd.to_string_lossy()));
}
if let Some(approval_policy) = self.approval_policy {
lines.push(format!(
" <approval_policy>{}</approval_policy>",
approval_policy
));
}
if let Some(sandbox_mode) = self.sandbox_mode {
lines.push(format!(" <sandbox_mode>{}</sandbox_mode>", sandbox_mode));
}
if let Some(network_access) = self.network_access {
lines.push(format!(
" <network_access>{}</network_access>",
network_access
));
}
if let Some(shell) = self.shell
&& let Some(shell_name) = shell.name()
{
lines.push(format!(" <shell>{}</shell>", shell_name));
}
lines.push(ENVIRONMENT_CONTEXT_END.to_string());
lines.join("\n")
}
}
@@ -79,7 +114,7 @@ impl From<EnvironmentContext> for ResponseItem {
id: None,
role: "user".to_string(),
content: vec![ContentItem::InputText {
text: format!("{ENVIRONMENT_CONTEXT_START}{ec}{ENVIRONMENT_CONTEXT_END}"),
text: ec.serialize_to_xml(),
}],
}
}
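The XML serializer above emits a tag only when the corresponding field is `Some`. A minimal standalone sketch of that pattern (hypothetical free function; tag names from the diff, two-space indentation assumed):
```rust
// Sketch: emit only the fields that are present, one line per tag.
fn serialize_to_xml(cwd: Option<&str>, sandbox_mode: Option<&str>) -> String {
    let mut lines = vec!["<environment_context>".to_string()];
    if let Some(cwd) = cwd {
        lines.push(format!("  <cwd>{cwd}</cwd>"));
    }
    if let Some(mode) = sandbox_mode {
        lines.push(format!("  <sandbox_mode>{mode}</sandbox_mode>"));
    }
    lines.push("</environment_context>".to_string());
    lines.join("\n")
}

fn main() {
    let xml = serialize_to_xml(Some("/repo"), Some("read-only"));
    assert!(xml.contains("<cwd>/repo</cwd>"));
    // A None field produces no tag at all rather than an empty element.
    assert!(!serialize_to_xml(None, None).contains("<cwd>"));
    println!("{xml}");
}
```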

View File

@@ -28,18 +28,17 @@ use crate::spawn::StdioPolicy;
use crate::spawn::spawn_child_async;
use serde_bytes::ByteBuf;
// Maximum we send for each stream, which is either:
// - 10KiB OR
// - 256 lines
const MAX_STREAM_OUTPUT: usize = 10 * 1024;
const MAX_STREAM_OUTPUT_LINES: usize = 256;
const DEFAULT_TIMEOUT_MS: u64 = 10_000;
// Hardcode these since it does not seem worth including the libc crate just
// for these.
const SIGKILL_CODE: i32 = 9;
const TIMEOUT_CODE: i32 = 64;
const EXIT_CODE_SIGNAL_BASE: i32 = 128; // conventional shell: 128 + signal
// I/O buffer sizing
const READ_CHUNK_SIZE: usize = 8192; // bytes per read
const AGGREGATE_BUFFER_INITIAL_CAPACITY: usize = 8 * 1024; // 8 KiB
#[derive(Debug, Clone)]
pub struct ExecParams {
@@ -153,6 +152,7 @@ pub async fn process_exec_tool_call(
exit_code,
stdout,
stderr,
aggregated_output: raw_output.aggregated_output.from_utf8_lossy(),
duration,
})
}
@@ -189,10 +189,11 @@ pub struct StreamOutput<T> {
pub truncated_after_lines: Option<u32>,
}
#[derive(Debug)]
pub struct RawExecToolCallOutput {
struct RawExecToolCallOutput {
pub exit_status: ExitStatus,
pub stdout: StreamOutput<Vec<u8>>,
pub stderr: StreamOutput<Vec<u8>>,
pub aggregated_output: StreamOutput<Vec<u8>>,
}
impl StreamOutput<String> {
@@ -213,11 +214,17 @@ impl StreamOutput<Vec<u8>> {
}
}
#[inline]
fn append_all(dst: &mut Vec<u8>, src: &[u8]) {
dst.extend_from_slice(src);
}
#[derive(Debug)]
pub struct ExecToolCallOutput {
pub exit_code: i32,
pub stdout: StreamOutput<String>,
pub stderr: StreamOutput<String>,
pub aggregated_output: StreamOutput<String>,
pub duration: Duration,
}
@@ -253,7 +260,7 @@ async fn exec(
/// Consumes the output of a child process, truncating it so it is suitable for
/// use as the output of a `shell` tool call. Also enforces specified timeout.
pub(crate) async fn consume_truncated_output(
async fn consume_truncated_output(
mut child: Child,
timeout: Duration,
stdout_stream: Option<StdoutStream>,
@@ -273,19 +280,19 @@ pub(crate) async fn consume_truncated_output(
))
})?;
let (agg_tx, agg_rx) = async_channel::unbounded::<Vec<u8>>();
let stdout_handle = tokio::spawn(read_capped(
BufReader::new(stdout_reader),
MAX_STREAM_OUTPUT,
MAX_STREAM_OUTPUT_LINES,
stdout_stream.clone(),
false,
Some(agg_tx.clone()),
));
let stderr_handle = tokio::spawn(read_capped(
BufReader::new(stderr_reader),
MAX_STREAM_OUTPUT,
MAX_STREAM_OUTPUT_LINES,
stdout_stream.clone(),
true,
Some(agg_tx.clone()),
));
let exit_status = tokio::select! {
@@ -297,38 +304,48 @@ pub(crate) async fn consume_truncated_output(
// timeout
child.start_kill()?;
// Debatable whether `child.wait().await` should be called here.
synthetic_exit_status(128 + TIMEOUT_CODE)
synthetic_exit_status(EXIT_CODE_SIGNAL_BASE + TIMEOUT_CODE)
}
}
}
_ = tokio::signal::ctrl_c() => {
child.start_kill()?;
synthetic_exit_status(128 + SIGKILL_CODE)
synthetic_exit_status(EXIT_CODE_SIGNAL_BASE + SIGKILL_CODE)
}
};
let stdout = stdout_handle.await??;
let stderr = stderr_handle.await??;
drop(agg_tx);
let mut combined_buf = Vec::with_capacity(AGGREGATE_BUFFER_INITIAL_CAPACITY);
while let Ok(chunk) = agg_rx.recv().await {
append_all(&mut combined_buf, &chunk);
}
let aggregated_output = StreamOutput {
text: combined_buf,
truncated_after_lines: None,
};
Ok(RawExecToolCallOutput {
exit_status,
stdout,
stderr,
aggregated_output,
})
}
async fn read_capped<R: AsyncRead + Unpin + Send + 'static>(
mut reader: R,
max_output: usize,
max_lines: usize,
stream: Option<StdoutStream>,
is_stderr: bool,
aggregate_tx: Option<Sender<Vec<u8>>>,
) -> io::Result<StreamOutput<Vec<u8>>> {
let mut buf = Vec::with_capacity(max_output.min(8 * 1024));
let mut tmp = [0u8; 8192];
let mut buf = Vec::with_capacity(AGGREGATE_BUFFER_INITIAL_CAPACITY);
let mut tmp = [0u8; READ_CHUNK_SIZE];
let mut remaining_bytes = max_output;
let mut remaining_lines = max_lines;
// No caps: append all bytes
loop {
let n = reader.read(&mut tmp).await?;
@@ -355,33 +372,17 @@ async fn read_capped<R: AsyncRead + Unpin + Send + 'static>(
let _ = stream.tx_event.send(event).await;
}
// Copy into the buffer only while we still have byte and line budget.
if remaining_bytes > 0 && remaining_lines > 0 {
let mut copy_len = 0;
for &b in &tmp[..n] {
if remaining_bytes == 0 || remaining_lines == 0 {
break;
}
copy_len += 1;
remaining_bytes -= 1;
if b == b'\n' {
remaining_lines -= 1;
}
}
buf.extend_from_slice(&tmp[..copy_len]);
if let Some(tx) = &aggregate_tx {
let _ = tx.send(tmp[..n].to_vec()).await;
}
// Continue reading to EOF to avoid back-pressure, but discard once caps are hit.
}
let truncated = remaining_lines == 0 || remaining_bytes == 0;
append_all(&mut buf, &tmp[..n]);
// Continue reading to EOF to avoid back-pressure
}
Ok(StreamOutput {
text: buf,
truncated_after_lines: if truncated {
Some((max_lines - remaining_lines) as u32)
} else {
None
},
truncated_after_lines: None,
})
}
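Design note on the aggregation: both `read_capped` tasks forward every chunk into a single `async_channel`, so the combined buffer reflects arrival order instead of a lossy stdout+stderr stitch after the fact. A reduced sketch of the pattern (same `tokio`/`async-channel` crates the diff uses; toy payloads):
```rust
use async_channel::unbounded;

#[tokio::main(flavor = "current_thread")]
async fn main() {
    let (agg_tx, agg_rx) = unbounded::<Vec<u8>>();

    // Stand-ins for the stdout and stderr reader tasks.
    let tx_out = agg_tx.clone();
    tokio::spawn(async move {
        let _ = tx_out.send(b"stdout: step 1\n".to_vec()).await;
    });
    let tx_err = agg_tx.clone();
    tokio::spawn(async move {
        let _ = tx_err.send(b"stderr: step 2\n".to_vec()).await;
    });

    // Drop the original sender so the loop ends once both tasks are done.
    drop(agg_tx);
    let mut combined = Vec::new();
    while let Ok(chunk) = agg_rx.recv().await {
        combined.extend_from_slice(&chunk);
    }
    print!("{}", String::from_utf8_lossy(&combined));
}
```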

View File

@@ -0,0 +1,57 @@
use serde::Deserialize;
use serde::Serialize;
use crate::exec_command::session_id::SessionId;
#[derive(Debug, Clone, Deserialize)]
pub struct ExecCommandParams {
pub(crate) cmd: String,
#[serde(default = "default_yield_time")]
pub(crate) yield_time_ms: u64,
#[serde(default = "max_output_tokens")]
pub(crate) max_output_tokens: u64,
#[serde(default = "default_shell")]
pub(crate) shell: String,
#[serde(default = "default_login")]
pub(crate) login: bool,
}
fn default_yield_time() -> u64 {
10_000
}
fn max_output_tokens() -> u64 {
10_000
}
fn default_login() -> bool {
true
}
fn default_shell() -> String {
"/bin/bash".to_string()
}
#[derive(Debug, Deserialize, Serialize)]
pub struct WriteStdinParams {
pub(crate) session_id: SessionId,
pub(crate) chars: String,
#[serde(default = "write_stdin_default_yield_time_ms")]
pub(crate) yield_time_ms: u64,
#[serde(default = "write_stdin_default_max_output_tokens")]
pub(crate) max_output_tokens: u64,
}
fn write_stdin_default_yield_time_ms() -> u64 {
250
}
fn write_stdin_default_max_output_tokens() -> u64 {
10_000
}
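Because of the `#[serde(default = ...)]` attributes, the model only has to send `cmd`; every other field falls back to the defaults above. A self-contained sketch of that behavior (field set trimmed to three; `serde`/`serde_json` assumed):
```rust
use serde::Deserialize;

#[derive(Debug, Deserialize)]
struct ExecCommandParams {
    cmd: String,
    #[serde(default = "default_yield_time")]
    yield_time_ms: u64,
    #[serde(default = "default_shell")]
    shell: String,
}

fn default_yield_time() -> u64 {
    10_000
}

fn default_shell() -> String {
    "/bin/bash".to_string()
}

fn main() {
    // Only `cmd` appears on the wire; the rest are filled in by serde.
    let p: ExecCommandParams = serde_json::from_str(r#"{"cmd": "ls"}"#).unwrap();
    assert_eq!(p.yield_time_ms, 10_000);
    assert_eq!(p.shell, "/bin/bash");
    println!("{p:?}");
}
```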

View File

@@ -0,0 +1,83 @@
use std::sync::Mutex as StdMutex;
use tokio::sync::broadcast;
use tokio::sync::mpsc;
use tokio::task::JoinHandle;
#[derive(Debug)]
pub(crate) struct ExecCommandSession {
/// Queue for writing bytes to the process stdin (PTY master write side).
writer_tx: mpsc::Sender<Vec<u8>>,
/// Broadcast stream of output chunks read from the PTY. New subscribers
/// receive only chunks emitted after they subscribe.
output_tx: broadcast::Sender<Vec<u8>>,
/// Child killer handle for termination on drop (can signal independently
/// of a thread blocked in `.wait()`).
killer: StdMutex<Option<Box<dyn portable_pty::ChildKiller + Send + Sync>>>,
/// JoinHandle for the blocking PTY reader task.
reader_handle: StdMutex<Option<JoinHandle<()>>>,
/// JoinHandle for the stdin writer task.
writer_handle: StdMutex<Option<JoinHandle<()>>>,
/// JoinHandle for the child wait task.
wait_handle: StdMutex<Option<JoinHandle<()>>>,
}
impl ExecCommandSession {
pub(crate) fn new(
writer_tx: mpsc::Sender<Vec<u8>>,
output_tx: broadcast::Sender<Vec<u8>>,
killer: Box<dyn portable_pty::ChildKiller + Send + Sync>,
reader_handle: JoinHandle<()>,
writer_handle: JoinHandle<()>,
wait_handle: JoinHandle<()>,
) -> Self {
Self {
writer_tx,
output_tx,
killer: StdMutex::new(Some(killer)),
reader_handle: StdMutex::new(Some(reader_handle)),
writer_handle: StdMutex::new(Some(writer_handle)),
wait_handle: StdMutex::new(Some(wait_handle)),
}
}
pub(crate) fn writer_sender(&self) -> mpsc::Sender<Vec<u8>> {
self.writer_tx.clone()
}
pub(crate) fn output_receiver(&self) -> broadcast::Receiver<Vec<u8>> {
self.output_tx.subscribe()
}
}
impl Drop for ExecCommandSession {
fn drop(&mut self) {
// Best-effort: terminate child first so blocking tasks can complete.
if let Ok(mut killer_opt) = self.killer.lock()
&& let Some(mut killer) = killer_opt.take()
{
let _ = killer.kill();
}
// Abort background tasks; they may already have exited after kill.
if let Ok(mut h) = self.reader_handle.lock()
&& let Some(handle) = h.take()
{
handle.abort();
}
if let Ok(mut h) = self.writer_handle.lock()
&& let Some(handle) = h.take()
{
handle.abort();
}
if let Ok(mut h) = self.wait_handle.lock()
&& let Some(handle) = h.take()
{
handle.abort();
}
}
}
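The `Mutex<Option<JoinHandle<_>>>` fields exist so `Drop` can move each handle out exactly once before aborting it. A minimal sketch of that take-and-abort pattern (hypothetical single-handle session; `tokio` assumed):
```rust
use std::sync::Mutex;
use tokio::task::JoinHandle;

struct Session {
    handle: Mutex<Option<JoinHandle<()>>>,
}

impl Drop for Session {
    fn drop(&mut self) {
        // take() ensures the abort happens at most once.
        if let Ok(mut slot) = self.handle.lock() {
            if let Some(handle) = slot.take() {
                handle.abort();
            }
        }
    }
}

#[tokio::main(flavor = "current_thread")]
async fn main() {
    let handle = tokio::spawn(async {
        // Stand-in for the PTY reader/writer/wait loops.
        tokio::time::sleep(std::time::Duration::from_secs(3600)).await;
    });
    let session = Session {
        handle: Mutex::new(Some(handle)),
    };
    drop(session); // aborts the background task
}
```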

View File

@@ -0,0 +1,14 @@
mod exec_command_params;
mod exec_command_session;
mod responses_api;
mod session_id;
mod session_manager;
pub use exec_command_params::ExecCommandParams;
pub use exec_command_params::WriteStdinParams;
pub use responses_api::EXEC_COMMAND_TOOL_NAME;
pub use responses_api::WRITE_STDIN_TOOL_NAME;
pub use responses_api::create_exec_command_tool_for_responses_api;
pub use responses_api::create_write_stdin_tool_for_responses_api;
pub use session_manager::SESSION_MANAGER;
pub use session_manager::result_into_payload;

View File

@@ -0,0 +1,98 @@
use std::collections::BTreeMap;
use crate::openai_tools::JsonSchema;
use crate::openai_tools::ResponsesApiTool;
pub const EXEC_COMMAND_TOOL_NAME: &str = "exec_command";
pub const WRITE_STDIN_TOOL_NAME: &str = "write_stdin";
pub fn create_exec_command_tool_for_responses_api() -> ResponsesApiTool {
let mut properties = BTreeMap::<String, JsonSchema>::new();
properties.insert(
"cmd".to_string(),
JsonSchema::String {
description: Some("The shell command to execute.".to_string()),
},
);
properties.insert(
"yield_time_ms".to_string(),
JsonSchema::Number {
description: Some("The maximum time in milliseconds to wait for output.".to_string()),
},
);
properties.insert(
"max_output_tokens".to_string(),
JsonSchema::Number {
description: Some("The maximum number of tokens to output.".to_string()),
},
);
properties.insert(
"shell".to_string(),
JsonSchema::String {
description: Some("The shell to use. Defaults to \"/bin/bash\".".to_string()),
},
);
properties.insert(
"login".to_string(),
JsonSchema::Boolean {
description: Some(
"Whether to run the command as a login shell. Defaults to true.".to_string(),
),
},
);
ResponsesApiTool {
name: EXEC_COMMAND_TOOL_NAME.to_owned(),
description: r#"Execute shell commands on the local machine with streaming output."#
.to_string(),
strict: false,
parameters: JsonSchema::Object {
properties,
required: Some(vec!["cmd".to_string()]),
additional_properties: Some(false),
},
}
}
pub fn create_write_stdin_tool_for_responses_api() -> ResponsesApiTool {
let mut properties = BTreeMap::<String, JsonSchema>::new();
properties.insert(
"session_id".to_string(),
JsonSchema::Number {
description: Some("The ID of the exec_command session.".to_string()),
},
);
properties.insert(
"chars".to_string(),
JsonSchema::String {
description: Some("The characters to write to stdin.".to_string()),
},
);
properties.insert(
"yield_time_ms".to_string(),
JsonSchema::Number {
description: Some(
"The maximum time in milliseconds to wait for output after writing.".to_string(),
),
},
);
properties.insert(
"max_output_tokens".to_string(),
JsonSchema::Number {
description: Some("The maximum number of tokens to output.".to_string()),
},
);
ResponsesApiTool {
name: WRITE_STDIN_TOOL_NAME.to_owned(),
description: r#"Write characters to an exec session's stdin. Returns all stdout+stderr received within yield_time_ms.
Can write control characters (\u0003 for Ctrl-C), or an empty string to just poll stdout+stderr."#
.to_string(),
strict: false,
parameters: JsonSchema::Object {
properties,
required: Some(vec!["session_id".to_string(), "chars".to_string()]),
additional_properties: Some(false),
},
}
}
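For orientation, a function tool built this way should serialize to roughly the payload below. This is a hand-written approximation of the Responses API shape (the `type` tag comes from the `OpenAiTool` enum later in this diff), not output captured from the code:
```rust
use serde_json::json;

fn main() {
    // Approximate wire shape for exec_command, trimmed to the required `cmd` field.
    let tool = json!({
        "type": "function",
        "name": "exec_command",
        "description": "Execute shell commands on the local machine with streaming output.",
        "strict": false,
        "parameters": {
            "type": "object",
            "properties": {
                "cmd": { "type": "string", "description": "The shell command to execute." }
            },
            "required": ["cmd"],
            "additionalProperties": false
        }
    });
    println!("{}", serde_json::to_string_pretty(&tool).unwrap());
}
```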

View File

@@ -0,0 +1,5 @@
use serde::Deserialize;
use serde::Serialize;
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, Serialize, Deserialize)]
pub(crate) struct SessionId(pub u32);

View File

@@ -0,0 +1,677 @@
use std::collections::HashMap;
use std::io::ErrorKind;
use std::io::Read;
use std::sync::Arc;
use std::sync::LazyLock;
use std::sync::Mutex as StdMutex;
use std::sync::atomic::AtomicU32;
use portable_pty::CommandBuilder;
use portable_pty::PtySize;
use portable_pty::native_pty_system;
use tokio::sync::Mutex;
use tokio::sync::mpsc;
use tokio::sync::oneshot;
use tokio::time::Duration;
use tokio::time::Instant;
use tokio::time::timeout;
use crate::exec_command::exec_command_params::ExecCommandParams;
use crate::exec_command::exec_command_params::WriteStdinParams;
use crate::exec_command::exec_command_session::ExecCommandSession;
use crate::exec_command::session_id::SessionId;
use codex_protocol::models::FunctionCallOutputPayload;
pub static SESSION_MANAGER: LazyLock<SessionManager> = LazyLock::new(SessionManager::default);
#[derive(Debug, Default)]
pub struct SessionManager {
next_session_id: AtomicU32,
sessions: Mutex<HashMap<SessionId, ExecCommandSession>>,
}
#[derive(Debug)]
pub struct ExecCommandOutput {
wall_time: Duration,
exit_status: ExitStatus,
original_token_count: Option<u64>,
output: String,
}
impl ExecCommandOutput {
fn to_text_output(&self) -> String {
let wall_time_secs = self.wall_time.as_secs_f32();
let termination_status = match self.exit_status {
ExitStatus::Exited(code) => format!("Process exited with code {code}"),
ExitStatus::Ongoing(session_id) => {
format!("Process running with session ID {}", session_id.0)
}
};
let truncation_status = match self.original_token_count {
Some(tokens) => {
format!("\nWarning: truncated output (original token count: {tokens})")
}
None => "".to_string(),
};
format!(
r#"Wall time: {wall_time_secs:.3} seconds
{termination_status}{truncation_status}
Output:
{output}"#,
output = self.output
)
}
}
#[derive(Debug)]
pub enum ExitStatus {
Exited(i32),
Ongoing(SessionId),
}
pub fn result_into_payload(result: Result<ExecCommandOutput, String>) -> FunctionCallOutputPayload {
match result {
Ok(output) => FunctionCallOutputPayload {
content: output.to_text_output(),
success: Some(true),
},
Err(err) => FunctionCallOutputPayload {
content: err,
success: Some(false),
},
}
}
impl SessionManager {
/// Processes the request and returns the output collected within the yield
/// window, or an error message.
pub async fn handle_exec_command_request(
&self,
params: ExecCommandParams,
) -> Result<ExecCommandOutput, String> {
// Allocate a session id.
let session_id = SessionId(
self.next_session_id
.fetch_add(1, std::sync::atomic::Ordering::SeqCst),
);
let (session, mut exit_rx) =
create_exec_command_session(params.clone())
.await
.map_err(|err| {
format!(
"failed to create exec command session for session id {}: {err}",
session_id.0
)
})?;
// Insert into session map.
let mut output_rx = session.output_receiver();
self.sessions.lock().await.insert(session_id, session);
// Collect output until either timeout expires or process exits.
// Do not cap during collection; truncate at the end if needed.
// Use a modest initial capacity to avoid large preallocation.
let cap_bytes_u64 = params.max_output_tokens.saturating_mul(4);
let cap_bytes: usize = cap_bytes_u64.min(usize::MAX as u64) as usize;
let mut collected: Vec<u8> = Vec::with_capacity(4096);
let start_time = Instant::now();
let deadline = start_time + Duration::from_millis(params.yield_time_ms);
let mut exit_code: Option<i32> = None;
loop {
if Instant::now() >= deadline {
break;
}
let remaining = deadline.saturating_duration_since(Instant::now());
tokio::select! {
biased;
exit = &mut exit_rx => {
exit_code = exit.ok();
// Small grace period to pull remaining buffered output
let grace_deadline = Instant::now() + Duration::from_millis(25);
while Instant::now() < grace_deadline {
match timeout(Duration::from_millis(1), output_rx.recv()).await {
Ok(Ok(chunk)) => {
collected.extend_from_slice(&chunk);
}
Ok(Err(tokio::sync::broadcast::error::RecvError::Lagged(_))) => {
// Skip missed messages; keep trying within grace period.
continue;
}
Ok(Err(tokio::sync::broadcast::error::RecvError::Closed)) => break,
Err(_) => break,
}
}
break;
}
chunk = timeout(remaining, output_rx.recv()) => {
match chunk {
Ok(Ok(chunk)) => {
collected.extend_from_slice(&chunk);
}
Ok(Err(tokio::sync::broadcast::error::RecvError::Lagged(_))) => {
// Skip missed messages; continue collecting fresh output.
}
Ok(Err(tokio::sync::broadcast::error::RecvError::Closed)) => { break; }
Err(_) => { break; }
}
}
}
}
let output = String::from_utf8_lossy(&collected).to_string();
let exit_status = if let Some(code) = exit_code {
ExitStatus::Exited(code)
} else {
ExitStatus::Ongoing(session_id)
};
// If output exceeds cap, truncate the middle and record original token estimate.
let (output, original_token_count) = truncate_middle(&output, cap_bytes);
Ok(ExecCommandOutput {
wall_time: Instant::now().duration_since(start_time),
exit_status,
original_token_count,
output,
})
}
/// Write characters to a session's stdin and collect combined output for up to `yield_time_ms`.
pub async fn handle_write_stdin_request(
&self,
params: WriteStdinParams,
) -> Result<ExecCommandOutput, String> {
let WriteStdinParams {
session_id,
chars,
yield_time_ms,
max_output_tokens,
} = params;
// Grab handles without holding the sessions lock across await points.
let (writer_tx, mut output_rx) = {
let sessions = self.sessions.lock().await;
match sessions.get(&session_id) {
Some(session) => (session.writer_sender(), session.output_receiver()),
None => {
return Err(format!("unknown session id {}", session_id.0));
}
}
};
// Write stdin if provided.
if !chars.is_empty() && writer_tx.send(chars.into_bytes()).await.is_err() {
return Err("failed to write to stdin".to_string());
}
// Collect output up to yield_time_ms, truncating to max_output_tokens bytes.
let mut collected: Vec<u8> = Vec::with_capacity(4096);
let start_time = Instant::now();
let deadline = start_time + Duration::from_millis(yield_time_ms);
loop {
let now = Instant::now();
if now >= deadline {
break;
}
let remaining = deadline - now;
match timeout(remaining, output_rx.recv()).await {
Ok(Ok(chunk)) => {
// Collect all output within the time budget; truncate at the end.
collected.extend_from_slice(&chunk);
}
Ok(Err(tokio::sync::broadcast::error::RecvError::Lagged(_))) => {
// Skip missed messages; continue collecting fresh output.
}
Ok(Err(tokio::sync::broadcast::error::RecvError::Closed)) => break,
Err(_) => break, // timeout
}
}
// Return structured output, truncating middle if over cap.
let output = String::from_utf8_lossy(&collected).to_string();
let cap_bytes_u64 = max_output_tokens.saturating_mul(4);
let cap_bytes: usize = cap_bytes_u64.min(usize::MAX as u64) as usize;
let (output, original_token_count) = truncate_middle(&output, cap_bytes);
Ok(ExecCommandOutput {
wall_time: Instant::now().duration_since(start_time),
exit_status: ExitStatus::Ongoing(session_id),
original_token_count,
output,
})
}
}
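Both handlers share the same collection shape: read from a broadcast receiver until a fixed deadline, tolerate `Lagged`, and stop early on `Closed`. A reduced, runnable sketch of that loop (toy producer instead of a PTY):
```rust
use tokio::sync::broadcast;
use tokio::time::{Duration, Instant, timeout};

#[tokio::main(flavor = "current_thread")]
async fn main() {
    let (tx, mut rx) = broadcast::channel::<Vec<u8>>(16);
    tokio::spawn(async move {
        for i in 0..3 {
            let _ = tx.send(format!("chunk {i}\n").into_bytes());
            tokio::time::sleep(Duration::from_millis(10)).await;
        }
        // Sender dropped here => the receiver eventually sees RecvError::Closed.
    });

    let deadline = Instant::now() + Duration::from_millis(200);
    let mut collected = Vec::new();
    loop {
        let now = Instant::now();
        if now >= deadline {
            break;
        }
        match timeout(deadline - now, rx.recv()).await {
            Ok(Ok(chunk)) => collected.extend_from_slice(&chunk),
            Ok(Err(broadcast::error::RecvError::Lagged(_))) => continue, // skip missed chunks
            Ok(Err(broadcast::error::RecvError::Closed)) => break,
            Err(_) => break, // deadline hit while waiting
        }
    }
    print!("{}", String::from_utf8_lossy(&collected));
}
```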
/// Spawn a PTY and a child process for a new exec_command session.
async fn create_exec_command_session(
params: ExecCommandParams,
) -> anyhow::Result<(ExecCommandSession, oneshot::Receiver<i32>)> {
let ExecCommandParams {
cmd,
yield_time_ms: _,
max_output_tokens: _,
shell,
login,
} = params;
// Use the native pty implementation for the system
let pty_system = native_pty_system();
// Create a new pty
let pair = pty_system.openpty(PtySize {
rows: 24,
cols: 80,
pixel_width: 0,
pixel_height: 0,
})?;
// Spawn a shell into the pty
let mut command_builder = CommandBuilder::new(shell);
let shell_mode_opt = if login { "-lc" } else { "-c" };
command_builder.arg(shell_mode_opt);
command_builder.arg(cmd);
let mut child = pair.slave.spawn_command(command_builder)?;
// Obtain a killer that can signal the process independently of `.wait()`.
let killer = child.clone_killer();
// Channel to forward write requests to the PTY writer.
let (writer_tx, mut writer_rx) = mpsc::channel::<Vec<u8>>(128);
// Broadcast for streaming PTY output to readers: subscribers receive from subscription time.
let (output_tx, _) = tokio::sync::broadcast::channel::<Vec<u8>>(256);
// Reader task: drain PTY and forward chunks to output channel.
let mut reader = pair.master.try_clone_reader()?;
let output_tx_clone = output_tx.clone();
let reader_handle = tokio::task::spawn_blocking(move || {
let mut buf = [0u8; 8192];
loop {
match reader.read(&mut buf) {
Ok(0) => break, // EOF
Ok(n) => {
// Forward to broadcast; best-effort if there are subscribers.
let _ = output_tx_clone.send(buf[..n].to_vec());
}
Err(ref e) if e.kind() == ErrorKind::Interrupted => {
// Retry on EINTR
continue;
}
Err(ref e) if e.kind() == ErrorKind::WouldBlock => {
// We're in a blocking thread; back off briefly and retry.
std::thread::sleep(Duration::from_millis(5));
continue;
}
Err(_) => break,
}
}
});
// Writer task: apply stdin writes to the PTY writer.
let writer = pair.master.take_writer()?;
let writer = Arc::new(StdMutex::new(writer));
let writer_handle = tokio::spawn({
let writer = writer.clone();
async move {
while let Some(bytes) = writer_rx.recv().await {
let writer = writer.clone();
// Perform blocking write on a blocking thread.
let _ = tokio::task::spawn_blocking(move || {
if let Ok(mut guard) = writer.lock() {
use std::io::Write;
let _ = guard.write_all(&bytes);
let _ = guard.flush();
}
})
.await;
}
}
});
// Keep the child alive until it exits, then signal exit code.
let (exit_tx, exit_rx) = oneshot::channel::<i32>();
let wait_handle = tokio::task::spawn_blocking(move || {
let code = match child.wait() {
Ok(status) => status.exit_code() as i32,
Err(_) => -1,
};
let _ = exit_tx.send(code);
});
// Create and store the session with channels.
let session = ExecCommandSession::new(
writer_tx,
output_tx,
killer,
reader_handle,
writer_handle,
wait_handle,
);
Ok((session, exit_rx))
}
/// Truncate the middle of a UTF-8 string to at most `max_bytes` bytes,
/// preserving the beginning and the end. Returns the possibly truncated
/// string and `Some(original_token_count)` (estimated at 4 bytes/token)
/// if truncation occurred; otherwise returns the original string and `None`.
fn truncate_middle(s: &str, max_bytes: usize) -> (String, Option<u64>) {
// No truncation needed
if s.len() <= max_bytes {
return (s.to_string(), None);
}
let est_tokens = (s.len() as u64).div_ceil(4);
if max_bytes == 0 {
// Cannot keep any content; still return a full marker (never truncated).
return (
format!("{} tokens truncated…", est_tokens),
Some(est_tokens),
);
}
// Helper to truncate a string to a given byte length on a char boundary.
fn truncate_on_boundary(input: &str, max_len: usize) -> &str {
if input.len() <= max_len {
return input;
}
let mut end = max_len;
while end > 0 && !input.is_char_boundary(end) {
end -= 1;
}
&input[..end]
}
// Given a left/right budget, prefer newline boundaries; otherwise fall back
// to UTF-8 char boundaries.
fn pick_prefix_end(s: &str, left_budget: usize) -> usize {
if let Some(head) = s.get(..left_budget)
&& let Some(i) = head.rfind('\n')
{
return i + 1; // keep the newline so the marker starts on a fresh line
}
truncate_on_boundary(s, left_budget).len()
}
fn pick_suffix_start(s: &str, right_budget: usize) -> usize {
let start_tail = s.len().saturating_sub(right_budget);
if let Some(tail) = s.get(start_tail..)
&& let Some(i) = tail.find('\n')
{
return start_tail + i + 1; // start after newline
}
// Fall back to a char boundary at or after start_tail.
let mut idx = start_tail.min(s.len());
while idx < s.len() && !s.is_char_boundary(idx) {
idx += 1;
}
idx
}
// Refine marker length and budgets until stable. Marker is never truncated.
let mut guess_tokens = est_tokens; // worst-case: everything truncated
for _ in 0..4 {
let marker = format!("{} tokens truncated…", guess_tokens);
let marker_len = marker.len();
let keep_budget = max_bytes.saturating_sub(marker_len);
if keep_budget == 0 {
// No room for any content within the cap; return a full, untruncated marker
// that reflects the entire truncated content.
return (
format!("{} tokens truncated…", est_tokens),
Some(est_tokens),
);
}
let left_budget = keep_budget / 2;
let right_budget = keep_budget - left_budget;
let prefix_end = pick_prefix_end(s, left_budget);
let mut suffix_start = pick_suffix_start(s, right_budget);
if suffix_start < prefix_end {
suffix_start = prefix_end;
}
let kept_content_bytes = prefix_end + (s.len() - suffix_start);
let truncated_content_bytes = s.len().saturating_sub(kept_content_bytes);
let new_tokens = (truncated_content_bytes as u64).div_ceil(4);
if new_tokens == guess_tokens {
let mut out = String::with_capacity(marker_len + kept_content_bytes + 1);
out.push_str(&s[..prefix_end]);
out.push_str(&marker);
// Place marker on its own line for symmetry when we keep line boundaries.
out.push('\n');
out.push_str(&s[suffix_start..]);
return (out, Some(est_tokens));
}
guess_tokens = new_tokens;
}
// Fallback: use last guess to build output.
let marker = format!("{} tokens truncated…", guess_tokens);
let marker_len = marker.len();
let keep_budget = max_bytes.saturating_sub(marker_len);
if keep_budget == 0 {
return (
format!("{} tokens truncated…", est_tokens),
Some(est_tokens),
);
}
let left_budget = keep_budget / 2;
let right_budget = keep_budget - left_budget;
let prefix_end = pick_prefix_end(s, left_budget);
let suffix_start = pick_suffix_start(s, right_budget);
let mut out = String::with_capacity(marker_len + prefix_end + (s.len() - suffix_start) + 1);
out.push_str(&s[..prefix_end]);
out.push_str(&marker);
out.push('\n');
out.push_str(&s[suffix_start..]);
(out, Some(est_tokens))
}
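To make the refinement loop concrete, here is the arithmetic behind the 20-line test further down (80 input bytes, 64-byte cap), written as a runnable check:
```rust
fn main() {
    let input_len: usize = 80; // 20 lines of "NNN\n"
    let cap: usize = 64;
    // The guess stabilizes at 12 tokens after one refinement pass.
    let marker = "…12 tokens truncated…";
    assert_eq!(marker.len(), 25); // "…" is 3 bytes in UTF-8
    let keep = cap - marker.len(); // 39 bytes of content budget
    let (left, right) = (keep / 2, keep - keep / 2);
    assert_eq!((left, right), (19, 20));
    // Newline-aligned cuts keep 4 lines (16 bytes) on each side,
    // so 80 - 32 = 48 bytes are dropped: ceil(48 / 4) = 12 estimated tokens,
    // which matches the marker and terminates the loop.
    let kept = 16 + 16;
    assert_eq!((input_len - kept).div_ceil(4), 12);
}
```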
#[cfg(test)]
mod tests {
use super::*;
use crate::exec_command::session_id::SessionId;
/// Test that verifies that [`SessionManager::handle_exec_command_request()`]
/// and [`SessionManager::handle_write_stdin_request()`] work as expected
/// in the presence of a process that never terminates (but produces
/// output continuously).
#[cfg(unix)]
#[allow(clippy::print_stderr)]
#[tokio::test(flavor = "multi_thread", worker_threads = 4)]
async fn session_manager_streams_and_truncates_from_now() {
use crate::exec_command::exec_command_params::ExecCommandParams;
use crate::exec_command::exec_command_params::WriteStdinParams;
use tokio::time::sleep;
let session_manager = SessionManager::default();
// Long-running loop that prints an increasing counter every ~100ms.
// Use Python for a portable, reliable sleep across shells/PTYs.
let cmd = r#"python3 - <<'PY'
import sys, time
count = 0
while True:
print(count)
sys.stdout.flush()
count += 100
time.sleep(0.1)
PY"#
.to_string();
// Start the session and collect ~3s of output.
let params = ExecCommandParams {
cmd,
yield_time_ms: 3_000,
max_output_tokens: 1_000, // large enough to avoid truncation here
shell: "/bin/bash".to_string(),
login: false,
};
let initial_output = match session_manager
.handle_exec_command_request(params.clone())
.await
{
Ok(v) => v,
Err(e) => {
// PTY may be restricted in some sandboxes; skip in that case.
if e.contains("openpty") || e.contains("Operation not permitted") {
eprintln!("skipping test due to restricted PTY: {e}");
return;
}
panic!("exec request failed unexpectedly: {e}");
}
};
eprintln!("initial output: {initial_output:?}");
// Should be ongoing (we launched a never-ending loop).
let session_id = match initial_output.exit_status {
ExitStatus::Ongoing(id) => id,
_ => panic!("expected ongoing session"),
};
// Parse the numeric lines and get the max observed value in the first window.
let first_nums = extract_monotonic_numbers(&initial_output.output);
assert!(
!first_nums.is_empty(),
"expected some output from first window"
);
let first_max = *first_nums.iter().max().unwrap();
// Wait ~4s so counters progress while we're not reading.
sleep(Duration::from_millis(4_000)).await;
// Now read ~3s of output "from now" only.
// Use a small token cap so truncation occurs and we test middle truncation.
let write_params = WriteStdinParams {
session_id,
chars: String::new(),
yield_time_ms: 3_000,
max_output_tokens: 16, // 16 tokens ~= 64 bytes -> likely truncation
};
let second = session_manager
.handle_write_stdin_request(write_params)
.await
.expect("write stdin should succeed");
// Verify truncation metadata and size bound (cap is tokens*4 bytes).
assert!(second.original_token_count.is_some());
let cap_bytes = (16u64 * 4) as usize;
assert!(second.output.len() <= cap_bytes);
// New middle marker should be present.
assert!(
second.output.contains("tokens truncated") && second.output.contains('…'),
"expected truncation marker in output, got: {}",
second.output
);
// Minimal freshness check: the earliest number we see in the second window
// should be significantly larger than the last from the first window.
let second_nums = extract_monotonic_numbers(&second.output);
assert!(
!second_nums.is_empty(),
"expected some numeric output from second window"
);
let second_min = *second_nums.iter().min().unwrap();
// We slept 4 seconds (~40 ticks at 100ms/tick, each +100), so expect
// an increase of roughly 4000 or more. Allow a generous margin.
assert!(
second_min >= first_max + 2000,
"second_min={second_min} first_max={first_max}",
);
}
#[cfg(unix)]
fn extract_monotonic_numbers(s: &str) -> Vec<i64> {
s.lines()
.filter_map(|line| {
if !line.is_empty()
&& line.chars().all(|c| c.is_ascii_digit())
&& let Ok(n) = line.parse::<i64>()
{
// Our generator increments by 100; ignore spurious fragments.
if n % 100 == 0 {
return Some(n);
}
}
None
})
.collect()
}
#[test]
fn to_text_output_exited_no_truncation() {
let out = ExecCommandOutput {
wall_time: Duration::from_millis(1234),
exit_status: ExitStatus::Exited(0),
original_token_count: None,
output: "hello".to_string(),
};
let text = out.to_text_output();
let expected = r#"Wall time: 1.234 seconds
Process exited with code 0
Output:
hello"#;
assert_eq!(expected, text);
}
#[test]
fn to_text_output_ongoing_with_truncation() {
let out = ExecCommandOutput {
wall_time: Duration::from_millis(500),
exit_status: ExitStatus::Ongoing(SessionId(42)),
original_token_count: Some(1000),
output: "abc".to_string(),
};
let text = out.to_text_output();
let expected = r#"Wall time: 0.500 seconds
Process running with session ID 42
Warning: truncated output (original token count: 1000)
Output:
abc"#;
assert_eq!(expected, text);
}
#[test]
fn truncate_middle_no_newlines_fallback() {
// A long string with no newlines that exceeds the cap.
let s = "abcdefghijklmnopqrstuvwxyz0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ";
let max_bytes = 16; // force truncation
let (out, original) = truncate_middle(s, max_bytes);
// For very small caps, we return the full, untruncated marker,
// even if it exceeds the cap.
assert_eq!(out, "…16 tokens truncated…");
// Original string length is 62 bytes => ceil(62/4) = 16 tokens.
assert_eq!(original, Some(16));
}
#[test]
fn truncate_middle_prefers_newline_boundaries() {
// Build a multi-line string of 20 numbered lines (each "NNN\n").
let mut s = String::new();
for i in 1..=20 {
s.push_str(&format!("{i:03}\n"));
}
// Total length: 20 lines * 4 bytes per line = 80 bytes.
assert_eq!(s.len(), 80);
// Choose a cap that forces truncation while leaving room for
// a few lines on each side after accounting for the marker.
let max_bytes = 64;
// Expect exact output: first 4 lines, marker, last 4 lines, and correct token estimate (80/4 = 20).
assert_eq!(
truncate_middle(&s, max_bytes),
(
r#"001
002
003
004
…12 tokens truncated…
017
018
019
020
"#
.to_string(),
Some(20)
)
);
}
}

View File

@@ -1,5 +1,6 @@
use std::collections::HashSet;
use std::path::Path;
use std::path::PathBuf;
use codex_protocol::mcp_protocol::GitSha;
use futures::future::join_all;
@@ -425,6 +426,38 @@ async fn diff_against_sha(cwd: &Path, sha: &GitSha) -> Option<String> {
Some(diff)
}
/// Resolve the path that should be used for trust checks. Similar to
/// [`utils::is_inside_git_repo`], but resolves to the root of the main
/// repository. Handles worktrees.
pub fn resolve_root_git_project_for_trust(cwd: &Path) -> Option<PathBuf> {
let base = if cwd.is_dir() { cwd } else { cwd.parent()? };
// TODO: we should make this async, but it's primarily used deep in
// callstacks of sync code, and should almost always be fast
let git_dir_out = std::process::Command::new("git")
.args(["rev-parse", "--git-common-dir"])
.current_dir(base)
.output()
.ok()?;
if !git_dir_out.status.success() {
return None;
}
let git_dir_s = String::from_utf8(git_dir_out.stdout)
.ok()?
.trim()
.to_string();
let git_dir_path_raw = if Path::new(&git_dir_s).is_absolute() {
PathBuf::from(&git_dir_s)
} else {
base.join(&git_dir_s)
};
// Normalize to handle macOS /var vs /private/var and resolve ".." segments.
let git_dir_path = std::fs::canonicalize(&git_dir_path_raw).unwrap_or(git_dir_path_raw);
git_dir_path.parent().map(Path::to_path_buf)
}
#[cfg(test)]
mod tests {
use super::*;
@@ -732,6 +765,80 @@ mod tests {
assert_eq!(state.sha, GitSha::new(&remote_sha));
}
#[test]
fn resolve_root_git_project_for_trust_returns_none_outside_repo() {
let tmp = TempDir::new().expect("tempdir");
assert!(resolve_root_git_project_for_trust(tmp.path()).is_none());
}
#[tokio::test]
async fn resolve_root_git_project_for_trust_regular_repo_returns_repo_root() {
let temp_dir = TempDir::new().expect("Failed to create temp dir");
let repo_path = create_test_git_repo(&temp_dir).await;
let expected = std::fs::canonicalize(&repo_path).unwrap().to_path_buf();
assert_eq!(
resolve_root_git_project_for_trust(&repo_path),
Some(expected.clone())
);
let nested = repo_path.join("sub/dir");
std::fs::create_dir_all(&nested).unwrap();
assert_eq!(
resolve_root_git_project_for_trust(&nested),
Some(expected.clone())
);
}
#[tokio::test]
async fn resolve_root_git_project_for_trust_detects_worktree_and_returns_main_root() {
let temp_dir = TempDir::new().expect("Failed to create temp dir");
let repo_path = create_test_git_repo(&temp_dir).await;
// Create a linked worktree
let wt_root = temp_dir.path().join("wt");
let _ = std::process::Command::new("git")
.args([
"worktree",
"add",
wt_root.to_str().unwrap(),
"-b",
"feature/x",
])
.current_dir(&repo_path)
.output()
.expect("git worktree add");
let expected = std::fs::canonicalize(&repo_path).ok();
let got = resolve_root_git_project_for_trust(&wt_root)
.and_then(|p| std::fs::canonicalize(p).ok());
assert_eq!(got, expected);
let nested = wt_root.join("nested/sub");
std::fs::create_dir_all(&nested).unwrap();
let got_nested =
resolve_root_git_project_for_trust(&nested).and_then(|p| std::fs::canonicalize(p).ok());
assert_eq!(got_nested, expected);
}
#[test]
fn resolve_root_git_project_for_trust_non_worktrees_gitdir_returns_none() {
let tmp = TempDir::new().expect("tempdir");
let proj = tmp.path().join("proj");
std::fs::create_dir_all(proj.join("nested")).unwrap();
// `.git` is a file but does not point to a worktrees path
std::fs::write(
proj.join(".git"),
format!(
"gitdir: {}\n",
tmp.path().join("some/other/location").display()
),
)
.unwrap();
assert!(resolve_root_git_project_for_trust(&proj).is_none());
assert!(resolve_root_git_project_for_trust(&proj.join("nested")).is_none());
}
#[tokio::test]
async fn test_get_git_working_tree_state_unpushed_commit() {
let temp_dir = TempDir::new().expect("Failed to create temp dir");

View File

@@ -20,6 +20,7 @@ mod conversation_history;
mod environment_context;
pub mod error;
pub mod exec;
mod exec_command;
pub mod exec_env;
mod flags;
pub mod git_info;
@@ -39,17 +40,17 @@ mod conversation_manager;
pub use conversation_manager::ConversationManager;
pub use conversation_manager::NewConversation;
pub mod model_family;
mod models;
mod openai_model_info;
mod openai_tools;
pub mod plan_tool;
mod project_doc;
pub mod project_doc;
mod rollout;
pub(crate) mod safety;
pub mod seatbelt;
pub mod shell;
pub mod spawn;
pub mod terminal;
mod tool_apply_patch;
pub mod turn_diff_tracker;
pub mod user_agent;
mod user_notification;

View File

@@ -4,13 +4,13 @@ use std::time::Instant;
use tracing::error;
use crate::codex::Session;
use crate::models::FunctionCallOutputPayload;
use crate::models::ResponseInputItem;
use crate::protocol::Event;
use crate::protocol::EventMsg;
use crate::protocol::McpInvocation;
use crate::protocol::McpToolCallBeginEvent;
use crate::protocol::McpToolCallEndEvent;
use codex_protocol::models::FunctionCallOutputPayload;
use codex_protocol::models::ResponseInputItem;
/// Handles the specified tool call and dispatches the appropriate
/// `McpToolCallBegin` and `McpToolCallEnd` events to the `Session`.

View File

@@ -1,3 +1,5 @@
use crate::tool_apply_patch::ApplyPatchToolType;
/// A model family is a group of models that share certain characteristics.
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
pub struct ModelFamily {
@@ -24,9 +26,9 @@ pub struct ModelFamily {
// See https://platform.openai.com/docs/guides/tools-local-shell
pub uses_local_shell_tool: bool,
/// True if the model performs better when `apply_patch` is provided as
/// a tool call instead of just a bash command.
pub uses_apply_patch_tool: bool,
/// Present if the model performs better when `apply_patch` is provided as
/// a tool call instead of just a bash command
pub apply_patch_tool_type: Option<ApplyPatchToolType>,
}
macro_rules! model_family {
@@ -40,7 +42,7 @@ macro_rules! model_family {
needs_special_apply_patch_instructions: false,
supports_reasoning_summaries: false,
uses_local_shell_tool: false,
uses_apply_patch_tool: false,
apply_patch_tool_type: None,
};
// apply overrides
$(
@@ -60,7 +62,7 @@ macro_rules! simple_model_family {
needs_special_apply_patch_instructions: false,
supports_reasoning_summaries: false,
uses_local_shell_tool: false,
uses_apply_patch_tool: false,
apply_patch_tool_type: None,
})
}};
}
@@ -88,6 +90,7 @@ pub fn find_family_for_model(slug: &str) -> Option<ModelFamily> {
model_family!(
slug, slug,
supports_reasoning_summaries: true,
apply_patch_tool_type: Some(ApplyPatchToolType::Freeform),
)
} else if slug.starts_with("gpt-4.1") {
model_family!(
@@ -95,7 +98,7 @@ pub fn find_family_for_model(slug: &str) -> Option<ModelFamily> {
needs_special_apply_patch_instructions: true,
)
} else if slug.starts_with("gpt-oss") {
model_family!(slug, "gpt-oss", uses_apply_patch_tool: true)
model_family!(slug, "gpt-oss", apply_patch_tool_type: Some(ApplyPatchToolType::Function))
} else if slug.starts_with("gpt-4o") {
simple_model_family!(slug, "gpt-4o")
} else if slug.starts_with("gpt-3.5") {
@@ -104,6 +107,7 @@ pub fn find_family_for_model(slug: &str) -> Option<ModelFamily> {
model_family!(
slug, "gpt-5",
supports_reasoning_summaries: true,
apply_patch_tool_type: Some(ApplyPatchToolType::Freeform),
)
} else {
None

View File

@@ -9,6 +9,9 @@ use crate::model_family::ModelFamily;
use crate::plan_tool::PLAN_TOOL;
use crate::protocol::AskForApproval;
use crate::protocol::SandboxPolicy;
use crate::tool_apply_patch::ApplyPatchToolType;
use crate::tool_apply_patch::create_apply_patch_freeform_tool;
use crate::tool_apply_patch::create_apply_patch_json_tool;
#[derive(Debug, Clone, Serialize, PartialEq)]
pub struct ResponsesApiTool {
@@ -21,6 +24,20 @@ pub struct ResponsesApiTool {
pub(crate) parameters: JsonSchema,
}
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
pub struct FreeformTool {
pub(crate) name: String,
pub(crate) description: String,
pub(crate) format: FreeformToolFormat,
}
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
pub struct FreeformToolFormat {
pub(crate) r#type: String,
pub(crate) syntax: String,
pub(crate) definition: String,
}
/// When serialized as JSON, this produces a valid "Tool" in the OpenAI
/// Responses API.
#[derive(Debug, Clone, Serialize, PartialEq)]
@@ -30,6 +47,10 @@ pub(crate) enum OpenAiTool {
Function(ResponsesApiTool),
#[serde(rename = "local_shell")]
LocalShell {},
#[serde(rename = "web_search")]
WebSearch {},
#[serde(rename = "custom")]
Freeform(FreeformTool),
}
#[derive(Debug, Clone)]
@@ -37,13 +58,15 @@ pub enum ConfigShellToolType {
DefaultShell,
ShellWithRequest { sandbox_policy: SandboxPolicy },
LocalShell,
StreamableShell,
}
#[derive(Debug, Clone)]
pub struct ToolsConfig {
pub shell_type: ConfigShellToolType,
pub plan_tool: bool,
pub apply_patch_tool: bool,
pub apply_patch_tool_type: Option<ApplyPatchToolType>,
pub web_search_request: bool,
}
impl ToolsConfig {
@@ -53,22 +76,39 @@ impl ToolsConfig {
sandbox_policy: SandboxPolicy,
include_plan_tool: bool,
include_apply_patch_tool: bool,
include_web_search_request: bool,
use_streamable_shell_tool: bool,
) -> Self {
let mut shell_type = if model_family.uses_local_shell_tool {
let mut shell_type = if use_streamable_shell_tool {
ConfigShellToolType::StreamableShell
} else if model_family.uses_local_shell_tool {
ConfigShellToolType::LocalShell
} else {
ConfigShellToolType::DefaultShell
};
if matches!(approval_policy, AskForApproval::OnRequest) {
if matches!(approval_policy, AskForApproval::OnRequest) && !use_streamable_shell_tool {
shell_type = ConfigShellToolType::ShellWithRequest {
sandbox_policy: sandbox_policy.clone(),
}
}
let apply_patch_tool_type = match model_family.apply_patch_tool_type {
Some(ApplyPatchToolType::Freeform) => Some(ApplyPatchToolType::Freeform),
Some(ApplyPatchToolType::Function) => Some(ApplyPatchToolType::Function),
None => {
if include_apply_patch_tool {
Some(ApplyPatchToolType::Freeform)
} else {
None
}
}
};
Self {
shell_type,
plan_tool: include_plan_tool,
apply_patch_tool: include_apply_patch_tool || model_family.uses_apply_patch_tool,
apply_patch_tool_type,
web_search_request: include_web_search_request,
}
}
}
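The precedence in `ToolsConfig::new` is strict: the streamable shell wins outright, then a model family's local shell, then the default; the sandbox-aware `ShellWithRequest` variant only applies when approvals are on-request and the streamable shell is off. A boiled-down sketch (hypothetical names, booleans standing in for the real config types):
```rust
#[derive(Debug, PartialEq)]
enum ShellType {
    Default,
    WithRequest,
    Local,
    Streamable,
}

fn pick(streamable: bool, local_shell_model: bool, on_request: bool) -> ShellType {
    let mut shell = if streamable {
        ShellType::Streamable
    } else if local_shell_model {
        ShellType::Local
    } else {
        ShellType::Default
    };
    // On-request approvals upgrade the shell, but never displace the streamable tool.
    if on_request && !streamable {
        shell = ShellType::WithRequest;
    }
    shell
}

fn main() {
    assert_eq!(pick(true, true, true), ShellType::Streamable);
    assert_eq!(pick(false, true, true), ShellType::WithRequest);
    assert_eq!(pick(false, true, false), ShellType::Local);
    assert_eq!(pick(false, false, false), ShellType::Default);
}
```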
@@ -115,16 +155,20 @@ fn create_shell_tool() -> OpenAiTool {
"command".to_string(),
JsonSchema::Array {
items: Box::new(JsonSchema::String { description: None }),
description: None,
description: Some("The command to execute".to_string()),
},
);
properties.insert(
"workdir".to_string(),
JsonSchema::String { description: None },
JsonSchema::String {
description: Some("The working directory to execute the command in".to_string()),
},
);
properties.insert(
"timeout".to_string(),
JsonSchema::Number { description: None },
"timeout_ms".to_string(),
JsonSchema::Number {
description: Some("The timeout for the command in milliseconds".to_string()),
},
);
OpenAiTool::Function(ResponsesApiTool {
@@ -155,7 +199,7 @@ fn create_shell_tool_for_sandbox(sandbox_policy: &SandboxPolicy) -> OpenAiTool {
},
);
properties.insert(
"timeout".to_string(),
"timeout_ms".to_string(),
JsonSchema::Number {
description: Some("The timeout for the command in milliseconds".to_string()),
},
@@ -171,7 +215,7 @@ fn create_shell_tool_for_sandbox(sandbox_policy: &SandboxPolicy) -> OpenAiTool {
properties.insert(
"justification".to_string(),
JsonSchema::String {
description: Some("Only set if ask_for_escalated_permissions is true. 1-sentence explanation of why we want to run this command.".to_string()),
description: Some("Only set if with_escalated_permissions is true. 1-sentence explanation of why we want to run this command.".to_string()),
},
);
}
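
With the `timeout_ms` rename, a call to the sandbox-aware shell tool would carry arguments shaped roughly like this (hypothetical values):

```json
{
  "command": ["cargo", "test", "-p", "codex-core"],
  "workdir": "/repo/codex-rs",
  "timeout_ms": 60000,
  "with_escalated_permissions": true,
  "justification": "Tests need to write outside the sandbox root."
}
```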
@@ -237,92 +281,16 @@ The shell tool is used to execute shell commands.
},
})
}
/// TODO(dylan): deprecate once we get rid of json tool
#[derive(Serialize, Deserialize)]
pub(crate) struct ApplyPatchToolArgs {
pub(crate) input: String,
}
fn create_apply_patch_tool() -> OpenAiTool {
// Minimal schema: one required string argument containing the patch body
let mut properties = BTreeMap::new();
properties.insert(
"input".to_string(),
JsonSchema::String {
description: Some(r#"The entire contents of the apply_patch command"#.to_string()),
},
);
OpenAiTool::Function(ResponsesApiTool {
name: "apply_patch".to_string(),
description: r#"Use this tool to edit files.
Your patch language is a stripped-down, file-oriented diff format designed to be easy to parse and safe to apply. You can think of it as a high-level envelope:
*** Begin Patch
[ one or more file sections ]
*** End Patch
Within that envelope, you get a sequence of file operations.
You MUST include a header to specify the action you are taking.
Each operation starts with one of three headers:
*** Add File: <path> - create a new file. Every following line is a + line (the initial contents).
*** Delete File: <path> - remove an existing file. Nothing follows.
*** Update File: <path> - patch an existing file in place (optionally with a rename).
May be immediately followed by *** Move to: <new path> if you want to rename the file.
Then one or more “hunks”, each introduced by @@ (optionally followed by a hunk header).
Within a hunk each line starts with:
+ for inserted text,
- for removed text, or
space ( ) for context.
At the end of a truncated hunk you can emit *** End of File.
Patch := Begin { FileOp } End
Begin := "*** Begin Patch" NEWLINE
End := "*** End Patch" NEWLINE
FileOp := AddFile | DeleteFile | UpdateFile
AddFile := "*** Add File: " path NEWLINE { "+" line NEWLINE }
DeleteFile := "*** Delete File: " path NEWLINE
UpdateFile := "*** Update File: " path NEWLINE [ MoveTo ] { Hunk }
MoveTo := "*** Move to: " newPath NEWLINE
Hunk := "@@" [ header ] NEWLINE { HunkLine } [ "*** End of File" NEWLINE ]
HunkLine := (" " | "-" | "+") text NEWLINE
A full patch can combine several operations:
*** Begin Patch
*** Add File: hello.txt
+Hello world
*** Update File: src/app.py
*** Move to: src/main.py
@@ def greet():
-print("Hi")
+print("Hello, world!")
*** Delete File: obsolete.txt
*** End Patch
It is important to remember:
- You must include a header with your intended action (Add/Delete/Update)
- You must prefix new lines with `+` even when creating a new file
"#
.to_string(),
strict: false,
parameters: JsonSchema::Object {
properties,
required: Some(vec!["input".to_string()]),
additional_properties: Some(false),
},
})
}
/// Returns JSON values that are compatible with Function Calling in the
/// Responses API:
/// https://platform.openai.com/docs/guides/function-calling?api-mode=responses
pub(crate) fn create_tools_json_for_responses_api(
pub fn create_tools_json_for_responses_api(
tools: &Vec<OpenAiTool>,
) -> crate::error::Result<Vec<serde_json::Value>> {
let mut tools_json = Vec::new();
@@ -533,14 +501,33 @@ pub(crate) fn get_openai_tools(
ConfigShellToolType::LocalShell => {
tools.push(OpenAiTool::LocalShell {});
}
ConfigShellToolType::StreamableShell => {
tools.push(OpenAiTool::Function(
crate::exec_command::create_exec_command_tool_for_responses_api(),
));
tools.push(OpenAiTool::Function(
crate::exec_command::create_write_stdin_tool_for_responses_api(),
));
}
}
if config.plan_tool {
tools.push(PLAN_TOOL.clone());
}
if config.apply_patch_tool {
tools.push(create_apply_patch_tool());
if let Some(apply_patch_tool_type) = &config.apply_patch_tool_type {
match apply_patch_tool_type {
ApplyPatchToolType::Freeform => {
tools.push(create_apply_patch_freeform_tool());
}
ApplyPatchToolType::Function => {
tools.push(create_apply_patch_json_tool());
}
}
}
if config.web_search_request {
tools.push(OpenAiTool::WebSearch {});
}
if let Some(mcp_tools) = mcp_tools {
@@ -571,6 +558,8 @@ mod tests {
.map(|tool| match tool {
OpenAiTool::Function(ResponsesApiTool { name, .. }) => name,
OpenAiTool::LocalShell {} => "local_shell",
OpenAiTool::WebSearch {} => "web_search",
OpenAiTool::Freeform(FreeformTool { name, .. }) => name,
})
.collect::<Vec<_>>();
@@ -596,11 +585,13 @@ mod tests {
AskForApproval::Never,
SandboxPolicy::ReadOnly,
true,
model_family.uses_apply_patch_tool,
false,
true,
/*use_experimental_streamable_shell_tool*/ false,
);
let tools = get_openai_tools(&config, Some(HashMap::new()));
assert_eq_tool_names(&tools, &["local_shell", "update_plan"]);
assert_eq_tool_names(&tools, &["local_shell", "update_plan", "web_search"]);
}
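
In request JSON, the three tools asserted above serialize in the same order (a sketch; function schemas elided):

```json
[
  { "type": "local_shell" },
  { "type": "function", "name": "update_plan" },
  { "type": "web_search" }
]
```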
#[test]
@@ -611,11 +602,13 @@ mod tests {
AskForApproval::Never,
SandboxPolicy::ReadOnly,
true,
model_family.uses_apply_patch_tool,
false,
true,
/*use_experimental_streamable_shell_tool*/ false,
);
let tools = get_openai_tools(&config, Some(HashMap::new()));
assert_eq_tool_names(&tools, &["shell", "update_plan"]);
assert_eq_tool_names(&tools, &["shell", "update_plan", "web_search"]);
}
#[test]
@@ -626,7 +619,9 @@ mod tests {
AskForApproval::Never,
SandboxPolicy::ReadOnly,
false,
model_family.uses_apply_patch_tool,
false,
true,
/*use_experimental_streamable_shell_tool*/ false,
);
let tools = get_openai_tools(
&config,
@@ -649,8 +644,8 @@ mod tests {
"number_property": { "type": "number" },
},
"required": [
"string_property",
"number_property"
"string_property".to_string(),
"number_property".to_string()
],
"additionalProperties": Some(false),
},
@@ -666,10 +661,13 @@ mod tests {
)])),
);
assert_eq_tool_names(&tools, &["shell", "test_server/do_something_cool"]);
assert_eq_tool_names(
&tools,
&["shell", "web_search", "test_server/do_something_cool"],
);
assert_eq!(
tools[1],
tools[2],
OpenAiTool::Function(ResponsesApiTool {
name: "test_server/do_something_cool".to_string(),
parameters: JsonSchema::Object {
@@ -720,7 +718,9 @@ mod tests {
AskForApproval::Never,
SandboxPolicy::ReadOnly,
false,
model_family.uses_apply_patch_tool,
false,
true,
/*use_experimental_streamable_shell_tool*/ false,
);
let tools = get_openai_tools(
@@ -746,10 +746,10 @@ mod tests {
)])),
);
assert_eq_tool_names(&tools, &["shell", "dash/search"]);
assert_eq_tool_names(&tools, &["shell", "web_search", "dash/search"]);
assert_eq!(
tools[1],
tools[2],
OpenAiTool::Function(ResponsesApiTool {
name: "dash/search".to_string(),
parameters: JsonSchema::Object {
@@ -776,7 +776,9 @@ mod tests {
AskForApproval::Never,
SandboxPolicy::ReadOnly,
false,
model_family.uses_apply_patch_tool,
false,
true,
/*use_experimental_streamable_shell_tool*/ false,
);
let tools = get_openai_tools(
@@ -800,9 +802,9 @@ mod tests {
)])),
);
assert_eq_tool_names(&tools, &["shell", "dash/paginate"]);
assert_eq_tool_names(&tools, &["shell", "web_search", "dash/paginate"]);
assert_eq!(
tools[1],
tools[2],
OpenAiTool::Function(ResponsesApiTool {
name: "dash/paginate".to_string(),
parameters: JsonSchema::Object {
@@ -827,7 +829,9 @@ mod tests {
AskForApproval::Never,
SandboxPolicy::ReadOnly,
false,
model_family.uses_apply_patch_tool,
false,
true,
/*use_experimental_streamable_shell_tool*/ false,
);
let tools = get_openai_tools(
@@ -851,9 +855,9 @@ mod tests {
)])),
);
assert_eq_tool_names(&tools, &["shell", "dash/tags"]);
assert_eq_tool_names(&tools, &["shell", "web_search", "dash/tags"]);
assert_eq!(
tools[1],
tools[2],
OpenAiTool::Function(ResponsesApiTool {
name: "dash/tags".to_string(),
parameters: JsonSchema::Object {
@@ -881,7 +885,9 @@ mod tests {
AskForApproval::Never,
SandboxPolicy::ReadOnly,
false,
model_family.uses_apply_patch_tool,
false,
true,
/*use_experimental_streamable_shell_tool*/ false,
);
let tools = get_openai_tools(
@@ -905,9 +911,9 @@ mod tests {
)])),
);
assert_eq_tool_names(&tools, &["shell", "dash/value"]);
assert_eq_tool_names(&tools, &["shell", "web_search", "dash/value"]);
assert_eq!(
tools[1],
tools[2],
OpenAiTool::Function(ResponsesApiTool {
name: "dash/value".to_string(),
parameters: JsonSchema::Object {

View File

@@ -2,13 +2,13 @@ use std::collections::BTreeMap;
use std::sync::LazyLock;
use crate::codex::Session;
use crate::models::FunctionCallOutputPayload;
use crate::models::ResponseInputItem;
use crate::openai_tools::JsonSchema;
use crate::openai_tools::OpenAiTool;
use crate::openai_tools::ResponsesApiTool;
use crate::protocol::Event;
use crate::protocol::EventMsg;
use codex_protocol::models::FunctionCallOutputPayload;
use codex_protocol::models::ResponseInputItem;
// Use the canonical plan tool types from the protocol crate to ensure
// type-identity matches events transported via `codex_protocol`.

View File

@@ -1,18 +1,19 @@
//! Project-level documentation discovery.
//!
//! Project-level documentation can be stored in a file named `AGENTS.md`.
//! Currently, we include only the contents of the first file found as follows:
//! Project-level documentation can be stored in files named `AGENTS.md`.
//! We include the concatenation of all files found along the path from the
//! repository root to the current working directory as follows:
//!
//! 1. Look for the doc file in the current working directory (as determined
//! by the `Config`).
//! 2. If not found, walk *upwards* until the Git repository root is reached
//! (detected by the presence of a `.git` directory/file), or failing that,
//! the filesystem root.
//! 3. If the Git root is encountered, look for the doc file there. If it
//! exists, the search stops; we do **not** walk past the Git root.
//! 1. Determine the Git repository root by walking upwards from the current
//! working directory until a `.git` directory or file is found. If no Git
//! root is found, only the current working directory is considered.
//! 2. Collect every `AGENTS.md` found from the repository root down to the
//! current working directory (inclusive) and concatenate their contents in
//! that order.
//! 3. We do **not** walk past the Git root.
use crate::config::Config;
use std::path::Path;
use std::path::PathBuf;
use tokio::io::AsyncReadExt;
use tracing::error;
@@ -26,7 +27,7 @@ const PROJECT_DOC_SEPARATOR: &str = "\n\n--- project-doc ---\n\n";
/// Combines `Config::instructions` and `AGENTS.md` (if present) into a single
/// string of instructions.
pub(crate) async fn get_user_instructions(config: &Config) -> Option<String> {
match find_project_doc(config).await {
match read_project_docs(config).await {
Ok(Some(project_doc)) => match &config.user_instructions {
Some(original_instructions) => Some(format!(
"{original_instructions}{PROJECT_DOC_SEPARATOR}{project_doc}"
@@ -41,95 +42,135 @@ pub(crate) async fn get_user_instructions(config: &Config) -> Option<String> {
}
}
/// Attempt to locate and load the project documentation. Currently, the search
/// starts from `Config::cwd`, but we may want to consider other directories
/// in the future, e.g., additional writable directories in the `SandboxPolicy`.
/// Attempt to locate and load the project documentation.
///
/// On success returns `Ok(Some(contents))`. If no documentation file is found
/// the function returns `Ok(None)`. Unexpected I/O failures bubble up as
/// `Err` so callers can decide how to handle them.
async fn find_project_doc(config: &Config) -> std::io::Result<Option<String>> {
let max_bytes = config.project_doc_max_bytes;
/// On success returns `Ok(Some(contents))` where `contents` is the
/// concatenation of all discovered docs. If no documentation file is found the
/// function returns `Ok(None)`. Unexpected I/O failures bubble up as `Err` so
/// callers can decide how to handle them.
pub async fn read_project_docs(config: &Config) -> std::io::Result<Option<String>> {
let max_total = config.project_doc_max_bytes;
// Attempt to load from the working directory first.
if let Some(doc) = load_first_candidate(&config.cwd, CANDIDATE_FILENAMES, max_bytes).await? {
return Ok(Some(doc));
if max_total == 0 {
return Ok(None);
}
// Walk up towards the filesystem root, stopping once we encounter the Git
// repository root. The presence of **either** a `.git` *file* or
// *directory* counts.
let mut dir = config.cwd.clone();
let paths = discover_project_doc_paths(config)?;
if paths.is_empty() {
return Ok(None);
}
// Canonicalize the path so that we do not end up in an infinite loop when
// `cwd` contains `..` components.
let mut remaining: u64 = max_total as u64;
let mut parts: Vec<String> = Vec::new();
for p in paths {
if remaining == 0 {
break;
}
let file = match tokio::fs::File::open(&p).await {
Ok(f) => f,
Err(e) if e.kind() == std::io::ErrorKind::NotFound => continue,
Err(e) => return Err(e),
};
let size = file.metadata().await?.len();
let mut reader = tokio::io::BufReader::new(file).take(remaining);
let mut data: Vec<u8> = Vec::new();
reader.read_to_end(&mut data).await?;
if size > remaining {
tracing::warn!(
"Project doc `{}` exceeds remaining budget ({} bytes) - truncating.",
p.display(),
remaining,
);
}
let text = String::from_utf8_lossy(&data).to_string();
if !text.trim().is_empty() {
parts.push(text);
remaining = remaining.saturating_sub(data.len() as u64);
}
}
if parts.is_empty() {
Ok(None)
} else {
Ok(Some(parts.join("\n\n")))
}
}
/// Discover the list of AGENTS.md files using the same search rules as
/// `read_project_docs`, but return the file paths instead of concatenated
/// contents. The list is ordered from repository root to the current working
/// directory (inclusive). Symlinks are allowed. When `project_doc_max_bytes`
/// is zero, returns an empty list.
pub fn discover_project_doc_paths(config: &Config) -> std::io::Result<Vec<PathBuf>> {
let mut dir = config.cwd.clone();
if let Ok(canon) = dir.canonicalize() {
dir = canon;
}
while let Some(parent) = dir.parent() {
// `.git` can be a *file* (for worktrees or submodules) or a *dir*.
let git_marker = dir.join(".git");
let git_exists = match tokio::fs::metadata(&git_marker).await {
// Build chain from cwd upwards and detect git root.
let mut chain: Vec<PathBuf> = vec![dir.clone()];
let mut git_root: Option<PathBuf> = None;
let mut cursor = dir.clone();
while let Some(parent) = cursor.parent() {
let git_marker = cursor.join(".git");
let git_exists = match std::fs::metadata(&git_marker) {
Ok(_) => true,
Err(e) if e.kind() == std::io::ErrorKind::NotFound => false,
Err(e) => return Err(e),
};
if git_exists {
// We are at the repo root; attempt one final load.
if let Some(doc) = load_first_candidate(&dir, CANDIDATE_FILENAMES, max_bytes).await? {
return Ok(Some(doc));
}
git_root = Some(cursor.clone());
break;
}
dir = parent.to_path_buf();
chain.push(parent.to_path_buf());
cursor = parent.to_path_buf();
}
Ok(None)
}
/// Attempt to load the first candidate file found in `dir`. Returns the file
/// contents (truncated if it exceeds `max_bytes`) when successful.
async fn load_first_candidate(
dir: &Path,
names: &[&str],
max_bytes: usize,
) -> std::io::Result<Option<String>> {
for name in names {
let candidate = dir.join(name);
let file = match tokio::fs::File::open(&candidate).await {
Err(e) if e.kind() == std::io::ErrorKind::NotFound => continue,
Err(e) => return Err(e),
Ok(f) => f,
};
let size = file.metadata().await?.len();
let reader = tokio::io::BufReader::new(file);
let mut data = Vec::with_capacity(std::cmp::min(size as usize, max_bytes));
let mut limited = reader.take(max_bytes as u64);
limited.read_to_end(&mut data).await?;
if size as usize > max_bytes {
tracing::warn!(
"Project doc `{}` exceeds {max_bytes} bytes - truncating.",
candidate.display(),
);
let search_dirs: Vec<PathBuf> = if let Some(root) = git_root {
let mut dirs: Vec<PathBuf> = Vec::new();
let mut saw_root = false;
for p in chain.iter().rev() {
if !saw_root {
if p == &root {
saw_root = true;
} else {
continue;
}
}
dirs.push(p.clone());
}
dirs
} else {
vec![config.cwd.clone()]
};
let contents = String::from_utf8_lossy(&data).to_string();
if contents.trim().is_empty() {
// Empty file; treat as not found.
continue;
let mut found: Vec<PathBuf> = Vec::new();
for d in search_dirs {
for name in CANDIDATE_FILENAMES {
let candidate = d.join(name);
match std::fs::symlink_metadata(&candidate) {
Ok(md) => {
let ft = md.file_type();
// Allow regular files and symlinks; opening will later fail for dangling links.
if ft.is_file() || ft.is_symlink() {
found.push(candidate);
break;
}
}
Err(e) if e.kind() == std::io::ErrorKind::NotFound => continue,
Err(e) => return Err(e),
}
}
return Ok(Some(contents));
}
Ok(None)
Ok(found)
}
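
As a concrete illustration of the discovery order (a hypothetical layout, not from this diff):

```
repo/                    <- contains .git, so this is the Git root
├── AGENTS.md            <- collected first
└── workspace/
    └── crate_a/         <- cwd
        └── AGENTS.md    <- collected second
```

`read_project_docs` then joins the two contents with a blank line and stops reading once `project_doc_max_bytes` is exhausted, truncating the last file if needed.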
#[cfg(test)]
@@ -278,4 +319,32 @@ mod tests {
assert_eq!(res, Some(INSTRUCTIONS.to_string()));
}
/// When both the repository root and the working directory contain
/// AGENTS.md files, their contents are concatenated from root to cwd.
#[tokio::test]
async fn concatenates_root_and_cwd_docs() {
let repo = tempfile::tempdir().expect("tempdir");
// Simulate a git repository.
std::fs::write(
repo.path().join(".git"),
"gitdir: /path/to/actual/git/dir\n",
)
.unwrap();
// Repo root doc.
fs::write(repo.path().join("AGENTS.md"), "root doc").unwrap();
// Nested working directory with its own doc.
let nested = repo.path().join("workspace/crate_a");
std::fs::create_dir_all(&nested).unwrap();
fs::write(nested.join("AGENTS.md"), "crate doc").unwrap();
let mut cfg = make_config(&repo, 4096, None);
cfg.cwd = nested;
let res = get_user_instructions(&cfg).await.expect("doc expected");
assert_eq!(res, "root doc\n\ncrate doc");
}
}

View File

@@ -22,7 +22,7 @@ use uuid::Uuid;
use crate::config::Config;
use crate::git_info::GitInfo;
use crate::git_info::collect_git_info;
use crate::models::ResponseItem;
use codex_protocol::models::ResponseItem;
const SESSIONS_SUBDIR: &str = "sessions";
@@ -132,6 +132,8 @@ impl RolloutRecorder {
| ResponseItem::LocalShellCall { .. }
| ResponseItem::FunctionCall { .. }
| ResponseItem::FunctionCallOutput { .. }
| ResponseItem::CustomToolCall { .. }
| ResponseItem::CustomToolCallOutput { .. }
| ResponseItem::Reasoning { .. } => filtered.push(item.clone()),
ResponseItem::Other => {
// These should never be serialized.
@@ -194,6 +196,8 @@ impl RolloutRecorder {
| ResponseItem::LocalShellCall { .. }
| ResponseItem::FunctionCall { .. }
| ResponseItem::FunctionCallOutput { .. }
| ResponseItem::CustomToolCall { .. }
| ResponseItem::CustomToolCallOutput { .. }
| ResponseItem::Reasoning { .. } => items.push(item),
ResponseItem::Other => {}
},
@@ -317,6 +321,8 @@ async fn rollout_writer(
| ResponseItem::LocalShellCall { .. }
| ResponseItem::FunctionCall { .. }
| ResponseItem::FunctionCallOutput { .. }
| ResponseItem::CustomToolCall { .. }
| ResponseItem::CustomToolCallOutput { .. }
| ResponseItem::Reasoning { .. } => {
writer.write_line(&item).await?;
}

View File

@@ -1,14 +1,24 @@
use serde::Deserialize;
use serde::Serialize;
use shlex;
use std::path::PathBuf;
#[derive(Debug, PartialEq, Eq)]
#[derive(Debug, PartialEq, Eq, Clone, Serialize, Deserialize)]
pub struct ZshShell {
shell_path: String,
zshrc_path: String,
}
#[derive(Debug, PartialEq, Eq)]
#[derive(Debug, PartialEq, Eq, Clone, Serialize, Deserialize)]
pub struct PowerShellConfig {
exe: String, // Executable name or path, e.g. "pwsh" or "powershell.exe".
bash_exe_fallback: Option<PathBuf>, // In case the model generates a bash command.
}
#[derive(Debug, PartialEq, Eq, Clone, Serialize, Deserialize)]
pub enum Shell {
Zsh(ZshShell),
PowerShell(PowerShellConfig),
Unknown,
}
@@ -33,6 +43,61 @@ impl Shell {
}
Some(result)
}
Shell::PowerShell(ps) => {
// If the model generated a bash command, prefer a detected bash fallback.
if let Some(script) = strip_bash_lc(&command) {
return match &ps.bash_exe_fallback {
Some(bash) => Some(vec![
bash.to_string_lossy().to_string(),
"-lc".to_string(),
script,
]),
// No bash fallback → run the script under PowerShell.
// It will likely fail (except for some simple commands), but the error
// should give the model a clue that it is running under PowerShell so it can fix the command on retry.
None => Some(vec![
ps.exe.clone(),
"-NoProfile".to_string(),
"-Command".to_string(),
script,
]),
};
}
// Not a bash command. If the model did not generate a PowerShell command,
// turn it into a PowerShell command.
let first = command.first().map(String::as_str);
if first != Some(ps.exe.as_str()) {
// TODO (CODEX_2900): Handle escaping newlines.
if command.iter().any(|a| a.contains('\n') || a.contains('\r')) {
return Some(command);
}
let joined = shlex::try_join(command.iter().map(|s| s.as_str())).ok();
return joined.map(|arg| {
vec![
ps.exe.clone(),
"-NoProfile".to_string(),
"-Command".to_string(),
arg,
]
});
}
// Model generated a PowerShell command. Run it.
Some(command)
}
Shell::Unknown => None,
}
}
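
A usage sketch of the PowerShell arm, mirroring the Windows tests further down (illustrative values):

```rust
let shell = Shell::PowerShell(PowerShellConfig {
    exe: "pwsh.exe".to_string(),
    bash_exe_fallback: Some(PathBuf::from("bash.exe")),
});
// A `bash -lc` script is routed to the detected bash.exe fallback...
assert_eq!(
    shell.format_default_shell_invocation(vec![
        "bash".to_string(), "-lc".to_string(), "echo hello".to_string(),
    ]),
    Some(vec!["bash.exe".to_string(), "-lc".to_string(), "echo hello".to_string()]),
);
// ...while a plain command is re-joined and wrapped for PowerShell:
// ["echo", "hello"] -> ["pwsh.exe", "-NoProfile", "-Command", "echo hello"]
```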
pub fn name(&self) -> Option<String> {
match self {
Shell::Zsh(zsh) => std::path::Path::new(&zsh.shell_path)
.file_name()
.map(|s| s.to_string_lossy().to_string()),
Shell::PowerShell(ps) => Some(ps.exe.clone()),
Shell::Unknown => None,
}
}
@@ -86,11 +151,51 @@ pub async fn default_user_shell() -> Shell {
}
}
#[cfg(not(target_os = "macos"))]
#[cfg(all(not(target_os = "macos"), not(target_os = "windows")))]
pub async fn default_user_shell() -> Shell {
Shell::Unknown
}
#[cfg(target_os = "windows")]
pub async fn default_user_shell() -> Shell {
use tokio::process::Command;
// Prefer PowerShell 7+ (`pwsh`) if available, otherwise fall back to Windows PowerShell.
let has_pwsh = Command::new("pwsh")
.arg("-NoLogo")
.arg("-NoProfile")
.arg("-Command")
.arg("$PSVersionTable.PSVersion.Major")
.output()
.await
.map(|o| o.status.success())
.unwrap_or(false);
let bash_exe = if Command::new("bash.exe")
.arg("--version")
.output()
.await
.ok()
.map(|o| o.status.success())
.unwrap_or(false)
{
which::which("bash.exe").ok()
} else {
None
};
if has_pwsh {
Shell::PowerShell(PowerShellConfig {
exe: "pwsh.exe".to_string(),
bash_exe_fallback: bash_exe,
})
} else {
Shell::PowerShell(PowerShellConfig {
exe: "powershell.exe".to_string(),
bash_exe_fallback: bash_exe,
})
}
}
#[cfg(test)]
#[cfg(target_os = "macos")]
mod tests {
@@ -231,3 +336,97 @@ mod tests {
}
}
}
#[cfg(test)]
#[cfg(target_os = "windows")]
mod tests_windows {
use super::*;
#[test]
fn test_format_default_shell_invocation_powershell() {
let cases = vec![
(
Shell::PowerShell(PowerShellConfig {
exe: "pwsh.exe".to_string(),
bash_exe_fallback: None,
}),
vec!["bash", "-lc", "echo hello"],
vec!["pwsh.exe", "-NoProfile", "-Command", "echo hello"],
),
(
Shell::PowerShell(PowerShellConfig {
exe: "powershell.exe".to_string(),
bash_exe_fallback: None,
}),
vec!["bash", "-lc", "echo hello"],
vec!["powershell.exe", "-NoProfile", "-Command", "echo hello"],
),
(
Shell::PowerShell(PowerShellConfig {
exe: "pwsh.exe".to_string(),
bash_exe_fallback: Some(PathBuf::from("bash.exe")),
}),
vec!["bash", "-lc", "echo hello"],
vec!["bash.exe", "-lc", "echo hello"],
),
(
Shell::PowerShell(PowerShellConfig {
exe: "pwsh.exe".to_string(),
bash_exe_fallback: Some(PathBuf::from("bash.exe")),
}),
vec![
"bash",
"-lc",
"apply_patch <<'EOF'\n*** Begin Patch\n*** Update File: destination_file.txt\n-original content\n+modified content\n*** End Patch\nEOF",
],
vec![
"bash.exe",
"-lc",
"apply_patch <<'EOF'\n*** Begin Patch\n*** Update File: destination_file.txt\n-original content\n+modified content\n*** End Patch\nEOF",
],
),
(
Shell::PowerShell(PowerShellConfig {
exe: "pwsh.exe".to_string(),
bash_exe_fallback: Some(PathBuf::from("bash.exe")),
}),
vec!["echo", "hello"],
vec!["pwsh.exe", "-NoProfile", "-Command", "echo hello"],
),
(
Shell::PowerShell(PowerShellConfig {
exe: "pwsh.exe".to_string(),
bash_exe_fallback: Some(PathBuf::from("bash.exe")),
}),
vec!["pwsh.exe", "-NoProfile", "-Command", "echo hello"],
vec!["pwsh.exe", "-NoProfile", "-Command", "echo hello"],
),
(
// TODO (CODEX_2900): Handle escaping newlines for powershell invocation.
Shell::PowerShell(PowerShellConfig {
exe: "powershell.exe".to_string(),
bash_exe_fallback: Some(PathBuf::from("bash.exe")),
}),
vec![
"codex-mcp-server.exe",
"--codex-run-as-apply-patch",
"*** Begin Patch\n*** Update File: C:\\Users\\person\\destination_file.txt\n-original content\n+modified content\n*** End Patch",
],
vec![
"codex-mcp-server.exe",
"--codex-run-as-apply-patch",
"*** Begin Patch\n*** Update File: C:\\Users\\person\\destination_file.txt\n-original content\n+modified content\n*** End Patch",
],
),
];
for (shell, input, expected_cmd) in cases {
let actual_cmd = shell
.format_default_shell_invocation(input.iter().map(|s| s.to_string()).collect());
assert_eq!(
actual_cmd,
Some(expected_cmd.iter().map(|s| s.to_string()).collect())
);
}
}
}

View File

@@ -0,0 +1,145 @@
use serde::Deserialize;
use serde::Serialize;
use std::collections::BTreeMap;
use crate::openai_tools::FreeformTool;
use crate::openai_tools::FreeformToolFormat;
use crate::openai_tools::JsonSchema;
use crate::openai_tools::OpenAiTool;
use crate::openai_tools::ResponsesApiTool;
#[derive(Serialize, Deserialize)]
pub(crate) struct ApplyPatchToolArgs {
pub(crate) input: String,
}
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq, Hash)]
#[serde(rename_all = "snake_case")]
pub enum ApplyPatchToolType {
Freeform,
Function,
}
/// Returns a custom tool that can be used to edit files. Well-suited for GPT-5 models
/// https://platform.openai.com/docs/guides/function-calling#custom-tools
pub(crate) fn create_apply_patch_freeform_tool() -> OpenAiTool {
OpenAiTool::Freeform(FreeformTool {
name: "apply_patch".to_string(),
description: "Use the `apply_patch` tool to edit files".to_string(),
format: FreeformToolFormat {
r#type: "grammar".to_string(),
syntax: "lark".to_string(),
definition: r#"start: begin_patch hunk+ end_patch
begin_patch: "*** Begin Patch" LF
end_patch: "*** End Patch" LF?
hunk: add_hunk | delete_hunk | update_hunk
add_hunk: "*** Add File: " filename LF add_line+
delete_hunk: "*** Delete File: " filename LF
update_hunk: "*** Update File: " filename LF change_move? change?
filename: /(.+)/
add_line: "+" /(.+)/ LF -> line
change_move: "*** Move to: " filename LF
change: (change_context | change_line)+ eof_line?
change_context: ("@@" | "@@ " /(.+)/) LF
change_line: ("+" | "-" | " ") /(.+)/ LF
eof_line: "*** End of File" LF
%import common.LF
"#
.to_string(),
},
})
}
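
On the wire, the freeform variant serializes as a `custom` tool carrying the Lark grammar (a sketch with the definition abbreviated; the tagging is assumed from the `serde(rename = "custom")` attribute shown earlier):

```json
{
  "type": "custom",
  "name": "apply_patch",
  "description": "Use the `apply_patch` tool to edit files",
  "format": {
    "type": "grammar",
    "syntax": "lark",
    "definition": "start: begin_patch hunk+ end_patch ..."
  }
}
```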
/// Returns a json tool that can be used to edit files. Should only be used with gpt-oss models
pub(crate) fn create_apply_patch_json_tool() -> OpenAiTool {
let mut properties = BTreeMap::new();
properties.insert(
"input".to_string(),
JsonSchema::String {
description: Some(r#"The entire contents of the apply_patch command"#.to_string()),
},
);
OpenAiTool::Function(ResponsesApiTool {
name: "apply_patch".to_string(),
description: r#"Use the `apply_patch` tool to edit files.
Your patch language is a stripped-down, file-oriented diff format designed to be easy to parse and safe to apply. You can think of it as a high-level envelope:
*** Begin Patch
[ one or more file sections ]
*** End Patch
Within that envelope, you get a sequence of file operations.
You MUST include a header to specify the action you are taking.
Each operation starts with one of three headers:
*** Add File: <path> - create a new file. Every following line is a + line (the initial contents).
*** Delete File: <path> - remove an existing file. Nothing follows.
*** Update File: <path> - patch an existing file in place (optionally with a rename).
May be immediately followed by *** Move to: <new path> if you want to rename the file.
Then one or more “hunks”, each introduced by @@ (optionally followed by a hunk header).
Within a hunk each line starts with:
For instructions on [context_before] and [context_after]:
- By default, show 3 lines of code immediately above and 3 lines immediately below each change. If a change is within 3 lines of a previous change, do NOT duplicate the first change's [context_after] lines in the second change's [context_before] lines.
- If 3 lines of context is insufficient to uniquely identify the snippet of code within the file, use the @@ operator to indicate the class or function to which the snippet belongs. For instance, we might have:
@@ class BaseClass
[3 lines of pre-context]
- [old_code]
+ [new_code]
[3 lines of post-context]
- If a code block is repeated so many times in a class or function such that even a single `@@` statement and 3 lines of context cannot uniquely identify the snippet of code, you can use multiple `@@` statements to jump to the right context. For instance:
@@ class BaseClass
@@ def method():
[3 lines of pre-context]
- [old_code]
+ [new_code]
[3 lines of post-context]
The full grammar definition is below:
Patch := Begin { FileOp } End
Begin := "*** Begin Patch" NEWLINE
End := "*** End Patch" NEWLINE
FileOp := AddFile | DeleteFile | UpdateFile
AddFile := "*** Add File: " path NEWLINE { "+" line NEWLINE }
DeleteFile := "*** Delete File: " path NEWLINE
UpdateFile := "*** Update File: " path NEWLINE [ MoveTo ] { Hunk }
MoveTo := "*** Move to: " newPath NEWLINE
Hunk := "@@" [ header ] NEWLINE { HunkLine } [ "*** End of File" NEWLINE ]
HunkLine := (" " | "-" | "+") text NEWLINE
A full patch can combine several operations:
*** Begin Patch
*** Add File: hello.txt
+Hello world
*** Update File: src/app.py
*** Move to: src/main.py
@@ def greet():
-print("Hi")
+print("Hello, world!")
*** Delete File: obsolete.txt
*** End Patch
It is important to remember:
- You must include a header with your intended action (Add/Delete/Update)
- You must prefix new lines with `+` even when creating a new file
- File references can only be relative, NEVER ABSOLUTE.
"#
.to_string(),
strict: false,
parameters: JsonSchema::Object {
properties,
required: Some(vec!["input".to_string()]),
additional_properties: Some(false),
},
})
}

View File

@@ -142,13 +142,14 @@ async fn includes_session_id_and_model_headers_in_request() {
let mut config = load_default_config_for_test(&codex_home);
config.model_provider = model_provider;
let conversation_manager = ConversationManager::default();
let conversation_manager =
ConversationManager::with_auth(CodexAuth::from_api_key("Test API Key"));
let NewConversation {
conversation: codex,
conversation_id,
session_configured: _,
} = conversation_manager
.new_conversation_with_auth(config, Some(CodexAuth::from_api_key("Test API Key")))
.new_conversation(config)
.await
.expect("create new conversation");
@@ -207,9 +208,10 @@ async fn includes_base_instructions_override_in_request() {
config.base_instructions = Some("test instructions".to_string());
config.model_provider = model_provider;
let conversation_manager = ConversationManager::default();
let conversation_manager =
ConversationManager::with_auth(CodexAuth::from_api_key("Test API Key"));
let codex = conversation_manager
.new_conversation_with_auth(config, Some(CodexAuth::from_api_key("Test API Key")))
.new_conversation(config)
.await
.expect("create new conversation")
.conversation;
@@ -262,9 +264,10 @@ async fn originator_config_override_is_used() {
config.model_provider = model_provider;
config.responses_originator_header = "my_override".to_owned();
let conversation_manager = ConversationManager::default();
let conversation_manager =
ConversationManager::with_auth(CodexAuth::from_api_key("Test API Key"));
let codex = conversation_manager
.new_conversation_with_auth(config, Some(CodexAuth::from_api_key("Test API Key")))
.new_conversation(config)
.await
.expect("create new conversation")
.conversation;
@@ -318,13 +321,13 @@ async fn chatgpt_auth_sends_correct_request() {
let codex_home = TempDir::new().unwrap();
let mut config = load_default_config_for_test(&codex_home);
config.model_provider = model_provider;
let conversation_manager = ConversationManager::default();
let conversation_manager = ConversationManager::with_auth(create_dummy_codex_auth());
let NewConversation {
conversation: codex,
conversation_id,
session_configured: _,
} = conversation_manager
.new_conversation_with_auth(config, Some(create_dummy_codex_auth()))
.new_conversation(config)
.await
.expect("create new conversation");
@@ -411,7 +414,13 @@ async fn prefers_chatgpt_token_when_config_prefers_chatgpt() {
config.model_provider = model_provider;
config.preferred_auth_method = AuthMode::ChatGPT;
let conversation_manager = ConversationManager::default();
let auth_manager =
match CodexAuth::from_codex_home(codex_home.path(), config.preferred_auth_method) {
Ok(Some(auth)) => codex_login::AuthManager::from_auth_for_testing(auth),
Ok(None) => panic!("No CodexAuth found in codex_home"),
Err(e) => panic!("Failed to load CodexAuth: {}", e),
};
let conversation_manager = ConversationManager::new(auth_manager);
let NewConversation {
conversation: codex,
..
@@ -486,7 +495,13 @@ async fn prefers_apikey_when_config_prefers_apikey_even_with_chatgpt_tokens() {
config.model_provider = model_provider;
config.preferred_auth_method = AuthMode::ApiKey;
let conversation_manager = ConversationManager::default();
let auth_manager =
match CodexAuth::from_codex_home(codex_home.path(), config.preferred_auth_method) {
Ok(Some(auth)) => codex_login::AuthManager::from_auth_for_testing(auth),
Ok(None) => panic!("No CodexAuth found in codex_home"),
Err(e) => panic!("Failed to load CodexAuth: {}", e),
};
let conversation_manager = ConversationManager::new(auth_manager);
let NewConversation {
conversation: codex,
..
@@ -540,9 +555,10 @@ async fn includes_user_instructions_message_in_request() {
config.model_provider = model_provider;
config.user_instructions = Some("be nice".to_string());
let conversation_manager = ConversationManager::default();
let conversation_manager =
ConversationManager::with_auth(CodexAuth::from_api_key("Test API Key"));
let codex = conversation_manager
.new_conversation_with_auth(config, Some(CodexAuth::from_api_key("Test API Key")))
.new_conversation(config)
.await
.expect("create new conversation")
.conversation;
@@ -632,9 +648,9 @@ async fn azure_overrides_assign_properties_used_for_responses_url() {
let mut config = load_default_config_for_test(&codex_home);
config.model_provider = provider;
let conversation_manager = ConversationManager::default();
let conversation_manager = ConversationManager::with_auth(create_dummy_codex_auth());
let codex = conversation_manager
.new_conversation_with_auth(config, None)
.new_conversation(config)
.await
.expect("create new conversation")
.conversation;
@@ -708,9 +724,9 @@ async fn env_var_overrides_loaded_auth() {
let mut config = load_default_config_for_test(&codex_home);
config.model_provider = provider;
let conversation_manager = ConversationManager::default();
let conversation_manager = ConversationManager::with_auth(create_dummy_codex_auth());
let codex = conversation_manager
.new_conversation_with_auth(config, Some(create_dummy_codex_auth()))
.new_conversation(config)
.await
.expect("create new conversation")
.conversation;

View File

@@ -141,9 +141,9 @@ async fn summarize_context_three_requests_and_instructions() {
let home = TempDir::new().unwrap();
let mut config = load_default_config_for_test(&home);
config.model_provider = model_provider;
let conversation_manager = ConversationManager::default();
let conversation_manager = ConversationManager::with_auth(CodexAuth::from_api_key("dummy"));
let codex = conversation_manager
.new_conversation_with_auth(config, Some(CodexAuth::from_api_key("dummy")))
.new_conversation(config)
.await
.unwrap()
.conversation;

View File

@@ -70,12 +70,12 @@ async fn truncates_output_lines() {
let output = run_test_cmd(tmp, cmd).await.unwrap();
let expected_output = (1..=256)
let expected_output = (1..=300)
.map(|i| format!("{i}\n"))
.collect::<Vec<_>>()
.join("");
assert_eq!(output.stdout.text, expected_output);
assert_eq!(output.stdout.truncated_after_lines, Some(256));
assert_eq!(output.stdout.truncated_after_lines, None);
}
/// Command succeeds with exit code 0 normally
@@ -91,8 +91,8 @@ async fn truncates_output_bytes() {
let output = run_test_cmd(tmp, cmd).await.unwrap();
assert_eq!(output.stdout.text.len(), 10240);
assert_eq!(output.stdout.truncated_after_lines, Some(10));
assert!(output.stdout.text.len() >= 15000);
assert_eq!(output.stdout.truncated_after_lines, None);
}
/// Command not found returns exit code 127; this is not considered a sandbox error

View File

@@ -139,3 +139,34 @@ async fn test_exec_stderr_stream_events_echo() {
}
assert_eq!(String::from_utf8_lossy(&err), "oops\n");
}
#[tokio::test]
async fn test_aggregated_output_interleaves_in_order() {
// Spawn a shell that alternates stdout and stderr with sleeps to enforce order.
let cmd = vec![
"/bin/sh".to_string(),
"-c".to_string(),
"printf 'O1\\n'; sleep 0.01; printf 'E1\\n' 1>&2; sleep 0.01; printf 'O2\\n'; sleep 0.01; printf 'E2\\n' 1>&2".to_string(),
];
let params = ExecParams {
command: cmd,
cwd: std::env::current_dir().unwrap_or_else(|_| PathBuf::from(".")),
timeout_ms: Some(5_000),
env: HashMap::new(),
with_escalated_permissions: None,
justification: None,
};
let policy = SandboxPolicy::new_read_only_policy();
let result = process_exec_tool_call(params, SandboxType::None, &policy, &None, None)
.await
.expect("process_exec_tool_call");
assert_eq!(result.exit_code, 0);
assert_eq!(result.stdout.text, "O1\nO2\n");
assert_eq!(result.stderr.text, "E1\nE2\n");
assert_eq!(result.aggregated_output.text, "O1\nE1\nO2\nE2\n");
assert_eq!(result.aggregated_output.truncated_after_lines, None);
}

View File

@@ -1,6 +1,9 @@
#![allow(clippy::unwrap_used)]
use codex_core::ConversationManager;
use codex_core::ModelProviderInfo;
use codex_core::built_in_model_providers;
use codex_core::model_family::find_family_for_model;
use codex_core::protocol::AskForApproval;
use codex_core::protocol::EventMsg;
use codex_core::protocol::InputItem;
@@ -8,6 +11,7 @@ use codex_core::protocol::Op;
use codex_core::protocol::SandboxPolicy;
use codex_core::protocol_config_types::ReasoningEffort;
use codex_core::protocol_config_types::ReasoningSummary;
use codex_core::shell::default_user_shell;
use codex_login::CodexAuth;
use core_test_support::load_default_config_for_test;
use core_test_support::load_sse_fixture_with_id;
@@ -24,6 +28,185 @@ fn sse_completed(id: &str) -> String {
load_sse_fixture_with_id("tests/fixtures/completed_template.json", id)
}
fn assert_tool_names(body: &serde_json::Value, expected_names: &[&str]) {
assert_eq!(
body["tools"]
.as_array()
.unwrap()
.iter()
.map(|t| t["name"].as_str().unwrap().to_string())
.collect::<Vec<_>>(),
expected_names
);
}
#[tokio::test(flavor = "multi_thread", worker_threads = 4)]
async fn codex_mini_latest_tools() {
use pretty_assertions::assert_eq;
let server = MockServer::start().await;
let sse = sse_completed("resp");
let template = ResponseTemplate::new(200)
.insert_header("content-type", "text/event-stream")
.set_body_raw(sse, "text/event-stream");
// Expect two POSTs to /v1/responses
Mock::given(method("POST"))
.and(path("/v1/responses"))
.respond_with(template)
.expect(2)
.mount(&server)
.await;
let model_provider = ModelProviderInfo {
base_url: Some(format!("{}/v1", server.uri())),
..built_in_model_providers()["openai"].clone()
};
let cwd = TempDir::new().unwrap();
let codex_home = TempDir::new().unwrap();
let mut config = load_default_config_for_test(&codex_home);
config.cwd = cwd.path().to_path_buf();
config.model_provider = model_provider;
config.user_instructions = Some("be consistent and helpful".to_string());
let conversation_manager =
ConversationManager::with_auth(CodexAuth::from_api_key("Test API Key"));
config.include_apply_patch_tool = false;
config.model = "codex-mini-latest".to_string();
config.model_family = find_family_for_model("codex-mini-latest").unwrap();
let codex = conversation_manager
.new_conversation(config)
.await
.expect("create new conversation")
.conversation;
codex
.submit(Op::UserInput {
items: vec![InputItem::Text {
text: "hello 1".into(),
}],
})
.await
.unwrap();
wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
codex
.submit(Op::UserInput {
items: vec![InputItem::Text {
text: "hello 2".into(),
}],
})
.await
.unwrap();
wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
let requests = server.received_requests().await.unwrap();
assert_eq!(requests.len(), 2, "expected two POST requests");
let expected_instructions = [
include_str!("../prompt.md"),
include_str!("../../apply-patch/apply_patch_tool_instructions.md"),
]
.join("\n");
let body0 = requests[0].body_json::<serde_json::Value>().unwrap();
assert_eq!(
body0["instructions"],
serde_json::json!(expected_instructions),
);
let body1 = requests[1].body_json::<serde_json::Value>().unwrap();
assert_eq!(
body1["instructions"],
serde_json::json!(expected_instructions),
);
}
#[tokio::test(flavor = "multi_thread", worker_threads = 4)]
async fn prompt_tools_are_consistent_across_requests() {
use pretty_assertions::assert_eq;
let server = MockServer::start().await;
let sse = sse_completed("resp");
let template = ResponseTemplate::new(200)
.insert_header("content-type", "text/event-stream")
.set_body_raw(sse, "text/event-stream");
// Expect two POSTs to /v1/responses
Mock::given(method("POST"))
.and(path("/v1/responses"))
.respond_with(template)
.expect(2)
.mount(&server)
.await;
let model_provider = ModelProviderInfo {
base_url: Some(format!("{}/v1", server.uri())),
..built_in_model_providers()["openai"].clone()
};
let cwd = TempDir::new().unwrap();
let codex_home = TempDir::new().unwrap();
let mut config = load_default_config_for_test(&codex_home);
config.cwd = cwd.path().to_path_buf();
config.model_provider = model_provider;
config.user_instructions = Some("be consistent and helpful".to_string());
config.include_apply_patch_tool = true;
config.include_plan_tool = true;
let conversation_manager =
ConversationManager::with_auth(CodexAuth::from_api_key("Test API Key"));
let codex = conversation_manager
.new_conversation(config)
.await
.expect("create new conversation")
.conversation;
codex
.submit(Op::UserInput {
items: vec![InputItem::Text {
text: "hello 1".into(),
}],
})
.await
.unwrap();
wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
codex
.submit(Op::UserInput {
items: vec![InputItem::Text {
text: "hello 2".into(),
}],
})
.await
.unwrap();
wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
let requests = server.received_requests().await.unwrap();
assert_eq!(requests.len(), 2, "expected two POST requests");
let expected_instructions: &str = include_str!("../prompt.md");
// our internal implementation is responsible for keeping tools in sync
// with the OpenAI schema, so we just verify the tool presence here
let expected_tools_names: &[&str] = &["shell", "update_plan", "apply_patch"];
let body0 = requests[0].body_json::<serde_json::Value>().unwrap();
assert_eq!(
body0["instructions"],
serde_json::json!(expected_instructions),
);
assert_tool_names(&body0, expected_tools_names);
let body1 = requests[1].body_json::<serde_json::Value>().unwrap();
assert_eq!(
body1["instructions"],
serde_json::json!(expected_instructions),
);
assert_tool_names(&body1, expected_tools_names);
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn prefixes_context_and_instructions_once_and_consistently_across_requests() {
use pretty_assertions::assert_eq;
@@ -55,9 +238,10 @@ async fn prefixes_context_and_instructions_once_and_consistently_across_requests
config.model_provider = model_provider;
config.user_instructions = Some("be consistent and helpful".to_string());
let conversation_manager = ConversationManager::default();
let conversation_manager =
ConversationManager::with_auth(CodexAuth::from_api_key("Test API Key"));
let codex = conversation_manager
.new_conversation_with_auth(config, Some(CodexAuth::from_api_key("Test API Key")))
.new_conversation(config)
.await
.expect("create new conversation")
.conversation;
@@ -85,9 +269,20 @@ async fn prefixes_context_and_instructions_once_and_consistently_across_requests
let requests = server.received_requests().await.unwrap();
assert_eq!(requests.len(), 2, "expected two POST requests");
let shell = default_user_shell().await;
let expected_env_text = format!(
"<environment_context>\nCurrent working directory: {}\nApproval policy: on-request\nSandbox mode: read-only\nNetwork access: restricted\n</environment_context>",
cwd.path().to_string_lossy()
r#"<environment_context>
<cwd>{}</cwd>
<approval_policy>on-request</approval_policy>
<sandbox_mode>read-only</sandbox_mode>
<network_access>restricted</network_access>
{}</environment_context>"#,
cwd.path().to_string_lossy(),
match shell.name() {
Some(name) => format!(" <shell>{}</shell>\n", name),
None => String::new(),
}
);
let expected_ui_text =
"<user_instructions>\n\nbe consistent and helpful\n\n</user_instructions>";
@@ -165,9 +360,10 @@ async fn overrides_turn_context_but_keeps_cached_prefix_and_key_constant() {
config.model_provider = model_provider;
config.user_instructions = Some("be consistent and helpful".to_string());
let conversation_manager = ConversationManager::default();
let conversation_manager =
ConversationManager::with_auth(CodexAuth::from_api_key("Test API Key"));
let codex = conversation_manager
.new_conversation_with_auth(config, Some(CodexAuth::from_api_key("Test API Key")))
.new_conversation(config)
.await
.expect("create new conversation")
.conversation;
@@ -183,12 +379,10 @@ async fn overrides_turn_context_but_keeps_cached_prefix_and_key_constant() {
.unwrap();
wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
// Change everything about the turn context.
let new_cwd = TempDir::new().unwrap();
let writable = TempDir::new().unwrap();
codex
.submit(Op::OverrideTurnContext {
cwd: Some(new_cwd.path().to_path_buf()),
cwd: None,
approval_policy: Some(AskForApproval::Never),
sandbox_policy: Some(SandboxPolicy::WorkspaceWrite {
writable_roots: vec![writable.path().to_path_buf()],
@@ -220,7 +414,6 @@ async fn overrides_turn_context_but_keeps_cached_prefix_and_key_constant() {
let body1 = requests[0].body_json::<serde_json::Value>().unwrap();
let body2 = requests[1].body_json::<serde_json::Value>().unwrap();
// prompt_cache_key should remain constant across overrides
assert_eq!(
body1["prompt_cache_key"], body2["prompt_cache_key"],
@@ -236,11 +429,13 @@ async fn overrides_turn_context_but_keeps_cached_prefix_and_key_constant() {
"content": [ { "type": "input_text", "text": "hello 2" } ]
});
// After overriding the turn context, the environment context should be emitted again
// reflecting the new cwd, approval policy and sandbox settings.
let expected_env_text_2 = format!(
"<environment_context>\nCurrent working directory: {}\nApproval policy: never\nSandbox mode: workspace-write\nNetwork access: enabled\n</environment_context>",
new_cwd.path().to_string_lossy()
);
// reflecting the new approval policy and sandbox settings. Omit cwd because it did
// not change.
let expected_env_text_2 = r#"<environment_context>
<approval_policy>never</approval_policy>
<sandbox_mode>workspace-write</sandbox_mode>
<network_access>enabled</network_access>
</environment_context>"#;
let expected_env_msg_2 = serde_json::json!({
"type": "message",
"id": serde_json::Value::Null,
@@ -288,9 +483,10 @@ async fn per_turn_overrides_keep_cached_prefix_and_key_constant() {
config.model_provider = model_provider;
config.user_instructions = Some("be consistent and helpful".to_string());
let conversation_manager = ConversationManager::default();
let conversation_manager =
ConversationManager::with_auth(CodexAuth::from_api_key("Test API Key"));
let codex = conversation_manager
.new_conversation_with_auth(config, Some(CodexAuth::from_api_key("Test API Key")))
.new_conversation(config)
.await
.expect("create new conversation")
.conversation;

View File

@@ -88,9 +88,10 @@ async fn continue_after_stream_error() {
config.base_instructions = Some("You are a helpful assistant".to_string());
config.model_provider = provider;
let conversation_manager = ConversationManager::default();
let conversation_manager =
ConversationManager::with_auth(CodexAuth::from_api_key("Test API Key"));
let codex = conversation_manager
.new_conversation_with_auth(config, Some(CodexAuth::from_api_key("Test API Key")))
.new_conversation(config)
.await
.unwrap()
.conversation;

View File

@@ -93,9 +93,10 @@ async fn retries_on_early_close() {
let codex_home = TempDir::new().unwrap();
let mut config = load_default_config_for_test(&codex_home);
config.model_provider = model_provider;
let conversation_manager = ConversationManager::default();
let conversation_manager =
ConversationManager::with_auth(CodexAuth::from_api_key("Test API Key"));
let codex = conversation_manager
.new_conversation_with_auth(config, Some(CodexAuth::from_api_key("Test API Key")))
.new_conversation(config)
.await
.unwrap()
.conversation;

View File

@@ -25,6 +25,7 @@ codex-common = { path = "../common", features = [
"sandbox_summary",
] }
codex-core = { path = "../core" }
codex-login = { path = "../login" }
codex-ollama = { path = "../ollama" }
codex-protocol = { path = "../protocol" }
owo-colors = "4.2.0"

View File

@@ -20,9 +20,11 @@ use codex_core::protocol::McpToolCallEndEvent;
use codex_core::protocol::PatchApplyBeginEvent;
use codex_core::protocol::PatchApplyEndEvent;
use codex_core::protocol::SessionConfiguredEvent;
use codex_core::protocol::StreamErrorEvent;
use codex_core::protocol::TaskCompleteEvent;
use codex_core::protocol::TurnAbortReason;
use codex_core::protocol::TurnDiffEvent;
use codex_core::protocol::WebSearchBeginEvent;
use owo_colors::OwoColorize;
use owo_colors::Style;
use shlex::try_join;
@@ -174,6 +176,9 @@ impl EventProcessor for EventProcessorWithHumanOutput {
EventMsg::BackgroundEvent(BackgroundEventEvent { message }) => {
ts_println!(self, "{}", message.style(self.dimmed));
}
EventMsg::StreamError(StreamErrorEvent { message }) => {
ts_println!(self, "{}", message.style(self.dimmed));
}
EventMsg::TaskStarted => {
// Ignore.
}
@@ -283,10 +288,10 @@ impl EventProcessor for EventProcessorWithHumanOutput {
EventMsg::ExecCommandOutputDelta(_) => {}
EventMsg::ExecCommandEnd(ExecCommandEndEvent {
call_id,
stdout,
stderr,
aggregated_output,
duration,
exit_code,
..
}) => {
let exec_command = self.call_id_to_command.remove(&call_id);
let (duration, call) = if let Some(ExecCommandBegin { command, .. }) = exec_command
@@ -299,8 +304,7 @@ impl EventProcessor for EventProcessorWithHumanOutput {
("".to_string(), format!("exec('{call_id}')"))
};
let output = if exit_code == 0 { stdout } else { stderr };
let truncated_output = output
let truncated_output = aggregated_output
.lines()
.take(MAX_OUTPUT_LINES_FOR_EXEC_TOOL_CALL)
.collect::<Vec<_>>()
@@ -358,6 +362,9 @@ impl EventProcessor for EventProcessorWithHumanOutput {
}
}
}
EventMsg::WebSearchBegin(WebSearchBeginEvent { call_id: _, query }) => {
ts_println!(self, "🌐 {query}");
}
EventMsg::PatchApplyBegin(PatchApplyBeginEvent {
call_id,
auto_approved,
@@ -535,6 +542,7 @@ impl EventProcessor for EventProcessorWithHumanOutput {
}
},
EventMsg::ShutdownComplete => return CodexStatus::Shutdown,
EventMsg::ConversationHistory(_) => {}
}
CodexStatus::Running
}

View File

@@ -20,6 +20,7 @@ use codex_core::protocol::InputItem;
use codex_core::protocol::Op;
use codex_core::protocol::TaskCompleteEvent;
use codex_core::util::is_inside_git_repo;
use codex_login::AuthManager;
use codex_ollama::DEFAULT_OSS_MODEL;
use codex_protocol::config_types::SandboxMode;
use event_processor_with_human_output::EventProcessorWithHumanOutput;
@@ -149,6 +150,7 @@ pub async fn run_main(cli: Cli, codex_linux_sandbox_exe: Option<PathBuf>) -> any
include_apply_patch_tool: None,
disable_response_storage: oss.then_some(true),
show_raw_agent_reasoning: oss.then_some(true),
tools_web_search_request: None,
};
// Parse `-c` overrides.
let cli_kv_overrides = match config_overrides.parse_overrides() {
@@ -185,7 +187,10 @@ pub async fn run_main(cli: Cli, codex_linux_sandbox_exe: Option<PathBuf>) -> any
std::process::exit(1);
}
let conversation_manager = ConversationManager::default();
let conversation_manager = ConversationManager::new(AuthManager::shared(
config.codex_home.clone(),
config.preferred_auth_method,
));
let NewConversation {
conversation_id: _,
conversation,

View File

@@ -123,6 +123,155 @@ async fn test_apply_patch_tool() -> anyhow::Result<()> {
// Start a mock model server
let server = MockServer::start().await;
// First response: model calls apply_patch to create test.md
let first = ResponseTemplate::new(200)
.insert_header("content-type", "text/event-stream")
.set_body_raw(
load_sse_fixture_with_id_from_str(SSE_TOOL_CALL_ADD, "call1"),
"text/event-stream",
);
Mock::given(method("POST"))
// .and(path("/v1/responses"))
.respond_with(first)
.up_to_n_times(1)
.mount(&server)
.await;
// Second response: model calls apply_patch to update test.md
let second = ResponseTemplate::new(200)
.insert_header("content-type", "text/event-stream")
.set_body_raw(
load_sse_fixture_with_id_from_str(SSE_TOOL_CALL_UPDATE, "call2"),
"text/event-stream",
);
Mock::given(method("POST"))
.and(path("/v1/responses"))
.respond_with(second)
.up_to_n_times(1)
.mount(&server)
.await;
let final_completed = ResponseTemplate::new(200)
.insert_header("content-type", "text/event-stream")
.set_body_raw(
load_sse_fixture_with_id_from_str(SSE_TOOL_CALL_COMPLETED, "resp3"),
"text/event-stream",
);
Mock::given(method("POST"))
.and(path("/v1/responses"))
.respond_with(final_completed)
.expect(1)
.mount(&server)
.await;
let tmp_cwd = TempDir::new().unwrap();
Command::cargo_bin("codex-exec")
.context("should find binary for codex-exec")?
.current_dir(tmp_cwd.path())
.env("CODEX_HOME", tmp_cwd.path())
.env("OPENAI_API_KEY", "dummy")
.env("OPENAI_BASE_URL", format!("{}/v1", server.uri()))
.arg("--skip-git-repo-check")
.arg("-s")
.arg("workspace-write")
.arg("foo")
.assert()
.success();
// Verify final file contents
let final_path = tmp_cwd.path().join("test.md");
let contents = std::fs::read_to_string(&final_path)
.unwrap_or_else(|e| panic!("failed reading {}: {e}", final_path.display()));
assert_eq!(contents, "Final text\n");
Ok(())
}
#[cfg(not(target_os = "windows"))]
#[tokio::test]
async fn test_apply_patch_freeform_tool() -> anyhow::Result<()> {
use core_test_support::load_sse_fixture_with_id_from_str;
use tempfile::TempDir;
use wiremock::Mock;
use wiremock::MockServer;
use wiremock::ResponseTemplate;
use wiremock::matchers::method;
use wiremock::matchers::path;
const SSE_TOOL_CALL_ADD: &str = r#"[
{
"type": "response.output_item.done",
"item": {
"type": "custom_tool_call",
"name": "apply_patch",
"input": "*** Begin Patch\n*** Add File: test.md\n+Hello world\n*** End Patch",
"call_id": "__ID__"
}
},
{
"type": "response.completed",
"response": {
"id": "__ID__",
"usage": {
"input_tokens": 0,
"input_tokens_details": null,
"output_tokens": 0,
"output_tokens_details": null,
"total_tokens": 0
},
"output": []
}
}
]"#;
const SSE_TOOL_CALL_UPDATE: &str = r#"[
{
"type": "response.output_item.done",
"item": {
"type": "custom_tool_call",
"name": "apply_patch",
"input": "*** Begin Patch\n*** Update File: test.md\n@@\n-Hello world\n+Final text\n*** End Patch",
"call_id": "__ID__"
}
},
{
"type": "response.completed",
"response": {
"id": "__ID__",
"usage": {
"input_tokens": 0,
"input_tokens_details": null,
"output_tokens": 0,
"output_tokens_details": null,
"total_tokens": 0
},
"output": []
}
}
]"#;
const SSE_TOOL_CALL_COMPLETED: &str = r#"[
{
"type": "response.completed",
"response": {
"id": "__ID__",
"usage": {
"input_tokens": 0,
"input_tokens_details": null,
"output_tokens": 0,
"output_tokens_details": null,
"total_tokens": 0
},
"output": []
}
}
]"#;
// Start a mock model server
let server = MockServer::start().await;
// First response: model calls apply_patch to create test.md
let first = ResponseTemplate::new(200)
.insert_header("content-type", "text/event-stream")

View File

@@ -26,7 +26,7 @@ multimap = "0.10.0"
path-absolutize = "3.1.1"
regex-lite = "0.1"
serde = { version = "1.0.194", features = ["derive"] }
serde_json = "1.0.142"
serde_json = "1.0.143"
serde_with = { version = "3", features = ["macros"] }
[dev-dependencies]

View File

@@ -17,5 +17,5 @@ clap = { version = "4", features = ["derive"] }
ignore = "0.4.23"
nucleo-matcher = "0.3.1"
serde = { version = "1", features = ["derive"] }
serde_json = "1.0.142"
serde_json = "1.0.143"
tokio = { version = "1", features = ["full"] }

View File

@@ -9,6 +9,7 @@ workspace = true
[dependencies]
base64 = "0.22"
chrono = { version = "0.4", features = ["serde"] }
codex-protocol = { path = "../protocol" }
rand = "0.8"
reqwest = { version = "0.12", features = ["json", "blocking"] }
serde = { version = "1", features = ["derive"] }

View File

@@ -0,0 +1,129 @@
use std::path::PathBuf;
use std::sync::Arc;
use std::sync::RwLock;
use crate::AuthMode;
use crate::CodexAuth;
/// Internal cached auth state.
#[derive(Clone, Debug)]
struct CachedAuth {
preferred_auth_mode: AuthMode,
auth: Option<CodexAuth>,
}
/// Central manager providing a single source of truth for auth.json derived
/// authentication data. It loads once (or on preference change) and then
/// hands out cloned `CodexAuth` values so the rest of the program has a
/// consistent snapshot.
///
/// External modifications to `auth.json` will NOT be observed until
/// `reload()` is called explicitly. This matches the design goal of avoiding
/// different parts of the program seeing inconsistent auth data mid-run.
#[derive(Debug)]
pub struct AuthManager {
codex_home: PathBuf,
inner: RwLock<CachedAuth>,
}
impl AuthManager {
/// Create a new manager loading the initial auth using the provided
/// preferred auth method. Errors loading auth are swallowed; `auth()` will
/// simply return `None` in that case so callers can treat it as an
/// unauthenticated state.
pub fn new(codex_home: PathBuf, preferred_auth_mode: AuthMode) -> Self {
let auth = crate::CodexAuth::from_codex_home(&codex_home, preferred_auth_mode)
.ok()
.flatten();
Self {
codex_home,
inner: RwLock::new(CachedAuth {
preferred_auth_mode,
auth,
}),
}
}
/// Create an AuthManager with a specific CodexAuth, for testing only.
pub fn from_auth_for_testing(auth: CodexAuth) -> Arc<Self> {
let preferred_auth_mode = auth.mode;
let cached = CachedAuth {
preferred_auth_mode,
auth: Some(auth),
};
Arc::new(Self {
codex_home: PathBuf::new(),
inner: RwLock::new(cached),
})
}
/// Current cached auth (clone). May be `None` if not logged in or load failed.
pub fn auth(&self) -> Option<CodexAuth> {
self.inner.read().ok().and_then(|c| c.auth.clone())
}
/// Preferred auth method used when (re)loading.
pub fn preferred_auth_method(&self) -> AuthMode {
self.inner
.read()
.map(|c| c.preferred_auth_mode)
.unwrap_or(AuthMode::ApiKey)
}
/// Force a reload using the existing preferred auth method. Returns
/// whether the auth value changed.
pub fn reload(&self) -> bool {
let preferred = self.preferred_auth_method();
let new_auth = crate::CodexAuth::from_codex_home(&self.codex_home, preferred)
.ok()
.flatten();
if let Ok(mut guard) = self.inner.write() {
let changed = !AuthManager::auths_equal(&guard.auth, &new_auth);
guard.auth = new_auth;
changed
} else {
false
}
}
fn auths_equal(a: &Option<CodexAuth>, b: &Option<CodexAuth>) -> bool {
match (a, b) {
(None, None) => true,
(Some(a), Some(b)) => a == b,
_ => false,
}
}
/// Convenience constructor returning an `Arc` wrapper.
pub fn shared(codex_home: PathBuf, preferred_auth_mode: AuthMode) -> Arc<Self> {
Arc::new(Self::new(codex_home, preferred_auth_mode))
}
/// Attempt to refresh the current auth token (if any). On success, reload
/// the auth state from disk so other components observe refreshed token.
pub async fn refresh_token(&self) -> std::io::Result<Option<String>> {
let auth = match self.auth() {
Some(a) => a,
None => return Ok(None),
};
match auth.refresh_token().await {
Ok(token) => {
// Reload to pick up persisted changes.
self.reload();
Ok(Some(token))
}
Err(e) => Err(e),
}
}
/// Log out by deleting the ondisk auth.json (if present). Returns Ok(true)
/// if a file was removed, Ok(false) if no auth file existed. On success,
/// reloads the inmemory auth cache so callers immediately observe the
/// unauthenticated state.
pub fn logout(&self) -> std::io::Result<bool> {
let removed = crate::logout(&self.codex_home)?;
// Always reload to clear any cached auth (even if file absent).
self.reload();
Ok(removed)
}
}
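For orientation, a minimal usage sketch of the new manager (the `sketch` function and its wiring are illustrative only; `AuthManager`, `AuthMode`, and the methods come from the file above):
use std::path::PathBuf;
use std::sync::Arc;

use codex_login::{AuthManager, AuthMode};

fn sketch(codex_home: PathBuf) {
    // One shared manager; components clone the same Arc and see one snapshot.
    let manager: Arc<AuthManager> = AuthManager::shared(codex_home, AuthMode::ChatGPT);

    // Reads hand out a cloned snapshot; None means unauthenticated.
    if let Some(auth) = manager.auth() {
        println!("signed in via {:?}", auth.mode);
    }

    // External edits to auth.json are observed only after an explicit reload.
    if manager.reload() {
        println!("auth data changed on disk");
    }
}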

View File

@@ -23,19 +23,15 @@ pub use crate::server::run_login_server;
pub use crate::token_data::TokenData;
use crate::token_data::parse_id_token;
mod auth_manager;
mod pkce;
mod server;
mod token_data;
pub const CLIENT_ID: &str = "app_EMoamEEZ73f0CkXaXp7hrann";
pub const OPENAI_API_KEY_ENV_VAR: &str = "OPENAI_API_KEY";
#[derive(Clone, Debug, PartialEq, Copy, Eq, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
pub enum AuthMode {
ApiKey,
ChatGPT,
}
pub use auth_manager::AuthManager;
pub use codex_protocol::mcp_protocol::AuthMode;
#[derive(Debug, Clone)]
pub struct CodexAuth {

View File

@@ -17,7 +17,7 @@ workspace = true
[dependencies]
anyhow = "1"
codex-arg0 = { path = "../arg0" }
codex-common = { path = "../common" }
codex-common = { path = "../common", features = ["cli"] }
codex-core = { path = "../core" }
codex-login = { path = "../login" }
codex-protocol = { path = "../protocol" }

View File

@@ -14,6 +14,8 @@ use codex_core::protocol::Event;
use codex_core::protocol::EventMsg;
use codex_core::protocol::ExecApprovalRequestEvent;
use codex_core::protocol::ReviewDecision;
use codex_login::AuthManager;
use codex_protocol::mcp_protocol::AuthMode;
use codex_protocol::mcp_protocol::GitDiffToRemoteResponse;
use mcp_types::JSONRPCErrorError;
use mcp_types::RequestId;
@@ -38,6 +40,7 @@ use codex_protocol::mcp_protocol::AddConversationListenerParams;
use codex_protocol::mcp_protocol::AddConversationSubscriptionResponse;
use codex_protocol::mcp_protocol::ApplyPatchApprovalParams;
use codex_protocol::mcp_protocol::ApplyPatchApprovalResponse;
use codex_protocol::mcp_protocol::AuthStatusChangeNotification;
use codex_protocol::mcp_protocol::ClientRequest;
use codex_protocol::mcp_protocol::ConversationId;
use codex_protocol::mcp_protocol::EXEC_COMMAND_APPROVAL_METHOD;
@@ -46,7 +49,6 @@ use codex_protocol::mcp_protocol::ExecCommandApprovalResponse;
use codex_protocol::mcp_protocol::InputItem as WireInputItem;
use codex_protocol::mcp_protocol::InterruptConversationParams;
use codex_protocol::mcp_protocol::InterruptConversationResponse;
use codex_protocol::mcp_protocol::LOGIN_CHATGPT_COMPLETE_EVENT;
use codex_protocol::mcp_protocol::LoginChatGptCompleteNotification;
use codex_protocol::mcp_protocol::LoginChatGptResponse;
use codex_protocol::mcp_protocol::NewConversationParams;
@@ -57,6 +59,7 @@ use codex_protocol::mcp_protocol::SendUserMessageParams;
use codex_protocol::mcp_protocol::SendUserMessageResponse;
use codex_protocol::mcp_protocol::SendUserTurnParams;
use codex_protocol::mcp_protocol::SendUserTurnResponse;
use codex_protocol::mcp_protocol::ServerNotification;
// Duration before a ChatGPT login attempt is abandoned.
const LOGIN_CHATGPT_TIMEOUT: Duration = Duration::from_secs(10 * 60);
@@ -74,9 +77,11 @@ impl ActiveLogin {
/// Handles JSON-RPC messages for Codex conversations.
pub(crate) struct CodexMessageProcessor {
auth_manager: Arc<AuthManager>,
conversation_manager: Arc<ConversationManager>,
outgoing: Arc<OutgoingMessageSender>,
codex_linux_sandbox_exe: Option<PathBuf>,
config: Arc<Config>,
conversation_listeners: HashMap<Uuid, oneshot::Sender<()>>,
active_login: Arc<Mutex<Option<ActiveLogin>>>,
// Queue of pending interrupt requests per conversation. We reply when TurnAborted arrives.
@@ -85,14 +90,18 @@ pub(crate) struct CodexMessageProcessor {
impl CodexMessageProcessor {
pub fn new(
auth_manager: Arc<AuthManager>,
conversation_manager: Arc<ConversationManager>,
outgoing: Arc<OutgoingMessageSender>,
codex_linux_sandbox_exe: Option<PathBuf>,
config: Arc<Config>,
) -> Self {
Self {
auth_manager,
conversation_manager,
outgoing,
codex_linux_sandbox_exe,
config,
conversation_listeners: HashMap::new(),
active_login: Arc::new(Mutex::new(None)),
pending_interrupts: Arc::new(Mutex::new(HashMap::new())),
@@ -122,32 +131,26 @@ impl CodexMessageProcessor {
ClientRequest::RemoveConversationListener { request_id, params } => {
self.remove_conversation_listener(request_id, params).await;
}
ClientRequest::GitDiffToRemote { request_id, params } => {
self.git_diff_to_origin(request_id, params.cwd).await;
}
ClientRequest::LoginChatGpt { request_id } => {
self.login_chatgpt(request_id).await;
}
ClientRequest::CancelLoginChatGpt { request_id, params } => {
self.cancel_login_chatgpt(request_id, params.login_id).await;
}
ClientRequest::LogoutChatGpt { request_id } => {
self.logout_chatgpt(request_id).await;
}
ClientRequest::GetAuthStatus { request_id, params } => {
self.get_auth_status(request_id, params).await;
}
}
}
async fn login_chatgpt(&mut self, request_id: RequestId) {
let config =
match Config::load_with_cli_overrides(Default::default(), ConfigOverrides::default()) {
Ok(cfg) => cfg,
Err(err) => {
let error = JSONRPCErrorError {
code: INTERNAL_ERROR_CODE,
message: format!("error loading config for login: {err}"),
data: None,
};
self.outgoing.send_error(request_id, error).await;
return;
}
};
let config = self.config.as_ref();
let opts = LoginServerOptions {
open_browser: false,
@@ -184,6 +187,7 @@ impl CodexMessageProcessor {
// Spawn background task to monitor completion.
let outgoing_clone = self.outgoing.clone();
let active_login = self.active_login.clone();
let auth_manager = self.auth_manager.clone();
tokio::spawn(async move {
let (success, error_msg) = match tokio::time::timeout(
LOGIN_CHATGPT_TIMEOUT,
@@ -199,19 +203,30 @@ impl CodexMessageProcessor {
(false, Some("Login timed out".to_string()))
}
};
let payload = LoginChatGptCompleteNotification {
login_id,
success,
error: error_msg,
};
outgoing_clone
.send_server_notification(ServerNotification::LoginChatGptComplete(payload))
.await;
// Send an auth status change notification.
if success {
// Update in-memory auth cache now that login completed.
auth_manager.reload();
// Notify clients with the actual current auth mode.
let current_auth_method = auth_manager.auth().map(|a| a.mode);
let payload = AuthStatusChangeNotification {
auth_method: current_auth_method,
};
outgoing_clone
.send_server_notification(ServerNotification::AuthStatusChange(payload))
.await;
}
// Clear the active login if it matches this attempt. It may have been replaced or cancelled.
let mut guard = active_login.lock().await;
if guard.as_ref().map(|l| l.login_id) == Some(login_id) {
@@ -260,6 +275,85 @@ impl CodexMessageProcessor {
}
}
async fn logout_chatgpt(&mut self, request_id: RequestId) {
{
// Cancel any active login attempt.
let mut guard = self.active_login.lock().await;
if let Some(active) = guard.take() {
active.drop();
}
}
if let Err(err) = self.auth_manager.logout() {
let error = JSONRPCErrorError {
code: INTERNAL_ERROR_CODE,
message: format!("logout failed: {err}"),
data: None,
};
self.outgoing.send_error(request_id, error).await;
return;
}
self.outgoing
.send_response(
request_id,
codex_protocol::mcp_protocol::LogoutChatGptResponse {},
)
.await;
// Send auth status change notification reflecting the current auth mode
// after logout (which may fall back to API key via env var).
let current_auth_method = self.auth_manager.auth().map(|auth| auth.mode);
let payload = AuthStatusChangeNotification {
auth_method: current_auth_method,
};
self.outgoing
.send_server_notification(ServerNotification::AuthStatusChange(payload))
.await;
}
async fn get_auth_status(
&self,
request_id: RequestId,
params: codex_protocol::mcp_protocol::GetAuthStatusParams,
) {
let preferred_auth_method: AuthMode = self.auth_manager.preferred_auth_method();
let include_token = params.include_token.unwrap_or(false);
let do_refresh = params.refresh_token.unwrap_or(false);
if do_refresh && let Err(err) = self.auth_manager.refresh_token().await {
tracing::warn!("failed to refresh token while getting auth status: {err}");
}
let response = match self.auth_manager.auth() {
Some(auth) => {
let (reported_auth_method, token_opt) = match auth.get_token().await {
Ok(token) if !token.is_empty() => {
let tok = if include_token { Some(token) } else { None };
(Some(auth.mode), tok)
}
Ok(_) => (None, None),
Err(err) => {
tracing::warn!("failed to get token for auth status: {err}");
(None, None)
}
};
codex_protocol::mcp_protocol::GetAuthStatusResponse {
auth_method: reported_auth_method,
preferred_auth_method,
auth_token: token_opt,
}
}
None => codex_protocol::mcp_protocol::GetAuthStatusResponse {
auth_method: None,
preferred_auth_method,
auth_token: None,
},
};
self.outgoing.send_response(request_id, response).await;
}
async fn process_new_conversation(&self, request_id: RequestId, params: NewConversationParams) {
let config = match derive_config_from_params(params, self.codex_linux_sandbox_exe.clone()) {
Ok(config) => config,
@@ -644,6 +738,7 @@ fn derive_config_from_params(
include_apply_patch_tool,
disable_response_storage: None,
show_raw_agent_reasoning: None,
tools_web_search_request: None,
};
let cli_overrides = cli_overrides

View File

@@ -163,6 +163,7 @@ impl CodexToolCallParam {
include_apply_patch_tool: None,
disable_response_storage: None,
show_raw_agent_reasoning: None,
tools_web_search_request: None,
};
let cli_overrides = cli_overrides

View File

@@ -268,12 +268,15 @@ async fn run_codex_tool_session_inner(
| EventMsg::ExecCommandOutputDelta(_)
| EventMsg::ExecCommandEnd(_)
| EventMsg::BackgroundEvent(_)
| EventMsg::StreamError(_)
| EventMsg::PatchApplyBegin(_)
| EventMsg::PatchApplyEnd(_)
| EventMsg::TurnDiff(_)
| EventMsg::WebSearchBegin(_)
| EventMsg::GetHistoryEntryResponse(_)
| EventMsg::PlanUpdate(_)
| EventMsg::TurnAborted(_)
| EventMsg::ConversationHistory(_)
| EventMsg::ShutdownComplete => {
// For now, we do not do anything extra for these
// events. Note that

View File

@@ -1,9 +1,14 @@
//! Prototype MCP server.
#![deny(clippy::print_stdout, clippy::print_stderr)]
use std::io::ErrorKind;
use std::io::Result as IoResult;
use std::path::PathBuf;
use codex_common::CliConfigOverrides;
use codex_core::config::Config;
use codex_core::config::ConfigOverrides;
use mcp_types::JSONRPCMessage;
use tokio::io::AsyncBufReadExt;
use tokio::io::AsyncWriteExt;
@@ -41,7 +46,10 @@ pub use crate::patch_approval::PatchApprovalResponse;
/// plenty for an interactive CLI.
const CHANNEL_CAPACITY: usize = 128;
pub async fn run_main(codex_linux_sandbox_exe: Option<PathBuf>) -> IoResult<()> {
pub async fn run_main(
codex_linux_sandbox_exe: Option<PathBuf>,
cli_config_overrides: CliConfigOverrides,
) -> IoResult<()> {
// Install a simple subscriber so `tracing` output is visible. Users can
// control the log level with `RUST_LOG`.
tracing_subscriber::fmt()
@@ -77,10 +85,27 @@ pub async fn run_main(codex_linux_sandbox_exe: Option<PathBuf>) -> IoResult<()>
}
});
// Parse CLI overrides once and derive the base Config eagerly so later
// components do not need to work with raw TOML values.
let cli_kv_overrides = cli_config_overrides.parse_overrides().map_err(|e| {
std::io::Error::new(
ErrorKind::InvalidInput,
format!("error parsing -c overrides: {e}"),
)
})?;
let config = Config::load_with_cli_overrides(cli_kv_overrides, ConfigOverrides::default())
.map_err(|e| {
std::io::Error::new(ErrorKind::InvalidData, format!("error loading config: {e}"))
})?;
// Task: process incoming messages.
let processor_handle = tokio::spawn({
let outgoing_message_sender = OutgoingMessageSender::new(outgoing_tx);
let mut processor = MessageProcessor::new(outgoing_message_sender, codex_linux_sandbox_exe);
let mut processor = MessageProcessor::new(
outgoing_message_sender,
codex_linux_sandbox_exe,
std::sync::Arc::new(config),
);
async move {
while let Some(msg) = incoming_rx.recv().await {
match msg {

View File

@@ -1,9 +1,10 @@
use codex_arg0::arg0_dispatch_or_else;
use codex_common::CliConfigOverrides;
use codex_mcp_server::run_main;
fn main() -> anyhow::Result<()> {
arg0_dispatch_or_else(|codex_linux_sandbox_exe| async move {
run_main(codex_linux_sandbox_exe).await?;
run_main(codex_linux_sandbox_exe, CliConfigOverrides::default()).await?;
Ok(())
})
}

View File

@@ -1,6 +1,5 @@
use std::collections::HashMap;
use std::path::PathBuf;
use std::sync::Arc;
use crate::codex_message_processor::CodexMessageProcessor;
use crate::codex_tool_config::CodexToolCallParam;
@@ -12,8 +11,9 @@ use crate::outgoing_message::OutgoingMessageSender;
use codex_protocol::mcp_protocol::ClientRequest;
use codex_core::ConversationManager;
use codex_core::config::Config as CodexConfig;
use codex_core::config::Config;
use codex_core::protocol::Submission;
use codex_login::AuthManager;
use mcp_types::CallToolRequestParams;
use mcp_types::CallToolResult;
use mcp_types::ClientRequest as McpClientRequest;
@@ -30,6 +30,7 @@ use mcp_types::ServerCapabilitiesTools;
use mcp_types::ServerNotification;
use mcp_types::TextContent;
use serde_json::json;
use std::sync::Arc;
use tokio::sync::Mutex;
use tokio::task;
use uuid::Uuid;
@@ -49,13 +50,18 @@ impl MessageProcessor {
pub(crate) fn new(
outgoing: OutgoingMessageSender,
codex_linux_sandbox_exe: Option<PathBuf>,
config: Arc<Config>,
) -> Self {
let outgoing = Arc::new(outgoing);
let conversation_manager = Arc::new(ConversationManager::default());
let auth_manager =
AuthManager::shared(config.codex_home.clone(), config.preferred_auth_method);
let conversation_manager = Arc::new(ConversationManager::new(auth_manager.clone()));
let codex_message_processor = CodexMessageProcessor::new(
auth_manager,
conversation_manager.clone(),
outgoing.clone(),
codex_linux_sandbox_exe.clone(),
config,
);
Self {
codex_message_processor,
@@ -344,7 +350,7 @@ impl MessageProcessor {
}
}
async fn handle_tool_call_codex(&self, id: RequestId, arguments: Option<serde_json::Value>) {
let (initial_prompt, config): (String, CodexConfig) = match arguments {
let (initial_prompt, config): (String, Config) = match arguments {
Some(json_val) => match serde_json::from_value::<CodexToolCallParam>(json_val) {
Ok(tool_cfg) => match tool_cfg.into_config(self.codex_linux_sandbox_exe.clone()) {
Ok(cfg) => cfg,

View File

@@ -3,6 +3,7 @@ use std::sync::atomic::AtomicI64;
use std::sync::atomic::Ordering;
use codex_core::protocol::Event;
use codex_protocol::mcp_protocol::ServerNotification;
use mcp_types::JSONRPC_VERSION;
use mcp_types::JSONRPCError;
use mcp_types::JSONRPCErrorError;
@@ -121,6 +122,17 @@ impl OutgoingMessageSender {
.await;
}
pub(crate) async fn send_server_notification(&self, notification: ServerNotification) {
let method = format!("codex/event/{}", notification);
let params = match serde_json::to_value(&notification) {
Ok(serde_json::Value::Object(mut map)) => map.remove("data"),
_ => None,
};
let outgoing_message =
OutgoingMessage::Notification(OutgoingNotification { method, params });
let _ = self.sender.send(outgoing_message).await;
}
pub(crate) async fn send_notification(&self, notification: OutgoingNotification) {
let outgoing_message = OutgoingMessage::Notification(notification);
let _ = self.sender.send(outgoing_message).await;
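Because `ServerNotification` (defined later in this diff) derives strum's `Display` with `serialize_all = "snake_case"` and serializes with `tag = "type"`, `content = "data"`, the method name and params above fall out mechanically. A sketch of the resulting shape:
let n = ServerNotification::AuthStatusChange(AuthStatusChangeNotification {
    auth_method: Some(AuthMode::ChatGPT),
});
assert_eq!(n.to_string(), "auth_status_change"); // strum Display, snake_case
assert_eq!(
    serde_json::to_value(&n).unwrap(),
    serde_json::json!({ "type": "auth_status_change", "data": { "authMethod": "chatgpt" } })
);
// send_server_notification therefore emits method "codex/event/auth_status_change"
// with params { "authMethod": "chatgpt" } (the extracted "data" object).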

View File

@@ -0,0 +1,142 @@
use std::path::Path;
use codex_login::login_with_api_key;
use codex_protocol::mcp_protocol::AuthMode;
use codex_protocol::mcp_protocol::GetAuthStatusParams;
use codex_protocol::mcp_protocol::GetAuthStatusResponse;
use mcp_test_support::McpProcess;
use mcp_test_support::to_response;
use mcp_types::JSONRPCResponse;
use mcp_types::RequestId;
use pretty_assertions::assert_eq;
use tempfile::TempDir;
use tokio::time::timeout;
const DEFAULT_READ_TIMEOUT: std::time::Duration = std::time::Duration::from_secs(10);
// Helper to create a config.toml; mirrors create_conversation.rs
fn create_config_toml(codex_home: &Path) -> std::io::Result<()> {
let config_toml = codex_home.join("config.toml");
std::fs::write(
config_toml,
r#"
model = "mock-model"
approval_policy = "never"
sandbox_mode = "danger-full-access"
model_provider = "mock_provider"
[model_providers.mock_provider]
name = "Mock provider for test"
base_url = "http://127.0.0.1:0/v1"
wire_api = "chat"
request_max_retries = 0
stream_max_retries = 0
"#,
)
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn get_auth_status_no_auth() {
let codex_home = TempDir::new().unwrap_or_else(|e| panic!("create tempdir: {e}"));
create_config_toml(codex_home.path()).expect("write config.toml");
let mut mcp = McpProcess::new(codex_home.path())
.await
.expect("spawn mcp process");
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize())
.await
.expect("init timeout")
.expect("init failed");
let request_id = mcp
.send_get_auth_status_request(GetAuthStatusParams {
include_token: Some(true),
refresh_token: Some(false),
})
.await
.expect("send getAuthStatus");
let resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(request_id)),
)
.await
.expect("getAuthStatus timeout")
.expect("getAuthStatus response");
let status: GetAuthStatusResponse = to_response(resp).expect("deserialize status");
assert_eq!(status.auth_method, None, "expected no auth method");
assert_eq!(status.auth_token, None, "expected no token");
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn get_auth_status_with_api_key() {
let codex_home = TempDir::new().unwrap_or_else(|e| panic!("create tempdir: {e}"));
create_config_toml(codex_home.path()).expect("write config.toml");
login_with_api_key(codex_home.path(), "sk-test-key").expect("seed api key");
let mut mcp = McpProcess::new(codex_home.path())
.await
.expect("spawn mcp process");
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize())
.await
.expect("init timeout")
.expect("init failed");
let request_id = mcp
.send_get_auth_status_request(GetAuthStatusParams {
include_token: Some(true),
refresh_token: Some(false),
})
.await
.expect("send getAuthStatus");
let resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(request_id)),
)
.await
.expect("getAuthStatus timeout")
.expect("getAuthStatus response");
let status: GetAuthStatusResponse = to_response(resp).expect("deserialize status");
assert_eq!(status.auth_method, Some(AuthMode::ApiKey));
assert_eq!(status.auth_token, Some("sk-test-key".to_string()));
assert_eq!(status.preferred_auth_method, AuthMode::ChatGPT);
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn get_auth_status_with_api_key_no_include_token() {
let codex_home = TempDir::new().unwrap_or_else(|e| panic!("create tempdir: {e}"));
create_config_toml(codex_home.path()).expect("write config.toml");
login_with_api_key(codex_home.path(), "sk-test-key").expect("seed api key");
let mut mcp = McpProcess::new(codex_home.path())
.await
.expect("spawn mcp process");
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize())
.await
.expect("init timeout")
.expect("init failed");
// Build params via struct so the `None` field is omitted from the wire JSON.
let params = GetAuthStatusParams {
include_token: None,
refresh_token: Some(false),
};
let request_id = mcp
.send_get_auth_status_request(params)
.await
.expect("send getAuthStatus");
let resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(request_id)),
)
.await
.expect("getAuthStatus timeout")
.expect("getAuthStatus response");
let status: GetAuthStatusResponse = to_response(resp).expect("deserialize status");
assert_eq!(status.auth_method, Some(AuthMode::ApiKey));
assert!(status.auth_token.is_none(), "token must be omitted");
assert_eq!(status.preferred_auth_method, AuthMode::ChatGPT);
}

View File

@@ -86,9 +86,7 @@ async fn shell_command_approval_triggers_elicitation() -> anyhow::Result<()> {
)
.await??;
// This is the first request from the server, so the id should be 0 given
// how things are currently implemented.
let elicitation_request_id = RequestId::Integer(0);
let elicitation_request_id = elicitation_request.id.clone();
let params = serde_json::from_value::<ExecApprovalElicitRequestParams>(
elicitation_request
.params

View File

@@ -13,6 +13,8 @@ use anyhow::Context;
use assert_cmd::prelude::*;
use codex_mcp_server::CodexToolCallParam;
use codex_protocol::mcp_protocol::AddConversationListenerParams;
use codex_protocol::mcp_protocol::CancelLoginChatGptParams;
use codex_protocol::mcp_protocol::GetAuthStatusParams;
use codex_protocol::mcp_protocol::InterruptConversationParams;
use codex_protocol::mcp_protocol::NewConversationParams;
use codex_protocol::mcp_protocol::RemoveConversationListenerParams;
@@ -217,6 +219,34 @@ impl McpProcess {
self.send_request("interruptConversation", params).await
}
/// Send a `getAuthStatus` JSON-RPC request.
pub async fn send_get_auth_status_request(
&mut self,
params: GetAuthStatusParams,
) -> anyhow::Result<i64> {
let params = Some(serde_json::to_value(params)?);
self.send_request("getAuthStatus", params).await
}
/// Send a `loginChatGpt` JSON-RPC request.
pub async fn send_login_chat_gpt_request(&mut self) -> anyhow::Result<i64> {
self.send_request("loginChatGpt", None).await
}
/// Send a `cancelLoginChatGpt` JSON-RPC request.
pub async fn send_cancel_login_chat_gpt_request(
&mut self,
params: CancelLoginChatGptParams,
) -> anyhow::Result<i64> {
let params = Some(serde_json::to_value(params)?);
self.send_request("cancelLoginChatGpt", params).await
}
/// Send a `logoutChatGpt` JSON-RPC request.
pub async fn send_logout_chat_gpt_request(&mut self) -> anyhow::Result<i64> {
self.send_request("logoutChatGpt", None).await
}
async fn send_request(
&mut self,
method: &str,

View File

@@ -0,0 +1,146 @@
use std::path::Path;
use std::time::Duration;
use codex_login::login_with_api_key;
use codex_protocol::mcp_protocol::CancelLoginChatGptParams;
use codex_protocol::mcp_protocol::CancelLoginChatGptResponse;
use codex_protocol::mcp_protocol::GetAuthStatusParams;
use codex_protocol::mcp_protocol::GetAuthStatusResponse;
use codex_protocol::mcp_protocol::LoginChatGptResponse;
use codex_protocol::mcp_protocol::LogoutChatGptResponse;
use mcp_test_support::McpProcess;
use mcp_test_support::to_response;
use mcp_types::JSONRPCResponse;
use mcp_types::RequestId;
use tempfile::TempDir;
use tokio::time::timeout;
const DEFAULT_READ_TIMEOUT: std::time::Duration = std::time::Duration::from_secs(10);
// Helper to create a config.toml; mirrors create_conversation.rs
fn create_config_toml(codex_home: &Path) -> std::io::Result<()> {
let config_toml = codex_home.join("config.toml");
std::fs::write(
config_toml,
r#"
model = "mock-model"
approval_policy = "never"
sandbox_mode = "danger-full-access"
model_provider = "mock_provider"
[model_providers.mock_provider]
name = "Mock provider for test"
base_url = "http://127.0.0.1:0/v1"
wire_api = "chat"
request_max_retries = 0
stream_max_retries = 0
"#,
)
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn logout_chatgpt_removes_auth() {
let codex_home = TempDir::new().unwrap_or_else(|e| panic!("create tempdir: {e}"));
create_config_toml(codex_home.path()).expect("write config.toml");
login_with_api_key(codex_home.path(), "sk-test-key").expect("seed api key");
assert!(codex_home.path().join("auth.json").exists());
let mut mcp = McpProcess::new(codex_home.path())
.await
.expect("spawn mcp process");
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize())
.await
.expect("init timeout")
.expect("init failed");
let id = mcp
.send_logout_chat_gpt_request()
.await
.expect("send logoutChatGpt");
let resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(id)),
)
.await
.expect("logoutChatGpt timeout")
.expect("logoutChatGpt response");
let _ok: LogoutChatGptResponse = to_response(resp).expect("deserialize logout response");
assert!(
!codex_home.path().join("auth.json").exists(),
"auth.json should be deleted"
);
// Verify status reflects signed-out state.
let status_id = mcp
.send_get_auth_status_request(GetAuthStatusParams {
include_token: Some(true),
refresh_token: Some(false),
})
.await
.expect("send getAuthStatus");
let status_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(status_id)),
)
.await
.expect("getAuthStatus timeout")
.expect("getAuthStatus response");
let status: GetAuthStatusResponse = to_response(status_resp).expect("deserialize status");
assert_eq!(status.auth_method, None);
assert_eq!(status.auth_token, None);
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn login_and_cancel_chatgpt() {
let codex_home = TempDir::new().unwrap_or_else(|e| panic!("create tempdir: {e}"));
create_config_toml(codex_home.path()).expect("write config.toml");
let mut mcp = McpProcess::new(codex_home.path())
.await
.expect("spawn mcp process");
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize())
.await
.expect("init timeout")
.expect("init failed");
let login_id = mcp
.send_login_chat_gpt_request()
.await
.expect("send loginChatGpt");
let login_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(login_id)),
)
.await
.expect("loginChatGpt timeout")
.expect("loginChatGpt response");
let login: LoginChatGptResponse = to_response(login_resp).expect("deserialize login resp");
let cancel_id = mcp
.send_cancel_login_chat_gpt_request(CancelLoginChatGptParams {
login_id: login.login_id,
})
.await
.expect("send cancelLoginChatGpt");
let cancel_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(cancel_id)),
)
.await
.expect("cancelLoginChatGpt timeout")
.expect("cancelLoginChatGpt response");
let _ok: CancelLoginChatGptResponse =
to_response(cancel_resp).expect("deserialize cancel response");
// Optionally observe the completion notification; do not fail if it races.
let maybe_note = timeout(
Duration::from_secs(2),
mcp.read_stream_until_notification_message("codex/event/login_chat_gpt_complete"),
)
.await;
if maybe_note.is_err() {
eprintln!("warning: did not observe login_chat_gpt_complete notification after cancel");
}
}

View File

@@ -32,15 +32,21 @@ pub fn generate_ts(out_dir: &Path, prettier: Option<&Path>) -> Result<()> {
codex_protocol::mcp_protocol::SendUserTurnResponse::export_all_to(out_dir)?;
codex_protocol::mcp_protocol::InterruptConversationParams::export_all_to(out_dir)?;
codex_protocol::mcp_protocol::InterruptConversationResponse::export_all_to(out_dir)?;
codex_protocol::mcp_protocol::GitDiffToRemoteParams::export_all_to(out_dir)?;
codex_protocol::mcp_protocol::GitDiffToRemoteResponse::export_all_to(out_dir)?;
codex_protocol::mcp_protocol::LoginChatGptResponse::export_all_to(out_dir)?;
codex_protocol::mcp_protocol::LoginChatGptCompleteNotification::export_all_to(out_dir)?;
codex_protocol::mcp_protocol::CancelLoginChatGptParams::export_all_to(out_dir)?;
codex_protocol::mcp_protocol::CancelLoginChatGptResponse::export_all_to(out_dir)?;
codex_protocol::mcp_protocol::GitDiffToRemoteParams::export_all_to(out_dir)?;
codex_protocol::mcp_protocol::LogoutChatGptParams::export_all_to(out_dir)?;
codex_protocol::mcp_protocol::LogoutChatGptResponse::export_all_to(out_dir)?;
codex_protocol::mcp_protocol::GetAuthStatusParams::export_all_to(out_dir)?;
codex_protocol::mcp_protocol::GetAuthStatusResponse::export_all_to(out_dir)?;
codex_protocol::mcp_protocol::ApplyPatchApprovalParams::export_all_to(out_dir)?;
codex_protocol::mcp_protocol::ApplyPatchApprovalResponse::export_all_to(out_dir)?;
codex_protocol::mcp_protocol::ExecCommandApprovalParams::export_all_to(out_dir)?;
codex_protocol::mcp_protocol::ExecCommandApprovalResponse::export_all_to(out_dir)?;
codex_protocol::mcp_protocol::ServerNotification::export_all_to(out_dir)?;
// Prepend header to each generated .ts file
let ts_files = ts_files_in(out_dir)?;

View File

@@ -11,12 +11,15 @@ path = "src/lib.rs"
workspace = true
[dependencies]
base64 = "0.22.1"
mcp-types = { path = "../mcp-types" }
mime_guess = "2.0.5"
serde = { version = "1", features = ["derive"] }
serde_bytes = "0.11"
serde_json = "1"
strum = "0.27.2"
strum_macros = "0.27.2"
tracing = "0.1.41"
ts-rs = { version = "11", features = ["uuid-impl", "serde-json-impl"] }
uuid = { version = "1", features = ["serde", "v4"] }

View File

@@ -1,6 +1,7 @@
pub mod config_types;
pub mod mcp_protocol;
pub mod message_history;
pub mod models;
pub mod parse_command;
pub mod plan_tool;
pub mod protocol;

View File

@@ -13,6 +13,7 @@ use crate::protocol::TurnAbortReason;
use mcp_types::RequestId;
use serde::Deserialize;
use serde::Serialize;
use strum_macros::Display;
use ts_rs::TS;
use uuid::Uuid;
@@ -36,6 +37,13 @@ impl GitSha {
}
}
#[derive(Serialize, Deserialize, Debug, Clone, Copy, PartialEq, Eq, TS)]
#[serde(rename_all = "lowercase")]
pub enum AuthMode {
ApiKey,
ChatGPT,
}
/// Request from the client to the server.
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(tag = "method", rename_all = "camelCase")]
@@ -70,6 +78,11 @@ pub enum ClientRequest {
request_id: RequestId,
params: RemoveConversationListenerParams,
},
GitDiffToRemote {
#[serde(rename = "id")]
request_id: RequestId,
params: GitDiffToRemoteParams,
},
LoginChatGpt {
#[serde(rename = "id")]
request_id: RequestId,
@@ -79,10 +92,14 @@ pub enum ClientRequest {
request_id: RequestId,
params: CancelLoginChatGptParams,
},
LogoutChatGpt {
#[serde(rename = "id")]
request_id: RequestId,
},
GetAuthStatus {
#[serde(rename = "id")]
request_id: RequestId,
params: GetAuthStatusParams,
},
}
@@ -161,18 +178,6 @@ pub struct GitDiffToRemoteResponse {
pub diff: String,
}
// Event name for notifying client of login completion or failure.
pub const LOGIN_CHATGPT_COMPLETE_EVENT: &str = "codex/event/login_chatgpt_complete";
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct LoginChatGptCompleteNotification {
pub login_id: Uuid,
pub success: bool,
#[serde(skip_serializing_if = "Option::is_none")]
pub error: Option<String>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct CancelLoginChatGptParams {
@@ -189,6 +194,35 @@ pub struct GitDiffToRemoteParams {
#[serde(rename_all = "camelCase")]
pub struct CancelLoginChatGptResponse {}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct LogoutChatGptParams {}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct LogoutChatGptResponse {}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct GetAuthStatusParams {
/// If true, include the current auth token (if available) in the response.
#[serde(skip_serializing_if = "Option::is_none")]
pub include_token: Option<bool>,
/// If true, attempt to refresh the token before returning status.
#[serde(skip_serializing_if = "Option::is_none")]
pub refresh_token: Option<bool>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct GetAuthStatusResponse {
#[serde(skip_serializing_if = "Option::is_none")]
pub auth_method: Option<AuthMode>,
pub preferred_auth_method: AuthMode,
#[serde(skip_serializing_if = "Option::is_none")]
pub auth_token: Option<String>,
}
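Since every optional field above carries `skip_serializing_if`, `None` fields are omitted from the wire JSON entirely. A quick sketch of the shapes, using the camelCase renames derived above:
let params = GetAuthStatusParams {
    include_token: Some(true),
    refresh_token: None,
};
assert_eq!(
    serde_json::to_value(&params).unwrap(),
    serde_json::json!({ "includeToken": true })
);
// A signed-out status response likewise omits authMethod and authToken:
// { "preferredAuthMethod": "chatgpt" }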
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct SendUserMessageParams {
@@ -321,6 +355,34 @@ pub struct ApplyPatchApprovalResponse {
pub decision: ReviewDecision,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct LoginChatGptCompleteNotification {
pub login_id: Uuid,
pub success: bool,
#[serde(skip_serializing_if = "Option::is_none")]
pub error: Option<String>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct AuthStatusChangeNotification {
/// Current authentication method; omitted if signed out.
#[serde(skip_serializing_if = "Option::is_none")]
pub auth_method: Option<AuthMode>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS, Display)]
#[serde(tag = "type", content = "data", rename_all = "snake_case")]
#[strum(serialize_all = "snake_case")]
pub enum ServerNotification {
/// Authentication status changed
AuthStatusChange(AuthStatusChangeNotification),
/// ChatGPT login flow completed
LoginChatGptComplete(LoginChatGptCompleteNotification),
}
#[cfg(test)]
mod tests {
use super::*;

View File

@@ -24,6 +24,10 @@ pub enum ResponseInputItem {
call_id: String,
result: Result<CallToolResult, String>,
},
CustomToolCallOutput {
call_id: String,
output: String,
},
}
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
@@ -77,6 +81,20 @@ pub enum ResponseItem {
call_id: String,
output: FunctionCallOutputPayload,
},
CustomToolCall {
#[serde(default, skip_serializing_if = "Option::is_none")]
id: Option<String>,
#[serde(default, skip_serializing_if = "Option::is_none")]
status: Option<String>,
call_id: String,
name: String,
input: String,
},
CustomToolCallOutput {
call_id: String,
output: String,
},
#[serde(other)]
Other,
}
@@ -114,6 +132,9 @@ impl From<ResponseInputItem> for ResponseItem {
),
},
},
ResponseInputItem::CustomToolCallOutput { call_id, output } => {
Self::CustomToolCallOutput { call_id, output }
}
}
}
}
@@ -183,7 +204,6 @@ impl From<Vec<InputItem>> for ResponseInputItem {
None
}
},
_ => None,
})
.collect::<Vec<ContentItem>>(),
}
@@ -197,10 +217,8 @@ pub struct ShellToolCallParams {
pub command: Vec<String>,
pub workdir: Option<String>,
// The wire format uses `timeout`, which has ambiguous units, so we use
// `timeout_ms` as the field name so it is clear in code.
/// This is the maximum time in milliseconds that the command is allowed to run.
#[serde(alias = "timeout")]
pub timeout_ms: Option<u64>,
#[serde(skip_serializing_if = "Option::is_none")]
pub with_escalated_permissions: Option<bool>,
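A sketch of what the `alias` buys (assuming the struct keeps its `Deserialize` derive, and eliding any fields outside this hunk): the historical `timeout` key and the new `timeout_ms` key both land in `timeout_ms`, while serialization only ever writes `timeout_ms`.
let old: ShellToolCallParams =
    serde_json::from_str(r#"{"command":["ls"],"timeout":1000}"#).unwrap();
let new: ShellToolCallParams =
    serde_json::from_str(r#"{"command":["ls"],"timeout_ms":1000}"#).unwrap();
assert_eq!(old.timeout_ms, Some(1000));
assert_eq!(new.timeout_ms, Some(1000));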

View File

@@ -2,6 +2,7 @@ use serde::Deserialize;
use serde::Serialize;
#[derive(Debug, Clone, PartialEq, Eq, Deserialize, Serialize)]
#[serde(tag = "type", rename_all = "snake_case")]
pub enum ParsedCommand {
Read {
cmd: String,

View File

@@ -22,6 +22,7 @@ use uuid::Uuid;
use crate::config_types::ReasoningEffort as ReasoningEffortConfig;
use crate::config_types::ReasoningSummary as ReasoningSummaryConfig;
use crate::message_history::HistoryEntry;
use crate::models::ResponseItem;
use crate::parse_command::ParsedCommand;
use crate::plan_tool::UpdatePlanArgs;
@@ -137,6 +138,10 @@ pub enum Op {
/// Request a single history entry identified by `log_id` + `offset`.
GetHistoryEntryRequest { offset: usize, log_id: u64 },
/// Request the full in-memory conversation transcript for the current session.
/// Reply is delivered via `EventMsg::ConversationHistory`.
GetHistory,
/// Request the list of MCP tools available across all configured servers.
/// Reply is delivered via `EventMsg::McpListToolsResponse`.
ListMcpTools,
@@ -432,6 +437,8 @@ pub enum EventMsg {
McpToolCallEnd(McpToolCallEndEvent),
WebSearchBegin(WebSearchBeginEvent),
/// Notification that the server is about to execute a command.
ExecCommandBegin(ExecCommandBeginEvent),
@@ -446,6 +453,10 @@ pub enum EventMsg {
BackgroundEvent(BackgroundEventEvent),
/// Notification that a model stream experienced an error or disconnect
/// and the system is handling it (e.g., retrying with backoff).
StreamError(StreamErrorEvent),
/// Notification that the agent is about to apply a code patch. Mirrors
/// `ExecCommandBegin` so frontends can show progress indicators.
PatchApplyBegin(PatchApplyBeginEvent),
@@ -467,6 +478,8 @@ pub enum EventMsg {
/// Notification that the agent is shutting down.
ShutdownComplete,
ConversationHistory(ConversationHistoryResponseEvent),
}
// Individual event payload types matching each `EventMsg` variant.
@@ -647,6 +660,20 @@ impl McpToolCallEndEvent {
}
}
#[derive(Debug, Clone, Deserialize, Serialize)]
pub struct WebSearchBeginEvent {
pub call_id: String,
pub query: String,
}
/// Response payload for `Op::GetHistory` containing the current session's
/// in-memory transcript.
#[derive(Debug, Clone, Deserialize, Serialize)]
pub struct ConversationHistoryResponseEvent {
pub conversation_id: Uuid,
pub entries: Vec<ResponseItem>,
}
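The new pair is asynchronous on the wire: a frontend submits `Op::GetHistory` and later matches the `ConversationHistory` event. A sketch (the `codex` handle and its `submit`/`next_event` methods are stand-ins for whatever submission channel the frontend already uses):
codex.submit(Op::GetHistory).await?;
while let Ok(event) = codex.next_event().await {
    if let EventMsg::ConversationHistory(ConversationHistoryResponseEvent {
        conversation_id,
        entries,
    }) = event.msg
    {
        println!("{conversation_id}: {} transcript items", entries.len());
        break;
    }
}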
#[derive(Debug, Clone, Deserialize, Serialize)]
pub struct ExecCommandBeginEvent {
/// Identifier so this can be paired with the ExecCommandEnd event.
@@ -666,10 +693,15 @@ pub struct ExecCommandEndEvent {
pub stdout: String,
/// Captured stderr
pub stderr: String,
/// Captured aggregated output
#[serde(default)]
pub aggregated_output: String,
/// The command's exit code.
pub exit_code: i32,
/// The duration of the command execution.
pub duration: Duration,
/// Formatted output from the command, as seen by the model.
pub formatted_output: String,
}
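The `#[serde(default)]` keeps previously recorded events deserializable: payloads that predate `aggregated_output` simply get an empty string. A sketch (field set trimmed to this hunk; `call_id` is assumed from the surrounding struct):
let old_event = r#"{
    "call_id": "c1",
    "stdout": "hi",
    "stderr": "",
    "exit_code": 0,
    "duration": { "secs": 0, "nanos": 0 },
    "formatted_output": "hi"
}"#;
let ev: ExecCommandEndEvent = serde_json::from_str(old_event).unwrap();
assert_eq!(ev.aggregated_output, ""); // defaulted, not a parse error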
#[derive(Debug, Clone, Deserialize, Serialize)]
@@ -721,6 +753,11 @@ pub struct BackgroundEventEvent {
pub message: String,
}
#[derive(Debug, Clone, Deserialize, Serialize)]
pub struct StreamErrorEvent {
pub message: String,
}
#[derive(Debug, Clone, Deserialize, Serialize)]
pub struct PatchApplyBeginEvent {
/// Identifier so this can be paired with the PatchApplyEnd event.

View File

@@ -22,6 +22,7 @@ workspace = true
[dependencies]
anyhow = "1"
arboard = "3"
async-stream = "0.3.6"
base64 = "0.22.1"
chrono = { version = "0.4", features = ["serde"] }
@@ -41,7 +42,10 @@ codex-protocol = { path = "../protocol" }
color-eyre = "0.6.3"
crossterm = { version = "0.28.1", features = ["bracketed-paste", "event-stream"] }
diffy = "0.4.2"
image = { version = "^0.25.6", default-features = false, features = ["jpeg"] }
image = { version = "^0.25.6", default-features = false, features = [
"jpeg",
"png",
] }
lazy_static = "1"
mcp-types = { path = "../mcp-types" }
once_cell = "1"
@@ -61,6 +65,7 @@ shlex = "1.3.0"
strum = "0.27.2"
strum_macros = "0.27.2"
supports-color = "3.0.2"
tempfile = "3"
textwrap = "0.16.2"
tokio = { version = "1", features = [
"io-std",

View File

@@ -1,21 +1,23 @@
use crate::app_backtrack::BacktrackState;
use crate::app_event::AppEvent;
use crate::app_event_sender::AppEventSender;
use crate::chatwidget::ChatWidget;
use crate::file_search::FileSearchManager;
use crate::get_git_diff::get_git_diff;
use crate::slash_command::SlashCommand;
use crate::transcript_app::TranscriptApp;
use crate::tui;
use crate::tui::TuiEvent;
use codex_ansi_escape::ansi_escape_line;
use codex_core::ConversationManager;
use codex_core::config::Config;
use codex_core::protocol::Event;
use codex_core::protocol::Op;
use codex_core::protocol::TokenUsage;
use codex_login::AuthManager;
use color_eyre::eyre::Result;
use crossterm::event::KeyCode;
use crossterm::event::KeyEvent;
use crossterm::event::KeyEventKind;
use crossterm::terminal::supports_keyboard_enhancement;
use ratatui::style::Stylize;
use ratatui::text::Line;
use std::path::PathBuf;
use std::sync::Arc;
use std::sync::atomic::AtomicBool;
@@ -24,26 +26,37 @@ use std::thread;
use std::time::Duration;
use tokio::select;
use tokio::sync::mpsc::unbounded_channel;
// use uuid::Uuid;
pub(crate) struct App {
server: Arc<ConversationManager>,
app_event_tx: AppEventSender,
chat_widget: ChatWidget,
pub(crate) server: Arc<ConversationManager>,
pub(crate) app_event_tx: AppEventSender,
pub(crate) chat_widget: ChatWidget,
/// Config is stored here so we can recreate ChatWidgets as needed.
config: Config,
pub(crate) config: Config,
file_search: FileSearchManager,
pub(crate) file_search: FileSearchManager,
enhanced_keys_supported: bool,
pub(crate) transcript_lines: Vec<Line<'static>>,
// Transcript overlay state
pub(crate) transcript_overlay: Option<TranscriptApp>,
pub(crate) deferred_history_lines: Vec<Line<'static>>,
pub(crate) enhanced_keys_supported: bool,
/// Controls the animation thread that sends CommitTick events.
commit_anim_running: Arc<AtomicBool>,
pub(crate) commit_anim_running: Arc<AtomicBool>,
// Esc-backtracking state grouped
pub(crate) backtrack: crate::app_backtrack::BacktrackState,
}
impl App {
pub async fn run(
tui: &mut tui::Tui,
auth_manager: Arc<AuthManager>,
config: Config,
initial_prompt: Option<String>,
initial_images: Vec<PathBuf>,
@@ -52,7 +65,7 @@ impl App {
let (app_event_tx, mut app_event_rx) = unbounded_channel();
let app_event_tx = AppEventSender::new(app_event_tx);
let conversation_manager = Arc::new(ConversationManager::default());
let conversation_manager = Arc::new(ConversationManager::new(auth_manager.clone()));
let enhanced_keys_supported = supports_keyboard_enhancement().unwrap_or(false);
@@ -75,7 +88,11 @@ impl App {
config,
file_search,
enhanced_keys_supported,
transcript_lines: Vec::new(),
transcript_overlay: None,
deferred_history_lines: Vec::new(),
commit_anim_running: Arc::new(AtomicBool::new(false)),
backtrack: BacktrackState::default(),
};
let tui_events = tui.event_stream();
@@ -85,7 +102,7 @@ impl App {
while select! {
Some(event) = app_event_rx.recv() => {
app.handle_event(tui, event)?
app.handle_event(tui, event).await?
}
Some(event) = tui_events.next() => {
app.handle_tui_event(tui, event).await?
@@ -100,43 +117,86 @@ impl App {
tui: &mut tui::Tui,
event: TuiEvent,
) -> Result<bool> {
match event {
TuiEvent::Key(key_event) => {
self.handle_key_event(key_event).await;
}
TuiEvent::Paste(pasted) => {
// Many terminals convert newlines to \r when pasting (e.g., iTerm2),
// but tui-textarea expects \n. Normalize CR to LF.
// [tui-textarea]: https://github.com/rhysd/tui-textarea/blob/4d18622eeac13b309e0ff6a55a46ac6706da68cf/src/textarea.rs#L782-L783
// [iTerm2]: https://github.com/gnachman/iTerm2/blob/5d0c0d9f68523cbd0494dad5422998964a2ecd8d/sources/iTermPasteHelper.m#L206-L216
let pasted = pasted.replace("\r", "\n");
self.chat_widget.handle_paste(pasted);
}
TuiEvent::Draw => {
tui.draw(
self.chat_widget.desired_height(tui.terminal.size()?.width),
|frame| {
frame.render_widget_ref(&self.chat_widget, frame.area());
if let Some((x, y)) = self.chat_widget.cursor_pos(frame.area()) {
frame.set_cursor_position((x, y));
}
},
)?;
}
#[cfg(unix)]
TuiEvent::ResumeFromSuspend => {
let cursor_pos = tui.terminal.get_cursor_position()?;
tui.terminal
.set_viewport_area(ratatui::layout::Rect::new(0, cursor_pos.y, 0, 0));
if self.transcript_overlay.is_some() {
let _ = self.handle_backtrack_overlay_event(tui, event).await?;
} else {
match event {
TuiEvent::Key(key_event) => {
self.handle_key_event(tui, key_event).await;
}
TuiEvent::Paste(pasted) => {
// Many terminals convert newlines to \r when pasting (e.g., iTerm2),
// but tui-textarea expects \n. Normalize CR to LF.
// [tui-textarea]: https://github.com/rhysd/tui-textarea/blob/4d18622eeac13b309e0ff6a55a46ac6706da68cf/src/textarea.rs#L782-L783
// [iTerm2]: https://github.com/gnachman/iTerm2/blob/5d0c0d9f68523cbd0494dad5422998964a2ecd8d/sources/iTermPasteHelper.m#L206-L216
let pasted = pasted.replace("\r", "\n");
self.chat_widget.handle_paste(pasted);
}
TuiEvent::Draw => {
tui.draw(
self.chat_widget.desired_height(tui.terminal.size()?.width),
|frame| {
frame.render_widget_ref(&self.chat_widget, frame.area());
if let Some((x, y)) = self.chat_widget.cursor_pos(frame.area()) {
frame.set_cursor_position((x, y));
}
},
)?;
}
TuiEvent::AttachImage {
path,
width,
height,
format_label,
} => {
self.chat_widget
.attach_image(path, width, height, format_label);
}
}
}
Ok(true)
}
fn handle_event(&mut self, tui: &mut tui::Tui, event: AppEvent) -> Result<bool> {
async fn handle_event(&mut self, tui: &mut tui::Tui, event: AppEvent) -> Result<bool> {
match event {
AppEvent::InsertHistory(lines) => {
tui.insert_history_lines(lines);
AppEvent::NewSession => {
self.chat_widget = ChatWidget::new(
self.config.clone(),
self.server.clone(),
tui.frame_requester(),
self.app_event_tx.clone(),
None,
Vec::new(),
self.enhanced_keys_supported,
);
tui.frame_requester().schedule_frame();
}
AppEvent::InsertHistoryLines(lines) => {
if let Some(overlay) = &mut self.transcript_overlay {
overlay.insert_lines(lines.clone());
tui.frame_requester().schedule_frame();
}
self.transcript_lines.extend(lines.clone());
if self.transcript_overlay.is_some() {
self.deferred_history_lines.extend(lines);
} else {
tui.insert_history_lines(lines);
}
}
AppEvent::InsertHistoryCell(cell) => {
if let Some(overlay) = &mut self.transcript_overlay {
overlay.insert_lines(cell.transcript_lines());
tui.frame_requester().schedule_frame();
}
self.transcript_lines.extend(cell.transcript_lines());
let display = cell.display_lines();
if !display.is_empty() {
if self.transcript_overlay.is_some() {
self.deferred_history_lines.extend(display);
} else {
tui.insert_history_lines(display);
}
}
}
AppEvent::StartCommitAnimation => {
if self
@@ -163,118 +223,29 @@ impl App {
AppEvent::CodexEvent(event) => {
self.chat_widget.handle_codex_event(event);
}
AppEvent::ConversationHistory(ev) => {
self.on_conversation_history_for_backtrack(tui, ev).await?;
}
AppEvent::ExitRequest => {
return Ok(false);
}
AppEvent::CodexOp(op) => self.chat_widget.submit_op(op),
AppEvent::DiffResult(text) => {
self.chat_widget.add_diff_output(text);
// Clear the in-progress state in the bottom pane
self.chat_widget.on_diff_complete();
// Enter alternate screen using TUI helper and build pager lines
let _ = tui.enter_alt_screen();
let pager_lines: Vec<ratatui::text::Line<'static>> = if text.trim().is_empty() {
vec!["No changes detected.".italic().into()]
} else {
text.lines().map(ansi_escape_line).collect()
};
self.transcript_overlay = Some(TranscriptApp::with_title(
pager_lines,
"D I F F".to_string(),
));
tui.frame_requester().schedule_frame();
}
AppEvent::DispatchCommand(command) => match command {
SlashCommand::New => {
// User accepted switch to chat view.
let new_widget = ChatWidget::new(
self.config.clone(),
self.server.clone(),
tui.frame_requester(),
self.app_event_tx.clone(),
None,
Vec::new(),
self.enhanced_keys_supported,
);
self.chat_widget = new_widget;
tui.frame_requester().schedule_frame();
}
SlashCommand::Init => {
// Guard: do not run if a task is active.
const INIT_PROMPT: &str = include_str!("../prompt_for_init_command.md");
self.chat_widget
.submit_text_message(INIT_PROMPT.to_string());
}
SlashCommand::Compact => {
self.chat_widget.clear_token_usage();
self.app_event_tx.send(AppEvent::CodexOp(Op::Compact));
}
SlashCommand::Model => {
self.chat_widget.open_model_popup();
}
SlashCommand::Approvals => {
self.chat_widget.open_approvals_popup();
}
SlashCommand::Quit => {
return Ok(false);
}
SlashCommand::Logout => {
if let Err(e) = codex_login::logout(&self.config.codex_home) {
tracing::error!("failed to logout: {e}");
}
return Ok(false);
}
SlashCommand::Diff => {
self.chat_widget.add_diff_in_progress();
let tx = self.app_event_tx.clone();
tokio::spawn(async move {
let text = match get_git_diff().await {
Ok((is_git_repo, diff_text)) => {
if is_git_repo {
diff_text
} else {
"`/diff` — _not inside a git repository_".to_string()
}
}
Err(e) => format!("Failed to compute diff: {e}"),
};
tx.send(AppEvent::DiffResult(text));
});
}
SlashCommand::Mention => {
self.chat_widget.insert_str("@");
}
SlashCommand::Status => {
self.chat_widget.add_status_output();
}
SlashCommand::Mcp => {
self.chat_widget.add_mcp_output();
}
#[cfg(debug_assertions)]
SlashCommand::TestApproval => {
use codex_core::protocol::EventMsg;
use std::collections::HashMap;
use codex_core::protocol::ApplyPatchApprovalRequestEvent;
use codex_core::protocol::FileChange;
self.app_event_tx.send(AppEvent::CodexEvent(Event {
id: "1".to_string(),
// msg: EventMsg::ExecApprovalRequest(ExecApprovalRequestEvent {
// call_id: "1".to_string(),
// command: vec!["git".into(), "apply".into()],
// cwd: self.config.cwd.clone(),
// reason: Some("test".to_string()),
// }),
msg: EventMsg::ApplyPatchApprovalRequest(ApplyPatchApprovalRequestEvent {
call_id: "1".to_string(),
changes: HashMap::from([
(
PathBuf::from("/tmp/test.txt"),
FileChange::Add {
content: "test".to_string(),
},
),
(
PathBuf::from("/tmp/test2.txt"),
FileChange::Update {
unified_diff: "+test\n-test2".to_string(),
move_path: None,
},
),
]),
reason: None,
grant_root: Some(PathBuf::from("/tmp")),
}),
}));
}
},
AppEvent::StartFileSearch(query) => {
if !query.is_empty() {
self.file_search.on_user_query(query);
@@ -303,7 +274,7 @@ impl App {
self.chat_widget.token_usage().clone()
}
async fn handle_key_event(&mut self, key_event: KeyEvent) {
async fn handle_key_event(&mut self, tui: &mut tui::Tui, key_event: KeyEvent) {
match key_event {
KeyEvent {
code: KeyCode::Char('c'),
@@ -322,9 +293,46 @@ impl App {
self.app_event_tx.send(AppEvent::ExitRequest);
}
KeyEvent {
code: KeyCode::Char('t'),
modifiers: crossterm::event::KeyModifiers::CONTROL,
kind: KeyEventKind::Press,
..
} => {
// Enter alternate screen and set viewport to full size.
let _ = tui.enter_alt_screen();
self.transcript_overlay = Some(TranscriptApp::new(self.transcript_lines.clone()));
tui.frame_requester().schedule_frame();
}
// Esc primes/advances backtracking when composer is empty.
KeyEvent {
code: KeyCode::Esc,
kind: KeyEventKind::Press | KeyEventKind::Repeat,
..
} => {
self.handle_backtrack_esc_key(tui);
}
// Enter confirms backtrack when primed + count > 0. Otherwise pass to widget.
KeyEvent {
code: KeyCode::Enter,
kind: KeyEventKind::Press,
..
} if self.backtrack.primed
&& self.backtrack.count > 0
&& self.chat_widget.composer_is_empty() =>
{
// Delegate to helper for clarity; preserves behavior.
self.confirm_backtrack_from_main();
}
KeyEvent {
kind: KeyEventKind::Press | KeyEventKind::Repeat,
..
} => {
// Any non-Esc key press should cancel a primed backtrack.
// This avoids stale "Esc-primed" state after the user starts typing
// (even if they later backspace to empty).
if key_event.code != KeyCode::Esc && self.backtrack.primed {
self.reset_backtrack_state();
}
self.chat_widget.handle_key_event(key_event);
}
_ => {

View File

@@ -0,0 +1,349 @@
use crate::app::App;
use crate::backtrack_helpers;
use crate::transcript_app::TranscriptApp;
use crate::tui;
use crate::tui::TuiEvent;
use codex_core::protocol::ConversationHistoryResponseEvent;
use color_eyre::eyre::Result;
use crossterm::event::KeyCode;
use crossterm::event::KeyEvent;
use crossterm::event::KeyEventKind;
/// Aggregates all backtrack-related state used by the App.
#[derive(Default)]
pub(crate) struct BacktrackState {
/// True when Esc has primed backtrack mode in the main view.
pub(crate) primed: bool,
/// Session id of the base conversation to fork from.
pub(crate) base_id: Option<uuid::Uuid>,
/// Current step count (Nth last user message).
pub(crate) count: usize,
/// True when the transcript overlay is showing a backtrack preview.
pub(crate) overlay_preview_active: bool,
/// Pending fork request: (base_id, drop_count, prefill).
pub(crate) pending: Option<(uuid::Uuid, usize, String)>,
}
impl App {
/// Route overlay events when the transcript overlay is active. Handles
/// backtrack-preview interactions and the overlay lifecycle.
/// - If backtrack preview is active: Esc steps the selection; Enter confirms.
/// - Otherwise: Esc begins the preview; all other events are forwarded to the overlay.
pub(crate) async fn handle_backtrack_overlay_event(
&mut self,
tui: &mut tui::Tui,
event: TuiEvent,
) -> Result<bool> {
if self.backtrack.overlay_preview_active {
match event {
TuiEvent::Key(KeyEvent {
code: KeyCode::Esc,
kind: KeyEventKind::Press | KeyEventKind::Repeat,
..
}) => {
self.overlay_step_backtrack(tui, event)?;
Ok(true)
}
TuiEvent::Key(KeyEvent {
code: KeyCode::Enter,
kind: KeyEventKind::Press,
..
}) => {
self.overlay_confirm_backtrack(tui);
Ok(true)
}
// Catchall: forward any other events to the overlay widget.
_ => {
self.overlay_forward_event(tui, event)?;
Ok(true)
}
}
} else if let TuiEvent::Key(KeyEvent {
code: KeyCode::Esc,
kind: KeyEventKind::Press | KeyEventKind::Repeat,
..
}) = event
{
// First Esc in transcript overlay: begin backtrack preview at latest user message.
self.begin_overlay_backtrack_preview(tui);
Ok(true)
} else {
// Not in backtrack mode: forward events to the overlay widget.
self.overlay_forward_event(tui, event)?;
Ok(true)
}
}
/// Handle global Esc presses for backtracking when no overlay is present.
pub(crate) fn handle_backtrack_esc_key(&mut self, tui: &mut tui::Tui) {
// Only handle backtracking when composer is empty to avoid clobbering edits.
if self.chat_widget.composer_is_empty() {
if !self.backtrack.primed {
self.prime_backtrack();
} else if self.transcript_overlay.is_none() {
self.open_backtrack_preview(tui);
} else if self.backtrack.overlay_preview_active {
self.step_backtrack_and_highlight(tui);
}
}
}
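Read together with the match arms in app.rs, the Esc/Enter handling forms a small state machine; a sketch of the transitions as implemented above:
// Esc   (composer empty, not primed)      -> prime_backtrack():              primed = true, count = 0
// Esc   (primed, no overlay)              -> open_backtrack_preview():       overlay opens, selects step 1
// Esc   (primed, overlay preview active)  -> step_backtrack_and_highlight(): selects the next older user message
// Enter (main view, primed, count > 0)    -> confirm_backtrack_from_main()
// any other key in the main view          -> reset_backtrack_state()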
/// Stage a backtrack and request conversation history from the agent.
pub(crate) fn request_backtrack(
&mut self,
prefill: String,
base_id: uuid::Uuid,
drop_last_messages: usize,
) {
self.backtrack.pending = Some((base_id, drop_last_messages, prefill));
self.app_event_tx.send(crate::app_event::AppEvent::CodexOp(
codex_core::protocol::Op::GetHistory,
));
}
/// Open transcript overlay (enters alternate screen and shows full transcript).
pub(crate) fn open_transcript_overlay(&mut self, tui: &mut tui::Tui) {
let _ = tui.enter_alt_screen();
self.transcript_overlay = Some(TranscriptApp::new(self.transcript_lines.clone()));
tui.frame_requester().schedule_frame();
}
/// Close transcript overlay and restore normal UI.
pub(crate) fn close_transcript_overlay(&mut self, tui: &mut tui::Tui) {
let _ = tui.leave_alt_screen();
let was_backtrack = self.backtrack.overlay_preview_active;
if !self.deferred_history_lines.is_empty() {
let lines = std::mem::take(&mut self.deferred_history_lines);
tui.insert_history_lines(lines);
}
self.transcript_overlay = None;
self.backtrack.overlay_preview_active = false;
if was_backtrack {
// Ensure backtrack state is fully reset when overlay closes (e.g. via 'q').
self.reset_backtrack_state();
}
}
/// Re-render the full transcript into the terminal scrollback in one call.
/// Useful when switching sessions to ensure prior history remains visible.
pub(crate) fn render_transcript_once(&mut self, tui: &mut tui::Tui) {
if !self.transcript_lines.is_empty() {
tui.insert_history_lines(self.transcript_lines.clone());
}
}
/// Initialize backtrack state and show composer hint.
fn prime_backtrack(&mut self) {
self.backtrack.primed = true;
self.backtrack.count = 0;
self.backtrack.base_id = self.chat_widget.session_id();
self.chat_widget.show_esc_backtrack_hint();
}
/// Open overlay and begin backtrack preview flow (first step + highlight).
fn open_backtrack_preview(&mut self, tui: &mut tui::Tui) {
self.open_transcript_overlay(tui);
self.backtrack.overlay_preview_active = true;
// Composer is hidden by overlay; clear its hint.
self.chat_widget.clear_esc_backtrack_hint();
self.step_backtrack_and_highlight(tui);
}
/// When overlay is already open, begin preview mode and select latest user message.
fn begin_overlay_backtrack_preview(&mut self, tui: &mut tui::Tui) {
self.backtrack.primed = true;
self.backtrack.base_id = self.chat_widget.session_id();
self.backtrack.overlay_preview_active = true;
let sel = self.compute_backtrack_selection(tui, 1);
self.apply_backtrack_selection(sel);
tui.frame_requester().schedule_frame();
}
/// Step selection to the next older user message and update overlay.
fn step_backtrack_and_highlight(&mut self, tui: &mut tui::Tui) {
let next = self.backtrack.count.saturating_add(1);
let sel = self.compute_backtrack_selection(tui, next);
self.apply_backtrack_selection(sel);
tui.frame_requester().schedule_frame();
}
/// Compute normalized target, scroll offset, and highlight for requested step.
fn compute_backtrack_selection(
&self,
tui: &tui::Tui,
requested_n: usize,
) -> (usize, Option<usize>, Option<(usize, usize)>) {
let nth = backtrack_helpers::normalize_backtrack_n(&self.transcript_lines, requested_n);
let header_idx =
backtrack_helpers::find_nth_last_user_header_index(&self.transcript_lines, nth);
let offset = header_idx.map(|idx| {
backtrack_helpers::wrapped_offset_before(
&self.transcript_lines,
idx,
tui.terminal.viewport_area.width,
)
});
let hl = backtrack_helpers::highlight_range_for_nth_last_user(&self.transcript_lines, nth);
(nth, offset, hl)
}
/// Apply a computed backtrack selection to the overlay and internal counter.
fn apply_backtrack_selection(
&mut self,
selection: (usize, Option<usize>, Option<(usize, usize)>),
) {
let (nth, offset, hl) = selection;
self.backtrack.count = nth;
if let Some(overlay) = &mut self.transcript_overlay {
if let Some(off) = offset {
overlay.scroll_offset = off;
}
overlay.set_highlight_range(hl);
}
}
/// Forward any event to the overlay and close it if done.
fn overlay_forward_event(&mut self, tui: &mut tui::Tui, event: TuiEvent) -> Result<()> {
if let Some(overlay) = &mut self.transcript_overlay {
overlay.handle_event(tui, event)?;
if overlay.is_done {
self.close_transcript_overlay(tui);
}
}
tui.frame_requester().schedule_frame();
Ok(())
}
/// Handle Enter in overlay backtrack preview: confirm selection and reset state.
fn overlay_confirm_backtrack(&mut self, tui: &mut tui::Tui) {
if let Some(base_id) = self.backtrack.base_id {
let drop_last_messages = self.backtrack.count;
let prefill =
backtrack_helpers::nth_last_user_text(&self.transcript_lines, drop_last_messages)
.unwrap_or_default();
self.close_transcript_overlay(tui);
self.request_backtrack(prefill, base_id, drop_last_messages);
}
self.reset_backtrack_state();
}
/// Handle Esc in overlay backtrack preview: step selection if armed, else forward.
fn overlay_step_backtrack(&mut self, tui: &mut tui::Tui, event: TuiEvent) -> Result<()> {
if self.backtrack.base_id.is_some() {
self.step_backtrack_and_highlight(tui);
} else {
self.overlay_forward_event(tui, event)?;
}
Ok(())
}
/// Confirm a primed backtrack from the main view (no overlay visible).
/// Computes the prefill from the selected user message and requests history.
pub(crate) fn confirm_backtrack_from_main(&mut self) {
if let Some(base_id) = self.backtrack.base_id {
let drop_last_messages = self.backtrack.count;
let prefill =
backtrack_helpers::nth_last_user_text(&self.transcript_lines, drop_last_messages)
.unwrap_or_default();
self.request_backtrack(prefill, base_id, drop_last_messages);
}
self.reset_backtrack_state();
}
/// Clear all backtrack-related state and composer hints.
pub(crate) fn reset_backtrack_state(&mut self) {
self.backtrack.primed = false;
self.backtrack.base_id = None;
self.backtrack.count = 0;
// In case a hint is somehow still visible (e.g., race with overlay open/close).
self.chat_widget.clear_esc_backtrack_hint();
}
/// Handle a ConversationHistory response while a backtrack is pending.
/// If it matches the primed base session, fork and switch to the new conversation.
pub(crate) async fn on_conversation_history_for_backtrack(
&mut self,
tui: &mut tui::Tui,
ev: ConversationHistoryResponseEvent,
) -> Result<()> {
if let Some((base_id, _, _)) = self.backtrack.pending.as_ref()
&& ev.conversation_id == *base_id
&& let Some((_, drop_count, prefill)) = self.backtrack.pending.take()
{
self.fork_and_switch_to_new_conversation(tui, ev, drop_count, prefill)
.await;
}
Ok(())
}
/// Fork the conversation using provided history and switch UI/state accordingly.
async fn fork_and_switch_to_new_conversation(
&mut self,
tui: &mut tui::Tui,
ev: ConversationHistoryResponseEvent,
drop_count: usize,
prefill: String,
) {
let cfg = self.chat_widget.config_ref().clone();
// Perform the fork via a thin wrapper for clarity/testability.
let result = self
.perform_fork(ev.entries.clone(), drop_count, cfg.clone())
.await;
match result {
Ok(new_conv) => {
self.install_forked_conversation(tui, cfg, new_conv, drop_count, &prefill)
}
Err(e) => tracing::error!("error forking conversation: {e:#}"),
}
}
/// Thin wrapper around ConversationManager::fork_conversation.
async fn perform_fork(
&self,
entries: Vec<codex_protocol::models::ResponseItem>,
drop_count: usize,
cfg: codex_core::config::Config,
) -> codex_core::error::Result<codex_core::NewConversation> {
self.server
.fork_conversation(entries, drop_count, cfg)
.await
}
/// Install a forked conversation into the ChatWidget and update UI to reflect selection.
fn install_forked_conversation(
&mut self,
tui: &mut tui::Tui,
cfg: codex_core::config::Config,
new_conv: codex_core::NewConversation,
drop_count: usize,
prefill: &str,
) {
let conv = new_conv.conversation;
let session_configured = new_conv.session_configured;
self.chat_widget = crate::chatwidget::ChatWidget::new_from_existing(
cfg,
conv,
session_configured,
tui.frame_requester(),
self.app_event_tx.clone(),
self.enhanced_keys_supported,
);
// Trim transcript up to the selected user message and re-render it.
self.trim_transcript_for_backtrack(drop_count);
self.render_transcript_once(tui);
if !prefill.is_empty() {
self.chat_widget.insert_str(prefill);
}
tui.frame_requester().schedule_frame();
}
/// Trim transcript_lines to preserve only content up to the selected user message.
fn trim_transcript_for_backtrack(&mut self, drop_count: usize) {
if let Some(cut_idx) =
backtrack_helpers::find_nth_last_user_header_index(&self.transcript_lines, drop_count)
{
self.transcript_lines.truncate(cut_idx);
} else {
self.transcript_lines.clear();
}
}
}
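
For orientation, here is a condensed sketch of the Esc flow that `handle_backtrack_esc_key` implements. `EscFlow` and `on_esc` are illustrative stand-ins (with `overlay_open` playing the role of `transcript_overlay.is_some()`); they are not part of the diff.

// Illustrative only: mirrors the branch order in handle_backtrack_esc_key.
struct EscFlow {
    primed: bool,
    overlay_open: bool,
    count: usize,
}

fn on_esc(s: &mut EscFlow) {
    if !s.primed {
        s.primed = true; // 1st Esc: prime backtrack and show the composer hint
    } else if !s.overlay_open {
        s.overlay_open = true; // 2nd Esc: open the transcript preview
        s.count = 1; // select the latest user message
    } else {
        s.count += 1; // further Esc presses step to older user messages
    }
}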

View File

@@ -1,8 +1,10 @@
use codex_core::protocol::ConversationHistoryResponseEvent;
use codex_core::protocol::Event;
use codex_file_search::FileMatch;
use ratatui::text::Line;
use crate::slash_command::SlashCommand;
use crate::history_cell::HistoryCell;
use codex_core::protocol::AskForApproval;
use codex_core::protocol::SandboxPolicy;
use codex_core::protocol_config_types::ReasoningEffort;
@@ -12,6 +14,9 @@ use codex_core::protocol_config_types::ReasoningEffort;
pub(crate) enum AppEvent {
CodexEvent(Event),
/// Start a new session.
NewSession,
/// Request to exit the application gracefully.
ExitRequest,
@@ -19,10 +24,6 @@ pub(crate) enum AppEvent {
/// bubbling channels through layers of widgets.
CodexOp(codex_core::protocol::Op),
/// Dispatch a recognized slash command from the UI (composer) to the app
/// layer so it can be handled centrally.
DispatchCommand(SlashCommand),
/// Kick off an asynchronous file search for the given query (text after
/// the `@`). Previous searches may be cancelled by the app layer so there
/// is at most one in-flight search.
@@ -39,7 +40,8 @@ pub(crate) enum AppEvent {
/// Result of computing a `/diff` command.
DiffResult(String),
InsertHistory(Vec<Line<'static>>),
InsertHistoryLines(Vec<Line<'static>>),
InsertHistoryCell(Box<dyn HistoryCell>),
StartCommitAnimation,
StopCommitAnimation,
@@ -56,4 +58,7 @@ pub(crate) enum AppEvent {
/// Update the current sandbox policy in the running app and widget.
UpdateSandboxPolicy(SandboxPolicy),
/// Forwarded conversation history snapshot from the current conversation.
ConversationHistory(ConversationHistoryResponseEvent),
}

View File

@@ -0,0 +1,154 @@
use ratatui::text::Line;
/// Convenience: compute the highlight range for the Nth last user message.
pub(crate) fn highlight_range_for_nth_last_user(
lines: &[Line<'_>],
n: usize,
) -> Option<(usize, usize)> {
let header = find_nth_last_user_header_index(lines, n)?;
Some(highlight_range_from_header(lines, header))
}
/// Compute the wrapped display-line offset before `header_idx`, for a given width.
pub(crate) fn wrapped_offset_before(lines: &[Line<'_>], header_idx: usize, width: u16) -> usize {
let before = &lines[0..header_idx];
crate::insert_history::word_wrap_lines(before, width).len()
}
/// Find the header index for the Nth last user message in the transcript.
/// Returns `None` if `n == 0` or there are fewer than `n` user messages.
pub(crate) fn find_nth_last_user_header_index(lines: &[Line<'_>], n: usize) -> Option<usize> {
if n == 0 {
return None;
}
let mut found = 0usize;
for (idx, line) in lines.iter().enumerate().rev() {
let content: String = line
.spans
.iter()
.map(|s| s.content.as_ref())
.collect::<Vec<_>>()
.join("");
if content.trim() == "user" {
found += 1;
if found == n {
return Some(idx);
}
}
}
None
}
/// Normalize a requested backtrack step `n` against the available user messages.
/// - Returns `0` if there are no user messages.
/// - Returns `n` if the Nth last user message exists.
/// - Otherwise wraps to `1` (the most recent user message).
pub(crate) fn normalize_backtrack_n(lines: &[Line<'_>], n: usize) -> usize {
if n == 0 {
return 0;
}
if find_nth_last_user_header_index(lines, n).is_some() {
return n;
}
if find_nth_last_user_header_index(lines, 1).is_some() {
1
} else {
0
}
}
/// Extract the text content of the Nth last user message.
/// The message body is considered to be the lines following the "user" header
/// until the first blank line.
pub(crate) fn nth_last_user_text(lines: &[Line<'_>], n: usize) -> Option<String> {
let header_idx = find_nth_last_user_header_index(lines, n)?;
extract_message_text_after_header(lines, header_idx)
}
/// Extract message text starting after `header_idx` until the first blank line.
fn extract_message_text_after_header(lines: &[Line<'_>], header_idx: usize) -> Option<String> {
let start = header_idx + 1;
let mut out: Vec<String> = Vec::new();
for line in lines.iter().skip(start) {
let is_blank = line
.spans
.iter()
.all(|s| s.content.as_ref().trim().is_empty());
if is_blank {
break;
}
let text: String = line
.spans
.iter()
.map(|s| s.content.as_ref())
.collect::<Vec<_>>()
.join("");
out.push(text);
}
if out.is_empty() {
None
} else {
Some(out.join("\n"))
}
}
/// Given a header index, return the half-open range [header_idx, end) for the
/// message block, where `end` is the first blank line after the header or the
/// end of the transcript.
fn highlight_range_from_header(lines: &[Line<'_>], header_idx: usize) -> (usize, usize) {
let mut end = header_idx + 1;
while end < lines.len() {
let is_blank = lines[end]
.spans
.iter()
.all(|s| s.content.as_ref().trim().is_empty());
if is_blank {
break;
}
end += 1;
}
(header_idx, end)
}
#[cfg(test)]
mod tests {
use super::*;
use ratatui::text::Span;
fn line(s: &str) -> Line<'static> {
Line::from(Span::raw(s.to_string()))
}
fn transcript_with_users(count: usize) -> Vec<Line<'static>> {
// Build a transcript with `count` user messages, each followed by one body line and a blank line.
let mut v = Vec::new();
for i in 0..count {
v.push(line("user"));
v.push(line(&format!("message {i}")));
v.push(line(""));
}
v
}
#[test]
fn normalize_wraps_to_one_when_past_oldest() {
let lines = transcript_with_users(2);
assert_eq!(normalize_backtrack_n(&lines, 1), 1);
assert_eq!(normalize_backtrack_n(&lines, 2), 2);
// Requesting 3rd when only 2 exist wraps to 1
assert_eq!(normalize_backtrack_n(&lines, 3), 1);
}
#[test]
fn normalize_returns_zero_when_no_user_messages() {
let lines = transcript_with_users(0);
assert_eq!(normalize_backtrack_n(&lines, 1), 0);
assert_eq!(normalize_backtrack_n(&lines, 5), 0);
}
#[test]
fn normalize_keeps_valid_n() {
let lines = transcript_with_users(3);
assert_eq!(normalize_backtrack_n(&lines, 2), 2);
}
}
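
As a quick usage sketch (assuming the helpers above are in scope), a transcript with two user messages resolves like this:

use ratatui::text::Line;

let lines: Vec<Line> = ["user", "first question", "", "user", "second question", ""]
    .into_iter()
    .map(Line::from)
    .collect();
assert_eq!(nth_last_user_text(&lines, 1).as_deref(), Some("second question"));
assert_eq!(nth_last_user_text(&lines, 2).as_deref(), Some("first question"));
assert_eq!(normalize_backtrack_n(&lines, 3), 1); // past the oldest wraps to 1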

View File

@@ -28,6 +28,11 @@ pub(crate) trait BottomPaneView {
/// Render the view: this will be displayed in place of the composer.
fn render(&self, area: Rect, buf: &mut Buffer);
/// Update the status indicator animated header. Default no-op.
fn update_status_header(&mut self, _header: String) {
// no-op
}
/// Called when task completes to check if the view should be hidden.
fn should_hide_when_task_is_done(&mut self) -> bool {
false

View File

@@ -23,6 +23,7 @@ use ratatui::widgets::WidgetRef;
use super::chat_composer_history::ChatComposerHistory;
use super::command_popup::CommandPopup;
use super::file_search_popup::FileSearchPopup;
use crate::slash_command::SlashCommand;
use crate::app_event::AppEvent;
use crate::app_event_sender::AppEventSender;
@@ -30,6 +31,16 @@ use crate::bottom_pane::textarea::TextArea;
use crate::bottom_pane::textarea::TextAreaState;
use codex_file_search::FileMatch;
use std::cell::RefCell;
use std::collections::HashMap;
use std::path::Path;
use std::path::PathBuf;
use std::time::Duration;
use std::time::Instant;
// Heuristic thresholds for detecting paste-like input bursts.
const PASTE_BURST_MIN_CHARS: u16 = 3;
const PASTE_BURST_CHAR_INTERVAL: Duration = Duration::from_millis(8);
const PASTE_ENTER_SUPPRESS_WINDOW: Duration = Duration::from_millis(120);
/// If the pasted content exceeds this number of characters, replace it with a
/// placeholder in the UI.
@@ -38,9 +49,16 @@ const LARGE_PASTE_CHAR_THRESHOLD: usize = 1000;
/// Result returned when the user interacts with the text area.
pub enum InputResult {
Submitted(String),
Command(SlashCommand),
None,
}
#[derive(Clone, Debug, PartialEq)]
struct AttachedImage {
placeholder: String,
path: PathBuf,
}
struct TokenUsageInfo {
total_token_usage: TokenUsage,
last_token_usage: TokenUsage,
@@ -63,13 +81,22 @@ pub(crate) struct ChatComposer {
app_event_tx: AppEventSender,
history: ChatComposerHistory,
ctrl_c_quit_hint: bool,
esc_backtrack_hint: bool,
use_shift_enter_hint: bool,
dismissed_file_popup_token: Option<String>,
current_file_query: Option<String>,
pending_pastes: Vec<(String, String)>,
token_usage_info: Option<TokenUsageInfo>,
has_focus: bool,
attached_images: Vec<AttachedImage>,
placeholder_text: String,
// Heuristic state to detect non-bracketed paste bursts.
last_plain_char_time: Option<Instant>,
consecutive_plain_char_burst: u16,
paste_burst_until: Option<Instant>,
// Buffer to accumulate characters during a detected non-bracketed paste burst.
paste_burst_buffer: String,
in_paste_burst_mode: bool,
}
/// Popup state: at most one can be visible at any time.
@@ -95,13 +122,20 @@ impl ChatComposer {
app_event_tx,
history: ChatComposerHistory::new(),
ctrl_c_quit_hint: false,
esc_backtrack_hint: false,
use_shift_enter_hint,
dismissed_file_popup_token: None,
current_file_query: None,
pending_pastes: Vec::new(),
token_usage_info: None,
has_focus: has_input_focus,
attached_images: Vec::new(),
placeholder_text,
last_plain_char_time: None,
consecutive_plain_char_burst: 0,
paste_burst_until: None,
paste_burst_buffer: String::new(),
in_paste_burst_mode: false,
}
}
@@ -189,11 +223,29 @@ impl ChatComposer {
} else {
self.textarea.insert_str(&pasted);
}
// Explicit paste events should not trigger Enter suppression.
self.last_plain_char_time = None;
self.consecutive_plain_char_burst = 0;
self.paste_burst_until = None;
self.sync_command_popup();
self.sync_file_search_popup();
true
}
pub fn attach_image(&mut self, path: PathBuf, width: u32, height: u32, format_label: &str) {
let placeholder = format!("[image {width}x{height} {format_label}]");
// Insert as an element to match large paste placeholder behavior:
// styled distinctly and treated atomically for cursor/mutations.
self.textarea.insert_element(&placeholder);
self.attached_images
.push(AttachedImage { placeholder, path });
}
pub fn take_recent_submission_images(&mut self) -> Vec<PathBuf> {
let images = std::mem::take(&mut self.attached_images);
images.into_iter().map(|img| img.path).collect()
}
/// Integrate results from an asynchronous file search.
pub(crate) fn on_file_search_result(&mut self, query: String, matches: Vec<FileMatch>) {
// Only apply if user is still editing a token starting with `query`.
@@ -289,15 +341,15 @@ impl ChatComposer {
..
} => {
if let Some(cmd) = popup.selected_command() {
// Send command to the app layer.
self.app_event_tx.send(AppEvent::DispatchCommand(*cmd));
// Clear textarea so no residual text remains.
self.textarea.set_text("");
let result = (InputResult::Command(*cmd), true);
// Hide popup since the command has been dispatched.
self.active_popup = ActivePopup::None;
return (InputResult::None, true);
return result;
}
// Fallback to default newline handling if no command selected.
self.handle_key_event_without_popup(key_event)
@@ -344,19 +396,74 @@ impl ChatComposer {
modifiers: KeyModifiers::NONE,
..
} => {
if let Some(sel) = popup.selected_match() {
let sel_path = sel.to_string();
// Drop popup borrow before using self mutably again.
self.insert_selected_path(&sel_path);
let Some(sel) = popup.selected_match() else {
self.active_popup = ActivePopup::None;
return (InputResult::None, true);
};
let sel_path = sel.to_string();
// If selected path looks like an image (png/jpeg), attach as image instead of inserting text.
let is_image = Self::is_image_path(&sel_path);
if is_image {
// Determine dimensions; if that fails fall back to normal path insertion.
let path_buf = PathBuf::from(&sel_path);
if let Ok((w, h)) = image::image_dimensions(&path_buf) {
// Remove the current @token (mirror logic from insert_selected_path without inserting text)
// using the flat text and byte-offset cursor API.
let cursor_offset = self.textarea.cursor();
let text = self.textarea.text();
let before_cursor = &text[..cursor_offset];
let after_cursor = &text[cursor_offset..];
// Determine token boundaries in the full text.
let start_idx = before_cursor
.char_indices()
.rfind(|(_, c)| c.is_whitespace())
.map(|(idx, c)| idx + c.len_utf8())
.unwrap_or(0);
let end_rel_idx = after_cursor
.char_indices()
.find(|(_, c)| c.is_whitespace())
.map(|(idx, _)| idx)
.unwrap_or(after_cursor.len());
let end_idx = cursor_offset + end_rel_idx;
self.textarea.replace_range(start_idx..end_idx, "");
self.textarea.set_cursor(start_idx);
let format_label = match Path::new(&sel_path)
.extension()
.and_then(|e| e.to_str())
.map(|s| s.to_ascii_lowercase())
{
Some(ext) if ext == "png" => "PNG",
Some(ext) if ext == "jpg" || ext == "jpeg" => "JPEG",
_ => "IMG",
};
self.attach_image(path_buf.clone(), w, h, format_label);
// Add a trailing space to keep typing fluid.
self.textarea.insert_str(" ");
} else {
// Fallback to plain path insertion if metadata read fails.
self.insert_selected_path(&sel_path);
}
} else {
// Non-image: inserting file path.
self.insert_selected_path(&sel_path);
}
(InputResult::None, false)
// No selection: treat Enter as closing the popup.
self.active_popup = ActivePopup::None;
(InputResult::None, true)
}
input => self.handle_input_basic(input),
}
}
fn is_image_path(path: &str) -> bool {
let lower = path.to_ascii_lowercase();
lower.ends_with(".png") || lower.ends_with(".jpg") || lower.ends_with(".jpeg")
}
/// Extract the `@token` that the cursor is currently positioned on, if any.
///
/// The returned string **does not** include the leading `@`.
@@ -532,6 +639,60 @@ impl ChatComposer {
modifiers: KeyModifiers::NONE,
..
} => {
// If we're in a paste-like burst capture, treat Enter as part of the burst
// and accumulate it rather than submitting or inserting immediately.
// Do not treat Enter as paste inside a slash-command context.
let in_slash_context = matches!(self.active_popup, ActivePopup::Command(_))
|| self
.textarea
.text()
.lines()
.next()
.unwrap_or("")
.starts_with('/');
if (self.in_paste_burst_mode || !self.paste_burst_buffer.is_empty())
&& !in_slash_context
{
self.paste_burst_buffer.push('\n');
let now = Instant::now();
// Keep the window alive so subsequent lines are captured too.
self.paste_burst_until = Some(now + PASTE_ENTER_SUPPRESS_WINDOW);
return (InputResult::None, true);
}
// If we have pending placeholder pastes, submit immediately to expand them.
if !self.pending_pastes.is_empty() {
let mut text = self.textarea.text().to_string();
self.textarea.set_text("");
for (placeholder, actual) in &self.pending_pastes {
if text.contains(placeholder) {
text = text.replace(placeholder, actual);
}
}
self.pending_pastes.clear();
if text.is_empty() {
return (InputResult::None, true);
}
self.history.record_local_submission(&text);
return (InputResult::Submitted(text), true);
}
// During a paste-like burst, treat Enter as a newline instead of submit.
let now = Instant::now();
let tight_after_char = self
.last_plain_char_time
.is_some_and(|t| now.duration_since(t) <= PASTE_BURST_CHAR_INTERVAL);
let recent_after_char = self
.last_plain_char_time
.is_some_and(|t| now.duration_since(t) <= PASTE_ENTER_SUPPRESS_WINDOW);
let burst_by_count =
recent_after_char && self.consecutive_plain_char_burst >= PASTE_BURST_MIN_CHARS;
let in_burst_window = self.paste_burst_until.is_some_and(|until| now <= until);
if tight_after_char || burst_by_count || in_burst_window {
self.textarea.insert_str("\n");
self.paste_burst_until = Some(now + PASTE_ENTER_SUPPRESS_WINDOW);
return (InputResult::None, true);
}
let mut text = self.textarea.text().to_string();
self.textarea.set_text("");
@@ -543,12 +704,19 @@ impl ChatComposer {
}
self.pending_pastes.clear();
if text.is_empty() {
(InputResult::None, true)
} else {
self.history.record_local_submission(&text);
(InputResult::Submitted(text), true)
// Strip image placeholders from the submitted text; images are retrieved via take_recent_submission_images()
for img in &self.attached_images {
if text.contains(&img.placeholder) {
text = text.replace(&img.placeholder, "");
}
}
text = text.trim().to_string();
if !text.is_empty() {
self.history.record_local_submission(&text);
}
// Do not clear attached_images here; ChatWidget drains them via take_recent_submission_images().
(InputResult::Submitted(text), true)
}
input => self.handle_input_basic(input),
}
@@ -556,17 +724,302 @@ impl ChatComposer {
/// Handle generic Input events that modify the textarea content.
fn handle_input_basic(&mut self, input: KeyEvent) -> (InputResult, bool) {
// If we have a buffered non-bracketed paste burst and enough time has
// elapsed since the last char, flush it before handling a new input.
let now = Instant::now();
let timed_out = self
.last_plain_char_time
.is_some_and(|t| now.duration_since(t) > PASTE_BURST_CHAR_INTERVAL);
if timed_out && (!self.paste_burst_buffer.is_empty() || self.in_paste_burst_mode) {
let pasted = std::mem::take(&mut self.paste_burst_buffer);
self.in_paste_burst_mode = false;
// Reuse normal paste path (handles large-paste placeholders).
self.handle_paste(pasted);
}
// If we're capturing a burst and receive Enter, accumulate it instead of inserting.
if matches!(input.code, KeyCode::Enter)
&& (self.in_paste_burst_mode || !self.paste_burst_buffer.is_empty())
{
self.paste_burst_buffer.push('\n');
self.paste_burst_until = Some(now + PASTE_ENTER_SUPPRESS_WINDOW);
return (InputResult::None, true);
}
// Intercept plain Char inputs to optionally accumulate into a burst buffer.
if let KeyEvent {
code: KeyCode::Char(ch),
modifiers,
..
} = input
{
let has_ctrl_or_alt =
modifiers.contains(KeyModifiers::CONTROL) || modifiers.contains(KeyModifiers::ALT);
if !has_ctrl_or_alt {
// Update burst heuristics.
match self.last_plain_char_time {
Some(prev) if now.duration_since(prev) <= PASTE_BURST_CHAR_INTERVAL => {
self.consecutive_plain_char_burst =
self.consecutive_plain_char_burst.saturating_add(1);
}
_ => {
self.consecutive_plain_char_burst = 1;
}
}
self.last_plain_char_time = Some(now);
// If we're already buffering, capture the char into the buffer.
if self.in_paste_burst_mode {
self.paste_burst_buffer.push(ch);
// Keep the window alive while we receive the burst.
self.paste_burst_until = Some(now + PASTE_ENTER_SUPPRESS_WINDOW);
return (InputResult::None, true);
} else if self.consecutive_plain_char_burst >= PASTE_BURST_MIN_CHARS {
// Do not start burst buffering while typing a slash command (first line starts with '/').
let first_line = self.textarea.text().lines().next().unwrap_or("");
if first_line.starts_with('/') {
// Keep heuristics but do not buffer.
self.paste_burst_until = Some(now + PASTE_ENTER_SUPPRESS_WINDOW);
// Insert normally.
self.textarea.input(input);
let text_after = self.textarea.text();
self.pending_pastes
.retain(|(placeholder, _)| text_after.contains(placeholder));
return (InputResult::None, true);
}
// Begin buffering from this character onward.
self.paste_burst_buffer.push(ch);
self.in_paste_burst_mode = true;
// Keep the window alive to continue capturing.
self.paste_burst_until = Some(now + PASTE_ENTER_SUPPRESS_WINDOW);
return (InputResult::None, true);
}
// Not buffering: insert normally and continue.
self.textarea.input(input);
let text_after = self.textarea.text();
self.pending_pastes
.retain(|(placeholder, _)| text_after.contains(placeholder));
return (InputResult::None, true);
} else {
// Modified char ends any burst: flush buffered content before applying.
if !self.paste_burst_buffer.is_empty() || self.in_paste_burst_mode {
let pasted = std::mem::take(&mut self.paste_burst_buffer);
self.in_paste_burst_mode = false;
self.handle_paste(pasted);
}
}
}
// For non-char inputs (or after flushing), handle normally.
// Special handling for backspace on placeholders
if let KeyEvent {
code: KeyCode::Backspace,
..
} = input
&& self.try_remove_any_placeholder_at_cursor()
{
return (InputResult::None, true);
}
// Normal input handling
self.textarea.input(input);
let text_after = self.textarea.text();
// Update paste-burst heuristic for plain Char (no Ctrl/Alt) events.
let crossterm::event::KeyEvent {
code, modifiers, ..
} = input;
match code {
KeyCode::Char(_) => {
let has_ctrl_or_alt = modifiers.contains(KeyModifiers::CONTROL)
|| modifiers.contains(KeyModifiers::ALT);
if has_ctrl_or_alt {
// Modified char: clear burst window.
self.consecutive_plain_char_burst = 0;
self.last_plain_char_time = None;
self.paste_burst_until = None;
self.in_paste_burst_mode = false;
self.paste_burst_buffer.clear();
}
// Plain chars handled above.
}
KeyCode::Enter => {
// Keep burst window alive (supports blank lines in paste).
}
_ => {
// Other keys: clear burst window and any buffer (after flushing earlier).
self.consecutive_plain_char_burst = 0;
self.last_plain_char_time = None;
self.paste_burst_until = None;
self.in_paste_burst_mode = false;
// Do not clear paste_burst_buffer here; it should have been flushed above.
}
}
// Check if any placeholders were removed and remove their corresponding pending pastes
self.pending_pastes
.retain(|(placeholder, _)| text_after.contains(placeholder));
// Keep attached images in proportion to how many matching placeholders exist in the text.
// This handles duplicate placeholders that share the same visible label.
if !self.attached_images.is_empty() {
let mut needed: HashMap<String, usize> = HashMap::new();
for img in &self.attached_images {
needed
.entry(img.placeholder.clone())
.or_insert_with(|| text_after.matches(&img.placeholder).count());
}
let mut used: HashMap<String, usize> = HashMap::new();
let mut kept: Vec<AttachedImage> = Vec::with_capacity(self.attached_images.len());
for img in self.attached_images.drain(..) {
let total_needed = *needed.get(&img.placeholder).unwrap_or(&0);
let used_count = used.entry(img.placeholder.clone()).or_insert(0);
if *used_count < total_needed {
kept.push(img);
*used_count += 1;
}
}
self.attached_images = kept;
}
(InputResult::None, true)
}
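
The core of the heuristic above reduces to a small predicate. This sketch restates the detection rule with the diff's constants inlined; `looks_like_paste` is illustrative and not part of the code.

use std::time::Duration;

// Illustrative restatement of the burst detection rule.
fn looks_like_paste(consecutive_plain_chars: u16, since_last_char: Duration) -> bool {
    const PASTE_BURST_MIN_CHARS: u16 = 3;
    const PASTE_BURST_CHAR_INTERVAL: Duration = Duration::from_millis(8);
    // Characters arriving within 8ms of each other are faster than human
    // typing; three or more in a row are treated as a paste and buffered.
    since_last_char <= PASTE_BURST_CHAR_INTERVAL
        && consecutive_plain_chars >= PASTE_BURST_MIN_CHARS
}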
/// Attempts to remove an image or paste placeholder when the cursor sits at
/// the end (or start) of one.
/// Returns true if a placeholder was removed.
fn try_remove_any_placeholder_at_cursor(&mut self) -> bool {
let p = self.textarea.cursor();
let text = self.textarea.text();
// Try image placeholders first
let mut out: Option<(usize, String)> = None;
// Detect if the cursor is at the end of any image placeholder.
// If duplicates exist, remove the specific occurrence's mapping.
for (i, img) in self.attached_images.iter().enumerate() {
let ph = &img.placeholder;
if p < ph.len() {
continue;
}
let start = p - ph.len();
if text[start..p] != *ph {
continue;
}
// Count the number of occurrences of `ph` before `start`.
let mut occ_before = 0usize;
let mut search_pos = 0usize;
while search_pos < start {
if let Some(found) = text[search_pos..start].find(ph) {
occ_before += 1;
search_pos += found + ph.len();
} else {
break;
}
}
// Remove the occ_before-th attached image that shares this placeholder label.
out = if let Some((remove_idx, _)) = self
.attached_images
.iter()
.enumerate()
.filter(|(_, img2)| img2.placeholder == *ph)
.nth(occ_before)
{
Some((remove_idx, ph.clone()))
} else {
Some((i, ph.clone()))
};
break;
}
if let Some((idx, placeholder)) = out {
self.textarea.replace_range(p - placeholder.len()..p, "");
self.attached_images.remove(idx);
return true;
}
// Also handle when the cursor is at the START of an image placeholder.
let out: Option<(usize, String)> = 'out: {
for (i, img) in self.attached_images.iter().enumerate() {
let ph = &img.placeholder;
if p + ph.len() > text.len() {
continue;
}
if &text[p..p + ph.len()] != ph {
continue;
}
// Count occurrences of `ph` before `p`.
let mut occ_before = 0usize;
let mut search_pos = 0usize;
while search_pos < p {
if let Some(found) = text[search_pos..p].find(ph) {
occ_before += 1;
search_pos += found + ph.len();
} else {
break 'out None;
}
}
if let Some((remove_idx, _)) = self
.attached_images
.iter()
.enumerate()
.filter(|(_, img2)| img2.placeholder == *ph)
.nth(occ_before)
{
break 'out Some((remove_idx, ph.clone()));
} else {
break 'out Some((i, ph.clone()));
}
}
None
};
if let Some((idx, placeholder)) = out {
self.textarea.replace_range(p..p + placeholder.len(), "");
self.attached_images.remove(idx);
return true;
}
// Then try pasted-content placeholders
if let Some(placeholder) = self.pending_pastes.iter().find_map(|(ph, _)| {
if p < ph.len() {
return None;
}
let start = p - ph.len();
if text[start..p] == *ph {
Some(ph.clone())
} else {
None
}
}) {
self.textarea.replace_range(p - placeholder.len()..p, "");
self.pending_pastes.retain(|(ph, _)| ph != &placeholder);
return true;
}
// Also handle when the cursor is at the START of a pasted-content placeholder.
if let Some(placeholder) = self.pending_pastes.iter().find_map(|(ph, _)| {
if p + ph.len() > text.len() {
return None;
}
if &text[p..p + ph.len()] == ph {
Some(ph.clone())
} else {
None
}
}) {
self.textarea.replace_range(p..p + placeholder.len(), "");
self.pending_pastes.retain(|(ph, _)| ph != &placeholder);
return true;
}
false
}
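
The duplicate-placeholder bookkeeping above hinges on counting how many identical placeholders precede the cursor. Extracted as a standalone helper (`occurrences_before` is illustrative), the scan looks like this:

// Illustrative: count non-overlapping occurrences of `ph` before byte `start`,
// mirroring the loop in try_remove_any_placeholder_at_cursor.
fn occurrences_before(text: &str, ph: &str, start: usize) -> usize {
    let mut n = 0;
    let mut pos = 0;
    while pos < start {
        match text[pos..start].find(ph) {
            Some(found) => {
                n += 1;
                pos += found + ph.len();
            }
            None => break,
        }
    }
    n
}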
/// Synchronize `self.command_popup` with the current text in the
/// textarea. This must be called after every modification that can change
/// the text so the popup is shown/updated/hidden as appropriate.
@@ -640,6 +1093,10 @@ impl ChatComposer {
fn set_has_focus(&mut self, has_focus: bool) {
self.has_focus = has_focus;
}
pub(crate) fn set_esc_backtrack_hint(&mut self, show: bool) {
self.esc_backtrack_hint = show;
}
}
impl WidgetRef for &ChatComposer {
@@ -679,11 +1136,19 @@ impl WidgetRef for &ChatComposer {
Span::from(" send "),
newline_hint_key.set_style(key_hint_style),
Span::from(" newline "),
"Ctrl+T".set_style(key_hint_style),
Span::from(" transcript "),
"Ctrl+C".set_style(key_hint_style),
Span::from(" quit"),
]
};
if !self.ctrl_c_quit_hint && self.esc_backtrack_hint {
hint.push(Span::from(" "));
hint.push("Esc".set_style(key_hint_style));
hint.push(Span::from(" edit prev"));
}
// Append token/context usage info to the footer hints when available.
if let Some(token_usage_info) = &self.token_usage_info {
let token_usage = &token_usage_info.total_token_usage;
@@ -744,10 +1209,14 @@ impl WidgetRef for &ChatComposer {
#[cfg(test)]
mod tests {
use super::*;
use std::path::PathBuf;
use crate::app_event::AppEvent;
use crate::bottom_pane::AppEventSender;
use crate::bottom_pane::ChatComposer;
use crate::bottom_pane::InputResult;
use crate::bottom_pane::chat_composer::AttachedImage;
use crate::bottom_pane::chat_composer::LARGE_PASTE_CHAR_THRESHOLD;
use crate::bottom_pane::textarea::TextArea;
use tokio::sync::mpsc::unbounded_channel;
@@ -1039,9 +1508,8 @@ mod tests {
use crossterm::event::KeyCode;
use crossterm::event::KeyEvent;
use crossterm::event::KeyModifiers;
use tokio::sync::mpsc::error::TryRecvError;
let (tx, mut rx) = unbounded_channel::<AppEvent>();
let (tx, _rx) = unbounded_channel::<AppEvent>();
let sender = AppEventSender::new(tx);
let mut composer =
ChatComposer::new(true, sender, false, "Ask Codex to do anything".to_string());
@@ -1057,25 +1525,18 @@ mod tests {
let (result, _needs_redraw) =
composer.handle_key_event(KeyEvent::new(KeyCode::Enter, KeyModifiers::NONE));
// When a slash command is dispatched, the composer should not submit
// literal text and should clear its textarea.
// When a slash command is dispatched, the composer should return a
// Command result (not submit literal text) and clear its textarea.
match result {
InputResult::None => {}
InputResult::Command(cmd) => {
assert_eq!(cmd.command(), "init");
}
InputResult::Submitted(text) => {
panic!("expected command dispatch, but composer submitted literal text: {text}")
}
InputResult::None => panic!("expected Command result for '/init'"),
}
assert!(composer.textarea.is_empty(), "composer should be cleared");
// Verify a DispatchCommand event for the "init" command was sent.
match rx.try_recv() {
Ok(AppEvent::DispatchCommand(cmd)) => {
assert_eq!(cmd.command(), "init");
}
Ok(_other) => panic!("unexpected app event"),
Err(TryRecvError::Empty) => panic!("expected a DispatchCommand event for '/init'"),
Err(TryRecvError::Disconnected) => panic!("app event channel disconnected"),
}
}
#[test]
@@ -1105,9 +1566,8 @@ mod tests {
use crossterm::event::KeyCode;
use crossterm::event::KeyEvent;
use crossterm::event::KeyModifiers;
use tokio::sync::mpsc::error::TryRecvError;
let (tx, mut rx) = unbounded_channel::<AppEvent>();
let (tx, _rx) = unbounded_channel::<AppEvent>();
let sender = AppEventSender::new(tx);
let mut composer =
ChatComposer::new(true, sender, false, "Ask Codex to do anything".to_string());
@@ -1120,24 +1580,16 @@ mod tests {
composer.handle_key_event(KeyEvent::new(KeyCode::Enter, KeyModifiers::NONE));
match result {
InputResult::None => {}
InputResult::Command(cmd) => {
assert_eq!(cmd.command(), "mention");
}
InputResult::Submitted(text) => {
panic!("expected command dispatch, but composer submitted literal text: {text}")
}
InputResult::None => panic!("expected Command result for '/mention'"),
}
assert!(composer.textarea.is_empty(), "composer should be cleared");
match rx.try_recv() {
Ok(AppEvent::DispatchCommand(cmd)) => {
assert_eq!(cmd.command(), "mention");
composer.insert_str("@");
}
Ok(_other) => panic!("unexpected app event"),
Err(TryRecvError::Empty) => panic!("expected a DispatchCommand event for '/mention'"),
Err(TryRecvError::Disconnected) => {
panic!("app event channel disconnected")
}
}
composer.insert_str("@");
assert_eq!(composer.textarea.text(), "@");
}
@@ -1327,4 +1779,112 @@ mod tests {
]
);
}
// --- Image attachment tests ---
#[test]
fn attach_image_and_submit_includes_image_paths() {
let (tx, _rx) = unbounded_channel::<AppEvent>();
let sender = AppEventSender::new(tx);
let mut composer =
ChatComposer::new(true, sender, false, "Ask Codex to do anything".to_string());
let path = PathBuf::from("/tmp/image1.png");
composer.attach_image(path.clone(), 32, 16, "PNG");
composer.handle_paste(" hi".into());
let (result, _) =
composer.handle_key_event(KeyEvent::new(KeyCode::Enter, KeyModifiers::NONE));
match result {
InputResult::Submitted(text) => assert_eq!(text, "hi"),
_ => panic!("expected Submitted"),
}
let imgs = composer.take_recent_submission_images();
assert_eq!(vec![path], imgs);
}
#[test]
fn attach_image_without_text_submits_empty_text_and_images() {
let (tx, _rx) = unbounded_channel::<AppEvent>();
let sender = AppEventSender::new(tx);
let mut composer =
ChatComposer::new(true, sender, false, "Ask Codex to do anything".to_string());
let path = PathBuf::from("/tmp/image2.png");
composer.attach_image(path.clone(), 10, 5, "PNG");
let (result, _) =
composer.handle_key_event(KeyEvent::new(KeyCode::Enter, KeyModifiers::NONE));
match result {
InputResult::Submitted(text) => assert!(text.is_empty()),
_ => panic!("expected Submitted"),
}
let imgs = composer.take_recent_submission_images();
assert_eq!(imgs.len(), 1);
assert_eq!(imgs[0], path);
assert!(composer.attached_images.is_empty());
}
#[test]
fn image_placeholder_backspace_behaves_like_text_placeholder() {
let (tx, _rx) = unbounded_channel::<AppEvent>();
let sender = AppEventSender::new(tx);
let mut composer =
ChatComposer::new(true, sender, false, "Ask Codex to do anything".to_string());
let path = PathBuf::from("/tmp/image3.png");
composer.attach_image(path.clone(), 20, 10, "PNG");
let placeholder = composer.attached_images[0].placeholder.clone();
// Case 1: backspace at end
composer.textarea.move_cursor_to_end_of_line(false);
composer.handle_key_event(KeyEvent::new(KeyCode::Backspace, KeyModifiers::NONE));
assert!(!composer.textarea.text().contains(&placeholder));
assert!(composer.attached_images.is_empty());
// Re-add and test backspace in middle: should break the placeholder string
// and drop the image mapping (same as text placeholder behavior).
composer.attach_image(path.clone(), 20, 10, "PNG");
let placeholder2 = composer.attached_images[0].placeholder.clone();
// Move cursor to roughly middle of placeholder
if let Some(start_pos) = composer.textarea.text().find(&placeholder2) {
let mid_pos = start_pos + (placeholder2.len() / 2);
composer.textarea.set_cursor(mid_pos);
composer.handle_key_event(KeyEvent::new(KeyCode::Backspace, KeyModifiers::NONE));
assert!(!composer.textarea.text().contains(&placeholder2));
assert!(composer.attached_images.is_empty());
} else {
panic!("Placeholder not found in textarea");
}
}
#[test]
fn deleting_one_of_duplicate_image_placeholders_removes_matching_entry() {
let (tx, _rx) = unbounded_channel::<AppEvent>();
let sender = AppEventSender::new(tx);
let mut composer =
ChatComposer::new(true, sender, false, "Ask Codex to do anything".to_string());
let path1 = PathBuf::from("/tmp/image_dup1.png");
let path2 = PathBuf::from("/tmp/image_dup2.png");
composer.attach_image(path1.clone(), 10, 5, "PNG");
// Separate the placeholders with a space for clarity.
composer.handle_paste(" ".into());
composer.attach_image(path2.clone(), 10, 5, "PNG");
let ph = composer.attached_images[0].placeholder.clone();
let text = composer.textarea.text().to_string();
let start1 = text.find(&ph).expect("first placeholder present");
let end1 = start1 + ph.len();
composer.textarea.set_cursor(end1);
// Backspace should delete the first placeholder and its mapping.
composer.handle_key_event(KeyEvent::new(KeyCode::Backspace, KeyModifiers::NONE));
let new_text = composer.textarea.text().to_string();
assert_eq!(1, new_text.matches(&ph).count(), "one placeholder remains");
assert_eq!(
vec![AttachedImage {
path: path2,
placeholder: "[image 10x5 PNG]".to_string()
}],
composer.attached_images,
"one image mapping remains"
);
}
}

View File

@@ -1,4 +1,5 @@
//! Bottom pane: shows the ChatComposer or a BottomPaneView, if one is active.
use std::path::PathBuf;
use crate::app_event_sender::AppEventSender;
use crate::tui::FrameRequester;
@@ -8,6 +9,8 @@ use codex_core::protocol::TokenUsage;
use codex_file_search::FileMatch;
use crossterm::event::KeyEvent;
use ratatui::buffer::Buffer;
use ratatui::layout::Constraint;
use ratatui::layout::Layout;
use ratatui::layout::Rect;
use ratatui::widgets::WidgetRef;
@@ -53,6 +56,7 @@ pub(crate) struct BottomPane {
has_input_focus: bool,
is_task_running: bool,
ctrl_c_quit_hint: bool,
esc_backtrack_hint: bool,
/// True if the active view is the StatusIndicatorView that replaces the
/// composer during a running task.
@@ -84,6 +88,7 @@ impl BottomPane {
has_input_focus: params.has_input_focus,
is_task_running: false,
ctrl_c_quit_hint: false,
esc_backtrack_hint: false,
status_view_active: false,
}
}
@@ -94,8 +99,31 @@ impl BottomPane {
} else {
self.composer.desired_height(width)
};
let top_pad = if self.active_view.is_none() || self.status_view_active {
1
} else {
0
};
view_height
.saturating_add(Self::BOTTOM_PAD_LINES)
.saturating_add(top_pad)
}
view_height.saturating_add(Self::BOTTOM_PAD_LINES)
fn layout(&self, area: Rect) -> Rect {
let top = if self.active_view.is_none() || self.status_view_active {
1
} else {
0
};
let [_, content, _] = Layout::vertical([
Constraint::Max(top),
Constraint::Min(1),
Constraint::Max(BottomPane::BOTTOM_PAD_LINES),
])
.areas(area);
content
}
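
For intuition, the vertical split above behaves like this on a small area. The 1-row spacer and 1 row of padding are illustrative values; the real padding comes from BOTTOM_PAD_LINES.

use ratatui::layout::{Constraint, Layout, Rect};

let [_spacer, content, _pad] = Layout::vertical([
    Constraint::Max(1), // top spacer when no modal view is active
    Constraint::Min(1), // composer content gets the rest
    Constraint::Max(1), // bottom padding
])
.areas(Rect::new(0, 0, 40, 6));
assert_eq!(content.height, 4);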
pub fn cursor_pos(&self, area: Rect) -> Option<(u16, u16)> {
@@ -103,10 +131,11 @@ impl BottomPane {
// status indicator shown while a task is running, or approval modal).
// In these states the textarea is not interactable, so we should not
// show its caret.
if self.active_view.is_some() {
if self.active_view.is_some() || self.status_view_active {
None
} else {
self.composer.cursor_pos(area)
let content = self.layout(area);
self.composer.cursor_pos(content)
}
}
@@ -182,6 +211,17 @@ impl BottomPane {
self.request_redraw();
}
/// Update the animated header shown to the left of the brackets in the
/// status indicator (defaults to "Working"). This will update the active
/// StatusIndicatorView if present; otherwise, if a live overlay is active,
/// it will update that. If neither is present, this call is a no-op.
pub(crate) fn update_status_header(&mut self, header: String) {
if let Some(view) = self.active_view.as_mut() {
view.update_status_header(header.clone());
self.request_redraw();
}
}
pub(crate) fn show_ctrl_c_quit_hint(&mut self) {
self.ctrl_c_quit_hint = true;
self.composer
@@ -202,6 +242,22 @@ impl BottomPane {
self.ctrl_c_quit_hint
}
pub(crate) fn show_esc_backtrack_hint(&mut self) {
self.esc_backtrack_hint = true;
self.composer.set_esc_backtrack_hint(true);
self.request_redraw();
}
pub(crate) fn clear_esc_backtrack_hint(&mut self) {
if self.esc_backtrack_hint {
self.esc_backtrack_hint = false;
self.composer.set_esc_backtrack_hint(false);
self.request_redraw();
}
}
// esc_backtrack_hint_visible removed; hints are controlled internally.
pub fn set_task_running(&mut self, running: bool) {
self.is_task_running = running;
@@ -331,35 +387,34 @@ impl BottomPane {
self.composer.on_file_search_result(query, matches);
self.request_redraw();
}
pub(crate) fn attach_image(
&mut self,
path: PathBuf,
width: u32,
height: u32,
format_label: &str,
) {
if self.active_view.is_none() {
self.composer
.attach_image(path, width, height, format_label);
self.request_redraw();
}
}
pub(crate) fn take_recent_submission_images(&mut self) -> Vec<PathBuf> {
self.composer.take_recent_submission_images()
}
}
impl WidgetRef for &BottomPane {
fn render_ref(&self, area: Rect, buf: &mut Buffer) {
let content = self.layout(area);
if let Some(view) = &self.active_view {
// Reserve bottom padding lines; keep at least 1 line for the view.
let avail = area.height;
if avail > 0 {
let pad = BottomPane::BOTTOM_PAD_LINES.min(avail.saturating_sub(1));
let view_rect = Rect {
x: area.x,
y: area.y,
width: area.width,
height: avail - pad,
};
view.render(view_rect, buf);
}
view.render(content, buf);
} else {
let avail = area.height;
if avail > 0 {
let composer_rect = Rect {
x: area.x,
y: area.y,
width: area.width,
// Reserve bottom padding
height: avail - BottomPane::BOTTOM_PAD_LINES.min(avail.saturating_sub(1)),
};
(&self.composer).render_ref(composer_rect, buf);
}
(&self.composer).render_ref(content, buf);
}
}
}
@@ -462,18 +517,16 @@ mod tests {
assert!(pane.active_view.is_some(), "active view should be present");
// Render and ensure the top row includes the Working header instead of the composer.
// Give the animation thread a moment to tick.
std::thread::sleep(std::time::Duration::from_millis(120));
let area = Rect::new(0, 0, 40, 3);
let mut buf = Buffer::empty(area);
(&pane).render_ref(area, &mut buf);
let mut row0 = String::new();
let mut row1 = String::new();
for x in 0..area.width {
row0.push(buf[(x, 0)].symbol().chars().next().unwrap_or(' '));
row1.push(buf[(x, 1)].symbol().chars().next().unwrap_or(' '));
}
assert!(
row0.contains("Working"),
"expected Working header after denial: {row0:?}"
row1.contains("Working"),
"expected Working header after denial on row 1: {row1:?}"
);
// Drain the channel to avoid unused warnings.
@@ -495,17 +548,13 @@ mod tests {
// Begin a task: show initial status.
pane.set_task_running(true);
// Allow some frames so the animation thread ticks.
std::thread::sleep(std::time::Duration::from_millis(120));
// Render and confirm the line contains the "Working" header.
let area = Rect::new(0, 0, 40, 3);
let mut buf = Buffer::empty(area);
(&pane).render_ref(area, &mut buf);
let mut row0 = String::new();
for x in 0..area.width {
row0.push(buf[(x, 0)].symbol().chars().next().unwrap_or(' '));
row0.push(buf[(x, 1)].symbol().chars().next().unwrap_or(' '));
}
assert!(
row0.contains("Working"),
@@ -538,12 +587,12 @@ mod tests {
let mut buf = Buffer::empty(area);
(&pane).render_ref(area, &mut buf);
// Top row contains the status header
// Row 1 contains the status header (row 0 is the spacer)
let mut top = String::new();
for x in 0..area.width {
top.push(buf[(x, 0)].symbol().chars().next().unwrap_or(' '));
top.push(buf[(x, 1)].symbol().chars().next().unwrap_or(' '));
}
assert_eq!(buf[(0, 0)].symbol().chars().next().unwrap_or(' '), '▌');
assert_eq!(buf[(0, 1)].symbol().chars().next().unwrap_or(' '), '▌');
assert!(
top.contains("Working"),
"expected Working header on top row: {top:?}"
@@ -580,7 +629,7 @@ mod tests {
pane.set_task_running(true);
// Height=2 → pad shrinks to 1; bottom row is blank, top row has spinner.
// Height=2 → with spacer, spinner on row 1; no bottom padding.
let area2 = Rect::new(0, 0, 20, 2);
let mut buf2 = Buffer::empty(area2);
(&pane).render_ref(area2, &mut buf2);
@@ -590,13 +639,10 @@ mod tests {
row0.push(buf2[(x, 0)].symbol().chars().next().unwrap_or(' '));
row1.push(buf2[(x, 1)].symbol().chars().next().unwrap_or(' '));
}
assert!(row0.trim().is_empty(), "expected spacer on row 0: {row0:?}");
assert!(
row0.contains("Working"),
"expected Working header on row 0: {row0:?}"
);
assert!(
row1.trim().is_empty(),
"expected bottom padding on row 1: {row1:?}"
row1.contains("Working"),
"expected Working on row 1: {row1:?}"
);
// Height=1 → no padding; single row is the spinner.

View File

@@ -11,4 +11,4 @@ expression: terminal.backend()
"▌ "
"▌ "
"▌ "
" ⏎ send Ctrl+J newline Ctrl+C quit "
" ⏎ send Ctrl+J newline Ctrl+T transcript Ctrl+C quit "

View File

@@ -11,4 +11,4 @@ expression: terminal.backend()
"▌ "
"▌ "
"▌ "
" ⏎ send Ctrl+J newline Ctrl+C quit "
" ⏎ send Ctrl+J newline Ctrl+T transcript Ctrl+C quit "

View File

@@ -11,4 +11,4 @@ expression: terminal.backend()
"▌ "
"▌ "
"▌ "
" ⏎ send Ctrl+J newline Ctrl+C quit "
" ⏎ send Ctrl+J newline Ctrl+T transcript Ctrl+C quit "

View File

@@ -11,4 +11,4 @@ expression: terminal.backend()
"▌ "
"▌ "
"▌ "
" ⏎ send Ctrl+J newline Ctrl+C quit "
" ⏎ send Ctrl+J newline Ctrl+T transcript Ctrl+C quit "

View File

@@ -11,4 +11,4 @@ expression: terminal.backend()
"▌ "
"▌ "
"▌ "
" ⏎ send Ctrl+J newline Ctrl+C quit "
" ⏎ send Ctrl+J newline Ctrl+T transcript Ctrl+C quit "

View File

@@ -24,9 +24,17 @@ impl StatusIndicatorView {
pub fn update_text(&mut self, text: String) {
self.view.update_text(text);
}
pub fn update_header(&mut self, header: String) {
self.view.update_header(header);
}
}
impl BottomPaneView for StatusIndicatorView {
fn update_status_header(&mut self, header: String) {
self.update_header(header);
}
fn should_hide_when_task_is_done(&mut self) -> bool {
true
}

View File

@@ -1473,7 +1473,7 @@ mod tests {
.timestamp() as u64;
let mut rng = rand::rngs::StdRng::seed_from_u64(pst_today_seed);
for _case in 0..10_000 {
for _case in 0..500 {
let mut ta = TextArea::new();
let mut state = TextAreaState::default();
// Track element payloads we insert. Payloads use characters '[' and ']' which
@@ -1497,7 +1497,7 @@ mod tests {
let mut width: u16 = rng.random_range(1..=12);
let mut height: u16 = rng.random_range(1..=4);
for _step in 0..200 {
for _step in 0..60 {
// Mostly stable width/height, occasionally change
if rng.random_bool(0.1) {
width = rng.random_range(1..=12);

View File

@@ -23,9 +23,12 @@ use codex_core::protocol::McpToolCallBeginEvent;
use codex_core::protocol::McpToolCallEndEvent;
use codex_core::protocol::Op;
use codex_core::protocol::PatchApplyBeginEvent;
use codex_core::protocol::StreamErrorEvent;
use codex_core::protocol::TaskCompleteEvent;
use codex_core::protocol::TokenUsage;
use codex_core::protocol::TurnAbortReason;
use codex_core::protocol::TurnDiffEvent;
use codex_core::protocol::WebSearchBeginEvent;
use codex_protocol::parse_command::ParsedCommand;
use crossterm::event::KeyEvent;
use crossterm::event::KeyEventKind;
@@ -47,11 +50,13 @@ use crate::bottom_pane::CancellationEvent;
use crate::bottom_pane::InputResult;
use crate::bottom_pane::SelectionAction;
use crate::bottom_pane::SelectionItem;
use crate::get_git_diff::get_git_diff;
use crate::history_cell;
use crate::history_cell::CommandOutput;
use crate::history_cell::ExecCell;
use crate::history_cell::HistoryCell;
use crate::history_cell::PatchEventType;
use crate::slash_command::SlashCommand;
use crate::tui::FrameRequester;
// streaming internals are provided by crate::streaming and crate::markdown_stream
use crate::user_approval_widget::ApprovalRequest;
@@ -59,6 +64,7 @@ mod interrupts;
use self::interrupts::InterruptManager;
mod agent;
use self::agent::spawn_agent;
use self::agent::spawn_agent_from_existing;
use crate::streaming::controller::AppEventHistorySink;
use crate::streaming::controller::StreamController;
use codex_common::approval_presets::ApprovalPreset;
@@ -89,8 +95,6 @@ pub(crate) struct ChatWidget {
last_token_usage: TokenUsage,
// Stream lifecycle controller
stream: StreamController,
// Track the most recently active stream kind in the current turn
last_stream_kind: Option<StreamKind>,
running_commands: HashMap<String, RunningCommand>,
pending_exec_completions: Vec<(Vec<String>, Vec<ParsedCommand>, CommandOutput)>,
task_complete_pending: bool,
@@ -98,8 +102,15 @@ pub(crate) struct ChatWidget {
interrupts: InterruptManager,
// Whether a redraw is needed after handling the current event
needs_redraw: bool,
// Accumulates the current reasoning block text to extract a header
reasoning_buffer: String,
// Accumulates full reasoning content for transcript-only recording
full_reasoning_buffer: String,
session_id: Option<Uuid>,
frame_requester: FrameRequester,
// Whether to include the initial welcome banner on session configured
show_welcome_banner: bool,
last_history_was_exec: bool,
}
struct UserMessage {
@@ -107,8 +118,6 @@ struct UserMessage {
image_paths: Vec<PathBuf>,
}
use crate::streaming::StreamKind;
impl From<String> for UserMessage {
fn from(text: String) -> Self {
Self {
@@ -133,14 +142,18 @@ impl ChatWidget {
}
fn flush_answer_stream_with_separator(&mut self) {
let sink = AppEventHistorySink(self.app_event_tx.clone());
let _ = self.stream.finalize(StreamKind::Answer, true, &sink);
let _ = self.stream.finalize(true, &sink);
}
// --- Small event handlers ---
fn on_session_configured(&mut self, event: codex_core::protocol::SessionConfiguredEvent) {
self.bottom_pane
.set_history_metadata(event.history_log_id, event.history_entry_count);
self.session_id = Some(event.session_id);
self.add_to_history(&history_cell::new_session_info(&self.config, event, true));
self.add_to_history(history_cell::new_session_info(
&self.config,
event,
self.show_welcome_banner,
));
if let Some(user_message) = self.initial_user_message.take() {
self.submit_user_message(user_message);
}
@@ -150,30 +163,48 @@ impl ChatWidget {
fn on_agent_message(&mut self, message: String) {
let sink = AppEventHistorySink(self.app_event_tx.clone());
let finished = self.stream.apply_final_answer(&message, &sink);
self.last_stream_kind = Some(StreamKind::Answer);
self.handle_if_stream_finished(finished);
self.mark_needs_redraw();
}
fn on_agent_message_delta(&mut self, delta: String) {
self.handle_streaming_delta(StreamKind::Answer, delta);
self.handle_streaming_delta(delta);
}
fn on_agent_reasoning_delta(&mut self, delta: String) {
self.handle_streaming_delta(StreamKind::Reasoning, delta);
// For reasoning deltas, do not stream to history. Accumulate the
// current reasoning block and extract the first bold element
// (between **/**) as the chunk header. Show this header as status.
self.reasoning_buffer.push_str(&delta);
if let Some(header) = extract_first_bold(&self.reasoning_buffer) {
// Update the shimmer header to the extracted reasoning chunk header.
self.bottom_pane.update_status_header(header);
} else {
// Fallback while we don't yet have a bold header: leave existing header as-is.
}
self.mark_needs_redraw();
}
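
`extract_first_bold` is not shown in this diff; a plausible shape for it (hypothetical, for illustration only) returns the text between the first complete pair of `**` markers:

// Hypothetical sketch of the header extraction referenced above.
fn extract_first_bold(s: &str) -> Option<String> {
    let start = s.find("**")? + 2;
    let len = s[start..].find("**")?;
    Some(s[start..start + len].to_string())
}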
fn on_agent_reasoning_final(&mut self, text: String) {
let sink = AppEventHistorySink(self.app_event_tx.clone());
let finished = self.stream.apply_final_reasoning(&text, &sink);
self.last_stream_kind = Some(StreamKind::Reasoning);
self.handle_if_stream_finished(finished);
fn on_agent_reasoning_final(&mut self) {
// At the end of a reasoning block, record transcript-only content.
self.full_reasoning_buffer.push_str(&self.reasoning_buffer);
if !self.full_reasoning_buffer.is_empty() {
self.add_to_history(history_cell::new_reasoning_block(
self.full_reasoning_buffer.clone(),
&self.config,
));
}
self.reasoning_buffer.clear();
self.full_reasoning_buffer.clear();
self.mark_needs_redraw();
}
fn on_reasoning_section_break(&mut self) {
let sink = AppEventHistorySink(self.app_event_tx.clone());
self.stream.insert_reasoning_section_break(&sink);
// Start a new reasoning block for header extraction and accumulate transcript.
self.full_reasoning_buffer.push_str(&self.reasoning_buffer);
self.full_reasoning_buffer.push_str("\n\n");
self.reasoning_buffer.clear();
}
// Raw reasoning uses the same flow as summarized reasoning
@@ -182,7 +213,8 @@ impl ChatWidget {
self.bottom_pane.clear_ctrl_c_quit_hint();
self.bottom_pane.set_task_running(true);
self.stream.reset_headers_for_new_turn();
self.last_stream_kind = None;
self.full_reasoning_buffer.clear();
self.reasoning_buffer.clear();
self.mark_needs_redraw();
}
@@ -191,9 +223,7 @@ impl ChatWidget {
// without emitting stray headers for other streams.
if self.stream.is_write_cycle_active() {
let sink = AppEventHistorySink(self.app_event_tx.clone());
if let Some(kind) = self.last_stream_kind {
let _ = self.stream.finalize(kind, true, &sink);
}
let _ = self.stream.finalize(true, &sink);
}
// Mark task stopped and request redraw now that all content is in history.
self.bottom_pane.set_task_running(false);
@@ -212,7 +242,7 @@ impl ChatWidget {
}
fn on_error(&mut self, message: String) {
self.add_to_history(&history_cell::new_error_event(message));
self.add_to_history(history_cell::new_error_event(message));
self.bottom_pane.set_task_running(false);
self.running_commands.clear();
self.stream.clear_all();
@@ -220,7 +250,7 @@ impl ChatWidget {
}
fn on_plan_update(&mut self, update: codex_core::plan_tool::UpdatePlanArgs) {
self.add_to_history(&history_cell::new_plan_update(update));
self.add_to_history(history_cell::new_plan_update(update));
}
fn on_exec_approval_request(&mut self, id: String, ev: ExecApprovalRequestEvent) {
@@ -255,7 +285,7 @@ impl ChatWidget {
}
fn on_patch_apply_begin(&mut self, event: PatchApplyBeginEvent) {
self.add_to_history(&history_cell::new_patch_event(
self.add_to_history(history_cell::new_patch_event(
PatchEventType::ApplyBegin {
auto_approved: event.auto_approved,
},
@@ -286,6 +316,11 @@ impl ChatWidget {
self.defer_or_handle(|q| q.push_mcp_end(ev), |s| s.handle_mcp_end_now(ev2));
}
fn on_web_search_begin(&mut self, ev: WebSearchBeginEvent) {
self.flush_answer_stream_with_separator();
self.add_to_history(history_cell::new_web_search_call(ev.query));
}
fn on_get_history_entry_response(
&mut self,
event: codex_core::protocol::GetHistoryEntryResponseEvent,
@@ -310,6 +345,12 @@ impl ChatWidget {
fn on_background_event(&mut self, message: String) {
debug!("BackgroundEvent: {message}");
}
fn on_stream_error(&mut self, message: String) {
// Show stream errors in the transcript so users see retry/backoff info.
self.add_to_history(history_cell::new_stream_error_event(message));
self.mark_needs_redraw();
}
/// Periodic tick to commit at most one queued line to history with a small delay,
/// animating the output.
pub(crate) fn on_commit_tick(&mut self) {
@@ -350,15 +391,17 @@ impl ChatWidget {
self.bottom_pane.set_task_running(false);
self.task_complete_pending = false;
}
// A completed stream indicates non-exec content was just inserted.
// Reset the exec header grouping so the next exec shows its header.
self.last_history_was_exec = false;
self.flush_interrupt_queue();
}
}
#[inline]
fn handle_streaming_delta(&mut self, kind: StreamKind, delta: String) {
fn handle_streaming_delta(&mut self, delta: String) {
let sink = AppEventHistorySink(self.app_event_tx.clone());
self.stream.begin(kind, &sink);
self.last_stream_kind = Some(kind);
self.stream.begin(&sink);
self.stream.push_and_maybe_commit(&delta, &sink);
self.mark_needs_redraw();
}
@@ -376,6 +419,7 @@ impl ChatWidget {
exit_code: ev.exit_code,
stdout: ev.stdout.clone(),
stderr: ev.stderr.clone(),
formatted_output: ev.formatted_output.clone(),
},
));
@@ -383,9 +427,16 @@ impl ChatWidget {
self.active_exec_cell = None;
let pending = std::mem::take(&mut self.pending_exec_completions);
for (command, parsed, output) in pending {
self.add_to_history(&history_cell::new_completed_exec_command(
command, parsed, output,
));
let include_header = !self.last_history_was_exec;
let cell = history_cell::new_completed_exec_command(
command,
parsed,
output,
include_header,
ev.duration,
);
self.add_to_history(cell);
self.last_history_was_exec = true;
}
}
}
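// A minimal sketch of the header-grouping rule used above, with illustrative
// names (assumption: this standalone model is not in the diff). Consecutive
// exec cells suppress the repeated header; any visible non-exec insertion
// resets the group so the next exec shows its header again.
struct ExecGrouping {
    last_history_was_exec: bool,
}
impl ExecGrouping {
    // Mirrors `let include_header = !self.last_history_was_exec;` above.
    fn include_header_for_next_exec(&mut self) -> bool {
        let include = !self.last_history_was_exec;
        self.last_history_was_exec = true;
        include
    }
    // Mirrors the reset performed when a cell with visible lines is inserted.
    fn on_non_exec_history(&mut self) {
        self.last_history_was_exec = false;
    }
}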
@@ -395,9 +446,9 @@ impl ChatWidget {
event: codex_core::protocol::PatchApplyEndEvent,
) {
if event.success {
self.add_to_history(&history_cell::new_patch_apply_success(event.stdout));
self.add_to_history(history_cell::new_patch_apply_success(event.stdout));
} else {
self.add_to_history(&history_cell::new_patch_apply_failure(event.stderr));
self.add_to_history(history_cell::new_patch_apply_failure(event.stderr));
}
}
@@ -419,7 +470,7 @@ impl ChatWidget {
ev: ApplyPatchApprovalRequestEvent,
) {
self.flush_answer_stream_with_separator();
self.add_to_history(&history_cell::new_patch_event(
self.add_to_history(history_cell::new_patch_event(
PatchEventType::ApprovalRequest,
ev.changes.clone(),
));
@@ -448,9 +499,11 @@ impl ChatWidget {
exec.parsed.extend(ev.parsed_cmd);
}
_ => {
let include_header = !self.last_history_was_exec;
self.active_exec_cell = Some(history_cell::new_active_exec_command(
ev.command,
ev.parsed_cmd,
include_header,
));
}
}
@@ -461,11 +514,11 @@ impl ChatWidget {
pub(crate) fn handle_mcp_begin_now(&mut self, ev: McpToolCallBeginEvent) {
self.flush_answer_stream_with_separator();
self.add_to_history(&history_cell::new_active_mcp_tool_call(ev.invocation));
self.add_to_history(history_cell::new_active_mcp_tool_call(ev.invocation));
}
pub(crate) fn handle_mcp_end_now(&mut self, ev: McpToolCallEndEvent) {
self.flush_answer_stream_with_separator();
self.add_to_history(&*history_cell::new_completed_mcp_tool_call(
self.add_boxed_history(history_cell::new_completed_mcp_tool_call(
80,
ev.invocation,
ev.duration,
@@ -532,13 +585,61 @@ impl ChatWidget {
total_token_usage: TokenUsage::default(),
last_token_usage: TokenUsage::default(),
stream: StreamController::new(config),
last_stream_kind: None,
running_commands: HashMap::new(),
pending_exec_completions: Vec::new(),
task_complete_pending: false,
interrupts: InterruptManager::new(),
needs_redraw: false,
reasoning_buffer: String::new(),
full_reasoning_buffer: String::new(),
session_id: None,
last_history_was_exec: false,
show_welcome_banner: true,
}
}
/// Create a ChatWidget attached to an existing conversation (e.g., a fork).
pub(crate) fn new_from_existing(
config: Config,
conversation: std::sync::Arc<codex_core::CodexConversation>,
session_configured: codex_core::protocol::SessionConfiguredEvent,
frame_requester: FrameRequester,
app_event_tx: AppEventSender,
enhanced_keys_supported: bool,
) -> Self {
let mut rng = rand::rng();
let placeholder = EXAMPLE_PROMPTS[rng.random_range(0..EXAMPLE_PROMPTS.len())].to_string();
let codex_op_tx =
spawn_agent_from_existing(conversation, session_configured, app_event_tx.clone());
Self {
app_event_tx: app_event_tx.clone(),
frame_requester: frame_requester.clone(),
codex_op_tx,
bottom_pane: BottomPane::new(BottomPaneParams {
frame_requester,
app_event_tx,
has_input_focus: true,
enhanced_keys_supported,
placeholder_text: placeholder,
}),
active_exec_cell: None,
config: config.clone(),
initial_user_message: None,
total_token_usage: TokenUsage::default(),
last_token_usage: TokenUsage::default(),
stream: StreamController::new(config),
running_commands: HashMap::new(),
pending_exec_completions: Vec::new(),
task_complete_pending: false,
interrupts: InterruptManager::new(),
needs_redraw: false,
reasoning_buffer: String::new(),
full_reasoning_buffer: String::new(),
session_id: None,
last_history_was_exec: false,
show_welcome_banner: false,
}
}
@@ -557,27 +658,156 @@ impl ChatWidget {
match self.bottom_pane.handle_key_event(key_event) {
InputResult::Submitted(text) => {
self.submit_user_message(text.into());
let images = self.bottom_pane.take_recent_submission_images();
self.submit_user_message(UserMessage {
text,
image_paths: images,
});
}
InputResult::Command(cmd) => {
self.dispatch_command(cmd);
}
InputResult::None => {}
}
}
pub(crate) fn attach_image(
&mut self,
path: PathBuf,
width: u32,
height: u32,
format_label: &str,
) {
tracing::info!(
"attach_image path={path:?} width={width} height={height} format={format_label}",
);
self.bottom_pane
.attach_image(path.clone(), width, height, format_label);
self.request_redraw();
}
fn dispatch_command(&mut self, cmd: SlashCommand) {
match cmd {
SlashCommand::New => {
self.app_event_tx.send(AppEvent::NewSession);
}
SlashCommand::Init => {
// Guard: do not run if a task is active.
const INIT_PROMPT: &str = include_str!("../prompt_for_init_command.md");
self.submit_text_message(INIT_PROMPT.to_string());
}
SlashCommand::Compact => {
self.clear_token_usage();
self.app_event_tx.send(AppEvent::CodexOp(Op::Compact));
}
SlashCommand::Model => {
self.open_model_popup();
}
SlashCommand::Approvals => {
self.open_approvals_popup();
}
SlashCommand::Quit => {
self.app_event_tx.send(AppEvent::ExitRequest);
}
SlashCommand::Logout => {
if let Err(e) = codex_login::logout(&self.config.codex_home) {
tracing::error!("failed to logout: {e}");
}
self.app_event_tx.send(AppEvent::ExitRequest);
}
SlashCommand::Diff => {
self.add_diff_in_progress();
let tx = self.app_event_tx.clone();
tokio::spawn(async move {
let text = match get_git_diff().await {
Ok((is_git_repo, diff_text)) => {
if is_git_repo {
diff_text
} else {
"`/diff` — _not inside a git repository_".to_string()
}
}
Err(e) => format!("Failed to compute diff: {e}"),
};
tx.send(AppEvent::DiffResult(text));
});
}
SlashCommand::Mention => {
self.insert_str("@");
}
SlashCommand::Status => {
self.add_status_output();
}
SlashCommand::Mcp => {
self.add_mcp_output();
}
#[cfg(debug_assertions)]
SlashCommand::TestApproval => {
use codex_core::protocol::EventMsg;
use std::collections::HashMap;
use codex_core::protocol::ApplyPatchApprovalRequestEvent;
use codex_core::protocol::FileChange;
self.app_event_tx.send(AppEvent::CodexEvent(Event {
id: "1".to_string(),
// msg: EventMsg::ExecApprovalRequest(ExecApprovalRequestEvent {
// call_id: "1".to_string(),
// command: vec!["git".into(), "apply".into()],
// cwd: self.config.cwd.clone(),
// reason: Some("test".to_string()),
// }),
msg: EventMsg::ApplyPatchApprovalRequest(ApplyPatchApprovalRequestEvent {
call_id: "1".to_string(),
changes: HashMap::from([
(
PathBuf::from("/tmp/test.txt"),
FileChange::Add {
content: "test".to_string(),
},
),
(
PathBuf::from("/tmp/test2.txt"),
FileChange::Update {
unified_diff: "+test\n-test2".to_string(),
move_path: None,
},
),
]),
reason: None,
grant_root: Some(PathBuf::from("/tmp")),
}),
}));
}
}
}
pub(crate) fn handle_paste(&mut self, text: String) {
self.bottom_pane.handle_paste(text);
}
fn flush_active_exec_cell(&mut self) {
if let Some(active) = self.active_exec_cell.take() {
self.last_history_was_exec = true;
self.app_event_tx
.send(AppEvent::InsertHistory(active.display_lines()));
.send(AppEvent::InsertHistoryCell(Box::new(active)));
}
}
fn add_to_history(&mut self, cell: &dyn HistoryCell) {
fn add_to_history(&mut self, cell: impl HistoryCell + 'static) {
// Only break exec grouping if the cell renders visible lines.
let has_display_lines = !cell.display_lines().is_empty();
self.flush_active_exec_cell();
if has_display_lines {
self.last_history_was_exec = false;
}
self.app_event_tx
.send(AppEvent::InsertHistory(cell.display_lines()));
.send(AppEvent::InsertHistoryCell(Box::new(cell)));
}
fn add_boxed_history(&mut self, cell: Box<dyn HistoryCell>) {
self.flush_active_exec_cell();
self.app_event_tx.send(AppEvent::InsertHistoryCell(cell));
}
fn submit_user_message(&mut self, user_message: UserMessage) {
@@ -613,7 +843,7 @@ impl ChatWidget {
// Only show the text portion in conversation history.
if !text.is_empty() {
self.add_to_history(&history_cell::new_user_prompt(text.clone()));
self.add_to_history(history_cell::new_user_prompt(text.clone()));
}
}
@@ -641,16 +871,23 @@ impl ChatWidget {
| EventMsg::AgentReasoningRawContentDelta(AgentReasoningRawContentDeltaEvent {
delta,
}) => self.on_agent_reasoning_delta(delta),
EventMsg::AgentReasoning(AgentReasoningEvent { text })
| EventMsg::AgentReasoningRawContent(AgentReasoningRawContentEvent { text }) => {
self.on_agent_reasoning_final(text)
EventMsg::AgentReasoning(AgentReasoningEvent { .. })
| EventMsg::AgentReasoningRawContent(AgentReasoningRawContentEvent { .. }) => {
self.on_agent_reasoning_final()
}
EventMsg::AgentReasoningSectionBreak(_) => self.on_reasoning_section_break(),
EventMsg::TaskStarted => self.on_task_started(),
EventMsg::TaskComplete(TaskCompleteEvent { .. }) => self.on_task_complete(),
EventMsg::TokenCount(token_usage) => self.on_token_count(token_usage),
EventMsg::Error(ErrorEvent { message }) => self.on_error(message),
EventMsg::TurnAborted(_) => self.on_error("Turn interrupted".to_owned()),
EventMsg::TurnAborted(ev) => match ev.reason {
TurnAbortReason::Interrupted => {
self.on_error("Tell the model what to do differently".to_owned())
}
TurnAbortReason::Replaced => {
self.on_error("Turn aborted: replaced by a new task".to_owned())
}
},
EventMsg::PlanUpdate(update) => self.on_plan_update(update),
EventMsg::ExecApprovalRequest(ev) => self.on_exec_approval_request(id, ev),
EventMsg::ApplyPatchApprovalRequest(ev) => self.on_apply_patch_approval_request(id, ev),
@@ -661,6 +898,7 @@ impl ChatWidget {
EventMsg::ExecCommandEnd(ev) => self.on_exec_command_end(ev),
EventMsg::McpToolCallBegin(ev) => self.on_mcp_tool_call_begin(ev),
EventMsg::McpToolCallEnd(ev) => self.on_mcp_tool_call_end(ev),
EventMsg::WebSearchBegin(ev) => self.on_web_search_begin(ev),
EventMsg::GetHistoryEntryResponse(ev) => self.on_get_history_entry_response(ev),
EventMsg::McpListToolsResponse(ev) => self.on_list_mcp_tools(ev),
EventMsg::ShutdownComplete => self.on_shutdown_complete(),
@@ -668,6 +906,12 @@ impl ChatWidget {
EventMsg::BackgroundEvent(BackgroundEventEvent { message }) => {
self.on_background_event(message)
}
EventMsg::StreamError(StreamErrorEvent { message }) => self.on_stream_error(message),
EventMsg::ConversationHistory(ev) => {
// Forward to App so it can process backtrack flows.
self.app_event_tx
.send(crate::app_event::AppEvent::ConversationHistory(ev));
}
}
// Coalesce redraws: issue at most one after handling the event
if self.needs_redraw {
@@ -687,14 +931,13 @@ impl ChatWidget {
self.request_redraw();
}
pub(crate) fn add_diff_output(&mut self, diff_output: String) {
pub(crate) fn on_diff_complete(&mut self) {
self.bottom_pane.set_task_running(false);
self.add_to_history(&history_cell::new_diff_output(diff_output));
self.mark_needs_redraw();
}
pub(crate) fn add_status_output(&mut self) {
self.add_to_history(&history_cell::new_status_output(
self.add_to_history(history_cell::new_status_output(
&self.config,
&self.total_token_usage,
&self.session_id,
@@ -805,7 +1048,7 @@ impl ChatWidget {
pub(crate) fn add_mcp_output(&mut self) {
if self.config.mcp_servers.is_empty() {
self.add_to_history(&history_cell::empty_mcp_output());
self.add_to_history(history_cell::empty_mcp_output());
} else {
self.submit_op(Op::ListMcpTools);
}
@@ -843,6 +1086,14 @@ impl ChatWidget {
pub(crate) fn insert_str(&mut self, text: &str) {
self.bottom_pane.insert_str(text);
}
pub(crate) fn show_esc_backtrack_hint(&mut self) {
self.bottom_pane.show_esc_backtrack_hint();
}
pub(crate) fn clear_esc_backtrack_hint(&mut self) {
self.bottom_pane.clear_esc_backtrack_hint();
}
/// Forward an `Op` directly to codex.
pub(crate) fn submit_op(&self, op: Op) {
// Record outbound operation for session replay fidelity.
@@ -853,7 +1104,7 @@ impl ChatWidget {
}
fn on_list_mcp_tools(&mut self, ev: McpListToolsResponseEvent) {
self.add_to_history(&history_cell::new_mcp_tools_output(&self.config, ev.tools));
self.add_to_history(history_cell::new_mcp_tools_output(&self.config, ev.tools));
}
/// Programmatically submit a user text message as if typed in the
@@ -870,6 +1121,16 @@ impl ChatWidget {
&self.total_token_usage
}
pub(crate) fn session_id(&self) -> Option<Uuid> {
self.session_id
}
/// Return a reference to the widget's current config (includes any
/// runtime overrides applied via TUI, e.g., model or approval policy).
pub(crate) fn config_ref(&self) -> &Config {
&self.config
}
pub(crate) fn clear_token_usage(&mut self) {
self.total_token_usage = TokenUsage::default();
self.bottom_pane.set_token_usage(
@@ -932,5 +1193,35 @@ fn add_token_usage(current_usage: &TokenUsage, new_usage: &TokenUsage) -> TokenU
}
}
// Extract the first bold (Markdown) element in the form **...** from `s`.
// Returns the inner text if found; otherwise `None`.
fn extract_first_bold(s: &str) -> Option<String> {
let bytes = s.as_bytes();
let mut i = 0usize;
while i + 1 < bytes.len() {
if bytes[i] == b'*' && bytes[i + 1] == b'*' {
let start = i + 2;
let mut j = start;
while j + 1 < bytes.len() {
if bytes[j] == b'*' && bytes[j + 1] == b'*' {
// Found closing **
let inner = &s[start..j];
let trimmed = inner.trim();
if !trimmed.is_empty() {
return Some(trimmed.to_string());
} else {
return None;
}
}
j += 1;
}
// No closing; stop searching (wait for more deltas)
return None;
}
i += 1;
}
None
}
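// A minimal sketch (assumed test module, not in the diff) of the extractor's
// behavior above: incomplete bold markers yield `None` until the closing `**`
// arrives in a later delta, and empty bold content counts as no header.
#[cfg(test)]
mod extract_first_bold_examples {
    use super::extract_first_bold;
    #[test]
    fn header_extraction_examples() {
        assert_eq!(
            extract_first_bold("**Planning** next steps"),
            Some("Planning".to_string())
        );
        // No closing marker yet: wait for more deltas.
        assert_eq!(extract_first_bold("**Plan"), None);
        // Empty bold content is treated as no header.
        assert_eq!(extract_first_bold("****"), None);
    }
}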
#[cfg(test)]
mod tests;


@@ -1,5 +1,6 @@
use std::sync::Arc;
use codex_core::CodexConversation;
use codex_core::ConversationManager;
use codex_core::NewConversation;
use codex_core::config::Config;
@@ -59,3 +60,40 @@ pub(crate) fn spawn_agent(
codex_op_tx
}
/// Spawn agent loops for an existing conversation (e.g., a forked conversation).
/// Sends the provided `SessionConfiguredEvent` immediately, then forwards subsequent
/// events and accepts Ops for submission.
pub(crate) fn spawn_agent_from_existing(
conversation: std::sync::Arc<CodexConversation>,
session_configured: codex_core::protocol::SessionConfiguredEvent,
app_event_tx: AppEventSender,
) -> UnboundedSender<Op> {
let (codex_op_tx, mut codex_op_rx) = unbounded_channel::<Op>();
let app_event_tx_clone = app_event_tx.clone();
tokio::spawn(async move {
// Forward the captured `SessionConfigured` event so it can be rendered in the UI.
let ev = codex_core::protocol::Event {
id: "".to_string(),
msg: codex_core::protocol::EventMsg::SessionConfigured(session_configured),
};
app_event_tx_clone.send(AppEvent::CodexEvent(ev));
let conversation_clone = conversation.clone();
tokio::spawn(async move {
while let Some(op) = codex_op_rx.recv().await {
let id = conversation_clone.submit(op).await;
if let Err(e) = id {
tracing::error!("failed to submit op: {e}");
}
}
});
while let Ok(event) = conversation.next_event().await {
app_event_tx_clone.send(AppEvent::CodexEvent(event));
}
});
codex_op_tx
}
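// A self-contained sketch of the same two-task bridge pattern; every name
// below is a stand-in for illustration, not a codex_core API.
use tokio::sync::mpsc::{unbounded_channel, UnboundedReceiver, UnboundedSender};
#[derive(Debug)]
enum DemoOp {
    Interrupt,
}
fn spawn_bridge(mut events: UnboundedReceiver<String>) -> UnboundedSender<DemoOp> {
    let (op_tx, mut op_rx) = unbounded_channel::<DemoOp>();
    tokio::spawn(async move {
        // Inner task: drain and submit ops, as the real loop does with
        // `conversation.submit(op)`.
        tokio::spawn(async move {
            while let Some(op) = op_rx.recv().await {
                println!("submit {op:?}");
            }
        });
        // Outer loop: forward events to the UI until the stream ends.
        while let Some(event) = events.recv().await {
            println!("forward {event}");
        }
    });
    op_tx
}
#[tokio::main(flavor = "current_thread")]
async fn main() {
    let (event_tx, event_rx) = unbounded_channel();
    let ops = spawn_bridge(event_rx);
    event_tx.send("session_configured".to_string()).unwrap();
    ops.send(DemoOp::Interrupt).unwrap();
    drop(event_tx); // closing the event stream lets the bridge's outer loop end
    tokio::task::yield_now().await;
}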


@@ -1,10 +1,6 @@
---
source: tui/src/chatwidget/tests.rs
assertion_line: 886
expression: combined
---
thinking
I will first analyze the request.
codex
Here is the result.


@@ -2,8 +2,5 @@
source: tui/src/chatwidget/tests.rs
expression: combined
---
thinking
I will first analyze the request.
codex
Here is the result.


@@ -19,7 +19,9 @@ use codex_core::protocol::ExecCommandEndEvent;
use codex_core::protocol::FileChange;
use codex_core::protocol::PatchApplyBeginEvent;
use codex_core::protocol::PatchApplyEndEvent;
use codex_core::protocol::StreamErrorEvent;
use codex_core::protocol::TaskCompleteEvent;
use codex_login::CodexAuth;
use crossterm::event::KeyCode;
use crossterm::event::KeyEvent;
use crossterm::event::KeyModifiers;
@@ -42,6 +44,31 @@ fn test_config() -> Config {
.expect("config")
}
// Backward-compat shim for older session logs that predate the
// `formatted_output` field on ExecCommandEnd events.
fn upgrade_event_payload_for_tests(mut payload: serde_json::Value) -> serde_json::Value {
if let Some(obj) = payload.as_object_mut()
&& let Some(msg) = obj.get_mut("msg")
&& let Some(m) = msg.as_object_mut()
{
let ty = m.get("type").and_then(|v| v.as_str()).unwrap_or("");
if ty == "exec_command_end" && !m.contains_key("formatted_output") {
let stdout = m.get("stdout").and_then(|v| v.as_str()).unwrap_or("");
let stderr = m.get("stderr").and_then(|v| v.as_str()).unwrap_or("");
let formatted = if stderr.is_empty() {
stdout.to_string()
} else {
format!("{stdout}{stderr}")
};
m.insert(
"formatted_output".to_string(),
serde_json::Value::String(formatted),
);
}
}
payload
}
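// A minimal sketch of what the shim produces for an old payload, assuming
// serde_json's `json!` macro is available in this test file (this example is
// not part of the diff).
#[test]
fn upgrade_shim_fills_missing_formatted_output() {
    use serde_json::json;
    let old = json!({
        "id": "1",
        "msg": { "type": "exec_command_end", "stdout": "out", "stderr": "err" }
    });
    let upgraded = upgrade_event_payload_for_tests(old);
    assert_eq!(upgraded["msg"]["formatted_output"], json!("outerr"));
}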
#[test]
fn final_answer_without_newline_is_flushed_immediately() {
let (mut chat, mut rx, _op_rx) = make_chatwidget_manual();
@@ -103,7 +130,9 @@ async fn helpers_are_available_and_do_not_panic() {
let (tx_raw, _rx) = unbounded_channel::<AppEvent>();
let tx = AppEventSender::new(tx_raw);
let cfg = test_config();
let conversation_manager = Arc::new(ConversationManager::default());
let conversation_manager = Arc::new(ConversationManager::with_auth(CodexAuth::from_api_key(
"test",
)));
let mut w = ChatWidget::new(
cfg,
conversation_manager,
@@ -144,14 +173,17 @@ fn make_chatwidget_manual() -> (
total_token_usage: TokenUsage::default(),
last_token_usage: TokenUsage::default(),
stream: StreamController::new(cfg),
last_stream_kind: None,
running_commands: HashMap::new(),
pending_exec_completions: Vec::new(),
task_complete_pending: false,
interrupts: InterruptManager::new(),
needs_redraw: false,
reasoning_buffer: String::new(),
full_reasoning_buffer: String::new(),
session_id: None,
frame_requester: crate::tui::FrameRequester::test_dummy(),
show_welcome_banner: true,
last_history_was_exec: false,
};
(widget, rx, op_rx)
}
@@ -161,8 +193,10 @@ fn drain_insert_history(
) -> Vec<Vec<ratatui::text::Line<'static>>> {
let mut out = Vec::new();
while let Ok(ev) = rx.try_recv() {
if let AppEvent::InsertHistory(lines) = ev {
out.push(lines);
match ev {
AppEvent::InsertHistoryLines(lines) => out.push(lines),
AppEvent::InsertHistoryCell(cell) => out.push(cell.display_lines()),
_ => {}
}
}
out
@@ -230,8 +264,10 @@ fn exec_history_cell_shows_working_then_completed() {
call_id: "call-1".into(),
stdout: "done".into(),
stderr: String::new(),
aggregated_output: "done".into(),
exit_code: 0,
duration: std::time::Duration::from_millis(5),
formatted_output: "done".into(),
}),
});
@@ -243,8 +279,12 @@ fn exec_history_cell_shows_working_then_completed() {
);
let blob = lines_to_single_string(&cells[0]);
assert!(
blob.contains("Completed"),
"expected completed exec cell to show Completed header: {blob:?}"
blob.contains('✓'),
"expected completed exec cell to show success marker: {blob:?}"
);
assert!(
blob.contains("echo done"),
"expected command text to be present: {blob:?}"
);
}
@@ -275,8 +315,10 @@ fn exec_history_cell_shows_working_then_failed() {
call_id: "call-2".into(),
stdout: String::new(),
stderr: "error".into(),
aggregated_output: "error".into(),
exit_code: 2,
duration: std::time::Duration::from_millis(7),
formatted_output: "".into(),
}),
});
@@ -288,9 +330,82 @@ fn exec_history_cell_shows_working_then_failed() {
);
let blob = lines_to_single_string(&cells[0]);
assert!(
blob.contains("Failed (exit 2)"),
"expected completed exec cell to show Failed header with exit code: {blob:?}"
blob.contains('✗'),
"expected failure marker present: {blob:?}"
);
assert!(
blob.contains("false"),
"expected command text present: {blob:?}"
);
}
#[test]
fn exec_history_extends_previous_when_consecutive() {
let (mut chat, mut rx, _op_rx) = make_chatwidget_manual();
// First command
chat.handle_codex_event(Event {
id: "call-a".into(),
msg: EventMsg::ExecCommandBegin(ExecCommandBeginEvent {
call_id: "call-a".into(),
command: vec!["bash".into(), "-lc".into(), "echo one".into()],
cwd: std::env::current_dir().unwrap_or_else(|_| PathBuf::from(".")),
parsed_cmd: vec![
codex_core::parse_command::ParsedCommand::Unknown {
cmd: "echo one".into(),
}
.into(),
],
}),
});
chat.handle_codex_event(Event {
id: "call-a".into(),
msg: EventMsg::ExecCommandEnd(ExecCommandEndEvent {
call_id: "call-a".into(),
stdout: "one".into(),
stderr: String::new(),
aggregated_output: "one".into(),
exit_code: 0,
duration: std::time::Duration::from_millis(5),
formatted_output: "one".into(),
}),
});
let first_cells = drain_insert_history(&mut rx);
assert_eq!(first_cells.len(), 1, "first exec should insert history");
// Second command
chat.handle_codex_event(Event {
id: "call-b".into(),
msg: EventMsg::ExecCommandBegin(ExecCommandBeginEvent {
call_id: "call-b".into(),
command: vec!["bash".into(), "-lc".into(), "echo two".into()],
cwd: std::env::current_dir().unwrap_or_else(|_| PathBuf::from(".")),
parsed_cmd: vec![
codex_core::parse_command::ParsedCommand::Unknown {
cmd: "echo two".into(),
}
.into(),
],
}),
});
chat.handle_codex_event(Event {
id: "call-b".into(),
msg: EventMsg::ExecCommandEnd(ExecCommandEndEvent {
call_id: "call-b".into(),
stdout: "two".into(),
stderr: String::new(),
aggregated_output: "two".into(),
exit_code: 0,
duration: std::time::Duration::from_millis(5),
formatted_output: "two".into(),
}),
});
let second_cells = drain_insert_history(&mut rx);
assert_eq!(second_cells.len(), 1, "second exec should extend history");
let first_blob = lines_to_single_string(&first_cells[0]);
let second_blob = lines_to_single_string(&second_cells[0]);
assert!(first_blob.contains('✓'));
assert!(second_blob.contains("echo two"));
}
#[tokio::test(flavor = "current_thread")]
@@ -333,16 +448,30 @@ async fn binary_size_transcript_matches_ideal_fixture() {
match kind {
"codex_event" => {
if let Some(payload) = v.get("payload") {
let ev: Event = serde_json::from_value(payload.clone()).expect("parse");
let ev: Event =
serde_json::from_value(upgrade_event_payload_for_tests(payload.clone()))
.expect("parse");
chat.handle_codex_event(ev);
while let Ok(app_ev) = rx.try_recv() {
if let AppEvent::InsertHistory(lines) = app_ev {
transcript.push_str(&lines_to_single_string(&lines));
crate::insert_history::insert_history_lines_to_writer(
&mut terminal,
&mut ansi,
lines,
);
match app_ev {
AppEvent::InsertHistoryLines(lines) => {
transcript.push_str(&lines_to_single_string(&lines));
crate::insert_history::insert_history_lines_to_writer(
&mut terminal,
&mut ansi,
lines,
);
}
AppEvent::InsertHistoryCell(cell) => {
let lines = cell.display_lines();
transcript.push_str(&lines_to_single_string(&lines));
crate::insert_history::insert_history_lines_to_writer(
&mut terminal,
&mut ansi,
lines,
);
}
_ => {}
}
}
}
@@ -353,13 +482,25 @@ async fn binary_size_transcript_matches_ideal_fixture() {
{
chat.on_commit_tick();
while let Ok(app_ev) = rx.try_recv() {
if let AppEvent::InsertHistory(lines) = app_ev {
transcript.push_str(&lines_to_single_string(&lines));
crate::insert_history::insert_history_lines_to_writer(
&mut terminal,
&mut ansi,
lines,
);
match app_ev {
AppEvent::InsertHistoryLines(lines) => {
transcript.push_str(&lines_to_single_string(&lines));
crate::insert_history::insert_history_lines_to_writer(
&mut terminal,
&mut ansi,
lines,
);
}
AppEvent::InsertHistoryCell(cell) => {
let lines = cell.display_lines();
transcript.push_str(&lines_to_single_string(&lines));
crate::insert_history::insert_history_lines_to_writer(
&mut terminal,
&mut ansi,
lines,
);
}
_ => {}
}
}
}
@@ -375,6 +516,11 @@ async fn binary_size_transcript_matches_ideal_fixture() {
.expect("read ideal-binary-response.txt");
// Normalize line endings for Windows vs. Unix checkouts
let ideal = ideal.replace("\r\n", "\n");
let ideal_first_line = ideal
.lines()
.find(|l| !l.trim().is_empty())
.unwrap_or("")
.to_string();
// Build the final VT100 visual by parsing the ANSI stream. Trim trailing spaces per line
// and drop trailing empty lines so the shape matches the ideal fixture exactly.
@@ -400,22 +546,68 @@ async fn binary_size_transcript_matches_ideal_fixture() {
while lines.last().is_some_and(|l| l.is_empty()) {
lines.pop();
}
// Compare only after the last session banner marker, and start at the next 'thinking' line.
// Compare only after the last session banner marker. Skip the transient
// 'thinking' header if present, and start from the first non-empty line
// of content that follows.
const MARKER_PREFIX: &str = ">_ You are using OpenAI Codex in ";
let last_marker_line_idx = lines
.iter()
.rposition(|l| l.starts_with(MARKER_PREFIX))
.expect("marker not found in visible output");
let thinking_line_idx = (last_marker_line_idx + 1..lines.len())
.find(|&idx| lines[idx].trim_start() == "thinking")
.expect("no 'thinking' line found after marker");
// Anchor to the first ideal line if present; otherwise use heuristics.
let start_idx = (last_marker_line_idx + 1..lines.len())
.find(|&idx| lines[idx].trim_start() == ideal_first_line)
.or_else(|| {
// Prefer the first assistant content line (blockquote '>' prefix) after the marker.
(last_marker_line_idx + 1..lines.len())
.find(|&idx| lines[idx].trim_start().starts_with('>'))
})
.unwrap_or_else(|| {
// Fallback: first non-empty, non-'thinking' line
(last_marker_line_idx + 1..lines.len())
.find(|&idx| {
let t = lines[idx].trim_start();
!t.is_empty() && t != "thinking"
})
.expect("no content line found after marker")
});
let mut compare_lines: Vec<String> = Vec::new();
// Ensure the first line is exactly 'thinking' without leading spaces to match the fixture
compare_lines.push(lines[thinking_line_idx].trim_start().to_string());
compare_lines.extend(lines[(thinking_line_idx + 1)..].iter().cloned());
// Ensure the first line is trimmed-left to match the fixture shape.
compare_lines.push(lines[start_idx].trim_start().to_string());
compare_lines.extend(lines[(start_idx + 1)..].iter().cloned());
let visible_after = compare_lines.join("\n");
// Normalize: drop a leading 'thinking' line if present in either side to
// avoid coupling to whether the reasoning header is rendered in history.
fn drop_leading_thinking(s: &str) -> String {
let mut it = s.lines();
let first = it.next();
let rest = it.collect::<Vec<_>>().join("\n");
if first.is_some_and(|l| l.trim() == "thinking") {
rest
} else {
s.to_string()
}
}
let visible_after = drop_leading_thinking(&visible_after);
let ideal = drop_leading_thinking(&ideal);
// Normalize: strip leading Markdown blockquote markers ('>' or '> ') which
// may be present in rendered transcript lines but not in the ideal text.
fn strip_blockquotes(s: &str) -> String {
s.lines()
.map(|l| {
l.strip_prefix("> ")
.or_else(|| l.strip_prefix('>'))
.unwrap_or(l)
})
.collect::<Vec<_>>()
.join("\n")
}
let visible_after = strip_blockquotes(&visible_after);
let ideal = strip_blockquotes(&ideal);
// Optionally update the fixture when env var is set
if std::env::var("UPDATE_IDEAL").as_deref() == Ok("1") {
let mut p = PathBuf::from(env!("CARGO_MANIFEST_DIR"));
@@ -746,7 +938,26 @@ fn plan_update_renders_history_cell() {
}
#[test]
fn headers_emitted_on_stream_begin_for_answer_and_reasoning() {
fn stream_error_is_rendered_to_history() {
let (mut chat, mut rx, _op_rx) = make_chatwidget_manual();
let msg = "stream error: stream disconnected before completion: idle timeout waiting for SSE; retrying 1/5 in 211ms…";
chat.handle_codex_event(Event {
id: "sub-1".into(),
msg: EventMsg::StreamError(StreamErrorEvent {
message: msg.to_string(),
}),
});
let cells = drain_insert_history(&mut rx);
assert!(!cells.is_empty(), "expected a history cell for StreamError");
let blob = lines_to_single_string(cells.last().unwrap());
assert!(blob.contains(""));
assert!(blob.contains("stream error:"));
assert!(blob.contains("idle timeout waiting for SSE"));
}
#[test]
fn headers_emitted_on_stream_begin_for_answer_and_not_for_reasoning() {
let (mut chat, mut rx, _op_rx) = make_chatwidget_manual();
// Answer: no header until a newline commit
@@ -758,7 +969,7 @@ fn headers_emitted_on_stream_begin_for_answer_and_reasoning() {
});
let mut saw_codex_pre = false;
while let Ok(ev) = rx.try_recv() {
if let AppEvent::InsertHistory(lines) = ev {
if let AppEvent::InsertHistoryLines(lines) = ev {
let s = lines
.iter()
.flat_map(|l| l.spans.iter())
@@ -786,7 +997,7 @@ fn headers_emitted_on_stream_begin_for_answer_and_reasoning() {
chat.on_commit_tick();
let mut saw_codex_post = false;
while let Ok(ev) = rx.try_recv() {
if let AppEvent::InsertHistory(lines) = ev {
if let AppEvent::InsertHistoryLines(lines) = ev {
let s = lines
.iter()
.flat_map(|l| l.spans.iter())
@@ -804,7 +1015,7 @@ fn headers_emitted_on_stream_begin_for_answer_and_reasoning() {
"expected 'codex' header to be emitted after first newline commit"
);
// Reasoning: header immediately
// Reasoning: do NOT emit a history header; status text is updated instead
let (mut chat2, mut rx2, _op_rx2) = make_chatwidget_manual();
chat2.handle_codex_event(Event {
id: "sub-b".into(),
@@ -814,7 +1025,7 @@ fn headers_emitted_on_stream_begin_for_answer_and_reasoning() {
});
let mut saw_thinking = false;
while let Ok(ev) = rx2.try_recv() {
if let AppEvent::InsertHistory(lines) = ev {
if let AppEvent::InsertHistoryLines(lines) = ev {
let s = lines
.iter()
.flat_map(|l| l.spans.iter())
@@ -828,8 +1039,8 @@ fn headers_emitted_on_stream_begin_for_answer_and_reasoning() {
}
}
assert!(
saw_thinking,
"expected 'thinking' header to be emitted at stream start"
!saw_thinking,
"reasoning deltas should not emit history headers"
);
}


@@ -54,6 +54,10 @@ pub struct Cli {
#[clap(long = "cd", short = 'C', value_name = "DIR")]
pub cwd: Option<PathBuf>,
/// Enable web search (off by default). When enabled, the native Responses `web_search` tool is available to the model (no per-call approval).
#[arg(long = "search", default_value_t = false)]
pub web_search: bool,
#[clap(skip)]
pub config_overrides: CliConfigOverrides,
}


@@ -0,0 +1,97 @@
use std::path::PathBuf;
use tempfile::Builder;
#[derive(Debug)]
pub enum PasteImageError {
ClipboardUnavailable(String),
NoImage(String),
EncodeFailed(String),
IoError(String),
}
impl std::fmt::Display for PasteImageError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
PasteImageError::ClipboardUnavailable(msg) => write!(f, "clipboard unavailable: {msg}"),
PasteImageError::NoImage(msg) => write!(f, "no image on clipboard: {msg}"),
PasteImageError::EncodeFailed(msg) => write!(f, "could not encode image: {msg}"),
PasteImageError::IoError(msg) => write!(f, "io error: {msg}"),
}
}
}
impl std::error::Error for PasteImageError {}
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum EncodedImageFormat {
Png,
}
impl EncodedImageFormat {
pub fn label(self) -> &'static str {
match self {
EncodedImageFormat::Png => "PNG",
}
}
}
#[derive(Debug, Clone)]
pub struct PastedImageInfo {
pub width: u32,
pub height: u32,
pub encoded_format: EncodedImageFormat, // Always PNG for now.
}
/// Capture image from system clipboard, encode to PNG, and return bytes + info.
pub fn paste_image_as_png() -> Result<(Vec<u8>, PastedImageInfo), PasteImageError> {
tracing::debug!("attempting clipboard image read");
let mut cb = arboard::Clipboard::new()
.map_err(|e| PasteImageError::ClipboardUnavailable(e.to_string()))?;
let img = cb
.get_image()
.map_err(|e| PasteImageError::NoImage(e.to_string()))?;
let w = img.width as u32;
let h = img.height as u32;
let mut png: Vec<u8> = Vec::new();
let Some(rgba_img) = image::RgbaImage::from_raw(w, h, img.bytes.into_owned()) else {
return Err(PasteImageError::EncodeFailed("invalid RGBA buffer".into()));
};
let dyn_img = image::DynamicImage::ImageRgba8(rgba_img);
tracing::debug!("clipboard image decoded RGBA {w}x{h}");
{
let mut cursor = std::io::Cursor::new(&mut png);
dyn_img
.write_to(&mut cursor, image::ImageFormat::Png)
.map_err(|e| PasteImageError::EncodeFailed(e.to_string()))?;
}
tracing::debug!(
"clipboard image encoded to PNG ({len} bytes)",
len = png.len()
);
Ok((
png,
PastedImageInfo {
width: w,
height: h,
encoded_format: EncodedImageFormat::Png,
},
))
}
/// Convenience: write to a temp file and return its path + info.
pub fn paste_image_to_temp_png() -> Result<(PathBuf, PastedImageInfo), PasteImageError> {
let (png, info) = paste_image_as_png()?;
// Create a unique temporary file with a .png suffix to avoid collisions.
let tmp = Builder::new()
.prefix("codex-clipboard-")
.suffix(".png")
.tempfile()
.map_err(|e| PasteImageError::IoError(e.to_string()))?;
std::fs::write(tmp.path(), &png).map_err(|e| PasteImageError::IoError(e.to_string()))?;
// Persist the file (so it remains after the handle is dropped) and return its PathBuf.
let (_file, path) = tmp
.keep()
.map_err(|e| PasteImageError::IoError(e.error.to_string()))?;
Ok((path, info))
}
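// A minimal caller for the helper above (a sketch, not in the diff): print
// the persisted temp path and the image dimensions, or the paste error.
fn main() {
    match paste_image_to_temp_png() {
        Ok((path, info)) => println!(
            "pasted {}x{} {} image to {}",
            info.width,
            info.height,
            info.encoded_format.label(),
            path.display()
        ),
        Err(e) => eprintln!("paste failed: {e}"),
    }
}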


@@ -70,20 +70,6 @@ impl Frame<'_> {
/// Usually the area argument is the size of the current frame or a sub-area of the current
/// frame (which can be obtained using [`Layout`] to split the total area).
///
/// # Example
///
/// ```rust
/// # use ratatui::{backend::TestBackend, Terminal};
/// # let backend = TestBackend::new(5, 5);
/// # let mut terminal = Terminal::new(backend).unwrap();
/// # let mut frame = terminal.get_frame();
/// use ratatui::{layout::Rect, widgets::Block};
///
/// let block = Block::new();
/// let area = Rect::new(0, 0, 5, 5);
/// frame.render_widget(block, area);
/// ```
///
/// [`Layout`]: crate::layout::Layout
pub fn render_widget<W: Widget>(&mut self, widget: W, area: Rect) {
widget.render(area, self.buffer);
@@ -93,22 +79,6 @@ impl Frame<'_> {
///
/// Usually the area argument is the size of the current frame or a sub-area of the current
/// frame (which can be obtained using [`Layout`] to split the total area).
///
/// # Example
///
/// ```rust
/// # #[cfg(feature = "unstable-widget-ref")] {
/// # use ratatui::{backend::TestBackend, Terminal};
/// # let backend = TestBackend::new(5, 5);
/// # let mut terminal = Terminal::new(backend).unwrap();
/// # let mut frame = terminal.get_frame();
/// use ratatui::{layout::Rect, widgets::Block};
///
/// let block = Block::new();
/// let area = Rect::new(0, 0, 5, 5);
/// frame.render_widget_ref(block, area);
/// # }
/// ```
#[allow(clippy::needless_pass_by_value)]
pub fn render_widget_ref<W: WidgetRef>(&mut self, widget: W, area: Rect) {
widget.render_ref(area, self.buffer);
@@ -122,24 +92,6 @@ impl Frame<'_> {
/// The last argument should be an instance of the [`StatefulWidget::State`] associated to the
/// given [`StatefulWidget`].
///
/// # Example
///
/// ```rust
/// # use ratatui::{backend::TestBackend, Terminal};
/// # let backend = TestBackend::new(5, 5);
/// # let mut terminal = Terminal::new(backend).unwrap();
/// # let mut frame = terminal.get_frame();
/// use ratatui::{
/// layout::Rect,
/// widgets::{List, ListItem, ListState},
/// };
///
/// let mut state = ListState::default().with_selected(Some(1));
/// let list = List::new(vec![ListItem::new("Item 1"), ListItem::new("Item 2")]);
/// let area = Rect::new(0, 0, 5, 5);
/// frame.render_stateful_widget(list, area, &mut state);
/// ```
///
/// [`Layout`]: crate::layout::Layout
pub fn render_stateful_widget<W>(&mut self, widget: W, area: Rect, state: &mut W::State)
where
@@ -156,26 +108,6 @@ impl Frame<'_> {
///
/// The last argument should be an instance of the [`StatefulWidgetRef::State`] associated to
/// the given [`StatefulWidgetRef`].
///
/// # Example
///
/// ```rust
/// # #[cfg(feature = "unstable-widget-ref")] {
/// # use ratatui::{backend::TestBackend, Terminal};
/// # let backend = TestBackend::new(5, 5);
/// # let mut terminal = Terminal::new(backend).unwrap();
/// # let mut frame = terminal.get_frame();
/// use ratatui::{
/// layout::Rect,
/// widgets::{List, ListItem, ListState},
/// };
///
/// let mut state = ListState::default().with_selected(Some(1));
/// let list = List::new(vec![ListItem::new("Item 1"), ListItem::new("Item 2")]);
/// let area = Rect::new(0, 0, 5, 5);
/// frame.render_stateful_widget_ref(list, area, &mut state);
/// # }
/// ```
#[allow(clippy::needless_pass_by_value)]
pub fn render_stateful_widget_ref<W>(&mut self, widget: W, area: Rect, state: &mut W::State)
where
@@ -216,17 +148,6 @@ impl Frame<'_> {
/// This count is particularly useful when dealing with dynamic content or animations where the
/// state of the display changes over time. By tracking the frame count, developers can
/// synchronize updates or changes to the content with the rendering process.
///
/// # Examples
///
/// ```rust
/// # use ratatui::{backend::TestBackend, Terminal};
/// # let backend = TestBackend::new(5, 5);
/// # let mut terminal = Terminal::new(backend).unwrap();
/// # let mut frame = terminal.get_frame();
/// let current_count = frame.count();
/// println!("Current frame count: {}", current_count);
/// ```
pub const fn count(&self) -> usize {
self.count
}
@@ -277,19 +198,6 @@ where
B: Backend,
{
/// Creates a new [`Terminal`] with the given [`Backend`] and [`TerminalOptions`].
///
/// # Example
///
/// ```rust
/// use std::io::stdout;
///
/// use ratatui::{backend::CrosstermBackend, layout::Rect, Terminal, TerminalOptions, Viewport};
///
/// let backend = CrosstermBackend::new(stdout());
/// let viewport = Viewport::Fixed(Rect::new(0, 0, 10, 10));
/// let terminal = Terminal::with_options(backend, TerminalOptions { viewport })?;
/// # std::io::Result::Ok(())
/// ```
pub fn with_options(mut backend: B) -> io::Result<Self> {
let screen_size = backend.size()?;
let cursor_pos = backend.get_cursor_position()?;
@@ -394,29 +302,6 @@ where
/// previous frame to determine what has changed, and only the changes are written to the
/// terminal. If the render callback does not fully render the frame, the terminal will not be
/// in a consistent state.
///
/// # Examples
///
/// ```
/// # let backend = ratatui::backend::TestBackend::new(10, 10);
/// # let mut terminal = ratatui::Terminal::new(backend)?;
/// use ratatui::{layout::Position, widgets::Paragraph};
///
/// // with a closure
/// terminal.draw(|frame| {
/// let area = frame.area();
/// frame.render_widget(Paragraph::new("Hello World!"), area);
/// frame.set_cursor_position(Position { x: 0, y: 0 });
/// })?;
///
/// // or with a function
/// terminal.draw(render)?;
///
/// fn render(frame: &mut ratatui::Frame) {
/// frame.render_widget(Paragraph::new("Hello World!"), frame.area());
/// }
/// # std::io::Result::Ok(())
/// ```
pub fn draw<F>(&mut self, render_callback: F) -> io::Result<()>
where
F: FnOnce(&mut Frame),
@@ -462,36 +347,6 @@ where
/// previous frame to determine what has changed, and only the changes are written to the
/// terminal. If the render function does not fully render the frame, the terminal will not be
/// in a consistent state.
///
/// # Examples
///
/// ```should_panic
/// # use ratatui::layout::Position;;
/// # let backend = ratatui::backend::TestBackend::new(10, 10);
/// # let mut terminal = ratatui::Terminal::new(backend)?;
/// use std::io;
///
/// use ratatui::widgets::Paragraph;
///
/// // with a closure
/// terminal.try_draw(|frame| {
/// let value: u8 = "not a number".parse().map_err(io::Error::other)?;
/// let area = frame.area();
/// frame.render_widget(Paragraph::new("Hello World!"), area);
/// frame.set_cursor_position(Position { x: 0, y: 0 });
/// io::Result::Ok(())
/// })?;
///
/// // or with a function
/// terminal.try_draw(render)?;
///
/// fn render(frame: &mut ratatui::Frame) -> io::Result<()> {
/// let value: u8 = "not a number".parse().map_err(io::Error::other)?;
/// frame.render_widget(Paragraph::new("Hello World!"), frame.area());
/// Ok(())
/// }
/// # io::Result::Ok(())
/// ```
pub fn try_draw<F, E>(&mut self, render_callback: F) -> io::Result<()>
where
F: FnOnce(&mut Frame) -> Result<(), E>,


@@ -212,7 +212,18 @@ fn render_patch_details(changes: &HashMap<PathBuf, FileChange>) -> Vec<RtLine<'s
move_path: _,
} => {
if let Ok(patch) = diffy::Patch::from_str(unified_diff) {
let mut is_first_hunk = true;
for h in patch.hunks() {
// Render a simple separator between non-contiguous hunks
// instead of diff-style @@ headers.
if !is_first_hunk {
out.push(RtLine::from(vec![
RtSpan::raw(" "),
RtSpan::styled("", style_dim()),
]));
}
is_first_hunk = false;
let mut old_ln = h.old_range().start();
let mut new_ln = h.new_range().start();
for l in h.lines() {
@@ -276,8 +287,7 @@ fn push_wrapped_diff_line(
// ("+"/"-" for inserts/deletes, or a space for context lines) so alignment
// stays consistent across all diff lines.
let gap_after_ln = SPACES_AFTER_LINE_NUMBER.saturating_sub(ln_str.len());
let first_prefix_cols = indent.len() + ln_str.len() + gap_after_ln;
let cont_prefix_cols = indent.len() + ln_str.len() + gap_after_ln;
let prefix_cols = indent.len() + ln_str.len() + gap_after_ln;
let mut first = true;
let (sign_opt, line_style) = match kind {
@@ -286,16 +296,14 @@ fn push_wrapped_diff_line(
DiffLineType::Context => (None, None),
};
let mut lines: Vec<RtLine<'static>> = Vec::new();
while !remaining_text.is_empty() {
let prefix_cols = if first {
first_prefix_cols
} else {
cont_prefix_cols
};
loop {
// Fit the content for the current terminal row:
// compute how many columns are available after the prefix, then split
// at a UTF-8 character boundary so this row's chunk fits exactly.
let available_content_cols = term_cols.saturating_sub(prefix_cols).max(1);
let available_content_cols = term_cols
.saturating_sub(if first { prefix_cols + 1 } else { prefix_cols })
.max(1);
let split_at_byte_index = remaining_text
.char_indices()
.nth(available_content_cols)
@@ -341,6 +349,9 @@ fn push_wrapped_diff_line(
}
lines.push(line);
}
if remaining_text.is_empty() {
break;
}
}
lines
}
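// The char-boundary split used above, isolated as a sketch (not in the diff).
// Columns are counted in characters here, matching the loop's use of
// `char_indices().nth(...)`, so multi-byte characters are never sliced.
fn split_for_row(text: &str, cols: usize) -> (&str, &str) {
    let split_at = text
        .char_indices()
        .nth(cols)
        .map(|(byte_idx, _)| byte_idx)
        .unwrap_or(text.len());
    text.split_at(split_at)
}
fn main() {
    let (row, rest) = split_for_row("héllo wörld", 5);
    assert_eq!(row, "héllo"); // 5 characters, 6 bytes
    assert_eq!(rest, " wörld");
}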
@@ -430,4 +441,72 @@ mod tests {
// Render into a small terminal to capture the visual layout
snapshot_lines("wrap_behavior_insert", lines, DEFAULT_WRAP_COLS + 10, 8);
}
#[test]
fn ui_snapshot_single_line_replacement_counts() {
// Reproduce: one deleted line replaced by one inserted line, no extra context
let original = "# Codex CLI (Rust Implementation)\n";
let modified = "# Codex CLI (Rust Implementation) banana\n";
let patch = diffy::create_patch(original, modified).to_string();
let mut changes: HashMap<PathBuf, FileChange> = HashMap::new();
changes.insert(
PathBuf::from("README.md"),
FileChange::Update {
unified_diff: patch,
move_path: None,
},
);
let lines =
create_diff_summary("proposed patch", &changes, PatchEventType::ApprovalRequest);
snapshot_lines("single_line_replacement_counts", lines, 80, 8);
}
#[test]
fn ui_snapshot_blank_context_line() {
// Ensure a hunk that includes a blank context line at the beginning is rendered visibly
let original = "\nY\n";
let modified = "\nY changed\n";
let patch = diffy::create_patch(original, modified).to_string();
let mut changes: HashMap<PathBuf, FileChange> = HashMap::new();
changes.insert(
PathBuf::from("example.txt"),
FileChange::Update {
unified_diff: patch,
move_path: None,
},
);
let lines =
create_diff_summary("proposed patch", &changes, PatchEventType::ApprovalRequest);
snapshot_lines("blank_context_line", lines, 80, 10);
}
#[test]
fn ui_snapshot_vertical_ellipsis_between_hunks() {
// Create a patch with two separate hunks to ensure we render the vertical ellipsis (⋮)
let original =
"line 1\nline 2\nline 3\nline 4\nline 5\nline 6\nline 7\nline 8\nline 9\nline 10\n";
let modified = "line 1\nline two changed\nline 3\nline 4\nline 5\nline 6\nline 7\nline 8\nline nine changed\nline 10\n";
let patch = diffy::create_patch(original, modified).to_string();
let mut changes: HashMap<PathBuf, FileChange> = HashMap::new();
changes.insert(
PathBuf::from("example.txt"),
FileChange::Update {
unified_diff: patch,
move_path: None,
},
);
let lines =
create_diff_summary("proposed patch", &changes, PatchEventType::ApprovalRequest);
// Height is large enough to show both hunks and the separator
snapshot_lines("vertical_ellipsis_between_hunks", lines, 80, 16);
}
}


@@ -9,13 +9,7 @@ pub(crate) fn escape_command(command: &[String]) -> String {
pub(crate) fn strip_bash_lc_and_escape(command: &[String]) -> String {
match command {
// exactly three items
[first, second, third]
// first two must be "bash", "-lc"
if first == "bash" && second == "-lc" =>
{
third.clone() // borrow `third`
}
[first, second, third] if first == "bash" && second == "-lc" => third.clone(),
_ => escape_command(command),
}
}
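// Illustrative check (a sketch, not in the diff): the exact three-element
// `bash -lc <script>` form unwraps to its script; other shapes fall through
// to `escape_command`.
#[test]
fn unwraps_bash_lc_example() {
    let wrapped = vec!["bash".to_string(), "-lc".to_string(), "echo done".to_string()];
    assert_eq!(strip_bash_lc_and_escape(&wrapped), "echo done");
}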

Some files were not shown because too many files have changed in this diff.