Compare commits

...

102 Commits

Author SHA1 Message Date
Michael Bolin
7a3e4b607f fix: add bazelisk and dotslash to Dockerfile for devcontainer 2026-01-08 22:48:51 -08:00
Michael Bolin
d3ff668f68 fix: remove existing process hardening from Codex CLI (#8951)
As explained in https://github.com/openai/codex/issues/8945 and
https://github.com/openai/codex/issues/8472, there are legitimate cases
where users expect processes spawned by Codex to inherit environment
variables such as `LD_LIBRARY_PATH` and `DYLD_LIBRARY_PATH`, where
failing to do so can cause significant performance issues.

This PR removes the use of
`codex_process_hardening::pre_main_hardening()` in Codex CLI (which was
added not in response to a known security issue, but because it seemed
like a prudent thing to do from a security perspective:
https://github.com/openai/codex/pull/4521), but we will continue to use
it in `codex-responses-api-proxy`. At some point, we probably want to
introduce a slightly different version of
`codex_process_hardening::pre_main_hardening()` in Codex CLI that
excludes said environment variables from the Codex process itself, but
continues to propagate them to subprocesses.
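
A minimal sketch of what that future variant might look like, with hypothetical helper names (`capture_preserved_vars` and `spawn_with_preserved` are not real Codex APIs):

```rust
use std::collections::HashMap;
use std::process::{Child, Command};

// Hypothetical: variables hardening would scrub from the Codex process
// itself while still propagating them to spawned subprocesses.
const PRESERVED_VARS: &[&str] = &["LD_LIBRARY_PATH", "DYLD_LIBRARY_PATH"];

// Capture the values before hardening removes them from our own environment.
fn capture_preserved_vars() -> HashMap<String, String> {
    PRESERVED_VARS
        .iter()
        .filter_map(|name| std::env::var(name).ok().map(|v| (name.to_string(), v)))
        .collect()
}

// Re-inject the captured values into a child so it inherits them as expected.
fn spawn_with_preserved(
    mut cmd: Command,
    preserved: &HashMap<String, String>,
) -> std::io::Result<Child> {
    for (name, value) in preserved {
        cmd.env(name, value);
    }
    cmd.spawn()
}
```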
2026-01-08 21:19:34 -08:00
Ahmed Ibrahim
81caee3400 Add 5s timeout to models list call + integration test (#8942)
- Enforce a 5s timeout around the remote models refresh to avoid hanging
/models calls.
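
A minimal sketch of the pattern, assuming a hypothetical `fetch_remote_models()` in place of the real refresh call:

```rust
use std::time::Duration;
use tokio::time::timeout;

// Hypothetical stand-in for the real remote models refresh.
async fn fetch_remote_models() -> anyhow::Result<Vec<String>> {
    Ok(vec!["gpt-5.2-codex".to_string()])
}

// Bound the refresh at 5s; on timeout or error, return None so the caller
// can fall back to cached models instead of hanging the /models call.
async fn refresh_models_with_timeout() -> Option<Vec<String>> {
    match timeout(Duration::from_secs(5), fetch_remote_models()).await {
        Ok(Ok(models)) => Some(models),
        _ => None,
    }
}
```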
2026-01-08 18:06:10 -08:00
Thibault Sottiaux
51dd5af807 fix: treat null MCP resource args as empty (#8917)
Handle null tool arguments in the MCP resource handler so optional
resource tools accept null without failing, preserving normal JSON
parsing for non-null payloads and improving robustness when models emit
null; this avoids spurious argument parse errors for list/read MCP
resource calls.
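
A minimal sketch of the normalization, using serde_json types (the function name is illustrative):

```rust
use serde_json::{Map, Value};

// Null or absent arguments become an empty object; anything else must be a
// JSON object, preserving normal parsing for non-null payloads.
fn normalize_tool_args(args: Option<Value>) -> Result<Map<String, Value>, String> {
    match args {
        None | Some(Value::Null) => Ok(Map::new()),
        Some(Value::Object(map)) => Ok(map),
        Some(other) => Err(format!("expected object arguments, got {other}")),
    }
}
```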
2026-01-08 17:47:02 -08:00
iceweasel-oai
6372ba9d5f Elevated sandbox NUX (#8789)
Elevated Sandbox NUX:

* prompt for elevated sandbox setup when agent mode is selected (via
/approvals or at startup)
* prompt for degraded sandbox if elevated setup is declined or fails
* introduce /elevate-sandbox command to upgrade from degraded
experience.
2026-01-08 16:23:06 -08:00
Michael Bolin
bdfdebcfa1 fix: increase timeout for wait_for_event() for Bazel (#8946)
This seems to be necessary to get the Bazel builds on ARM Linux to go
green on https://github.com/openai/codex/pull/8875.

I don't feel great about timeout-whack-a-mole, but we're still learning
here...
2026-01-08 15:37:46 -08:00
pakrym-oai
62a73b6d58 Attempt to reload auth as a step in 401 recovery (#8880)
When authentication fails, first attempt to reload the auth from file
and then attempt to refresh it.
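
A minimal sketch of the recovery order, with assumed method names on a stand-in AuthManager:

```rust
use anyhow::Result;

// Stand-in for the real AuthManager; method names here are assumptions.
struct AuthManager;

impl AuthManager {
    async fn reload_from_disk(&self) -> Result<bool> {
        Ok(false) // true if the auth file held newer tokens than memory
    }

    async fn refresh_token(&self) -> Result<()> {
        Ok(())
    }

    // On a 401: another process may already have refreshed the tokens on
    // disk, so re-read the file first and only then hit the network.
    async fn recover_from_401(&self) -> Result<()> {
        if self.reload_from_disk().await? {
            return Ok(());
        }
        self.refresh_token().await
    }
}
```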
2026-01-08 15:06:44 -08:00
Celia Chen
be4364bb80 [chore] move app server tests from chat completion to responses (#8939)
We are deprecating chat completions. Move all app server tests from chat
completion to responses.
2026-01-08 22:27:55 +00:00
Ahmed Ibrahim
0d3e673019 remove get_responses_requests and get_responses_request_bodies to use in-place matcher (#8858) 2026-01-08 13:57:48 -08:00
Anton Panasenko
41a317321d feat: fork conversation/thread (#8866)
## Summary
- add thread/conversation fork endpoints to the protocol (v1 + v2)
- implement fork handling in app-server using thread manager and config
overrides
- add fork coverage in app-server tests and document `thread/fork` usage
2026-01-08 12:54:20 -08:00
Celia Chen
051bf81df9 [fix] app server flaky send_messages test (#8874)
Fix flakiness of CI test:
https://github.com/openai/codex/actions/runs/20350530276/job/58473691434?pr=8282

This PR does two things:
1. move the flakiness test to use responses API instead of chat
completion API
2. make mcp_process agnostic to the order of
responses/notifications/requests that come in, by buffering messages not
read
2026-01-08 20:41:21 +00:00
Michael Bolin
a70f5b0b3c fix: correct login shell mismatch in the accept_elicitation_for_prompt_rule() test (#8931)
Because the path to `git` is used to construct `elicitations_to_accept`,
we need to ensure that we resolve which `git` to use the same way our
Bash process will:


c9c6560685/codex-rs/exec-server/tests/suite/accept_elicitation.rs (L59-L69)

This fixes an issue when running the test on macOS using Bazel
(https://github.com/openai/codex/pull/8875) where the login shell chose
`/opt/homebrew/bin/git` whereas the non-login shell chose
`/usr/bin/git`.
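
A sketch of resolving `git` the same way the Bash process will, via a login shell (the paths in the comment are the examples from above):

```rust
use std::process::Command;

// Ask a login shell which `git` it would exec, matching what `bash -l`
// resolves (e.g. /opt/homebrew/bin/git vs /usr/bin/git on macOS).
fn git_path_via_login_shell() -> Option<String> {
    let output = Command::new("bash")
        .args(["-lc", "command -v git"])
        .output()
        .ok()?;
    output
        .status
        .success()
        .then(|| String::from_utf8_lossy(&output.stdout).trim().to_string())
}
```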
2026-01-08 12:37:38 -08:00
Michael Bolin
224c4867dd fix: increase timeout for tests that have been flaking with timeout issues (#8932)
I have seen this test flake out sometimes when running the macOS build
using Bazel in CI: https://github.com/openai/codex/pull/8875. Perhaps
Bazel runs with greater parallelism, inducing a heavier load, causing an
issue?
2026-01-08 20:31:03 +00:00
jif-oai
c9c6560685 nit: parse_arguments (#8927) 2026-01-08 19:49:17 +00:00
pakrym-oai
634764ece9 Immutable CodexAuth (#8857)
Historically we started with a CodexAuth that knew how to refresh its
own tokens and then added AuthManager that did a different kind of
refresh (re-reading from disk).

I don't think it makes sense for both `CodexAuth` and `AuthManager` to
be mutable and contain behaviors.

Move all refresh logic into `AuthManager` and keep `CodexAuth` as a data
object.
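
A minimal sketch of the resulting shape (field names are assumptions), with `CodexAuth` as immutable data and all mutation behind `AuthManager`:

```rust
use std::sync::RwLock;

// Plain data object: no refresh logic, freely cloneable snapshots.
#[derive(Clone)]
struct CodexAuth {
    access_token: String,
}

// Owns the single mutable slot; all refresh behavior lives here.
struct AuthManager {
    current: RwLock<CodexAuth>,
}

impl AuthManager {
    // Hand out an immutable snapshot of the current auth state.
    fn auth(&self) -> CodexAuth {
        self.current.read().expect("auth lock").clone()
    }

    // A refresh (from disk or network) replaces the snapshot wholesale.
    fn install(&self, fresh: CodexAuth) {
        *self.current.write().expect("auth lock") = fresh;
    }
}
```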
2026-01-08 11:43:56 -08:00
Felipe Petroski Such
5bc3e325a6 add tooltip hint for shell commands (!) (#8926)
I didn't know this existed because it's not listed in the hints.
2026-01-08 19:31:20 +00:00
gt-oai
4156060416 Add read-only when backfilling requirements from managed_config (#8913)
When a user has a managed_config which doesn't specify read-only, Codex
fails to launch.
2026-01-08 11:27:46 -08:00
Thibault Sottiaux
98122cbad0 fix: preserve core env vars on Windows (#8897)
This updates core shell environment policy handling to match Windows
case-insensitive variable names and adds a Windows-only regression test,
so Path/TEMP are no longer dropped when inherit=core.
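
A minimal sketch of the case-insensitive match (the allowlist contents here are assumptions):

```rust
// Windows treats env var names case-insensitively, so "Path" must match
// the "PATH" entry in the core allowlist under inherit=core.
fn is_core_var(name: &str) -> bool {
    const CORE_VARS: &[&str] = &["PATH", "TEMP", "TMP", "USERPROFILE"];
    CORE_VARS.iter().any(|core| core.eq_ignore_ascii_case(name))
}

fn main() {
    assert!(is_core_var("Path")); // previously dropped on Windows
    assert!(is_core_var("temp"));
    assert!(!is_core_var("LD_PRELOAD"));
}
```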
2026-01-08 10:36:36 -08:00
github-actions[bot]
7b21b443bb Update models.json (#8792)
Automated update of models.json.

Co-authored-by: aibrahim-oai <219906144+aibrahim-oai@users.noreply.github.com>
2026-01-08 10:26:01 -08:00
gt-oai
93dec9045e otel test: retry WouldBlock errors (#8915)
This test looks flaky on Windows:

```
        FAIL [   0.034s] (1442/2802) codex-otel::tests suite::otlp_http_loopback::otlp_http_exporter_sends_metrics_to_collector
  stdout ───

    running 1 test
    test suite::otlp_http_loopback::otlp_http_exporter_sends_metrics_to_collector ... FAILED

    failures:

    failures:
        suite::otlp_http_loopback::otlp_http_exporter_sends_metrics_to_collector

    test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 14 filtered out; finished in 0.02s
    
  stderr ───
    Error: ProviderShutdown { source: InternalFailure("[InternalFailure(\"Failed to shutdown\")]") }

────────────
     Summary [ 175.360s] 2802 tests run: 2801 passed, 1 failed, 15 skipped
        FAIL [   0.034s] (1442/2802) codex-otel::tests suite::otlp_http_loopback::otlp_http_exporter_sends_metrics_to_collector
```
2026-01-08 18:18:49 +00:00
jif-oai
69898e3dba clean: all history cloning (#8916) 2026-01-08 18:17:18 +00:00
Celia Chen
c4af304c77 [fix] app server flaky thread/resume tests (#8870)
Fix flakiness of CI tests:
https://github.com/openai/codex/actions/runs/20350530276/job/58473691443?pr=8282

This PR does two things:
1. test with responses API instead of chat completions API in
thread_resume tests;
2. have a new responses API fixture that mocks out arbitrary numbers of
responses API calls (including no calls) and have the same repeated
response.

Tested by CI
2026-01-08 10:17:05 -08:00
jif-oai
5b7707dfb1 feat: add list loaded threads to app server (#8902) 2026-01-08 17:48:20 +00:00
Michael Bolin
59d6937550 fix: reduce duplicate include_str!() calls (#8914) 2026-01-08 17:20:41 +00:00
gt-oai
932a5a446f config requirements: improve requirement error messages (#8843)
**Before:**
```
Error loading configuration: value `Never` is not in the allowed set [OnRequest]
```

**After:**
```
Error loading configuration: invalid value for `approval_policy`: `Never` is not in the
allowed set [OnRequest] (set by MDM com.openai.codex:requirements_toml_base64)
```

Done by introducing a new struct `ConfigRequirementsWithSources` onto
which we `merge_unset_fields` now. Also introduces a pair of requirement
value and its `RequirementSource` (inspired by `ConfigLayerSource`):

```rust
pub struct Sourced<T> {
    pub value: T,
    pub source: RequirementSource,
}
```
2026-01-08 16:11:14 +00:00
zbarsky-openai
484f6f4c26 gitignore bazel-* (#8911)
QoL improvement so we don't accidentally add these dirs while we
prototype bazel things
2026-01-08 07:50:58 -08:00
jif-oai
5522663f92 feat: add a few metrics (#8910) 2026-01-08 15:39:57 +00:00
jif-oai
98e171258c nit: drop unused function call error (#8903) 2026-01-08 15:07:30 +00:00
jif-oai
da667b1f56 chore: drop useless interaction_input (#8907) 2026-01-08 15:01:07 +00:00
Michael Bolin
1e29774fce fix: leverage codex_utils_cargo_bin() in codex-rs/core/tests/suite (#8887)
This eliminates our dependency on the `escargot` crate and better
prepares us for Bazel builds: https://github.com/openai/codex/pull/8875.
2026-01-08 14:56:16 +00:00
Denis Andrejew
9ce6bbc43e Avoid setpgid for inherited stdio on macOS (#8691)
## Summary
- avoid setting a new process group when stdio is inherited (keeps child
in foreground PG)
- keep process-group isolation when stdio is redirected so killpg
cleanup still works
- prevents macOS job-control SIGTTIN stops that look like hangs after
output
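
A minimal sketch of the condition described above, using std's `process_group` (which performs `setpgid(0, 0)` in the child):

```rust
use std::process::Command;

// Only move the child into its own process group when stdio is redirected,
// so killpg-based cleanup still works. With inherited stdio the child stays
// in the foreground group, avoiding macOS job-control SIGTTIN stops.
#[cfg(unix)]
fn configure_process_group(cmd: &mut Command, stdio_redirected: bool) {
    use std::os::unix::process::CommandExt;
    if stdio_redirected {
        cmd.process_group(0);
    }
}
```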

## Testing
- `cargo build -p codex-cli`
- `GIT_CONFIG_GLOBAL=/dev/null GIT_CONFIG_NOSYSTEM=1
CARGO_BIN_EXE_codex=/Users/denis/Code/codex/codex-rs/target/debug/codex
/opt/homebrew/bin/timeout 30m cargo test -p codex-core -p codex-exec`

## Context
This fixes macOS sandbox hangs for commands like `elixir -v` / `erl
-noshell`, where the child was moved into a new process group while
still attached to the controlling TTY. See issue #8690.

## Authorship & collaboration
- This change and analysis were authored by **Codex** (AI coding agent).
- Human collaborator: @seeekr provided repro environment, context, and
review guidance.
- CLI used: `codex-cli 0.77.0`.
- Model: `gpt-5.2-codex (xhigh)`.

Co-authored-by: Eric Traut <etraut@openai.com>
2026-01-08 07:50:40 -07:00
Michael Bolin
7520d8ba58 fix: leverage find_resource! macro in load_sse_fixture_with_id (#8888)
This helps prepare us for Bazel builds:
https://github.com/openai/codex/pull/8875.
2026-01-08 09:34:05 -05:00
jif-oai
0318f30ed8 chore: add small debug client (#8894)
Small debug client, do not use in production
2026-01-08 13:40:14 +00:00
Thibault Sottiaux
be212db0c8 fix: include project instructions in /review subagent (#8899)
Include project-level AGENTS.md and skills in /review sessions so the
review sub-agent uses the same instruction pipeline as standard runs,
keeping reviewer context aligned with normal sessions.
2026-01-08 13:31:01 +00:00
Thibault Sottiaux
5b022c2904 chore: align error limit comment (#8896) 2026-01-08 13:30:33 +00:00
jif-oai
e21ce6c5de chore: drop metrics exporter config (#8892)
Dropped for now as enterprises should not be able to use it
2026-01-08 13:20:18 +00:00
Thibault Sottiaux
267c05fb30 fix: stabilize list_dir pagination order (#8826)
Sort list_dir entries before applying offset/limit so pagination matches
the displayed order, update pagination/truncation expectations, and add
coverage for sorted pagination. This ensures stable, predictable
directory pages when list_dir is enabled.
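
A minimal sketch of the ordering fix (the offset/limit parameters are illustrative):

```rust
use std::fs;
use std::io;
use std::path::Path;

// Sort before paginating: readdir order is filesystem-dependent, so sorting
// first makes each page stable and consistent with the displayed order.
fn list_dir_page(dir: &Path, offset: usize, limit: usize) -> io::Result<Vec<String>> {
    let mut names: Vec<String> = fs::read_dir(dir)?
        .filter_map(|entry| entry.ok())
        .map(|entry| entry.file_name().to_string_lossy().into_owned())
        .collect();
    names.sort();
    Ok(names.into_iter().skip(offset).take(limit).collect())
}
```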
2026-01-08 03:51:47 -08:00
jif-oai
634650dd25 feat: metrics capabilities (#8318)
Add metrics capabilities to Codex. The `README.md` is up to date.

This will not be merged with the metrics before this PR of course:
https://github.com/openai/codex/pull/8350
2026-01-08 11:47:36 +00:00
jif-oai
8a0c2e5841 chore: add list thread ids on manager (#8855) 2026-01-08 10:53:58 +00:00
Dylan Hurd
0f8bb4579b fix: windows can now paste non-ascii multiline text (#8774)
## Summary
This PR builds _heavily_ on the work from @occurrent in #8021 - I've
only added a small fix, added additional tests, and propagated the
changes to tui2.

From the original PR:

> On Windows, Codex relies on PasteBurst for paste detection because
bracketed paste is not reliably available via crossterm.
> 
> When pasted content starts with non-ASCII characters, input is routed
through handle_non_ascii_char, which bypasses the normal paste burst
logic. This change extends the paste burst window for that path, which
should ensure that Enter is correctly grouped as part of the paste.


## Testing
- [x] tested locally cross-platform
- [x] added regression tests

---------

Co-authored-by: occur <occurring@outlook.com>
2026-01-07 23:21:49 -08:00
Michael Bolin
35fd69a9f0 fix: make the find_resource! macro responsible for the absolutize() call (#8884)
https://github.com/openai/codex/pull/8879 introduced the
`find_resource!` macro, but now that I am about to use it in more
places, I realize that it should take care of this normalization case
for callers.

Note the `use $crate::path_absolutize::Absolutize;` line is there so
that users of `find_resource!` do not have to explicitly include
`path-absolutize` to their own `Cargo.toml`.
2026-01-07 23:03:43 -08:00
iceweasel-oai
ccba737d26 add ability to disable input temporarily in the TUI. (#8876)
We will disable input while the elevated sandbox setup is running.
2026-01-07 20:56:48 -08:00
xl-openai
75076aabfe Support UserInput::Skill in V2 API. (#8864)
Allow client to specify explicit skill invocation in v2 API.
2026-01-07 18:26:35 -08:00
Michael Bolin
f6b563ec64 feat: introduce find_resource! macro that works with Cargo or Bazel (#8879)
To support Bazelification in https://github.com/openai/codex/pull/8875,
this PR introduces a new `find_resource!` macro that we use in place of
our existing logic in tests that looks for resources relative to the
compile-time `CARGO_MANIFEST_DIR` env var.

To make this work, we plan to add the following to all `rust_library()`
and `rust_test()` Bazel rules in the project:

```
rustc_env = {
    "BAZEL_PACKAGE": native.package_name(),
},
```

Our new `find_resource!` macro reads this value via
`option_env!("BAZEL_PACKAGE")` so that the Bazel package _of the code
using `find_resource!`_ is injected into the code expanded from the
macro. (If `find_resource()` were a function, then
`option_env!("BAZEL_PACKAGE")` would always be
`codex-rs/utils/cargo-bin`, which is not what we want.)

Note we only consider the `BAZEL_PACKAGE` value when the `RUNFILES_DIR`
environment variable is set at runtime, indicating that the test is
being run by Bazel. In this case, we have to concatenate the runtime
`RUNFILES_DIR` with the compile-time `BAZEL_PACKAGE` value to build the
path to the resource.
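
A minimal sketch of that resolution order; the real macro lives in `codex-utils-cargo-bin`, and this only shows the shape of the idea:

```rust
// Must be a macro so option_env!("BAZEL_PACKAGE") and
// env!("CARGO_MANIFEST_DIR") expand in the *caller's* crate.
macro_rules! find_resource {
    ($rel:expr) => {{
        match (std::env::var_os("RUNFILES_DIR"), option_env!("BAZEL_PACKAGE")) {
            // Running under Bazel: runtime runfiles root + compile-time package.
            (Some(runfiles), Some(pkg)) => {
                std::path::PathBuf::from(runfiles).join(pkg).join($rel)
            }
            // Running under Cargo: resolve relative to the caller's manifest.
            _ => std::path::PathBuf::from(env!("CARGO_MANIFEST_DIR")).join($rel),
        }
    }};
}
```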

In testing this change, I discovered one funky edge case in
`codex-rs/exec-server/tests/common/lib.rs` where we have to _normalize_
(but not canonicalize!) the result from `find_resource!` because the
path contains a `common/..` component that does not exist on disk when
the test is run under Bazel, so it must be semantically normalized using
the [`path-absolutize`](https://crates.io/crates/path-absolutize) crate
before it is passed to `dotslash fetch`.

Because this new behavior may be non-obvious, this PR also updates
`AGENTS.md` to make humans/Codex aware that this API is preferred.
2026-01-07 18:06:08 -08:00
iceweasel-oai
357e4c902b add footer note to TUI (#8867)
This will be used by the elevated sandbox NUX to give a hint on how to
run the elevated sandbox when in the non-elevated mode.
2026-01-07 16:44:28 -08:00
Michael Bolin
ef8b8ebc94 fix: use tokio for I/O in an async function (#8868)
I thought this might solve a bug I'm working on, but it turned out to be
a red herring. Nevertheless, this seems like the right thing to do here.
2026-01-07 16:36:23 -08:00
Michael Bolin
54b290ec1d fix: update resource path resolution logic so it works with Bazel (#8861)
The Bazelification work in-flight over at
https://github.com/openai/codex/pull/8832 needs this fix so that Bazel
can find the path to the DotSlash file for `bash`.

With this change, the following almost works:

```
bazel test --test_output=errors //codex-rs/exec-server:exec-server-all-test
```

That is, now the `list_tools` test passes, but
`accept_elicitation_for_prompt_rule` still fails because it runs
Seatbelt itself, so it needs to be run outside Bazel's local sandboxing.
2026-01-07 22:33:05 +00:00
Shijie Rao
efd0c21b9b Feat: appServer.requirementList for requirement.toml (#8800)
### Summary
We are exposing requirements via `requirement/list` method from
app-server so that we can conditionally disable the agent mode dropdown
selection in VSCE and correctly setting the default value.

### Sample output
#### `etc/codex/requirements.toml`
<img width="497" height="49" alt="Screenshot 2026-01-06 at 11 32 06 PM"
src="https://github.com/user-attachments/assets/fbd9402e-515f-4b9e-a158-2abb23e866a0"
/>

#### App server response
<img width="1107" height="79" alt="Screenshot 2026-01-06 at 11 30 18 PM"
src="https://github.com/user-attachments/assets/c0d669cd-54ef-4789-a26c-adb2c41950af"
/>
2026-01-07 13:57:44 -08:00
xl-openai
61e81af887 Support symlink for skills discovery. (#8801)
Skills discovery now follows symlink entries for SkillScope::User
($CODEX_HOME/skills) and SkillScope::Admin (e.g. /etc/codex/skills).

Added cycle protection: directories are canonicalized and tracked in a
visited set to prevent infinite traversal from circular links.

Added per-root traversal limits to avoid accidentally scanning huge
trees:
- max depth: 6
- max directories: 2000 (logs a warning if truncated)

For now, symlink stat failures and traversal truncation are logged
rather than surfaced as UI “invalid SKILL.md” warnings.
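
A minimal sketch of the traversal guards (limits per the description above; the directory-count cap is omitted for brevity):

```rust
use std::collections::HashSet;
use std::fs;
use std::path::{Path, PathBuf};

const MAX_DEPTH: usize = 6;

// Follow symlinked directories, but canonicalize each one into a visited
// set so circular links cannot cause infinite traversal.
fn walk_skills(dir: &Path, depth: usize, visited: &mut HashSet<PathBuf>, found: &mut Vec<PathBuf>) {
    if depth > MAX_DEPTH {
        return;
    }
    let Ok(canonical) = fs::canonicalize(dir) else {
        return; // stat failures are logged, not surfaced, per the change
    };
    if !visited.insert(canonical) {
        return; // already visited via another path: symlink cycle
    }
    let Ok(entries) = fs::read_dir(dir) else { return };
    for entry in entries.flatten() {
        let path = entry.path();
        if path.is_dir() {
            walk_skills(&path, depth + 1, visited, found);
        } else if path.file_name().is_some_and(|name| name == "SKILL.md") {
            found.push(path);
        }
    }
}
```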
2026-01-07 13:34:48 -08:00
gt-oai
f07b8aa591 Warn in /model if BASE_URL set (#8847)
<img width="763" height="349" alt="Screenshot 2026-01-07 at 18 37 59"
src="https://github.com/user-attachments/assets/569d01cb-ea91-4113-889b-ba74df24adaf"
/>

It may not make sense to use the `/model` menu with a custom
OPENAI_BASE_URL. But some model proxies may support it, so we shouldn't
disable it completely. A warning is a reasonable compromise.
2026-01-07 21:24:18 +00:00
darlingm
5f3f70203c Clarify YAML frontmatter formatting in skill-creator (#8610)
Fixes #8609

# Summary

Emphasize single-line name/description values and quoting when values
could be interpreted as YAML syntax.

# Testing

Not run (skill-only change).
2026-01-07 14:24:02 -07:00
Channing Conger
21c6d40a44 Add feature for optional request compression (#8767)
Adds a new feature, `enable_request_compression`, that compresses requests
to the codex-backend with zstd. Since it only applies to the codex-backend,
it takes effect only for OpenAI providers using `chatgpt::auth`, even when
the feature is enabled.

Also added a new info log line for evaluating the compression ratio and the
overhead of compressing before sending. You can enable it with
`RUST_LOG=$RUST_LOG,codex_client::transport=info`:

```
2026-01-06T00:09:48.272113Z  INFO codex_client::transport: Compressed request body with zstd pre_compression_bytes=28914 post_compression_bytes=11485 compression_duration_ms=0
```
2026-01-07 13:21:40 -08:00
Ahmed Ibrahim
a9b5e8a136 Simplify error management in run_turn (#8849) 2026-01-07 13:15:46 -08:00
Ahmed Ibrahim
187924d761 Override truncation policy at model info level (#8856)
We used to override truncation policy by comparing model info vs config
value in context manager. A better way to do it is to construct model
info using the config value
2026-01-07 13:06:20 -08:00
Owen Lin
66450f0445 fix: implement 'Allow this session' for apply_patch approvals (#8451)
**Summary**
This PR makes “ApprovalDecision::AcceptForSession / don’t ask again this
session” actually work for `apply_patch` approvals by caching approvals
based on absolute file paths in codex-core, properly wiring it through
app-server v2, and exposing the choice in both TUI and TUI2.
- This brings `apply_patch` calls to be at feature-parity with general
shell commands, which also have a "Yes, and don't ask again" option.
- This also fixes VSCE's "Allow this session" button to actually work.

While we're at it, also split the app-server v2 protocol's
`ApprovalDecision` enum so execpolicy amendments are only available for
command execution approvals.

**Key changes**
- Core: per-session patch approval allowlist keyed by absolute file
paths
- Handles multi-file patches and renames/moves by recording both source
and destination paths for `Update { move_path: Some(...) }`.
- Extend the `Approvable` trait and `ApplyPatchRuntime` to work with
multiple keys, because an `apply_patch` tool call can modify multiple
files. For a request to be auto-approved, we will need to check that all
file paths have been approved previously.
- App-server v2: honor AcceptForSession for file changes
- File-change approval responses now map AcceptForSession to
ReviewDecision::ApprovedForSession (no longer downgraded to plain
Approved).
- Replace `ApprovalDecision` with two enums:
`CommandExecutionApprovalDecision` and `FileChangeApprovalDecision`
- TUI / TUI2: expose “don’t ask again for these files this session”
- Patch approval overlays now include a third option (“Yes, and don’t
ask again for these files this session (s)”).
    - Snapshot updates for the approval modal.

**Tests added/updated**
- Core:
- Integration test that proves ApprovedForSession on a patch skips the
next patch prompt for the same file
- App-server:
- v2 integration test verifying
FileChangeApprovalDecision::AcceptForSession works properly

**User-visible behavior**
- When the user approves a patch “for session”, future patches touching
only those previously approved file(s) will no longer prompt again during
that session (both via app-server v2 and TUI/TUI2).
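
A minimal sketch of the allowlist idea (type and method names are illustrative, not the real codex-core API):

```rust
use std::collections::HashSet;
use std::path::PathBuf;

// Per-session allowlist keyed by absolute file paths. Renames/moves record
// both source and destination so future patches match either side.
#[derive(Default)]
struct SessionPatchApprovals {
    approved: HashSet<PathBuf>,
}

impl SessionPatchApprovals {
    fn record_approval(&mut self, paths: impl IntoIterator<Item = PathBuf>) {
        self.approved.extend(paths);
    }

    // Auto-approve only when *every* path the patch touches was previously
    // approved; a single new path means we must prompt again.
    fn is_auto_approved(&self, paths: &[PathBuf]) -> bool {
        !paths.is_empty() && paths.iter().all(|path| self.approved.contains(path))
    }
}
```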

**Manual testing**
Tested both TUI and TUI2 - see screenshots below.

TUI:
<img width="1082" height="355" alt="image"
src="https://github.com/user-attachments/assets/adcf45ad-d428-498d-92fc-1a0a420878d9"
/>


TUI2:
<img width="1089" height="438" alt="image"
src="https://github.com/user-attachments/assets/dd768b1a-2f5f-4bd6-98fd-e52c1d3abd9e"
/>
2026-01-07 20:11:12 +00:00
Celia Chen
e8421c761c [chore] update app server doc with skills (#8853) 2026-01-07 20:07:01 +00:00
jif-oai
fe460e0f9a chore: drop some deprecated (#8848) 2026-01-07 19:54:45 +00:00
jif-oai
1253d19641 chore: drop useless feature flags (#8850) 2026-01-07 19:54:32 +00:00
Ahmed Ibrahim
4c9b4b684f Fix app-server write_models_cache to treat models with a lower priority number as higher priority. (#8844)
Rank models with p0 higher than p1. This shouldn't result in any
behavioral changes. Just reordering.
2026-01-07 11:22:13 -08:00
pakrym-oai
018de994b0 Stop using AuthManager as the source of codex_home (#8846) 2026-01-07 18:56:20 +00:00
Ahmed Ibrahim
c31960b13a remove unnecessary todos (#8842)
> // todo(aibrahim): why are we passing model here while it can change?

we update it on each turn with `.with_model`

> //TODO(aibrahim): run CI in release mode.

although it's good to have, release builds take double the time tests
take.

> // todo(aibrahim): make this async function

we figured out another way of doing this synchronously
2026-01-07 10:43:10 -08:00
Ahmed Ibrahim
9179c9deac Merge ModelFamily into ModelInfo (#8763)
- Merge ModelFamily into ModelInfo
- Remove logic for adding instructions to apply patch
- Add compaction limit and visible context window to `ModelInfo`
2026-01-07 10:35:09 -08:00
Michael Bolin
a1e81180f8 fix: upgrade lru crate to 0.16.3 (#8845)
See https://rustsec.org/advisories/RUSTSEC-2026-0002.

Though our `ratatui` fork has a transitive dep on an older version of
the `lru` crate, so to get CI green ASAP, this PR also adds an exception
to `deny.toml` for `RUSTSEC-2026-0002`, but hopefully this will be
short-lived.
2026-01-07 10:11:27 -08:00
pakrym-oai
fedcb8f63c Move tests below auth manager (#8840)
To simplify future diffs
2026-01-07 17:36:44 +00:00
jif-oai
116059c3a0 chore: unify conversation with thread name (#8830)
Done and verified by Codex + refactor feature of RustRover
2026-01-07 17:04:53 +00:00
Thibault Sottiaux
0d788e6263 fix: handle early codex exec exit (#8825)
Fixes CodexExec to avoid missing early process exits by registering the
exit handler up front and deferring the error until after stdout is
drained, and adds a regression test that simulates a fast-exit child
while still producing output so hangs are caught.
2026-01-07 08:54:27 -08:00
jif-oai
4cef89a122 chore: rename unified exec sessions (#8822)
Renaming done by Codex
2026-01-07 16:12:47 +00:00
Thibault Sottiaux
124a09e577 fix: handle /review arguments in TUI (#8823)
Handle /review <instructions> in the TUI and TUI2 by routing it as a
custom review command instead of plain text, wiring command dispatch and
adding composer coverage so typing /review text starts a review directly
rather than posting a message. User impact: /review with arguments now
kicks off the review flow, previously it would just forward as a plain
command and not actually start a review.
2026-01-07 13:14:55 +00:00
Thibault Sottiaux
a59052341d fix: parse git apply paths correctly (#8824)
Fixes apply.rs path parsing so 
- quoted diff headers are tokenized and extracted correctly, 
- /dev/null headers are ignored before prefix stripping to avoid bogus
dev/null paths, and
- git apply output paths are unescaped from C-style quoting.

**Why**
This prevents potentially missed staging and misclassified paths when
applying or reverting patches, which could lead to incorrect behavior
for repos with spaces or escaped characters in filenames.

**Impact**
I checked and this is only used in the cloud tasks support and `codex
apply <task_id>` flow.
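
A minimal sketch of the C-style unquoting step (octal escapes omitted for brevity):

```rust
// git quotes unusual paths C-style, e.g. "a b\tc"; strip the quotes and
// decode the common escapes so downstream staging sees the real path.
fn unquote_git_path(raw: &str) -> String {
    let Some(inner) = raw.strip_prefix('"').and_then(|s| s.strip_suffix('"')) else {
        return raw.to_string(); // unquoted paths pass through unchanged
    };
    let mut out = String::new();
    let mut chars = inner.chars();
    while let Some(c) = chars.next() {
        if c != '\\' {
            out.push(c);
            continue;
        }
        match chars.next() {
            Some('t') => out.push('\t'),
            Some('n') => out.push('\n'),
            Some(other) => out.push(other), // covers \\ and \"
            None => out.push('\\'),
        }
    }
    out
}
```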
2026-01-07 13:00:31 +00:00
jif-oai
8372d61be7 chore: silent just fmt (#8820)
Done to avoid spammy warnings to end up in the model context without
having to switch to nightly
```
Warning: can't set `imports_granularity = Item`, unstable features are only available in nightly channel.
```
2026-01-07 12:16:38 +00:00
Thibault Sottiaux
230a045ac9 chore: stabilize core tool parallelism test (#8805)
Set login=false for the shell tool in the timing-based parallelism test
so it does not depend on slow user login shells, making the test
deterministic without user-facing changes. This prevents occasional
flakes when running locally.
2026-01-07 09:26:47 +00:00
charley-oai
3389465c8d Enable model upgrade popup even when selected model is no longer in picker (#8802)
With `config.toml`:
```
model = "gpt-5.1-codex"
```
(where `gpt-5.1-codex` has `show_in_picker: false` in
[`model_presets.rs`](https://github.com/openai/codex/blob/main/codex-rs/core/src/models_manager/model_presets.rs);
this happens if the user hasn't used codex in a while so they didn't see
the popup before their model was changed to `show_in_picker: false`)

The upgrade picker used to not show (because `gpt-5.1-codex` was
filtered out of the model list in code). Now, the filtering is done
downstream in tui and app-server, so the model upgrade popup shows:

<img width="1503" height="227" alt="Screenshot 2026-01-06 at 5 04 37 PM"
src="https://github.com/user-attachments/assets/26144cc2-0b3f-4674-ac17-e476781ec548"
/>
2026-01-06 19:32:27 -08:00
Thibault Sottiaux
8b4d27dfcd fix: truncate long approval prefixes when rendering (#8734)
Fixes inscrutable multiline approval requests:
<img width="686" height="844" alt="image"
src="https://github.com/user-attachments/assets/cf9493dc-79e6-4168-8020-0ef0fe676d5e"
/>
2026-01-06 15:17:01 -08:00
Michael Bolin
dc1a568dc7 fix: populate the release notes when the release is created (#8799)
Use the contents of the commit message from the commit associated with
the tag (that contains the version bump) as the release notes by writing
them to a file and then specifying the file as the `body_path` of
`softprops/action-gh-release@v2`.
2026-01-06 15:02:39 -08:00
sayan-oai
54ded1a3c0 add web_search_cached flag (#8795)
Add `web_search_cached` feature to config. Enables `web_search` tool
with access only to cached/indexed results (see
[docs](https://platform.openai.com/docs/guides/tools-web-search#live-internet-access)).

This takes precedence over the existing `web_search_request`, which
continues to enable `web_search` over live results as it did before.

`web_search_cached` is disabled for review mode, as `web_search_request`
is.
2026-01-06 14:53:59 -08:00
Celia Chen
11d4f3f45e [app-server] fix config loading for conversations (#8765)
Currently we don't load config properly for app server conversations.
see:
https://linear.app/openai/issue/CODEX-3956/config-flags-not-respected-in-codex-app-server.
This PR fixes that by respecting the config passed in.

Tested by running `cargo build -p codex-cli &&
RUST_LOG=codex_app_server=debug CODEX_BIN=target/debug/codex cargo run
-p codex-app-server-test-client -- \
--config
model_providers.mock_provider.base_url=\"http://localhost:4010/v2\" \
    --config model_provider=\"mock_provider\" \
    --config model_providers.mock_provider.name="hello" \
    send-message-v2 "hello"`
and verified that the mock_provider is called instead of the default
provider.

#closes
https://linear.app/openai/issue/CODEX-3956/config-flags-not-respected-in-codex-app-server

---------

Co-authored-by: Michael Bolin <mbolin@openai.com>
2026-01-06 22:02:17 +00:00
Owen Lin
8b7ec31ba7 feat(app-server): thread/rollback API (#8454)
Add `thread/rollback` to app-server to support IDEs undo-ing the last N
turns of a thread.

For context, an IDE partner will be supporting an "undo" capability
where the IDE (the app-server client) will be responsible for reverting
the local changes made during the last turn. To support this well, we
also need a way to drop the last turn (or more generally, the last N
turns) from the agent's context. This is what `thread/rollback` does.

**Core idea**: A Thread rollback is represented as a persisted event
message (EventMsg::ThreadRollback) in the rollout JSONL file, not by
rewriting history. On resume, both the model's context (core replay) and
the UI turn list (app-server v2's thread history builder) apply these
markers so the pruned history is consistent across live conversations
and `thread/resume`.

Implementation notes:
- Rollback only affects agent context and appends to the rollout file;
clients are responsible for reverting files on disk.
- If a thread rollback is currently in progress, subsequent
`thread/rollback` calls are rejected.
- Because we use `CodexConversation::submit` and codex core tracks
active turns, returning an error on concurrent rollbacks is communicated
via an `EventMsg::Error` with a new variant
`CodexErrorInfo::ThreadRollbackFailed`. app-server watches for that and
sends the BAD_REQUEST RPC response.

Tests cover thread rollbacks in both core and app-server, including when
`num_turns` > existing turns (which clears all turns).

**Note**: this explicitly does **not** behave like `/undo` which we just
removed from the CLI, which does the opposite of what `thread/rollback`
does. `/undo` reverts local changes via ghost commits/snapshots and does
not modify the agent's context / conversation history.
2026-01-06 21:23:48 +00:00
jif-oai
188f79afee feat: drop agent bus and store the agent status in codex directly (#8788) 2026-01-06 19:44:39 +00:00
Josh McKinney
a0b2d03302 Clear copy pill background and add snapshot test (#8777)
### Motivation
- Fix a visual bug where transcript text could bleed through the
on-screen copy "pill" overlay.
- Ensure the copy affordance fully covers the underlying buffer so the
pill background is solid and consistent with styling.
- Document the approach in-code to make the background-clearing
rationale explicit.

### Description
- Clear the pill area before drawing by iterating `Rect::positions()`
and calling `cell.set_symbol(" ")` and `cell.set_style(base_style)` in
`render_copy_pill` in `transcript_copy_ui.rs`.
- Added an explanatory comment for why the pill background is explicitly
cleared.
- Added a unit test `copy_pill_clears_background` and committed the
corresponding snapshot file to validate the rendering behavior.

### Testing
- Ran `just fmt` (formatting completed; non-blocking environment warning
may appear).
- Ran `just fix -p codex-tui2` to apply lints/fixes (completed). 
- Ran `cargo test -p codex-tui2` and all tests passed (snapshot updated
and tests succeeded).

------
[Codex
Task](https://chatgpt.com/codex/tasks/task_i_695c9b23e9b8832997d5a457c4d83410)
2026-01-06 11:21:26 -08:00
xl-openai
4ce9d0aa7b suppress popups while browsing input history (#8772) 2026-01-06 11:13:21 -08:00
jif-oai
1dd1355df3 feat: agent controller (#8783)
Added an agent control plane that lets sessions spawn or message other
conversations via `AgentControl`.

`AgentBus` (core/src/agent/bus.rs) keeps track of the last known status
of a conversation.

ConversationManager now holds shared state behind an Arc so AgentControl
keeps only a weak back-reference, the goal is just to avoid explicit
cycle reference.

Follow-ups:
* Build a small tool in the TUI to be able to see every agent and send
manual message to each of them
* Handle approval requests in this TUI
* Add tools to spawn/communicate between agents (see related design)
* Define agent types
2026-01-06 19:08:02 +00:00
Javi
915352b10c feat: add analytics config setting (#8350) 2026-01-06 19:04:13 +00:00
jif-oai
740bf0e755 chore: clear background terminals on interrupt (#8786) 2026-01-06 19:01:07 +00:00
jif-oai
d1c6329c32 feat: forced tool tips (#8752)
Force an announcement tooltip in the CLI. This queries the GitHub repo for this
[file](https://raw.githubusercontent.com/openai/codex/main/announcement_tip.toml)
which contains announcements in TOML looking like this:
```
# Example announcement tips for Codex TUI.
# Each [[announcements]] entry is evaluated in order; the last matching one is shown.
# Dates are UTC, formatted as YYYY-MM-DD. The from_date is inclusive and the to_date is exclusive.
# version_regex matches against the CLI version (env!("CARGO_PKG_VERSION")); omit to apply to all versions.
# target_app specifies which app should display the announcement (cli, vsce, ...).

[[announcements]]
content = "Welcome to Codex! Check out the new onboarding flow."
from_date = "2024-10-01"
to_date = "2024-10-15"
version_regex = "^0\\.0\\.0$"
target_app = "cli"
``` 

To make this efficient, the announcement is queried on a best-effort
basis at CLI launch (no refresh is made after this).
The query runs asynchronously, and we display the announcement (with 100%
probability) if and only if the announcement is available, the cache is
correctly warmed, and there is a matching announcement (matching is
recomputed for each new session).
2026-01-06 18:02:05 +00:00
Owen Lin
cab7136fb3 chore: add model/list call to app-server-test-client (#8331)
Allows us to run `cargo run -p codex-app-server-test-client --
model-list` to return the list of models over app-server.
2026-01-06 17:50:17 +00:00
jif-oai
32db8ea5ca feat: add head-tail buffer for unified_exec (#8735) 2026-01-06 15:48:44 +00:00
Abdelkader Boudih
06e21c7a65 fix: update model examples to gpt-5.2 (#8566)
The model examples are outdated and sometimes get used by GPT when it
tries to delegate.

I have read the CLA Document and I hereby sign the CLA
2026-01-06 08:47:29 -07:00
Michael Bolin
7ecd0dc9b3 fix: stop honoring CODEX_MANAGED_CONFIG_PATH environment variable in production (#8762) 2026-01-06 07:10:27 -08:00
jif-oai
8858012fd1 chore: emit unified exec begin only when PTY exist (#8780) 2026-01-06 13:12:54 +00:00
Thibault Sottiaux
6346e4f560 fix: fix readiness subscribe token wrap-around (#8770)
Fixes ReadinessFlag::subscribe to avoid handing out token 0 or duplicate
tokens on i32 wrap-around, adds regression tests, and prevents readiness
gates from getting stuck waiting on an unmarkable or mis-authorized
token.
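
A minimal sketch of the token-0 half of the fix (duplicate tracking against live subscribers is omitted):

```rust
use std::sync::atomic::{AtomicI32, Ordering};

struct ReadinessFlag {
    next_token: AtomicI32, // starts at 1; 0 is the "unset" sentinel
}

impl ReadinessFlag {
    // fetch_add wraps on overflow; skip 0 so no subscriber ever receives
    // the sentinel and gets stuck holding an unmarkable token.
    fn subscribe(&self) -> i32 {
        loop {
            let token = self.next_token.fetch_add(1, Ordering::Relaxed);
            if token != 0 {
                return token;
            }
        }
    }
}
```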
2026-01-06 13:09:02 +00:00
Josh McKinney
4c3d2a5bbe fix: render cwd-relative paths in tui (#8771)
Display paths relative to the cwd before checking git roots so view
image tool calls keep project-local names in jj/no-.git workspaces.
2026-01-06 03:17:40 +00:00
Josh McKinney
c92dbea7c1 tui2: stop baking streaming wraps; reflow agent markdown (#8761)
Background
Streaming assistant prose in tui2 was being rendered with viewport-width
wrapping during streaming, then stored in history cells as already split
`Line`s. Those width-derived breaks became indistinguishable from hard
newlines, so the transcript could not "un-split" on resize. This also
degraded copy/paste, since soft wraps looked like hard breaks.

What changed
- Introduce width-agnostic `MarkdownLogicalLine` output in
`tui2/src/markdown_render.rs`, preserving markdown wrap semantics:
initial/subsequent indents, per-line style, and a preformatted flag.
- Update the streaming collector (`tui2/src/markdown_stream.rs`) to emit
logical lines (newline-gated) and remove any captured viewport width.
- Update streaming orchestration (`tui2/src/streaming/*`) to queue and
emit logical lines, producing `AgentMessageCell::new_logical(...)`.
- Make `AgentMessageCell` store logical lines and wrap at render time in
`HistoryCell::transcript_lines_with_joiners(width)`, emitting joiners so
copy/paste can join soft-wrap continuations correctly.

Overlay deferral
When an overlay is active, defer *cells* (not rendered `Vec<Line>`) and
render them at overlay close time. This avoids baking width-derived
wraps based on a stale width.

Tests + docs
- Add resize/reflow regression tests + snapshots for streamed agent
output.
- Expand module/API docs for the new logical-line streaming pipeline and
clarify joiner semantics.
- Align scrollback-related docs/comments with current tui2 behavior
(main draw loop does not flush queued "history lines" to the terminal).

More details
See `codex-rs/tui2/docs/streaming_wrapping_design.md` for the full
problem statement and solution approach, and
`codex-rs/tui2/docs/tui_viewport_and_history.md` for viewport vs printed
output behavior.
2026-01-05 18:37:58 -08:00
Thibault Sottiaux
771f1ca6ab fix: accept whitespace-padded patch markers (#8746)
Trim whitespace when validating '*** Begin Patch'/'*** End Patch'
markers in codex-apply-patch so padded marker lines parse as intended,
and add regression coverage (unit + fixture scenario); this avoids
apply_patch failures when models include extra spacing. Tested with
cargo test -p codex-apply-patch.
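
A minimal sketch of the trimmed comparison:

```rust
// Accept markers with incidental leading/trailing whitespace, since models
// sometimes pad them; the trimmed line must still match exactly.
fn is_begin_patch(line: &str) -> bool {
    line.trim() == "*** Begin Patch"
}

fn is_end_patch(line: &str) -> bool {
    line.trim() == "*** End Patch"
}

fn main() {
    assert!(is_begin_patch("   *** Begin Patch "));
    assert!(is_end_patch("*** End Patch\t"));
}
```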
2026-01-05 17:41:23 -08:00
Dylan Hurd
b1c93e135b chore(apply-patch) additional scenarios (#8230)
## Summary
More apply-patch scenarios

## Testing
- [x] This pr only adds tests
2026-01-05 15:56:38 -08:00
Curtis 'Fjord' Hawthorne
5f8776d34d Allow global exec flags after resume and fix CI codex build/timeout (#8440)
**Motivation**
- Bring `codex exec resume` to parity with top‑level flags so global
options (git check bypass, json, model, sandbox toggles) work after the
subcommand, including when outside a git repo.

**Description**
- Exec CLI: mark `--skip-git-repo-check`, `--json`, `--model`,
`--full-auto`, and `--dangerously-bypass-approvals-and-sandbox` as
global so they’re accepted after `resume`.
- Tests: add `exec_resume_accepts_global_flags_after_subcommand` to
verify those flags work when passed after `resume`.

**Testing**
- `just fmt`
- `cargo test -p codex-exec` (pass; ran with elevated perms to allow
network/port binds)
- Manual: exercised `codex exec resume` with global flags after the
subcommand to confirm behavior.
2026-01-05 22:12:09 +00:00
xl-openai
58a91a0b50 Use ConfigLayerStack for skills discovery. (#8497)
Use ConfigLayerStack to get all folders while loading skills.
2026-01-05 13:47:39 -08:00
Matthew Zeng
c29afc0cf3 [device-auth] Update login instruction for headless environments. (#8753)
We've seen reports that people who try to login on a remote/headless
machine will open the login link on their own machine and got errors.
Update the instructions to ask those users to use `codex login
--device-auth` instead.

<img width="1434" height="938" alt="CleanShot 2026-01-05 at 11 35 02@2x"
src="https://github.com/user-attachments/assets/2b209953-6a42-4eb0-8b55-bb0733f2e373"
/>
2026-01-05 13:46:42 -08:00
Michael Bolin
cafb07fe6e feat: add justification arg to prefix_rule() in *.rules (#8751)
Adds an optional `justification` parameter to the `prefix_rule()`
execpolicy DSL so policy authors can attach human-readable rationale to
a rule. That justification is propagated through parsing/matching and
can be surfaced to the model (or approval UI) when a command is blocked
or requires approval.

When a command is rejected (or gated behind approval) due to policy, a
generic message makes it hard for the model/user to understand what went
wrong and what to do instead. Allowing policy authors to supply a short
justification improves debuggability and helps guide the model toward
compliant alternatives.

Example:

```python
prefix_rule(
    pattern = ["git", "push"],
    decision = "forbidden",
    justification = "pushing is blocked in this repo",
)
```

If Codex tried to run `git push origin main`, now the failure would
include:

```
`git push origin main` rejected: pushing is blocked in this repo
```

whereas previously, all it was told was:

```
execpolicy forbids this command
```
2026-01-05 21:24:48 +00:00
iceweasel-oai
07f077dfb3 best effort to "hide" Sandbox users (#8492)
The elevated sandbox creates two new Windows users - CodexSandboxOffline
and CodexSandboxOnline. This is necessary, so this PR does all that it
can to "hide" those users. It uses the registry plus directory flags (on
their home directories) to get them to show up as little as possible.
2026-01-05 12:29:10 -08:00
Abrar Ahmed
7cf6f1c723 Use issuer URL in device auth prompt link (#7858)
## Summary

When using device-code login with a custom issuer
(`--experimental_issuer`), Codex correctly uses that issuer for the auth
flow — but the **terminal prompt still told users to open the default
OpenAI device URL** (`https://auth.openai.com/codex/device`). That’s
confusing and can send users to the **wrong domain** (especially for
enterprise/staging issuers). This PR updates the prompt (and related
URLs) to consistently use the configured issuer. 🎯

---

## 🔧 What changed

* 🔗 **Device auth prompt link** now uses the configured issuer (instead
of a hard-coded OpenAI URL)
* 🧭 **Redirect callback URL** is derived from the same issuer for
consistency
* 🧼 Minor cleanup: normalize the issuer base URL once and reuse it
(avoids formatting quirks like trailing `/`)

---

## 🧪 Repro + Before/After

### ▶️ Command

```bash
codex login --device-auth --experimental_issuer https://auth.example.com
```

###  Before (wrong link shown)

```text
1. Open this link in your browser and sign in to your account
   https://auth.openai.com/codex/device
```

###  After (correct link shown)

```text
1. Open this link in your browser and sign in to your account
   https://auth.example.com/codex/device
```

Full example output (same as before, but with the correct URL):

```text
Welcome to Codex [v0.72.0]
OpenAI's command-line coding agent

Follow these steps to sign in with ChatGPT using device code authorization:

1. Open this link in your browser and sign in to your account
   https://auth.example.com/codex/device

2. Enter this one-time code (expires in 15 minutes)
   BUT6-0M8K4

Device codes are a common phishing target. Never share this code.
```

---

##  Test plan

* 🟦 `codex login --device-auth` (default issuer): output remains
unchanged
* 🟩 `codex login --device-auth --experimental_issuer
https://auth.example.com`:

  * prompt link points to the issuer 
  * callback URL is derived from the same issuer 
  * no double slashes / mismatched domains 

Co-authored-by: Eric Traut <etraut@openai.com>
2026-01-05 13:09:05 -07:00
Gav Verma
57f8158608 chore: improve skills render section (#8459)
This change improves the skills render section
- Separate the skills list from usage rules with clear subheadings
- Define skill more clearly upfront
- Remove confusing trigger/discovery wording and make reference-following guidance more actionable
2026-01-05 11:55:26 -08:00
iceweasel-oai
95580f229e never let sandbox write to .codex/ or .codex/.sandbox/ (#8683)
Never treat .codex or .codex/.sandbox as a workspace root.
Handle write permissions to .codex/.sandbox in a single method so that
the sandbox setup/runner can write logs and other setup files to that
directory.
2026-01-05 11:54:21 -08:00
373 changed files with 19165 additions and 5647 deletions


@@ -14,6 +14,26 @@ RUN apt-get update && \
pkg-config clang musl-tools libssl-dev just && \
rm -rf /var/lib/apt/lists/*
# Install Bazel via Bazelisk (mirrors bazelbuild/setup-bazelisk@v3).
ARG BAZELISK_VERSION=latest
RUN arch="$(uname -m)" && \
case "$arch" in \
x86_64) arch="amd64" ;; \
aarch64) arch="arm64" ;; \
*) echo "Unsupported architecture: $arch" >&2; exit 1 ;; \
esac && \
if [ "$BAZELISK_VERSION" = "latest" ]; then \
url="https://github.com/bazelbuild/bazelisk/releases/latest/download/bazelisk-linux-${arch}"; \
else \
url="https://github.com/bazelbuild/bazelisk/releases/download/v${BAZELISK_VERSION}/bazelisk-linux-${arch}"; \
fi && \
curl -fsSL "$url" -o /usr/local/bin/bazelisk && \
chmod +x /usr/local/bin/bazelisk && \
ln -s /usr/local/bin/bazelisk /usr/local/bin/bazel
# Install dotslash.
RUN curl -LSfs "https://github.com/facebook/dotslash/releases/download/v0.5.8/dotslash-ubuntu-22.04.$(uname -m).tar.gz" | tar fxz - -C /usr/local/bin
# Ubuntu 24.04 ships with user 'ubuntu' already created with UID 1000.
USER ubuntu


@@ -323,6 +323,26 @@ jobs:
- name: Checkout repository
uses: actions/checkout@v6
- name: Generate release notes from tag commit message
id: release_notes
shell: bash
run: |
set -euo pipefail
# On tag pushes, GITHUB_SHA may be a tag object for annotated tags;
# peel it to the underlying commit.
commit="$(git rev-parse "${GITHUB_SHA}^{commit}")"
notes_path="${RUNNER_TEMP}/release-notes.md"
# Use the commit message for the commit the tag points at (not the
# annotated tag message).
git log -1 --format=%B "${commit}" > "${notes_path}"
# Ensure trailing newline so GitHub's markdown renderer doesn't
# occasionally run the last line into subsequent content.
echo >> "${notes_path}"
echo "path=${notes_path}" >> "${GITHUB_OUTPUT}"
- uses: actions/download-artifact@v7
with:
path: dist
@@ -395,6 +415,7 @@ jobs:
with:
name: ${{ steps.release_name.outputs.name }}
tag_name: ${{ github.ref_name }}
body_path: ${{ steps.release_notes.outputs.path }}
files: dist/**
# Mark as prerelease only when the version has a suffix after x.y.z
# (e.g. -alpha, -beta). Otherwise publish a normal release.

.gitignore (vendored, 1 line changed)

@@ -9,6 +9,7 @@ node_modules
# build
dist/
bazel-*
build/
out/
storybook-static/


@@ -77,11 +77,11 @@ If you don't have the tool:
- Prefer deep equals comparisons whenever possible. Perform `assert_eq!()` on entire objects, rather than individual fields.
- Avoid mutating process environment in tests; prefer passing environment-derived flags or dependencies from above.
### Spawning workspace binaries in tests (Cargo vs Buck2)
### Spawning workspace binaries in tests (Cargo vs Bazel)
- Prefer `codex_utils_cargo_bin::cargo_bin("...")` over `assert_cmd::Command::cargo_bin(...)` or `escargot` when tests need to spawn first-party binaries.
- Under Buck2, `CARGO_BIN_EXE_*` may be project-relative (e.g. `buck-out/...`), which breaks if a test changes its working directory. `codex_utils_cargo_bin::cargo_bin` resolves to an absolute path first.
- When locating fixture files under Buck2, avoid `env!("CARGO_MANIFEST_DIR")` (Buck codegen sets it to `"."`). Prefer deriving paths from `codex_utils_cargo_bin::buck_project_root()` when needed.
- Under Bazel, binaries and resources may live under runfiles; use `codex_utils_cargo_bin::cargo_bin` to resolve absolute paths that remain stable after `chdir`.
- When locating fixture files or test resources under Bazel, avoid `env!("CARGO_MANIFEST_DIR")`. Prefer `codex_utils_cargo_bin::find_resource!` so paths resolve correctly under both Cargo and Bazel runfiles.
### Integration tests (core)

announcement_tip.toml (new file, 16 lines)

@@ -0,0 +1,16 @@
# Example announcement tips for Codex TUI.
# Each [[announcements]] entry is evaluated in order; the last matching one is shown.
# Dates are UTC, formatted as YYYY-MM-DD. The from_date is inclusive and the to_date is exclusive.
# version_regex matches against the CLI version (env!("CARGO_PKG_VERSION")); omit to apply to all versions.
# target_app specifies which app should display the announcement (cli, vsce, ...).
[[announcements]]
content = "Welcome to Codex! Check out the new onboarding flow."
from_date = "2024-10-01"
to_date = "2024-10-15"
target_app = "cli"
[[announcements]]
content = "This is a test announcement"
version_regex = "^0\\.0\\.0$"
to_date = "2026-01-10"

codex-rs/Cargo.lock (generated, 96 lines changed)

@@ -360,7 +360,7 @@ dependencies = [
"objc2-foundation",
"parking_lot",
"percent-encoding",
"windows-sys 0.52.0",
"windows-sys 0.60.2",
"wl-clipboard-rs",
"x11rb",
]
@@ -819,6 +819,8 @@ version = "1.2.30"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "deec109607ca693028562ed836a5f1c4b8bd77755c4e132fc5ce11b0b6211ae7"
dependencies = [
"jobserver",
"libc",
"shlex",
]
@@ -1127,6 +1129,7 @@ dependencies = [
"codex-common",
"codex-core",
"codex-git",
"codex-utils-cargo-bin",
"serde",
"serde_json",
"tempfile",
@@ -1153,7 +1156,6 @@ dependencies = [
"codex-execpolicy",
"codex-login",
"codex-mcp-server",
"codex-process-hardening",
"codex-protocol",
"codex-responses-api-proxy",
"codex-rmcp-client",
@@ -1163,7 +1165,6 @@ dependencies = [
"codex-utils-absolute-path",
"codex-utils-cargo-bin",
"codex-windows-sandbox",
"ctor 0.5.0",
"libc",
"owo-colors",
"predicates",
@@ -1197,6 +1198,7 @@ dependencies = [
"tracing",
"tracing-opentelemetry",
"tracing-subscriber",
"zstd",
]
[[package]]
@@ -1298,7 +1300,6 @@ dependencies = [
"dunce",
"encoding_rs",
"env-flags",
"escargot",
"eventsource-stream",
"futures",
"http 1.3.1",
@@ -1348,6 +1349,19 @@ dependencies = [
"which",
"wildmatch",
"wiremock",
"zstd",
]
[[package]]
name = "codex-debug-client"
version = "0.0.0"
dependencies = [
"anyhow",
"clap",
"codex-app-server-protocol",
"pretty_assertions",
"serde",
"serde_json",
]
[[package]]
@@ -1605,10 +1619,12 @@ dependencies = [
"opentelemetry-otlp",
"opentelemetry-semantic-conventions",
"opentelemetry_sdk",
"pretty_assertions",
"reqwest",
"serde",
"serde_json",
"strum_macros 0.27.2",
"thiserror 2.0.17",
"tokio",
"tracing",
"tracing-opentelemetry",
@@ -1866,7 +1882,7 @@ dependencies = [
name = "codex-utils-cache"
version = "0.0.0"
dependencies = [
"lru 0.16.2",
"lru 0.16.3",
"sha1",
"tokio",
]
@@ -1876,6 +1892,7 @@ name = "codex-utils-cargo-bin"
version = "0.0.0"
dependencies = [
"assert_cmd",
"path-absolutize",
"thiserror 2.0.17",
]
@@ -2781,7 +2798,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "778e2ac28f6c47af28e4907f13ffd1e1ddbd400980a9abd7c8df189bf578a5ad"
dependencies = [
"libc",
"windows-sys 0.52.0",
"windows-sys 0.60.2",
]
[[package]]
@@ -2790,17 +2807,6 @@ version = "3.3.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "dea2df4cf52843e0452895c455a1a2cfbb842a1e7329671acf418fdc53ed4c59"
[[package]]
name = "escargot"
version = "0.5.15"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "11c3aea32bc97b500c9ca6a72b768a26e558264303d101d3409cf6d57a9ed0cf"
dependencies = [
"log",
"serde",
"serde_json",
]
[[package]]
name = "event-listener"
version = "5.4.0"
@@ -2889,7 +2895,7 @@ checksum = "0ce92ff622d6dadf7349484f42c93271a0d49b7cc4d466a936405bacbe10aa78"
dependencies = [
"cfg-if",
"rustix 1.0.8",
"windows-sys 0.52.0",
"windows-sys 0.59.0",
]
[[package]]
@@ -3830,7 +3836,7 @@ checksum = "e04d7f318608d35d4b61ddd75cbdaee86b023ebe2bd5a66ee0915f0bf93095a9"
dependencies = [
"hermit-abi",
"libc",
"windows-sys 0.52.0",
"windows-sys 0.59.0",
]
[[package]]
@@ -3924,6 +3930,16 @@ version = "0.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8eaf4bc02d17cbdd7ff4c7438cafcdf7fb9a4613313ad11b4f8fefe7d3fa0130"
[[package]]
name = "jobserver"
version = "0.1.34"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9afb3de4395d6b3e67a780b6de64b51c978ecf11cb9a462c66be7d4ca9039d33"
dependencies = [
"getrandom 0.3.3",
"libc",
]
[[package]]
name = "js-sys"
version = "0.3.77"
@@ -4151,9 +4167,9 @@ dependencies = [
[[package]]
name = "lru"
version = "0.16.2"
version = "0.16.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "96051b46fc183dc9cd4a223960ef37b9af631b55191852a8274bfef064cda20f"
checksum = "a1dc47f592c06f33f8e3aea9591776ec7c9f9e4124778ff8a3c3b87159f7e593"
dependencies = [
"hashbrown 0.16.0",
]
@@ -5331,7 +5347,7 @@ dependencies = [
"once_cell",
"socket2 0.6.1",
"tracing",
"windows-sys 0.52.0",
"windows-sys 0.60.2",
]
[[package]]
@@ -5459,7 +5475,7 @@ dependencies = [
"indoc",
"itertools 0.14.0",
"kasuari",
"lru 0.16.2",
"lru 0.16.3",
"strum 0.27.2",
"thiserror 2.0.17",
"unicode-segmentation",
@@ -5710,7 +5726,7 @@ dependencies = [
"errno",
"libc",
"linux-raw-sys 0.4.15",
"windows-sys 0.52.0",
"windows-sys 0.59.0",
]
[[package]]
@@ -5723,7 +5739,7 @@ dependencies = [
"errno",
"libc",
"linux-raw-sys 0.9.4",
"windows-sys 0.52.0",
"windows-sys 0.60.2",
]
[[package]]
@@ -8011,7 +8027,7 @@ version = "0.1.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "cf221c93e13a30d793f7645a0e7762c55d169dbb0a49671918a2319d289b10bb"
dependencies = [
"windows-sys 0.52.0",
"windows-sys 0.59.0",
]
[[package]]
@@ -8809,6 +8825,34 @@ dependencies = [
"syn 2.0.104",
]
[[package]]
name = "zstd"
version = "0.13.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e91ee311a569c327171651566e07972200e76fcfe2242a4fa446149a3881c08a"
dependencies = [
"zstd-safe",
]
[[package]]
name = "zstd-safe"
version = "7.2.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8f49c4d5f0abb602a93fb8736af2a4f4dd9512e36f7f570d66e65ff867ed3b9d"
dependencies = [
"zstd-sys",
]
[[package]]
name = "zstd-sys"
version = "2.0.16+zstd.1.5.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "91e19ebc2adc8f83e43039e79776e3fda8ca919132d68a1fed6a5faca2683748"
dependencies = [
"cc",
"pkg-config",
]
[[package]]
name = "zune-core"
version = "0.4.12"


@@ -6,6 +6,7 @@ members = [
"app-server",
"app-server-protocol",
"app-server-test-client",
"debug-client",
"apply-patch",
"arg0",
"feedback",
@@ -134,7 +135,6 @@ dunce = "1.0.4"
encoding_rs = "0.8.35"
env-flags = "0.1.1"
env_logger = "0.11.5"
escargot = "0.5"
eventsource-stream = "0.2.3"
futures = { version = "0.3", default-features = false }
http = "1.3.1"
@@ -152,7 +152,7 @@ landlock = "0.4.4"
lazy_static = "1"
libc = "0.2.177"
log = "0.4"
lru = "0.16.2"
lru = "0.16.3"
maplit = "1.0.2"
mime_guess = "2.0.5"
multimap = "0.10.0"
@@ -218,6 +218,7 @@ tracing-subscriber = "0.3.22"
tracing-test = "0.2.5"
tree-sitter = "0.25.10"
tree-sitter-bash = "0.25"
zstd = "0.13"
tree-sitter-highlight = "0.25.10"
ts-rs = "11"
tui-scrollbar = "0.2.1"


@@ -109,14 +109,26 @@ client_request_definitions! {
params: v2::ThreadResumeParams,
response: v2::ThreadResumeResponse,
},
ThreadFork => "thread/fork" {
params: v2::ThreadForkParams,
response: v2::ThreadForkResponse,
},
ThreadArchive => "thread/archive" {
params: v2::ThreadArchiveParams,
response: v2::ThreadArchiveResponse,
},
ThreadRollback => "thread/rollback" {
params: v2::ThreadRollbackParams,
response: v2::ThreadRollbackResponse,
},
ThreadList => "thread/list" {
params: v2::ThreadListParams,
response: v2::ThreadListResponse,
},
ThreadLoadedList => "thread/loaded/list" {
params: v2::ThreadLoadedListParams,
response: v2::ThreadLoadedListResponse,
},
SkillsList => "skills/list" {
params: v2::SkillsListParams,
response: v2::SkillsListResponse,
@@ -193,6 +205,11 @@ client_request_definitions! {
response: v2::ConfigWriteResponse,
},
ConfigRequirementsRead => "configRequirements/read" {
params: #[ts(type = "undefined")] #[serde(skip_serializing_if = "Option::is_none")] Option<()>,
response: v2::ConfigRequirementsReadResponse,
},
GetAccount => "account/read" {
params: v2::GetAccountParams,
response: v2::GetAccountResponse,
@@ -217,6 +234,11 @@ client_request_definitions! {
params: v1::ResumeConversationParams,
response: v1::ResumeConversationResponse,
},
/// Fork a recorded Codex conversation into a new session.
ForkConversation {
params: v1::ForkConversationParams,
response: v1::ForkConversationResponse,
},
ArchiveConversation {
params: v1::ArchiveConversationParams,
response: v1::ArchiveConversationResponse,
@@ -565,7 +587,7 @@ client_notification_definitions! {
mod tests {
use super::*;
use anyhow::Result;
use codex_protocol::ConversationId;
use codex_protocol::ThreadId;
use codex_protocol::account::PlanType;
use codex_protocol::parse_command::ParsedCommand;
use codex_protocol::protocol::AskForApproval;
@@ -614,7 +636,7 @@ mod tests {
#[test]
fn conversation_id_serializes_as_plain_string() -> Result<()> {
let id = ConversationId::from_string("67e55044-10b1-426f-9247-bb680e5fe0c8")?;
let id = ThreadId::from_string("67e55044-10b1-426f-9247-bb680e5fe0c8")?;
assert_eq!(
json!("67e55044-10b1-426f-9247-bb680e5fe0c8"),
@@ -625,11 +647,10 @@ mod tests {
#[test]
fn conversation_id_deserializes_from_plain_string() -> Result<()> {
let id: ConversationId =
serde_json::from_value(json!("67e55044-10b1-426f-9247-bb680e5fe0c8"))?;
let id: ThreadId = serde_json::from_value(json!("67e55044-10b1-426f-9247-bb680e5fe0c8"))?;
assert_eq!(
ConversationId::from_string("67e55044-10b1-426f-9247-bb680e5fe0c8")?,
ThreadId::from_string("67e55044-10b1-426f-9247-bb680e5fe0c8")?,
id,
);
Ok(())
@@ -650,7 +671,7 @@ mod tests {
#[test]
fn serialize_server_request() -> Result<()> {
let conversation_id = ConversationId::from_string("67e55044-10b1-426f-9247-bb680e5fe0c8")?;
let conversation_id = ThreadId::from_string("67e55044-10b1-426f-9247-bb680e5fe0c8")?;
let params = v1::ExecCommandApprovalParams {
conversation_id,
call_id: "call-42".to_string(),
@@ -708,6 +729,22 @@ mod tests {
Ok(())
}
#[test]
fn serialize_config_requirements_read() -> Result<()> {
let request = ClientRequest::ConfigRequirementsRead {
request_id: RequestId::Integer(1),
params: None,
};
assert_eq!(
json!({
"method": "configRequirements/read",
"id": 1,
}),
serde_json::to_value(&request)?,
);
Ok(())
}
#[test]
fn serialize_account_login_api_key() -> Result<()> {
let request = ClientRequest::LoginAccount {

View File

@@ -6,6 +6,7 @@ use crate::protocol::v2::UserInput;
use codex_protocol::protocol::AgentReasoningEvent;
use codex_protocol::protocol::AgentReasoningRawContentEvent;
use codex_protocol::protocol::EventMsg;
use codex_protocol::protocol::ThreadRolledBackEvent;
use codex_protocol::protocol::TurnAbortedEvent;
use codex_protocol::protocol::UserMessageEvent;
@@ -57,6 +58,7 @@ impl ThreadHistoryBuilder {
EventMsg::TokenCount(_) => {}
EventMsg::EnteredReviewMode(_) => {}
EventMsg::ExitedReviewMode(_) => {}
EventMsg::ThreadRolledBack(payload) => self.handle_thread_rollback(payload),
EventMsg::UndoCompleted(_) => {}
EventMsg::TurnAborted(payload) => self.handle_turn_aborted(payload),
_ => {}
@@ -130,6 +132,23 @@ impl ThreadHistoryBuilder {
turn.status = TurnStatus::Interrupted;
}
fn handle_thread_rollback(&mut self, payload: &ThreadRolledBackEvent) {
self.finish_current_turn();
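// Saturate on conversion failure so an oversized `num_turns` falls into the clear-all branch below.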
let n = usize::try_from(payload.num_turns).unwrap_or(usize::MAX);
if n >= self.turns.len() {
self.turns.clear();
} else {
self.turns.truncate(self.turns.len().saturating_sub(n));
}
// Re-number subsequent synthetic ids so the pruned history is consistent.
self.next_turn_index =
i64::try_from(self.turns.len().saturating_add(1)).unwrap_or(i64::MAX);
let item_count: usize = self.turns.iter().map(|t| t.items.len()).sum();
self.next_item_index = i64::try_from(item_count.saturating_add(1)).unwrap_or(i64::MAX);
}
fn finish_current_turn(&mut self) {
if let Some(turn) = self.current_turn.take() {
if turn.items.is_empty() {
@@ -213,6 +232,7 @@ mod tests {
use codex_protocol::protocol::AgentMessageEvent;
use codex_protocol::protocol::AgentReasoningEvent;
use codex_protocol::protocol::AgentReasoningRawContentEvent;
use codex_protocol::protocol::ThreadRolledBackEvent;
use codex_protocol::protocol::TurnAbortReason;
use codex_protocol::protocol::TurnAbortedEvent;
use codex_protocol::protocol::UserMessageEvent;
@@ -410,4 +430,95 @@ mod tests {
}
);
}
#[test]
fn drops_last_turns_on_thread_rollback() {
let events = vec![
EventMsg::UserMessage(UserMessageEvent {
message: "First".into(),
images: None,
}),
EventMsg::AgentMessage(AgentMessageEvent {
message: "A1".into(),
}),
EventMsg::UserMessage(UserMessageEvent {
message: "Second".into(),
images: None,
}),
EventMsg::AgentMessage(AgentMessageEvent {
message: "A2".into(),
}),
EventMsg::ThreadRolledBack(ThreadRolledBackEvent { num_turns: 1 }),
EventMsg::UserMessage(UserMessageEvent {
message: "Third".into(),
images: None,
}),
EventMsg::AgentMessage(AgentMessageEvent {
message: "A3".into(),
}),
];
let turns = build_turns_from_event_msgs(&events);
let expected = vec![
Turn {
id: "turn-1".into(),
status: TurnStatus::Completed,
error: None,
items: vec![
ThreadItem::UserMessage {
id: "item-1".into(),
content: vec![UserInput::Text {
text: "First".into(),
}],
},
ThreadItem::AgentMessage {
id: "item-2".into(),
text: "A1".into(),
},
],
},
Turn {
id: "turn-2".into(),
status: TurnStatus::Completed,
error: None,
items: vec![
ThreadItem::UserMessage {
id: "item-3".into(),
content: vec![UserInput::Text {
text: "Third".into(),
}],
},
ThreadItem::AgentMessage {
id: "item-4".into(),
text: "A3".into(),
},
],
},
];
assert_eq!(turns, expected);
}
#[test]
fn thread_rollback_clears_all_turns_when_num_turns_exceeds_history() {
let events = vec![
EventMsg::UserMessage(UserMessageEvent {
message: "One".into(),
images: None,
}),
EventMsg::AgentMessage(AgentMessageEvent {
message: "A1".into(),
}),
EventMsg::UserMessage(UserMessageEvent {
message: "Two".into(),
images: None,
}),
EventMsg::AgentMessage(AgentMessageEvent {
message: "A2".into(),
}),
EventMsg::ThreadRolledBack(ThreadRolledBackEvent { num_turns: 99 }),
];
let turns = build_turns_from_event_msgs(&events);
assert_eq!(turns, Vec::<Turn>::new());
}
}

View File

@@ -1,7 +1,7 @@
use std::collections::HashMap;
use std::path::PathBuf;
use codex_protocol::ConversationId;
use codex_protocol::ThreadId;
use codex_protocol::config_types::ForcedLoginMethod;
use codex_protocol::config_types::ReasoningSummary;
use codex_protocol::config_types::SandboxMode;
@@ -68,7 +68,7 @@ pub struct NewConversationParams {
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct NewConversationResponse {
pub conversation_id: ConversationId,
pub conversation_id: ThreadId,
pub model: String,
pub reasoning_effort: Option<ReasoningEffort>,
pub rollout_path: PathBuf,
@@ -77,7 +77,16 @@ pub struct NewConversationResponse {
#[derive(Serialize, Deserialize, Debug, Clone, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct ResumeConversationResponse {
pub conversation_id: ConversationId,
pub conversation_id: ThreadId,
pub model: String,
pub initial_messages: Option<Vec<EventMsg>>,
pub rollout_path: PathBuf,
}
#[derive(Serialize, Deserialize, Debug, Clone, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct ForkConversationResponse {
pub conversation_id: ThreadId,
pub model: String,
pub initial_messages: Option<Vec<EventMsg>>,
pub rollout_path: PathBuf,
@@ -90,9 +99,9 @@ pub enum GetConversationSummaryParams {
#[serde(rename = "rolloutPath")]
rollout_path: PathBuf,
},
ConversationId {
ThreadId {
#[serde(rename = "conversationId")]
conversation_id: ConversationId,
conversation_id: ThreadId,
},
}
@@ -113,7 +122,7 @@ pub struct ListConversationsParams {
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct ConversationSummary {
pub conversation_id: ConversationId,
pub conversation_id: ThreadId,
pub path: PathBuf,
pub preview: String,
pub timestamp: Option<String>,
@@ -143,11 +152,19 @@ pub struct ListConversationsResponse {
#[serde(rename_all = "camelCase")]
pub struct ResumeConversationParams {
pub path: Option<PathBuf>,
pub conversation_id: Option<ConversationId>,
pub conversation_id: Option<ThreadId>,
pub history: Option<Vec<ResponseItem>>,
pub overrides: Option<NewConversationParams>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct ForkConversationParams {
pub path: Option<PathBuf>,
pub conversation_id: Option<ThreadId>,
pub overrides: Option<NewConversationParams>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct AddConversationSubscriptionResponse {
@@ -158,7 +175,7 @@ pub struct AddConversationSubscriptionResponse {
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct ArchiveConversationParams {
pub conversation_id: ConversationId,
pub conversation_id: ThreadId,
pub rollout_path: PathBuf,
}
@@ -198,7 +215,7 @@ pub struct GitDiffToRemoteResponse {
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct ApplyPatchApprovalParams {
pub conversation_id: ConversationId,
pub conversation_id: ThreadId,
/// Use to correlate this with [codex_core::protocol::PatchApplyBeginEvent]
/// and [codex_core::protocol::PatchApplyEndEvent].
pub call_id: String,
@@ -219,7 +236,7 @@ pub struct ApplyPatchApprovalResponse {
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct ExecCommandApprovalParams {
pub conversation_id: ConversationId,
pub conversation_id: ThreadId,
/// Use to correlate this with [codex_core::protocol::ExecCommandBeginEvent]
/// and [codex_core::protocol::ExecCommandEndEvent].
pub call_id: String,
@@ -369,14 +386,14 @@ pub struct SandboxSettings {
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct SendUserMessageParams {
pub conversation_id: ConversationId,
pub conversation_id: ThreadId,
pub items: Vec<InputItem>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct SendUserTurnParams {
pub conversation_id: ConversationId,
pub conversation_id: ThreadId,
pub items: Vec<InputItem>,
pub cwd: PathBuf,
pub approval_policy: AskForApproval,
@@ -395,7 +412,7 @@ pub struct SendUserTurnResponse {}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct InterruptConversationParams {
pub conversation_id: ConversationId,
pub conversation_id: ThreadId,
}
#[derive(Serialize, Deserialize, Debug, Clone, JsonSchema, TS)]
@@ -411,7 +428,7 @@ pub struct SendUserMessageResponse {}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct AddConversationListenerParams {
pub conversation_id: ConversationId,
pub conversation_id: ThreadId,
#[serde(default)]
pub experimental_raw_events: bool,
}
@@ -445,7 +462,7 @@ pub struct LoginChatGptCompleteNotification {
#[derive(Serialize, Deserialize, Debug, Clone, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct SessionConfiguredNotification {
pub session_id: ConversationId,
pub session_id: ThreadId,
pub model: String,
pub reasoning_effort: Option<ReasoningEffort>,
pub history_log_id: u64,

View File

@@ -89,6 +89,7 @@ pub enum CodexErrorInfo {
InternalServerError,
Unauthorized,
BadRequest,
ThreadRollbackFailed,
SandboxError,
/// The response SSE stream disconnected in the middle of a turn before completion.
ResponseStreamDisconnected {
@@ -119,6 +120,7 @@ impl From<CoreCodexErrorInfo> for CodexErrorInfo {
CoreCodexErrorInfo::InternalServerError => CodexErrorInfo::InternalServerError,
CoreCodexErrorInfo::Unauthorized => CodexErrorInfo::Unauthorized,
CoreCodexErrorInfo::BadRequest => CodexErrorInfo::BadRequest,
CoreCodexErrorInfo::ThreadRollbackFailed => CodexErrorInfo::ThreadRollbackFailed,
CoreCodexErrorInfo::SandboxError => CodexErrorInfo::SandboxError,
CoreCodexErrorInfo::ResponseStreamDisconnected { http_status_code } => {
CodexErrorInfo::ResponseStreamDisconnected { http_status_code }
@@ -330,6 +332,15 @@ pub struct ProfileV2 {
pub additional: HashMap<String, JsonValue>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "snake_case")]
#[ts(export_to = "v2/")]
pub struct AnalyticsConfig {
pub enabled: Option<bool>,
#[serde(default, flatten)]
pub additional: HashMap<String, JsonValue>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "snake_case")]
#[ts(export_to = "v2/")]
@@ -354,6 +365,7 @@ pub struct Config {
pub model_reasoning_effort: Option<ReasoningEffort>,
pub model_reasoning_summary: Option<ReasoningSummary>,
pub model_verbosity: Option<Verbosity>,
pub analytics: Option<AnalyticsConfig>,
#[serde(default, flatten)]
pub additional: HashMap<String, JsonValue>,
}
@@ -441,6 +453,22 @@ pub struct ConfigReadResponse {
pub layers: Option<Vec<ConfigLayer>>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct ConfigRequirements {
pub allowed_approval_policies: Option<Vec<AskForApproval>>,
pub allowed_sandbox_modes: Option<Vec<SandboxMode>>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct ConfigRequirementsReadResponse {
/// Null if no requirements are configured (e.g. no requirements.toml/MDM entries).
pub requirements: Option<ConfigRequirements>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
@@ -475,14 +503,33 @@ pub struct ConfigEdit {
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub enum ApprovalDecision {
pub enum CommandExecutionApprovalDecision {
/// User approved the command.
Accept,
/// Approve and remember the approval for the session.
/// User approved the command and future identical commands should run without prompting.
AcceptForSession,
/// User approved the command, and wants to apply the proposed execpolicy amendment so future
/// matching commands can run without prompting.
AcceptWithExecpolicyAmendment {
execpolicy_amendment: ExecPolicyAmendment,
},
/// User denied the command. The agent will continue the turn.
Decline,
/// User denied the command. The turn will also be immediately interrupted.
Cancel,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub enum FileChangeApprovalDecision {
/// User approved the file changes.
Accept,
/// User approved the file changes and future changes to the same files should run without prompting.
AcceptForSession,
/// User denied the file changes. The agent will continue the turn.
Decline,
/// User denied the file changes. The turn will also be immediately interrupted.
Cancel,
}
@@ -1033,6 +1080,47 @@ pub struct ThreadResumeResponse {
pub reasoning_effort: Option<ReasoningEffort>,
}
#[derive(Serialize, Deserialize, Debug, Default, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
/// There are two ways to fork a thread:
/// 1. By `thread_id`: load the thread from disk by its id and fork it into a new thread.
/// 2. By `path`: load the thread from disk from the given rollout path and fork it into a new thread.
///
/// If `path` is provided, the `thread_id` param is ignored.
///
/// Prefer `thread_id` whenever possible.
pub struct ThreadForkParams {
pub thread_id: String,
/// [UNSTABLE] Specify the rollout path to fork from.
/// If specified, the thread_id param will be ignored.
pub path: Option<PathBuf>,
/// Configuration overrides for the forked thread, if any.
pub model: Option<String>,
pub model_provider: Option<String>,
pub cwd: Option<String>,
pub approval_policy: Option<AskForApproval>,
pub sandbox: Option<SandboxMode>,
pub config: Option<HashMap<String, serde_json::Value>>,
pub base_instructions: Option<String>,
pub developer_instructions: Option<String>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct ThreadForkResponse {
pub thread: Thread,
pub model: String,
pub model_provider: String,
pub cwd: PathBuf,
pub approval_policy: AskForApproval,
pub sandbox: SandboxPolicy,
pub reasoning_effort: Option<ReasoningEffort>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
@@ -1045,6 +1133,30 @@ pub struct ThreadArchiveParams {
#[ts(export_to = "v2/")]
pub struct ThreadArchiveResponse {}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct ThreadRollbackParams {
pub thread_id: String,
/// The number of turns to drop from the end of the thread. Must be >= 1.
///
/// This only modifies the thread's history and does not revert local file changes
/// that have been made by the agent. Clients are responsible for reverting these changes.
pub num_turns: u32,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct ThreadRollbackResponse {
/// The updated thread after applying the rollback, with `turns` populated.
///
/// The ThreadItems stored in each Turn are lossy since we explicitly do not
/// persist all agent interactions, such as command executions. This is the same
/// behavior as `thread/resume`.
pub thread: Thread,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
@@ -1068,6 +1180,27 @@ pub struct ThreadListResponse {
pub next_cursor: Option<String>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Default, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct ThreadLoadedListParams {
/// Opaque pagination cursor returned by a previous call.
pub cursor: Option<String>,
/// Optional page size; defaults to no limit.
pub limit: Option<u32>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct ThreadLoadedListResponse {
/// Thread ids for sessions currently loaded in memory.
pub data: Vec<String>,
/// Opaque cursor to pass to the next call to continue after the last item.
/// If `None`, there are no more items to return.
pub next_cursor: Option<String>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
@@ -1183,7 +1316,7 @@ pub struct Thread {
pub source: SessionSource,
/// Optional Git metadata captured when the thread was created.
pub git_info: Option<GitInfo>,
/// Only populated on a `thread/resume` response.
/// Only populated on `thread/resume`, `thread/rollback`, and `thread/fork` responses.
/// For all other responses and notifications returning a Thread,
/// the turns field will be an empty list.
pub turns: Vec<Turn>,
@@ -1211,6 +1344,7 @@ pub struct ThreadTokenUsageUpdatedNotification {
pub struct ThreadTokenUsage {
pub total: TokenUsageBreakdown,
pub last: TokenUsageBreakdown,
// TODO(aibrahim): make this not optional
#[ts(type = "number | null")]
pub model_context_window: Option<i64>,
}
@@ -1258,7 +1392,7 @@ impl From<CoreTokenUsage> for TokenUsageBreakdown {
#[ts(export_to = "v2/")]
pub struct Turn {
pub id: String,
/// Only populated on a `thread/resume` response.
/// Only populated on a `thread/resume`, `thread/rollback`, or `thread/fork` response.
/// For all other responses and notifications returning a Turn,
/// the items field will be an empty list.
pub items: Vec<ThreadItem>,
@@ -1404,6 +1538,7 @@ pub enum UserInput {
Text { text: String },
Image { url: String },
LocalImage { path: PathBuf },
Skill { name: String, path: PathBuf },
}
impl UserInput {
@@ -1412,6 +1547,7 @@ impl UserInput {
UserInput::Text { text } => CoreUserInput::Text { text },
UserInput::Image { url } => CoreUserInput::Image { image_url: url },
UserInput::LocalImage { path } => CoreUserInput::LocalImage { path },
UserInput::Skill { name, path } => CoreUserInput::Skill { name, path },
}
}
}
@@ -1422,6 +1558,7 @@ impl From<CoreUserInput> for UserInput {
CoreUserInput::Text { text } => UserInput::Text { text },
CoreUserInput::Image { image_url } => UserInput::Image { url: image_url },
CoreUserInput::LocalImage { path } => UserInput::LocalImage { path },
CoreUserInput::Skill { name, path } => UserInput::Skill { name, path },
_ => unreachable!("unsupported user input variant"),
}
}
@@ -1848,7 +1985,7 @@ pub struct CommandExecutionRequestApprovalParams {
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct CommandExecutionRequestApprovalResponse {
pub decision: ApprovalDecision,
pub decision: CommandExecutionApprovalDecision,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
@@ -1868,7 +2005,7 @@ pub struct FileChangeRequestApprovalParams {
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[ts(export_to = "v2/")]
pub struct FileChangeRequestApprovalResponse {
pub decision: ApprovalDecision,
pub decision: FileChangeApprovalDecision,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
@@ -2007,6 +2144,10 @@ mod tests {
CoreUserInput::LocalImage {
path: PathBuf::from("local/image.png"),
},
CoreUserInput::Skill {
name: "skill-creator".to_string(),
path: PathBuf::from("/repo/.codex/skills/skill-creator/SKILL.md"),
},
],
});
@@ -2024,6 +2165,10 @@ mod tests {
UserInput::LocalImage {
path: PathBuf::from("local/image.png"),
},
UserInput::Skill {
name: "skill-creator".to_string(),
path: PathBuf::from("/repo/.codex/skills/skill-creator/SKILL.md"),
},
],
}
);

View File

@@ -13,16 +13,18 @@ use std::time::Duration;
use anyhow::Context;
use anyhow::Result;
use anyhow::bail;
use clap::ArgAction;
use clap::Parser;
use clap::Subcommand;
use codex_app_server_protocol::AddConversationListenerParams;
use codex_app_server_protocol::AddConversationSubscriptionResponse;
use codex_app_server_protocol::ApprovalDecision;
use codex_app_server_protocol::AskForApproval;
use codex_app_server_protocol::ClientInfo;
use codex_app_server_protocol::ClientRequest;
use codex_app_server_protocol::CommandExecutionApprovalDecision;
use codex_app_server_protocol::CommandExecutionRequestApprovalParams;
use codex_app_server_protocol::CommandExecutionRequestApprovalResponse;
use codex_app_server_protocol::FileChangeApprovalDecision;
use codex_app_server_protocol::FileChangeRequestApprovalParams;
use codex_app_server_protocol::FileChangeRequestApprovalResponse;
use codex_app_server_protocol::GetAccountRateLimitsResponse;
@@ -35,6 +37,8 @@ use codex_app_server_protocol::JSONRPCRequest;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::LoginChatGptCompleteNotification;
use codex_app_server_protocol::LoginChatGptResponse;
use codex_app_server_protocol::ModelListParams;
use codex_app_server_protocol::ModelListResponse;
use codex_app_server_protocol::NewConversationParams;
use codex_app_server_protocol::NewConversationResponse;
use codex_app_server_protocol::RequestId;
@@ -49,7 +53,7 @@ use codex_app_server_protocol::TurnStartParams;
use codex_app_server_protocol::TurnStartResponse;
use codex_app_server_protocol::TurnStatus;
use codex_app_server_protocol::UserInput as V2UserInput;
use codex_protocol::ConversationId;
use codex_protocol::ThreadId;
use codex_protocol::protocol::Event;
use codex_protocol::protocol::EventMsg;
use serde::Serialize;
@@ -65,6 +69,19 @@ struct Cli {
#[arg(long, env = "CODEX_BIN", default_value = "codex")]
codex_bin: String,
/// Forwarded to the `codex` CLI as `--config key=value`. Repeatable.
///
/// Example:
/// `--config 'model_providers.mock.base_url="http://localhost:4010/v2"'`
#[arg(
short = 'c',
long = "config",
value_name = "key=value",
action = ArgAction::Append,
global = true
)]
config_overrides: Vec<String>,
#[command(subcommand)]
command: CliCommand,
}
@@ -113,37 +130,54 @@ enum CliCommand {
TestLogin,
/// Fetch the current account rate limits from the Codex app-server.
GetAccountRateLimits,
/// List the available models from the Codex app-server.
#[command(name = "model-list")]
ModelList,
}
fn main() -> Result<()> {
let Cli { codex_bin, command } = Cli::parse();
let Cli {
codex_bin,
config_overrides,
command,
} = Cli::parse();
match command {
CliCommand::SendMessage { user_message } => send_message(codex_bin, user_message),
CliCommand::SendMessageV2 { user_message } => send_message_v2(codex_bin, user_message),
CliCommand::SendMessage { user_message } => {
send_message(&codex_bin, &config_overrides, user_message)
}
CliCommand::SendMessageV2 { user_message } => {
send_message_v2(&codex_bin, &config_overrides, user_message)
}
CliCommand::TriggerCmdApproval { user_message } => {
trigger_cmd_approval(codex_bin, user_message)
trigger_cmd_approval(&codex_bin, &config_overrides, user_message)
}
CliCommand::TriggerPatchApproval { user_message } => {
trigger_patch_approval(codex_bin, user_message)
trigger_patch_approval(&codex_bin, &config_overrides, user_message)
}
CliCommand::NoTriggerCmdApproval => no_trigger_cmd_approval(codex_bin),
CliCommand::NoTriggerCmdApproval => no_trigger_cmd_approval(&codex_bin, &config_overrides),
CliCommand::SendFollowUpV2 {
first_message,
follow_up_message,
} => send_follow_up_v2(codex_bin, first_message, follow_up_message),
CliCommand::TestLogin => test_login(codex_bin),
CliCommand::GetAccountRateLimits => get_account_rate_limits(codex_bin),
} => send_follow_up_v2(
&codex_bin,
&config_overrides,
first_message,
follow_up_message,
),
CliCommand::TestLogin => test_login(&codex_bin, &config_overrides),
CliCommand::GetAccountRateLimits => get_account_rate_limits(&codex_bin, &config_overrides),
CliCommand::ModelList => model_list(&codex_bin, &config_overrides),
}
}
fn send_message(codex_bin: String, user_message: String) -> Result<()> {
let mut client = CodexClient::spawn(codex_bin)?;
fn send_message(codex_bin: &str, config_overrides: &[String], user_message: String) -> Result<()> {
let mut client = CodexClient::spawn(codex_bin, config_overrides)?;
let initialize = client.initialize()?;
println!("< initialize response: {initialize:?}");
let conversation = client.new_conversation()?;
let conversation = client.start_thread()?;
println!("< newConversation response: {conversation:?}");
let subscription = client.add_conversation_listener(&conversation.conversation_id)?;
@@ -154,51 +188,66 @@ fn send_message(codex_bin: String, user_message: String) -> Result<()> {
client.stream_conversation(&conversation.conversation_id)?;
client.remove_conversation_listener(subscription.subscription_id)?;
client.remove_thread_listener(subscription.subscription_id)?;
Ok(())
}
fn send_message_v2(codex_bin: String, user_message: String) -> Result<()> {
send_message_v2_with_policies(codex_bin, user_message, None, None)
fn send_message_v2(
codex_bin: &str,
config_overrides: &[String],
user_message: String,
) -> Result<()> {
send_message_v2_with_policies(codex_bin, config_overrides, user_message, None, None)
}
fn trigger_cmd_approval(codex_bin: String, user_message: Option<String>) -> Result<()> {
fn trigger_cmd_approval(
codex_bin: &str,
config_overrides: &[String],
user_message: Option<String>,
) -> Result<()> {
let default_prompt =
"Run `touch /tmp/should-trigger-approval` so I can confirm the file exists.";
let message = user_message.unwrap_or_else(|| default_prompt.to_string());
send_message_v2_with_policies(
codex_bin,
config_overrides,
message,
Some(AskForApproval::OnRequest),
Some(SandboxPolicy::ReadOnly),
)
}
fn trigger_patch_approval(codex_bin: String, user_message: Option<String>) -> Result<()> {
fn trigger_patch_approval(
codex_bin: &str,
config_overrides: &[String],
user_message: Option<String>,
) -> Result<()> {
let default_prompt =
"Create a file named APPROVAL_DEMO.txt containing a short hello message using apply_patch.";
let message = user_message.unwrap_or_else(|| default_prompt.to_string());
send_message_v2_with_policies(
codex_bin,
config_overrides,
message,
Some(AskForApproval::OnRequest),
Some(SandboxPolicy::ReadOnly),
)
}
fn no_trigger_cmd_approval(codex_bin: String) -> Result<()> {
fn no_trigger_cmd_approval(codex_bin: &str, config_overrides: &[String]) -> Result<()> {
let prompt = "Run `touch should_not_trigger_approval.txt`";
send_message_v2_with_policies(codex_bin, prompt.to_string(), None, None)
send_message_v2_with_policies(codex_bin, config_overrides, prompt.to_string(), None, None)
}
fn send_message_v2_with_policies(
codex_bin: String,
codex_bin: &str,
config_overrides: &[String],
user_message: String,
approval_policy: Option<AskForApproval>,
sandbox_policy: Option<SandboxPolicy>,
) -> Result<()> {
let mut client = CodexClient::spawn(codex_bin)?;
let mut client = CodexClient::spawn(codex_bin, config_overrides)?;
let initialize = client.initialize()?;
println!("< initialize response: {initialize:?}");
@@ -222,11 +271,12 @@ fn send_message_v2_with_policies(
}
fn send_follow_up_v2(
codex_bin: String,
codex_bin: &str,
config_overrides: &[String],
first_message: String,
follow_up_message: String,
) -> Result<()> {
let mut client = CodexClient::spawn(codex_bin)?;
let mut client = CodexClient::spawn(codex_bin, config_overrides)?;
let initialize = client.initialize()?;
println!("< initialize response: {initialize:?}");
@@ -259,8 +309,8 @@ fn send_follow_up_v2(
Ok(())
}
fn test_login(codex_bin: String) -> Result<()> {
let mut client = CodexClient::spawn(codex_bin)?;
fn test_login(codex_bin: &str, config_overrides: &[String]) -> Result<()> {
let mut client = CodexClient::spawn(codex_bin, config_overrides)?;
let initialize = client.initialize()?;
println!("< initialize response: {initialize:?}");
@@ -289,8 +339,8 @@ fn test_login(codex_bin: String) -> Result<()> {
}
}
fn get_account_rate_limits(codex_bin: String) -> Result<()> {
let mut client = CodexClient::spawn(codex_bin)?;
fn get_account_rate_limits(codex_bin: &str, config_overrides: &[String]) -> Result<()> {
let mut client = CodexClient::spawn(codex_bin, config_overrides)?;
let initialize = client.initialize()?;
println!("< initialize response: {initialize:?}");
@@ -301,6 +351,18 @@ fn get_account_rate_limits(codex_bin: String) -> Result<()> {
Ok(())
}
fn model_list(codex_bin: &str, config_overrides: &[String]) -> Result<()> {
let mut client = CodexClient::spawn(codex_bin, config_overrides)?;
let initialize = client.initialize()?;
println!("< initialize response: {initialize:?}");
let response = client.model_list(ModelListParams::default())?;
println!("< model/list response: {response:?}");
Ok(())
}
struct CodexClient {
child: Child,
stdin: Option<ChildStdin>,
@@ -309,8 +371,12 @@ struct CodexClient {
}
impl CodexClient {
fn spawn(codex_bin: String) -> Result<Self> {
let mut codex_app_server = Command::new(&codex_bin)
fn spawn(codex_bin: &str, config_overrides: &[String]) -> Result<Self> {
let mut cmd = Command::new(codex_bin);
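// Forward each `-c key=value` override to the spawned `codex app-server` process.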
for override_kv in config_overrides {
cmd.arg("--config").arg(override_kv);
}
let mut codex_app_server = cmd
.arg("app-server")
.stdin(Stdio::piped())
.stdout(Stdio::piped())
@@ -351,7 +417,7 @@ impl CodexClient {
self.send_request(request, request_id, "initialize")
}
fn new_conversation(&mut self) -> Result<NewConversationResponse> {
fn start_thread(&mut self) -> Result<NewConversationResponse> {
let request_id = self.request_id();
let request = ClientRequest::NewConversation {
request_id: request_id.clone(),
@@ -363,7 +429,7 @@ impl CodexClient {
fn add_conversation_listener(
&mut self,
conversation_id: &ConversationId,
conversation_id: &ThreadId,
) -> Result<AddConversationSubscriptionResponse> {
let request_id = self.request_id();
let request = ClientRequest::AddConversationListener {
@@ -377,7 +443,7 @@ impl CodexClient {
self.send_request(request, request_id, "addConversationListener")
}
fn remove_conversation_listener(&mut self, subscription_id: Uuid) -> Result<()> {
fn remove_thread_listener(&mut self, subscription_id: Uuid) -> Result<()> {
let request_id = self.request_id();
let request = ClientRequest::RemoveConversationListener {
request_id: request_id.clone(),
@@ -395,7 +461,7 @@ impl CodexClient {
fn send_user_message(
&mut self,
conversation_id: &ConversationId,
conversation_id: &ThreadId,
message: &str,
) -> Result<SendUserMessageResponse> {
let request_id = self.request_id();
@@ -452,7 +518,17 @@ impl CodexClient {
self.send_request(request, request_id, "account/rateLimits/read")
}
fn stream_conversation(&mut self, conversation_id: &ConversationId) -> Result<()> {
fn model_list(&mut self, params: ModelListParams) -> Result<ModelListResponse> {
let request_id = self.request_id();
let request = ClientRequest::ModelList {
request_id: request_id.clone(),
params,
};
self.send_request(request, request_id, "model/list")
}
fn stream_conversation(&mut self, conversation_id: &ThreadId) -> Result<()> {
loop {
let notification = self.next_notification()?;
@@ -589,7 +665,7 @@ impl CodexClient {
fn extract_event(
&self,
notification: JSONRPCNotification,
conversation_id: &ConversationId,
conversation_id: &ThreadId,
) -> Result<Option<Event>> {
let params = notification
.params
@@ -603,7 +679,7 @@ impl CodexClient {
let conversation_value = map
.remove("conversationId")
.context("event missing conversationId")?;
let notification_conversation: ConversationId = serde_json::from_value(conversation_value)
let notification_conversation: ThreadId = serde_json::from_value(conversation_value)
.context("conversationId was not a valid UUID")?;
if &notification_conversation != conversation_id {
@@ -770,7 +846,7 @@ impl CodexClient {
}
let response = CommandExecutionRequestApprovalResponse {
decision: ApprovalDecision::Accept,
decision: CommandExecutionApprovalDecision::Accept,
};
self.send_server_request_response(request_id, &response)?;
println!("< approved commandExecution request for item {item_id}");
@@ -801,7 +877,7 @@ impl CodexClient {
}
let response = FileChangeRequestApprovalResponse {
decision: ApprovalDecision::Accept,
decision: FileChangeApprovalDecision::Accept,
};
self.send_server_request_response(request_id, &response)?;
println!("< approved fileChange request for item {item_id}");

View File

@@ -11,6 +11,8 @@
- [Initialization](#initialization)
- [API Overview](#api-overview)
- [Events](#events)
- [Approvals](#approvals)
- [Skills](#skills)
- [Auth endpoints](#auth-endpoints)
## Protocol
@@ -39,7 +41,7 @@ Use the thread APIs to create, list, or archive conversations. Drive a conversat
## Lifecycle Overview
- Initialize once: Immediately after launching the codex app-server process, send an `initialize` request with your client metadata, then emit an `initialized` notification. Any other request before this handshake gets rejected.
- Start (or resume) a thread: Call `thread/start` to open a fresh conversation. The response returns the thread object and you'll also get a `thread/started` notification. If you're continuing an existing conversation, call `thread/resume` with its ID instead.
- Start (or resume) a thread: Call `thread/start` to open a fresh conversation. The response returns the thread object and you'll also get a `thread/started` notification. If you're continuing an existing conversation, call `thread/resume` with its ID instead. If you want to branch from an existing conversation, call `thread/fork` to create a new thread id with copied history.
- Begin a turn: To send user input, call `turn/start` with the target `threadId` and the user's input. Optional fields let you override model, cwd, sandbox policy, etc. This immediately returns the new turn object and triggers a `turn/started` notification.
- Stream events: After `turn/start`, keep reading JSON-RPC notifications on stdout. You'll see `item/started`, `item/completed`, deltas like `item/agentMessage/delta`, tool progress, etc. These represent streaming model output plus any side effects (commands, tool calls, reasoning notes).
- Finish the turn: When the model is done (or the turn is interrupted via making the `turn/interrupt` call), the server sends `turn/completed` with the final turn state and token usage.
@@ -70,8 +72,11 @@ Example (from OpenAI's official VSCode extension):
- `thread/start` — create a new thread; emits `thread/started` and auto-subscribes you to turn/item events for that thread.
- `thread/resume` — reopen an existing thread by id so subsequent `turn/start` calls append to it.
- `thread/fork` — fork an existing thread into a new thread id by copying the stored history; emits `thread/started` and auto-subscribes you to turn/item events for the new thread.
- `thread/list` — page through stored rollouts; supports cursor-based pagination and optional `modelProviders` filtering.
- `thread/loaded/list` — list the thread ids currently loaded in memory.
- `thread/archive` — move a thread's rollout file into the archived directory; returns `{}` on success.
- `thread/rollback` — drop the last N turns from the agent's in-memory context and persist a rollback marker in the rollout so future resumes see the pruned history; returns the updated `thread` (with `turns` populated) on success.
- `turn/start` — add user input to a thread and begin Codex generation; responds with the initial `turn` object and streams `turn/started`, `item/*`, and `turn/completed` notifications.
- `turn/interrupt` — request cancellation of an in-flight turn by `(thread_id, turn_id)`; success is an empty `{}` response and the turn finishes with `status: "interrupted"`.
- `review/start` — kick off Codex's automated reviewer for a thread; responds like `turn/start` and emits `item/started`/`item/completed` notifications with `enteredReviewMode` and `exitedReviewMode` items, plus a final assistant `agentMessage` containing the review.
@@ -85,6 +90,7 @@ Example (from OpenAI's official VSCode extension):
- `config/read` — fetch the effective config on disk after resolving config layering.
- `config/value/write` — write a single config key/value to the user's config.toml on disk.
- `config/batchWrite` — apply multiple config edits atomically to the user's config.toml on disk.
- `configRequirements/read` — fetch the loaded requirements allow-lists from `requirements.toml` and/or MDM (or `null` if none are configured).
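### Example: Read config requirements

`configRequirements/read` takes no params. A sketch of the exchange, assuming no `requirements.toml`/MDM entries are configured, so `requirements` comes back `null` (when requirements exist, it carries the `allowedApprovalPolicies` and `allowedSandboxModes` allow-lists):

```json
{ "method": "configRequirements/read", "id": 13 }
{ "id": 13, "result": { "requirements": null } }
```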
### Example: Start or resume a thread
@@ -117,6 +123,14 @@ To continue a stored session, call `thread/resume` with the `thread.id` you prev
{ "id": 11, "result": { "thread": { "id": "thr_123", } } }
```
To branch from a stored session, call `thread/fork` with the `thread.id`. This creates a new thread id and emits a `thread/started` notification for it:
```json
{ "method": "thread/fork", "id": 12, "params": { "threadId": "thr_123" } }
{ "id": 12, "result": { "thread": { "id": "thr_456", } } }
{ "method": "thread/started", "params": { "thread": { } } }
```
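### Example: Roll back a thread

To drop the last N turns from a thread's history, call `thread/rollback`. A sketch of the exchange (response abridged): the returned `thread` comes back with the pruned `turns` populated, and the rollback does not revert any local file changes the agent made:

```json
{ "method": "thread/rollback", "id": 14, "params": { "threadId": "thr_123", "numTurns": 1 } }
{ "id": 14, "result": { "thread": { "id": "thr_123", "turns": [ … ] } } }
```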
### Example: List threads (with pagination & filters)
`thread/list` lets you render a history UI. Pass any combination of:
@@ -143,6 +157,17 @@ Example:
When `nextCursor` is `null`, you've reached the final page.
### Example: List loaded threads
`thread/loaded/list` returns thread ids currently loaded in memory. This is useful when you want to check which sessions are active without scanning rollouts on disk.
```json
{ "method": "thread/loaded/list", "id": 21 }
{ "id": 21, "result": {
"data": ["thr_123", "thr_456"]
} }
```
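To page through the loaded set, pass `limit` and feed each `nextCursor` back as `cursor` on the following call. A sketch, assuming two loaded threads (the cursor value is opaque and shown abridged):

```json
{ "method": "thread/loaded/list", "id": 22, "params": { "limit": 1 } }
{ "id": 22, "result": { "data": ["thr_123"], "nextCursor": "…" } }
```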
### Example: Archive a thread
Use `thread/archive` to move the persisted rollout (stored as a JSONL file on disk) into the archived sessions directory.
@@ -195,6 +220,26 @@ You can optionally specify config overrides on the new turn. If specified, these
} } }
```
### Example: Start a turn (invoke a skill)
Invoke a skill explicitly by including `$<skill-name>` in the text input and adding a `skill` input item alongside it.
```json
{ "method": "turn/start", "id": 33, "params": {
"threadId": "thr_123",
"input": [
{ "type": "text", "text": "$skill-creator Add a new skill for triaging flaky CI and include step-by-step usage." },
{ "type": "skill", "name": "skill-creator", "path": "/Users/me/.codex/skills/skill-creator/SKILL.md" }
]
} }
{ "id": 33, "result": { "turn": {
"id": "turn_457",
"status": "inProgress",
"items": [],
"error": null
} } }
```
### Example: Interrupt an active turn
You can cancel a running Turn with `turn/interrupt`.
@@ -404,6 +449,46 @@ Order of messages:
UI guidance for IDEs: surface an approval dialog as soon as the request arrives. The turn will proceed after the server receives a response to the approval request. The terminal `item/completed` notification will be sent with the appropriate status.
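For example, a reply that approves a command for the rest of the session is an ordinary JSON-RPC response to the server's approval request (a sketch; the `id` echoes the server request's id, and `decision` is one of `accept`, `acceptForSession`, `acceptWithExecpolicyAmendment`, `decline`, or `cancel` for commands, with file changes accepting the same set minus the execpolicy variant):

```json
{ "id": 57, "result": { "decision": "acceptForSession" } }
```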
## Skills
Invoke a skill by including `$<skill-name>` in the text input. Add a `skill` input item (recommended) so the backend injects full skill instructions instead of relying on the model to resolve the name.
```json
{
"method": "turn/start",
"id": 101,
"params": {
"threadId": "thread-1",
"input": [
{ "type": "text", "text": "$skill-creator Add a new skill for triaging flaky CI." },
{ "type": "skill", "name": "skill-creator", "path": "/Users/me/.codex/skills/skill-creator/SKILL.md" }
]
}
}
```
If you omit the `skill` item, the model will still parse the `$<skill-name>` marker and try to locate the skill, which can add latency.
Example:
```
$skill-creator Add a new skill for triaging flaky CI and include step-by-step usage.
```
Use `skills/list` to fetch the available skills (optionally scoped by `cwd` and/or with `forceReload`).
```json
{ "method": "skills/list", "id": 25, "params": {
"cwd": "/Users/me/project",
"forceReload": false
} }
{ "id": 25, "result": {
"skills": [
{ "name": "skill-creator", "description": "Create or update a Codex skill" }
]
} }
```
## Auth endpoints
The JSON-RPC auth/account surface exposes request/response methods plus server-initiated notifications (no `id`). Use these to determine auth state, start or cancel logins, logout, and inspect ChatGPT rate limits.

View File

@@ -1,15 +1,21 @@
use crate::codex_message_processor::ApiVersion;
use crate::codex_message_processor::PendingInterrupts;
use crate::codex_message_processor::PendingRollbacks;
use crate::codex_message_processor::TurnSummary;
use crate::codex_message_processor::TurnSummaryStore;
use crate::codex_message_processor::read_event_msgs_from_rollout;
use crate::codex_message_processor::read_summary_from_rollout;
use crate::codex_message_processor::summary_to_thread;
use crate::error_code::INTERNAL_ERROR_CODE;
use crate::error_code::INVALID_REQUEST_ERROR_CODE;
use crate::outgoing_message::OutgoingMessageSender;
use codex_app_server_protocol::AccountRateLimitsUpdatedNotification;
use codex_app_server_protocol::AgentMessageDeltaNotification;
use codex_app_server_protocol::ApplyPatchApprovalParams;
use codex_app_server_protocol::ApplyPatchApprovalResponse;
use codex_app_server_protocol::ApprovalDecision;
use codex_app_server_protocol::CodexErrorInfo as V2CodexErrorInfo;
use codex_app_server_protocol::CommandAction as V2ParsedCommand;
use codex_app_server_protocol::CommandExecutionApprovalDecision;
use codex_app_server_protocol::CommandExecutionOutputDeltaNotification;
use codex_app_server_protocol::CommandExecutionRequestApprovalParams;
use codex_app_server_protocol::CommandExecutionRequestApprovalResponse;
@@ -20,6 +26,7 @@ use codex_app_server_protocol::ErrorNotification;
use codex_app_server_protocol::ExecCommandApprovalParams;
use codex_app_server_protocol::ExecCommandApprovalResponse;
use codex_app_server_protocol::ExecPolicyAmendment as V2ExecPolicyAmendment;
use codex_app_server_protocol::FileChangeApprovalDecision;
use codex_app_server_protocol::FileChangeOutputDeltaNotification;
use codex_app_server_protocol::FileChangeRequestApprovalParams;
use codex_app_server_protocol::FileChangeRequestApprovalResponse;
@@ -27,6 +34,7 @@ use codex_app_server_protocol::FileUpdateChange;
use codex_app_server_protocol::InterruptConversationResponse;
use codex_app_server_protocol::ItemCompletedNotification;
use codex_app_server_protocol::ItemStartedNotification;
use codex_app_server_protocol::JSONRPCErrorError;
use codex_app_server_protocol::McpToolCallError;
use codex_app_server_protocol::McpToolCallResult;
use codex_app_server_protocol::McpToolCallStatus;
@@ -40,6 +48,7 @@ use codex_app_server_protocol::ServerNotification;
use codex_app_server_protocol::ServerRequestPayload;
use codex_app_server_protocol::TerminalInteractionNotification;
use codex_app_server_protocol::ThreadItem;
use codex_app_server_protocol::ThreadRollbackResponse;
use codex_app_server_protocol::ThreadTokenUsage;
use codex_app_server_protocol::ThreadTokenUsageUpdatedNotification;
use codex_app_server_protocol::Turn;
@@ -50,9 +59,11 @@ use codex_app_server_protocol::TurnInterruptResponse;
use codex_app_server_protocol::TurnPlanStep;
use codex_app_server_protocol::TurnPlanUpdatedNotification;
use codex_app_server_protocol::TurnStatus;
use codex_core::CodexConversation;
use codex_app_server_protocol::build_turns_from_event_msgs;
use codex_core::CodexThread;
use codex_core::parse_command::shlex_join;
use codex_core::protocol::ApplyPatchApprovalRequestEvent;
use codex_core::protocol::CodexErrorInfo as CoreCodexErrorInfo;
use codex_core::protocol::Event;
use codex_core::protocol::EventMsg;
use codex_core::protocol::ExecApprovalRequestEvent;
@@ -66,7 +77,7 @@ use codex_core::protocol::TokenCountEvent;
use codex_core::protocol::TurnDiffEvent;
use codex_core::review_format::format_review_findings_block;
use codex_core::review_prompts;
use codex_protocol::ConversationId;
use codex_protocol::ThreadId;
use codex_protocol::plan_tool::UpdatePlanArgs;
use codex_protocol::protocol::ReviewOutputEvent;
use std::collections::HashMap;
@@ -78,14 +89,17 @@ use tracing::error;
type JsonValue = serde_json::Value;
#[allow(clippy::too_many_arguments)]
pub(crate) async fn apply_bespoke_event_handling(
event: Event,
conversation_id: ConversationId,
conversation: Arc<CodexConversation>,
conversation_id: ThreadId,
conversation: Arc<CodexThread>,
outgoing: Arc<OutgoingMessageSender>,
pending_interrupts: PendingInterrupts,
pending_rollbacks: PendingRollbacks,
turn_summary_store: TurnSummaryStore,
api_version: ApiVersion,
fallback_model_provider: String,
) {
let Event {
id: event_turn_id,
@@ -337,6 +351,26 @@ pub(crate) async fn apply_bespoke_event_handling(
.await;
}
EventMsg::Error(ev) => {
let message = ev.message.clone();
let codex_error_info = ev.codex_error_info.clone();
// If this error belongs to an in-flight `thread/rollback` request, fail that request
// (and clear pending state) so subsequent rollbacks are unblocked.
//
// Don't send a notification for this error.
if matches!(
codex_error_info,
Some(CoreCodexErrorInfo::ThreadRollbackFailed)
) {
return handle_thread_rollback_failed(
conversation_id,
message,
&pending_rollbacks,
&outgoing,
)
.await;
};
let turn_error = TurnError {
message: ev.message,
codex_error_info: ev.codex_error_info.map(V2CodexErrorInfo::from),
@@ -345,7 +379,7 @@ pub(crate) async fn apply_bespoke_event_handling(
handle_error(conversation_id, turn_error.clone(), &turn_summary_store).await;
outgoing
.send_server_notification(ServerNotification::Error(ErrorNotification {
error: turn_error,
error: turn_error.clone(),
will_retry: false,
thread_id: conversation_id.to_string(),
turn_id: event_turn_id.clone(),
@@ -690,6 +724,58 @@ pub(crate) async fn apply_bespoke_event_handling(
)
.await;
}
EventMsg::ThreadRolledBack(_rollback_event) => {
let pending = {
let mut map = pending_rollbacks.lock().await;
map.remove(&conversation_id)
};
if let Some(request_id) = pending {
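// A pending entry means this event completes an in-flight `thread/rollback` request;
// rebuild the pruned thread from the persisted rollout to populate the response.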
let rollout_path = conversation.rollout_path();
let response = match read_summary_from_rollout(
rollout_path.as_path(),
fallback_model_provider.as_str(),
)
.await
{
Ok(summary) => {
let mut thread = summary_to_thread(summary);
match read_event_msgs_from_rollout(rollout_path.as_path()).await {
Ok(events) => {
thread.turns = build_turns_from_event_msgs(&events);
ThreadRollbackResponse { thread }
}
Err(err) => {
let error = JSONRPCErrorError {
code: INTERNAL_ERROR_CODE,
message: format!(
"failed to load rollout `{}`: {err}",
rollout_path.display()
),
data: None,
};
outgoing.send_error(request_id, error).await;
return;
}
}
}
Err(err) => {
let error = JSONRPCErrorError {
code: INTERNAL_ERROR_CODE,
message: format!(
"failed to load rollout `{}`: {err}",
rollout_path.display()
),
data: None,
};
outgoing.send_error(request_id, error).await;
return;
}
};
outgoing.send_response(request_id, response).await;
}
}
EventMsg::TurnDiff(turn_diff_event) => {
handle_turn_diff(
conversation_id,
@@ -716,7 +802,7 @@ pub(crate) async fn apply_bespoke_event_handling(
}
async fn handle_turn_diff(
conversation_id: ConversationId,
conversation_id: ThreadId,
event_turn_id: &str,
turn_diff_event: TurnDiffEvent,
api_version: ApiVersion,
@@ -735,7 +821,7 @@ async fn handle_turn_diff(
}
async fn handle_turn_plan_update(
conversation_id: ConversationId,
conversation_id: ThreadId,
event_turn_id: &str,
plan_update_event: UpdatePlanArgs,
api_version: ApiVersion,
@@ -759,7 +845,7 @@ async fn handle_turn_plan_update(
}
async fn emit_turn_completed_with_status(
conversation_id: ConversationId,
conversation_id: ThreadId,
event_turn_id: String,
status: TurnStatus,
error: Option<TurnError>,
@@ -780,7 +866,7 @@ async fn emit_turn_completed_with_status(
}
async fn complete_file_change_item(
conversation_id: ConversationId,
conversation_id: ThreadId,
item_id: String,
changes: Vec<FileUpdateChange>,
status: PatchApplyStatus,
@@ -812,7 +898,7 @@ async fn complete_file_change_item(
#[allow(clippy::too_many_arguments)]
async fn complete_command_execution_item(
conversation_id: ConversationId,
conversation_id: ThreadId,
turn_id: String,
item_id: String,
command: String,
@@ -845,7 +931,7 @@ async fn complete_command_execution_item(
async fn maybe_emit_raw_response_item_completed(
api_version: ApiVersion,
conversation_id: ConversationId,
conversation_id: ThreadId,
turn_id: &str,
item: codex_protocol::models::ResponseItem,
outgoing: &OutgoingMessageSender,
@@ -865,7 +951,7 @@ async fn maybe_emit_raw_response_item_completed(
}
async fn find_and_remove_turn_summary(
conversation_id: ConversationId,
conversation_id: ThreadId,
turn_summary_store: &TurnSummaryStore,
) -> TurnSummary {
let mut map = turn_summary_store.lock().await;
@@ -873,7 +959,7 @@ async fn find_and_remove_turn_summary(
}
async fn handle_turn_complete(
conversation_id: ConversationId,
conversation_id: ThreadId,
event_turn_id: String,
outgoing: &OutgoingMessageSender,
turn_summary_store: &TurnSummaryStore,
@@ -889,7 +975,7 @@ async fn handle_turn_complete(
}
async fn handle_turn_interrupted(
conversation_id: ConversationId,
conversation_id: ThreadId,
event_turn_id: String,
outgoing: &OutgoingMessageSender,
turn_summary_store: &TurnSummaryStore,
@@ -906,8 +992,33 @@ async fn handle_turn_interrupted(
.await;
}
async fn handle_thread_rollback_failed(
conversation_id: ThreadId,
message: String,
pending_rollbacks: &PendingRollbacks,
outgoing: &OutgoingMessageSender,
) {
let pending_rollback = {
let mut map = pending_rollbacks.lock().await;
map.remove(&conversation_id)
};
if let Some(request_id) = pending_rollback {
outgoing
.send_error(
request_id,
JSONRPCErrorError {
code: INVALID_REQUEST_ERROR_CODE,
message: message.clone(),
data: None,
},
)
.await;
}
}
async fn handle_token_count_event(
conversation_id: ConversationId,
conversation_id: ThreadId,
turn_id: String,
token_count_event: TokenCountEvent,
outgoing: &OutgoingMessageSender,
@@ -935,7 +1046,7 @@ async fn handle_token_count_event(
}
async fn handle_error(
conversation_id: ConversationId,
conversation_id: ThreadId,
error: TurnError,
turn_summary_store: &TurnSummaryStore,
) {
@@ -946,7 +1057,7 @@ async fn handle_error(
async fn on_patch_approval_response(
event_turn_id: String,
receiver: oneshot::Receiver<JsonValue>,
codex: Arc<CodexConversation>,
codex: Arc<CodexThread>,
) {
let response = receiver.await;
let value = match response {
@@ -988,7 +1099,7 @@ async fn on_patch_approval_response(
async fn on_exec_approval_response(
event_turn_id: String,
receiver: oneshot::Receiver<JsonValue>,
conversation: Arc<CodexConversation>,
conversation: Arc<CodexThread>,
) {
let response = receiver.await;
let value = match response {
@@ -1083,14 +1194,29 @@ fn format_file_change_diff(change: &CoreFileChange) -> String {
}
}
fn map_file_change_approval_decision(
decision: FileChangeApprovalDecision,
) -> (ReviewDecision, Option<PatchApplyStatus>) {
match decision {
FileChangeApprovalDecision::Accept => (ReviewDecision::Approved, None),
FileChangeApprovalDecision::AcceptForSession => (ReviewDecision::ApprovedForSession, None),
FileChangeApprovalDecision::Decline => {
(ReviewDecision::Denied, Some(PatchApplyStatus::Declined))
}
FileChangeApprovalDecision::Cancel => {
(ReviewDecision::Abort, Some(PatchApplyStatus::Declined))
}
}
}
#[allow(clippy::too_many_arguments)]
async fn on_file_change_request_approval_response(
event_turn_id: String,
conversation_id: ConversationId,
conversation_id: ThreadId,
item_id: String,
changes: Vec<FileUpdateChange>,
receiver: oneshot::Receiver<JsonValue>,
codex: Arc<CodexConversation>,
codex: Arc<CodexThread>,
outgoing: Arc<OutgoingMessageSender>,
turn_summary_store: TurnSummaryStore,
) {
@@ -1101,23 +1227,12 @@ async fn on_file_change_request_approval_response(
.unwrap_or_else(|err| {
error!("failed to deserialize FileChangeRequestApprovalResponse: {err}");
FileChangeRequestApprovalResponse {
decision: ApprovalDecision::Decline,
decision: FileChangeApprovalDecision::Decline,
}
});
let (decision, completion_status) = match response.decision {
ApprovalDecision::Accept
| ApprovalDecision::AcceptForSession
| ApprovalDecision::AcceptWithExecpolicyAmendment { .. } => {
(ReviewDecision::Approved, None)
}
ApprovalDecision::Decline => {
(ReviewDecision::Denied, Some(PatchApplyStatus::Declined))
}
ApprovalDecision::Cancel => {
(ReviewDecision::Abort, Some(PatchApplyStatus::Declined))
}
};
let (decision, completion_status) =
map_file_change_approval_decision(response.decision);
// Allow EventMsg::PatchApplyEnd to emit ItemCompleted for accepted patches.
// Only short-circuit on declines/cancels/failures.
(decision, completion_status)
@@ -1155,13 +1270,13 @@ async fn on_file_change_request_approval_response(
#[allow(clippy::too_many_arguments)]
async fn on_command_execution_request_approval_response(
event_turn_id: String,
conversation_id: ConversationId,
conversation_id: ThreadId,
item_id: String,
command: String,
cwd: PathBuf,
command_actions: Vec<V2ParsedCommand>,
receiver: oneshot::Receiver<JsonValue>,
conversation: Arc<CodexConversation>,
conversation: Arc<CodexThread>,
outgoing: Arc<OutgoingMessageSender>,
) {
let response = receiver.await;
@@ -1171,16 +1286,18 @@ async fn on_command_execution_request_approval_response(
.unwrap_or_else(|err| {
error!("failed to deserialize CommandExecutionRequestApprovalResponse: {err}");
CommandExecutionRequestApprovalResponse {
decision: ApprovalDecision::Decline,
decision: CommandExecutionApprovalDecision::Decline,
}
});
let decision = response.decision;
let (decision, completion_status) = match decision {
ApprovalDecision::Accept => (ReviewDecision::Approved, None),
ApprovalDecision::AcceptForSession => (ReviewDecision::ApprovedForSession, None),
ApprovalDecision::AcceptWithExecpolicyAmendment {
CommandExecutionApprovalDecision::Accept => (ReviewDecision::Approved, None),
CommandExecutionApprovalDecision::AcceptForSession => {
(ReviewDecision::ApprovedForSession, None)
}
CommandExecutionApprovalDecision::AcceptWithExecpolicyAmendment {
execpolicy_amendment,
} => (
ReviewDecision::ApprovedExecpolicyAmendment {
@@ -1188,11 +1305,11 @@ async fn on_command_execution_request_approval_response(
},
None,
),
ApprovalDecision::Decline => (
CommandExecutionApprovalDecision::Decline => (
ReviewDecision::Denied,
Some(CommandExecutionStatus::Declined),
),
ApprovalDecision::Cancel => (
CommandExecutionApprovalDecision::Cancel => (
ReviewDecision::Abort,
Some(CommandExecutionStatus::Declined),
),
@@ -1332,9 +1449,17 @@ mod tests {
Arc::new(Mutex::new(HashMap::new()))
}
#[test]
fn file_change_accept_for_session_maps_to_approved_for_session() {
let (decision, completion_status) =
map_file_change_approval_decision(FileChangeApprovalDecision::AcceptForSession);
assert_eq!(decision, ReviewDecision::ApprovedForSession);
assert_eq!(completion_status, None);
}
#[tokio::test]
async fn test_handle_error_records_message() -> Result<()> {
let conversation_id = ConversationId::new();
let conversation_id = ThreadId::new();
let turn_summary_store = new_turn_summary_store();
handle_error(
@@ -1362,7 +1487,7 @@ mod tests {
#[tokio::test]
async fn test_handle_turn_complete_emits_completed_without_error() -> Result<()> {
let conversation_id = ConversationId::new();
let conversation_id = ThreadId::new();
let event_turn_id = "complete1".to_string();
let (tx, mut rx) = mpsc::channel(CHANNEL_CAPACITY);
let outgoing = Arc::new(OutgoingMessageSender::new(tx));
@@ -1394,7 +1519,7 @@ mod tests {
#[tokio::test]
async fn test_handle_turn_interrupted_emits_interrupted_with_error() -> Result<()> {
- let conversation_id = ConversationId::new();
+ let conversation_id = ThreadId::new();
let event_turn_id = "interrupt1".to_string();
let turn_summary_store = new_turn_summary_store();
handle_error(
@@ -1436,7 +1561,7 @@ mod tests {
#[tokio::test]
async fn test_handle_turn_complete_emits_failed_with_error() -> Result<()> {
- let conversation_id = ConversationId::new();
+ let conversation_id = ThreadId::new();
let event_turn_id = "complete_err1".to_string();
let turn_summary_store = new_turn_summary_store();
handle_error(
@@ -1501,7 +1626,7 @@ mod tests {
],
};
- let conversation_id = ConversationId::new();
+ let conversation_id = ThreadId::new();
handle_turn_plan_update(
conversation_id,
@@ -1535,7 +1660,7 @@ mod tests {
#[tokio::test]
async fn test_handle_token_count_event_emits_usage_and_rate_limits() -> Result<()> {
- let conversation_id = ConversationId::new();
+ let conversation_id = ThreadId::new();
let turn_id = "turn-123".to_string();
let (tx, mut rx) = mpsc::channel(CHANNEL_CAPACITY);
let outgoing = Arc::new(OutgoingMessageSender::new(tx));
@@ -1620,7 +1745,7 @@ mod tests {
#[tokio::test]
async fn test_handle_token_count_event_without_usage_info() -> Result<()> {
- let conversation_id = ConversationId::new();
+ let conversation_id = ThreadId::new();
let turn_id = "turn-456".to_string();
let (tx, mut rx) = mpsc::channel(CHANNEL_CAPACITY);
let outgoing = Arc::new(OutgoingMessageSender::new(tx));
@@ -1654,7 +1779,7 @@ mod tests {
},
};
- let thread_id = ConversationId::new().to_string();
+ let thread_id = ThreadId::new().to_string();
let turn_id = "turn_1".to_string();
let notification = construct_mcp_tool_call_notification(
begin_event.clone(),
@@ -1684,8 +1809,8 @@ mod tests {
#[tokio::test]
async fn test_handle_turn_complete_emits_error_multiple_turns() -> Result<()> {
// Conversation A will have two turns; Conversation B will have one turn.
- let conversation_a = ConversationId::new();
- let conversation_b = ConversationId::new();
+ let conversation_a = ThreadId::new();
+ let conversation_b = ThreadId::new();
let turn_summary_store = new_turn_summary_store();
let (tx, mut rx) = mpsc::channel(CHANNEL_CAPACITY);
@@ -1812,7 +1937,7 @@ mod tests {
},
};
- let thread_id = ConversationId::new().to_string();
+ let thread_id = ThreadId::new().to_string();
let turn_id = "turn_2".to_string();
let notification = construct_mcp_tool_call_notification(
begin_event.clone(),
@@ -1863,7 +1988,7 @@ mod tests {
result: Ok(result),
};
- let thread_id = ConversationId::new().to_string();
+ let thread_id = ThreadId::new().to_string();
let turn_id = "turn_3".to_string();
let notification = construct_mcp_tool_call_end_notification(
end_event.clone(),
@@ -1906,7 +2031,7 @@ mod tests {
result: Err("boom".to_string()),
};
- let thread_id = ConversationId::new().to_string();
+ let thread_id = ThreadId::new().to_string();
let turn_id = "turn_4".to_string();
let notification = construct_mcp_tool_call_end_notification(
end_event.clone(),
@@ -1940,7 +2065,7 @@ mod tests {
let (tx, mut rx) = mpsc::channel(CHANNEL_CAPACITY);
let outgoing = OutgoingMessageSender::new(tx);
let unified_diff = "--- a\n+++ b\n".to_string();
- let conversation_id = ConversationId::new();
+ let conversation_id = ThreadId::new();
handle_turn_diff(
conversation_id,
@@ -1975,7 +2100,7 @@ mod tests {
async fn test_handle_turn_diff_is_noop_for_v1() -> Result<()> {
let (tx, mut rx) = mpsc::channel(CHANNEL_CAPACITY);
let outgoing = OutgoingMessageSender::new(tx);
- let conversation_id = ConversationId::new();
+ let conversation_id = ThreadId::new();
handle_turn_diff(
conversation_id,

File diff suppressed because it is too large.

View File

@@ -3,12 +3,18 @@ use crate::error_code::INVALID_REQUEST_ERROR_CODE;
use codex_app_server_protocol::ConfigBatchWriteParams;
use codex_app_server_protocol::ConfigReadParams;
use codex_app_server_protocol::ConfigReadResponse;
use codex_app_server_protocol::ConfigRequirements;
use codex_app_server_protocol::ConfigRequirementsReadResponse;
use codex_app_server_protocol::ConfigValueWriteParams;
use codex_app_server_protocol::ConfigWriteErrorCode;
use codex_app_server_protocol::ConfigWriteResponse;
use codex_app_server_protocol::JSONRPCErrorError;
use codex_app_server_protocol::SandboxMode;
use codex_core::config::ConfigService;
use codex_core::config::ConfigServiceError;
use codex_core::config_loader::ConfigRequirementsToml;
use codex_core::config_loader::LoaderOverrides;
use codex_core::config_loader::SandboxModeRequirement as CoreSandboxModeRequirement;
use serde_json::json;
use std::path::PathBuf;
use toml::Value as TomlValue;
@@ -19,9 +25,13 @@ pub(crate) struct ConfigApi {
}
impl ConfigApi {
- pub(crate) fn new(codex_home: PathBuf, cli_overrides: Vec<(String, TomlValue)>) -> Self {
+ pub(crate) fn new(
+     codex_home: PathBuf,
+     cli_overrides: Vec<(String, TomlValue)>,
+     loader_overrides: LoaderOverrides,
+ ) -> Self {
Self {
- service: ConfigService::new(codex_home, cli_overrides),
+ service: ConfigService::new(codex_home, cli_overrides, loader_overrides),
}
}
@@ -32,6 +42,19 @@ impl ConfigApi {
self.service.read(params).await.map_err(map_error)
}
pub(crate) async fn config_requirements_read(
&self,
) -> Result<ConfigRequirementsReadResponse, JSONRPCErrorError> {
let requirements = self
.service
.read_requirements()
.await
.map_err(map_error)?
.map(map_requirements_toml_to_api);
Ok(ConfigRequirementsReadResponse { requirements })
}
pub(crate) async fn write_value(
&self,
params: ConfigValueWriteParams,
@@ -47,6 +70,32 @@ impl ConfigApi {
}
}
fn map_requirements_toml_to_api(requirements: ConfigRequirementsToml) -> ConfigRequirements {
ConfigRequirements {
allowed_approval_policies: requirements.allowed_approval_policies.map(|policies| {
policies
.into_iter()
.map(codex_app_server_protocol::AskForApproval::from)
.collect()
}),
allowed_sandbox_modes: requirements.allowed_sandbox_modes.map(|modes| {
modes
.into_iter()
.filter_map(map_sandbox_mode_requirement_to_api)
.collect()
}),
}
}
fn map_sandbox_mode_requirement_to_api(mode: CoreSandboxModeRequirement) -> Option<SandboxMode> {
match mode {
CoreSandboxModeRequirement::ReadOnly => Some(SandboxMode::ReadOnly),
CoreSandboxModeRequirement::WorkspaceWrite => Some(SandboxMode::WorkspaceWrite),
CoreSandboxModeRequirement::DangerFullAccess => Some(SandboxMode::DangerFullAccess),
CoreSandboxModeRequirement::ExternalSandbox => None,
}
}
fn map_error(err: ConfigServiceError) -> JSONRPCErrorError {
if let Some(code) = err.write_error_code() {
return config_write_error(code, err.to_string());
@@ -68,3 +117,38 @@ fn config_write_error(code: ConfigWriteErrorCode, message: impl Into<String>) ->
})),
}
}
#[cfg(test)]
mod tests {
use super::*;
use codex_protocol::protocol::AskForApproval as CoreAskForApproval;
use pretty_assertions::assert_eq;
#[test]
fn map_requirements_toml_to_api_converts_core_enums() {
let requirements = ConfigRequirementsToml {
allowed_approval_policies: Some(vec![
CoreAskForApproval::Never,
CoreAskForApproval::OnRequest,
]),
allowed_sandbox_modes: Some(vec![
CoreSandboxModeRequirement::ReadOnly,
CoreSandboxModeRequirement::ExternalSandbox,
]),
};
let mapped = map_requirements_toml_to_api(requirements);
assert_eq!(
mapped.allowed_approval_policies,
Some(vec![
codex_app_server_protocol::AskForApproval::Never,
codex_app_server_protocol::AskForApproval::OnRequest,
])
);
assert_eq!(
mapped.allowed_sandbox_modes,
Some(vec![SandboxMode::ReadOnly]),
);
}
}

View File

@@ -1,7 +1,8 @@
#![deny(clippy::print_stdout, clippy::print_stderr)]
use codex_common::CliConfigOverrides;
- use codex_core::config::Config;
+ use codex_core::config::ConfigBuilder;
+ use codex_core::config_loader::LoaderOverrides;
use std::io::ErrorKind;
use std::io::Result as IoResult;
use std::path::PathBuf;
@@ -42,6 +43,7 @@ const CHANNEL_CAPACITY: usize = 128;
pub async fn run_main(
codex_linux_sandbox_exe: Option<PathBuf>,
cli_config_overrides: CliConfigOverrides,
loader_overrides: LoaderOverrides,
) -> IoResult<()> {
// Set up channels.
let (incoming_tx, mut incoming_rx) = mpsc::channel::<JSONRPCMessage>(CHANNEL_CAPACITY);
@@ -78,7 +80,11 @@ pub async fn run_main(
format!("error parsing -c overrides: {e}"),
)
})?;
- let config = Config::load_with_cli_overrides(cli_kv_overrides.clone())
+ let loader_overrides_for_config_api = loader_overrides.clone();
+ let config = ConfigBuilder::default()
+     .cli_overrides(cli_kv_overrides.clone())
+     .loader_overrides(loader_overrides)
+     .build()
.await
.map_err(|e| {
std::io::Error::new(ErrorKind::InvalidData, format!("error loading config: {e}"))
@@ -120,11 +126,13 @@ pub async fn run_main(
let processor_handle = tokio::spawn({
let outgoing_message_sender = OutgoingMessageSender::new(outgoing_tx);
let cli_overrides: Vec<(String, TomlValue)> = cli_kv_overrides.clone();
let loader_overrides = loader_overrides_for_config_api;
let mut processor = MessageProcessor::new(
outgoing_message_sender,
codex_linux_sandbox_exe,
std::sync::Arc::new(config),
cli_overrides,
loader_overrides,
feedback.clone(),
);
async move {

View File

@@ -1,10 +1,42 @@
use codex_app_server::run_main;
use codex_arg0::arg0_dispatch_or_else;
use codex_common::CliConfigOverrides;
use codex_core::config_loader::LoaderOverrides;
use std::path::PathBuf;
// Debug-only test hook: lets integration tests point the server at a temporary
// managed config file without writing to /etc.
const MANAGED_CONFIG_PATH_ENV_VAR: &str = "CODEX_APP_SERVER_MANAGED_CONFIG_PATH";
fn main() -> anyhow::Result<()> {
arg0_dispatch_or_else(|codex_linux_sandbox_exe| async move {
- run_main(codex_linux_sandbox_exe, CliConfigOverrides::default()).await?;
+ let managed_config_path = managed_config_path_from_debug_env();
+ let loader_overrides = LoaderOverrides {
+     managed_config_path,
+     ..Default::default()
+ };
+ run_main(
+     codex_linux_sandbox_exe,
+     CliConfigOverrides::default(),
+     loader_overrides,
+ )
+ .await?;
Ok(())
})
}
fn managed_config_path_from_debug_env() -> Option<PathBuf> {
#[cfg(debug_assertions)]
{
if let Ok(value) = std::env::var(MANAGED_CONFIG_PATH_ENV_VAR) {
return if value.is_empty() {
None
} else {
Some(PathBuf::from(value))
};
}
}
None
}
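The hook above is debug-only: release builds never consult the variable, and an empty value is treated as unset. A hedged sketch of how a caller might drive it by spawning a debug build with the variable set (the binary path and config body below are placeholders; the integration tests later in this diff pass the variable through McpProcess::new_with_env instead):

use std::path::PathBuf;
use std::process::Command;

fn main() -> std::io::Result<()> {
    // Write a throwaway managed config under the system temp dir.
    let dir = std::env::temp_dir().join("codex-managed-config-demo");
    std::fs::create_dir_all(&dir)?;
    let managed: PathBuf = dir.join("managed_config.toml");
    std::fs::write(&managed, "# managed requirements would go here\n")?;
    // Point a debug build of the server at it; release builds ignore this.
    let status = Command::new("target/debug/codex-app-server") // placeholder path
        .env("CODEX_APP_SERVER_MANAGED_CONFIG_PATH", &managed)
        .status()?;
    println!("server exited with {status}");
    Ok(())
}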

View File

@@ -18,8 +18,9 @@ use codex_app_server_protocol::JSONRPCRequest;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::RequestId;
use codex_core::AuthManager;
- use codex_core::ConversationManager;
+ use codex_core::ThreadManager;
use codex_core::config::Config;
use codex_core::config_loader::LoaderOverrides;
use codex_core::default_client::USER_AGENT_SUFFIX;
use codex_core::default_client::get_codex_user_agent;
use codex_feedback::CodexFeedback;
@@ -41,6 +42,7 @@ impl MessageProcessor {
codex_linux_sandbox_exe: Option<PathBuf>,
config: Arc<Config>,
cli_overrides: Vec<(String, TomlValue)>,
loader_overrides: LoaderOverrides,
feedback: CodexFeedback,
) -> Self {
let outgoing = Arc::new(outgoing);
@@ -49,20 +51,21 @@ impl MessageProcessor {
false,
config.cli_auth_credentials_store_mode,
);
- let conversation_manager = Arc::new(ConversationManager::new(
+ let thread_manager = Arc::new(ThreadManager::new(
config.codex_home.clone(),
auth_manager.clone(),
SessionSource::VSCode,
));
let codex_message_processor = CodexMessageProcessor::new(
auth_manager,
- conversation_manager,
+ thread_manager,
outgoing.clone(),
codex_linux_sandbox_exe,
Arc::clone(&config),
cli_overrides.clone(),
feedback,
);
- let config_api = ConfigApi::new(config.codex_home.clone(), cli_overrides);
+ let config_api = ConfigApi::new(config.codex_home.clone(), cli_overrides, loader_overrides);
Self {
outgoing,
@@ -155,6 +158,12 @@ impl MessageProcessor {
ClientRequest::ConfigBatchWrite { request_id, params } => {
self.handle_config_batch_write(request_id, params).await;
}
ClientRequest::ConfigRequirementsRead {
request_id,
params: _,
} => {
self.handle_config_requirements_read(request_id).await;
}
other => {
self.codex_message_processor.process_request(other).await;
}
@@ -207,4 +216,11 @@ impl MessageProcessor {
Err(error) => self.outgoing.send_error(request_id, error).await,
}
}
async fn handle_config_requirements_read(&self, request_id: RequestId) {
match self.config_api.config_requirements_read().await {
Ok(response) => self.outgoing.send_response(request_id, response).await,
Err(error) => self.outgoing.send_error(request_id, error).await,
}
}
}

View File

@@ -2,19 +2,17 @@ use std::sync::Arc;
use codex_app_server_protocol::Model;
use codex_app_server_protocol::ReasoningEffortOption;
- use codex_core::ConversationManager;
+ use codex_core::ThreadManager;
use codex_core::config::Config;
use codex_protocol::openai_models::ModelPreset;
use codex_protocol::openai_models::ReasoningEffortPreset;
- pub async fn supported_models(
-     conversation_manager: Arc<ConversationManager>,
-     config: &Config,
- ) -> Vec<Model> {
-     conversation_manager
+ pub async fn supported_models(thread_manager: Arc<ThreadManager>, config: &Config) -> Vec<Model> {
+     thread_manager
.list_models(config)
.await
.into_iter()
.filter(|preset| preset.show_in_picker)
.map(model_from_preset)
.collect()
}

View File

@@ -18,8 +18,9 @@ pub use core_test_support::test_path_buf_with_windows;
pub use core_test_support::test_tmp_path;
pub use core_test_support::test_tmp_path_buf;
pub use mcp_process::McpProcess;
- pub use mock_model_server::create_mock_chat_completions_server;
- pub use mock_model_server::create_mock_chat_completions_server_unchecked;
+ pub use mock_model_server::create_mock_responses_server_repeating_assistant;
+ pub use mock_model_server::create_mock_responses_server_sequence;
+ pub use mock_model_server::create_mock_responses_server_sequence_unchecked;
pub use models_cache::write_models_cache;
pub use models_cache::write_models_cache_with_models;
pub use responses::create_apply_patch_sse_response;

View File

@@ -21,6 +21,7 @@ use codex_app_server_protocol::ConfigBatchWriteParams;
use codex_app_server_protocol::ConfigReadParams;
use codex_app_server_protocol::ConfigValueWriteParams;
use codex_app_server_protocol::FeedbackUploadParams;
use codex_app_server_protocol::ForkConversationParams;
use codex_app_server_protocol::GetAccountParams;
use codex_app_server_protocol::GetAuthStatusParams;
use codex_app_server_protocol::InitializeParams;
@@ -43,8 +44,11 @@ use codex_app_server_protocol::SendUserTurnParams;
use codex_app_server_protocol::ServerRequest;
use codex_app_server_protocol::SetDefaultModelParams;
use codex_app_server_protocol::ThreadArchiveParams;
use codex_app_server_protocol::ThreadForkParams;
use codex_app_server_protocol::ThreadListParams;
use codex_app_server_protocol::ThreadLoadedListParams;
use codex_app_server_protocol::ThreadResumeParams;
use codex_app_server_protocol::ThreadRollbackParams;
use codex_app_server_protocol::ThreadStartParams;
use codex_app_server_protocol::TurnInterruptParams;
use codex_app_server_protocol::TurnStartParams;
@@ -59,7 +63,7 @@ pub struct McpProcess {
process: Child,
stdin: ChildStdin,
stdout: BufReader<ChildStdout>,
- pending_user_messages: VecDeque<JSONRPCNotification>,
+ pending_messages: VecDeque<JSONRPCMessage>,
}
impl McpProcess {
@@ -126,7 +130,7 @@ impl McpProcess {
process,
stdin,
stdout,
- pending_user_messages: VecDeque::new(),
+ pending_messages: VecDeque::new(),
})
}
@@ -197,7 +201,7 @@ impl McpProcess {
}
/// Send a `removeConversationListener` JSON-RPC request.
- pub async fn send_remove_conversation_listener_request(
+ pub async fn send_remove_thread_listener_request(
&mut self,
params: RemoveConversationListenerParams,
) -> anyhow::Result<i64> {
@@ -307,6 +311,15 @@ impl McpProcess {
self.send_request("thread/resume", params).await
}
/// Send a `thread/fork` JSON-RPC request.
pub async fn send_thread_fork_request(
&mut self,
params: ThreadForkParams,
) -> anyhow::Result<i64> {
let params = Some(serde_json::to_value(params)?);
self.send_request("thread/fork", params).await
}
/// Send a `thread/archive` JSON-RPC request.
pub async fn send_thread_archive_request(
&mut self,
@@ -316,6 +329,15 @@ impl McpProcess {
self.send_request("thread/archive", params).await
}
/// Send a `thread/rollback` JSON-RPC request.
pub async fn send_thread_rollback_request(
&mut self,
params: ThreadRollbackParams,
) -> anyhow::Result<i64> {
let params = Some(serde_json::to_value(params)?);
self.send_request("thread/rollback", params).await
}
/// Send a `thread/list` JSON-RPC request.
pub async fn send_thread_list_request(
&mut self,
@@ -325,6 +347,15 @@ impl McpProcess {
self.send_request("thread/list", params).await
}
/// Send a `thread/loaded/list` JSON-RPC request.
pub async fn send_thread_loaded_list_request(
&mut self,
params: ThreadLoadedListParams,
) -> anyhow::Result<i64> {
let params = Some(serde_json::to_value(params)?);
self.send_request("thread/loaded/list", params).await
}
/// Send a `model/list` JSON-RPC request.
pub async fn send_list_models_request(
&mut self,
@@ -343,6 +374,15 @@ impl McpProcess {
self.send_request("resumeConversation", params).await
}
/// Send a `forkConversation` JSON-RPC request.
pub async fn send_fork_conversation_request(
&mut self,
params: ForkConversationParams,
) -> anyhow::Result<i64> {
let params = Some(serde_json::to_value(params)?);
self.send_request("forkConversation", params).await
}
/// Send a `loginApiKey` JSON-RPC request.
pub async fn send_login_api_key_request(
&mut self,
@@ -534,27 +574,16 @@ impl McpProcess {
pub async fn read_stream_until_request_message(&mut self) -> anyhow::Result<ServerRequest> {
eprintln!("in read_stream_until_request_message()");
- loop {
-     let message = self.read_jsonrpc_message().await?;
+ let message = self
+     .read_stream_until_message(|message| matches!(message, JSONRPCMessage::Request(_)))
+     .await?;
-     match message {
-         JSONRPCMessage::Notification(notification) => {
-             eprintln!("notification: {notification:?}");
-             self.enqueue_user_message(notification);
-         }
-         JSONRPCMessage::Request(jsonrpc_request) => {
-             return jsonrpc_request.try_into().with_context(
-                 || "failed to deserialize ServerRequest from JSONRPCRequest",
-             );
-         }
-         JSONRPCMessage::Error(_) => {
-             anyhow::bail!("unexpected JSONRPCMessage::Error: {message:?}");
-         }
-         JSONRPCMessage::Response(_) => {
-             anyhow::bail!("unexpected JSONRPCMessage::Response: {message:?}");
-         }
-     }
- }
+ let JSONRPCMessage::Request(jsonrpc_request) = message else {
+     unreachable!("expected JSONRPCMessage::Request, got {message:?}");
+ };
+ jsonrpc_request
+     .try_into()
+     .with_context(|| "failed to deserialize ServerRequest from JSONRPCRequest")
}
pub async fn read_stream_until_response_message(
@@ -563,52 +592,32 @@ impl McpProcess {
) -> anyhow::Result<JSONRPCResponse> {
eprintln!("in read_stream_until_response_message({request_id:?})");
- loop {
-     let message = self.read_jsonrpc_message().await?;
-     match message {
-         JSONRPCMessage::Notification(notification) => {
-             eprintln!("notification: {notification:?}");
-             self.enqueue_user_message(notification);
-         }
-         JSONRPCMessage::Request(_) => {
-             anyhow::bail!("unexpected JSONRPCMessage::Request: {message:?}");
-         }
-         JSONRPCMessage::Error(_) => {
-             anyhow::bail!("unexpected JSONRPCMessage::Error: {message:?}");
-         }
-         JSONRPCMessage::Response(jsonrpc_response) => {
-             if jsonrpc_response.id == request_id {
-                 return Ok(jsonrpc_response);
-             }
-         }
-     }
- }
+ let message = self
+     .read_stream_until_message(|message| {
+         Self::message_request_id(message) == Some(&request_id)
+     })
+     .await?;
+ let JSONRPCMessage::Response(response) = message else {
+     unreachable!("expected JSONRPCMessage::Response, got {message:?}");
+ };
+ Ok(response)
}
pub async fn read_stream_until_error_message(
&mut self,
request_id: RequestId,
) -> anyhow::Result<JSONRPCError> {
- loop {
-     let message = self.read_jsonrpc_message().await?;
-     match message {
-         JSONRPCMessage::Notification(notification) => {
-             eprintln!("notification: {notification:?}");
-             self.enqueue_user_message(notification);
-         }
-         JSONRPCMessage::Request(_) => {
-             anyhow::bail!("unexpected JSONRPCMessage::Request: {message:?}");
-         }
-         JSONRPCMessage::Response(_) => {
-             // Keep scanning; we're waiting for an error with matching id.
-         }
-         JSONRPCMessage::Error(err) => {
-             if err.id == request_id {
-                 return Ok(err);
-             }
-         }
-     }
- }
+ let message = self
+     .read_stream_until_message(|message| {
+         Self::message_request_id(message) == Some(&request_id)
+     })
+     .await?;
+ let JSONRPCMessage::Error(err) = message else {
+     unreachable!("expected JSONRPCMessage::Error, got {message:?}");
+ };
+ Ok(err)
}
pub async fn read_stream_until_notification_message(
@@ -617,46 +626,64 @@ impl McpProcess {
) -> anyhow::Result<JSONRPCNotification> {
eprintln!("in read_stream_until_notification_message({method})");
- if let Some(notification) = self.take_pending_notification_by_method(method) {
-     return Ok(notification);
+ let message = self
+     .read_stream_until_message(|message| {
+         matches!(
+             message,
+             JSONRPCMessage::Notification(notification) if notification.method == method
+         )
+     })
+     .await?;
+ let JSONRPCMessage::Notification(notification) = message else {
+     unreachable!("expected JSONRPCMessage::Notification, got {message:?}");
+ };
+ Ok(notification)
}
+ /// Clears any buffered messages so future reads only consider new stream items.
+ ///
+ /// We call this when e.g. we want to validate against the next turn and no longer care about
+ /// messages buffered from the prior turn.
+ pub fn clear_message_buffer(&mut self) {
+     self.pending_messages.clear();
+ }
+ /// Reads the stream until a message matches `predicate`, buffering any non-matching messages
+ /// for later reads.
+ async fn read_stream_until_message<F>(&mut self, predicate: F) -> anyhow::Result<JSONRPCMessage>
+ where
+     F: Fn(&JSONRPCMessage) -> bool,
+ {
+     if let Some(message) = self.take_pending_message(&predicate) {
+         return Ok(message);
+     }
loop {
let message = self.read_jsonrpc_message().await?;
- match message {
-     JSONRPCMessage::Notification(notification) => {
-         if notification.method == method {
-             return Ok(notification);
-         }
-         self.enqueue_user_message(notification);
-     }
-     JSONRPCMessage::Request(_) => {
-         anyhow::bail!("unexpected JSONRPCMessage::Request: {message:?}");
-     }
-     JSONRPCMessage::Error(_) => {
-         anyhow::bail!("unexpected JSONRPCMessage::Error: {message:?}");
-     }
-     JSONRPCMessage::Response(_) => {
-         anyhow::bail!("unexpected JSONRPCMessage::Response: {message:?}");
-     }
- }
+ if predicate(&message) {
+     return Ok(message);
+ }
+ self.pending_messages.push_back(message);
}
}
- fn take_pending_notification_by_method(&mut self, method: &str) -> Option<JSONRPCNotification> {
-     if let Some(pos) = self
-         .pending_user_messages
-         .iter()
-         .position(|notification| notification.method == method)
-     {
-         return self.pending_user_messages.remove(pos);
+ fn take_pending_message<F>(&mut self, predicate: &F) -> Option<JSONRPCMessage>
+ where
+     F: Fn(&JSONRPCMessage) -> bool,
+ {
+     if let Some(pos) = self.pending_messages.iter().position(predicate) {
+         return self.pending_messages.remove(pos);
}
None
}
- fn enqueue_user_message(&mut self, notification: JSONRPCNotification) {
-     if notification.method == "codex/event/user_message" {
-         self.pending_user_messages.push_back(notification);
+ fn message_request_id(message: &JSONRPCMessage) -> Option<&RequestId> {
+     match message {
+         JSONRPCMessage::Request(request) => Some(&request.id),
+         JSONRPCMessage::Response(response) => Some(&response.id),
+         JSONRPCMessage::Error(err) => Some(&err.id),
+         JSONRPCMessage::Notification(_) => None,
+     }
+ }
}
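The rewrite above collapses four bespoke read loops into one predicate-driven reader backed by a buffer of skipped messages, so out-of-order notifications are retained rather than dropped or bailed on. A standalone sketch of the pattern, using plain integers in place of the crate's JSON-RPC types:

use std::collections::VecDeque;

struct BufferedReader<T> {
    pending: VecDeque<T>, // messages skipped by earlier reads
    source: VecDeque<T>,  // stand-in for the async JSON-RPC stream
}

impl<T> BufferedReader<T> {
    fn read_until<F: Fn(&T) -> bool>(&mut self, predicate: F) -> Option<T> {
        // First serve from the buffer of previously skipped messages.
        if let Some(pos) = self.pending.iter().position(|m| predicate(m)) {
            return self.pending.remove(pos);
        }
        // Then pull new messages, buffering anything that does not match.
        while let Some(msg) = self.source.pop_front() {
            if predicate(&msg) {
                return Some(msg);
            }
            self.pending.push_back(msg);
        }
        None
    }
}

fn main() {
    let mut reader = BufferedReader {
        pending: VecDeque::new(),
        source: VecDeque::from(vec![1, 7, 3]),
    };
    assert_eq!(reader.read_until(|m| *m == 7), Some(7)); // 1 is buffered, not lost
    assert_eq!(reader.read_until(|m| *m == 1), Some(1)); // served from the buffer
    assert_eq!(reader.read_until(|m| *m == 3), Some(3)); // still read from the source
}

clear_message_buffer is the escape hatch for this design: without it, stale buffered messages from a prior turn could satisfy a later predicate, which is exactly what the send_user_turn test below guards against.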

View File

@@ -1,17 +1,18 @@
use std::sync::atomic::AtomicUsize;
use std::sync::atomic::Ordering;
use core_test_support::responses;
use wiremock::Mock;
use wiremock::MockServer;
use wiremock::Respond;
use wiremock::ResponseTemplate;
use wiremock::matchers::method;
- use wiremock::matchers::path;
+ use wiremock::matchers::path_regex;
/// Create a mock server that will provide the responses, in order, for
- /// requests to the `/v1/chat/completions` endpoint.
- pub async fn create_mock_chat_completions_server(responses: Vec<String>) -> MockServer {
-     let server = MockServer::start().await;
+ /// requests to the `/v1/responses` endpoint.
+ pub async fn create_mock_responses_server_sequence(responses: Vec<String>) -> MockServer {
+     let server = responses::start_mock_server().await;
let num_calls = responses.len();
let seq_responder = SeqResponder {
@@ -20,7 +21,7 @@ pub async fn create_mock_chat_completions_server(responses: Vec<String>) -> Mock
};
Mock::given(method("POST"))
- .and(path("/v1/chat/completions"))
+ .and(path_regex(".*/responses$"))
.respond_with(seq_responder)
.expect(num_calls as u64)
.mount(&server)
@@ -29,10 +30,10 @@ pub async fn create_mock_chat_completions_server(responses: Vec<String>) -> Mock
server
}
- /// Same as `create_mock_chat_completions_server` but does not enforce an
+ /// Same as `create_mock_responses_server_sequence` but does not enforce an
/// expectation on the number of calls.
- pub async fn create_mock_chat_completions_server_unchecked(responses: Vec<String>) -> MockServer {
-     let server = MockServer::start().await;
+ pub async fn create_mock_responses_server_sequence_unchecked(responses: Vec<String>) -> MockServer {
+     let server = responses::start_mock_server().await;
let seq_responder = SeqResponder {
num_calls: AtomicUsize::new(0),
@@ -40,7 +41,7 @@ pub async fn create_mock_chat_completions_server_unchecked(responses: Vec<String
};
Mock::given(method("POST"))
- .and(path("/v1/chat/completions"))
+ .and(path_regex(".*/responses$"))
.respond_with(seq_responder)
.mount(&server)
.await;
@@ -57,10 +58,24 @@ impl Respond for SeqResponder {
fn respond(&self, _: &wiremock::Request) -> ResponseTemplate {
let call_num = self.num_calls.fetch_add(1, Ordering::SeqCst);
match self.responses.get(call_num) {
- Some(response) => ResponseTemplate::new(200)
-     .insert_header("content-type", "text/event-stream")
-     .set_body_raw(response.clone(), "text/event-stream"),
+ Some(response) => responses::sse_response(response.clone()),
None => panic!("no response for {call_num}"),
}
}
}
/// Create a mock responses API server that returns the same assistant message for every request.
pub async fn create_mock_responses_server_repeating_assistant(message: &str) -> MockServer {
let server = responses::start_mock_server().await;
let body = responses::sse(vec![
responses::ev_response_created("resp-1"),
responses::ev_assistant_message("msg-1", message),
responses::ev_completed("resp-1"),
]);
Mock::given(method("POST"))
.and(path_regex(".*/responses$"))
.respond_with(responses::sse_response(body))
.mount(&server)
.await;
server
}
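A hedged usage sketch for the sequence helper above, assuming the test-support re-exports shown earlier in this diff: each POST to a `.../responses` URL consumes the next SSE body, and the checked variant asserts the exact call count when the wiremock server is dropped.

use app_test_support::create_final_assistant_message_sse_response;
use app_test_support::create_mock_responses_server_sequence;

#[tokio::test]
async fn serves_bodies_in_order() -> anyhow::Result<()> {
    let bodies = vec![
        create_final_assistant_message_sse_response("first turn done")?,
        create_final_assistant_message_sse_response("second turn done")?,
    ];
    let server = create_mock_responses_server_sequence(bodies).await;
    // A real test would point its config's base_url at server.uri() and
    // drive two turns; dropping the server then verifies both were consumed.
    drop(server);
    Ok(())
}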

View File

@@ -15,7 +15,7 @@ fn preset_to_info(preset: &ModelPreset, priority: i32) -> ModelInfo {
slug: preset.id.clone(),
display_name: preset.display_name.clone(),
description: Some(preset.description.clone()),
- default_reasoning_level: preset.default_reasoning_effort,
+ default_reasoning_level: Some(preset.default_reasoning_effort),
supported_reasoning_levels: preset.supported_reasoning_efforts.clone(),
shell_type: ConfigShellToolType::ShellCommand,
visibility: if preset.show_in_picker {
@@ -26,19 +26,20 @@ fn preset_to_info(preset: &ModelPreset, priority: i32) -> ModelInfo {
supported_in_api: true,
priority,
upgrade: preset.upgrade.as_ref().map(|u| u.id.clone()),
- base_instructions: None,
+ base_instructions: "base instructions".to_string(),
supports_reasoning_summaries: false,
support_verbosity: false,
default_verbosity: None,
apply_patch_tool_type: None,
truncation_policy: TruncationPolicyConfig::bytes(10_000),
supports_parallel_tool_calls: false,
- context_window: None,
+ context_window: Some(272_000),
auto_compact_token_limit: None,
effective_context_window_percent: 95,
experimental_supported_tools: Vec::new(),
}
}
// todo(aibrahim): fix the priorities to be the opposite here.
/// Write a models_cache.json file to the codex home directory.
/// This prevents ModelsManager from making network requests to refresh models.
/// The cache will be treated as fresh (within TTL) and used instead of fetching from the network.
@@ -49,14 +50,14 @@ pub fn write_models_cache(codex_home: &Path) -> std::io::Result<()> {
.iter()
.filter(|preset| preset.show_in_picker)
.collect();
- // Convert presets to ModelInfo, assigning priorities (higher = earlier in list)
- // Priority is used for sorting, so first model gets highest priority
+ // Convert presets to ModelInfo, assigning priorities (lower = earlier in list).
+ // Priority is used for sorting, so the first model gets the lowest priority.
let models: Vec<ModelInfo> = presets
.iter()
.enumerate()
.map(|(idx, preset)| {
- // Higher priority = earlier in list, so reverse the index
- let priority = (presets.len() - idx) as i32;
+ // Lower priority = earlier in list.
+ let priority = idx as i32;
preset_to_info(preset, priority)
})
.collect();
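A minimal sketch of the new convention, assuming the consumer sorts ascending by priority (the sort itself lives outside this hunk); the model ids reflect the picker order asserted in the model/list tests later in this diff:

fn main() {
    // Priorities as assigned above: idx as i32, so preset order is preserved.
    let mut models = vec![
        ("gpt-5.1-codex-mini", 2_i32),
        ("gpt-5.2-codex", 0),
        ("gpt-5.1-codex-max", 1),
    ];
    models.sort_by_key(|&(_, priority)| priority);
    let ids: Vec<_> = models.iter().map(|&(id, _)| id).collect();
    assert_eq!(ids, ["gpt-5.2-codex", "gpt-5.1-codex-max", "gpt-5.1-codex-mini"]);
}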

View File

@@ -1,3 +1,4 @@
use core_test_support::responses;
use serde_json::json;
use std::path::Path;
@@ -14,85 +15,30 @@ pub fn create_shell_command_sse_response(
"workdir": workdir.map(|w| w.to_string_lossy()),
"timeout_ms": timeout_ms
}))?;
- let tool_call = json!({
-     "choices": [
-         {
-             "delta": {
-                 "tool_calls": [
-                     {
-                         "id": call_id,
-                         "function": {
-                             "name": "shell_command",
-                             "arguments": tool_call_arguments
-                         }
-                     }
-                 ]
-             },
-             "finish_reason": "tool_calls"
-         }
-     ]
- });
- let sse = format!(
-     "data: {}\n\ndata: DONE\n\n",
-     serde_json::to_string(&tool_call)?
- );
- Ok(sse)
+ Ok(responses::sse(vec![
+     responses::ev_response_created("resp-1"),
+     responses::ev_function_call(call_id, "shell_command", &tool_call_arguments),
+     responses::ev_completed("resp-1"),
+ ]))
}
pub fn create_final_assistant_message_sse_response(message: &str) -> anyhow::Result<String> {
- let assistant_message = json!({
-     "choices": [
-         {
-             "delta": {
-                 "content": message
-             },
-             "finish_reason": "stop"
-         }
-     ]
- });
- let sse = format!(
-     "data: {}\n\ndata: DONE\n\n",
-     serde_json::to_string(&assistant_message)?
- );
- Ok(sse)
+ Ok(responses::sse(vec![
+     responses::ev_response_created("resp-1"),
+     responses::ev_assistant_message("msg-1", message),
+     responses::ev_completed("resp-1"),
+ ]))
}
pub fn create_apply_patch_sse_response(
patch_content: &str,
call_id: &str,
) -> anyhow::Result<String> {
// Use shell_command to call apply_patch with heredoc format
let command = format!("apply_patch <<'EOF'\n{patch_content}\nEOF");
let tool_call_arguments = serde_json::to_string(&json!({
"command": command
}))?;
- let tool_call = json!({
-     "choices": [
-         {
-             "delta": {
-                 "tool_calls": [
-                     {
-                         "id": call_id,
-                         "function": {
-                             "name": "shell_command",
-                             "arguments": tool_call_arguments
-                         }
-                     }
-                 ]
-             },
-             "finish_reason": "tool_calls"
-         }
-     ]
- });
- let sse = format!(
-     "data: {}\n\ndata: DONE\n\n",
-     serde_json::to_string(&tool_call)?
- );
- Ok(sse)
+ Ok(responses::sse(vec![
+     responses::ev_response_created("resp-1"),
+     responses::ev_apply_patch_shell_command_call_via_heredoc(call_id, patch_content),
+     responses::ev_completed("resp-1"),
+ ]))
}
pub fn create_exec_command_sse_response(call_id: &str) -> anyhow::Result<String> {
@@ -108,28 +54,9 @@ pub fn create_exec_command_sse_response(call_id: &str) -> anyhow::Result<String>
"cmd": command.join(" "),
"yield_time_ms": 500
}))?;
- let tool_call = json!({
-     "choices": [
-         {
-             "delta": {
-                 "tool_calls": [
-                     {
-                         "id": call_id,
-                         "function": {
-                             "name": "exec_command",
-                             "arguments": tool_call_arguments
-                         }
-                     }
-                 ]
-             },
-             "finish_reason": "tool_calls"
-         }
-     ]
- });
- let sse = format!(
-     "data: {}\n\ndata: DONE\n\n",
-     serde_json::to_string(&tool_call)?
- );
- Ok(sse)
+ Ok(responses::sse(vec![
+     responses::ev_response_created("resp-1"),
+     responses::ev_function_call(call_id, "exec_command", &tool_call_arguments),
+     responses::ev_completed("resp-1"),
+ ]))
}
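For reference, a sketch of composing one SSE body with these event builders; the signatures are inferred from the call sites above and the exact event payloads are internal to core_test_support, so treat the argument shapes as assumptions:

use core_test_support::responses;

// Builds a single text/event-stream body for one model turn that issues a
// hypothetical shell_command call (the JSON arguments here are illustrative).
fn build_shell_call_turn(call_id: &str) -> String {
    responses::sse(vec![
        responses::ev_response_created("resp-1"),
        responses::ev_function_call(call_id, "shell_command", r#"{"command":"echo hi"}"#),
        responses::ev_completed("resp-1"),
    ])
}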

View File

@@ -1,5 +1,5 @@
use anyhow::Result;
- use codex_protocol::ConversationId;
+ use codex_protocol::ThreadId;
use codex_protocol::protocol::GitInfo;
use codex_protocol::protocol::SessionMeta;
use codex_protocol::protocol::SessionMetaLine;
@@ -28,7 +28,7 @@ pub fn create_fake_rollout(
) -> Result<String> {
let uuid = Uuid::new_v4();
let uuid_str = uuid.to_string();
- let conversation_id = ConversationId::from_string(&uuid_str)?;
+ let conversation_id = ThreadId::from_string(&uuid_str)?;
// sessions/YYYY/MM/DD derived from filename_ts (YYYY-MM-DDThh-mm-ss)
let year = &filename_ts[0..4];

View File

@@ -37,7 +37,7 @@ model_provider = "mock_provider"
[model_providers.mock_provider]
name = "Mock provider for test"
base_url = "http://127.0.0.1:0/v1"
- wire_api = "chat"
+ wire_api = "responses"
request_max_retries = 0
stream_max_retries = 0
{requires_line}

View File

@@ -1,7 +1,7 @@
use anyhow::Result;
use app_test_support::McpProcess;
use app_test_support::create_final_assistant_message_sse_response;
- use app_test_support::create_mock_chat_completions_server;
+ use app_test_support::create_mock_responses_server_sequence;
use app_test_support::create_shell_command_sse_response;
use app_test_support::format_with_current_shell;
use app_test_support::to_response;
@@ -65,7 +65,7 @@ async fn test_codex_jsonrpc_conversation_flow() -> Result<()> {
)?,
create_final_assistant_message_sse_response("Enjoy your new git repo!")?,
];
- let server = create_mock_chat_completions_server(responses).await;
+ let server = create_mock_responses_server_sequence(responses).await;
create_config_toml(&codex_home, &server.uri())?;
// Start MCP server and initialize.
@@ -145,9 +145,7 @@ async fn test_codex_jsonrpc_conversation_flow() -> Result<()> {
// 4) removeConversationListener
let remove_listener_id = mcp
- .send_remove_conversation_listener_request(RemoveConversationListenerParams {
-     subscription_id,
- })
+ .send_remove_thread_listener_request(RemoveConversationListenerParams { subscription_id })
.await?;
let remove_listener_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
@@ -199,7 +197,7 @@ async fn test_send_user_turn_changes_approval_policy_behavior() -> Result<()> {
)?,
create_final_assistant_message_sse_response("done 2")?,
];
- let server = create_mock_chat_completions_server(responses).await;
+ let server = create_mock_responses_server_sequence(responses).await;
create_config_toml(&codex_home, &server.uri())?;
// Start MCP server and initialize.
@@ -365,7 +363,7 @@ async fn test_send_user_turn_updates_sandbox_and_cwd_between_turns() -> Result<(
)?,
create_final_assistant_message_sse_response("done second")?,
];
- let server = create_mock_chat_completions_server(responses).await;
+ let server = create_mock_responses_server_sequence(responses).await;
create_config_toml(&codex_home, &server.uri())?;
let mut mcp = McpProcess::new(&codex_home).await?;
@@ -432,6 +430,7 @@ async fn test_send_user_turn_updates_sandbox_and_cwd_between_turns() -> Result<(
mcp.read_stream_until_notification_message("codex/event/task_complete"),
)
.await??;
mcp.clear_message_buffer();
let second_turn_id = mcp
.send_send_user_turn_request(SendUserTurnParams {
@@ -501,7 +500,7 @@ model_provider = "mock_provider"
[model_providers.mock_provider]
name = "Mock provider for test"
base_url = "{server_uri}/v1"
- wire_api = "chat"
+ wire_api = "responses"
request_max_retries = 0
stream_max_retries = 0
"#

View File

@@ -1,7 +1,6 @@
use anyhow::Result;
use app_test_support::McpProcess;
use app_test_support::create_final_assistant_message_sse_response;
- use app_test_support::create_mock_chat_completions_server;
use app_test_support::to_response;
use codex_app_server_protocol::AddConversationListenerParams;
use codex_app_server_protocol::AddConversationSubscriptionResponse;
@@ -12,6 +11,7 @@ use codex_app_server_protocol::NewConversationResponse;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::SendUserMessageParams;
use codex_app_server_protocol::SendUserMessageResponse;
use core_test_support::responses;
use pretty_assertions::assert_eq;
use serde_json::json;
use std::path::Path;
@@ -23,8 +23,9 @@ const DEFAULT_READ_TIMEOUT: std::time::Duration = std::time::Duration::from_secs
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn test_conversation_create_and_send_message_ok() -> Result<()> {
// Mock server we won't strictly rely on it, but provide one to satisfy any model wiring.
- let responses = vec![create_final_assistant_message_sse_response("Done")?];
- let server = create_mock_chat_completions_server(responses).await;
+ let response_body = create_final_assistant_message_sse_response("Done")?;
+ let server = responses::start_mock_server().await;
+ let response_mock = responses::mount_sse_sequence(&server, vec![response_body]).await;
// Temporary Codex home with config pointing at the mock server.
let codex_home = TempDir::new()?;
@@ -86,32 +87,30 @@ async fn test_conversation_create_and_send_message_ok() -> Result<()> {
.await??;
let _ok: SendUserMessageResponse = to_response::<SendUserMessageResponse>(send_resp)?;
- // avoid race condition by waiting for the mock server to receive the chat.completions request
+ // Avoid race condition by waiting for the mock server to receive the responses request.
let deadline = std::time::Instant::now() + DEFAULT_READ_TIMEOUT;
let requests = loop {
- let requests = server.received_requests().await.unwrap_or_default();
+ let requests = response_mock.requests();
if !requests.is_empty() {
break requests;
}
if std::time::Instant::now() >= deadline {
- panic!("mock server did not receive the chat.completions request in time");
+ panic!("mock server did not receive the responses request in time");
}
tokio::time::sleep(std::time::Duration::from_millis(10)).await;
};
- // Verify the outbound request body matches expectations for Chat Completions.
+ // Verify the outbound request body matches expectations for Responses.
let request = requests
.first()
.expect("mock server should have received at least one request");
- let body = request.body_json::<serde_json::Value>()?;
+ let body = request.body_json();
assert_eq!(body["model"], json!("o3"));
assert!(body["stream"].as_bool().unwrap_or(false));
- let messages = body["messages"]
-     .as_array()
-     .expect("messages should be array");
- let last = messages.last().expect("at least one message");
- assert_eq!(last["role"], json!("user"));
- assert_eq!(last["content"], json!("Hello"));
+ let user_texts = request.message_input_texts("user");
+ assert!(
+     user_texts.iter().any(|text| text == "Hello"),
+     "expected user input to include Hello, got {user_texts:?}"
+ );
drop(server);
Ok(())
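The deadline loop above is a general pattern: poll a cheap check until it passes or a timeout elapses, rather than racing the server with a single immediate assertion or a fixed sleep. A standalone sketch:

use std::time::{Duration, Instant};

// Retries `check` every 10ms until it returns true or `deadline` passes.
fn wait_until(deadline: Instant, mut check: impl FnMut() -> bool) -> bool {
    loop {
        if check() {
            return true;
        }
        if Instant::now() >= deadline {
            return false;
        }
        std::thread::sleep(Duration::from_millis(10));
    }
}

fn main() {
    let deadline = Instant::now() + Duration::from_secs(1);
    let mut polls = 0;
    assert!(wait_until(deadline, || {
        polls += 1;
        polls >= 3 // succeeds on the third poll
    }));
}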
@@ -133,7 +132,7 @@ model_provider = "mock_provider"
[model_providers.mock_provider]
name = "Mock provider for test"
base_url = "{server_uri}/v1"
- wire_api = "chat"
+ wire_api = "responses"
request_max_retries = 0
stream_max_retries = 0
"#

View File

@@ -0,0 +1,140 @@
use anyhow::Result;
use app_test_support::McpProcess;
use app_test_support::create_fake_rollout;
use app_test_support::to_response;
use codex_app_server_protocol::ForkConversationParams;
use codex_app_server_protocol::ForkConversationResponse;
use codex_app_server_protocol::JSONRPCNotification;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::NewConversationParams; // reused for overrides shape
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::ServerNotification;
use codex_app_server_protocol::SessionConfiguredNotification;
use codex_core::protocol::EventMsg;
use pretty_assertions::assert_eq;
use tempfile::TempDir;
use tokio::time::timeout;
const DEFAULT_READ_TIMEOUT: std::time::Duration = std::time::Duration::from_secs(10);
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn fork_conversation_creates_new_rollout() -> Result<()> {
let codex_home = TempDir::new()?;
let preview = "Hello A";
let conversation_id = create_fake_rollout(
codex_home.path(),
"2025-01-02T12-00-00",
"2025-01-02T12:00:00Z",
preview,
Some("openai"),
None,
)?;
let original_path = codex_home
.path()
.join("sessions")
.join("2025")
.join("01")
.join("02")
.join(format!(
"rollout-2025-01-02T12-00-00-{conversation_id}.jsonl"
));
assert!(
original_path.exists(),
"expected original rollout to exist at {}",
original_path.display()
);
let original_contents = std::fs::read_to_string(&original_path)?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
let fork_req_id = mcp
.send_fork_conversation_request(ForkConversationParams {
path: Some(original_path.clone()),
conversation_id: None,
overrides: Some(NewConversationParams {
model: Some("o3".to_string()),
..Default::default()
}),
})
.await?;
// Expect a sessionConfigured notification for the forked session.
let notification: JSONRPCNotification = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("sessionConfigured"),
)
.await??;
let session_configured: ServerNotification = notification.try_into()?;
let ServerNotification::SessionConfigured(SessionConfiguredNotification {
model,
session_id,
rollout_path,
initial_messages: session_initial_messages,
..
}) = session_configured
else {
unreachable!("expected sessionConfigured notification");
};
assert_eq!(model, "o3");
assert_ne!(
session_id.to_string(),
conversation_id,
"expected a new conversation id when forking"
);
assert_ne!(
rollout_path, original_path,
"expected a new rollout path when forking"
);
assert!(
rollout_path.exists(),
"expected forked rollout to exist at {}",
rollout_path.display()
);
let session_initial_messages =
session_initial_messages.expect("expected initial messages when forking from rollout");
match session_initial_messages.as_slice() {
[EventMsg::UserMessage(message)] => {
assert_eq!(message.message, preview);
}
other => panic!("unexpected initial messages from rollout fork: {other:#?}"),
}
// Then the response for forkConversation.
let fork_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(fork_req_id)),
)
.await??;
let ForkConversationResponse {
conversation_id: forked_id,
model: forked_model,
initial_messages: response_initial_messages,
rollout_path: response_rollout_path,
} = to_response::<ForkConversationResponse>(fork_resp)?;
assert_eq!(forked_model, "o3");
assert_eq!(response_rollout_path, rollout_path);
assert_ne!(forked_id.to_string(), conversation_id);
let response_initial_messages =
response_initial_messages.expect("expected initial messages in fork response");
match response_initial_messages.as_slice() {
[EventMsg::UserMessage(message)] => {
assert_eq!(message.message, preview);
}
other => panic!("unexpected initial messages in fork response: {other:#?}"),
}
let after_contents = std::fs::read_to_string(&original_path)?;
assert_eq!(
after_contents, original_contents,
"fork should not mutate the original rollout file"
);
Ok(())
}

View File

@@ -18,7 +18,7 @@ use tempfile::TempDir;
use tokio::time::timeout;
use app_test_support::McpProcess;
- use app_test_support::create_mock_chat_completions_server;
+ use app_test_support::create_mock_responses_server_sequence;
use app_test_support::create_shell_command_sse_response;
use app_test_support::to_response;
@@ -56,7 +56,7 @@ async fn shell_command_interruption() -> anyhow::Result<()> {
std::fs::create_dir(&working_directory)?;
// Create mock server with a single SSE response: the long sleep command
- let server = create_mock_chat_completions_server(vec![create_shell_command_sse_response(
+ let server = create_mock_responses_server_sequence(vec![create_shell_command_sse_response(
shell_command.clone(),
Some(&working_directory),
Some(10_000), // 10 seconds timeout in ms
@@ -153,7 +153,7 @@ model_provider = "mock_provider"
[model_providers.mock_provider]
name = "Mock provider for test"
base_url = "{server_uri}/v1"
- wire_api = "chat"
+ wire_api = "responses"
request_max_retries = 0
stream_max_retries = 0
"#

View File

@@ -6,7 +6,7 @@ use codex_app_server_protocol::JSONRPCNotification;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::ListConversationsParams;
use codex_app_server_protocol::ListConversationsResponse;
- use codex_app_server_protocol::NewConversationParams; // reused for overrides shape
+ use codex_app_server_protocol::NewConversationParams;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::ResumeConversationParams;
use codex_app_server_protocol::ResumeConversationResponse;

View File

@@ -32,7 +32,7 @@ model_provider = "mock_provider"
[model_providers.mock_provider]
name = "Mock provider for test"
base_url = "http://127.0.0.1:0/v1"
- wire_api = "chat"
+ wire_api = "responses"
request_max_retries = 0
stream_max_retries = 0
"#,

View File

@@ -1,8 +1,9 @@
- mod archive_conversation;
+ mod archive_thread;
mod auth;
mod codex_message_processor_flow;
mod config;
- mod create_conversation;
+ mod create_thread;
+ mod fork_thread;
mod fuzzy_file_search;
mod interrupt;
mod list_resume;

View File

@@ -1,7 +1,5 @@
use anyhow::Result;
use app_test_support::McpProcess;
- use app_test_support::create_final_assistant_message_sse_response;
- use app_test_support::create_mock_chat_completions_server;
use app_test_support::to_response;
use codex_app_server_protocol::AddConversationListenerParams;
use codex_app_server_protocol::AddConversationSubscriptionResponse;
@@ -13,10 +11,11 @@ use codex_app_server_protocol::NewConversationResponse;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::SendUserMessageParams;
use codex_app_server_protocol::SendUserMessageResponse;
- use codex_protocol::ConversationId;
+ use codex_protocol::ThreadId;
use codex_protocol::models::ContentItem;
use codex_protocol::models::ResponseItem;
use codex_protocol::protocol::RawResponseItemEvent;
use core_test_support::responses;
use pretty_assertions::assert_eq;
use std::path::Path;
use tempfile::TempDir;
@@ -26,13 +25,21 @@ const DEFAULT_READ_TIMEOUT: std::time::Duration = std::time::Duration::from_secs
#[tokio::test]
async fn test_send_message_success() -> Result<()> {
- // Spin up a mock completions server that immediately ends the Codex turn.
+ // Spin up a mock responses server that immediately ends the Codex turn.
// Two Codex turns hit the mock model (session start + send-user-message). Provide two SSE responses.
- let responses = vec![
-     create_final_assistant_message_sse_response("Done")?,
-     create_final_assistant_message_sse_response("Done")?,
- ];
- let server = create_mock_chat_completions_server(responses).await;
+ let server = responses::start_mock_server().await;
+ let body1 = responses::sse(vec![
+     responses::ev_response_created("resp-1"),
+     responses::ev_assistant_message("msg-1", "Done"),
+     responses::ev_completed("resp-1"),
+ ]);
+ let body2 = responses::sse(vec![
+     responses::ev_response_created("resp-2"),
+     responses::ev_assistant_message("msg-2", "Done"),
+     responses::ev_completed("resp-2"),
+ ]);
+ let _response_mock1 = responses::mount_sse_once(&server, body1).await;
+ let _response_mock2 = responses::mount_sse_once(&server, body2).await;
// Create a temporary Codex home with config pointing at the mock server.
let codex_home = TempDir::new()?;
@@ -81,7 +88,7 @@ async fn test_send_message_success() -> Result<()> {
#[expect(clippy::expect_used)]
async fn send_message(
message: &str,
- conversation_id: ConversationId,
+ conversation_id: ThreadId,
mcp: &mut McpProcess,
) -> Result<()> {
// Now exercise sendUserMessage.
@@ -135,8 +142,13 @@ async fn send_message(
#[tokio::test]
async fn test_send_message_raw_notifications_opt_in() -> Result<()> {
- let responses = vec![create_final_assistant_message_sse_response("Done")?];
- let server = create_mock_chat_completions_server(responses).await;
+ let server = responses::start_mock_server().await;
+ let body = responses::sse(vec![
+     responses::ev_response_created("resp-1"),
+     responses::ev_assistant_message("msg-1", "Done"),
+     responses::ev_completed("resp-1"),
+ ]);
+ let _response_mock = responses::mount_sse_once(&server, body).await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri())?;
@@ -220,7 +232,7 @@ async fn test_send_message_session_not_found() -> Result<()> {
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
- let unknown = ConversationId::new();
+ let unknown = ThreadId::new();
let req_id = mcp
.send_send_user_message_request(SendUserMessageParams {
conversation_id: unknown,
@@ -259,7 +271,7 @@ model_provider = "mock_provider"
[model_providers.mock_provider]
name = "Mock provider for test"
base_url = "{server_uri}/v1"
- wire_api = "chat"
+ wire_api = "responses"
request_max_retries = 0
stream_max_retries = 0
"#
@@ -268,10 +280,8 @@ stream_max_retries = 0
}
#[expect(clippy::expect_used)]
- async fn read_raw_response_item(
-     mcp: &mut McpProcess,
-     conversation_id: ConversationId,
- ) -> ResponseItem {
+ async fn read_raw_response_item(mcp: &mut McpProcess, conversation_id: ThreadId) -> ResponseItem {
// TODO: Switch to rawResponseItem/completed once we migrate to app server v2 in codex web.
loop {
let raw_notification: JSONRPCNotification = timeout(
DEFAULT_READ_TIMEOUT,

View File

@@ -67,7 +67,7 @@ model_provider = "mock_provider"
[model_providers.mock_provider]
name = "Mock provider for test"
base_url = "http://127.0.0.1:0/v1"
- wire_api = "chat"
+ wire_api = "responses"
request_max_retries = 0
stream_max_retries = 0
{requires_line}

View File

@@ -184,7 +184,10 @@ writable_roots = [{}]
let mut mcp = McpProcess::new_with_env(
codex_home.path(),
- &[("CODEX_MANAGED_CONFIG_PATH", Some(&managed_path_str))],
+ &[(
+     "CODEX_APP_SERVER_MANAGED_CONFIG_PATH",
+     Some(&managed_path_str),
+ )],
)
.await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;

View File

@@ -5,8 +5,11 @@ mod output_schema;
mod rate_limits;
mod review;
mod thread_archive;
mod thread_fork;
mod thread_list;
mod thread_loaded_list;
mod thread_resume;
mod thread_rollback;
mod thread_start;
mod turn_interrupt;
mod turn_start;

View File

@@ -48,57 +48,32 @@ async fn list_models_returns_all_models_with_large_limit() -> Result<()> {
let expected_models = vec![
Model {
- id: "gpt-5.2".to_string(),
- model: "gpt-5.2".to_string(),
- display_name: "gpt-5.2".to_string(),
- description:
-     "Latest frontier model with improvements across knowledge, reasoning and coding"
-         .to_string(),
+ id: "gpt-5.2-codex".to_string(),
+ model: "gpt-5.2-codex".to_string(),
+ display_name: "gpt-5.2-codex".to_string(),
+ description: "Latest frontier agentic coding model.".to_string(),
supported_reasoning_efforts: vec![
ReasoningEffortOption {
reasoning_effort: ReasoningEffort::Low,
- description: "Balances speed with some reasoning; useful for straightforward \
-     queries and short explanations"
-     .to_string(),
+ description: "Fast responses with lighter reasoning".to_string(),
},
ReasoningEffortOption {
reasoning_effort: ReasoningEffort::Medium,
- description: "Provides a solid balance of reasoning depth and latency for \
-     general-purpose tasks"
+ description: "Balances speed and reasoning depth for everyday tasks"
.to_string(),
},
ReasoningEffortOption {
reasoning_effort: ReasoningEffort::High,
- description: "Maximizes reasoning depth for complex or ambiguous problems"
-     .to_string(),
+ description: "Greater reasoning depth for complex problems".to_string(),
},
ReasoningEffortOption {
reasoning_effort: ReasoningEffort::XHigh,
- description: "Extra high reasoning for complex problems".to_string(),
+ description: "Extra high reasoning depth for complex problems".to_string(),
},
],
default_reasoning_effort: ReasoningEffort::Medium,
is_default: true,
},
- Model {
-     id: "gpt-5.1-codex-mini".to_string(),
-     model: "gpt-5.1-codex-mini".to_string(),
-     display_name: "gpt-5.1-codex-mini".to_string(),
-     description: "Optimized for codex. Cheaper, faster, but less capable.".to_string(),
-     supported_reasoning_efforts: vec![
-         ReasoningEffortOption {
-             reasoning_effort: ReasoningEffort::Medium,
-             description: "Dynamically adjusts reasoning based on the task".to_string(),
-         },
-         ReasoningEffortOption {
-             reasoning_effort: ReasoningEffort::High,
-             description: "Maximizes reasoning depth for complex or ambiguous problems"
-                 .to_string(),
-         },
-     ],
-     default_reasoning_effort: ReasoningEffort::Medium,
-     is_default: false,
- },
Model {
id: "gpt-5.1-codex-max".to_string(),
model: "gpt-5.1-codex-max".to_string(),
@@ -127,23 +102,48 @@ async fn list_models_returns_all_models_with_large_limit() -> Result<()> {
is_default: false,
},
Model {
- id: "gpt-5.2-codex".to_string(),
- model: "gpt-5.2-codex".to_string(),
- display_name: "gpt-5.2-codex".to_string(),
- description: "Latest frontier agentic coding model.".to_string(),
+ id: "gpt-5.1-codex-mini".to_string(),
+ model: "gpt-5.1-codex-mini".to_string(),
+ display_name: "gpt-5.1-codex-mini".to_string(),
+ description: "Optimized for codex. Cheaper, faster, but less capable.".to_string(),
+ supported_reasoning_efforts: vec![
+     ReasoningEffortOption {
+         reasoning_effort: ReasoningEffort::Medium,
+         description: "Dynamically adjusts reasoning based on the task".to_string(),
+     },
+     ReasoningEffortOption {
+         reasoning_effort: ReasoningEffort::High,
+         description: "Maximizes reasoning depth for complex or ambiguous problems"
+             .to_string(),
+     },
+ ],
+ default_reasoning_effort: ReasoningEffort::Medium,
+ is_default: false,
+ },
+ Model {
+     id: "gpt-5.2".to_string(),
+     model: "gpt-5.2".to_string(),
+     display_name: "gpt-5.2".to_string(),
+     description:
+         "Latest frontier model with improvements across knowledge, reasoning and coding"
+             .to_string(),
supported_reasoning_efforts: vec![
ReasoningEffortOption {
reasoning_effort: ReasoningEffort::Low,
- description: "Fast responses with lighter reasoning".to_string(),
+ description: "Balances speed with some reasoning; useful for straightforward \
+     queries and short explanations"
+     .to_string(),
},
ReasoningEffortOption {
reasoning_effort: ReasoningEffort::Medium,
- description: "Balances speed and reasoning depth for everyday tasks"
+ description: "Provides a solid balance of reasoning depth and latency for \
+     general-purpose tasks"
.to_string(),
},
ReasoningEffortOption {
reasoning_effort: ReasoningEffort::High,
- description: "Greater reasoning depth for complex problems".to_string(),
+ description: "Maximizes reasoning depth for complex or ambiguous problems"
+     .to_string(),
},
ReasoningEffortOption {
reasoning_effort: ReasoningEffort::XHigh,
@@ -187,7 +187,7 @@ async fn list_models_pagination_works() -> Result<()> {
} = to_response::<ModelListResponse>(first_response)?;
assert_eq!(first_items.len(), 1);
- assert_eq!(first_items[0].id, "gpt-5.2");
+ assert_eq!(first_items[0].id, "gpt-5.2-codex");
let next_cursor = first_cursor.ok_or_else(|| anyhow!("cursor for second page"))?;
let second_request = mcp
@@ -209,7 +209,7 @@ async fn list_models_pagination_works() -> Result<()> {
} = to_response::<ModelListResponse>(second_response)?;
assert_eq!(second_items.len(), 1);
- assert_eq!(second_items[0].id, "gpt-5.1-codex-mini");
+ assert_eq!(second_items[0].id, "gpt-5.1-codex-max");
let third_cursor = second_cursor.ok_or_else(|| anyhow!("cursor for third page"))?;
let third_request = mcp
@@ -231,7 +231,7 @@ async fn list_models_pagination_works() -> Result<()> {
} = to_response::<ModelListResponse>(third_response)?;
assert_eq!(third_items.len(), 1);
- assert_eq!(third_items[0].id, "gpt-5.1-codex-max");
+ assert_eq!(third_items[0].id, "gpt-5.1-codex-mini");
let fourth_cursor = third_cursor.ok_or_else(|| anyhow!("cursor for fourth page"))?;
let fourth_request = mcp
@@ -253,7 +253,7 @@ async fn list_models_pagination_works() -> Result<()> {
} = to_response::<ModelListResponse>(fourth_response)?;
assert_eq!(fourth_items.len(), 1);
- assert_eq!(fourth_items[0].id, "gpt-5.2-codex");
+ assert_eq!(fourth_items[0].id, "gpt-5.2");
assert!(fourth_cursor.is_none());
Ok(())
}

View File

@@ -1,7 +1,6 @@
use anyhow::Result;
use app_test_support::McpProcess;
- use app_test_support::create_final_assistant_message_sse_response;
- use app_test_support::create_mock_chat_completions_server_unchecked;
+ use app_test_support::create_mock_responses_server_repeating_assistant;
use app_test_support::to_response;
use codex_app_server_protocol::ItemCompletedNotification;
use codex_app_server_protocol::ItemStartedNotification;
@@ -44,10 +43,7 @@ async fn review_start_runs_review_turn_and_emits_code_review_item() -> Result<()
"overall_confidence_score": 0.75
})
.to_string();
- let responses = vec![create_final_assistant_message_sse_response(
-     &review_payload,
- )?];
- let server = create_mock_chat_completions_server_unchecked(responses).await;
+ let server = create_mock_responses_server_repeating_assistant(&review_payload).await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri())?;
@@ -135,7 +131,7 @@ async fn review_start_runs_review_turn_and_emits_code_review_item() -> Result<()
#[tokio::test]
async fn review_start_rejects_empty_base_branch() -> Result<()> {
- let server = create_mock_chat_completions_server_unchecked(vec![]).await;
+ let server = create_mock_responses_server_repeating_assistant("Done").await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri())?;
@@ -176,10 +172,7 @@ async fn review_start_with_detached_delivery_returns_new_thread_id() -> Result<(
"overall_confidence_score": 0.5
})
.to_string();
- let responses = vec![create_final_assistant_message_sse_response(
-     &review_payload,
- )?];
- let server = create_mock_chat_completions_server_unchecked(responses).await;
+ let server = create_mock_responses_server_repeating_assistant(&review_payload).await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri())?;
@@ -219,7 +212,7 @@ async fn review_start_with_detached_delivery_returns_new_thread_id() -> Result<(
#[tokio::test]
async fn review_start_rejects_empty_commit_sha() -> Result<()> {
- let server = create_mock_chat_completions_server_unchecked(vec![]).await;
+ let server = create_mock_responses_server_repeating_assistant("Done").await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri())?;
@@ -254,7 +247,7 @@ async fn review_start_rejects_empty_commit_sha() -> Result<()> {
#[tokio::test]
async fn review_start_rejects_empty_custom_instructions() -> Result<()> {
let server = create_mock_chat_completions_server_unchecked(vec![]).await;
let server = create_mock_responses_server_repeating_assistant("Done").await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri())?;
@@ -320,7 +313,7 @@ model_provider = "mock_provider"
[model_providers.mock_provider]
name = "Mock provider"
base_url = "{server_uri}/v1"
wire_api = "chat"
wire_api = "responses"
request_max_retries = 0
stream_max_retries = 0
"#

View File

@@ -8,7 +8,7 @@ use codex_app_server_protocol::ThreadArchiveResponse;
use codex_app_server_protocol::ThreadStartParams;
use codex_app_server_protocol::ThreadStartResponse;
use codex_core::ARCHIVED_SESSIONS_SUBDIR;
use codex_core::find_conversation_path_by_id_str;
use codex_core::find_thread_path_by_id_str;
use std::path::Path;
use tempfile::TempDir;
use tokio::time::timeout;
@@ -39,7 +39,7 @@ async fn thread_archive_moves_rollout_into_archived_directory() -> Result<()> {
assert!(!thread.id.is_empty());
// Locate the rollout path recorded for this thread id.
let rollout_path = find_conversation_path_by_id_str(codex_home.path(), &thread.id)
let rollout_path = find_thread_path_by_id_str(codex_home.path(), &thread.id)
.await?
.expect("expected rollout path for thread id to exist");
assert!(

View File

@@ -0,0 +1,140 @@
use anyhow::Result;
use app_test_support::McpProcess;
use app_test_support::create_fake_rollout;
use app_test_support::create_mock_responses_server_repeating_assistant;
use app_test_support::to_response;
use codex_app_server_protocol::JSONRPCNotification;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::SessionSource;
use codex_app_server_protocol::ThreadForkParams;
use codex_app_server_protocol::ThreadForkResponse;
use codex_app_server_protocol::ThreadItem;
use codex_app_server_protocol::ThreadStartedNotification;
use codex_app_server_protocol::TurnStatus;
use codex_app_server_protocol::UserInput;
use pretty_assertions::assert_eq;
use std::path::Path;
use tempfile::TempDir;
use tokio::time::timeout;
const DEFAULT_READ_TIMEOUT: std::time::Duration = std::time::Duration::from_secs(10);
#[tokio::test]
async fn thread_fork_creates_new_thread_and_emits_started() -> Result<()> {
let server = create_mock_responses_server_repeating_assistant("Done").await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri())?;
let preview = "Saved user message";
let conversation_id = create_fake_rollout(
codex_home.path(),
"2025-01-05T12-00-00",
"2025-01-05T12:00:00Z",
preview,
Some("mock_provider"),
None,
)?;
let original_path = codex_home
.path()
.join("sessions")
.join("2025")
.join("01")
.join("05")
.join(format!(
"rollout-2025-01-05T12-00-00-{conversation_id}.jsonl"
));
assert!(
original_path.exists(),
"expected original rollout to exist at {}",
original_path.display()
);
let original_contents = std::fs::read_to_string(&original_path)?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
let fork_id = mcp
.send_thread_fork_request(ThreadForkParams {
thread_id: conversation_id.clone(),
..Default::default()
})
.await?;
let fork_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(fork_id)),
)
.await??;
let ThreadForkResponse { thread, .. } = to_response::<ThreadForkResponse>(fork_resp)?;
let after_contents = std::fs::read_to_string(&original_path)?;
assert_eq!(
after_contents, original_contents,
"fork should not mutate the original rollout file"
);
assert_ne!(thread.id, conversation_id);
assert_eq!(thread.preview, preview);
assert_eq!(thread.model_provider, "mock_provider");
assert!(thread.path.is_absolute());
assert_ne!(thread.path, original_path);
assert!(thread.cwd.is_absolute());
assert_eq!(thread.source, SessionSource::VsCode);
assert_eq!(
thread.turns.len(),
1,
"expected forked thread to include one turn"
);
let turn = &thread.turns[0];
assert_eq!(turn.status, TurnStatus::Completed);
assert_eq!(turn.items.len(), 1, "expected user message item");
match &turn.items[0] {
ThreadItem::UserMessage { content, .. } => {
assert_eq!(
content,
&vec![UserInput::Text {
text: preview.to_string()
}]
);
}
other => panic!("expected user message item, got {other:?}"),
}
// A corresponding thread/started notification should arrive.
let notif: JSONRPCNotification = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("thread/started"),
)
.await??;
let started: ThreadStartedNotification =
serde_json::from_value(notif.params.expect("params must be present"))?;
assert_eq!(started.thread, thread);
Ok(())
}
// Helper to create a config.toml pointing at the mock model server.
fn create_config_toml(codex_home: &Path, server_uri: &str) -> std::io::Result<()> {
let config_toml = codex_home.join("config.toml");
std::fs::write(
config_toml,
format!(
r#"
model = "mock-model"
approval_policy = "never"
sandbox_mode = "read-only"
model_provider = "mock_provider"
[model_providers.mock_provider]
name = "Mock provider for test"
base_url = "{server_uri}/v1"
wire_api = "responses"
request_max_retries = 0
stream_max_retries = 0
"#
),
)
}

View File

@@ -0,0 +1,139 @@
use anyhow::Result;
use app_test_support::McpProcess;
use app_test_support::create_mock_responses_server_repeating_assistant;
use app_test_support::to_response;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::ThreadLoadedListParams;
use codex_app_server_protocol::ThreadLoadedListResponse;
use codex_app_server_protocol::ThreadStartParams;
use codex_app_server_protocol::ThreadStartResponse;
use pretty_assertions::assert_eq;
use std::path::Path;
use tempfile::TempDir;
use tokio::time::timeout;
const DEFAULT_READ_TIMEOUT: std::time::Duration = std::time::Duration::from_secs(10);
#[tokio::test]
async fn thread_loaded_list_returns_loaded_thread_ids() -> Result<()> {
let server = create_mock_responses_server_repeating_assistant("Done").await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri())?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
let thread_id = start_thread(&mut mcp).await?;
let list_id = mcp
.send_thread_loaded_list_request(ThreadLoadedListParams::default())
.await?;
let resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(list_id)),
)
.await??;
let ThreadLoadedListResponse {
mut data,
next_cursor,
} = to_response::<ThreadLoadedListResponse>(resp)?;
data.sort();
assert_eq!(data, vec![thread_id]);
assert_eq!(next_cursor, None);
Ok(())
}
#[tokio::test]
async fn thread_loaded_list_paginates() -> Result<()> {
let server = create_mock_responses_server_repeating_assistant("Done").await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri())?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
let first = start_thread(&mut mcp).await?;
let second = start_thread(&mut mcp).await?;
let mut expected = [first, second];
expected.sort();
let list_id = mcp
.send_thread_loaded_list_request(ThreadLoadedListParams {
cursor: None,
limit: Some(1),
})
.await?;
let resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(list_id)),
)
.await??;
let ThreadLoadedListResponse {
data: first_page,
next_cursor,
} = to_response::<ThreadLoadedListResponse>(resp)?;
assert_eq!(first_page, vec![expected[0].clone()]);
assert_eq!(next_cursor, Some(expected[0].clone()));
let list_id = mcp
.send_thread_loaded_list_request(ThreadLoadedListParams {
cursor: next_cursor,
limit: Some(1),
})
.await?;
let resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(list_id)),
)
.await??;
let ThreadLoadedListResponse {
data: second_page,
next_cursor,
} = to_response::<ThreadLoadedListResponse>(resp)?;
assert_eq!(second_page, vec![expected[1].clone()]);
assert_eq!(next_cursor, None);
Ok(())
}
fn create_config_toml(codex_home: &Path, server_uri: &str) -> std::io::Result<()> {
let config_toml = codex_home.join("config.toml");
std::fs::write(
config_toml,
format!(
r#"
model = "mock-model"
approval_policy = "never"
sandbox_mode = "read-only"
model_provider = "mock_provider"
[model_providers.mock_provider]
name = "Mock provider for test"
base_url = "{server_uri}/v1"
wire_api = "responses"
request_max_retries = 0
stream_max_retries = 0
"#
),
)
}
async fn start_thread(mcp: &mut McpProcess) -> Result<String> {
let req_id = mcp
.send_thread_start_request(ThreadStartParams {
model: Some("gpt-5.1".to_string()),
..Default::default()
})
.await?;
let resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(req_id)),
)
.await??;
let ThreadStartResponse { thread, .. } = to_response::<ThreadStartResponse>(resp)?;
Ok(thread.id)
}

View File

@@ -1,7 +1,7 @@
use anyhow::Result;
use app_test_support::McpProcess;
use app_test_support::create_fake_rollout;
use app_test_support::create_mock_chat_completions_server;
use app_test_support::create_mock_responses_server_repeating_assistant;
use app_test_support::to_response;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::RequestId;
@@ -23,7 +23,7 @@ const DEFAULT_READ_TIMEOUT: std::time::Duration = std::time::Duration::from_secs
#[tokio::test]
async fn thread_resume_returns_original_thread() -> Result<()> {
let server = create_mock_chat_completions_server(vec![]).await;
let server = create_mock_responses_server_repeating_assistant("Done").await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri())?;
@@ -66,7 +66,7 @@ async fn thread_resume_returns_original_thread() -> Result<()> {
#[tokio::test]
async fn thread_resume_returns_rollout_history() -> Result<()> {
let server = create_mock_chat_completions_server(vec![]).await;
let server = create_mock_responses_server_repeating_assistant("Done").await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri())?;
@@ -130,7 +130,7 @@ async fn thread_resume_returns_rollout_history() -> Result<()> {
#[tokio::test]
async fn thread_resume_prefers_path_over_thread_id() -> Result<()> {
let server = create_mock_chat_completions_server(vec![]).await;
let server = create_mock_responses_server_repeating_assistant("Done").await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri())?;
@@ -174,7 +174,7 @@ async fn thread_resume_prefers_path_over_thread_id() -> Result<()> {
#[tokio::test]
async fn thread_resume_supports_history_and_overrides() -> Result<()> {
let server = create_mock_chat_completions_server(vec![]).await;
let server = create_mock_responses_server_repeating_assistant("Done").await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri())?;
@@ -247,7 +247,7 @@ model_provider = "mock_provider"
[model_providers.mock_provider]
name = "Mock provider for test"
base_url = "{server_uri}/v1"
wire_api = "chat"
wire_api = "responses"
request_max_retries = 0
stream_max_retries = 0
"#

View File

@@ -0,0 +1,177 @@
use anyhow::Result;
use app_test_support::McpProcess;
use app_test_support::create_final_assistant_message_sse_response;
use app_test_support::create_mock_responses_server_sequence_unchecked;
use app_test_support::to_response;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::ThreadItem;
use codex_app_server_protocol::ThreadResumeParams;
use codex_app_server_protocol::ThreadResumeResponse;
use codex_app_server_protocol::ThreadRollbackParams;
use codex_app_server_protocol::ThreadRollbackResponse;
use codex_app_server_protocol::ThreadStartParams;
use codex_app_server_protocol::ThreadStartResponse;
use codex_app_server_protocol::TurnStartParams;
use codex_app_server_protocol::UserInput as V2UserInput;
use pretty_assertions::assert_eq;
use tempfile::TempDir;
use tokio::time::timeout;
const DEFAULT_READ_TIMEOUT: std::time::Duration = std::time::Duration::from_secs(10);
#[tokio::test]
async fn thread_rollback_drops_last_turns_and_persists_to_rollout() -> Result<()> {
// Three Codex turns hit the mock model (session start + two turn/start calls).
let responses = vec![
create_final_assistant_message_sse_response("Done")?,
create_final_assistant_message_sse_response("Done")?,
create_final_assistant_message_sse_response("Done")?,
];
let server = create_mock_responses_server_sequence_unchecked(responses).await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri())?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
// Start a thread.
let start_id = mcp
.send_thread_start_request(ThreadStartParams {
model: Some("mock-model".to_string()),
..Default::default()
})
.await?;
let start_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(start_id)),
)
.await??;
let ThreadStartResponse { thread, .. } = to_response::<ThreadStartResponse>(start_resp)?;
// Two turns.
let first_text = "First";
let turn1_id = mcp
.send_turn_start_request(TurnStartParams {
thread_id: thread.id.clone(),
input: vec![V2UserInput::Text {
text: first_text.to_string(),
}],
..Default::default()
})
.await?;
let _turn1_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(turn1_id)),
)
.await??;
let _completed1 = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("turn/completed"),
)
.await??;
let turn2_id = mcp
.send_turn_start_request(TurnStartParams {
thread_id: thread.id.clone(),
input: vec![V2UserInput::Text {
text: "Second".to_string(),
}],
..Default::default()
})
.await?;
let _turn2_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(turn2_id)),
)
.await??;
let _completed2 = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("turn/completed"),
)
.await??;
// Roll back the last turn.
let rollback_id = mcp
.send_thread_rollback_request(ThreadRollbackParams {
thread_id: thread.id.clone(),
num_turns: 1,
})
.await?;
let rollback_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(rollback_id)),
)
.await??;
let ThreadRollbackResponse {
thread: rolled_back_thread,
} = to_response::<ThreadRollbackResponse>(rollback_resp)?;
assert_eq!(rolled_back_thread.turns.len(), 1);
assert_eq!(rolled_back_thread.turns[0].items.len(), 2);
match &rolled_back_thread.turns[0].items[0] {
ThreadItem::UserMessage { content, .. } => {
assert_eq!(
content,
&vec![V2UserInput::Text {
text: first_text.to_string()
}]
);
}
other => panic!("expected user message item, got {other:?}"),
}
// Resume and confirm the history is pruned.
let resume_id = mcp
.send_thread_resume_request(ThreadResumeParams {
thread_id: thread.id,
..Default::default()
})
.await?;
let resume_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(resume_id)),
)
.await??;
let ThreadResumeResponse { thread, .. } = to_response::<ThreadResumeResponse>(resume_resp)?;
assert_eq!(thread.turns.len(), 1);
assert_eq!(thread.turns[0].items.len(), 2);
match &thread.turns[0].items[0] {
ThreadItem::UserMessage { content, .. } => {
assert_eq!(
content,
&vec![V2UserInput::Text {
text: first_text.to_string()
}]
);
}
other => panic!("expected user message item, got {other:?}"),
}
Ok(())
}
fn create_config_toml(codex_home: &std::path::Path, server_uri: &str) -> std::io::Result<()> {
let config_toml = codex_home.join("config.toml");
std::fs::write(
config_toml,
format!(
r#"
model = "mock-model"
approval_policy = "never"
sandbox_mode = "read-only"
model_provider = "mock_provider"
[model_providers.mock_provider]
name = "Mock provider for test"
base_url = "{server_uri}/v1"
wire_api = "responses"
request_max_retries = 0
stream_max_retries = 0
"#
),
)
}

View File

@@ -1,6 +1,6 @@
use anyhow::Result;
use app_test_support::McpProcess;
use app_test_support::create_mock_chat_completions_server;
use app_test_support::create_mock_responses_server_repeating_assistant;
use app_test_support::to_response;
use codex_app_server_protocol::JSONRPCNotification;
use codex_app_server_protocol::JSONRPCResponse;
@@ -17,7 +17,7 @@ const DEFAULT_READ_TIMEOUT: std::time::Duration = std::time::Duration::from_secs
#[tokio::test]
async fn thread_start_creates_thread_and_emits_started() -> Result<()> {
// Provide a mock server and config so model wiring is valid.
let server = create_mock_chat_completions_server(vec![]).await;
let server = create_mock_responses_server_repeating_assistant("Done").await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri())?;
@@ -85,7 +85,7 @@ model_provider = "mock_provider"
[model_providers.mock_provider]
name = "Mock provider for test"
base_url = "{server_uri}/v1"
wire_api = "chat"
wire_api = "responses"
request_max_retries = 0
stream_max_retries = 0
"#

View File

@@ -2,7 +2,7 @@
use anyhow::Result;
use app_test_support::McpProcess;
use app_test_support::create_mock_chat_completions_server;
use app_test_support::create_mock_responses_server_sequence;
use app_test_support::create_shell_command_sse_response;
use app_test_support::to_response;
use codex_app_server_protocol::JSONRPCNotification;
@@ -41,7 +41,7 @@ async fn turn_interrupt_aborts_running_turn() -> Result<()> {
std::fs::create_dir(&working_directory)?;
// Mock server: long-running shell command then (after abort) nothing else needed.
let server = create_mock_chat_completions_server(vec![create_shell_command_sse_response(
let server = create_mock_responses_server_sequence(vec![create_shell_command_sse_response(
shell_command.clone(),
Some(&working_directory),
Some(10_000),
@@ -135,7 +135,7 @@ model_provider = "mock_provider"
[model_providers.mock_provider]
name = "Mock provider for test"
base_url = "{server_uri}/v1"
wire_api = "chat"
wire_api = "responses"
request_max_retries = 0
stream_max_retries = 0
"#

View File

@@ -3,14 +3,15 @@ use app_test_support::McpProcess;
use app_test_support::create_apply_patch_sse_response;
use app_test_support::create_exec_command_sse_response;
use app_test_support::create_final_assistant_message_sse_response;
use app_test_support::create_mock_chat_completions_server;
use app_test_support::create_mock_chat_completions_server_unchecked;
use app_test_support::create_mock_responses_server_sequence;
use app_test_support::create_mock_responses_server_sequence_unchecked;
use app_test_support::create_shell_command_sse_response;
use app_test_support::format_with_current_shell_display;
use app_test_support::to_response;
use codex_app_server_protocol::ApprovalDecision;
use codex_app_server_protocol::CommandExecutionApprovalDecision;
use codex_app_server_protocol::CommandExecutionRequestApprovalResponse;
use codex_app_server_protocol::CommandExecutionStatus;
use codex_app_server_protocol::FileChangeApprovalDecision;
use codex_app_server_protocol::FileChangeOutputDeltaNotification;
use codex_app_server_protocol::FileChangeRequestApprovalResponse;
use codex_app_server_protocol::ItemCompletedNotification;
@@ -49,7 +50,7 @@ async fn turn_start_emits_notifications_and_accepts_model_override() -> Result<(
create_final_assistant_message_sse_response("Done")?,
create_final_assistant_message_sse_response("Done")?,
];
let server = create_mock_chat_completions_server_unchecked(responses).await;
let server = create_mock_responses_server_sequence_unchecked(responses).await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri(), "never")?;
@@ -156,7 +157,7 @@ async fn turn_start_accepts_local_image_input() -> Result<()> {
];
// Use the unchecked variant because the request payload includes a LocalImage
// which the strict matcher does not currently cover.
let server = create_mock_chat_completions_server_unchecked(responses).await;
let server = create_mock_responses_server_sequence_unchecked(responses).await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri(), "never")?;
@@ -232,7 +233,7 @@ async fn turn_start_exec_approval_toggle_v2() -> Result<()> {
)?,
create_final_assistant_message_sse_response("done 2")?,
];
let server = create_mock_chat_completions_server(responses).await;
let server = create_mock_responses_server_sequence(responses).await;
// Default approval is untrusted to force elicitation on first turn.
create_config_toml(codex_home.as_path(), &server.uri(), "untrusted")?;
@@ -356,7 +357,7 @@ async fn turn_start_exec_approval_decline_v2() -> Result<()> {
)?,
create_final_assistant_message_sse_response("done")?,
];
let server = create_mock_chat_completions_server(responses).await;
let server = create_mock_responses_server_sequence(responses).await;
create_config_toml(codex_home.as_path(), &server.uri(), "untrusted")?;
let mut mcp = McpProcess::new(codex_home.as_path()).await?;
@@ -426,7 +427,7 @@ async fn turn_start_exec_approval_decline_v2() -> Result<()> {
mcp.send_response(
request_id,
serde_json::to_value(CommandExecutionRequestApprovalResponse {
decision: ApprovalDecision::Decline,
decision: CommandExecutionApprovalDecision::Decline,
})?,
)
.await?;
@@ -502,7 +503,7 @@ async fn turn_start_updates_sandbox_and_cwd_between_turns_v2() -> Result<()> {
)?,
create_final_assistant_message_sse_response("done second")?,
];
let server = create_mock_chat_completions_server(responses).await;
let server = create_mock_responses_server_sequence(responses).await;
create_config_toml(&codex_home, &server.uri(), "untrusted")?;
let mut mcp = McpProcess::new(&codex_home).await?;
@@ -553,6 +554,7 @@ async fn turn_start_updates_sandbox_and_cwd_between_turns_v2() -> Result<()> {
mcp.read_stream_until_notification_message("codex/event/task_complete"),
)
.await??;
mcp.clear_message_buffer();
// second turn with workspace-write and second_cwd, ensure exec begins in second_cwd
let second_turn = mcp
@@ -635,7 +637,7 @@ async fn turn_start_file_change_approval_v2() -> Result<()> {
create_apply_patch_sse_response(patch, "patch-call")?,
create_final_assistant_message_sse_response("patch applied")?,
];
let server = create_mock_chat_completions_server(responses).await;
let server = create_mock_responses_server_sequence(responses).await;
create_config_toml(&codex_home, &server.uri(), "untrusted")?;
let mut mcp = McpProcess::new(&codex_home).await?;
@@ -722,7 +724,7 @@ async fn turn_start_file_change_approval_v2() -> Result<()> {
mcp.send_response(
request_id,
serde_json::to_value(FileChangeRequestApprovalResponse {
decision: ApprovalDecision::Accept,
decision: FileChangeApprovalDecision::Accept,
})?,
)
.await?;
@@ -782,6 +784,190 @@ async fn turn_start_file_change_approval_v2() -> Result<()> {
Ok(())
}
#[tokio::test]
async fn turn_start_file_change_approval_accept_for_session_persists_v2() -> Result<()> {
skip_if_no_network!(Ok(()));
let tmp = TempDir::new()?;
let codex_home = tmp.path().join("codex_home");
std::fs::create_dir(&codex_home)?;
let workspace = tmp.path().join("workspace");
std::fs::create_dir(&workspace)?;
let patch_1 = r#"*** Begin Patch
*** Add File: README.md
+new line
*** End Patch
"#;
let patch_2 = r#"*** Begin Patch
*** Update File: README.md
@@
-new line
+updated line
*** End Patch
"#;
let responses = vec![
create_apply_patch_sse_response(patch_1, "patch-call-1")?,
create_final_assistant_message_sse_response("patch 1 applied")?,
create_apply_patch_sse_response(patch_2, "patch-call-2")?,
create_final_assistant_message_sse_response("patch 2 applied")?,
];
let server = create_mock_responses_server_sequence(responses).await;
create_config_toml(&codex_home, &server.uri(), "untrusted")?;
let mut mcp = McpProcess::new(&codex_home).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
let start_req = mcp
.send_thread_start_request(ThreadStartParams {
model: Some("mock-model".to_string()),
cwd: Some(workspace.to_string_lossy().into_owned()),
..Default::default()
})
.await?;
let start_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(start_req)),
)
.await??;
let ThreadStartResponse { thread, .. } = to_response::<ThreadStartResponse>(start_resp)?;
// First turn: expect FileChangeRequestApproval, respond with AcceptForSession, and verify the file exists.
let turn_1_req = mcp
.send_turn_start_request(TurnStartParams {
thread_id: thread.id.clone(),
input: vec![V2UserInput::Text {
text: "apply patch 1".into(),
}],
cwd: Some(workspace.clone()),
..Default::default()
})
.await?;
let turn_1_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(turn_1_req)),
)
.await??;
let TurnStartResponse { turn: turn_1 } = to_response::<TurnStartResponse>(turn_1_resp)?;
let started_file_change_1 = timeout(DEFAULT_READ_TIMEOUT, async {
loop {
let started_notif = mcp
.read_stream_until_notification_message("item/started")
.await?;
let started: ItemStartedNotification =
serde_json::from_value(started_notif.params.clone().expect("item/started params"))?;
if let ThreadItem::FileChange { .. } = started.item {
return Ok::<ThreadItem, anyhow::Error>(started.item);
}
}
})
.await??;
let ThreadItem::FileChange { id, status, .. } = started_file_change_1 else {
unreachable!("loop ensures we break on file change items");
};
assert_eq!(id, "patch-call-1");
assert_eq!(status, PatchApplyStatus::InProgress);
let server_req = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_request_message(),
)
.await??;
let ServerRequest::FileChangeRequestApproval { request_id, params } = server_req else {
panic!("expected FileChangeRequestApproval request")
};
assert_eq!(params.item_id, "patch-call-1");
assert_eq!(params.thread_id, thread.id);
assert_eq!(params.turn_id, turn_1.id);
mcp.send_response(
request_id,
serde_json::to_value(FileChangeRequestApprovalResponse {
decision: FileChangeApprovalDecision::AcceptForSession,
})?,
)
.await?;
timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("item/fileChange/outputDelta"),
)
.await??;
timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("item/completed"),
)
.await??;
timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("codex/event/task_complete"),
)
.await??;
let readme_path = workspace.join("README.md");
assert_eq!(std::fs::read_to_string(&readme_path)?, "new line\n");
// Second turn: apply a patch to the same file. Approval should be skipped due to AcceptForSession.
let turn_2_req = mcp
.send_turn_start_request(TurnStartParams {
thread_id: thread.id.clone(),
input: vec![V2UserInput::Text {
text: "apply patch 2".into(),
}],
cwd: Some(workspace.clone()),
..Default::default()
})
.await?;
timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(turn_2_req)),
)
.await??;
let started_file_change_2 = timeout(DEFAULT_READ_TIMEOUT, async {
loop {
let started_notif = mcp
.read_stream_until_notification_message("item/started")
.await?;
let started: ItemStartedNotification =
serde_json::from_value(started_notif.params.clone().expect("item/started params"))?;
if let ThreadItem::FileChange { .. } = started.item {
return Ok::<ThreadItem, anyhow::Error>(started.item);
}
}
})
.await??;
let ThreadItem::FileChange { id, status, .. } = started_file_change_2 else {
unreachable!("loop ensures we break on file change items");
};
assert_eq!(id, "patch-call-2");
assert_eq!(status, PatchApplyStatus::InProgress);
// If the server incorrectly emits FileChangeRequestApproval, the helper below will error
// (it bails on unexpected JSONRPCMessage::Request), causing the test to fail.
timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("item/fileChange/outputDelta"),
)
.await??;
timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("item/completed"),
)
.await??;
timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("codex/event/task_complete"),
)
.await??;
assert_eq!(std::fs::read_to_string(readme_path)?, "updated line\n");
Ok(())
}
#[tokio::test]
async fn turn_start_file_change_approval_decline_v2() -> Result<()> {
skip_if_no_network!(Ok(()));
@@ -801,7 +987,7 @@ async fn turn_start_file_change_approval_decline_v2() -> Result<()> {
create_apply_patch_sse_response(patch, "patch-call")?,
create_final_assistant_message_sse_response("patch declined")?,
];
let server = create_mock_chat_completions_server(responses).await;
let server = create_mock_responses_server_sequence(responses).await;
create_config_toml(&codex_home, &server.uri(), "untrusted")?;
let mut mcp = McpProcess::new(&codex_home).await?;
@@ -888,7 +1074,7 @@ async fn turn_start_file_change_approval_decline_v2() -> Result<()> {
mcp.send_response(
request_id,
serde_json::to_value(FileChangeRequestApprovalResponse {
decision: ApprovalDecision::Decline,
decision: FileChangeApprovalDecision::Decline,
})?,
)
.await?;
@@ -939,7 +1125,7 @@ async fn command_execution_notifications_include_process_id() -> Result<()> {
create_exec_command_sse_response("uexec-1")?,
create_final_assistant_message_sse_response("done")?,
];
let server = create_mock_chat_completions_server(responses).await;
let server = create_mock_responses_server_sequence(responses).await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri(), "never")?;
let config_toml = codex_home.path().join("config.toml");
@@ -1078,7 +1264,7 @@ model_provider = "mock_provider"
[model_providers.mock_provider]
name = "Mock provider for test"
base_url = "{server_uri}/v1"
wire_api = "chat"
wire_api = "responses"
request_max_retries = 0
stream_max_retries = 0
"#

View File

@@ -227,11 +227,14 @@ fn check_start_and_end_lines_strict(
first_line: Option<&&str>,
last_line: Option<&&str>,
) -> Result<(), ParseError> {
let first_line = first_line.map(|line| line.trim());
let last_line = last_line.map(|line| line.trim());
match (first_line, last_line) {
(Some(&first), Some(&last)) if first == BEGIN_PATCH_MARKER && last == END_PATCH_MARKER => {
(Some(first), Some(last)) if first == BEGIN_PATCH_MARKER && last == END_PATCH_MARKER => {
Ok(())
}
(Some(&first), _) if first != BEGIN_PATCH_MARKER => Err(InvalidPatchError(String::from(
(Some(first), _) if first != BEGIN_PATCH_MARKER => Err(InvalidPatchError(String::from(
"The first line of the patch must be '*** Begin Patch'",
))),
_ => Err(InvalidPatchError(String::from(
@@ -444,6 +447,25 @@ fn test_parse_patch() {
"The last line of the patch must be '*** End Patch'".to_string()
))
);
assert_eq!(
parse_patch_text(
concat!(
"*** Begin Patch",
" ",
"\n*** Add File: foo\n+hi\n",
" ",
"*** End Patch"
),
ParseMode::Strict
)
.unwrap()
.hunks,
vec![AddFile {
path: PathBuf::from("foo"),
contents: "hi\n".to_string()
}]
);
assert_eq!(
parse_patch_text(
"*** Begin Patch\n\

View File

@@ -0,0 +1 @@
obsolete

View File

@@ -0,0 +1,3 @@
*** Begin Patch
*** Delete File: obsolete.txt
*** End Patch

View File

@@ -0,0 +1,6 @@
*** Begin Patch
*** Update File: file.txt
@@
-one
+two
*** End Patch

View File

@@ -0,0 +1,2 @@
line1
line3

View File

@@ -0,0 +1,3 @@
line1
line2
line3

View File

@@ -0,0 +1,7 @@
*** Begin Patch
*** Update File: lines.txt
@@
line1
-line2
line3
*** End Patch

View File

@@ -0,0 +1,2 @@
first
second updated

View File

@@ -0,0 +1,2 @@
first
second

View File

@@ -0,0 +1,8 @@
*** Begin Patch
*** Update File: tail.txt
@@
first
-second
+second updated
*** End of File
*** End Patch

View File

@@ -1,3 +1,4 @@
use codex_utils_cargo_bin::find_resource;
use pretty_assertions::assert_eq;
use std::collections::BTreeMap;
use std::fs;
@@ -8,7 +9,7 @@ use tempfile::tempdir;
#[test]
fn test_apply_patch_scenarios() -> anyhow::Result<()> {
let scenarios_dir = Path::new(env!("CARGO_MANIFEST_DIR")).join("tests/fixtures/scenarios");
let scenarios_dir = find_resource!("tests/fixtures/scenarios")?;
for scenario in fs::read_dir(scenarios_dir)? {
let scenario = scenario?;
let path = scenario.path();

View File

@@ -73,8 +73,8 @@ impl Client {
})
}
pub async fn from_auth(base_url: impl Into<String>, auth: &CodexAuth) -> Result<Self> {
let token = auth.get_token().await.map_err(anyhow::Error::from)?;
pub fn from_auth(base_url: impl Into<String>, auth: &CodexAuth) -> Result<Self> {
let token = auth.get_token().map_err(anyhow::Error::from)?;
let mut client = Self::new(base_url)?
.with_user_agent(get_codex_user_agent())
.with_bearer_token(token);

View File

@@ -12,6 +12,7 @@ anyhow = { workspace = true }
clap = { workspace = true, features = ["derive"] }
codex-common = { workspace = true, features = ["cli"] }
codex-core = { workspace = true }
codex-utils-cargo-bin = { workspace = true }
serde = { workspace = true, features = ["derive"] }
serde_json = { workspace = true }
tokio = { workspace = true, features = ["full"] }

View File

@@ -1,4 +1,4 @@
use codex_core::CodexAuth;
use codex_core::AuthManager;
use std::path::Path;
use std::sync::LazyLock;
use std::sync::RwLock;
@@ -23,9 +23,10 @@ pub async fn init_chatgpt_token_from_auth(
codex_home: &Path,
auth_credentials_store_mode: AuthCredentialsStoreMode,
) -> std::io::Result<()> {
let auth = CodexAuth::from_auth_storage(codex_home, auth_credentials_store_mode)?;
if let Some(auth) = auth {
let token_data = auth.get_token_data().await?;
let auth_manager =
AuthManager::new(codex_home.to_path_buf(), false, auth_credentials_store_mode);
if let Some(auth) = auth_manager.auth().await {
let token_data = auth.get_token_data()?;
set_chatgpt_token_data(token_data);
}
Ok(())

View File

@@ -1,6 +1,6 @@
use codex_chatgpt::apply_command::apply_diff_from_task;
use codex_chatgpt::get_task::GetTaskResponse;
use std::path::Path;
use codex_utils_cargo_bin::find_resource;
use tempfile::TempDir;
use tokio::process::Command;
@@ -68,8 +68,8 @@ async fn create_temp_git_repo() -> anyhow::Result<TempDir> {
}
async fn mock_get_task_with_fixture() -> anyhow::Result<GetTaskResponse> {
let fixture_path = Path::new(env!("CARGO_MANIFEST_DIR")).join("tests/task_turn_fixture.json");
let fixture_content = std::fs::read_to_string(fixture_path)?;
let fixture_path = find_resource!("tests/task_turn_fixture.json")?;
let fixture_content = tokio::fs::read_to_string(fixture_path).await?;
let response: GetTaskResponse = serde_json::from_str(&fixture_content)?;
Ok(response)
}

View File

@@ -30,7 +30,6 @@ codex-exec = { workspace = true }
codex-execpolicy = { workspace = true }
codex-login = { workspace = true }
codex-mcp-server = { workspace = true }
codex-process-hardening = { workspace = true }
codex-protocol = { workspace = true }
codex-responses-api-proxy = { workspace = true }
codex-rmcp-client = { workspace = true }
@@ -38,7 +37,6 @@ codex-stdio-to-uds = { workspace = true }
codex-tui = { workspace = true }
codex-tui2 = { workspace = true }
codex-utils-absolute-path = { workspace = true }
ctor = { workspace = true }
libc = { workspace = true }
owo-colors = { workspace = true }
regex-lite = { workspace = true }

View File

@@ -155,7 +155,7 @@ pub async fn run_login_status(cli_config_overrides: CliConfigOverrides) -> ! {
match CodexAuth::from_auth_storage(&config.codex_home, config.cli_auth_credentials_store_mode) {
Ok(Some(auth)) => match auth.mode {
AuthMode::ApiKey => match auth.get_token().await {
AuthMode::ApiKey => match auth.get_token() {
Ok(api_key) => {
eprintln!("Logged in using an API key - {}", safe_format_key(&api_key));
std::process::exit(0);

View File

@@ -283,7 +283,7 @@ struct StdioToUdsCommand {
fn format_exit_messages(exit_info: AppExitInfo, color_enabled: bool) -> Vec<String> {
let AppExitInfo {
token_usage,
conversation_id,
thread_id: conversation_id,
..
} = exit_info;
@@ -418,14 +418,6 @@ fn stage_str(stage: codex_core::features::Stage) -> &'static str {
}
}
/// As early as possible in the process lifecycle, apply hardening measures. We
/// skip this in debug builds to avoid interfering with debugging.
#[ctor::ctor]
#[cfg(not(debug_assertions))]
fn pre_main_hardening() {
codex_process_hardening::pre_main_hardening();
}
fn main() -> anyhow::Result<()> {
arg0_dispatch_or_else(|codex_linux_sandbox_exe| async move {
cli_main(codex_linux_sandbox_exe).await?;
@@ -480,7 +472,12 @@ async fn cli_main(codex_linux_sandbox_exe: Option<PathBuf>) -> anyhow::Result<()
}
Some(Subcommand::AppServer(app_server_cli)) => match app_server_cli.subcommand {
None => {
codex_app_server::run_main(codex_linux_sandbox_exe, root_config_overrides).await?;
codex_app_server::run_main(
codex_linux_sandbox_exe,
root_config_overrides,
codex_core::config_loader::LoaderOverrides::default(),
)
.await?;
}
Some(AppServerSubcommand::GenerateTs(gen_cli)) => {
codex_app_server_protocol::generate_ts(
@@ -785,7 +782,7 @@ mod tests {
use super::*;
use assert_matches::assert_matches;
use codex_core::protocol::TokenUsage;
use codex_protocol::ConversationId;
use codex_protocol::ThreadId;
use pretty_assertions::assert_eq;
fn finalize_from_args(args: &[&str]) -> TuiCli {
@@ -825,9 +822,7 @@ mod tests {
};
AppExitInfo {
token_usage,
conversation_id: conversation
.map(ConversationId::from_string)
.map(Result::unwrap),
thread_id: conversation.map(ThreadId::from_string).map(Result::unwrap),
update_action: None,
}
}
@@ -836,7 +831,7 @@ mod tests {
fn format_exit_messages_skips_zero_usage() {
let exit_info = AppExitInfo {
token_usage: TokenUsage::default(),
conversation_id: None,
thread_id: None,
update_action: None,
};
let lines = format_exit_messages(exit_info, false);

View File

@@ -59,3 +59,61 @@ prefix_rule(
Ok(())
}
#[test]
fn execpolicy_check_includes_justification_when_present() -> Result<(), Box<dyn std::error::Error>>
{
let codex_home = TempDir::new()?;
let policy_path = codex_home.path().join("rules").join("policy.rules");
fs::create_dir_all(
policy_path
.parent()
.expect("policy path should have a parent"),
)?;
fs::write(
&policy_path,
r#"
prefix_rule(
pattern = ["git", "push"],
decision = "forbidden",
justification = "pushing is blocked in this repo",
)
"#,
)?;
let output = Command::new(codex_utils_cargo_bin::cargo_bin("codex")?)
.env("CODEX_HOME", codex_home.path())
.args([
"execpolicy",
"check",
"--rules",
policy_path
.to_str()
.expect("policy path should be valid UTF-8"),
"git",
"push",
"origin",
"main",
])
.output()?;
assert!(output.status.success());
let result: serde_json::Value = serde_json::from_slice(&output.stdout)?;
assert_eq!(
result,
json!({
"decision": "forbidden",
"matchedRules": [
{
"prefixRuleMatch": {
"matchedPrefix": ["git", "push"],
"decision": "forbidden",
"justification": "pushing is blocked in this repo"
}
}
]
})
);
Ok(())
}

View File

@@ -10,7 +10,6 @@ pub use cli::Cli;
use anyhow::anyhow;
use chrono::Utc;
use codex_cloud_tasks_client::TaskStatus;
use codex_login::AuthManager;
use owo_colors::OwoColorize;
use owo_colors::Stream;
use std::cmp::Ordering;
@@ -65,7 +64,11 @@ async fn init_backend(user_agent_suffix: &str) -> anyhow::Result<BackendContext>
append_error_log(format!("startup: base_url={base_url} path_style={style}"));
let auth_manager = util::load_auth_manager().await;
let auth = match auth_manager.as_ref().and_then(AuthManager::auth) {
let auth = match auth_manager.as_ref() {
Some(manager) => manager.auth().await,
None => None,
};
let auth = match auth {
Some(auth) => auth,
None => {
eprintln!(
@@ -79,7 +82,7 @@ async fn init_backend(user_agent_suffix: &str) -> anyhow::Result<BackendContext>
append_error_log(format!("auth: mode=ChatGPT account_id={acc}"));
}
let token = match auth.get_token().await {
let token = match auth.get_token() {
Ok(t) if !t.is_empty() => t,
_ => {
eprintln!(

View File

@@ -85,8 +85,8 @@ pub async fn build_chatgpt_headers() -> HeaderMap {
HeaderValue::from_str(&ua).unwrap_or(HeaderValue::from_static("codex-cli")),
);
if let Some(am) = load_auth_manager().await
&& let Some(auth) = am.auth()
&& let Ok(tok) = auth.get_token().await
&& let Some(auth) = am.auth().await
&& let Ok(tok) = auth.get_token()
&& !tok.is_empty()
{
let v = format!("Bearer {tok}");

View File

@@ -10,6 +10,7 @@ use crate::provider::WireApi;
use crate::sse::chat::spawn_chat_stream;
use crate::telemetry::SseTelemetry;
use codex_client::HttpTransport;
use codex_client::RequestCompression;
use codex_client::RequestTelemetry;
use codex_protocol::models::ContentItem;
use codex_protocol::models::ReasoningItemContent;
@@ -80,7 +81,13 @@ impl<T: HttpTransport, A: AuthProvider> ChatClient<T, A> {
extra_headers: HeaderMap,
) -> Result<ResponseStream, ApiError> {
self.streaming
.stream(self.path(), body, extra_headers, spawn_chat_stream)
.stream(
self.path(),
body,
extra_headers,
RequestCompression::None,
spawn_chat_stream,
)
.await
}
}

View File

@@ -215,14 +215,14 @@ mod tests {
"supported_in_api": true,
"priority": 1,
"upgrade": null,
"base_instructions": null,
"base_instructions": "base instructions",
"supports_reasoning_summaries": false,
"support_verbosity": false,
"default_verbosity": null,
"apply_patch_tool_type": null,
"truncation_policy": {"mode": "bytes", "limit": 10_000},
"supports_parallel_tool_calls": false,
"context_window": null,
"context_window": 272_000,
"experimental_supported_tools": [],
}))
.unwrap(),

View File

@@ -9,9 +9,11 @@ use crate::provider::Provider;
use crate::provider::WireApi;
use crate::requests::ResponsesRequest;
use crate::requests::ResponsesRequestBuilder;
use crate::requests::responses::Compression;
use crate::sse::spawn_response_stream;
use crate::telemetry::SseTelemetry;
use codex_client::HttpTransport;
use codex_client::RequestCompression;
use codex_client::RequestTelemetry;
use codex_protocol::protocol::SessionSource;
use http::HeaderMap;
@@ -33,6 +35,7 @@ pub struct ResponsesOptions {
pub conversation_id: Option<String>,
pub session_source: Option<SessionSource>,
pub extra_headers: HeaderMap,
pub compression: Compression,
}
impl<T: HttpTransport, A: AuthProvider> ResponsesClient<T, A> {
@@ -56,7 +59,8 @@ impl<T: HttpTransport, A: AuthProvider> ResponsesClient<T, A> {
&self,
request: ResponsesRequest,
) -> Result<ResponseStream, ApiError> {
self.stream(request.body, request.headers).await
self.stream(request.body, request.headers, request.compression)
.await
}
#[instrument(level = "trace", skip_all, err)]
@@ -75,6 +79,7 @@ impl<T: HttpTransport, A: AuthProvider> ResponsesClient<T, A> {
conversation_id,
session_source,
extra_headers,
compression,
} = options;
let request = ResponsesRequestBuilder::new(model, &prompt.instructions, &prompt.input)
@@ -88,6 +93,7 @@ impl<T: HttpTransport, A: AuthProvider> ResponsesClient<T, A> {
.session_source(session_source)
.store_override(store_override)
.extra_headers(extra_headers)
.compression(compression)
.build(self.streaming.provider())?;
self.stream_request(request).await
@@ -104,9 +110,21 @@ impl<T: HttpTransport, A: AuthProvider> ResponsesClient<T, A> {
&self,
body: Value,
extra_headers: HeaderMap,
compression: Compression,
) -> Result<ResponseStream, ApiError> {
let compression = match compression {
Compression::None => RequestCompression::None,
Compression::Zstd => RequestCompression::Zstd,
};
self.streaming
.stream(self.path(), body, extra_headers, spawn_response_stream)
.stream(
self.path(),
body,
extra_headers,
compression,
spawn_response_stream,
)
.await
}
}

View File

@@ -6,6 +6,7 @@ use crate::provider::Provider;
use crate::telemetry::SseTelemetry;
use crate::telemetry::run_with_request_telemetry;
use codex_client::HttpTransport;
use codex_client::RequestCompression;
use codex_client::RequestTelemetry;
use codex_client::StreamResponse;
use http::HeaderMap;
@@ -52,6 +53,7 @@ impl<T: HttpTransport, A: AuthProvider> StreamingClient<T, A> {
path: &str,
body: Value,
extra_headers: HeaderMap,
compression: RequestCompression,
spawner: fn(StreamResponse, Duration, Option<Arc<dyn SseTelemetry>>) -> ResponseStream,
) -> Result<ResponseStream, ApiError> {
let builder = || {
@@ -62,6 +64,7 @@ impl<T: HttpTransport, A: AuthProvider> StreamingClient<T, A> {
http::HeaderValue::from_static("text/event-stream"),
);
req.body = Some(body.clone());
req.compression = compression;
add_auth_headers(&self.auth, req)
};

View File

@@ -1,4 +1,5 @@
use codex_client::Request;
use codex_client::RequestCompression;
use codex_client::RetryOn;
use codex_client::RetryPolicy;
use http::Method;
@@ -87,6 +88,7 @@ impl Provider {
url: self.url_for_path(path),
headers: self.headers.clone(),
body: None,
compression: RequestCompression::None,
timeout: None,
}
}

View File

@@ -11,10 +11,18 @@ use codex_protocol::protocol::SessionSource;
use http::HeaderMap;
use serde_json::Value;
#[derive(Debug, Clone, Copy, Default, PartialEq, Eq)]
pub enum Compression {
#[default]
None,
Zstd,
}
/// Assembled request body plus headers for a Responses stream request.
pub struct ResponsesRequest {
pub body: Value,
pub headers: HeaderMap,
pub compression: Compression,
}
#[derive(Default)]
@@ -32,6 +40,7 @@ pub struct ResponsesRequestBuilder<'a> {
session_source: Option<SessionSource>,
store_override: Option<bool>,
headers: HeaderMap,
compression: Compression,
}
impl<'a> ResponsesRequestBuilder<'a> {
@@ -94,6 +103,11 @@ impl<'a> ResponsesRequestBuilder<'a> {
self
}
pub fn compression(mut self, compression: Compression) -> Self {
self.compression = compression;
self
}
pub fn build(self, provider: &Provider) -> Result<ResponsesRequest, ApiError> {
let model = self
.model
@@ -138,7 +152,11 @@ impl<'a> ResponsesRequestBuilder<'a> {
insert_header(&mut headers, "x-openai-subagent", &subagent);
}
Ok(ResponsesRequest { body, headers })
Ok(ResponsesRequest {
body,
headers,
compression: self.compression,
})
}
}

View File

@@ -11,6 +11,7 @@ use codex_api::Provider;
use codex_api::ResponsesClient;
use codex_api::ResponsesOptions;
use codex_api::WireApi;
use codex_api::requests::responses::Compression;
use codex_client::HttpTransport;
use codex_client::Request;
use codex_client::Response;
@@ -229,7 +230,9 @@ async fn responses_client_uses_responses_path_for_responses_wire() -> Result<()>
let client = ResponsesClient::new(transport, provider("openai", WireApi::Responses), NoAuth);
let body = serde_json::json!({ "echo": true });
let _stream = client.stream(body, HeaderMap::new()).await?;
let _stream = client
.stream(body, HeaderMap::new(), Compression::None)
.await?;
let requests = state.take_stream_requests();
assert_path_ends_with(&requests, "/responses");
@@ -243,7 +246,9 @@ async fn responses_client_uses_chat_path_for_chat_wire() -> Result<()> {
let client = ResponsesClient::new(transport, provider("openai", WireApi::Chat), NoAuth);
let body = serde_json::json!({ "echo": true });
let _stream = client.stream(body, HeaderMap::new()).await?;
let _stream = client
.stream(body, HeaderMap::new(), Compression::None)
.await?;
let requests = state.take_stream_requests();
assert_path_ends_with(&requests, "/chat/completions");
@@ -258,7 +263,9 @@ async fn streaming_client_adds_auth_headers() -> Result<()> {
let client = ResponsesClient::new(transport, provider("openai", WireApi::Responses), auth);
let body = serde_json::json!({ "model": "gpt-test" });
let _stream = client.stream(body, HeaderMap::new()).await?;
let _stream = client
.stream(body, HeaderMap::new(), Compression::None)
.await?;
let requests = state.take_stream_requests();
assert_eq!(requests.len(), 1);

View File

@@ -56,7 +56,7 @@ async fn models_client_hits_models_endpoint() {
slug: "gpt-test".to_string(),
display_name: "gpt-test".to_string(),
description: Some("desc".to_string()),
default_reasoning_level: ReasoningEffort::Medium,
default_reasoning_level: Some(ReasoningEffort::Medium),
supported_reasoning_levels: vec![
ReasoningEffortPreset {
effort: ReasoningEffort::Low,
@@ -76,14 +76,16 @@ async fn models_client_hits_models_endpoint() {
supported_in_api: true,
priority: 1,
upgrade: None,
base_instructions: None,
base_instructions: "base instructions".to_string(),
supports_reasoning_summaries: false,
support_verbosity: false,
default_verbosity: None,
apply_patch_tool_type: None,
truncation_policy: TruncationPolicyConfig::bytes(10_000),
supports_parallel_tool_calls: false,
context_window: None,
context_window: Some(272_000),
auto_compact_token_limit: None,
effective_context_window_percent: 95,
experimental_supported_tools: Vec::new(),
}],
};

View File

@@ -9,6 +9,7 @@ use codex_api::Provider;
use codex_api::ResponseEvent;
use codex_api::ResponsesClient;
use codex_api::WireApi;
use codex_api::requests::responses::Compression;
use codex_client::HttpTransport;
use codex_client::Request;
use codex_client::Response;
@@ -124,7 +125,11 @@ async fn responses_stream_parses_items_and_completed_end_to_end() -> Result<()>
let client = ResponsesClient::new(transport, provider("openai", WireApi::Responses), NoAuth);
let mut stream = client
.stream(serde_json::json!({"echo": true}), HeaderMap::new())
.stream(
serde_json::json!({"echo": true}),
HeaderMap::new(),
Compression::None,
)
.await?;
let mut events = Vec::new();
@@ -189,7 +194,11 @@ async fn responses_stream_aggregates_output_text_deltas() -> Result<()> {
let client = ResponsesClient::new(transport, provider("openai", WireApi::Responses), NoAuth);
let stream = client
.stream(serde_json::json!({"echo": true}), HeaderMap::new())
.stream(
serde_json::json!({"echo": true}),
HeaderMap::new(),
Compression::None,
)
.await?;
let mut stream = stream.aggregate();

View File

@@ -19,6 +19,7 @@ thiserror = { workspace = true }
tokio = { workspace = true, features = ["macros", "rt", "time", "sync"] }
tracing = { workspace = true }
tracing-opentelemetry = { workspace = true }
zstd = { workspace = true }
[lints]
workspace = true

View File

@@ -104,6 +104,13 @@ impl CodexRequestBuilder {
self.map(|builder| builder.json(value))
}
pub fn body<B>(self, body: B) -> Self
where
B: Into<reqwest::Body>,
{
self.map(|builder| builder.body(body))
}
pub async fn send(self) -> Result<Response, reqwest::Error> {
let headers = trace_headers();

View File

@@ -11,6 +11,7 @@ pub use crate::default_client::CodexRequestBuilder;
pub use crate::error::StreamError;
pub use crate::error::TransportError;
pub use crate::request::Request;
pub use crate::request::RequestCompression;
pub use crate::request::Response;
pub use crate::retry::RetryOn;
pub use crate::retry::RetryPolicy;

View File

@@ -5,12 +5,20 @@ use serde::Serialize;
use serde_json::Value;
use std::time::Duration;
#[derive(Debug, Clone, Copy, Default, PartialEq, Eq)]
pub enum RequestCompression {
#[default]
None,
Zstd,
}
#[derive(Debug, Clone)]
pub struct Request {
pub method: Method,
pub url: String,
pub headers: HeaderMap,
pub body: Option<Value>,
pub compression: RequestCompression,
pub timeout: Option<Duration>,
}
@@ -21,6 +29,7 @@ impl Request {
url,
headers: HeaderMap::new(),
body: None,
compression: RequestCompression::None,
timeout: None,
}
}
@@ -29,6 +38,11 @@ impl Request {
self.body = serde_json::to_value(body).ok();
self
}
pub fn with_compression(mut self, compression: RequestCompression) -> Self {
self.compression = compression;
self
}
}
#[derive(Debug, Clone)]
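For orientation (this paragraph and snippet are editorial, not part of the diff): the hunk above adds a public `compression` field and a `with_compression` builder to `Request`, so a caller can opt a JSON body into zstd compression. The URL and body below are illustrative placeholders; only the fields and `with_compression` are confirmed by the hunk.

```rust
use http::HeaderMap;
use http::Method;

// All fields are public per the struct definition above; `with_compression`
// is the builder added in this hunk. URL and body are placeholders only.
let req = Request {
    method: Method::POST,
    url: "https://api.example.com/v1/responses".to_string(),
    headers: HeaderMap::new(),
    body: Some(serde_json::json!({ "model": "gpt-test" })),
    compression: RequestCompression::None,
    timeout: None,
}
.with_compression(RequestCompression::Zstd);
assert_eq!(req.compression, RequestCompression::Zstd);
```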

View File

@@ -2,6 +2,7 @@ use crate::default_client::CodexHttpClient;
use crate::default_client::CodexRequestBuilder;
use crate::error::TransportError;
use crate::request::Request;
use crate::request::RequestCompression;
use crate::request::Response;
use async_trait::async_trait;
use bytes::Bytes;
@@ -41,18 +42,70 @@ impl ReqwestTransport {
}
fn build(&self, req: Request) -> Result<CodexRequestBuilder, TransportError> {
let mut builder = self
.client
.request(
Method::from_bytes(req.method.as_str().as_bytes()).unwrap_or(Method::GET),
&req.url,
)
.headers(req.headers);
if let Some(timeout) = req.timeout {
let Request {
method,
url,
mut headers,
body,
compression,
timeout,
} = req;
let mut builder = self.client.request(
Method::from_bytes(method.as_str().as_bytes()).unwrap_or(Method::GET),
&url,
);
if let Some(timeout) = timeout {
builder = builder.timeout(timeout);
}
if let Some(body) = req.body {
builder = builder.json(&body);
if let Some(body) = body {
if compression != RequestCompression::None {
if headers.contains_key(http::header::CONTENT_ENCODING) {
return Err(TransportError::Build(
"request compression was requested but content-encoding is already set"
.to_string(),
));
}
let json = serde_json::to_vec(&body)
.map_err(|err| TransportError::Build(err.to_string()))?;
let pre_compression_bytes = json.len();
let compression_start = std::time::Instant::now();
let (compressed, content_encoding) = match compression {
RequestCompression::None => unreachable!("guarded by compression != None"),
RequestCompression::Zstd => (
zstd::stream::encode_all(std::io::Cursor::new(json), 3)
.map_err(|err| TransportError::Build(err.to_string()))?,
http::HeaderValue::from_static("zstd"),
),
};
let post_compression_bytes = compressed.len();
let compression_duration = compression_start.elapsed();
// Ensure the server knows to unpack the request body.
headers.insert(http::header::CONTENT_ENCODING, content_encoding);
if !headers.contains_key(http::header::CONTENT_TYPE) {
headers.insert(
http::header::CONTENT_TYPE,
http::HeaderValue::from_static("application/json"),
);
}
tracing::info!(
pre_compression_bytes,
post_compression_bytes,
compression_duration_ms = compression_duration.as_millis(),
"Compressed request body with zstd"
);
builder = builder.headers(headers).body(compressed);
} else {
builder = builder.headers(headers).json(&body);
}
} else {
builder = builder.headers(headers);
}
Ok(builder)
}
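To make the transport change above concrete, here is a standalone, editorial sketch of the compression round-trip it depends on, using the same `zstd::stream::encode_all` call and level 3 as the hunk (it assumes the `anyhow`, `serde_json`, and `zstd` crates):

```rust
fn main() -> anyhow::Result<()> {
    let body = serde_json::json!({ "model": "gpt-test", "echo": true });
    // Serialize first, exactly as the transport does before compressing.
    let raw = serde_json::to_vec(&body)?;
    let compressed = zstd::stream::encode_all(std::io::Cursor::new(raw.clone()), 3)?;
    // A zstd-aware server recovers the original JSON bytes after seeing the
    // `Content-Encoding: zstd` header the transport sets.
    let restored = zstd::stream::decode_all(compressed.as_slice())?;
    assert_eq!(raw, restored);
    println!("{} bytes -> {} bytes", raw.len(), compressed.len());
    Ok(())
}
```

Level 3 is zstd's default, a reasonable choice for a hot request path where compression latency (logged above as `compression_duration_ms`) matters more than ratio.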

View File

@@ -122,11 +122,11 @@ keyring = { workspace = true, features = ["sync-secret-service"] }
assert_cmd = { workspace = true }
assert_matches = { workspace = true }
codex-arg0 = { workspace = true }
codex-core = { path = ".", features = ["deterministic_process_ids"] }
codex-core = { path = ".", default-features = false, features = ["deterministic_process_ids"] }
codex-otel = { workspace = true, features = ["disable-default-metrics-exporter"] }
codex-utils-cargo-bin = { workspace = true }
core_test_support = { workspace = true }
ctor = { workspace = true }
escargot = { workspace = true }
image = { workspace = true, features = ["jpeg", "png"] }
maplit = { workspace = true }
predicates = { workspace = true }
@@ -137,6 +137,7 @@ tracing-subscriber = { workspace = true }
tracing-test = { workspace = true, features = ["no-env-filter"] }
walkdir = { workspace = true }
wiremock = { workspace = true }
zstd = { workspace = true }
[package.metadata.cargo-shear]
ignored = ["openssl-sys"]

File diff suppressed because one or more lines are too long

View File

@@ -0,0 +1,386 @@
You are a coding agent running in the Codex CLI, a terminal-based coding assistant. Codex CLI is an open source project led by OpenAI. You are expected to be precise, safe, and helpful.
Your capabilities:
- Receive user prompts and other context provided by the harness, such as files in the workspace.
- Communicate with the user by streaming thinking & responses, and by making & updating plans.
- Emit function calls to run terminal commands and apply patches. Depending on how this specific run is configured, you can request that these function calls be escalated to the user for approval before running. More on this in the "Sandbox and approvals" section.
Within this context, Codex refers to the open-source agentic coding interface (not the old Codex language model built by OpenAI).
# How you work
## Personality
Your default personality and tone is concise, direct, and friendly. You communicate efficiently, always keeping the user clearly informed about ongoing actions without unnecessary detail. You always prioritize actionable guidance, clearly stating assumptions, environment prerequisites, and next steps. Unless explicitly asked, you avoid excessively verbose explanations about your work.
# AGENTS.md spec
- Repos often contain AGENTS.md files. These files can appear anywhere within the repository.
- These files are a way for humans to give you (the agent) instructions or tips for working within the container.
- Some examples might be: coding conventions, info about how code is organized, or instructions for how to run or test code.
- Instructions in AGENTS.md files:
- The scope of an AGENTS.md file is the entire directory tree rooted at the folder that contains it.
- For every file you touch in the final patch, you must obey instructions in any AGENTS.md file whose scope includes that file.
- Instructions about code style, structure, naming, etc. apply only to code within the AGENTS.md file's scope, unless the file states otherwise.
- More-deeply-nested AGENTS.md files take precedence in the case of conflicting instructions.
- Direct system/developer/user instructions (as part of a prompt) take precedence over AGENTS.md instructions.
- The contents of the AGENTS.md file at the root of the repo and any directories from the CWD up to the root are included with the developer message and don't need to be re-read. When working in a subdirectory of CWD, or a directory outside the CWD, check for any AGENTS.md files that may be applicable.
## Responsiveness
### Preamble messages
Before making tool calls, send a brief preamble to the user explaining what you're about to do. When sending preamble messages, follow these principles and examples:
- **Logically group related actions**: if you're about to run several related commands, describe them together in one preamble rather than sending a separate note for each.
- **Keep it concise**: be no more than 1-2 sentences, focused on immediate, tangible next steps. (8–12 words for quick updates).
- **Build on prior context**: if this is not your first tool call, use the preamble message to connect the dots with what's been done so far and create a sense of momentum and clarity for the user to understand your next actions.
- **Keep your tone light, friendly and curious**: add small touches of personality so preambles feel collaborative and engaging.
- **Exception**: Avoid adding a preamble for every trivial read (e.g., `cat` a single file) unless it's part of a larger grouped action.
**Examples:**
- “I've explored the repo; now checking the API route definitions.”
- “Next, I'll patch the config and update the related tests.”
- “I'm about to scaffold the CLI commands and helper functions.”
- “Ok cool, so I've wrapped my head around the repo. Now digging into the API routes.”
- “Config's looking tidy. Next up is patching helpers to keep things in sync.”
- “Finished poking at the DB gateway. I will now chase down error handling.”
- “Alright, build pipeline order is interesting. Checking how it reports failures.”
- “Spotted a clever caching util; now hunting where it gets used.”
## Planning
You have access to an `update_plan` tool which tracks steps and progress and renders them to the user. Using the tool helps demonstrate that you've understood the task and convey how you're approaching it. Plans can help to make complex, ambiguous, or multi-phase work clearer and more collaborative for the user. A good plan should break the task into meaningful, logically ordered steps that are easy to verify as you go.
Note that plans are not for padding out simple work with filler steps or stating the obvious. The content of your plan should not involve doing anything that you aren't capable of doing (i.e. don't try to test things that you can't test). Do not use plans for simple or single-step queries that you can just do or answer immediately.
Do not repeat the full contents of the plan after an `update_plan` call — the harness already displays it. Instead, summarize the change made and highlight any important context or next step.
Before running a command, consider whether or not you have completed the previous step, and make sure to mark it as completed before moving on to the next step. It may be the case that you complete all steps in your plan after a single pass of implementation. If this is the case, you can simply mark all the planned steps as completed. Sometimes, you may need to change plans in the middle of a task: call `update_plan` with the updated plan and make sure to provide an `explanation` of the rationale when doing so.
Use a plan when:
- The task is non-trivial and will require multiple actions over a long time horizon.
- There are logical phases or dependencies where sequencing matters.
- The work has ambiguity that benefits from outlining high-level goals.
- You want intermediate checkpoints for feedback and validation.
- The user has asked you to do more than one thing in a single prompt
- The user has asked you to use the plan tool (aka "TODOs")
- You generate additional steps while working, and plan to do them before yielding to the user
### Examples
**High-quality plans**
Example 1:
1. Add CLI entry with file args
2. Parse Markdown via CommonMark library
3. Apply semantic HTML template
4. Handle code blocks, images, links
5. Add error handling for invalid files
Example 2:
1. Define CSS variables for colors
2. Add toggle with localStorage state
3. Refactor components to use variables
4. Verify all views for readability
5. Add smooth theme-change transition
Example 3:
1. Set up Node.js + WebSocket server
2. Add join/leave broadcast events
3. Implement messaging with timestamps
4. Add usernames + mention highlighting
5. Persist messages in lightweight DB
6. Add typing indicators + unread count
**Low-quality plans**
Example 1:
1. Create CLI tool
2. Add Markdown parser
3. Convert to HTML
Example 2:
1. Add dark mode toggle
2. Save preference
3. Make styles look good
Example 3:
1. Create single-file HTML game
2. Run quick sanity check
3. Summarize usage instructions
If you need to write a plan, only write high quality plans, not low quality ones.
## Task execution
You are a coding agent. Please keep going until the query is completely resolved, before ending your turn and yielding back to the user. Only terminate your turn when you are sure that the problem is solved. Autonomously resolve the query to the best of your ability, using the tools available to you, before coming back to the user. Do NOT guess or make up an answer.
You MUST adhere to the following criteria when solving queries:
- Working on the repo(s) in the current environment is allowed, even if they are proprietary.
- Analyzing code for vulnerabilities is allowed.
- Showing user code and tool call details is allowed.
- Use the `apply_patch` tool to edit files (NEVER try `applypatch` or `apply-patch`, only `apply_patch`): {"command":["apply_patch","*** Begin Patch\\n*** Update File: path/to/file.py\\n@@ def example():\\n- pass\\n+ return 123\\n*** End Patch"]}
If completing the user's task requires writing or modifying files, your code and final answer should follow these coding guidelines, though user instructions (i.e. AGENTS.md) may override these guidelines:
- Fix the problem at the root cause rather than applying surface-level patches, when possible.
- Avoid unneeded complexity in your solution.
- Do not attempt to fix unrelated bugs or broken tests. It is not your responsibility to fix them. (You may mention them to the user in your final message though.)
- Update documentation as necessary.
- Keep changes consistent with the style of the existing codebase. Changes should be minimal and focused on the task.
- Use `git log` and `git blame` to search the history of the codebase if additional context is required.
- NEVER add copyright or license headers unless specifically requested.
- Do not waste tokens by re-reading files after calling `apply_patch` on them. The tool call will fail if it didn't work. The same goes for making folders, deleting folders, etc.
- Do not `git commit` your changes or create new git branches unless explicitly requested.
- Do not add inline comments within code unless explicitly requested.
- Do not use one-letter variable names unless explicitly requested.
- NEVER output inline citations like "【F:README.md†L5-L14】" in your outputs. The CLI is not able to render these so they will just be broken in the UI. Instead, if you output valid filepaths, users will be able to click on them to open the files in their editor.
## Sandbox and approvals
The Codex CLI harness supports several different sandboxing and approval configurations that the user can choose from.
Filesystem sandboxing prevents you from editing files without user approval. The options are:
- **read-only**: You can only read files.
- **workspace-write**: You can read files. You can write to files in your workspace folder, but not outside it.
- **danger-full-access**: No filesystem sandboxing.
Network sandboxing prevents you from accessing the network without approval. The options are:
- **restricted**
- **enabled**
Approvals are your mechanism to get user consent to perform more privileged actions. Although they introduce friction to the user because your work is paused until the user responds, you should leverage them to accomplish your important work. Do not let these settings or the sandbox deter you from attempting to accomplish the user's task. Approval options are
- **untrusted**: The harness will escalate most commands for user approval, apart from a limited allowlist of safe "read" commands.
- **on-failure**: The harness will allow all commands to run in the sandbox (if enabled), and failures will be escalated to the user for approval to run again without the sandbox.
- **on-request**: Commands will be run in the sandbox by default, and you can specify in your tool call if you want to escalate a command to run without sandboxing. (Note that this mode is not always available. If it is, you'll see parameters for it in the `shell` command description.)
- **never**: This is a non-interactive mode where you may NEVER ask the user for approval to run commands. Instead, you must always persist and work around constraints to solve the task for the user. You MUST do your utmost best to finish the task and validate your work before yielding. If this mode is paired with `danger-full-access`, take advantage of it to deliver the best outcome for the user. Further, in this mode, your default testing philosophy is overridden: Even if you don't see local patterns for testing, you may add tests and scripts to validate your work. Just remove them before yielding.
When you are running with approvals `on-request`, and sandboxing enabled, here are scenarios where you'll need to request approval:
- You need to run a command that writes to a directory that requires it (e.g. running tests that write to /tmp)
- You need to run a GUI app (e.g., open/xdg-open/osascript) to open browsers or files.
- You are running sandboxed and need to run a command that requires network access (e.g. installing packages)
- If you run a command that is important to solving the user's query, but it fails because of sandboxing, rerun the command with approval.
- You are about to take a potentially destructive action such as an `rm` or `git reset` that the user did not explicitly ask for
- (For all of these, you should weigh alternative paths that do not require approval.)
Note that when sandboxing is set to read-only, you'll need to request approval for any command that isn't a read.
You will be told what filesystem sandboxing, network sandboxing, and approval mode are active in a developer or user message. If you are not told about this, assume that you are running with workspace-write, network sandboxing ON, and approval on-failure.
## Validating your work
If the codebase has tests or the ability to build or run, consider using them to verify that your work is complete.
When testing, your philosophy should be to start as specific as possible to the code you changed so that you can catch issues efficiently, then make your way to broader tests as you build confidence. If there's no test for the code you changed, and if the adjacent patterns in the codebases show that there's a logical place for you to add a test, you may do so. However, do not add tests to codebases with no tests.
Similarly, once you're confident in correctness, you can suggest or use formatting commands to ensure that your code is well formatted. If there are issues, you can iterate up to 3 times to get formatting right, but if you still can't manage, it's better to save the user time and present them a correct solution where you call out the formatting in your final message. If the codebase does not have a formatter configured, do not add one.
For all of testing, running, building, and formatting, do not attempt to fix unrelated bugs. It is not your responsibility to fix them. (You may mention them to the user in your final message though.)
Be mindful of whether to run validation commands proactively. In the absence of behavioral guidance:
- When running in non-interactive approval modes like **never** or **on-failure**, proactively run tests, lint and do whatever you need to ensure you've completed the task.
- When working in interactive approval modes like **untrusted**, or **on-request**, hold off on running tests or lint commands until the user is ready for you to finalize your output, because these commands take time to run and slow down iteration. Instead suggest what you want to do next, and let the user confirm first.
- When working on test-related tasks, such as adding tests, fixing tests, or reproducing a bug to verify behavior, you may proactively run tests regardless of approval mode. Use your judgement to decide whether this is a test-related task.
## Ambition vs. precision
For tasks that have no prior context (i.e. the user is starting something brand new), you should feel free to be ambitious and demonstrate creativity with your implementation.
If you're operating in an existing codebase, you should make sure you do exactly what the user asks with surgical precision. Treat the surrounding codebase with respect, and don't overstep (i.e. changing filenames or variables unnecessarily). You should balance being sufficiently ambitious and proactive when completing tasks of this nature.
You should use judicious initiative to decide on the right level of detail and complexity to deliver based on the user's needs. This means showing good judgment that you're capable of doing the right extras without gold-plating. This might be demonstrated by high-value, creative touches when scope of the task is vague; while being surgical and targeted when scope is tightly specified.
## Sharing progress updates
For especially long tasks that you work on (i.e. requiring many tool calls, or a plan with multiple steps), you should provide progress updates back to the user at reasonable intervals. These updates should be structured as a concise sentence or two (no more than 8-10 words long) recapping progress so far in plain language: this update demonstrates your understanding of what needs to be done, progress so far (i.e. files explored, subtasks completed), and where you're going next.
Before doing large chunks of work that may incur latency as experienced by the user (i.e. writing a new file), you should send a concise message to the user with an update indicating what you're about to do to ensure they know what you're spending time on. Don't start editing or writing large files before informing the user what you are doing and why.
The messages you send before tool calls should describe what is immediately about to be done next in very concise language. If there was previous work done, this preamble message should also include a note about the work done so far to bring the user along.
## Presenting your work and final message
Your final message should read naturally, like an update from a concise teammate. For casual conversation, brainstorming tasks, or quick questions from the user, respond in a friendly, conversational tone. You should ask questions, suggest ideas, and adapt to the users style. If you've finished a large amount of work, when describing what you've done to the user, you should follow the final answer formatting guidelines to communicate substantive changes. You don't need to add structured formatting for one-word answers, greetings, or purely conversational exchanges.
You can skip heavy formatting for single, simple actions or confirmations. In these cases, respond in plain sentences with any relevant next step or quick option. Reserve multi-section structured responses for results that need grouping or explanation.
The user is working on the same computer as you, and has access to your work. As such there's no need to show the full contents of large files you have already written unless the user explicitly asks for them. Similarly, if you've created or modified files using `apply_patch`, there's no need to tell users to "save the file" or "copy the code into a file"—just reference the file path.
If there's something that you think you could help with as a logical next step, concisely ask the user if they want you to do so. Good examples of this are running tests, committing changes, or building out the next logical component. If theres something that you couldn't do (even with approval) but that the user might want to do (such as verifying changes by running the app), include those instructions succinctly.
Brevity is very important as a default. You should be very concise (i.e. no more than 10 lines), but can relax this requirement for tasks where additional detail and comprehensiveness is important for the user's understanding.
### Final answer structure and style guidelines
You are producing plain text that will later be styled by the CLI. Follow these rules exactly. Formatting should make results easy to scan, but not feel mechanical. Use judgment to decide how much structure adds value.
**Section Headers**
- Use only when they improve clarity — they are not mandatory for every answer.
- Choose descriptive names that fit the content
- Keep headers short (1–3 words) and in `**Title Case**`. Always start headers with `**` and end with `**`
- Leave no blank line before the first bullet under a header.
- Section headers should only be used where they genuinely improve scannability; avoid fragmenting the answer.
**Bullets**
- Use `-` followed by a space for every bullet.
- Merge related points when possible; avoid a bullet for every trivial detail.
- Keep bullets to one line unless breaking for clarity is unavoidable.
- Group into short lists (4–6 bullets) ordered by importance.
- Use consistent keyword phrasing and formatting across sections.
**Monospace**
- Wrap all commands, file paths, env vars, and code identifiers in backticks (`` `...` ``).
- Apply to inline examples and to bullet keywords if the keyword itself is a literal file/command.
- Never mix monospace and bold markers; choose one based on whether it's a keyword (`**`) or inline code/path (`` ` ``).
**File References**
When referencing files in your response, make sure to include the relevant start line and always follow the below rules:
* Use inline code to make file paths clickable.
* Each reference should have a standalone path. Even if it's the same file.
* Accepted: absolute, workspace-relative, a/ or b/ diff prefixes, or bare filename/suffix.
* Line/column (1-based, optional): :line[:column] or #Lline[Ccolumn] (column defaults to 1).
* Do not use URIs like file://, vscode://, or https://.
* Do not provide range of lines
* Examples: src/app.ts, src/app.ts:42, b/server/index.js#L10, C:\repo\project\main.rs:12:5
**Structure**
- Place related bullets together; don't mix unrelated concepts in the same section.
- Order sections from general → specific → supporting info.
- For subsections (e.g., “Binaries” under “Rust Workspace”), introduce with a bolded keyword bullet, then list items under it.
- Match structure to complexity:
- Multi-part or detailed results → use clear headers and grouped bullets.
- Simple results → minimal headers, possibly just a short list or paragraph.
**Tone**
- Keep the voice collaborative and natural, like a coding partner handing off work.
- Be concise and factual — no filler or conversational commentary and avoid unnecessary repetition
- Use present tense and active voice (e.g., “Runs tests” not “This will run tests”).
- Keep descriptions self-contained; don't refer to “above” or “below”.
- Use parallel structure in lists for consistency.
**Don't**
- Don't use the literal words “bold” or “monospace” in the content.
- Don't nest bullets or create deep hierarchies.
- Don't output ANSI escape codes directly — the CLI renderer applies them.
- Don't cram unrelated keywords into a single bullet; split for clarity.
- Don't let keyword lists run long — wrap or reformat for scannability.
Generally, ensure your final answers adapt their shape and depth to the request. For example, answers to code explanations should have a precise, structured explanation with code references that answer the question directly. For tasks with a simple implementation, lead with the outcome and supplement only with what's needed for clarity. Larger changes can be presented as a logical walkthrough of your approach, grouping related steps, explaining rationale where it adds value, and highlighting next actions to accelerate the user. Your answers should provide the right level of detail while being easily scannable.
For casual greetings, acknowledgements, or other one-off conversational messages that are not delivering substantive information or structured results, respond naturally without section headers or bullet formatting.
# Tool Guidelines
## Shell commands
When using the shell, you must adhere to the following guidelines:
- When searching for text or files, prefer using `rg` or `rg --files` respectively because `rg` is much faster than alternatives like `grep`. (If the `rg` command is not found, then use alternatives.)
- Do not use python scripts to attempt to output larger chunks of a file.
## `update_plan`
A tool named `update_plan` is available to you. You can use it to keep an up-to-date, step-by-step plan for the task.
To create a new plan, call `update_plan` with a short list of 1-sentence steps (no more than 5-7 words each) with a `status` for each step (`pending`, `in_progress`, or `completed`).
When steps have been completed, use `update_plan` to mark each finished step as `completed` and the next step you are working on as `in_progress`. There should always be exactly one `in_progress` step until everything is done. You can mark multiple items as complete in a single `update_plan` call.
If all steps are complete, ensure you call `update_plan` to mark all steps as `completed`.
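For illustration, using the Markdown-converter plan from the examples above, an `update_plan` call might look like the following (the exact argument schema is an assumption inferred from the statuses described here, mirroring the shape of the `apply_patch` invocation shown later):
```
update_plan {"plan":[{"step":"Add CLI entry with file args","status":"completed"},{"step":"Parse Markdown via CommonMark library","status":"in_progress"},{"step":"Apply semantic HTML template","status":"pending"}]}
```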
## `apply_patch`
Use the `apply_patch` shell command to edit files.
Your patch language is a stripped-down, file-oriented diff format designed to be easy to parse and safe to apply. You can think of it as a high-level envelope:
*** Begin Patch
[ one or more file sections ]
*** End Patch
Within that envelope, you get a sequence of file operations.
You MUST include a header to specify the action you are taking.
Each operation starts with one of three headers:
*** Add File: <path> - create a new file. Every following line is a + line (the initial contents).
*** Delete File: <path> - remove an existing file. Nothing follows.
*** Update File: <path> - patch an existing file in place (optionally with a rename).
May be immediately followed by *** Move to: <new path> if you want to rename the file.
Then one or more “hunks”, each introduced by @@ (optionally followed by a hunk header).
Within a hunk each line starts with ` ` (a space, for context lines), `-` (for removed lines), or `+` (for added lines), matching the `HunkLine` rule in the grammar below.
For instructions on [context_before] and [context_after]:
- By default, show 3 lines of code immediately above and 3 lines immediately below each change. If a change is within 3 lines of a previous change, do NOT duplicate the first change's [context_after] lines in the second change's [context_before] lines.
- If 3 lines of context is insufficient to uniquely identify the snippet of code within the file, use the @@ operator to indicate the class or function to which the snippet belongs. For instance, we might have:
@@ class BaseClass
[3 lines of pre-context]
- [old_code]
+ [new_code]
[3 lines of post-context]
- If a code block is repeated so many times in a class or function such that even a single `@@` statement and 3 lines of context cannot uniquely identify the snippet of code, you can use multiple `@@` statements to jump to the right context. For instance:
@@ class BaseClass
@@ def method():
[3 lines of pre-context]
- [old_code]
+ [new_code]
[3 lines of post-context]
The full grammar definition is below:
Patch := Begin { FileOp } End
Begin := "*** Begin Patch" NEWLINE
End := "*** End Patch" NEWLINE
FileOp := AddFile | DeleteFile | UpdateFile
AddFile := "*** Add File: " path NEWLINE { "+" line NEWLINE }
DeleteFile := "*** Delete File: " path NEWLINE
UpdateFile := "*** Update File: " path NEWLINE [ MoveTo ] { Hunk }
MoveTo := "*** Move to: " newPath NEWLINE
Hunk := "@@" [ header ] NEWLINE { HunkLine } [ "*** End of File" NEWLINE ]
HunkLine := (" " | "-" | "+") text NEWLINE
A full patch can combine several operations:
*** Begin Patch
*** Add File: hello.txt
+Hello world
*** Update File: src/app.py
*** Move to: src/main.py
@@ def greet():
-print("Hi")
+print("Hello, world!")
*** Delete File: obsolete.txt
*** End Patch
It is important to remember:
- You must include a header with your intended action (Add/Delete/Update)
- You must prefix new lines with `+` even when creating a new file
- File references can only be relative, NEVER ABSOLUTE.
You can invoke apply_patch like:
```
shell {"command":["apply_patch","*** Begin Patch\n*** Add File: hello.txt\n+Hello, world!\n*** End Patch\n"]}
```

View File

@@ -0,0 +1,188 @@
use crate::CodexThread;
use crate::agent::AgentStatus;
use crate::error::CodexErr;
use crate::error::Result as CodexResult;
use crate::thread_manager::ThreadManagerState;
use codex_protocol::ThreadId;
use codex_protocol::protocol::EventMsg;
use codex_protocol::protocol::Op;
use codex_protocol::user_input::UserInput;
use std::sync::Arc;
use std::sync::Weak;
/// Control-plane handle for multi-agent operations.
/// `AgentControl` is held by each session (via `SessionServices`). It provides capability to
/// spawn new agents and the inter-agent communication layer.
#[derive(Clone, Default)]
pub(crate) struct AgentControl {
/// Weak handle back to the global thread registry/state.
/// This is `Weak` to avoid reference cycles and shadow persistence of the form
/// `ThreadManagerState -> CodexThread -> Session -> SessionServices -> ThreadManagerState`.
manager: Weak<ThreadManagerState>,
}
impl AgentControl {
/// Construct a new `AgentControl` that can spawn/message agents via the given manager state.
pub(crate) fn new(manager: Weak<ThreadManagerState>) -> Self {
Self { manager }
}
#[allow(dead_code)] // Used by upcoming multi-agent tooling.
/// Spawn a new agent thread and submit the initial prompt.
///
/// If `headless` is true, a background drain task is spawned to prevent unbounded growth of the
/// event channel queue when no client is actively reading the thread's events.
pub(crate) async fn spawn_agent(
&self,
config: crate::config::Config,
prompt: String,
headless: bool,
) -> CodexResult<ThreadId> {
let state = self.upgrade()?;
let new_thread = state.spawn_new_thread(config, self.clone()).await?;
if headless {
spawn_headless_drain(Arc::clone(&new_thread.thread));
}
self.send_prompt(new_thread.thread_id, prompt).await?;
Ok(new_thread.thread_id)
}
#[allow(dead_code)] // Used by upcoming multi-agent tooling.
/// Send a `user` prompt to an existing agent thread.
pub(crate) async fn send_prompt(
&self,
agent_id: ThreadId,
prompt: String,
) -> CodexResult<String> {
let state = self.upgrade()?;
state
.send_op(
agent_id,
Op::UserInput {
items: vec![UserInput::Text { text: prompt }],
final_output_json_schema: None,
},
)
.await
}
#[allow(dead_code)] // Used by upcoming multi-agent tooling.
/// Fetch the last known status for `agent_id`, returning `NotFound` when unavailable.
pub(crate) async fn get_status(&self, agent_id: ThreadId) -> AgentStatus {
let Ok(state) = self.upgrade() else {
// No agent available if upgrade fails.
return AgentStatus::NotFound;
};
let Ok(thread) = state.get_thread(agent_id).await else {
return AgentStatus::NotFound;
};
thread.agent_status().await
}
fn upgrade(&self) -> CodexResult<Arc<ThreadManagerState>> {
self.manager
.upgrade()
.ok_or_else(|| CodexErr::UnsupportedOperation("thread manager dropped".to_string()))
}
}
/// When an agent is spawned "headless" (no UI/view attached), there may be no consumer polling
/// `CodexThread::next_event()`. The underlying event channel is unbounded, so the producer can
/// accumulate events indefinitely. This drain task prevents that memory growth by polling and
/// discarding events until shutdown.
fn spawn_headless_drain(thread: Arc<CodexThread>) {
tokio::spawn(async move {
loop {
match thread.next_event().await {
Ok(event) => {
if matches!(event.msg, EventMsg::ShutdownComplete) {
break;
}
}
Err(err) => {
tracing::warn!("failed to receive event from agent: {err:?}");
break;
}
}
}
});
}
#[cfg(test)]
mod tests {
use super::*;
use crate::agent::agent_status_from_event;
use codex_protocol::protocol::ErrorEvent;
use codex_protocol::protocol::TaskCompleteEvent;
use codex_protocol::protocol::TaskStartedEvent;
use codex_protocol::protocol::TurnAbortReason;
use codex_protocol::protocol::TurnAbortedEvent;
use pretty_assertions::assert_eq;
#[tokio::test]
async fn send_prompt_errors_when_manager_dropped() {
let control = AgentControl::default();
let err = control
.send_prompt(ThreadId::new(), "hello".to_string())
.await
.expect_err("send_prompt should fail without a manager");
assert_eq!(
err.to_string(),
"unsupported operation: thread manager dropped"
);
}
#[tokio::test]
async fn get_status_returns_not_found_without_manager() {
let control = AgentControl::default();
let got = control.get_status(ThreadId::new()).await;
assert_eq!(got, AgentStatus::NotFound);
}
#[tokio::test]
async fn on_event_updates_status_from_task_started() {
let status = agent_status_from_event(&EventMsg::TaskStarted(TaskStartedEvent {
model_context_window: None,
}));
assert_eq!(status, Some(AgentStatus::Running));
}
#[tokio::test]
async fn on_event_updates_status_from_task_complete() {
let status = agent_status_from_event(&EventMsg::TaskComplete(TaskCompleteEvent {
last_agent_message: Some("done".to_string()),
}));
let expected = AgentStatus::Completed(Some("done".to_string()));
assert_eq!(status, Some(expected));
}
#[tokio::test]
async fn on_event_updates_status_from_error() {
let status = agent_status_from_event(&EventMsg::Error(ErrorEvent {
message: "boom".to_string(),
codex_error_info: None,
}));
let expected = AgentStatus::Errored("boom".to_string());
assert_eq!(status, Some(expected));
}
#[tokio::test]
async fn on_event_updates_status_from_turn_aborted() {
let status = agent_status_from_event(&EventMsg::TurnAborted(TurnAbortedEvent {
reason: TurnAbortReason::Interrupted,
}));
let expected = AgentStatus::Errored("Interrupted".to_string());
assert_eq!(status, Some(expected));
}
#[tokio::test]
async fn on_event_updates_status_from_shutdown_complete() {
let status = agent_status_from_event(&EventMsg::ShutdownComplete);
assert_eq!(status, Some(AgentStatus::Shutdown));
}
}
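
Taken together, the surface here is small: spawn an agent, send it prompts, poll its status. A hypothetical caller might look like this (the `manager` and `config` values are assumed to already exist; nothing in this file constructs them):

```
// Hypothetical usage sketch for AgentControl; `manager` and `config` are
// assumptions for illustration.
async fn spawn_and_check(
    manager: std::sync::Arc<ThreadManagerState>,
    config: crate::config::Config,
) -> CodexResult<()> {
    let control = AgentControl::new(std::sync::Arc::downgrade(&manager));
    // Headless spawn: the drain task discards events so the unbounded
    // channel cannot grow without a reader.
    let agent_id = control
        .spawn_agent(config, "Summarize the repo".to_string(), true)
        .await?;
    // NotFound is returned when the thread (or the manager) is gone.
    let status = control.get_status(agent_id).await;
    tracing::info!(?status, "agent status after spawn");
    Ok(())
}
```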

View File

@@ -0,0 +1,6 @@
pub(crate) mod control;
pub(crate) mod status;
pub(crate) use codex_protocol::protocol::AgentStatus;
pub(crate) use control::AgentControl;
pub(crate) use status::agent_status_from_event;

View File

@@ -0,0 +1,15 @@
use codex_protocol::protocol::AgentStatus;
use codex_protocol::protocol::EventMsg;
/// Derive the next agent status from a single emitted event.
/// Returns `None` when the event does not affect status tracking.
pub(crate) fn agent_status_from_event(msg: &EventMsg) -> Option<AgentStatus> {
match msg {
EventMsg::TaskStarted(_) => Some(AgentStatus::Running),
EventMsg::TaskComplete(ev) => Some(AgentStatus::Completed(ev.last_agent_message.clone())),
EventMsg::TurnAborted(ev) => Some(AgentStatus::Errored(format!("{:?}", ev.reason))),
EventMsg::Error(ev) => Some(AgentStatus::Errored(ev.message.clone())),
EventMsg::ShutdownComplete => Some(AgentStatus::Shutdown),
_ => None,
}
}

View File

@@ -100,7 +100,7 @@ fn extract_request_id(headers: Option<&HeaderMap>) -> Option<String> {
})
}
pub(crate) async fn auth_provider_from_auth(
pub(crate) fn auth_provider_from_auth(
auth: Option<CodexAuth>,
provider: &ModelProviderInfo,
) -> crate::error::Result<CoreAuthProvider> {
@@ -119,7 +119,7 @@ pub(crate) async fn auth_provider_from_auth(
}
if let Some(auth) = auth {
let token = auth.get_token().await?;
let token = auth.get_token()?;
Ok(CoreAuthProvider {
token: Some(token),
account_id: auth.get_account_id(),

View File

@@ -1,10 +1,9 @@
use crate::codex::Session;
use crate::codex::TurnContext;
use crate::function_tool::FunctionCallError;
use crate::protocol::FileChange;
use crate::protocol::ReviewDecision;
use crate::safety::SafetyCheck;
use crate::safety::assess_patch_safety;
use crate::tools::sandboxing::ExecApprovalRequirement;
use codex_apply_patch::ApplyPatchAction;
use codex_apply_patch::ApplyPatchFileChange;
use std::collections::HashMap;
@@ -30,13 +29,12 @@ pub(crate) enum InternalApplyPatchInvocation {
#[derive(Debug)]
pub(crate) struct ApplyPatchExec {
pub(crate) action: ApplyPatchAction,
pub(crate) user_explicitly_approved_this_action: bool,
pub(crate) auto_approved: bool,
pub(crate) exec_approval_requirement: ExecApprovalRequirement,
}
pub(crate) async fn apply_patch(
sess: &Session,
turn_context: &TurnContext,
call_id: &str,
action: ApplyPatchAction,
) -> InternalApplyPatchInvocation {
match assess_patch_safety(
@@ -50,40 +48,24 @@ pub(crate) async fn apply_patch(
..
} => InternalApplyPatchInvocation::DelegateToExec(ApplyPatchExec {
action,
user_explicitly_approved_this_action: user_explicitly_approved,
auto_approved: !user_explicitly_approved,
exec_approval_requirement: ExecApprovalRequirement::Skip {
bypass_sandbox: false,
proposed_execpolicy_amendment: None,
},
}),
SafetyCheck::AskUser => {
// Compute a readable summary of path changes to include in the
// approval request so the user can make an informed decision.
//
// Note that it might be worth expanding this approval request to
// give the user the option to expand the set of writable roots so
// that similar patches can be auto-approved in the future during
// this session.
let rx_approve = sess
.request_patch_approval(
turn_context,
call_id.to_owned(),
convert_apply_patch_to_protocol(&action),
None,
None,
)
.await;
match rx_approve.await.unwrap_or_default() {
ReviewDecision::Approved
| ReviewDecision::ApprovedExecpolicyAmendment { .. }
| ReviewDecision::ApprovedForSession => {
InternalApplyPatchInvocation::DelegateToExec(ApplyPatchExec {
action,
user_explicitly_approved_this_action: true,
})
}
ReviewDecision::Denied | ReviewDecision::Abort => {
InternalApplyPatchInvocation::Output(Err(FunctionCallError::RespondToModel(
"patch rejected by user".to_string(),
)))
}
}
// Delegate the approval prompt (including cached approvals) to the
// tool runtime, consistent with how shell/unified_exec approvals
// are orchestrator-driven.
InternalApplyPatchInvocation::DelegateToExec(ApplyPatchExec {
action,
auto_approved: false,
exec_approval_requirement: ExecApprovalRequirement::NeedsApproval {
reason: None,
proposed_execpolicy_amendment: None,
},
})
}
SafetyCheck::Reject { reason } => InternalApplyPatchInvocation::Output(Err(
FunctionCallError::RespondToModel(format!("patch rejected: {reason}")),

View File

@@ -8,12 +8,10 @@ use serde::Serialize;
use serial_test::serial;
use std::env;
use std::fmt::Debug;
use std::io::ErrorKind;
use std::path::Path;
use std::path::PathBuf;
use std::sync::Arc;
use std::sync::Mutex;
use std::time::Duration;
use codex_app_server_protocol::AuthMode;
use codex_protocol::config_types::ForcedLoginMethod;
@@ -32,11 +30,7 @@ use crate::token_data::parse_id_token;
use crate::util::try_parse_error_message;
use codex_client::CodexHttpClient;
use codex_protocol::account::PlanType as AccountPlanType;
#[cfg(any(test, feature = "test-support"))]
use once_cell::sync::Lazy;
use serde_json::Value;
#[cfg(any(test, feature = "test-support"))]
use tempfile::TempDir;
use thiserror::Error;
#[derive(Debug, Clone)]
@@ -66,9 +60,6 @@ const REFRESH_TOKEN_UNKNOWN_MESSAGE: &str =
const REFRESH_TOKEN_URL: &str = "https://auth.openai.com/oauth/token";
pub const REFRESH_TOKEN_URL_OVERRIDE_ENV_VAR: &str = "CODEX_REFRESH_TOKEN_URL_OVERRIDE";
#[cfg(any(test, feature = "test-support"))]
static TEST_AUTH_TEMP_DIRS: Lazy<Mutex<Vec<TempDir>>> = Lazy::new(|| Mutex::new(Vec::new()));
#[derive(Debug, Error)]
pub enum RefreshTokenError {
#[error("{0}")]
@@ -84,10 +75,6 @@ impl RefreshTokenError {
Self::Transient(_) => None,
}
}
fn other_with_message(message: impl Into<String>) -> Self {
Self::Transient(std::io::Error::other(message.into()))
}
}
impl From<RefreshTokenError> for std::io::Error {
@@ -100,40 +87,6 @@ impl From<RefreshTokenError> for std::io::Error {
}
impl CodexAuth {
pub async fn refresh_token(&self) -> Result<String, RefreshTokenError> {
tracing::info!("Refreshing token");
let token_data = self.get_current_token_data().ok_or_else(|| {
RefreshTokenError::Transient(std::io::Error::other("Token data is not available."))
})?;
let token = token_data.refresh_token;
let refresh_response = try_refresh_token(token, &self.client).await?;
let updated = update_tokens(
&self.storage,
refresh_response.id_token,
refresh_response.access_token,
refresh_response.refresh_token,
)
.await
.map_err(RefreshTokenError::from)?;
if let Ok(mut auth_lock) = self.auth_dot_json.lock() {
*auth_lock = Some(updated.clone());
}
let access = match updated.tokens {
Some(t) => t.access_token,
None => {
return Err(RefreshTokenError::other_with_message(
"Token data is not available after refresh.",
));
}
};
Ok(access)
}
/// Loads the available auth information from auth storage.
pub fn from_auth_storage(
codex_home: &Path,
@@ -142,62 +95,23 @@ impl CodexAuth {
load_auth(codex_home, false, auth_credentials_store_mode)
}
pub async fn get_token_data(&self) -> Result<TokenData, std::io::Error> {
pub fn get_token_data(&self) -> Result<TokenData, std::io::Error> {
let auth_dot_json: Option<AuthDotJson> = self.get_current_auth_json();
match auth_dot_json {
Some(AuthDotJson {
tokens: Some(mut tokens),
last_refresh: Some(last_refresh),
tokens: Some(tokens),
last_refresh: Some(_),
..
}) => {
if last_refresh < Utc::now() - chrono::Duration::days(TOKEN_REFRESH_INTERVAL) {
let refresh_result = tokio::time::timeout(
Duration::from_secs(60),
try_refresh_token(tokens.refresh_token.clone(), &self.client),
)
.await;
let refresh_response = match refresh_result {
Ok(Ok(response)) => response,
Ok(Err(err)) => return Err(err.into()),
Err(_) => {
return Err(std::io::Error::new(
ErrorKind::TimedOut,
"timed out while refreshing OpenAI API key",
));
}
};
let updated_auth_dot_json = update_tokens(
&self.storage,
refresh_response.id_token,
refresh_response.access_token,
refresh_response.refresh_token,
)
.await?;
tokens = updated_auth_dot_json
.tokens
.clone()
.ok_or(std::io::Error::other(
"Token data is not available after refresh.",
))?;
#[expect(clippy::unwrap_used)]
let mut auth_lock = self.auth_dot_json.lock().unwrap();
*auth_lock = Some(updated_auth_dot_json);
}
Ok(tokens)
}
}) => Ok(tokens),
_ => Err(std::io::Error::other("Token data is not available.")),
}
}
pub async fn get_token(&self) -> Result<String, std::io::Error> {
pub fn get_token(&self) -> Result<String, std::io::Error> {
match self.mode {
AuthMode::ApiKey => Ok(self.api_key.clone().unwrap_or_default()),
AuthMode::ChatGPT => {
let id_token = self.get_token_data().await?.access_token;
let id_token = self.get_token_data()?.access_token;
Ok(id_token)
}
}
@@ -345,7 +259,7 @@ pub fn load_auth_dot_json(
storage.load()
}
pub async fn enforce_login_restrictions(config: &Config) -> std::io::Result<()> {
pub fn enforce_login_restrictions(config: &Config) -> std::io::Result<()> {
let Some(auth) = load_auth(
&config.codex_home,
true,
@@ -383,7 +297,7 @@ pub async fn enforce_login_restrictions(config: &Config) -> std::io::Result<()>
return Ok(());
}
let token_data = match auth.get_token_data().await {
let token_data = match auth.get_token_data() {
Ok(data) => data,
Err(err) => {
return logout_with_message(
@@ -532,6 +446,7 @@ async fn try_refresh_token(
Ok(refresh_response)
} else {
let body = response.text().await.unwrap_or_default();
tracing::error!("Failed to refresh token: {status}: {body}");
if status == StatusCode::UNAUTHORIZED {
let failed = classify_refresh_token_failure(&body);
Err(RefreshTokenError::Permanent(failed))
@@ -630,6 +545,329 @@ struct CachedAuth {
auth: Option<CodexAuth>,
}
enum UnauthorizedRecoveryStep {
Reload,
RefreshToken,
Done,
}
enum ReloadOutcome {
Reloaded,
Skipped,
}
// UnauthorizedRecovery is a state machine that attempts to refresh authentication when requests
// to the API fail with a 401 status code.
// The client calls next() every time it encounters a 401 error, one time per retry.
// For API key based authentication, we don't do anything and let the error bubble to the user.
// For ChatGPT based authentication, we:
// 1. Attempt to reload the auth data from disk. We only reload if the account id matches the one the current process is running as.
// 2. Attempt to refresh the token using OAuth token refresh flow.
// If after both steps the server still responds with 401 we let the error bubble to the user.
pub struct UnauthorizedRecovery {
manager: Arc<AuthManager>,
step: UnauthorizedRecoveryStep,
expected_account_id: Option<String>,
}
impl UnauthorizedRecovery {
fn new(manager: Arc<AuthManager>) -> Self {
let expected_account_id = manager
.auth_cached()
.as_ref()
.and_then(CodexAuth::get_account_id);
Self {
manager,
step: UnauthorizedRecoveryStep::Reload,
expected_account_id,
}
}
pub fn has_next(&self) -> bool {
if !self
.manager
.auth_cached()
.is_some_and(|auth| auth.mode == AuthMode::ChatGPT)
{
return false;
}
!matches!(self.step, UnauthorizedRecoveryStep::Done)
}
pub async fn next(&mut self) -> Result<(), RefreshTokenError> {
if !self.has_next() {
return Err(RefreshTokenError::Permanent(RefreshTokenFailedError::new(
RefreshTokenFailedReason::Other,
"No more recovery steps available.",
)));
}
match self.step {
UnauthorizedRecoveryStep::Reload => {
match self
.manager
.reload_if_account_id_matches(self.expected_account_id.as_deref())
{
ReloadOutcome::Reloaded => {
self.step = UnauthorizedRecoveryStep::RefreshToken;
}
ReloadOutcome::Skipped => {
self.manager.refresh_token().await?;
self.step = UnauthorizedRecoveryStep::Done;
}
}
}
UnauthorizedRecoveryStep::RefreshToken => {
self.manager.refresh_token().await?;
self.step = UnauthorizedRecoveryStep::Done;
}
UnauthorizedRecoveryStep::Done => {}
}
Ok(())
}
}
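
A sketch of the retry loop a caller would run against this state machine (the `send_request` helper and the error mapping are hypothetical; the real wiring appears in the client diff further below):

```
// Hypothetical driver for UnauthorizedRecovery. Only has_next()/next() come
// from the code above; send_request and the error conversion are assumed.
async fn call_with_recovery(auth_manager: Arc<AuthManager>) -> Result<Response, ApiError> {
    let mut recovery = auth_manager.unauthorized_recovery();
    loop {
        match send_request().await {
            Err(err) if err.status() == Some(StatusCode::UNAUTHORIZED) && recovery.has_next() => {
                // First pass reloads auth.json (only if the account id matches);
                // second pass refreshes the OAuth token; then errors bubble up.
                recovery.next().await.map_err(ApiError::from)?;
            }
            other => return other,
        }
    }
}
```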
/// Central manager providing a single source of truth for auth.json derived
/// authentication data. It loads once (or on preference change) and then
/// hands out cloned `CodexAuth` values so the rest of the program has a
/// consistent snapshot.
///
/// External modifications to `auth.json` will NOT be observed until
/// `reload()` is called explicitly. This matches the design goal of avoiding
/// different parts of the program seeing inconsistent auth data mid-run.
#[derive(Debug)]
pub struct AuthManager {
codex_home: PathBuf,
inner: RwLock<CachedAuth>,
enable_codex_api_key_env: bool,
auth_credentials_store_mode: AuthCredentialsStoreMode,
}
impl AuthManager {
/// Create a new manager loading the initial auth using the provided
/// preferred auth method. Errors loading auth are swallowed; `auth()` will
/// simply return `None` in that case so callers can treat it as an
/// unauthenticated state.
pub fn new(
codex_home: PathBuf,
enable_codex_api_key_env: bool,
auth_credentials_store_mode: AuthCredentialsStoreMode,
) -> Self {
let auth = load_auth(
&codex_home,
enable_codex_api_key_env,
auth_credentials_store_mode,
)
.ok()
.flatten();
Self {
codex_home,
inner: RwLock::new(CachedAuth { auth }),
enable_codex_api_key_env,
auth_credentials_store_mode,
}
}
#[cfg(any(test, feature = "test-support"))]
/// Create an AuthManager with a specific CodexAuth, for testing only.
pub fn from_auth_for_testing(auth: CodexAuth) -> Arc<Self> {
let cached = CachedAuth { auth: Some(auth) };
Arc::new(Self {
codex_home: PathBuf::from("non-existent"),
inner: RwLock::new(cached),
enable_codex_api_key_env: false,
auth_credentials_store_mode: AuthCredentialsStoreMode::File,
})
}
#[cfg(any(test, feature = "test-support"))]
/// Create an AuthManager with a specific CodexAuth and codex home, for testing only.
pub fn from_auth_for_testing_with_home(auth: CodexAuth, codex_home: PathBuf) -> Arc<Self> {
let cached = CachedAuth { auth: Some(auth) };
Arc::new(Self {
codex_home,
inner: RwLock::new(cached),
enable_codex_api_key_env: false,
auth_credentials_store_mode: AuthCredentialsStoreMode::File,
})
}
/// Current cached auth (clone) without attempting a refresh.
pub fn auth_cached(&self) -> Option<CodexAuth> {
self.inner.read().ok().and_then(|c| c.auth.clone())
}
/// Current cached auth (clone). May be `None` if not logged in or load failed.
/// Refreshes cached ChatGPT tokens if they are stale before returning.
pub async fn auth(&self) -> Option<CodexAuth> {
let auth = self.auth_cached()?;
if let Err(err) = self.refresh_if_stale(&auth).await {
tracing::error!("Failed to refresh token: {}", err);
return Some(auth);
}
self.auth_cached()
}
/// Force a reload of the auth information from auth.json. Returns
/// whether the auth value changed.
pub fn reload(&self) -> bool {
tracing::info!("Reloading auth");
let new_auth = self.load_auth_from_storage();
self.set_auth(new_auth)
}
fn reload_if_account_id_matches(&self, expected_account_id: Option<&str>) -> ReloadOutcome {
let expected_account_id = match expected_account_id {
Some(account_id) => account_id,
None => {
tracing::info!("Skipping auth reload because no account id is available.");
return ReloadOutcome::Skipped;
}
};
let new_auth = self.load_auth_from_storage();
let new_account_id = new_auth.as_ref().and_then(CodexAuth::get_account_id);
if new_account_id.as_deref() != Some(expected_account_id) {
let found_account_id = new_account_id.as_deref().unwrap_or("unknown");
tracing::info!(
"Skipping auth reload due to account id mismatch (expected: {expected_account_id}, found: {found_account_id})"
);
return ReloadOutcome::Skipped;
}
tracing::info!("Reloading auth for account {expected_account_id}");
self.set_auth(new_auth);
ReloadOutcome::Reloaded
}
fn auths_equal(a: &Option<CodexAuth>, b: &Option<CodexAuth>) -> bool {
match (a, b) {
(None, None) => true,
(Some(a), Some(b)) => a == b,
_ => false,
}
}
fn load_auth_from_storage(&self) -> Option<CodexAuth> {
load_auth(
&self.codex_home,
self.enable_codex_api_key_env,
self.auth_credentials_store_mode,
)
.ok()
.flatten()
}
fn set_auth(&self, new_auth: Option<CodexAuth>) -> bool {
if let Ok(mut guard) = self.inner.write() {
let changed = !AuthManager::auths_equal(&guard.auth, &new_auth);
tracing::info!("Reloaded auth, changed: {changed}");
guard.auth = new_auth;
changed
} else {
false
}
}
/// Convenience constructor returning an `Arc` wrapper.
pub fn shared(
codex_home: PathBuf,
enable_codex_api_key_env: bool,
auth_credentials_store_mode: AuthCredentialsStoreMode,
) -> Arc<Self> {
Arc::new(Self::new(
codex_home,
enable_codex_api_key_env,
auth_credentials_store_mode,
))
}
pub fn unauthorized_recovery(self: &Arc<Self>) -> UnauthorizedRecovery {
UnauthorizedRecovery::new(Arc::clone(self))
}
/// Attempt to refresh the current auth token (if any). On success, reload
/// the auth state from disk so other components observe the refreshed token.
/// If the token refresh fails, returns the error to the caller.
pub async fn refresh_token(&self) -> Result<(), RefreshTokenError> {
tracing::info!("Refreshing token");
let auth = match self.auth_cached() {
Some(auth) => auth,
None => return Ok(()),
};
let token_data = auth.get_current_token_data().ok_or_else(|| {
RefreshTokenError::Transient(std::io::Error::other("Token data is not available."))
})?;
self.refresh_tokens(&auth, token_data.refresh_token).await?;
// Reload to pick up persisted changes.
self.reload();
Ok(())
}
/// Log out by deleting the on-disk auth.json (if present). Returns Ok(true)
/// if a file was removed, Ok(false) if no auth file existed. On success,
/// reloads the in-memory auth cache so callers immediately observe the
/// unauthenticated state.
pub fn logout(&self) -> std::io::Result<bool> {
let removed = super::auth::logout(&self.codex_home, self.auth_credentials_store_mode)?;
// Always reload to clear any cached auth (even if file absent).
self.reload();
Ok(removed)
}
pub fn get_auth_mode(&self) -> Option<AuthMode> {
self.auth_cached().map(|a| a.mode)
}
async fn refresh_if_stale(&self, auth: &CodexAuth) -> Result<bool, RefreshTokenError> {
if auth.mode != AuthMode::ChatGPT {
return Ok(false);
}
let auth_dot_json = match auth.get_current_auth_json() {
Some(auth_dot_json) => auth_dot_json,
None => return Ok(false),
};
let tokens = match auth_dot_json.tokens {
Some(tokens) => tokens,
None => return Ok(false),
};
let last_refresh = match auth_dot_json.last_refresh {
Some(last_refresh) => last_refresh,
None => return Ok(false),
};
if last_refresh >= Utc::now() - chrono::Duration::days(TOKEN_REFRESH_INTERVAL) {
return Ok(false);
}
self.refresh_tokens(auth, tokens.refresh_token).await?;
self.reload();
Ok(true)
}
async fn refresh_tokens(
&self,
auth: &CodexAuth,
refresh_token: String,
) -> Result<(), RefreshTokenError> {
let refresh_response = try_refresh_token(refresh_token, &auth.client).await?;
update_tokens(
&auth.storage,
refresh_response.id_token,
refresh_response.access_token,
refresh_response.refresh_token,
)
.await
.map_err(RefreshTokenError::from)?;
Ok(())
}
}
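
As a small usage sketch (the path and flags are illustrative): construct the manager once, then call `auth()` so stale ChatGPT tokens are refreshed before a snapshot is returned:

```
// Illustrative only: a shared AuthManager handing out a token for a request.
async fn token_for_request(codex_home: std::path::PathBuf) -> Option<String> {
    let manager = AuthManager::shared(
        codex_home,
        /* enable_codex_api_key_env */ true,
        AuthCredentialsStoreMode::File,
    );
    // auth() refreshes stale ChatGPT tokens, then returns the cached snapshot.
    let auth = manager.auth().await?;
    auth.get_token().ok()
}
```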
#[cfg(test)]
mod tests {
use super::*;
@@ -788,7 +1026,7 @@ mod tests {
assert_eq!(auth.mode, AuthMode::ApiKey);
assert_eq!(auth.api_key, Some("sk-test-key".to_string()));
assert!(auth.get_token_data().await.is_err());
assert!(auth.get_token_data().is_err());
}
#[test]
@@ -916,7 +1154,6 @@ mod tests {
let config = build_config(codex_home.path(), Some(ForcedLoginMethod::Chatgpt), None).await;
let err = super::enforce_login_restrictions(&config)
.await
.expect_err("expected method mismatch to error");
assert!(err.to_string().contains("ChatGPT login is required"));
assert!(
@@ -942,7 +1179,6 @@ mod tests {
let config = build_config(codex_home.path(), None, Some("org_mine".to_string())).await;
let err = super::enforce_login_restrictions(&config)
.await
.expect_err("expected workspace mismatch to error");
assert!(err.to_string().contains("workspace org_mine"));
assert!(
@@ -967,9 +1203,7 @@ mod tests {
let config = build_config(codex_home.path(), None, Some("org_mine".to_string())).await;
super::enforce_login_restrictions(&config)
.await
.expect("matching workspace should succeed");
super::enforce_login_restrictions(&config).expect("matching workspace should succeed");
assert!(
codex_home.path().join("auth.json").exists(),
"auth.json should remain when restrictions pass"
@@ -985,9 +1219,7 @@ mod tests {
let config = build_config(codex_home.path(), None, Some("org_mine".to_string())).await;
super::enforce_login_restrictions(&config)
.await
.expect("matching workspace should succeed");
super::enforce_login_restrictions(&config).expect("matching workspace should succeed");
assert!(
codex_home.path().join("auth.json").exists(),
"auth.json should remain when restrictions pass"
@@ -1003,7 +1235,6 @@ mod tests {
let config = build_config(codex_home.path(), Some(ForcedLoginMethod::Chatgpt), None).await;
let err = super::enforce_login_restrictions(&config)
.await
.expect_err("environment API key should not satisfy forced ChatGPT login");
assert!(
err.to_string()
@@ -1051,162 +1282,3 @@ mod tests {
pretty_assertions::assert_eq!(auth.account_plan_type(), Some(AccountPlanType::Unknown));
}
}
/// Central manager providing a single source of truth for auth.json derived
/// authentication data. It loads once (or on preference change) and then
/// hands out cloned `CodexAuth` values so the rest of the program has a
/// consistent snapshot.
///
/// External modifications to `auth.json` will NOT be observed until
/// `reload()` is called explicitly. This matches the design goal of avoiding
/// different parts of the program seeing inconsistent auth data mid-run.
#[derive(Debug)]
pub struct AuthManager {
codex_home: PathBuf,
inner: RwLock<CachedAuth>,
enable_codex_api_key_env: bool,
auth_credentials_store_mode: AuthCredentialsStoreMode,
}
impl AuthManager {
/// Create a new manager loading the initial auth using the provided
/// preferred auth method. Errors loading auth are swallowed; `auth()` will
/// simply return `None` in that case so callers can treat it as an
/// unauthenticated state.
pub fn new(
codex_home: PathBuf,
enable_codex_api_key_env: bool,
auth_credentials_store_mode: AuthCredentialsStoreMode,
) -> Self {
let auth = load_auth(
&codex_home,
enable_codex_api_key_env,
auth_credentials_store_mode,
)
.ok()
.flatten();
Self {
codex_home,
inner: RwLock::new(CachedAuth { auth }),
enable_codex_api_key_env,
auth_credentials_store_mode,
}
}
#[cfg(any(test, feature = "test-support"))]
#[expect(clippy::expect_used)]
/// Create an AuthManager with a specific CodexAuth, for testing only.
pub fn from_auth_for_testing(auth: CodexAuth) -> Arc<Self> {
let cached = CachedAuth { auth: Some(auth) };
let temp_dir = tempfile::tempdir().expect("temp codex home");
let codex_home = temp_dir.path().to_path_buf();
TEST_AUTH_TEMP_DIRS
.lock()
.expect("lock test codex homes")
.push(temp_dir);
Arc::new(Self {
codex_home,
inner: RwLock::new(cached),
enable_codex_api_key_env: false,
auth_credentials_store_mode: AuthCredentialsStoreMode::File,
})
}
#[cfg(any(test, feature = "test-support"))]
/// Create an AuthManager with a specific CodexAuth and codex home, for testing only.
pub fn from_auth_for_testing_with_home(auth: CodexAuth, codex_home: PathBuf) -> Arc<Self> {
let cached = CachedAuth { auth: Some(auth) };
Arc::new(Self {
codex_home,
inner: RwLock::new(cached),
enable_codex_api_key_env: false,
auth_credentials_store_mode: AuthCredentialsStoreMode::File,
})
}
/// Current cached auth (clone). May be `None` if not logged in or load failed.
pub fn auth(&self) -> Option<CodexAuth> {
self.inner.read().ok().and_then(|c| c.auth.clone())
}
pub fn codex_home(&self) -> &Path {
&self.codex_home
}
/// Force a reload of the auth information from auth.json. Returns
/// whether the auth value changed.
pub fn reload(&self) -> bool {
let new_auth = load_auth(
&self.codex_home,
self.enable_codex_api_key_env,
self.auth_credentials_store_mode,
)
.ok()
.flatten();
if let Ok(mut guard) = self.inner.write() {
let changed = !AuthManager::auths_equal(&guard.auth, &new_auth);
guard.auth = new_auth;
changed
} else {
false
}
}
fn auths_equal(a: &Option<CodexAuth>, b: &Option<CodexAuth>) -> bool {
match (a, b) {
(None, None) => true,
(Some(a), Some(b)) => a == b,
_ => false,
}
}
/// Convenience constructor returning an `Arc` wrapper.
pub fn shared(
codex_home: PathBuf,
enable_codex_api_key_env: bool,
auth_credentials_store_mode: AuthCredentialsStoreMode,
) -> Arc<Self> {
Arc::new(Self::new(
codex_home,
enable_codex_api_key_env,
auth_credentials_store_mode,
))
}
/// Attempt to refresh the current auth token (if any). On success, reload
/// the auth state from disk so other components observe the refreshed token.
/// If the token refresh fails in a permanent (non-transient) way, logs out
/// to clear invalid auth state.
pub async fn refresh_token(&self) -> Result<Option<String>, RefreshTokenError> {
let auth = match self.auth() {
Some(a) => a,
None => return Ok(None),
};
match auth.refresh_token().await {
Ok(token) => {
// Reload to pick up persisted changes.
self.reload();
Ok(Some(token))
}
Err(e) => {
tracing::error!("Failed to refresh token: {}", e);
Err(e)
}
}
}
/// Log out by deleting the on-disk auth.json (if present). Returns Ok(true)
/// if a file was removed, Ok(false) if no auth file existed. On success,
/// reloads the in-memory auth cache so callers immediately observe the
/// unauthenticated state.
pub fn logout(&self) -> std::io::Result<bool> {
let removed = super::auth::logout(&self.codex_home, self.auth_credentials_store_mode)?;
// Always reload to clear any cached auth (even if file absent).
self.reload();
Ok(removed)
}
pub fn get_auth_mode(&self) -> Option<AuthMode> {
self.auth().map(|a| a.mode)
}
}

View File

@@ -2,6 +2,7 @@ use std::sync::Arc;
use crate::api_bridge::auth_provider_from_auth;
use crate::api_bridge::map_api_error;
use crate::auth::UnauthorizedRecovery;
use codex_api::AggregateStreamExt;
use codex_api::ChatClient as ApiChatClient;
use codex_api::CompactClient as ApiCompactClient;
@@ -17,11 +18,14 @@ use codex_api::TransportError;
use codex_api::common::Reasoning;
use codex_api::create_text_param_for_request;
use codex_api::error::ApiError;
use codex_api::requests::responses::Compression;
use codex_app_server_protocol::AuthMode;
use codex_otel::otel_manager::OtelManager;
use codex_protocol::ConversationId;
use codex_otel::OtelManager;
use codex_protocol::ThreadId;
use codex_protocol::config_types::ReasoningSummary as ReasoningSummaryConfig;
use codex_protocol::models::ResponseItem;
use codex_protocol::openai_models::ModelInfo;
use codex_protocol::openai_models::ReasoningEffort as ReasoningEffortConfig;
use codex_protocol::protocol::SessionSource;
use eventsource_stream::Event;
@@ -46,10 +50,10 @@ use crate::default_client::build_reqwest_client;
use crate::error::CodexErr;
use crate::error::Result;
use crate::features::FEATURES;
use crate::features::Feature;
use crate::flags::CODEX_RS_SSE_FIXTURE;
use crate::model_provider_info::ModelProviderInfo;
use crate::model_provider_info::WireApi;
use crate::models_manager::model_family::ModelFamily;
use crate::tools::spec::create_tools_json_for_chat_completions_api;
use crate::tools::spec::create_tools_json_for_responses_api;
@@ -57,10 +61,10 @@ use crate::tools::spec::create_tools_json_for_responses_api;
pub struct ModelClient {
config: Arc<Config>,
auth_manager: Option<Arc<AuthManager>>,
model_family: ModelFamily,
model_info: ModelInfo,
otel_manager: OtelManager,
provider: ModelProviderInfo,
conversation_id: ConversationId,
conversation_id: ThreadId,
effort: Option<ReasoningEffortConfig>,
summary: ReasoningSummaryConfig,
session_source: SessionSource,
@@ -71,18 +75,18 @@ impl ModelClient {
pub fn new(
config: Arc<Config>,
auth_manager: Option<Arc<AuthManager>>,
model_family: ModelFamily,
model_info: ModelInfo,
otel_manager: OtelManager,
provider: ModelProviderInfo,
effort: Option<ReasoningEffortConfig>,
summary: ReasoningSummaryConfig,
conversation_id: ConversationId,
conversation_id: ThreadId,
session_source: SessionSource,
) -> Self {
Self {
config,
auth_manager,
model_family,
model_info,
otel_manager,
provider,
conversation_id,
@@ -93,11 +97,11 @@ impl ModelClient {
}
pub fn get_model_context_window(&self) -> Option<i64> {
let model_family = self.get_model_family();
let effective_context_window_percent = model_family.effective_context_window_percent;
model_family
.context_window
.map(|w| w.saturating_mul(effective_context_window_percent) / 100)
let model_info = self.get_model_info();
let effective_context_window_percent = model_info.effective_context_window_percent;
model_info.context_window.map(|context_window| {
context_window.saturating_mul(effective_context_window_percent) / 100
})
}
pub fn config(&self) -> Arc<Config> {
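
The rewritten accessor scales the raw context window by `effective_context_window_percent` using overflow-safe integer math. A standalone sketch of that arithmetic, with the free-function signature assumed and only the two fields from the hunk modeled:

```rust
// `saturating_mul` guards against i64 overflow before the division by 100,
// matching the accessor above.
fn effective_context_window(context_window: Option<i64>, percent: i64) -> Option<i64> {
    context_window.map(|w| w.saturating_mul(percent) / 100)
}

fn main() {
    // e.g. a 200k-token window with 95% declared usable.
    assert_eq!(effective_context_window(Some(200_000), 95), Some(190_000));
    assert_eq!(effective_context_window(None, 95), None);
}
```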
@@ -146,20 +150,25 @@ impl ModelClient {
}
let auth_manager = self.auth_manager.clone();
let model_family = self.get_model_family();
let instructions = prompt.get_full_instructions(&model_family).into_owned();
let model_info = self.get_model_info();
let instructions = prompt.get_full_instructions(&model_info).into_owned();
let tools_json = create_tools_json_for_chat_completions_api(&prompt.tools)?;
let api_prompt = build_api_prompt(prompt, instructions, tools_json);
let conversation_id = self.conversation_id.to_string();
let session_source = self.session_source.clone();
let mut refreshed = false;
let mut auth_recovery = auth_manager
.as_ref()
.map(super::auth::AuthManager::unauthorized_recovery);
loop {
let auth = auth_manager.as_ref().and_then(|m| m.auth());
let auth = match auth_manager.as_ref() {
Some(manager) => manager.auth().await,
None => None,
};
let api_provider = self
.provider
.to_api_provider(auth.as_ref().map(|a| a.mode))?;
let api_auth = auth_provider_from_auth(auth.clone(), &self.provider).await?;
let api_auth = auth_provider_from_auth(auth.clone(), &self.provider)?;
let transport = ReqwestTransport::new(build_reqwest_client());
let (request_telemetry, sse_telemetry) = self.build_streaming_telemetry();
let client = ApiChatClient::new(transport, api_provider, api_auth)
@@ -179,7 +188,7 @@ impl ModelClient {
Err(ApiError::Transport(TransportError::Http { status, .. }))
if status == StatusCode::UNAUTHORIZED =>
{
handle_unauthorized(status, &mut refreshed, &auth_manager, &auth).await?;
handle_unauthorized(status, &mut auth_recovery).await?;
continue;
}
Err(err) => return Err(map_api_error(err)),
@@ -200,13 +209,14 @@ impl ModelClient {
}
let auth_manager = self.auth_manager.clone();
let model_family = self.get_model_family();
let instructions = prompt.get_full_instructions(&model_family).into_owned();
let model_info = self.get_model_info();
let instructions = prompt.get_full_instructions(&model_info).into_owned();
let tools_json: Vec<Value> = create_tools_json_for_responses_api(&prompt.tools)?;
let reasoning = if model_family.supports_reasoning_summaries {
let default_reasoning_effort = model_info.default_reasoning_level;
let reasoning = if model_info.supports_reasoning_summaries {
Some(Reasoning {
effort: self.effort.or(model_family.default_reasoning_effort),
effort: self.effort.or(default_reasoning_effort),
summary: if self.summary == ReasoningSummaryConfig::None {
None
} else {
@@ -223,15 +233,13 @@ impl ModelClient {
vec![]
};
let verbosity = if model_family.support_verbosity {
self.config
.model_verbosity
.or(model_family.default_verbosity)
let verbosity = if model_info.support_verbosity {
self.config.model_verbosity.or(model_info.default_verbosity)
} else {
if self.config.model_verbosity.is_some() {
warn!(
"model_verbosity is set but ignored as the model does not support verbosity: {}",
model_family.family
model_info.slug
);
}
None
@@ -242,15 +250,34 @@ impl ModelClient {
let conversation_id = self.conversation_id.to_string();
let session_source = self.session_source.clone();
let mut refreshed = false;
let mut auth_recovery = auth_manager
.as_ref()
.map(super::auth::AuthManager::unauthorized_recovery);
loop {
let auth = auth_manager.as_ref().and_then(|m| m.auth());
let auth = match auth_manager.as_ref() {
Some(manager) => manager.auth().await,
None => None,
};
let api_provider = self
.provider
.to_api_provider(auth.as_ref().map(|a| a.mode))?;
let api_auth = auth_provider_from_auth(auth.clone(), &self.provider).await?;
let api_auth = auth_provider_from_auth(auth.clone(), &self.provider)?;
let transport = ReqwestTransport::new(build_reqwest_client());
let (request_telemetry, sse_telemetry) = self.build_streaming_telemetry();
let compression = if self
.config
.features
.enabled(Feature::EnableRequestCompression)
&& auth
.as_ref()
.is_some_and(|auth| auth.mode == AuthMode::ChatGPT)
&& self.provider.is_openai()
{
Compression::Zstd
} else {
Compression::None
};
let client = ApiResponsesClient::new(transport, api_provider, api_auth)
.with_telemetry(Some(request_telemetry), Some(sse_telemetry));
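
The new compression gate enables zstd only when the feature flag, ChatGPT auth mode, and an OpenAI provider all line up. A simplified, self-contained sketch of that predicate, with boolean parameters standing in for the config, auth, and provider checks above:

```rust
#[derive(Debug, PartialEq)]
enum Compression {
    Zstd,
    None,
}

// All three conditions must hold for request bodies to be compressed.
fn pick_compression(
    feature_enabled: bool,
    is_chatgpt_auth: bool,
    is_openai_provider: bool,
) -> Compression {
    if feature_enabled && is_chatgpt_auth && is_openai_provider {
        Compression::Zstd
    } else {
        Compression::None
    }
}

fn main() {
    assert_eq!(pick_compression(true, true, true), Compression::Zstd);
    // Any missing condition falls back to uncompressed requests.
    assert_eq!(pick_compression(true, false, true), Compression::None);
}
```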
@@ -263,6 +290,7 @@ impl ModelClient {
conversation_id: Some(conversation_id.clone()),
session_source: Some(session_source.clone()),
extra_headers: beta_feature_headers(&self.config),
compression,
};
let stream_result = client
@@ -276,7 +304,7 @@ impl ModelClient {
Err(ApiError::Transport(TransportError::Http { status, .. }))
if status == StatusCode::UNAUTHORIZED =>
{
handle_unauthorized(status, &mut refreshed, &auth_manager, &auth).await?;
handle_unauthorized(status, &mut auth_recovery).await?;
continue;
}
Err(err) => return Err(map_api_error(err)),
@@ -298,12 +326,11 @@ impl ModelClient {
/// Returns the currently configured model slug.
pub fn get_model(&self) -> String {
self.get_model_family().get_model_slug().to_string()
self.model_info.slug.clone()
}
/// Returns the currently configured model info.
pub fn get_model_family(&self) -> ModelFamily {
self.model_family.clone()
pub fn get_model_info(&self) -> ModelInfo {
self.model_info.clone()
}
/// Returns the current reasoning effort setting.
@@ -329,18 +356,21 @@ impl ModelClient {
return Ok(Vec::new());
}
let auth_manager = self.auth_manager.clone();
let auth = auth_manager.as_ref().and_then(|m| m.auth());
let auth = match auth_manager.as_ref() {
Some(manager) => manager.auth().await,
None => None,
};
let api_provider = self
.provider
.to_api_provider(auth.as_ref().map(|a| a.mode))?;
let api_auth = auth_provider_from_auth(auth.clone(), &self.provider).await?;
let api_auth = auth_provider_from_auth(auth.clone(), &self.provider)?;
let transport = ReqwestTransport::new(build_reqwest_client());
let request_telemetry = self.build_request_telemetry();
let client = ApiCompactClient::new(transport, api_provider, api_auth)
.with_telemetry(Some(request_telemetry));
let instructions = prompt
.get_full_instructions(&self.get_model_family())
.get_full_instructions(&self.get_model_info())
.into_owned();
let payload = ApiCompactionInput {
model: &self.get_model(),
@@ -485,29 +515,19 @@ where
/// the mapped `CodexErr` is returned to the caller.
async fn handle_unauthorized(
status: StatusCode,
refreshed: &mut bool,
auth_manager: &Option<Arc<AuthManager>>,
auth: &Option<crate::auth::CodexAuth>,
auth_recovery: &mut Option<UnauthorizedRecovery>,
) -> Result<()> {
if *refreshed {
return Err(map_unauthorized_status(status));
}
if let Some(manager) = auth_manager.as_ref()
&& let Some(auth) = auth.as_ref()
&& auth.mode == AuthMode::ChatGPT
if let Some(recovery) = auth_recovery
&& recovery.has_next()
{
match manager.refresh_token().await {
Ok(_) => {
*refreshed = true;
Ok(())
}
return match recovery.next().await {
Ok(_) => Ok(()),
Err(RefreshTokenError::Permanent(failed)) => Err(CodexErr::RefreshTokenFailed(failed)),
Err(RefreshTokenError::Transient(other)) => Err(CodexErr::Io(other)),
}
} else {
Err(map_unauthorized_status(status))
};
}
Err(map_unauthorized_status(status))
}
fn map_unauthorized_status(status: StatusCode) -> CodexErr {
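
The one-shot `refreshed` boolean is replaced by an `UnauthorizedRecovery` value driven through `has_next()` and `next().await`. Its internals are not shown in this diff; below is a hypothetical model of the implied state machine, with the step names invented for illustration (not the crate's real API):

```rust
// Assumed recovery steps: re-read credentials, then refresh the token.
enum Step {
    ReloadFromDisk,
    RefreshToken,
}

struct Recovery {
    remaining: std::vec::IntoIter<Step>,
}

impl Recovery {
    fn new() -> Self {
        Self {
            remaining: vec![Step::ReloadFromDisk, Step::RefreshToken].into_iter(),
        }
    }

    // Mirrors `recovery.has_next()` in the hunk above.
    fn has_next(&self) -> bool {
        self.remaining.len() > 0
    }

    // Mirrors `recovery.next().await`: run one step, then let the caller
    // retry the request. Errors would map to RefreshTokenError variants.
    fn next(&mut self) -> Result<(), String> {
        match self.remaining.next() {
            Some(Step::ReloadFromDisk) => Ok(()), // e.g. re-read auth.json
            Some(Step::RefreshToken) => Ok(()),   // e.g. exchange refresh token
            None => Err("recovery exhausted".to_string()),
        }
    }
}

fn main() {
    let mut recovery = Recovery::new();
    while recovery.has_next() {
        // On each 401: advance one recovery step, then retry the request.
        recovery.next().unwrap();
    }
    assert!(!recovery.has_next()); // further 401s surface to the caller
}
```

Compared to the old `refreshed` flag, this gives each 401 one more recovery attempt per remaining step instead of exactly one refresh total.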


@@ -1,15 +1,13 @@
use crate::client_common::tools::ToolSpec;
use crate::error::Result;
use crate::models_manager::model_family::ModelFamily;
pub use codex_api::common::ResponseEvent;
use codex_apply_patch::APPLY_PATCH_TOOL_INSTRUCTIONS;
use codex_protocol::models::ResponseItem;
use codex_protocol::openai_models::ModelInfo;
use futures::Stream;
use serde::Deserialize;
use serde_json::Value;
use std::borrow::Cow;
use std::collections::HashSet;
use std::ops::Deref;
use std::pin::Pin;
use std::task::Context;
use std::task::Poll;
@@ -44,28 +42,12 @@ pub struct Prompt {
}
impl Prompt {
pub(crate) fn get_full_instructions<'a>(&'a self, model: &'a ModelFamily) -> Cow<'a, str> {
let base = self
.base_instructions_override
.as_deref()
.unwrap_or(model.base_instructions.deref());
// When there are no custom instructions, add apply_patch_tool_instructions if:
// - the model needs special instructions (4.1)
// AND
// - there is no apply_patch tool present
let is_apply_patch_tool_present = self.tools.iter().any(|tool| match tool {
ToolSpec::Function(f) => f.name == "apply_patch",
ToolSpec::Freeform(f) => f.name == "apply_patch",
_ => false,
});
if self.base_instructions_override.is_none()
&& model.needs_special_apply_patch_instructions
&& !is_apply_patch_tool_present
{
Cow::Owned(format!("{base}\n{APPLY_PATCH_TOOL_INSTRUCTIONS}"))
} else {
Cow::Borrowed(base)
}
pub(crate) fn get_full_instructions<'a>(&'a self, model: &'a ModelInfo) -> Cow<'a, str> {
Cow::Borrowed(
self.base_instructions_override
.as_deref()
.unwrap_or(model.base_instructions.as_str()),
)
}
pub(crate) fn get_formatted_input(&self) -> Vec<ResponseItem> {
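
The simplified `get_full_instructions` no longer appends `APPLY_PATCH_TOOL_INSTRUCTIONS`; per the test change later in this file, those instructions now ship inside `base_instructions` for the models that need them. A standalone sketch of the remaining override-or-base selection:

```rust
use std::borrow::Cow;

// After the change, the override (when present) or the model's base
// instructions are returned borrowed; nothing is appended anymore.
fn full_instructions<'a>(override_text: Option<&'a str>, base: &'a str) -> Cow<'a, str> {
    Cow::Borrowed(override_text.unwrap_or(base))
}

fn main() {
    assert_eq!(full_instructions(None, "base"), Cow::Borrowed("base"));
    assert_eq!(full_instructions(Some("custom"), "base"), Cow::Borrowed("custom"));
}
```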
@@ -195,8 +177,13 @@ pub(crate) mod tools {
LocalShell {},
// TODO: Understand why we get an error on web_search although the API docs say it's supported.
// https://platform.openai.com/docs/guides/tools-web-search?api-mode=responses#:~:text=%7B%20type%3A%20%22web_search%22%20%7D%2C
// The `external_web_access` field determines whether the web search is over cached or live content.
// https://platform.openai.com/docs/guides/tools-web-search#live-internet-access
#[serde(rename = "web_search")]
WebSearch {},
WebSearch {
#[serde(skip_serializing_if = "Option::is_none")]
external_web_access: Option<bool>,
},
#[serde(rename = "custom")]
Freeform(FreeformTool),
}
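
Because of `skip_serializing_if`, the extended variant stays wire-compatible with the old unit-like `WebSearch {}` whenever `external_web_access` is unset. A self-contained sketch of that serde behavior, assuming `serde` and `serde_json` as dependencies; the `tag = "type"` attribute is an assumption inferred from the `rename` attributes above:

```rust
use serde::Serialize;

#[derive(Serialize)]
#[serde(tag = "type")]
enum ToolSpec {
    #[serde(rename = "web_search")]
    WebSearch {
        // Skipped entirely when None, so the old `{"type":"web_search"}`
        // payload is preserved on the wire.
        #[serde(skip_serializing_if = "Option::is_none")]
        external_web_access: Option<bool>,
    },
}

fn main() {
    let off = ToolSpec::WebSearch { external_web_access: None };
    let on = ToolSpec::WebSearch { external_web_access: Some(true) };
    println!("{}", serde_json::to_string(&off).unwrap()); // {"type":"web_search"}
    println!("{}", serde_json::to_string(&on).unwrap());  // {"type":"web_search","external_web_access":true}
}
```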
@@ -206,7 +193,7 @@ pub(crate) mod tools {
match self {
ToolSpec::Function(tool) => tool.name.as_str(),
ToolSpec::LocalShell {} => "local_shell",
ToolSpec::WebSearch {} => "web_search",
ToolSpec::WebSearch { .. } => "web_search",
ToolSpec::Freeform(tool) => tool.name.as_str(),
}
}
@@ -272,6 +259,8 @@ mod tests {
let prompt = Prompt {
..Default::default()
};
let prompt_with_apply_patch_instructions =
include_str!("../prompt_with_apply_patch_instructions.md");
let test_cases = vec![
InstructionsTestCase {
slug: "gpt-3.5",
@@ -312,19 +301,16 @@ mod tests {
];
for test_case in test_cases {
let config = test_config();
let model_family =
ModelsManager::construct_model_family_offline(test_case.slug, &config);
let expected = if test_case.expects_apply_patch_instructions {
format!(
"{}\n{}",
model_family.clone().base_instructions,
APPLY_PATCH_TOOL_INSTRUCTIONS
)
} else {
model_family.clone().base_instructions
};
let model_info = ModelsManager::construct_model_info_offline(test_case.slug, &config);
if test_case.expects_apply_patch_instructions {
assert_eq!(
model_info.base_instructions.as_str(),
prompt_with_apply_patch_instructions
);
}
let full = prompt.get_full_instructions(&model_family);
let expected = model_info.base_instructions.as_str();
let full = prompt.get_full_instructions(&model_info);
assert_eq!(full, expected);
}
}

File diff suppressed because it is too large

Some files were not shown because too many files have changed in this diff