Compare commits

..

45 Commits

Author SHA1 Message Date
Abhishek Bhardwaj
9518387ddb feature: Add "!cmd" user shell execution
- protocol: add Op::RunUserShellCommand to model a user-initiated one-off command
- core: handle new Op by spawning a cancellable task that runs the command using the user’s default shell; stream output via ExecCommand* events; send TaskStarted/TaskComplete; track as current_task so Interrupt works
- tui: detect leading '!' in composer submission and dispatch Op::RunUserShellCommand instead of sending a user message

No changes to sandbox env var behavior; uses existing exec pipeline and event types.
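For context, a minimal sketch of the composer-side dispatch described above. The `Op` variant names come from the bullets; the field shapes and the handler itself are assumptions, not the actual protocol crate:

```rust
/// Stand-ins for the protocol types named above; the real definitions in
/// codex-rs will differ (e.g. user input likely carries richer items).
enum Op {
    RunUserShellCommand { command: String },
    UserInput { text: String },
}

/// A leading '!' turns the submission into a one-off user shell command
/// instead of a chat message.
fn on_composer_submit(text: &str) -> Op {
    match text.strip_prefix('!') {
        Some(cmd) => Op::RunUserShellCommand {
            command: cmd.trim().to_string(),
        },
        None => Op::UserInput {
            text: text.to_string(),
        },
    }
}
```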
2025-09-13 17:24:18 -07:00
pakrym-oai
3d4acbaea0 Preserve IDs for more item types in azure (#3542)
https://github.com/openai/codex/issues/3509
2025-09-13 01:09:56 +00:00
pakrym-oai
414b8be8b6 Always request encrypted cot (#3539)
Otherwise future requests will fail with 500
2025-09-12 23:51:30 +00:00
dedrisian-oai
90a0fd342f Review Mode (Core) (#3401)
## 📝 Review Mode -- Core

This PR introduces the Core implementation for Review mode:

- New op `Op::Review { prompt: String }`: spawns a child review task
with isolated context, a review‑specific system prompt, and a
`Config.review_model`.
- `EnteredReviewMode`: emitted when the child review session starts.
Every event from this point onwards reflects the review session.
- `ExitedReviewMode(Option<ReviewOutputEvent>)`: emitted when the review
finishes or is interrupted, with optional structured findings:

```json
{
  "findings": [
    {
      "title": "<≤ 80 chars, imperative>",
      "body": "<valid Markdown explaining *why* this is a problem; cite files/lines/functions>",
      "confidence_score": <float 0.0-1.0>,
      "priority": <int 0-3>,
      "code_location": {
        "absolute_file_path": "<file path>",
        "line_range": {"start": <int>, "end": <int>}
      }
    }
  ],
  "overall_correctness": "patch is correct" | "patch is incorrect",
  "overall_explanation": "<1-3 sentence explanation justifying the overall_correctness verdict>",
  "overall_confidence_score": <float 0.0-1.0>
}
```
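
For reference, the schema above maps onto serde types along these lines. This is a sketch only; the actual `ReviewOutputEvent` definition in the protocol crate may differ in names and derives:

```rust
use serde::Deserialize;

#[derive(Deserialize)]
struct ReviewOutputEvent {
    findings: Vec<Finding>,
    /// "patch is correct" | "patch is incorrect"
    overall_correctness: String,
    overall_explanation: String,
    overall_confidence_score: f32,
}

#[derive(Deserialize)]
struct Finding {
    title: String,
    body: String,
    confidence_score: f32,
    /// 0-3; the review prompt later in this diff marks this optional.
    priority: Option<u8>,
    code_location: CodeLocation,
}

#[derive(Deserialize)]
struct CodeLocation {
    absolute_file_path: String,
    line_range: LineRange,
}

#[derive(Deserialize)]
struct LineRange {
    start: u32,
    end: u32,
}
```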

## Questions

### Why separate out its own message history?

We want the review thread to match the training of our review models as
much as possible -- that means using a custom prompt, removing user
instructions, and starting a clean chat history.

We also want to make sure the review thread doesn't leak into the parent
thread.

### Why do this as a mode, vs. sub-agents?

1. We want review to be a synchronous task, so it's fine for now to do a
bespoke implementation.
2. We're still unclear about the final structure for sub-agents. We'd
prefer to land this quickly and then refactor into sub-agents without
rushing that implementation.
2025-09-12 23:25:10 +00:00
jif-oai
8d56d2f655 fix: NIT None reasoning effort (#3536)
Fix the reasoning effort not being set to None in the UI
2025-09-12 21:17:49 +00:00
jif-oai
8408f3e8ed Fix NUX UI (#3534)
Fix NUX UI
2025-09-12 14:09:31 -07:00
Jeremy Rose
b8ccfe9b65 core: expand default sandbox (#3483)
This adds some more capabilities to the default sandbox which I feel are
safe. Most are in the
[renderer.sb](https://source.chromium.org/chromium/chromium/src/+/main:sandbox/policy/mac/renderer.sb)
sandbox for Chrome renderers, which I feel is fair game for Codex
commands.

Specific changes:

1. Allow processes in the sandbox to send signals to any other process
in the same sandbox (e.g. child processes or daemonized processes),
instead of just themselves.
2. Allow user-preference-read
3. Allow process-info* to anything in the same sandbox. This is a bit
wider than Chromium allows, but it seems OK to me to allow anything in
the sandbox to get details about other processes in the same sandbox.
Bazel uses these to e.g. wait for another process to exit.
4. Allow all CPU feature detection; this seems harmless to me. It's
wider than Chromium allows, but Chromium is concerned about fingerprinting
and tightly controls which CPU features it actually cares about, and we
have neither that restriction nor that advantage.
5. Allow new sysctl-reads:
   ```
     (sysctl-name "vm.loadavg")
     (sysctl-name-prefix "kern.proc.pgrp.")
     (sysctl-name-prefix "kern.proc.pid.")
     (sysctl-name-prefix "net.routetable.")
   ```
Bazel needs these for waiting on child processes and for communicating
with its local build server, I believe. I wonder if we should just allow
all (sysctl-read), as reading any arbitrary info about the system seems
fine to me.
6. Allow iokit-open on RootDomainUserClient. This has to do with power
management, I believe, and Chromium allows renderers to do this, so okay.
Bazel needs it to boot successfully, possibly for sleep/wake callbacks?
7. Mach lookup to `com.apple.system.opendirectoryd.libinfo`, which has
to do with user data, and which Chrome allows.
8. Mach lookup to `com.apple.PowerManagement.control`. Chromium allows
its GPU process to do this, but not its renderers. Bazel needs this to
boot, probably related to sleep/wake stuff.
2025-09-12 14:03:02 -07:00
pakrym-oai
e3c6903199 Add Azure Responses API workaround (#3528)
Azure Responses API doesn't work well with `store: false` and response
items:

- If `store = false` and an id is sent, an error is thrown that the ID is
not found.
- If `store = false` and an id is not sent, an error is thrown that an ID
is required.

Add detection for Azure URLs and a workaround to preserve reasoning
item IDs and send `store: true`.
2025-09-12 13:52:15 -07:00
Jeremy Rose
5f6e95b592 if a command parses as a patch, do not attempt to run it (#3382)
Sometimes the model forgets to actually invoke `apply_patch` and puts a
patch in as the script body. Trying to execute this as bash sometimes
creates files named `,` or `{` or does other unknown things, so catch
this situation and return an error to the model.
2025-09-12 13:47:41 -07:00
Ahmed Ibrahim
a2e9cc5530 Update interruption error message styling (#3470)
<img width="497" height="76" alt="image"
src="https://github.com/user-attachments/assets/a1ad279d-1d01-41cd-ac14-b3343a392563"
/>

<img width="493" height="74" alt="image"
src="https://github.com/user-attachments/assets/baf487ba-430e-40fe-8944-2071ec052962"
/>
2025-09-12 16:17:02 -04:00
jif-oai
ea225df22e feat: context compaction (#3446)
## Compact feature:
1. Stops the model when the context window becomes too large
2. Adds a user turn asking the model to summarize
3. Builds a bridge that contains all the previous user messages plus the
summary, rendered from a template
4. Starts sampling again from a clean conversation with only that bridge
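
A rough sketch of that flow. The message type, token accounting, and `summarize` hook are all assumptions; the real implementation streams through the model client and renders the bridge from a template (note the `askama` dependency added to codex-core in the Cargo diffs below):

```rust
#[derive(Clone)]
enum Role {
    User,
    Assistant,
}

#[derive(Clone)]
struct Message {
    role: Role,
    text: String,
}

impl Message {
    fn user(text: &str) -> Self {
        Self { role: Role::User, text: text.to_string() }
    }
    fn is_user(&self) -> bool {
        matches!(self.role, Role::User)
    }
}

// Step 3's bridge, rendered here as a plain string instead of a template.
fn render_bridge(user_msgs: &[Message], summary: &str) -> String {
    let prior: Vec<&str> = user_msgs.iter().map(|m| m.text.as_str()).collect();
    format!(
        "Earlier user messages:\n{}\n\nSummary of the conversation:\n{}",
        prior.join("\n"),
        summary
    )
}

/// Hypothetical driver for steps 1-4.
fn maybe_compact(
    history: &mut Vec<Message>,
    used_tokens: u64,
    limit: u64,
    summarize: impl Fn(&[Message]) -> String,
) {
    // Step 1: only compact once the context window grows too large.
    if used_tokens <= limit {
        return;
    }
    // Step 2: add a user turn asking the model to summarize.
    history.push(Message::user("Summarize the conversation so far."));
    let summary = summarize(history);
    // Step 3: build the bridge from previous user messages plus the summary.
    let user_msgs: Vec<Message> =
        history.iter().filter(|m| m.is_user()).cloned().collect();
    let bridge = render_bridge(&user_msgs, &summary);
    // Step 4: start sampling again from a clean conversation with only
    // that bridge.
    *history = vec![Message::user(&bridge)];
}
```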
2025-09-12 13:07:10 -07:00
Ahmed Ibrahim
d4848e558b Add spacing before composer footer hints (#3469)
<img width="647" height="82" alt="image"
src="https://github.com/user-attachments/assets/867eb5d9-3076-4018-846e-260a50408185"
/>
2025-09-12 15:31:24 -04:00
Ahmed Ibrahim
1a6a95fb2a Add spacing between dropdown headers and items (#3472)
<img width="927" height="194" alt="image"
src="https://github.com/user-attachments/assets/f4cb999b-16c3-448a-aed4-060bed8b96dd"
/>

<img width="1246" height="205" alt="image"
src="https://github.com/user-attachments/assets/5d9ba5bd-0c02-46da-a809-b583a176528a"
/>
2025-09-12 15:31:15 -04:00
jif-oai
c6fd056aa6 feat: reasoning effort as optional (#3527)
Allow the reasoning effort to be optional
2025-09-12 12:06:33 -07:00
Michael Bolin
abdcb40f4c feat: change the behavior of SetDefaultModel RPC so None clears the value. (#3529)
It turns out that we want slightly different behavior for the
`SetDefaultModel` RPC because some models do not work with reasoning
(like GPT-4.1), so we should be able to explicitly clear this value.

Verified in `codex-rs/mcp-server/tests/suite/set_default_model.rs`.
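
A sketch of the clearing semantics, with the parameter and config field names assumed from the description rather than taken from the RPC definitions:

```rust
enum ReasoningEffort {
    Minimal,
    Low,
    Medium,
    High,
}

struct SetDefaultModelParams {
    model: Option<String>,
    reasoning_effort: Option<ReasoningEffort>,
}

struct PersistedDefaults {
    model: Option<String>,
    model_reasoning_effort: Option<ReasoningEffort>,
}

/// With the changed behavior, `None` explicitly clears the persisted value
/// (useful for models like GPT-4.1 that do not work with reasoning) rather
/// than meaning "leave the current value unchanged".
fn apply(params: SetDefaultModelParams, defaults: &mut PersistedDefaults) {
    defaults.model = params.model;
    defaults.model_reasoning_effort = params.reasoning_effort;
}
```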
2025-09-12 11:35:51 -07:00
Dylan
4ae6b9787a standardize shell description (#3514)
## Summary
Standardizes the shell description across sandbox_types, since we cover
this in the prompt, and have moved necessary details (like
network_access and writable workspace roots) to EnvironmentContext
messages.

## Test Plan
- [x] updated unit tests
2025-09-12 14:24:09 -04:00
jif-oai
bba567cee9 bug: fix model save (#3525)
Fixes these two behaviors:
1. The model does not get saved if we don't press CTRL + S
2. The reasoning effort gets saved
2025-09-12 10:38:12 -07:00
Ahmed Ibrahim
ba6af23cb6 Add spacing to timer duration formats (#3471)
<img width="426" height="28" alt="image"
src="https://github.com/user-attachments/assets/b281aca3-3c8d-4b88-a017-5d2f8ea9f3d5"
/>
2025-09-12 12:05:57 -04:00
Charlie Weems
f805d17930 MCP Documentation Changes Requests in Code Review (#3507)
Add in review changes from @bolinfest that were dropped due to
auto-merge (#3345).
2025-09-12 09:04:49 -07:00
Michael Bolin
90965fbc84 chore: add just test, which runs cargo nextest (#3508)
Since I can never seem to remember to add `--no-fail-fast` when running
`cargo nextest run`, let's just create an alias for it.
2025-09-12 08:44:44 -07:00
Michael Bolin
c172e8e997 feat: added SetDefaultModel to JSON-RPC server (#3512)
This adds `SetDefaultModel`, which takes `model` and `reasoning_effort`
as optional fields. If set, the field will overwrite what is in the
user's `config.toml`.

This reuses logic that was added to support the `/model` command in the
TUI: https://github.com/openai/codex/pull/2799.
2025-09-11 23:44:17 -07:00
Michael Bolin
9bbeb75361 feat: include reasoning_effort in NewConversationResponse (#3506)
`ClientRequest::NewConversation` picks up the reasoning level from the user's defaults in `config.toml`, so it should be reported in `NewConversationResponse`.
2025-09-11 21:04:40 -07:00
Fouad Matin
6ccd32c601 add(readme): IDE (#3494)
update copy in readme to add link to IDE
2025-09-11 17:46:20 -07:00
pakrym-oai
3b5a5412bb Log cf-ray header in client traces (#3488)
## Summary
- log the `cf-ray` header when tracing HTTP responses in the Codex
client
- keep existing response status logging unchanged

## Testing
- just fmt
- just fix -p codex-core
- cargo test -p codex-core *(fails:
suite::client::azure_overrides_assign_properties_used_for_responses_url,
suite::client::env_var_overrides_loaded_auth)*

------
https://chatgpt.com/codex/tasks/task_i_68c31640dacc83209be131baf91611cd
2025-09-11 17:42:44 -07:00
jif-oai
44bb53df1e bug: default to image (#3501)
Default the MIME type to image
2025-09-11 23:10:24 +00:00
jif-oai
8453915e02 feat: TUI onboarding (#3398)
Example of what onboarding could look like
2025-09-11 15:04:29 -07:00
Ahmed Ibrahim
44587c2443 Use PlanType enum when formatting usage-limit CTA (#3495)
- Started using the PlanType enum
- Added a CTA for team/business
- Refactored a bit to unify the logic
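
Based on the tests later in this diff (`PlanType::Known(KnownPlan::Pro)`, `PlanType::Unknown("vip")`), the enum shape is roughly the following; the exact variant list is an assumption:

```rust
use serde::Deserialize;
use serde::Serialize;

/// Plans we recognize deserialize into `Known`; anything else is preserved
/// verbatim in `Unknown` so new server-side plan names don't break parsing.
#[derive(Debug, Deserialize, Serialize)]
#[serde(untagged)]
enum PlanType {
    Known(KnownPlan),
    Unknown(String),
}

#[derive(Debug, Deserialize, Serialize)]
#[serde(rename_all = "lowercase")]
enum KnownPlan {
    Free,
    Plus,
    Pro,
    Team,
    Business,
    Enterprise,
}
```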
2025-09-11 22:01:25 +00:00
Charlie Weems
8f7b22b652 Add more detailed documentation on MCP server usage (#3345)
Adds further information on how to get started with `codex mcp`:
- Tool details and parameter references
- Quickstart with example using MCP inspector.
2025-09-11 14:38:24 -07:00
Dylan
027944c64e fix: improve handle_sandbox_error timeouts (#3435)
## Summary
Handle timeouts the same way, regardless of approval mode. There's more
to do here, but this is simple and should be zero-regret.

## Testing
- [x] existing tests pass
- [x] test locally and verify rollout
2025-09-11 12:09:20 -07:00
Michael Bolin
bec51f6c05 chore: enable clippy::redundant_clone (#3489)
Created this PR by:

- adding `redundant_clone` to `[workspace.lints.clippy]` in
`codex-rs/Cargo.toml`
- running `cargo clippy --tests --fix`
- running `just fmt`

Though I had to clean up one instance of the following that resulted:

```rust
let codex = codex;
```
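
For illustration, the kind of pattern this lint denies: cloning a value that is never used again, where a move would do:

```rust
fn demo(config: String) -> String {
    // `config` is not used after this point, so the clone is unnecessary;
    // with `redundant_clone = "deny"` clippy rejects it. Returning `config`
    // directly (a move) is the fix.
    let copy = config.clone();
    copy
}
```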
2025-09-11 11:59:37 -07:00
pakrym-oai
66967500bb Assign the entire gpt-5 model family the same characteristics (#3490)
So the context size indicator is displayed.
2025-09-11 18:56:49 +00:00
Ahmed Ibrahim
167b4f0e25 Clear composer on fork (#3445)
Fixes this

<img width="344" height="51" alt="image"
src="https://github.com/user-attachments/assets/f227d338-b044-4f8d-bf07-87499b4230d8"
/>
2025-09-11 11:45:17 -07:00
Michael Bolin
167154178b fix: use -F instead of -f for force=true in gh call (#3486)
Apparently `-F` is the correct thing to use: `-f` always sends the value
as a raw string, whereas `-F` converts values like `true` into their typed
(boolean) equivalents. From the code sample on
https://docs.github.com/en/rest/git/refs?apiVersion=2022-11-28#update-a-reference

```shell
gh api \
  --method PATCH \
  -H "Accept: application/vnd.github+json" \
  -H "X-GitHub-Api-Version: 2022-11-28" \
  /repos/OWNER/REPO/git/refs/REF \
   -f 'sha=aa218f56b14c9653891f9e74264a383fa43fefbd' -F "force=true"
```

Also, I ran the following locally and verified it worked:

```shell
export GITHUB_REPOSITORY=openai/codex
export GITHUB_SHA=305252b2fb2d57bb40a9e4bad269db9a761f7099
gh api \
  repos/${GITHUB_REPOSITORY}/git/refs/heads/latest-alpha-cli \
  -X PATCH \
  -f sha="${GITHUB_SHA}" \
  -F force=true
```

`$GITHUB_REPOSITORY` and `$GITHUB_SHA` should already be available as
environment variables for the `run` step without having to be redeclared
in the `env` section.
2025-09-11 11:32:47 -07:00
Ahmed Ibrahim
674e3d3c90 Add Compact and Turn Context to the rollout items (#3444)
Adding compact and turn context to the rollout items

based on #3440
2025-09-11 18:08:51 +00:00
jif-oai
114ce9ff4d NIT unified exec (#3479)
Fix the default value of the experimental flag of unified_exec
2025-09-11 16:19:12 +00:00
Eric Traut
e13b35ecb0 Simplify auth flow and reconcile differences between ChatGPT and API Key auth (#3189)
This PR does the following:
* Adds the ability to paste or type an API key.
* Removes the `preferred_auth_method` config option. The last login
method is always persisted in auth.json, so this isn't needed.
* If OPENAI_API_KEY env variable is defined, the value is used to
prepopulate the new UI. The env variable is otherwise ignored by the
CLI.
* Adds a new MCP server entry point "login_api_key" so we can implement
this same API key behavior for the VS Code extension.
<img width="473" height="140" alt="Screenshot 2025-09-04 at 3 51 04 PM"
src="https://github.com/user-attachments/assets/c11bbd5b-8a4d-4d71-90fd-34130460f9d9"
/>
<img width="726" height="254" alt="Screenshot 2025-09-04 at 3 51 32 PM"
src="https://github.com/user-attachments/assets/6cc76b34-309a-4387-acbc-15ee5c756db9"
/>
2025-09-11 09:16:34 -07:00
Jeremy Rose
377af75730 apply-patch: sort replacements and add regression tests (#3425)
- Ensure replacements are applied in index order for determinism.
- Add tests for an addition chunk followed by a removal, and for the
worktree-aware helper.

This fixes a panic I observed.

Co-authored-by: Codex <199175422+chatgpt-codex-connector[bot]@users.noreply.github.com>
2025-09-11 09:07:03 -07:00
Michael Bolin
86e0f31a7e chore: rust-release.yml should update the latest-alpha-cli branch (#3458)
This updates `rust-release.yml` so that the last step of creating a
release entails updating the `latest-alpha-cli` branch to point to the
tag used to create the latest release. This will facilitate building
automation to identify the most recent alpha release of Codex CLI
(though note that, as implemented today, this branch could also point to
an official release).

This introduces a new job, `update-branch`, which depends on the
`release` job. I made it separate from the `release` job because
`update-branch` needs the `contents: write` permission, so this limits
the amount of work we do with that permission.

Note I also created a branch protection rule for `latest-alpha-cli` such
that:

- repository admins are the only members of the bypass list
- only those with bypass permissions can create, update, or delete the
branch
- the branch requires a linear history
- force pushes _are_ allowed

This is the first step in fixing
https://github.com/openai/codex/issues/3098.
2025-09-11 08:06:28 -07:00
Michael Bolin
8f837f1093 fix: add check to ensure output of generate_mcp_types.py matches codex-rs/mcp-types/src/lib.rs (#3450)
As a follow-up to https://github.com/openai/codex/pull/3439, this adds a
CI job to ensure the codegen script has to be updated in order to change
`codex-rs/mcp-types/src/lib.rs`.
2025-09-10 23:31:28 -07:00
Ahmed Ibrahim
162e1235a8 Change forking to read the rollout from file (#3440)
This PR changes the get-history op into a get-path op, so forking will use
a path. This will help us have one unified codepath for resuming/forking
conversations, and will also help keep rollout history in order. It also
fixes a bug where you won't see the UI when resuming after forking.
2025-09-10 17:42:54 -07:00
jif-oai
c09ed74a16 Unified execution (#3288)
## Unified PTY-Based Exec Tool

Note: this requires the following flag in the config:
`use_experimental_unified_exec_tool=true`

- Adds a PTY-backed interactive exec feature (“unified_exec”) with session
reuse via session_id, bounded output (128 KiB), and timeout clamping
(≤ 60 s).
- Protocol: introduces ResponseItem::UnifiedExec { session_id, arguments,
timeout_ms }.
- Tools: exposes unified_exec as a function tool (Responses API); excluded
from the Chat Completions payload while still supported in tool lists.
- Path handling: resolves commands via PATH (or explicit paths), with
UTF‑8/newline‑aware truncation (truncate_middle).
- Tests: cover command parsing, path resolution, session
persistence/cleanup, multi‑session isolation, timeouts, and truncation
behavior.
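
For illustration, a `unified_exec` tool call carrying the fields named above might look like this; the exact wire encoding of `arguments` is an assumption:

```json
{
  "session_id": "0",
  "arguments": ["python3", "-i"],
  "timeout_ms": 5000
}
```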
2025-09-10 17:38:11 -07:00
Michael Bolin
65f3528cad feat: add UserInfo request to JSON-RPC server (#3428)
This adds a simple endpoint that provides the email address encoded in
`$CODEX_HOME/auth.json`.

As noted, for now, we do not hit the server to verify this is the user's
true email address.
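
Shape-wise, a request and response pair might look like the following (JSON Lines style, request then response); the method and result field names here are assumptions, not the server's actual contract:

```json
{ "jsonrpc": "2.0", "id": 7, "method": "userInfo" }
{ "jsonrpc": "2.0", "id": 7, "result": { "email": "dev@example.com" } }
```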
2025-09-10 17:03:35 -07:00
Michael Bolin
44262d8fd8 fix: ensure output of codex-rs/mcp-types/generate_mcp_types.py matches codex-rs/mcp-types/src/lib.rs (#3439)
https://github.com/openai/codex/pull/3395 updated `mcp-types/src/lib.rs`
by hand, but that file is generated code that is produced by
`mcp-types/generate_mcp_types.py`. Unfortunately, we do not have
anything in CI to verify this right now, but I will address that in a
subsequent PR.

#3395 ended up introducing a change that added a required field when
deserializing `InitializeResult`, breaking Codex when used as an MCP
client, so the quick fix in #3436 was to make the new field an `Option`
with `skip_serializing_if = "Option::is_none"`, but that did not address
the problem that `mcp-types/generate_mcp_types.py` and
`mcp-types/src/lib.rs` are out of sync.

This PR gets things back to where they are in sync. It removes the
custom `mcp_types::McpClientInfo` type that was added to
`mcp-types/src/lib.rs` and forces us to use the generated
`mcp_types::Implementation` type. Though this PR also updates
`generate_mcp_types.py` to generate the additional `user_agent:
Option<String>` field on `Implementation` so that we can continue to
specify it when Codex operates as an MCP server.

However, this also requires us to specify `user_agent: None` when Codex
operates as an MCP client.

We may want to introduce our own `InitializeResult` type that is
specific to when we run as a server to avoid this in the future, but my
immediate goal is just to get things back in sync.
2025-09-10 16:14:41 -07:00
Jeremy Rose
95a9938d3a fix trampling projects table when accepting trusted dirs (#3434)
Co-authored-by: Codex <199175422+chatgpt-codex-connector[bot]@users.noreply.github.com>
2025-09-10 23:01:31 +00:00
Jeremy Rose
f69f07b028 put workspace roots in the environment context (#3375)
to keep the tool description constant when the writable roots change.
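
A sketch of the idea: environment-dependent details move into a context message so the tool description itself stays constant (and cacheable). The field names are assumptions based on this and the sandbox_types commit above:

```rust
use std::path::PathBuf;

/// Hypothetical shape of the environment context; the real struct lives in
/// codex-core and is serialized into a message the model sees.
struct EnvironmentContext {
    cwd: PathBuf,
    network_access: bool,
    writable_roots: Vec<PathBuf>,
}
```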
2025-09-10 15:10:52 -07:00
127 changed files with 7047 additions and 1645 deletions

View File

@@ -62,6 +62,8 @@ jobs:
components: rustfmt
- name: cargo fmt
run: cargo fmt -- --config imports_granularity=Item --check
- name: Verify codegen for mcp-types
run: ./mcp-types/check_lib_rs.py
cargo_shear:
name: cargo shear

View File

@@ -219,3 +219,22 @@ jobs:
with:
tag: ${{ github.ref_name }}
config: .github/dotslash-config.json
update-branch:
name: Update latest-alpha-cli branch
permissions:
contents: write
needs: release
runs-on: ubuntu-latest
steps:
- name: Update latest-alpha-cli branch
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
set -euo pipefail
gh api \
repos/${GITHUB_REPOSITORY}/git/refs/heads/latest-alpha-cli \
-X PATCH \
-f sha="${GITHUB_SHA}" \
-F force=true

View File

@@ -2,7 +2,10 @@
<p align="center"><code>npm i -g @openai/codex</code><br />or <code>brew install codex</code></p>
<p align="center"><strong>Codex CLI</strong> is a coding agent from OpenAI that runs locally on your computer.</br>If you are looking for the <em>cloud-based agent</em> from OpenAI, <strong>Codex Web</strong>, see <a href="https://chatgpt.com/codex">chatgpt.com/codex</a>.</p>
<p align="center"><strong>Codex CLI</strong> is a coding agent from OpenAI that runs locally on your computer.
</br>
</br>If you want Codex in your code editor (VS Code, Cursor, Windsurf), <a href="https://developers.openai.com/codex/ide">install in your IDE</a>
</br>If you are looking for the <em>cloud-based agent</em> from OpenAI, <strong>Codex Web</strong>, go to <a href="https://chatgpt.com/codex">chatgpt.com/codex</a></p>
<p align="center">
<img src="./.github/codex-cli-splash.png" alt="Codex CLI splash" width="80%" />

codex-rs/Cargo.lock generated
View File

@@ -212,6 +212,50 @@ dependencies = [
"term",
]
[[package]]
name = "askama"
version = "0.12.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b79091df18a97caea757e28cd2d5fda49c6cd4bd01ddffd7ff01ace0c0ad2c28"
dependencies = [
"askama_derive",
"askama_escape",
"humansize",
"num-traits",
"percent-encoding",
]
[[package]]
name = "askama_derive"
version = "0.12.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "19fe8d6cb13c4714962c072ea496f3392015f0989b1a2847bb4b2d9effd71d83"
dependencies = [
"askama_parser",
"basic-toml",
"mime",
"mime_guess",
"proc-macro2",
"quote",
"serde",
"syn 2.0.104",
]
[[package]]
name = "askama_escape"
version = "0.10.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "619743e34b5ba4e9703bba34deac3427c72507c7159f5fd030aea8cac0cfe341"
[[package]]
name = "askama_parser"
version = "0.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "acb1161c6b64d1c3d83108213c2a2533a342ac225aabd0bda218278c2ddb00c0"
dependencies = [
"nom",
]
[[package]]
name = "assert-json-diff"
version = "2.0.2"
@@ -305,6 +349,15 @@ version = "0.22.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "72b3254f16251a8381aa12e40e3c4d2f0199f8c6508fbecb9d91f575e0fbb8c6"
[[package]]
name = "basic-toml"
version = "0.1.10"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ba62675e8242a4c4e806d12f11d136e626e6c8361d6b829310732241652a178a"
dependencies = [
"serde",
]
[[package]]
name = "beef"
version = "0.5.2"
@@ -561,7 +614,6 @@ dependencies = [
"clap",
"codex-common",
"codex-core",
"codex-protocol",
"serde",
"serde_json",
"tempfile",
@@ -607,6 +659,7 @@ name = "codex-core"
version = "0.0.0"
dependencies = [
"anyhow",
"askama",
"assert_cmd",
"async-channel",
"base64",
@@ -769,6 +822,7 @@ version = "0.0.0"
dependencies = [
"anyhow",
"assert_cmd",
"base64",
"codex-arg0",
"codex-common",
"codex-core",
@@ -1981,6 +2035,15 @@ version = "1.0.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "df3b46402a9d5adb4c86a0cf463f42e19994e3ee891101b1841f30a545cb49a9"
[[package]]
name = "humansize"
version = "2.1.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6cb51c9a029ddc91b07a787f1d86b53ccfa49b0e86688c946ebe8d3555685dd7"
dependencies = [
"libm",
]
[[package]]
name = "hyper"
version = "1.7.0"
@@ -2536,6 +2599,12 @@ version = "0.2.175"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6a82ae493e598baaea5209805c49bbf2ea7de956d50d7da0da1164f9c6d28543"
[[package]]
name = "libm"
version = "0.2.15"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f9fbbcab51052fe104eb5e5d351cf728d30a5be1fe14d9be8a3b097481fb97de"
[[package]]
name = "libredox"
version = "0.1.6"

View File

@@ -22,7 +22,7 @@ members = [
resolver = "2"
[workspace.package]
version = "0.34.0"
version = "0.0.0"
# Track the edition for all workspace crates in one place. Individual
# crates can still override this value, but keeping it here means new
# crates created with `cargo new -w ...` automatically inherit the 2024
@@ -34,6 +34,7 @@ rust = {}
[workspace.lints.clippy]
expect_used = "deny"
redundant_clone = "deny"
uninlined_format_args = "deny"
unwrap_used = "deny"

View File

@@ -40,6 +40,11 @@ pub enum ApplyPatchError {
/// Error that occurs while computing replacements when applying patch chunks
#[error("{0}")]
ComputeReplacements(String),
/// A raw patch body was provided without an explicit `apply_patch` invocation.
#[error(
"patch detected without explicit call to apply_patch. Rerun as [\"apply_patch\", \"<patch>\"]"
)]
ImplicitInvocation,
}
impl From<std::io::Error> for ApplyPatchError {
@@ -93,10 +98,12 @@ pub struct ApplyPatchArgs {
pub fn maybe_parse_apply_patch(argv: &[String]) -> MaybeApplyPatch {
match argv {
// Direct invocation: apply_patch <patch>
[cmd, body] if APPLY_PATCH_COMMANDS.contains(&cmd.as_str()) => match parse_patch(body) {
Ok(source) => MaybeApplyPatch::Body(source),
Err(e) => MaybeApplyPatch::PatchParseError(e),
},
// Bash heredoc form: (optional `cd <path> &&`) apply_patch <<'EOF' ...
[bash, flag, script] if bash == "bash" && flag == "-lc" => {
match extract_apply_patch_from_bash(script) {
Ok((body, workdir)) => match parse_patch(&body) {
@@ -207,6 +214,26 @@ impl ApplyPatchAction {
/// cwd must be an absolute path so that we can resolve relative paths in the
/// patch.
pub fn maybe_parse_apply_patch_verified(argv: &[String], cwd: &Path) -> MaybeApplyPatchVerified {
// Detect a raw patch body passed directly as the command or as the body of a bash -lc
// script. In these cases, report an explicit error rather than applying the patch.
match argv {
[body] => {
if parse_patch(body).is_ok() {
return MaybeApplyPatchVerified::CorrectnessError(
ApplyPatchError::ImplicitInvocation,
);
}
}
[bash, flag, script] if bash == "bash" && flag == "-lc" => {
if parse_patch(script).is_ok() {
return MaybeApplyPatchVerified::CorrectnessError(
ApplyPatchError::ImplicitInvocation,
);
}
}
_ => {}
}
match maybe_parse_apply_patch(argv) {
MaybeApplyPatch::Body(ApplyPatchArgs {
patch,
@@ -733,6 +760,8 @@ fn compute_replacements(
}
}
replacements.sort_by(|(lhs_idx, _, _), (rhs_idx, _, _)| lhs_idx.cmp(rhs_idx));
Ok(replacements)
}
@@ -873,6 +902,28 @@ mod tests {
));
}
#[test]
fn test_implicit_patch_single_arg_is_error() {
let patch = "*** Begin Patch\n*** Add File: foo\n+hi\n*** End Patch".to_string();
let args = vec![patch];
let dir = tempdir().unwrap();
assert!(matches!(
maybe_parse_apply_patch_verified(&args, dir.path()),
MaybeApplyPatchVerified::CorrectnessError(ApplyPatchError::ImplicitInvocation)
));
}
#[test]
fn test_implicit_patch_bash_script_is_error() {
let script = "*** Begin Patch\n*** Add File: foo\n+hi\n*** End Patch";
let args = args_bash(script);
let dir = tempdir().unwrap();
assert!(matches!(
maybe_parse_apply_patch_verified(&args, dir.path()),
MaybeApplyPatchVerified::CorrectnessError(ApplyPatchError::ImplicitInvocation)
));
}
#[test]
fn test_literal() {
let args = strs_to_strings(&[
@@ -1216,6 +1267,33 @@ PATCH"#,
assert_eq!(contents, "a\nB\nc\nd\nE\nf\ng\n");
}
#[test]
fn test_pure_addition_chunk_followed_by_removal() {
let dir = tempdir().unwrap();
let path = dir.path().join("panic.txt");
fs::write(&path, "line1\nline2\nline3\n").unwrap();
let patch = wrap_patch(&format!(
r#"*** Update File: {}
@@
+after-context
+second-line
@@
line1
-line2
-line3
+line2-replacement"#,
path.display()
));
let mut stdout = Vec::new();
let mut stderr = Vec::new();
apply_patch(&patch, &mut stdout, &mut stderr).unwrap();
let contents = fs::read_to_string(path).unwrap();
assert_eq!(
contents,
"line1\nline2-replacement\nafter-context\nsecond-line\n"
);
}
/// Ensure that patches authored with ASCII characters can update lines that
/// contain typographic Unicode punctuation (e.g. EN DASH, NON-BREAKING
/// HYPHEN). Historically `git apply` succeeds in such scenarios but our

View File

@@ -617,7 +617,7 @@ fn test_parse_patch_lenient() {
assert_eq!(
parse_patch_text(&patch_text_in_double_quoted_heredoc, ParseMode::Lenient),
Ok(ApplyPatchArgs {
hunks: expected_patch.clone(),
hunks: expected_patch,
patch: patch_text.to_string(),
workdir: None,
})
@@ -637,7 +637,7 @@ fn test_parse_patch_lenient() {
"<<EOF\n*** Begin Patch\n*** Update File: file2.py\nEOF\n".to_string();
assert_eq!(
parse_patch_text(&patch_text_with_missing_closing_heredoc, ParseMode::Strict),
Err(expected_error.clone())
Err(expected_error)
);
assert_eq!(
parse_patch_text(&patch_text_with_missing_closing_heredoc, ParseMode::Lenient),

View File

@@ -11,7 +11,6 @@ anyhow = "1"
clap = { version = "4", features = ["derive"] }
codex-common = { path = "../common", features = ["cli"] }
codex-core = { path = "../core" }
codex-protocol = { path = "../protocol" }
serde = { version = "1", features = ["derive"] }
serde_json = "1"
tokio = { version = "1", features = ["full"] }

View File

@@ -1,5 +1,4 @@
use codex_core::CodexAuth;
use codex_protocol::mcp_protocol::AuthMode;
use std::path::Path;
use std::sync::LazyLock;
use std::sync::RwLock;
@@ -20,7 +19,7 @@ pub fn set_chatgpt_token_data(value: TokenData) {
/// Initialize the ChatGPT token from auth.json file
pub async fn init_chatgpt_token_from_auth(codex_home: &Path) -> std::io::Result<()> {
let auth = CodexAuth::from_codex_home(codex_home, AuthMode::ChatGPT)?;
let auth = CodexAuth::from_codex_home(codex_home)?;
if let Some(auth) = auth {
let token_data = auth.get_token_data().await?;
set_chatgpt_token_data(token_data);

View File

@@ -1,7 +1,6 @@
use codex_common::CliConfigOverrides;
use codex_core::CodexAuth;
use codex_core::auth::CLIENT_ID;
use codex_core::auth::OPENAI_API_KEY_ENV_VAR;
use codex_core::auth::login_with_api_key;
use codex_core::auth::logout;
use codex_core::config::Config;
@@ -9,7 +8,6 @@ use codex_core::config::ConfigOverrides;
use codex_login::ServerOptions;
use codex_login::run_login_server;
use codex_protocol::mcp_protocol::AuthMode;
use std::env;
use std::path::PathBuf;
pub async fn login_with_chatgpt(codex_home: PathBuf) -> std::io::Result<()> {
@@ -60,19 +58,11 @@ pub async fn run_login_with_api_key(
pub async fn run_login_status(cli_config_overrides: CliConfigOverrides) -> ! {
let config = load_config_or_exit(cli_config_overrides);
match CodexAuth::from_codex_home(&config.codex_home, config.preferred_auth_method) {
match CodexAuth::from_codex_home(&config.codex_home) {
Ok(Some(auth)) => match auth.mode {
AuthMode::ApiKey => match auth.get_token().await {
Ok(api_key) => {
eprintln!("Logged in using an API key - {}", safe_format_key(&api_key));
if let Ok(env_api_key) = env::var(OPENAI_API_KEY_ENV_VAR)
&& env_api_key == api_key
{
eprintln!(
" API loaded from OPENAI_API_KEY environment variable or .env file"
);
}
std::process::exit(0);
}
Err(e) => {

View File

@@ -37,10 +37,8 @@ pub async fn run_main(opts: ProtoCli) -> anyhow::Result<()> {
let config = Config::load_with_cli_overrides(overrides_vec, ConfigOverrides::default())?;
// Use conversation_manager API to start a conversation
let conversation_manager = ConversationManager::new(AuthManager::shared(
config.codex_home.clone(),
config.preferred_auth_method,
));
let conversation_manager =
ConversationManager::new(AuthManager::shared(config.codex_home.clone()));
let NewConversation {
conversation_id: _,
conversation,

View File

@@ -17,7 +17,10 @@ pub fn create_config_summary_entries(config: &Config) -> Vec<(&'static str, Stri
{
entries.push((
"reasoning effort",
config.model_reasoning_effort.to_string(),
config
.model_reasoning_effort
.map(|effort| effort.to_string())
.unwrap_or_else(|| "none".to_string()),
));
entries.push((
"reasoning summaries",

View File

@@ -2,7 +2,7 @@ use std::time::Duration;
use std::time::Instant;
/// Returns a string representing the elapsed time since `start_time` like
/// "1m15s" or "1.50s".
/// "1m 15s" or "1.50s".
pub fn format_elapsed(start_time: Instant) -> String {
format_duration(start_time.elapsed())
}
@@ -12,7 +12,7 @@ pub fn format_elapsed(start_time: Instant) -> String {
/// Formatting rules:
/// * < 1 s -> "{milli}ms"
/// * < 60 s -> "{sec:.2}s" (two decimal places)
/// * >= 60 s -> "{min}m{sec:02}s"
/// * >= 60 s -> "{min}m {sec:02}s"
pub fn format_duration(duration: Duration) -> String {
let millis = duration.as_millis() as i64;
format_elapsed_millis(millis)
@@ -26,7 +26,7 @@ fn format_elapsed_millis(millis: i64) -> String {
} else {
let minutes = millis / 60_000;
let seconds = (millis % 60_000) / 1000;
format!("{minutes}m{seconds:02}s")
format!("{minutes}m {seconds:02}s")
}
}
@@ -61,12 +61,18 @@ mod tests {
fn test_format_duration_minutes() {
// Durations ≥ 1 minute should be printed mmss.
let dur = Duration::from_millis(75_000); // 1m15s
assert_eq!(format_duration(dur), "1m15s");
assert_eq!(format_duration(dur), "1m 15s");
let dur_exact = Duration::from_millis(60_000); // 1m0s
assert_eq!(format_duration(dur_exact), "1m00s");
assert_eq!(format_duration(dur_exact), "1m 00s");
let dur_long = Duration::from_millis(3_601_000);
assert_eq!(format_duration(dur_long), "60m01s");
assert_eq!(format_duration(dur_long), "60m 01s");
}
#[test]
fn test_format_duration_one_hour_has_space() {
let dur_hour = Duration::from_millis(3_600_000);
assert_eq!(format_duration(dur_hour), "60m 00s");
}
}

View File

@@ -12,7 +12,7 @@ pub struct ModelPreset {
/// Model slug (e.g., "gpt-5").
pub model: &'static str,
/// Reasoning effort to apply for this preset.
pub effort: ReasoningEffort,
pub effort: Option<ReasoningEffort>,
}
/// Built-in list of model presets that pair a model with a reasoning effort.
@@ -26,28 +26,35 @@ pub fn builtin_model_presets() -> &'static [ModelPreset] {
label: "gpt-5 minimal",
description: "— fastest responses with limited reasoning; ideal for coding, instructions, or lightweight tasks",
model: "gpt-5",
effort: ReasoningEffort::Minimal,
effort: Some(ReasoningEffort::Minimal),
},
ModelPreset {
id: "gpt-5-low",
label: "gpt-5 low",
description: "— balances speed with some reasoning; useful for straightforward queries and short explanations",
model: "gpt-5",
effort: ReasoningEffort::Low,
effort: Some(ReasoningEffort::Low),
},
ModelPreset {
id: "gpt-5-medium",
label: "gpt-5 medium",
description: "— default setting; provides a solid balance of reasoning depth and latency for general-purpose tasks",
model: "gpt-5",
effort: ReasoningEffort::Medium,
effort: Some(ReasoningEffort::Medium),
},
ModelPreset {
id: "gpt-5-high",
label: "gpt-5 high",
description: "— maximizes reasoning depth for complex or ambiguous problems",
model: "gpt-5",
effort: ReasoningEffort::High,
effort: Some(ReasoningEffort::High),
},
ModelPreset {
id: "gpt-5-high-new",
label: "gpt-5 high new",
description: "— our latest release tuned to rely on the model's built-in reasoning defaults",
model: "gpt-5-high-new",
effort: None,
},
];
PRESETS

View File

@@ -13,6 +13,7 @@ workspace = true
[dependencies]
anyhow = "1"
askama = "0.12"
async-channel = "2.3.1"
base64 = "0.22"
bytes = "1.10.1"
@@ -54,6 +55,7 @@ tracing = { version = "0.1.41", features = ["log"] }
tree-sitter = "0.25.9"
tree-sitter-bash = "0.25.0"
uuid = { version = "1", features = ["serde", "v4"] }
which = "6"
wildmatch = "2.4.0"
@@ -69,9 +71,6 @@ openssl-sys = { version = "*", features = ["vendored"] }
[target.aarch64-unknown-linux-musl.dependencies]
openssl-sys = { version = "*", features = ["vendored"] }
[target.'cfg(target_os = "windows")'.dependencies]
which = "6"
[dev-dependencies]
assert_cmd = "2"
core_test_support = { path = "tests/common" }

View File

@@ -0,0 +1,87 @@
# Review guidelines:
You are acting as a reviewer for a proposed code change made by another engineer.
Below are some default guidelines for determining whether the original author would appreciate the issue being flagged.
These are not the final word in determining whether an issue is a bug. In many cases, you will encounter other, more specific guidelines. These may be present elsewhere in a developer message, a user message, a file, or even elsewhere in this system message.
Those guidelines should be considered to override these general instructions.
Here are the general guidelines for determining whether something is a bug and should be flagged.
1. It meaningfully impacts the accuracy, performance, security, or maintainability of the code.
2. The bug is discrete and actionable (i.e. not a general issue with the codebase or a combination of multiple issues).
3. Fixing the bug does not demand a level of rigor that is not present in the rest of the codebase (e.g. one doesn't need very detailed comments and input validation in a repository of one-off scripts in personal projects)
4. The bug was introduced in the commit (pre-existing bugs should not be flagged).
5. The author of the original PR would likely fix the issue if they were made aware of it.
6. The bug does not rely on unstated assumptions about the codebase or author's intent.
7. It is not enough to speculate that a change may disrupt another part of the codebase; to be considered a bug, one must identify the other parts of the code that are provably affected.
8. The bug is clearly not just an intentional change by the original author.
When flagging a bug, you will also provide an accompanying comment. Once again, these guidelines are not the final word on how to construct a comment -- defer to any subsequent guidelines that you encounter.
1. The comment should be clear about why the issue is a bug.
2. The comment should appropriately communicate the severity of the issue. It should not claim that an issue is more severe than it actually is.
3. The comment should be brief. The body should be at most 1 paragraph. It should not introduce line breaks within the natural language flow unless it is necessary for the code fragment.
4. The comment should not include any chunks of code longer than 3 lines. Any code chunks should be wrapped in markdown inline code tags or a code block.
5. The comment should clearly and explicitly communicate the scenarios, environments, or inputs that are necessary for the bug to arise. The comment should immediately indicate that the issue's severity depends on these factors.
6. The comment's tone should be matter-of-fact and not accusatory or overly positive. It should read as a helpful AI assistant suggestion without sounding too much like a human reviewer.
7. The comment should be written such that the original author can immediately grasp the idea without close reading.
8. The comment should avoid excessive flattery and comments that are not helpful to the original author. The comment should avoid phrasing like "Great job ...", "Thanks for ...".
Below are some more detailed guidelines that you should apply to this specific review.
HOW MANY FINDINGS TO RETURN:
Output all findings that the original author would fix if they knew about them. If there is no finding that a person would definitely love to see and fix, prefer outputting no findings. Do not stop at the first qualifying finding. Continue until you've listed every qualifying finding.
GUIDELINES:
- Ignore trivial style unless it obscures meaning or violates documented standards.
- Use one comment per distinct issue (or a multi-line range if necessary).
- Use ```suggestion blocks ONLY for concrete replacement code (minimal lines; no commentary inside the block).
- In every ```suggestion block, preserve the exact leading whitespace of the replaced lines (spaces vs tabs, number of spaces).
- Do NOT introduce or remove outer indentation levels unless that is the actual fix.
The comments will be presented in the code review as inline comments. You should avoid providing unnecessary location details in the comment body. Always keep the line range as short as possible for interpreting the issue. Avoid ranges longer than 5-10 lines; instead, choose the most suitable subrange that pinpoints the problem.
At the beginning of the finding title, tag the bug with priority level. For example "[P1] Un-padding slices along wrong tensor dimensions". [P0] Drop everything to fix. Blocking release, operations, or major usage. Only use for universal issues that do not depend on any assumptions about the inputs. · [P1] Urgent. Should be addressed in the next cycle · [P2] Normal. To be fixed eventually · [P3] Low. Nice to have.
Additionally, include a numeric priority field in the JSON output for each finding: set "priority" to 0 for P0, 1 for P1, 2 for P2, or 3 for P3. If a priority cannot be determined, omit the field or use null.
At the end of your findings, output an "overall correctness" verdict of whether or not the patch should be considered "correct".
Correct implies that existing code and tests will not break, and the patch is free of bugs and other blocking issues.
Ignore non-blocking issues such as style, formatting, typos, documentation, and other nits.
FORMATTING GUIDELINES:
The finding description should be one paragraph.
OUTPUT FORMAT:
## Output schema — MUST MATCH *exactly*
```json
{
"findings": [
{
"title": "<≤ 80 chars, imperative>",
"body": "<valid Markdown explaining *why* this is a problem; cite files/lines/functions>",
"confidence_score": <float 0.0-1.0>,
"priority": <int 0-3, optional>,
"code_location": {
"absolute_file_path": "<file path>",
"line_range": {"start": <int>, "end": <int>}
}
}
],
"overall_correctness": "patch is correct" | "patch is incorrect",
"overall_explanation": "<1-3 sentence explanation justifying the overall_correctness verdict>",
"overall_confidence_score": <float 0.0-1.0>
}
```
* **Do not** wrap the JSON in markdown fences or extra prose.
* The code_location field is required and must include absolute_file_path and line_range.
* Line ranges must be as short as possible for interpreting the issue (avoid ranges over 5-10 lines; pick the most suitable subrange).
* The code_location should overlap with the diff.
* Do not generate a PR fix.

View File

@@ -17,6 +17,7 @@ use std::time::Duration;
use codex_protocol::mcp_protocol::AuthMode;
use crate::token_data::PlanType;
use crate::token_data::TokenData;
use crate::token_data::parse_id_token;
@@ -70,13 +71,9 @@ impl CodexAuth {
Ok(access)
}
/// Loads the available auth information from the auth.json or
/// OPENAI_API_KEY environment variable.
pub fn from_codex_home(
codex_home: &Path,
preferred_auth_method: AuthMode,
) -> std::io::Result<Option<CodexAuth>> {
load_auth(codex_home, true, preferred_auth_method)
/// Loads the available auth information from the auth.json.
pub fn from_codex_home(codex_home: &Path) -> std::io::Result<Option<CodexAuth>> {
load_auth(codex_home)
}
pub async fn get_token_data(&self) -> Result<TokenData, std::io::Error> {
@@ -135,13 +132,12 @@ impl CodexAuth {
}
pub fn get_account_id(&self) -> Option<String> {
self.get_current_token_data()
.and_then(|t| t.account_id.clone())
self.get_current_token_data().and_then(|t| t.account_id)
}
pub fn get_plan_type(&self) -> Option<String> {
pub(crate) fn get_plan_type(&self) -> Option<PlanType> {
self.get_current_token_data()
.and_then(|t| t.id_token.chatgpt_plan_type.as_ref().map(|p| p.as_string()))
.and_then(|t| t.id_token.chatgpt_plan_type)
}
fn get_current_auth_json(&self) -> Option<AuthDotJson> {
@@ -150,7 +146,7 @@ impl CodexAuth {
}
fn get_current_token_data(&self) -> Option<TokenData> {
self.get_current_auth_json().and_then(|t| t.tokens.clone())
self.get_current_auth_json().and_then(|t| t.tokens)
}
/// Consider this private to integration tests.
@@ -193,10 +189,11 @@ impl CodexAuth {
pub const OPENAI_API_KEY_ENV_VAR: &str = "OPENAI_API_KEY";
fn read_openai_api_key_from_env() -> Option<String> {
pub fn read_openai_api_key_from_env() -> Option<String> {
env::var(OPENAI_API_KEY_ENV_VAR)
.ok()
.filter(|s| !s.is_empty())
.map(|value| value.trim().to_string())
.filter(|value| !value.is_empty())
}
pub fn get_auth_file(codex_home: &Path) -> PathBuf {
@@ -214,7 +211,7 @@ pub fn logout(codex_home: &Path) -> std::io::Result<bool> {
}
}
/// Writes an `auth.json` that contains only the API key. Intended for CLI use.
/// Writes an `auth.json` that contains only the API key.
pub fn login_with_api_key(codex_home: &Path, api_key: &str) -> std::io::Result<()> {
let auth_dot_json = AuthDotJson {
openai_api_key: Some(api_key.to_string()),
@@ -224,28 +221,11 @@ pub fn login_with_api_key(codex_home: &Path, api_key: &str) -> std::io::Result<(
write_auth_json(&get_auth_file(codex_home), &auth_dot_json)
}
fn load_auth(
codex_home: &Path,
include_env_var: bool,
preferred_auth_method: AuthMode,
) -> std::io::Result<Option<CodexAuth>> {
// First, check to see if there is a valid auth.json file. If not, we fall
// back to AuthMode::ApiKey using the OPENAI_API_KEY environment variable
// (if it is set).
fn load_auth(codex_home: &Path) -> std::io::Result<Option<CodexAuth>> {
let auth_file = get_auth_file(codex_home);
let client = crate::default_client::create_client();
let auth_dot_json = match try_read_auth_json(&auth_file) {
Ok(auth) => auth,
// If auth.json does not exist, try to read the OPENAI_API_KEY from the
// environment variable.
Err(e) if e.kind() == std::io::ErrorKind::NotFound && include_env_var => {
return match read_openai_api_key_from_env() {
Some(api_key) => Ok(Some(CodexAuth::from_api_key_with_client(&api_key, client))),
None => Ok(None),
};
}
// Though if auth.json exists but is malformed, do not fall back to the
// env var because the user may be expecting to use AuthMode::ChatGPT.
Err(e) => {
return Err(e);
}
@@ -257,32 +237,11 @@ fn load_auth(
last_refresh,
} = auth_dot_json;
// If the auth.json has an API key AND does not appear to be on a plan that
// should prefer AuthMode::ChatGPT, use AuthMode::ApiKey.
// Prefer AuthMode.ApiKey if it's set in the auth.json.
if let Some(api_key) = &auth_json_api_key {
// Should any of these be AuthMode::ChatGPT with the api_key set?
// Does AuthMode::ChatGPT indicate that there is an auth.json that is
// "refreshable" even if we are using the API key for auth?
match &tokens {
Some(tokens) => {
if tokens.should_use_api_key(preferred_auth_method, tokens.is_openai_email()) {
return Ok(Some(CodexAuth::from_api_key_with_client(api_key, client)));
} else {
// Ignore the API key and fall through to ChatGPT auth.
}
}
None => {
// We have an API key but no tokens in the auth.json file.
// Perhaps the user ran `codex login --api-key <KEY>` or updated
// auth.json by hand. Either way, let's assume they are trying
// to use their API key.
return Ok(Some(CodexAuth::from_api_key_with_client(api_key, client)));
}
}
return Ok(Some(CodexAuth::from_api_key_with_client(api_key, client)));
}
// For the AuthMode::ChatGPT variant, perhaps neither api_key nor
// openai_api_key should exist?
Ok(Some(CodexAuth {
api_key: None,
mode: AuthMode::ChatGPT,
@@ -332,10 +291,10 @@ async fn update_tokens(
let tokens = auth_dot_json.tokens.get_or_insert_with(TokenData::default);
tokens.id_token = parse_id_token(&id_token).map_err(std::io::Error::other)?;
if let Some(access_token) = access_token {
tokens.access_token = access_token.to_string();
tokens.access_token = access_token;
}
if let Some(refresh_token) = refresh_token {
tokens.refresh_token = refresh_token.to_string();
tokens.refresh_token = refresh_token;
}
auth_dot_json.last_refresh = Some(Utc::now());
write_auth_json(auth_file, &auth_dot_json)?;
@@ -412,7 +371,6 @@ use std::sync::RwLock;
/// Internal cached auth state.
#[derive(Clone, Debug)]
struct CachedAuth {
preferred_auth_mode: AuthMode,
auth: Option<CodexAuth>,
}
@@ -468,9 +426,7 @@ mod tests {
auth_dot_json,
auth_file: _,
..
} = super::load_auth(codex_home.path(), false, AuthMode::ChatGPT)
.unwrap()
.unwrap();
} = super::load_auth(codex_home.path()).unwrap().unwrap();
assert_eq!(None, api_key);
assert_eq!(AuthMode::ChatGPT, mode);
@@ -499,88 +455,6 @@ mod tests {
)
}
/// Even if the OPENAI_API_KEY is set in auth.json, if the plan is not in
/// [`TokenData::is_plan_that_should_use_api_key`], it should use
/// [`AuthMode::ChatGPT`].
#[tokio::test]
async fn pro_account_with_api_key_still_uses_chatgpt_auth() {
let codex_home = tempdir().unwrap();
let fake_jwt = write_auth_file(
AuthFileParams {
openai_api_key: Some("sk-test-key".to_string()),
chatgpt_plan_type: "pro".to_string(),
},
codex_home.path(),
)
.expect("failed to write auth file");
let CodexAuth {
api_key,
mode,
auth_dot_json,
auth_file: _,
..
} = super::load_auth(codex_home.path(), false, AuthMode::ChatGPT)
.unwrap()
.unwrap();
assert_eq!(None, api_key);
assert_eq!(AuthMode::ChatGPT, mode);
let guard = auth_dot_json.lock().unwrap();
let auth_dot_json = guard.as_ref().expect("AuthDotJson should exist");
assert_eq!(
&AuthDotJson {
openai_api_key: None,
tokens: Some(TokenData {
id_token: IdTokenInfo {
email: Some("user@example.com".to_string()),
chatgpt_plan_type: Some(PlanType::Known(KnownPlan::Pro)),
raw_jwt: fake_jwt,
},
access_token: "test-access-token".to_string(),
refresh_token: "test-refresh-token".to_string(),
account_id: None,
}),
last_refresh: Some(
DateTime::parse_from_rfc3339(LAST_REFRESH)
.unwrap()
.with_timezone(&Utc)
),
},
auth_dot_json
)
}
/// If the OPENAI_API_KEY is set in auth.json and it is an enterprise
/// account, then it should use [`AuthMode::ApiKey`].
#[tokio::test]
async fn enterprise_account_with_api_key_uses_apikey_auth() {
let codex_home = tempdir().unwrap();
write_auth_file(
AuthFileParams {
openai_api_key: Some("sk-test-key".to_string()),
chatgpt_plan_type: "enterprise".to_string(),
},
codex_home.path(),
)
.expect("failed to write auth file");
let CodexAuth {
api_key,
mode,
auth_dot_json,
auth_file: _,
..
} = super::load_auth(codex_home.path(), false, AuthMode::ChatGPT)
.unwrap()
.unwrap();
assert_eq!(Some("sk-test-key".to_string()), api_key);
assert_eq!(AuthMode::ApiKey, mode);
let guard = auth_dot_json.lock().expect("should unwrap");
assert!(guard.is_none(), "auth_dot_json should be None");
}
#[tokio::test]
async fn loads_api_key_from_auth_json() {
let dir = tempdir().unwrap();
@@ -591,9 +465,7 @@ mod tests {
)
.unwrap();
let auth = super::load_auth(dir.path(), false, AuthMode::ChatGPT)
.unwrap()
.unwrap();
let auth = super::load_auth(dir.path()).unwrap().unwrap();
assert_eq!(auth.mode, AuthMode::ApiKey);
assert_eq!(auth.api_key, Some("sk-test-key".to_string()));
@@ -683,26 +555,17 @@ impl AuthManager {
/// preferred auth method. Errors loading auth are swallowed; `auth()` will
/// simply return `None` in that case so callers can treat it as an
/// unauthenticated state.
pub fn new(codex_home: PathBuf, preferred_auth_mode: AuthMode) -> Self {
let auth = CodexAuth::from_codex_home(&codex_home, preferred_auth_mode)
.ok()
.flatten();
pub fn new(codex_home: PathBuf) -> Self {
let auth = CodexAuth::from_codex_home(&codex_home).ok().flatten();
Self {
codex_home,
inner: RwLock::new(CachedAuth {
preferred_auth_mode,
auth,
}),
inner: RwLock::new(CachedAuth { auth }),
}
}
/// Create an AuthManager with a specific CodexAuth, for testing only.
pub fn from_auth_for_testing(auth: CodexAuth) -> Arc<Self> {
let preferred_auth_mode = auth.mode;
let cached = CachedAuth {
preferred_auth_mode,
auth: Some(auth),
};
let cached = CachedAuth { auth: Some(auth) };
Arc::new(Self {
codex_home: PathBuf::new(),
inner: RwLock::new(cached),
@@ -714,21 +577,10 @@ impl AuthManager {
self.inner.read().ok().and_then(|c| c.auth.clone())
}
/// Preferred auth method used when (re)loading.
pub fn preferred_auth_method(&self) -> AuthMode {
self.inner
.read()
.map(|c| c.preferred_auth_mode)
.unwrap_or(AuthMode::ApiKey)
}
/// Force a reload using the existing preferred auth method. Returns
/// Force a reload of the auth information from auth.json. Returns
/// whether the auth value changed.
pub fn reload(&self) -> bool {
let preferred = self.preferred_auth_method();
let new_auth = CodexAuth::from_codex_home(&self.codex_home, preferred)
.ok()
.flatten();
let new_auth = CodexAuth::from_codex_home(&self.codex_home).ok().flatten();
if let Ok(mut guard) = self.inner.write() {
let changed = !AuthManager::auths_equal(&guard.auth, &new_auth);
guard.auth = new_auth;
@@ -747,8 +599,8 @@ impl AuthManager {
}
/// Convenience constructor returning an `Arc` wrapper.
pub fn shared(codex_home: PathBuf, preferred_auth_mode: AuthMode) -> Arc<Self> {
Arc::new(Self::new(codex_home, preferred_auth_mode))
pub fn shared(codex_home: PathBuf) -> Arc<Self> {
Arc::new(Self::new(codex_home))
}
/// Attempt to refresh the current auth token (if any). On success, reload

View File

@@ -41,6 +41,7 @@ use crate::model_provider_info::WireApi;
use crate::openai_model_info::get_model_info;
use crate::openai_tools::create_tools_json_for_responses_api;
use crate::protocol::TokenUsage;
use crate::token_data::PlanType;
use crate::util::backoff;
use codex_protocol::config_types::ReasoningEffort as ReasoningEffortConfig;
use codex_protocol::config_types::ReasoningSummary as ReasoningSummaryConfig;
@@ -60,7 +61,7 @@ struct Error {
message: Option<String>,
// Optional fields available on "usage_limit_reached" and "usage_not_included" errors
plan_type: Option<String>,
plan_type: Option<PlanType>,
resets_in_seconds: Option<u64>,
}
@@ -71,7 +72,7 @@ pub struct ModelClient {
client: reqwest::Client,
provider: ModelProviderInfo,
conversation_id: ConversationId,
effort: ReasoningEffortConfig,
effort: Option<ReasoningEffortConfig>,
summary: ReasoningSummaryConfig,
}
@@ -80,7 +81,7 @@ impl ModelClient {
config: Arc<Config>,
auth_manager: Option<Arc<AuthManager>>,
provider: ModelProviderInfo,
effort: ReasoningEffortConfig,
effort: Option<ReasoningEffortConfig>,
summary: ReasoningSummaryConfig,
conversation_id: ConversationId,
) -> Self {
@@ -103,6 +104,12 @@ impl ModelClient {
.or_else(|| get_model_info(&self.config.model_family).map(|info| info.context_window))
}
pub fn get_auto_compact_token_limit(&self) -> Option<i64> {
self.config.model_auto_compact_token_limit.or_else(|| {
get_model_info(&self.config.model_family).and_then(|info| info.auto_compact_token_limit)
})
}
/// Dispatches to either the Responses or Chat implementation depending on
/// the provider config. Public callers always invoke `stream()` the
/// specialised helpers are private to avoid accidental misuse.
@@ -186,6 +193,15 @@ impl ModelClient {
None
};
// In general, we want to explicitly send `store: false` when using the Responses API,
// but in practice, the Azure Responses API rejects `store: false`:
//
// - If store = false and id is sent an error is thrown that ID is not found
// - If store = false and id is not sent an error is thrown that ID is required
//
// For Azure, we send `store: true` and preserve reasoning item IDs.
let azure_workaround = self.provider.is_azure_responses_endpoint();
let payload = ResponsesApiRequest {
model: &self.config.model,
instructions: &full_instructions,
@@ -194,13 +210,19 @@ impl ModelClient {
tool_choice: "auto",
parallel_tool_calls: false,
reasoning,
store: false,
store: azure_workaround,
stream: true,
include,
prompt_cache_key: Some(self.conversation_id.to_string()),
text,
};
let mut payload_json = serde_json::to_value(&payload)?;
if azure_workaround {
attach_item_ids(&mut payload_json, &input_with_instructions);
}
let payload_body = serde_json::to_string(&payload_json)?;
let mut attempt = 0;
let max_retries = self.provider.request_max_retries();
@@ -213,7 +235,7 @@ impl ModelClient {
trace!(
"POST to {}: {}",
self.provider.get_full_url(&auth),
serde_json::to_string(&payload)?
payload_body.as_str()
);
let mut req_builder = self
@@ -227,7 +249,7 @@ impl ModelClient {
.header("conversation_id", self.conversation_id.to_string())
.header("session_id", self.conversation_id.to_string())
.header(reqwest::header::ACCEPT, "text/event-stream")
.json(&payload);
.json(&payload_json);
if let Some(auth) = auth.as_ref()
&& auth.mode == AuthMode::ChatGPT
@@ -239,10 +261,10 @@ impl ModelClient {
let res = req_builder.send().await;
if let Ok(resp) = &res {
trace!(
"Response status: {}, request-id: {}",
"Response status: {}, cf-ray: {}",
resp.status(),
resp.headers()
.get("x-request-id")
.get("cf-ray")
.map(|v| v.to_str().unwrap_or_default())
.unwrap_or_default()
);
@@ -304,7 +326,7 @@ impl ModelClient {
// token.
let plan_type = error
.plan_type
.or_else(|| auth.and_then(|a| a.get_plan_type()));
.or_else(|| auth.as_ref().and_then(|a| a.get_plan_type()));
let resets_in_seconds = error.resets_in_seconds;
return Err(CodexErr::UsageLimitReached(UsageLimitReachedError {
plan_type,
@@ -355,7 +377,7 @@ impl ModelClient {
}
/// Returns the current reasoning effort setting.
pub fn get_reasoning_effort(&self) -> ReasoningEffortConfig {
pub fn get_reasoning_effort(&self) -> Option<ReasoningEffortConfig> {
self.effort
}
@@ -424,6 +446,33 @@ struct ResponseCompletedOutputTokensDetails {
reasoning_tokens: u64,
}
fn attach_item_ids(payload_json: &mut Value, original_items: &[ResponseItem]) {
let Some(input_value) = payload_json.get_mut("input") else {
return;
};
let serde_json::Value::Array(items) = input_value else {
return;
};
for (value, item) in items.iter_mut().zip(original_items.iter()) {
if let ResponseItem::Reasoning { id, .. }
| ResponseItem::Message { id: Some(id), .. }
| ResponseItem::WebSearchCall { id: Some(id), .. }
| ResponseItem::FunctionCall { id: Some(id), .. }
| ResponseItem::LocalShellCall { id: Some(id), .. }
| ResponseItem::CustomToolCall { id: Some(id), .. } = item
{
if id.is_empty() {
continue;
}
if let Some(obj) = value.as_object_mut() {
obj.insert("id".to_string(), Value::String(id.clone()));
}
}
}
}
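To make the ID re-attachment concrete, here is a stand-alone sketch of the same zip-and-insert step over a serialized `input` array (not the crate's code; the item shape and ID value are illustrative):

```rust
use serde_json::{Value, json};

// Mirror of `attach_item_ids`: zip the serialized items with the original
// IDs and re-insert any non-empty ID into the corresponding JSON object.
fn attach_ids(payload: &mut Value, original_ids: &[Option<&str>]) {
    let Some(Value::Array(items)) = payload.get_mut("input") else {
        return;
    };
    for (value, id) in items.iter_mut().zip(original_ids) {
        if let (Some(obj), Some(id)) = (value.as_object_mut(), id) {
            if !id.is_empty() {
                obj.insert("id".to_string(), Value::String(id.to_string()));
            }
        }
    }
}

fn main() {
    let mut payload = json!({ "store": true, "input": [{ "type": "reasoning" }] });
    attach_ids(&mut payload, &[Some("rs_123")]);
    assert_eq!(payload["input"][0]["id"], "rs_123");
}
```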
async fn process_sse<S>(
stream: S,
tx_event: mpsc::Sender<Result<ResponseEvent>>,
@@ -1037,4 +1086,37 @@ mod tests {
let delay = try_parse_retry_after(&err);
assert_eq!(delay, Some(Duration::from_secs_f64(1.898)));
}
#[test]
fn error_response_deserializes_old_schema_known_plan_type_and_serializes_back() {
use crate::token_data::KnownPlan;
use crate::token_data::PlanType;
let json = r#"{"error":{"type":"usage_limit_reached","plan_type":"pro","resets_in_seconds":3600}}"#;
let resp: ErrorResponse =
serde_json::from_str(json).expect("should deserialize old schema");
assert!(matches!(
resp.error.plan_type,
Some(PlanType::Known(KnownPlan::Pro))
));
let plan_json = serde_json::to_string(&resp.error.plan_type).expect("serialize plan_type");
assert_eq!(plan_json, "\"pro\"");
}
#[test]
fn error_response_deserializes_old_schema_unknown_plan_type_and_serializes_back() {
use crate::token_data::PlanType;
let json =
r#"{"error":{"type":"usage_limit_reached","plan_type":"vip","resets_in_seconds":60}}"#;
let resp: ErrorResponse =
serde_json::from_str(json).expect("should deserialize old schema");
assert!(matches!(resp.error.plan_type, Some(PlanType::Unknown(ref s)) if s == "vip"));
let plan_json = serde_json::to_string(&resp.error.plan_type).expect("serialize plan_type");
assert_eq!(plan_json, "\"vip\"");
}
}

View File

@@ -19,6 +19,9 @@ use tokio::sync::mpsc;
/// with this content.
const BASE_INSTRUCTIONS: &str = include_str!("../prompt.md");
/// Review thread system prompt. Edit `core/src/review_prompt.md` to customize.
pub const REVIEW_PROMPT: &str = include_str!("../review_prompt.md");
/// API request payload for a single model turn
#[derive(Default, Debug, Clone)]
pub struct Prompt {
@@ -81,8 +84,10 @@ pub enum ResponseEvent {
#[derive(Debug, Serialize)]
pub(crate) struct Reasoning {
-pub(crate) effort: ReasoningEffortConfig,
-pub(crate) summary: ReasoningSummaryConfig,
+#[serde(skip_serializing_if = "Option::is_none")]
+pub(crate) effort: Option<ReasoningEffortConfig>,
+#[serde(skip_serializing_if = "Option::is_none")]
+pub(crate) summary: Option<ReasoningSummaryConfig>,
}
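A minimal stand-alone sketch of the serde behavior this relies on: with `skip_serializing_if = "Option::is_none"`, a `None` field is omitted from the request body entirely rather than serialized as `null` (plain `String` fields stand in for the real config enums here):

```rust
use serde::Serialize;

#[derive(Serialize)]
struct Reasoning {
    #[serde(skip_serializing_if = "Option::is_none")]
    effort: Option<String>,
    #[serde(skip_serializing_if = "Option::is_none")]
    summary: Option<String>,
}

fn main() {
    let r = Reasoning {
        effort: None,
        summary: Some("auto".to_string()),
    };
    // Prints {"summary":"auto"}; the "effort" key is absent, not null.
    println!("{}", serde_json::to_string(&r).unwrap());
}
```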
/// Controls under the `text` field in the Responses API for GPT-5.
@@ -136,14 +141,17 @@ pub(crate) struct ResponsesApiRequest<'a> {
pub(crate) fn create_reasoning_param_for_request(
model_family: &ModelFamily,
-effort: ReasoningEffortConfig,
+effort: Option<ReasoningEffortConfig>,
summary: ReasoningSummaryConfig,
) -> Option<Reasoning> {
-if model_family.supports_reasoning_summaries {
-Some(Reasoning { effort, summary })
-} else {
-None
+if !model_family.supports_reasoning_summaries {
+return None;
}
+Some(Reasoning {
+effort,
+summary: Some(summary),
+})
}
pub(crate) fn create_text_param_for_request(

File diff suppressed because it is too large.

View File

@@ -0,0 +1,401 @@
use std::sync::Arc;
use super::AgentTask;
use super::MutexExt;
use super::Session;
use super::TurnContext;
use super::get_last_assistant_message_from_turn;
use crate::Prompt;
use crate::client_common::ResponseEvent;
use crate::error::CodexErr;
use crate::error::Result as CodexResult;
use crate::protocol::AgentMessageEvent;
use crate::protocol::CompactedItem;
use crate::protocol::ErrorEvent;
use crate::protocol::Event;
use crate::protocol::EventMsg;
use crate::protocol::InputItem;
use crate::protocol::InputMessageKind;
use crate::protocol::TaskCompleteEvent;
use crate::protocol::TaskStartedEvent;
use crate::protocol::TurnContextItem;
use crate::util::backoff;
use askama::Template;
use codex_protocol::models::ContentItem;
use codex_protocol::models::ResponseInputItem;
use codex_protocol::models::ResponseItem;
use codex_protocol::protocol::RolloutItem;
use futures::prelude::*;
pub(super) const COMPACT_TRIGGER_TEXT: &str = "Start Summarization";
const SUMMARIZATION_PROMPT: &str = include_str!("../../templates/compact/prompt.md");
#[derive(Template)]
#[template(path = "compact/history_bridge.md", escape = "none")]
struct HistoryBridgeTemplate<'a> {
user_messages_text: &'a str,
summary_text: &'a str,
}
pub(super) fn spawn_compact_task(
sess: Arc<Session>,
turn_context: Arc<TurnContext>,
sub_id: String,
input: Vec<InputItem>,
) {
let task = AgentTask::compact(
sess.clone(),
turn_context,
sub_id,
input,
SUMMARIZATION_PROMPT.to_string(),
);
sess.set_task(task);
}
pub(super) async fn run_inline_auto_compact_task(
sess: Arc<Session>,
turn_context: Arc<TurnContext>,
) {
let sub_id = sess.next_internal_sub_id();
let input = vec![InputItem::Text {
text: COMPACT_TRIGGER_TEXT.to_string(),
}];
run_compact_task_inner(
sess,
turn_context,
sub_id,
input,
SUMMARIZATION_PROMPT.to_string(),
false,
)
.await;
}
pub(super) async fn run_compact_task(
sess: Arc<Session>,
turn_context: Arc<TurnContext>,
sub_id: String,
input: Vec<InputItem>,
compact_instructions: String,
) {
run_compact_task_inner(
sess,
turn_context,
sub_id,
input,
compact_instructions,
true,
)
.await;
}
async fn run_compact_task_inner(
sess: Arc<Session>,
turn_context: Arc<TurnContext>,
sub_id: String,
input: Vec<InputItem>,
compact_instructions: String,
remove_task_on_completion: bool,
) {
let model_context_window = turn_context.client.get_model_context_window();
let start_event = Event {
id: sub_id.clone(),
msg: EventMsg::TaskStarted(TaskStartedEvent {
model_context_window,
}),
};
sess.send_event(start_event).await;
let initial_input_for_turn: ResponseInputItem = ResponseInputItem::from(input);
let instructions_override = compact_instructions;
let turn_input = sess.turn_input_with_history(vec![initial_input_for_turn.clone().into()]);
let prompt = Prompt {
input: turn_input,
tools: Vec::new(),
base_instructions_override: Some(instructions_override),
};
let max_retries = turn_context.client.get_provider().stream_max_retries();
let mut retries = 0;
let rollout_item = RolloutItem::TurnContext(TurnContextItem {
cwd: turn_context.cwd.clone(),
approval_policy: turn_context.approval_policy,
sandbox_policy: turn_context.sandbox_policy.clone(),
model: turn_context.client.get_model(),
effort: turn_context.client.get_reasoning_effort(),
summary: turn_context.client.get_reasoning_summary(),
});
sess.persist_rollout_items(&[rollout_item]).await;
loop {
let attempt_result = drain_to_completed(&sess, turn_context.as_ref(), &prompt).await;
match attempt_result {
Ok(()) => {
break;
}
Err(CodexErr::Interrupted) => {
return;
}
Err(e) => {
if retries < max_retries {
retries += 1;
let delay = backoff(retries);
sess.notify_stream_error(
&sub_id,
format!(
"stream error: {e}; retrying {retries}/{max_retries} in {delay:?}"
),
)
.await;
tokio::time::sleep(delay).await;
continue;
} else {
let event = Event {
id: sub_id.clone(),
msg: EventMsg::Error(ErrorEvent {
message: e.to_string(),
}),
};
sess.send_event(event).await;
return;
}
}
}
}
if remove_task_on_completion {
sess.remove_task(&sub_id);
}
let history_snapshot = {
let state = sess.state.lock_unchecked();
state.history.contents()
};
let summary_text = get_last_assistant_message_from_turn(&history_snapshot).unwrap_or_default();
let user_messages = collect_user_messages(&history_snapshot);
let new_history =
build_compacted_history(&sess, turn_context.as_ref(), &user_messages, &summary_text);
{
let mut state = sess.state.lock_unchecked();
state.history.replace(new_history);
}
let rollout_item = RolloutItem::Compacted(CompactedItem {
message: summary_text.clone(),
});
sess.persist_rollout_items(&[rollout_item]).await;
let event = Event {
id: sub_id.clone(),
msg: EventMsg::AgentMessage(AgentMessageEvent {
message: "Compact task completed".to_string(),
}),
};
sess.send_event(event).await;
let event = Event {
id: sub_id.clone(),
msg: EventMsg::TaskComplete(TaskCompleteEvent {
last_agent_message: None,
}),
};
sess.send_event(event).await;
}
fn content_items_to_text(content: &[ContentItem]) -> Option<String> {
let mut pieces = Vec::new();
for item in content {
match item {
ContentItem::InputText { text } | ContentItem::OutputText { text } => {
if !text.is_empty() {
pieces.push(text.as_str());
}
}
ContentItem::InputImage { .. } => {}
}
}
if pieces.is_empty() {
None
} else {
Some(pieces.join("\n"))
}
}
fn collect_user_messages(items: &[ResponseItem]) -> Vec<String> {
items
.iter()
.filter_map(|item| match item {
ResponseItem::Message { role, content, .. } if role == "user" => {
content_items_to_text(content)
}
_ => None,
})
.filter(|text| !is_session_prefix_message(text))
.collect()
}
fn is_session_prefix_message(text: &str) -> bool {
matches!(
InputMessageKind::from(("user", text)),
InputMessageKind::UserInstructions | InputMessageKind::EnvironmentContext
)
}
fn build_compacted_history(
sess: &Session,
turn_context: &TurnContext,
user_messages: &[String],
summary_text: &str,
) -> Vec<ResponseItem> {
let mut history = sess.build_initial_context(turn_context);
let user_messages_text = if user_messages.is_empty() {
"(none)".to_string()
} else {
user_messages.join("\n\n")
};
let summary_text = if summary_text.is_empty() {
"(no summary available)".to_string()
} else {
summary_text.to_string()
};
let Ok(bridge) = HistoryBridgeTemplate {
user_messages_text: &user_messages_text,
summary_text: &summary_text,
}
.render() else {
return vec![];
};
history.push(ResponseItem::Message {
id: None,
role: "user".to_string(),
content: vec![ContentItem::InputText { text: bridge }],
});
history
}
async fn drain_to_completed(
sess: &Session,
turn_context: &TurnContext,
prompt: &Prompt,
) -> CodexResult<()> {
let mut stream = turn_context.client.clone().stream(prompt).await?;
loop {
let maybe_event = stream.next().await;
let Some(event) = maybe_event else {
return Err(CodexErr::Stream(
"stream closed before response.completed".into(),
None,
));
};
match event {
Ok(ResponseEvent::OutputItemDone(item)) => {
let mut state = sess.state.lock_unchecked();
state.history.record_items(std::slice::from_ref(&item));
}
Ok(ResponseEvent::Completed { .. }) => {
return Ok(());
}
Ok(_) => continue,
Err(e) => return Err(e),
}
}
}
#[cfg(test)]
mod tests {
use super::*;
use pretty_assertions::assert_eq;
#[test]
fn content_items_to_text_joins_non_empty_segments() {
let items = vec![
ContentItem::InputText {
text: "hello".to_string(),
},
ContentItem::OutputText {
text: String::new(),
},
ContentItem::OutputText {
text: "world".to_string(),
},
];
let joined = content_items_to_text(&items);
assert_eq!(Some("hello\nworld".to_string()), joined);
}
#[test]
fn content_items_to_text_ignores_image_only_content() {
let items = vec![ContentItem::InputImage {
image_url: "file://image.png".to_string(),
}];
let joined = content_items_to_text(&items);
assert_eq!(None, joined);
}
#[test]
fn collect_user_messages_extracts_user_text_only() {
let items = vec![
ResponseItem::Message {
id: Some("assistant".to_string()),
role: "assistant".to_string(),
content: vec![ContentItem::OutputText {
text: "ignored".to_string(),
}],
},
ResponseItem::Message {
id: Some("user".to_string()),
role: "user".to_string(),
content: vec![
ContentItem::InputText {
text: "first".to_string(),
},
ContentItem::OutputText {
text: "second".to_string(),
},
],
},
ResponseItem::Other,
];
let collected = collect_user_messages(&items);
assert_eq!(vec!["first\nsecond".to_string()], collected);
}
#[test]
fn collect_user_messages_filters_session_prefix_entries() {
let items = vec![
ResponseItem::Message {
id: None,
role: "user".to_string(),
content: vec![ContentItem::InputText {
text: "<user_instructions>do things</user_instructions>".to_string(),
}],
},
ResponseItem::Message {
id: None,
role: "user".to_string(),
content: vec![ContentItem::InputText {
text: "<ENVIRONMENT_CONTEXT>cwd=/tmp</ENVIRONMENT_CONTEXT>".to_string(),
}],
},
ResponseItem::Message {
id: None,
role: "user".to_string(),
content: vec![ContentItem::InputText {
text: "real user message".to_string(),
}],
},
];
let collected = collect_user_messages(&items);
assert_eq!(vec!["real user message".to_string()], collected);
}
}

View File

@@ -15,11 +15,11 @@ use crate::model_provider_info::built_in_model_providers;
use crate::openai_model_info::get_model_info;
use crate::protocol::AskForApproval;
use crate::protocol::SandboxPolicy;
use anyhow::Context;
use codex_protocol::config_types::ReasoningEffort;
use codex_protocol::config_types::ReasoningSummary;
use codex_protocol::config_types::SandboxMode;
use codex_protocol::config_types::Verbosity;
use codex_protocol::mcp_protocol::AuthMode;
use codex_protocol::mcp_protocol::Tools;
use codex_protocol::mcp_protocol::UserSavedConfig;
use dirs::home_dir;
@@ -32,6 +32,8 @@ use toml::Value as TomlValue;
use toml_edit::DocumentMut;
const OPENAI_DEFAULT_MODEL: &str = "gpt-5";
const OPENAI_DEFAULT_REVIEW_MODEL: &str = "gpt-5";
pub const GPT5_HIGH_MODEL: &str = "gpt-5-high-new";
/// Maximum number of bytes of the documentation that will be embedded. Larger
/// files are *silently truncated* to this size so we do not take up too much of
@@ -46,6 +48,9 @@ pub struct Config {
/// Optional override of model selection.
pub model: String,
/// Model used specifically for review sessions. Defaults to "gpt-5".
pub review_model: String,
pub model_family: ModelFamily,
/// Size of the context window for the model, in tokens.
@@ -54,6 +59,9 @@ pub struct Config {
/// Maximum number of output tokens.
pub model_max_output_tokens: Option<u64>,
/// Token usage threshold triggering auto-compaction of conversation history.
pub model_auto_compact_token_limit: Option<i64>,
/// Key into the model_providers map that specifies which provider to use.
pub model_provider_id: String,
@@ -129,9 +137,6 @@ pub struct Config {
/// output will be hyperlinked using the specified URI scheme.
pub file_opener: UriBasedFileOpener,
-/// Collection of settings that are specific to the TUI.
-pub tui: Tui,
/// Path to the `codex-linux-sandbox` executable. This must be set if
/// [`crate::exec::SandboxType::LinuxSeccomp`] is used. Note that this
/// cannot be set in the config file: it must be set in code via
@@ -142,7 +147,7 @@ pub struct Config {
/// Value to use for `reasoning.effort` when making a request using the
/// Responses API.
-pub model_reasoning_effort: ReasoningEffort,
+pub model_reasoning_effort: Option<ReasoningEffort>,
/// If not "none", the value to use for `reasoning.summary` when making a
/// request using the Responses API.
@@ -167,11 +172,11 @@ pub struct Config {
pub tools_web_search_request: bool,
/// If set to `true`, the API key will be signed with the `originator` header.
pub preferred_auth_method: AuthMode,
pub use_experimental_streamable_shell_tool: bool,
/// If set to `true`, used only the experimental unified exec tool.
pub use_experimental_unified_exec_tool: bool,
/// Include the `view_image` tool that lets the agent attach a local image path to context.
pub include_view_image_tool: bool,
@@ -261,17 +266,7 @@ pub fn load_config_as_toml(codex_home: &Path) -> std::io::Result<TomlValue> {
}
}
-/// Patch `CODEX_HOME/config.toml` project state.
-/// Use with caution.
-pub fn set_project_trusted(codex_home: &Path, project_path: &Path) -> anyhow::Result<()> {
-let config_path = codex_home.join(CONFIG_TOML_FILE);
-// Parse existing config if present; otherwise start a new document.
-let mut doc = match std::fs::read_to_string(config_path.clone()) {
-Ok(s) => s.parse::<DocumentMut>()?,
-Err(e) if e.kind() == std::io::ErrorKind::NotFound => DocumentMut::new(),
-Err(e) => return Err(e.into()),
-};
+fn set_project_trusted_inner(doc: &mut DocumentMut, project_path: &Path) -> anyhow::Result<()> {
// Ensure we render a human-friendly structure:
//
// [projects]
@@ -287,14 +282,26 @@ pub fn set_project_trusted(codex_home: &Path, project_path: &Path) -> anyhow::Re
// Ensure top-level `projects` exists as a non-inline, explicit table. If it
// exists but was previously represented as a non-table (e.g., inline),
// replace it with an explicit table.
-let mut created_projects_table = false;
{
let root = doc.as_table_mut();
-let needs_table = !root.contains_key("projects")
-|| root.get("projects").and_then(|i| i.as_table()).is_none();
-if needs_table {
-root.insert("projects", toml_edit::table());
-created_projects_table = true;
+// If `projects` exists but isn't a standard table (e.g., it's an inline table),
+// convert it to an explicit table while preserving existing entries.
+let existing_projects = root.get("projects").cloned();
+if existing_projects.as_ref().is_none_or(|i| !i.is_table()) {
+let mut projects_tbl = toml_edit::Table::new();
+projects_tbl.set_implicit(true);
+// If there was an existing inline table, migrate its entries to explicit tables.
+if let Some(inline_tbl) = existing_projects.as_ref().and_then(|i| i.as_inline_table()) {
+for (k, v) in inline_tbl.iter() {
+if let Some(inner_tbl) = v.as_inline_table() {
+let new_tbl = inner_tbl.clone().into_table();
+projects_tbl.insert(k, toml_edit::Item::Table(new_tbl));
+}
+}
+}
+root.insert("projects", toml_edit::Item::Table(projects_tbl));
}
}
let Some(projects_tbl) = doc["projects"].as_table_mut() else {
@@ -303,12 +310,6 @@ pub fn set_project_trusted(codex_home: &Path, project_path: &Path) -> anyhow::Re
));
};
-// If we created the `projects` table ourselves, keep it implicit so we
-// don't render a standalone `[projects]` header.
-if created_projects_table {
-projects_tbl.set_implicit(true);
-}
// Ensure the per-project entry is its own explicit table. If it exists but
// is not a table (e.g., an inline table), replace it with an explicit table.
let needs_proj_table = !projects_tbl.contains_key(project_key.as_str())
@@ -327,6 +328,21 @@ pub fn set_project_trusted(codex_home: &Path, project_path: &Path) -> anyhow::Re
};
proj_tbl.set_implicit(false);
proj_tbl["trust_level"] = toml_edit::value("trusted");
Ok(())
}
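For example (paths illustrative; compare the migration test at the end of this file), an inline `projects` value is converted to explicit per-project tables while the new path is trusted:

```toml
# Before: `projects` stored as a top-level inline table
projects = { "/existing/repo" = { trust_level = "trusted" } }

# After set_project_trusted_inner(&mut doc, Path::new("/new/repo")):
[projects."/existing/repo"]
trust_level = "trusted"

[projects."/new/repo"]
trust_level = "trusted"
```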
/// Patch `CODEX_HOME/config.toml` project state.
/// Use with caution.
pub fn set_project_trusted(codex_home: &Path, project_path: &Path) -> anyhow::Result<()> {
let config_path = codex_home.join(CONFIG_TOML_FILE);
// Parse existing config if present; otherwise start a new document.
let mut doc = match std::fs::read_to_string(config_path.clone()) {
Ok(s) => s.parse::<DocumentMut>()?,
Err(e) if e.kind() == std::io::ErrorKind::NotFound => DocumentMut::new(),
Err(e) => return Err(e.into()),
};
set_project_trusted_inner(&mut doc, project_path)?;
// ensure codex_home exists
std::fs::create_dir_all(codex_home)?;
@@ -341,6 +357,117 @@ pub fn set_project_trusted(codex_home: &Path, project_path: &Path) -> anyhow::Re
Ok(())
}
fn ensure_profile_table<'a>(
doc: &'a mut DocumentMut,
profile_name: &str,
) -> anyhow::Result<&'a mut toml_edit::Table> {
let mut created_profiles_table = false;
{
let root = doc.as_table_mut();
let needs_table = !root.contains_key("profiles")
|| root
.get("profiles")
.and_then(|item| item.as_table())
.is_none();
if needs_table {
root.insert("profiles", toml_edit::table());
created_profiles_table = true;
}
}
let Some(profiles_table) = doc["profiles"].as_table_mut() else {
return Err(anyhow::anyhow!(
"profiles table missing after initialization"
));
};
if created_profiles_table {
profiles_table.set_implicit(true);
}
let needs_profile_table = !profiles_table.contains_key(profile_name)
|| profiles_table
.get(profile_name)
.and_then(|item| item.as_table())
.is_none();
if needs_profile_table {
profiles_table.insert(profile_name, toml_edit::table());
}
let Some(profile_table) = profiles_table
.get_mut(profile_name)
.and_then(|item| item.as_table_mut())
else {
return Err(anyhow::anyhow!(format!(
"profile table missing for {profile_name}"
)));
};
profile_table.set_implicit(false);
Ok(profile_table)
}
// TODO(jif) refactor config persistence.
pub async fn persist_model_selection(
codex_home: &Path,
active_profile: Option<&str>,
model: &str,
effort: Option<ReasoningEffort>,
) -> anyhow::Result<()> {
let config_path = codex_home.join(CONFIG_TOML_FILE);
let serialized = match tokio::fs::read_to_string(&config_path).await {
Ok(contents) => contents,
Err(err) if err.kind() == std::io::ErrorKind::NotFound => String::new(),
Err(err) => return Err(err.into()),
};
let mut doc = if serialized.is_empty() {
DocumentMut::new()
} else {
serialized.parse::<DocumentMut>()?
};
if let Some(profile_name) = active_profile {
let profile_table = ensure_profile_table(&mut doc, profile_name)?;
profile_table["model"] = toml_edit::value(model);
match effort {
Some(effort) => {
profile_table["model_reasoning_effort"] = toml_edit::value(effort.to_string());
}
None => {
profile_table.remove("model_reasoning_effort");
}
}
} else {
let table = doc.as_table_mut();
table["model"] = toml_edit::value(model);
match effort {
Some(effort) => {
table["model_reasoning_effort"] = toml_edit::value(effort.to_string());
}
None => {
table.remove("model_reasoning_effort");
}
}
}
// TODO(jif) refactor the home creation
tokio::fs::create_dir_all(codex_home)
.await
.with_context(|| {
format!(
"failed to create Codex home directory at {}",
codex_home.display()
)
})?;
tokio::fs::write(&config_path, doc.to_string())
.await
.with_context(|| format!("failed to persist config.toml at {}", config_path.display()))?;
Ok(())
}
/// Apply a single dotted-path override onto a TOML value.
fn apply_toml_override(root: &mut TomlValue, path: &str, value: TomlValue) {
use toml::value::Table;
@@ -385,10 +512,12 @@ fn apply_toml_override(root: &mut TomlValue, path: &str, value: TomlValue) {
}
/// Base config deserialized from ~/.codex/config.toml.
-#[derive(Deserialize, Debug, Clone, Default)]
+#[derive(Deserialize, Debug, Clone, Default, PartialEq)]
pub struct ConfigToml {
/// Optional override of model selection.
pub model: Option<String>,
/// Review model override used by the `/review` feature.
pub review_model: Option<String>,
/// Provider to use from the model_providers map.
pub model_provider: Option<String>,
@@ -399,6 +528,9 @@ pub struct ConfigToml {
/// Maximum number of output tokens.
pub model_max_output_tokens: Option<u64>,
/// Token usage threshold triggering auto-compaction of conversation history.
pub model_auto_compact_token_limit: Option<i64>,
/// Default approval policy for executing commands.
pub approval_policy: Option<AskForApproval>,
@@ -476,12 +608,10 @@ pub struct ConfigToml {
pub experimental_instructions_file: Option<PathBuf>,
pub experimental_use_exec_command_tool: Option<bool>,
pub experimental_use_unified_exec_tool: Option<bool>,
pub projects: Option<HashMap<String, ProjectConfig>>,
/// If set to `true`, the API key will be signed with the `originator` header.
pub preferred_auth_method: Option<AuthMode>,
/// Nested tools section for feature toggles
pub tools: Option<ToolsToml>,
@@ -519,7 +649,7 @@ pub struct ProjectConfig {
pub trust_level: Option<String>,
}
-#[derive(Deserialize, Debug, Clone, Default)]
+#[derive(Deserialize, Debug, Clone, Default, PartialEq)]
pub struct ToolsToml {
#[serde(default, alias = "web_search_request")]
pub web_search: Option<bool>,
@@ -616,6 +746,7 @@ impl ConfigToml {
#[derive(Default, Debug, Clone)]
pub struct ConfigOverrides {
pub model: Option<String>,
pub review_model: Option<String>,
pub cwd: Option<PathBuf>,
pub approval_policy: Option<AskForApproval>,
pub sandbox_mode: Option<SandboxMode>,
@@ -643,6 +774,7 @@ impl Config {
// Destructure ConfigOverrides fully to ensure all overrides are applied.
let ConfigOverrides {
model,
review_model: override_review_model,
cwd,
approval_policy,
sandbox_mode,
@@ -759,6 +891,11 @@ impl Config {
.as_ref()
.map(|info| info.max_output_tokens)
});
let model_auto_compact_token_limit = cfg.model_auto_compact_token_limit.or_else(|| {
openai_model_info
.as_ref()
.and_then(|info| info.auto_compact_token_limit)
});
let experimental_resume = cfg.experimental_resume;
@@ -773,11 +910,18 @@ impl Config {
Self::get_base_instructions(experimental_instructions_path, &resolved_cwd)?;
let base_instructions = base_instructions.or(file_base_instructions);
// Default review model when not set in config; allow CLI override to take precedence.
let review_model = override_review_model
.or(cfg.review_model)
.unwrap_or_else(default_review_model);
let config = Self {
model,
review_model,
model_family,
model_context_window,
model_max_output_tokens,
model_auto_compact_token_limit,
model_provider_id,
model_provider,
cwd: resolved_cwd,
@@ -796,7 +940,6 @@ impl Config {
codex_home,
history,
file_opener: cfg.file_opener.unwrap_or(UriBasedFileOpener::VsCode),
-tui: cfg.tui.unwrap_or_default(),
codex_linux_sandbox_exe,
hide_agent_reasoning: cfg.hide_agent_reasoning.unwrap_or(false),
@@ -806,8 +949,7 @@ impl Config {
.unwrap_or(false),
model_reasoning_effort: config_profile
.model_reasoning_effort
-.or(cfg.model_reasoning_effort)
-.unwrap_or_default(),
+.or(cfg.model_reasoning_effort),
model_reasoning_summary: config_profile
.model_reasoning_summary
.or(cfg.model_reasoning_summary)
@@ -822,10 +964,12 @@ impl Config {
include_plan_tool: include_plan_tool.unwrap_or(false),
include_apply_patch_tool: include_apply_patch_tool.unwrap_or(false),
tools_web_search_request,
preferred_auth_method: cfg.preferred_auth_method.unwrap_or(AuthMode::ChatGPT),
use_experimental_streamable_shell_tool: cfg
.experimental_use_exec_command_tool
.unwrap_or(false),
use_experimental_unified_exec_tool: cfg
.experimental_use_unified_exec_tool
.unwrap_or(false),
include_view_image_tool,
active_profile: active_profile_name,
disable_paste_burst: cfg.disable_paste_burst.unwrap_or(false),
@@ -897,6 +1041,10 @@ fn default_model() -> String {
OPENAI_DEFAULT_MODEL.to_string()
}
fn default_review_model() -> String {
OPENAI_DEFAULT_REVIEW_MODEL.to_string()
}
/// Returns the path to the Codex configuration directory, which can be
/// specified by the `CODEX_HOME` environment variable. If not set, defaults to
/// `~/.codex`.
@@ -938,6 +1086,7 @@ mod tests {
use super::*;
use pretty_assertions::assert_eq;
use tempfile::TempDir;
#[test]
@@ -1028,6 +1177,145 @@ exclude_slash_tmp = true
);
}
#[tokio::test]
async fn persist_model_selection_updates_defaults() -> anyhow::Result<()> {
let codex_home = TempDir::new()?;
persist_model_selection(
codex_home.path(),
None,
"gpt-5-high-new",
Some(ReasoningEffort::High),
)
.await?;
let serialized =
tokio::fs::read_to_string(codex_home.path().join(CONFIG_TOML_FILE)).await?;
let parsed: ConfigToml = toml::from_str(&serialized)?;
assert_eq!(parsed.model.as_deref(), Some("gpt-5-high-new"));
assert_eq!(parsed.model_reasoning_effort, Some(ReasoningEffort::High));
Ok(())
}
#[tokio::test]
async fn persist_model_selection_overwrites_existing_model() -> anyhow::Result<()> {
let codex_home = TempDir::new()?;
let config_path = codex_home.path().join(CONFIG_TOML_FILE);
tokio::fs::write(
&config_path,
r#"
model = "gpt-5"
model_reasoning_effort = "medium"
[profiles.dev]
model = "gpt-4.1"
"#,
)
.await?;
persist_model_selection(
codex_home.path(),
None,
"o4-mini",
Some(ReasoningEffort::High),
)
.await?;
let serialized = tokio::fs::read_to_string(config_path).await?;
let parsed: ConfigToml = toml::from_str(&serialized)?;
assert_eq!(parsed.model.as_deref(), Some("o4-mini"));
assert_eq!(parsed.model_reasoning_effort, Some(ReasoningEffort::High));
assert_eq!(
parsed
.profiles
.get("dev")
.and_then(|profile| profile.model.as_deref()),
Some("gpt-4.1"),
);
Ok(())
}
#[tokio::test]
async fn persist_model_selection_updates_profile() -> anyhow::Result<()> {
let codex_home = TempDir::new()?;
persist_model_selection(
codex_home.path(),
Some("dev"),
"gpt-5-high-new",
Some(ReasoningEffort::Low),
)
.await?;
let serialized =
tokio::fs::read_to_string(codex_home.path().join(CONFIG_TOML_FILE)).await?;
let parsed: ConfigToml = toml::from_str(&serialized)?;
let profile = parsed
.profiles
.get("dev")
.expect("profile should be created");
assert_eq!(profile.model.as_deref(), Some("gpt-5-high-new"));
assert_eq!(profile.model_reasoning_effort, Some(ReasoningEffort::Low));
Ok(())
}
#[tokio::test]
async fn persist_model_selection_updates_existing_profile() -> anyhow::Result<()> {
let codex_home = TempDir::new()?;
let config_path = codex_home.path().join(CONFIG_TOML_FILE);
tokio::fs::write(
&config_path,
r#"
[profiles.dev]
model = "gpt-4"
model_reasoning_effort = "medium"
[profiles.prod]
model = "gpt-5"
"#,
)
.await?;
persist_model_selection(
codex_home.path(),
Some("dev"),
"o4-high",
Some(ReasoningEffort::Medium),
)
.await?;
let serialized = tokio::fs::read_to_string(config_path).await?;
let parsed: ConfigToml = toml::from_str(&serialized)?;
let dev_profile = parsed
.profiles
.get("dev")
.expect("dev profile should survive updates");
assert_eq!(dev_profile.model.as_deref(), Some("o4-high"));
assert_eq!(
dev_profile.model_reasoning_effort,
Some(ReasoningEffort::Medium)
);
assert_eq!(
parsed
.profiles
.get("prod")
.and_then(|profile| profile.model.as_deref()),
Some("gpt-5"),
);
Ok(())
}
struct PrecedenceTestFixture {
cwd: TempDir,
codex_home: TempDir,
@@ -1169,9 +1457,11 @@ model_verbosity = "high"
assert_eq!(
Config {
model: "o3".to_string(),
review_model: "gpt-5".to_string(),
model_family: find_family_for_model("o3").expect("known model slug"),
model_context_window: Some(200_000),
model_max_output_tokens: Some(100_000),
model_auto_compact_token_limit: None,
model_provider_id: "openai".to_string(),
model_provider: fixture.openai_provider.clone(),
approval_policy: AskForApproval::Never,
@@ -1186,11 +1476,10 @@ model_verbosity = "high"
codex_home: fixture.codex_home(),
history: History::default(),
file_opener: UriBasedFileOpener::VsCode,
-tui: Tui::default(),
codex_linux_sandbox_exe: None,
hide_agent_reasoning: false,
show_raw_agent_reasoning: false,
-model_reasoning_effort: ReasoningEffort::High,
+model_reasoning_effort: Some(ReasoningEffort::High),
model_reasoning_summary: ReasoningSummary::Detailed,
model_verbosity: None,
chatgpt_base_url: "https://chatgpt.com/backend-api/".to_string(),
@@ -1199,8 +1488,8 @@ model_verbosity = "high"
include_plan_tool: false,
include_apply_patch_tool: false,
tools_web_search_request: false,
preferred_auth_method: AuthMode::ChatGPT,
use_experimental_streamable_shell_tool: false,
use_experimental_unified_exec_tool: false,
include_view_image_tool: true,
active_profile: Some("o3".to_string()),
disable_paste_burst: false,
@@ -1226,9 +1515,11 @@ model_verbosity = "high"
)?;
let expected_gpt3_profile_config = Config {
model: "gpt-3.5-turbo".to_string(),
review_model: "gpt-5".to_string(),
model_family: find_family_for_model("gpt-3.5-turbo").expect("known model slug"),
model_context_window: Some(16_385),
model_max_output_tokens: Some(4_096),
model_auto_compact_token_limit: None,
model_provider_id: "openai-chat-completions".to_string(),
model_provider: fixture.openai_chat_completions_provider.clone(),
approval_policy: AskForApproval::UnlessTrusted,
@@ -1243,11 +1534,10 @@ model_verbosity = "high"
codex_home: fixture.codex_home(),
history: History::default(),
file_opener: UriBasedFileOpener::VsCode,
-tui: Tui::default(),
codex_linux_sandbox_exe: None,
hide_agent_reasoning: false,
show_raw_agent_reasoning: false,
-model_reasoning_effort: ReasoningEffort::default(),
+model_reasoning_effort: None,
model_reasoning_summary: ReasoningSummary::default(),
model_verbosity: None,
chatgpt_base_url: "https://chatgpt.com/backend-api/".to_string(),
@@ -1256,8 +1546,8 @@ model_verbosity = "high"
include_plan_tool: false,
include_apply_patch_tool: false,
tools_web_search_request: false,
preferred_auth_method: AuthMode::ChatGPT,
use_experimental_streamable_shell_tool: false,
use_experimental_unified_exec_tool: false,
include_view_image_tool: true,
active_profile: Some("gpt3".to_string()),
disable_paste_burst: false,
@@ -1298,9 +1588,11 @@ model_verbosity = "high"
)?;
let expected_zdr_profile_config = Config {
model: "o3".to_string(),
review_model: "gpt-5".to_string(),
model_family: find_family_for_model("o3").expect("known model slug"),
model_context_window: Some(200_000),
model_max_output_tokens: Some(100_000),
model_auto_compact_token_limit: None,
model_provider_id: "openai".to_string(),
model_provider: fixture.openai_provider.clone(),
approval_policy: AskForApproval::OnFailure,
@@ -1315,11 +1607,10 @@ model_verbosity = "high"
codex_home: fixture.codex_home(),
history: History::default(),
file_opener: UriBasedFileOpener::VsCode,
-tui: Tui::default(),
codex_linux_sandbox_exe: None,
hide_agent_reasoning: false,
show_raw_agent_reasoning: false,
-model_reasoning_effort: ReasoningEffort::default(),
+model_reasoning_effort: None,
model_reasoning_summary: ReasoningSummary::default(),
model_verbosity: None,
chatgpt_base_url: "https://chatgpt.com/backend-api/".to_string(),
@@ -1328,8 +1619,8 @@ model_verbosity = "high"
include_plan_tool: false,
include_apply_patch_tool: false,
tools_web_search_request: false,
preferred_auth_method: AuthMode::ChatGPT,
use_experimental_streamable_shell_tool: false,
use_experimental_unified_exec_tool: false,
include_view_image_tool: true,
active_profile: Some("zdr".to_string()),
disable_paste_burst: false,
@@ -1356,9 +1647,11 @@ model_verbosity = "high"
)?;
let expected_gpt5_profile_config = Config {
model: "gpt-5".to_string(),
review_model: "gpt-5".to_string(),
model_family: find_family_for_model("gpt-5").expect("known model slug"),
model_context_window: Some(272_000),
model_max_output_tokens: Some(128_000),
model_auto_compact_token_limit: None,
model_provider_id: "openai".to_string(),
model_provider: fixture.openai_provider.clone(),
approval_policy: AskForApproval::OnFailure,
@@ -1373,11 +1666,10 @@ model_verbosity = "high"
codex_home: fixture.codex_home(),
history: History::default(),
file_opener: UriBasedFileOpener::VsCode,
-tui: Tui::default(),
codex_linux_sandbox_exe: None,
hide_agent_reasoning: false,
show_raw_agent_reasoning: false,
-model_reasoning_effort: ReasoningEffort::High,
+model_reasoning_effort: Some(ReasoningEffort::High),
model_reasoning_summary: ReasoningSummary::Detailed,
model_verbosity: Some(Verbosity::High),
chatgpt_base_url: "https://chatgpt.com/backend-api/".to_string(),
@@ -1386,8 +1678,8 @@ model_verbosity = "high"
include_plan_tool: false,
include_apply_patch_tool: false,
tools_web_search_request: false,
preferred_auth_method: AuthMode::ChatGPT,
use_experimental_streamable_shell_tool: false,
use_experimental_unified_exec_tool: false,
include_view_image_tool: true,
active_profile: Some("gpt5".to_string()),
disable_paste_burst: false,
@@ -1400,17 +1692,14 @@ model_verbosity = "high"
#[test]
fn test_set_project_trusted_writes_explicit_tables() -> anyhow::Result<()> {
let codex_home = TempDir::new().unwrap();
-let project_dir = TempDir::new().unwrap();
+let project_dir = Path::new("/some/path");
let mut doc = DocumentMut::new();
// Call the function under test
-set_project_trusted(codex_home.path(), project_dir.path())?;
+set_project_trusted_inner(&mut doc, project_dir)?;
// Read back the generated config.toml and assert exact contents
-let config_path = codex_home.path().join(CONFIG_TOML_FILE);
-let contents = std::fs::read_to_string(&config_path)?;
+let contents = doc.to_string();
-let raw_path = project_dir.path().to_string_lossy();
+let raw_path = project_dir.to_string_lossy();
let path_str = if raw_path.contains('\\') {
format!("'{raw_path}'")
} else {
@@ -1428,12 +1717,10 @@ trust_level = "trusted"
#[test]
fn test_set_project_trusted_converts_inline_to_explicit() -> anyhow::Result<()> {
let codex_home = TempDir::new().unwrap();
-let project_dir = TempDir::new().unwrap();
+let project_dir = Path::new("/some/path");
// Seed config.toml with an inline project entry under [projects]
let config_path = codex_home.path().join(CONFIG_TOML_FILE);
-let raw_path = project_dir.path().to_string_lossy();
+let raw_path = project_dir.to_string_lossy();
let path_str = if raw_path.contains('\\') {
format!("'{raw_path}'")
} else {
@@ -1445,13 +1732,12 @@ trust_level = "trusted"
{path_str} = {{ trust_level = "untrusted" }}
"#
);
-std::fs::create_dir_all(codex_home.path())?;
-std::fs::write(&config_path, initial)?;
+let mut doc = initial.parse::<DocumentMut>()?;
// Run the function; it should convert to explicit tables and set trusted
-set_project_trusted(codex_home.path(), project_dir.path())?;
+set_project_trusted_inner(&mut doc, project_dir)?;
-let contents = std::fs::read_to_string(&config_path)?;
+let contents = doc.to_string();
// Assert exact output after conversion to explicit table
let expected = format!(
@@ -1465,4 +1751,38 @@ trust_level = "trusted"
Ok(())
}
#[test]
fn test_set_project_trusted_migrates_top_level_inline_projects_preserving_entries()
-> anyhow::Result<()> {
let initial = r#"toplevel = "baz"
projects = { "/Users/mbolin/code/codex4" = { trust_level = "trusted", foo = "bar" } , "/Users/mbolin/code/codex3" = { trust_level = "trusted" } }
model = "foo""#;
let mut doc = initial.parse::<DocumentMut>()?;
// Approve a new directory
let new_project = Path::new("/Users/mbolin/code/codex2");
set_project_trusted_inner(&mut doc, new_project)?;
let contents = doc.to_string();
// Since we created the [projects] table as part of migration, it is kept implicit.
// Expect explicit per-project tables, preserving prior entries and appending the new one.
let expected = r#"toplevel = "baz"
model = "foo"
[projects."/Users/mbolin/code/codex4"]
trust_level = "trusted"
foo = "bar"
[projects."/Users/mbolin/code/codex3"]
trust_level = "trusted"
[projects."/Users/mbolin/code/codex2"]
trust_level = "trusted"
"#;
assert_eq!(contents, expected);
Ok(())
}
}

View File

@@ -7,6 +7,12 @@ use toml_edit::DocumentMut;
pub const CONFIG_KEY_MODEL: &str = "model";
pub const CONFIG_KEY_EFFORT: &str = "model_reasoning_effort";
#[derive(Copy, Clone)]
enum NoneBehavior {
Skip,
Remove,
}
/// Persist overrides into `config.toml` using explicit key segments per
/// override. This avoids ambiguity with keys that contain dots or spaces.
pub async fn persist_overrides(
@@ -14,47 +20,12 @@ pub async fn persist_overrides(
profile: Option<&str>,
overrides: &[(&[&str], &str)],
) -> Result<()> {
-let config_path = codex_home.join(CONFIG_TOML_FILE);
+let with_options: Vec<(&[&str], Option<&str>)> = overrides
+.iter()
+.map(|(segments, value)| (*segments, Some(*value)))
+.collect();
-let mut doc = match tokio::fs::read_to_string(&config_path).await {
-Ok(s) => s.parse::<DocumentMut>()?,
-Err(e) if e.kind() == std::io::ErrorKind::NotFound => {
-tokio::fs::create_dir_all(codex_home).await?;
-DocumentMut::new()
-}
-Err(e) => return Err(e.into()),
-};
-let effective_profile = if let Some(p) = profile {
-Some(p.to_owned())
-} else {
-doc.get("profile")
-.and_then(|i| i.as_str())
-.map(|s| s.to_string())
-};
-for (segments, val) in overrides.iter().copied() {
-let value = toml_edit::value(val);
-if let Some(ref name) = effective_profile {
-if segments.first().copied() == Some("profiles") {
-apply_toml_edit_override_segments(&mut doc, segments, value);
-} else {
-let mut seg_buf: Vec<&str> = Vec::with_capacity(2 + segments.len());
-seg_buf.push("profiles");
-seg_buf.push(name.as_str());
-seg_buf.extend_from_slice(segments);
-apply_toml_edit_override_segments(&mut doc, &seg_buf, value);
-}
-} else {
-apply_toml_edit_override_segments(&mut doc, segments, value);
-}
-}
-let tmp_file = NamedTempFile::new_in(codex_home)?;
-tokio::fs::write(tmp_file.path(), doc.to_string()).await?;
-tmp_file.persist(config_path)?;
-Ok(())
+persist_overrides_with_behavior(codex_home, profile, &with_options, NoneBehavior::Skip).await
}
/// Persist overrides where values may be optional. Any entries with `None`
@@ -65,16 +36,17 @@ pub async fn persist_non_null_overrides(
profile: Option<&str>,
overrides: &[(&[&str], Option<&str>)],
) -> Result<()> {
-let filtered: Vec<(&[&str], &str)> = overrides
-.iter()
-.filter_map(|(k, v)| v.map(|vv| (*k, vv)))
-.collect();
+persist_overrides_with_behavior(codex_home, profile, overrides, NoneBehavior::Skip).await
+}
-if filtered.is_empty() {
-return Ok(());
-}
-persist_overrides(codex_home, profile, &filtered).await
+/// Persist overrides where `None` values clear any existing values from the
+/// configuration file.
+pub async fn persist_overrides_and_clear_if_none(
+codex_home: &Path,
+profile: Option<&str>,
+overrides: &[(&[&str], Option<&str>)],
+) -> Result<()> {
+persist_overrides_with_behavior(codex_home, profile, overrides, NoneBehavior::Remove).await
}
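Concretely (mirroring the tests below), the `Remove` behavior lets a `None` override delete a key that the `Skip` behavior would leave in place:

```toml
# Seeded config.toml
model = "gpt-5"
model_reasoning_effort = "medium"

# After persist_overrides_and_clear_if_none with overrides
# [(&["model"], None), (&["model_reasoning_effort"], Some("high"))]:
model_reasoning_effort = "high"
```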
/// Apply a single override onto a `toml_edit` document while preserving
@@ -121,6 +93,125 @@ fn apply_toml_edit_override_segments(
current[last] = value;
}
async fn persist_overrides_with_behavior(
codex_home: &Path,
profile: Option<&str>,
overrides: &[(&[&str], Option<&str>)],
none_behavior: NoneBehavior,
) -> Result<()> {
if overrides.is_empty() {
return Ok(());
}
let should_skip = match none_behavior {
NoneBehavior::Skip => overrides.iter().all(|(_, value)| value.is_none()),
NoneBehavior::Remove => false,
};
if should_skip {
return Ok(());
}
let config_path = codex_home.join(CONFIG_TOML_FILE);
let read_result = tokio::fs::read_to_string(&config_path).await;
let mut doc = match read_result {
Ok(contents) => contents.parse::<DocumentMut>()?,
Err(e) if e.kind() == std::io::ErrorKind::NotFound => {
if overrides
.iter()
.all(|(_, value)| value.is_none() && matches!(none_behavior, NoneBehavior::Remove))
{
return Ok(());
}
tokio::fs::create_dir_all(codex_home).await?;
DocumentMut::new()
}
Err(e) => return Err(e.into()),
};
let effective_profile = if let Some(p) = profile {
Some(p.to_owned())
} else {
doc.get("profile")
.and_then(|i| i.as_str())
.map(|s| s.to_string())
};
let mut mutated = false;
for (segments, value) in overrides.iter().copied() {
let mut seg_buf: Vec<&str> = Vec::new();
let segments_to_apply: &[&str];
if let Some(ref name) = effective_profile {
if segments.first().copied() == Some("profiles") {
segments_to_apply = segments;
} else {
seg_buf.reserve(2 + segments.len());
seg_buf.push("profiles");
seg_buf.push(name.as_str());
seg_buf.extend_from_slice(segments);
segments_to_apply = seg_buf.as_slice();
}
} else {
segments_to_apply = segments;
}
match value {
Some(v) => {
let item_value = toml_edit::value(v);
apply_toml_edit_override_segments(&mut doc, segments_to_apply, item_value);
mutated = true;
}
None => {
if matches!(none_behavior, NoneBehavior::Remove)
&& remove_toml_edit_segments(&mut doc, segments_to_apply)
{
mutated = true;
}
}
}
}
if !mutated {
return Ok(());
}
let tmp_file = NamedTempFile::new_in(codex_home)?;
tokio::fs::write(tmp_file.path(), doc.to_string()).await?;
tmp_file.persist(config_path)?;
Ok(())
}
fn remove_toml_edit_segments(doc: &mut DocumentMut, segments: &[&str]) -> bool {
use toml_edit::Item;
if segments.is_empty() {
return false;
}
let mut current = doc.as_table_mut();
for seg in &segments[..segments.len() - 1] {
let Some(item) = current.get_mut(seg) else {
return false;
};
match item {
Item::Table(table) => {
current = table;
}
_ => {
return false;
}
}
}
current.remove(segments[segments.len() - 1]).is_some()
}
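For reference, a stand-alone equivalent of removing one nested key with `toml_edit` (key names are illustrative):

```rust
use toml_edit::DocumentMut;

fn main() {
    let mut doc: DocumentMut = "[profiles.team]\nmodel = \"gpt-4\"\nmodel_reasoning_effort = \"low\"\n"
        .parse()
        .expect("valid TOML");
    // Walk to the parent table, then remove the leaf key, as
    // remove_toml_edit_segments does for ["profiles", "team", "model"].
    if let Some(team) = doc["profiles"]["team"].as_table_mut() {
        team.remove("model");
    }
    // Leaves only the model_reasoning_effort entry under [profiles.team].
    print!("{doc}");
}
```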
#[cfg(test)]
mod tests {
use super::*;
@@ -574,6 +665,81 @@ model = "o3"
assert_eq!(contents, expected);
}
#[tokio::test]
async fn persist_clear_none_removes_top_level_value() {
let tmpdir = tempdir().expect("tmp");
let codex_home = tmpdir.path();
let seed = r#"model = "gpt-5"
model_reasoning_effort = "medium"
"#;
tokio::fs::write(codex_home.join(CONFIG_TOML_FILE), seed)
.await
.expect("seed write");
persist_overrides_and_clear_if_none(
codex_home,
None,
&[
(&[CONFIG_KEY_MODEL], None),
(&[CONFIG_KEY_EFFORT], Some("high")),
],
)
.await
.expect("persist");
let contents = read_config(codex_home).await;
let expected = "model_reasoning_effort = \"high\"\n";
assert_eq!(contents, expected);
}
#[tokio::test]
async fn persist_clear_none_respects_active_profile() {
let tmpdir = tempdir().expect("tmp");
let codex_home = tmpdir.path();
let seed = r#"profile = "team"
[profiles.team]
model = "gpt-4"
model_reasoning_effort = "minimal"
"#;
tokio::fs::write(codex_home.join(CONFIG_TOML_FILE), seed)
.await
.expect("seed write");
persist_overrides_and_clear_if_none(
codex_home,
None,
&[
(&[CONFIG_KEY_MODEL], None),
(&[CONFIG_KEY_EFFORT], Some("high")),
],
)
.await
.expect("persist");
let contents = read_config(codex_home).await;
let expected = r#"profile = "team"
[profiles.team]
model_reasoning_effort = "high"
"#;
assert_eq!(contents, expected);
}
#[tokio::test]
async fn persist_clear_none_noop_when_file_missing() {
let tmpdir = tempdir().expect("tmp");
let codex_home = tmpdir.path();
persist_overrides_and_clear_if_none(codex_home, None, &[(&[CONFIG_KEY_MODEL], None)])
.await
.expect("persist");
assert!(!codex_home.join(CONFIG_TOML_FILE).exists());
}
// Test helper moved to bottom per review guidance.
async fn read_config(codex_home: &Path) -> String {
let p = codex_home.join(CONFIG_TOML_FILE);

View File

@@ -32,32 +32,8 @@ impl ConversationHistory {
}
}
-pub(crate) fn keep_last_messages(&mut self, n: usize) {
-if n == 0 {
-self.items.clear();
-return;
-}
-// Collect the last N message items (assistant/user), newest to oldest.
-let mut kept: Vec<ResponseItem> = Vec::with_capacity(n);
-for item in self.items.iter().rev() {
-if let ResponseItem::Message { role, content, .. } = item {
-kept.push(ResponseItem::Message {
-// we need to remove the id or the model will complain that messages are sent without
-// their reasonings
-id: None,
-role: role.clone(),
-content: content.clone(),
-});
-if kept.len() == n {
-break;
-}
-}
-}
-// Preserve chronological order (oldest to newest) within the kept slice.
-kept.reverse();
-self.items = kept;
+pub(crate) fn replace(&mut self, items: Vec<ResponseItem>) {
+self.items = items;
}
}

View File

@@ -150,13 +150,13 @@ impl ConversationManager {
/// caller's `config`). The new conversation will have a fresh id.
pub async fn fork_conversation(
&self,
-conversation_history: Vec<ResponseItem>,
num_messages_to_drop: usize,
config: Config,
+path: PathBuf,
) -> CodexResult<NewConversation> {
// Compute the prefix up to the cut point.
-let history =
-truncate_after_dropping_last_messages(conversation_history, num_messages_to_drop);
+let history = RolloutRecorder::get_rollout_history(&path).await?;
+let history = truncate_after_dropping_last_messages(history, num_messages_to_drop);
// Spawn a new conversation with the computed initial history.
let auth_manager = self.auth_manager.clone();
@@ -171,36 +171,36 @@ impl ConversationManager {
/// Return a prefix of `items` obtained by dropping the last `n` user messages
/// and all items that follow them.
-fn truncate_after_dropping_last_messages(items: Vec<ResponseItem>, n: usize) -> InitialHistory {
+fn truncate_after_dropping_last_messages(history: InitialHistory, n: usize) -> InitialHistory {
if n == 0 {
-let rolled: Vec<RolloutItem> = items.into_iter().map(RolloutItem::ResponseItem).collect();
-return InitialHistory::Forked(rolled);
+return InitialHistory::Forked(history.get_rollout_items());
}
-// Walk backwards counting only `user` Message items, find cut index.
-let mut count = 0usize;
-let mut cut_index = 0usize;
-for (idx, item) in items.iter().enumerate().rev() {
-if let ResponseItem::Message { role, .. } = item
+// Work directly on rollout items, and cut the vector at the nth-from-last user message input.
+let items: Vec<RolloutItem> = history.get_rollout_items();
+// Find indices of user message inputs in rollout order.
+let mut user_positions: Vec<usize> = Vec::new();
+for (idx, item) in items.iter().enumerate() {
+if let RolloutItem::ResponseItem(ResponseItem::Message { role, .. }) = item
&& role == "user"
{
-count += 1;
-if count == n {
-// Cut everything from this user message to the end.
-cut_index = idx;
-break;
-}
+user_positions.push(idx);
}
}
-if cut_index == 0 {
-// No prefix remains after dropping; start a new conversation.
+// If fewer than n user messages exist, treat as empty.
+if user_positions.len() < n {
return InitialHistory::New;
}
+// Cut strictly before the nth-from-last user message (do not keep the nth itself).
+let cut_idx = user_positions[user_positions.len() - n];
+let rolled: Vec<RolloutItem> = items.into_iter().take(cut_idx).collect();
+if rolled.is_empty() {
+InitialHistory::New
+} else {
-let rolled: Vec<RolloutItem> = items
-.into_iter()
-.take(cut_index)
-.map(RolloutItem::ResponseItem)
-.collect();
InitialHistory::Forked(rolled)
+}
}
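A self-contained sketch of the cut rule, with plain role strings standing in for rollout items: dropping the last `n` user messages cuts strictly before the nth-from-last `"user"` entry, and yields an empty history when fewer than `n` user messages exist.

```rust
fn cut_before_nth_last_user<'a>(roles: &[&'a str], n: usize) -> Vec<&'a str> {
    if n == 0 {
        return roles.to_vec(); // nothing to drop; handled upstream as Forked
    }
    let user_positions: Vec<usize> = roles
        .iter()
        .enumerate()
        .filter_map(|(i, r)| (*r == "user").then_some(i))
        .collect();
    if user_positions.len() < n {
        return Vec::new(); // fewer than n user messages: nothing survives
    }
    // Cut strictly before the nth-from-last user message.
    roles[..user_positions[user_positions.len() - n]].to_vec()
}

fn main() {
    let roles = ["user", "assistant", "user", "assistant"];
    assert_eq!(cut_before_nth_last_user(&roles, 1), ["user", "assistant"]);
    assert!(cut_before_nth_last_user(&roles, 2).is_empty());
}
```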
@@ -256,7 +256,13 @@ mod tests {
assistant_msg("a4"),
];
-let truncated = truncate_after_dropping_last_messages(items.clone(), 1);
+// Wrap as InitialHistory::Forked with response items only.
+let initial: Vec<RolloutItem> = items
+.iter()
+.cloned()
+.map(RolloutItem::ResponseItem)
+.collect();
+let truncated = truncate_after_dropping_last_messages(InitialHistory::Forked(initial), 1);
let got_items = truncated.get_rollout_items();
let expected_items = vec![
RolloutItem::ResponseItem(items[0].clone()),
@@ -268,7 +274,12 @@ mod tests {
serde_json::to_value(&expected_items).unwrap()
);
-let truncated2 = truncate_after_dropping_last_messages(items, 2);
+let initial2: Vec<RolloutItem> = items
+.iter()
+.cloned()
+.map(RolloutItem::ResponseItem)
+.collect();
+let truncated2 = truncate_after_dropping_last_messages(InitialHistory::Forked(initial2), 2);
assert!(matches!(truncated2, InitialHistory::New));
}
}

View File

@@ -26,6 +26,7 @@ pub(crate) struct EnvironmentContext {
pub approval_policy: Option<AskForApproval>,
pub sandbox_mode: Option<SandboxMode>,
pub network_access: Option<NetworkAccess>,
pub writable_roots: Option<Vec<PathBuf>>,
pub shell: Option<Shell>,
}
@@ -57,6 +58,16 @@ impl EnvironmentContext {
}
None => None,
},
writable_roots: match sandbox_policy {
Some(SandboxPolicy::WorkspaceWrite { writable_roots, .. }) => {
if writable_roots.is_empty() {
None
} else {
Some(writable_roots)
}
}
_ => None,
},
shell,
}
}
@@ -72,6 +83,7 @@ impl EnvironmentContext {
/// <cwd>...</cwd>
/// <approval_policy>...</approval_policy>
/// <sandbox_mode>...</sandbox_mode>
/// <writable_roots>...</writable_roots>
/// <network_access>...</network_access>
/// <shell>...</shell>
/// </environment_context>
@@ -94,6 +106,16 @@ impl EnvironmentContext {
" <network_access>{network_access}</network_access>"
));
}
if let Some(writable_roots) = self.writable_roots {
lines.push(" <writable_roots>".to_string());
for writable_root in writable_roots {
lines.push(format!(
" <root>{}</root>",
writable_root.to_string_lossy()
));
}
lines.push(" </writable_roots>".to_string());
}
if let Some(shell) = self.shell
&& let Some(shell_name) = shell.name()
{
@@ -115,3 +137,77 @@ impl From<EnvironmentContext> for ResponseItem {
}
}
}
#[cfg(test)]
mod tests {
use super::*;
use pretty_assertions::assert_eq;
fn workspace_write_policy(writable_roots: Vec<&str>, network_access: bool) -> SandboxPolicy {
SandboxPolicy::WorkspaceWrite {
writable_roots: writable_roots.into_iter().map(PathBuf::from).collect(),
network_access,
exclude_tmpdir_env_var: false,
exclude_slash_tmp: false,
}
}
#[test]
fn serialize_workspace_write_environment_context() {
let context = EnvironmentContext::new(
Some(PathBuf::from("/repo")),
Some(AskForApproval::OnRequest),
Some(workspace_write_policy(vec!["/repo", "/tmp"], false)),
None,
);
let expected = r#"<environment_context>
<cwd>/repo</cwd>
<approval_policy>on-request</approval_policy>
<sandbox_mode>workspace-write</sandbox_mode>
<network_access>restricted</network_access>
<writable_roots>
<root>/repo</root>
<root>/tmp</root>
</writable_roots>
</environment_context>"#;
assert_eq!(context.serialize_to_xml(), expected);
}
#[test]
fn serialize_read_only_environment_context() {
let context = EnvironmentContext::new(
None,
Some(AskForApproval::Never),
Some(SandboxPolicy::ReadOnly),
None,
);
let expected = r#"<environment_context>
<approval_policy>never</approval_policy>
<sandbox_mode>read-only</sandbox_mode>
<network_access>restricted</network_access>
</environment_context>"#;
assert_eq!(context.serialize_to_xml(), expected);
}
#[test]
fn serialize_full_access_environment_context() {
let context = EnvironmentContext::new(
None,
Some(AskForApproval::OnFailure),
Some(SandboxPolicy::DangerFullAccess),
None,
);
let expected = r#"<environment_context>
<approval_policy>on-failure</approval_policy>
<sandbox_mode>danger-full-access</sandbox_mode>
<network_access>enabled</network_access>
</environment_context>"#;
assert_eq!(context.serialize_to_xml(), expected);
}
}

View File

@@ -1,3 +1,5 @@
use crate::token_data::KnownPlan;
use crate::token_data::PlanType;
use codex_protocol::mcp_protocol::ConversationId;
use reqwest::StatusCode;
use serde_json;
@@ -127,38 +129,58 @@ pub enum CodexErr {
#[derive(Debug)]
pub struct UsageLimitReachedError {
-pub plan_type: Option<String>,
-pub resets_in_seconds: Option<u64>,
+pub(crate) plan_type: Option<PlanType>,
+pub(crate) resets_in_seconds: Option<u64>,
}
impl std::fmt::Display for UsageLimitReachedError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
-// Base message differs slightly for legacy ChatGPT Plus plan users.
-if let Some(plan_type) = &self.plan_type
-&& plan_type == "plus"
-{
-write!(
-f,
-"You've hit your usage limit. Upgrade to Pro (https://openai.com/chatgpt/pricing) or try again"
-)?;
-if let Some(secs) = self.resets_in_seconds {
-let reset_duration = format_reset_duration(secs);
-write!(f, " in {reset_duration}.")?;
-} else {
-write!(f, " later.")?;
+let message = match self.plan_type.as_ref() {
+Some(PlanType::Known(KnownPlan::Plus)) => format!(
+"You've hit your usage limit. Upgrade to Pro (https://openai.com/chatgpt/pricing){}",
+retry_suffix_after_or(self.resets_in_seconds)
+),
+Some(PlanType::Known(KnownPlan::Team)) | Some(PlanType::Known(KnownPlan::Business)) => {
+format!(
+"You've hit your usage limit. To get more access now, send a request to your admin{}",
+retry_suffix_after_or(self.resets_in_seconds)
+)
}
-} else {
-write!(f, "You've hit your usage limit.")?;
-if let Some(secs) = self.resets_in_seconds {
-let reset_duration = format_reset_duration(secs);
-write!(f, " Try again in {reset_duration}.")?;
-} else {
-write!(f, " Try again later.")?;
+Some(PlanType::Known(KnownPlan::Free)) => {
+"To use Codex with your ChatGPT plan, upgrade to Plus: https://openai.com/chatgpt/pricing."
+.to_string()
}
-}
+Some(PlanType::Known(KnownPlan::Pro))
+| Some(PlanType::Known(KnownPlan::Enterprise))
+| Some(PlanType::Known(KnownPlan::Edu)) => format!(
+"You've hit your usage limit.{}",
+retry_suffix(self.resets_in_seconds)
+),
+Some(PlanType::Unknown(_)) | None => format!(
+"You've hit your usage limit.{}",
+retry_suffix(self.resets_in_seconds)
+),
+};
-Ok(())
+write!(f, "{message}")
}
}
fn retry_suffix(resets_in_seconds: Option<u64>) -> String {
if let Some(secs) = resets_in_seconds {
let reset_duration = format_reset_duration(secs);
format!(" Try again in {reset_duration}.")
} else {
" Try again later.".to_string()
}
}
fn retry_suffix_after_or(resets_in_seconds: Option<u64>) -> String {
if let Some(secs) = resets_in_seconds {
let reset_duration = format_reset_duration(secs);
format!(" or try again in {reset_duration}.")
} else {
" or try again later.".to_string()
}
}
@@ -237,7 +259,7 @@ mod tests {
#[test]
fn usage_limit_reached_error_formats_plus_plan() {
let err = UsageLimitReachedError {
plan_type: Some("plus".to_string()),
plan_type: Some(PlanType::Known(KnownPlan::Plus)),
resets_in_seconds: None,
};
assert_eq!(
@@ -246,6 +268,18 @@ mod tests {
);
}
#[test]
fn usage_limit_reached_error_formats_free_plan() {
let err = UsageLimitReachedError {
plan_type: Some(PlanType::Known(KnownPlan::Free)),
resets_in_seconds: Some(3600),
};
assert_eq!(
err.to_string(),
"To use Codex with your ChatGPT plan, upgrade to Plus: https://openai.com/chatgpt/pricing."
);
}
#[test]
fn usage_limit_reached_error_formats_default_when_none() {
let err = UsageLimitReachedError {
@@ -258,10 +292,34 @@ mod tests {
);
}
#[test]
fn usage_limit_reached_error_formats_team_plan() {
let err = UsageLimitReachedError {
plan_type: Some(PlanType::Known(KnownPlan::Team)),
resets_in_seconds: Some(3600),
};
assert_eq!(
err.to_string(),
"You've hit your usage limit. To get more access now, send a request to your admin or try again in 1 hour."
);
}
#[test]
fn usage_limit_reached_error_formats_business_plan_without_reset() {
let err = UsageLimitReachedError {
plan_type: Some(PlanType::Known(KnownPlan::Business)),
resets_in_seconds: None,
};
assert_eq!(
err.to_string(),
"You've hit your usage limit. To get more access now, send a request to your admin or try again later."
);
}
#[test]
fn usage_limit_reached_error_formats_default_for_other_plans() {
let err = UsageLimitReachedError {
plan_type: Some("pro".to_string()),
plan_type: Some(PlanType::Known(KnownPlan::Pro)),
resets_in_seconds: None,
};
assert_eq!(
@@ -285,7 +343,7 @@ mod tests {
#[test]
fn usage_limit_reached_includes_hours_and_minutes() {
let err = UsageLimitReachedError {
plan_type: Some("plus".to_string()),
plan_type: Some(PlanType::Known(KnownPlan::Plus)),
resets_in_seconds: Some(3 * 3600 + 32 * 60),
};
assert_eq!(

View File

@@ -159,7 +159,7 @@ mod tests {
EventMsg::UserMessage(user) => {
assert_eq!(user.message, "Hello world");
assert!(matches!(user.kind, Some(InputMessageKind::Plain)));
assert_eq!(user.images, Some(vec![img1.clone(), img2.clone()]));
assert_eq!(user.images, Some(vec![img1, img2]));
}
other => panic!("expected UserMessage, got {other:?}"),
}

View File

@@ -24,6 +24,9 @@ pub(crate) struct ExecCommandSession {
/// JoinHandle for the child wait task.
wait_handle: StdMutex<Option<JoinHandle<()>>>,
/// Tracks whether the underlying process has exited.
exit_status: std::sync::Arc<std::sync::atomic::AtomicBool>,
}
impl ExecCommandSession {
@@ -34,6 +37,7 @@ impl ExecCommandSession {
reader_handle: JoinHandle<()>,
writer_handle: JoinHandle<()>,
wait_handle: JoinHandle<()>,
exit_status: std::sync::Arc<std::sync::atomic::AtomicBool>,
) -> Self {
Self {
writer_tx,
@@ -42,6 +46,7 @@ impl ExecCommandSession {
reader_handle: StdMutex::new(Some(reader_handle)),
writer_handle: StdMutex::new(Some(writer_handle)),
wait_handle: StdMutex::new(Some(wait_handle)),
exit_status,
}
}
@@ -52,6 +57,10 @@ impl ExecCommandSession {
pub(crate) fn output_receiver(&self) -> broadcast::Receiver<Vec<u8>> {
self.output_tx.subscribe()
}
pub(crate) fn has_exited(&self) -> bool {
self.exit_status.load(std::sync::atomic::Ordering::SeqCst)
}
}
impl Drop for ExecCommandSession {

View File

@@ -6,6 +6,7 @@ mod session_manager;
pub use exec_command_params::ExecCommandParams;
pub use exec_command_params::WriteStdinParams;
pub(crate) use exec_command_session::ExecCommandSession;
pub use responses_api::EXEC_COMMAND_TOOL_NAME;
pub use responses_api::WRITE_STDIN_TOOL_NAME;
pub use responses_api::create_exec_command_tool_for_responses_api;

View File

@@ -3,6 +3,7 @@ use std::io::ErrorKind;
use std::io::Read;
use std::sync::Arc;
use std::sync::Mutex as StdMutex;
use std::sync::atomic::AtomicBool;
use std::sync::atomic::AtomicU32;
use portable_pty::CommandBuilder;
@@ -19,6 +20,7 @@ use crate::exec_command::exec_command_params::ExecCommandParams;
use crate::exec_command::exec_command_params::WriteStdinParams;
use crate::exec_command::exec_command_session::ExecCommandSession;
use crate::exec_command::session_id::SessionId;
use crate::truncate::truncate_middle;
use codex_protocol::models::FunctionCallOutputPayload;
#[derive(Debug, Default)]
@@ -327,11 +329,14 @@ async fn create_exec_command_session(
// Keep the child alive until it exits, then signal exit code.
let (exit_tx, exit_rx) = oneshot::channel::<i32>();
let exit_status = Arc::new(AtomicBool::new(false));
let wait_exit_status = exit_status.clone();
let wait_handle = tokio::task::spawn_blocking(move || {
let code = match child.wait() {
Ok(status) => status.exit_code() as i32,
Err(_) => -1,
};
wait_exit_status.store(true, std::sync::atomic::Ordering::SeqCst);
let _ = exit_tx.send(code);
});
@@ -343,116 +348,11 @@ async fn create_exec_command_session(
reader_handle,
writer_handle,
wait_handle,
exit_status,
);
Ok((session, exit_rx))
}
/// Truncate the middle of a UTF-8 string to at most `max_bytes` bytes,
/// preserving the beginning and the end. Returns the possibly truncated
/// string and `Some(original_token_count)` (estimated at 4 bytes/token)
/// if truncation occurred; otherwise returns the original string and `None`.
fn truncate_middle(s: &str, max_bytes: usize) -> (String, Option<u64>) {
// No truncation needed
if s.len() <= max_bytes {
return (s.to_string(), None);
}
let est_tokens = (s.len() as u64).div_ceil(4);
if max_bytes == 0 {
// Cannot keep any content; still return a full marker (never truncated).
return (format!("{est_tokens} tokens truncated…"), Some(est_tokens));
}
// Helper to truncate a string to a given byte length on a char boundary.
fn truncate_on_boundary(input: &str, max_len: usize) -> &str {
if input.len() <= max_len {
return input;
}
let mut end = max_len;
while end > 0 && !input.is_char_boundary(end) {
end -= 1;
}
&input[..end]
}
// Given a left/right budget, prefer newline boundaries; otherwise fall back
// to UTF-8 char boundaries.
fn pick_prefix_end(s: &str, left_budget: usize) -> usize {
if let Some(head) = s.get(..left_budget)
&& let Some(i) = head.rfind('\n')
{
return i + 1; // keep the newline so suffix starts on a fresh line
}
truncate_on_boundary(s, left_budget).len()
}
fn pick_suffix_start(s: &str, right_budget: usize) -> usize {
let start_tail = s.len().saturating_sub(right_budget);
if let Some(tail) = s.get(start_tail..)
&& let Some(i) = tail.find('\n')
{
return start_tail + i + 1; // start after newline
}
// Fall back to a char boundary at or after start_tail.
let mut idx = start_tail.min(s.len());
while idx < s.len() && !s.is_char_boundary(idx) {
idx += 1;
}
idx
}
// Refine marker length and budgets until stable. Marker is never truncated.
let mut guess_tokens = est_tokens; // worst-case: everything truncated
for _ in 0..4 {
let marker = format!("{guess_tokens} tokens truncated…");
let marker_len = marker.len();
let keep_budget = max_bytes.saturating_sub(marker_len);
if keep_budget == 0 {
// No room for any content within the cap; return a full, untruncated marker
// that reflects the entire truncated content.
return (format!("{est_tokens} tokens truncated…"), Some(est_tokens));
}
let left_budget = keep_budget / 2;
let right_budget = keep_budget - left_budget;
let prefix_end = pick_prefix_end(s, left_budget);
let mut suffix_start = pick_suffix_start(s, right_budget);
if suffix_start < prefix_end {
suffix_start = prefix_end;
}
let kept_content_bytes = prefix_end + (s.len() - suffix_start);
let truncated_content_bytes = s.len().saturating_sub(kept_content_bytes);
let new_tokens = (truncated_content_bytes as u64).div_ceil(4);
if new_tokens == guess_tokens {
let mut out = String::with_capacity(marker_len + kept_content_bytes + 1);
out.push_str(&s[..prefix_end]);
out.push_str(&marker);
// Place marker on its own line for symmetry when we keep line boundaries.
out.push('\n');
out.push_str(&s[suffix_start..]);
return (out, Some(est_tokens));
}
guess_tokens = new_tokens;
}
// Fallback: use last guess to build output.
let marker = format!("{guess_tokens} tokens truncated…");
let marker_len = marker.len();
let keep_budget = max_bytes.saturating_sub(marker_len);
if keep_budget == 0 {
return (format!("{est_tokens} tokens truncated…"), Some(est_tokens));
}
let left_budget = keep_budget / 2;
let right_budget = keep_budget - left_budget;
let prefix_end = pick_prefix_end(s, left_budget);
let suffix_start = pick_suffix_start(s, right_budget);
let mut out = String::with_capacity(marker_len + prefix_end + (s.len() - suffix_start) + 1);
out.push_str(&s[..prefix_end]);
out.push_str(&marker);
out.push('\n');
out.push_str(&s[suffix_start..]);
(out, Some(est_tokens))
}
#[cfg(test)]
mod tests {
use super::*;
@@ -616,50 +516,4 @@ Output:
abc"#;
assert_eq!(expected, text);
}
#[test]
fn truncate_middle_no_newlines_fallback() {
// A long string with no newlines that exceeds the cap.
let s = "abcdefghijklmnopqrstuvwxyz0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ";
let max_bytes = 16; // force truncation
let (out, original) = truncate_middle(s, max_bytes);
// For very small caps, we return the full, untruncated marker,
// even if it exceeds the cap.
assert_eq!(out, "…16 tokens truncated…");
// Original string length is 62 bytes => ceil(62/4) = 16 tokens.
assert_eq!(original, Some(16));
}
#[test]
fn truncate_middle_prefers_newline_boundaries() {
// Build a multi-line string of 20 numbered lines (each "NNN\n").
let mut s = String::new();
for i in 1..=20 {
s.push_str(&format!("{i:03}\n"));
}
// Total length: 20 lines * 4 bytes per line = 80 bytes.
assert_eq!(s.len(), 80);
// Choose a cap that forces truncation while leaving room for
// a few lines on each side after accounting for the marker.
let max_bytes = 64;
// Expect exact output: first 4 lines, marker, last 4 lines, and correct token estimate (80/4 = 20).
assert_eq!(
truncate_middle(&s, max_bytes),
(
r#"001
002
003
004
…12 tokens truncated…
017
018
019
020
"#
.to_string(),
Some(20)
)
);
}
}

View File

@@ -802,7 +802,7 @@ mod tests {
async fn resolve_root_git_project_for_trust_regular_repo_returns_repo_root() {
let temp_dir = TempDir::new().expect("Failed to create temp dir");
let repo_path = create_test_git_repo(&temp_dir).await;
let expected = std::fs::canonicalize(&repo_path).unwrap().to_path_buf();
let expected = std::fs::canonicalize(&repo_path).unwrap();
assert_eq!(
resolve_root_git_project_for_trust(&repo_path),
@@ -810,10 +810,7 @@ mod tests {
);
let nested = repo_path.join("sub/dir");
std::fs::create_dir_all(&nested).unwrap();
assert_eq!(
resolve_root_git_project_for_trust(&nested),
Some(expected.clone())
);
assert_eq!(resolve_root_git_project_for_trust(&nested), Some(expected));
}
#[tokio::test]

View File

@@ -0,0 +1,68 @@
use anyhow::Context;
use serde::Deserialize;
use serde::Serialize;
use std::path::Path;
use std::path::PathBuf;
pub(crate) const INTERNAL_STORAGE_FILE: &str = "internal_storage.json";
#[derive(Debug, Default, Clone, Serialize, Deserialize)]
pub struct InternalStorage {
#[serde(skip)]
storage_path: PathBuf,
#[serde(default)]
pub gpt_5_high_model_prompt_seen: bool,
}
// TODO(jif) generalise all the file writers and build proper async channel inserters.
impl InternalStorage {
pub fn load(codex_home: &Path) -> Self {
let storage_path = codex_home.join(INTERNAL_STORAGE_FILE);
match std::fs::read_to_string(&storage_path) {
Ok(serialized) => match serde_json::from_str::<Self>(&serialized) {
Ok(mut storage) => {
storage.storage_path = storage_path;
storage
}
Err(error) => {
tracing::warn!("failed to parse internal storage: {error:?}");
Self::empty(storage_path)
}
},
Err(error) => {
tracing::warn!("failed to read internal storage: {error:?}");
Self::empty(storage_path)
}
}
}
fn empty(storage_path: PathBuf) -> Self {
Self {
storage_path,
..Default::default()
}
}
pub async fn persist(&self) -> anyhow::Result<()> {
let serialized = serde_json::to_string_pretty(self)?;
if let Some(parent) = self.storage_path.parent() {
tokio::fs::create_dir_all(parent).await.with_context(|| {
format!(
"failed to create internal storage directory at {}",
parent.display()
)
})?;
}
tokio::fs::write(&self.storage_path, serialized)
.await
.with_context(|| {
format!(
"failed to persist internal storage at {}",
self.storage_path.display()
)
})
}
}
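Not shown in the diff: a minimal usage sketch. This is hypothetical call-site code, assuming the `codex_core::internal_storage` path exported below; `load` never fails (it logs a warning and falls back to an empty store), so only `persist` is fallible:

```rust
use std::path::Path;
use codex_core::internal_storage::InternalStorage;

// Hypothetical helper: record that the GPT-5 "high" model prompt was shown,
// writing the flag to $CODEX_HOME/internal_storage.json.
async fn mark_gpt5_prompt_seen(codex_home: &Path) -> anyhow::Result<()> {
    let mut storage = InternalStorage::load(codex_home); // empty store on read/parse error
    storage.gpt_5_high_model_prompt_seen = true;
    storage.persist().await
}
```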

View File

@@ -28,6 +28,7 @@ mod exec_command;
pub mod exec_env;
mod flags;
pub mod git_info;
pub mod internal_storage;
mod is_safe_command;
pub mod landlock;
mod mcp_connection_manager;
@@ -35,6 +36,8 @@ mod mcp_tool_call;
mod message_history;
mod model_provider_info;
pub mod parse_command;
mod truncate;
mod unified_exec;
mod user_instructions;
pub use model_provider_info::BUILT_IN_OSS_MODEL_PROVIDER_ID;
pub use model_provider_info::ModelProviderInfo;
@@ -72,6 +75,7 @@ pub use rollout::list::ConversationsPage;
pub use rollout::list::Cursor;
mod user_notification;
pub mod util;
pub use apply_patch::CODEX_APPLY_PATCH_ARG1;
pub use safety::get_platform_sandbox;
// Re-export the protocol types from the standalone `codex-protocol` crate so existing

View File

@@ -17,7 +17,7 @@ use anyhow::Result;
use anyhow::anyhow;
use codex_mcp_client::McpClient;
use mcp_types::ClientCapabilities;
use mcp_types::McpClientInfo;
use mcp_types::Implementation;
use mcp_types::Tool;
use serde_json::json;
@@ -159,10 +159,14 @@ impl McpConnectionManager {
// indicates this should be an empty object.
elicitation: Some(json!({})),
},
client_info: McpClientInfo {
client_info: Implementation {
name: "codex-mcp-client".to_owned(),
version: env!("CARGO_PKG_VERSION").to_owned(),
title: Some("Codex".into()),
// This field is used by Codex when it is an MCP
// server: it should not be used when Codex is
// an MCP client.
user_agent: None,
},
protocol_version: mcp_types::MCP_SCHEMA_VERSION.to_owned(),
};

View File

@@ -80,7 +80,10 @@ pub struct ModelProviderInfo {
/// the connection as lost.
pub stream_idle_timeout_ms: Option<u64>,
/// Whether this provider requires some form of standard authentication (API key, ChatGPT token).
/// Does this provider require an OpenAI API key or ChatGPT login token? If true,
/// the user is presented with the login screen on first run, and the login preference
/// and token/key are stored in auth.json. If false (the default), the login screen is
/// skipped, and the API key (if needed) comes from the environment variable named by `env_key`.
#[serde(default)]
pub requires_openai_auth: bool,
}
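For illustration, a hypothetical `config.toml` provider entry exercising this default: with `requires_openai_auth` left false, the login screen is skipped and the key is read from the named environment variable (the provider id and all values here are made up):

```toml
[model_providers.example]
name = "Example"
base_url = "https://api.example.com/v1"
env_key = "EXAMPLE_API_KEY"
wire_api = "responses"
# requires_openai_auth defaults to false: no login screen;
# the API key comes from $EXAMPLE_API_KEY rather than auth.json.
```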
@@ -159,6 +162,21 @@ impl ModelProviderInfo {
}
}
pub(crate) fn is_azure_responses_endpoint(&self) -> bool {
if self.wire_api != WireApi::Responses {
return false;
}
if self.name.eq_ignore_ascii_case("azure") {
return true;
}
self.base_url
.as_ref()
.map(|base| matches_azure_responses_base_url(base))
.unwrap_or(false)
}
/// Apply provider-specific HTTP headers (both static and environment-based)
/// onto an existing `reqwest::RequestBuilder` and return the updated
/// builder.
@@ -326,6 +344,18 @@ pub fn create_oss_provider_with_base_url(base_url: &str) -> ModelProviderInfo {
}
}
fn matches_azure_responses_base_url(base_url: &str) -> bool {
let base = base_url.to_ascii_lowercase();
const AZURE_MARKERS: [&str; 5] = [
"openai.azure.",
"cognitiveservices.azure.",
"aoai.azure.",
"azure-api.",
"azurefd.",
];
AZURE_MARKERS.iter().any(|marker| base.contains(marker))
}
#[cfg(test)]
mod tests {
use super::*;
@@ -416,4 +446,69 @@ env_http_headers = { "X-Example-Env-Header" = "EXAMPLE_ENV_VAR" }
let provider: ModelProviderInfo = toml::from_str(azure_provider_toml).unwrap();
assert_eq!(expected_provider, provider);
}
#[test]
fn detects_azure_responses_base_urls() {
fn provider_for(base_url: &str) -> ModelProviderInfo {
ModelProviderInfo {
name: "test".into(),
base_url: Some(base_url.into()),
env_key: None,
env_key_instructions: None,
wire_api: WireApi::Responses,
query_params: None,
http_headers: None,
env_http_headers: None,
request_max_retries: None,
stream_max_retries: None,
stream_idle_timeout_ms: None,
requires_openai_auth: false,
}
}
let positive_cases = [
"https://foo.openai.azure.com/openai",
"https://foo.openai.azure.us/openai/deployments/bar",
"https://foo.cognitiveservices.azure.cn/openai",
"https://foo.aoai.azure.com/openai",
"https://foo.openai.azure-api.net/openai",
"https://foo.z01.azurefd.net/",
];
for base_url in positive_cases {
let provider = provider_for(base_url);
assert!(
provider.is_azure_responses_endpoint(),
"expected {base_url} to be detected as Azure"
);
}
let named_provider = ModelProviderInfo {
name: "Azure".into(),
base_url: Some("https://example.com".into()),
env_key: None,
env_key_instructions: None,
wire_api: WireApi::Responses,
query_params: None,
http_headers: None,
env_http_headers: None,
request_max_retries: None,
stream_max_retries: None,
stream_idle_timeout_ms: None,
requires_openai_auth: false,
};
assert!(named_provider.is_azure_responses_endpoint());
let negative_cases = [
"https://api.openai.com/v1",
"https://example.com/openai",
"https://myproxy.azurewebsites.net/openai",
];
for base_url in negative_cases {
let provider = provider_for(base_url);
assert!(
!provider.is_azure_responses_endpoint(),
"expected {base_url} not to be detected as Azure"
);
}
}
}

View File

@@ -12,6 +12,19 @@ pub(crate) struct ModelInfo {
/// Maximum number of output tokens that can be generated for the model.
pub(crate) max_output_tokens: u64,
/// Token threshold where we should automatically compact conversation history.
pub(crate) auto_compact_token_limit: Option<i64>,
}
impl ModelInfo {
const fn new(context_window: u64, max_output_tokens: u64) -> Self {
Self {
context_window,
max_output_tokens,
auto_compact_token_limit: None,
}
}
}
pub(crate) fn get_model_info(model_family: &ModelFamily) -> Option<ModelInfo> {
@@ -20,73 +33,37 @@ pub(crate) fn get_model_info(model_family: &ModelFamily) -> Option<ModelInfo> {
// OSS models have a 128k shared token pool.
// Arbitrarily splitting it: 3/4 input context, 1/4 output.
// https://openai.com/index/gpt-oss-model-card/
"gpt-oss-20b" => Some(ModelInfo {
context_window: 96_000,
max_output_tokens: 32_000,
}),
"gpt-oss-120b" => Some(ModelInfo {
context_window: 96_000,
max_output_tokens: 32_000,
}),
"gpt-oss-20b" => Some(ModelInfo::new(96_000, 32_000)),
"gpt-oss-120b" => Some(ModelInfo::new(96_000, 32_000)),
// https://platform.openai.com/docs/models/o3
"o3" => Some(ModelInfo {
context_window: 200_000,
max_output_tokens: 100_000,
}),
"o3" => Some(ModelInfo::new(200_000, 100_000)),
// https://platform.openai.com/docs/models/o4-mini
"o4-mini" => Some(ModelInfo {
context_window: 200_000,
max_output_tokens: 100_000,
}),
"o4-mini" => Some(ModelInfo::new(200_000, 100_000)),
// https://platform.openai.com/docs/models/codex-mini-latest
"codex-mini-latest" => Some(ModelInfo {
context_window: 200_000,
max_output_tokens: 100_000,
}),
"codex-mini-latest" => Some(ModelInfo::new(200_000, 100_000)),
// As of Jun 25, 2025, gpt-4.1 defaults to gpt-4.1-2025-04-14.
// https://platform.openai.com/docs/models/gpt-4.1
"gpt-4.1" | "gpt-4.1-2025-04-14" => Some(ModelInfo {
context_window: 1_047_576,
max_output_tokens: 32_768,
}),
"gpt-4.1" | "gpt-4.1-2025-04-14" => Some(ModelInfo::new(1_047_576, 32_768)),
// As of Jun 25, 2025, gpt-4o defaults to gpt-4o-2024-08-06.
// https://platform.openai.com/docs/models/gpt-4o
"gpt-4o" | "gpt-4o-2024-08-06" => Some(ModelInfo {
context_window: 128_000,
max_output_tokens: 16_384,
}),
"gpt-4o" | "gpt-4o-2024-08-06" => Some(ModelInfo::new(128_000, 16_384)),
// https://platform.openai.com/docs/models/gpt-4o?snapshot=gpt-4o-2024-05-13
"gpt-4o-2024-05-13" => Some(ModelInfo {
context_window: 128_000,
max_output_tokens: 4_096,
}),
"gpt-4o-2024-05-13" => Some(ModelInfo::new(128_000, 4_096)),
// https://platform.openai.com/docs/models/gpt-4o?snapshot=gpt-4o-2024-11-20
"gpt-4o-2024-11-20" => Some(ModelInfo {
context_window: 128_000,
max_output_tokens: 16_384,
}),
"gpt-4o-2024-11-20" => Some(ModelInfo::new(128_000, 16_384)),
// https://platform.openai.com/docs/models/gpt-3.5-turbo
"gpt-3.5-turbo" => Some(ModelInfo {
context_window: 16_385,
max_output_tokens: 4_096,
}),
"gpt-3.5-turbo" => Some(ModelInfo::new(16_385, 4_096)),
"gpt-5" => Some(ModelInfo {
context_window: 272_000,
max_output_tokens: 128_000,
}),
_ if slug.starts_with("gpt-5") => Some(ModelInfo::new(272_000, 128_000)),
_ if slug.starts_with("codex-") => Some(ModelInfo {
context_window: 272_000,
max_output_tokens: 128_000,
}),
_ if slug.starts_with("codex-") => Some(ModelInfo::new(272_000, 128_000)),
_ => None,
}

View File

@@ -70,6 +70,7 @@ pub(crate) struct ToolsConfig {
pub apply_patch_tool_type: Option<ApplyPatchToolType>,
pub web_search_request: bool,
pub include_view_image_tool: bool,
pub experimental_unified_exec_tool: bool,
}
pub(crate) struct ToolsConfigParams<'a> {
@@ -81,6 +82,7 @@ pub(crate) struct ToolsConfigParams<'a> {
pub(crate) include_web_search_request: bool,
pub(crate) use_streamable_shell_tool: bool,
pub(crate) include_view_image_tool: bool,
pub(crate) experimental_unified_exec_tool: bool,
}
impl ToolsConfig {
@@ -94,6 +96,7 @@ impl ToolsConfig {
include_web_search_request,
use_streamable_shell_tool,
include_view_image_tool,
experimental_unified_exec_tool,
} = params;
let mut shell_type = if *use_streamable_shell_tool {
ConfigShellToolType::StreamableShell
@@ -126,6 +129,7 @@ impl ToolsConfig {
apply_patch_tool_type,
web_search_request: *include_web_search_request,
include_view_image_tool: *include_view_image_tool,
experimental_unified_exec_tool: *experimental_unified_exec_tool,
}
}
}
@@ -200,6 +204,53 @@ fn create_shell_tool() -> OpenAiTool {
})
}
fn create_unified_exec_tool() -> OpenAiTool {
let mut properties = BTreeMap::new();
properties.insert(
"input".to_string(),
JsonSchema::Array {
items: Box::new(JsonSchema::String { description: None }),
description: Some(
"When no session_id is provided, treat the array as the command and arguments \
to launch. When session_id is set, concatenate the strings (in order) and write \
them to the session's stdin."
.to_string(),
),
},
);
properties.insert(
"session_id".to_string(),
JsonSchema::String {
description: Some(
"Identifier for an existing interactive session. If omitted, a new command \
is spawned."
.to_string(),
),
},
);
properties.insert(
"timeout_ms".to_string(),
JsonSchema::Number {
description: Some(
"Maximum time in milliseconds to wait for output after writing the input."
.to_string(),
),
},
);
OpenAiTool::Function(ResponsesApiTool {
name: "unified_exec".to_string(),
description:
"Runs a command in a PTY. Provide a session_id to reuse an existing interactive session.".to_string(),
strict: false,
parameters: JsonSchema::Object {
properties,
required: Some(vec!["input".to_string()]),
additional_properties: Some(false),
},
})
}
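The schema above implies two call shapes for the model. A sketch of plausible arguments (values are illustrative; `json!` is already used elsewhere in this crate):

```rust
use serde_json::json;

// 1) No session_id: `input` is the argv of a new PTY-backed command.
let start_session = json!({
    "input": ["bash", "-i"],
    "timeout_ms": 2_500,
});

// 2) With session_id: the `input` strings are concatenated in order and
//    written to the existing session's stdin.
let write_stdin = json!({
    "session_id": "0",
    "input": ["echo $CODEX_INTERACTIVE_SHELL_VAR\n"],
    "timeout_ms": 1_000,
});
```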
fn create_shell_tool_for_sandbox(sandbox_policy: &SandboxPolicy) -> OpenAiTool {
let mut properties = BTreeMap::new();
properties.insert(
@@ -237,61 +288,9 @@ fn create_shell_tool_for_sandbox(sandbox_policy: &SandboxPolicy) -> OpenAiTool {
);
}
let description = match sandbox_policy {
SandboxPolicy::WorkspaceWrite {
network_access,
writable_roots,
..
} => {
format!(
r#"
The shell tool is used to execute shell commands.
- When invoking the shell tool, your call will be running in a sandbox, and some shell commands will require escalated privileges:
- Types of actions that require escalated privileges:
- Writing files other than those in the writable roots
- writable roots:
{}{}
- Examples of commands that require escalated privileges:
- git commit
- npm install or pnpm install
- cargo build
- cargo test
- When invoking a command that will require escalated privileges:
- Provide the with_escalated_permissions parameter with the boolean value true
- Include a short, 1 sentence explanation for why we need to run with_escalated_permissions in the justification parameter."#,
writable_roots.iter().map(|wr| format!(" - {}", wr.to_string_lossy())).collect::<Vec<String>>().join("\n"),
if !network_access {
"\n - Commands that require network access\n"
} else {
""
}
)
}
SandboxPolicy::DangerFullAccess => {
"Runs a shell command and returns its output.".to_string()
}
SandboxPolicy::ReadOnly => {
r#"
The shell tool is used to execute shell commands.
- When invoking the shell tool, your call will be running in a sandbox, and some shell commands (including apply_patch) will require escalated permissions:
- Types of actions that require escalated privileges:
- Writing files
- Applying patches
- Examples of commands that require escalated privileges:
- apply_patch
- git commit
- npm install or pnpm install
- cargo build
- cargo test
- When invoking a command that will require escalated privileges:
- Provide the with_escalated_permissions parameter with the boolean value true
- Include a short, 1 sentence explanation for why we need to run with_escalated_permissions in the justification parameter"#.to_string()
}
};
OpenAiTool::Function(ResponsesApiTool {
name: "shell".to_string(),
description,
description: "Runs a shell command and returns its output.".to_string(),
strict: false,
parameters: JsonSchema::Object {
properties,
@@ -534,23 +533,27 @@ pub(crate) fn get_openai_tools(
) -> Vec<OpenAiTool> {
let mut tools: Vec<OpenAiTool> = Vec::new();
match &config.shell_type {
ConfigShellToolType::DefaultShell => {
tools.push(create_shell_tool());
}
ConfigShellToolType::ShellWithRequest { sandbox_policy } => {
tools.push(create_shell_tool_for_sandbox(sandbox_policy));
}
ConfigShellToolType::LocalShell => {
tools.push(OpenAiTool::LocalShell {});
}
ConfigShellToolType::StreamableShell => {
tools.push(OpenAiTool::Function(
crate::exec_command::create_exec_command_tool_for_responses_api(),
));
tools.push(OpenAiTool::Function(
crate::exec_command::create_write_stdin_tool_for_responses_api(),
));
if config.experimental_unified_exec_tool {
tools.push(create_unified_exec_tool());
} else {
match &config.shell_type {
ConfigShellToolType::DefaultShell => {
tools.push(create_shell_tool());
}
ConfigShellToolType::ShellWithRequest { sandbox_policy } => {
tools.push(create_shell_tool_for_sandbox(sandbox_policy));
}
ConfigShellToolType::LocalShell => {
tools.push(OpenAiTool::LocalShell {});
}
ConfigShellToolType::StreamableShell => {
tools.push(OpenAiTool::Function(
crate::exec_command::create_exec_command_tool_for_responses_api(),
));
tools.push(OpenAiTool::Function(
crate::exec_command::create_write_stdin_tool_for_responses_api(),
));
}
}
}
@@ -577,10 +580,8 @@ pub(crate) fn get_openai_tools(
if config.include_view_image_tool {
tools.push(create_view_image_tool());
}
if let Some(mcp_tools) = mcp_tools {
// Ensure deterministic ordering to maximize prompt cache hits.
// HashMap iteration order is non-deterministic, so sort by fully-qualified tool name.
let mut entries: Vec<(String, mcp_types::Tool)> = mcp_tools.into_iter().collect();
entries.sort_by(|a, b| a.0.cmp(&b.0));
@@ -642,12 +643,13 @@ mod tests {
include_web_search_request: true,
use_streamable_shell_tool: false,
include_view_image_tool: true,
experimental_unified_exec_tool: true,
});
let tools = get_openai_tools(&config, Some(HashMap::new()));
assert_eq_tool_names(
&tools,
&["local_shell", "update_plan", "web_search", "view_image"],
&["unified_exec", "update_plan", "web_search", "view_image"],
);
}
@@ -663,12 +665,13 @@ mod tests {
include_web_search_request: true,
use_streamable_shell_tool: false,
include_view_image_tool: true,
experimental_unified_exec_tool: true,
});
let tools = get_openai_tools(&config, Some(HashMap::new()));
assert_eq_tool_names(
&tools,
&["shell", "update_plan", "web_search", "view_image"],
&["unified_exec", "update_plan", "web_search", "view_image"],
);
}
@@ -684,6 +687,7 @@ mod tests {
include_web_search_request: true,
use_streamable_shell_tool: false,
include_view_image_tool: true,
experimental_unified_exec_tool: true,
});
let tools = get_openai_tools(
&config,
@@ -726,7 +730,7 @@ mod tests {
assert_eq_tool_names(
&tools,
&[
"shell",
"unified_exec",
"web_search",
"view_image",
"test_server/do_something_cool",
@@ -789,6 +793,7 @@ mod tests {
include_web_search_request: false,
use_streamable_shell_tool: false,
include_view_image_tool: true,
experimental_unified_exec_tool: true,
});
// Intentionally construct a map with keys that would sort alphabetically.
@@ -841,11 +846,11 @@ mod tests {
]);
let tools = get_openai_tools(&config, Some(tools_map));
// Expect shell first, followed by MCP tools sorted by fully-qualified name.
// Expect unified_exec first, followed by MCP tools sorted by fully-qualified name.
assert_eq_tool_names(
&tools,
&[
"shell",
"unified_exec",
"view_image",
"test_server/cool",
"test_server/do",
@@ -866,6 +871,7 @@ mod tests {
include_web_search_request: true,
use_streamable_shell_tool: false,
include_view_image_tool: true,
experimental_unified_exec_tool: true,
});
let tools = get_openai_tools(
@@ -893,7 +899,7 @@ mod tests {
assert_eq_tool_names(
&tools,
&["shell", "web_search", "view_image", "dash/search"],
&["unified_exec", "web_search", "view_image", "dash/search"],
);
assert_eq!(
@@ -928,6 +934,7 @@ mod tests {
include_web_search_request: true,
use_streamable_shell_tool: false,
include_view_image_tool: true,
experimental_unified_exec_tool: true,
});
let tools = get_openai_tools(
@@ -953,7 +960,7 @@ mod tests {
assert_eq_tool_names(
&tools,
&["shell", "web_search", "view_image", "dash/paginate"],
&["unified_exec", "web_search", "view_image", "dash/paginate"],
);
assert_eq!(
tools[3],
@@ -985,6 +992,7 @@ mod tests {
include_web_search_request: true,
use_streamable_shell_tool: false,
include_view_image_tool: true,
experimental_unified_exec_tool: true,
});
let tools = get_openai_tools(
@@ -1008,7 +1016,10 @@ mod tests {
)])),
);
assert_eq_tool_names(&tools, &["shell", "web_search", "view_image", "dash/tags"]);
assert_eq_tool_names(
&tools,
&["unified_exec", "web_search", "view_image", "dash/tags"],
);
assert_eq!(
tools[3],
OpenAiTool::Function(ResponsesApiTool {
@@ -1042,6 +1053,7 @@ mod tests {
include_web_search_request: true,
use_streamable_shell_tool: false,
include_view_image_tool: true,
experimental_unified_exec_tool: true,
});
let tools = get_openai_tools(
@@ -1065,7 +1077,10 @@ mod tests {
)])),
);
assert_eq_tool_names(&tools, &["shell", "web_search", "view_image", "dash/value"]);
assert_eq_tool_names(
&tools,
&["unified_exec", "web_search", "view_image", "dash/value"],
);
assert_eq!(
tools[3],
OpenAiTool::Function(ResponsesApiTool {
@@ -1101,23 +1116,7 @@ mod tests {
};
assert_eq!(name, "shell");
let expected = r#"
The shell tool is used to execute shell commands.
- When invoking the shell tool, your call will be running in a sandbox, and some shell commands will require escalated privileges:
- Types of actions that require escalated privileges:
- Writing files other than those in the writable roots
- writable roots:
- workspace
- Commands that require network access
- Examples of commands that require escalated privileges:
- git commit
- npm install or pnpm install
- cargo build
- cargo test
- When invoking a command that will require escalated privileges:
- Provide the with_escalated_permissions parameter with the boolean value true
- Include a short, 1 sentence explanation for why we need to run with_escalated_permissions in the justification parameter."#;
let expected = "Runs a shell command and returns its output.";
assert_eq!(description, expected);
}
@@ -1132,21 +1131,7 @@ The shell tool is used to execute shell commands.
};
assert_eq!(name, "shell");
let expected = r#"
The shell tool is used to execute shell commands.
- When invoking the shell tool, your call will be running in a sandbox, and some shell commands (including apply_patch) will require escalated permissions:
- Types of actions that require escalated privileges:
- Writing files
- Applying patches
- Examples of commands that require escalated privileges:
- apply_patch
- git commit
- npm install or pnpm install
- cargo build
- cargo test
- When invoking a command that will require escalated privileges:
- Provide the with_escalated_permissions parameter with the boolean value true
- Include a short, 1 sentence explanation for why we need to run with_escalated_permissions in the justification parameter"#;
let expected = "Runs a shell command and returns its output.";
assert_eq!(description, expected);
}

View File

@@ -868,7 +868,7 @@ pub fn parse_command_impl(command: &[String]) -> Vec<ParsedCommand> {
let parts = if contains_connectors(&normalized) {
split_on_connectors(&normalized)
} else {
vec![normalized.clone()]
vec![normalized]
};
// Preserve left-to-right execution order for all commands, including bash -c/-lc
@@ -1201,10 +1201,7 @@ fn parse_bash_lc_commands(original: &[String]) -> Option<Vec<ParsedCommand>> {
name,
}
} else {
ParsedCommand::Read {
cmd: cmd.clone(),
name,
}
ParsedCommand::Read { cmd, name }
}
} else {
ParsedCommand::Read {
@@ -1215,10 +1212,7 @@ fn parse_bash_lc_commands(original: &[String]) -> Option<Vec<ParsedCommand>> {
}
ParsedCommand::ListFiles { path, cmd, .. } => {
if had_connectors {
ParsedCommand::ListFiles {
cmd: cmd.clone(),
path,
}
ParsedCommand::ListFiles { cmd, path }
} else {
ParsedCommand::ListFiles {
cmd: shlex_join(&script_tokens),
@@ -1230,11 +1224,7 @@ fn parse_bash_lc_commands(original: &[String]) -> Option<Vec<ParsedCommand>> {
query, path, cmd, ..
} => {
if had_connectors {
ParsedCommand::Search {
cmd: cmd.clone(),
query,
path,
}
ParsedCommand::Search { cmd, query, path }
} else {
ParsedCommand::Search {
cmd: shlex_join(&script_tokens),

View File

@@ -115,7 +115,7 @@ pub fn discover_project_doc_paths(config: &Config) -> std::io::Result<Vec<PathBu
// Build chain from cwd upwards and detect git root.
let mut chain: Vec<PathBuf> = vec![dir.clone()];
let mut git_root: Option<PathBuf> = None;
let mut cursor = dir.clone();
let mut cursor = dir;
while let Some(parent) = cursor.parent() {
let git_marker = cursor.join(".git");
let git_exists = match std::fs::metadata(&git_marker) {

View File

@@ -1,21 +0,0 @@
You are a summarization assistant. A conversation follows between a user and a coding-focused AI (Codex). Your task is to generate a clear summary capturing:
• High-level objective or problem being solved
• Key instructions or design decisions given by the user
• Main code actions or behaviors from the AI
• Important variables, functions, modules, or outputs discussed
• Any unresolved questions or next steps
Produce the summary in a structured format like:
**Objective:**
**User instructions:** … (bulleted)
**AI actions / code behavior:** … (bulleted)
**Important entities:** … (e.g. function names, variables, files)
**Open issues / next steps:** … (if any)
**Summary (concise):** (one or two sentences)

View File

@@ -318,6 +318,12 @@ async fn read_head_and_flags(
head.push(val);
}
}
RolloutItem::TurnContext(_) => {
// Not included in `head`; skip.
}
RolloutItem::Compacted(_) => {
// Not included in `head`; skip.
}
RolloutItem::EventMsg(ev) => {
if matches!(ev, EventMsg::UserMessage(_)) {
saw_user_event = true;

View File

@@ -8,8 +8,10 @@ pub(crate) fn is_persisted_response_item(item: &RolloutItem) -> bool {
match item {
RolloutItem::ResponseItem(item) => should_persist_response_item(item),
RolloutItem::EventMsg(ev) => should_persist_event_msg(ev),
// Always persist session meta
RolloutItem::SessionMeta(_) => true,
// Persist Codex executive markers so we can analyze flows (e.g., compaction, API turns).
RolloutItem::Compacted(_) | RolloutItem::TurnContext(_) | RolloutItem::SessionMeta(_) => {
true
}
}
}
@@ -36,7 +38,9 @@ pub(crate) fn should_persist_event_msg(ev: &EventMsg) -> bool {
| EventMsg::AgentMessage(_)
| EventMsg::AgentReasoning(_)
| EventMsg::AgentReasoningRawContent(_)
| EventMsg::TokenCount(_) => true,
| EventMsg::TokenCount(_)
| EventMsg::EnteredReviewMode(_)
| EventMsg::ExitedReviewMode(_) => true,
EventMsg::Error(_)
| EventMsg::TaskStarted(_)
| EventMsg::TaskComplete(_)
@@ -65,6 +69,6 @@ pub(crate) fn should_persist_event_msg(ev: &EventMsg) -> bool {
| EventMsg::PlanUpdate(_)
| EventMsg::TurnAborted(_)
| EventMsg::ShutdownComplete
| EventMsg::ConversationHistory(_) => false,
| EventMsg::ConversationPath(_) => false,
}
}

View File

@@ -77,7 +77,13 @@ pub enum RolloutRecorderParams {
enum RolloutCmd {
AddItems(Vec<RolloutItem>),
Shutdown { ack: oneshot::Sender<()> },
/// Ensure all prior writes are processed; respond when flushed.
Flush {
ack: oneshot::Sender<()>,
},
Shutdown {
ack: oneshot::Sender<()>,
},
}
impl RolloutRecorderParams {
@@ -185,6 +191,17 @@ impl RolloutRecorder {
.map_err(|e| IoError::other(format!("failed to queue rollout items: {e}")))
}
/// Flush all queued writes and wait until they are committed by the writer task.
pub async fn flush(&self) -> std::io::Result<()> {
let (tx, rx) = oneshot::channel();
self.tx
.send(RolloutCmd::Flush { ack: tx })
.await
.map_err(|e| IoError::other(format!("failed to queue rollout flush: {e}")))?;
rx.await
.map_err(|e| IoError::other(format!("failed waiting for rollout flush: {e}")))
}
pub(crate) async fn get_rollout_history(path: &Path) -> std::io::Result<InitialHistory> {
info!("Resuming rollout from {path:?}");
tracing::error!("Resuming rollout from {path:?}");
@@ -211,16 +228,22 @@ impl RolloutRecorder {
match serde_json::from_value::<RolloutLine>(v.clone()) {
Ok(rollout_line) => match rollout_line.item {
RolloutItem::SessionMeta(session_meta_line) => {
tracing::error!(
"Parsed conversation ID from rollout file: {:?}",
session_meta_line.meta.id
);
conversation_id = Some(session_meta_line.meta.id);
// Use the FIRST SessionMeta encountered in the file as the canonical
// conversation id and main session information. Keep all items intact.
if conversation_id.is_none() {
conversation_id = Some(session_meta_line.meta.id);
}
items.push(RolloutItem::SessionMeta(session_meta_line));
}
RolloutItem::ResponseItem(item) => {
items.push(RolloutItem::ResponseItem(item));
}
RolloutItem::Compacted(item) => {
items.push(RolloutItem::Compacted(item));
}
RolloutItem::TurnContext(item) => {
items.push(RolloutItem::TurnContext(item));
}
RolloutItem::EventMsg(_ev) => {
items.push(RolloutItem::EventMsg(_ev));
}
@@ -251,6 +274,10 @@ impl RolloutRecorder {
}))
}
pub(crate) fn get_rollout_path(&self) -> PathBuf {
self.rollout_path.clone()
}
pub async fn shutdown(&self) -> std::io::Result<()> {
let (tx_done, rx_done) = oneshot::channel();
match self.tx.send(RolloutCmd::Shutdown { ack: tx_done }).await {
@@ -351,6 +378,14 @@ async fn rollout_writer(
}
}
}
RolloutCmd::Flush { ack } => {
// Ensure underlying file is flushed and then ack.
if let Err(e) = writer.file.flush().await {
let _ = ack.send(());
return Err(e);
}
let _ = ack.send(());
}
RolloutCmd::Shutdown { ack } => {
let _ = ack.send(());
}

View File

@@ -305,7 +305,7 @@ async fn test_pagination_cursor() {
path: p1,
head: head_1,
}],
next_cursor: Some(expected_cursor3.clone()),
next_cursor: Some(expected_cursor3),
num_scanned_files: 5, // scanned 05, 04 (anchor), 03, 02 (anchor), 01
reached_scan_cap: false,
};
@@ -344,7 +344,7 @@ async fn test_get_conversation_contents() {
let expected_cursor: Cursor = serde_json::from_str(&format!("\"{ts}|{uuid}\"")).unwrap();
let expected_page = ConversationsPage {
items: vec![ConversationItem {
path: expected_path.clone(),
path: expected_path,
head: expected_head,
}],
next_cursor: Some(expected_cursor),
@@ -437,7 +437,7 @@ async fn test_stable_ordering_same_second_pagination() {
path: p1,
head: head(u1),
}],
next_cursor: Some(expected_cursor2.clone()),
next_cursor: Some(expected_cursor2),
num_scanned_files: 3, // scanned u3, u2 (anchor), u1
reached_scan_cap: false,
};

View File

@@ -293,7 +293,7 @@ mod tests {
// With the parent dir explicitly added as a writable root, the
// outside write should be permitted.
let policy_with_parent = SandboxPolicy::WorkspaceWrite {
writable_roots: vec![parent.clone()],
writable_roots: vec![parent],
network_access: false,
exclude_tmpdir_env_var: true,
exclude_slash_tmp: true,

View File

@@ -153,7 +153,7 @@ mod tests {
// Build a policy that only includes the two test roots as writable and
// does not automatically include defaults TMPDIR or /tmp.
let policy = SandboxPolicy::WorkspaceWrite {
writable_roots: vec![root_with_git.clone(), root_without_git.clone()],
writable_roots: vec![root_with_git, root_without_git],
network_access: false,
exclude_tmpdir_env_var: true,
exclude_slash_tmp: true,

View File

@@ -2,6 +2,7 @@
; inspired by Chrome's sandbox policy:
; https://source.chromium.org/chromium/chromium/src/+/main:sandbox/policy/mac/common.sb;l=273-319;drc=7b3962fe2e5fc9e2ee58000dc8fbf3429d84d3bd
; https://source.chromium.org/chromium/chromium/src/+/main:sandbox/policy/mac/renderer.sb;l=64;drc=7b3962fe2e5fc9e2ee58000dc8fbf3429d84d3bd
; start with closed-by-default
(deny default)
@@ -9,7 +10,13 @@
; child processes inherit the policy of their parent
(allow process-exec)
(allow process-fork)
(allow signal (target self))
(allow signal (target same-sandbox))
; Allow cf prefs to work.
(allow user-preference-read)
; process-info
(allow process-info* (target same-sandbox))
(allow file-write-data
(require-all
@@ -32,28 +39,22 @@
(sysctl-name "hw.l3cachesize_compat")
(sysctl-name "hw.logicalcpu_max")
(sysctl-name "hw.machine")
(sysctl-name "hw.memsize")
(sysctl-name "hw.ncpu")
(sysctl-name "hw.nperflevels")
(sysctl-name "hw.optional.arm.FEAT_BF16")
(sysctl-name "hw.optional.arm.FEAT_DotProd")
(sysctl-name "hw.optional.arm.FEAT_FCMA")
(sysctl-name "hw.optional.arm.FEAT_FHM")
(sysctl-name "hw.optional.arm.FEAT_FP16")
(sysctl-name "hw.optional.arm.FEAT_I8MM")
(sysctl-name "hw.optional.arm.FEAT_JSCVT")
(sysctl-name "hw.optional.arm.FEAT_LSE")
(sysctl-name "hw.optional.arm.FEAT_RDM")
(sysctl-name "hw.optional.arm.FEAT_SHA512")
(sysctl-name "hw.optional.armv8_2_sha512")
(sysctl-name "hw.memsize")
(sysctl-name "hw.pagesize")
; Chrome locks this CPU feature detection down a bit more tightly,
; but mostly for fingerprinting concerns, which aren't an issue for codex.
(sysctl-name-prefix "hw.optional.arm.")
(sysctl-name-prefix "hw.optional.armv8_")
(sysctl-name "hw.packages")
(sysctl-name "hw.pagesize_compat")
(sysctl-name "hw.pagesize")
(sysctl-name "hw.physicalcpu_max")
(sysctl-name "hw.tbfrequency_compat")
(sysctl-name "hw.vectorunit")
(sysctl-name "kern.hostname")
(sysctl-name "kern.maxfilesperproc")
(sysctl-name "kern.maxproc")
(sysctl-name "kern.osproductversion")
(sysctl-name "kern.osrelease")
(sysctl-name "kern.ostype")
@@ -63,14 +64,27 @@
(sysctl-name "kern.usrstack64")
(sysctl-name "kern.version")
(sysctl-name "sysctl.proc_cputype")
(sysctl-name "vm.loadavg")
(sysctl-name-prefix "hw.perflevel")
(sysctl-name-prefix "kern.proc.pgrp.")
(sysctl-name-prefix "kern.proc.pid.")
(sysctl-name-prefix "net.routetable.")
)
; IOKit
(allow iokit-open
(iokit-registry-entry-class "RootDomainUserClient")
)
; needed to look up user info, see https://crbug.com/792228
(allow mach-lookup
(global-name "com.apple.system.opendirectoryd.libinfo")
)
; Added on top of Chrome profile
; Needed for python multiprocessing on MacOS for the SemLock
(allow ipc-posix-sem)
; needed to look up user info, see https://crbug.com/792228
(allow mach-lookup
(global-name "com.apple.system.opendirectoryd.libinfo")
(global-name "com.apple.PowerManagement.control")
)

View File

@@ -326,10 +326,7 @@ mod tests {
.format_default_shell_invocation(input.iter().map(|s| s.to_string()).collect());
let expected_cmd = expected_cmd
.iter()
.map(|s| {
s.replace("BASHRC_PATH", bashrc_path.to_str().unwrap())
.to_string()
})
.map(|s| s.replace("BASHRC_PATH", bashrc_path.to_str().unwrap()))
.collect();
assert_eq!(actual_cmd, Some(expected_cmd));
@@ -435,10 +432,7 @@ mod macos_tests {
.format_default_shell_invocation(input.iter().map(|s| s.to_string()).collect());
let expected_cmd = expected_cmd
.iter()
.map(|s| {
s.replace("ZSHRC_PATH", zshrc_path.to_str().unwrap())
.to_string()
})
.map(|s| s.replace("ZSHRC_PATH", zshrc_path.to_str().unwrap()))
.collect();
assert_eq!(actual_cmd, Some(expected_cmd));

View File

@@ -3,8 +3,6 @@ use serde::Deserialize;
use serde::Serialize;
use thiserror::Error;
use codex_protocol::mcp_protocol::AuthMode;
#[derive(Deserialize, Serialize, Clone, Debug, PartialEq, Default)]
pub struct TokenData {
/// Flat info parsed from the JWT in auth.json.
@@ -22,36 +20,6 @@ pub struct TokenData {
pub account_id: Option<String>,
}
impl TokenData {
/// Returns true if this is a plan that should use the traditional
/// "metered" billing via an API key.
pub(crate) fn should_use_api_key(
&self,
preferred_auth_method: AuthMode,
is_openai_email: bool,
) -> bool {
if preferred_auth_method == AuthMode::ApiKey {
return true;
}
// If the email is an OpenAI email, use AuthMode::ChatGPT unless preferred_auth_method is AuthMode::ApiKey.
if is_openai_email {
return false;
}
self.id_token
.chatgpt_plan_type
.as_ref()
.is_none_or(|plan| plan.is_plan_that_should_use_api_key())
}
pub fn is_openai_email(&self) -> bool {
self.id_token
.email
.as_deref()
.is_some_and(|email| email.trim().to_ascii_lowercase().ends_with("@openai.com"))
}
}
/// Flat subset of useful claims in id_token from auth.json.
#[derive(Debug, Clone, PartialEq, Eq, Default, Serialize, Deserialize)]
pub struct IdTokenInfo {
@@ -79,28 +47,6 @@ pub(crate) enum PlanType {
Unknown(String),
}
impl PlanType {
fn is_plan_that_should_use_api_key(&self) -> bool {
match self {
Self::Known(known) => {
use KnownPlan::*;
!matches!(known, Free | Plus | Pro | Team)
}
Self::Unknown(_) => {
// Unknown plans should use the API key.
true
}
}
}
pub fn as_string(&self) -> String {
match self {
Self::Known(known) => format!("{known:?}").to_lowercase(),
Self::Unknown(s) => s.clone(),
}
}
}
#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
pub(crate) enum KnownPlan {

View File

@@ -0,0 +1,180 @@
//! Utilities for truncating large chunks of output while preserving a prefix
//! and suffix on UTF-8 boundaries.
/// Truncate the middle of a UTF-8 string to at most `max_bytes` bytes,
/// preserving the beginning and the end. Returns the possibly truncated
/// string and `Some(original_token_count)` (estimated at 4 bytes/token)
/// if truncation occurred; otherwise returns the original string and `None`.
pub(crate) fn truncate_middle(s: &str, max_bytes: usize) -> (String, Option<u64>) {
if s.len() <= max_bytes {
return (s.to_string(), None);
}
let est_tokens = (s.len() as u64).div_ceil(4);
if max_bytes == 0 {
return (format!("{est_tokens} tokens truncated…"), Some(est_tokens));
}
fn truncate_on_boundary(input: &str, max_len: usize) -> &str {
if input.len() <= max_len {
return input;
}
let mut end = max_len;
while end > 0 && !input.is_char_boundary(end) {
end -= 1;
}
&input[..end]
}
fn pick_prefix_end(s: &str, left_budget: usize) -> usize {
if let Some(head) = s.get(..left_budget)
&& let Some(i) = head.rfind('\n')
{
return i + 1;
}
truncate_on_boundary(s, left_budget).len()
}
fn pick_suffix_start(s: &str, right_budget: usize) -> usize {
let start_tail = s.len().saturating_sub(right_budget);
if let Some(tail) = s.get(start_tail..)
&& let Some(i) = tail.find('\n')
{
return start_tail + i + 1;
}
let mut idx = start_tail.min(s.len());
while idx < s.len() && !s.is_char_boundary(idx) {
idx += 1;
}
idx
}
let mut guess_tokens = est_tokens;
for _ in 0..4 {
let marker = format!("{guess_tokens} tokens truncated…");
let marker_len = marker.len();
let keep_budget = max_bytes.saturating_sub(marker_len);
if keep_budget == 0 {
return (format!("{est_tokens} tokens truncated…"), Some(est_tokens));
}
let left_budget = keep_budget / 2;
let right_budget = keep_budget - left_budget;
let prefix_end = pick_prefix_end(s, left_budget);
let mut suffix_start = pick_suffix_start(s, right_budget);
if suffix_start < prefix_end {
suffix_start = prefix_end;
}
let kept_content_bytes = prefix_end + (s.len() - suffix_start);
let truncated_content_bytes = s.len().saturating_sub(kept_content_bytes);
let new_tokens = (truncated_content_bytes as u64).div_ceil(4);
if new_tokens == guess_tokens {
let mut out = String::with_capacity(marker_len + kept_content_bytes + 1);
out.push_str(&s[..prefix_end]);
out.push_str(&marker);
out.push('\n');
out.push_str(&s[suffix_start..]);
return (out, Some(est_tokens));
}
guess_tokens = new_tokens;
}
let marker = format!("{guess_tokens} tokens truncated…");
let marker_len = marker.len();
let keep_budget = max_bytes.saturating_sub(marker_len);
if keep_budget == 0 {
return (format!("{est_tokens} tokens truncated…"), Some(est_tokens));
}
let left_budget = keep_budget / 2;
let right_budget = keep_budget - left_budget;
let prefix_end = pick_prefix_end(s, left_budget);
let suffix_start = pick_suffix_start(s, right_budget);
let mut out = String::with_capacity(marker_len + prefix_end + (s.len() - suffix_start) + 1);
out.push_str(&s[..prefix_end]);
out.push_str(&marker);
out.push('\n');
out.push_str(&s[suffix_start..]);
(out, Some(est_tokens))
}
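To see how the marker/budget refinement converges, a worked example matching the newline-boundary test below (all numbers follow from the code above):

```rust
// truncate_middle(s, 64) where s is twenty 4-byte lines ("001\n" .. "020\n", 80 bytes):
//   est_tokens = ceil(80 / 4) = 20; marker "…20 tokens truncated…" is 25 bytes
//   keep_budget = 64 - 25 = 39  =>  left_budget = 19, right_budget = 20
//   prefix snaps to the '\n' after "004\n" (16 bytes); suffix starts at "017\n" (16 bytes)
//   truncated = 80 - 32 = 48 bytes  =>  new_tokens = 12 != 20, so refine once
//   "…12 tokens truncated…" is also 25 bytes, budgets unchanged  =>  stable
// => ("001\n002\n003\n004\n…12 tokens truncated…\n017\n018\n019\n020\n", Some(20))
```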
#[cfg(test)]
mod tests {
use super::truncate_middle;
#[test]
fn truncate_middle_no_newlines_fallback() {
let s = "abcdefghijklmnopqrstuvwxyz0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ*";
let max_bytes = 32;
let (out, original) = truncate_middle(s, max_bytes);
assert!(out.starts_with("abc"));
assert!(out.contains("tokens truncated"));
assert!(out.ends_with("XYZ*"));
assert_eq!(original, Some((s.len() as u64).div_ceil(4)));
}
#[test]
fn truncate_middle_prefers_newline_boundaries() {
let mut s = String::new();
for i in 1..=20 {
s.push_str(&format!("{i:03}\n"));
}
assert_eq!(s.len(), 80);
let max_bytes = 64;
let (out, tokens) = truncate_middle(&s, max_bytes);
assert!(out.starts_with("001\n002\n003\n004\n"));
assert!(out.contains("tokens truncated"));
assert!(out.ends_with("017\n018\n019\n020\n"));
assert_eq!(tokens, Some(20));
}
#[test]
fn truncate_middle_handles_utf8_content() {
let s = "😀😀😀😀😀😀😀😀😀😀\nsecond line with ascii text\n";
let max_bytes = 32;
let (out, tokens) = truncate_middle(s, max_bytes);
assert!(out.contains("tokens truncated"));
assert!(!out.contains('\u{fffd}'));
assert_eq!(tokens, Some((s.len() as u64).div_ceil(4)));
}
#[test]
fn truncate_middle_prefers_newline_boundaries_2() {
// Build a multi-line string of 20 numbered lines (each "NNN\n").
let mut s = String::new();
for i in 1..=20 {
s.push_str(&format!("{i:03}\n"));
}
// Total length: 20 lines * 4 bytes per line = 80 bytes.
assert_eq!(s.len(), 80);
// Choose a cap that forces truncation while leaving room for
// a few lines on each side after accounting for the marker.
let max_bytes = 64;
// Expect exact output: first 4 lines, marker, last 4 lines, and correct token estimate (80/4 = 20).
assert_eq!(
truncate_middle(&s, max_bytes),
(
r#"001
002
003
004
…12 tokens truncated…
017
018
019
020
"#
.to_string(),
Some(20)
)
);
}
}

View File

@@ -678,7 +678,7 @@ index {left_oid}..{right_oid}
let dest = dir.path().join("dest.txt");
let mut acc = TurnDiffTracker::new();
let mv = HashMap::from([(
src.clone(),
src,
FileChange::Update {
unified_diff: "".into(),
move_path: Some(dest.clone()),

View File

@@ -0,0 +1,22 @@
use thiserror::Error;
#[derive(Debug, Error)]
pub(crate) enum UnifiedExecError {
#[error("Failed to create unified exec session: {pty_error}")]
CreateSession {
#[source]
pty_error: anyhow::Error,
},
#[error("Unknown session id {session_id}")]
UnknownSessionId { session_id: i32 },
#[error("failed to write to stdin")]
WriteToStdin,
#[error("missing command line for unified exec request")]
MissingCommandLine,
}
impl UnifiedExecError {
pub(crate) fn create_session(error: anyhow::Error) -> Self {
Self::CreateSession { pty_error: error }
}
}

View File

@@ -0,0 +1,633 @@
use portable_pty::CommandBuilder;
use portable_pty::PtySize;
use portable_pty::native_pty_system;
use std::collections::HashMap;
use std::collections::VecDeque;
use std::io::ErrorKind;
use std::io::Read;
use std::sync::Arc;
use std::sync::Mutex as StdMutex;
use std::sync::atomic::AtomicBool;
use std::sync::atomic::AtomicI32;
use std::sync::atomic::Ordering;
use tokio::sync::Mutex;
use tokio::sync::Notify;
use tokio::sync::mpsc;
use tokio::task::JoinHandle;
use tokio::time::Duration;
use tokio::time::Instant;
use crate::exec_command::ExecCommandSession;
use crate::truncate::truncate_middle;
mod errors;
pub(crate) use errors::UnifiedExecError;
const DEFAULT_TIMEOUT_MS: u64 = 1_000;
const MAX_TIMEOUT_MS: u64 = 60_000;
const UNIFIED_EXEC_OUTPUT_MAX_BYTES: usize = 128 * 1024; // 128 KiB
#[derive(Debug)]
pub(crate) struct UnifiedExecRequest<'a> {
pub session_id: Option<i32>,
pub input_chunks: &'a [String],
pub timeout_ms: Option<u64>,
}
#[derive(Debug, Clone, PartialEq)]
pub(crate) struct UnifiedExecResult {
pub session_id: Option<i32>,
pub output: String,
}
#[derive(Debug, Default)]
pub(crate) struct UnifiedExecSessionManager {
next_session_id: AtomicI32,
sessions: Mutex<HashMap<i32, ManagedUnifiedExecSession>>,
}
#[derive(Debug)]
struct ManagedUnifiedExecSession {
session: ExecCommandSession,
output_buffer: OutputBuffer,
/// Notifies waiters whenever new output has been appended to
/// `output_buffer`, allowing clients to poll for fresh data.
output_notify: Arc<Notify>,
output_task: JoinHandle<()>,
}
#[derive(Debug, Default)]
struct OutputBufferState {
chunks: VecDeque<Vec<u8>>,
total_bytes: usize,
}
impl OutputBufferState {
fn push_chunk(&mut self, chunk: Vec<u8>) {
self.total_bytes = self.total_bytes.saturating_add(chunk.len());
self.chunks.push_back(chunk);
let mut excess = self
.total_bytes
.saturating_sub(UNIFIED_EXEC_OUTPUT_MAX_BYTES);
while excess > 0 {
match self.chunks.front_mut() {
Some(front) if excess >= front.len() => {
excess -= front.len();
self.total_bytes = self.total_bytes.saturating_sub(front.len());
self.chunks.pop_front();
}
Some(front) => {
front.drain(..excess);
self.total_bytes = self.total_bytes.saturating_sub(excess);
break;
}
None => break,
}
}
}
fn drain(&mut self) -> Vec<Vec<u8>> {
let drained: Vec<Vec<u8>> = self.chunks.drain(..).collect();
self.total_bytes = 0;
drained
}
}
type OutputBuffer = Arc<Mutex<OutputBufferState>>;
type OutputHandles = (OutputBuffer, Arc<Notify>);
impl ManagedUnifiedExecSession {
fn new(session: ExecCommandSession) -> Self {
let output_buffer = Arc::new(Mutex::new(OutputBufferState::default()));
let output_notify = Arc::new(Notify::new());
let mut receiver = session.output_receiver();
let buffer_clone = Arc::clone(&output_buffer);
let notify_clone = Arc::clone(&output_notify);
let output_task = tokio::spawn(async move {
while let Ok(chunk) = receiver.recv().await {
let mut guard = buffer_clone.lock().await;
guard.push_chunk(chunk);
drop(guard);
notify_clone.notify_waiters();
}
});
Self {
session,
output_buffer,
output_notify,
output_task,
}
}
fn writer_sender(&self) -> mpsc::Sender<Vec<u8>> {
self.session.writer_sender()
}
fn output_handles(&self) -> OutputHandles {
(
Arc::clone(&self.output_buffer),
Arc::clone(&self.output_notify),
)
}
fn has_exited(&self) -> bool {
self.session.has_exited()
}
}
impl Drop for ManagedUnifiedExecSession {
fn drop(&mut self) {
self.output_task.abort();
}
}
impl UnifiedExecSessionManager {
pub async fn handle_request(
&self,
request: UnifiedExecRequest<'_>,
) -> Result<UnifiedExecResult, UnifiedExecError> {
let (timeout_ms, timeout_warning) = match request.timeout_ms {
Some(requested) if requested > MAX_TIMEOUT_MS => (
MAX_TIMEOUT_MS,
Some(format!(
"Warning: requested timeout {requested}ms exceeds maximum of {MAX_TIMEOUT_MS}ms; clamping to {MAX_TIMEOUT_MS}ms.\n"
)),
),
Some(requested) => (requested, None),
None => (DEFAULT_TIMEOUT_MS, None),
};
let mut new_session: Option<ManagedUnifiedExecSession> = None;
let session_id;
let writer_tx;
let output_buffer;
let output_notify;
if let Some(existing_id) = request.session_id {
let mut sessions = self.sessions.lock().await;
match sessions.get(&existing_id) {
Some(session) => {
if session.has_exited() {
sessions.remove(&existing_id);
return Err(UnifiedExecError::UnknownSessionId {
session_id: existing_id,
});
}
let (buffer, notify) = session.output_handles();
session_id = existing_id;
writer_tx = session.writer_sender();
output_buffer = buffer;
output_notify = notify;
}
None => {
return Err(UnifiedExecError::UnknownSessionId {
session_id: existing_id,
});
}
}
drop(sessions);
} else {
let command = request.input_chunks.to_vec();
let new_id = self.next_session_id.fetch_add(1, Ordering::SeqCst);
let session = create_unified_exec_session(&command).await?;
let managed_session = ManagedUnifiedExecSession::new(session);
let (buffer, notify) = managed_session.output_handles();
writer_tx = managed_session.writer_sender();
output_buffer = buffer;
output_notify = notify;
session_id = new_id;
new_session = Some(managed_session);
};
if request.session_id.is_some() {
let joined_input = request.input_chunks.join(" ");
if !joined_input.is_empty() && writer_tx.send(joined_input.into_bytes()).await.is_err()
{
return Err(UnifiedExecError::WriteToStdin);
}
}
let mut collected: Vec<u8> = Vec::with_capacity(4096);
let start = Instant::now();
let deadline = start + Duration::from_millis(timeout_ms);
loop {
let drained_chunks;
let mut wait_for_output = None;
{
let mut guard = output_buffer.lock().await;
drained_chunks = guard.drain();
if drained_chunks.is_empty() {
wait_for_output = Some(output_notify.notified());
}
}
if drained_chunks.is_empty() {
let remaining = deadline.saturating_duration_since(Instant::now());
if remaining == Duration::ZERO {
break;
}
let notified = wait_for_output.unwrap_or_else(|| output_notify.notified());
tokio::pin!(notified);
tokio::select! {
_ = &mut notified => {}
_ = tokio::time::sleep(remaining) => break,
}
continue;
}
for chunk in drained_chunks {
collected.extend_from_slice(&chunk);
}
if Instant::now() >= deadline {
break;
}
}
let (output, _maybe_tokens) = truncate_middle(
&String::from_utf8_lossy(&collected),
UNIFIED_EXEC_OUTPUT_MAX_BYTES,
);
let output = if let Some(warning) = timeout_warning {
format!("{warning}{output}")
} else {
output
};
let should_store_session = if let Some(session) = new_session.as_ref() {
!session.has_exited()
} else if request.session_id.is_some() {
let mut sessions = self.sessions.lock().await;
if let Some(existing) = sessions.get(&session_id) {
if existing.has_exited() {
sessions.remove(&session_id);
false
} else {
true
}
} else {
false
}
} else {
true
};
if should_store_session {
if let Some(session) = new_session {
self.sessions.lock().await.insert(session_id, session);
}
Ok(UnifiedExecResult {
session_id: Some(session_id),
output,
})
} else {
Ok(UnifiedExecResult {
session_id: None,
output,
})
}
}
}
async fn create_unified_exec_session(
command: &[String],
) -> Result<ExecCommandSession, UnifiedExecError> {
if command.is_empty() {
return Err(UnifiedExecError::MissingCommandLine);
}
let pty_system = native_pty_system();
let pair = pty_system
.openpty(PtySize {
rows: 24,
cols: 80,
pixel_width: 0,
pixel_height: 0,
})
.map_err(UnifiedExecError::create_session)?;
// Safe thanks to the check at the top of the function.
let mut command_builder = CommandBuilder::new(command[0].clone());
for arg in &command[1..] {
command_builder.arg(arg);
}
let mut child = pair
.slave
.spawn_command(command_builder)
.map_err(UnifiedExecError::create_session)?;
let killer = child.clone_killer();
let (writer_tx, mut writer_rx) = mpsc::channel::<Vec<u8>>(128);
let (output_tx, _) = tokio::sync::broadcast::channel::<Vec<u8>>(256);
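// Channels: writer_tx queues stdin writes for the PTY; output_tx broadcasts PTY output to
// any subscriber.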
let mut reader = pair
.master
.try_clone_reader()
.map_err(UnifiedExecError::create_session)?;
let output_tx_clone = output_tx.clone();
let reader_handle = tokio::task::spawn_blocking(move || {
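// Blocking read loop: forward PTY bytes until EOF, retrying on EINTR and backing off briefly
// when the descriptor would block.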
let mut buf = [0u8; 8192];
loop {
match reader.read(&mut buf) {
Ok(0) => break,
Ok(n) => {
let _ = output_tx_clone.send(buf[..n].to_vec());
}
Err(ref e) if e.kind() == ErrorKind::Interrupted => continue,
Err(ref e) if e.kind() == ErrorKind::WouldBlock => {
std::thread::sleep(Duration::from_millis(5));
continue;
}
Err(_) => break,
}
}
});
let writer = pair
.master
.take_writer()
.map_err(UnifiedExecError::create_session)?;
let writer = Arc::new(StdMutex::new(writer));
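// Forward queued stdin bytes to the PTY writer on a blocking thread so a slow or stalled
// write cannot block the async runtime.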
let writer_handle = tokio::spawn({
let writer = writer.clone();
async move {
while let Some(bytes) = writer_rx.recv().await {
let writer = writer.clone();
let _ = tokio::task::spawn_blocking(move || {
if let Ok(mut guard) = writer.lock() {
use std::io::Write;
let _ = guard.write_all(&bytes);
let _ = guard.flush();
}
})
.await;
}
}
});
let exit_status = Arc::new(AtomicBool::new(false));
let wait_exit_status = Arc::clone(&exit_status);
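// Reap the child on a blocking thread and flip the exit flag so callers can detect that the
// session has finished.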
let wait_handle = tokio::task::spawn_blocking(move || {
let _ = child.wait();
wait_exit_status.store(true, Ordering::SeqCst);
});
Ok(ExecCommandSession::new(
writer_tx,
output_tx,
killer,
reader_handle,
writer_handle,
wait_handle,
exit_status,
))
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn push_chunk_trims_only_excess_bytes() {
let mut buffer = OutputBufferState::default();
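// Fill the buffer to capacity, then push two more bytes; only the oldest chunk should be
// trimmed, and only by the two-byte overflow.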
buffer.push_chunk(vec![b'a'; UNIFIED_EXEC_OUTPUT_MAX_BYTES]);
buffer.push_chunk(vec![b'b']);
buffer.push_chunk(vec![b'c']);
assert_eq!(buffer.total_bytes, UNIFIED_EXEC_OUTPUT_MAX_BYTES);
assert_eq!(buffer.chunks.len(), 3);
assert_eq!(
buffer.chunks.front().unwrap().len(),
UNIFIED_EXEC_OUTPUT_MAX_BYTES - 2
);
assert_eq!(buffer.chunks.pop_back().unwrap(), vec![b'c']);
assert_eq!(buffer.chunks.pop_back().unwrap(), vec![b'b']);
}
#[cfg(unix)]
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn unified_exec_persists_across_requests() -> Result<(), UnifiedExecError> {
let manager = UnifiedExecSessionManager::default();
let open_shell = manager
.handle_request(UnifiedExecRequest {
session_id: None,
input_chunks: &["bash".to_string(), "-i".to_string()],
timeout_ms: Some(2_500),
})
.await?;
let session_id = open_shell.session_id.expect("expected session_id");
manager
.handle_request(UnifiedExecRequest {
session_id: Some(session_id),
input_chunks: &[
"export".to_string(),
"CODEX_INTERACTIVE_SHELL_VAR=codex\n".to_string(),
],
timeout_ms: Some(2_500),
})
.await?;
let out_2 = manager
.handle_request(UnifiedExecRequest {
session_id: Some(session_id),
input_chunks: &["echo $CODEX_INTERACTIVE_SHELL_VAR\n".to_string()],
timeout_ms: Some(2_500),
})
.await?;
assert!(out_2.output.contains("codex"));
Ok(())
}
#[cfg(unix)]
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn multi_unified_exec_sessions() -> Result<(), UnifiedExecError> {
let manager = UnifiedExecSessionManager::default();
let shell_a = manager
.handle_request(UnifiedExecRequest {
session_id: None,
input_chunks: &["/bin/bash".to_string(), "-i".to_string()],
timeout_ms: Some(2_500),
})
.await?;
let session_a = shell_a.session_id.expect("expected session id");
manager
.handle_request(UnifiedExecRequest {
session_id: Some(session_a),
input_chunks: &["export CODEX_INTERACTIVE_SHELL_VAR=codex\n".to_string()],
timeout_ms: Some(2_500),
})
.await?;
let out_2 = manager
.handle_request(UnifiedExecRequest {
session_id: None,
input_chunks: &[
"echo".to_string(),
"$CODEX_INTERACTIVE_SHELL_VAR\n".to_string(),
],
timeout_ms: Some(2_500),
})
.await?;
assert!(!out_2.output.contains("codex"));
let out_3 = manager
.handle_request(UnifiedExecRequest {
session_id: Some(session_a),
input_chunks: &["echo $CODEX_INTERACTIVE_SHELL_VAR\n".to_string()],
timeout_ms: Some(2_500),
})
.await?;
assert!(out_3.output.contains("codex"));
Ok(())
}
#[cfg(unix)]
#[tokio::test]
async fn unified_exec_timeouts() -> Result<(), UnifiedExecError> {
let manager = UnifiedExecSessionManager::default();
let open_shell = manager
.handle_request(UnifiedExecRequest {
session_id: None,
input_chunks: &["bash".to_string(), "-i".to_string()],
timeout_ms: Some(2_500),
})
.await?;
let session_id = open_shell.session_id.expect("expected session id");
manager
.handle_request(UnifiedExecRequest {
session_id: Some(session_id),
input_chunks: &[
"export".to_string(),
"CODEX_INTERACTIVE_SHELL_VAR=codex\n".to_string(),
],
timeout_ms: Some(2_500),
})
.await?;
let out_2 = manager
.handle_request(UnifiedExecRequest {
session_id: Some(session_id),
input_chunks: &["sleep 5 && echo $CODEX_INTERACTIVE_SHELL_VAR\n".to_string()],
timeout_ms: Some(10),
})
.await?;
assert!(!out_2.output.contains("codex"));
tokio::time::sleep(Duration::from_secs(7)).await;
let empty = Vec::new();
let out_3 = manager
.handle_request(UnifiedExecRequest {
session_id: Some(session_id),
input_chunks: &empty,
timeout_ms: Some(100),
})
.await?;
assert!(out_3.output.contains("codex"));
Ok(())
}
#[cfg(unix)]
#[tokio::test]
async fn requests_with_large_timeout_are_capped() -> Result<(), UnifiedExecError> {
let manager = UnifiedExecSessionManager::default();
let result = manager
.handle_request(UnifiedExecRequest {
session_id: None,
input_chunks: &["echo".to_string(), "codex".to_string()],
timeout_ms: Some(120_000),
})
.await?;
assert!(result.output.starts_with(
"Warning: requested timeout 120000ms exceeds maximum of 60000ms; clamping to 60000ms.\n"
));
assert!(result.output.contains("codex"));
Ok(())
}
#[cfg(unix)]
#[tokio::test]
async fn completed_commands_do_not_persist_sessions() -> Result<(), UnifiedExecError> {
let manager = UnifiedExecSessionManager::default();
let result = manager
.handle_request(UnifiedExecRequest {
session_id: None,
input_chunks: &["/bin/echo".to_string(), "codex".to_string()],
timeout_ms: Some(2_500),
})
.await?;
assert!(result.session_id.is_none());
assert!(result.output.contains("codex"));
assert!(manager.sessions.lock().await.is_empty());
Ok(())
}
#[cfg(unix)]
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn reusing_completed_session_returns_unknown_session() -> Result<(), UnifiedExecError> {
let manager = UnifiedExecSessionManager::default();
let open_shell = manager
.handle_request(UnifiedExecRequest {
session_id: None,
input_chunks: &["/bin/bash".to_string(), "-i".to_string()],
timeout_ms: Some(2_500),
})
.await?;
let session_id = open_shell.session_id.expect("expected session id");
manager
.handle_request(UnifiedExecRequest {
session_id: Some(session_id),
input_chunks: &["exit\n".to_string()],
timeout_ms: Some(2_500),
})
.await?;
tokio::time::sleep(Duration::from_millis(200)).await;
let err = manager
.handle_request(UnifiedExecRequest {
session_id: Some(session_id),
input_chunks: &[],
timeout_ms: Some(100),
})
.await
.expect_err("expected unknown session error");
match err {
UnifiedExecError::UnknownSessionId { session_id: err_id } => {
assert_eq!(err_id, session_id);
}
other => panic!("expected UnknownSessionId, got {other:?}"),
}
assert!(!manager.sessions.lock().await.contains_key(&session_id));
Ok(())
}
}


@@ -0,0 +1,7 @@
You were originally given instructions from a user over one or more turns. Here were the user messages:
{{ user_messages_text }}
Another language model started to solve this problem and produced a summary of its thinking process. You also have access to the state of the tools that were used by that language model. Use this to build on the work that has already been done and avoid duplicating work. Here is the summary produced by the other language model; use the information in this summary to assist with your own analysis:
{{ summary_text }}
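Core substitutes the two `{{ … }}` placeholders before sending this bridge prompt. The substitution mechanism is not visible in this hunk; below is a minimal sketch assuming plain string replacement, with `render_bridge_prompt` as a purely illustrative name.

```rust
// Minimal sketch only: the real implementation may use a templating crate
// rather than `str::replace`; `render_bridge_prompt` is a hypothetical name.
fn render_bridge_prompt(template: &str, user_messages_text: &str, summary_text: &str) -> String {
    template
        .replace("{{ user_messages_text }}", user_messages_text)
        .replace("{{ summary_text }}", summary_text)
}
```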


@@ -0,0 +1,5 @@
You have exceeded the maximum number of tokens. Please stop coding and instead write a short memento message for the next agent. Your note should:
- Summarize what you finished and what still needs work. If there was a recent update_plan call, repeat its steps verbatim.
- List outstanding TODOs with file paths / line numbers so they're easy to find.
- Flag code that needs more tests (edge cases, performance, integration, etc.).
- Record any open bugs, quirks, or setup steps that will make it easier for the next agent to pick up where you left off.


@@ -1,19 +1,32 @@
use codex_core::CodexAuth;
use codex_core::ContentItem;
use codex_core::ConversationManager;
use codex_core::LocalShellAction;
use codex_core::LocalShellExecAction;
use codex_core::LocalShellStatus;
use codex_core::ModelClient;
use codex_core::ModelProviderInfo;
use codex_core::NewConversation;
use codex_core::Prompt;
use codex_core::ReasoningItemContent;
use codex_core::ResponseEvent;
use codex_core::ResponseItem;
use codex_core::WireApi;
use codex_core::built_in_model_providers;
use codex_core::protocol::EventMsg;
use codex_core::protocol::InputItem;
use codex_core::protocol::Op;
use codex_core::spawn::CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR;
use codex_protocol::mcp_protocol::AuthMode;
use codex_protocol::mcp_protocol::ConversationId;
use codex_protocol::models::ReasoningItemReasoningSummary;
use codex_protocol::models::WebSearchAction;
use core_test_support::load_default_config_for_test;
use core_test_support::load_sse_fixture_with_id;
use core_test_support::wait_for_event;
use futures::StreamExt;
use serde_json::json;
use std::io::Write;
use std::sync::Arc;
use tempfile::TempDir;
use uuid::Uuid;
use wiremock::Mock;
@@ -489,79 +502,6 @@ async fn chatgpt_auth_sends_correct_request() {
);
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn prefers_chatgpt_token_when_config_prefers_chatgpt() {
if std::env::var(CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR).is_ok() {
println!(
"Skipping test because it cannot execute when network is disabled in a Codex sandbox."
);
return;
}
// Mock server
let server = MockServer::start().await;
let first = ResponseTemplate::new(200)
.insert_header("content-type", "text/event-stream")
.set_body_raw(sse_completed("resp1"), "text/event-stream");
// Expect ChatGPT base path and correct headers
Mock::given(method("POST"))
.and(path("/v1/responses"))
.and(header_regex("Authorization", r"Bearer Access-123"))
.and(header_regex("chatgpt-account-id", r"acc-123"))
.respond_with(first)
.expect(1)
.mount(&server)
.await;
let model_provider = ModelProviderInfo {
base_url: Some(format!("{}/v1", server.uri())),
..built_in_model_providers()["openai"].clone()
};
// Init session
let codex_home = TempDir::new().unwrap();
// Write auth.json that contains both API key and ChatGPT tokens for a plan that should prefer ChatGPT.
let _jwt = write_auth_json(
&codex_home,
Some("sk-test-key"),
"pro",
"Access-123",
Some("acc-123"),
);
let mut config = load_default_config_for_test(&codex_home);
config.model_provider = model_provider;
config.preferred_auth_method = AuthMode::ChatGPT;
let auth_manager =
match CodexAuth::from_codex_home(codex_home.path(), config.preferred_auth_method) {
Ok(Some(auth)) => codex_core::AuthManager::from_auth_for_testing(auth),
Ok(None) => panic!("No CodexAuth found in codex_home"),
Err(e) => panic!("Failed to load CodexAuth: {e}"),
};
let conversation_manager = ConversationManager::new(auth_manager);
let NewConversation {
conversation: codex,
..
} = conversation_manager
.new_conversation(config)
.await
.expect("create new conversation");
codex
.submit(Op::UserInput {
items: vec![InputItem::Text {
text: "hello".into(),
}],
})
.await
.unwrap();
wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn prefers_apikey_when_config_prefers_apikey_even_with_chatgpt_tokens() {
if std::env::var(CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR).is_ok() {
@@ -606,14 +546,12 @@ async fn prefers_apikey_when_config_prefers_apikey_even_with_chatgpt_tokens() {
let mut config = load_default_config_for_test(&codex_home);
config.model_provider = model_provider;
config.preferred_auth_method = AuthMode::ApiKey;
let auth_manager =
match CodexAuth::from_codex_home(codex_home.path(), config.preferred_auth_method) {
Ok(Some(auth)) => codex_core::AuthManager::from_auth_for_testing(auth),
Ok(None) => panic!("No CodexAuth found in codex_home"),
Err(e) => panic!("Failed to load CodexAuth: {e}"),
};
let auth_manager = match CodexAuth::from_codex_home(codex_home.path()) {
Ok(Some(auth)) => codex_core::AuthManager::from_auth_for_testing(auth),
Ok(None) => panic!("No CodexAuth found in codex_home"),
Err(e) => panic!("Failed to load CodexAuth: {e}"),
};
let conversation_manager = ConversationManager::new(auth_manager);
let NewConversation {
conversation: codex,
@@ -696,6 +634,147 @@ async fn includes_user_instructions_message_in_request() {
assert_message_ends_with(&request_body["input"][1], "</environment_context>");
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn azure_responses_request_includes_store_and_reasoning_ids() {
if std::env::var(CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR).is_ok() {
println!(
"Skipping test because it cannot execute when network is disabled in a Codex sandbox."
);
return;
}
let server = MockServer::start().await;
let sse_body = concat!(
"data: {\"type\":\"response.created\",\"response\":{}}\n\n",
"data: {\"type\":\"response.completed\",\"response\":{\"id\":\"resp_1\"}}\n\n",
);
let template = ResponseTemplate::new(200)
.insert_header("content-type", "text/event-stream")
.set_body_raw(sse_body, "text/event-stream");
Mock::given(method("POST"))
.and(path("/openai/responses"))
.respond_with(template)
.expect(1)
.mount(&server)
.await;
let provider = ModelProviderInfo {
name: "azure".into(),
base_url: Some(format!("{}/openai", server.uri())),
env_key: None,
env_key_instructions: None,
wire_api: WireApi::Responses,
query_params: None,
http_headers: None,
env_http_headers: None,
request_max_retries: Some(0),
stream_max_retries: Some(0),
stream_idle_timeout_ms: Some(5_000),
requires_openai_auth: false,
};
let codex_home = TempDir::new().unwrap();
let mut config = load_default_config_for_test(&codex_home);
config.model_provider_id = provider.name.clone();
config.model_provider = provider.clone();
let effort = config.model_reasoning_effort;
let summary = config.model_reasoning_summary;
let config = Arc::new(config);
let client = ModelClient::new(
Arc::clone(&config),
None,
provider,
effort,
summary,
ConversationId::new(),
);
let mut prompt = Prompt::default();
prompt.input.push(ResponseItem::Reasoning {
id: "reasoning-id".into(),
summary: vec![ReasoningItemReasoningSummary::SummaryText {
text: "summary".into(),
}],
content: Some(vec![ReasoningItemContent::ReasoningText {
text: "content".into(),
}]),
encrypted_content: None,
});
prompt.input.push(ResponseItem::Message {
id: Some("message-id".into()),
role: "assistant".into(),
content: vec![ContentItem::OutputText {
text: "message".into(),
}],
});
prompt.input.push(ResponseItem::WebSearchCall {
id: Some("web-search-id".into()),
status: Some("completed".into()),
action: WebSearchAction::Search {
query: "weather".into(),
},
});
prompt.input.push(ResponseItem::FunctionCall {
id: Some("function-id".into()),
name: "do_thing".into(),
arguments: "{}".into(),
call_id: "function-call-id".into(),
});
prompt.input.push(ResponseItem::LocalShellCall {
id: Some("local-shell-id".into()),
call_id: Some("local-shell-call-id".into()),
status: LocalShellStatus::Completed,
action: LocalShellAction::Exec(LocalShellExecAction {
command: vec!["echo".into(), "hello".into()],
timeout_ms: None,
working_directory: None,
env: None,
user: None,
}),
});
prompt.input.push(ResponseItem::CustomToolCall {
id: Some("custom-tool-id".into()),
status: Some("completed".into()),
call_id: "custom-tool-call-id".into(),
name: "custom_tool".into(),
input: "{}".into(),
});
let mut stream = client
.stream(&prompt)
.await
.expect("responses stream to start");
while let Some(event) = stream.next().await {
if let Ok(ResponseEvent::Completed { .. }) = event {
break;
}
}
let requests = server
.received_requests()
.await
.expect("mock server collected requests");
assert_eq!(requests.len(), 1, "expected a single request");
let body: serde_json::Value = requests[0]
.body_json()
.expect("request body to be valid JSON");
assert_eq!(body["store"], serde_json::Value::Bool(true));
assert_eq!(body["stream"], serde_json::Value::Bool(true));
assert_eq!(body["input"].as_array().map(Vec::len), Some(6));
assert_eq!(body["input"][0]["id"].as_str(), Some("reasoning-id"));
assert_eq!(body["input"][1]["id"].as_str(), Some("message-id"));
assert_eq!(body["input"][2]["id"].as_str(), Some("web-search-id"));
assert_eq!(body["input"][3]["id"].as_str(), Some("function-id"));
assert_eq!(body["input"][4]["id"].as_str(), Some("local-shell-id"));
assert_eq!(body["input"][5]["id"].as_str(), Some("custom-tool-id"));
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn azure_overrides_assign_properties_used_for_responses_url() {
let existing_env_var_with_random_value = if cfg!(windows) { "USERNAME" } else { "USER" };


@@ -3,22 +3,33 @@
use codex_core::CodexAuth;
use codex_core::ConversationManager;
use codex_core::ModelProviderInfo;
use codex_core::NewConversation;
use codex_core::built_in_model_providers;
use codex_core::protocol::ErrorEvent;
use codex_core::protocol::EventMsg;
use codex_core::protocol::InputItem;
use codex_core::protocol::Op;
use codex_core::protocol::RolloutItem;
use codex_core::protocol::RolloutLine;
use codex_core::spawn::CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR;
use core_test_support::load_default_config_for_test;
use core_test_support::wait_for_event;
use serde_json::Value;
use tempfile::TempDir;
use wiremock::BodyPrintLimit;
use wiremock::Mock;
use wiremock::MockServer;
use wiremock::Request;
use wiremock::Respond;
use wiremock::ResponseTemplate;
use wiremock::matchers::method;
use wiremock::matchers::path;
use pretty_assertions::assert_eq;
use std::sync::Arc;
use std::sync::Mutex;
use std::sync::atomic::AtomicUsize;
use std::sync::atomic::Ordering;
// --- Test helpers -----------------------------------------------------------
@@ -49,6 +60,22 @@ fn ev_completed(id: &str) -> Value {
})
}
fn ev_completed_with_tokens(id: &str, total_tokens: u64) -> Value {
serde_json::json!({
"type": "response.completed",
"response": {
"id": id,
"usage": {
"input_tokens": total_tokens,
"input_tokens_details": null,
"output_tokens": 0,
"output_tokens_details": null,
"total_tokens": total_tokens
}
}
})
}
/// Convenience: SSE event for a single assistant message output item.
fn ev_assistant_message(id: &str, text: &str) -> Value {
serde_json::json!({
@@ -62,6 +89,18 @@ fn ev_assistant_message(id: &str, text: &str) -> Value {
})
}
fn ev_function_call(call_id: &str, name: &str, arguments: &str) -> Value {
serde_json::json!({
"type": "response.output_item.done",
"item": {
"type": "function_call",
"call_id": call_id,
"name": name,
"arguments": arguments
}
})
}
fn sse_response(body: String) -> ResponseTemplate {
ResponseTemplate::new(200)
.insert_header("content-type", "text/event-stream")
@@ -81,10 +120,28 @@ where
.await;
}
async fn start_mock_server() -> MockServer {
MockServer::builder()
.body_print_limit(BodyPrintLimit::Limited(80_000))
.start()
.await
}
const FIRST_REPLY: &str = "FIRST_REPLY";
const SUMMARY_TEXT: &str = "SUMMARY_ONLY_CONTEXT";
const SUMMARIZE_TRIGGER: &str = "Start Summarization";
const THIRD_USER_MSG: &str = "next turn";
const AUTO_SUMMARY_TEXT: &str = "AUTO_SUMMARY";
const FIRST_AUTO_MSG: &str = "token limit start";
const SECOND_AUTO_MSG: &str = "token limit push";
const STILL_TOO_BIG_REPLY: &str = "STILL_TOO_BIG";
const MULTI_AUTO_MSG: &str = "multi auto";
const SECOND_LARGE_REPLY: &str = "SECOND_LARGE_REPLY";
const FIRST_AUTO_SUMMARY: &str = "FIRST_AUTO_SUMMARY";
const SECOND_AUTO_SUMMARY: &str = "SECOND_AUTO_SUMMARY";
const FINAL_REPLY: &str = "FINAL_REPLY";
const DUMMY_FUNCTION_NAME: &str = "unsupported_tool";
const DUMMY_CALL_ID: &str = "call-multi-auto";
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn summarize_context_three_requests_and_instructions() {
@@ -96,7 +153,7 @@ async fn summarize_context_three_requests_and_instructions() {
}
// Set up a mock server that we can inspect after the run.
let server = MockServer::start().await;
let server = start_mock_server().await;
// SSE 1: assistant replies normally so it is recorded in history.
let sse1 = sse(vec![
@@ -141,12 +198,14 @@ async fn summarize_context_three_requests_and_instructions() {
let home = TempDir::new().unwrap();
let mut config = load_default_config_for_test(&home);
config.model_provider = model_provider;
config.model_auto_compact_token_limit = Some(200_000);
let conversation_manager = ConversationManager::with_auth(CodexAuth::from_api_key("dummy"));
let codex = conversation_manager
.new_conversation(config)
.await
.unwrap()
.conversation;
let NewConversation {
conversation: codex,
session_configured,
..
} = conversation_manager.new_conversation(config).await.unwrap();
let rollout_path = session_configured.rollout_path;
// 1) Normal user input should hit server once.
codex
@@ -194,7 +253,7 @@ async fn summarize_context_three_requests_and_instructions() {
"summarization should override base instructions"
);
assert!(
instr2.contains("You are a summarization assistant"),
instr2.contains("You have exceeded the maximum number of tokens"),
"summarization instructions not applied"
);
@@ -205,14 +264,17 @@ async fn summarize_context_three_requests_and_instructions() {
assert_eq!(last2.get("type").unwrap().as_str().unwrap(), "message");
assert_eq!(last2.get("role").unwrap().as_str().unwrap(), "user");
let text2 = last2["content"][0]["text"].as_str().unwrap();
assert!(text2.contains(SUMMARIZE_TRIGGER));
assert!(
text2.contains(SUMMARIZE_TRIGGER),
"expected summarize trigger, got `{text2}`"
);
// Third request must contain only the summary from step 2 as prior history plus new user msg.
// Third request must contain the refreshed instructions, bridge summary message and new user msg.
let input3 = body3.get("input").and_then(|v| v.as_array()).unwrap();
println!("third request body: {body3}");
assert!(
input3.len() >= 2,
"expected summary + new user message in third request"
input3.len() >= 3,
"expected refreshed context and new user message in third request"
);
// Collect all (role, text) message tuples.
@@ -228,24 +290,582 @@ async fn summarize_context_three_requests_and_instructions() {
}
}
// Exactly one assistant message should remain after compaction and the new user message is present.
// No previous assistant messages should remain and the new user message is present.
let assistant_count = messages.iter().filter(|(r, _)| r == "assistant").count();
assert_eq!(
assistant_count, 1,
"exactly one assistant message should remain after compaction"
);
assert_eq!(assistant_count, 0, "assistant history should be cleared");
assert!(
messages
.iter()
.any(|(r, t)| r == "user" && t == THIRD_USER_MSG),
"third request should include the new user message"
);
let Some((_, bridge_text)) = messages.iter().find(|(role, text)| {
role == "user"
&& (text.contains("Here were the user messages")
|| text.contains("Here are all the user messages"))
&& text.contains(SUMMARY_TEXT)
}) else {
panic!("expected a bridge message containing the summary");
};
assert!(
!messages.iter().any(|(_, t)| t.contains("hello world")),
"third request should not include the original user input"
bridge_text.contains("hello world"),
"bridge should capture earlier user messages"
);
assert!(
!messages.iter().any(|(_, t)| t.contains(SUMMARIZE_TRIGGER)),
!bridge_text.contains(SUMMARIZE_TRIGGER),
"bridge text should not echo the summarize trigger"
);
assert!(
!messages
.iter()
.any(|(_, text)| text.contains(SUMMARIZE_TRIGGER)),
"third request should not include the summarize trigger"
);
// Shut down Codex to flush rollout entries before inspecting the file.
codex.submit(Op::Shutdown).await.unwrap();
wait_for_event(&codex, |ev| matches!(ev, EventMsg::ShutdownComplete)).await;
// Verify rollout contains a TurnContext entry for each API call and a Compacted entry.
println!("rollout path: {}", rollout_path.display());
let text = std::fs::read_to_string(&rollout_path).unwrap_or_else(|e| {
panic!(
"failed to read rollout file {}: {e}",
rollout_path.display()
)
});
let mut api_turn_count = 0usize;
let mut saw_compacted_summary = false;
for line in text.lines() {
let trimmed = line.trim();
if trimmed.is_empty() {
continue;
}
let Ok(entry): Result<RolloutLine, _> = serde_json::from_str(trimmed) else {
continue;
};
match entry.item {
RolloutItem::TurnContext(_) => {
api_turn_count += 1;
}
RolloutItem::Compacted(ci) => {
if ci.message == SUMMARY_TEXT {
saw_compacted_summary = true;
}
}
_ => {}
}
}
assert_eq!(
api_turn_count, 3,
"expected three TurnContext entries in rollout"
);
assert!(
saw_compacted_summary,
"expected a Compacted entry containing the summarizer output"
);
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn auto_compact_runs_after_token_limit_hit() {
if std::env::var(CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR).is_ok() {
println!(
"Skipping test because it cannot execute when network is disabled in a Codex sandbox."
);
return;
}
let server = start_mock_server().await;
let sse1 = sse(vec![
ev_assistant_message("m1", FIRST_REPLY),
ev_completed_with_tokens("r1", 70_000),
]);
let sse2 = sse(vec![
ev_assistant_message("m2", "SECOND_REPLY"),
ev_completed_with_tokens("r2", 330_000),
]);
let sse3 = sse(vec![
ev_assistant_message("m3", AUTO_SUMMARY_TEXT),
ev_completed_with_tokens("r3", 200),
]);
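// Route requests by body content: the two user turns get normal replies, and any request
// carrying the compaction instructions receives the summary response.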
let first_matcher = |req: &wiremock::Request| {
let body = std::str::from_utf8(&req.body).unwrap_or("");
body.contains(FIRST_AUTO_MSG)
&& !body.contains(SECOND_AUTO_MSG)
&& !body.contains("You have exceeded the maximum number of tokens")
};
Mock::given(method("POST"))
.and(path("/v1/responses"))
.and(first_matcher)
.respond_with(sse_response(sse1))
.mount(&server)
.await;
let second_matcher = |req: &wiremock::Request| {
let body = std::str::from_utf8(&req.body).unwrap_or("");
body.contains(SECOND_AUTO_MSG)
&& body.contains(FIRST_AUTO_MSG)
&& !body.contains("You have exceeded the maximum number of tokens")
};
Mock::given(method("POST"))
.and(path("/v1/responses"))
.and(second_matcher)
.respond_with(sse_response(sse2))
.mount(&server)
.await;
let third_matcher = |req: &wiremock::Request| {
let body = std::str::from_utf8(&req.body).unwrap_or("");
body.contains("You have exceeded the maximum number of tokens")
};
Mock::given(method("POST"))
.and(path("/v1/responses"))
.and(third_matcher)
.respond_with(sse_response(sse3))
.mount(&server)
.await;
let model_provider = ModelProviderInfo {
base_url: Some(format!("{}/v1", server.uri())),
..built_in_model_providers()["openai"].clone()
};
let home = TempDir::new().unwrap();
let mut config = load_default_config_for_test(&home);
config.model_provider = model_provider;
config.model_auto_compact_token_limit = Some(200_000);
let conversation_manager = ConversationManager::with_auth(CodexAuth::from_api_key("dummy"));
let codex = conversation_manager
.new_conversation(config)
.await
.unwrap()
.conversation;
codex
.submit(Op::UserInput {
items: vec![InputItem::Text {
text: FIRST_AUTO_MSG.into(),
}],
})
.await
.unwrap();
wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
codex
.submit(Op::UserInput {
items: vec![InputItem::Text {
text: SECOND_AUTO_MSG.into(),
}],
})
.await
.unwrap();
wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
let requests = server.received_requests().await.unwrap();
assert_eq!(requests.len(), 3, "auto compact should add a third request");
let body3 = requests[2].body_json::<serde_json::Value>().unwrap();
let instructions = body3
.get("instructions")
.and_then(|v| v.as_str())
.unwrap_or_default();
assert!(
instructions.contains("You have exceeded the maximum number of tokens"),
"auto compact should reuse summarization instructions"
);
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn auto_compact_persists_rollout_entries() {
if std::env::var(CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR).is_ok() {
println!(
"Skipping test because it cannot execute when network is disabled in a Codex sandbox."
);
return;
}
let server = start_mock_server().await;
let sse1 = sse(vec![
ev_assistant_message("m1", FIRST_REPLY),
ev_completed_with_tokens("r1", 70_000),
]);
let sse2 = sse(vec![
ev_assistant_message("m2", "SECOND_REPLY"),
ev_completed_with_tokens("r2", 330_000),
]);
let sse3 = sse(vec![
ev_assistant_message("m3", AUTO_SUMMARY_TEXT),
ev_completed_with_tokens("r3", 200),
]);
let first_matcher = |req: &wiremock::Request| {
let body = std::str::from_utf8(&req.body).unwrap_or("");
body.contains(FIRST_AUTO_MSG)
&& !body.contains(SECOND_AUTO_MSG)
&& !body.contains("You have exceeded the maximum number of tokens")
};
Mock::given(method("POST"))
.and(path("/v1/responses"))
.and(first_matcher)
.respond_with(sse_response(sse1))
.mount(&server)
.await;
let second_matcher = |req: &wiremock::Request| {
let body = std::str::from_utf8(&req.body).unwrap_or("");
body.contains(SECOND_AUTO_MSG)
&& body.contains(FIRST_AUTO_MSG)
&& !body.contains("You have exceeded the maximum number of tokens")
};
Mock::given(method("POST"))
.and(path("/v1/responses"))
.and(second_matcher)
.respond_with(sse_response(sse2))
.mount(&server)
.await;
let third_matcher = |req: &wiremock::Request| {
let body = std::str::from_utf8(&req.body).unwrap_or("");
body.contains("You have exceeded the maximum number of tokens")
};
Mock::given(method("POST"))
.and(path("/v1/responses"))
.and(third_matcher)
.respond_with(sse_response(sse3))
.mount(&server)
.await;
let model_provider = ModelProviderInfo {
base_url: Some(format!("{}/v1", server.uri())),
..built_in_model_providers()["openai"].clone()
};
let home = TempDir::new().unwrap();
let mut config = load_default_config_for_test(&home);
config.model_provider = model_provider;
let conversation_manager = ConversationManager::with_auth(CodexAuth::from_api_key("dummy"));
let NewConversation {
conversation: codex,
session_configured,
..
} = conversation_manager.new_conversation(config).await.unwrap();
codex
.submit(Op::UserInput {
items: vec![InputItem::Text {
text: FIRST_AUTO_MSG.into(),
}],
})
.await
.unwrap();
wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
codex
.submit(Op::UserInput {
items: vec![InputItem::Text {
text: SECOND_AUTO_MSG.into(),
}],
})
.await
.unwrap();
wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
codex.submit(Op::Shutdown).await.unwrap();
wait_for_event(&codex, |ev| matches!(ev, EventMsg::ShutdownComplete)).await;
let rollout_path = session_configured.rollout_path;
let text = std::fs::read_to_string(&rollout_path).unwrap_or_else(|e| {
panic!(
"failed to read rollout file {}: {e}",
rollout_path.display()
)
});
let mut turn_context_count = 0usize;
for line in text.lines() {
let trimmed = line.trim();
if trimmed.is_empty() {
continue;
}
let Ok(entry): Result<RolloutLine, _> = serde_json::from_str(trimmed) else {
continue;
};
match entry.item {
RolloutItem::TurnContext(_) => {
turn_context_count += 1;
}
RolloutItem::Compacted(_) => {}
_ => {}
}
}
assert!(
turn_context_count >= 2,
"expected at least two turn context entries, got {turn_context_count}"
);
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn auto_compact_stops_after_failed_attempt() {
if std::env::var(CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR).is_ok() {
println!(
"Skipping test because it cannot execute when network is disabled in a Codex sandbox."
);
return;
}
let server = start_mock_server().await;
let sse1 = sse(vec![
ev_assistant_message("m1", FIRST_REPLY),
ev_completed_with_tokens("r1", 500),
]);
let sse2 = sse(vec![
ev_assistant_message("m2", SUMMARY_TEXT),
ev_completed_with_tokens("r2", 50),
]);
let sse3 = sse(vec![
ev_assistant_message("m3", STILL_TOO_BIG_REPLY),
ev_completed_with_tokens("r3", 500),
]);
let first_matcher = |req: &wiremock::Request| {
let body = std::str::from_utf8(&req.body).unwrap_or("");
body.contains(FIRST_AUTO_MSG)
&& !body.contains("You have exceeded the maximum number of tokens")
};
Mock::given(method("POST"))
.and(path("/v1/responses"))
.and(first_matcher)
.respond_with(sse_response(sse1.clone()))
.mount(&server)
.await;
let second_matcher = |req: &wiremock::Request| {
let body = std::str::from_utf8(&req.body).unwrap_or("");
body.contains("You have exceeded the maximum number of tokens")
};
Mock::given(method("POST"))
.and(path("/v1/responses"))
.and(second_matcher)
.respond_with(sse_response(sse2.clone()))
.mount(&server)
.await;
let third_matcher = |req: &wiremock::Request| {
let body = std::str::from_utf8(&req.body).unwrap_or("");
!body.contains("You have exceeded the maximum number of tokens")
&& body.contains(SUMMARY_TEXT)
};
Mock::given(method("POST"))
.and(path("/v1/responses"))
.and(third_matcher)
.respond_with(sse_response(sse3.clone()))
.mount(&server)
.await;
let model_provider = ModelProviderInfo {
base_url: Some(format!("{}/v1", server.uri())),
..built_in_model_providers()["openai"].clone()
};
let home = TempDir::new().unwrap();
let mut config = load_default_config_for_test(&home);
config.model_provider = model_provider;
config.model_auto_compact_token_limit = Some(200);
let conversation_manager = ConversationManager::with_auth(CodexAuth::from_api_key("dummy"));
let codex = conversation_manager
.new_conversation(config)
.await
.unwrap()
.conversation;
codex
.submit(Op::UserInput {
items: vec![InputItem::Text {
text: FIRST_AUTO_MSG.into(),
}],
})
.await
.unwrap();
let error_event = wait_for_event(&codex, |ev| matches!(ev, EventMsg::Error(_))).await;
let EventMsg::Error(ErrorEvent { message }) = error_event else {
panic!("expected error event");
};
assert!(
message.contains("limit"),
"error message should include limit information: {message}"
);
wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
let requests = server.received_requests().await.unwrap();
assert_eq!(
requests.len(),
3,
"auto compact should attempt at most one summarization before erroring"
);
let last_body = requests[2].body_json::<serde_json::Value>().unwrap();
let instructions = last_body
.get("instructions")
.and_then(|v| v.as_str())
.unwrap_or_default();
assert!(
!instructions.contains("You have exceeded the maximum number of tokens"),
"third request should be the follow-up turn, not another summarization"
);
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn auto_compact_allows_multiple_attempts_when_interleaved_with_other_turn_events() {
if std::env::var(CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR).is_ok() {
println!(
"Skipping test because it cannot execute when network is disabled in a Codex sandbox."
);
return;
}
let server = start_mock_server().await;
let sse1 = sse(vec![
ev_assistant_message("m1", FIRST_REPLY),
ev_completed_with_tokens("r1", 500),
]);
let sse2 = sse(vec![
ev_assistant_message("m2", FIRST_AUTO_SUMMARY),
ev_completed_with_tokens("r2", 50),
]);
let sse3 = sse(vec![
ev_function_call(DUMMY_CALL_ID, DUMMY_FUNCTION_NAME, "{}"),
ev_completed_with_tokens("r3", 150),
]);
let sse4 = sse(vec![
ev_assistant_message("m4", SECOND_LARGE_REPLY),
ev_completed_with_tokens("r4", 450),
]);
let sse5 = sse(vec![
ev_assistant_message("m5", SECOND_AUTO_SUMMARY),
ev_completed_with_tokens("r5", 60),
]);
let sse6 = sse(vec![
ev_assistant_message("m6", FINAL_REPLY),
ev_completed_with_tokens("r6", 120),
]);
#[derive(Clone)]
struct SeqResponder {
bodies: Arc<Vec<String>>,
calls: Arc<AtomicUsize>,
requests: Arc<Mutex<Vec<Vec<u8>>>>,
}
impl SeqResponder {
fn new(bodies: Vec<String>) -> Self {
Self {
bodies: Arc::new(bodies),
calls: Arc::new(AtomicUsize::new(0)),
requests: Arc::new(Mutex::new(Vec::new())),
}
}
fn recorded_requests(&self) -> Vec<Vec<u8>> {
self.requests.lock().unwrap().clone()
}
}
impl Respond for SeqResponder {
fn respond(&self, req: &Request) -> ResponseTemplate {
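// Serve the scripted SSE bodies in order, recording each raw request body for later assertions.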
let idx = self.calls.fetch_add(1, Ordering::SeqCst);
self.requests.lock().unwrap().push(req.body.clone());
let body = self
.bodies
.get(idx)
.unwrap_or_else(|| panic!("unexpected request index {idx}"))
.clone();
ResponseTemplate::new(200)
.insert_header("content-type", "text/event-stream")
.set_body_raw(body, "text/event-stream")
}
}
let responder = SeqResponder::new(vec![sse1, sse2, sse3, sse4, sse5, sse6]);
Mock::given(method("POST"))
.and(path("/v1/responses"))
.respond_with(responder.clone())
.expect(6)
.mount(&server)
.await;
let model_provider = ModelProviderInfo {
base_url: Some(format!("{}/v1", server.uri())),
..built_in_model_providers()["openai"].clone()
};
let home = TempDir::new().unwrap();
let mut config = load_default_config_for_test(&home);
config.model_provider = model_provider;
config.model_auto_compact_token_limit = Some(200);
let conversation_manager = ConversationManager::with_auth(CodexAuth::from_api_key("dummy"));
let codex = conversation_manager
.new_conversation(config)
.await
.unwrap()
.conversation;
codex
.submit(Op::UserInput {
items: vec![InputItem::Text {
text: MULTI_AUTO_MSG.into(),
}],
})
.await
.unwrap();
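// Auto-compact turns emit TaskComplete with an `auto-compact-`-prefixed id; keep draining
// events until the user turn itself completes.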
loop {
let event = codex.next_event().await.unwrap();
if let EventMsg::TaskComplete(_) = &event.msg
&& !event.id.starts_with("auto-compact-")
{
break;
}
}
let request_bodies: Vec<String> = responder
.recorded_requests()
.into_iter()
.map(|body| String::from_utf8(body).unwrap_or_default())
.collect();
assert_eq!(
request_bodies.len(),
6,
"expected six requests including two auto compactions"
);
assert!(
request_bodies[0].contains(MULTI_AUTO_MSG),
"first request should contain the user input"
);
assert!(
request_bodies[1].contains("You have exceeded the maximum number of tokens"),
"first auto compact request should use summarization instructions"
);
assert!(
request_bodies[3].contains(&format!("unsupported call: {DUMMY_FUNCTION_NAME}")),
"function call output should be sent before the second auto compact"
);
assert!(
request_bodies[4].contains("You have exceeded the maximum number of tokens"),
"second auto compact request should reuse summarization instructions"
);
}


@@ -1,12 +1,16 @@
use codex_core::CodexAuth;
use codex_core::ContentItem;
use codex_core::ConversationManager;
use codex_core::ModelProviderInfo;
use codex_core::NewConversation;
use codex_core::ResponseItem;
use codex_core::built_in_model_providers;
use codex_core::protocol::ConversationHistoryResponseEvent;
use codex_core::protocol::ConversationPathResponseEvent;
use codex_core::protocol::EventMsg;
use codex_core::protocol::InputItem;
use codex_core::protocol::Op;
use codex_core::protocol::RolloutItem;
use codex_core::protocol::RolloutLine;
use core_test_support::load_default_config_for_test;
use core_test_support::wait_for_event;
use tempfile::TempDir;
@@ -71,84 +75,121 @@ async fn fork_conversation_twice_drops_to_first_message() {
let _ = wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
}
// Request history from the base conversation.
codex.submit(Op::GetHistory).await.unwrap();
// Request history from the base conversation to obtain rollout path.
codex.submit(Op::GetPath).await.unwrap();
let base_history =
wait_for_event(&codex, |ev| matches!(ev, EventMsg::ConversationHistory(_))).await;
// Capture entries from the base history and compute expected prefixes after each fork.
let entries_after_three = match &base_history {
EventMsg::ConversationHistory(ConversationHistoryResponseEvent { entries, .. }) => {
entries.clone()
}
wait_for_event(&codex, |ev| matches!(ev, EventMsg::ConversationPath(_))).await;
let base_path = match &base_history {
EventMsg::ConversationPath(ConversationPathResponseEvent { path, .. }) => path.clone(),
_ => panic!("expected ConversationPath event"),
};
// History layout for this test:
// [0] user instructions,
// [1] environment context,
// [2] "first" user message,
// [3] "second" user message,
// [4] "third" user message.
// Fork 1: drops the last user message and everything after.
let expected_after_first = vec![
entries_after_three[0].clone(),
entries_after_three[1].clone(),
entries_after_three[2].clone(),
entries_after_three[3].clone(),
];
// GetPath flushes before returning the path; no wait needed.
// Fork 2: drops the last user message and everything after.
// [0] user instructions,
// [1] environment context,
// [2] "first" user message,
let expected_after_second = vec![
entries_after_three[0].clone(),
entries_after_three[1].clone(),
entries_after_three[2].clone(),
];
// Helper: read rollout items (excluding SessionMeta) from a JSONL path.
let read_items = |p: &std::path::Path| -> Vec<RolloutItem> {
let text = std::fs::read_to_string(p).expect("read rollout file");
let mut items: Vec<RolloutItem> = Vec::new();
for line in text.lines() {
if line.trim().is_empty() {
continue;
}
let v: serde_json::Value = serde_json::from_str(line).expect("jsonl line");
let rl: RolloutLine = serde_json::from_value(v).expect("rollout line");
match rl.item {
RolloutItem::SessionMeta(_) => {}
other => items.push(other),
}
}
items
};
// Fork once with n=1 → drops the last user message and everything after.
// Compute expected prefixes after each fork by truncating base rollout at nth-from-last user input.
let base_items = read_items(&base_path);
let find_user_input_positions = |items: &[RolloutItem]| -> Vec<usize> {
let mut pos = Vec::new();
for (i, it) in items.iter().enumerate() {
if let RolloutItem::ResponseItem(ResponseItem::Message { role, content, .. }) = it
&& role == "user"
{
// Consider any user message as an input boundary; recorder stores both EventMsg and ResponseItem.
// We specifically look for input items, which are represented as ContentItem::InputText.
if content
.iter()
.any(|c| matches!(c, ContentItem::InputText { .. }))
{
pos.push(i);
}
}
}
pos
};
let user_inputs = find_user_input_positions(&base_items);
// After dropping last user input (n=1), cut strictly before that input if present, else empty.
let cut1 = user_inputs.last().copied().unwrap_or(0);
let expected_after_first: Vec<RolloutItem> = base_items[..cut1].to_vec();
// After dropping again (n=1 on fork1), compute expected relative to fork1's rollout.
// Fork once with n=1 → drops the last user input and everything after.
let NewConversation {
conversation: codex_fork1,
..
} = conversation_manager
.fork_conversation(entries_after_three.clone(), 1, config_for_fork.clone())
.fork_conversation(1, config_for_fork.clone(), base_path.clone())
.await
.expect("fork 1");
codex_fork1.submit(Op::GetHistory).await.unwrap();
codex_fork1.submit(Op::GetPath).await.unwrap();
let fork1_history = wait_for_event(&codex_fork1, |ev| {
matches!(ev, EventMsg::ConversationHistory(_))
matches!(ev, EventMsg::ConversationPath(_))
})
.await;
let entries_after_first_fork = match &fork1_history {
EventMsg::ConversationHistory(ConversationHistoryResponseEvent { entries, .. }) => {
assert!(matches!(
fork1_history,
EventMsg::ConversationHistory(ConversationHistoryResponseEvent { ref entries, .. }) if *entries == expected_after_first
));
entries.clone()
}
let fork1_path = match &fork1_history {
EventMsg::ConversationPath(ConversationPathResponseEvent { path, .. }) => path.clone(),
_ => panic!("expected ConversationPath event after first fork"),
};
// GetPath on fork1 flushed; the file is ready.
let fork1_items = read_items(&fork1_path);
pretty_assertions::assert_eq!(
serde_json::to_value(&fork1_items).unwrap(),
serde_json::to_value(&expected_after_first).unwrap()
);
// Fork again with n=1 → drops the (new) last user message, leaving only the first.
let NewConversation {
conversation: codex_fork2,
..
} = conversation_manager
.fork_conversation(entries_after_first_fork.clone(), 1, config_for_fork.clone())
.fork_conversation(1, config_for_fork.clone(), fork1_path.clone())
.await
.expect("fork 2");
codex_fork2.submit(Op::GetHistory).await.unwrap();
codex_fork2.submit(Op::GetPath).await.unwrap();
let fork2_history = wait_for_event(&codex_fork2, |ev| {
matches!(ev, EventMsg::ConversationHistory(_))
matches!(ev, EventMsg::ConversationPath(_))
})
.await;
assert!(matches!(
fork2_history,
EventMsg::ConversationHistory(ConversationHistoryResponseEvent { ref entries, .. }) if *entries == expected_after_second
));
let fork2_path = match &fork2_history {
EventMsg::ConversationPath(ConversationPathResponseEvent { path, .. }) => path.clone(),
_ => panic!("expected ConversationPath event after second fork"),
};
// GetPath on fork2 flushed; the file is ready.
let fork1_items = read_items(&fork1_path);
let fork1_user_inputs = find_user_input_positions(&fork1_items);
let cut_last_on_fork1 = fork1_user_inputs.last().copied().unwrap_or(0);
let expected_after_second: Vec<RolloutItem> = fork1_items[..cut_last_on_fork1].to_vec();
let fork2_items = read_items(&fork2_path);
pretty_assertions::assert_eq!(
serde_json::to_value(&fork2_items).unwrap(),
serde_json::to_value(&expected_after_second).unwrap()
);
}


@@ -1,5 +1,6 @@
// Aggregates all former standalone integration tests as modules.
mod user_shell_cmd;
mod cli_stream;
mod client;
mod compact;
@@ -7,7 +8,9 @@ mod exec;
mod exec_stream_events;
mod fork_conversation;
mod live_cli;
mod model_overrides;
mod prompt_caching;
mod review;
mod seatbelt;
mod stream_error_allows_next_turn;
mod stream_no_completed;


@@ -0,0 +1,92 @@
use codex_core::CodexAuth;
use codex_core::ConversationManager;
use codex_core::protocol::EventMsg;
use codex_core::protocol::Op;
use codex_core::protocol_config_types::ReasoningEffort;
use core_test_support::load_default_config_for_test;
use core_test_support::wait_for_event;
use pretty_assertions::assert_eq;
use tempfile::TempDir;
const CONFIG_TOML: &str = "config.toml";
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn override_turn_context_does_not_persist_when_config_exists() {
let codex_home = TempDir::new().unwrap();
let config_path = codex_home.path().join(CONFIG_TOML);
let initial_contents = "model = \"gpt-4o\"\n";
tokio::fs::write(&config_path, initial_contents)
.await
.expect("seed config.toml");
let mut config = load_default_config_for_test(&codex_home);
config.model = "gpt-4o".to_string();
let conversation_manager =
ConversationManager::with_auth(CodexAuth::from_api_key("Test API Key"));
let codex = conversation_manager
.new_conversation(config)
.await
.expect("create conversation")
.conversation;
codex
.submit(Op::OverrideTurnContext {
cwd: None,
approval_policy: None,
sandbox_policy: None,
model: Some("o3".to_string()),
effort: Some(Some(ReasoningEffort::High)),
summary: None,
})
.await
.expect("submit override");
codex.submit(Op::Shutdown).await.expect("request shutdown");
wait_for_event(&codex, |ev| matches!(ev, EventMsg::ShutdownComplete)).await;
let contents = tokio::fs::read_to_string(&config_path)
.await
.expect("read config.toml after override");
assert_eq!(contents, initial_contents);
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn override_turn_context_does_not_create_config_file() {
let codex_home = TempDir::new().unwrap();
let config_path = codex_home.path().join(CONFIG_TOML);
assert!(
!config_path.exists(),
"test setup should start without config"
);
let config = load_default_config_for_test(&codex_home);
let conversation_manager =
ConversationManager::with_auth(CodexAuth::from_api_key("Test API Key"));
let codex = conversation_manager
.new_conversation(config)
.await
.expect("create conversation")
.conversation;
codex
.submit(Op::OverrideTurnContext {
cwd: None,
approval_policy: None,
sandbox_policy: None,
model: Some("o3".to_string()),
effort: Some(Some(ReasoningEffort::Medium)),
summary: None,
})
.await
.expect("submit override");
codex.submit(Op::Shutdown).await.expect("request shutdown");
wait_for_event(&codex, |ev| matches!(ev, EventMsg::ShutdownComplete)).await;
assert!(
!config_path.exists(),
"override should not create config.toml"
);
}


@@ -387,7 +387,7 @@ async fn overrides_turn_context_but_keeps_cached_prefix_and_key_constant() {
exclude_slash_tmp: true,
}),
model: Some("o3".to_string()),
effort: Some(ReasoningEffort::High),
effort: Some(Some(ReasoningEffort::High)),
summary: Some(ReasoningSummary::Detailed),
})
.await
@@ -426,11 +426,17 @@ async fn overrides_turn_context_but_keeps_cached_prefix_and_key_constant() {
// After overriding the turn context, the environment context should be emitted again
// reflecting the new approval policy and sandbox settings. Omit cwd because it did
// not change.
let expected_env_text_2 = r#"<environment_context>
let expected_env_text_2 = format!(
r#"<environment_context>
<approval_policy>never</approval_policy>
<sandbox_mode>workspace-write</sandbox_mode>
<network_access>enabled</network_access>
</environment_context>"#;
<writable_roots>
<root>{}</root>
</writable_roots>
</environment_context>"#,
writable.path().to_string_lossy()
);
let expected_env_msg_2 = serde_json::json!({
"type": "message",
"role": "user",
@@ -513,7 +519,7 @@ async fn per_turn_overrides_keep_cached_prefix_and_key_constant() {
exclude_slash_tmp: true,
},
model: "o3".to_string(),
effort: ReasoningEffort::High,
effort: Some(ReasoningEffort::High),
summary: ReasoningSummary::Detailed,
})
.await


@@ -0,0 +1,542 @@
use codex_core::CodexAuth;
use codex_core::CodexConversation;
use codex_core::ConversationManager;
use codex_core::ModelProviderInfo;
use codex_core::built_in_model_providers;
use codex_core::config::Config;
use codex_core::protocol::EventMsg;
use codex_core::protocol::InputItem;
use codex_core::protocol::Op;
use codex_core::protocol::ReviewCodeLocation;
use codex_core::protocol::ReviewFinding;
use codex_core::protocol::ReviewLineRange;
use codex_core::protocol::ReviewOutputEvent;
use codex_core::protocol::ReviewRequest;
use codex_core::spawn::CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR;
use core_test_support::load_default_config_for_test;
use core_test_support::load_sse_fixture_with_id_from_str;
use core_test_support::wait_for_event;
use pretty_assertions::assert_eq;
use std::path::PathBuf;
use std::sync::Arc;
use tempfile::TempDir;
use tokio::io::AsyncWriteExt as _;
use uuid::Uuid;
use wiremock::Mock;
use wiremock::MockServer;
use wiremock::ResponseTemplate;
use wiremock::matchers::method;
use wiremock::matchers::path;
/// Verify that submitting `Op::Review` spawns a child task and emits
/// EnteredReviewMode -> ExitedReviewMode(Some(review)) -> TaskComplete
/// in that order when the model returns a structured review JSON payload.
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn review_op_emits_lifecycle_and_review_output() {
// Skip under Codex sandbox network restrictions.
if std::env::var(CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR).is_ok() {
println!(
"Skipping test because it cannot execute when network is disabled in a Codex sandbox."
);
return;
}
// Start mock Responses API server. Return a single assistant message whose
// text is a JSON-encoded ReviewOutputEvent.
let review_json = serde_json::json!({
"findings": [
{
"title": "Prefer Stylize helpers",
"body": "Use .dim()/.bold() chaining instead of manual Style where possible.",
"confidence_score": 0.9,
"priority": 1,
"code_location": {
"absolute_file_path": "/tmp/file.rs",
"line_range": {"start": 10, "end": 20}
}
}
],
"overall_correctness": "good",
"overall_explanation": "All good with some improvements suggested.",
"overall_confidence_score": 0.8
})
.to_string();
let sse_template = r#"[
{"type":"response.output_item.done", "item":{
"type":"message", "role":"assistant",
"content":[{"type":"output_text","text":__REVIEW__}]
}},
{"type":"response.completed", "response": {"id": "__ID__"}}
]"#;
let review_json_escaped = serde_json::to_string(&review_json).unwrap();
let sse_raw = sse_template.replace("__REVIEW__", &review_json_escaped);
let server = start_responses_server_with_sse(&sse_raw, 1).await;
let codex_home = TempDir::new().unwrap();
let codex = new_conversation_for_server(&server, &codex_home, |_| {}).await;
// Submit review request.
codex
.submit(Op::Review {
review_request: ReviewRequest {
prompt: "Please review my changes".to_string(),
user_facing_hint: "my changes".to_string(),
},
})
.await
.unwrap();
// Verify lifecycle: Entered -> Exited(Some(review)) -> TaskComplete.
let _entered = wait_for_event(&codex, |ev| matches!(ev, EventMsg::EnteredReviewMode(_))).await;
let closed = wait_for_event(&codex, |ev| matches!(ev, EventMsg::ExitedReviewMode(_))).await;
let review = match closed {
EventMsg::ExitedReviewMode(Some(r)) => r,
other => panic!("expected ExitedReviewMode(Some(..)), got {other:?}"),
};
// Deep compare full structure using PartialEq (floats are f32 on both sides).
let expected = ReviewOutputEvent {
findings: vec![ReviewFinding {
title: "Prefer Stylize helpers".to_string(),
body: "Use .dim()/.bold() chaining instead of manual Style where possible.".to_string(),
confidence_score: 0.9,
priority: 1,
code_location: ReviewCodeLocation {
absolute_file_path: PathBuf::from("/tmp/file.rs"),
line_range: ReviewLineRange { start: 10, end: 20 },
},
}],
overall_correctness: "good".to_string(),
overall_explanation: "All good with some improvements suggested.".to_string(),
overall_confidence_score: 0.8,
};
assert_eq!(expected, review);
let _complete = wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
server.verify().await;
}
/// When the model returns plain text that is not JSON, ensure the child
/// lifecycle still occurs and the plain text is surfaced via
/// ExitedReviewMode(Some(..)) as the overall_explanation.
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn review_op_with_plain_text_emits_review_fallback() {
if std::env::var(CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR).is_ok() {
println!(
"Skipping test because it cannot execute when network is disabled in a Codex sandbox."
);
return;
}
let sse_raw = r#"[
{"type":"response.output_item.done", "item":{
"type":"message", "role":"assistant",
"content":[{"type":"output_text","text":"just plain text"}]
}},
{"type":"response.completed", "response": {"id": "__ID__"}}
]"#;
let server = start_responses_server_with_sse(sse_raw, 1).await;
let codex_home = TempDir::new().unwrap();
let codex = new_conversation_for_server(&server, &codex_home, |_| {}).await;
codex
.submit(Op::Review {
review_request: ReviewRequest {
prompt: "Plain text review".to_string(),
user_facing_hint: "plain text review".to_string(),
},
})
.await
.unwrap();
let _entered = wait_for_event(&codex, |ev| matches!(ev, EventMsg::EnteredReviewMode(_))).await;
let closed = wait_for_event(&codex, |ev| matches!(ev, EventMsg::ExitedReviewMode(_))).await;
let review = match closed {
EventMsg::ExitedReviewMode(Some(r)) => r,
other => panic!("expected ExitedReviewMode(Some(..)), got {other:?}"),
};
// Expect a structured fallback carrying the plain text.
let expected = ReviewOutputEvent {
overall_explanation: "just plain text".to_string(),
..Default::default()
};
assert_eq!(expected, review);
let _complete = wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
server.verify().await;
}
/// When the model returns structured JSON in a review, ensure no AgentMessage
/// is emitted; the UI consumes the structured result via ExitedReviewMode.
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn review_does_not_emit_agent_message_on_structured_output() {
if std::env::var(CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR).is_ok() {
println!(
"Skipping test because it cannot execute when network is disabled in a Codex sandbox."
);
return;
}
let review_json = serde_json::json!({
"findings": [
{
"title": "Example",
"body": "Structured review output.",
"confidence_score": 0.5,
"priority": 1,
"code_location": {
"absolute_file_path": "/tmp/file.rs",
"line_range": {"start": 1, "end": 2}
}
}
],
"overall_correctness": "ok",
"overall_explanation": "ok",
"overall_confidence_score": 0.5
})
.to_string();
let sse_template = r#"[
{"type":"response.output_item.done", "item":{
"type":"message", "role":"assistant",
"content":[{"type":"output_text","text":__REVIEW__}]
}},
{"type":"response.completed", "response": {"id": "__ID__"}}
]"#;
let review_json_escaped = serde_json::to_string(&review_json).unwrap();
let sse_raw = sse_template.replace("__REVIEW__", &review_json_escaped);
let server = start_responses_server_with_sse(&sse_raw, 1).await;
let codex_home = TempDir::new().unwrap();
let codex = new_conversation_for_server(&server, &codex_home, |_| {}).await;
codex
.submit(Op::Review {
review_request: ReviewRequest {
prompt: "check structured".to_string(),
user_facing_hint: "check structured".to_string(),
},
})
.await
.unwrap();
// Drain events until TaskComplete; ensure none are AgentMessage.
use tokio::time::Duration;
use tokio::time::timeout;
let mut saw_entered = false;
let mut saw_exited = false;
loop {
let ev = timeout(Duration::from_secs(5), codex.next_event())
.await
.expect("timeout waiting for event")
.expect("stream ended unexpectedly");
match ev.msg {
EventMsg::TaskComplete(_) => break,
EventMsg::AgentMessage(_) => {
panic!("unexpected AgentMessage during review with structured output")
}
EventMsg::EnteredReviewMode(_) => saw_entered = true,
EventMsg::ExitedReviewMode(_) => saw_exited = true,
_ => {}
}
}
assert!(saw_entered && saw_exited, "missing review lifecycle events");
server.verify().await;
}
/// Ensure that when a custom `review_model` is set in the config, the review
/// request uses that model (and not the main chat model).
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn review_uses_custom_review_model_from_config() {
if std::env::var(CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR).is_ok() {
println!(
"Skipping test because it cannot execute when network is disabled in a Codex sandbox."
);
return;
}
// Minimal stream: just a completed event
let sse_raw = r#"[
{"type":"response.completed", "response": {"id": "__ID__"}}
]"#;
let server = start_responses_server_with_sse(sse_raw, 1).await;
let codex_home = TempDir::new().unwrap();
// Choose a review model different from the main model; ensure it is used.
let codex = new_conversation_for_server(&server, &codex_home, |cfg| {
cfg.model = "gpt-4.1".to_string();
cfg.review_model = "gpt-5".to_string();
})
.await;
codex
.submit(Op::Review {
review_request: ReviewRequest {
prompt: "use custom model".to_string(),
user_facing_hint: "use custom model".to_string(),
},
})
.await
.unwrap();
// Wait for completion
let _entered = wait_for_event(&codex, |ev| matches!(ev, EventMsg::EnteredReviewMode(_))).await;
let _closed = wait_for_event(&codex, |ev| matches!(ev, EventMsg::ExitedReviewMode(None))).await;
let _complete = wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
// Assert the request body model equals the configured review model
let request = &server.received_requests().await.unwrap()[0];
let body = request.body_json::<serde_json::Value>().unwrap();
assert_eq!(body["model"].as_str().unwrap(), "gpt-5");
server.verify().await;
}
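// Model selection for review turns, as pinned down by the test above, can
// be summarized by this sketch (assumed shape; the real resolution lives in
// core and the function name here is hypothetical):
#[allow(dead_code)]
fn model_for_turn(config: &Config, is_review_turn: bool) -> &str {
    if is_review_turn {
        &config.review_model // e.g. "gpt-5" in the test above
    } else {
        &config.model // e.g. "gpt-4.1"
    }
}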
/// When a review session begins, it must not prepend prior chat history from
/// the parent session. The request `input` should contain only the review
/// prompt from the user.
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn review_input_isolated_from_parent_history() {
if std::env::var(CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR).is_ok() {
println!(
"Skipping test because it cannot execute when network is disabled in a Codex sandbox."
);
return;
}
// Mock server for the single review request
let sse_raw = r#"[
{"type":"response.completed", "response": {"id": "__ID__"}}
]"#;
let server = start_responses_server_with_sse(sse_raw, 1).await;
// Seed a parent session history via resume file with both user + assistant items.
let codex_home = TempDir::new().unwrap();
let mut config = load_default_config_for_test(&codex_home);
config.model_provider = ModelProviderInfo {
base_url: Some(format!("{}/v1", server.uri())),
..built_in_model_providers()["openai"].clone()
};
let session_file = codex_home.path().join("resume.jsonl");
{
let mut f = tokio::fs::File::create(&session_file).await.unwrap();
let convo_id = Uuid::new_v4();
// Proper session_meta line (enveloped) with a conversation id
let meta_line = serde_json::json!({
"timestamp": "2024-01-01T00:00:00.000Z",
"type": "session_meta",
"payload": {
"id": convo_id,
"timestamp": "2024-01-01T00:00:00Z",
"instructions": null,
"cwd": ".",
"originator": "test_originator",
"cli_version": "test_version"
}
});
f.write_all(format!("{meta_line}\n").as_bytes())
.await
.unwrap();
// Prior user message (enveloped response_item)
let user = codex_protocol::models::ResponseItem::Message {
id: None,
role: "user".to_string(),
content: vec![codex_protocol::models::ContentItem::InputText {
text: "parent: earlier user message".to_string(),
}],
};
let user_json = serde_json::to_value(&user).unwrap();
let user_line = serde_json::json!({
"timestamp": "2024-01-01T00:00:01.000Z",
"type": "response_item",
"payload": user_json
});
f.write_all(format!("{user_line}\n").as_bytes())
.await
.unwrap();
// Prior assistant message (enveloped response_item)
let assistant = codex_protocol::models::ResponseItem::Message {
id: None,
role: "assistant".to_string(),
content: vec![codex_protocol::models::ContentItem::OutputText {
text: "parent: assistant reply".to_string(),
}],
};
let assistant_json = serde_json::to_value(&assistant).unwrap();
let assistant_line = serde_json::json!({
"timestamp": "2024-01-01T00:00:02.000Z",
"type": "response_item",
"payload": assistant_json
});
f.write_all(format!("{assistant_line}\n").as_bytes())
.await
.unwrap();
}
config.experimental_resume = Some(session_file);
let codex = new_conversation_for_server(&server, &codex_home, |cfg| {
        // Apply the resume file seeded above.
cfg.experimental_resume = config.experimental_resume.clone();
})
.await;
// Submit review request; it must start fresh (no parent history in `input`).
let review_prompt = "Please review only this".to_string();
codex
.submit(Op::Review {
review_request: ReviewRequest {
prompt: review_prompt.clone(),
user_facing_hint: review_prompt.clone(),
},
})
.await
.unwrap();
let _entered = wait_for_event(&codex, |ev| matches!(ev, EventMsg::EnteredReviewMode(_))).await;
let _closed = wait_for_event(&codex, |ev| matches!(ev, EventMsg::ExitedReviewMode(None))).await;
let _complete = wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
// Assert the request `input` contains only the single review user message.
let request = &server.received_requests().await.unwrap()[0];
let body = request.body_json::<serde_json::Value>().unwrap();
let expected_input = serde_json::json!([
{
"type": "message",
"role": "user",
"content": [{"type": "input_text", "text": review_prompt}]
}
]);
assert_eq!(body["input"], expected_input);
server.verify().await;
}
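// The resume file written above follows a simple JSONL envelope: every line
// is `{timestamp, type, payload}`. A hypothetical helper capturing that
// shape (illustrative only):
#[allow(dead_code)]
fn rollout_line(timestamp: &str, kind: &str, payload: serde_json::Value) -> String {
    serde_json::json!({
        "timestamp": timestamp,
        "type": kind,
        "payload": payload,
    })
    .to_string()
}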
/// After a review thread finishes, its conversation should not leak into the
/// parent session. A subsequent parent turn must not include any review
/// messages in its request `input`.
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn review_history_does_not_leak_into_parent_session() {
if std::env::var(CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR).is_ok() {
println!(
"Skipping test because it cannot execute when network is disabled in a Codex sandbox."
);
return;
}
// Respond to both the review request and the subsequent parent request.
let sse_raw = r#"[
{"type":"response.output_item.done", "item":{
"type":"message", "role":"assistant",
"content":[{"type":"output_text","text":"review assistant output"}]
}},
{"type":"response.completed", "response": {"id": "__ID__"}}
]"#;
let server = start_responses_server_with_sse(sse_raw, 2).await;
let codex_home = TempDir::new().unwrap();
let codex = new_conversation_for_server(&server, &codex_home, |_| {}).await;
// 1) Run a review turn that produces an assistant message (isolated in child).
codex
.submit(Op::Review {
review_request: ReviewRequest {
prompt: "Start a review".to_string(),
user_facing_hint: "Start a review".to_string(),
},
})
.await
.unwrap();
let _entered = wait_for_event(&codex, |ev| matches!(ev, EventMsg::EnteredReviewMode(_))).await;
let _closed = wait_for_event(&codex, |ev| {
matches!(ev, EventMsg::ExitedReviewMode(Some(_)))
})
.await;
let _complete = wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
// 2) Continue in the parent session; request input must not include any review items.
let followup = "back to parent".to_string();
codex
.submit(Op::UserInput {
items: vec![InputItem::Text {
text: followup.clone(),
}],
})
.await
.unwrap();
let _complete = wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
// Inspect the second request (parent turn) input contents.
// Parent turns include session initial messages (user_instructions, environment_context).
// Critically, no messages from the review thread should appear.
let requests = server.received_requests().await.unwrap();
assert_eq!(requests.len(), 2);
let body = requests[1].body_json::<serde_json::Value>().unwrap();
let input = body["input"].as_array().expect("input array");
// Must include the followup as the last item for this turn
let last = input.last().expect("at least one item in input");
assert_eq!(last["role"].as_str().unwrap(), "user");
let last_text = last["content"][0]["text"].as_str().unwrap();
assert_eq!(last_text, followup);
// Ensure no review-thread content leaked into the parent request
let contains_review_prompt = input
.iter()
.any(|msg| msg["content"][0]["text"].as_str().unwrap_or_default() == "Start a review");
let contains_review_assistant = input.iter().any(|msg| {
msg["content"][0]["text"].as_str().unwrap_or_default() == "review assistant output"
});
assert!(
!contains_review_prompt,
"review prompt leaked into parent turn input"
);
assert!(
!contains_review_assistant,
"review assistant output leaked into parent turn input"
);
server.verify().await;
}
/// Start a mock Responses API server and mount the given SSE stream body.
async fn start_responses_server_with_sse(sse_raw: &str, expected_requests: usize) -> MockServer {
let server = MockServer::start().await;
let sse = load_sse_fixture_with_id_from_str(sse_raw, &Uuid::new_v4().to_string());
Mock::given(method("POST"))
.and(path("/v1/responses"))
.respond_with(
ResponseTemplate::new(200)
.insert_header("content-type", "text/event-stream")
.set_body_raw(sse.clone(), "text/event-stream"),
)
.expect(expected_requests as u64)
.mount(&server)
.await;
server
}
/// Create a conversation configured to talk to the provided mock server.
#[expect(clippy::expect_used)]
async fn new_conversation_for_server<F>(
server: &MockServer,
codex_home: &TempDir,
mutator: F,
) -> Arc<CodexConversation>
where
F: FnOnce(&mut Config),
{
let model_provider = ModelProviderInfo {
base_url: Some(format!("{}/v1", server.uri())),
..built_in_model_providers()["openai"].clone()
};
let mut config = load_default_config_for_test(codex_home);
config.model_provider = model_provider;
mutator(&mut config);
let conversation_manager =
ConversationManager::with_auth(CodexAuth::from_api_key("Test API Key"));
conversation_manager
.new_conversation(config)
.await
.expect("create conversation")
.conversation
}

View File

@@ -0,0 +1,112 @@
#![cfg(unix)]
use codex_core::ConversationManager;
use codex_core::NewConversation;
use codex_core::protocol::EventMsg;
use codex_core::protocol::ExecCommandEndEvent;
use codex_core::protocol::Op;
use codex_core::protocol::TurnAbortReason;
use core_test_support::load_default_config_for_test;
use core_test_support::wait_for_event;
use std::fs;
use std::path::PathBuf;
use tempfile::TempDir;
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn user_shell_cmd_ls_and_cat_in_temp_dir() {
// No env overrides needed; test build hard-codes a hermetic zsh with empty rc.
// Create a temporary working directory with a known file.
let cwd = TempDir::new().unwrap();
let file_name = "hello.txt";
let file_path: PathBuf = cwd.path().join(file_name);
let contents = "hello from bang test\n";
fs::write(&file_path, contents).expect("write temp file");
// Load config and pin cwd to the temp dir so ls/cat operate there.
let codex_home = TempDir::new().unwrap();
let mut config = load_default_config_for_test(&codex_home);
config.cwd = cwd.path().to_path_buf();
let conversation_manager =
ConversationManager::with_auth(codex_core::CodexAuth::from_api_key("dummy"));
let NewConversation {
conversation: codex,
..
} = conversation_manager
.new_conversation(config)
.await
.expect("create new conversation");
// 1) ls should list the file
codex
.submit(Op::RunUserShellCommand {
command: "ls".to_string(),
})
.await
.unwrap();
let msg = wait_for_event(&codex, |ev| matches!(ev, EventMsg::ExecCommandEnd(_))).await;
let EventMsg::ExecCommandEnd(ExecCommandEndEvent {
stdout, exit_code, ..
}) = msg
else {
unreachable!()
};
assert_eq!(exit_code, 0);
assert!(
stdout.contains(file_name),
"ls output should include {file_name}, got: {stdout:?}"
);
    // 2) cat-ing the file should return its exact contents
codex
.submit(Op::RunUserShellCommand {
command: format!("cat {}", file_name),
})
.await
.unwrap();
let msg = wait_for_event(&codex, |ev| matches!(ev, EventMsg::ExecCommandEnd(_))).await;
let EventMsg::ExecCommandEnd(ExecCommandEndEvent {
stdout, exit_code, ..
}) = msg
else {
unreachable!()
};
assert_eq!(exit_code, 0);
assert_eq!(stdout, contents);
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn user_shell_cmd_can_be_interrupted() {
// No env overrides needed; test build hard-codes a hermetic zsh with empty rc.
// Set up isolated config and conversation.
let codex_home = TempDir::new().unwrap();
let config = load_default_config_for_test(&codex_home);
let conversation_manager =
ConversationManager::with_auth(codex_core::CodexAuth::from_api_key("dummy"));
let NewConversation {
conversation: codex,
..
} = conversation_manager
.new_conversation(config)
.await
.expect("create new conversation");
// Start a long-running command and then interrupt it.
codex
.submit(Op::RunUserShellCommand {
command: "sleep 5".to_string(),
})
.await
.unwrap();
// Wait until it has started (ExecCommandBegin), then interrupt.
let _ = wait_for_event(&codex, |ev| matches!(ev, EventMsg::ExecCommandBegin(_))).await;
codex.submit(Op::Interrupt).await.unwrap();
// Expect a TurnAborted(Interrupted) notification.
let msg = wait_for_event(&codex, |ev| matches!(ev, EventMsg::TurnAborted(_))).await;
let EventMsg::TurnAborted(ev) = msg else {
unreachable!()
};
assert_eq!(ev.reason, TurnAbortReason::Interrupted);
}
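// Hedged sketch of the "!cmd" dispatch these tests exercise: a leading '!'
// in the composer becomes Op::RunUserShellCommand, anything else stays a
// normal user message. Illustrative only; the real routing lives in the
// TUI, and `InputItem` is assumed to come from codex_core::protocol.
#[allow(dead_code)]
fn op_for_submission(text: &str) -> Op {
    match text.strip_prefix('!') {
        Some(cmd) => Op::RunUserShellCommand {
            command: cmd.trim().to_string(),
        },
        None => Op::UserInput {
            items: vec![codex_core::protocol::InputItem::Text {
                text: text.to_string(),
            }],
        },
    }
}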

View File

@@ -278,9 +278,10 @@ impl EventProcessor for EventProcessorWithHumanOutput {
command,
cwd,
parsed_cmd: _,
..
}) => {
self.call_id_to_command.insert(
call_id.clone(),
call_id,
ExecCommandBegin {
command: command.clone(),
},
@@ -382,7 +383,7 @@ impl EventProcessor for EventProcessorWithHumanOutput {
// Store metadata so we can calculate duration later when we
// receive the corresponding PatchApplyEnd event.
self.call_id_to_patch.insert(
call_id.clone(),
call_id,
PatchApplyBegin {
start_time: Instant::now(),
auto_approved,
@@ -520,6 +521,7 @@ impl EventProcessor for EventProcessorWithHumanOutput {
let SessionConfiguredEvent {
session_id: conversation_id,
model,
reasoning_effort: _,
history_log_id: _,
history_entry_count: _,
initial_messages: _,
@@ -559,8 +561,10 @@ impl EventProcessor for EventProcessorWithHumanOutput {
}
},
EventMsg::ShutdownComplete => return CodexStatus::Shutdown,
EventMsg::ConversationHistory(_) => {}
EventMsg::ConversationPath(_) => {}
EventMsg::UserMessage(_) => {}
EventMsg::EnteredReviewMode(_) => {}
EventMsg::ExitedReviewMode(_) => {}
}
CodexStatus::Running
}

View File

@@ -137,6 +137,7 @@ pub async fn run_main(cli: Cli, codex_linux_sandbox_exe: Option<PathBuf>) -> any
// Load configuration and determine approval policy
let overrides = ConfigOverrides {
model,
review_model: None,
config_profile,
// This CLI is intended to be headless and has no affordances for asking
// the user for approval.
@@ -187,10 +188,8 @@ pub async fn run_main(cli: Cli, codex_linux_sandbox_exe: Option<PathBuf>) -> any
std::process::exit(1);
}
let conversation_manager = ConversationManager::new(AuthManager::shared(
config.codex_home.clone(),
config.preferred_auth_method,
));
let conversation_manager =
ConversationManager::new(AuthManager::shared(config.codex_home.clone()));
let NewConversation {
conversation_id: _,
conversation,

View File

@@ -61,7 +61,7 @@ pub(crate) async fn run_e2e_exec_test(cwd: &Path, response_streams: Vec<String>)
.context("should find binary for codex-exec")
.expect("should find binary for codex-exec")
.current_dir(cwd.clone())
.env("CODEX_HOME", cwd.clone())
.env("CODEX_HOME", cwd)
.env("OPENAI_API_KEY", "dummy")
.env("OPENAI_BASE_URL", format!("{uri}/v1"))
.arg("--skip-git-repo-check")

View File

@@ -88,7 +88,7 @@ impl ExecvChecker {
let mut program = valid_exec.program.to_string();
for system_path in valid_exec.system_path {
if is_executable_file(&system_path) {
program = system_path.to_string();
program = system_path;
break;
}
}
@@ -196,7 +196,7 @@ system_path=[{fake_cp:?}]
let checker = setup(&fake_cp);
let exec_call = ExecCall {
program: "cp".into(),
args: vec![source.clone(), dest.clone()],
args: vec![source, dest.clone()],
};
let valid_exec = match checker.r#match(&exec_call)? {
MatchedExec::Match { exec } => exec,
@@ -207,7 +207,7 @@ system_path=[{fake_cp:?}]
assert_eq!(
checker.check(valid_exec.clone(), &cwd, &[], &[]),
Err(ReadablePathNotInReadableFolders {
file: source_path.clone(),
file: source_path,
folders: vec![]
}),
);
@@ -229,7 +229,7 @@ system_path=[{fake_cp:?}]
// Both readable and writeable folders specified.
assert_eq!(
checker.check(
valid_exec.clone(),
valid_exec,
&cwd,
std::slice::from_ref(&root_path),
std::slice::from_ref(&root_path)
@@ -241,7 +241,7 @@ system_path=[{fake_cp:?}]
// folders.
let exec_call_folders_as_args = ExecCall {
program: "cp".into(),
args: vec![root.clone(), root.clone()],
args: vec![root.clone(), root],
};
let valid_exec_call_folders_as_args = match checker.r#match(&exec_call_folders_as_args)? {
MatchedExec::Match { exec } => exec,
@@ -254,7 +254,7 @@ system_path=[{fake_cp:?}]
std::slice::from_ref(&root_path),
std::slice::from_ref(&root_path)
),
Ok(cp.clone()),
Ok(cp),
);
// Specify a parent of a readable folder as input.

View File

@@ -104,7 +104,7 @@ impl PolicyBuilder {
info!("adding program spec: {program_spec:?}");
let name = program_spec.program.clone();
let mut programs = self.programs.borrow_mut();
programs.insert(name.clone(), program_spec);
programs.insert(name, program_spec);
}
fn add_forbidden_substrings(&self, substrings: &[String]) {

View File

@@ -31,6 +31,13 @@ install:
rustup show active-toolchain
cargo fetch
# Run `cargo nextest` since it's faster than `cargo test`, though including
# --no-fail-fast is important to ensure all tests are run.
#
# Run `cargo install cargo-nextest` if you don't have it installed.
test:
cargo nextest run --no-fail-fast
# Run the MCP server
mcp-server-run *args:
cargo run -p codex-mcp-server -- "$@"

View File

@@ -42,7 +42,7 @@ impl ServerOptions {
pub fn new(codex_home: PathBuf, client_id: String) -> Self {
Self {
codex_home,
client_id: client_id.to_string(),
client_id,
issuer: DEFAULT_ISSUER.to_string(),
port: DEFAULT_PORT,
open_browser: true,
@@ -126,7 +126,7 @@ pub fn run_login_server(opts: ServerOptions) -> io::Result<LoginServer> {
let shutdown_notify = Arc::new(tokio::sync::Notify::new());
let server_handle = {
let shutdown_notify = shutdown_notify.clone();
let server = server.clone();
let server = server;
tokio::spawn(async move {
let result = loop {
tokio::select! {

View File

@@ -17,10 +17,10 @@ use anyhow::Context;
use anyhow::Result;
use codex_mcp_client::McpClient;
use mcp_types::ClientCapabilities;
use mcp_types::Implementation;
use mcp_types::InitializeRequestParams;
use mcp_types::ListToolsRequestParams;
use mcp_types::MCP_SCHEMA_VERSION;
use mcp_types::McpClientInfo;
use tracing_subscriber::EnvFilter;
#[tokio::main]
@@ -60,10 +60,13 @@ async fn main() -> Result<()> {
sampling: None,
elicitation: None,
},
client_info: McpClientInfo {
client_info: Implementation {
name: "codex-mcp-client".to_owned(),
version: env!("CARGO_PKG_VERSION").to_owned(),
title: Some("Codex".to_string()),
// This field is used by Codex when it is an MCP server: it should
// not be used when Codex is an MCP client.
user_agent: None,
},
protocol_version: MCP_SCHEMA_VERSION.to_owned(),
};

View File

@@ -40,6 +40,7 @@ uuid = { version = "1", features = ["serde", "v4"] }
[dev-dependencies]
assert_cmd = "2"
base64 = "0.22"
mcp_test_support = { path = "tests/common" }
os_info = "3.12.0"
pretty_assertions = "1.4.1"

View File

@@ -11,10 +11,16 @@ use codex_core::NewConversation;
use codex_core::RolloutRecorder;
use codex_core::SessionMeta;
use codex_core::auth::CLIENT_ID;
use codex_core::auth::get_auth_file;
use codex_core::auth::login_with_api_key;
use codex_core::auth::try_read_auth_json;
use codex_core::config::Config;
use codex_core::config::ConfigOverrides;
use codex_core::config::ConfigToml;
use codex_core::config::load_config_as_toml;
use codex_core::config_edit::CONFIG_KEY_EFFORT;
use codex_core::config_edit::CONFIG_KEY_MODEL;
use codex_core::config_edit::persist_overrides_and_clear_if_none;
use codex_core::default_client::get_codex_user_agent;
use codex_core::exec::ExecParams;
use codex_core::exec_env::create_env;
@@ -37,7 +43,6 @@ use codex_protocol::mcp_protocol::ApplyPatchApprovalParams;
use codex_protocol::mcp_protocol::ApplyPatchApprovalResponse;
use codex_protocol::mcp_protocol::ArchiveConversationParams;
use codex_protocol::mcp_protocol::ArchiveConversationResponse;
use codex_protocol::mcp_protocol::AuthMode;
use codex_protocol::mcp_protocol::AuthStatusChangeNotification;
use codex_protocol::mcp_protocol::ClientRequest;
use codex_protocol::mcp_protocol::ConversationId;
@@ -55,6 +60,8 @@ use codex_protocol::mcp_protocol::InterruptConversationParams;
use codex_protocol::mcp_protocol::InterruptConversationResponse;
use codex_protocol::mcp_protocol::ListConversationsParams;
use codex_protocol::mcp_protocol::ListConversationsResponse;
use codex_protocol::mcp_protocol::LoginApiKeyParams;
use codex_protocol::mcp_protocol::LoginApiKeyResponse;
use codex_protocol::mcp_protocol::LoginChatGptCompleteNotification;
use codex_protocol::mcp_protocol::LoginChatGptResponse;
use codex_protocol::mcp_protocol::NewConversationParams;
@@ -67,6 +74,9 @@ use codex_protocol::mcp_protocol::SendUserMessageResponse;
use codex_protocol::mcp_protocol::SendUserTurnParams;
use codex_protocol::mcp_protocol::SendUserTurnResponse;
use codex_protocol::mcp_protocol::ServerNotification;
use codex_protocol::mcp_protocol::SetDefaultModelParams;
use codex_protocol::mcp_protocol::SetDefaultModelResponse;
use codex_protocol::mcp_protocol::UserInfoResponse;
use codex_protocol::mcp_protocol::UserSavedConfig;
use codex_protocol::models::ContentItem;
use codex_protocol::models::ResponseItem;
@@ -169,6 +179,9 @@ impl CodexMessageProcessor {
ClientRequest::GitDiffToRemote { request_id, params } => {
self.git_diff_to_origin(request_id, params.cwd).await;
}
ClientRequest::LoginApiKey { request_id, params } => {
self.login_api_key(request_id, params).await;
}
ClientRequest::LoginChatGpt { request_id } => {
self.login_chatgpt(request_id).await;
}
@@ -184,15 +197,54 @@ impl CodexMessageProcessor {
ClientRequest::GetUserSavedConfig { request_id } => {
self.get_user_saved_config(request_id).await;
}
ClientRequest::SetDefaultModel { request_id, params } => {
self.set_default_model(request_id, params).await;
}
ClientRequest::GetUserAgent { request_id } => {
self.get_user_agent(request_id).await;
}
ClientRequest::UserInfo { request_id } => {
self.get_user_info(request_id).await;
}
ClientRequest::ExecOneOffCommand { request_id, params } => {
self.exec_one_off_command(request_id, params).await;
}
}
}
async fn login_api_key(&mut self, request_id: RequestId, params: LoginApiKeyParams) {
{
let mut guard = self.active_login.lock().await;
if let Some(active) = guard.take() {
active.drop();
}
}
match login_with_api_key(&self.config.codex_home, &params.api_key) {
Ok(()) => {
self.auth_manager.reload();
self.outgoing
.send_response(request_id, LoginApiKeyResponse {})
.await;
let payload = AuthStatusChangeNotification {
auth_method: self.auth_manager.auth().map(|auth| auth.mode),
};
self.outgoing
.send_server_notification(ServerNotification::AuthStatusChange(payload))
.await;
}
Err(err) => {
let error = JSONRPCErrorError {
code: INTERNAL_ERROR_CODE,
message: format!("failed to save api key: {err}"),
data: None,
};
self.outgoing.send_error(request_id, error).await;
}
}
}
async fn login_chatgpt(&mut self, request_id: RequestId) {
let config = self.config.as_ref();
@@ -346,7 +398,7 @@ impl CodexMessageProcessor {
.await;
// Send auth status change notification reflecting the current auth mode
// after logout (which may fall back to API key via env var).
// after logout.
let current_auth_method = self.auth_manager.auth().map(|auth| auth.mode);
let payload = AuthStatusChangeNotification {
auth_method: current_auth_method,
@@ -361,7 +413,6 @@ impl CodexMessageProcessor {
request_id: RequestId,
params: codex_protocol::mcp_protocol::GetAuthStatusParams,
) {
let preferred_auth_method: AuthMode = self.auth_manager.preferred_auth_method();
let include_token = params.include_token.unwrap_or(false);
let do_refresh = params.refresh_token.unwrap_or(false);
@@ -369,6 +420,11 @@ impl CodexMessageProcessor {
tracing::warn!("failed to refresh token while getting auth status: {err}");
}
// Determine whether auth is required based on the active model provider.
// If a custom provider is configured with `requires_openai_auth == false`,
// then no auth step is required; otherwise, default to requiring auth.
let requires_openai_auth = Some(self.config.model_provider.requires_openai_auth);
let response = match self.auth_manager.auth() {
Some(auth) => {
let (reported_auth_method, token_opt) = match auth.get_token().await {
@@ -384,14 +440,14 @@ impl CodexMessageProcessor {
};
codex_protocol::mcp_protocol::GetAuthStatusResponse {
auth_method: reported_auth_method,
preferred_auth_method,
auth_token: token_opt,
requires_openai_auth,
}
}
None => codex_protocol::mcp_protocol::GetAuthStatusResponse {
auth_method: None,
preferred_auth_method,
auth_token: None,
requires_openai_auth,
},
};
@@ -439,6 +495,52 @@ impl CodexMessageProcessor {
self.outgoing.send_response(request_id, response).await;
}
async fn get_user_info(&self, request_id: RequestId) {
// Read alleged user email from auth.json (best-effort; not verified).
let auth_path = get_auth_file(&self.config.codex_home);
let alleged_user_email = match try_read_auth_json(&auth_path) {
Ok(auth) => auth.tokens.and_then(|t| t.id_token.email),
Err(_) => None,
};
let response = UserInfoResponse { alleged_user_email };
self.outgoing.send_response(request_id, response).await;
}
async fn set_default_model(&self, request_id: RequestId, params: SetDefaultModelParams) {
let SetDefaultModelParams {
model,
reasoning_effort,
} = params;
let effort_str = reasoning_effort.map(|effort| effort.to_string());
let overrides: [(&[&str], Option<&str>); 2] = [
(&[CONFIG_KEY_MODEL], model.as_deref()),
(&[CONFIG_KEY_EFFORT], effort_str.as_deref()),
];
match persist_overrides_and_clear_if_none(
&self.config.codex_home,
self.config.active_profile.as_deref(),
&overrides,
)
.await
{
Ok(()) => {
let response = SetDefaultModelResponse {};
self.outgoing.send_response(request_id, response).await;
}
Err(err) => {
let error = JSONRPCErrorError {
code: INTERNAL_ERROR_CODE,
message: format!("failed to persist overrides: {err}"),
data: None,
};
self.outgoing.send_error(request_id, error).await;
}
}
}
async fn exec_one_off_command(&self, request_id: RequestId, params: ExecOneOffCommandParams) {
tracing::debug!("ExecOneOffCommand params: {params:?}");
@@ -533,6 +635,7 @@ impl CodexMessageProcessor {
let response = NewConversationResponse {
conversation_id,
model: session_configured.model,
reasoning_effort: session_configured.reasoning_effort,
rollout_path: session_configured.rollout_path,
};
self.outgoing.send_response(request_id, response).await;
@@ -1145,6 +1248,7 @@ fn derive_config_from_params(
} = params;
let overrides = ConfigOverrides {
model,
review_model: None,
config_profile: profile,
cwd: cwd.map(PathBuf::from),
approval_policy,

View File

@@ -152,6 +152,7 @@ impl CodexToolCallParam {
// Build the `ConfigOverrides` recognized by codex-core.
let overrides = codex_core::config::ConfigOverrides {
model,
review_model: None,
config_profile: profile,
cwd: cwd.map(PathBuf::from),
approval_policy: approval_policy.map(Into::into),

View File

@@ -222,7 +222,7 @@ async fn run_codex_tool_session_inner(
}
EventMsg::TaskComplete(TaskCompleteEvent { last_agent_message }) => {
let text = match last_agent_message {
Some(msg) => msg.clone(),
Some(msg) => msg,
None => "".to_string(),
};
let result = CallToolResult {
@@ -277,9 +277,11 @@ async fn run_codex_tool_session_inner(
| EventMsg::GetHistoryEntryResponse(_)
| EventMsg::PlanUpdate(_)
| EventMsg::TurnAborted(_)
| EventMsg::ConversationHistory(_)
| EventMsg::ConversationPath(_)
| EventMsg::UserMessage(_)
| EventMsg::ShutdownComplete => {
| EventMsg::ShutdownComplete
| EventMsg::EnteredReviewMode(_)
| EventMsg::ExitedReviewMode(_) => {
// For now, we do not do anything extra for these
// events. Note that
// send(codex_event_to_notification(&event)) above has

View File

@@ -56,8 +56,7 @@ impl MessageProcessor {
config: Arc<Config>,
) -> Self {
let outgoing = Arc::new(outgoing);
let auth_manager =
AuthManager::shared(config.codex_home.clone(), config.preferred_auth_method);
let auth_manager = AuthManager::shared(config.codex_home.clone());
let conversation_manager = Arc::new(ConversationManager::new(auth_manager.clone()));
let codex_message_processor = CodexMessageProcessor::new(
auth_manager,
@@ -234,7 +233,7 @@ impl MessageProcessor {
},
instructions: None,
protocol_version: params.protocol_version.clone(),
server_info: mcp_types::McpServerInfo {
server_info: mcp_types::Implementation {
name: "codex-mcp-server".to_string(),
version: env!("CARGO_PKG_VERSION").to_string(),
title: Some("Codex".to_string()),
@@ -532,7 +531,6 @@ impl MessageProcessor {
// Spawn the long-running reply handler.
tokio::spawn({
let codex = codex.clone();
let outgoing = outgoing.clone();
let prompt = prompt.clone();
let running_requests_id_to_codex_uuid = running_requests_id_to_codex_uuid.clone();

View File

@@ -258,6 +258,7 @@ pub(crate) struct OutgoingError {
mod tests {
use codex_core::protocol::EventMsg;
use codex_core::protocol::SessionConfiguredEvent;
use codex_protocol::config_types::ReasoningEffort;
use codex_protocol::mcp_protocol::ConversationId;
use codex_protocol::mcp_protocol::LoginChatGptCompleteNotification;
use pretty_assertions::assert_eq;
@@ -279,6 +280,7 @@ mod tests {
msg: EventMsg::SessionConfigured(SessionConfiguredEvent {
session_id: conversation_id,
model: "gpt-4o".to_string(),
reasoning_effort: Some(ReasoningEffort::default()),
history_log_id: 1,
history_entry_count: 1000,
initial_messages: None,
@@ -299,7 +301,7 @@ mod tests {
let Ok(expected_params) = serde_json::to_value(&event) else {
panic!("Event must serialize");
};
assert_eq!(params, Some(expected_params.clone()));
assert_eq!(params, Some(expected_params));
}
#[tokio::test]
@@ -312,6 +314,7 @@ mod tests {
let session_configured_event = SessionConfiguredEvent {
session_id: conversation_id,
model: "gpt-4o".to_string(),
reasoning_effort: Some(ReasoningEffort::default()),
history_log_id: 1,
history_entry_count: 1000,
initial_messages: None,
@@ -342,6 +345,7 @@ mod tests {
"msg": {
"session_id": session_configured_event.session_id,
"model": session_configured_event.model,
"reasoning_effort": session_configured_event.reasoning_effort,
"history_log_id": session_configured_event.history_log_id,
"history_entry_count": session_configured_event.history_entry_count,
"type": "session_configured",

View File

@@ -18,21 +18,23 @@ use codex_protocol::mcp_protocol::CancelLoginChatGptParams;
use codex_protocol::mcp_protocol::GetAuthStatusParams;
use codex_protocol::mcp_protocol::InterruptConversationParams;
use codex_protocol::mcp_protocol::ListConversationsParams;
use codex_protocol::mcp_protocol::LoginApiKeyParams;
use codex_protocol::mcp_protocol::NewConversationParams;
use codex_protocol::mcp_protocol::RemoveConversationListenerParams;
use codex_protocol::mcp_protocol::ResumeConversationParams;
use codex_protocol::mcp_protocol::SendUserMessageParams;
use codex_protocol::mcp_protocol::SendUserTurnParams;
use codex_protocol::mcp_protocol::SetDefaultModelParams;
use mcp_types::CallToolRequestParams;
use mcp_types::ClientCapabilities;
use mcp_types::Implementation;
use mcp_types::InitializeRequestParams;
use mcp_types::JSONRPC_VERSION;
use mcp_types::JSONRPCMessage;
use mcp_types::JSONRPCNotification;
use mcp_types::JSONRPCRequest;
use mcp_types::JSONRPCResponse;
use mcp_types::McpClientInfo;
use mcp_types::ModelContextProtocolNotification;
use mcp_types::ModelContextProtocolRequest;
use mcp_types::RequestId;
@@ -134,10 +136,11 @@ impl McpProcess {
roots: None,
sampling: None,
},
client_info: McpClientInfo {
client_info: Implementation {
name: "elicitation test".into(),
title: Some("Elicitation Test".into()),
version: "0.0.0".into(),
user_agent: None,
},
protocol_version: mcp_types::MCP_SCHEMA_VERSION.into(),
};
@@ -294,6 +297,20 @@ impl McpProcess {
self.send_request("getUserAgent", None).await
}
/// Send a `userInfo` JSON-RPC request.
pub async fn send_user_info_request(&mut self) -> anyhow::Result<i64> {
self.send_request("userInfo", None).await
}
/// Send a `setDefaultModel` JSON-RPC request.
pub async fn send_set_default_model_request(
&mut self,
params: SetDefaultModelParams,
) -> anyhow::Result<i64> {
let params = Some(serde_json::to_value(params)?);
self.send_request("setDefaultModel", params).await
}
/// Send a `listConversations` JSON-RPC request.
pub async fn send_list_conversations_request(
&mut self,
@@ -312,6 +329,15 @@ impl McpProcess {
self.send_request("resumeConversation", params).await
}
/// Send a `loginApiKey` JSON-RPC request.
pub async fn send_login_api_key_request(
&mut self,
params: LoginApiKeyParams,
) -> anyhow::Result<i64> {
let params = Some(serde_json::to_value(params)?);
self.send_request("loginApiKey", params).await
}
/// Send a `loginChatGpt` JSON-RPC request.
pub async fn send_login_chat_gpt_request(&mut self) -> anyhow::Result<i64> {
self.send_request("loginChatGpt", None).await

View File

@@ -1,9 +1,10 @@
use std::path::Path;
use codex_core::auth::login_with_api_key;
use codex_protocol::mcp_protocol::AuthMode;
use codex_protocol::mcp_protocol::GetAuthStatusParams;
use codex_protocol::mcp_protocol::GetAuthStatusResponse;
use codex_protocol::mcp_protocol::LoginApiKeyParams;
use codex_protocol::mcp_protocol::LoginApiKeyResponse;
use mcp_test_support::McpProcess;
use mcp_test_support::to_response;
use mcp_types::JSONRPCResponse;
@@ -36,10 +37,29 @@ stream_max_retries = 0
)
}
async fn login_with_api_key_via_request(mcp: &mut McpProcess, api_key: &str) {
let request_id = mcp
.send_login_api_key_request(LoginApiKeyParams {
api_key: api_key.to_string(),
})
.await
.unwrap_or_else(|e| panic!("send loginApiKey: {e}"));
let resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(request_id)),
)
.await
.unwrap_or_else(|e| panic!("loginApiKey timeout: {e}"))
.unwrap_or_else(|e| panic!("loginApiKey response: {e}"));
let _: LoginApiKeyResponse =
to_response(resp).unwrap_or_else(|e| panic!("deserialize login response: {e}"));
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn get_auth_status_no_auth() {
let codex_home = TempDir::new().unwrap_or_else(|e| panic!("create tempdir: {e}"));
create_config_toml(codex_home.path()).expect("write config.toml");
create_config_toml(codex_home.path()).unwrap_or_else(|err| panic!("write config.toml: {err}"));
let mut mcp = McpProcess::new_with_env(codex_home.path(), &[("OPENAI_API_KEY", None)])
.await
@@ -72,8 +92,7 @@ async fn get_auth_status_no_auth() {
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn get_auth_status_with_api_key() {
let codex_home = TempDir::new().unwrap_or_else(|e| panic!("create tempdir: {e}"));
create_config_toml(codex_home.path()).expect("write config.toml");
login_with_api_key(codex_home.path(), "sk-test-key").expect("seed api key");
create_config_toml(codex_home.path()).unwrap_or_else(|err| panic!("write config.toml: {err}"));
let mut mcp = McpProcess::new(codex_home.path())
.await
@@ -83,6 +102,8 @@ async fn get_auth_status_with_api_key() {
.expect("init timeout")
.expect("init failed");
login_with_api_key_via_request(&mut mcp, "sk-test-key").await;
let request_id = mcp
.send_get_auth_status_request(GetAuthStatusParams {
include_token: Some(true),
@@ -101,14 +122,12 @@ async fn get_auth_status_with_api_key() {
let status: GetAuthStatusResponse = to_response(resp).expect("deserialize status");
assert_eq!(status.auth_method, Some(AuthMode::ApiKey));
assert_eq!(status.auth_token, Some("sk-test-key".to_string()));
assert_eq!(status.preferred_auth_method, AuthMode::ChatGPT);
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn get_auth_status_with_api_key_no_include_token() {
let codex_home = TempDir::new().unwrap_or_else(|e| panic!("create tempdir: {e}"));
create_config_toml(codex_home.path()).expect("write config.toml");
login_with_api_key(codex_home.path(), "sk-test-key").expect("seed api key");
create_config_toml(codex_home.path()).unwrap_or_else(|err| panic!("write config.toml: {err}"));
let mut mcp = McpProcess::new(codex_home.path())
.await
@@ -118,6 +137,8 @@ async fn get_auth_status_with_api_key_no_include_token() {
.expect("init timeout")
.expect("init failed");
login_with_api_key_via_request(&mut mcp, "sk-test-key").await;
// Build params via struct so None field is omitted in wire JSON.
let params = GetAuthStatusParams {
include_token: None,
@@ -138,5 +159,4 @@ async fn get_auth_status_with_api_key_no_include_token() {
let status: GetAuthStatusResponse = to_response(resp).expect("deserialize status");
assert_eq!(status.auth_method, Some(AuthMode::ApiKey));
assert!(status.auth_token.is_none(), "token must be omitted");
assert_eq!(status.preferred_auth_method, AuthMode::ChatGPT);
}

View File

@@ -90,6 +90,7 @@ async fn test_codex_jsonrpc_conversation_flow() {
let NewConversationResponse {
conversation_id,
model,
reasoning_effort: _,
rollout_path: _,
} = new_conv_resp;
assert_eq!(model, "mock-model");
@@ -319,7 +320,7 @@ async fn test_send_user_turn_changes_approval_policy_behavior() {
approval_policy: AskForApproval::Never,
sandbox_policy: SandboxPolicy::new_read_only_policy(),
model: "mock-model".to_string(),
effort: ReasoningEffort::Medium,
effort: Some(ReasoningEffort::Medium),
summary: ReasoningSummary::Auto,
})
.await

View File

@@ -59,6 +59,7 @@ async fn test_conversation_create_and_send_message_ok() {
let NewConversationResponse {
conversation_id,
model,
reasoning_effort: _,
rollout_path: _,
} = to_response::<NewConversationResponse>(new_conv_resp)
.expect("deserialize newConversation response");

View File

@@ -1,7 +1,7 @@
use std::path::Path;
use std::time::Duration;
use codex_core::auth::login_with_api_key;
use codex_login::login_with_api_key;
use codex_protocol::mcp_protocol::CancelLoginChatGptParams;
use codex_protocol::mcp_protocol::CancelLoginChatGptResponse;
use codex_protocol::mcp_protocol::GetAuthStatusParams;
@@ -95,7 +95,7 @@ async fn logout_chatgpt_removes_auth() {
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn login_and_cancel_chatgpt() {
let codex_home = TempDir::new().unwrap_or_else(|e| panic!("create tempdir: {e}"));
create_config_toml(codex_home.path()).expect("write config.toml");
create_config_toml(codex_home.path()).unwrap_or_else(|err| panic!("write config.toml: {err}"));
let mut mcp = McpProcess::new(codex_home.path())
.await

View File

@@ -9,4 +9,6 @@ mod interrupt;
mod list_resume;
mod login;
mod send_message;
mod set_default_model;
mod user_agent;
mod user_info;

View File

@@ -0,0 +1,76 @@
use std::path::Path;
use codex_core::config::ConfigToml;
use codex_protocol::mcp_protocol::SetDefaultModelParams;
use codex_protocol::mcp_protocol::SetDefaultModelResponse;
use mcp_test_support::McpProcess;
use mcp_test_support::to_response;
use mcp_types::JSONRPCResponse;
use mcp_types::RequestId;
use pretty_assertions::assert_eq;
use tempfile::TempDir;
use tokio::time::timeout;
const DEFAULT_READ_TIMEOUT: std::time::Duration = std::time::Duration::from_secs(10);
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn set_default_model_persists_overrides() {
let codex_home = TempDir::new().expect("create tempdir");
create_config_toml(codex_home.path()).expect("write config.toml");
let mut mcp = McpProcess::new(codex_home.path())
.await
.expect("spawn mcp process");
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize())
.await
.expect("init timeout")
.expect("init failed");
let params = SetDefaultModelParams {
model: Some("gpt-4.1".to_string()),
reasoning_effort: None,
};
let request_id = mcp
.send_set_default_model_request(params)
.await
.expect("send setDefaultModel");
let resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(request_id)),
)
.await
.expect("setDefaultModel timeout")
.expect("setDefaultModel response");
let _: SetDefaultModelResponse =
to_response(resp).expect("deserialize setDefaultModel response");
let config_path = codex_home.path().join("config.toml");
let config_contents = tokio::fs::read_to_string(&config_path)
.await
.expect("read config.toml");
let config_toml: ConfigToml = toml::from_str(&config_contents).expect("parse config.toml");
assert_eq!(
ConfigToml {
model: Some("gpt-4.1".to_string()),
model_reasoning_effort: None,
..Default::default()
},
config_toml,
);
}
// Helper to create a config.toml; mirrors create_conversation.rs
fn create_config_toml(codex_home: &Path) -> std::io::Result<()> {
let config_toml = codex_home.join("config.toml");
std::fs::write(
config_toml,
r#"
model = "gpt-5"
model_reasoning_effort = "medium"
"#,
)
}
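// Assumed semantics of persist_overrides_and_clear_if_none, inferred from
// the assertion above: a Some(value) override writes the key into
// config.toml, while a None override removes it. Sketch only, using
// toml_edit as an illustrative backend rather than the crate's real
// implementation.
#[allow(dead_code)]
fn apply_override(doc: &mut toml_edit::DocumentMut, key: &str, value: Option<&str>) {
    match value {
        // Write or overwrite the key.
        Some(v) => doc[key] = toml_edit::value(v),
        // Remove the key entirely so the built-in default applies again.
        None => {
            doc.remove(key);
        }
    }
}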

View File

@@ -0,0 +1,78 @@
use std::time::Duration;
use anyhow::Context;
use base64::Engine;
use base64::engine::general_purpose::URL_SAFE_NO_PAD;
use codex_core::auth::AuthDotJson;
use codex_core::auth::get_auth_file;
use codex_core::auth::write_auth_json;
use codex_core::token_data::IdTokenInfo;
use codex_core::token_data::TokenData;
use codex_protocol::mcp_protocol::UserInfoResponse;
use mcp_test_support::McpProcess;
use mcp_test_support::to_response;
use mcp_types::JSONRPCResponse;
use mcp_types::RequestId;
use pretty_assertions::assert_eq;
use serde_json::json;
use tempfile::TempDir;
use tokio::time::timeout;
const DEFAULT_READ_TIMEOUT: Duration = Duration::from_secs(10);
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn user_info_returns_email_from_auth_json() {
let codex_home = TempDir::new().expect("create tempdir");
let auth_path = get_auth_file(codex_home.path());
let mut id_token = IdTokenInfo::default();
id_token.email = Some("user@example.com".to_string());
id_token.raw_jwt = encode_id_token_with_email("user@example.com").expect("encode id token");
let auth = AuthDotJson {
openai_api_key: None,
tokens: Some(TokenData {
id_token,
access_token: "access".to_string(),
refresh_token: "refresh".to_string(),
account_id: None,
}),
last_refresh: None,
};
write_auth_json(&auth_path, &auth).expect("write auth.json");
let mut mcp = McpProcess::new(codex_home.path())
.await
.expect("spawn mcp process");
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize())
.await
.expect("initialize timeout")
.expect("initialize request");
let request_id = mcp.send_user_info_request().await.expect("send userInfo");
let response: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(request_id)),
)
.await
.expect("userInfo timeout")
.expect("userInfo response");
let received: UserInfoResponse = to_response(response).expect("deserialize userInfo response");
let expected = UserInfoResponse {
alleged_user_email: Some("user@example.com".to_string()),
};
assert_eq!(received, expected);
}
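// Builds an unsigned ("alg": "none") JWT carrying only an email claim;
// sufficient here because the test parses the token without verifying it.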
fn encode_id_token_with_email(email: &str) -> anyhow::Result<String> {
let header_b64 = URL_SAFE_NO_PAD.encode(
serde_json::to_vec(&json!({ "alg": "none", "typ": "JWT" }))
.context("serialize jwt header")?,
);
let payload =
serde_json::to_vec(&json!({ "email": email })).context("serialize jwt payload")?;
let payload_b64 = URL_SAFE_NO_PAD.encode(payload);
Ok(format!("{header_b64}.{payload_b64}.signature"))
}

View File

@@ -0,0 +1,21 @@
#!/usr/bin/env python3
import subprocess
import sys
from pathlib import Path
def main() -> int:
crate_dir = Path(__file__).resolve().parent
generator = crate_dir / "generate_mcp_types.py"
result = subprocess.run(
[sys.executable, str(generator), "--check"],
cwd=crate_dir,
check=False,
)
return result.returncode
if __name__ == "__main__":
raise SystemExit(main())

View File

@@ -5,15 +5,19 @@ import argparse
import json
import subprocess
import sys
import tempfile
from dataclasses import (
dataclass,
)
from difflib import unified_diff
from pathlib import Path
from shutil import copy2
# Helper first so it is defined when other functions call it.
from typing import Any, Literal
SCHEMA_VERSION = "2025-06-18"
JSONRPC_VERSION = "2.0"
@@ -43,16 +47,31 @@ def main() -> int:
default_schema_file = (
Path(__file__).resolve().parent / "schema" / SCHEMA_VERSION / "schema.json"
)
default_lib_rs = Path(__file__).resolve().parent / "src/lib.rs"
parser.add_argument(
"schema_file",
nargs="?",
default=default_schema_file,
help="schema.json file to process",
)
parser.add_argument(
"--check",
action="store_true",
help="Regenerate lib.rs in a sandbox and ensure the checked-in file matches",
)
args = parser.parse_args()
schema_file = args.schema_file
schema_file = Path(args.schema_file)
crate_dir = Path(__file__).resolve().parent
lib_rs = Path(__file__).resolve().parent / "src/lib.rs"
if args.check:
return run_check(schema_file, crate_dir, default_lib_rs)
generate_lib_rs(schema_file, default_lib_rs, fmt=True)
return 0
def generate_lib_rs(schema_file: Path, lib_rs: Path, fmt: bool) -> None:
lib_rs.parent.mkdir(parents=True, exist_ok=True)
global DEFINITIONS # Allow helper functions to access the schema.
@@ -117,9 +136,7 @@ fn default_jsonrpc() -> String {{ JSONRPC_VERSION.to_owned() }}
for req_name in CLIENT_REQUEST_TYPE_NAMES:
defn = definitions[req_name]
method_const = (
defn.get("properties", {}).get("method", {}).get("const", req_name)
)
method_const = defn.get("properties", {}).get("method", {}).get("const", req_name)
payload_type = f"<{req_name} as ModelContextProtocolRequest>::Params"
try_from_impl_lines.append(f' "{method_const}" => {{\n')
try_from_impl_lines.append(
@@ -128,9 +145,7 @@ fn default_jsonrpc() -> String {{ JSONRPC_VERSION.to_owned() }}
try_from_impl_lines.append(
f" let params: {payload_type} = serde_json::from_value(params_json)?;\n"
)
try_from_impl_lines.append(
f" Ok(ClientRequest::{req_name}(params))\n"
)
try_from_impl_lines.append(f" Ok(ClientRequest::{req_name}(params))\n")
try_from_impl_lines.append(" },\n")
try_from_impl_lines.append(
@@ -144,9 +159,7 @@ fn default_jsonrpc() -> String {{ JSONRPC_VERSION.to_owned() }}
# Generate TryFrom for ServerNotification
notif_impl_lines: list[str] = []
notif_impl_lines.append(
"impl TryFrom<JSONRPCNotification> for ServerNotification {\n"
)
notif_impl_lines.append("impl TryFrom<JSONRPCNotification> for ServerNotification {\n")
notif_impl_lines.append(" type Error = serde_json::Error;\n")
notif_impl_lines.append(
" fn try_from(n: JSONRPCNotification) -> std::result::Result<Self, Self::Error> {\n"
@@ -155,9 +168,7 @@ fn default_jsonrpc() -> String {{ JSONRPC_VERSION.to_owned() }}
for notif_name in SERVER_NOTIFICATION_TYPE_NAMES:
n_def = definitions[notif_name]
method_const = (
n_def.get("properties", {}).get("method", {}).get("const", notif_name)
)
method_const = n_def.get("properties", {}).get("method", {}).get("const", notif_name)
payload_type = f"<{notif_name} as ModelContextProtocolNotification>::Params"
notif_impl_lines.append(f' "{method_const}" => {{\n')
# params may be optional
@@ -167,9 +178,7 @@ fn default_jsonrpc() -> String {{ JSONRPC_VERSION.to_owned() }}
notif_impl_lines.append(
f" let params: {payload_type} = serde_json::from_value(params_json)?;\n"
)
notif_impl_lines.append(
f" Ok(ServerNotification::{notif_name}(params))\n"
)
notif_impl_lines.append(f" Ok(ServerNotification::{notif_name}(params))\n")
notif_impl_lines.append(" },\n")
notif_impl_lines.append(
@@ -185,13 +194,70 @@ fn default_jsonrpc() -> String {{ JSONRPC_VERSION.to_owned() }}
for chunk in out:
f.write(chunk)
subprocess.check_call(
["cargo", "fmt", "--", "--config", "imports_granularity=Item"],
cwd=lib_rs.parent.parent,
stderr=subprocess.DEVNULL,
)
if fmt:
subprocess.check_call(
["cargo", "fmt", "--", "--config", "imports_granularity=Item"],
cwd=lib_rs.parent.parent,
stderr=subprocess.DEVNULL,
)
return 0
def run_check(schema_file: Path, crate_dir: Path, checked_in_lib: Path) -> int:
config_path = crate_dir.parent / "rustfmt.toml"
eprint(f"Running --check with schema {schema_file}")
with tempfile.TemporaryDirectory() as tmp_dir:
tmp_path = Path(tmp_dir)
eprint(f"Created temporary workspace at {tmp_path}")
manifest_path = tmp_path / "Cargo.toml"
eprint(f"Copying Cargo.toml into {manifest_path}")
copy2(crate_dir / "Cargo.toml", manifest_path)
manifest_text = manifest_path.read_text(encoding="utf-8")
manifest_text = manifest_text.replace(
"version = { workspace = true }",
'version = "0.0.0"',
)
manifest_text = manifest_text.replace("\n[lints]\nworkspace = true\n", "\n")
manifest_path.write_text(manifest_text, encoding="utf-8")
src_dir = tmp_path / "src"
src_dir.mkdir(parents=True, exist_ok=True)
eprint(f"Generating lib.rs into {src_dir}")
generated_lib = src_dir / "lib.rs"
generate_lib_rs(schema_file, generated_lib, fmt=False)
eprint("Formatting generated lib.rs with rustfmt")
subprocess.check_call(
[
"rustfmt",
"--config-path",
str(config_path),
str(generated_lib),
],
cwd=tmp_path,
stderr=subprocess.DEVNULL,
)
eprint("Comparing generated lib.rs with checked-in version")
checked_in_contents = checked_in_lib.read_text(encoding="utf-8")
generated_contents = generated_lib.read_text(encoding="utf-8")
if checked_in_contents == generated_contents:
eprint("lib.rs matches checked-in version")
return 0
diff = unified_diff(
checked_in_contents.splitlines(keepends=True),
generated_contents.splitlines(keepends=True),
fromfile=str(checked_in_lib),
tofile=str(generated_lib),
)
diff_text = "".join(diff)
eprint("Generated lib.rs does not match the checked-in version. Diff:")
if diff_text:
eprint(diff_text, end="")
eprint("Re-run generate_mcp_types.py without --check to update src/lib.rs.")
return 1
def add_definition(name: str, definition: dict[str, Any], out: list[str]) -> None:
@@ -265,8 +331,11 @@ class StructField:
name: str
type_name: str
serde: str | None = None
comment: str | None = None
def append(self, out: list[str], supports_const: bool) -> None:
if self.comment:
out.append(f" // {self.comment}\n")
if self.serde:
out.append(f" {self.serde}\n")
if self.viz == "const":
@@ -312,6 +381,18 @@ def define_struct(
else:
fields.append(StructField("pub", rs_prop.name, prop_type, rs_prop.serde))
# Special-case: add Codex-specific user_agent to Implementation
if name == "Implementation":
fields.append(
StructField(
"pub",
"user_agent",
"Option<String>",
'#[serde(default, skip_serializing_if = "Option::is_none")]',
"This is an extra field that the Codex MCP server sends as part of InitializeResult.",
)
)
if implements_request_trait(name):
add_trait_impl(name, "ModelContextProtocolRequest", fields, out)
elif implements_notification_trait(name):
@@ -406,15 +487,11 @@ def define_untagged_enum(name: str, type_list: list[str], out: list[str]) -> Non
case "integer":
out.append(" Integer(i64),\n")
case _:
raise ValueError(
f"Unknown type in untagged enum: {simple_type} in {name}"
)
raise ValueError(f"Unknown type in untagged enum: {simple_type} in {name}")
out.append("}\n\n")
def define_any_of(
name: str, list_of_refs: list[Any], description: str | None = None
) -> list[str]:
def define_any_of(name: str, list_of_refs: list[Any], description: str | None = None) -> list[str]:
"""Generate a Rust enum for a JSON-Schema `anyOf` union.
For most types we simply map each `$ref` inside the `anyOf` list to a
@@ -479,9 +556,7 @@ def define_any_of(
if name == "ClientRequest":
payload_type = f"<{ref_name} as ModelContextProtocolRequest>::Params"
else:
payload_type = (
f"<{ref_name} as ModelContextProtocolNotification>::Params"
)
payload_type = f"<{ref_name} as ModelContextProtocolNotification>::Params"
# Determine the wire value for `method` so we can annotate the
# variant appropriately. If for some reason the schema does not
@@ -489,9 +564,7 @@ def define_any_of(
# least compile (although deserialization will likely fail).
request_def = DEFINITIONS.get(ref_name, {})
method_const = (
request_def.get("properties", {})
.get("method", {})
.get("const", ref_name)
request_def.get("properties", {}).get("method", {}).get("const", ref_name)
)
out.append(f' #[serde(rename = "{method_const}")]\n')
@@ -541,7 +614,7 @@ def map_type(
if type_prop == "string":
if const_prop := typedef.get("const", None):
assert isinstance(const_prop, str)
return f'&\'static str = "{const_prop }"'
return f'&\'static str = "{const_prop}"'
else:
return "String"
elif type_prop == "integer":
@@ -617,7 +690,7 @@ def rust_prop_name(name: str, is_optional: bool) -> RustProp:
serde_annotations.append('skip_serializing_if = "Option::is_none"')
if serde_annotations:
serde_str = f'#[serde({", ".join(serde_annotations)})]'
serde_str = f"#[serde({', '.join(serde_annotations)})]"
else:
serde_str = None
return RustProp(prop_name, serde_str)
@@ -625,9 +698,7 @@ def rust_prop_name(name: str, is_optional: bool) -> RustProp:
def to_snake_case(name: str) -> str:
"""Convert a camelCase or PascalCase name to snake_case."""
snake_case = name[0].lower() + "".join(
"_" + c.lower() if c.isupper() else c for c in name[1:]
)
snake_case = name[0].lower() + "".join("_" + c.lower() if c.isupper() else c for c in name[1:])
if snake_case != name:
return snake_case
else:
@@ -663,5 +734,9 @@ def emit_doc_comment(text: str | None, out: list[str]) -> None:
out.append(f"/// {line.rstrip()}\n")
def eprint(*args: Any, **kwargs: Any) -> None:
print(*args, file=sys.stderr, **kwargs)
if __name__ == "__main__":
sys.exit(main())

View File

@@ -482,20 +482,12 @@ pub struct ImageContent {
/// Describes the name and version of an MCP implementation, with an optional title for UI representation.
#[derive(Debug, Clone, PartialEq, Deserialize, Serialize, TS)]
pub struct McpClientInfo {
pub name: String,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub title: Option<String>,
pub version: String,
}
/// Describes the name and version of an MCP implementation, with an optional title for UI representation.
#[derive(Debug, Clone, PartialEq, Deserialize, Serialize, TS)]
pub struct McpServerInfo {
pub struct Implementation {
pub name: String,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub title: Option<String>,
pub version: String,
// This is an extra field that the Codex MCP server sends as part of InitializeResult.
#[serde(default, skip_serializing_if = "Option::is_none")]
pub user_agent: Option<String>,
}
@@ -513,7 +505,7 @@ impl ModelContextProtocolRequest for InitializeRequest {
pub struct InitializeRequestParams {
pub capabilities: ClientCapabilities,
#[serde(rename = "clientInfo")]
pub client_info: McpClientInfo,
pub client_info: Implementation,
#[serde(rename = "protocolVersion")]
pub protocol_version: String,
}
@@ -527,7 +519,7 @@ pub struct InitializeResult {
#[serde(rename = "protocolVersion")]
pub protocol_version: String,
#[serde(rename = "serverInfo")]
pub server_info: McpServerInfo,
pub server_info: Implementation,
}
impl From<InitializeResult> for serde_json::Value {

View File

@@ -1,10 +1,10 @@
use mcp_types::ClientCapabilities;
use mcp_types::ClientRequest;
use mcp_types::Implementation;
use mcp_types::InitializeRequestParams;
use mcp_types::JSONRPC_VERSION;
use mcp_types::JSONRPCMessage;
use mcp_types::JSONRPCRequest;
use mcp_types::McpClientInfo;
use mcp_types::RequestId;
use serde_json::json;
@@ -58,10 +58,11 @@ fn deserialize_initialize_request() {
sampling: None,
elicitation: None,
},
client_info: McpClientInfo {
client_info: Implementation {
name: "acme-client".into(),
title: Some("Acme".to_string()),
version: "1.2.3".into(),
user_agent: None,
},
protocol_version: "2025-06-18".into(),
}

View File

@@ -31,6 +31,8 @@ pub fn generate_ts(out_dir: &Path, prettier: Option<&Path>) -> Result<()> {
codex_protocol::mcp_protocol::SendUserTurnResponse::export_all_to(out_dir)?;
codex_protocol::mcp_protocol::InterruptConversationResponse::export_all_to(out_dir)?;
codex_protocol::mcp_protocol::GitDiffToRemoteResponse::export_all_to(out_dir)?;
codex_protocol::mcp_protocol::LoginApiKeyParams::export_all_to(out_dir)?;
codex_protocol::mcp_protocol::LoginApiKeyResponse::export_all_to(out_dir)?;
codex_protocol::mcp_protocol::LoginChatGptResponse::export_all_to(out_dir)?;
codex_protocol::mcp_protocol::CancelLoginChatGptResponse::export_all_to(out_dir)?;
codex_protocol::mcp_protocol::LogoutChatGptResponse::export_all_to(out_dir)?;
@@ -38,7 +40,9 @@ pub fn generate_ts(out_dir: &Path, prettier: Option<&Path>) -> Result<()> {
codex_protocol::mcp_protocol::ApplyPatchApprovalResponse::export_all_to(out_dir)?;
codex_protocol::mcp_protocol::ExecCommandApprovalResponse::export_all_to(out_dir)?;
codex_protocol::mcp_protocol::GetUserSavedConfigResponse::export_all_to(out_dir)?;
codex_protocol::mcp_protocol::SetDefaultModelResponse::export_all_to(out_dir)?;
codex_protocol::mcp_protocol::GetUserAgentResponse::export_all_to(out_dir)?;
codex_protocol::mcp_protocol::UserInfoResponse::export_all_to(out_dir)?;
// All notification types reachable from this enum will be generated by
// induction, so they do not need to be listed individually.

View File

@@ -126,6 +126,11 @@ pub enum ClientRequest {
request_id: RequestId,
params: GitDiffToRemoteParams,
},
LoginApiKey {
#[serde(rename = "id")]
request_id: RequestId,
params: LoginApiKeyParams,
},
LoginChatGpt {
#[serde(rename = "id")]
request_id: RequestId,
@@ -148,10 +153,19 @@ pub enum ClientRequest {
#[serde(rename = "id")]
request_id: RequestId,
},
SetDefaultModel {
#[serde(rename = "id")]
request_id: RequestId,
params: SetDefaultModelParams,
},
GetUserAgent {
#[serde(rename = "id")]
request_id: RequestId,
},
UserInfo {
#[serde(rename = "id")]
request_id: RequestId,
},
/// Execute a command (argv vector) under the server's sandbox.
ExecOneOffCommand {
#[serde(rename = "id")]
@@ -208,6 +222,9 @@ pub struct NewConversationParams {
pub struct NewConversationResponse {
pub conversation_id: ConversationId,
pub model: String,
/// Note: this may be ignored by the model.

#[serde(skip_serializing_if = "Option::is_none")]
pub reasoning_effort: Option<ReasoningEffort>,
pub rollout_path: PathBuf,
}
@@ -284,6 +301,16 @@ pub struct ArchiveConversationResponse {}
#[serde(rename_all = "camelCase")]
pub struct RemoveConversationSubscriptionResponse {}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct LoginApiKeyParams {
pub api_key: String,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct LoginApiKeyResponse {}
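
Note: `LoginApiKeyParams` picks up the `rename_all = "camelCase"` convention used across this file, so the wire key is `apiKey`. A standalone sketch (the key value is a placeholder; `TS` derive omitted):

```rust
use serde::{Deserialize, Serialize};

// Mirror of LoginApiKeyParams above; camelCase renaming applies on the wire.
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq)]
#[serde(rename_all = "camelCase")]
pub struct LoginApiKeyParams {
    pub api_key: String,
}

fn main() {
    let params = LoginApiKeyParams {
        api_key: "placeholder-key".into(),
    };
    assert_eq!(
        serde_json::to_string(&params).unwrap(),
        r#"{"apiKey":"placeholder-key"}"#
    );
}
```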
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct LoginChatGptResponse {
@@ -363,9 +390,14 @@ pub struct ExecArbitraryCommandResponse {
pub struct GetAuthStatusResponse {
#[serde(skip_serializing_if = "Option::is_none")]
pub auth_method: Option<AuthMode>,
pub preferred_auth_method: AuthMode,
#[serde(skip_serializing_if = "Option::is_none")]
pub auth_token: Option<String>,
// Indicates whether the auth method must be valid to use the server.
// This can be false if using a custom provider that is configured
// with requires_openai_auth == false.
#[serde(skip_serializing_if = "Option::is_none")]
pub requires_openai_auth: Option<bool>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
@@ -374,12 +406,38 @@ pub struct GetUserAgentResponse {
pub user_agent: String,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct UserInfoResponse {
/// Note: `alleged_user_email` is not currently verified. We read it from
/// the local auth.json, which the user could theoretically modify. In the
/// future, we may add logic to verify the email against the server before
/// returning it.
pub alleged_user_email: Option<String>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct GetUserSavedConfigResponse {
pub config: UserSavedConfig,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct SetDefaultModelParams {
/// If set to None, this means `model` should be cleared in config.toml.
#[serde(skip_serializing_if = "Option::is_none")]
pub model: Option<String>,
/// If set to None, this means `model_reasoning_effort` should be cleared
/// in config.toml.
#[serde(skip_serializing_if = "Option::is_none")]
pub reasoning_effort: Option<ReasoningEffort>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct SetDefaultModelResponse {}
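
Note: since both fields carry `skip_serializing_if = "Option::is_none"`, an unset field disappears from the payload entirely, and per the doc comments above the server reads a missing key as "clear this value in config.toml". A hedged sketch with a string standing in for `ReasoningEffort` (model slug illustrative; `TS` derive omitted):

```rust
use serde::{Deserialize, Serialize};

// Reduced mirror of SetDefaultModelParams above.
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq)]
#[serde(rename_all = "camelCase")]
pub struct SetDefaultModelParams {
    #[serde(skip_serializing_if = "Option::is_none")]
    pub model: Option<String>,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub reasoning_effort: Option<String>,
}

fn main() {
    // Set a model but clear the saved reasoning effort.
    let params = SetDefaultModelParams {
        model: Some("example-model".into()),
        reasoning_effort: None,
    };
    // `reasoningEffort` is omitted from the payload, not serialized as null.
    assert_eq!(
        serde_json::to_string(&params).unwrap(),
        r#"{"model":"example-model"}"#
    );
}
```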
/// UserSavedConfig contains a subset of the config. It is meant to expose mcp
/// client-configurable settings that can be specified in the NewConversation
/// and SendUserTurn requests.
@@ -469,7 +527,8 @@ pub struct SendUserTurnParams {
pub approval_policy: AskForApproval,
pub sandbox_policy: SandboxPolicy,
pub model: String,
pub effort: ReasoningEffort,
#[serde(skip_serializing_if = "Option::is_none")]
pub effort: Option<ReasoningEffort>,
pub summary: ReasoningSummary,
}

View File

@@ -115,7 +115,6 @@ pub enum ResponseItem {
status: Option<String>,
action: WebSearchAction,
},
#[serde(other)]
Other,
}
@@ -220,7 +219,7 @@ impl From<Vec<InputItem>> for ResponseInputItem {
let mime = mime_guess::from_path(&path)
.first()
.map(|m| m.essence_str().to_owned())
.unwrap_or_else(|| "application/octet-stream".to_string());
.unwrap_or_else(|| "image".to_string());
let encoded = base64::engine::general_purpose::STANDARD.encode(bytes);
Some(ContentItem::InputImage {
image_url: format!("data:{mime};base64,{encoded}"),

View File

@@ -81,7 +81,8 @@ pub enum Op {
model: String,
/// Will only be honored if the model is configured to use reasoning.
effort: ReasoningEffortConfig,
#[serde(skip_serializing_if = "Option::is_none")]
effort: Option<ReasoningEffortConfig>,
/// Will only be honored if the model is configured to use reasoning.
summary: ReasoningSummaryConfig,
@@ -111,8 +112,11 @@ pub enum Op {
model: Option<String>,
/// Updated reasoning effort (honored only for reasoning-capable models).
///
/// Use `Some(Some(_))` to set a specific effort, `Some(None)` to clear
/// the effort, or `None` to leave the existing value unchanged.
#[serde(skip_serializing_if = "Option::is_none")]
effort: Option<ReasoningEffortConfig>,
effort: Option<Option<ReasoningEffortConfig>>,
/// Updated reasoning summary preference (honored only for reasoning-capable models).
#[serde(skip_serializing_if = "Option::is_none")]
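
Note: the nested `Option<Option<_>>` yields three distinct states on the wire. A minimal serialization sketch of the pattern (standalone struct, not the real `Op` variant):

```rust
use serde::Serialize;

// Standalone illustration of the Option<Option<_>> pattern used for `effort`.
#[derive(Serialize)]
struct Update {
    #[serde(skip_serializing_if = "Option::is_none")]
    effort: Option<Option<String>>,
}

fn main() {
    // None: leave the existing value unchanged -> the field is omitted.
    assert_eq!(serde_json::to_string(&Update { effort: None }).unwrap(), "{}");

    // Some(None): clear the effort -> an explicit null.
    assert_eq!(
        serde_json::to_string(&Update { effort: Some(None) }).unwrap(),
        r#"{"effort":null}"#
    );

    // Some(Some(_)): set a specific effort.
    assert_eq!(
        serde_json::to_string(&Update { effort: Some(Some("high".into())) }).unwrap(),
        r#"{"effort":"high"}"#
    );
}
```

Note that plain serde deserializes an explicit `null` back to the outer `None`, so a receiver that must distinguish "absent" from "null" typically needs a `double_option`-style helper (e.g. from the `serde_with` crate); only the serialization side is shown here.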
@@ -149,7 +153,7 @@ pub enum Op {
/// Request the path to the current session's rollout file.
/// Reply is delivered via `EventMsg::ConversationPath`.
GetHistory,
GetPath,
/// Request the list of MCP tools available across all configured servers.
/// Reply is delivered via `EventMsg::McpListToolsResponse`.
@@ -162,8 +166,22 @@ pub enum Op {
/// The agent will use its existing context (either conversation history or previous response id)
/// to generate a summary which will be returned as an AgentMessage event.
Compact,
/// Request a code review from the agent.
Review { review_request: ReviewRequest },
/// Request to shut down codex instance.
Shutdown,
/// Execute a user-initiated one-off shell command (triggered by "!cmd").
///
/// The command string is executed using the user's default shell and may
/// include shell syntax (pipes, redirects, etc.). Output is streamed via
/// `ExecCommand*` events and the UI regains control upon `TaskComplete`.
RunUserShellCommand {
/// The raw command string after '!'
command: String,
},
}
/// Determines the conditions under which the user is consulted to approve
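
Note: on the TUI side, a composer submission that starts with `!` is routed to this op instead of being sent as a user message. A minimal dispatch sketch; the reduced `Op` mirror below models only this variant, with the tagging inferred from the serialization test later in this diff:

```rust
use serde::Serialize;

// Reduced mirror of the Op enum; only the variant relevant here is modeled.
#[derive(Serialize)]
#[serde(tag = "type", rename_all = "snake_case")]
enum Op {
    RunUserShellCommand { command: String },
}

/// Map a composer submission to an op: "!cmd" becomes RunUserShellCommand.
fn op_for_submission(text: &str) -> Option<Op> {
    let command = text.strip_prefix('!')?;
    Some(Op::RunUserShellCommand {
        command: command.to_string(),
    })
}

fn main() {
    let op = op_for_submission("!echo hi").expect("bang command");
    assert_eq!(
        serde_json::to_string(&op).unwrap(),
        r#"{"type":"run_user_shell_command","command":"echo hi"}"#
    );
}
```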
@@ -499,7 +517,13 @@ pub enum EventMsg {
/// Notification that the agent is shutting down.
ShutdownComplete,
ConversationHistory(ConversationHistoryResponseEvent),
ConversationPath(ConversationPathResponseEvent),
/// Entered review mode.
EnteredReviewMode(ReviewRequest),
/// Exited review mode with an optional final result to apply.
ExitedReviewMode(Option<ReviewOutputEvent>),
}
// Individual event payload types matching each `EventMsg` variant.
@@ -708,12 +732,12 @@ where
let (_role, message) = value;
let message = message.as_ref();
let trimmed = message.trim();
if trimmed.starts_with(ENVIRONMENT_CONTEXT_OPEN_TAG)
&& trimmed.ends_with(ENVIRONMENT_CONTEXT_CLOSE_TAG)
if starts_with_ignore_ascii_case(trimmed, ENVIRONMENT_CONTEXT_OPEN_TAG)
&& ends_with_ignore_ascii_case(trimmed, ENVIRONMENT_CONTEXT_CLOSE_TAG)
{
InputMessageKind::EnvironmentContext
} else if trimmed.starts_with(USER_INSTRUCTIONS_OPEN_TAG)
&& trimmed.ends_with(USER_INSTRUCTIONS_CLOSE_TAG)
} else if starts_with_ignore_ascii_case(trimmed, USER_INSTRUCTIONS_OPEN_TAG)
&& ends_with_ignore_ascii_case(trimmed, USER_INSTRUCTIONS_CLOSE_TAG)
{
InputMessageKind::UserInstructions
} else {
@@ -722,6 +746,26 @@ where
}
}
fn starts_with_ignore_ascii_case(text: &str, prefix: &str) -> bool {
let text_bytes = text.as_bytes();
let prefix_bytes = prefix.as_bytes();
text_bytes.len() >= prefix_bytes.len()
&& text_bytes
.iter()
.zip(prefix_bytes.iter())
.all(|(a, b)| a.eq_ignore_ascii_case(b))
}
fn ends_with_ignore_ascii_case(text: &str, suffix: &str) -> bool {
let text_bytes = text.as_bytes();
let suffix_bytes = suffix.as_bytes();
text_bytes.len() >= suffix_bytes.len()
&& text_bytes[text_bytes.len() - suffix_bytes.len()..]
.iter()
.zip(suffix_bytes.iter())
.all(|(a, b)| a.eq_ignore_ascii_case(b))
}
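
Note: these helpers make tag detection tolerant of any ASCII casing without allocating a lowercased copy. A self-contained check, copying the prefix helper above (the tag literals are illustrative, not the real constants):

```rust
// Copied from the diff above: ASCII case-insensitive, allocation-free prefix test.
fn starts_with_ignore_ascii_case(text: &str, prefix: &str) -> bool {
    let text_bytes = text.as_bytes();
    let prefix_bytes = prefix.as_bytes();
    text_bytes.len() >= prefix_bytes.len()
        && text_bytes
            .iter()
            .zip(prefix_bytes.iter())
            .all(|(a, b)| a.eq_ignore_ascii_case(b))
}

fn main() {
    // Mixed-case tags now match; the old `str::starts_with` required exact case.
    assert!(starts_with_ignore_ascii_case(
        "<ENVIRONMENT_CONTEXT>...</ENVIRONMENT_CONTEXT>",
        "<environment_context>",
    ));
    assert!(!starts_with_ignore_ascii_case(
        "<user_instructions>",
        "<environment_context>",
    ));
}
```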
#[derive(Debug, Clone, Deserialize, Serialize, TS)]
pub struct AgentMessageDeltaEvent {
pub delta: String,
@@ -801,9 +845,9 @@ pub struct WebSearchEndEvent {
/// Response payload for `Op::GetPath` containing the path to the current
/// session's rollout file.
#[derive(Debug, Clone, Deserialize, Serialize, TS)]
pub struct ConversationHistoryResponseEvent {
pub struct ConversationPathResponseEvent {
pub conversation_id: ConversationId,
pub entries: Vec<ResponseItem>,
pub path: PathBuf,
}
#[derive(Debug, Clone, Deserialize, Serialize, TS)]
@@ -897,9 +941,27 @@ pub struct SessionMetaLine {
pub enum RolloutItem {
SessionMeta(SessionMetaLine),
ResponseItem(ResponseItem),
Compacted(CompactedItem),
TurnContext(TurnContextItem),
EventMsg(EventMsg),
}
#[derive(Serialize, Deserialize, Clone, Debug, TS)]
pub struct CompactedItem {
pub message: String,
}
#[derive(Serialize, Deserialize, Clone, Debug, TS)]
pub struct TurnContextItem {
pub cwd: PathBuf,
pub approval_policy: AskForApproval,
pub sandbox_policy: SandboxPolicy,
pub model: String,
#[serde(skip_serializing_if = "Option::is_none")]
pub effort: Option<ReasoningEffortConfig>,
pub summary: ReasoningSummaryConfig,
}
#[derive(Serialize, Deserialize, Clone)]
pub struct RolloutLine {
pub timestamp: String,
@@ -920,6 +982,57 @@ pub struct GitInfo {
pub repository_url: Option<String>,
}
/// Review request sent to the review session.
#[derive(Debug, Clone, Deserialize, Serialize, PartialEq, TS)]
pub struct ReviewRequest {
pub prompt: String,
pub user_facing_hint: String,
}
/// Structured review result produced by a child review session.
#[derive(Debug, Clone, Deserialize, Serialize, PartialEq, TS)]
pub struct ReviewOutputEvent {
pub findings: Vec<ReviewFinding>,
pub overall_correctness: String,
pub overall_explanation: String,
pub overall_confidence_score: f32,
}
impl Default for ReviewOutputEvent {
fn default() -> Self {
Self {
findings: Vec::new(),
overall_correctness: String::default(),
overall_explanation: String::default(),
overall_confidence_score: 0.0,
}
}
}
/// A single review finding describing an observed issue or recommendation.
#[derive(Debug, Clone, Deserialize, Serialize, PartialEq, TS)]
pub struct ReviewFinding {
pub title: String,
pub body: String,
pub confidence_score: f32,
pub priority: i32,
pub code_location: ReviewCodeLocation,
}
/// Location of the code related to a review finding.
#[derive(Debug, Clone, Deserialize, Serialize, PartialEq, TS)]
pub struct ReviewCodeLocation {
pub absolute_file_path: PathBuf,
pub line_range: ReviewLineRange,
}
/// Inclusive line range in a file associated with the finding.
#[derive(Debug, Clone, Deserialize, Serialize, PartialEq, TS)]
pub struct ReviewLineRange {
pub start: u32,
pub end: u32,
}
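
Note: no `rename_all` is applied to these types, so the wire keys match the snake_case field names. A hedged parsing sketch using mirrors of the structs above (finding content and paths are illustrative; `TS` derive omitted):

```rust
use serde::{Deserialize, Serialize};
use std::path::PathBuf;

#[derive(Debug, Clone, Deserialize, Serialize, PartialEq)]
pub struct ReviewOutputEvent {
    pub findings: Vec<ReviewFinding>,
    pub overall_correctness: String,
    pub overall_explanation: String,
    pub overall_confidence_score: f32,
}

#[derive(Debug, Clone, Deserialize, Serialize, PartialEq)]
pub struct ReviewFinding {
    pub title: String,
    pub body: String,
    pub confidence_score: f32,
    pub priority: i32,
    pub code_location: ReviewCodeLocation,
}

#[derive(Debug, Clone, Deserialize, Serialize, PartialEq)]
pub struct ReviewCodeLocation {
    pub absolute_file_path: PathBuf,
    pub line_range: ReviewLineRange,
}

#[derive(Debug, Clone, Deserialize, Serialize, PartialEq)]
pub struct ReviewLineRange {
    pub start: u32,
    pub end: u32,
}

fn main() {
    let json = r#"{
        "findings": [{
            "title": "Avoid unwrap in the request handler",
            "body": "An `Err` here panics the server loop.",
            "confidence_score": 0.9,
            "priority": 1,
            "code_location": {
                "absolute_file_path": "/src/server.rs",
                "line_range": { "start": 10, "end": 14 }
            }
        }],
        "overall_correctness": "patch is incorrect",
        "overall_explanation": "One blocking finding.",
        "overall_confidence_score": 0.8
    }"#;
    let ev: ReviewOutputEvent = serde_json::from_str(json).unwrap();
    assert_eq!(ev.findings.len(), 1);
    assert_eq!(ev.findings[0].code_location.line_range.start, 10);
}
```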
#[derive(Debug, Clone, Deserialize, Serialize, TS)]
pub struct ExecCommandBeginEvent {
/// Identifier so this can be paired with the ExecCommandEnd event.
@@ -929,6 +1042,10 @@ pub struct ExecCommandBeginEvent {
/// The command's working directory if not the default cwd for the agent.
pub cwd: PathBuf,
pub parsed_cmd: Vec<ParsedCommand>,
/// True when this exec was initiated directly by the user (e.g. bang command),
/// not by the agent/model. Defaults to false for backwards compatibility.
#[serde(default)]
pub user_initiated_shell_command: bool,
}
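
Note: the `#[serde(default)]` is what keeps previously recorded events parseable — a payload without the key deserializes to `false`. A reduced sketch (the `call_id` field name is illustrative, standing in for the event's pairing identifier):

```rust
use serde::Deserialize;

// Reduced stand-in for ExecCommandBeginEvent; only the new flag matters here.
#[derive(Debug, Deserialize)]
struct BeginEvent {
    call_id: String,
    #[serde(default)]
    user_initiated_shell_command: bool,
}

fn main() {
    // Payload from an older producer, recorded before the flag existed.
    let old: BeginEvent = serde_json::from_str(r#"{"call_id":"c1"}"#).unwrap();
    assert!(!old.user_initiated_shell_command);

    // Payload from a "!cmd" exec sets the flag explicitly.
    let bang: BeginEvent = serde_json::from_str(
        r#"{"call_id":"c2","user_initiated_shell_command":true}"#,
    )
    .unwrap();
    assert!(bang.user_initiated_shell_command);
}
```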
#[derive(Debug, Clone, Deserialize, Serialize, TS)]
@@ -1064,6 +1181,10 @@ pub struct SessionConfiguredEvent {
/// Tell the client what model is being queried.
pub model: String,
/// The effort the model is putting into reasoning about the user's request.
#[serde(skip_serializing_if = "Option::is_none")]
pub reasoning_effort: Option<ReasoningEffortConfig>,
/// Identifier of the history log file (inode on Unix, 0 otherwise).
pub history_log_id: u64,
@@ -1152,6 +1273,7 @@ mod tests {
msg: EventMsg::SessionConfigured(SessionConfiguredEvent {
session_id: conversation_id,
model: "codex-mini-latest".to_string(),
reasoning_effort: Some(ReasoningEffortConfig::default()),
history_log_id: 0,
history_entry_count: 0,
initial_messages: None,
@@ -1165,6 +1287,7 @@ mod tests {
"type": "session_configured",
"session_id": "67e55044-10b1-426f-9247-bb680e5fe0c8",
"model": "codex-mini-latest",
"reasoning_effort": "medium",
"history_log_id": 0,
"history_entry_count": 0,
"rollout_path": format!("{}", rollout_file.path().display()),
@@ -1189,4 +1312,19 @@ mod tests {
let deserialized: ExecCommandOutputDeltaEvent = serde_json::from_str(&serialized).unwrap();
assert_eq!(deserialized, event);
}
#[test]
fn serialize_run_user_shell_command_op() {
let op = Op::RunUserShellCommand {
command: "echo hi".to_string(),
};
let value = serde_json::to_value(op).unwrap();
assert_eq!(
value,
json!({
"type": "run_user_shell_command",
"command": "echo hi",
})
);
}
}

View File

@@ -11,7 +11,10 @@ use codex_ansi_escape::ansi_escape_line;
use codex_core::AuthManager;
use codex_core::ConversationManager;
use codex_core::config::Config;
use codex_core::config::persist_model_selection;
use codex_core::model_family::find_family_for_model;
use codex_core::protocol::TokenUsage;
use codex_core::protocol_config_types::ReasoningEffort as ReasoningEffortConfig;
use color_eyre::eyre::Result;
use color_eyre::eyre::WrapErr;
use crossterm::event::KeyCode;
@@ -37,6 +40,9 @@ pub(crate) struct App {
/// Config is stored here so we can recreate ChatWidgets as needed.
pub(crate) config: Config,
pub(crate) active_profile: Option<String>,
model_saved_to_profile: bool,
model_saved_to_global: bool,
pub(crate) file_search: FileSearchManager,
@@ -61,6 +67,7 @@ impl App {
tui: &mut tui::Tui,
auth_manager: Arc<AuthManager>,
config: Config,
active_profile: Option<String>,
initial_prompt: Option<String>,
initial_images: Vec<PathBuf>,
resume_selection: ResumeSelection,
@@ -119,6 +126,9 @@ impl App {
app_event_tx,
chat_widget,
config,
active_profile,
model_saved_to_profile: false,
model_saved_to_global: false,
file_search,
enhanced_keys_supported,
transcript_lines: Vec::new(),
@@ -288,10 +298,17 @@ impl App {
self.chat_widget.apply_file_search_result(query, matches);
}
AppEvent::UpdateReasoningEffort(effort) => {
self.chat_widget.set_reasoning_effort(effort);
self.on_update_reasoning_effort(effort);
}
AppEvent::UpdateModel(model) => {
self.chat_widget.set_model(model);
self.chat_widget.set_model(model.clone());
self.config.model = model.clone();
if let Some(family) = find_family_for_model(&model) {
self.config.model_family = family;
}
self.model_saved_to_profile = false;
self.model_saved_to_global = false;
self.show_model_save_hint();
}
AppEvent::UpdateAskForApprovalPolicy(policy) => {
self.chat_widget.set_approval_policy(policy);
@@ -304,7 +321,110 @@ impl App {
}
pub(crate) fn token_usage(&self) -> codex_core::protocol::TokenUsage {
self.chat_widget.token_usage().clone()
self.chat_widget.token_usage()
}
fn show_model_save_hint(&mut self) {
let model = self.config.model.clone();
if self.active_profile.is_some() {
self.chat_widget.add_info_message(
format!("Model changed to {model} for the current session"),
Some("(ctrl+s to set as profile default)".to_string()),
);
} else {
self.chat_widget.add_info_message(
format!("Model changed to {model} for the current session"),
Some("(ctrl+s to set as default)".to_string()),
);
}
}
fn on_update_reasoning_effort(&mut self, effort: Option<ReasoningEffortConfig>) {
let changed = self.config.model_reasoning_effort != effort;
self.chat_widget.set_reasoning_effort(effort);
self.config.model_reasoning_effort = effort;
if changed {
let show_hint = self.model_saved_to_profile || self.model_saved_to_global;
self.model_saved_to_profile = false;
self.model_saved_to_global = false;
if show_hint {
self.show_model_save_hint();
}
}
}
async fn persist_model_shortcut(&mut self) {
enum SaveScope<'a> {
Profile(&'a str),
Global,
AlreadySaved,
}
let scope = if let Some(profile) = self
.active_profile
.as_deref()
.filter(|_| !self.model_saved_to_profile)
{
SaveScope::Profile(profile)
} else if !self.model_saved_to_global {
SaveScope::Global
} else {
SaveScope::AlreadySaved
};
let model = self.config.model.clone();
let effort = self.config.model_reasoning_effort;
let codex_home = self.config.codex_home.clone();
match scope {
SaveScope::Profile(profile) => {
match persist_model_selection(&codex_home, Some(profile), &model, effort).await {
Ok(()) => {
self.model_saved_to_profile = true;
self.chat_widget.add_info_message(
format!("Profile model changed to {model} for all sessions"),
Some("(view global config in config.toml)".to_string()),
);
}
Err(err) => {
tracing::error!(
error = %err,
"failed to persist model selection via shortcut"
);
self.chat_widget.add_error_message(format!(
"Failed to save model preference for profile `{profile}`: {err}"
));
}
}
}
SaveScope::Global => {
match persist_model_selection(&codex_home, None, &model, effort).await {
Ok(()) => {
self.model_saved_to_global = true;
self.chat_widget.add_info_message(
format!("Default model changed to {model} for all sessions"),
Some("(view global config in config.toml)".to_string()),
)
}
Err(err) => {
tracing::error!(
error = %err,
"failed to persist global model selection via shortcut"
);
self.chat_widget.add_error_message(format!(
"Failed to save global model preference: {err}"
));
}
}
}
SaveScope::AlreadySaved => {
self.chat_widget.add_info_message(
"Model preference already saved globally; no further action needed."
.to_string(),
None,
);
}
}
}
async fn handle_key_event(&mut self, tui: &mut tui::Tui, key_event: KeyEvent) {
@@ -320,6 +440,14 @@ impl App {
self.overlay = Some(Overlay::new_transcript(self.transcript_lines.clone()));
tui.frame_requester().schedule_frame();
}
KeyEvent {
code: KeyCode::Char('s'),
modifiers: crossterm::event::KeyModifiers::CONTROL,
kind: KeyEventKind::Press,
..
} => {
self.persist_model_shortcut().await;
}
// Esc primes/advances backtracking only in normal (not working) mode
// with an empty composer. In any other state, forward Esc so the
// active UI (e.g. status indicator, modals, popups) handles it.
@@ -366,3 +494,67 @@ impl App {
};
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::app_backtrack::BacktrackState;
use crate::chatwidget::tests::make_chatwidget_manual_with_sender;
use crate::file_search::FileSearchManager;
use codex_core::CodexAuth;
use codex_core::ConversationManager;
use ratatui::text::Line;
use std::sync::Arc;
use std::sync::atomic::AtomicBool;
fn make_test_app() -> App {
let (chat_widget, app_event_tx, _rx, _op_rx) = make_chatwidget_manual_with_sender();
let config = chat_widget.config_ref().clone();
let server = Arc::new(ConversationManager::with_auth(CodexAuth::from_api_key(
"Test API Key",
)));
let file_search = FileSearchManager::new(config.cwd.clone(), app_event_tx.clone());
App {
server,
app_event_tx,
chat_widget,
config,
active_profile: None,
model_saved_to_profile: false,
model_saved_to_global: false,
file_search,
transcript_lines: Vec::<Line<'static>>::new(),
overlay: None,
deferred_history_lines: Vec::new(),
has_emitted_history_lines: false,
enhanced_keys_supported: false,
commit_anim_running: Arc::new(AtomicBool::new(false)),
backtrack: BacktrackState::default(),
}
}
#[test]
fn update_reasoning_effort_updates_config_and_resets_flags() {
let mut app = make_test_app();
app.model_saved_to_profile = true;
app.model_saved_to_global = true;
app.config.model_reasoning_effort = Some(ReasoningEffortConfig::Medium);
app.chat_widget
.set_reasoning_effort(Some(ReasoningEffortConfig::Medium));
app.on_update_reasoning_effort(Some(ReasoningEffortConfig::High));
assert_eq!(
app.config.model_reasoning_effort,
Some(ReasoningEffortConfig::High)
);
assert_eq!(
app.chat_widget.config_ref().model_reasoning_effort,
Some(ReasoningEffortConfig::High)
);
assert!(!app.model_saved_to_profile);
assert!(!app.model_saved_to_global);
}
}

View File

@@ -1,9 +1,11 @@
use std::path::PathBuf;
use crate::app::App;
use crate::backtrack_helpers;
use crate::pager_overlay::Overlay;
use crate::tui;
use crate::tui::TuiEvent;
use codex_core::protocol::ConversationHistoryResponseEvent;
use codex_core::protocol::ConversationPathResponseEvent;
use codex_protocol::mcp_protocol::ConversationId;
use color_eyre::eyre::Result;
use crossterm::event::KeyCode;
@@ -98,7 +100,7 @@ impl App {
) {
self.backtrack.pending = Some((base_id, drop_last_messages, prefill));
self.app_event_tx.send(crate::app_event::AppEvent::CodexOp(
codex_core::protocol::Op::GetHistory,
codex_core::protocol::Op::GetPath,
));
}
@@ -265,7 +267,7 @@ impl App {
pub(crate) async fn on_conversation_history_for_backtrack(
&mut self,
tui: &mut tui::Tui,
ev: ConversationHistoryResponseEvent,
ev: ConversationPathResponseEvent,
) -> Result<()> {
if let Some((base_id, _, _)) = self.backtrack.pending.as_ref()
&& ev.conversation_id == *base_id
@@ -281,14 +283,14 @@ impl App {
async fn fork_and_switch_to_new_conversation(
&mut self,
tui: &mut tui::Tui,
ev: ConversationHistoryResponseEvent,
ev: ConversationPathResponseEvent,
drop_count: usize,
prefill: String,
) {
let cfg = self.chat_widget.config_ref().clone();
// Perform the fork via a thin wrapper for clarity/testability.
let result = self
.perform_fork(ev.entries.clone(), drop_count, cfg.clone())
.perform_fork(ev.path.clone(), drop_count, cfg.clone())
.await;
match result {
Ok(new_conv) => {
@@ -301,13 +303,11 @@ impl App {
/// Thin wrapper around ConversationManager::fork_conversation.
async fn perform_fork(
&self,
entries: Vec<codex_protocol::models::ResponseItem>,
path: PathBuf,
drop_count: usize,
cfg: codex_core::config::Config,
) -> codex_core::error::Result<codex_core::NewConversation> {
self.server
.fork_conversation(entries, drop_count, cfg)
.await
self.server.fork_conversation(drop_count, cfg, path).await
}
/// Install a forked conversation into the ChatWidget and update UI to reflect selection.
@@ -335,7 +335,7 @@ impl App {
self.trim_transcript_for_backtrack(drop_count);
self.render_transcript_once(tui);
if !prefill.is_empty() {
self.chat_widget.insert_str(prefill);
self.chat_widget.set_composer_text(prefill.to_string());
}
tui.frame_requester().schedule_frame();
}

View File

@@ -1,4 +1,4 @@
use codex_core::protocol::ConversationHistoryResponseEvent;
use codex_core::protocol::ConversationPathResponseEvent;
use codex_core::protocol::Event;
use codex_file_search::FileMatch;
@@ -46,7 +46,7 @@ pub(crate) enum AppEvent {
CommitTick,
/// Update the current reasoning effort in the running app and widget.
UpdateReasoningEffort(ReasoningEffort),
UpdateReasoningEffort(Option<ReasoningEffort>),
/// Update the current model slug in the running app and widget.
UpdateModel(String),
@@ -58,5 +58,5 @@ pub(crate) enum AppEvent {
UpdateSandboxPolicy(SandboxPolicy),
/// Forwarded conversation history snapshot from the current conversation.
ConversationHistory(ConversationHistoryResponseEvent),
ConversationHistory(ConversationPathResponseEvent),
}

View File

@@ -95,6 +95,10 @@ enum ActivePopup {
File(FileSearchPopup),
}
const FOOTER_HINT_HEIGHT: u16 = 1;
const FOOTER_SPACING_HEIGHT: u16 = 1;
const FOOTER_HEIGHT_WITH_HINT: u16 = FOOTER_HINT_HEIGHT + FOOTER_SPACING_HEIGHT;
impl ChatComposer {
pub fn new(
has_input_focus: bool,
@@ -134,20 +138,20 @@ impl ChatComposer {
pub fn desired_height(&self, width: u16) -> u16 {
self.textarea.desired_height(width - 1)
+ match &self.active_popup {
ActivePopup::None => 1u16,
ActivePopup::None => FOOTER_HEIGHT_WITH_HINT,
ActivePopup::Command(c) => c.calculate_required_height(),
ActivePopup::File(c) => c.calculate_required_height(),
}
}
pub fn cursor_pos(&self, area: Rect) -> Option<(u16, u16)> {
let popup_height = match &self.active_popup {
ActivePopup::Command(popup) => popup.calculate_required_height(),
ActivePopup::File(popup) => popup.calculate_required_height(),
ActivePopup::None => 1,
let popup_constraint = match &self.active_popup {
ActivePopup::Command(popup) => Constraint::Max(popup.calculate_required_height()),
ActivePopup::File(popup) => Constraint::Max(popup.calculate_required_height()),
ActivePopup::None => Constraint::Max(FOOTER_HEIGHT_WITH_HINT),
};
let [textarea_rect, _] =
Layout::vertical([Constraint::Min(1), Constraint::Max(popup_height)]).areas(area);
Layout::vertical([Constraint::Min(1), popup_constraint]).areas(area);
let mut textarea_rect = textarea_rect;
textarea_rect.width = textarea_rect.width.saturating_sub(1);
textarea_rect.x += 1;
@@ -243,6 +247,10 @@ impl ChatComposer {
/// Replace the entire composer content with `text` and reset cursor.
pub(crate) fn set_text_content(&mut self, text: String) {
// Clear any existing content, placeholders, and attachments first.
self.textarea.set_text("");
self.pending_pastes.clear();
self.attached_images.clear();
self.textarea.set_text(&text);
self.textarea.set_cursor(0);
self.sync_command_popup();
@@ -483,7 +491,7 @@ impl ChatComposer {
} => {
// Hide popup without modifying text, remember token to avoid immediate reopen.
if let Some(tok) = Self::current_at_token(&self.textarea) {
self.dismissed_file_popup_token = Some(tok.to_string());
self.dismissed_file_popup_token = Some(tok);
}
self.active_popup = ActivePopup::None;
(InputResult::None, true)
@@ -542,7 +550,7 @@ impl ChatComposer {
Some(ext) if ext == "jpg" || ext == "jpeg" => "JPEG",
_ => "IMG",
};
self.attach_image(path_buf.clone(), w, h, format_label);
self.attach_image(path_buf, w, h, format_label);
// Add a trailing space to keep typing fluid.
self.textarea.insert_str(" ");
} else {
@@ -1219,13 +1227,16 @@ impl ChatComposer {
impl WidgetRef for ChatComposer {
fn render_ref(&self, area: Rect, buf: &mut Buffer) {
let popup_height = match &self.active_popup {
ActivePopup::Command(popup) => popup.calculate_required_height(),
ActivePopup::File(popup) => popup.calculate_required_height(),
ActivePopup::None => 1,
let (popup_constraint, hint_spacing) = match &self.active_popup {
ActivePopup::Command(popup) => (Constraint::Max(popup.calculate_required_height()), 0),
ActivePopup::File(popup) => (Constraint::Max(popup.calculate_required_height()), 0),
ActivePopup::None => (
Constraint::Length(FOOTER_HEIGHT_WITH_HINT),
FOOTER_SPACING_HEIGHT,
),
};
let [textarea_rect, popup_rect] =
Layout::vertical([Constraint::Min(1), Constraint::Max(popup_height)]).areas(area);
Layout::vertical([Constraint::Min(1), popup_constraint]).areas(area);
match &self.active_popup {
ActivePopup::Command(popup) => {
popup.render_ref(popup_rect, buf);
@@ -1234,7 +1245,16 @@ impl WidgetRef for ChatComposer {
popup.render_ref(popup_rect, buf);
}
ActivePopup::None => {
let bottom_line_rect = popup_rect;
let hint_rect = if hint_spacing > 0 {
let [_, hint_rect] = Layout::vertical([
Constraint::Length(hint_spacing),
Constraint::Length(FOOTER_HINT_HEIGHT),
])
.areas(popup_rect);
hint_rect
} else {
popup_rect
};
let mut hint: Vec<Span<'static>> = if self.ctrl_c_quit_hint {
let ctrl_c_followup = if self.is_task_running {
" to interrupt"
@@ -1305,7 +1325,7 @@ impl WidgetRef for ChatComposer {
Line::from(hint)
.style(Style::default().dim())
.render_ref(bottom_line_rect, buf);
.render_ref(hint_rect, buf);
}
}
let border_style = if self.has_focus {
@@ -1340,6 +1360,7 @@ mod tests {
use super::*;
use image::ImageBuffer;
use image::Rgba;
use pretty_assertions::assert_eq;
use std::path::PathBuf;
use tempfile::tempdir;
@@ -1352,6 +1373,60 @@ mod tests {
use crate::bottom_pane::textarea::TextArea;
use tokio::sync::mpsc::unbounded_channel;
#[test]
fn footer_hint_row_is_separated_from_composer() {
let (tx, _rx) = unbounded_channel::<AppEvent>();
let sender = AppEventSender::new(tx);
let composer = ChatComposer::new(
true,
sender,
false,
"Ask Codex to do anything".to_string(),
false,
);
let area = Rect::new(0, 0, 40, 6);
let mut buf = Buffer::empty(area);
composer.render_ref(area, &mut buf);
let row_to_string = |y: u16| {
let mut row = String::new();
for x in 0..area.width {
row.push(buf[(x, y)].symbol().chars().next().unwrap_or(' '));
}
row
};
let mut hint_row: Option<(u16, String)> = None;
for y in 0..area.height {
let row = row_to_string(y);
if row.contains(" send") {
hint_row = Some((y, row));
break;
}
}
let (hint_row_idx, hint_row_contents) =
hint_row.expect("expected footer hint row to be rendered");
assert_eq!(
hint_row_idx,
area.height - 1,
"hint row should occupy the bottom line: {hint_row_contents:?}",
);
assert!(
hint_row_idx > 0,
"expected a spacing row above the footer hints",
);
let spacing_row = row_to_string(hint_row_idx - 1);
assert_eq!(
spacing_row.trim(),
"",
"expected blank spacing row above hints but saw: {spacing_row:?}",
);
}
#[test]
fn test_current_at_token_basic_cases() {
let test_cases = vec![
@@ -2119,7 +2194,7 @@ mod tests {
// Re-add and test backspace in middle: should break the placeholder string
// and drop the image mapping (same as text placeholder behavior).
composer.attach_image(path.clone(), 20, 10, "PNG");
composer.attach_image(path, 20, 10, "PNG");
let placeholder2 = composer.attached_images[0].placeholder.clone();
// Move cursor to roughly middle of placeholder
if let Some(start_pos) = composer.textarea.text().find(&placeholder2) {
@@ -2178,7 +2253,7 @@ mod tests {
let path1 = PathBuf::from("/tmp/image_dup1.png");
let path2 = PathBuf::from("/tmp/image_dup2.png");
composer.attach_image(path1.clone(), 10, 5, "PNG");
composer.attach_image(path1, 10, 5, "PNG");
// separate placeholders with a space for clarity
composer.handle_paste(" ".into());
composer.attach_image(path2.clone(), 10, 5, "PNG");
@@ -2227,7 +2302,7 @@ mod tests {
assert!(composer.textarea.text().starts_with("[image 3x2 PNG] "));
let imgs = composer.take_recent_submission_images();
assert_eq!(imgs, vec![tmp_path.clone()]);
assert_eq!(imgs, vec![tmp_path]);
}
#[test]

View File

@@ -137,11 +137,12 @@ impl BottomPaneView for ListSelectionView {
fn desired_height(&self, _width: u16) -> u16 {
let rows = (self.items.len()).clamp(1, MAX_POPUP_ROWS);
// +1 for the title row, +1 for optional subtitle, +1 for optional footer
let mut height = rows as u16 + 1;
// +1 for the title row, +1 for a spacer line beneath the header,
// +1 for optional subtitle, +1 for optional footer
let mut height = rows as u16 + 2;
if self.subtitle.is_some() {
// +1 for subtitle, +1 for a blank spacer line beneath it
height = height.saturating_add(2);
// +1 for subtitle (the spacer is accounted for above)
height = height.saturating_add(1);
}
if self.footer_hint.is_some() {
height = height.saturating_add(2);
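Worked example, assuming MAX_POPUP_ROWS is at least 2: the two-item view in the tests below gives rows = 2, so height starts at 4 (rows + title + spacer); a subtitle brings it to 5, and the footer hint adds 2 more.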
@@ -178,17 +179,18 @@ impl BottomPaneView for ListSelectionView {
vec![Self::dim_prefix_span(), sub.clone().dim()];
let subtitle_para = Paragraph::new(Line::from(subtitle_spans));
subtitle_para.render(subtitle_area, buf);
// Render the extra spacer line with the dimmed prefix to align with title/subtitle
let spacer_area = Rect {
x: area.x,
y: next_y.saturating_add(1),
width: area.width,
height: 1,
};
Self::render_dim_prefix_line(spacer_area, buf);
next_y = next_y.saturating_add(2);
next_y = next_y.saturating_add(1);
}
let spacer_area = Rect {
x: area.x,
y: next_y,
width: area.width,
height: 1,
};
Self::render_dim_prefix_line(spacer_area, buf);
next_y = next_y.saturating_add(1);
let footer_reserved = if self.footer_hint.is_some() { 2 } else { 0 };
let rows_area = Rect {
x: area.x,
@@ -245,3 +247,78 @@ impl BottomPaneView for ListSelectionView {
}
}
}
#[cfg(test)]
mod tests {
use super::BottomPaneView;
use super::*;
use crate::app_event::AppEvent;
use insta::assert_snapshot;
use ratatui::layout::Rect;
use tokio::sync::mpsc::unbounded_channel;
fn make_selection_view(subtitle: Option<&str>) -> ListSelectionView {
let (tx_raw, _rx) = unbounded_channel::<AppEvent>();
let tx = AppEventSender::new(tx_raw);
let items = vec![
SelectionItem {
name: "Read Only".to_string(),
description: Some("Codex can read files".to_string()),
is_current: true,
actions: vec![],
},
SelectionItem {
name: "Full Access".to_string(),
description: Some("Codex can edit files".to_string()),
is_current: false,
actions: vec![],
},
];
ListSelectionView::new(
"Select Approval Mode".to_string(),
subtitle.map(str::to_string),
Some("Press Enter to confirm or Esc to go back".to_string()),
items,
tx,
)
}
fn render_lines(view: &ListSelectionView) -> String {
let width = 48;
let height = BottomPaneView::desired_height(view, width);
let area = Rect::new(0, 0, width, height);
let mut buf = Buffer::empty(area);
view.render(area, &mut buf);
let lines: Vec<String> = (0..area.height)
.map(|row| {
let mut line = String::new();
for col in 0..area.width {
let symbol = buf[(area.x + col, area.y + row)].symbol();
if symbol.is_empty() {
line.push(' ');
} else {
line.push_str(symbol);
}
}
line
})
.collect();
lines.join("\n")
}
#[test]
fn renders_blank_line_between_title_and_items_without_subtitle() {
let view = make_selection_view(None);
assert_snapshot!(
"list_selection_spacing_without_subtitle",
render_lines(&view)
);
}
#[test]
fn renders_blank_line_between_subtitle_and_items() {
let view = make_selection_view(Some("Switch between Codex approval presets"));
assert_snapshot!("list_selection_spacing_with_subtitle", render_lines(&view));
}
}

View File

@@ -564,7 +564,7 @@ mod tests {
let (tx_raw, rx) = unbounded_channel::<AppEvent>();
let tx = AppEventSender::new(tx_raw);
let mut pane = BottomPane::new(BottomPaneParams {
app_event_tx: tx.clone(),
app_event_tx: tx,
frame_requester: FrameRequester::test_dummy(),
has_input_focus: true,
enhanced_keys_supported: false,

Some files were not shown because too many files have changed in this diff.