Compare commits

...

424 Commits

Author SHA1 Message Date
Ahmed Ibrahim
710eb7cf6f Apply clippy fixes 2026-01-26 01:00:26 -08:00
Ahmed Ibrahim
1aea5ff6c1 Add append-only clear context operation 2026-01-26 00:56:15 -08:00
Ahmed Ibrahim
d27f2533a9 Plan prompt (#9877)
2026-01-25 19:50:35 -08:00
Ahmed Ibrahim
0f798173d7 Prompt (#9874)
2026-01-25 18:24:25 -08:00
Ahmed Ibrahim
cb2bbe5cba Adjust modes masks (#9868) 2026-01-25 12:44:17 -08:00
Ahmad Sohail Raoufi
dd2d68e69e chore: remove extra newline in println (#9850)
## Summary

This PR makes a minor formatting adjustment to a `println!` message by
removing an extra empty line and explicitly using `\n` for clarity.

## Changes

- Adjusted console output formatting for the success message.
- No functional or behavioral changes.
2026-01-25 10:44:15 -08:00
jif-oai
8fea8f73d6 chore: halve max number of sub-agents (#9861)
https://openai.slack.com/archives/C095U48JNL9/p1769359138786499?thread_ts=1769190766.962719&cid=C095U48JNL9
2026-01-25 17:51:55 +01:00
jif-oai
73b5274443 feat: cap number of agents (#9855)
Adding more guards to agents:
* Max depth of 1 (i.e. a sub-agent can't spawn another one)
* Max 12 sub-agents in total
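Roughly, the guards reduce to a depth check plus a global counter; a minimal sketch (names here are illustrative, not the actual codex-rs types):

```rust
// Illustrative guard constants and check; not the actual implementation.
const MAX_AGENT_DEPTH: usize = 1;
const MAX_SUB_AGENTS: usize = 12;

fn check_spawn(parent_depth: usize, active_sub_agents: usize) -> Result<(), String> {
    if parent_depth >= MAX_AGENT_DEPTH {
        // A sub-agent (depth 1) may not spawn another sub-agent.
        return Err("sub-agents cannot spawn further sub-agents".to_string());
    }
    if active_sub_agents >= MAX_SUB_AGENTS {
        return Err(format!("sub-agent limit ({MAX_SUB_AGENTS}) reached"));
    }
    Ok(())
}
```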
2026-01-25 14:57:22 +00:00
jif-oai
a748600c42 Revert "Revert "fix: musl build"" (#9847)
Fix for
77222492f9
2026-01-25 08:50:31 -05:00
pakrym-oai
b332482eb1 Mark collab as beta (#9834)
Co-authored-by: jif-oai <jif@openai.com>
2026-01-25 11:13:21 +01:00
Ahmed Ibrahim
58450ba2a1 Use collaboration mode masks without mutating base settings (#9806)
Keep an unmasked base collaboration mode and apply the active mask on
demand. Simplify the TUI mask helpers and update tests/docs to match the
mask contract.
2026-01-25 07:35:31 +00:00
Ahmed Ibrahim
24230c066b Revert "fix: libcc link" (#9841)
Reverts openai/codex#9819
2026-01-25 06:58:56 +00:00
Charley Cunningham
18acec09df Ask for cwd choice when resuming session from different cwd (#9731)
# Summary
- Fix resume/fork config rebuild so cwd changes inside the TUI produce a
fully rebuilt Config (trust/approval/sandbox) instead of mutating only
the cwd.
- Preserve `--add-dir` behavior across resume/fork by normalizing
relative roots to absolute paths once (based on the original cwd).
- Prefer latest `TurnContext.cwd` for resume/fork prompts but fall back
to `SessionMeta.cwd` if the latest cwd no longer exists.
- Align resume/fork selection handling and ensure UI config matches the
resumed thread config.
- Fix Windows test TOML path escaping in trust-level test.

# Details
- Rebuild Config via `ConfigBuilder` when resuming into a different cwd;
carry forward runtime approval/sandbox overrides.
- Add `normalize_harness_overrides_for_cwd` to resolve relative
`additional_writable_roots` against the initial cwd before reuse.
- Guard `read_session_cwd` with filesystem existence check for the
latest `TurnContext.cwd`.
- Update naming/flow around cwd comparison and prompt selection.

<img width="603" height="150" alt="Screenshot 2026-01-23 at 5 42 13 PM"
src="https://github.com/user-attachments/assets/d1897386-bb28-4e8a-98cf-187fdebbecb0"
/>

And proof the model understands the new cwd:

<img width="828" height="353" alt="Screenshot 2026-01-22 at 5 36 45 PM"
src="https://github.com/user-attachments/assets/12aed8ca-dec3-4b64-8dae-c6b8cff78387"
/>
2026-01-24 21:57:19 -08:00
Matthew Zeng
182000999c Raise welcome animation breakpoint to 37 rows (#9778)
### Motivation
- The large ASCII welcome animation can push onboarding content below
the fold on default-height terminals, making the CLI appear
unresponsive; raising the breakpoint prevents that.
- The existing test measured an arbitrary row count rather than
asserting the welcome line position relative to the animation frame,
which made the intent unclear.

### Description
- Increase `MIN_ANIMATION_HEIGHT` from `20` to `37` in
`codex-rs/tui/src/onboarding/welcome.rs` so the animation is skipped
unless there is enough vertical space.
- Replace the brittle measurement logic in the welcome render test with
a `row_containing` helper and assert the welcome row equals the frame
height plus the spacer line (`frame_lines + 1`).
- Add a regression test
`welcome_skips_animation_below_height_breakpoint` that verifies the
animation is not rendered when the viewport height is one row below the
breakpoint.
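The gate itself is just a height comparison; a sketch using the constant named above (the helper name is an assumption):

```rust
// MIN_ANIMATION_HEIGHT per this PR; the helper name is illustrative.
const MIN_ANIMATION_HEIGHT: u16 = 37;

fn show_welcome_animation(viewport_rows: u16) -> bool {
    // Below the breakpoint the animation would push onboarding content
    // below the fold, so skip it and render the plain welcome line.
    viewport_rows >= MIN_ANIMATION_HEIGHT
}
```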

### Testing
- Ran formatting with `~/.cargo/bin/just fmt` which completed
successfully.
- Ran unit tests for the crate with `cargo test -p codex-tui --lib` and
they passed (unit test suite succeeded).
- Ran `cargo test -p codex-tui` which reported a failing integration
test in this environment because the test cannot locate the `codex`
binary, so full crate tests are blocked here (environment limitation).

------
[Codex
Task](https://chatgpt.com/codex/tasks/task_i_6973b0a710d4832c9ff36fac26eb1519)
2026-01-24 21:50:35 -08:00
Ahmed Ibrahim
652f08e98f Revert "fix: musl build" (#9840)
Reverts openai/codex#9820
2026-01-25 04:46:53 +00:00
Charley Cunningham
279c9534a1 Prevent backspace from removing a text element when the cursor is at the element’s left edge (#9630)
**Summary**
- Prevent backspace from removing a text element when the cursor is at
the element’s left edge.
- Instead just delete the char before the placeholder (moving it to the
left).
2026-01-24 10:41:39 -08:00
Max Kong
e2bd9311c9 fix(windows-sandbox): remove request files after read (#9316)
## Summary
- Remove elevated runner request files after read (best-effort cleanup
on errors)
- Add a unit test to cover request file lifecycle

## Testing
- `cargo test -p codex-windows-sandbox` (Windows)

Fixes #9315
2026-01-24 10:23:37 -08:00
jif-oai
2efcdf4062 fix: musl build (#9820) 2026-01-24 16:56:28 +01:00
jif-oai
3651608365 fix: libcc link (#9819) 2026-01-24 16:32:06 +01:00
jif-oai
83775f4df1 feat: ephemeral threads (#9765)
Add ephemeral thread capabilities. Only exposed through the
`app-server` v2.

The idea is to disable the rollout recorder for those threads.
2026-01-24 14:57:40 +00:00
jif-oai
515ac2cd19 feat: add thread spawn source for collab tools (#9769) 2026-01-24 14:21:34 +00:00
Charley Cunningham
eb7558ba85 Remove batman reference from experimental prompt (#9812)
https://www.reddit.com/r/codex/comments/1qldbmg/if_you_enable_experimental_subagents_in_openai/
2026-01-24 14:24:36 +01:00
Eric Traut
713ae22c04 Another round of improvements for config error messages (#9746)
In a [recent PR](https://github.com/openai/codex/pull/9182), I made some
improvements to config error messages so errors didn't leave app server
clients in a dead state. This is a follow-on PR to make these error
messages more readable and actionable for both TUI and GUI users. For
example, see #9668 where the user was understandably confused about the
source of the problem and how to fix it.

The improved error message:
1. Clearly identifies the config file where the error was found (which
is more important now that we support layered configs)
2. Provides a line and column number of the error
3. Displays the line where the error occurred and underlines it

For example, if my `config.toml` includes the following:
```toml
[features]
collaboration_modes = "true"
```

Here's the current CLI error message:
```
Error loading config.toml: invalid type: string "true", expected a boolean in `features`
```

And here's the improved message:
```
Error loading config.toml:
/Users/etraut/.codex/config.toml:43:23: invalid type: string "true", expected a boolean
   |
43 | collaboration_modes = "true"
   |                       ^^^^^^
```

The bulk of the new logic is contained within a new module
`config_loader/diagnostics.rs` that is responsible for calculating the
text range for a given toml path (which is more involved than I would
have expected).

In addition, this PR adds the file name and text range to the
`ConfigWarningNotification` app server struct. This allows GUI clients
to present the user with a better error message and an optional link to
open the errant config file. This was a suggestion from @.bolinfest when
he reviewed my previous PR.
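For illustration, turning a byte offset into the line/column header plus underline shown above might look like this (a standalone sketch; the real `config_loader/diagnostics.rs` first resolves the span from a TOML path):

```rust
// Sketch: render a "path:line:col" header, the offending line, and an
// underline, given a byte offset + length into the config source.
fn render_diagnostic(path: &str, source: &str, offset: usize, len: usize, msg: &str) -> String {
    let prefix = &source[..offset];
    let line_no = prefix.matches('\n').count() + 1;
    let line_start = prefix.rfind('\n').map_or(0, |i| i + 1);
    let col0 = offset - line_start; // 0-based column of the error
    let line = source[line_start..].lines().next().unwrap_or("");
    let gutter = " ".repeat(line_no.to_string().len());
    let underline = " ".repeat(col0) + &"^".repeat(len.max(1));
    format!(
        "{path}:{line_no}:{col}: {msg}\n{gutter} |\n{line_no} | {line}\n{gutter} | {underline}",
        col = col0 + 1,
    )
}
```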
2026-01-23 20:11:09 -08:00
Ahmed Ibrahim
b3127e2eeb Have a coding mode and only show coding and plan (#9802) 2026-01-23 19:28:49 -08:00
viyatb-oai
77222492f9 feat: introducing a network sandbox proxy (#8442)
This add a new crate, `codex-network-proxy`, a local network proxy
service used by Codex to enforce fine-grained network policy (domain
allow/deny) and to surface blocked network events for interactive
approvals.

- New crate: `codex-rs/network-proxy/` (`codex-network-proxy` binary +
library)
- Core capabilities:
  - HTTP proxy support (including CONNECT tunneling)
  - SOCKS5 proxy support (in the later PR)
- policy evaluation (allowed/denied domain lists; denylist wins;
wildcard support)
  - small admin API for polling/reload/mode changes
- optional MITM support for HTTPS CONNECT to enforce “limited mode”
method restrictions (later PR)

Integration with codex will follow in subsequent PRs.
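As a sketch of the policy evaluation described above (shapes and names are assumptions, not the crate's API):

```rust
// Denylist wins over allowlist; "*.example.com" matches subdomains only.
struct DomainPolicy {
    allowed: Vec<String>,
    denied: Vec<String>,
}

fn pattern_matches(pattern: &str, host: &str) -> bool {
    match pattern.strip_prefix("*.") {
        Some(suffix) => host
            .strip_suffix(suffix)
            .is_some_and(|rest| rest.ends_with('.')),
        None => pattern == host, // hosts assumed pre-lowercased
    }
}

impl DomainPolicy {
    fn allows(&self, host: &str) -> bool {
        if self.denied.iter().any(|p| pattern_matches(p, host)) {
            return false; // an explicit deny always wins
        }
        self.allowed.iter().any(|p| pattern_matches(p, host))
    }
}
```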

## Testing

- `cd codex-rs && cargo build -p codex-network-proxy`
- `cd codex-rs && cargo run -p codex-network-proxy -- proxy`
2026-01-23 17:47:09 -08:00
Ahmed Ibrahim
69cfc73dc6 change collaboration mode to struct (#9793)
Shouldn't cause a behavioral change
2026-01-23 17:00:23 -08:00
Ahmed Ibrahim
1167465bf6 Chore: remove mode from header (#9792) 2026-01-23 22:38:17 +00:00
iceweasel-oai
d9232403aa bundle sandbox helper binaries in main zip, for winget. (#9707)
Winget uses the main codex.exe value as its target.
The elevated sandbox requires these two binaries to live next to
codex.exe
2026-01-23 14:36:42 -08:00
gt-oai
b9deb57689 Load untrusted rules (#9791) 2026-01-23 21:52:27 +00:00
gt-oai
c6ded0afd8 still load skills (#9700) 2026-01-23 20:35:50 +00:00
jcoens-openai
e04851816d Remove stale TODO comment from defs.bzl (#9787)
### Motivation
- Remove an outdated comment in `defs.bzl` referencing
`cargo_build_script` that is no longer relevant.

### Description
- Delete the stale `# TODO(zbarsky): cargo_build_script support?` line
so the logic flows directly from `binaries` to `lib_srcs` in `defs.bzl`.

### Testing
- Ran `git diff --check` which produced no errors.

------
[Codex
Task](https://chatgpt.com/codex/tasks/task_i_6973d9ac757c8331be475a8fb0f90a88)
2026-01-23 20:30:01 +00:00
JUAN DAVID SALAS CAMARGO
e0ae219f36 Fix resume picker when user event appears after head (#9512)
Fixes #9501

Contributing guide:
https://github.com/openai/codex/blob/main/docs/contributing.md

## Summary
The resume picker requires a session_meta line and at least one
user_message event within the initial head scan. Some rollout files
contain multiple session_meta entries before the first user_message, so
the user event can fall outside the default head window and the session
is omitted from the picker even though it is resumable by ID.

This PR keeps the head summary bounded but extends scanning for a
user_message once a session_meta has been observed. The summary still
caps stored head entries, but we allow a small, bounded extra scan to
find the first user event so valid sessions are not filtered out.

## Changes
- Continue scanning past the head limit (bounded) when session_meta is
present but no user_message has been seen yet.
- Mark session_meta as seen even if the head summary buffer is already
full.
- Add a regression test with multiple session_meta lines before the
first user_message.

## Why This Is Safe
- The head summary remains bounded to avoid unbounded memory usage.
- The extra scan is capped (USER_EVENT_SCAN_LIMIT) and only triggers
after a session_meta is seen.
- Behavior is unchanged for typical files where the user_message appears
early.
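A simplified sketch of the scan described above (the `USER_EVENT_SCAN_LIMIT` name comes from the PR; the head-window size and line shapes are assumptions):

```rust
const HEAD_LIMIT: usize = 10; // assumed size of the head summary window
const USER_EVENT_SCAN_LIMIT: usize = 100; // bounded extra scan from the PR

fn is_resumable(head_lines: &[&str]) -> bool {
    let mut saw_session_meta = false;
    for (idx, line) in head_lines.iter().enumerate() {
        if idx >= USER_EVENT_SCAN_LIMIT {
            break; // hard cap, even after session_meta was seen
        }
        if idx >= HEAD_LIMIT && !saw_session_meta {
            break; // ordinary head window exhausted without session_meta
        }
        if line.contains("\"type\":\"session_meta\"") {
            saw_session_meta = true; // counted even if the summary buffer is full
        } else if saw_session_meta && line.contains("\"type\":\"user_message\"") {
            return true;
        }
    }
    false
}
```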

## Testing
- cargo test -p codex-core --lib
test_list_threads_scans_past_head_for_user_event
2026-01-23 12:21:27 -08:00
Ahmed Ibrahim
45fe58159e Select default model from filtered presets (#9782)
Pick the first available preset after auth filtering for default
selection.
2026-01-23 12:18:36 -08:00
gt-oai
7938c170d9 Print warning if we skip config loading (#9611)
https://github.com/openai/codex/pull/9533 silently ignored config if
untrusted. Instead, we still load it but disable it. Maybe we shouldn't
try to parse it either...

<img width="939" height="515" alt="Screenshot 2026-01-21 at 14 56 38"
src="https://github.com/user-attachments/assets/e753cc22-dd99-4242-8ffe-7589e85bef66"
/>
2026-01-23 20:06:37 +00:00
Salman Chishti
eca365cf8c Upgrade GitHub Actions for Node 24 compatibility (#9722)
## Summary

Upgrade GitHub Actions to their latest versions to ensure compatibility
with Node 24, as Node 20 will reach end-of-life in April 2026.

## Changes

| Action | Old Version(s) | New Version | Release | Files |
|--------|---------------|-------------|---------|-------|
| `actions/cache` |
[`v4`](https://github.com/actions/cache/releases/tag/v4) |
[`v5`](https://github.com/actions/cache/releases/tag/v5) |
[Release](https://github.com/actions/cache/releases/tag/v5) | bazel.yml
|

## Context

Per [GitHub's
announcement](https://github.blog/changelog/2025-09-19-deprecation-of-node-20-on-github-actions-runners/),
Node 20 is being deprecated and runners will begin using Node 24 by
default starting March 4th, 2026.

### Why this matters

- **Node 20 EOL**: April 2026
- **Node 24 default**: March 4th, 2026
- **Action**: Update to latest action versions that support Node 24

### Security Note

Actions that were previously pinned to commit SHAs remain pinned to SHAs
(updated to the latest release SHA) to maintain the security benefits of
immutable references.

### Testing

These changes only affect CI/CD workflow configurations and should not
impact application functionality. The workflows should be tested by
running them on a branch before merging.

Signed-off-by: Salman Muin Kayser Chishti <13schishti@gmail.com>
2026-01-23 12:06:04 -08:00
zerone0x
ae7d3e1b49 fix(exec): skip git repo check when --yolo flag is used (#9590)
## Summary

Fixes #7522

The `--yolo` (`--dangerously-bypass-approvals-and-sandbox`) flag is
documented to skip all confirmation prompts and execute commands without
sandboxing, intended solely for running in environments that are
externally sandboxed. However, it was not bypassing the trusted
directory (git repo) check, requiring users to also specify
`--skip-git-repo-check`.

This change makes `--yolo` also skip the git repo check, matching the
documented behavior and user expectations.

## Changes

- Modified `codex-rs/exec/src/lib.rs` to check for
`dangerously_bypass_approvals_and_sandbox` flag in addition to
`skip_git_repo_check` when determining whether to skip the git repo
check
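The resulting check is essentially one boolean OR; a sketch (flag names are from the CLI, the struct shape is an assumption):

```rust
// Minimal sketch of the modified check in codex-rs/exec/src/lib.rs.
struct ExecCli {
    skip_git_repo_check: bool,
    dangerously_bypass_approvals_and_sandbox: bool,
}

fn should_skip_git_repo_check(cli: &ExecCli) -> bool {
    // --yolo now implies the git repo check is skipped, matching the docs.
    cli.skip_git_repo_check || cli.dangerously_bypass_approvals_and_sandbox
}
```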

## Testing

- Verified the code compiles with `cargo check -p codex-exec`
- Ran existing tests with `cargo test -p codex-exec` (34 passed, 8
integration tests failed due to unrelated API connectivity issues)

---
🤖 Generated with [Claude Code](https://claude.ai/code)

Co-authored-by: Claude <noreply@anthropic.com>
2026-01-23 12:05:20 -08:00
Ahmed Ibrahim
f353d3d695 prompt (#9777)
2026-01-23 19:24:48 +00:00
charley-oai
935d88b455 Persist text element ranges and attached images across history/resume (#9116)
**Summary**
- Backtrack selection now rehydrates `text_elements` and
`local_image_paths` from the chosen user history cell so Esc‑Esc history
edits preserve image placeholders and attachments.
- Composer prefill uses the preserved elements/attachments in both `tui`
and `tui2`.
- Extended backtrack selection tests to cover image placeholder elements
and local image paths.

**Changes**
- `tui/src/app_backtrack.rs`: Backtrack selection now carries text
elements + local image paths; composer prefill uses them (removes TODO).
- `tui2/src/app_backtrack.rs`: Same as above.
- `tui/src/app.rs`: Updated backtrack test to assert restored
elements/paths.
- `tui2/src/app.rs`: Same test updates.

### The original scope of this PR (threading text elements and image
attachments through the codex harness thoroughly/persistently) was
broken into the following PRs other than this one:

The diff of this PR was reduced by changing types in a starter PR:
https://github.com/openai/codex/pull/9235

Then text element metadata was added to protocol, app server, and core
in this PR: https://github.com/openai/codex/pull/9331

Then the end-to-end flow was completed by wiring TUI/TUI2 input,
history, and restore behavior in
https://github.com/openai/codex/pull/9393

Prompt expansion was supported in this PR:
https://github.com/openai/codex/pull/9518

TextElement optional placeholder field was protected in
https://github.com/openai/codex/pull/9545
2026-01-23 10:18:19 -08:00
jif-oai
f30f39b28b feat: tui beta for collab (#9690)
https://github.com/user-attachments/assets/1ca07e7a-3d82-40da-a5b0-8ab2eef0bb69
2026-01-23 13:57:59 +01:00
jif-oai
afa08570f2 nit: exclude PWD for rc sourcing (#9753) 2026-01-23 13:35:48 +01:00
Michael Bolin
86a1e41f2e chore: use some raw strings to reduce quoting (#9745)
Small follow-ups for https://github.com/openai/codex/pull/9565. Mainly
`r#`, but also added some whitespace for early returns.
2026-01-22 22:38:10 -08:00
JUAN DAVID SALAS CAMARGO
f815fa14ea Fix execpolicy parsing for multiline quoted args (#9565)
## What
Fix bash command parsing to accept double-quoted strings that contain
literal newlines so execpolicy can match allow rules.

## Why
Allow rules like [git, commit] should still match when commit messages
include a newline in a quoted argument; the parser currently rejects
these strings and falls back to the outer shell invocation.

## How
- Validate double-quoted strings by ensuring all named children are
string_content and then stripping the outer quotes from the raw node
text so embedded newlines are preserved.
- Reuse the helper for concatenated arguments.
- Ensure large SI suffix formatting uses the caller-provided locale
formatter for grouping.
- Add coverage for newline-containing quoted arguments.
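A standalone sketch of the quoted-string handling (the real code validates tree-sitter nodes, checking every named child is `string_content`, then strips the outer quotes from the raw node text):

```rust
// Accept a double-quoted bash string that may span multiple lines and
// return its literal contents; reject anything with expansions.
fn double_quoted_literal(raw: &str) -> Option<String> {
    let inner = raw.strip_prefix('"')?.strip_suffix('"')?;
    if inner.contains('$') || inner.contains('`') || inner.contains('\\') {
        return None; // not purely literal content
    }
    // Embedded newlines survive, so a commit message spanning two lines
    // still parses as a single argument of [git, commit, -m, <msg>].
    Some(inner.to_string())
}
```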

Fixes #9541.

## Tests
- cargo test -p codex-core
- just fix -p codex-core
- cargo test -p codex-protocol
- just fix -p codex-protocol
- cargo test --all-features
2026-01-22 22:16:53 -08:00
alexsong-oai
0fa45fbca4 feat: add session source as otel metadata tag (#9720)
Add session.source and user.account_id as global OTEL metric tags to
identify client surface and user.
2026-01-22 18:46:14 -08:00
charley-oai
02fced28a4 Hide mode cycle hint while a task is running (#9730)
## Summary
- hide the “(shift+tab to cycle)” suffix on the collaboration mode label
while a task is running
- keep the cycle hint visible when idle
- add a snapshot to cover the running-task label state
2026-01-22 18:32:06 -08:00
Ahmed Ibrahim
d86bd20411 Change the prompt for planning and reasoning effort (#9733)
Change the prompt for planning and reasoning effort preset for better
experience
2026-01-22 18:22:12 -08:00
Dylan Hurd
2b1ee24e11 feat(app-server) Expose personality (#9674)
### Motivation
Exposes a per-thread / per-turn `personality` override in the v2
app-server API so clients can influence model communication style at
thread/turn start. Ensures the override is passed into the session
configuration resolution so it becomes effective for subsequent turns
and headless runners.

### Testing
- [x] Add an integration-style test
`turn_start_accepts_personality_override_v2` in
`codex-rs/app-server/tests/suite/v2/turn_start.rs` that verifies a
`/personality` override results in a developer update message containing
`<personality_spec>` in the outbound model request.

------
[Codex
Task](https://chatgpt.com/codex/tasks/task_i_6971d646b1c08322a689a54d2649f3fe)
2026-01-22 18:00:20 -08:00
Matthew Zeng
a2c829a808 [connectors] Support connectors part 1 - App server & MCP (#9667)
In order to make Codex work with connectors, we add a built-in gateway
MCP that acts as a transparent proxy between the client and the
connectors. The gateway MCP collects actions that are accessible to the
user and sends them down to the user, when a connector action is chosen
to be called, the client invokes the action through the gateway MCP as
well.

 - [x] Add the system built-in gateway MCP to list and run connectors.
 - [x] Add the app server methods and protocol
2026-01-22 16:48:43 -08:00
github-actions[bot]
d9e041e0a6 Update models.json (#9726)
Automated update of models.json.

---------

Co-authored-by: aibrahim-oai <219906144+aibrahim-oai@users.noreply.github.com>
Co-authored-by: Ahmed Ibrahim <aibrahim@openai.com>
2026-01-23 00:44:47 +00:00
iceweasel-oai
0e4adcd760 use machine scope instead of user scope for dpapi. (#9713)
This fixes a bug where the elevated sandbox setup encrypts sandbox user
passwords as an admin user, but normal command execution attempts to
decrypt them as a different user.

Machine scope allows all users to encrypt/decrypt.

This PR also moves the encrypted file to a different location,
`.codex/.sandbox-secrets`, which the sandbox users cannot read.
2026-01-22 16:40:13 -08:00
charley-oai
0e79d239ed TUI: prompt to implement plan and switch to Execute (#9712)
## Summary
- Replace the plan‑implementation prompt with a standard selection
popup.
- “Yes” submits a user turn in Execute via a dedicated app event to
preserve normal transcript behavior.
- “No” simply dismisses the popup.

<img width="977" height="433" alt="Screenshot 2026-01-22 at 2 00 54 PM"
src="https://github.com/user-attachments/assets/91fad06f-7b7a-4cd8-9051-f28a19b750b2"
/>

## Changes
- Add a plan‑implementation popup using `SelectionViewParams`.
- Add `SubmitUserMessageWithMode` so “Yes” routes through
`submit_user_message` (ensures user history + separator state).
- Track `saw_plan_update_this_turn` so the prompt appears even when only
`update_plan` is emitted.
- Suppress the plan popup on replayed turns, when messages are queued,
or when a rate‑limit prompt is pending.
- Add `execute_mode` helper for collaboration modes.
- Add tests for replay/queued/rate‑limit guards and plan update without
final message.
- Add snapshots for both the default and “No”‑selected popup states.
2026-01-23 00:25:50 +00:00
Anton Panasenko
e117a3ff33 feat: support proxy for ws connection (#9719)
reapply websocket changes without changing tls lib.
2026-01-22 15:23:15 -08:00
iudi
afd63e8bae Fix typo in experimental_prompt.md (#9716)
Simple typo fix in the first sentence of the experimental_prompt.md
instructions file.
2026-01-22 14:07:14 -08:00
Michael Bolin
5d963ee5d9 feat: fix formatting of codex features list (#9715)
The formatting of `codex features list` made it hard to follow. This PR
introduces column width math to make things nice.

Maybe slightly hard to machine-parse (since not a simple `\t`), but we
should introduce a `--json` option if that's really important.

You can see the before/after in the screenshot:

<img width="1119" height="932" alt="image"
src="https://github.com/user-attachments/assets/c99dce85-899a-4a2d-b4af-003938f5e1df"
/>
2026-01-22 13:02:41 -08:00
Owen Lin
733cb68496 feat(app-server): support archived threads in thread/list (#9571) 2026-01-22 12:22:36 -08:00
Owen Lin
80240b3b67 feat(app-server): thread/read API (#9569) 2026-01-22 12:22:01 -08:00
Dylan Hurd
8b3521ee77 feat(core) update Personality on turn (#9644)
## Summary
Support updating Personality mid-Thread via UserTurn/OverwriteTurn. This
is explicitly unused by the clients so far, to simplify PRs - app-server
and tui implementations will be follow-ups.

## Testing
- [x] added integration tests
2026-01-22 12:04:23 -08:00
charley-oai
4210fb9e6c Modes label below textarea (#9645)
# Summary
- Add a collaboration mode indicator rendered at the bottom-right of the
TUI composer footer.
- Style modes per design (Plan in #D72EE1, Execute matching dim context
style, Pair Programming using the same cyan as text elements).
- Add shared “(shift+tab to cycle)” hint text for all mode labels and
align the indicator with the left footer margin.

NOTE: currently this is hidden if the Collaboration Modes feature flag
is disabled, or in Custom mode. Maybe we should show it in Custom mode
too? I'll leave that out of this PR though

# UI
- Mode indicator appears below the textarea, bottom-right of the footer
line.
- Includes “(shift+tab to cycle)” and keeps right padding aligned to the
left footer indent.

<img width="983" height="200" alt="Screenshot 2026-01-21 at 7 17 54 PM"
src="https://github.com/user-attachments/assets/d1c5e4ed-7d7b-4f6c-9e71-bc3cf6400e0e"
/>

<img width="980" height="200" alt="Screenshot 2026-01-21 at 7 18 53 PM"
src="https://github.com/user-attachments/assets/d22ff0da-a406-4930-85c5-affb2234e84b"
/>

<img width="979" height="201" alt="Screenshot 2026-01-21 at 7 19 12 PM"
src="https://github.com/user-attachments/assets/862cb17f-0495-46fa-9b01-a4a9f29b52d5"
/>
2026-01-22 17:31:11 +00:00
pakrym-oai
b511c38ddb Support end_turn flag (#9698)
Experimental flag that signals the end of the turn.
2026-01-22 17:27:48 +00:00
pakrym-oai
4d48d4e0c2 Revert "feat: support proxy for ws connection" (#9693)
Reverts openai/codex#9409
2026-01-22 15:57:18 +00:00
Shijie Rao
a4cb97ba5a Chore: add cmd related info to exec approval request (#9659)
### Summary
We now rely purely on the `item/commandExecution/requestApproval` item to
render pending approvals in VSCE and the app. With the v2 approach, the
request does not include the actual cmd being attempted, so we can only
render from `proposedExecpolicyAmendment`, which can be incomplete.

### Reproduce
* Add `prefix_rule(pattern=["echo"], decision="prompt")` to your
`~/.codex/rules.default.rules`.
* Ask to `Run echo "approval-test" please` in VSCE or the app.
* The pending approval portal does show up, but with no content.

#### Example screenshot
<img width="3434" height="3648" alt="Screenshot 2026-01-21 at 8 23
25 PM"
src="https://github.com/user-attachments/assets/75644837-21f1-40f8-8b02-858d361ff817"
/>

#### Sample output
```
  {"method":"item/commandExecution/requestApproval","id":0,"params":{
    "threadId":"019be439-5a90-7600-a7ea-2d2dcc50302a",
    "turnId":"0",
    "itemId":"call_usgnQ4qEX5U9roNdjT7fPzhb",
    "reason":"`/bin/zsh -lc 'echo \"testing\"'` requires approval by policy",
    "proposedExecpolicyAmendment":null
  }}

```

### Fix
Include the `command` string, `cwd`, and `command_actions` in
`CommandExecutionRequestApprovalParams` so that consumers can display
the correct command instead of relying on exec policy output.
2026-01-21 23:58:53 -08:00
Kbediako
079fd2adb9 Fix: Lower log level for closed-channel send (#9653)
## What?
- Downgrade the closed-channel send error log to debug in
`codex-rs/core/src/codex.rs`.

## Why?
- `async_channel::Sender::send` only fails when the channel is closed,
so the current error-level log is noisy during normal shutdown. See
issue #9652.

## How?
- Replace the error log with a debug log on send failure.
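The change is effectively one log-level swap; a sketch (the helper name is illustrative):

```rust
// async_channel's send only fails when the channel is closed, which is
// routine during shutdown, so log at debug rather than error.
async fn send_event<T>(tx: &async_channel::Sender<T>, event: T) {
    if let Err(err) = tx.send(event).await {
        tracing::debug!("dropping event, channel closed: {err}");
    }
}
```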

## Tests
- `just fmt`
- `just fix -p codex-core`
- `cargo test -p codex-core`
2026-01-21 22:09:58 -08:00
Dylan Hurd
038b78c915 feat(tui) /permissions flow (#9561)
## Summary
Adds the `/permissions` command, with a (usually) shorter set of
permissions. `/approvals` still exists, for backwards compatibility.

<img width="863" height="309" alt="Screenshot 2026-01-20 at 4 12 51 PM"
src="https://github.com/user-attachments/assets/c49b5ba5-bc47-46dd-9067-e1a5670328fe"
/>


## Testing
- [x] updated unit tests
- [x] Tested locally
2026-01-21 21:38:46 -08:00
pakrym-oai
836f0343a3 Add tui.experimental_mode setting (#9656)
To simplify testing
2026-01-22 05:27:57 +00:00
Dylan Hurd
e520592bcf chore: tweak AGENTS.md (#9650)
## Summary
Update AGENTS.md to improve testing flow

## Testing
- [x] Tested locally, much faster
2026-01-21 20:20:45 -08:00
xl-openai
577ba3a4ca Add UI for skill enable/disable. (#9627)
"/skill" will now allow you to enable/disable skills:
<img width="658" height="199" alt="image"
src="https://github.com/user-attachments/assets/bf8994c8-d6c1-462f-8bbb-f1ee9241caa4"
/>
2026-01-21 18:21:12 -08:00
Dylan Hurd
96a72828be feat(core) ModelInfo.model_instructions_template (#9597)
## Summary
#9555 is the start of a rename, so I'm starting to standardize here.
Sets up `model_instructions` templating with a strongly-typed object for
injecting a personality block into the model instructions.

## Testing
- [x] Added tests
- [x] Ran locally
2026-01-21 18:11:18 -08:00
Josh McKinney
a489b64cb5 feat(tui): retire the tui2 experiment (#9640)
## Summary
- Retire the experimental TUI2 implementation and its feature flag.
- Remove TUI2-only config/schema/docs so the CLI stays on the
terminal-native path.
- Keep docs aligned with the legacy TUI while we focus on redraw-based
improvements.

## Customer impact
- Retires the TUI2 experiment and keeps Codex on the proven
terminal-native UI while we invest in redraw-based improvements to the
existing experience.

## Migration / compatibility
- If you previously set tui2-related options in config.toml, they are
now ignored and Codex continues using the existing terminal-native TUI
(no action required).

## Context
- What worked: a transcript-owned viewport delivered excellent resize
rewrap and high-fidelity copy (especially for code).
- Why stop: making that experience feel fully native across the
environment matrix (terminal emulator, OS, input modality, multiplexer,
font/theme, alt-screen behavior) creates a combinatorial explosion of
edge cases.
- What next: we are focusing on redraw-based improvements to the
existing terminal-native TUI so scrolling, selection, and copy remain
native while resize/redraw correctness improves.

## Testing
- just write-config-schema
- just fmt
- cargo clippy --fix --all-features --tests --allow-dirty --allow-no-vcs
-p codex-core
- cargo clippy --fix --all-features --tests --allow-dirty --allow-no-vcs
-p codex-cli
- cargo check
- cargo test -p codex-core
- cargo test -p codex-cli
2026-01-22 01:02:29 +00:00
charley-oai
41e38856f6 Reduce burst testing flake (#9549)
## Summary

- make paste-burst tests deterministic by injecting explicit timestamps
instead of relying on wall clock timing
- add time-aware helpers for input/submission paths so tests can drive
the burst heuristic precisely
- update burst-related tests to flush using computed timeouts while
preserving behavior assertions
- increase timeout slack in
shell_tools_start_before_response_completed_when_stream_delayed to
reduce flakiness
2026-01-21 16:42:31 -08:00
sayan-oai
c285b88980 feat: publish config schema on release (#9572)
Follow up to #8956; publish schema on new release to stable URL.

Also canonicalize schema (sort keys) when writing. This avoids reliance
on default `schema_rs` behavior and makes the schema easier to read.
2026-01-21 16:24:14 -08:00
Dylan Hurd
f1240ff4fe fix(tui) turn timing incremental (#9599)
## Summary
When we send multiple assistant messages, reset the timer so "Worked for
2m 36s" is the time since the last time we showed the message, rather
than an ever-increasing number.

We could instead change the copy so it's more clearly a running counter.

## Testing
- [x] ran locally

<img width="903" height="732" alt="Screenshot 2026-01-21 at 1 42 51 AM"
src="https://github.com/user-attachments/assets/bb4d827b-3a0e-48ba-bd6a-d8cd65d8e892"
/>
2026-01-21 15:59:56 -08:00
jif-oai
5dad1b956e feat: better sorting of shell commands (#9629)
This PR changes the way we sort slash commands, in this order:
1. Exact match
2. Prefix
3. Fuzzy

As a result, when you type `/ps` the default command is no longer `/approvals`.
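A rough sketch of the ranking (illustrative; the real matcher lives in the TUI completion code):

```rust
// Exact match first, then prefix matches, then fuzzy matches.
#[derive(PartialEq, Eq, PartialOrd, Ord)]
enum MatchRank {
    Exact,
    Prefix,
    Fuzzy,
}

fn rank(query: &str, command: &str) -> Option<MatchRank> {
    if command == query {
        Some(MatchRank::Exact)
    } else if command.starts_with(query) {
        Some(MatchRank::Prefix)
    } else if is_fuzzy_match(query, command) {
        Some(MatchRank::Fuzzy)
    } else {
        None
    }
}

// Minimal subsequence check standing in for the real fuzzy matcher.
fn is_fuzzy_match(query: &str, command: &str) -> bool {
    let mut chars = command.chars();
    query.chars().all(|q| chars.any(|c| c == q))
}
```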
2026-01-21 23:03:01 +00:00
Eric Traut
2ca9a56528 Add layered config.toml support to app server (#9510)
This PR adds support for chained (layered) config.toml file merging for
clients that use the app server interface. This feature already exists
for the TUI, but it does not work for GUI clients.

It does the following:
* Changes code paths for new thread, resume thread, and fork thread to
use the effective config based on the cwd.
* Updates the `config/read` API to accept an optional `cwd` parameter.
If specified, the API returns the effective config based on that cwd
path. Also optionally includes all layers including project config
files. If cwd is not specified, the API falls back on its older behavior
where it considers only the global (non-project) config files when
computing the effective config.

The changes in codex_message_processor.rs look deceptively large. They
mostly just involve moving existing blocks of code to a later point in
some functions so it can use the cwd to calculate the config.

This PR builds upon #9509 and should be reviewed and merged after that
PR.

Tested:
* Verified change with (dependent, as-yet-uncommitted) changes to IDE
Extension and confirmed correct behavior

The full fix requires additional changes in the IDE Extension code base,
but they depend on this PR.
2026-01-21 14:21:48 -08:00
charley-oai
fe641f759f Add collaboration_mode to TurnContextItem (#9583)
## Summary
- add optional `collaboration_mode` to `TurnContextItem` in rollouts
- persist the current collaboration mode when recording turn context
(sampling + compaction)

## Rationale
We already persist turn context data for resume logic. Capturing
collaboration mode in the rollout gives us the mode context for each
turn, enabling follow‑up work to diff mode instructions correctly on
resume.

## Changes
- protocol: add optional `collaboration_mode` field to `TurnContextItem`
- core: persist collaboration mode alongside other turn context settings
in rollouts
2026-01-21 14:14:21 -08:00
Shijie Rao
3fcb40245e Chore: update plan mode output in prompt (#9592)
### Summary
* Update plan prompt output
* Update requestUserInput response to be a single key value pair
`answer: String`.
2026-01-21 14:12:18 -08:00
pakrym-oai
f2e1ad59bc Add websockets logging (#9633)
To help with debugging.
2026-01-21 21:35:38 +00:00
iceweasel-oai
7a9c9b8636 forgot to add some windows sandbox nux events. (#9624) 2026-01-21 13:24:09 -08:00
zbarsky-openai
ab8415dcf5 [bazel] Upgrade llvm toolchain and enable remote repo cache (#9616)
On bazel9 this lets us avoid performing some external repo downloads if
they've been previously uploaded to remote cache, downloads are deferred
until they are actually needed to execute an uncached action
2026-01-21 12:52:39 -08:00
Gav Verma
2e06d61339 Update skills/list protocol readme (#9623)
Updates readme example for `skills/list` to reflect latest response
spec.
2026-01-21 12:51:51 -08:00
Tien Nguyen
68b8381723 docs: fix outdated MCP subcommands documentation (#9622) 2026-01-21 11:17:37 -08:00
iceweasel-oai
f81dd128a2 define/emit some metrics for windows sandbox setup (#9573)
This should give us visibility into how users are using the elevated
sandbox nux flow, and the timing of the elevated setup.
2026-01-21 11:07:26 -08:00
Tiffany Citra
8179312ff5 fix: Fix tilde expansion to avoid absolute-path escape (#9621)
### Motivation
- Prevent inputs like `~//` or `~///etc` from expanding to arbitrary
absolute paths (e.g. `/`) because `Path::join` discards the left side
when the right side is absolute, which could allow config values to
escape `HOME` and broaden writable roots.

### Description
- In `codex-rs/utils/absolute-path/src/lib.rs` update
`maybe_expand_home_directory` to trim leading separators from the suffix
and return `home` when the remainder is empty so tilde expansion stays
rooted under `HOME`.
- Add a non-Windows unit test
`home_directory_double_slash_on_non_windows_is_expanded_in_deserialization`
that validates `"~//code"` expands to `home.join("code")`.
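A simplified sketch of the fixed expansion, ignoring `~user` forms (the real helper is `maybe_expand_home_directory`; this standalone shape is an assumption):

```rust
use std::path::{Path, PathBuf};

fn expand_home(input: &str, home: &Path) -> Option<PathBuf> {
    let rest = input.strip_prefix('~')?;
    // Trim leading separators so Path::join cannot discard `home`
    // when handed an absolute right-hand side (the "~//etc" escape).
    let trimmed = rest.trim_start_matches(|c| c == '/' || c == '\\');
    if trimmed.is_empty() {
        Some(home.to_path_buf()) // "~", "~/", "~//" all resolve to HOME
    } else {
        Some(home.join(trimmed))
    }
}
```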

### Testing
- Ran `just fmt` successfully.
- Ran `just fix -p codex-utils-absolute-path` (Clippy autofix)
successfully.
- Ran `cargo test -p codex-utils-absolute-path` and all tests passed.

------
[Codex
Task](https://chatgpt.com/codex/tasks/task_i_697007481cac832dbeb1ee144d1e4cbe)
2026-01-21 10:43:10 -08:00
jif-oai
3355adad1d chore: defensive shell snapshot (#9609)
This PR adds 2 defensive mechanisms for shell snapshotting:
* Filter out invalid env variables (containing `-` for example) without
dropping the whole snapshot
* Validate the snapshot before considering it as valid by running a mock
command with a shell snapshot
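A sketch of the env-var filter (illustrative; per the PR, validation of the whole snapshot additionally runs a mock command against it):

```rust
// POSIX-style names: [A-Za-z_][A-Za-z0-9_]*; names containing `-`
// (e.g. from some rc files) are dropped instead of failing the snapshot.
fn is_valid_env_name(name: &str) -> bool {
    let mut chars = name.chars();
    matches!(chars.next(), Some(c) if c == '_' || c.is_ascii_alphabetic())
        && chars.all(|c| c == '_' || c.is_ascii_alphanumeric())
}

fn filter_snapshot(vars: Vec<(String, String)>) -> Vec<(String, String)> {
    vars.into_iter()
        .filter(|(name, _)| is_valid_env_name(name))
        .collect()
}
```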
2026-01-21 18:41:58 +00:00
jif-oai
338f2d634b nit: ui on interruption (#9606) 2026-01-21 14:09:15 +00:00
zbarsky-openai
2338f99f58 [bazel] Upgrade to bazel9 (#9576) 2026-01-21 13:25:36 +00:00
jif-oai
f1b6a43907 nit: better collab tui (#9551)
<img width="478" height="304" alt="Screenshot 2026-01-21 at 11 53 50"
src="https://github.com/user-attachments/assets/e2ef70de-2fff-44e0-a574-059177966ed2"
/>
2026-01-21 11:53:58 +00:00
jif-oai
13358fa131 fix: nit tui on terminal interactions (#9602) 2026-01-21 11:30:34 +00:00
jif-oai
b75024c465 feat: async shell snapshot (#9600) 2026-01-21 10:41:13 +00:00
Eric Traut
16b9380e99 Added "codex." prefix to "conversation.turn.count" metric name (#9594)
All other metric names start with "codex.", so I presume this was an
unintended omission.
2026-01-21 10:00:47 +00:00
jif-oai
a22a61e678 feat: display raw command on user shell (#9598) 2026-01-21 09:44:38 +00:00
jif-oai
f1c961d5f7 feat: max threads config (#9483)
2026-01-21 09:39:11 +00:00
Ahmed Ibrahim
6e9a31def1 fix going up and down on questions after writing notes (#9596) 2026-01-21 09:37:37 +00:00
Ahmed Ibrahim
5f55ed666b Add request-user-input overlay (#9585)
- Add request-user-input overlay and routing in the TUI
2026-01-21 00:19:35 -08:00
Ahmed Ibrahim
ebc88f29f8 don't ask for approval for just fix (#9586)
It blocks all my skills from executing because it asks to run `just fmt`.
It's a quick command that doesn't need approval.


<img width="967" height="120" alt="image"
src="https://github.com/user-attachments/assets/f8e6ca76-a650-49e9-beb2-ce98ba48d310"
/>
2026-01-21 04:56:11 +00:00
Ahmed Ibrahim
465da00d02 fix CI by running pnpm (#9587) 2026-01-20 20:54:15 -08:00
pakrym-oai
527b7b4c02 Feature to auto-enable websockets transport (#9578) 2026-01-20 20:32:06 -08:00
alexsong-oai
fabc2bcc32 feat: add skill injected counter metric (#9575) 2026-01-20 19:05:37 -08:00
charley-oai
0523a259c8 Reject ask user question tool in Execute and Custom (#9560)
## Summary
- Keep `request_user_input` in the tool list but reject it at runtime in
Execute/Custom modes with a clear model-facing error.
- Add a session accessor for current collaboration mode and enforce the
gate in the request_user_input handler.
- Update core/app-server tests to use Plan mode for success and add
Execute/Custom rejection coverage.
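A sketch of the runtime gate (mode names follow the PR; the types and error wording are assumptions):

```rust
enum CollaborationMode {
    Plan,
    PairProgramming,
    Execute,
    Custom,
}

// The tool stays in the tool list; the handler rejects at call time.
fn check_request_user_input(mode: &CollaborationMode) -> Result<(), String> {
    match mode {
        CollaborationMode::Execute | CollaborationMode::Custom => Err(
            "request_user_input is not available in this collaboration mode; \
             proceed without asking the user"
                .to_string(),
        ),
        _ => Ok(()),
    }
}
```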
2026-01-20 18:32:17 -08:00
charley-oai
531748a080 Prompt Expansion: Preserve Text Elements (#9518)
Summary
- Preserve `text_elements` through custom prompt argument parsing and
expansion (named and numeric placeholders).
- Translate text element ranges through Shlex parsing using sentinel
substitution, and rehydrate text + element ranges per arg.
- Drop image attachments when their placeholder does not survive prompt
expansion, keeping attachments consistent with rendered elements.
- Mirror changes in TUI2 and expand tests for prompt parsing/expansion
edge cases.

Tests
- placeholders with spaces as single tokens (positional + key=value,
quoted + unquoted),
  - prompt expansion with image placeholders,
  - large paste + image arg combinations,
  - unused image arg dropped after expansion.
2026-01-20 18:30:20 -08:00
Michael Bolin
f4d55319d1 feat: rename experimental_instructions_file to model_instructions_file (#9555)
A user who has `experimental_instructions_file` set will now see this:

<img width="888" height="660" alt="image"
src="https://github.com/user-attachments/assets/51c98312-eb9b-4881-81f1-bea6677e158d"
/>

And a `codex exec` would include this warning:

<img width="888" height="660" alt="image"
src="https://github.com/user-attachments/assets/a89f62be-1edf-4593-a75e-e0b4a762ed7d"
/>
2026-01-21 02:25:08 +00:00
Ahmed Ibrahim
3a0eeb8edf Show session header before configuration (#9568)
We were skipping it if we knew the model. We shouldn't.
2026-01-21 02:13:54 +00:00
Michael Bolin
ac2090caf2 fix: bminor/bash is no longer on GitHub so use bolinfest/bash instead (#9563)
This should fix CI.
2026-01-21 00:35:42 +00:00
Josh McKinney
0a26675155 feat(tui2): add /experimental menu (#9562)
Adds an /experimental slash command and bottom-pane view to toggle beta
features.

Persists feature-flag updates to config.toml, matching tui behavior.
2026-01-21 00:20:57 +00:00
Jeff Mickey
c14e6813fb [codex-tui] exit when terminal is dumb (#9293)
Using a terminal with TERM=dumb specifically means that TUIs and the like
don't work. Ensure that codex doesn't run in these environments, rather
than exiting with odd errors like crossterm's "Error: The cursor position
could not be read within a normal duration".

---------

Co-authored-by: Josh McKinney <joshka@openai.com>
2026-01-20 16:17:38 -08:00
HDCode
80f80181c2 fix(core): require approval for force delete on Windows (#8590)
### What
Implemented detection for dangerous "force delete" commands on Windows
to trigger the user approval prompt when `--ask-for-approval on-request`
is set. This aligns Windows behavior with the existing safety checks for
`rm -rf` on Linux.

### Why
Fixes #8567 - a critical safety gap where destructive Windows commands
could bypass the approval prompt. This prevents accidental data loss by
ensuring the user explicitly confirms operations that would otherwise
suppress the OS's native confirmation prompts.

### How
Updated the Windows command safety module to identify and flag the
following patterns as dangerous:
*   **PowerShell**:
    *   Detects `Remove-Item` (and aliases `rm`, `ri`, `del`, `erase`,
        `rd`, `rmdir`) when used with the `-Force` flag.
    *   Uses token-based analysis to robustly detect these patterns even
        inside script blocks (`{...}`), sub-expressions (`(...)`), or
        semicolon-chained sequences.
*   **CMD**:
    *   Detects `del /f` (force delete files).
    *   Detects `rd /s /q` (recursive delete quiet).
*   **Command Chaining**: Added support for analyzing chained commands
    (using `&`, `&&`, `|`, `||`) to separate and check individual commands
    (e.g., catching `del /f` hidden in `echo log & del /f data`).
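A much-simplified sketch of the detection (illustrative only; the real module is token-based and handles PowerShell script blocks, which plain splitting does not):

```rust
fn is_windows_force_delete(command: &str) -> bool {
    // Split on chaining operators so `echo log & del /f data` is caught.
    command
        .split(['&', '|', ';'])
        .map(str::trim)
        .any(|segment| {
            let lower = segment.to_ascii_lowercase();
            let tokens: Vec<&str> = lower.split_whitespace().collect();
            match tokens.first().copied() {
                Some("del" | "erase") => tokens.contains(&"/f"),
                Some("rd" | "rmdir") => {
                    tokens.contains(&"/s") && tokens.contains(&"/q")
                }
                Some("remove-item" | "rm" | "ri") => {
                    tokens.iter().any(|t| t.starts_with("-force"))
                }
                _ => false,
            }
        })
}
```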

### Testing
Added comprehensive unit tests covering:
*   **PowerShell**: `Remove-Item -Path 'test' -Recurse -Force` (Exact
    reproduction case).
*   **Complex Syntax**: Verified detection inside blocks (e.g.,
    `if ($true) { rm -Force }`) and with trailing punctuation.
*   **CMD**:
    *   `del /f` (Flagged).
    *   `rd /s /q` (Flagged).
    *   Chained commands: `echo hi & del /f file` (Flagged).
*   **False Positives**:
    *   `rd /s` (Not flagged - relies on native prompt).
    *   Standard deletions without force flags.

Verified with `cargo test` and `cargo clippy`.

---------

Co-authored-by: Eric Traut <etraut@openai.com>
2026-01-20 15:25:27 -08:00
Ahmed Ibrahim
fbd8afad81 queue only when task is working (#9558) 2026-01-20 15:24:45 -08:00
Ahmed Ibrahim
de4980d2ac Enable remote models (#9554) 2026-01-20 23:17:22 +00:00
charley-oai
64678f895a Improve UI spacing for queued messages (#9162)
Despite good spacing between queued messages and assistant message text:
<img width="462" height="322" alt="Screenshot 2026-01-12 at 4 54 50 PM"
src="https://github.com/user-attachments/assets/e8b46252-0b33-40d2-b431-cb73b9a3bd2e"
/>

Codex has confusing spacing between queued messages and shimmering
status text (making the queued message seem like a sub-item of the
shimmering status text)
<img width="615" height="217" alt="Screenshot 2026-01-12 at 4 54 18 PM"
src="https://github.com/user-attachments/assets/ee5e6095-8fe9-4863-88d2-10472cab8bd6"
/>

This PR changes the spacing between the queued message(s) and shimmering
status text to make it less confusing:
<img width="440" height="240" alt="Screenshot 2026-01-13 at 11 20 36 AM"
src="https://github.com/user-attachments/assets/02dcc690-cbe9-4943-87de-c7300ef51120"
/>

While working on the status/queued spacing change, we noticed two
paste‑burst tests were timing‑sensitive and could fail
on slower CI. We added a small test‑only helper to keep the paste‑burst
state active and refreshed during these tests. This
removes dependence on tight timing and makes the tests deterministic
without affecting runtime behavior.
2026-01-20 14:54:49 -08:00
zerone0x
ca23b0da5b fix(cli): add execute permission to bin/codex.js (#9532)
## Summary
Fixes #9520

The `bin/codex.js` file was missing execute permissions (`644` instead
of `755`), causing the `codex` command to fail after npm global
installation.

## Changes
- Added execute permission (`+x`) to `codex-cli/bin/codex.js`

## Verification
After this fix, npm tarballs will include the correct file permissions:
```bash
# Before: -rw-r--r-- (644)
# After:  -rwxr-xr-x (755)
```

---
🤖 Generated with Claude Code

Co-authored-by: Claude <noreply@anthropic.com>
2026-01-20 14:53:14 -08:00
charley-oai
be9e55c5fc Add total (non-partial) TextElement placeholder accessors (#9545)
## Summary
- Make `TextElement` placeholders private and add a text-backed accessor
to avoid assuming `Some`.
- Since they are optional in the protocol, we want to make sure any
accessors properly handle the None case (getting the placeholder using
the byte range in the text)
- Preserve placeholders during protocol/app-server conversions using the
accessor fallback.
- Update TUI composer/remap logic and tests to use the new
constructor/accessor.
2026-01-20 14:04:11 -08:00
Ahmed Ibrahim
56fe5e7bea merge remote models (#9547)
We have `models.json` and the `/models` response.
Behavior:
1. New models from the models endpoint get added
2. Shared models get replaced by remote ones
3. Existing models in `models.json` but not `/models` are kept
4. The highest-priority model is marked as the default
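A sketch of those merge rules (the `ModelInfo` shape here is an assumption for illustration):

```rust
use std::collections::BTreeMap;

struct ModelInfo {
    id: String,
    priority: u32,
    is_default: bool,
}

fn merge_models(local: Vec<ModelInfo>, remote: Vec<ModelInfo>) -> Vec<ModelInfo> {
    let mut by_id: BTreeMap<String, ModelInfo> =
        local.into_iter().map(|m| (m.id.clone(), m)).collect();
    for m in remote {
        // Rules 1-2: new remote models are added; shared ids are replaced.
        by_id.insert(m.id.clone(), m);
    }
    // Rule 3: local-only models fall through untouched.
    let mut merged: Vec<ModelInfo> = by_id.into_values().collect();
    if let Some(best) = merged.iter_mut().max_by_key(|m| m.priority) {
        best.is_default = true; // rule 4: highest priority is the default
    }
    merged
}
```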
2026-01-20 14:02:07 -08:00
Max Kong
c73a11d55e fix(windows-sandbox): parse PATH list entries for audit roots (#9319)
## Summary
- Use `std::env::split_paths` to parse PATH entries in audit candidate
collection
- Add a unit test covering multiple PATH entries (including spaces)
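`std::env::split_paths` handles platform separators (`;` on Windows, `:` elsewhere) and entries containing spaces; a sketch of the usage (the function name is illustrative):

```rust
use std::env;
use std::path::PathBuf;

fn audit_candidates_from_path() -> Vec<PathBuf> {
    match env::var_os("PATH") {
        Some(path) => env::split_paths(&path).collect(),
        None => Vec::new(),
    }
}
```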

## Testing
- `cargo test -p codex-windows-sandbox` (Windows)

Fixes #9317
2026-01-20 14:00:27 -08:00
Max Kong
f2de920185 fix(windows-sandbox): deny .git file entries under writable roots (#9314)
## Summary
- Deny `.git` entries under writable roots even when `.git` is a file
(worktrees/submodules)
- Add a unit test for `.git` file handling

## Testing
- `cargo test -p codex-windows-sandbox` (Windows)

Fixes #9313
2026-01-20 13:59:59 -08:00
iceweasel-oai
9ea8e3115e lookup system SIDs instead of hardcoding English strings. (#9552)
The elevated setup does not work on non-English windows installs where
Users/Administrators/etc are in different languages. This PR uses the
well-known SIDs instead, which do not vary based on locale
2026-01-20 13:55:37 -08:00
Owen Lin
b0049ab644 fix(core): don't update the file's mtime on resume (#9553)
Remove `FileTimes::new().set_modified(SystemTime::now())` when resuming
a thread.

Context: It's awkward in UIs built on top of the app-server that resuming a
thread bumps the `updated_at` timestamp, even if no message is sent. So
if you open a thread (perhaps to just view its contents), it
automatically reorders it to the top which is almost certainly not what
you want.
2026-01-20 21:39:31 +00:00
Skylar Graika
b236f1c95d fix: prevent repeating interrupted turns (#9043)
## What
Record a model-visible `<turn_aborted>` marker in history when a turn is
interrupted, and treat it as a session prefix.

## Why
When a turn is interrupted, Codex emits `TurnAborted` but previously did
not persist anything model-visible in the conversation history. On the
next user turn, the model can’t tell the previous work was aborted and
may resume/repeat earlier actions (including duplicated side effects
like re-opening PRs).

Fixes: https://github.com/openai/codex/issues/9042

## How
On `TurnAbortReason::Interrupted`, append a hidden user message
containing a `<turn_aborted>…</turn_aborted>` marker and flush.
Treat `<turn_aborted>` like `<environment_context>` for session-prefix
filtering.
Add a regression test to ensure follow-up turns don’t repeat side
effects from an aborted turn.
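A sketch of the marker and the prefix check (the tag comes from this PR; the marker's inner text and helper names are assumptions):

```rust
fn turn_aborted_marker() -> String {
    "<turn_aborted>The previous turn was interrupted by the user.</turn_aborted>"
        .to_string()
}

// Treated like <environment_context> for session-prefix filtering, so the
// marker informs the model without counting as a real user turn.
fn is_session_prefix_text(text: &str) -> bool {
    text.starts_with("<environment_context>") || text.starts_with("<turn_aborted>")
}
```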

## Testing
`just fmt`
`just fix -p codex-core`
`cargo test -p codex-core -- --test-threads=1`
`cargo test --all-features -- --test-threads=1`

---------

Co-authored-by: Skylar Graika <sgraika127@gmail.com>
Co-authored-by: jif-oai <jif@openai.com>
Co-authored-by: Eric Traut <etraut@openai.com>
2026-01-20 13:07:28 -08:00
Eric Traut
79c5bf9835 Fixed config merging issue with profiles (#9509)
This PR fixes a small issue with chained (layered) config.toml file
merging. The old logic didn't properly handle profiles.

In particular, if a lower-layer config overrides a profile defined in a
higher-layer config, the override did not take effect. This prevents
users from having project-specific profile overrides and contradicts the
(soon-to-be) documented behavior of config merging.

The change adds a unit test for this case. It also exposes a function
from the config crate that is needed by the app server code paths to
implement support for layered configs.
2026-01-20 12:18:00 -08:00
jif-oai
0b3c802a54 fix: memory leak issue (#9543)
Co-authored-by: Josh McKinney <joshka@openai.com>
2026-01-20 20:14:14 +00:00
Dylan Hurd
714151eb4e feat(personality) introduce model_personality config (#9459)
## Summary
Introduces the concept of a config model_personality. I would consider
this an MVP for testing out the feature. There are a number of
follow-ups to this PR:

- More sophisticated templating with validation
- In-product experience to manage this

## Testing
- [x] Testing locally
2026-01-20 11:06:14 -08:00
Simon Willison
46a4a03083 Fix typo in feature name from 'Mult-agents' to 'Multi-agents' (#9542)
Fixes a typo in a feature description.
2026-01-20 10:55:36 -08:00
Tiffany Citra
2c3843728c fix: writable_roots doesn't recognize home directory symbol in non-windows OS (#9193)
Fixes:
```
[sandbox_workspace_write]
writable_roots = ["~/code/"]
```

translates to
```
/Users/ccunningham/.codex/~/code
```
(i.e. the home dir symbol isn't recognized)
2026-01-20 10:55:01 -08:00
Ahmed Ibrahim
5ae6e70801 Tui: use collaboration mode instead of model and effort (#9507)
- Only use collaboration modes in the tui state to track model and
effort.
- No behavior change without the collaboration modes flag.
- Change model and effort on /model, /collab (behind a flag), and
shift+tab (behind flag)
2026-01-20 10:26:12 -08:00
Anton Panasenko
7b27aa7707 feat: support proxy for ws connection (#9409)
Unfortunately tokio-tungstenite doesn't support proxy configuration out of
the box; while https://github.com/snapview/tokio-tungstenite/pull/370 is
in review, we can depend on the source code for now.
2026-01-20 09:36:30 -08:00
gt-oai
7351c12999 Only load config from trusted folders (#9533)
Config includes multiple code execution entrypoints. 

Now, we load the config from predetermined locations first
(~/.codex/config.toml etc), use those to learn which folders are
'trusted', and only load additional config from the CWD if it is
trusted.
2026-01-20 15:44:21 +00:00
jif-oai
3a9f436ce0 feat: metrics on shell snapshot (#9527) 2026-01-20 13:18:24 +00:00
jif-oai
6bbf506120 feat: metrics on remote models (#9528) 2026-01-20 13:02:55 +00:00
jif-oai
a3a97f3ea9 feat: record timer with additional tags (#9529) 2026-01-20 13:01:55 +00:00
jif-oai
9ec20ba065 nit: do not render terminal interactions if no task running (#9374)
To prevent a race where the terminal interaction message is processed
after the last message
2026-01-20 10:20:17 +00:00
jif-oai
483239d861 chore: collab in experimental (#9525) 2026-01-20 10:19:06 +00:00
Dylan Hurd
3078eedb24 fix(tui) fix user message light mode background (#9407)
## Summary
Fixes the user message styles for light mode.

## Testing
Attaching 2 screenshots from ghostty, but I also tried various styles in
Terminal.app and iTerm2.
**Before**
<img width="888" height="560" alt="Screenshot 2026-01-16 at 5 22 36 PM"
src="https://github.com/user-attachments/assets/73d9decb-a01a-4ece-b88e-ea49a33cc0c6"
/>

**After**
<img width="890" height="281" alt="Screenshot 2026-01-16 at 5 22 59 PM"
src="https://github.com/user-attachments/assets/6689e286-d699-4ceb-b0cb-579a31b047bf"
/>
2026-01-19 23:58:44 -08:00
charley-oai
eb90e20c0b Persist text elements through TUI input and history (#9393)
Continuation of breaking up this PR
https://github.com/openai/codex/pull/9116

## Summary
- Thread user text element ranges through TUI/TUI2 input, submission,
queueing, and history so placeholders survive resume/edit flows.
- Preserve local image attachments alongside text elements and rehydrate
placeholders when restoring drafts.
- Keep model-facing content shapes clean by attaching UI metadata only
to user input/events (no API content changes).

## Key Changes
- TUI/TUI2 composer now captures text element ranges, trims them with
text edits, and restores them when submission is suppressed.
- User history cells render styled spans for text elements and keep
local image paths for future rehydration.
- Initial chat widget bootstraps accept empty `initial_text_elements` to
keep initialization uniform.
- Protocol/core helpers updated to tolerate the new InputText field
shape without changing payloads sent to the API.
2026-01-19 23:49:34 -08:00
Dylan Hurd
675f165c56 fix(core) Preserve base_instructions in SessionMeta (#9427)
## Summary
This PR consolidates base_instructions onto SessionMeta /
SessionConfiguration, so we ensure `base_instructions` is set once per
session and should be (mostly) immutable, unless:
- overridden by config on resume / fork
- sub-agent tasks, like review or collab


In a future PR, we should convert all references to `base_instructions`
to consistently use the typed struct, so it's less likely that we put
other strings there. See #9423. However, this PR is already quite
complex, so I'm deferring that to a follow-up.

## Testing
- [x] Added a resume test to assert that instructions are preserved. In
particular, `resume_switches_models_preserves_base_instructions` fails
against main.

Existing test coverage that asserts base instructions are preserved
across multiple requests in a session:
- Manual compact keeps baseline instructions:
core/tests/suite/compact.rs:199
- Auto-compact keeps baseline instructions:
core/tests/suite/compact.rs:1142
- Prompt caching reuses the same instructions across two requests:
core/tests/suite/prompt_caching.rs:150 and
core/tests/suite/prompt_caching.rs:157
- Prompt caching with explicit expected string across two requests:
core/tests/suite/prompt_caching.rs:213 and
core/tests/suite/prompt_caching.rs:222
- Resume with model switch keeps original instructions:
core/tests/suite/resume.rs:136
- Compact/resume/fork uses request 0 instructions for later expected
payloads: core/tests/suite/compact_resume_fork.rs:215
2026-01-19 21:59:36 -08:00
Ahmed Ibrahim
65d3b9e145 Migrate tui to use UserTurn (#9497)
- `tui/` and `tui2/` submit `Op::UserTurn` and own full turn context
(cwd/approval/sandbox/model/etc.).
- `Op::UserInput` is documented as legacy in `codex-protocol` (doc-only;
no `#[deprecated]` to avoid `-D warnings` fallout).
- Remove obsolete `#[allow(deprecated)]` and the unused `ConversationId`
alias/re-export.
2026-01-19 13:40:39 -08:00
prateek-oai
0c0c5aeddc tui: avoid Esc interrupt when skill popup active (#9451)
Fixes #9450

## What
- When a task is running and the skills autocomplete popup is open,
`Esc` now dismisses the popup instead of sending `Op::Interrupt`.
- `Esc` still interrupts a running task when no popup is active.

## Tests
- `cargo test -p codex-tui`

---------

Co-authored-by: prateek <199982+prateek@users.noreply.github.com>
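
The resulting precedence is roughly the following (a hypothetical
sketch, not the actual codex-tui key handler):

```rust
// Hypothetical sketch of the Esc precedence described above: a visible
// popup consumes Esc before it can reach the interrupt path.
enum EscAction {
    DismissPopup,
    Interrupt,
    None,
}

fn on_esc(task_running: bool, popup_open: bool) -> EscAction {
    if popup_open {
        // The skills autocomplete popup wins: Esc closes it.
        EscAction::DismissPopup
    } else if task_running {
        // No popup in the way, so Esc interrupts the running task.
        EscAction::Interrupt
    } else {
        EscAction::None
    }
}
```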
2026-01-19 12:52:04 -08:00
Shijie Rao
d544adf71a Feat: plan mode prompt update (#9495)
### Summary
* Added instructions on using `request_user_input`.
* Changed the output to be JSON with a `plan` key and the actual plan as
the value.
* Removed the `PLAN.md` write because it runs into sandbox issues. We
can add it back later.
2026-01-19 12:17:29 -08:00
jif-oai
070935d5e8 chore: fix beta VS experimental (#9496) 2026-01-19 19:39:23 +00:00
Ahmed Ibrahim
b11e96fb04 Act on reasoning-included per turn (#9402)
- Reset reasoning-included flag each turn and update compaction test
2026-01-19 11:23:25 -08:00
Shijie Rao
57ec3a8277 Feat: request user input tool (#9472)
### Summary
* Add a `requestUserInput` tool that the model can use to gather
feedback or ask questions mid-turn.


### Tool input schema
```json
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "title": "requestUserInput input",
  "type": "object",
  "additionalProperties": false,
  "required": ["questions"],
  "properties": {
    "questions": {
      "type": "array",
      "description": "Questions to show the user (1-3). Prefer 1 unless multiple independent decisions block progress.",
      "minItems": 1,
      "maxItems": 3,
      "items": {
        "type": "object",
        "additionalProperties": false,
        "required": ["id", "header", "question"],
        "properties": {
          "id": {
            "type": "string",
            "description": "Stable identifier for mapping answers (snake_case)."
          },
          "header": {
            "type": "string",
            "description": "Short header label shown in the UI (12 or fewer chars)."
          },
          "question": {
            "type": "string",
            "description": "Single-sentence prompt shown to the user."
          },
          "options": {
            "type": "array",
            "description": "Optional 2-3 mutually exclusive choices. Put the recommended option first and suffix its label with \"(Recommended)\". Only include \"Other\" option if we want to include a free form option. If the question is free form in nature, do not include any option.",
            "minItems": 2,
            "maxItems": 3,
            "items": {
              "type": "object",
              "additionalProperties": false,
              "required": ["value", "label", "description"],
              "properties": {
                "value": {
                  "type": "string",
                  "description": "Machine-readable value (snake_case)."
                },
                "label": {
                  "type": "string",
                  "description": "User-facing label (1-5 words)."
                },
                "description": {
                  "type": "string",
                  "description": "One short sentence explaining impact/tradeoff if selected."
                }
              }
            }
          }
        }
      }
    }
  }
}
```

### Tool output schema
```json
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "title": "requestUserInput output",
  "type": "object",
  "additionalProperties": false,
  "required": ["answers"],
  "properties": {
    "answers": {
      "type": "object",
      "description": "Map of question id to user answer.",
      "additionalProperties": {
        "type": "object",
        "additionalProperties": false,
        "required": ["selected"],
        "properties": {
          "selected": {
            "type": "array",
            "items": { "type": "string" }
          },
          "other": {
            "type": ["string", "null"]
          }
        }
      }
    }
  }
}
```
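
For illustration, a minimal Rust/serde sketch of types matching these
schemas (hypothetical names, not the actual codex-rs definitions):

```rust
use serde::{Deserialize, Serialize};
use std::collections::HashMap;

// Sketch of the input schema: 1-3 questions, each with an id, header,
// single-sentence question, and optional 2-3 mutually exclusive options.
#[derive(Serialize, Deserialize)]
struct RequestUserInputArgs {
    questions: Vec<Question>,
}

#[derive(Serialize, Deserialize)]
struct Question {
    id: String,      // stable snake_case identifier for mapping answers
    header: String,  // short UI label (12 or fewer chars)
    question: String,
    #[serde(skip_serializing_if = "Option::is_none")]
    options: Option<Vec<QuestionOption>>,
}

#[derive(Serialize, Deserialize)]
struct QuestionOption {
    value: String,       // machine-readable snake_case value
    label: String,       // user-facing label (1-5 words)
    description: String, // one-sentence impact/tradeoff if selected
}

// Sketch of the output schema: a map of question id -> answer.
#[derive(Serialize, Deserialize)]
struct RequestUserInputOutput {
    answers: HashMap<String, Answer>,
}

#[derive(Serialize, Deserialize)]
struct Answer {
    selected: Vec<String>,
    #[serde(skip_serializing_if = "Option::is_none")]
    other: Option<String>, // free-form text when "Other" was chosen
}
```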
2026-01-19 10:17:30 -08:00
Ahmed Ibrahim
bf430ad9fe TUI: collaboration mode UX + always submit UserTurn when enabled (#9461)
- Adds experimental collaboration modes UX in TUI: Plan / Pair
Programming / Execute.
- Gated behind `Feature::CollaborationModes`; existing behavior remains
unchanged when disabled.
- Selection UX:
- `Shift+Tab` cycles modes while idle (no task running, no modal/popup).
- `/collab` cycles; `/collab <plan|pair|pp|execute|exec>` sets
explicitly.
- Footer flash after changes + shortcut overlay shows `Shift+Tab` “to
change mode”.
  - `/status` shows “Collaboration mode”.
- Submission semantics:
- When enabled: every submit uses `Op::UserTurn` and always includes
`collaboration_mode: Some(...)` (default Pair Programming).
  - Removes the one-shot “pending collaboration mode” behavior.
- Implementation:
- New `tui/src/collaboration_modes.rs` (selection enum/cycle, `/collab`
parsing, resolve to `CollaborationMode`, footer flash line).
- Fallback: `resolve_mode_or_fallback` synthesizes a `CollaborationMode`
when presets are missing (uses current model + reasoning effort; no
`developer_instructions`) to avoid core falling back to `Custom`.
  - TODO: migrate TUI to use `Op::UserTurn`.
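
A rough sketch of the cycle/parse behavior described above (hypothetical
helper, not the actual `collaboration_modes.rs`):

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
enum ModeSelection {
    Plan,
    PairProgramming,
    Execute,
}

impl ModeSelection {
    // Shift+Tab and bare `/collab` advance to the next mode, wrapping.
    fn cycle(self) -> Self {
        match self {
            Self::Plan => Self::PairProgramming,
            Self::PairProgramming => Self::Execute,
            Self::Execute => Self::Plan,
        }
    }

    // `/collab <arg>` accepts the aliases listed above.
    fn parse(arg: &str) -> Option<Self> {
        match arg {
            "plan" => Some(Self::Plan),
            "pair" | "pp" => Some(Self::PairProgramming),
            "execute" | "exec" => Some(Self::Execute),
            _ => None,
        }
    }
}
```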
2026-01-19 09:32:04 -08:00
Eric Traut
3788e2cc0f Fixed stale link to MCP documentation (#9490)
This was noted in #9482
2026-01-19 09:10:54 -08:00
jif-oai
92cf2a1c3a chore: warning metric (#9487) 2026-01-19 17:07:04 +00:00
Ahmed Ibrahim
31415ebfcf Remove unused protocol collaboration mode prompts (#9463)
Delete duplicate collaboration mode markdown under protocol prompts;
core templates remain the single source of truth.
2026-01-19 08:54:19 -08:00
Eric Traut
264d40efdc Fix invalid input error on Azure endpoint (#9387)
Users of Azure endpoints are reporting that when they use `/review`,
they sometimes see an error "Invalid 'input[3].id". I suspect this is
specific to the Azure implementation of the `responses` API. The Azure
team generally copies the OpenAI code for this endpoint, but they do
have minor differences and sometimes lag in rolling out bug fixes or
updates.

The error appears to be triggered because the `/review` implementation
is using a user ID with a colon in it.

Addresses #9360
2026-01-19 08:21:27 -08:00
jif-oai
3c28c85063 prompt 3 (#9479) 2026-01-19 11:56:38 +00:00
jif-oai
dc1b62acbd feat: detach non-tty children (#9477)
Thanks to the investigations made by
* @frantic-openai https://github.com/openai/codex/pull/9403
* @kfiramar https://github.com/openai/codex/pull/9388
2026-01-19 11:35:34 +00:00
jif-oai
186794dbb3 feat: close all threads in /new (#9478) 2026-01-19 11:35:03 +00:00
jif-oai
7ebe13f692 feat: timer total turn metrics (#9382) 2026-01-19 10:44:31 +00:00
Eric Traut
a803467f52 Fixed TUI regression related to image paste in WSL (#9473)
This affects quoted paths in WSL. The fix is to normalize quoted Windows
paths before WSL conversion.

This addresses #9456
2026-01-18 23:02:02 -08:00
dependabot[bot]
a5e5d7a384 chore(deps): bump chrono from 0.4.42 to 0.4.43 in /codex-rs (#9465)
Bumps [chrono](https://github.com/chronotope/chrono) from 0.4.42 to
0.4.43.
Release notes for 0.4.43 (from [chrono's
releases](https://github.com/chronotope/chrono/releases)):

- Install extra components for lint workflow by @djc (chronotope/chrono#1741)
- Upgrade windows-bindgen to 0.64 by @djc (chronotope/chrono#1742)
- Improve windows-bindgen setup by @djc (chronotope/chrono#1744)
- Drop stabilized feature doc_auto_cfg by @djc (chronotope/chrono#1745)
- Faster RFC 3339 parsing by @djc (chronotope/chrono#1748)
- Update windows-bindgen requirement from 0.64 to 0.65 by @dependabot[bot] (chronotope/chrono#1751)
- Add `NaiveDate::abs_diff` by @Kinrany (chronotope/chrono#1752)
- Add feature-gated defmt support by @pebender (chronotope/chrono#1747)
- Drop deny lints, eager Debug impls are a mixed blessing by @djc (chronotope/chrono#1753)
- chore: minor improvement for docs by @spuradage (chronotope/chrono#1756)
- Added doctest for the NaiveDate years_since function by @LucasBou (chronotope/chrono#1755)
- Prepare 0.4.43 by @djc (chronotope/chrono#1765)
- Update copyright year to 2026 in LICENSE.txt by @taozui472 (chronotope/chrono#1767)

Commits: see the [compare
view](https://github.com/chronotope/chrono/compare/v0.4.42...v0.4.43).


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=chrono&package-manager=cargo&previous-version=0.4.42&new-version=0.4.43)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-19 06:43:34 +00:00
dependabot[bot]
66b74efbc6 chore(deps): bump ctor from 0.5.0 to 0.6.3 in /codex-rs (#9469)
Bumps [ctor](https://github.com/mmastrac/rust-ctor) from 0.5.0 to 0.6.3.
Commits: see the [full
diff](https://github.com/mmastrac/rust-ctor/commits).


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=ctor&package-manager=cargo&previous-version=0.5.0&new-version=0.6.3)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-18 22:37:11 -08:00
dependabot[bot]
78a359f7fa chore(deps): bump arc-swap from 1.7.1 to 1.8.0 in /codex-rs (#9468)
Bumps [arc-swap](https://github.com/vorner/arc-swap) from 1.7.1 to
1.8.0.
Changelog for 1.8.0 (from [arc-swap's
changelog](https://github.com/vorner/arc-swap/blob/master/CHANGELOG.md)):

- Support for Pin (vorner/arc-swap#185, vorner/arc-swap#183).
- Fix (hopefully) crash on ARM (vorner/arc-swap#164).
- Fix Miri check (vorner/arc-swap#186, vorner/arc-swap#156).
- Fix support for Rust 1.31.0.
- Some minor clippy lints.

Commits: see the [compare
view](https://github.com/vorner/arc-swap/compare/v1.7.1...v1.8.0).


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=arc-swap&package-manager=cargo&previous-version=1.7.1&new-version=1.8.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-18 22:25:23 -08:00
dependabot[bot]
274af30525 chore(deps): bump tokio from 1.48.0 to 1.49.0 in /codex-rs (#9467)
Bumps [tokio](https://github.com/tokio-rs/tokio) from 1.48.0 to 1.49.0.
Release notes for Tokio v1.49.0 (January 3rd, 2026), from [tokio's
releases](https://github.com/tokio-rs/tokio/releases):

**Added**
- net: add support for `TCLASS` option on IPv6 (tokio-rs/tokio#7781)
- runtime: stabilize `runtime::id::Id` (tokio-rs/tokio#7125)
- task: implement `Extend` for `JoinSet` (tokio-rs/tokio#7195)
- task: stabilize `LocalSet::id()` (tokio-rs/tokio#7776)

**Changed**
- net: deprecate `{TcpStream,TcpSocket}::set_linger` (tokio-rs/tokio#7752)

**Fixed**
- macros: fix the hygiene issue of `join!` and `try_join!` (tokio-rs/tokio#7766)
- runtime: revert "replace manual vtable definitions with Wake" (tokio-rs/tokio#7699)
- sync: return `TryRecvError::Disconnected` from `Receiver::try_recv` after `Receiver::close` (tokio-rs/tokio#7686)
- task: remove unnecessary trait bounds on the `Debug` implementation (tokio-rs/tokio#7720)

**Unstable**
- fs: handle `EINTR` in `fs::write` for io-uring (tokio-rs/tokio#7786)
- fs: support io-uring with `tokio::fs::read` (tokio-rs/tokio#7696)
- runtime: disable io-uring on `EPERM` (tokio-rs/tokio#7724)
- time: add alternative timer for better multicore scalability (tokio-rs/tokio#7467)

**Documented**
- docs: fix typos in `bounded.rs` and `park.rs` (tokio-rs/tokio#7817)
- io: add `SyncIoBridge` cross-references to `copy` and `copy_buf` (tokio-rs/tokio#7798)
- io: document that `AsyncWrite` does not inherit from `std::io::Write` (tokio-rs/tokio#7705)
- metrics: clarify that `num_alive_tasks` is not strongly consistent (tokio-rs/tokio#7614)
- net: clarify the cancellation safety of `TcpStream::peek` (tokio-rs/tokio#7305)
- net: clarify the drop behavior of `unix::OwnedWriteHalf` (tokio-rs/tokio#7742)
- net: clarify the platform-dependent backlog in `TcpSocket` docs (tokio-rs/tokio#7738)
- runtime: mention `LocalRuntime` in `new_current_thread` docs (tokio-rs/tokio#7820)
- sync: add missing period to `mpsc::Sender::try_send` docs (tokio-rs/tokio#7721)
- sync: clarify the cancellation safety of `oneshot::Receiver` (tokio-rs/tokio#7780)
- sync: improve the docs for the `errors` of mpsc (tokio-rs/tokio#7722)
- task: add example for `spawn_local` usage on local runtime (tokio-rs/tokio#7689)

Commits: see the [compare
view](https://github.com/tokio-rs/tokio/compare/tokio-1.48.0...tokio-1.49.0).


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=tokio&package-manager=cargo&previous-version=1.48.0&new-version=1.49.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-18 22:24:55 -08:00
dependabot[bot]
efa9326f08 chore(deps): bump log from 0.4.28 to 0.4.29 in /codex-rs (#9466)
Bumps [log](https://github.com/rust-lang/log) from 0.4.28 to 0.4.29.
Release notes for 0.4.29 (from [log's
releases](https://github.com/rust-lang/log/releases)):

This release increases `log`'s MSRV from `1.61.0` to `1.68.0`.

- docs: add missing impls from README.md by @AldaronLau (rust-lang/log#703)
- Point to new URLs for favicon and logo by @AldaronLau (rust-lang/log#704)
- perf: reduce llvm-lines of FromStr for `Level` and `LevelFilter` by @dishmaker (rust-lang/log#709)
- Replace serde with serde_core by @Thomasdezeeuw (rust-lang/log#712)
- Fix clippy lints by @Thomasdezeeuw (rust-lang/log#713)
- Use GitHub Actions to install Rust and cargo-hack by @Thomasdezeeuw (rust-lang/log#715)
- Exclude old unstable_kv features from testing matrix by @Thomasdezeeuw (rust-lang/log#716)
- Fix up CI by @KodrAus (rust-lang/log#718)
- Prepare for 0.4.29 release by @KodrAus (rust-lang/log#719)

New contributors: @AldaronLau (rust-lang/log#703) and @dishmaker
(rust-lang/log#709).

Full changelog: https://github.com/rust-lang/log/compare/0.4.28...0.4.29


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=log&package-manager=cargo&previous-version=0.4.28&new-version=0.4.29)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-18 22:24:34 -08:00
Eric Traut
1271d450b1 Fixed symlink support for config.toml (#9445)
We already support reading from `config.toml` through a symlink, but the
code was not properly handling updates to a symlinked config file. This
PR generalizes safe symlink-chain resolution and atomic writes into
path_utils, updating all config write paths to use the shared logic
(including set_default_oss_provider, which previously didn't use the
common path), and adds tests for symlink chains and cycles.

This resolves #6646.

Notes:
* Symlink cycles or resolution failures replace the top-level symlink
with a real file.
* Shared config write path now handles symlinks consistently across
edits, defaults, and empty-user-layer creation.

This PR was inspired by https://github.com/openai/codex/pull/9437, which
was contributed by @ryoppippi
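
A sketch of the general shape, with hypothetical helpers standing in for
the shared path_utils logic:

```rust
use std::fs;
use std::io;
use std::path::{Path, PathBuf};

// Follow a chain of symlinks to the real target; a bounded hop count
// stands in for real cycle detection.
fn resolve_symlink_chain(path: &Path) -> io::Result<PathBuf> {
    let mut current = path.to_path_buf();
    for _ in 0..40 {
        match fs::symlink_metadata(&current) {
            Ok(meta) if meta.file_type().is_symlink() => {
                let target = fs::read_link(&current)?;
                // Relative targets resolve against the symlink's parent.
                current = match current.parent() {
                    Some(parent) => parent.join(target),
                    None => target,
                };
            }
            _ => return Ok(current),
        }
    }
    Err(io::Error::new(io::ErrorKind::Other, "symlink cycle or chain too long"))
}

// Write to a temp file next to the resolved target, then rename over it,
// so readers never observe a partially written config. On resolution
// failure, fall back to the original path, replacing the top-level
// symlink with a real file (matching the behavior noted above).
fn write_config_atomically(path: &Path, contents: &str) -> io::Result<()> {
    let target = resolve_symlink_chain(path).unwrap_or_else(|_| path.to_path_buf());
    let mut tmp = target.clone().into_os_string();
    tmp.push(".tmp");
    let tmp = PathBuf::from(tmp);
    fs::write(&tmp, contents)?;
    fs::rename(&tmp, &target)
}
```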
2026-01-18 19:22:28 -08:00
Michael Bolin
c87a7d9043 fix(tui2): running /mcp was not printing any output until another event triggered a flush (#9457)
This seems to fix things, but I'm not sure if this is the
correct/minimal fix.
2026-01-18 19:19:37 -08:00
Ahmed Ibrahim
f72f87fbee Add collaboration modes test prompts (#9443)
2026-01-18 11:39:08 -08:00
SlKzᵍᵐ
0a568a47fd tui: allow forward navigation in backtrack preview (#9059)
Fixes #9058

## Summary
When the transcript backtrack preview is armed (press `Esc`), allow
navigating to newer user messages with the `→` arrow, in addition to
navigating backwards with `Esc`/`←`, before confirming with `Enter`.

## Changes
- Backtrack preview navigation: `Esc`/`←` steps to older user messages,
`→` steps to newer ones, `Enter` edits the selected message (clamped at
bounds, no wrap-around).
- Transcript overlay footer hints updated to advertise `esc/←`, `→`, and
`enter` when a message is highlighted.

## Related
- WSL shortcut-overlay snapshot determinism: #9359

## Testing
- `just fmt`
- `just fix -p codex-tui`
- `just fix -p codex-tui2`
- `cargo test -p codex-tui app_backtrack::`
- `cargo test -p codex-tui pager_overlay::`
- `cargo test -p codex-tui2 app_backtrack::`
- `cargo test -p codex-tui2 pager_overlay::`

---------

Co-authored-by: Josh McKinney <joshka@openai.com>
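
The navigation itself reduces to clamped index stepping; a minimal
sketch, assuming indices grow toward newer messages:

```rust
// Esc/Left step toward older messages, Right steps toward newer ones,
// clamped at both bounds with no wrap-around.
fn step(selected: usize, newest: usize, toward_newer: bool) -> usize {
    if toward_newer {
        (selected + 1).min(newest) // clamp at the newest message
    } else {
        selected.saturating_sub(1) // clamp at the oldest message
    }
}
```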
2026-01-18 04:10:24 +00:00
Ahmed Ibrahim
aeaff26451 Preserve slash command order in search (#9425)
Keep slash popup search results in presentation order for built-ins and
prompts.
2026-01-17 19:46:07 -08:00
Ahmed Ibrahim
1478a88eb0 Add collaboration developer instructions (#9424)
- Add additional instructions when they are available
- Make sure to update them on change via either UserInput or UserTurn
2026-01-18 01:31:14 +00:00
Dylan Hurd
80d7a5d7fe chore(instructions) Remove unread SessionMeta.instructions field (#9423)
### Description
- Remove the now-unused `instructions` field from the session metadata
to simplify SessionMeta and stop propagating transient instruction text
through the rollout recorder API. This was only saving
user_instructions, and was never being read.
- Stop passing user instructions into the rollout writer at session
creation so the rollout header only contains canonical session metadata.

### Testing

- Ran `just fmt` which completed successfully.
- Ran `just fix -p codex-protocol`, `just fix -p codex-core`, `just fix
-p codex-app-server`, `just fix -p codex-tui`, and `just fix -p
codex-tui2` which completed (Clippy fixes applied) as part of
verification.
- Ran `cargo test -p codex-protocol` which passed (28 tests).
- Ran `cargo test -p codex-core` which showed failures in a small set of
tests (not caused by the protocol type change directly):
`default_client::tests::test_create_client_sets_default_headers`,
several `models_manager::manager::tests::refresh_available_models_*`,
and `shell_snapshot::tests::linux_sh_snapshot_includes_sections` (these
tests failed in this CI run).
- Ran `cargo test -p codex-app-server` which reported several failing
integration tests (including
`suite::codex_message_processor_flow::test_codex_jsonrpc_conversation_flow`,
`suite::output_schema::send_user_turn_*`, and
`suite::user_agent::get_user_agent_returns_current_codex_user_agent`).
- `cargo test -p codex-tui` and `cargo test -p codex-tui2` were
attempted but aborted due to disk space exhaustion (`No space left on
device`).

------
[Codex
Task](https://chatgpt.com/codex/tasks/task_i_696bd8ce632483228d298cf07c7eb41c)
2026-01-17 16:02:28 -08:00
Dylan Hurd
bffe9b33e9 chore(core) Create instructions module (#9422)
## Summary
We have a variety of things we refer to as instructions in the code
base: our current canonical terms are:
- base instructions (raw string)
- developer instructions (has a type in protocol)
- user instructions

We also have `instructions` floating around in various places. We should
standardize on the above, and start using types to prevent them from
ending up in the wrong place. There will be additional PRs, but I'm
going to keep these small so we can easily follow them!

## Testing
- [x] Tests pass, this is purely a file move
2026-01-17 16:01:26 -08:00
Ahmed Ibrahim
8f0e0300d2 Expose collaboration presets (#9421)
Expose collaboration presets for clients

---------

Co-authored-by: Josh McKinney <joshka@openai.com>
2026-01-17 12:32:50 -08:00
Alex Hornby
b877a2041e fix unified_exec::tests::unified_exec_timeouts to use a more unique match value (#9414)
Fix unified_exec_timeouts to use a unique variable value rather than
"codex" which was causing false positives when running tests locally
(presumably from my bash prompts). Discovered while running tests to
validate another change.

Fixes https://github.com/openai/codex/issues/9413

Test Plan:

Ran test locally on my fedora 43 x86_64 machine with:
```
cd codex/codex-rs
cargo nextest run --all-features --no-fail-fast unified_exec::tests::unified_exec_timeouts
```

Before, unified_exec_timeouts fails:
```
    Finished `test` profile [unoptimized + debuginfo] target(s) in 0.38s
────────────
 Nextest run ID fa2b4949-a66c-408c-8002-32c52c70ec4f with nextest profile: default
    Starting 1 test across 107 binaries (3211 tests skipped)
        FAIL [   5.667s] codex-core unified_exec::tests::unified_exec_timeouts
  stdout ───

    running 1 test
    test unified_exec::tests::unified_exec_timeouts ... FAILED

    failures:

    failures:
        unified_exec::tests::unified_exec_timeouts

    test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 774 filtered out; finished in 5.66s

  stderr ───

    thread 'unified_exec::tests::unified_exec_timeouts' (459601) panicked at core/src/unified_exec/mod.rs:381:9:
    timeout too short should yield incomplete output
    note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

────────────
     Summary [   5.677s] 1 test run: 0 passed, 1 failed, 3211 skipped
        FAIL [   5.667s] codex-core unified_exec::tests::unified_exec_timeouts
error: test run failed

```

After, works:
```
    Finished `test` profile [unoptimized + debuginfo] target(s) in 0.34s
────────────
 Nextest run ID f49e9004-e30b-4049-b0ff-283b543a1cd7 with nextest profile: default
    Starting 1 test across 107 binaries (3211 tests skipped)
        SLOW [> 15.000s] codex-core unified_exec::tests::unified_exec_timeouts
        PASS [  17.666s] codex-core unified_exec::tests::unified_exec_timeouts
────────────
     Summary [  17.676s] 1 test run: 1 passed (1 slow), 3211 skipped
```
2026-01-17 09:05:53 -08:00
Ahmed Ibrahim
764f3c7d03 fix(tui) Defer backtrack trim until rollback confirms (#9401)
Document the backtrack/rollback state machine and invariants between the
transcript overlay, in-flight “live tail”, and core thread state (tui + tui2).

Also adjust behavior for correctness:
- Track a single pending rollback and block additional rollbacks until core responds.
- Defer trimming transcript cells until ThreadRolledBack for the active session.
- Clear the guard on ThreadRollbackFailed so the user can retry.
- After a confirmed trim, schedule a one-shot scrollback refresh on the next draw.
- Clear stale pending rollback state when switching sessions.

---------

Co-authored-by: Josh McKinney <joshka@openai.com>
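
A minimal sketch of the pending-rollback guard described above
(hypothetical types, not the actual tui state):

```rust
// Track a single pending rollback; block new ones until core responds.
#[derive(Default)]
struct BacktrackState {
    pending: Option<u64>, // id of the rollback awaiting ThreadRolledBack
}

impl BacktrackState {
    // Returns false while a rollback is already in flight.
    fn request(&mut self, id: u64) -> bool {
        if self.pending.is_some() {
            return false; // one rollback at a time
        }
        self.pending = Some(id);
        true
    }

    // Only once core confirms is it safe to trim transcript cells.
    fn on_rolled_back(&mut self, id: u64) -> bool {
        if self.pending == Some(id) {
            self.pending = None;
            true
        } else {
            false
        }
    }

    // Clear the guard on failure so the user can retry.
    fn on_rollback_failed(&mut self) {
        self.pending = None;
    }
}
```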
2026-01-17 06:29:41 +00:00
Fouad Matin
93a5e0fe1c fix(codex-api): treat invalid_prompt as non-retryable (#9400)
**Goal**: Prevent response.failed events with `invalid_prompt` from
being treated as retryable errors so the UI shows the actual error
message instead of continually retrying.

**Before**: Codex would continue to retry despite the prompt being
marked as disallowed
**After**: Codex will stop retrying once prompt is marked disallowed
2026-01-16 22:22:08 -08:00
Ahmed Ibrahim
146d54cede Add collaboration_mode override to turns (#9408) 2026-01-16 21:51:25 -08:00
xl-openai
ad8bf59cbf Support enabling/disabling skills via config/API (#9328)
In config.toml:
```
[[skills.config]]
path = "/Users/xl/.codex/skills/my_skill/SKILL.md"
enabled = false
```

API:
skills/list, skills/config/write
2026-01-16 20:22:05 -08:00
Ahmed Ibrahim
246f506551 Introduce collaboration modes (#9340)
- Merge `model` and `reasoning_effort` under collaboration modes.
- Add additional instructions for custom collaboration mode
- Default to Custom to not change behavior
2026-01-17 00:28:22 +00:00
Anton Panasenko
c26fe64539 feat: show forked from session id in /status (#9330)
Summary:
- Add forked_from to SessionMeta/SessionConfiguredEvent and persist it
for forked sessions.
- Surface forked_from in /status for tui + tui2 and add snapshots.
2026-01-16 13:41:46 -08:00
Owen Lin
f1653dd4d3 feat(app-server, core): return threads by created_at or updated_at (#9247)
Add support for returning threads by either `created_at` OR `updated_at`
descending. Previously core always returned threads ordered by
`created_at`.

This PR:
- updates core to be able to list threads by `updated_at` OR
`created_at` descending based on what the caller wants
- also update `thread/list` in app-server to expose this (default to
`created_at` if not specified)

All existing codepaths (app-server, TUI) still default to `created_at`,
so no behavior change is expected with this PR.

**Implementation**
Sorting by `updated_at` is a bit nontrivial (whereas `created_at` is
easy due to the way we structure the folders and filenames on disk,
which are all based on `created_at`).

The most naive way to do this without introducing a cache file or sqlite
DB (which we'd have to implement and maintain) is to scan files in
reverse `created_at` order on disk, looking at each file's mtime (last
modified timestamp according to the filesystem) until we reach
`MAX_SCAN_FILES` (currently set to 10,000). Then we can return the most
recent N threads (sketched at the end of this entry).

Based on some quick and dirty benchmarking on my machine with ~1000
rollout files, calling `thread/list` with limit 50, the `updated_at`
path is slower as expected due to all the I/O:
- updated-at: average 103.10 ms
- created-at: average 41.10 ms

Those absolute numbers aren't a big deal IMO, but we can certainly
optimize this in a followup if needed by introducing more state stored
on disk.

**Caveat**
There's also a limitation: any files beyond the `MAX_SCAN_FILES` scan
window are excluded, which means that if a user continues a really old
thread, it may not show up. In practice that should not be too big
of an issue.

If a user makes...
- 1000 rollouts/day → threads older than 10 days won't show up
- 100 rollouts/day → ~100 days

If this becomes a problem for some reason, even more motivation to
implement an updated_at cache.
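
A minimal sketch of that scan, assuming the rollout paths are already
iterated in reverse `created_at` order and using hypothetical names:

```rust
use std::fs;
use std::path::PathBuf;
use std::time::SystemTime;

const MAX_SCAN_FILES: usize = 10_000;

// Walk rollout files (assumed newest-created first), read each file's
// mtime, and keep the most recently updated `limit` threads.
fn list_by_updated_at(paths: impl Iterator<Item = PathBuf>, limit: usize) -> Vec<PathBuf> {
    let mut scanned: Vec<(SystemTime, PathBuf)> = paths
        .take(MAX_SCAN_FILES) // anything beyond the cap is excluded
        .filter_map(|p| {
            let mtime = fs::metadata(&p).ok()?.modified().ok()?;
            Some((mtime, p))
        })
        .collect();
    // Most recently modified first.
    scanned.sort_by(|a, b| b.0.cmp(&a.0));
    scanned.into_iter().take(limit).map(|(_, p)| p).collect()
}
```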
2026-01-16 20:58:55 +00:00
Anton Panasenko
e893e83eb9 feat: /fork the current session instead of opening session picker (#9385)
Implemented /fork to fork the current session directly (no picker),
handling it via a new ForkCurrentSession app event in both tui and tui2.
Updated slash command descriptions/tooltips and adjusted the fork tests
accordingly. Removed the unused in-session fork picker event.
2026-01-16 11:28:52 -08:00
viyatb-oai
f89a40a849 chore: upgrade to Rust 1.92.0 (#8860)
**Summary**
- Upgrade Rust toolchain used by CI to 1.92.0.
- Address new clippy `derivable_impls` warnings by deriving `Default`
for enums across protocol, core, backend openapi models, and
windows-sandbox setup.
- Tidy up related test/config behavior (originator header handling, env
override cleanup) and remove a now-unused assignment in TUI/TUI2 render
layout.

**Testing**
- `just fmt`
- `just fix -p codex-tui`
- `just fix -p codex-tui2`
- `just fix -p codex-windows-sandbox`
- `cargo test -p codex-tui`
- `cargo test -p codex-tui2`
- `cargo test -p codex-windows-sandbox`
- `cargo test -p codex-core --test all`
- `cargo test -p codex-app-server --test all`
- `cargo test -p codex-mcp-server --test all`
- `cargo test --all-features`
2026-01-16 11:12:52 -08:00
jif-oai
e650d4b02c feat: tool call duration metric (#9364) 2026-01-16 18:33:14 +01:00
Ahmed Ibrahim
ebdd8795e9 Turn-state sticky routing per turn (#9332)
- capture the header from SSE/WS handshakes, store it per
ModelClientSession using `OnceLock`, echo it on turn-scoped requests,
and add SSE+WS integration tests for within-turn persistence and
cross-turn reset (see the sketch below).

- keep `x-codex-turn-state` sticky within a user turn to maintain
routing continuity for retries/tool follow-ups.
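
A rough sketch of the per-session sticky header, assuming a hypothetical
wrapper rather than the actual ModelClientSession field; a fresh
instance per user turn gives the cross-turn reset:

```rust
use std::sync::OnceLock;

// The first turn-state header seen on a handshake is captured once and
// echoed on every subsequent turn-scoped request within the same turn.
struct TurnStateHeader {
    value: OnceLock<String>,
}

impl TurnStateHeader {
    fn new() -> Self {
        Self { value: OnceLock::new() }
    }

    // Later captures within the same turn are ignored, keeping routing
    // sticky for retries and tool follow-ups.
    fn capture(&self, header: &str) {
        let _ = self.value.set(header.to_string());
    }

    // Echo the captured value, if any, on turn-scoped requests.
    fn current(&self) -> Option<&str> {
        self.value.get().map(String::as_str)
    }
}
```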
2026-01-16 09:30:11 -08:00
Jeremy Rose
4125c825f9 add codex cloud list (#9324)
for listing cloud tasks.
2026-01-16 08:56:38 -08:00
Eric Traut
9147df0e60 Made codex exec resume --last consistent with codex resume --last (#9352)
PR #9245 made `codex resume --last` honor cwd, but I forgot to make the
same change for `codex exec resume --last`. This PR fixes the
inconsistency.

This addresses #8700
2026-01-16 08:53:47 -08:00
Matthew Zeng
131590066e [device-auth] Add device code auth as a standalone option when headless environment is detected. (#9333) 2026-01-16 08:35:03 -08:00
jif-oai
2691e1ce21 fix: flaky tests (#9373) 2026-01-16 17:24:41 +01:00
jif-oai
1668ca726f chore: close pipe on non-pty processes (#9369)
Close the STDIN of piped processes when starting them to avoid commands
like `rg` waiting for content on STDIN and hanging forever (see the
sketch below).
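
In `std` terms the fix is roughly this shape (hypothetical helper; the
real code drives the unified exec spawn path):

```rust
use std::process::{Command, Stdio};

// Give non-PTY children a null stdin so commands like `rg` that read
// stdin when given no paths see EOF immediately instead of blocking.
fn spawn_piped(program: &str, args: &[&str]) -> std::io::Result<std::process::Child> {
    Command::new(program)
        .args(args)
        .stdin(Stdio::null()) // nothing to read: EOF immediately
        .stdout(Stdio::piped())
        .stderr(Stdio::piped())
        .spawn()
}
```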
2026-01-16 15:54:32 +01:00
jif-oai
7905e99d03 prompt collab (#9367) 2026-01-16 15:12:41 +01:00
jif-oai
7fc49697dd feat: CODEX_CI (#9366) 2026-01-16 13:52:16 +00:00
jif-oai
c576756c81 feat: collab wait multiple IDs (#9294) 2026-01-16 12:05:04 +01:00
jif-oai
c1ac5223e1 feat: run user commands under user snapshot (#9357)
The initial goal is for user snapshots to have access to aliases etc
2026-01-16 11:49:28 +01:00
jif-oai
f5b3e738fb feat: propagate approval request of unsubscribed threads (#9232)
A thread can now be spawned by another thread. In order to process the
approval requests of such sub-threads, we need to detect those events and
show them in the TUI.

This is a temporary solution while the UX is being figured out. This PR
should be reverted once done
2026-01-16 11:23:01 +01:00
Ahmed Ibrahim
0cce6ebd83 rename model turn to sampling request (#9336)
We have two types of turns now: model turns and user turns. It's easy to
confuse the two; a model turn is basically a sampling request.
2026-01-16 10:06:24 +01:00
Eric Traut
1fc72c647f Fix token estimate during compaction (#9337)
This addresses #9287
2026-01-15 19:48:11 -08:00
Michael Bolin
99f47d6e9a fix(mcp): include threadId in both content and structuredContent in CallToolResult (#9338) 2026-01-15 18:33:11 -08:00
Thanh Nguyen
a6324ab34b fix(tui): only show 'Worked for' separator when actual work was performed (#8958)
Fixes #7919.

This PR addresses a TUI display bug where the "Worked for" separator
would appear prematurely during the planning stage.

**Changes:**
- Added `had_work_activity` flag to `ChatWidget` to track if actual work
(exec commands, MCP tool calls, patches) was performed in the current
turn.
- Updated `handle_streaming_delta` to only display the
`FinalMessageSeparator` if both `needs_final_message_separator` AND
`had_work_activity` are true.
- Updated `handle_exec_end_now`, `handle_patch_apply_end_now`, and
`handle_mcp_end_now` to set `had_work_activity = true`.

**Verification:**
- Ran `cargo test -p codex-tui` to ensure no regressions.
- Manual verification confirms the separator now only appears after
actual work is completed.

---------

Co-authored-by: Josh McKinney <joshka@openai.com>
2026-01-16 01:41:43 +00:00
Dylan Hurd
3cabb24210 chore(windows) Enable Powershell UTF8 feature (#9195)
## Summary
We've received a lot of positive feedback about this feature, so we're
going to enable it by default.
2026-01-16 01:29:12 +00:00
charley-oai
1fa8350ae7 Add text element metadata to protocol, app server, and core (#9331)
The second part of breaking up PR
https://github.com/openai/codex/pull/9116

Summary:

- Add `TextElement` / `ByteRange` to protocol user inputs and user
message events with defaults.
- Thread `text_elements` through app-server v1/v2 request handling and
history rebuild.
- Preserve UI metadata only in user input/events (not `ContentItem`)
while keeping local image attachments in user events for rehydration.

Details:

- Protocol: `UserInput::Text` carries `text_elements`;
`UserMessageEvent` carries `text_elements` + `local_images`.
Serialization includes empty vectors for backward compatibility.
- app-server-protocol: v1 defines `V1TextElement` / `V1ByteRange` in
camelCase with conversions; v2 uses its own camelCase wrapper.
- app-server: v1/v2 input mapping includes `text_elements`; thread
history rebuilds include them.
- Core: user event emission preserves UI metadata while model history
stays clean; history replay round-trips the metadata.
2026-01-15 17:26:41 -08:00
Yuvraj Angad Singh
004a74940a fix: send non-null content on elicitation Accept (#9196)
## Summary

- When a user accepts an MCP elicitation request, send `content:
Some(json!({}))` instead of `None`
- MCP servers that use elicitation expect content to be present when
action is Accept
- This matches the expected behavior shown in tests at
`exec-server/tests/common/lib.rs:171`

## Root Cause

In `codex-rs/core/src/codex.rs`, the `resolve_elicitation` function
always sent `content: None`:

```rust
let response = ElicitationResponse {
    action,
    content: None,  // Always None, even for Accept
};
```

## Fix

Send an empty object when accepting:

```rust
let content = match action {
    ElicitationAction::Accept => Some(serde_json::json!({})),
    ElicitationAction::Decline | ElicitationAction::Cancel => None,
};
```

## Test plan

- [x] Code compiles with `cargo check -p codex-core`
- [x] Formatted with `just fmt`
- [ ] Integration test `accept_elicitation_for_prompt_rule` (requires
MCP server binary)

Fixes #9053
2026-01-15 14:20:57 -08:00
Ahmed Ibrahim
749b58366c Revert empty paste image handling (#9318)
Revert #9049 behavior so empty paste events no longer trigger a
clipboard image read.
2026-01-15 14:16:09 -08:00
pap-openai
d886a8646c remove needs_follow_up error log (#9272) 2026-01-15 21:20:54 +00:00
sayan-oai
169201b1b5 [search] allow explicitly disabling web search (#9249)
We're moving the `web_search` rollout server-side, so we need a way to
explicitly disable search and signal eligibility from the client.

- Add an `x-oai-web-search-eligible` header that signifies whether the
request can use web search.
- Only attach the `web_search` tool when the resolved `WebSearchMode` is
`Live` or `Cached`.
2026-01-15 11:28:57 -08:00
xl-openai
42fa4c237f Support SKILL.toml file. (#9125)
We’re introducing a new SKILL.toml to hold skill metadata so Codex can
deliver a richer Skills experience.

Initial focus is the interface block:
```
[interface]
display_name = "Optional user-facing name"
short_description = "Optional user-facing description"
icon_small = "./assets/small-400px.png"
icon_large = "./assets/large-logo.svg"
brand_color = "#3B82F6"
default_prompt = "Optional surrounding prompt to use the skill with"
```

All fields are exposed via the app server API.
`display_name` and `short_description` are consumed by the TUI.
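
A minimal sketch of parsing the interface block, assuming `serde` and `toml`; the struct names are hypothetical, only the field names come from the example above:

```rust
use serde::Deserialize;

#[derive(Debug, Default, Deserialize)]
#[serde(default)]
struct SkillInterface {
    display_name: Option<String>,
    short_description: Option<String>,
    icon_small: Option<String>,
    icon_large: Option<String>,
    brand_color: Option<String>,
    default_prompt: Option<String>,
}

#[derive(Debug, Deserialize)]
struct SkillToml {
    // The interface block is optional; all of its fields are optional too.
    interface: Option<SkillInterface>,
}

fn main() -> Result<(), toml::de::Error> {
    let doc = r#"
        [interface]
        display_name = "My Skill"
        brand_color = "#3B82F6"
    "#;
    let skill: SkillToml = toml::from_str(doc)?;
    println!("{:?}", skill.interface);
    Ok(())
}
```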
2026-01-15 11:20:04 -08:00
Eric Traut
5f10548772 Revert recent styling change for input prompt placeholder text (#9307)
A recent change in commit ccba737d26 modified the styling of the
placeholder text (e.g. "Implement {feature}") in the input box of the
CLI, changing it from non-italic to italic. I think this was likely
unintentional. It results in a bad display appearance on some terminal
emulators, and several users have complained about it.

This change switches back to non-italic styling, restoring the older
behavior.

It addresses #9262
2026-01-15 10:58:12 -08:00
jif-oai
da44569fef nit: clean unified exec background processes (#9304)
To fix the occurrences where the End event is received after the listener
stopped listening.
2026-01-15 18:34:33 +00:00
jif-oai
393a5a0311 chore: better orchestrator prompt (#9301) 2026-01-15 18:11:43 +00:00
viyatb-oai
55bda1a0f2 revert: remove pre-Landlock bind mounts apply (#9300)
**Description**

This removes the pre-Landlock read-only bind-mount step from the Linux
sandbox so filesystem restrictions rely solely on Landlock again.
`mounts.rs` is kept in place but left unused. The linux-sandbox README
is updated to match the new behavior and manual test expectations.
2026-01-15 09:47:57 -08:00
李琼羽
b4d240c3ae fix(exec): improve stdin prompt decoding (#9151)
Fixes #8733.

- Read prompt from stdin as raw bytes and decode more helpfully.
- Strip UTF-8 BOM; decode UTF-16LE/UTF-16BE when a BOM is present.
- For other non-UTF8 input, fail with an actionable message (offset +
iconv hint).

Tests: `cargo test -p codex-exec`.
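
A hedged sketch of that decode order (strip a UTF-8 BOM, honor UTF-16 BOMs, otherwise fail with the offending byte offset); the exact messages and helper names in codex-exec differ:

```rust
fn decode_prompt(bytes: &[u8]) -> Result<String, String> {
    // UTF-8 BOM: EF BB BF — strip it and fall through to UTF-8 decoding.
    let bytes = bytes.strip_prefix(&[0xEF, 0xBB, 0xBF]).unwrap_or(bytes);
    // UTF-16 BOMs: FF FE (little-endian) or FE FF (big-endian).
    if let Some(rest) = bytes.strip_prefix(&[0xFF, 0xFE]) {
        return decode_utf16(rest, u16::from_le_bytes);
    }
    if let Some(rest) = bytes.strip_prefix(&[0xFE, 0xFF]) {
        return decode_utf16(rest, u16::from_be_bytes);
    }
    // Anything else must be valid UTF-8; report the offset and an iconv hint.
    String::from_utf8(bytes.to_vec()).map_err(|e| {
        let offset = e.utf8_error().valid_up_to();
        format!("stdin is not valid UTF-8 at byte {offset}; try `iconv -t utf-8`")
    })
}

fn decode_utf16(bytes: &[u8], read: fn([u8; 2]) -> u16) -> Result<String, String> {
    if bytes.len() % 2 != 0 {
        return Err("truncated UTF-16 input".into());
    }
    let units: Vec<u16> = bytes.chunks_exact(2).map(|c| read([c[0], c[1]])).collect();
    String::from_utf16(&units).map_err(|_| "invalid UTF-16 input".into())
}

fn main() {
    assert_eq!(decode_prompt(b"\xEF\xBB\xBFhi").unwrap(), "hi");
    assert_eq!(decode_prompt(b"\xFF\xFEh\x00i\x00").unwrap(), "hi");
}
```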
2026-01-15 09:29:05 -08:00
gt-oai
f6df1596eb Propagate MCP disabled reason (#9207)
Indicate why MCP servers are disabled when they are disabled by
requirements:

```
➜  codex git:(main) ✗ just codex mcp list
cargo run --bin codex -- "$@"
    Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.27s
     Running `target/debug/codex mcp list`
Name         Command          Args  Env  Cwd  Status                                                                  Auth
docs         docs-mcp         -     -    -    disabled: requirements (MDM com.openai.codex:requirements_toml_base64)  Unsupported
hello_world  hello-world-mcp  -     -    -    disabled: requirements (MDM com.openai.codex:requirements_toml_base64)  Unsupported

➜  codex git:(main) ✗ just c
cargo run --bin codex -- "$@"
    Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.90s
     Running `target/debug/codex`
╭─────────────────────────────────────────────╮
│ >_ OpenAI Codex (v0.0.0)                    │
│                                             │
│ model:     gpt-5.2 xhigh   /model to change │
│ directory: ~/code/codex/codex-rs            │
╰─────────────────────────────────────────────╯

/mcp

🔌  MCP Tools

  • No MCP tools available.

  • docs (disabled)
    • Reason: requirements (MDM com.openai.codex:requirements_toml_base64)

  • hello_world (disabled)
    • Reason: requirements (MDM com.openai.codex:requirements_toml_base64)
```
2026-01-15 17:24:00 +00:00
Eric Traut
ae96a15312 Changed codex resume --last to honor the current cwd (#9245)
This PR changes `codex resume --last` to work consistently with `codex
resume`. Namely, it filters based on the cwd when selecting the last
session. It also supports the `--all` modifier as an override.

This addresses #8700
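
A sketch of the selection rule, assuming a flat session index; the types are stand-ins for the real rollout metadata:

```rust
use std::path::{Path, PathBuf};

struct SessionMeta {
    cwd: PathBuf,
    updated_at: u64,
}

/// Pick the most recent session, restricted to `cwd` unless `--all` is set.
fn pick_last<'a>(sessions: &'a [SessionMeta], cwd: &Path, all: bool) -> Option<&'a SessionMeta> {
    sessions
        .iter()
        .filter(|s| all || s.cwd == cwd)
        .max_by_key(|s| s.updated_at)
}

fn main() {
    let here = PathBuf::from("/repo");
    let sessions = vec![
        SessionMeta { cwd: PathBuf::from("/other"), updated_at: 2 },
        SessionMeta { cwd: here.clone(), updated_at: 1 },
    ];
    // Filtered: the older in-cwd session wins over the newer one elsewhere.
    assert_eq!(pick_last(&sessions, &here, false).unwrap().updated_at, 1);
    // --all overrides the cwd filter.
    assert_eq!(pick_last(&sessions, &here, true).unwrap().updated_at, 2);
}
```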
2026-01-15 17:05:08 +00:00
jif-oai
3fc487e0e0 feat: basic tui for event emission (#9209) 2026-01-15 15:53:02 +00:00
jif-oai
faeb08c1e1 feat: add interrupt capabilities to send_input (#9276) 2026-01-15 14:59:07 +00:00
jif-oai
05b960671d feat: add agent roles to collab tools (#9275)
Add an `agent_type` parameter to the collab tool `spawn_agent` that
contains a preset to apply to the config when spawning this agent.
2026-01-15 13:33:52 +00:00
jif-oai
bad4c12b9d feat: collab tools app-server event mapping (#9213) 2026-01-15 09:03:26 +00:00
viyatb-oai
2259031d64 fix: fallback to Landlock-only when user namespaces unavailable and set PR_SET_NO_NEW_PRIVS early (#9250)
fixes https://github.com/openai/codex/issues/9236

### Motivation
- Prevent sandbox setup from failing when unprivileged user namespaces
are denied so Landlock-only protections can still be applied.
- Ensure `PR_SET_NO_NEW_PRIVS` is set before installing seccomp and
Landlock restrictions to avoid kernel `EPERM`/`LandlockRestrict`
ordering issues.

### Description
- Add `is_permission_denied` helper that detects `EPERM` /
`PermissionDenied` from `CodexErr` to drive fallback logic.
- In `apply_read_only_mounts` skip read-only bind-mount setup and return
`Ok(())` when `unshare_user_and_mount_namespaces()` fails with
permission-denied so Landlock rules can still be installed.
- Add `set_no_new_privs()` and call it from
`apply_sandbox_policy_to_current_thread` before installing seccomp
filters and Landlock rules when disk or network access is restricted.
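
A hedged sketch of the two pieces, assuming the `libc` crate; the real helpers wrap this in codex's own error types:

```rust
// Must run before seccomp filters and Landlock rules are installed,
// otherwise the kernel can reject them with EPERM.
#[cfg(target_os = "linux")]
fn set_no_new_privs() -> std::io::Result<()> {
    let rc = unsafe {
        libc::prctl(
            libc::PR_SET_NO_NEW_PRIVS,
            1 as libc::c_ulong,
            0 as libc::c_ulong,
            0 as libc::c_ulong,
            0 as libc::c_ulong,
        )
    };
    if rc != 0 {
        return Err(std::io::Error::last_os_error());
    }
    Ok(())
}

// Fallback shape: treat EPERM from namespace setup as "skip the mount
// step" so Landlock-only protections can still be applied.
#[cfg(target_os = "linux")]
fn skip_mounts_on_permission_denied(res: std::io::Result<()>) -> std::io::Result<()> {
    match res {
        Err(e) if e.raw_os_error() == Some(libc::EPERM) => Ok(()),
        other => other,
    }
}
```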
2026-01-14 22:24:34 -08:00
Ahmed Ibrahim
a09711332a Add migration_markdown in model_info (#9219)
Next step would be to clean Model Upgrade in model presets

---------

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: aibrahim-oai <219906144+aibrahim-oai@users.noreply.github.com>
2026-01-15 01:55:22 +00:00
charley-oai
4a9c2bcc5a Add text element metadata to types (#9235)
Initial type tweaking PR to make the diff of
https://github.com/openai/codex/pull/9116 smaller

This should not change any behavior, just adds some fields to types
2026-01-14 16:41:50 -08:00
Michael Bolin
2a68b74b9b fix: increase timeout for release builds from 30 to 60 minutes (#9242)
Windows builds have been tripping the 30 minute timeout. For sure, we
need to improve this, but as a quick fix, let's just increase the
timeout.

Perhaps we should switch to `lto = "thin"` for release builds, at least
for Windows:


3728db11b8/codex-rs/Cargo.toml (L288)

See https://doc.rust-lang.org/cargo/reference/profiles.html#lto for
details.
2026-01-15 00:38:25 +00:00
Michael Bolin
3728db11b8 fix: eliminate unnecessary clone() for each SSE event (#9238)
Given how many SSE events we get, seems worth fixing.
2026-01-15 00:06:09 +00:00
viyatb-oai
e59e7d163d fix: correct linux sandbox uid/gid mapping after unshare (#9234)
fixes https://github.com/openai/codex/issues/9233
## Summary
- capture effective uid/gid before unshare for user namespace maps
- pass captured ids into uid/gid map writer

## Testing
- just fmt
- just fix -p codex-linux-sandbox
- cargo test -p codex-linux-sandbox
2026-01-14 15:35:53 -08:00
willwang-openai
71a2973fd9 upgrade runners in rust-ci.yml to use the larger runners (#9106)
Upgrades runners in rust-ci.yml to larger runners

ubuntu-24.04 (x64 and arm64) -> custom 16-core ubuntu 24.04 runners
macos-14 -> macos-15-xlarge
[TODO] windows (x64 and arm64) -> custom 16-core windows runners
2026-01-14 15:22:59 -08:00
Eric Traut
24b88890cb Updated issue template to ask for terminal emulator (#9231)
Co-authored-by: Josh McKinney <joshka@openai.com>
2026-01-14 23:00:15 +00:00
sayan-oai
5e426ac270 add WebSearchMode enum (#9216)
### What
Add `WebSearchMode` enum (disabled, cached live, defaults to cached) to
config + V2 protocol. This enum takes precedence over legacy flags:
`web_search_cached`, `web_search_request`, and `tools.web_search`.

Keep `--search` as live.
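
A sketch of the precedence rule (the enum wins over the legacy flags), with serde derives assumed and only one legacy flag modeled:

```rust
use serde::Deserialize;

#[derive(Debug, Clone, Copy, PartialEq, Default, Deserialize)]
#[serde(rename_all = "snake_case")]
enum WebSearchMode {
    Disabled,
    #[default]
    Cached,
    Live,
}

/// The explicit mode, when present, takes precedence over legacy flags.
fn resolve(mode: Option<WebSearchMode>, legacy_request: Option<bool>) -> WebSearchMode {
    if let Some(mode) = mode {
        return mode;
    }
    match legacy_request {
        Some(true) => WebSearchMode::Live,
        Some(false) => WebSearchMode::Disabled,
        None => WebSearchMode::default(), // defaults to cached
    }
}

fn main() {
    assert_eq!(resolve(Some(WebSearchMode::Disabled), Some(true)), WebSearchMode::Disabled);
    assert_eq!(resolve(None, None), WebSearchMode::Cached);
}
```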

### Tests
Added tests
2026-01-14 12:51:42 -08:00
Josh McKinney
27da8a68d3 fix(tui): disable double-press quit shortcut (#9220)
Disables the default Ctrl+C/Ctrl+D double-press quit UX (keeps the code
path behind a const) while we rethink the quit/interrupt flow.

Tests:
- just fmt
- cargo clippy --fix --all-features --tests --allow-dirty --allow-no-vcs
-p codex-tui
- cargo test -p codex-tui --lib
2026-01-14 20:28:18 +00:00
Josh McKinney
0471ddbe74 fix(tui2): align Steer submit keys (#9218)
- Remove legacy Ctrl+K queuing in tui2; Tab is the queue key.
- Make Enter queue when Steer is disabled and submit immediately when
Steer is enabled.
- Add Steer keybinding docs on both tui and tui2 chat composers.
2026-01-14 19:35:55 +00:00
pakrym-oai
e6d2ef432d Rename hierarchical_agents to child_agents_md (#9215)
Clearer name
2026-01-14 19:14:24 +00:00
jif-oai
577e1fd1b2 feat: adding piped process to replace PTY when needed (#8797) 2026-01-14 18:44:04 +00:00
gt-oai
fe1e0da102 s/mcp_server_requirements/mcp_servers (#9212)
A simple `s/mcp_server_requirements/mcp_servers/g` for an unreleased
feature. As @bolinfest correctly pointed out, it's already in
`requirements.toml`, so the `_requirements` suffix is redundant.
2026-01-14 18:41:52 +00:00
pakrym-oai
e958d0337e Log headers in trace mode (#9214)
To enable:

```
export RUST_LOG="warn,codex_=trace"
```

Sample: 
```
Request completed method=POST url=https://chatgpt.com/backend-api/codex/responses status=200 OK headers={"date": "Wed, 14 Jan 2026 18:21:21 GMT", "transfer-encoding": "chunked", "connection": "keep-alive", "x-codex-plan-type": "business", "x-codex-primary-used-percent": "3", "x-codex-secondary-used-percent": "6", "x-codex-primary-window-minutes": "300", "x-codex-primary-over-secondary-limit-percent": "0", "x-codex-secondary-window-minutes": "10080", "x-codex-primary-reset-after-seconds": "9944", "x-codex-secondary-reset-after-seconds": "171121", "x-codex-primary-reset-at": "1768424824", "x-codex-secondary-reset-at": "1768586001", "x-codex-credits-has-credits": "False", "x-codex-credits-balance": "", "x-codex-credits-unlimited": "False", "x-models-etag": "W/\"7a7ffbc83c159dbd7a2a73aaa9c91b7a\"", "x-oai-request-id": "ffedcd30-6d8a-4c4d-be10-8ebb23c142c8", "x-envoy-upstream-service-time": "417", "x-openai-proxy-wasm": "v0.1", "cf-cache-status": "DYNAMIC", "set-cookie": "__cf_bm=xFKeaMbWNbKO5ZX.K5cJBhj34OA1QvnF_3nkdMThjlA-1768414881-1.0.1.1-uLpsE_BDkUfcmOMaeKVQmv_6_2ytnh_R3lO_il5N5K3YPQEkBo0cOMTdma6bK0Gz.hQYcIesFwKIJht1kZ9JKqAYYnjgB96hF4.sii2U3cE; path=/; expires=Wed, 14-Jan-26 18:51:21 GMT; domain=.chatgpt.com; HttpOnly; Secure; SameSite=None", "report-to": "{\"endpoints\":[{\"url\":\"https:\/\/a.nel.cloudflare.com\/report\/v4?s=4Kc7g4zUhKkIm3xHuB6ba4jyIUqqZ07ETwIPAYQASikRjA8JesbtUKDP9tSrZ5PnzWldaiSz5dZVQFI579LEsCMlMUSelTvmyQ8j4FbFDawi%2FprWZ5iRePiaSalr\"}],\"group\":\"cf-nel\",\"max_age\":604800}", "nel": "{\"success_fraction\":0.01,\"report_to\":\"cf-nel\",\"max_age\":604800}", "strict-transport-security": "max-age=31536000; includeSubDomains; preload", "x-content-type-options": "nosniff", "cross-origin-opener-policy": "same-origin-allow-popups", "referrer-policy": "strict-origin-when-cross-origin", "server": "cloudflare", "cf-ray": "9bdf270adc7aba3a-SEA"} version=HTTP/1.1
```
2026-01-14 18:38:12 +00:00
Ahmed Ibrahim
8e937fbba9 Get model on session configured (#9191)
- Don't try to precompute the model unless you know it from `config`
- Block `/model` until the session is configured
- Queue messages until the session is configured
- Show "loading" in the status until the session is configured
2026-01-14 10:20:41 -08:00
Celia Chen
02f67bace8 fix: Emit response.completed immediately for Responses SSE (#9170)
We see Windows test failures like this:
https://github.com/openai/codex/actions/runs/20930055601/job/60138344260.

The issue is that SSE connections sometimes remain open after the
completion event, especially on Windows. We should emit the completion
event and return immediately. This is consistent with the protocol:

> The Model streams responses back in an SSE, which are collected until
"completed" message and the SSE terminates

from
https://github.com/openai/codex/blob/dev/cc/fix-windows-test/codex-rs/docs/protocol_v1.md#L37.

this helps us achieve parity with responses websocket logic here:
https://github.com/openai/codex/blob/dev/cc/fix-windows-test/codex-rs/codex-api/src/endpoint/responses_websocket.rs#L220-L227.
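
A toy model of the change: stop consuming the stream as soon as the completion event is seen instead of waiting for the connection to close. This is not the actual codex SSE plumbing:

```rust
fn drain_events<I: Iterator<Item = String>>(events: I) -> Vec<String> {
    let mut collected = Vec::new();
    for event in events {
        let done = event == "response.completed";
        collected.push(event);
        if done {
            break; // emit the completion event and return immediately
        }
    }
    collected
}

fn main() {
    let events =
        ["response.output_text.delta", "response.completed", "late-noise"].map(String::from);
    // The trailing event after completion is never consumed.
    assert_eq!(drain_events(events.into_iter()).len(), 2);
}
```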
2026-01-14 10:05:00 -08:00
jif-oai
3d322fa9d8 feat: add collab prompt (#9208)
Adding a prompt for collab tools. This is only for internal use and the
prompt won't be gated for now as it is not stable yet.

The goal of this PR is to provide the tool required to iterate on the
prompt
2026-01-14 09:58:15 -08:00
jif-oai
6a939ed7a4 feat: emit events around collab tools (#9095)
Emit the following events around the collab tools. On the `app-server`
this will be under `item/started` and `item/completed`
```
#[derive(Debug, Clone, Deserialize, Serialize, PartialEq, JsonSchema, TS)]
pub struct CollabAgentSpawnBeginEvent {
    /// Identifier for the collab tool call.
    pub call_id: String,
    /// Thread ID of the sender.
    pub sender_thread_id: ThreadId,
    /// Initial prompt sent to the agent. Can be empty to prevent CoT leaking at the
    /// beginning.
    pub prompt: String,
}

#[derive(Debug, Clone, Deserialize, Serialize, PartialEq, JsonSchema, TS)]
pub struct CollabAgentSpawnEndEvent {
    /// Identifier for the collab tool call.
    pub call_id: String,
    /// Thread ID of the sender.
    pub sender_thread_id: ThreadId,
    /// Thread ID of the newly spawned agent, if it was created.
    pub new_thread_id: Option<ThreadId>,
    /// Initial prompt sent to the agent. Can be empty to prevent CoT leaking at the
    /// beginning.
    pub prompt: String,
    /// Last known status of the new agent reported to the sender agent.
    pub status: AgentStatus,
}

#[derive(Debug, Clone, Deserialize, Serialize, PartialEq, JsonSchema, TS)]
pub struct CollabAgentInteractionBeginEvent {
    /// Identifier for the collab tool call.
    pub call_id: String,
    /// Thread ID of the sender.
    pub sender_thread_id: ThreadId,
    /// Thread ID of the receiver.
    pub receiver_thread_id: ThreadId,
    /// Prompt sent from the sender to the receiver. Can be empty to prevent CoT
    /// leaking at the beginning.
    pub prompt: String,
}

#[derive(Debug, Clone, Deserialize, Serialize, PartialEq, JsonSchema, TS)]
pub struct CollabAgentInteractionEndEvent {
    /// Identifier for the collab tool call.
    pub call_id: String,
    /// Thread ID of the sender.
    pub sender_thread_id: ThreadId,
    /// Thread ID of the receiver.
    pub receiver_thread_id: ThreadId,
    /// Prompt sent from the sender to the receiver. Can be empty to prevent CoT
    /// leaking at the beginning.
    pub prompt: String,
    /// Last known status of the receiver agent reported to the sender agent.
    pub status: AgentStatus,
}

#[derive(Debug, Clone, Deserialize, Serialize, PartialEq, JsonSchema, TS)]
pub struct CollabWaitingBeginEvent {
    /// Thread ID of the sender.
    pub sender_thread_id: ThreadId,
    /// Thread ID of the receiver.
    pub receiver_thread_id: ThreadId,
    /// ID of the waiting call.
    pub call_id: String,
}

#[derive(Debug, Clone, Deserialize, Serialize, PartialEq, JsonSchema, TS)]
pub struct CollabWaitingEndEvent {
    /// Thread ID of the sender.
    pub sender_thread_id: ThreadId,
    /// Thread ID of the receiver.
    pub receiver_thread_id: ThreadId,
    /// ID of the waiting call.
    pub call_id: String,
    /// Last known status of the receiver agent reported to the sender agent.
    pub status: AgentStatus,
}

#[derive(Debug, Clone, Deserialize, Serialize, PartialEq, JsonSchema, TS)]
pub struct CollabCloseBeginEvent {
    /// Identifier for the collab tool call.
    pub call_id: String,
    /// Thread ID of the sender.
    pub sender_thread_id: ThreadId,
    /// Thread ID of the receiver.
    pub receiver_thread_id: ThreadId,
}

#[derive(Debug, Clone, Deserialize, Serialize, PartialEq, JsonSchema, TS)]
pub struct CollabCloseEndEvent {
    /// Identifier for the collab tool call.
    pub call_id: String,
    /// Thread ID of the sender.
    pub sender_thread_id: ThreadId,
    /// Thread ID of the receiver.
    pub receiver_thread_id: ThreadId,
    /// Last known status of the receiver agent reported to the sender agent before
    /// the close.
    pub status: AgentStatus,
}
```
2026-01-14 17:55:57 +00:00
Josh McKinney
4283a7432b tui: double-press Ctrl+C/Ctrl+D to quit (#8936)
## Problem

Codex’s TUI quit behavior has historically been easy to trigger
accidentally and hard to reason
about.

- `Ctrl+C`/`Ctrl+D` could terminate the UI immediately, which is a
common key to press while trying
  to dismiss a modal, cancel a command, or recover from a stuck state.
- “Quit” and “shutdown” were not consistently separated, so some exit
paths could bypass the
  shutdown/cleanup work that should run before the process terminates.

This PR makes quitting both safer (harder to do by accident) and more
uniform across quit
gestures, while keeping the shutdown-first semantics explicit.

## Mental model

After this change, the system treats quitting as a UI request that is
coordinated by the app
layer.

- The UI requests exit via `AppEvent::Exit(ExitMode)`.
- `ExitMode::ShutdownFirst` is the normal user path: the app triggers
`Op::Shutdown`, continues
rendering while shutdown runs, and only ends the UI loop once shutdown
has completed.
- `ExitMode::Immediate` exists as an escape hatch (and as the
post-shutdown “now actually exit”
signal); it bypasses cleanup and should not be the default for
user-triggered quits.

User-facing quit gestures are intentionally “two-step” for safety:

- `Ctrl+C` and `Ctrl+D` no longer exit immediately.
- The first press arms a 1-second window and shows a footer hint (“ctrl
+ <key> again to quit”).
- Pressing the same key again within the window requests a
shutdown-first quit; otherwise the
  hint expires and the next press starts a fresh window.

Key routing remains modal-first:

- A modal/popup gets first chance to consume `Ctrl+C`.
- If a modal handles `Ctrl+C`, any armed quit shortcut is cleared so
dismissing a modal cannot
  prime a subsequent `Ctrl+C` to quit.
- `Ctrl+D` only participates in quitting when the composer is empty and
no modal/popup is active.

The design doc `docs/exit-confirmation-prompt-design.md` captures the
intended routing and the
invariants the UI should maintain.

## Non-goals

- This does not attempt to redesign modal UX or make modals uniformly
dismissible via `Ctrl+C`.
It only ensures modals get priority and that quit arming does not leak
across modal handling.
- This does not introduce a persistent confirmation prompt/menu for
quitting; the goal is to keep
  the exit gesture lightweight and consistent.
- This does not change the semantics of core shutdown itself; it changes
how the UI requests and
  sequences it.

## Tradeoffs

- Quitting via `Ctrl+C`/`Ctrl+D` now requires a deliberate second
keypress, which adds friction for
  users who relied on the old “instant quit” behavior.
- The UI now maintains a small time-bounded state machine for the armed
shortcut, which increases
  complexity and introduces timing-dependent behavior.

This design was chosen over alternatives (a modal confirmation prompt or
a long-lived “are you
sure” state) because it provides an explicit safety barrier while
keeping the flow fast and
keyboard-native.

## Architecture

- `ChatWidget` owns the quit-shortcut state machine and decides when a
quit gesture is allowed
  (idle vs cancellable work, composer state, etc.).
- `BottomPane` owns rendering and local input routing for modals/popups.
It is responsible for
consuming cancellation keys when a view is active and for
showing/expiring the footer hint.
- `App` owns shutdown sequencing: translating
`AppEvent::Exit(ShutdownFirst)` into `Op::Shutdown`
  and only terminating the UI loop when exit is safe.

This keeps “what should happen” decisions (quit vs interrupt vs ignore)
in the chat/widget layer,
while keeping “how it looks and which view gets the key” in the
bottom-pane layer.

## Observability

You can tell this is working by running the TUIs and exercising the quit
gestures:

- While idle: pressing `Ctrl+C` (or `Ctrl+D` with an empty composer and
no modal) shows a footer
hint for ~1 second; pressing again within that window exits via
shutdown-first.
- While streaming/tools/review are active: `Ctrl+C` interrupts work
rather than quitting.
- With a modal/popup open: `Ctrl+C` dismisses/handles the modal (if it
chooses to) and does not
arm a quit shortcut; a subsequent quick `Ctrl+C` should not quit unless
the user re-arms it.

Failure modes are visible as:

- Quits that happen immediately (no hint window) from `Ctrl+C`/`Ctrl+D`.
- Quits that occur while a modal is open and consuming `Ctrl+C`.
- UI termination before shutdown completes (cleanup skipped).

## Tests

- Updated/added unit and snapshot coverage in `codex-tui` and
`codex-tui2` to validate:
  - The quit hint appears and expires on the expected key.
- Double-press within the window triggers a shutdown-first quit request.
- Modal-first routing prevents quit bypass and clears any armed shortcut
when a modal consumes
    `Ctrl+C`.

These tests focus on the UI-level invariants and rendered output; they
do not attempt to validate
real terminal key-repeat timing or end-to-end process shutdown behavior.

---
Screenshot:
<img width="912" height="740" alt="Screenshot 2026-01-13 at 1 05 28 PM"
src="https://github.com/user-attachments/assets/18f3d22e-2557-47f2-a369-ae7a9531f29f"
/>
2026-01-14 17:42:52 +00:00
pakrym-oai
92472e7baa Use current model for review (#9179)
Instead of having a hard-coded default review model, use the current
model for running `/review` unless one is specified in the config.

Also inherit current reasoning effort
2026-01-14 08:59:41 -08:00
viyatb-oai
e1447c3009 feat: add support for read-only bind mounts in the linux sandbox (#9112)
### Motivation

- Landlock alone cannot prevent writes to sensitive in-repo files like
`.git/` when the repo root is writable, so explicit mount restrictions
are required for those paths.
- The sandbox must set up any mounts before calling Landlock so Landlock
can still be applied afterwards and the two mechanisms compose
correctly.

### Description

- Add a new `linux-sandbox` helper `apply_read_only_mounts` in
`linux-sandbox/src/mounts.rs` that: unshares namespaces, maps uids/gids
when required, makes mounts private, bind-mounts targets, and remounts
them read-only.
- Wire the mount step into the sandbox flow by calling
`apply_read_only_mounts(...)` before network/seccomp and before applying
Landlock rules in `linux-sandbox/src/landlock.rs`.
2026-01-14 08:30:46 -08:00
jif-oai
bcd7858ced feat: add auto refresh on thread listeners (#9105)
This PR is in the scope of multi-agent work. 

An agent (=thread) can now spawn other agents. Those other agents are
not attached to any clients. We need a way to make sure that the clients
are aware of the new threads to look at (for approval for example). This
PR adds a channel to the `ThreadManager` that pushes the ID of those
newly created agents such that the client (here the app-server) can also
subscribe to those ones.
2026-01-14 16:26:01 +00:00
jif-oai
32b1795ff4 chore: clamp min yield time for empty write_stdin (#9156)
After evals, 0 impact on performance
2026-01-14 16:25:40 +00:00
Ahmed Ibrahim
bdae0035ec Render exec output deltas inline (#9194) 2026-01-14 08:11:12 -08:00
jif-oai
7532f34699 fix: drop double waiting header in TUI (#9145) 2026-01-14 09:52:34 +00:00
jif-oai
bc6d9ef6fc feat: only source shell snapshot if the file exists (#9197) 2026-01-14 01:28:25 -08:00
jif-oai
dc3deaa3e7 feat: return an error if the image sent by the user is a bad image (#9146)
## Before
When we detect an `InvalidImageRequest`, we replace the image by a
placeholder and keep going

## Now
On such an `InvalidImageRequest`, we check whether the image came from a
user message or a tool call output. For tool call output we still replace
it with a placeholder to avoid breaking the agentic loop, but if it came
from a user message, we send an error to the user.
2026-01-14 09:07:45 +00:00
jif-oai
6fbb89e858 fix: shell snapshot clean-up (#9155)
Clean all shell snapshot files corresponding to sessions that have not
been updated in 7 days.
Those files should never leak. The only known cases where they can leak
are during a non-graceful interrupt of the process (`kill -9`, `panic`,
an OS crash, ...).
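
A minimal sketch of that policy, assuming snapshots sit in one flat directory; the real layout under `$CODEX_HOME` and the error handling differ:

```rust
use std::fs;
use std::path::Path;
use std::time::{Duration, SystemTime};

const MAX_AGE: Duration = Duration::from_secs(7 * 24 * 60 * 60); // 7 days

fn clean_stale_snapshots(dir: &Path) -> std::io::Result<()> {
    let now = SystemTime::now();
    for entry in fs::read_dir(dir)? {
        let entry = entry?;
        let modified = entry.metadata()?.modified()?;
        // Remove files whose session has not been updated in 7 days.
        if now.duration_since(modified).unwrap_or_default() > MAX_AGE {
            // Best effort: a file that survives one pass is caught by the next.
            let _ = fs::remove_file(entry.path());
        }
    }
    Ok(())
}

fn main() {
    // Hypothetical snapshot directory for demonstration purposes.
    let _ = clean_stale_snapshots(Path::new("/tmp/shell-snapshots"));
}
```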
2026-01-14 09:05:46 +00:00
jif-oai
258fc4b401 feat: add sourcing of rc files to shell snapshot (#9150) 2026-01-14 08:58:10 +00:00
Ahmed Ibrahim
b9ff4ec830 change api default model (#9188) 2026-01-13 22:33:34 -08:00
Michael Bolin
0c09dc3c03 feat: add threadId to MCP server messages (#9192)
This favors `threadId` instead of `conversationId` so we use the same
terms as https://developers.openai.com/codex/sdk/.

To test the local build:

```
cd codex-rs
cargo build --bin codex
npx -y @modelcontextprotocol/inspector ./target/debug/codex mcp-server
```

I sent:

```json
{
  "method": "tools/call",
  "params": {
    "name": "codex",
    "arguments": {
      "prompt": "favorite ls option?"
    },
    "_meta": {
      "progressToken": 0
    }
  }
}
```

and got:

```json
{
  "content": [
    {
      "type": "text",
      "text": "`ls -lah` (or `ls -alh`) — long listing, includes dotfiles, human-readable sizes."
    }
  ],
  "structuredContent": {
    "threadId": "019bbb20-bff6-7130-83aa-bf45ab33250e"
  }
}
```

and successfully used the `threadId` in the follow-up with the
`codex-reply` tool call:

```json
{
  "method": "tools/call",
  "params": {
    "name": "codex-reply",
    "arguments": {
      "prompt": "what is the long versoin",
      "threadId": "019bbb20-bff6-7130-83aa-bf45ab33250e"
    },
    "_meta": {
      "progressToken": 1
    }
  }
}
```

whose response also has the `threadId`:

```json
{
  "content": [
    {
      "type": "text",
      "text": "Long listing is `ls -l` (adds permissions, owner/group, size, timestamp)."
    }
  ],
  "structuredContent": {
    "threadId": "019bbb20-bff6-7130-83aa-bf45ab33250e"
  }
}
```

Fixes https://github.com/openai/codex/issues/3712.
2026-01-13 22:14:41 -08:00
sayan-oai
5675af5190 chore: clarify default shell for unified_exec (#8997)
The description of the `shell` arg for `exec_command` states the default
is `/bin/bash`, but AFAICT it's the user's default shell.

Default logic
[here](2a06d64bc9/codex-rs/core/src/tools/handlers/unified_exec.rs (L123)).

EDIT: #9004 has an alternative where we inform the model of the default
shell itself.
2026-01-13 21:37:41 -08:00
Eric Traut
31d9b6f4d2 Improve handling of config and rules errors for app server clients (#9182)
When an invalid config.toml key or value is detected, the CLI currently
just quits. This leaves the VSCE in a dead state.

This PR changes the behavior to not quit and instead bubble up the
config error to users so it is actionable. It also surfaces errors
related to "rules" parsing.

This allows us to surface these errors to users in the VSCE, like this:

<img width="342" height="129" alt="Screenshot 2026-01-13 at 4 29 22 PM"
src="https://github.com/user-attachments/assets/a79ffbe7-7604-400c-a304-c5165b6eebc4"
/>

<img width="346" height="244" alt="Screenshot 2026-01-13 at 4 45 06 PM"
src="https://github.com/user-attachments/assets/de874f7c-16a2-4a95-8c6d-15f10482e67b"
/>
2026-01-13 17:57:09 -08:00
Ahmed Ibrahim
5a82a72d93 Use offline cache for tui migrations (#9186) 2026-01-13 17:44:12 -08:00
Josh McKinney
ce49e92848 fix(tui): harden paste-burst state transitions (#9124)
User-facing symptom: On terminals that deliver pastes as rapid
KeyCode::Char/Enter streams (notably Windows), paste-burst transient
state
can leak into the next input. Users can see Enter insert a newline when
they meant to submit, or see characters appear late / handled through
the
wrong path.

System problem: PasteBurst is time-based. Clearing only the
classification window (e.g. via clear_window_after_non_char()) can erase
last_plain_char_time without emitting buffered text. If a buffer is
still
non-empty after that, flush_if_due() no longer has a timeout clock to
flush against, so the buffer can get "stuck" until another plain char
arrives.

This was surfaced while adding deterministic regression tests for
paste-burst behavior.

Fix: when disabling burst detection, defuse any in-flight burst state:
flush held/buffered text through handle_paste() (so it follows normal
paste integration), then clear timing and Enter suppression.

Document the rationale inline and update docs/tui-chat-composer.md so
"disable_paste_burst" matches the actual behavior.
2026-01-14 01:42:21 +00:00
Ahmed Ibrahim
4d787a2cc2 Renew cache ttl on etag match (#9174)
so we don't do unnecessary fetches
2026-01-14 01:21:41 +00:00
Ahmed Ibrahim
c96c26cf5b [CODEX-4427] improve parsed commands (#8933)
**Make command summaries more accurate by distinguishing
list/search/read operations across common CLI tools.**
- Added parsing helpers to centralize operand/flag handling (cd_target,
sed_read_path, first_non_flag_operand, single_non_flag_operand,
parse_grep_like, awk_data_file_operand, python_walks_files,
is_python_command) and reused them in summarize_main_tokens/shell
parsing.
- Newly parsed list-files commands: git ls-files, rg --files (incl.
rga/ripgrep-all), eza/exa, tree, du, python -c file-walks, plus fd/find
map to ListFiles when no query.
- Newly parsed search commands: git grep, grep/egrep/fgrep, ag/ack/pt,
rg/rga files-with-matches flags (-l/-L, --files-with-matches,
--files-without-match), with improved flag skipping to avoid
misclassifying args as paths.
- Newly parsed read commands: bat/batcat, less, more, awk <file>, and
more flexible sed -n range + file detection.
- Refine “small formatting command” detection for awk/sed, handle cd
with -- or multiple operands, and keep pipeline summaries focused on the
primary command.
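
A toy classifier in the spirit of the summary above; the variant names are illustrative and the real parser handles flags, operands, and pipelines far more carefully:

```rust
#[derive(Debug, PartialEq)]
enum ParsedCommand {
    ListFiles,
    Search,
    Read,
    Other,
}

fn classify(tokens: &[&str]) -> ParsedCommand {
    match tokens {
        ["git", "ls-files", ..] | ["tree", ..] | ["eza", ..] | ["exa", ..] | ["du", ..] => {
            ParsedCommand::ListFiles
        }
        ["rg", "--files", ..] | ["rga", "--files", ..] => ParsedCommand::ListFiles,
        ["git", "grep", ..] | ["grep", ..] | ["egrep", ..] | ["fgrep", ..] | ["ag", ..]
        | ["ack", ..] | ["pt", ..] => ParsedCommand::Search,
        ["bat", ..] | ["batcat", ..] | ["less", ..] | ["more", ..] => ParsedCommand::Read,
        _ => ParsedCommand::Other,
    }
}

fn main() {
    assert_eq!(classify(&["git", "ls-files"]), ParsedCommand::ListFiles);
    assert_eq!(classify(&["rg", "--files", "src"]), ParsedCommand::ListFiles);
    assert_eq!(classify(&["grep", "-r", "TODO"]), ParsedCommand::Search);
    assert_eq!(classify(&["bat", "README.md"]), ParsedCommand::Read);
}
```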
2026-01-13 16:59:33 -08:00
Ahmed Ibrahim
7e33ac7eb6 clean models manager (#9168)
Keep only the following methods:
- `list_models`: get the currently available models
- `try_list_models`: sync version with no refresh, for TUI use
- `get_default_model`: get the default model (should be tightened to
core and received on session configuration)
- `get_model_info`: get `ModelInfo` for a specific model (should be
tightened to core but used in tests)
- `refresh_if_new_etag`: trigger refresh on different etags

Also move the cache to its own struct
2026-01-13 16:55:33 -08:00
github-actions[bot]
ebbbee70c6 Update models.json (#9136)
Automated update of models.json.

Co-authored-by: aibrahim-oai <219906144+aibrahim-oai@users.noreply.github.com>
2026-01-13 16:46:24 -08:00
pakrym-oai
5a70b1568f WebSocket test server script (#9175)
Very simple test server to test Responses API WebSocket transport
integration.
2026-01-13 16:21:14 -08:00
Michael Bolin
903a0c0933 feat: add bazel-codex entry to justfile (#9177)
This is less straightforward than I realized, so I created an entry for
this in our `justfile`.

Verified that running `just bazel-codex` from anywhere in the repo uses
the user's `$PWD` as the one to run Codex.

While here, updated the `MODULE.bazel.lock`, though it looks like I need
to add a CI job that runs `bazel mod deps --lockfile_mode=error` or
something.
2026-01-13 16:16:22 -08:00
Michael Bolin
4c673086bc fix: integration test for #9011 (#9166)
Adds an integration test for the new behavior introduced in
https://github.com/openai/codex/pull/9011. The work to create the test
setup was substantial enough that I thought it merited a separate PR.

This integration test spawns `codex` in TUI mode, which requires
spawning a PTY to run successfully, so I had to introduce quite a bit of
scaffolding in `run_codex_cli()`. I was surprised to discover that we
have not done this in our codebase before, so perhaps this should get
moved to a common location so it can be reused.

The test itself verifies that a malformed `rules` in `$CODEX_HOME`
prints a human-readable error message and exits nonzero.
2026-01-13 23:39:34 +00:00
Michael Bolin
2cd1a0a45e fix: report an appropriate error in the TUI for malformed rules (#9011)
The underlying issue is that when we encountered an error starting a
conversation (any sort of error, though making `$CODEX_HOME/rules` a
file rather than folder was the example in #8803), then we were writing
the message to stderr, but this could be printed over by our UI
framework so the user would not see it. In general, we disallow the use
of `eprintln!()` in this part of the code for exactly this reason,
though this was suppressed by an `#[allow(clippy::print_stderr)]`.

This attempts to clean things up by changing `handle_event()` and
`handle_tui_event()` to return a `Result<AppRunControl>` instead of a
`Result<bool>`, which is a new type introduced in this PR (and depends
on `ExitReason`, also a new type):

```rust
#[derive(Debug)]
pub(crate) enum AppRunControl {
    Continue,
    Exit(ExitReason),
}

#[derive(Debug, Clone)]
pub enum ExitReason {
    UserRequested,
    Fatal(String),
}
```

This makes it possible to exit the primary control flow of the TUI with
richer information. This PR adds `ExitReason` to the existing
`AppExitInfo` struct and updates `handle_app_exit()` to print the error
and exit code `1` in the event of `ExitReason::Fatal`.

I tried to create an integration test for this, but it was a bit
involved, so I published it as a separate PR:
https://github.com/openai/codex/pull/9166. For this PR, please have
faith in my manual testing!

Fixes https://github.com/openai/codex/issues/8803.




---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/9011).
* #9166
* __->__ #9011
2026-01-13 23:21:32 +00:00
pakrym-oai
9f8d3c14ce Fix flakiness in WebSocket tests (#9169)
The connection was being added to the list after the WebSocket response
was sent.

So the test could sometimes race and observe connections before the list
was updated.

After this change, the connection and request are added to the list
before the response is sent.
2026-01-13 15:09:59 -08:00
xl-openai
89403c5e11 Allow close skill popup with esc. (#9165)
<img width="398" height="133" alt="image"
src="https://github.com/user-attachments/assets/3084e793-ce5b-4f92-ad60-4c73e65c21c5"
/>
<img width="242" height="86" alt="image"
src="https://github.com/user-attachments/assets/57dd5587-0aea-4a55-91b8-273702939cb2"
/>

You can now press Esc to close the skill popup and submit the input as-is.
2026-01-13 15:02:44 -08:00
Marius Wichtner
3c711f3d16 Fix spinner/Esc interrupt when MCP startup completes mid-turn (#8661)
## **Problem**

Codex’s TUI uses a single “task running” indicator (spinner + Esc interrupt hint)
to communicate “the UI is busy”. In practice, “busy” can mean two different
things: an agent turn is running, or MCP servers are still starting up. Without a
clear contract, those lifecycles can interfere: startup completion can clear the
spinner while a turn is still in progress, or the UI can appear idle while MCP is
still booting. This is user-visible confusion during the most important moments
(startup and the first turn), so it was worth making the contract explicit and
guarding it.

## **Mental model**

`ChatWidget` is the UI-side adapter for the `codex_core::protocol` event stream.
It receives `EventMsg` events and updates two major UI surfaces: the transcript
(history/streaming cells) and the bottom pane (composer + status indicator).

The key concept after this change is that the bottom pane’s “task running”
indicator is treated as **derived UI-busy state**, not “agent is running”. It is
considered active while either:
- an agent turn is in progress (`TurnStarted` → completion/abort), or
- MCP startup is in progress (`McpStartupUpdate` → `McpStartupComplete`).

Those lifecycles are tracked independently, and the bottom-pane indicator is
defined as their union.
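
The contract reduces to a tiny derivation, sketched here with hypothetical field names:

```rust
#[derive(Default)]
struct BusyState {
    turn_in_progress: bool,
    mcp_startup_in_progress: bool,
}

impl BusyState {
    /// The indicator is the union of the two independent lifecycles, so
    /// `McpStartupComplete` cannot clear a still-running turn.
    fn is_busy(&self) -> bool {
        self.turn_in_progress || self.mcp_startup_in_progress
    }
}

fn main() {
    let mut state = BusyState { turn_in_progress: true, mcp_startup_in_progress: true };
    state.mcp_startup_in_progress = false; // McpStartupComplete arrives mid-turn
    assert!(state.is_busy()); // the spinner stays up until the turn ends
}
```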

## **Non-goals**

- This does not introduce separate UI indicators for “turn busy” vs “MCP busy”.
- This does not change MCP startup behavior, ordering guarantees, or core
  protocol semantics.
- This does not rework unrelated status/header rendering or transcript layout.

## **Tradeoffs**

- The “one flag represents multiple lifecycles” approach remains lossy: it
  preserves correct “busy vs idle” semantics but cannot express *which* kind of
  busy is happening without further UI changes.
- The design keeps complexity low by keeping a single derived boolean, rather
  than adding a more expressive bottom-pane state machine. That’s chosen because
  it matches existing UX and minimizes churn while fixing the confusion.

## **Architecture**

- `codex-core` owns the actual lifecycles and emits `codex_core::protocol`
  events.
- `ChatWidget` owns the UI interpretation of those lifecycles. It is responsible
  for keeping the bottom pane’s derived “busy” state consistent with the event
  stream, and for updating the status header when MCP progress updates arrive.
- The bottom pane remains a dumb renderer of the single “task running” flag; it
  does not learn about MCP or agent turns directly.

## **Observability**

- When working: the spinner/Esc hint stays visible during MCP startup and does
  not disappear mid-turn when `McpStartupComplete` arrives; startup status
  headers can update without clearing “busy” for an active turn.
- When broken: you’ll see the spinner/hint flicker off while output is still
  streaming, or the UI appears idle while MCP startup status is still changing.

## **Tests**

- Adds/strengthens a regression test that asserts MCP startup completion does
  not clear the “task running” indicator for an active turn (in both `tui` and
  `tui2` variants).
- These tests prove the **contract** (“busy is the union of turn + startup”) at
  the UI boundary; they do not attempt to validate MCP startup ordering,
  real-world startup timing, or backend integration behavior.

Fixes #7017

Signed-off-by: 2mawi2 <2mawi2@users.noreply.github.com>
Co-authored-by: 2mawi2 <2mawi2@users.noreply.github.com>
Co-authored-by: Josh McKinney <joshka@openai.com>
2026-01-13 13:56:09 -08:00
Josh McKinney
141d2b5022 test(tui): add deterministic paste-burst tests (#9121)
Replace the old timing-dependent non-ASCII paste test with deterministic
coverage by forcing an active `PasteBurst` and asserting the exact flush
payload.

Add focused unit tests for `PasteBurst` transitions, and add short
"Behavior:" rustdoc notes on chat composer tests to make the state
machine contracts explicit.
2026-01-13 21:55:44 +00:00
Dylan Hurd
ebacd28817 fix(windows-sandbox-rs) bump SETUP_VERSION (#9134)
## Summary
Bumps the windows setup version, to re-trigger windows sandbox setup for
users in the experimental sandbox. We've seen some drift in the ACL
controls, amongst a few other changes. Hopefully this should fix #9062.

## Testing
- [x] Tested locally
2026-01-13 13:47:29 -08:00
Matthew Zeng
e25d2ab3bf Fresh tooltips (#9130)
Fresh tooltips
2026-01-13 13:06:35 -08:00
Owen Lin
bde734fd1e feat(app-server): add an --analytics-default-enabled flag (#9118)
Add a new `codex app-server --analytics-default-enabled` CLI flag that
controls whether analytics are enabled by default.

Analytics are disabled by default for app-server. Users have to
explicitly opt in
via the `analytics` section in the config.toml file.

However, for first-party use cases like the VSCode IDE extension, we
default analytics
to be enabled by default by setting this flag. Users can still opt out
by setting this
in their config.toml:

```toml
[analytics]
enabled = false
```

See https://developers.openai.com/codex/config-advanced/#metrics for
more details.
2026-01-13 11:59:39 -08:00
Josh McKinney
58e8f75b27 fix(tui): document paste-burst state machine (#9020)
Add a narrative doc and inline rustdoc explaining how `ChatComposer`
and `PasteBurst` compose into a single state machine on terminals that
lack reliable bracketed paste (notably Windows).

This documents the key states, invariants, and integration points
(`handle_input_basic`, `handle_non_ascii_char`, tick-driven flush) so
future changes are easier to reason about.
2026-01-13 11:48:31 -08:00
gt-oai
2651980bdf Restrict MCP servers from requirements.toml (#9101)
Enterprises want to restrict the MCP servers their users can use.

Admins can now specify an allowlist of MCPs in `requirements.toml`. The
MCP servers are matched on both Name and Transport (local path or HTTP
URL) -- both must match to allow the MCP server. This prevents
circumventing the allowlist by renaming MCP servers in user config. (It
is still possible to replace the local path e.g. rewrite say
`/usr/local/github-mcp` with a nefarious MCP. We could allow hash
pinning in the future, but that would break updates. I also think this
represents a broader, out-of-scope problem.)

We introduce a new field on Constrained: a "normalizer". In general, it
is a `fn(T) -> T` that applies when `Constrained<T>.set()` is called. In
this
particular case, it disables MCP servers which do not match the
allowlist. An alternative solution would remove this and instead throw a
ConstraintError. That would stop Codex launching if any MCP server was
configured which didn't match. I think this is bad.
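
A toy sketch of the normalizer shape, not the actual `Constrained` implementation:

```rust
struct Constrained<T> {
    value: T,
    normalizer: Option<fn(T) -> T>,
}

impl<T> Constrained<T> {
    fn new(value: T) -> Self {
        Self { value, normalizer: None }
    }

    fn with_normalizer(mut self, normalizer: fn(T) -> T) -> Self {
        self.normalizer = Some(normalizer);
        self
    }

    /// Normalizes instead of rejecting, e.g. disabling non-allowlisted MCP
    /// servers rather than refusing to launch.
    fn set(&mut self, value: T) {
        self.value = match self.normalizer {
            Some(normalize) => normalize(value),
            None => value,
        };
    }
}

fn main() {
    let mut servers = Constrained::new(vec![]).with_normalizer(|mut v: Vec<&str>| {
        v.retain(|name| *name == "hello_world"); // keep only allowlisted servers
        v
    });
    servers.set(vec!["hello_world", "docs"]);
    assert_eq!(servers.value, vec!["hello_world"]);
}
```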

We currently reuse the enabled flag on MCP servers to disable them, but
don't propagate any information about why they are disabled. I'd like to
add that in a follow-up PR, possibly by replacing `enabled` with an
enum.

In action:

```
# MCP server config has two MCPs. We are going to allowlist one of them.
➜  codex git:(gt/restrict-mcps) ✗ cat ~/.codex/config.toml | grep mcp_servers -A1
[mcp_servers.hello_world]
command = "hello-world-mcp"
--
[mcp_servers.docs]
command = "docs-mcp"

# Restrict the MCPs to the hello_world MCP.
➜  codex git:(gt/restrict-mcps) ✗ defaults read com.openai.codex requirements_toml_base64 | base64 -d
[mcp_server_allowlist.hello_world]
command = "hello-world-mcp"

# List the MCPs, observe hello_world is enabled and docs is disabled.
➜  codex git:(gt/restrict-mcps) ✗ just codex mcp list
cargo run --bin codex -- "$@"
    Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.25s
     Running `target/debug/codex mcp list`
Name         Command          Args  Env  Cwd  Status    Auth
docs         docs-mcp         -     -    -    disabled  Unsupported
hello_world  hello-world-mcp  -     -    -    enabled   Unsupported

# Remove the restrictions.
➜  codex git:(gt/restrict-mcps) ✗ defaults delete com.openai.codex requirements_toml_base64

# Observe both MCPs are enabled.
➜  codex git:(gt/restrict-mcps) ✗ just codex mcp list
cargo run --bin codex -- "$@"
    Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.25s
     Running `target/debug/codex mcp list`
Name         Command          Args  Env  Cwd  Status   Auth
docs         docs-mcp         -     -    -    enabled  Unsupported
hello_world  hello-world-mcp  -     -    -    enabled  Unsupported

# A new requirements that updates the command to one that does not match.
➜  codex git:(gt/restrict-mcps) ✗ cat ~/requirements.toml
[mcp_server_allowlist.hello_world]
command = "hello-world-mcp-v2"

# Use those requirements.
➜  codex git:(gt/restrict-mcps) ✗ defaults write com.openai.codex requirements_toml_base64 "$(base64 -i /Users/gt/requirements.toml)"

# Observe both MCPs are disabled.
➜  codex git:(gt/restrict-mcps) ✗ just codex mcp list
cargo run --bin codex -- "$@"
    Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.75s
     Running `target/debug/codex mcp list`
Name         Command          Args  Env  Cwd  Status    Auth
docs         docs-mcp         -     -    -    disabled  Unsupported
hello_world  hello-world-mcp  -     -    -    disabled  Unsupported
```
2026-01-13 19:45:00 +00:00
Anton Panasenko
51d75bb80a fix: drop session span at end of the session (#9126) 2026-01-13 11:36:00 -08:00
charley-oai
57ba758df5 Fix queued messages during /review (#9122)
Sending a message during /review interrupts the review, whereas during
normal operation, sending a message while the agent is running will
queue the message. This is unexpected behavior, and since /review
usually takes a while, it takes away a potentially useful operation.

Summary
- Treat review mode as an active task for message queuing so inputs
don’t inject into the running review turn.
- Prevents user submissions from rendering immediately in the transcript
while the review continues streaming.
- Keeps review UX consistent with normal “task running” behavior and
avoids accidental interrupt/replacement.

Notes
- This change only affects UI queuing logic; core review flow and task
lifecycle remain unchanged.
2026-01-13 11:23:22 -08:00
sayan-oai
40e2405998 add generated jsonschema for config.toml (#8956)
### What
Add JSON Schema generation for `config.toml`, with checked-in
`docs/config.schema.json`. We can move the schema elsewhere if preferred
(and host it if there's demand).

Add fixture test to prevent drift and `just write-config-schema` to
regenerate on schema changes.

Generate MCP config schema from `RawMcpServerConfig` instead of
`McpServerConfig` because that is the runtime type used for
deserialization.

Populate feature flag values into generated schema so they can be
autocompleted.
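
A minimal sketch of the generation step, assuming the `schemars` and `serde_json` crates; the real config type and the checked-in `docs/config.schema.json` are much larger:

```rust
use schemars::{schema_for, JsonSchema};
use serde::Deserialize;

/// Tiny illustrative config; doc comments become schema descriptions.
#[derive(Deserialize, JsonSchema)]
struct Config {
    /// Default model to use.
    model: Option<String>,
    /// Web search mode (see the enum added earlier in this log).
    web_search: Option<String>,
}

fn main() {
    let schema = schema_for!(Config);
    // The checked-in schema file is produced by pretty-printing like this.
    println!("{}", serde_json::to_string_pretty(&schema).unwrap());
}
```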

### Tests
Added tests + regenerate script to prevent drift. Tested autocompletions
using generated jsonschema locally with Even Better TOML.



https://github.com/user-attachments/assets/5aa7cd39-520c-4a63-96fb-63798183d0bc
2026-01-13 10:22:51 -08:00
Devon Rifkin
fe03320791 ollama: default to Responses API for built-ins (#8798)
This is an alternate PR to solving the same problem as
<https://github.com/openai/codex/pull/8227>.

In this PR, when Ollama is used via `--oss` (or via `model_provider =
"ollama"`), we default it to use the Responses format. At runtime, we do
an Ollama version check, and if the version is older than when Responses
support was added to Ollama, we print out a warning.

Because there's no way of configuring the wire api for a built-in
provider, we temporarily add a new `oss_provider`/`model_provider`
called `"ollama-chat"` that will force the chat format.

Once the `"chat"` format is fully removed (see
<https://github.com/openai/codex/discussions/7782>), `ollama-chat` can
be removed as well.

---------

Co-authored-by: Eric Traut <etraut@openai.com>
Co-authored-by: Michael Bolin <mbolin@openai.com>
2026-01-13 09:51:41 -08:00
pakrym-oai
2d56519ecd Support response.done and add integration tests (#9129)
The agent loop now uses a persistent, incremental WebSocket connection.
2026-01-13 16:12:30 +00:00
jif-oai
97f1f20edb nit: collab send input cleaning (#9147) 2026-01-13 15:15:41 +00:00
jif-oai
3b8d79ee11 chore: better error handling on collab tools (#9143) 2026-01-13 13:56:11 +00:00
Ahmed Ibrahim
3a300d1117 Use thread rollback for Esc backtrack (#9140)
- Swap Esc backtrack to roll back the current thread instead of forking
2026-01-13 09:17:39 +00:00
Ahmed Ibrahim
17ab5f6a52 Show tab queue hint in footer (#9138)
- show the Tab queue hint in the footer when a task is running with
Steer enabled
- drop the history queue hint and add footer snapshots
2026-01-13 00:32:53 -08:00
Eric Traut
3c8fb90bf0 Updated heuristic for tool call summary to detect file modifications (#9109)
This PR attempts to address #9079
2026-01-13 08:00:13 +00:00
Ahmed Ibrahim
325ce985f1 Use markdown for migration screen (#8952)
Next steps will be routing this to model info
2026-01-13 07:41:42 +00:00
Ahmed Ibrahim
18b737910c Handle image paste from empty paste events (#9049)
Handle image paste on empty paste events.

- Intent: make image paste work in terminals that emit empty paste
events.
- Approach: route paste events through an image-aware handler and read
the clipboard when text is empty.
- Detection is best effort; some terminals don't send the empty paste
signal at all (see the sketch below).
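
A best-effort sketch of the handler shape, assuming the `arboard` crate for clipboard access; the surrounding TUI functions are stand-ins:

```rust
fn handle_paste(text: String) {
    if !text.is_empty() {
        insert_text(text);
        return;
    }
    // Empty paste event: some terminals emit this for image pastes, so
    // fall back to reading an image from the clipboard.
    if let Ok(mut clipboard) = arboard::Clipboard::new() {
        if let Ok(image) = clipboard.get_image() {
            attach_image(image.width, image.height, image.bytes.into_owned());
            return;
        }
    }
    // Not all terminals send the empty signal, so this stays best effort.
}

// Stand-ins for the real composer hooks.
fn insert_text(_text: String) {}
fn attach_image(_width: usize, _height: usize, _rgba: Vec<u8>) {}

fn main() {
    handle_paste(String::new());
}
```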
2026-01-12 23:39:59 -08:00
Ahmed Ibrahim
cbca43d57a Send message by default mid turn. queue messages by tab (#9077)
https://github.com/user-attachments/assets/03838730-4ddc-44df-a2c7-cb8ecda78660
2026-01-12 23:06:35 -08:00
pakrym-oai
e726a82c8a Websocket append support (#9128)
Support an incremental append request in websocket transport.
2026-01-13 06:07:13 +00:00
Michael Bolin
ddae70bd62 fix: prompt for unsafe commands on Windows (#9117) 2026-01-12 21:30:09 -08:00
pakrym-oai
d75626ad99 Reuse websocket connection (#9127)
Reuses the connection but still sends full requests.
2026-01-13 03:30:09 +00:00
Chriss4123
12779c7c07 fix(tui): show in-flight coalesced tool calls in transcript overlay (#8246)
### Problem
Ctrl+T transcript overlay can omit in-flight coalesced tool calls because it
renders only committed transcript cells while the main viewport can render the
current in-flight ChatWidget.active_cell immediately.

### Mental model
The UI has both committed transcript cells (finalized HistoryCell entries) and
an in-flight active cell that can mutate in place while streaming, often
representing a coalesced exec/tool group. The transcript overlay renders
committed cells plus a render-only live tail derived from the current active
cell. The live tail is cached and only recomputed when its cache key changes,
which is derived from terminal width (wrapping), active-cell revision
(in-place mutations), stream continuation (spacing), and animation tick
(time-based visuals).

### Non-goals
This does not change coalescing rules, flush boundaries, or when active cells
become committed. It does not change tool-call semantics or transcript
persistence; it is a rendering-only improvement for the overlay.

### Tradeoffs
This adds cache invalidation complexity: correctness depends on bumping an
active-cell revision (and/or providing an animation tick) when the active cell
mutates in place. The mechanism is implemented in both codex-tui and codex-tui2,
which keeps behavior consistent but risks drift if future changes are not
applied in lockstep.

### Architecture
App special-cases transcript overlay draws to sync a live tail from ChatWidget
into TranscriptOverlay. TranscriptOverlay remains the owner of committed
transcript cells; the live tail is an optional appended renderable.
HistoryCell::transcript_animation_tick() allows time-dependent transcript output
(spinner/shimmer) to invalidate the cached tail without requiring data mutation.

### Observability
Manual verification is to open Ctrl+T while an exploring/coalesced active cell
is still in-flight and confirm the overlay includes the same in-flight tool-call
group the main viewport shows. The overlay is kept in sync by App passing an
active-cell key and transcript lines into TranscriptOverlay::sync_live_tail; the
key must change when the active cell mutates or animates.

### Tests
Snapshot tests validate that the transcript overlay renders a live tail appended
after committed cells and that identical keys short-circuit recomputation. Unit
tests validate that active-cell revision bumps occur on specific in-place
mutations (e.g. unified exec wait cell command display becoming known late) so
cached tails are invalidated.

## Documentation patches (module, type, function)

### Module-level docs (invariants + mechanisms)
- codex-rs/tui/src/app_backtrack.rs:1
- codex-rs/tui/src/chatwidget.rs:1
- codex-rs/tui/src/pager_overlay.rs:1
- codex-rs/tui/src/history_cell.rs:1
- codex-rs/tui2/src/app_backtrack.rs:1
- codex-rs/tui2/src/chatwidget.rs:1
- codex-rs/tui2/src/pager_overlay.rs:1
- codex-rs/tui2/src/history_cell.rs:1

### Type-level docs (cache key + invariants)
- codex-rs/tui/src/chatwidget.rs (ChatWidget.active_cell_revision, ActiveCellTranscriptKey)
- codex-rs/tui/src/pager_overlay.rs (TranscriptOverlay live tail storage model)
- codex-rs/tui/src/history_cell.rs (HistoryCell::transcript_animation_tick, UnifiedExecWaitCell::update_command_display)
- Mirrored in codex-rs/tui2/src/chatwidget.rs, codex-rs/tui2/src/pager_overlay.rs, codex-rs/tui2/src/history_cell.rs

### Function-level docs (why/when/guarantees/pitfalls)
- codex-rs/tui/src/app_backtrack.rs (overlay_forward_event)
- codex-rs/tui/src/chatwidget.rs (active_cell_transcript_key, active_cell_transcript_lines)
- codex-rs/tui/src/pager_overlay.rs (sync_live_tail, take_live_tail_renderable)
- codex-rs/tui/src/history_cell.rs (transcript_animation_tick, UnifiedExecWaitCell::update_command_display)
- Mirrored in codex-rs/tui2 equivalents where present

### Validation performed
- cd codex-rs && just fmt
- cd codex-rs && cargo test -p codex-tui
- cd codex-rs && cargo test -p codex-tui2

## Design inconsistencies / risks

- Cache invalidation is a distributed responsibility: any future in-place active
  cell transcript mutation that forgets to bump active_cell_revision (or expose
  an animation tick) can leave the transcript overlay live tail out of sync with
  the main viewport.
- TranscriptOverlay tail handling assumes a structural invariant that the live
  tail, when present, is exactly one trailing renderable after the committed cell
  renderables; if renderable construction changes in a way that violates that
  assumption, tail insertion/removal logic becomes incorrect.
- codex-tui and codex-tui2 duplicate the live-tail mechanism; the documentation
  is aligned, but the implementation can still drift unless changes continue to
  be applied in lockstep.
2026-01-13 03:06:11 +00:00
pakrym-oai
490c1c1fdd Add model client sessions (#9102)
Maintain a long-running session.
2026-01-13 01:15:56 +00:00
Ahmed Ibrahim
87f7226cca Assemble sandbox/approval/network prompts dynamically (#8961)
- Add a single builder for developer permissions messaging that accepts
SandboxPolicy and approval policy. This builder now drives the developer
“permissions” message that’s injected at session start and any time
sandbox/approval settings change.
- Trim EnvironmentContext to only include cwd, writable roots, and
shell; removed sandbox/approval/network duplication and adjusted XML
serialization and tests accordingly.

Follow-up: adding a config value to replace the developer permissions
message for custom sandboxes.
2026-01-12 23:12:59 +00:00
pakrym-oai
3a6a43ff5c Extract single responses SSE event parsing (#9114)
To be reused in WebSockets parsing.
2026-01-12 13:59:51 -08:00
charley-oai
d7cdcfc302 Add some tests for image attachments (#9080)
Some extra tests for https://github.com/openai/codex/pull/8950
2026-01-12 13:41:50 -08:00
pakrym-oai
5dfa780f3d Remove unused conversation_id header (#9107)
It's an exact copy of session_id
2026-01-12 21:01:07 +00:00
Shijie Rao
3e91a95ce1 feat: hot reload mcp servers (#8957)
### Summary
* Added `mcpServer/refresh` command to inform app servers and active
threads to refresh mcpServer on next turn event.
* Added `pending_mcp_server_refresh_config` to codex core so that if the
value is populated, we reinitialize the mcp server manager on the thread
level.
* The config is updated on `mcpServer/refresh` command which we iterate
through threads and provide with the latest config value after last
write.
2026-01-12 11:17:50 -08:00
dependabot[bot]
034d489c34 chore(deps): bump tokio-util from 0.7.16 to 0.7.18 in /codex-rs (#9076)
Bumps [tokio-util](https://github.com/tokio-rs/tokio) from 0.7.16 to
0.7.18.
Commits:
- `9cc02cc` chore: prepare tokio-util 0.7.18 (#7829)
- `d2799d7` task: improve the docs of `Builder::spawn_local` (#7828)
- `4d4870f` task: doc that task drops before JoinHandle completion (#7825)
- `fdb1509` fs: check for io-uring opcode support (#7815)
- `426a562` rt: remove `allow(dead_code)` after `JoinSet` stabilization (#7826)
- `e3b89bb` chore: prepare Tokio v1.49.0 (#7824)
- `4f577b8` Merge 'tokio-1.47.3' into 'master'
- `f320197` chore: prepare Tokio v1.47.3 (#7823)
- `ea6b144` ci: freeze rustc on nightly-2025-01-25 in `netlify.toml` (#7652)
- `264e703` Merge `tokio-1.43.4` into `tokio-1.47.x` (#7822)
- Additional commits viewable in the [compare view](https://github.com/tokio-rs/tokio/compare/tokio-util-0.7.16...tokio-util-0.7.18)

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-12 10:14:42 -08:00
dependabot[bot]
729e097662 chore(deps): bump clap from 4.5.53 to 4.5.54 in /codex-rs (#9075)
Bumps [clap](https://github.com/clap-rs/clap) from 4.5.53 to 4.5.54.
Release notes / changelog (v4.5.54, 2026-01-02):
- Fixes: *(help)* Move `[default]` to its own paragraph when `PossibleValue::help` is present in `--help`
Commits:
- `194c676` chore: Release
- `44838f6` docs: Update changelog
- `0f59d55` Merge pull request #6027 from Alpha1337k/master
- `e2aa2f0` Feat: Add catch-all on external subcommands for zsh
- `b9c0aee` Feat: Add external subcommands test to suite
- See full diff in the [compare view](https://github.com/clap-rs/clap/compare/clap_complete-v4.5.53...clap_complete-v4.5.54)

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-12 10:14:22 -08:00
dependabot[bot]
7ac498e0e0 chore(deps): bump which from 6.0.3 to 8.0.0 in /codex-rs (#9074)
Bumps [which](https://github.com/harryfei/which-rs) from 6.0.3 to 8.0.0.
Release notes:

8.0.0
- Add a new `Sys` trait to allow abstracting over the underlying filesystem; particularly useful for `wasm32-unknown-unknown` targets. Thanks @dsherret (harryfei/which-rs#109).
- Add more debug-level tracing for otherwise silent I/O errors.
- Call the `NonFatalHandler` in more places to catch previously ignored I/O errors.
- Remove use of the `either` dependency.

7.0.3
- Update rustix to version 1.0. Thanks @mhils.

7.0.2
- Don't return paths containing the single dot `.` reference to the current directory, even if the original request was given in terms of the current directory. Thanks @jakobhellermann.

7.0.1
- Switch to the `env_home` crate (@micolous, harryfei/which-rs#105); fixes harryfei/which-rs#106 (@Xaeroxe, harryfei/which-rs#107).
- If the home directory is unavailable, do not expand the tilde to an empty string; leave it as is.

7.0.0
- Add support to `WhichConfig` for a user-provided closure that will be called whenever a nonfatal error occurs. This technically breaks a few APIs due to the need for more generics and lifetimes; most code will compile without changes.
Commits:
- `adac2cd` bump version, update changelog
- `84e152e` reduce sys::Sys requirements, add some tracing for otherwise silent errors
- `a0a6daf` feat: add Sys trait for swapping out system (#109)
- `eef1998` Add actively maintained badge
- `1d145de` release version 7.0.3
- `f5e5292` fix unrelated lint error
- `4dcefa6` bump rustix
- `bd86881` bump version, add to changelog
- `cf37760` don't run relative dot test on macos
- `f2c4bd6` update target to new name for wasm32-wasip1
- Additional commits viewable in the [compare view](https://github.com/harryfei/which-rs/compare/6.0.3...8.0.0)

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-12 10:14:00 -08:00
dependabot[bot]
45ffcdf886 chore(deps): bump ts-rs from 11.0.1 to 11.1.0 in /codex-rs (#9072)
Bumps [ts-rs](https://github.com/Aleph-Alpha/ts-rs) from 11.0.1 to
11.1.0.
Release notes (v11.1.0):

This release fixes a nasty build failure when using the `format` feature. Note: for those that use the `format` feature, this release bumps the MSRV to 1.88 (done in a minor release because the build was already broken by one of the dependencies anyway).

New features:
- TypeScript enums with `#[ts(repr(enum))]`: instructs ts-rs to generate an `enum` instead of a `type` for your Rust enum. Discriminants are preserved, and you can use the variant's name as the discriminant instead with `#[ts(repr(enum = name))]`.

```rust
#[derive(TS)]
#[ts(repr(enum))]
enum Role {
    User,
    Admin,
}
// will generate `export enum Role { "User", "Admin" }`
```

- `#[ts(optional_fields)]` in enums: the attribute can now be applied directly to enums, or even to individual enum variants.
- Control over file extensions in imports: set the `TS_RS_IMPORT_EXTENSION` environment variable to emit `.ts` or `.js` extensions in generated `import` statements. This deprecates the `import-esm` cargo feature, which will be removed in a future major release.

Full changelog:
- Regression: `#[ts(optional)]` with `#[ts(type)]` (Aleph-Alpha/ts-rs#416)
- release v11.0.1 (Aleph-Alpha/ts-rs#417)
- Make `rename_all` compatible with tuple and unit structs as a no-op attribute (Aleph-Alpha/ts-rs#422)
- Replace `import-esm` with `TS_RS_IMPORT_EXTENSION` (Aleph-Alpha/ts-rs#423)
- Updated chrono Duration emitted type (Aleph-Alpha/ts-rs#434)
- Add optional_fields to enum (Aleph-Alpha/ts-rs#432)
- Add `#[ts(repr(enum))]` attribute (Aleph-Alpha/ts-rs#425)
- Fix build with `format` feature (Aleph-Alpha/ts-rs#438)

New contributors: @fxf8 made their first contribution in Aleph-Alpha/ts-rs#434.
See full diff in the [compare view](https://github.com/Aleph-Alpha/ts-rs/commits/v11.1.0).

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-12 10:13:27 -08:00
dependabot[bot]
06088535ad chore(deps): bump tui-scrollbar from 0.2.1 to 0.2.2 in /codex-rs (#9071)
Bumps [tui-scrollbar](https://github.com/joshka/tui-widgets) from 0.2.1
to 0.2.2.
Release notes (tui-scrollbar v0.2.2):
- *(scrollbar)* Support crossterm 0.28 (#172): add versioned crossterm feature flags and re-export the selected version as `tui_scrollbar::crossterm`; add CI checks for the feature matrix and a docs.rs-style build.

Commits:
- `5c7acc5` chore(tui-scrollbar): release v0.2.2 (#173)
- `24b14c2` feat(scrollbar): support crossterm 0.28 (#172)
- See full diff in the [compare view](https://github.com/joshka/tui-widgets/compare/tui-scrollbar-v0.2.1...tui-scrollbar-v0.2.2)

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-12 10:11:37 -08:00
Anton Panasenko
4223948cf5 feat: wire fork to codex cli (#8994)
## Summary
- add `codex fork` subcommand and `/fork` slash command mirroring resume
- extend session picker to support fork/resume actions with dynamic
labels in tui/tui2
- wire fork selection flow through tui bootstraps and add fork-related
tests
2026-01-12 10:09:11 -08:00
jif-oai
898e5f82f0 nit: add docstring (#9099)
Add docstring on `ToolHandler` trait
2026-01-12 17:40:52 +00:00
WhammyLeaf
d5562983d9 Add static mcp callback uri support (#8971)
Currently the callback URI for MCP authentication is dynamically
generated. More specifically, the callback URI is dynamic because the
port part of it is randomly chosen by the OS. This is not ideal as
callback URIs are recommended to be static and many authorization
servers do not support dynamic callback URIs.

This PR fixes that issue by exposing a new config option named
`mcp_oauth_callback_port`. When it is set, the callback URI is
constructed using this port rather than a random one chosen by the OS,
thereby making the callback URI static.

Related issue: https://github.com/openai/codex/issues/8827
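
As a rough sketch of the new option (the key name comes from this PR; the port value and the redirect URI shape are illustrative assumptions):

```toml
# config.toml: pin the MCP OAuth callback port so the redirect URI
# registered with the authorization server stays static, e.g.
# http://localhost:8976/<callback-path>.
mcp_oauth_callback_port = 8976
```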
2026-01-12 16:57:04 +00:00
jif-oai
9659583559 feat: add close tool implementation for collab (#9090)
Pretty straightforward. A known follow-up will be to drop it from the
AgentControl.
2026-01-12 13:21:46 +00:00
jif-oai
623707ab58 feat: add wait tool implementation for collab (#9088)
Add implementation for the `wait` tool.

For this we consider all statuses other than `PendingInit` and
`Running` as terminal. The `wait` tool call will return either after a
given timeout or when the agent reaches a terminal status.

A few points to note:
* The usage of a channel is preferred to prevent some races (just
looping on `get_status()` could "miss" a terminal status)
* The order of operations is very important: we need to first subscribe
and then check the last known status to prevent race conditions
* If the channel gets dropped, we return an error on purpose
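
A minimal sketch of that ordering, assuming a broadcast-style status channel (the type and function names here are illustrative, not the actual implementation):

```rust
use std::time::Duration;
use tokio::sync::broadcast;
use tokio::time::timeout;

#[derive(Clone, Debug)]
enum AgentStatus { PendingInit, Running, Done, Errored }

fn is_terminal(s: &AgentStatus) -> bool {
    !matches!(s, AgentStatus::PendingInit | AgentStatus::Running)
}

// Callers must subscribe *before* reading the last known status, so a
// transition that lands between the two steps cannot be missed.
async fn wait_for_terminal(
    mut rx: broadcast::Receiver<AgentStatus>,
    last_known: AgentStatus,
    deadline: Duration,
) -> Result<AgentStatus, &'static str> {
    if is_terminal(&last_known) {
        return Ok(last_known);
    }
    timeout(deadline, async {
        loop {
            match rx.recv().await {
                Ok(s) if is_terminal(&s) => return Ok(s),
                Ok(_) | Err(broadcast::error::RecvError::Lagged(_)) => continue,
                // Channel dropped: surface an error rather than spin forever.
                Err(broadcast::error::RecvError::Closed) => {
                    return Err("status channel closed");
                }
            }
        }
    })
    .await
    .map_err(|_| "timed out")?
}
```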
2026-01-12 12:16:24 +00:00
jif-oai
86f81ca010 feat: testing harness for collab 1 (#8983) 2026-01-12 11:17:05 +00:00
zbarsky-openai
6a57d7980b fix: support remote arm64 builds, as well (#9018) 2026-01-10 18:41:08 -08:00
jif-oai
198289934f Revert "Delete announcement_tip.toml" (#9032)
Reverts openai/codex#9003
2026-01-10 07:30:14 -08:00
charley-oai
6709ad8975 Label attached images so agent can understand in-message labels (#8950)
Agent wouldn't "see" attached images and would instead try to use the
view_file tool:
<img width="1516" height="504" alt="image"
src="https://github.com/user-attachments/assets/68a705bb-f962-4fc1-9087-e932a6859b12"
/>

In this PR, we wrap image content items in XML tags with the name of
each image (now just a numbered name like `[Image #1]`), so that the
model can understand inline image references (based on name). We also
put the image content items above the user message, which the model seems
to prefer (maybe it's more used to definitions being before references).

We also tweak the view_file tool description, which seemed to help a bit.

Results on a simple eval set of images:

Before
<img width="980" height="310" alt="image"
src="https://github.com/user-attachments/assets/ba838651-2565-4684-a12e-81a36641bf86"
/>

After
<img width="918" height="322" alt="image"
src="https://github.com/user-attachments/assets/10a81951-7ee6-415e-a27e-e7a3fd0aee6f"
/>

```json
[
  {
    "id": "single_describe",
    "prompt": "Describe the attached image in one sentence.",
    "images": ["image_a.png"]
  },
  {
    "id": "single_color",
    "prompt": "What is the dominant color in the image? Answer with a single color word.",
    "images": ["image_b.png"]
  },
  {
    "id": "orientation_check",
    "prompt": "Is the image portrait or landscape? Answer in one sentence.",
    "images": ["image_c.png"]
  },
  {
    "id": "detail_request",
    "prompt": "Look closely at the image and call out any small details you notice.",
    "images": ["image_d.png"]
  },
  {
    "id": "two_images_compare",
    "prompt": "I attached two images. Are they the same or different? Briefly explain.",
    "images": ["image_a.png", "image_b.png"]
  },
  {
    "id": "two_images_captions",
    "prompt": "Provide a short caption for each image (Image 1, Image 2).",
    "images": ["image_c.png", "image_d.png"]
  },
  {
    "id": "multi_image_rank",
    "prompt": "Rank the attached images from most colorful to least colorful.",
    "images": ["image_a.png", "image_b.png", "image_c.png"]
  },
  {
    "id": "multi_image_choice",
    "prompt": "Which image looks more vibrant? Answer with 'Image 1' or 'Image 2'.",
    "images": ["image_b.png", "image_d.png"]
  }
]
```
2026-01-09 21:33:45 -08:00
Michael Bolin
cf515142b0 fix: include AGENTS.md as repo root marker for integration tests (#9010)
As explained in `codex-rs/core/BUILD.bazel`, including the repo's own
`AGENTS.md` is a hack to get some tests passing. We should fix this
properly, but I wanted to put a stake in the ground ASAP to get `just
bazel-remote-test` working and then add a job to `bazel.yml` to ensure
it keeps working.
2026-01-09 17:09:59 -08:00
Michael Bolin
74b2238931 fix: add .git to .bazelignore (#9008)
As noted in the comment, this was causing a problem for me locally
because Sapling backed up some files under `.git/sl` named `BUILD.bazel`
and so Bazel tried to parse them.

It's a bit surprising that Bazel does not ignore `.git` out of the box;
you have to opt in to considering it rather than opt out.
2026-01-10 00:55:02 +00:00
gt-oai
cc0b5e8504 Add URL to responses error messages (#8984)
Put the URL in error messages to aid debugging when Codex points at the
wrong endpoint.

<img width="759" height="164" alt="Screenshot 2026-01-09 at 16 32 49"
src="https://github.com/user-attachments/assets/77a0622c-955d-426d-86bb-c035210a4ecc"
/>
2026-01-10 00:53:47 +00:00
gt-oai
8e49a2c0d1 Add model provider info to /status if non-default (#8981)
Add model provider info to /status if non-default

Enterprises are running Codex and migrating between proxied / API key
auth and SIWC. If you accidentally run Codex with `OPENAI_BASE_URL=...`,
which is surprisingly easy to do, we don't tend to surface this anywhere
and it may lead to breakage. One suggestion was to include this
information in `/status`:

<img width="477" height="157" alt="Screenshot 2026-01-09 at 15 45 34"
src="https://github.com/user-attachments/assets/630ce68f-c856-4a2b-a004-7df2fbe5de93"
/>
2026-01-10 00:53:34 +00:00
Ahmed Ibrahim
af1ed2685e Refactor remote models tests to use TestCodex builder (#8940)
- add `with_model_provider` to the test codex builder
- replace the bespoke remote models harness with `TestCodex` in
`remote_models` tests
2026-01-09 15:11:56 -08:00
pakrym-oai
1a0e2e612b Delete announcement_tip.toml (#9003) 2026-01-09 14:47:46 -08:00
pakrym-oai
acfd94f625 Add hierarchical agent prompt (#8996) 2026-01-09 13:47:37 -08:00
pakrym-oai
cabf85aa18 Log unhandled sse events (#8949) 2026-01-09 12:36:07 -08:00
viyatb-oai
bc284669c2 fix: harden arg0 helper PATH handling (#8766)
### Motivation
- Avoid placing PATH entries under the system temp directory by creating
the helper directory under `CODEX_HOME` instead of
`std::env::temp_dir()`.
- Fail fast on unsafe configuration by rejecting `CODEX_HOME` values
that live under the system temp root to prevent writable PATH entries.

### Testing
- Ran `just fmt`, which completed with a non-blocking
`imports_granularity` warning.
- Ran `just fix -p codex-arg0` (Clippy fixes) which completed
successfully.
- Ran `cargo test -p codex-arg0` and the test run completed
successfully.
2026-01-09 12:35:54 -08:00
Owen Lin
fbe883318d fix(app-server): set originator header from initialize (re-revert) (#8988)
Reapplies https://github.com/openai/codex/pull/8873 which was reverted
due to merge conflicts
2026-01-09 12:09:30 -08:00
zbarsky-openai
2a06d64bc9 feat: add support for building with Bazel (#8875)
This PR configures Codex CLI so it can be built with
[Bazel](https://bazel.build) in addition to Cargo. The `.bazelrc`
includes configuration so that remote builds can be done using
[BuildBuddy](https://www.buildbuddy.io).

If you are familiar with Bazel, things should work as you expect, e.g.,
run `bazel test //... --keep-going` to run all the tests in the repo,
but we have also added some new aliases in the `justfile` for
convenience:

- `just bazel-test` to run tests locally
- `just bazel-remote-test` to run tests remotely (currently, the remote
build is for x86_64 Linux regardless of your host platform). Note we are
currently seeing the following test failures in the remote build, so we
still need to figure out what is happening here:

```
failures:
    suite::compact::manual_compact_twice_preserves_latest_user_messages
    suite::compact_resume_fork::compact_resume_after_second_compaction_preserves_history
    suite::compact_resume_fork::compact_resume_and_fork_preserve_model_history_view
```

- `just build-for-release` to build release binaries for all
platforms/architectures remotely

To setup remote execution:
- [Create a buildbuddy account](https://app.buildbuddy.io/) (OpenAI
employees should also request org access at
https://openai.buildbuddy.io/join/ with their `@openai.com` email
address.)
- [Copy your API key](https://app.buildbuddy.io/docs/setup/) to
`~/.bazelrc` (add the line `build
--remote_header=x-buildbuddy-api-key=YOUR_KEY`)
- Use `--config=remote` in your `bazel` invocations (or add `common
--config=remote` to your `~/.bazelrc`, or use the `just` commands)

## CI

In terms of CI, this PR introduces `.github/workflows/bazel.yml`, which
uses Bazel to run the tests _locally_ on Mac and Linux GitHub runners
(we are working on supporting Windows, but that is not ready yet). Note
that the failures we are seeing in `just bazel-remote-test` do not occur
on these GitHub CI jobs, so everything in `.github/workflows/bazel.yml`
is green right now.

The `bazel.yml` uses extra config in `.github/workflows/ci.bazelrc` so
that macOS CI jobs build _remotely_ on Linux hosts (using the
`docker://docker.io/mbolin491/codex-bazel` Docker image declared in the
root `BUILD.bazel`) using cross-compilation to build the macOS
artifacts. Then these artifacts are downloaded locally to GitHub's macOS
runner so the tests can be executed natively. This is the relevant
config that enables this:

```
common:macos --config=remote
common:macos --strategy=remote
common:macos --strategy=TestRunner=darwin-sandbox,local
```

Because of the remote caching benefits we get from BuildBuddy, these new
CI jobs can be extremely fast! For example, consider these two jobs that
ran all the tests on Linux x86_64:

- Bazel 1m37s
https://github.com/openai/codex/actions/runs/20861063212/job/59940545209?pr=8875
- Cargo 9m20s
https://github.com/openai/codex/actions/runs/20861063192/job/59940559592?pr=8875

For now, we will continue to run both the Bazel and Cargo jobs for PRs,
but once we add support for Windows and running Clippy, we should be
able to cutover to using Bazel exclusively for PRs, which should still
speed things up considerably. We will probably continue to run the Cargo
jobs post-merge for commits that land on `main` as a sanity check.

Release builds will also continue to be done by Cargo for now.

Earlier attempt at this PR: https://github.com/openai/codex/pull/8832
Earlier attempt to add support for Buck2, now abandoned:
https://github.com/openai/codex/pull/8504

---------

Co-authored-by: David Zbarsky <dzbarsky@gmail.com>
Co-authored-by: Michael Bolin <mbolin@openai.com>
2026-01-09 11:09:43 -08:00
Helmut Januschka
7daaabc795 fix: add tui.alternate_screen config and --no-alt-screen CLI flag for Zellij scrollback (#8555)
Fixes #2558

Codex uses alternate screen mode (CSI 1049) which, per xterm spec,
doesn't support scrollback. Zellij follows this strictly, so users can't
scroll back through output.

**Changes:**
- Add `tui.alternate_screen` config: `auto` (default), `always`, `never`
- Add `--no-alt-screen` CLI flag
- Auto-detect Zellij and skip alt screen (uses existing `ZELLIJ` env var
detection)

**Usage:**
```bash
# CLI flag
codex --no-alt-screen

# Or in config.toml
[tui]
alternate_screen = "never"
```

With default `auto` mode, Zellij users get working scrollback without
any config changes.

---------

Co-authored-by: Josh McKinney <joshka@openai.com>
2026-01-09 18:38:26 +00:00
jif-oai
1aed01e99f renaming: task to turn (#8963) 2026-01-09 17:31:17 +00:00
jif-oai
ed64804cb5 nit: rename to analytics_enabled (#8978) 2026-01-09 17:18:42 +00:00
jif-oai
5c380d5b1e Revert "fix(app-server): set originator header from initialize JSON-RPC request" (#8986)
Reverts openai/codex#8873
2026-01-09 17:00:53 +00:00
jif-oai
46b0c4acbb chore: nuke telemetry file (#8985) 2026-01-09 08:55:21 -08:00
gt-oai
5b5a5b92b5 Add config to disable /feedback (#8909)
Some enterprises do not want their users to be able to `/feedback`.

<img width="395" height="325" alt="image"
src="https://github.com/user-attachments/assets/2dae9c0b-20c3-4a15-bcd3-0187857ebbd8"
/>

Adds to `config.toml`:

```toml
[feedback]
enabled = false
```

I've deliberately decided to:
1. leave other references to `/feedback` (e.g. in the interrupt message,
tips of the day) unchanged. I think we should continue to promote the
feature even if it is not usable currently.
2. leave the `/feedback` menu item selectable and display an error
saying it's disabled, rather than remove the menu item (which I believe
would raise more questions).

but happy to discuss these.

This will be followed by a change to requirements.toml that admins can
use to force the value of feedback.enabled.
2026-01-09 16:33:48 +00:00
Owen Lin
ea56186c2b fix(app-server): set originator header from initialize JSON-RPC request (#8873)
**Motivation**
The `originator` header is important for codex-backend’s Responses API
proxy because it identifies the real end client (codex cli, codex vscode
extension, codex exec, future IDEs) and is used to categorize requests
by client for our enterprise compliance API.

Today the `originator` header is set by either:
- the `CODEX_INTERNAL_ORIGINATOR_OVERRIDE` env var (our VSCode extension
does this)
- calling `set_default_originator()` which sets a global immutable
singleton (`codex exec` does this)

For `codex app-server`, we want the `initialize` JSON-RPC request to set
that header because it is a natural place to do so. Example:
```json
{
  "method": "initialize",
  "id": 0,
  "params": {
    "clientInfo": {
      "name": "codex_vscode",
      "title": "Codex VS Code Extension",
      "version": "0.1.0"
    }
  }
}
```
and when app-server receives that request, it can call
`set_default_originator()`. This is a much more natural interface than
asking third party developers to set an env var.

One hiccup is that `originator()` reads the global singleton and locks
in the value, preventing a later `set_default_originator()` call from
setting it. This would be fine but is brittle, since any codepath that
calls `originator()` before app-server can process an `initialize`
JSON-RPC call would prevent app-server from setting it. This was
actually the case with OTEL initialization which runs on boot, but I
also saw this behavior in certain tests.

Instead, what we now do is:
- [unchanged] If `CODEX_INTERNAL_ORIGINATOR_OVERRIDE` env var is set,
`originator()` would return that value and `set_default_originator()`
with some other value does NOT override it.
- [new] If no env var is set, `originator()` would return the default
value which is `codex_cli_rs` UNTIL `set_default_originator()` is called
once, in which case it is set to the new value and becomes immutable.
Later calls to `set_default_originator()` return
`SetOriginatorError::AlreadyInitialized`.

**Other notes**
- I updated `codex_core::otel_init::build_provider` to accept a service
name override, and app-server sends a hardcoded `codex_app_server`
service name to distinguish it from `codex_cli_rs` used by default (e.g.
TUI).

**Next steps**
- Update VSCE to set the proper value for `clientInfo.name` on
`initialize` and drop the `CODEX_INTERNAL_ORIGINATOR_OVERRIDE` env var.
- Delete support for `CODEX_INTERNAL_ORIGINATOR_OVERRIDE` in codex-rs.
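
A minimal sketch of the semantics described above, assuming a `OnceLock`-style singleton (the names and error type here are illustrative, not the actual codex-rs API):

```rust
use std::sync::OnceLock;

static ORIGINATOR: OnceLock<String> = OnceLock::new();
const DEFAULT_ORIGINATOR: &str = "codex_cli_rs";

// The env override always wins; otherwise return whatever was set
// first, falling back to the default while nothing has been set.
fn originator() -> String {
    if let Ok(v) = std::env::var("CODEX_INTERNAL_ORIGINATOR_OVERRIDE") {
        return v;
    }
    ORIGINATOR
        .get()
        .cloned()
        .unwrap_or_else(|| DEFAULT_ORIGINATOR.to_string())
}

// First caller wins; later calls learn the value is already fixed.
fn set_default_originator(value: String) -> Result<(), &'static str> {
    ORIGINATOR.set(value).map_err(|_| "AlreadyInitialized")
}
```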
2026-01-09 08:17:13 -08:00
Eric Traut
cacdae8c05 Work around crash in system-configuration library (#8954)
This is a proposed fix for #8912

Information provided by Codex:

no_proxy means “don’t use any system proxy settings for this client,”
even if macOS has proxies configured in System Settings or via
environment. On macOS, reqwest’s proxy discovery can call into the
system-configuration framework; that’s the code path that was panicking
with “Attempted to create a NULL object.” By forcing a direct connection
for the OAuth discovery request, we avoid that proxy-resolution path
entirely, so the system-configuration crate never gets invoked and the
panic disappears.

Effectively:

- With proxies: reqwest asks the OS for proxy config → system-configuration gets touched → panic.
- With no_proxy: reqwest skips proxy lookup → no system-configuration call → no panic.
So the fix doesn’t change any MCP protocol behavior; it just prevents
the OAuth discovery probe from touching the macOS proxy APIs that are
crashing in the reported environment.

This fix changes behavior for the OAuth discovery probe used in codex
mcp list/auth status detection. With no_proxy, that probe won’t use
system or env proxy settings, so:

If a server is only reachable via a proxy, the discovery call may fail
and we’ll show auth as Unsupported/NotLoggedIn incorrectly.
If the server is reachable directly (common case), behavior is
unchanged.



As an alternative, we could try to get a fix into the
[system-configuration](https://github.com/mullvad/system-configuration-rs)
library. It looks like this library is still under development but has a
slow release pace.
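
For reference, a minimal sketch of forcing a direct connection for the discovery probe (assumed client setup, not the actual codex-rs code):

```rust
// Build the OAuth discovery client with proxy resolution disabled so
// reqwest never calls into the macOS system-configuration framework.
fn discovery_client() -> reqwest::Result<reqwest::Client> {
    reqwest::Client::builder()
        .no_proxy() // skip system/env proxy lookup entirely
        .build()
}
```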
2026-01-09 08:11:34 -07:00
jif-oai
bc92dc5cf0 chore: update metrics temporality (#8901) 2026-01-09 14:57:42 +00:00
jif-oai
7e5b3e069e chore: metrics tool call (#8975) 2026-01-09 13:28:43 +00:00
jif-oai
e2e3f4490e chore: add approval metric (#8970) 2026-01-09 13:10:31 +00:00
jif-oai
225614d7fb chore: add mcp call metric (#8973) 2026-01-09 13:09:21 +00:00
jif-oai
16c66c37eb chore: move otel provider outside of trace module (#8968) 2026-01-09 12:42:54 +00:00
jif-oai
e9c548c65e chore: non mutable btree when building specs (#8969) 2026-01-09 12:21:55 +00:00
jif-oai
fceae86581 nit: rename session metric (#8966) 2026-01-09 11:55:57 +00:00
jif-oai
568b938c80 feat: first pass on clb tool (#8930) 2026-01-09 11:54:05 +00:00
Matthew Zeng
24d6e0114f [device-auth] When headless environment is detected, show device login flow instead. (#8756)
When headless environment is detected, show device login flow instead.
2026-01-08 21:48:30 -08:00
Michael Bolin
d3ff668f68 fix: remove existing process hardening from Codex CLI (#8951)
As explained in https://github.com/openai/codex/issues/8945 and
https://github.com/openai/codex/issues/8472, there are legitimate cases
where users expect processes spawned by Codex to inherit environment
variables such as `LD_LIBRARY_PATH` and `DYLD_LIBRARY_PATH`, where
failing to do so can cause significant performance issues.

This PR removes the use of
`codex_process_hardening::pre_main_hardening()` in Codex CLI (which was
added not in response to a known security issue, but because it seemed
like a prudent thing to do from a security perspective:
https://github.com/openai/codex/pull/4521), but we will continue to use
it in `codex-responses-api-proxy`. At some point, we probably want to
introduce a slightly different version of
`codex_process_hardening::pre_main_hardening()` in Codex CLI that
excludes said environment variables from the Codex process itself, but
continues to propagate them to subprocesses.
2026-01-08 21:19:34 -08:00
Ahmed Ibrahim
81caee3400 Add 5s timeout to models list call + integration test (#8942)
- Enforce a 5s timeout around the remote models refresh to avoid hanging
/models calls.
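
A sketch of the shape of the guard, with assumed names (the actual refresh call differs):

```rust
use std::future::Future;
use std::time::Duration;

// Bound the remote refresh so a slow endpoint makes /models fall back
// instead of hanging; `None` means the 5s budget was exhausted.
async fn refresh_with_timeout<T>(fetch: impl Future<Output = T>) -> Option<T> {
    tokio::time::timeout(Duration::from_secs(5), fetch).await.ok()
}
```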
2026-01-08 18:06:10 -08:00
Thibault Sottiaux
51dd5af807 fix: treat null MCP resource args as empty (#8917)
Handle null tool arguments in the MCP resource handler so optional
resource tools accept null without failing, preserving normal JSON
parsing for non-null payloads and improving robustness when models emit
null; this avoids spurious argument parse errors for list/read MCP
resource calls.
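
A minimal sketch of the normalization (illustrative names, not the actual handler):

```rust
use serde_json::{Map, Value};

// Treat a missing or explicit `null` arguments payload as an empty
// object; non-null payloads keep the normal JSON parsing path.
fn normalize_args(args: Option<Value>) -> Value {
    match args {
        None | Some(Value::Null) => Value::Object(Map::new()),
        Some(other) => other,
    }
}
```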
2026-01-08 17:47:02 -08:00
iceweasel-oai
6372ba9d5f Elevated sandbox NUX (#8789)
Elevated Sandbox NUX:

* prompt for elevated sandbox setup when agent mode is selected (via
/approvals or at startup)
* prompt for degraded sandbox if elevated setup is declined or fails
* introduce /elevate-sandbox command to upgrade from degraded
experience.
2026-01-08 16:23:06 -08:00
Michael Bolin
bdfdebcfa1 fix: increase timeout for wait_for_event() for Bazel (#8946)
This seems to be necessary to get the Bazel builds on ARM Linux to go
green on https://github.com/openai/codex/pull/8875.

I don't feel great about timeout-whack-a-mole, but we're still learning
here...
2026-01-08 15:37:46 -08:00
pakrym-oai
62a73b6d58 Attempt to reload auth as a step in 401 recovery (#8880)
When authentication fails, first attempt to reload the auth from file
and then attempt to refresh it.
2026-01-08 15:06:44 -08:00
Celia Chen
be4364bb80 [chore] move app server tests from chat completion to responses (#8939)
We are deprecating chat completions. Move all app server tests from chat
completion to responses.
2026-01-08 22:27:55 +00:00
Ahmed Ibrahim
0d3e673019 remove get_responses_requests and get_responses_request_bodies to use in-place matcher (#8858) 2026-01-08 13:57:48 -08:00
Anton Panasenko
41a317321d feat: fork conversation/thread (#8866)
## Summary
- add thread/conversation fork endpoints to the protocol (v1 + v2)
- implement fork handling in app-server using thread manager and config
overrides
- add fork coverage in app-server tests and document `thread/fork` usage
2026-01-08 12:54:20 -08:00
Celia Chen
051bf81df9 [fix] app server flaky send_messages test (#8874)
Fix flakiness of CI test:
https://github.com/openai/codex/actions/runs/20350530276/job/58473691434?pr=8282

This PR does two things:
1. move the flakiness test to use responses API instead of chat
completion API
2. make mcp_process agnostic to the order of
responses/notifications/requests that come in, by buffering messages not
read
2026-01-08 20:41:21 +00:00
Michael Bolin
a70f5b0b3c fix: correct login shell mismatch in the accept_elicitation_for_prompt_rule() test (#8931)
Because the path to `git` is used to construct `elicitations_to_accept`,
we need to ensure that we resolve which `git` to use the same way our
Bash process will:


c9c6560685/codex-rs/exec-server/tests/suite/accept_elicitation.rs (L59-L69)

This fixes an issue when running the test on macOS using Bazel
(https://github.com/openai/codex/pull/8875) where the login shell chose
`/opt/homebrew/bin/git` whereas the non-login shell chose
`/usr/bin/git`.
2026-01-08 12:37:38 -08:00
Michael Bolin
224c4867dd fix: increase timeout for tests that have been flaking with timeout issues (#8932)
I have seen this test flake out sometimes when running the macOS build
using Bazel in CI: https://github.com/openai/codex/pull/8875. Perhaps
Bazel runs with greater parallelism, inducing a heavier load, causing an
issue?
2026-01-08 20:31:03 +00:00
jif-oai
c9c6560685 nit: parse_arguments (#8927) 2026-01-08 19:49:17 +00:00
pakrym-oai
634764ece9 Immutable CodexAuth (#8857)
Historically we started with a CodexAuth that knew how to refresh its
own tokens and then added AuthManager that did a different kind of
refresh (re-reading from disk).

I don't think it makes sense for both `CodexAuth` and `AuthManager` to
be mutable and contain behaviors.

Move all refresh logic into `AuthManager` and keep `CodexAuth` as a data
object.
2026-01-08 11:43:56 -08:00
Felipe Petroski Such
5bc3e325a6 add tooltip hint for shell commands (!) (#8926)
I didn't know this existed because it's not listed in the hints.
2026-01-08 19:31:20 +00:00
gt-oai
4156060416 Add read-only when backfilling requirements from managed_config (#8913)
When a user has a managed_config which doesn't specify read-only, Codex
fails to launch.
2026-01-08 11:27:46 -08:00
Thibault Sottiaux
98122cbad0 fix: preserve core env vars on Windows (#8897)
This updates core shell environment policy handling to match Windows
case-insensitive variable names and adds a Windows-only regression test,
so Path/TEMP are no longer dropped when inherit=core.
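
A sketch of the matching rule (the core-variable list here is an assumption for illustration):

```rust
// On Windows, env var names are case-insensitive, so `Path` must match
// a policy entry written as `PATH`.
fn matches_core_var(name: &str) -> bool {
    const CORE_VARS: &[&str] = &["PATH", "TEMP", "HOME", "USERNAME"];
    CORE_VARS.iter().any(|core| core.eq_ignore_ascii_case(name))
}
```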
2026-01-08 10:36:36 -08:00
github-actions[bot]
7b21b443bb Update models.json (#8792)
Automated update of models.json.

Co-authored-by: aibrahim-oai <219906144+aibrahim-oai@users.noreply.github.com>
2026-01-08 10:26:01 -08:00
gt-oai
93dec9045e otel test: retry WouldBlock errors (#8915)
This test looks flaky on Windows:

```
        FAIL [   0.034s] (1442/2802) codex-otel::tests suite::otlp_http_loopback::otlp_http_exporter_sends_metrics_to_collector
  stdout ───

    running 1 test
    test suite::otlp_http_loopback::otlp_http_exporter_sends_metrics_to_collector ... FAILED

    failures:

    failures:
        suite::otlp_http_loopback::otlp_http_exporter_sends_metrics_to_collector

    test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 14 filtered out; finished in 0.02s
    
  stderr ───
    Error: ProviderShutdown { source: InternalFailure("[InternalFailure(\"Failed to shutdown\")]") }

────────────
     Summary [ 175.360s] 2802 tests run: 2801 passed, 1 failed, 15 skipped
        FAIL [   0.034s] (1442/2802) codex-otel::tests suite::otlp_http_loopback::otlp_http_exporter_sends_metrics_to_collector
```
2026-01-08 18:18:49 +00:00
jif-oai
69898e3dba clean: all history cloning (#8916) 2026-01-08 18:17:18 +00:00
Celia Chen
c4af304c77 [fix] app server flaky thread/resume tests (#8870)
Fix flakiness of CI tests:
https://github.com/openai/codex/actions/runs/20350530276/job/58473691443?pr=8282

This PR does two things:
1. test with responses API instead of chat completions API in
thread_resume tests;
2. have a new responses API fixture that mocks out arbitrary numbers of
responses API calls (including no calls) and have the same repeated
response.

Tested by CI
2026-01-08 10:17:05 -08:00
jif-oai
5b7707dfb1 feat: add list loaded threads to app server (#8902) 2026-01-08 17:48:20 +00:00
Michael Bolin
59d6937550 fix: reduce duplicate include_str!() calls (#8914) 2026-01-08 17:20:41 +00:00
gt-oai
932a5a446f config requirements: improve requirement error messages (#8843)
**Before:**
```
Error loading configuration: value `Never` is not in the allowed set [OnRequest]
```

**After:**
```
Error loading configuration: invalid value for `approval_policy`: `Never` is not in the
allowed set [OnRequest] (set by MDM com.openai.codex:requirements_toml_base64)
```

Done by introducing a new struct `ConfigRequirementsWithSources` onto
which we `merge_unset_fields` now. Also introduces a pair of requirement
value and its `RequirementSource` (inspired by `ConfigLayerSource`):

```rust
pub struct Sourced<T> {
    pub value: T,
    pub source: RequirementSource,
}
```
2026-01-08 16:11:14 +00:00
zbarsky-openai
484f6f4c26 gitignore bazel-* (#8911)
QoL improvement so we don't accidentally add these dirs while we
prototype bazel things
2026-01-08 07:50:58 -08:00
jif-oai
5522663f92 feat: add a few metrics (#8910) 2026-01-08 15:39:57 +00:00
jif-oai
98e171258c nit: drop unused function call error (#8903) 2026-01-08 15:07:30 +00:00
jif-oai
da667b1f56 chore: drop useless interaction_input (#8907) 2026-01-08 15:01:07 +00:00
Michael Bolin
1e29774fce fix: leverage codex_utils_cargo_bin() in codex-rs/core/tests/suite (#8887)
This eliminates our dependency on the `escargot` crate and better
prepares us for Bazel builds: https://github.com/openai/codex/pull/8875.
2026-01-08 14:56:16 +00:00
Denis Andrejew
9ce6bbc43e Avoid setpgid for inherited stdio on macOS (#8691)
## Summary
- avoid setting a new process group when stdio is inherited (keeps child
in foreground PG)
- keep process-group isolation when stdio is redirected so killpg
cleanup still works
- prevents macOS job-control SIGTTIN stops that look like hangs after
output

## Testing
- `cargo build -p codex-cli`
- `GIT_CONFIG_GLOBAL=/dev/null GIT_CONFIG_NOSYSTEM=1
CARGO_BIN_EXE_codex=/Users/denis/Code/codex/codex-rs/target/debug/codex
/opt/homebrew/bin/timeout 30m cargo test -p codex-core -p codex-exec`

## Context
This fixes macOS sandbox hangs for commands like `elixir -v` / `erl
-noshell`, where the child was moved into a new process group while
still attached to the controlling TTY. See issue #8690.

## Authorship & collaboration
- This change and analysis were authored by **Codex** (AI coding agent).
- Human collaborator: @seeekr provided repro environment, context, and
review guidance.
- CLI used: `codex-cli 0.77.0`.
- Model: `gpt-5.2-codex (xhigh)`.

Co-authored-by: Eric Traut <etraut@openai.com>
2026-01-08 07:50:40 -07:00
Michael Bolin
7520d8ba58 fix: leverage find_resource! macro in load_sse_fixture_with_id (#8888)
This helps prepare us for Bazel builds:
https://github.com/openai/codex/pull/8875.
2026-01-08 09:34:05 -05:00
jif-oai
0318f30ed8 chore: add small debug client (#8894)
Small debug client; do not use in production.
2026-01-08 13:40:14 +00:00
Thibault Sottiaux
be212db0c8 fix: include project instructions in /review subagent (#8899)
Include project-level AGENTS.md and skills in /review sessions so the
review sub-agent uses the same instruction pipeline as standard runs,
keeping reviewer context aligned with normal sessions.
2026-01-08 13:31:01 +00:00
Thibault Sottiaux
5b022c2904 chore: align error limit comment (#8896) 2026-01-08 13:30:33 +00:00
jif-oai
e21ce6c5de chore: drop metrics exporter config (#8892)
Dropped for now as enterprises should not be able to use it
2026-01-08 13:20:18 +00:00
Thibault Sottiaux
267c05fb30 fix: stabilize list_dir pagination order (#8826)
Sort list_dir entries before applying offset/limit so pagination matches
the displayed order, update pagination/truncation expectations, and add
coverage for sorted pagination. This ensures stable, predictable
directory pages when list_dir is enabled.
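
A minimal sketch of the fix (illustrative, not the actual list_dir code):

```rust
// Sort first, then page, so offset/limit always cut from the same
// stable ordering that is displayed to the model.
fn page_entries(mut entries: Vec<String>, offset: usize, limit: usize) -> Vec<String> {
    entries.sort();
    entries.into_iter().skip(offset).take(limit).collect()
}
```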
2026-01-08 03:51:47 -08:00
jif-oai
634650dd25 feat: metrics capabilities (#8318)
Add metrics capabilities to Codex. The `README.md` is up to date.

Of course, this will not be merged ahead of the metrics PR:
https://github.com/openai/codex/pull/8350
2026-01-08 11:47:36 +00:00
jif-oai
8a0c2e5841 chore: add list thread ids on manager (#8855) 2026-01-08 10:53:58 +00:00
Dylan Hurd
0f8bb4579b fix: windows can now paste non-ascii multiline text (#8774)
## Summary
This PR builds _heavily_ on the work from @occurrent in #8021 - I've
only added a small fix, added additional tests, and propagated the
changes to tui2.

From the original PR:

> On Windows, Codex relies on PasteBurst for paste detection because
bracketed paste is not reliably available via crossterm.
> 
> When pasted content starts with non-ASCII characters, input is routed
through handle_non_ascii_char, which bypasses the normal paste burst
logic. This change extends the paste burst window for that path, which
should ensure that Enter is correctly grouped as part of the paste.


## Testing
- [x] tested locally cross-platform
- [x] added regression tests

---------

Co-authored-by: occur <occurring@outlook.com>
2026-01-07 23:21:49 -08:00
Michael Bolin
35fd69a9f0 fix: make the find_resource! macro responsible for the absolutize() call (#8884)
https://github.com/openai/codex/pull/8879 introduced the
`find_resource!` macro, but now that I am about to use it in more
places, I realize that it should take care of this normalization case
for callers.

Note the `use $crate::path_absolutize::Absolutize;` line is there so
that users of `find_resource!` do not have to explicitly include
`path-absolutize` to their own `Cargo.toml`.
2026-01-07 23:03:43 -08:00
iceweasel-oai
ccba737d26 add ability to disable input temporarily in the TUI. (#8876)
We will disable input while the elevated sandbox setup is running.
2026-01-07 20:56:48 -08:00
xl-openai
75076aabfe Support UserInput::Skill in V2 API. (#8864)
Allow client to specify explicit skill invocation in v2 API.
2026-01-07 18:26:35 -08:00
Michael Bolin
f6b563ec64 feat: introduce find_resource! macro that works with Cargo or Bazel (#8879)
To support Bazelification in https://github.com/openai/codex/pull/8875,
this PR introduces a new `find_resource!` macro that we use in place of
our existing logic in tests that looks for resources relative to the
compile-time `CARGO_MANIFEST_DIR` env var.

To make this work, we plan to add the following to all `rust_library()`
and `rust_test()` Bazel rules in the project:

```
rustc_env = {
    "BAZEL_PACKAGE": native.package_name(),
},
```

Our new `find_resource!` macro reads this value via
`option_env!("BAZEL_PACKAGE")` so that the Bazel package _of the code
using `find_resource!`_ is injected into the code expanded from the
macro. (If `find_resource()` were a function, then
`option_env!("BAZEL_PACKAGE")` would always be
`codex-rs/utils/cargo-bin`, which is not what we want.)

Note we only consider the `BAZEL_PACKAGE` value when the `RUNFILES_DIR`
environment variable is set at runtime, indicating that the test is
being run by Bazel. In this case, we have to concatenate the runtime
`RUNFILES_DIR` with the compile-time `BAZEL_PACKAGE` value to build the
path to the resource.
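
A hedged sketch of the macro's shape (the real implementation differs in details such as the path normalization covered below):

```rust
#[macro_export]
macro_rules! find_resource {
    ($rel_path:expr) => {{
        use std::path::PathBuf;
        match (std::env::var("RUNFILES_DIR"), option_env!("BAZEL_PACKAGE")) {
            // Under Bazel: join the runtime runfiles dir with the
            // compile-time package of the *calling* crate.
            (Ok(runfiles), Some(pkg)) => PathBuf::from(runfiles).join(pkg).join($rel_path),
            // Plain Cargo: env!() expands at the call site, so this is the
            // caller's manifest dir, not codex-rs/utils/cargo-bin.
            _ => PathBuf::from(env!("CARGO_MANIFEST_DIR")).join($rel_path),
        }
    }};
}
```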

In testing this change, I discovered one funky edge case in
`codex-rs/exec-server/tests/common/lib.rs` where we have to _normalize_
(but not canonicalize!) the result from `find_resource!` because the
path contains a `common/..` component that does not exist on disk when
the test is run under Bazel, so it must be semantically normalized using
the [`path-absolutize`](https://crates.io/crates/path-absolutize) crate
before it is passed to `dotslash fetch`.

Because this new behavior may be non-obvious, this PR also updates
`AGENTS.md` to make humans/Codex aware that this API is preferred.
2026-01-07 18:06:08 -08:00
iceweasel-oai
357e4c902b add footer note to TUI (#8867)
This will be used by the elevated sandbox NUX to give a hint on how to
run the elevated sandbox when in the non-elevated mode.
2026-01-07 16:44:28 -08:00
Michael Bolin
ef8b8ebc94 fix: use tokio for I/O in an async function (#8868)
I thought this might solve a bug I'm working on, but it turned out to be
a red herring. Nevertheless, this seems like the right thing to do here.
2026-01-07 16:36:23 -08:00
Michael Bolin
54b290ec1d fix: update resource path resolution logic so it works with Bazel (#8861)
The Bazelification work in-flight over at
https://github.com/openai/codex/pull/8832 needs this fix so that Bazel
can find the path to the DotSlash file for `bash`.

With this change, the following almost works:

```
bazel test --test_output=errors //codex-rs/exec-server:exec-server-all-test
```

That is, now the `list_tools` test passes, but
`accept_elicitation_for_prompt_rule` still fails because it runs
Seatbelt itself, so it needs to be run outside Bazel's local sandboxing.
2026-01-07 22:33:05 +00:00
Shijie Rao
efd0c21b9b Feat: appServer.requirementList for requirement.toml (#8800)
### Summary
We are exposing requirements via `requirement/list` method from
app-server so that we can conditionally disable the agent mode dropdown
selection in VSCE and correctly set the default value.

### Sample output
#### `etc/codex/requirements.toml`
<img width="497" height="49" alt="Screenshot 2026-01-06 at 11 32 06 PM"
src="https://github.com/user-attachments/assets/fbd9402e-515f-4b9e-a158-2abb23e866a0"
/>

#### App server response
<img width="1107" height="79" alt="Screenshot 2026-01-06 at 11 30 18 PM"
src="https://github.com/user-attachments/assets/c0d669cd-54ef-4789-a26c-adb2c41950af"
/>
2026-01-07 13:57:44 -08:00
xl-openai
61e81af887 Support symlink for skills discovery. (#8801)
Skills discovery now follows symlink entries for SkillScope::User
($CODEX_HOME/skills) and SkillScope::Admin (e.g. /etc/codex/skills).

Added cycle protection: directories are canonicalized and tracked in a
visited set to prevent infinite traversal from circular links.

Added per-root traversal limits to avoid accidentally scanning huge
trees:
- max depth: 6
- max directories: 2000 (logs a warning if truncated)

For now, symlink stat failures and traversal truncation are logged
rather than surfaced as UI “invalid SKILL.md” warnings.
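
A hedged sketch of the traversal guards (constants from the description above; `eprintln!` stands in for the real warning log):

```rust
use std::collections::HashSet;
use std::path::{Path, PathBuf};

const MAX_DEPTH: usize = 6;
const MAX_DIRS: usize = 2000;

fn walk_skill_dirs(root: &Path, mut on_dir: impl FnMut(&Path)) {
    let mut visited: HashSet<PathBuf> = HashSet::new();
    let mut stack = vec![(root.to_path_buf(), 0usize)];
    let mut dirs_seen = 0usize;
    while let Some((dir, depth)) = stack.pop() {
        // Canonicalize so circular symlinks collapse to one visited key.
        let Ok(canonical) = dir.canonicalize() else { continue };
        if !visited.insert(canonical) || depth > MAX_DEPTH {
            continue;
        }
        dirs_seen += 1;
        if dirs_seen > MAX_DIRS {
            eprintln!("skills discovery truncated after {MAX_DIRS} directories");
            break;
        }
        on_dir(&dir);
        let Ok(entries) = std::fs::read_dir(&dir) else { continue };
        for entry in entries.flatten() {
            let path = entry.path();
            // is_dir() follows symlinks, so linked directories are traversed.
            if path.is_dir() {
                stack.push((path, depth + 1));
            }
        }
    }
}
```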
2026-01-07 13:34:48 -08:00
gt-oai
f07b8aa591 Warn in /model if BASE_URL set (#8847)
<img width="763" height="349" alt="Screenshot 2026-01-07 at 18 37 59"
src="https://github.com/user-attachments/assets/569d01cb-ea91-4113-889b-ba74df24adaf"
/>

It may not make sense to use the `/model` menu with a custom
OPENAI_BASE_URL. But some model proxies may support it, so we shouldn't
disable it completely. A warning is a reasonable compromise.
2026-01-07 21:24:18 +00:00
darlingm
5f3f70203c Clarify YAML frontmatter formatting in skill-creator (#8610)
Fixes #8609

# Summary

Emphasize single-line name/description values and quoting when values
could be interpreted as YAML syntax.

# Testing

Not run (skill-only change).
2026-01-07 14:24:02 -07:00
Channing Conger
21c6d40a44 Add feature for optional request compression (#8767)
Adds a new feature flag, `enable_request_compression`, that compresses
requests to the codex-backend with zstd. It is currently wired up only for
the codex-backend, so it takes effect only for OpenAI providers using
chatgpt::auth, even when the feature is enabled.

Also added a new info log line for evaluating the compression ratio and the
overhead of compressing before requesting. You can enable it with
`RUST_LOG=$RUST_LOG,codex_client::transport=info`:

```
2026-01-06T00:09:48.272113Z  INFO codex_client::transport: Compressed request body with zstd pre_compression_bytes=28914 post_compression_bytes=11485 compression_duration_ms=0
```
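
A hedged sketch of the mechanism (not the actual transport code), assuming the `zstd` and `tracing` crates:

```rust
fn maybe_compress(body: Vec<u8>, compression_enabled: bool) -> std::io::Result<Vec<u8>> {
    if !compression_enabled {
        return Ok(body);
    }
    let start = std::time::Instant::now();
    let compressed = zstd::encode_all(body.as_slice(), 0)?; // 0 = default level
    tracing::info!(
        pre_compression_bytes = body.len(),
        post_compression_bytes = compressed.len(),
        compression_duration_ms = start.elapsed().as_millis() as u64,
        "Compressed request body with zstd"
    );
    Ok(compressed)
}
```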
2026-01-07 13:21:40 -08:00
Ahmed Ibrahim
a9b5e8a136 Simplify error management in run_turn (#8849) 2026-01-07 13:15:46 -08:00
Ahmed Ibrahim
187924d761 Override truncation policy at model info level (#8856)
We used to override truncation policy by comparing model info vs config
value in context manager. A better way to do it is to construct model
info using the config value.
2026-01-07 13:06:20 -08:00
Owen Lin
66450f0445 fix: implement 'Allow this session' for apply_patch approvals (#8451)
**Summary**
This PR makes “ApprovalDecision::AcceptForSession / don’t ask again this
session” actually work for `apply_patch` approvals by caching approvals
based on absolute file paths in codex-core, properly wiring it through
app-server v2, and exposing the choice in both TUI and TUI2.
- This brings `apply_patch` calls to be at feature-parity with general
shell commands, which also have a "Yes, and don't ask again" option.
- This also fixes VSCE's "Allow this session" button to actually work.

While we're at it, also split the app-server v2 protocol's
`ApprovalDecision` enum so execpolicy amendments are only available for
command execution approvals.

**Key changes**
- Core: per-session patch approval allowlist keyed by absolute file
paths
- Handles multi-file patches and renames/moves by recording both source
and destination paths for `Update { move_path: Some(...) }`.
- Extend the `Approvable` trait and `ApplyPatchRuntime` to work with
multiple keys, because an `apply_patch` tool call can modify multiple
files. For a request to be auto-approved, we will need to check that all
file paths have been approved previously.
- App-server v2: honor AcceptForSession for file changes
- File-change approval responses now map AcceptForSession to
ReviewDecision::ApprovedForSession (no longer downgraded to plain
Approved).
- Replace `ApprovalDecision` with two enums:
`CommandExecutionApprovalDecision` and `FileChangeApprovalDecision`
- TUI / TUI2: expose “don’t ask again for these files this session”
- Patch approval overlays now include a third option (“Yes, and don’t
ask again for these files this session (s)”).
    - Snapshot updates for the approval modal.

**Tests added/updated**
- Core:
- Integration test that proves ApprovedForSession on a patch skips the
next patch prompt for the same file
- App-server:
- v2 integration test verifying
FileChangeApprovalDecision::AcceptForSession works properly

**User-visible behavior**
- When the user approves a patch “for session”, future patches touching
only those previously approved file(s) will no longer prompt again during
that session (both via app-server v2 and TUI/TUI2).
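
A minimal sketch of the allowlist idea described above (types hypothetical):

```rust
use std::collections::HashSet;
use std::path::PathBuf;

/// A patch is auto-approved only if *every* absolute path it touches
/// was approved earlier in the session.
#[derive(Default)]
struct PatchApprovalCache {
    approved: HashSet<PathBuf>,
}

impl PatchApprovalCache {
    fn approve_for_session(&mut self, paths: impl IntoIterator<Item = PathBuf>) {
        // For renames/moves, record both the source and destination paths.
        self.approved.extend(paths);
    }

    fn is_auto_approved(&self, paths: &[PathBuf]) -> bool {
        paths.iter().all(|p| self.approved.contains(p))
    }
}
```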

**Manual testing**
Tested both TUI and TUI2 - see screenshots below.

TUI:
<img width="1082" height="355" alt="image"
src="https://github.com/user-attachments/assets/adcf45ad-d428-498d-92fc-1a0a420878d9"
/>


TUI2:
<img width="1089" height="438" alt="image"
src="https://github.com/user-attachments/assets/dd768b1a-2f5f-4bd6-98fd-e52c1d3abd9e"
/>
2026-01-07 20:11:12 +00:00
Celia Chen
e8421c761c [chore] update app server doc with skills (#8853) 2026-01-07 20:07:01 +00:00
jif-oai
fe460e0f9a chore: drop some deprecated (#8848) 2026-01-07 19:54:45 +00:00
jif-oai
1253d19641 chore: drop useless feature flags (#8850) 2026-01-07 19:54:32 +00:00
Ahmed Ibrahim
4c9b4b684f Fix app-server write_models_cache to treat models with less priority number as higher priority. (#8844)
Rank models with p0 higher than p1. This shouldn't result in any
behavioral changes. Just reordering.
2026-01-07 11:22:13 -08:00
pakrym-oai
018de994b0 Stop using AuthManager as the source of codex_home (#8846) 2026-01-07 18:56:20 +00:00
Ahmed Ibrahim
c31960b13a remove unnecessary todos (#8842)
> // todo(aibrahim): why are we passing model here while it can change?

we update it on each turn with `.with_model`

> //TODO(aibrahim): run CI in release mode.

although it's good to have, release builds take double the time tests
take.

> // todo(aibrahim): make this async function

we figured out another way of doing this sync
2026-01-07 10:43:10 -08:00
Ahmed Ibrahim
9179c9deac Merge Modelfamily into modelinfo (#8763)
- Merge ModelFamily into ModelInfo
- Remove logic for adding instructions to apply patch
- Add compaction limit and visible context window to `ModelInfo`
2026-01-07 10:35:09 -08:00
Michael Bolin
a1e81180f8 fix: upgrade lru crate to 0.16.3 (#8845)
See https://rustsec.org/advisories/RUSTSEC-2026-0002.

Though our `ratatui` fork has a transitive dep on an older version of
the `lru` crate, so to get CI green ASAP, this PR also adds an exception
to `deny.toml` for `RUSTSEC-2026-0002`, but hopefully this will be
short-lived.
2026-01-07 10:11:27 -08:00
pakrym-oai
fedcb8f63c Move tests below auth manager (#8840)
To simplify future diffs
2026-01-07 17:36:44 +00:00
jif-oai
116059c3a0 chore: unify conversation with thread name (#8830)
Done and verified by Codex + refactor feature of RustRover
2026-01-07 17:04:53 +00:00
Thibault Sottiaux
0d788e6263 fix: handle early codex exec exit (#8825)
Fixes CodexExec to avoid missing early process exits by registering the
exit handler up front and deferring the error until after stdout is
drained, and adds a regression test that simulates a fast-exit child
while still producing output so hangs are caught.
2026-01-07 08:54:27 -08:00
jif-oai
4cef89a122 chore: rename unified exec sessions (#8822)
Renaming done by Codex
2026-01-07 16:12:47 +00:00
Thibault Sottiaux
124a09e577 fix: handle /review arguments in TUI (#8823)
Handle /review <instructions> in the TUI and TUI2 by routing it as a
custom review command instead of plain text, wiring command dispatch and
adding composer coverage so typing /review text starts a review directly
rather than posting a message. User impact: /review with arguments now
kicks off the review flow, previously it would just forward as a plain
command and not actually start a review.
2026-01-07 13:14:55 +00:00
Thibault Sottiaux
a59052341d fix: parse git apply paths correctly (#8824)
Fixes apply.rs path parsing so 
- quoted diff headers are tokenized and extracted correctly, 
- /dev/null headers are ignored before prefix stripping to avoid bogus
dev/null paths, and
- git apply output paths are unescaped from C-style quoting.

**Why**
This prevents potentially missed staging and misclassified paths when
applying or reverting patches, which could lead to incorrect behavior
for repos with spaces or escaped characters in filenames.

**Impact**
I checked and this is only used in the cloud tasks support and `codex
apply <task_id>` flow.
2026-01-07 13:00:31 +00:00
jif-oai
8372d61be7 chore: silent just fmt (#8820)
Done to avoid spammy warnings ending up in the model context without
having to switch to nightly:
```
Warning: can't set `imports_granularity = Item`, unstable features are only available in nightly channel.
```
2026-01-07 12:16:38 +00:00
Thibault Sottiaux
230a045ac9 chore: stabilize core tool parallelism test (#8805)
Set login=false for the shell tool in the timing-based parallelism test
so it does not depend on slow user login shells, making the test
deterministic without user-facing changes. This prevents occasional
flakes when running locally.
2026-01-07 09:26:47 +00:00
charley-oai
3389465c8d Enable model upgrade popup even when selected model is no longer in picker (#8802)
With `config.toml`:
```
model = "gpt-5.1-codex"
```
(where `gpt-5.1-codex` has `show_in_picker: false` in
[`model_presets.rs`](https://github.com/openai/codex/blob/main/codex-rs/core/src/models_manager/model_presets.rs);
this happens if the user hasn't used codex in a while so they didn't see
the popup before their model was changed to `show_in_picker: false`)

The upgrade picker used to not show (because `gpt-5.1-codex` was
filtered out of the model list in code). Now, the filtering is done
downstream in tui and app-server, so the model upgrade popup shows:

<img width="1503" height="227" alt="Screenshot 2026-01-06 at 5 04 37 PM"
src="https://github.com/user-attachments/assets/26144cc2-0b3f-4674-ac17-e476781ec548"
/>
2026-01-06 19:32:27 -08:00
Thibault Sottiaux
8b4d27dfcd fix: truncate long approval prefixes when rendering (#8734)
Fixes inscrutable multiline approval requests:
<img width="686" height="844" alt="image"
src="https://github.com/user-attachments/assets/cf9493dc-79e6-4168-8020-0ef0fe676d5e"
/>
2026-01-06 15:17:01 -08:00
Michael Bolin
dc1a568dc7 fix: populate the release notes when the release is created (#8799)
Use the contents of the commit message from the commit associated with
the tag (that contains the version bump) as the release notes by writing
them to a file and then specifying the file as the `body_path` of
`softprops/action-gh-release@v2`.
2026-01-06 15:02:39 -08:00
sayan-oai
54ded1a3c0 add web_search_cached flag (#8795)
Add `web_search_cached` feature to config. Enables `web_search` tool
with access only to cached/indexed results (see
[docs](https://platform.openai.com/docs/guides/tools-web-search#live-internet-access)).

This takes precedence over the existing `web_search_request`, which
continues to enable `web_search` over live results as it did before.

`web_search_cached` is disabled for review mode, as `web_search_request`
is.
2026-01-06 14:53:59 -08:00
Celia Chen
11d4f3f45e [app-server] fix config loading for conversations (#8765)
Currently we don't load config properly for app server conversations.
see:
https://linear.app/openai/issue/CODEX-3956/config-flags-not-respected-in-codex-app-server.
This PR fixes that by respecting the config passed in.

Tested by running `cargo build -p codex-cli &&
RUST_LOG=codex_app_server=debug CODEX_BIN=target/debug/codex cargo run
-p codex-app-server-test-client -- \
--config
model_providers.mock_provider.base_url=\"http://localhost:4010/v2\" \
    --config model_provider=\"mock_provider\" \
    --config model_providers.mock_provider.name="hello" \
    send-message-v2 "hello"`
and verified that the mock_provider is called instead of default
provider.

#closes
https://linear.app/openai/issue/CODEX-3956/config-flags-not-respected-in-codex-app-server

---------

Co-authored-by: Michael Bolin <mbolin@openai.com>
2026-01-06 22:02:17 +00:00
Owen Lin
8b7ec31ba7 feat(app-server): thread/rollback API (#8454)
Add `thread/rollback` to app-server to support IDEs undo-ing the last N
turns of a thread.

For context, an IDE partner will be supporting an "undo" capability
where the IDE (the app-server client) will be responsible for reverting
the local changes made during the last turn. To support this well, we
also need a way to drop the last turn (or more generally, the last N
turns) from the agent's context. This is what `thread/rollback` does.

**Core idea**: A Thread rollback is represented as a persisted event
message (EventMsg::ThreadRollback) in the rollout JSONL file, not by
rewriting history. On resume, both the model's context (core replay) and
the UI turn list (app-server v2's thread history builder) apply these
markers so the pruned history is consistent across live conversations
and `thread/resume`.

Implementation notes:
- Rollback only affects agent context and appends to the rollout file;
clients are responsible for reverting files on disk.
- If a thread rollback is currently in progress, subsequent
`thread/rollback` calls are rejected.
- Because we use `CodexConversation::submit` and codex core tracks
active turns, returning an error on concurrent rollbacks is communicated
via an `EventMsg::Error` with a new variant
`CodexErrorInfo::ThreadRollbackFailed`. app-server watches for that and
sends the BAD_REQUEST RPC response.

Tests cover thread rollbacks in both core and app-server, including when
`num_turns` > existing turns (which clears all turns).

**Note**: this explicitly does **not** behave like `/undo` which we just
removed from the CLI, which does the opposite of what `thread/rollback`
does. `/undo` reverts local changes via ghost commits/snapshots and does
not modify the agent's context / conversation history.
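
A minimal replay sketch of the marker semantics (types hypothetical):

```rust
struct Turn; // stand-in for a real turn record

enum ReplayItem {
    Turn(Turn),
    Rollback { num_turns: usize },
}

/// Rollback markers prune turns during replay; history is never rewritten.
fn rebuild_turns(items: Vec<ReplayItem>) -> Vec<Turn> {
    let mut turns = Vec::new();
    for item in items {
        match item {
            ReplayItem::Turn(t) => turns.push(t),
            ReplayItem::Rollback { num_turns } => {
                // num_turns > turns.len() clears all turns, per the tests.
                let keep = turns.len().saturating_sub(num_turns);
                turns.truncate(keep);
            }
        }
    }
    turns
}
```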
2026-01-06 21:23:48 +00:00
jif-oai
188f79afee feat: drop agent bus and store the agent status in codex directly (#8788) 2026-01-06 19:44:39 +00:00
Josh McKinney
a0b2d03302 Clear copy pill background and add snapshot test (#8777)
### Motivation
- Fix a visual bug where transcript text could bleed through the
on-screen copy "pill" overlay.
- Ensure the copy affordance fully covers the underlying buffer so the
pill background is solid and consistent with styling.
- Document the approach in-code to make the background-clearing
rationale explicit.

### Description
- Clear the pill area before drawing by iterating `Rect::positions()`
and calling `cell.set_symbol(" ")` and `cell.set_style(base_style)` in
`render_copy_pill` in `transcript_copy_ui.rs`.
- Added an explanatory comment for why the pill background is explicitly
cleared.
- Added a unit test `copy_pill_clears_background` and committed the
corresponding snapshot file to validate the rendering behavior.

### Testing
- Ran `just fmt` (formatting completed; non-blocking environment warning
may appear).
- Ran `just fix -p codex-tui2` to apply lints/fixes (completed). 
- Ran `cargo test -p codex-tui2` and all tests passed (snapshot updated
and tests succeeded).

------
[Codex
Task](https://chatgpt.com/codex/tasks/task_i_695c9b23e9b8832997d5a457c4d83410)
2026-01-06 11:21:26 -08:00
xl-openai
4ce9d0aa7b suppress popups while browsing input history (#8772) 2026-01-06 11:13:21 -08:00
jif-oai
1dd1355df3 feat: agent controller (#8783)
Added an agent control plane that lets sessions spawn or message other
conversations via `AgentControl`.

`AgentBus` (core/src/agent/bus.rs) keeps track of the last known status
of a conversation.

ConversationManager now holds shared state behind an Arc, so AgentControl
keeps only a weak back-reference; the goal is simply to avoid an explicit
reference cycle.

Follow-ups:
* Build a small tool in the TUI to be able to see every agent and send
manual message to each of them
* Handle approval requests in this TUI
* Add tools to spawn/communicate between agents (see related design)
* Define agent types
2026-01-06 19:08:02 +00:00
Javi
915352b10c feat: add analytics config setting (#8350) 2026-01-06 19:04:13 +00:00
jif-oai
740bf0e755 chore: clear background terminals on interrupt (#8786) 2026-01-06 19:01:07 +00:00
jif-oai
d1c6329c32 feat: forced tool tips (#8752)
Forces an announcement tooltip in the CLI. This queries the GitHub repo for
this
[file](https://raw.githubusercontent.com/openai/codex/main/announcement_tip.toml),
which contains announcements in TOML that look like this:
```
# Example announcement tips for Codex TUI.
# Each [[announcements]] entry is evaluated in order; the last matching one is shown.
# Dates are UTC, formatted as YYYY-MM-DD. The from_date is inclusive and the to_date is exclusive.
# version_regex matches against the CLI version (env!("CARGO_PKG_VERSION")); omit to apply to all versions.
# target_app specify which app should display the announcement (cli, vsce, ...).

[[announcements]]
content = "Welcome to Codex! Check out the new onboarding flow."
from_date = "2024-10-01"
to_date = "2024-10-15"
version_regex = "^0\\.0\\.0$"
target_app = "cli"
``` 

To keep this efficient, the announcement is fetched on a best-effort basis
at CLI launch (no refresh after that). The fetch is async, and the
announcement is displayed (with 100% probability) iff it is available, the
cache is correctly warmed, and there is a matching announcement (matching is
recomputed for each new session).
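
A hedged sketch of the matching rules (types hypothetical; assumes the `chrono` and `regex` crates):

```rust
use chrono::NaiveDate;
use regex::Regex;

struct Announcement {
    content: String,
    from_date: NaiveDate, // inclusive
    to_date: NaiveDate,   // exclusive
    version_regex: Option<Regex>,
    target_app: String,
}

/// The last matching entry wins, per the TOML comment above.
fn pick<'a>(
    announcements: &'a [Announcement],
    today: NaiveDate,
    cli_version: &str,
    app: &str,
) -> Option<&'a Announcement> {
    announcements
        .iter()
        .filter(|a| a.target_app == app)
        .filter(|a| a.from_date <= today && today < a.to_date)
        .filter(|a| {
            a.version_regex
                .as_ref()
                .map_or(true, |re| re.is_match(cli_version))
        })
        .last()
}
```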
2026-01-06 18:02:05 +00:00
Owen Lin
cab7136fb3 chore: add model/list call to app-server-test-client (#8331)
Allows us to run `cargo run -p codex-app-server-test-client --
model-list` to return the list of models over app-server.
2026-01-06 17:50:17 +00:00
jif-oai
32db8ea5ca feat: add head-tail buffer for unified_exec (#8735) 2026-01-06 15:48:44 +00:00
Abdelkader Boudih
06e21c7a65 fix: update model examples to gpt-5.2 (#8566)
The models are outdated and sometime get used by GPT when it to try
delegate.

I have read the CLA Document and I hereby sign the CLA
2026-01-06 08:47:29 -07:00
Michael Bolin
7ecd0dc9b3 fix: stop honoring CODEX_MANAGED_CONFIG_PATH environment variable in production (#8762) 2026-01-06 07:10:27 -08:00
jif-oai
8858012fd1 chore: emit unified exec begin only when PTY exists (#8780) 2026-01-06 13:12:54 +00:00
Thibault Sottiaux
6346e4f560 fix: fix readiness subscribe token wrap-around (#8770)
Fixes ReadinessFlag::subscribe to avoid handing out token 0 or duplicate
tokens on i32 wrap-around, adds regression tests, and prevents readiness
gates from getting stuck waiting on an unmarkable or mis-authorized
token.
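
A minimal sketch of the zero-token part of the fix (duplicate detection needs extra bookkeeping not shown):

```rust
use std::sync::atomic::{AtomicI32, Ordering};

fn next_token(counter: &AtomicI32) -> i32 {
    loop {
        // fetch_add wraps on overflow; the incremented value is the candidate.
        let token = counter.fetch_add(1, Ordering::Relaxed).wrapping_add(1);
        if token != 0 {
            return token;
        }
        // The counter wrapped onto 0: skip it and draw again.
    }
}
```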
2026-01-06 13:09:02 +00:00
Josh McKinney
4c3d2a5bbe fix: render cwd-relative paths in tui (#8771)
Display paths relative to the cwd before checking git roots so view
image tool calls keep project-local names in jj/no-.git workspaces.
2026-01-06 03:17:40 +00:00
Josh McKinney
c92dbea7c1 tui2: stop baking streaming wraps; reflow agent markdown (#8761)
Background
Streaming assistant prose in tui2 was being rendered with viewport-width
wrapping during streaming, then stored in history cells as already split
`Line`s. Those width-derived breaks became indistinguishable from hard
newlines, so the transcript could not "un-split" on resize. This also
degraded copy/paste, since soft wraps looked like hard breaks.

What changed
- Introduce width-agnostic `MarkdownLogicalLine` output in
`tui2/src/markdown_render.rs`, preserving markdown wrap semantics:
initial/subsequent indents, per-line style, and a preformatted flag.
- Update the streaming collector (`tui2/src/markdown_stream.rs`) to emit
logical lines (newline-gated) and remove any captured viewport width.
- Update streaming orchestration (`tui2/src/streaming/*`) to queue and
emit logical lines, producing `AgentMessageCell::new_logical(...)`.
- Make `AgentMessageCell` store logical lines and wrap at render time in
`HistoryCell::transcript_lines_with_joiners(width)`, emitting joiners so
copy/paste can join soft-wrap continuations correctly.

Overlay deferral
When an overlay is active, defer *cells* (not rendered `Vec<Line>`) and
render them at overlay close time. This avoids baking width-derived
wraps based on a stale width.

Tests + docs
- Add resize/reflow regression tests + snapshots for streamed agent
output.
- Expand module/API docs for the new logical-line streaming pipeline and
clarify joiner semantics.
- Align scrollback-related docs/comments with current tui2 behavior
(main draw loop does not flush queued "history lines" to the terminal).

More details
See `codex-rs/tui2/docs/streaming_wrapping_design.md` for the full
problem statement and solution approach, and
`codex-rs/tui2/docs/tui_viewport_and_history.md` for viewport vs printed
output behavior.
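
A minimal sketch of the render-time wrapping idea (fields reduced; indents, styles, and joiner bookkeeping elided; assumes the `textwrap` crate):

```rust
struct MarkdownLogicalLine {
    text: String,
    preformatted: bool, // preformatted lines are never soft-wrapped
}

/// Soft breaks are recomputed per width, so a resize can "un-split" lines
/// and copy/paste can join soft-wrap continuations.
fn render_at_width(line: &MarkdownLogicalLine, width: usize) -> Vec<String> {
    if line.preformatted {
        return vec![line.text.clone()];
    }
    textwrap::wrap(&line.text, width)
        .into_iter()
        .map(|cow| cow.into_owned())
        .collect()
}
```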
2026-01-05 18:37:58 -08:00
Thibault Sottiaux
771f1ca6ab fix: accept whitespace-padded patch markers (#8746)
Trim whitespace when validating '*** Begin Patch'/'*** End Patch'
markers in codex-apply-patch so padded marker lines parse as intended,
and add regression coverage (unit + fixture scenario); this avoids
apply_patch failures when models include extra spacing. Tested with
cargo test -p codex-apply-patch.
2026-01-05 17:41:23 -08:00
Dylan Hurd
b1c93e135b chore(apply-patch) additional scenarios (#8230)
## Summary
More apply-patch scenarios

## Testing
- [x] This pr only adds tests
2026-01-05 15:56:38 -08:00
Curtis 'Fjord' Hawthorne
5f8776d34d Allow global exec flags after resume and fix CI codex build/timeout (#8440)
**Motivation**
- Bring `codex exec resume` to parity with top‑level flags so global
options (git check bypass, json, model, sandbox toggles) work after the
subcommand, including when outside a git repo.

**Description**
- Exec CLI: mark `--skip-git-repo-check`, `--json`, `--model`,
`--full-auto`, and `--dangerously-bypass-approvals-and-sandbox` as
global so they’re accepted after `resume`.
- Tests: add `exec_resume_accepts_global_flags_after_subcommand` to
verify those flags work when passed after `resume`.

**Testing**
- `just fmt`
- `cargo test -p codex-exec` (pass; ran with elevated perms to allow
network/port binds)
- Manual: exercised `codex exec resume` with global flags after the
subcommand to confirm behavior.
2026-01-05 22:12:09 +00:00
xl-openai
58a91a0b50 Use ConfigLayerStack for skills discovery. (#8497)
Use ConfigLayerStack to get all folders while loading skills.
2026-01-05 13:47:39 -08:00
Matthew Zeng
c29afc0cf3 [device-auth] Update login instruction for headless environments. (#8753)
We've seen reports that people who try to log in on a remote/headless
machine will open the login link on their own machine and get errors.
Update the instructions to ask those users to use `codex login
--device-auth` instead.

<img width="1434" height="938" alt="CleanShot 2026-01-05 at 11 35 02@2x"
src="https://github.com/user-attachments/assets/2b209953-6a42-4eb0-8b55-bb0733f2e373"
/>
2026-01-05 13:46:42 -08:00
Michael Bolin
cafb07fe6e feat: add justification arg to prefix_rule() in *.rules (#8751)
Adds an optional `justification` parameter to the `prefix_rule()`
execpolicy DSL so policy authors can attach human-readable rationale to
a rule. That justification is propagated through parsing/matching and
can be surfaced to the model (or approval UI) when a command is blocked
or requires approval.

When a command is rejected (or gated behind approval) due to policy, a
generic message makes it hard for the model/user to understand what went
wrong and what to do instead. Allowing policy authors to supply a short
justification improves debuggability and helps guide the model toward
compliant alternatives.

Example:

```python
prefix_rule(
    pattern = ["git", "push"],
    decision = "forbidden",
    justification = "pushing is blocked in this repo",
)
```

If Codex tried to run `git push origin main`, now the failure would
include:

```
`git push origin main` rejected: pushing is blocked in this repo
```

whereas previously, all it was told was:

```
execpolicy forbids this command
```
2026-01-05 21:24:48 +00:00
iceweasel-oai
07f077dfb3 best effort to "hide" Sandbox users (#8492)
The elevated sandbox creates two new Windows users - CodexSandboxOffline
and CodexSandboxOnline. This is necessary, so this PR does all that it
can to "hide" those users. It uses the registry plus directory flags (on
their home directories) to get them to show up as little as possible.
2026-01-05 12:29:10 -08:00
Abrar Ahmed
7cf6f1c723 Use issuer URL in device auth prompt link (#7858)
## Summary

When using device-code login with a custom issuer
(`--experimental_issuer`), Codex correctly uses that issuer for the auth
flow — but the **terminal prompt still told users to open the default
OpenAI device URL** (`https://auth.openai.com/codex/device`). That’s
confusing and can send users to the **wrong domain** (especially for
enterprise/staging issuers). This PR updates the prompt (and related
URLs) to consistently use the configured issuer. 🎯

---

## 🔧 What changed

* 🔗 **Device auth prompt link** now uses the configured issuer (instead
of a hard-coded OpenAI URL)
* 🧭 **Redirect callback URL** is derived from the same issuer for
consistency
* 🧼 Minor cleanup: normalize the issuer base URL once and reuse it
(avoids formatting quirks like a trailing `/`); see the sketch below
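
A tiny sketch of that normalization (function name hypothetical):

```rust
fn device_auth_url(issuer: &str) -> String {
    // Trim the trailing slash once, then derive every URL from the same base.
    let base = issuer.trim_end_matches('/');
    format!("{base}/codex/device")
}

// device_auth_url("https://auth.example.com/") == "https://auth.example.com/codex/device"
```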

---

## 🧪 Repro + Before/After

### ▶️ Command

```bash
codex login --device-auth --experimental_issuer https://auth.example.com
```

###  Before (wrong link shown)

```text
1. Open this link in your browser and sign in to your account
   https://auth.openai.com/codex/device
```

###  After (correct link shown)

```text
1. Open this link in your browser and sign in to your account
   https://auth.example.com/codex/device
```

Full example output (same as before, but with the correct URL):

```text
Welcome to Codex [v0.72.0]
OpenAI's command-line coding agent

Follow these steps to sign in with ChatGPT using device code authorization:

1. Open this link in your browser and sign in to your account
   https://auth.example.com/codex/device

2. Enter this one-time code (expires in 15 minutes)
   BUT6-0M8K4

Device codes are a common phishing target. Never share this code.
```

---

##  Test plan

* 🟦 `codex login --device-auth` (default issuer): output remains
unchanged
* 🟩 `codex login --device-auth --experimental_issuer
https://auth.example.com`:

  * prompt link points to the issuer 
  * callback URL is derived from the same issuer 
  * no double slashes / mismatched domains 

Co-authored-by: Eric Traut <etraut@openai.com>
2026-01-05 13:09:05 -07:00
Gav Verma
57f8158608 chore: improve skills render section (#8459)
This change improves the skills render section
- Separate the skills list from usage rules with clear subheadings
- Define skill more clearly upfront
- Remove confusing trigger/discovery wording and make reference-following guidance more actionable
2026-01-05 11:55:26 -08:00
iceweasel-oai
95580f229e never let sandbox write to .codex/ or .codex/.sandbox/ (#8683)
Never treat .codex or .codex/.sandbox as a workspace root.
Handle write permissions to .codex/.sandbox in a single method so that
the sandbox setup/runner can write logs and other setup files to that
directory.
2026-01-05 11:54:21 -08:00
1445 changed files with 69807 additions and 84586 deletions

3
.bazelignore Normal file

@@ -0,0 +1,3 @@
# Without this, Bazel will consider BUILD.bazel files in
# .git/sl/origbackups (which can be populated by Sapling SCM).
.git

46
.bazelrc Normal file

@@ -0,0 +1,46 @@
common --repo_env=BAZEL_DO_NOT_DETECT_CPP_TOOLCHAIN=1
common --repo_env=BAZEL_NO_APPLE_CPP_TOOLCHAIN=1
common --disk_cache=~/.cache/bazel-disk-cache
common --repo_contents_cache=~/.cache/bazel-repo-contents-cache
common --repository_cache=~/.cache/bazel-repo-cache
startup --experimental_remote_repo_contents_cache
common --experimental_platform_in_output_dir
common --enable_platform_specific_config
# TODO(zbarsky): We need to untangle these libc constraints to get linux remote builds working.
common:linux --host_platform=//:local
common --@rules_cc//cc/toolchains/args/archiver_flags:use_libtool_on_macos=False
common --@toolchains_llvm_bootstrapped//config:experimental_stub_libgcc_s
# We need to use the sh toolchain on windows so we don't send host bash paths to the linux executor.
common:windows --@rules_rust//rust/settings:experimental_use_sh_toolchain_for_bootstrap_process_wrapper
# TODO(zbarsky): rules_rust doesn't implement this flag properly with remote exec...
# common --@rules_rust//rust/settings:pipelined_compilation
common --incompatible_strict_action_env
# Not ideal, but We need to allow dotslash to be found
common --test_env=PATH=/opt/homebrew/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin
common --test_output=errors
common --bes_results_url=https://app.buildbuddy.io/invocation/
common --bes_backend=grpcs://remote.buildbuddy.io
common --remote_cache=grpcs://remote.buildbuddy.io
common --remote_download_toplevel
common --nobuild_runfile_links
common --remote_timeout=3600
common --noexperimental_throttle_remote_action_building
common --experimental_remote_execution_keepalive
common --grpc_keepalive_time=30s
# This limits both in-flight executions and concurrent downloads. Even with high number
# of jobs execution will still be limited by CPU cores, so this just pays a bit of
# memory in exchange for higher download concurrency.
common --jobs=30
common:remote --extra_execution_platforms=//:rbe
common:remote --remote_executor=grpcs://remote.buildbuddy.io
common:remote --jobs=800

1
.bazelversion Normal file

@@ -0,0 +1 @@
9.0.0


@@ -40,11 +40,18 @@ body:
description: |
For MacOS and Linux: copy the output of `uname -mprs`
For Windows: copy the output of `"$([Environment]::OSVersion | ForEach-Object VersionString) $(if ([Environment]::Is64BitOperatingSystem) { "x64" } else { "x86" })"` in the PowerShell console
- type: input
id: terminal
attributes:
label: What terminal emulator and version are you using (if applicable)?
description: Also note any multiplexer in use (screen / tmux / zellij)
description: |
E.g, VSCode, Terminal.app, iTerm2, Ghostty, Windows Terminal (WSL / PowerShell)
- type: textarea
id: actual
attributes:
label: What issue are you seeing?
description: Please include the full error messages and prompts with PII redacted. If possible, please provide text instead of a screenshot.
description: Please include the full error messages and prompts with PII redacted. If possible, please provide text instead of a screenshot.
validations:
required: true
- type: textarea


@@ -0,0 +1,163 @@
#!/usr/bin/env bash
set -euo pipefail
: "${TARGET:?TARGET environment variable is required}"
: "${GITHUB_ENV:?GITHUB_ENV environment variable is required}"
apt_update_args=()
if [[ -n "${APT_UPDATE_ARGS:-}" ]]; then
# shellcheck disable=SC2206
apt_update_args=(${APT_UPDATE_ARGS})
fi
apt_install_args=()
if [[ -n "${APT_INSTALL_ARGS:-}" ]]; then
# shellcheck disable=SC2206
apt_install_args=(${APT_INSTALL_ARGS})
fi
sudo apt-get update "${apt_update_args[@]}"
sudo apt-get install -y "${apt_install_args[@]}" musl-tools pkg-config g++ clang libc++-dev libc++abi-dev lld
case "${TARGET}" in
x86_64-unknown-linux-musl)
arch="x86_64"
;;
aarch64-unknown-linux-musl)
arch="aarch64"
;;
*)
echo "Unexpected musl target: ${TARGET}" >&2
exit 1
;;
esac
# Use the musl toolchain as the Rust linker to avoid Zig injecting its own CRT.
if command -v "${arch}-linux-musl-gcc" >/dev/null; then
musl_linker="$(command -v "${arch}-linux-musl-gcc")"
elif command -v musl-gcc >/dev/null; then
musl_linker="$(command -v musl-gcc)"
else
echo "musl gcc not found after install; arch=${arch}" >&2
exit 1
fi
zig_target="${TARGET/-unknown-linux-musl/-linux-musl}"
runner_temp="${RUNNER_TEMP:-/tmp}"
tool_root="${runner_temp}/codex-musl-tools-${TARGET}"
mkdir -p "${tool_root}"
sysroot=""
if command -v zig >/dev/null; then
zig_bin="$(command -v zig)"
cc="${tool_root}/zigcc"
cxx="${tool_root}/zigcxx"
cat >"${cc}" <<EOF
#!/usr/bin/env bash
set -euo pipefail
args=()
skip_next=0
for arg in "\$@"; do
if [[ "\${skip_next}" -eq 1 ]]; then
skip_next=0
continue
fi
case "\${arg}" in
--target)
skip_next=1
continue
;;
--target=*|-target=*|-target)
# Drop any explicit --target/-target flags. Zig expects -target and
# rejects Rust triples like *-unknown-linux-musl.
if [[ "\${arg}" == "-target" ]]; then
skip_next=1
fi
continue
;;
esac
args+=("\${arg}")
done
exec "${zig_bin}" cc -target "${zig_target}" "\${args[@]}"
EOF
cat >"${cxx}" <<EOF
#!/usr/bin/env bash
set -euo pipefail
args=()
skip_next=0
for arg in "\$@"; do
if [[ "\${skip_next}" -eq 1 ]]; then
skip_next=0
continue
fi
case "\${arg}" in
--target)
skip_next=1
continue
;;
--target=*|-target=*|-target)
if [[ "\${arg}" == "-target" ]]; then
skip_next=1
fi
continue
;;
esac
args+=("\${arg}")
done
exec "${zig_bin}" c++ -target "${zig_target}" "\${args[@]}"
EOF
chmod +x "${cc}" "${cxx}"
sysroot="$("${zig_bin}" cc -target "${zig_target}" -print-sysroot 2>/dev/null || true)"
else
cc="${musl_linker}"
if command -v "${arch}-linux-musl-g++" >/dev/null; then
cxx="$(command -v "${arch}-linux-musl-g++")"
elif command -v musl-g++ >/dev/null; then
cxx="$(command -v musl-g++)"
else
cxx="${cc}"
fi
fi
if [[ -n "${sysroot}" && "${sysroot}" != "/" ]]; then
echo "BORING_BSSL_SYSROOT=${sysroot}" >> "$GITHUB_ENV"
boring_sysroot_var="BORING_BSSL_SYSROOT_${TARGET}"
boring_sysroot_var="${boring_sysroot_var//-/_}"
echo "${boring_sysroot_var}=${sysroot}" >> "$GITHUB_ENV"
fi
cflags="-pthread"
cxxflags="-pthread"
if [[ "${TARGET}" == "aarch64-unknown-linux-musl" ]]; then
# BoringSSL enables -Wframe-larger-than=25344 under clang and treats warnings as errors.
cflags="${cflags} -Wno-error=frame-larger-than"
cxxflags="${cxxflags} -Wno-error=frame-larger-than"
fi
echo "CFLAGS=${cflags}" >> "$GITHUB_ENV"
echo "CXXFLAGS=${cxxflags}" >> "$GITHUB_ENV"
echo "CC=${cc}" >> "$GITHUB_ENV"
echo "TARGET_CC=${cc}" >> "$GITHUB_ENV"
target_cc_var="CC_${TARGET}"
target_cc_var="${target_cc_var//-/_}"
echo "${target_cc_var}=${cc}" >> "$GITHUB_ENV"
echo "CXX=${cxx}" >> "$GITHUB_ENV"
echo "TARGET_CXX=${cxx}" >> "$GITHUB_ENV"
target_cxx_var="CXX_${TARGET}"
target_cxx_var="${target_cxx_var//-/_}"
echo "${target_cxx_var}=${cxx}" >> "$GITHUB_ENV"
cargo_linker_var="CARGO_TARGET_${TARGET^^}_LINKER"
cargo_linker_var="${cargo_linker_var//-/_}"
echo "${cargo_linker_var}=${musl_linker}" >> "$GITHUB_ENV"
echo "CMAKE_C_COMPILER=${cc}" >> "$GITHUB_ENV"
echo "CMAKE_CXX_COMPILER=${cxx}" >> "$GITHUB_ENV"
echo "CMAKE_ARGS=-DCMAKE_HAVE_THREADS_LIBRARY=1 -DCMAKE_USE_PTHREADS_INIT=1 -DCMAKE_THREAD_LIBS_INIT=-pthread -DTHREADS_PREFER_PTHREAD_FLAG=ON" >> "$GITHUB_ENV"

20
.github/workflows/Dockerfile.bazel vendored Normal file

@@ -0,0 +1,20 @@
FROM ubuntu:24.04
# TODO(mbolin): Published to docker.io/mbolin491/codex-bazel:latest for
# initial debugging, but we should publish to a more proper location.
#
# docker buildx create --use
# docker buildx build --platform linux/amd64,linux/arm64 -f .github/workflows/Dockerfile.bazel -t mbolin491/codex-bazel:latest --push .
RUN apt-get update && \
apt-get install -y --no-install-recommends \
curl git python3 ca-certificates && \
rm -rf /var/lib/apt/lists/*
# Install dotslash.
RUN curl -LSfs "https://github.com/facebook/dotslash/releases/download/v0.5.8/dotslash-ubuntu-22.04.$(uname -m).tar.gz" | tar fxz - -C /usr/local/bin
# Ubuntu 24.04 ships with user 'ubuntu' already created with UID 1000.
USER ubuntu
WORKDIR /workspace

110
.github/workflows/bazel.yml vendored Normal file

@@ -0,0 +1,110 @@
name: Bazel (experimental)
# Note this workflow was originally derived from:
# https://github.com/cerisier/toolchains_llvm_bootstrapped/blob/main/.github/workflows/ci.yaml
on:
pull_request: {}
push:
branches:
- main
workflow_dispatch:
concurrency:
# Cancel previous actions from the same PR or branch except 'main' branch.
# See https://docs.github.com/en/actions/using-jobs/using-concurrency and https://docs.github.com/en/actions/learn-github-actions/contexts for more info.
group: concurrency-group::${{ github.workflow }}::${{ github.event.pull_request.number > 0 && format('pr-{0}', github.event.pull_request.number) || github.ref_name }}${{ github.ref_name == 'main' && format('::{0}', github.run_id) || ''}}
cancel-in-progress: ${{ github.ref_name != 'main' }}
jobs:
test:
strategy:
fail-fast: false
matrix:
include:
# macOS
- os: macos-15-xlarge
target: aarch64-apple-darwin
- os: macos-15-xlarge
target: x86_64-apple-darwin
# Linux
- os: ubuntu-24.04-arm
target: aarch64-unknown-linux-gnu
- os: ubuntu-24.04
target: x86_64-unknown-linux-gnu
- os: ubuntu-24.04-arm
target: aarch64-unknown-linux-musl
- os: ubuntu-24.04
target: x86_64-unknown-linux-musl
# TODO: Enable Windows once we fix the toolchain issues there.
#- os: windows-latest
# target: x86_64-pc-windows-gnullvm
runs-on: ${{ matrix.os }}
# Configure a human readable name for each job
name: Local Bazel build on ${{ matrix.os }} for ${{ matrix.target }}
steps:
- uses: actions/checkout@v6
# Some integration tests rely on DotSlash being installed.
# See https://github.com/openai/codex/pull/7617.
- name: Install DotSlash
uses: facebook/install-dotslash@v2
- name: Make DotSlash available in PATH (Unix)
if: runner.os != 'Windows'
run: cp "$(which dotslash)" /usr/local/bin
- name: Make DotSlash available in PATH (Windows)
if: runner.os == 'Windows'
shell: pwsh
run: Copy-Item (Get-Command dotslash).Source -Destination "$env:LOCALAPPDATA\Microsoft\WindowsApps\dotslash.exe"
# Install Bazel via Bazelisk
- name: Set up Bazel
uses: bazelbuild/setup-bazelisk@v3
# TODO(mbolin): Bring this back once we have caching working. Currently,
# we never seem to get a cache hit but we still end up paying the cost of
# uploading at the end of the build, which takes over a minute!
#
# Cache build and external artifacts so that the next ci build is incremental.
# Because github action caches cannot be updated after a build, we need to
# store the contents of each build in a unique cache key, then fall back to loading
# it on the next ci run. We use hashFiles(...) in the key and restore-keys- with
# the prefix to load the most recent cache for the branch on a cache miss. You
# should customize the contents of hashFiles to capture any bazel input sources,
# although this doesn't need to be perfect. If none of the input sources change
# then a cache hit will load an existing cache and bazel won't have to do any work.
# In the case of a cache miss, you want the fallback cache to contain most of the
# previously built artifacts to minimize build time. The more precise you are with
# hashFiles sources the less work bazel will have to do.
# - name: Mount bazel caches
# uses: actions/cache@v5
# with:
# path: |
# ~/.cache/bazel-repo-cache
# ~/.cache/bazel-repo-contents-cache
# key: bazel-cache-${{ matrix.os }}-${{ hashFiles('**/BUILD.bazel', '**/*.bzl', 'MODULE.bazel') }}
# restore-keys: |
# bazel-cache-${{ matrix.os }}
- name: Configure Bazel startup args (Windows)
if: runner.os == 'Windows'
shell: pwsh
run: |
# Use a very short path to reduce argv/path length issues.
"BAZEL_STARTUP_ARGS=--output_user_root=C:\" | Out-File -FilePath $env:GITHUB_ENV -Encoding utf8 -Append
- name: bazel test //...
env:
BUILDBUDDY_API_KEY: ${{ secrets.BUILDBUDDY_API_KEY }}
shell: bash
run: |
bazel $BAZEL_STARTUP_ARGS --bazelrc=.github/workflows/ci.bazelrc test //... \
--build_metadata=REPO_URL=https://github.com/openai/codex.git \
--build_metadata=COMMIT_SHA=$(git rev-parse HEAD) \
--build_metadata=ROLE=CI \
--build_metadata=VISIBILITY=PUBLIC \
"--remote_header=x-buildbuddy-api-key=$BUILDBUDDY_API_KEY"

20
.github/workflows/ci.bazelrc vendored Normal file

@@ -0,0 +1,20 @@
common --remote_download_minimal
common --nobuild_runfile_links
common --keep_going
# We prefer to run the build actions entirely remotely so we can dial up the concurrency.
# We have platform-specific tests, so we want to execute the tests on all platforms using the strongest sandboxing available on each platform.
# On linux, we can do a full remote build/test, by targeting the right (x86/arm) runners, so we have coverage of both.
# Linux crossbuilds don't work until we untangle the libc constraint mess.
common:linux --config=remote
common:linux --strategy=remote
common:linux --platforms=//:rbe
# On mac, we can run all the build actions remotely but test actions locally.
common:macos --config=remote
common:macos --strategy=remote
common:macos --strategy=TestRunner=darwin-sandbox,local
common:windows --strategy=TestRunner=local


@@ -59,7 +59,7 @@ jobs:
working-directory: codex-rs
steps:
- uses: actions/checkout@v6
- uses: dtolnay/rust-toolchain@1.90
- uses: dtolnay/rust-toolchain@1.92
with:
components: rustfmt
- name: cargo fmt
@@ -77,7 +77,7 @@ jobs:
working-directory: codex-rs
steps:
- uses: actions/checkout@v6
- uses: dtolnay/rust-toolchain@1.90
- uses: dtolnay/rust-toolchain@1.92
- uses: taiki-e/install-action@44c6d64aa62cd779e873306675c7a58e86d6d532 # v2
with:
tool: cargo-shear
@@ -88,7 +88,7 @@ jobs:
# --- CI to validate on different os/targets --------------------------------
lint_build:
name: Lint/Build — ${{ matrix.runner }} - ${{ matrix.target }}${{ matrix.profile == 'release' && ' (release)' || '' }}
runs-on: ${{ matrix.runner }}
runs-on: ${{ matrix.runs_on || matrix.runner }}
timeout-minutes: 30
needs: changed
# Keep job-level if to avoid spinning up runners when not needed
@@ -106,51 +106,78 @@ jobs:
fail-fast: false
matrix:
include:
- runner: macos-14
- runner: macos-15-xlarge
target: aarch64-apple-darwin
profile: dev
- runner: macos-14
- runner: macos-15-xlarge
target: x86_64-apple-darwin
profile: dev
- runner: ubuntu-24.04
target: x86_64-unknown-linux-musl
profile: dev
runs_on:
group: codex-runners
labels: codex-linux-x64
- runner: ubuntu-24.04
target: x86_64-unknown-linux-gnu
profile: dev
runs_on:
group: codex-runners
labels: codex-linux-x64
- runner: ubuntu-24.04-arm
target: aarch64-unknown-linux-musl
profile: dev
runs_on:
group: codex-runners
labels: codex-linux-arm64
- runner: ubuntu-24.04-arm
target: aarch64-unknown-linux-gnu
profile: dev
- runner: windows-latest
runs_on:
group: codex-runners
labels: codex-linux-arm64
- runner: windows-x64
target: x86_64-pc-windows-msvc
profile: dev
- runner: windows-11-arm
runs_on:
group: codex-runners
labels: codex-windows-x64
- runner: windows-arm64
target: aarch64-pc-windows-msvc
profile: dev
runs_on:
group: codex-runners
labels: codex-windows-arm64
# Also run representative release builds on Mac and Linux because
# there could be release-only build errors we want to catch.
# Hopefully this also pre-populates the build cache to speed up
# releases.
- runner: macos-14
- runner: macos-15-xlarge
target: aarch64-apple-darwin
profile: release
- runner: ubuntu-24.04
target: x86_64-unknown-linux-musl
profile: release
- runner: windows-latest
runs_on:
group: codex-runners
labels: codex-linux-x64
- runner: windows-x64
target: x86_64-pc-windows-msvc
profile: release
- runner: windows-11-arm
runs_on:
group: codex-runners
labels: codex-windows-x64
- runner: windows-arm64
target: aarch64-pc-windows-msvc
profile: release
runs_on:
group: codex-runners
labels: codex-windows-arm64
steps:
- uses: actions/checkout@v6
- uses: dtolnay/rust-toolchain@1.90
- uses: dtolnay/rust-toolchain@1.92
with:
targets: ${{ matrix.target }}
components: clippy
@@ -234,15 +261,21 @@ jobs:
/var/cache/apt
key: apt-${{ matrix.runner }}-${{ matrix.target }}-v1
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl'}}
name: Install Zig
uses: mlugg/setup-zig@v2
with:
version: 0.14.0
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl'}}
name: Install musl build tools
env:
DEBIAN_FRONTEND: noninteractive
TARGET: ${{ matrix.target }}
APT_UPDATE_ARGS: -o Acquire::Retries=3
APT_INSTALL_ARGS: --no-install-recommends
shell: bash
run: |
set -euo pipefail
sudo apt-get -y update -o Acquire::Retries=3
sudo apt-get -y install --no-install-recommends musl-tools pkg-config
run: bash "${GITHUB_WORKSPACE}/.github/scripts/install-musl-build-tools.sh"
- name: Install cargo-chef
if: ${{ matrix.profile == 'release' }}
@@ -336,7 +369,7 @@ jobs:
tests:
name: Tests — ${{ matrix.runner }} - ${{ matrix.target }}
runs-on: ${{ matrix.runner }}
runs-on: ${{ matrix.runs_on || matrix.runner }}
timeout-minutes: 30
needs: changed
if: ${{ needs.changed.outputs.codex == 'true' || needs.changed.outputs.workflows == 'true' || github.event_name == 'push' }}
@@ -353,46 +386,43 @@ jobs:
fail-fast: false
matrix:
include:
- runner: macos-14
- runner: macos-15-xlarge
target: aarch64-apple-darwin
profile: dev
- runner: ubuntu-24.04
target: x86_64-unknown-linux-gnu
profile: dev
runs_on:
group: codex-runners
labels: codex-linux-x64
- runner: ubuntu-24.04-arm
target: aarch64-unknown-linux-gnu
profile: dev
- runner: windows-latest
runs_on:
group: codex-runners
labels: codex-linux-arm64
- runner: windows-x64
target: x86_64-pc-windows-msvc
profile: dev
- runner: windows-11-arm
runs_on:
group: codex-runners
labels: codex-windows-x64
- runner: windows-arm64
target: aarch64-pc-windows-msvc
profile: dev
runs_on:
group: codex-runners
labels: codex-windows-arm64
steps:
- uses: actions/checkout@v6
# We have been running out of space when running this job on Linux for
# x86_64-unknown-linux-gnu, so remove some unnecessary dependencies.
- name: Remove unnecessary dependencies to save space
if: ${{ startsWith(matrix.runner, 'ubuntu') }}
shell: bash
run: |
set -euo pipefail
sudo rm -rf \
/usr/local/lib/android \
/usr/share/dotnet \
/usr/local/share/boost \
/usr/local/lib/node_modules \
/opt/ghc
sudo apt-get remove -y docker.io docker-compose podman buildah
# Some integration tests rely on DotSlash being installed.
# See https://github.com/openai/codex/pull/7617.
- name: Install DotSlash
uses: facebook/install-dotslash@v2
- uses: dtolnay/rust-toolchain@1.90
- uses: dtolnay/rust-toolchain@1.92
with:
targets: ${{ matrix.target }}


@@ -20,6 +20,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v6
- uses: dtolnay/rust-toolchain@1.92
- name: Validate tag matches Cargo.toml version
shell: bash
@@ -45,11 +46,20 @@ jobs:
echo "✅ Tag and Cargo.toml agree (${tag_ver})"
echo "::endgroup::"
- name: Verify config schema fixture
shell: bash
working-directory: codex-rs
run: |
set -euo pipefail
echo "If this fails, run: just write-config-schema to overwrite fixture with intentional changes."
cargo run -p codex-core --bin codex-write-config-schema
git diff --exit-code core/config.schema.json
build:
needs: tag-check
name: Build - ${{ matrix.runner }} - ${{ matrix.target }}
runs-on: ${{ matrix.runner }}
timeout-minutes: 30
timeout-minutes: 60
permissions:
contents: read
id-token: write
@@ -80,7 +90,7 @@ jobs:
steps:
- uses: actions/checkout@v6
- uses: dtolnay/rust-toolchain@1.90
- uses: dtolnay/rust-toolchain@1.92
with:
targets: ${{ matrix.target }}
@@ -94,11 +104,17 @@ jobs:
${{ github.workspace }}/codex-rs/target/
key: cargo-${{ matrix.runner }}-${{ matrix.target }}-release-${{ hashFiles('**/Cargo.lock') }}
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl'}}
name: Install Zig
uses: mlugg/setup-zig@v2
with:
version: 0.14.0
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl'}}
name: Install musl build tools
run: |
sudo apt-get update
sudo apt-get install -y musl-tools pkg-config
env:
TARGET: ${{ matrix.target }}
run: bash "${GITHUB_WORKSPACE}/.github/scripts/install-musl-build-tools.sh"
- name: Cargo build
shell: bash
@@ -275,7 +291,28 @@ jobs:
# Must run from inside the dest dir so 7z won't
# embed the directory path inside the zip.
if [[ "${{ matrix.runner }}" == windows* ]]; then
(cd "$dest" && 7z a "${base}.zip" "$base")
if [[ "$base" == "codex-${{ matrix.target }}.exe" ]]; then
# Bundle the sandbox helper binaries into the main codex zip so
# WinGet installs include the required helpers next to codex.exe.
# Fall back to the single-binary zip if the helpers are missing
# to avoid breaking releases.
bundle_dir="$(mktemp -d)"
runner_src="$dest/codex-command-runner-${{ matrix.target }}.exe"
setup_src="$dest/codex-windows-sandbox-setup-${{ matrix.target }}.exe"
if [[ -f "$runner_src" && -f "$setup_src" ]]; then
cp "$dest/$base" "$bundle_dir/$base"
cp "$runner_src" "$bundle_dir/codex-command-runner.exe"
cp "$setup_src" "$bundle_dir/codex-windows-sandbox-setup.exe"
(cd "$bundle_dir" && 7z a "$dest/${base}.zip" .)
else
echo "warning: missing sandbox binaries; falling back to single-binary zip"
echo "warning: expected $runner_src and $setup_src"
(cd "$dest" && 7z a "${base}.zip" "$base")
fi
rm -rf "$bundle_dir"
else
(cd "$dest" && 7z a "${base}.zip" "$base")
fi
fi
# Also create .zst (existing behaviour) *and* remove the original
@@ -323,6 +360,26 @@ jobs:
- name: Checkout repository
uses: actions/checkout@v6
- name: Generate release notes from tag commit message
id: release_notes
shell: bash
run: |
set -euo pipefail
# On tag pushes, GITHUB_SHA may be a tag object for annotated tags;
# peel it to the underlying commit.
commit="$(git rev-parse "${GITHUB_SHA}^{commit}")"
notes_path="${RUNNER_TEMP}/release-notes.md"
# Use the commit message for the commit the tag points at (not the
# annotated tag message).
git log -1 --format=%B "${commit}" > "${notes_path}"
# Ensure trailing newline so GitHub's markdown renderer doesn't
# occasionally run the last line into subsequent content.
echo >> "${notes_path}"
echo "path=${notes_path}" >> "${GITHUB_OUTPUT}"
- uses: actions/download-artifact@v7
with:
path: dist
@@ -338,6 +395,10 @@ jobs:
ls -R dist/
- name: Add config schema release asset
run: |
cp codex-rs/core/config.schema.json dist/config-schema.json
- name: Define release name
id: release_name
run: |
@@ -395,6 +456,7 @@ jobs:
with:
name: ${{ steps.release_name.outputs.name }}
tag_name: ${{ github.ref_name }}
body_path: ${{ steps.release_notes.outputs.path }}
files: dist/**
# Mark as prerelease only when the version has a suffix after x.y.z
# (e.g. -alpha, -beta). Otherwise publish a normal release.
@@ -407,6 +469,19 @@ jobs:
tag: ${{ github.ref_name }}
config: .github/dotslash-config.json
- name: Trigger developers.openai.com deploy
# Only trigger the deploy if the release is not a pre-release.
# The deploy is used to update the developers.openai.com website with the new config schema json file.
if: ${{ !contains(steps.release_name.outputs.name, '-') }}
continue-on-error: true
env:
DEV_WEBSITE_VERCEL_DEPLOY_HOOK_URL: ${{ secrets.DEV_WEBSITE_VERCEL_DEPLOY_HOOK_URL }}
run: |
if ! curl -sS -f -o /dev/null -X POST "$DEV_WEBSITE_VERCEL_DEPLOY_HOOK_URL"; then
echo "::warning title=developers.openai.com deploy hook failed::Vercel deploy hook POST failed for ${GITHUB_REF_NAME}"
exit 1
fi
# Publish to npm using OIDC authentication.
# July 31, 2025: https://github.blog/changelog/2025-07-31-npm-trusted-publishing-with-oidc-is-generally-available/
# npm docs: https://docs.npmjs.com/trusted-publishers


@@ -24,7 +24,7 @@ jobs:
node-version: 22
cache: pnpm
- uses: dtolnay/rust-toolchain@1.90
- uses: dtolnay/rust-toolchain@1.92
- name: build codex
run: cargo build --bin codex


@@ -93,15 +93,21 @@ jobs:
- name: Checkout repository
uses: actions/checkout@v6
- uses: dtolnay/rust-toolchain@1.90
- uses: dtolnay/rust-toolchain@1.92
with:
targets: ${{ matrix.target }}
- if: ${{ matrix.install_musl }}
name: Install Zig
uses: mlugg/setup-zig@v2
with:
version: 0.14.0
- if: ${{ matrix.install_musl }}
name: Install musl build dependencies
run: |
sudo apt-get update
sudo apt-get install -y musl-tools pkg-config
env:
TARGET: ${{ matrix.target }}
run: bash "${GITHUB_WORKSPACE}/.github/scripts/install-musl-build-tools.sh"
- name: Build exec server binaries
run: cargo build --release --target ${{ matrix.target }} --bin codex-exec-mcp-server --bin codex-execve-wrapper
@@ -198,7 +204,7 @@ jobs:
shell: bash
run: |
set -euo pipefail
git clone --depth 1 https://github.com/bminor/bash /tmp/bash
git clone --depth 1 https://github.com/bolinfest/bash /tmp/bash
cd /tmp/bash
git fetch --depth 1 origin a8a1c2fac029404d3f42cd39f5a20f24b6e4fe4b
git checkout a8a1c2fac029404d3f42cd39f5a20f24b6e4fe4b
@@ -240,7 +246,7 @@ jobs:
shell: bash
run: |
set -euo pipefail
git clone --depth 1 https://github.com/bminor/bash /tmp/bash
git clone --depth 1 https://github.com/bolinfest/bash /tmp/bash
cd /tmp/bash
git fetch --depth 1 origin a8a1c2fac029404d3f42cd39f5a20f24b6e4fe4b
git checkout a8a1c2fac029404d3f42cd39f5a20f24b6e4fe4b

.gitignore vendored

@@ -9,6 +9,7 @@ node_modules
# build
dist/
bazel-*
build/
out/
storybook-static/

.markdownlint-cli2.yaml Normal file

@@ -0,0 +1,6 @@
config:
MD013:
line_length: 100
globs:
- "docs/tui-chat-composer.md"


@@ -13,12 +13,14 @@ In the codex-rs folder where the rust code lives:
- Use method references over closures when possible per https://rust-lang.github.io/rust-clippy/master/index.html#redundant_closure_for_method_calls
- When writing tests, prefer comparing the equality of entire objects over fields one by one.
- When making a change that adds or changes an API, ensure that the documentation in the `docs/` folder is up to date if applicable.
- If you change `ConfigToml` or nested config types, run `just write-config-schema` to update `codex-rs/core/config.schema.json`.
Run `just fmt` (in `codex-rs` directory) automatically after making Rust code changes; do not ask for approval to run it. Before finalizing a change to `codex-rs`, run `just fix -p <project>` (in `codex-rs` directory) to fix any linter issues in the code. Prefer scoping with `-p` to avoid slow workspace-wide Clippy builds; only run `just fix` without `-p` if you changed shared crates. Additionally, run the tests:
Run `just fmt` (in `codex-rs` directory) automatically after you have finished making Rust code changes; do not ask for approval to run it. Additionally, run the tests:
1. Run the test for the specific project that was changed. For example, if changes were made in `codex-rs/tui`, run `cargo test -p codex-tui`.
2. Once those pass, if any changes were made in common, core, or protocol, run the complete test suite with `cargo test --all-features`.
When running interactively, ask the user before running `just fix` to finalize. `just fmt` does not require approval. Project-specific or individual tests can be run without asking the user, but do ask the user before running the complete test suite.
2. Once those pass, if any changes were made in common, core, or protocol, run the complete test suite with `cargo test --all-features`. Project-specific or individual tests can be run without asking the user, but do ask the user before running the complete test suite.
Before finalizing a large change to `codex-rs`, run `just fix -p <project>` (in `codex-rs` directory) to fix any linter issues in the code. Prefer scoping with `-p` to avoid slow workspace-wide Clippy builds; only run `just fix` without `-p` if you changed shared crates.
## TUI style conventions
@@ -77,11 +79,11 @@ If you don't have the tool:
- Prefer deep equals comparisons whenever possible. Perform `assert_eq!()` on entire objects, rather than individual fields.
- Avoid mutating process environment in tests; prefer passing environment-derived flags or dependencies from above.
### Spawning workspace binaries in tests (Cargo vs Buck2)
### Spawning workspace binaries in tests (Cargo vs Bazel)
- Prefer `codex_utils_cargo_bin::cargo_bin("...")` over `assert_cmd::Command::cargo_bin(...)` or `escargot` when tests need to spawn first-party binaries.
- Under Buck2, `CARGO_BIN_EXE_*` may be project-relative (e.g. `buck-out/...`), which breaks if a test changes its working directory. `codex_utils_cargo_bin::cargo_bin` resolves to an absolute path first.
- When locating fixture files under Buck2, avoid `env!("CARGO_MANIFEST_DIR")` (Buck codegen sets it to `"."`). Prefer deriving paths from `codex_utils_cargo_bin::buck_project_root()` when needed.
- Under Bazel, binaries and resources may live under runfiles; use `codex_utils_cargo_bin::cargo_bin` to resolve absolute paths that remain stable after `chdir` (see the sketch after this list).
- When locating fixture files or test resources under Bazel, avoid `env!("CARGO_MANIFEST_DIR")`. Prefer `codex_utils_cargo_bin::find_resource!` so paths resolve correctly under both Cargo and Bazel runfiles.
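A minimal sketch of the preferred spawning pattern (the binary name `codex` and the `--version` flag here are illustrative, not taken from this diff):
use std::process::Command;
#[test]
fn spawned_binary_reports_version() {
    // cargo_bin resolves an absolute path up front, so the spawn keeps working
    // even if something else in the test process changes the working directory.
    let bin = codex_utils_cargo_bin::cargo_bin("codex"); // illustrative binary name
    let output = Command::new(bin)
        .arg("--version")
        .output()
        .expect("binary should spawn");
    assert!(output.status.success());
}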
### Integration tests (core)

BUILD.bazel Normal file

@@ -0,0 +1,19 @@
# We mark the local platform as glibc-compatible so that Rust can grab a toolchain for us.
# TODO(zbarsky): Upstream a better libc constraint into rules_rust.
# We only enable this on Linux, though, for sanity and because it breaks remote execution.
platform(
name = "local",
constraint_values = [
"@toolchains_llvm_bootstrapped//constraints/libc:gnu.2.28",
],
parents = [
"@platforms//host",
],
)
alias(
name = "rbe",
actual = "@rbe_platform",
)
exports_files(["AGENTS.md"])

MODULE.bazel Normal file

@@ -0,0 +1,124 @@
bazel_dep(name = "platforms", version = "1.0.0")
bazel_dep(name = "toolchains_llvm_bootstrapped", version = "0.3.1")
archive_override(
module_name = "toolchains_llvm_bootstrapped",
integrity = "sha256-4/2h4tYSUSptxFVI9G50yJxWGOwHSeTeOGBlaLQBV8g=",
strip_prefix = "toolchains_llvm_bootstrapped-d20baf67e04d8e2887e3779022890d1dc5e6b948",
urls = ["https://github.com/cerisier/toolchains_llvm_bootstrapped/archive/d20baf67e04d8e2887e3779022890d1dc5e6b948.tar.gz"],
)
osx = use_extension("@toolchains_llvm_bootstrapped//toolchain/extension:osx.bzl", "osx")
osx.framework(name = "ApplicationServices")
osx.framework(name = "AppKit")
osx.framework(name = "ColorSync")
osx.framework(name = "CoreFoundation")
osx.framework(name = "CoreGraphics")
osx.framework(name = "CoreServices")
osx.framework(name = "CoreText")
osx.framework(name = "CFNetwork")
osx.framework(name = "Foundation")
osx.framework(name = "ImageIO")
osx.framework(name = "Kernel")
osx.framework(name = "OSLog")
osx.framework(name = "Security")
osx.framework(name = "SystemConfiguration")
register_toolchains(
"@toolchains_llvm_bootstrapped//toolchain:all",
)
bazel_dep(name = "rules_cc", version = "0.2.16")
bazel_dep(name = "rules_platform", version = "0.1.0")
bazel_dep(name = "rules_rust", version = "0.68.1")
single_version_override(
module_name = "rules_rust",
patch_strip = 1,
patches = [
"//patches:rules_rust.patch",
"//patches:rules_rust_windows_gnu.patch",
"//patches:rules_rust_musl.patch",
],
)
RUST_TRIPLES = [
"aarch64-unknown-linux-musl",
"aarch64-apple-darwin",
"aarch64-pc-windows-gnullvm",
"x86_64-unknown-linux-musl",
"x86_64-apple-darwin",
"x86_64-pc-windows-gnullvm",
]
rust = use_extension("@rules_rust//rust:extensions.bzl", "rust")
rust.toolchain(
edition = "2024",
extra_target_triples = RUST_TRIPLES,
versions = ["1.90.0"],
)
use_repo(rust, "rust_toolchains")
register_toolchains("@rust_toolchains//:all")
bazel_dep(name = "rules_rs", version = "0.0.23")
crate = use_extension("@rules_rs//rs:extensions.bzl", "crate")
crate.from_cargo(
cargo_lock = "//codex-rs:Cargo.lock",
cargo_toml = "//codex-rs:Cargo.toml",
platform_triples = RUST_TRIPLES,
)
bazel_dep(name = "openssl", version = "3.5.4.bcr.0")
crate.annotation(
build_script_data = [
"@openssl//:gen_dir",
],
build_script_env = {
"OPENSSL_DIR": "$(execpath @openssl//:gen_dir)",
"OPENSSL_NO_VENDOR": "1",
"OPENSSL_STATIC": "1",
},
crate = "openssl-sys",
data = ["@openssl//:gen_dir"],
)
inject_repo(crate, "openssl")
# Fix readme inclusions
crate.annotation(
crate = "windows-link",
patch_args = ["-p1"],
patches = [
"//patches:windows-link.patch",
],
)
WINDOWS_IMPORT_LIB = """
load("@rules_cc//cc:defs.bzl", "cc_import")
cc_import(
name = "windows_import_lib",
static_library = glob(["lib/*.a"])[0],
)
"""
crate.annotation(
additive_build_file_content = WINDOWS_IMPORT_LIB,
crate = "windows_x86_64_gnullvm",
gen_build_script = "off",
deps = [":windows_import_lib"],
)
crate.annotation(
additive_build_file_content = WINDOWS_IMPORT_LIB,
crate = "windows_aarch64_gnullvm",
gen_build_script = "off",
deps = [":windows_import_lib"],
)
use_repo(crate, "crates")
rbe_platform_repository = use_repo_rule("//:rbe.bzl", "rbe_platform_repository")
rbe_platform_repository(
name = "rbe_platform",
)

MODULE.bazel.lock generated Normal file

File diff suppressed because one or more lines are too long

announcement_tip.toml Normal file

@@ -0,0 +1,17 @@
# Example announcement tips for Codex TUI.
# Each [[announcements]] entry is evaluated in order; the last matching one is shown.
# Dates are UTC, formatted as YYYY-MM-DD. The from_date is inclusive and the to_date is exclusive.
# version_regex matches against the CLI version (env!("CARGO_PKG_VERSION")); omit to apply to all versions.
# target_app specifies which app should display the announcement (cli, vsce, ...).
[[announcements]]
content = "Welcome to Codex! Check out the new onboarding flow."
from_date = "2024-10-01"
to_date = "2024-10-15"
target_app = "cli"
# Test announcement shown only for the local build version until 2026-01-10, exclusive (already in the past)
[[announcements]]
content = "This is a test announcement"
version_regex = "^0\\.0\\.0$"
to_date = "2026-01-10"

codex-cli/bin/codex.js Normal file → Executable file

@@ -95,7 +95,6 @@ function detectPackageManager() {
return "bun";
}
if (
__dirname.includes(".bun/install/global") ||
__dirname.includes(".bun\\install\\global")

codex-rs/BUILD.bazel Normal file

@@ -0,0 +1 @@

codex-rs/Cargo.lock generated

File diff suppressed because it is too large


@@ -6,6 +6,7 @@ members = [
"app-server",
"app-server-protocol",
"app-server-test-client",
"debug-client",
"apply-patch",
"arg0",
"feedback",
@@ -26,6 +27,7 @@ members = [
"login",
"mcp-server",
"mcp-types",
"network-proxy",
"ollama",
"process-hardening",
"protocol",
@@ -34,7 +36,6 @@ members = [
"stdio-to-uds",
"otel",
"tui",
"tui2",
"utils/absolute-path",
"utils/cargo-bin",
"utils/git",
@@ -70,6 +71,7 @@ codex-arg0 = { path = "arg0" }
codex-async-utils = { path = "async-utils" }
codex-backend-client = { path = "backend-client" }
codex-chatgpt = { path = "chatgpt" }
codex-cli = { path = "cli"}
codex-client = { path = "codex-client" }
codex-common = { path = "common" }
codex-core = { path = "core" }
@@ -91,7 +93,6 @@ codex-responses-api-proxy = { path = "responses-api-proxy" }
codex-rmcp-client = { path = "rmcp-client" }
codex-stdio-to-uds = { path = "stdio-to-uds" }
codex-tui = { path = "tui" }
codex-tui2 = { path = "tui2" }
codex-utils-absolute-path = { path = "utils/absolute-path" }
codex-utils-cache = { path = "utils/cache" }
codex-utils-cargo-bin = { path = "utils/cargo-bin" }
@@ -120,12 +121,12 @@ axum = { version = "0.8", default-features = false }
base64 = "0.22.1"
bytes = "1.10.1"
chardetng = "0.1.17"
chrono = "0.4.42"
chrono = "0.4.43"
clap = "4"
clap_complete = "4"
color-eyre = "0.6.3"
crossterm = "0.28.1"
ctor = "0.5.0"
ctor = "0.6.3"
derive_more = "2"
diffy = "0.4.2"
dirs = "6"
@@ -134,14 +135,15 @@ dunce = "1.0.4"
encoding_rs = "0.8.35"
env-flags = "0.1.1"
env_logger = "0.11.5"
escargot = "0.5"
eventsource-stream = "0.2.3"
futures = { version = "0.3", default-features = false }
globset = "0.4"
http = "1.3.1"
icu_decimal = "2.1"
icu_locale_core = "2.1"
icu_provider = { version = "2.1", features = ["sync"] }
ignore = "0.4.23"
indoc = "2.0"
image = { version = "^0.25.9", default-features = false }
include_dir = "0.7.4"
indexmap = "2.12.0"
@@ -152,7 +154,7 @@ landlock = "0.4.4"
lazy_static = "1"
libc = "0.2.177"
log = "0.4"
lru = "0.16.2"
lru = "0.16.3"
maplit = "1.0.2"
mime_guess = "2.0.5"
multimap = "0.10.0"
@@ -176,7 +178,6 @@ pretty_assertions = "1.4.1"
pulldown-cmark = "0.10"
rand = "0.9"
ratatui = "0.29.0"
ratatui-core = "0.1.0"
ratatui-macros = "0.6.0"
regex = "1.12.2"
regex-lite = "0.1.8"
@@ -187,11 +188,13 @@ seccompiler = "0.5.0"
sentry = "0.46.0"
serde = "1"
serde_json = "1"
serde_path_to_error = "0.1.20"
serde_with = "3.16"
serde_yaml = "0.9"
serial_test = "3.2.0"
sha1 = "0.10.6"
sha2 = "0.10"
semver = "1.0"
shlex = "1.3.0"
similar = "2.7.0"
socket2 = "0.6.1"
@@ -209,7 +212,8 @@ tiny_http = "0.12"
tokio = "1"
tokio-stream = "0.1.18"
tokio-test = "0.4"
tokio-util = "0.7.16"
tokio-tungstenite = { version = "0.28.0", features = ["proxy", "rustls-tls-native-roots"] }
tokio-util = "0.7.18"
toml = "0.9.5"
toml_edit = "0.24.0"
tracing = "0.1.43"
@@ -218,9 +222,9 @@ tracing-subscriber = "0.3.22"
tracing-test = "0.2.5"
tree-sitter = "0.25.10"
tree-sitter-bash = "0.25"
zstd = "0.13"
tree-sitter-highlight = "0.25.10"
ts-rs = "11"
tui-scrollbar = "0.2.1"
uds_windows = "1.1.0"
unicode-segmentation = "1.12.0"
unicode-width = "0.2"
@@ -230,7 +234,7 @@ uuid = "1"
vt100 = "0.16.2"
walkdir = "2.5.0"
webbrowser = "1.0"
which = "6"
which = "8"
wildmatch = "2.6.1"
wiremock = "0.6"
@@ -298,6 +302,10 @@ opt-level = 0
# ratatui = { path = "../../ratatui" }
crossterm = { git = "https://github.com/nornagon/crossterm", branch = "nornagon/color-query" }
ratatui = { git = "https://github.com/nornagon/ratatui", branch = "nornagon-v0.29.0-patch" }
tokio-tungstenite = { git = "https://github.com/JakkuSakura/tokio-tungstenite", rev = "2ae536b0de793f3ddf31fc2f22d445bf1ef2023d" }
# Uncomment to debug local changes.
# rmcp = { path = "../../rust-sdk/crates/rmcp" }
[patch."ssh://git@github.com/JakkuSakura/tungstenite-rs.git"]
tungstenite = { git = "https://github.com/JakkuSakura/tungstenite-rs", rev = "f514de8644821113e5d18a027d6d28a5c8cc0a6e" }


@@ -0,0 +1,6 @@
load("//:defs.bzl", "codex_rust_crate")
codex_rust_crate(
name = "ansi-escape",
crate_name = "codex_ansi_escape",
)


@@ -0,0 +1,6 @@
load("//:defs.bzl", "codex_rust_crate")
codex_rust_crate(
name = "app-server-protocol",
crate_name = "codex_app_server_protocol",
)


@@ -109,18 +109,42 @@ client_request_definitions! {
params: v2::ThreadResumeParams,
response: v2::ThreadResumeResponse,
},
ThreadFork => "thread/fork" {
params: v2::ThreadForkParams,
response: v2::ThreadForkResponse,
},
ThreadArchive => "thread/archive" {
params: v2::ThreadArchiveParams,
response: v2::ThreadArchiveResponse,
},
ThreadRollback => "thread/rollback" {
params: v2::ThreadRollbackParams,
response: v2::ThreadRollbackResponse,
},
ThreadList => "thread/list" {
params: v2::ThreadListParams,
response: v2::ThreadListResponse,
},
ThreadLoadedList => "thread/loaded/list" {
params: v2::ThreadLoadedListParams,
response: v2::ThreadLoadedListResponse,
},
ThreadRead => "thread/read" {
params: v2::ThreadReadParams,
response: v2::ThreadReadResponse,
},
SkillsList => "skills/list" {
params: v2::SkillsListParams,
response: v2::SkillsListResponse,
},
AppsList => "app/list" {
params: v2::AppsListParams,
response: v2::AppsListResponse,
},
SkillsConfigWrite => "skills/config/write" {
params: v2::SkillsConfigWriteParams,
response: v2::SkillsConfigWriteResponse,
},
TurnStart => "turn/start" {
params: v2::TurnStartParams,
response: v2::TurnStartResponse,
@@ -138,12 +162,22 @@ client_request_definitions! {
params: v2::ModelListParams,
response: v2::ModelListResponse,
},
/// EXPERIMENTAL - list collaboration mode presets.
CollaborationModeList => "collaborationMode/list" {
params: v2::CollaborationModeListParams,
response: v2::CollaborationModeListResponse,
},
McpServerOauthLogin => "mcpServer/oauth/login" {
params: v2::McpServerOauthLoginParams,
response: v2::McpServerOauthLoginResponse,
},
McpServerRefresh => "config/mcpServer/reload" {
params: #[ts(type = "undefined")] #[serde(skip_serializing_if = "Option::is_none")] Option<()>,
response: v2::McpServerRefreshResponse,
},
McpServerStatusList => "mcpServerStatus/list" {
params: v2::ListMcpServerStatusParams,
response: v2::ListMcpServerStatusResponse,
@@ -193,6 +227,11 @@ client_request_definitions! {
response: v2::ConfigWriteResponse,
},
ConfigRequirementsRead => "configRequirements/read" {
params: #[ts(type = "undefined")] #[serde(skip_serializing_if = "Option::is_none")] Option<()>,
response: v2::ConfigRequirementsReadResponse,
},
GetAccount => "account/read" {
params: v2::GetAccountParams,
response: v2::GetAccountResponse,
@@ -217,6 +256,11 @@ client_request_definitions! {
params: v1::ResumeConversationParams,
response: v1::ResumeConversationResponse,
},
/// Fork a recorded Codex conversation into a new session.
ForkConversation {
params: v1::ForkConversationParams,
response: v1::ForkConversationResponse,
},
ArchiveConversation {
params: v1::ArchiveConversationParams,
response: v1::ArchiveConversationResponse,
@@ -474,6 +518,12 @@ server_request_definitions! {
response: v2::FileChangeRequestApprovalResponse,
},
/// EXPERIMENTAL - Request input from the user for a tool call.
ToolRequestUserInput => "item/tool/requestUserInput" {
params: v2::ToolRequestUserInputParams,
response: v2::ToolRequestUserInputResponse,
},
/// DEPRECATED APIs below
/// Request to approve a patch.
/// This request is used for Turns started via the legacy APIs (i.e. SendUserTurn, SendUserMessage).
@@ -540,6 +590,7 @@ server_notification_definitions! {
ReasoningTextDelta => "item/reasoning/textDelta" (v2::ReasoningTextDeltaNotification),
ContextCompacted => "thread/compacted" (v2::ContextCompactedNotification),
DeprecationNotice => "deprecationNotice" (v2::DeprecationNoticeNotification),
ConfigWarning => "configWarning" (v2::ConfigWarningNotification),
/// Notifies the user of world-writable directories on Windows, which cannot be protected by the sandbox.
WindowsWorldWritableWarning => "windows/worldWritableWarning" (v2::WindowsWorldWritableWarningNotification),
@@ -565,7 +616,7 @@ client_notification_definitions! {
mod tests {
use super::*;
use anyhow::Result;
use codex_protocol::ConversationId;
use codex_protocol::ThreadId;
use codex_protocol::account::PlanType;
use codex_protocol::parse_command::ParsedCommand;
use codex_protocol::protocol::AskForApproval;
@@ -614,7 +665,7 @@ mod tests {
#[test]
fn conversation_id_serializes_as_plain_string() -> Result<()> {
let id = ConversationId::from_string("67e55044-10b1-426f-9247-bb680e5fe0c8")?;
let id = ThreadId::from_string("67e55044-10b1-426f-9247-bb680e5fe0c8")?;
assert_eq!(
json!("67e55044-10b1-426f-9247-bb680e5fe0c8"),
@@ -625,11 +676,10 @@ mod tests {
#[test]
fn conversation_id_deserializes_from_plain_string() -> Result<()> {
let id: ConversationId =
serde_json::from_value(json!("67e55044-10b1-426f-9247-bb680e5fe0c8"))?;
let id: ThreadId = serde_json::from_value(json!("67e55044-10b1-426f-9247-bb680e5fe0c8"))?;
assert_eq!(
ConversationId::from_string("67e55044-10b1-426f-9247-bb680e5fe0c8")?,
ThreadId::from_string("67e55044-10b1-426f-9247-bb680e5fe0c8")?,
id,
);
Ok(())
@@ -650,7 +700,7 @@ mod tests {
#[test]
fn serialize_server_request() -> Result<()> {
let conversation_id = ConversationId::from_string("67e55044-10b1-426f-9247-bb680e5fe0c8")?;
let conversation_id = ThreadId::from_string("67e55044-10b1-426f-9247-bb680e5fe0c8")?;
let params = v1::ExecCommandApprovalParams {
conversation_id,
call_id: "call-42".to_string(),
@@ -708,6 +758,22 @@ mod tests {
Ok(())
}
#[test]
fn serialize_config_requirements_read() -> Result<()> {
let request = ClientRequest::ConfigRequirementsRead {
request_id: RequestId::Integer(1),
params: None,
};
assert_eq!(
json!({
"method": "configRequirements/read",
"id": 1,
}),
serde_json::to_value(&request)?,
);
Ok(())
}
#[test]
fn serialize_account_login_api_key() -> Result<()> {
let request = ClientRequest::LoginAccount {
@@ -831,4 +897,21 @@ mod tests {
);
Ok(())
}
#[test]
fn serialize_list_collaboration_modes() -> Result<()> {
let request = ClientRequest::CollaborationModeList {
request_id: RequestId::Integer(7),
params: v2::CollaborationModeListParams::default(),
};
assert_eq!(
json!({
"method": "collaborationMode/list",
"id": 7,
"params": {}
}),
serde_json::to_value(&request)?,
);
Ok(())
}
}


@@ -6,6 +6,7 @@ use crate::protocol::v2::UserInput;
use codex_protocol::protocol::AgentReasoningEvent;
use codex_protocol::protocol::AgentReasoningRawContentEvent;
use codex_protocol::protocol::EventMsg;
use codex_protocol::protocol::ThreadRolledBackEvent;
use codex_protocol::protocol::TurnAbortedEvent;
use codex_protocol::protocol::UserMessageEvent;
@@ -57,6 +58,7 @@ impl ThreadHistoryBuilder {
EventMsg::TokenCount(_) => {}
EventMsg::EnteredReviewMode(_) => {}
EventMsg::ExitedReviewMode(_) => {}
EventMsg::ThreadRolledBack(payload) => self.handle_thread_rollback(payload),
EventMsg::UndoCompleted(_) => {}
EventMsg::TurnAborted(payload) => self.handle_turn_aborted(payload),
_ => {}
@@ -130,6 +132,23 @@ impl ThreadHistoryBuilder {
turn.status = TurnStatus::Interrupted;
}
fn handle_thread_rollback(&mut self, payload: &ThreadRolledBackEvent) {
self.finish_current_turn();
let n = usize::try_from(payload.num_turns).unwrap_or(usize::MAX);
if n >= self.turns.len() {
self.turns.clear();
} else {
self.turns.truncate(self.turns.len().saturating_sub(n));
}
// Re-number subsequent synthetic ids so the pruned history is consistent.
self.next_turn_index =
i64::try_from(self.turns.len().saturating_add(1)).unwrap_or(i64::MAX);
let item_count: usize = self.turns.iter().map(|t| t.items.len()).sum();
self.next_item_index = i64::try_from(item_count.saturating_add(1)).unwrap_or(i64::MAX);
}
fn finish_current_turn(&mut self) {
if let Some(turn) = self.current_turn.take() {
if turn.items.is_empty() {
@@ -178,6 +197,12 @@ impl ThreadHistoryBuilder {
if !payload.message.trim().is_empty() {
content.push(UserInput::Text {
text: payload.message.clone(),
text_elements: payload
.text_elements
.iter()
.cloned()
.map(Into::into)
.collect(),
});
}
if let Some(images) = &payload.images {
@@ -185,6 +210,9 @@ impl ThreadHistoryBuilder {
content.push(UserInput::Image { url: image.clone() });
}
}
for path in &payload.local_images {
content.push(UserInput::LocalImage { path: path.clone() });
}
content
}
}
@@ -213,6 +241,7 @@ mod tests {
use codex_protocol::protocol::AgentMessageEvent;
use codex_protocol::protocol::AgentReasoningEvent;
use codex_protocol::protocol::AgentReasoningRawContentEvent;
use codex_protocol::protocol::ThreadRolledBackEvent;
use codex_protocol::protocol::TurnAbortReason;
use codex_protocol::protocol::TurnAbortedEvent;
use codex_protocol::protocol::UserMessageEvent;
@@ -224,6 +253,8 @@ mod tests {
EventMsg::UserMessage(UserMessageEvent {
message: "First turn".into(),
images: Some(vec!["https://example.com/one.png".into()]),
text_elements: Vec::new(),
local_images: Vec::new(),
}),
EventMsg::AgentMessage(AgentMessageEvent {
message: "Hi there".into(),
@@ -237,6 +268,8 @@ mod tests {
EventMsg::UserMessage(UserMessageEvent {
message: "Second turn".into(),
images: None,
text_elements: Vec::new(),
local_images: Vec::new(),
}),
EventMsg::AgentMessage(AgentMessageEvent {
message: "Reply two".into(),
@@ -257,6 +290,7 @@ mod tests {
content: vec![
UserInput::Text {
text: "First turn".into(),
text_elements: Vec::new(),
},
UserInput::Image {
url: "https://example.com/one.png".into(),
@@ -288,7 +322,8 @@ mod tests {
ThreadItem::UserMessage {
id: "item-4".into(),
content: vec![UserInput::Text {
text: "Second turn".into()
text: "Second turn".into(),
text_elements: Vec::new(),
}],
}
);
@@ -307,6 +342,8 @@ mod tests {
EventMsg::UserMessage(UserMessageEvent {
message: "Turn start".into(),
images: None,
text_elements: Vec::new(),
local_images: Vec::new(),
}),
EventMsg::AgentReasoning(AgentReasoningEvent {
text: "first summary".into(),
@@ -351,6 +388,8 @@ mod tests {
EventMsg::UserMessage(UserMessageEvent {
message: "Please do the thing".into(),
images: None,
text_elements: Vec::new(),
local_images: Vec::new(),
}),
EventMsg::AgentMessage(AgentMessageEvent {
message: "Working...".into(),
@@ -361,6 +400,8 @@ mod tests {
EventMsg::UserMessage(UserMessageEvent {
message: "Let's try again".into(),
images: None,
text_elements: Vec::new(),
local_images: Vec::new(),
}),
EventMsg::AgentMessage(AgentMessageEvent {
message: "Second attempt complete.".into(),
@@ -378,7 +419,8 @@ mod tests {
ThreadItem::UserMessage {
id: "item-1".into(),
content: vec![UserInput::Text {
text: "Please do the thing".into()
text: "Please do the thing".into(),
text_elements: Vec::new(),
}],
}
);
@@ -398,7 +440,8 @@ mod tests {
ThreadItem::UserMessage {
id: "item-3".into(),
content: vec![UserInput::Text {
text: "Let's try again".into()
text: "Let's try again".into(),
text_elements: Vec::new(),
}],
}
);
@@ -410,4 +453,107 @@ mod tests {
}
);
}
#[test]
fn drops_last_turns_on_thread_rollback() {
let events = vec![
EventMsg::UserMessage(UserMessageEvent {
message: "First".into(),
images: None,
text_elements: Vec::new(),
local_images: Vec::new(),
}),
EventMsg::AgentMessage(AgentMessageEvent {
message: "A1".into(),
}),
EventMsg::UserMessage(UserMessageEvent {
message: "Second".into(),
images: None,
text_elements: Vec::new(),
local_images: Vec::new(),
}),
EventMsg::AgentMessage(AgentMessageEvent {
message: "A2".into(),
}),
EventMsg::ThreadRolledBack(ThreadRolledBackEvent { num_turns: 1 }),
EventMsg::UserMessage(UserMessageEvent {
message: "Third".into(),
images: None,
text_elements: Vec::new(),
local_images: Vec::new(),
}),
EventMsg::AgentMessage(AgentMessageEvent {
message: "A3".into(),
}),
];
let turns = build_turns_from_event_msgs(&events);
let expected = vec![
Turn {
id: "turn-1".into(),
status: TurnStatus::Completed,
error: None,
items: vec![
ThreadItem::UserMessage {
id: "item-1".into(),
content: vec![UserInput::Text {
text: "First".into(),
text_elements: Vec::new(),
}],
},
ThreadItem::AgentMessage {
id: "item-2".into(),
text: "A1".into(),
},
],
},
Turn {
id: "turn-2".into(),
status: TurnStatus::Completed,
error: None,
items: vec![
ThreadItem::UserMessage {
id: "item-3".into(),
content: vec![UserInput::Text {
text: "Third".into(),
text_elements: Vec::new(),
}],
},
ThreadItem::AgentMessage {
id: "item-4".into(),
text: "A3".into(),
},
],
},
];
assert_eq!(turns, expected);
}
#[test]
fn thread_rollback_clears_all_turns_when_num_turns_exceeds_history() {
let events = vec![
EventMsg::UserMessage(UserMessageEvent {
message: "One".into(),
images: None,
text_elements: Vec::new(),
local_images: Vec::new(),
}),
EventMsg::AgentMessage(AgentMessageEvent {
message: "A1".into(),
}),
EventMsg::UserMessage(UserMessageEvent {
message: "Two".into(),
images: None,
text_elements: Vec::new(),
local_images: Vec::new(),
}),
EventMsg::AgentMessage(AgentMessageEvent {
message: "A2".into(),
}),
EventMsg::ThreadRolledBack(ThreadRolledBackEvent { num_turns: 99 }),
];
let turns = build_turns_from_event_msgs(&events);
assert_eq!(turns, Vec::<Turn>::new());
}
}


@@ -1,7 +1,7 @@
use std::collections::HashMap;
use std::path::PathBuf;
use codex_protocol::ConversationId;
use codex_protocol::ThreadId;
use codex_protocol::config_types::ForcedLoginMethod;
use codex_protocol::config_types::ReasoningSummary;
use codex_protocol::config_types::SandboxMode;
@@ -16,6 +16,8 @@ use codex_protocol::protocol::ReviewDecision;
use codex_protocol::protocol::SandboxPolicy;
use codex_protocol::protocol::SessionSource;
use codex_protocol::protocol::TurnAbortReason;
use codex_protocol::user_input::ByteRange as CoreByteRange;
use codex_protocol::user_input::TextElement as CoreTextElement;
use codex_utils_absolute_path::AbsolutePathBuf;
use schemars::JsonSchema;
use serde::Deserialize;
@@ -68,7 +70,7 @@ pub struct NewConversationParams {
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct NewConversationResponse {
pub conversation_id: ConversationId,
pub conversation_id: ThreadId,
pub model: String,
pub reasoning_effort: Option<ReasoningEffort>,
pub rollout_path: PathBuf,
@@ -77,7 +79,16 @@ pub struct NewConversationResponse {
#[derive(Serialize, Deserialize, Debug, Clone, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct ResumeConversationResponse {
pub conversation_id: ConversationId,
pub conversation_id: ThreadId,
pub model: String,
pub initial_messages: Option<Vec<EventMsg>>,
pub rollout_path: PathBuf,
}
#[derive(Serialize, Deserialize, Debug, Clone, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct ForkConversationResponse {
pub conversation_id: ThreadId,
pub model: String,
pub initial_messages: Option<Vec<EventMsg>>,
pub rollout_path: PathBuf,
@@ -90,9 +101,9 @@ pub enum GetConversationSummaryParams {
#[serde(rename = "rolloutPath")]
rollout_path: PathBuf,
},
ConversationId {
ThreadId {
#[serde(rename = "conversationId")]
conversation_id: ConversationId,
conversation_id: ThreadId,
},
}
@@ -113,10 +124,11 @@ pub struct ListConversationsParams {
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct ConversationSummary {
pub conversation_id: ConversationId,
pub conversation_id: ThreadId,
pub path: PathBuf,
pub preview: String,
pub timestamp: Option<String>,
pub updated_at: Option<String>,
pub model_provider: String,
pub cwd: PathBuf,
pub cli_version: String,
@@ -143,11 +155,19 @@ pub struct ListConversationsResponse {
#[serde(rename_all = "camelCase")]
pub struct ResumeConversationParams {
pub path: Option<PathBuf>,
pub conversation_id: Option<ConversationId>,
pub conversation_id: Option<ThreadId>,
pub history: Option<Vec<ResponseItem>>,
pub overrides: Option<NewConversationParams>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct ForkConversationParams {
pub path: Option<PathBuf>,
pub conversation_id: Option<ThreadId>,
pub overrides: Option<NewConversationParams>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct AddConversationSubscriptionResponse {
@@ -158,7 +178,7 @@ pub struct AddConversationSubscriptionResponse {
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct ArchiveConversationParams {
pub conversation_id: ConversationId,
pub conversation_id: ThreadId,
pub rollout_path: PathBuf,
}
@@ -198,7 +218,7 @@ pub struct GitDiffToRemoteResponse {
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct ApplyPatchApprovalParams {
pub conversation_id: ConversationId,
pub conversation_id: ThreadId,
/// Use to correlate this with [codex_core::protocol::PatchApplyBeginEvent]
/// and [codex_core::protocol::PatchApplyEndEvent].
pub call_id: String,
@@ -219,7 +239,7 @@ pub struct ApplyPatchApprovalResponse {
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct ExecCommandApprovalParams {
pub conversation_id: ConversationId,
pub conversation_id: ThreadId,
/// Use to correlate this with [codex_core::protocol::ExecCommandBeginEvent]
/// and [codex_core::protocol::ExecCommandEndEvent].
pub call_id: String,
@@ -369,14 +389,14 @@ pub struct SandboxSettings {
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct SendUserMessageParams {
pub conversation_id: ConversationId,
pub conversation_id: ThreadId,
pub items: Vec<InputItem>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct SendUserTurnParams {
pub conversation_id: ConversationId,
pub conversation_id: ThreadId,
pub items: Vec<InputItem>,
pub cwd: PathBuf,
pub approval_policy: AskForApproval,
@@ -395,7 +415,7 @@ pub struct SendUserTurnResponse {}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct InterruptConversationParams {
pub conversation_id: ConversationId,
pub conversation_id: ThreadId,
}
#[derive(Serialize, Deserialize, Debug, Clone, JsonSchema, TS)]
@@ -411,7 +431,7 @@ pub struct SendUserMessageResponse {}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct AddConversationListenerParams {
pub conversation_id: ConversationId,
pub conversation_id: ThreadId,
#[serde(default)]
pub experimental_raw_events: bool,
}
@@ -427,9 +447,71 @@ pub struct RemoveConversationListenerParams {
#[serde(rename_all = "camelCase")]
#[serde(tag = "type", content = "data")]
pub enum InputItem {
Text { text: String },
Image { image_url: String },
LocalImage { path: PathBuf },
Text {
text: String,
/// UI-defined spans within `text` used to render or persist special elements.
#[serde(default)]
text_elements: Vec<V1TextElement>,
},
Image {
image_url: String,
},
LocalImage {
path: PathBuf,
},
}
#[derive(Serialize, Deserialize, Debug, Clone, Copy, PartialEq, Eq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(rename = "ByteRange")]
pub struct V1ByteRange {
/// Start byte offset (inclusive) within the UTF-8 text buffer.
pub start: usize,
/// End byte offset (exclusive) within the UTF-8 text buffer.
pub end: usize,
}
impl From<CoreByteRange> for V1ByteRange {
fn from(value: CoreByteRange) -> Self {
Self {
start: value.start,
end: value.end,
}
}
}
impl From<V1ByteRange> for CoreByteRange {
fn from(value: V1ByteRange) -> Self {
Self {
start: value.start,
end: value.end,
}
}
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(rename = "TextElement")]
pub struct V1TextElement {
/// Byte range in the parent `text` buffer that this element occupies.
pub byte_range: V1ByteRange,
/// Optional human-readable placeholder for the element, displayed in the UI.
pub placeholder: Option<String>,
}
impl From<CoreTextElement> for V1TextElement {
fn from(value: CoreTextElement) -> Self {
Self {
byte_range: value.byte_range.into(),
placeholder: value._placeholder_for_conversion_only().map(str::to_string),
}
}
}
impl From<V1TextElement> for CoreTextElement {
fn from(value: V1TextElement) -> Self {
Self::new(value.byte_range.into(), value.placeholder)
}
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
@@ -445,7 +527,7 @@ pub struct LoginChatGptCompleteNotification {
#[derive(Serialize, Deserialize, Debug, Clone, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct SessionConfiguredNotification {
pub session_id: ConversationId,
pub session_id: ThreadId,
pub model: String,
pub reasoning_effort: Option<ReasoningEffort>,
pub history_log_id: u64,


@@ -4,10 +4,14 @@ use std::path::PathBuf;
use crate::protocol::common::AuthMode;
use codex_protocol::account::PlanType;
use codex_protocol::approvals::ExecPolicyAmendment as CoreExecPolicyAmendment;
use codex_protocol::config_types::CollaborationMode;
use codex_protocol::config_types::CollaborationModeMask;
use codex_protocol::config_types::ForcedLoginMethod;
use codex_protocol::config_types::Personality;
use codex_protocol::config_types::ReasoningSummary;
use codex_protocol::config_types::SandboxMode as CoreSandboxMode;
use codex_protocol::config_types::Verbosity;
use codex_protocol::config_types::WebSearchMode;
use codex_protocol::items::AgentMessageContent as CoreAgentMessageContent;
use codex_protocol::items::TurnItem as CoreTurnItem;
use codex_protocol::models::ResponseItem;
@@ -15,6 +19,7 @@ use codex_protocol::openai_models::ReasoningEffort;
use codex_protocol::parse_command::ParsedCommand as CoreParsedCommand;
use codex_protocol::plan_tool::PlanItemArg as CorePlanItemArg;
use codex_protocol::plan_tool::StepStatus as CorePlanStepStatus;
use codex_protocol::protocol::AgentStatus as CoreAgentStatus;
use codex_protocol::protocol::AskForApproval as CoreAskForApproval;
use codex_protocol::protocol::CodexErrorInfo as CoreCodexErrorInfo;
use codex_protocol::protocol::CreditsSnapshot as CoreCreditsSnapshot;
@@ -23,10 +28,14 @@ use codex_protocol::protocol::RateLimitSnapshot as CoreRateLimitSnapshot;
use codex_protocol::protocol::RateLimitWindow as CoreRateLimitWindow;
use codex_protocol::protocol::SessionSource as CoreSessionSource;
use codex_protocol::protocol::SkillErrorInfo as CoreSkillErrorInfo;
use codex_protocol::protocol::SkillInterface as CoreSkillInterface;
use codex_protocol::protocol::SkillMetadata as CoreSkillMetadata;
use codex_protocol::protocol::SkillScope as CoreSkillScope;
use codex_protocol::protocol::SubAgentSource as CoreSubAgentSource;
use codex_protocol::protocol::TokenUsage as CoreTokenUsage;
use codex_protocol::protocol::TokenUsageInfo as CoreTokenUsageInfo;
use codex_protocol::user_input::ByteRange as CoreByteRange;
use codex_protocol::user_input::TextElement as CoreTextElement;
use codex_protocol::user_input::UserInput as CoreUserInput;
use codex_utils_absolute_path::AbsolutePathBuf;
use mcp_types::ContentBlock as McpContentBlock;
@@ -89,6 +98,8 @@ pub enum CodexErrorInfo {
InternalServerError,
Unauthorized,
BadRequest,
ThreadRollbackFailed,
ContextClearFailed,
SandboxError,
/// The response SSE stream disconnected in the middle of a turn before completion.
ResponseStreamDisconnected {
@@ -119,6 +130,8 @@ impl From<CoreCodexErrorInfo> for CodexErrorInfo {
CoreCodexErrorInfo::InternalServerError => CodexErrorInfo::InternalServerError,
CoreCodexErrorInfo::Unauthorized => CodexErrorInfo::Unauthorized,
CoreCodexErrorInfo::BadRequest => CodexErrorInfo::BadRequest,
CoreCodexErrorInfo::ThreadRollbackFailed => CodexErrorInfo::ThreadRollbackFailed,
CoreCodexErrorInfo::ContextClearFailed => CodexErrorInfo::ContextClearFailed,
CoreCodexErrorInfo::SandboxError => CodexErrorInfo::SandboxError,
CoreCodexErrorInfo::ResponseStreamDisconnected { http_status_code } => {
CodexErrorInfo::ResponseStreamDisconnected { http_status_code }
@@ -209,7 +222,6 @@ v2_enum_from_core!(
}
);
// TODO(mbolin): Support in-repo layer.
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq, JsonSchema, TS)]
#[serde(tag = "type", rename_all = "camelCase")]
#[ts(tag = "type")]
@@ -325,11 +337,21 @@ pub struct ProfileV2 {
pub model_reasoning_effort: Option<ReasoningEffort>,
pub model_reasoning_summary: Option<ReasoningSummary>,
pub model_verbosity: Option<Verbosity>,
pub web_search: Option<WebSearchMode>,
pub chatgpt_base_url: Option<String>,
#[serde(default, flatten)]
pub additional: HashMap<String, JsonValue>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "snake_case")]
#[ts(export_to = "v2/")]
pub struct AnalyticsConfig {
pub enabled: Option<bool>,
#[serde(default, flatten)]
pub additional: HashMap<String, JsonValue>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "snake_case")]
#[ts(export_to = "v2/")]
@@ -344,6 +366,7 @@ pub struct Config {
pub sandbox_workspace_write: Option<SandboxWorkspaceWrite>,
pub forced_chatgpt_workspace_id: Option<String>,
pub forced_login_method: Option<ForcedLoginMethod>,
pub web_search: Option<WebSearchMode>,
pub tools: Option<ToolsV2>,
pub profile: Option<String>,
#[serde(default)]
@@ -354,6 +377,7 @@ pub struct Config {
pub model_reasoning_effort: Option<ReasoningEffort>,
pub model_reasoning_summary: Option<ReasoningSummary>,
pub model_verbosity: Option<Verbosity>,
pub analytics: Option<AnalyticsConfig>,
#[serde(default, flatten)]
pub additional: HashMap<String, JsonValue>,
}
@@ -373,6 +397,8 @@ pub struct ConfigLayer {
pub name: ConfigLayerSource,
pub version: String,
pub config: JsonValue,
#[serde(skip_serializing_if = "Option::is_none")]
pub disabled_reason: Option<String>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq, JsonSchema, TS)]
@@ -429,6 +455,10 @@ pub enum ConfigWriteErrorCode {
pub struct ConfigReadParams {
#[serde(default)]
pub include_layers: bool,
/// Optional working directory to resolve project config layers. If specified,
/// return the effective config as seen from that directory (i.e., including any
/// project layers between `cwd` and the project/repo root).
pub cwd: Option<String>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
@@ -441,6 +471,22 @@ pub struct ConfigReadResponse {
pub layers: Option<Vec<ConfigLayer>>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct ConfigRequirements {
pub allowed_approval_policies: Option<Vec<AskForApproval>>,
pub allowed_sandbox_modes: Option<Vec<SandboxMode>>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct ConfigRequirementsReadResponse {
/// Null if no requirements are configured (e.g. no requirements.toml/MDM entries).
pub requirements: Option<ConfigRequirements>,
}
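// Sketch (an editor's illustration, not part of this diff): what a
// configRequirements/read response might look like on the wire. The policy and
// mode string values below are placeholders, not verified serde identifiers.
fn _sketch_config_requirements_response() -> serde_json::Value {
    serde_json::json!({
        "requirements": {
            "allowedApprovalPolicies": ["never"],
            "allowedSandboxModes": ["read-only"]
        }
    })
}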
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
@@ -475,14 +521,33 @@ pub struct ConfigEdit {
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub enum ApprovalDecision {
pub enum CommandExecutionApprovalDecision {
/// User approved the command.
Accept,
/// Approve and remember the approval for the session.
/// User approved the command and future identical commands should run without prompting.
AcceptForSession,
/// User approved the command, and wants to apply the proposed execpolicy amendment so future
/// matching commands can run without prompting.
AcceptWithExecpolicyAmendment {
execpolicy_amendment: ExecPolicyAmendment,
},
/// User denied the command. The agent will continue the turn.
Decline,
/// User denied the command. The turn will also be immediately interrupted.
Cancel,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub enum FileChangeApprovalDecision {
/// User approved the file changes.
Accept,
/// User approved the file changes and future changes to the same files should run without prompting.
AcceptForSession,
/// User denied the file changes. The agent will continue the turn.
Decline,
/// User denied the file changes. The turn will also be immediately interrupted.
Cancel,
}
@@ -639,6 +704,7 @@ pub enum SessionSource {
VsCode,
Exec,
AppServer,
SubAgent(CoreSubAgentSource),
#[serde(other)]
Unknown,
}
@@ -650,7 +716,7 @@ impl From<CoreSessionSource> for SessionSource {
CoreSessionSource::VSCode => SessionSource::VsCode,
CoreSessionSource::Exec => SessionSource::Exec,
CoreSessionSource::Mcp => SessionSource::AppServer,
CoreSessionSource::SubAgent(_) => SessionSource::Unknown,
CoreSessionSource::SubAgent(sub) => SessionSource::SubAgent(sub),
CoreSessionSource::Unknown => SessionSource::Unknown,
}
}
@@ -663,6 +729,7 @@ impl From<SessionSource> for CoreSessionSource {
SessionSource::VsCode => CoreSessionSource::VSCode,
SessionSource::Exec => CoreSessionSource::Exec,
SessionSource::AppServer => CoreSessionSource::Mcp,
SessionSource::SubAgent(sub) => CoreSessionSource::SubAgent(sub),
SessionSource::Unknown => CoreSessionSource::Unknown,
}
}
@@ -862,6 +929,20 @@ pub struct ModelListResponse {
pub next_cursor: Option<String>,
}
/// EXPERIMENTAL - list collaboration mode presets.
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Default, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct CollaborationModeListParams {}
/// EXPERIMENTAL - collaboration mode presets response.
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct CollaborationModeListResponse {
pub data: Vec<CollaborationModeMask>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
@@ -893,6 +974,49 @@ pub struct ListMcpServerStatusResponse {
pub next_cursor: Option<String>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Default, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct AppsListParams {
/// Opaque pagination cursor returned by a previous call.
pub cursor: Option<String>,
/// Optional page size; defaults to a reasonable server-side value.
pub limit: Option<u32>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct AppInfo {
pub id: String,
pub name: String,
pub description: Option<String>,
pub logo_url: Option<String>,
pub install_url: Option<String>,
#[serde(default)]
pub is_accessible: bool,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct AppsListResponse {
pub data: Vec<AppInfo>,
/// Opaque cursor to pass to the next call to continue after the last item.
/// If None, there are no more items to return.
pub next_cursor: Option<String>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct McpServerRefreshParams {}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct McpServerRefreshResponse {}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
@@ -964,6 +1088,8 @@ pub struct ThreadStartParams {
pub config: Option<HashMap<String, JsonValue>>,
pub base_instructions: Option<String>,
pub developer_instructions: Option<String>,
pub personality: Option<Personality>,
pub ephemeral: Option<bool>,
/// If true, opt into emitting raw response items on the event stream.
///
/// This is for internal use only (e.g. Codex Cloud).
@@ -1018,6 +1144,7 @@ pub struct ThreadResumeParams {
pub config: Option<HashMap<String, serde_json::Value>>,
pub base_instructions: Option<String>,
pub developer_instructions: Option<String>,
pub personality: Option<Personality>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
@@ -1033,6 +1160,47 @@ pub struct ThreadResumeResponse {
pub reasoning_effort: Option<ReasoningEffort>,
}
#[derive(Serialize, Deserialize, Debug, Default, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
/// There are two ways to fork a thread:
/// 1. By thread_id: load the thread from disk by thread_id and fork it into a new thread.
/// 2. By path: load the thread from disk by path and fork it into a new thread.
///
/// If using path, the thread_id param will be ignored.
///
/// Prefer using thread_id whenever possible.
pub struct ThreadForkParams {
pub thread_id: String,
/// [UNSTABLE] Specify the rollout path to fork from.
/// If specified, the thread_id param will be ignored.
pub path: Option<PathBuf>,
/// Configuration overrides for the forked thread, if any.
pub model: Option<String>,
pub model_provider: Option<String>,
pub cwd: Option<String>,
pub approval_policy: Option<AskForApproval>,
pub sandbox: Option<SandboxMode>,
pub config: Option<HashMap<String, serde_json::Value>>,
pub base_instructions: Option<String>,
pub developer_instructions: Option<String>,
}
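// Sketch (illustrative, not part of this diff): a thread/fork request built the
// same way the serialization tests above build requests. The id and threadId
// values are placeholders; omitted optional overrides deserialize as None.
fn _sketch_thread_fork_request() -> serde_json::Value {
    serde_json::json!({
        "method": "thread/fork",
        "id": 11,
        "params": {
            "threadId": "67e55044-10b1-426f-9247-bb680e5fe0c8",
            "cwd": "/tmp/project"
        }
    })
}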
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct ThreadForkResponse {
pub thread: Thread,
pub model: String,
pub model_provider: String,
pub cwd: PathBuf,
pub approval_policy: AskForApproval,
pub sandbox: SandboxPolicy,
pub reasoning_effort: Option<ReasoningEffort>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
@@ -1045,6 +1213,30 @@ pub struct ThreadArchiveParams {
#[ts(export_to = "v2/")]
pub struct ThreadArchiveResponse {}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct ThreadRollbackParams {
pub thread_id: String,
/// The number of turns to drop from the end of the thread. Must be >= 1.
///
/// This only modifies the thread's history and does not revert local file changes
/// that have been made by the agent. Clients are responsible for reverting these changes.
pub num_turns: u32,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct ThreadRollbackResponse {
/// The updated thread after applying the rollback, with `turns` populated.
///
/// The ThreadItems stored in each Turn are lossy since we explicitly do not
/// persist all agent interactions, such as command executions. This is the same
/// behavior as `thread/resume`.
pub thread: Thread,
}
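// Sketch (illustrative, not part of this diff): a thread/rollback request that
// drops the last turn. The id and threadId values are placeholders. As the doc
// comment above notes, this prunes history only; the client must revert any
// file changes the agent made during the dropped turns.
fn _sketch_thread_rollback_request() -> serde_json::Value {
    serde_json::json!({
        "method": "thread/rollback",
        "id": 12,
        "params": {
            "threadId": "67e55044-10b1-426f-9247-bb680e5fe0c8",
            "numTurns": 1
        }
    })
}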
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
@@ -1053,9 +1245,22 @@ pub struct ThreadListParams {
pub cursor: Option<String>,
/// Optional page size; defaults to a reasonable server-side value.
pub limit: Option<u32>,
/// Optional sort key; defaults to created_at.
pub sort_key: Option<ThreadSortKey>,
/// Optional provider filter; when set, only sessions recorded under these
/// providers are returned. When present but empty, includes all providers.
pub model_providers: Option<Vec<String>>,
/// Optional archived filter; when set to true, only archived threads are returned.
/// If false or null, only non-archived threads are returned.
pub archived: Option<bool>,
}
#[derive(Serialize, Deserialize, Debug, Clone, Copy, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "snake_case")]
#[ts(export_to = "v2/")]
pub enum ThreadSortKey {
CreatedAt,
UpdatedAt,
}
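// Sketch (illustrative, not part of this diff): a thread/list request asking for
// the 20 most recently updated, non-archived threads. Note the params serialize
// as camelCase while the sort key itself is snake_case, per the renames above.
fn _sketch_thread_list_request() -> serde_json::Value {
    serde_json::json!({
        "method": "thread/list",
        "id": 13,
        "params": {
            "limit": 20,
            "sortKey": "updated_at",
            "archived": false
        }
    })
}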
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
@@ -1068,6 +1273,44 @@ pub struct ThreadListResponse {
pub next_cursor: Option<String>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Default, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct ThreadLoadedListParams {
/// Opaque pagination cursor returned by a previous call.
pub cursor: Option<String>,
/// Optional page size; defaults to no limit.
pub limit: Option<u32>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct ThreadLoadedListResponse {
/// Thread ids for sessions currently loaded in memory.
pub data: Vec<String>,
/// Opaque cursor to pass to the next call to continue after the last item.
/// If None, there are no more items to return.
pub next_cursor: Option<String>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct ThreadReadParams {
pub thread_id: String,
/// When true, include turns and their items from rollout history.
#[serde(default)]
pub include_turns: bool,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct ThreadReadResponse {
pub thread: Thread,
}
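// Sketch (illustrative, not part of this diff): a thread/read request that also
// materializes turns from rollout history via includeTurns. The id and threadId
// values are placeholders.
fn _sketch_thread_read_request() -> serde_json::Value {
    serde_json::json!({
        "method": "thread/read",
        "id": 14,
        "params": {
            "threadId": "67e55044-10b1-426f-9247-bb680e5fe0c8",
            "includeTurns": true
        }
    })
}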
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
@@ -1105,11 +1348,34 @@ pub enum SkillScope {
pub struct SkillMetadata {
pub name: String,
pub description: String,
#[ts(optional)]
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
/// Legacy short_description from SKILL.md. Prefer SKILL.toml interface.short_description.
pub short_description: Option<String>,
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub interface: Option<SkillInterface>,
pub path: PathBuf,
pub scope: SkillScope,
pub enabled: bool,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct SkillInterface {
#[ts(optional)]
pub display_name: Option<String>,
#[ts(optional)]
pub short_description: Option<String>,
#[ts(optional)]
pub icon_small: Option<PathBuf>,
#[ts(optional)]
pub icon_large: Option<PathBuf>,
#[ts(optional)]
pub brand_color: Option<String>,
#[ts(optional)]
pub default_prompt: Option<String>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
@@ -1129,14 +1395,44 @@ pub struct SkillsListEntry {
pub errors: Vec<SkillErrorInfo>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct SkillsConfigWriteParams {
pub path: PathBuf,
pub enabled: bool,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct SkillsConfigWriteResponse {
pub effective_enabled: bool,
}
impl From<CoreSkillMetadata> for SkillMetadata {
fn from(value: CoreSkillMetadata) -> Self {
Self {
name: value.name,
description: value.description,
short_description: value.short_description,
interface: value.interface.map(SkillInterface::from),
path: value.path,
scope: value.scope.into(),
enabled: true,
}
}
}
impl From<CoreSkillInterface> for SkillInterface {
fn from(value: CoreSkillInterface) -> Self {
Self {
display_name: value.display_name,
short_description: value.short_description,
brand_color: value.brand_color,
default_prompt: value.default_prompt,
icon_small: value.icon_small,
icon_large: value.icon_large,
}
}
}
@@ -1173,8 +1469,11 @@ pub struct Thread {
/// Unix timestamp (in seconds) when the thread was created.
#[ts(type = "number")]
pub created_at: i64,
/// Unix timestamp (in seconds) when the thread was last updated.
#[ts(type = "number")]
pub updated_at: i64,
/// [UNSTABLE] Path to the thread on disk.
pub path: PathBuf,
pub path: Option<PathBuf>,
/// Working directory captured for the thread.
pub cwd: PathBuf,
/// Version of the CLI that created the thread.
@@ -1183,7 +1482,8 @@ pub struct Thread {
pub source: SessionSource,
/// Optional Git metadata captured when the thread was created.
pub git_info: Option<GitInfo>,
/// Only populated on a `thread/resume` response.
/// Only populated on `thread/resume`, `thread/rollback`, `thread/fork`, and `thread/read`
/// (when `includeTurns` is true) responses.
/// For all other responses and notifications returning a Thread,
/// the turns field will be an empty list.
pub turns: Vec<Turn>,
@@ -1211,6 +1511,7 @@ pub struct ThreadTokenUsageUpdatedNotification {
pub struct ThreadTokenUsage {
pub total: TokenUsageBreakdown,
pub last: TokenUsageBreakdown,
// TODO(aibrahim): make this not optional
#[ts(type = "number | null")]
pub model_context_window: Option<i64>,
}
@@ -1258,7 +1559,7 @@ impl From<CoreTokenUsage> for TokenUsageBreakdown {
#[ts(export_to = "v2/")]
pub struct Turn {
pub id: String,
/// Only populated on a `thread/resume` response.
/// Only populated on a `thread/resume` or `thread/fork` response.
/// For all other responses and notifications returning a Turn,
/// the items field will be an empty list.
pub items: Vec<ThreadItem>,
@@ -1319,8 +1620,14 @@ pub struct TurnStartParams {
pub effort: Option<ReasoningEffort>,
/// Override the reasoning summary for this turn and subsequent turns.
pub summary: Option<ReasoningSummary>,
/// Override the personality for this turn and subsequent turns.
pub personality: Option<Personality>,
/// Optional JSON Schema used to constrain the final assistant message for this turn.
pub output_schema: Option<JsonValue>,
/// EXPERIMENTAL - set a pre-set collaboration mode.
/// Takes precedence over model, reasoning_effort, and developer instructions if set.
pub collaboration_mode: Option<CollaborationMode>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
@@ -1396,22 +1703,110 @@ pub struct TurnInterruptParams {
pub struct TurnInterruptResponse {}
// User input types
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct ByteRange {
pub start: usize,
pub end: usize,
}
impl From<CoreByteRange> for ByteRange {
fn from(value: CoreByteRange) -> Self {
Self {
start: value.start,
end: value.end,
}
}
}
impl From<ByteRange> for CoreByteRange {
fn from(value: ByteRange) -> Self {
Self {
start: value.start,
end: value.end,
}
}
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct TextElement {
/// Byte range in the parent `text` buffer that this element occupies.
pub byte_range: ByteRange,
/// Optional human-readable placeholder for the element, displayed in the UI.
placeholder: Option<String>,
}
impl TextElement {
pub fn new(byte_range: ByteRange, placeholder: Option<String>) -> Self {
Self {
byte_range,
placeholder,
}
}
pub fn set_placeholder(&mut self, placeholder: Option<String>) {
self.placeholder = placeholder;
}
pub fn placeholder(&self) -> Option<&str> {
self.placeholder.as_deref()
}
}
impl From<CoreTextElement> for TextElement {
fn from(value: CoreTextElement) -> Self {
Self::new(
value.byte_range.into(),
value._placeholder_for_conversion_only().map(str::to_string),
)
}
}
impl From<TextElement> for CoreTextElement {
fn from(value: TextElement) -> Self {
Self::new(value.byte_range.into(), value.placeholder)
}
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(tag = "type", rename_all = "camelCase")]
#[ts(tag = "type")]
#[ts(export_to = "v2/")]
pub enum UserInput {
Text { text: String },
Image { url: String },
LocalImage { path: PathBuf },
Text {
text: String,
/// UI-defined spans within `text` used to render or persist special elements.
#[serde(default)]
text_elements: Vec<TextElement>,
},
Image {
url: String,
},
LocalImage {
path: PathBuf,
},
Skill {
name: String,
path: PathBuf,
},
}
impl UserInput {
pub fn into_core(self) -> CoreUserInput {
match self {
UserInput::Text { text } => CoreUserInput::Text { text },
UserInput::Text {
text,
text_elements,
} => CoreUserInput::Text {
text,
text_elements: text_elements.into_iter().map(Into::into).collect(),
},
UserInput::Image { url } => CoreUserInput::Image { image_url: url },
UserInput::LocalImage { path } => CoreUserInput::LocalImage { path },
UserInput::Skill { name, path } => CoreUserInput::Skill { name, path },
}
}
}
@@ -1419,9 +1814,16 @@ impl UserInput {
impl From<CoreUserInput> for UserInput {
fn from(value: CoreUserInput) -> Self {
match value {
CoreUserInput::Text { text } => UserInput::Text { text },
CoreUserInput::Text {
text,
text_elements,
} => UserInput::Text {
text,
text_elements: text_elements.into_iter().map(Into::into).collect(),
},
CoreUserInput::Image { image_url } => UserInput::Image { url: image_url },
CoreUserInput::LocalImage { path } => UserInput::LocalImage { path },
CoreUserInput::Skill { name, path } => UserInput::Skill { name, path },
_ => unreachable!("unsupported user input variant"),
}
}
@@ -1493,6 +1895,25 @@ pub enum ThreadItem {
},
#[serde(rename_all = "camelCase")]
#[ts(rename_all = "camelCase")]
CollabAgentToolCall {
/// Unique identifier for this collab tool call.
id: String,
/// Name of the collab tool that was invoked.
tool: CollabAgentTool,
/// Current status of the collab tool call.
status: CollabAgentToolCallStatus,
/// Thread ID of the agent issuing the collab request.
sender_thread_id: String,
    /// Thread IDs of the receiving agents, when applicable. For a spawn operation,
    /// this contains the newly spawned agent's thread ID.
receiver_thread_ids: Vec<String>,
/// Prompt text sent as part of the collab tool call, when available.
prompt: Option<String>,
/// Last known status of the target agents, when available.
agents_states: HashMap<String, CollabAgentState>,
},
#[serde(rename_all = "camelCase")]
#[ts(rename_all = "camelCase")]
WebSearch { id: String, query: String },
#[serde(rename_all = "camelCase")]
#[ts(rename_all = "camelCase")]
@@ -1545,6 +1966,16 @@ pub enum CommandExecutionStatus {
Declined,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub enum CollabAgentTool {
SpawnAgent,
SendInput,
Wait,
CloseAgent,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
@@ -1583,6 +2014,66 @@ pub enum McpToolCallStatus {
Failed,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub enum CollabAgentToolCallStatus {
InProgress,
Completed,
Failed,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub enum CollabAgentStatus {
PendingInit,
Running,
Completed,
Errored,
Shutdown,
NotFound,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct CollabAgentState {
pub status: CollabAgentStatus,
pub message: Option<String>,
}
impl From<CoreAgentStatus> for CollabAgentState {
fn from(value: CoreAgentStatus) -> Self {
match value {
CoreAgentStatus::PendingInit => Self {
status: CollabAgentStatus::PendingInit,
message: None,
},
CoreAgentStatus::Running => Self {
status: CollabAgentStatus::Running,
message: None,
},
CoreAgentStatus::Completed(message) => Self {
status: CollabAgentStatus::Completed,
message,
},
CoreAgentStatus::Errored(message) => Self {
status: CollabAgentStatus::Errored,
message: Some(message),
},
CoreAgentStatus::Shutdown => Self {
status: CollabAgentStatus::Shutdown,
message: None,
},
CoreAgentStatus::NotFound => Self {
status: CollabAgentStatus::NotFound,
message: None,
},
}
}
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
@@ -1840,6 +2331,18 @@ pub struct CommandExecutionRequestApprovalParams {
pub item_id: String,
/// Optional explanatory reason (e.g. request for network access).
pub reason: Option<String>,
/// The command to be executed.
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub command: Option<String>,
/// The command's working directory.
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub cwd: Option<PathBuf>,
/// Best-effort parsed command actions for friendly display.
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub command_actions: Option<Vec<CommandAction>>,
/// Optional proposed execpolicy amendment to allow similar commands without prompting.
pub proposed_execpolicy_amendment: Option<ExecPolicyAmendment>,
}
@@ -1848,7 +2351,7 @@ pub struct CommandExecutionRequestApprovalParams {
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct CommandExecutionRequestApprovalResponse {
pub decision: ApprovalDecision,
pub decision: CommandExecutionApprovalDecision,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
@@ -1868,7 +2371,54 @@ pub struct FileChangeRequestApprovalParams {
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[ts(export_to = "v2/")]
pub struct FileChangeRequestApprovalResponse {
pub decision: ApprovalDecision,
pub decision: FileChangeApprovalDecision,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
/// EXPERIMENTAL. Defines a single selectable option for request_user_input.
pub struct ToolRequestUserInputOption {
pub label: String,
pub description: String,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
/// EXPERIMENTAL. Represents one request_user_input question and its optional options.
pub struct ToolRequestUserInputQuestion {
pub id: String,
pub header: String,
pub question: String,
pub options: Option<Vec<ToolRequestUserInputOption>>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
/// EXPERIMENTAL. Params sent with a request_user_input event.
pub struct ToolRequestUserInputParams {
pub thread_id: String,
pub turn_id: String,
pub item_id: String,
pub questions: Vec<ToolRequestUserInputQuestion>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
/// EXPERIMENTAL. Captures a user's answer to a request_user_input question.
pub struct ToolRequestUserInputAnswer {
pub answers: Vec<String>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
/// EXPERIMENTAL. Response payload mapping question ids to answers.
pub struct ToolRequestUserInputResponse {
pub answers: HashMap<String, ToolRequestUserInputAnswer>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
@@ -1960,6 +2510,42 @@ pub struct DeprecationNoticeNotification {
pub details: Option<String>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct TextPosition {
/// 1-based line number.
pub line: usize,
/// 1-based column number (in Unicode scalar values).
pub column: usize,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct TextRange {
pub start: TextPosition,
pub end: TextPosition,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct ConfigWarningNotification {
/// Concise summary of the warning.
pub summary: String,
/// Optional extra guidance or error details.
pub details: Option<String>,
/// Optional path to the config file that triggered the warning.
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub path: Option<String>,
/// Optional range for the error location inside the config file.
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub range: Option<TextRange>,
}
#[cfg(test)]
mod tests {
use super::*;
@@ -2000,6 +2586,7 @@ mod tests {
content: vec![
CoreUserInput::Text {
text: "hello".to_string(),
text_elements: Vec::new(),
},
CoreUserInput::Image {
image_url: "https://example.com/image.png".to_string(),
@@ -2007,6 +2594,10 @@ mod tests {
CoreUserInput::LocalImage {
path: PathBuf::from("local/image.png"),
},
CoreUserInput::Skill {
name: "skill-creator".to_string(),
path: PathBuf::from("/repo/.codex/skills/skill-creator/SKILL.md"),
},
],
});
@@ -2017,6 +2608,7 @@ mod tests {
content: vec![
UserInput::Text {
text: "hello".to_string(),
text_elements: Vec::new(),
},
UserInput::Image {
url: "https://example.com/image.png".to_string(),
@@ -2024,6 +2616,10 @@ mod tests {
UserInput::LocalImage {
path: PathBuf::from("local/image.png"),
},
UserInput::Skill {
name: "skill-creator".to_string(),
path: PathBuf::from("/repo/.codex/skills/skill-creator/SKILL.md"),
},
],
}
);

View File

@@ -0,0 +1,6 @@
load("//:defs.bzl", "codex_rust_crate")
codex_rust_crate(
name = "codex-app-server-test-client",
crate_name = "codex_app_server_test_client",
)

View File

@@ -13,16 +13,18 @@ use std::time::Duration;
use anyhow::Context;
use anyhow::Result;
use anyhow::bail;
use clap::ArgAction;
use clap::Parser;
use clap::Subcommand;
use codex_app_server_protocol::AddConversationListenerParams;
use codex_app_server_protocol::AddConversationSubscriptionResponse;
use codex_app_server_protocol::ApprovalDecision;
use codex_app_server_protocol::AskForApproval;
use codex_app_server_protocol::ClientInfo;
use codex_app_server_protocol::ClientRequest;
use codex_app_server_protocol::CommandExecutionApprovalDecision;
use codex_app_server_protocol::CommandExecutionRequestApprovalParams;
use codex_app_server_protocol::CommandExecutionRequestApprovalResponse;
use codex_app_server_protocol::FileChangeApprovalDecision;
use codex_app_server_protocol::FileChangeRequestApprovalParams;
use codex_app_server_protocol::FileChangeRequestApprovalResponse;
use codex_app_server_protocol::GetAccountRateLimitsResponse;
@@ -35,6 +37,8 @@ use codex_app_server_protocol::JSONRPCRequest;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::LoginChatGptCompleteNotification;
use codex_app_server_protocol::LoginChatGptResponse;
use codex_app_server_protocol::ModelListParams;
use codex_app_server_protocol::ModelListResponse;
use codex_app_server_protocol::NewConversationParams;
use codex_app_server_protocol::NewConversationResponse;
use codex_app_server_protocol::RequestId;
@@ -49,7 +53,7 @@ use codex_app_server_protocol::TurnStartParams;
use codex_app_server_protocol::TurnStartResponse;
use codex_app_server_protocol::TurnStatus;
use codex_app_server_protocol::UserInput as V2UserInput;
use codex_protocol::ConversationId;
use codex_protocol::ThreadId;
use codex_protocol::protocol::Event;
use codex_protocol::protocol::EventMsg;
use serde::Serialize;
@@ -65,6 +69,19 @@ struct Cli {
#[arg(long, env = "CODEX_BIN", default_value = "codex")]
codex_bin: String,
/// Forwarded to the `codex` CLI as `--config key=value`. Repeatable.
///
/// Example:
/// `--config 'model_providers.mock.base_url="http://localhost:4010/v2"'`
#[arg(
short = 'c',
long = "config",
value_name = "key=value",
action = ArgAction::Append,
global = true
)]
config_overrides: Vec<String>,
#[command(subcommand)]
command: CliCommand,
}
@@ -113,37 +130,54 @@ enum CliCommand {
TestLogin,
/// Fetch the current account rate limits from the Codex app-server.
GetAccountRateLimits,
/// List the available models from the Codex app-server.
#[command(name = "model-list")]
ModelList,
}
fn main() -> Result<()> {
let Cli { codex_bin, command } = Cli::parse();
let Cli {
codex_bin,
config_overrides,
command,
} = Cli::parse();
match command {
CliCommand::SendMessage { user_message } => send_message(codex_bin, user_message),
CliCommand::SendMessageV2 { user_message } => send_message_v2(codex_bin, user_message),
CliCommand::SendMessage { user_message } => {
send_message(&codex_bin, &config_overrides, user_message)
}
CliCommand::SendMessageV2 { user_message } => {
send_message_v2(&codex_bin, &config_overrides, user_message)
}
CliCommand::TriggerCmdApproval { user_message } => {
trigger_cmd_approval(codex_bin, user_message)
trigger_cmd_approval(&codex_bin, &config_overrides, user_message)
}
CliCommand::TriggerPatchApproval { user_message } => {
trigger_patch_approval(codex_bin, user_message)
trigger_patch_approval(&codex_bin, &config_overrides, user_message)
}
CliCommand::NoTriggerCmdApproval => no_trigger_cmd_approval(codex_bin),
CliCommand::NoTriggerCmdApproval => no_trigger_cmd_approval(&codex_bin, &config_overrides),
CliCommand::SendFollowUpV2 {
first_message,
follow_up_message,
} => send_follow_up_v2(codex_bin, first_message, follow_up_message),
CliCommand::TestLogin => test_login(codex_bin),
CliCommand::GetAccountRateLimits => get_account_rate_limits(codex_bin),
} => send_follow_up_v2(
&codex_bin,
&config_overrides,
first_message,
follow_up_message,
),
CliCommand::TestLogin => test_login(&codex_bin, &config_overrides),
CliCommand::GetAccountRateLimits => get_account_rate_limits(&codex_bin, &config_overrides),
CliCommand::ModelList => model_list(&codex_bin, &config_overrides),
}
}
fn send_message(codex_bin: String, user_message: String) -> Result<()> {
let mut client = CodexClient::spawn(codex_bin)?;
fn send_message(codex_bin: &str, config_overrides: &[String], user_message: String) -> Result<()> {
let mut client = CodexClient::spawn(codex_bin, config_overrides)?;
let initialize = client.initialize()?;
println!("< initialize response: {initialize:?}");
let conversation = client.new_conversation()?;
let conversation = client.start_thread()?;
println!("< newConversation response: {conversation:?}");
let subscription = client.add_conversation_listener(&conversation.conversation_id)?;
@@ -154,51 +188,66 @@ fn send_message(codex_bin: String, user_message: String) -> Result<()> {
client.stream_conversation(&conversation.conversation_id)?;
client.remove_conversation_listener(subscription.subscription_id)?;
client.remove_thread_listener(subscription.subscription_id)?;
Ok(())
}
fn send_message_v2(codex_bin: String, user_message: String) -> Result<()> {
send_message_v2_with_policies(codex_bin, user_message, None, None)
fn send_message_v2(
codex_bin: &str,
config_overrides: &[String],
user_message: String,
) -> Result<()> {
send_message_v2_with_policies(codex_bin, config_overrides, user_message, None, None)
}
fn trigger_cmd_approval(codex_bin: String, user_message: Option<String>) -> Result<()> {
fn trigger_cmd_approval(
codex_bin: &str,
config_overrides: &[String],
user_message: Option<String>,
) -> Result<()> {
let default_prompt =
"Run `touch /tmp/should-trigger-approval` so I can confirm the file exists.";
let message = user_message.unwrap_or_else(|| default_prompt.to_string());
send_message_v2_with_policies(
codex_bin,
config_overrides,
message,
Some(AskForApproval::OnRequest),
Some(SandboxPolicy::ReadOnly),
)
}
fn trigger_patch_approval(codex_bin: String, user_message: Option<String>) -> Result<()> {
fn trigger_patch_approval(
codex_bin: &str,
config_overrides: &[String],
user_message: Option<String>,
) -> Result<()> {
let default_prompt =
"Create a file named APPROVAL_DEMO.txt containing a short hello message using apply_patch.";
let message = user_message.unwrap_or_else(|| default_prompt.to_string());
send_message_v2_with_policies(
codex_bin,
config_overrides,
message,
Some(AskForApproval::OnRequest),
Some(SandboxPolicy::ReadOnly),
)
}
fn no_trigger_cmd_approval(codex_bin: String) -> Result<()> {
fn no_trigger_cmd_approval(codex_bin: &str, config_overrides: &[String]) -> Result<()> {
let prompt = "Run `touch should_not_trigger_approval.txt`";
send_message_v2_with_policies(codex_bin, prompt.to_string(), None, None)
send_message_v2_with_policies(codex_bin, config_overrides, prompt.to_string(), None, None)
}
fn send_message_v2_with_policies(
codex_bin: String,
codex_bin: &str,
config_overrides: &[String],
user_message: String,
approval_policy: Option<AskForApproval>,
sandbox_policy: Option<SandboxPolicy>,
) -> Result<()> {
let mut client = CodexClient::spawn(codex_bin)?;
let mut client = CodexClient::spawn(codex_bin, config_overrides)?;
let initialize = client.initialize()?;
println!("< initialize response: {initialize:?}");
@@ -207,7 +256,11 @@ fn send_message_v2_with_policies(
println!("< thread/start response: {thread_response:?}");
let mut turn_params = TurnStartParams {
thread_id: thread_response.thread.id.clone(),
input: vec![V2UserInput::Text { text: user_message }],
input: vec![V2UserInput::Text {
text: user_message,
// Test client sends plain text without UI element ranges.
text_elements: Vec::new(),
}],
..Default::default()
};
turn_params.approval_policy = approval_policy;
@@ -222,11 +275,12 @@ fn send_message_v2_with_policies(
}
fn send_follow_up_v2(
codex_bin: String,
codex_bin: &str,
config_overrides: &[String],
first_message: String,
follow_up_message: String,
) -> Result<()> {
let mut client = CodexClient::spawn(codex_bin)?;
let mut client = CodexClient::spawn(codex_bin, config_overrides)?;
let initialize = client.initialize()?;
println!("< initialize response: {initialize:?}");
@@ -238,6 +292,8 @@ fn send_follow_up_v2(
thread_id: thread_response.thread.id.clone(),
input: vec![V2UserInput::Text {
text: first_message,
// Test client sends plain text without UI element ranges.
text_elements: Vec::new(),
}],
..Default::default()
};
@@ -249,6 +305,8 @@ fn send_follow_up_v2(
thread_id: thread_response.thread.id.clone(),
input: vec![V2UserInput::Text {
text: follow_up_message,
// Test client sends plain text without UI element ranges.
text_elements: Vec::new(),
}],
..Default::default()
};
@@ -259,8 +317,8 @@ fn send_follow_up_v2(
Ok(())
}
fn test_login(codex_bin: String) -> Result<()> {
let mut client = CodexClient::spawn(codex_bin)?;
fn test_login(codex_bin: &str, config_overrides: &[String]) -> Result<()> {
let mut client = CodexClient::spawn(codex_bin, config_overrides)?;
let initialize = client.initialize()?;
println!("< initialize response: {initialize:?}");
@@ -289,8 +347,8 @@ fn test_login(codex_bin: String) -> Result<()> {
}
}
fn get_account_rate_limits(codex_bin: String) -> Result<()> {
let mut client = CodexClient::spawn(codex_bin)?;
fn get_account_rate_limits(codex_bin: &str, config_overrides: &[String]) -> Result<()> {
let mut client = CodexClient::spawn(codex_bin, config_overrides)?;
let initialize = client.initialize()?;
println!("< initialize response: {initialize:?}");
@@ -301,6 +359,18 @@ fn get_account_rate_limits(codex_bin: String) -> Result<()> {
Ok(())
}
fn model_list(codex_bin: &str, config_overrides: &[String]) -> Result<()> {
let mut client = CodexClient::spawn(codex_bin, config_overrides)?;
let initialize = client.initialize()?;
println!("< initialize response: {initialize:?}");
let response = client.model_list(ModelListParams::default())?;
println!("< model/list response: {response:?}");
Ok(())
}
struct CodexClient {
child: Child,
stdin: Option<ChildStdin>,
@@ -309,8 +379,12 @@ struct CodexClient {
}
impl CodexClient {
fn spawn(codex_bin: String) -> Result<Self> {
let mut codex_app_server = Command::new(&codex_bin)
fn spawn(codex_bin: &str, config_overrides: &[String]) -> Result<Self> {
let mut cmd = Command::new(codex_bin);
for override_kv in config_overrides {
cmd.arg("--config").arg(override_kv);
}
let mut codex_app_server = cmd
.arg("app-server")
.stdin(Stdio::piped())
.stdout(Stdio::piped())
@@ -351,7 +425,7 @@ impl CodexClient {
self.send_request(request, request_id, "initialize")
}
fn new_conversation(&mut self) -> Result<NewConversationResponse> {
fn start_thread(&mut self) -> Result<NewConversationResponse> {
let request_id = self.request_id();
let request = ClientRequest::NewConversation {
request_id: request_id.clone(),
@@ -363,7 +437,7 @@ impl CodexClient {
fn add_conversation_listener(
&mut self,
conversation_id: &ConversationId,
conversation_id: &ThreadId,
) -> Result<AddConversationSubscriptionResponse> {
let request_id = self.request_id();
let request = ClientRequest::AddConversationListener {
@@ -377,7 +451,7 @@ impl CodexClient {
self.send_request(request, request_id, "addConversationListener")
}
fn remove_conversation_listener(&mut self, subscription_id: Uuid) -> Result<()> {
fn remove_thread_listener(&mut self, subscription_id: Uuid) -> Result<()> {
let request_id = self.request_id();
let request = ClientRequest::RemoveConversationListener {
request_id: request_id.clone(),
@@ -395,7 +469,7 @@ impl CodexClient {
fn send_user_message(
&mut self,
conversation_id: &ConversationId,
conversation_id: &ThreadId,
message: &str,
) -> Result<SendUserMessageResponse> {
let request_id = self.request_id();
@@ -405,6 +479,8 @@ impl CodexClient {
conversation_id: *conversation_id,
items: vec![InputItem::Text {
text: message.to_string(),
// Test client sends plain text without UI element ranges.
text_elements: Vec::new(),
}],
},
};
@@ -452,7 +528,17 @@ impl CodexClient {
self.send_request(request, request_id, "account/rateLimits/read")
}
fn stream_conversation(&mut self, conversation_id: &ConversationId) -> Result<()> {
fn model_list(&mut self, params: ModelListParams) -> Result<ModelListResponse> {
let request_id = self.request_id();
let request = ClientRequest::ModelList {
request_id: request_id.clone(),
params,
};
self.send_request(request, request_id, "model/list")
}
fn stream_conversation(&mut self, conversation_id: &ThreadId) -> Result<()> {
loop {
let notification = self.next_notification()?;
@@ -469,7 +555,7 @@ impl CodexClient {
print!("{}", event.delta);
std::io::stdout().flush().ok();
}
EventMsg::TaskComplete(event) => {
EventMsg::TurnComplete(event) => {
println!("\n[task complete: {event:?}]");
break;
}
@@ -589,7 +675,7 @@ impl CodexClient {
fn extract_event(
&self,
notification: JSONRPCNotification,
conversation_id: &ConversationId,
conversation_id: &ThreadId,
) -> Result<Option<Event>> {
let params = notification
.params
@@ -603,7 +689,7 @@ impl CodexClient {
let conversation_value = map
.remove("conversationId")
.context("event missing conversationId")?;
let notification_conversation: ConversationId = serde_json::from_value(conversation_value)
let notification_conversation: ThreadId = serde_json::from_value(conversation_value)
.context("conversationId was not a valid UUID")?;
if &notification_conversation != conversation_id {
@@ -756,6 +842,9 @@ impl CodexClient {
turn_id,
item_id,
reason,
command,
cwd,
command_actions,
proposed_execpolicy_amendment,
} = params;
@@ -765,12 +854,23 @@ impl CodexClient {
if let Some(reason) = reason.as_deref() {
println!("< reason: {reason}");
}
if let Some(command) = command.as_deref() {
println!("< command: {command}");
}
if let Some(cwd) = cwd.as_ref() {
println!("< cwd: {}", cwd.display());
}
if let Some(command_actions) = command_actions.as_ref()
&& !command_actions.is_empty()
{
println!("< command actions: {command_actions:?}");
}
if let Some(execpolicy_amendment) = proposed_execpolicy_amendment.as_ref() {
println!("< proposed execpolicy amendment: {execpolicy_amendment:?}");
}
let response = CommandExecutionRequestApprovalResponse {
decision: ApprovalDecision::Accept,
decision: CommandExecutionApprovalDecision::Accept,
};
self.send_server_request_response(request_id, &response)?;
println!("< approved commandExecution request for item {item_id}");
@@ -801,7 +901,7 @@ impl CodexClient {
}
let response = FileChangeRequestApprovalResponse {
decision: ApprovalDecision::Accept,
decision: FileChangeApprovalDecision::Accept,
};
self.send_server_request_response(request_id, &response)?;
println!("< approved fileChange request for item {item_id}");

View File

@@ -0,0 +1,8 @@
load("//:defs.bzl", "codex_rust_crate")
codex_rust_crate(
name = "app-server",
crate_name = "codex_app_server",
integration_deps_extra = ["//codex-rs/app-server/tests/common:common"],
test_tags = ["no-sandbox"],
)

View File

@@ -22,6 +22,7 @@ codex-common = { workspace = true, features = ["cli"] }
codex-core = { workspace = true }
codex-backend-client = { workspace = true }
codex-file-search = { workspace = true }
codex-chatgpt = { workspace = true }
codex-login = { workspace = true }
codex-protocol = { workspace = true }
codex-app-server-protocol = { workspace = true }
@@ -34,6 +35,7 @@ serde = { workspace = true, features = ["derive"] }
serde_json = { workspace = true }
mcp-types = { workspace = true }
tempfile = { workspace = true }
time = { workspace = true }
toml = { workspace = true }
tokio = { workspace = true, features = [
"io-std",
@@ -48,11 +50,20 @@ uuid = { workspace = true, features = ["serde", "v7"] }
[dev-dependencies]
app_test_support = { workspace = true }
axum = { workspace = true, default-features = false, features = [
"http1",
"json",
"tokio",
] }
base64 = { workspace = true }
core_test_support = { workspace = true }
mcp-types = { workspace = true }
os_info = { workspace = true }
pretty_assertions = { workspace = true }
rmcp = { workspace = true, default-features = false, features = [
"server",
"transport-streamable-http-server",
] }
serial_test = { workspace = true }
wiremock = { workspace = true }
shlex = { workspace = true }

View File

@@ -11,6 +11,8 @@
- [Initialization](#initialization)
- [API Overview](#api-overview)
- [Events](#events)
- [Approvals](#approvals)
- [Skills](#skills)
- [Auth endpoints](#auth-endpoints)
## Protocol
@@ -39,7 +41,7 @@ Use the thread APIs to create, list, or archive conversations. Drive a conversat
## Lifecycle Overview
- Initialize once: Immediately after launching the codex app-server process, send an `initialize` request with your client metadata, then emit an `initialized` notification. Any other request before this handshake gets rejected.
- Start (or resume) a thread: Call `thread/start` to open a fresh conversation. The response returns the thread object and you'll also get a `thread/started` notification. If you're continuing an existing conversation, call `thread/resume` with its ID instead.
- Start (or resume) a thread: Call `thread/start` to open a fresh conversation. The response returns the thread object and you'll also get a `thread/started` notification. If you're continuing an existing conversation, call `thread/resume` with its ID instead. If you want to branch from an existing conversation, call `thread/fork` to create a new thread id with copied history.
- Begin a turn: To send user input, call `turn/start` with the target `threadId` and the user's input. Optional fields let you override model, cwd, sandbox policy, etc. This immediately returns the new turn object and triggers a `turn/started` notification.
- Stream events: After `turn/start`, keep reading JSON-RPC notifications on stdout. You'll see `item/started`, `item/completed`, deltas like `item/agentMessage/delta`, tool progress, etc. These represent streaming model output plus any side effects (commands, tool calls, reasoning notes).
- Finish the turn: When the model is done (or the turn is interrupted via making the `turn/interrupt` call), the server sends `turn/completed` with the final turn state and token usage.
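Taken together, a minimal session is just a handful of JSON-RPC messages. The sketch below elides the streamed `thread/started`, `turn/started`, and `item/*` notifications; the request ids, thread/turn ids, and user text are illustrative:

```json
{ "method": "initialize", "id": 0, "params": { "clientInfo": { "name": "my-client", "title": "My Client", "version": "0.1.0" } } }
{ "method": "initialized" }
{ "method": "thread/start", "id": 1, "params": {} }
{ "id": 1, "result": { "thread": { "id": "thr_123", ... } } }
{ "method": "turn/start", "id": 2, "params": { "threadId": "thr_123", "input": [ { "type": "text", "text": "Hello" } ] } }
{ "id": 2, "result": { "turn": { "id": "turn_1", "status": "inProgress", "items": [], "error": null } } }
{ "method": "turn/completed", "params": { ... } }
```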
@@ -50,6 +52,10 @@ Clients must send a single `initialize` request before invoking any other method
Applications building on top of `codex app-server` should identify themselves via the `clientInfo` parameter.
**Important**: `clientInfo.name` is used to identify the client for the OpenAI Compliance Logs Platform. If
you are developing a new Codex integration that is intended for enterprise use, please contact us to get it
added to a known clients list. For more context: https://chatgpt.com/admin/api-reference#tag/Logs:-Codex
Example (from OpenAI's official VSCode extension):
```json
@@ -58,7 +64,7 @@ Example (from OpenAI's official VSCode extension):
"id": 0,
"params": {
"clientInfo": {
"name": "codex-vscode",
"name": "codex_vscode",
"title": "Codex VS Code Extension",
"version": "0.1.0"
}
@@ -70,21 +76,31 @@ Example (from OpenAI's official VSCode extension):
- `thread/start` — create a new thread; emits `thread/started` and auto-subscribes you to turn/item events for that thread.
- `thread/resume` — reopen an existing thread by id so subsequent `turn/start` calls append to it.
- `thread/fork` — fork an existing thread into a new thread id by copying the stored history; emits `thread/started` and auto-subscribes you to turn/item events for the new thread.
- `thread/list` — page through stored rollouts; supports cursor-based pagination and optional `modelProviders` filtering.
- `thread/loaded/list` — list the thread ids currently loaded in memory.
- `thread/read` — read a stored thread by id without resuming it; optionally include turns via `includeTurns`.
- `thread/archive` — move a thread's rollout file into the archived directory; returns `{}` on success.
- `thread/rollback` — drop the last N turns from the agent's in-memory context and persist a rollback marker in the rollout so future resumes see the pruned history; returns the updated `thread` (with `turns` populated) on success.
- `turn/start` — add user input to a thread and begin Codex generation; responds with the initial `turn` object and streams `turn/started`, `item/*`, and `turn/completed` notifications.
- `turn/interrupt` — request cancellation of an in-flight turn by `(thread_id, turn_id)`; success is an empty `{}` response and the turn finishes with `status: "interrupted"`.
- `review/start` — kick off Codex's automated reviewer for a thread; responds like `turn/start` and emits `item/started`/`item/completed` notifications with `enteredReviewMode` and `exitedReviewMode` items, plus a final assistant `agentMessage` containing the review.
- `command/exec` — run a single command under the server sandbox without starting a thread/turn (handy for utilities and validation).
- `model/list` — list available models (with reasoning effort options).
- `collaborationMode/list` — list available collaboration mode presets (experimental, no pagination).
- `skills/list` — list skills for one or more `cwd` values (optional `forceReload`).
- `app/list` — list available apps.
- `skills/config/write` — write user-level skill config by path.
- `mcpServer/oauth/login` — start an OAuth login for a configured MCP server; returns an `authorization_url` and later emits `mcpServer/oauthLogin/completed` once the browser flow finishes.
- `tool/requestUserInput` — prompt the user with 1–3 short questions for a tool call and return their answers (experimental; see the example just after this list).
- `config/mcpServer/reload` — reload MCP server config from disk and queue a refresh for loaded threads (applied on each thread's next active turn); returns `{}`. Use this after editing `config.toml` without restarting the server.
- `mcpServerStatus/list` — enumerate configured MCP servers with their tools, resources, resource templates, and auth status; supports cursor+limit pagination.
- `feedback/upload` — submit a feedback report (classification + optional reason/logs and conversation_id); returns the tracking thread id.
- `command/exec` — run a single command under the server sandbox without starting a thread/turn (handy for utilities and validation).
- `config/read` — fetch the effective config on disk after resolving config layering.
- `config/value/write` — write a single config key/value to the user's config.toml on disk.
- `config/batchWrite` — apply multiple config edits atomically to the user's config.toml on disk.
- `configRequirements/read` — fetch the loaded requirements allow-lists from `requirements.toml` and/or MDM (or `null` if none are configured).
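Unlike most methods above, `tool/requestUserInput` arrives as a server-initiated request rather than a client call: the server asks the client a few short questions and waits for an answers map keyed by question id. A sketch based on the experimental schema in this change (the ids, wording, and option labels are illustrative):

```json
{ "method": "tool/requestUserInput", "id": 30, "params": {
  "threadId": "thr_123",
  "turnId": "turn_457",
  "itemId": "item_9",
  "questions": [
    {
      "id": "q1",
      "header": "Deploy target",
      "question": "Which environment should I deploy to?",
      "options": [
        { "label": "staging", "description": "Deploy to the staging cluster" },
        { "label": "production", "description": "Deploy to production" }
      ]
    }
  ]
} }
{ "id": 30, "result": { "answers": { "q1": { "answers": ["staging"] } } } }
```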
### Example: Start or resume a thread
@@ -98,6 +114,7 @@ Start a fresh thread when you need a new Codex conversation.
"cwd": "/Users/me/project",
"approvalPolicy": "never",
"sandbox": "workspaceWrite",
"personality": "friendly"
} }
{ "id": 10, "result": {
"thread": {
@@ -110,20 +127,33 @@ Start a fresh thread when you need a new Codex conversation.
{ "method": "thread/started", "params": { "thread": { } } }
```
To continue a stored session, call `thread/resume` with the `thread.id` you previously recorded. The response shape matches `thread/start`, and no additional notifications are emitted:
To continue a stored session, call `thread/resume` with the `thread.id` you previously recorded. The response shape matches `thread/start`, and no additional notifications are emitted. You can also pass the same configuration overrides supported by `thread/start`, such as `personality`:
```json
{ "method": "thread/resume", "id": 11, "params": { "threadId": "thr_123" } }
{ "method": "thread/resume", "id": 11, "params": {
"threadId": "thr_123",
"personality": "friendly"
} }
{ "id": 11, "result": { "thread": { "id": "thr_123", } } }
```
To branch from a stored session, call `thread/fork` with the `thread.id`. This creates a new thread id and emits a `thread/started` notification for it:
```json
{ "method": "thread/fork", "id": 12, "params": { "threadId": "thr_123" } }
{ "id": 12, "result": { "thread": { "id": "thr_456", } } }
{ "method": "thread/started", "params": { "thread": { } } }
```
### Example: List threads (with pagination & filters)
`thread/list` lets you render a history UI. Pass any combination of:
`thread/list` lets you render a history UI. Results are sorted by `createdAt` descending (newest first) by default. Pass any combination of:
- `cursor` — opaque string from a prior response; omit for the first page.
- `limit` — server defaults to a reasonable page size if unset.
- `sortKey``created_at` (default) or `updated_at`.
- `modelProviders` — restrict results to specific providers; unset, null, or an empty array will include all providers.
- `archived` — when `true`, list archived threads only. When `false` or `null`, list non-archived threads (default).
Example:
@@ -131,11 +161,12 @@ Example:
{ "method": "thread/list", "id": 20, "params": {
"cursor": null,
"limit": 25,
"sortKey": "created_at"
} }
{ "id": 20, "result": {
"data": [
{ "id": "thr_a", "preview": "Create a TUI", "modelProvider": "openai", "createdAt": 1730831111 },
{ "id": "thr_b", "preview": "Fix tests", "modelProvider": "openai", "createdAt": 1730750000 }
{ "id": "thr_a", "preview": "Create a TUI", "modelProvider": "openai", "createdAt": 1730831111, "updatedAt": 1730831111 },
{ "id": "thr_b", "preview": "Fix tests", "modelProvider": "openai", "createdAt": 1730750000, "updatedAt": 1730750000 }
],
"nextCursor": "opaque-token-or-null"
} }
@@ -143,6 +174,31 @@ Example:
When `nextCursor` is `null`, you've reached the final page.
### Example: List loaded threads
`thread/loaded/list` returns thread ids currently loaded in memory. This is useful when you want to check which sessions are active without scanning rollouts on disk.
```json
{ "method": "thread/loaded/list", "id": 21 }
{ "id": 21, "result": {
"data": ["thr_123", "thr_456"]
} }
```
### Example: Read a thread
Use `thread/read` to fetch a stored thread by id without resuming it. Pass `includeTurns` when you want the rollout history loaded into `thread.turns`.
```json
{ "method": "thread/read", "id": 22, "params": { "threadId": "thr_123" } }
{ "id": 22, "result": { "thread": { "id": "thr_123", "turns": [] } } }
```
```json
{ "method": "thread/read", "id": 23, "params": { "threadId": "thr_123", "includeTurns": true } }
{ "id": 23, "result": { "thread": { "id": "thr_123", "turns": [ ... ] } } }
```
### Example: Archive a thread
Use `thread/archive` to move the persisted rollout (stored as a JSONL file on disk) into the archived sessions directory.
@@ -152,7 +208,7 @@ Use `thread/archive` to move the persisted rollout (stored as a JSONL file on di
{ "id": 21, "result": {} }
```
An archived thread will not appear in future calls to `thread/list`.
An archived thread will not appear in `thread/list` unless `archived` is set to `true`.
### Example: Start a turn (send user input)
@@ -179,6 +235,7 @@ You can optionally specify config overrides on the new turn. If specified, these
"model": "gpt-5.1-codex",
"effort": "medium",
"summary": "concise",
"personality": "friendly",
// Optional JSON Schema to constrain the final assistant message for this turn.
"outputSchema": {
"type": "object",
@@ -195,6 +252,26 @@ You can optionally specify config overrides on the new turn. If specified, these
} } }
```
### Example: Start a turn (invoke a skill)
Invoke a skill explicitly by including `$<skill-name>` in the text input and adding a `skill` input item alongside it.
```json
{ "method": "turn/start", "id": 33, "params": {
"threadId": "thr_123",
"input": [
{ "type": "text", "text": "$skill-creator Add a new skill for triaging flaky CI and include step-by-step usage." },
{ "type": "skill", "name": "skill-creator", "path": "/Users/me/.codex/skills/skill-creator/SKILL.md" }
]
} }
{ "id": 33, "result": { "turn": {
"id": "turn_457",
"status": "inProgress",
"items": [],
"error": null
} } }
```
### Example: Interrupt an active turn
You can cancel a running Turn with `turn/interrupt`.
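A minimal exchange; the ids are illustrative, the response body is empty, and the turn subsequently finishes with `status: "interrupted"`:

```json
{ "method": "turn/interrupt", "id": 34, "params": { "threadId": "thr_123", "turnId": "turn_457" } }
{ "id": 34, "result": {} }
```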
@@ -325,6 +402,7 @@ Today both notifications carry an empty `items` array even when item events were
- `commandExecution``{id, command, cwd, status, commandActions, aggregatedOutput?, exitCode?, durationMs?}` for sandboxed commands; `status` is `inProgress`, `completed`, `failed`, or `declined`.
- `fileChange``{id, changes, status}` describing proposed edits; `changes` list `{path, kind, diff}` and `status` is `inProgress`, `completed`, `failed`, or `declined`.
- `mcpToolCall``{id, server, tool, status, arguments, result?, error?}` describing MCP calls; `status` is `inProgress`, `completed`, or `failed`.
- `collabToolCall` — `{id, tool, status, senderThreadId, receiverThreadIds, prompt?, agentsStates}` describing collab tool calls (`spawn_agent`, `send_input`, `wait`, `close_agent`); `status` is `inProgress`, `completed`, or `failed` (see the sketch after this list).
- `webSearch``{id, query}` for a web search request issued by the agent.
- `imageView``{id, path}` emitted when the agent invokes the image viewer tool.
- `enteredReviewMode``{id, review}` sent when the reviewer starts; `review` is a short user-facing label such as `"current changes"` or the requested target description.
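For instance, a completed `wait` call is reported like the sketch below. The `type` tag follows the item name used in this list, and the tool/status spellings assume the camelCase serialization of the v2 structs; the ids and agent states are illustrative:

```json
{ "method": "item/completed", "params": {
  "threadId": "thr_root",
  "turnId": "turn_9",
  "item": {
    "type": "collabToolCall",
    "id": "call_1",
    "tool": "wait",
    "status": "completed",
    "senderThreadId": "thr_root",
    "receiverThreadIds": ["thr_child"],
    "prompt": null,
    "agentsStates": { "thr_child": { "status": "completed", "message": "Finished the task." } }
  }
} }
```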
@@ -389,7 +467,7 @@ Certain actions (shell commands or modifying files) may require explicit user ap
Order of messages:
1. `item/started` — shows the pending `commandExecution` item with `command`, `cwd`, and other fields so you can render the proposed action.
2. `item/commandExecution/requestApproval` (request) — carries the same `itemId`, `threadId`, `turnId`, optionally `reason` or `risk`, plus `parsedCmd` for friendly display.
2. `item/commandExecution/requestApproval` (request) — carries the same `itemId`, `threadId`, `turnId`, optionally `reason`, plus `command`, `cwd`, and `commandActions` for friendly display.
3. Client response — `{ "decision": "accept", "acceptSettings": { "forSession": false } }` or `{ "decision": "decline" }`.
4. `item/completed` — final `commandExecution` item with `status: "completed" | "failed" | "declined"` and execution output. Render this as the authoritative result.
@@ -404,6 +482,82 @@ Order of messages:
UI guidance for IDEs: surface an approval dialog as soon as the request arrives. The turn will proceed after the server receives a response to the approval request. The terminal `item/completed` notification will be sent with the appropriate status.
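Concretely, a command approval round-trip looks like the sketch below; the ids, reason, and command are illustrative, while the field names follow the v2 approval params and the decision shape shown above:

```json
{ "method": "item/commandExecution/requestApproval", "id": 50, "params": {
  "threadId": "thr_123",
  "turnId": "turn_457",
  "itemId": "item_3",
  "reason": "requires network access",
  "command": "npm install",
  "cwd": "/Users/me/project",
  "commandActions": []
} }
{ "id": 50, "result": { "decision": "accept", "acceptSettings": { "forSession": false } } }
```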
## Skills
Invoke a skill by including `$<skill-name>` in the text input. Add a `skill` input item (recommended) so the backend injects full skill instructions instead of relying on the model to resolve the name.
```json
{
"method": "turn/start",
"id": 101,
"params": {
"threadId": "thread-1",
"input": [
{
"type": "text",
"text": "$skill-creator Add a new skill for triaging flaky CI."
},
{
"type": "skill",
"name": "skill-creator",
"path": "/Users/me/.codex/skills/skill-creator/SKILL.md"
}
]
}
}
```
If you omit the `skill` item, the model will still parse the `$<skill-name>` marker and try to locate the skill, which can add latency.
Example:
```
$skill-creator Add a new skill for triaging flaky CI and include step-by-step usage.
```
Use `skills/list` to fetch the available skills (optionally scoped by `cwds`, with `forceReload`).
```json
{ "method": "skills/list", "id": 25, "params": {
"cwds": ["/Users/me/project"],
"forceReload": false
} }
{ "id": 25, "result": {
"data": [{
"cwd": "/Users/me/project",
"skills": [
{
"name": "skill-creator",
"description": "Create or update a Codex skill",
"enabled": true,
"interface": {
"displayName": "Skill Creator",
"shortDescription": "Create or update a Codex skill",
"iconSmall": "icon.svg",
"iconLarge": "icon-large.svg",
"brandColor": "#111111",
"defaultPrompt": "Add a new skill for triaging flaky CI."
}
}
],
"errors": []
}]
} }
```
To enable or disable a skill by path:
```json
{
"method": "skills/config/write",
"id": 26,
"params": {
"path": "/Users/me/.codex/skills/skill-creator/SKILL.md",
"enabled": false
}
}
```
## Auth endpoints
The JSON-RPC auth/account surface exposes request/response methods plus server-initiated notifications (no `id`). Use these to determine auth state, start or cancel logins, logout, and inspect ChatGPT rate limits.
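For example, reading the current rate limits; the method name matches the `account/rateLimits/read` request issued by the test client, and the result shape is elided here:

```json
{ "method": "account/rateLimits/read", "id": 60 }
{ "id": 60, "result": { ... } }
```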

View File

@@ -1,15 +1,24 @@
use crate::codex_message_processor::ApiVersion;
use crate::codex_message_processor::PendingInterrupts;
use crate::codex_message_processor::PendingRollbacks;
use crate::codex_message_processor::TurnSummary;
use crate::codex_message_processor::TurnSummaryStore;
use crate::codex_message_processor::read_event_msgs_from_rollout;
use crate::codex_message_processor::read_summary_from_rollout;
use crate::codex_message_processor::summary_to_thread;
use crate::error_code::INTERNAL_ERROR_CODE;
use crate::error_code::INVALID_REQUEST_ERROR_CODE;
use crate::outgoing_message::OutgoingMessageSender;
use codex_app_server_protocol::AccountRateLimitsUpdatedNotification;
use codex_app_server_protocol::AgentMessageDeltaNotification;
use codex_app_server_protocol::ApplyPatchApprovalParams;
use codex_app_server_protocol::ApplyPatchApprovalResponse;
use codex_app_server_protocol::ApprovalDecision;
use codex_app_server_protocol::CodexErrorInfo as V2CodexErrorInfo;
use codex_app_server_protocol::CollabAgentState as V2CollabAgentStatus;
use codex_app_server_protocol::CollabAgentTool;
use codex_app_server_protocol::CollabAgentToolCallStatus as V2CollabToolCallStatus;
use codex_app_server_protocol::CommandAction as V2ParsedCommand;
use codex_app_server_protocol::CommandExecutionApprovalDecision;
use codex_app_server_protocol::CommandExecutionOutputDeltaNotification;
use codex_app_server_protocol::CommandExecutionRequestApprovalParams;
use codex_app_server_protocol::CommandExecutionRequestApprovalResponse;
@@ -20,6 +29,7 @@ use codex_app_server_protocol::ErrorNotification;
use codex_app_server_protocol::ExecCommandApprovalParams;
use codex_app_server_protocol::ExecCommandApprovalResponse;
use codex_app_server_protocol::ExecPolicyAmendment as V2ExecPolicyAmendment;
use codex_app_server_protocol::FileChangeApprovalDecision;
use codex_app_server_protocol::FileChangeOutputDeltaNotification;
use codex_app_server_protocol::FileChangeRequestApprovalParams;
use codex_app_server_protocol::FileChangeRequestApprovalResponse;
@@ -27,6 +37,7 @@ use codex_app_server_protocol::FileUpdateChange;
use codex_app_server_protocol::InterruptConversationResponse;
use codex_app_server_protocol::ItemCompletedNotification;
use codex_app_server_protocol::ItemStartedNotification;
use codex_app_server_protocol::JSONRPCErrorError;
use codex_app_server_protocol::McpToolCallError;
use codex_app_server_protocol::McpToolCallResult;
use codex_app_server_protocol::McpToolCallStatus;
@@ -40,8 +51,13 @@ use codex_app_server_protocol::ServerNotification;
use codex_app_server_protocol::ServerRequestPayload;
use codex_app_server_protocol::TerminalInteractionNotification;
use codex_app_server_protocol::ThreadItem;
use codex_app_server_protocol::ThreadRollbackResponse;
use codex_app_server_protocol::ThreadTokenUsage;
use codex_app_server_protocol::ThreadTokenUsageUpdatedNotification;
use codex_app_server_protocol::ToolRequestUserInputOption;
use codex_app_server_protocol::ToolRequestUserInputParams;
use codex_app_server_protocol::ToolRequestUserInputQuestion;
use codex_app_server_protocol::ToolRequestUserInputResponse;
use codex_app_server_protocol::Turn;
use codex_app_server_protocol::TurnCompletedNotification;
use codex_app_server_protocol::TurnDiffUpdatedNotification;
@@ -50,9 +66,11 @@ use codex_app_server_protocol::TurnInterruptResponse;
use codex_app_server_protocol::TurnPlanStep;
use codex_app_server_protocol::TurnPlanUpdatedNotification;
use codex_app_server_protocol::TurnStatus;
use codex_core::CodexConversation;
use codex_app_server_protocol::build_turns_from_event_msgs;
use codex_core::CodexThread;
use codex_core::parse_command::shlex_join;
use codex_core::protocol::ApplyPatchApprovalRequestEvent;
use codex_core::protocol::CodexErrorInfo as CoreCodexErrorInfo;
use codex_core::protocol::Event;
use codex_core::protocol::EventMsg;
use codex_core::protocol::ExecApprovalRequestEvent;
@@ -66,9 +84,11 @@ use codex_core::protocol::TokenCountEvent;
use codex_core::protocol::TurnDiffEvent;
use codex_core::review_format::format_review_findings_block;
use codex_core::review_prompts;
use codex_protocol::ConversationId;
use codex_protocol::ThreadId;
use codex_protocol::plan_tool::UpdatePlanArgs;
use codex_protocol::protocol::ReviewOutputEvent;
use codex_protocol::request_user_input::RequestUserInputAnswer as CoreRequestUserInputAnswer;
use codex_protocol::request_user_input::RequestUserInputResponse as CoreRequestUserInputResponse;
use std::collections::HashMap;
use std::convert::TryFrom;
use std::path::PathBuf;
@@ -78,21 +98,24 @@ use tracing::error;
type JsonValue = serde_json::Value;
#[allow(clippy::too_many_arguments)]
pub(crate) async fn apply_bespoke_event_handling(
event: Event,
conversation_id: ConversationId,
conversation: Arc<CodexConversation>,
conversation_id: ThreadId,
conversation: Arc<CodexThread>,
outgoing: Arc<OutgoingMessageSender>,
pending_interrupts: PendingInterrupts,
pending_rollbacks: PendingRollbacks,
turn_summary_store: TurnSummaryStore,
api_version: ApiVersion,
fallback_model_provider: String,
) {
let Event {
id: event_turn_id,
msg,
} = event;
match msg {
EventMsg::TaskComplete(_ev) => {
EventMsg::TurnComplete(_ev) => {
handle_turn_complete(
conversation_id,
event_turn_id,
@@ -218,6 +241,9 @@ pub(crate) async fn apply_bespoke_event_handling(
// and emit the corresponding EventMsg, we repurpose the call_id as the item_id.
item_id: item_id.clone(),
reason,
command: Some(command_string.clone()),
cwd: Some(cwd.clone()),
command_actions: Some(command_actions.clone()),
proposed_execpolicy_amendment: proposed_execpolicy_amendment_v2,
};
let rx = outgoing
@@ -241,6 +267,57 @@ pub(crate) async fn apply_bespoke_event_handling(
});
}
},
EventMsg::RequestUserInput(request) => {
if matches!(api_version, ApiVersion::V2) {
let questions = request
.questions
.into_iter()
.map(|question| ToolRequestUserInputQuestion {
id: question.id,
header: question.header,
question: question.question,
options: question.options.map(|options| {
options
.into_iter()
.map(|option| ToolRequestUserInputOption {
label: option.label,
description: option.description,
})
.collect()
}),
})
.collect();
let params = ToolRequestUserInputParams {
thread_id: conversation_id.to_string(),
turn_id: request.turn_id,
item_id: request.call_id,
questions,
};
let rx = outgoing
.send_request(ServerRequestPayload::ToolRequestUserInput(params))
.await;
tokio::spawn(async move {
on_request_user_input_response(event_turn_id, rx, conversation).await;
});
} else {
error!(
"request_user_input is only supported on api v2 (call_id: {})",
request.call_id
);
let empty = CoreRequestUserInputResponse {
answers: HashMap::new(),
};
if let Err(err) = conversation
.submit(Op::UserInputAnswer {
id: event_turn_id,
response: empty,
})
.await
{
error!("failed to submit UserInputAnswer: {err}");
}
}
}
// TODO(celia): properly construct McpToolCall TurnItem in core.
EventMsg::McpToolCallBegin(begin_event) => {
let notification = construct_mcp_tool_call_notification(
@@ -264,6 +341,218 @@ pub(crate) async fn apply_bespoke_event_handling(
.send_server_notification(ServerNotification::ItemCompleted(notification))
.await;
}
EventMsg::CollabAgentSpawnBegin(begin_event) => {
let item = ThreadItem::CollabAgentToolCall {
id: begin_event.call_id,
tool: CollabAgentTool::SpawnAgent,
status: V2CollabToolCallStatus::InProgress,
sender_thread_id: begin_event.sender_thread_id.to_string(),
receiver_thread_ids: Vec::new(),
prompt: Some(begin_event.prompt),
agents_states: HashMap::new(),
};
let notification = ItemStartedNotification {
thread_id: conversation_id.to_string(),
turn_id: event_turn_id.clone(),
item,
};
outgoing
.send_server_notification(ServerNotification::ItemStarted(notification))
.await;
}
EventMsg::CollabAgentSpawnEnd(end_event) => {
let has_receiver = end_event.new_thread_id.is_some();
let status = match &end_event.status {
codex_protocol::protocol::AgentStatus::Errored(_)
| codex_protocol::protocol::AgentStatus::NotFound => V2CollabToolCallStatus::Failed,
_ if has_receiver => V2CollabToolCallStatus::Completed,
_ => V2CollabToolCallStatus::Failed,
};
let (receiver_thread_ids, agents_states) = match end_event.new_thread_id {
Some(id) => {
let receiver_id = id.to_string();
let received_status = V2CollabAgentStatus::from(end_event.status.clone());
(
vec![receiver_id.clone()],
[(receiver_id, received_status)].into_iter().collect(),
)
}
None => (Vec::new(), HashMap::new()),
};
let item = ThreadItem::CollabAgentToolCall {
id: end_event.call_id,
tool: CollabAgentTool::SpawnAgent,
status,
sender_thread_id: end_event.sender_thread_id.to_string(),
receiver_thread_ids,
prompt: Some(end_event.prompt),
agents_states,
};
let notification = ItemCompletedNotification {
thread_id: conversation_id.to_string(),
turn_id: event_turn_id.clone(),
item,
};
outgoing
.send_server_notification(ServerNotification::ItemCompleted(notification))
.await;
}
EventMsg::CollabAgentInteractionBegin(begin_event) => {
let receiver_thread_ids = vec![begin_event.receiver_thread_id.to_string()];
let item = ThreadItem::CollabAgentToolCall {
id: begin_event.call_id,
tool: CollabAgentTool::SendInput,
status: V2CollabToolCallStatus::InProgress,
sender_thread_id: begin_event.sender_thread_id.to_string(),
receiver_thread_ids,
prompt: Some(begin_event.prompt),
agents_states: HashMap::new(),
};
let notification = ItemStartedNotification {
thread_id: conversation_id.to_string(),
turn_id: event_turn_id.clone(),
item,
};
outgoing
.send_server_notification(ServerNotification::ItemStarted(notification))
.await;
}
EventMsg::CollabAgentInteractionEnd(end_event) => {
let status = match &end_event.status {
codex_protocol::protocol::AgentStatus::Errored(_)
| codex_protocol::protocol::AgentStatus::NotFound => V2CollabToolCallStatus::Failed,
_ => V2CollabToolCallStatus::Completed,
};
let receiver_id = end_event.receiver_thread_id.to_string();
let received_status = V2CollabAgentStatus::from(end_event.status);
let item = ThreadItem::CollabAgentToolCall {
id: end_event.call_id,
tool: CollabAgentTool::SendInput,
status,
sender_thread_id: end_event.sender_thread_id.to_string(),
receiver_thread_ids: vec![receiver_id.clone()],
prompt: Some(end_event.prompt),
agents_states: [(receiver_id, received_status)].into_iter().collect(),
};
let notification = ItemCompletedNotification {
thread_id: conversation_id.to_string(),
turn_id: event_turn_id.clone(),
item,
};
outgoing
.send_server_notification(ServerNotification::ItemCompleted(notification))
.await;
}
EventMsg::CollabWaitingBegin(begin_event) => {
let receiver_thread_ids = begin_event
.receiver_thread_ids
.iter()
.map(ToString::to_string)
.collect();
let item = ThreadItem::CollabAgentToolCall {
id: begin_event.call_id,
tool: CollabAgentTool::Wait,
status: V2CollabToolCallStatus::InProgress,
sender_thread_id: begin_event.sender_thread_id.to_string(),
receiver_thread_ids,
prompt: None,
agents_states: HashMap::new(),
};
let notification = ItemStartedNotification {
thread_id: conversation_id.to_string(),
turn_id: event_turn_id.clone(),
item,
};
outgoing
.send_server_notification(ServerNotification::ItemStarted(notification))
.await;
}
EventMsg::CollabWaitingEnd(end_event) => {
let status = if end_event.statuses.values().any(|status| {
matches!(
status,
codex_protocol::protocol::AgentStatus::Errored(_)
| codex_protocol::protocol::AgentStatus::NotFound
)
}) {
V2CollabToolCallStatus::Failed
} else {
V2CollabToolCallStatus::Completed
};
let receiver_thread_ids = end_event.statuses.keys().map(ToString::to_string).collect();
let agents_states = end_event
.statuses
.iter()
.map(|(id, status)| (id.to_string(), V2CollabAgentStatus::from(status.clone())))
.collect();
let item = ThreadItem::CollabAgentToolCall {
id: end_event.call_id,
tool: CollabAgentTool::Wait,
status,
sender_thread_id: end_event.sender_thread_id.to_string(),
receiver_thread_ids,
prompt: None,
agents_states,
};
let notification = ItemCompletedNotification {
thread_id: conversation_id.to_string(),
turn_id: event_turn_id.clone(),
item,
};
outgoing
.send_server_notification(ServerNotification::ItemCompleted(notification))
.await;
}
EventMsg::CollabCloseBegin(begin_event) => {
let item = ThreadItem::CollabAgentToolCall {
id: begin_event.call_id,
tool: CollabAgentTool::CloseAgent,
status: V2CollabToolCallStatus::InProgress,
sender_thread_id: begin_event.sender_thread_id.to_string(),
receiver_thread_ids: vec![begin_event.receiver_thread_id.to_string()],
prompt: None,
agents_states: HashMap::new(),
};
let notification = ItemStartedNotification {
thread_id: conversation_id.to_string(),
turn_id: event_turn_id.clone(),
item,
};
outgoing
.send_server_notification(ServerNotification::ItemStarted(notification))
.await;
}
EventMsg::CollabCloseEnd(end_event) => {
let status = match &end_event.status {
codex_protocol::protocol::AgentStatus::Errored(_)
| codex_protocol::protocol::AgentStatus::NotFound => V2CollabToolCallStatus::Failed,
_ => V2CollabToolCallStatus::Completed,
};
let receiver_id = end_event.receiver_thread_id.to_string();
let agents_states = [(
receiver_id.clone(),
V2CollabAgentStatus::from(end_event.status),
)]
.into_iter()
.collect();
let item = ThreadItem::CollabAgentToolCall {
id: end_event.call_id,
tool: CollabAgentTool::CloseAgent,
status,
sender_thread_id: end_event.sender_thread_id.to_string(),
receiver_thread_ids: vec![receiver_id],
prompt: None,
agents_states,
};
let notification = ItemCompletedNotification {
thread_id: conversation_id.to_string(),
turn_id: event_turn_id.clone(),
item,
};
outgoing
.send_server_notification(ServerNotification::ItemCompleted(notification))
.await;
}
EventMsg::AgentMessageContentDelta(event) => {
let notification = AgentMessageDeltaNotification {
thread_id: conversation_id.to_string(),
@@ -337,6 +626,26 @@ pub(crate) async fn apply_bespoke_event_handling(
.await;
}
EventMsg::Error(ev) => {
let message = ev.message.clone();
let codex_error_info = ev.codex_error_info.clone();
// If this error belongs to an in-flight `thread/rollback` request, fail that request
// (and clear pending state) so subsequent rollbacks are unblocked.
//
// Don't send a notification for this error.
if matches!(
codex_error_info,
Some(CoreCodexErrorInfo::ThreadRollbackFailed)
) {
return handle_thread_rollback_failed(
conversation_id,
message,
&pending_rollbacks,
&outgoing,
)
.await;
};
let turn_error = TurnError {
message: ev.message,
codex_error_info: ev.codex_error_info.map(V2CodexErrorInfo::from),
@@ -345,7 +654,7 @@ pub(crate) async fn apply_bespoke_event_handling(
handle_error(conversation_id, turn_error.clone(), &turn_summary_store).await;
outgoing
.send_server_notification(ServerNotification::Error(ErrorNotification {
error: turn_error,
error: turn_error.clone(),
will_retry: false,
thread_id: conversation_id.to_string(),
turn_id: event_turn_id.clone(),
@@ -690,6 +999,66 @@ pub(crate) async fn apply_bespoke_event_handling(
)
.await;
}
EventMsg::ThreadRolledBack(_rollback_event) => {
let pending = {
let mut map = pending_rollbacks.lock().await;
map.remove(&conversation_id)
};
if let Some(request_id) = pending {
let Some(rollout_path) = conversation.rollout_path() else {
let error = JSONRPCErrorError {
code: INVALID_REQUEST_ERROR_CODE,
message: "thread has no persisted rollout".to_string(),
data: None,
};
outgoing.send_error(request_id, error).await;
return;
};
let response = match read_summary_from_rollout(
rollout_path.as_path(),
fallback_model_provider.as_str(),
)
.await
{
Ok(summary) => {
let mut thread = summary_to_thread(summary);
match read_event_msgs_from_rollout(rollout_path.as_path()).await {
Ok(events) => {
thread.turns = build_turns_from_event_msgs(&events);
ThreadRollbackResponse { thread }
}
Err(err) => {
let error = JSONRPCErrorError {
code: INTERNAL_ERROR_CODE,
message: format!(
"failed to load rollout `{}`: {err}",
rollout_path.display()
),
data: None,
};
outgoing.send_error(request_id, error).await;
return;
}
}
}
Err(err) => {
let error = JSONRPCErrorError {
code: INTERNAL_ERROR_CODE,
message: format!(
"failed to load rollout `{}`: {err}",
rollout_path.display()
),
data: None,
};
outgoing.send_error(request_id, error).await;
return;
}
};
outgoing.send_response(request_id, response).await;
}
}
EventMsg::TurnDiff(turn_diff_event) => {
handle_turn_diff(
conversation_id,
@@ -716,7 +1085,7 @@ pub(crate) async fn apply_bespoke_event_handling(
}
async fn handle_turn_diff(
conversation_id: ConversationId,
conversation_id: ThreadId,
event_turn_id: &str,
turn_diff_event: TurnDiffEvent,
api_version: ApiVersion,
@@ -735,7 +1104,7 @@ async fn handle_turn_diff(
}
async fn handle_turn_plan_update(
conversation_id: ConversationId,
conversation_id: ThreadId,
event_turn_id: &str,
plan_update_event: UpdatePlanArgs,
api_version: ApiVersion,
@@ -759,7 +1128,7 @@ async fn handle_turn_plan_update(
}
async fn emit_turn_completed_with_status(
conversation_id: ConversationId,
conversation_id: ThreadId,
event_turn_id: String,
status: TurnStatus,
error: Option<TurnError>,
@@ -780,7 +1149,7 @@ async fn emit_turn_completed_with_status(
}
async fn complete_file_change_item(
conversation_id: ConversationId,
conversation_id: ThreadId,
item_id: String,
changes: Vec<FileUpdateChange>,
status: PatchApplyStatus,
@@ -812,7 +1181,7 @@ async fn complete_file_change_item(
#[allow(clippy::too_many_arguments)]
async fn complete_command_execution_item(
conversation_id: ConversationId,
conversation_id: ThreadId,
turn_id: String,
item_id: String,
command: String,
@@ -845,7 +1214,7 @@ async fn complete_command_execution_item(
async fn maybe_emit_raw_response_item_completed(
api_version: ApiVersion,
conversation_id: ConversationId,
conversation_id: ThreadId,
turn_id: &str,
item: codex_protocol::models::ResponseItem,
outgoing: &OutgoingMessageSender,
@@ -865,7 +1234,7 @@ async fn maybe_emit_raw_response_item_completed(
}
async fn find_and_remove_turn_summary(
conversation_id: ConversationId,
conversation_id: ThreadId,
turn_summary_store: &TurnSummaryStore,
) -> TurnSummary {
let mut map = turn_summary_store.lock().await;
@@ -873,7 +1242,7 @@ async fn find_and_remove_turn_summary(
}
async fn handle_turn_complete(
conversation_id: ConversationId,
conversation_id: ThreadId,
event_turn_id: String,
outgoing: &OutgoingMessageSender,
turn_summary_store: &TurnSummaryStore,
@@ -889,7 +1258,7 @@ async fn handle_turn_complete(
}
async fn handle_turn_interrupted(
conversation_id: ConversationId,
conversation_id: ThreadId,
event_turn_id: String,
outgoing: &OutgoingMessageSender,
turn_summary_store: &TurnSummaryStore,
@@ -906,8 +1275,33 @@ async fn handle_turn_interrupted(
.await;
}
async fn handle_thread_rollback_failed(
conversation_id: ThreadId,
message: String,
pending_rollbacks: &PendingRollbacks,
outgoing: &OutgoingMessageSender,
) {
let pending_rollback = {
let mut map = pending_rollbacks.lock().await;
map.remove(&conversation_id)
};
if let Some(request_id) = pending_rollback {
outgoing
.send_error(
request_id,
JSONRPCErrorError {
code: INVALID_REQUEST_ERROR_CODE,
message: message.clone(),
data: None,
},
)
.await;
}
}
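Both the `ThreadRolledBack` success path and this failure handler resolve an in-flight `thread/rollback` request by removing its entry from a shared map. The registration side is outside this diff; a minimal sketch of the shape the two call sites imply (the alias body and the insert site are assumptions inferred from usage, not code from this PR):

// Assumed alias, inferred from the lock().await + remove() calls above:
// type PendingRollbacks = Arc<Mutex<HashMap<ThreadId, RequestId>>>;
//
// Hypothetical registration on the request side, before the rollback is submitted:
// pending_rollbacks.lock().await.insert(conversation_id, request_id);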
async fn handle_token_count_event(
conversation_id: ConversationId,
conversation_id: ThreadId,
turn_id: String,
token_count_event: TokenCountEvent,
outgoing: &OutgoingMessageSender,
@@ -935,7 +1329,7 @@ async fn handle_token_count_event(
}
async fn handle_error(
conversation_id: ConversationId,
conversation_id: ThreadId,
error: TurnError,
turn_summary_store: &TurnSummaryStore,
) {
@@ -946,7 +1340,7 @@ async fn handle_error(
async fn on_patch_approval_response(
event_turn_id: String,
receiver: oneshot::Receiver<JsonValue>,
codex: Arc<CodexConversation>,
codex: Arc<CodexThread>,
) {
let response = receiver.await;
let value = match response {
@@ -988,7 +1382,7 @@ async fn on_patch_approval_response(
async fn on_exec_approval_response(
event_turn_id: String,
receiver: oneshot::Receiver<JsonValue>,
conversation: Arc<CodexConversation>,
conversation: Arc<CodexThread>,
) {
let response = receiver.await;
let value = match response {
@@ -1021,6 +1415,65 @@ async fn on_exec_approval_response(
}
}
async fn on_request_user_input_response(
event_turn_id: String,
receiver: oneshot::Receiver<JsonValue>,
conversation: Arc<CodexThread>,
) {
let response = receiver.await;
let value = match response {
Ok(value) => value,
Err(err) => {
error!("request failed: {err:?}");
let empty = CoreRequestUserInputResponse {
answers: HashMap::new(),
};
if let Err(err) = conversation
.submit(Op::UserInputAnswer {
id: event_turn_id,
response: empty,
})
.await
{
error!("failed to submit UserInputAnswer: {err}");
}
return;
}
};
let response =
serde_json::from_value::<ToolRequestUserInputResponse>(value).unwrap_or_else(|err| {
error!("failed to deserialize ToolRequestUserInputResponse: {err}");
ToolRequestUserInputResponse {
answers: HashMap::new(),
}
});
let response = CoreRequestUserInputResponse {
answers: response
.answers
.into_iter()
.map(|(id, answer)| {
(
id,
CoreRequestUserInputAnswer {
answers: answer.answers,
},
)
})
.collect(),
};
if let Err(err) = conversation
.submit(Op::UserInputAnswer {
id: event_turn_id,
response,
})
.await
{
error!("failed to submit UserInputAnswer: {err}");
}
}
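For reference, a minimal sketch of the wire payload this handler deserializes into `ToolRequestUserInputResponse` (values illustrative; the `confirm_path` id is borrowed from the test fixture later in this diff, and the nested `answers` shape follows the conversion code above):

let example = serde_json::json!({
    "answers": {
        "confirm_path": { "answers": ["yes"] }
    }
});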
const REVIEW_FALLBACK_MESSAGE: &str = "Reviewer failed to output a response.";
fn render_review_output_text(output: &ReviewOutputEvent) -> String {
@@ -1083,14 +1536,29 @@ fn format_file_change_diff(change: &CoreFileChange) -> String {
}
}
fn map_file_change_approval_decision(
decision: FileChangeApprovalDecision,
) -> (ReviewDecision, Option<PatchApplyStatus>) {
match decision {
FileChangeApprovalDecision::Accept => (ReviewDecision::Approved, None),
FileChangeApprovalDecision::AcceptForSession => (ReviewDecision::ApprovedForSession, None),
FileChangeApprovalDecision::Decline => {
(ReviewDecision::Denied, Some(PatchApplyStatus::Declined))
}
FileChangeApprovalDecision::Cancel => {
(ReviewDecision::Abort, Some(PatchApplyStatus::Declined))
}
}
}
#[allow(clippy::too_many_arguments)]
async fn on_file_change_request_approval_response(
event_turn_id: String,
conversation_id: ConversationId,
conversation_id: ThreadId,
item_id: String,
changes: Vec<FileUpdateChange>,
receiver: oneshot::Receiver<JsonValue>,
codex: Arc<CodexConversation>,
codex: Arc<CodexThread>,
outgoing: Arc<OutgoingMessageSender>,
turn_summary_store: TurnSummaryStore,
) {
@@ -1101,23 +1569,12 @@ async fn on_file_change_request_approval_response(
.unwrap_or_else(|err| {
error!("failed to deserialize FileChangeRequestApprovalResponse: {err}");
FileChangeRequestApprovalResponse {
decision: ApprovalDecision::Decline,
decision: FileChangeApprovalDecision::Decline,
}
});
let (decision, completion_status) = match response.decision {
ApprovalDecision::Accept
| ApprovalDecision::AcceptForSession
| ApprovalDecision::AcceptWithExecpolicyAmendment { .. } => {
(ReviewDecision::Approved, None)
}
ApprovalDecision::Decline => {
(ReviewDecision::Denied, Some(PatchApplyStatus::Declined))
}
ApprovalDecision::Cancel => {
(ReviewDecision::Abort, Some(PatchApplyStatus::Declined))
}
};
let (decision, completion_status) =
map_file_change_approval_decision(response.decision);
// Allow EventMsg::PatchApplyEnd to emit ItemCompleted for accepted patches.
// Only short-circuit on declines/cancels/failures.
(decision, completion_status)
@@ -1155,13 +1612,13 @@ async fn on_file_change_request_approval_response(
#[allow(clippy::too_many_arguments)]
async fn on_command_execution_request_approval_response(
event_turn_id: String,
conversation_id: ConversationId,
conversation_id: ThreadId,
item_id: String,
command: String,
cwd: PathBuf,
command_actions: Vec<V2ParsedCommand>,
receiver: oneshot::Receiver<JsonValue>,
conversation: Arc<CodexConversation>,
conversation: Arc<CodexThread>,
outgoing: Arc<OutgoingMessageSender>,
) {
let response = receiver.await;
@@ -1171,16 +1628,18 @@ async fn on_command_execution_request_approval_response(
.unwrap_or_else(|err| {
error!("failed to deserialize CommandExecutionRequestApprovalResponse: {err}");
CommandExecutionRequestApprovalResponse {
decision: ApprovalDecision::Decline,
decision: CommandExecutionApprovalDecision::Decline,
}
});
let decision = response.decision;
let (decision, completion_status) = match decision {
ApprovalDecision::Accept => (ReviewDecision::Approved, None),
ApprovalDecision::AcceptForSession => (ReviewDecision::ApprovedForSession, None),
ApprovalDecision::AcceptWithExecpolicyAmendment {
CommandExecutionApprovalDecision::Accept => (ReviewDecision::Approved, None),
CommandExecutionApprovalDecision::AcceptForSession => {
(ReviewDecision::ApprovedForSession, None)
}
CommandExecutionApprovalDecision::AcceptWithExecpolicyAmendment {
execpolicy_amendment,
} => (
ReviewDecision::ApprovedExecpolicyAmendment {
@@ -1188,11 +1647,11 @@ async fn on_command_execution_request_approval_response(
},
None,
),
ApprovalDecision::Decline => (
CommandExecutionApprovalDecision::Decline => (
ReviewDecision::Denied,
Some(CommandExecutionStatus::Declined),
),
ApprovalDecision::Cancel => (
CommandExecutionApprovalDecision::Cancel => (
ReviewDecision::Abort,
Some(CommandExecutionStatus::Declined),
),
@@ -1332,9 +1791,17 @@ mod tests {
Arc::new(Mutex::new(HashMap::new()))
}
#[test]
fn file_change_accept_for_session_maps_to_approved_for_session() {
let (decision, completion_status) =
map_file_change_approval_decision(FileChangeApprovalDecision::AcceptForSession);
assert_eq!(decision, ReviewDecision::ApprovedForSession);
assert_eq!(completion_status, None);
}
#[tokio::test]
async fn test_handle_error_records_message() -> Result<()> {
let conversation_id = ConversationId::new();
let conversation_id = ThreadId::new();
let turn_summary_store = new_turn_summary_store();
handle_error(
@@ -1362,7 +1829,7 @@ mod tests {
#[tokio::test]
async fn test_handle_turn_complete_emits_completed_without_error() -> Result<()> {
let conversation_id = ConversationId::new();
let conversation_id = ThreadId::new();
let event_turn_id = "complete1".to_string();
let (tx, mut rx) = mpsc::channel(CHANNEL_CAPACITY);
let outgoing = Arc::new(OutgoingMessageSender::new(tx));
@@ -1394,7 +1861,7 @@ mod tests {
#[tokio::test]
async fn test_handle_turn_interrupted_emits_interrupted_with_error() -> Result<()> {
let conversation_id = ConversationId::new();
let conversation_id = ThreadId::new();
let event_turn_id = "interrupt1".to_string();
let turn_summary_store = new_turn_summary_store();
handle_error(
@@ -1436,7 +1903,7 @@ mod tests {
#[tokio::test]
async fn test_handle_turn_complete_emits_failed_with_error() -> Result<()> {
let conversation_id = ConversationId::new();
let conversation_id = ThreadId::new();
let event_turn_id = "complete_err1".to_string();
let turn_summary_store = new_turn_summary_store();
handle_error(
@@ -1501,7 +1968,7 @@ mod tests {
],
};
let conversation_id = ConversationId::new();
let conversation_id = ThreadId::new();
handle_turn_plan_update(
conversation_id,
@@ -1535,7 +2002,7 @@ mod tests {
#[tokio::test]
async fn test_handle_token_count_event_emits_usage_and_rate_limits() -> Result<()> {
let conversation_id = ConversationId::new();
let conversation_id = ThreadId::new();
let turn_id = "turn-123".to_string();
let (tx, mut rx) = mpsc::channel(CHANNEL_CAPACITY);
let outgoing = Arc::new(OutgoingMessageSender::new(tx));
@@ -1620,7 +2087,7 @@ mod tests {
#[tokio::test]
async fn test_handle_token_count_event_without_usage_info() -> Result<()> {
let conversation_id = ConversationId::new();
let conversation_id = ThreadId::new();
let turn_id = "turn-456".to_string();
let (tx, mut rx) = mpsc::channel(CHANNEL_CAPACITY);
let outgoing = Arc::new(OutgoingMessageSender::new(tx));
@@ -1654,7 +2121,7 @@ mod tests {
},
};
let thread_id = ConversationId::new().to_string();
let thread_id = ThreadId::new().to_string();
let turn_id = "turn_1".to_string();
let notification = construct_mcp_tool_call_notification(
begin_event.clone(),
@@ -1684,8 +2151,8 @@ mod tests {
#[tokio::test]
async fn test_handle_turn_complete_emits_error_multiple_turns() -> Result<()> {
// Conversation A will have two turns; Conversation B will have one turn.
let conversation_a = ConversationId::new();
let conversation_b = ConversationId::new();
let conversation_a = ThreadId::new();
let conversation_b = ThreadId::new();
let turn_summary_store = new_turn_summary_store();
let (tx, mut rx) = mpsc::channel(CHANNEL_CAPACITY);
@@ -1812,7 +2279,7 @@ mod tests {
},
};
let thread_id = ConversationId::new().to_string();
let thread_id = ThreadId::new().to_string();
let turn_id = "turn_2".to_string();
let notification = construct_mcp_tool_call_notification(
begin_event.clone(),
@@ -1863,7 +2330,7 @@ mod tests {
result: Ok(result),
};
let thread_id = ConversationId::new().to_string();
let thread_id = ThreadId::new().to_string();
let turn_id = "turn_3".to_string();
let notification = construct_mcp_tool_call_end_notification(
end_event.clone(),
@@ -1906,7 +2373,7 @@ mod tests {
result: Err("boom".to_string()),
};
let thread_id = ConversationId::new().to_string();
let thread_id = ThreadId::new().to_string();
let turn_id = "turn_4".to_string();
let notification = construct_mcp_tool_call_end_notification(
end_event.clone(),
@@ -1940,7 +2407,7 @@ mod tests {
let (tx, mut rx) = mpsc::channel(CHANNEL_CAPACITY);
let outgoing = OutgoingMessageSender::new(tx);
let unified_diff = "--- a\n+++ b\n".to_string();
let conversation_id = ConversationId::new();
let conversation_id = ThreadId::new();
handle_turn_diff(
conversation_id,
@@ -1975,7 +2442,7 @@ mod tests {
async fn test_handle_turn_diff_is_noop_for_v1() -> Result<()> {
let (tx, mut rx) = mpsc::channel(CHANNEL_CAPACITY);
let outgoing = OutgoingMessageSender::new(tx);
let conversation_id = ConversationId::new();
let conversation_id = ThreadId::new();
handle_turn_diff(
conversation_id,

File diff suppressed because it is too large

View File

@@ -3,12 +3,18 @@ use crate::error_code::INVALID_REQUEST_ERROR_CODE;
use codex_app_server_protocol::ConfigBatchWriteParams;
use codex_app_server_protocol::ConfigReadParams;
use codex_app_server_protocol::ConfigReadResponse;
use codex_app_server_protocol::ConfigRequirements;
use codex_app_server_protocol::ConfigRequirementsReadResponse;
use codex_app_server_protocol::ConfigValueWriteParams;
use codex_app_server_protocol::ConfigWriteErrorCode;
use codex_app_server_protocol::ConfigWriteResponse;
use codex_app_server_protocol::JSONRPCErrorError;
use codex_app_server_protocol::SandboxMode;
use codex_core::config::ConfigService;
use codex_core::config::ConfigServiceError;
use codex_core::config_loader::ConfigRequirementsToml;
use codex_core::config_loader::LoaderOverrides;
use codex_core::config_loader::SandboxModeRequirement as CoreSandboxModeRequirement;
use serde_json::json;
use std::path::PathBuf;
use toml::Value as TomlValue;
@@ -19,9 +25,13 @@ pub(crate) struct ConfigApi {
}
impl ConfigApi {
pub(crate) fn new(codex_home: PathBuf, cli_overrides: Vec<(String, TomlValue)>) -> Self {
pub(crate) fn new(
codex_home: PathBuf,
cli_overrides: Vec<(String, TomlValue)>,
loader_overrides: LoaderOverrides,
) -> Self {
Self {
service: ConfigService::new(codex_home, cli_overrides),
service: ConfigService::new(codex_home, cli_overrides, loader_overrides),
}
}
@@ -32,6 +42,19 @@ impl ConfigApi {
self.service.read(params).await.map_err(map_error)
}
pub(crate) async fn config_requirements_read(
&self,
) -> Result<ConfigRequirementsReadResponse, JSONRPCErrorError> {
let requirements = self
.service
.read_requirements()
.await
.map_err(map_error)?
.map(map_requirements_toml_to_api);
Ok(ConfigRequirementsReadResponse { requirements })
}
pub(crate) async fn write_value(
&self,
params: ConfigValueWriteParams,
@@ -47,6 +70,32 @@ impl ConfigApi {
}
}
fn map_requirements_toml_to_api(requirements: ConfigRequirementsToml) -> ConfigRequirements {
ConfigRequirements {
allowed_approval_policies: requirements.allowed_approval_policies.map(|policies| {
policies
.into_iter()
.map(codex_app_server_protocol::AskForApproval::from)
.collect()
}),
allowed_sandbox_modes: requirements.allowed_sandbox_modes.map(|modes| {
modes
.into_iter()
.filter_map(map_sandbox_mode_requirement_to_api)
.collect()
}),
}
}
fn map_sandbox_mode_requirement_to_api(mode: CoreSandboxModeRequirement) -> Option<SandboxMode> {
match mode {
CoreSandboxModeRequirement::ReadOnly => Some(SandboxMode::ReadOnly),
CoreSandboxModeRequirement::WorkspaceWrite => Some(SandboxMode::WorkspaceWrite),
CoreSandboxModeRequirement::DangerFullAccess => Some(SandboxMode::DangerFullAccess),
CoreSandboxModeRequirement::ExternalSandbox => None,
}
}
fn map_error(err: ConfigServiceError) -> JSONRPCErrorError {
if let Some(code) = err.write_error_code() {
return config_write_error(code, err.to_string());
@@ -68,3 +117,39 @@ fn config_write_error(code: ConfigWriteErrorCode, message: impl Into<String>) ->
})),
}
}
#[cfg(test)]
mod tests {
use super::*;
use codex_protocol::protocol::AskForApproval as CoreAskForApproval;
use pretty_assertions::assert_eq;
#[test]
fn map_requirements_toml_to_api_converts_core_enums() {
let requirements = ConfigRequirementsToml {
allowed_approval_policies: Some(vec![
CoreAskForApproval::Never,
CoreAskForApproval::OnRequest,
]),
allowed_sandbox_modes: Some(vec![
CoreSandboxModeRequirement::ReadOnly,
CoreSandboxModeRequirement::ExternalSandbox,
]),
mcp_servers: None,
};
let mapped = map_requirements_toml_to_api(requirements);
assert_eq!(
mapped.allowed_approval_policies,
Some(vec![
codex_app_server_protocol::AskForApproval::Never,
codex_app_server_protocol::AskForApproval::OnRequest,
])
);
assert_eq!(
mapped.allowed_sandbox_modes,
Some(vec![SandboxMode::ReadOnly]),
);
}
}

View File

@@ -2,6 +2,9 @@
use codex_common::CliConfigOverrides;
use codex_core::config::Config;
use codex_core::config::ConfigBuilder;
use codex_core::config_loader::ConfigLayerStackOrdering;
use codex_core::config_loader::LoaderOverrides;
use std::io::ErrorKind;
use std::io::Result as IoResult;
use std::path::PathBuf;
@@ -9,7 +12,15 @@ use std::path::PathBuf;
use crate::message_processor::MessageProcessor;
use crate::outgoing_message::OutgoingMessage;
use crate::outgoing_message::OutgoingMessageSender;
use codex_app_server_protocol::ConfigLayerSource;
use codex_app_server_protocol::ConfigWarningNotification;
use codex_app_server_protocol::JSONRPCMessage;
use codex_app_server_protocol::TextPosition as AppTextPosition;
use codex_app_server_protocol::TextRange as AppTextRange;
use codex_core::ExecPolicyError;
use codex_core::check_execpolicy_for_warnings;
use codex_core::config_loader::ConfigLoadError;
use codex_core::config_loader::TextRange as CoreTextRange;
use codex_feedback::CodexFeedback;
use tokio::io::AsyncBufReadExt;
use tokio::io::AsyncWriteExt;
@@ -20,6 +31,7 @@ use toml::Value as TomlValue;
use tracing::debug;
use tracing::error;
use tracing::info;
use tracing::warn;
use tracing_subscriber::EnvFilter;
use tracing_subscriber::Layer;
use tracing_subscriber::layer::SubscriberExt;
@@ -39,9 +51,117 @@ mod outgoing_message;
/// plenty for an interactive CLI.
const CHANNEL_CAPACITY: usize = 128;
fn config_warning_from_error(
summary: impl Into<String>,
err: &std::io::Error,
) -> ConfigWarningNotification {
let (path, range) = match config_error_location(err) {
Some((path, range)) => (Some(path), Some(range)),
None => (None, None),
};
ConfigWarningNotification {
summary: summary.into(),
details: Some(err.to_string()),
path,
range,
}
}
fn config_error_location(err: &std::io::Error) -> Option<(String, AppTextRange)> {
err.get_ref()
.and_then(|err| err.downcast_ref::<ConfigLoadError>())
.map(|err| {
let config_error = err.config_error();
(
config_error.path.to_string_lossy().to_string(),
app_text_range(&config_error.range),
)
})
}
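The lookup above leans on the standard `std::io::Error` pattern: wrap a typed error as the source, then recover it with `get_ref` plus `downcast_ref`. A self-contained sketch of that pattern with a stand-in error type (`FakeLoadError` is hypothetical, not from this codebase):

use std::fmt;
use std::io::{Error, ErrorKind};

#[derive(Debug)]
struct FakeLoadError;

impl fmt::Display for FakeLoadError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "fake load error")
    }
}

impl std::error::Error for FakeLoadError {}

fn main() {
    // Wrap the typed error so it can travel through io::Error-based APIs.
    let io_err = Error::new(ErrorKind::InvalidData, FakeLoadError);
    // Recover the concrete type on the other side, as config_error_location does.
    let recovered = io_err
        .get_ref()
        .and_then(|e| e.downcast_ref::<FakeLoadError>());
    assert!(recovered.is_some());
}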
fn exec_policy_warning_location(err: &ExecPolicyError) -> (Option<String>, Option<AppTextRange>) {
match err {
ExecPolicyError::ParsePolicy { path, source } => {
if let Some(location) = source.location() {
let range = AppTextRange {
start: AppTextPosition {
line: location.range.start.line,
column: location.range.start.column,
},
end: AppTextPosition {
line: location.range.end.line,
column: location.range.end.column,
},
};
return (Some(location.path), Some(range));
}
(Some(path.clone()), None)
}
_ => (None, None),
}
}
fn app_text_range(range: &CoreTextRange) -> AppTextRange {
AppTextRange {
start: AppTextPosition {
line: range.start.line,
column: range.start.column,
},
end: AppTextPosition {
line: range.end.line,
column: range.end.column,
},
}
}
fn project_config_warning(config: &Config) -> Option<ConfigWarningNotification> {
let mut disabled_folders = Vec::new();
for layer in config
.config_layer_stack
.get_layers(ConfigLayerStackOrdering::LowestPrecedenceFirst, true)
{
if !matches!(layer.name, ConfigLayerSource::Project { .. })
|| layer.disabled_reason.is_none()
{
continue;
}
if let ConfigLayerSource::Project { dot_codex_folder } = &layer.name {
disabled_folders.push((
dot_codex_folder.as_path().display().to_string(),
layer
.disabled_reason
.as_ref()
.map(ToString::to_string)
.unwrap_or_else(|| "Config folder disabled.".to_string()),
));
}
}
if disabled_folders.is_empty() {
return None;
}
let mut message = "The following config folders are disabled:\n".to_string();
for (index, (folder, reason)) in disabled_folders.iter().enumerate() {
let display_index = index + 1;
message.push_str(&format!(" {display_index}. {folder}\n"));
message.push_str(&format!(" {reason}\n"));
}
Some(ConfigWarningNotification {
summary: message,
details: None,
path: None,
range: None,
})
}
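Given a single disabled project layer, the summary assembled above renders along these lines (path, reason, and exact indentation illustrative):

The following config folders are disabled:
 1. /repo/.codex
 Config folder disabled.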
pub async fn run_main(
codex_linux_sandbox_exe: Option<PathBuf>,
cli_config_overrides: CliConfigOverrides,
loader_overrides: LoaderOverrides,
default_analytics_enabled: bool,
) -> IoResult<()> {
// Set up channels.
let (incoming_tx, mut incoming_rx) = mpsc::channel::<JSONRPCMessage>(CHANNEL_CAPACITY);
@@ -78,21 +198,58 @@ pub async fn run_main(
format!("error parsing -c overrides: {e}"),
)
})?;
let config = Config::load_with_cli_overrides(cli_kv_overrides.clone())
let loader_overrides_for_config_api = loader_overrides.clone();
let mut config_warnings = Vec::new();
let config = match ConfigBuilder::default()
.cli_overrides(cli_kv_overrides.clone())
.loader_overrides(loader_overrides)
.build()
.await
.map_err(|e| {
std::io::Error::new(ErrorKind::InvalidData, format!("error loading config: {e}"))
})?;
{
Ok(config) => config,
Err(err) => {
let message = config_warning_from_error("Invalid configuration; using defaults.", &err);
config_warnings.push(message);
Config::load_default_with_cli_overrides(cli_kv_overrides.clone()).map_err(|e| {
std::io::Error::new(
ErrorKind::InvalidData,
format!("error loading default config after config error: {e}"),
)
})?
}
};
if let Ok(Some(err)) =
check_execpolicy_for_warnings(&config.features, &config.config_layer_stack).await
{
let (path, range) = exec_policy_warning_location(&err);
let message = ConfigWarningNotification {
summary: "Error parsing rules; custom rules not applied.".to_string(),
details: Some(err.to_string()),
path,
range,
};
config_warnings.push(message);
}
if let Some(warning) = project_config_warning(&config) {
config_warnings.push(warning);
}
let feedback = CodexFeedback::new();
let otel =
codex_core::otel_init::build_provider(&config, env!("CARGO_PKG_VERSION")).map_err(|e| {
std::io::Error::new(
ErrorKind::InvalidData,
format!("error loading otel config: {e}"),
)
})?;
let otel = codex_core::otel_init::build_provider(
&config,
env!("CARGO_PKG_VERSION"),
Some("codex_app_server"),
default_analytics_enabled,
)
.map_err(|e| {
std::io::Error::new(
ErrorKind::InvalidData,
format!("error loading otel config: {e}"),
)
})?;
// Install a simple subscriber so `tracing` output is visible. Users can
// control the log level with `RUST_LOG`.
@@ -115,25 +272,60 @@ pub async fn run_main(
.with(otel_logger_layer)
.with(otel_tracing_layer)
.try_init();
for warning in &config_warnings {
match &warning.details {
Some(details) => error!("{} {}", warning.summary, details),
None => error!("{}", warning.summary),
}
}
// Task: process incoming messages.
let processor_handle = tokio::spawn({
let outgoing_message_sender = OutgoingMessageSender::new(outgoing_tx);
let cli_overrides: Vec<(String, TomlValue)> = cli_kv_overrides.clone();
let loader_overrides = loader_overrides_for_config_api;
let mut processor = MessageProcessor::new(
outgoing_message_sender,
codex_linux_sandbox_exe,
std::sync::Arc::new(config),
cli_overrides,
loader_overrides,
feedback.clone(),
config_warnings,
);
let mut thread_created_rx = processor.thread_created_receiver();
async move {
while let Some(msg) = incoming_rx.recv().await {
match msg {
JSONRPCMessage::Request(r) => processor.process_request(r).await,
JSONRPCMessage::Response(r) => processor.process_response(r).await,
JSONRPCMessage::Notification(n) => processor.process_notification(n).await,
JSONRPCMessage::Error(e) => processor.process_error(e),
let mut listen_for_threads = true;
loop {
tokio::select! {
msg = incoming_rx.recv() => {
let Some(msg) = msg else {
break;
};
match msg {
JSONRPCMessage::Request(r) => processor.process_request(r).await,
JSONRPCMessage::Response(r) => processor.process_response(r).await,
JSONRPCMessage::Notification(n) => processor.process_notification(n).await,
JSONRPCMessage::Error(e) => processor.process_error(e),
}
}
created = thread_created_rx.recv(), if listen_for_threads => {
match created {
Ok(thread_id) => {
processor.try_attach_thread_listener(thread_id).await;
}
Err(tokio::sync::broadcast::error::RecvError::Lagged(_)) => {
// TODO(jif) handle lag.
// Assumes thread creation volume is low enough that lag never happens.
// If it does, we log and continue without resyncing to avoid attaching
// listeners for threads that should remain unsubscribed.
warn!("thread_created receiver lagged; skipping resync");
}
Err(tokio::sync::broadcast::error::RecvError::Closed) => {
listen_for_threads = false;
}
}
}
}
}
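The `select!` loop above multiplexes stdin JSON-RPC traffic with a broadcast stream of newly created thread ids, tolerating both lag and channel closure. A minimal sketch of the lag-aware receive, independent of this codebase:

use tokio::sync::broadcast;

#[tokio::main]
async fn main() {
    let (tx, mut rx) = broadcast::channel::<u64>(16);
    tx.send(1).expect("receiver still alive");
    match rx.recv().await {
        Ok(id) => println!("thread created: {id}"),
        // The receiver fell behind and skipped `n` messages; log and continue.
        Err(broadcast::error::RecvError::Lagged(n)) => eprintln!("missed {n} events"),
        // All senders dropped; stop polling this branch.
        Err(broadcast::error::RecvError::Closed) => {}
    }
}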

View File

@@ -1,10 +1,43 @@
use codex_app_server::run_main;
use codex_arg0::arg0_dispatch_or_else;
use codex_common::CliConfigOverrides;
use codex_core::config_loader::LoaderOverrides;
use std::path::PathBuf;
// Debug-only test hook: lets integration tests point the server at a temporary
// managed config file without writing to /etc.
const MANAGED_CONFIG_PATH_ENV_VAR: &str = "CODEX_APP_SERVER_MANAGED_CONFIG_PATH";
fn main() -> anyhow::Result<()> {
arg0_dispatch_or_else(|codex_linux_sandbox_exe| async move {
run_main(codex_linux_sandbox_exe, CliConfigOverrides::default()).await?;
let managed_config_path = managed_config_path_from_debug_env();
let loader_overrides = LoaderOverrides {
managed_config_path,
..Default::default()
};
run_main(
codex_linux_sandbox_exe,
CliConfigOverrides::default(),
loader_overrides,
false,
)
.await?;
Ok(())
})
}
fn managed_config_path_from_debug_env() -> Option<PathBuf> {
#[cfg(debug_assertions)]
{
if let Ok(value) = std::env::var(MANAGED_CONFIG_PATH_ENV_VAR) {
return if value.is_empty() {
None
} else {
Some(PathBuf::from(value))
};
}
}
None
}
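A hypothetical way an integration test could use this hook (debug builds only; the env name comes from the constant above, and `managed_config` stands in for a temp file):

// cmd is the tokio::process::Command spawning the server under test:
cmd.env("CODEX_APP_SERVER_MANAGED_CONFIG_PATH", managed_config.path());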

View File

@@ -10,6 +10,7 @@ use codex_app_server_protocol::ClientRequest;
use codex_app_server_protocol::ConfigBatchWriteParams;
use codex_app_server_protocol::ConfigReadParams;
use codex_app_server_protocol::ConfigValueWriteParams;
use codex_app_server_protocol::ConfigWarningNotification;
use codex_app_server_protocol::InitializeResponse;
use codex_app_server_protocol::JSONRPCError;
use codex_app_server_protocol::JSONRPCErrorError;
@@ -17,13 +18,19 @@ use codex_app_server_protocol::JSONRPCNotification;
use codex_app_server_protocol::JSONRPCRequest;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::ServerNotification;
use codex_core::AuthManager;
use codex_core::ConversationManager;
use codex_core::ThreadManager;
use codex_core::config::Config;
use codex_core::config_loader::LoaderOverrides;
use codex_core::default_client::SetOriginatorError;
use codex_core::default_client::USER_AGENT_SUFFIX;
use codex_core::default_client::get_codex_user_agent;
use codex_core::default_client::set_default_originator;
use codex_feedback::CodexFeedback;
use codex_protocol::ThreadId;
use codex_protocol::protocol::SessionSource;
use tokio::sync::broadcast;
use toml::Value as TomlValue;
pub(crate) struct MessageProcessor {
@@ -31,6 +38,7 @@ pub(crate) struct MessageProcessor {
codex_message_processor: CodexMessageProcessor,
config_api: ConfigApi,
initialized: bool,
config_warnings: Vec<ConfigWarningNotification>,
}
impl MessageProcessor {
@@ -41,7 +49,9 @@ impl MessageProcessor {
codex_linux_sandbox_exe: Option<PathBuf>,
config: Arc<Config>,
cli_overrides: Vec<(String, TomlValue)>,
loader_overrides: LoaderOverrides,
feedback: CodexFeedback,
config_warnings: Vec<ConfigWarningNotification>,
) -> Self {
let outgoing = Arc::new(outgoing);
let auth_manager = AuthManager::shared(
@@ -49,26 +59,28 @@ impl MessageProcessor {
false,
config.cli_auth_credentials_store_mode,
);
let conversation_manager = Arc::new(ConversationManager::new(
let thread_manager = Arc::new(ThreadManager::new(
config.codex_home.clone(),
auth_manager.clone(),
SessionSource::VSCode,
));
let codex_message_processor = CodexMessageProcessor::new(
auth_manager,
conversation_manager,
thread_manager,
outgoing.clone(),
codex_linux_sandbox_exe,
Arc::clone(&config),
cli_overrides.clone(),
feedback,
);
let config_api = ConfigApi::new(config.codex_home.clone(), cli_overrides);
let config_api = ConfigApi::new(config.codex_home.clone(), cli_overrides, loader_overrides);
Self {
outgoing,
codex_message_processor,
config_api,
initialized: false,
config_warnings,
}
}
@@ -118,6 +130,27 @@ impl MessageProcessor {
title: _title,
version,
} = params.client_info;
if let Err(error) = set_default_originator(name.clone()) {
match error {
SetOriginatorError::InvalidHeaderValue => {
let error = JSONRPCErrorError {
code: INVALID_REQUEST_ERROR_CODE,
message: format!(
"Invalid clientInfo.name: '{name}'. Must be a valid HTTP header value."
),
data: None,
};
self.outgoing.send_error(request_id, error).await;
return;
}
SetOriginatorError::AlreadyInitialized => {
// No-op. This is expected to happen if the originator is already set via env var.
// TODO(owen): Once we remove support for CODEX_INTERNAL_ORIGINATOR_OVERRIDE,
// this will be an unexpected state and we can return a JSON-RPC error indicating
// internal server error.
}
}
}
let user_agent_suffix = format!("{name}; {version}");
if let Ok(mut suffix) = USER_AGENT_SUFFIX.lock() {
*suffix = Some(user_agent_suffix);
@@ -128,6 +161,15 @@ impl MessageProcessor {
self.outgoing.send_response(request_id, response).await;
self.initialized = true;
if !self.config_warnings.is_empty() {
for notification in self.config_warnings.drain(..) {
self.outgoing
.send_server_notification(ServerNotification::ConfigWarning(
notification,
))
.await;
}
}
return;
}
@@ -155,6 +197,12 @@ impl MessageProcessor {
ClientRequest::ConfigBatchWrite { request_id, params } => {
self.handle_config_batch_write(request_id, params).await;
}
ClientRequest::ConfigRequirementsRead {
request_id,
params: _,
} => {
self.handle_config_requirements_read(request_id).await;
}
other => {
self.codex_message_processor.process_request(other).await;
}
@@ -167,6 +215,19 @@ impl MessageProcessor {
tracing::info!("<- notification: {:?}", notification);
}
pub(crate) fn thread_created_receiver(&self) -> broadcast::Receiver<ThreadId> {
self.codex_message_processor.thread_created_receiver()
}
pub(crate) async fn try_attach_thread_listener(&mut self, thread_id: ThreadId) {
if !self.initialized {
return;
}
self.codex_message_processor
.try_attach_thread_listener(thread_id)
.await;
}
/// Handle a standalone JSON-RPC response originating from the peer.
pub(crate) async fn process_response(&mut self, response: JSONRPCResponse) {
tracing::info!("<- response: {:?}", response);
@@ -207,4 +268,11 @@ impl MessageProcessor {
Err(error) => self.outgoing.send_error(request_id, error).await,
}
}
async fn handle_config_requirements_read(&self, request_id: RequestId) {
match self.config_api.config_requirements_read().await {
Ok(response) => self.outgoing.send_response(request_id, response).await,
Err(error) => self.outgoing.send_error(request_id, error).await,
}
}
}

View File

@@ -2,19 +2,18 @@ use std::sync::Arc;
use codex_app_server_protocol::Model;
use codex_app_server_protocol::ReasoningEffortOption;
use codex_core::ConversationManager;
use codex_core::ThreadManager;
use codex_core::config::Config;
use codex_core::models_manager::manager::RefreshStrategy;
use codex_protocol::openai_models::ModelPreset;
use codex_protocol::openai_models::ReasoningEffortPreset;
pub async fn supported_models(
conversation_manager: Arc<ConversationManager>,
config: &Config,
) -> Vec<Model> {
conversation_manager
.list_models(config)
pub async fn supported_models(thread_manager: Arc<ThreadManager>, config: &Config) -> Vec<Model> {
thread_manager
.list_models(config, RefreshStrategy::OnlineIfUncached)
.await
.into_iter()
.filter(|preset| preset.show_in_picker)
.map(model_from_preset)
.collect()
}

View File

@@ -162,6 +162,7 @@ mod tests {
use codex_app_server_protocol::AccountRateLimitsUpdatedNotification;
use codex_app_server_protocol::AccountUpdatedNotification;
use codex_app_server_protocol::AuthMode;
use codex_app_server_protocol::ConfigWarningNotification;
use codex_app_server_protocol::LoginChatGptCompleteNotification;
use codex_app_server_protocol::RateLimitSnapshot;
use codex_app_server_protocol::RateLimitWindow;
@@ -279,4 +280,28 @@ mod tests {
"ensure the notification serializes correctly"
);
}
#[test]
fn verify_config_warning_notification_serialization() {
let notification = ServerNotification::ConfigWarning(ConfigWarningNotification {
summary: "Config error: using defaults".to_string(),
details: Some("error loading config: bad config".to_string()),
path: None,
range: None,
});
let jsonrpc_notification = OutgoingMessage::AppServerNotification(notification);
assert_eq!(
json!( {
"method": "configWarning",
"params": {
"summary": "Config error: using defaults",
"details": "error loading config: bad config",
},
}),
serde_json::to_value(jsonrpc_notification)
.expect("ensure the notification serializes correctly"),
"ensure the notification serializes correctly"
);
}
}

View File

@@ -0,0 +1,7 @@
load("//:defs.bzl", "codex_rust_crate")
codex_rust_crate(
name = "common",
crate_name = "app_test_support",
crate_srcs = glob(["*.rs"]),
)

View File

@@ -49,6 +49,16 @@ impl ChatGptAuthFixture {
self
}
pub fn chatgpt_user_id(mut self, chatgpt_user_id: impl Into<String>) -> Self {
self.claims.chatgpt_user_id = Some(chatgpt_user_id.into());
self
}
pub fn chatgpt_account_id(mut self, chatgpt_account_id: impl Into<String>) -> Self {
self.claims.chatgpt_account_id = Some(chatgpt_account_id.into());
self
}
pub fn email(mut self, email: impl Into<String>) -> Self {
self.claims.email = Some(email.into());
self
@@ -69,6 +79,8 @@ impl ChatGptAuthFixture {
pub struct ChatGptIdTokenClaims {
pub email: Option<String>,
pub plan_type: Option<String>,
pub chatgpt_user_id: Option<String>,
pub chatgpt_account_id: Option<String>,
}
impl ChatGptIdTokenClaims {
@@ -85,6 +97,16 @@ impl ChatGptIdTokenClaims {
self.plan_type = Some(plan_type.into());
self
}
pub fn chatgpt_user_id(mut self, chatgpt_user_id: impl Into<String>) -> Self {
self.chatgpt_user_id = Some(chatgpt_user_id.into());
self
}
pub fn chatgpt_account_id(mut self, chatgpt_account_id: impl Into<String>) -> Self {
self.chatgpt_account_id = Some(chatgpt_account_id.into());
self
}
}
pub fn encode_id_token(claims: &ChatGptIdTokenClaims) -> Result<String> {
@@ -93,10 +115,20 @@ pub fn encode_id_token(claims: &ChatGptIdTokenClaims) -> Result<String> {
if let Some(email) = &claims.email {
payload.insert("email".to_string(), json!(email));
}
let mut auth_payload = serde_json::Map::new();
if let Some(plan_type) = &claims.plan_type {
auth_payload.insert("chatgpt_plan_type".to_string(), json!(plan_type));
}
if let Some(chatgpt_user_id) = &claims.chatgpt_user_id {
auth_payload.insert("chatgpt_user_id".to_string(), json!(chatgpt_user_id));
}
if let Some(chatgpt_account_id) = &claims.chatgpt_account_id {
auth_payload.insert("chatgpt_account_id".to_string(), json!(chatgpt_account_id));
}
if !auth_payload.is_empty() {
payload.insert(
"https://api.openai.com/auth".to_string(),
json!({ "chatgpt_plan_type": plan_type }),
serde_json::Value::Object(auth_payload),
);
}
let payload = serde_json::Value::Object(payload);
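With a plan type and both ChatGPT ids populated, the claims object assembled above serializes along these lines (values illustrative):

// serde_json::json!({
//     "email": "user@example.com",
//     "https://api.openai.com/auth": {
//         "chatgpt_plan_type": "plus",
//         "chatgpt_user_id": "user-123",
//         "chatgpt_account_id": "account-456"
//     }
// })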

View File

@@ -17,16 +17,21 @@ pub use core_test_support::format_with_current_shell_non_login;
pub use core_test_support::test_path_buf_with_windows;
pub use core_test_support::test_tmp_path;
pub use core_test_support::test_tmp_path_buf;
pub use mcp_process::DEFAULT_CLIENT_NAME;
pub use mcp_process::McpProcess;
pub use mock_model_server::create_mock_chat_completions_server;
pub use mock_model_server::create_mock_chat_completions_server_unchecked;
pub use mock_model_server::create_mock_responses_server_repeating_assistant;
pub use mock_model_server::create_mock_responses_server_sequence;
pub use mock_model_server::create_mock_responses_server_sequence_unchecked;
pub use models_cache::write_models_cache;
pub use models_cache::write_models_cache_with_models;
pub use responses::create_apply_patch_sse_response;
pub use responses::create_exec_command_sse_response;
pub use responses::create_final_assistant_message_sse_response;
pub use responses::create_request_user_input_sse_response;
pub use responses::create_shell_command_sse_response;
pub use rollout::create_fake_rollout;
pub use rollout::create_fake_rollout_with_text_elements;
pub use rollout::rollout_path;
use serde::de::DeserializeOwned;
pub fn to_response<T: DeserializeOwned>(response: JSONRPCResponse) -> anyhow::Result<T> {

View File

@@ -12,15 +12,18 @@ use tokio::process::ChildStdout;
use anyhow::Context;
use codex_app_server_protocol::AddConversationListenerParams;
use codex_app_server_protocol::AppsListParams;
use codex_app_server_protocol::ArchiveConversationParams;
use codex_app_server_protocol::CancelLoginAccountParams;
use codex_app_server_protocol::CancelLoginChatGptParams;
use codex_app_server_protocol::ClientInfo;
use codex_app_server_protocol::ClientNotification;
use codex_app_server_protocol::CollaborationModeListParams;
use codex_app_server_protocol::ConfigBatchWriteParams;
use codex_app_server_protocol::ConfigReadParams;
use codex_app_server_protocol::ConfigValueWriteParams;
use codex_app_server_protocol::FeedbackUploadParams;
use codex_app_server_protocol::ForkConversationParams;
use codex_app_server_protocol::GetAccountParams;
use codex_app_server_protocol::GetAuthStatusParams;
use codex_app_server_protocol::InitializeParams;
@@ -43,11 +46,16 @@ use codex_app_server_protocol::SendUserTurnParams;
use codex_app_server_protocol::ServerRequest;
use codex_app_server_protocol::SetDefaultModelParams;
use codex_app_server_protocol::ThreadArchiveParams;
use codex_app_server_protocol::ThreadForkParams;
use codex_app_server_protocol::ThreadListParams;
use codex_app_server_protocol::ThreadLoadedListParams;
use codex_app_server_protocol::ThreadReadParams;
use codex_app_server_protocol::ThreadResumeParams;
use codex_app_server_protocol::ThreadRollbackParams;
use codex_app_server_protocol::ThreadStartParams;
use codex_app_server_protocol::TurnInterruptParams;
use codex_app_server_protocol::TurnStartParams;
use codex_core::default_client::CODEX_INTERNAL_ORIGINATOR_OVERRIDE_ENV_VAR;
use tokio::process::Command;
pub struct McpProcess {
@@ -59,9 +67,11 @@ pub struct McpProcess {
process: Child,
stdin: ChildStdin,
stdout: BufReader<ChildStdout>,
pending_user_messages: VecDeque<JSONRPCNotification>,
pending_messages: VecDeque<JSONRPCMessage>,
}
pub const DEFAULT_CLIENT_NAME: &str = "codex-app-server-tests";
impl McpProcess {
pub async fn new(codex_home: &Path) -> anyhow::Result<Self> {
Self::new_with_env(codex_home, &[]).await
@@ -85,6 +95,7 @@ impl McpProcess {
cmd.stderr(Stdio::piped());
cmd.env("CODEX_HOME", codex_home);
cmd.env("RUST_LOG", "debug");
cmd.env_remove(CODEX_INTERNAL_ORIGINATOR_OVERRIDE_ENV_VAR);
for (k, v) in env_overrides {
match v {
@@ -126,39 +137,68 @@ impl McpProcess {
process,
stdin,
stdout,
pending_user_messages: VecDeque::new(),
pending_messages: VecDeque::new(),
})
}
/// Performs the initialization handshake with the MCP server.
pub async fn initialize(&mut self) -> anyhow::Result<()> {
let params = Some(serde_json::to_value(InitializeParams {
client_info: ClientInfo {
name: "codex-app-server-tests".to_string(),
let initialized = self
.initialize_with_client_info(ClientInfo {
name: DEFAULT_CLIENT_NAME.to_string(),
title: None,
version: "0.1.0".to_string(),
},
})?);
let req_id = self.send_request("initialize", params).await?;
let initialized = self.read_jsonrpc_message().await?;
let JSONRPCMessage::Response(response) = initialized else {
})
.await?;
let JSONRPCMessage::Response(_) = initialized else {
unreachable!("expected JSONRPCMessage::Response for initialize, got {initialized:?}");
};
if response.id != RequestId::Integer(req_id) {
anyhow::bail!(
"initialize response id mismatch: expected {}, got {:?}",
req_id,
response.id
);
}
// Send notifications/initialized to ack the response.
self.send_notification(ClientNotification::Initialized)
.await?;
Ok(())
}
/// Sends initialize with the provided client info and returns the response/error message.
pub async fn initialize_with_client_info(
&mut self,
client_info: ClientInfo,
) -> anyhow::Result<JSONRPCMessage> {
let params = Some(serde_json::to_value(InitializeParams { client_info })?);
let request_id = self.send_request("initialize", params).await?;
let message = self.read_jsonrpc_message().await?;
match message {
JSONRPCMessage::Response(response) => {
if response.id != RequestId::Integer(request_id) {
anyhow::bail!(
"initialize response id mismatch: expected {}, got {:?}",
request_id,
response.id
);
}
// Send notifications/initialized to ack the response.
self.send_notification(ClientNotification::Initialized)
.await?;
Ok(JSONRPCMessage::Response(response))
}
JSONRPCMessage::Error(error) => {
if error.id != RequestId::Integer(request_id) {
anyhow::bail!(
"initialize error id mismatch: expected {}, got {:?}",
request_id,
error.id
);
}
Ok(JSONRPCMessage::Error(error))
}
JSONRPCMessage::Notification(notification) => {
anyhow::bail!("unexpected JSONRPCMessage::Notification: {notification:?}");
}
JSONRPCMessage::Request(request) => {
anyhow::bail!("unexpected JSONRPCMessage::Request: {request:?}");
}
}
}
/// Send a `newConversation` JSON-RPC request.
pub async fn send_new_conversation_request(
&mut self,
@@ -197,7 +237,7 @@ impl McpProcess {
}
/// Send a `removeConversationListener` JSON-RPC request.
pub async fn send_remove_conversation_listener_request(
pub async fn send_remove_thread_listener_request(
&mut self,
params: RemoveConversationListenerParams,
) -> anyhow::Result<i64> {
@@ -307,6 +347,15 @@ impl McpProcess {
self.send_request("thread/resume", params).await
}
/// Send a `thread/fork` JSON-RPC request.
pub async fn send_thread_fork_request(
&mut self,
params: ThreadForkParams,
) -> anyhow::Result<i64> {
let params = Some(serde_json::to_value(params)?);
self.send_request("thread/fork", params).await
}
/// Send a `thread/archive` JSON-RPC request.
pub async fn send_thread_archive_request(
&mut self,
@@ -316,6 +365,15 @@ impl McpProcess {
self.send_request("thread/archive", params).await
}
/// Send a `thread/rollback` JSON-RPC request.
pub async fn send_thread_rollback_request(
&mut self,
params: ThreadRollbackParams,
) -> anyhow::Result<i64> {
let params = Some(serde_json::to_value(params)?);
self.send_request("thread/rollback", params).await
}
/// Send a `thread/list` JSON-RPC request.
pub async fn send_thread_list_request(
&mut self,
@@ -325,6 +383,24 @@ impl McpProcess {
self.send_request("thread/list", params).await
}
/// Send a `thread/loaded/list` JSON-RPC request.
pub async fn send_thread_loaded_list_request(
&mut self,
params: ThreadLoadedListParams,
) -> anyhow::Result<i64> {
let params = Some(serde_json::to_value(params)?);
self.send_request("thread/loaded/list", params).await
}
/// Send a `thread/read` JSON-RPC request.
pub async fn send_thread_read_request(
&mut self,
params: ThreadReadParams,
) -> anyhow::Result<i64> {
let params = Some(serde_json::to_value(params)?);
self.send_request("thread/read", params).await
}
/// Send a `model/list` JSON-RPC request.
pub async fn send_list_models_request(
&mut self,
@@ -334,6 +410,21 @@ impl McpProcess {
self.send_request("model/list", params).await
}
/// Send an `app/list` JSON-RPC request.
pub async fn send_apps_list_request(&mut self, params: AppsListParams) -> anyhow::Result<i64> {
let params = Some(serde_json::to_value(params)?);
self.send_request("app/list", params).await
}
/// Send a `collaborationMode/list` JSON-RPC request.
pub async fn send_list_collaboration_modes_request(
&mut self,
params: CollaborationModeListParams,
) -> anyhow::Result<i64> {
let params = Some(serde_json::to_value(params)?);
self.send_request("collaborationMode/list", params).await
}
/// Send a `resumeConversation` JSON-RPC request.
pub async fn send_resume_conversation_request(
&mut self,
@@ -343,6 +434,15 @@ impl McpProcess {
self.send_request("resumeConversation", params).await
}
/// Send a `forkConversation` JSON-RPC request.
pub async fn send_fork_conversation_request(
&mut self,
params: ForkConversationParams,
) -> anyhow::Result<i64> {
let params = Some(serde_json::to_value(params)?);
self.send_request("forkConversation", params).await
}
/// Send a `loginApiKey` JSON-RPC request.
pub async fn send_login_api_key_request(
&mut self,
@@ -534,27 +634,16 @@ impl McpProcess {
pub async fn read_stream_until_request_message(&mut self) -> anyhow::Result<ServerRequest> {
eprintln!("in read_stream_until_request_message()");
loop {
let message = self.read_jsonrpc_message().await?;
let message = self
.read_stream_until_message(|message| matches!(message, JSONRPCMessage::Request(_)))
.await?;
match message {
JSONRPCMessage::Notification(notification) => {
eprintln!("notification: {notification:?}");
self.enqueue_user_message(notification);
}
JSONRPCMessage::Request(jsonrpc_request) => {
return jsonrpc_request.try_into().with_context(
|| "failed to deserialize ServerRequest from JSONRPCRequest",
);
}
JSONRPCMessage::Error(_) => {
anyhow::bail!("unexpected JSONRPCMessage::Error: {message:?}");
}
JSONRPCMessage::Response(_) => {
anyhow::bail!("unexpected JSONRPCMessage::Response: {message:?}");
}
}
}
let JSONRPCMessage::Request(jsonrpc_request) = message else {
unreachable!("expected JSONRPCMessage::Request, got {message:?}");
};
jsonrpc_request
.try_into()
.with_context(|| "failed to deserialize ServerRequest from JSONRPCRequest")
}
pub async fn read_stream_until_response_message(
@@ -563,52 +652,32 @@ impl McpProcess {
) -> anyhow::Result<JSONRPCResponse> {
eprintln!("in read_stream_until_response_message({request_id:?})");
loop {
let message = self.read_jsonrpc_message().await?;
match message {
JSONRPCMessage::Notification(notification) => {
eprintln!("notification: {notification:?}");
self.enqueue_user_message(notification);
}
JSONRPCMessage::Request(_) => {
anyhow::bail!("unexpected JSONRPCMessage::Request: {message:?}");
}
JSONRPCMessage::Error(_) => {
anyhow::bail!("unexpected JSONRPCMessage::Error: {message:?}");
}
JSONRPCMessage::Response(jsonrpc_response) => {
if jsonrpc_response.id == request_id {
return Ok(jsonrpc_response);
}
}
}
}
let message = self
.read_stream_until_message(|message| {
Self::message_request_id(message) == Some(&request_id)
})
.await?;
let JSONRPCMessage::Response(response) = message else {
unreachable!("expected JSONRPCMessage::Response, got {message:?}");
};
Ok(response)
}
pub async fn read_stream_until_error_message(
&mut self,
request_id: RequestId,
) -> anyhow::Result<JSONRPCError> {
loop {
let message = self.read_jsonrpc_message().await?;
match message {
JSONRPCMessage::Notification(notification) => {
eprintln!("notification: {notification:?}");
self.enqueue_user_message(notification);
}
JSONRPCMessage::Request(_) => {
anyhow::bail!("unexpected JSONRPCMessage::Request: {message:?}");
}
JSONRPCMessage::Response(_) => {
// Keep scanning; we're waiting for an error with matching id.
}
JSONRPCMessage::Error(err) => {
if err.id == request_id {
return Ok(err);
}
}
}
}
let message = self
.read_stream_until_message(|message| {
Self::message_request_id(message) == Some(&request_id)
})
.await?;
let JSONRPCMessage::Error(err) = message else {
unreachable!("expected JSONRPCMessage::Error, got {message:?}");
};
Ok(err)
}
pub async fn read_stream_until_notification_message(
@@ -617,46 +686,64 @@ impl McpProcess {
) -> anyhow::Result<JSONRPCNotification> {
eprintln!("in read_stream_until_notification_message({method})");
if let Some(notification) = self.take_pending_notification_by_method(method) {
return Ok(notification);
let message = self
.read_stream_until_message(|message| {
matches!(
message,
JSONRPCMessage::Notification(notification) if notification.method == method
)
})
.await?;
let JSONRPCMessage::Notification(notification) = message else {
unreachable!("expected JSONRPCMessage::Notification, got {message:?}");
};
Ok(notification)
}
/// Clears any buffered messages so future reads only consider new stream items.
///
/// We call this when, for example, we want to validate against the next turn and no
/// longer care about messages buffered from the prior turn.
pub fn clear_message_buffer(&mut self) {
self.pending_messages.clear();
}
/// Reads the stream until a message matches `predicate`, buffering any non-matching messages
/// for later reads.
async fn read_stream_until_message<F>(&mut self, predicate: F) -> anyhow::Result<JSONRPCMessage>
where
F: Fn(&JSONRPCMessage) -> bool,
{
if let Some(message) = self.take_pending_message(&predicate) {
return Ok(message);
}
loop {
let message = self.read_jsonrpc_message().await?;
match message {
JSONRPCMessage::Notification(notification) => {
if notification.method == method {
return Ok(notification);
}
self.enqueue_user_message(notification);
}
JSONRPCMessage::Request(_) => {
anyhow::bail!("unexpected JSONRPCMessage::Request: {message:?}");
}
JSONRPCMessage::Error(_) => {
anyhow::bail!("unexpected JSONRPCMessage::Error: {message:?}");
}
JSONRPCMessage::Response(_) => {
anyhow::bail!("unexpected JSONRPCMessage::Response: {message:?}");
}
if predicate(&message) {
return Ok(message);
}
self.pending_messages.push_back(message);
}
}
fn take_pending_notification_by_method(&mut self, method: &str) -> Option<JSONRPCNotification> {
if let Some(pos) = self
.pending_user_messages
.iter()
.position(|notification| notification.method == method)
{
return self.pending_user_messages.remove(pos);
fn take_pending_message<F>(&mut self, predicate: &F) -> Option<JSONRPCMessage>
where
F: Fn(&JSONRPCMessage) -> bool,
{
if let Some(pos) = self.pending_messages.iter().position(predicate) {
return self.pending_messages.remove(pos);
}
None
}
fn enqueue_user_message(&mut self, notification: JSONRPCNotification) {
if notification.method == "codex/event/user_message" {
self.pending_user_messages.push_back(notification);
fn message_request_id(message: &JSONRPCMessage) -> Option<&RequestId> {
match message {
JSONRPCMessage::Request(request) => Some(&request.id),
JSONRPCMessage::Response(response) => Some(&response.id),
JSONRPCMessage::Error(err) => Some(&err.id),
JSONRPCMessage::Notification(_) => None,
}
}
}

View File

@@ -1,17 +1,18 @@
use std::sync::atomic::AtomicUsize;
use std::sync::atomic::Ordering;
use core_test_support::responses;
use wiremock::Mock;
use wiremock::MockServer;
use wiremock::Respond;
use wiremock::ResponseTemplate;
use wiremock::matchers::method;
use wiremock::matchers::path;
use wiremock::matchers::path_regex;
/// Create a mock server that will provide the responses, in order, for
/// requests to the `/v1/chat/completions` endpoint.
pub async fn create_mock_chat_completions_server(responses: Vec<String>) -> MockServer {
let server = MockServer::start().await;
/// requests to the `/v1/responses` endpoint.
pub async fn create_mock_responses_server_sequence(responses: Vec<String>) -> MockServer {
let server = responses::start_mock_server().await;
let num_calls = responses.len();
let seq_responder = SeqResponder {
@@ -20,7 +21,7 @@ pub async fn create_mock_chat_completions_server(responses: Vec<String>) -> Mock
};
Mock::given(method("POST"))
.and(path("/v1/chat/completions"))
.and(path_regex(".*/responses$"))
.respond_with(seq_responder)
.expect(num_calls as u64)
.mount(&server)
@@ -29,10 +30,10 @@ pub async fn create_mock_chat_completions_server(responses: Vec<String>) -> Mock
server
}
/// Same as `create_mock_chat_completions_server` but does not enforce an
/// Same as `create_mock_responses_server_sequence` but does not enforce an
/// expectation on the number of calls.
pub async fn create_mock_chat_completions_server_unchecked(responses: Vec<String>) -> MockServer {
let server = MockServer::start().await;
pub async fn create_mock_responses_server_sequence_unchecked(responses: Vec<String>) -> MockServer {
let server = responses::start_mock_server().await;
let seq_responder = SeqResponder {
num_calls: AtomicUsize::new(0),
@@ -40,7 +41,7 @@ pub async fn create_mock_chat_completions_server_unchecked(responses: Vec<String
};
Mock::given(method("POST"))
.and(path("/v1/chat/completions"))
.and(path_regex(".*/responses$"))
.respond_with(seq_responder)
.mount(&server)
.await;
@@ -57,10 +58,24 @@ impl Respond for SeqResponder {
fn respond(&self, _: &wiremock::Request) -> ResponseTemplate {
let call_num = self.num_calls.fetch_add(1, Ordering::SeqCst);
match self.responses.get(call_num) {
Some(response) => ResponseTemplate::new(200)
.insert_header("content-type", "text/event-stream")
.set_body_raw(response.clone(), "text/event-stream"),
Some(response) => responses::sse_response(response.clone()),
None => panic!("no response for {call_num}"),
}
}
}
/// Create a mock responses API server that returns the same assistant message for every request.
pub async fn create_mock_responses_server_repeating_assistant(message: &str) -> MockServer {
let server = responses::start_mock_server().await;
let body = responses::sse(vec![
responses::ev_response_created("resp-1"),
responses::ev_assistant_message("msg-1", message),
responses::ev_completed("resp-1"),
]);
Mock::given(method("POST"))
.and(path_regex(".*/responses$"))
.respond_with(responses::sse_response(body))
.mount(&server)
.await;
server
}
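A sketch of driving this helper from a test; how the provider's base URL gets wired to the mock is configuration-specific and assumed here:

let server = create_mock_responses_server_repeating_assistant("All done.").await;
// Point the model provider at server.uri(); the mock matches any POST path
// ending in /responses, so the exact base-URL shape is up to the test config.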

View File

@@ -15,7 +15,7 @@ fn preset_to_info(preset: &ModelPreset, priority: i32) -> ModelInfo {
slug: preset.id.clone(),
display_name: preset.display_name.clone(),
description: Some(preset.description.clone()),
default_reasoning_level: preset.default_reasoning_effort,
default_reasoning_level: Some(preset.default_reasoning_effort),
supported_reasoning_levels: preset.supported_reasoning_efforts.clone(),
shell_type: ConfigShellToolType::ShellCommand,
visibility: if preset.show_in_picker {
@@ -25,20 +25,22 @@ fn preset_to_info(preset: &ModelPreset, priority: i32) -> ModelInfo {
},
supported_in_api: true,
priority,
upgrade: preset.upgrade.as_ref().map(|u| u.id.clone()),
base_instructions: None,
upgrade: preset.upgrade.as_ref().map(|u| u.into()),
base_instructions: "base instructions".to_string(),
model_instructions_template: None,
supports_reasoning_summaries: false,
support_verbosity: false,
default_verbosity: None,
apply_patch_tool_type: None,
truncation_policy: TruncationPolicyConfig::bytes(10_000),
supports_parallel_tool_calls: false,
context_window: None,
context_window: Some(272_000),
auto_compact_token_limit: None,
effective_context_window_percent: 95,
experimental_supported_tools: Vec::new(),
}
}
// todo(aibrahim): fix the priorities to be the opposite here.
/// Write a models_cache.json file to the codex home directory.
/// This prevents ModelsManager from making network requests to refresh models.
/// The cache will be treated as fresh (within TTL) and used instead of fetching from the network.
@@ -49,14 +51,14 @@ pub fn write_models_cache(codex_home: &Path) -> std::io::Result<()> {
.iter()
.filter(|preset| preset.show_in_picker)
.collect();
// Convert presets to ModelInfo, assigning priorities (higher = earlier in list)
// Priority is used for sorting, so first model gets highest priority
// Convert presets to ModelInfo, assigning priorities (lower = earlier in list).
// Priority is used for sorting, so the first model gets the lowest priority.
let models: Vec<ModelInfo> = presets
.iter()
.enumerate()
.map(|(idx, preset)| {
// Higher priority = earlier in list, so reverse the index
let priority = (presets.len() - idx) as i32;
// Lower priority = earlier in list.
let priority = idx as i32;
preset_to_info(preset, priority)
})
.collect();
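
The flipped comment convention above is easy to misread in diff form; a minimal sketch of the new assignment (the preset count is a placeholder):

```rust
// Lower priority now sorts earlier: index 0 in the picker gets priority 0.
fn priorities(preset_count: usize) -> Vec<i32> {
    (0..preset_count).map(|idx| idx as i32).collect()
}

#[test]
fn first_preset_gets_lowest_priority() {
    assert_eq!(priorities(3), vec![0, 1, 2]);
}
```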

View File

@@ -1,3 +1,4 @@
use core_test_support::responses;
use serde_json::json;
use std::path::Path;
@@ -14,85 +15,30 @@ pub fn create_shell_command_sse_response(
"workdir": workdir.map(|w| w.to_string_lossy()),
"timeout_ms": timeout_ms
}))?;
let tool_call = json!({
"choices": [
{
"delta": {
"tool_calls": [
{
"id": call_id,
"function": {
"name": "shell_command",
"arguments": tool_call_arguments
}
}
]
},
"finish_reason": "tool_calls"
}
]
});
let sse = format!(
"data: {}\n\ndata: DONE\n\n",
serde_json::to_string(&tool_call)?
);
Ok(sse)
Ok(responses::sse(vec![
responses::ev_response_created("resp-1"),
responses::ev_function_call(call_id, "shell_command", &tool_call_arguments),
responses::ev_completed("resp-1"),
]))
}
pub fn create_final_assistant_message_sse_response(message: &str) -> anyhow::Result<String> {
let assistant_message = json!({
"choices": [
{
"delta": {
"content": message
},
"finish_reason": "stop"
}
]
});
let sse = format!(
"data: {}\n\ndata: DONE\n\n",
serde_json::to_string(&assistant_message)?
);
Ok(sse)
Ok(responses::sse(vec![
responses::ev_response_created("resp-1"),
responses::ev_assistant_message("msg-1", message),
responses::ev_completed("resp-1"),
]))
}
pub fn create_apply_patch_sse_response(
patch_content: &str,
call_id: &str,
) -> anyhow::Result<String> {
// Use shell_command to call apply_patch with heredoc format
let command = format!("apply_patch <<'EOF'\n{patch_content}\nEOF");
let tool_call_arguments = serde_json::to_string(&json!({
"command": command
}))?;
let tool_call = json!({
"choices": [
{
"delta": {
"tool_calls": [
{
"id": call_id,
"function": {
"name": "shell_command",
"arguments": tool_call_arguments
}
}
]
},
"finish_reason": "tool_calls"
}
]
});
let sse = format!(
"data: {}\n\ndata: DONE\n\n",
serde_json::to_string(&tool_call)?
);
Ok(sse)
Ok(responses::sse(vec![
responses::ev_response_created("resp-1"),
responses::ev_apply_patch_shell_command_call_via_heredoc(call_id, patch_content),
responses::ev_completed("resp-1"),
]))
}
pub fn create_exec_command_sse_response(call_id: &str) -> anyhow::Result<String> {
@@ -108,28 +54,32 @@ pub fn create_exec_command_sse_response(call_id: &str) -> anyhow::Result<String>
"cmd": command.join(" "),
"yield_time_ms": 500
}))?;
let tool_call = json!({
"choices": [
{
"delta": {
"tool_calls": [
{
"id": call_id,
"function": {
"name": "exec_command",
"arguments": tool_call_arguments
}
}
]
},
"finish_reason": "tool_calls"
}
]
});
let sse = format!(
"data: {}\n\ndata: DONE\n\n",
serde_json::to_string(&tool_call)?
);
Ok(sse)
Ok(responses::sse(vec![
responses::ev_response_created("resp-1"),
responses::ev_function_call(call_id, "exec_command", &tool_call_arguments),
responses::ev_completed("resp-1"),
]))
}
pub fn create_request_user_input_sse_response(call_id: &str) -> anyhow::Result<String> {
let tool_call_arguments = serde_json::to_string(&json!({
"questions": [{
"id": "confirm_path",
"header": "Confirm",
"question": "Proceed with the plan?",
"options": [{
"label": "Yes (Recommended)",
"description": "Continue the current plan."
}, {
"label": "No",
"description": "Stop and revisit the approach."
}]
}]
}))?;
Ok(responses::sse(vec![
responses::ev_response_created("resp-1"),
responses::ev_function_call(call_id, "request_user_input", &tool_call_arguments),
responses::ev_completed("resp-1"),
]))
}
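
All four builders now return Responses-API SSE bodies; a hedged sketch of how the surrounding tests combine them with the sequence server (two turns, two bodies):

```rust
use app_test_support::create_final_assistant_message_sse_response;
use app_test_support::create_mock_responses_server_sequence;
use wiremock::MockServer;

// The sequence server enforces that it receives exactly as many calls as bodies.
async fn mock_two_turns() -> anyhow::Result<MockServer> {
    let responses = vec![
        create_final_assistant_message_sse_response("first turn done")?,
        create_final_assistant_message_sse_response("second turn done")?,
    ];
    Ok(create_mock_responses_server_sequence(responses).await)
}
```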

View File

@@ -1,15 +1,28 @@
use anyhow::Result;
use codex_protocol::ConversationId;
use codex_protocol::ThreadId;
use codex_protocol::protocol::GitInfo;
use codex_protocol::protocol::SessionMeta;
use codex_protocol::protocol::SessionMetaLine;
use codex_protocol::protocol::SessionSource;
use serde_json::json;
use std::fs;
use std::fs::FileTimes;
use std::path::Path;
use std::path::PathBuf;
use uuid::Uuid;
pub fn rollout_path(codex_home: &Path, filename_ts: &str, thread_id: &str) -> PathBuf {
let year = &filename_ts[0..4];
let month = &filename_ts[5..7];
let day = &filename_ts[8..10];
codex_home
.join("sessions")
.join(year)
.join(month)
.join(day)
.join(format!("rollout-{filename_ts}-{thread_id}.jsonl"))
}
/// Create a minimal rollout file under `CODEX_HOME/sessions/YYYY/MM/DD/`.
///
/// - `filename_ts` is the filename timestamp component in `YYYY-MM-DDThh-mm-ss` format.
@@ -28,27 +41,25 @@ pub fn create_fake_rollout(
) -> Result<String> {
let uuid = Uuid::new_v4();
let uuid_str = uuid.to_string();
let conversation_id = ConversationId::from_string(&uuid_str)?;
let conversation_id = ThreadId::from_string(&uuid_str)?;
// sessions/YYYY/MM/DD derived from filename_ts (YYYY-MM-DDThh-mm-ss)
let year = &filename_ts[0..4];
let month = &filename_ts[5..7];
let day = &filename_ts[8..10];
let dir = codex_home.join("sessions").join(year).join(month).join(day);
fs::create_dir_all(&dir)?;
let file_path = dir.join(format!("rollout-{filename_ts}-{uuid}.jsonl"));
let file_path = rollout_path(codex_home, filename_ts, &uuid_str);
let dir = file_path
.parent()
.ok_or_else(|| anyhow::anyhow!("missing rollout parent directory"))?;
fs::create_dir_all(dir)?;
// Build JSONL lines
let meta = SessionMeta {
id: conversation_id,
forked_from_id: None,
timestamp: meta_rfc3339.to_string(),
cwd: PathBuf::from("/"),
originator: "codex".to_string(),
cli_version: "0.0.0".to_string(),
instructions: None,
source: SessionSource::Cli,
model_provider: model_provider.map(str::to_string),
base_instructions: None,
};
let payload = serde_json::to_value(SessionMetaLine {
meta,
@@ -84,6 +95,85 @@ pub fn create_fake_rollout(
.to_string(),
];
fs::write(&file_path, lines.join("\n") + "\n")?;
let parsed = chrono::DateTime::parse_from_rfc3339(meta_rfc3339)?.with_timezone(&chrono::Utc);
let times = FileTimes::new().set_modified(parsed.into());
std::fs::OpenOptions::new()
.append(true)
.open(&file_path)?
.set_times(times)?;
Ok(uuid_str)
}
pub fn create_fake_rollout_with_text_elements(
codex_home: &Path,
filename_ts: &str,
meta_rfc3339: &str,
preview: &str,
text_elements: Vec<serde_json::Value>,
model_provider: Option<&str>,
git_info: Option<GitInfo>,
) -> Result<String> {
let uuid = Uuid::new_v4();
let uuid_str = uuid.to_string();
let conversation_id = ThreadId::from_string(&uuid_str)?;
// sessions/YYYY/MM/DD derived from filename_ts (YYYY-MM-DDThh-mm-ss)
let year = &filename_ts[0..4];
let month = &filename_ts[5..7];
let day = &filename_ts[8..10];
let dir = codex_home.join("sessions").join(year).join(month).join(day);
fs::create_dir_all(&dir)?;
let file_path = dir.join(format!("rollout-{filename_ts}-{uuid}.jsonl"));
// Build JSONL lines
let meta = SessionMeta {
id: conversation_id,
forked_from_id: None,
timestamp: meta_rfc3339.to_string(),
cwd: PathBuf::from("/"),
originator: "codex".to_string(),
cli_version: "0.0.0".to_string(),
source: SessionSource::Cli,
model_provider: model_provider.map(str::to_string),
base_instructions: None,
};
let payload = serde_json::to_value(SessionMetaLine {
meta,
git: git_info,
})?;
let lines = [
json!( {
"timestamp": meta_rfc3339,
"type": "session_meta",
"payload": payload
})
.to_string(),
json!( {
"timestamp": meta_rfc3339,
"type":"response_item",
"payload": {
"type":"message",
"role":"user",
"content":[{"type":"input_text","text": preview}]
}
})
.to_string(),
json!( {
"timestamp": meta_rfc3339,
"type":"event_msg",
"payload": {
"type":"user_message",
"message": preview,
"text_elements": text_elements,
"local_images": []
}
})
.to_string(),
];
fs::write(file_path, lines.join("\n") + "\n")?;
Ok(uuid_str)
}
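
A worked example of the extracted `rollout_path` helper, assuming a Unix-style home; the YYYY/MM/DD components are sliced directly from `filename_ts`:

```rust
use std::path::Path;

#[test]
fn rollout_path_layout() {
    let path = rollout_path(Path::new("/tmp/codex-home"), "2025-01-02T12-00-00", "abc-123");
    assert_eq!(
        path,
        Path::new("/tmp/codex-home/sessions/2025/01/02/rollout-2025-01-02T12-00-00-abc-123.jsonl")
    );
}
```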

View File

@@ -37,7 +37,7 @@ model_provider = "mock_provider"
[model_providers.mock_provider]
name = "Mock provider for test"
base_url = "http://127.0.0.1:0/v1"
wire_api = "chat"
wire_api = "responses"
request_max_retries = 0
stream_max_retries = 0
{requires_line}

View File

@@ -1,7 +1,7 @@
use anyhow::Result;
use app_test_support::McpProcess;
use app_test_support::create_final_assistant_message_sse_response;
use app_test_support::create_mock_chat_completions_server;
use app_test_support::create_mock_responses_server_sequence;
use app_test_support::create_shell_command_sse_response;
use app_test_support::format_with_current_shell;
use app_test_support::to_response;
@@ -65,7 +65,7 @@ async fn test_codex_jsonrpc_conversation_flow() -> Result<()> {
)?,
create_final_assistant_message_sse_response("Enjoy your new git repo!")?,
];
let server = create_mock_chat_completions_server(responses).await;
let server = create_mock_responses_server_sequence(responses).await;
create_config_toml(&codex_home, &server.uri())?;
// Start MCP server and initialize.
@@ -114,6 +114,7 @@ async fn test_codex_jsonrpc_conversation_flow() -> Result<()> {
conversation_id,
items: vec![codex_app_server_protocol::InputItem::Text {
text: "text".to_string(),
text_elements: Vec::new(),
}],
})
.await?;
@@ -145,9 +146,7 @@ async fn test_codex_jsonrpc_conversation_flow() -> Result<()> {
// 4) removeConversationListener
let remove_listener_id = mcp
.send_remove_conversation_listener_request(RemoveConversationListenerParams {
subscription_id,
})
.send_remove_thread_listener_request(RemoveConversationListenerParams { subscription_id })
.await?;
let remove_listener_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
@@ -199,7 +198,7 @@ async fn test_send_user_turn_changes_approval_policy_behavior() -> Result<()> {
)?,
create_final_assistant_message_sse_response("done 2")?,
];
let server = create_mock_chat_completions_server(responses).await;
let server = create_mock_responses_server_sequence(responses).await;
create_config_toml(&codex_home, &server.uri())?;
// Start MCP server and initialize.
@@ -243,6 +242,7 @@ async fn test_send_user_turn_changes_approval_policy_behavior() -> Result<()> {
conversation_id,
items: vec![codex_app_server_protocol::InputItem::Text {
text: "run python".to_string(),
text_elements: Vec::new(),
}],
})
.await?;
@@ -285,7 +285,7 @@ async fn test_send_user_turn_changes_approval_policy_behavior() -> Result<()> {
)
.await?;
// Wait for first TaskComplete
// Wait for first TurnComplete
let _ = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("codex/event/task_complete"),
@@ -298,6 +298,7 @@ async fn test_send_user_turn_changes_approval_policy_behavior() -> Result<()> {
conversation_id,
items: vec![codex_app_server_protocol::InputItem::Text {
text: "run python again".to_string(),
text_elements: Vec::new(),
}],
cwd: working_directory.clone(),
approval_policy: AskForApproval::Never,
@@ -365,7 +366,7 @@ async fn test_send_user_turn_updates_sandbox_and_cwd_between_turns() -> Result<(
)?,
create_final_assistant_message_sse_response("done second")?,
];
let server = create_mock_chat_completions_server(responses).await;
let server = create_mock_responses_server_sequence(responses).await;
create_config_toml(&codex_home, &server.uri())?;
let mut mcp = McpProcess::new(&codex_home).await?;
@@ -407,6 +408,7 @@ async fn test_send_user_turn_updates_sandbox_and_cwd_between_turns() -> Result<(
conversation_id,
items: vec![InputItem::Text {
text: "first turn".to_string(),
text_elements: Vec::new(),
}],
cwd: first_cwd.clone(),
approval_policy: AskForApproval::Never,
@@ -432,12 +434,14 @@ async fn test_send_user_turn_updates_sandbox_and_cwd_between_turns() -> Result<(
mcp.read_stream_until_notification_message("codex/event/task_complete"),
)
.await??;
mcp.clear_message_buffer();
let second_turn_id = mcp
.send_send_user_turn_request(SendUserTurnParams {
conversation_id,
items: vec![InputItem::Text {
text: "second turn".to_string(),
text_elements: Vec::new(),
}],
cwd: second_cwd.clone(),
approval_policy: AskForApproval::Never,
@@ -501,7 +505,7 @@ model_provider = "mock_provider"
[model_providers.mock_provider]
name = "Mock provider for test"
base_url = "{server_uri}/v1"
wire_api = "chat"
wire_api = "responses"
request_max_retries = 0
stream_max_retries = 0
"#

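The recurring `text_elements: Vec::new()` additions in these tests track a new field on `InputItem::Text`; a minimal sketch (the empty vec appears to preserve plain-text behavior):

```rust
use codex_app_server_protocol::InputItem;

fn plain_text_item(text: &str) -> InputItem {
    InputItem::Text {
        text: text.to_string(),
        // New field in this diff; tests pass an empty vec for plain-text input.
        text_elements: Vec::new(),
    }
}
```
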
View File

@@ -1,7 +1,6 @@
use anyhow::Result;
use app_test_support::McpProcess;
use app_test_support::create_final_assistant_message_sse_response;
use app_test_support::create_mock_chat_completions_server;
use app_test_support::to_response;
use codex_app_server_protocol::AddConversationListenerParams;
use codex_app_server_protocol::AddConversationSubscriptionResponse;
@@ -12,6 +11,7 @@ use codex_app_server_protocol::NewConversationResponse;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::SendUserMessageParams;
use codex_app_server_protocol::SendUserMessageResponse;
use core_test_support::responses;
use pretty_assertions::assert_eq;
use serde_json::json;
use std::path::Path;
@@ -23,8 +23,9 @@ const DEFAULT_READ_TIMEOUT: std::time::Duration = std::time::Duration::from_secs
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn test_conversation_create_and_send_message_ok() -> Result<()> {
// Mock server: we won't strictly rely on it, but provide one to satisfy any model wiring.
let responses = vec![create_final_assistant_message_sse_response("Done")?];
let server = create_mock_chat_completions_server(responses).await;
let response_body = create_final_assistant_message_sse_response("Done")?;
let server = responses::start_mock_server().await;
let response_mock = responses::mount_sse_sequence(&server, vec![response_body]).await;
// Temporary Codex home with config pointing at the mock server.
let codex_home = TempDir::new()?;
@@ -76,6 +77,7 @@ async fn test_conversation_create_and_send_message_ok() -> Result<()> {
conversation_id,
items: vec![InputItem::Text {
text: "Hello".to_string(),
text_elements: Vec::new(),
}],
})
.await?;
@@ -86,32 +88,30 @@ async fn test_conversation_create_and_send_message_ok() -> Result<()> {
.await??;
let _ok: SendUserMessageResponse = to_response::<SendUserMessageResponse>(send_resp)?;
// avoid race condition by waiting for the mock server to receive the chat.completions request
// Avoid race condition by waiting for the mock server to receive the responses request.
let deadline = std::time::Instant::now() + DEFAULT_READ_TIMEOUT;
let requests = loop {
let requests = server.received_requests().await.unwrap_or_default();
let requests = response_mock.requests();
if !requests.is_empty() {
break requests;
}
if std::time::Instant::now() >= deadline {
panic!("mock server did not receive the chat.completions request in time");
panic!("mock server did not receive the responses request in time");
}
tokio::time::sleep(std::time::Duration::from_millis(10)).await;
};
// Verify the outbound request body matches expectations for Chat Completions.
// Verify the outbound request body matches expectations for Responses.
let request = requests
.first()
.expect("mock server should have received at least one request");
let body = request.body_json::<serde_json::Value>()?;
let body = request.body_json();
assert_eq!(body["model"], json!("o3"));
assert!(body["stream"].as_bool().unwrap_or(false));
let messages = body["messages"]
.as_array()
.expect("messages should be array");
let last = messages.last().expect("at least one message");
assert_eq!(last["role"], json!("user"));
assert_eq!(last["content"], json!("Hello"));
let user_texts = request.message_input_texts("user");
assert!(
user_texts.iter().any(|text| text == "Hello"),
"expected user input to include Hello, got {user_texts:?}"
);
drop(server);
Ok(())
@@ -133,7 +133,7 @@ model_provider = "mock_provider"
[model_providers.mock_provider]
name = "Mock provider for test"
base_url = "{server_uri}/v1"
wire_api = "chat"
wire_api = "responses"
request_max_retries = 0
stream_max_retries = 0
"#

View File

@@ -0,0 +1,140 @@
use anyhow::Result;
use app_test_support::McpProcess;
use app_test_support::create_fake_rollout;
use app_test_support::to_response;
use codex_app_server_protocol::ForkConversationParams;
use codex_app_server_protocol::ForkConversationResponse;
use codex_app_server_protocol::JSONRPCNotification;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::NewConversationParams; // reused for overrides shape
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::ServerNotification;
use codex_app_server_protocol::SessionConfiguredNotification;
use codex_core::protocol::EventMsg;
use pretty_assertions::assert_eq;
use tempfile::TempDir;
use tokio::time::timeout;
const DEFAULT_READ_TIMEOUT: std::time::Duration = std::time::Duration::from_secs(10);
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn fork_conversation_creates_new_rollout() -> Result<()> {
let codex_home = TempDir::new()?;
let preview = "Hello A";
let conversation_id = create_fake_rollout(
codex_home.path(),
"2025-01-02T12-00-00",
"2025-01-02T12:00:00Z",
preview,
Some("openai"),
None,
)?;
let original_path = codex_home
.path()
.join("sessions")
.join("2025")
.join("01")
.join("02")
.join(format!(
"rollout-2025-01-02T12-00-00-{conversation_id}.jsonl"
));
assert!(
original_path.exists(),
"expected original rollout to exist at {}",
original_path.display()
);
let original_contents = std::fs::read_to_string(&original_path)?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
let fork_req_id = mcp
.send_fork_conversation_request(ForkConversationParams {
path: Some(original_path.clone()),
conversation_id: None,
overrides: Some(NewConversationParams {
model: Some("o3".to_string()),
..Default::default()
}),
})
.await?;
// Expect a sessionConfigured notification for the forked session.
let notification: JSONRPCNotification = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("sessionConfigured"),
)
.await??;
let session_configured: ServerNotification = notification.try_into()?;
let ServerNotification::SessionConfigured(SessionConfiguredNotification {
model,
session_id,
rollout_path,
initial_messages: session_initial_messages,
..
}) = session_configured
else {
unreachable!("expected sessionConfigured notification");
};
assert_eq!(model, "o3");
assert_ne!(
session_id.to_string(),
conversation_id,
"expected a new conversation id when forking"
);
assert_ne!(
rollout_path, original_path,
"expected a new rollout path when forking"
);
assert!(
rollout_path.exists(),
"expected forked rollout to exist at {}",
rollout_path.display()
);
let session_initial_messages =
session_initial_messages.expect("expected initial messages when forking from rollout");
match session_initial_messages.as_slice() {
[EventMsg::UserMessage(message)] => {
assert_eq!(message.message, preview);
}
other => panic!("unexpected initial messages from rollout fork: {other:#?}"),
}
// Then the response for forkConversation.
let fork_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(fork_req_id)),
)
.await??;
let ForkConversationResponse {
conversation_id: forked_id,
model: forked_model,
initial_messages: response_initial_messages,
rollout_path: response_rollout_path,
} = to_response::<ForkConversationResponse>(fork_resp)?;
assert_eq!(forked_model, "o3");
assert_eq!(response_rollout_path, rollout_path);
assert_ne!(forked_id.to_string(), conversation_id);
let response_initial_messages =
response_initial_messages.expect("expected initial messages in fork response");
match response_initial_messages.as_slice() {
[EventMsg::UserMessage(message)] => {
assert_eq!(message.message, preview);
}
other => panic!("unexpected initial messages in fork response: {other:#?}"),
}
let after_contents = std::fs::read_to_string(&original_path)?;
assert_eq!(
after_contents, original_contents,
"fork should not mutate the original rollout file"
);
Ok(())
}

View File

@@ -18,7 +18,7 @@ use tempfile::TempDir;
use tokio::time::timeout;
use app_test_support::McpProcess;
use app_test_support::create_mock_chat_completions_server;
use app_test_support::create_mock_responses_server_sequence;
use app_test_support::create_shell_command_sse_response;
use app_test_support::to_response;
@@ -56,7 +56,7 @@ async fn shell_command_interruption() -> anyhow::Result<()> {
std::fs::create_dir(&working_directory)?;
// Create mock server with a single SSE response: the long sleep command
let server = create_mock_chat_completions_server(vec![create_shell_command_sse_response(
let server = create_mock_responses_server_sequence(vec![create_shell_command_sse_response(
shell_command.clone(),
Some(&working_directory),
Some(10_000), // 10 seconds timeout in ms
@@ -105,6 +105,7 @@ async fn shell_command_interruption() -> anyhow::Result<()> {
conversation_id,
items: vec![codex_app_server_protocol::InputItem::Text {
text: "run first sleep command".to_string(),
text_elements: Vec::new(),
}],
})
.await?;
@@ -153,7 +154,7 @@ model_provider = "mock_provider"
[model_providers.mock_provider]
name = "Mock provider for test"
base_url = "{server_uri}/v1"
wire_api = "chat"
wire_api = "responses"
request_max_retries = 0
stream_max_retries = 0
"#

View File

@@ -6,7 +6,7 @@ use codex_app_server_protocol::JSONRPCNotification;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::ListConversationsParams;
use codex_app_server_protocol::ListConversationsResponse;
use codex_app_server_protocol::NewConversationParams; // reused for overrides shape
use codex_app_server_protocol::NewConversationParams;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::ResumeConversationParams;
use codex_app_server_protocol::ResumeConversationResponse;
@@ -307,6 +307,7 @@ async fn test_list_and_resume_conversations() -> Result<()> {
content: vec![ContentItem::InputText {
text: fork_history_text.to_string(),
}],
end_turn: None,
}];
let resume_with_history_req_id = mcp
.send_resume_conversation_request(ResumeConversationParams {

View File

@@ -32,7 +32,7 @@ model_provider = "mock_provider"
[model_providers.mock_provider]
name = "Mock provider for test"
base_url = "http://127.0.0.1:0/v1"
wire_api = "chat"
wire_api = "responses"
request_max_retries = 0
stream_max_retries = 0
"#,

View File

@@ -1,8 +1,9 @@
mod archive_conversation;
mod archive_thread;
mod auth;
mod codex_message_processor_flow;
mod config;
mod create_conversation;
mod create_thread;
mod fork_thread;
mod fuzzy_file_search;
mod interrupt;
mod list_resume;

View File

@@ -80,6 +80,7 @@ async fn send_user_turn_accepts_output_schema_v1() -> Result<()> {
conversation_id,
items: vec![InputItem::Text {
text: "Hello".to_string(),
text_elements: Vec::new(),
}],
cwd: codex_home.path().to_path_buf(),
approval_policy: AskForApproval::Never,
@@ -181,6 +182,7 @@ async fn send_user_turn_output_schema_is_per_turn_v1() -> Result<()> {
conversation_id,
items: vec![InputItem::Text {
text: "Hello".to_string(),
text_elements: Vec::new(),
}],
cwd: codex_home.path().to_path_buf(),
approval_policy: AskForApproval::Never,
@@ -228,6 +230,7 @@ async fn send_user_turn_output_schema_is_per_turn_v1() -> Result<()> {
conversation_id,
items: vec![InputItem::Text {
text: "Hello again".to_string(),
text_elements: Vec::new(),
}],
cwd: codex_home.path().to_path_buf(),
approval_policy: AskForApproval::Never,

View File

@@ -1,7 +1,5 @@
use anyhow::Result;
use app_test_support::McpProcess;
use app_test_support::create_final_assistant_message_sse_response;
use app_test_support::create_mock_chat_completions_server;
use app_test_support::to_response;
use codex_app_server_protocol::AddConversationListenerParams;
use codex_app_server_protocol::AddConversationSubscriptionResponse;
@@ -13,12 +11,17 @@ use codex_app_server_protocol::NewConversationResponse;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::SendUserMessageParams;
use codex_app_server_protocol::SendUserMessageResponse;
use codex_protocol::ConversationId;
use codex_protocol::ThreadId;
use codex_protocol::models::ContentItem;
use codex_protocol::models::DeveloperInstructions;
use codex_protocol::models::ResponseItem;
use codex_protocol::protocol::AskForApproval;
use codex_protocol::protocol::RawResponseItemEvent;
use codex_protocol::protocol::SandboxPolicy;
use core_test_support::responses;
use pretty_assertions::assert_eq;
use std::path::Path;
use std::path::PathBuf;
use tempfile::TempDir;
use tokio::time::timeout;
@@ -26,13 +29,21 @@ const DEFAULT_READ_TIMEOUT: std::time::Duration = std::time::Duration::from_secs
#[tokio::test]
async fn test_send_message_success() -> Result<()> {
// Spin up a mock completions server that immediately ends the Codex turn.
// Spin up a mock responses server that immediately ends the Codex turn.
// Two Codex turns hit the mock model (session start + send-user-message). Provide two SSE responses.
let responses = vec![
create_final_assistant_message_sse_response("Done")?,
create_final_assistant_message_sse_response("Done")?,
];
let server = create_mock_chat_completions_server(responses).await;
let server = responses::start_mock_server().await;
let body1 = responses::sse(vec![
responses::ev_response_created("resp-1"),
responses::ev_assistant_message("msg-1", "Done"),
responses::ev_completed("resp-1"),
]);
let body2 = responses::sse(vec![
responses::ev_response_created("resp-2"),
responses::ev_assistant_message("msg-2", "Done"),
responses::ev_completed("resp-2"),
]);
let _response_mock1 = responses::mount_sse_once(&server, body1).await;
let _response_mock2 = responses::mount_sse_once(&server, body2).await;
// Create a temporary Codex home with config pointing at the mock server.
let codex_home = TempDir::new()?;
@@ -81,7 +92,7 @@ async fn test_send_message_success() -> Result<()> {
#[expect(clippy::expect_used)]
async fn send_message(
message: &str,
conversation_id: ConversationId,
conversation_id: ThreadId,
mcp: &mut McpProcess,
) -> Result<()> {
// Now exercise sendUserMessage.
@@ -90,6 +101,7 @@ async fn send_message(
conversation_id,
items: vec![InputItem::Text {
text: message.to_string(),
text_elements: Vec::new(),
}],
})
.await?;
@@ -135,8 +147,13 @@ async fn send_message(
#[tokio::test]
async fn test_send_message_raw_notifications_opt_in() -> Result<()> {
let responses = vec![create_final_assistant_message_sse_response("Done")?];
let server = create_mock_chat_completions_server(responses).await;
let server = responses::start_mock_server().await;
let body = responses::sse(vec![
responses::ev_response_created("resp-1"),
responses::ev_assistant_message("msg-1", "Done"),
responses::ev_completed("resp-1"),
]);
let _response_mock = responses::mount_sse_once(&server, body).await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri())?;
@@ -178,10 +195,14 @@ async fn test_send_message_raw_notifications_opt_in() -> Result<()> {
conversation_id,
items: vec![InputItem::Text {
text: "Hello".to_string(),
text_elements: Vec::new(),
}],
})
.await?;
let permissions = read_raw_response_item(&mut mcp, conversation_id).await;
assert_permissions_message(&permissions);
let developer = read_raw_response_item(&mut mcp, conversation_id).await;
assert_developer_message(&developer, "Use the test harness tools.");
@@ -220,12 +241,13 @@ async fn test_send_message_session_not_found() -> Result<()> {
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
let unknown = ConversationId::new();
let unknown = ThreadId::new();
let req_id = mcp
.send_send_user_message_request(SendUserMessageParams {
conversation_id: unknown,
items: vec![InputItem::Text {
text: "ping".to_string(),
text_elements: Vec::new(),
}],
})
.await?;
@@ -259,7 +281,7 @@ model_provider = "mock_provider"
[model_providers.mock_provider]
name = "Mock provider for test"
base_url = "{server_uri}/v1"
wire_api = "chat"
wire_api = "responses"
request_max_retries = 0
stream_max_retries = 0
"#
@@ -268,10 +290,8 @@ stream_max_retries = 0
}
#[expect(clippy::expect_used)]
async fn read_raw_response_item(
mcp: &mut McpProcess,
conversation_id: ConversationId,
) -> ResponseItem {
async fn read_raw_response_item(mcp: &mut McpProcess, conversation_id: ThreadId) -> ResponseItem {
// TODO: Switch to rawResponseItem/completed once we migrate to app server v2 in codex web.
loop {
let raw_notification: JSONRPCNotification = timeout(
DEFAULT_READ_TIMEOUT,
@@ -330,6 +350,27 @@ fn assert_instructions_message(item: &ResponseItem) {
}
}
fn assert_permissions_message(item: &ResponseItem) {
match item {
ResponseItem::Message { role, content, .. } => {
assert_eq!(role, "developer");
let texts = content_texts(content);
let expected = DeveloperInstructions::from_policy(
&SandboxPolicy::DangerFullAccess,
AskForApproval::Never,
&PathBuf::from("/tmp"),
)
.into_text();
assert_eq!(
texts,
vec![expected.as_str()],
"expected permissions developer message, got {texts:?}"
);
}
other => panic!("expected permissions message, got {other:?}"),
}
}
fn assert_developer_message(item: &ResponseItem, expected_text: &str) {
match item {
ResponseItem::Message { role, content, .. } => {
@@ -387,7 +428,7 @@ fn content_texts(content: &[ContentItem]) -> Vec<&str> {
content
.iter()
.filter_map(|item| match item {
ContentItem::InputText { text } | ContentItem::OutputText { text } => {
ContentItem::InputText { text, .. } | ContentItem::OutputText { text } => {
Some(text.as_str())
}
_ => None,

View File

@@ -1,4 +1,5 @@
use anyhow::Result;
use app_test_support::DEFAULT_CLIENT_NAME;
use app_test_support::McpProcess;
use app_test_support::to_response;
use codex_app_server_protocol::GetUserAgentResponse;
@@ -25,13 +26,13 @@ async fn get_user_agent_returns_current_codex_user_agent() -> Result<()> {
.await??;
let os_info = os_info::get();
let originator = codex_core::default_client::originator().value.as_str();
let originator = DEFAULT_CLIENT_NAME;
let os_type = os_info.os_type();
let os_version = os_info.version();
let architecture = os_info.architecture().unwrap_or("unknown");
let terminal_ua = codex_core::terminal::user_agent();
let user_agent = format!(
"{originator}/0.0.0 ({os_type} {os_version}; {architecture}) {terminal_ua} (codex-app-server-tests; 0.1.0)"
"{originator}/0.0.0 ({os_type} {os_version}; {architecture}) {terminal_ua} ({DEFAULT_CLIENT_NAME}; 0.1.0)"
);
let received: GetUserAgentResponse = to_response(response)?;

View File

@@ -67,7 +67,7 @@ model_provider = "mock_provider"
[model_providers.mock_provider]
name = "Mock provider for test"
base_url = "http://127.0.0.1:0/v1"
wire_api = "chat"
wire_api = "responses"
request_max_retries = 0
stream_max_retries = 0
{requires_line}

View File

@@ -0,0 +1,66 @@
use anyhow::Result;
use codex_core::config::ConfigBuilder;
use codex_core::config::types::OtelExporterKind;
use codex_core::config::types::OtelHttpProtocol;
use pretty_assertions::assert_eq;
use std::collections::HashMap;
use tempfile::TempDir;
const SERVICE_VERSION: &str = "0.0.0-test";
fn set_metrics_exporter(config: &mut codex_core::config::Config) {
config.otel.metrics_exporter = OtelExporterKind::OtlpHttp {
endpoint: "http://localhost:4318".to_string(),
headers: HashMap::new(),
protocol: OtelHttpProtocol::Json,
tls: None,
};
}
#[tokio::test]
async fn app_server_default_analytics_disabled_without_flag() -> Result<()> {
let codex_home = TempDir::new()?;
let mut config = ConfigBuilder::default()
.codex_home(codex_home.path().to_path_buf())
.build()
.await?;
set_metrics_exporter(&mut config);
config.analytics_enabled = None;
let provider = codex_core::otel_init::build_provider(
&config,
SERVICE_VERSION,
Some("codex_app_server"),
false,
)
.map_err(|err| anyhow::anyhow!(err.to_string()))?;
// With analytics unset in the config and the default flag set to false, metrics are
// disabled and no provider is built.
assert!(provider.is_none());
Ok(())
}
#[tokio::test]
async fn app_server_default_analytics_enabled_with_flag() -> Result<()> {
let codex_home = TempDir::new()?;
let mut config = ConfigBuilder::default()
.codex_home(codex_home.path().to_path_buf())
.build()
.await?;
set_metrics_exporter(&mut config);
config.analytics_enabled = None;
let provider = codex_core::otel_init::build_provider(
&config,
SERVICE_VERSION,
Some("codex_app_server"),
true,
)
.map_err(|err| anyhow::anyhow!(err.to_string()))?;
// With analytics unset in the config and the default flag set to true, metrics are enabled.
let has_metrics = provider.as_ref().and_then(|otel| otel.metrics()).is_some();
assert!(has_metrics);
Ok(())
}
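
The gating the two tests imply, written out as a sketch; this is an inference from the assertions above, not the real `build_provider` logic:

```rust
// A config value would win; `None` falls back to the caller-supplied default flag.
fn metrics_enabled(analytics_enabled: Option<bool>, default_flag: bool) -> bool {
    analytics_enabled.unwrap_or(default_flag)
}

#[test]
fn default_flag_controls_unset_config() {
    assert!(!metrics_enabled(None, false));
    assert!(metrics_enabled(None, true));
}
```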

View File

@@ -0,0 +1,381 @@
use std::borrow::Cow;
use std::sync::Arc;
use std::time::Duration;
use anyhow::Result;
use app_test_support::ChatGptAuthFixture;
use app_test_support::McpProcess;
use app_test_support::to_response;
use app_test_support::write_chatgpt_auth;
use axum::Json;
use axum::Router;
use axum::extract::State;
use axum::http::HeaderMap;
use axum::http::StatusCode;
use axum::http::header::AUTHORIZATION;
use axum::routing::post;
use codex_app_server_protocol::AppInfo;
use codex_app_server_protocol::AppsListParams;
use codex_app_server_protocol::AppsListResponse;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::RequestId;
use codex_core::auth::AuthCredentialsStoreMode;
use codex_core::connectors::ConnectorInfo;
use pretty_assertions::assert_eq;
use rmcp::handler::server::ServerHandler;
use rmcp::model::JsonObject;
use rmcp::model::ListToolsResult;
use rmcp::model::Meta;
use rmcp::model::ServerCapabilities;
use rmcp::model::ServerInfo;
use rmcp::model::Tool;
use rmcp::model::ToolAnnotations;
use rmcp::transport::StreamableHttpServerConfig;
use rmcp::transport::StreamableHttpService;
use rmcp::transport::streamable_http_server::session::local::LocalSessionManager;
use serde_json::json;
use tempfile::TempDir;
use tokio::net::TcpListener;
use tokio::task::JoinHandle;
use tokio::time::timeout;
const DEFAULT_TIMEOUT: Duration = Duration::from_secs(10);
#[tokio::test]
async fn list_apps_returns_empty_when_connectors_disabled() -> Result<()> {
let codex_home = TempDir::new()?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_TIMEOUT, mcp.initialize()).await??;
let request_id = mcp
.send_apps_list_request(AppsListParams {
limit: Some(50),
cursor: None,
})
.await?;
let response: JSONRPCResponse = timeout(
DEFAULT_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(request_id)),
)
.await??;
let AppsListResponse { data, next_cursor } = to_response(response)?;
assert!(data.is_empty());
assert!(next_cursor.is_none());
Ok(())
}
#[tokio::test]
async fn list_apps_returns_connectors_with_accessible_flags() -> Result<()> {
let connectors = vec![
ConnectorInfo {
connector_id: "alpha".to_string(),
connector_name: "Alpha".to_string(),
connector_description: Some("Alpha connector".to_string()),
logo_url: Some("https://example.com/alpha.png".to_string()),
install_url: None,
is_accessible: false,
},
ConnectorInfo {
connector_id: "beta".to_string(),
connector_name: "beta".to_string(),
connector_description: None,
logo_url: None,
install_url: None,
is_accessible: false,
},
];
let tools = vec![connector_tool("beta", "Beta App")?];
let (server_url, server_handle) = start_apps_server(connectors.clone(), tools).await?;
let codex_home = TempDir::new()?;
write_connectors_config(codex_home.path(), &server_url)?;
write_chatgpt_auth(
codex_home.path(),
ChatGptAuthFixture::new("chatgpt-token")
.account_id("account-123")
.chatgpt_user_id("user-123")
.chatgpt_account_id("account-123"),
AuthCredentialsStoreMode::File,
)?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_TIMEOUT, mcp.initialize()).await??;
let request_id = mcp
.send_apps_list_request(AppsListParams {
limit: None,
cursor: None,
})
.await?;
let response: JSONRPCResponse = timeout(
DEFAULT_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(request_id)),
)
.await??;
let AppsListResponse { data, next_cursor } = to_response(response)?;
let expected = vec![
AppInfo {
id: "beta".to_string(),
name: "Beta App".to_string(),
description: None,
logo_url: None,
install_url: Some("https://chatgpt.com/apps/beta/beta".to_string()),
is_accessible: true,
},
AppInfo {
id: "alpha".to_string(),
name: "Alpha".to_string(),
description: Some("Alpha connector".to_string()),
logo_url: Some("https://example.com/alpha.png".to_string()),
install_url: Some("https://chatgpt.com/apps/alpha/alpha".to_string()),
is_accessible: false,
},
];
assert_eq!(data, expected);
assert!(next_cursor.is_none());
server_handle.abort();
Ok(())
}
#[tokio::test]
async fn list_apps_paginates_results() -> Result<()> {
let connectors = vec![
ConnectorInfo {
connector_id: "alpha".to_string(),
connector_name: "Alpha".to_string(),
connector_description: Some("Alpha connector".to_string()),
logo_url: None,
install_url: None,
is_accessible: false,
},
ConnectorInfo {
connector_id: "beta".to_string(),
connector_name: "beta".to_string(),
connector_description: None,
logo_url: None,
install_url: None,
is_accessible: false,
},
];
let tools = vec![connector_tool("beta", "Beta App")?];
let (server_url, server_handle) = start_apps_server(connectors.clone(), tools).await?;
let codex_home = TempDir::new()?;
write_connectors_config(codex_home.path(), &server_url)?;
write_chatgpt_auth(
codex_home.path(),
ChatGptAuthFixture::new("chatgpt-token")
.account_id("account-123")
.chatgpt_user_id("user-123")
.chatgpt_account_id("account-123"),
AuthCredentialsStoreMode::File,
)?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_TIMEOUT, mcp.initialize()).await??;
let first_request = mcp
.send_apps_list_request(AppsListParams {
limit: Some(1),
cursor: None,
})
.await?;
let first_response: JSONRPCResponse = timeout(
DEFAULT_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(first_request)),
)
.await??;
let AppsListResponse {
data: first_page,
next_cursor: first_cursor,
} = to_response(first_response)?;
let expected_first = vec![AppInfo {
id: "beta".to_string(),
name: "Beta App".to_string(),
description: None,
logo_url: None,
install_url: Some("https://chatgpt.com/apps/beta/beta".to_string()),
is_accessible: true,
}];
assert_eq!(first_page, expected_first);
let next_cursor = first_cursor.ok_or_else(|| anyhow::anyhow!("missing cursor"))?;
let second_request = mcp
.send_apps_list_request(AppsListParams {
limit: Some(1),
cursor: Some(next_cursor),
})
.await?;
let second_response: JSONRPCResponse = timeout(
DEFAULT_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(second_request)),
)
.await??;
let AppsListResponse {
data: second_page,
next_cursor: second_cursor,
} = to_response(second_response)?;
let expected_second = vec![AppInfo {
id: "alpha".to_string(),
name: "Alpha".to_string(),
description: Some("Alpha connector".to_string()),
logo_url: None,
install_url: Some("https://chatgpt.com/apps/alpha/alpha".to_string()),
is_accessible: false,
}];
assert_eq!(second_page, expected_second);
assert!(second_cursor.is_none());
server_handle.abort();
Ok(())
}
#[derive(Clone)]
struct AppsServerState {
expected_bearer: String,
expected_account_id: String,
response: serde_json::Value,
}
#[derive(Clone)]
struct AppListMcpServer {
tools: Arc<Vec<Tool>>,
}
impl AppListMcpServer {
fn new(tools: Arc<Vec<Tool>>) -> Self {
Self { tools }
}
}
impl ServerHandler for AppListMcpServer {
fn get_info(&self) -> ServerInfo {
ServerInfo {
capabilities: ServerCapabilities::builder().enable_tools().build(),
..ServerInfo::default()
}
}
fn list_tools(
&self,
_request: Option<rmcp::model::PaginatedRequestParam>,
_context: rmcp::service::RequestContext<rmcp::service::RoleServer>,
) -> impl std::future::Future<Output = Result<ListToolsResult, rmcp::ErrorData>> + Send + '_
{
let tools = self.tools.clone();
async move {
Ok(ListToolsResult {
tools: (*tools).clone(),
next_cursor: None,
meta: None,
})
}
}
}
async fn start_apps_server(
connectors: Vec<ConnectorInfo>,
tools: Vec<Tool>,
) -> Result<(String, JoinHandle<()>)> {
let state = AppsServerState {
expected_bearer: "Bearer chatgpt-token".to_string(),
expected_account_id: "account-123".to_string(),
response: json!({ "connectors": connectors }),
};
let state = Arc::new(state);
let tools = Arc::new(tools);
let listener = TcpListener::bind("127.0.0.1:0").await?;
let addr = listener.local_addr()?;
let mcp_service = StreamableHttpService::new(
{
let tools = tools.clone();
move || Ok(AppListMcpServer::new(tools.clone()))
},
Arc::new(LocalSessionManager::default()),
StreamableHttpServerConfig::default(),
);
let router = Router::new()
.route("/aip/connectors/list_accessible", post(list_connectors))
.with_state(state)
.nest_service("/api/codex/apps", mcp_service);
let handle = tokio::spawn(async move {
let _ = axum::serve(listener, router).await;
});
Ok((format!("http://{addr}"), handle))
}
async fn list_connectors(
State(state): State<Arc<AppsServerState>>,
headers: HeaderMap,
) -> Result<impl axum::response::IntoResponse, StatusCode> {
let bearer_ok = headers
.get(AUTHORIZATION)
.and_then(|value| value.to_str().ok())
.is_some_and(|value| value == state.expected_bearer);
let account_ok = headers
.get("chatgpt-account-id")
.and_then(|value| value.to_str().ok())
.is_some_and(|value| value == state.expected_account_id);
if bearer_ok && account_ok {
Ok(Json(state.response.clone()))
} else {
Err(StatusCode::UNAUTHORIZED)
}
}
fn connector_tool(connector_id: &str, connector_name: &str) -> Result<Tool> {
let schema: JsonObject = serde_json::from_value(json!({
"type": "object",
"additionalProperties": false
}))?;
let mut tool = Tool::new(
Cow::Owned(format!("connector_{connector_id}")),
Cow::Borrowed("Connector test tool"),
Arc::new(schema),
);
tool.annotations = Some(ToolAnnotations::new().read_only(true));
let mut meta = Meta::new();
meta.0
.insert("connector_id".to_string(), json!(connector_id));
meta.0
.insert("connector_name".to_string(), json!(connector_name));
tool.meta = Some(meta);
Ok(tool)
}
fn write_connectors_config(codex_home: &std::path::Path, base_url: &str) -> std::io::Result<()> {
let config_toml = codex_home.join("config.toml");
std::fs::write(
config_toml,
format!(
r#"
chatgpt_base_url = "{base_url}"
[features]
connectors = true
"#
),
)
}
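
The install URL shape these tests assert, isolated as a sketch (the doubled id is an assumption drawn from the expected values above):

```rust
fn install_url(connector_id: &str) -> String {
    format!("https://chatgpt.com/apps/{connector_id}/{connector_id}")
}

#[test]
fn install_url_doubles_the_connector_id() {
    assert_eq!(install_url("beta"), "https://chatgpt.com/apps/beta/beta");
}
```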

View File

@@ -0,0 +1,111 @@
//! Validates that the collaboration mode list endpoint returns the expected default presets.
//!
//! The test drives the app server through the MCP harness and asserts that the list response
//! includes the plan, code, pair programming, and execute modes with their default model and
//! reasoning effort settings, which keeps the API contract visible in one place.
#![allow(clippy::unwrap_used)]
use std::time::Duration;
use anyhow::Result;
use app_test_support::McpProcess;
use app_test_support::to_response;
use codex_app_server_protocol::CollaborationModeListParams;
use codex_app_server_protocol::CollaborationModeListResponse;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::RequestId;
use codex_core::models_manager::test_builtin_collaboration_mode_presets;
use codex_protocol::config_types::CollaborationModeMask;
use codex_protocol::config_types::ModeKind;
use pretty_assertions::assert_eq;
use tempfile::TempDir;
use tokio::time::timeout;
const DEFAULT_TIMEOUT: Duration = Duration::from_secs(10);
/// Confirms the server returns the default collaboration mode presets in a stable order.
#[tokio::test]
async fn list_collaboration_modes_returns_presets() -> Result<()> {
let codex_home = TempDir::new()?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_TIMEOUT, mcp.initialize()).await??;
let request_id = mcp
.send_list_collaboration_modes_request(CollaborationModeListParams {})
.await?;
let response: JSONRPCResponse = timeout(
DEFAULT_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(request_id)),
)
.await??;
let CollaborationModeListResponse { data: items } =
to_response::<CollaborationModeListResponse>(response)?;
let expected = [
plan_preset(),
code_preset(),
pair_programming_preset(),
execute_preset(),
];
assert_eq!(expected.len(), items.len());
for (expected_mask, actual_mask) in expected.iter().zip(items.iter()) {
assert_eq!(expected_mask.name, actual_mask.name);
assert_eq!(expected_mask.mode, actual_mask.mode);
assert_eq!(expected_mask.model, actual_mask.model);
assert_eq!(expected_mask.reasoning_effort, actual_mask.reasoning_effort);
assert_eq!(
expected_mask.developer_instructions,
actual_mask.developer_instructions
);
}
Ok(())
}
/// Builds the plan preset that the list response is expected to return.
///
/// If the defaults change in the app server, this helper should be updated alongside the
/// contract, or the test will fail in ways that imply a regression in the API.
fn plan_preset() -> CollaborationModeMask {
let presets = test_builtin_collaboration_mode_presets();
presets
.into_iter()
.find(|p| p.mode == Some(ModeKind::Plan))
.unwrap()
}
/// Builds the pair programming preset that the list response is expected to return.
///
/// The helper keeps the expected model and reasoning defaults co-located with the test
/// so that mismatches point directly at the API contract being exercised.
fn pair_programming_preset() -> CollaborationModeMask {
let presets = test_builtin_collaboration_mode_presets();
presets
.into_iter()
.find(|p| p.mode == Some(ModeKind::PairProgramming))
.unwrap()
}
/// Builds the code preset that the list response is expected to return.
fn code_preset() -> CollaborationModeMask {
let presets = test_builtin_collaboration_mode_presets();
presets
.into_iter()
.find(|p| p.mode == Some(ModeKind::Code))
.unwrap()
}
/// Builds the execute preset that the list response is expected to return.
///
/// The execute preset uses a different reasoning effort to capture the higher-effort
/// execution contract the server currently exposes.
fn execute_preset() -> CollaborationModeMask {
let presets = test_builtin_collaboration_mode_presets();
presets
.into_iter()
.find(|p| p.mode == Some(ModeKind::Execute))
.unwrap()
}
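
The four lookup helpers differ only in the `ModeKind`; a generic form, offered as a sketch rather than a proposed refactor:

```rust
use codex_core::models_manager::test_builtin_collaboration_mode_presets;
use codex_protocol::config_types::CollaborationModeMask;
use codex_protocol::config_types::ModeKind;

fn preset_for(mode: ModeKind) -> Option<CollaborationModeMask> {
    test_builtin_collaboration_mode_presets()
        .into_iter()
        .find(|p| p.mode == Some(mode))
}
```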

View File

@@ -18,7 +18,10 @@ use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::SandboxMode;
use codex_app_server_protocol::ToolsV2;
use codex_app_server_protocol::WriteStatus;
use codex_core::config::set_project_trust_level;
use codex_core::config_loader::SYSTEM_CONFIG_TOML_FILE_UNIX;
use codex_protocol::config_types::TrustLevel;
use codex_protocol::openai_models::ReasoningEffort;
use codex_utils_absolute_path::AbsolutePathBuf;
use pretty_assertions::assert_eq;
use serde_json::json;
@@ -53,6 +56,7 @@ sandbox_mode = "workspace-write"
let request_id = mcp
.send_config_read_request(ConfigReadParams {
include_layers: true,
cwd: None,
})
.await?;
let resp: JSONRPCResponse = timeout(
@@ -101,6 +105,7 @@ view_image = false
let request_id = mcp
.send_config_read_request(ConfigReadParams {
include_layers: true,
cwd: None,
})
.await?;
let resp: JSONRPCResponse = timeout(
@@ -141,6 +146,52 @@ view_image = false
Ok(())
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn config_read_includes_project_layers_for_cwd() -> Result<()> {
let codex_home = TempDir::new()?;
write_config(&codex_home, r#"model = "gpt-user""#)?;
let workspace = TempDir::new()?;
let project_config_dir = workspace.path().join(".codex");
std::fs::create_dir_all(&project_config_dir)?;
std::fs::write(
project_config_dir.join("config.toml"),
r#"
model_reasoning_effort = "high"
"#,
)?;
set_project_trust_level(codex_home.path(), workspace.path(), TrustLevel::Trusted)?;
let project_config = AbsolutePathBuf::try_from(project_config_dir)?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
let request_id = mcp
.send_config_read_request(ConfigReadParams {
include_layers: true,
cwd: Some(workspace.path().to_string_lossy().into_owned()),
})
.await?;
let resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(request_id)),
)
.await??;
let ConfigReadResponse {
config, origins, ..
} = to_response(resp)?;
assert_eq!(config.model_reasoning_effort, Some(ReasoningEffort::High));
assert_eq!(
origins.get("model_reasoning_effort").expect("origin").name,
ConfigLayerSource::Project {
dot_codex_folder: project_config
}
);
Ok(())
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn config_read_includes_system_layer_and_overrides() -> Result<()> {
let codex_home = TempDir::new()?;
@@ -184,7 +235,10 @@ writable_roots = [{}]
let mut mcp = McpProcess::new_with_env(
codex_home.path(),
&[("CODEX_MANAGED_CONFIG_PATH", Some(&managed_path_str))],
&[(
"CODEX_APP_SERVER_MANAGED_CONFIG_PATH",
Some(&managed_path_str),
)],
)
.await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
@@ -192,6 +246,7 @@ writable_roots = [{}]
let request_id = mcp
.send_config_read_request(ConfigReadParams {
include_layers: true,
cwd: None,
})
.await?;
let resp: JSONRPCResponse = timeout(
@@ -278,6 +333,7 @@ model = "gpt-old"
let read_id = mcp
.send_config_read_request(ConfigReadParams {
include_layers: false,
cwd: None,
})
.await?;
let read_resp: JSONRPCResponse = timeout(
@@ -312,6 +368,7 @@ model = "gpt-old"
let verify_id = mcp
.send_config_read_request(ConfigReadParams {
include_layers: false,
cwd: None,
})
.await?;
let verify_resp: JSONRPCResponse = timeout(
@@ -408,6 +465,7 @@ async fn config_batch_write_applies_multiple_edits() -> Result<()> {
let read_id = mcp
.send_config_read_request(ConfigReadParams {
include_layers: false,
cwd: None,
})
.await?;
let read_resp: JSONRPCResponse = timeout(

View File

@@ -0,0 +1,137 @@
use anyhow::Result;
use app_test_support::McpProcess;
use app_test_support::create_mock_responses_server_sequence_unchecked;
use app_test_support::to_response;
use codex_app_server_protocol::ClientInfo;
use codex_app_server_protocol::InitializeResponse;
use codex_app_server_protocol::JSONRPCMessage;
use pretty_assertions::assert_eq;
use std::path::Path;
use tempfile::TempDir;
use tokio::time::timeout;
const DEFAULT_READ_TIMEOUT: std::time::Duration = std::time::Duration::from_secs(10);
#[tokio::test]
async fn initialize_uses_client_info_name_as_originator() -> Result<()> {
let responses = Vec::new();
let server = create_mock_responses_server_sequence_unchecked(responses).await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri(), "never")?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
let message = timeout(
DEFAULT_READ_TIMEOUT,
mcp.initialize_with_client_info(ClientInfo {
name: "codex_vscode".to_string(),
title: Some("Codex VS Code Extension".to_string()),
version: "0.1.0".to_string(),
}),
)
.await??;
let JSONRPCMessage::Response(response) = message else {
anyhow::bail!("expected initialize response, got {message:?}");
};
let InitializeResponse { user_agent } = to_response::<InitializeResponse>(response)?;
assert!(user_agent.starts_with("codex_vscode/"));
Ok(())
}
#[tokio::test]
async fn initialize_respects_originator_override_env_var() -> Result<()> {
let responses = Vec::new();
let server = create_mock_responses_server_sequence_unchecked(responses).await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri(), "never")?;
let mut mcp = McpProcess::new_with_env(
codex_home.path(),
&[(
"CODEX_INTERNAL_ORIGINATOR_OVERRIDE",
Some("codex_originator_via_env_var"),
)],
)
.await?;
let message = timeout(
DEFAULT_READ_TIMEOUT,
mcp.initialize_with_client_info(ClientInfo {
name: "codex_vscode".to_string(),
title: Some("Codex VS Code Extension".to_string()),
version: "0.1.0".to_string(),
}),
)
.await??;
let JSONRPCMessage::Response(response) = message else {
anyhow::bail!("expected initialize response, got {message:?}");
};
let InitializeResponse { user_agent } = to_response::<InitializeResponse>(response)?;
assert!(user_agent.starts_with("codex_originator_via_env_var/"));
Ok(())
}
#[tokio::test]
async fn initialize_rejects_invalid_client_name() -> Result<()> {
let responses = Vec::new();
let server = create_mock_responses_server_sequence_unchecked(responses).await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri(), "never")?;
let mut mcp = McpProcess::new_with_env(
codex_home.path(),
&[("CODEX_INTERNAL_ORIGINATOR_OVERRIDE", None)],
)
.await?;
let message = timeout(
DEFAULT_READ_TIMEOUT,
mcp.initialize_with_client_info(ClientInfo {
name: "bad\rname".to_string(),
title: Some("Bad Client".to_string()),
version: "0.1.0".to_string(),
}),
)
.await??;
let JSONRPCMessage::Error(error) = message else {
anyhow::bail!("expected initialize error, got {message:?}");
};
assert_eq!(error.error.code, -32600);
assert_eq!(
error.error.message,
"Invalid clientInfo.name: 'bad\rname'. Must be a valid HTTP header value."
);
assert_eq!(error.error.data, None);
Ok(())
}
// Helper to create a config.toml pointing at the mock model server.
fn create_config_toml(
codex_home: &Path,
server_uri: &str,
approval_policy: &str,
) -> std::io::Result<()> {
let config_toml = codex_home.join("config.toml");
std::fs::write(
config_toml,
format!(
r#"
model = "mock-model"
approval_policy = "{approval_policy}"
sandbox_mode = "read-only"
model_provider = "mock_provider"
[model_providers.mock_provider]
name = "Mock provider for test"
base_url = "{server_uri}/v1"
wire_api = "responses"
request_max_retries = 0
stream_max_retries = 0
"#
),
)
}
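
Originator resolution as these three tests imply it; the precedence here is an inference (env override first, then `clientInfo.name`):

```rust
fn resolve_originator(env_override: Option<&str>, client_name: &str) -> String {
    env_override.unwrap_or(client_name).to_string()
}

#[test]
fn env_override_beats_client_info() {
    assert_eq!(
        resolve_originator(Some("codex_originator_via_env_var"), "codex_vscode"),
        "codex_originator_via_env_var"
    );
    assert_eq!(resolve_originator(None, "codex_vscode"), "codex_vscode");
}
```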

View File

@@ -1,12 +1,21 @@
mod account;
mod analytics;
mod app_list;
mod collaboration_mode_list;
mod config_rpc;
mod initialize;
mod model_list;
mod output_schema;
mod rate_limits;
mod request_user_input;
mod review;
mod thread_archive;
mod thread_fork;
mod thread_list;
mod thread_loaded_list;
mod thread_read;
mod thread_resume;
mod thread_rollback;
mod thread_start;
mod turn_interrupt;
mod turn_start;

View File

@@ -48,57 +48,32 @@ async fn list_models_returns_all_models_with_large_limit() -> Result<()> {
let expected_models = vec![
Model {
id: "gpt-5.2".to_string(),
model: "gpt-5.2".to_string(),
display_name: "gpt-5.2".to_string(),
description:
"Latest frontier model with improvements across knowledge, reasoning and coding"
.to_string(),
id: "gpt-5.2-codex".to_string(),
model: "gpt-5.2-codex".to_string(),
display_name: "gpt-5.2-codex".to_string(),
description: "Latest frontier agentic coding model.".to_string(),
supported_reasoning_efforts: vec![
ReasoningEffortOption {
reasoning_effort: ReasoningEffort::Low,
description: "Balances speed with some reasoning; useful for straightforward \
queries and short explanations"
.to_string(),
description: "Fast responses with lighter reasoning".to_string(),
},
ReasoningEffortOption {
reasoning_effort: ReasoningEffort::Medium,
description: "Provides a solid balance of reasoning depth and latency for \
general-purpose tasks"
description: "Balances speed and reasoning depth for everyday tasks"
.to_string(),
},
ReasoningEffortOption {
reasoning_effort: ReasoningEffort::High,
description: "Maximizes reasoning depth for complex or ambiguous problems"
.to_string(),
description: "Greater reasoning depth for complex problems".to_string(),
},
ReasoningEffortOption {
reasoning_effort: ReasoningEffort::XHigh,
description: "Extra high reasoning for complex problems".to_string(),
description: "Extra high reasoning depth for complex problems".to_string(),
},
],
default_reasoning_effort: ReasoningEffort::Medium,
is_default: true,
},
Model {
id: "gpt-5.1-codex-mini".to_string(),
model: "gpt-5.1-codex-mini".to_string(),
display_name: "gpt-5.1-codex-mini".to_string(),
description: "Optimized for codex. Cheaper, faster, but less capable.".to_string(),
supported_reasoning_efforts: vec![
ReasoningEffortOption {
reasoning_effort: ReasoningEffort::Medium,
description: "Dynamically adjusts reasoning based on the task".to_string(),
},
ReasoningEffortOption {
reasoning_effort: ReasoningEffort::High,
description: "Maximizes reasoning depth for complex or ambiguous problems"
.to_string(),
},
],
default_reasoning_effort: ReasoningEffort::Medium,
is_default: false,
},
Model {
id: "gpt-5.1-codex-max".to_string(),
model: "gpt-5.1-codex-max".to_string(),
@@ -127,23 +102,48 @@ async fn list_models_returns_all_models_with_large_limit() -> Result<()> {
is_default: false,
},
Model {
id: "gpt-5.2-codex".to_string(),
model: "gpt-5.2-codex".to_string(),
display_name: "gpt-5.2-codex".to_string(),
description: "Latest frontier agentic coding model.".to_string(),
id: "gpt-5.1-codex-mini".to_string(),
model: "gpt-5.1-codex-mini".to_string(),
display_name: "gpt-5.1-codex-mini".to_string(),
description: "Optimized for codex. Cheaper, faster, but less capable.".to_string(),
supported_reasoning_efforts: vec![
ReasoningEffortOption {
reasoning_effort: ReasoningEffort::Medium,
description: "Dynamically adjusts reasoning based on the task".to_string(),
},
ReasoningEffortOption {
reasoning_effort: ReasoningEffort::High,
description: "Maximizes reasoning depth for complex or ambiguous problems"
.to_string(),
},
],
default_reasoning_effort: ReasoningEffort::Medium,
is_default: false,
},
Model {
id: "gpt-5.2".to_string(),
model: "gpt-5.2".to_string(),
display_name: "gpt-5.2".to_string(),
description:
"Latest frontier model with improvements across knowledge, reasoning and coding"
.to_string(),
supported_reasoning_efforts: vec![
ReasoningEffortOption {
reasoning_effort: ReasoningEffort::Low,
description: "Fast responses with lighter reasoning".to_string(),
description: "Balances speed with some reasoning; useful for straightforward \
queries and short explanations"
.to_string(),
},
ReasoningEffortOption {
reasoning_effort: ReasoningEffort::Medium,
description: "Balances speed and reasoning depth for everyday tasks"
description: "Provides a solid balance of reasoning depth and latency for \
general-purpose tasks"
.to_string(),
},
ReasoningEffortOption {
reasoning_effort: ReasoningEffort::High,
description: "Greater reasoning depth for complex problems".to_string(),
description: "Maximizes reasoning depth for complex or ambiguous problems"
.to_string(),
},
ReasoningEffortOption {
reasoning_effort: ReasoningEffort::XHigh,
@@ -187,7 +187,7 @@ async fn list_models_pagination_works() -> Result<()> {
} = to_response::<ModelListResponse>(first_response)?;
assert_eq!(first_items.len(), 1);
assert_eq!(first_items[0].id, "gpt-5.2");
assert_eq!(first_items[0].id, "gpt-5.2-codex");
let next_cursor = first_cursor.ok_or_else(|| anyhow!("cursor for second page"))?;
let second_request = mcp
@@ -209,7 +209,7 @@ async fn list_models_pagination_works() -> Result<()> {
} = to_response::<ModelListResponse>(second_response)?;
assert_eq!(second_items.len(), 1);
assert_eq!(second_items[0].id, "gpt-5.1-codex-mini");
assert_eq!(second_items[0].id, "gpt-5.1-codex-max");
let third_cursor = second_cursor.ok_or_else(|| anyhow!("cursor for third page"))?;
let third_request = mcp
@@ -231,7 +231,7 @@ async fn list_models_pagination_works() -> Result<()> {
} = to_response::<ModelListResponse>(third_response)?;
assert_eq!(third_items.len(), 1);
assert_eq!(third_items[0].id, "gpt-5.1-codex-max");
assert_eq!(third_items[0].id, "gpt-5.1-codex-mini");
let fourth_cursor = third_cursor.ok_or_else(|| anyhow!("cursor for fourth page"))?;
let fourth_request = mcp
@@ -253,7 +253,7 @@ async fn list_models_pagination_works() -> Result<()> {
} = to_response::<ModelListResponse>(fourth_response)?;
assert_eq!(fourth_items.len(), 1);
assert_eq!(fourth_items[0].id, "gpt-5.2-codex");
assert_eq!(fourth_items[0].id, "gpt-5.2");
assert!(fourth_cursor.is_none());
Ok(())
}
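The pagination test above walks four single-item pages and asserts the new model ordering (gpt-5.2-codex, gpt-5.1-codex-max, gpt-5.1-codex-mini, gpt-5.2). As a minimal standalone sketch of that page-by-page loop — treating the cursor as the last returned id purely for illustration, not as the real cursor encoding — the consumption pattern looks like this:

    // Stand-in for ModelListResponse: a page of ids plus an optional cursor.
    struct Page {
        items: Vec<String>,
        next_cursor: Option<String>,
    }

    fn fetch_page(all: &[&str], cursor: Option<&str>, limit: usize) -> Page {
        // Resume right after the cursor id, or from the start.
        let start = match cursor {
            Some(c) => all.iter().position(|id| *id == c).map_or(0, |i| i + 1),
            None => 0,
        };
        let items: Vec<String> = all[start..]
            .iter()
            .take(limit)
            .map(|id| id.to_string())
            .collect();
        let next_cursor = if start + items.len() < all.len() {
            items.last().cloned()
        } else {
            None
        };
        Page { items, next_cursor }
    }

    fn main() {
        // The post-change page order asserted by the test.
        let ids = ["gpt-5.2-codex", "gpt-5.1-codex-max", "gpt-5.1-codex-mini", "gpt-5.2"];
        let mut cursor: Option<String> = None;
        loop {
            let page = fetch_page(&ids, cursor.as_deref(), 1);
            println!("page: {:?}", page.items);
            match page.next_cursor {
                Some(c) => cursor = Some(c),
                // The fourth page returns no cursor, as asserted above.
                None => break,
            }
        }
    }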

View File

@@ -61,6 +61,7 @@ async fn turn_start_accepts_output_schema_v2() -> Result<()> {
thread_id: thread.id.clone(),
input: vec![V2UserInput::Text {
text: "Hello".to_string(),
text_elements: Vec::new(),
}],
output_schema: Some(output_schema.clone()),
..Default::default()
@@ -142,6 +143,7 @@ async fn turn_start_output_schema_is_per_turn_v2() -> Result<()> {
thread_id: thread.id.clone(),
input: vec![V2UserInput::Text {
text: "Hello".to_string(),
text_elements: Vec::new(),
}],
output_schema: Some(output_schema.clone()),
..Default::default()
@@ -183,6 +185,7 @@ async fn turn_start_output_schema_is_per_turn_v2() -> Result<()> {
thread_id: thread.id.clone(),
input: vec![V2UserInput::Text {
text: "Hello again".to_string(),
text_elements: Vec::new(),
}],
output_schema: None,
..Default::default()
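These hunks thread the new text_elements field through every V2UserInput::Text literal. A self-contained sketch of the shape, using local stand-in types rather than the real protocol crate (the field names on the stand-in TextElement are assumptions; the tests only show its constructor):

    // Stand-ins mirroring the ByteRange/TextElement usage in these tests.
    #[derive(Debug)]
    struct ByteRange { start: usize, end: usize }

    #[derive(Debug)]
    struct TextElement { range: ByteRange, placeholder: Option<String> } // field names assumed

    #[derive(Debug)]
    enum UserInput {
        Text { text: String, text_elements: Vec<TextElement> },
    }

    fn main() {
        // Plain text input: the new field is just an empty Vec.
        let plain = UserInput::Text {
            text: "Hello".to_string(),
            text_elements: Vec::new(),
        };
        // Annotated input: mark the first five bytes with a placeholder,
        // matching the ByteRange { start: 0, end: 5 } used elsewhere in these tests.
        let annotated = UserInput::Text {
            text: "Saved user message".to_string(),
            text_elements: vec![TextElement {
                range: ByteRange { start: 0, end: 5 },
                placeholder: Some("<note>".to_string()),
            }],
        };
        println!("{plain:?}\n{annotated:?}");
    }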

View File

@@ -0,0 +1,138 @@
use anyhow::Result;
use app_test_support::McpProcess;
use app_test_support::create_final_assistant_message_sse_response;
use app_test_support::create_mock_responses_server_sequence;
use app_test_support::create_request_user_input_sse_response;
use app_test_support::to_response;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::ServerRequest;
use codex_app_server_protocol::ThreadStartParams;
use codex_app_server_protocol::ThreadStartResponse;
use codex_app_server_protocol::TurnStartParams;
use codex_app_server_protocol::TurnStartResponse;
use codex_app_server_protocol::UserInput as V2UserInput;
use codex_protocol::config_types::CollaborationMode;
use codex_protocol::config_types::ModeKind;
use codex_protocol::config_types::Settings;
use codex_protocol::openai_models::ReasoningEffort;
use tokio::time::timeout;
const DEFAULT_READ_TIMEOUT: std::time::Duration = std::time::Duration::from_secs(10);
#[tokio::test(flavor = "multi_thread", worker_threads = 4)]
async fn request_user_input_round_trip() -> Result<()> {
let codex_home = tempfile::TempDir::new()?;
let responses = vec![
create_request_user_input_sse_response("call1")?,
create_final_assistant_message_sse_response("done")?,
];
let server = create_mock_responses_server_sequence(responses).await;
create_config_toml(codex_home.path(), &server.uri())?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
let thread_start_id = mcp
.send_thread_start_request(ThreadStartParams {
model: Some("mock-model".to_string()),
..Default::default()
})
.await?;
let thread_start_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(thread_start_id)),
)
.await??;
let ThreadStartResponse { thread, .. } = to_response(thread_start_resp)?;
let turn_start_id = mcp
.send_turn_start_request(TurnStartParams {
thread_id: thread.id.clone(),
input: vec![V2UserInput::Text {
text: "ask something".to_string(),
text_elements: Vec::new(),
}],
model: Some("mock-model".to_string()),
effort: Some(ReasoningEffort::Medium),
collaboration_mode: Some(CollaborationMode {
mode: ModeKind::Plan,
settings: Settings {
model: "mock-model".to_string(),
reasoning_effort: Some(ReasoningEffort::Medium),
developer_instructions: None,
},
}),
..Default::default()
})
.await?;
let turn_start_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(turn_start_id)),
)
.await??;
let TurnStartResponse { turn, .. } = to_response(turn_start_resp)?;
let server_req = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_request_message(),
)
.await??;
let ServerRequest::ToolRequestUserInput { request_id, params } = server_req else {
panic!("expected ToolRequestUserInput request, got: {server_req:?}");
};
assert_eq!(params.thread_id, thread.id);
assert_eq!(params.turn_id, turn.id);
assert_eq!(params.item_id, "call1");
assert_eq!(params.questions.len(), 1);
mcp.send_response(
request_id,
serde_json::json!({
"answers": {
"confirm_path": { "answers": ["yes"] }
}
}),
)
.await?;
timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("codex/event/task_complete"),
)
.await??;
timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("turn/completed"),
)
.await??;
Ok(())
}
fn create_config_toml(codex_home: &std::path::Path, server_uri: &str) -> std::io::Result<()> {
let config_toml = codex_home.join("config.toml");
std::fs::write(
config_toml,
format!(
r#"
model = "mock-model"
approval_policy = "untrusted"
sandbox_mode = "read-only"
model_provider = "mock_provider"
[features]
collaboration_modes = true
[model_providers.mock_provider]
name = "Mock provider for test"
base_url = "{server_uri}/v1"
wire_api = "responses"
request_max_retries = 0
stream_max_retries = 0
"#
),
)
}
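The answer payload sent back for ToolRequestUserInput is keyed by question id (confirm_path in this test), each with its own answers list. A minimal serde_json sketch of building that same payload:

    use serde_json::json;

    fn main() {
        // Same shape the test sends as the JSON-RPC response body:
        // a map from question id to its list of answers.
        let answers = json!({
            "answers": {
                "confirm_path": { "answers": ["yes"] }
            }
        });
        println!("{answers}");
    }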

View File

@@ -1,7 +1,6 @@
use anyhow::Result;
use app_test_support::McpProcess;
use app_test_support::create_final_assistant_message_sse_response;
use app_test_support::create_mock_chat_completions_server_unchecked;
use app_test_support::create_mock_responses_server_repeating_assistant;
use app_test_support::to_response;
use codex_app_server_protocol::ItemCompletedNotification;
use codex_app_server_protocol::ItemStartedNotification;
@@ -44,10 +43,7 @@ async fn review_start_runs_review_turn_and_emits_code_review_item() -> Result<()
"overall_confidence_score": 0.75
})
.to_string();
let responses = vec![create_final_assistant_message_sse_response(
&review_payload,
)?];
let server = create_mock_chat_completions_server_unchecked(responses).await;
let server = create_mock_responses_server_repeating_assistant(&review_payload).await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri())?;
@@ -135,7 +131,7 @@ async fn review_start_runs_review_turn_and_emits_code_review_item() -> Result<()
#[tokio::test]
async fn review_start_rejects_empty_base_branch() -> Result<()> {
let server = create_mock_chat_completions_server_unchecked(vec![]).await;
let server = create_mock_responses_server_repeating_assistant("Done").await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri())?;
@@ -176,10 +172,7 @@ async fn review_start_with_detached_delivery_returns_new_thread_id() -> Result<(
"overall_confidence_score": 0.5
})
.to_string();
let responses = vec![create_final_assistant_message_sse_response(
&review_payload,
)?];
let server = create_mock_chat_completions_server_unchecked(responses).await;
let server = create_mock_responses_server_repeating_assistant(&review_payload).await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri())?;
@@ -219,7 +212,7 @@ async fn review_start_with_detached_delivery_returns_new_thread_id() -> Result<(
#[tokio::test]
async fn review_start_rejects_empty_commit_sha() -> Result<()> {
let server = create_mock_chat_completions_server_unchecked(vec![]).await;
let server = create_mock_responses_server_repeating_assistant("Done").await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri())?;
@@ -254,7 +247,7 @@ async fn review_start_rejects_empty_commit_sha() -> Result<()> {
#[tokio::test]
async fn review_start_rejects_empty_custom_instructions() -> Result<()> {
let server = create_mock_chat_completions_server_unchecked(vec![]).await;
let server = create_mock_responses_server_repeating_assistant("Done").await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri())?;
@@ -320,7 +313,7 @@ model_provider = "mock_provider"
[model_providers.mock_provider]
name = "Mock provider"
base_url = "{server_uri}/v1"
wire_api = "chat"
wire_api = "responses"
request_max_retries = 0
stream_max_retries = 0
"#

View File

@@ -8,7 +8,7 @@ use codex_app_server_protocol::ThreadArchiveResponse;
use codex_app_server_protocol::ThreadStartParams;
use codex_app_server_protocol::ThreadStartResponse;
use codex_core::ARCHIVED_SESSIONS_SUBDIR;
use codex_core::find_conversation_path_by_id_str;
use codex_core::find_thread_path_by_id_str;
use std::path::Path;
use tempfile::TempDir;
use tokio::time::timeout;
@@ -39,7 +39,7 @@ async fn thread_archive_moves_rollout_into_archived_directory() -> Result<()> {
assert!(!thread.id.is_empty());
// Locate the rollout path recorded for this thread id.
let rollout_path = find_conversation_path_by_id_str(codex_home.path(), &thread.id)
let rollout_path = find_thread_path_by_id_str(codex_home.path(), &thread.id)
.await?
.expect("expected rollout path for thread id to exist");
assert!(

View File

@@ -0,0 +1,142 @@
use anyhow::Result;
use app_test_support::McpProcess;
use app_test_support::create_fake_rollout;
use app_test_support::create_mock_responses_server_repeating_assistant;
use app_test_support::to_response;
use codex_app_server_protocol::JSONRPCNotification;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::SessionSource;
use codex_app_server_protocol::ThreadForkParams;
use codex_app_server_protocol::ThreadForkResponse;
use codex_app_server_protocol::ThreadItem;
use codex_app_server_protocol::ThreadStartedNotification;
use codex_app_server_protocol::TurnStatus;
use codex_app_server_protocol::UserInput;
use pretty_assertions::assert_eq;
use std::path::Path;
use tempfile::TempDir;
use tokio::time::timeout;
const DEFAULT_READ_TIMEOUT: std::time::Duration = std::time::Duration::from_secs(10);
#[tokio::test]
async fn thread_fork_creates_new_thread_and_emits_started() -> Result<()> {
let server = create_mock_responses_server_repeating_assistant("Done").await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri())?;
let preview = "Saved user message";
let conversation_id = create_fake_rollout(
codex_home.path(),
"2025-01-05T12-00-00",
"2025-01-05T12:00:00Z",
preview,
Some("mock_provider"),
None,
)?;
let original_path = codex_home
.path()
.join("sessions")
.join("2025")
.join("01")
.join("05")
.join(format!(
"rollout-2025-01-05T12-00-00-{conversation_id}.jsonl"
));
assert!(
original_path.exists(),
"expected original rollout to exist at {}",
original_path.display()
);
let original_contents = std::fs::read_to_string(&original_path)?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
let fork_id = mcp
.send_thread_fork_request(ThreadForkParams {
thread_id: conversation_id.clone(),
..Default::default()
})
.await?;
let fork_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(fork_id)),
)
.await??;
let ThreadForkResponse { thread, .. } = to_response::<ThreadForkResponse>(fork_resp)?;
let after_contents = std::fs::read_to_string(&original_path)?;
assert_eq!(
after_contents, original_contents,
"fork should not mutate the original rollout file"
);
assert_ne!(thread.id, conversation_id);
assert_eq!(thread.preview, preview);
assert_eq!(thread.model_provider, "mock_provider");
let thread_path = thread.path.clone().expect("thread path");
assert!(thread_path.is_absolute());
assert_ne!(thread_path, original_path);
assert!(thread.cwd.is_absolute());
assert_eq!(thread.source, SessionSource::VsCode);
assert_eq!(
thread.turns.len(),
1,
"expected forked thread to include one turn"
);
let turn = &thread.turns[0];
assert_eq!(turn.status, TurnStatus::Completed);
assert_eq!(turn.items.len(), 1, "expected user message item");
match &turn.items[0] {
ThreadItem::UserMessage { content, .. } => {
assert_eq!(
content,
&vec![UserInput::Text {
text: preview.to_string(),
text_elements: Vec::new(),
}]
);
}
other => panic!("expected user message item, got {other:?}"),
}
// A corresponding thread/started notification should arrive.
let notif: JSONRPCNotification = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("thread/started"),
)
.await??;
let started: ThreadStartedNotification =
serde_json::from_value(notif.params.expect("params must be present"))?;
assert_eq!(started.thread, thread);
Ok(())
}
// Helper to create a config.toml pointing at the mock model server.
fn create_config_toml(codex_home: &Path, server_uri: &str) -> std::io::Result<()> {
let config_toml = codex_home.join("config.toml");
std::fs::write(
config_toml,
format!(
r#"
model = "mock-model"
approval_policy = "never"
sandbox_mode = "read-only"
model_provider = "mock_provider"
[model_providers.mock_provider]
name = "Mock provider for test"
base_url = "{server_uri}/v1"
wire_api = "responses"
request_max_retries = 0
stream_max_retries = 0
"#
),
)
}
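The fork test derives the original rollout path from a fixed layout: sessions/YYYY/MM/DD/rollout-&lt;timestamp&gt;-&lt;thread_id&gt;.jsonl under the codex home. A small sketch of that path construction, assuming the YYYY-MM-DDTHH-MM-SS timestamp format the test uses:

    use std::path::{Path, PathBuf};

    // Sketch of the rollout path layout the fork test asserts on:
    // <codex_home>/sessions/<YYYY>/<MM>/<DD>/rollout-<timestamp>-<thread_id>.jsonl
    fn rollout_path_for(codex_home: &Path, ts: &str, thread_id: &str) -> PathBuf {
        // Assumes ts is formatted as YYYY-MM-DDTHH-MM-SS, as in the test.
        let (year, rest) = ts.split_at(4);
        let month = &rest[1..3];
        let day = &rest[4..6];
        codex_home
            .join("sessions")
            .join(year)
            .join(month)
            .join(day)
            .join(format!("rollout-{ts}-{thread_id}.jsonl"))
    }

    fn main() {
        let p = rollout_path_for(Path::new("/tmp/codex"), "2025-01-05T12-00-00", "abc123");
        println!("{}", p.display());
    }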

View File

@@ -1,17 +1,29 @@
use anyhow::Result;
use app_test_support::McpProcess;
use app_test_support::create_fake_rollout;
use app_test_support::rollout_path;
use app_test_support::to_response;
use chrono::DateTime;
use chrono::Utc;
use codex_app_server_protocol::GitInfo as ApiGitInfo;
use codex_app_server_protocol::JSONRPCError;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::SessionSource;
use codex_app_server_protocol::ThreadListResponse;
use codex_app_server_protocol::ThreadSortKey;
use codex_core::ARCHIVED_SESSIONS_SUBDIR;
use codex_protocol::protocol::GitInfo as CoreGitInfo;
use pretty_assertions::assert_eq;
use std::cmp::Reverse;
use std::fs;
use std::fs::FileTimes;
use std::fs::OpenOptions;
use std::path::Path;
use std::path::PathBuf;
use tempfile::TempDir;
use tokio::time::timeout;
use uuid::Uuid;
const DEFAULT_READ_TIMEOUT: std::time::Duration = std::time::Duration::from_secs(10);
@@ -26,12 +38,26 @@ async fn list_threads(
cursor: Option<String>,
limit: Option<u32>,
providers: Option<Vec<String>>,
archived: Option<bool>,
) -> Result<ThreadListResponse> {
list_threads_with_sort(mcp, cursor, limit, providers, None, archived).await
}
async fn list_threads_with_sort(
mcp: &mut McpProcess,
cursor: Option<String>,
limit: Option<u32>,
providers: Option<Vec<String>>,
sort_key: Option<ThreadSortKey>,
archived: Option<bool>,
) -> Result<ThreadListResponse> {
let request_id = mcp
.send_thread_list_request(codex_app_server_protocol::ThreadListParams {
cursor,
limit,
sort_key,
model_providers: providers,
archived,
})
.await?;
let resp: JSONRPCResponse = timeout(
@@ -82,6 +108,16 @@ fn timestamp_at(
)
}
fn set_rollout_mtime(path: &Path, updated_at_rfc3339: &str) -> Result<()> {
let parsed = DateTime::parse_from_rfc3339(updated_at_rfc3339)?.with_timezone(&Utc);
let times = FileTimes::new().set_modified(parsed.into());
// Open in append mode so the rollout contents stay untouched while still
// yielding a writable handle for set_times.
OpenOptions::new()
.append(true)
.open(path)?
.set_times(times)?;
Ok(())
}
#[tokio::test]
async fn thread_list_basic_empty() -> Result<()> {
let codex_home = TempDir::new()?;
@@ -94,6 +130,7 @@ async fn thread_list_basic_empty() -> Result<()> {
None,
Some(10),
Some(vec!["mock_provider".to_string()]),
None,
)
.await?;
assert!(data.is_empty());
@@ -156,6 +193,7 @@ async fn thread_list_pagination_next_cursor_none_on_last_page() -> Result<()> {
None,
Some(2),
Some(vec!["mock_provider".to_string()]),
None,
)
.await?;
assert_eq!(data1.len(), 2);
@@ -163,6 +201,7 @@ async fn thread_list_pagination_next_cursor_none_on_last_page() -> Result<()> {
assert_eq!(thread.preview, "Hello");
assert_eq!(thread.model_provider, "mock_provider");
assert!(thread.created_at > 0);
assert_eq!(thread.updated_at, thread.created_at);
assert_eq!(thread.cwd, PathBuf::from("/"));
assert_eq!(thread.cli_version, "0.0.0");
assert_eq!(thread.source, SessionSource::Cli);
@@ -179,6 +218,7 @@ async fn thread_list_pagination_next_cursor_none_on_last_page() -> Result<()> {
Some(cursor1),
Some(2),
Some(vec!["mock_provider".to_string()]),
None,
)
.await?;
assert!(data2.len() <= 2);
@@ -186,6 +226,7 @@ async fn thread_list_pagination_next_cursor_none_on_last_page() -> Result<()> {
assert_eq!(thread.preview, "Hello");
assert_eq!(thread.model_provider, "mock_provider");
assert!(thread.created_at > 0);
assert_eq!(thread.updated_at, thread.created_at);
assert_eq!(thread.cwd, PathBuf::from("/"));
assert_eq!(thread.cli_version, "0.0.0");
assert_eq!(thread.source, SessionSource::Cli);
@@ -227,6 +268,7 @@ async fn thread_list_respects_provider_filter() -> Result<()> {
None,
Some(10),
Some(vec!["other_provider".to_string()]),
None,
)
.await?;
assert_eq!(data.len(), 1);
@@ -236,6 +278,7 @@ async fn thread_list_respects_provider_filter() -> Result<()> {
assert_eq!(thread.model_provider, "other_provider");
let expected_ts = chrono::DateTime::parse_from_rfc3339("2025-01-02T11:00:00Z")?.timestamp();
assert_eq!(thread.created_at, expected_ts);
assert_eq!(thread.updated_at, expected_ts);
assert_eq!(thread.cwd, PathBuf::from("/"));
assert_eq!(thread.cli_version, "0.0.0");
assert_eq!(thread.source, SessionSource::Cli);
@@ -275,6 +318,7 @@ async fn thread_list_fetches_until_limit_or_exhausted() -> Result<()> {
None,
Some(8),
Some(vec!["target_provider".to_string()]),
None,
)
.await?;
assert_eq!(
@@ -319,6 +363,7 @@ async fn thread_list_enforces_max_limit() -> Result<()> {
None,
Some(200),
Some(vec!["mock_provider".to_string()]),
None,
)
.await?;
assert_eq!(
@@ -364,6 +409,7 @@ async fn thread_list_stops_when_not_enough_filtered_results_exist() -> Result<()
None,
Some(10),
Some(vec!["target_provider".to_string()]),
None,
)
.await?;
assert_eq!(
@@ -410,6 +456,7 @@ async fn thread_list_includes_git_info() -> Result<()> {
None,
Some(10),
Some(vec!["mock_provider".to_string()]),
None,
)
.await?;
let thread = data
@@ -429,3 +476,418 @@ async fn thread_list_includes_git_info() -> Result<()> {
Ok(())
}
#[tokio::test]
async fn thread_list_default_sorts_by_created_at() -> Result<()> {
let codex_home = TempDir::new()?;
create_minimal_config(codex_home.path())?;
let id_a = create_fake_rollout(
codex_home.path(),
"2025-01-02T12-00-00",
"2025-01-02T12:00:00Z",
"Hello",
Some("mock_provider"),
None,
)?;
let id_b = create_fake_rollout(
codex_home.path(),
"2025-01-01T13-00-00",
"2025-01-01T13:00:00Z",
"Hello",
Some("mock_provider"),
None,
)?;
let id_c = create_fake_rollout(
codex_home.path(),
"2025-01-01T12-00-00",
"2025-01-01T12:00:00Z",
"Hello",
Some("mock_provider"),
None,
)?;
let mut mcp = init_mcp(codex_home.path()).await?;
let ThreadListResponse { data, .. } = list_threads_with_sort(
&mut mcp,
None,
Some(10),
Some(vec!["mock_provider".to_string()]),
None,
None,
)
.await?;
let ids: Vec<_> = data.iter().map(|thread| thread.id.as_str()).collect();
assert_eq!(ids, vec![id_a.as_str(), id_b.as_str(), id_c.as_str()]);
Ok(())
}
#[tokio::test]
async fn thread_list_sort_updated_at_orders_by_mtime() -> Result<()> {
let codex_home = TempDir::new()?;
create_minimal_config(codex_home.path())?;
let id_old = create_fake_rollout(
codex_home.path(),
"2025-01-01T10-00-00",
"2025-01-01T10:00:00Z",
"Hello",
Some("mock_provider"),
None,
)?;
let id_mid = create_fake_rollout(
codex_home.path(),
"2025-01-01T11-00-00",
"2025-01-01T11:00:00Z",
"Hello",
Some("mock_provider"),
None,
)?;
let id_new = create_fake_rollout(
codex_home.path(),
"2025-01-01T12-00-00",
"2025-01-01T12:00:00Z",
"Hello",
Some("mock_provider"),
None,
)?;
set_rollout_mtime(
rollout_path(codex_home.path(), "2025-01-01T10-00-00", &id_old).as_path(),
"2025-01-03T00:00:00Z",
)?;
set_rollout_mtime(
rollout_path(codex_home.path(), "2025-01-01T11-00-00", &id_mid).as_path(),
"2025-01-02T00:00:00Z",
)?;
set_rollout_mtime(
rollout_path(codex_home.path(), "2025-01-01T12-00-00", &id_new).as_path(),
"2025-01-01T00:00:00Z",
)?;
let mut mcp = init_mcp(codex_home.path()).await?;
let ThreadListResponse { data, .. } = list_threads_with_sort(
&mut mcp,
None,
Some(10),
Some(vec!["mock_provider".to_string()]),
Some(ThreadSortKey::UpdatedAt),
None,
)
.await?;
let ids: Vec<_> = data.iter().map(|thread| thread.id.as_str()).collect();
assert_eq!(ids, vec![id_old.as_str(), id_mid.as_str(), id_new.as_str()]);
Ok(())
}
#[tokio::test]
async fn thread_list_updated_at_paginates_with_cursor() -> Result<()> {
let codex_home = TempDir::new()?;
create_minimal_config(codex_home.path())?;
let id_a = create_fake_rollout(
codex_home.path(),
"2025-02-01T10-00-00",
"2025-02-01T10:00:00Z",
"Hello",
Some("mock_provider"),
None,
)?;
let id_b = create_fake_rollout(
codex_home.path(),
"2025-02-01T11-00-00",
"2025-02-01T11:00:00Z",
"Hello",
Some("mock_provider"),
None,
)?;
let id_c = create_fake_rollout(
codex_home.path(),
"2025-02-01T12-00-00",
"2025-02-01T12:00:00Z",
"Hello",
Some("mock_provider"),
None,
)?;
set_rollout_mtime(
rollout_path(codex_home.path(), "2025-02-01T10-00-00", &id_a).as_path(),
"2025-02-03T00:00:00Z",
)?;
set_rollout_mtime(
rollout_path(codex_home.path(), "2025-02-01T11-00-00", &id_b).as_path(),
"2025-02-02T00:00:00Z",
)?;
set_rollout_mtime(
rollout_path(codex_home.path(), "2025-02-01T12-00-00", &id_c).as_path(),
"2025-02-01T00:00:00Z",
)?;
let mut mcp = init_mcp(codex_home.path()).await?;
let ThreadListResponse {
data: page1,
next_cursor: cursor1,
} = list_threads_with_sort(
&mut mcp,
None,
Some(2),
Some(vec!["mock_provider".to_string()]),
Some(ThreadSortKey::UpdatedAt),
None,
)
.await?;
let ids_page1: Vec<_> = page1.iter().map(|thread| thread.id.as_str()).collect();
assert_eq!(ids_page1, vec![id_a.as_str(), id_b.as_str()]);
let cursor1 = cursor1.expect("expected nextCursor on first page");
let ThreadListResponse {
data: page2,
next_cursor: cursor2,
} = list_threads_with_sort(
&mut mcp,
Some(cursor1),
Some(2),
Some(vec!["mock_provider".to_string()]),
Some(ThreadSortKey::UpdatedAt),
None,
)
.await?;
let ids_page2: Vec<_> = page2.iter().map(|thread| thread.id.as_str()).collect();
assert_eq!(ids_page2, vec![id_c.as_str()]);
assert_eq!(cursor2, None);
Ok(())
}
#[tokio::test]
async fn thread_list_created_at_tie_breaks_by_uuid() -> Result<()> {
let codex_home = TempDir::new()?;
create_minimal_config(codex_home.path())?;
let id_a = create_fake_rollout(
codex_home.path(),
"2025-02-01T10-00-00",
"2025-02-01T10:00:00Z",
"Hello",
Some("mock_provider"),
None,
)?;
let id_b = create_fake_rollout(
codex_home.path(),
"2025-02-01T10-00-00",
"2025-02-01T10:00:00Z",
"Hello",
Some("mock_provider"),
None,
)?;
let mut mcp = init_mcp(codex_home.path()).await?;
let ThreadListResponse { data, .. } = list_threads(
&mut mcp,
None,
Some(10),
Some(vec!["mock_provider".to_string()]),
None,
)
.await?;
let ids: Vec<_> = data.iter().map(|thread| thread.id.as_str()).collect();
let mut expected = [id_a, id_b];
expected.sort_by_key(|id| Reverse(Uuid::parse_str(id).expect("uuid should parse")));
let expected: Vec<_> = expected.iter().map(String::as_str).collect();
assert_eq!(ids, expected);
Ok(())
}
#[tokio::test]
async fn thread_list_updated_at_tie_breaks_by_uuid() -> Result<()> {
let codex_home = TempDir::new()?;
create_minimal_config(codex_home.path())?;
let id_a = create_fake_rollout(
codex_home.path(),
"2025-02-01T10-00-00",
"2025-02-01T10:00:00Z",
"Hello",
Some("mock_provider"),
None,
)?;
let id_b = create_fake_rollout(
codex_home.path(),
"2025-02-01T11-00-00",
"2025-02-01T11:00:00Z",
"Hello",
Some("mock_provider"),
None,
)?;
let updated_at = "2025-02-03T00:00:00Z";
set_rollout_mtime(
rollout_path(codex_home.path(), "2025-02-01T10-00-00", &id_a).as_path(),
updated_at,
)?;
set_rollout_mtime(
rollout_path(codex_home.path(), "2025-02-01T11-00-00", &id_b).as_path(),
updated_at,
)?;
let mut mcp = init_mcp(codex_home.path()).await?;
let ThreadListResponse { data, .. } = list_threads_with_sort(
&mut mcp,
None,
Some(10),
Some(vec!["mock_provider".to_string()]),
Some(ThreadSortKey::UpdatedAt),
None,
)
.await?;
let ids: Vec<_> = data.iter().map(|thread| thread.id.as_str()).collect();
let mut expected = [id_a, id_b];
expected.sort_by_key(|id| Reverse(Uuid::parse_str(id).expect("uuid should parse")));
let expected: Vec<_> = expected.iter().map(String::as_str).collect();
assert_eq!(ids, expected);
Ok(())
}
#[tokio::test]
async fn thread_list_updated_at_uses_mtime() -> Result<()> {
let codex_home = TempDir::new()?;
create_minimal_config(codex_home.path())?;
let thread_id = create_fake_rollout(
codex_home.path(),
"2025-02-01T10-00-00",
"2025-02-01T10:00:00Z",
"Hello",
Some("mock_provider"),
None,
)?;
set_rollout_mtime(
rollout_path(codex_home.path(), "2025-02-01T10-00-00", &thread_id).as_path(),
"2025-02-05T00:00:00Z",
)?;
let mut mcp = init_mcp(codex_home.path()).await?;
let ThreadListResponse { data, .. } = list_threads_with_sort(
&mut mcp,
None,
Some(10),
Some(vec!["mock_provider".to_string()]),
Some(ThreadSortKey::UpdatedAt),
None,
)
.await?;
let thread = data
.iter()
.find(|item| item.id == thread_id)
.expect("expected thread for created rollout");
let expected_created =
chrono::DateTime::parse_from_rfc3339("2025-02-01T10:00:00Z")?.timestamp();
let expected_updated =
chrono::DateTime::parse_from_rfc3339("2025-02-05T00:00:00Z")?.timestamp();
assert_eq!(thread.created_at, expected_created);
assert_eq!(thread.updated_at, expected_updated);
Ok(())
}
#[tokio::test]
async fn thread_list_archived_filter() -> Result<()> {
let codex_home = TempDir::new()?;
create_minimal_config(codex_home.path())?;
let active_id = create_fake_rollout(
codex_home.path(),
"2025-03-01T10-00-00",
"2025-03-01T10:00:00Z",
"Active",
Some("mock_provider"),
None,
)?;
let archived_id = create_fake_rollout(
codex_home.path(),
"2025-03-01T09-00-00",
"2025-03-01T09:00:00Z",
"Archived",
Some("mock_provider"),
None,
)?;
let archived_dir = codex_home.path().join(ARCHIVED_SESSIONS_SUBDIR);
fs::create_dir_all(&archived_dir)?;
let archived_source = rollout_path(codex_home.path(), "2025-03-01T09-00-00", &archived_id);
let archived_dest = archived_dir.join(
archived_source
.file_name()
.expect("archived rollout should have a file name"),
);
fs::rename(&archived_source, &archived_dest)?;
let mut mcp = init_mcp(codex_home.path()).await?;
let ThreadListResponse { data, .. } = list_threads(
&mut mcp,
None,
Some(10),
Some(vec!["mock_provider".to_string()]),
None,
)
.await?;
assert_eq!(data.len(), 1);
assert_eq!(data[0].id, active_id);
let ThreadListResponse { data, .. } = list_threads(
&mut mcp,
None,
Some(10),
Some(vec!["mock_provider".to_string()]),
Some(true),
)
.await?;
assert_eq!(data.len(), 1);
assert_eq!(data[0].id, archived_id);
Ok(())
}
#[tokio::test]
async fn thread_list_invalid_cursor_returns_error() -> Result<()> {
let codex_home = TempDir::new()?;
create_minimal_config(codex_home.path())?;
let mut mcp = init_mcp(codex_home.path()).await?;
let request_id = mcp
.send_thread_list_request(codex_app_server_protocol::ThreadListParams {
cursor: Some("not-a-cursor".to_string()),
limit: Some(2),
sort_key: None,
model_providers: Some(vec!["mock_provider".to_string()]),
archived: None,
})
.await?;
let error: JSONRPCError = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_error_message(RequestId::Integer(request_id)),
)
.await??;
assert_eq!(error.error.code, -32600);
assert_eq!(error.error.message, "invalid cursor: not-a-cursor");
Ok(())
}
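Both tie-break tests assert the same ordering rule: when the sort timestamp ties, threads are ordered by UUID descending. A tiny sketch of that comparison, mirroring the sort_by_key(Reverse(...)) pattern the tests use to build their expectation:

    use std::cmp::Reverse;
    use uuid::Uuid;

    fn main() {
        // When created_at (or mtime) ties, higher UUID sorts first.
        let mut ids = vec![
            "00000000-0000-0000-0000-000000000001".to_string(),
            "00000000-0000-0000-0000-000000000002".to_string(),
        ];
        ids.sort_by_key(|id| Reverse(Uuid::parse_str(id).expect("uuid should parse")));
        assert_eq!(ids[0], "00000000-0000-0000-0000-000000000002");
        println!("{ids:?}");
    }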

View File

@@ -0,0 +1,139 @@
use anyhow::Result;
use app_test_support::McpProcess;
use app_test_support::create_mock_responses_server_repeating_assistant;
use app_test_support::to_response;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::ThreadLoadedListParams;
use codex_app_server_protocol::ThreadLoadedListResponse;
use codex_app_server_protocol::ThreadStartParams;
use codex_app_server_protocol::ThreadStartResponse;
use pretty_assertions::assert_eq;
use std::path::Path;
use tempfile::TempDir;
use tokio::time::timeout;
const DEFAULT_READ_TIMEOUT: std::time::Duration = std::time::Duration::from_secs(10);
#[tokio::test]
async fn thread_loaded_list_returns_loaded_thread_ids() -> Result<()> {
let server = create_mock_responses_server_repeating_assistant("Done").await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri())?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
let thread_id = start_thread(&mut mcp).await?;
let list_id = mcp
.send_thread_loaded_list_request(ThreadLoadedListParams::default())
.await?;
let resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(list_id)),
)
.await??;
let ThreadLoadedListResponse {
mut data,
next_cursor,
} = to_response::<ThreadLoadedListResponse>(resp)?;
data.sort();
assert_eq!(data, vec![thread_id]);
assert_eq!(next_cursor, None);
Ok(())
}
#[tokio::test]
async fn thread_loaded_list_paginates() -> Result<()> {
let server = create_mock_responses_server_repeating_assistant("Done").await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri())?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
let first = start_thread(&mut mcp).await?;
let second = start_thread(&mut mcp).await?;
let mut expected = [first, second];
expected.sort();
let list_id = mcp
.send_thread_loaded_list_request(ThreadLoadedListParams {
cursor: None,
limit: Some(1),
})
.await?;
let resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(list_id)),
)
.await??;
let ThreadLoadedListResponse {
data: first_page,
next_cursor,
} = to_response::<ThreadLoadedListResponse>(resp)?;
assert_eq!(first_page, vec![expected[0].clone()]);
assert_eq!(next_cursor, Some(expected[0].clone()));
let list_id = mcp
.send_thread_loaded_list_request(ThreadLoadedListParams {
cursor: next_cursor,
limit: Some(1),
})
.await?;
let resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(list_id)),
)
.await??;
let ThreadLoadedListResponse {
data: second_page,
next_cursor,
} = to_response::<ThreadLoadedListResponse>(resp)?;
assert_eq!(second_page, vec![expected[1].clone()]);
assert_eq!(next_cursor, None);
Ok(())
}
fn create_config_toml(codex_home: &Path, server_uri: &str) -> std::io::Result<()> {
let config_toml = codex_home.join("config.toml");
std::fs::write(
config_toml,
format!(
r#"
model = "mock-model"
approval_policy = "never"
sandbox_mode = "read-only"
model_provider = "mock_provider"
[model_providers.mock_provider]
name = "Mock provider for test"
base_url = "{server_uri}/v1"
wire_api = "responses"
request_max_retries = 0
stream_max_retries = 0
"#
),
)
}
async fn start_thread(mcp: &mut McpProcess) -> Result<String> {
let req_id = mcp
.send_thread_start_request(ThreadStartParams {
model: Some("gpt-5.1".to_string()),
..Default::default()
})
.await?;
let resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(req_id)),
)
.await??;
let ThreadStartResponse { thread, .. } = to_response::<ThreadStartResponse>(resp)?;
Ok(thread.id)
}

View File

@@ -0,0 +1,159 @@
use anyhow::Result;
use app_test_support::McpProcess;
use app_test_support::create_fake_rollout_with_text_elements;
use app_test_support::create_mock_responses_server_repeating_assistant;
use app_test_support::to_response;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::SessionSource;
use codex_app_server_protocol::ThreadItem;
use codex_app_server_protocol::ThreadReadParams;
use codex_app_server_protocol::ThreadReadResponse;
use codex_app_server_protocol::TurnStatus;
use codex_app_server_protocol::UserInput;
use codex_protocol::user_input::ByteRange;
use codex_protocol::user_input::TextElement;
use pretty_assertions::assert_eq;
use std::path::Path;
use std::path::PathBuf;
use tempfile::TempDir;
use tokio::time::timeout;
const DEFAULT_READ_TIMEOUT: std::time::Duration = std::time::Duration::from_secs(10);
#[tokio::test]
async fn thread_read_returns_summary_without_turns() -> Result<()> {
let server = create_mock_responses_server_repeating_assistant("Done").await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri())?;
let preview = "Saved user message";
let text_elements = [TextElement::new(
ByteRange { start: 0, end: 5 },
Some("<note>".into()),
)];
let conversation_id = create_fake_rollout_with_text_elements(
codex_home.path(),
"2025-01-05T12-00-00",
"2025-01-05T12:00:00Z",
preview,
text_elements
.iter()
.map(|elem| serde_json::to_value(elem).expect("serialize text element"))
.collect(),
Some("mock_provider"),
None,
)?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
let read_id = mcp
.send_thread_read_request(ThreadReadParams {
thread_id: conversation_id.clone(),
include_turns: false,
})
.await?;
let read_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(read_id)),
)
.await??;
let ThreadReadResponse { thread } = to_response::<ThreadReadResponse>(read_resp)?;
assert_eq!(thread.id, conversation_id);
assert_eq!(thread.preview, preview);
assert_eq!(thread.model_provider, "mock_provider");
assert!(thread.path.as_ref().expect("thread path").is_absolute());
assert_eq!(thread.cwd, PathBuf::from("/"));
assert_eq!(thread.cli_version, "0.0.0");
assert_eq!(thread.source, SessionSource::Cli);
assert_eq!(thread.git_info, None);
assert_eq!(thread.turns.len(), 0);
Ok(())
}
#[tokio::test]
async fn thread_read_can_include_turns() -> Result<()> {
let server = create_mock_responses_server_repeating_assistant("Done").await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri())?;
let preview = "Saved user message";
let text_elements = vec![TextElement::new(
ByteRange { start: 0, end: 5 },
Some("<note>".into()),
)];
let conversation_id = create_fake_rollout_with_text_elements(
codex_home.path(),
"2025-01-05T12-00-00",
"2025-01-05T12:00:00Z",
preview,
text_elements
.iter()
.map(|elem| serde_json::to_value(elem).expect("serialize text element"))
.collect(),
Some("mock_provider"),
None,
)?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
let read_id = mcp
.send_thread_read_request(ThreadReadParams {
thread_id: conversation_id.clone(),
include_turns: true,
})
.await?;
let read_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(read_id)),
)
.await??;
let ThreadReadResponse { thread } = to_response::<ThreadReadResponse>(read_resp)?;
assert_eq!(thread.turns.len(), 1);
let turn = &thread.turns[0];
assert_eq!(turn.status, TurnStatus::Completed);
assert_eq!(turn.items.len(), 1, "expected user message item");
match &turn.items[0] {
ThreadItem::UserMessage { content, .. } => {
assert_eq!(
content,
&vec![UserInput::Text {
text: preview.to_string(),
text_elements: text_elements.clone().into_iter().map(Into::into).collect(),
}]
);
}
other => panic!("expected user message item, got {other:?}"),
}
Ok(())
}
// Helper to create a config.toml pointing at the mock model server.
fn create_config_toml(codex_home: &Path, server_uri: &str) -> std::io::Result<()> {
let config_toml = codex_home.join("config.toml");
std::fs::write(
config_toml,
format!(
r#"
model = "mock-model"
approval_policy = "never"
sandbox_mode = "read-only"
model_provider = "mock_provider"
[model_providers.mock_provider]
name = "Mock provider for test"
base_url = "{server_uri}/v1"
wire_api = "responses"
request_max_retries = 0
stream_max_retries = 0
"#
),
)
}

View File

@@ -1,7 +1,7 @@
use anyhow::Result;
use app_test_support::McpProcess;
use app_test_support::create_fake_rollout;
use app_test_support::create_mock_chat_completions_server;
use app_test_support::create_fake_rollout_with_text_elements;
use app_test_support::create_mock_responses_server_repeating_assistant;
use app_test_support::to_response;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::RequestId;
@@ -11,19 +11,27 @@ use codex_app_server_protocol::ThreadResumeParams;
use codex_app_server_protocol::ThreadResumeResponse;
use codex_app_server_protocol::ThreadStartParams;
use codex_app_server_protocol::ThreadStartResponse;
use codex_app_server_protocol::TurnStartParams;
use codex_app_server_protocol::TurnStatus;
use codex_app_server_protocol::UserInput;
use codex_protocol::config_types::Personality;
use codex_protocol::models::ContentItem;
use codex_protocol::models::ResponseItem;
use codex_protocol::user_input::ByteRange;
use codex_protocol::user_input::TextElement;
use core_test_support::responses;
use core_test_support::skip_if_no_network;
use pretty_assertions::assert_eq;
use std::path::PathBuf;
use tempfile::TempDir;
use tokio::time::timeout;
const DEFAULT_READ_TIMEOUT: std::time::Duration = std::time::Duration::from_secs(10);
const DEFAULT_BASE_INSTRUCTIONS: &str = "You are Codex, based on GPT-5. You are running as a coding agent in the Codex CLI on a user's computer.";
#[tokio::test]
async fn thread_resume_returns_original_thread() -> Result<()> {
let server = create_mock_chat_completions_server(vec![]).await;
let server = create_mock_responses_server_repeating_assistant("Done").await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri())?;
@@ -59,23 +67,33 @@ async fn thread_resume_returns_original_thread() -> Result<()> {
let ThreadResumeResponse {
thread: resumed, ..
} = to_response::<ThreadResumeResponse>(resume_resp)?;
assert_eq!(resumed, thread);
let mut expected = thread;
expected.updated_at = resumed.updated_at;
assert_eq!(resumed, expected);
Ok(())
}
#[tokio::test]
async fn thread_resume_returns_rollout_history() -> Result<()> {
let server = create_mock_chat_completions_server(vec![]).await;
let server = create_mock_responses_server_repeating_assistant("Done").await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri())?;
let preview = "Saved user message";
let conversation_id = create_fake_rollout(
let text_elements = vec![TextElement::new(
ByteRange { start: 0, end: 5 },
Some("<note>".into()),
)];
let conversation_id = create_fake_rollout_with_text_elements(
codex_home.path(),
"2025-01-05T12-00-00",
"2025-01-05T12:00:00Z",
preview,
text_elements
.iter()
.map(|elem| serde_json::to_value(elem).expect("serialize text element"))
.collect(),
Some("mock_provider"),
None,
)?;
@@ -99,7 +117,7 @@ async fn thread_resume_returns_rollout_history() -> Result<()> {
assert_eq!(thread.id, conversation_id);
assert_eq!(thread.preview, preview);
assert_eq!(thread.model_provider, "mock_provider");
assert!(thread.path.is_absolute());
assert!(thread.path.as_ref().expect("thread path").is_absolute());
assert_eq!(thread.cwd, PathBuf::from("/"));
assert_eq!(thread.cli_version, "0.0.0");
assert_eq!(thread.source, SessionSource::Cli);
@@ -118,7 +136,8 @@ async fn thread_resume_returns_rollout_history() -> Result<()> {
assert_eq!(
content,
&vec![UserInput::Text {
text: preview.to_string()
text: preview.to_string(),
text_elements: text_elements.clone().into_iter().map(Into::into).collect(),
}]
);
}
@@ -130,7 +149,7 @@ async fn thread_resume_returns_rollout_history() -> Result<()> {
#[tokio::test]
async fn thread_resume_prefers_path_over_thread_id() -> Result<()> {
let server = create_mock_chat_completions_server(vec![]).await;
let server = create_mock_responses_server_repeating_assistant("Done").await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri())?;
@@ -150,7 +169,7 @@ async fn thread_resume_prefers_path_over_thread_id() -> Result<()> {
.await??;
let ThreadStartResponse { thread, .. } = to_response::<ThreadStartResponse>(start_resp)?;
let thread_path = thread.path.clone();
let thread_path = thread.path.clone().expect("thread path");
let resume_id = mcp
.send_thread_resume_request(ThreadResumeParams {
thread_id: "not-a-valid-thread-id".to_string(),
@@ -167,14 +186,16 @@ async fn thread_resume_prefers_path_over_thread_id() -> Result<()> {
let ThreadResumeResponse {
thread: resumed, ..
} = to_response::<ThreadResumeResponse>(resume_resp)?;
assert_eq!(resumed, thread);
let mut expected = thread;
expected.updated_at = resumed.updated_at;
assert_eq!(resumed, expected);
Ok(())
}
#[tokio::test]
async fn thread_resume_supports_history_and_overrides() -> Result<()> {
let server = create_mock_chat_completions_server(vec![]).await;
let server = create_mock_responses_server_repeating_assistant("Done").await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri())?;
@@ -202,6 +223,7 @@ async fn thread_resume_supports_history_and_overrides() -> Result<()> {
content: vec![ContentItem::InputText {
text: history_text.to_string(),
}],
end_turn: None,
}];
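This hunk adds an explicit end_turn: None to the hand-built history item. A standalone sketch of that message shape with local stand-ins for ResponseItem/ContentItem (the role field and the Option&lt;bool&gt; type for end_turn are assumptions, since the hunk shows only the content and end_turn lines):

    // Stand-in shapes for the history items the resume-with-overrides test builds.
    #[derive(Debug)]
    enum ContentItem {
        InputText { text: String },
    }

    #[derive(Debug)]
    struct Message {
        role: String,            // assumed field
        content: Vec<ContentItem>,
        end_turn: Option<bool>,  // type assumed; the test passes None
    }

    fn main() {
        let history = vec![Message {
            role: "user".to_string(),
            content: vec![ContentItem::InputText {
                text: "history text".to_string(),
            }],
            end_turn: None,
        }];
        println!("{history:?}");
    }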
// Resume with explicit history and override the model.
@@ -231,6 +253,91 @@ async fn thread_resume_supports_history_and_overrides() -> Result<()> {
Ok(())
}
#[tokio::test]
async fn thread_resume_accepts_personality_override_v2() -> Result<()> {
skip_if_no_network!(Ok(()));
let server = responses::start_mock_server().await;
let body = responses::sse(vec![
responses::ev_response_created("resp-1"),
responses::ev_assistant_message("msg-1", "Done"),
responses::ev_completed("resp-1"),
]);
let response_mock = responses::mount_sse_once(&server, body).await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri())?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
let start_id = mcp
.send_thread_start_request(ThreadStartParams {
model: Some("gpt-5.2-codex".to_string()),
..Default::default()
})
.await?;
let start_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(start_id)),
)
.await??;
let ThreadStartResponse { thread, .. } = to_response::<ThreadStartResponse>(start_resp)?;
let resume_id = mcp
.send_thread_resume_request(ThreadResumeParams {
thread_id: thread.id.clone(),
model: Some("gpt-5.2-codex".to_string()),
personality: Some(Personality::Friendly),
..Default::default()
})
.await?;
let resume_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(resume_id)),
)
.await??;
let _resume: ThreadResumeResponse = to_response::<ThreadResumeResponse>(resume_resp)?;
let turn_id = mcp
.send_turn_start_request(TurnStartParams {
thread_id: thread.id,
input: vec![UserInput::Text {
text: "Hello".to_string(),
text_elements: Vec::new(),
}],
..Default::default()
})
.await?;
timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(turn_id)),
)
.await??;
timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("turn/completed"),
)
.await??;
let request = response_mock.single_request();
let developer_texts = request.message_input_texts("developer");
assert!(
!developer_texts
.iter()
.any(|text| text.contains("<personality_spec>")),
"did not expect a personality update message in developer input, got {developer_texts:?}"
);
let instructions_text = request.instructions_text();
assert!(
instructions_text.contains(DEFAULT_BASE_INSTRUCTIONS),
"expected default base instructions from history, got {instructions_text:?}"
);
Ok(())
}
// Helper to create a config.toml pointing at the mock model server.
fn create_config_toml(codex_home: &std::path::Path, server_uri: &str) -> std::io::Result<()> {
let config_toml = codex_home.join("config.toml");
@@ -244,10 +351,13 @@ sandbox_mode = "read-only"
model_provider = "mock_provider"
[features]
remote_models = false
[model_providers.mock_provider]
name = "Mock provider for test"
base_url = "{server_uri}/v1"
wire_api = "chat"
wire_api = "responses"
request_max_retries = 0
stream_max_retries = 0
"#

View File

@@ -0,0 +1,181 @@
use anyhow::Result;
use app_test_support::McpProcess;
use app_test_support::create_final_assistant_message_sse_response;
use app_test_support::create_mock_responses_server_sequence_unchecked;
use app_test_support::to_response;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::ThreadItem;
use codex_app_server_protocol::ThreadResumeParams;
use codex_app_server_protocol::ThreadResumeResponse;
use codex_app_server_protocol::ThreadRollbackParams;
use codex_app_server_protocol::ThreadRollbackResponse;
use codex_app_server_protocol::ThreadStartParams;
use codex_app_server_protocol::ThreadStartResponse;
use codex_app_server_protocol::TurnStartParams;
use codex_app_server_protocol::UserInput as V2UserInput;
use pretty_assertions::assert_eq;
use tempfile::TempDir;
use tokio::time::timeout;
const DEFAULT_READ_TIMEOUT: std::time::Duration = std::time::Duration::from_secs(10);
#[tokio::test]
async fn thread_rollback_drops_last_turns_and_persists_to_rollout() -> Result<()> {
// Three requests hit the mock model (one at session start plus the two turn/start calls).
let responses = vec![
create_final_assistant_message_sse_response("Done")?,
create_final_assistant_message_sse_response("Done")?,
create_final_assistant_message_sse_response("Done")?,
];
let server = create_mock_responses_server_sequence_unchecked(responses).await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri())?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
// Start a thread.
let start_id = mcp
.send_thread_start_request(ThreadStartParams {
model: Some("mock-model".to_string()),
..Default::default()
})
.await?;
let start_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(start_id)),
)
.await??;
let ThreadStartResponse { thread, .. } = to_response::<ThreadStartResponse>(start_resp)?;
// Two turns.
let first_text = "First";
let turn1_id = mcp
.send_turn_start_request(TurnStartParams {
thread_id: thread.id.clone(),
input: vec![V2UserInput::Text {
text: first_text.to_string(),
text_elements: Vec::new(),
}],
..Default::default()
})
.await?;
let _turn1_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(turn1_id)),
)
.await??;
let _completed1 = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("turn/completed"),
)
.await??;
let turn2_id = mcp
.send_turn_start_request(TurnStartParams {
thread_id: thread.id.clone(),
input: vec![V2UserInput::Text {
text: "Second".to_string(),
text_elements: Vec::new(),
}],
..Default::default()
})
.await?;
let _turn2_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(turn2_id)),
)
.await??;
let _completed2 = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("turn/completed"),
)
.await??;
// Roll back the last turn.
let rollback_id = mcp
.send_thread_rollback_request(ThreadRollbackParams {
thread_id: thread.id.clone(),
num_turns: 1,
})
.await?;
let rollback_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(rollback_id)),
)
.await??;
let ThreadRollbackResponse {
thread: rolled_back_thread,
} = to_response::<ThreadRollbackResponse>(rollback_resp)?;
assert_eq!(rolled_back_thread.turns.len(), 1);
assert_eq!(rolled_back_thread.turns[0].items.len(), 2);
match &rolled_back_thread.turns[0].items[0] {
ThreadItem::UserMessage { content, .. } => {
assert_eq!(
content,
&vec![V2UserInput::Text {
text: first_text.to_string(),
text_elements: Vec::new(),
}]
);
}
other => panic!("expected user message item, got {other:?}"),
}
// Resume and confirm the history is pruned.
let resume_id = mcp
.send_thread_resume_request(ThreadResumeParams {
thread_id: thread.id,
..Default::default()
})
.await?;
let resume_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(resume_id)),
)
.await??;
let ThreadResumeResponse { thread, .. } = to_response::<ThreadResumeResponse>(resume_resp)?;
assert_eq!(thread.turns.len(), 1);
assert_eq!(thread.turns[0].items.len(), 2);
match &thread.turns[0].items[0] {
ThreadItem::UserMessage { content, .. } => {
assert_eq!(
content,
&vec![V2UserInput::Text {
text: first_text.to_string(),
text_elements: Vec::new(),
}]
);
}
other => panic!("expected user message item, got {other:?}"),
}
Ok(())
}
fn create_config_toml(codex_home: &std::path::Path, server_uri: &str) -> std::io::Result<()> {
let config_toml = codex_home.join("config.toml");
std::fs::write(
config_toml,
format!(
r#"
model = "mock-model"
approval_policy = "never"
sandbox_mode = "read-only"
model_provider = "mock_provider"
[model_providers.mock_provider]
name = "Mock provider for test"
base_url = "{server_uri}/v1"
wire_api = "responses"
request_max_retries = 0
stream_max_retries = 0
"#
),
)
}
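The rollback semantics this test exercises reduce to truncating the last num_turns entries from the turn history; resuming then sees the pruned list. A minimal sketch with stand-in types:

    // Stand-in for a thread turn; the real type carries richer items.
    #[derive(Debug)]
    struct Turn { items: Vec<String> }

    // Drop the last `num_turns` turns, keeping everything before them.
    fn rollback(turns: &mut Vec<Turn>, num_turns: usize) {
        let keep = turns.len().saturating_sub(num_turns);
        turns.truncate(keep);
    }

    fn main() {
        let mut turns = vec![
            Turn { items: vec!["user: First".into(), "assistant: Done".into()] },
            Turn { items: vec!["user: Second".into(), "assistant: Done".into()] },
        ];
        rollback(&mut turns, 1);
        // Only the first turn survives, matching the assertions above.
        assert_eq!(turns.len(), 1);
        assert_eq!(turns[0].items.len(), 2);
        println!("{turns:?}");
    }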

View File

@@ -1,6 +1,6 @@
use anyhow::Result;
use app_test_support::McpProcess;
use app_test_support::create_mock_chat_completions_server;
use app_test_support::create_mock_responses_server_repeating_assistant;
use app_test_support::to_response;
use codex_app_server_protocol::JSONRPCNotification;
use codex_app_server_protocol::JSONRPCResponse;
@@ -8,6 +8,9 @@ use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::ThreadStartParams;
use codex_app_server_protocol::ThreadStartResponse;
use codex_app_server_protocol::ThreadStartedNotification;
use codex_core::config::set_project_trust_level;
use codex_protocol::config_types::TrustLevel;
use codex_protocol::openai_models::ReasoningEffort;
use std::path::Path;
use tempfile::TempDir;
use tokio::time::timeout;
@@ -17,7 +20,7 @@ const DEFAULT_READ_TIMEOUT: std::time::Duration = std::time::Duration::from_secs
#[tokio::test]
async fn thread_start_creates_thread_and_emits_started() -> Result<()> {
// Provide a mock server and config so model wiring is valid.
let server = create_mock_chat_completions_server(vec![]).await;
let server = create_mock_responses_server_repeating_assistant("Done").await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri())?;
@@ -69,6 +72,47 @@ async fn thread_start_creates_thread_and_emits_started() -> Result<()> {
Ok(())
}
#[tokio::test]
async fn thread_start_respects_project_config_from_cwd() -> Result<()> {
let server = create_mock_responses_server_repeating_assistant("Done").await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri())?;
let workspace = TempDir::new()?;
let project_config_dir = workspace.path().join(".codex");
std::fs::create_dir_all(&project_config_dir)?;
std::fs::write(
project_config_dir.join("config.toml"),
r#"
model_reasoning_effort = "high"
"#,
)?;
set_project_trust_level(codex_home.path(), workspace.path(), TrustLevel::Trusted)?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
let req_id = mcp
.send_thread_start_request(ThreadStartParams {
cwd: Some(workspace.path().to_string_lossy().into_owned()),
..Default::default()
})
.await?;
let resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(req_id)),
)
.await??;
let ThreadStartResponse {
reasoning_effort, ..
} = to_response::<ThreadStartResponse>(resp)?;
assert_eq!(reasoning_effort, Some(ReasoningEffort::High));
Ok(())
}
// Helper to create a config.toml pointing at the mock model server.
fn create_config_toml(codex_home: &Path, server_uri: &str) -> std::io::Result<()> {
let config_toml = codex_home.join("config.toml");
@@ -85,7 +129,7 @@ model_provider = "mock_provider"
[model_providers.mock_provider]
name = "Mock provider for test"
base_url = "{server_uri}/v1"
wire_api = "chat"
wire_api = "responses"
request_max_retries = 0
stream_max_retries = 0
"#

View File

@@ -2,7 +2,7 @@
use anyhow::Result;
use app_test_support::McpProcess;
use app_test_support::create_mock_chat_completions_server;
use app_test_support::create_mock_responses_server_sequence;
use app_test_support::create_shell_command_sse_response;
use app_test_support::to_response;
use codex_app_server_protocol::JSONRPCNotification;
@@ -41,7 +41,7 @@ async fn turn_interrupt_aborts_running_turn() -> Result<()> {
std::fs::create_dir(&working_directory)?;
// Mock server: a single long-running shell command; after the abort, no further responses are needed.
let server = create_mock_chat_completions_server(vec![create_shell_command_sse_response(
let server = create_mock_responses_server_sequence(vec![create_shell_command_sse_response(
shell_command.clone(),
Some(&working_directory),
Some(10_000),
@@ -73,6 +73,7 @@ async fn turn_interrupt_aborts_running_turn() -> Result<()> {
thread_id: thread.id.clone(),
input: vec![V2UserInput::Text {
text: "run sleep".to_string(),
text_elements: Vec::new(),
}],
cwd: Some(working_directory.clone()),
..Default::default()
@@ -135,7 +136,7 @@ model_provider = "mock_provider"
[model_providers.mock_provider]
name = "Mock provider for test"
base_url = "{server_uri}/v1"
wire_api = "chat"
wire_api = "responses"
request_max_retries = 0
stream_max_retries = 0
"#

View File

@@ -3,14 +3,17 @@ use app_test_support::McpProcess;
use app_test_support::create_apply_patch_sse_response;
use app_test_support::create_exec_command_sse_response;
use app_test_support::create_final_assistant_message_sse_response;
use app_test_support::create_mock_chat_completions_server;
use app_test_support::create_mock_chat_completions_server_unchecked;
use app_test_support::create_mock_responses_server_sequence;
use app_test_support::create_mock_responses_server_sequence_unchecked;
use app_test_support::create_shell_command_sse_response;
use app_test_support::format_with_current_shell_display;
use app_test_support::to_response;
use codex_app_server_protocol::ApprovalDecision;
use codex_app_server_protocol::ByteRange;
use codex_app_server_protocol::ClientInfo;
use codex_app_server_protocol::CommandExecutionApprovalDecision;
use codex_app_server_protocol::CommandExecutionRequestApprovalResponse;
use codex_app_server_protocol::CommandExecutionStatus;
use codex_app_server_protocol::FileChangeApprovalDecision;
use codex_app_server_protocol::FileChangeOutputDeltaNotification;
use codex_app_server_protocol::FileChangeRequestApprovalResponse;
use codex_app_server_protocol::ItemCompletedNotification;
@@ -21,6 +24,7 @@ use codex_app_server_protocol::PatchApplyStatus;
use codex_app_server_protocol::PatchChangeKind;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::ServerRequest;
use codex_app_server_protocol::TextElement;
use codex_app_server_protocol::ThreadItem;
use codex_app_server_protocol::ThreadStartParams;
use codex_app_server_protocol::ThreadStartResponse;
@@ -30,15 +34,185 @@ use codex_app_server_protocol::TurnStartResponse;
use codex_app_server_protocol::TurnStartedNotification;
use codex_app_server_protocol::TurnStatus;
use codex_app_server_protocol::UserInput as V2UserInput;
+use codex_core::features::FEATURES;
+use codex_core::features::Feature;
use codex_core::protocol_config_types::ReasoningSummary;
+use codex_protocol::config_types::CollaborationMode;
+use codex_protocol::config_types::ModeKind;
+use codex_protocol::config_types::Personality;
+use codex_protocol::config_types::Settings;
use codex_protocol::openai_models::ReasoningEffort;
+use core_test_support::responses;
+use core_test_support::skip_if_no_network;
use pretty_assertions::assert_eq;
+use std::collections::BTreeMap;
use std::path::Path;
use tempfile::TempDir;
use tokio::time::timeout;
const DEFAULT_READ_TIMEOUT: std::time::Duration = std::time::Duration::from_secs(10);
+const TEST_ORIGINATOR: &str = "codex_vscode";
#[tokio::test]
async fn turn_start_sends_originator_header() -> Result<()> {
let responses = vec![create_final_assistant_message_sse_response("Done")?];
let server = create_mock_responses_server_sequence_unchecked(responses).await;
let codex_home = TempDir::new()?;
create_config_toml(
codex_home.path(),
&server.uri(),
"never",
&BTreeMap::default(),
)?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(
DEFAULT_READ_TIMEOUT,
mcp.initialize_with_client_info(ClientInfo {
name: TEST_ORIGINATOR.to_string(),
title: Some("Codex VS Code Extension".to_string()),
version: "0.1.0".to_string(),
}),
)
.await??;
let thread_req = mcp
.send_thread_start_request(ThreadStartParams {
model: Some("mock-model".to_string()),
..Default::default()
})
.await?;
let thread_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(thread_req)),
)
.await??;
let ThreadStartResponse { thread, .. } = to_response::<ThreadStartResponse>(thread_resp)?;
let turn_req = mcp
.send_turn_start_request(TurnStartParams {
thread_id: thread.id.clone(),
input: vec![V2UserInput::Text {
text: "Hello".to_string(),
text_elements: Vec::new(),
}],
..Default::default()
})
.await?;
timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(turn_req)),
)
.await??;
timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("turn/completed"),
)
.await??;
let requests = server
.received_requests()
.await
.expect("failed to fetch received requests");
assert!(!requests.is_empty());
for request in requests {
let originator = request
.headers
.get("originator")
.expect("originator header missing");
assert_eq!(originator.to_str()?, TEST_ORIGINATOR);
}
Ok(())
}
#[tokio::test]
async fn turn_start_emits_user_message_item_with_text_elements() -> Result<()> {
let responses = vec![create_final_assistant_message_sse_response("Done")?];
let server = create_mock_responses_server_sequence_unchecked(responses).await;
let codex_home = TempDir::new()?;
create_config_toml(
codex_home.path(),
&server.uri(),
"never",
&BTreeMap::default(),
)?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
let thread_req = mcp
.send_thread_start_request(ThreadStartParams {
model: Some("mock-model".to_string()),
..Default::default()
})
.await?;
let thread_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(thread_req)),
)
.await??;
let ThreadStartResponse { thread, .. } = to_response::<ThreadStartResponse>(thread_resp)?;
let text_elements = vec![TextElement::new(
ByteRange { start: 0, end: 5 },
Some("<note>".to_string()),
)];
let turn_req = mcp
.send_turn_start_request(TurnStartParams {
thread_id: thread.id.clone(),
input: vec![V2UserInput::Text {
text: "Hello".to_string(),
text_elements: text_elements.clone(),
}],
..Default::default()
})
.await?;
timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(turn_req)),
)
.await??;
let user_message_item = timeout(DEFAULT_READ_TIMEOUT, async {
loop {
let notification = mcp
.read_stream_until_notification_message("item/started")
.await?;
let params = notification.params.expect("item/started params");
let item_started: ItemStartedNotification =
serde_json::from_value(params).expect("deserialize item/started notification");
if let ThreadItem::UserMessage { .. } = item_started.item {
return Ok::<ThreadItem, anyhow::Error>(item_started.item);
}
}
})
.await??;
match user_message_item {
ThreadItem::UserMessage { content, .. } => {
assert_eq!(
content,
vec![V2UserInput::Text {
text: "Hello".to_string(),
text_elements,
}]
);
}
other => panic!("expected user message item, got {other:?}"),
}
timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("turn/completed"),
)
.await??;
Ok(())
}
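For orientation, a self-contained sketch of the byte-range semantics the new test relies on (stand-in struct; the real ByteRange and TextElement live in codex_app_server_protocol, and reading start/end as UTF-8 byte offsets is an assumption, not something this diff states):

// Stand-in for codex_app_server_protocol::ByteRange (field types assumed).
struct ByteRange {
    start: usize,
    end: usize,
}

fn main() {
    let text = "Hello";
    let range = ByteRange { start: 0, end: 5 };
    // The range used in the test covers the whole word "Hello", so the
    // "<note>" placeholder annotates that entire span.
    assert_eq!(&text[range.start..range.end], "Hello");
}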
#[tokio::test]
async fn turn_start_emits_notifications_and_accepts_model_override() -> Result<()> {
@@ -49,10 +223,15 @@ async fn turn_start_emits_notifications_and_accepts_model_override() -> Result<(
create_final_assistant_message_sse_response("Done")?,
create_final_assistant_message_sse_response("Done")?,
];
-let server = create_mock_chat_completions_server_unchecked(responses).await;
+let server = create_mock_responses_server_sequence_unchecked(responses).await;
let codex_home = TempDir::new()?;
-create_config_toml(codex_home.path(), &server.uri(), "never")?;
+create_config_toml(
+codex_home.path(),
+&server.uri(),
+"never",
+&BTreeMap::default(),
+)?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
@@ -77,6 +256,7 @@ async fn turn_start_emits_notifications_and_accepts_model_override() -> Result<(
thread_id: thread.id.clone(),
input: vec![V2UserInput::Text {
text: "Hello".to_string(),
+text_elements: Vec::new(),
}],
..Default::default()
})
@@ -109,6 +289,7 @@ async fn turn_start_emits_notifications_and_accepts_model_override() -> Result<(
thread_id: thread.id.clone(),
input: vec![V2UserInput::Text {
text: "Second".to_string(),
+text_elements: Vec::new(),
}],
model: Some("mock-model-override".to_string()),
..Default::default()
@@ -147,6 +328,161 @@ async fn turn_start_emits_notifications_and_accepts_model_override() -> Result<(
Ok(())
}
#[tokio::test]
async fn turn_start_accepts_collaboration_mode_override_v2() -> Result<()> {
skip_if_no_network!(Ok(()));
let server = responses::start_mock_server().await;
let body = responses::sse(vec![
responses::ev_response_created("resp-1"),
responses::ev_assistant_message("msg-1", "Done"),
responses::ev_completed("resp-1"),
]);
let response_mock = responses::mount_sse_once(&server, body).await;
let codex_home = TempDir::new()?;
create_config_toml(
codex_home.path(),
&server.uri(),
"never",
&BTreeMap::default(),
)?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
let thread_req = mcp
.send_thread_start_request(ThreadStartParams {
model: Some("gpt-5.2-codex".to_string()),
..Default::default()
})
.await?;
let thread_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(thread_req)),
)
.await??;
let ThreadStartResponse { thread, .. } = to_response::<ThreadStartResponse>(thread_resp)?;
let collaboration_mode = CollaborationMode {
mode: ModeKind::Custom,
settings: Settings {
model: "mock-model-collab".to_string(),
reasoning_effort: Some(ReasoningEffort::High),
developer_instructions: None,
},
};
let turn_req = mcp
.send_turn_start_request(TurnStartParams {
thread_id: thread.id.clone(),
input: vec![V2UserInput::Text {
text: "Hello".to_string(),
text_elements: Vec::new(),
}],
model: Some("mock-model-override".to_string()),
effort: Some(ReasoningEffort::Low),
summary: Some(ReasoningSummary::Auto),
output_schema: None,
collaboration_mode: Some(collaboration_mode),
..Default::default()
})
.await?;
let turn_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(turn_req)),
)
.await??;
let _turn: TurnStartResponse = to_response::<TurnStartResponse>(turn_resp)?;
timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("turn/completed"),
)
.await??;
let request = response_mock.single_request();
let payload = request.body_json();
assert_eq!(payload["model"].as_str(), Some("mock-model-collab"));
Ok(())
}
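The payload assertion above indicates that a turn-level collaboration_mode takes precedence over the standalone model/effort overrides. A minimal stand-in sketch of that precedence (hypothetical helper, not the server's actual resolution logic):

// If a collaboration mode supplies a model, it wins over the turn-level
// model override; otherwise the override (or a default) applies.
fn effective_model<'a>(turn_model: Option<&'a str>, collab_model: Option<&'a str>) -> &'a str {
    collab_model.or(turn_model).unwrap_or("default-model")
}

fn main() {
    // Mirrors the test: "mock-model-collab" reaches the wire, not the override.
    assert_eq!(
        effective_model(Some("mock-model-override"), Some("mock-model-collab")),
        "mock-model-collab"
    );
}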
#[tokio::test]
async fn turn_start_accepts_personality_override_v2() -> Result<()> {
skip_if_no_network!(Ok(()));
let server = responses::start_mock_server().await;
let body = responses::sse(vec![
responses::ev_response_created("resp-1"),
responses::ev_assistant_message("msg-1", "Done"),
responses::ev_completed("resp-1"),
]);
let response_mock = responses::mount_sse_once(&server, body).await;
let codex_home = TempDir::new()?;
create_config_toml(
codex_home.path(),
&server.uri(),
"never",
&BTreeMap::default(),
)?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
let thread_req = mcp
.send_thread_start_request(ThreadStartParams {
model: Some("gpt-5.2-codex".to_string()),
..Default::default()
})
.await?;
let thread_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(thread_req)),
)
.await??;
let ThreadStartResponse { thread, .. } = to_response::<ThreadStartResponse>(thread_resp)?;
let turn_req = mcp
.send_turn_start_request(TurnStartParams {
thread_id: thread.id.clone(),
input: vec![V2UserInput::Text {
text: "Hello".to_string(),
text_elements: Vec::new(),
}],
personality: Some(Personality::Friendly),
..Default::default()
})
.await?;
let turn_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(turn_req)),
)
.await??;
let _turn: TurnStartResponse = to_response::<TurnStartResponse>(turn_resp)?;
timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("turn/completed"),
)
.await??;
let request = response_mock.single_request();
let developer_texts = request.message_input_texts("developer");
if developer_texts.is_empty() {
eprintln!("request body: {}", request.body_json());
}
assert!(
developer_texts
.iter()
.any(|text| text.contains("<personality_spec>")),
"expected personality update message in developer input, got {developer_texts:?}"
);
Ok(())
}
#[tokio::test]
async fn turn_start_accepts_local_image_input() -> Result<()> {
// Two Codex turns hit the mock model (session start + turn/start).
@@ -156,10 +492,15 @@ async fn turn_start_accepts_local_image_input() -> Result<()> {
];
// Use the unchecked variant because the request payload includes a LocalImage
// which the strict matcher does not currently cover.
-let server = create_mock_chat_completions_server_unchecked(responses).await;
+let server = create_mock_responses_server_sequence_unchecked(responses).await;
let codex_home = TempDir::new()?;
-create_config_toml(codex_home.path(), &server.uri(), "never")?;
+create_config_toml(
+codex_home.path(),
+&server.uri(),
+"never",
+&BTreeMap::default(),
+)?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
@@ -232,9 +573,14 @@ async fn turn_start_exec_approval_toggle_v2() -> Result<()> {
)?,
create_final_assistant_message_sse_response("done 2")?,
];
-let server = create_mock_chat_completions_server(responses).await;
+let server = create_mock_responses_server_sequence(responses).await;
// Default approval is untrusted to force elicitation on first turn.
-create_config_toml(codex_home.as_path(), &server.uri(), "untrusted")?;
+create_config_toml(
+codex_home.as_path(),
+&server.uri(),
+"untrusted",
+&BTreeMap::default(),
+)?;
let mut mcp = McpProcess::new(codex_home.as_path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
@@ -259,6 +605,7 @@ async fn turn_start_exec_approval_toggle_v2() -> Result<()> {
thread_id: thread.id.clone(),
input: vec![V2UserInput::Text {
text: "run python".to_string(),
+text_elements: Vec::new(),
}],
..Default::default()
})
@@ -304,6 +651,7 @@ async fn turn_start_exec_approval_toggle_v2() -> Result<()> {
thread_id: thread.id.clone(),
input: vec![V2UserInput::Text {
text: "run python again".to_string(),
+text_elements: Vec::new(),
}],
approval_policy: Some(codex_app_server_protocol::AskForApproval::Never),
sandbox_policy: Some(codex_app_server_protocol::SandboxPolicy::DangerFullAccess),
@@ -356,8 +704,13 @@ async fn turn_start_exec_approval_decline_v2() -> Result<()> {
)?,
create_final_assistant_message_sse_response("done")?,
];
-let server = create_mock_chat_completions_server(responses).await;
-create_config_toml(codex_home.as_path(), &server.uri(), "untrusted")?;
+let server = create_mock_responses_server_sequence(responses).await;
+create_config_toml(
+codex_home.as_path(),
+&server.uri(),
+"untrusted",
+&BTreeMap::default(),
+)?;
let mut mcp = McpProcess::new(codex_home.as_path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
@@ -380,6 +733,7 @@ async fn turn_start_exec_approval_decline_v2() -> Result<()> {
thread_id: thread.id.clone(),
input: vec![V2UserInput::Text {
text: "run python".to_string(),
+text_elements: Vec::new(),
}],
cwd: Some(workspace.clone()),
..Default::default()
@@ -426,7 +780,7 @@ async fn turn_start_exec_approval_decline_v2() -> Result<()> {
mcp.send_response(
request_id,
serde_json::to_value(CommandExecutionRequestApprovalResponse {
-decision: ApprovalDecision::Decline,
+decision: CommandExecutionApprovalDecision::Decline,
})?,
)
.await?;
@@ -502,8 +856,13 @@ async fn turn_start_updates_sandbox_and_cwd_between_turns_v2() -> Result<()> {
)?,
create_final_assistant_message_sse_response("done second")?,
];
-let server = create_mock_chat_completions_server(responses).await;
-create_config_toml(&codex_home, &server.uri(), "untrusted")?;
+let server = create_mock_responses_server_sequence(responses).await;
+create_config_toml(
+&codex_home,
+&server.uri(),
+"untrusted",
+&BTreeMap::default(),
+)?;
let mut mcp = McpProcess::new(&codex_home).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
@@ -528,6 +887,7 @@ async fn turn_start_updates_sandbox_and_cwd_between_turns_v2() -> Result<()> {
thread_id: thread.id.clone(),
input: vec![V2UserInput::Text {
text: "first turn".to_string(),
+text_elements: Vec::new(),
}],
cwd: Some(first_cwd.clone()),
approval_policy: Some(codex_app_server_protocol::AskForApproval::Never),
@@ -540,7 +900,9 @@ async fn turn_start_updates_sandbox_and_cwd_between_turns_v2() -> Result<()> {
model: Some("mock-model".to_string()),
effort: Some(ReasoningEffort::Medium),
summary: Some(ReasoningSummary::Auto),
+personality: None,
output_schema: None,
+collaboration_mode: None,
})
.await?;
timeout(
@@ -553,6 +915,7 @@ async fn turn_start_updates_sandbox_and_cwd_between_turns_v2() -> Result<()> {
mcp.read_stream_until_notification_message("codex/event/task_complete"),
)
.await??;
+mcp.clear_message_buffer();
// Second turn with workspace-write and second_cwd; ensure exec begins in second_cwd.
let second_turn = mcp
@@ -560,6 +923,7 @@ async fn turn_start_updates_sandbox_and_cwd_between_turns_v2() -> Result<()> {
thread_id: thread.id.clone(),
input: vec![V2UserInput::Text {
text: "second turn".to_string(),
+text_elements: Vec::new(),
}],
cwd: Some(second_cwd.clone()),
approval_policy: Some(codex_app_server_protocol::AskForApproval::Never),
@@ -567,7 +931,9 @@ async fn turn_start_updates_sandbox_and_cwd_between_turns_v2() -> Result<()> {
model: Some("mock-model".to_string()),
effort: Some(ReasoningEffort::Medium),
summary: Some(ReasoningSummary::Auto),
+personality: None,
output_schema: None,
+collaboration_mode: None,
})
.await?;
timeout(
@@ -635,8 +1001,13 @@ async fn turn_start_file_change_approval_v2() -> Result<()> {
create_apply_patch_sse_response(patch, "patch-call")?,
create_final_assistant_message_sse_response("patch applied")?,
];
-let server = create_mock_chat_completions_server(responses).await;
-create_config_toml(&codex_home, &server.uri(), "untrusted")?;
+let server = create_mock_responses_server_sequence(responses).await;
+create_config_toml(
+&codex_home,
+&server.uri(),
+"untrusted",
+&BTreeMap::default(),
+)?;
let mut mcp = McpProcess::new(&codex_home).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
@@ -660,6 +1031,7 @@ async fn turn_start_file_change_approval_v2() -> Result<()> {
thread_id: thread.id.clone(),
input: vec![V2UserInput::Text {
text: "apply patch".into(),
+text_elements: Vec::new(),
}],
cwd: Some(workspace.clone()),
..Default::default()
@@ -722,7 +1094,7 @@ async fn turn_start_file_change_approval_v2() -> Result<()> {
mcp.send_response(
request_id,
serde_json::to_value(FileChangeRequestApprovalResponse {
-decision: ApprovalDecision::Accept,
+decision: FileChangeApprovalDecision::Accept,
})?,
)
.await?;
@@ -782,6 +1154,197 @@ async fn turn_start_file_change_approval_v2() -> Result<()> {
Ok(())
}
#[tokio::test]
async fn turn_start_file_change_approval_accept_for_session_persists_v2() -> Result<()> {
skip_if_no_network!(Ok(()));
let tmp = TempDir::new()?;
let codex_home = tmp.path().join("codex_home");
std::fs::create_dir(&codex_home)?;
let workspace = tmp.path().join("workspace");
std::fs::create_dir(&workspace)?;
let patch_1 = r#"*** Begin Patch
*** Add File: README.md
+new line
*** End Patch
"#;
let patch_2 = r#"*** Begin Patch
*** Update File: README.md
@@
-new line
+updated line
*** End Patch
"#;
let responses = vec![
create_apply_patch_sse_response(patch_1, "patch-call-1")?,
create_final_assistant_message_sse_response("patch 1 applied")?,
create_apply_patch_sse_response(patch_2, "patch-call-2")?,
create_final_assistant_message_sse_response("patch 2 applied")?,
];
let server = create_mock_responses_server_sequence(responses).await;
create_config_toml(
&codex_home,
&server.uri(),
"untrusted",
&BTreeMap::default(),
)?;
let mut mcp = McpProcess::new(&codex_home).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
let start_req = mcp
.send_thread_start_request(ThreadStartParams {
model: Some("mock-model".to_string()),
cwd: Some(workspace.to_string_lossy().into_owned()),
..Default::default()
})
.await?;
let start_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(start_req)),
)
.await??;
let ThreadStartResponse { thread, .. } = to_response::<ThreadStartResponse>(start_resp)?;
// First turn: expect FileChangeRequestApproval, respond with AcceptForSession, and verify the file exists.
let turn_1_req = mcp
.send_turn_start_request(TurnStartParams {
thread_id: thread.id.clone(),
input: vec![V2UserInput::Text {
text: "apply patch 1".into(),
text_elements: Vec::new(),
}],
cwd: Some(workspace.clone()),
..Default::default()
})
.await?;
let turn_1_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(turn_1_req)),
)
.await??;
let TurnStartResponse { turn: turn_1 } = to_response::<TurnStartResponse>(turn_1_resp)?;
let started_file_change_1 = timeout(DEFAULT_READ_TIMEOUT, async {
loop {
let started_notif = mcp
.read_stream_until_notification_message("item/started")
.await?;
let started: ItemStartedNotification =
serde_json::from_value(started_notif.params.clone().expect("item/started params"))?;
if let ThreadItem::FileChange { .. } = started.item {
return Ok::<ThreadItem, anyhow::Error>(started.item);
}
}
})
.await??;
let ThreadItem::FileChange { id, status, .. } = started_file_change_1 else {
unreachable!("loop ensures we break on file change items");
};
assert_eq!(id, "patch-call-1");
assert_eq!(status, PatchApplyStatus::InProgress);
let server_req = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_request_message(),
)
.await??;
let ServerRequest::FileChangeRequestApproval { request_id, params } = server_req else {
panic!("expected FileChangeRequestApproval request")
};
assert_eq!(params.item_id, "patch-call-1");
assert_eq!(params.thread_id, thread.id);
assert_eq!(params.turn_id, turn_1.id);
mcp.send_response(
request_id,
serde_json::to_value(FileChangeRequestApprovalResponse {
decision: FileChangeApprovalDecision::AcceptForSession,
})?,
)
.await?;
timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("item/fileChange/outputDelta"),
)
.await??;
timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("item/completed"),
)
.await??;
timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("codex/event/task_complete"),
)
.await??;
let readme_path = workspace.join("README.md");
assert_eq!(std::fs::read_to_string(&readme_path)?, "new line\n");
// Second turn: apply a patch to the same file. Approval should be skipped due to AcceptForSession.
let turn_2_req = mcp
.send_turn_start_request(TurnStartParams {
thread_id: thread.id.clone(),
input: vec![V2UserInput::Text {
text: "apply patch 2".into(),
text_elements: Vec::new(),
}],
cwd: Some(workspace.clone()),
..Default::default()
})
.await?;
timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(turn_2_req)),
)
.await??;
let started_file_change_2 = timeout(DEFAULT_READ_TIMEOUT, async {
loop {
let started_notif = mcp
.read_stream_until_notification_message("item/started")
.await?;
let started: ItemStartedNotification =
serde_json::from_value(started_notif.params.clone().expect("item/started params"))?;
if let ThreadItem::FileChange { .. } = started.item {
return Ok::<ThreadItem, anyhow::Error>(started.item);
}
}
})
.await??;
let ThreadItem::FileChange { id, status, .. } = started_file_change_2 else {
unreachable!("loop ensures we break on file change items");
};
assert_eq!(id, "patch-call-2");
assert_eq!(status, PatchApplyStatus::InProgress);
// If the server incorrectly emits FileChangeRequestApproval, the helper below will error
// (it bails on unexpected JSONRPCMessage::Request), causing the test to fail.
timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("item/fileChange/outputDelta"),
)
.await??;
timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("item/completed"),
)
.await??;
timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("codex/event/task_complete"),
)
.await??;
assert_eq!(std::fs::read_to_string(readme_path)?, "updated line\n");
Ok(())
}
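A compact, self-contained sketch of the session-scoped approval behavior this test exercises (stand-in type; the server's real bookkeeping is not shown in this diff):

// Once a file change is accepted for the session, later patches in the same
// session skip the approval round-trip entirely.
#[derive(Default)]
struct SessionApprovals {
    file_changes_accepted_for_session: bool,
}

impl SessionApprovals {
    fn needs_prompt(&self) -> bool {
        !self.file_changes_accepted_for_session
    }

    fn record_accept_for_session(&mut self) {
        self.file_changes_accepted_for_session = true;
    }
}

fn main() {
    let mut approvals = SessionApprovals::default();
    assert!(approvals.needs_prompt()); // turn 1: FileChangeRequestApproval is sent
    approvals.record_accept_for_session(); // client answers AcceptForSession
    assert!(!approvals.needs_prompt()); // turn 2: the patch applies without a prompt
}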
#[tokio::test]
async fn turn_start_file_change_approval_decline_v2() -> Result<()> {
skip_if_no_network!(Ok(()));
@@ -801,8 +1364,13 @@ async fn turn_start_file_change_approval_decline_v2() -> Result<()> {
create_apply_patch_sse_response(patch, "patch-call")?,
create_final_assistant_message_sse_response("patch declined")?,
];
-let server = create_mock_chat_completions_server(responses).await;
-create_config_toml(&codex_home, &server.uri(), "untrusted")?;
+let server = create_mock_responses_server_sequence(responses).await;
+create_config_toml(
+&codex_home,
+&server.uri(),
+"untrusted",
+&BTreeMap::default(),
+)?;
let mut mcp = McpProcess::new(&codex_home).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
@@ -826,6 +1394,7 @@ async fn turn_start_file_change_approval_decline_v2() -> Result<()> {
thread_id: thread.id.clone(),
input: vec![V2UserInput::Text {
text: "apply patch".into(),
+text_elements: Vec::new(),
}],
cwd: Some(workspace.clone()),
..Default::default()
@@ -888,7 +1457,7 @@ async fn turn_start_file_change_approval_decline_v2() -> Result<()> {
mcp.send_response(
request_id,
serde_json::to_value(FileChangeRequestApprovalResponse {
-decision: ApprovalDecision::Decline,
+decision: FileChangeApprovalDecision::Decline,
})?,
)
.await?;
@@ -939,18 +1508,14 @@ async fn command_execution_notifications_include_process_id() -> Result<()> {
create_exec_command_sse_response("uexec-1")?,
create_final_assistant_message_sse_response("done")?,
];
-let server = create_mock_chat_completions_server(responses).await;
+let server = create_mock_responses_server_sequence(responses).await;
let codex_home = TempDir::new()?;
-create_config_toml(codex_home.path(), &server.uri(), "never")?;
-let config_toml = codex_home.path().join("config.toml");
-let mut config_contents = std::fs::read_to_string(&config_toml)?;
-config_contents.push_str(
-r#"
-[features]
-unified_exec = true
-"#,
-);
-std::fs::write(&config_toml, config_contents)?;
+create_config_toml(
+codex_home.path(),
+&server.uri(),
+"never",
+&BTreeMap::from([(Feature::UnifiedExec, true)]),
+)?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
@@ -973,6 +1538,7 @@ unified_exec = true
thread_id: thread.id.clone(),
input: vec![V2UserInput::Text {
text: "run a command".to_string(),
+text_elements: Vec::new(),
}],
..Default::default()
})
@@ -1042,8 +1608,18 @@ unified_exec = true
unreachable!("loop ensures we break on command execution items");
};
assert_eq!(completed_id, "uexec-1");
-assert_eq!(completed_status, CommandExecutionStatus::Completed);
-assert_eq!(exit_code, Some(0));
+assert!(
+matches!(
+completed_status,
+CommandExecutionStatus::Completed | CommandExecutionStatus::Failed
+),
+"unexpected command execution status: {completed_status:?}"
+);
+if completed_status == CommandExecutionStatus::Completed {
+assert_eq!(exit_code, Some(0));
+} else {
+assert!(exit_code.is_some(), "expected exit_code for failed command");
+}
assert_eq!(
completed_process_id.as_deref(),
Some(started_process_id.as_str())
@@ -1063,7 +1639,24 @@ fn create_config_toml(
codex_home: &Path,
server_uri: &str,
approval_policy: &str,
+feature_flags: &BTreeMap<Feature, bool>,
) -> std::io::Result<()> {
+let mut features = BTreeMap::from([(Feature::RemoteModels, false)]);
+for (feature, enabled) in feature_flags {
+features.insert(*feature, *enabled);
+}
+let feature_entries = features
+.into_iter()
+.map(|(feature, enabled)| {
+let key = FEATURES
+.iter()
+.find(|spec| spec.id == feature)
+.map(|spec| spec.key)
+.unwrap_or_else(|| panic!("missing feature key for {feature:?}"));
+format!("{key} = {enabled}")
+})
+.collect::<Vec<_>>()
+.join("\n");
let config_toml = codex_home.join("config.toml");
std::fs::write(
config_toml,
@@ -1075,10 +1668,13 @@ sandbox_mode = "read-only"
model_provider = "mock_provider"
+[features]
+{feature_entries}
[model_providers.mock_provider]
name = "Mock provider for test"
base_url = "{server_uri}/v1"
wire_api = "chat"
wire_api = "responses"
request_max_retries = 0
stream_max_retries = 0
"#

View File

@@ -0,0 +1,11 @@
load("//:defs.bzl", "codex_rust_crate")
exports_files(["apply_patch_tool_instructions.md"])
codex_rust_crate(
name = "apply-patch",
crate_name = "codex_apply_patch",
compile_data = [
"apply_patch_tool_instructions.md",
],
)

View File

@@ -227,11 +227,14 @@ fn check_start_and_end_lines_strict(
first_line: Option<&&str>,
last_line: Option<&&str>,
) -> Result<(), ParseError> {
+let first_line = first_line.map(|line| line.trim());
+let last_line = last_line.map(|line| line.trim());
match (first_line, last_line) {
-(Some(&first), Some(&last)) if first == BEGIN_PATCH_MARKER && last == END_PATCH_MARKER => {
+(Some(first), Some(last)) if first == BEGIN_PATCH_MARKER && last == END_PATCH_MARKER => {
Ok(())
}
-(Some(&first), _) if first != BEGIN_PATCH_MARKER => Err(InvalidPatchError(String::from(
+(Some(first), _) if first != BEGIN_PATCH_MARKER => Err(InvalidPatchError(String::from(
"The first line of the patch must be '*** Begin Patch'",
))),
_ => Err(InvalidPatchError(String::from(
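Distilled, the relaxed strict check now trims surrounding whitespace from the boundary lines before comparing them against the markers. A self-contained sketch (local constants and a bool-returning stand-in for the real Result-based function):

const BEGIN_PATCH_MARKER: &str = "*** Begin Patch";
const END_PATCH_MARKER: &str = "*** End Patch";

// Trim each boundary line first, so leading/trailing spaces around the
// markers no longer fail strict parsing.
fn markers_ok(first_line: Option<&str>, last_line: Option<&str>) -> bool {
    let first_line = first_line.map(str::trim);
    let last_line = last_line.map(str::trim);
    matches!(
        (first_line, last_line),
        (Some(first), Some(last)) if first == BEGIN_PATCH_MARKER && last == END_PATCH_MARKER
    )
}

fn main() {
    assert!(markers_ok(Some("*** Begin Patch "), Some(" *** End Patch")));
    assert!(!markers_ok(Some("not a patch"), Some("*** End Patch")));
}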
@@ -444,6 +447,25 @@ fn test_parse_patch() {
"The last line of the patch must be '*** End Patch'".to_string()
))
);
+assert_eq!(
+parse_patch_text(
+concat!(
+"*** Begin Patch",
+" ",
+"\n*** Add File: foo\n+hi\n",
+" ",
+"*** End Patch"
+),
+ParseMode::Strict
+)
+.unwrap()
+.hunks,
+vec![AddFile {
+path: PathBuf::from("foo"),
+contents: "hi\n".to_string()
+}]
+);
assert_eq!(
parse_patch_text(
"*** Begin Patch\n\

View File

@@ -0,0 +1 @@
obsolete

View File

@@ -0,0 +1,3 @@
*** Begin Patch
*** Delete File: obsolete.txt
*** End Patch

View File

@@ -0,0 +1,6 @@
*** Begin Patch
*** Update File: file.txt
@@
-one
+two
*** End Patch

View File

@@ -0,0 +1,2 @@
line1
line3

View File

@@ -0,0 +1,3 @@
line1
line2
line3

View File

@@ -0,0 +1,7 @@
*** Begin Patch
*** Update File: lines.txt
@@
line1
-line2
line3
*** End Patch

View File

@@ -0,0 +1,2 @@
first
second updated

View File

@@ -0,0 +1,2 @@
first
second

View File

@@ -0,0 +1,8 @@
*** Begin Patch
*** Update File: tail.txt
@@
first
-second
+second updated
*** End of File
*** End Patch

View File

@@ -1,3 +1,4 @@
+use codex_utils_cargo_bin::find_resource;
use pretty_assertions::assert_eq;
use std::collections::BTreeMap;
use std::fs;
@@ -8,7 +9,7 @@ use tempfile::tempdir;
#[test]
fn test_apply_patch_scenarios() -> anyhow::Result<()> {
-let scenarios_dir = Path::new(env!("CARGO_MANIFEST_DIR")).join("tests/fixtures/scenarios");
+let scenarios_dir = find_resource!("tests/fixtures/scenarios")?;
for scenario in fs::read_dir(scenarios_dir)? {
let scenario = scenario?;
let path = scenario.path();

View File

@@ -0,0 +1,6 @@
load("//:defs.bzl", "codex_rust_crate")
codex_rust_crate(
name = "arg0",
crate_name = "codex_arg0",
)

Some files were not shown because too many files have changed in this diff.