Compare commits

...

268 Commits

Author SHA1 Message Date
viyatb-oai
d3ce7907bc fix(core): flag dangerous git push refspec and delete forms 2026-02-01 13:43:50 -08:00
viyatb-oai
b3d811c338 fix(core): detect git branch delete flags in grouped short options 2026-01-30 16:06:19 -08:00
viyatb-oai
25ce92dde7 fix(core): remove redundant git suffix check and document stacked branch delete flags 2026-01-30 15:47:48 -08:00
viyatb-oai
f8c7b477d6 fix(core): tighten git and cargo safe-command rules 2026-01-30 14:07:48 -08:00
viyatb-oai
6468488fd6 fix(core): flag force push and clean as dangerous 2026-01-30 14:00:27 -08:00
viyatb-oai
552c05cff4 fix(core): stop scanning git args after first subcommand 2026-01-30 13:44:44 -08:00
viyatb-oai
f881a4c3c0 Merge branch 'main' into pr-10247 2026-01-30 13:19:54 -08:00
viyatb-oai
9df8b81696 test(core): use assert! for boolean safety checks 2026-01-30 13:14:47 -08:00
viyatb-oai
641a8db83d refactor(core): share git subcommand helper across safety checks 2026-01-30 13:08:01 -08:00
pakrym-oai
aacd530a41 Update copy (#10256)
<img width="839" height="62" alt="image"
src="https://github.com/user-attachments/assets/ca987cdb-9e8c-403e-8856-a9b37baa7673"
/>
2026-01-30 12:57:19 -08:00
daniel-oai
dd6c1d3787 Skip loading codex home as project layer (#10207)
Summary:
- Fixes issue #9932: https://github.com/openai/codex/issues/9932
- Prevents `$CODEX_HOME` (typically `~/.codex`) from being discovered as
a project `.codex` layer by skipping it during project layer traversal.
We compare both normalized absolute paths and best-effort canonicalized
paths to handle symlinks.
- Adds regression tests for home-directory invocation and for the case
where `CODEX_HOME` points to a project `.codex` directory (e.g.,
worktrees/editor integrations).

Testing:
- `cargo build -p codex-cli --bin codex`
- `cargo build -p codex-rmcp-client --bin test_stdio_server`
- `cargo test -p codex-core`
- `cargo test --all-features`
- Manual: ran `target/debug/codex` from `~` and confirmed the
disabled-folder warning and trust prompt no longer appear.
2026-01-30 12:42:07 -08:00
Charley Cunningham
83317ed4bf Make plan highlight use popup grey background (#10253)
## Summary
- align proposed plan background with popup surface color by reusing
`user_message_bg`
- remove the custom blue-tinted plan background

<img width="1572" height="1568" alt="image"
src="https://github.com/user-attachments/assets/63a5341e-4342-4c07-b6b0-c4350c3b2639"
/>
2026-01-30 12:39:15 -08:00
viyatb-oai
1e9258c8b3 fix(core): detect git branch deletion with global flags 2026-01-30 12:28:44 -08:00
Ahmed Ibrahim
b7351f7f53 plan prompt (#10255)
2026-01-30 12:22:37 -08:00
Charley Cunningham
2457bb3c40 Fix deploy (#10251)
Fix
https://github.com/openai/codex/actions/runs/21527697445/job/62035898666
2026-01-30 11:57:13 -08:00
Ahmed Ibrahim
9b29a48a09 Plan mode prompt (#10238)
2026-01-30 11:48:03 -08:00
Michael Bolin
e6d913af2d chore: rename ChatGpt -> Chatgpt in type names (#10244)
When using ChatGPT in names of types, we should be consistent, so this
renames some types with `ChatGpt` in the name to `Chatgpt`. From
https://rust-lang.github.io/api-guidelines/naming.html:

> In `UpperCamelCase`, acronyms and contractions of compound words count
as one word: use `Uuid` rather than `UUID`, `Usize` rather than `USize`
or `Stdin` rather than `StdIn`. In `snake_case`, acronyms and
contractions are lower-cased: `is_xid_start`.

This PR updates existing uses of `ChatGpt` and changes them to
`Chatgpt`. In all cases where it could affect the wire format, I
visually inspected that we don't change anything there. That said, this
_will_ change the codegen because it will affect the spelling of type
names.

For example, this renames `AuthMode::ChatGPT` to `AuthMode::Chatgpt` in
`app-server-protocol`, but the wire format is still `"chatgpt"`.

This PR also updates a number of types in `codex-rs/core/src/auth.rs`.
2026-01-30 11:18:39 -08:00
Charley Cunningham
2d10aa6859 Tui: hide Code mode footer label (#10063)
Title
Hide Code mode footer label/cycle hint; add Plan footer-collapse
snapshots

Summary
- Keep Code mode internal naming but suppress the footer mode label +
cycle hint when Code is active.
- Only show the cycle hint when a non‑Code mode indicator is present.
- Add Plan-mode footer collapse snapshot coverage (empty + queued,
across widths) and update existing footer collapse snapshots for the new
Code behavior.

Notes
- The test run currently fails in codex-cloud-requirements on
origin/main due to a stale auth.mode field; no fix is included in this
PR to keep the diff minimal.

Codex author
`codex resume 019c0296-cfd4-7193-9b0a-6949048e4546`
2026-01-30 11:15:21 -08:00
Eric Traut
901fa6d18a Fix unsafe auto-approval of git branch deletion
Diagnosis: Although the bug report specifically mentioned worktrees, this problem isn't worktree-specific. `git branch -d/-D/--delete` isn’t treated as dangerous, and `git branch` is currently treated as “safe,” so it won’t prompt under on-request. In a worktree, that’s extra risky because branch deletion writes to the common gitdir outside the workdir.

* Classify git branch -d/-D/--delete as dangerous to force approval under on-request.
* Restrict git branch “safe” classification to read-only forms (e.g., --show-current, --list).
* Add focused safety tests for branch deletion and read-only branch queries.

This addresses #10160
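
A minimal sketch of the classification, with a hypothetical helper name (`is_branch_delete_flag`); the real rules live in codex-core's safety checks:

```rust
/// Hypothetical sketch, not the actual implementation: flag `git branch`
/// invocations that delete, including grouped short options like `-fd`.
fn is_branch_delete_flag(arg: &str) -> bool {
    match arg {
        "-d" | "-D" | "--delete" => true,
        // grouped short options, e.g. `-fd` or `-Df`
        a if a.starts_with('-') && !a.starts_with("--") => {
            a.chars().skip(1).any(|c| c == 'd' || c == 'D')
        }
        _ => false,
    }
}

fn is_dangerous_git_branch(args: &[&str]) -> bool {
    args.iter().any(|a| is_branch_delete_flag(a))
}
```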
2026-01-30 11:02:12 -08:00
Charley Cunningham
ec4a2d07e4 Plan mode: stream proposed plans, emit plan items, and render in TUI (#9786)
## Summary
- Stream proposed plans in Plan Mode using `<proposed_plan>` tags parsed
in core, emitting plan deltas plus a plan `ThreadItem`, while stripping
tags from normal assistant output.
- Persist plan items and rebuild them on resume so proposed plans show
in thread history.
- Wire plan items/deltas through app-server protocol v2 and render a
dedicated proposed-plan view in the TUI, including the “Implement this
plan?” prompt only when a plan item is present.

## Changes

### Core (`codex-rs/core`)
- Added a generic, line-based tag parser that buffers each line until it
can disprove a tag prefix; implements auto-close on `finish()` for
unterminated tags (see the sketch after this section).
`codex-rs/core/src/tagged_block_parser.rs`
- Refactored proposed plan parsing to wrap the generic parser.
`codex-rs/core/src/proposed_plan_parser.rs`
- In plan mode, stream assistant deltas as:
  - **Normal text** → `AgentMessageContentDelta`
  - **Plan text** → `PlanDelta` + `TurnItem::Plan` start/completion  
  (`codex-rs/core/src/codex.rs`)
- Final plan item content is derived from the completed assistant
message (authoritative), not necessarily the concatenated deltas.
- Strips `<proposed_plan>` blocks from assistant text in plan mode so
tags don’t appear in normal messages.
(`codex-rs/core/src/stream_events_utils.rs`)
- Persist `ItemCompleted` events only for plan items for rollout replay.
(`codex-rs/core/src/rollout/policy.rs`)
- Guard `update_plan` tool in Plan Mode with a clear error message.
(`codex-rs/core/src/tools/handlers/plan.rs`)
- Updated Plan Mode prompt to:  
  - keep `<proposed_plan>` out of non-final reasoning/preambles  
  - require exact tag formatting  
  - allow only one `<proposed_plan>` block per turn  
  (`codex-rs/core/templates/collaboration_mode/plan.md`)
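
A simplified sketch of the parser's shape, assuming whole lines arrive at once (the real parser also buffers partial lines until it can disprove a tag prefix); all names here are illustrative:

```rust
enum Piece {
    Text(String),
    Plan(String),
}

/// Illustrative sketch of a line-based tagged-block parser.
struct TaggedBlockParser {
    tag: &'static str, // e.g. "proposed_plan"
    in_block: bool,
}

impl TaggedBlockParser {
    fn new(tag: &'static str) -> Self {
        Self { tag, in_block: false }
    }

    /// Classify one complete line as normal text or tagged-block content;
    /// the opening/closing tag lines themselves are swallowed.
    fn push_line(&mut self, line: &str, out: &mut Vec<Piece>) {
        if !self.in_block && line.trim() == format!("<{}>", self.tag) {
            self.in_block = true;
        } else if self.in_block && line.trim() == format!("</{}>", self.tag) {
            self.in_block = false;
        } else if self.in_block {
            out.push(Piece::Plan(line.to_string()));
        } else {
            out.push(Piece::Text(line.to_string()));
        }
    }

    /// Auto-close an unterminated block at end of message, mirroring the
    /// `finish()` behavior described above.
    fn finish(&mut self) {
        self.in_block = false;
    }
}
```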

### Protocol / App-server protocol
- Added `TurnItem::Plan` and `PlanDeltaEvent` to core protocol items.
(`codex-rs/protocol/src/items.rs`, `codex-rs/protocol/src/protocol.rs`)
- Added v2 `ThreadItem::Plan` and `PlanDeltaNotification` with
EXPERIMENTAL markers and note that deltas may not match the final plan
item. (`codex-rs/app-server-protocol/src/protocol/v2.rs`)
- Added plan delta route in app-server protocol common mapping.
(`codex-rs/app-server-protocol/src/protocol/common.rs`)
- Rebuild plan items from persisted `ItemCompleted` events on resume.
(`codex-rs/app-server-protocol/src/protocol/thread_history.rs`)

### App-server
- Forward plan deltas to v2 clients and map core plan items to v2 plan
items. (`codex-rs/app-server/src/bespoke_event_handling.rs`,
`codex-rs/app-server/src/codex_message_processor.rs`)
- Added v2 plan item tests.
(`codex-rs/app-server/tests/suite/v2/plan_item.rs`)

### TUI
- Added a dedicated proposed plan history cell with special background
and padding, and moved “• Proposed Plan” outside the highlighted block.
(`codex-rs/tui/src/history_cell.rs`, `codex-rs/tui/src/style.rs`)
- Only show “Implement this plan?” when a plan item exists.
(`codex-rs/tui/src/chatwidget.rs`,
`codex-rs/tui/src/chatwidget/tests.rs`)

<img width="831" height="847" alt="Screenshot 2026-01-29 at 7 06 24 PM"
src="https://github.com/user-attachments/assets/69794c8c-f96b-4d36-92ef-c1f5c3a8f286"
/>

### Docs / Misc
- Updated protocol docs to mention plan deltas.
(`codex-rs/docs/protocol_v1.md`)
- Minor plumbing updates in exec/debug clients to tolerate plan deltas.
(`codex-rs/debug-client/src/reader.rs`, `codex-rs/exec/...`)

## Tests
- Added core integration tests:
  - Plan mode strips plan from agent messages.
  - Missing `</proposed_plan>` closes at end-of-message.  
  (`codex-rs/core/tests/suite/items.rs`)
- Added unit tests for generic tag parser (prefix buffering, non-tag
lines, auto-close). (`codex-rs/core/src/tagged_block_parser.rs`)
- Existing app-server plan item tests in v2.
(`codex-rs/app-server/tests/suite/v2/plan_item.rs`)

## Notes / Behavior
- Plan output no longer appears in standard assistant text in Plan Mode;
it streams via `PlanDelta` and completes as a `TurnItem::Plan`.
- The final plan item content is authoritative and may diverge from
streamed deltas (documented as experimental).
- Reasoning summaries are not filtered; prompt instructs the model not
to include `<proposed_plan>` outside the final plan message.

## Codex Author
`codex fork 019bec2d-b09d-7450-b292-d7bcdddcdbfb`
2026-01-30 18:59:30 +00:00
Michael Bolin
40bf11bd52 chore: fix the build breakage that came from a merge race (#10239)
I think I needed to rebase on top of
https://github.com/openai/codex/pull/10167 before merging
https://github.com/openai/codex/pull/10208.
2026-01-30 10:29:54 -08:00
baumann-oai
1ce722ed2e plan mode: add TL;DR checkpoint and client behavior note (#10195)
## Summary
- Tightens Plan Mode to encourage exploration-first behavior and more
back-and-forth alignment.
- Adds a required TL;DR checkpoint before drafting the full plan.
- Clarifies client behavior that can cause premature “Implement this
plan?” prompts.

## What changed
- Require at least one targeted non-mutating exploration pass before the
first user question.
- Insert a TL;DR checkpoint between Phase 2 (intent) and Phase 3
(implementation).
- TL;DR checkpoint guidance:
  - Label: “Proposed Plan (TL;DR)”
  - Format: 3–5 bullets using `- `
  - Options: exactly one option, “Approve”
  - `isOther: true`, with explicit guidance that “None of the above” is
the edit path in the current UI.
- Require the final plan to include a TL;DR consistent with the approved
checkpoint.

## Why
- In Plan Mode, any normal assistant message at turn completion is
treated as plan content by the client. This can trigger premature
“Implement this plan?” prompts.
- The TL;DR checkpoint aligns on direction before Codex drafts a long,
decision-complete plan.

## Testing
- Manual: built the local CLI and verified the flow now explores first,
presents a TL;DR checkpoint, and only drafts the full plan after
approval.

---------

Co-authored-by: Nick Baumann <@openai.com>
2026-01-30 10:14:46 -08:00
gt-oai
5662eb8b75 Load exec policy rules from requirements (#10190)
`requirements.toml` should be able to specify rules which always run. 

My intention here was that these rules could only ever be restrictive,
which means the decision can be "prompt" or "forbidden" but never
"allow". A requirement of "you must always allow this command" didn't
make sense to me, but happy to be gaveled otherwise.

Rules already applies the most restrictive decision, so we can safely
merge these with rules found in other config folders.
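
A minimal sketch of that merge, assuming an ordering by restrictiveness (names hypothetical):

```rust
/// Sketch: derive `Ord` so "most restrictive wins" is just `max`.
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum Decision {
    Allow,     // least restrictive
    Prompt,
    Forbidden, // most restrictive
}

fn merge_decisions(decisions: impl IntoIterator<Item = Decision>) -> Decision {
    decisions.into_iter().max().unwrap_or(Decision::Allow)
}
```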
2026-01-30 18:04:09 +00:00
Dylan Hurd
23db79fae2 chore(feature) Experimental: Smart Approvals (#10211)
## Summary
Let's start getting feedback on this feature 😅 

## Testing
- [x] existing tests pass
2026-01-30 10:41:37 -07:00
Dylan Hurd
dfafc546ab chore(feature) Experimental: Personality (#10212)
## Summary
Let users start opting in to trying out personalities

## Testing
- [x] existing tests pass
2026-01-30 10:41:22 -07:00
Michael Bolin
377ab0c77c feat: refactor CodexAuth so invalid state cannot be represented (#10208)
Previously, `CodexAuth` was defined as follows:


d550fbf41a/codex-rs/core/src/auth.rs (L39-L46)

But if you looked at its constructors, we had creation for
`AuthMode::ApiKey` where `storage` was built using a nonsensical path
(`PathBuf::new()`) and `auth_dot_json` was `None`:


d550fbf41a/codex-rs/core/src/auth.rs (L212-L220)

By comparison, when `AuthMode::ChatGPT` was used, `api_key` was always
`None`:


d550fbf41a/codex-rs/core/src/auth.rs (L665-L671)

https://github.com/openai/codex/pull/10012 took things further because
it introduced a new `ChatgptAuthTokens` variant to `AuthMode`, which is
important when invoking `account/login/start` via the app server, but
most logic _internal_ to the app server should just reason about two
`AuthMode` variants: `ApiKey` and `ChatGPT`.

This PR tries to clean things up as follows:

- `LoginAccountParams` and `AuthMode` in `codex-rs/app-server-protocol/`
both continue to have the `ChatgptAuthTokens` variant, though it is used
exclusively for the on-the-wire messaging.
- `codex-rs/core/src/auth.rs` now has its own `AuthMode` enum, which
only has two variants: `ApiKey` and `ChatGPT`.
- `CodexAuth` has been changed from a struct to an enum. It is a
disjoint union where each variant (`ApiKey`, `ChatGpt`, and
`ChatGptAuthTokens`) has only the associated fields that make sense for
that variant.
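
A rough sketch of the resulting shape (field names here are assumptions, not the actual definitions in `auth.rs`):

```rust
struct AuthStorage; // placeholder for the on-disk auth.json handle

/// Sketch only: each variant carries just the state that makes sense
/// for it, so the nonsensical combinations can no longer be built.
enum CodexAuth {
    ApiKey {
        api_key: String,
    },
    ChatGpt {
        storage: AuthStorage,
    },
    ChatGptAuthTokens {
        id_token: String,     // account metadata
        access_token: String, // bearer token, held in memory only
    },
}
```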

---
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/10208).
* #10224
* __->__ #10208
2026-01-30 09:33:23 -08:00
jif-oai
0212f4010e nit: fix db with multiple metadata lines (#10237) 2026-01-30 17:32:10 +00:00
jif-oai
079f4952e0 feat: heuristic coloring of logs (#10228) 2026-01-30 18:26:49 +01:00
jif-oai
eff11f792b feat: improve logs client (#10229) 2026-01-30 18:23:18 +01:00
jif-oai
887bec0dee chore: do not clean the DB anymore (#10232) 2026-01-30 18:23:00 +01:00
jif-oai
09d25e91e9 fix: make sure the shell exists (#10222) 2026-01-30 14:18:31 +01:00
jif-oai
6cee538380 explorer prompt (#10225) 2026-01-30 13:50:33 +01:00
gt-oai
e85d019daa Fetch Requirements from cloud (#10167)
Load requirements from Codex Backend. It only does this for enterprise
customers signed in with ChatGPT.

Todo in follow-up PRs:
* Add to app-server and exec too
* Switch from fail-open to fail-closed on failure
2026-01-30 12:03:29 +00:00
pap-openai
1ef5455eb6 Conversation naming (#8991)
Session renaming:
- `/rename my_session`
- `/rename` without arg and passing an argument in `customViewPrompt`
- AppExitInfo shows resume hint using the session name if set instead of
uuid, defaults to uuid if not set
- Names are stored in `CODEX_HOME/sessions.jsonl`

Session resuming:
- `codex resume <name>` looks up the first entry in
`CODEX_HOME/sessions.jsonl` matching the name and resumes that session
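
A sketch of that lookup using `serde_json`, assuming a simple JSONL shape with `name` and `id` fields (both field names are assumptions):

```rust
use std::io::{BufRead, BufReader};
use std::path::Path;

/// Sketch: return the id of the first sessions.jsonl entry whose
/// "name" matches. Field names are assumptions for illustration.
fn lookup_session_id(codex_home: &Path, name: &str) -> Option<String> {
    let file = std::fs::File::open(codex_home.join("sessions.jsonl")).ok()?;
    for line in BufReader::new(file).lines().map_while(Result::ok) {
        let Ok(entry) = serde_json::from_str::<serde_json::Value>(&line) else {
            continue;
        };
        if entry.get("name").and_then(|v| v.as_str()) == Some(name) {
            return entry
                .get("id")
                .and_then(|v| v.as_str())
                .map(str::to_string);
        }
    }
    None
}
```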

---------

Co-authored-by: jif-oai <jif@openai.com>
2026-01-30 10:40:09 +00:00
jif-oai
25ad414680 chore: unify metric (#10220) 2026-01-30 11:13:43 +01:00
jif-oai
129787493f feat: backfill timing metric (#10218)
1. Add a metric to measure the backfill time
2. Add a unit to the timing histogram
2026-01-30 10:19:41 +01:00
Shijie Rao
a0ccef9d5c Chore: plan mode do not include free form question and always include isOther (#10210)
We should never ask a freeform question when planning and we should
always include isOther as an escape hatch.
2026-01-30 01:19:24 -08:00
Josh McKinney
c0cad80668 Add community links to startup tooltips (#10177)
## Summary
- add startup tooltip for OpenAI community Discord
- add startup tooltip for Codex community forum

## Testing
- not run (text-only tooltip change)
2026-01-30 10:14:15 +01:00
jif-oai
f8056e62d4 nit: actually run tests (#10217) 2026-01-30 10:02:46 +01:00
jif-oai
a270a28a06 feat: add output to /ps (#10154)
<img width="599" height="238" alt="Screenshot 2026-01-29 at 13 24 57"
src="https://github.com/user-attachments/assets/1e9a5af2-f649-476c-b310-ae4938814538"
/>
2026-01-30 09:00:44 +01:00
Matthew Zeng
34f89b12d0 MCP tool call approval (simplified version) (#10200)
Add elicitation approval request for MCP tool call requests.
2026-01-29 23:40:32 -08:00
Dylan Hurd
e3ab0bd973 chore(personality) new schema with fallbacks (#10147)
## Summary
Let's dial in this API contract a bit more, with more robust fallback
behavior when `model_instructions_template` is false.

Switches to a more explicit template / variables structure, with more
fallbacks.

## Testing
- [x] Adding unit tests
- [x] Tested locally
2026-01-30 00:10:12 -07:00
alexsong-oai
d550fbf41a load from yaml (#10194) 2026-01-29 21:44:12 -05:00
Josh McKinney
36f2fe8af9 feat(tui): route employee feedback follow-ups to internal link (#10198)
## Problem
OpenAI employees were sent to the public GitHub issue flow after
`/feedback`, which is the wrong follow-up path internally.

## Mental model
After feedback upload completes, we render a follow-up link/message.
That link should be audience-aware but must not change the upload
pipeline itself.

## Non-goals
- Changing how feedback is captured or uploaded
- Changing external user behavior

## Tradeoffs
We detect employees via the authenticated account email suffix
(`@openai.com`). If the email is unavailable (e.g., API key auth), we
default to the external behavior.

## Architecture
- Introduce `FeedbackAudience` and thread it from `App` -> `ChatWidget`
-> `FeedbackNoteView`
- Gate internal messaging/links on `FeedbackAudience::OpenAiEmployee`
- Internal follow-up link is now `http://go/codex-feedback-internal`
- External GitHub URL remains byte-for-byte identical

## Observability
No new telemetry; this only changes rendered follow-up instructions.

## Tests
- `just fmt`
- `cargo test -p codex-tui --lib`
2026-01-30 02:12:46 +00:00
willwang-openai
a9cf449a80 add error messages for the go plan type (#10181)
Adds support for the Go plan type
Updates rate limit error messages to point to the usage page
2026-01-30 01:17:25 +00:00
Celia Chen
7151387474 [feat] persist dynamic tools in session rollout file (#10130)
Add dynamic tools to rollout file for persistence & read from rollout on
resume. Ran a real example and spotted the following in the rollout
file:
```
{"timestamp":"2026-01-29T01:27:57.468Z","type":"session_meta","payload":{"id":"019c075d-3f0b-77e3-894e-c1c159b04b1e","timestamp":"2026-01-29T01:27:57.451Z","...."dynamic_tools":[{"name":"demo_tool","description":"Demo dynamic tool","inputSchema":{"additionalProperties":false,"properties":{"city":{"type":"string"}},"required":["city"],"type":"object"}}],"git":{"commit_hash":"ebc573f15c01b8af158e060cfedd401f043e9dfa","branch":"dev/cc/dynamic-tools","repository_url":"https://github.com/openai/codex.git"}}}
```
2026-01-30 01:10:00 +00:00
Owen Lin
c6e1288ef1 chore(app-server): document AuthMode (#10191)
Explain what this is and what it's used for.
2026-01-29 16:48:15 -08:00
Charley Cunningham
11958221a3 tui: add feature-gated /plan slash command to switch to Plan mode (#10103)
## Summary
Adds a simple `/plan` slash command in the TUI that switches the active
collaboration mode to Plan mode. The command is only available when the
`collaboration_modes` feature is enabled.

## Changes
- Add `plan_mask` helper in `codex-rs/tui/src/collaboration_modes.rs`
- Add `SlashCommand::Plan` metadata in
`codex-rs/tui/src/slash_command.rs`
- Implement and hard-gate `/plan` dispatch in
`codex-rs/tui/src/chatwidget.rs`
- Hide `/plan` when collaboration modes are disabled in
`codex-rs/tui/src/bottom_pane/slash_commands.rs`
- Update command popup tests in
`codex-rs/tui/src/bottom_pane/command_popup.rs`
- Add a focused unit test for `/plan` in
`codex-rs/tui/src/chatwidget/tests.rs`

## Behavior notes
- `/plan` is now a no-op if `Feature::CollaborationModes` is disabled.
- When enabled, `/plan` switches directly to Plan mode without opening
the picker.

## Codex author
`codex resume 019c05da-d7c3-7322-ae2c-3ca38d0ef702`
2026-01-29 16:40:43 -08:00
Owen Lin
81a17bb2c1 feat(app-server): support external auth mode (#10012)
This enables a new use case where `codex app-server` is embedded into a
parent application that will directly own the user's ChatGPT auth
lifecycle, which means it owns the user’s auth tokens and refreshes them
when necessary. The parent application would just want a way to pass in
the auth tokens for codex to use directly.

The idea is that we are introducing a new "auth mode" currently only
exposed via app server: **`chatgptAuthTokens`**, which consists of the
`id_token` (stores account metadata) and `access_token` (the bearer
token used directly for backend API calls). These auth tokens are only
stored in-memory. This new mode is in addition to the existing `apiKey`
and `chatgpt` auth modes.

This PR reuses the shape of our existing app-server account APIs as much
as possible:
- Update `account/login/start` with a new `chatgptAuthTokens` variant,
which will allow the client to pass in the tokens and have codex
app-server use them directly. Upon success, the server emits
`account/login/completed` and `account/updated` notifications.
- A new server->client request called
`account/chatgptAuthTokens/refresh` which the server can use whenever
the access token previously passed in has expired and it needs a new one
from the parent application.

I leveraged the core 401 retry loop which typically triggers auth token
refreshes automatically, but made it pluggable:
- **chatgpt** mode refreshes internally, as usual.
- **chatgptAuthTokens** mode calls the client via
`account/chatgptAuthTokens/refresh`, the client responds with updated
tokens, codex updates its in-memory auth, then retries. This RPC has a
10s timeout and handles JSON-RPC errors from the client.

Also some additional things:
- chatgpt logins are blocked while external auth is active (have to log
out first. typically clients will pick one OR the other, not support
both)
- `account/logout` clears external auth in memory
- Ensures that if `forced_chatgpt_workspace_id` is set via the user's
config, we respect it in both:
- `account/login/start` with `chatgptAuthTokens` (returns a JSON-RPC
error back to the client)
- `account/chatgptAuthTokens/refresh` (fails the turn, and on next
request app-server will send another `account/chatgptAuthTokens/refresh`
request to the client).
2026-01-29 23:46:04 +00:00
Colin Young
b79bf69af6 [Codex][CLI] Show model-capacity guidance on 429 (#10118)
###### Problem
Users get generic 429s with no guidance when a model is at capacity.
###### Solution
Detect model-cap headers, surface a clear “try a different model”
message, and keep behavior non‑intrusive (no auto‑switch).
###### Scope
CLI/TUI only; protocol + error mapping updated to carry model‑cap info.
###### Tests
- just fmt
- cargo test -p codex-tui
- cargo test -p codex-core --lib shell_snapshot::tests::try_new_creates_and_deletes_snapshot_file -- --nocapture (ran in isolated env)
- validate local build with backend
<img width="719" height="845" alt="image"
src="https://github.com/user-attachments/assets/1470b33d-0974-4b1f-b8e6-d11f892f4b54"
/>
2026-01-29 14:59:07 -08:00
natea-oai
ca9d417633 updating comment to better indicate intent of skipping quit in the main slash command menu (#10186)
Updates comment indicating intent for skipping `quit` in the main slash
command dropdown.
2026-01-29 14:41:42 -08:00
pakrym-oai
fbb3a30953 Remove WebSocket wire format (#10179)
I'd like WireApi to go away (when chat is removed) and WebSockets is
still responses API just over a different transport.
2026-01-29 13:50:53 -08:00
Michael Bolin
2d9ac8227a fix: /approvals -> /permissions (#10184)
I believe we should be recommending `/permissions` in light of
https://github.com/openai/codex/pull/9561.
2026-01-29 20:36:53 +00:00
Josh McKinney
03aee7140f Add features enable/disable subcommands (#10180)
## Summary
- add `codex features enable <feature>` and `codex features disable
<feature>`
- persist feature flag changes to `config.toml` (respecting profile)
- print the under-development feature warning when enabling prerelease
features
- keep `features list` behavior unchanged and add unit/integration tests
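
A sketch of what the subcommand surface might look like with clap (illustrative only; the actual definitions live in the CLI crate):

```rust
use clap::Subcommand;

/// Illustrative sketch of the `codex features` subcommands.
#[derive(Subcommand)]
enum FeaturesCommand {
    /// List known feature flags (behavior unchanged).
    List,
    /// Enable a feature and persist it to config.toml.
    Enable { feature: String },
    /// Disable a feature and persist it to config.toml.
    Disable { feature: String },
}
```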

## Testing
- cargo test -p codex-cli
2026-01-29 20:35:03 +00:00
Michael Bolin
48f203120d fix: unify npm publish call across shell-tool-mcp.yml and rust-release.yml (#10182)
We are seeing flakiness in the `npm publish` step for
https://www.npmjs.com/package/@openai/codex-shell-tool-mcp, so this is a
shot in the dark for a fix:

https://github.com/openai/codex/actions/runs/21490679301/job/61913765060

Note this removes `actions/checkout@v6` and `pnpm/action-setup@v4`
steps, which I believe are superfluous for the `npm publish` call.
2026-01-29 11:51:33 -08:00
xl-openai
bdd8a7d58b Better handling of skill dependencies on ENV VAR. (#9017)
An experimental flow for env var skill dependencies. Skills can now
declare required env vars in SKILL.md; if one is missing, the CLI prompts
the user for the value, and Core stores it in memory (eventually in a
local persistent store).
<img width="790" height="169" alt="image"
src="https://github.com/user-attachments/assets/cd928918-9403-43cb-a7e7-b8d59bcccd9a"
/>
2026-01-29 14:13:30 -05:00
Michael Bolin
b7f26d74f0 chore: ensure pnpm-workspace.yaml is up-to-date (#10140)
On the back of:

https://github.com/openai/codex/pull/10138

Let's ensure that every folder with a `package.json` is listed in
`pnpm-workspace.yaml` (not sure why `docs` was in there...) and that we
are using `pnpm` over `npm` consistently (which is why this PR deletes
`codex-cli/package-lock.json`).
2026-01-29 10:49:03 -08:00
pakrym-oai
3b1cddf001 Fall back to http when websockets fail (#10139)
I expect not all proxies work with websockets, fall back to http if
websockets fail.
2026-01-29 10:36:21 -08:00
jif-oai
798c4b3260 feat: reduce span exposition (#10171)
This only avoids the creation of duplicate spans
2026-01-29 18:15:22 +00:00
Josh McKinney
3e798c5a7d Add OpenAI docs MCP tooltip (#10175) 2026-01-29 17:34:59 +00:00
jif-oai
e6c4f548ab chore: unify log queries (#10152)
Unify log queries to only have SQLX code in the runtime and use it for
both the log client and for tests
2026-01-29 16:28:15 +00:00
jif-oai
d6631fb5a9 feat: add log retention and delete them after 90 days (#10151) 2026-01-29 16:55:01 +01:00
jif-oai
89c5f3c4d4 feat: adding thread ID to logs + filter in the client (#10150) 2026-01-29 16:53:30 +01:00
jif-oai
b654b7a9ae [experimental] nit: try to speed up apt-install 2 (#10164) 2026-01-29 15:59:56 +01:00
jif-oai
2945667dcc [experimental] nit: try to speed up apt-install (#10163) 2026-01-29 15:46:15 +01:00
jif-oai
d29129f352 nit: update npm (#10161) 2026-01-29 15:08:22 +01:00
jif-oai
4ba911d48c chore: improve client (#10149)
<img width="883" height="84" alt="Screenshot 2026-01-29 at 11 13 12"
src="https://github.com/user-attachments/assets/090a2fec-94ed-4c0f-aee5-1653ed8b1439"
/>
2026-01-29 11:25:22 +01:00
jif-oai
6a06726af2 feat: log db client (#10087)
```
just log -h
if [ "${1:-}" = "--" ]; then shift; fi; cargo run -p codex-state --bin logs_client -- "$@"
    Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.21s
     Running `target/debug/logs_client -h`
Tail Codex logs from state.sqlite with simple filters

Usage: logs_client [OPTIONS]

Options:
      --codex-home <CODEX_HOME>  Path to CODEX_HOME. Defaults to $CODEX_HOME or ~/.codex [env: CODEX_HOME=]
      --db <DB>                  Direct path to the SQLite database. Overrides --codex-home
      --level <LEVEL>            Log level to match exactly (case-insensitive)
      --from <RFC3339|UNIX>      Start timestamp (RFC3339 or unix seconds)
      --to <RFC3339|UNIX>        End timestamp (RFC3339 or unix seconds)
      --module <MODULE>          Substring match on module_path
      --file <FILE>              Substring match on file path
      --backfill <BACKFILL>      Number of matching rows to show before tailing [default: 200]
      --poll-ms <POLL_MS>        Poll interval in milliseconds [default: 500]
  -h, --help                     Print help
  ```
2026-01-29 11:11:47 +01:00
jif-oai
714dc8d8bd feat: async backfill (#10089) 2026-01-29 09:57:50 +00:00
jif-oai
780482da84 feat: add log db (#10086)
Add a log DB. The goal is just to store our logs in a `.sqlite` DB to
make it easier to crawl them and drop the oldest ones.
2026-01-29 10:23:03 +01:00
Michael Bolin
4d9ae3a298 fix: remove references to corepack (#10138)
Currently, our `npm publish` logic is failing.

There were a number of things that were merged recently that seemed to
contribute to this situation, though I think we have fixed most of them,
but this one stands out:

https://github.com/openai/codex/pull/10115

As best I can tell, we tried to fix the pnpm version to a specific hash,
but we did not do it consistently (though `shell-tool-mcp/package.json`
had it specified twice...), so for this PR, I ran:

```
$ git ls-files | grep package.json
codex-cli/package.json
codex-rs/responses-api-proxy/npm/package.json
package.json
sdk/typescript/package.json
shell-tool-mcp/package.json
```

and ensured that all of them now have this line:

```json
  "packageManager": "pnpm@10.28.2+sha512.41872f037ad22f7348e3b1debbaf7e867cfd448f2726d9cf74c08f19507c31d2c8e7a11525b983febc2df640b5438dee6023ebb1f84ed43cc2d654d2bc326264"
```

I also went and deleted all of the `corepack` stuff that was added by
https://github.com/openai/codex/pull/10115.

If someone can explain why we need it and verify it does not break `npm
publish`, then we can bring it back.
2026-01-28 23:31:25 -08:00
Josh McKinney
e70592f85a fix: ignore key release events during onboarding (#10131)
## Summary
- guard onboarding key handling to ignore KeyEventKind::Release
- handle key events at the onboarding screen boundary to avoid
double-triggering widgets

## Related
- https://github.com/ratatui/ratatui/issues/347

## Testing
- cd codex-rs && just fmt
- cd codex-rs && cargo test -p codex-tui
2026-01-28 22:13:53 -08:00
Dylan Hurd
b4b4763009 fix(ci) missing package.json for shell-mcp-tool (#10135)
## Summary
This _should_ be the final place to fix.
2026-01-28 22:58:55 -07:00
Dylan Hurd
be33de3f87 fix(tui) reorder personality command (#10134)
## Summary
Reorder it down the list

## Testing 
- [x] Tests pass
2026-01-28 22:51:57 -07:00
iceweasel-oai
8cc338aecf emit a metric when we can't spawn powershell (#10125)
This will help diagnose and measure the impact of a user-reported bug
with the elevated sandbox and powershell
2026-01-28 21:51:51 -08:00
Dylan Hurd
335713f7e9 chore(core) personality under development (#10133)
## Summary
Have one or two more changes coming in for this.
2026-01-28 22:00:48 -07:00
Matthew Zeng
b9cd089d1f [connectors] Support connectors part 2 - slash command and tui (#9728)
- [x] Support `/apps` slash command to browse the apps in tui.
- [x] Support inserting apps to prompt using `$`.
- [x] Lots of simplification/renaming from connectors to apps.
2026-01-28 19:51:58 -08:00
natea-oai
ecc66f4f52 removing quit from dropdown menu, but not autocomplete [cli] (#10128)
Currently we have both `/quit` and `/exit`, which do the same thing. This
removes `/quit` from the slash command menu but keeps it working as an
autocomplete option for those used to that command.

Screenshots: the `/quit` autocomplete and the slash command menu.
2026-01-28 17:52:27 -08:00
Dylan Hurd
9757e1418d chore(config) Update personality instructions (#10114)
## Summary
Add personality instructions so we can let users try it out, in tandem
with making it an experimental feature

## Testing
- [x] Tested locally
2026-01-29 01:14:44 +00:00
Ahmed Ibrahim
52609c6f42 Add app-server compaction item notifications tests (#10123)
- add v2 tests covering local + remote auto-compaction item
started/completed notifications
2026-01-29 01:00:38 +00:00
Dylan Hurd
ce3d764ae1 chore(config) personality as a feature (#10116)
## Summary
Sets up an explicit Feature flag for `/personality`, so users can now
opt in to it via `/experimental`. #10114 also updates the config

## Testing
- [x] Tested locally
2026-01-28 17:58:28 -07:00
Ahmed Ibrahim
26590d7927 Ensure auto-compaction starts after turn started (#10129)
Start auto-compaction only after TurnStarted is emitted. Add an
integration test for deterministic ordering.
2026-01-28 16:51:20 -08:00
zbarsky-openai
8497163363 [bazel] Improve runfiles handling (#10098)
We can't use the runfiles directory on Windows due to path lengths, so
swap to the manifest strategy. Parsing the manifest is a bit complex and
the format is changing in Bazel upstream, so pull in the official Rust
library (via a small hack to make it importable...) and clean up all the
associated logic to work cleanly in both Bazel and Cargo without extra
confusion.
2026-01-29 00:15:44 +00:00
mjr-openai
83d7c44500 update the ci pnpm workflow for shell-tool-mcp to use corepack for pnpm versioning (#10115)
This updates the CI workflows for shell-tool-mcp to use the pnpm version
from package.json and print it in the build for verification.

I have read the CLA Document and I hereby sign the CLA
2026-01-28 16:30:48 -07:00
Dylan Hurd
7b34cad1b1 fix(ci) more shell-tool-mcp issues (#10111)
## Summary
More pnpm upgrade issues.
2026-01-28 14:36:40 -07:00
sayan-oai
ff9fa56368 default enable compression, update test helpers (#10102)
set `enable_request_compression` flag to default-enabled.

update integration test helpers to decompress `zstd` if flag set.
2026-01-28 12:25:40 -08:00
zbarsky-openai
fe920d7804 [bazel] Fix the build (#10104) 2026-01-28 20:06:28 +00:00
Eric Traut
147e7118e0 Added tui.notifications_method config option (#10043)
This PR adds a new `tui.notifications_method` config option that accepts
values of "auto", "osc9" and "bel". It defaults to "auto", which
attempts to auto-detect whether the terminal supports OSC 9 escape
sequences and falls back to BEL if not.

The PR also removes the inconsistent handling of notifications on
Windows when WSL was used.
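
A sketch of how the accepted values might map onto a config enum (illustrative; the real option lives under `[tui]` in config.toml):

```rust
use serde::Deserialize;

/// Illustrative sketch of the `tui.notifications_method` values.
#[derive(Deserialize, Default, Clone, Copy)]
#[serde(rename_all = "lowercase")]
enum NotificationsMethod {
    /// Auto-detect OSC 9 support, falling back to BEL.
    #[default]
    Auto,
    /// Always emit the OSC 9 escape sequence.
    Osc9,
    /// Always emit the terminal bell.
    Bel,
}
```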
2026-01-28 12:00:32 -08:00
Dylan Hurd
f7699e0487 fix(ci) fix shell-tool-mcp version v2 (#10101)
## Summary
We had a merge conflict from the Linux musl fix; let's get this squared
away.
2026-01-28 12:56:26 -07:00
iceweasel-oai
66de985e4e allow elevated sandbox to be enabled without base experimental flag (#10028)
elevated flag = elevated sandbox
experimental flag = non-elevated sandbox
both = elevated
2026-01-28 11:38:29 -08:00
Ahmed Ibrahim
b7edeee8ca compaction (#10034)
2026-01-28 11:36:11 -08:00
sayan-oai
851617ff5a chore: deprecate old web search feature flags (#10097)
deprecate all old web search flags and aliases, including:
- `[features].web_search_request` and `[features].web_search_cached`
- `[tools].web_search`
- `[features].web_search`

slightly rework `legacy_usages` to enable pointing to non-features from
deprecated features; we need to point to `web_search` (not under
`[features]`) from things like `[features].web_search_cached` and
`[features].web_search_request`.

Added integration tests to confirm deprecation notice is shown on
explicit enablement and disablement of deprecated flags.
2026-01-28 10:55:57 -08:00
Jeremy Rose
b8156706e6 file-search: improve file query perf (#9939)
switch nucleo-matcher for nucleo and use a "file search session" w/ live
updating query instead of a single hermetic run per query.
2026-01-28 10:54:43 -08:00
Dylan Hurd
35e03a0716 Update shell-tool-mcp.yml (#10095)
## Summary
#10004 broke the builds for shell-tool-mcp.yml - we need to copy over
the build configuration from there.

## Testing
- [x] builds
2026-01-28 11:17:17 -07:00
zbarsky-openai
ad5f9e7370 Upgrade to rust 1.93 (#10080)
I needed to upgrade the Bazel one to get gnullvm artifacts and then
noticed the monorepo had drifted forward. They should move in lockstep.
Also, 1.93 has already shipped, so we can try that instead.
2026-01-28 17:46:18 +00:00
Charley Cunningham
96386755b6 Refine request_user_input TUI interactions and option UX (#10025)
## Summary
Overhaul the ask‑user‑questions TUI to support “Other/None” answers,
better notes handling, improved option selection
UX, and a safer submission flow with confirmation for unanswered
questions.

Screenshots:
- Multiple choice (number keys for quick selection; up/down or j/k to cycle through options)
- Tab to add notes
- Freeform (the enter tooltip is highlighted on the last question to indicate the questions UI is exited upon submission)
- Confirmation dialogue (submitting with unanswered questions)

## Key Changes
- **Options UI refresh**
- Render options as numbered entries; allow number keys to select &
submit.
- Remove “Option X/Y” header and allow the question UI height to expand
naturally.
- Keep spacing between question, options, and notes even when notes are
visible.
- Hide the title line and render the question prompt in cyan **only when
uncommitted**.

- **“Other / None of the above” support**
  - Wire `isOther` to add “None of the above”.
  - Add guidance text: “Optionally, add details in notes (tab).”

- **Notes composer UX**
- Remove “Notes” heading; place composer directly under the selected
option.
- Preserve pending paste placeholders across question navigation and
after submission.
  - Ctrl+C clears notes **only when the notes composer has focus**.
  - Ctrl+C now triggers an immediate redraw so the clear is visible.

- **Committed vs uncommitted state**
  - Introduce a unified `answer_committed` flag per question.
- Editing notes (including adding text or pastes) marks the answer
uncommitted.
- Changing the option highlight (j/k, up/down) marks the answer
uncommitted.
  - Clearing options (Backspace/Delete) also clears pending notes.
  - Question prompt turns cyan only when the answer is uncommitted.

- **Submission safety & confirmation**
  - Only submit notes/freeform text once explicitly committed.
- Last-question submit with unanswered questions shows a confirmation
dialog.
  - Confirmation options:
    1. **Proceed** (default)
    2. **Go back**
  - Description reflects count: “Submit with N unanswered question(s).”
  - Esc/Backspace in confirmation returns to first unanswered question.
  - Ctrl+C in confirmation interrupts and exits the overlay.

- **Footer hints**
- Cyan highlight restored for “enter to submit answer” / “enter to
submit all”.

## Codex author
`codex fork 019c00ed-323a-7000-bdb5-9f9c5a635bd9`
2026-01-28 09:41:59 -08:00
zbarsky-openai
74bd6d7178 [bazel] Enable remote cache compression (#10079)
BB already stores the blobs compressed so we may as well keep them
compressed in transfer
2026-01-28 17:26:57 +00:00
Dylan Hurd
2a624661ef Update shell-tool-mcp.yml (#10092)
## Summary
Remove pnpm version so we rely on package.json instead, and fix the
mismatch due to https://github.com/openai/codex/pull/9992
2026-01-28 10:03:47 -07:00
jif-oai
231406bd04 feat: sort metadata by date (#10083) 2026-01-28 16:19:08 +01:00
jif-oai
3878c3dc7c feat: sqlite 1 (#10004)
Add a `.sqlite` database to be used to store rollout metadata (and
later logs)
This PR is phase 1:
* Add the database and the required infrastructure
* Add a backfill of the database
* Persist the newly created rollout both in files and in the DB
* When we need to get metadata or a rollout, consider the `JSONL` as the
source of truth but compare the results with the DB and show any errors
2026-01-28 15:29:14 +01:00
jif-oai
dabafe204a feat: codex exec auto-subscribe to new threads (#9821) 2026-01-28 14:03:20 +01:00
gt-oai
71b8d937ed Add exec policy TOML representation (#10026)
We'd like to represent these in `requirements.toml`. This just adds the
representation and the tests, doesn't wire it up anywhere yet.
2026-01-28 12:00:10 +00:00
Dylan Hurd
996e09ca24 feat(core) RequestRule (#9489)
## Summary
Instead of trying to derive the prefix_rule for a command mechanically,
let's let the model decide for us.

## Testing
- [x] tested locally
2026-01-28 08:43:17 +00:00
iceweasel-oai
9f79365691 error code/msg details for failed elevated setup (#9941) 2026-01-27 23:06:10 -08:00
Dylan Hurd
fef3e36f67 fix(core) info cleanup (#9986)
## Summary
Simplify this logic a bit.
2026-01-27 21:15:15 -07:00
Matthew Zeng
3bb8e69dd3 [skills] Auto install MCP dependencies when running skills with dependency specs. (#9982)
Auto install MCP dependencies when running skills with dependency specs.
2026-01-27 19:02:45 -08:00
Charley Cunningham
add648df82 Restore image attachments/text elements when recalling input history (Up/Down) (#9628)
**Summary**
- Up/Down input history now restores image attachments and text elements
for local entries.
- Composer history stores rich local entries (text + text elements +
local image paths) while persistent history remains text-only.
- Added tests to verify history recall rehydrates image placeholders and
attachments in both `tui` and `tui2`.

**Changes**
- `tui/src/bottom_pane/chat_composer_history.rs`: store `HistoryEntry`
(text + elements + image paths) for local history; adapt navigation +
tests.
- `tui2/src/bottom_pane/chat_composer_history.rs`: same as above.
- `tui/src/bottom_pane/chat_composer.rs`: record rich history entries
and restore them on Up/Down; update Ctrl+C history and tests.
- `tui2/src/bottom_pane/chat_composer.rs`: same as above.
2026-01-27 18:39:59 -08:00
sayan-oai
1609f6aa81 fix: allow unknown fields on Notice in schema (#10041)
the `notice` field didn't allow unknown fields in the schema, leading to
issues where there shouldn't be any.

Now we allow unknown fields.
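
For illustration, serde-style deserialization ignores unknown fields unless `deny_unknown_fields` is opted into, so tolerating new fields amounts to not denying them (a sketch with a hypothetical field, not the actual `Notice` definition):

```rust
use serde::Deserialize;

// Sketch: without `#[serde(deny_unknown_fields)]`, extra fields in the
// incoming JSON are silently ignored rather than failing deserialization.
#[derive(Deserialize)]
struct Notice {
    message: Option<String>, // hypothetical field
}
```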

<img width="2260" height="720" alt="image"
src="https://github.com/user-attachments/assets/1de43b60-0d50-4a96-9c9c-34419270d722"
/>
2026-01-27 18:24:24 -08:00
sayan-oai
a90ab789c2 fix: enable per-turn updates to web search mode (#10040)
web_search can now be updated per-turn, for things like changes to
sandbox policy.

`SandboxPolicy::DangerFullAccess` now sets web_search to `live`, and the
default is still `cached`.

Added integration tests.
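
A sketch of the described precedence, with illustrative names: an explicit setting wins, otherwise the sandbox policy picks the mode:

```rust
#[derive(Clone, Copy)]
enum WebSearchMode {
    Cached,
    Live,
}

enum SandboxPolicy {
    ReadOnly,
    WorkspaceWrite,
    DangerFullAccess,
}

/// Sketch: explicit `web_search` config takes precedence; otherwise
/// DangerFullAccess implies live search and everything else stays cached.
fn effective_web_search(
    explicit: Option<WebSearchMode>,
    policy: &SandboxPolicy,
) -> WebSearchMode {
    explicit.unwrap_or(match policy {
        SandboxPolicy::DangerFullAccess => WebSearchMode::Live,
        _ => WebSearchMode::Cached,
    })
}
```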
2026-01-27 18:09:29 -08:00
SlKzᵍᵐ
3f3916e595 tui: stabilize shortcut overlay snapshots on WSL (#9359)
Fixes #9361

## Context
Split out from #9059 per review:
https://github.com/openai/codex/pull/9059#issuecomment-3757859033

## Summary
The shortcut overlay renders different paste-image bindings on WSL
(Ctrl+Alt+V) vs non-WSL (Ctrl+V), which makes snapshot tests
non-deterministic when run under WSL.

## Changes
- Gate WSL detection behind `cfg(not(test))` so snapshot tests are
deterministic across environments.
- Add a focused unit test that still asserts the WSL-specific
paste-image binding.

## Testing
- `just fmt`
- `just fix -p codex-tui`
- `just fix -p codex-tui2`
- `cargo test -p codex-tui`
- `cargo test -p codex-tui2`
2026-01-28 01:10:16 +00:00
Charley Cunningham
19d8f71a98 Ask user question UI footer improvements (#9949)
## Summary

Polishes the `request_user_input` TUI overlay

Screenshots:
- Question 1 (unanswered)
- Tab to add notes
- Question 2 (unanswered)
- Ctrl+p or h to go back to q1 (answered)
- Unanswered freeform

## Key changes

- Footer tips wrap at tip boundaries (no truncation mid‑tip); footer
height scales to wrapped tips.
- Keep tooltip text as Esc: interrupt in all states.
- Make the full Tab: add notes tip cyan/bold when applicable; hide notes
UI by default.
- Notes toggling/backspace:
- Tab opens notes when an option is selected; Tab again clears notes and
hides the notes UI.
    - Backspace in options clears the current selection.
    - Backspace in empty notes closes notes and returns to options.
- Selection/answering behavior:
- Option questions highlight a default option but are not answered until
Enter.
- Enter no longer auto‑selects when there’s no selection (prevents
accidental answers).
    - Notes submission can commit the selected option when present.
- Freeform questions require Enter with non‑empty text to mark answered;
drafts are not submitted unless committed.
- Unanswered cues:
    - Skipped option questions count as unanswered.
    - Unanswered question titles are highlighted for visibility.
- Typing/navigation in options:
    - Typing no longer opens notes; notes are Tab‑only.
- j/k move option selection; h/l switch questions (Ctrl+n/Ctrl+p still
work).

## Tests

- Added unit coverage for:
    - tip‑level wrapping
    - focus reset when switching questions with existing drafts
    - backspace clearing selection
    - backspace closing empty notes
    - typing in options does not open notes
    - freeform draft submission gating
    - h/l question navigation in options
- Updated snapshots, including narrow footer wrap.

## Why

These changes make the ask‑user‑question overlay:

- safer (no silent auto‑selection or accidental freeform submission),
- clearer (tips wrap cleanly and unanswered states stand out),
- more ergonomic (Tab explicitly controls notes; backspace acts like
undo/close).

## Codex author
`codex fork 019bfc3c-2c42-7982-9119-fee8b9315c2f`

---------

Co-authored-by: Ahmed Ibrahim <aibrahim@openai.com>
2026-01-27 14:57:07 -08:00
Josh McKinney
3ae966edd8 Clarify external editor env var message (#10030)
### Motivation
- Improve UX by making it explicit that `VISUAL`/`EDITOR` must be set
before launching Codex, not during a running session.

### Description
- Update the external editor error text in `codex-rs/tui/src/app.rs` to:
`"Cannot open external editor: set $VISUAL or $EDITOR before starting
Codex."` and run `just fmt` to apply formatting.

### Testing
- Ran `just fmt` successfully; attempted `cargo test -p codex-tui` but
it failed due to network errors when fetching git dependencies (tests
did not complete).

------
[Codex
Task](https://chatgpt.com/codex/tasks/task_i_6972c2c984948329b1a37d5c5839aff3)
2026-01-27 13:29:55 -08:00
blevy-oai
c7c2b3cf8d Show OAuth error descriptions in callback responses (#9654)
### Motivation
- The local OAuth callback server returned a generic "Invalid OAuth
callback" on failures even when the query contained an
`error_description`, making it hard to debug OAuth failures.

### Description
- Update `codex-rs/rmcp-client/src/perform_oauth_login.rs` to surface
`error_description` values from the callback query in the HTTP response.
- Introduce a `CallbackOutcome` enum and change `parse_oauth_callback`
to return it, parsing `code`, `state`, and `error_description` from the
query string.
- Change `spawn_callback_server` to match on `CallbackOutcome` and
return `OAuth error: <description>` with a 400 status when
`error_description` is present, while preserving the existing success
and invalid flows.
- Use inline formatting for the error response string.
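
A sketch of the outcome classification (shape inferred from the description above; the actual parsing lives in `perform_oauth_login.rs`):

```rust
enum CallbackOutcome {
    Success { code: String, state: String },
    Error { description: String },
    Invalid,
}

/// Sketch: pull `code`, `state`, and `error_description` out of the
/// callback query string and classify, with errors taking precedence.
fn parse_oauth_callback(query: &str) -> CallbackOutcome {
    let mut code = None;
    let mut state = None;
    let mut description = None;
    for (k, v) in url::form_urlencoded::parse(query.as_bytes()) {
        match k.as_ref() {
            "code" => code = Some(v.into_owned()),
            "state" => state = Some(v.into_owned()),
            "error_description" => description = Some(v.into_owned()),
            _ => {}
        }
    }
    match (code, state, description) {
        (_, _, Some(description)) => CallbackOutcome::Error { description },
        (Some(code), Some(state), None) => CallbackOutcome::Success { code, state },
        _ => CallbackOutcome::Invalid,
    }
}
```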

### Testing
- Ran `just fmt` in the `codex-rs` workspace to format changes
successfully.
- Ran `cargo test -p codex-rmcp-client` and all tests passed.

------
[Codex
Task](https://chatgpt.com/codex/tasks/task_i_6971aadc68d0832e93159efea8cd48a9)
2026-01-27 13:22:54 -08:00
K Bediako
337643b00a Fix: Render MCP image outputs regardless of ordering (#9815)
## What?
- Render an MCP image output cell whenever a decodable image block
exists in `CallToolResult.content` (including text-before-image or
malformed image before valid image).

## Why?
- Tool results that include caption text before the image currently drop
the image output cell.
- A malformed image block can also suppress later valid image output.

## How?
- Iterate `content` and return the first successfully decoded image
instead of only checking the first block.
- Add unit tests that cover text-before-image ordering and
invalid-image-before-valid.

## Before
```rust
let image = match result {
    Ok(mcp_types::CallToolResult { content, .. }) => {
        if let Some(mcp_types::ContentBlock::ImageContent(image)) = content.first() {
            // decode image (fails -> None)
        } else {
            None
        }
    }
    _ => None,
}?;
```
## After
```rust
let image = result
    .as_ref()
    .ok()?
    .content
    .iter()
    .find_map(decode_mcp_image)?;
```

## Risk / Impact
- Low: only affects image cell creation for MCP tool results; no change
for non-image outputs.

## Tests
- [x] `just fmt`
- [x] `cargo test -p codex-tui`
- [x] Rerun after branch update (2026-01-27): `just fmt`, `cargo test -p
codex-tui`

Manual testing

# Manual testing: MCP image tool result rendering (Codex TUI)

# Build the rmcp stdio test server binary:
cd codex-rs
cargo build -p codex-rmcp-client --bin test_stdio_server

# Register the server as an MCP server (absolute path to the built binary):
codex mcp add mcpimg -- /Users/joshka/code/codex-pr-review/codex-rs/target/debug/test_stdio_server

# Then in Codex TUI, ask it to call:
- mcpimg.image_scenario({"scenario":"image_only"})
- mcpimg.image_scenario({"scenario":"text_then_image","caption":"Here is the image:"})
- mcpimg.image_scenario({"scenario":"invalid_base64_then_image"})
- mcpimg.image_scenario({"scenario":"invalid_image_bytes_then_image"})
- mcpimg.image_scenario({"scenario":"multiple_valid_images"})
- mcpimg.image_scenario({"scenario":"image_then_text","caption":"Here is the image:"})
- mcpimg.image_scenario({"scenario":"text_only","caption":"Here is the image:"})

# Expected:
# - You should see an extra history cell: "tool result (image output)" when the
#   tool result contains at least one decodable image block (even if earlier
#   blocks are text or invalid images).


Fixes #9814

---------

Co-authored-by: Josh McKinney <joshka@openai.com>
2026-01-27 21:14:08 +00:00
sayan-oai
28051d18c6 enable live web search for DangerFullAccess sandbox policy (#10008)
Auto-enable live `web_search` tool when sandbox policy is
`DangerFullAccess`.

Explicitly setting `web_search` (canonical setting), or enabling
`web_search_cached` or `web_search_request` still takes precedence over
this sandbox-policy-driven enablement.
2026-01-27 20:09:05 +00:00
alexsong-oai
2f8a44baea Remove load from SKILL.toml fallback (#10007) 2026-01-27 12:06:40 -08:00
iceweasel-oai
30eb655ad1 really fix pwd for windows codex zip (#10011)
Co-authored-by: Michael Bolin <mbolin@openai.com>
2026-01-27 19:29:28 +00:00
Michael Bolin
700a29e157 chore: introduce *Args types for new() methods (#10009)
Constructors with long param lists can be hard to reason about when a
number of the args are `None`, in practice. Introducing a struct to use
as the args type helps make things more self-documenting.
2026-01-27 19:15:38 +00:00
iceweasel-oai
c40ad65bd8 remove sandbox globals. (#9797)
Threads sandbox updates through OverrideTurnContext for active turn
Passes computed sandbox type into safety/exec
2026-01-27 11:04:23 -08:00
Michael Bolin
894923ed5d feat: make it possible to specify --config flags in the SDK (#10003)
Updates the `CodexOptions` passed to the `Codex()` constructor in the
SDK to support a `config` property that is a map of configuration data
that will be transformed into `--config` flags passed to the invocation
of `codex`.

Therefore, something like this:

```typescript
const codex = new Codex({
  config: {
    show_raw_agent_reasoning: true,
    sandbox_workspace_write: { network_access: true },
  },
});
```

would result in the following args being added to the invocation of
`codex`:

```shell
--config show_raw_agent_reasoning=true --config sandbox_workspace_write.network_access=true
```
2026-01-27 10:47:07 -08:00
Owen Lin
fc0fd85349 fix(app-server, core): defer initial context write to rollout file until first turn (#9950)
### Overview
Currently, calling `thread/resume` always bumps the thread's
`updated_at` timestamp. This PR makes the `updated_at` timestamp change
only if a turn is triggered.

### Additional context
What we typically do on resuming a thread is **always** writing “initial
context” to the rollout file immediately. This initial context includes:
- Developer instructions derived from sandbox/approval policy + cwd
- Optional developer instructions (if provided)
- Optional collaboration-mode instructions
- Optional user instructions (if provided)
- Environment context (cwd, shell, etc.)

This PR defers writing the “initial context” to the rollout file until
the first `turn/start`, so we don't inadvertently bump the thread's
`updated_at` timestamp until a turn is actually triggered.

This works even though both `thread/resume` and `turn/start` accept
overrides (such as `model`, `cwd`, etc.) because the initial context is
seeded from the effective `TurnContext` in memory, computed at
`turn/start` time, after both sets of overrides have been applied.

**NOTE**: This is a very short-lived solution until we introduce sqlite.
Then we can remove this.
2026-01-27 10:41:54 -08:00
viyatb-oai
877b76bb9d feat(network-proxy): add a SOCKS5 proxy with policy enforcement (#9803)
### Summary
- Adds an optional SOCKS5 listener via `rama-socks5`
- SOCKS5 is disabled by default and gated by config
- Reuses existing policy enforcement and blocked-request recording
- Blocks SOCKS5 in limited mode to prevent method-policy bypass
- Applies bind clamping to the SOCKS5 listener

### Config
New/used fields under `network_proxy`:
- `enable_socks5`
- `socks_url`
- `enable_socks5_udp`
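
A sketch of how those fields might deserialize (types are assumptions; only the field names come from this PR):

```rust
use serde::Deserialize;

/// Sketch: SOCKS5 stays disabled unless explicitly enabled in config.
#[derive(Deserialize, Default)]
struct NetworkProxyConfig {
    #[serde(default)]
    enable_socks5: bool,
    #[serde(default)]
    socks_url: Option<String>,
    #[serde(default)]
    enable_socks5_udp: bool,
}
```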

### Scope
- Changes limited to `codex-rs/network-proxy` (+ `codex-rs/Cargo.lock`)

### Testing
```bash
cd codex-rs
just fmt
cargo test -p codex-network-proxy --offline
```
2026-01-27 10:09:39 -08:00
Charley Cunningham
538e1059a3 TUI footer: right-align context and degrade shortcut summary + mode cleanly (#9944)
## Summary
Refines the bottom footer layout to keep `% context left` right-aligned
while making the left side degrade cleanly

## Behavior with empty textarea
Full width:
<img width="607" height="62" alt="Screenshot 2026-01-26 at 2 59 59 PM"
src="https://github.com/user-attachments/assets/854f33b7-d714-40be-8840-a52eb3bda442"
/>
Less:
<img width="412" height="66" alt="Screenshot 2026-01-26 at 2 59 48 PM"
src="https://github.com/user-attachments/assets/9c501788-c3a2-4b34-8f0b-8ec4395b44fe"
/>
Min width:
<img width="218" height="77" alt="Screenshot 2026-01-26 at 2 59 33 PM"
src="https://github.com/user-attachments/assets/0bed2385-bdbf-4254-8ae4-ab3452243628"
/>

## Behavior with message in textarea and agent running (steer enabled)
Full width:
<img width="753" height="63" alt="Screenshot 2026-01-26 at 4 33 54 PM"
src="https://github.com/user-attachments/assets/1856b352-914a-44cf-813d-1cb50c7f183b"
/>

Less:
<img width="353" height="61" alt="Screenshot 2026-01-26 at 4 30 12 PM"
src="https://github.com/user-attachments/assets/d951c4d5-f3e7-4116-8fe1-6a6c712b3d48"
/>

Less:
<img width="304" height="64" alt="Screenshot 2026-01-26 at 4 30 51 PM"
src="https://github.com/user-attachments/assets/1433e994-5cbc-4e20-a98a-79eee13c8699"
/>

Less:
<img width="235" height="61" alt="Screenshot 2026-01-26 at 4 30 56 PM"
src="https://github.com/user-attachments/assets/e216c3c6-84cd-40fc-ae4d-83bf28947f0e"
/>

Less:
<img width="165" height="59" alt="Screenshot 2026-01-26 at 4 31 08 PM"
src="https://github.com/user-attachments/assets/027de5de-7185-47ce-b1cc-5363ea33d9b1"
/>

## Notes / Edge Cases
- In steer mode while typing, the queue hint no longer replaces the mode
label; it renders as `tab to queue message · {Mode}`.
- Collapse priorities differ by state:
  - With the queue hint active, `% context left` is hidden before
    shortening or dropping the queue hint.
  - In the empty + non-running state, `? for shortcuts` is dropped first,
    and `% context left` is only shown if `(shift+tab to cycle)` can also fit.
- Transient instructional states (`?` overlay, Esc hint, Ctrl+C/D
reminders, and flash/override hints) intentionally suppress the
mode label (and context) to focus the next action.

## Implementation Notes
- Renamed the base footer modes to make the state explicit:
`ComposerEmpty` and `ComposerHasDraft`, and compute the base mode
directly from emptiness.
- Unified collapse behavior in `single_line_footer_layout` for both base
  modes, with:
  - Queue-hint behavior that prefers keeping the queue hint over context.
  - A cycle-hint guard that prevents context from reappearing after
    `(shift+tab to cycle)` is dropped.
- Kept rendering responsibilities explicit:
  - `single_line_footer_layout` decides what fits.
  - `render_footer_line` renders a chosen line.
  - `render_footer_from_props` renders the canonical mode-to-text mapping.
- Expanded snapshot coverage:
  - Added `footer_collapse_snapshots` in `chat_composer.rs` to lock the
    distinct collapse states across widths.
  - Consolidated the width-aware snapshot helper usage (e.g.,
    `snapshot_composer_state_with_width`, `snapshot_footer_with_mode_indicator`).
2026-01-27 17:43:09 +00:00
jif-oai
067922a734 description in role type (#9993) 2026-01-27 17:20:07 +00:00
mjr-openai
dd24ac6b26 update pnpm to 10.28.2 to address security issues (#9992)
Updates pnpm to 10.28.2 to address security issues in prior versions of
pnpm that could allow dependencies to execute lifecycle scripts against policy.

I have read the CLA Document and I hereby sign the CLA
2026-01-27 09:19:43 -08:00
gt-oai
ddc704d4c6 backend-client: add get_config_requirements_file (#10001)
Adds a `get_config_requirements_file` method to the backend-client.

I made a slash command to test it (not included in this PR):
<img width="726" height="330" alt="Screenshot 2026-01-27 at 15 20 41"
src="https://github.com/user-attachments/assets/97222e7c-5078-485a-a5b2-a6630313901e"
/>
2026-01-27 16:59:53 +00:00
jif-oai
3b726d9550 chore: clean orchestrator prompt (#9994) 2026-01-27 16:32:05 +00:00
jif-oai
74ffbbe7c1 nit: better unused prompt (#9991) 2026-01-27 13:03:12 +00:00
jif-oai
742f086ee6 nit: better tool description (#9988) 2026-01-27 12:46:51 +00:00
K Bediako
ab99df0694 Fix: cap aggregated exec output consistently (#9759)
## WHAT?
- Bias aggregated output toward stderr under contention (2/3 stderr, 1/3
stdout) while keeping the 1 MiB cap.
- Rebalance unused stderr share back to stdout when stderr is tiny to
avoid underfilling.
- Add tests for contention, small-stderr rebalance, and under-cap
ordering (stdout then stderr).

## WHY?
- Review feedback requested stderr priority under contention.
- Avoid underfilled aggregated output when stderr is small while
preserving a consistent cap across exec paths.

## HOW?
- Update `aggregate_output` to compute stdout/stderr shares, then
reassign unused capacity to the other stream.
- Use the helper in both Windows and async exec paths.
- Add regression tests for contention/rebalance and under-cap ordering.

## BEFORE
```rust
// Best-effort aggregate: stdout then stderr (capped).
let mut aggregated = Vec::with_capacity(
    stdout
        .text
        .len()
        .saturating_add(stderr.text.len())
        .min(EXEC_OUTPUT_MAX_BYTES),
);
append_capped(&mut aggregated, &stdout.text, EXEC_OUTPUT_MAX_BYTES);
append_capped(&mut aggregated, &stderr.text, EXEC_OUTPUT_MAX_BYTES);
let aggregated_output = StreamOutput {
    text: aggregated,
    truncated_after_lines: None,
};
```

## AFTER
```rust
fn aggregate_output(
    stdout: &StreamOutput<Vec<u8>>,
    stderr: &StreamOutput<Vec<u8>>,
) -> StreamOutput<Vec<u8>> {
    let total_len = stdout.text.len().saturating_add(stderr.text.len());
    let max_bytes = EXEC_OUTPUT_MAX_BYTES;
    let mut aggregated = Vec::with_capacity(total_len.min(max_bytes));

    if total_len <= max_bytes {
        aggregated.extend_from_slice(&stdout.text);
        aggregated.extend_from_slice(&stderr.text);
        return StreamOutput {
            text: aggregated,
            truncated_after_lines: None,
        };
    }

    // Under contention, reserve 1/3 for stdout and 2/3 for stderr; rebalance unused stderr to stdout.
    let want_stdout = stdout.text.len().min(max_bytes / 3);
    let want_stderr = stderr.text.len();
    let stderr_take = want_stderr.min(max_bytes.saturating_sub(want_stdout));
    let remaining = max_bytes.saturating_sub(want_stdout + stderr_take);
    let stdout_take = want_stdout + remaining.min(stdout.text.len().saturating_sub(want_stdout));

    aggregated.extend_from_slice(&stdout.text[..stdout_take]);
    aggregated.extend_from_slice(&stderr.text[..stderr_take]);

    StreamOutput {
        text: aggregated,
        truncated_after_lines: None,
    }
}
```

## TESTS
- [x] `just fmt`
- [x] `just fix -p codex-core`
- [x] `cargo test -p codex-core aggregate_output_`
- [x] `cargo test -p codex-core`
- [x] `cargo test --all-features`

## FIXES
Fixes #9758
2026-01-27 09:29:12 +00:00
Ahmed Ibrahim
509ff1c643 Fixing main and make plan mode reasoning effort medium (#9980)
It's overthinking so much on high and going over the context window.
2026-01-26 22:30:24 -08:00
Ahmed Ibrahim
cabb2085cc make plan prompt less detailed (#9977)
This was too much to ask for
2026-01-26 21:42:01 -08:00
Ahmed Ibrahim
4db6da32a3 tui: wrapping user input questions (#9971) 2026-01-26 21:30:09 -08:00
sayan-oai
0adcd8aa86 make cached web_search client-side default (#9974)
[Experiment](https://console.statsig.com/50aWbk2p4R76rNX9lN5VUw/experiments/codex_web_search_rollout/summary)
for default cached `web_search` completed; cached chosen as default.

Update client to reflect that.
2026-01-26 21:25:40 -08:00
Ahmed Ibrahim
28bd7db14a plan prompt (#9975)
2026-01-26 21:14:05 -08:00
Ahmed Ibrahim
0c72d8fd6e prompt (#9970)
2026-01-26 20:27:57 -08:00
Eric Traut
7c96f2e84c Fix resume --last with --json option (#9475)
Fix resume --last prompt parsing by dropping the clap conflict on the
codex resume subcommand so a positional prompt is accepted when --last
is set. This aligns interactive resume behavior with exec-mode logic and
avoids the “--last cannot be used with SESSION_ID” error.

This addresses #6717
2026-01-26 20:20:57 -08:00
Ahmed Ibrahim
f45a8733bf prompt final (#9969)
hopefully final this time (at least tonight) >_<
2026-01-26 20:12:43 -08:00
Ahmed Ibrahim
b655a092ba Improve plan mode prompt (#9968)
2026-01-26 19:56:16 -08:00
Ahmed Ibrahim
b7bba3614e plan prompt v7 (#9966)
2026-01-26 19:34:18 -08:00
sayan-oai
86adf53235 fix: handle all web_search actions and in progress invocations (#9960)
### Summary
- Parse all `web_search` tool actions (`search`, `find_in_page`,
`open_page`).
- Previously we only parsed + displayed `search`, which made the TUI
appear to pause when the other actions were being used.
- Show in-progress `web_search` calls as `Searching the web`
  - Previously we only showed completed tool calls

<img width="308" height="149" alt="image"
src="https://github.com/user-attachments/assets/90a4e8ff-b06a-48ff-a282-b57b31121845"
/>

### Tests
Added + updated tests, tested locally

### Follow ups
Update VSCode extension to display these as well
2026-01-27 03:33:48 +00:00
pakrym-oai
998e88b12a Use test_codex more (#9961)
Reduces boilerplate.
2026-01-26 18:52:10 -08:00
Ahmed Ibrahim
c900de271a Warn users on enabling under-development features (#9954)
<img width="938" height="73" alt="image"
src="https://github.com/user-attachments/assets/a2d5ac46-92c5-4828-b35e-0965c30cdf36"
/>
2026-01-27 01:58:05 +00:00
alexsong-oai
a641a6427c feat: load interface metadata from SKILL.json (#9953) 2026-01-27 01:38:06 +00:00
jif-oai
5d13427ef4 NIT larger buffer (#9957) 2026-01-27 01:26:55 +00:00
Ahmed Ibrahim
394b967432 Reuse ChatComposer in request_user_input overlay (#9892)
Reuse the shared chat composer for notes and freeform answers in
request_user_input.

- Build the overlay composer with ChatComposerConfig::plain_text.
- Wire paste-burst flushing + menu surface sizing through the bottom
pane.
2026-01-26 17:21:41 -08:00
Eric Traut
6a279f6d77 Updated contribution guidelines (#9933) 2026-01-26 17:13:25 -08:00
Charley Cunningham
47aa1f3b6a Reject request_user_input outside Plan/Pair (#9955)
## Context

Previous work in https://github.com/openai/codex/pull/9560 only rejected
`request_user_input` in Execute and Custom modes. Since then, additional
modes
(e.g., Code) were added, so the guard should be mode-agnostic.

## What changed

- Switch the handler to an allowlist: only Plan and PairProgramming are
allowed
- Return the same error for any other mode (including Code)
- Add a Code-mode rejection test alongside the existing Execute/Custom
tests

## Why

This prevents `request_user_input` from being used in modes where it is not
intended, even as new modes are introduced.
2026-01-26 17:12:17 -08:00
jif-oai
73bd84dee0 fix: try to fix freezes 2 (#9951)
Fixes a TUI freeze caused by awaiting `mpsc::Sender::send()`, which blocks
the tokio thread, stopping the consumption runtime and creating a
deadlock. This could happen when the server produced chunks fast enough
to fill the `mpsc` channel. To solve this, we first attempt a `try_send()`
(which does not require an `await`) and delegate to a tokio task if that
fails.

This is a temporary solution, as it can introduce races between delta
elements; a stronger design should follow here.
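
A minimal sketch of the workaround, assuming a tokio `mpsc` channel and a hypothetical chunk type:

```rust
use tokio::sync::mpsc::{error::TrySendError, Sender};

// Hypothetical payload standing in for the real streamed chunk type.
struct Chunk(String);

// Enqueue without awaiting on the hot path: a full channel falls back to
// an async send on a spawned task instead of blocking the runtime thread.
fn send_nonblocking(tx: &Sender<Chunk>, chunk: Chunk) {
    match tx.try_send(chunk) {
        Ok(()) => {}
        Err(TrySendError::Full(chunk)) => {
            let tx = tx.clone();
            tokio::spawn(async move {
                // Ordering with other queued chunks is no longer
                // guaranteed here, which is the race noted above.
                let _ = tx.send(chunk).await;
            });
        }
        Err(TrySendError::Closed(_)) => {
            // Receiver is gone; drop the chunk.
        }
    }
}
```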
2026-01-27 01:02:22 +00:00
JBallin
32b062d0e1 fix: use brew upgrade --cask codex to avoid warnings and ambiguity (#9823)
Fixes #9822 

### Summary

Make the Homebrew upgrade command explicit by using `brew upgrade --cask
codex`.

### Motivation

During the Codex self-update, Homebrew can emit an avoidable warning
because the
name `codex` resolves to a cask:

```
Warning: Formula codex was renamed to homebrew/cask/codex.
```

While the upgrade succeeds, this relies on implicit name resolution and
produces
unnecessary output during the update flow.

### Why `--cask`

* Eliminates warning/noise for users
* Explicitly matches how Codex is distributed via Homebrew
* Avoids reliance on name resolution behavior
* Makes the command more robust if a `codex` formula is ever introduced

### Context

This restores the `--cask` flag that was removed in #6238 after being
considered
“not necessary” during review:
[https://github.com/openai/codex/pull/6238#discussion_r2505947880](https://github.com/openai/codex/pull/6238#discussion_r2505947880).

Co-authored-by: Eric Traut <etraut@openai.com>
2026-01-26 16:21:09 -08:00
Matt Ridley
f29a0defa2 fix: remove cli tooltip references to custom prompts (#9901)
Custom prompts are now deprecated; however, they are still referenced in
tooltips. Remove the relevant tips from the repository.

Closes #9900
2026-01-26 15:55:44 -08:00
dependabot[bot]
2e5aa809f4 chore(deps): bump globset from 0.4.16 to 0.4.18 in /codex-rs (#9884)
Bumps [globset](https://github.com/BurntSushi/ripgrep) from 0.4.16 to
0.4.18.
<details>
<summary>Commits</summary>
<ul>
<li><a
href="0b0e013f5a"><code>0b0e013</code></a>
globset-0.4.18</li>
<li><a
href="cac9870a02"><code>cac9870</code></a>
doc: update date in man page template</li>
<li><a
href="24e88dc15b"><code>24e88dc</code></a>
ignore/types: add <code>ssa</code> type</li>
<li><a
href="5748f81bb1"><code>5748f81</code></a>
printer: use <code>doc_cfg</code> instead of
<code>doc_auto_cfg</code></li>
<li><a
href="d47663b1b4"><code>d47663b</code></a>
searcher: fix regression with <code>--line-buffered</code> flag</li>
<li><a
href="38d630261a"><code>38d6302</code></a>
printer: add Cursor hyperlink alias</li>
<li><a
href="b3dc4b0998"><code>b3dc4b0</code></a>
globset: improve debug log</li>
<li><a
href="ca2e34f37c"><code>ca2e34f</code></a>
grep-0.4.0</li>
<li><a
href="a0d61a063f"><code>a0d61a0</code></a>
grep-printer-0.3.0</li>
<li><a
href="c22fc0f13c"><code>c22fc0f</code></a>
deps: bump to grep-searcher 0.1.15</li>
<li>Additional commits viewable in <a
href="https://github.com/BurntSushi/ripgrep/compare/globset-0.4.16...globset-0.4.18">compare
view</a></li>
</ul>
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-26 15:55:19 -08:00
dependabot[bot]
6418e65356 chore(deps): bump axum from 0.8.4 to 0.8.8 in /codex-rs (#9883)
Bumps [axum](https://github.com/tokio-rs/axum) from 0.8.4 to 0.8.8.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/tokio-rs/axum/releases">axum's
releases</a>.</em></p>
<blockquote>
<h2>axum v0.8.8</h2>
<ul>
<li>Clarify documentation for <code>Router::route_layer</code> (<a
href="https://redirect.github.com/tokio-rs/axum/issues/3567">#3567</a>)</li>
</ul>
<p><a
href="https://redirect.github.com/tokio-rs/axum/issues/3567">#3567</a>:
<a
href="https://redirect.github.com/tokio-rs/axum/pull/3567">tokio-rs/axum#3567</a></p>
<h2>axum v0.8.7</h2>
<ul>
<li>Relax implicit <code>Send</code> / <code>Sync</code> bounds on
<code>RouterAsService</code>, <code>RouterIntoService</code> (<a
href="https://redirect.github.com/tokio-rs/axum/issues/3555">#3555</a>)</li>
<li>Make it easier to visually scan for default features (<a
href="https://redirect.github.com/tokio-rs/axum/issues/3550">#3550</a>)</li>
<li>Fix some documentation typos</li>
</ul>
<p><a
href="https://redirect.github.com/tokio-rs/axum/issues/3550">#3550</a>:
<a
href="https://redirect.github.com/tokio-rs/axum/pull/3550">tokio-rs/axum#3550</a>
<a
href="https://redirect.github.com/tokio-rs/axum/issues/3555">#3555</a>:
<a
href="https://redirect.github.com/tokio-rs/axum/pull/3555">tokio-rs/axum#3555</a></p>
<h2>axum v0.8.5</h2>
<ul>
<li><strong>fixed:</strong> Reject JSON request bodies with trailing
characters after the JSON document (<a
href="https://redirect.github.com/tokio-rs/axum/issues/3453">#3453</a>)</li>
<li><strong>added:</strong> Implement <code>OptionalFromRequest</code>
for <code>Multipart</code> (<a
href="https://redirect.github.com/tokio-rs/axum/issues/3220">#3220</a>)</li>
<li><strong>added:</strong> Getter methods <code>Location::{status_code,
location}</code></li>
<li><strong>added:</strong> Support for writing arbitrary binary data
into server-sent events (<a
href="https://redirect.github.com/tokio-rs/axum/issues/3425">#3425</a>)]</li>
<li><strong>added:</strong>
<code>middleware::ResponseAxumBodyLayer</code> for mapping response body
to <code>axum::body::Body</code> (<a
href="https://redirect.github.com/tokio-rs/axum/issues/3469">#3469</a>)</li>
<li><strong>added:</strong> <code>impl FusedStream for WebSocket</code>
(<a
href="https://redirect.github.com/tokio-rs/axum/issues/3443">#3443</a>)</li>
<li><strong>changed:</strong> The <code>sse</code> module and
<code>Sse</code> type no longer depend on the <code>tokio</code> feature
(<a
href="https://redirect.github.com/tokio-rs/axum/issues/3154">#3154</a>)</li>
<li><strong>changed:</strong> If the location given to one of
<code>Redirect</code>s constructors is not a valid header value, instead
of panicking on construction, the <code>IntoResponse</code> impl now
returns an HTTP 500, just like <code>Json</code> does when serialization
fails (<a
href="https://redirect.github.com/tokio-rs/axum/issues/3377">#3377</a>)</li>
<li><strong>changed:</strong> Update minimum rust version to 1.78 (<a
href="https://redirect.github.com/tokio-rs/axum/issues/3412">#3412</a>)</li>
</ul>
<p><a
href="https://redirect.github.com/tokio-rs/axum/issues/3154">#3154</a>:
<a
href="https://redirect.github.com/tokio-rs/axum/pull/3154">tokio-rs/axum#3154</a>
<a
href="https://redirect.github.com/tokio-rs/axum/issues/3220">#3220</a>:
<a
href="https://redirect.github.com/tokio-rs/axum/pull/3220">tokio-rs/axum#3220</a>
<a
href="https://redirect.github.com/tokio-rs/axum/issues/3377">#3377</a>:
<a
href="https://redirect.github.com/tokio-rs/axum/pull/3377">tokio-rs/axum#3377</a>
<a
href="https://redirect.github.com/tokio-rs/axum/issues/3412">#3412</a>:
<a
href="https://redirect.github.com/tokio-rs/axum/pull/3412">tokio-rs/axum#3412</a>
<a
href="https://redirect.github.com/tokio-rs/axum/issues/3425">#3425</a>:
<a
href="https://redirect.github.com/tokio-rs/axum/pull/3425">tokio-rs/axum#3425</a>
<a
href="https://redirect.github.com/tokio-rs/axum/issues/3443">#3443</a>:
<a
href="https://redirect.github.com/tokio-rs/axum/pull/3443">tokio-rs/axum#3443</a>
<a
href="https://redirect.github.com/tokio-rs/axum/issues/3453">#3453</a>:
<a
href="https://redirect.github.com/tokio-rs/axum/pull/3453">tokio-rs/axum#3453</a>
<a
href="https://redirect.github.com/tokio-rs/axum/issues/3469">#3469</a>:
<a
href="https://redirect.github.com/tokio-rs/axum/pull/3469">tokio-rs/axum#3469</a></p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="d07863f97d"><code>d07863f</code></a>
Release axum v0.8.8 and axum-extra v0.12.3</li>
<li><a
href="287c674b65"><code>287c674</code></a>
axum-extra: Make typed-routing feature enable routing feature (<a
href="https://redirect.github.com/tokio-rs/axum/issues/3514">#3514</a>)</li>
<li><a
href="f5804aa6a1"><code>f5804aa</code></a>
SecondElementIs: Correct a small inconsistency (<a
href="https://redirect.github.com/tokio-rs/axum/issues/3559">#3559</a>)</li>
<li><a
href="f51f3ba436"><code>f51f3ba</code></a>
axum-extra: Add trailing newline to pretty JSON response (<a
href="https://redirect.github.com/tokio-rs/axum/issues/3526">#3526</a>)</li>
<li><a
href="816407a816"><code>816407a</code></a>
Fix integer underflow in <code>try_range_response</code> for empty files
(<a
href="https://redirect.github.com/tokio-rs/axum/issues/3566">#3566</a>)</li>
<li><a
href="78656ebb4a"><code>78656eb</code></a>
docs: Clarify <code>route_layer</code> does not apply middleware to the
fallback handler...</li>
<li><a
href="4404f27cea"><code>4404f27</code></a>
Release axum v0.8.7 and axum-extra v0.12.2</li>
<li><a
href="8f1545adec"><code>8f1545a</code></a>
Fix typo in extractors guide (<a
href="https://redirect.github.com/tokio-rs/axum/issues/3554">#3554</a>)</li>
<li><a
href="4fc3faa0b4"><code>4fc3faa</code></a>
Relax implicit Send / Sync bounds (<a
href="https://redirect.github.com/tokio-rs/axum/issues/3555">#3555</a>)</li>
<li><a
href="a05920c906"><code>a05920c</code></a>
Make it easier to visually scan for default features (<a
href="https://redirect.github.com/tokio-rs/axum/issues/3550">#3550</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/tokio-rs/axum/compare/axum-v0.8.4...axum-v0.8.8">compare
view</a></li>
</ul>
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-26 15:54:58 -08:00
dependabot[bot]
764712c116 chore(deps): bump tokio-test from 0.4.4 to 0.4.5 in /codex-rs (#9882)
Bumps [tokio-test](https://github.com/tokio-rs/tokio) from 0.4.4 to
0.4.5.
<details>
<summary>Commits</summary>
<ul>
<li><a
href="41d1877689"><code>41d1877</code></a>
chore: prepare tokio-test 0.4.5 (<a
href="https://redirect.github.com/tokio-rs/tokio/issues/7831">#7831</a>)</li>
<li><a
href="60b083b630"><code>60b083b</code></a>
chore: prepare tokio-stream 0.1.18 (<a
href="https://redirect.github.com/tokio-rs/tokio/issues/7830">#7830</a>)</li>
<li><a
href="9cc02cc88d"><code>9cc02cc</code></a>
chore: prepare tokio-util 0.7.18 (<a
href="https://redirect.github.com/tokio-rs/tokio/issues/7829">#7829</a>)</li>
<li><a
href="d2799d791b"><code>d2799d7</code></a>
task: improve the docs of <code>Builder::spawn_local</code> (<a
href="https://redirect.github.com/tokio-rs/tokio/issues/7828">#7828</a>)</li>
<li><a
href="4d4870f291"><code>4d4870f</code></a>
task: doc that task drops before JoinHandle completion (<a
href="https://redirect.github.com/tokio-rs/tokio/issues/7825">#7825</a>)</li>
<li><a
href="fdb150901a"><code>fdb1509</code></a>
fs: check for io-uring opcode support (<a
href="https://redirect.github.com/tokio-rs/tokio/issues/7815">#7815</a>)</li>
<li><a
href="426a562780"><code>426a562</code></a>
rt: remove <code>allow(dead_code)</code> after <code>JoinSet</code>
stabilization (<a
href="https://redirect.github.com/tokio-rs/tokio/issues/7826">#7826</a>)</li>
<li><a
href="e3b89bbefa"><code>e3b89bb</code></a>
chore: prepare Tokio v1.49.0 (<a
href="https://redirect.github.com/tokio-rs/tokio/issues/7824">#7824</a>)</li>
<li><a
href="4f577b84e9"><code>4f577b8</code></a>
Merge 'tokio-1.47.3' into 'master'</li>
<li><a
href="f320197693"><code>f320197</code></a>
chore: prepare Tokio v1.47.3 (<a
href="https://redirect.github.com/tokio-rs/tokio/issues/7823">#7823</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/tokio-rs/tokio/compare/tokio-test-0.4.4...tokio-test-0.4.5">compare
view</a></li>
</ul>
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-26 15:51:21 -08:00
dependabot[bot]
5ace350186 chore(deps): bump tracing from 0.1.43 to 0.1.44 in /codex-rs (#9880)
Bumps [tracing](https://github.com/tokio-rs/tracing) from 0.1.43 to
0.1.44.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/tokio-rs/tracing/releases">tracing's
releases</a>.</em></p>
<blockquote>
<h2>tracing 0.1.44</h2>
<h3>Fixed</h3>
<ul>
<li>Fix <code>record_all</code> panic (<a
href="https://redirect.github.com/tokio-rs/tracing/issues/3432">#3432</a>)</li>
</ul>
<h3>Changed</h3>
<ul>
<li><code>tracing-core</code>: updated to 0.1.36 (<a
href="https://redirect.github.com/tokio-rs/tracing/issues/3440">#3440</a>)</li>
</ul>
<p><a
href="https://redirect.github.com/tokio-rs/tracing/issues/3432">#3432</a>:
<a
href="https://redirect.github.com/tokio-rs/tracing/pull/3432">tokio-rs/tracing#3432</a>
<a
href="https://redirect.github.com/tokio-rs/tracing/issues/3440">#3440</a>:
<a
href="https://redirect.github.com/tokio-rs/tracing/pull/3440">tokio-rs/tracing#3440</a></p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="2d55f6faf9"><code>2d55f6f</code></a>
chore: prepare tracing 0.1.44 (<a
href="https://redirect.github.com/tokio-rs/tracing/issues/3439">#3439</a>)</li>
<li><a
href="10a9e838a3"><code>10a9e83</code></a>
chore: prepare tracing-core 0.1.36 (<a
href="https://redirect.github.com/tokio-rs/tracing/issues/3440">#3440</a>)</li>
<li><a
href="ee82cf92a8"><code>ee82cf9</code></a>
tracing: fix record_all panic (<a
href="https://redirect.github.com/tokio-rs/tracing/issues/3432">#3432</a>)</li>
<li><a
href="9978c3663b"><code>9978c36</code></a>
chore: prepare tracing-mock 0.1.0-beta.3 (<a
href="https://redirect.github.com/tokio-rs/tracing/issues/3429">#3429</a>)</li>
<li><a
href="cc44064b3a"><code>cc44064</code></a>
chore: prepare tracing-subscriber 0.3.22 (<a
href="https://redirect.github.com/tokio-rs/tracing/issues/3428">#3428</a>)</li>
<li>See full diff in <a
href="https://github.com/tokio-rs/tracing/compare/tracing-0.1.43...tracing-0.1.44">compare
view</a></li>
</ul>
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-26 15:48:45 -08:00
Ahmed Ibrahim
a8f195828b Add composer config and shared menu surface helpers (#9891)
Centralize built-in slash-command gating and extract shared menu-surface
helpers.

- Add bottom_pane::slash_commands and reuse it from composer + command
popup.
- Introduce ChatComposerConfig + shared menu surface rendering without
changing default behavior.
2026-01-26 23:16:29 +00:00
David Gilbertson
313ee3003b fix: handle utf-8 in windows sandbox logs (#8647)
Currently `apply_patch` will fail on Windows if the file contents happen
to have a multi-byte character at the point where the `preview` function
truncates.

I've used the existing `take_bytes_at_char_boundary` helper and added a
regression test (that fails without the fix).

This is related to #4013 but doesn't fix it.
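
A minimal sketch of what a char-boundary-safe truncation looks like, assuming semantics similar to the existing helper (this is not the repo's actual implementation):

```rust
// Return the longest prefix of `s` that fits in `max_bytes` without
// splitting a multi-byte UTF-8 character.
fn take_bytes_at_char_boundary(s: &str, max_bytes: usize) -> &str {
    if s.len() <= max_bytes {
        return s;
    }
    let mut end = max_bytes;
    // Walk back until we land on a char boundary (at most 3 steps).
    while !s.is_char_boundary(end) {
        end -= 1;
    }
    &s[..end]
}

#[test]
fn truncates_without_splitting_multibyte_chars() {
    // "é" is two bytes in UTF-8; cutting at byte 1 would split it.
    assert_eq!(take_bytes_at_char_boundary("é", 1), "");
    assert_eq!(take_bytes_at_char_boundary("abé", 3), "ab");
}
```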
2026-01-26 15:11:27 -08:00
Ahmed Ibrahim
159ff06281 plan prompt (#9943)
2026-01-26 14:48:54 -08:00
blevy-oai
bdc4742bfc Add MCP server scopes config and use it as fallback for OAuth login (#9647)
### Motivation
- Allow MCP OAuth flows to request scopes defined in `config.toml`
instead of requiring users to always pass `--scopes` on the CLI.
CLI/remote parameters should still override config values.

### Description
- Add optional `scopes: Option<Vec<String>>` to `McpServerConfig` and
`RawMcpServerConfig`, and propagate it through deserialization and the
built config types.
- Serialize `scopes` into the MCP server TOML via
`serialize_mcp_server_table` in `core/src/config/edit.rs` and include
`scopes` in the generated config schema (`core/config.schema.json`).
- CLI: update `codex-rs/cli/src/mcp_cmd.rs` `run_login` to fall back to
`server.scopes` when the `--scopes` flag is empty, with explicit CLI
scopes still taking precedence (see the sketch after this list).
- App server: update
`codex-rs/app-server/src/codex_message_processor.rs`
`mcp_server_oauth_login` to use `params.scopes.or_else(||
server.scopes.clone())` so the RPC path also respects configured scopes.
- Update many test fixtures to initialize the new `scopes` field (set to
`None`) so test code builds with the new struct field.
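
A minimal sketch of the scope-precedence rule referenced above, with hypothetical names standing in for the CLI and config inputs:

```rust
// `cli_scopes` stands in for `--scopes` values (or RPC `params.scopes`);
// `config_scopes` stands in for the new optional `scopes` field from
// config.toml. Explicit scopes win; config is only a fallback.
fn effective_scopes(cli_scopes: Vec<String>, config_scopes: Option<Vec<String>>) -> Vec<String> {
    if cli_scopes.is_empty() {
        config_scopes.unwrap_or_default()
    } else {
        cli_scopes
    }
}
```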

### Testing
- Ran config tooling and formatters: `just write-config-schema`
(succeeded), `just fmt` (succeeded), and `just fix -p codex-core`, `just
fix -p codex-cli`, `just fix -p codex-app-server` (succeeded where
applicable).
- Ran unit tests for the CLI: `cargo test -p codex-cli` (passed).
- Ran unit tests for core: `cargo test -p codex-core` (ran; many tests
passed but several failed, including model refresh/403-related tests,
shell snapshot/timeouts, and several `unified_exec` expectations).
- Ran app-server tests: `cargo test -p codex-app-server` (ran; many
integration-suite tests failed due to mocked/remote HTTP 401/403
responses and wiremock expectations).

If you want, I can split the tests into smaller focused runs or help
debug the failing integration tests (they appear to be unrelated to the
config change and stem from external HTTP/mocking behaviors encountered
during the test runs).

------
[Codex
Task](https://chatgpt.com/codex/tasks/task_i_69718f505914832ea1f334b3ba064553)
2026-01-26 14:13:04 -08:00
jif-oai
247fb2de64 [app-server] feat: add filtering on thread list (#9897) 2026-01-26 21:54:19 +00:00
iceweasel-oai
6a02fdde76 ensure codex bundle zip is created in dist/ (#9934)
cd-ing into the tmp bundle directory was putting the .zip in the wrong
place
2026-01-26 21:39:00 +00:00
Eric Traut
b77bf4d36d Aligned feature stage names with public feature maturity stages (#9929)
We've recently standardized a [feature maturity
model](https://developers.openai.com/codex/feature-maturity) that we're
using in our docs and support forums to communicate expectations to
users. This PR updates the internal stage names and descriptions to
match.

This change involves a simple internal rename and updates to a few
user-visible strings. No functional change.
2026-01-26 11:43:36 -08:00
Charley Cunningham
62266b13f8 Add thread/unarchive to restore archived rollouts (#9843)
## Summary
- Adds a new `thread/unarchive` RPC to move archived thread rollouts
back into the active `sessions/` tree.

## What changed
- **Protocol**
  - Adds `thread/unarchive` request/response types and wiring.
- **Server**
  - Implements `thread_unarchive` in the app server.
  - Validates the archived rollout path and thread ID.
- Restores the rollout to `sessions/YYYY/MM/DD/...` based on the rollout
filename timestamp.
- **Core**
- Adds `find_archived_thread_path_by_id_str` helper for archived
rollouts.
- **Docs**
  - Documents the new RPC and usage example.
- **Tests**
  - Adds an end-to-end server test that:
    1) starts a thread,
    2) archives it,
    3) unarchives it,
    4) asserts the file is restored to `sessions/`.

## How to use
```json
{ "method": "thread/unarchive", "id": 24, "params": { "threadId": "<thread-id>" } }
```

## Author Codex Session

`codex resume 019bf158-54b6-7960-a696-9d85df7e1bc1` (soon I'll make this
kind of session UUID forkable by anyone with the right
`session_object_storage_url` line in their config, but for now just
pasting it here for my reference)
2026-01-26 11:24:36 -08:00
jif-oai
09251387e0 chore: update interrupt message (#9925) 2026-01-26 19:07:54 +00:00
Ahmed Ibrahim
e471ebc5d2 prompt (#9928)
2026-01-26 10:27:18 -08:00
Gene Oden
375a5ef051 fix: attempt to reduce high cpu usage when using collab (#9776)
Reproduce with a prompt like this with collab enabled:
```
Examine the code at <some subdirectory with a deeply nested project>.  Find the most urgent issue to resolve and describe it to me.
```

Existing behavior causes the top-level agent to busy wait on subagents.
2026-01-26 10:07:25 -08:00
gt-oai
fdc69df454 Fix flakey shell snapshot test (#9919)
Sometimes fails with:

```
failures:

  ---- shell_snapshot::tests::timed_out_snapshot_shell_is_terminated stdout ----

  thread 'shell_snapshot::tests::timed_out_snapshot_shell_is_terminated' panicked at codex-rs/core/src/shell_snapshot.rs:588:9:
  expected timeout error, got Failed to execute sh

  Caused by:
      Text file busy (os error 26)


  failures:
      shell_snapshot::tests::timed_out_snapshot_shell_is_terminated

  test result: FAILED. 815 passed; 1 failed; 4 ignored; 0 measured; 0 filtered out; finished in 18.00s
```
2026-01-26 18:05:30 +00:00
jif-oai
01d7f8095b feat: codex exec mapping of collab tools (#9817)
THIS IS NOT THE FINAL UX
2026-01-26 18:01:35 +00:00
Shijie Rao
3ba702c5b6 Feat: add isOther to question returned by request user input tool (#9890)
### Summary
Add `isOther` to question object from request_user_input tool input and
remove `other` option from the tool prompt to better handle tool input.
2026-01-26 09:52:38 -08:00
gt-oai
6316e57497 Fix up config disabled err msg (#9916)
**Before:**
<img width="745" height="375" alt="image"
src="https://github.com/user-attachments/assets/d6c23562-b87f-4af9-8642-329aab8e594d"
/>

**After:**
<img width="1042" height="354" alt="image"
src="https://github.com/user-attachments/assets/c9a2413c-c945-4c34-8b7e-c6c9b8fbf762"
/>

Two changes:
1. only display if there is a `config.toml` that is skipped (i.e. if
there is just `.codex/skills` but no `.codex/config.toml` we do not
display the error)
2. clarify the implications and the fix in the error message.
2026-01-26 17:49:31 +00:00
jif-oai
70d5959398 feat: disable collab at max depth (#9899) 2026-01-26 17:05:36 +00:00
jif-oai
3f338e4a6a feat: explorer collab (#9918) 2026-01-26 16:21:42 +00:00
gt-oai
48aeb67f7a Fix flakey conversation flow test (#9784)
I've seen this test fail with:

```
 - Mock #1.
        	Expected range of matching incoming requests: == 2
        	Number of matched incoming requests: 1
```

This is because we pop the wrong task_complete events and then the test
exits. I think this is because the MCP events are now buffered after
https://github.com/openai/codex/pull/8874.

So:
1. clear the buffer before we do any user message sending
2. additionally listen for task start before task complete
3. use the ID from task start to find the correct task complete event.
2026-01-26 15:58:14 +00:00
gt-oai
65c7119fb7 Fix flakey resume test (#9789)
Sessions' `updated_at` times are truncated to seconds, with the UUID
session ID used to break ties. If the two test sessions are created in
the same second, AND the session B UUID < session A UUID, the test
fails.

Fix this by mutating the session mtimes, from which we derive the
updated_at time, to ensure session B is updated_at later than session A.
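
A minimal sketch of the idea, assuming the `filetime` crate and hypothetical session paths:

```rust
use std::path::Path;

use filetime::{set_file_mtime, FileTime};

// Push session B's mtime strictly after session A's so the derived
// `updated_at` ordering is deterministic even when both sessions were
// created within the same second.
fn separate_session_mtimes(session_a: &Path, session_b: &Path) -> std::io::Result<()> {
    set_file_mtime(session_a, FileTime::from_unix_time(1_700_000_000, 0))?;
    set_file_mtime(session_b, FileTime::from_unix_time(1_700_000_010, 0))?;
    Ok(())
}
```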
2026-01-26 14:44:37 +00:00
jif-oai
c66662c61b feat: rebase multi-agent tui on config_snapshot (#9818) 2026-01-26 10:18:47 +00:00
jif-oai
d594693d1a feat: dynamic tools injection (#9539)
## Summary
Add dynamic tool injection to thread startup in API v2, wire dynamic
tool calls through the app server to clients, and plumb responses back
into the model tool pipeline.

### Flow (high level)
- Thread start injects `dynamic_tools` into the model tool list for that
thread (validation is done here).
- When the model emits a tool call for one of those names, core raises a
`DynamicToolCallRequest` event.
- The app server forwards it to the client as `item/tool/call`, waits
for the client’s response, then submits a `DynamicToolResponse` back to
core.
- Core turns that into a `function_call_output` in the next model
request so the model can continue.

### What changed
- Added dynamic tool specs to v2 thread start params and protocol types;
introduced `item/tool/call` (request/response) for dynamic tool
execution.
- Core now registers dynamic tool specs at request time and routes those
calls via a new dynamic tool handler.
- App server validates tool names/schemas, forwards dynamic tool call
requests to clients, and publishes tool outputs back into the session.
- Integration tests
2026-01-26 10:06:44 +00:00
Dylan Hurd
25fccc3d4d chore(core) move model_instructions_template config (#9871)
## Summary
Move `model_instructions_template` config to the experimental slug while
we iterate on this feature

## Testing
- [x] Tested locally, unit tests still pass
2026-01-26 07:02:11 +00:00
Dylan Hurd
031bafd1fb feat(tui) /personality (#9718)
## Summary
Adds /personality selector in the TUI, which leverages the new core
interface in #9644

Notes:
- We are doing some of our own state management for model_info loading
here, but not sure if that's ideal. open to opinions on simpler
approach, but would like to avoid blocking on a larger refactor
- Right now, the `/personality` selector just hides when the model
doesn't support it. we can update this behavior down the line

## Testing
- [x] Tested locally
- [x] Added snapshot tests
2026-01-25 21:59:42 -08:00
Ahmed Ibrahim
d27f2533a9 Plan prompt (#9877)
2026-01-25 19:50:35 -08:00
Ahmed Ibrahim
0f798173d7 Prompt (#9874)
2026-01-25 18:24:25 -08:00
Ahmed Ibrahim
cb2bbe5cba Adjust modes masks (#9868) 2026-01-25 12:44:17 -08:00
Ahmad Sohail Raoufi
dd2d68e69e chore: remove extra newline in println (#9850)
## Summary

This PR makes a minor formatting adjustment to a `println!` message by
removing an extra empty line and explicitly using `\n` for clarity.

## Changes

- Adjusted console output formatting for the success message.
- No functional or behavioral changes.
2026-01-25 10:44:15 -08:00
jif-oai
8fea8f73d6 chore: halve max number of sub-agents (#9861)
https://openai.slack.com/archives/C095U48JNL9/p1769359138786499?thread_ts=1769190766.962719&cid=C095U48JNL9
2026-01-25 17:51:55 +01:00
jif-oai
73b5274443 feat: cap number of agents (#9855)
Adding more guards to agents:
* Max depth of 1 (i.e. a sub-agent can't spawn another one)
* Max 12 sub-agents in total
2026-01-25 14:57:22 +00:00
jif-oai
a748600c42 Revert "Revert "fix: musl build"" (#9847)
Fix for
77222492f9
2026-01-25 08:50:31 -05:00
pakrym-oai
b332482eb1 Mark collab as beta (#9834)
Co-authored-by: jif-oai <jif@openai.com>
2026-01-25 11:13:21 +01:00
Ahmed Ibrahim
58450ba2a1 Use collaboration mode masks without mutating base settings (#9806)
Keep an unmasked base collaboration mode and apply the active mask on
demand. Simplify the TUI mask helpers and update tests/docs to match the
mask contract.
2026-01-25 07:35:31 +00:00
Ahmed Ibrahim
24230c066b Revert "fix: libcc link" (#9841)
Reverts openai/codex#9819
2026-01-25 06:58:56 +00:00
Charley Cunningham
18acec09df Ask for cwd choice when resuming session from different cwd (#9731)
# Summary
- Fix resume/fork config rebuild so cwd changes inside the TUI produce a
fully rebuilt Config (trust/approval/sandbox) instead of mutating only
the cwd.
- Preserve `--add-dir` behavior across resume/fork by normalizing
relative roots to absolute paths once (based on the original cwd).
- Prefer latest `TurnContext.cwd` for resume/fork prompts but fall back
to `SessionMeta.cwd` if the latest cwd no longer exists.
- Align resume/fork selection handling and ensure UI config matches the
resumed thread config.
- Fix Windows test TOML path escaping in trust-level test.

# Details
- Rebuild Config via `ConfigBuilder` when resuming into a different cwd;
carry forward runtime approval/sandbox overrides.
- Add `normalize_harness_overrides_for_cwd` to resolve relative
`additional_writable_roots` against the initial cwd before reuse.
- Guard `read_session_cwd` with filesystem existence check for the
latest `TurnContext.cwd`.
- Update naming/flow around cwd comparison and prompt selection.

<img width="603" height="150" alt="Screenshot 2026-01-23 at 5 42 13 PM"
src="https://github.com/user-attachments/assets/d1897386-bb28-4e8a-98cf-187fdebbecb0"
/>

And proof the model understands the new cwd:

<img width="828" height="353" alt="Screenshot 2026-01-22 at 5 36 45 PM"
src="https://github.com/user-attachments/assets/12aed8ca-dec3-4b64-8dae-c6b8cff78387"
/>
2026-01-24 21:57:19 -08:00
Matthew Zeng
182000999c Raise welcome animation breakpoint to 37 rows (#9778)
### Motivation
- The large ASCII welcome animation can push onboarding content below
the fold on default-height terminals, making the CLI appear
unresponsive; raising the breakpoint prevents that.
- The existing test measured an arbitrary row count rather than
asserting the welcome line position relative to the animation frame,
which made the intent unclear.

### Description
- Increase `MIN_ANIMATION_HEIGHT` from `20` to `37` in
`codex-rs/tui/src/onboarding/welcome.rs` so the animation is skipped
unless there is enough vertical space.
- Replace the brittle measurement logic in the welcome render test with
a `row_containing` helper and assert the welcome row equals the frame
height plus the spacer line (`frame_lines + 1`).
- Add a regression test
`welcome_skips_animation_below_height_breakpoint` that verifies the
animation is not rendered when the viewport height is one row below the
breakpoint.

### Testing
- Ran formatting with `~/.cargo/bin/just fmt` which completed
successfully.
- Ran unit tests for the crate with `cargo test -p codex-tui --lib` and
they passed (unit test suite succeeded).
- Ran `cargo test -p codex-tui` which reported a failing integration
test in this environment because the test cannot locate the `codex`
binary, so full crate tests are blocked here (environment limitation).

------
[Codex
Task](https://chatgpt.com/codex/tasks/task_i_6973b0a710d4832c9ff36fac26eb1519)
2026-01-24 21:50:35 -08:00
Ahmed Ibrahim
652f08e98f Revert "fix: musl build" (#9840)
Reverts openai/codex#9820
2026-01-25 04:46:53 +00:00
Charley Cunningham
279c9534a1 Prevent backspace from removing a text element when the cursor is at the element’s left edge (#9630)
**Summary**
- Prevent backspace from removing a text element when the cursor is at
the element’s left edge.
- Instead just delete the char before the placeholder (moving it to the
left).
2026-01-24 10:41:39 -08:00
Max Kong
e2bd9311c9 fix(windows-sandbox): remove request files after read (#9316)
## Summary
- Remove elevated runner request files after read (best-effort cleanup
on errors)
- Add a unit test to cover request file lifecycle

## Testing
- `cargo test -p codex-windows-sandbox` (Windows)

Fixes #9315
2026-01-24 10:23:37 -08:00
jif-oai
2efcdf4062 fix: musl build (#9820) 2026-01-24 16:56:28 +01:00
jif-oai
3651608365 fix: libcc link (#9819) 2026-01-24 16:32:06 +01:00
jif-oai
83775f4df1 feat: ephemeral threads (#9765)
Adds ephemeral thread capabilities, exposed only through the
`app-server` v2.

The idea is to disable the rollout recorder for those threads.
2026-01-24 14:57:40 +00:00
jif-oai
515ac2cd19 feat: add thread spawn source for collab tools (#9769) 2026-01-24 14:21:34 +00:00
Charley Cunningham
eb7558ba85 Remove batman reference from experimental prompt (#9812)
https://www.reddit.com/r/codex/comments/1qldbmg/if_you_enable_experimental_subagents_in_openai/
2026-01-24 14:24:36 +01:00
Eric Traut
713ae22c04 Another round of improvements for config error messages (#9746)
In a [recent PR](https://github.com/openai/codex/pull/9182), I made some
improvements to config error messages so errors didn't leave app server
clients in a dead state. This is a follow-on PR to make these error
messages more readable and actionable for both TUI and GUI users. For
example, see #9668 where the user was understandably confused about the
source of the problem and how to fix it.

The improved error message:
1. Clearly identifies the config file where the error was found (which
is more important now that we support layered configs)
2. Provides a line and column number of the error
3. Displays the line where the error occurred and underlines it

For example, if my `config.toml` includes the following:
```toml
[features]
collaboration_modes = "true"
```

Here's the current CLI error message:
```
Error loading config.toml: invalid type: string "true", expected a boolean in `features`
```

And here's the improved message:
```
Error loading config.toml:
/Users/etraut/.codex/config.toml:43:23: invalid type: string "true", expected a boolean
   |
43 | collaboration_modes = "true"
   |                       ^^^^^^
```

The bulk of the new logic is contained within a new module
`config_loader/diagnostics.rs` that is responsible for calculating the
text range for a given toml path (which is more involved than I would
have expected).

In addition, this PR adds the file name and text range to the
`ConfigWarningNotification` app server struct. This allows GUI clients
to present the user with a better error message and an optional link to
open the errant config file. This was a suggestion from @bolinfest when
he reviewed my previous PR.
2026-01-23 20:11:09 -08:00
Ahmed Ibrahim
b3127e2eeb Have a coding mode and only show coding and plan (#9802) 2026-01-23 19:28:49 -08:00
viyatb-oai
77222492f9 feat: introducing a network sandbox proxy (#8442)
This adds a new crate, `codex-network-proxy`, a local network proxy
service used by Codex to enforce fine-grained network policy (domain
allow/deny) and to surface blocked network events for interactive
approvals.

- New crate: `codex-rs/network-proxy/` (`codex-network-proxy` binary +
library)
- Core capabilities:
  - HTTP proxy support (including CONNECT tunneling)
  - SOCKS5 proxy support (in the later PR)
- policy evaluation (allowed/denied domain lists; denylist wins;
wildcard support)
  - small admin API for polling/reload/mode changes
- optional MITM support for HTTPS CONNECT to enforce “limited mode”
method restrictions (later PR)

Will follow up integration with codex in subsequent PRs.

## Testing

- `cd codex-rs && cargo build -p codex-network-proxy`
- `cd codex-rs && cargo run -p codex-network-proxy -- proxy`
2026-01-23 17:47:09 -08:00
Ahmed Ibrahim
69cfc73dc6 change collaboration mode to struct (#9793)
Shouldn't cause behavioral change
2026-01-23 17:00:23 -08:00
Ahmed Ibrahim
1167465bf6 Chore: remove mode from header (#9792) 2026-01-23 22:38:17 +00:00
iceweasel-oai
d9232403aa bundle sandbox helper binaries in main zip, for winget. (#9707)
Winget uses the main codex.exe value as its target.
The elevated sandbox requires these two binaries to live next to
codex.exe
2026-01-23 14:36:42 -08:00
gt-oai
b9deb57689 Load untrusted rules (#9791) 2026-01-23 21:52:27 +00:00
gt-oai
c6ded0afd8 still load skills (#9700) 2026-01-23 20:35:50 +00:00
jcoens-openai
e04851816d Remove stale TODO comment from defs.bzl (#9787)
### Motivation
- Remove an outdated comment in `defs.bzl` referencing
`cargo_build_script` that is no longer relevant.

### Description
- Delete the stale `# TODO(zbarsky): cargo_build_script support?` line
so the logic flows directly from `binaries` to `lib_srcs` in `defs.bzl`.

### Testing
- Ran `git diff --check` which produced no errors.

------
[Codex
Task](https://chatgpt.com/codex/tasks/task_i_6973d9ac757c8331be475a8fb0f90a88)
2026-01-23 20:30:01 +00:00
JUAN DAVID SALAS CAMARGO
e0ae219f36 Fix resume picker when user event appears after head (#9512)
Fixes #9501

Contributing guide:
https://github.com/openai/codex/blob/main/docs/contributing.md

## Summary
The resume picker requires a session_meta line and at least one
user_message event within the initial head scan. Some rollout files
contain multiple session_meta entries before the first user_message, so
the user event can fall outside the default head window and the session
is omitted from the picker even though it is resumable by ID.

This PR keeps the head summary bounded but extends scanning for a
user_message once a session_meta has been observed. The summary still
caps stored head entries, but we allow a small, bounded extra scan to
find the first user event so valid sessions are not filtered out.
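
A sketch of that scan shape (`USER_EVENT_SCAN_LIMIT` follows the PR's description; the limits and line type are illustrative):
```rust
const HEAD_LIMIT: usize = 10; // illustrative head-summary cap
const USER_EVENT_SCAN_LIMIT: usize = 100; // bounded extra scan

enum RolloutLine {
    SessionMeta,
    UserMessage,
    Other,
}

fn has_resumable_head(lines: impl Iterator<Item = RolloutLine>) -> bool {
    let mut saw_session_meta = false;
    for (idx, line) in lines.enumerate() {
        match line {
            // Counts even when the head summary buffer is already full.
            RolloutLine::SessionMeta => saw_session_meta = true,
            RolloutLine::UserMessage if saw_session_meta => return true,
            _ => {}
        }
        // Past the head limit, keep scanning only while a session_meta
        // justifies it, and never beyond the hard cap.
        let scanned = idx + 1;
        if scanned >= HEAD_LIMIT && !saw_session_meta {
            return false;
        }
        if scanned >= USER_EVENT_SCAN_LIMIT {
            return false;
        }
    }
    false
}
```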

## Changes
- Continue scanning past the head limit (bounded) when session_meta is
present but no user_message has been seen yet.
- Mark session_meta as seen even if the head summary buffer is already
full.
- Add a regression test with multiple session_meta lines before the
first user_message.

## Why This Is Safe
- The head summary remains bounded to avoid unbounded memory usage.
- The extra scan is capped (USER_EVENT_SCAN_LIMIT) and only triggers
after a session_meta is seen.
- Behavior is unchanged for typical files where the user_message appears
early.

## Testing
- cargo test -p codex-core --lib
test_list_threads_scans_past_head_for_user_event
2026-01-23 12:21:27 -08:00
Ahmed Ibrahim
45fe58159e Select default model from filtered presets (#9782)
Pick the first available preset after auth filtering for default
selection.
2026-01-23 12:18:36 -08:00
gt-oai
7938c170d9 Print warning if we skip config loading (#9611)
https://github.com/openai/codex/pull/9533 silently ignored config if
untrusted. Instead, we still load it but disable it. Maybe we shouldn't
try to parse it either...

<img width="939" height="515" alt="Screenshot 2026-01-21 at 14 56 38"
src="https://github.com/user-attachments/assets/e753cc22-dd99-4242-8ffe-7589e85bef66"
/>
2026-01-23 20:06:37 +00:00
Salman Chishti
eca365cf8c Upgrade GitHub Actions for Node 24 compatibility (#9722)
## Summary

Upgrade GitHub Actions to their latest versions to ensure compatibility
with Node 24, as Node 20 will reach end-of-life in April 2026.

## Changes

| Action | Old Version(s) | New Version | Release | Files |
|--------|---------------|-------------|---------|-------|
| `actions/cache` |
[`v4`](https://github.com/actions/cache/releases/tag/v4) |
[`v5`](https://github.com/actions/cache/releases/tag/v5) |
[Release](https://github.com/actions/cache/releases/tag/v5) | bazel.yml
|

## Context

Per [GitHub's
announcement](https://github.blog/changelog/2025-09-19-deprecation-of-node-20-on-github-actions-runners/),
Node 20 is being deprecated and runners will begin using Node 24 by
default starting March 4th, 2026.

### Why this matters

- **Node 20 EOL**: April 2026
- **Node 24 default**: March 4th, 2026
- **Action**: Update to latest action versions that support Node 24

### Security Note

Actions that were previously pinned to commit SHAs remain pinned to SHAs
(updated to the latest release SHA) to maintain the security benefits of
immutable references.

### Testing

These changes only affect CI/CD workflow configurations and should not
impact application functionality. The workflows should be tested by
running them on a branch before merging.

Signed-off-by: Salman Muin Kayser Chishti <13schishti@gmail.com>
2026-01-23 12:06:04 -08:00
zerone0x
ae7d3e1b49 fix(exec): skip git repo check when --yolo flag is used (#9590)
## Summary

Fixes #7522

The `--yolo` (`--dangerously-bypass-approvals-and-sandbox`) flag is
documented to skip all confirmation prompts and execute commands without
sandboxing, intended solely for running in environments that are
externally sandboxed. However, it was not bypassing the trusted
directory (git repo) check, requiring users to also specify
`--skip-git-repo-check`.

This change makes `--yolo` also skip the git repo check, matching the
documented behavior and user expectations.

## Changes

- Modified `codex-rs/exec/src/lib.rs` to check for
`dangerously_bypass_approvals_and_sandbox` flag in addition to
`skip_git_repo_check` when determining whether to skip the git repo
check
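
The change itself is essentially a one-line OR; roughly (field names illustrative):
```rust
/// --yolo implies skipping the trusted-directory (git repo) check.
fn should_skip_git_repo_check(
    skip_git_repo_check: bool,
    dangerously_bypass_approvals_and_sandbox: bool,
) -> bool {
    skip_git_repo_check || dangerously_bypass_approvals_and_sandbox
}
```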

## Testing

- Verified the code compiles with `cargo check -p codex-exec`
- Ran existing tests with `cargo test -p codex-exec` (34 passed, 8
integration tests failed due to unrelated API connectivity issues)

---
🤖 Generated with [Claude Code](https://claude.ai/code)

Co-authored-by: Claude <noreply@anthropic.com>
2026-01-23 12:05:20 -08:00
Ahmed Ibrahim
f353d3d695 prompt (#9777)
# External (non-OpenAI) Pull Request Requirements

Before opening this Pull Request, please read the dedicated
"Contributing" markdown file or your PR may be closed:
https://github.com/openai/codex/blob/main/docs/contributing.md

If your PR conforms to our contribution guidelines, replace this text
with a detailed and high quality description of your changes.

Include a link to a bug report or enhancement request.
2026-01-23 19:24:48 +00:00
charley-oai
935d88b455 Persist text element ranges and attached images across history/resume (#9116)
**Summary**
- Backtrack selection now rehydrates `text_elements` and
`local_image_paths` from the chosen user history cell so Esc‑Esc history
edits preserve image placeholders and attachments.
- Composer prefill uses the preserved elements/attachments in both `tui`
and `tui2`.
- Extended backtrack selection tests to cover image placeholder elements
and local image paths.

**Changes**
- `tui/src/app_backtrack.rs`: Backtrack selection now carries text
elements + local image paths; composer prefill uses them (removes TODO).
- `tui2/src/app_backtrack.rs`: Same as above.
- `tui/src/app.rs`: Updated backtrack test to assert restored
elements/paths.
- `tui2/src/app.rs`: Same test updates.

### The original scope of this PR (threading text elements and image
attachments through the codex harness thoroughly/persistently) was
broken into the following PRs other than this one:

The diff of this PR was reduced by changing types in a starter PR:
https://github.com/openai/codex/pull/9235

Then text element metadata was added to protocol, app server, and core
in this PR: https://github.com/openai/codex/pull/9331

Then the end-to-end flow was completed by wiring TUI/TUI2 input,
history, and restore behavior in
https://github.com/openai/codex/pull/9393

Prompt expansion was supported in this PR:
https://github.com/openai/codex/pull/9518

TextElement optional placeholder field was protected in
https://github.com/openai/codex/pull/9545
2026-01-23 10:18:19 -08:00
jif-oai
f30f39b28b feat: tui beta for collab (#9690)
https://github.com/user-attachments/assets/1ca07e7a-3d82-40da-a5b0-8ab2eef0bb69
2026-01-23 13:57:59 +01:00
jif-oai
afa08570f2 nit: exclude PWD for rc sourcing (#9753) 2026-01-23 13:35:48 +01:00
Michael Bolin
86a1e41f2e chore: use some raw strings to reduce quoting (#9745)
Small follow-ups for https://github.com/openai/codex/pull/9565. Mainly
`r#`, but also added some whitespace for early returns.
2026-01-22 22:38:10 -08:00
JUAN DAVID SALAS CAMARGO
f815fa14ea Fix execpolicy parsing for multiline quoted args (#9565)
## What
Fix bash command parsing to accept double-quoted strings that contain
literal newlines so execpolicy can match allow rules.

## Why
Allow rules like [git, commit] should still match when commit messages
include a newline in a quoted argument; the parser currently rejects
these strings and falls back to the outer shell invocation.

## How
- Validate double-quoted strings by ensuring all named children are
string_content and then stripping the outer quotes from the raw node
text so embedded newlines are preserved (see the sketch below).
- Reuse the helper for concatenated arguments.
- Ensure large SI suffix formatting uses the caller-provided locale
formatter for grouping.
- Add coverage for newline-containing quoted arguments.
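
A sketch of the validation step, assuming tree-sitter-bash node kinds (`string` for a double-quoted string, `string_content` for its literal text); this is illustrative, not the PR's exact helper:
```rust
use tree_sitter::Node;

/// Accept a double-quoted string whose named children are all plain
/// `string_content`, and return the raw text with the outer quotes
/// stripped so embedded newlines survive.
fn quoted_string_text<'a>(node: Node<'_>, src: &'a str) -> Option<&'a str> {
    if node.kind() != "string" {
        return None;
    }
    let mut cursor = node.walk();
    let all_literal = node
        .named_children(&mut cursor)
        .all(|child| child.kind() == "string_content");
    if !all_literal {
        return None; // expansions/substitutions: fall back to outer shell
    }
    let raw = node.utf8_text(src.as_bytes()).ok()?;
    raw.strip_prefix('"')?.strip_suffix('"')
}
```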

Fixes #9541.

## Tests
- cargo test -p codex-core
- just fix -p codex-core
- cargo test -p codex-protocol
- just fix -p codex-protocol
- cargo test --all-features
2026-01-22 22:16:53 -08:00
alexsong-oai
0fa45fbca4 feat: add session source as otel metadata tag (#9720)
Add session.source and user.account_id as global OTEL metric tags to
identify client surface and user.
2026-01-22 18:46:14 -08:00
charley-oai
02fced28a4 Hide mode cycle hint while a task is running (#9730)
## Summary
- hide the “(shift+tab to cycle)” suffix on the collaboration mode label
while a task is running
- keep the cycle hint visible when idle
- add a snapshot to cover the running-task label state
2026-01-22 18:32:06 -08:00
Ahmed Ibrahim
d86bd20411 Change the prompt for planning and reasoning effort (#9733)
Change the prompt for planning and reasoning effort preset for better
experience
2026-01-22 18:22:12 -08:00
Dylan Hurd
2b1ee24e11 feat(app-server) Expose personality (#9674)
### Motivation
Exposes a per-thread / per-turn `personality` override in the v2
app-server API so clients can influence model communication style at
thread/turn start. Ensures the override is passed into the session
configuration resolution so it becomes effective for subsequent turns
and headless runners.

### Testing
- [x] Add an integration-style test
`turn_start_accepts_personality_override_v2` in
`codex-rs/app-server/tests/suite/v2/turn_start.rs` that verifies a
`/personality` override results in a developer update message containing
`<personality_spec>` in the outbound model request.

------
[Codex
Task](https://chatgpt.com/codex/tasks/task_i_6971d646b1c08322a689a54d2649f3fe)
2026-01-22 18:00:20 -08:00
Matthew Zeng
a2c829a808 [connectors] Support connectors part 1 - App server & MCP (#9667)
In order to make Codex work with connectors, we add a built-in gateway
MCP that acts as a transparent proxy between the client and the
connectors. The gateway MCP collects actions that are accessible to the
user and sends them down to the user, when a connector action is chosen
to be called, the client invokes the action through the gateway MCP as
well.

 - [x] Add the system built-in gateway MCP to list and run connectors.
 - [x] Add the app server methods and protocol
2026-01-22 16:48:43 -08:00
github-actions[bot]
d9e041e0a6 Update models.json (#9726)
Automated update of models.json.

---------

Co-authored-by: aibrahim-oai <219906144+aibrahim-oai@users.noreply.github.com>
Co-authored-by: Ahmed Ibrahim <aibrahim@openai.com>
2026-01-23 00:44:47 +00:00
iceweasel-oai
0e4adcd760 use machine scope instead of user scope for dpapi. (#9713)
This fixes a bug where the elevated sandbox setup encrypts sandbox user
passwords as an admin user, but normal command execution attempts to
decrypt them as a different user.

Machine scope allows all users to encrypt/decrypt.

This PR also moves the encrypted file to a different location,
`.codex/.sandbox-secrets`, which the sandbox users cannot read.
2026-01-22 16:40:13 -08:00
charley-oai
0e79d239ed TUI: prompt to implement plan and switch to Execute (#9712)
## Summary
- Replace the plan‑implementation prompt with a standard selection
popup.
- “Yes” submits a user turn in Execute via a dedicated app event to
preserve normal transcript behavior.
- “No” simply dismisses the popup.

<img width="977" height="433" alt="Screenshot 2026-01-22 at 2 00 54 PM"
src="https://github.com/user-attachments/assets/91fad06f-7b7a-4cd8-9051-f28a19b750b2"
/>

## Changes
- Add a plan‑implementation popup using `SelectionViewParams`.
- Add `SubmitUserMessageWithMode` so “Yes” routes through
`submit_user_message` (ensures user history + separator state).
- Track `saw_plan_update_this_turn` so the prompt appears even when only
`update_plan` is emitted.
- Suppress the plan popup on replayed turns, when messages are queued,
or when a rate‑limit prompt is pending.
- Add `execute_mode` helper for collaboration modes.
- Add tests for replay/queued/rate‑limit guards and plan update without
final message.
- Add snapshots for both the default and “No”‑selected popup states.
2026-01-23 00:25:50 +00:00
Anton Panasenko
e117a3ff33 feat: support proxy for ws connection (#9719)
reapply websocket changes without changing tls lib.
2026-01-22 15:23:15 -08:00
iudi
afd63e8bae Fix typo in experimental_prompt.md (#9716)
Simple typo fix in the first sentence of the experimental_prompt.md
instructions file.
2026-01-22 14:07:14 -08:00
Michael Bolin
5d963ee5d9 feat: fix formatting of codex features list (#9715)
The formatting of `codex features list` made it hard to follow. This PR
introduces column width math to make things nice.

Maybe slightly hard to machine-parse (since not a simple `\t`), but we
should introduce a `--json` option if that's really important.

You can see the before/after in the screenshot:

<img width="1119" height="932" alt="image"
src="https://github.com/user-attachments/assets/c99dce85-899a-4a2d-b4af-003938f5e1df"
/>
2026-01-22 13:02:41 -08:00
Owen Lin
733cb68496 feat(app-server): support archived threads in thread/list (#9571) 2026-01-22 12:22:36 -08:00
Owen Lin
80240b3b67 feat(app-server): thread/read API (#9569) 2026-01-22 12:22:01 -08:00
Dylan Hurd
8b3521ee77 feat(core) update Personality on turn (#9644)
## Summary
Support updating Personality mid-Thread via UserTurn/OverwriteTurn. This
is explicitly unused by the clients so far, to simplify PRs - app-server
and tui implementations will be follow-ups.

## Testing
- [x] added integration tests
2026-01-22 12:04:23 -08:00
charley-oai
4210fb9e6c Modes label below textarea (#9645)
# Summary
- Add a collaboration mode indicator rendered at the bottom-right of the
TUI composer footer.
- Style modes per design (Plan in #D72EE1, Execute matching dim context
style, Pair Programming using the same cyan as text elements).
- Add shared “(shift+tab to cycle)” hint text for all mode labels and
align the indicator with the left footer margin.

NOTE: currently this is hidden if the Collaboration Modes feature flag
is disabled, or in Custom mode. Maybe we should show it in Custom mode
too? I'll leave that out of this PR though

# UI
- Mode indicator appears below the textarea, bottom-right of the footer
line.
- Includes “(shift+tab to cycle)” and keeps right padding aligned to the
left footer indent.

<img width="983" height="200" alt="Screenshot 2026-01-21 at 7 17 54 PM"
src="https://github.com/user-attachments/assets/d1c5e4ed-7d7b-4f6c-9e71-bc3cf6400e0e"
/>

<img width="980" height="200" alt="Screenshot 2026-01-21 at 7 18 53 PM"
src="https://github.com/user-attachments/assets/d22ff0da-a406-4930-85c5-affb2234e84b"
/>

<img width="979" height="201" alt="Screenshot 2026-01-21 at 7 19 12 PM"
src="https://github.com/user-attachments/assets/862cb17f-0495-46fa-9b01-a4a9f29b52d5"
/>
2026-01-22 17:31:11 +00:00
pakrym-oai
b511c38ddb Support end_turn flag (#9698)
Experimental flag that signals the end of the turn.
2026-01-22 17:27:48 +00:00
pakrym-oai
4d48d4e0c2 Revert "feat: support proxy for ws connection" (#9693)
Reverts openai/codex#9409
2026-01-22 15:57:18 +00:00
Shijie Rao
a4cb97ba5a Chore: add cmd related info to exec approval request (#9659)
### Summary
We now rely purely on `item/commandExecution/requestApproval` item to
render pending approval in VSCE and app. With v2 approach, it does not
include the actual cmd that it is attempting and therefore we can only
use `proposedExecpolicyAmendment` to render which can be incomplete.

### Reproduce
* Add `prefix_rule(pattern=["echo"], decision="prompt")` to your
`~/.codex/rules.default.rules`.
* Ask to `Run  echo "approval-test" please` in VSCE or app. 
* The pending approval protal does show up but with no content

#### Example screenshot
<img width="3434" height="3648" alt="Screenshot 2026-01-21 at 8 23
25 PM"
src="https://github.com/user-attachments/assets/75644837-21f1-40f8-8b02-858d361ff817"
/>

#### Sample output
```
  {"method":"item/commandExecution/requestApproval","id":0,"params":{
    "threadId":"019be439-5a90-7600-a7ea-2d2dcc50302a",
    "turnId":"0",
    "itemId":"call_usgnQ4qEX5U9roNdjT7fPzhb",
    "reason":"`/bin/zsh -lc 'echo \"testing\"'` requires approval by policy",
    "proposedExecpolicyAmendment":null
  }}

```

### Fix
Include the `command` string, `cwd`, and `command_actions` in
`CommandExecutionRequestApprovalParams` so that consumers can display
the correct command instead of relying on exec policy output.
2026-01-21 23:58:53 -08:00
Kbediako
079fd2adb9 Fix: Lower log level for closed-channel send (#9653)
## What?
- Downgrade the closed-channel send error log to debug in
`codex-rs/core/src/codex.rs`.

## Why?
- `async_channel::Sender::send` only fails when the channel is closed,
so the current error-level log is noisy during normal shutdown. See
issue #9652.

## How?
- Replace the error log with a debug log on send failure.
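
Roughly (a generic sketch; the real call site sends Codex events):
```rust
/// `async_channel::Sender::send` only errs when the receiving side is
/// closed, which is normal during shutdown, so log at debug level.
async fn send_or_debug<T>(tx: &async_channel::Sender<T>, value: T) {
    if let Err(err) = tx.send(value).await {
        tracing::debug!("event channel closed during shutdown: {err}");
    }
}
```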

## Tests
- `just fmt`
- `just fix -p codex-core`
- `cargo test -p codex-core`
2026-01-21 22:09:58 -08:00
Dylan Hurd
038b78c915 feat(tui) /permissions flow (#9561)
## Summary
Adds the `/permissions` command, with a (usually) shorter set of
permissions. `/approvals` still exists, for backwards compatibility.

<img width="863" height="309" alt="Screenshot 2026-01-20 at 4 12 51 PM"
src="https://github.com/user-attachments/assets/c49b5ba5-bc47-46dd-9067-e1a5670328fe"
/>


## Testing
- [x] updated unit tests
- [x] Tested locally
2026-01-21 21:38:46 -08:00
pakrym-oai
836f0343a3 Add tui.experimental_mode setting (#9656)
To simplify testing
2026-01-22 05:27:57 +00:00
Dylan Hurd
e520592bcf chore: tweak AGENTS.md (#9650)
## Summary
Update AGENTS.md to improve testing flow

## Testing
- [x] Tested locally, much faster
2026-01-21 20:20:45 -08:00
xl-openai
577ba3a4ca Add UI for skill enable/disable. (#9627)
"/skill" will now allow you to enable/disable skills:
<img width="658" height="199" alt="image"
src="https://github.com/user-attachments/assets/bf8994c8-d6c1-462f-8bbb-f1ee9241caa4"
/>
2026-01-21 18:21:12 -08:00
Dylan Hurd
96a72828be feat(core) ModelInfo.model_instructions_template (#9597)
## Summary
#9555 is the start of a rename, so I'm starting to standardize here.
Sets up `model_instructions` templating with a strongly-typed object for
injecting a personality block into the model instructions.

## Testing
- [x] Added tests
- [x] Ran locally
2026-01-21 18:11:18 -08:00
Josh McKinney
a489b64cb5 feat(tui): retire the tui2 experiment (#9640)
## Summary
- Retire the experimental TUI2 implementation and its feature flag.
- Remove TUI2-only config/schema/docs so the CLI stays on the
terminal-native path.
- Keep docs aligned with the legacy TUI while we focus on redraw-based
improvements.

## Customer impact
- Retires the TUI2 experiment and keeps Codex on the proven
terminal-native UI while we invest in redraw-based improvements to the
existing experience.

## Migration / compatibility
- If you previously set tui2-related options in config.toml, they are
now ignored and Codex continues using the existing terminal-native TUI
(no action required).

## Context
- What worked: a transcript-owned viewport delivered excellent resize
rewrap and high-fidelity copy (especially for code).
- Why stop: making that experience feel fully native across the
environment matrix (terminal emulator, OS, input modality, multiplexer,
font/theme, alt-screen behavior) creates a combinatorial explosion of
edge cases.
- What next: we are focusing on redraw-based improvements to the
existing terminal-native TUI so scrolling, selection, and copy remain
native while resize/redraw correctness improves.

## Testing
- just write-config-schema
- just fmt
- cargo clippy --fix --all-features --tests --allow-dirty --allow-no-vcs
-p codex-core
- cargo clippy --fix --all-features --tests --allow-dirty --allow-no-vcs
-p codex-cli
- cargo check
- cargo test -p codex-core
- cargo test -p codex-cli
2026-01-22 01:02:29 +00:00
charley-oai
41e38856f6 Reduce burst testing flake (#9549)
## Summary

- make paste-burst tests deterministic by injecting explicit timestamps
instead of relying on wall-clock timing (see the sketch below)
- add time-aware helpers for input/submission paths so tests can drive
the burst heuristic precisely
- update burst-related tests to flush using computed timeouts while
preserving behavior assertions
- increase timeout slack in
shell_tools_start_before_response_completed_when_stream_delayed to
reduce flakiness
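
The injection pattern is roughly (types and field names are illustrative):
```rust
use std::time::{Duration, Instant};

/// Production callers pass `Instant::now()`; tests pass computed
/// timestamps so the burst heuristic is fully deterministic.
struct PasteBurst {
    last_input: Option<Instant>,
    burst_window: Duration,
}

impl PasteBurst {
    fn on_input_at(&mut self, now: Instant) -> bool {
        let in_burst = self
            .last_input
            .is_some_and(|prev| now.duration_since(prev) <= self.burst_window);
        self.last_input = Some(now);
        in_burst
    }
}
```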
2026-01-21 16:42:31 -08:00
sayan-oai
c285b88980 feat: publish config schema on release (#9572)
Follow up to #8956; publish schema on new release to stable URL.

Also canonicalize schema (sort keys) when writing. This avoids reliance
on default `schema_rs` behavior and makes the schema easier to read.
2026-01-21 16:24:14 -08:00
Dylan Hurd
f1240ff4fe fix(tui) turn timing incremental (#9599)
## Summary
When we send multiple assistant messages, reset the timer so "Worked for
2m 36s" is the time since the last time we showed the message, rather
than an ever-increasing number.

We could instead change the copy so it's more clearly a running counter.

## Testing
- [x] ran locally

<img width="903" height="732" alt="Screenshot 2026-01-21 at 1 42 51 AM"
src="https://github.com/user-attachments/assets/bb4d827b-3a0e-48ba-bd6a-d8cd65d8e892"
/>
2026-01-21 15:59:56 -08:00
jif-oai
5dad1b956e feat: better sorting of shell commands (#9629)
This PR changes the way we sort slash commands by ranking in this order:
1. Exact match
2. Prefix
3. Fuzzy

As a result, when you type `/ps` the default command is no longer `/approvals`.
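
A sketch of the ranking key (the fuzzy scorer is a hypothetical stand-in; candidates returning `None` are filtered out, and the rest sort ascending by the tuple):
```rust
/// Rank tiers: 0 = exact, 1 = prefix, 2 = fuzzy (better scores first).
fn rank(command: &str, query: &str) -> Option<(u8, i64)> {
    if command == query {
        Some((0, 0))
    } else if command.starts_with(query) {
        Some((1, 0))
    } else {
        fuzzy_score(command, query).map(|score| (2, -score))
    }
}

/// Placeholder for a real fuzzy matcher (e.g. nucleo); higher = better.
fn fuzzy_score(command: &str, query: &str) -> Option<i64> {
    query.chars().all(|c| command.contains(c)).then_some(0)
}
```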
2026-01-21 23:03:01 +00:00
Eric Traut
2ca9a56528 Add layered config.toml support to app server (#9510)
This PR adds support for chained (layered) config.toml file merging for
clients that use the app server interface. This feature already exists
for the TUI, but it does not work for GUI clients.

It does the following:
* Changes code paths for new thread, resume thread, and fork thread to
use the effective config based on the cwd.
* Updates the `config/read` API to accept an optional `cwd` parameter.
If specified, the API returns the effective config based on that cwd
path. Also optionally includes all layers including project config
files. If cwd is not specified, the API falls back on its older behavior
where it considers only the global (non-project) config files when
computing the effective config.

The changes in codex_message_processor.rs look deceptively large. They
mostly just involve moving existing blocks of code to a later point in
some functions so it can use the cwd to calculate the config.

This PR builds upon #9509 and should be reviewed and merged after that
PR.

Tested:
* Verified change with (dependent, as-yet-uncommitted) changes to IDE
Extension and confirmed correct behavior

The full fix requires additional changes in the IDE Extension code base,
but they depend on this PR.
2026-01-21 14:21:48 -08:00
charley-oai
fe641f759f Add collaboration_mode to TurnContextItem (#9583)
## Summary
- add optional `collaboration_mode` to `TurnContextItem` in rollouts
- persist the current collaboration mode when recording turn context
(sampling + compaction)

## Rationale
We already persist turn context data for resume logic. Capturing
collaboration mode in the rollout gives us the mode context for each
turn, enabling follow‑up work to diff mode instructions correctly on
resume.

## Changes
- protocol: add optional `collaboration_mode` field to `TurnContextItem`
- core: persist collaboration mode alongside other turn context settings
in rollouts (see the field sketch below)
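
The protocol change likely looks something like the following (field type and serde attributes are assumptions; other fields elided):
```rust
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize)]
struct TurnContextItem {
    // ...existing turn context settings elided...
    /// Collaboration mode active when this turn was recorded; absent in
    /// rollouts written before this change.
    #[serde(default, skip_serializing_if = "Option::is_none")]
    collaboration_mode: Option<String>,
}
```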
2026-01-21 14:14:21 -08:00
Shijie Rao
3fcb40245e Chore: update plan mode output in prompt (#9592)
### Summary
* Update plan prompt output
* Update requestUserInput response to be a single key value pair
`answer: String`.
2026-01-21 14:12:18 -08:00
pakrym-oai
f2e1ad59bc Add websockets logging (#9633)
To help with debugging.
2026-01-21 21:35:38 +00:00
iceweasel-oai
7a9c9b8636 forgot to add some windows sandbox nux events. (#9624) 2026-01-21 13:24:09 -08:00
zbarsky-openai
ab8415dcf5 [bazel] Upgrade llvm toolchain and enable remote repo cache (#9616)
On bazel9 this lets us avoid performing some external repo downloads if
they've been previously uploaded to remote cache; downloads are deferred
until they are actually needed to execute an uncached action.
2026-01-21 12:52:39 -08:00
Gav Verma
2e06d61339 Update skills/list protocol readme (#9623)
Updates readme example for `skills/list` to reflect latest response
spec.
2026-01-21 12:51:51 -08:00
Tien Nguyen
68b8381723 docs: fix outdated MCP subcommands documentation (#9622) 2026-01-21 11:17:37 -08:00
iceweasel-oai
f81dd128a2 define/emit some metrics for windows sandbox setup (#9573)
This should give us visibility into how users are using the elevated
sandbox nux flow, and the timing of the elevated setup.
2026-01-21 11:07:26 -08:00
Tiffany Citra
8179312ff5 fix: Fix tilde expansion to avoid absolute-path escape (#9621)
### Motivation
- Prevent inputs like `~//` or `~///etc` from expanding to arbitrary
absolute paths (e.g. `/`) because `Path::join` discards the left side
when the right side is absolute, which could allow config values to
escape `HOME` and broaden writable roots.

### Description
- In `codex-rs/utils/absolute-path/src/lib.rs` update
`maybe_expand_home_directory` to trim leading separators from the suffix
and return `home` when the remainder is empty so tilde expansion stays
rooted under `HOME`.
- Add a non-Windows unit test
`home_directory_double_slash_on_non_windows_is_expanded_in_deserialization`
that validates `"~//code"` expands to `home.join("code")`.
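
A minimal sketch of the hardened expansion (the real function lives in `codex-rs/utils/absolute-path`; this standalone version only handles the bare `~` form):
```rust
use std::path::{Path, PathBuf};

fn expand_home(input: &str, home: &Path) -> Option<PathBuf> {
    let rest = input.strip_prefix('~')?;
    if !rest.is_empty() && !rest.starts_with(['/', '\\']) {
        return None; // `~user` forms are out of scope for this sketch
    }
    // Trim leading separators so `Path::join` never sees an absolute
    // suffix (which would discard `home` entirely).
    let rest = rest.trim_start_matches(['/', '\\']);
    if rest.is_empty() {
        return Some(home.to_path_buf()); // `~`, `~/`, `~//` stay at home
    }
    Some(home.join(rest))
}
```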

### Testing
- Ran `just fmt` successfully.
- Ran `just fix -p codex-utils-absolute-path` (Clippy autofix)
successfully.
- Ran `cargo test -p codex-utils-absolute-path` and all tests passed.

------
[Codex
Task](https://chatgpt.com/codex/tasks/task_i_697007481cac832dbeb1ee144d1e4cbe)
2026-01-21 10:43:10 -08:00
jif-oai
3355adad1d chore: defensive shell snapshot (#9609)
This PR adds 2 defensive mechanisms for shell snapshotting:
* Filter out invalid env variables (containing `-`, for example) without
dropping the whole snapshot (see the sketch below)
* Validate the snapshot before considering it as valid by running a mock
command with a shell snapshot
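
A sketch of the first mechanism (the identifier rule is the POSIX name grammar; the snapshot types are illustrative):
```rust
/// Valid POSIX-style variable name: [A-Za-z_][A-Za-z0-9_]*
fn is_valid_env_name(name: &str) -> bool {
    let mut chars = name.chars();
    matches!(chars.next(), Some(c) if c == '_' || c.is_ascii_alphabetic())
        && chars.all(|c| c == '_' || c.is_ascii_alphanumeric())
}

/// Drop invalid entries (e.g. names containing `-`) instead of
/// discarding the entire snapshot.
fn filter_snapshot(vars: Vec<(String, String)>) -> Vec<(String, String)> {
    vars.into_iter()
        .filter(|(name, _)| is_valid_env_name(name))
        .collect()
}
```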
2026-01-21 18:41:58 +00:00
jif-oai
338f2d634b nit: ui on interruption (#9606) 2026-01-21 14:09:15 +00:00
zbarsky-openai
2338f99f58 [bazel] Upgrade to bazel9 (#9576) 2026-01-21 13:25:36 +00:00
jif-oai
f1b6a43907 nit: better collab tui (#9551)
<img width="478" height="304" alt="Screenshot 2026-01-21 at 11 53 50"
src="https://github.com/user-attachments/assets/e2ef70de-2fff-44e0-a574-059177966ed2"
/>
2026-01-21 11:53:58 +00:00
jif-oai
13358fa131 fix: nit tui on terminal interactions (#9602) 2026-01-21 11:30:34 +00:00
jif-oai
b75024c465 feat: async shell snapshot (#9600) 2026-01-21 10:41:13 +00:00
Eric Traut
16b9380e99 Added "codex." prefix to "conversation.turn.count" metric name (#9594)
All other metrics names start with "codex.", so I presume this was an
unintended omission.
2026-01-21 10:00:47 +00:00
jif-oai
a22a61e678 feat: display raw command on user shell (#9598) 2026-01-21 09:44:38 +00:00
jif-oai
f1c961d5f7 feat: max threads config (#9483)
# External (non-OpenAI) Pull Request Requirements

Before opening this Pull Request, please read the dedicated
"Contributing" markdown file or your PR may be closed:
https://github.com/openai/codex/blob/main/docs/contributing.md

If your PR conforms to our contribution guidelines, replace this text
with a detailed and high quality description of your changes.

Include a link to a bug report or enhancement request.
2026-01-21 09:39:11 +00:00
Ahmed Ibrahim
6e9a31def1 fix going up and down on questions after writing notes (#9596) 2026-01-21 09:37:37 +00:00
Ahmed Ibrahim
5f55ed666b Add request-user-input overlay (#9585)
- Add request-user-input overlay and routing in the TUI
2026-01-21 00:19:35 -08:00
1305 changed files with 49201 additions and 88633 deletions

View File

@@ -1,3 +1,4 @@
# Without this, Bazel will consider BUILD.bazel files in
# .git/sl/origbackups (which can be populated by Sapling SCM).
.git
codex-rs/target

View File

@@ -1,12 +1,19 @@
common --repo_env=BAZEL_DO_NOT_DETECT_CPP_TOOLCHAIN=1
common --repo_env=BAZEL_NO_APPLE_CPP_TOOLCHAIN=1
# Dummy xcode config so we don't need to build xcode_locator in repo rule.
common --xcode_version_config=//:disable_xcode
common --disk_cache=~/.cache/bazel-disk-cache
common --repo_contents_cache=~/.cache/bazel-repo-contents-cache
common --repository_cache=~/.cache/bazel-repo-cache
common --remote_cache_compression
startup --experimental_remote_repo_contents_cache
common --experimental_platform_in_output_dir
# Runfiles strategy rationale: codex-rs/utils/cargo-bin/README.md
common --noenable_runfiles
common --enable_platform_specific_config
# TODO(zbarsky): We need to untangle these libc constraints to get linux remote builds working.
common:linux --host_platform=//:local
@@ -42,4 +49,3 @@ common --jobs=30
common:remote --extra_execution_platforms=//:rbe
common:remote --remote_executor=grpcs://remote.buildbuddy.io
common:remote --jobs=800

1 .bazelversion Normal file
View File

@@ -0,0 +1 @@
9.0.0

View File

@@ -1,6 +1,6 @@
[codespell]
# Ref: https://github.com/codespell-project/codespell#using-a-config-file
skip = .git*,vendor,*-lock.yaml,*.lock,.codespellrc,*test.ts,*.jsonl,frame*.txt
skip = .git*,vendor,*-lock.yaml,*.lock,.codespellrc,*test.ts,*.jsonl,frame*.txt,*.snap,*.snap.new
check-hidden = true
ignore-regex = ^\s*"image/\S+": ".*|\b(afterAll)\b
ignore-words-list = ratatui,ser,iTerm,iterm2,iterm

View File

@@ -0,0 +1,163 @@
#!/usr/bin/env bash
set -euo pipefail
: "${TARGET:?TARGET environment variable is required}"
: "${GITHUB_ENV:?GITHUB_ENV environment variable is required}"
apt_update_args=()
if [[ -n "${APT_UPDATE_ARGS:-}" ]]; then
# shellcheck disable=SC2206
apt_update_args=(${APT_UPDATE_ARGS})
fi
apt_install_args=()
if [[ -n "${APT_INSTALL_ARGS:-}" ]]; then
# shellcheck disable=SC2206
apt_install_args=(${APT_INSTALL_ARGS})
fi
sudo apt-get update "${apt_update_args[@]}"
sudo apt-get install -y "${apt_install_args[@]}" musl-tools pkg-config g++ clang libc++-dev libc++abi-dev lld
case "${TARGET}" in
x86_64-unknown-linux-musl)
arch="x86_64"
;;
aarch64-unknown-linux-musl)
arch="aarch64"
;;
*)
echo "Unexpected musl target: ${TARGET}" >&2
exit 1
;;
esac
# Use the musl toolchain as the Rust linker to avoid Zig injecting its own CRT.
if command -v "${arch}-linux-musl-gcc" >/dev/null; then
musl_linker="$(command -v "${arch}-linux-musl-gcc")"
elif command -v musl-gcc >/dev/null; then
musl_linker="$(command -v musl-gcc)"
else
echo "musl gcc not found after install; arch=${arch}" >&2
exit 1
fi
zig_target="${TARGET/-unknown-linux-musl/-linux-musl}"
runner_temp="${RUNNER_TEMP:-/tmp}"
tool_root="${runner_temp}/codex-musl-tools-${TARGET}"
mkdir -p "${tool_root}"
sysroot=""
if command -v zig >/dev/null; then
zig_bin="$(command -v zig)"
cc="${tool_root}/zigcc"
cxx="${tool_root}/zigcxx"
cat >"${cc}" <<EOF
#!/usr/bin/env bash
set -euo pipefail
args=()
skip_next=0
for arg in "\$@"; do
if [[ "\${skip_next}" -eq 1 ]]; then
skip_next=0
continue
fi
case "\${arg}" in
--target)
skip_next=1
continue
;;
--target=*|-target=*|-target)
# Drop any explicit --target/-target flags. Zig expects -target and
# rejects Rust triples like *-unknown-linux-musl.
if [[ "\${arg}" == "-target" ]]; then
skip_next=1
fi
continue
;;
esac
args+=("\${arg}")
done
exec "${zig_bin}" cc -target "${zig_target}" "\${args[@]}"
EOF
cat >"${cxx}" <<EOF
#!/usr/bin/env bash
set -euo pipefail
args=()
skip_next=0
for arg in "\$@"; do
if [[ "\${skip_next}" -eq 1 ]]; then
skip_next=0
continue
fi
case "\${arg}" in
--target)
skip_next=1
continue
;;
--target=*|-target=*|-target)
if [[ "\${arg}" == "-target" ]]; then
skip_next=1
fi
continue
;;
esac
args+=("\${arg}")
done
exec "${zig_bin}" c++ -target "${zig_target}" "\${args[@]}"
EOF
chmod +x "${cc}" "${cxx}"
sysroot="$("${zig_bin}" cc -target "${zig_target}" -print-sysroot 2>/dev/null || true)"
else
cc="${musl_linker}"
if command -v "${arch}-linux-musl-g++" >/dev/null; then
cxx="$(command -v "${arch}-linux-musl-g++")"
elif command -v musl-g++ >/dev/null; then
cxx="$(command -v musl-g++)"
else
cxx="${cc}"
fi
fi
if [[ -n "${sysroot}" && "${sysroot}" != "/" ]]; then
echo "BORING_BSSL_SYSROOT=${sysroot}" >> "$GITHUB_ENV"
boring_sysroot_var="BORING_BSSL_SYSROOT_${TARGET}"
boring_sysroot_var="${boring_sysroot_var//-/_}"
echo "${boring_sysroot_var}=${sysroot}" >> "$GITHUB_ENV"
fi
cflags="-pthread"
cxxflags="-pthread"
if [[ "${TARGET}" == "aarch64-unknown-linux-musl" ]]; then
# BoringSSL enables -Wframe-larger-than=25344 under clang and treats warnings as errors.
cflags="${cflags} -Wno-error=frame-larger-than"
cxxflags="${cxxflags} -Wno-error=frame-larger-than"
fi
echo "CFLAGS=${cflags}" >> "$GITHUB_ENV"
echo "CXXFLAGS=${cxxflags}" >> "$GITHUB_ENV"
echo "CC=${cc}" >> "$GITHUB_ENV"
echo "TARGET_CC=${cc}" >> "$GITHUB_ENV"
target_cc_var="CC_${TARGET}"
target_cc_var="${target_cc_var//-/_}"
echo "${target_cc_var}=${cc}" >> "$GITHUB_ENV"
echo "CXX=${cxx}" >> "$GITHUB_ENV"
echo "TARGET_CXX=${cxx}" >> "$GITHUB_ENV"
target_cxx_var="CXX_${TARGET}"
target_cxx_var="${target_cxx_var//-/_}"
echo "${target_cxx_var}=${cxx}" >> "$GITHUB_ENV"
cargo_linker_var="CARGO_TARGET_${TARGET^^}_LINKER"
cargo_linker_var="${cargo_linker_var//-/_}"
echo "${cargo_linker_var}=${musl_linker}" >> "$GITHUB_ENV"
echo "CMAKE_C_COMPILER=${cc}" >> "$GITHUB_ENV"
echo "CMAKE_CXX_COMPILER=${cxx}" >> "$GITHUB_ENV"
echo "CMAKE_ARGS=-DCMAKE_HAVE_THREADS_LIBRARY=1 -DCMAKE_USE_PTHREADS_INIT=1 -DCMAKE_THREAD_LIBS_INIT=-pthread -DTHREADS_PREFER_PTHREAD_FLAG=ON" >> "$GITHUB_ENV"

View File

@@ -81,7 +81,7 @@ jobs:
# previously built artifacts to minimize build time. The more precise you are with
# hashFiles sources the less work bazel will have to do.
# - name: Mount bazel caches
# uses: actions/cache@v4
# uses: actions/cache@v5
# with:
# path: |
# ~/.cache/bazel-repo-cache

View File

@@ -59,7 +59,7 @@ jobs:
working-directory: codex-rs
steps:
- uses: actions/checkout@v6
- uses: dtolnay/rust-toolchain@1.92
- uses: dtolnay/rust-toolchain@1.93
with:
components: rustfmt
- name: cargo fmt
@@ -77,7 +77,7 @@ jobs:
working-directory: codex-rs
steps:
- uses: actions/checkout@v6
- uses: dtolnay/rust-toolchain@1.92
- uses: dtolnay/rust-toolchain@1.93
- uses: taiki-e/install-action@44c6d64aa62cd779e873306675c7a58e86d6d532 # v2
with:
tool: cargo-shear
@@ -177,11 +177,31 @@ jobs:
steps:
- uses: actions/checkout@v6
- uses: dtolnay/rust-toolchain@1.92
- name: Install UBSan runtime (musl)
if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl' }}
shell: bash
run: |
set -euo pipefail
if command -v apt-get >/dev/null 2>&1; then
sudo apt-get update -y
sudo DEBIAN_FRONTEND=noninteractive apt-get install -y libubsan1
fi
- uses: dtolnay/rust-toolchain@1.93
with:
targets: ${{ matrix.target }}
components: clippy
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl'}}
name: Use hermetic Cargo home (musl)
shell: bash
run: |
set -euo pipefail
cargo_home="${GITHUB_WORKSPACE}/.cargo-home"
mkdir -p "${cargo_home}/bin"
echo "CARGO_HOME=${cargo_home}" >> "$GITHUB_ENV"
echo "${cargo_home}/bin" >> "$GITHUB_PATH"
: > "${cargo_home}/config.toml"
- name: Compute lockfile hash
id: lockhash
working-directory: codex-rs
@@ -202,6 +222,10 @@ jobs:
~/.cargo/registry/index/
~/.cargo/registry/cache/
~/.cargo/git/db/
${{ github.workspace }}/.cargo-home/bin/
${{ github.workspace }}/.cargo-home/registry/index/
${{ github.workspace }}/.cargo-home/registry/cache/
${{ github.workspace }}/.cargo-home/git/db/
key: cargo-home-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-${{ steps.lockhash.outputs.hash }}-${{ steps.lockhash.outputs.toolchain_hash }}
restore-keys: |
cargo-home-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-
@@ -244,6 +268,14 @@ jobs:
sccache-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-${{ steps.lockhash.outputs.hash }}-
sccache-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl'}}
name: Disable sccache wrapper (musl)
shell: bash
run: |
set -euo pipefail
echo "RUSTC_WRAPPER=" >> "$GITHUB_ENV"
echo "RUSTC_WORKSPACE_WRAPPER=" >> "$GITHUB_ENV"
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl'}}
name: Prepare APT cache directories (musl)
shell: bash
@@ -261,15 +293,73 @@ jobs:
/var/cache/apt
key: apt-${{ matrix.runner }}-${{ matrix.target }}-v1
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl'}}
name: Install Zig
uses: mlugg/setup-zig@v2
with:
version: 0.14.0
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl'}}
name: Install musl build tools
env:
DEBIAN_FRONTEND: noninteractive
TARGET: ${{ matrix.target }}
APT_UPDATE_ARGS: -o Acquire::Retries=3
APT_INSTALL_ARGS: --no-install-recommends
shell: bash
run: bash "${GITHUB_WORKSPACE}/.github/scripts/install-musl-build-tools.sh"
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl'}}
name: Configure rustc UBSan wrapper (musl host)
shell: bash
run: |
set -euo pipefail
sudo apt-get -y update -o Acquire::Retries=3
sudo apt-get -y install --no-install-recommends musl-tools pkg-config
ubsan=""
if command -v ldconfig >/dev/null 2>&1; then
ubsan="$(ldconfig -p | grep -m1 'libubsan\.so\.1' | sed -E 's/.*=> (.*)$/\1/')"
fi
wrapper_root="${RUNNER_TEMP:-/tmp}"
wrapper="${wrapper_root}/rustc-ubsan-wrapper"
cat > "${wrapper}" <<EOF
#!/usr/bin/env bash
set -euo pipefail
if [[ -n "${ubsan}" ]]; then
export LD_PRELOAD="${ubsan}\${LD_PRELOAD:+:\${LD_PRELOAD}}"
fi
exec "\$1" "\${@:2}"
EOF
chmod +x "${wrapper}"
echo "RUSTC_WRAPPER=${wrapper}" >> "$GITHUB_ENV"
echo "RUSTC_WORKSPACE_WRAPPER=" >> "$GITHUB_ENV"
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl'}}
name: Clear sanitizer flags (musl)
shell: bash
run: |
set -euo pipefail
# Clear global Rust flags so host/proc-macro builds don't pull in UBSan.
echo "RUSTFLAGS=" >> "$GITHUB_ENV"
echo "CARGO_ENCODED_RUSTFLAGS=" >> "$GITHUB_ENV"
echo "RUSTDOCFLAGS=" >> "$GITHUB_ENV"
# Override any runner-level Cargo config rustflags as well.
echo "CARGO_BUILD_RUSTFLAGS=" >> "$GITHUB_ENV"
echo "CARGO_TARGET_X86_64_UNKNOWN_LINUX_GNU_RUSTFLAGS=" >> "$GITHUB_ENV"
echo "CARGO_TARGET_AARCH64_UNKNOWN_LINUX_GNU_RUSTFLAGS=" >> "$GITHUB_ENV"
echo "CARGO_TARGET_X86_64_UNKNOWN_LINUX_MUSL_RUSTFLAGS=" >> "$GITHUB_ENV"
echo "CARGO_TARGET_AARCH64_UNKNOWN_LINUX_MUSL_RUSTFLAGS=" >> "$GITHUB_ENV"
sanitize_flags() {
local input="$1"
input="${input//-fsanitize=undefined/}"
input="${input//-fno-sanitize-recover=undefined/}"
input="${input//-fno-sanitize-trap=undefined/}"
echo "$input"
}
cflags="$(sanitize_flags "${CFLAGS-}")"
cxxflags="$(sanitize_flags "${CXXFLAGS-}")"
echo "CFLAGS=${cflags}" >> "$GITHUB_ENV"
echo "CXXFLAGS=${cxxflags}" >> "$GITHUB_ENV"
- name: Install cargo-chef
if: ${{ matrix.profile == 'release' }}
@@ -316,6 +406,10 @@ jobs:
~/.cargo/registry/index/
~/.cargo/registry/cache/
~/.cargo/git/db/
${{ github.workspace }}/.cargo-home/bin/
${{ github.workspace }}/.cargo-home/registry/index/
${{ github.workspace }}/.cargo-home/registry/cache/
${{ github.workspace }}/.cargo-home/git/db/
key: cargo-home-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-${{ steps.lockhash.outputs.hash }}-${{ steps.lockhash.outputs.toolchain_hash }}
- name: Save sccache cache (fallback)
@@ -416,7 +510,7 @@ jobs:
- name: Install DotSlash
uses: facebook/install-dotslash@v2
- uses: dtolnay/rust-toolchain@1.92
- uses: dtolnay/rust-toolchain@1.93
with:
targets: ${{ matrix.target }}

View File

@@ -20,7 +20,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v6
- uses: dtolnay/rust-toolchain@1.92
- name: Validate tag matches Cargo.toml version
shell: bash
run: |
@@ -45,6 +45,15 @@ jobs:
echo "✅ Tag and Cargo.toml agree (${tag_ver})"
echo "::endgroup::"
- name: Verify config schema fixture
shell: bash
working-directory: codex-rs
run: |
set -euo pipefail
echo "If this fails, run: just write-config-schema to overwrite fixture with intentional changes."
cargo run -p codex-core --bin codex-write-config-schema
git diff --exit-code core/config.schema.json
build:
needs: tag-check
name: Build - ${{ matrix.runner }} - ${{ matrix.target }}
@@ -80,10 +89,30 @@ jobs:
steps:
- uses: actions/checkout@v6
- uses: dtolnay/rust-toolchain@1.92
- name: Install UBSan runtime (musl)
if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl' }}
shell: bash
run: |
set -euo pipefail
if command -v apt-get >/dev/null 2>&1; then
sudo apt-get update -y
sudo DEBIAN_FRONTEND=noninteractive apt-get install -y libubsan1
fi
- uses: dtolnay/rust-toolchain@1.93
with:
targets: ${{ matrix.target }}
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl'}}
name: Use hermetic Cargo home (musl)
shell: bash
run: |
set -euo pipefail
cargo_home="${GITHUB_WORKSPACE}/.cargo-home"
mkdir -p "${cargo_home}/bin"
echo "CARGO_HOME=${cargo_home}" >> "$GITHUB_ENV"
echo "${cargo_home}/bin" >> "$GITHUB_PATH"
: > "${cargo_home}/config.toml"
- uses: actions/cache@v5
with:
path: |
@@ -91,14 +120,76 @@ jobs:
~/.cargo/registry/index/
~/.cargo/registry/cache/
~/.cargo/git/db/
${{ github.workspace }}/.cargo-home/bin/
${{ github.workspace }}/.cargo-home/registry/index/
${{ github.workspace }}/.cargo-home/registry/cache/
${{ github.workspace }}/.cargo-home/git/db/
${{ github.workspace }}/codex-rs/target/
key: cargo-${{ matrix.runner }}-${{ matrix.target }}-release-${{ hashFiles('**/Cargo.lock') }}
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl'}}
name: Install Zig
uses: mlugg/setup-zig@v2
with:
version: 0.14.0
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl'}}
name: Install musl build tools
env:
TARGET: ${{ matrix.target }}
run: bash "${GITHUB_WORKSPACE}/.github/scripts/install-musl-build-tools.sh"
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl'}}
name: Configure rustc UBSan wrapper (musl host)
shell: bash
run: |
sudo apt-get update
sudo apt-get install -y musl-tools pkg-config
set -euo pipefail
ubsan=""
if command -v ldconfig >/dev/null 2>&1; then
ubsan="$(ldconfig -p | grep -m1 'libubsan\.so\.1' | sed -E 's/.*=> (.*)$/\1/')"
fi
wrapper_root="${RUNNER_TEMP:-/tmp}"
wrapper="${wrapper_root}/rustc-ubsan-wrapper"
cat > "${wrapper}" <<EOF
#!/usr/bin/env bash
set -euo pipefail
if [[ -n "${ubsan}" ]]; then
export LD_PRELOAD="${ubsan}\${LD_PRELOAD:+:\${LD_PRELOAD}}"
fi
exec "\$1" "\${@:2}"
EOF
chmod +x "${wrapper}"
echo "RUSTC_WRAPPER=${wrapper}" >> "$GITHUB_ENV"
echo "RUSTC_WORKSPACE_WRAPPER=" >> "$GITHUB_ENV"
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl'}}
name: Clear sanitizer flags (musl)
shell: bash
run: |
set -euo pipefail
# Clear global Rust flags so host/proc-macro builds don't pull in UBSan.
echo "RUSTFLAGS=" >> "$GITHUB_ENV"
echo "CARGO_ENCODED_RUSTFLAGS=" >> "$GITHUB_ENV"
echo "RUSTDOCFLAGS=" >> "$GITHUB_ENV"
# Override any runner-level Cargo config rustflags as well.
echo "CARGO_BUILD_RUSTFLAGS=" >> "$GITHUB_ENV"
echo "CARGO_TARGET_X86_64_UNKNOWN_LINUX_GNU_RUSTFLAGS=" >> "$GITHUB_ENV"
echo "CARGO_TARGET_AARCH64_UNKNOWN_LINUX_GNU_RUSTFLAGS=" >> "$GITHUB_ENV"
echo "CARGO_TARGET_X86_64_UNKNOWN_LINUX_MUSL_RUSTFLAGS=" >> "$GITHUB_ENV"
echo "CARGO_TARGET_AARCH64_UNKNOWN_LINUX_MUSL_RUSTFLAGS=" >> "$GITHUB_ENV"
sanitize_flags() {
local input="$1"
input="${input//-fsanitize=undefined/}"
input="${input//-fno-sanitize-recover=undefined/}"
input="${input//-fno-sanitize-trap=undefined/}"
echo "$input"
}
cflags="$(sanitize_flags "${CFLAGS-}")"
cxxflags="$(sanitize_flags "${CXXFLAGS-}")"
echo "CFLAGS=${cflags}" >> "$GITHUB_ENV"
echo "CXXFLAGS=${cxxflags}" >> "$GITHUB_ENV"
- name: Cargo build
shell: bash
@@ -236,6 +327,7 @@ jobs:
# Path that contains the uncompressed binaries for the current
# ${{ matrix.target }}
dest="dist/${{ matrix.target }}"
repo_root=$PWD
# We want to ship the raw Windows executables in the GitHub Release
# in addition to the compressed archives. Keep the originals for
@@ -275,7 +367,30 @@ jobs:
# Must run from inside the dest dir so 7z won't
# embed the directory path inside the zip.
if [[ "${{ matrix.runner }}" == windows* ]]; then
(cd "$dest" && 7z a "${base}.zip" "$base")
if [[ "$base" == "codex-${{ matrix.target }}.exe" ]]; then
# Bundle the sandbox helper binaries into the main codex zip so
# WinGet installs include the required helpers next to codex.exe.
# Fall back to the single-binary zip if the helpers are missing
# to avoid breaking releases.
bundle_dir="$(mktemp -d)"
runner_src="$dest/codex-command-runner-${{ matrix.target }}.exe"
setup_src="$dest/codex-windows-sandbox-setup-${{ matrix.target }}.exe"
if [[ -f "$runner_src" && -f "$setup_src" ]]; then
cp "$dest/$base" "$bundle_dir/$base"
cp "$runner_src" "$bundle_dir/codex-command-runner.exe"
cp "$setup_src" "$bundle_dir/codex-windows-sandbox-setup.exe"
# Use an absolute path so bundle zips land in the real dist
# dir even when 7z runs from a temp directory.
(cd "$bundle_dir" && 7z a "$repo_root/$dest/${base}.zip" .)
else
echo "warning: missing sandbox binaries; falling back to single-binary zip"
echo "warning: expected $runner_src and $setup_src"
(cd "$dest" && 7z a "${base}.zip" "$base")
fi
rm -rf "$bundle_dir"
else
(cd "$dest" && 7z a "${base}.zip" "$base")
fi
fi
# Also create .zst (existing behaviour) *and* remove the original
@@ -358,6 +473,10 @@ jobs:
ls -R dist/
- name: Add config schema release asset
run: |
cp codex-rs/core/config.schema.json dist/config-schema.json
- name: Define release name
id: release_name
run: |
@@ -428,6 +547,19 @@ jobs:
tag: ${{ github.ref_name }}
config: .github/dotslash-config.json
- name: Trigger developers.openai.com deploy
# Only trigger the deploy if the release is not a pre-release.
# The deploy is used to update the developers.openai.com website with the new config schema json file.
if: ${{ !contains(steps.release_name.outputs.name, '-') }}
continue-on-error: true
env:
DEV_WEBSITE_VERCEL_DEPLOY_HOOK_URL: ${{ secrets.DEV_WEBSITE_VERCEL_DEPLOY_HOOK_URL }}
run: |
if ! curl -sS -f -o /dev/null -X POST "$DEV_WEBSITE_VERCEL_DEPLOY_HOOK_URL"; then
echo "::warning title=developers.openai.com deploy hook failed::Vercel deploy hook POST failed for ${GITHUB_REF_NAME}"
exit 1
fi
# Publish to npm using OIDC authentication.
# July 31, 2025: https://github.blog/changelog/2025-07-31-npm-trusted-publishing-with-oidc-is-generally-available/
# npm docs: https://docs.npmjs.com/trusted-publishers

View File

@@ -24,7 +24,7 @@ jobs:
node-version: 22
cache: pnpm
- uses: dtolnay/rust-toolchain@1.92
- uses: dtolnay/rust-toolchain@1.93
- name: build codex
run: cargo build --bin codex

View File

@@ -93,15 +93,83 @@ jobs:
- name: Checkout repository
uses: actions/checkout@v6
- uses: dtolnay/rust-toolchain@1.92
- name: Install UBSan runtime (musl)
if: ${{ matrix.install_musl }}
shell: bash
run: |
set -euo pipefail
if command -v apt-get >/dev/null 2>&1; then
sudo apt-get update -y
sudo DEBIAN_FRONTEND=noninteractive apt-get install -y libubsan1
fi
- uses: dtolnay/rust-toolchain@1.93
with:
targets: ${{ matrix.target }}
- if: ${{ matrix.install_musl }}
name: Install Zig
uses: mlugg/setup-zig@v2
with:
version: 0.14.0
- if: ${{ matrix.install_musl }}
name: Install musl build dependencies
env:
TARGET: ${{ matrix.target }}
run: bash "${GITHUB_WORKSPACE}/.github/scripts/install-musl-build-tools.sh"
- if: ${{ matrix.install_musl }}
name: Configure rustc UBSan wrapper (musl host)
shell: bash
run: |
sudo apt-get update
sudo apt-get install -y musl-tools pkg-config
set -euo pipefail
ubsan=""
if command -v ldconfig >/dev/null 2>&1; then
ubsan="$(ldconfig -p | grep -m1 'libubsan\.so\.1' | sed -E 's/.*=> (.*)$/\1/')"
fi
wrapper_root="${RUNNER_TEMP:-/tmp}"
wrapper="${wrapper_root}/rustc-ubsan-wrapper"
cat > "${wrapper}" <<EOF
#!/usr/bin/env bash
set -euo pipefail
if [[ -n "${ubsan}" ]]; then
export LD_PRELOAD="${ubsan}\${LD_PRELOAD:+:\${LD_PRELOAD}}"
fi
exec "\$1" "\${@:2}"
EOF
chmod +x "${wrapper}"
echo "RUSTC_WRAPPER=${wrapper}" >> "$GITHUB_ENV"
echo "RUSTC_WORKSPACE_WRAPPER=" >> "$GITHUB_ENV"
- if: ${{ matrix.install_musl }}
name: Clear sanitizer flags (musl)
shell: bash
run: |
set -euo pipefail
# Clear global Rust flags so host/proc-macro builds don't pull in UBSan.
echo "RUSTFLAGS=" >> "$GITHUB_ENV"
echo "CARGO_ENCODED_RUSTFLAGS=" >> "$GITHUB_ENV"
echo "RUSTDOCFLAGS=" >> "$GITHUB_ENV"
# Override any runner-level Cargo config rustflags as well.
echo "CARGO_BUILD_RUSTFLAGS=" >> "$GITHUB_ENV"
echo "CARGO_TARGET_X86_64_UNKNOWN_LINUX_GNU_RUSTFLAGS=" >> "$GITHUB_ENV"
echo "CARGO_TARGET_AARCH64_UNKNOWN_LINUX_GNU_RUSTFLAGS=" >> "$GITHUB_ENV"
echo "CARGO_TARGET_X86_64_UNKNOWN_LINUX_MUSL_RUSTFLAGS=" >> "$GITHUB_ENV"
echo "CARGO_TARGET_AARCH64_UNKNOWN_LINUX_MUSL_RUSTFLAGS=" >> "$GITHUB_ENV"
sanitize_flags() {
local input="$1"
input="${input//-fsanitize=undefined/}"
input="${input//-fno-sanitize-recover=undefined/}"
input="${input//-fno-sanitize-trap=undefined/}"
echo "$input"
}
cflags="$(sanitize_flags "${CFLAGS-}")"
cxxflags="$(sanitize_flags "${CXXFLAGS-}")"
echo "CFLAGS=${cflags}" >> "$GITHUB_ENV"
echo "CXXFLAGS=${cxxflags}" >> "$GITHUB_ENV"
- name: Build exec server binaries
run: cargo build --release --target ${{ matrix.target }} --bin codex-exec-mcp-server --bin codex-execve-wrapper
@@ -276,7 +344,6 @@ jobs:
- name: Setup pnpm
uses: pnpm/action-setup@v4
with:
version: 10.8.1
run_install: false
- name: Setup Node.js
@@ -369,12 +436,6 @@ jobs:
id-token: write
contents: read
steps:
- name: Setup pnpm
uses: pnpm/action-setup@v4
with:
version: 10.8.1
run_install: false
- name: Setup Node.js
uses: actions/setup-node@v6
with:
@@ -382,6 +443,7 @@ jobs:
registry-url: https://registry.npmjs.org
scope: "@openai"
# Trusted publishing requires npm CLI version 11.5.1 or later.
- name: Update npm
run: npm install -g npm@latest

View File

@@ -11,15 +11,18 @@ In the codex-rs folder where the rust code lives:
- Always collapse if statements per https://rust-lang.github.io/rust-clippy/master/index.html#collapsible_if
- Always inline format! args when possible per https://rust-lang.github.io/rust-clippy/master/index.html#uninlined_format_args
- Use method references over closures when possible per https://rust-lang.github.io/rust-clippy/master/index.html#redundant_closure_for_method_calls
- When possible, make `match` statements exhaustive and avoid wildcard arms.
- When writing tests, prefer comparing the equality of entire objects over fields one by one.
- When making a change that adds or changes an API, ensure that the documentation in the `docs/` folder is up to date if applicable.
- If you change `ConfigToml` or nested config types, run `just write-config-schema` to update `codex-rs/core/config.schema.json`.
Run `just fmt` (in `codex-rs` directory) automatically after making Rust code changes; do not ask for approval to run it. Before finalizing a change to `codex-rs`, run `just fix -p <project>` (in `codex-rs` directory) to fix any linter issues in the code. Prefer scoping with `-p` to avoid slow workspacewide Clippy builds; only run `just fix` without `-p` if you changed shared crates. Additionally, run the tests:
Run `just fmt` (in `codex-rs` directory) automatically after you have finished making Rust code changes; do not ask for approval to run it. Additionally, run the tests:
1. Run the test for the specific project that was changed. For example, if changes were made in `codex-rs/tui`, run `cargo test -p codex-tui`.
2. Once those pass, if any changes were made in common, core, or protocol, run the complete test suite with `cargo test --all-features`. project-specific or individual tests can be run without asking the user, but do ask the user before running the complete test suite.
Before finalizing a large change to `codex-rs`, run `just fix -p <project>` (in `codex-rs` directory) to fix any linter issues in the code. Prefer scoping with `-p` to avoid slow workspacewide Clippy builds; only run `just fix` without `-p` if you changed shared crates.
## TUI style conventions
See `codex-rs/tui/styles.md`.

View File

@@ -1,3 +1,7 @@
load("@apple_support//xcode:xcode_config.bzl", "xcode_config")
xcode_config(name = "disable_xcode")
# We mark the local platform as glibc-compatible so that rust can grab a toolchain for us.
# TODO(zbarsky): Upstream a better libc constraint into rules_rust.
# We only enable this on linux though for sanity, and because it breaks remote execution.

View File

@@ -2,13 +2,9 @@ bazel_dep(name = "platforms", version = "1.0.0")
bazel_dep(name = "toolchains_llvm_bootstrapped", version = "0.3.1")
archive_override(
module_name = "toolchains_llvm_bootstrapped",
integrity = "sha256-9ks21bgEqbQWmwUIvqeLA64+Jk6o4ZVjC8KxjVa2Vw8=",
strip_prefix = "toolchains_llvm_bootstrapped-e3775e66a7b6d287c705ca0cd24497ef4a77c503",
urls = ["https://github.com/cerisier/toolchains_llvm_bootstrapped/archive/e3775e66a7b6d287c705ca0cd24497ef4a77c503/master.tar.gz"],
patch_strip = 1,
patches = [
"//patches:llvm_toolchain_archive_params.patch",
],
integrity = "sha256-4/2h4tYSUSptxFVI9G50yJxWGOwHSeTeOGBlaLQBV8g=",
strip_prefix = "toolchains_llvm_bootstrapped-d20baf67e04d8e2887e3779022890d1dc5e6b948",
urls = ["https://github.com/cerisier/toolchains_llvm_bootstrapped/archive/d20baf67e04d8e2887e3779022890d1dc5e6b948.tar.gz"],
)
osx = use_extension("@toolchains_llvm_bootstrapped//toolchain/extension:osx.bzl", "osx")
@@ -31,6 +27,8 @@ register_toolchains(
"@toolchains_llvm_bootstrapped//toolchain:all",
)
# Needed to disable xcode...
bazel_dep(name = "apple_support", version = "2.1.0")
bazel_dep(name = "rules_cc", version = "0.2.16")
bazel_dep(name = "rules_platform", version = "0.1.0")
bazel_dep(name = "rules_rust", version = "0.68.1")
@@ -57,7 +55,7 @@ rust = use_extension("@rules_rust//rust:extensions.bzl", "rust")
rust.toolchain(
edition = "2024",
extra_target_triples = RUST_TRIPLES,
versions = ["1.90.0"],
versions = ["1.93.0"],
)
use_repo(rust, "rust_toolchains")
@@ -71,6 +69,11 @@ crate.from_cargo(
cargo_toml = "//codex-rs:Cargo.toml",
platform_triples = RUST_TRIPLES,
)
crate.annotation(
crate = "nucleo-matcher",
strip_prefix = "matcher",
version = "0.3.1",
)
bazel_dep(name = "openssl", version = "3.5.4.bcr.0")
@@ -89,12 +92,17 @@ crate.annotation(
inject_repo(crate, "openssl")
crate.annotation(
crate = "runfiles",
workspace_cargo_toml = "rust/runfiles/Cargo.toml",
)
# Fix readme inclusions
crate.annotation(
crate = "windows-link",
patch_args = ["-p1"],
patches = [
"//patches:windows-link.patch"
"//patches:windows-link.patch",
],
)

479 MODULE.bazel.lock generated

File diff suppressed because one or more lines are too long

70 PNPM.md
View File

@@ -1,70 +0,0 @@
# Migration to pnpm
This project has been migrated from npm to pnpm to improve dependency management and developer experience.
## Why pnpm?
- **Faster installation**: pnpm is significantly faster than npm and yarn
- **Disk space savings**: pnpm uses a content-addressable store to avoid duplication
- **Phantom dependency prevention**: pnpm creates a strict node_modules structure
- **Native workspaces support**: simplified monorepo management
## How to use pnpm
### Installation
```bash
# Global installation of pnpm
npm install -g pnpm@10.8.1
# Or with corepack (available with Node.js 22+)
corepack enable
corepack prepare pnpm@10.8.1 --activate
```
### Common commands
| npm command | pnpm equivalent |
| --------------- | ---------------- |
| `npm install` | `pnpm install` |
| `npm run build` | `pnpm run build` |
| `npm test` | `pnpm test` |
| `npm run lint` | `pnpm run lint` |
### Workspace-specific commands
| Action | Command |
| ------------------------------------------ | ---------------------------------------- |
| Run a command in a specific package | `pnpm --filter @openai/codex run build` |
| Install a dependency in a specific package | `pnpm --filter @openai/codex add lodash` |
| Run a command in all packages | `pnpm -r run test` |
## Monorepo structure
```
codex/
├── pnpm-workspace.yaml # Workspace configuration
├── .npmrc # pnpm configuration
├── package.json # Root dependencies and scripts
├── codex-cli/ # Main package
│ └── package.json # codex-cli specific dependencies
└── docs/ # Documentation (future package)
```
## Configuration files
- **pnpm-workspace.yaml**: Defines the packages included in the monorepo
- **.npmrc**: Configures pnpm behavior
- **Root package.json**: Contains shared scripts and dependencies
## CI/CD
CI/CD workflows have been updated to use pnpm instead of npm. Make sure your CI environments use pnpm 10.8.1 or higher.
## Known issues
If you encounter issues with pnpm, try the following solutions:
1. Remove the `node_modules` folder and `pnpm-lock.yaml` file, then run `pnpm install`
2. Make sure you're using pnpm 10.8.1 or higher
3. Verify that Node.js 22 or higher is installed


@@ -1,18 +0,0 @@
{
"name": "@openai/codex",
"version": "0.0.0-dev",
"lockfileVersion": 3,
"packages": {
"": {
"name": "@openai/codex",
"version": "0.0.0-dev",
"license": "Apache-2.0",
"bin": {
"codex": "bin/codex.js"
},
"engines": {
"node": ">=16"
}
}
}
}


@@ -17,5 +17,6 @@
"type": "git",
"url": "git+https://github.com/openai/codex.git",
"directory": "codex-cli"
}
},
"packageManager": "pnpm@10.28.2+sha512.41872f037ad22f7348e3b1debbaf7e867cfd448f2726d9cf74c08f19507c31d2c8e7a11525b983febc2df640b5438dee6023ebb1f84ed43cc2d654d2bc326264"
}

codex-rs/Cargo.lock (generated, 1627 lines changed): diff suppressed because it is too large


@@ -11,6 +11,7 @@ members = [
"arg0",
"feedback",
"codex-backend-openapi-models",
"cloud-requirements",
"cloud-tasks",
"cloud-tasks-client",
"cli",
@@ -27,6 +28,7 @@ members = [
"login",
"mcp-server",
"mcp-types",
"network-proxy",
"ollama",
"process-hardening",
"protocol",
@@ -35,7 +37,6 @@ members = [
"stdio-to-uds",
"otel",
"tui",
"tui2",
"utils/absolute-path",
"utils/cargo-bin",
"utils/git",
@@ -47,6 +48,7 @@ members = [
"utils/string",
"codex-client",
"codex-api",
"state",
]
resolver = "2"
@@ -70,6 +72,7 @@ codex-apply-patch = { path = "apply-patch" }
codex-arg0 = { path = "arg0" }
codex-async-utils = { path = "async-utils" }
codex-backend-client = { path = "backend-client" }
codex-cloud-requirements = { path = "cloud-requirements" }
codex-chatgpt = { path = "chatgpt" }
codex-cli = { path = "cli"}
codex-client = { path = "codex-client" }
@@ -91,9 +94,9 @@ codex-process-hardening = { path = "process-hardening" }
codex-protocol = { path = "protocol" }
codex-responses-api-proxy = { path = "responses-api-proxy" }
codex-rmcp-client = { path = "rmcp-client" }
codex-state = { path = "state" }
codex-stdio-to-uds = { path = "stdio-to-uds" }
codex-tui = { path = "tui" }
codex-tui2 = { path = "tui2" }
codex-utils-absolute-path = { path = "utils/absolute-path" }
codex-utils-cache = { path = "utils/cache" }
codex-utils-cargo-bin = { path = "utils/cargo-bin" }
@@ -127,6 +130,7 @@ clap = "4"
clap_complete = "4"
color-eyre = "0.6.3"
crossterm = "0.28.1"
crossbeam-channel = "0.5.15"
ctor = "0.6.3"
derive_more = "2"
diffy = "0.4.2"
@@ -138,6 +142,7 @@ env-flags = "0.1.1"
env_logger = "0.11.5"
eventsource-stream = "0.2.3"
futures = { version = "0.3", default-features = false }
globset = "0.4"
http = "1.3.1"
icu_decimal = "2.1"
icu_locale_core = "2.1"
@@ -159,7 +164,7 @@ maplit = "1.0.2"
mime_guess = "2.0.5"
multimap = "0.10.0"
notify = "8.2.0"
nucleo-matcher = "0.3.1"
nucleo = { git = "https://github.com/helix-editor/nucleo.git", rev = "4253de9faabb4e5c6d81d946a5e35a90f87347ee" }
once_cell = "1.20.2"
openssl-sys = "*"
opentelemetry = "0.31.0"
@@ -178,17 +183,18 @@ pretty_assertions = "1.4.1"
pulldown-cmark = "0.10"
rand = "0.9"
ratatui = "0.29.0"
ratatui-core = "0.1.0"
ratatui-macros = "0.6.0"
regex = "1.12.2"
regex-lite = "0.1.8"
reqwest = { version = "0.12", features = ["rustls-tls-webpki-roots", "rustls-tls-native-roots"] }
reqwest = "0.12"
rmcp = { version = "0.12.0", default-features = false }
runfiles = { git = "https://github.com/dzbarsky/rules_rust", rev = "b56cbaa8465e74127f1ea216f813cd377295ad81" }
schemars = "0.8.22"
seccompiler = "0.5.0"
sentry = "0.46.0"
serde = "1"
serde_json = "1"
serde_path_to_error = "0.1.20"
serde_with = "3.16"
serde_yaml = "0.9"
serial_test = "3.2.0"
@@ -198,6 +204,7 @@ semver = "1.0"
shlex = "1.3.0"
similar = "2.7.0"
socket2 = "0.6.1"
sqlx = { version = "0.8.6", default-features = false, features = ["chrono", "json", "macros", "migrate", "runtime-tokio-rustls", "sqlite", "time", "uuid"] }
starlark = "0.13.0"
strum = "0.27.2"
strum_macros = "0.27.2"
@@ -212,11 +219,11 @@ tiny_http = "0.12"
tokio = "1"
tokio-stream = "0.1.18"
tokio-test = "0.4"
tokio-tungstenite = { version = "0.28.0", features = ["rustls-tls-webpki-roots", "rustls-tls-native-roots", "proxy"] }
tokio-tungstenite = { version = "0.28.0", features = ["proxy", "rustls-tls-native-roots"] }
tokio-util = "0.7.18"
toml = "0.9.5"
toml_edit = "0.24.0"
tracing = "0.1.43"
tracing = "0.1.44"
tracing-appender = "0.2.3"
tracing-subscriber = "0.3.22"
tracing-test = "0.2.5"
@@ -225,7 +232,6 @@ tree-sitter-bash = "0.25"
zstd = "0.13"
tree-sitter-highlight = "0.25.10"
ts-rs = "11"
tui-scrollbar = "0.2.2"
uds_windows = "1.1.0"
unicode-segmentation = "1.12.0"
unicode-width = "0.2"


@@ -23,11 +23,22 @@ impl GitSha {
}
}
/// Authentication mode for OpenAI-backed providers.
#[derive(Serialize, Deserialize, Debug, Clone, Copy, PartialEq, Eq, Display, JsonSchema, TS)]
#[serde(rename_all = "lowercase")]
pub enum AuthMode {
/// OpenAI API key provided by the caller and stored by Codex.
ApiKey,
ChatGPT,
/// ChatGPT OAuth managed by Codex (tokens persisted and refreshed by Codex).
Chatgpt,
/// [UNSTABLE] FOR OPENAI INTERNAL USE ONLY - DO NOT USE.
///
/// ChatGPT auth tokens are supplied by an external host app and are only
/// stored in memory. Token refresh must be handled by the external host app.
#[serde(rename = "chatgptAuthTokens")]
#[ts(rename = "chatgptAuthTokens")]
#[strum(serialize = "chatgptAuthTokens")]
ChatgptAuthTokens,
}
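For reference, a sketch of the wire values these variants serialize to, assuming only the `rename_all = "lowercase"` attribute and the explicit `chatgptAuthTokens` rename shown above:

```json
["apikey", "chatgpt", "chatgptAuthTokens"]
```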
/// Generates an `enum ClientRequest` where each variant is a request that the
@@ -117,6 +128,14 @@ client_request_definitions! {
params: v2::ThreadArchiveParams,
response: v2::ThreadArchiveResponse,
},
ThreadSetName => "thread/name/set" {
params: v2::ThreadSetNameParams,
response: v2::ThreadSetNameResponse,
},
ThreadUnarchive => "thread/unarchive" {
params: v2::ThreadUnarchiveParams,
response: v2::ThreadUnarchiveResponse,
},
ThreadRollback => "thread/rollback" {
params: v2::ThreadRollbackParams,
response: v2::ThreadRollbackResponse,
@@ -129,10 +148,18 @@ client_request_definitions! {
params: v2::ThreadLoadedListParams,
response: v2::ThreadLoadedListResponse,
},
ThreadRead => "thread/read" {
params: v2::ThreadReadParams,
response: v2::ThreadReadResponse,
},
SkillsList => "skills/list" {
params: v2::SkillsListParams,
response: v2::SkillsListResponse,
},
AppsList => "app/list" {
params: v2::AppsListParams,
response: v2::AppsListResponse,
},
SkillsConfigWrite => "skills/config/write" {
params: v2::SkillsConfigWriteParams,
response: v2::SkillsConfigWriteResponse,
@@ -516,6 +543,17 @@ server_request_definitions! {
response: v2::ToolRequestUserInputResponse,
},
/// Execute a dynamic tool call on the client.
DynamicToolCall => "item/tool/call" {
params: v2::DynamicToolCallParams,
response: v2::DynamicToolCallResponse,
},
ChatgptAuthTokensRefresh => "account/chatgptAuthTokens/refresh" {
params: v2::ChatgptAuthTokensRefreshParams,
response: v2::ChatgptAuthTokensRefreshResponse,
},
/// DEPRECATED APIs below
/// Request to approve a patch.
/// This request is used for Turns started via the legacy APIs (i.e. SendUserTurn, SendUserMessage).
@@ -560,6 +598,7 @@ server_notification_definitions! {
/// NEW NOTIFICATIONS
Error => "error" (v2::ErrorNotification),
ThreadStarted => "thread/started" (v2::ThreadStartedNotification),
ThreadNameUpdated => "thread/name/updated" (v2::ThreadNameUpdatedNotification),
ThreadTokenUsageUpdated => "thread/tokenUsage/updated" (v2::ThreadTokenUsageUpdatedNotification),
TurnStarted => "turn/started" (v2::TurnStartedNotification),
TurnCompleted => "turn/completed" (v2::TurnCompletedNotification),
@@ -570,6 +609,8 @@ server_notification_definitions! {
/// This event is internal-only. Used by Codex Cloud.
RawResponseItemCompleted => "rawResponseItem/completed" (v2::RawResponseItemCompletedNotification),
AgentMessageDelta => "item/agentMessage/delta" (v2::AgentMessageDeltaNotification),
/// EXPERIMENTAL - proposed plan streaming deltas for plan items.
PlanDelta => "item/plan/delta" (v2::PlanDeltaNotification),
CommandExecutionOutputDelta => "item/commandExecution/outputDelta" (v2::CommandExecutionOutputDeltaNotification),
TerminalInteraction => "item/commandExecution/terminalInteraction" (v2::TerminalInteractionNotification),
FileChangeOutputDelta => "item/fileChange/outputDelta" (v2::FileChangeOutputDeltaNotification),
@@ -580,6 +621,7 @@ server_notification_definitions! {
ReasoningSummaryTextDelta => "item/reasoning/summaryTextDelta" (v2::ReasoningSummaryTextDeltaNotification),
ReasoningSummaryPartAdded => "item/reasoning/summaryPartAdded" (v2::ReasoningSummaryPartAddedNotification),
ReasoningTextDelta => "item/reasoning/textDelta" (v2::ReasoningTextDeltaNotification),
/// Deprecated: Use `ContextCompaction` item type instead.
ContextCompacted => "thread/compacted" (v2::ContextCompactedNotification),
DeprecationNotice => "deprecationNotice" (v2::DeprecationNoticeNotification),
ConfigWarning => "configWarning" (v2::ConfigWarningNotification),
@@ -734,6 +776,29 @@ mod tests {
Ok(())
}
#[test]
fn serialize_chatgpt_auth_tokens_refresh_request() -> Result<()> {
let request = ServerRequest::ChatgptAuthTokensRefresh {
request_id: RequestId::Integer(8),
params: v2::ChatgptAuthTokensRefreshParams {
reason: v2::ChatgptAuthTokensRefreshReason::Unauthorized,
previous_account_id: Some("org-123".to_string()),
},
};
assert_eq!(
json!({
"method": "account/chatgptAuthTokens/refresh",
"id": 8,
"params": {
"reason": "unauthorized",
"previousAccountId": "org-123"
}
}),
serde_json::to_value(&request)?,
);
Ok(())
}
#[test]
fn serialize_get_account_rate_limits() -> Result<()> {
let request = ClientRequest::GetAccountRateLimits {
@@ -823,10 +888,34 @@ mod tests {
Ok(())
}
#[test]
fn serialize_account_login_chatgpt_auth_tokens() -> Result<()> {
let request = ClientRequest::LoginAccount {
request_id: RequestId::Integer(5),
params: v2::LoginAccountParams::ChatgptAuthTokens {
access_token: "access-token".to_string(),
id_token: "id-token".to_string(),
},
};
assert_eq!(
json!({
"method": "account/login/start",
"id": 5,
"params": {
"type": "chatgptAuthTokens",
"accessToken": "access-token",
"idToken": "id-token"
}
}),
serde_json::to_value(&request)?,
);
Ok(())
}
#[test]
fn serialize_get_account() -> Result<()> {
let request = ClientRequest::GetAccount {
request_id: RequestId::Integer(5),
request_id: RequestId::Integer(6),
params: v2::GetAccountParams {
refresh_token: false,
},
@@ -834,7 +923,7 @@ mod tests {
assert_eq!(
json!({
"method": "account/read",
"id": 5,
"id": 6,
"params": {
"refreshToken": false
}


@@ -6,6 +6,7 @@ use crate::protocol::v2::UserInput;
use codex_protocol::protocol::AgentReasoningEvent;
use codex_protocol::protocol::AgentReasoningRawContentEvent;
use codex_protocol::protocol::EventMsg;
use codex_protocol::protocol::ItemCompletedEvent;
use codex_protocol::protocol::ThreadRolledBackEvent;
use codex_protocol::protocol::TurnAbortedEvent;
use codex_protocol::protocol::UserMessageEvent;
@@ -55,6 +56,7 @@ impl ThreadHistoryBuilder {
EventMsg::AgentReasoningRawContent(payload) => {
self.handle_agent_reasoning_raw_content(payload)
}
EventMsg::ItemCompleted(payload) => self.handle_item_completed(payload),
EventMsg::TokenCount(_) => {}
EventMsg::EnteredReviewMode(_) => {}
EventMsg::ExitedReviewMode(_) => {}
@@ -125,6 +127,19 @@ impl ThreadHistoryBuilder {
});
}
fn handle_item_completed(&mut self, payload: &ItemCompletedEvent) {
if let codex_protocol::items::TurnItem::Plan(plan) = &payload.item {
if plan.text.is_empty() {
return;
}
let id = self.next_item_id();
self.ensure_turn().items.push(ThreadItem::Plan {
id,
text: plan.text.clone(),
});
}
}
fn handle_turn_aborted(&mut self, _payload: &TurnAbortedEvent) {
let Some(turn) = self.current_turn.as_mut() else {
return;


@@ -5,7 +5,9 @@ use crate::protocol::common::AuthMode;
use codex_protocol::account::PlanType;
use codex_protocol::approvals::ExecPolicyAmendment as CoreExecPolicyAmendment;
use codex_protocol::config_types::CollaborationMode;
use codex_protocol::config_types::CollaborationModeMask;
use codex_protocol::config_types::ForcedLoginMethod;
use codex_protocol::config_types::Personality;
use codex_protocol::config_types::ReasoningSummary;
use codex_protocol::config_types::SandboxMode as CoreSandboxMode;
use codex_protocol::config_types::Verbosity;
@@ -25,10 +27,13 @@ use codex_protocol::protocol::NetworkAccess as CoreNetworkAccess;
use codex_protocol::protocol::RateLimitSnapshot as CoreRateLimitSnapshot;
use codex_protocol::protocol::RateLimitWindow as CoreRateLimitWindow;
use codex_protocol::protocol::SessionSource as CoreSessionSource;
use codex_protocol::protocol::SkillDependencies as CoreSkillDependencies;
use codex_protocol::protocol::SkillErrorInfo as CoreSkillErrorInfo;
use codex_protocol::protocol::SkillInterface as CoreSkillInterface;
use codex_protocol::protocol::SkillMetadata as CoreSkillMetadata;
use codex_protocol::protocol::SkillScope as CoreSkillScope;
use codex_protocol::protocol::SkillToolDependency as CoreSkillToolDependency;
use codex_protocol::protocol::SubAgentSource as CoreSubAgentSource;
use codex_protocol::protocol::TokenUsage as CoreTokenUsage;
use codex_protocol::protocol::TokenUsageInfo as CoreTokenUsageInfo;
use codex_protocol::user_input::ByteRange as CoreByteRange;
@@ -81,6 +86,10 @@ macro_rules! v2_enum_from_core {
pub enum CodexErrorInfo {
ContextWindowExceeded,
UsageLimitExceeded,
ModelCap {
model: String,
reset_after_seconds: Option<u64>,
},
HttpConnectionFailed {
#[serde(rename = "httpStatusCode")]
#[ts(rename = "httpStatusCode")]
@@ -117,6 +126,13 @@ impl From<CoreCodexErrorInfo> for CodexErrorInfo {
match value {
CoreCodexErrorInfo::ContextWindowExceeded => CodexErrorInfo::ContextWindowExceeded,
CoreCodexErrorInfo::UsageLimitExceeded => CodexErrorInfo::UsageLimitExceeded,
CoreCodexErrorInfo::ModelCap {
model,
reset_after_seconds,
} => CodexErrorInfo::ModelCap {
model,
reset_after_seconds,
},
CoreCodexErrorInfo::HttpConnectionFailed { http_status_code } => {
CodexErrorInfo::HttpConnectionFailed { http_status_code }
}
@@ -217,7 +233,6 @@ v2_enum_from_core!(
}
);
// TODO(mbolin): Support in-repo layer.
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq, JsonSchema, TS)]
#[serde(tag = "type", rename_all = "camelCase")]
#[ts(tag = "type")]
@@ -323,6 +338,15 @@ pub struct ToolsV2 {
pub view_image: Option<bool>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct DynamicToolSpec {
pub name: String,
pub description: String,
pub input_schema: JsonValue,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "snake_case")]
#[ts(export_to = "v2/")]
@@ -393,6 +417,8 @@ pub struct ConfigLayer {
pub name: ConfigLayerSource,
pub version: String,
pub config: JsonValue,
#[serde(skip_serializing_if = "Option::is_none")]
pub disabled_reason: Option<String>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq, JsonSchema, TS)]
@@ -449,6 +475,10 @@ pub enum ConfigWriteErrorCode {
pub struct ConfigReadParams {
#[serde(default)]
pub include_layers: bool,
/// Optional working directory to resolve project config layers. If specified,
/// return the effective config as seen from that directory (i.e., including any
/// project layers between `cwd` and the project/repo root).
pub cwd: Option<String>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
@@ -694,6 +724,7 @@ pub enum SessionSource {
VsCode,
Exec,
AppServer,
SubAgent(CoreSubAgentSource),
#[serde(other)]
Unknown,
}
@@ -705,7 +736,7 @@ impl From<CoreSessionSource> for SessionSource {
CoreSessionSource::VSCode => SessionSource::VsCode,
CoreSessionSource::Exec => SessionSource::Exec,
CoreSessionSource::Mcp => SessionSource::AppServer,
CoreSessionSource::SubAgent(_) => SessionSource::Unknown,
CoreSessionSource::SubAgent(sub) => SessionSource::SubAgent(sub),
CoreSessionSource::Unknown => SessionSource::Unknown,
}
}
@@ -718,6 +749,7 @@ impl From<SessionSource> for CoreSessionSource {
SessionSource::VsCode => CoreSessionSource::VSCode,
SessionSource::Exec => CoreSessionSource::Exec,
SessionSource::AppServer => CoreSessionSource::Mcp,
SessionSource::SubAgent(sub) => CoreSessionSource::SubAgent(sub),
SessionSource::Unknown => CoreSessionSource::Unknown,
}
}
@@ -803,6 +835,24 @@ pub enum LoginAccountParams {
#[serde(rename = "chatgpt")]
#[ts(rename = "chatgpt")]
Chatgpt,
/// [UNSTABLE] FOR OPENAI INTERNAL USE ONLY - DO NOT USE.
/// The access token must contain the same scopes that Codex-managed ChatGPT auth tokens have.
#[serde(rename = "chatgptAuthTokens")]
#[ts(rename = "chatgptAuthTokens")]
ChatgptAuthTokens {
/// ID token (JWT) supplied by the client.
///
/// This token is used for identity and account metadata (email, plan type,
/// workspace id).
#[serde(rename = "idToken")]
#[ts(rename = "idToken")]
id_token: String,
/// Access token (JWT) supplied by the client.
/// This token is used for backend API requests.
#[serde(rename = "accessToken")]
#[ts(rename = "accessToken")]
access_token: String,
},
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
@@ -822,6 +872,9 @@ pub enum LoginAccountResponse {
/// URL the client should open in a browser to initiate the OAuth flow.
auth_url: String,
},
#[serde(rename = "chatgptAuthTokens", rename_all = "camelCase")]
#[ts(rename = "chatgptAuthTokens", rename_all = "camelCase")]
ChatgptAuthTokens {},
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
@@ -852,6 +905,37 @@ pub struct CancelLoginAccountResponse {
#[ts(export_to = "v2/")]
pub struct LogoutAccountResponse {}
#[derive(Serialize, Deserialize, Debug, Clone, Copy, PartialEq, Eq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub enum ChatgptAuthTokensRefreshReason {
/// Codex attempted a backend request and received `401 Unauthorized`.
Unauthorized,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct ChatgptAuthTokensRefreshParams {
pub reason: ChatgptAuthTokensRefreshReason,
/// Workspace/account identifier that Codex was previously using.
///
/// Clients that manage multiple accounts/workspaces can use this as a hint
/// to refresh the token for the correct workspace.
///
/// This may be `null` when the prior ID token did not include a workspace
/// identifier (`chatgpt_account_id`) or when the token could not be parsed.
pub previous_account_id: Option<String>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct ChatgptAuthTokensRefreshResponse {
pub id_token: String,
pub access_token: String,
}
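Given the `camelCase` rename above, the refresh response a client returns would look roughly like this (token values are placeholders):

```json
{ "idToken": "eyJhbGciOi...", "accessToken": "eyJhbGciOi..." }
```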
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
@@ -863,6 +947,11 @@ pub struct GetAccountRateLimitsResponse {
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct GetAccountParams {
/// When `true`, requests a proactive token refresh before returning.
///
/// In managed auth mode this triggers the normal refresh-token flow. In
/// external auth mode this flag is ignored. Clients should refresh tokens
/// themselves and call `account/login/start` with `chatgptAuthTokens`.
#[serde(default)]
pub refresh_token: bool,
}
@@ -895,6 +984,8 @@ pub struct Model {
pub description: String,
pub supported_reasoning_efforts: Vec<ReasoningEffortOption>,
pub default_reasoning_effort: ReasoningEffort,
#[serde(default)]
pub supports_personality: bool,
// Only one model should be marked as default.
pub is_default: bool,
}
@@ -928,7 +1019,7 @@ pub struct CollaborationModeListParams {}
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct CollaborationModeListResponse {
pub data: Vec<CollaborationMode>,
pub data: Vec<CollaborationModeMask>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
@@ -962,6 +1053,41 @@ pub struct ListMcpServerStatusResponse {
pub next_cursor: Option<String>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Default, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct AppsListParams {
/// Opaque pagination cursor returned by a previous call.
pub cursor: Option<String>,
/// Optional page size; defaults to a reasonable server-side value.
pub limit: Option<u32>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct AppInfo {
pub id: String,
pub name: String,
pub description: Option<String>,
pub logo_url: Option<String>,
pub logo_url_dark: Option<String>,
pub distribution_channel: Option<String>,
pub install_url: Option<String>,
#[serde(default)]
pub is_accessible: bool,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct AppsListResponse {
pub data: Vec<AppInfo>,
/// Opaque cursor to pass to the next call to continue after the last item.
/// If None, there are no more items to return.
pub next_cursor: Option<String>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
@@ -1043,6 +1169,9 @@ pub struct ThreadStartParams {
pub config: Option<HashMap<String, JsonValue>>,
pub base_instructions: Option<String>,
pub developer_instructions: Option<String>,
pub personality: Option<Personality>,
pub ephemeral: Option<bool>,
pub dynamic_tools: Option<Vec<DynamicToolSpec>>,
/// If true, opt into emitting raw response items on the event stream.
///
/// This is for internal use only (e.g. Codex Cloud).
@@ -1097,6 +1226,7 @@ pub struct ThreadResumeParams {
pub config: Option<HashMap<String, serde_json::Value>>,
pub base_instructions: Option<String>,
pub developer_instructions: Option<String>,
pub personality: Option<Personality>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
@@ -1165,6 +1295,33 @@ pub struct ThreadArchiveParams {
#[ts(export_to = "v2/")]
pub struct ThreadArchiveResponse {}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct ThreadSetNameParams {
pub thread_id: String,
pub name: String,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct ThreadUnarchiveParams {
pub thread_id: String,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct ThreadSetNameResponse {}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct ThreadUnarchiveResponse {
pub thread: Thread,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
@@ -1202,6 +1359,30 @@ pub struct ThreadListParams {
/// Optional provider filter; when set, only sessions recorded under these
/// providers are returned. When present but empty, includes all providers.
pub model_providers: Option<Vec<String>>,
/// Optional source filter; when set, only sessions from these source kinds
/// are returned. When omitted or empty, defaults to interactive sources.
pub source_kinds: Option<Vec<ThreadSourceKind>>,
/// Optional archived filter; when set to true, only archived threads are returned.
/// If false or null, only non-archived threads are returned.
pub archived: Option<bool>,
}
#[derive(Serialize, Deserialize, Debug, Clone, Copy, PartialEq, Eq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(rename_all = "camelCase", export_to = "v2/")]
pub enum ThreadSourceKind {
Cli,
#[serde(rename = "vscode")]
#[ts(rename = "vscode")]
VsCode,
Exec,
AppServer,
SubAgent,
SubAgentReview,
SubAgentCompact,
SubAgentThreadSpawn,
SubAgentOther,
Unknown,
}
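A sketch of a `thread/list` call using these filters, assuming the camelCase renames above (the request id and filter values are illustrative):

```json
{ "method": "thread/list", "id": 31, "params": {
  "sourceKinds": ["cli", "vscode"],
  "archived": false
} }
```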
#[derive(Serialize, Deserialize, Debug, Clone, Copy, PartialEq, JsonSchema, TS)]
@@ -1243,6 +1424,23 @@ pub struct ThreadLoadedListResponse {
pub next_cursor: Option<String>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct ThreadReadParams {
pub thread_id: String,
/// When true, include turns and their items from rollout history.
#[serde(default)]
pub include_turns: bool,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct ThreadReadResponse {
pub thread: Thread,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
@@ -1282,11 +1480,14 @@ pub struct SkillMetadata {
pub description: String,
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
/// Legacy short_description from SKILL.md. Prefer SKILL.toml interface.short_description.
/// Legacy short_description from SKILL.md. Prefer SKILL.json interface.short_description.
pub short_description: Option<String>,
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub interface: Option<SkillInterface>,
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub dependencies: Option<SkillDependencies>,
pub path: PathBuf,
pub scope: SkillScope,
pub enabled: bool,
@@ -1310,6 +1511,35 @@ pub struct SkillInterface {
pub default_prompt: Option<String>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct SkillDependencies {
pub tools: Vec<SkillToolDependency>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct SkillToolDependency {
#[serde(rename = "type")]
#[ts(rename = "type")]
pub r#type: String,
pub value: String,
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub description: Option<String>,
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub transport: Option<String>,
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub command: Option<String>,
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub url: Option<String>,
}
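A rough example of how a skill's dependency block could serialize given the renames above; the `type`/`value` semantics and all concrete values here are assumptions for illustration:

```json
{ "tools": [ { "type": "mcp", "value": "github", "description": "GitHub MCP server", "transport": "stdio" } ] }
```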
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
@@ -1349,6 +1579,7 @@ impl From<CoreSkillMetadata> for SkillMetadata {
description: value.description,
short_description: value.short_description,
interface: value.interface.map(SkillInterface::from),
dependencies: value.dependencies.map(SkillDependencies::from),
path: value.path,
scope: value.scope.into(),
enabled: true,
@@ -1369,6 +1600,31 @@ impl From<CoreSkillInterface> for SkillInterface {
}
}
impl From<CoreSkillDependencies> for SkillDependencies {
fn from(value: CoreSkillDependencies) -> Self {
Self {
tools: value
.tools
.into_iter()
.map(SkillToolDependency::from)
.collect(),
}
}
}
impl From<CoreSkillToolDependency> for SkillToolDependency {
fn from(value: CoreSkillToolDependency) -> Self {
Self {
r#type: value.r#type,
value: value.value,
description: value.description,
transport: value.transport,
command: value.command,
url: value.url,
}
}
}
impl From<CoreSkillScope> for SkillScope {
fn from(value: CoreSkillScope) -> Self {
match value {
@@ -1405,7 +1661,7 @@ pub struct Thread {
#[ts(type = "number")]
pub updated_at: i64,
/// [UNSTABLE] Path to the thread on disk.
pub path: PathBuf,
pub path: Option<PathBuf>,
/// Working directory captured for the thread.
pub cwd: PathBuf,
/// Version of the CLI that created the thread.
@@ -1414,7 +1670,8 @@ pub struct Thread {
pub source: SessionSource,
/// Optional Git metadata captured when the thread was created.
pub git_info: Option<GitInfo>,
/// Only populated on `thread/resume`, `thread/rollback`, `thread/fork` responses.
/// Only populated on `thread/resume`, `thread/rollback`, `thread/fork`, and `thread/read`
/// (when `includeTurns` is true) responses.
/// For all other responses and notifications returning a Thread,
/// the turns field will be an empty list.
pub turns: Vec<Turn>,
@@ -1551,6 +1808,8 @@ pub struct TurnStartParams {
pub effort: Option<ReasoningEffort>,
/// Override the reasoning summary for this turn and subsequent turns.
pub summary: Option<ReasoningSummary>,
/// Override the personality for this turn and subsequent turns.
pub personality: Option<Personality>,
/// Optional JSON Schema used to constrain the final assistant message for this turn.
pub output_schema: Option<JsonValue>,
@@ -1721,6 +1980,10 @@ pub enum UserInput {
name: String,
path: PathBuf,
},
Mention {
name: String,
path: String,
},
}
impl UserInput {
@@ -1736,6 +1999,7 @@ impl UserInput {
UserInput::Image { url } => CoreUserInput::Image { image_url: url },
UserInput::LocalImage { path } => CoreUserInput::LocalImage { path },
UserInput::Skill { name, path } => CoreUserInput::Skill { name, path },
UserInput::Mention { name, path } => CoreUserInput::Mention { name, path },
}
}
}
@@ -1753,6 +2017,7 @@ impl From<CoreUserInput> for UserInput {
CoreUserInput::Image { image_url } => UserInput::Image { url: image_url },
CoreUserInput::LocalImage { path } => UserInput::LocalImage { path },
CoreUserInput::Skill { name, path } => UserInput::Skill { name, path },
CoreUserInput::Mention { name, path } => UserInput::Mention { name, path },
_ => unreachable!("unsupported user input variant"),
}
}
@@ -1771,6 +2036,11 @@ pub enum ThreadItem {
AgentMessage { id: String, text: String },
#[serde(rename_all = "camelCase")]
#[ts(rename_all = "camelCase")]
/// EXPERIMENTAL - proposed plan item content. The completed plan item is
/// authoritative and may not match the concatenation of `PlanDelta` text.
Plan { id: String, text: String },
#[serde(rename_all = "camelCase")]
#[ts(rename_all = "camelCase")]
Reasoning {
id: String,
#[serde(default)]
@@ -1853,6 +2123,9 @@ pub enum ThreadItem {
#[serde(rename_all = "camelCase")]
#[ts(rename_all = "camelCase")]
ExitedReviewMode { id: String, review: String },
#[serde(rename_all = "camelCase")]
#[ts(rename_all = "camelCase")]
ContextCompaction { id: String },
}
impl From<CoreTurnItem> for ThreadItem {
@@ -1872,6 +2145,10 @@ impl From<CoreTurnItem> for ThreadItem {
.collect::<String>();
ThreadItem::AgentMessage { id: agent.id, text }
}
CoreTurnItem::Plan(plan) => ThreadItem::Plan {
id: plan.id,
text: plan.text,
},
CoreTurnItem::Reasoning(reasoning) => ThreadItem::Reasoning {
id: reasoning.id,
summary: reasoning.summary_text,
@@ -1881,6 +2158,9 @@ impl From<CoreTurnItem> for ThreadItem {
id: search.id,
query: search.query,
},
CoreTurnItem::ContextCompaction(compaction) => {
ThreadItem::ContextCompaction { id: compaction.id }
}
}
}
}
@@ -2027,6 +2307,16 @@ pub struct ThreadStartedNotification {
pub thread: Thread,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct ThreadNameUpdatedNotification {
pub thread_id: String,
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub thread_name: Option<String>,
}
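A sketch of the resulting notification on the wire (ids and name are placeholders); the method string comes from the `thread/name/updated` mapping registered earlier:

```json
{ "method": "thread/name/updated", "params": { "threadId": "thr_123", "threadName": "Refactor auth" } }
```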
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
@@ -2147,6 +2437,18 @@ pub struct AgentMessageDeltaNotification {
pub delta: String,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
/// EXPERIMENTAL - proposed plan streaming deltas for plan items. Clients should
/// not assume concatenated deltas match the completed plan item content.
pub struct PlanDeltaNotification {
pub thread_id: String,
pub turn_id: String,
pub item_id: String,
pub delta: String,
}
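For illustration, one streamed delta might look like this (ids and text are placeholders); clients concatenate `delta` values per `itemId` but must treat the completed plan item as authoritative:

```json
{ "method": "item/plan/delta", "params": { "threadId": "thr_123", "turnId": "turn_7", "itemId": "item_2", "delta": "1. Audit the login flow" } }
```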
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
@@ -2243,6 +2545,7 @@ pub struct WindowsWorldWritableWarningNotification {
pub failed_scan: bool,
}
/// Deprecated: Use `ContextCompaction` item type instead.
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
@@ -2260,6 +2563,18 @@ pub struct CommandExecutionRequestApprovalParams {
pub item_id: String,
/// Optional explanatory reason (e.g. request for network access).
pub reason: Option<String>,
/// The command to be executed.
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub command: Option<String>,
/// The command's working directory.
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub cwd: Option<PathBuf>,
/// Best-effort parsed command actions for friendly display.
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub command_actions: Option<Vec<CommandAction>>,
/// Optional proposed execpolicy amendment to allow similar commands without prompting.
pub proposed_execpolicy_amendment: Option<ExecPolicyAmendment>,
}
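A sketch of an approval request carrying the new optional fields (all concrete values are placeholders, and fields elided from this hunk are omitted):

```json
{ "method": "item/commandExecution/requestApproval", "id": 40, "params": {
  "threadId": "thr_123",
  "turnId": "turn_7",
  "itemId": "item_3",
  "reason": "requires network access",
  "command": "cargo add serde",
  "cwd": "/Users/me/project",
  "commandActions": [],
  "proposedExecpolicyAmendment": null
} }
```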
@@ -2291,6 +2606,25 @@ pub struct FileChangeRequestApprovalResponse {
pub decision: FileChangeApprovalDecision,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct DynamicToolCallParams {
pub thread_id: String,
pub turn_id: String,
pub call_id: String,
pub tool: String,
pub arguments: JsonValue,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct DynamicToolCallResponse {
pub output: String,
pub success: bool,
}
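A minimal request/response sketch for a dynamic tool call, reusing the hypothetical `lookup_ticket` tool from the docs example (ids and output are placeholders):

```json
{ "method": "item/tool/call", "id": 41, "params": { "threadId": "thr_123", "turnId": "turn_7", "callId": "call_1", "tool": "lookup_ticket", "arguments": { "id": "TICKET-42" } } }
{ "id": 41, "result": { "output": "ticket TICKET-42: open", "success": true } }
```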
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
@@ -2303,11 +2637,15 @@ pub struct ToolRequestUserInputOption {
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
/// EXPERIMENTAL. Represents one request_user_input question and its optional options.
/// EXPERIMENTAL. Represents one request_user_input question and its required options.
pub struct ToolRequestUserInputQuestion {
pub id: String,
pub header: String,
pub question: String,
#[serde(default)]
pub is_other: bool,
#[serde(default)]
pub is_secret: bool,
pub options: Option<Vec<ToolRequestUserInputOption>>,
}
@@ -2327,8 +2665,7 @@ pub struct ToolRequestUserInputParams {
#[ts(export_to = "v2/")]
/// EXPERIMENTAL. Captures a user's answer to a request_user_input question.
pub struct ToolRequestUserInputAnswer {
pub selected: Vec<String>,
pub other: Option<String>,
pub answers: Vec<String>,
}
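After this change, an answer serializes as a plain list rather than the previous `selected`/`other` pair; a sketch with a placeholder value:

```json
{ "answers": ["Use the staging database"] }
```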
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
@@ -2428,6 +2765,24 @@ pub struct DeprecationNoticeNotification {
pub details: Option<String>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct TextPosition {
/// 1-based line number.
pub line: usize,
/// 1-based column number (in Unicode scalar values).
pub column: usize,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct TextRange {
pub start: TextPosition,
pub end: TextPosition,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
@@ -2436,6 +2791,14 @@ pub struct ConfigWarningNotification {
pub summary: String,
/// Optional extra guidance or error details.
pub details: Option<String>,
/// Optional path to the config file that triggered the warning.
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub path: Option<String>,
/// Optional range for the error location inside the config file.
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub range: Option<TextRange>,
}
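Putting the pieces together, a `configWarning` notification with a location might look like this (path, message, and positions are illustrative):

```json
{ "method": "configWarning", "params": {
  "summary": "invalid value for `sandbox_mode`",
  "details": null,
  "path": "/Users/me/project/.codex/config.toml",
  "range": { "start": { "line": 12, "column": 1 }, "end": { "line": 12, "column": 20 } }
} }
```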
#[cfg(test)]
@@ -2447,6 +2810,7 @@ mod tests {
use codex_protocol::items::TurnItem;
use codex_protocol::items::UserMessageItem;
use codex_protocol::items::WebSearchItem;
use codex_protocol::models::WebSearchAction;
use codex_protocol::protocol::NetworkAccess as CoreNetworkAccess;
use codex_protocol::user_input::UserInput as CoreUserInput;
use pretty_assertions::assert_eq;
@@ -2490,6 +2854,10 @@ mod tests {
name: "skill-creator".to_string(),
path: PathBuf::from("/repo/.codex/skills/skill-creator/SKILL.md"),
},
CoreUserInput::Mention {
name: "Demo App".to_string(),
path: "app://demo-app".to_string(),
},
],
});
@@ -2512,6 +2880,10 @@ mod tests {
name: "skill-creator".to_string(),
path: PathBuf::from("/repo/.codex/skills/skill-creator/SKILL.md"),
},
UserInput::Mention {
name: "Demo App".to_string(),
path: "app://demo-app".to_string(),
},
],
}
);
@@ -2554,6 +2926,9 @@ mod tests {
let search_item = TurnItem::WebSearch(WebSearchItem {
id: "search-1".to_string(),
query: "docs".to_string(),
action: WebSearchAction::Search {
query: Some("docs".to_string()),
},
});
assert_eq!(


@@ -842,6 +842,9 @@ impl CodexClient {
turn_id,
item_id,
reason,
command,
cwd,
command_actions,
proposed_execpolicy_amendment,
} = params;
@@ -851,6 +854,17 @@ impl CodexClient {
if let Some(reason) = reason.as_deref() {
println!("< reason: {reason}");
}
if let Some(command) = command.as_deref() {
println!("< command: {command}");
}
if let Some(cwd) = cwd.as_ref() {
println!("< cwd: {}", cwd.display());
}
if let Some(command_actions) = command_actions.as_ref()
&& !command_actions.is_empty()
{
println!("< command actions: {command_actions:?}");
}
if let Some(execpolicy_amendment) = proposed_execpolicy_amendment.as_ref() {
println!("< proposed execpolicy amendment: {execpolicy_amendment:?}");
}


@@ -17,11 +17,13 @@ workspace = true
[dependencies]
anyhow = { workspace = true }
async-trait = { workspace = true }
codex-arg0 = { workspace = true }
codex-common = { workspace = true, features = ["cli"] }
codex-core = { workspace = true }
codex-backend-client = { workspace = true }
codex-file-search = { workspace = true }
codex-chatgpt = { workspace = true }
codex-login = { workspace = true }
codex-protocol = { workspace = true }
codex-app-server-protocol = { workspace = true }
@@ -34,6 +36,7 @@ serde = { workspace = true, features = ["derive"] }
serde_json = { workspace = true }
mcp-types = { workspace = true }
tempfile = { workspace = true }
time = { workspace = true }
toml = { workspace = true }
tokio = { workspace = true, features = [
"io-std",
@@ -48,11 +51,21 @@ uuid = { workspace = true, features = ["serde", "v7"] }
[dev-dependencies]
app_test_support = { workspace = true }
axum = { workspace = true, default-features = false, features = [
"http1",
"json",
"tokio",
] }
base64 = { workspace = true }
codex-execpolicy = { workspace = true }
core_test_support = { workspace = true }
mcp-types = { workspace = true }
os_info = { workspace = true }
pretty_assertions = { workspace = true }
rmcp = { workspace = true, default-features = false, features = [
"server",
"transport-streamable-http-server",
] }
serial_test = { workspace = true }
wiremock = { workspace = true }
shlex = { workspace = true }


@@ -13,6 +13,7 @@
- [Events](#events)
- [Approvals](#approvals)
- [Skills](#skills)
- [Apps](#apps)
- [Auth endpoints](#auth-endpoints)
## Protocol
@@ -79,7 +80,10 @@ Example (from OpenAI's official VSCode extension):
- `thread/fork` — fork an existing thread into a new thread id by copying the stored history; emits `thread/started` and auto-subscribes you to turn/item events for the new thread.
- `thread/list` — page through stored rollouts; supports cursor-based pagination and optional `modelProviders` filtering.
- `thread/loaded/list` — list the thread ids currently loaded in memory.
- `thread/read` — read a stored thread by id without resuming it; optionally include turns via `includeTurns`.
- `thread/archive` — move a thread's rollout file into the archived directory; returns `{}` on success.
- `thread/name/set` — set or update a thread's user-facing name; returns `{}` on success (see the example below). Thread names are not required to be unique; name lookups resolve to the most recently updated thread.
- `thread/unarchive` — move an archived rollout file back into the sessions directory; returns the restored `thread` on success.
- `thread/rollback` — drop the last N turns from the agent's in-memory context and persist a rollback marker in the rollout so future resumes see the pruned history; returns the updated `thread` (with `turns` populated) on success.
- `turn/start` — add user input to a thread and begin Codex generation; responds with the initial `turn` object and streams `turn/started`, `item/*`, and `turn/completed` notifications.
- `turn/interrupt` — request cancellation of an in-flight turn by `(thread_id, turn_id)`; success is an empty `{}` response and the turn finishes with `status: "interrupted"`.
@@ -88,6 +92,7 @@ Example (from OpenAI's official VSCode extension):
- `model/list` — list available models (with reasoning effort options).
- `collaborationMode/list` — list available collaboration mode presets (experimental, no pagination).
- `skills/list` — list skills for one or more `cwd` values (optional `forceReload`).
- `app/list` — list available apps.
- `skills/config/write` — write user-level skill config by path.
- `mcpServer/oauth/login` — start an OAuth login for a configured MCP server; returns an `authorization_url` and later emits `mcpServer/oauthLogin/completed` once the browser flow finishes.
- `tool/requestUserInput` — prompt the user with 1–3 short questions for a tool call and return their answers (experimental).
@@ -112,6 +117,20 @@ Start a fresh thread when you need a new Codex conversation.
"cwd": "/Users/me/project",
"approvalPolicy": "never",
"sandbox": "workspaceWrite",
"personality": "friendly",
"dynamicTools": [
{
"name": "lookup_ticket",
"description": "Fetch a ticket by id",
"inputSchema": {
"type": "object",
"properties": {
"id": { "type": "string" }
},
"required": ["id"]
}
}
]
} }
{ "id": 10, "result": {
"thread": {
@@ -124,10 +143,13 @@ Start a fresh thread when you need a new Codex conversation.
{ "method": "thread/started", "params": { "thread": { } } }
```
To continue a stored session, call `thread/resume` with the `thread.id` you previously recorded. The response shape matches `thread/start`, and no additional notifications are emitted:
To continue a stored session, call `thread/resume` with the `thread.id` you previously recorded. The response shape matches `thread/start`, and no additional notifications are emitted. You can also pass the same configuration overrides supported by `thread/start`, such as `personality`:
```json
{ "method": "thread/resume", "id": 11, "params": { "threadId": "thr_123" } }
{ "method": "thread/resume", "id": 11, "params": {
"threadId": "thr_123",
"personality": "friendly"
} }
{ "id": 11, "result": { "thread": { "id": "thr_123", } } }
```
@@ -147,6 +169,8 @@ To branch from a stored session, call `thread/fork` with the `thread.id`. This c
- `limit` — server defaults to a reasonable page size if unset.
- `sortKey` — `created_at` (default) or `updated_at`.
- `modelProviders` — restrict results to specific providers; unset, null, or an empty array will include all providers.
- `sourceKinds` — restrict results to specific sources; omit or pass `[]` for interactive sessions only (`cli`, `vscode`).
- `archived` — when `true`, list archived threads only. When `false` or `null`, list non-archived threads (default).
Example:
@@ -178,6 +202,20 @@ When `nextCursor` is `null`, you've reached the final page.
} }
```
### Example: Read a thread
Use `thread/read` to fetch a stored thread by id without resuming it. Pass `includeTurns` when you want the rollout history loaded into `thread.turns`.
```json
{ "method": "thread/read", "id": 22, "params": { "threadId": "thr_123" } }
{ "id": 22, "result": { "thread": { "id": "thr_123", "turns": [] } } }
```
```json
{ "method": "thread/read", "id": 23, "params": { "threadId": "thr_123", "includeTurns": true } }
{ "id": 23, "result": { "thread": { "id": "thr_123", "turns": [ ... ] } } }
```
### Example: Archive a thread
Use `thread/archive` to move the persisted rollout (stored as a JSONL file on disk) into the archived sessions directory.
@@ -187,7 +225,16 @@ Use `thread/archive` to move the persisted rollout (stored as a JSONL file on di
{ "id": 21, "result": {} }
```
An archived thread will not appear in future calls to `thread/list`.
An archived thread will not appear in `thread/list` unless `archived` is set to `true`.
### Example: Unarchive a thread
Use `thread/unarchive` to move an archived rollout back into the sessions directory.
```json
{ "method": "thread/unarchive", "id": 24, "params": { "threadId": "thr_b" } }
{ "id": 24, "result": { "thread": { "id": "thr_b" } } }
```
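
### Example: Set a thread name

Use `thread/name/set` to set or update a thread's user-facing name. The request id and name below are illustrative:

```json
{ "method": "thread/name/set", "id": 25, "params": { "threadId": "thr_123", "name": "Refactor auth" } }
{ "id": 25, "result": {} }
```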
### Example: Start a turn (send user input)
@@ -214,6 +261,7 @@ You can optionally specify config overrides on the new turn. If specified, these
"model": "gpt-5.1-codex",
"effort": "medium",
"summary": "concise",
"personality": "friendly",
// Optional JSON Schema to constrain the final assistant message for this turn.
"outputSchema": {
"type": "object",
@@ -250,6 +298,26 @@ Invoke a skill explicitly by including `$<skill-name>` in the text input and add
} } }
```
### Example: Start a turn (invoke an app)
Invoke an app by including `$<app-slug>` in the text input and adding a `mention` input item with the app id in `app://<connector-id>` form.
```json
{ "method": "turn/start", "id": 34, "params": {
"threadId": "thr_123",
"input": [
{ "type": "text", "text": "$demo-app Summarize the latest updates." },
{ "type": "mention", "name": "Demo App", "path": "app://demo-app" }
]
} }
{ "id": 34, "result": { "turn": {
"id": "turn_458",
"status": "inProgress",
"items": [],
"error": null
} } }
```
### Example: Interrupt an active turn
You can cancel a running Turn with `turn/interrupt`.
@@ -376,6 +444,7 @@ Today both notifications carry an empty `items` array even when item events were
- `userMessage` — `{id, content}` where `content` is a list of user inputs (`text`, `image`, or `localImage`).
- `agentMessage` — `{id, text}` containing the accumulated agent reply.
- `plan` — `{id, text}` emitted for plan-mode turns; plan text can stream via `item/plan/delta` (experimental).
- `reasoning` — `{id, summary, content}` where `summary` holds streamed reasoning summaries (applicable for most OpenAI models) and `content` holds raw reasoning blocks (applicable for e.g. open source models).
- `commandExecution` — `{id, command, cwd, status, commandActions, aggregatedOutput?, exitCode?, durationMs?}` for sandboxed commands; `status` is `inProgress`, `completed`, `failed`, or `declined`.
- `fileChange` — `{id, changes, status}` describing proposed edits; `changes` list `{path, kind, diff}` and `status` is `inProgress`, `completed`, `failed`, or `declined`.
@@ -385,7 +454,8 @@ Today both notifications carry an empty `items` array even when item events were
- `imageView` — `{id, path}` emitted when the agent invokes the image viewer tool.
- `enteredReviewMode` — `{id, review}` sent when the reviewer starts; `review` is a short user-facing label such as `"current changes"` or the requested target description.
- `exitedReviewMode` — `{id, review}` emitted when the reviewer finishes; `review` is the full plain-text review (usually, overall notes plus bullet point findings).
- `compacted` — `{threadId, turnId}` when codex compacts the conversation history. This can happen automatically.
- `contextCompaction` — `{id}` emitted when codex compacts the conversation history. This can happen automatically.
- `compacted` — `{threadId, turnId}` when codex compacts the conversation history. This can happen automatically. **Deprecated:** Use `contextCompaction` instead.
All items emit two shared lifecycle events:
@@ -398,6 +468,10 @@ There are additional item-specific events:
- `item/agentMessage/delta` — appends streamed text for the agent message; concatenate `delta` values for the same `itemId` in order to reconstruct the full reply.
#### plan
- `item/plan/delta` — streams proposed plan content for plan items (experimental); concatenate `delta` values for the same plan `itemId`. These deltas correspond to the `<proposed_plan>` block.
#### reasoning
- `item/reasoning/summaryTextDelta` — streams readable reasoning summaries; `summaryIndex` increments when a new summary section opens.
@@ -445,7 +519,7 @@ Certain actions (shell commands or modifying files) may require explicit user ap
Order of messages:
1. `item/started` — shows the pending `commandExecution` item with `command`, `cwd`, and other fields so you can render the proposed action.
2. `item/commandExecution/requestApproval` (request) — carries the same `itemId`, `threadId`, `turnId`, optionally `reason` or `risk`, plus `parsedCmd` for friendly display.
2. `item/commandExecution/requestApproval` (request) — carries the same `itemId`, `threadId`, `turnId`, optionally `reason`, plus `command`, `cwd`, and `commandActions` for friendly display.
3. Client response — `{ "decision": "accept", "acceptSettings": { "forSession": false } }` or `{ "decision": "decline" }`.
4. `item/completed` — final `commandExecution` item with `status: "completed" | "failed" | "declined"` and execution output. Render this as the authoritative result.
@@ -504,7 +578,19 @@ Use `skills/list` to fetch the available skills (optionally scoped by `cwds`, wi
"data": [{
"cwd": "/Users/me/project",
"skills": [
{ "name": "skill-creator", "description": "Create or update a Codex skill", "enabled": true }
{
"name": "skill-creator",
"description": "Create or update a Codex skill",
"enabled": true,
"interface": {
"displayName": "Skill Creator",
"shortDescription": "Create or update a Codex skill",
"iconSmall": "icon.svg",
"iconLarge": "icon-large.svg",
"brandColor": "#111111",
"defaultPrompt": "Add a new skill for triaging flaky CI."
}
}
],
"errors": []
}]
@@ -524,14 +610,72 @@ To enable or disable a skill by path:
}
```
## Apps
Use `app/list` to fetch available apps (connectors). Each entry includes metadata like the app `id`, display `name`, `installUrl`, and whether it is currently accessible.
```json
{ "method": "app/list", "id": 50, "params": {
"cursor": null,
"limit": 50
} }
{ "id": 50, "result": {
"data": [
{
"id": "demo-app",
"name": "Demo App",
"description": "Example connector for documentation.",
"logoUrl": "https://example.com/demo-app.png",
"logoUrlDark": null,
"distributionChannel": null,
"installUrl": "https://chatgpt.com/apps/demo-app/demo-app",
"isAccessible": true
}
],
"nextCursor": null
} }
```
Invoke an app by inserting `$<app-slug>` in the text input. The slug is derived from the app name and lowercased with non-alphanumeric characters replaced by `-` (for example, "Demo App" becomes `$demo-app`). Add a `mention` input item (recommended) so the server uses the exact `app://<connector-id>` path rather than guessing by name.
Example:
```
$demo-app Pull the latest updates from the team.
```
```json
{
"method": "turn/start",
"id": 51,
"params": {
"threadId": "thread-1",
"input": [
{
"type": "text",
"text": "$demo-app Pull the latest updates from the team."
},
{ "type": "mention", "name": "Demo App", "path": "app://demo-app" }
]
}
}
```
## Auth endpoints
The JSON-RPC auth/account surface exposes request/response methods plus server-initiated notifications (no `id`). Use these to determine auth state, start or cancel logins, logout, and inspect ChatGPT rate limits.
### Authentication modes
Codex supports these authentication modes. The current mode is surfaced in `account/updated` (`authMode`) and can be inferred from `account/read`.
- **API key (`apiKey`)**: Caller supplies an OpenAI API key via `account/login/start` with `type: "apiKey"`. The API key is saved and used for API requests.
- **ChatGPT managed (`chatgpt`)** (recommended): Codex owns the ChatGPT OAuth flow and refresh tokens. Start via `account/login/start` with `type: "chatgpt"`; Codex persists tokens to disk and refreshes them automatically.
### API Overview
- `account/read` — fetch current account info; optionally refresh tokens.
- `account/login/start` — begin login (`apiKey` or `chatgpt`).
- `account/login/start` — begin login (`apiKey`, `chatgpt`).
- `account/login/completed` (notify) — emitted when a login attempt finishes (success or error).
- `account/login/cancel` — cancel a pending ChatGPT login by `loginId`.
- `account/logout` — sign out; triggers `account/updated`.
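
A minimal `account/read` request; the response shape is omitted here:

```json
{ "method": "account/read", "id": 7, "params": { "refreshToken": false } }
```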


@@ -25,6 +25,7 @@ use codex_app_server_protocol::CommandExecutionRequestApprovalResponse;
use codex_app_server_protocol::CommandExecutionStatus;
use codex_app_server_protocol::ContextCompactedNotification;
use codex_app_server_protocol::DeprecationNoticeNotification;
use codex_app_server_protocol::DynamicToolCallParams;
use codex_app_server_protocol::ErrorNotification;
use codex_app_server_protocol::ExecCommandApprovalParams;
use codex_app_server_protocol::ExecCommandApprovalResponse;
@@ -43,6 +44,7 @@ use codex_app_server_protocol::McpToolCallResult;
use codex_app_server_protocol::McpToolCallStatus;
use codex_app_server_protocol::PatchApplyStatus;
use codex_app_server_protocol::PatchChangeKind as V2PatchChangeKind;
use codex_app_server_protocol::PlanDeltaNotification;
use codex_app_server_protocol::RawResponseItemCompletedNotification;
use codex_app_server_protocol::ReasoningSummaryPartAddedNotification;
use codex_app_server_protocol::ReasoningSummaryTextDeltaNotification;
@@ -51,6 +53,7 @@ use codex_app_server_protocol::ServerNotification;
use codex_app_server_protocol::ServerRequestPayload;
use codex_app_server_protocol::TerminalInteractionNotification;
use codex_app_server_protocol::ThreadItem;
use codex_app_server_protocol::ThreadNameUpdatedNotification;
use codex_app_server_protocol::ThreadRollbackResponse;
use codex_app_server_protocol::ThreadTokenUsage;
use codex_app_server_protocol::ThreadTokenUsageUpdatedNotification;
@@ -85,6 +88,7 @@ use codex_core::protocol::TurnDiffEvent;
use codex_core::review_format::format_review_findings_block;
use codex_core::review_prompts;
use codex_protocol::ThreadId;
use codex_protocol::dynamic_tools::DynamicToolResponse as CoreDynamicToolResponse;
use codex_protocol::plan_tool::UpdatePlanArgs;
use codex_protocol::protocol::ReviewOutputEvent;
use codex_protocol::request_user_input::RequestUserInputAnswer as CoreRequestUserInputAnswer;
@@ -115,6 +119,7 @@ pub(crate) async fn apply_bespoke_event_handling(
msg,
} = event;
match msg {
EventMsg::TurnStarted(_) => {}
EventMsg::TurnComplete(_ev) => {
handle_turn_complete(
conversation_id,
@@ -241,6 +246,9 @@ pub(crate) async fn apply_bespoke_event_handling(
// and emit the corresponding EventMsg, we repurpose the call_id as the item_id.
item_id: item_id.clone(),
reason,
command: Some(command_string.clone()),
cwd: Some(cwd.clone()),
command_actions: Some(command_actions.clone()),
proposed_execpolicy_amendment: proposed_execpolicy_amendment_v2,
};
let rx = outgoing
@@ -273,6 +281,8 @@ pub(crate) async fn apply_bespoke_event_handling(
id: question.id,
header: question.header,
question: question.question,
is_other: question.is_other,
is_secret: question.is_secret,
options: question.options.map(|options| {
options
.into_iter()
@@ -315,6 +325,40 @@ pub(crate) async fn apply_bespoke_event_handling(
}
}
}
EventMsg::DynamicToolCallRequest(request) => {
if matches!(api_version, ApiVersion::V2) {
let call_id = request.call_id;
let params = DynamicToolCallParams {
thread_id: conversation_id.to_string(),
turn_id: request.turn_id,
call_id: call_id.clone(),
tool: request.tool,
arguments: request.arguments,
};
let rx = outgoing
.send_request(ServerRequestPayload::DynamicToolCall(params))
.await;
tokio::spawn(async move {
crate::dynamic_tools::on_call_response(call_id, rx, conversation).await;
});
} else {
error!(
"dynamic tool calls are only supported on api v2 (call_id: {})",
request.call_id
);
let call_id = request.call_id;
let _ = conversation
.submit(Op::DynamicToolResponse {
id: call_id.clone(),
response: CoreDynamicToolResponse {
call_id,
output: "dynamic tool calls require api v2".to_string(),
success: false,
},
})
.await;
}
}
// TODO(celia): properly construct McpToolCall TurnItem in core.
EventMsg::McpToolCallBegin(begin_event) => {
let notification = construct_mcp_tool_call_notification(
@@ -551,14 +595,27 @@ pub(crate) async fn apply_bespoke_event_handling(
.await;
}
EventMsg::AgentMessageContentDelta(event) => {
let codex_protocol::protocol::AgentMessageContentDeltaEvent { item_id, delta, .. } =
event;
let notification = AgentMessageDeltaNotification {
thread_id: conversation_id.to_string(),
turn_id: event_turn_id.clone(),
item_id,
delta,
};
outgoing
.send_server_notification(ServerNotification::AgentMessageDelta(notification))
.await;
}
EventMsg::PlanDelta(event) => {
let notification = PlanDeltaNotification {
thread_id: conversation_id.to_string(),
turn_id: event_turn_id.clone(),
item_id: event.item_id,
delta: event.delta,
};
outgoing
.send_server_notification(ServerNotification::AgentMessageDelta(notification))
.send_server_notification(ServerNotification::PlanDelta(notification))
.await;
}
EventMsg::ContextCompacted(..) => {
@@ -1003,7 +1060,15 @@ pub(crate) async fn apply_bespoke_event_handling(
};
if let Some(request_id) = pending {
let rollout_path = conversation.rollout_path();
let Some(rollout_path) = conversation.rollout_path() else {
let error = JSONRPCErrorError {
code: INVALID_REQUEST_ERROR_CODE,
message: "thread has no persisted rollout".to_string(),
data: None,
};
outgoing.send_error(request_id, error).await;
return;
};
let response = match read_summary_from_rollout(
rollout_path.as_path(),
fallback_model_provider.as_str(),
@@ -1048,6 +1113,17 @@ pub(crate) async fn apply_bespoke_event_handling(
outgoing.send_response(request_id, response).await;
}
}
EventMsg::ThreadNameUpdated(thread_name_event) => {
if let ApiVersion::V2 = api_version {
let notification = ThreadNameUpdatedNotification {
thread_id: thread_name_event.thread_id.to_string(),
thread_name: thread_name_event.thread_name,
};
outgoing
.send_server_notification(ServerNotification::ThreadNameUpdated(notification))
.await;
}
}
EventMsg::TurnDiff(turn_diff_event) => {
handle_turn_diff(
conversation_id,
@@ -1099,6 +1175,7 @@ async fn handle_turn_plan_update(
api_version: ApiVersion,
outgoing: &OutgoingMessageSender,
) {
// `update_plan` is a todo/checklist tool; it is not related to plan-mode updates
if let ApiVersion::V2 = api_version {
let notification = TurnPlanUpdatedNotification {
thread_id: conversation_id.to_string(),
@@ -1445,8 +1522,7 @@ async fn on_request_user_input_response(
(
id,
CoreRequestUserInputAnswer {
selected: answer.selected,
other: answer.other,
answers: answer.answers,
},
)
})

File diff suppressed because it is too large.


@@ -136,6 +136,7 @@ mod tests {
CoreSandboxModeRequirement::ExternalSandbox,
]),
mcp_servers: None,
rules: None,
};
let mapped = map_requirements_toml_to_api(requirements);


@@ -0,0 +1,58 @@
use codex_app_server_protocol::DynamicToolCallResponse;
use codex_core::CodexThread;
use codex_protocol::dynamic_tools::DynamicToolResponse as CoreDynamicToolResponse;
use codex_protocol::protocol::Op;
use std::sync::Arc;
use tokio::sync::oneshot;
use tracing::error;
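/// Await the client's reply to a dynamic tool call request and forward it to
/// core as a `DynamicToolResponse`. If the request channel is dropped or the
/// payload fails to deserialize, a failed response is submitted instead.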
pub(crate) async fn on_call_response(
call_id: String,
receiver: oneshot::Receiver<serde_json::Value>,
conversation: Arc<CodexThread>,
) {
let response = receiver.await;
let value = match response {
Ok(value) => value,
Err(err) => {
error!("dynamic tool call request failed: {err:?}");
let fallback = CoreDynamicToolResponse {
call_id: call_id.clone(),
output: "dynamic tool request failed".to_string(),
success: false,
};
if let Err(err) = conversation
.submit(Op::DynamicToolResponse {
id: call_id.clone(),
response: fallback,
})
.await
{
error!("failed to submit DynamicToolResponse: {err}");
}
return;
}
};
let response = serde_json::from_value::<DynamicToolCallResponse>(value).unwrap_or_else(|err| {
error!("failed to deserialize DynamicToolCallResponse: {err}");
DynamicToolCallResponse {
output: "dynamic tool response was invalid".to_string(),
success: false,
}
});
let response = CoreDynamicToolResponse {
call_id: call_id.clone(),
output: response.output,
success: response.success,
};
if let Err(err) = conversation
.submit(Op::DynamicToolResponse {
id: call_id,
response,
})
.await
{
error!("failed to submit DynamicToolResponse: {err}");
}
}


@@ -0,0 +1,155 @@
use codex_app_server_protocol::ThreadSourceKind;
use codex_core::INTERACTIVE_SESSION_SOURCES;
use codex_protocol::protocol::SessionSource as CoreSessionSource;
use codex_protocol::protocol::SubAgentSource as CoreSubAgentSource;
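/// Translate optional `ThreadSourceKind` filters into core session sources.
/// Returns the sources to query plus the original kinds when results must be
/// narrowed afterwards; kinds without a direct core mapping (exec, app-server,
/// the sub-agent variants, unknown) yield an empty source list and defer all
/// filtering to `source_kind_matches`.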
pub(crate) fn compute_source_filters(
source_kinds: Option<Vec<ThreadSourceKind>>,
) -> (Vec<CoreSessionSource>, Option<Vec<ThreadSourceKind>>) {
let Some(source_kinds) = source_kinds else {
return (INTERACTIVE_SESSION_SOURCES.to_vec(), None);
};
if source_kinds.is_empty() {
return (INTERACTIVE_SESSION_SOURCES.to_vec(), None);
}
let requires_post_filter = source_kinds.iter().any(|kind| {
matches!(
kind,
ThreadSourceKind::Exec
| ThreadSourceKind::AppServer
| ThreadSourceKind::SubAgent
| ThreadSourceKind::SubAgentReview
| ThreadSourceKind::SubAgentCompact
| ThreadSourceKind::SubAgentThreadSpawn
| ThreadSourceKind::SubAgentOther
| ThreadSourceKind::Unknown
)
});
if requires_post_filter {
(Vec::new(), Some(source_kinds))
} else {
let interactive_sources = source_kinds
.iter()
.filter_map(|kind| match kind {
ThreadSourceKind::Cli => Some(CoreSessionSource::Cli),
ThreadSourceKind::VsCode => Some(CoreSessionSource::VSCode),
ThreadSourceKind::Exec
| ThreadSourceKind::AppServer
| ThreadSourceKind::SubAgent
| ThreadSourceKind::SubAgentReview
| ThreadSourceKind::SubAgentCompact
| ThreadSourceKind::SubAgentThreadSpawn
| ThreadSourceKind::SubAgentOther
| ThreadSourceKind::Unknown => None,
})
.collect::<Vec<_>>();
(interactive_sources, Some(source_kinds))
}
}
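/// Returns true when `source` matches at least one of the requested kinds,
/// distinguishing the individual sub-agent variants such as review, compact,
/// and thread-spawn.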
pub(crate) fn source_kind_matches(source: &CoreSessionSource, filter: &[ThreadSourceKind]) -> bool {
filter.iter().any(|kind| match kind {
ThreadSourceKind::Cli => matches!(source, CoreSessionSource::Cli),
ThreadSourceKind::VsCode => matches!(source, CoreSessionSource::VSCode),
ThreadSourceKind::Exec => matches!(source, CoreSessionSource::Exec),
ThreadSourceKind::AppServer => matches!(source, CoreSessionSource::Mcp),
ThreadSourceKind::SubAgent => matches!(source, CoreSessionSource::SubAgent(_)),
ThreadSourceKind::SubAgentReview => {
matches!(
source,
CoreSessionSource::SubAgent(CoreSubAgentSource::Review)
)
}
ThreadSourceKind::SubAgentCompact => {
matches!(
source,
CoreSessionSource::SubAgent(CoreSubAgentSource::Compact)
)
}
ThreadSourceKind::SubAgentThreadSpawn => matches!(
source,
CoreSessionSource::SubAgent(CoreSubAgentSource::ThreadSpawn { .. })
),
ThreadSourceKind::SubAgentOther => matches!(
source,
CoreSessionSource::SubAgent(CoreSubAgentSource::Other(_))
),
ThreadSourceKind::Unknown => matches!(source, CoreSessionSource::Unknown),
})
}
#[cfg(test)]
mod tests {
use super::*;
use codex_protocol::ThreadId;
use pretty_assertions::assert_eq;
use uuid::Uuid;
#[test]
fn compute_source_filters_defaults_to_interactive_sources() {
let (allowed_sources, filter) = compute_source_filters(None);
assert_eq!(allowed_sources, INTERACTIVE_SESSION_SOURCES.to_vec());
assert_eq!(filter, None);
}
#[test]
fn compute_source_filters_empty_means_interactive_sources() {
let (allowed_sources, filter) = compute_source_filters(Some(Vec::new()));
assert_eq!(allowed_sources, INTERACTIVE_SESSION_SOURCES.to_vec());
assert_eq!(filter, None);
}
#[test]
fn compute_source_filters_interactive_only_skips_post_filtering() {
let source_kinds = vec![ThreadSourceKind::Cli, ThreadSourceKind::VsCode];
let (allowed_sources, filter) = compute_source_filters(Some(source_kinds.clone()));
assert_eq!(
allowed_sources,
vec![CoreSessionSource::Cli, CoreSessionSource::VSCode]
);
assert_eq!(filter, Some(source_kinds));
}
#[test]
fn compute_source_filters_subagent_variant_requires_post_filtering() {
let source_kinds = vec![ThreadSourceKind::SubAgentReview];
let (allowed_sources, filter) = compute_source_filters(Some(source_kinds.clone()));
assert_eq!(allowed_sources, Vec::new());
assert_eq!(filter, Some(source_kinds));
}
#[test]
fn source_kind_matches_distinguishes_subagent_variants() {
let parent_thread_id =
ThreadId::from_string(&Uuid::new_v4().to_string()).expect("valid thread id");
let review = CoreSessionSource::SubAgent(CoreSubAgentSource::Review);
let spawn = CoreSessionSource::SubAgent(CoreSubAgentSource::ThreadSpawn {
parent_thread_id,
depth: 1,
});
assert!(source_kind_matches(
&review,
&[ThreadSourceKind::SubAgentReview]
));
assert!(!source_kind_matches(
&review,
&[ThreadSourceKind::SubAgentThreadSpawn]
));
assert!(source_kind_matches(
&spawn,
&[ThreadSourceKind::SubAgentThreadSpawn]
));
assert!(!source_kind_matches(
&spawn,
&[ThreadSourceKind::SubAgentReview]
));
}
}


@@ -3,6 +3,7 @@
use codex_common::CliConfigOverrides;
use codex_core::config::Config;
use codex_core::config::ConfigBuilder;
use codex_core::config_loader::ConfigLayerStackOrdering;
use codex_core::config_loader::LoaderOverrides;
use std::io::ErrorKind;
use std::io::Result as IoResult;
@@ -11,9 +12,15 @@ use std::path::PathBuf;
use crate::message_processor::MessageProcessor;
use crate::outgoing_message::OutgoingMessage;
use crate::outgoing_message::OutgoingMessageSender;
use codex_app_server_protocol::ConfigLayerSource;
use codex_app_server_protocol::ConfigWarningNotification;
use codex_app_server_protocol::JSONRPCMessage;
use codex_app_server_protocol::TextPosition as AppTextPosition;
use codex_app_server_protocol::TextRange as AppTextRange;
use codex_core::ExecPolicyError;
use codex_core::check_execpolicy_for_warnings;
use codex_core::config_loader::ConfigLoadError;
use codex_core::config_loader::TextRange as CoreTextRange;
use codex_feedback::CodexFeedback;
use tokio::io::AsyncBufReadExt;
use tokio::io::AsyncWriteExt;
@@ -33,7 +40,9 @@ use tracing_subscriber::util::SubscriberInitExt;
mod bespoke_event_handling;
mod codex_message_processor;
mod config_api;
mod dynamic_tools;
mod error_code;
mod filters;
mod fuzzy_file_search;
mod message_processor;
mod models;
@@ -44,6 +53,116 @@ mod outgoing_message;
/// plenty for an interactive CLI.
const CHANNEL_CAPACITY: usize = 128;
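/// Build a `ConfigWarningNotification` from an I/O error, attaching the
/// offending config file path and text range when the error wraps a
/// `ConfigLoadError`.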
fn config_warning_from_error(
summary: impl Into<String>,
err: &std::io::Error,
) -> ConfigWarningNotification {
let (path, range) = match config_error_location(err) {
Some((path, range)) => (Some(path), Some(range)),
None => (None, None),
};
ConfigWarningNotification {
summary: summary.into(),
details: Some(err.to_string()),
path,
range,
}
}
fn config_error_location(err: &std::io::Error) -> Option<(String, AppTextRange)> {
err.get_ref()
.and_then(|err| err.downcast_ref::<ConfigLoadError>())
.map(|err| {
let config_error = err.config_error();
(
config_error.path.to_string_lossy().to_string(),
app_text_range(&config_error.range),
)
})
}
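/// Extract the policy file path and text range from an exec-policy parse
/// error, when the parser reports a source location.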
fn exec_policy_warning_location(err: &ExecPolicyError) -> (Option<String>, Option<AppTextRange>) {
match err {
ExecPolicyError::ParsePolicy { path, source } => {
if let Some(location) = source.location() {
let range = AppTextRange {
start: AppTextPosition {
line: location.range.start.line,
column: location.range.start.column,
},
end: AppTextPosition {
line: location.range.end.line,
column: location.range.end.column,
},
};
return (Some(location.path), Some(range));
}
(Some(path.clone()), None)
}
_ => (None, None),
}
}
fn app_text_range(range: &CoreTextRange) -> AppTextRange {
AppTextRange {
start: AppTextPosition {
line: range.start.line,
column: range.start.column,
},
end: AppTextPosition {
line: range.end.line,
column: range.end.column,
},
}
}
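/// Collect disabled project `.codex` layers into a single warning listing each
/// folder alongside the reason its config.toml was not applied. Returns `None`
/// when every project layer is enabled.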
fn project_config_warning(config: &Config) -> Option<ConfigWarningNotification> {
let mut disabled_folders = Vec::new();
for layer in config
.config_layer_stack
.get_layers(ConfigLayerStackOrdering::LowestPrecedenceFirst, true)
{
if !matches!(layer.name, ConfigLayerSource::Project { .. })
|| layer.disabled_reason.is_none()
{
continue;
}
if let ConfigLayerSource::Project { dot_codex_folder } = &layer.name {
disabled_folders.push((
dot_codex_folder.as_path().display().to_string(),
layer
.disabled_reason
.as_ref()
.map(ToString::to_string)
.unwrap_or_else(|| "config.toml is disabled.".to_string()),
));
}
}
if disabled_folders.is_empty() {
return None;
}
let mut message = concat!(
"Project config.toml files are disabled in the following folders. ",
"Settings in those files are ignored, but skills and exec policies still load.\n",
)
.to_string();
for (index, (folder, reason)) in disabled_folders.iter().enumerate() {
let display_index = index + 1;
message.push_str(&format!(" {display_index}. {folder}\n"));
message.push_str(&format!(" {reason}\n"));
}
Some(ConfigWarningNotification {
summary: message,
details: None,
path: None,
range: None,
})
}
pub async fn run_main(
codex_linux_sandbox_exe: Option<PathBuf>,
cli_config_overrides: CliConfigOverrides,
@@ -95,10 +214,7 @@ pub async fn run_main(
{
Ok(config) => config,
Err(err) => {
let message = ConfigWarningNotification {
summary: "Invalid configuration; using defaults.".to_string(),
details: Some(err.to_string()),
};
let message = config_warning_from_error("Invalid configuration; using defaults.", &err);
config_warnings.push(message);
Config::load_default_with_cli_overrides(cli_kv_overrides.clone()).map_err(|e| {
std::io::Error::new(
@@ -112,13 +228,20 @@ pub async fn run_main(
if let Ok(Some(err)) =
check_execpolicy_for_warnings(&config.features, &config.config_layer_stack).await
{
let (path, range) = exec_policy_warning_location(&err);
let message = ConfigWarningNotification {
summary: "Error parsing rules; custom rules not applied.".to_string(),
details: Some(err.to_string()),
path,
range,
};
config_warnings.push(message);
}
if let Some(warning) = project_config_warning(&config) {
config_warnings.push(warning);
}
let feedback = CodexFeedback::new();
let otel = codex_core::otel_init::build_provider(
@@ -189,7 +312,7 @@ pub async fn run_main(
JSONRPCMessage::Request(r) => processor.process_request(r).await,
JSONRPCMessage::Response(r) => processor.process_response(r).await,
JSONRPCMessage::Notification(n) => processor.process_notification(n).await,
JSONRPCMessage::Error(e) => processor.process_error(e),
JSONRPCMessage::Error(e) => processor.process_error(e).await,
}
}
created = thread_created_rx.recv(), if listen_for_threads => {


@@ -5,6 +5,10 @@ use crate::codex_message_processor::CodexMessageProcessor;
use crate::config_api::ConfigApi;
use crate::error_code::INVALID_REQUEST_ERROR_CODE;
use crate::outgoing_message::OutgoingMessageSender;
use async_trait::async_trait;
use codex_app_server_protocol::ChatgptAuthTokensRefreshParams;
use codex_app_server_protocol::ChatgptAuthTokensRefreshReason;
use codex_app_server_protocol::ChatgptAuthTokensRefreshResponse;
use codex_app_server_protocol::ClientInfo;
use codex_app_server_protocol::ClientRequest;
use codex_app_server_protocol::ConfigBatchWriteParams;
@@ -19,8 +23,13 @@ use codex_app_server_protocol::JSONRPCRequest;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::ServerNotification;
use codex_app_server_protocol::ServerRequestPayload;
use codex_core::AuthManager;
use codex_core::ThreadManager;
use codex_core::auth::ExternalAuthRefreshContext;
use codex_core::auth::ExternalAuthRefreshReason;
use codex_core::auth::ExternalAuthRefresher;
use codex_core::auth::ExternalAuthTokens;
use codex_core::config::Config;
use codex_core::config_loader::LoaderOverrides;
use codex_core::default_client::SetOriginatorError;
@@ -31,8 +40,64 @@ use codex_feedback::CodexFeedback;
use codex_protocol::ThreadId;
use codex_protocol::protocol::SessionSource;
use tokio::sync::broadcast;
use tokio::time::Duration;
use tokio::time::timeout;
use toml::Value as TomlValue;
const EXTERNAL_AUTH_REFRESH_TIMEOUT: Duration = Duration::from_secs(10);
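/// Bridges core's `ExternalAuthRefresher` hook to the connected client by
/// issuing an `account/chatgptAuthTokens/refresh` JSON-RPC request and mapping
/// the reply back into `ExternalAuthTokens`, cancelling the request if no
/// answer arrives within `EXTERNAL_AUTH_REFRESH_TIMEOUT`.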
#[derive(Clone)]
struct ExternalAuthRefreshBridge {
outgoing: Arc<OutgoingMessageSender>,
}
impl ExternalAuthRefreshBridge {
fn map_reason(reason: ExternalAuthRefreshReason) -> ChatgptAuthTokensRefreshReason {
match reason {
ExternalAuthRefreshReason::Unauthorized => ChatgptAuthTokensRefreshReason::Unauthorized,
}
}
}
#[async_trait]
impl ExternalAuthRefresher for ExternalAuthRefreshBridge {
async fn refresh(
&self,
context: ExternalAuthRefreshContext,
) -> std::io::Result<ExternalAuthTokens> {
let params = ChatgptAuthTokensRefreshParams {
reason: Self::map_reason(context.reason),
previous_account_id: context.previous_account_id,
};
let (request_id, rx) = self
.outgoing
.send_request_with_id(ServerRequestPayload::ChatgptAuthTokensRefresh(params))
.await;
let result = match timeout(EXTERNAL_AUTH_REFRESH_TIMEOUT, rx).await {
Ok(result) => result.map_err(|err| {
std::io::Error::other(format!("auth refresh request canceled: {err}"))
})?,
Err(_) => {
let _canceled = self.outgoing.cancel_request(&request_id).await;
return Err(std::io::Error::other(format!(
"auth refresh request timed out after {}s",
EXTERNAL_AUTH_REFRESH_TIMEOUT.as_secs()
)));
}
};
let response: ChatgptAuthTokensRefreshResponse =
serde_json::from_value(result).map_err(std::io::Error::other)?;
Ok(ExternalAuthTokens {
access_token: response.access_token,
id_token: response.id_token,
})
}
}
pub(crate) struct MessageProcessor {
outgoing: Arc<OutgoingMessageSender>,
codex_message_processor: CodexMessageProcessor,
@@ -59,6 +124,10 @@ impl MessageProcessor {
false,
config.cli_auth_credentials_store_mode,
);
auth_manager.set_forced_chatgpt_workspace_id(config.forced_chatgpt_workspace_id.clone());
auth_manager.set_external_auth_refresher(Arc::new(ExternalAuthRefreshBridge {
outgoing: outgoing.clone(),
}));
let thread_manager = Arc::new(ThreadManager::new(
config.codex_home.clone(),
auth_manager.clone(),
@@ -236,8 +305,9 @@ impl MessageProcessor {
}
/// Handle an error object received from the peer.
pub(crate) fn process_error(&mut self, err: JSONRPCError) {
pub(crate) async fn process_error(&mut self, err: JSONRPCError) {
tracing::error!("<- error: {:?}", err);
self.outgoing.notify_client_error(err.id, err.error).await;
}
async fn handle_config_read(&self, request_id: RequestId, params: ConfigReadParams) {


@@ -28,6 +28,7 @@ fn model_from_preset(preset: ModelPreset) -> Model {
preset.supported_reasoning_efforts,
),
default_reasoning_effort: preset.default_reasoning_effort,
supports_personality: preset.supports_personality,
is_default: preset.is_default,
}
}


@@ -39,6 +39,14 @@ impl OutgoingMessageSender {
&self,
request: ServerRequestPayload,
) -> oneshot::Receiver<Result> {
let (_id, rx) = self.send_request_with_id(request).await;
rx
}
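/// Like `send_request`, but also returns the generated `RequestId` so the
/// caller can later cancel the pending request.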
pub(crate) async fn send_request_with_id(
&self,
request: ServerRequestPayload,
) -> (RequestId, oneshot::Receiver<Result>) {
let id = RequestId::Integer(self.next_request_id.fetch_add(1, Ordering::Relaxed));
let outgoing_message_id = id.clone();
let (tx_approve, rx_approve) = oneshot::channel();
@@ -54,7 +62,7 @@ impl OutgoingMessageSender {
let mut request_id_to_callback = self.request_id_to_callback.lock().await;
request_id_to_callback.remove(&outgoing_message_id);
}
rx_approve
(outgoing_message_id, rx_approve)
}
pub(crate) async fn notify_client_response(&self, id: RequestId, result: Result) {
@@ -75,6 +83,30 @@ impl OutgoingMessageSender {
}
}
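/// Handle a JSON-RPC error the client sent for a server-initiated request:
/// drop the pending callback, if any, and log the failure.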
pub(crate) async fn notify_client_error(&self, id: RequestId, error: JSONRPCErrorError) {
let entry = {
let mut request_id_to_callback = self.request_id_to_callback.lock().await;
request_id_to_callback.remove_entry(&id)
};
match entry {
Some((id, _sender)) => {
warn!("client responded with error for {id:?}: {error:?}");
}
None => {
warn!("could not find callback for {id:?}");
}
}
}
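/// Drop the pending callback for `id`, returning whether a request was still
/// in flight; used to abandon a server-initiated request that timed out.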
pub(crate) async fn cancel_request(&self, id: &RequestId) -> bool {
let entry = {
let mut request_id_to_callback = self.request_id_to_callback.lock().await;
request_id_to_callback.remove_entry(id)
};
entry.is_some()
}
pub(crate) async fn send_response<T: Serialize>(&self, id: RequestId, response: T) {
match serde_json::to_value(response) {
Ok(result) => {
@@ -286,6 +318,8 @@ mod tests {
let notification = ServerNotification::ConfigWarning(ConfigWarningNotification {
summary: "Config error: using defaults".to_string(),
details: Some("error loading config: bad config".to_string()),
path: None,
range: None,
});
let jsonrpc_notification = OutgoingMessage::AppServerNotification(notification);


@@ -6,6 +6,7 @@ use base64::Engine;
use base64::engine::general_purpose::URL_SAFE_NO_PAD;
use chrono::DateTime;
use chrono::Utc;
use codex_app_server_protocol::AuthMode;
use codex_core::auth::AuthCredentialsStoreMode;
use codex_core::auth::AuthDotJson;
use codex_core::auth::save_auth;
@@ -49,6 +50,16 @@ impl ChatGptAuthFixture {
self
}
pub fn chatgpt_user_id(mut self, chatgpt_user_id: impl Into<String>) -> Self {
self.claims.chatgpt_user_id = Some(chatgpt_user_id.into());
self
}
pub fn chatgpt_account_id(mut self, chatgpt_account_id: impl Into<String>) -> Self {
self.claims.chatgpt_account_id = Some(chatgpt_account_id.into());
self
}
pub fn email(mut self, email: impl Into<String>) -> Self {
self.claims.email = Some(email.into());
self
@@ -69,6 +80,8 @@ impl ChatGptAuthFixture {
pub struct ChatGptIdTokenClaims {
pub email: Option<String>,
pub plan_type: Option<String>,
pub chatgpt_user_id: Option<String>,
pub chatgpt_account_id: Option<String>,
}
impl ChatGptIdTokenClaims {
@@ -85,6 +98,16 @@ impl ChatGptIdTokenClaims {
self.plan_type = Some(plan_type.into());
self
}
pub fn chatgpt_user_id(mut self, chatgpt_user_id: impl Into<String>) -> Self {
self.chatgpt_user_id = Some(chatgpt_user_id.into());
self
}
pub fn chatgpt_account_id(mut self, chatgpt_account_id: impl Into<String>) -> Self {
self.chatgpt_account_id = Some(chatgpt_account_id.into());
self
}
}
pub fn encode_id_token(claims: &ChatGptIdTokenClaims) -> Result<String> {
@@ -93,10 +116,20 @@ pub fn encode_id_token(claims: &ChatGptIdTokenClaims) -> Result<String> {
if let Some(email) = &claims.email {
payload.insert("email".to_string(), json!(email));
}
let mut auth_payload = serde_json::Map::new();
if let Some(plan_type) = &claims.plan_type {
auth_payload.insert("chatgpt_plan_type".to_string(), json!(plan_type));
}
if let Some(chatgpt_user_id) = &claims.chatgpt_user_id {
auth_payload.insert("chatgpt_user_id".to_string(), json!(chatgpt_user_id));
}
if let Some(chatgpt_account_id) = &claims.chatgpt_account_id {
auth_payload.insert("chatgpt_account_id".to_string(), json!(chatgpt_account_id));
}
if !auth_payload.is_empty() {
payload.insert(
"https://api.openai.com/auth".to_string(),
json!({ "chatgpt_plan_type": plan_type }),
serde_json::Value::Object(auth_payload),
);
}
let payload = serde_json::Value::Object(payload);
@@ -126,6 +159,7 @@ pub fn write_chatgpt_auth(
let last_refresh = fixture.last_refresh.unwrap_or_else(|| Some(Utc::now()));
let auth = AuthDotJson {
auth_mode: Some(AuthMode::Chatgpt),
openai_api_key: None,
tokens: Some(tokens),
last_refresh,


@@ -0,0 +1,72 @@
use codex_core::features::FEATURES;
use codex_core::features::Feature;
use std::collections::BTreeMap;
use std::path::Path;
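/// Write a `config.toml` under `codex_home` that points Codex at a mock
/// responses server, enabling the given feature flags and, for non-"openai"
/// providers, a `mock_provider` entry targeting `server_uri`.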
pub fn write_mock_responses_config_toml(
codex_home: &Path,
server_uri: &str,
feature_flags: &BTreeMap<Feature, bool>,
auto_compact_limit: i64,
requires_openai_auth: Option<bool>,
model_provider_id: &str,
compact_prompt: &str,
) -> std::io::Result<()> {
// Phase 1: build the features block for config.toml.
let mut features = BTreeMap::from([(Feature::RemoteModels, false)]);
for (feature, enabled) in feature_flags {
features.insert(*feature, *enabled);
}
let feature_entries = features
.into_iter()
.map(|(feature, enabled)| {
let key = FEATURES
.iter()
.find(|spec| spec.id == feature)
.map(|spec| spec.key)
.unwrap_or_else(|| panic!("missing feature key for {feature:?}"));
format!("{key} = {enabled}")
})
.collect::<Vec<_>>()
.join("\n");
// Phase 2: build provider-specific config bits.
let requires_line = match requires_openai_auth {
Some(true) => "requires_openai_auth = true\n".to_string(),
Some(false) | None => String::new(),
};
let provider_block = if model_provider_id == "openai" {
String::new()
} else {
format!(
r#"
[model_providers.mock_provider]
name = "Mock provider for test"
base_url = "{server_uri}/v1"
wire_api = "responses"
request_max_retries = 0
stream_max_retries = 0
{requires_line}
"#
)
};
// Phase 3: write the final config file.
let config_toml = codex_home.join("config.toml");
std::fs::write(
config_toml,
format!(
r#"
model = "mock-model"
approval_policy = "never"
sandbox_mode = "read-only"
compact_prompt = "{compact_prompt}"
model_auto_compact_token_limit = {auto_compact_limit}
model_provider = "{model_provider_id}"
[features]
{feature_entries}
{provider_block}
"#
),
)
}
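A minimal usage sketch (the server URI, token limit, and prompt below are
illustrative assumptions, not values taken from this diff):

    // Point a throwaway CODEX_HOME at a mock responses server.
    let home = tempfile::TempDir::new()?;
    write_mock_responses_config_toml(
        home.path(),
        "http://127.0.0.1:4000",
        &BTreeMap::new(),
        1_000,
        None,
        "mock_provider",
        "Summarize the conversation so far.",
    )?;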


@@ -1,4 +1,5 @@
mod auth_fixtures;
mod config;
mod mcp_process;
mod mock_model_server;
mod models_cache;
@@ -10,6 +11,7 @@ pub use auth_fixtures::ChatGptIdTokenClaims;
pub use auth_fixtures::encode_id_token;
pub use auth_fixtures::write_chatgpt_auth;
use codex_app_server_protocol::JSONRPCResponse;
pub use config::write_mock_responses_config_toml;
pub use core_test_support::format_with_current_shell;
pub use core_test_support::format_with_current_shell_display;
pub use core_test_support::format_with_current_shell_display_non_login;
@@ -30,6 +32,7 @@ pub use responses::create_final_assistant_message_sse_response;
pub use responses::create_request_user_input_sse_response;
pub use responses::create_shell_command_sse_response;
pub use rollout::create_fake_rollout;
pub use rollout::create_fake_rollout_with_source;
pub use rollout::create_fake_rollout_with_text_elements;
pub use rollout::rollout_path;
use serde::de::DeserializeOwned;


@@ -12,6 +12,7 @@ use tokio::process::ChildStdout;
use anyhow::Context;
use codex_app_server_protocol::AddConversationListenerParams;
use codex_app_server_protocol::AppsListParams;
use codex_app_server_protocol::ArchiveConversationParams;
use codex_app_server_protocol::CancelLoginAccountParams;
use codex_app_server_protocol::CancelLoginChatGptParams;
@@ -28,11 +29,13 @@ use codex_app_server_protocol::GetAuthStatusParams;
use codex_app_server_protocol::InitializeParams;
use codex_app_server_protocol::InterruptConversationParams;
use codex_app_server_protocol::JSONRPCError;
use codex_app_server_protocol::JSONRPCErrorError;
use codex_app_server_protocol::JSONRPCMessage;
use codex_app_server_protocol::JSONRPCNotification;
use codex_app_server_protocol::JSONRPCRequest;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::ListConversationsParams;
use codex_app_server_protocol::LoginAccountParams;
use codex_app_server_protocol::LoginApiKeyParams;
use codex_app_server_protocol::ModelListParams;
use codex_app_server_protocol::NewConversationParams;
@@ -48,11 +51,14 @@ use codex_app_server_protocol::ThreadArchiveParams;
use codex_app_server_protocol::ThreadForkParams;
use codex_app_server_protocol::ThreadListParams;
use codex_app_server_protocol::ThreadLoadedListParams;
use codex_app_server_protocol::ThreadReadParams;
use codex_app_server_protocol::ThreadResumeParams;
use codex_app_server_protocol::ThreadRollbackParams;
use codex_app_server_protocol::ThreadStartParams;
use codex_app_server_protocol::ThreadUnarchiveParams;
use codex_app_server_protocol::TurnInterruptParams;
use codex_app_server_protocol::TurnStartParams;
use codex_core::default_client::CODEX_INTERNAL_ORIGINATOR_OVERRIDE_ENV_VAR;
use tokio::process::Command;
pub struct McpProcess {
@@ -92,6 +98,7 @@ impl McpProcess {
cmd.stderr(Stdio::piped());
cmd.env("CODEX_HOME", codex_home);
cmd.env("RUST_LOG", "debug");
cmd.env_remove(CODEX_INTERNAL_ORIGINATOR_OVERRIDE_ENV_VAR);
for (k, v) in env_overrides {
match v {
@@ -293,6 +300,20 @@ impl McpProcess {
self.send_request("account/read", params).await
}
/// Send an `account/login/start` JSON-RPC request with ChatGPT auth tokens.
pub async fn send_chatgpt_auth_tokens_login_request(
&mut self,
id_token: String,
access_token: String,
) -> anyhow::Result<i64> {
let params = LoginAccountParams::ChatgptAuthTokens {
id_token,
access_token,
};
let params = Some(serde_json::to_value(params)?);
self.send_request("account/login/start", params).await
}
/// Send a `feedback/upload` JSON-RPC request.
pub async fn send_feedback_upload_request(
&mut self,
@@ -361,6 +382,15 @@ impl McpProcess {
self.send_request("thread/archive", params).await
}
/// Send a `thread/unarchive` JSON-RPC request.
pub async fn send_thread_unarchive_request(
&mut self,
params: ThreadUnarchiveParams,
) -> anyhow::Result<i64> {
let params = Some(serde_json::to_value(params)?);
self.send_request("thread/unarchive", params).await
}
/// Send a `thread/rollback` JSON-RPC request.
pub async fn send_thread_rollback_request(
&mut self,
@@ -388,6 +418,15 @@ impl McpProcess {
self.send_request("thread/loaded/list", params).await
}
/// Send a `thread/read` JSON-RPC request.
pub async fn send_thread_read_request(
&mut self,
params: ThreadReadParams,
) -> anyhow::Result<i64> {
let params = Some(serde_json::to_value(params)?);
self.send_request("thread/read", params).await
}
/// Send a `model/list` JSON-RPC request.
pub async fn send_list_models_request(
&mut self,
@@ -397,6 +436,12 @@ impl McpProcess {
self.send_request("model/list", params).await
}
/// Send an `app/list` JSON-RPC request.
pub async fn send_apps_list_request(&mut self, params: AppsListParams) -> anyhow::Result<i64> {
let params = Some(serde_json::to_value(params)?);
self.send_request("app/list", params).await
}
/// Send a `collaborationMode/list` JSON-RPC request.
pub async fn send_list_collaboration_modes_request(
&mut self,
@@ -579,6 +624,15 @@ impl McpProcess {
.await
}
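/// Send a raw JSON-RPC error message, as a client would when rejecting a
/// server-initiated request.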
pub async fn send_error(
&mut self,
id: RequestId,
error: JSONRPCErrorError,
) -> anyhow::Result<()> {
self.send_jsonrpc_message(JSONRPCMessage::Error(JSONRPCError { id, error }))
.await
}
pub async fn send_notification(
&mut self,
notification: ClientNotification,
@@ -682,6 +736,10 @@ impl McpProcess {
Ok(notification)
}
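/// Read the next JSON-RPC message from the server regardless of its type.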
pub async fn read_next_message(&mut self) -> anyhow::Result<JSONRPCMessage> {
self.read_stream_until_message(|_| true).await
}
/// Clears any buffered messages so future reads only consider new stream items.
///
/// We call this when e.g. we want to validate against the next turn and no longer care about


@@ -27,6 +27,7 @@ fn preset_to_info(preset: &ModelPreset, priority: i32) -> ModelInfo {
priority,
upgrade: preset.upgrade.as_ref().map(|u| u.into()),
base_instructions: "base instructions".to_string(),
model_messages: None,
supports_reasoning_summaries: false,
support_verbosity: false,
default_verbosity: None,


@@ -38,6 +38,27 @@ pub fn create_fake_rollout(
preview: &str,
model_provider: Option<&str>,
git_info: Option<GitInfo>,
) -> Result<String> {
create_fake_rollout_with_source(
codex_home,
filename_ts,
meta_rfc3339,
preview,
model_provider,
git_info,
SessionSource::Cli,
)
}
/// Create a minimal rollout file with an explicit session source.
pub fn create_fake_rollout_with_source(
codex_home: &Path,
filename_ts: &str,
meta_rfc3339: &str,
preview: &str,
model_provider: Option<&str>,
git_info: Option<GitInfo>,
source: SessionSource,
) -> Result<String> {
let uuid = Uuid::new_v4();
let uuid_str = uuid.to_string();
@@ -57,9 +78,10 @@ pub fn create_fake_rollout(
cwd: PathBuf::from("/"),
originator: "codex".to_string(),
cli_version: "0.0.0".to_string(),
source: SessionSource::Cli,
source,
model_provider: model_provider.map(str::to_string),
base_instructions: None,
dynamic_tools: None,
};
let payload = serde_json::to_value(SessionMetaLine {
meta,
@@ -138,6 +160,7 @@ pub fn create_fake_rollout_with_text_elements(
source: SessionSource::Cli,
model_provider: model_provider.map(str::to_string),
base_instructions: None,
dynamic_tools: None,
};
let payload = serde_json::to_value(SessionMetaLine {
meta,


@@ -108,6 +108,10 @@ async fn test_codex_jsonrpc_conversation_flow() -> Result<()> {
let AddConversationSubscriptionResponse { subscription_id } =
to_response::<AddConversationSubscriptionResponse>(add_listener_resp)?;
// Drop any buffered events from conversation setup to avoid
// matching an earlier task_complete.
mcp.clear_message_buffer();
// 3) sendUserMessage (should trigger notifications; we only validate an OK response)
let send_user_id = mcp
.send_send_user_message_request(SendUserMessageParams {
@@ -125,13 +129,38 @@ async fn test_codex_jsonrpc_conversation_flow() -> Result<()> {
.await??;
let SendUserMessageResponse {} = to_response::<SendUserMessageResponse>(send_user_resp)?;
// Verify the task_finished notification is received.
// Note this also ensures that the final request to the server was made.
let task_finished_notification: JSONRPCNotification = timeout(
let task_started_notification: JSONRPCNotification = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("codex/event/task_complete"),
mcp.read_stream_until_notification_message("codex/event/task_started"),
)
.await??;
let task_started_event: Event = serde_json::from_value(
task_started_notification
.params
.clone()
.expect("task_started should have params"),
)
.expect("task_started should deserialize to Event");
// Verify the task_finished notification for this turn is received.
// Note this also ensures that the final request to the server was made.
let task_finished_notification: JSONRPCNotification = loop {
let notification: JSONRPCNotification = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("codex/event/task_complete"),
)
.await??;
let event: Event = serde_json::from_value(
notification
.params
.clone()
.expect("task_complete should have params"),
)
.expect("task_complete should deserialize to Event");
if event.id == task_started_event.id {
break notification;
}
};
let serde_json::Value::Object(map) = task_finished_notification
.params
.expect("notification should have params")


@@ -48,8 +48,7 @@ async fn test_fuzzy_file_search_sorts_and_includes_indices() -> Result<()> {
.await??;
let value = resp.result;
// The path separator on Windows affects the score.
let expected_score = if cfg!(windows) { 69 } else { 72 };
let expected_score = 72;
assert_eq!(
value,
@@ -59,16 +58,9 @@ async fn test_fuzzy_file_search_sorts_and_includes_indices() -> Result<()> {
"root": root_path.clone(),
"path": "abexy",
"file_name": "abexy",
"score": 88,
"score": 84,
"indices": [0, 1, 2],
},
{
"root": root_path.clone(),
"path": "abcde",
"file_name": "abcde",
"score": 74,
"indices": [0, 1, 4],
},
{
"root": root_path.clone(),
"path": sub_abce_rel,
@@ -76,6 +68,13 @@ async fn test_fuzzy_file_search_sorts_and_includes_indices() -> Result<()> {
"score": expected_score,
"indices": [4, 5, 7],
},
{
"root": root_path.clone(),
"path": "abcde",
"file_name": "abcde",
"score": 71,
"indices": [0, 1, 4],
},
]
})
);


@@ -307,6 +307,7 @@ async fn test_list_and_resume_conversations() -> Result<()> {
content: vec![ContentItem::InputText {
text: fork_history_text.to_string(),
}],
end_turn: None,
}];
let resume_with_history_req_id = mcp
.send_resume_conversation_request(ResumeConversationParams {


@@ -11,6 +11,7 @@ use codex_app_server_protocol::NewConversationResponse;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::SendUserMessageParams;
use codex_app_server_protocol::SendUserMessageResponse;
use codex_execpolicy::Policy;
use codex_protocol::ThreadId;
use codex_protocol::models::ContentItem;
use codex_protocol::models::DeveloperInstructions;
@@ -358,6 +359,8 @@ fn assert_permissions_message(item: &ResponseItem) {
let expected = DeveloperInstructions::from_policy(
&SandboxPolicy::DangerFullAccess,
AskForApproval::Never,
&Policy::empty(),
false,
&PathBuf::from("/tmp"),
)
.into_text();


@@ -4,28 +4,43 @@ use app_test_support::McpProcess;
use app_test_support::to_response;
use app_test_support::ChatGptAuthFixture;
use app_test_support::ChatGptIdTokenClaims;
use app_test_support::encode_id_token;
use app_test_support::write_chatgpt_auth;
use app_test_support::write_models_cache;
use codex_app_server_protocol::Account;
use codex_app_server_protocol::AuthMode;
use codex_app_server_protocol::CancelLoginAccountParams;
use codex_app_server_protocol::CancelLoginAccountResponse;
use codex_app_server_protocol::CancelLoginAccountStatus;
use codex_app_server_protocol::ChatgptAuthTokensRefreshReason;
use codex_app_server_protocol::ChatgptAuthTokensRefreshResponse;
use codex_app_server_protocol::GetAccountParams;
use codex_app_server_protocol::GetAccountResponse;
use codex_app_server_protocol::JSONRPCError;
use codex_app_server_protocol::JSONRPCErrorError;
use codex_app_server_protocol::JSONRPCNotification;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::LoginAccountResponse;
use codex_app_server_protocol::LogoutAccountResponse;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::ServerNotification;
use codex_app_server_protocol::ServerRequest;
use codex_app_server_protocol::TurnCompletedNotification;
use codex_app_server_protocol::TurnStatus;
use codex_core::auth::AuthCredentialsStoreMode;
use codex_login::login_with_api_key;
use codex_protocol::account::PlanType as AccountPlanType;
use core_test_support::responses;
use pretty_assertions::assert_eq;
use serde_json::json;
use serial_test::serial;
use std::path::Path;
use std::time::Duration;
use tempfile::TempDir;
use tokio::time::timeout;
use wiremock::MockServer;
use wiremock::ResponseTemplate;
const DEFAULT_READ_TIMEOUT: std::time::Duration = std::time::Duration::from_secs(10);
@@ -35,10 +50,14 @@ struct CreateConfigTomlParams {
forced_method: Option<String>,
forced_workspace_id: Option<String>,
requires_openai_auth: Option<bool>,
base_url: Option<String>,
}
fn create_config_toml(codex_home: &Path, params: CreateConfigTomlParams) -> std::io::Result<()> {
let config_toml = codex_home.join("config.toml");
let base_url = params
.base_url
.unwrap_or_else(|| "http://127.0.0.1:0/v1".to_string());
let forced_line = if let Some(method) = params.forced_method {
format!("forced_login_method = \"{method}\"\n")
} else {
@@ -66,7 +85,7 @@ model_provider = "mock_provider"
[model_providers.mock_provider]
name = "Mock provider for test"
base_url = "http://127.0.0.1:0/v1"
base_url = "{base_url}"
wire_api = "responses"
request_max_retries = 0
stream_max_retries = 0
@@ -133,6 +152,627 @@ async fn logout_account_removes_auth_and_notifies() -> Result<()> {
Ok(())
}
#[tokio::test]
async fn set_auth_token_updates_account_and_notifies() -> Result<()> {
let codex_home = TempDir::new()?;
let mock_server = MockServer::start().await;
create_config_toml(
codex_home.path(),
CreateConfigTomlParams {
requires_openai_auth: Some(true),
base_url: Some(format!("{}/v1", mock_server.uri())),
..Default::default()
},
)?;
write_models_cache(codex_home.path())?;
let id_token = encode_id_token(
&ChatGptIdTokenClaims::new()
.email("embedded@example.com")
.plan_type("pro")
.chatgpt_account_id("org-embedded"),
)?;
let access_token = "access-embedded".to_string();
let mut mcp = McpProcess::new_with_env(codex_home.path(), &[("OPENAI_API_KEY", None)]).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
let set_id = mcp
.send_chatgpt_auth_tokens_login_request(id_token.clone(), access_token)
.await?;
let set_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(set_id)),
)
.await??;
let response: LoginAccountResponse = to_response(set_resp)?;
assert_eq!(response, LoginAccountResponse::ChatgptAuthTokens {});
let note = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("account/updated"),
)
.await??;
let parsed: ServerNotification = note.try_into()?;
let ServerNotification::AccountUpdated(payload) = parsed else {
bail!("unexpected notification: {parsed:?}");
};
assert_eq!(payload.auth_mode, Some(AuthMode::ChatgptAuthTokens));
let get_id = mcp
.send_get_account_request(GetAccountParams {
refresh_token: false,
})
.await?;
let get_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(get_id)),
)
.await??;
let account: GetAccountResponse = to_response(get_resp)?;
assert_eq!(
account,
GetAccountResponse {
account: Some(Account::Chatgpt {
email: "embedded@example.com".to_string(),
plan_type: AccountPlanType::Pro,
}),
requires_openai_auth: true,
}
);
Ok(())
}
#[tokio::test]
async fn account_read_refresh_token_is_noop_in_external_mode() -> Result<()> {
let codex_home = TempDir::new()?;
create_config_toml(
codex_home.path(),
CreateConfigTomlParams {
requires_openai_auth: Some(true),
..Default::default()
},
)?;
write_models_cache(codex_home.path())?;
let id_token = encode_id_token(
&ChatGptIdTokenClaims::new()
.email("embedded@example.com")
.plan_type("pro")
.chatgpt_account_id("org-embedded"),
)?;
let mut mcp = McpProcess::new_with_env(codex_home.path(), &[("OPENAI_API_KEY", None)]).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
let set_id = mcp
.send_chatgpt_auth_tokens_login_request(id_token, "access-embedded".to_string())
.await?;
let set_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(set_id)),
)
.await??;
let response: LoginAccountResponse = to_response(set_resp)?;
assert_eq!(response, LoginAccountResponse::ChatgptAuthTokens {});
let _updated = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("account/updated"),
)
.await??;
let get_id = mcp
.send_get_account_request(GetAccountParams {
refresh_token: true,
})
.await?;
let get_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(get_id)),
)
.await??;
let account: GetAccountResponse = to_response(get_resp)?;
assert_eq!(
account,
GetAccountResponse {
account: Some(Account::Chatgpt {
email: "embedded@example.com".to_string(),
plan_type: AccountPlanType::Pro,
}),
requires_openai_auth: true,
}
);
let refresh_request = timeout(
Duration::from_millis(250),
mcp.read_stream_until_request_message(),
)
.await;
assert!(
refresh_request.is_err(),
"external mode should not emit account/chatgptAuthTokens/refresh for refreshToken=true"
);
Ok(())
}
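/// Respond to the server's pending `account/chatgptAuthTokens/refresh` request
/// with the given tokens, asserting the refresh reason is `Unauthorized`.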
async fn respond_to_refresh_request(
mcp: &mut McpProcess,
access_token: &str,
id_token: &str,
) -> Result<()> {
let refresh_req: ServerRequest = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_request_message(),
)
.await??;
let ServerRequest::ChatgptAuthTokensRefresh { request_id, params } = refresh_req else {
bail!("expected account/chatgptAuthTokens/refresh request, got {refresh_req:?}");
};
assert_eq!(params.reason, ChatgptAuthTokensRefreshReason::Unauthorized);
let response = ChatgptAuthTokensRefreshResponse {
access_token: access_token.to_string(),
id_token: id_token.to_string(),
};
mcp.send_response(request_id, serde_json::to_value(response)?)
.await?;
Ok(())
}
#[tokio::test]
// 401 response triggers account/chatgptAuthTokens/refresh and retries with new tokens.
async fn external_auth_refreshes_on_unauthorized() -> Result<()> {
let codex_home = TempDir::new()?;
let mock_server = MockServer::start().await;
create_config_toml(
codex_home.path(),
CreateConfigTomlParams {
requires_openai_auth: Some(true),
base_url: Some(format!("{}/v1", mock_server.uri())),
..Default::default()
},
)?;
write_models_cache(codex_home.path())?;
let success_sse = responses::sse(vec![
responses::ev_response_created("resp-turn"),
responses::ev_assistant_message("msg-turn", "turn ok"),
responses::ev_completed("resp-turn"),
]);
let unauthorized = ResponseTemplate::new(401).set_body_json(json!({
"error": { "message": "unauthorized" }
}));
let responses_mock = responses::mount_response_sequence(
&mock_server,
vec![unauthorized, responses::sse_response(success_sse)],
)
.await;
let initial_id_token = encode_id_token(
&ChatGptIdTokenClaims::new()
.email("initial@example.com")
.plan_type("pro")
.chatgpt_account_id("org-initial"),
)?;
let refreshed_id_token = encode_id_token(
&ChatGptIdTokenClaims::new()
.email("refreshed@example.com")
.plan_type("pro")
.chatgpt_account_id("org-refreshed"),
)?;
let initial_access_token = "access-initial".to_string();
let refreshed_access_token = "access-refreshed".to_string();
let mut mcp = McpProcess::new_with_env(codex_home.path(), &[("OPENAI_API_KEY", None)]).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
let set_id = mcp
.send_chatgpt_auth_tokens_login_request(
initial_id_token.clone(),
initial_access_token.clone(),
)
.await?;
let set_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(set_id)),
)
.await??;
let response: LoginAccountResponse = to_response(set_resp)?;
assert_eq!(response, LoginAccountResponse::ChatgptAuthTokens {});
let _updated = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("account/updated"),
)
.await??;
let thread_req = mcp
.send_thread_start_request(codex_app_server_protocol::ThreadStartParams {
model: Some("mock-model".to_string()),
..Default::default()
})
.await?;
let thread_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(thread_req)),
)
.await??;
let thread = to_response::<codex_app_server_protocol::ThreadStartResponse>(thread_resp)?;
let turn_req = mcp
.send_turn_start_request(codex_app_server_protocol::TurnStartParams {
thread_id: thread.thread.id,
input: vec![codex_app_server_protocol::UserInput::Text {
text: "Hello".to_string(),
text_elements: Vec::new(),
}],
..Default::default()
})
.await?;
respond_to_refresh_request(&mut mcp, &refreshed_access_token, &refreshed_id_token).await?;
let _turn_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(turn_req)),
)
.await??;
let _turn_completed = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("turn/completed"),
)
.await??;
let requests = responses_mock.requests();
assert_eq!(requests.len(), 2);
assert_eq!(
requests[0].header("authorization"),
Some(format!("Bearer {initial_access_token}"))
);
assert_eq!(
requests[1].header("authorization"),
Some(format!("Bearer {refreshed_access_token}"))
);
Ok(())
}
#[tokio::test]
// Client returns JSON-RPC error to refresh; turn fails.
async fn external_auth_refresh_error_fails_turn() -> Result<()> {
let codex_home = TempDir::new()?;
let mock_server = MockServer::start().await;
create_config_toml(
codex_home.path(),
CreateConfigTomlParams {
requires_openai_auth: Some(true),
base_url: Some(format!("{}/v1", mock_server.uri())),
..Default::default()
},
)?;
write_models_cache(codex_home.path())?;
let unauthorized = ResponseTemplate::new(401).set_body_json(json!({
"error": { "message": "unauthorized" }
}));
let _responses_mock =
responses::mount_response_sequence(&mock_server, vec![unauthorized]).await;
let initial_id_token = encode_id_token(
&ChatGptIdTokenClaims::new()
.email("initial@example.com")
.plan_type("pro")
.chatgpt_account_id("org-initial"),
)?;
let mut mcp = McpProcess::new_with_env(codex_home.path(), &[("OPENAI_API_KEY", None)]).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
let set_id = mcp
.send_chatgpt_auth_tokens_login_request(initial_id_token, "access-initial".to_string())
.await?;
let set_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(set_id)),
)
.await??;
let response: LoginAccountResponse = to_response(set_resp)?;
assert_eq!(response, LoginAccountResponse::ChatgptAuthTokens {});
let _updated = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("account/updated"),
)
.await??;
let thread_req = mcp
.send_thread_start_request(codex_app_server_protocol::ThreadStartParams {
model: Some("mock-model".to_string()),
..Default::default()
})
.await?;
let thread_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(thread_req)),
)
.await??;
let thread = to_response::<codex_app_server_protocol::ThreadStartResponse>(thread_resp)?;
let turn_req = mcp
.send_turn_start_request(codex_app_server_protocol::TurnStartParams {
thread_id: thread.thread.id.clone(),
input: vec![codex_app_server_protocol::UserInput::Text {
text: "Hello".to_string(),
text_elements: Vec::new(),
}],
..Default::default()
})
.await?;
let refresh_req: ServerRequest = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_request_message(),
)
.await??;
let ServerRequest::ChatgptAuthTokensRefresh { request_id, .. } = refresh_req else {
bail!("expected account/chatgptAuthTokens/refresh request, got {refresh_req:?}");
};
mcp.send_error(
request_id,
JSONRPCErrorError {
code: -32_000,
message: "refresh failed".to_string(),
data: None,
},
)
.await?;
let _turn_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(turn_req)),
)
.await??;
let completed_notif: JSONRPCNotification = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("turn/completed"),
)
.await??;
let completed: TurnCompletedNotification = serde_json::from_value(
completed_notif
.params
.expect("turn/completed params must be present"),
)?;
assert_eq!(completed.turn.status, TurnStatus::Failed);
assert!(completed.turn.error.is_some());
Ok(())
}
#[tokio::test]
// Refresh returns tokens for the wrong workspace; turn fails.
async fn external_auth_refresh_mismatched_workspace_fails_turn() -> Result<()> {
let codex_home = TempDir::new()?;
let mock_server = MockServer::start().await;
create_config_toml(
codex_home.path(),
CreateConfigTomlParams {
forced_workspace_id: Some("org-expected".to_string()),
requires_openai_auth: Some(true),
base_url: Some(format!("{}/v1", mock_server.uri())),
..Default::default()
},
)?;
write_models_cache(codex_home.path())?;
let unauthorized = ResponseTemplate::new(401).set_body_json(json!({
"error": { "message": "unauthorized" }
}));
let _responses_mock =
responses::mount_response_sequence(&mock_server, vec![unauthorized]).await;
let initial_id_token = encode_id_token(
&ChatGptIdTokenClaims::new()
.email("initial@example.com")
.plan_type("pro")
.chatgpt_account_id("org-expected"),
)?;
let refreshed_id_token = encode_id_token(
&ChatGptIdTokenClaims::new()
.email("refreshed@example.com")
.plan_type("pro")
.chatgpt_account_id("org-other"),
)?;
let mut mcp = McpProcess::new_with_env(codex_home.path(), &[("OPENAI_API_KEY", None)]).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
let set_id = mcp
.send_chatgpt_auth_tokens_login_request(initial_id_token, "access-initial".to_string())
.await?;
let set_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(set_id)),
)
.await??;
let response: LoginAccountResponse = to_response(set_resp)?;
assert_eq!(response, LoginAccountResponse::ChatgptAuthTokens {});
let _updated = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("account/updated"),
)
.await??;
let thread_req = mcp
.send_thread_start_request(codex_app_server_protocol::ThreadStartParams {
model: Some("mock-model".to_string()),
..Default::default()
})
.await?;
let thread_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(thread_req)),
)
.await??;
let thread = to_response::<codex_app_server_protocol::ThreadStartResponse>(thread_resp)?;
let turn_req = mcp
.send_turn_start_request(codex_app_server_protocol::TurnStartParams {
thread_id: thread.thread.id.clone(),
input: vec![codex_app_server_protocol::UserInput::Text {
text: "Hello".to_string(),
text_elements: Vec::new(),
}],
..Default::default()
})
.await?;
let refresh_req: ServerRequest = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_request_message(),
)
.await??;
let ServerRequest::ChatgptAuthTokensRefresh { request_id, .. } = refresh_req else {
bail!("expected account/chatgptAuthTokens/refresh request, got {refresh_req:?}");
};
mcp.send_response(
request_id,
serde_json::to_value(ChatgptAuthTokensRefreshResponse {
access_token: "access-refreshed".to_string(),
id_token: refreshed_id_token,
})?,
)
.await?;
let _turn_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(turn_req)),
)
.await??;
let completed_notif: JSONRPCNotification = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("turn/completed"),
)
.await??;
let completed: TurnCompletedNotification = serde_json::from_value(
completed_notif
.params
.expect("turn/completed params must be present"),
)?;
assert_eq!(completed.turn.status, TurnStatus::Failed);
assert!(completed.turn.error.is_some());
Ok(())
}
#[tokio::test]
// Refresh returns a malformed id_token; turn fails.
async fn external_auth_refresh_invalid_id_token_fails_turn() -> Result<()> {
let codex_home = TempDir::new()?;
let mock_server = MockServer::start().await;
create_config_toml(
codex_home.path(),
CreateConfigTomlParams {
requires_openai_auth: Some(true),
base_url: Some(format!("{}/v1", mock_server.uri())),
..Default::default()
},
)?;
write_models_cache(codex_home.path())?;
let unauthorized = ResponseTemplate::new(401).set_body_json(json!({
"error": { "message": "unauthorized" }
}));
let _responses_mock =
responses::mount_response_sequence(&mock_server, vec![unauthorized]).await;
let initial_id_token = encode_id_token(
&ChatGptIdTokenClaims::new()
.email("initial@example.com")
.plan_type("pro")
.chatgpt_account_id("org-initial"),
)?;
let mut mcp = McpProcess::new_with_env(codex_home.path(), &[("OPENAI_API_KEY", None)]).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
let set_id = mcp
.send_chatgpt_auth_tokens_login_request(initial_id_token, "access-initial".to_string())
.await?;
let set_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(set_id)),
)
.await??;
let response: LoginAccountResponse = to_response(set_resp)?;
assert_eq!(response, LoginAccountResponse::ChatgptAuthTokens {});
let _updated = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("account/updated"),
)
.await??;
let thread_req = mcp
.send_thread_start_request(codex_app_server_protocol::ThreadStartParams {
model: Some("mock-model".to_string()),
..Default::default()
})
.await?;
let thread_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(thread_req)),
)
.await??;
let thread = to_response::<codex_app_server_protocol::ThreadStartResponse>(thread_resp)?;
let turn_req = mcp
.send_turn_start_request(codex_app_server_protocol::TurnStartParams {
thread_id: thread.thread.id.clone(),
input: vec![codex_app_server_protocol::UserInput::Text {
text: "Hello".to_string(),
text_elements: Vec::new(),
}],
..Default::default()
})
.await?;
let refresh_req: ServerRequest = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_request_message(),
)
.await??;
let ServerRequest::ChatgptAuthTokensRefresh { request_id, .. } = refresh_req else {
bail!("expected account/chatgptAuthTokens/refresh request, got {refresh_req:?}");
};
mcp.send_response(
request_id,
serde_json::to_value(ChatgptAuthTokensRefreshResponse {
access_token: "access-refreshed".to_string(),
id_token: "not-a-jwt".to_string(),
})?,
)
.await?;
let _turn_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(turn_req)),
)
.await??;
let completed_notif: JSONRPCNotification = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("turn/completed"),
)
.await??;
let completed: TurnCompletedNotification = serde_json::from_value(
completed_notif
.params
.expect("turn/completed params must be present"),
)?;
assert_eq!(completed.turn.status, TurnStatus::Failed);
assert!(completed.turn.error.is_some());
Ok(())
}
#[tokio::test]
async fn login_account_api_key_succeeds_and_notifies() -> Result<()> {
let codex_home = TempDir::new()?;
@@ -304,6 +944,71 @@ async fn login_account_chatgpt_start_can_be_cancelled() -> Result<()> {
Ok(())
}
#[tokio::test]
// Serialize tests that launch the login server since it binds to a fixed port.
#[serial(login_port)]
async fn set_auth_token_cancels_active_chatgpt_login() -> Result<()> {
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), CreateConfigTomlParams::default())?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
// Initiate the ChatGPT login flow
let request_id = mcp.send_login_account_chatgpt_request().await?;
let resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(request_id)),
)
.await??;
let login: LoginAccountResponse = to_response(resp)?;
let LoginAccountResponse::Chatgpt { login_id, .. } = login else {
bail!("unexpected login response: {login:?}");
};
let id_token = encode_id_token(
&ChatGptIdTokenClaims::new()
.email("embedded@example.com")
.plan_type("pro")
.chatgpt_account_id("org-embedded"),
)?;
// Set an external auth token instead of completing the ChatGPT login flow.
// This should cancel the active login attempt.
let set_id = mcp
.send_chatgpt_auth_tokens_login_request(id_token, "access-embedded".to_string())
.await?;
let set_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(set_id)),
)
.await??;
let response: LoginAccountResponse = to_response(set_resp)?;
assert_eq!(response, LoginAccountResponse::ChatgptAuthTokens {});
let _updated = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("account/updated"),
)
.await??;
// Verify that the active login attempt was cancelled.
// We check this by trying to cancel it and expecting a not found error.
let cancel_id = mcp
.send_cancel_login_account_request(CancelLoginAccountParams {
login_id: login_id.clone(),
})
.await?;
let cancel_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(cancel_id)),
)
.await??;
let cancel: CancelLoginAccountResponse = to_response(cancel_resp)?;
assert_eq!(cancel.status, CancelLoginAccountStatus::NotFound);
Ok(())
}
#[tokio::test]
// Serialize tests that launch the login server since it binds to a fixed port.
#[serial(login_port)]


@@ -0,0 +1,400 @@
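//! apps/list tests backed by a mock connectors directory and a local MCP app server.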
use std::borrow::Cow;
use std::sync::Arc;
use std::time::Duration;
use anyhow::Result;
use app_test_support::ChatGptAuthFixture;
use app_test_support::McpProcess;
use app_test_support::to_response;
use app_test_support::write_chatgpt_auth;
use axum::Json;
use axum::Router;
use axum::extract::State;
use axum::http::HeaderMap;
use axum::http::StatusCode;
use axum::http::header::AUTHORIZATION;
use axum::routing::get;
use codex_app_server_protocol::AppInfo;
use codex_app_server_protocol::AppsListParams;
use codex_app_server_protocol::AppsListResponse;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::RequestId;
use codex_core::auth::AuthCredentialsStoreMode;
use pretty_assertions::assert_eq;
use rmcp::handler::server::ServerHandler;
use rmcp::model::JsonObject;
use rmcp::model::ListToolsResult;
use rmcp::model::Meta;
use rmcp::model::ServerCapabilities;
use rmcp::model::ServerInfo;
use rmcp::model::Tool;
use rmcp::model::ToolAnnotations;
use rmcp::transport::StreamableHttpServerConfig;
use rmcp::transport::StreamableHttpService;
use rmcp::transport::streamable_http_server::session::local::LocalSessionManager;
use serde_json::json;
use tempfile::TempDir;
use tokio::net::TcpListener;
use tokio::task::JoinHandle;
use tokio::time::timeout;
const DEFAULT_TIMEOUT: Duration = Duration::from_secs(10);
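/// With the connectors feature disabled, apps/list returns an empty page with no cursor.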
#[tokio::test]
async fn list_apps_returns_empty_when_connectors_disabled() -> Result<()> {
let codex_home = TempDir::new()?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_TIMEOUT, mcp.initialize()).await??;
let request_id = mcp
.send_apps_list_request(AppsListParams {
limit: Some(50),
cursor: None,
})
.await?;
let response: JSONRPCResponse = timeout(
DEFAULT_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(request_id)),
)
.await??;
let AppsListResponse { data, next_cursor } = to_response(response)?;
assert!(data.is_empty());
assert!(next_cursor.is_none());
Ok(())
}
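/// Connectors that also expose an MCP tool are marked accessible, take their display
/// name from the tool metadata, and sort ahead of inaccessible entries; every entry
/// gets a chatgpt.com install URL.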
#[tokio::test]
async fn list_apps_returns_connectors_with_accessible_flags() -> Result<()> {
let connectors = vec![
AppInfo {
id: "alpha".to_string(),
name: "Alpha".to_string(),
description: Some("Alpha connector".to_string()),
logo_url: Some("https://example.com/alpha.png".to_string()),
logo_url_dark: None,
distribution_channel: None,
install_url: None,
is_accessible: false,
},
AppInfo {
id: "beta".to_string(),
name: "beta".to_string(),
description: None,
logo_url: None,
logo_url_dark: None,
distribution_channel: None,
install_url: None,
is_accessible: false,
},
];
let tools = vec![connector_tool("beta", "Beta App")?];
let (server_url, server_handle) = start_apps_server(connectors.clone(), tools).await?;
let codex_home = TempDir::new()?;
write_connectors_config(codex_home.path(), &server_url)?;
write_chatgpt_auth(
codex_home.path(),
ChatGptAuthFixture::new("chatgpt-token")
.account_id("account-123")
.chatgpt_user_id("user-123")
.chatgpt_account_id("account-123"),
AuthCredentialsStoreMode::File,
)?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_TIMEOUT, mcp.initialize()).await??;
let request_id = mcp
.send_apps_list_request(AppsListParams {
limit: None,
cursor: None,
})
.await?;
let response: JSONRPCResponse = timeout(
DEFAULT_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(request_id)),
)
.await??;
let AppsListResponse { data, next_cursor } = to_response(response)?;
let expected = vec![
AppInfo {
id: "beta".to_string(),
name: "Beta App".to_string(),
description: None,
logo_url: None,
logo_url_dark: None,
distribution_channel: None,
install_url: Some("https://chatgpt.com/apps/beta/beta".to_string()),
is_accessible: true,
},
AppInfo {
id: "alpha".to_string(),
name: "Alpha".to_string(),
description: Some("Alpha connector".to_string()),
logo_url: Some("https://example.com/alpha.png".to_string()),
logo_url_dark: None,
distribution_channel: None,
install_url: Some("https://chatgpt.com/apps/alpha/alpha".to_string()),
is_accessible: false,
},
];
assert_eq!(data, expected);
assert!(next_cursor.is_none());
server_handle.abort();
Ok(())
}
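/// With limit = 1 the first page holds the accessible connector plus a cursor, and
/// the cursor fetches the remaining entry.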
#[tokio::test]
async fn list_apps_paginates_results() -> Result<()> {
let connectors = vec![
AppInfo {
id: "alpha".to_string(),
name: "Alpha".to_string(),
description: Some("Alpha connector".to_string()),
logo_url: None,
logo_url_dark: None,
distribution_channel: None,
install_url: None,
is_accessible: false,
},
AppInfo {
id: "beta".to_string(),
name: "beta".to_string(),
description: None,
logo_url: None,
logo_url_dark: None,
distribution_channel: None,
install_url: None,
is_accessible: false,
},
];
let tools = vec![connector_tool("beta", "Beta App")?];
let (server_url, server_handle) = start_apps_server(connectors.clone(), tools).await?;
let codex_home = TempDir::new()?;
write_connectors_config(codex_home.path(), &server_url)?;
write_chatgpt_auth(
codex_home.path(),
ChatGptAuthFixture::new("chatgpt-token")
.account_id("account-123")
.chatgpt_user_id("user-123")
.chatgpt_account_id("account-123"),
AuthCredentialsStoreMode::File,
)?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_TIMEOUT, mcp.initialize()).await??;
let first_request = mcp
.send_apps_list_request(AppsListParams {
limit: Some(1),
cursor: None,
})
.await?;
let first_response: JSONRPCResponse = timeout(
DEFAULT_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(first_request)),
)
.await??;
let AppsListResponse {
data: first_page,
next_cursor: first_cursor,
} = to_response(first_response)?;
let expected_first = vec![AppInfo {
id: "beta".to_string(),
name: "Beta App".to_string(),
description: None,
logo_url: None,
logo_url_dark: None,
distribution_channel: None,
install_url: Some("https://chatgpt.com/apps/beta/beta".to_string()),
is_accessible: true,
}];
assert_eq!(first_page, expected_first);
let next_cursor = first_cursor.ok_or_else(|| anyhow::anyhow!("missing cursor"))?;
let second_request = mcp
.send_apps_list_request(AppsListParams {
limit: Some(1),
cursor: Some(next_cursor),
})
.await?;
let second_response: JSONRPCResponse = timeout(
DEFAULT_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(second_request)),
)
.await??;
let AppsListResponse {
data: second_page,
next_cursor: second_cursor,
} = to_response(second_response)?;
let expected_second = vec![AppInfo {
id: "alpha".to_string(),
name: "Alpha".to_string(),
description: Some("Alpha connector".to_string()),
logo_url: None,
logo_url_dark: None,
distribution_channel: None,
install_url: Some("https://chatgpt.com/apps/alpha/alpha".to_string()),
is_accessible: false,
}];
assert_eq!(second_page, expected_second);
assert!(second_cursor.is_none());
server_handle.abort();
Ok(())
}
#[derive(Clone)]
struct AppsServerState {
expected_bearer: String,
expected_account_id: String,
response: serde_json::Value,
}
#[derive(Clone)]
struct AppListMcpServer {
tools: Arc<Vec<Tool>>,
}
impl AppListMcpServer {
fn new(tools: Arc<Vec<Tool>>) -> Self {
Self { tools }
}
}
impl ServerHandler for AppListMcpServer {
fn get_info(&self) -> ServerInfo {
ServerInfo {
capabilities: ServerCapabilities::builder().enable_tools().build(),
..ServerInfo::default()
}
}
fn list_tools(
&self,
_request: Option<rmcp::model::PaginatedRequestParam>,
_context: rmcp::service::RequestContext<rmcp::service::RoleServer>,
) -> impl std::future::Future<Output = Result<ListToolsResult, rmcp::ErrorData>> + Send + '_
{
let tools = self.tools.clone();
async move {
Ok(ListToolsResult {
tools: (*tools).clone(),
next_cursor: None,
meta: None,
})
}
}
}
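/// Serves the connectors directory endpoints and a streamable-HTTP MCP app service
/// on an ephemeral port, returning the base URL and the serve task handle.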
async fn start_apps_server(
connectors: Vec<AppInfo>,
tools: Vec<Tool>,
) -> Result<(String, JoinHandle<()>)> {
let state = AppsServerState {
expected_bearer: "Bearer chatgpt-token".to_string(),
expected_account_id: "account-123".to_string(),
response: json!({ "apps": connectors, "next_token": null }),
};
let state = Arc::new(state);
let tools = Arc::new(tools);
let listener = TcpListener::bind("127.0.0.1:0").await?;
let addr = listener.local_addr()?;
let mcp_service = StreamableHttpService::new(
{
let tools = tools.clone();
move || Ok(AppListMcpServer::new(tools.clone()))
},
Arc::new(LocalSessionManager::default()),
StreamableHttpServerConfig::default(),
);
let router = Router::new()
.route("/connectors/directory/list", get(list_directory_connectors))
.route(
"/connectors/directory/list_workspace",
get(list_directory_connectors),
)
.with_state(state)
.nest_service("/api/codex/apps", mcp_service);
let handle = tokio::spawn(async move {
let _ = axum::serve(listener, router).await;
});
Ok((format!("http://{addr}"), handle))
}
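/// Returns the canned directory payload only when both the bearer token and the
/// chatgpt-account-id header match the expected values; otherwise 401.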
async fn list_directory_connectors(
State(state): State<Arc<AppsServerState>>,
headers: HeaderMap,
) -> Result<impl axum::response::IntoResponse, StatusCode> {
let bearer_ok = headers
.get(AUTHORIZATION)
.and_then(|value| value.to_str().ok())
.is_some_and(|value| value == state.expected_bearer);
let account_ok = headers
.get("chatgpt-account-id")
.and_then(|value| value.to_str().ok())
.is_some_and(|value| value == state.expected_account_id);
if bearer_ok && account_ok {
Ok(Json(state.response.clone()))
} else {
Err(StatusCode::UNAUTHORIZED)
}
}
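/// Builds a read-only MCP tool whose meta block carries the connector id and display name.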
fn connector_tool(connector_id: &str, connector_name: &str) -> Result<Tool> {
let schema: JsonObject = serde_json::from_value(json!({
"type": "object",
"additionalProperties": false
}))?;
let mut tool = Tool::new(
Cow::Owned(format!("connector_{connector_id}")),
Cow::Borrowed("Connector test tool"),
Arc::new(schema),
);
tool.annotations = Some(ToolAnnotations::new().read_only(true));
let mut meta = Meta::new();
meta.0
.insert("connector_id".to_string(), json!(connector_id));
meta.0
.insert("connector_name".to_string(), json!(connector_name));
tool.meta = Some(meta);
Ok(tool)
}
fn write_connectors_config(codex_home: &std::path::Path, base_url: &str) -> std::io::Result<()> {
let config_toml = codex_home.join("config.toml");
std::fs::write(
config_toml,
format!(
r#"
chatgpt_base_url = "{base_url}"
[features]
connectors = true
"#
),
)
}


@@ -1,7 +1,7 @@
//! Validates that the collaboration mode list endpoint returns the expected default presets.
//!
//! The test drives the app server through the MCP harness and asserts that the list response
//! includes the plan, coding, pair programming, and execute modes with their default model and reasoning
//! effort settings, which keeps the API contract visible in one place.
#![allow(clippy::unwrap_used)]
@@ -16,7 +16,8 @@ use codex_app_server_protocol::CollaborationModeListResponse;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::RequestId;
use codex_core::models_manager::test_builtin_collaboration_mode_presets;
use codex_protocol::config_types::CollaborationModeMask;
use codex_protocol::config_types::ModeKind;
use pretty_assertions::assert_eq;
use tempfile::TempDir;
use tokio::time::timeout;
@@ -44,8 +45,23 @@ async fn list_collaboration_modes_returns_presets() -> Result<()> {
let CollaborationModeListResponse { data: items } =
to_response::<CollaborationModeListResponse>(response)?;
let expected = [
plan_preset(),
code_preset(),
pair_programming_preset(),
execute_preset(),
];
assert_eq!(expected.len(), items.len());
for (expected_mask, actual_mask) in expected.iter().zip(items.iter()) {
assert_eq!(expected_mask.name, actual_mask.name);
assert_eq!(expected_mask.mode, actual_mask.mode);
assert_eq!(expected_mask.model, actual_mask.model);
assert_eq!(expected_mask.reasoning_effort, actual_mask.reasoning_effort);
assert_eq!(
expected_mask.developer_instructions,
actual_mask.developer_instructions
);
}
Ok(())
}
@@ -53,11 +69,11 @@ async fn list_collaboration_modes_returns_presets() -> Result<()> {
///
/// If the defaults change in the app server, this helper should be updated alongside the
/// contract, or the test will fail in ways that imply a regression in the API.
fn plan_preset() -> CollaborationModeMask {
let presets = test_builtin_collaboration_mode_presets();
presets
.into_iter()
.find(|p| p.mode == Some(ModeKind::Plan))
.unwrap()
}
@@ -65,11 +81,20 @@ fn plan_preset() -> CollaborationMode {
///
/// The helper keeps the expected model and reasoning defaults co-located with the test
/// so that mismatches point directly at the API contract being exercised.
fn pair_programming_preset() -> CollaborationModeMask {
let presets = test_builtin_collaboration_mode_presets();
presets
.into_iter()
.find(|p| p.mode == Some(ModeKind::PairProgramming))
.unwrap()
}
/// Builds the code preset that the list response is expected to return.
fn code_preset() -> CollaborationModeMask {
let presets = test_builtin_collaboration_mode_presets();
presets
.into_iter()
.find(|p| p.mode == Some(ModeKind::Code))
.unwrap()
}
@@ -77,10 +102,10 @@ fn pair_programming_preset() -> CollaborationMode {
///
/// The execute preset uses a different reasoning effort to capture the higher-effort
/// execution contract the server currently exposes.
fn execute_preset() -> CollaborationModeMask {
let presets = test_builtin_collaboration_mode_presets();
presets
.into_iter()
.find(|p| p.mode == Some(ModeKind::Execute))
.unwrap()
}


@@ -0,0 +1,282 @@
//! End-to-end compaction flow tests.
//!
//! Phases:
//! 1) Arrange: mock responses/compact endpoints + config.
//! 2) Act: start a thread and submit multiple turns to trigger auto-compaction.
//! 3) Assert: verify item/started + item/completed notifications for context compaction.
#![expect(clippy::expect_used)]
use anyhow::Result;
use app_test_support::ChatGptAuthFixture;
use app_test_support::McpProcess;
use app_test_support::to_response;
use app_test_support::write_chatgpt_auth;
use app_test_support::write_mock_responses_config_toml;
use codex_app_server_protocol::ItemCompletedNotification;
use codex_app_server_protocol::ItemStartedNotification;
use codex_app_server_protocol::JSONRPCNotification;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::ThreadItem;
use codex_app_server_protocol::ThreadStartParams;
use codex_app_server_protocol::ThreadStartResponse;
use codex_app_server_protocol::TurnCompletedNotification;
use codex_app_server_protocol::TurnStartParams;
use codex_app_server_protocol::TurnStartResponse;
use codex_app_server_protocol::UserInput as V2UserInput;
use codex_core::auth::AuthCredentialsStoreMode;
use codex_core::features::Feature;
use codex_protocol::models::ContentItem;
use codex_protocol::models::ResponseItem;
use core_test_support::responses;
use core_test_support::skip_if_no_network;
use pretty_assertions::assert_eq;
use std::collections::BTreeMap;
use tempfile::TempDir;
use tokio::time::timeout;
const DEFAULT_READ_TIMEOUT: std::time::Duration = std::time::Duration::from_secs(10);
const AUTO_COMPACT_LIMIT: i64 = 1_000;
const COMPACT_PROMPT: &str = "Summarize the conversation.";
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn auto_compaction_local_emits_started_and_completed_items() -> Result<()> {
skip_if_no_network!(Ok(()));
let server = responses::start_mock_server().await;
let sse1 = responses::sse(vec![
responses::ev_assistant_message("m1", "FIRST_REPLY"),
responses::ev_completed_with_tokens("r1", 70_000),
]);
let sse2 = responses::sse(vec![
responses::ev_assistant_message("m2", "SECOND_REPLY"),
responses::ev_completed_with_tokens("r2", 330_000),
]);
let sse3 = responses::sse(vec![
responses::ev_assistant_message("m3", "LOCAL_SUMMARY"),
responses::ev_completed_with_tokens("r3", 200),
]);
let sse4 = responses::sse(vec![
responses::ev_assistant_message("m4", "FINAL_REPLY"),
responses::ev_completed_with_tokens("r4", 120),
]);
responses::mount_sse_sequence(&server, vec![sse1, sse2, sse3, sse4]).await;
let codex_home = TempDir::new()?;
write_mock_responses_config_toml(
codex_home.path(),
&server.uri(),
&BTreeMap::default(),
AUTO_COMPACT_LIMIT,
None,
"mock_provider",
COMPACT_PROMPT,
)?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
let thread_id = start_thread(&mut mcp).await?;
for message in ["first", "second", "third"] {
send_turn_and_wait(&mut mcp, &thread_id, message).await?;
}
let started = wait_for_context_compaction_started(&mut mcp).await?;
let completed = wait_for_context_compaction_completed(&mut mcp).await?;
let ThreadItem::ContextCompaction { id: started_id } = started.item else {
unreachable!("started item should be context compaction");
};
let ThreadItem::ContextCompaction { id: completed_id } = completed.item else {
unreachable!("completed item should be context compaction");
};
assert_eq!(started.thread_id, thread_id);
assert_eq!(completed.thread_id, thread_id);
assert_eq!(started_id, completed_id);
Ok(())
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn auto_compaction_remote_emits_started_and_completed_items() -> Result<()> {
skip_if_no_network!(Ok(()));
let server = responses::start_mock_server().await;
let sse1 = responses::sse(vec![
responses::ev_assistant_message("m1", "FIRST_REPLY"),
responses::ev_completed_with_tokens("r1", 70_000),
]);
let sse2 = responses::sse(vec![
responses::ev_assistant_message("m2", "SECOND_REPLY"),
responses::ev_completed_with_tokens("r2", 330_000),
]);
let sse3 = responses::sse(vec![
responses::ev_assistant_message("m3", "FINAL_REPLY"),
responses::ev_completed_with_tokens("r3", 120),
]);
let responses_log = responses::mount_sse_sequence(&server, vec![sse1, sse2, sse3]).await;
let compacted_history = vec![
ResponseItem::Message {
id: None,
role: "assistant".to_string(),
content: vec![ContentItem::OutputText {
text: "REMOTE_COMPACT_SUMMARY".to_string(),
}],
end_turn: None,
},
ResponseItem::Compaction {
encrypted_content: "ENCRYPTED_COMPACTION_SUMMARY".to_string(),
},
];
let compact_mock = responses::mount_compact_json_once(
&server,
serde_json::json!({ "output": compacted_history }),
)
.await;
let codex_home = TempDir::new()?;
let mut features = BTreeMap::default();
features.insert(Feature::RemoteCompaction, true);
write_mock_responses_config_toml(
codex_home.path(),
&server.uri(),
&features,
AUTO_COMPACT_LIMIT,
Some(true),
"openai",
COMPACT_PROMPT,
)?;
write_chatgpt_auth(
codex_home.path(),
ChatGptAuthFixture::new("access-chatgpt").plan_type("pro"),
AuthCredentialsStoreMode::File,
)?;
let server_base_url = format!("{}/v1", server.uri());
let mut mcp = McpProcess::new_with_env(
codex_home.path(),
&[
("OPENAI_BASE_URL", Some(server_base_url.as_str())),
("OPENAI_API_KEY", None),
],
)
.await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
let thread_id = start_thread(&mut mcp).await?;
for message in ["first", "second", "third"] {
send_turn_and_wait(&mut mcp, &thread_id, message).await?;
}
let started = wait_for_context_compaction_started(&mut mcp).await?;
let completed = wait_for_context_compaction_completed(&mut mcp).await?;
let ThreadItem::ContextCompaction { id: started_id } = started.item else {
unreachable!("started item should be context compaction");
};
let ThreadItem::ContextCompaction { id: completed_id } = completed.item else {
unreachable!("completed item should be context compaction");
};
assert_eq!(started.thread_id, thread_id);
assert_eq!(completed.thread_id, thread_id);
assert_eq!(started_id, completed_id);
let compact_requests = compact_mock.requests();
assert_eq!(compact_requests.len(), 1);
assert_eq!(compact_requests[0].path(), "/v1/responses/compact");
let response_requests = responses_log.requests();
assert_eq!(response_requests.len(), 3);
Ok(())
}
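/// Starts a thread against the mock model and returns its id.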
async fn start_thread(mcp: &mut McpProcess) -> Result<String> {
let thread_id = mcp
.send_thread_start_request(ThreadStartParams {
model: Some("mock-model".to_string()),
..Default::default()
})
.await?;
let thread_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(thread_id)),
)
.await??;
let ThreadStartResponse { thread, .. } = to_response::<ThreadStartResponse>(thread_resp)?;
Ok(thread.id)
}
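/// Submits a user turn and blocks until the matching turn/completed notification arrives.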
async fn send_turn_and_wait(mcp: &mut McpProcess, thread_id: &str, text: &str) -> Result<String> {
let turn_id = mcp
.send_turn_start_request(TurnStartParams {
thread_id: thread_id.to_string(),
input: vec![V2UserInput::Text {
text: text.to_string(),
text_elements: Vec::new(),
}],
..Default::default()
})
.await?;
let turn_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(turn_id)),
)
.await??;
let TurnStartResponse { turn } = to_response::<TurnStartResponse>(turn_resp)?;
wait_for_turn_completed(mcp, &turn.id).await?;
Ok(turn.id)
}
async fn wait_for_turn_completed(mcp: &mut McpProcess, turn_id: &str) -> Result<()> {
loop {
let notification: JSONRPCNotification = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("turn/completed"),
)
.await??;
let completed: TurnCompletedNotification =
serde_json::from_value(notification.params.clone().expect("turn/completed params"))?;
if completed.turn.id == turn_id {
return Ok(());
}
}
}
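/// Drains item/started notifications until one carries a context-compaction item.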
async fn wait_for_context_compaction_started(
mcp: &mut McpProcess,
) -> Result<ItemStartedNotification> {
loop {
let notification: JSONRPCNotification = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("item/started"),
)
.await??;
let started: ItemStartedNotification =
serde_json::from_value(notification.params.clone().expect("item/started params"))?;
if let ThreadItem::ContextCompaction { .. } = started.item {
return Ok(started);
}
}
}
async fn wait_for_context_compaction_completed(
mcp: &mut McpProcess,
) -> Result<ItemCompletedNotification> {
loop {
let notification: JSONRPCNotification = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("item/completed"),
)
.await??;
let completed: ItemCompletedNotification =
serde_json::from_value(notification.params.clone().expect("item/completed params"))?;
if let ThreadItem::ContextCompaction { .. } = completed.item {
return Ok(completed);
}
}
}


@@ -18,7 +18,10 @@ use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::SandboxMode;
use codex_app_server_protocol::ToolsV2;
use codex_app_server_protocol::WriteStatus;
use codex_core::config::set_project_trust_level;
use codex_core::config_loader::SYSTEM_CONFIG_TOML_FILE_UNIX;
use codex_protocol::config_types::TrustLevel;
use codex_protocol::openai_models::ReasoningEffort;
use codex_utils_absolute_path::AbsolutePathBuf;
use pretty_assertions::assert_eq;
use serde_json::json;
@@ -53,6 +56,7 @@ sandbox_mode = "workspace-write"
let request_id = mcp
.send_config_read_request(ConfigReadParams {
include_layers: true,
cwd: None,
})
.await?;
let resp: JSONRPCResponse = timeout(
@@ -101,6 +105,7 @@ view_image = false
let request_id = mcp
.send_config_read_request(ConfigReadParams {
include_layers: true,
cwd: None,
})
.await?;
let resp: JSONRPCResponse = timeout(
@@ -141,6 +146,52 @@ view_image = false
Ok(())
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn config_read_includes_project_layers_for_cwd() -> Result<()> {
let codex_home = TempDir::new()?;
write_config(&codex_home, r#"model = "gpt-user""#)?;
let workspace = TempDir::new()?;
let project_config_dir = workspace.path().join(".codex");
std::fs::create_dir_all(&project_config_dir)?;
std::fs::write(
project_config_dir.join("config.toml"),
r#"
model_reasoning_effort = "high"
"#,
)?;
set_project_trust_level(codex_home.path(), workspace.path(), TrustLevel::Trusted)?;
let project_config = AbsolutePathBuf::try_from(project_config_dir)?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
let request_id = mcp
.send_config_read_request(ConfigReadParams {
include_layers: true,
cwd: Some(workspace.path().to_string_lossy().into_owned()),
})
.await?;
let resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(request_id)),
)
.await??;
let ConfigReadResponse {
config, origins, ..
} = to_response(resp)?;
assert_eq!(config.model_reasoning_effort, Some(ReasoningEffort::High));
assert_eq!(
origins.get("model_reasoning_effort").expect("origin").name,
ConfigLayerSource::Project {
dot_codex_folder: project_config
}
);
Ok(())
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn config_read_includes_system_layer_and_overrides() -> Result<()> {
let codex_home = TempDir::new()?;
@@ -195,6 +246,7 @@ writable_roots = [{}]
let request_id = mcp
.send_config_read_request(ConfigReadParams {
include_layers: true,
cwd: None,
})
.await?;
let resp: JSONRPCResponse = timeout(
@@ -281,6 +333,7 @@ model = "gpt-old"
let read_id = mcp
.send_config_read_request(ConfigReadParams {
include_layers: false,
cwd: None,
})
.await?;
let read_resp: JSONRPCResponse = timeout(
@@ -315,6 +368,7 @@ model = "gpt-old"
let verify_id = mcp
.send_config_read_request(ConfigReadParams {
include_layers: false,
cwd: None,
})
.await?;
let verify_resp: JSONRPCResponse = timeout(
@@ -411,6 +465,7 @@ async fn config_batch_write_applies_multiple_edits() -> Result<()> {
let read_id = mcp
.send_config_read_request(ConfigReadParams {
include_layers: false,
cwd: None,
})
.await?;
let read_resp: JSONRPCResponse = timeout(


@@ -0,0 +1,286 @@
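//! Dynamic tool tests: thread/start can register caller-provided tool specs, and calls
//! against them round-trip through the app server and back to the model.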
use anyhow::Context;
use anyhow::Result;
use app_test_support::McpProcess;
use app_test_support::create_final_assistant_message_sse_response;
use app_test_support::create_mock_responses_server_sequence_unchecked;
use app_test_support::to_response;
use codex_app_server_protocol::DynamicToolCallParams;
use codex_app_server_protocol::DynamicToolCallResponse;
use codex_app_server_protocol::DynamicToolSpec;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::ServerRequest;
use codex_app_server_protocol::ThreadStartParams;
use codex_app_server_protocol::ThreadStartResponse;
use codex_app_server_protocol::TurnStartParams;
use codex_app_server_protocol::TurnStartResponse;
use codex_app_server_protocol::UserInput as V2UserInput;
use core_test_support::responses;
use pretty_assertions::assert_eq;
use serde_json::Value;
use serde_json::json;
use std::path::Path;
use std::time::Duration;
use tempfile::TempDir;
use tokio::time::timeout;
use wiremock::MockServer;
const DEFAULT_READ_TIMEOUT: Duration = Duration::from_secs(10);
/// Ensures dynamic tool specs are serialized into the model request payload.
#[tokio::test]
async fn thread_start_injects_dynamic_tools_into_model_requests() -> Result<()> {
let responses = vec![create_final_assistant_message_sse_response("Done")?];
let server = create_mock_responses_server_sequence_unchecked(responses).await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri())?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
// Use a minimal JSON schema so we can assert the tool payload round-trips.
let input_schema = json!({
"type": "object",
"properties": {
"city": { "type": "string" }
},
"required": ["city"],
"additionalProperties": false,
});
let dynamic_tool = DynamicToolSpec {
name: "demo_tool".to_string(),
description: "Demo dynamic tool".to_string(),
input_schema: input_schema.clone(),
};
// Thread start injects dynamic tools into the thread's tool registry.
let thread_req = mcp
.send_thread_start_request(ThreadStartParams {
dynamic_tools: Some(vec![dynamic_tool.clone()]),
..Default::default()
})
.await?;
let thread_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(thread_req)),
)
.await??;
let ThreadStartResponse { thread, .. } = to_response::<ThreadStartResponse>(thread_resp)?;
// Start a turn so a model request is issued.
let turn_req = mcp
.send_turn_start_request(TurnStartParams {
thread_id: thread.id.clone(),
input: vec![V2UserInput::Text {
text: "Hello".to_string(),
text_elements: Vec::new(),
}],
..Default::default()
})
.await?;
let turn_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(turn_req)),
)
.await??;
let _turn: TurnStartResponse = to_response::<TurnStartResponse>(turn_resp)?;
timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("turn/completed"),
)
.await??;
// Inspect the captured model request to assert the tool spec made it through.
let bodies = responses_bodies(&server).await?;
let body = bodies
.first()
.context("expected at least one responses request")?;
let tool = find_tool(body, &dynamic_tool.name)
.context("expected dynamic tool to be injected into request")?;
assert_eq!(
tool.get("description"),
Some(&Value::String(dynamic_tool.description.clone()))
);
assert_eq!(tool.get("parameters"), Some(&input_schema));
Ok(())
}
/// Exercises the full dynamic tool call path (server request, client response, model output).
#[tokio::test]
async fn dynamic_tool_call_round_trip_sends_output_to_model() -> Result<()> {
let call_id = "dyn-call-1";
let tool_name = "demo_tool";
let tool_args = json!({ "city": "Paris" });
let tool_call_arguments = serde_json::to_string(&tool_args)?;
// First response triggers a dynamic tool call, second closes the turn.
let responses = vec![
responses::sse(vec![
responses::ev_response_created("resp-1"),
responses::ev_function_call(call_id, tool_name, &tool_call_arguments),
responses::ev_completed("resp-1"),
]),
create_final_assistant_message_sse_response("Done")?,
];
let server = create_mock_responses_server_sequence_unchecked(responses).await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri())?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
let dynamic_tool = DynamicToolSpec {
name: tool_name.to_string(),
description: "Demo dynamic tool".to_string(),
input_schema: json!({
"type": "object",
"properties": {
"city": { "type": "string" }
},
"required": ["city"],
"additionalProperties": false,
}),
};
let thread_req = mcp
.send_thread_start_request(ThreadStartParams {
dynamic_tools: Some(vec![dynamic_tool]),
..Default::default()
})
.await?;
let thread_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(thread_req)),
)
.await??;
let ThreadStartResponse { thread, .. } = to_response::<ThreadStartResponse>(thread_resp)?;
// Start a turn so the tool call is emitted.
let turn_req = mcp
.send_turn_start_request(TurnStartParams {
thread_id: thread.id.clone(),
input: vec![V2UserInput::Text {
text: "Run the tool".to_string(),
text_elements: Vec::new(),
}],
..Default::default()
})
.await?;
let turn_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(turn_req)),
)
.await??;
let TurnStartResponse { turn } = to_response::<TurnStartResponse>(turn_resp)?;
// Read the tool call request from the app server.
let request = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_request_message(),
)
.await??;
let (request_id, params) = match request {
ServerRequest::DynamicToolCall { request_id, params } => (request_id, params),
other => panic!("expected DynamicToolCall request, got {other:?}"),
};
let expected = DynamicToolCallParams {
thread_id: thread.id,
turn_id: turn.id,
call_id: call_id.to_string(),
tool: tool_name.to_string(),
arguments: tool_args.clone(),
};
assert_eq!(params, expected);
// Respond to the tool call so the model receives a function_call_output.
let response = DynamicToolCallResponse {
output: "dynamic-ok".to_string(),
success: true,
};
mcp.send_response(request_id, serde_json::to_value(response)?)
.await?;
timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("turn/completed"),
)
.await??;
let bodies = responses_bodies(&server).await?;
let output = bodies
.iter()
.find_map(|body| function_call_output_text(body, call_id))
.context("expected function_call_output in follow-up request")?;
assert_eq!(output, "dynamic-ok");
Ok(())
}
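/// Collects the JSON bodies of every captured request to the /responses endpoint.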
async fn responses_bodies(server: &MockServer) -> Result<Vec<Value>> {
let requests = server
.received_requests()
.await
.context("failed to fetch received requests")?;
requests
.into_iter()
.filter(|req| req.url.path().ends_with("/responses"))
.map(|req| {
req.body_json::<Value>()
.context("request body should be JSON")
})
.collect()
}
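// The captured /responses bodies that the helpers below navigate look roughly like
// this (shape inferred from the assertions in the tests above, not an exact wire
// payload):
//   { "tools": [{ "name": "demo_tool", "description": "...", "parameters": { ... } }],
//     "input": [{ "type": "function_call_output", "call_id": "dyn-call-1", "output": "dynamic-ok" }] }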
fn find_tool<'a>(body: &'a Value, name: &str) -> Option<&'a Value> {
body.get("tools")
.and_then(Value::as_array)
.and_then(|tools| {
tools
.iter()
.find(|tool| tool.get("name").and_then(Value::as_str) == Some(name))
})
}
fn function_call_output_text(body: &Value, call_id: &str) -> Option<String> {
body.get("input")
.and_then(Value::as_array)
.and_then(|items| {
items.iter().find(|item| {
item.get("type").and_then(Value::as_str) == Some("function_call_output")
&& item.get("call_id").and_then(Value::as_str) == Some(call_id)
})
})
.and_then(|item| item.get("output"))
.and_then(Value::as_str)
.map(str::to_string)
}
fn create_config_toml(codex_home: &Path, server_uri: &str) -> std::io::Result<()> {
let config_toml = codex_home.join("config.toml");
std::fs::write(
config_toml,
format!(
r#"
model = "mock-model"
approval_policy = "never"
sandbox_mode = "read-only"
model_provider = "mock_provider"
[model_providers.mock_provider]
name = "Mock provider for test"
base_url = "{server_uri}/v1"
wire_api = "responses"
request_max_retries = 0
stream_max_retries = 0
"#
),
)
}


@@ -1,10 +1,14 @@
mod account;
mod analytics;
mod app_list;
mod collaboration_mode_list;
mod compaction;
mod config_rpc;
mod dynamic_tools;
mod initialize;
mod model_list;
mod output_schema;
mod plan_item;
mod rate_limits;
mod request_user_input;
mod review;
@@ -12,8 +16,10 @@ mod thread_archive;
mod thread_fork;
mod thread_list;
mod thread_loaded_list;
mod thread_read;
mod thread_resume;
mod thread_rollback;
mod thread_start;
mod thread_unarchive;
mod turn_interrupt;
mod turn_start;


@@ -72,6 +72,7 @@ async fn list_models_returns_all_models_with_large_limit() -> Result<()> {
},
],
default_reasoning_effort: ReasoningEffort::Medium,
supports_personality: false,
is_default: true,
},
Model {
@@ -99,6 +100,7 @@ async fn list_models_returns_all_models_with_large_limit() -> Result<()> {
},
],
default_reasoning_effort: ReasoningEffort::Medium,
supports_personality: false,
is_default: false,
},
Model {
@@ -118,6 +120,7 @@ async fn list_models_returns_all_models_with_large_limit() -> Result<()> {
},
],
default_reasoning_effort: ReasoningEffort::Medium,
supports_personality: false,
is_default: false,
},
Model {
@@ -151,6 +154,7 @@ async fn list_models_returns_all_models_with_large_limit() -> Result<()> {
},
],
default_reasoning_effort: ReasoningEffort::Medium,
supports_personality: false,
is_default: false,
},
];


@@ -0,0 +1,257 @@
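//! Plan-mode turn tests: a <proposed_plan> block in the assistant message should be
//! streamed and surfaced as a Plan item, while replies without one emit no plan item.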
use anyhow::Result;
use anyhow::anyhow;
use app_test_support::McpProcess;
use app_test_support::create_mock_responses_server_sequence;
use app_test_support::to_response;
use codex_app_server_protocol::ItemCompletedNotification;
use codex_app_server_protocol::ItemStartedNotification;
use codex_app_server_protocol::JSONRPCMessage;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::PlanDeltaNotification;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::ThreadItem;
use codex_app_server_protocol::ThreadStartParams;
use codex_app_server_protocol::ThreadStartResponse;
use codex_app_server_protocol::TurnCompletedNotification;
use codex_app_server_protocol::TurnStartParams;
use codex_app_server_protocol::TurnStartResponse;
use codex_app_server_protocol::TurnStatus;
use codex_app_server_protocol::UserInput as V2UserInput;
use codex_core::features::FEATURES;
use codex_core::features::Feature;
use codex_protocol::config_types::CollaborationMode;
use codex_protocol::config_types::ModeKind;
use codex_protocol::config_types::Settings;
use core_test_support::responses;
use core_test_support::skip_if_no_network;
use pretty_assertions::assert_eq;
use std::collections::BTreeMap;
use std::path::Path;
use tempfile::TempDir;
use tokio::time::timeout;
const DEFAULT_READ_TIMEOUT: std::time::Duration = std::time::Duration::from_secs(10);
#[tokio::test]
async fn plan_mode_uses_proposed_plan_block_for_plan_item() -> Result<()> {
skip_if_no_network!(Ok(()));
let plan_block = "<proposed_plan>\n# Final plan\n- first\n- second\n</proposed_plan>\n";
let full_message = format!("Preface\n{plan_block}Postscript");
let responses = vec![responses::sse(vec![
responses::ev_response_created("resp-1"),
responses::ev_message_item_added("msg-1", ""),
responses::ev_output_text_delta(&full_message),
responses::ev_assistant_message("msg-1", &full_message),
responses::ev_completed("resp-1"),
])];
let server = create_mock_responses_server_sequence(responses).await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri())?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
let turn = start_plan_mode_turn(&mut mcp).await?;
let (_, completed_items, plan_deltas, turn_completed) =
collect_turn_notifications(&mut mcp).await?;
assert_eq!(turn_completed.turn.id, turn.id);
assert_eq!(turn_completed.turn.status, TurnStatus::Completed);
let expected_plan = ThreadItem::Plan {
id: format!("{}-plan", turn.id),
text: "# Final plan\n- first\n- second\n".to_string(),
};
let expected_plan_id = format!("{}-plan", turn.id);
let streamed_plan = plan_deltas
.iter()
.map(|delta| delta.delta.as_str())
.collect::<String>();
assert_eq!(streamed_plan, "# Final plan\n- first\n- second\n");
assert!(
plan_deltas
.iter()
.all(|delta| delta.item_id == expected_plan_id)
);
let plan_items = completed_items
.iter()
.filter_map(|item| match item {
ThreadItem::Plan { .. } => Some(item.clone()),
_ => None,
})
.collect::<Vec<_>>();
assert_eq!(plan_items, vec![expected_plan]);
assert!(
completed_items
.iter()
.any(|item| matches!(item, ThreadItem::AgentMessage { .. })),
"agent message items should still be emitted alongside the plan item"
);
Ok(())
}
#[tokio::test]
async fn plan_mode_without_proposed_plan_does_not_emit_plan_item() -> Result<()> {
skip_if_no_network!(Ok(()));
let responses = vec![responses::sse(vec![
responses::ev_response_created("resp-1"),
responses::ev_assistant_message("msg-1", "Done"),
responses::ev_completed("resp-1"),
])];
let server = create_mock_responses_server_sequence(responses).await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri())?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
let _turn = start_plan_mode_turn(&mut mcp).await?;
let (_, completed_items, plan_deltas, _) = collect_turn_notifications(&mut mcp).await?;
let has_plan_item = completed_items
.iter()
.any(|item| matches!(item, ThreadItem::Plan { .. }));
assert!(!has_plan_item);
assert!(plan_deltas.is_empty());
Ok(())
}
async fn start_plan_mode_turn(mcp: &mut McpProcess) -> Result<codex_app_server_protocol::Turn> {
let thread_req = mcp
.send_thread_start_request(ThreadStartParams {
model: Some("mock-model".to_string()),
..Default::default()
})
.await?;
let thread_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(thread_req)),
)
.await??;
let thread = to_response::<ThreadStartResponse>(thread_resp)?.thread;
let collaboration_mode = CollaborationMode {
mode: ModeKind::Plan,
settings: Settings {
model: "mock-model".to_string(),
reasoning_effort: None,
developer_instructions: None,
},
};
let turn_req = mcp
.send_turn_start_request(TurnStartParams {
thread_id: thread.id,
input: vec![V2UserInput::Text {
text: "Plan this".to_string(),
text_elements: Vec::new(),
}],
collaboration_mode: Some(collaboration_mode),
..Default::default()
})
.await?;
let turn_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(turn_req)),
)
.await??;
Ok(to_response::<TurnStartResponse>(turn_resp)?.turn)
}
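/// Gathers item/started, item/completed, and item/plan/delta notifications until
/// turn/completed arrives, then returns them alongside the completion payload.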
async fn collect_turn_notifications(
mcp: &mut McpProcess,
) -> Result<(
Vec<ThreadItem>,
Vec<ThreadItem>,
Vec<PlanDeltaNotification>,
TurnCompletedNotification,
)> {
let mut started_items = Vec::new();
let mut completed_items = Vec::new();
let mut plan_deltas = Vec::new();
loop {
let message = timeout(DEFAULT_READ_TIMEOUT, mcp.read_next_message()).await??;
let JSONRPCMessage::Notification(notification) = message else {
continue;
};
match notification.method.as_str() {
"item/started" => {
let params = notification
.params
.ok_or_else(|| anyhow!("item/started notifications must include params"))?;
let payload: ItemStartedNotification = serde_json::from_value(params)?;
started_items.push(payload.item);
}
"item/completed" => {
let params = notification
.params
.ok_or_else(|| anyhow!("item/completed notifications must include params"))?;
let payload: ItemCompletedNotification = serde_json::from_value(params)?;
completed_items.push(payload.item);
}
"item/plan/delta" => {
let params = notification
.params
.ok_or_else(|| anyhow!("item/plan/delta notifications must include params"))?;
let payload: PlanDeltaNotification = serde_json::from_value(params)?;
plan_deltas.push(payload);
}
"turn/completed" => {
let params = notification
.params
.ok_or_else(|| anyhow!("turn/completed notifications must include params"))?;
let payload: TurnCompletedNotification = serde_json::from_value(params)?;
return Ok((started_items, completed_items, plan_deltas, payload));
}
_ => {}
}
}
}
fn create_config_toml(codex_home: &Path, server_uri: &str) -> std::io::Result<()> {
let features = BTreeMap::from([
(Feature::RemoteModels, false),
(Feature::CollaborationModes, true),
]);
let feature_entries = features
.into_iter()
.map(|(feature, enabled)| {
let key = FEATURES
.iter()
.find(|spec| spec.id == feature)
.map(|spec| spec.key)
.unwrap_or_else(|| panic!("missing feature key for {feature:?}"));
format!("{key} = {enabled}")
})
.collect::<Vec<_>>()
.join("\n");
let config_toml = codex_home.join("config.toml");
std::fs::write(
config_toml,
format!(
r#"
model = "mock-model"
approval_policy = "never"
sandbox_mode = "read-only"
model_provider = "mock_provider"
[features]
{feature_entries}
[model_providers.mock_provider]
name = "Mock provider for test"
base_url = "{server_uri}/v1"
wire_api = "responses"
request_max_retries = 0
stream_max_retries = 0
"#
),
)
}


@@ -13,6 +13,7 @@ use codex_app_server_protocol::TurnStartParams;
use codex_app_server_protocol::TurnStartResponse;
use codex_app_server_protocol::UserInput as V2UserInput;
use codex_protocol::config_types::CollaborationMode;
use codex_protocol::config_types::ModeKind;
use codex_protocol::config_types::Settings;
use codex_protocol::openai_models::ReasoningEffort;
use tokio::time::timeout;
@@ -54,11 +55,14 @@ async fn request_user_input_round_trip() -> Result<()> {
}],
model: Some("mock-model".to_string()),
effort: Some(ReasoningEffort::Medium),
collaboration_mode: Some(CollaborationMode {
mode: ModeKind::Plan,
settings: Settings {
model: "mock-model".to_string(),
reasoning_effort: Some(ReasoningEffort::Medium),
developer_instructions: None,
},
}),
..Default::default()
})
.await?;
@@ -87,7 +91,7 @@ async fn request_user_input_round_trip() -> Result<()> {
request_id,
serde_json::json!({
"answers": {
"confirm_path": { "selected": ["yes"], "other": serde_json::Value::Null }
"confirm_path": { "answers": ["yes"] }
}
}),
)


@@ -77,8 +77,9 @@ async fn thread_fork_creates_new_thread_and_emits_started() -> Result<()> {
assert_ne!(thread.id, conversation_id);
assert_eq!(thread.preview, preview);
assert_eq!(thread.model_provider, "mock_provider");
let thread_path = thread.path.clone().expect("thread path");
assert!(thread_path.is_absolute());
assert_ne!(thread_path, original_path);
assert!(thread.cwd.is_absolute());
assert_eq!(thread.source, SessionSource::VsCode);


@@ -1,6 +1,7 @@
use anyhow::Result;
use app_test_support::McpProcess;
use app_test_support::create_fake_rollout;
use app_test_support::create_fake_rollout_with_source;
use app_test_support::rollout_path;
use app_test_support::to_response;
use chrono::DateTime;
@@ -12,9 +13,15 @@ use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::SessionSource;
use codex_app_server_protocol::ThreadListResponse;
use codex_app_server_protocol::ThreadSortKey;
use codex_app_server_protocol::ThreadSourceKind;
use codex_core::ARCHIVED_SESSIONS_SUBDIR;
use codex_protocol::ThreadId;
use codex_protocol::protocol::GitInfo as CoreGitInfo;
use codex_protocol::protocol::SessionSource as CoreSessionSource;
use codex_protocol::protocol::SubAgentSource;
use pretty_assertions::assert_eq;
use std::cmp::Reverse;
use std::fs;
use std::fs::FileTimes;
use std::fs::OpenOptions;
use std::path::Path;
@@ -36,8 +43,10 @@ async fn list_threads(
cursor: Option<String>,
limit: Option<u32>,
providers: Option<Vec<String>>,
source_kinds: Option<Vec<ThreadSourceKind>>,
archived: Option<bool>,
) -> Result<ThreadListResponse> {
list_threads_with_sort(mcp, cursor, limit, providers, source_kinds, None, archived).await
}
async fn list_threads_with_sort(
@@ -45,7 +54,9 @@ async fn list_threads_with_sort(
cursor: Option<String>,
limit: Option<u32>,
providers: Option<Vec<String>>,
source_kinds: Option<Vec<ThreadSourceKind>>,
sort_key: Option<ThreadSortKey>,
archived: Option<bool>,
) -> Result<ThreadListResponse> {
let request_id = mcp
.send_thread_list_request(codex_app_server_protocol::ThreadListParams {
@@ -53,6 +64,8 @@ async fn list_threads_with_sort(
limit,
sort_key,
model_providers: providers,
source_kinds,
archived,
})
.await?;
let resp: JSONRPCResponse = timeout(
@@ -125,6 +138,8 @@ async fn thread_list_basic_empty() -> Result<()> {
None,
Some(10),
Some(vec!["mock_provider".to_string()]),
None,
None,
)
.await?;
assert!(data.is_empty());
@@ -187,6 +202,8 @@ async fn thread_list_pagination_next_cursor_none_on_last_page() -> Result<()> {
None,
Some(2),
Some(vec!["mock_provider".to_string()]),
None,
None,
)
.await?;
assert_eq!(data1.len(), 2);
@@ -211,6 +228,8 @@ async fn thread_list_pagination_next_cursor_none_on_last_page() -> Result<()> {
Some(cursor1),
Some(2),
Some(vec!["mock_provider".to_string()]),
None,
None,
)
.await?;
assert!(data2.len() <= 2);
@@ -260,6 +279,8 @@ async fn thread_list_respects_provider_filter() -> Result<()> {
None,
Some(10),
Some(vec!["other_provider".to_string()]),
None,
None,
)
.await?;
assert_eq!(data.len(), 1);
@@ -278,6 +299,207 @@ async fn thread_list_respects_provider_filter() -> Result<()> {
Ok(())
}
#[tokio::test]
async fn thread_list_empty_source_kinds_defaults_to_interactive_only() -> Result<()> {
let codex_home = TempDir::new()?;
create_minimal_config(codex_home.path())?;
let cli_id = create_fake_rollout(
codex_home.path(),
"2025-02-01T10-00-00",
"2025-02-01T10:00:00Z",
"CLI",
Some("mock_provider"),
None,
)?;
let exec_id = create_fake_rollout_with_source(
codex_home.path(),
"2025-02-01T11-00-00",
"2025-02-01T11:00:00Z",
"Exec",
Some("mock_provider"),
None,
CoreSessionSource::Exec,
)?;
let mut mcp = init_mcp(codex_home.path()).await?;
let ThreadListResponse { data, next_cursor } = list_threads(
&mut mcp,
None,
Some(10),
Some(vec!["mock_provider".to_string()]),
Some(Vec::new()),
None,
)
.await?;
assert_eq!(next_cursor, None);
let ids: Vec<_> = data.iter().map(|thread| thread.id.as_str()).collect();
assert_eq!(ids, vec![cli_id.as_str()]);
assert_ne!(cli_id, exec_id);
assert_eq!(data[0].source, SessionSource::Cli);
Ok(())
}
#[tokio::test]
async fn thread_list_filters_by_source_kind_subagent_thread_spawn() -> Result<()> {
let codex_home = TempDir::new()?;
create_minimal_config(codex_home.path())?;
let cli_id = create_fake_rollout(
codex_home.path(),
"2025-02-01T10-00-00",
"2025-02-01T10:00:00Z",
"CLI",
Some("mock_provider"),
None,
)?;
let parent_thread_id = ThreadId::from_string(&Uuid::new_v4().to_string())?;
let subagent_id = create_fake_rollout_with_source(
codex_home.path(),
"2025-02-01T11-00-00",
"2025-02-01T11:00:00Z",
"SubAgent",
Some("mock_provider"),
None,
CoreSessionSource::SubAgent(SubAgentSource::ThreadSpawn {
parent_thread_id,
depth: 1,
}),
)?;
let mut mcp = init_mcp(codex_home.path()).await?;
let ThreadListResponse { data, next_cursor } = list_threads(
&mut mcp,
None,
Some(10),
Some(vec!["mock_provider".to_string()]),
Some(vec![ThreadSourceKind::SubAgentThreadSpawn]),
None,
)
.await?;
assert_eq!(next_cursor, None);
let ids: Vec<_> = data.iter().map(|thread| thread.id.as_str()).collect();
assert_eq!(ids, vec![subagent_id.as_str()]);
assert_ne!(cli_id, subagent_id);
assert!(matches!(data[0].source, SessionSource::SubAgent(_)));
Ok(())
}
#[tokio::test]
async fn thread_list_filters_by_subagent_variant() -> Result<()> {
let codex_home = TempDir::new()?;
create_minimal_config(codex_home.path())?;
let parent_thread_id = ThreadId::from_string(&Uuid::new_v4().to_string())?;
let review_id = create_fake_rollout_with_source(
codex_home.path(),
"2025-02-02T09-00-00",
"2025-02-02T09:00:00Z",
"Review",
Some("mock_provider"),
None,
CoreSessionSource::SubAgent(SubAgentSource::Review),
)?;
let compact_id = create_fake_rollout_with_source(
codex_home.path(),
"2025-02-02T10-00-00",
"2025-02-02T10:00:00Z",
"Compact",
Some("mock_provider"),
None,
CoreSessionSource::SubAgent(SubAgentSource::Compact),
)?;
let spawn_id = create_fake_rollout_with_source(
codex_home.path(),
"2025-02-02T11-00-00",
"2025-02-02T11:00:00Z",
"Spawn",
Some("mock_provider"),
None,
CoreSessionSource::SubAgent(SubAgentSource::ThreadSpawn {
parent_thread_id,
depth: 1,
}),
)?;
let other_id = create_fake_rollout_with_source(
codex_home.path(),
"2025-02-02T12-00-00",
"2025-02-02T12:00:00Z",
"Other",
Some("mock_provider"),
None,
CoreSessionSource::SubAgent(SubAgentSource::Other("custom".to_string())),
)?;
let mut mcp = init_mcp(codex_home.path()).await?;
let review = list_threads(
&mut mcp,
None,
Some(10),
Some(vec!["mock_provider".to_string()]),
Some(vec![ThreadSourceKind::SubAgentReview]),
None,
)
.await?;
let review_ids: Vec<_> = review
.data
.iter()
.map(|thread| thread.id.as_str())
.collect();
assert_eq!(review_ids, vec![review_id.as_str()]);
let compact = list_threads(
&mut mcp,
None,
Some(10),
Some(vec!["mock_provider".to_string()]),
Some(vec![ThreadSourceKind::SubAgentCompact]),
None,
)
.await?;
let compact_ids: Vec<_> = compact
.data
.iter()
.map(|thread| thread.id.as_str())
.collect();
assert_eq!(compact_ids, vec![compact_id.as_str()]);
let spawn = list_threads(
&mut mcp,
None,
Some(10),
Some(vec!["mock_provider".to_string()]),
Some(vec![ThreadSourceKind::SubAgentThreadSpawn]),
None,
)
.await?;
let spawn_ids: Vec<_> = spawn.data.iter().map(|thread| thread.id.as_str()).collect();
assert_eq!(spawn_ids, vec![spawn_id.as_str()]);
let other = list_threads(
&mut mcp,
None,
Some(10),
Some(vec!["mock_provider".to_string()]),
Some(vec![ThreadSourceKind::SubAgentOther]),
None,
)
.await?;
let other_ids: Vec<_> = other.data.iter().map(|thread| thread.id.as_str()).collect();
assert_eq!(other_ids, vec![other_id.as_str()]);
Ok(())
}
#[tokio::test]
async fn thread_list_fetches_until_limit_or_exhausted() -> Result<()> {
let codex_home = TempDir::new()?;
@@ -309,6 +531,8 @@ async fn thread_list_fetches_until_limit_or_exhausted() -> Result<()> {
None,
Some(8),
Some(vec!["target_provider".to_string()]),
None,
None,
)
.await?;
assert_eq!(
@@ -353,6 +577,8 @@ async fn thread_list_enforces_max_limit() -> Result<()> {
None,
Some(200),
Some(vec!["mock_provider".to_string()]),
None,
None,
)
.await?;
assert_eq!(
@@ -398,6 +624,8 @@ async fn thread_list_stops_when_not_enough_filtered_results_exist() -> Result<()
None,
Some(10),
Some(vec!["target_provider".to_string()]),
None,
None,
)
.await?;
assert_eq!(
@@ -444,6 +672,8 @@ async fn thread_list_includes_git_info() -> Result<()> {
None,
Some(10),
Some(vec!["mock_provider".to_string()]),
None,
None,
)
.await?;
let thread = data
@@ -502,6 +732,8 @@ async fn thread_list_default_sorts_by_created_at() -> Result<()> {
Some(10),
Some(vec!["mock_provider".to_string()]),
None,
None,
None,
)
.await?;
@@ -561,7 +793,9 @@ async fn thread_list_sort_updated_at_orders_by_mtime() -> Result<()> {
None,
Some(10),
Some(vec!["mock_provider".to_string()]),
None,
Some(ThreadSortKey::UpdatedAt),
None,
)
.await?;
@@ -624,7 +858,9 @@ async fn thread_list_updated_at_paginates_with_cursor() -> Result<()> {
None,
Some(2),
Some(vec!["mock_provider".to_string()]),
None,
Some(ThreadSortKey::UpdatedAt),
None,
)
.await?;
let ids_page1: Vec<_> = page1.iter().map(|thread| thread.id.as_str()).collect();
@@ -639,7 +875,9 @@ async fn thread_list_updated_at_paginates_with_cursor() -> Result<()> {
Some(cursor1),
Some(2),
Some(vec!["mock_provider".to_string()]),
None,
Some(ThreadSortKey::UpdatedAt),
None,
)
.await?;
let ids_page2: Vec<_> = page2.iter().map(|thread| thread.id.as_str()).collect();
@@ -678,6 +916,8 @@ async fn thread_list_created_at_tie_breaks_by_uuid() -> Result<()> {
None,
Some(10),
Some(vec!["mock_provider".to_string()]),
None,
None,
)
.await?;
@@ -729,7 +969,9 @@ async fn thread_list_updated_at_tie_breaks_by_uuid() -> Result<()> {
None,
Some(10),
Some(vec!["mock_provider".to_string()]),
None,
Some(ThreadSortKey::UpdatedAt),
None,
)
.await?;
@@ -768,7 +1010,9 @@ async fn thread_list_updated_at_uses_mtime() -> Result<()> {
None,
Some(10),
Some(vec!["mock_provider".to_string()]),
None,
Some(ThreadSortKey::UpdatedAt),
None,
)
.await?;
@@ -786,6 +1030,67 @@ async fn thread_list_updated_at_uses_mtime() -> Result<()> {
Ok(())
}
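/// Archived rollouts are hidden by default and returned when archived = Some(true).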
#[tokio::test]
async fn thread_list_archived_filter() -> Result<()> {
let codex_home = TempDir::new()?;
create_minimal_config(codex_home.path())?;
let active_id = create_fake_rollout(
codex_home.path(),
"2025-03-01T10-00-00",
"2025-03-01T10:00:00Z",
"Active",
Some("mock_provider"),
None,
)?;
let archived_id = create_fake_rollout(
codex_home.path(),
"2025-03-01T09-00-00",
"2025-03-01T09:00:00Z",
"Archived",
Some("mock_provider"),
None,
)?;
let archived_dir = codex_home.path().join(ARCHIVED_SESSIONS_SUBDIR);
fs::create_dir_all(&archived_dir)?;
let archived_source = rollout_path(codex_home.path(), "2025-03-01T09-00-00", &archived_id);
let archived_dest = archived_dir.join(
archived_source
.file_name()
.expect("archived rollout should have a file name"),
);
fs::rename(&archived_source, &archived_dest)?;
let mut mcp = init_mcp(codex_home.path()).await?;
let ThreadListResponse { data, .. } = list_threads(
&mut mcp,
None,
Some(10),
Some(vec!["mock_provider".to_string()]),
None,
None,
)
.await?;
assert_eq!(data.len(), 1);
assert_eq!(data[0].id, active_id);
let ThreadListResponse { data, .. } = list_threads(
&mut mcp,
None,
Some(10),
Some(vec!["mock_provider".to_string()]),
None,
Some(true),
)
.await?;
assert_eq!(data.len(), 1);
assert_eq!(data[0].id, archived_id);
Ok(())
}
#[tokio::test]
async fn thread_list_invalid_cursor_returns_error() -> Result<()> {
let codex_home = TempDir::new()?;
@@ -799,6 +1104,8 @@ async fn thread_list_invalid_cursor_returns_error() -> Result<()> {
limit: Some(2),
sort_key: None,
model_providers: Some(vec!["mock_provider".to_string()]),
source_kinds: None,
archived: None,
})
.await?;
let error: JSONRPCError = timeout(


@@ -0,0 +1,159 @@
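//! thread/read tests covering the summary-only response and include_turns expansion.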
use anyhow::Result;
use app_test_support::McpProcess;
use app_test_support::create_fake_rollout_with_text_elements;
use app_test_support::create_mock_responses_server_repeating_assistant;
use app_test_support::to_response;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::SessionSource;
use codex_app_server_protocol::ThreadItem;
use codex_app_server_protocol::ThreadReadParams;
use codex_app_server_protocol::ThreadReadResponse;
use codex_app_server_protocol::TurnStatus;
use codex_app_server_protocol::UserInput;
use codex_protocol::user_input::ByteRange;
use codex_protocol::user_input::TextElement;
use pretty_assertions::assert_eq;
use std::path::Path;
use std::path::PathBuf;
use tempfile::TempDir;
use tokio::time::timeout;
const DEFAULT_READ_TIMEOUT: std::time::Duration = std::time::Duration::from_secs(10);
#[tokio::test]
async fn thread_read_returns_summary_without_turns() -> Result<()> {
let server = create_mock_responses_server_repeating_assistant("Done").await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri())?;
let preview = "Saved user message";
let text_elements = [TextElement::new(
ByteRange { start: 0, end: 5 },
Some("<note>".into()),
)];
let conversation_id = create_fake_rollout_with_text_elements(
codex_home.path(),
"2025-01-05T12-00-00",
"2025-01-05T12:00:00Z",
preview,
text_elements
.iter()
.map(|elem| serde_json::to_value(elem).expect("serialize text element"))
.collect(),
Some("mock_provider"),
None,
)?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
let read_id = mcp
.send_thread_read_request(ThreadReadParams {
thread_id: conversation_id.clone(),
include_turns: false,
})
.await?;
let read_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(read_id)),
)
.await??;
let ThreadReadResponse { thread } = to_response::<ThreadReadResponse>(read_resp)?;
assert_eq!(thread.id, conversation_id);
assert_eq!(thread.preview, preview);
assert_eq!(thread.model_provider, "mock_provider");
assert!(thread.path.as_ref().expect("thread path").is_absolute());
assert_eq!(thread.cwd, PathBuf::from("/"));
assert_eq!(thread.cli_version, "0.0.0");
assert_eq!(thread.source, SessionSource::Cli);
assert_eq!(thread.git_info, None);
assert_eq!(thread.turns.len(), 0);
Ok(())
}
#[tokio::test]
async fn thread_read_can_include_turns() -> Result<()> {
let server = create_mock_responses_server_repeating_assistant("Done").await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri())?;
let preview = "Saved user message";
let text_elements = vec![TextElement::new(
ByteRange { start: 0, end: 5 },
Some("<note>".into()),
)];
let conversation_id = create_fake_rollout_with_text_elements(
codex_home.path(),
"2025-01-05T12-00-00",
"2025-01-05T12:00:00Z",
preview,
text_elements
.iter()
.map(|elem| serde_json::to_value(elem).expect("serialize text element"))
.collect(),
Some("mock_provider"),
None,
)?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
let read_id = mcp
.send_thread_read_request(ThreadReadParams {
thread_id: conversation_id.clone(),
include_turns: true,
})
.await?;
let read_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(read_id)),
)
.await??;
let ThreadReadResponse { thread } = to_response::<ThreadReadResponse>(read_resp)?;
assert_eq!(thread.turns.len(), 1);
let turn = &thread.turns[0];
assert_eq!(turn.status, TurnStatus::Completed);
assert_eq!(turn.items.len(), 1, "expected user message item");
match &turn.items[0] {
ThreadItem::UserMessage { content, .. } => {
assert_eq!(
content,
&vec![UserInput::Text {
text: preview.to_string(),
text_elements: text_elements.clone().into_iter().map(Into::into).collect(),
}]
);
}
other => panic!("expected user message item, got {other:?}"),
}
Ok(())
}
// Helper to create a config.toml pointing at the mock model server.
fn create_config_toml(codex_home: &Path, server_uri: &str) -> std::io::Result<()> {
let config_toml = codex_home.join("config.toml");
std::fs::write(
config_toml,
format!(
r#"
model = "mock-model"
approval_policy = "never"
sandbox_mode = "read-only"
model_provider = "mock_provider"
[model_providers.mock_provider]
name = "Mock provider for test"
base_url = "{server_uri}/v1"
wire_api = "responses"
request_max_retries = 0
stream_max_retries = 0
"#
),
)
}


@@ -2,7 +2,9 @@ use anyhow::Result;
use app_test_support::McpProcess;
use app_test_support::create_fake_rollout_with_text_elements;
use app_test_support::create_mock_responses_server_repeating_assistant;
use app_test_support::rollout_path;
use app_test_support::to_response;
use chrono::Utc;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::SessionSource;
@@ -11,18 +13,25 @@ use codex_app_server_protocol::ThreadResumeParams;
use codex_app_server_protocol::ThreadResumeResponse;
use codex_app_server_protocol::ThreadStartParams;
use codex_app_server_protocol::ThreadStartResponse;
use codex_app_server_protocol::TurnStartParams;
use codex_app_server_protocol::TurnStatus;
use codex_app_server_protocol::UserInput;
use codex_protocol::config_types::Personality;
use codex_protocol::models::ContentItem;
use codex_protocol::models::ResponseItem;
use codex_protocol::user_input::ByteRange;
use codex_protocol::user_input::TextElement;
use core_test_support::responses;
use core_test_support::skip_if_no_network;
use pretty_assertions::assert_eq;
use std::fs::FileTimes;
use std::path::Path;
use std::path::PathBuf;
use tempfile::TempDir;
use tokio::time::timeout;
const DEFAULT_READ_TIMEOUT: std::time::Duration = std::time::Duration::from_secs(10);
const CODEX_5_2_INSTRUCTIONS_TEMPLATE_DEFAULT: &str = "You are Codex, a coding agent based on GPT-5. You and the user share the same workspace and collaborate to achieve the user's goals.";
#[tokio::test]
async fn thread_resume_returns_original_thread() -> Result<()> {
@@ -112,7 +121,7 @@ async fn thread_resume_returns_rollout_history() -> Result<()> {
assert_eq!(thread.id, conversation_id);
assert_eq!(thread.preview, preview);
assert_eq!(thread.model_provider, "mock_provider");
assert!(thread.path.is_absolute());
assert!(thread.path.as_ref().expect("thread path").is_absolute());
assert_eq!(thread.cwd, PathBuf::from("/"));
assert_eq!(thread.cli_version, "0.0.0");
assert_eq!(thread.source, SessionSource::Cli);
@@ -142,6 +151,116 @@ async fn thread_resume_returns_rollout_history() -> Result<()> {
Ok(())
}
#[tokio::test]
async fn thread_resume_without_overrides_does_not_change_updated_at_or_mtime() -> Result<()> {
let server = create_mock_responses_server_repeating_assistant("Done").await;
let codex_home = TempDir::new()?;
let rollout = setup_rollout_fixture(codex_home.path(), &server.uri())?;
let thread_id = rollout.conversation_id.clone();
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
let resume_id = mcp
.send_thread_resume_request(ThreadResumeParams {
thread_id: thread_id.clone(),
..Default::default()
})
.await?;
let resume_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(resume_id)),
)
.await??;
let ThreadResumeResponse { thread, .. } = to_response::<ThreadResumeResponse>(resume_resp)?;
assert_eq!(thread.updated_at, rollout.expected_updated_at);
let after_modified = std::fs::metadata(&rollout.rollout_file_path)?.modified()?;
assert_eq!(after_modified, rollout.before_modified);
let turn_id = mcp
.send_turn_start_request(TurnStartParams {
thread_id,
input: vec![UserInput::Text {
text: "Hello".to_string(),
text_elements: Vec::new(),
}],
..Default::default()
})
.await?;
timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(turn_id)),
)
.await??;
timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("turn/completed"),
)
.await??;
let after_turn_modified = std::fs::metadata(&rollout.rollout_file_path)?.modified()?;
assert!(after_turn_modified > rollout.before_modified);
Ok(())
}
#[tokio::test]
async fn thread_resume_with_overrides_defers_updated_at_until_turn_start() -> Result<()> {
let server = create_mock_responses_server_repeating_assistant("Done").await;
let codex_home = TempDir::new()?;
let rollout = setup_rollout_fixture(codex_home.path(), &server.uri())?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
let resume_id = mcp
.send_thread_resume_request(ThreadResumeParams {
thread_id: rollout.conversation_id.clone(),
model: Some("mock-model".to_string()),
..Default::default()
})
.await?;
let resume_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(resume_id)),
)
.await??;
let ThreadResumeResponse { thread, .. } = to_response::<ThreadResumeResponse>(resume_resp)?;
assert_eq!(thread.updated_at, rollout.expected_updated_at);
let after_resume_modified = std::fs::metadata(&rollout.rollout_file_path)?.modified()?;
assert_eq!(after_resume_modified, rollout.before_modified);
let turn_id = mcp
.send_turn_start_request(TurnStartParams {
thread_id: rollout.conversation_id,
input: vec![UserInput::Text {
text: "Hello".to_string(),
text_elements: Vec::new(),
}],
..Default::default()
})
.await?;
timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(turn_id)),
)
.await??;
timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("turn/completed"),
)
.await??;
let after_turn_modified = std::fs::metadata(&rollout.rollout_file_path)?.modified()?;
assert!(after_turn_modified > rollout.before_modified);
Ok(())
}
#[tokio::test]
async fn thread_resume_prefers_path_over_thread_id() -> Result<()> {
let server = create_mock_responses_server_repeating_assistant("Done").await;
@@ -164,7 +283,7 @@ async fn thread_resume_prefers_path_over_thread_id() -> Result<()> {
.await??;
let ThreadStartResponse { thread, .. } = to_response::<ThreadStartResponse>(start_resp)?;
let thread_path = thread.path.clone();
let thread_path = thread.path.clone().expect("thread path");
let resume_id = mcp
.send_thread_resume_request(ThreadResumeParams {
thread_id: "not-a-valid-thread-id".to_string(),
@@ -218,6 +337,7 @@ async fn thread_resume_supports_history_and_overrides() -> Result<()> {
content: vec![ContentItem::InputText {
text: history_text.to_string(),
}],
end_turn: None,
}];
// Resume with explicit history and override the model.
@@ -247,6 +367,91 @@ async fn thread_resume_supports_history_and_overrides() -> Result<()> {
Ok(())
}
#[tokio::test]
async fn thread_resume_accepts_personality_override() -> Result<()> {
skip_if_no_network!(Ok(()));
let server = responses::start_mock_server().await;
let body = responses::sse(vec![
responses::ev_response_created("resp-1"),
responses::ev_assistant_message("msg-1", "Done"),
responses::ev_completed("resp-1"),
]);
let response_mock = responses::mount_sse_once(&server, body).await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri())?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
let start_id = mcp
.send_thread_start_request(ThreadStartParams {
model: Some("gpt-5.2-codex".to_string()),
..Default::default()
})
.await?;
let start_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(start_id)),
)
.await??;
let ThreadStartResponse { thread, .. } = to_response::<ThreadStartResponse>(start_resp)?;
let resume_id = mcp
.send_thread_resume_request(ThreadResumeParams {
thread_id: thread.id.clone(),
model: Some("gpt-5.2-codex".to_string()),
personality: Some(Personality::Friendly),
..Default::default()
})
.await?;
let resume_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(resume_id)),
)
.await??;
let _resume: ThreadResumeResponse = to_response::<ThreadResumeResponse>(resume_resp)?;
let turn_id = mcp
.send_turn_start_request(TurnStartParams {
thread_id: thread.id,
input: vec![UserInput::Text {
text: "Hello".to_string(),
text_elements: Vec::new(),
}],
..Default::default()
})
.await?;
timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(turn_id)),
)
.await??;
timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("turn/completed"),
)
.await??;
let request = response_mock.single_request();
let developer_texts = request.message_input_texts("developer");
assert!(
developer_texts
.iter()
.any(|text| text.contains("<personality_spec>")),
"expected a personality update message in developer input, got {developer_texts:?}"
);
let instructions_text = request.instructions_text();
assert!(
instructions_text.contains(CODEX_5_2_INSTRUCTIONS_TEMPLATE_DEFAULT),
"expected default base instructions from history, got {instructions_text:?}"
);
Ok(())
}
// Helper to create a config.toml pointing at the mock model server.
fn create_config_toml(codex_home: &std::path::Path, server_uri: &str) -> std::io::Result<()> {
let config_toml = codex_home.join("config.toml");
@@ -254,12 +459,16 @@ fn create_config_toml(codex_home: &std::path::Path, server_uri: &str) -> std::io
config_toml,
format!(
r#"
model = "mock-model"
model = "gpt-5.2-codex"
approval_policy = "never"
sandbox_mode = "read-only"
model_provider = "mock_provider"
[features]
remote_models = false
personality = true
[model_providers.mock_provider]
name = "Mock provider for test"
base_url = "{server_uri}/v1"
@@ -270,3 +479,51 @@ stream_max_retries = 0
),
)
}
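/// Pin the rollout file's mtime to the given RFC 3339 timestamp. The file is
/// opened in append mode so setting the times never truncates its contents.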
fn set_rollout_mtime(path: &Path, updated_at_rfc3339: &str) -> Result<()> {
let parsed = chrono::DateTime::parse_from_rfc3339(updated_at_rfc3339)?.with_timezone(&Utc);
let times = FileTimes::new().set_modified(parsed.into());
std::fs::OpenOptions::new()
.append(true)
.open(path)?
.set_times(times)?;
Ok(())
}
struct RolloutFixture {
conversation_id: String,
rollout_file_path: PathBuf,
before_modified: std::time::SystemTime,
expected_updated_at: i64,
}
fn setup_rollout_fixture(codex_home: &Path, server_uri: &str) -> Result<RolloutFixture> {
create_config_toml(codex_home, server_uri)?;
let preview = "Saved user message";
let filename_ts = "2025-01-05T12-00-00";
let meta_rfc3339 = "2025-01-05T12:00:00Z";
let expected_updated_at_rfc3339 = "2025-01-07T00:00:00Z";
let conversation_id = create_fake_rollout_with_text_elements(
codex_home,
filename_ts,
meta_rfc3339,
preview,
Vec::new(),
Some("mock_provider"),
None,
)?;
let rollout_file_path = rollout_path(codex_home, filename_ts, &conversation_id);
set_rollout_mtime(rollout_file_path.as_path(), expected_updated_at_rfc3339)?;
let before_modified = std::fs::metadata(&rollout_file_path)?.modified()?;
let expected_updated_at = chrono::DateTime::parse_from_rfc3339(expected_updated_at_rfc3339)?
.with_timezone(&Utc)
.timestamp();
Ok(RolloutFixture {
conversation_id,
rollout_file_path,
before_modified,
expected_updated_at,
})
}


@@ -8,6 +8,9 @@ use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::ThreadStartParams;
use codex_app_server_protocol::ThreadStartResponse;
use codex_app_server_protocol::ThreadStartedNotification;
use codex_core::config::set_project_trust_level;
use codex_protocol::config_types::TrustLevel;
use codex_protocol::openai_models::ReasoningEffort;
use std::path::Path;
use tempfile::TempDir;
use tokio::time::timeout;
@@ -69,6 +72,47 @@ async fn thread_start_creates_thread_and_emits_started() -> Result<()> {
Ok(())
}
#[tokio::test]
async fn thread_start_respects_project_config_from_cwd() -> Result<()> {
let server = create_mock_responses_server_repeating_assistant("Done").await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri())?;
let workspace = TempDir::new()?;
let project_config_dir = workspace.path().join(".codex");
std::fs::create_dir_all(&project_config_dir)?;
std::fs::write(
project_config_dir.join("config.toml"),
r#"
model_reasoning_effort = "high"
"#,
)?;
set_project_trust_level(codex_home.path(), workspace.path(), TrustLevel::Trusted)?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
let req_id = mcp
.send_thread_start_request(ThreadStartParams {
cwd: Some(workspace.path().to_string_lossy().into_owned()),
..Default::default()
})
.await?;
let resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(req_id)),
)
.await??;
let ThreadStartResponse {
reasoning_effort, ..
} = to_response::<ThreadStartResponse>(resp)?;
assert_eq!(reasoning_effort, Some(ReasoningEffort::High));
Ok(())
}
// Helper to create a config.toml pointing at the mock model server.
fn create_config_toml(codex_home: &Path, server_uri: &str) -> std::io::Result<()> {
let config_toml = codex_home.join("config.toml");


@@ -0,0 +1,101 @@
use anyhow::Result;
use app_test_support::McpProcess;
use app_test_support::to_response;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::ThreadArchiveParams;
use codex_app_server_protocol::ThreadArchiveResponse;
use codex_app_server_protocol::ThreadStartParams;
use codex_app_server_protocol::ThreadStartResponse;
use codex_app_server_protocol::ThreadUnarchiveParams;
use codex_app_server_protocol::ThreadUnarchiveResponse;
use codex_core::find_archived_thread_path_by_id_str;
use codex_core::find_thread_path_by_id_str;
use std::path::Path;
use tempfile::TempDir;
use tokio::time::timeout;
const DEFAULT_READ_TIMEOUT: std::time::Duration = std::time::Duration::from_secs(30);
#[tokio::test]
async fn thread_unarchive_moves_rollout_back_into_sessions_directory() -> Result<()> {
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path())?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
let start_id = mcp
.send_thread_start_request(ThreadStartParams {
model: Some("mock-model".to_string()),
..Default::default()
})
.await?;
let start_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(start_id)),
)
.await??;
let ThreadStartResponse { thread, .. } = to_response::<ThreadStartResponse>(start_resp)?;
let rollout_path = find_thread_path_by_id_str(codex_home.path(), &thread.id)
.await?
.expect("expected rollout path for thread id to exist");
let archive_id = mcp
.send_thread_archive_request(ThreadArchiveParams {
thread_id: thread.id.clone(),
})
.await?;
let archive_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(archive_id)),
)
.await??;
let _: ThreadArchiveResponse = to_response::<ThreadArchiveResponse>(archive_resp)?;
let archived_path = find_archived_thread_path_by_id_str(codex_home.path(), &thread.id)
.await?
.expect("expected archived rollout path for thread id to exist");
let archived_path_display = archived_path.display();
assert!(
archived_path.exists(),
"expected {archived_path_display} to exist"
);
let unarchive_id = mcp
.send_thread_unarchive_request(ThreadUnarchiveParams {
thread_id: thread.id.clone(),
})
.await?;
let unarchive_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(unarchive_id)),
)
.await??;
let _: ThreadUnarchiveResponse = to_response::<ThreadUnarchiveResponse>(unarchive_resp)?;
let rollout_path_display = rollout_path.display();
assert!(
rollout_path.exists(),
"expected rollout path {rollout_path_display} to be restored"
);
assert!(
!archived_path.exists(),
"expected archived rollout path {archived_path_display} to be moved"
);
Ok(())
}
fn create_config_toml(codex_home: &Path) -> std::io::Result<()> {
let config_toml = codex_home.join("config.toml");
std::fs::write(config_toml, config_contents())
}
fn config_contents() -> &'static str {
r#"model = "mock-model"
approval_policy = "never"
sandbox_mode = "read-only"
"#
}


@@ -34,13 +34,18 @@ use codex_app_server_protocol::TurnStartResponse;
use codex_app_server_protocol::TurnStartedNotification;
use codex_app_server_protocol::TurnStatus;
use codex_app_server_protocol::UserInput as V2UserInput;
use codex_core::features::FEATURES;
use codex_core::features::Feature;
use codex_core::protocol_config_types::ReasoningSummary;
use codex_protocol::config_types::CollaborationMode;
use codex_protocol::config_types::ModeKind;
use codex_protocol::config_types::Personality;
use codex_protocol::config_types::Settings;
use codex_protocol::openai_models::ReasoningEffort;
use core_test_support::responses;
use core_test_support::skip_if_no_network;
use pretty_assertions::assert_eq;
use std::collections::BTreeMap;
use std::path::Path;
use tempfile::TempDir;
use tokio::time::timeout;
@@ -54,7 +59,12 @@ async fn turn_start_sends_originator_header() -> Result<()> {
let server = create_mock_responses_server_sequence_unchecked(responses).await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri(), "never")?;
create_config_toml(
codex_home.path(),
&server.uri(),
"never",
&BTreeMap::from([(Feature::Personality, true)]),
)?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(
@@ -124,7 +134,12 @@ async fn turn_start_emits_user_message_item_with_text_elements() -> Result<()> {
let server = create_mock_responses_server_sequence_unchecked(responses).await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri(), "never")?;
create_config_toml(
codex_home.path(),
&server.uri(),
"never",
&BTreeMap::from([(Feature::Personality, true)]),
)?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
@@ -211,7 +226,12 @@ async fn turn_start_emits_notifications_and_accepts_model_override() -> Result<(
let server = create_mock_responses_server_sequence_unchecked(responses).await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri(), "never")?;
create_config_toml(
codex_home.path(),
&server.uri(),
"never",
&BTreeMap::from([(Feature::Personality, true)]),
)?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
@@ -321,13 +341,19 @@ async fn turn_start_accepts_collaboration_mode_override_v2() -> Result<()> {
let response_mock = responses::mount_sse_once(&server, body).await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri(), "never")?;
create_config_toml(
codex_home.path(),
&server.uri(),
"never",
&BTreeMap::default(),
)?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
let thread_req = mcp
.send_thread_start_request(ThreadStartParams {
model: Some("gpt-5.2-codex".to_string()),
..Default::default()
})
.await?;
@@ -338,11 +364,14 @@ async fn turn_start_accepts_collaboration_mode_override_v2() -> Result<()> {
.await??;
let ThreadStartResponse { thread, .. } = to_response::<ThreadStartResponse>(thread_resp)?;
let collaboration_mode = CollaborationMode::Custom(Settings {
model: "mock-model-collab".to_string(),
reasoning_effort: Some(ReasoningEffort::High),
developer_instructions: None,
});
let collaboration_mode = CollaborationMode {
mode: ModeKind::Custom,
settings: Settings {
model: "mock-model-collab".to_string(),
reasoning_effort: Some(ReasoningEffort::High),
developer_instructions: None,
},
};
let turn_req = mcp
.send_turn_start_request(TurnStartParams {
@@ -379,6 +408,82 @@ async fn turn_start_accepts_collaboration_mode_override_v2() -> Result<()> {
Ok(())
}
#[tokio::test]
async fn turn_start_accepts_personality_override_v2() -> Result<()> {
skip_if_no_network!(Ok(()));
let server = responses::start_mock_server().await;
let body = responses::sse(vec![
responses::ev_response_created("resp-1"),
responses::ev_assistant_message("msg-1", "Done"),
responses::ev_completed("resp-1"),
]);
let response_mock = responses::mount_sse_once(&server, body).await;
let codex_home = TempDir::new()?;
create_config_toml(
codex_home.path(),
&server.uri(),
"never",
&BTreeMap::from([(Feature::Personality, true)]),
)?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
let thread_req = mcp
.send_thread_start_request(ThreadStartParams {
model: Some("exp-codex-personality".to_string()),
..Default::default()
})
.await?;
let thread_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(thread_req)),
)
.await??;
let ThreadStartResponse { thread, .. } = to_response::<ThreadStartResponse>(thread_resp)?;
let turn_req = mcp
.send_turn_start_request(TurnStartParams {
thread_id: thread.id.clone(),
input: vec![V2UserInput::Text {
text: "Hello".to_string(),
text_elements: Vec::new(),
}],
personality: Some(Personality::Friendly),
..Default::default()
})
.await?;
let turn_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(turn_req)),
)
.await??;
let _turn: TurnStartResponse = to_response::<TurnStartResponse>(turn_resp)?;
timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("turn/completed"),
)
.await??;
let request = response_mock.single_request();
let developer_texts = request.message_input_texts("developer");
if developer_texts.is_empty() {
eprintln!("request body: {}", request.body_json());
}
assert!(
developer_texts
.iter()
.any(|text| text.contains("<personality_spec>")),
"expected personality update message in developer input, got {developer_texts:?}"
);
Ok(())
}
#[tokio::test]
async fn turn_start_accepts_local_image_input() -> Result<()> {
// Two Codex turns hit the mock model (session start + turn/start).
@@ -391,7 +496,12 @@ async fn turn_start_accepts_local_image_input() -> Result<()> {
let server = create_mock_responses_server_sequence_unchecked(responses).await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri(), "never")?;
create_config_toml(
codex_home.path(),
&server.uri(),
"never",
&BTreeMap::default(),
)?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
@@ -466,7 +576,12 @@ async fn turn_start_exec_approval_toggle_v2() -> Result<()> {
];
let server = create_mock_responses_server_sequence(responses).await;
// Default approval is untrusted to force elicitation on first turn.
create_config_toml(codex_home.as_path(), &server.uri(), "untrusted")?;
create_config_toml(
codex_home.as_path(),
&server.uri(),
"untrusted",
&BTreeMap::default(),
)?;
let mut mcp = McpProcess::new(codex_home.as_path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
@@ -591,7 +706,12 @@ async fn turn_start_exec_approval_decline_v2() -> Result<()> {
create_final_assistant_message_sse_response("done")?,
];
let server = create_mock_responses_server_sequence(responses).await;
create_config_toml(codex_home.as_path(), &server.uri(), "untrusted")?;
create_config_toml(
codex_home.as_path(),
&server.uri(),
"untrusted",
&BTreeMap::default(),
)?;
let mut mcp = McpProcess::new(codex_home.as_path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
@@ -738,7 +858,12 @@ async fn turn_start_updates_sandbox_and_cwd_between_turns_v2() -> Result<()> {
create_final_assistant_message_sse_response("done second")?,
];
let server = create_mock_responses_server_sequence(responses).await;
create_config_toml(&codex_home, &server.uri(), "untrusted")?;
create_config_toml(
&codex_home,
&server.uri(),
"untrusted",
&BTreeMap::default(),
)?;
let mut mcp = McpProcess::new(&codex_home).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
@@ -776,6 +901,7 @@ async fn turn_start_updates_sandbox_and_cwd_between_turns_v2() -> Result<()> {
model: Some("mock-model".to_string()),
effort: Some(ReasoningEffort::Medium),
summary: Some(ReasoningSummary::Auto),
personality: None,
output_schema: None,
collaboration_mode: None,
})
@@ -806,6 +932,7 @@ async fn turn_start_updates_sandbox_and_cwd_between_turns_v2() -> Result<()> {
model: Some("mock-model".to_string()),
effort: Some(ReasoningEffort::Medium),
summary: Some(ReasoningSummary::Auto),
personality: None,
output_schema: None,
collaboration_mode: None,
})
@@ -876,7 +1003,12 @@ async fn turn_start_file_change_approval_v2() -> Result<()> {
create_final_assistant_message_sse_response("patch applied")?,
];
let server = create_mock_responses_server_sequence(responses).await;
create_config_toml(&codex_home, &server.uri(), "untrusted")?;
create_config_toml(
&codex_home,
&server.uri(),
"untrusted",
&BTreeMap::default(),
)?;
let mut mcp = McpProcess::new(&codex_home).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
@@ -1053,7 +1185,12 @@ async fn turn_start_file_change_approval_accept_for_session_persists_v2() -> Res
create_final_assistant_message_sse_response("patch 2 applied")?,
];
let server = create_mock_responses_server_sequence(responses).await;
create_config_toml(&codex_home, &server.uri(), "untrusted")?;
create_config_toml(
&codex_home,
&server.uri(),
"untrusted",
&BTreeMap::default(),
)?;
let mut mcp = McpProcess::new(&codex_home).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
@@ -1229,7 +1366,12 @@ async fn turn_start_file_change_approval_decline_v2() -> Result<()> {
create_final_assistant_message_sse_response("patch declined")?,
];
let server = create_mock_responses_server_sequence(responses).await;
create_config_toml(&codex_home, &server.uri(), "untrusted")?;
create_config_toml(
&codex_home,
&server.uri(),
"untrusted",
&BTreeMap::default(),
)?;
let mut mcp = McpProcess::new(&codex_home).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
@@ -1369,16 +1511,12 @@ async fn command_execution_notifications_include_process_id() -> Result<()> {
];
let server = create_mock_responses_server_sequence(responses).await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri(), "never")?;
let config_toml = codex_home.path().join("config.toml");
let mut config_contents = std::fs::read_to_string(&config_toml)?;
config_contents.push_str(
r#"
[features]
unified_exec = true
"#,
);
std::fs::write(&config_toml, config_contents)?;
create_config_toml(
codex_home.path(),
&server.uri(),
"never",
&BTreeMap::from([(Feature::UnifiedExec, true)]),
)?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
@@ -1502,7 +1640,24 @@ fn create_config_toml(
codex_home: &Path,
server_uri: &str,
approval_policy: &str,
feature_flags: &BTreeMap<Feature, bool>,
) -> std::io::Result<()> {
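// remote_models stays disabled by default for these tests; per-test flags overlay it.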
let mut features = BTreeMap::from([(Feature::RemoteModels, false)]);
for (feature, enabled) in feature_flags {
features.insert(*feature, *enabled);
}
let feature_entries = features
.into_iter()
.map(|(feature, enabled)| {
let key = FEATURES
.iter()
.find(|spec| spec.id == feature)
.map(|spec| spec.key)
.unwrap_or_else(|| panic!("missing feature key for {feature:?}"));
format!("{key} = {enabled}")
})
.collect::<Vec<_>>()
.join("\n");
let config_toml = codex_home.join("config.toml");
std::fs::write(
config_toml,
@@ -1514,6 +1669,9 @@ sandbox_mode = "read-only"
model_provider = "mock_provider"
[features]
{feature_entries}
[model_providers.mock_provider]
name = "Mock provider for test"
base_url = "{server_uri}/v1"


@@ -12,7 +12,7 @@ path = "src/lib.rs"
anyhow = "1"
serde = { version = "1", features = ["derive"] }
serde_json = "1"
reqwest = { version = "0.12", default-features = false, features = ["json", "rustls-tls-webpki-roots", "rustls-tls-native-roots"] }
reqwest = { version = "0.12", default-features = false, features = ["json", "rustls-tls"] }
codex-backend-openapi-models = { path = "../codex-backend-openapi-models" }
codex-protocol = { workspace = true }
codex-core = { workspace = true }


@@ -1,4 +1,5 @@
use crate::types::CodeTaskDetailsResponse;
use crate::types::ConfigFileResponse;
use crate::types::CreditStatusDetails;
use crate::types::PaginatedListTaskListItem;
use crate::types::RateLimitStatusPayload;
@@ -244,6 +245,20 @@ impl Client {
self.decode_json::<TurnAttemptsSiblingTurnsResponse>(&url, &ct, &body)
}
/// Fetch the managed requirements file from codex-backend.
///
/// `GET /api/codex/config/requirements` (Codex API style) or
/// `GET /wham/config/requirements` (ChatGPT backend-api style).
pub async fn get_config_requirements_file(&self) -> Result<ConfigFileResponse> {
let url = match self.path_style {
PathStyle::CodexApi => format!("{}/api/codex/config/requirements", self.base_url),
PathStyle::ChatGptApi => format!("{}/wham/config/requirements", self.base_url),
};
let req = self.http.get(&url).headers(self.headers());
let (body, ct) = self.exec_request(req, "GET", &url).await?;
self.decode_json::<ConfigFileResponse>(&url, &ct, &body)
}
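// Illustrative caller sketch (not part of the diff; nothing beyond the method
// signature above is assumed):
//
//   let requirements: ConfigFileResponse =
//       client.get_config_requirements_file().await?;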
/// Create a new task (user turn) by POSTing to the appropriate backend path
/// based on `path_style`. Returns the created task id.
pub async fn create_task(&self, request_body: serde_json::Value) -> Result<String> {
@@ -336,6 +351,7 @@ impl Client {
fn map_plan_type(plan_type: crate::types::PlanType) -> AccountPlanType {
match plan_type {
crate::types::PlanType::Free => AccountPlanType::Free,
crate::types::PlanType::Go => AccountPlanType::Go,
crate::types::PlanType::Plus => AccountPlanType::Plus,
crate::types::PlanType::Pro => AccountPlanType::Pro,
crate::types::PlanType::Team => AccountPlanType::Team,
@@ -343,7 +359,6 @@ impl Client {
crate::types::PlanType::Enterprise => AccountPlanType::Enterprise,
crate::types::PlanType::Edu | crate::types::PlanType::Education => AccountPlanType::Edu,
crate::types::PlanType::Guest
| crate::types::PlanType::Go
| crate::types::PlanType::FreeWorkspace
| crate::types::PlanType::Quorum
| crate::types::PlanType::K12 => AccountPlanType::Unknown,


@@ -4,6 +4,7 @@ pub mod types;
pub use client::Client;
pub use types::CodeTaskDetailsResponse;
pub use types::CodeTaskDetailsResponseExt;
pub use types::ConfigFileResponse;
pub use types::PaginatedListTaskListItem;
pub use types::TaskListItem;
pub use types::TurnAttemptsSiblingTurnsResponse;


@@ -1,3 +1,4 @@
pub use codex_backend_openapi_models::models::ConfigFileResponse;
pub use codex_backend_openapi_models::models::CreditStatusDetails;
pub use codex_backend_openapi_models::models::PaginatedListTaskListItem;
pub use codex_backend_openapi_models::models::PlanType;


@@ -17,6 +17,8 @@ serde = { workspace = true, features = ["derive"] }
serde_json = { workspace = true }
tokio = { workspace = true, features = ["full"] }
codex-git = { workspace = true }
urlencoding = { workspace = true }
[dev-dependencies]
pretty_assertions = { workspace = true }
tempfile = { workspace = true }


@@ -6,11 +6,20 @@ use crate::chatgpt_token::init_chatgpt_token_from_auth;
use anyhow::Context;
use serde::de::DeserializeOwned;
use std::time::Duration;
/// Make a GET request to the ChatGPT backend API.
pub(crate) async fn chatgpt_get_request<T: DeserializeOwned>(
config: &Config,
path: String,
) -> anyhow::Result<T> {
chatgpt_get_request_with_timeout(config, path, None).await
}
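// Illustrative usage (not part of the diff): opting into a per-request timeout.
// The endpoint path and response type below are assumptions for the sketch.
//
//   let resp: DirectoryListResponse = chatgpt_get_request_with_timeout(
//       config,
//       "/connectors/directory/list".to_string(),
//       Some(Duration::from_secs(60)),
//   )
//   .await?;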
pub(crate) async fn chatgpt_get_request_with_timeout<T: DeserializeOwned>(
config: &Config,
path: String,
timeout: Option<Duration>,
) -> anyhow::Result<T> {
let chatgpt_base_url = &config.chatgpt_base_url;
init_chatgpt_token_from_auth(&config.codex_home, config.cli_auth_credentials_store_mode)
@@ -27,14 +36,17 @@ pub(crate) async fn chatgpt_get_request<T: DeserializeOwned>(
anyhow::anyhow!("ChatGPT account ID not available, please re-run `codex login`")
});
let response = client
let mut request = client
.get(&url)
.bearer_auth(&token.access_token)
.header("chatgpt-account-id", account_id?)
.header("Content-Type", "application/json")
.send()
.await
.context("Failed to send request")?;
.header("Content-Type", "application/json");
if let Some(timeout) = timeout {
request = request.timeout(timeout);
}
let response = request.send().await.context("Failed to send request")?;
if response.status().is_success() {
let result: T = response

View File

@@ -0,0 +1,309 @@
use std::collections::HashMap;
use codex_core::config::Config;
use codex_core::features::Feature;
use serde::Deserialize;
use std::time::Duration;
use crate::chatgpt_client::chatgpt_get_request_with_timeout;
use crate::chatgpt_token::get_chatgpt_token_data;
use crate::chatgpt_token::init_chatgpt_token_from_auth;
pub use codex_core::connectors::AppInfo;
pub use codex_core::connectors::connector_display_label;
use codex_core::connectors::connector_install_url;
pub use codex_core::connectors::list_accessible_connectors_from_mcp_tools;
use codex_core::connectors::merge_connectors;
#[derive(Debug, Deserialize)]
struct DirectoryListResponse {
apps: Vec<DirectoryApp>,
#[serde(alias = "nextToken")]
next_token: Option<String>,
}
#[derive(Debug, Deserialize, Clone)]
struct DirectoryApp {
id: String,
name: String,
description: Option<String>,
#[serde(alias = "logoUrl")]
logo_url: Option<String>,
#[serde(alias = "logoUrlDark")]
logo_url_dark: Option<String>,
#[serde(alias = "distributionChannel")]
distribution_channel: Option<String>,
visibility: Option<String>,
}
const DIRECTORY_CONNECTORS_TIMEOUT: Duration = Duration::from_secs(60);
pub async fn list_connectors(config: &Config) -> anyhow::Result<Vec<AppInfo>> {
if !config.features.enabled(Feature::Apps) {
return Ok(Vec::new());
}
let (connectors_result, accessible_result) = tokio::join!(
list_all_connectors(config),
list_accessible_connectors_from_mcp_tools(config),
);
let connectors = connectors_result?;
let accessible = accessible_result?;
let merged = merge_connectors(connectors, accessible);
Ok(filter_disallowed_connectors(merged))
}
pub async fn list_all_connectors(config: &Config) -> anyhow::Result<Vec<AppInfo>> {
if !config.features.enabled(Feature::Apps) {
return Ok(Vec::new());
}
init_chatgpt_token_from_auth(&config.codex_home, config.cli_auth_credentials_store_mode)
.await?;
let token_data =
get_chatgpt_token_data().ok_or_else(|| anyhow::anyhow!("ChatGPT token not available"))?;
let mut apps = list_directory_connectors(config).await?;
if token_data.id_token.is_workspace_account() {
apps.extend(list_workspace_connectors(config).await?);
}
let mut connectors = merge_directory_apps(apps)
.into_iter()
.map(directory_app_to_app_info)
.collect::<Vec<_>>();
for connector in &mut connectors {
let install_url = match connector.install_url.take() {
Some(install_url) => install_url,
None => connector_install_url(&connector.name, &connector.id),
};
connector.name = normalize_connector_name(&connector.name, &connector.id);
connector.description = normalize_connector_value(connector.description.as_deref());
connector.install_url = Some(install_url);
connector.is_accessible = false;
}
connectors.sort_by(|left, right| {
left.name
.cmp(&right.name)
.then_with(|| left.id.cmp(&right.id))
});
Ok(connectors)
}
async fn list_directory_connectors(config: &Config) -> anyhow::Result<Vec<DirectoryApp>> {
let mut apps = Vec::new();
let mut next_token: Option<String> = None;
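// Page through the directory using the backend's continuation token until none is returned.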
loop {
let path = match next_token.as_deref() {
Some(token) => {
let encoded_token = urlencoding::encode(token);
format!("/connectors/directory/list?tier=categorized&token={encoded_token}")
}
None => "/connectors/directory/list?tier=categorized".to_string(),
};
let response: DirectoryListResponse =
chatgpt_get_request_with_timeout(config, path, Some(DIRECTORY_CONNECTORS_TIMEOUT))
.await?;
apps.extend(
response
.apps
.into_iter()
.filter(|app| !is_hidden_directory_app(app)),
);
next_token = response
.next_token
.map(|token| token.trim().to_string())
.filter(|token| !token.is_empty());
if next_token.is_none() {
break;
}
}
Ok(apps)
}
async fn list_workspace_connectors(config: &Config) -> anyhow::Result<Vec<DirectoryApp>> {
let response: anyhow::Result<DirectoryListResponse> = chatgpt_get_request_with_timeout(
config,
"/connectors/directory/list_workspace".to_string(),
Some(DIRECTORY_CONNECTORS_TIMEOUT),
)
.await;
match response {
Ok(response) => Ok(response
.apps
.into_iter()
.filter(|app| !is_hidden_directory_app(app))
.collect()),
Err(_) => Ok(Vec::new()),
}
}
fn merge_directory_apps(apps: Vec<DirectoryApp>) -> Vec<DirectoryApp> {
let mut merged: HashMap<String, DirectoryApp> = HashMap::new();
for app in apps {
if let Some(existing) = merged.get_mut(&app.id) {
merge_directory_app(existing, app);
} else {
merged.insert(app.id.clone(), app);
}
}
merged.into_values().collect()
}
fn merge_directory_app(existing: &mut DirectoryApp, incoming: DirectoryApp) {
let DirectoryApp {
id: _,
name,
description,
logo_url,
logo_url_dark,
distribution_channel,
visibility: _,
} = incoming;
let incoming_name_is_empty = name.trim().is_empty();
if existing.name.trim().is_empty() && !incoming_name_is_empty {
existing.name = name;
}
let incoming_description_present = description
.as_deref()
.map(|value| !value.trim().is_empty())
.unwrap_or(false);
let existing_description_present = existing
.description
.as_deref()
.map(|value| !value.trim().is_empty())
.unwrap_or(false);
if !existing_description_present && incoming_description_present {
existing.description = description;
}
if existing.logo_url.is_none() && logo_url.is_some() {
existing.logo_url = logo_url;
}
if existing.logo_url_dark.is_none() && logo_url_dark.is_some() {
existing.logo_url_dark = logo_url_dark;
}
if existing.distribution_channel.is_none() && distribution_channel.is_some() {
existing.distribution_channel = distribution_channel;
}
}
fn is_hidden_directory_app(app: &DirectoryApp) -> bool {
matches!(app.visibility.as_deref(), Some("HIDDEN"))
}
fn directory_app_to_app_info(app: DirectoryApp) -> AppInfo {
AppInfo {
id: app.id,
name: app.name,
description: app.description,
logo_url: app.logo_url,
logo_url_dark: app.logo_url_dark,
distribution_channel: app.distribution_channel,
install_url: None,
is_accessible: false,
}
}
fn normalize_connector_name(name: &str, connector_id: &str) -> String {
let trimmed = name.trim();
if trimmed.is_empty() {
connector_id.to_string()
} else {
trimmed.to_string()
}
}
fn normalize_connector_value(value: Option<&str>) -> Option<String> {
value
.map(str::trim)
.filter(|value| !value.is_empty())
.map(str::to_string)
}
const ALLOWED_APPS_SDK_APPS: &[&str] = &["asdk_app_69781557cc1481919cf5e9824fa2e792"];
const DISALLOWED_CONNECTOR_IDS: &[&str] = &[
"asdk_app_6938a94a61d881918ef32cb999ff937c",
"connector_2b0a9009c9c64bf9933a3dae3f2b1254",
"connector_68de829bf7648191acd70a907364c67c",
];
const DISALLOWED_CONNECTOR_PREFIX: &str = "connector_openai_";
fn filter_disallowed_connectors(connectors: Vec<AppInfo>) -> Vec<AppInfo> {
// TODO: Support Apps SDK connectors.
connectors
.into_iter()
.filter(is_connector_allowed)
.collect()
}
fn is_connector_allowed(connector: &AppInfo) -> bool {
let connector_id = connector.id.as_str();
if connector_id.starts_with(DISALLOWED_CONNECTOR_PREFIX)
|| DISALLOWED_CONNECTOR_IDS.contains(&connector_id)
{
return false;
}
if connector_id.starts_with("asdk_app_") {
return ALLOWED_APPS_SDK_APPS.contains(&connector_id);
}
true
}
#[cfg(test)]
mod tests {
use super::*;
use pretty_assertions::assert_eq;
fn app(id: &str) -> AppInfo {
AppInfo {
id: id.to_string(),
name: id.to_string(),
description: None,
logo_url: None,
logo_url_dark: None,
distribution_channel: None,
install_url: None,
is_accessible: false,
}
}
#[test]
fn filters_internal_asdk_connectors() {
let filtered = filter_disallowed_connectors(vec![app("asdk_app_hidden"), app("alpha")]);
assert_eq!(filtered, vec![app("alpha")]);
}
#[test]
fn allows_whitelisted_asdk_connectors() {
let filtered = filter_disallowed_connectors(vec![
app("asdk_app_69781557cc1481919cf5e9824fa2e792"),
app("beta"),
]);
assert_eq!(
filtered,
vec![
app("asdk_app_69781557cc1481919cf5e9824fa2e792"),
app("beta")
]
);
}
#[test]
fn filters_openai_connectors() {
let filtered = filter_disallowed_connectors(vec![
app("connector_openai_foo"),
app("connector_openai_bar"),
app("gamma"),
]);
assert_eq!(filtered, vec![app("gamma")]);
}
#[test]
fn filters_disallowed_connector_ids() {
let filtered = filter_disallowed_connectors(vec![
app("asdk_app_6938a94a61d881918ef32cb999ff937c"),
app("delta"),
]);
assert_eq!(filtered, vec![app("delta")]);
}
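// Illustrative addition (not in the diff): a sketch of a test for the merge
// precedence above — the first non-empty value wins and later duplicates only
// fill gaps.
#[test]
fn merge_prefers_first_non_empty_fields() {
let first = DirectoryApp {
id: "app_1".to_string(),
name: "Alpha".to_string(),
description: None,
logo_url: None,
logo_url_dark: None,
distribution_channel: None,
visibility: None,
};
let mut second = first.clone();
second.name = String::new();
second.description = Some("Filled in by the duplicate".to_string());
let merged = merge_directory_apps(vec![first, second]);
assert_eq!(merged.len(), 1);
assert_eq!(merged[0].name, "Alpha");
assert_eq!(
merged[0].description.as_deref(),
Some("Filled in by the duplicate")
);
}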
}


@@ -1,4 +1,5 @@
pub mod apply_command;
mod chatgpt_client;
mod chatgpt_token;
pub mod connectors;
pub mod get_task;


@@ -35,8 +35,6 @@ codex-responses-api-proxy = { workspace = true }
codex-rmcp-client = { workspace = true }
codex-stdio-to-uds = { workspace = true }
codex-tui = { workspace = true }
codex-tui2 = { workspace = true }
codex-utils-absolute-path = { workspace = true }
libc = { workspace = true }
owo-colors = { workspace = true }
regex-lite = { workspace = true }


@@ -136,7 +136,8 @@ async fn run_command_under_sandbox(
if let SandboxType::Windows = sandbox_type {
#[cfg(target_os = "windows")]
{
use codex_core::features::Feature;
use codex_core::windows_sandbox::WindowsSandboxLevelExt;
use codex_protocol::config_types::WindowsSandboxLevel;
use codex_windows_sandbox::run_windows_sandbox_capture;
use codex_windows_sandbox::run_windows_sandbox_capture_elevated;
@@ -147,8 +148,10 @@ async fn run_command_under_sandbox(
let env_map = env.clone();
let command_vec = command.clone();
let base_dir = config.codex_home.clone();
let use_elevated = config.features.enabled(Feature::WindowsSandbox)
&& config.features.enabled(Feature::WindowsSandboxElevated);
let use_elevated = matches!(
WindowsSandboxLevel::from_config(&config),
WindowsSandboxLevel::Elevated
);
// Preflight audit is invoked elsewhere at the appropriate times.
let res = tokio::task::spawn_blocking(move || {


@@ -225,7 +225,7 @@ pub async fn run_login_status(cli_config_overrides: CliConfigOverrides) -> ! {
let config = load_config_or_exit(cli_config_overrides).await;
match CodexAuth::from_auth_storage(&config.codex_home, config.cli_auth_credentials_store_mode) {
Ok(Some(auth)) => match auth.mode {
Ok(Some(auth)) => match auth.api_auth_mode() {
AuthMode::ApiKey => match auth.get_token() {
Ok(api_key) => {
eprintln!("Logged in using an API key - {}", safe_format_key(&api_key));
@@ -236,10 +236,14 @@ pub async fn run_login_status(cli_config_overrides: CliConfigOverrides) -> ! {
std::process::exit(1);
}
},
AuthMode::ChatGPT => {
AuthMode::Chatgpt => {
eprintln!("Logged in using ChatGPT");
std::process::exit(0);
}
AuthMode::ChatgptAuthTokens => {
eprintln!("Logged in using ChatGPT (external tokens)");
std::process::exit(0);
}
},
Ok(None) => {
eprintln!("Not logged in");


@@ -26,7 +26,6 @@ use codex_tui::AppExitInfo;
use codex_tui::Cli as TuiCli;
use codex_tui::ExitReason;
use codex_tui::update_action::UpdateAction;
use codex_tui2 as tui2;
use owo_colors::OwoColorize;
use std::io::IsTerminal;
use std::path::PathBuf;
@@ -40,14 +39,11 @@ use crate::mcp_cmd::McpCli;
use codex_core::config::Config;
use codex_core::config::ConfigOverrides;
use codex_core::config::edit::ConfigEditsBuilder;
use codex_core::config::find_codex_home;
use codex_core::config::load_config_as_toml_with_cli_overrides;
use codex_core::features::Feature;
use codex_core::features::FeatureOverrides;
use codex_core::features::Features;
use codex_core::features::Stage;
use codex_core::features::is_known_feature_key;
use codex_core::terminal::TerminalName;
use codex_utils_absolute_path::AbsolutePathBuf;
/// Codex CLI
///
@@ -148,13 +144,13 @@ struct CompletionCommand {
#[derive(Debug, Parser)]
struct ResumeCommand {
/// Conversation/session id (UUID). When provided, resumes this session.
/// Conversation/session id (UUID) or thread name. A UUID takes precedence when the value parses as one.
/// If omitted, use --last to pick the most recent recorded session.
#[arg(value_name = "SESSION_ID")]
session_id: Option<String>,
/// Continue the most recent session without showing the picker.
#[arg(long = "last", default_value_t = false, conflicts_with = "session_id")]
#[arg(long = "last", default_value_t = false)]
last: bool,
/// Show all sessions (disables cwd filtering and shows CWD column).
@@ -327,6 +323,7 @@ fn format_exit_messages(exit_info: AppExitInfo, color_enabled: bool) -> Vec<Stri
let AppExitInfo {
token_usage,
thread_id: conversation_id,
thread_name,
..
} = exit_info;
@@ -339,8 +336,9 @@ fn format_exit_messages(exit_info: AppExitInfo, color_enabled: bool) -> Vec<Stri
codex_core::protocol::FinalOutput::from(token_usage)
)];
if let Some(session_id) = conversation_id {
let resume_cmd = format!("codex resume {session_id}");
if let Some(resume_cmd) =
codex_core::util::resume_command(thread_name.as_deref(), conversation_id)
{
let command = if color_enabled {
resume_cmd.cyan().to_string()
} else {
@@ -403,8 +401,7 @@ fn run_update_action(action: UpdateAction) -> anyhow::Result<()> {
if !status.success() {
anyhow::bail!("`{cmd_str}` failed with status {status}");
}
println!();
println!("🎉 Update ran successfully! Please restart Codex.");
println!("\n🎉 Update ran successfully! Please restart Codex.");
Ok(())
}
@@ -456,13 +453,23 @@ struct FeaturesCli {
enum FeaturesSubcommand {
/// List known features with their stage and effective state.
List,
/// Enable a feature in config.toml.
Enable(FeatureSetArgs),
/// Disable a feature in config.toml.
Disable(FeatureSetArgs),
}
#[derive(Debug, Parser)]
struct FeatureSetArgs {
/// Feature key to update (for example: unified_exec).
feature: String,
}
fn stage_str(stage: codex_core::features::Stage) -> &'static str {
use codex_core::features::Stage;
match stage {
Stage::Beta => "experimental",
Stage::Experimental { .. } => "beta",
Stage::UnderDevelopment => "under development",
Stage::Experimental { .. } => "experimental",
Stage::Stable => "stable",
Stage::Deprecated => "deprecated",
Stage::Removed => "removed",
@@ -703,12 +710,27 @@ async fn cli_main(codex_linux_sandbox_exe: Option<PathBuf>) -> anyhow::Result<()
overrides,
)
.await?;
let mut rows = Vec::with_capacity(codex_core::features::FEATURES.len());
let mut name_width = 0;
let mut stage_width = 0;
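// First pass measures column widths; second pass prints the aligned rows.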
for def in codex_core::features::FEATURES.iter() {
let name = def.key;
let stage = stage_str(def.stage);
let enabled = config.features.enabled(def.id);
println!("{name}\t{stage}\t{enabled}");
name_width = name_width.max(name.len());
stage_width = stage_width.max(stage.len());
rows.push((name, stage, enabled));
}
for (name, stage, enabled) in rows {
println!("{name:<name_width$} {stage:<stage_width$} {enabled}");
}
}
FeaturesSubcommand::Enable(FeatureSetArgs { feature }) => {
enable_feature_in_config(&interactive, &feature).await?;
}
FeaturesSubcommand::Disable(FeatureSetArgs { feature }) => {
disable_feature_in_config(&interactive, &feature).await?;
}
},
}
@@ -716,6 +738,57 @@ async fn cli_main(codex_linux_sandbox_exe: Option<PathBuf>) -> anyhow::Result<()
Ok(())
}
async fn enable_feature_in_config(interactive: &TuiCli, feature: &str) -> anyhow::Result<()> {
FeatureToggles::validate_feature(feature)?;
let codex_home = find_codex_home()?;
ConfigEditsBuilder::new(&codex_home)
.with_profile(interactive.config_profile.as_deref())
.set_feature_enabled(feature, true)
.apply()
.await?;
println!("Enabled feature `{feature}` in config.toml.");
maybe_print_under_development_feature_warning(&codex_home, interactive, feature);
Ok(())
}
async fn disable_feature_in_config(interactive: &TuiCli, feature: &str) -> anyhow::Result<()> {
FeatureToggles::validate_feature(feature)?;
let codex_home = find_codex_home()?;
ConfigEditsBuilder::new(&codex_home)
.with_profile(interactive.config_profile.as_deref())
.set_feature_enabled(feature, false)
.apply()
.await?;
println!("Disabled feature `{feature}` in config.toml.");
Ok(())
}
fn maybe_print_under_development_feature_warning(
codex_home: &std::path::Path,
interactive: &TuiCli,
feature: &str,
) {
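// The warning applies only to default-profile edits and only to features still under development.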
if interactive.config_profile.is_some() {
return;
}
let Some(spec) = codex_core::features::FEATURES
.iter()
.find(|spec| spec.key == feature)
else {
return;
};
if !matches!(spec.stage, Stage::UnderDevelopment) {
return;
}
let config_path = codex_home.join(codex_core::config::CONFIG_TOML_FILE);
eprintln!(
"Under-development features enabled: {feature}. Under-development features are incomplete and may behave unpredictably. To suppress this warning, set `suppress_unstable_features_warning = true` in {}.",
config_path.display()
);
}
/// Prepend root-level overrides so they have lower precedence than
/// CLI-specific ones specified after the subcommand (if any).
fn prepend_config_flags(
@@ -727,8 +800,6 @@ fn prepend_config_flags(
.splice(0..0, cli_config_overrides.raw_overrides);
}
/// Run the interactive Codex TUI, dispatching to either the legacy implementation or the
/// experimental TUI v2 shim based on feature flags resolved from config.
async fn run_interactive_tui(
mut interactive: TuiCli,
codex_linux_sandbox_exe: Option<PathBuf>,
@@ -756,12 +827,7 @@ async fn run_interactive_tui(
}
}
if is_tui2_enabled(&interactive).await? {
let result = tui2::run_main(interactive.into(), codex_linux_sandbox_exe).await?;
Ok(result.into())
} else {
codex_tui::run_main(interactive, codex_linux_sandbox_exe).await
}
codex_tui::run_main(interactive, codex_linux_sandbox_exe).await
}
fn confirm(prompt: &str) -> std::io::Result<bool> {
@@ -773,32 +839,6 @@ fn confirm(prompt: &str) -> std::io::Result<bool> {
Ok(answer.eq_ignore_ascii_case("y") || answer.eq_ignore_ascii_case("yes"))
}
/// Returns `Ok(true)` when the resolved configuration enables the `tui2` feature flag.
///
/// This performs a lightweight config load (honoring the same precedence as the lower-level TUI
/// bootstrap: `$CODEX_HOME`, config.toml, profile, and CLI `-c` overrides) solely to decide which
/// TUI frontend to launch. The full configuration is still loaded later by the interactive TUI.
async fn is_tui2_enabled(cli: &TuiCli) -> std::io::Result<bool> {
let raw_overrides = cli.config_overrides.raw_overrides.clone();
let overrides_cli = codex_common::CliConfigOverrides { raw_overrides };
let cli_kv_overrides = overrides_cli
.parse_overrides()
.map_err(|e| std::io::Error::new(std::io::ErrorKind::InvalidInput, e))?;
let codex_home = find_codex_home()?;
let cwd = cli.cwd.clone();
let config_cwd = match cwd.as_deref() {
Some(path) => AbsolutePathBuf::from_absolute_path(path)?,
None => AbsolutePathBuf::current_dir()?,
};
let config_toml =
load_config_as_toml_with_cli_overrides(&codex_home, &config_cwd, cli_kv_overrides).await?;
let config_profile = config_toml.get_config_profile(cli.config_profile.clone())?;
let overrides = FeatureOverrides::default();
let features = Features::from_config(&config_toml, &config_profile, overrides);
Ok(features.enabled(Feature::Tui2))
}
/// Build the final `TuiCli` for a `codex resume` invocation.
fn finalize_resume_interactive(
mut interactive: TuiCli,
@@ -964,6 +1004,24 @@ mod tests {
finalize_fork_interactive(interactive, root_overrides, session_id, last, all, fork_cli)
}
#[test]
fn exec_resume_last_accepts_prompt_positional() {
let cli =
MultitoolCli::try_parse_from(["codex", "exec", "--json", "resume", "--last", "2+2"])
.expect("parse should succeed");
let Some(Subcommand::Exec(exec)) = cli.subcommand else {
panic!("expected exec subcommand");
};
let Some(codex_exec::Command::Resume(args)) = exec.command else {
panic!("expected exec resume");
};
assert!(args.last);
assert_eq!(args.session_id, None);
assert_eq!(args.prompt.as_deref(), Some("2+2"));
}
fn app_server_from_args(args: &[&str]) -> AppServerCommand {
let cli = MultitoolCli::try_parse_from(args).expect("parse");
let Subcommand::AppServer(app_server) = cli.subcommand.expect("app-server present") else {
@@ -972,7 +1030,7 @@ mod tests {
app_server
}
fn sample_exit_info(conversation: Option<&str>) -> AppExitInfo {
fn sample_exit_info(conversation_id: Option<&str>, thread_name: Option<&str>) -> AppExitInfo {
let token_usage = TokenUsage {
output_tokens: 2,
total_tokens: 2,
@@ -980,7 +1038,10 @@ mod tests {
};
AppExitInfo {
token_usage,
thread_id: conversation.map(ThreadId::from_string).map(Result::unwrap),
thread_id: conversation_id
.map(ThreadId::from_string)
.map(Result::unwrap),
thread_name: thread_name.map(str::to_string),
update_action: None,
exit_reason: ExitReason::UserRequested,
}
@@ -991,6 +1052,7 @@ mod tests {
let exit_info = AppExitInfo {
token_usage: TokenUsage::default(),
thread_id: None,
thread_name: None,
update_action: None,
exit_reason: ExitReason::UserRequested,
};
@@ -1000,7 +1062,7 @@ mod tests {
#[test]
fn format_exit_messages_includes_resume_hint_without_color() {
let exit_info = sample_exit_info(Some("123e4567-e89b-12d3-a456-426614174000"));
let exit_info = sample_exit_info(Some("123e4567-e89b-12d3-a456-426614174000"), None);
let lines = format_exit_messages(exit_info, false);
assert_eq!(
lines,
@@ -1014,12 +1076,28 @@ mod tests {
#[test]
fn format_exit_messages_applies_color_when_enabled() {
let exit_info = sample_exit_info(Some("123e4567-e89b-12d3-a456-426614174000"));
let exit_info = sample_exit_info(Some("123e4567-e89b-12d3-a456-426614174000"), None);
let lines = format_exit_messages(exit_info, true);
assert_eq!(lines.len(), 2);
assert!(lines[1].contains("\u{1b}[36m"));
}
#[test]
fn format_exit_messages_prefers_thread_name() {
let exit_info = sample_exit_info(
Some("123e4567-e89b-12d3-a456-426614174000"),
Some("my-thread"),
);
let lines = format_exit_messages(exit_info, false);
assert_eq!(
lines,
vec![
"Token usage: total=2 input=0 output=2".to_string(),
"To continue this session, run codex resume my-thread".to_string(),
]
);
}
#[test]
fn resume_model_flag_applies_when_no_root_flags() {
let interactive =
@@ -1185,6 +1263,32 @@ mod tests {
assert!(app_server.analytics_default_enabled);
}
#[test]
fn features_enable_parses_feature_name() {
let cli = MultitoolCli::try_parse_from(["codex", "features", "enable", "unified_exec"])
.expect("parse should succeed");
let Some(Subcommand::Features(FeaturesCli { sub })) = cli.subcommand else {
panic!("expected features subcommand");
};
let FeaturesSubcommand::Enable(FeatureSetArgs { feature }) = sub else {
panic!("expected features enable");
};
assert_eq!(feature, "unified_exec");
}
#[test]
fn features_disable_parses_feature_name() {
let cli = MultitoolCli::try_parse_from(["codex", "features", "disable", "shell_tool"])
.expect("parse should succeed");
let Some(Subcommand::Features(FeaturesCli { sub })) = cli.subcommand else {
panic!("expected features subcommand");
};
let FeaturesSubcommand::Disable(FeatureSetArgs { feature }) = sub else {
panic!("expected features disable");
};
assert_eq!(feature, "shell_tool");
}
#[test]
fn feature_toggles_known_features_generate_overrides() {
let toggles = FeatureToggles {


@@ -13,18 +13,20 @@ use codex_core::config::find_codex_home;
use codex_core::config::load_global_mcp_servers;
use codex_core::config::types::McpServerConfig;
use codex_core::config::types::McpServerTransportConfig;
use codex_core::mcp::auth::McpOAuthLoginSupport;
use codex_core::mcp::auth::compute_auth_statuses;
use codex_core::mcp::auth::oauth_login_support;
use codex_core::protocol::McpAuthStatus;
use codex_rmcp_client::delete_oauth_tokens;
use codex_rmcp_client::perform_oauth_login;
use codex_rmcp_client::supports_oauth_login;
/// Subcommands:
/// - `serve` — run the MCP server on stdio
/// - `list` — list configured servers (with `--json`)
/// - `get` — show a single server (with `--json`)
/// - `add` — add a server launcher entry to `~/.codex/config.toml`
/// - `remove` — delete a server entry
/// - `login` — authenticate with MCP server using OAuth
/// - `logout` — remove OAuth credentials for MCP server
#[derive(Debug, clap::Parser)]
pub struct McpCli {
#[clap(flatten)]
@@ -246,6 +248,7 @@ async fn run_add(config_overrides: &CliConfigOverrides, add_args: AddArgs) -> Re
tool_timeout_sec: None,
enabled_tools: None,
disabled_tools: None,
scopes: None,
};
servers.insert(name.clone(), new_entry);
@@ -258,33 +261,25 @@ async fn run_add(config_overrides: &CliConfigOverrides, add_args: AddArgs) -> Re
println!("Added global MCP server '{name}'.");
if let McpServerTransportConfig::StreamableHttp {
url,
bearer_token_env_var: None,
http_headers,
env_http_headers,
} = transport
{
match supports_oauth_login(&url).await {
Ok(true) => {
println!("Detected OAuth support. Starting OAuth flow…");
perform_oauth_login(
&name,
&url,
config.mcp_oauth_credentials_store_mode,
http_headers.clone(),
env_http_headers.clone(),
&Vec::new(),
config.mcp_oauth_callback_port,
)
.await?;
println!("Successfully logged in.");
}
Ok(false) => {}
Err(_) => println!(
"MCP server may or may not require login. Run `codex mcp login {name}` to login."
),
match oauth_login_support(&transport).await {
McpOAuthLoginSupport::Supported(oauth_config) => {
println!("Detected OAuth support. Starting OAuth flow…");
perform_oauth_login(
&name,
&oauth_config.url,
config.mcp_oauth_credentials_store_mode,
oauth_config.http_headers,
oauth_config.env_http_headers,
&Vec::new(),
config.mcp_oauth_callback_port,
)
.await?;
println!("Successfully logged in.");
}
McpOAuthLoginSupport::Unsupported => {}
McpOAuthLoginSupport::Unknown(_) => println!(
"MCP server may or may not require login. Run `codex mcp login {name}` to login."
),
}
Ok(())
@@ -347,6 +342,11 @@ async fn run_login(config_overrides: &CliConfigOverrides, login_args: LoginArgs)
_ => bail!("OAuth login is only supported for streamable HTTP servers."),
};
let mut scopes = scopes;
if scopes.is_empty() {
scopes = server.scopes.clone().unwrap_or_default();
}
perform_oauth_login(
&name,
&url,


@@ -0,0 +1,58 @@
use std::path::Path;
use anyhow::Result;
use predicates::str::contains;
use tempfile::TempDir;
fn codex_command(codex_home: &Path) -> Result<assert_cmd::Command> {
let mut cmd = assert_cmd::Command::new(codex_utils_cargo_bin::cargo_bin("codex")?);
cmd.env("CODEX_HOME", codex_home);
Ok(cmd)
}
#[tokio::test]
async fn features_enable_writes_feature_flag_to_config() -> Result<()> {
let codex_home = TempDir::new()?;
let mut cmd = codex_command(codex_home.path())?;
cmd.args(["features", "enable", "unified_exec"])
.assert()
.success()
.stdout(contains("Enabled feature `unified_exec` in config.toml."));
let config = std::fs::read_to_string(codex_home.path().join("config.toml"))?;
assert!(config.contains("[features]"));
assert!(config.contains("unified_exec = true"));
Ok(())
}
#[tokio::test]
async fn features_disable_writes_feature_flag_to_config() -> Result<()> {
let codex_home = TempDir::new()?;
let mut cmd = codex_command(codex_home.path())?;
cmd.args(["features", "disable", "shell_tool"])
.assert()
.success()
.stdout(contains("Disabled feature `shell_tool` in config.toml."));
let config = std::fs::read_to_string(codex_home.path().join("config.toml"))?;
assert!(config.contains("[features]"));
assert!(config.contains("shell_tool = false"));
Ok(())
}
#[tokio::test]
async fn features_enable_under_development_feature_prints_warning() -> Result<()> {
let codex_home = TempDir::new()?;
let mut cmd = codex_command(codex_home.path())?;
cmd.args(["features", "enable", "sqlite"])
.assert()
.success()
.stderr(contains("Under-development features enabled: sqlite."));
Ok(())
}
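For reference, a standalone sketch of the config.toml shape the first two tests above assert (inferred from the assertions; exact formatting may differ, and the `toml` crate is assumed as a dependency):

fn main() {
    let config = "[features]\nunified_exec = true\n";
    let value: toml::Value = toml::from_str(config).expect("valid TOML");
    assert_eq!(value["features"]["unified_exec"].as_bool(), Some(true));
}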


@@ -0,0 +1,6 @@
load("//:defs.bzl", "codex_rust_crate")
codex_rust_crate(
name = "cloud-requirements",
crate_name = "codex_cloud_requirements",
)


@@ -0,0 +1,25 @@
[package]
name = "codex-cloud-requirements"
version.workspace = true
edition.workspace = true
license.workspace = true
[lints]
workspace = true
[dependencies]
async-trait = { workspace = true }
codex-backend-client = { workspace = true }
codex-core = { workspace = true }
codex-otel = { workspace = true }
codex-protocol = { workspace = true }
tokio = { workspace = true, features = ["sync", "time"] }
toml = { workspace = true }
tracing = { workspace = true }
[dev-dependencies]
base64 = { workspace = true }
pretty_assertions = { workspace = true }
serde_json = { workspace = true }
tempfile = { workspace = true }
tokio = { workspace = true, features = ["macros", "rt", "test-util", "time"] }


@@ -0,0 +1,336 @@
//! Cloud-hosted config requirements for Codex.
//!
//! This crate fetches `requirements.toml` data from the backend as an alternative to loading it
//! from the local filesystem. It only applies to Enterprise ChatGPT customers.
//!
//! Today, fetching is best-effort: on error or timeout, Codex continues without cloud requirements.
//! We expect to tighten this so that Enterprise ChatGPT customers must successfully fetch these
//! requirements before Codex will run.
use async_trait::async_trait;
use codex_backend_client::Client as BackendClient;
use codex_core::AuthManager;
use codex_core::auth::CodexAuth;
use codex_core::config_loader::CloudRequirementsLoader;
use codex_core::config_loader::ConfigRequirementsToml;
use codex_protocol::account::PlanType;
use std::sync::Arc;
use std::time::Duration;
use std::time::Instant;
use tokio::time::timeout;
/// This blocks Codex startup, so it must be short.
const CLOUD_REQUIREMENTS_TIMEOUT: Duration = Duration::from_secs(5);
#[async_trait]
trait RequirementsFetcher: Send + Sync {
/// Returns requirements as a TOML string.
///
/// TODO(gt): For now, returns an Option. But when we want to make this fail-closed, return a
/// Result.
async fn fetch_requirements(&self, auth: &CodexAuth) -> Option<String>;
}
struct BackendRequirementsFetcher {
base_url: String,
}
impl BackendRequirementsFetcher {
fn new(base_url: String) -> Self {
Self { base_url }
}
}
#[async_trait]
impl RequirementsFetcher for BackendRequirementsFetcher {
async fn fetch_requirements(&self, auth: &CodexAuth) -> Option<String> {
let client = BackendClient::from_auth(self.base_url.clone(), auth)
.inspect_err(|err| {
tracing::warn!(
error = %err,
"Failed to construct backend client for cloud requirements"
);
})
.ok()?;
let response = client
.get_config_requirements_file()
.await
.inspect_err(|err| tracing::warn!(error = %err, "Failed to fetch cloud requirements"))
.ok()?;
let Some(contents) = response.contents else {
tracing::warn!("Cloud requirements response missing contents");
return None;
};
Some(contents)
}
}
struct CloudRequirementsService {
auth_manager: Arc<AuthManager>,
fetcher: Arc<dyn RequirementsFetcher>,
timeout: Duration,
}
impl CloudRequirementsService {
fn new(
auth_manager: Arc<AuthManager>,
fetcher: Arc<dyn RequirementsFetcher>,
timeout: Duration,
) -> Self {
Self {
auth_manager,
fetcher,
timeout,
}
}
async fn fetch_with_timeout(&self) -> Option<ConfigRequirementsToml> {
let _timer =
codex_otel::start_global_timer("codex.cloud_requirements.fetch.duration_ms", &[]);
let started_at = Instant::now();
let result = timeout(self.timeout, self.fetch())
.await
.inspect_err(|_| {
tracing::warn!("Timed out waiting for cloud requirements; continuing without them");
})
.ok()?;
match result.as_ref() {
Some(requirements) => {
tracing::info!(
elapsed_ms = started_at.elapsed().as_millis(),
requirements = ?requirements,
"Cloud requirements load completed"
);
}
None => {
tracing::info!(
elapsed_ms = started_at.elapsed().as_millis(),
"Cloud requirements load completed (none)"
);
}
}
result
}
async fn fetch(&self) -> Option<ConfigRequirementsToml> {
let auth = self.auth_manager.auth().await?;
if !(auth.is_chatgpt_auth() && auth.account_plan_type() == Some(PlanType::Enterprise)) {
return None;
}
let contents = self.fetcher.fetch_requirements(&auth).await?;
parse_cloud_requirements(&contents)
.inspect_err(|err| tracing::warn!(error = %err, "Failed to parse cloud requirements"))
.ok()
.flatten()
}
}
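A standalone sketch of the eligibility gate in `fetch` above: requirements are fetched only for ChatGPT-authenticated Enterprise accounts (simplified stand-in types, not the real `CodexAuth` API):

#[derive(Clone, Copy, Debug, PartialEq, Eq)]
enum Plan { Pro, Enterprise }

// Mirrors the `is_chatgpt_auth() && plan == Enterprise` check above.
fn should_fetch(is_chatgpt_auth: bool, plan: Option<Plan>) -> bool {
    is_chatgpt_auth && plan == Some(Plan::Enterprise)
}

fn main() {
    assert!(should_fetch(true, Some(Plan::Enterprise)));
    assert!(!should_fetch(true, Some(Plan::Pro)));         // wrong plan
    assert!(!should_fetch(false, Some(Plan::Enterprise))); // API-key auth
}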
pub fn cloud_requirements_loader(
auth_manager: Arc<AuthManager>,
chatgpt_base_url: String,
) -> CloudRequirementsLoader {
let service = CloudRequirementsService::new(
auth_manager,
Arc::new(BackendRequirementsFetcher::new(chatgpt_base_url)),
CLOUD_REQUIREMENTS_TIMEOUT,
);
let task = tokio::spawn(async move { service.fetch_with_timeout().await });
CloudRequirementsLoader::new(async move {
task.await
.inspect_err(|err| tracing::warn!(error = %err, "Cloud requirements task failed"))
.ok()
.flatten()
})
}
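The loader above spawns the fetch eagerly so it overlaps the rest of startup and is awaited only when the merged config actually needs it. A minimal sketch of that pattern (assumes tokio with the "macros", "rt-multi-thread", and "time" features; the sleep stands in for the backend request):

use std::time::Duration;

#[tokio::main]
async fn main() {
    // Kick off the slow fetch immediately...
    let task = tokio::spawn(async {
        tokio::time::sleep(Duration::from_millis(50)).await; // stand-in for the network call
        Some("requirements".to_string())
    });
    // ...other startup work runs here concurrently...
    // ...and the result is awaited only at the point it is needed.
    let requirements = task.await.ok().flatten();
    println!("{requirements:?}");
}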
fn parse_cloud_requirements(
contents: &str,
) -> Result<Option<ConfigRequirementsToml>, toml::de::Error> {
if contents.trim().is_empty() {
return Ok(None);
}
let requirements: ConfigRequirementsToml = toml::from_str(contents)?;
if requirements.is_empty() {
Ok(None)
} else {
Ok(Some(requirements))
}
}
#[cfg(test)]
mod tests {
use super::*;
use base64::Engine;
use base64::engine::general_purpose::URL_SAFE_NO_PAD;
use codex_core::auth::AuthCredentialsStoreMode;
use codex_protocol::protocol::AskForApproval;
use pretty_assertions::assert_eq;
use serde_json::json;
use std::future::pending;
use std::path::Path;
use tempfile::tempdir;
fn write_auth_json(codex_home: &Path, value: serde_json::Value) -> std::io::Result<()> {
std::fs::write(codex_home.join("auth.json"), serde_json::to_string(&value)?)?;
Ok(())
}
fn auth_manager_with_api_key() -> Arc<AuthManager> {
let tmp = tempdir().expect("tempdir");
let auth_json = json!({
"OPENAI_API_KEY": "sk-test-key",
"tokens": null,
"last_refresh": null,
});
write_auth_json(tmp.path(), auth_json).expect("write auth");
Arc::new(AuthManager::new(
tmp.path().to_path_buf(),
false,
AuthCredentialsStoreMode::File,
))
}
fn auth_manager_with_plan(plan_type: &str) -> Arc<AuthManager> {
let tmp = tempdir().expect("tempdir");
let header = json!({ "alg": "none", "typ": "JWT" });
let auth_payload = json!({
"chatgpt_plan_type": plan_type,
"chatgpt_user_id": "user-12345",
"user_id": "user-12345",
});
let payload = json!({
"email": "user@example.com",
"https://api.openai.com/auth": auth_payload,
});
let header_b64 = URL_SAFE_NO_PAD.encode(serde_json::to_vec(&header).expect("header"));
let payload_b64 = URL_SAFE_NO_PAD.encode(serde_json::to_vec(&payload).expect("payload"));
let signature_b64 = URL_SAFE_NO_PAD.encode(b"sig");
let fake_jwt = format!("{header_b64}.{payload_b64}.{signature_b64}");
let auth_json = json!({
"OPENAI_API_KEY": null,
"tokens": {
"id_token": fake_jwt,
"access_token": "test-access-token",
"refresh_token": "test-refresh-token",
},
"last_refresh": null,
});
write_auth_json(tmp.path(), auth_json).expect("write auth");
Arc::new(AuthManager::new(
tmp.path().to_path_buf(),
false,
AuthCredentialsStoreMode::File,
))
}
fn parse_for_fetch(contents: Option<&str>) -> Option<ConfigRequirementsToml> {
contents.and_then(|contents| parse_cloud_requirements(contents).ok().flatten())
}
struct StaticFetcher {
contents: Option<String>,
}
#[async_trait::async_trait]
impl RequirementsFetcher for StaticFetcher {
async fn fetch_requirements(&self, _auth: &CodexAuth) -> Option<String> {
self.contents.clone()
}
}
struct PendingFetcher;
#[async_trait::async_trait]
impl RequirementsFetcher for PendingFetcher {
async fn fetch_requirements(&self, _auth: &CodexAuth) -> Option<String> {
pending::<()>().await;
None
}
}
#[tokio::test]
async fn fetch_cloud_requirements_skips_non_chatgpt_auth() {
let auth_manager = auth_manager_with_api_key();
let service = CloudRequirementsService::new(
auth_manager,
Arc::new(StaticFetcher { contents: None }),
CLOUD_REQUIREMENTS_TIMEOUT,
);
let result = service.fetch().await;
assert!(result.is_none());
}
#[tokio::test]
async fn fetch_cloud_requirements_skips_non_enterprise_plan() {
let auth_manager = auth_manager_with_plan("pro");
let service = CloudRequirementsService::new(
auth_manager,
Arc::new(StaticFetcher { contents: None }),
CLOUD_REQUIREMENTS_TIMEOUT,
);
let result = service.fetch().await;
assert!(result.is_none());
}
#[tokio::test]
async fn fetch_cloud_requirements_handles_missing_contents() {
let result = parse_for_fetch(None);
assert!(result.is_none());
}
#[tokio::test]
async fn fetch_cloud_requirements_handles_empty_contents() {
let result = parse_for_fetch(Some(" "));
assert!(result.is_none());
}
#[tokio::test]
async fn fetch_cloud_requirements_handles_invalid_toml() {
let result = parse_for_fetch(Some("not = ["));
assert!(result.is_none());
}
#[tokio::test]
async fn fetch_cloud_requirements_ignores_empty_requirements() {
let result = parse_for_fetch(Some("# comment"));
assert!(result.is_none());
}
#[tokio::test]
async fn fetch_cloud_requirements_parses_valid_toml() {
let result = parse_for_fetch(Some("allowed_approval_policies = [\"never\"]"));
assert_eq!(
result,
Some(ConfigRequirementsToml {
allowed_approval_policies: Some(vec![AskForApproval::Never]),
allowed_sandbox_modes: None,
mcp_servers: None,
rules: None,
})
);
}
#[tokio::test(start_paused = true)]
async fn fetch_cloud_requirements_times_out() {
let auth_manager = auth_manager_with_plan("enterprise");
let service = CloudRequirementsService::new(
auth_manager,
Arc::new(PendingFetcher),
CLOUD_REQUIREMENTS_TIMEOUT,
);
let handle = tokio::spawn(async move { service.fetch_with_timeout().await });
tokio::time::advance(CLOUD_REQUIREMENTS_TIMEOUT + Duration::from_millis(1)).await;
let result = handle.await.expect("cloud requirements task");
assert!(result.is_none());
}
}


@@ -193,6 +193,7 @@ impl Stream for AggregatedStream {
content: vec![ContentItem::OutputText {
text: std::mem::take(&mut this.cumulative),
}],
end_turn: None,
};
this.pending
.push_back(ResponseEvent::OutputItemDone(aggregated_message));


@@ -24,6 +24,8 @@ use tokio_tungstenite::tungstenite::Error as WsError;
use tokio_tungstenite::tungstenite::Message;
use tokio_tungstenite::tungstenite::client::IntoClientRequest;
use tracing::debug;
use tracing::error;
use tracing::info;
use tracing::trace;
use url::Url;
@@ -151,15 +153,30 @@ async fn connect_websocket(
headers: HeaderMap,
turn_state: Option<Arc<OnceLock<String>>>,
) -> Result<(WsStream, bool), ApiError> {
info!("connecting to websocket: {url}");
let mut request = url
.as_str()
.into_client_request()
.map_err(|err| ApiError::Stream(format!("failed to build websocket request: {err}")))?;
request.headers_mut().extend(headers);
let (stream, response) = tokio_tungstenite::connect_async(request)
.await
.map_err(|err| map_ws_error(err, &url))?;
let response = tokio_tungstenite::connect_async(request).await;
let (stream, response) = match response {
Ok((stream, response)) => {
info!(
"successfully connected to websocket: {url}, headers: {:?}",
response.headers()
);
(stream, response)
}
Err(err) => {
error!("failed to connect to websocket: {err}, url: {url}");
return Err(map_ws_error(err, &url));
}
};
let reasoning_included = response.headers().contains_key(X_REASONING_INCLUDED_HEADER);
if let Some(turn_state) = turn_state
&& let Some(header_value) = response
@@ -271,7 +288,7 @@ async fn run_websocket_response_stream(
Message::Pong(_) => {}
Message::Close(_) => {
return Err(ApiError::Stream(
"websocket closed before response.completed".into(),
"websocket closed by server before response.completed".into(),
));
}
_ => {}


@@ -386,6 +386,7 @@ mod tests {
content: vec![ContentItem::InputText {
text: "hi".to_string(),
}],
end_turn: None,
}];
let req = ChatRequestBuilder::new("gpt-test", "inst", &prompt_input, &[])
.conversation_id(Some("conv-1".into()))
@@ -412,6 +413,7 @@ mod tests {
content: vec![ContentItem::InputText {
text: "read these".to_string(),
}],
end_turn: None,
},
ResponseItem::FunctionCall {
id: None,


@@ -15,13 +15,12 @@ pub(crate) fn subagent_header(source: &Option<SessionSource>) -> Option<String>
return None;
};
match sub {
codex_protocol::protocol::SubAgentSource::Review => Some("review".to_string()),
codex_protocol::protocol::SubAgentSource::Compact => Some("compact".to_string()),
codex_protocol::protocol::SubAgentSource::ThreadSpawn { .. } => {
Some("collab_spawn".to_string())
}
codex_protocol::protocol::SubAgentSource::Other(label) => Some(label.clone()),
other => Some(
serde_json::to_value(other)
.ok()
.and_then(|v| v.as_str().map(std::string::ToString::to_string))
.unwrap_or_else(|| "other".to_string()),
),
}
}


@@ -223,11 +223,13 @@ mod tests {
id: Some("m1".into()),
role: "assistant".into(),
content: Vec::new(),
end_turn: None,
},
ResponseItem::Message {
id: None,
role: "assistant".into(),
content: Vec::new(),
end_turn: None,
},
];


@@ -330,6 +330,7 @@ async fn append_assistant_text(
id: None,
role: "assistant".to_string(),
content: vec![],
end_turn: None,
};
*assistant_item = Some(item.clone());
let _ = tx_event


@@ -291,7 +291,7 @@ pub fn process_responses_event(
if let Ok(item) = serde_json::from_value::<ResponseItem>(item_val) {
return Ok(Some(ResponseEvent::OutputItemAdded(item)));
}
debug!("failed to parse ResponseItem from output_item.done");
debug!("failed to parse ResponseItem from output_item.added");
}
}
"response.reasoning_summary_part.added" => {


@@ -308,6 +308,7 @@ async fn streaming_client_retries_on_transport_error() -> Result<()> {
content: vec![ContentItem::InputText {
text: "hi".to_string(),
}],
end_turn: None,
}],
tools: Vec::<Value>::new(),
parallel_tool_calls: false,


@@ -77,6 +77,7 @@ async fn models_client_hits_models_endpoint() {
priority: 1,
upgrade: None,
base_instructions: "base instructions".to_string(),
model_messages: None,
supports_reasoning_summaries: false,
support_verbosity: false,
default_verbosity: None,


@@ -0,0 +1,40 @@
/*
* codex-backend
*
* codex-backend
*
* The version of the OpenAPI document: 0.0.1
*
* Generated by: https://openapi-generator.tech
*/
use serde::Deserialize;
use serde::Serialize;
#[derive(Clone, Default, Debug, PartialEq, Serialize, Deserialize)]
pub struct ConfigFileResponse {
#[serde(rename = "contents", skip_serializing_if = "Option::is_none")]
pub contents: Option<String>,
#[serde(rename = "sha256", skip_serializing_if = "Option::is_none")]
pub sha256: Option<String>,
#[serde(rename = "updated_at", skip_serializing_if = "Option::is_none")]
pub updated_at: Option<String>,
#[serde(rename = "updated_by_user_id", skip_serializing_if = "Option::is_none")]
pub updated_by_user_id: Option<String>,
}
impl ConfigFileResponse {
pub fn new(
contents: Option<String>,
sha256: Option<String>,
updated_at: Option<String>,
updated_by_user_id: Option<String>,
) -> ConfigFileResponse {
ConfigFileResponse {
contents,
sha256,
updated_at,
updated_by_user_id,
}
}
}


@@ -3,6 +3,10 @@
// Currently export only the types referenced by the workspace
// The process for this will change
// Config
pub mod config_file_response;
pub use self::config_file_response::ConfigFileResponse;
// Cloud Tasks
pub mod code_task_details_response;
pub use self::code_task_details_response::CodeTaskDetailsResponse;


@@ -24,21 +24,21 @@ pub fn builtin_approval_presets() -> Vec<ApprovalPreset> {
ApprovalPreset {
id: "read-only",
label: "Read Only",
description: "Requires approval to edit files and run commands.",
description: "Codex can read files in the current workspace. Approval is required to edit files or access the internet.",
approval: AskForApproval::OnRequest,
sandbox: SandboxPolicy::ReadOnly,
},
ApprovalPreset {
id: "auto",
label: "Agent",
description: "Read and edit files, and run commands.",
label: "Default",
description: "Codex can read and edit files in the current workspace, and run commands. Approval is required to access the internet or edit other files. (Identical to Agent mode)",
approval: AskForApproval::OnRequest,
sandbox: SandboxPolicy::new_workspace_write_policy(),
},
ApprovalPreset {
id: "full-access",
label: "Agent (full access)",
description: "Codex can edit files outside this workspace and run commands with network access. Exercise caution when using.",
label: "Full Access",
description: "Codex can edit files outside this workspace and access the internet without asking for approval. Exercise caution when using.",
approval: AskForApproval::Never,
sandbox: SandboxPolicy::DangerFullAccess,
},


@@ -37,6 +37,7 @@ codex-keyring-store = { workspace = true }
codex-otel = { workspace = true }
codex-protocol = { workspace = true }
codex-rmcp-client = { workspace = true }
codex-state = { workspace = true }
codex-utils-absolute-path = { workspace = true }
codex-utils-pty = { workspace = true }
codex-utils-readiness = { workspace = true }
@@ -55,6 +56,7 @@ indoc = { workspace = true }
keyring = { workspace = true, features = ["crypto-rust"] }
libc = { workspace = true }
mcp-types = { workspace = true }
multimap = { workspace = true }
once_cell = { workspace = true }
os_info = { workspace = true }
rand = { workspace = true }
@@ -64,6 +66,7 @@ reqwest = { workspace = true, features = ["json", "stream"] }
schemars = { workspace = true }
serde = { workspace = true, features = ["derive"] }
serde_json = { workspace = true }
serde_path_to_error = { workspace = true }
serde_yaml = { workspace = true }
sha1 = { workspace = true }
sha2 = { workspace = true }

File diff suppressed because it is too large

File diff suppressed because one or more lines are too long


@@ -1,4 +1,5 @@
use crate::agent::AgentStatus;
use crate::agent::guards::Guards;
use crate::error::CodexErr;
use crate::error::Result as CodexResult;
use crate::thread_manager::ThreadManagerState;
@@ -12,18 +13,25 @@ use tokio::sync::watch;
/// Control-plane handle for multi-agent operations.
/// `AgentControl` is held by each session (via `SessionServices`). It provides the ability to
/// spawn new agents and serves as the inter-agent communication layer.
/// An `AgentControl` instance is shared per "user session", which means the same `AgentControl`
/// is used for every sub-agent spawned by Codex. This ensures the guards are scoped to a user
/// session.
#[derive(Clone, Default)]
pub(crate) struct AgentControl {
/// Weak handle back to the global thread registry/state.
/// This is `Weak` to avoid reference cycles and shadow persistence of the form
/// `ThreadManagerState -> CodexThread -> Session -> SessionServices -> ThreadManagerState`.
manager: Weak<ThreadManagerState>,
state: Arc<Guards>,
}
impl AgentControl {
/// Construct a new `AgentControl` that can spawn/message agents via the given manager state.
pub(crate) fn new(manager: Weak<ThreadManagerState>) -> Self {
Self { manager }
Self {
manager,
..Default::default()
}
}
/// Spawn a new agent thread and submit the initial prompt.
@@ -31,9 +39,21 @@ impl AgentControl {
&self,
config: crate::config::Config,
prompt: String,
session_source: Option<codex_protocol::protocol::SessionSource>,
) -> CodexResult<ThreadId> {
let state = self.upgrade()?;
let new_thread = state.spawn_new_thread(config, self.clone()).await?;
let reservation = self.state.reserve_spawn_slot(config.agent_max_threads)?;
// The same `AgentControl` is passed along when spawning the thread.
let new_thread = match session_source {
Some(session_source) => {
state
.spawn_new_thread_with_source(config, self.clone(), session_source)
.await?
}
None => state.spawn_new_thread(config, self.clone()).await?,
};
reservation.commit(new_thread.thread_id);
// Notify that a new thread has been created. Clients process this notification
// to subscribe to or drain the newly created thread.
@@ -67,6 +87,7 @@ impl AgentControl {
.await;
if matches!(result, Err(CodexErr::InternalAgentDied)) {
let _ = state.remove_thread(&agent_id).await;
self.state.release_spawned_thread(agent_id);
}
result
}
@@ -82,6 +103,7 @@ impl AgentControl {
let state = self.upgrade()?;
let result = state.send_op(agent_id, Op::Shutdown {}).await;
let _ = state.remove_thread(&agent_id).await;
self.state.release_spawned_thread(agent_id);
result
}
@@ -124,6 +146,7 @@ mod tests {
use crate::config::Config;
use crate::config::ConfigBuilder;
use assert_matches::assert_matches;
use codex_protocol::config_types::ModeKind;
use codex_protocol::protocol::ErrorEvent;
use codex_protocol::protocol::EventMsg;
use codex_protocol::protocol::TurnAbortReason;
@@ -132,17 +155,25 @@ mod tests {
use codex_protocol::protocol::TurnStartedEvent;
use pretty_assertions::assert_eq;
use tempfile::TempDir;
use toml::Value as TomlValue;
async fn test_config() -> (TempDir, Config) {
async fn test_config_with_cli_overrides(
cli_overrides: Vec<(String, TomlValue)>,
) -> (TempDir, Config) {
let home = TempDir::new().expect("create temp dir");
let config = ConfigBuilder::default()
.codex_home(home.path().to_path_buf())
.cli_overrides(cli_overrides)
.build()
.await
.expect("load default test config");
(home, config)
}
async fn test_config() -> (TempDir, Config) {
test_config_with_cli_overrides(Vec::new()).await
}
struct AgentControlHarness {
_home: TempDir,
config: Config,
@@ -201,6 +232,7 @@ mod tests {
async fn on_event_updates_status_from_task_started() {
let status = agent_status_from_event(&EventMsg::TurnStarted(TurnStartedEvent {
model_context_window: None,
collaboration_mode_kind: ModeKind::Custom,
}));
assert_eq!(status, Some(AgentStatus::Running));
}
@@ -246,7 +278,7 @@ mod tests {
let control = AgentControl::default();
let (_home, config) = test_config().await;
let err = control
.spawn_agent(config, "hello".to_string())
.spawn_agent(config, "hello".to_string(), None)
.await
.expect_err("spawn_agent should fail without a manager");
assert_eq!(
@@ -348,7 +380,7 @@ mod tests {
let harness = AgentControlHarness::new().await;
let thread_id = harness
.control
.spawn_agent(harness.config.clone(), "spawned".to_string())
.spawn_agent(harness.config.clone(), "spawned".to_string(), None)
.await
.expect("spawn_agent should succeed");
let _thread = harness
@@ -373,4 +405,117 @@ mod tests {
.find(|entry| *entry == expected);
assert_eq!(captured, Some(expected));
}
#[tokio::test]
async fn spawn_agent_respects_max_threads_limit() {
let max_threads = 1usize;
let (_home, config) = test_config_with_cli_overrides(vec![(
"agents.max_threads".to_string(),
TomlValue::Integer(max_threads as i64),
)])
.await;
let manager = ThreadManager::with_models_provider_and_home(
CodexAuth::from_api_key("dummy"),
config.model_provider.clone(),
config.codex_home.clone(),
);
let control = manager.agent_control();
let _ = manager
.start_thread(config.clone())
.await
.expect("start thread");
let first_agent_id = control
.spawn_agent(config.clone(), "hello".to_string(), None)
.await
.expect("spawn_agent should succeed");
let err = control
.spawn_agent(config, "hello again".to_string(), None)
.await
.expect_err("spawn_agent should respect max threads");
let CodexErr::AgentLimitReached {
max_threads: seen_max_threads,
} = err
else {
panic!("expected CodexErr::AgentLimitReached");
};
assert_eq!(seen_max_threads, max_threads);
let _ = control
.shutdown_agent(first_agent_id)
.await
.expect("shutdown agent");
}
#[tokio::test]
async fn spawn_agent_releases_slot_after_shutdown() {
let max_threads = 1usize;
let (_home, config) = test_config_with_cli_overrides(vec![(
"agents.max_threads".to_string(),
TomlValue::Integer(max_threads as i64),
)])
.await;
let manager = ThreadManager::with_models_provider_and_home(
CodexAuth::from_api_key("dummy"),
config.model_provider.clone(),
config.codex_home.clone(),
);
let control = manager.agent_control();
let first_agent_id = control
.spawn_agent(config.clone(), "hello".to_string(), None)
.await
.expect("spawn_agent should succeed");
let _ = control
.shutdown_agent(first_agent_id)
.await
.expect("shutdown agent");
let second_agent_id = control
.spawn_agent(config.clone(), "hello again".to_string(), None)
.await
.expect("spawn_agent should succeed after shutdown");
let _ = control
.shutdown_agent(second_agent_id)
.await
.expect("shutdown agent");
}
#[tokio::test]
async fn spawn_agent_limit_shared_across_clones() {
let max_threads = 1usize;
let (_home, config) = test_config_with_cli_overrides(vec![(
"agents.max_threads".to_string(),
TomlValue::Integer(max_threads as i64),
)])
.await;
let manager = ThreadManager::with_models_provider_and_home(
CodexAuth::from_api_key("dummy"),
config.model_provider.clone(),
config.codex_home.clone(),
);
let control = manager.agent_control();
let cloned = control.clone();
let first_agent_id = cloned
.spawn_agent(config.clone(), "hello".to_string(), None)
.await
.expect("spawn_agent should succeed");
let err = control
.spawn_agent(config, "hello again".to_string(), None)
.await
.expect_err("spawn_agent should respect shared guard");
let CodexErr::AgentLimitReached { max_threads } = err else {
panic!("expected CodexErr::AgentLimitReached");
};
assert_eq!(max_threads, 1);
let _ = control
.shutdown_agent(first_agent_id)
.await
.expect("shutdown agent");
}
}


@@ -0,0 +1,238 @@
use crate::error::CodexErr;
use crate::error::Result;
use codex_protocol::ThreadId;
use codex_protocol::protocol::SessionSource;
use codex_protocol::protocol::SubAgentSource;
use std::collections::HashSet;
use std::sync::Arc;
use std::sync::Mutex;
use std::sync::atomic::AtomicUsize;
use std::sync::atomic::Ordering;
/// This structure adds limits to the multi-agent capabilities of Codex. In
/// the current implementation, it limits:
/// * Total number of sub-agents (i.e. threads) per user session
///
/// This structure is shared by all agents in the same user session (because the `AgentControl`
/// is).
#[derive(Default)]
pub(crate) struct Guards {
threads_set: Mutex<HashSet<ThreadId>>,
total_count: AtomicUsize,
}
/// The initial agent is at depth 0.
pub(crate) const MAX_THREAD_SPAWN_DEPTH: i32 = 1;
fn session_depth(session_source: &SessionSource) -> i32 {
match session_source {
SessionSource::SubAgent(SubAgentSource::ThreadSpawn { depth, .. }) => *depth,
SessionSource::SubAgent(_) => 0,
_ => 0,
}
}
pub(crate) fn next_thread_spawn_depth(session_source: &SessionSource) -> i32 {
session_depth(session_source).saturating_add(1)
}
pub(crate) fn exceeds_thread_spawn_depth_limit(depth: i32) -> bool {
depth > MAX_THREAD_SPAWN_DEPTH
}
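A worked example of the rule: with `MAX_THREAD_SPAWN_DEPTH = 1`, the root agent (depth 0) may spawn children (depth 1), but those children may not spawn grandchildren (depth 2). Standalone sketch mirroring the helpers above:

const MAX_THREAD_SPAWN_DEPTH: i32 = 1;

fn next_depth(parent_depth: i32) -> i32 {
    parent_depth.saturating_add(1)
}

fn main() {
    let child = next_depth(0);                     // root spawns a child
    assert!(child <= MAX_THREAD_SPAWN_DEPTH);      // depth 1: allowed
    let grandchild = next_depth(child);            // child tries to spawn
    assert!(grandchild > MAX_THREAD_SPAWN_DEPTH);  // depth 2: rejected
}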
impl Guards {
pub(crate) fn reserve_spawn_slot(
self: &Arc<Self>,
max_threads: Option<usize>,
) -> Result<SpawnReservation> {
if let Some(max_threads) = max_threads {
if !self.try_increment_spawned(max_threads) {
return Err(CodexErr::AgentLimitReached { max_threads });
}
} else {
self.total_count.fetch_add(1, Ordering::AcqRel);
}
Ok(SpawnReservation {
state: Arc::clone(self),
active: true,
})
}
pub(crate) fn release_spawned_thread(&self, thread_id: ThreadId) {
let removed = {
let mut threads = self
.threads_set
.lock()
.unwrap_or_else(std::sync::PoisonError::into_inner);
threads.remove(&thread_id)
};
if removed {
self.total_count.fetch_sub(1, Ordering::AcqRel);
}
}
fn register_spawned_thread(&self, thread_id: ThreadId) {
let mut threads = self
.threads_set
.lock()
.unwrap_or_else(std::sync::PoisonError::into_inner);
threads.insert(thread_id);
}
fn try_increment_spawned(&self, max_threads: usize) -> bool {
let mut current = self.total_count.load(Ordering::Acquire);
loop {
if current >= max_threads {
return false;
}
match self.total_count.compare_exchange_weak(
current,
current + 1,
Ordering::AcqRel,
Ordering::Acquire,
) {
Ok(_) => return true,
Err(updated) => current = updated,
}
}
}
}
pub(crate) struct SpawnReservation {
state: Arc<Guards>,
active: bool,
}
impl SpawnReservation {
pub(crate) fn commit(mut self, thread_id: ThreadId) {
self.state.register_spawned_thread(thread_id);
self.active = false;
}
}
impl Drop for SpawnReservation {
fn drop(&mut self) {
if self.active {
self.state.total_count.fetch_sub(1, Ordering::AcqRel);
}
}
}
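The reserve/commit/drop protocol above is the core mechanism: a slot is claimed optimistically with a CAS loop, handed over on `commit`, and returned automatically when an uncommitted reservation is dropped (e.g., because the spawn failed). A self-contained sketch of the same pattern with simplified types:

use std::sync::Arc;
use std::sync::atomic::{AtomicUsize, Ordering};

struct Slots(Arc<AtomicUsize>);

struct Reservation {
    count: Arc<AtomicUsize>,
    active: bool,
}

impl Slots {
    // CAS loop: claim a slot only while under the limit.
    fn reserve(&self, max: usize) -> Option<Reservation> {
        let mut current = self.0.load(Ordering::Acquire);
        loop {
            if current >= max {
                return None; // limit reached
            }
            match self.0.compare_exchange_weak(
                current,
                current + 1,
                Ordering::AcqRel,
                Ordering::Acquire,
            ) {
                Ok(_) => {
                    return Some(Reservation {
                        count: Arc::clone(&self.0),
                        active: true,
                    });
                }
                Err(observed) => current = observed, // lost a race; retry
            }
        }
    }
}

impl Reservation {
    fn commit(mut self) {
        self.active = false; // the slot is now owned by the spawned thread
    }
}

impl Drop for Reservation {
    fn drop(&mut self) {
        if self.active {
            self.count.fetch_sub(1, Ordering::AcqRel); // spawn failed: return the slot
        }
    }
}

fn main() {
    let slots = Slots(Arc::new(AtomicUsize::new(0)));
    let first = slots.reserve(1).expect("slot available");
    assert!(slots.reserve(1).is_none()); // limit of 1 enforced
    drop(first);                         // uncommitted reservation frees its slot
    assert!(slots.reserve(1).is_some());
}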
#[cfg(test)]
mod tests {
use super::*;
use pretty_assertions::assert_eq;
#[test]
fn session_depth_defaults_to_zero_for_root_sources() {
assert_eq!(session_depth(&SessionSource::Cli), 0);
}
#[test]
fn thread_spawn_depth_increments_and_enforces_limit() {
let session_source = SessionSource::SubAgent(SubAgentSource::ThreadSpawn {
parent_thread_id: ThreadId::new(),
depth: 1,
});
let child_depth = next_thread_spawn_depth(&session_source);
assert_eq!(child_depth, 2);
assert!(exceeds_thread_spawn_depth_limit(child_depth));
}
#[test]
fn non_thread_spawn_subagents_default_to_depth_zero() {
let session_source = SessionSource::SubAgent(SubAgentSource::Review);
assert_eq!(session_depth(&session_source), 0);
assert_eq!(next_thread_spawn_depth(&session_source), 1);
assert!(!exceeds_thread_spawn_depth_limit(1));
}
#[test]
fn reservation_drop_releases_slot() {
let guards = Arc::new(Guards::default());
let reservation = guards.reserve_spawn_slot(Some(1)).expect("reserve slot");
drop(reservation);
let reservation = guards.reserve_spawn_slot(Some(1)).expect("slot released");
drop(reservation);
}
#[test]
fn commit_holds_slot_until_release() {
let guards = Arc::new(Guards::default());
let reservation = guards.reserve_spawn_slot(Some(1)).expect("reserve slot");
let thread_id = ThreadId::new();
reservation.commit(thread_id);
let err = match guards.reserve_spawn_slot(Some(1)) {
Ok(_) => panic!("limit should be enforced"),
Err(err) => err,
};
let CodexErr::AgentLimitReached { max_threads } = err else {
panic!("expected CodexErr::AgentLimitReached");
};
assert_eq!(max_threads, 1);
guards.release_spawned_thread(thread_id);
let reservation = guards
.reserve_spawn_slot(Some(1))
.expect("slot released after thread removal");
drop(reservation);
}
#[test]
fn release_ignores_unknown_thread_id() {
let guards = Arc::new(Guards::default());
let reservation = guards.reserve_spawn_slot(Some(1)).expect("reserve slot");
let thread_id = ThreadId::new();
reservation.commit(thread_id);
guards.release_spawned_thread(ThreadId::new());
let err = match guards.reserve_spawn_slot(Some(1)) {
Ok(_) => panic!("limit should still be enforced"),
Err(err) => err,
};
let CodexErr::AgentLimitReached { max_threads } = err else {
panic!("expected CodexErr::AgentLimitReached");
};
assert_eq!(max_threads, 1);
guards.release_spawned_thread(thread_id);
let reservation = guards
.reserve_spawn_slot(Some(1))
.expect("slot released after real thread removal");
drop(reservation);
}
#[test]
fn release_is_idempotent_for_registered_threads() {
let guards = Arc::new(Guards::default());
let reservation = guards.reserve_spawn_slot(Some(1)).expect("reserve slot");
let first_id = ThreadId::new();
reservation.commit(first_id);
guards.release_spawned_thread(first_id);
let reservation = guards.reserve_spawn_slot(Some(1)).expect("slot reused");
let second_id = ThreadId::new();
reservation.commit(second_id);
guards.release_spawned_thread(first_id);
let err = match guards.reserve_spawn_slot(Some(1)) {
Ok(_) => panic!("limit should still be enforced"),
Err(err) => err,
};
let CodexErr::AgentLimitReached { max_threads } = err else {
panic!("expected CodexErr::AgentLimitReached");
};
assert_eq!(max_threads, 1);
guards.release_spawned_thread(second_id);
let reservation = guards
.reserve_spawn_slot(Some(1))
.expect("slot released after second thread removal");
drop(reservation);
}
}


@@ -1,8 +1,12 @@
pub(crate) mod control;
mod guards;
pub(crate) mod role;
pub(crate) mod status;
pub(crate) use codex_protocol::protocol::AgentStatus;
pub(crate) use control::AgentControl;
pub(crate) use guards::MAX_THREAD_SPAWN_DEPTH;
pub(crate) use guards::exceeds_thread_spawn_depth_limit;
pub(crate) use guards::next_thread_spawn_depth;
pub(crate) use role::AgentRole;
pub(crate) use status::agent_status_from_event;


@@ -1,20 +1,22 @@
use crate::config::Config;
use crate::protocol::SandboxPolicy;
use codex_protocol::openai_models::ReasoningEffort;
use serde::Deserialize;
use serde::Serialize;
/// Base instructions for the orchestrator role.
const ORCHESTRATOR_PROMPT: &str = include_str!("../../templates/agents/orchestrator.md");
/// Base instructions for the worker role.
const WORKER_PROMPT: &str = include_str!("../../gpt-5.2-codex_prompt.md");
/// Default worker model override used by the worker role.
const WORKER_MODEL: &str = "gpt-5.2-codex";
/// Default model override used by the explorer role.
// TODO(jif) update when we have something smarter.
const EXPLORER_MODEL: &str = "gpt-5.2-codex";
/// Enumerated list of all supported agent roles.
const ALL_ROLES: [AgentRole; 3] = [
AgentRole::Default,
AgentRole::Orchestrator,
AgentRole::Explorer,
AgentRole::Worker,
// TODO(jif) add when we have stable prompts + models
// AgentRole::Orchestrator,
];
/// Hard-coded agent role selection used when spawning sub-agents.
@@ -27,6 +29,8 @@ pub enum AgentRole {
Orchestrator,
/// Task-executing agent with a fixed model override.
Worker,
/// Task-executing agent with a fixed model override.
Explorer,
}
/// Immutable profile data that drives per-agent configuration overrides.
@@ -36,8 +40,12 @@ pub struct AgentProfile {
pub base_instructions: Option<&'static str>,
/// Optional model override.
pub model: Option<&'static str>,
/// Optional reasoning effort override.
pub reasoning_effort: Option<ReasoningEffort>,
/// Whether to force a read-only sandbox policy.
pub read_only: bool,
/// Description to include in the tool specs.
pub description: &'static str,
}
impl AgentRole {
@@ -45,7 +53,19 @@ impl AgentRole {
pub fn enum_values() -> Vec<String> {
ALL_ROLES
.iter()
.filter_map(|role| serde_json::to_string(role).ok())
.filter_map(|role| {
let description = role.profile().description;
serde_json::to_string(role)
.map(|role| {
let description = if !description.is_empty() {
format!(r#", "description": {description}"#)
} else {
String::new()
};
format!(r#"{{ "name": {role}{description}}}"#)
})
.ok()
})
.collect()
}
@@ -58,8 +78,31 @@ impl AgentRole {
..Default::default()
},
AgentRole::Worker => AgentProfile {
base_instructions: Some(WORKER_PROMPT),
model: Some(WORKER_MODEL),
// base_instructions: Some(WORKER_PROMPT),
// model: Some(WORKER_MODEL),
description: r#"Use for execution and production work.
Typical tasks:
- Implement part of a feature
- Fix tests or bugs
- Split large refactors into independent chunks
Rules:
- Explicitly assign **ownership** of the task (files / responsibility).
- Always tell workers they are **not alone in the codebase** and that they should leave edits made by others untouched"#,
..Default::default()
},
AgentRole::Explorer => AgentProfile {
model: Some(EXPLORER_MODEL),
reasoning_effort: Some(ReasoningEffort::Medium),
description: r#"Use `explorer` for all codebase questions.
Explorers are fast and authoritative.
Always prefer them over manual search or file reading.
Rules:
- Ask explorers first and precisely.
- Do not re-read or re-search code they cover.
- Trust explorer results without verification.
- Run explorers in parallel when useful.
- Reuse existing explorers for related questions.
"#,
..Default::default()
},
}
@@ -74,6 +117,9 @@ impl AgentRole {
if let Some(model) = profile.model {
config.model = Some(model.to_string());
}
if let Some(reasoning_effort) = profile.reasoning_effort {
config.model_reasoning_effort = Some(reasoning_effort)
}
if profile.read_only {
config
.sandbox_policy


@@ -9,6 +9,7 @@ use serde::Deserialize;
use crate::auth::CodexAuth;
use crate::error::CodexErr;
use crate::error::ModelCapError;
use crate::error::RetryLimitReachedError;
use crate::error::UnexpectedResponseError;
use crate::error::UsageLimitReachedError;
@@ -49,6 +50,23 @@ pub(crate) fn map_api_error(err: ApiError) -> CodexErr {
} else if status == http::StatusCode::INTERNAL_SERVER_ERROR {
CodexErr::InternalServerError
} else if status == http::StatusCode::TOO_MANY_REQUESTS {
if let Some(model) = headers
.as_ref()
.and_then(|map| map.get(MODEL_CAP_MODEL_HEADER))
.and_then(|value| value.to_str().ok())
.map(str::to_string)
{
let reset_after_seconds = headers
.as_ref()
.and_then(|map| map.get(MODEL_CAP_RESET_AFTER_HEADER))
.and_then(|value| value.to_str().ok())
.and_then(|value| value.parse::<u64>().ok());
return CodexErr::ModelCap(ModelCapError {
model,
reset_after_seconds,
});
}
if let Ok(err) = serde_json::from_str::<UsageErrorResponse>(&body_text) {
if err.error.error_type.as_deref() == Some("usage_limit_reached") {
let rate_limits = headers.as_ref().and_then(parse_rate_limit);
@@ -92,6 +110,42 @@ pub(crate) fn map_api_error(err: ApiError) -> CodexErr {
}
}
const MODEL_CAP_MODEL_HEADER: &str = "x-codex-model-cap-model";
const MODEL_CAP_RESET_AFTER_HEADER: &str = "x-codex-model-cap-reset-after-seconds";
#[cfg(test)]
mod tests {
use super::*;
use codex_api::TransportError;
use http::HeaderMap;
use http::StatusCode;
#[test]
fn map_api_error_maps_model_cap_headers() {
let mut headers = HeaderMap::new();
headers.insert(
MODEL_CAP_MODEL_HEADER,
http::HeaderValue::from_static("boomslang"),
);
headers.insert(
MODEL_CAP_RESET_AFTER_HEADER,
http::HeaderValue::from_static("120"),
);
let err = map_api_error(ApiError::Transport(TransportError::Http {
status: StatusCode::TOO_MANY_REQUESTS,
url: Some("http://example.com/v1/responses".to_string()),
headers: Some(headers),
body: Some(String::new()),
}));
let CodexErr::ModelCap(model_cap) = err else {
panic!("expected CodexErr::ModelCap, got {err:?}");
};
assert_eq!(model_cap.model, "boomslang");
assert_eq!(model_cap.reset_after_seconds, Some(120));
}
}
fn extract_request_id(headers: Option<&HeaderMap>) -> Option<String> {
headers.and_then(|map| {
["cf-ray", "x-request-id", "x-oai-request-id"]


@@ -42,6 +42,7 @@ pub(crate) async fn apply_patch(
turn_context.approval_policy,
&turn_context.sandbox_policy,
&turn_context.cwd,
turn_context.windows_sandbox_level,
) {
SafetyCheck::AutoApprove {
user_explicitly_approved,


@@ -1,5 +1,6 @@
mod storage;
use async_trait::async_trait;
use chrono::Utc;
use reqwest::StatusCode;
use serde::Deserialize;
@@ -12,8 +13,9 @@ use std::path::Path;
use std::path::PathBuf;
use std::sync::Arc;
use std::sync::Mutex;
use std::sync::RwLock;
use codex_app_server_protocol::AuthMode;
use codex_app_server_protocol::AuthMode as ApiAuthMode;
use codex_protocol::config_types::ForcedLoginMethod;
pub use crate::auth::storage::AuthCredentialsStoreMode;
@@ -23,6 +25,7 @@ use crate::auth::storage::create_auth_storage;
use crate::config::Config;
use crate::error::RefreshTokenFailedError;
use crate::error::RefreshTokenFailedReason;
use crate::token_data::IdTokenInfo;
use crate::token_data::KnownPlan as InternalKnownPlan;
use crate::token_data::PlanType as InternalPlanType;
use crate::token_data::TokenData;
@@ -33,19 +36,50 @@ use codex_protocol::account::PlanType as AccountPlanType;
use serde_json::Value;
use thiserror::Error;
#[derive(Debug, Clone)]
pub struct CodexAuth {
pub mode: AuthMode,
/// Account type for the current user.
///
/// This is used internally to determine the base URL for generating responses,
/// and to gate ChatGPT-only behaviors like rate limits and available models (as
/// opposed to API key-based auth).
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub enum AuthMode {
ApiKey,
Chatgpt,
}
pub(crate) api_key: Option<String>,
pub(crate) auth_dot_json: Arc<Mutex<Option<AuthDotJson>>>,
/// Authentication mechanism used by the current user.
#[derive(Debug, Clone)]
pub enum CodexAuth {
ApiKey(ApiKeyAuth),
Chatgpt(ChatgptAuth),
ChatgptAuthTokens(ChatgptAuthTokens),
}
#[derive(Debug, Clone)]
pub struct ApiKeyAuth {
api_key: String,
}
#[derive(Debug, Clone)]
pub struct ChatgptAuth {
state: ChatgptAuthState,
storage: Arc<dyn AuthStorageBackend>,
pub(crate) client: CodexHttpClient,
}
#[derive(Debug, Clone)]
pub struct ChatgptAuthTokens {
state: ChatgptAuthState,
}
#[derive(Debug, Clone)]
struct ChatgptAuthState {
auth_dot_json: Arc<Mutex<Option<AuthDotJson>>>,
client: CodexHttpClient,
}
impl PartialEq for CodexAuth {
fn eq(&self, other: &Self) -> bool {
self.mode == other.mode
self.api_auth_mode() == other.api_auth_mode()
}
}
@@ -68,6 +102,31 @@ pub enum RefreshTokenError {
Transient(#[from] std::io::Error),
}
#[derive(Clone, Debug, PartialEq, Eq)]
pub struct ExternalAuthTokens {
pub access_token: String,
pub id_token: String,
}
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub enum ExternalAuthRefreshReason {
Unauthorized,
}
#[derive(Clone, Debug, PartialEq, Eq)]
pub struct ExternalAuthRefreshContext {
pub reason: ExternalAuthRefreshReason,
pub previous_account_id: Option<String>,
}
#[async_trait]
pub trait ExternalAuthRefresher: Send + Sync {
async fn refresh(
&self,
context: ExternalAuthRefreshContext,
) -> std::io::Result<ExternalAuthTokens>;
}
impl RefreshTokenError {
pub fn failed_reason(&self) -> Option<RefreshTokenFailedReason> {
match self {
@@ -87,14 +146,78 @@ impl From<RefreshTokenError> for std::io::Error {
}
impl CodexAuth {
fn from_auth_dot_json(
codex_home: &Path,
auth_dot_json: AuthDotJson,
auth_credentials_store_mode: AuthCredentialsStoreMode,
client: CodexHttpClient,
) -> std::io::Result<Self> {
let auth_mode = auth_dot_json.resolved_mode();
if auth_mode == ApiAuthMode::ApiKey {
let Some(api_key) = auth_dot_json.openai_api_key.as_deref() else {
return Err(std::io::Error::other("API key auth is missing a key."));
};
return Ok(CodexAuth::from_api_key_with_client(api_key, client));
}
let storage_mode = auth_dot_json.storage_mode(auth_credentials_store_mode);
let state = ChatgptAuthState {
auth_dot_json: Arc::new(Mutex::new(Some(auth_dot_json))),
client,
};
match auth_mode {
ApiAuthMode::Chatgpt => {
let storage = create_auth_storage(codex_home.to_path_buf(), storage_mode);
Ok(Self::Chatgpt(ChatgptAuth { state, storage }))
}
ApiAuthMode::ChatgptAuthTokens => {
Ok(Self::ChatgptAuthTokens(ChatgptAuthTokens { state }))
}
ApiAuthMode::ApiKey => unreachable!("api key mode is handled above"),
}
}
/// Loads the available auth information from auth storage.
pub fn from_auth_storage(
codex_home: &Path,
auth_credentials_store_mode: AuthCredentialsStoreMode,
) -> std::io::Result<Option<CodexAuth>> {
) -> std::io::Result<Option<Self>> {
load_auth(codex_home, false, auth_credentials_store_mode)
}
pub fn internal_auth_mode(&self) -> AuthMode {
match self {
Self::ApiKey(_) => AuthMode::ApiKey,
Self::Chatgpt(_) | Self::ChatgptAuthTokens(_) => AuthMode::Chatgpt,
}
}
pub fn api_auth_mode(&self) -> ApiAuthMode {
match self {
Self::ApiKey(_) => ApiAuthMode::ApiKey,
Self::Chatgpt(_) => ApiAuthMode::Chatgpt,
Self::ChatgptAuthTokens(_) => ApiAuthMode::ChatgptAuthTokens,
}
}
pub fn is_chatgpt_auth(&self) -> bool {
self.internal_auth_mode() == AuthMode::Chatgpt
}
pub fn is_external_chatgpt_tokens(&self) -> bool {
matches!(self, Self::ChatgptAuthTokens(_))
}
/// Returns `None` if `internal_auth_mode() != AuthMode::ApiKey`.
pub fn api_key(&self) -> Option<&str> {
match self {
Self::ApiKey(auth) => Some(auth.api_key.as_str()),
Self::Chatgpt(_) | Self::ChatgptAuthTokens(_) => None,
}
}
/// Returns `Err` if `is_chatgpt_auth()` is false.
pub fn get_token_data(&self) -> Result<TokenData, std::io::Error> {
let auth_dot_json: Option<AuthDotJson> = self.get_current_auth_json();
match auth_dot_json {
@@ -107,20 +230,23 @@ impl CodexAuth {
}
}
/// Returns the token string used for bearer authentication.
pub fn get_token(&self) -> Result<String, std::io::Error> {
match self.mode {
AuthMode::ApiKey => Ok(self.api_key.clone().unwrap_or_default()),
AuthMode::ChatGPT => {
let id_token = self.get_token_data()?.access_token;
Ok(id_token)
match self {
Self::ApiKey(auth) => Ok(auth.api_key.clone()),
Self::Chatgpt(_) | Self::ChatgptAuthTokens(_) => {
let access_token = self.get_token_data()?.access_token;
Ok(access_token)
}
}
}
/// Returns `None` if `is_chatgpt_auth()` is false.
pub fn get_account_id(&self) -> Option<String> {
self.get_current_token_data().and_then(|t| t.account_id)
}
/// Returns `None` if `is_chatgpt_auth()` is false.
pub fn get_account_email(&self) -> Option<String> {
self.get_current_token_data().and_then(|t| t.id_token.email)
}
@@ -132,6 +258,7 @@ impl CodexAuth {
pub fn account_plan_type(&self) -> Option<AccountPlanType> {
let map_known = |kp: &InternalKnownPlan| match kp {
InternalKnownPlan::Free => AccountPlanType::Free,
InternalKnownPlan::Go => AccountPlanType::Go,
InternalKnownPlan::Plus => AccountPlanType::Plus,
InternalKnownPlan::Pro => AccountPlanType::Pro,
InternalKnownPlan::Team => AccountPlanType::Team,
@@ -148,11 +275,18 @@ impl CodexAuth {
})
}
/// Returns `None` if `is_chatgpt_auth()` is false.
fn get_current_auth_json(&self) -> Option<AuthDotJson> {
let state = match self {
Self::Chatgpt(auth) => &auth.state,
Self::ChatgptAuthTokens(auth) => &auth.state,
Self::ApiKey(_) => return None,
};
#[expect(clippy::unwrap_used)]
self.auth_dot_json.lock().unwrap().clone()
state.auth_dot_json.lock().unwrap().clone()
}
/// Returns `None` if `is_chatgpt_auth()` is false.
fn get_current_token_data(&self) -> Option<TokenData> {
self.get_current_auth_json().and_then(|t| t.tokens)
}
@@ -160,6 +294,7 @@ impl CodexAuth {
/// Consider this private to integration tests.
pub fn create_dummy_chatgpt_auth_for_testing() -> Self {
let auth_dot_json = AuthDotJson {
auth_mode: Some(ApiAuthMode::Chatgpt),
openai_api_key: None,
tokens: Some(TokenData {
id_token: Default::default(),
@@ -170,24 +305,19 @@ impl CodexAuth {
last_refresh: Some(Utc::now()),
};
let auth_dot_json = Arc::new(Mutex::new(Some(auth_dot_json)));
Self {
api_key: None,
mode: AuthMode::ChatGPT,
storage: create_auth_storage(PathBuf::new(), AuthCredentialsStoreMode::File),
auth_dot_json,
client: crate::default_client::create_client(),
}
let client = crate::default_client::create_client();
let state = ChatgptAuthState {
auth_dot_json: Arc::new(Mutex::new(Some(auth_dot_json))),
client,
};
let storage = create_auth_storage(PathBuf::new(), AuthCredentialsStoreMode::File);
Self::Chatgpt(ChatgptAuth { state, storage })
}
fn from_api_key_with_client(api_key: &str, client: CodexHttpClient) -> Self {
Self {
api_key: Some(api_key.to_owned()),
mode: AuthMode::ApiKey,
storage: create_auth_storage(PathBuf::new(), AuthCredentialsStoreMode::File),
auth_dot_json: Arc::new(Mutex::new(None)),
client,
}
fn from_api_key_with_client(api_key: &str, _client: CodexHttpClient) -> Self {
Self::ApiKey(ApiKeyAuth {
api_key: api_key.to_owned(),
})
}
pub fn from_api_key(api_key: &str) -> Self {
@@ -195,6 +325,25 @@ impl CodexAuth {
}
}
impl ChatgptAuth {
fn current_auth_json(&self) -> Option<AuthDotJson> {
#[expect(clippy::unwrap_used)]
self.state.auth_dot_json.lock().unwrap().clone()
}
fn current_token_data(&self) -> Option<TokenData> {
self.current_auth_json().and_then(|auth| auth.tokens)
}
fn storage(&self) -> &Arc<dyn AuthStorageBackend> {
&self.storage
}
fn client(&self) -> &CodexHttpClient {
&self.state.client
}
}
pub const OPENAI_API_KEY_ENV_VAR: &str = "OPENAI_API_KEY";
pub const CODEX_API_KEY_ENV_VAR: &str = "CODEX_API_KEY";
@@ -229,6 +378,7 @@ pub fn login_with_api_key(
auth_credentials_store_mode: AuthCredentialsStoreMode,
) -> std::io::Result<()> {
let auth_dot_json = AuthDotJson {
auth_mode: Some(ApiAuthMode::ApiKey),
openai_api_key: Some(api_key.to_string()),
tokens: None,
last_refresh: None,
@@ -236,6 +386,20 @@ pub fn login_with_api_key(
save_auth(codex_home, &auth_dot_json, auth_credentials_store_mode)
}
/// Writes an in-memory auth payload for externally managed ChatGPT tokens.
pub fn login_with_chatgpt_auth_tokens(
codex_home: &Path,
id_token: &str,
access_token: &str,
) -> std::io::Result<()> {
let auth_dot_json = AuthDotJson::from_external_token_strings(id_token, access_token)?;
save_auth(
codex_home,
&auth_dot_json,
AuthCredentialsStoreMode::Ephemeral,
)
}
/// Persist the provided auth payload using the specified backend.
pub fn save_auth(
codex_home: &Path,
@@ -270,10 +434,10 @@ pub fn enforce_login_restrictions(config: &Config) -> std::io::Result<()> {
};
if let Some(required_method) = config.forced_login_method {
let method_violation = match (required_method, auth.mode) {
let method_violation = match (required_method, auth.internal_auth_mode()) {
(ForcedLoginMethod::Api, AuthMode::ApiKey) => None,
(ForcedLoginMethod::Chatgpt, AuthMode::ChatGPT) => None,
(ForcedLoginMethod::Api, AuthMode::ChatGPT) => Some(
(ForcedLoginMethod::Chatgpt, AuthMode::Chatgpt) => None,
(ForcedLoginMethod::Api, AuthMode::Chatgpt) => Some(
"API key login is required, but ChatGPT is currently being used. Logging out."
.to_string(),
),
@@ -293,7 +457,7 @@ pub fn enforce_login_restrictions(config: &Config) -> std::io::Result<()> {
}
if let Some(expected_account_id) = config.forced_chatgpt_workspace_id.as_deref() {
if auth.mode != AuthMode::ChatGPT {
if !auth.is_chatgpt_auth() {
return Ok(());
}
@@ -337,12 +501,26 @@ fn logout_with_message(
message: String,
auth_credentials_store_mode: AuthCredentialsStoreMode,
) -> std::io::Result<()> {
match logout(codex_home, auth_credentials_store_mode) {
Ok(_) => Err(std::io::Error::other(message)),
Err(err) => Err(std::io::Error::other(format!(
"{message}. Failed to remove auth.json: {err}"
))),
// External auth tokens live in the ephemeral store, but persistent auth may still exist
// from earlier logins. Clear both so a forced logout truly removes all active auth.
let removal_result = logout_all_stores(codex_home, auth_credentials_store_mode);
let error_message = match removal_result {
Ok(_) => message,
Err(err) => format!("{message}. Failed to remove auth.json: {err}"),
};
Err(std::io::Error::other(error_message))
}
fn logout_all_stores(
codex_home: &Path,
auth_credentials_store_mode: AuthCredentialsStoreMode,
) -> std::io::Result<bool> {
if auth_credentials_store_mode == AuthCredentialsStoreMode::Ephemeral {
return logout(codex_home, AuthCredentialsStoreMode::Ephemeral);
}
let removed_ephemeral = logout(codex_home, AuthCredentialsStoreMode::Ephemeral)?;
let removed_managed = logout(codex_home, auth_credentials_store_mode)?;
Ok(removed_ephemeral || removed_managed)
}
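A tiny standalone sketch of the dual-store cleanup above: both stores are cleared, and the call reports success if either held credentials (plain `Option`s stand in for the real storage backends):

fn logout_all(ephemeral: &mut Option<String>, managed: &mut Option<String>) -> bool {
    let removed_ephemeral = ephemeral.take().is_some(); // clear in-memory tokens first
    let removed_managed = managed.take().is_some();     // then the persistent store
    removed_ephemeral || removed_managed
}

fn main() {
    let (mut eph, mut disk) = (Some("external-tokens".to_string()), None::<String>);
    assert!(logout_all(&mut eph, &mut disk));  // something was removed
    assert!(!logout_all(&mut eph, &mut disk)); // nothing left on the second call
}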
fn load_auth(
@@ -350,6 +528,12 @@ fn load_auth(
enable_codex_api_key_env: bool,
auth_credentials_store_mode: AuthCredentialsStoreMode,
) -> std::io::Result<Option<CodexAuth>> {
let build_auth = |auth_dot_json: AuthDotJson, storage_mode| {
let client = crate::default_client::create_client();
CodexAuth::from_auth_dot_json(codex_home, auth_dot_json, storage_mode, client)
};
// API key via env var takes precedence over any other auth method.
if enable_codex_api_key_env && let Some(api_key) = read_codex_api_key_from_env() {
let client = crate::default_client::create_client();
return Ok(Some(CodexAuth::from_api_key_with_client(
@@ -358,39 +542,34 @@ fn load_auth(
)));
}
- let storage = create_auth_storage(codex_home.to_path_buf(), auth_credentials_store_mode);
+ // External ChatGPT auth tokens live in the in-memory (ephemeral) store. Always check this
+ // first so external auth takes precedence over any persisted credentials.
+ let ephemeral_storage = create_auth_storage(
+ codex_home.to_path_buf(),
+ AuthCredentialsStoreMode::Ephemeral,
+ );
+ if let Some(auth_dot_json) = ephemeral_storage.load()? {
+ let auth = build_auth(auth_dot_json, AuthCredentialsStoreMode::Ephemeral)?;
+ return Ok(Some(auth));
+ }
- let client = crate::default_client::create_client();
+ // If the caller explicitly requested ephemeral auth, there is no persisted fallback.
+ if auth_credentials_store_mode == AuthCredentialsStoreMode::Ephemeral {
+ return Ok(None);
+ }
+ // Fall back to the configured persistent store (file/keyring/auto) for managed auth.
+ let storage = create_auth_storage(codex_home.to_path_buf(), auth_credentials_store_mode);
let auth_dot_json = match storage.load()? {
Some(auth) => auth,
None => return Ok(None),
};
- let AuthDotJson {
- openai_api_key: auth_json_api_key,
- tokens,
- last_refresh,
- } = auth_dot_json;
- // Prefer AuthMode.ApiKey if it's set in the auth.json.
- if let Some(api_key) = &auth_json_api_key {
- return Ok(Some(CodexAuth::from_api_key_with_client(api_key, client)));
- }
- Ok(Some(CodexAuth {
- api_key: None,
- mode: AuthMode::ChatGPT,
- storage: storage.clone(),
- auth_dot_json: Arc::new(Mutex::new(Some(AuthDotJson {
- openai_api_key: None,
- tokens,
- last_refresh,
- }))),
- client,
- }))
+ let auth = build_auth(auth_dot_json, auth_credentials_store_mode)?;
+ Ok(Some(auth))
}
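The net effect of the rewritten lookup is a fixed precedence order: env API key first, then the ephemeral store, then (unless the caller asked for ephemeral-only) the persistent store. A self-contained sketch of just that ordering, with simplified stand-in types rather than the crate's API:

```rust
#[derive(Debug, PartialEq)]
enum Auth {
    ApiKey(String),
    Chatgpt(String),
}

fn load_auth(
    env_api_key: Option<&str>,
    ephemeral: Option<&str>,
    persistent: Option<&str>,
    ephemeral_only: bool,
) -> Option<Auth> {
    if let Some(key) = env_api_key {
        return Some(Auth::ApiKey(key.to_string())); // env var wins outright
    }
    if let Some(tokens) = ephemeral {
        return Some(Auth::Chatgpt(tokens.to_string())); // external tokens next
    }
    if ephemeral_only {
        return None; // no persisted fallback when ephemeral was requested
    }
    persistent.map(|tokens| Auth::Chatgpt(tokens.to_string()))
}

fn main() {
    assert_eq!(
        load_auth(Some("sk-env"), Some("ext"), Some("disk"), false),
        Some(Auth::ApiKey("sk-env".to_string()))
    );
    assert_eq!(load_auth(None, None, Some("disk"), true), None);
}
```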
- async fn update_tokens(
+ fn update_tokens(
storage: &Arc<dyn AuthStorageBackend>,
id_token: Option<String>,
access_token: Option<String>,
@@ -537,17 +716,82 @@ fn refresh_token_endpoint() -> String {
.unwrap_or_else(|_| REFRESH_TOKEN_URL.to_string())
}
use std::sync::RwLock;
impl AuthDotJson {
fn from_external_tokens(external: &ExternalAuthTokens, id_token: IdTokenInfo) -> Self {
let account_id = id_token.chatgpt_account_id.clone();
let tokens = TokenData {
id_token,
access_token: external.access_token.clone(),
refresh_token: String::new(),
account_id,
};
Self {
auth_mode: Some(ApiAuthMode::ChatgptAuthTokens),
openai_api_key: None,
tokens: Some(tokens),
last_refresh: Some(Utc::now()),
}
}
fn from_external_token_strings(id_token: &str, access_token: &str) -> std::io::Result<Self> {
let id_token_info = parse_id_token(id_token).map_err(std::io::Error::other)?;
let external = ExternalAuthTokens {
access_token: access_token.to_string(),
id_token: id_token.to_string(),
};
Ok(Self::from_external_tokens(&external, id_token_info))
}
fn resolved_mode(&self) -> ApiAuthMode {
if let Some(mode) = self.auth_mode {
return mode;
}
if self.openai_api_key.is_some() {
return ApiAuthMode::ApiKey;
}
ApiAuthMode::Chatgpt
}
fn storage_mode(
&self,
auth_credentials_store_mode: AuthCredentialsStoreMode,
) -> AuthCredentialsStoreMode {
if self.resolved_mode() == ApiAuthMode::ChatgptAuthTokens {
AuthCredentialsStoreMode::Ephemeral
} else {
auth_credentials_store_mode
}
}
}
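`resolved_mode` gives legacy `auth.json` files without an explicit `auth_mode` a deterministic interpretation: an explicit mode wins, otherwise a stored API key implies `ApiKey`, otherwise `Chatgpt`. A standalone sketch of that precedence (the enum names mirror the diff; the rest is illustrative):

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
enum ApiAuthMode {
    ApiKey,
    Chatgpt,
    ChatgptAuthTokens,
}

fn resolved_mode(auth_mode: Option<ApiAuthMode>, api_key: Option<&str>) -> ApiAuthMode {
    if let Some(mode) = auth_mode {
        return mode; // an explicit stored mode always wins
    }
    if api_key.is_some() {
        return ApiAuthMode::ApiKey; // legacy files: an api key implies ApiKey
    }
    ApiAuthMode::Chatgpt // otherwise assume a ChatGPT login
}

fn main() {
    assert_eq!(resolved_mode(None, Some("sk-test")), ApiAuthMode::ApiKey);
    assert_eq!(resolved_mode(None, None), ApiAuthMode::Chatgpt);
    assert_eq!(
        resolved_mode(Some(ApiAuthMode::ChatgptAuthTokens), Some("sk-test")),
        ApiAuthMode::ChatgptAuthTokens
    );
}
```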
/// Internal cached auth state.
- #[derive(Clone, Debug)]
+ #[derive(Clone)]
struct CachedAuth {
auth: Option<CodexAuth>,
/// Callback used to refresh external auth by asking the parent app for new tokens.
external_refresher: Option<Arc<dyn ExternalAuthRefresher>>,
}
impl Debug for CachedAuth {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.debug_struct("CachedAuth")
.field(
"auth_mode",
&self.auth.as_ref().map(CodexAuth::api_auth_mode),
)
.field(
"external_refresher",
&self.external_refresher.as_ref().map(|_| "present"),
)
.finish()
}
}
enum UnauthorizedRecoveryStep {
Reload,
RefreshToken,
ExternalRefresh,
Done,
}
@@ -556,30 +800,53 @@ enum ReloadOutcome {
Skipped,
}
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
enum UnauthorizedRecoveryMode {
Managed,
External,
}
// UnauthorizedRecovery is a state machine that attempts to refresh authentication when requests
// to the API fail with a 401 status code.
// The client calls next() each time it encounters a 401 error, once per retry.
// For API key based authentication, we don't do anything and let the error bubble to the user.
//
// For ChatGPT based authentication, we:
// 1. Attempt to reload the auth data from disk. We only reload if the account id matches the one the current process is running as.
// 2. Attempt to refresh the token using OAuth token refresh flow.
// If after both steps the server still responds with 401 we let the error bubble to the user.
//
// For external ChatGPT auth tokens (chatgptAuthTokens), UnauthorizedRecovery does not touch disk or refresh
// tokens locally. Instead it calls the ExternalAuthRefresher (account/chatgptAuthTokens/refresh) to ask the
// parent app for new tokens, stores them in the ephemeral auth store, and retries once.
pub struct UnauthorizedRecovery {
manager: Arc<AuthManager>,
step: UnauthorizedRecoveryStep,
expected_account_id: Option<String>,
mode: UnauthorizedRecoveryMode,
}
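Read together with the comment above, the step sequences reduce to two short paths: Reload → RefreshToken → Done for managed auth, and ExternalRefresh → Done for external tokens. A standalone sketch of just the transition table (illustrative only; the real next() also performs the reload/refresh work at each step):

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
enum Step {
    Reload,
    RefreshToken,
    ExternalRefresh,
    Done,
}

fn advance(step: Step) -> Step {
    match step {
        Step::Reload => Step::RefreshToken,  // managed: second 401 tries OAuth refresh
        Step::RefreshToken => Step::Done,    // managed path exhausted
        Step::ExternalRefresh => Step::Done, // external: one retry via the parent app
        Step::Done => Step::Done,
    }
}

fn main() {
    // Managed: Reload -> RefreshToken -> Done.
    assert_eq!(advance(Step::Reload), Step::RefreshToken);
    // External: ExternalRefresh -> Done.
    assert_eq!(advance(Step::ExternalRefresh), Step::Done);
}
```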
impl UnauthorizedRecovery {
fn new(manager: Arc<AuthManager>) -> Self {
- let expected_account_id = manager
- .auth_cached()
- .and_then(CodexAuth::get_account_id);
+ let cached_auth = manager.auth_cached();
+ let expected_account_id = cached_auth.as_ref().and_then(CodexAuth::get_account_id);
+ let mode = if cached_auth
+ .as_ref()
+ .is_some_and(CodexAuth::is_external_chatgpt_tokens)
{
UnauthorizedRecoveryMode::External
} else {
UnauthorizedRecoveryMode::Managed
};
let step = match mode {
UnauthorizedRecoveryMode::Managed => UnauthorizedRecoveryStep::Reload,
UnauthorizedRecoveryMode::External => UnauthorizedRecoveryStep::ExternalRefresh,
};
Self {
manager,
- step: UnauthorizedRecoveryStep::Reload,
+ step,
expected_account_id,
mode,
}
}
@@ -587,7 +854,14 @@ impl UnauthorizedRecovery {
if !self
.manager
.auth_cached()
- .is_some_and(|auth| auth.mode == AuthMode::ChatGPT)
+ .as_ref()
+ .is_some_and(CodexAuth::is_chatgpt_auth)
{
return false;
}
if self.mode == UnauthorizedRecoveryMode::External
&& !self.manager.has_external_auth_refresher()
{
return false;
}
@@ -622,6 +896,12 @@ impl UnauthorizedRecovery {
self.manager.refresh_token().await?;
self.step = UnauthorizedRecoveryStep::Done;
}
UnauthorizedRecoveryStep::ExternalRefresh => {
self.manager
.refresh_external_auth(ExternalAuthRefreshReason::Unauthorized)
.await?;
self.step = UnauthorizedRecoveryStep::Done;
}
UnauthorizedRecoveryStep::Done => {}
}
Ok(())
@@ -642,6 +922,7 @@ pub struct AuthManager {
inner: RwLock<CachedAuth>,
enable_codex_api_key_env: bool,
auth_credentials_store_mode: AuthCredentialsStoreMode,
forced_chatgpt_workspace_id: RwLock<Option<String>>,
}
impl AuthManager {
@@ -654,7 +935,7 @@ impl AuthManager {
enable_codex_api_key_env: bool,
auth_credentials_store_mode: AuthCredentialsStoreMode,
) -> Self {
- let auth = load_auth(
+ let managed_auth = load_auth(
&codex_home,
enable_codex_api_key_env,
auth_credentials_store_mode,
@@ -663,34 +944,46 @@ impl AuthManager {
.flatten();
Self {
codex_home,
- inner: RwLock::new(CachedAuth { auth }),
+ inner: RwLock::new(CachedAuth {
+ auth: managed_auth,
+ external_refresher: None,
+ }),
enable_codex_api_key_env,
auth_credentials_store_mode,
forced_chatgpt_workspace_id: RwLock::new(None),
}
}
#[cfg(any(test, feature = "test-support"))]
/// Create an AuthManager with a specific CodexAuth, for testing only.
pub fn from_auth_for_testing(auth: CodexAuth) -> Arc<Self> {
- let cached = CachedAuth { auth: Some(auth) };
+ let cached = CachedAuth {
+ auth: Some(auth),
+ external_refresher: None,
+ };
Arc::new(Self {
codex_home: PathBuf::from("non-existent"),
inner: RwLock::new(cached),
enable_codex_api_key_env: false,
auth_credentials_store_mode: AuthCredentialsStoreMode::File,
forced_chatgpt_workspace_id: RwLock::new(None),
})
}
#[cfg(any(test, feature = "test-support"))]
/// Create an AuthManager with a specific CodexAuth and codex home, for testing only.
pub fn from_auth_for_testing_with_home(auth: CodexAuth, codex_home: PathBuf) -> Arc<Self> {
- let cached = CachedAuth { auth: Some(auth) };
+ let cached = CachedAuth {
+ auth: Some(auth),
+ external_refresher: None,
+ };
Arc::new(Self {
codex_home,
inner: RwLock::new(cached),
enable_codex_api_key_env: false,
auth_credentials_store_mode: AuthCredentialsStoreMode::File,
forced_chatgpt_workspace_id: RwLock::new(None),
})
}
@@ -715,7 +1008,7 @@ impl AuthManager {
pub fn reload(&self) -> bool {
tracing::info!("Reloading auth");
let new_auth = self.load_auth_from_storage();
- self.set_auth(new_auth)
+ self.set_cached_auth(new_auth)
}
fn reload_if_account_id_matches(&self, expected_account_id: Option<&str>) -> ReloadOutcome {
@@ -739,11 +1032,11 @@ impl AuthManager {
}
tracing::info!("Reloading auth for account {expected_account_id}");
- self.set_auth(new_auth);
+ self.set_cached_auth(new_auth);
ReloadOutcome::Reloaded
}
- fn auths_equal(a: &Option<CodexAuth>, b: &Option<CodexAuth>) -> bool {
+ fn auths_equal(a: Option<&CodexAuth>, b: Option<&CodexAuth>) -> bool {
match (a, b) {
(None, None) => true,
(Some(a), Some(b)) => a == b,
@@ -761,9 +1054,10 @@ impl AuthManager {
.flatten()
}
- fn set_auth(&self, new_auth: Option<CodexAuth>) -> bool {
+ fn set_cached_auth(&self, new_auth: Option<CodexAuth>) -> bool {
if let Ok(mut guard) = self.inner.write() {
- let changed = !AuthManager::auths_equal(&guard.auth, &new_auth);
+ let previous = guard.auth.as_ref();
+ let changed = !AuthManager::auths_equal(previous, new_auth.as_ref());
tracing::info!("Reloaded auth, changed: {changed}");
guard.auth = new_auth;
changed
@@ -772,6 +1066,39 @@ impl AuthManager {
}
}
pub fn set_external_auth_refresher(&self, refresher: Arc<dyn ExternalAuthRefresher>) {
if let Ok(mut guard) = self.inner.write() {
guard.external_refresher = Some(refresher);
}
}
pub fn set_forced_chatgpt_workspace_id(&self, workspace_id: Option<String>) {
if let Ok(mut guard) = self.forced_chatgpt_workspace_id.write() {
*guard = workspace_id;
}
}
pub fn forced_chatgpt_workspace_id(&self) -> Option<String> {
self.forced_chatgpt_workspace_id
.read()
.ok()
.and_then(|guard| guard.clone())
}
pub fn has_external_auth_refresher(&self) -> bool {
self.inner
.read()
.ok()
.map(|guard| guard.external_refresher.is_some())
.unwrap_or(false)
}
pub fn is_external_auth_active(&self) -> bool {
self.auth_cached()
.as_ref()
.is_some_and(CodexAuth::is_external_chatgpt_tokens)
}
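These accessors share one pattern worth calling out: read the lock, and degrade to a safe default if it is poisoned rather than panicking. A minimal standalone version of that pattern (names here are illustrative, not the crate's):

```rust
use std::sync::RwLock;

// Returns false both when the flag is unset and when the lock is poisoned.
fn flag_set(lock: &RwLock<Option<String>>) -> bool {
    lock.read().ok().map(|guard| guard.is_some()).unwrap_or(false)
}

fn main() {
    let lock = RwLock::new(None);
    assert!(!flag_set(&lock));
    *lock.write().unwrap() = Some("refresher".to_string());
    assert!(flag_set(&lock));
}
```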
/// Convenience constructor returning an `Arc` wrapper.
pub fn shared(
codex_home: PathBuf,
@@ -799,13 +1126,25 @@ impl AuthManager {
Some(auth) => auth,
None => return Ok(()),
};
- let token_data = auth.get_current_token_data().ok_or_else(|| {
- RefreshTokenError::Transient(std::io::Error::other("Token data is not available."))
- })?;
- self.refresh_tokens(&auth, token_data.refresh_token).await?;
- // Reload to pick up persisted changes.
- self.reload();
- Ok(())
+ match auth {
+ CodexAuth::ChatgptAuthTokens(_) => {
+ self.refresh_external_auth(ExternalAuthRefreshReason::Unauthorized)
+ .await
+ }
+ CodexAuth::Chatgpt(chatgpt_auth) => {
+ let token_data = chatgpt_auth.current_token_data().ok_or_else(|| {
+ RefreshTokenError::Transient(std::io::Error::other(
+ "Token data is not available.",
+ ))
+ })?;
+ self.refresh_tokens(&chatgpt_auth, token_data.refresh_token)
+ .await?;
+ // Reload to pick up persisted changes.
+ self.reload();
+ Ok(())
+ }
+ CodexAuth::ApiKey(_) => Ok(()),
+ }
}
/// Log out by deleting the on-disk auth.json (if present). Returns Ok(true)
@@ -813,22 +1152,29 @@ impl AuthManager {
/// reloads the in-memory auth cache so callers immediately observe the
/// unauthenticated state.
pub fn logout(&self) -> std::io::Result<bool> {
- let removed = super::auth::logout(&self.codex_home, self.auth_credentials_store_mode)?;
+ let removed = logout_all_stores(&self.codex_home, self.auth_credentials_store_mode)?;
// Always reload to clear any cached auth (even if file absent).
self.reload();
Ok(removed)
}
- pub fn get_auth_mode(&self) -> Option<AuthMode> {
- self.auth_cached().map(|a| a.mode)
+ pub fn get_auth_mode(&self) -> Option<ApiAuthMode> {
+ self.auth_cached().as_ref().map(CodexAuth::api_auth_mode)
}
pub fn get_internal_auth_mode(&self) -> Option<AuthMode> {
self.auth_cached()
.as_ref()
.map(CodexAuth::internal_auth_mode)
}
async fn refresh_if_stale(&self, auth: &CodexAuth) -> Result<bool, RefreshTokenError> {
- if auth.mode != AuthMode::ChatGPT {
- return Ok(false);
- }
+ let chatgpt_auth = match auth {
+ CodexAuth::Chatgpt(chatgpt_auth) => chatgpt_auth,
+ _ => return Ok(false),
+ };
- let auth_dot_json = match auth.get_current_auth_json() {
+ let auth_dot_json = match chatgpt_auth.current_auth_json() {
Some(auth_dot_json) => auth_dot_json,
None => return Ok(false),
};
@@ -843,25 +1189,78 @@ impl AuthManager {
if last_refresh >= Utc::now() - chrono::Duration::days(TOKEN_REFRESH_INTERVAL) {
return Ok(false);
}
- self.refresh_tokens(auth, tokens.refresh_token).await?;
+ self.refresh_tokens(chatgpt_auth, tokens.refresh_token)
+ .await?;
self.reload();
Ok(true)
}
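The staleness test above is a plain wall-clock comparison against a day-based window. A self-contained equivalent using the chrono crate (the interval value below is an assumption for illustration; the real TOKEN_REFRESH_INTERVAL constant is defined elsewhere in the crate):

```rust
use chrono::{DateTime, Duration, Utc};

// Assumed value for illustration only.
const TOKEN_REFRESH_INTERVAL_DAYS: i64 = 28;

// Stale means the last refresh happened before the start of the window.
fn is_stale(last_refresh: DateTime<Utc>) -> bool {
    last_refresh < Utc::now() - Duration::days(TOKEN_REFRESH_INTERVAL_DAYS)
}

fn main() {
    assert!(!is_stale(Utc::now()));
    assert!(is_stale(Utc::now() - Duration::days(TOKEN_REFRESH_INTERVAL_DAYS + 1)));
}
```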
async fn refresh_external_auth(
&self,
reason: ExternalAuthRefreshReason,
) -> Result<(), RefreshTokenError> {
let forced_chatgpt_workspace_id = self.forced_chatgpt_workspace_id();
let refresher = match self.inner.read() {
Ok(guard) => guard.external_refresher.clone(),
Err(_) => {
return Err(RefreshTokenError::Transient(std::io::Error::other(
"failed to read external auth state",
)));
}
};
let Some(refresher) = refresher else {
return Err(RefreshTokenError::Transient(std::io::Error::other(
"external auth refresher is not configured",
)));
};
let previous_account_id = self
.auth_cached()
.as_ref()
.and_then(CodexAuth::get_account_id);
let context = ExternalAuthRefreshContext {
reason,
previous_account_id,
};
let refreshed = refresher.refresh(context).await?;
let id_token = parse_id_token(&refreshed.id_token)
.map_err(|err| RefreshTokenError::Transient(std::io::Error::other(err)))?;
if let Some(expected_workspace_id) = forced_chatgpt_workspace_id.as_deref() {
let actual_workspace_id = id_token.chatgpt_account_id.as_deref();
if actual_workspace_id != Some(expected_workspace_id) {
return Err(RefreshTokenError::Transient(std::io::Error::other(
format!(
"external auth refresh returned workspace {actual_workspace_id:?}, expected {expected_workspace_id:?}",
),
)));
}
}
let auth_dot_json = AuthDotJson::from_external_tokens(&refreshed, id_token);
save_auth(
&self.codex_home,
&auth_dot_json,
AuthCredentialsStoreMode::Ephemeral,
)
.map_err(RefreshTokenError::Transient)?;
self.reload();
Ok(())
}
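The workspace pinning check in the middle of this method is the interesting part: when a forced workspace id is configured, a refreshed token for any other workspace is rejected. A standalone sketch of just that comparison (hypothetical helper, not the crate's API):

```rust
// Returns Err when a forced workspace is set and the refreshed token's
// workspace does not match it; no-op when no workspace is forced.
fn workspace_ok(forced: Option<&str>, actual: Option<&str>) -> Result<(), String> {
    match forced {
        Some(expected) if actual != Some(expected) => Err(format!(
            "external auth refresh returned workspace {actual:?}, expected {expected:?}"
        )),
        _ => Ok(()),
    }
}

fn main() {
    assert!(workspace_ok(None, None).is_ok());
    assert!(workspace_ok(Some("ws-1"), Some("ws-1")).is_ok());
    assert!(workspace_ok(Some("ws-1"), Some("ws-2")).is_err());
    assert!(workspace_ok(Some("ws-1"), None).is_err());
}
```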
async fn refresh_tokens(
&self,
- auth: &CodexAuth,
+ auth: &ChatgptAuth,
refresh_token: String,
) -> Result<(), RefreshTokenError> {
- let refresh_response = try_refresh_token(refresh_token, &auth.client).await?;
+ let refresh_response = try_refresh_token(refresh_token, auth.client()).await?;
update_tokens(
- &auth.storage,
+ auth.storage(),
refresh_response.id_token,
refresh_response.access_token,
refresh_response.refresh_token,
)
- .await
.map_err(RefreshTokenError::from)?;
Ok(())
@@ -910,7 +1309,6 @@ mod tests {
Some("new-access-token".to_string()),
Some("new-refresh-token".to_string()),
)
- .await
.expect("update_tokens should succeed");
let tokens = updated.tokens.expect("tokens should exist");
@@ -971,31 +1369,28 @@ mod tests {
)
.expect("failed to write auth file");
- let CodexAuth {
- api_key,
- mode,
- auth_dot_json,
- storage: _,
- ..
- } = super::load_auth(codex_home.path(), false, AuthCredentialsStoreMode::File)
+ let auth = super::load_auth(codex_home.path(), false, AuthCredentialsStoreMode::File)
.unwrap()
.unwrap();
- assert_eq!(None, api_key);
- assert_eq!(AuthMode::ChatGPT, mode);
+ assert_eq!(None, auth.api_key());
+ assert_eq!(AuthMode::Chatgpt, auth.internal_auth_mode());
- let guard = auth_dot_json.lock().unwrap();
- let auth_dot_json = guard.as_ref().expect("AuthDotJson should exist");
+ let auth_dot_json = auth
+ .get_current_auth_json()
+ .expect("AuthDotJson should exist");
let last_refresh = auth_dot_json
.last_refresh
.expect("last_refresh should be recorded");
assert_eq!(
- &AuthDotJson {
+ AuthDotJson {
+ auth_mode: None,
openai_api_key: None,
tokens: Some(TokenData {
id_token: IdTokenInfo {
email: Some("user@example.com".to_string()),
chatgpt_plan_type: Some(InternalPlanType::Known(InternalKnownPlan::Pro)),
chatgpt_user_id: Some("user-12345".to_string()),
chatgpt_account_id: None,
raw_jwt: fake_jwt,
},
@@ -1023,8 +1418,8 @@ mod tests {
let auth = super::load_auth(dir.path(), false, AuthCredentialsStoreMode::File)
.unwrap()
.unwrap();
- assert_eq!(auth.mode, AuthMode::ApiKey);
- assert_eq!(auth.api_key, Some("sk-test-key".to_string()));
+ assert_eq!(auth.internal_auth_mode(), AuthMode::ApiKey);
+ assert_eq!(auth.api_key(), Some("sk-test-key"));
assert!(auth.get_token_data().is_err());
}
@@ -1033,6 +1428,7 @@ mod tests {
fn logout_removes_auth_file() -> Result<(), std::io::Error> {
let dir = tempdir()?;
let auth_dot_json = AuthDotJson {
auth_mode: Some(ApiAuthMode::ApiKey),
openai_api_key: Some("sk-test-key".to_string()),
tokens: None,
last_refresh: None,
