Compare commits


136 Commits

Author SHA1 Message Date
kevin zhao
f5ae722656 remove unnecessary comment 2025-10-08 10:34:49 -07:00
kevin zhao
8b7680ce3a turn type enums 2025-10-08 10:34:24 -07:00
kevin zhao
eab5e45e5a update test case 2025-10-08 10:13:49 -07:00
kevin zhao
8f1fe7f320 remove weird test 2025-10-07 18:19:33 -07:00
kevin zhao
c1a2be732e add action type to responses header 2025-10-07 17:20:27 -07:00
Jeremy Rose
b8b04514bc feat(tui): switch to tree-sitter-highlight bash highlighting (#4666)
use tree-sitter-highlight instead of custom logic over the tree-sitter
tree to highlight bash.
2025-10-07 16:20:12 -07:00
Jeremy Rose
0e5d72cc57 tui: bring the transcript closer to display mode (#4848)
before: https://github.com/user-attachments/assets/7622fd6b-9d37-402f-8651-61c2c55dcbc6

after: https://github.com/user-attachments/assets/1498f327-1d1a-4630-951f-7ca371ab0139
2025-10-07 16:18:48 -07:00
pakrym-oai
60f9e85c16 Set codex SDK TypeScript originator (#4894)
## Summary
- ensure the TypeScript SDK sets CODEX_INTERNAL_ORIGINATOR_OVERRIDE to
codex_sdk_ts when spawning the Codex CLI (a rough sketch follows this entry)
- extend the responses proxy test helper to capture request headers for
assertions
- add coverage that verifies Codex threads launched from the TypeScript
SDK send the codex_sdk_ts originator header

## Testing
- Not Run (not requested)


------
https://chatgpt.com/codex/tasks/task_i_68e561b125248320a487f129093d16e7
2025-10-07 14:06:41 -07:00
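For context, a minimal Rust sketch of what "set an originator override when spawning the CLI" amounts to; the env var name and value come from the PR, while the binary path and args are illustrative:

```rust
use std::process::Command;

fn main() -> std::io::Result<()> {
    // Rough equivalent of what the TypeScript SDK does when it spawns the
    // Codex CLI: set the originator override in the child's environment.
    let status = Command::new("codex")
        .env("CODEX_INTERNAL_ORIGINATOR_OVERRIDE", "codex_sdk_ts")
        .arg("exec")
        .arg("hello")
        .status()?;
    println!("codex exited with {status}");
    Ok(())
}
```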
dedrisian-oai
b016a3e7d8 Remove instruction hack for /review (#4896)
We used to put the review prompt in the first user message as well to
bypass statsig overrides, but now that's been resolved and instructions
are being respected, so we were duplicating the review instructions.
2025-10-07 12:47:00 -07:00
Jeremy Rose
a0d56541cf tui: breathing spinner on true-color terms (#4853)
uses the same logic as shimmer_spans to render the `•` spinner. on
terminals without true-color support, fall back to the existing `•/◦`
blinking logic.



https://github.com/user-attachments/assets/19db76f2-8fa2-440d-9fde-7bed67f4c4dc
2025-10-07 11:34:05 -07:00
jif-oai
226215f36d feat: list_dir tool (#4817)
Add a list_dir tool. It is useful because we can mark it as
non-mutating and therefore run it in parallel.
2025-10-07 19:33:19 +01:00
jif-oai
338c2c873c bug: fix flaky test (#4878)
Fix flaky test by warming up the tools
2025-10-07 19:32:49 +01:00
Jeremy Rose
4b0f5eb6a8 tui: wrapping bugfix (#4674)
this fixes an issue where text lines with long words would sometimes
overflow.

- the default penalties for the OptimalFit algorithm allow overflowing
in some cases. this seems insane to me, and i considered just banning
the OptimalFit algorithm by disabling the 'smawk' feature on textwrap,
but decided to keep it and just bump the overflow penalty to ~infinity
(a sketch of that knob follows this entry), since optimal fit does
sometimes produce nicer wrappings. it's not clear this is worth it,
though, and maybe we should just dump the optimal fit algorithm
completely.
- user history messages weren't rendering with the same wrap algorithm
as used in the composer, which sometimes resulted in messages wrapping
differently in the history vs. in the composer.
2025-10-07 11:32:13 -07:00
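A minimal sketch of the "bump the overflow penalty" idea, assuming textwrap's 0.16-era `wrap_algorithms` API; the width, text, and penalty value are made up:

```rust
use textwrap::wrap_algorithms::Penalties;
use textwrap::{wrap, Options, WrapAlgorithm};

fn main() {
    // Make overflow effectively unaffordable so OptimalFit never picks an
    // overflowing layout, while keeping its nicer wrappings otherwise.
    let mut penalties = Penalties::new();
    penalties.overflow_penalty = 1_000_000_000; // "~infinity"
    let options = Options::new(20).wrap_algorithm(WrapAlgorithm::OptimalFit(penalties));
    for line in wrap("some text with a veryverylongword in it", options) {
        println!("{line}");
    }
}
```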
Jeremy Rose
75176dae70 dynamic width for line numbers in diffs (#4664)
instead of always reserving 6 spaces for the line number and gutter, we
now dynamically adjust to the width of the longest number.

<img width="871" height="616" alt="Screenshot 2025-10-03 at 8 21 00 AM"
src="https://github.com/user-attachments/assets/5f18eae6-7c85-48fc-9a41-31978ae71a62"
/>
<img width="871" height="616" alt="Screenshot 2025-10-03 at 8 21 21 AM"
src="https://github.com/user-attachments/assets/9009297d-7690-42b9-ae42-9566b3fea86c"
/>
<img width="871" height="616" alt="Screenshot 2025-10-03 at 8 21 57 AM"
src="https://github.com/user-attachments/assets/669096fd-dddc-407e-bae8-d0c6626fa0bc"
/>
2025-10-07 11:32:07 -07:00
Gabriel Peal
12fd2b4160 [TUI] Remove bottom padding (#4854)
We don't need the bottom padding. It currently just makes the footer
look off-centered.

Before: https://github.com/user-attachments/assets/c2a18b38-b8fd-4317-bbbb-2843bca02ba1

After: https://github.com/user-attachments/assets/f42470c5-4b24-4a02-b15c-e2aad03e3b42
2025-10-07 14:10:05 -04:00
pakrym-oai
f2555422b9 Simplify parallel (#4829)
Make tool processing return a future, then collect the futures.
Handle cleanup on Drop.
2025-10-07 10:12:38 -07:00
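As a sketch of the "return a future, then collect futures" shape, using the futures crate; the handler name and types here are hypothetical:

```rust
use futures::stream::{FuturesUnordered, StreamExt};

// Stand-in for per-tool processing that yields a future per call.
async fn handle_tool_call(call: u32) -> String {
    format!("result for call {call}")
}

#[tokio::main]
async fn main() {
    // Each tool call becomes a future; collect them and poll concurrently,
    // handling results in completion order.
    let mut in_flight: FuturesUnordered<_> = (0u32..4).map(handle_tool_call).collect();
    while let Some(result) = in_flight.next().await {
        println!("{result}");
    }
}
```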
Tamir Duberstein
27f169bb91 cloud-tasks: use workspace deps
This seems to be the way. It made life easier when I was locally forking
clap.
2025-10-07 08:19:10 -07:00
Tamir Duberstein
b16c985ed2 cli: fix zsh completion (#4692)
Before this change:
```
tamird@L03G26TD12 codex-rs % codex
zsh: do you wish to see all 3864 possibilities (1285 lines)?
```

After this change:
```
tamird@L03G26TD12 codex-rs % codex
app-server              -- [experimental] Run the app server
apply                a  -- Apply the latest diff produced by Codex agent as a `git apply` to your local working tree
cloud                   -- [EXPERIMENTAL] Browse tasks from Codex Cloud and apply changes locally
completion              -- Generate shell completion scripts
debug                   -- Internal debugging commands
exec                 e  -- Run Codex non-interactively
generate-ts             -- Internal: generate TypeScript protocol bindings
help                    -- Print this message or the help of the given subcommand(s)
login                   -- Manage login
logout                  -- Remove stored authentication credentials
mcp                     -- [experimental] Run Codex as an MCP server and manage MCP servers
mcp-server              -- [experimental] Run the Codex MCP server (stdio transport)
responses-api-proxy     -- Internal: run the responses API proxy
resume                  -- Resume a previous interactive session (picker by default; use --last to continue the most recent)
```
2025-10-07 08:07:31 -07:00
pakrym-oai
35a770e871 Simplify request body assertions (#4845)
We'll have a lot more tests like these.
2025-10-07 09:56:39 +01:00
Colin Young
b09f62a1c3 [Codex] Use Number instead of BigInt for TokenCountEvent (#4856)
Adjust to use the TypeScript `number` type to reduce casting and
normalization code for VSCE, since JS safely supports integers up to 2^53 - 1.
2025-10-06 18:59:37 -07:00
Jeremy Rose
5833508a17 print codex resume note when quitting after codex resume (#4695)
when exiting a session that was started with `codex resume`, the note
about how to resume again wasn't being printed.

thanks @aibrahim-oai for pointing out this issue!
2025-10-06 16:07:22 -07:00
Gabriel Peal
d73055c5b1 [MCP] Fix the bearer token authorization header (#4846)
`http_config.auth_header` automatically added `Bearer `. By adding it
ourselves, we were sending `Bearer Bearer <token>`.

I confirmed that the GitHub MCP initialization 400s before and works
now.

I also optimized the oauth flow to not check the keyring if you
explicitly pass in a bearer token.
2025-10-06 17:41:16 -04:00
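A minimal illustration of the double-prefix bug with a stand-in config type; `auth_header` mirrors the field named above, and the transport is assumed to add the scheme itself downstream:

```rust
// Stand-in for the transport config; the real type lives in the rmcp stack.
struct HttpConfig {
    auth_header: Option<String>,
}

fn main() {
    let token = "abc123".to_string();
    let mut http_config = HttpConfig { auth_header: None };
    // Correct: pass the bare token; the transport prepends "Bearer " itself.
    http_config.auth_header = Some(token.clone());
    // Buggy version, which produced "Bearer Bearer abc123" on the wire:
    // http_config.auth_header = Some(format!("Bearer {token}"));
    assert_eq!(http_config.auth_header.as_deref(), Some("abc123"));
}
```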
pakrym-oai
7e3a272b29 Add a longer message to issue deduplicator and some logs (#4836)
Logs are to diagnose why we're not filtering correctly.
2025-10-06 10:39:26 -07:00
pakrym-oai
661663c98a Fix event names in exec docs. (#4833)
Fixes: https://github.com/openai/codex/issues
2025-10-06 10:07:52 -07:00
Gabriel Peal
721003c552 [MCP] Improve docs (#4811)
Updated, expanded on, clarified, and deduplicated some MCP docs
2025-10-06 11:43:50 -04:00
Fouad Matin
36f1cca1b1 fix: windows instructions (#4807)
link to docs
2025-10-05 22:06:21 -07:00
Ed Bayes
d3e1beb26c add pulsing dot loading state (#4736)
## Description 
Changes default CLI spinner to pulsing dot


https://github.com/user-attachments/assets/b81225d6-6655-4ead-8cb1-d6568a603d5b

## Tests
Passes CI

---------

Co-authored-by: Fouad Matin <fouad@openai.com>
2025-10-05 21:26:27 -07:00
ae
c264ae6021 feat: tweak windows wsl copy (#4795)
Tweaked the WSL dialog and the installation instructions.
2025-10-06 02:44:26 +00:00
pakrym-oai
8cd882c4bd Update README.md (#4794)
2025-10-05 18:21:29 -07:00
pakrym-oai
90fe5e4a7e Add structured-output support (#4793)
Add samples and docs.
2025-10-05 18:17:50 -07:00
pakrym-oai
a90a58f7a1 Trim double Total output lines (#4787) 2025-10-05 16:41:55 -07:00
pakrym-oai
b2d81a7cac Make output assertions more explicit (#4784)
Match using precise regexes.
2025-10-05 16:01:38 -07:00
Fouad Matin
77a8b7fdeb add codex sandbox {linux|macos} (#4782)
## Summary
- add a `codex sandbox` subcommand with macOS and Linux targets while
keeping the legacy `codex debug` aliases
- update documentation to highlight the new sandbox entrypoints and
point existing references to the new command
- clarify the core README about the linux sandbox helper alias

## Testing
- just fmt
- just fix -p codex-cli
- cargo test -p codex-cli


------
https://chatgpt.com/codex/tasks/task_i_68e2e00ca1e8832d8bff53aa0b50b49e
2025-10-05 15:51:57 -07:00
Gabriel Peal
7fa5e95c1f [MCP] Upgrade rmcp to 0.8 (#4774)
The version with the well-known discovery and my MCP client name change
were just released

https://github.com/modelcontextprotocol/rust-sdk/releases
2025-10-05 18:12:37 -04:00
pakrym-oai
191d620707 Use response helpers when mounting SSE test responses (#4783)
## Summary
- replace manual wiremock SSE mounts in the compact suite with the
shared response helpers
- simplify the exec auth_env integration test by using the
mount_sse_once_match helper
- rely on mount_sse_sequence plus server request collection to replace
the bespoke SeqResponder utility in tests

## Testing
- just fmt

------
https://chatgpt.com/codex/tasks/task_i_68e2e238f2a88320a337f0b9e4098093
2025-10-05 21:58:16 +00:00
pranavdesh
53504a38d2 Expand TypeScript SDK README (#4779)
## Summary
- expand the TypeScript SDK README with streaming, architecture, and API
docs
- refresh quick start examples and clarify thread management options

## Testing
- Not Run (docs only)

---------

Co-authored-by: pakrym-oai <pakrym@openai.com>
2025-10-05 21:43:34 +00:00
pakrym-oai
5c42419b02 Use assert_matches (#4756)
assert_matches is soon to be in std but is experimental for now.
2025-10-05 21:12:31 +00:00
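A small usage sketch of the `assert_matches` crate's macro; the event type here is made up:

```rust
use assert_matches::assert_matches;

// Hypothetical event type for illustration.
#[allow(dead_code)]
#[derive(Debug)]
enum Event {
    TaskComplete { turn_id: u64 },
    Error(String),
}

fn main() {
    let ev = Event::TaskComplete { turn_id: 7 };
    // Passes only if the value matches the pattern; on failure it prints
    // the actual value, which beats a bare `assert!(matches!(..))`.
    assert_matches!(ev, Event::TaskComplete { .. });
}
```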
pakrym-oai
aecbe0f333 Add helper for response created SSE events in tests (#4758)
## Summary
- add a reusable `ev_response_created` helper that builds
`response.created` SSE events for integration tests (a guess at its shape follows this entry)
- update the exec and core integration suites to use the new helper
instead of repeating manual JSON literals
- keep the streaming fixtures consistent by relying on the shared helper
in every touched test

## Testing
- `just fmt`


------
https://chatgpt.com/codex/tasks/task_i_68e1fe885bb883208aafffb94218da61
2025-10-05 21:11:43 +00:00
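A guess at the helper's shape, using serde_json; the exact event fields are assumptions, not the PR's actual code:

```rust
use serde_json::json;

// Builds a `response.created` SSE event body for test fixtures.
fn ev_response_created(response_id: &str) -> String {
    let payload = json!({
        "type": "response.created",
        "response": { "id": response_id }
    });
    format!("event: response.created\ndata: {payload}\n\n")
}

fn main() {
    print!("{}", ev_response_created("resp_123"));
}
```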
Michael Bolin
a30a902db5 fix: use low-level stdin read logic to avoid a BufReader (#4778)
`codex-responses-api-proxy` is designed so that there should be exactly
one copy of the API key in memory (that is `mlock`'d on UNIX), but in
practice, I was seeing two when I dumped the process data from
`/proc/$PID/mem`.

It appears that `std::io::stdin()` maintains an internal `BufReader`
that we cannot zero out, so this PR changes the implementation on UNIX
so that we use a low-level `read(2)` instead.

Even though it seems like it would be incredibly unlikely, we also make
this logic tolerant of short reads. Either `\n` or `EOF` must be sent to
signal the end of the key written to stdin.
2025-10-05 13:58:30 -07:00
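A Unix-only sketch of the described approach, using the libc crate; real code would also retry on EINTR rather than erroring out:

```rust
// Read the key from fd 0 with read(2) directly so no BufReader keeps a
// second, un-zeroable copy of the key in process memory. Tolerates short
// reads; either '\n' or EOF ends the key.
fn read_key_from_stdin() -> std::io::Result<Vec<u8>> {
    let mut key = Vec::new();
    let mut byte = [0u8; 1];
    loop {
        let n = unsafe { libc::read(0, byte.as_mut_ptr().cast(), 1) };
        match n {
            0 => break,                     // EOF ends the key
            1 if byte[0] == b'\n' => break, // newline ends the key
            1 => key.push(byte[0]),
            _ => return Err(std::io::Error::last_os_error()), // real code: retry EINTR
        }
    }
    Ok(key)
}

fn main() -> std::io::Result<()> {
    let key = read_key_from_stdin()?;
    eprintln!("read {} bytes", key.len());
    Ok(())
}
```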
jif-oai
f3b4a26f32 chore: drop read-file for gpt-5-codex (#4739)
Drop `read_file` for gpt-5-codex (will do the same for parallel tool
calls) and add the `codex-` prefix as an internal-model marker for this
kind of feature.
jif-oai
dc3c6bf62a feat: parallel tool calls (#4663)
Add parallel tool calls. This is configurable at the model level and the
tool level.
2025-10-05 16:10:49 +00:00
Dylan
3203862167 chore: update tool config (#4755)
## Summary
Updates tool config for gpt-5-codex

## Test Plan
- [x] Ran locally
- [x]  Updated unit tests
2025-10-04 22:47:26 -07:00
pakrym-oai
06853d94f0 Use wait_for_event helpers in tests (#4753)
## Summary
- replace manual event polling loops in several core test suites with
the shared wait_for_event helpers
- keep prior assertions intact by using closure captures for stateful
expectations, including plan updates, patch lifecycles, and review flow
checks
- rely on wait_for_event_with_timeout where longer waits are required,
simplifying timeout handling

## Testing
- just fmt


------
https://chatgpt.com/codex/tasks/task_i_68e1d58582d483208febadc5f90dd95e
2025-10-04 22:04:05 -07:00
Ahmed Ibrahim
cc2f4aafd7 Add truncation hint on truncated exec output. (#4740)
When truncating output, add a hint showing the total number of lines.
2025-10-05 03:29:07 +00:00
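The idea, as a hedged sketch; the limit and hint wording are made up, not the PR's actual strings:

```rust
// Keep the head of the output and append a hint with the total line count
// so the model knows how much was cut.
fn truncate_with_hint(output: &str, max_lines: usize) -> String {
    let total = output.lines().count();
    if total <= max_lines {
        return output.to_string();
    }
    let head: Vec<&str> = output.lines().take(max_lines).collect();
    format!(
        "{}\n[... output truncated: showing {max_lines} of {total} lines]",
        head.join("\n")
    )
}

fn main() {
    let out = (1..=100).map(|i| format!("line {i}")).collect::<Vec<_>>().join("\n");
    println!("{}", truncate_with_hint(&out, 3));
}
```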
pakrym-oai
356ea6ea34 Misc SDK fixes (#4752)
Remove codex-level workingDirectory
Throw on turn.failed in `run()`
Cleanup readme
2025-10-04 19:55:33 -07:00
Dylan
4764fc1ee7 feat: Freeform apply_patch with simple shell output (#4718)
## Summary
This PR is an alternative approach to #4711, but instead of changing our
storage, parses out shell calls in the client and reserializes them on
the fly before we send them out as part of the request.

What this changes:
1. Adds additional serialization logic when the
ApplyPatchToolType::Freeform is in use.
2. Adds a --custom-apply-patch flag to enable this setting on a
session-by-session basis.

This change is delicate, but is not meant to be permanent. It is meant
to be the first step in a migration:
1. (This PR) Add in-flight serialization with config
2. Update model_family default
3. Update serialization logic to store turn outputs in a structured
format, with logic to serialize based on model_family setting.
4. Remove this rewrite in-flight logic.

## Test Plan
- [x] Additional unit tests added
- [x] Integration tests added
- [x] Tested locally
2025-10-04 19:16:36 -07:00
Ahmed Ibrahim
90ef94d3b3 Surface context window error to the client (#4675)
In the past, we treated `input exceeded context window` as a streaming
error and retried on it. Retrying is pointless because it won't change
the outcome. In this PR, we surface the error to the client without
retrying and also send a token count event to indicate that the context
window is full.

<img width="650" height="125" alt="image"
src="https://github.com/user-attachments/assets/c26b1213-4c27-4bfc-90f4-51a270a3efd5"
/>
2025-10-05 01:40:06 +00:00
iceweasel-oai
6c2969d22d add an onboarding notice informing Windows users of better support in WSL (#4697) 2025-10-04 17:41:40 -07:00
Thibault Sottiaux
0ad1b0782b feat: instruct model to use apply_patch + avoid destructive changes (#4742) 2025-10-04 12:49:50 -07:00
Ahmed Ibrahim
d7acd146fb fix: exec commands that blow up the context window. (#4706)
We truncate the output of exec commands so they don't blow up the
context window. However, in some cases we weren't doing that. This
caused reports of people with 76% of the context window left hitting
`input exceeded context window`, which is weird.
2025-10-04 11:49:56 -07:00
pakrym-oai
c5465aed60 Update issue-deduplicator.yml (#4733)
2025-10-04 08:56:42 -07:00
Michael Bolin
a95605a867 fix: update GH action to use allow-users instead of require-repo-write (#4701) 2025-10-03 17:37:14 -07:00
pakrym-oai
848058f05b Expose turn token usage in the SDK (#4700)
It's present on the event; add it to the final result as well.
2025-10-03 17:33:23 -07:00
pakrym-oai
a4f1c9d67e Remove the feature implementation question (#4698) 2025-10-03 16:45:25 -07:00
Fouad Matin
665341c9b1 login: device code text (#4616)
Co-authored-by: rakesh <rakesh@openai.com>
2025-10-03 16:35:40 -07:00
dedrisian-oai
fae0e6c52c Fix reasoning effort title (#4694) 2025-10-03 16:17:30 -07:00
Jeremy Rose
1b4a79f03c requery default colors on focus (#4673)
fixes an issue where, when a terminal changes its color scheme (e.g.
dark/light mode), the composer wouldn't update its background color.
2025-10-03 22:43:41 +00:00
pakrym-oai
640192ac3d Update README.md (#4688)
Include information about the action and SDK
2025-10-03 15:05:55 -07:00
pakrym-oai
205c36e393 Filter current issue from deduplicator results (#4687)
## Summary
- ensure the issue deduplicator workflow ignores the current issue when
listing potential duplicates

## Testing
- not run (workflow change)

------
https://chatgpt.com/codex/tasks/task_i_68e03244836c8320a4aa22bfb98fd291
2025-10-03 14:22:40 -07:00
Gabriel Peal
d13ee79c41 [MCP] Don't require experimental_use_rmcp_client for no-auth http servers (#4689)
The `experimental_use_rmcp_client` flag is still useful to:
1. Toggle between stdio clients
2. Enable oauth, because we want to land
https://github.com/modelcontextprotocol/rust-sdk/pull/469,
https://github.com/openai/codex/pull/4677, and binary signing before we
enable it by default

However, for no-auth http servers there is only one option, so we don't
need the flag, and it seems to be working pretty well.
2025-10-03 17:15:23 -04:00
Gabriel Peal
bde468ff8d Fix oauth .well-known metadata discovery (#4677)
This picks up https://github.com/modelcontextprotocol/rust-sdk/pull/459
which is required for proper well-known metadata discovery for some MCPs
such as Figma.
2025-10-03 17:15:19 -04:00
Michael Bolin
e292d1ed21 fix: update actions to reflect https://github.com/openai/codex-action/pull/10 (#4691) 2025-10-03 14:07:14 -07:00
iceweasel-oai
de8d77274a set gpt-5 as default model for Windows users (#4676)
Codex isn’t great yet on Windows outside of WSL, and while we’ve merged
https://github.com/openai/codex/pull/4269 to reduce the repetitive
manual approvals on readonly commands, we’ve noticed that users seem to
have more issues with GPT-5-Codex than with GPT-5 on Windows.

This change makes GPT-5 the default for Windows users while we continue
to improve the CLI harness and model for GPT-5-Codex on Windows.
2025-10-03 14:00:03 -07:00
Fouad Matin
a5b7675e42 add(core): managed config (#3868)
## Summary

- Factor `load_config_as_toml` into `core::config_loader` so config
loading is reusable across callers.
- Layer `~/.codex/config.toml`, optional `~/.codex/managed_config.toml`,
and macOS managed preferences (base64) with recursive table merging and
scoped threads per source.

## Config Flow

```
Managed prefs (macOS profile: com.openai.codex/config_toml_base64)
                               ▲
                               │
~/.codex/managed_config.toml   │  (optional file-based override)
                               ▲
                               │
                ~/.codex/config.toml (user-defined settings)
```

- The loader searches under the resolved `CODEX_HOME` directory
(defaults to `~/.codex`).
- Managed configs let administrators ship fleet-wide overrides via
device profiles, which is useful for enforcing settings such as sandbox
or approval defaults.
- For nested hash tables: overlays merge recursively. Child tables are
merged key-by-key, while scalar or array values replace the prior layer
entirely (a sketch follows this entry). This lets admins add or tweak
individual fields without clobbering unrelated user settings.
2025-10-03 13:02:26 -07:00
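A sketch of that merge rule over `toml::Value` (not the PR's actual code; assumes the toml crate):

```rust
use toml::Value;

// Tables merge key-by-key; scalars and arrays from the higher-precedence
// layer replace the base value outright.
fn merge(base: &mut Value, overlay: Value) {
    match (base, overlay) {
        (Value::Table(base_tbl), Value::Table(overlay_tbl)) => {
            for (key, val) in overlay_tbl {
                match base_tbl.get_mut(&key) {
                    Some(slot) => merge(slot, val),
                    None => {
                        base_tbl.insert(key, val);
                    }
                }
            }
        }
        (slot, val) => *slot = val,
    }
}

fn main() {
    let mut user: Value = toml::from_str("sandbox = \"on\"\n[mcp]\ntimeout = 5").unwrap();
    let managed: Value = toml::from_str("[mcp]\ntimeout = 30").unwrap();
    merge(&mut user, managed);
    assert_eq!(user["sandbox"].as_str(), Some("on")); // untouched user setting
    assert_eq!(user["mcp"]["timeout"].as_integer(), Some(30)); // managed override wins
}
```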
Michael Bolin
9823de3cc6 fix: run Prettier in CI (#4681)
This was supposed to be in https://github.com/openai/codex/pull/4645.
2025-10-03 19:10:27 +00:00
Michael Bolin
c32e9cfe86 chore: subject docs/*.md to Prettier checks (#4645)
Apparently we were not running our `pnpm run prettier` check in CI, so
many files that were covered by the existing Prettier check were not
well-formatted.

This updates CI and formats the files.
2025-10-03 11:35:48 -07:00
Gabriel Peal
1d17ca1fa3 [MCP] Add support for MCP Oauth credentials (#4517)
This PR adds oauth login support to streamable http servers when
`experimental_use_rmcp_client` is enabled.

This PR is large but represents the minimal amount of work required for
this to work. To keep this PR smaller, login can only be done with
`codex mcp login` and `codex mcp logout` but it doesn't appear in `/mcp`
or `codex mcp list` yet. Fingers crossed that this is the last large MCP
PR and that subsequent PRs can be smaller.

Under the hood, credentials are stored in platform credential managers
via the [keyring crate](https://crates.io/crates/keyring). When the
keyring isn't available, it falls back to storing credentials in
`CODEX_HOME/.credentials.json`, which is consistent with how other
coding agents handle authentication.

I tested this on macOS, Windows, WSL (ubuntu), and Linux. I wasn't able
to test the dbus store on linux but did verify that the fallback works.

One quirk: during development, every build is its own ad-hoc binary, so
the keyring won't recognize the reader as the same program as the writer
and may prompt for the user's password. I may add an override to disable
this, or allow users/enterprises to opt out of keyring storage if it
causes issues.

<img width="5064" height="686" alt="CleanShot 2025-09-30 at 19 31 40"
src="https://github.com/user-attachments/assets/9573f9b4-07f1-4160-83b8-2920db287e2d"
/>
<img width="745" height="486" alt="image"
src="https://github.com/user-attachments/assets/9562649b-ea5f-4f22-ace2-d0cb438b143e"
/>
2025-10-03 13:43:12 -04:00
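A minimal sketch of the keyring path described above, assuming the keyring crate's v2-style `Entry` API; the service and account names are illustrative:

```rust
use keyring::Entry;

fn main() -> keyring::Result<()> {
    // Store serialized oauth tokens in the OS credential manager.
    let entry = Entry::new("codex-mcp-oauth", "github")?;
    entry.set_password("{\"access_token\":\"...\"}")?;
    // Read them back on the next run.
    let stored = entry.get_password()?;
    assert!(stored.contains("access_token"));
    // Per the PR, when no usable keyring exists the code falls back to
    // CODEX_HOME/.credentials.json instead.
    Ok(())
}
```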
jif-oai
bfe3328129 Fix flaky test (#4672)
This issue was due to the timeout not always being sufficient to produce
enough characters for truncation, plus a race between the synthetic
timeout and the process kill.
2025-10-03 18:09:41 +01:00
jif-oai
e0b38bd7a2 feat: add beta_supported_tools (#4669)
Gate the new read_file tool behind a new `beta_supported_tools` flag and
only enable it for `gpt-5-codex`
2025-10-03 16:58:03 +00:00
Michael Bolin
153338c20f docs: add barebones README for codex-app-server crate (#4671) 2025-10-03 09:26:44 -07:00
pakrym-oai
3495a7dc37 Modernize workflows (#4668)
2025-10-03 09:25:29 -07:00
Michael Bolin
042d4d55d9 feat: codex exec writes only the final message to stdout (#4644)
This updates `codex exec` so that, by default, most of the agent's
activity is written to stderr so that only the final agent message is
written to stdout. This makes it easier to pipe `codex exec` into
another tool without extra filtering.

I introduced `#![deny(clippy::print_stdout)]` to help enforce this
change and renamed the `ts_println!()` macro to `ts_msg()` because (1)
it no longer calls `println!()` and (2) `ts_eprintln!()` seemed too
long a name.

While here, this also adds `-o` as an alias for `--output-last-message`.

Fixes https://github.com/openai/codex/issues/1670
2025-10-03 16:22:12 +00:00
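The resulting contract, sketched; the timestamps and messages here are made up:

```rust
// Progress and activity go to stderr; only the final agent message goes to
// stdout, so `codex exec ... | some-tool` pipes stay clean.
fn main() {
    eprintln!("[2025-10-03T16:22:12Z] thinking..."); // activity -> stderr
    eprintln!("[2025-10-03T16:22:13Z] running: ls"); // activity -> stderr
    println!("Here is the final agent message.");    // result   -> stdout
}
```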
pakrym-oai
5af08e0719 Update issue-deduplicator.yml (#4660) 2025-10-03 06:41:57 -07:00
jif-oai
33d3ecbccc chore: refactor tool handling (#4510)
# Tool System Refactor

- Centralizes tool definitions and execution in `core/src/tools/*`:
specs (`spec.rs`), handlers (`handlers/*`), router (`router.rs`),
registry/dispatch (`registry.rs`), and shared context (`context.rs`).
One registry now builds the model-visible tool list and binds handlers.
- Router converts model responses to tool calls; Registry dispatches
with consistent telemetry via `codex-rs/otel` and unified error
handling. Function, Local Shell, MCP, and experimental `unified_exec`
all flow through this path; legacy shell aliases still work.
- Rationale: reduce per‑tool boilerplate, keep spec/handler in sync, and
make adding tools predictable and testable.

Example: `read_file`
- Spec: `core/src/tools/spec.rs` (see `create_read_file_tool`,
registered by `build_specs`).
- Handler: `core/src/tools/handlers/read_file.rs` (absolute `file_path`,
1‑indexed `offset`, `limit`, `L#: ` prefixes, safe truncation).
- E2E test: `core/tests/suite/read_file.rs` validates the tool returns
the requested lines.

## Next steps:
- Decompose `handle_container_exec_with_params` 
- Add parallel tool calls
2025-10-03 13:21:06 +01:00
jif-oai
69cb72f842 chore: sandbox refactor 2 (#4653)
Revert the revert and fix the UI issue
2025-10-03 11:17:39 +01:00
Michael Bolin
69ac5153d4 fix: replace --api-key with --with-api-key in codex login (#4646)
Previously, users could supply their API key directly via:

```shell
codex login --api-key KEY
```

but this has the drawback that `KEY` is more likely to end up in shell
history, can be read from `/proc`, etc.

This PR removes support for `--api-key` and replaces it with
`--with-api-key`, which reads the key from stdin, so either of these are
better options:

```
printenv OPENAI_API_KEY | codex login --with-api-key
codex login --with-api-key < my_key.txt
```

Other CLIs, such as `gh auth login --with-token`, follow the same
practice.
2025-10-03 06:17:31 +00:00
dedrisian-oai
16b6951648 Nit: Pop model effort picker on esc (#4642)
Pops the effort picker instead of dismissing the whole thing (on
escape).



https://github.com/user-attachments/assets/cef32291-cd07-4ac7-be8f-ce62b38145f9
2025-10-02 21:07:47 -07:00
dedrisian-oai
231c36f8d3 Move gpt-5-codex to top (#4641)
In /model picker
2025-10-03 03:34:58 +00:00
dedrisian-oai
1e4541b982 Fix tab+enter regression on slash commands (#4639)
Before, when you would enter `/di`, hit tab on `/diff`, and then hit
enter, it would execute `/diff`. The regression made it send the input
as plain text instead. This fixes the issue.
2025-10-02 20:14:28 -07:00
Shijie Rao
7be3b484ad feat: add file name to fuzzy search response (#4619)
### Summary
* Updated fuzzy search result to include the file name. 
* This should not affect CLI usage and the UI there will be addressed in
a separate PR.

### Testing
Tested locally and with the extension.

### Screenshot
<img width="431" height="244" alt="Screenshot 2025-10-02 at 11 08 44 AM"
src="https://github.com/user-attachments/assets/ba2ca299-a81d-4453-9242-1750e945aea2"
/>

---------

Co-authored-by: shijie.rao <shijie.rao@squareup.com>
2025-10-02 18:19:13 -07:00
Jeremy Rose
9617b69c8a tui: • Working, 100% context dim (#4629)
- add a `•` before the "Working" shimmer
- make the percentage in "X% context left" dim instead of bold

<img width="751" height="480" alt="Screenshot 2025-10-02 at 2 29 57 PM"
src="https://github.com/user-attachments/assets/cf3e771f-ddb3-48f4-babe-1eaf1f0c2959"
/>
2025-10-03 01:17:34 +00:00
pakrym-oai
1d94b9111c Use supports_color in codex exec (#4633)
It knows how to detect GitHub Actions.
2025-10-03 01:15:03 +00:00
pakrym-oai
2d6cd6951a Enable codex workflows (#4636) 2025-10-02 17:37:22 -07:00
pakrym-oai
310e3c32e5 Update issue-deduplicator.yml (#4638)
let's test the codex_args flag
2025-10-02 17:19:00 -07:00
Michael Bolin
37786593a0 feat: write pid in addition to port to server info (#4571)
This is nice to have for debugging.

While here, also cleaned up a bunch of unnecessary noise in
`write_server_info()`.
2025-10-02 17:15:09 -07:00
pakrym-oai
819a5782b6 Deduplicator fixes (#4635) 2025-10-02 16:01:59 -07:00
Jeremy Rose
c0a84473a4 fix false "task complete" state during agent message (#4627)
fixes an issue where user messages wouldn't be queued and ctrl + c would
quit the app instead of canceling the stream during the final agent
message.
2025-10-02 15:41:25 -07:00
pakrym-oai
591a8ecc16 Bump codex version in actions to latest (#4634) 2025-10-02 15:14:57 -07:00
pakrym-oai
c405d8c06c Rename assistant message to agent message and fix item type field naming (#4610)
Naming cleanup
2025-10-02 15:07:14 -07:00
pakrym-oai
138be0fd73 Use GH cli to fetch current issue (#4630)
Attempting to format the env var caused escaping issues
2025-10-02 14:43:40 -07:00
Jeremy Rose
25a2e15ec5 tui: tweaks to dialog display (#4622)
- prefix command approval reasons with "Reason:"
- show keyboard shortcuts for some ListSelectionItems
- remove "description" lines for approval options, and make the labels
more verbose
- add a spacer line in diff display after the path

and some other minor refactors that go along with the above.

<img width="859" height="508" alt="Screenshot 2025-10-02 at 1 24 50 PM"
src="https://github.com/user-attachments/assets/4fa7ecaf-3d3a-406a-bb4d-23e30ce3e5cf"
/>
2025-10-02 21:41:29 +00:00
pakrym-oai
62cc8a4b8d Add issue deduplicator workflow (#4628)
It's a bit hand-holdy in that it pre-downloads the issue list, but that
keeps codex running in read-only, no-network mode.
2025-10-02 14:36:33 -07:00
pakrym-oai
f895d4cbb3 Minor cleanup of codex exec output (#4585)
<img width="850" height="723" alt="image"
src="https://github.com/user-attachments/assets/2ae067bf-ba6b-47bf-9ffe-d1c3f3aa1870"
/>
<img width="872" height="547" alt="image"
src="https://github.com/user-attachments/assets/9058be24-6513-4423-9dae-2d5fd4cbf162"
/>
2025-10-02 14:17:42 -07:00
Ahmed Ibrahim
ed5d656fa8 Revert "chore: sanbox extraction" (#4626)
Reverts openai/codex#4286
2025-10-02 21:09:21 +00:00
pakrym-oai
c43a561916 Add issue labeler workflow (#4621)
Auto label issues using codex cli
2025-10-02 13:39:45 -07:00
pakrym-oai
b93cc0f431 Add a separate exec doc (#4583)
More/better docs.
2025-10-02 13:33:08 -07:00
pakrym-oai
4c566d484a Separate interactive and non-interactive sessions (#4612)
Do not show exec session in VSCode/TUI selector.
2025-10-02 13:06:21 -07:00
easong-openai
06e34d4607 Make model switcher two-stage (#4178)
https://github.com/user-attachments/assets/16d5c67c-e580-4a29-983c-a315f95424ee
2025-10-02 19:38:24 +00:00
Jeremy Rose
45936f8fbd show "Viewed Image" when the model views an image (#4475)
<img width="1022" height="339" alt="Screenshot 2025-09-29 at 4 22 00 PM"
src="https://github.com/user-attachments/assets/12da7358-19be-4010-a71b-496ede6dfbbf"
/>
2025-10-02 18:36:03 +00:00
Jeremy Rose
ec98445abf normalize key hints (#4586)
render key hints the same everywhere.



| Before | After |
|--------|-------|
| https://github.com/user-attachments/assets/f88d5db4-04bb-4e89-b571-568222c41e4b | https://github.com/user-attachments/assets/1fee6a71-f313-4620-8d9a-10766dc4e195 |
| https://github.com/user-attachments/assets/5170ab35-88b7-4131-b485-ecebea9f0835 | https://github.com/user-attachments/assets/6b6bc64c-25b9-4824-b2d7-56f60370870a |
| https://github.com/user-attachments/assets/2313b36a-e0a8-4cd2-82be-7d0fe7793c19 | https://github.com/user-attachments/assets/e18934e8-8e9d-4f46-9809-39c8cb6ee893 |
| https://github.com/user-attachments/assets/0cc69e4e-8cce-420a-b3e4-be75a7e2c8f5 | https://github.com/user-attachments/assets/329a5121-ae4a-4829-86e5-4c813543770c |
2025-10-02 18:34:47 +00:00
dedrisian-oai
b07aafa5f5 Fix status usage ratio (#4584)
1. Removes "Token usage" line for chatgpt sub users
2. Adds the word "used" to the context window line
2025-10-02 10:27:10 -07:00
Marcus Griep
b727d3f98a fix: handle JSON Schema in additionalProperties for MCP tools (#4454)
Fixes #4176

Some common tools provide a schema (even if just an empty object schema)
as the value for `additionalProperties`. The parsing as it currently
stands fails when it encounters this. This PR updates the schema to
accept a schema object in addition to a boolean value, per the JSON
Schema spec.
2025-10-02 13:05:51 -04:00
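A sketch of how such a schema field can accept both forms with serde; the type and variant names are hypothetical:

```rust
use serde::Deserialize;
use serde_json::Value;

// `untagged` lets serde try the boolean form first, then fall back to a
// full schema object, matching the JSON Schema spec's allowance for both.
#[derive(Debug, Deserialize)]
#[serde(untagged)]
enum AdditionalProperties {
    Flag(bool),
    Schema(Value),
}

fn main() {
    let flag: AdditionalProperties = serde_json::from_str("false").unwrap();
    let schema: AdditionalProperties = serde_json::from_str("{\"type\":\"string\"}").unwrap();
    println!("{flag:?} / {schema:?}");
}
```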
pakrym-oai
2f6fb37d72 Support CODEX_API_KEY for codex exec (#4615)
Allows setting the API key per invocation of `codex exec`.
2025-10-02 09:59:45 -07:00
Gabriel Peal
35c76ad47d fix: update the gpt-5-codex prompt to be more explicit that it should always use fenced code block info tags (#4569)
We get spurious reports that the model writes fenced code blocks
without an info tag, which causes auto-language detection in the
extension to highlight the code incorrectly and show the wrong language.
The model should really always include a tag when it can.
2025-10-01 22:41:56 -07:00
pakrym-oai
c07fb71186 Store settings on the thread instead of turn (#4579)
It's much more common to keep the same settings for the entire
conversation; we can add per-turn overrides later.
2025-10-02 00:31:13 +00:00
pakrym-oai
e899ae7d8a Include request ID in the error message (#4572)
To help with issue debugging
<img width="1414" height="253" alt="image"
src="https://github.com/user-attachments/assets/254732df-44ac-4252-997a-6c5e0927355b"
/>
2025-10-01 15:36:04 -07:00
iceweasel-oai
6f97ec4990 canonicalize display of Agents.md paths on Windows. (#4577)
Canonicalize the path on Windows to:
- remove unattractive path prefixes such as `\\?\`
- simplify it (`../AGENTS.md` vs
`C:\Users\iceweasel\code\coded\Agents.md`)
before: https://github.com/user-attachments/assets/48920ae6-d89c-41b8-b4ea-df5c18fb5fad

after: https://github.com/user-attachments/assets/70a1761a-9d97-4836-b14c-670b6f13e608
2025-10-01 14:33:19 -07:00
Jeremy Rose
07c1db351a rework patch/exec approval UI (#4573)
| Scenario | Screenshot |
| --- | --- |
| short patch | https://github.com/user-attachments/assets/8a883429-0965-4c0b-9002-217b3759b557 |
| short command | https://github.com/user-attachments/assets/901abde8-2494-4e86-b98a-7cabaf87ca9c |
| long patch | https://github.com/user-attachments/assets/fa799a29-a0d6-48e6-b2ef-10302a7916d3 |
| long command | https://github.com/user-attachments/assets/11ddf79b-98cb-4b60-ac22-49dfa7779343 |
| viewing complete patch | https://github.com/user-attachments/assets/81666958-af94-420e-aa66-b60d0a42b9db |
2025-10-01 14:29:05 -07:00
pakrym-oai
31102af54b Add initial set of doc comments to the SDK (#4513)
Also perform minor code cleanup.
2025-10-01 13:12:59 -07:00
Thibault Sottiaux
5d78c1edd3 Revert "chore: prompt update to enforce good usage of apply_patch" (#4576)
Reverts openai/codex#3846
2025-10-01 20:11:36 +00:00
pakrym-oai
170c685882 Explicit node imports (#4567)
To help with compatibility
2025-10-01 12:39:04 -07:00
Eric Traut
609f75acec Fix hang on second oauth login attempt (#4568)
This PR fixes a bug that results in a hang in the oauth login flow if a
user logs in, then logs out, then logs in again without first closing
the browser window.

Root cause of problem: We use a local web server for the oauth flow, and
it's implemented using the `tiny_http` rust crate. During the first
login, a socket is created between the browser and the server. The
`tiny_http` library creates worker threads that persist for as long as
this socket remains open. Currently, there's no way to close the
connection on the server side — the library provides no API to do this.
The library also filters all "Connection: close" headers, which makes it
difficult to tell the client browser to close the connection. On the
second login attempt, the browser uses the existing connection rather
than creating a new one. Since that connection is associated with a
server instance that no longer exists, it is effectively ignored.

I considered switching from `tiny_http` to a different web server
library, but that would have been a big change with significant
regression risk. This PR includes a more surgical fix that works around
the limitation of `tiny_http` and sends a "Connection: close" header on
the last "success" page of the oauth flow.
2025-10-01 12:26:28 -07:00
Michael Bolin
eabe18714f fix: use number instead of bigint for the generated TS for RequestId (#4575)
Before this PR:

```typescript
export type RequestId = string | bigint;
```

After:

```typescript
export type RequestId = string | number;
```

`bigint` introduces headaches in TypeScript without providing any real
value.
2025-10-01 12:10:20 -07:00
easong-openai
ceaba36c7f fix ctrl-n hint (#4566)
don't show or enable ctrl-n to choose best-of-n while not in the composer
2025-10-01 18:42:04 +00:00
Michael Bolin
d94e8bad8b feat: add --emergency-version-override option to create_github_release script (#4556)
I just had to use this like so:

```
./codex-rs/scripts/create_github_release --publish-alpha --emergency-version-override 0.43.0-alpha.10
```

because the build for `0.43.0-alpha.9` failed:

https://github.com/openai/codex/actions/runs/18167317356
2025-10-01 11:40:04 -07:00
pakrym-oai
8a367ef6bf SDK: support working directory and skipGitRepoCheck options (#4563)
Make options not required; add support for workingDirectory and
skipGitRepoCheck options on the turn.
2025-10-01 11:26:49 -07:00
easong-openai
400a5a90bf Fall back to configured instruction files if AGENTS.md isn't available (#4544)
Allow users to configure an AGENTS.md alternative, but warn them that it
may degrade model performance.

Fixes #4376
2025-10-01 18:19:59 +00:00
Ahmed Ibrahim
2f370e946d Show context window usage while tasks run (#4536)
## Summary
- show the remaining context window percentage in `/status` alongside
existing token usage details
- replace the composer shortcut prompt with the context window
percentage (or an unavailable message) while a task is running
- update TUI snapshots to reflect the new context window line

## Testing
- cargo test -p codex-tui

------
https://chatgpt.com/codex/tasks/task_i_68dc6e7397ac8321909d7daff25a396c
2025-10-01 18:03:05 +00:00
Ahmed Ibrahim
751b3b50ac Show placeholder for commands with no output (#4509)
## Summary
- show a dim “(no output)” placeholder when an executed command produces
no stdout or stderr so empty runs are visible
- update TUI snapshots to include the new placeholder in history
renderings

## Testing
- cargo test -p codex-tui


------
https://chatgpt.com/codex/tasks/task_i_68dc056c1d5883218fe8d9929e9b1657
2025-10-01 10:42:30 -07:00
Ahmed Ibrahim
d78d0764aa Add Updated at time in resume picker (#4468)
<img width="639" height="281" alt="image"
src="https://github.com/user-attachments/assets/92b2ad2b-9e18-4485-9b8d-d7056eb98651"
/>
2025-10-01 10:40:43 -07:00
rakesh-oai
699c121606 Handle trailing backslash properly (#4559)
**Summary**

This PR fixes an issue in the device code login flow where trailing
slashes in the issuer URL could cause malformed URLs during the codex
token exchange step.


**Test**


Before the changes

`Error logging in with device code: device code exchange failed: error
decoding response body`

After the changes

`Successfully logged in`
2025-10-01 10:32:09 -07:00
iceweasel-oai
dde615f482 implement command safety for PowerShell commands (#4269)
Implement command safety for PowerShell commands on Windows

This change adds a new Windows-specific command-safety module under
`codex-rs/core/src/command_safety/windows_safe_commands.rs` to strictly
sanitise PowerShell invocations. Key points:

- Introduce `is_safe_command_windows()` to only allow explicitly
read-only PowerShell calls (a simplified sketch follows this entry).
- Parse and split PowerShell invocations (including inline `-Command`
scripts and pipelines).
- Block unsafe switches (`-File`, `-EncodedCommand`, `-ExecutionPolicy`,
unknown flags, call operators, redirections, separators).
- Whitelist only read-only cmdlets (`Get-ChildItem`, `Get-Content`,
`Select-Object`, etc.), safe Git subcommands (`status`, `log`, `show`,
`diff`, `cat-file`), and ripgrep without unsafe options.
- Add comprehensive unit tests covering allowed and rejected command
patterns (nested calls, side effects, chaining, redirections).

This ensures Codex on Windows can safely execute discovery-only
PowerShell workflows without risking destructive operations.
2025-10-01 09:56:48 -07:00
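A heavily simplified sketch of the allowlist check; the real module also parses pipelines and inline `-Command` scripts, and the cmdlet list here is abbreviated:

```rust
// Abbreviated allowlist of read-only cmdlets; the real list is much longer.
const READ_ONLY_CMDLETS: &[&str] = &["Get-ChildItem", "Get-Content", "Select-Object"];

fn is_safe_command_windows(argv: &[&str]) -> bool {
    match argv {
        [cmd, rest @ ..] => {
            // PowerShell cmdlet names are case-insensitive.
            READ_ONLY_CMDLETS.iter().any(|c| c.eq_ignore_ascii_case(cmd))
                // Reject redirections; real code also rejects unsafe switches,
                // call operators, and separators.
                && rest.iter().all(|arg| !arg.starts_with('>'))
        }
        [] => false,
    }
}

fn main() {
    assert!(is_safe_command_windows(&["Get-Content", "notes.txt"]));
    assert!(!is_safe_command_windows(&["Remove-Item", "notes.txt"]));
}
```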
Michael Bolin
325fad1d92 fix: pnpm/action-setup@v4 should run before actions/setup-node@v5 (#4555)
`rust-release.yml` just failed:

https://github.com/openai/codex/actions/runs/18167317356/job/51714366768

The error is:

> Error: Unable to locate executable file: pnpm. Please verify either
the file path exists or the file can be found within a directory
specified by the PATH environment variable. Also check the file mode to
verify the file is executable.

We need to install `pnpm` first like we do in `ci.yml`:


f815157dd9/.github/workflows/ci.yml (L17-L25)
2025-10-01 09:04:14 -07:00
Michael Bolin
f815157dd9 chore: introduce publishing logic for @openai/codex-sdk (#4543)
There was a bit of copypasta I put up with when we were publishing two
packages to npm, but now that it's three, I created some more scripts to
consolidate things.

With this change, I ran:

```shell
./scripts/stage_npm_packages.py --release-version 0.43.0-alpha.8 --package codex --package codex-responses-api-proxy --package codex-sdk
```

Indeed when it finished, I ended up with:

```shell
$ tree dist
dist
└── npm
    ├── codex-npm-0.43.0-alpha.8.tgz
    ├── codex-responses-api-proxy-npm-0.43.0-alpha.8.tgz
    └── codex-sdk-npm-0.43.0-alpha.8.tgz
$ tar tzvf dist/npm/codex-sdk-npm-0.43.0-alpha.8.tgz
-rwxr-xr-x  0 0      0    25476720 Oct 26  1985 package/vendor/aarch64-apple-darwin/codex/codex
-rwxr-xr-x  0 0      0    29871400 Oct 26  1985 package/vendor/aarch64-unknown-linux-musl/codex/codex
-rwxr-xr-x  0 0      0    28368096 Oct 26  1985 package/vendor/x86_64-apple-darwin/codex/codex
-rwxr-xr-x  0 0      0    36029472 Oct 26  1985 package/vendor/x86_64-unknown-linux-musl/codex/codex
-rw-r--r--  0 0      0       10926 Oct 26  1985 package/LICENSE
-rw-r--r--  0 0      0    30187520 Oct 26  1985 package/vendor/aarch64-pc-windows-msvc/codex/codex.exe
-rw-r--r--  0 0      0    35277824 Oct 26  1985 package/vendor/x86_64-pc-windows-msvc/codex/codex.exe
-rw-r--r--  0 0      0        4842 Oct 26  1985 package/dist/index.js
-rw-r--r--  0 0      0        1347 Oct 26  1985 package/package.json
-rw-r--r--  0 0      0        9867 Oct 26  1985 package/dist/index.js.map
-rw-r--r--  0 0      0          12 Oct 26  1985 package/README.md
-rw-r--r--  0 0      0        4287 Oct 26  1985 package/dist/index.d.ts
```
2025-10-01 08:29:59 -07:00
jif-oai
b8195a17e5 chore: sandbox extraction (#4286)
# Extract and Centralize Sandboxing
- Goal: improve safety and clarity by centralizing sandbox planning and
execution.
- Approach:
  - Add a planner (ExecPlan) and a backend registry (Direct/Seatbelt/Linux)
  with run_with_plan.
  - Refactor codex.rs to plan-then-execute; handle failures/escalation via
  the plan.
  - Delegate apply_patch to the codex binary and run it with an empty env
  for determinism.
2025-10-01 12:05:12 +01:00
rakesh-oai
349ef7edc6 Fix Callback URL for staging and prod environments (#4533)
2025-10-01 02:57:37 +00:00
Michael Bolin
5881c0d6d4 fix: remove mcp-types from app server protocol (#4537)
We continue the separation between `codex app-server` and `codex
mcp-server`.

In particular, we introduce a new crate, `codex-app-server-protocol`,
and migrate `codex-rs/protocol/src/mcp_protocol.rs` into it, renaming it
`codex-rs/app-server-protocol/src/protocol.rs`.

Because `ConversationId` was defined in `mcp_protocol.rs`, we move it
into its own file, `codex-rs/protocol/src/conversation_id.rs`, and
because it is referenced in a ton of places, we have to touch a lot of
files as part of this PR.

We also decided to move away from proper JSON-RPC 2.0 semantics, so we
also introduce `codex-rs/app-server-protocol/src/jsonrpc_lite.rs`, which
is basically the same `JSONRPCMessage` type defined in `mcp-types`
except with all of the `"jsonrpc": "2.0"` removed.

Getting rid of `"jsonrpc": "2.0"` makes our serialization logic
considerably simpler, as we can lean heavier on serde to serialize
directly into the wire format that we use now.
2025-10-01 02:16:26 +00:00
pakrym-oai
8dd771d217 Add executable detection and export Codex from the SDK (#4532)
Executable detection uses the same rules as the codex wrapper.
2025-09-30 18:06:16 -07:00
Michael Bolin
32853ecbc5 fix: use macros to ensure request/response symmetry (#4529)
Manually curating `protocol-ts/src/lib.rs` was error-prone, as expected.
I finally asked Codex to write some Rust macros so we can ensure that:

- For every variant of `ClientRequest` and `ServerRequest`, there is an
associated `params` and `response` type.
- All response types are included automatically in the output of `codex
generate-ts`.
2025-09-30 18:06:05 -07:00
pakrym-oai
7fc3edf8a7 Remove legacy codex exec --json format (#4525)
`codex exec --json` now maps to the behavior of `codex exec
--experimental-json` with new event and item shapes.

Thread events:
- thread.started
- turn.started
- turn.completed
- turn.failed
- item.started
- item.updated
- item.completed

Item types: 
- assistant_message
- reasoning
- command_execution
- file_change
- mcp_tool_call
- web_search
- todo_list
- error

Sample output:

<details>
`codex exec "list my assigned github issues"  --json | jq`

```
{
  "type": "thread.started",
  "thread_id": "01999ce5-f229-7661-8570-53312bd47ea3"
}
{
  "type": "turn.started"
}
{
  "type": "item.completed",
  "item": {
    "id": "item_0",
    "item_type": "reasoning",
    "text": "**Planning to list assigned GitHub issues**"
  }
}
{
  "type": "item.started",
  "item": {
    "id": "item_1",
    "item_type": "mcp_tool_call",
    "server": "github",
    "tool": "search_issues",
    "status": "in_progress"
  }
}
{
  "type": "item.completed",
  "item": {
    "id": "item_1",
    "item_type": "mcp_tool_call",
    "server": "github",
    "tool": "search_issues",
    "status": "completed"
  }
}
{
  "type": "item.completed",
  "item": {
    "id": "item_2",
    "item_type": "reasoning",
    "text": "**Organizing final message structure**"
  }
}
{
  "type": "item.completed",
  "item": {
    "id": "item_3",
    "item_type": "assistant_message",
    "text": "**Assigned Issues**\n- openai/codex#3267 – “stream error: stream disconnected before completion…” (bug) – last update 2025-09-08\n- openai/codex#3257 – “You've hit your usage limit. Try again in 4 days 20 hours 9 minutes.” – last update 2025-09-23\n- openai/codex#3054 – “reqwest SSL panic (library has no ciphers)” (bug) – last update 2025-09-03\n- openai/codex#3051 – “thread 'main' panicked at linux-sandbox/src/linux_run_main.rs:53:5:” (bug) – last update 2025-09-10\n- openai/codex#3004 – “Auto-compact when approaching context limit” (enhancement) – last update 2025-09-26\n- openai/codex#2916 – “Feature request: Add OpenAI service tier support for cost optimization” – last update 2025-09-12\n- openai/codex#1581 – “stream error: stream disconnected before completion: stream closed before response.complete; retrying...” (bug) – last update 2025-09-17"
  }
}
{
  "type": "turn.completed",
  "usage": {
    "input_tokens": 34785,
    "cached_input_tokens": 12544,
    "output_tokens": 560
  }
}
```

</details>
2025-09-30 17:21:37 -07:00
Jeremy Rose
01e6503672 wrap markdown at render time (#4506)
This results in correctly indenting list items with long lines.

<img width="1006" height="251" alt="Screenshot 2025-09-30 at 10 00
48 AM"
src="https://github.com/user-attachments/assets/0a076cf6-ca3c-4efb-b3af-dc07617cdb6f"
/>
2025-09-30 23:13:55 +00:00
pakrym-oai
9c259737d3 Delete codex proto (#4520) 2025-09-30 22:33:28 +00:00
Michael Bolin
b8e1fe60c5 fix: enable process hardening in Codex CLI for release builds (#4521)
I don't believe there is any upside in making process hardening opt-in
for Codex CLI releases. If you want to tinker with Codex CLI, then build
from source (or run as `root`)?
2025-09-30 14:34:35 -07:00
Michael Bolin
ddfb7eb548 fix: clean up TypeScript exports (#4518)
Fixes:

- Removed overdeclaration of types that were unnecessary because they
were already included by induction.
- Reordered list of response types to match the enum order, making it
easier to identify what was missing.
- Added `ExecArbitraryCommandResponse` because it was missing.
- Leveraged `use codex_protocol::mcp_protocol::*;` to make the file more
readable.
- Removed the crate dependency on `mcp-types` now that we have separated
the app server from the MCP server:
https://github.com/openai/codex/pull/4471

My next move is to come up with some scheme that ensures request types
always have a response type and that the response type is automatically
included with the output of `codex generate-ts`.
2025-09-30 14:08:43 -07:00
Michael Bolin
6910be3224 fix: ensure every variant of ClientRequest has a params field (#4512)
This changes the generated TypeScript type for `ClientRequest`
so that instead of this:

```typescript
/**
 * Request from the client to the server.
 */
export type ClientRequest =
  | { method: "initialize"; id: RequestId; params: InitializeParams }
  | { method: "newConversation"; id: RequestId; params: NewConversationParams }
  // ...
  | { method: "getUserAgent"; id: RequestId }
  | { method: "userInfo"; id: RequestId }
  // ...
```

we have this:

```typescript
/**
 * Request from the client to the server.
 */
export type ClientRequest =
  | { method: "initialize"; id: RequestId; params: InitializeParams }
  | { method: "newConversation"; id: RequestId; params: NewConversationParams }
  // ...
  | { method: "getUserAgent"; id: RequestId; params: undefined }
  | { method: "userInfo"; id: RequestId; params: undefined }
  // ...
```

which makes TypeScript happier when it comes to destructuring instances
of `ClientRequest` because it does not complain about `params` not being
guaranteed to exist anymore.
2025-09-30 12:03:32 -07:00
pakrym-oai
a534356fe1 Wire up web search item (#4511)
Add handling for web search events.
2025-09-30 12:01:17 -07:00
363 changed files with 21341 additions and 8502 deletions


@@ -2,7 +2,6 @@ name: 🎁 Feature Request
 description: Propose a new feature for Codex
 labels:
   - enhancement
-  - needs triage
 body:
   - type: markdown
     attributes:
@@ -19,11 +18,6 @@ body:
       label: What feature would you like to see?
     validations:
       required: true
-  - type: textarea
-    id: author
-    attributes:
-      label: Are you interested in implementing this feature?
-      description: Please wait for acknowledgement before implementing or opening a PR.
   - type: textarea
     id: notes
     attributes:

.github/prompts/issue-deduplicator.txt (vendored, new file, 18 lines)

@@ -0,0 +1,18 @@
You are an assistant that triages new GitHub issues by identifying potential duplicates.
You will receive the following JSON files located in the current working directory:
- `codex-current-issue.json`: JSON object describing the newly created issue (fields: number, title, body).
- `codex-existing-issues.json`: JSON array of recent issues (each element includes number, title, body, createdAt).
Instructions:
- Load both files as JSON and review their contents carefully. The codex-existing-issues.json file is large, ensure you explore all of it.
- Compare the current issue against the existing issues to find up to five that appear to describe the same underlying problem or request.
- Only consider an issue a potential duplicate if there is a clear overlap in symptoms, feature requests, reproduction steps, or error messages.
- Prioritize newer issues when similarity is comparable.
- Ignore pull requests and issues whose similarity is tenuous.
- When unsure, prefer returning fewer matches.
Output requirements:
- Respond with a JSON array of issue numbers (integers), ordered from most likely duplicate to least.
- Include at most five numbers.
- If you find no plausible duplicates, respond with `[]`.

.github/prompts/issue-labeler.txt (vendored, new file, 26 lines)

@@ -0,0 +1,26 @@
You are an assistant that reviews GitHub issues for the repository.
Your job is to choose the most appropriate existing labels for the issue described later in this prompt.
Follow these rules:
- Only pick labels out of the list below.
- Prefer a small set of precise labels over many broad ones.
- If none of the labels fit, respond with an empty JSON array: []
- Output must be a JSON array of label names (strings) with no additional commentary.
Labels to apply:
1. bug — Reproducible defects in Codex products (CLI, VS Code extension, web, auth).
2. enhancement — Feature requests or usability improvements that ask for new capabilities, better ergonomics, or quality-of-life tweaks.
3. extension — VS Code (or other IDE) extension-specific issues.
4. windows-os — Bugs or friction specific to Windows environments (PowerShell behavior, path handling, copy/paste, OS-specific auth or tooling failures).
5. mcp — Topics involving Model Context Protocol servers/clients.
6. codex-web — Issues targeting the Codex web UI/Cloud experience.
8. azure — Problems or requests tied to Azure OpenAI deployments.
9. documentation — Updates or corrections needed in docs/README/config references (broken links, missing examples, outdated keys, clarification requests).
10. model-behavior — Undesirable LLM behavior: forgetting goals, refusing work, hallucinating environment details, quota misreports, or other reasoning/performance anomalies.
Issue information is available in environment variables:
ISSUE_NUMBER
ISSUE_TITLE
ISSUE_BODY
REPO_FULL_NAME


@@ -27,7 +27,7 @@ jobs:
       - name: Install dependencies
         run: pnpm install --frozen-lockfile
-      # build_npm_package.py requires DotSlash when staging releases.
+      # stage_npm_packages.py requires DotSlash when staging releases.
       - uses: facebook/install-dotslash@v2
       - name: Stage npm package
@@ -37,10 +37,12 @@
         run: |
           set -euo pipefail
           CODEX_VERSION=0.40.0
-          PACK_OUTPUT="${RUNNER_TEMP}/codex-npm.tgz"
-          python3 ./codex-cli/scripts/build_npm_package.py \
+          OUTPUT_DIR="${RUNNER_TEMP}"
+          python3 ./scripts/stage_npm_packages.py \
             --release-version "$CODEX_VERSION" \
-            --pack-output "$PACK_OUTPUT"
+            --package codex \
+            --output-dir "$OUTPUT_DIR"
+          PACK_OUTPUT="${OUTPUT_DIR}/codex-npm-${CODEX_VERSION}.tgz"
           echo "pack_output=$PACK_OUTPUT" >> "$GITHUB_OUTPUT"
@@ -58,3 +60,6 @@
         run: ./scripts/asciicheck.py codex-cli/README.md
       - name: Check codex-cli/README ToC
         run: python3 scripts/readme_toc.py codex-cli/README.md
+      - name: Prettier (run `pnpm run format:fix` to fix)
+        run: pnpm run format

.github/workflows/issue-deduplicator.yml (vendored, new file, 140 lines)

@@ -0,0 +1,140 @@
name: Issue Deduplicator
on:
issues:
types:
- opened
- labeled
jobs:
gather-duplicates:
name: Identify potential duplicates
if: ${{ github.event.action == 'opened' || (github.event.action == 'labeled' && github.event.label.name == 'codex-deduplicate') }}
runs-on: ubuntu-latest
permissions:
contents: read
outputs:
codex_output: ${{ steps.codex.outputs.final-message }}
steps:
- uses: actions/checkout@v4
- name: Prepare Codex inputs
env:
GH_TOKEN: ${{ github.token }}
run: |
set -eo pipefail
CURRENT_ISSUE_FILE=codex-current-issue.json
EXISTING_ISSUES_FILE=codex-existing-issues.json
gh issue list --repo "${{ github.repository }}" \
--json number,title,body,createdAt \
--limit 1000 \
--state all \
--search "sort:created-desc" \
| jq '.' \
> "$EXISTING_ISSUES_FILE"
gh issue view "${{ github.event.issue.number }}" \
--repo "${{ github.repository }}" \
--json number,title,body \
| jq '.' \
> "$CURRENT_ISSUE_FILE"
- id: codex
uses: openai/codex-action@main
with:
openai-api-key: ${{ secrets.CODEX_OPENAI_API_KEY }}
allow-users: "*"
model: gpt-5
prompt: |
You are an assistant that triages new GitHub issues by identifying potential duplicates.
You will receive the following JSON files located in the current working directory:
- `codex-current-issue.json`: JSON object describing the newly created issue (fields: number, title, body).
- `codex-existing-issues.json`: JSON array of recent issues (each element includes number, title, body, createdAt).
Instructions:
- Compare the current issue against the existing issues to find up to five that appear to describe the same underlying problem or request.
- Focus on the underlying intent and context of each issue—such as reported symptoms, feature requests, reproduction steps, or error messages—rather than relying solely on string similarity or synthetic metrics.
- After your analysis, validate your results with 1-2 lines explaining why you chose the selected matches.
- When unsure, prefer returning fewer matches.
- Include at most five numbers.
output-schema: |
{
"type": "object",
"properties": {
"issues": {
"type": "array",
"items": {
"type": "string"
}
},
"reason": { "type": "string" }
},
"required": ["issues", "reason"],
"additionalProperties": false
}
comment-on-issue:
name: Comment with potential duplicates
needs: gather-duplicates
if: ${{ needs.gather-duplicates.result != 'skipped' }}
runs-on: ubuntu-latest
permissions:
contents: read
issues: write
steps:
- name: Comment on issue
uses: actions/github-script@v7
env:
CODEX_OUTPUT: ${{ needs.gather-duplicates.outputs.codex_output }}
with:
github-token: ${{ github.token }}
script: |
const raw = process.env.CODEX_OUTPUT ?? '';
let parsed;
try {
parsed = JSON.parse(raw);
} catch (error) {
core.info(`Codex output was not valid JSON. Raw output: ${raw}`);
core.info(`Parse error: ${error.message}`);
return;
}
const issues = Array.isArray(parsed?.issues) ? parsed.issues : [];
const currentIssueNumber = String(context.payload.issue.number);
console.log(`Current issue number: ${currentIssueNumber}`);
console.log(issues);
const filteredIssues = issues.filter((value) => String(value) !== currentIssueNumber);
if (filteredIssues.length === 0) {
core.info('Codex reported no potential duplicates.');
return;
}
const lines = [
'Potential duplicates detected. Please review them and close your issue if it is a duplicate.',
'',
...filteredIssues.map((value) => `- #${String(value)}`),
'',
'*Powered by [Codex Action](https://github.com/openai/codex-action)*'];
await github.rest.issues.createComment({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: context.payload.issue.number,
body: lines.join("\n"),
});
- name: Remove codex-deduplicate label
if: ${{ always() && github.event.action == 'labeled' && github.event.label.name == 'codex-deduplicate' }}
env:
GH_TOKEN: ${{ github.token }}
GH_REPO: ${{ github.repository }}
run: |
gh issue edit "${{ github.event.issue.number }}" --remove-label codex-deduplicate || true
echo "Attempted to remove label: codex-deduplicate"

.github/workflows/issue-labeler.yml

@@ -0,0 +1,115 @@
name: Issue Labeler
on:
issues:
types:
- opened
- labeled
jobs:
gather-labels:
name: Generate label suggestions
if: ${{ github.event.action == 'opened' || (github.event.action == 'labeled' && github.event.label.name == 'codex-label') }}
runs-on: ubuntu-latest
permissions:
contents: read
outputs:
codex_output: ${{ steps.codex.outputs.final-message }}
steps:
- uses: actions/checkout@v4
- id: codex
uses: openai/codex-action@main
with:
openai-api-key: ${{ secrets.CODEX_OPENAI_API_KEY }}
allow-users: "*"
prompt: |
You are an assistant that reviews GitHub issues for the repository.
Your job is to choose the most appropriate existing labels for the issue described later in this prompt.
Follow these rules:
- Only pick labels out of the list below.
- Prefer a small set of precise labels over many broad ones.
Labels to apply:
1. bug — Reproducible defects in Codex products (CLI, VS Code extension, web, auth).
2. enhancement — Feature requests or usability improvements that ask for new capabilities, better ergonomics, or quality-of-life tweaks.
3. extension — VS Code (or other IDE) extension-specific issues.
4. windows-os — Bugs or friction specific to Windows environments (always apply when PowerShell is mentioned; also path handling, copy/paste, and OS-specific auth or tooling failures).
5. mcp — Topics involving Model Context Protocol servers/clients.
6. codex-web — Issues targeting the Codex web UI/Cloud experience.
7. azure — Problems or requests tied to Azure OpenAI deployments.
8. documentation — Updates or corrections needed in docs/README/config references (broken links, missing examples, outdated keys, clarification requests).
9. model-behavior — Undesirable LLM behavior: forgetting goals, refusing work, hallucinating environment details, quota misreports, or other reasoning/performance anomalies.
Issue number: ${{ github.event.issue.number }}
Issue title:
${{ github.event.issue.title }}
Issue body:
${{ github.event.issue.body }}
Repository full name:
${{ github.repository }}
output-schema: |
{
"type": "object",
"properties": {
"labels": {
"type": "array",
"items": {
"type": "string"
}
}
},
"required": ["labels"],
"additionalProperties": false
}
apply-labels:
name: Apply labels from Codex output
needs: gather-labels
if: ${{ needs.gather-labels.result != 'skipped' }}
runs-on: ubuntu-latest
permissions:
contents: read
issues: write
env:
GH_TOKEN: ${{ github.token }}
GH_REPO: ${{ github.repository }}
ISSUE_NUMBER: ${{ github.event.issue.number }}
CODEX_OUTPUT: ${{ needs.gather-labels.outputs.codex_output }}
steps:
- name: Apply labels
run: |
json=${CODEX_OUTPUT//$'\r'/}
if [ -z "$json" ]; then
echo "Codex produced no output. Skipping label application."
exit 0
fi
if ! printf '%s' "$json" | jq -e 'type == "object" and (.labels | type == "array")' >/dev/null 2>&1; then
echo "Codex output did not include a labels array. Raw output: $json"
exit 0
fi
labels=$(printf '%s' "$json" | jq -r '.labels[] | tostring')
if [ -z "$labels" ]; then
echo "Codex returned an empty array. Nothing to do."
exit 0
fi
cmd=(gh issue edit "$ISSUE_NUMBER")
while IFS= read -r label; do
cmd+=(--add-label "$label")
done <<< "$labels"
"${cmd[@]}" || true
- name: Remove codex-label trigger
if: ${{ always() && github.event.action == 'labeled' && github.event.label.name == 'codex-label' }}
run: |
gh issue edit "$ISSUE_NUMBER" --remove-label codex-label || true
echo "Attempted to remove label: codex-label"


@@ -216,31 +216,30 @@ jobs:
echo "npm_tag=" >> "$GITHUB_OUTPUT"
fi
# build_npm_package.py requires DotSlash when staging releases.
- uses: facebook/install-dotslash@v2
- name: Stage codex CLI npm package
env:
GH_TOKEN: ${{ github.token }}
run: |
set -euo pipefail
TMP_DIR="${RUNNER_TEMP}/npm-stage"
./codex-cli/scripts/build_npm_package.py \
--package codex \
--release-version "${{ steps.release_name.outputs.name }}" \
--staging-dir "${TMP_DIR}" \
--pack-output "${GITHUB_WORKSPACE}/dist/npm/codex-npm-${{ steps.release_name.outputs.name }}.tgz"
- name: Setup pnpm
uses: pnpm/action-setup@v4
with:
run_install: false
- name: Stage responses API proxy npm package
- name: Setup Node.js for npm packaging
uses: actions/setup-node@v5
with:
node-version: 22
- name: Install dependencies
run: pnpm install --frozen-lockfile
# stage_npm_packages.py requires DotSlash when staging releases.
- uses: facebook/install-dotslash@v2
- name: Stage npm packages
env:
GH_TOKEN: ${{ github.token }}
run: |
set -euo pipefail
TMP_DIR="${RUNNER_TEMP}/npm-stage-responses"
./codex-cli/scripts/build_npm_package.py \
--package codex-responses-api-proxy \
./scripts/stage_npm_packages.py \
--release-version "${{ steps.release_name.outputs.name }}" \
--staging-dir "${TMP_DIR}" \
--pack-output "${GITHUB_WORKSPACE}/dist/npm/codex-responses-api-proxy-npm-${{ steps.release_name.outputs.name }}.tgz"
--package codex \
--package codex-responses-api-proxy \
--package codex-sdk
- name: Create GitHub Release
uses: softprops/action-gh-release@v2
@@ -300,6 +299,10 @@ jobs:
--repo "${GITHUB_REPOSITORY}" \
--pattern "codex-responses-api-proxy-npm-${version}.tgz" \
--dir dist/npm
gh release download "$tag" \
--repo "${GITHUB_REPOSITORY}" \
--pattern "codex-sdk-npm-${version}.tgz" \
--dir dist/npm
# No NODE_AUTH_TOKEN needed because we use OIDC.
- name: Publish to npm
@@ -316,6 +319,7 @@ jobs:
tarballs=(
"codex-npm-${VERSION}.tgz"
"codex-responses-api-proxy-npm-${VERSION}.tgz"
"codex-sdk-npm-${VERSION}.tgz"
)
for tarball in "${tarballs[@]}"; do


@@ -8,11 +8,16 @@ In the codex-rs folder where the rust code lives:
- Never add or modify any code related to `CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR` or `CODEX_SANDBOX_ENV_VAR`.
- You operate in a sandbox where `CODEX_SANDBOX_NETWORK_DISABLED=1` will be set whenever you use the `shell` tool. Any existing code that uses `CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR` was authored with this fact in mind. It is often used to early exit out of tests that the author knew you would not be able to run given your sandbox limitations.
- Similarly, when you spawn a process using Seatbelt (`/usr/bin/sandbox-exec`), `CODEX_SANDBOX=seatbelt` will be set on the child process. Integration tests that want to run Seatbelt themselves cannot be run under Seatbelt, so checks for `CODEX_SANDBOX=seatbelt` are also often used to early exit out of tests, as appropriate.
- Always collapse if statements per https://rust-lang.github.io/rust-clippy/master/index.html#collapsible_if
- Always inline format! args when possible per https://rust-lang.github.io/rust-clippy/master/index.html#uninlined_format_args
- Use method references over closures when possible per https://rust-lang.github.io/rust-clippy/master/index.html#redundant_closure_for_method_calls
- When writing tests, prefer comparing the equality of entire objects over fields one by one.
Run `just fmt` (in `codex-rs` directory) automatically after making Rust code changes; do not ask for approval to run it. Before finalizing a change to `codex-rs`, run `just fix -p <project>` (in `codex-rs` directory) to fix any linter issues in the code. Prefer scoping with `-p` to avoid slow workspace-wide Clippy builds; only run `just fix` without `-p` if you changed shared crates. Additionally, run the tests:
1. Run the test for the specific project that was changed. For example, if changes were made in `codex-rs/tui`, run `cargo test -p codex-tui`.
2. Once those pass, if any changes were made in common, core, or protocol, run the complete test suite with `cargo test --all-features`.
When running interactively, ask the user before running `just fix` to finalize. `just fmt` does not require approval. Project-specific or individual tests can be run without asking the user, but do ask the user before running the complete test suite.
## TUI style conventions
@@ -28,6 +33,7 @@ See `codex-rs/tui/styles.md`.
- Desired: vec![" └ ".into(), "M".red(), " ".dim(), "tui/src/app.rs".dim()]
### TUI Styling (ratatui)
- Prefer Stylize helpers: use "text".dim(), .bold(), .cyan(), .italic(), .underlined() instead of manual Style where possible.
- Prefer simple conversions: use "text".into() for spans and vec![…].into() for lines; when inference is ambiguous (e.g., Paragraph::new/Cell::from), use Line::from(spans) or Span::from(text).
- Computed styles: if the Style is computed at runtime, using `Span::styled` is OK (`Span::from(text).set_style(style)` is also acceptable).
@@ -39,6 +45,7 @@ See `codex-rs/tui/styles.md`.
- Compactness: prefer the form that stays on one line after rustfmt; if only one of Line::from(vec![…]) or vec![…].into() avoids wrapping, choose that. If both wrap, pick the one with fewer wrapped lines.
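A minimal sketch of the preferred forms (illustrative only, not code from the repo):
```rust
use ratatui::style::Stylize;
use ratatui::text::Line;

fn change_line(path: &str) -> Line<'_> {
    // Stylize helpers plus simple `.into()` conversions; stays on one line after rustfmt.
    vec![" └ ".into(), "M".red(), " ".dim(), path.dim()].into()
}
```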
### Text wrapping
- Always use textwrap::wrap to wrap plain strings.
- If you have a ratatui Line and you want to wrap it, use the helpers in tui/src/wrapping.rs, e.g. word_wrap_lines / word_wrap_line.
- If you need to indent wrapped lines, use the initial_indent / subsequent_indent options from RtOptions if you can, rather than writing custom logic.
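A minimal sketch of those indent options on a plain string (`RtOptions` mirrors them for ratatui lines; the width and strings here are made up):
```rust
use textwrap::{wrap, Options};

fn main() {
    let opts = Options::new(24)
        .initial_indent(" └ ")
        .subsequent_indent("   ");
    // Each returned row is already indented; no custom indent logic needed.
    for row in wrap("a fairly long status message that needs wrapping", opts) {
        println!("{row}");
    }
}
```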
@@ -60,8 +67,34 @@ This repo uses snapshot tests (via `insta`), especially in `codex-rs/tui`, to va
- `cargo insta accept -p codex-tui`
If you don't have the tool:
- `cargo install cargo-insta`
### Test assertions
- Tests should use pretty_assertions::assert_eq for clearer diffs. Import this at the top of the test module if it isn't already.
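A minimal sketch of that pattern, which also shows comparing whole objects rather than individual fields:
```rust
#[cfg(test)]
mod tests {
    // Shadows std's assert_eq! so failures print a structural diff.
    use pretty_assertions::assert_eq;

    #[derive(Debug, PartialEq)]
    struct Summary {
        id: u32,
        title: String,
    }

    #[test]
    fn compares_whole_objects() {
        let expected = Summary { id: 1, title: "hello".into() };
        let actual = Summary { id: 1, title: "hello".into() };
        assert_eq!(expected, actual);
    }
}
```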
### Integration tests (core)
- Prefer the utilities in `core_test_support::responses` when writing end-to-end Codex tests.
- All `mount_sse*` helpers return a `ResponseMock`; hold onto it so you can assert against outbound `/responses` POST bodies.
- Use `ResponseMock::single_request()` when a test should only issue one POST, or `ResponseMock::requests()` to inspect every captured `ResponsesRequest`.
- `ResponsesRequest` exposes helpers (`body_json`, `input`, `function_call_output`, `custom_tool_call_output`, `call_output`, `header`, `path`, `query_param`) so assertions can target structured payloads instead of manual JSON digging.
- Build SSE payloads with the provided `ev_*` constructors and the `sse(...)` helper.
- Typical pattern:
```rust
let mock = responses::mount_sse_once(&server, responses::sse(vec![
responses::ev_response_created("resp-1"),
responses::ev_function_call(call_id, "shell", &serde_json::to_string(&args)?),
responses::ev_completed("resp-1"),
])).await;
codex.submit(Op::UserTurn { ... }).await?;
// Assert request body if needed.
let request = mock.single_request();
// assert using request.function_call_output(call_id) or request.body_json() or other helpers.
```


@@ -1,4 +1,3 @@
<p align="center"><code>npm i -g @openai/codex</code><br />or <code>brew install codex</code></p>
<p align="center"><strong>Codex CLI</strong> is a coding agent from OpenAI that runs locally on your computer.
@@ -62,8 +61,7 @@ You can also use Codex with an API key, but this requires [additional setup](./d
### Model Context Protocol (MCP)
Codex CLI supports [MCP servers](./docs/advanced.md#model-context-protocol-mcp). Enable by adding an `mcp_servers` section to your `~/.codex/config.toml`.
Codex can access MCP servers. To configure them, refer to the [config docs](./docs/config.md#mcp_servers).
### Configuration
@@ -83,8 +81,11 @@ Codex CLI supports a rich set of configuration options, with preferences stored
- [**Authentication**](./docs/authentication.md)
- [Auth methods](./docs/authentication.md#forcing-a-specific-auth-method-advanced)
- [Login on a "Headless" machine](./docs/authentication.md#connecting-on-a-headless-machine)
- **Automating Codex**
- [GitHub Action](https://github.com/openai/codex-action)
- [TypeScript SDK](./sdk/typescript/README.md)
- [Non-interactive mode (`codex exec`)](./docs/exec.md)
- [**Advanced**](./docs/advanced.md)
- [Non-interactive / CI mode](./docs/advanced.md#non-interactive--ci-mode)
- [Tracing / verbose logging](./docs/advanced.md#tracing--verbose-logging)
- [Model Context Protocol (MCP)](./docs/advanced.md#model-context-protocol-mcp)
- [**Zero data retention (ZDR)**](./docs/zdr.md)


@@ -1,11 +1,19 @@
# npm releases
Run the following:
To build the 0.2.x or later version of the npm module, which runs the Rust version of the CLI, build it as follows:
Use the staging helper in the repo root to generate npm tarballs for a release. For
example, to stage the CLI, responses proxy, and SDK packages for version `0.6.0`:
```bash
./codex-cli/scripts/build_npm_package.py --release-version 0.6.0
./scripts/stage_npm_packages.py \
--release-version 0.6.0 \
--package codex \
--package codex-responses-api-proxy \
--package codex-sdk
```
Note this will create `./codex-cli/vendor/` as a side-effect.
This downloads the native artifacts once, hydrates `vendor/` for each package, and writes
tarballs to `dist/npm/`.
If you need to invoke `build_npm_package.py` directly, run
`codex-cli/scripts/install_native_deps.py` first and pass `--vendor-src` pointing to the
directory that contains the populated `vendor/` tree.


@@ -3,7 +3,6 @@
import argparse
import json
import re
import shutil
import subprocess
import sys
@@ -14,19 +13,25 @@ SCRIPT_DIR = Path(__file__).resolve().parent
CODEX_CLI_ROOT = SCRIPT_DIR.parent
REPO_ROOT = CODEX_CLI_ROOT.parent
RESPONSES_API_PROXY_NPM_ROOT = REPO_ROOT / "codex-rs" / "responses-api-proxy" / "npm"
GITHUB_REPO = "openai/codex"
CODEX_SDK_ROOT = REPO_ROOT / "sdk" / "typescript"
# The docs are not clear on what the expected value/format of
# workflow/workflowName is:
# https://cli.github.com/manual/gh_run_list
WORKFLOW_NAME = ".github/workflows/rust-release.yml"
PACKAGE_NATIVE_COMPONENTS: dict[str, list[str]] = {
"codex": ["codex", "rg"],
"codex-responses-api-proxy": ["codex-responses-api-proxy"],
"codex-sdk": ["codex"],
}
COMPONENT_DEST_DIR: dict[str, str] = {
"codex": "codex",
"codex-responses-api-proxy": "codex-responses-api-proxy",
"rg": "path",
}
def parse_args() -> argparse.Namespace:
parser = argparse.ArgumentParser(description="Build or stage the Codex CLI npm package.")
parser.add_argument(
"--package",
choices=("codex", "codex-responses-api-proxy"),
choices=("codex", "codex-responses-api-proxy", "codex-sdk"),
default="codex",
help="Which npm package to stage (default: codex).",
)
@@ -37,14 +42,9 @@ def parse_args() -> argparse.Namespace:
parser.add_argument(
"--release-version",
help=(
"Version to stage for npm release. When provided, the script also resolves the "
"matching rust-release workflow unless --workflow-url is supplied."
"Version to stage for npm release."
),
)
parser.add_argument(
"--workflow-url",
help="Optional GitHub Actions workflow run URL used to download native binaries.",
)
parser.add_argument(
"--staging-dir",
type=Path,
@@ -64,6 +64,11 @@ def parse_args() -> argparse.Namespace:
type=Path,
help="Path where the generated npm tarball should be written.",
)
parser.add_argument(
"--vendor-src",
type=Path,
help="Directory containing pre-installed native binaries to bundle (vendor root).",
)
return parser.parse_args()
@@ -86,29 +91,19 @@ def main() -> int:
try:
stage_sources(staging_dir, version, package)
workflow_url = args.workflow_url
resolved_head_sha: str | None = None
if not workflow_url:
if release_version:
workflow = resolve_release_workflow(version)
workflow_url = workflow["url"]
resolved_head_sha = workflow.get("headSha")
else:
workflow_url = resolve_latest_alpha_workflow_url()
elif release_version:
try:
workflow = resolve_release_workflow(version)
resolved_head_sha = workflow.get("headSha")
except Exception:
resolved_head_sha = None
vendor_src = args.vendor_src.resolve() if args.vendor_src else None
native_components = PACKAGE_NATIVE_COMPONENTS.get(package, [])
if release_version and resolved_head_sha:
print(f"should `git checkout {resolved_head_sha}`")
if native_components:
if vendor_src is None:
components_str = ", ".join(native_components)
raise RuntimeError(
"Native components "
f"({components_str}) required for package '{package}'. Provide --vendor-src "
"pointing to a directory containing pre-installed binaries."
)
if not workflow_url:
raise RuntimeError("Unable to determine workflow URL for native binaries.")
install_native_binaries(staging_dir, workflow_url, package)
copy_native_binaries(vendor_src, staging_dir, native_components)
if release_version:
staging_dir_str = str(staging_dir)
@@ -119,12 +114,20 @@ def main() -> int:
f" node {staging_dir_str}/bin/codex.js --version\n"
f" node {staging_dir_str}/bin/codex.js --help\n\n"
)
else:
elif package == "codex-responses-api-proxy":
print(
f"Staged version {version} for release in {staging_dir_str}\n\n"
"Verify the responses API proxy:\n"
f" node {staging_dir_str}/bin/codex-responses-api-proxy.js --help\n\n"
)
else:
print(
f"Staged version {version} for release in {staging_dir_str}\n\n"
"Verify the SDK contents:\n"
f" ls {staging_dir_str}/dist\n"
f" ls {staging_dir_str}/vendor\n"
" node -e \"import('./dist/index.js').then(() => console.log('ok'))\"\n\n"
)
else:
print(f"Staged package in {staging_dir}")
@@ -152,10 +155,9 @@ def prepare_staging_dir(staging_dir: Path | None) -> tuple[Path, bool]:
def stage_sources(staging_dir: Path, version: str, package: str) -> None:
bin_dir = staging_dir / "bin"
bin_dir.mkdir(parents=True, exist_ok=True)
if package == "codex":
bin_dir = staging_dir / "bin"
bin_dir.mkdir(parents=True, exist_ok=True)
shutil.copy2(CODEX_CLI_ROOT / "bin" / "codex.js", bin_dir / "codex.js")
rg_manifest = CODEX_CLI_ROOT / "bin" / "rg"
if rg_manifest.exists():
@@ -167,6 +169,8 @@ def stage_sources(staging_dir: Path, version: str, package: str) -> None:
package_json_path = CODEX_CLI_ROOT / "package.json"
elif package == "codex-responses-api-proxy":
bin_dir = staging_dir / "bin"
bin_dir.mkdir(parents=True, exist_ok=True)
launcher_src = RESPONSES_API_PROXY_NPM_ROOT / "bin" / "codex-responses-api-proxy.js"
shutil.copy2(launcher_src, bin_dir / "codex-responses-api-proxy.js")
@@ -175,6 +179,9 @@ def stage_sources(staging_dir: Path, version: str, package: str) -> None:
shutil.copy2(readme_src, staging_dir / "README.md")
package_json_path = RESPONSES_API_PROXY_NPM_ROOT / "package.json"
elif package == "codex-sdk":
package_json_path = CODEX_SDK_ROOT / "package.json"
stage_codex_sdk_sources(staging_dir)
else:
raise RuntimeError(f"Unknown package '{package}'.")
@@ -182,91 +189,85 @@ def stage_sources(staging_dir: Path, version: str, package: str) -> None:
package_json = json.load(fh)
package_json["version"] = version
if package == "codex-sdk":
scripts = package_json.get("scripts")
if isinstance(scripts, dict):
scripts.pop("prepare", None)
files = package_json.get("files")
if isinstance(files, list):
if "vendor" not in files:
files.append("vendor")
else:
package_json["files"] = ["dist", "vendor"]
with open(staging_dir / "package.json", "w", encoding="utf-8") as out:
json.dump(package_json, out, indent=2)
out.write("\n")
def install_native_binaries(staging_dir: Path, workflow_url: str, package: str) -> None:
package_components = {
"codex": ["codex", "rg"],
"codex-responses-api-proxy": ["codex-responses-api-proxy"],
}
components = package_components.get(package)
if components is None:
raise RuntimeError(f"Unknown package '{package}'.")
cmd = ["./scripts/install_native_deps.py", "--workflow-url", workflow_url]
for component in components:
cmd.extend(["--component", component])
cmd.append(str(staging_dir))
subprocess.check_call(cmd, cwd=CODEX_CLI_ROOT)
def run_command(cmd: list[str], cwd: Path | None = None) -> None:
print("+", " ".join(cmd))
subprocess.run(cmd, cwd=cwd, check=True)
def resolve_latest_alpha_workflow_url() -> str:
version = determine_latest_alpha_version()
workflow = resolve_release_workflow(version)
return workflow["url"]
def stage_codex_sdk_sources(staging_dir: Path) -> None:
package_root = CODEX_SDK_ROOT
run_command(["pnpm", "install", "--frozen-lockfile"], cwd=package_root)
run_command(["pnpm", "run", "build"], cwd=package_root)
dist_src = package_root / "dist"
if not dist_src.exists():
raise RuntimeError("codex-sdk build did not produce a dist directory.")
shutil.copytree(dist_src, staging_dir / "dist")
readme_src = package_root / "README.md"
if readme_src.exists():
shutil.copy2(readme_src, staging_dir / "README.md")
license_src = REPO_ROOT / "LICENSE"
if license_src.exists():
shutil.copy2(license_src, staging_dir / "LICENSE")
def determine_latest_alpha_version() -> str:
releases = list_releases()
best_key: tuple[int, int, int, int] | None = None
best_version: str | None = None
pattern = re.compile(r"^rust-v(\d+)\.(\d+)\.(\d+)-alpha\.(\d+)$")
for release in releases:
tag = release.get("tag_name", "")
match = pattern.match(tag)
if not match:
def copy_native_binaries(vendor_src: Path, staging_dir: Path, components: list[str]) -> None:
vendor_src = vendor_src.resolve()
if not vendor_src.exists():
raise RuntimeError(f"Vendor source directory not found: {vendor_src}")
components_set = {component for component in components if component in COMPONENT_DEST_DIR}
if not components_set:
return
vendor_dest = staging_dir / "vendor"
if vendor_dest.exists():
shutil.rmtree(vendor_dest)
vendor_dest.mkdir(parents=True, exist_ok=True)
for target_dir in vendor_src.iterdir():
if not target_dir.is_dir():
continue
key = tuple(int(match.group(i)) for i in range(1, 5))
if best_key is None or key > best_key:
best_key = key
best_version = (
f"{match.group(1)}.{match.group(2)}.{match.group(3)}-alpha.{match.group(4)}"
)
if best_version is None:
raise RuntimeError("No alpha releases found when resolving workflow URL.")
return best_version
dest_target_dir = vendor_dest / target_dir.name
dest_target_dir.mkdir(parents=True, exist_ok=True)
for component in components_set:
dest_dir_name = COMPONENT_DEST_DIR.get(component)
if dest_dir_name is None:
continue
def list_releases() -> list[dict]:
stdout = subprocess.check_output(
["gh", "api", f"/repos/{GITHUB_REPO}/releases?per_page=100"],
text=True,
)
try:
releases = json.loads(stdout or "[]")
except json.JSONDecodeError as exc:
raise RuntimeError("Unable to parse releases JSON.") from exc
if not isinstance(releases, list):
raise RuntimeError("Unexpected response when listing releases.")
return releases
src_component_dir = target_dir / dest_dir_name
if not src_component_dir.exists():
raise RuntimeError(
f"Missing native component '{component}' in vendor source: {src_component_dir}"
)
def resolve_release_workflow(version: str) -> dict:
stdout = subprocess.check_output(
[
"gh",
"run",
"list",
"--branch",
f"rust-v{version}",
"--json",
"workflowName,url,headSha",
"--workflow",
WORKFLOW_NAME,
"--jq",
"first(.[])",
],
text=True,
)
workflow = json.loads(stdout or "[]")
if not workflow:
raise RuntimeError(f"Unable to find rust-release workflow for version {version}.")
return workflow
dest_component_dir = dest_target_dir / dest_dir_name
if dest_component_dir.exists():
shutil.rmtree(dest_component_dir)
shutil.copytree(src_component_dir, dest_component_dir)
def run_npm_pack(staging_dir: Path, output_path: Path) -> Path:

codex-rs/Cargo.lock
File diff suppressed because it is too large

@@ -3,6 +3,7 @@ members = [
"backend-client",
"ansi-escape",
"app-server",
"app-server-protocol",
"apply-patch",
"arg0",
"codex-backend-openapi-models",
@@ -31,6 +32,7 @@ members = [
"git-apply",
"utils/json-to-toml",
"utils/readiness",
"utils/string",
]
resolver = "2"
@@ -47,6 +49,7 @@ edition = "2024"
app_test_support = { path = "app-server/tests/common" }
codex-ansi-escape = { path = "ansi-escape" }
codex-app-server = { path = "app-server" }
codex-app-server-protocol = { path = "app-server-protocol" }
codex-apply-patch = { path = "apply-patch" }
codex-arg0 = { path = "arg0" }
codex-chatgpt = { path = "chatgpt" }
@@ -69,6 +72,7 @@ codex-rmcp-client = { path = "rmcp-client" }
codex-tui = { path = "tui" }
codex-utils-json-to-toml = { path = "utils/json-to-toml" }
codex-utils-readiness = { path = "utils/readiness" }
codex-utils-string = { path = "utils/string" }
core_test_support = { path = "core/tests/common" }
mcp-types = { path = "mcp-types" }
mcp_test_support = { path = "mcp-server/tests/common" }
@@ -79,10 +83,12 @@ ansi-to-tui = "7.0.0"
anyhow = "1"
arboard = "3"
askama = "0.12"
assert_matches = "1.5.0"
assert_cmd = "2"
async-channel = "2.3.1"
async-stream = "0.3.6"
async-trait = "0.1.89"
axum = { version = "0.8", default-features = false }
base64 = "0.22.1"
bytes = "1.10.1"
chrono = "0.4.42"
@@ -95,11 +101,12 @@ derive_more = "2"
diffy = "0.4.2"
dirs = "6"
dotenvy = "0.15.7"
dunce = "1.0.4"
env-flags = "0.1.1"
env_logger = "0.11.5"
escargot = "0.5"
eventsource-stream = "0.2.3"
futures = "0.3"
futures = { version = "0.3", default-features = false }
icu_decimal = "2.0.0"
icu_locale_core = "2.0.0"
ignore = "0.4.23"
@@ -107,6 +114,7 @@ image = { version = "^0.25.8", default-features = false }
indexmap = "2.6.0"
insta = "1.43.2"
itertools = "0.14.0"
keyring = "3.6"
landlock = "0.4.1"
lazy_static = "1"
libc = "0.2.175"
@@ -123,6 +131,7 @@ opentelemetry-semantic-conventions = "0.30.0"
opentelemetry_sdk = "0.30.0"
os_info = "3.12.0"
owo-colors = "4.2.0"
paste = "1.0.15"
path-absolutize = "3.1.1"
path-clean = "1.0.1"
pathdiff = "0.2"
@@ -134,11 +143,13 @@ rand = "0.9"
ratatui = "0.29.0"
regex-lite = "0.1.7"
reqwest = "0.12"
rmcp = { version = "0.8.0", default-features = false }
schemars = "0.8.22"
seccompiler = "0.5.0"
serde = "1"
serde_json = "1"
serde_with = "3.14"
serial_test = "3.2.0"
sha1 = "0.10.6"
sha2 = "0.10"
shlex = "1.3.0"
@@ -164,8 +175,9 @@ tracing = "0.1.41"
tracing-appender = "0.2.3"
tracing-subscriber = "0.3.20"
tracing-test = "0.2.5"
tree-sitter = "0.25.9"
tree-sitter-bash = "0.25.0"
tree-sitter = "0.25.10"
tree-sitter-bash = "0.25"
tree-sitter-highlight = "0.25.10"
ts-rs = "11"
unicode-segmentation = "1.12.0"
unicode-width = "0.2"
@@ -233,5 +245,9 @@ strip = "symbols"
codegen-units = 1
[patch.crates-io]
# Uncomment to debug local changes.
# ratatui = { path = "../../ratatui" }
ratatui = { git = "https://github.com/nornagon/ratatui", branch = "nornagon-v0.29.0-patch" }
# Uncomment to debug local changes.
# rmcp = { path = "../../rust-sdk/crates/rmcp" }


@@ -23,9 +23,15 @@ Codex supports a rich set of configuration options. Note that the Rust CLI uses
### Model Context Protocol Support
Codex CLI functions as an MCP client that can connect to MCP servers on startup. See the [`mcp_servers`](../docs/config.md#mcp_servers) section in the configuration documentation for details.
#### MCP client
It is still experimental, but you can also launch Codex as an MCP _server_ by running `codex mcp-server`. Use the [`@modelcontextprotocol/inspector`](https://github.com/modelcontextprotocol/inspector) to try it out:
Codex CLI ships an MCP client, which lets the CLI and the IDE extension connect to MCP servers on startup. See the [`configuration documentation`](../docs/config.md#mcp_servers) for details.
#### MCP server (experimental)
Codex can be launched as an MCP _server_ by running `codex mcp-server`. This allows _other_ MCP clients to use Codex as a tool for another agent.
Use the [`@modelcontextprotocol/inspector`](https://github.com/modelcontextprotocol/inspector) to try it out:
```shell
npx @modelcontextprotocol/inspector codex mcp-server
@@ -71,9 +77,13 @@ To test to see what happens when a command is run under the sandbox provided by
```
# macOS
codex debug seatbelt [--full-auto] [COMMAND]...
codex sandbox macos [--full-auto] [COMMAND]...
# Linux
codex sandbox linux [--full-auto] [COMMAND]...
# Legacy aliases
codex debug seatbelt [--full-auto] [COMMAND]...
codex debug landlock [--full-auto] [COMMAND]...
```


@@ -0,0 +1,24 @@
[package]
edition = "2024"
name = "codex-app-server-protocol"
version = { workspace = true }
[lib]
name = "codex_app_server_protocol"
path = "src/lib.rs"
[lints]
workspace = true
[dependencies]
codex-protocol = { workspace = true }
paste = { workspace = true }
serde = { workspace = true, features = ["derive"] }
serde_json = { workspace = true }
strum_macros = { workspace = true }
ts-rs = { workspace = true }
uuid = { workspace = true, features = ["serde", "v7"] }
[dev-dependencies]
anyhow = { workspace = true }
pretty_assertions = { workspace = true }


@@ -0,0 +1,67 @@
//! We do not do true JSON-RPC 2.0, as we neither send nor expect the
//! "jsonrpc": "2.0" field.
use serde::Deserialize;
use serde::Serialize;
use ts_rs::TS;
pub const JSONRPC_VERSION: &str = "2.0";
#[derive(Debug, Clone, PartialEq, Deserialize, Serialize, Hash, Eq, TS)]
#[serde(untagged)]
pub enum RequestId {
String(String),
#[ts(type = "number")]
Integer(i64),
}
pub type Result = serde_json::Value;
/// Refers to any valid JSON-RPC object that can be decoded off the wire, or encoded to be sent.
#[derive(Debug, Clone, PartialEq, Deserialize, Serialize, TS)]
#[serde(untagged)]
pub enum JSONRPCMessage {
Request(JSONRPCRequest),
Notification(JSONRPCNotification),
Response(JSONRPCResponse),
Error(JSONRPCError),
}
/// A request that expects a response.
#[derive(Debug, Clone, PartialEq, Deserialize, Serialize, TS)]
pub struct JSONRPCRequest {
pub id: RequestId,
pub method: String,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub params: Option<serde_json::Value>,
}
/// A notification which does not expect a response.
#[derive(Debug, Clone, PartialEq, Deserialize, Serialize, TS)]
pub struct JSONRPCNotification {
pub method: String,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub params: Option<serde_json::Value>,
}
/// A successful (non-error) response to a request.
#[derive(Debug, Clone, PartialEq, Deserialize, Serialize, TS)]
pub struct JSONRPCResponse {
pub id: RequestId,
pub result: Result,
}
/// A response to a request that indicates an error occurred.
#[derive(Debug, Clone, PartialEq, Deserialize, Serialize, TS)]
pub struct JSONRPCError {
pub error: JSONRPCErrorError,
pub id: RequestId,
}
#[derive(Debug, Clone, PartialEq, Deserialize, Serialize, TS)]
pub struct JSONRPCErrorError {
pub code: i64,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub data: Option<serde_json::Value>,
pub message: String,
}
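Because `JSONRPCMessage` is `#[serde(untagged)]`, the set of fields present selects the variant when decoding. A sketch (assuming the crate's re-exports and `serde_json` as a dependency):
```rust
use codex_app_server_protocol::JSONRPCMessage;

fn main() -> serde_json::Result<()> {
    let msg: JSONRPCMessage =
        serde_json::from_str(r#"{"id": 3, "method": "fuzzyFileSearch", "params": {}}"#)?;
    // Both "id" and "method" are present, so this decodes as a Request,
    // not a Notification (no "id") or a Response (no "method").
    assert!(matches!(msg, JSONRPCMessage::Request(_)));
    Ok(())
}
```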


@@ -0,0 +1,5 @@
mod jsonrpc_lite;
mod protocol;
pub use jsonrpc_lite::*;
pub use protocol::*;


@@ -1,77 +1,27 @@
use std::collections::HashMap;
use std::fmt::Display;
use std::path::PathBuf;
use crate::config_types::ReasoningEffort;
use crate::config_types::ReasoningSummary;
use crate::config_types::SandboxMode;
use crate::config_types::Verbosity;
use crate::protocol::AskForApproval;
use crate::protocol::EventMsg;
use crate::protocol::FileChange;
use crate::protocol::ReviewDecision;
use crate::protocol::SandboxPolicy;
use crate::protocol::TurnAbortReason;
use mcp_types::JSONRPCNotification;
use mcp_types::RequestId;
use crate::JSONRPCNotification;
use crate::JSONRPCRequest;
use crate::RequestId;
use codex_protocol::ConversationId;
use codex_protocol::config_types::ReasoningEffort;
use codex_protocol::config_types::ReasoningSummary;
use codex_protocol::config_types::SandboxMode;
use codex_protocol::config_types::Verbosity;
use codex_protocol::protocol::AskForApproval;
use codex_protocol::protocol::EventMsg;
use codex_protocol::protocol::FileChange;
use codex_protocol::protocol::ReviewDecision;
use codex_protocol::protocol::SandboxPolicy;
use codex_protocol::protocol::TurnAbortReason;
use paste::paste;
use serde::Deserialize;
use serde::Serialize;
use strum_macros::Display;
use ts_rs::TS;
use uuid::Uuid;
#[derive(Debug, Clone, Copy, PartialEq, Eq, TS, Hash)]
#[ts(type = "string")]
pub struct ConversationId {
uuid: Uuid,
}
impl ConversationId {
pub fn new() -> Self {
Self {
uuid: Uuid::now_v7(),
}
}
pub fn from_string(s: &str) -> Result<Self, uuid::Error> {
Ok(Self {
uuid: Uuid::parse_str(s)?,
})
}
}
impl Default for ConversationId {
fn default() -> Self {
Self::new()
}
}
impl Display for ConversationId {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "{}", self.uuid)
}
}
impl Serialize for ConversationId {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
S: serde::Serializer,
{
serializer.collect_str(&self.uuid)
}
}
impl<'de> Deserialize<'de> for ConversationId {
fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
where
D: serde::Deserializer<'de>,
{
let value = String::deserialize(deserializer)?;
let uuid = Uuid::parse_str(&value).map_err(serde::de::Error::custom)?;
Ok(Self { uuid })
}
}
#[derive(Serialize, Deserialize, Clone, Debug, PartialEq, TS)]
#[ts(type = "string")]
pub struct GitSha(pub String);
@@ -89,117 +39,137 @@ pub enum AuthMode {
ChatGPT,
}
/// Request from the client to the server.
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(tag = "method", rename_all = "camelCase")]
pub enum ClientRequest {
/// Generates an `enum ClientRequest` where each variant is a request that the
/// client can send to the server. Each variant has associated `params` and
/// `response` types. Also generates a `export_client_responses()` function to
/// export all response types to TypeScript.
macro_rules! client_request_definitions {
(
$(
$(#[$variant_meta:meta])*
$variant:ident {
params: $(#[$params_meta:meta])* $params:ty,
response: $response:ty,
}
),* $(,)?
) => {
/// Request from the client to the server.
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(tag = "method", rename_all = "camelCase")]
pub enum ClientRequest {
$(
$(#[$variant_meta])*
$variant {
#[serde(rename = "id")]
request_id: RequestId,
$(#[$params_meta])*
params: $params,
},
)*
}
pub fn export_client_responses(
out_dir: &::std::path::Path,
) -> ::std::result::Result<(), ::ts_rs::ExportError> {
$(
<$response as ::ts_rs::TS>::export_all_to(out_dir)?;
)*
Ok(())
}
};
}
client_request_definitions! {
Initialize {
#[serde(rename = "id")]
request_id: RequestId,
params: InitializeParams,
response: InitializeResponse,
},
NewConversation {
#[serde(rename = "id")]
request_id: RequestId,
params: NewConversationParams,
response: NewConversationResponse,
},
/// List recorded Codex conversations (rollouts) with optional pagination and search.
ListConversations {
#[serde(rename = "id")]
request_id: RequestId,
params: ListConversationsParams,
response: ListConversationsResponse,
},
/// Resume a recorded Codex conversation from a rollout file.
ResumeConversation {
#[serde(rename = "id")]
request_id: RequestId,
params: ResumeConversationParams,
response: ResumeConversationResponse,
},
ArchiveConversation {
#[serde(rename = "id")]
request_id: RequestId,
params: ArchiveConversationParams,
response: ArchiveConversationResponse,
},
SendUserMessage {
#[serde(rename = "id")]
request_id: RequestId,
params: SendUserMessageParams,
response: SendUserMessageResponse,
},
SendUserTurn {
#[serde(rename = "id")]
request_id: RequestId,
params: SendUserTurnParams,
response: SendUserTurnResponse,
},
InterruptConversation {
#[serde(rename = "id")]
request_id: RequestId,
params: InterruptConversationParams,
response: InterruptConversationResponse,
},
AddConversationListener {
#[serde(rename = "id")]
request_id: RequestId,
params: AddConversationListenerParams,
response: AddConversationSubscriptionResponse,
},
RemoveConversationListener {
#[serde(rename = "id")]
request_id: RequestId,
params: RemoveConversationListenerParams,
response: RemoveConversationSubscriptionResponse,
},
GitDiffToRemote {
#[serde(rename = "id")]
request_id: RequestId,
params: GitDiffToRemoteParams,
response: GitDiffToRemoteResponse,
},
LoginApiKey {
#[serde(rename = "id")]
request_id: RequestId,
params: LoginApiKeyParams,
response: LoginApiKeyResponse,
},
LoginChatGpt {
#[serde(rename = "id")]
request_id: RequestId,
params: #[ts(type = "undefined")] #[serde(skip_serializing_if = "Option::is_none")] Option<()>,
response: LoginChatGptResponse,
},
CancelLoginChatGpt {
#[serde(rename = "id")]
request_id: RequestId,
params: CancelLoginChatGptParams,
response: CancelLoginChatGptResponse,
},
LogoutChatGpt {
#[serde(rename = "id")]
request_id: RequestId,
params: #[ts(type = "undefined")] #[serde(skip_serializing_if = "Option::is_none")] Option<()>,
response: LogoutChatGptResponse,
},
GetAuthStatus {
#[serde(rename = "id")]
request_id: RequestId,
params: GetAuthStatusParams,
response: GetAuthStatusResponse,
},
GetUserSavedConfig {
#[serde(rename = "id")]
request_id: RequestId,
params: #[ts(type = "undefined")] #[serde(skip_serializing_if = "Option::is_none")] Option<()>,
response: GetUserSavedConfigResponse,
},
SetDefaultModel {
#[serde(rename = "id")]
request_id: RequestId,
params: SetDefaultModelParams,
response: SetDefaultModelResponse,
},
GetUserAgent {
#[serde(rename = "id")]
request_id: RequestId,
params: #[ts(type = "undefined")] #[serde(skip_serializing_if = "Option::is_none")] Option<()>,
response: GetUserAgentResponse,
},
UserInfo {
#[serde(rename = "id")]
request_id: RequestId,
params: #[ts(type = "undefined")] #[serde(skip_serializing_if = "Option::is_none")] Option<()>,
response: UserInfoResponse,
},
FuzzyFileSearch {
#[serde(rename = "id")]
request_id: RequestId,
params: FuzzyFileSearchParams,
response: FuzzyFileSearchResponse,
},
/// Execute a command (argv vector) under the server's sandbox.
ExecOneOffCommand {
#[serde(rename = "id")]
request_id: RequestId,
params: ExecOneOffCommandParams,
response: ExecOneOffCommandResponse,
},
}
@@ -429,7 +399,7 @@ pub struct ExecOneOffCommandParams {
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct ExecArbitraryCommandResponse {
pub struct ExecOneOffCommandResponse {
pub exit_code: i32,
pub stdout: String,
pub stderr: String,
@@ -633,30 +603,74 @@ pub enum InputItem {
},
}
// TODO(mbolin): Need test to ensure these constants match the enum variants.
/// Generates an `enum ServerRequest` where each variant is a request that the
/// server can send to the client along with the corresponding params and
/// response types. It also generates helper types used by the app/server
/// infrastructure (payload enum, request constructor, and export helpers).
macro_rules! server_request_definitions {
(
$(
$(#[$variant_meta:meta])*
$variant:ident
),* $(,)?
) => {
paste! {
/// Request initiated from the server and sent to the client.
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(tag = "method", rename_all = "camelCase")]
pub enum ServerRequest {
$(
$(#[$variant_meta])*
$variant {
#[serde(rename = "id")]
request_id: RequestId,
params: [<$variant Params>],
},
)*
}
pub const APPLY_PATCH_APPROVAL_METHOD: &str = "applyPatchApproval";
pub const EXEC_COMMAND_APPROVAL_METHOD: &str = "execCommandApproval";
#[derive(Debug, Clone, PartialEq)]
pub enum ServerRequestPayload {
$( $variant([<$variant Params>]), )*
}
/// Request initiated from the server and sent to the client.
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(tag = "method", rename_all = "camelCase")]
pub enum ServerRequest {
impl ServerRequestPayload {
pub fn request_with_id(self, request_id: RequestId) -> ServerRequest {
match self {
$(Self::$variant(params) => ServerRequest::$variant { request_id, params },)*
}
}
}
}
pub fn export_server_responses(
out_dir: &::std::path::Path,
) -> ::std::result::Result<(), ::ts_rs::ExportError> {
paste! {
$(<[<$variant Response>] as ::ts_rs::TS>::export_all_to(out_dir)?;)*
}
Ok(())
}
};
}
impl TryFrom<JSONRPCRequest> for ServerRequest {
type Error = serde_json::Error;
fn try_from(value: JSONRPCRequest) -> Result<Self, Self::Error> {
serde_json::from_value(serde_json::to_value(value)?)
}
}
server_request_definitions! {
/// Request to approve a patch.
ApplyPatchApproval {
#[serde(rename = "id")]
request_id: RequestId,
params: ApplyPatchApprovalParams,
},
ApplyPatchApproval,
/// Request to exec a command.
ExecCommandApproval {
#[serde(rename = "id")]
request_id: RequestId,
params: ExecCommandApprovalParams,
},
ExecCommandApproval,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct ApplyPatchApprovalParams {
pub conversation_id: ConversationId,
/// Use to correlate this with [codex_core::protocol::PatchApplyBeginEvent]
@@ -673,6 +687,7 @@ pub struct ApplyPatchApprovalParams {
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct ExecCommandApprovalParams {
pub conversation_id: ConversationId,
/// Use to correlate this with [codex_core::protocol::ExecCommandBeginEvent]
@@ -710,6 +725,7 @@ pub struct FuzzyFileSearchParams {
pub struct FuzzyFileSearchResult {
pub root: String,
pub path: String,
pub file_name: String,
pub score: u32,
#[serde(skip_serializing_if = "Option::is_none")]
pub indices: Option<Vec<u32>>,
@@ -746,6 +762,7 @@ pub struct SessionConfiguredNotification {
pub history_log_id: u64,
/// Current number of entries in the history log.
#[ts(type = "number")]
pub history_entry_count: usize,
/// Optional initial messages (as events) for resumed sessions.
@@ -842,12 +859,6 @@ mod tests {
Ok(())
}
#[test]
fn test_conversation_id_default_is_not_zeroes() {
let id = ConversationId::default();
assert_ne!(id.uuid, Uuid::nil());
}
#[test]
fn conversation_id_serializes_as_plain_string() -> Result<()> {
let id = ConversationId::from_string("67e55044-10b1-426f-9247-bb680e5fe0c8")?;
@@ -883,4 +894,39 @@ mod tests {
);
Ok(())
}
#[test]
fn serialize_server_request() -> Result<()> {
let conversation_id = ConversationId::from_string("67e55044-10b1-426f-9247-bb680e5fe0c8")?;
let params = ExecCommandApprovalParams {
conversation_id,
call_id: "call-42".to_string(),
command: vec!["echo".to_string(), "hello".to_string()],
cwd: PathBuf::from("/tmp"),
reason: Some("because tests".to_string()),
};
let request = ServerRequest::ExecCommandApproval {
request_id: RequestId::Integer(7),
params: params.clone(),
};
assert_eq!(
json!({
"method": "execCommandApproval",
"id": 7,
"params": {
"conversationId": "67e55044-10b1-426f-9247-bb680e5fe0c8",
"callId": "call-42",
"command": ["echo", "hello"],
"cwd": "/tmp",
"reason": "because tests",
}
}),
serde_json::to_value(&request)?,
);
let payload = ServerRequestPayload::ExecCommandApproval(params);
assert_eq!(payload.request_with_id(RequestId::Integer(7)), request);
Ok(())
}
}


@@ -22,10 +22,8 @@ codex-core = { workspace = true }
codex-file-search = { workspace = true }
codex-login = { workspace = true }
codex-protocol = { workspace = true }
codex-app-server-protocol = { workspace = true }
codex-utils-json-to-toml = { workspace = true }
# We should only be using mcp-types for JSON-RPC types: it would be nice to
# split this out into a separate crate at some point.
mcp-types = { workspace = true }
serde = { workspace = true, features = ["derive"] }
serde_json = { workspace = true }
tokio = { workspace = true, features = [


@@ -0,0 +1,15 @@
# codex-app-server
`codex app-server` is the harness Codex uses to power rich interfaces such as the [Codex VS Code extension](https://marketplace.visualstudio.com/items?itemName=openai.chatgpt). The message schema is currently unstable, but those who wish to build experimental UIs on top of Codex may find it valuable.
## Protocol
Similar to [MCP](https://modelcontextprotocol.io/), `codex app-server` supports bidirectional communication, streaming JSONL over stdio. The protocol is JSON-RPC 2.0, though the `"jsonrpc": "2.0"` field is omitted.
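For illustration, a sketch using the `codex-app-server-protocol` crate (assuming it and `serde_json` as dependencies) shows the wire shape:
```rust
use codex_app_server_protocol::{JSONRPCRequest, RequestId};

fn main() -> serde_json::Result<()> {
    let request = JSONRPCRequest {
        id: RequestId::Integer(1),
        method: "newConversation".to_string(),
        params: Some(serde_json::json!({})),
    };
    // Prints {"id":1,"method":"newConversation","params":{}} with no "jsonrpc" field.
    println!("{}", serde_json::to_string(&request)?);
    Ok(())
}
```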
## Message Schema
Currently, you can dump a TypeScript version of the schema using `codex generate-ts`. It is specific to the version of Codex you used to run `generate-ts`, so the two are guaranteed to be compatible.
```
codex generate-ts --out DIR
```


@@ -3,10 +3,57 @@ use crate::error_code::INVALID_REQUEST_ERROR_CODE;
use crate::fuzzy_file_search::run_fuzzy_file_search;
use crate::outgoing_message::OutgoingMessageSender;
use crate::outgoing_message::OutgoingNotification;
use codex_app_server_protocol::AddConversationListenerParams;
use codex_app_server_protocol::AddConversationSubscriptionResponse;
use codex_app_server_protocol::ApplyPatchApprovalParams;
use codex_app_server_protocol::ApplyPatchApprovalResponse;
use codex_app_server_protocol::ArchiveConversationParams;
use codex_app_server_protocol::ArchiveConversationResponse;
use codex_app_server_protocol::AuthStatusChangeNotification;
use codex_app_server_protocol::ClientRequest;
use codex_app_server_protocol::ConversationSummary;
use codex_app_server_protocol::ExecCommandApprovalParams;
use codex_app_server_protocol::ExecCommandApprovalResponse;
use codex_app_server_protocol::ExecOneOffCommandParams;
use codex_app_server_protocol::ExecOneOffCommandResponse;
use codex_app_server_protocol::FuzzyFileSearchParams;
use codex_app_server_protocol::FuzzyFileSearchResponse;
use codex_app_server_protocol::GetUserAgentResponse;
use codex_app_server_protocol::GetUserSavedConfigResponse;
use codex_app_server_protocol::GitDiffToRemoteResponse;
use codex_app_server_protocol::InputItem as WireInputItem;
use codex_app_server_protocol::InterruptConversationParams;
use codex_app_server_protocol::InterruptConversationResponse;
use codex_app_server_protocol::JSONRPCErrorError;
use codex_app_server_protocol::ListConversationsParams;
use codex_app_server_protocol::ListConversationsResponse;
use codex_app_server_protocol::LoginApiKeyParams;
use codex_app_server_protocol::LoginApiKeyResponse;
use codex_app_server_protocol::LoginChatGptCompleteNotification;
use codex_app_server_protocol::LoginChatGptResponse;
use codex_app_server_protocol::NewConversationParams;
use codex_app_server_protocol::NewConversationResponse;
use codex_app_server_protocol::RemoveConversationListenerParams;
use codex_app_server_protocol::RemoveConversationSubscriptionResponse;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::Result as JsonRpcResult;
use codex_app_server_protocol::ResumeConversationParams;
use codex_app_server_protocol::SendUserMessageParams;
use codex_app_server_protocol::SendUserMessageResponse;
use codex_app_server_protocol::SendUserTurnParams;
use codex_app_server_protocol::SendUserTurnResponse;
use codex_app_server_protocol::ServerNotification;
use codex_app_server_protocol::ServerRequestPayload;
use codex_app_server_protocol::SessionConfiguredNotification;
use codex_app_server_protocol::SetDefaultModelParams;
use codex_app_server_protocol::SetDefaultModelResponse;
use codex_app_server_protocol::UserInfoResponse;
use codex_app_server_protocol::UserSavedConfig;
use codex_core::AuthManager;
use codex_core::CodexConversation;
use codex_core::ConversationManager;
use codex_core::Cursor as RolloutCursor;
use codex_core::INTERACTIVE_SESSION_SOURCES;
use codex_core::NewConversation;
use codex_core::RolloutRecorder;
use codex_core::SessionMeta;
@@ -36,58 +83,12 @@ use codex_core::protocol::ReviewDecision;
use codex_login::ServerOptions as LoginServerOptions;
use codex_login::ShutdownHandle;
use codex_login::run_login_server;
use codex_protocol::mcp_protocol::APPLY_PATCH_APPROVAL_METHOD;
use codex_protocol::mcp_protocol::AddConversationListenerParams;
use codex_protocol::mcp_protocol::AddConversationSubscriptionResponse;
use codex_protocol::mcp_protocol::ApplyPatchApprovalParams;
use codex_protocol::mcp_protocol::ApplyPatchApprovalResponse;
use codex_protocol::mcp_protocol::ArchiveConversationParams;
use codex_protocol::mcp_protocol::ArchiveConversationResponse;
use codex_protocol::mcp_protocol::AuthStatusChangeNotification;
use codex_protocol::mcp_protocol::ClientRequest;
use codex_protocol::mcp_protocol::ConversationId;
use codex_protocol::mcp_protocol::ConversationSummary;
use codex_protocol::mcp_protocol::EXEC_COMMAND_APPROVAL_METHOD;
use codex_protocol::mcp_protocol::ExecArbitraryCommandResponse;
use codex_protocol::mcp_protocol::ExecCommandApprovalParams;
use codex_protocol::mcp_protocol::ExecCommandApprovalResponse;
use codex_protocol::mcp_protocol::ExecOneOffCommandParams;
use codex_protocol::mcp_protocol::FuzzyFileSearchParams;
use codex_protocol::mcp_protocol::FuzzyFileSearchResponse;
use codex_protocol::mcp_protocol::GetUserAgentResponse;
use codex_protocol::mcp_protocol::GetUserSavedConfigResponse;
use codex_protocol::mcp_protocol::GitDiffToRemoteResponse;
use codex_protocol::mcp_protocol::InputItem as WireInputItem;
use codex_protocol::mcp_protocol::InterruptConversationParams;
use codex_protocol::mcp_protocol::InterruptConversationResponse;
use codex_protocol::mcp_protocol::ListConversationsParams;
use codex_protocol::mcp_protocol::ListConversationsResponse;
use codex_protocol::mcp_protocol::LoginApiKeyParams;
use codex_protocol::mcp_protocol::LoginApiKeyResponse;
use codex_protocol::mcp_protocol::LoginChatGptCompleteNotification;
use codex_protocol::mcp_protocol::LoginChatGptResponse;
use codex_protocol::mcp_protocol::NewConversationParams;
use codex_protocol::mcp_protocol::NewConversationResponse;
use codex_protocol::mcp_protocol::RemoveConversationListenerParams;
use codex_protocol::mcp_protocol::RemoveConversationSubscriptionResponse;
use codex_protocol::mcp_protocol::ResumeConversationParams;
use codex_protocol::mcp_protocol::SendUserMessageParams;
use codex_protocol::mcp_protocol::SendUserMessageResponse;
use codex_protocol::mcp_protocol::SendUserTurnParams;
use codex_protocol::mcp_protocol::SendUserTurnResponse;
use codex_protocol::mcp_protocol::ServerNotification;
use codex_protocol::mcp_protocol::SessionConfiguredNotification;
use codex_protocol::mcp_protocol::SetDefaultModelParams;
use codex_protocol::mcp_protocol::SetDefaultModelResponse;
use codex_protocol::mcp_protocol::UserInfoResponse;
use codex_protocol::mcp_protocol::UserSavedConfig;
use codex_protocol::ConversationId;
use codex_protocol::models::ContentItem;
use codex_protocol::models::ResponseItem;
use codex_protocol::protocol::InputMessageKind;
use codex_protocol::protocol::USER_MESSAGE_BEGIN;
use codex_utils_json_to_toml::json_to_toml;
use mcp_types::JSONRPCErrorError;
use mcp_types::RequestId;
use std::collections::HashMap;
use std::ffi::OsStr;
use std::path::PathBuf;
@@ -193,28 +194,43 @@ impl CodexMessageProcessor {
ClientRequest::LoginApiKey { request_id, params } => {
self.login_api_key(request_id, params).await;
}
ClientRequest::LoginChatGpt { request_id } => {
ClientRequest::LoginChatGpt {
request_id,
params: _,
} => {
self.login_chatgpt(request_id).await;
}
ClientRequest::CancelLoginChatGpt { request_id, params } => {
self.cancel_login_chatgpt(request_id, params.login_id).await;
}
ClientRequest::LogoutChatGpt { request_id } => {
ClientRequest::LogoutChatGpt {
request_id,
params: _,
} => {
self.logout_chatgpt(request_id).await;
}
ClientRequest::GetAuthStatus { request_id, params } => {
self.get_auth_status(request_id, params).await;
}
ClientRequest::GetUserSavedConfig { request_id } => {
ClientRequest::GetUserSavedConfig {
request_id,
params: _,
} => {
self.get_user_saved_config(request_id).await;
}
ClientRequest::SetDefaultModel { request_id, params } => {
self.set_default_model(request_id, params).await;
}
ClientRequest::GetUserAgent { request_id } => {
ClientRequest::GetUserAgent {
request_id,
params: _,
} => {
self.get_user_agent(request_id).await;
}
ClientRequest::UserInfo { request_id } => {
ClientRequest::UserInfo {
request_id,
params: _,
} => {
self.get_user_info(request_id).await;
}
ClientRequest::FuzzyFileSearch { request_id, params } => {
@@ -371,7 +387,7 @@ impl CodexMessageProcessor {
self.outgoing
.send_response(
request_id,
codex_protocol::mcp_protocol::CancelLoginChatGptResponse {},
codex_app_server_protocol::CancelLoginChatGptResponse {},
)
.await;
} else {
@@ -407,7 +423,7 @@ impl CodexMessageProcessor {
self.outgoing
.send_response(
request_id,
codex_protocol::mcp_protocol::LogoutChatGptResponse {},
codex_app_server_protocol::LogoutChatGptResponse {},
)
.await;
@@ -425,7 +441,7 @@ impl CodexMessageProcessor {
async fn get_auth_status(
&self,
request_id: RequestId,
params: codex_protocol::mcp_protocol::GetAuthStatusParams,
params: codex_app_server_protocol::GetAuthStatusParams,
) {
let include_token = params.include_token.unwrap_or(false);
let do_refresh = params.refresh_token.unwrap_or(false);
@@ -440,7 +456,7 @@ impl CodexMessageProcessor {
let requires_openai_auth = self.config.model_provider.requires_openai_auth;
let response = if !requires_openai_auth {
codex_protocol::mcp_protocol::GetAuthStatusResponse {
codex_app_server_protocol::GetAuthStatusResponse {
auth_method: None,
auth_token: None,
requires_openai_auth: Some(false),
@@ -460,13 +476,13 @@ impl CodexMessageProcessor {
(None, None)
}
};
codex_protocol::mcp_protocol::GetAuthStatusResponse {
codex_app_server_protocol::GetAuthStatusResponse {
auth_method: reported_auth_method,
auth_token: token_opt,
requires_openai_auth: Some(true),
}
}
None => codex_protocol::mcp_protocol::GetAuthStatusResponse {
None => codex_app_server_protocol::GetAuthStatusResponse {
auth_method: None,
auth_token: None,
requires_openai_auth: Some(true),
@@ -484,7 +500,7 @@ impl CodexMessageProcessor {
}
async fn get_user_saved_config(&self, request_id: RequestId) {
let toml_value = match load_config_as_toml(&self.config.codex_home) {
let toml_value = match load_config_as_toml(&self.config.codex_home).await {
Ok(val) => val,
Err(err) => {
let error = JSONRPCErrorError {
@@ -617,7 +633,7 @@ impl CodexMessageProcessor {
.await
{
Ok(output) => {
let response = ExecArbitraryCommandResponse {
let response = ExecOneOffCommandResponse {
exit_code: output.exit_code,
stdout: output.stdout.text,
stderr: output.stderr.text,
@@ -637,18 +653,19 @@ impl CodexMessageProcessor {
}
async fn process_new_conversation(&self, request_id: RequestId, params: NewConversationParams) {
let config = match derive_config_from_params(params, self.codex_linux_sandbox_exe.clone()) {
Ok(config) => config,
Err(err) => {
let error = JSONRPCErrorError {
code: INVALID_REQUEST_ERROR_CODE,
message: format!("error deriving config: {err}"),
data: None,
};
self.outgoing.send_error(request_id, error).await;
return;
}
};
let config =
match derive_config_from_params(params, self.codex_linux_sandbox_exe.clone()).await {
Ok(config) => config,
Err(err) => {
let error = JSONRPCErrorError {
code: INVALID_REQUEST_ERROR_CODE,
message: format!("error deriving config: {err}"),
data: None,
};
self.outgoing.send_error(request_id, error).await;
return;
}
};
match self.conversation_manager.new_conversation(config).await {
Ok(conversation_id) => {
@@ -693,6 +710,7 @@ impl CodexMessageProcessor {
&self.config.codex_home,
page_size,
cursor_ref,
INTERACTIVE_SESSION_SOURCES,
)
.await
{
@@ -735,7 +753,7 @@ impl CodexMessageProcessor {
// Derive a Config using the same logic as new conversation, honoring overrides if provided.
let config = match params.overrides {
Some(overrides) => {
derive_config_from_params(overrides, self.codex_linux_sandbox_exe.clone())
derive_config_from_params(overrides, self.codex_linux_sandbox_exe.clone()).await
}
None => Ok(self.config.as_ref().clone()),
};
@@ -793,7 +811,7 @@ impl CodexMessageProcessor {
});
// Reply with conversation id + model and initial messages (when present)
let response = codex_protocol::mcp_protocol::ResumeConversationResponse {
let response = codex_app_server_protocol::ResumeConversationResponse {
conversation_id,
model: session_configured.model.clone(),
initial_messages,
@@ -1253,9 +1271,8 @@ async fn apply_bespoke_event_handling(
reason,
grant_root,
};
let value = serde_json::to_value(&params).unwrap_or_default();
let rx = outgoing
.send_request(APPLY_PATCH_APPROVAL_METHOD, Some(value))
.send_request(ServerRequestPayload::ApplyPatchApproval(params))
.await;
// TODO(mbolin): Enforce a timeout so this task does not live indefinitely?
tokio::spawn(async move {
@@ -1275,9 +1292,8 @@ async fn apply_bespoke_event_handling(
cwd,
reason,
};
let value = serde_json::to_value(&params).unwrap_or_default();
let rx = outgoing
.send_request(EXEC_COMMAND_APPROVAL_METHOD, Some(value))
.send_request(ServerRequestPayload::ExecCommandApproval(params))
.await;
// TODO(mbolin): Enforce a timeout so this task does not live indefinitely?
@@ -1305,7 +1321,7 @@ async fn apply_bespoke_event_handling(
}
}
fn derive_config_from_params(
async fn derive_config_from_params(
params: NewConversationParams,
codex_linux_sandbox_exe: Option<PathBuf>,
) -> std::io::Result<Config> {
@@ -1343,12 +1359,12 @@ fn derive_config_from_params(
.map(|(k, v)| (k, json_to_toml(v)))
.collect();
Config::load_with_cli_overrides(cli_overrides, overrides)
Config::load_with_cli_overrides(cli_overrides, overrides).await
}
async fn on_patch_approval_response(
event_id: String,
receiver: oneshot::Receiver<mcp_types::Result>,
receiver: oneshot::Receiver<JsonRpcResult>,
codex: Arc<CodexConversation>,
) {
let response = receiver.await;
@@ -1390,7 +1406,7 @@ async fn on_patch_approval_response(
async fn on_exec_approval_response(
event_id: String,
receiver: oneshot::Receiver<mcp_types::Result>,
receiver: oneshot::Receiver<JsonRpcResult>,
conversation: Arc<CodexConversation>,
) {
let response = receiver.await;

View File
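
The hunks above also replace the stringly-typed `send_request(METHOD, Some(value))` calls with a typed `ServerRequestPayload`, so the method name and its params travel together as one value. A minimal sketch of that pattern, with simplified fields assumed (the real params also carry conversation ids, cwd, and reason):

use serde::Serialize;
use tokio::sync::oneshot;

#[derive(Serialize)]
#[serde(rename_all = "camelCase")]
struct ExecCommandApprovalParams {
    call_id: String,
    command: Vec<String>,
}

// Adjacent tagging turns the variant name into the JSON-RPC "method" and
// the payload into "params", removing the need for a raw (method, Value) pair.
#[derive(Serialize)]
#[serde(tag = "method", content = "params", rename_all = "camelCase")]
enum ServerRequestPayload {
    ExecCommandApproval(ExecCommandApprovalParams),
}

async fn send_request(payload: ServerRequestPayload) -> oneshot::Receiver<serde_json::Value> {
    let (tx, rx) = oneshot::channel();
    // A real sender registers `tx` under a fresh request id and writes the
    // serialized request to the transport; this sketch just echoes it back.
    let _ = tx.send(serde_json::to_value(&payload).expect("payload serializes"));
    rx
}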

@@ -1,11 +1,12 @@
use std::num::NonZero;
use std::num::NonZeroUsize;
use std::path::Path;
use std::path::PathBuf;
use std::sync::Arc;
use std::sync::atomic::AtomicBool;
use codex_app_server_protocol::FuzzyFileSearchResult;
use codex_file_search as file_search;
use codex_protocol::mcp_protocol::FuzzyFileSearchResult;
use tokio::task::JoinSet;
use tracing::warn;
@@ -56,9 +57,16 @@ pub(crate) async fn run_fuzzy_file_search(
match res {
Ok(Ok((root, res))) => {
for m in res.matches {
let path = m.path;
// TODO(shijie): Move file name generation to file_search lib.
let file_name = Path::new(&path)
.file_name()
.map(|name| name.to_string_lossy().into_owned())
.unwrap_or_else(|| path.clone());
let result = FuzzyFileSearchResult {
root: root.clone(),
path: m.path,
path,
file_name,
score: m.score,
indices: m.indices,
};

View File
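
The `file_name` field introduced above falls back to the full path whenever `Path::file_name` yields nothing (paths ending in `..`, for example). The same fallback in isolation:

use std::path::Path;

fn display_file_name(path: &str) -> String {
    Path::new(path)
        .file_name()
        .map(|name| name.to_string_lossy().into_owned())
        .unwrap_or_else(|| path.to_string())
}

fn main() {
    assert_eq!(display_file_name("sub/abce"), "abce");
    assert_eq!(display_file_name(".."), "..");
}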

@@ -8,7 +8,7 @@ use codex_common::CliConfigOverrides;
use codex_core::config::Config;
use codex_core::config::ConfigOverrides;
use mcp_types::JSONRPCMessage;
use codex_app_server_protocol::JSONRPCMessage;
use tokio::io::AsyncBufReadExt;
use tokio::io::AsyncWriteExt;
use tokio::io::BufReader;
@@ -81,6 +81,7 @@ pub async fn run_main(
)
})?;
let config = Config::load_with_cli_overrides(cli_kv_overrides, ConfigOverrides::default())
.await
.map_err(|e| {
std::io::Error::new(ErrorKind::InvalidData, format!("error loading config: {e}"))
})?;
@@ -111,17 +112,17 @@ pub async fn run_main(
let stdout_writer_handle = tokio::spawn(async move {
let mut stdout = io::stdout();
while let Some(outgoing_message) = outgoing_rx.recv().await {
let msg: JSONRPCMessage = outgoing_message.into();
match serde_json::to_string(&msg) {
Ok(json) => {
let Ok(value) = serde_json::to_value(outgoing_message) else {
error!("Failed to convert OutgoingMessage to JSON value");
continue;
};
match serde_json::to_string(&value) {
Ok(mut json) => {
json.push('\n');
if let Err(e) = stdout.write_all(json.as_bytes()).await {
error!("Failed to write to stdout: {e}");
break;
}
if let Err(e) = stdout.write_all(b"\n").await {
error!("Failed to write newline to stdout: {e}");
break;
}
}
Err(e) => error!("Failed to serialize JSONRPCMessage: {e}"),
}

View File
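
The rewritten writer loop serializes each outgoing message to a `serde_json::Value` first, so a conversion failure can be logged and skipped, then appends the newline in memory and emits the whole line with a single `write_all` instead of a second write for the terminator. The line-writing half as a hypothetical helper:

use tokio::io::AsyncWriteExt;
use tokio::io::Stdout;

async fn write_json_line<T: serde::Serialize>(stdout: &mut Stdout, msg: &T) -> std::io::Result<()> {
    // Serialize once, append the terminator in memory, write exactly once.
    let mut json = serde_json::to_string(msg)
        .map_err(|e| std::io::Error::new(std::io::ErrorKind::InvalidData, e))?;
    json.push('\n');
    stdout.write_all(json.as_bytes()).await
}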

@@ -3,20 +3,21 @@ use std::path::PathBuf;
use crate::codex_message_processor::CodexMessageProcessor;
use crate::error_code::INVALID_REQUEST_ERROR_CODE;
use crate::outgoing_message::OutgoingMessageSender;
use codex_protocol::mcp_protocol::ClientInfo;
use codex_protocol::mcp_protocol::ClientRequest;
use codex_protocol::mcp_protocol::InitializeResponse;
use codex_app_server_protocol::ClientInfo;
use codex_app_server_protocol::ClientRequest;
use codex_app_server_protocol::InitializeResponse;
use codex_app_server_protocol::JSONRPCError;
use codex_app_server_protocol::JSONRPCErrorError;
use codex_app_server_protocol::JSONRPCNotification;
use codex_app_server_protocol::JSONRPCRequest;
use codex_app_server_protocol::JSONRPCResponse;
use codex_core::AuthManager;
use codex_core::ConversationManager;
use codex_core::config::Config;
use codex_core::default_client::USER_AGENT_SUFFIX;
use codex_core::default_client::get_codex_user_agent;
use mcp_types::JSONRPCError;
use mcp_types::JSONRPCErrorError;
use mcp_types::JSONRPCNotification;
use mcp_types::JSONRPCRequest;
use mcp_types::JSONRPCResponse;
use codex_protocol::protocol::SessionSource;
use std::sync::Arc;
pub(crate) struct MessageProcessor {
@@ -34,8 +35,11 @@ impl MessageProcessor {
config: Arc<Config>,
) -> Self {
let outgoing = Arc::new(outgoing);
let auth_manager = AuthManager::shared(config.codex_home.clone());
let conversation_manager = Arc::new(ConversationManager::new(auth_manager.clone()));
let auth_manager = AuthManager::shared(config.codex_home.clone(), false);
let conversation_manager = Arc::new(ConversationManager::new(
auth_manager.clone(),
SessionSource::VSCode,
));
let codex_message_processor = CodexMessageProcessor::new(
auth_manager,
conversation_manager,

View File
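
Besides the import swap, the constructor now passes a `SessionSource` (here `VSCode`) into `ConversationManager::new`, which pairs with the `INTERACTIVE_SESSION_SOURCES` filter added to the conversation listing earlier. A sketch of that filtering idea; the variant names other than `VSCode` are assumptions:

#[derive(Debug, Clone, Copy, PartialEq)]
enum SessionSource {
    Cli, // assumed variant
    VSCode,
    Exec, // assumed variant
}

const INTERACTIVE_SESSION_SOURCES: &[SessionSource] =
    &[SessionSource::Cli, SessionSource::VSCode];

fn is_interactive(source: SessionSource) -> bool {
    INTERACTIVE_SESSION_SOURCES.contains(&source)
}

fn main() {
    assert!(is_interactive(SessionSource::VSCode));
    assert!(!is_interactive(SessionSource::Exec));
}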

@@ -2,16 +2,12 @@ use std::collections::HashMap;
use std::sync::atomic::AtomicI64;
use std::sync::atomic::Ordering;
use codex_protocol::mcp_protocol::ServerNotification;
use mcp_types::JSONRPC_VERSION;
use mcp_types::JSONRPCError;
use mcp_types::JSONRPCErrorError;
use mcp_types::JSONRPCMessage;
use mcp_types::JSONRPCNotification;
use mcp_types::JSONRPCRequest;
use mcp_types::JSONRPCResponse;
use mcp_types::RequestId;
use mcp_types::Result;
use codex_app_server_protocol::JSONRPCErrorError;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::Result;
use codex_app_server_protocol::ServerNotification;
use codex_app_server_protocol::ServerRequest;
use codex_app_server_protocol::ServerRequestPayload;
use serde::Serialize;
use tokio::sync::Mutex;
use tokio::sync::mpsc;
@@ -38,8 +34,7 @@ impl OutgoingMessageSender {
pub(crate) async fn send_request(
&self,
method: &str,
params: Option<serde_json::Value>,
request: ServerRequestPayload,
) -> oneshot::Receiver<Result> {
let id = RequestId::Integer(self.next_request_id.fetch_add(1, Ordering::Relaxed));
let outgoing_message_id = id.clone();
@@ -49,11 +44,8 @@ impl OutgoingMessageSender {
request_id_to_callback.insert(id, tx_approve);
}
let outgoing_message = OutgoingMessage::Request(OutgoingRequest {
id: outgoing_message_id,
method: method.to_string(),
params,
});
let outgoing_message =
OutgoingMessage::Request(request.request_with_id(outgoing_message_id));
let _ = self.sender.send(outgoing_message);
rx_approve
}
@@ -116,8 +108,10 @@ impl OutgoingMessageSender {
}
/// Outgoing message from the server to the client.
#[derive(Debug, Clone, Serialize)]
#[serde(untagged)]
pub(crate) enum OutgoingMessage {
Request(OutgoingRequest),
Request(ServerRequest),
Notification(OutgoingNotification),
/// AppServerNotification is specific to the case where this is run as an
/// "app server" as opposed to an MCP server.
@@ -126,64 +120,6 @@ pub(crate) enum OutgoingMessage {
Error(OutgoingError),
}
impl From<OutgoingMessage> for JSONRPCMessage {
fn from(val: OutgoingMessage) -> Self {
use OutgoingMessage::*;
match val {
Request(OutgoingRequest { id, method, params }) => {
JSONRPCMessage::Request(JSONRPCRequest {
jsonrpc: JSONRPC_VERSION.into(),
id,
method,
params,
})
}
Notification(OutgoingNotification { method, params }) => {
JSONRPCMessage::Notification(JSONRPCNotification {
jsonrpc: JSONRPC_VERSION.into(),
method,
params,
})
}
AppServerNotification(notification) => {
let method = notification.to_string();
let params = match notification.to_params() {
Ok(params) => Some(params),
Err(err) => {
warn!("failed to serialize notification params: {err}");
None
}
};
JSONRPCMessage::Notification(JSONRPCNotification {
jsonrpc: JSONRPC_VERSION.into(),
method,
params,
})
}
Response(OutgoingResponse { id, result }) => {
JSONRPCMessage::Response(JSONRPCResponse {
jsonrpc: JSONRPC_VERSION.into(),
id,
result,
})
}
Error(OutgoingError { id, error }) => JSONRPCMessage::Error(JSONRPCError {
jsonrpc: JSONRPC_VERSION.into(),
id,
error,
}),
}
}
}
#[derive(Debug, Clone, PartialEq, Serialize)]
pub(crate) struct OutgoingRequest {
pub id: RequestId,
pub method: String,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub params: Option<serde_json::Value>,
}
#[derive(Debug, Clone, PartialEq, Serialize)]
pub(crate) struct OutgoingNotification {
pub method: String,
@@ -205,7 +141,7 @@ pub(crate) struct OutgoingError {
#[cfg(test)]
mod tests {
use codex_protocol::mcp_protocol::LoginChatGptCompleteNotification;
use codex_app_server_protocol::LoginChatGptCompleteNotification;
use pretty_assertions::assert_eq;
use serde_json::json;
use uuid::Uuid;
@@ -221,18 +157,17 @@ mod tests {
error: None,
});
let jsonrpc_notification: JSONRPCMessage =
OutgoingMessage::AppServerNotification(notification).into();
let jsonrpc_notification = OutgoingMessage::AppServerNotification(notification);
assert_eq!(
JSONRPCMessage::Notification(JSONRPCNotification {
jsonrpc: "2.0".into(),
method: "loginChatGptComplete".into(),
params: Some(json!({
json!({
"method": "loginChatGptComplete",
"params": {
"loginId": Uuid::nil(),
"success": true,
})),
},
}),
jsonrpc_notification,
serde_json::to_value(jsonrpc_notification)
.expect("ensure the strum macros serialize the method field correctly"),
"ensure the strum macros serialize the method field correctly"
);
}

View File
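
The large deletion above works because `#[serde(untagged)]` on `OutgoingMessage` makes each variant serialize as its payload alone, and the typed payloads already have the JSON-RPC wire shape, so the hand-written `From<OutgoingMessage> for JSONRPCMessage` conversion became dead weight. A standalone illustration of untagged serialization:

use serde::Serialize;

#[derive(Serialize)]
struct Notification {
    method: String,
    params: serde_json::Value,
}

#[derive(Serialize)]
struct Response {
    id: i64,
    result: serde_json::Value,
}

// Untagged: the variant name never appears in the output; each variant
// serializes as its inner value, which is exactly the wire shape.
#[derive(Serialize)]
#[serde(untagged)]
enum Outgoing {
    Notification(Notification),
    Response(Response),
}

fn main() {
    let n = Outgoing::Notification(Notification {
        method: "loginChatGptComplete".into(),
        params: serde_json::json!({ "success": true }),
    });
    // Prints: {"method":"loginChatGptComplete","params":{"success":true}}
    println!("{}", serde_json::to_string(&n).unwrap());
}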

@@ -9,8 +9,7 @@ path = "lib.rs"
[dependencies]
anyhow = { workspace = true }
assert_cmd = { workspace = true }
codex-protocol = { workspace = true }
mcp-types = { workspace = true }
codex-app-server-protocol = { workspace = true }
serde = { workspace = true }
serde_json = { workspace = true }
tokio = { workspace = true, features = [

View File

@@ -2,8 +2,8 @@ mod mcp_process;
mod mock_model_server;
mod responses;
use codex_app_server_protocol::JSONRPCResponse;
pub use mcp_process::McpProcess;
use mcp_types::JSONRPCResponse;
pub use mock_model_server::create_mock_chat_completions_server;
pub use responses::create_apply_patch_sse_response;
pub use responses::create_final_assistant_message_sse_response;

View File

@@ -1,3 +1,4 @@
use std::collections::VecDeque;
use std::path::Path;
use std::process::Stdio;
use std::sync::atomic::AtomicI64;
@@ -11,29 +12,30 @@ use tokio::process::ChildStdout;
use anyhow::Context;
use assert_cmd::prelude::*;
use codex_protocol::mcp_protocol::AddConversationListenerParams;
use codex_protocol::mcp_protocol::ArchiveConversationParams;
use codex_protocol::mcp_protocol::CancelLoginChatGptParams;
use codex_protocol::mcp_protocol::ClientInfo;
use codex_protocol::mcp_protocol::ClientNotification;
use codex_protocol::mcp_protocol::GetAuthStatusParams;
use codex_protocol::mcp_protocol::InitializeParams;
use codex_protocol::mcp_protocol::InterruptConversationParams;
use codex_protocol::mcp_protocol::ListConversationsParams;
use codex_protocol::mcp_protocol::LoginApiKeyParams;
use codex_protocol::mcp_protocol::NewConversationParams;
use codex_protocol::mcp_protocol::RemoveConversationListenerParams;
use codex_protocol::mcp_protocol::ResumeConversationParams;
use codex_protocol::mcp_protocol::SendUserMessageParams;
use codex_protocol::mcp_protocol::SendUserTurnParams;
use codex_protocol::mcp_protocol::SetDefaultModelParams;
use codex_app_server_protocol::AddConversationListenerParams;
use codex_app_server_protocol::ArchiveConversationParams;
use codex_app_server_protocol::CancelLoginChatGptParams;
use codex_app_server_protocol::ClientInfo;
use codex_app_server_protocol::ClientNotification;
use codex_app_server_protocol::GetAuthStatusParams;
use codex_app_server_protocol::InitializeParams;
use codex_app_server_protocol::InterruptConversationParams;
use codex_app_server_protocol::ListConversationsParams;
use codex_app_server_protocol::LoginApiKeyParams;
use codex_app_server_protocol::NewConversationParams;
use codex_app_server_protocol::RemoveConversationListenerParams;
use codex_app_server_protocol::ResumeConversationParams;
use codex_app_server_protocol::SendUserMessageParams;
use codex_app_server_protocol::SendUserTurnParams;
use codex_app_server_protocol::ServerRequest;
use codex_app_server_protocol::SetDefaultModelParams;
use mcp_types::JSONRPC_VERSION;
use mcp_types::JSONRPCMessage;
use mcp_types::JSONRPCNotification;
use mcp_types::JSONRPCRequest;
use mcp_types::JSONRPCResponse;
use mcp_types::RequestId;
use codex_app_server_protocol::JSONRPCError;
use codex_app_server_protocol::JSONRPCMessage;
use codex_app_server_protocol::JSONRPCNotification;
use codex_app_server_protocol::JSONRPCRequest;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::RequestId;
use std::process::Command as StdCommand;
use tokio::process::Command;
@@ -46,6 +48,7 @@ pub struct McpProcess {
process: Child,
stdin: ChildStdin,
stdout: BufReader<ChildStdout>,
pending_user_messages: VecDeque<JSONRPCNotification>,
}
impl McpProcess {
@@ -116,6 +119,7 @@ impl McpProcess {
process,
stdin,
stdout,
pending_user_messages: VecDeque::new(),
})
}
@@ -317,7 +321,6 @@ impl McpProcess {
let request_id = self.next_request_id.fetch_add(1, Ordering::Relaxed);
let message = JSONRPCMessage::Request(JSONRPCRequest {
jsonrpc: JSONRPC_VERSION.into(),
id: RequestId::Integer(request_id),
method: method.to_string(),
params,
@@ -331,12 +334,8 @@ impl McpProcess {
id: RequestId,
result: serde_json::Value,
) -> anyhow::Result<()> {
self.send_jsonrpc_message(JSONRPCMessage::Response(JSONRPCResponse {
jsonrpc: JSONRPC_VERSION.into(),
id,
result,
}))
.await
self.send_jsonrpc_message(JSONRPCMessage::Response(JSONRPCResponse { id, result }))
.await
}
pub async fn send_notification(
@@ -345,7 +344,6 @@ impl McpProcess {
) -> anyhow::Result<()> {
let value = serde_json::to_value(notification)?;
self.send_jsonrpc_message(JSONRPCMessage::Notification(JSONRPCNotification {
jsonrpc: JSONRPC_VERSION.into(),
method: value
.get("method")
.and_then(|m| m.as_str())
@@ -373,18 +371,21 @@ impl McpProcess {
Ok(message)
}
pub async fn read_stream_until_request_message(&mut self) -> anyhow::Result<JSONRPCRequest> {
pub async fn read_stream_until_request_message(&mut self) -> anyhow::Result<ServerRequest> {
eprintln!("in read_stream_until_request_message()");
loop {
let message = self.read_jsonrpc_message().await?;
match message {
JSONRPCMessage::Notification(_) => {
eprintln!("notification: {message:?}");
JSONRPCMessage::Notification(notification) => {
eprintln!("notification: {notification:?}");
self.enqueue_user_message(notification);
}
JSONRPCMessage::Request(jsonrpc_request) => {
return Ok(jsonrpc_request);
return jsonrpc_request.try_into().with_context(
|| "failed to deserialize ServerRequest from JSONRPCRequest",
);
}
JSONRPCMessage::Error(_) => {
anyhow::bail!("unexpected JSONRPCMessage::Error: {message:?}");
@@ -405,8 +406,9 @@ impl McpProcess {
loop {
let message = self.read_jsonrpc_message().await?;
match message {
JSONRPCMessage::Notification(_) => {
eprintln!("notification: {message:?}");
JSONRPCMessage::Notification(notification) => {
eprintln!("notification: {notification:?}");
self.enqueue_user_message(notification);
}
JSONRPCMessage::Request(_) => {
anyhow::bail!("unexpected JSONRPCMessage::Request: {message:?}");
@@ -426,12 +428,13 @@ impl McpProcess {
pub async fn read_stream_until_error_message(
&mut self,
request_id: RequestId,
) -> anyhow::Result<mcp_types::JSONRPCError> {
) -> anyhow::Result<JSONRPCError> {
loop {
let message = self.read_jsonrpc_message().await?;
match message {
JSONRPCMessage::Notification(_) => {
eprintln!("notification: {message:?}");
JSONRPCMessage::Notification(notification) => {
eprintln!("notification: {notification:?}");
self.enqueue_user_message(notification);
}
JSONRPCMessage::Request(_) => {
anyhow::bail!("unexpected JSONRPCMessage::Request: {message:?}");
@@ -454,6 +457,10 @@ impl McpProcess {
) -> anyhow::Result<JSONRPCNotification> {
eprintln!("in read_stream_until_notification_message({method})");
if let Some(notification) = self.take_pending_notification_by_method(method) {
return Ok(notification);
}
loop {
let message = self.read_jsonrpc_message().await?;
match message {
@@ -461,6 +468,7 @@ impl McpProcess {
if notification.method == method {
return Ok(notification);
}
self.enqueue_user_message(notification);
}
JSONRPCMessage::Request(_) => {
anyhow::bail!("unexpected JSONRPCMessage::Request: {message:?}");
@@ -474,4 +482,21 @@ impl McpProcess {
}
}
}
fn take_pending_notification_by_method(&mut self, method: &str) -> Option<JSONRPCNotification> {
if let Some(pos) = self
.pending_user_messages
.iter()
.position(|notification| notification.method == method)
{
return self.pending_user_messages.remove(pos);
}
None
}
fn enqueue_user_message(&mut self, notification: JSONRPCNotification) {
if notification.method == "codex/event/user_message" {
self.pending_user_messages.push_back(notification);
}
}
}

View File
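
The new `pending_user_messages` queue stops `codex/event/user_message` notifications from being lost when they arrive while the harness is waiting for a different message: they are buffered on the way past, and later lookups drain the buffer before reading the stream again. The buffering logic in isolation:

use std::collections::VecDeque;

#[derive(Debug)]
struct Notification {
    method: String,
}

struct Inbox {
    pending: VecDeque<Notification>,
}

impl Inbox {
    // Only user_message events are retained; anything else is either the
    // message being waited on or intentionally dropped, as in the tests above.
    fn enqueue(&mut self, n: Notification) {
        if n.method == "codex/event/user_message" {
            self.pending.push_back(n);
        }
    }

    // Serve a buffered notification by method before touching the stream.
    fn take_by_method(&mut self, method: &str) -> Option<Notification> {
        let pos = self.pending.iter().position(|n| n.method == method)?;
        self.pending.remove(pos)
    }
}

fn main() {
    let mut inbox = Inbox { pending: VecDeque::new() };
    inbox.enqueue(Notification { method: "codex/event/user_message".into() });
    inbox.enqueue(Notification { method: "codex/event/task_complete".into() });
    assert!(inbox.take_by_method("codex/event/user_message").is_some());
    assert!(inbox.take_by_method("codex/event/user_message").is_none());
}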

@@ -2,13 +2,13 @@ use std::path::Path;
use app_test_support::McpProcess;
use app_test_support::to_response;
use codex_app_server_protocol::ArchiveConversationParams;
use codex_app_server_protocol::ArchiveConversationResponse;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::NewConversationParams;
use codex_app_server_protocol::NewConversationResponse;
use codex_app_server_protocol::RequestId;
use codex_core::ARCHIVED_SESSIONS_SUBDIR;
use codex_protocol::mcp_protocol::ArchiveConversationParams;
use codex_protocol::mcp_protocol::ArchiveConversationResponse;
use codex_protocol::mcp_protocol::NewConversationParams;
use codex_protocol::mcp_protocol::NewConversationResponse;
use mcp_types::JSONRPCResponse;
use mcp_types::RequestId;
use tempfile::TempDir;
use tokio::time::timeout;

View File

@@ -2,13 +2,13 @@ use std::path::Path;
use app_test_support::McpProcess;
use app_test_support::to_response;
use codex_protocol::mcp_protocol::AuthMode;
use codex_protocol::mcp_protocol::GetAuthStatusParams;
use codex_protocol::mcp_protocol::GetAuthStatusResponse;
use codex_protocol::mcp_protocol::LoginApiKeyParams;
use codex_protocol::mcp_protocol::LoginApiKeyResponse;
use mcp_types::JSONRPCResponse;
use mcp_types::RequestId;
use codex_app_server_protocol::AuthMode;
use codex_app_server_protocol::GetAuthStatusParams;
use codex_app_server_protocol::GetAuthStatusResponse;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::LoginApiKeyParams;
use codex_app_server_protocol::LoginApiKeyResponse;
use codex_app_server_protocol::RequestId;
use pretty_assertions::assert_eq;
use tempfile::TempDir;
use tokio::time::timeout;

View File

@@ -5,25 +5,31 @@ use app_test_support::create_final_assistant_message_sse_response;
use app_test_support::create_mock_chat_completions_server;
use app_test_support::create_shell_sse_response;
use app_test_support::to_response;
use codex_app_server_protocol::AddConversationListenerParams;
use codex_app_server_protocol::AddConversationSubscriptionResponse;
use codex_app_server_protocol::ExecCommandApprovalParams;
use codex_app_server_protocol::InputItem;
use codex_app_server_protocol::JSONRPCNotification;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::NewConversationParams;
use codex_app_server_protocol::NewConversationResponse;
use codex_app_server_protocol::RemoveConversationListenerParams;
use codex_app_server_protocol::RemoveConversationSubscriptionResponse;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::SendUserMessageParams;
use codex_app_server_protocol::SendUserMessageResponse;
use codex_app_server_protocol::SendUserTurnParams;
use codex_app_server_protocol::SendUserTurnResponse;
use codex_app_server_protocol::ServerRequest;
use codex_core::protocol::AskForApproval;
use codex_core::protocol::SandboxPolicy;
use codex_core::protocol_config_types::ReasoningEffort;
use codex_core::protocol_config_types::ReasoningSummary;
use codex_core::spawn::CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR;
use codex_protocol::mcp_protocol::AddConversationListenerParams;
use codex_protocol::mcp_protocol::AddConversationSubscriptionResponse;
use codex_protocol::mcp_protocol::EXEC_COMMAND_APPROVAL_METHOD;
use codex_protocol::mcp_protocol::NewConversationParams;
use codex_protocol::mcp_protocol::NewConversationResponse;
use codex_protocol::mcp_protocol::RemoveConversationListenerParams;
use codex_protocol::mcp_protocol::RemoveConversationSubscriptionResponse;
use codex_protocol::mcp_protocol::SendUserMessageParams;
use codex_protocol::mcp_protocol::SendUserMessageResponse;
use codex_protocol::mcp_protocol::SendUserTurnParams;
use codex_protocol::mcp_protocol::SendUserTurnResponse;
use mcp_types::JSONRPCNotification;
use mcp_types::JSONRPCResponse;
use mcp_types::RequestId;
use codex_protocol::config_types::SandboxMode;
use codex_protocol::protocol::Event;
use codex_protocol::protocol::EventMsg;
use codex_protocol::protocol::InputMessageKind;
use pretty_assertions::assert_eq;
use std::env;
use tempfile::TempDir;
@@ -115,7 +121,7 @@ async fn test_codex_jsonrpc_conversation_flow() {
let send_user_id = mcp
.send_send_user_message_request(SendUserMessageParams {
conversation_id,
items: vec![codex_protocol::mcp_protocol::InputItem::Text {
items: vec![codex_app_server_protocol::InputItem::Text {
text: "text".to_string(),
}],
})
@@ -265,7 +271,7 @@ async fn test_send_user_turn_changes_approval_policy_behavior() {
let send_user_id = mcp
.send_send_user_message_request(SendUserMessageParams {
conversation_id,
items: vec![codex_protocol::mcp_protocol::InputItem::Text {
items: vec![codex_app_server_protocol::InputItem::Text {
text: "run python".to_string(),
}],
})
@@ -290,11 +296,28 @@ async fn test_send_user_turn_changes_approval_policy_behavior() {
.await
.expect("waiting for exec approval request timeout")
.expect("exec approval request");
assert_eq!(request.method, EXEC_COMMAND_APPROVAL_METHOD);
let ServerRequest::ExecCommandApproval { request_id, params } = request else {
panic!("expected ExecCommandApproval request, got: {request:?}");
};
assert_eq!(
ExecCommandApprovalParams {
conversation_id,
call_id: "call1".to_string(),
command: vec![
"python3".to_string(),
"-c".to_string(),
"print(42)".to_string(),
],
cwd: working_directory.clone(),
reason: None,
},
params
);
// Approve so the first turn can complete
mcp.send_response(
request.id,
request_id,
serde_json::json!({ "decision": codex_core::protocol::ReviewDecision::Approved }),
)
.await
@@ -313,7 +336,7 @@ async fn test_send_user_turn_changes_approval_policy_behavior() {
let send_turn_id = mcp
.send_send_user_turn_request(SendUserTurnParams {
conversation_id,
items: vec![codex_protocol::mcp_protocol::InputItem::Text {
items: vec![codex_app_server_protocol::InputItem::Text {
text: "run python again".to_string(),
}],
cwd: working_directory.clone(),
@@ -349,6 +372,234 @@ async fn test_send_user_turn_changes_approval_policy_behavior() {
}
// Helper: minimal config.toml pointing at mock provider.
#[tokio::test(flavor = "multi_thread", worker_threads = 4)]
async fn test_send_user_turn_updates_sandbox_and_cwd_between_turns() {
if env::var(CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR).is_ok() {
println!(
"Skipping test because it cannot execute when network is disabled in a Codex sandbox."
);
return;
}
let tmp = TempDir::new().expect("tmp dir");
let codex_home = tmp.path().join("codex_home");
std::fs::create_dir(&codex_home).expect("create codex home dir");
let workspace_root = tmp.path().join("workspace");
std::fs::create_dir(&workspace_root).expect("create workspace root");
let first_cwd = workspace_root.join("turn1");
let second_cwd = workspace_root.join("turn2");
std::fs::create_dir(&first_cwd).expect("create first cwd");
std::fs::create_dir(&second_cwd).expect("create second cwd");
let responses = vec![
create_shell_sse_response(
vec![
"bash".to_string(),
"-lc".to_string(),
"echo first turn".to_string(),
],
None,
Some(5000),
"call-first",
)
.expect("create first shell response"),
create_final_assistant_message_sse_response("done first")
.expect("create first final assistant message"),
create_shell_sse_response(
vec![
"bash".to_string(),
"-lc".to_string(),
"echo second turn".to_string(),
],
None,
Some(5000),
"call-second",
)
.expect("create second shell response"),
create_final_assistant_message_sse_response("done second")
.expect("create second final assistant message"),
];
let server = create_mock_chat_completions_server(responses).await;
create_config_toml(&codex_home, &server.uri()).expect("write config");
let mut mcp = McpProcess::new(&codex_home)
.await
.expect("spawn mcp process");
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize())
.await
.expect("init timeout")
.expect("init failed");
let new_conv_id = mcp
.send_new_conversation_request(NewConversationParams {
cwd: Some(first_cwd.to_string_lossy().into_owned()),
approval_policy: Some(AskForApproval::Never),
sandbox: Some(SandboxMode::WorkspaceWrite),
..Default::default()
})
.await
.expect("send newConversation");
let new_conv_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(new_conv_id)),
)
.await
.expect("newConversation timeout")
.expect("newConversation resp");
let NewConversationResponse {
conversation_id,
model,
..
} = to_response::<NewConversationResponse>(new_conv_resp)
.expect("deserialize newConversation response");
let add_listener_id = mcp
.send_add_conversation_listener_request(AddConversationListenerParams { conversation_id })
.await
.expect("send addConversationListener");
timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(add_listener_id)),
)
.await
.expect("addConversationListener timeout")
.expect("addConversationListener resp");
let first_turn_id = mcp
.send_send_user_turn_request(SendUserTurnParams {
conversation_id,
items: vec![InputItem::Text {
text: "first turn".to_string(),
}],
cwd: first_cwd.clone(),
approval_policy: AskForApproval::Never,
sandbox_policy: SandboxPolicy::WorkspaceWrite {
writable_roots: vec![first_cwd.clone()],
network_access: false,
exclude_tmpdir_env_var: false,
exclude_slash_tmp: false,
},
model: model.clone(),
effort: Some(ReasoningEffort::Medium),
summary: ReasoningSummary::Auto,
})
.await
.expect("send first sendUserTurn");
timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(first_turn_id)),
)
.await
.expect("sendUserTurn 1 timeout")
.expect("sendUserTurn 1 resp");
timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("codex/event/task_complete"),
)
.await
.expect("task_complete 1 timeout")
.expect("task_complete 1 notification");
let second_turn_id = mcp
.send_send_user_turn_request(SendUserTurnParams {
conversation_id,
items: vec![InputItem::Text {
text: "second turn".to_string(),
}],
cwd: second_cwd.clone(),
approval_policy: AskForApproval::Never,
sandbox_policy: SandboxPolicy::DangerFullAccess,
model: model.clone(),
effort: Some(ReasoningEffort::Medium),
summary: ReasoningSummary::Auto,
})
.await
.expect("send second sendUserTurn");
timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(second_turn_id)),
)
.await
.expect("sendUserTurn 2 timeout")
.expect("sendUserTurn 2 resp");
let mut env_message: Option<String> = None;
let second_cwd_str = second_cwd.to_string_lossy().into_owned();
for _ in 0..10 {
let notification = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("codex/event/user_message"),
)
.await
.expect("user_message timeout")
.expect("user_message notification");
let params = notification
.params
.clone()
.expect("user_message should include params");
let event: Event = serde_json::from_value(params).expect("deserialize user_message event");
if let EventMsg::UserMessage(user) = event.msg
&& matches!(user.kind, Some(InputMessageKind::EnvironmentContext))
&& user.message.contains(&second_cwd_str)
{
env_message = Some(user.message);
break;
}
}
let env_message = env_message.expect("expected environment context update");
assert!(
env_message.contains("<sandbox_mode>danger-full-access</sandbox_mode>"),
"env context should reflect new sandbox mode: {env_message}"
);
assert!(
env_message.contains("<network_access>enabled</network_access>"),
"env context should enable network access for danger-full-access policy: {env_message}"
);
assert!(
env_message.contains(&second_cwd_str),
"env context should include updated cwd: {env_message}"
);
let exec_begin_notification = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("codex/event/exec_command_begin"),
)
.await
.expect("exec_command_begin timeout")
.expect("exec_command_begin notification");
let params = exec_begin_notification
.params
.clone()
.expect("exec_command_begin params");
let event: Event = serde_json::from_value(params).expect("deserialize exec begin event");
let exec_begin = match event.msg {
EventMsg::ExecCommandBegin(exec_begin) => exec_begin,
other => panic!("expected ExecCommandBegin event, got {other:?}"),
};
assert_eq!(
exec_begin.cwd, second_cwd,
"exec turn should run from updated cwd"
);
assert_eq!(
exec_begin.command,
vec![
"bash".to_string(),
"-lc".to_string(),
"echo second turn".to_string()
],
"exec turn should run expected command"
);
timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("codex/event/task_complete"),
)
.await
.expect("task_complete 2 timeout")
.expect("task_complete 2 notification");
}
fn create_config_toml(codex_home: &Path, server_uri: &str) -> std::io::Result<()> {
let config_toml = codex_home.join("config.toml");
std::fs::write(

View File
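
The new two-turn test drives the same conversation with two different sandbox policies and working directories, then asserts that the second turn's environment-context message reflects the switch. Its checks reduce to a predicate like this (tag strings taken from the test itself):

fn reflects_second_turn(message: &str, second_cwd: &str) -> bool {
    message.contains("<sandbox_mode>danger-full-access</sandbox_mode>")
        && message.contains("<network_access>enabled</network_access>")
        && message.contains(second_cwd)
}

fn main() {
    let msg = "<sandbox_mode>danger-full-access</sandbox_mode> \
               <network_access>enabled</network_access> in /work/turn2";
    assert!(reflects_second_turn(msg, "/work/turn2"));
}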

@@ -3,18 +3,18 @@ use std::path::Path;
use app_test_support::McpProcess;
use app_test_support::to_response;
use codex_app_server_protocol::GetUserSavedConfigResponse;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::Profile;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::SandboxSettings;
use codex_app_server_protocol::Tools;
use codex_app_server_protocol::UserSavedConfig;
use codex_core::protocol::AskForApproval;
use codex_protocol::config_types::ReasoningEffort;
use codex_protocol::config_types::ReasoningSummary;
use codex_protocol::config_types::SandboxMode;
use codex_protocol::config_types::Verbosity;
use codex_protocol::mcp_protocol::GetUserSavedConfigResponse;
use codex_protocol::mcp_protocol::Profile;
use codex_protocol::mcp_protocol::SandboxSettings;
use codex_protocol::mcp_protocol::Tools;
use codex_protocol::mcp_protocol::UserSavedConfig;
use mcp_types::JSONRPCResponse;
use mcp_types::RequestId;
use pretty_assertions::assert_eq;
use tempfile::TempDir;
use tokio::time::timeout;

View File

@@ -4,15 +4,15 @@ use app_test_support::McpProcess;
use app_test_support::create_final_assistant_message_sse_response;
use app_test_support::create_mock_chat_completions_server;
use app_test_support::to_response;
use codex_protocol::mcp_protocol::AddConversationListenerParams;
use codex_protocol::mcp_protocol::AddConversationSubscriptionResponse;
use codex_protocol::mcp_protocol::InputItem;
use codex_protocol::mcp_protocol::NewConversationParams;
use codex_protocol::mcp_protocol::NewConversationResponse;
use codex_protocol::mcp_protocol::SendUserMessageParams;
use codex_protocol::mcp_protocol::SendUserMessageResponse;
use mcp_types::JSONRPCResponse;
use mcp_types::RequestId;
use codex_app_server_protocol::AddConversationListenerParams;
use codex_app_server_protocol::AddConversationSubscriptionResponse;
use codex_app_server_protocol::InputItem;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::NewConversationParams;
use codex_app_server_protocol::NewConversationResponse;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::SendUserMessageParams;
use codex_app_server_protocol::SendUserMessageResponse;
use pretty_assertions::assert_eq;
use serde_json::json;
use tempfile::TempDir;

View File

@@ -1,6 +1,8 @@
use anyhow::Context;
use anyhow::Result;
use app_test_support::McpProcess;
use mcp_types::JSONRPCResponse;
use mcp_types::RequestId;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::RequestId;
use pretty_assertions::assert_eq;
use serde_json::json;
use tempfile::TempDir;
@@ -9,30 +11,41 @@ use tokio::time::timeout;
const DEFAULT_READ_TIMEOUT: std::time::Duration = std::time::Duration::from_secs(10);
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn test_fuzzy_file_search_sorts_and_includes_indices() {
async fn test_fuzzy_file_search_sorts_and_includes_indices() -> Result<()> {
// Prepare a temporary Codex home and a separate root with test files.
let codex_home = TempDir::new().expect("create temp codex home");
let root = TempDir::new().expect("create temp search root");
let codex_home = TempDir::new().context("create temp codex home")?;
let root = TempDir::new().context("create temp search root")?;
// Create files designed to have deterministic ordering for query "abc".
std::fs::write(root.path().join("abc"), "x").expect("write file abc");
std::fs::write(root.path().join("abcde"), "x").expect("write file abcx");
std::fs::write(root.path().join("abexy"), "x").expect("write file abcx");
std::fs::write(root.path().join("zzz.txt"), "x").expect("write file zzz");
// Create files designed to have deterministic ordering for query "abe".
std::fs::write(root.path().join("abc"), "x").context("write file abc")?;
std::fs::write(root.path().join("abcde"), "x").context("write file abcde")?;
std::fs::write(root.path().join("abexy"), "x").context("write file abexy")?;
std::fs::write(root.path().join("zzz.txt"), "x").context("write file zzz")?;
let sub_dir = root.path().join("sub");
std::fs::create_dir_all(&sub_dir).context("create sub dir")?;
let sub_abce_path = sub_dir.join("abce");
std::fs::write(&sub_abce_path, "x").context("write file sub/abce")?;
let sub_abce_rel = sub_abce_path
.strip_prefix(root.path())
.context("strip root prefix from sub/abce")?
.to_string_lossy()
.to_string();
// Start MCP server and initialize.
let mut mcp = McpProcess::new(codex_home.path()).await.expect("spawn mcp");
let mut mcp = McpProcess::new(codex_home.path())
.await
.context("spawn mcp")?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize())
.await
.expect("init timeout")
.expect("init failed");
.context("init timeout")?
.context("init failed")?;
let root_path = root.path().to_string_lossy().to_string();
// Send fuzzyFileSearch request.
let request_id = mcp
.send_fuzzy_file_search_request("abe", vec![root_path.clone()], None)
.await
.expect("send fuzzyFileSearch");
.context("send fuzzyFileSearch")?;
// Read response and verify shape and ordering.
let resp: JSONRPCResponse = timeout(
@@ -40,39 +53,65 @@ async fn test_fuzzy_file_search_sorts_and_includes_indices() {
mcp.read_stream_until_response_message(RequestId::Integer(request_id)),
)
.await
.expect("fuzzyFileSearch timeout")
.expect("fuzzyFileSearch resp");
.context("fuzzyFileSearch timeout")?
.context("fuzzyFileSearch resp")?;
let value = resp.result;
// The path separator on Windows affects the score.
let expected_score = if cfg!(windows) { 69 } else { 72 };
assert_eq!(
value,
json!({
"files": [
{ "root": root_path.clone(), "path": "abexy", "score": 88, "indices": [0, 1, 2] },
{ "root": root_path.clone(), "path": "abcde", "score": 74, "indices": [0, 1, 4] },
{
"root": root_path.clone(),
"path": "abexy",
"file_name": "abexy",
"score": 88,
"indices": [0, 1, 2],
},
{
"root": root_path.clone(),
"path": "abcde",
"file_name": "abcde",
"score": 74,
"indices": [0, 1, 4],
},
{
"root": root_path.clone(),
"path": sub_abce_rel,
"file_name": "abce",
"score": expected_score,
"indices": [4, 5, 7],
},
]
})
);
Ok(())
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn test_fuzzy_file_search_accepts_cancellation_token() {
let codex_home = TempDir::new().expect("create temp codex home");
let root = TempDir::new().expect("create temp search root");
async fn test_fuzzy_file_search_accepts_cancellation_token() -> Result<()> {
let codex_home = TempDir::new().context("create temp codex home")?;
let root = TempDir::new().context("create temp search root")?;
std::fs::write(root.path().join("alpha.txt"), "contents").expect("write alpha");
std::fs::write(root.path().join("alpha.txt"), "contents").context("write alpha")?;
let mut mcp = McpProcess::new(codex_home.path()).await.expect("spawn mcp");
let mut mcp = McpProcess::new(codex_home.path())
.await
.context("spawn mcp")?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize())
.await
.expect("init timeout")
.expect("init failed");
.context("init timeout")?
.context("init failed")?;
let root_path = root.path().to_string_lossy().to_string();
let request_id = mcp
.send_fuzzy_file_search_request("alp", vec![root_path.clone()], None)
.await
.expect("send fuzzyFileSearch");
.context("send fuzzyFileSearch")?;
let request_id_2 = mcp
.send_fuzzy_file_search_request(
@@ -81,24 +120,27 @@ async fn test_fuzzy_file_search_accepts_cancellation_token() {
Some(request_id.to_string()),
)
.await
.expect("send fuzzyFileSearch");
.context("send fuzzyFileSearch")?;
let resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(request_id_2)),
)
.await
.expect("fuzzyFileSearch timeout")
.expect("fuzzyFileSearch resp");
.context("fuzzyFileSearch timeout")?
.context("fuzzyFileSearch resp")?;
let files = resp
.result
.get("files")
.and_then(|value| value.as_array())
.cloned()
.expect("files array");
.context("files key missing")?
.as_array()
.context("files not array")?
.clone();
assert_eq!(files.len(), 1);
assert_eq!(files[0]["root"], root_path);
assert_eq!(files[0]["path"], "alpha.txt");
Ok(())
}

View File
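
These fuzzy-file-search tests also switch from `.expect(...)` panics to returning `anyhow::Result<()>`, labeling every fallible step with `.context(...)?` so a failure names the step instead of unwinding. A minimal template of the converted shape (the test body here is a placeholder):

use anyhow::Context;
use anyhow::Result;
use tempfile::TempDir;

#[tokio::test]
async fn example_converted_test() -> Result<()> {
    let dir = TempDir::new().context("create temp dir")?;
    std::fs::write(dir.path().join("alpha.txt"), "contents").context("write alpha")?;
    Ok(())
}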

@@ -3,17 +3,17 @@
use std::path::Path;
use codex_app_server_protocol::AddConversationListenerParams;
use codex_app_server_protocol::InterruptConversationParams;
use codex_app_server_protocol::InterruptConversationResponse;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::NewConversationParams;
use codex_app_server_protocol::NewConversationResponse;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::SendUserMessageParams;
use codex_app_server_protocol::SendUserMessageResponse;
use codex_core::protocol::TurnAbortReason;
use codex_protocol::mcp_protocol::AddConversationListenerParams;
use codex_protocol::mcp_protocol::InterruptConversationParams;
use codex_protocol::mcp_protocol::InterruptConversationResponse;
use codex_protocol::mcp_protocol::NewConversationParams;
use codex_protocol::mcp_protocol::NewConversationResponse;
use codex_protocol::mcp_protocol::SendUserMessageParams;
use codex_protocol::mcp_protocol::SendUserMessageResponse;
use core_test_support::skip_if_no_network;
use mcp_types::JSONRPCResponse;
use mcp_types::RequestId;
use tempfile::TempDir;
use tokio::time::timeout;
@@ -100,7 +100,7 @@ async fn shell_command_interruption() -> anyhow::Result<()> {
let send_user_id = mcp
.send_send_user_message_request(SendUserMessageParams {
conversation_id,
items: vec![codex_protocol::mcp_protocol::InputItem::Text {
items: vec![codex_app_server_protocol::InputItem::Text {
text: "run first sleep command".to_string(),
}],
})

View File

@@ -3,16 +3,16 @@ use std::path::Path;
use app_test_support::McpProcess;
use app_test_support::to_response;
use codex_protocol::mcp_protocol::ListConversationsParams;
use codex_protocol::mcp_protocol::ListConversationsResponse;
use codex_protocol::mcp_protocol::NewConversationParams; // reused for overrides shape
use codex_protocol::mcp_protocol::ResumeConversationParams;
use codex_protocol::mcp_protocol::ResumeConversationResponse;
use codex_protocol::mcp_protocol::ServerNotification;
use codex_protocol::mcp_protocol::SessionConfiguredNotification;
use mcp_types::JSONRPCNotification;
use mcp_types::JSONRPCResponse;
use mcp_types::RequestId;
use codex_app_server_protocol::JSONRPCNotification;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::ListConversationsParams;
use codex_app_server_protocol::ListConversationsResponse;
use codex_app_server_protocol::NewConversationParams; // reused for overrides shape
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::ResumeConversationParams;
use codex_app_server_protocol::ResumeConversationResponse;
use codex_app_server_protocol::ServerNotification;
use codex_app_server_protocol::SessionConfiguredNotification;
use pretty_assertions::assert_eq;
use serde_json::json;
use tempfile::TempDir;

View File

@@ -3,15 +3,15 @@ use std::time::Duration;
use app_test_support::McpProcess;
use app_test_support::to_response;
use codex_app_server_protocol::CancelLoginChatGptParams;
use codex_app_server_protocol::CancelLoginChatGptResponse;
use codex_app_server_protocol::GetAuthStatusParams;
use codex_app_server_protocol::GetAuthStatusResponse;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::LoginChatGptResponse;
use codex_app_server_protocol::LogoutChatGptResponse;
use codex_app_server_protocol::RequestId;
use codex_login::login_with_api_key;
use codex_protocol::mcp_protocol::CancelLoginChatGptParams;
use codex_protocol::mcp_protocol::CancelLoginChatGptResponse;
use codex_protocol::mcp_protocol::GetAuthStatusParams;
use codex_protocol::mcp_protocol::GetAuthStatusResponse;
use codex_protocol::mcp_protocol::LoginChatGptResponse;
use codex_protocol::mcp_protocol::LogoutChatGptResponse;
use mcp_types::JSONRPCResponse;
use mcp_types::RequestId;
use tempfile::TempDir;
use tokio::time::timeout;

View File

@@ -4,17 +4,17 @@ use app_test_support::McpProcess;
use app_test_support::create_final_assistant_message_sse_response;
use app_test_support::create_mock_chat_completions_server;
use app_test_support::to_response;
use codex_protocol::mcp_protocol::AddConversationListenerParams;
use codex_protocol::mcp_protocol::AddConversationSubscriptionResponse;
use codex_protocol::mcp_protocol::ConversationId;
use codex_protocol::mcp_protocol::InputItem;
use codex_protocol::mcp_protocol::NewConversationParams;
use codex_protocol::mcp_protocol::NewConversationResponse;
use codex_protocol::mcp_protocol::SendUserMessageParams;
use codex_protocol::mcp_protocol::SendUserMessageResponse;
use mcp_types::JSONRPCNotification;
use mcp_types::JSONRPCResponse;
use mcp_types::RequestId;
use codex_app_server_protocol::AddConversationListenerParams;
use codex_app_server_protocol::AddConversationSubscriptionResponse;
use codex_app_server_protocol::InputItem;
use codex_app_server_protocol::JSONRPCNotification;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::NewConversationParams;
use codex_app_server_protocol::NewConversationResponse;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::SendUserMessageParams;
use codex_app_server_protocol::SendUserMessageResponse;
use codex_protocol::ConversationId;
use pretty_assertions::assert_eq;
use tempfile::TempDir;
use tokio::time::timeout;

View File

@@ -2,11 +2,11 @@ use std::path::Path;
use app_test_support::McpProcess;
use app_test_support::to_response;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::SetDefaultModelParams;
use codex_app_server_protocol::SetDefaultModelResponse;
use codex_core::config::ConfigToml;
use codex_protocol::mcp_protocol::SetDefaultModelParams;
use codex_protocol::mcp_protocol::SetDefaultModelResponse;
use mcp_types::JSONRPCResponse;
use mcp_types::RequestId;
use pretty_assertions::assert_eq;
use tempfile::TempDir;
use tokio::time::timeout;

View File

@@ -1,8 +1,8 @@
use app_test_support::McpProcess;
use app_test_support::to_response;
use codex_protocol::mcp_protocol::GetUserAgentResponse;
use mcp_types::JSONRPCResponse;
use mcp_types::RequestId;
use codex_app_server_protocol::GetUserAgentResponse;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::RequestId;
use pretty_assertions::assert_eq;
use tempfile::TempDir;
use tokio::time::timeout;

View File

@@ -5,14 +5,14 @@ use app_test_support::McpProcess;
use app_test_support::to_response;
use base64::Engine;
use base64::engine::general_purpose::URL_SAFE_NO_PAD;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::UserInfoResponse;
use codex_core::auth::AuthDotJson;
use codex_core::auth::get_auth_file;
use codex_core::auth::write_auth_json;
use codex_core::token_data::IdTokenInfo;
use codex_core::token_data::TokenData;
use codex_protocol::mcp_protocol::UserInfoResponse;
use mcp_types::JSONRPCResponse;
use mcp_types::RequestId;
use pretty_assertions::assert_eq;
use serde_json::json;
use tempfile::TempDir;

View File

@@ -23,5 +23,6 @@ tree-sitter-bash = { workspace = true }
[dev-dependencies]
assert_cmd = { workspace = true }
assert_matches = { workspace = true }
pretty_assertions = { workspace = true }
tempfile = { workspace = true }

View File

@@ -843,6 +843,7 @@ pub fn print_summary(
#[cfg(test)]
mod tests {
use super::*;
use assert_matches::assert_matches;
use pretty_assertions::assert_eq;
use std::fs;
use std::string::ToString;
@@ -894,10 +895,10 @@ mod tests {
fn assert_not_match(script: &str) {
let args = args_bash(script);
assert!(matches!(
assert_matches!(
maybe_parse_apply_patch(&args),
MaybeApplyPatch::NotApplyPatch
));
);
}
#[test]
@@ -905,10 +906,10 @@ mod tests {
let patch = "*** Begin Patch\n*** Add File: foo\n+hi\n*** End Patch".to_string();
let args = vec![patch];
let dir = tempdir().unwrap();
assert!(matches!(
assert_matches!(
maybe_parse_apply_patch_verified(&args, dir.path()),
MaybeApplyPatchVerified::CorrectnessError(ApplyPatchError::ImplicitInvocation)
));
);
}
#[test]
@@ -916,10 +917,10 @@ mod tests {
let script = "*** Begin Patch\n*** Add File: foo\n+hi\n*** End Patch";
let args = args_bash(script);
let dir = tempdir().unwrap();
assert!(matches!(
assert_matches!(
maybe_parse_apply_patch_verified(&args, dir.path()),
MaybeApplyPatchVerified::CorrectnessError(ApplyPatchError::ImplicitInvocation)
));
);
}
#[test]

View File
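
Swapping `assert!(matches!(..))` for `assert_matches!` is purely about failure output: on a mismatch the macro prints the actual value via `Debug` next to the expected pattern, where the old form only reported that a boolean was false. A standalone example:

use assert_matches::assert_matches;

#[derive(Debug)]
enum Parsed {
    NotApplyPatch,
    Body(String),
}

fn main() {
    let value = Parsed::NotApplyPatch;
    // On failure this prints the mismatched `value`, which
    // assert!(matches!(..)) never could.
    assert_matches!(value, Parsed::NotApplyPatch);
    let _ = Parsed::Body(String::new()); // silence the unused-variant lint
}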

@@ -29,7 +29,8 @@ pub async fn run_apply_command(
.parse_overrides()
.map_err(anyhow::Error::msg)?,
ConfigOverrides::default(),
)?;
)
.await?;
init_chatgpt_token_from_auth(&config.codex_home).await?;

View File
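
This `.await?` is the visible edge of `Config::load_with_cli_overrides` becoming async: every call site gains `.await`, and each enclosing function turns async in consequence, which is why the change fans out through the login and sandbox entry points below. Schematically, with a hypothetical stand-in loader:

use std::io;

struct Config;

// Hypothetical stand-in for the real loader, now async (e.g. so config.toml
// can be read without blocking the runtime).
async fn load_with_cli_overrides() -> io::Result<Config> {
    Ok(Config)
}

async fn run_apply() -> io::Result<()> {
    // Call sites change from `...)?` to `...).await?`.
    let _config = load_with_cli_overrides().await?;
    Ok(())
}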

@@ -28,9 +28,11 @@ codex-login = { workspace = true }
codex-mcp-server = { workspace = true }
codex-process-hardening = { workspace = true }
codex-protocol = { workspace = true }
codex-app-server-protocol = { workspace = true }
codex-protocol-ts = { workspace = true }
codex-responses-api-proxy = { workspace = true }
codex-tui = { workspace = true }
codex-rmcp-client = { workspace = true }
codex-cloud-tasks = { path = "../cloud-tasks" }
ctor = { workspace = true }
owo-colors = { workspace = true }
@@ -43,10 +45,9 @@ tokio = { workspace = true, features = [
"rt-multi-thread",
"signal",
] }
tracing = { workspace = true }
tracing-subscriber = { workspace = true }
[dev-dependencies]
assert_matches = { workspace = true }
assert_cmd = { workspace = true }
predicates = { workspace = true }
pretty_assertions = { workspace = true }

View File

@@ -73,7 +73,8 @@ async fn run_command_under_sandbox(
codex_linux_sandbox_exe,
..Default::default()
},
)?;
)
.await?;
// In practice, this should be `std::env::current_dir()` because this CLI
// does not support `--cwd`, but let's use the config value for consistency.

View File

@@ -1,7 +1,6 @@
pub mod debug_sandbox;
mod exit_status;
pub mod login;
pub mod proto;
use clap::Parser;
use codex_common::CliConfigOverrides;

View File

@@ -1,3 +1,4 @@
use codex_app_server_protocol::AuthMode;
use codex_common::CliConfigOverrides;
use codex_core::CodexAuth;
use codex_core::auth::CLIENT_ID;
@@ -8,7 +9,8 @@ use codex_core::config::ConfigOverrides;
use codex_login::ServerOptions;
use codex_login::run_device_code_login;
use codex_login::run_login_server;
use codex_protocol::mcp_protocol::AuthMode;
use std::io::IsTerminal;
use std::io::Read;
use std::path::PathBuf;
pub async fn login_with_chatgpt(codex_home: PathBuf) -> std::io::Result<()> {
@@ -24,7 +26,7 @@ pub async fn login_with_chatgpt(codex_home: PathBuf) -> std::io::Result<()> {
}
pub async fn run_login_with_chatgpt(cli_config_overrides: CliConfigOverrides) -> ! {
let config = load_config_or_exit(cli_config_overrides);
let config = load_config_or_exit(cli_config_overrides).await;
match login_with_chatgpt(config.codex_home).await {
Ok(_) => {
@@ -42,7 +44,7 @@ pub async fn run_login_with_api_key(
cli_config_overrides: CliConfigOverrides,
api_key: String,
) -> ! {
let config = load_config_or_exit(cli_config_overrides);
let config = load_config_or_exit(cli_config_overrides).await;
match login_with_api_key(&config.codex_home, &api_key) {
Ok(_) => {
@@ -56,13 +58,40 @@ pub async fn run_login_with_api_key(
}
}
pub fn read_api_key_from_stdin() -> String {
let mut stdin = std::io::stdin();
if stdin.is_terminal() {
eprintln!(
"--with-api-key expects the API key on stdin. Try piping it, e.g. `printenv OPENAI_API_KEY | codex login --with-api-key`."
);
std::process::exit(1);
}
eprintln!("Reading API key from stdin...");
let mut buffer = String::new();
if let Err(err) = stdin.read_to_string(&mut buffer) {
eprintln!("Failed to read API key from stdin: {err}");
std::process::exit(1);
}
let api_key = buffer.trim().to_string();
if api_key.is_empty() {
eprintln!("No API key provided via stdin.");
std::process::exit(1);
}
api_key
}
/// Login using the OAuth device code flow.
pub async fn run_login_with_device_code(
cli_config_overrides: CliConfigOverrides,
issuer_base_url: Option<String>,
client_id: Option<String>,
) -> ! {
let config = load_config_or_exit(cli_config_overrides);
let config = load_config_or_exit(cli_config_overrides).await;
let mut opts = ServerOptions::new(
config.codex_home,
client_id.unwrap_or(CLIENT_ID.to_string()),
@@ -83,7 +112,7 @@ pub async fn run_login_with_device_code(
}
pub async fn run_login_status(cli_config_overrides: CliConfigOverrides) -> ! {
let config = load_config_or_exit(cli_config_overrides);
let config = load_config_or_exit(cli_config_overrides).await;
match CodexAuth::from_codex_home(&config.codex_home) {
Ok(Some(auth)) => match auth.mode {
@@ -114,7 +143,7 @@ pub async fn run_login_status(cli_config_overrides: CliConfigOverrides) -> ! {
}
pub async fn run_logout(cli_config_overrides: CliConfigOverrides) -> ! {
let config = load_config_or_exit(cli_config_overrides);
let config = load_config_or_exit(cli_config_overrides).await;
match logout(&config.codex_home) {
Ok(true) => {
@@ -132,7 +161,7 @@ pub async fn run_logout(cli_config_overrides: CliConfigOverrides) -> ! {
}
}
fn load_config_or_exit(cli_config_overrides: CliConfigOverrides) -> Config {
async fn load_config_or_exit(cli_config_overrides: CliConfigOverrides) -> Config {
let cli_overrides = match cli_config_overrides.parse_overrides() {
Ok(v) => v,
Err(e) => {
@@ -142,7 +171,7 @@ fn load_config_or_exit(cli_config_overrides: CliConfigOverrides) -> Config {
};
let config_overrides = ConfigOverrides::default();
match Config::load_with_cli_overrides(cli_overrides, config_overrides) {
match Config::load_with_cli_overrides(cli_overrides, config_overrides).await {
Ok(config) => config,
Err(e) => {
eprintln!("Error loading configuration: {e}");

View File
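
`read_api_key_from_stdin` above refuses to block on a terminal, so `codex login --with-api-key` only accepts a piped key (`printenv OPENAI_API_KEY | codex login --with-api-key`). The same guard in a self-contained form that returns an error instead of exiting the process:

use std::io::IsTerminal;
use std::io::Read;

fn read_secret_from_stdin() -> Result<String, String> {
    let mut stdin = std::io::stdin();
    if stdin.is_terminal() {
        // Interactive stdin: reading would hang waiting for keyboard input.
        return Err("expected the secret on stdin; pipe it in".to_string());
    }
    let mut buf = String::new();
    stdin
        .read_to_string(&mut buf)
        .map_err(|e| format!("failed to read stdin: {e}"))?;
    let secret = buf.trim().to_string();
    if secret.is_empty() {
        return Err("no secret provided via stdin".to_string());
    }
    Ok(secret)
}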

@@ -7,12 +7,12 @@ use codex_chatgpt::apply_command::ApplyCommand;
use codex_chatgpt::apply_command::run_apply_command;
use codex_cli::LandlockCommand;
use codex_cli::SeatbeltCommand;
use codex_cli::login::read_api_key_from_stdin;
use codex_cli::login::run_login_status;
use codex_cli::login::run_login_with_api_key;
use codex_cli::login::run_login_with_chatgpt;
use codex_cli::login::run_login_with_device_code;
use codex_cli::login::run_logout;
use codex_cli::proto;
use codex_cloud_tasks::Cli as CloudTasksCli;
use codex_common::CliConfigOverrides;
use codex_exec::Cli as ExecCli;
@@ -26,7 +26,6 @@ use supports_color::Stream;
mod mcp_cmd;
use crate::mcp_cmd::McpCli;
use crate::proto::ProtoCli;
/// Codex CLI
///
@@ -74,15 +73,12 @@ enum Subcommand {
/// [experimental] Run the app server.
AppServer,
/// Run the Protocol stream via stdin/stdout
#[clap(visible_alias = "p")]
Proto(ProtoCli),
/// Generate shell completion scripts.
Completion(CompletionCommand),
/// Internal debugging commands.
Debug(DebugArgs),
/// Run commands within a Codex-provided sandbox.
#[clap(visible_alias = "debug")]
Sandbox(SandboxArgs),
/// Apply the latest diff produced by the Codex agent as a `git apply` to your local working tree.
#[clap(visible_alias = "a")]
@@ -126,18 +122,20 @@ struct ResumeCommand {
}
#[derive(Debug, Parser)]
struct DebugArgs {
struct SandboxArgs {
#[command(subcommand)]
cmd: DebugCommand,
cmd: SandboxCommand,
}
#[derive(Debug, clap::Subcommand)]
enum DebugCommand {
enum SandboxCommand {
/// Run a command under Seatbelt (macOS only).
Seatbelt(SeatbeltCommand),
#[clap(visible_alias = "seatbelt")]
Macos(SeatbeltCommand),
/// Run a command under Landlock+seccomp (Linux only).
Landlock(LandlockCommand),
#[clap(visible_alias = "landlock")]
Linux(LandlockCommand),
}
#[derive(Debug, Parser)]
@@ -145,7 +143,18 @@ struct LoginCommand {
#[clap(skip)]
config_overrides: CliConfigOverrides,
#[arg(long = "api-key", value_name = "API_KEY")]
#[arg(
long = "with-api-key",
help = "Read the API key from stdin (e.g. `printenv OPENAI_API_KEY | codex login --with-api-key`)"
)]
with_api_key: bool,
#[arg(
long = "api-key",
value_name = "API_KEY",
help = "(deprecated) Previously accepted the API key directly; now exits with guidance to use --with-api-key",
hide = true
)]
api_key: Option<String>,
/// EXPERIMENTAL: Use device code flow (not yet supported)
@@ -224,25 +233,12 @@ fn print_exit_messages(exit_info: AppExitInfo) {
}
}
pub(crate) const CODEX_SECURE_MODE_ENV_VAR: &str = "CODEX_SECURE_MODE";
/// As early as possible in the process lifecycle, apply hardening measures
/// if the CODEX_SECURE_MODE environment variable is set to "1".
/// As early as possible in the process lifecycle, apply hardening measures. We
/// skip this in debug builds to avoid interfering with debugging.
#[ctor::ctor]
#[cfg(not(debug_assertions))]
fn pre_main_hardening() {
let secure_mode = match std::env::var(CODEX_SECURE_MODE_ENV_VAR) {
Ok(value) => value,
Err(_) => return,
};
if secure_mode == "1" {
codex_process_hardening::pre_main_hardening();
}
// Always clear this env var so child processes don't inherit it.
unsafe {
std::env::remove_var(CODEX_SECURE_MODE_ENV_VAR);
}
codex_process_hardening::pre_main_hardening();
}
fn main() -> anyhow::Result<()> {
@@ -298,7 +294,8 @@ async fn cli_main(codex_linux_sandbox_exe: Option<PathBuf>) -> anyhow::Result<()
last,
config_overrides,
);
codex_tui::run_main(interactive, codex_linux_sandbox_exe).await?;
let exit_info = codex_tui::run_main(interactive, codex_linux_sandbox_exe).await?;
print_exit_messages(exit_info);
}
Some(Subcommand::Login(mut login_cli)) => {
prepend_config_flags(
@@ -317,7 +314,13 @@ async fn cli_main(codex_linux_sandbox_exe: Option<PathBuf>) -> anyhow::Result<()
login_cli.client_id,
)
.await;
} else if let Some(api_key) = login_cli.api_key {
} else if login_cli.api_key.is_some() {
eprintln!(
"The --api-key flag is no longer supported. Pipe the key instead, e.g. `printenv OPENAI_API_KEY | codex login --with-api-key`."
);
std::process::exit(1);
} else if login_cli.with_api_key {
let api_key = read_api_key_from_stdin();
run_login_with_api_key(login_cli.config_overrides, api_key).await;
} else {
run_login_with_chatgpt(login_cli.config_overrides).await;
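For reference, a minimal sketch of what a stdin-based reader like `read_api_key_from_stdin` might look like; the real implementation lives in `codex_cli::login`, so the body below is an assumption rather than the shipped code:

```rust
use std::io::Read;

// Hypothetical stand-in for codex_cli::login::read_api_key_from_stdin:
// read everything piped on stdin and trim surrounding whitespace.
fn read_api_key_from_stdin() -> String {
    let mut buf = String::new();
    std::io::stdin()
        .read_to_string(&mut buf)
        .expect("failed to read API key from stdin");
    buf.trim().to_string()
}
```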
@@ -332,13 +335,6 @@ async fn cli_main(codex_linux_sandbox_exe: Option<PathBuf>) -> anyhow::Result<()
);
run_logout(logout_cli.config_overrides).await;
}
Some(Subcommand::Proto(mut proto_cli)) => {
prepend_config_flags(
&mut proto_cli.config_overrides,
root_config_overrides.clone(),
);
proto::run_main(proto_cli).await?;
}
Some(Subcommand::Completion(completion_cli)) => {
print_completion(completion_cli);
}
@@ -349,8 +345,8 @@ async fn cli_main(codex_linux_sandbox_exe: Option<PathBuf>) -> anyhow::Result<()
);
codex_cloud_tasks::run_main(cloud_cli, codex_linux_sandbox_exe).await?;
}
Some(Subcommand::Debug(debug_args)) => match debug_args.cmd {
DebugCommand::Seatbelt(mut seatbelt_cli) => {
Some(Subcommand::Sandbox(sandbox_args)) => match sandbox_args.cmd {
SandboxCommand::Macos(mut seatbelt_cli) => {
prepend_config_flags(
&mut seatbelt_cli.config_overrides,
root_config_overrides.clone(),
@@ -361,7 +357,7 @@ async fn cli_main(codex_linux_sandbox_exe: Option<PathBuf>) -> anyhow::Result<()
)
.await?;
}
DebugCommand::Landlock(mut landlock_cli) => {
SandboxCommand::Linux(mut landlock_cli) => {
prepend_config_flags(
&mut landlock_cli.config_overrides,
root_config_overrides.clone(),
@@ -480,8 +476,9 @@ fn print_completion(cmd: CompletionCommand) {
#[cfg(test)]
mod tests {
use super::*;
use assert_matches::assert_matches;
use codex_core::protocol::TokenUsage;
use codex_protocol::mcp_protocol::ConversationId;
use codex_protocol::ConversationId;
fn finalize_from_args(args: &[&str]) -> TuiCli {
let cli = MultitoolCli::try_parse_from(args).expect("parse");
@@ -612,14 +609,14 @@ mod tests {
assert_eq!(interactive.model.as_deref(), Some("gpt-5-test"));
assert!(interactive.oss);
assert_eq!(interactive.config_profile.as_deref(), Some("my-profile"));
assert!(matches!(
assert_matches!(
interactive.sandbox_mode,
Some(codex_common::SandboxModeCliArg::WorkspaceWrite)
));
assert!(matches!(
);
assert_matches!(
interactive.approval_policy,
Some(codex_common::ApprovalModeCliArg::OnRequest)
));
);
assert!(interactive.full_auto);
assert_eq!(
interactive.cwd.as_deref(),

View File

@@ -12,6 +12,8 @@ use codex_core::config::load_global_mcp_servers;
use codex_core::config::write_global_mcp_servers;
use codex_core::config_types::McpServerConfig;
use codex_core::config_types::McpServerTransportConfig;
use codex_rmcp_client::delete_oauth_tokens;
use codex_rmcp_client::perform_oauth_login;
/// [experimental] Launch Codex as an MCP server or manage configured MCP servers.
///
@@ -43,6 +45,14 @@ pub enum McpSubcommand {
/// [experimental] Remove a global MCP server entry.
Remove(RemoveArgs),
/// [experimental] Authenticate with a configured MCP server via OAuth.
/// Requires experimental_use_rmcp_client = true in config.toml.
Login(LoginArgs),
/// [experimental] Remove stored OAuth credentials for a server.
/// Requires experimental_use_rmcp_client = true in config.toml.
Logout(LogoutArgs),
}
#[derive(Debug, clap::Parser)]
@@ -82,6 +92,18 @@ pub struct RemoveArgs {
pub name: String,
}
#[derive(Debug, clap::Parser)]
pub struct LoginArgs {
/// Name of the MCP server to authenticate with via OAuth.
pub name: String,
}
#[derive(Debug, clap::Parser)]
pub struct LogoutArgs {
/// Name of the MCP server to deauthenticate.
pub name: String,
}
impl McpCli {
pub async fn run(self) -> Result<()> {
let McpCli {
@@ -91,16 +113,22 @@ impl McpCli {
match subcommand {
McpSubcommand::List(args) => {
run_list(&config_overrides, args)?;
run_list(&config_overrides, args).await?;
}
McpSubcommand::Get(args) => {
run_get(&config_overrides, args)?;
run_get(&config_overrides, args).await?;
}
McpSubcommand::Add(args) => {
run_add(&config_overrides, args)?;
run_add(&config_overrides, args).await?;
}
McpSubcommand::Remove(args) => {
run_remove(&config_overrides, args)?;
run_remove(&config_overrides, args).await?;
}
McpSubcommand::Login(args) => {
run_login(&config_overrides, args).await?;
}
McpSubcommand::Logout(args) => {
run_logout(&config_overrides, args).await?;
}
}
@@ -108,7 +136,7 @@ impl McpCli {
}
}
fn run_add(config_overrides: &CliConfigOverrides, add_args: AddArgs) -> Result<()> {
async fn run_add(config_overrides: &CliConfigOverrides, add_args: AddArgs) -> Result<()> {
// Validate any provided overrides even though they are not currently applied.
config_overrides.parse_overrides().map_err(|e| anyhow!(e))?;
@@ -134,6 +162,7 @@ fn run_add(config_overrides: &CliConfigOverrides, add_args: AddArgs) -> Result<(
let codex_home = find_codex_home().context("failed to resolve CODEX_HOME")?;
let mut servers = load_global_mcp_servers(&codex_home)
.await
.with_context(|| format!("failed to load MCP servers from {}", codex_home.display()))?;
let new_entry = McpServerConfig {
@@ -156,7 +185,7 @@ fn run_add(config_overrides: &CliConfigOverrides, add_args: AddArgs) -> Result<(
Ok(())
}
fn run_remove(config_overrides: &CliConfigOverrides, remove_args: RemoveArgs) -> Result<()> {
async fn run_remove(config_overrides: &CliConfigOverrides, remove_args: RemoveArgs) -> Result<()> {
config_overrides.parse_overrides().map_err(|e| anyhow!(e))?;
let RemoveArgs { name } = remove_args;
@@ -165,6 +194,7 @@ fn run_remove(config_overrides: &CliConfigOverrides, remove_args: RemoveArgs) ->
let codex_home = find_codex_home().context("failed to resolve CODEX_HOME")?;
let mut servers = load_global_mcp_servers(&codex_home)
.await
.with_context(|| format!("failed to load MCP servers from {}", codex_home.display()))?;
let removed = servers.remove(&name).is_some();
@@ -183,9 +213,65 @@ fn run_remove(config_overrides: &CliConfigOverrides, remove_args: RemoveArgs) ->
Ok(())
}
fn run_list(config_overrides: &CliConfigOverrides, list_args: ListArgs) -> Result<()> {
async fn run_login(config_overrides: &CliConfigOverrides, login_args: LoginArgs) -> Result<()> {
let overrides = config_overrides.parse_overrides().map_err(|e| anyhow!(e))?;
let config = Config::load_with_cli_overrides(overrides, ConfigOverrides::default())
.await
.context("failed to load configuration")?;
if !config.use_experimental_use_rmcp_client {
bail!(
"OAuth login is only supported when experimental_use_rmcp_client is true in config.toml."
);
}
let LoginArgs { name } = login_args;
let Some(server) = config.mcp_servers.get(&name) else {
bail!("No MCP server named '{name}' found.");
};
let url = match &server.transport {
McpServerTransportConfig::StreamableHttp { url, .. } => url.clone(),
_ => bail!("OAuth login is only supported for streamable HTTP servers."),
};
perform_oauth_login(&name, &url).await?;
println!("Successfully logged in to MCP server '{name}'.");
Ok(())
}
async fn run_logout(config_overrides: &CliConfigOverrides, logout_args: LogoutArgs) -> Result<()> {
let overrides = config_overrides.parse_overrides().map_err(|e| anyhow!(e))?;
let config = Config::load_with_cli_overrides(overrides, ConfigOverrides::default())
.await
.context("failed to load configuration")?;
let LogoutArgs { name } = logout_args;
let server = config
.mcp_servers
.get(&name)
.ok_or_else(|| anyhow!("No MCP server named '{name}' found in configuration."))?;
let url = match &server.transport {
McpServerTransportConfig::StreamableHttp { url, .. } => url.clone(),
_ => bail!("OAuth logout is only supported for streamable_http transports."),
};
match delete_oauth_tokens(&name, &url) {
Ok(true) => println!("Removed OAuth credentials for '{name}'."),
Ok(false) => println!("No OAuth credentials stored for '{name}'."),
Err(err) => return Err(anyhow!("failed to delete OAuth credentials: {err}")),
}
Ok(())
}
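In practice the new flows are invoked as `codex mcp login <name>` and `codex mcp logout <name>`; both require `experimental_use_rmcp_client = true` in `config.toml` and only work for servers configured with a `streamable_http` transport, per the guards above.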
async fn run_list(config_overrides: &CliConfigOverrides, list_args: ListArgs) -> Result<()> {
let overrides = config_overrides.parse_overrides().map_err(|e| anyhow!(e))?;
let config = Config::load_with_cli_overrides(overrides, ConfigOverrides::default())
.await
.context("failed to load configuration")?;
let mut entries: Vec<_> = config.mcp_servers.iter().collect();
@@ -343,9 +429,10 @@ fn run_list(config_overrides: &CliConfigOverrides, list_args: ListArgs) -> Resul
Ok(())
}
fn run_get(config_overrides: &CliConfigOverrides, get_args: GetArgs) -> Result<()> {
async fn run_get(config_overrides: &CliConfigOverrides, get_args: GetArgs) -> Result<()> {
let overrides = config_overrides.parse_overrides().map_err(|e| anyhow!(e))?;
let config = Config::load_with_cli_overrides(overrides, ConfigOverrides::default())
.await
.context("failed to load configuration")?;
let Some(server) = config.mcp_servers.get(&get_args.name) else {

View File

@@ -1,133 +0,0 @@
use std::io::IsTerminal;
use clap::Parser;
use codex_common::CliConfigOverrides;
use codex_core::AuthManager;
use codex_core::ConversationManager;
use codex_core::NewConversation;
use codex_core::config::Config;
use codex_core::config::ConfigOverrides;
use codex_core::protocol::Event;
use codex_core::protocol::EventMsg;
use codex_core::protocol::Submission;
use tokio::io::AsyncBufReadExt;
use tokio::io::BufReader;
use tracing::error;
use tracing::info;
#[derive(Debug, Parser)]
pub struct ProtoCli {
#[clap(skip)]
pub config_overrides: CliConfigOverrides,
}
pub async fn run_main(opts: ProtoCli) -> anyhow::Result<()> {
if std::io::stdin().is_terminal() {
anyhow::bail!("Protocol mode expects stdin to be a pipe, not a terminal");
}
tracing_subscriber::fmt()
.with_writer(std::io::stderr)
.init();
let ProtoCli { config_overrides } = opts;
let overrides_vec = config_overrides
.parse_overrides()
.map_err(anyhow::Error::msg)?;
let config = Config::load_with_cli_overrides(overrides_vec, ConfigOverrides::default())?;
// Use conversation_manager API to start a conversation
let conversation_manager =
ConversationManager::new(AuthManager::shared(config.codex_home.clone()));
let NewConversation {
conversation_id: _,
conversation,
session_configured,
} = conversation_manager.new_conversation(config).await?;
// Simulate streaming the session_configured event.
let synthetic_event = Event {
// Fake id value.
id: "".to_string(),
msg: EventMsg::SessionConfigured(session_configured),
};
let session_configured_event = match serde_json::to_string(&synthetic_event) {
Ok(s) => s,
Err(e) => {
error!("Failed to serialize session_configured: {e}");
return Err(anyhow::Error::from(e));
}
};
println!("{session_configured_event}");
// Task that reads JSON lines from stdin and forwards to Submission Queue
let sq_fut = {
let conversation = conversation.clone();
async move {
let stdin = BufReader::new(tokio::io::stdin());
let mut lines = stdin.lines();
loop {
let result = tokio::select! {
_ = tokio::signal::ctrl_c() => {
break
},
res = lines.next_line() => res,
};
match result {
Ok(Some(line)) => {
let line = line.trim();
if line.is_empty() {
continue;
}
match serde_json::from_str::<Submission>(line) {
Ok(sub) => {
if let Err(e) = conversation.submit_with_id(sub).await {
error!("{e:#}");
break;
}
}
Err(e) => {
error!("invalid submission: {e}");
}
}
}
_ => {
info!("Submission queue closed");
break;
}
}
}
}
};
// Task that reads events from the agent and prints them as JSON lines to stdout
let eq_fut = async move {
loop {
let event = tokio::select! {
_ = tokio::signal::ctrl_c() => break,
event = conversation.next_event() => event,
};
match event {
Ok(event) => {
let event_str = match serde_json::to_string(&event) {
Ok(s) => s,
Err(e) => {
error!("Failed to serialize event: {e}");
continue;
}
};
println!("{event_str}");
}
Err(e) => {
error!("{e:#}");
break;
}
}
}
info!("Event queue closed");
};
tokio::join!(sq_fut, eq_fut);
Ok(())
}

View File

@@ -13,8 +13,8 @@ fn codex_command(codex_home: &Path) -> Result<assert_cmd::Command> {
Ok(cmd)
}
#[test]
fn add_and_remove_server_updates_global_config() -> Result<()> {
#[tokio::test]
async fn add_and_remove_server_updates_global_config() -> Result<()> {
let codex_home = TempDir::new()?;
let mut add_cmd = codex_command(codex_home.path())?;
@@ -24,7 +24,7 @@ fn add_and_remove_server_updates_global_config() -> Result<()> {
.success()
.stdout(contains("Added global MCP server 'docs'."));
let servers = load_global_mcp_servers(codex_home.path())?;
let servers = load_global_mcp_servers(codex_home.path()).await?;
assert_eq!(servers.len(), 1);
let docs = servers.get("docs").expect("server should exist");
match &docs.transport {
@@ -43,7 +43,7 @@ fn add_and_remove_server_updates_global_config() -> Result<()> {
.success()
.stdout(contains("Removed global MCP server 'docs'."));
let servers = load_global_mcp_servers(codex_home.path())?;
let servers = load_global_mcp_servers(codex_home.path()).await?;
assert!(servers.is_empty());
let mut remove_again_cmd = codex_command(codex_home.path())?;
@@ -53,14 +53,14 @@ fn add_and_remove_server_updates_global_config() -> Result<()> {
.success()
.stdout(contains("No MCP server named 'docs' found."));
let servers = load_global_mcp_servers(codex_home.path())?;
let servers = load_global_mcp_servers(codex_home.path()).await?;
assert!(servers.is_empty());
Ok(())
}
#[test]
fn add_with_env_preserves_key_order_and_values() -> Result<()> {
#[tokio::test]
async fn add_with_env_preserves_key_order_and_values() -> Result<()> {
let codex_home = TempDir::new()?;
let mut add_cmd = codex_command(codex_home.path())?;
@@ -80,7 +80,7 @@ fn add_with_env_preserves_key_order_and_values() -> Result<()> {
.assert()
.success();
let servers = load_global_mcp_servers(codex_home.path())?;
let servers = load_global_mcp_servers(codex_home.path()).await?;
let envy = servers.get("envy").expect("server should exist");
let env = match &envy.transport {
McpServerTransportConfig::Stdio { env: Some(env), .. } => env,

View File

@@ -1,7 +1,7 @@
[package]
edition = "2024"
name = "codex-cloud-tasks"
version = { workspace = true }
edition = "2024"
[lib]
name = "codex_cloud_tasks"
@@ -11,26 +11,28 @@ path = "src/lib.rs"
workspace = true
[dependencies]
anyhow = "1"
clap = { version = "4", features = ["derive"] }
anyhow = { workspace = true }
base64 = { workspace = true }
chrono = { workspace = true, features = ["serde"] }
clap = { workspace = true, features = ["derive"] }
codex-cloud-tasks-client = { path = "../cloud-tasks-client", features = [
"mock",
"online",
] }
codex-common = { path = "../common", features = ["cli"] }
tokio = { version = "1", features = ["macros", "rt-multi-thread"] }
tracing = { version = "0.1.41", features = ["log"] }
tracing-subscriber = { version = "0.3.19", features = ["env-filter"] }
codex-cloud-tasks-client = { path = "../cloud-tasks-client", features = ["mock", "online"] }
ratatui = { version = "0.29.0" }
crossterm = { version = "0.28.1", features = ["event-stream"] }
tokio-stream = "0.1.17"
chrono = { version = "0.4", features = ["serde"] }
codex-login = { path = "../login" }
codex-core = { path = "../core" }
throbber-widgets-tui = "0.8.0"
base64 = "0.22"
serde_json = "1"
reqwest = { version = "0.12", features = ["json"] }
serde = { version = "1", features = ["derive"] }
unicode-width = "0.1"
codex-login = { path = "../login" }
codex-tui = { path = "../tui" }
crossterm = { workspace = true, features = ["event-stream"] }
ratatui = { workspace = true }
reqwest = { workspace = true, features = ["json"] }
serde = { workspace = true, features = ["derive"] }
serde_json = { workspace = true }
tokio = { workspace = true, features = ["macros", "rt-multi-thread"] }
tokio-stream = { workspace = true }
tracing = { workspace = true, features = ["log"] }
tracing-subscriber = { workspace = true, features = ["env-filter"] }
unicode-width = { workspace = true }
[dev-dependencies]
async-trait = "0.1"
async-trait = { workspace = true }

View File

@@ -1,4 +1,5 @@
use std::time::Duration;
use std::time::Instant;
// Environment filter data models for the TUI
#[derive(Clone, Debug, Default)]
@@ -42,15 +43,13 @@ use crate::scrollable_diff::ScrollableDiff;
use codex_cloud_tasks_client::CloudBackend;
use codex_cloud_tasks_client::TaskId;
use codex_cloud_tasks_client::TaskSummary;
use throbber_widgets_tui::ThrobberState;
#[derive(Default)]
pub struct App {
pub tasks: Vec<TaskSummary>,
pub selected: usize,
pub status: String,
pub diff_overlay: Option<DiffOverlay>,
pub throbber: ThrobberState,
pub spinner_start: Option<Instant>,
pub refresh_inflight: bool,
pub details_inflight: bool,
// Environment filter state
@@ -82,7 +81,7 @@ impl App {
selected: 0,
status: "Press r to refresh".to_string(),
diff_overlay: None,
throbber: ThrobberState::default(),
spinner_start: None,
refresh_inflight: false,
details_inflight: false,
env_filter: None,

View File

@@ -190,7 +190,7 @@ pub async fn run_main(_cli: Cli, _codex_linux_sandbox_exe: Option<PathBuf>) -> a
// Require ChatGPT login (SWIC). Exit with a clear message if missing.
let _token = match codex_core::config::find_codex_home()
.ok()
.map(codex_login::AuthManager::new)
.map(|home| codex_login::AuthManager::new(home, false))
.and_then(|am| am.auth())
{
Some(auth) => {
@@ -400,16 +400,20 @@ pub async fn run_main(_cli: Cli, _codex_linux_sandbox_exe: Option<PathBuf>) -> a
let _ = frame_tx.send(Instant::now() + codex_tui::ComposerInput::recommended_flush_delay());
}
}
// Advance throbber only while loading.
// Keep spinner pulsing only while loading.
if app.refresh_inflight
|| app.details_inflight
|| app.env_loading
|| app.apply_preflight_inflight
|| app.apply_inflight
{
app.throbber.calc_next();
if app.spinner_start.is_none() {
app.spinner_start = Some(Instant::now());
}
needs_redraw = true;
let _ = frame_tx.send(Instant::now() + Duration::from_millis(100));
let _ = frame_tx.send(Instant::now() + Duration::from_millis(600));
} else {
app.spinner_start = None;
}
render_if_needed(&mut terminal, &mut app, &mut needs_redraw)?;
}
@@ -839,6 +843,9 @@ pub async fn run_main(_cli: Cli, _codex_linux_sandbox_exe: Option<PathBuf>) -> a
&& matches!(key.code, KeyCode::Char('n') | KeyCode::Char('N'))
|| matches!(key.code, KeyCode::Char('\u{000E}'));
if is_ctrl_n {
if app.new_task.is_none() {
continue;
}
if app.best_of_modal.is_some() {
app.best_of_modal = None;
needs_redraw = true;

View File

@@ -16,6 +16,7 @@ use ratatui::widgets::ListState;
use ratatui::widgets::Padding;
use ratatui::widgets::Paragraph;
use std::sync::OnceLock;
use std::time::Instant;
use crate::app::App;
use crate::app::AttemptView;
@@ -229,7 +230,7 @@ fn draw_list(frame: &mut Frame, area: Rect, app: &mut App) {
// In-box spinner during initial/refresh loads
if app.refresh_inflight {
draw_centered_spinner(frame, inner, &mut app.throbber, "Loading tasks…");
draw_centered_spinner(frame, inner, &mut app.spinner_start, "Loading tasks…");
}
}
@@ -262,9 +263,9 @@ fn draw_footer(frame: &mut Frame, area: Rect, app: &mut App) {
help.push(": Apply ".dim());
}
help.push("o : Set Env ".dim());
help.push("Ctrl+N".dim());
help.push(format!(": Attempts {}x ", app.best_of_n).dim());
if app.new_task.is_some() {
help.push("Ctrl+N".dim());
help.push(format!(": Attempts {}x ", app.best_of_n).dim());
help.push("(editing new task) ".dim());
} else {
help.push("n : New Task ".dim());
@@ -291,7 +292,7 @@ fn draw_footer(frame: &mut Frame, area: Rect, app: &mut App) {
|| app.apply_preflight_inflight
|| app.apply_inflight
{
draw_inline_spinner(frame, top[1], &mut app.throbber, "Loading…");
draw_inline_spinner(frame, top[1], &mut app.spinner_start, "Loading…");
} else {
frame.render_widget(Clear, top[1]);
}
@@ -449,7 +450,12 @@ fn draw_diff_overlay(frame: &mut Frame, area: Rect, app: &mut App) {
.map(|o| o.sd.wrapped_lines().is_empty())
.unwrap_or(true);
if app.details_inflight && raw_empty {
draw_centered_spinner(frame, content_area, &mut app.throbber, "Loading details…");
draw_centered_spinner(
frame,
content_area,
&mut app.spinner_start,
"Loading details…",
);
} else {
let scroll = app
.diff_overlay
@@ -494,11 +500,11 @@ pub fn draw_apply_modal(frame: &mut Frame, area: Rect, app: &mut App) {
frame.render_widget(header, rows[0]);
// Body: spinner while preflight/apply runs; otherwise show result message and path lists
if app.apply_preflight_inflight {
draw_centered_spinner(frame, rows[1], &mut app.throbber, "Checking…");
draw_centered_spinner(frame, rows[1], &mut app.spinner_start, "Checking…");
} else if app.apply_inflight {
draw_centered_spinner(frame, rows[1], &mut app.throbber, "Applying…");
draw_centered_spinner(frame, rows[1], &mut app.spinner_start, "Applying…");
} else if m.result_message.is_none() {
draw_centered_spinner(frame, rows[1], &mut app.throbber, "Loading…");
draw_centered_spinner(frame, rows[1], &mut app.spinner_start, "Loading…");
} else if let Some(msg) = &m.result_message {
let mut body_lines: Vec<Line> = Vec::new();
let first = match m.result_level {
@@ -859,29 +865,29 @@ fn format_relative_time(ts: chrono::DateTime<Utc>) -> String {
fn draw_inline_spinner(
frame: &mut Frame,
area: Rect,
state: &mut throbber_widgets_tui::ThrobberState,
spinner_start: &mut Option<Instant>,
label: &str,
) {
use ratatui::style::Style;
use throbber_widgets_tui::BRAILLE_EIGHT;
use throbber_widgets_tui::Throbber;
use throbber_widgets_tui::WhichUse;
let w = Throbber::default()
.label(label)
.style(Style::default().cyan())
.throbber_style(Style::default().magenta().bold())
.throbber_set(BRAILLE_EIGHT)
.use_type(WhichUse::Spin);
frame.render_stateful_widget(w, area, state);
use ratatui::widgets::Paragraph;
let start = spinner_start.get_or_insert_with(Instant::now);
let blink_on = (start.elapsed().as_millis() / 600).is_multiple_of(2);
let dot = if blink_on {
"".into()
} else {
"".dim()
};
let label = label.cyan();
let line = Line::from(vec![dot, label]);
frame.render_widget(Paragraph::new(line), area);
}
fn draw_centered_spinner(
frame: &mut Frame,
area: Rect,
state: &mut throbber_widgets_tui::ThrobberState,
spinner_start: &mut Option<Instant>,
label: &str,
) {
// Center a 1xN throbber within the given rect
// Center a 1xN spinner within the given rect
let rows = Layout::default()
.direction(Direction::Vertical)
.constraints([
@@ -898,7 +904,7 @@ fn draw_centered_spinner(
Constraint::Percentage(50),
])
.split(rows[1]);
draw_inline_spinner(frame, cols[1], state, label);
draw_inline_spinner(frame, cols[1], spinner_start, label);
}
// Styling helpers for diff rendering live inline where used.
@@ -918,7 +924,12 @@ pub fn draw_env_modal(frame: &mut Frame, area: Rect, app: &mut App) {
let content = overlay_content(inner);
if app.env_loading {
draw_centered_spinner(frame, content, &mut app.throbber, "Loading environments…");
draw_centered_spinner(
frame,
content,
&mut app.spinner_start,
"Loading environments…",
);
return;
}
@@ -1004,32 +1015,40 @@ pub fn draw_best_of_modal(frame: &mut Frame, area: Rect, app: &mut App) {
use ratatui::widgets::Wrap;
let inner = overlay_outer(area);
const MAX_WIDTH: u16 = 40;
const MIN_WIDTH: u16 = 20;
const MAX_HEIGHT: u16 = 12;
const MIN_HEIGHT: u16 = 6;
let modal_width = inner.width.min(MAX_WIDTH).max(inner.width.min(MIN_WIDTH));
let modal_height = inner
.height
.min(MAX_HEIGHT)
.max(inner.height.min(MIN_HEIGHT));
let modal_x = inner.x + (inner.width.saturating_sub(modal_width)) / 2;
let modal_y = inner.y + (inner.height.saturating_sub(modal_height)) / 2;
let modal_area = Rect::new(modal_x, modal_y, modal_width, modal_height);
let title = Line::from(vec!["Parallel Attempts".magenta().bold()]);
let block = overlay_block().title(title);
frame.render_widget(Clear, inner);
frame.render_widget(block.clone(), inner);
let content = overlay_content(inner);
frame.render_widget(Clear, modal_area);
frame.render_widget(block.clone(), modal_area);
let content = overlay_content(modal_area);
let rows = Layout::default()
.direction(Direction::Vertical)
.constraints([Constraint::Length(2), Constraint::Min(1)])
.split(content);
let hint = Paragraph::new(Line::from(
"Use ↑/↓ to choose, 1-4 jump; Enter confirm, Esc cancel"
.cyan()
.dim(),
))
.wrap(Wrap { trim: true });
let hint = Paragraph::new(Line::from("Use ↑/↓ to choose, 1-4 jump".cyan().dim()))
.wrap(Wrap { trim: true });
frame.render_widget(hint, rows[0]);
let selected = app.best_of_modal.as_ref().map(|m| m.selected).unwrap_or(0);
let options = [1usize, 2, 3, 4];
let mut items: Vec<ListItem> = Vec::new();
for &attempts in &options {
let mut spans: Vec<ratatui::text::Span> =
vec![format!("{attempts} attempt{}", if attempts == 1 { "" } else { "s" }).into()];
let noun = if attempts == 1 { "attempt" } else { "attempts" };
let mut spans: Vec<ratatui::text::Span> = vec![format!("{attempts} {noun:<8}").into()];
spans.push(" ".into());
spans.push(format!("{attempts}x parallel").dim());
if attempts == app.best_of_n {

View File

@@ -70,7 +70,7 @@ pub async fn build_chatgpt_headers() -> HeaderMap {
HeaderValue::from_str(&ua).unwrap_or(HeaderValue::from_static("codex-cli")),
);
if let Ok(home) = codex_core::config::find_codex_home() {
let am = codex_login::AuthManager::new(home);
let am = codex_login::AuthManager::new(home, false);
if let Some(auth) = am.auth()
&& let Ok(tok) = auth.get_token().await
&& !tok.is_empty()

View File

@@ -10,6 +10,7 @@ workspace = true
clap = { workspace = true, features = ["derive", "wrap_help"], optional = true }
codex-core = { workspace = true }
codex-protocol = { workspace = true }
codex-app-server-protocol = { workspace = true }
serde = { workspace = true, optional = true }
toml = { workspace = true, optional = true }

View File

@@ -1,5 +1,5 @@
use codex_app_server_protocol::AuthMode;
use codex_core::protocol_config_types::ReasoningEffort;
use codex_protocol::mcp_protocol::AuthMode;
/// A simple preset pairing a model slug with a reasoning effort.
#[derive(Debug, Clone, Copy)]
@@ -20,49 +20,49 @@ const PRESETS: &[ModelPreset] = &[
ModelPreset {
id: "gpt-5-codex-low",
label: "gpt-5-codex low",
description: "",
description: "Fastest responses with limited reasoning",
model: "gpt-5-codex",
effort: Some(ReasoningEffort::Low),
},
ModelPreset {
id: "gpt-5-codex-medium",
label: "gpt-5-codex medium",
description: "",
description: "Dynamically adjusts reasoning based on the task",
model: "gpt-5-codex",
effort: Some(ReasoningEffort::Medium),
},
ModelPreset {
id: "gpt-5-codex-high",
label: "gpt-5-codex high",
description: "",
description: "Maximizes reasoning depth for complex or ambiguous problems",
model: "gpt-5-codex",
effort: Some(ReasoningEffort::High),
},
ModelPreset {
id: "gpt-5-minimal",
label: "gpt-5 minimal",
description: "— fastest responses with limited reasoning; ideal for coding, instructions, or lightweight tasks",
description: "Fastest responses with little reasoning",
model: "gpt-5",
effort: Some(ReasoningEffort::Minimal),
},
ModelPreset {
id: "gpt-5-low",
label: "gpt-5 low",
description: "— balances speed with some reasoning; useful for straightforward queries and short explanations",
description: "Balances speed with some reasoning; useful for straightforward queries and short explanations",
model: "gpt-5",
effort: Some(ReasoningEffort::Low),
},
ModelPreset {
id: "gpt-5-medium",
label: "gpt-5 medium",
description: "— default setting; provides a solid balance of reasoning depth and latency for general-purpose tasks",
description: "Provides a solid balance of reasoning depth and latency for general-purpose tasks",
model: "gpt-5",
effort: Some(ReasoningEffort::Medium),
},
ModelPreset {
id: "gpt-5-high",
label: "gpt-5 high",
description: "— maximizes reasoning depth for complex or ambiguous problems",
description: "Maximizes reasoning depth for complex or ambiguous problems",
model: "gpt-5",
effort: Some(ReasoningEffort::High),
},

View File

@@ -19,13 +19,16 @@ async-trait = { workspace = true }
base64 = { workspace = true }
bytes = { workspace = true }
chrono = { workspace = true, features = ["serde"] }
codex-app-server-protocol = { workspace = true }
codex-apply-patch = { workspace = true }
codex-file-search = { workspace = true }
codex-mcp-client = { workspace = true }
codex-rmcp-client = { workspace = true }
codex-protocol = { workspace = true }
codex-otel = { workspace = true, features = ["otel"] }
codex-protocol = { workspace = true }
codex-rmcp-client = { workspace = true }
codex-utils-string = { workspace = true }
dirs = { workspace = true }
dunce = { workspace = true }
env-flags = { workspace = true }
eventsource-stream = { workspace = true }
futures = { workspace = true }
@@ -58,7 +61,7 @@ tokio = { workspace = true, features = [
"rt-multi-thread",
"signal",
] }
tokio-util = { workspace = true }
tokio-util = { workspace = true, features = ["rt"] }
toml = { workspace = true }
toml_edit = { workspace = true }
tracing = { workspace = true, features = ["log"] }
@@ -73,6 +76,9 @@ wildmatch = { workspace = true }
landlock = { workspace = true }
seccompiler = { workspace = true }
[target.'cfg(target_os = "macos")'.dependencies]
core-foundation = "0.9"
# Build OpenSSL from source for musl builds.
[target.x86_64-unknown-linux-musl.dependencies]
openssl-sys = { workspace = true, features = ["vendored"] }
@@ -83,16 +89,18 @@ openssl-sys = { workspace = true, features = ["vendored"] }
[dev-dependencies]
assert_cmd = { workspace = true }
assert_matches = { workspace = true }
core_test_support = { workspace = true }
escargot = { workspace = true }
maplit = { workspace = true }
predicates = { workspace = true }
pretty_assertions = { workspace = true }
serial_test = { workspace = true }
tempfile = { workspace = true }
tokio-test = { workspace = true }
tracing-test = { workspace = true, features = ["no-env-filter"] }
walkdir = { workspace = true }
wiremock = { workspace = true }
tracing-test = { workspace = true, features = ["no-env-filter"] }
[package.metadata.cargo-shear]
ignored = ["openssl-sys"]

View File

@@ -12,7 +12,7 @@ Expects `/usr/bin/sandbox-exec` to be present.
### Linux
Expects the binary containing `codex-core` to run the equivalent of `codex debug landlock` when `arg0` is `codex-linux-sandbox`. See the `codex-arg0` crate for details.
Expects the binary containing `codex-core` to run the equivalent of `codex sandbox linux` (legacy alias: `codex debug landlock`) when `arg0` is `codex-linux-sandbox`. See the `codex-arg0` crate for details.
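A rough sketch of the `arg0` dispatch described here; the real logic lives in the `codex-arg0` crate, so the shape below is an assumption:

```rust
// Illustrative only: dispatch on arg0 before normal CLI parsing.
fn main() {
    let arg0 = std::env::args().next().unwrap_or_default();
    if arg0.ends_with("codex-linux-sandbox") {
        // Behave like `codex sandbox linux` (legacy alias: `codex debug landlock`).
        // run_landlock_sandbox();
        return;
    }
    // Otherwise fall through to the regular `codex` entry point.
}
```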
### All Platforms

View File

@@ -5,18 +5,19 @@ You are Codex, based on GPT-5. You are running as a coding agent in the Codex CL
- The arguments to `shell` will be passed to execvp(). Most terminal commands should be prefixed with ["bash", "-lc"].
- Always set the `workdir` param when using the shell function. Do not use `cd` unless absolutely necessary.
- When searching for text or files, prefer using `rg` or `rg --files` respectively because `rg` is much faster than alternatives like `grep`. (If the `rg` command is not found, then use alternatives.)
- When editing or creating files, you MUST use apply_patch as a standalone tool without going through ["bash", "-lc"], `Python`, `cat`, `sed`, ... Example: functions.shell({"command":["apply_patch","*** Begin Patch\nAdd File: hello.txt\n+Hello, world!\n*** End Patch"]}).
## Editing constraints
- Default to ASCII when editing or creating files. Only introduce non-ASCII or other Unicode characters when there is a clear justification and the file already uses them.
- Add succinct code comments that explain what is going on if code is not self-explanatory. You should not add comments like "Assigns the value to the variable", but a brief comment might be useful ahead of a complex code block that the user would otherwise have to spend time parsing out. Usage of these comments should be rare.
- Try to use apply_patch for single file edits, but it is fine to explore other options to make the edit if it does not work well. Do not use apply_patch for changes that are auto-generated (i.e. generating package.json or running a lint or format command like gofmt) or when scripting is more efficient (such as search and replacing a string across a codebase).
- You may be in a dirty git worktree.
* NEVER revert existing changes you did not make unless explicitly requested, since these changes were made by the user.
* If asked to make a commit or code edits and there are unrelated changes to your work or changes that you didn't make in those files, don't revert those changes.
* If the changes are in files you've touched recently, you should read carefully and understand how you can work with the changes rather than reverting them.
* If the changes are in unrelated files, just ignore them and don't revert them.
- While you are working, you might notice unexpected changes that you didn't make. If this happens, STOP IMMEDIATELY and ask the user how they would like to proceed.
- **NEVER** use destructive commands like `git reset --hard` or `git checkout --` unless specifically requested or approved by the user.
## Plan tool
@@ -90,7 +91,7 @@ You are producing plain text that will later be styled by the CLI. Follow these
- Headers: optional; short Title Case (1-3 words) wrapped in **…**; no blank line before the first bullet; add only if they truly help.
- Bullets: use - ; merge related points; keep to one line when possible; 4-6 per list ordered by importance; keep phrasing consistent.
- Monospace: backticks for commands/paths/env vars/code ids and inline examples; use for literal keyword bullets; never combine with **.
- Code samples or multi-line snippets should be wrapped in fenced code blocks; add a language hint whenever obvious.
- Code samples or multi-line snippets should be wrapped in fenced code blocks; include an info string as often as possible.
- Structure: group related bullets; order sections general → specific → supporting; for subsections, start with a bolded keyword bullet, then items; match complexity to the task.
- Tone: collaborative, concise, factual; present tense, active voice; self-contained; no "above/below"; parallel wording.
- Don'ts: no nested bullets/hierarchies; no ANSI codes; don't cram unrelated keywords; keep keyword lists short—wrap/reformat if long; avoid naming formatting styles in answers.

View File

@@ -27,6 +27,7 @@ pub(crate) enum InternalApplyPatchInvocation {
DelegateToExec(ApplyPatchExec),
}
#[derive(Debug)]
pub(crate) struct ApplyPatchExec {
pub(crate) action: ApplyPatchAction,
pub(crate) user_explicitly_approved_this_action: bool,
@@ -109,3 +110,28 @@ pub(crate) fn convert_apply_patch_to_protocol(
}
result
}
#[cfg(test)]
mod tests {
use super::*;
use pretty_assertions::assert_eq;
use tempfile::tempdir;
#[test]
fn convert_apply_patch_maps_add_variant() {
let tmp = tempdir().expect("tmp");
let p = tmp.path().join("a.txt");
// Create an action with a single Add change
let action = ApplyPatchAction::new_add_for_test(&p, "hello".to_string());
let got = convert_apply_patch_to_protocol(&action);
assert_eq!(
got.get(&p),
Some(&FileChange::Add {
content: "hello".to_string()
})
);
}
}

View File

@@ -15,7 +15,7 @@ use std::sync::Arc;
use std::sync::Mutex;
use std::time::Duration;
use codex_protocol::mcp_protocol::AuthMode;
use codex_app_server_protocol::AuthMode;
use crate::token_data::PlanType;
use crate::token_data::TokenData;
@@ -73,7 +73,7 @@ impl CodexAuth {
/// Loads the available auth information from the auth.json.
pub fn from_codex_home(codex_home: &Path) -> std::io::Result<Option<CodexAuth>> {
load_auth(codex_home)
load_auth(codex_home, false)
}
pub async fn get_token_data(&self) -> Result<TokenData, std::io::Error> {
@@ -188,6 +188,7 @@ impl CodexAuth {
}
pub const OPENAI_API_KEY_ENV_VAR: &str = "OPENAI_API_KEY";
pub const CODEX_API_KEY_ENV_VAR: &str = "CODEX_API_KEY";
pub fn read_openai_api_key_from_env() -> Option<String> {
env::var(OPENAI_API_KEY_ENV_VAR)
@@ -196,6 +197,13 @@ pub fn read_openai_api_key_from_env() -> Option<String> {
.filter(|value| !value.is_empty())
}
pub fn read_codex_api_key_from_env() -> Option<String> {
env::var(CODEX_API_KEY_ENV_VAR)
.ok()
.map(|value| value.trim().to_string())
.filter(|value| !value.is_empty())
}
pub fn get_auth_file(codex_home: &Path) -> PathBuf {
codex_home.join("auth.json")
}
@@ -221,7 +229,18 @@ pub fn login_with_api_key(codex_home: &Path, api_key: &str) -> std::io::Result<(
write_auth_json(&get_auth_file(codex_home), &auth_dot_json)
}
fn load_auth(codex_home: &Path) -> std::io::Result<Option<CodexAuth>> {
fn load_auth(
codex_home: &Path,
enable_codex_api_key_env: bool,
) -> std::io::Result<Option<CodexAuth>> {
if enable_codex_api_key_env && let Some(api_key) = read_codex_api_key_from_env() {
let client = crate::default_client::create_client();
return Ok(Some(CodexAuth::from_api_key_with_client(
api_key.as_str(),
client,
)));
}
let auth_file = get_auth_file(codex_home);
let client = crate::default_client::create_client();
let auth_dot_json = match try_read_auth_json(&auth_file) {
@@ -455,7 +474,7 @@ mod tests {
auth_dot_json,
auth_file: _,
..
} = super::load_auth(codex_home.path()).unwrap().unwrap();
} = super::load_auth(codex_home.path(), false).unwrap().unwrap();
assert_eq!(None, api_key);
assert_eq!(AuthMode::ChatGPT, mode);
@@ -494,7 +513,7 @@ mod tests {
)
.unwrap();
let auth = super::load_auth(dir.path()).unwrap().unwrap();
let auth = super::load_auth(dir.path(), false).unwrap().unwrap();
assert_eq!(auth.mode, AuthMode::ApiKey);
assert_eq!(auth.api_key, Some("sk-test-key".to_string()));
@@ -577,6 +596,7 @@ mod tests {
pub struct AuthManager {
codex_home: PathBuf,
inner: RwLock<CachedAuth>,
enable_codex_api_key_env: bool,
}
impl AuthManager {
@@ -584,11 +604,14 @@ impl AuthManager {
/// preferred auth method. Errors loading auth are swallowed; `auth()` will
/// simply return `None` in that case so callers can treat it as an
/// unauthenticated state.
pub fn new(codex_home: PathBuf) -> Self {
let auth = CodexAuth::from_codex_home(&codex_home).ok().flatten();
pub fn new(codex_home: PathBuf, enable_codex_api_key_env: bool) -> Self {
let auth = load_auth(&codex_home, enable_codex_api_key_env)
.ok()
.flatten();
Self {
codex_home,
inner: RwLock::new(CachedAuth { auth }),
enable_codex_api_key_env,
}
}
@@ -598,6 +621,7 @@ impl AuthManager {
Arc::new(Self {
codex_home: PathBuf::new(),
inner: RwLock::new(cached),
enable_codex_api_key_env: false,
})
}
@@ -609,7 +633,9 @@ impl AuthManager {
/// Force a reload of the auth information from auth.json. Returns
/// whether the auth value changed.
pub fn reload(&self) -> bool {
let new_auth = CodexAuth::from_codex_home(&self.codex_home).ok().flatten();
let new_auth = load_auth(&self.codex_home, self.enable_codex_api_key_env)
.ok()
.flatten();
if let Ok(mut guard) = self.inner.write() {
let changed = !AuthManager::auths_equal(&guard.auth, &new_auth);
guard.auth = new_auth;
@@ -628,8 +654,8 @@ impl AuthManager {
}
/// Convenience constructor returning an `Arc` wrapper.
pub fn shared(codex_home: PathBuf) -> Arc<Self> {
Arc::new(Self::new(codex_home))
pub fn shared(codex_home: PathBuf, enable_codex_api_key_env: bool) -> Arc<Self> {
Arc::new(Self::new(codex_home, enable_codex_api_key_env))
}
/// Attempt to refresh the current auth token (if any). On success, reload
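Callers now opt in to the `CODEX_API_KEY` override explicitly when constructing the manager. A hedged usage sketch (the flag value and wrapper function are illustrative):

```rust
use std::path::PathBuf;
use std::sync::Arc;

// Illustrative only: build a shared AuthManager that honors CODEX_API_KEY.
fn make_auth_manager(codex_home: PathBuf) -> Arc<codex_core::AuthManager> {
    codex_core::AuthManager::shared(codex_home, /* enable_codex_api_key_env */ true)
}
```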

View File

@@ -6,6 +6,8 @@ use crate::client_common::ResponseEvent;
use crate::client_common::ResponseStream;
use crate::error::CodexErr;
use crate::error::Result;
use crate::error::RetryLimitReachedError;
use crate::error::UnexpectedResponseError;
use crate::model_family::ModelFamily;
use crate::openai_tools::create_tools_json_for_chat_completions_api;
use crate::util::backoff;
@@ -320,11 +322,18 @@ pub(crate) async fn stream_chat_completions(
let status = res.status();
if !(status == StatusCode::TOO_MANY_REQUESTS || status.is_server_error()) {
let body = (res.text().await).unwrap_or_default();
return Err(CodexErr::UnexpectedStatus(status, body));
return Err(CodexErr::UnexpectedStatus(UnexpectedResponseError {
status,
body,
request_id: None,
}));
}
if attempt > max_retries {
return Err(CodexErr::RetryLimit(status));
return Err(CodexErr::RetryLimit(RetryLimitReachedError {
status,
request_id: None,
}));
}
let retry_after_secs = res

View File

@@ -5,9 +5,11 @@ use std::time::Duration;
use crate::AuthManager;
use crate::auth::CodexAuth;
use crate::error::RetryLimitReachedError;
use crate::error::UnexpectedResponseError;
use bytes::Bytes;
use codex_protocol::mcp_protocol::AuthMode;
use codex_protocol::mcp_protocol::ConversationId;
use codex_app_server_protocol::AuthMode;
use codex_protocol::ConversationId;
use eventsource_stream::Eventsource;
use futures::prelude::*;
use regex_lite::Regex;
@@ -29,6 +31,7 @@ use crate::client_common::Prompt;
use crate::client_common::ResponseEvent;
use crate::client_common::ResponseStream;
use crate::client_common::ResponsesApiRequest;
use crate::client_common::TurnType;
use crate::client_common::create_reasoning_param_for_request;
use crate::client_common::create_text_param_for_request;
use crate::config::Config;
@@ -61,7 +64,6 @@ struct ErrorResponse {
#[derive(Debug, Deserialize)]
struct Error {
r#type: Option<String>,
#[allow(dead_code)]
code: Option<String>,
message: Option<String>,
@@ -226,7 +228,7 @@ impl ModelClient {
input: &input_with_instructions,
tools: &tools_json,
tool_choice: "auto",
parallel_tool_calls: false,
parallel_tool_calls: prompt.parallel_tool_calls,
reasoning,
store: azure_workaround,
stream: true,
@@ -243,7 +245,7 @@ impl ModelClient {
let max_attempts = self.provider.request_max_retries();
for attempt in 0..=max_attempts {
match self
.attempt_stream_responses(attempt, &payload_json, &auth_manager)
.attempt_stream_responses(attempt, &payload_json, &auth_manager, prompt.turn_type)
.await
{
Ok(stream) => {
@@ -271,6 +273,7 @@ impl ModelClient {
attempt: u64,
payload_json: &Value,
auth_manager: &Option<Arc<AuthManager>>,
turn_type: TurnType,
) -> std::result::Result<ResponseStream, StreamAttemptError> {
// Always fetch the latest auth in case a prior attempt refreshed the token.
let auth = auth_manager.as_ref().and_then(|m| m.auth());
@@ -292,6 +295,13 @@ impl ModelClient {
// Send session_id for compatibility.
.header("conversation_id", self.conversation_id.to_string())
.header("session_id", self.conversation_id.to_string())
.header(
"action_kind",
match turn_type {
TurnType::Review => "review",
TurnType::Regular => "turn",
},
)
.header(reqwest::header::ACCEPT, "text/event-stream")
.json(payload_json);
@@ -307,14 +317,17 @@ impl ModelClient {
.log_request(attempt, || req_builder.send())
.await;
let mut request_id = None;
if let Ok(resp) = &res {
request_id = resp
.headers()
.get("cf-ray")
.map(|v| v.to_str().unwrap_or_default().to_string());
trace!(
"Response status: {}, cf-ray: {}",
"Response status: {}, cf-ray: {:?}",
resp.status(),
resp.headers()
.get("cf-ray")
.map(|v| v.to_str().unwrap_or_default())
.unwrap_or_default()
request_id
);
}
@@ -374,7 +387,11 @@ impl ModelClient {
// Surface the error body to callers. Use `unwrap_or_default` per Clippy.
let body = res.text().await.unwrap_or_default();
return Err(StreamAttemptError::Fatal(CodexErr::UnexpectedStatus(
status, body,
UnexpectedResponseError {
status,
body,
request_id: None,
},
)));
}
@@ -405,6 +422,7 @@ impl ModelClient {
Err(StreamAttemptError::RetryableHttpError {
status,
retry_after,
request_id,
})
}
Err(e) => Err(StreamAttemptError::RetryableTransportError(e.into())),
@@ -448,6 +466,7 @@ enum StreamAttemptError {
RetryableHttpError {
status: StatusCode,
retry_after: Option<Duration>,
request_id: Option<String>,
},
RetryableTransportError(CodexErr),
Fatal(CodexErr),
@@ -472,11 +491,13 @@ impl StreamAttemptError {
fn into_error(self) -> CodexErr {
match self {
Self::RetryableHttpError { status, .. } => {
Self::RetryableHttpError {
status, request_id, ..
} => {
if status == StatusCode::INTERNAL_SERVER_ERROR {
CodexErr::InternalServerError
} else {
CodexErr::RetryLimit(status)
CodexErr::RetryLimit(RetryLimitReachedError { status, request_id })
}
}
Self::RetryableTransportError(error) => error,
@@ -781,9 +802,13 @@ async fn process_sse<S>(
if let Some(error) = error {
match serde_json::from_value::<Error>(error.clone()) {
Ok(error) => {
let delay = try_parse_retry_after(&error);
let message = error.message.unwrap_or_default();
response_error = Some(CodexErr::Stream(message, delay));
if is_context_window_error(&error) {
response_error = Some(CodexErr::ContextWindowExceeded);
} else {
let delay = try_parse_retry_after(&error);
let message = error.message.clone().unwrap_or_default();
response_error = Some(CodexErr::Stream(message, delay));
}
}
Err(e) => {
let error = format!("failed to parse ErrorResponse: {e}");
@@ -909,9 +934,14 @@ fn try_parse_retry_after(err: &Error) -> Option<Duration> {
None
}
fn is_context_window_error(error: &Error) -> bool {
error.code.as_deref() == Some("context_length_exceeded")
}
#[cfg(test)]
mod tests {
use super::*;
use assert_matches::assert_matches;
use serde_json::json;
use tokio::sync::mpsc;
use tokio_test::io::Builder as IoBuilder;
@@ -1166,6 +1196,74 @@ mod tests {
}
}
#[tokio::test]
async fn context_window_error_is_fatal() {
let raw_error = r#"{"type":"response.failed","sequence_number":3,"response":{"id":"resp_5c66275b97b9baef1ed95550adb3b7ec13b17aafd1d2f11b","object":"response","created_at":1759510079,"status":"failed","background":false,"error":{"code":"context_length_exceeded","message":"Your input exceeds the context window of this model. Please adjust your input and try again."},"usage":null,"user":null,"metadata":{}}}"#;
let sse1 = format!("event: response.failed\ndata: {raw_error}\n\n");
let provider = ModelProviderInfo {
name: "test".to_string(),
base_url: Some("https://test.com".to_string()),
env_key: Some("TEST_API_KEY".to_string()),
env_key_instructions: None,
wire_api: WireApi::Responses,
query_params: None,
http_headers: None,
env_http_headers: None,
request_max_retries: Some(0),
stream_max_retries: Some(0),
stream_idle_timeout_ms: Some(1000),
requires_openai_auth: false,
};
let otel_event_manager = otel_event_manager();
let events = collect_events(&[sse1.as_bytes()], provider, otel_event_manager).await;
assert_eq!(events.len(), 1);
match &events[0] {
Err(err @ CodexErr::ContextWindowExceeded) => {
assert_eq!(err.to_string(), CodexErr::ContextWindowExceeded.to_string());
}
other => panic!("unexpected context window event: {other:?}"),
}
}
#[tokio::test]
async fn context_window_error_with_newline_is_fatal() {
let raw_error = r#"{"type":"response.failed","sequence_number":4,"response":{"id":"resp_fatal_newline","object":"response","created_at":1759510080,"status":"failed","background":false,"error":{"code":"context_length_exceeded","message":"Your input exceeds the context window of this model. Please adjust your input and try\nagain."},"usage":null,"user":null,"metadata":{}}}"#;
let sse1 = format!("event: response.failed\ndata: {raw_error}\n\n");
let provider = ModelProviderInfo {
name: "test".to_string(),
base_url: Some("https://test.com".to_string()),
env_key: Some("TEST_API_KEY".to_string()),
env_key_instructions: None,
wire_api: WireApi::Responses,
query_params: None,
http_headers: None,
env_http_headers: None,
request_max_retries: Some(0),
stream_max_retries: Some(0),
stream_idle_timeout_ms: Some(1000),
requires_openai_auth: false,
};
let otel_event_manager = otel_event_manager();
let events = collect_events(&[sse1.as_bytes()], provider, otel_event_manager).await;
assert_eq!(events.len(), 1);
match &events[0] {
Err(err @ CodexErr::ContextWindowExceeded) => {
assert_eq!(err.to_string(), CodexErr::ContextWindowExceeded.to_string());
}
other => panic!("unexpected context window event: {other:?}"),
}
}
// ────────────────────────────
// Table-driven test from `main`
// ────────────────────────────
@@ -1303,10 +1401,7 @@ mod tests {
let resp: ErrorResponse =
serde_json::from_str(json).expect("should deserialize old schema");
assert!(matches!(
resp.error.plan_type,
Some(PlanType::Known(KnownPlan::Pro))
));
assert_matches!(resp.error.plan_type, Some(PlanType::Known(KnownPlan::Pro)));
let plan_json = serde_json::to_string(&resp.error.plan_type).expect("serialize plan_type");
assert_eq!(plan_json, "\"pro\"");
@@ -1321,7 +1416,7 @@ mod tests {
let resp: ErrorResponse =
serde_json::from_str(json).expect("should deserialize old schema");
assert!(matches!(resp.error.plan_type, Some(PlanType::Unknown(ref s)) if s == "vip"));
assert_matches!(resp.error.plan_type, Some(PlanType::Unknown(ref s)) if s == "vip");
let plan_json = serde_json::to_string(&resp.error.plan_type).expect("serialize plan_type");
assert_eq!(plan_json, "\"vip\"");

View File

@@ -1,6 +1,6 @@
use crate::client_common::tools::ToolSpec;
use crate::error::Result;
use crate::model_family::ModelFamily;
use crate::openai_tools::OpenAiTool;
use crate::protocol::RateLimitSnapshot;
use crate::protocol::TokenUsage;
use codex_apply_patch::APPLY_PATCH_TOOL_INSTRUCTIONS;
@@ -9,9 +9,11 @@ use codex_protocol::config_types::ReasoningSummary as ReasoningSummaryConfig;
use codex_protocol::config_types::Verbosity as VerbosityConfig;
use codex_protocol::models::ResponseItem;
use futures::Stream;
use serde::Deserialize;
use serde::Serialize;
use serde_json::Value;
use std::borrow::Cow;
use std::collections::HashSet;
use std::ops::Deref;
use std::pin::Pin;
use std::task::Context;
@@ -21,6 +23,13 @@ use tokio::sync::mpsc;
/// Review thread system prompt. Edit `core/src/review_prompt.md` to customize.
pub const REVIEW_PROMPT: &str = include_str!("../review_prompt.md");
#[derive(Debug, Clone, Copy, PartialEq, Eq, Default)]
pub enum TurnType {
#[default]
Regular,
Review,
}
/// API request payload for a single model turn
#[derive(Default, Debug, Clone)]
pub struct Prompt {
@@ -29,13 +38,19 @@ pub struct Prompt {
/// Tools available to the model, including additional tools sourced from
/// external MCP servers.
pub(crate) tools: Vec<OpenAiTool>,
pub(crate) tools: Vec<ToolSpec>,
/// Whether parallel tool calls are permitted for this prompt.
pub(crate) parallel_tool_calls: bool,
/// Optional override for the built-in BASE_INSTRUCTIONS.
pub base_instructions_override: Option<String>,
/// Optional output schema for the model's response.
pub output_schema: Option<Value>,
/// The type of turn being executed
pub turn_type: TurnType,
}
impl Prompt {
@@ -49,8 +64,8 @@ impl Prompt {
// AND
// - there is no apply_patch tool present
let is_apply_patch_tool_present = self.tools.iter().any(|tool| match tool {
OpenAiTool::Function(f) => f.name == "apply_patch",
OpenAiTool::Freeform(f) => f.name == "apply_patch",
ToolSpec::Function(f) => f.name == "apply_patch",
ToolSpec::Freeform(f) => f.name == "apply_patch",
_ => false,
});
if self.base_instructions_override.is_none()
@@ -64,10 +79,125 @@ impl Prompt {
}
pub(crate) fn get_formatted_input(&self) -> Vec<ResponseItem> {
self.input.clone()
let mut input = self.input.clone();
// when using the *Freeform* apply_patch tool specifically, tool outputs
// should be structured text, not json. Do NOT reserialize when using
// the Function tool - note that this differs from the check above for
// instructions. We declare the result as a named variable for clarity.
let is_freeform_apply_patch_tool_present = self.tools.iter().any(|tool| match tool {
ToolSpec::Freeform(f) => f.name == "apply_patch",
_ => false,
});
if is_freeform_apply_patch_tool_present {
reserialize_shell_outputs(&mut input);
}
input
}
}
fn reserialize_shell_outputs(items: &mut [ResponseItem]) {
let mut shell_call_ids: HashSet<String> = HashSet::new();
items.iter_mut().for_each(|item| match item {
ResponseItem::LocalShellCall { call_id, id, .. } => {
if let Some(identifier) = call_id.clone().or_else(|| id.clone()) {
shell_call_ids.insert(identifier);
}
}
ResponseItem::CustomToolCall {
id: _,
status: _,
call_id,
name,
input: _,
} => {
if name == "apply_patch" {
shell_call_ids.insert(call_id.clone());
}
}
ResponseItem::CustomToolCallOutput { call_id, output } => {
if shell_call_ids.remove(call_id)
&& let Some(structured) = parse_structured_shell_output(output)
{
*output = structured
}
}
ResponseItem::FunctionCall { name, call_id, .. }
if is_shell_tool_name(name) || name == "apply_patch" =>
{
shell_call_ids.insert(call_id.clone());
}
ResponseItem::FunctionCallOutput { call_id, output } => {
if shell_call_ids.remove(call_id)
&& let Some(structured) = parse_structured_shell_output(&output.content)
{
output.content = structured
}
}
_ => {}
})
}
fn is_shell_tool_name(name: &str) -> bool {
matches!(name, "shell" | "container.exec")
}
#[derive(Deserialize)]
struct ExecOutputJson {
output: String,
metadata: ExecOutputMetadataJson,
}
#[derive(Deserialize)]
struct ExecOutputMetadataJson {
exit_code: i32,
duration_seconds: f32,
}
fn parse_structured_shell_output(raw: &str) -> Option<String> {
let parsed: ExecOutputJson = serde_json::from_str(raw).ok()?;
Some(build_structured_output(&parsed))
}
fn build_structured_output(parsed: &ExecOutputJson) -> String {
let mut sections = Vec::new();
sections.push(format!("Exit code: {}", parsed.metadata.exit_code));
sections.push(format!(
"Wall time: {} seconds",
parsed.metadata.duration_seconds
));
let mut output = parsed.output.clone();
if let Some(total_lines) = extract_total_output_lines(&parsed.output) {
sections.push(format!("Total output lines: {total_lines}"));
if let Some(stripped) = strip_total_output_header(&output) {
output = stripped.to_string();
}
}
sections.push("Output:".to_string());
sections.push(output);
sections.join("\n")
}
fn extract_total_output_lines(output: &str) -> Option<u32> {
let marker_start = output.find("[... omitted ")?;
let marker = &output[marker_start..];
let (_, after_of) = marker.split_once(" of ")?;
let (total_segment, _) = after_of.split_once(' ')?;
total_segment.parse::<u32>().ok()
}
fn strip_total_output_header(output: &str) -> Option<&str> {
let after_prefix = output.strip_prefix("Total output lines: ")?;
let (_, remainder) = after_prefix.split_once('\n')?;
let remainder = remainder.strip_prefix('\n').unwrap_or(remainder);
Some(remainder)
}
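To make the rewrite concrete, a hedged example of what these helpers produce for a typical exec-output payload (the JSON shape matches `ExecOutputJson` above; real payloads may carry more fields):

```rust
// Sketch, assuming the private helpers above are in scope.
fn example() {
    let raw = r#"{"output":"hello\n","metadata":{"exit_code":0,"duration_seconds":0.42}}"#;
    let structured = parse_structured_shell_output(raw).expect("valid exec JSON");
    // structured is now:
    //   Exit code: 0
    //   Wall time: 0.42 seconds
    //   Output:
    //   hello
    assert!(structured.starts_with("Exit code: 0"));
}
```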
#[derive(Debug)]
pub enum ResponseEvent {
Created,
@@ -160,6 +290,65 @@ pub(crate) struct ResponsesApiRequest<'a> {
pub(crate) text: Option<TextControls>,
}
pub(crate) mod tools {
use crate::openai_tools::JsonSchema;
use serde::Deserialize;
use serde::Serialize;
/// When serialized as JSON, this produces a valid "Tool" in the OpenAI
/// Responses API.
#[derive(Debug, Clone, Serialize, PartialEq)]
#[serde(tag = "type")]
pub(crate) enum ToolSpec {
#[serde(rename = "function")]
Function(ResponsesApiTool),
#[serde(rename = "local_shell")]
LocalShell {},
// TODO: Understand why we get an error on web_search although the API docs say it's supported.
// https://platform.openai.com/docs/guides/tools-web-search?api-mode=responses#:~:text=%7B%20type%3A%20%22web_search%22%20%7D%2C
#[serde(rename = "web_search")]
WebSearch {},
#[serde(rename = "custom")]
Freeform(FreeformTool),
}
impl ToolSpec {
pub(crate) fn name(&self) -> &str {
match self {
ToolSpec::Function(tool) => tool.name.as_str(),
ToolSpec::LocalShell {} => "local_shell",
ToolSpec::WebSearch {} => "web_search",
ToolSpec::Freeform(tool) => tool.name.as_str(),
}
}
}
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
pub struct FreeformTool {
pub(crate) name: String,
pub(crate) description: String,
pub(crate) format: FreeformToolFormat,
}
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
pub struct FreeformToolFormat {
pub(crate) r#type: String,
pub(crate) syntax: String,
pub(crate) definition: String,
}
#[derive(Debug, Clone, Serialize, PartialEq)]
pub struct ResponsesApiTool {
pub(crate) name: String,
pub(crate) description: String,
/// TODO: Validation. When strict is set to true, the JSON schema,
/// `required` and `additional_properties` must be present. All fields in
/// `properties` must be present in `required`.
pub(crate) strict: bool,
pub(crate) parameters: JsonSchema,
}
}
pub(crate) fn create_reasoning_param_for_request(
model_family: &ModelFamily,
effort: Option<ReasoningEffortConfig>,
@@ -279,7 +468,7 @@ mod tests {
input: &input,
tools: &tools,
tool_choice: "auto",
parallel_tool_calls: false,
parallel_tool_calls: true,
reasoning: None,
store: false,
stream: true,
@@ -320,7 +509,7 @@ mod tests {
input: &input,
tools: &tools,
tool_choice: "auto",
parallel_tool_calls: false,
parallel_tool_calls: true,
reasoning: None,
store: false,
stream: true,
@@ -356,7 +545,7 @@ mod tests {
input: &input,
tools: &tools,
tool_choice: "auto",
parallel_tool_calls: false,
parallel_tool_calls: true,
reasoning: None,
store: false,
stream: true,

File diff suppressed because it is too large.

View File

@@ -103,6 +103,18 @@ async fn run_compact_task_inner(
Err(CodexErr::Interrupted) => {
return;
}
Err(e @ CodexErr::ContextWindowExceeded) => {
sess.set_total_tokens_full(&sub_id, turn_context.as_ref())
.await;
let event = Event {
id: sub_id.clone(),
msg: EventMsg::Error(ErrorEvent {
message: e.to_string(),
}),
};
sess.send_event(event).await;
return;
}
Err(e) => {
if retries < max_retries {
retries += 1;

View File

@@ -1,25 +1,431 @@
// This is a WIP. This will eventually contain a real list of common safe Windows commands.
pub fn is_safe_command_windows(_command: &[String]) -> bool {
use shlex::split as shlex_split;
/// On Windows, we conservatively allow only clearly read-only PowerShell invocations
/// that match a small safelist. Anything else (including direct CMD commands) is unsafe.
pub fn is_safe_command_windows(command: &[String]) -> bool {
if let Some(commands) = try_parse_powershell_command_sequence(command) {
return commands
.iter()
.all(|cmd| is_safe_powershell_command(cmd.as_slice()));
}
// Only PowerShell invocations are allowed on Windows for now; anything else is unsafe.
false
}
/// Returns the individual command sequences when the invocation starts with a PowerShell binary.
/// For example, the tokens from `pwsh Get-ChildItem | Measure-Object` become two sequences.
fn try_parse_powershell_command_sequence(command: &[String]) -> Option<Vec<Vec<String>>> {
let (exe, rest) = command.split_first()?;
if !is_powershell_executable(exe) {
return None;
}
parse_powershell_invocation(rest)
}
/// Parses a PowerShell invocation into discrete command vectors, rejecting unsafe patterns.
fn parse_powershell_invocation(args: &[String]) -> Option<Vec<Vec<String>>> {
if args.is_empty() {
// Examples rejected here: "pwsh" and "powershell.exe" with no additional arguments.
return None;
}
let mut idx = 0;
while idx < args.len() {
let arg = &args[idx];
let lower = arg.to_ascii_lowercase();
match lower.as_str() {
"-command" | "/command" | "-c" => {
let script = args.get(idx + 1)?;
if idx + 2 != args.len() {
// Reject if there is more than one token representing the actual command.
// Examples rejected here: "pwsh -Command foo bar" and "powershell -c ls extra".
return None;
}
return parse_powershell_script(script);
}
_ if lower.starts_with("-command:") || lower.starts_with("/command:") => {
if idx + 1 != args.len() {
// Reject if there are more tokens after the command itself.
// Examples rejected here: "pwsh -Command:dir C:\\" and "powershell /Command:dir C:\\" with trailing args.
return None;
}
let script = arg.split_once(':')?.1;
return parse_powershell_script(script);
}
// Benign, no-arg flags we tolerate.
"-nologo" | "-noprofile" | "-noninteractive" | "-mta" | "-sta" => {
idx += 1;
continue;
}
// Explicitly forbidden/opaque or unnecessary for read-only operations.
"-encodedcommand" | "-ec" | "-file" | "/file" | "-windowstyle" | "-executionpolicy"
| "-workingdirectory" => {
// Examples rejected here: "pwsh -EncodedCommand ..." and "powershell -File script.ps1".
return None;
}
// Unknown switch → bail conservatively.
_ if lower.starts_with('-') => {
// Examples rejected here: "pwsh -UnknownFlag" and "powershell -foo bar".
return None;
}
// If we hit non-flag tokens, treat the remainder as a command sequence.
// This happens if powershell is invoked without -Command, e.g.
// ["pwsh", "-NoLogo", "git", "-c", "core.pager=cat", "status"]
_ => {
return split_into_commands(args[idx..].to_vec());
}
}
}
// Examples rejected here: "pwsh" and "powershell.exe -NoLogo" without a script.
None
}
/// Tokenizes an inline PowerShell script and delegates to the command splitter.
/// Examples of when this is called: pwsh.exe -Command '<script>' or pwsh.exe -Command:<script>
fn parse_powershell_script(script: &str) -> Option<Vec<Vec<String>>> {
let tokens = shlex_split(script)?;
split_into_commands(tokens)
}
/// Splits tokens into pipeline segments while ensuring no unsafe separators slip through.
/// e.g. Get-ChildItem | Measure-Object -> [['Get-ChildItem'], ['Measure-Object']]
fn split_into_commands(tokens: Vec<String>) -> Option<Vec<Vec<String>>> {
if tokens.is_empty() {
// Examples rejected here: "pwsh -Command ''" and "powershell -Command \"\"".
return None;
}
let mut commands = Vec::new();
let mut current = Vec::new();
for token in tokens.into_iter() {
match token.as_str() {
"|" | "||" | "&&" | ";" => {
if current.is_empty() {
// Examples rejected here: "pwsh -Command '| Get-ChildItem'" and "pwsh -Command '; dir'".
return None;
}
commands.push(current);
current = Vec::new();
}
// Reject if any token embeds separators, redirection, or call operator characters.
_ if token.contains(['|', ';', '>', '<', '&']) || token.contains("$(") => {
// Examples rejected here: "pwsh -Command 'dir|select'" and "pwsh -Command 'echo hi > out.txt'".
return None;
}
_ => current.push(token),
}
}
if current.is_empty() {
// Examples rejected here: "pwsh -Command 'dir |'" and "pwsh -Command 'Get-ChildItem ;'".
return None;
}
commands.push(current);
Some(commands)
}
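// A minimal sketch of the splitting behavior: pipeline separators yield one command
// per segment, while separators hidden inside a single token are rejected outright.
#[cfg(test)]
mod split_into_commands_sketch {
    use super::*;

    fn toks(args: &[&str]) -> Vec<String> {
        args.iter().map(|s| s.to_string()).collect()
    }

    #[test]
    fn splits_pipelines_and_rejects_embedded_separators() {
        assert_eq!(
            split_into_commands(toks(&["Get-ChildItem", "|", "Measure-Object"])),
            Some(vec![toks(&["Get-ChildItem"]), toks(&["Measure-Object"])])
        );
        // "dir|select" hides the pipe inside one token, so the whole script is unsafe.
        assert_eq!(split_into_commands(toks(&["dir|select"])), None);
    }
}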
/// Returns true when the executable name is one of the supported PowerShell binaries.
fn is_powershell_executable(exe: &str) -> bool {
matches!(
exe.to_ascii_lowercase().as_str(),
"powershell" | "powershell.exe" | "pwsh" | "pwsh.exe"
)
}
/// Validates that a parsed PowerShell command stays within our read-only safelist.
/// Everything before this point is parsing and conservative rejection; this is the final safelist check.
fn is_safe_powershell_command(words: &[String]) -> bool {
if words.is_empty() {
// Examples rejected here: "pwsh -Command ''" and "pwsh -Command \"\"".
return false;
}
// Reject nested unsafe cmdlets inside parentheses or arguments
for w in words.iter() {
let inner = w
.trim_matches(|c| c == '(' || c == ')')
.trim_start_matches('-')
.to_ascii_lowercase();
if matches!(
inner.as_str(),
"set-content"
| "add-content"
| "out-file"
| "new-item"
| "remove-item"
| "move-item"
| "copy-item"
| "rename-item"
| "start-process"
| "stop-process"
) {
// Examples rejected here: "Write-Output (Set-Content foo6.txt 'abc')" and "Get-Content (New-Item bar.txt)".
return false;
}
}
// Block PowerShell call operator or any redirection explicitly.
if words.iter().any(|w| {
matches!(
w.as_str(),
"&" | ">" | ">>" | "1>" | "2>" | "2>&1" | "*>" | "<" | "<<"
)
}) {
// Examples rejected here: "pwsh -Command '& Remove-Item foo'" and "pwsh -Command 'Get-Content foo > bar'".
return false;
}
let command = words[0]
.trim_matches(|c| c == '(' || c == ')')
.trim_start_matches('-')
.to_ascii_lowercase();
match command.as_str() {
"echo" | "write-output" | "write-host" => true, // (no redirection allowed)
"dir" | "ls" | "get-childitem" | "gci" => true,
"cat" | "type" | "gc" | "get-content" => true,
"select-string" | "sls" | "findstr" => true,
"measure-object" | "measure" => true,
"get-location" | "gl" | "pwd" => true,
"test-path" | "tp" => true,
"resolve-path" | "rvpa" => true,
"select-object" | "select" => true,
"get-item" => true,
"git" => is_safe_git_command(words),
"rg" => is_safe_ripgrep(words),
// Extra safety: explicitly prohibit common side-effecting cmdlets regardless of args.
"set-content" | "add-content" | "out-file" | "new-item" | "remove-item" | "move-item"
| "copy-item" | "rename-item" | "start-process" | "stop-process" => {
// Examples rejected here: "pwsh -Command 'Set-Content notes.txt data'" and "pwsh -Command 'Remove-Item temp.log'".
false
}
_ => {
// Examples rejected here: "pwsh -Command 'Invoke-WebRequest https://example.com'" and "pwsh -Command 'Start-Service Spooler'".
false
}
}
}
/// Checks that an `rg` invocation avoids options that can spawn arbitrary executables.
fn is_safe_ripgrep(words: &[String]) -> bool {
const UNSAFE_RIPGREP_OPTIONS_WITH_ARGS: &[&str] = &["--pre", "--hostname-bin"];
const UNSAFE_RIPGREP_OPTIONS_WITHOUT_ARGS: &[&str] = &["--search-zip", "-z"];
!words.iter().skip(1).any(|arg| {
let arg_lc = arg.to_ascii_lowercase();
// Examples rejected here: "pwsh -Command 'rg --pre cat pattern'" and "pwsh -Command 'rg --search-zip pattern'".
UNSAFE_RIPGREP_OPTIONS_WITHOUT_ARGS.contains(&arg_lc.as_str())
|| UNSAFE_RIPGREP_OPTIONS_WITH_ARGS
.iter()
.any(|opt| arg_lc == *opt || arg_lc.starts_with(&format!("{opt}=")))
})
}
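// A minimal sketch of the rg policy: matching is case-insensitive and also catches
// the `--opt=value` spelling of the exec-capable options.
#[cfg(test)]
mod ripgrep_safelist_sketch {
    use super::*;

    fn toks(args: &[&str]) -> Vec<String> {
        args.iter().map(|s| s.to_string()).collect()
    }

    #[test]
    fn rejects_exec_capable_options() {
        assert!(is_safe_ripgrep(&toks(&["rg", "pattern", "src/"])));
        assert!(!is_safe_ripgrep(&toks(&["rg", "--pre", "cat", "pattern"])));
        assert!(!is_safe_ripgrep(&toks(&["rg", "--PRE=cat", "pattern"])));
        assert!(!is_safe_ripgrep(&toks(&["rg", "-z", "pattern"])));
    }
}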
/// Ensures a Git command sticks to whitelisted read-only subcommands and flags.
fn is_safe_git_command(words: &[String]) -> bool {
const SAFE_SUBCOMMANDS: &[&str] = &["status", "log", "show", "diff", "cat-file"];
let mut iter = words.iter().skip(1);
while let Some(arg) = iter.next() {
let arg_lc = arg.to_ascii_lowercase();
if arg.starts_with('-') {
if arg.eq_ignore_ascii_case("-c") || arg.eq_ignore_ascii_case("--config") {
if iter.next().is_none() {
// Examples rejected here: "pwsh -Command 'git -c'" and "pwsh -Command 'git --config'".
return false;
}
continue;
}
if arg_lc.starts_with("-c=")
|| arg_lc.starts_with("--config=")
|| arg_lc.starts_with("--git-dir=")
|| arg_lc.starts_with("--work-tree=")
{
continue;
}
if arg.eq_ignore_ascii_case("--git-dir") || arg.eq_ignore_ascii_case("--work-tree") {
if iter.next().is_none() {
// Examples rejected here: "pwsh -Command 'git --git-dir'" and "pwsh -Command 'git --work-tree'".
return false;
}
continue;
}
continue;
}
return SAFE_SUBCOMMANDS.contains(&arg_lc.as_str());
}
// Examples rejected here: "pwsh -Command 'git'" and "pwsh -Command 'git status --short | Remove-Item foo'".
false
}
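// A minimal sketch of the git policy: flags (and their `-c`/`--config` arguments)
// are skipped, and the first non-flag token must be a whitelisted read-only
// subcommand.
#[cfg(test)]
mod git_safelist_sketch {
    use super::*;

    fn toks(args: &[&str]) -> Vec<String> {
        args.iter().map(|s| s.to_string()).collect()
    }

    #[test]
    fn first_non_flag_token_decides() {
        assert!(is_safe_git_command(&toks(&["git", "-c", "core.pager=cat", "status"])));
        // `push` is not in SAFE_SUBCOMMANDS.
        assert!(!is_safe_git_command(&toks(&["git", "push", "origin", "main"])));
        // A bare `git` never reaches a subcommand, so it is rejected.
        assert!(!is_safe_git_command(&toks(&["git"])));
    }
}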
#[cfg(test)]
mod tests {
use super::is_safe_command_windows;
use std::string::ToString;
/// Converts a slice of string literals into owned `String`s for the tests.
fn vec_str(args: &[&str]) -> Vec<String> {
args.iter().map(ToString::to_string).collect()
}
#[test]
fn rejects_non_powershell_commands() {
for cmd in [
vec_str(&["copy", "foo", "bar"]),
vec_str(&["del", "file.txt"]),
] {
assert!(!is_safe_command_windows(&cmd));
}
}
#[test]
fn recognizes_safe_powershell_wrappers() {
assert!(is_safe_command_windows(&vec_str(&[
"powershell.exe",
"-NoLogo",
"-Command",
"Get-ChildItem -Path .",
])));
assert!(is_safe_command_windows(&vec_str(&[
"powershell.exe",
"-NoProfile",
"-Command",
"git status",
])));
assert!(is_safe_command_windows(&vec_str(&[
"powershell.exe",
"Get-Content",
"Cargo.toml",
])));
// pwsh parity
assert!(is_safe_command_windows(&vec_str(&[
"pwsh.exe",
"-NoProfile",
"-Command",
"Get-ChildItem",
])));
}
#[test]
fn allows_read_only_pipelines_and_git_usage() {
assert!(is_safe_command_windows(&vec_str(&[
"pwsh",
"-NoLogo",
"-NoProfile",
"-Command",
"rg --files-with-matches foo | Measure-Object | Select-Object -ExpandProperty Count",
])));
assert!(is_safe_command_windows(&vec_str(&[
"pwsh",
"-NoLogo",
"-NoProfile",
"-Command",
"Get-Content foo.rs | Select-Object -Skip 200",
])));
assert!(is_safe_command_windows(&vec_str(&[
"pwsh",
"-NoLogo",
"-NoProfile",
"-Command",
"git -c core.pager=cat show HEAD:foo.rs",
])));
assert!(is_safe_command_windows(&vec_str(&[
"pwsh",
"-Command",
"-git cat-file -p HEAD:foo.rs",
])));
assert!(is_safe_command_windows(&vec_str(&[
"pwsh",
"-Command",
"(Get-Content foo.rs -Raw)",
])));
assert!(is_safe_command_windows(&vec_str(&[
"pwsh",
"-Command",
"Get-Item foo.rs | Select-Object Length",
])));
}
#[test]
fn rejects_powershell_commands_with_side_effects() {
assert!(!is_safe_command_windows(&vec_str(&[
"powershell.exe",
"-NoLogo",
"-Command",
"Remove-Item foo.txt",
])));
assert!(!is_safe_command_windows(&vec_str(&[
"powershell.exe",
"-NoProfile",
"-Command",
"rg --pre cat",
])));
assert!(!is_safe_command_windows(&vec_str(&[
"powershell.exe",
"-Command",
"Set-Content foo.txt 'hello'",
])));
// Redirections are blocked
assert!(!is_safe_command_windows(&vec_str(&[
"powershell.exe",
"-Command",
"echo hi > out.txt",
])));
assert!(!is_safe_command_windows(&vec_str(&[
"powershell.exe",
"-Command",
"Get-Content x | Out-File y",
])));
assert!(!is_safe_command_windows(&vec_str(&[
"powershell.exe",
"-Command",
"Write-Output foo 2> err.txt",
])));
// Call operator is blocked
assert!(!is_safe_command_windows(&vec_str(&[
"powershell.exe",
"-Command",
"& Remove-Item foo",
])));
// Chained safe + unsafe must fail
assert!(!is_safe_command_windows(&vec_str(&[
"powershell.exe",
"-Command",
"Get-ChildItem; Remove-Item foo",
])));
// Nested unsafe cmdlet inside safe command must fail
assert!(!is_safe_command_windows(&vec_str(&[
"powershell.exe",
"-Command",
"Write-Output (Set-Content foo6.txt 'abc')",
])));
// Additional nested unsafe cmdlet examples must fail
assert!(!is_safe_command_windows(&vec_str(&[
"powershell.exe",
"-Command",
"Write-Host (Remove-Item foo.txt)",
])));
assert!(!is_safe_command_windows(&vec_str(&[
"powershell.exe",
"-Command",
"Get-Content (New-Item bar.txt)",
])));
}
}

View File

@@ -1,3 +1,7 @@
use crate::config_loader::LoadedConfigLayers;
pub use crate::config_loader::load_config_as_toml;
use crate::config_loader::load_config_layers_with_overrides;
use crate::config_loader::merge_toml_values;
use crate::config_profile::ConfigProfile;
use crate::config_types::DEFAULT_OTEL_ENVIRONMENT;
use crate::config_types::History;
@@ -23,12 +27,12 @@ use crate::openai_model_info::get_model_info;
use crate::protocol::AskForApproval;
use crate::protocol::SandboxPolicy;
use anyhow::Context;
use codex_app_server_protocol::Tools;
use codex_app_server_protocol::UserSavedConfig;
use codex_protocol::config_types::ReasoningEffort;
use codex_protocol::config_types::ReasoningSummary;
use codex_protocol::config_types::SandboxMode;
use codex_protocol::config_types::Verbosity;
use codex_protocol::mcp_protocol::Tools;
use codex_protocol::mcp_protocol::UserSavedConfig;
use dirs::home_dir;
use serde::Deserialize;
use std::collections::BTreeMap;
@@ -42,7 +46,10 @@ use toml_edit::DocumentMut;
use toml_edit::Item as TomlItem;
use toml_edit::Table as TomlTable;
const OPENAI_DEFAULT_MODEL: &str = "gpt-5-codex";
#[cfg(target_os = "windows")]
pub const OPENAI_DEFAULT_MODEL: &str = "gpt-5";
#[cfg(not(target_os = "windows"))]
pub const OPENAI_DEFAULT_MODEL: &str = "gpt-5-codex";
const OPENAI_DEFAULT_REVIEW_MODEL: &str = "gpt-5-codex";
pub const GPT_5_CODEX_MEDIUM_MODEL: &str = "gpt-5-codex";
@@ -141,6 +148,9 @@ pub struct Config {
/// Maximum number of bytes to include from an AGENTS.md project doc file.
pub project_doc_max_bytes: usize,
/// Additional filenames to try when looking for project-level docs.
pub project_doc_fallback_filenames: Vec<String>,
/// Directory containing all Codex state (defaults to `~/.codex` but can be
/// overridden by the `CODEX_HOME` environment variable).
pub codex_home: PathBuf,
@@ -199,6 +209,9 @@ pub struct Config {
/// The active profile name used to derive this `Config` (if any).
pub active_profile: Option<String>,
/// Tracks whether the Windows onboarding screen has been acknowledged.
pub windows_wsl_setup_acknowledged: bool,
/// When true, disables burst-paste detection for typed input entirely.
/// All characters are inserted as they are received, and no buffering
/// or placeholder replacement will occur for fast keypress bursts.
@@ -209,50 +222,38 @@ pub struct Config {
}
impl Config {
/// Load configuration with *generic* CLI overrides (`-c key=value`) applied
/// **in between** the values parsed from `config.toml` and the
/// strongly-typed overrides specified via [`ConfigOverrides`].
///
/// The precedence order is therefore: `config.toml` < `-c` overrides <
/// `ConfigOverrides`.
pub fn load_with_cli_overrides(
pub async fn load_with_cli_overrides(
cli_overrides: Vec<(String, TomlValue)>,
overrides: ConfigOverrides,
) -> std::io::Result<Self> {
// Resolve the directory that stores Codex state (e.g. ~/.codex or the
// value of $CODEX_HOME) so we can embed it into the resulting
// `Config` instance.
let codex_home = find_codex_home()?;
// Step 1: parse `config.toml` into a generic TOML value.
let mut root_value = load_config_as_toml(&codex_home)?;
let root_value = load_resolved_config(
&codex_home,
cli_overrides,
crate::config_loader::LoaderOverrides::default(),
)
.await?;
// Step 2: apply the `-c` overrides.
for (path, value) in cli_overrides.into_iter() {
apply_toml_override(&mut root_value, &path, value);
}
// Step 3: deserialize into `ConfigToml` so that Serde can enforce the
// correct types.
let cfg: ConfigToml = root_value.try_into().map_err(|e| {
tracing::error!("Failed to deserialize overridden config: {e}");
std::io::Error::new(std::io::ErrorKind::InvalidData, e)
})?;
// Step 4: merge with the strongly-typed overrides.
Self::load_from_base_config_with_overrides(cfg, overrides, codex_home)
}
}
pub fn load_config_as_toml_with_cli_overrides(
pub async fn load_config_as_toml_with_cli_overrides(
codex_home: &Path,
cli_overrides: Vec<(String, TomlValue)>,
) -> std::io::Result<ConfigToml> {
let mut root_value = load_config_as_toml(codex_home)?;
for (path, value) in cli_overrides.into_iter() {
apply_toml_override(&mut root_value, &path, value);
}
let root_value = load_resolved_config(
codex_home,
cli_overrides,
crate::config_loader::LoaderOverrides::default(),
)
.await?;
let cfg: ConfigToml = root_value.try_into().map_err(|e| {
tracing::error!("Failed to deserialize overridden config: {e}");
@@ -262,33 +263,40 @@ pub fn load_config_as_toml_with_cli_overrides(
Ok(cfg)
}
/// Read `CODEX_HOME/config.toml` and return it as a generic TOML value. Returns
/// an empty TOML table when the file does not exist.
pub fn load_config_as_toml(codex_home: &Path) -> std::io::Result<TomlValue> {
let config_path = codex_home.join(CONFIG_TOML_FILE);
match std::fs::read_to_string(&config_path) {
Ok(contents) => match toml::from_str::<TomlValue>(&contents) {
Ok(val) => Ok(val),
Err(e) => {
tracing::error!("Failed to parse config.toml: {e}");
Err(std::io::Error::new(std::io::ErrorKind::InvalidData, e))
}
},
Err(e) if e.kind() == std::io::ErrorKind::NotFound => {
tracing::info!("config.toml not found, using defaults");
Ok(TomlValue::Table(Default::default()))
}
Err(e) => {
tracing::error!("Failed to read config.toml: {e}");
Err(e)
}
}
async fn load_resolved_config(
codex_home: &Path,
cli_overrides: Vec<(String, TomlValue)>,
overrides: crate::config_loader::LoaderOverrides,
) -> std::io::Result<TomlValue> {
let layers = load_config_layers_with_overrides(codex_home, overrides).await?;
Ok(apply_overlays(layers, cli_overrides))
}
pub fn load_global_mcp_servers(
fn apply_overlays(
layers: LoadedConfigLayers,
cli_overrides: Vec<(String, TomlValue)>,
) -> TomlValue {
let LoadedConfigLayers {
mut base,
managed_config,
managed_preferences,
} = layers;
for (path, value) in cli_overrides.into_iter() {
apply_toml_override(&mut base, &path, value);
}
for overlay in [managed_config, managed_preferences].into_iter().flatten() {
merge_toml_values(&mut base, &overlay);
}
base
}
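// A minimal sketch of the resulting precedence: `-c` overrides are applied to the
// user's config.toml first, and the managed layers are merged on top afterwards,
// so managed values win over both.
#[cfg(test)]
mod apply_overlays_sketch {
    use super::*;

    #[test]
    fn managed_layers_win_over_cli_overrides() {
        let layers = LoadedConfigLayers {
            base: toml::from_str("model = \"base\"").expect("base"),
            managed_config: Some(toml::from_str("model = \"managed\"").expect("managed")),
            managed_preferences: None,
        };
        let cli = vec![("model".to_string(), TomlValue::String("cli".to_string()))];
        let merged = apply_overlays(layers, cli);
        assert_eq!(
            merged.get("model"),
            Some(&TomlValue::String("managed".to_string()))
        );
    }
}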
pub async fn load_global_mcp_servers(
codex_home: &Path,
) -> std::io::Result<BTreeMap<String, McpServerConfig>> {
let root_value = load_config_as_toml(codex_home)?;
let root_value = load_config_as_toml(codex_home).await?;
let Some(servers_value) = root_value.get("mcp_servers") else {
return Ok(BTreeMap::new());
};
@@ -466,6 +474,29 @@ pub fn set_project_trusted(codex_home: &Path, project_path: &Path) -> anyhow::Re
Ok(())
}
/// Persist the acknowledgement flag for the Windows onboarding screen.
pub fn set_windows_wsl_setup_acknowledged(
codex_home: &Path,
acknowledged: bool,
) -> anyhow::Result<()> {
let config_path = codex_home.join(CONFIG_TOML_FILE);
let mut doc = match std::fs::read_to_string(config_path.clone()) {
Ok(s) => s.parse::<DocumentMut>()?,
Err(e) if e.kind() == std::io::ErrorKind::NotFound => DocumentMut::new(),
Err(e) => return Err(e.into()),
};
doc["windows_wsl_setup_acknowledged"] = toml_edit::value(acknowledged);
std::fs::create_dir_all(codex_home)?;
let tmp_file = NamedTempFile::new_in(codex_home)?;
std::fs::write(tmp_file.path(), doc.to_string())?;
tmp_file.persist(config_path)?;
Ok(())
}
fn ensure_profile_table<'a>(
doc: &'a mut DocumentMut,
profile_name: &str,
@@ -670,6 +701,9 @@ pub struct ConfigToml {
/// Maximum number of bytes to include from an AGENTS.md project doc file.
pub project_doc_max_bytes: Option<usize>,
/// Ordered list of fallback filenames to look for when AGENTS.md is missing.
pub project_doc_fallback_filenames: Option<Vec<String>>,
/// Profile to use from the `profiles` map.
pub profile: Option<String>,
@@ -716,6 +750,7 @@ pub struct ConfigToml {
pub experimental_use_exec_command_tool: Option<bool>,
pub experimental_use_unified_exec_tool: Option<bool>,
pub experimental_use_rmcp_client: Option<bool>,
pub experimental_use_freeform_apply_patch: Option<bool>,
pub projects: Option<HashMap<String, ProjectConfig>>,
@@ -729,6 +764,9 @@ pub struct ConfigToml {
/// OTEL configuration.
pub otel: Option<crate::config_types::OtelConfigToml>,
/// Tracks whether the Windows onboarding screen has been acknowledged.
pub windows_wsl_setup_acknowledged: Option<bool>,
}
impl From<ConfigToml> for UserSavedConfig {
@@ -1038,6 +1076,19 @@ impl Config {
mcp_servers: cfg.mcp_servers,
model_providers,
project_doc_max_bytes: cfg.project_doc_max_bytes.unwrap_or(PROJECT_DOC_MAX_BYTES),
project_doc_fallback_filenames: cfg
.project_doc_fallback_filenames
.unwrap_or_default()
.into_iter()
.filter_map(|name| {
let trimmed = name.trim();
if trimmed.is_empty() {
None
} else {
Some(trimmed.to_string())
}
})
.collect(),
codex_home,
history,
file_opener: cfg.file_opener.unwrap_or(UriBasedFileOpener::VsCode),
@@ -1061,7 +1112,9 @@ impl Config {
.or(cfg.chatgpt_base_url)
.unwrap_or("https://chatgpt.com/backend-api/".to_string()),
include_plan_tool: include_plan_tool.unwrap_or(false),
include_apply_patch_tool: include_apply_patch_tool.unwrap_or(false),
include_apply_patch_tool: include_apply_patch_tool
.or(cfg.experimental_use_freeform_apply_patch)
.unwrap_or(false),
tools_web_search_request,
use_experimental_streamable_shell_tool: cfg
.experimental_use_exec_command_tool
@@ -1072,6 +1125,7 @@ impl Config {
use_experimental_use_rmcp_client: cfg.experimental_use_rmcp_client.unwrap_or(false),
include_view_image_tool,
active_profile: active_profile_name,
windows_wsl_setup_acknowledged: cfg.windows_wsl_setup_acknowledged.unwrap_or(false),
disable_paste_burst: cfg.disable_paste_burst.unwrap_or(false),
tui_notifications: cfg
.tui
@@ -1310,18 +1364,18 @@ exclude_slash_tmp = true
);
}
#[test]
fn load_global_mcp_servers_returns_empty_if_missing() -> anyhow::Result<()> {
#[tokio::test]
async fn load_global_mcp_servers_returns_empty_if_missing() -> anyhow::Result<()> {
let codex_home = TempDir::new()?;
let servers = load_global_mcp_servers(codex_home.path())?;
let servers = load_global_mcp_servers(codex_home.path()).await?;
assert!(servers.is_empty());
Ok(())
}
#[test]
fn write_global_mcp_servers_round_trips_entries() -> anyhow::Result<()> {
#[tokio::test]
async fn write_global_mcp_servers_round_trips_entries() -> anyhow::Result<()> {
let codex_home = TempDir::new()?;
let mut servers = BTreeMap::new();
@@ -1340,7 +1394,7 @@ exclude_slash_tmp = true
write_global_mcp_servers(codex_home.path(), &servers)?;
let loaded = load_global_mcp_servers(codex_home.path())?;
let loaded = load_global_mcp_servers(codex_home.path()).await?;
assert_eq!(loaded.len(), 1);
let docs = loaded.get("docs").expect("docs entry");
match &docs.transport {
@@ -1356,14 +1410,47 @@ exclude_slash_tmp = true
let empty = BTreeMap::new();
write_global_mcp_servers(codex_home.path(), &empty)?;
let loaded = load_global_mcp_servers(codex_home.path())?;
let loaded = load_global_mcp_servers(codex_home.path()).await?;
assert!(loaded.is_empty());
Ok(())
}
#[test]
fn load_global_mcp_servers_accepts_legacy_ms_field() -> anyhow::Result<()> {
#[tokio::test]
async fn managed_config_wins_over_cli_overrides() -> anyhow::Result<()> {
let codex_home = TempDir::new()?;
let managed_path = codex_home.path().join("managed_config.toml");
std::fs::write(
codex_home.path().join(CONFIG_TOML_FILE),
"model = \"base\"\n",
)?;
std::fs::write(&managed_path, "model = \"managed_config\"\n")?;
let overrides = crate::config_loader::LoaderOverrides {
managed_config_path: Some(managed_path),
#[cfg(target_os = "macos")]
managed_preferences_base64: None,
};
let root_value = load_resolved_config(
codex_home.path(),
vec![("model".to_string(), TomlValue::String("cli".to_string()))],
overrides,
)
.await?;
let cfg: ConfigToml = root_value.try_into().map_err(|e| {
tracing::error!("Failed to deserialize overridden config: {e}");
std::io::Error::new(std::io::ErrorKind::InvalidData, e)
})?;
assert_eq!(cfg.model.as_deref(), Some("managed_config"));
Ok(())
}
#[tokio::test]
async fn load_global_mcp_servers_accepts_legacy_ms_field() -> anyhow::Result<()> {
let codex_home = TempDir::new()?;
let config_path = codex_home.path().join(CONFIG_TOML_FILE);
@@ -1377,15 +1464,15 @@ startup_timeout_ms = 2500
"#,
)?;
let servers = load_global_mcp_servers(codex_home.path())?;
let servers = load_global_mcp_servers(codex_home.path()).await?;
let docs = servers.get("docs").expect("docs entry");
assert_eq!(docs.startup_timeout_sec, Some(Duration::from_millis(2500)));
Ok(())
}
#[test]
fn write_global_mcp_servers_serializes_env_sorted() -> anyhow::Result<()> {
#[tokio::test]
async fn write_global_mcp_servers_serializes_env_sorted() -> anyhow::Result<()> {
let codex_home = TempDir::new()?;
let servers = BTreeMap::from([(
@@ -1420,7 +1507,7 @@ ZIG_VAR = "3"
"#
);
let loaded = load_global_mcp_servers(codex_home.path())?;
let loaded = load_global_mcp_servers(codex_home.path()).await?;
let docs = loaded.get("docs").expect("docs entry");
match &docs.transport {
McpServerTransportConfig::Stdio { command, args, env } => {
@@ -1438,8 +1525,8 @@ ZIG_VAR = "3"
Ok(())
}
#[test]
fn write_global_mcp_servers_serializes_streamable_http() -> anyhow::Result<()> {
#[tokio::test]
async fn write_global_mcp_servers_serializes_streamable_http() -> anyhow::Result<()> {
let codex_home = TempDir::new()?;
let mut servers = BTreeMap::from([(
@@ -1467,7 +1554,7 @@ startup_timeout_sec = 2.0
"#
);
let loaded = load_global_mcp_servers(codex_home.path())?;
let loaded = load_global_mcp_servers(codex_home.path()).await?;
let docs = loaded.get("docs").expect("docs entry");
match &docs.transport {
McpServerTransportConfig::StreamableHttp { url, bearer_token } => {
@@ -1499,7 +1586,7 @@ url = "https://example.com/mcp"
"#
);
let loaded = load_global_mcp_servers(codex_home.path())?;
let loaded = load_global_mcp_servers(codex_home.path()).await?;
let docs = loaded.get("docs").expect("docs entry");
match &docs.transport {
McpServerTransportConfig::StreamableHttp { url, bearer_token } => {
@@ -1811,6 +1898,7 @@ model_verbosity = "high"
mcp_servers: HashMap::new(),
model_providers: fixture.model_provider_map.clone(),
project_doc_max_bytes: PROJECT_DOC_MAX_BYTES,
project_doc_fallback_filenames: Vec::new(),
codex_home: fixture.codex_home(),
history: History::default(),
file_opener: UriBasedFileOpener::VsCode,
@@ -1830,6 +1918,7 @@ model_verbosity = "high"
use_experimental_use_rmcp_client: false,
include_view_image_tool: true,
active_profile: Some("o3".to_string()),
windows_wsl_setup_acknowledged: false,
disable_paste_burst: false,
tui_notifications: Default::default(),
otel: OtelConfig::default(),
@@ -1871,6 +1960,7 @@ model_verbosity = "high"
mcp_servers: HashMap::new(),
model_providers: fixture.model_provider_map.clone(),
project_doc_max_bytes: PROJECT_DOC_MAX_BYTES,
project_doc_fallback_filenames: Vec::new(),
codex_home: fixture.codex_home(),
history: History::default(),
file_opener: UriBasedFileOpener::VsCode,
@@ -1890,6 +1980,7 @@ model_verbosity = "high"
use_experimental_use_rmcp_client: false,
include_view_image_tool: true,
active_profile: Some("gpt3".to_string()),
windows_wsl_setup_acknowledged: false,
disable_paste_burst: false,
tui_notifications: Default::default(),
otel: OtelConfig::default(),
@@ -1946,6 +2037,7 @@ model_verbosity = "high"
mcp_servers: HashMap::new(),
model_providers: fixture.model_provider_map.clone(),
project_doc_max_bytes: PROJECT_DOC_MAX_BYTES,
project_doc_fallback_filenames: Vec::new(),
codex_home: fixture.codex_home(),
history: History::default(),
file_opener: UriBasedFileOpener::VsCode,
@@ -1965,6 +2057,7 @@ model_verbosity = "high"
use_experimental_use_rmcp_client: false,
include_view_image_tool: true,
active_profile: Some("zdr".to_string()),
windows_wsl_setup_acknowledged: false,
disable_paste_burst: false,
tui_notifications: Default::default(),
otel: OtelConfig::default(),
@@ -2007,6 +2100,7 @@ model_verbosity = "high"
mcp_servers: HashMap::new(),
model_providers: fixture.model_provider_map.clone(),
project_doc_max_bytes: PROJECT_DOC_MAX_BYTES,
project_doc_fallback_filenames: Vec::new(),
codex_home: fixture.codex_home(),
history: History::default(),
file_opener: UriBasedFileOpener::VsCode,
@@ -2026,6 +2120,7 @@ model_verbosity = "high"
use_experimental_use_rmcp_client: false,
include_view_image_tool: true,
active_profile: Some("gpt5".to_string()),
windows_wsl_setup_acknowledged: false,
disable_paste_burst: false,
tui_notifications: Default::default(),
otel: OtelConfig::default(),
@@ -2136,6 +2231,7 @@ trust_level = "trusted"
#[cfg(test)]
mod notifications_tests {
use crate::config_types::Notifications;
use assert_matches::assert_matches;
use serde::Deserialize;
#[derive(Deserialize, Debug, PartialEq)]
@@ -2155,10 +2251,7 @@ mod notifications_tests {
notifications = true
"#;
let parsed: RootTomlTest = toml::from_str(toml).expect("deserialize notifications=true");
assert!(matches!(
parsed.tui.notifications,
Notifications::Enabled(true)
));
assert_matches!(parsed.tui.notifications, Notifications::Enabled(true));
}
#[test]
@@ -2169,9 +2262,9 @@ mod notifications_tests {
"#;
let parsed: RootTomlTest =
toml::from_str(toml).expect("deserialize notifications=[\"foo\"]");
assert!(matches!(
assert_matches!(
parsed.tui.notifications,
Notifications::Custom(ref v) if v == &vec!["foo".to_string()]
));
);
}
}

View File

@@ -0,0 +1,118 @@
use std::io;
use toml::Value as TomlValue;
#[cfg(target_os = "macos")]
mod native {
use super::*;
use base64::Engine;
use base64::prelude::BASE64_STANDARD;
use core_foundation::base::TCFType;
use core_foundation::string::CFString;
use core_foundation::string::CFStringRef;
use std::ffi::c_void;
use tokio::task;
pub(crate) async fn load_managed_admin_config_layer(
override_base64: Option<&str>,
) -> io::Result<Option<TomlValue>> {
if let Some(encoded) = override_base64 {
let trimmed = encoded.trim();
return if trimmed.is_empty() {
Ok(None)
} else {
parse_managed_preferences_base64(trimmed).map(Some)
};
}
const LOAD_ERROR: &str = "Failed to load managed preferences configuration";
match task::spawn_blocking(load_managed_admin_config).await {
Ok(result) => result,
Err(join_err) => {
if join_err.is_cancelled() {
tracing::error!("Managed preferences load task was cancelled");
} else {
tracing::error!("Managed preferences load task failed: {join_err}");
}
Err(io::Error::other(LOAD_ERROR))
}
}
}
pub(super) fn load_managed_admin_config() -> io::Result<Option<TomlValue>> {
#[link(name = "CoreFoundation", kind = "framework")]
unsafe extern "C" {
fn CFPreferencesCopyAppValue(
key: CFStringRef,
application_id: CFStringRef,
) -> *mut c_void;
}
const MANAGED_PREFERENCES_APPLICATION_ID: &str = "com.openai.codex";
const MANAGED_PREFERENCES_CONFIG_KEY: &str = "config_toml_base64";
let application_id = CFString::new(MANAGED_PREFERENCES_APPLICATION_ID);
let key = CFString::new(MANAGED_PREFERENCES_CONFIG_KEY);
let value_ref = unsafe {
CFPreferencesCopyAppValue(
key.as_concrete_TypeRef(),
application_id.as_concrete_TypeRef(),
)
};
if value_ref.is_null() {
tracing::debug!(
"Managed preferences for {} key {} not found",
MANAGED_PREFERENCES_APPLICATION_ID,
MANAGED_PREFERENCES_CONFIG_KEY
);
return Ok(None);
}
let value = unsafe { CFString::wrap_under_create_rule(value_ref as _) };
let contents = value.to_string();
let trimmed = contents.trim();
parse_managed_preferences_base64(trimmed).map(Some)
}
pub(super) fn parse_managed_preferences_base64(encoded: &str) -> io::Result<TomlValue> {
let decoded = BASE64_STANDARD.decode(encoded.as_bytes()).map_err(|err| {
tracing::error!("Failed to decode managed preferences as base64: {err}");
io::Error::new(io::ErrorKind::InvalidData, err)
})?;
let decoded_str = String::from_utf8(decoded).map_err(|err| {
tracing::error!("Managed preferences base64 contents were not valid UTF-8: {err}");
io::Error::new(io::ErrorKind::InvalidData, err)
})?;
match toml::from_str::<TomlValue>(&decoded_str) {
Ok(TomlValue::Table(parsed)) => Ok(TomlValue::Table(parsed)),
Ok(other) => {
tracing::error!(
"Managed preferences TOML must have a table at the root, found {other:?}",
);
Err(io::Error::new(
io::ErrorKind::InvalidData,
"managed preferences root must be a table",
))
}
Err(err) => {
tracing::error!("Failed to parse managed preferences TOML: {err}");
Err(io::Error::new(io::ErrorKind::InvalidData, err))
}
}
}
}
#[cfg(target_os = "macos")]
pub(crate) use native::load_managed_admin_config_layer;
#[cfg(not(target_os = "macos"))]
pub(crate) async fn load_managed_admin_config_layer(
_override_base64: Option<&str>,
) -> io::Result<Option<TomlValue>> {
Ok(None)
}
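// A minimal sketch of the expected payload shape: the managed preference value is
// base64-encoded TOML with a table at the root (assumes the `base64` crate, as used
// above, and only compiles on macOS where the native module exists).
#[cfg(all(test, target_os = "macos"))]
mod managed_preferences_sketch {
    use super::native::parse_managed_preferences_base64;
    use base64::Engine;
    use base64::prelude::BASE64_STANDARD;

    #[test]
    fn decodes_base64_toml_tables() {
        let encoded = BASE64_STANDARD.encode(b"model = \"gpt-5\"");
        let value = parse_managed_preferences_base64(&encoded).expect("valid payload");
        assert_eq!(value.get("model").and_then(|v| v.as_str()), Some("gpt-5"));

        // A bare scalar is not a valid TOML document, so it is rejected.
        let scalar = BASE64_STANDARD.encode(b"42");
        assert!(parse_managed_preferences_base64(&scalar).is_err());
    }
}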

View File

@@ -0,0 +1,311 @@
mod macos;
use crate::config::CONFIG_TOML_FILE;
use macos::load_managed_admin_config_layer;
use std::io;
use std::path::Path;
use std::path::PathBuf;
use tokio::fs;
use toml::Value as TomlValue;
#[cfg(unix)]
const CODEX_MANAGED_CONFIG_SYSTEM_PATH: &str = "/etc/codex/managed_config.toml";
#[derive(Debug)]
pub(crate) struct LoadedConfigLayers {
pub base: TomlValue,
pub managed_config: Option<TomlValue>,
pub managed_preferences: Option<TomlValue>,
}
#[derive(Debug, Default)]
pub(crate) struct LoaderOverrides {
pub managed_config_path: Option<PathBuf>,
#[cfg(target_os = "macos")]
pub managed_preferences_base64: Option<String>,
}
// Configuration layering pipeline (top overrides bottom):
//
// +-------------------------+
// | Managed preferences (*) |
// +-------------------------+
// ^
// |
// +-------------------------+
// | managed_config.toml |
// +-------------------------+
// ^
// |
// +-------------------------+
// | config.toml (base) |
// +-------------------------+
//
// (*) Only available on macOS via managed device profiles.
pub async fn load_config_as_toml(codex_home: &Path) -> io::Result<TomlValue> {
load_config_as_toml_with_overrides(codex_home, LoaderOverrides::default()).await
}
fn default_empty_table() -> TomlValue {
TomlValue::Table(Default::default())
}
pub(crate) async fn load_config_layers_with_overrides(
codex_home: &Path,
overrides: LoaderOverrides,
) -> io::Result<LoadedConfigLayers> {
load_config_layers_internal(codex_home, overrides).await
}
async fn load_config_as_toml_with_overrides(
codex_home: &Path,
overrides: LoaderOverrides,
) -> io::Result<TomlValue> {
let layers = load_config_layers_internal(codex_home, overrides).await?;
Ok(apply_managed_layers(layers))
}
async fn load_config_layers_internal(
codex_home: &Path,
overrides: LoaderOverrides,
) -> io::Result<LoadedConfigLayers> {
#[cfg(target_os = "macos")]
let LoaderOverrides {
managed_config_path,
managed_preferences_base64,
} = overrides;
#[cfg(not(target_os = "macos"))]
let LoaderOverrides {
managed_config_path,
} = overrides;
let managed_config_path =
managed_config_path.unwrap_or_else(|| managed_config_default_path(codex_home));
let user_config_path = codex_home.join(CONFIG_TOML_FILE);
let user_config = read_config_from_path(&user_config_path, true).await?;
let managed_config = read_config_from_path(&managed_config_path, false).await?;
#[cfg(target_os = "macos")]
let managed_preferences =
load_managed_admin_config_layer(managed_preferences_base64.as_deref()).await?;
#[cfg(not(target_os = "macos"))]
let managed_preferences = load_managed_admin_config_layer(None).await?;
Ok(LoadedConfigLayers {
base: user_config.unwrap_or_else(default_empty_table),
managed_config,
managed_preferences,
})
}
async fn read_config_from_path(
path: &Path,
log_missing_as_info: bool,
) -> io::Result<Option<TomlValue>> {
match fs::read_to_string(path).await {
Ok(contents) => match toml::from_str::<TomlValue>(&contents) {
Ok(value) => Ok(Some(value)),
Err(err) => {
tracing::error!("Failed to parse {}: {err}", path.display());
Err(io::Error::new(io::ErrorKind::InvalidData, err))
}
},
Err(err) if err.kind() == io::ErrorKind::NotFound => {
if log_missing_as_info {
tracing::info!("{} not found, using defaults", path.display());
} else {
tracing::debug!("{} not found", path.display());
}
Ok(None)
}
Err(err) => {
tracing::error!("Failed to read {}: {err}", path.display());
Err(err)
}
}
}
/// Merge config `overlay` into `base`, giving `overlay` precedence.
pub(crate) fn merge_toml_values(base: &mut TomlValue, overlay: &TomlValue) {
if let TomlValue::Table(overlay_table) = overlay
&& let TomlValue::Table(base_table) = base
{
for (key, value) in overlay_table {
if let Some(existing) = base_table.get_mut(key) {
merge_toml_values(existing, value);
} else {
base_table.insert(key.clone(), value.clone());
}
}
} else {
*base = overlay.clone();
}
}
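// A minimal sketch of the merge semantics: tables merge key-by-key recursively,
// while any non-table overlay value replaces the base value wholesale.
#[cfg(test)]
mod merge_sketch {
    use super::*;

    #[test]
    fn tables_merge_recursively_and_scalars_replace() {
        let mut base: TomlValue =
            toml::from_str("a = 1\n[nested]\nkeep = true\nvalue = \"base\"").expect("base");
        let overlay: TomlValue = toml::from_str("[nested]\nvalue = \"overlay\"").expect("overlay");
        merge_toml_values(&mut base, &overlay);

        let nested = base.get("nested").and_then(|v| v.as_table()).expect("nested");
        assert_eq!(nested.get("keep"), Some(&TomlValue::Boolean(true)));
        assert_eq!(nested.get("value"), Some(&TomlValue::String("overlay".to_string())));
        // Keys absent from the overlay are left untouched.
        assert_eq!(base.get("a"), Some(&TomlValue::Integer(1)));
    }
}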
fn managed_config_default_path(codex_home: &Path) -> PathBuf {
#[cfg(unix)]
{
let _ = codex_home;
PathBuf::from(CODEX_MANAGED_CONFIG_SYSTEM_PATH)
}
#[cfg(not(unix))]
{
codex_home.join("managed_config.toml")
}
}
fn apply_managed_layers(layers: LoadedConfigLayers) -> TomlValue {
let LoadedConfigLayers {
mut base,
managed_config,
managed_preferences,
} = layers;
for overlay in [managed_config, managed_preferences].into_iter().flatten() {
merge_toml_values(&mut base, &overlay);
}
base
}
#[cfg(test)]
mod tests {
use super::*;
use tempfile::tempdir;
#[tokio::test]
async fn merges_managed_config_layer_on_top() {
let tmp = tempdir().expect("tempdir");
let managed_path = tmp.path().join("managed_config.toml");
std::fs::write(
tmp.path().join(CONFIG_TOML_FILE),
r#"foo = 1
[nested]
value = "base"
"#,
)
.expect("write base");
std::fs::write(
&managed_path,
r#"foo = 2
[nested]
value = "managed_config"
extra = true
"#,
)
.expect("write managed config");
let overrides = LoaderOverrides {
managed_config_path: Some(managed_path),
#[cfg(target_os = "macos")]
managed_preferences_base64: None,
};
let loaded = load_config_as_toml_with_overrides(tmp.path(), overrides)
.await
.expect("load config");
let table = loaded.as_table().expect("top-level table expected");
assert_eq!(table.get("foo"), Some(&TomlValue::Integer(2)));
let nested = table
.get("nested")
.and_then(|v| v.as_table())
.expect("nested");
assert_eq!(
nested.get("value"),
Some(&TomlValue::String("managed_config".to_string()))
);
assert_eq!(nested.get("extra"), Some(&TomlValue::Boolean(true)));
}
#[tokio::test]
async fn returns_empty_when_all_layers_missing() {
let tmp = tempdir().expect("tempdir");
let managed_path = tmp.path().join("managed_config.toml");
let overrides = LoaderOverrides {
managed_config_path: Some(managed_path),
#[cfg(target_os = "macos")]
managed_preferences_base64: None,
};
let layers = load_config_layers_with_overrides(tmp.path(), overrides)
.await
.expect("load layers");
let base_table = layers.base.as_table().expect("base table expected");
assert!(
base_table.is_empty(),
"expected empty base layer when configs missing"
);
assert!(
layers.managed_config.is_none(),
"managed config layer should be absent when file missing"
);
#[cfg(not(target_os = "macos"))]
{
let loaded = load_config_as_toml(tmp.path()).await.expect("load config");
let table = loaded.as_table().expect("top-level table expected");
assert!(
table.is_empty(),
"expected empty table when configs missing"
);
}
}
#[cfg(target_os = "macos")]
#[tokio::test]
async fn managed_preferences_take_highest_precedence() {
use base64::Engine;
let managed_payload = r#"
[nested]
value = "managed"
flag = false
"#;
let encoded = base64::prelude::BASE64_STANDARD.encode(managed_payload.as_bytes());
let tmp = tempdir().expect("tempdir");
let managed_path = tmp.path().join("managed_config.toml");
std::fs::write(
tmp.path().join(CONFIG_TOML_FILE),
r#"[nested]
value = "base"
"#,
)
.expect("write base");
std::fs::write(
&managed_path,
r#"[nested]
value = "managed_config"
flag = true
"#,
)
.expect("write managed config");
let overrides = LoaderOverrides {
managed_config_path: Some(managed_path),
managed_preferences_base64: Some(encoded),
};
let loaded = load_config_as_toml_with_overrides(tmp.path(), overrides)
.await
.expect("load config");
let nested = loaded
.get("nested")
.and_then(|v| v.as_table())
.expect("nested table");
assert_eq!(
nested.get("value"),
Some(&TomlValue::String("managed".to_string()))
);
assert_eq!(nested.get("flag"), Some(&TomlValue::Boolean(false)));
}
}

View File

@@ -22,7 +22,7 @@ pub struct ConfigProfile {
pub experimental_instructions_file: Option<PathBuf>,
}
impl From<ConfigProfile> for codex_protocol::mcp_protocol::Profile {
impl From<ConfigProfile> for codex_app_server_protocol::Profile {
fn from(config_profile: ConfigProfile) -> Self {
Self {
model: config_profile.model,

View File

@@ -313,7 +313,7 @@ pub struct SandboxWorkspaceWrite {
pub exclude_slash_tmp: bool,
}
impl From<SandboxWorkspaceWrite> for codex_protocol::mcp_protocol::SandboxSettings {
impl From<SandboxWorkspaceWrite> for codex_app_server_protocol::SandboxSettings {
fn from(sandbox_workspace_write: SandboxWorkspaceWrite) -> Self {
Self {
writable_roots: sandbox_workspace_write.writable_roots,

View File

@@ -13,10 +13,11 @@ use crate::protocol::Event;
use crate::protocol::EventMsg;
use crate::protocol::SessionConfiguredEvent;
use crate::rollout::RolloutRecorder;
use codex_protocol::mcp_protocol::ConversationId;
use codex_protocol::ConversationId;
use codex_protocol::models::ResponseItem;
use codex_protocol::protocol::InitialHistory;
use codex_protocol::protocol::RolloutItem;
use codex_protocol::protocol::SessionSource;
use std::collections::HashMap;
use std::path::PathBuf;
use std::sync::Arc;
@@ -35,20 +36,25 @@ pub struct NewConversation {
pub struct ConversationManager {
conversations: Arc<RwLock<HashMap<ConversationId, Arc<CodexConversation>>>>,
auth_manager: Arc<AuthManager>,
session_source: SessionSource,
}
impl ConversationManager {
pub fn new(auth_manager: Arc<AuthManager>) -> Self {
pub fn new(auth_manager: Arc<AuthManager>, session_source: SessionSource) -> Self {
Self {
conversations: Arc::new(RwLock::new(HashMap::new())),
auth_manager,
session_source,
}
}
/// Construct with a dummy AuthManager containing the provided CodexAuth.
/// Used for integration tests: should not be used by ordinary business logic.
pub fn with_auth(auth: CodexAuth) -> Self {
Self::new(crate::AuthManager::from_auth_for_testing(auth))
Self::new(
crate::AuthManager::from_auth_for_testing(auth),
SessionSource::Exec,
)
}
pub async fn new_conversation(&self, config: Config) -> CodexResult<NewConversation> {
@@ -64,7 +70,13 @@ impl ConversationManager {
let CodexSpawnOk {
codex,
conversation_id,
} = Codex::spawn(config, auth_manager, InitialHistory::New).await?;
} = Codex::spawn(
config,
auth_manager,
InitialHistory::New,
self.session_source,
)
.await?;
self.finalize_spawn(codex, conversation_id).await
}
@@ -121,7 +133,7 @@ impl ConversationManager {
let CodexSpawnOk {
codex,
conversation_id,
} = Codex::spawn(config, auth_manager, initial_history).await?;
} = Codex::spawn(config, auth_manager, initial_history, self.session_source).await?;
self.finalize_spawn(codex, conversation_id).await
}
@@ -155,7 +167,7 @@ impl ConversationManager {
let CodexSpawnOk {
codex,
conversation_id,
} = Codex::spawn(config, auth_manager, history).await?;
} = Codex::spawn(config, auth_manager, history, self.session_source).await?;
self.finalize_spawn(codex, conversation_id).await
}
@@ -198,6 +210,7 @@ fn truncate_before_nth_user_message(history: InitialHistory, n: usize) -> Initia
mod tests {
use super::*;
use crate::codex::make_session_and_context;
use assert_matches::assert_matches;
use codex_protocol::models::ContentItem;
use codex_protocol::models::ReasoningItemReasoningSummary;
use codex_protocol::models::ResponseItem;
@@ -224,7 +237,7 @@ mod tests {
#[test]
fn drops_from_last_user_only() {
let items = vec![
let items = [
user_msg("u1"),
assistant_msg("a1"),
assistant_msg("a2"),
@@ -271,7 +284,7 @@ mod tests {
.map(RolloutItem::ResponseItem)
.collect();
let truncated2 = truncate_before_nth_user_message(InitialHistory::Forked(initial2), 2);
assert!(matches!(truncated2, InitialHistory::New));
assert_matches!(truncated2, InitialHistory::New);
}
#[test]

View File

@@ -20,7 +20,7 @@ use std::sync::OnceLock;
/// The full user agent string is returned from the mcp initialize response.
/// Parentheses will be added by Codex; this should only specify what goes inside them.
pub static USER_AGENT_SUFFIX: LazyLock<Mutex<Option<String>>> = LazyLock::new(|| Mutex::new(None));
pub const DEFAULT_ORIGINATOR: &str = "codex_cli_rs";
pub const CODEX_INTERNAL_ORIGINATOR_OVERRIDE_ENV_VAR: &str = "CODEX_INTERNAL_ORIGINATOR_OVERRIDE";
#[derive(Debug, Clone)]
pub struct Originator {
@@ -35,10 +35,11 @@ pub enum SetOriginatorError {
AlreadyInitialized,
}
fn init_originator_from_env() -> Originator {
let default = "codex_cli_rs";
fn get_originator_value(provided: Option<String>) -> Originator {
let value = std::env::var(CODEX_INTERNAL_ORIGINATOR_OVERRIDE_ENV_VAR)
.unwrap_or_else(|_| default.to_string());
.ok()
.or(provided)
.unwrap_or(DEFAULT_ORIGINATOR.to_string());
match HeaderValue::from_str(&value) {
Ok(header_value) => Originator {
@@ -48,31 +49,22 @@ fn init_originator_from_env() -> Originator {
Err(e) => {
tracing::error!("Unable to turn originator override {value} into header value: {e}");
Originator {
value: default.to_string(),
header_value: HeaderValue::from_static(default),
value: DEFAULT_ORIGINATOR.to_string(),
header_value: HeaderValue::from_static(DEFAULT_ORIGINATOR),
}
}
}
}
fn build_originator(value: String) -> Result<Originator, SetOriginatorError> {
let header_value =
HeaderValue::from_str(&value).map_err(|_| SetOriginatorError::InvalidHeaderValue)?;
Ok(Originator {
value,
header_value,
})
}
pub fn set_default_originator(value: &str) -> Result<(), SetOriginatorError> {
let originator = build_originator(value.to_string())?;
pub fn set_default_originator(value: String) -> Result<(), SetOriginatorError> {
let originator = get_originator_value(Some(value));
ORIGINATOR
.set(originator)
.map_err(|_| SetOriginatorError::AlreadyInitialized)
}
pub fn originator() -> &'static Originator {
ORIGINATOR.get_or_init(init_originator_from_env)
ORIGINATOR.get_or_init(|| get_originator_value(None))
}
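// A minimal sketch of the fallback order in get_originator_value: the
// CODEX_INTERNAL_ORIGINATOR_OVERRIDE env var wins, then any value provided by an
// embedding SDK, then DEFAULT_ORIGINATOR. (Assumes the env var is unset when the
// tests run.)
#[cfg(test)]
mod originator_sketch {
    use super::*;

    #[test]
    fn falls_back_to_default_when_nothing_is_provided() {
        assert_eq!(get_originator_value(None).value, DEFAULT_ORIGINATOR);
    }

    #[test]
    fn uses_the_provided_value_when_env_is_unset() {
        // "my_sdk" is a hypothetical originator value for illustration.
        assert_eq!(get_originator_value(Some("my_sdk".to_string())).value, "my_sdk");
    }
}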
pub fn get_codex_user_agent() -> String {

View File

@@ -1,7 +1,7 @@
use crate::exec::ExecToolCallOutput;
use crate::token_data::KnownPlan;
use crate::token_data::PlanType;
use codex_protocol::mcp_protocol::ConversationId;
use codex_protocol::ConversationId;
use codex_protocol::protocol::RateLimitSnapshot;
use reqwest::StatusCode;
use serde_json;
@@ -55,6 +55,11 @@ pub enum CodexErr {
#[error("stream disconnected before completion: {0}")]
Stream(String, Option<Duration>),
#[error(
"Codex ran out of room in the model's context window. Start a new conversation or clear earlier history before retrying."
)]
ContextWindowExceeded,
#[error("no conversation with id: {0}")]
ConversationNotFound(ConversationId),
@@ -76,8 +81,8 @@ pub enum CodexErr {
Interrupted,
/// Unexpected HTTP status code.
#[error("unexpected status {0}: {1}")]
UnexpectedStatus(StatusCode, String),
#[error("{0}")]
UnexpectedStatus(UnexpectedResponseError),
#[error("{0}")]
UsageLimitReached(UsageLimitReachedError),
@@ -91,8 +96,8 @@ pub enum CodexErr {
InternalServerError,
/// Retry limit exceeded.
#[error("exceeded retry limit, last status: {0}")]
RetryLimit(StatusCode),
#[error("{0}")]
RetryLimit(RetryLimitReachedError),
/// Agent loop died unexpectedly
#[error("internal error; agent loop died unexpectedly")]
@@ -108,6 +113,9 @@ pub enum CodexErr {
#[error("unsupported operation: {0}")]
UnsupportedOperation(String),
#[error("Fatal error: {0}")]
Fatal(String),
// -----------------------------------------------------------------
// Automatic conversions for common external error types
// -----------------------------------------------------------------
@@ -135,6 +143,49 @@ pub enum CodexErr {
EnvVar(EnvVarError),
}
#[derive(Debug)]
pub struct UnexpectedResponseError {
pub status: StatusCode,
pub body: String,
pub request_id: Option<String>,
}
impl std::fmt::Display for UnexpectedResponseError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(
f,
"unexpected status {}: {}{}",
self.status,
self.body,
self.request_id
.as_ref()
.map(|id| format!(", request id: {id}"))
.unwrap_or_default()
)
}
}
impl std::error::Error for UnexpectedResponseError {}
#[derive(Debug)]
pub struct RetryLimitReachedError {
pub status: StatusCode,
pub request_id: Option<String>,
}
impl std::fmt::Display for RetryLimitReachedError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(
f,
"exceeded retry limit, last status: {}{}",
self.status,
self.request_id
.as_ref()
.map(|id| format!(", request id: {id}"))
.unwrap_or_default()
)
}
}
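// A minimal sketch of how the richer errors render, assuming a captured request id:
#[cfg(test)]
mod error_display_sketch {
    use super::*;

    #[test]
    fn request_id_is_appended_when_present() {
        let err = UnexpectedResponseError {
            status: StatusCode::INTERNAL_SERVER_ERROR,
            body: "boom".to_string(),
            request_id: Some("req_123".to_string()),
        };
        // `StatusCode` displays as "500 Internal Server Error".
        assert_eq!(
            err.to_string(),
            "unexpected status 500 Internal Server Error: boom, request id: req_123"
        );

        let retry = RetryLimitReachedError {
            status: StatusCode::TOO_MANY_REQUESTS,
            request_id: None,
        };
        assert_eq!(
            retry.to_string(),
            "exceeded retry limit, last status: 429 Too Many Requests"
        );
    }
}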
#[derive(Debug)]
pub struct UsageLimitReachedError {
pub(crate) plan_type: Option<PlanType>,

View File

@@ -127,6 +127,7 @@ mod tests {
use super::map_response_item_to_event_messages;
use crate::protocol::EventMsg;
use crate::protocol::InputMessageKind;
use assert_matches::assert_matches;
use codex_protocol::models::ContentItem;
use codex_protocol::models::ResponseItem;
use pretty_assertions::assert_eq;
@@ -158,7 +159,7 @@ mod tests {
match &events[0] {
EventMsg::UserMessage(user) => {
assert_eq!(user.message, "Hello world");
assert!(matches!(user.kind, Some(InputMessageKind::Plain)));
assert_matches!(user.kind, Some(InputMessageKind::Plain));
assert_eq!(user.images, Some(vec![img1, img2]));
}
other => panic!("expected UserMessage, got {other:?}"),

View File

@@ -1,7 +1,7 @@
use std::collections::BTreeMap;
use crate::client_common::tools::ResponsesApiTool;
use crate::openai_tools::JsonSchema;
use crate::openai_tools::ResponsesApiTool;
pub const EXEC_COMMAND_TOOL_NAME: &str = "exec_command";
pub const WRITE_STDIN_TOOL_NAME: &str = "write_stdin";
@@ -49,7 +49,7 @@ pub fn create_exec_command_tool_for_responses_api() -> ResponsesApiTool {
parameters: JsonSchema::Object {
properties,
required: Some(vec!["cmd".to_string()]),
additional_properties: Some(false),
additional_properties: Some(false.into()),
},
}
}
@@ -92,7 +92,7 @@ Can write control characters (\u0003 for Ctrl-C), or an empty string to just pol
parameters: JsonSchema::Object {
properties,
required: Some(vec!["session_id".to_string(), "chars".to_string()]),
additional_properties: Some(false),
additional_properties: Some(false.into()),
},
}
}

View File

@@ -0,0 +1,101 @@
use std::collections::HashMap;
use std::env;
use async_trait::async_trait;
use crate::CODEX_APPLY_PATCH_ARG1;
use crate::apply_patch::ApplyPatchExec;
use crate::exec::ExecParams;
use crate::function_tool::FunctionCallError;
pub(crate) enum ExecutionMode {
Shell,
ApplyPatch(ApplyPatchExec),
}
/// Backend-specific hooks that prepare and post-process execution requests for a
/// given [`ExecutionMode`].
#[async_trait]
pub(crate) trait ExecutionBackend: Send + Sync {
fn prepare(
&self,
params: ExecParams,
// Required for downcasting the apply_patch.
mode: &ExecutionMode,
) -> Result<ExecParams, FunctionCallError>;
fn stream_stdout(&self, _mode: &ExecutionMode) -> bool {
true
}
}
static SHELL_BACKEND: ShellBackend = ShellBackend;
static APPLY_PATCH_BACKEND: ApplyPatchBackend = ApplyPatchBackend;
pub(crate) fn backend_for_mode(mode: &ExecutionMode) -> &'static dyn ExecutionBackend {
match mode {
ExecutionMode::Shell => &SHELL_BACKEND,
ExecutionMode::ApplyPatch(_) => &APPLY_PATCH_BACKEND,
}
}
struct ShellBackend;
#[async_trait]
impl ExecutionBackend for ShellBackend {
fn prepare(
&self,
params: ExecParams,
mode: &ExecutionMode,
) -> Result<ExecParams, FunctionCallError> {
match mode {
ExecutionMode::Shell => Ok(params),
_ => Err(FunctionCallError::RespondToModel(
"shell backend invoked with non-shell mode".to_string(),
)),
}
}
}
struct ApplyPatchBackend;
#[async_trait]
impl ExecutionBackend for ApplyPatchBackend {
fn prepare(
&self,
params: ExecParams,
mode: &ExecutionMode,
) -> Result<ExecParams, FunctionCallError> {
match mode {
ExecutionMode::ApplyPatch(exec) => {
let path_to_codex = env::current_exe()
.ok()
.map(|p| p.to_string_lossy().to_string())
.ok_or_else(|| {
FunctionCallError::RespondToModel(
"failed to determine path to codex executable".to_string(),
)
})?;
let patch = exec.action.patch.clone();
Ok(ExecParams {
command: vec![path_to_codex, CODEX_APPLY_PATCH_ARG1.to_string(), patch],
cwd: exec.action.cwd.clone(),
timeout_ms: params.timeout_ms,
// Run apply_patch with a minimal environment for determinism and to
// avoid leaking host environment variables into the patch process.
env: HashMap::new(),
with_escalated_permissions: params.with_escalated_permissions,
justification: params.justification,
})
}
ExecutionMode::Shell => Err(FunctionCallError::RespondToModel(
"apply_patch backend invoked without patch context".to_string(),
)),
}
}
fn stream_stdout(&self, _mode: &ExecutionMode) -> bool {
false
}
}
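// A minimal sketch of what `prepare` produces for an apply_patch call, assuming the
// current executable resolves to /usr/local/bin/codex:
//
//   ExecParams {
//       command: ["/usr/local/bin/codex", CODEX_APPLY_PATCH_ARG1, "<patch body>"],
//       cwd: exec.action.cwd,
//       env: {},            // minimal environment, nothing inherited from the host
//       ..                  // timeout and escalation settings carry over
//   }
//
// Pairing this with `stream_stdout` returning false means apply_patch output is
// post-processed rather than streamed live.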

View File

@@ -0,0 +1,51 @@
use std::collections::HashSet;
use std::sync::Arc;
use std::sync::Mutex;
/// Thread-safe store of user approvals so repeated commands can reuse
/// previously granted trust.
#[derive(Clone, Debug, Default)]
pub(crate) struct ApprovalCache {
inner: Arc<Mutex<HashSet<Vec<String>>>>,
}
impl ApprovalCache {
pub(crate) fn insert(&self, command: Vec<String>) {
if command.is_empty() {
return;
}
if let Ok(mut guard) = self.inner.lock() {
guard.insert(command);
}
}
pub(crate) fn snapshot(&self) -> HashSet<Vec<String>> {
self.inner.lock().map(|g| g.clone()).unwrap_or_default()
}
}
#[cfg(test)]
mod tests {
use super::*;
use pretty_assertions::assert_eq;
#[test]
fn insert_ignores_empty_and_dedupes() {
let cache = ApprovalCache::default();
// Empty should be ignored
cache.insert(vec![]);
assert!(cache.snapshot().is_empty());
// Insert a command and verify snapshot contains it
let cmd = vec!["foo".to_string(), "bar".to_string()];
cache.insert(cmd.clone());
let snap1 = cache.snapshot();
assert!(snap1.contains(&cmd));
// Reinserting should not create duplicates
cache.insert(cmd);
let snap2 = cache.snapshot();
assert_eq!(snap1, snap2);
}
}

View File

@@ -0,0 +1,64 @@
mod backends;
mod cache;
mod runner;
mod sandbox;
pub(crate) use backends::ExecutionMode;
pub(crate) use runner::ExecutionRequest;
pub(crate) use runner::Executor;
pub(crate) use runner::ExecutorConfig;
pub(crate) use runner::normalize_exec_result;
pub(crate) mod linkers {
use crate::exec::ExecParams;
use crate::exec::StdoutStream;
use crate::executor::backends::ExecutionMode;
use crate::executor::runner::ExecutionRequest;
use crate::tools::context::ExecCommandContext;
pub struct PreparedExec {
pub(crate) context: ExecCommandContext,
pub(crate) request: ExecutionRequest,
}
impl PreparedExec {
pub fn new(
context: ExecCommandContext,
params: ExecParams,
approval_command: Vec<String>,
mode: ExecutionMode,
stdout_stream: Option<StdoutStream>,
use_shell_profile: bool,
) -> Self {
let request = ExecutionRequest {
params,
approval_command,
mode,
stdout_stream,
use_shell_profile,
};
Self { context, request }
}
}
}
pub mod errors {
use crate::error::CodexErr;
use crate::function_tool::FunctionCallError;
use thiserror::Error;
#[derive(Debug, Error)]
pub enum ExecError {
#[error(transparent)]
Function(#[from] FunctionCallError),
#[error(transparent)]
Codex(#[from] CodexErr),
}
impl ExecError {
pub(crate) fn rejection(msg: impl Into<String>) -> Self {
FunctionCallError::RespondToModel(msg.into()).into()
}
}
}

View File

@@ -0,0 +1,409 @@
use std::path::PathBuf;
use std::sync::Arc;
use std::sync::RwLock;
use std::time::Duration;
use super::backends::ExecutionMode;
use super::backends::backend_for_mode;
use super::cache::ApprovalCache;
use crate::codex::Session;
use crate::error::CodexErr;
use crate::error::SandboxErr;
use crate::error::get_error_message_ui;
use crate::exec::ExecParams;
use crate::exec::ExecToolCallOutput;
use crate::exec::SandboxType;
use crate::exec::StdoutStream;
use crate::exec::StreamOutput;
use crate::exec::process_exec_tool_call;
use crate::executor::errors::ExecError;
use crate::executor::sandbox::select_sandbox;
use crate::function_tool::FunctionCallError;
use crate::protocol::AskForApproval;
use crate::protocol::ReviewDecision;
use crate::protocol::SandboxPolicy;
use crate::shell;
use crate::tools::context::ExecCommandContext;
use codex_otel::otel_event_manager::ToolDecisionSource;
#[derive(Clone, Debug)]
pub(crate) struct ExecutorConfig {
pub(crate) sandbox_policy: SandboxPolicy,
pub(crate) sandbox_cwd: PathBuf,
codex_linux_sandbox_exe: Option<PathBuf>,
}
impl ExecutorConfig {
pub(crate) fn new(
sandbox_policy: SandboxPolicy,
sandbox_cwd: PathBuf,
codex_linux_sandbox_exe: Option<PathBuf>,
) -> Self {
Self {
sandbox_policy,
sandbox_cwd,
codex_linux_sandbox_exe,
}
}
}
/// Coordinates sandbox selection, backend-specific preparation, and command
/// execution for tool calls requested by the model.
pub(crate) struct Executor {
approval_cache: ApprovalCache,
config: Arc<RwLock<ExecutorConfig>>,
}
impl Executor {
pub(crate) fn new(config: ExecutorConfig) -> Self {
Self {
approval_cache: ApprovalCache::default(),
config: Arc::new(RwLock::new(config)),
}
}
/// Updates the sandbox policy and working directory used for future
/// executions without recreating the executor.
pub(crate) fn update_environment(&self, sandbox_policy: SandboxPolicy, sandbox_cwd: PathBuf) {
if let Ok(mut cfg) = self.config.write() {
cfg.sandbox_policy = sandbox_policy;
cfg.sandbox_cwd = sandbox_cwd;
}
}
/// Runs a prepared execution request end-to-end: prepares parameters, decides on
/// sandbox placement (prompting the user when necessary), launches the command,
/// and lets the backend post-process the final output.
pub(crate) async fn run(
&self,
mut request: ExecutionRequest,
session: &Session,
approval_policy: AskForApproval,
context: &ExecCommandContext,
) -> Result<ExecToolCallOutput, ExecError> {
if matches!(request.mode, ExecutionMode::Shell) {
request.params =
maybe_translate_shell_command(request.params, session, request.use_shell_profile);
}
// Step 1: Normalize parameters via the selected backend.
let backend = backend_for_mode(&request.mode);
let stdout_stream = if backend.stream_stdout(&request.mode) {
request.stdout_stream.clone()
} else {
None
};
request.params = backend
.prepare(request.params, &request.mode)
.map_err(ExecError::from)?;
// Step 2: Snapshot sandbox configuration so it stays stable for this run.
let config = self
.config
.read()
.map_err(|_| ExecError::rejection("executor config poisoned"))?
.clone();
// Step 3: Decide sandbox placement, prompting for approval when needed.
let sandbox_decision = select_sandbox(
&request,
approval_policy,
self.approval_cache.snapshot(),
&config,
session,
&context.sub_id,
&context.call_id,
&context.otel_event_manager,
)
.await?;
if sandbox_decision.record_session_approval {
self.approval_cache.insert(request.approval_command.clone());
}
// Step 4: Launch the command within the chosen sandbox.
let first_attempt = self
.spawn(
request.params.clone(),
sandbox_decision.initial_sandbox,
&config,
stdout_stream.clone(),
)
.await;
// Step 5: Handle sandbox outcomes, optionally escalating to an unsandboxed retry.
match first_attempt {
Ok(output) => Ok(output),
Err(CodexErr::Sandbox(SandboxErr::Timeout { output })) => {
Err(CodexErr::Sandbox(SandboxErr::Timeout { output }).into())
}
Err(CodexErr::Sandbox(error)) => {
if sandbox_decision.escalate_on_failure {
self.retry_without_sandbox(
&request,
&config,
session,
context,
stdout_stream,
error,
)
.await
} else {
let message = sandbox_failure_message(error);
Err(ExecError::rejection(message))
}
}
Err(err) => Err(err.into()),
}
}
/// Fallback path invoked when a sandboxed run fails so the user can
/// approve rerunning without isolation.
async fn retry_without_sandbox(
&self,
request: &ExecutionRequest,
config: &ExecutorConfig,
session: &Session,
context: &ExecCommandContext,
stdout_stream: Option<StdoutStream>,
sandbox_error: SandboxErr,
) -> Result<ExecToolCallOutput, ExecError> {
session
.notify_background_event(
&context.sub_id,
format!("Execution failed: {sandbox_error}"),
)
.await;
let decision = session
.request_command_approval(
context.sub_id.to_string(),
context.call_id.to_string(),
request.approval_command.clone(),
request.params.cwd.clone(),
Some("command failed; retry without sandbox?".to_string()),
)
.await;
context.otel_event_manager.tool_decision(
&context.tool_name,
&context.call_id,
decision,
ToolDecisionSource::User,
);
match decision {
ReviewDecision::Approved | ReviewDecision::ApprovedForSession => {
if matches!(decision, ReviewDecision::ApprovedForSession) {
self.approval_cache.insert(request.approval_command.clone());
}
session
.notify_background_event(&context.sub_id, "retrying command without sandbox")
.await;
let retry_output = self
.spawn(
request.params.clone(),
SandboxType::None,
config,
stdout_stream,
)
.await?;
Ok(retry_output)
}
ReviewDecision::Denied | ReviewDecision::Abort => {
Err(ExecError::rejection("exec command rejected by user"))
}
}
}
async fn spawn(
&self,
params: ExecParams,
sandbox: SandboxType,
config: &ExecutorConfig,
stdout_stream: Option<StdoutStream>,
) -> Result<ExecToolCallOutput, CodexErr> {
process_exec_tool_call(
params,
sandbox,
&config.sandbox_policy,
&config.sandbox_cwd,
&config.codex_linux_sandbox_exe,
stdout_stream,
)
.await
}
}
fn maybe_translate_shell_command(
params: ExecParams,
session: &Session,
use_shell_profile: bool,
) -> ExecParams {
let should_translate =
matches!(session.user_shell(), shell::Shell::PowerShell(_)) || use_shell_profile;
if should_translate
&& let Some(command) = session
.user_shell()
.format_default_shell_invocation(params.command.clone())
{
return ExecParams { command, ..params };
}
params
}
fn sandbox_failure_message(error: SandboxErr) -> String {
let codex_error = CodexErr::Sandbox(error);
let friendly = get_error_message_ui(&codex_error);
format!("failed in sandbox: {friendly}")
}
pub(crate) struct ExecutionRequest {
pub params: ExecParams,
pub approval_command: Vec<String>,
pub mode: ExecutionMode,
pub stdout_stream: Option<StdoutStream>,
pub use_shell_profile: bool,
}
pub(crate) struct NormalizedExecOutput<'a> {
borrowed: Option<&'a ExecToolCallOutput>,
synthetic: Option<ExecToolCallOutput>,
}
impl<'a> NormalizedExecOutput<'a> {
pub(crate) fn event_output(&'a self) -> &'a ExecToolCallOutput {
match (self.borrowed, self.synthetic.as_ref()) {
(Some(output), _) => output,
(None, Some(output)) => output,
(None, None) => unreachable!("normalized exec output missing data"),
}
}
}
/// Converts a raw execution result into a uniform view that always exposes an
/// [`ExecToolCallOutput`], synthesizing error output when the command fails
/// before producing a response.
pub(crate) fn normalize_exec_result(
result: &Result<ExecToolCallOutput, ExecError>,
) -> NormalizedExecOutput<'_> {
match result {
Ok(output) => NormalizedExecOutput {
borrowed: Some(output),
synthetic: None,
},
Err(ExecError::Codex(CodexErr::Sandbox(SandboxErr::Timeout { output }))) => {
NormalizedExecOutput {
borrowed: Some(output.as_ref()),
synthetic: None,
}
}
Err(err) => {
let message = match err {
ExecError::Function(FunctionCallError::RespondToModel(msg)) => msg.clone(),
ExecError::Codex(e) => get_error_message_ui(e),
err => err.to_string(),
};
let synthetic = ExecToolCallOutput {
exit_code: -1,
stdout: StreamOutput::new(String::new()),
stderr: StreamOutput::new(message.clone()),
aggregated_output: StreamOutput::new(message),
duration: Duration::default(),
timed_out: false,
};
NormalizedExecOutput {
borrowed: None,
synthetic: Some(synthetic),
}
}
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::error::CodexErr;
use crate::error::EnvVarError;
use crate::error::SandboxErr;
use crate::exec::StreamOutput;
use pretty_assertions::assert_eq;
fn make_output(text: &str) -> ExecToolCallOutput {
ExecToolCallOutput {
exit_code: 1,
stdout: StreamOutput::new(String::new()),
stderr: StreamOutput::new(String::new()),
aggregated_output: StreamOutput::new(text.to_string()),
duration: Duration::from_millis(123),
timed_out: false,
}
}
#[test]
fn normalize_success_borrows() {
let out = make_output("ok");
let result: Result<ExecToolCallOutput, ExecError> = Ok(out);
let normalized = normalize_exec_result(&result);
assert_eq!(normalized.event_output().aggregated_output.text, "ok");
}
#[test]
fn normalize_timeout_borrows_embedded_output() {
let out = make_output("timed out payload");
let err = CodexErr::Sandbox(SandboxErr::Timeout {
output: Box::new(out),
});
let result: Result<ExecToolCallOutput, ExecError> = Err(ExecError::Codex(err));
let normalized = normalize_exec_result(&result);
assert_eq!(
normalized.event_output().aggregated_output.text,
"timed out payload"
);
}
#[test]
fn sandbox_failure_message_uses_denied_stderr() {
let output = ExecToolCallOutput {
exit_code: 101,
stdout: StreamOutput::new(String::new()),
stderr: StreamOutput::new("sandbox stderr".to_string()),
aggregated_output: StreamOutput::new(String::new()),
duration: Duration::from_millis(10),
timed_out: false,
};
let err = SandboxErr::Denied {
output: Box::new(output),
};
let message = sandbox_failure_message(err);
assert_eq!(message, "failed in sandbox: sandbox stderr");
}
#[test]
fn normalize_function_error_synthesizes_payload() {
let err = FunctionCallError::RespondToModel("boom".to_string());
let result: Result<ExecToolCallOutput, ExecError> = Err(ExecError::Function(err));
let normalized = normalize_exec_result(&result);
assert_eq!(normalized.event_output().aggregated_output.text, "boom");
}
#[test]
fn normalize_codex_error_synthesizes_user_message() {
// Use a simple EnvVar error which formats to a clear message
let e = CodexErr::EnvVar(EnvVarError {
var: "FOO".to_string(),
instructions: Some("set it".to_string()),
});
let result: Result<ExecToolCallOutput, ExecError> = Err(ExecError::Codex(e));
let normalized = normalize_exec_result(&result);
assert!(
normalized
.event_output()
.aggregated_output
.text
.contains("Missing environment variable: `FOO`"),
"expected synthesized user-friendly message"
);
}
}
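The `normalize_exec_result` pattern above is worth isolating: on failure the runner synthesizes an output whose stderr and aggregated text carry the error message, so event emission can treat success and failure uniformly. A reduced sketch with local stand-in types (not the crate's API):

#[derive(Debug, PartialEq)]
struct Output {
    exit_code: i32,
    stderr: String,
}

fn normalize(result: Result<Output, String>) -> Output {
    match result {
        Ok(out) => out,
        // Same sentinel exit code the runner uses for synthesized failures.
        Err(msg) => Output {
            exit_code: -1,
            stderr: msg,
        },
    }
}

fn main() {
    let out = normalize(Err("boom".to_string()));
    assert_eq!(out, Output { exit_code: -1, stderr: "boom".to_string() });
}

The real function additionally borrows the embedded output out of `SandboxErr::Timeout` instead of synthesizing one, which this sketch omits.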

View File

@@ -0,0 +1,405 @@
use crate::apply_patch::ApplyPatchExec;
use crate::codex::Session;
use crate::exec::SandboxType;
use crate::executor::ExecutionMode;
use crate::executor::ExecutionRequest;
use crate::executor::ExecutorConfig;
use crate::executor::errors::ExecError;
use crate::safety::SafetyCheck;
use crate::safety::assess_command_safety;
use crate::safety::assess_patch_safety;
use codex_otel::otel_event_manager::OtelEventManager;
use codex_otel::otel_event_manager::ToolDecisionSource;
use codex_protocol::protocol::AskForApproval;
use codex_protocol::protocol::ReviewDecision;
use std::collections::HashSet;
/// Sandbox placement options selected for an execution run, including whether
/// to escalate after failures and whether approvals should persist.
pub(crate) struct SandboxDecision {
pub(crate) initial_sandbox: SandboxType,
pub(crate) escalate_on_failure: bool,
pub(crate) record_session_approval: bool,
}
impl SandboxDecision {
fn auto(sandbox: SandboxType, escalate_on_failure: bool) -> Self {
Self {
initial_sandbox: sandbox,
escalate_on_failure,
record_session_approval: false,
}
}
fn user_override(record_session_approval: bool) -> Self {
Self {
initial_sandbox: SandboxType::None,
escalate_on_failure: false,
record_session_approval,
}
}
}
fn should_escalate_on_failure(approval: AskForApproval, sandbox: SandboxType) -> bool {
matches!(
(approval, sandbox),
(
AskForApproval::UnlessTrusted | AskForApproval::OnFailure,
SandboxType::MacosSeatbelt | SandboxType::LinuxSeccomp
)
)
}
/// Determines how a command should be sandboxed, prompting the user when
/// policy requires explicit approval.
#[allow(clippy::too_many_arguments)]
pub async fn select_sandbox(
request: &ExecutionRequest,
approval_policy: AskForApproval,
approval_cache: HashSet<Vec<String>>,
config: &ExecutorConfig,
session: &Session,
sub_id: &str,
call_id: &str,
otel_event_manager: &OtelEventManager,
) -> Result<SandboxDecision, ExecError> {
match &request.mode {
ExecutionMode::Shell => {
select_shell_sandbox(
request,
approval_policy,
approval_cache,
config,
session,
sub_id,
call_id,
otel_event_manager,
)
.await
}
ExecutionMode::ApplyPatch(exec) => {
select_apply_patch_sandbox(exec, approval_policy, config)
}
}
}
#[allow(clippy::too_many_arguments)]
async fn select_shell_sandbox(
request: &ExecutionRequest,
approval_policy: AskForApproval,
approved_snapshot: HashSet<Vec<String>>,
config: &ExecutorConfig,
session: &Session,
sub_id: &str,
call_id: &str,
otel_event_manager: &OtelEventManager,
) -> Result<SandboxDecision, ExecError> {
let command_for_safety = if request.approval_command.is_empty() {
request.params.command.clone()
} else {
request.approval_command.clone()
};
let safety = assess_command_safety(
&command_for_safety,
approval_policy,
&config.sandbox_policy,
&approved_snapshot,
request.params.with_escalated_permissions.unwrap_or(false),
);
match safety {
SafetyCheck::AutoApprove {
sandbox_type,
user_explicitly_approved,
} => {
let mut decision = SandboxDecision::auto(
sandbox_type,
should_escalate_on_failure(approval_policy, sandbox_type),
);
if user_explicitly_approved {
decision.record_session_approval = true;
}
let (decision_for_event, source) = if user_explicitly_approved {
(ReviewDecision::ApprovedForSession, ToolDecisionSource::User)
} else {
(ReviewDecision::Approved, ToolDecisionSource::Config)
};
otel_event_manager.tool_decision("local_shell", call_id, decision_for_event, source);
Ok(decision)
}
SafetyCheck::AskUser => {
let decision = session
.request_command_approval(
sub_id.to_string(),
call_id.to_string(),
request.approval_command.clone(),
request.params.cwd.clone(),
request.params.justification.clone(),
)
.await;
otel_event_manager.tool_decision(
"local_shell",
call_id,
decision,
ToolDecisionSource::User,
);
match decision {
ReviewDecision::Approved => Ok(SandboxDecision::user_override(false)),
ReviewDecision::ApprovedForSession => Ok(SandboxDecision::user_override(true)),
ReviewDecision::Denied | ReviewDecision::Abort => {
Err(ExecError::rejection("exec command rejected by user"))
}
}
}
SafetyCheck::Reject { reason } => Err(ExecError::rejection(format!(
"exec command rejected: {reason}"
))),
}
}
fn select_apply_patch_sandbox(
exec: &ApplyPatchExec,
approval_policy: AskForApproval,
config: &ExecutorConfig,
) -> Result<SandboxDecision, ExecError> {
if exec.user_explicitly_approved_this_action {
return Ok(SandboxDecision::user_override(false));
}
match assess_patch_safety(
&exec.action,
approval_policy,
&config.sandbox_policy,
&config.sandbox_cwd,
) {
SafetyCheck::AutoApprove { sandbox_type, .. } => Ok(SandboxDecision::auto(
sandbox_type,
should_escalate_on_failure(approval_policy, sandbox_type),
)),
SafetyCheck::AskUser => Err(ExecError::rejection(
"patch requires approval but none was recorded",
)),
SafetyCheck::Reject { reason } => {
Err(ExecError::rejection(format!("patch rejected: {reason}")))
}
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::codex::make_session_and_context;
use crate::exec::ExecParams;
use crate::function_tool::FunctionCallError;
use crate::protocol::SandboxPolicy;
use codex_apply_patch::ApplyPatchAction;
use pretty_assertions::assert_eq;
#[tokio::test]
async fn select_apply_patch_user_override_when_explicit() {
let (session, ctx) = make_session_and_context();
let tmp = tempfile::tempdir().expect("tmp");
let p = tmp.path().join("a.txt");
let action = ApplyPatchAction::new_add_for_test(&p, "hello".to_string());
let exec = ApplyPatchExec {
action,
user_explicitly_approved_this_action: true,
};
let cfg = ExecutorConfig::new(SandboxPolicy::ReadOnly, std::env::temp_dir(), None);
let request = ExecutionRequest {
params: ExecParams {
command: vec!["apply_patch".into()],
cwd: std::env::temp_dir(),
timeout_ms: None,
env: std::collections::HashMap::new(),
with_escalated_permissions: None,
justification: None,
},
approval_command: vec!["apply_patch".into()],
mode: ExecutionMode::ApplyPatch(exec),
stdout_stream: None,
use_shell_profile: false,
};
let otel_event_manager = ctx.client.get_otel_event_manager();
let decision = select_sandbox(
&request,
AskForApproval::OnRequest,
Default::default(),
&cfg,
&session,
"sub",
"call",
&otel_event_manager,
)
.await
.expect("ok");
// Explicit user override runs without sandbox
assert_eq!(decision.initial_sandbox, SandboxType::None);
assert!(!decision.escalate_on_failure);
}
#[tokio::test]
async fn select_apply_patch_autoapprove_in_danger() {
let (session, ctx) = make_session_and_context();
let tmp = tempfile::tempdir().expect("tmp");
let p = tmp.path().join("a.txt");
let action = ApplyPatchAction::new_add_for_test(&p, "hello".to_string());
let exec = ApplyPatchExec {
action,
user_explicitly_approved_this_action: false,
};
let cfg = ExecutorConfig::new(SandboxPolicy::DangerFullAccess, std::env::temp_dir(), None);
let request = ExecutionRequest {
params: ExecParams {
command: vec!["apply_patch".into()],
cwd: std::env::temp_dir(),
timeout_ms: None,
env: std::collections::HashMap::new(),
with_escalated_permissions: None,
justification: None,
},
approval_command: vec!["apply_patch".into()],
mode: ExecutionMode::ApplyPatch(exec),
stdout_stream: None,
use_shell_profile: false,
};
let otel_event_manager = ctx.client.get_otel_event_manager();
let decision = select_sandbox(
&request,
AskForApproval::OnRequest,
Default::default(),
&cfg,
&session,
"sub",
"call",
&otel_event_manager,
)
.await
.expect("ok");
// On platforms with a sandbox, DangerFullAccess still prefers it
let expected = crate::safety::get_platform_sandbox().unwrap_or(SandboxType::None);
assert_eq!(decision.initial_sandbox, expected);
assert!(!decision.escalate_on_failure);
}
#[tokio::test]
async fn select_apply_patch_requires_approval_on_unless_trusted() {
let (session, ctx) = make_session_and_context();
let tempdir = tempfile::tempdir().expect("tmpdir");
let p = tempdir.path().join("a.txt");
let action = ApplyPatchAction::new_add_for_test(&p, "hello".to_string());
let exec = ApplyPatchExec {
action,
user_explicitly_approved_this_action: false,
};
let cfg = ExecutorConfig::new(SandboxPolicy::ReadOnly, std::env::temp_dir(), None);
let request = ExecutionRequest {
params: ExecParams {
command: vec!["apply_patch".into()],
cwd: std::env::temp_dir(),
timeout_ms: None,
env: std::collections::HashMap::new(),
with_escalated_permissions: None,
justification: None,
},
approval_command: vec!["apply_patch".into()],
mode: ExecutionMode::ApplyPatch(exec),
stdout_stream: None,
use_shell_profile: false,
};
let otel_event_manager = ctx.client.get_otel_event_manager();
let result = select_sandbox(
&request,
AskForApproval::UnlessTrusted,
Default::default(),
&cfg,
&session,
"sub",
"call",
&otel_event_manager,
)
.await;
match result {
Ok(_) => panic!("expected error"),
Err(ExecError::Function(FunctionCallError::RespondToModel(msg))) => {
assert!(msg.contains("requires approval"))
}
Err(other) => panic!("unexpected error: {other:?}"),
}
}
#[tokio::test]
async fn select_shell_autoapprove_in_danger_mode() {
let (session, ctx) = make_session_and_context();
let cfg = ExecutorConfig::new(SandboxPolicy::DangerFullAccess, std::env::temp_dir(), None);
let request = ExecutionRequest {
params: ExecParams {
command: vec!["some-unknown".into()],
cwd: std::env::temp_dir(),
timeout_ms: None,
env: std::collections::HashMap::new(),
with_escalated_permissions: None,
justification: None,
},
approval_command: vec!["some-unknown".into()],
mode: ExecutionMode::Shell,
stdout_stream: None,
use_shell_profile: false,
};
let otel_event_manager = ctx.client.get_otel_event_manager();
let decision = select_sandbox(
&request,
AskForApproval::OnRequest,
Default::default(),
&cfg,
&session,
"sub",
"call",
&otel_event_manager,
)
.await
.expect("ok");
assert_eq!(decision.initial_sandbox, SandboxType::None);
assert!(!decision.escalate_on_failure);
}
#[cfg(any(target_os = "macos", target_os = "linux"))]
#[tokio::test]
async fn select_shell_escalates_on_failure_with_platform_sandbox() {
let (session, ctx) = make_session_and_context();
let cfg = ExecutorConfig::new(SandboxPolicy::ReadOnly, std::env::temp_dir(), None);
let request = ExecutionRequest {
params: ExecParams {
// Unknown command => untrusted but not flagged dangerous
command: vec!["some-unknown".into()],
cwd: std::env::temp_dir(),
timeout_ms: None,
env: std::collections::HashMap::new(),
with_escalated_permissions: None,
justification: None,
},
approval_command: vec!["some-unknown".into()],
mode: ExecutionMode::Shell,
stdout_stream: None,
use_shell_profile: false,
};
let otel_event_manager = ctx.client.get_otel_event_manager();
let decision = select_sandbox(
&request,
AskForApproval::OnFailure,
Default::default(),
&cfg,
&session,
"sub",
"call",
&otel_event_manager,
)
.await
.expect("ok");
// On macOS/Linux we should have a platform sandbox and escalate on failure
assert_ne!(decision.initial_sandbox, SandboxType::None);
assert!(decision.escalate_on_failure);
}
}
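The escalation rule in `should_escalate_on_failure` is compact enough to restate standalone; the enums below are local stand-ins restricted to the variants this diff uses:

#[allow(dead_code)]
#[derive(Clone, Copy)]
enum AskForApproval {
    UnlessTrusted,
    OnFailure,
    OnRequest,
}

#[allow(dead_code)]
#[derive(Clone, Copy)]
enum SandboxType {
    None,
    MacosSeatbelt,
    LinuxSeccomp,
}

// Only the "ask on failure"-style policies combined with a real platform
// sandbox retry outside the sandbox after a failure.
fn should_escalate(approval: AskForApproval, sandbox: SandboxType) -> bool {
    matches!(
        (approval, sandbox),
        (
            AskForApproval::UnlessTrusted | AskForApproval::OnFailure,
            SandboxType::MacosSeatbelt | SandboxType::LinuxSeccomp
        )
    )
}

fn main() {
    assert!(should_escalate(AskForApproval::OnFailure, SandboxType::MacosSeatbelt));
    assert!(!should_escalate(AskForApproval::OnRequest, SandboxType::MacosSeatbelt));
    assert!(!should_escalate(AskForApproval::OnFailure, SandboxType::None));
}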

View File

@@ -4,4 +4,8 @@ use thiserror::Error;
pub enum FunctionCallError {
#[error("{0}")]
RespondToModel(String),
#[error("LocalShellCall without call_id or id")]
MissingLocalShellCallId,
#[error("Fatal error: {0}")]
Fatal(String),
}

View File

@@ -2,7 +2,7 @@ use std::collections::HashSet;
use std::path::Path;
use std::path::PathBuf;
use codex_protocol::mcp_protocol::GitSha;
use codex_app_server_protocol::GitSha;
use codex_protocol::protocol::GitInfo;
use futures::future::join_all;
use serde::Deserialize;

View File

@@ -18,6 +18,7 @@ pub use codex_conversation::CodexConversation;
mod command_safety;
pub mod config;
pub mod config_edit;
pub mod config_loader;
pub mod config_profile;
pub mod config_types;
mod conversation_history;
@@ -27,6 +28,7 @@ pub mod error;
pub mod exec;
mod exec_command;
pub mod exec_env;
pub mod executor;
mod flags;
pub mod git_info;
pub mod landlock;
@@ -56,7 +58,6 @@ pub mod default_client;
pub mod model_family;
mod openai_model_info;
mod openai_tools;
pub mod plan_tool;
pub mod project_doc;
mod rollout;
pub(crate) mod safety;
@@ -64,9 +65,10 @@ pub mod seatbelt;
pub mod shell;
pub mod spawn;
pub mod terminal;
mod tool_apply_patch;
mod tools;
pub mod turn_diff_tracker;
pub use rollout::ARCHIVED_SESSIONS_SUBDIR;
pub use rollout::INTERACTIVE_SESSION_SOURCES;
pub use rollout::RolloutRecorder;
pub use rollout::SESSIONS_SUBDIR;
pub use rollout::SessionMeta;

View File

@@ -108,9 +108,6 @@ impl McpClientAdapter {
params: mcp_types::InitializeRequestParams,
startup_timeout: Duration,
) -> Result<Self> {
info!(
"new_stdio_client use_rmcp_client: {use_rmcp_client} program: {program:?} args: {args:?} env: {env:?} params: {params:?} startup_timeout: {startup_timeout:?}"
);
if use_rmcp_client {
let client = Arc::new(RmcpClient::new_stdio_client(program, args, env).await?);
client.initialize(params, Some(startup_timeout)).await?;
@@ -123,12 +120,15 @@ impl McpClientAdapter {
}
async fn new_streamable_http_client(
server_name: String,
url: String,
bearer_token: Option<String>,
params: mcp_types::InitializeRequestParams,
startup_timeout: Duration,
) -> Result<Self> {
let client = Arc::new(RmcpClient::new_streamable_http_client(url, bearer_token)?);
let client = Arc::new(
RmcpClient::new_streamable_http_client(&server_name, &url, bearer_token).await?,
);
client.initialize(params, Some(startup_timeout)).await?;
Ok(McpClientAdapter::Rmcp(client))
}
@@ -202,22 +202,9 @@ impl McpConnectionManager {
continue;
}
if matches!(
cfg.transport,
McpServerTransportConfig::StreamableHttp { .. }
) && !use_rmcp_client
{
info!(
"skipping MCP server `{}` configured with url because rmcp client is disabled",
server_name
);
continue;
}
let startup_timeout = cfg.startup_timeout_sec.unwrap_or(DEFAULT_STARTUP_TIMEOUT);
let tool_timeout = cfg.tool_timeout_sec.unwrap_or(DEFAULT_TOOL_TIMEOUT);
let use_rmcp_client_flag = use_rmcp_client;
join_set.spawn(async move {
let McpServerConfig { transport, .. } = cfg;
let params = mcp_types::InitializeRequestParams {
@@ -246,17 +233,18 @@ impl McpConnectionManager {
let command_os: OsString = command.into();
let args_os: Vec<OsString> = args.into_iter().map(Into::into).collect();
McpClientAdapter::new_stdio_client(
use_rmcp_client_flag,
use_rmcp_client,
command_os,
args_os,
env,
params.clone(),
params,
startup_timeout,
)
.await
}
McpServerTransportConfig::StreamableHttp { url, bearer_token } => {
McpClientAdapter::new_streamable_http_client(
server_name.clone(),
url,
bearer_token,
params,

View File

@@ -30,7 +30,7 @@ use tokio::io::AsyncReadExt;
use crate::config::Config;
use crate::config_types::HistoryPersistence;
use codex_protocol::mcp_protocol::ConversationId;
use codex_protocol::ConversationId;
#[cfg(unix)]
use std::os::unix::fs::OpenOptionsExt;
#[cfg(unix)]

View File

@@ -1,5 +1,5 @@
use crate::config_types::ReasoningSummaryFormat;
use crate::tool_apply_patch::ApplyPatchToolType;
use crate::tools::handlers::apply_patch::ApplyPatchToolType;
/// The `instructions` field in the payload sent to a model should always start
/// with this content.
@@ -35,12 +35,19 @@ pub struct ModelFamily {
// See https://platform.openai.com/docs/guides/tools-local-shell
pub uses_local_shell_tool: bool,
/// Whether this model supports parallel tool calls when using the
/// Responses API.
pub supports_parallel_tool_calls: bool,
/// Present if the model performs better when `apply_patch` is provided as
/// a tool call instead of just a bash command
pub apply_patch_tool_type: Option<ApplyPatchToolType>,
// Instructions to use for querying the model
pub base_instructions: String,
/// Names of beta tools that should be exposed to this model family.
pub experimental_supported_tools: Vec<String>,
}
macro_rules! model_family {
@@ -55,8 +62,10 @@ macro_rules! model_family {
supports_reasoning_summaries: false,
reasoning_summary_format: ReasoningSummaryFormat::None,
uses_local_shell_tool: false,
supports_parallel_tool_calls: false,
apply_patch_tool_type: None,
base_instructions: BASE_INSTRUCTIONS.to_string(),
experimental_supported_tools: Vec::new(),
};
// apply overrides
$(
@@ -68,7 +77,11 @@ macro_rules! model_family {
/// Returns a `ModelFamily` for the given model slug, or `None` if the slug
/// does not match any known model family.
pub fn find_family_for_model(slug: &str) -> Option<ModelFamily> {
pub fn find_family_for_model(mut slug: &str) -> Option<ModelFamily> {
// TODO(jif) clean once we have proper feature flags
if matches!(std::env::var("CODEX_EXPERIMENTAL").as_deref(), Ok("1")) {
slug = "codex-experimental";
}
if slug.starts_with("o3") {
model_family!(
slug, "o3",
@@ -99,12 +112,40 @@ pub fn find_family_for_model(slug: &str) -> Option<ModelFamily> {
model_family!(slug, "gpt-4o", needs_special_apply_patch_instructions: true)
} else if slug.starts_with("gpt-3.5") {
model_family!(slug, "gpt-3.5", needs_special_apply_patch_instructions: true)
} else if slug.starts_with("codex-") || slug.starts_with("gpt-5-codex") {
} else if slug.starts_with("test-gpt-5-codex") {
model_family!(
slug, slug,
supports_reasoning_summaries: true,
reasoning_summary_format: ReasoningSummaryFormat::Experimental,
base_instructions: GPT_5_CODEX_INSTRUCTIONS.to_string(),
experimental_supported_tools: vec![
"read_file".to_string(),
"list_dir".to_string(),
"test_sync_tool".to_string()
],
supports_parallel_tool_calls: true,
)
// Internal models.
} else if slug.starts_with("codex-") {
model_family!(
slug, slug,
supports_reasoning_summaries: true,
reasoning_summary_format: ReasoningSummaryFormat::Experimental,
base_instructions: GPT_5_CODEX_INSTRUCTIONS.to_string(),
apply_patch_tool_type: Some(ApplyPatchToolType::Freeform),
experimental_supported_tools: vec!["read_file".to_string(), "list_dir".to_string()],
supports_parallel_tool_calls: true,
)
// Production models.
} else if slug.starts_with("gpt-5-codex") {
model_family!(
slug, slug,
supports_reasoning_summaries: true,
reasoning_summary_format: ReasoningSummaryFormat::Experimental,
base_instructions: GPT_5_CODEX_INSTRUCTIONS.to_string(),
apply_patch_tool_type: Some(ApplyPatchToolType::Freeform),
)
} else if slug.starts_with("gpt-5") {
model_family!(
@@ -125,7 +166,9 @@ pub fn derive_default_model_family(model: &str) -> ModelFamily {
supports_reasoning_summaries: false,
reasoning_summary_format: ReasoningSummaryFormat::None,
uses_local_shell_tool: false,
supports_parallel_tool_calls: false,
apply_patch_tool_type: None,
base_instructions: BASE_INSTRUCTIONS.to_string(),
experimental_supported_tools: Vec::new(),
}
}
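A minimal sketch of the slug routing `find_family_for_model` now performs: the `CODEX_EXPERIMENTAL` check rewrites the slug before any prefix matching runs, which is why the `test-gpt-5-codex`, internal `codex-`, and production `gpt-5-codex` branches get distinct treatment. The family labels below are illustrative strings, not the crate's types:

fn route_slug(mut slug: &str) -> &'static str {
    // TODO-style feature flag: redirect everything to the experimental family.
    if matches!(std::env::var("CODEX_EXPERIMENTAL").as_deref(), Ok("1")) {
        slug = "codex-experimental";
    }
    if slug.starts_with("test-gpt-5-codex") {
        "test family"
    } else if slug.starts_with("codex-") {
        "internal family"
    } else if slug.starts_with("gpt-5-codex") {
        "production family"
    } else {
        "default family"
    }
}

fn main() {
    // Assumes CODEX_EXPERIMENTAL is unset in the environment.
    assert_eq!(route_slug("codex-experimental"), "internal family");
    assert_eq!(route_slug("gpt-5-codex"), "production family");
}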

View File

@@ -6,7 +6,7 @@
//! key. These override or extend the defaults at runtime.
use crate::CodexAuth;
use codex_protocol::mcp_protocol::AuthMode;
use codex_app_server_protocol::AuthMode;
use serde::Deserialize;
use serde::Serialize;
use std::collections::HashMap;

File diff suppressed because it is too large

View File

@@ -1,6 +1,7 @@
//! Project-level documentation discovery.
//!
//! Project-level documentation can be stored in files named `AGENTS.md`.
//! Project-level documentation is primarily stored in files named `AGENTS.md`.
//! Additional fallback filenames can be configured via `project_doc_fallback_filenames`.
//! We include the concatenation of all files found along the path from the
//! repository root to the current working directory as follows:
//!
@@ -13,12 +14,13 @@
//! 3. We do **not** walk past the Git root.
use crate::config::Config;
use dunce::canonicalize as normalize_path;
use std::path::PathBuf;
use tokio::io::AsyncReadExt;
use tracing::error;
/// Currently, we only match the filename `AGENTS.md` exactly.
const CANDIDATE_FILENAMES: &[&str] = &["AGENTS.md"];
/// Default filename scanned for project-level docs.
pub const DEFAULT_PROJECT_DOC_FILENAME: &str = "AGENTS.md";
/// When both `Config::instructions` and the project doc are present, they will
/// be concatenated with the following separator.
@@ -108,7 +110,7 @@ pub async fn read_project_docs(config: &Config) -> std::io::Result<Option<String
/// is zero, returns an empty list.
pub fn discover_project_doc_paths(config: &Config) -> std::io::Result<Vec<PathBuf>> {
let mut dir = config.cwd.clone();
if let Ok(canon) = dir.canonicalize() {
if let Ok(canon) = normalize_path(&dir) {
dir = canon;
}
@@ -152,8 +154,9 @@ pub fn discover_project_doc_paths(config: &Config) -> std::io::Result<Vec<PathBu
};
let mut found: Vec<PathBuf> = Vec::new();
let candidate_filenames = candidate_filenames(config);
for d in search_dirs {
for name in CANDIDATE_FILENAMES {
for name in &candidate_filenames {
let candidate = d.join(name);
match std::fs::symlink_metadata(&candidate) {
Ok(md) => {
@@ -173,6 +176,22 @@ pub fn discover_project_doc_paths(config: &Config) -> std::io::Result<Vec<PathBu
Ok(found)
}
fn candidate_filenames<'a>(config: &'a Config) -> Vec<&'a str> {
let mut names: Vec<&'a str> =
Vec::with_capacity(1 + config.project_doc_fallback_filenames.len());
names.push(DEFAULT_PROJECT_DOC_FILENAME);
for candidate in &config.project_doc_fallback_filenames {
let candidate = candidate.as_str();
if candidate.is_empty() {
continue;
}
if !names.contains(&candidate) {
names.push(candidate);
}
}
names
}
#[cfg(test)]
mod tests {
use super::*;
@@ -202,6 +221,20 @@ mod tests {
config
}
fn make_config_with_fallback(
root: &TempDir,
limit: usize,
instructions: Option<&str>,
fallbacks: &[&str],
) -> Config {
let mut config = make_config(root, limit, instructions);
config.project_doc_fallback_filenames = fallbacks
.iter()
.map(std::string::ToString::to_string)
.collect();
config
}
/// AGENTS.md missing should yield `None`.
#[tokio::test]
async fn no_doc_file_returns_none() {
@@ -347,4 +380,45 @@ mod tests {
let res = get_user_instructions(&cfg).await.expect("doc expected");
assert_eq!(res, "root doc\n\ncrate doc");
}
/// When AGENTS.md is absent but a configured fallback exists, the fallback is used.
#[tokio::test]
async fn uses_configured_fallback_when_agents_missing() {
let tmp = tempfile::tempdir().expect("tempdir");
fs::write(tmp.path().join("EXAMPLE.md"), "example instructions").unwrap();
let cfg = make_config_with_fallback(&tmp, 4096, None, &["EXAMPLE.md"]);
let res = get_user_instructions(&cfg)
.await
.expect("fallback doc expected");
assert_eq!(res, "example instructions");
}
/// AGENTS.md remains preferred when both AGENTS.md and fallbacks are present.
#[tokio::test]
async fn agents_md_preferred_over_fallbacks() {
let tmp = tempfile::tempdir().expect("tempdir");
fs::write(tmp.path().join("AGENTS.md"), "primary").unwrap();
fs::write(tmp.path().join("EXAMPLE.md"), "secondary").unwrap();
let cfg = make_config_with_fallback(&tmp, 4096, None, &["EXAMPLE.md", ".example.md"]);
let res = get_user_instructions(&cfg)
.await
.expect("AGENTS.md should win");
assert_eq!(res, "primary");
let discovery = discover_project_doc_paths(&cfg).expect("discover paths");
assert_eq!(discovery.len(), 1);
assert_eq!(
    discovery[0].file_name().unwrap().to_string_lossy(),
    DEFAULT_PROJECT_DOC_FILENAME
);
}
}
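The fallback logic added in `candidate_filenames` reduces to a small ordering rule: `AGENTS.md` always scans first, configured fallbacks follow in config order, and empty or duplicate names are dropped. A standalone restatement:

const DEFAULT_PROJECT_DOC_FILENAME: &str = "AGENTS.md";

fn candidate_filenames(fallbacks: &[String]) -> Vec<&str> {
    let mut names = vec![DEFAULT_PROJECT_DOC_FILENAME];
    for candidate in fallbacks {
        let candidate = candidate.as_str();
        if !candidate.is_empty() && !names.contains(&candidate) {
            names.push(candidate);
        }
    }
    names
}

fn main() {
    let fallbacks = vec![
        "EXAMPLE.md".to_string(),
        String::new(),           // ignored: empty
        "AGENTS.md".to_string(), // ignored: duplicate of the default
    ];
    assert_eq!(candidate_filenames(&fallbacks), vec!["AGENTS.md", "EXAMPLE.md"]);
}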

View File

@@ -17,6 +17,7 @@ use super::SESSIONS_SUBDIR;
use crate::protocol::EventMsg;
use codex_protocol::protocol::RolloutItem;
use codex_protocol::protocol::RolloutLine;
use codex_protocol::protocol::SessionSource;
/// Returned page of conversation summaries.
#[derive(Debug, Default, PartialEq)]
@@ -40,10 +41,25 @@ pub struct ConversationItem {
pub head: Vec<serde_json::Value>,
/// Last up to `TAIL_RECORD_LIMIT` JSONL response records parsed as JSON.
pub tail: Vec<serde_json::Value>,
/// RFC3339 timestamp string for when the session was created, if available.
pub created_at: Option<String>,
/// RFC3339 timestamp string for the most recent response in the tail, if available.
pub updated_at: Option<String>,
}
#[derive(Default)]
struct HeadTailSummary {
head: Vec<serde_json::Value>,
tail: Vec<serde_json::Value>,
saw_session_meta: bool,
saw_user_event: bool,
source: Option<SessionSource>,
created_at: Option<String>,
updated_at: Option<String>,
}
/// Hard cap to bound worst-case work per request.
const MAX_SCAN_FILES: usize = 100;
const MAX_SCAN_FILES: usize = 10000;
const HEAD_RECORD_LIMIT: usize = 10;
const TAIL_RECORD_LIMIT: usize = 10;
@@ -92,6 +108,7 @@ pub(crate) async fn get_conversations(
codex_home: &Path,
page_size: usize,
cursor: Option<&Cursor>,
allowed_sources: &[SessionSource],
) -> io::Result<ConversationsPage> {
let mut root = codex_home.to_path_buf();
root.push(SESSIONS_SUBDIR);
@@ -107,7 +124,8 @@ pub(crate) async fn get_conversations(
let anchor = cursor.cloned();
let result = traverse_directories_for_paths(root.clone(), page_size, anchor).await?;
let result =
traverse_directories_for_paths(root.clone(), page_size, anchor, allowed_sources).await?;
Ok(result)
}
@@ -126,6 +144,7 @@ async fn traverse_directories_for_paths(
root: PathBuf,
page_size: usize,
anchor: Option<Cursor>,
allowed_sources: &[SessionSource],
) -> io::Result<ConversationsPage> {
let mut items: Vec<ConversationItem> = Vec::with_capacity(page_size);
let mut scanned_files = 0usize;
@@ -179,13 +198,33 @@ async fn traverse_directories_for_paths(
}
// Read head and simultaneously detect message events within the same
// first N JSONL records to avoid a second file read.
let (head, tail, saw_session_meta, saw_user_event) =
read_head_and_tail(&path, HEAD_RECORD_LIMIT, TAIL_RECORD_LIMIT)
.await
.unwrap_or((Vec::new(), Vec::new(), false, false));
let summary = read_head_and_tail(&path, HEAD_RECORD_LIMIT, TAIL_RECORD_LIMIT)
.await
.unwrap_or_default();
if !allowed_sources.is_empty()
&& !summary
.source
.is_some_and(|source| allowed_sources.iter().any(|s| s == &source))
{
continue;
}
// Apply filters: must have session meta and at least one user message event
if saw_session_meta && saw_user_event {
items.push(ConversationItem { path, head, tail });
if summary.saw_session_meta && summary.saw_user_event {
let HeadTailSummary {
head,
tail,
created_at,
mut updated_at,
..
} = summary;
updated_at = updated_at.or_else(|| created_at.clone());
items.push(ConversationItem {
path,
head,
tail,
created_at,
updated_at,
});
}
}
}
@@ -293,17 +332,15 @@ async fn read_head_and_tail(
path: &Path,
head_limit: usize,
tail_limit: usize,
) -> io::Result<(Vec<serde_json::Value>, Vec<serde_json::Value>, bool, bool)> {
) -> io::Result<HeadTailSummary> {
use tokio::io::AsyncBufReadExt;
let file = tokio::fs::File::open(path).await?;
let reader = tokio::io::BufReader::new(file);
let mut lines = reader.lines();
let mut head: Vec<serde_json::Value> = Vec::new();
let mut saw_session_meta = false;
let mut saw_user_event = false;
let mut summary = HeadTailSummary::default();
while head.len() < head_limit {
while summary.head.len() < head_limit {
let line_opt = lines.next_line().await?;
let Some(line) = line_opt else { break };
let trimmed = line.trim();
@@ -316,14 +353,23 @@ async fn read_head_and_tail(
match rollout_line.item {
RolloutItem::SessionMeta(session_meta_line) => {
summary.source = Some(session_meta_line.meta.source);
summary.created_at = summary
.created_at
.clone()
.or_else(|| Some(rollout_line.timestamp.clone()));
if let Ok(val) = serde_json::to_value(session_meta_line) {
head.push(val);
saw_session_meta = true;
summary.head.push(val);
summary.saw_session_meta = true;
}
}
RolloutItem::ResponseItem(item) => {
summary.created_at = summary
.created_at
.clone()
.or_else(|| Some(rollout_line.timestamp.clone()));
if let Ok(val) = serde_json::to_value(item) {
head.push(val);
summary.head.push(val);
}
}
RolloutItem::TurnContext(_) => {
@@ -334,28 +380,30 @@ async fn read_head_and_tail(
}
RolloutItem::EventMsg(ev) => {
if matches!(ev, EventMsg::UserMessage(_)) {
saw_user_event = true;
summary.saw_user_event = true;
}
}
}
}
let tail = if tail_limit == 0 {
Vec::new()
} else {
read_tail_records(path, tail_limit).await?
};
Ok((head, tail, saw_session_meta, saw_user_event))
if tail_limit != 0 {
let (tail, updated_at) = read_tail_records(path, tail_limit).await?;
summary.tail = tail;
summary.updated_at = updated_at;
}
Ok(summary)
}
async fn read_tail_records(path: &Path, max_records: usize) -> io::Result<Vec<serde_json::Value>> {
async fn read_tail_records(
path: &Path,
max_records: usize,
) -> io::Result<(Vec<serde_json::Value>, Option<String>)> {
use std::io::SeekFrom;
use tokio::io::AsyncReadExt;
use tokio::io::AsyncSeekExt;
if max_records == 0 {
return Ok(Vec::new());
return Ok((Vec::new(), None));
}
const CHUNK_SIZE: usize = 8192;
@@ -363,24 +411,28 @@ async fn read_tail_records(path: &Path, max_records: usize) -> io::Result<Vec<se
let mut file = tokio::fs::File::open(path).await?;
let mut pos = file.seek(SeekFrom::End(0)).await?;
if pos == 0 {
return Ok(Vec::new());
return Ok((Vec::new(), None));
}
let mut buffer: Vec<u8> = Vec::new();
let mut latest_timestamp: Option<String> = None;
loop {
let slice_start = match (pos > 0, buffer.iter().position(|&b| b == b'\n')) {
(true, Some(idx)) => idx + 1,
_ => 0,
};
let tail = collect_last_response_values(&buffer[slice_start..], max_records);
let (tail, newest_ts) = collect_last_response_values(&buffer[slice_start..], max_records);
if latest_timestamp.is_none() {
latest_timestamp = newest_ts.clone();
}
if tail.len() >= max_records || pos == 0 {
return Ok(tail);
return Ok((tail, latest_timestamp.or(newest_ts)));
}
let read_size = CHUNK_SIZE.min(pos as usize);
if read_size == 0 {
return Ok(tail);
return Ok((tail, latest_timestamp.or(newest_ts)));
}
pos -= read_size as u64;
file.seek(SeekFrom::Start(pos)).await?;
@@ -391,15 +443,19 @@ async fn read_tail_records(path: &Path, max_records: usize) -> io::Result<Vec<se
}
}
fn collect_last_response_values(buffer: &[u8], max_records: usize) -> Vec<serde_json::Value> {
fn collect_last_response_values(
buffer: &[u8],
max_records: usize,
) -> (Vec<serde_json::Value>, Option<String>) {
use std::borrow::Cow;
if buffer.is_empty() || max_records == 0 {
return Vec::new();
return (Vec::new(), None);
}
let text: Cow<'_, str> = String::from_utf8_lossy(buffer);
let mut collected_rev: Vec<serde_json::Value> = Vec::new();
let mut latest_timestamp: Option<String> = None;
for line in text.lines().rev() {
let trimmed = line.trim();
if trimmed.is_empty() {
@@ -407,9 +463,13 @@ fn collect_last_response_values(buffer: &[u8], max_records: usize) -> Vec<serde_
}
let parsed: serde_json::Result<RolloutLine> = serde_json::from_str(trimmed);
let Ok(rollout_line) = parsed else { continue };
if let RolloutItem::ResponseItem(item) = rollout_line.item
&& let Ok(val) = serde_json::to_value(item)
let RolloutLine { timestamp, item } = rollout_line;
if let RolloutItem::ResponseItem(item) = item
&& let Ok(val) = serde_json::to_value(&item)
{
if latest_timestamp.is_none() {
latest_timestamp = Some(timestamp.clone());
}
collected_rev.push(val);
if collected_rev.len() == max_records {
break;
@@ -417,7 +477,7 @@ fn collect_last_response_values(buffer: &[u8], max_records: usize) -> Vec<serde_
}
}
collected_rev.reverse();
collected_rev
(collected_rev, latest_timestamp)
}
/// Locate a recorded conversation rollout file by its UUID string using the existing
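The backwards tail read above is the subtle part of this change. Here is a reduced synchronous sketch of the same strategy, for illustration only (the real code is async and parses `RolloutLine` JSON instead of returning raw lines):

use std::io::{Read, Seek, SeekFrom};

fn last_lines(path: &std::path::Path, max: usize) -> std::io::Result<Vec<String>> {
    const CHUNK_SIZE: usize = 8192;
    let mut file = std::fs::File::open(path)?;
    let mut pos = file.seek(SeekFrom::End(0))?;
    let mut buffer: Vec<u8> = Vec::new();
    loop {
        // Mid-file, bytes before the first newline may belong to a partial
        // line we have not fully read yet, so skip them for this pass.
        let start = match (pos > 0, buffer.iter().position(|&b| b == b'\n')) {
            (true, Some(idx)) => idx + 1,
            _ => 0,
        };
        let text = String::from_utf8_lossy(&buffer[start..]);
        let mut lines: Vec<String> = text.lines().rev().take(max).map(str::to_string).collect();
        if lines.len() >= max || pos == 0 {
            lines.reverse();
            return Ok(lines);
        }
        // Pull the previous chunk and prepend it to what we already have.
        let read_size = CHUNK_SIZE.min(pos as usize);
        pos -= read_size as u64;
        file.seek(SeekFrom::Start(pos))?;
        let mut chunk = vec![0u8; read_size];
        file.read_exact(&mut chunk)?;
        chunk.extend_from_slice(&buffer);
        buffer = chunk;
    }
}

The `latest_timestamp` bookkeeping in the real `read_tail_records` piggybacks on this loop: the first `ResponseItem` seen while scanning backwards is the newest, which is where `updated_at` comes from.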

View File

@@ -1,7 +1,11 @@
//! Rollout module: persistence and discovery of session rollout files.
use codex_protocol::protocol::SessionSource;
pub const SESSIONS_SUBDIR: &str = "sessions";
pub const ARCHIVED_SESSIONS_SUBDIR: &str = "archived_sessions";
pub const INTERACTIVE_SESSION_SOURCES: &[SessionSource] =
&[SessionSource::Cli, SessionSource::VSCode];
pub mod list;
pub(crate) mod policy;

View File

@@ -70,6 +70,7 @@ pub(crate) fn should_persist_event_msg(ev: &EventMsg) -> bool {
| EventMsg::ListCustomPromptsResponse(_)
| EventMsg::PlanUpdate(_)
| EventMsg::ShutdownComplete
| EventMsg::ViewImageToolCall(_)
| EventMsg::ConversationPath(_) => false,
}
}

View File

@@ -6,7 +6,7 @@ use std::io::Error as IoError;
use std::path::Path;
use std::path::PathBuf;
use codex_protocol::mcp_protocol::ConversationId;
use codex_protocol::ConversationId;
use serde_json::Value;
use time::OffsetDateTime;
use time::format_description::FormatItem;
@@ -32,6 +32,7 @@ use codex_protocol::protocol::RolloutItem;
use codex_protocol::protocol::RolloutLine;
use codex_protocol::protocol::SessionMeta;
use codex_protocol::protocol::SessionMetaLine;
use codex_protocol::protocol::SessionSource;
/// Records all [`ResponseItem`]s for a session and flushes them to disk after
/// every update.
@@ -53,6 +54,7 @@ pub enum RolloutRecorderParams {
Create {
conversation_id: ConversationId,
instructions: Option<String>,
source: SessionSource,
},
Resume {
path: PathBuf,
@@ -71,10 +73,15 @@ enum RolloutCmd {
}
impl RolloutRecorderParams {
pub fn new(conversation_id: ConversationId, instructions: Option<String>) -> Self {
pub fn new(
conversation_id: ConversationId,
instructions: Option<String>,
source: SessionSource,
) -> Self {
Self::Create {
conversation_id,
instructions,
source,
}
}
@@ -89,8 +96,9 @@ impl RolloutRecorder {
codex_home: &Path,
page_size: usize,
cursor: Option<&Cursor>,
allowed_sources: &[SessionSource],
) -> std::io::Result<ConversationsPage> {
get_conversations(codex_home, page_size, cursor).await
get_conversations(codex_home, page_size, cursor, allowed_sources).await
}
/// Attempt to create a new [`RolloutRecorder`]. If the sessions directory
@@ -101,6 +109,7 @@ impl RolloutRecorder {
RolloutRecorderParams::Create {
conversation_id,
instructions,
source,
} => {
let LogFileInfo {
file,
@@ -127,6 +136,7 @@ impl RolloutRecorder {
originator: originator().value.clone(),
cli_version: env!("CARGO_PKG_VERSION").to_string(),
instructions,
source,
}),
)
}

View File

@@ -12,13 +12,14 @@ use time::format_description::FormatItem;
use time::macros::format_description;
use uuid::Uuid;
use crate::rollout::INTERACTIVE_SESSION_SOURCES;
use crate::rollout::list::ConversationItem;
use crate::rollout::list::ConversationsPage;
use crate::rollout::list::Cursor;
use crate::rollout::list::get_conversation;
use crate::rollout::list::get_conversations;
use anyhow::Result;
use codex_protocol::mcp_protocol::ConversationId;
use codex_protocol::ConversationId;
use codex_protocol::models::ContentItem;
use codex_protocol::models::ResponseItem;
use codex_protocol::protocol::CompactedItem;
@@ -28,13 +29,17 @@ use codex_protocol::protocol::RolloutItem;
use codex_protocol::protocol::RolloutLine;
use codex_protocol::protocol::SessionMeta;
use codex_protocol::protocol::SessionMetaLine;
use codex_protocol::protocol::SessionSource;
use codex_protocol::protocol::UserMessageEvent;
const NO_SOURCE_FILTER: &[SessionSource] = &[];
fn write_session_file(
root: &Path,
ts_str: &str,
uuid: Uuid,
num_records: usize,
source: Option<SessionSource>,
) -> std::io::Result<(OffsetDateTime, Uuid)> {
let format: &[FormatItem] =
format_description!("[year]-[month]-[day]T[hour]-[minute]-[second]");
@@ -52,17 +57,23 @@ fn write_session_file(
let file_path = dir.join(filename);
let mut file = File::create(file_path)?;
let mut payload = serde_json::json!({
"id": uuid,
"timestamp": ts_str,
"instructions": null,
"cwd": ".",
"originator": "test_originator",
"cli_version": "test_version",
});
if let Some(source) = source {
payload["source"] = serde_json::to_value(source).unwrap();
}
let meta = serde_json::json!({
"timestamp": ts_str,
"type": "session_meta",
"payload": {
"id": uuid,
"timestamp": ts_str,
"instructions": null,
"cwd": ".",
"originator": "test_originator",
"cli_version": "test_version"
}
"payload": payload,
});
writeln!(file, "{meta}")?;
@@ -99,11 +110,34 @@ async fn test_list_conversations_latest_first() {
let u3 = Uuid::from_u128(3);
// Create three sessions across three days
write_session_file(home, "2025-01-01T12-00-00", u1, 3).unwrap();
write_session_file(home, "2025-01-02T12-00-00", u2, 3).unwrap();
write_session_file(home, "2025-01-03T12-00-00", u3, 3).unwrap();
write_session_file(
home,
"2025-01-01T12-00-00",
u1,
3,
Some(SessionSource::VSCode),
)
.unwrap();
write_session_file(
home,
"2025-01-02T12-00-00",
u2,
3,
Some(SessionSource::VSCode),
)
.unwrap();
write_session_file(
home,
"2025-01-03T12-00-00",
u3,
3,
Some(SessionSource::VSCode),
)
.unwrap();
let page = get_conversations(home, 10, None).await.unwrap();
let page = get_conversations(home, 10, None, INTERACTIVE_SESSION_SOURCES)
.await
.unwrap();
// Build expected objects
let p1 = home
@@ -131,7 +165,8 @@ async fn test_list_conversations_latest_first() {
"instructions": null,
"cwd": ".",
"originator": "test_originator",
"cli_version": "test_version"
"cli_version": "test_version",
"source": "vscode",
})];
let head_2 = vec![serde_json::json!({
"id": u2,
@@ -139,7 +174,8 @@ async fn test_list_conversations_latest_first() {
"instructions": null,
"cwd": ".",
"originator": "test_originator",
"cli_version": "test_version"
"cli_version": "test_version",
"source": "vscode",
})];
let head_1 = vec![serde_json::json!({
"id": u1,
@@ -147,7 +183,8 @@ async fn test_list_conversations_latest_first() {
"instructions": null,
"cwd": ".",
"originator": "test_originator",
"cli_version": "test_version"
"cli_version": "test_version",
"source": "vscode",
})];
let expected_cursor: Cursor =
@@ -159,16 +196,22 @@ async fn test_list_conversations_latest_first() {
path: p1,
head: head_3,
tail: Vec::new(),
created_at: Some("2025-01-03T12-00-00".into()),
updated_at: Some("2025-01-03T12-00-00".into()),
},
ConversationItem {
path: p2,
head: head_2,
tail: Vec::new(),
created_at: Some("2025-01-02T12-00-00".into()),
updated_at: Some("2025-01-02T12-00-00".into()),
},
ConversationItem {
path: p3,
head: head_1,
tail: Vec::new(),
created_at: Some("2025-01-01T12-00-00".into()),
updated_at: Some("2025-01-01T12-00-00".into()),
},
],
next_cursor: Some(expected_cursor),
@@ -192,13 +235,50 @@ async fn test_pagination_cursor() {
let u5 = Uuid::from_u128(55);
// Oldest to newest
write_session_file(home, "2025-03-01T09-00-00", u1, 1).unwrap();
write_session_file(home, "2025-03-02T09-00-00", u2, 1).unwrap();
write_session_file(home, "2025-03-03T09-00-00", u3, 1).unwrap();
write_session_file(home, "2025-03-04T09-00-00", u4, 1).unwrap();
write_session_file(home, "2025-03-05T09-00-00", u5, 1).unwrap();
write_session_file(
home,
"2025-03-01T09-00-00",
u1,
1,
Some(SessionSource::VSCode),
)
.unwrap();
write_session_file(
home,
"2025-03-02T09-00-00",
u2,
1,
Some(SessionSource::VSCode),
)
.unwrap();
write_session_file(
home,
"2025-03-03T09-00-00",
u3,
1,
Some(SessionSource::VSCode),
)
.unwrap();
write_session_file(
home,
"2025-03-04T09-00-00",
u4,
1,
Some(SessionSource::VSCode),
)
.unwrap();
write_session_file(
home,
"2025-03-05T09-00-00",
u5,
1,
Some(SessionSource::VSCode),
)
.unwrap();
let page1 = get_conversations(home, 2, None).await.unwrap();
let page1 = get_conversations(home, 2, None, INTERACTIVE_SESSION_SOURCES)
.await
.unwrap();
let p5 = home
.join("sessions")
.join("2025")
@@ -217,7 +297,8 @@ async fn test_pagination_cursor() {
"instructions": null,
"cwd": ".",
"originator": "test_originator",
"cli_version": "test_version"
"cli_version": "test_version",
"source": "vscode",
})];
let head_4 = vec![serde_json::json!({
"id": u4,
@@ -225,7 +306,8 @@ async fn test_pagination_cursor() {
"instructions": null,
"cwd": ".",
"originator": "test_originator",
"cli_version": "test_version"
"cli_version": "test_version",
"source": "vscode",
})];
let expected_cursor1: Cursor =
serde_json::from_str(&format!("\"2025-03-04T09-00-00|{u4}\"")).unwrap();
@@ -235,11 +317,15 @@ async fn test_pagination_cursor() {
path: p5,
head: head_5,
tail: Vec::new(),
created_at: Some("2025-03-05T09-00-00".into()),
updated_at: Some("2025-03-05T09-00-00".into()),
},
ConversationItem {
path: p4,
head: head_4,
tail: Vec::new(),
created_at: Some("2025-03-04T09-00-00".into()),
updated_at: Some("2025-03-04T09-00-00".into()),
},
],
next_cursor: Some(expected_cursor1.clone()),
@@ -248,9 +334,14 @@ async fn test_pagination_cursor() {
};
assert_eq!(page1, expected_page1);
let page2 = get_conversations(home, 2, page1.next_cursor.as_ref())
.await
.unwrap();
let page2 = get_conversations(
home,
2,
page1.next_cursor.as_ref(),
INTERACTIVE_SESSION_SOURCES,
)
.await
.unwrap();
let p3 = home
.join("sessions")
.join("2025")
@@ -269,7 +360,8 @@ async fn test_pagination_cursor() {
"instructions": null,
"cwd": ".",
"originator": "test_originator",
"cli_version": "test_version"
"cli_version": "test_version",
"source": "vscode",
})];
let head_2 = vec![serde_json::json!({
"id": u2,
@@ -277,7 +369,8 @@ async fn test_pagination_cursor() {
"instructions": null,
"cwd": ".",
"originator": "test_originator",
"cli_version": "test_version"
"cli_version": "test_version",
"source": "vscode",
})];
let expected_cursor2: Cursor =
serde_json::from_str(&format!("\"2025-03-02T09-00-00|{u2}\"")).unwrap();
@@ -287,11 +380,15 @@ async fn test_pagination_cursor() {
path: p3,
head: head_3,
tail: Vec::new(),
created_at: Some("2025-03-03T09-00-00".into()),
updated_at: Some("2025-03-03T09-00-00".into()),
},
ConversationItem {
path: p2,
head: head_2,
tail: Vec::new(),
created_at: Some("2025-03-02T09-00-00".into()),
updated_at: Some("2025-03-02T09-00-00".into()),
},
],
next_cursor: Some(expected_cursor2.clone()),
@@ -300,9 +397,14 @@ async fn test_pagination_cursor() {
};
assert_eq!(page2, expected_page2);
let page3 = get_conversations(home, 2, page2.next_cursor.as_ref())
.await
.unwrap();
let page3 = get_conversations(
home,
2,
page2.next_cursor.as_ref(),
INTERACTIVE_SESSION_SOURCES,
)
.await
.unwrap();
let p1 = home
.join("sessions")
.join("2025")
@@ -315,7 +417,8 @@ async fn test_pagination_cursor() {
"instructions": null,
"cwd": ".",
"originator": "test_originator",
"cli_version": "test_version"
"cli_version": "test_version",
"source": "vscode",
})];
let expected_cursor3: Cursor =
serde_json::from_str(&format!("\"2025-03-01T09-00-00|{u1}\"")).unwrap();
@@ -324,6 +427,8 @@ async fn test_pagination_cursor() {
path: p1,
head: head_1,
tail: Vec::new(),
created_at: Some("2025-03-01T09-00-00".into()),
updated_at: Some("2025-03-01T09-00-00".into()),
}],
next_cursor: Some(expected_cursor3),
num_scanned_files: 5, // scanned 05, 04 (anchor), 03, 02 (anchor), 01
@@ -339,9 +444,11 @@ async fn test_get_conversation_contents() {
let uuid = Uuid::new_v4();
let ts = "2025-04-01T10-30-00";
write_session_file(home, ts, uuid, 2).unwrap();
write_session_file(home, ts, uuid, 2, Some(SessionSource::VSCode)).unwrap();
let page = get_conversations(home, 1, None).await.unwrap();
let page = get_conversations(home, 1, None, INTERACTIVE_SESSION_SOURCES)
.await
.unwrap();
let path = &page.items[0].path;
let content = get_conversation(path).await.unwrap();
@@ -359,7 +466,8 @@ async fn test_get_conversation_contents() {
"instructions": null,
"cwd": ".",
"originator": "test_originator",
"cli_version": "test_version"
"cli_version": "test_version",
"source": "vscode",
})];
let expected_cursor: Cursor = serde_json::from_str(&format!("\"{ts}|{uuid}\"")).unwrap();
let expected_page = ConversationsPage {
@@ -367,6 +475,8 @@ async fn test_get_conversation_contents() {
path: expected_path,
head: expected_head,
tail: Vec::new(),
created_at: Some(ts.into()),
updated_at: Some(ts.into()),
}],
next_cursor: Some(expected_cursor),
num_scanned_files: 1,
@@ -375,7 +485,19 @@ async fn test_get_conversation_contents() {
assert_eq!(page, expected_page);
// Entire file contents equality
let meta = serde_json::json!({"timestamp": ts, "type": "session_meta", "payload": {"id": uuid, "timestamp": ts, "instructions": null, "cwd": ".", "originator": "test_originator", "cli_version": "test_version"}});
let meta = serde_json::json!({
"timestamp": ts,
"type": "session_meta",
"payload": {
"id": uuid,
"timestamp": ts,
"instructions": null,
"cwd": ".",
"originator": "test_originator",
"cli_version": "test_version",
"source": "vscode",
}
});
let user_event = serde_json::json!({
"timestamp": ts,
"type": "event_msg",
@@ -410,6 +532,7 @@ async fn test_tail_includes_last_response_items() -> Result<()> {
cwd: ".".into(),
originator: "test_originator".into(),
cli_version: "test_version".into(),
source: SessionSource::VSCode,
},
git: None,
}),
@@ -442,25 +565,30 @@ async fn test_tail_includes_last_response_items() -> Result<()> {
}
drop(file);
let page = get_conversations(home, 1, None).await?;
let page = get_conversations(home, 1, None, INTERACTIVE_SESSION_SOURCES).await?;
let item = page.items.first().expect("conversation item");
let tail_len = item.tail.len();
assert_eq!(tail_len, 10usize.min(total_messages));
let expected: Vec<serde_json::Value> = (total_messages - tail_len..total_messages)
.map(|idx| {
serde_json::to_value(ResponseItem::Message {
id: None,
role: "assistant".into(),
content: vec![ContentItem::OutputText {
text: format!("reply-{idx}"),
}],
serde_json::json!({
"type": "message",
"role": "assistant",
"content": [
{
"type": "output_text",
"text": format!("reply-{idx}"),
}
],
})
.expect("serialize response item")
})
.collect();
assert_eq!(item.tail, expected);
assert_eq!(item.created_at.as_deref(), Some(ts));
let expected_updated = format!("{ts}-{last:02}", last = total_messages - 1);
assert_eq!(item.updated_at.as_deref(), Some(expected_updated.as_str()));
Ok(())
}
@@ -488,6 +616,7 @@ async fn test_tail_handles_short_sessions() -> Result<()> {
cwd: ".".into(),
originator: "test_originator".into(),
cli_version: "test_version".into(),
source: SessionSource::VSCode,
},
git: None,
}),
@@ -519,25 +648,32 @@ async fn test_tail_handles_short_sessions() -> Result<()> {
}
drop(file);
let page = get_conversations(home, 1, None).await?;
let page = get_conversations(home, 1, None, INTERACTIVE_SESSION_SOURCES).await?;
let tail = &page.items.first().expect("conversation item").tail;
assert_eq!(tail.len(), 3);
let expected: Vec<serde_json::Value> = (0..3)
.map(|idx| {
serde_json::to_value(ResponseItem::Message {
id: None,
role: "assistant".into(),
content: vec![ContentItem::OutputText {
text: format!("short-{idx}"),
}],
serde_json::json!({
"type": "message",
"role": "assistant",
"content": [
{
"type": "output_text",
"text": format!("short-{idx}"),
}
],
})
.expect("serialize response item")
})
.collect();
assert_eq!(tail, &expected);
let expected_updated = format!("{ts}-{last:02}", last = 2);
assert_eq!(
page.items[0].updated_at.as_deref(),
Some(expected_updated.as_str())
);
Ok(())
}
@@ -565,6 +701,7 @@ async fn test_tail_skips_trailing_non_responses() -> Result<()> {
cwd: ".".into(),
originator: "test_originator".into(),
cli_version: "test_version".into(),
source: SessionSource::VSCode,
},
git: None,
}),
@@ -610,23 +747,30 @@ async fn test_tail_skips_trailing_non_responses() -> Result<()> {
writeln!(file, "{}", serde_json::to_string(&shutdown_event)?)?;
drop(file);
let page = get_conversations(home, 1, None).await?;
let page = get_conversations(home, 1, None, INTERACTIVE_SESSION_SOURCES).await?;
let tail = &page.items.first().expect("conversation item").tail;
let expected: Vec<serde_json::Value> = (0..4)
.map(|idx| {
serde_json::to_value(ResponseItem::Message {
id: None,
role: "assistant".into(),
content: vec![ContentItem::OutputText {
text: format!("response-{idx}"),
}],
serde_json::json!({
"type": "message",
"role": "assistant",
"content": [
{
"type": "output_text",
"text": format!("response-{idx}"),
}
],
})
.expect("serialize response item")
})
.collect();
assert_eq!(tail, &expected);
let expected_updated = format!("{ts}-{last:02}", last = 3);
assert_eq!(
page.items[0].updated_at.as_deref(),
Some(expected_updated.as_str())
);
Ok(())
}
@@ -641,11 +785,13 @@ async fn test_stable_ordering_same_second_pagination() {
let u2 = Uuid::from_u128(2);
let u3 = Uuid::from_u128(3);
write_session_file(home, ts, u1, 0).unwrap();
write_session_file(home, ts, u2, 0).unwrap();
write_session_file(home, ts, u3, 0).unwrap();
write_session_file(home, ts, u1, 0, Some(SessionSource::VSCode)).unwrap();
write_session_file(home, ts, u2, 0, Some(SessionSource::VSCode)).unwrap();
write_session_file(home, ts, u3, 0, Some(SessionSource::VSCode)).unwrap();
let page1 = get_conversations(home, 2, None).await.unwrap();
let page1 = get_conversations(home, 2, None, INTERACTIVE_SESSION_SOURCES)
.await
.unwrap();
let p3 = home
.join("sessions")
@@ -666,7 +812,8 @@ async fn test_stable_ordering_same_second_pagination() {
"instructions": null,
"cwd": ".",
"originator": "test_originator",
"cli_version": "test_version"
"cli_version": "test_version",
"source": "vscode",
})]
};
let expected_cursor1: Cursor = serde_json::from_str(&format!("\"{ts}|{u2}\"")).unwrap();
@@ -676,11 +823,15 @@ async fn test_stable_ordering_same_second_pagination() {
path: p3,
head: head(u3),
tail: Vec::new(),
created_at: Some(ts.to_string()),
updated_at: Some(ts.to_string()),
},
ConversationItem {
path: p2,
head: head(u2),
tail: Vec::new(),
created_at: Some(ts.to_string()),
updated_at: Some(ts.to_string()),
},
],
next_cursor: Some(expected_cursor1.clone()),
@@ -689,9 +840,14 @@ async fn test_stable_ordering_same_second_pagination() {
};
assert_eq!(page1, expected_page1);
let page2 = get_conversations(home, 2, page1.next_cursor.as_ref())
.await
.unwrap();
let page2 = get_conversations(
home,
2,
page1.next_cursor.as_ref(),
INTERACTIVE_SESSION_SOURCES,
)
.await
.unwrap();
let p1 = home
.join("sessions")
.join("2025")
@@ -704,6 +860,8 @@ async fn test_stable_ordering_same_second_pagination() {
path: p1,
head: head(u1),
tail: Vec::new(),
created_at: Some(ts.to_string()),
updated_at: Some(ts.to_string()),
}],
next_cursor: Some(expected_cursor2),
num_scanned_files: 3, // scanned u3, u2 (anchor), u1
@@ -711,3 +869,59 @@ async fn test_stable_ordering_same_second_pagination() {
};
assert_eq!(page2, expected_page2);
}
#[tokio::test]
async fn test_source_filter_excludes_non_matching_sessions() {
let temp = TempDir::new().unwrap();
let home = temp.path();
let interactive_id = Uuid::from_u128(42);
let non_interactive_id = Uuid::from_u128(77);
write_session_file(
home,
"2025-08-02T10-00-00",
interactive_id,
2,
Some(SessionSource::Cli),
)
.unwrap();
write_session_file(
home,
"2025-08-01T10-00-00",
non_interactive_id,
2,
Some(SessionSource::Exec),
)
.unwrap();
let interactive_only = get_conversations(home, 10, None, INTERACTIVE_SESSION_SOURCES)
.await
.unwrap();
let paths: Vec<_> = interactive_only
.items
.iter()
.map(|item| item.path.as_path())
.collect();
assert_eq!(paths.len(), 1);
assert!(paths.iter().all(|path| {
path.ends_with("rollout-2025-08-02T10-00-00-00000000-0000-0000-0000-00000000002a.jsonl")
}));
let all_sessions = get_conversations(home, 10, None, NO_SOURCE_FILTER)
.await
.unwrap();
let all_paths: Vec<_> = all_sessions
.items
.into_iter()
.map(|item| item.path)
.collect();
assert_eq!(all_paths.len(), 2);
assert!(all_paths.iter().any(|path| {
path.ends_with("rollout-2025-08-02T10-00-00-00000000-0000-0000-0000-00000000002a.jsonl")
}));
assert!(all_paths.iter().any(|path| {
path.ends_with("rollout-2025-08-01T10-00-00-00000000-0000-0000-0000-00000000004d.jsonl")
}));
}
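For readers decoding the cursor literals in these tests: a page cursor serializes as the JSON string `"timestamp|uuid"`, so resuming a listing only needs the last item of the previous page. A hand-rolled parse for illustration (the real `Cursor` type owns this logic):

fn parse_cursor(raw: &str) -> Option<(&str, &str)> {
    raw.split_once('|')
}

fn main() {
    // Matches the fixture above: Uuid::from_u128(55) renders as ...0037.
    let cursor = "2025-03-05T09-00-00|00000000-0000-0000-0000-000000000037";
    let (ts, uuid) = parse_cursor(cursor).expect("well-formed cursor");
    assert_eq!(ts, "2025-03-05T09-00-00");
    assert_eq!(uuid.len(), 36);
}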

View File

@@ -125,9 +125,10 @@ pub fn assess_command_safety(
// the session _because_ they know it needs to run outside a sandbox.
if is_known_safe_command(command) || approved.contains(command) {
let user_explicitly_approved = approved.contains(command);
return SafetyCheck::AutoApprove {
sandbox_type: SandboxType::None,
user_explicitly_approved: false,
user_explicitly_approved,
};
}
@@ -380,7 +381,7 @@ mod tests {
safety_check,
SafetyCheck::AutoApprove {
sandbox_type: SandboxType::None,
user_explicitly_approved: false,
user_explicitly_approved: true,
}
);
}
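The one-line change above fixes a dropped bit of state: a command found in the session-approved set must report `user_explicitly_approved: true` so the executor can re-record it. A standalone restatement of the corrected branch, with local stand-in types:

use std::collections::HashSet;

// Returns Some(user_explicitly_approved) when the command auto-approves,
// None when it needs further assessment. Local reduction, not the crate API.
fn auto_approve(command: &[String], known_safe: bool, approved: &HashSet<Vec<String>>) -> Option<bool> {
    if known_safe || approved.contains(command) {
        // Previously hard-coded to false, which lost the distinction between
        // "known safe" and "the user approved this earlier in the session".
        Some(approved.contains(command))
    } else {
        None
    }
}

fn main() {
    let mut approved = HashSet::new();
    approved.insert(vec!["cargo".to_string(), "test".to_string()]);
    assert_eq!(auto_approve(&["cargo".to_string(), "test".to_string()], false, &approved), Some(true));
    assert_eq!(auto_approve(&["ls".to_string()], true, &approved), Some(false));
}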

Some files were not shown because too many files have changed in this diff