Compare commits

..

61 Commits

Author SHA1 Message Date
Ahmed Ibrahim
9b7329699a Merge branch 'fix-timeout' of https://github.com/openai/codex into fix-timeout 2025-11-06 10:34:20 -08:00
Ahmed Ibrahim
8307d9bf3b fix 2025-11-06 10:34:11 -08:00
Ahmed Ibrahim
294dafcacf Merge branch 'main' into fix-timeout 2025-11-06 10:29:56 -08:00
Ahmed Ibrahim
0b26c76047 fix 2025-11-06 10:27:24 -08:00
Ahmed Ibrahim
dbad5eeec6 chore: fix grammar mistakes (#6326) 2025-11-06 09:48:59 -08:00
Ahmed Ibrahim
ca9f9c6f5d usage 2025-11-06 09:25:43 -08:00
vladislav doster
4b4252210b docs: Fix code fence and typo in advanced guide (#6295)
- add `bash` to code fence
- fix spelling of `JavaScript`
2025-11-06 09:00:28 -08:00
Owen Lin
6582554926 [app-server] feat: v2 Turn APIs (#6216)
Implements:
```
turn/start
turn/interrupt
```

along with their integration tests. These are relatively light wrappers
around the existing core logic, and changes to core logic are minimal.

However, one improvement was made for developer ergonomics:
- `turn/start` replaces both `SendUserMessage` (no turn overrides) and
`SendUserTurn` (can override model, approval policy, etc.)
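A minimal sketch of the consolidated call, assuming illustrative field names (`threadId`, `input`, `overrides`) rather than the confirmed wire format:
```rust
use serde_json::json;

fn main() {
    // Hypothetical v2 `turn/start` request. One method now covers both the
    // plain-message case (no overrides) and the per-turn override case that
    // `SendUserTurn` used to handle.
    let request = json!({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "turn/start",
        "params": {
            "threadId": "thread-123",                               // assumed field name
            "input": [{ "type": "text", "text": "run the tests" }], // assumed shape
            "overrides": { "model": "gpt-5", "approvalPolicy": "on-request" } // optional
        }
    });
    println!("{}", serde_json::to_string_pretty(&request).unwrap());
}
```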
2025-11-06 16:36:36 +00:00
Thibault Sottiaux
649ce520c4 chore: rename for clarity (#6319)
Co-authored-by: Ahmed Ibrahim <aibrahim@openai.com>
2025-11-06 08:32:57 -08:00
Thibault Sottiaux
667e841d3e feat: support models with single reasoning effort (#6300) 2025-11-05 23:06:45 -08:00
Ahmed Ibrahim
63e1ef25af feat: add model nudge for queries (#6286) 2025-11-06 03:42:59 +00:00
Celia Chen
229d18f4d2 [App-server] Add account/login/cancel v2 endpoint (#6288)
Add the `account/login/cancel` v2 endpoint for auth. This is a similar
implementation to the `cancelLoginChatgpt` v1 endpoint.
2025-11-06 01:13:55 +00:00
wizard
4a1a7f9685 fix: ToC so it doesn’t include itself or duplicate the end marker (#4388)
Turns out the ToC was including itself when generating, which messed up
comparisons and sometimes made the file rewrite endlessly.

Also fixed the slice so `<!-- End ToC -->` doesn't get duplicated when
we insert the new ToC.

Should behave nicely now: no extra rewrites, no doubled markers.
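A minimal sketch of the idempotent splice this implies, assuming a `<!-- Begin ToC -->` marker (only the end marker is named above):
```rust
// Regenerate the region between the ToC markers without feeding the old ToC
// back into generation and without duplicating the end marker.
fn replace_toc(doc: &str, new_toc: &str) -> Option<String> {
    const BEGIN: &str = "<!-- Begin ToC -->"; // assumed marker name
    const END: &str = "<!-- End ToC -->";
    let start = doc.find(BEGIN)? + BEGIN.len();
    let end = start + doc[start..].find(END)?;
    // Keep everything before the old ToC, insert the fresh ToC, then resume
    // at the end marker exactly once.
    Some(format!("{}\n{}\n{}", &doc[..start], new_toc.trim(), &doc[end..]))
}

fn main() {
    let doc = "intro\n<!-- Begin ToC -->\nold entries\n<!-- End ToC -->\nbody";
    let updated = replace_toc(doc, "- [Body](#body)").unwrap();
    assert_eq!(updated.matches("<!-- End ToC -->").count(), 1);
    println!("{updated}");
}
```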

Co-authored-by: Eric Traut <etraut@openai.com>
2025-11-05 14:52:51 -08:00
Eric Traut
86c149ae8e Prevent dismissal of login menu in TUI (#6285)
We currently allow the user to dismiss the login menu via Ctrl+C. This
leaves them in a bad state where they're not auth'ed but have an input
prompt. In the extension, this isn't a problem because we don't allow
the user to dismiss the login screen.

Testing: I confirmed that Ctrl+C no longer dismisses the login menu.

This is an alternative (simpler) fix for a [community
PR](https://github.com/openai/codex/pull/3234).
2025-11-05 14:25:58 -08:00
Celia Chen
05f0b4f590 [App-server] Implement v2 for account/login/start and account/login/completed (#6183)
This PR implements `account/login/start` and `account/login/completed`.
Instead of having separate endpoints for login with chatgpt and api, we
have a single enum handling different login methods. For sync auth
methods like sign in with api key, we still send a `completed`
notification back to be compatible with the async login flow.
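A sketch of the single-enum shape this describes; the variant and field names are illustrative, not the actual protocol types:
```rust
// One login request type instead of separate chatgpt/api endpoints.
enum LoginMethod {
    // Async browser flow; completion arrives via a later notification.
    ChatGpt,
    // Sync, but still followed by a `completed` notification for
    // compatibility with the async flow.
    ApiKey { api_key: String },
}

fn main() {
    let method = LoginMethod::ApiKey { api_key: "sk-example".into() };
    match method {
        LoginMethod::ChatGpt => println!("starting browser login"),
        LoginMethod::ApiKey { .. } => println!("key stored; emitting `completed`"),
    }
}
```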
2025-11-05 13:52:50 -08:00
easong-openai
d4eda9d10b stop capturing r when environment selection modal is open (#6249)
This fixes an issue where you can't select environments with an `r` in them while the selection modal is open.
2025-11-05 13:23:46 -08:00
Eric Traut
d7953aed74 Fixes intermittent test failures in CI (#6282)
I'm seeing two tests fail intermittently in CI. This PR attempts to
address (or at least mitigate) the flakiness.

* summarize_context_three_requests_and_instructions - The test snapshots
server.received_requests() immediately after observing TaskComplete.
Because the OpenAI /v1/responses call is streamed, the HTTP request can
still be draining when that event fires, so wiremock occasionally
reports only two captured requests. Fix is to wait for async activity to
complete.
* archive_conversation_moves_rollout_into_archived_directory - times out
on a slow CI run. Mitigation is to increase timeout value from 10s to
20s.
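A sketch of the first fix's shape, assuming wiremock's `received_requests` (which returns `None` when request recording is disabled) and a hypothetical helper name:
```rust
use std::time::Duration;
use wiremock::MockServer;

// Poll until the streamed request has fully drained instead of snapshotting
// immediately after TaskComplete.
async fn wait_for_requests(server: &MockServer, expected: usize) -> bool {
    for _ in 0..50 {
        let count = server.received_requests().await.map_or(0, |reqs| reqs.len());
        if count >= expected {
            return true;
        }
        tokio::time::sleep(Duration::from_millis(100)).await;
    }
    false
}

#[tokio::main]
async fn main() {
    let server = MockServer::start().await;
    assert!(wait_for_requests(&server, 0).await);
}
```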
2025-11-05 13:12:25 -08:00
Owen Lin
2ab1650d4d [app-server] feat: v2 Thread APIs (#6214)
Implements:
```
thread/list
thread/start
thread/resume
thread/archive
```

along with their integration tests. These are relatively light wrappers
around the existing core logic, and changes to core logic are minimal.

However, one improvement was made for developer ergonomics:
- `thread/start` and `thread/resume` automatically attach a
conversation listener internally, so clients don't have to make a
separate `AddConversationListener` call like they do today.

For consistency, also updated `model/list` and `feedback/upload` (naming
conventions, list API params).
2025-11-05 20:28:43 +00:00
Gabriel Peal
79aa83ee39 Update rmcp to 0.8.5 (#6261)
Picks up https://github.com/modelcontextprotocol/rust-sdk/pull/511, which
should fix OAuth for Todoist and some other MCP servers, and may further
resolve issues in https://github.com/openai/codex/issues/5045
2025-11-05 14:20:30 -05:00
Eric Traut
c4ebe4b078 Improved token refresh handling to address "Re-connecting" behavior (#6231)
Currently, when the access token expires, we attempt to use the refresh
token to acquire a new access token. This works most of the time.
However, there are situations where the refresh token is expired,
exhausted (already used to perform a refresh), or revoked. In those
cases, the current logic treats the error as transient and attempts to
retry it repeatedly.

This PR changes the token refresh logic to differentiate between
permanent and transient errors. It also changes callers to treat the
permanent errors as fatal rather than retrying them. And it provides
better error messages to users so they understand how to address the
problem. These error messages should also help us further understand why
we're seeing examples of refresh token exhaustion.
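A minimal sketch of that split, with illustrative error and helper names:
```rust
use std::time::Duration;

enum RefreshError {
    // Refresh token expired, exhausted, or revoked: retrying cannot help.
    Permanent(String),
    // Network hiccups, 5xx responses, timeouts: worth retrying.
    Transient(String),
}

fn refresh_with_retry(
    mut attempt: impl FnMut() -> Result<String, RefreshError>,
) -> Result<String, String> {
    for backoff_ms in [250u64, 500, 1000] {
        match attempt() {
            Ok(token) => return Ok(token),
            // Fatal: surface a message telling the user to log in again.
            Err(RefreshError::Permanent(msg)) => {
                return Err(format!("Your session is no longer valid ({msg}). Please log in again."));
            }
            Err(RefreshError::Transient(_)) => std::thread::sleep(Duration::from_millis(backoff_ms)),
        }
    }
    Err("token refresh kept failing transiently".to_string())
}

fn main() {
    let mut calls = 0;
    let result = refresh_with_retry(|| {
        calls += 1;
        if calls < 2 { Err(RefreshError::Transient("timeout".into())) } else { Ok("token".into()) }
    });
    println!("{result:?}");
}
```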

Here is the error message in the CLI. The same text appears within the
extension.

<img width="863" height="38" alt="image"
src="https://github.com/user-attachments/assets/7ffc0d08-ebf0-4900-b9a9-265064202f4f"
/>

I also correct the spelling of "Re-connecting", which shouldn't have a
hyphen in it.

Testing: I manually tested these code paths by adding temporary code to
programmatically cause my refresh token to be exhausted (by calling the
token refresh endpoint in a tight loop more than 50 times). I then
simulated an access token expiration, which caused the token refresh
logic to be invoked. I confirmed that the updated logic properly handled
the error condition.

Note: We earlier discussed the idea of forcefully logging out the user
at the point where token refresh failed. I made several attempts to do
this, and all of them resulted in a bad UX. It's important to surface
this error to users in a way that explains the problem and tells them
that they need to log in again. We also previously discussed deleting
the auth.json file when this condition is detected. That also creates
problems because it effectively changes the auth status from logged in
to logged out, and this causes odd failures and inconsistent UX. I think
it's therefore better not to delete auth.json in this case. If the user
closes the CLI or VSCE and starts it again, we properly detect that the
access token is expired and the refresh token is "dead", and we force
the user to go through the login flow at that time.

This should address aspects of #6191, #5679, and #5505
2025-11-05 10:51:57 -08:00
Ahmed Ibrahim
1a89f70015 refactor Conversation history file into its own directory (#6229)
This is just a refactor of the `conversation_history` file, breaking it up
into multiple smaller ones with helpers. This refactor will help us move
more functionality related to context management here in a clean way.
2025-11-05 10:49:35 -08:00
Jeremy Rose
62474a30e8 tui: refactor ChatWidget and BottomPane to use Renderables (#5565)
- introduce RenderableItem to support both owned and borrowed children
in composite Renderables
- refactor some of our gnarlier manual layouts, BottomPane and
ChatWidget, to use ColumnRenderable
- Renderable and friends now handle cursor_pos()
2025-11-05 09:50:40 -08:00
Dan Hernandez
9a10e80ab7 Add modelReasoningEffort option to TypeScript SDK (#6237)
## Summary
- Adds `ModelReasoningEffort` type to TypeScript SDK with values:
`minimal`, `low`, `medium`, `high`
- Adds `modelReasoningEffort` option to `ThreadOptions`
- Forwards the option to the codex CLI via `--config
model_reasoning_effort="<value>"`
- Includes test coverage for the new option

## Changes
- `sdk/typescript/src/threadOptions.ts`: Define `ModelReasoningEffort`
type and add to `ThreadOptions`
- `sdk/typescript/src/index.ts`: Export `ModelReasoningEffort` type
- `sdk/typescript/src/exec.ts`: Forward `modelReasoningEffort` to CLI as
config flag
- `sdk/typescript/src/thread.ts`: Pass option through to exec (+ debug
logging)
- `sdk/typescript/tests/run.test.ts`: Add test for
`modelReasoningEffort` flag forwarding

---------

Co-authored-by: Eric Traut <etraut@openai.com>
2025-11-05 08:51:03 -08:00
Gabriel Peal
9b538a8672 Upgrade rmcp to 0.8.4 (#6234)
Picks up https://github.com/modelcontextprotocol/rust-sdk/pull/509 which
fixes https://github.com/openai/codex/issues/6164
2025-11-05 00:23:24 -05:00
Andrew Dirksen
95af417923 allow codex to be run from pid 1 (#4200)
Previously it was not possible for Codex to run commands as the init
process (pid 1) on Linux. Commands run in containers tend to see their
own pid as 1. See https://github.com/openai/codex/issues/4198

This PR implements the solution mentioned in that issue.

Co-authored-by: Eric Traut <etraut@openai.com>
2025-11-04 17:54:46 -08:00
Soroush Yousefpour
fff576cf98 fix(core): load custom prompts from symlinked Markdown files (#3643)
- Discover prompts via fs::metadata to follow symlinks (see the sketch below)

- Add Unix-only symlink test in custom_prompts.rs

- Update docs/prompts.md to mention symlinks

Fixes #3637
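The distinction the first bullet relies on, as a minimal illustration (not the project's actual discovery code):
```rust
use std::fs;
use std::path::Path;

// fs::metadata follows symlinks, while fs::symlink_metadata reports on the
// link itself, so a prompt reached through a symlink only registers as a
// regular Markdown file via the former.
fn is_markdown_file(path: &Path) -> bool {
    let is_file = fs::metadata(path).map(|m| m.is_file()).unwrap_or(false);
    is_file && path.extension().is_some_and(|ext| ext == "md")
}

fn main() {
    // `prompts/review.md` is a hypothetical path; point this at a symlinked
    // prompt to see the difference.
    println!("{}", is_markdown_file(Path::new("prompts/review.md")));
}
```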

---------

Signed-off-by: Soroush Yousefpour <h.yusefpour@gmail.com>
Co-authored-by: dedrisian-oai <dedrisian@openai.com>
Co-authored-by: Eric Traut <etraut@openai.com>
2025-11-04 17:44:02 -08:00
Lukas
1575f0504c Fix nix build (#6230)
Previously, the `nix build .#default` command failed due to a missing
output hash in `./codex-rs/default.nix` for `crossterm-0.28.1`:

```
error: No hash was found while vendoring the git dependency crossterm-0.28.1. You can add
a hash through the `outputHashes` argument of `importCargoLock`:

outputHashes = {
 "crossterm-0.28.1" = "<hash>";
};

If you use `buildRustPackage`, you can add this attribute to the `cargoLock`
attribute set.
```

This PR adds the missing hash:

```diff
cargoLock.outputHashes = {
  "ratatui-0.29.0" = "sha256-HBvT5c8GsiCxMffNjJGLmHnvG77A6cqEL+1ARurBXho=";
+ "crossterm-0.28.1" = "sha256-6qCtfSMuXACKFb9ATID39XyFDIEMFDmbx6SSmNe+728=";
};
```

With this change, `nix build .#default` succeeds:

```
> nix build .#default --max-jobs 1 --cores 2

warning: Git tree '/home/lukas/r/github.com/lukasl-dev/codex' is dirty
[1/0/1 built] building codex-rs-0.1.0 (buildPhase)

> ./result/bin/codex
  You are running Codex in /home/lukas/r/github.com/lukasl-dev/codex

  Since this folder is version controlled, you may wish to allow Codex to work in this folder without asking for approval.
  ...
```
2025-11-04 17:07:37 -08:00
Owen Lin
edf4c3f627 [app-server] feat: export.rs supports a v2 namespace, initial v2 notifications (#6212)
**Typescript and JSON schema exports**
While working on Thread/Turn/Items type definitions, I realized we will
run into name conflicts between v1 and v2 APIs (e.g. `RateLimitWindow`,
which won't be reusable since v1 uses `RateLimitWindow` from `protocol/`,
which uses snake_case, but we want to expose camelCase everywhere, so
we'll define a v2 version of that struct that serializes as camelCase).

To set us up for a clean and isolated v2 API, generate types into a
`v2/` namespace for both typescript and JSON schema.
- TypeScript: v2 types emit under `out_dir/v2/*.ts`, and the root index.ts
now re-exports them via `export * as v2 from "./v2";`.
- JSON Schemas: v2 definitions bundle under `#/definitions/v2/*` rather
than the root.

The location of the original types (v1 and types pulled from
`protocol/` and other core crates) hasn't changed; they are still at the
root. This is for backwards compatibility: no breaking changes to
existing usages of v1 APIs and types.

**Notifications**
While working on export.rs, I:
- refactored server/client notifications with macros (like we already do
for methods) so they also get exported (I noticed they weren't being
exported at all).
- removed the hardcoded list of types to export as JSON schema by
leveraging the existing macros instead
- and took a stab at API V2 notifications. These aren't wired up yet,
and I expect to iterate on these this week.
2025-11-05 01:02:39 +00:00
Ahmed Ibrahim
d40a6b7f73 fix: Update the deprecation message to link to the docs (#6211)
The deprecation message is currently a bit confusing. Users may not
understand what `[features].x` is. I updated the docs and the
deprecation message to provide more guidance.

---------

Co-authored-by: Gabriel Peal <gpeal@users.noreply.github.com>
2025-11-04 21:02:27 +00:00
Dylan Hurd
3a22018edd Revert "fix: pin musl 1.2.5 for DNS fixes" (#6222)
Reverts openai/codex#6189
2025-11-04 11:56:40 -08:00
Ahmed Ibrahim
fe54c216a3 ignore deltas in codex_delegate (#6208)
Ignore legacy deltas in codex-delegate to avoid this
[issue](https://github.com/openai/codex/pull/6202).
2025-11-04 19:21:35 +00:00
Dylan Hurd
cb6584de46 fix: pin musl 1.2.5 for DNS fixes (#6189)
## Summary
musl 1.2.5 includes [several fixes to DNS over
TCP](https://www.openwall.com/lists/musl/2024/03/01/2), which appears to
be the root cause of #6116.

This approach is a bit janky, but according to codex:
> On the Ubuntu 24.04 runners we use, apt-cache policy musl-tools shows
only the distro build (1.2.4-2ubuntu2)

We should build with this version and confirm.

## Testing
- [ ] TODO: test and see if this fixes Azure issues
2025-11-04 09:17:16 -08:00
Ahmed Ibrahim
7e068e1094 fix: ignore reasoning deltas because we send it with turn item (#6202)
should fix this:

<img width="2418" height="242" alt="image"
src="https://github.com/user-attachments/assets/f818d00b-ed3a-479b-94a7-e4bc5db6326e"
/>
2025-11-04 08:27:16 -08:00
Celia Chen
d3187dbc17 [App-server] v2 for account/updated and account/logout (#6175)
V2 for `account/updated` and `account/logout` for the app server. These
correspond to the old `authStatusChange` and `LogoutChatGpt`
respectively. Follow-up PRs will make other v2 endpoints call
`account/updated` instead of `authStatusChange` too.
2025-11-03 22:01:33 -08:00
Robby He
dc2f26f7b5 Fix is_api_message to correctly exclude reasoning messages (#6156)
## Problem

The `is_api_message` function in `conversation_history.rs` had a
misalignment between its documentation and implementation:

- **Comment stated**: "Anything that is not a system message or
'reasoning' message is considered an API message"
- **Code behavior**: Was returning `true` for `ResponseItem::Reasoning`,
meaning reasoning messages were incorrectly treated as API messages

This inconsistency could lead to reasoning messages being persisted in
conversation history when they should be filtered out.

## Root Cause

Investigation revealed that reasoning messages are explicitly excluded
throughout the codebase:

1. **Chat completions API** (lines 267-272 in `chat_completions.rs`)
omits reasoning from conversation history:
   ```rust
   ResponseItem::Reasoning { .. } | ResponseItem::Other => {
       // Omit these items from the conversation history.
       continue;
   }
   ```

2. **Existing tests** like `drops_reasoning_when_last_role_is_user` and
`ignores_reasoning_before_last_user` validate that reasoning should be
excluded from API payloads

## Solution

Fixed the `is_api_message` function to align with its documentation and
the rest of the codebase:

```rust
// Before: Reasoning was incorrectly returning true
ResponseItem::Reasoning { .. } | ResponseItem::WebSearchCall { .. } => true,

// After: Reasoning correctly returns false  
ResponseItem::WebSearchCall { .. } => true,
ResponseItem::Reasoning { .. } | ResponseItem::Other => false,
```

## Testing

- Enhanced existing test to verify reasoning messages are properly
filtered out
- All 264 core tests pass, including 8 chat completions tests that
validate reasoning behavior
- No regressions introduced

This ensures reasoning messages are consistently excluded from API
message processing across the entire codebase.
2025-11-03 20:55:41 -08:00
Ricardo Ander-Egg
553db8def1 Follow symlinks during file search (#4453)
I have read the CLA Document and I hereby sign the CLA

Closes #4452

This fixes a usability issue where users with symlinked folders in their
working directory couldn't search those files using the `@` file search
feature.

## Rationale

The "bug" was in the file search implementation in
`codex-rs/file-search/src/lib.rs`. The `WalkBuilder` was using default
settings which don't follow symlinks, causing two related issues:

1. Partial search results: The `@` search would find symlinked
directories but couldn't find files inside them
2. Inconsistent behavior: Users expect symlinked folders to behave like
regular folders in search results.

## Root cause

The `ignore` crate's `WalkBuilder` defaults to `.follow_links(false)`
[[source](9802945e63/crates/ignore/src/walk.rs (L532))],
so when traversing the file system, it would:

- Detect symlinked directories as directory entries
- But not traverse into them to index their contents
- The `get_file_path` function would then filter out actual directories,
leaving only the symlinked folder itself as a result

Fix: Added `.follow_links(true)` to the `WalkBuilder` configuration,
making the file search follow symlinks and index their contents just
like regular directories.
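A minimal sketch of the fix, using the `ignore` crate:
```rust
use ignore::WalkBuilder;

fn main() {
    // Opt in to traversing symlinked directories so their contents get
    // indexed like regular ones; the crate's default is follow_links(false).
    let walker = WalkBuilder::new(".").follow_links(true).build();
    for entry in walker.flatten() {
        if entry.file_type().is_some_and(|t| t.is_file()) {
            println!("{}", entry.path().display());
        }
    }
}
```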

This change maintains backward compatibility since symlink following is
generally expected behavior for file search tools, and it aligns with
how users expect the `@` search feature to work.

Co-authored-by: Eric Traut <etraut@openai.com>
2025-11-03 20:28:33 -08:00
Lucas Freire Sangoi
ab63a47173 docs: add example config.toml (#5175)
I was missing an example config.toml, and following docs/config.md alone
was slower. I had GPT-5 scan the codebase for every accepted config key,
check the defaults, and generate a single example config.toml with
annotations. It lists all keys Codex reads from TOML, sets each to its
effective default where it exists, leaves optional ones commented, and
adds short comments on purpose and valid values. This should make
onboarding faster and reduce configuration errors. I can rename it to
config.example.toml or move it under docs/ if you prefer.
2025-11-03 18:19:26 -08:00
Ahmed Ibrahim
e658c6c73b fix: --search shouldn't show deprecation message (#6180)
Use the new feature flags instead of the old config.
2025-11-04 00:11:50 +00:00
Eric Traut
1e0e553304 Fixed notify handler so it passes correct input_messages details (#6143)
This fixes bug #6121. 

The `input_messages` field passed to the notify handler is currently
empty because the logic is incorrectly including the OutputText rather
than InputText. I've fixed that and added proper filtering to remove
messages associated with AGENTS.md and other context injected by the
harness.

Testing: I wrote a notify handler and verified that the user prompt is
correctly passed through to the handler.
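A sketch of a handler on the receiving end, assuming the notification JSON arrives as the final command-line argument (the payload shape beyond `input_messages` is an assumption):
```rust
use serde_json::Value;

fn main() {
    // Hypothetical notify handler: read the JSON payload and print the user
    // prompts that this PR fixes `input_messages` to contain.
    let payload = std::env::args().last().unwrap_or_default();
    let event: Value = serde_json::from_str(&payload).unwrap_or(Value::Null);
    if let Some(messages) = event.get("input_messages").and_then(Value::as_array) {
        for message in messages {
            println!("user prompt: {message}");
        }
    }
}
```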
2025-11-03 14:23:04 -08:00
iceweasel-oai
07b7d28937 log sandbox commands to $CODEX_HOME instead of cwd (#6171)
Logging commands in the Windows Sandbox is temporary, but while we are
doing it, let's always write to CODEX_HOME instead of dirtying the cwd.
2025-11-03 13:12:33 -08:00
Ahmed Ibrahim
6ee7fbcfff feat: add the time after aborting (#5996)
Tell the model how much time passed after the user aborted the call.
2025-11-03 11:44:06 -08:00
Jeremy Rose
5f3a0473f1 tui: refine text area word separator handling (#5541)
## Summary
- replace the word part enum with a simple `is_word_separator` helper (see the sketch after this list)
- keep word-boundary logic aligned with the helper and punctuation-aware
behavior
- extend forward/backward deletion tests to cover whitespace around
separators
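A sketch of what such a helper could look like; the exact character set is an assumption based on the punctuation-aware behavior described above:
```rust
// Treat whitespace and ASCII punctuation (except '_') as word separators.
fn is_word_separator(c: char) -> bool {
    c.is_whitespace() || (c.is_ascii_punctuation() && c != '_')
}

fn main() {
    // Word-wise motions and deletions would stop at these boundaries.
    let line = "cargo test -p codex-tui";
    let words: Vec<&str> = line.split(is_word_separator).filter(|w| !w.is_empty()).collect();
    println!("{words:?}"); // ["cargo", "test", "p", "codex", "tui"]
}
```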

## Testing
- just fix -p codex-tui
- cargo test -p codex-tui


------
https://chatgpt.com/codex/tasks/task_i_68f91c71d838832ca2a3c4f0ec1b55d4
2025-11-03 11:33:34 -08:00
iceweasel-oai
2eda75a8ee Do not skip trust prompt on Windows if sandbox is enabled. (#6167)
If the experimental windows sandbox is enabled, the trust prompt should
show on Windows.
2025-11-03 11:27:45 -08:00
Michael Bolin
e1f098b9b7 feat: add options to responses-api-proxy to support Azure (#6129)
This PR introduces an `--upstream-url` option to the proxy CLI that
determines the URL that Responses API requests should be forwarded to.
To preserve existing behavior, the default value is
`"https://api.openai.com/v1/responses"`.

The motivation for this change is that the [Codex GitHub
Action](https://github.com/openai/codex-action) should support those who
use the OpenAI Responses API via Azure. Relevant issues:

- https://github.com/openai/codex-action/issues/28
- https://github.com/openai/codex-action/issues/38
- https://github.com/openai/codex-action/pull/44

Though rather than introduce a bunch of new Azure-specific logic in the
action as https://github.com/openai/codex-action/pull/44 proposes, we
should leverage our Responses API proxy to get the _hardening_ benefits
it provides:


d5853d9c47/codex-rs/responses-api-proxy/README.md (hardening-details)

This PR should make this straightforward to incorporate in the action.
To see how the updated version of the action would consume these new
options, see https://github.com/openai/codex-action/pull/47.
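A sketch of the option as a clap argument; the struct is illustrative, but the flag name and default come from the description above:
```rust
use clap::Parser;

#[derive(Parser)]
struct ProxyArgs {
    /// Where Responses API requests are forwarded. The OpenAI default
    /// preserves existing behavior; point it at an Azure deployment to get
    /// the proxy's hardening there too.
    #[arg(long, default_value = "https://api.openai.com/v1/responses")]
    upstream_url: String,
}

fn main() {
    let args = ProxyArgs::parse();
    println!("forwarding Responses API requests to {}", args.upstream_url);
}
```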
2025-11-03 10:06:00 -08:00
pakrym-oai
e5e13479d0 Include reasoning tokens in the context window calculation (#6161)
This value is used to determine whether mid-turn compaction is required.
Reasoning items are only excluded between turns (and soon will start to
be preserved even across turns), so it's incorrect to subtract
reasoning_output_tokens mid-turn.

This will result in higher values reported between turns but we are also
looking into preserving reasoning items for the entire conversation to
improve performance and caching.
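The accounting change, as a small sketch with illustrative field names:
```rust
struct TokenUsage {
    total_tokens: u64,
    reasoning_output_tokens: u64,
}

// Before: total_tokens - reasoning_output_tokens. Reasoning items are only
// dropped between turns, so mid-turn they still occupy the context window
// and must be counted when deciding whether to compact.
fn tokens_in_context_mid_turn(usage: &TokenUsage) -> u64 {
    usage.total_tokens
}

fn needs_compaction(usage: &TokenUsage, context_window: u64, headroom: u64) -> bool {
    tokens_in_context_mid_turn(usage) + headroom > context_window
}

fn main() {
    let usage = TokenUsage { total_tokens: 190_000, reasoning_output_tokens: 40_000 };
    // Old accounting (150k in context) says no; new accounting says compact.
    println!("{}", needs_compaction(&usage, 200_000, 16_000)); // true
    println!("reasoning tokens still in context: {}", usage.reasoning_output_tokens);
}
```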
2025-11-03 10:02:23 -08:00
Jeremy Rose
7bc3ca9e40 Fix rmcp client feature flag reference (#6051)
## Summary
- update the OAuth login error message to reference
`[features].rmcp_client` in config

## Testing
- cargo test -p codex-cli

------
https://chatgpt.com/codex/tasks/task_i_69050365dc84832ca298f863c879a59a
2025-11-03 09:59:19 -08:00
Mark Hemmings
4d8b71d412 Fix typo in error message for OAuth login (#6159)
The error message for attempting OAuth with a remote MCP server is
incorrect and misleading. The correct config is:

```
[features]
rmcp_client = true
```

Co-authored-by: Eric Traut <etraut@openai.com>
2025-11-03 08:59:00 -08:00
pygarap
b484672961 Add documentation for slash commands in docs/slash_commands.md. (#5685)
This pull request adds a new documentation section to explain the
available slash commands in Codex. The update introduces a clear
overview and a reference table for built-in commands, making it easier
for users to understand and utilize these features.

Documentation updates:

* Added a new section to `docs/slash_commands.md` describing what slash
commands are and listing all built-in commands with their purposes in a
formatted table.
2025-11-03 08:27:13 -08:00
Vinh Nguyen
a1ee10b438 fix: improve usage URLs in status card and snapshots (#6111)
Hi OpenAI Codex team. Currently the "Visit chatgpt.com/codex/settings/usage
for up-to-date information on rate limits and credits" message appears in
the status card and in error messages. Without the "https://" prefix, the
link cannot be clicked directly from most terminals or chat interfaces.

<img width="636" height="127" alt="Screenshot 2025-11-02 at 22 47 06"
src="https://github.com/user-attachments/assets/5ea11e8b-fb74-451c-85dc-f4d492b2678b"
/>

---

This fix is intended to improve the issue:

- It makes the link clickable in terminals that support it, hence better
accessibility
- It follows standard URL formatting practices
- It maintains consistency with other links in the application (like the
existing "https://openai.com/chatgpt/pricing" links)

Thank you!
2025-11-02 21:44:59 -08:00
Eric Traut
dccce34d84 Fix "archive conversation" on Windows (#6124)
Addresses issue https://github.com/openai/codex/issues/3582 where an
"archive conversation" command in the extension fails on Windows.

The problem is that the `archive_conversation` api server call is not
canonicalizing the path to the rollout path when performing its check to
verify that the rollout path is in the sessions directory. This causes
it to fail 100% of the time on Windows.

Testing: I was able to repro the error on Windows 100% prior to this
change. After the change, I'm no longer able to repro.
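A minimal sketch of a canonicalized containment check (the exact helper in the PR may differ):
```rust
use std::fs;
use std::io;
use std::path::Path;

// Canonicalize both sides before asking whether the rollout lives under the
// sessions directory; on Windows this normalizes things like `\\?\` prefixes
// and drive-letter casing that make raw prefix checks fail.
fn is_in_sessions_dir(rollout_path: &Path, sessions_dir: &Path) -> io::Result<bool> {
    let rollout = fs::canonicalize(rollout_path)?;
    let sessions = fs::canonicalize(sessions_dir)?;
    Ok(rollout.starts_with(&sessions))
}

fn main() -> io::Result<()> {
    // Substitute a real rollout file and sessions directory.
    println!("{}", is_in_sessions_dir(Path::new("."), Path::new("."))?);
    Ok(())
}
```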
2025-11-02 21:41:05 -08:00
dependabot[bot]
f5945d7c03 chore(deps): bump actions/upload-artifact from 4 to 5 (#6137)
Bumps
[actions/upload-artifact](https://github.com/actions/upload-artifact)
from 4 to 5.
Release notes (condensed from actions/upload-artifact): v5.0.0 supports
Node v24.x (not a breaking change per se, but treated as such) and bumps
`@actions/artifact` to v4.0.0. Full changelog:
https://github.com/actions/upload-artifact/compare/v4...v5.0.0

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-02 20:38:45 -08:00
Rohan Godha
5fcf923c19 fix: pasting api key stray character (#4903)
When signing in with an API key, pasting (with Command+V on Mac) adds a
stray `v` character to the end of the API key.



demo video (where I'm pasting in `sk-something-super-secret`)


https://github.com/user-attachments/assets/b2b34b5f-c7e4-4760-9657-c35686dd8bb8
2025-11-03 04:19:08 +00:00
Eric Traut
0c7efa0cfd Fix incorrect "deprecated" message about experimental config key (#6131)
When I enable `experimental_sandbox_command_assessment`, I get an
incorrect deprecation warning: "experimental_sandbox_command_assessment
is deprecated. Use experimental_sandbox_command_assessment instead."

This PR fixes this error.
2025-11-02 16:33:09 -08:00
Eric Traut
d5853d9c47 Changes to sandbox command assessment feature based on initial experiment feedback (#6091)
* Removed sandbox risk categories; feedback indicates that these are not
that useful and "less is more"
* Tweaked the assessment prompt to generate terser answers
* Fixed bug in orchestrator that prevents this feature from being
exposed in the extension
2025-11-01 14:52:23 -07:00
Thomas Stokes
d9118c04bf Parse the Azure OpenAI rate limit message (#5956)
Fixes #4161

Currently Codex uses a regex to parse the "Please try again in 1.898s"
OpenAI-style rate limit message, so that it can wait the correct
duration before retrying. Azure OpenAI returns a different error that
looks like "Rate limit exceeded. Try again in 35 seconds."

This PR extends the regex and parsing code to match in a more fuzzy
manner, handling anything matching the pattern "try again in
\<duration>\<unit>".
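A sketch of that fuzzier matching with `regex-lite` (already a workspace dependency); the exact pattern in the PR may differ:
```rust
use regex_lite::Regex;
use std::time::Duration;

// Accept both "try again in 1.898s" and "Try again in 35 seconds."
fn parse_retry_after(message: &str) -> Option<Duration> {
    let re = Regex::new(
        r"(?i)try again in\s+(\d+(?:\.\d+)?)\s*(s|secs?|seconds?|m|mins?|minutes?)\b",
    )
    .ok()?;
    let caps = re.captures(message)?;
    let value: f64 = caps[1].parse().ok()?;
    let seconds = match caps[2].to_ascii_lowercase().as_str() {
        "m" | "min" | "mins" | "minute" | "minutes" => value * 60.0,
        _ => value,
    };
    Some(Duration::from_secs_f64(seconds))
}

fn main() {
    assert_eq!(
        parse_retry_after("Please try again in 1.898s"),
        Some(Duration::from_secs_f64(1.898))
    );
    assert_eq!(
        parse_retry_after("Rate limit exceeded. Try again in 35 seconds."),
        Some(Duration::from_secs(35))
    );
    println!("ok");
}
```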
2025-11-01 09:33:13 -07:00
Vinh Nguyen
91e65ac0ce docs: Fix link anchor and markdown format in advanced guide (#5649)
Hi OpenAI Codex team, this PR fixes a rendering issue in the Markdown
table and an anchor link in
[docs/advanced.md](https://github.com/openai/codex/blob/main/docs/advanced.md).

Thank you!

Co-authored-by: Eric Traut <etraut@openai.com>
2025-10-31 16:53:09 -07:00
Tony Dong
1ac4fb45d2 Fixing small typo in docs (#5659)
Fixing a typo in the docs

Co-authored-by: Eric Traut <etraut@openai.com>
2025-10-31 16:41:05 -07:00
Jeremy Rose
07b8bdfbf1 tui: patch crossterm for better color queries (#5935)
See
https://github.com/crossterm-rs/crossterm/compare/master...nornagon:crossterm:nornagon/color-query

This patches crossterm to add support for querying fg/bg color as part
of the crossterm event loop, which fixes some issues where this query
would fight with other input.

- dragging screenshots into the cli would sometimes paste half of the
pathname instead of being recognized as an image
(https://github.com/openai/codex/issues/5603)
- Fixes https://github.com/openai/codex/issues/4945
2025-10-31 16:36:41 -07:00
Anton Panasenko
0f22067242 [codex][app-server] improve error response for client requests (#6050) 2025-10-31 15:28:04 -07:00
Ritesh Chauhan
d7f8b97541 docs: fix broken link in contributing guide (#4973)
## Summary

This PR fixes a broken self-referencing link in the contributing
documentation.

## Changes

- Removed the phrase 'Following the [development
setup](#development-workflow) instructions above' from the Development
workflow section
- The link referenced a non-existent section and the phrase didn't make
logical sense in context

## Before

The text referenced 'development setup instructions above' but:
1. No section called 'development setup' exists
2. There were no instructions 'above' that point
3. The link pointed to the same section it was in

## After

Simplified to: 'Ensure your change is free of lint warnings and test
failures.'

## Type

Documentation fix


I have read the CLA Document and I hereby sign the CLA

Co-authored-by: Ritesh Chauhan <sagar.chauhn11@gmail.com>
2025-10-31 15:09:35 -07:00
jif-oai
611e00c862 feat: compactor 2 (#6027)
Co-authored-by: pakrym-oai <pakrym@openai.com>
2025-10-31 14:27:08 -07:00
147 changed files with 8338 additions and 3166 deletions

View File

@@ -46,7 +46,7 @@ jobs:
echo "pack_output=$PACK_OUTPUT" >> "$GITHUB_OUTPUT"
- name: Upload staged npm package artifact
-  uses: actions/upload-artifact@v4
+  uses: actions/upload-artifact@v5
with:
name: codex-npm-staging
path: ${{ steps.stage_npm_package.outputs.pack_output }}

View File

@@ -350,7 +350,7 @@ jobs:
fi
fi
-      - uses: actions/upload-artifact@v4
+      - uses: actions/upload-artifact@v5
with:
name: ${{ matrix.target }}
# Upload the per-binary .zst files as well as the new .tar.gz

View File

@@ -75,11 +75,13 @@ Codex CLI supports a rich set of configuration options, with preferences stored
- [**Getting started**](./docs/getting-started.md)
- [CLI usage](./docs/getting-started.md#cli-usage)
+- [Slash Commands](./docs/slash_commands.md)
- [Running with a prompt as input](./docs/getting-started.md#running-with-a-prompt-as-input)
- [Example prompts](./docs/getting-started.md#example-prompts)
- [Custom prompts](./docs/prompts.md)
- [Memory with AGENTS.md](./docs/getting-started.md#memory-with-agentsmd)
- [**Configuration**](./docs/config.md)
+- [Example config](./docs/example-config.md)
- [**Sandbox & approvals**](./docs/sandbox.md)
- [**Authentication**](./docs/authentication.md)
- [Auth methods](./docs/authentication.md#forcing-a-specific-auth-method-advanced)

codex-rs/Cargo.lock (generated, 23 changed lines)
View File

@@ -186,9 +186,11 @@ dependencies = [
"chrono",
"codex-app-server-protocol",
"codex-core",
"codex-protocol",
"serde",
"serde_json",
"tokio",
"uuid",
"wiremock",
]
@@ -871,6 +873,7 @@ dependencies = [
"anyhow",
"clap",
"codex-protocol",
"mcp-types",
"paste",
"pretty_assertions",
"schemars 0.8.22",
@@ -992,6 +995,7 @@ dependencies = [
"supports-color",
"tempfile",
"tokio",
"toml",
]
[[package]]
@@ -1469,6 +1473,7 @@ dependencies = [
"regex-lite",
"serde",
"serde_json",
"serial_test",
"shlex",
"strum 0.27.2",
"strum_macros 0.27.2",
@@ -1750,8 +1755,7 @@ checksum = "d0a5c400df2834b80a4c3327b3aad3a4c4cd4de0629063962b03235697506a28"
[[package]]
name = "crossterm"
version = "0.28.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "829d955a0bb380ef178a640b91779e3987da38c9aea133b20614cfed8cdea9c6"
source = "git+https://github.com/nornagon/crossterm?branch=nornagon%2Fcolor-query#87db8bfa6dc99427fd3b071681b07fc31c6ce995"
dependencies = [
"bitflags 2.10.0",
"crossterm_winapi",
@@ -5008,9 +5012,9 @@ dependencies = [
[[package]]
name = "rmcp"
version = "0.8.3"
version = "0.8.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1fdad1258f7259fdc0f2dfc266939c82c3b5d1fd72bcde274d600cdc27e60243"
checksum = "e5947688160b56fb6c827e3c20a72c90392a1d7e9dec74749197aa1780ac42ca"
dependencies = [
"base64",
"bytes",
@@ -5042,9 +5046,9 @@ dependencies = [
[[package]]
name = "rmcp-macros"
version = "0.8.3"
version = "0.8.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ede0589a208cc7ce81d1be68aa7e74b917fcd03c81528408bab0457e187dcd9b"
checksum = "01263441d3f8635c628e33856c468b96ebbce1af2d3699ea712ca71432d4ee7a"
dependencies = [
"darling 0.21.3",
"proc-macro2",
@@ -5685,6 +5689,12 @@ dependencies = [
"digest",
]
[[package]]
name = "sha1_smol"
version = "1.0.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "bbfa15b3dddfee50a0fff136974b3e1bde555604ba463834a7eb7deb6417705d"
[[package]]
name = "sha2"
version = "0.10.9"
@@ -6872,6 +6882,7 @@ dependencies = [
"getrandom 0.3.3",
"js-sys",
"serde",
"sha1_smol",
"wasm-bindgen",
]

View File

@@ -162,7 +162,7 @@ ratatui = "0.29.0"
ratatui-macros = "0.6.0"
regex-lite = "0.1.7"
reqwest = "0.12"
-rmcp = { version = "0.8.3", default-features = false }
+rmcp = { version = "0.8.5", default-features = false }
schemars = "0.8.22"
seccompiler = "0.5.0"
sentry = "0.34.0"
@@ -256,7 +256,12 @@ unwrap_used = "deny"
# cargo-shear cannot see the platform-specific openssl-sys usage, so we
# silence the false positive here instead of deleting a real dependency.
[workspace.metadata.cargo-shear]
ignored = ["icu_provider", "openssl-sys", "codex-utils-readiness", "codex-utils-tokenizer"]
ignored = [
"icu_provider",
"openssl-sys",
"codex-utils-readiness",
"codex-utils-tokenizer",
]
[profile.release]
lto = "fat"
@@ -276,6 +281,7 @@ opt-level = 0
# Uncomment to debug local changes.
# ratatui = { path = "../../ratatui" }
ratatui = { git = "https://github.com/nornagon/ratatui", branch = "nornagon-v0.29.0-patch" }
crossterm = { git = "https://github.com/nornagon/crossterm", branch = "nornagon/color-query" }
# Uncomment to debug local changes.
# rmcp = { path = "../../rust-sdk/crates/rmcp" }

View File

@@ -14,6 +14,7 @@ workspace = true
anyhow = { workspace = true }
clap = { workspace = true, features = ["derive"] }
codex-protocol = { workspace = true }
mcp-types = { workspace = true }
paste = { workspace = true }
schemars = { workspace = true }
serde = { workspace = true, features = ["derive"] }

View File

@@ -2,20 +2,27 @@ use crate::ClientNotification;
use crate::ClientRequest;
use crate::ServerNotification;
use crate::ServerRequest;
use crate::export_client_notification_schemas;
use crate::export_client_param_schemas;
use crate::export_client_response_schemas;
use crate::export_client_responses;
use crate::export_server_notification_schemas;
use crate::export_server_param_schemas;
use crate::export_server_response_schemas;
use crate::export_server_responses;
use anyhow::Context;
use anyhow::Result;
use anyhow::anyhow;
use codex_protocol::parse_command::ParsedCommand;
use codex_protocol::protocol::EventMsg;
use codex_protocol::protocol::FileChange;
use codex_protocol::protocol::SandboxPolicy;
use schemars::JsonSchema;
use schemars::schema::RootSchema;
use schemars::schema_for;
use serde::Serialize;
use serde_json::Map;
use serde_json::Value;
use std::collections::BTreeMap;
use std::collections::HashMap;
use std::collections::HashSet;
use std::ffi::OsStr;
use std::fs;
@@ -28,83 +35,29 @@ use ts_rs::TS;
const HEADER: &str = "// GENERATED CODE! DO NOT MODIFY BY HAND!\n\n";
macro_rules! for_each_schema_type {
($macro:ident) => {
$macro!(crate::RequestId);
$macro!(crate::JSONRPCMessage);
$macro!(crate::JSONRPCRequest);
$macro!(crate::JSONRPCNotification);
$macro!(crate::JSONRPCResponse);
$macro!(crate::JSONRPCError);
$macro!(crate::JSONRPCErrorError);
$macro!(crate::AddConversationListenerParams);
$macro!(crate::AddConversationSubscriptionResponse);
$macro!(crate::ApplyPatchApprovalParams);
$macro!(crate::ApplyPatchApprovalResponse);
$macro!(crate::ArchiveConversationParams);
$macro!(crate::ArchiveConversationResponse);
$macro!(crate::AuthMode);
$macro!(crate::AuthStatusChangeNotification);
$macro!(crate::CancelLoginChatGptParams);
$macro!(crate::CancelLoginChatGptResponse);
$macro!(crate::ClientInfo);
$macro!(crate::ClientNotification);
$macro!(crate::ClientRequest);
$macro!(crate::ConversationSummary);
$macro!(crate::ExecCommandApprovalParams);
$macro!(crate::ExecCommandApprovalResponse);
$macro!(crate::ExecOneOffCommandParams);
$macro!(crate::ExecOneOffCommandResponse);
$macro!(crate::FuzzyFileSearchParams);
$macro!(crate::FuzzyFileSearchResponse);
$macro!(crate::FuzzyFileSearchResult);
$macro!(crate::GetAuthStatusParams);
$macro!(crate::GetAuthStatusResponse);
$macro!(crate::GetUserAgentResponse);
$macro!(crate::GetUserSavedConfigResponse);
$macro!(crate::GitDiffToRemoteParams);
$macro!(crate::GitDiffToRemoteResponse);
$macro!(crate::GitSha);
$macro!(crate::InitializeParams);
$macro!(crate::InitializeResponse);
$macro!(crate::InputItem);
$macro!(crate::InterruptConversationParams);
$macro!(crate::InterruptConversationResponse);
$macro!(crate::ListConversationsParams);
$macro!(crate::ListConversationsResponse);
$macro!(crate::LoginApiKeyParams);
$macro!(crate::LoginApiKeyResponse);
$macro!(crate::LoginChatGptCompleteNotification);
$macro!(crate::LoginChatGptResponse);
$macro!(crate::LogoutChatGptParams);
$macro!(crate::LogoutChatGptResponse);
$macro!(crate::NewConversationParams);
$macro!(crate::NewConversationResponse);
$macro!(crate::Profile);
$macro!(crate::RemoveConversationListenerParams);
$macro!(crate::RemoveConversationSubscriptionResponse);
$macro!(crate::ResumeConversationParams);
$macro!(crate::ResumeConversationResponse);
$macro!(crate::SandboxSettings);
$macro!(crate::SendUserMessageParams);
$macro!(crate::SendUserMessageResponse);
$macro!(crate::SendUserTurnParams);
$macro!(crate::SendUserTurnResponse);
$macro!(crate::ServerNotification);
$macro!(crate::ServerRequest);
$macro!(crate::SessionConfiguredNotification);
$macro!(crate::SetDefaultModelParams);
$macro!(crate::SetDefaultModelResponse);
$macro!(crate::Tools);
$macro!(crate::UserInfoResponse);
$macro!(crate::UserSavedConfig);
$macro!(codex_protocol::protocol::EventMsg);
$macro!(codex_protocol::protocol::FileChange);
$macro!(codex_protocol::parse_command::ParsedCommand);
$macro!(codex_protocol::protocol::SandboxPolicy);
};
#[derive(Clone)]
pub struct GeneratedSchema {
namespace: Option<String>,
logical_name: String,
value: Value,
in_v1_dir: bool,
}
impl GeneratedSchema {
fn namespace(&self) -> Option<&str> {
self.namespace.as_deref()
}
fn logical_name(&self) -> &str {
&self.logical_name
}
fn value(&self) -> &Value {
&self.value
}
}
type JsonSchemaEmitter = fn(&Path) -> Result<GeneratedSchema>;
pub fn generate_types(out_dir: &Path, prettier: Option<&Path>) -> Result<()> {
generate_ts(out_dir, prettier)?;
generate_json(out_dir)?;
@@ -112,7 +65,9 @@ pub fn generate_types(out_dir: &Path, prettier: Option<&Path>) -> Result<()> {
}
pub fn generate_ts(out_dir: &Path, prettier: Option<&Path>) -> Result<()> {
let v2_out_dir = out_dir.join("v2");
ensure_dir(out_dir)?;
ensure_dir(&v2_out_dir)?;
ClientRequest::export_all_to(out_dir)?;
export_client_responses(out_dir)?;
@@ -123,12 +78,15 @@ pub fn generate_ts(out_dir: &Path, prettier: Option<&Path>) -> Result<()> {
ServerNotification::export_all_to(out_dir)?;
generate_index_ts(out_dir)?;
generate_index_ts(&v2_out_dir)?;
let ts_files = ts_files_in(out_dir)?;
// Ensure our header is present on all TS files (root + subdirs like v2/).
let ts_files = ts_files_in_recursive(out_dir)?;
for file in &ts_files {
prepend_header_if_missing(file)?;
}
// Optionally run Prettier on all generated TS files.
if let Some(prettier_bin) = prettier
&& !ts_files.is_empty()
{
@@ -147,23 +105,47 @@ pub fn generate_ts(out_dir: &Path, prettier: Option<&Path>) -> Result<()> {
pub fn generate_json(out_dir: &Path) -> Result<()> {
ensure_dir(out_dir)?;
let mut bundle: BTreeMap<String, RootSchema> = BTreeMap::new();
let envelope_emitters: &[JsonSchemaEmitter] = &[
|d| write_json_schema_with_return::<crate::RequestId>(d, "RequestId"),
|d| write_json_schema_with_return::<crate::JSONRPCMessage>(d, "JSONRPCMessage"),
|d| write_json_schema_with_return::<crate::JSONRPCRequest>(d, "JSONRPCRequest"),
|d| write_json_schema_with_return::<crate::JSONRPCNotification>(d, "JSONRPCNotification"),
|d| write_json_schema_with_return::<crate::JSONRPCResponse>(d, "JSONRPCResponse"),
|d| write_json_schema_with_return::<crate::JSONRPCError>(d, "JSONRPCError"),
|d| write_json_schema_with_return::<crate::JSONRPCErrorError>(d, "JSONRPCErrorError"),
|d| write_json_schema_with_return::<crate::ClientRequest>(d, "ClientRequest"),
|d| write_json_schema_with_return::<crate::ServerRequest>(d, "ServerRequest"),
|d| write_json_schema_with_return::<crate::ClientNotification>(d, "ClientNotification"),
|d| write_json_schema_with_return::<crate::ServerNotification>(d, "ServerNotification"),
|d| write_json_schema_with_return::<EventMsg>(d, "EventMsg"),
|d| write_json_schema_with_return::<FileChange>(d, "FileChange"),
|d| write_json_schema_with_return::<crate::protocol::v1::InputItem>(d, "InputItem"),
|d| write_json_schema_with_return::<ParsedCommand>(d, "ParsedCommand"),
|d| write_json_schema_with_return::<SandboxPolicy>(d, "SandboxPolicy"),
];
macro_rules! add_schema {
($ty:path) => {{
let name = type_basename(stringify!($ty));
let schema = write_json_schema_with_return::<$ty>(out_dir, &name)?;
bundle.insert(name, schema);
}};
let mut schemas: Vec<GeneratedSchema> = Vec::new();
for emit in envelope_emitters {
schemas.push(emit(out_dir)?);
}
for_each_schema_type!(add_schema);
schemas.extend(export_client_param_schemas(out_dir)?);
schemas.extend(export_client_response_schemas(out_dir)?);
schemas.extend(export_server_param_schemas(out_dir)?);
schemas.extend(export_server_response_schemas(out_dir)?);
schemas.extend(export_client_notification_schemas(out_dir)?);
schemas.extend(export_server_notification_schemas(out_dir)?);
export_client_response_schemas(out_dir)?;
export_server_response_schemas(out_dir)?;
let bundle = build_schema_bundle(schemas)?;
write_pretty_json(
out_dir.join("codex_app_server_protocol.schemas.json"),
&bundle,
)?;
let mut definitions = Map::new();
Ok(())
}
fn build_schema_bundle(schemas: Vec<GeneratedSchema>) -> Result<Value> {
const SPECIAL_DEFINITIONS: &[&str] = &[
"ClientNotification",
"ClientRequest",
@@ -176,22 +158,62 @@ pub fn generate_json(out_dir: &Path) -> Result<()> {
"ServerRequest",
];
for (name, schema) in bundle {
let mut schema_value = serde_json::to_value(schema)?;
annotate_schema(&mut schema_value, Some(name.as_str()));
let namespaced_types = collect_namespaced_types(&schemas);
let mut definitions = Map::new();
if let Value::Object(ref mut obj) = schema_value
for schema in schemas {
let GeneratedSchema {
namespace,
logical_name,
mut value,
in_v1_dir,
} = schema;
if let Some(ref ns) = namespace {
rewrite_refs_to_namespace(&mut value, ns);
}
let mut forced_namespace_refs: Vec<(String, String)> = Vec::new();
if let Value::Object(ref mut obj) = value
&& let Some(defs) = obj.remove("definitions")
&& let Value::Object(defs_obj) = defs
{
for (def_name, mut def_schema) in defs_obj {
if !SPECIAL_DEFINITIONS.contains(&def_name.as_str()) {
annotate_schema(&mut def_schema, Some(def_name.as_str()));
if SPECIAL_DEFINITIONS.contains(&def_name.as_str()) {
continue;
}
annotate_schema(&mut def_schema, Some(def_name.as_str()));
let target_namespace = match namespace {
Some(ref ns) => Some(ns.clone()),
None => namespace_for_definition(&def_name, &namespaced_types)
.cloned()
.filter(|_| !in_v1_dir),
};
if let Some(ref ns) = target_namespace {
if namespace.as_deref() == Some(ns.as_str()) {
rewrite_refs_to_namespace(&mut def_schema, ns);
insert_into_namespace(&mut definitions, ns, def_name.clone(), def_schema)?;
} else if !forced_namespace_refs
.iter()
.any(|(name, existing_ns)| name == &def_name && existing_ns == ns)
{
forced_namespace_refs.push((def_name.clone(), ns.clone()));
}
} else {
definitions.insert(def_name, def_schema);
}
}
}
definitions.insert(name, schema_value);
for (name, ns) in forced_namespace_refs {
rewrite_named_ref_to_namespace(&mut value, &ns, &name);
}
if let Some(ref ns) = namespace {
insert_into_namespace(&mut definitions, ns, logical_name.clone(), value)?;
} else {
definitions.insert(logical_name, value);
}
}
let mut root = Map::new();
@@ -206,15 +228,28 @@ pub fn generate_json(out_dir: &Path) -> Result<()> {
root.insert("type".to_string(), Value::String("object".into()));
root.insert("definitions".to_string(), Value::Object(definitions));
write_pretty_json(
out_dir.join("codex_app_server_protocol.schemas.json"),
&Value::Object(root),
)?;
Ok(())
Ok(Value::Object(root))
}
fn write_json_schema_with_return<T>(out_dir: &Path, name: &str) -> Result<RootSchema>
fn insert_into_namespace(
definitions: &mut Map<String, Value>,
namespace: &str,
name: String,
schema: Value,
) -> Result<()> {
let entry = definitions
.entry(namespace.to_string())
.or_insert_with(|| Value::Object(Map::new()));
match entry {
Value::Object(map) => {
map.insert(name, schema);
Ok(())
}
_ => Err(anyhow!("expected namespace {namespace} to be an object")),
}
}
fn write_json_schema_with_return<T>(out_dir: &Path, name: &str) -> Result<GeneratedSchema>
where
T: JsonSchema,
{
@@ -222,17 +257,37 @@ where
let schema = schema_for!(T);
let mut schema_value = serde_json::to_value(schema)?;
annotate_schema(&mut schema_value, Some(file_stem));
write_pretty_json(out_dir.join(format!("{file_stem}.json")), &schema_value)
// If the name looks like a namespaced path (e.g., "v2::Type"), mirror
// the TypeScript layout and write to out_dir/v2/Type.json. Otherwise
// write alongside the legacy files.
let (raw_namespace, logical_name) = split_namespace(file_stem);
let out_path = if let Some(ns) = raw_namespace {
let dir = out_dir.join(ns);
ensure_dir(&dir)?;
dir.join(format!("{logical_name}.json"))
} else {
out_dir.join(format!("{file_stem}.json"))
};
write_pretty_json(out_path, &schema_value)
.with_context(|| format!("Failed to write JSON schema for {file_stem}"))?;
let annotated_schema = serde_json::from_value(schema_value)?;
Ok(annotated_schema)
let namespace = match raw_namespace {
Some("v1") | None => None,
Some(ns) => Some(ns.to_string()),
};
Ok(GeneratedSchema {
in_v1_dir: raw_namespace == Some("v1"),
namespace,
logical_name: logical_name.to_string(),
value: schema_value,
})
}
pub(crate) fn write_json_schema<T>(out_dir: &Path, name: &str) -> Result<()>
pub(crate) fn write_json_schema<T>(out_dir: &Path, name: &str) -> Result<GeneratedSchema>
where
T: JsonSchema,
{
write_json_schema_with_return::<T>(out_dir, name).map(|_| ())
write_json_schema_with_return::<T>(out_dir, name)
}
fn write_pretty_json(path: PathBuf, value: &impl Serialize) -> Result<()> {
@@ -241,13 +296,73 @@ fn write_pretty_json(path: PathBuf, value: &impl Serialize) -> Result<()> {
fs::write(&path, json).with_context(|| format!("Failed to write {}", path.display()))?;
Ok(())
}
fn type_basename(type_path: &str) -> String {
type_path
.rsplit_once("::")
.map(|(_, name)| name)
.unwrap_or(type_path)
.trim()
.to_string()
/// Split a fully-qualified type name like "v2::Type" into its namespace and logical name.
fn split_namespace(name: &str) -> (Option<&str>, &str) {
name.split_once("::")
.map_or((None, name), |(ns, rest)| (Some(ns), rest))
}
/// Recursively rewrite $ref values that point at "#/definitions/..." so that
/// they point to a namespaced location under the bundle.
fn rewrite_refs_to_namespace(value: &mut Value, ns: &str) {
match value {
Value::Object(obj) => {
if let Some(Value::String(r)) = obj.get_mut("$ref")
&& let Some(suffix) = r.strip_prefix("#/definitions/")
{
let prefix = format!("{ns}/");
if !suffix.starts_with(&prefix) {
*r = format!("#/definitions/{ns}/{suffix}");
}
}
for v in obj.values_mut() {
rewrite_refs_to_namespace(v, ns);
}
}
Value::Array(items) => {
for v in items.iter_mut() {
rewrite_refs_to_namespace(v, ns);
}
}
_ => {}
}
}
fn collect_namespaced_types(schemas: &[GeneratedSchema]) -> HashMap<String, String> {
let mut types = HashMap::new();
for schema in schemas {
if let Some(ns) = schema.namespace() {
types
.entry(schema.logical_name().to_string())
.or_insert_with(|| ns.to_string());
if let Some(Value::Object(defs)) = schema.value().get("definitions") {
for key in defs.keys() {
types.entry(key.clone()).or_insert_with(|| ns.to_string());
}
}
if let Some(Value::Object(defs)) = schema.value().get("$defs") {
for key in defs.keys() {
types.entry(key.clone()).or_insert_with(|| ns.to_string());
}
}
}
}
types
}
fn namespace_for_definition<'a>(
name: &str,
types: &'a HashMap<String, String>,
) -> Option<&'a String> {
if let Some(ns) = types.get(name) {
return Some(ns);
}
let trimmed = name.trim_end_matches(|c: char| c.is_ascii_digit());
if trimmed != name {
return types.get(trimmed);
}
None
}
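The digit-trimming fallback presumably covers schemars-style numbered duplicates (e.g. a second distinct `Thread` shape emitted as `Thread2`); a sketch with an illustrative map:

```rust
use std::collections::HashMap;

let mut types: HashMap<String, String> = HashMap::new();
types.insert("Thread".to_string(), "v2".to_string());

// "Thread2" is not in the map, so the trailing digits are trimmed and
// the base name's namespace is reused.
assert_eq!(
    namespace_for_definition("Thread2", &types),
    Some(&"v2".to_string())
);
```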
fn variant_definition_name(base: &str, variant: &Value) -> Option<String> {
@@ -467,6 +582,33 @@ fn ensure_dir(dir: &Path) -> Result<()> {
.with_context(|| format!("Failed to create output directory {}", dir.display()))
}
fn rewrite_named_ref_to_namespace(value: &mut Value, ns: &str, name: &str) {
let direct = format!("#/definitions/{name}");
let prefixed = format!("{direct}/");
let replacement = format!("#/definitions/{ns}/{name}");
let replacement_prefixed = format!("{replacement}/");
match value {
Value::Object(obj) => {
if let Some(Value::String(reference)) = obj.get_mut("$ref") {
if reference == &direct {
*reference = replacement;
} else if let Some(rest) = reference.strip_prefix(&prefixed) {
*reference = format!("{replacement_prefixed}{rest}");
}
}
for child in obj.values_mut() {
rewrite_named_ref_to_namespace(child, ns, name);
}
}
Value::Array(items) => {
for child in items {
rewrite_named_ref_to_namespace(child, ns, name);
}
}
_ => {}
}
}
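Unlike `rewrite_refs_to_namespace` above, this rewrite targets a single definition name (including refs into its sub-paths) and leaves every other ref untouched; a sketch:

```rust
let mut schema = serde_json::json!({
    "a": { "$ref": "#/definitions/Thread" },
    "b": { "$ref": "#/definitions/Turn" }
});
rewrite_named_ref_to_namespace(&mut schema, "v2", "Thread");
assert_eq!(schema["a"]["$ref"], "#/definitions/v2/Thread");
assert_eq!(schema["b"]["$ref"], "#/definitions/Turn"); // not the named type
```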
fn prepend_header_if_missing(path: &Path) -> Result<()> {
let mut content = String::new();
{
@@ -504,6 +646,26 @@ fn ts_files_in(dir: &Path) -> Result<Vec<PathBuf>> {
Ok(files)
}
fn ts_files_in_recursive(dir: &Path) -> Result<Vec<PathBuf>> {
let mut files = Vec::new();
let mut stack = vec![dir.to_path_buf()];
while let Some(d) = stack.pop() {
for entry in
fs::read_dir(&d).with_context(|| format!("Failed to read dir {}", d.display()))?
{
let entry = entry?;
let path = entry.path();
if path.is_dir() {
stack.push(path);
} else if path.is_file() && path.extension() == Some(OsStr::new("ts")) {
files.push(path);
}
}
}
files.sort();
Ok(files)
}
fn generate_index_ts(out_dir: &Path) -> Result<PathBuf> {
let mut entries: Vec<String> = Vec::new();
let mut stems: Vec<String> = ts_files_in(out_dir)?
@@ -520,6 +682,14 @@ fn generate_index_ts(out_dir: &Path) -> Result<PathBuf> {
entries.push(format!("export type {{ {name} }} from \"./{name}\";\n"));
}
// If this is the root out_dir and a ./v2 folder exists with TS files,
// expose it as a namespace to avoid symbol collisions at the root.
let v2_dir = out_dir.join("v2");
let has_v2_ts = ts_files_in(&v2_dir).map(|v| !v.is_empty()).unwrap_or(false);
if has_v2_ts {
entries.push("export * as v2 from \"./v2\";\n".to_string());
}
let mut content =
String::with_capacity(HEADER.len() + entries.iter().map(String::len).sum::<usize>());
content.push_str(HEADER);
@@ -546,6 +716,7 @@ mod tests {
#[test]
fn generated_ts_has_no_optional_nullable_fields() -> Result<()> {
// Assert that there are no types of the form "?: T | null" in the generated TS files.
let output_dir = std::env::temp_dir().join(format!("codex_ts_types_{}", Uuid::now_v7()));
fs::create_dir(&output_dir)?;

View File

@@ -1,15 +1,17 @@
use std::collections::HashMap;
use std::path::Path;
use std::path::PathBuf;
use crate::JSONRPCNotification;
use crate::JSONRPCRequest;
use crate::RequestId;
use crate::export::GeneratedSchema;
use crate::export::write_json_schema;
use crate::protocol::v1;
use crate::protocol::v2;
use codex_protocol::ConversationId;
use codex_protocol::parse_command::ParsedCommand;
use codex_protocol::protocol::FileChange;
use codex_protocol::protocol::RateLimitSnapshot;
use codex_protocol::protocol::ReviewDecision;
use codex_protocol::protocol::SandboxCommandAssessment;
use paste::paste;
@@ -74,33 +76,97 @@ macro_rules! client_request_definitions {
Ok(())
}
#[allow(clippy::vec_init_then_push)]
pub fn export_client_response_schemas(
out_dir: &::std::path::Path,
) -> ::anyhow::Result<()> {
) -> ::anyhow::Result<Vec<GeneratedSchema>> {
let mut schemas = Vec::new();
$(
crate::export::write_json_schema::<$response>(out_dir, stringify!($response))?;
schemas.push(write_json_schema::<$response>(out_dir, stringify!($response))?);
)*
Ok(())
Ok(schemas)
}
#[allow(clippy::vec_init_then_push)]
pub fn export_client_param_schemas(
out_dir: &::std::path::Path,
) -> ::anyhow::Result<Vec<GeneratedSchema>> {
let mut schemas = Vec::new();
$(
schemas.push(write_json_schema::<$params>(out_dir, stringify!($params))?);
)*
Ok(schemas)
}
};
}
client_request_definitions! {
/// NEW APIs
#[serde(rename = "model/list")]
#[ts(rename = "model/list")]
ListModels {
params: v2::ListModelsParams,
response: v2::ListModelsResponse,
// Thread lifecycle
#[serde(rename = "thread/start")]
#[ts(rename = "thread/start")]
ThreadStart {
params: v2::ThreadStartParams,
response: v2::ThreadStartResponse,
},
#[serde(rename = "thread/resume")]
#[ts(rename = "thread/resume")]
ThreadResume {
params: v2::ThreadResumeParams,
response: v2::ThreadResumeResponse,
},
#[serde(rename = "thread/archive")]
#[ts(rename = "thread/archive")]
ThreadArchive {
params: v2::ThreadArchiveParams,
response: v2::ThreadArchiveResponse,
},
#[serde(rename = "thread/list")]
#[ts(rename = "thread/list")]
ThreadList {
params: v2::ThreadListParams,
response: v2::ThreadListResponse,
},
#[serde(rename = "thread/compact")]
#[ts(rename = "thread/compact")]
ThreadCompact {
params: v2::ThreadCompactParams,
response: v2::ThreadCompactResponse,
},
#[serde(rename = "turn/start")]
#[ts(rename = "turn/start")]
TurnStart {
params: v2::TurnStartParams,
response: v2::TurnStartResponse,
},
#[serde(rename = "turn/interrupt")]
#[ts(rename = "turn/interrupt")]
TurnInterrupt {
params: v2::TurnInterruptParams,
response: v2::TurnInterruptResponse,
},
#[serde(rename = "account/login")]
#[ts(rename = "account/login")]
#[serde(rename = "model/list")]
#[ts(rename = "model/list")]
ModelList {
params: v2::ModelListParams,
response: v2::ModelListResponse,
},
#[serde(rename = "account/login/start")]
#[ts(rename = "account/login/start")]
LoginAccount {
params: v2::LoginAccountParams,
response: v2::LoginAccountResponse,
},
#[serde(rename = "account/login/cancel")]
#[ts(rename = "account/login/cancel")]
CancelLoginAccount {
params: v2::CancelLoginAccountParams,
response: v2::CancelLoginAccountResponse,
},
#[serde(rename = "account/logout")]
#[ts(rename = "account/logout")]
LogoutAccount {
@@ -117,9 +183,9 @@ client_request_definitions! {
#[serde(rename = "feedback/upload")]
#[ts(rename = "feedback/upload")]
UploadFeedback {
params: v2::UploadFeedbackParams,
response: v2::UploadFeedbackResponse,
FeedbackUpload {
params: v2::FeedbackUploadParams,
response: v2::FeedbackUploadResponse,
},
#[serde(rename = "account/read")]
@@ -188,6 +254,7 @@ client_request_definitions! {
params: #[ts(type = "undefined")] #[serde(skip_serializing_if = "Option::is_none")] Option<()>,
response: v1::LoginChatGptResponse,
},
// DEPRECATED in favor of CancelLoginAccount
CancelLoginChatGpt {
params: v1::CancelLoginChatGptParams,
response: v1::CancelLoginChatGptResponse,
@@ -276,13 +343,101 @@ macro_rules! server_request_definitions {
Ok(())
}
#[allow(clippy::vec_init_then_push)]
pub fn export_server_response_schemas(
out_dir: &::std::path::Path,
) -> ::anyhow::Result<()> {
out_dir: &Path,
) -> ::anyhow::Result<Vec<GeneratedSchema>> {
let mut schemas = Vec::new();
paste! {
$(crate::export::write_json_schema::<[<$variant Response>]>(out_dir, stringify!([<$variant Response>]))?;)*
$(schemas.push(crate::export::write_json_schema::<[<$variant Response>]>(out_dir, stringify!([<$variant Response>]))?);)*
}
Ok(())
Ok(schemas)
}
#[allow(clippy::vec_init_then_push)]
pub fn export_server_param_schemas(
out_dir: &Path,
) -> ::anyhow::Result<Vec<GeneratedSchema>> {
let mut schemas = Vec::new();
paste! {
$(schemas.push(crate::export::write_json_schema::<[<$variant Params>]>(out_dir, stringify!([<$variant Params>]))?);)*
}
Ok(schemas)
}
};
}
/// Generates `ServerNotification` enum and helpers, including a JSON Schema
/// exporter for each notification.
macro_rules! server_notification_definitions {
(
$(
$(#[$variant_meta:meta])*
$variant:ident $(=> $wire:literal)? ( $payload:ty )
),* $(,)?
) => {
/// Notification sent from the server to the client.
#[derive(Serialize, Deserialize, Debug, Clone, JsonSchema, TS, Display)]
#[serde(tag = "method", content = "params", rename_all = "camelCase")]
#[strum(serialize_all = "camelCase")]
pub enum ServerNotification {
$(
$(#[$variant_meta])*
$(#[serde(rename = $wire)] #[ts(rename = $wire)] #[strum(serialize = $wire)])?
$variant($payload),
)*
}
impl ServerNotification {
pub fn to_params(self) -> Result<serde_json::Value, serde_json::Error> {
match self {
$(Self::$variant(params) => serde_json::to_value(params),)*
}
}
}
impl TryFrom<JSONRPCNotification> for ServerNotification {
type Error = serde_json::Error;
fn try_from(value: JSONRPCNotification) -> Result<Self, Self::Error> {
serde_json::from_value(serde_json::to_value(value)?)
}
}
#[allow(clippy::vec_init_then_push)]
pub fn export_server_notification_schemas(
out_dir: &::std::path::Path,
) -> ::anyhow::Result<Vec<GeneratedSchema>> {
let mut schemas = Vec::new();
$(schemas.push(crate::export::write_json_schema::<$payload>(out_dir, stringify!($payload))?);)*
Ok(schemas)
}
};
}
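With `tag = "method"` and `content = "params"`, each variant serializes as a method/params envelope, using the optional wire rename when present. A sketch of the assumed shape for a renamed variant (field values are illustrative; the account/* tests later in this diff exercise the same envelope):

```rust
let expected = serde_json::json!({
    "method": "turn/started",
    "params": { "turn": { "id": "turn-1", "items": [], "status": "inProgress", "error": null } }
});
```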
/// Notifications sent from the client to the server.
macro_rules! client_notification_definitions {
(
$(
$(#[$variant_meta:meta])*
$variant:ident $( ( $payload:ty ) )?
),* $(,)?
) => {
#[derive(Serialize, Deserialize, Debug, Clone, JsonSchema, TS, Display)]
#[serde(tag = "method", content = "params", rename_all = "camelCase")]
#[strum(serialize_all = "camelCase")]
pub enum ClientNotification {
$(
$(#[$variant_meta])*
$variant $( ( $payload ) )?,
)*
}
pub fn export_client_notification_schemas(
_out_dir: &::std::path::Path,
) -> ::anyhow::Result<Vec<GeneratedSchema>> {
#[allow(unused_mut)]
let mut schemas = Vec::new();
$( $(schemas.push(crate::export::write_json_schema::<$payload>(_out_dir, stringify!($payload))?);)? )*
Ok(schemas)
}
};
}
@@ -366,52 +521,33 @@ pub struct FuzzyFileSearchResponse {
pub files: Vec<FuzzyFileSearchResult>,
}
/// Notification sent from the server to the client.
#[derive(Serialize, Deserialize, Debug, Clone, JsonSchema, TS, Display)]
#[serde(tag = "method", content = "params", rename_all = "camelCase")]
#[strum(serialize_all = "camelCase")]
pub enum ServerNotification {
server_notification_definitions! {
/// NEW NOTIFICATIONS
#[serde(rename = "account/rateLimits/updated")]
#[ts(rename = "account/rateLimits/updated")]
#[strum(serialize = "account/rateLimits/updated")]
AccountRateLimitsUpdated(RateLimitSnapshot),
ThreadStarted => "thread/started" (v2::ThreadStartedNotification),
TurnStarted => "turn/started" (v2::TurnStartedNotification),
TurnCompleted => "turn/completed" (v2::TurnCompletedNotification),
ItemStarted => "item/started" (v2::ItemStartedNotification),
ItemCompleted => "item/completed" (v2::ItemCompletedNotification),
AgentMessageDelta => "item/agentMessage/delta" (v2::AgentMessageDeltaNotification),
CommandExecutionOutputDelta => "item/commandExecution/outputDelta" (v2::CommandExecutionOutputDeltaNotification),
McpToolCallProgress => "item/mcpToolCall/progress" (v2::McpToolCallProgressNotification),
AccountUpdated => "account/updated" (v2::AccountUpdatedNotification),
AccountRateLimitsUpdated => "account/rateLimits/updated" (v2::AccountRateLimitsUpdatedNotification),
#[serde(rename = "account/login/completed")]
#[ts(rename = "account/login/completed")]
#[strum(serialize = "account/login/completed")]
AccountLoginCompleted(v2::AccountLoginCompletedNotification),
/// DEPRECATED NOTIFICATIONS below
/// Authentication status changed
AuthStatusChange(v1::AuthStatusChangeNotification),
/// ChatGPT login flow completed
/// Deprecated: use `account/login/completed` instead.
LoginChatGptComplete(v1::LoginChatGptCompleteNotification),
/// The special session configured event for a new or resumed conversation.
SessionConfigured(v1::SessionConfiguredNotification),
}
impl ServerNotification {
pub fn to_params(self) -> Result<serde_json::Value, serde_json::Error> {
match self {
ServerNotification::AccountRateLimitsUpdated(params) => serde_json::to_value(params),
ServerNotification::AuthStatusChange(params) => serde_json::to_value(params),
ServerNotification::LoginChatGptComplete(params) => serde_json::to_value(params),
ServerNotification::SessionConfigured(params) => serde_json::to_value(params),
}
}
}
impl TryFrom<JSONRPCNotification> for ServerNotification {
type Error = serde_json::Error;
fn try_from(value: JSONRPCNotification) -> Result<Self, Self::Error> {
serde_json::from_value(serde_json::to_value(value)?)
}
}
/// Notification sent from the client to the server.
#[derive(Serialize, Deserialize, Debug, Clone, JsonSchema, TS, Display)]
#[serde(tag = "method", content = "params", rename_all = "camelCase")]
#[strum(serialize_all = "camelCase")]
pub enum ClientNotification {
client_notification_definitions! {
Initialized,
}
@@ -571,7 +707,7 @@ mod tests {
};
assert_eq!(
json!({
"method": "account/login",
"method": "account/login/start",
"id": 2,
"params": {
"type": "apiKey",
@@ -587,11 +723,11 @@ mod tests {
fn serialize_account_login_chatgpt() -> Result<()> {
let request = ClientRequest::LoginAccount {
request_id: RequestId::Integer(3),
params: v2::LoginAccountParams::ChatGpt,
params: v2::LoginAccountParams::Chatgpt,
};
assert_eq!(
json!({
"method": "account/login",
"method": "account/login/start",
"id": 3,
"params": {
"type": "chatgpt"
@@ -647,7 +783,7 @@ mod tests {
serde_json::to_value(&api_key)?,
);
let chatgpt = v2::Account::ChatGpt {
let chatgpt = v2::Account::Chatgpt {
email: Some("user@example.com".to_string()),
plan_type: PlanType::Plus,
};
@@ -665,16 +801,16 @@ mod tests {
#[test]
fn serialize_list_models() -> Result<()> {
let request = ClientRequest::ListModels {
let request = ClientRequest::ModelList {
request_id: RequestId::Integer(6),
params: v2::ListModelsParams::default(),
params: v2::ModelListParams::default(),
};
assert_eq!(
json!({
"method": "model/list",
"id": 6,
"params": {
"pageSize": null,
"limit": null,
"cursor": null
}
}),

View File

@@ -374,10 +374,9 @@ pub enum InputItem {
LocalImage { path: PathBuf },
}
// Deprecated notifications (v1)
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
/// Deprecated in favor of AccountLoginCompletedNotification.
pub struct LoginChatGptCompleteNotification {
#[schemars(with = "String")]
pub login_id: Uuid,
@@ -400,6 +399,7 @@ pub struct SessionConfiguredNotification {
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
/// Deprecated notification. Use AccountUpdatedNotification instead.
pub struct AuthStatusChangeNotification {
pub auth_method: Option<AuthMode>,
}

View File

@@ -1,16 +1,125 @@
use std::collections::HashMap;
use std::path::PathBuf;
use crate::protocol::common::AuthMode;
use codex_protocol::ConversationId;
use codex_protocol::account::PlanType;
use codex_protocol::config_types::ReasoningEffort;
use codex_protocol::protocol::RateLimitSnapshot;
use codex_protocol::config_types::ReasoningSummary;
use codex_protocol::protocol::RateLimitSnapshot as CoreRateLimitSnapshot;
use codex_protocol::protocol::RateLimitWindow as CoreRateLimitWindow;
use codex_protocol::user_input::UserInput as CoreUserInput;
use mcp_types::ContentBlock as McpContentBlock;
use schemars::JsonSchema;
use serde::Deserialize;
use serde::Serialize;
use serde_json::Value as JsonValue;
use ts_rs::TS;
use uuid::Uuid;
// Macro to declare a camelCased API v2 enum mirroring a core enum, whose
// wire format tends to use kebab-case.
macro_rules! v2_enum_from_core {
(
pub enum $Name:ident from $Src:path { $( $Variant:ident ),+ $(,)? }
) => {
#[derive(Serialize, Deserialize, Debug, Clone, Copy, PartialEq, Eq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub enum $Name { $( $Variant ),+ }
impl $Name {
pub fn to_core(self) -> $Src {
match self { $( $Name::$Variant => <$Src>::$Variant ),+ }
}
}
impl From<$Src> for $Name {
fn from(value: $Src) -> Self {
match value { $( <$Src>::$Variant => $Name::$Variant ),+ }
}
}
};
}
v2_enum_from_core!(
pub enum AskForApproval from codex_protocol::protocol::AskForApproval {
UnlessTrusted, OnFailure, OnRequest, Never
}
);
v2_enum_from_core!(
pub enum SandboxMode from codex_protocol::config_types::SandboxMode {
ReadOnly, WorkspaceWrite, DangerFullAccess
}
);
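A sketch of the generated conversions (variants map one-to-one in both directions):

```rust
// Illustrative round-trip through the generated From/to_core pair.
let v2 = AskForApproval::from(codex_protocol::protocol::AskForApproval::OnRequest);
assert_eq!(v2, AskForApproval::OnRequest);
let _core = v2.to_core(); // back to codex_protocol::protocol::AskForApproval::OnRequest
```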
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq, JsonSchema, TS)]
#[serde(tag = "mode", rename_all = "camelCase")]
#[ts(tag = "mode")]
#[ts(export_to = "v2/")]
pub enum SandboxPolicy {
DangerFullAccess,
ReadOnly,
WorkspaceWrite {
#[serde(default)]
writable_roots: Vec<PathBuf>,
#[serde(default)]
network_access: bool,
#[serde(default)]
exclude_tmpdir_env_var: bool,
#[serde(default)]
exclude_slash_tmp: bool,
},
}
impl SandboxPolicy {
pub fn to_core(&self) -> codex_protocol::protocol::SandboxPolicy {
match self {
SandboxPolicy::DangerFullAccess => {
codex_protocol::protocol::SandboxPolicy::DangerFullAccess
}
SandboxPolicy::ReadOnly => codex_protocol::protocol::SandboxPolicy::ReadOnly,
SandboxPolicy::WorkspaceWrite {
writable_roots,
network_access,
exclude_tmpdir_env_var,
exclude_slash_tmp,
} => codex_protocol::protocol::SandboxPolicy::WorkspaceWrite {
writable_roots: writable_roots.clone(),
network_access: *network_access,
exclude_tmpdir_env_var: *exclude_tmpdir_env_var,
exclude_slash_tmp: *exclude_slash_tmp,
},
}
}
}
impl From<codex_protocol::protocol::SandboxPolicy> for SandboxPolicy {
fn from(value: codex_protocol::protocol::SandboxPolicy) -> Self {
match value {
codex_protocol::protocol::SandboxPolicy::DangerFullAccess => {
SandboxPolicy::DangerFullAccess
}
codex_protocol::protocol::SandboxPolicy::ReadOnly => SandboxPolicy::ReadOnly,
codex_protocol::protocol::SandboxPolicy::WorkspaceWrite {
writable_roots,
network_access,
exclude_tmpdir_env_var,
exclude_slash_tmp,
} => SandboxPolicy::WorkspaceWrite {
writable_roots,
network_access,
exclude_tmpdir_env_var,
exclude_slash_tmp,
},
}
}
}
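Given the `tag = "mode"` attributes, the assumed wire shape tags each variant with a camelCased `mode` field; a sketch:

```rust
let policy = SandboxPolicy::ReadOnly;
assert_eq!(
    serde_json::to_value(&policy).expect("serialize"),
    serde_json::json!({ "mode": "readOnly" })
);
```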
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(tag = "type", rename_all = "camelCase")]
#[ts(tag = "type")]
#[ts(export_to = "v2/")]
pub enum Account {
#[serde(rename = "apiKey", rename_all = "camelCase")]
#[ts(rename = "apiKey", rename_all = "camelCase")]
@@ -18,7 +127,7 @@ pub enum Account {
#[serde(rename = "chatgpt", rename_all = "camelCase")]
#[ts(rename = "chatgpt", rename_all = "camelCase")]
ChatGpt {
Chatgpt {
email: Option<String>,
plan_type: PlanType,
},
@@ -27,9 +136,10 @@ pub enum Account {
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(tag = "type")]
#[ts(tag = "type")]
#[ts(export_to = "v2/")]
pub enum LoginAccountParams {
#[serde(rename = "apiKey")]
#[ts(rename = "apiKey")]
#[serde(rename = "apiKey", rename_all = "camelCase")]
#[ts(rename = "apiKey", rename_all = "camelCase")]
ApiKey {
#[serde(rename = "apiKey")]
#[ts(rename = "apiKey")]
@@ -37,48 +147,72 @@ pub enum LoginAccountParams {
},
#[serde(rename = "chatgpt")]
#[ts(rename = "chatgpt")]
ChatGpt,
Chatgpt,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(tag = "type", rename_all = "camelCase")]
#[ts(tag = "type")]
#[ts(export_to = "v2/")]
pub enum LoginAccountResponse {
#[serde(rename = "apiKey", rename_all = "camelCase")]
#[ts(rename = "apiKey", rename_all = "camelCase")]
ApiKey {},
#[serde(rename = "chatgpt", rename_all = "camelCase")]
#[ts(rename = "chatgpt", rename_all = "camelCase")]
Chatgpt {
// Use plain String for identifiers to avoid TS/JSON Schema quirks around uuid-specific types.
// Convert to/from UUIDs at the application layer as needed.
login_id: String,
/// URL the client should open in a browser to initiate the OAuth flow.
auth_url: String,
},
}
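A sketch of the assumed wire shape for the ChatGPT branch (identifiers and URL are placeholders; field names follow the camelCase renames above):

```rust
let response = LoginAccountResponse::Chatgpt {
    login_id: "00000000-0000-0000-0000-000000000000".to_string(),
    auth_url: "https://example.com/oauth/start".to_string(), // placeholder URL
};
// Serializes as:
// {"type":"chatgpt","loginId":"00000000-...","authUrl":"https://example.com/oauth/start"}
```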
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct LoginAccountResponse {
/// Only set if the login method is ChatGPT.
#[schemars(with = "String")]
pub login_id: Option<Uuid>,
/// URL the client should open in a browser to initiate the OAuth flow.
/// Only set if the login method is ChatGPT.
pub auth_url: Option<String>,
#[ts(export_to = "v2/")]
pub struct CancelLoginAccountParams {
pub login_id: String,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct CancelLoginAccountResponse {}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct LogoutAccountResponse {}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct GetAccountRateLimitsResponse {
pub rate_limits: RateLimitSnapshot,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct GetAccountResponse {
pub account: Account,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Default, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct ListModelsParams {
/// Optional page size; defaults to a reasonable server-side value.
pub page_size: Option<usize>,
#[ts(export_to = "v2/")]
pub struct ModelListParams {
/// Opaque pagination cursor returned by a previous call.
pub cursor: Option<String>,
/// Optional page size; defaults to a reasonable server-side value.
pub limit: Option<u32>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct Model {
pub id: String,
pub model: String,
@@ -92,6 +226,7 @@ pub struct Model {
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct ReasoningEffortOption {
pub reasoning_effort: ReasoningEffort,
pub description: String,
@@ -99,16 +234,18 @@ pub struct ReasoningEffortOption {
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct ListModelsResponse {
pub items: Vec<Model>,
#[ts(export_to = "v2/")]
pub struct ModelListResponse {
pub data: Vec<Model>,
/// Opaque cursor to pass to the next call to continue after the last item.
/// if None, there are no more items to return.
/// If None, there are no more items to return.
pub next_cursor: Option<String>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct UploadFeedbackParams {
#[ts(export_to = "v2/")]
pub struct FeedbackUploadParams {
pub classification: String,
pub reason: Option<String>,
pub conversation_id: Option<ConversationId>,
@@ -117,6 +254,446 @@ pub struct UploadFeedbackParams {
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct UploadFeedbackResponse {
#[ts(export_to = "v2/")]
pub struct FeedbackUploadResponse {
pub thread_id: String,
}
// === Threads, Turns, and Items ===
// Thread APIs
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Default, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct ThreadStartParams {
pub model: Option<String>,
pub model_provider: Option<String>,
pub cwd: Option<String>,
pub approval_policy: Option<AskForApproval>,
pub sandbox: Option<SandboxMode>,
pub config: Option<HashMap<String, serde_json::Value>>,
pub base_instructions: Option<String>,
pub developer_instructions: Option<String>,
}
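Since every field is optional and the struct derives `Default`, a minimal start request can override just what it needs; a sketch with illustrative values:

```rust
let params = ThreadStartParams {
    model: Some("gpt-5-codex".to_string()),
    cwd: Some("/tmp/project".to_string()), // illustrative path
    ..Default::default()
};
```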
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct ThreadStartResponse {
pub thread: Thread,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct ThreadResumeParams {
pub thread_id: String,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct ThreadResumeResponse {
pub thread: Thread,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct ThreadArchiveParams {
pub thread_id: String,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct ThreadArchiveResponse {}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct ThreadListParams {
/// Opaque pagination cursor returned by a previous call.
pub cursor: Option<String>,
/// Optional page size; defaults to a reasonable server-side value.
pub limit: Option<u32>,
/// Optional provider filter; when set, only sessions recorded under these
/// providers are returned. When present but empty, all providers are included.
pub model_providers: Option<Vec<String>>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct ThreadListResponse {
pub data: Vec<Thread>,
/// Opaque cursor to pass to the next call to continue after the last item.
/// If None, there are no more items to return.
pub next_cursor: Option<String>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct ThreadCompactParams {
pub thread_id: String,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct ThreadCompactResponse {}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct Thread {
pub id: String,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct AccountUpdatedNotification {
pub auth_mode: Option<AuthMode>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct Turn {
pub id: String,
pub items: Vec<ThreadItem>,
pub status: TurnStatus,
pub error: Option<TurnError>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct TurnError {
pub message: String,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub enum TurnStatus {
Completed,
Interrupted,
Failed,
InProgress,
}
// Turn APIs
#[derive(Serialize, Deserialize, Debug, Default, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct TurnStartParams {
pub thread_id: String,
pub input: Vec<UserInput>,
/// Override the working directory for this turn and subsequent turns.
pub cwd: Option<PathBuf>,
/// Override the approval policy for this turn and subsequent turns.
pub approval_policy: Option<AskForApproval>,
/// Override the sandbox policy for this turn and subsequent turns.
pub sandbox_policy: Option<SandboxPolicy>,
/// Override the model for this turn and subsequent turns.
pub model: Option<String>,
/// Override the reasoning effort for this turn and subsequent turns.
pub effort: Option<ReasoningEffort>,
/// Override the reasoning summary for this turn and subsequent turns.
pub summary: Option<ReasoningSummary>,
}
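A sketch of starting a turn with a one-off model override (identifiers are illustrative; omitted overrides keep the thread's current settings, per the field docs above):

```rust
let params = TurnStartParams {
    thread_id: "thread-123".to_string(),
    input: vec![UserInput::Text { text: "Run the tests".to_string() }],
    model: Some("gpt-5-codex".to_string()),
    ..Default::default()
};
```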
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct TurnStartResponse {
pub turn: Turn,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct TurnInterruptParams {
pub thread_id: String,
pub turn_id: String,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct TurnInterruptResponse {}
// User input types
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(tag = "type", rename_all = "camelCase")]
#[ts(tag = "type")]
#[ts(export_to = "v2/")]
pub enum UserInput {
Text { text: String },
Image { url: String },
LocalImage { path: PathBuf },
}
impl UserInput {
pub fn into_core(self) -> CoreUserInput {
match self {
UserInput::Text { text } => CoreUserInput::Text { text },
UserInput::Image { url } => CoreUserInput::Image { image_url: url },
UserInput::LocalImage { path } => CoreUserInput::LocalImage { path },
}
}
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(tag = "type", rename_all = "camelCase")]
#[ts(tag = "type")]
#[ts(export_to = "v2/")]
pub enum ThreadItem {
UserMessage {
id: String,
content: Vec<UserInput>,
},
AgentMessage {
id: String,
text: String,
},
Reasoning {
id: String,
text: String,
},
CommandExecution {
id: String,
command: String,
aggregated_output: String,
exit_code: Option<i32>,
status: CommandExecutionStatus,
duration_ms: Option<i64>,
},
FileChange {
id: String,
changes: Vec<FileUpdateChange>,
status: PatchApplyStatus,
},
McpToolCall {
id: String,
server: String,
tool: String,
status: McpToolCallStatus,
arguments: JsonValue,
result: Option<McpToolCallResult>,
error: Option<McpToolCallError>,
},
WebSearch {
id: String,
query: String,
},
TodoList {
id: String,
items: Vec<TodoItem>,
},
ImageView {
id: String,
path: String,
},
CodeReview {
id: String,
review: String,
},
}
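Given the `tag = "type"` attribute, each item is internally tagged with a camelCased variant name; a sketch of the assumed wire shape for an agent message:

```rust
let item = ThreadItem::AgentMessage {
    id: "item-1".to_string(), // illustrative id
    text: "Hello".to_string(),
};
// Serializes as {"type":"agentMessage","id":"item-1","text":"Hello"}
```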
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub enum CommandExecutionStatus {
InProgress,
Completed,
Failed,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct FileUpdateChange {
pub path: String,
pub kind: PatchChangeKind,
pub diff: String,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub enum PatchChangeKind {
Add,
Delete,
Update,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub enum PatchApplyStatus {
Completed,
Failed,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub enum McpToolCallStatus {
InProgress,
Completed,
Failed,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct McpToolCallResult {
pub content: Vec<McpContentBlock>,
pub structured_content: JsonValue,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct McpToolCallError {
pub message: String,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct TodoItem {
pub id: String,
pub text: String,
pub completed: bool,
}
// === Server Notifications ===
// Thread/Turn lifecycle notifications and item progress events
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct ThreadStartedNotification {
pub thread: Thread,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct TurnStartedNotification {
pub turn: Turn,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct Usage {
pub input_tokens: i32,
pub cached_input_tokens: i32,
pub output_tokens: i32,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct TurnCompletedNotification {
pub turn: Turn,
// TODO: should usage be stored on the Turn object, and we return that instead?
pub usage: Usage,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct ItemStartedNotification {
pub item: ThreadItem,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct ItemCompletedNotification {
pub item: ThreadItem,
}
// Item-specific progress notifications
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct AgentMessageDeltaNotification {
pub item_id: String,
pub delta: String,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct CommandExecutionOutputDeltaNotification {
pub item_id: String,
pub delta: String,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct McpToolCallProgressNotification {
pub item_id: String,
pub message: String,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct AccountRateLimitsUpdatedNotification {
pub rate_limits: RateLimitSnapshot,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct RateLimitSnapshot {
pub primary: Option<RateLimitWindow>,
pub secondary: Option<RateLimitWindow>,
}
impl From<CoreRateLimitSnapshot> for RateLimitSnapshot {
fn from(value: CoreRateLimitSnapshot) -> Self {
Self {
primary: value.primary.map(RateLimitWindow::from),
secondary: value.secondary.map(RateLimitWindow::from),
}
}
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct RateLimitWindow {
pub used_percent: i32,
pub window_duration_mins: Option<i64>,
pub resets_at: Option<i64>,
}
impl From<CoreRateLimitWindow> for RateLimitWindow {
fn from(value: CoreRateLimitWindow) -> Self {
Self {
used_percent: value.used_percent.round() as i32,
window_duration_mins: value.window_minutes,
resets_at: value.resets_at,
}
}
}
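The conversion rounds the core window's fractional percentage to a whole percent (assuming the core field is a float, as the `.round()` call implies); a sketch of the rounding behavior:

```rust
assert_eq!(25.4_f64.round() as i32, 25);
assert_eq!(25.5_f64.round() as i32, 26);
```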
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct AccountLoginCompletedNotification {
// Use plain String for identifiers to avoid TS/JSON Schema quirks around uuid-specific types.
// Convert to/from UUIDs at the application layer as needed.
pub login_id: Option<String>,
pub success: bool,
pub error: Option<String>,
}

File diff suppressed because it is too large

View File

@@ -64,64 +64,79 @@ impl MessageProcessor {
pub(crate) async fn process_request(&mut self, request: JSONRPCRequest) {
let request_id = request.id.clone();
if let Ok(request_json) = serde_json::to_value(request)
&& let Ok(codex_request) = serde_json::from_value::<ClientRequest>(request_json)
{
match codex_request {
// Handle Initialize internally so CodexMessageProcessor does not have to concern
// itself with the `initialized` bool.
ClientRequest::Initialize { request_id, params } => {
if self.initialized {
let error = JSONRPCErrorError {
code: INVALID_REQUEST_ERROR_CODE,
message: "Already initialized".to_string(),
data: None,
};
self.outgoing.send_error(request_id, error).await;
return;
} else {
let ClientInfo {
name,
title: _title,
version,
} = params.client_info;
let user_agent_suffix = format!("{name}; {version}");
if let Ok(mut suffix) = USER_AGENT_SUFFIX.lock() {
*suffix = Some(user_agent_suffix);
}
let request_json = match serde_json::to_value(&request) {
Ok(request_json) => request_json,
Err(err) => {
let error = JSONRPCErrorError {
code: INVALID_REQUEST_ERROR_CODE,
message: format!("Invalid request: {err}"),
data: None,
};
self.outgoing.send_error(request_id, error).await;
return;
}
};
let user_agent = get_codex_user_agent();
let response = InitializeResponse { user_agent };
self.outgoing.send_response(request_id, response).await;
let codex_request = match serde_json::from_value::<ClientRequest>(request_json) {
Ok(codex_request) => codex_request,
Err(err) => {
let error = JSONRPCErrorError {
code: INVALID_REQUEST_ERROR_CODE,
message: format!("Invalid request: {err}"),
data: None,
};
self.outgoing.send_error(request_id, error).await;
return;
}
};
self.initialized = true;
return;
}
}
_ => {
if !self.initialized {
let error = JSONRPCErrorError {
code: INVALID_REQUEST_ERROR_CODE,
message: "Not initialized".to_string(),
data: None,
};
self.outgoing.send_error(request_id, error).await;
return;
match codex_request {
// Handle Initialize internally so CodexMessageProcessor does not have to concern
// itself with the `initialized` bool.
ClientRequest::Initialize { request_id, params } => {
if self.initialized {
let error = JSONRPCErrorError {
code: INVALID_REQUEST_ERROR_CODE,
message: "Already initialized".to_string(),
data: None,
};
self.outgoing.send_error(request_id, error).await;
return;
} else {
let ClientInfo {
name,
title: _title,
version,
} = params.client_info;
let user_agent_suffix = format!("{name}; {version}");
if let Ok(mut suffix) = USER_AGENT_SUFFIX.lock() {
*suffix = Some(user_agent_suffix);
}
let user_agent = get_codex_user_agent();
let response = InitializeResponse { user_agent };
self.outgoing.send_response(request_id, response).await;
self.initialized = true;
return;
}
}
_ => {
if !self.initialized {
let error = JSONRPCErrorError {
code: INVALID_REQUEST_ERROR_CODE,
message: "Not initialized".to_string(),
data: None,
};
self.outgoing.send_error(request_id, error).await;
return;
}
}
self.codex_message_processor
.process_request(codex_request)
.await;
} else {
let error = JSONRPCErrorError {
code: INVALID_REQUEST_ERROR_CODE,
message: "Invalid request".to_string(),
data: None,
};
self.outgoing.send_error(request_id, error).await;
}
self.codex_message_processor
.process_request(codex_request)
.await;
}
pub(crate) async fn process_notification(&self, notification: JSONRPCNotification) {

View File

@@ -1,11 +1,12 @@
use codex_app_server_protocol::AuthMode;
use codex_app_server_protocol::Model;
use codex_app_server_protocol::ReasoningEffortOption;
use codex_common::model_presets::ModelPreset;
use codex_common::model_presets::ReasoningEffortPreset;
use codex_common::model_presets::builtin_model_presets;
pub fn supported_models() -> Vec<Model> {
builtin_model_presets(None)
pub fn supported_models(auth_mode: Option<AuthMode>) -> Vec<Model> {
builtin_model_presets(auth_mode)
.into_iter()
.map(model_from_preset)
.collect()

View File

@@ -141,9 +141,13 @@ pub(crate) struct OutgoingError {
#[cfg(test)]
mod tests {
use codex_app_server_protocol::AccountLoginCompletedNotification;
use codex_app_server_protocol::AccountRateLimitsUpdatedNotification;
use codex_app_server_protocol::AccountUpdatedNotification;
use codex_app_server_protocol::AuthMode;
use codex_app_server_protocol::LoginChatGptCompleteNotification;
use codex_protocol::protocol::RateLimitSnapshot;
use codex_protocol::protocol::RateLimitWindow;
use codex_app_server_protocol::RateLimitSnapshot;
use codex_app_server_protocol::RateLimitWindow;
use pretty_assertions::assert_eq;
use serde_json::json;
use uuid::Uuid;
@@ -176,27 +180,77 @@ mod tests {
}
#[test]
fn verify_account_rate_limits_notification_serialization() {
let notification = ServerNotification::AccountRateLimitsUpdated(RateLimitSnapshot {
primary: Some(RateLimitWindow {
used_percent: 25.0,
window_minutes: Some(15),
resets_at: Some(123),
fn verify_account_login_completed_notification_serialization() {
let notification =
ServerNotification::AccountLoginCompleted(AccountLoginCompletedNotification {
login_id: Some(Uuid::nil().to_string()),
success: true,
error: None,
});
let jsonrpc_notification = OutgoingMessage::AppServerNotification(notification);
assert_eq!(
json!({
"method": "account/login/completed",
"params": {
"loginId": Uuid::nil().to_string(),
"success": true,
"error": null,
},
}),
secondary: None,
});
serde_json::to_value(jsonrpc_notification)
.expect("ensure the notification serializes correctly"),
"ensure the notification serializes correctly"
);
}
#[test]
fn verify_account_rate_limits_notification_serialization() {
let notification =
ServerNotification::AccountRateLimitsUpdated(AccountRateLimitsUpdatedNotification {
rate_limits: RateLimitSnapshot {
primary: Some(RateLimitWindow {
used_percent: 25,
window_duration_mins: Some(15),
resets_at: Some(123),
}),
secondary: None,
},
});
let jsonrpc_notification = OutgoingMessage::AppServerNotification(notification);
assert_eq!(
json!({
"method": "account/rateLimits/updated",
"params": {
"primary": {
"used_percent": 25.0,
"window_minutes": 15,
"resets_at": 123,
},
"secondary": null,
"rateLimits": {
"primary": {
"usedPercent": 25,
"windowDurationMins": 15,
"resetsAt": 123
},
"secondary": null
}
},
}),
serde_json::to_value(jsonrpc_notification)
.expect("ensure the notification serializes correctly"),
"ensure the notification serializes correctly"
);
}
#[test]
fn verify_account_updated_notification_serialization() {
let notification = ServerNotification::AccountUpdated(AccountUpdatedNotification {
auth_mode: Some(AuthMode::ApiKey),
});
let jsonrpc_notification = OutgoingMessage::AppServerNotification(notification);
assert_eq!(
json!({
"method": "account/updated",
"params": {
"authMode": "apikey"
},
}),
serde_json::to_value(jsonrpc_notification)

View File

@@ -13,6 +13,7 @@ base64 = { workspace = true }
chrono = { workspace = true }
codex-app-server-protocol = { workspace = true }
codex-core = { workspace = true }
codex-protocol = { workspace = true }
serde = { workspace = true }
serde_json = { workspace = true }
tokio = { workspace = true, features = [
@@ -21,4 +22,5 @@ tokio = { workspace = true, features = [
"process",
"rt-multi-thread",
] }
uuid = { workspace = true }
wiremock = { workspace = true }

View File

@@ -2,6 +2,7 @@ mod auth_fixtures;
mod mcp_process;
mod mock_model_server;
mod responses;
mod rollout;
pub use auth_fixtures::ChatGptAuthFixture;
pub use auth_fixtures::ChatGptIdTokenClaims;
@@ -10,9 +11,11 @@ pub use auth_fixtures::write_chatgpt_auth;
use codex_app_server_protocol::JSONRPCResponse;
pub use mcp_process::McpProcess;
pub use mock_model_server::create_mock_chat_completions_server;
pub use mock_model_server::create_mock_chat_completions_server_unchecked;
pub use responses::create_apply_patch_sse_response;
pub use responses::create_final_assistant_message_sse_response;
pub use responses::create_shell_sse_response;
pub use rollout::create_fake_rollout;
use serde::de::DeserializeOwned;
pub fn to_response<T: DeserializeOwned>(response: JSONRPCResponse) -> anyhow::Result<T> {

View File

@@ -14,30 +14,36 @@ use anyhow::Context;
use assert_cmd::prelude::*;
use codex_app_server_protocol::AddConversationListenerParams;
use codex_app_server_protocol::ArchiveConversationParams;
use codex_app_server_protocol::CancelLoginAccountParams;
use codex_app_server_protocol::CancelLoginChatGptParams;
use codex_app_server_protocol::ClientInfo;
use codex_app_server_protocol::ClientNotification;
use codex_app_server_protocol::FeedbackUploadParams;
use codex_app_server_protocol::GetAuthStatusParams;
use codex_app_server_protocol::InitializeParams;
use codex_app_server_protocol::InterruptConversationParams;
use codex_app_server_protocol::ListConversationsParams;
use codex_app_server_protocol::ListModelsParams;
use codex_app_server_protocol::LoginApiKeyParams;
use codex_app_server_protocol::NewConversationParams;
use codex_app_server_protocol::RemoveConversationListenerParams;
use codex_app_server_protocol::ResumeConversationParams;
use codex_app_server_protocol::SendUserMessageParams;
use codex_app_server_protocol::SendUserTurnParams;
use codex_app_server_protocol::ServerRequest;
use codex_app_server_protocol::SetDefaultModelParams;
use codex_app_server_protocol::UploadFeedbackParams;
use codex_app_server_protocol::JSONRPCError;
use codex_app_server_protocol::JSONRPCMessage;
use codex_app_server_protocol::JSONRPCNotification;
use codex_app_server_protocol::JSONRPCRequest;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::ListConversationsParams;
use codex_app_server_protocol::LoginApiKeyParams;
use codex_app_server_protocol::ModelListParams;
use codex_app_server_protocol::NewConversationParams;
use codex_app_server_protocol::RemoveConversationListenerParams;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::ResumeConversationParams;
use codex_app_server_protocol::SendUserMessageParams;
use codex_app_server_protocol::SendUserTurnParams;
use codex_app_server_protocol::ServerRequest;
use codex_app_server_protocol::SetDefaultModelParams;
use codex_app_server_protocol::ThreadArchiveParams;
use codex_app_server_protocol::ThreadListParams;
use codex_app_server_protocol::ThreadResumeParams;
use codex_app_server_protocol::ThreadStartParams;
use codex_app_server_protocol::TurnInterruptParams;
use codex_app_server_protocol::TurnStartParams;
use std::process::Command as StdCommand;
use tokio::process::Command;
@@ -244,9 +250,9 @@ impl McpProcess {
}
/// Send a `feedback/upload` JSON-RPC request.
pub async fn send_upload_feedback_request(
pub async fn send_feedback_upload_request(
&mut self,
params: UploadFeedbackParams,
params: FeedbackUploadParams,
) -> anyhow::Result<i64> {
let params = Some(serde_json::to_value(params)?);
self.send_request("feedback/upload", params).await
@@ -275,10 +281,46 @@ impl McpProcess {
self.send_request("listConversations", params).await
}
/// Send a `thread/start` JSON-RPC request.
pub async fn send_thread_start_request(
&mut self,
params: ThreadStartParams,
) -> anyhow::Result<i64> {
let params = Some(serde_json::to_value(params)?);
self.send_request("thread/start", params).await
}
/// Send a `thread/resume` JSON-RPC request.
pub async fn send_thread_resume_request(
&mut self,
params: ThreadResumeParams,
) -> anyhow::Result<i64> {
let params = Some(serde_json::to_value(params)?);
self.send_request("thread/resume", params).await
}
/// Send a `thread/archive` JSON-RPC request.
pub async fn send_thread_archive_request(
&mut self,
params: ThreadArchiveParams,
) -> anyhow::Result<i64> {
let params = Some(serde_json::to_value(params)?);
self.send_request("thread/archive", params).await
}
/// Send a `thread/list` JSON-RPC request.
pub async fn send_thread_list_request(
&mut self,
params: ThreadListParams,
) -> anyhow::Result<i64> {
let params = Some(serde_json::to_value(params)?);
self.send_request("thread/list", params).await
}
/// Send a `model/list` JSON-RPC request.
pub async fn send_list_models_request(
&mut self,
params: ListModelsParams,
params: ModelListParams,
) -> anyhow::Result<i64> {
let params = Some(serde_json::to_value(params)?);
self.send_request("model/list", params).await
@@ -307,6 +349,24 @@ impl McpProcess {
self.send_request("loginChatGpt", None).await
}
/// Send a `turn/start` JSON-RPC request (v2).
pub async fn send_turn_start_request(
&mut self,
params: TurnStartParams,
) -> anyhow::Result<i64> {
let params = Some(serde_json::to_value(params)?);
self.send_request("turn/start", params).await
}
/// Send a `turn/interrupt` JSON-RPC request (v2).
pub async fn send_turn_interrupt_request(
&mut self,
params: TurnInterruptParams,
) -> anyhow::Result<i64> {
let params = Some(serde_json::to_value(params)?);
self.send_request("turn/interrupt", params).await
}
/// Send a `cancelLoginChatGpt` JSON-RPC request.
pub async fn send_cancel_login_chat_gpt_request(
&mut self,
@@ -321,6 +381,40 @@ impl McpProcess {
self.send_request("logoutChatGpt", None).await
}
/// Send an `account/logout` JSON-RPC request.
pub async fn send_logout_account_request(&mut self) -> anyhow::Result<i64> {
self.send_request("account/logout", None).await
}
/// Send an `account/login/start` JSON-RPC request for API key login.
pub async fn send_login_account_api_key_request(
&mut self,
api_key: &str,
) -> anyhow::Result<i64> {
let params = serde_json::json!({
"type": "apiKey",
"apiKey": api_key,
});
self.send_request("account/login/start", Some(params)).await
}
/// Send an `account/login/start` JSON-RPC request for ChatGPT login.
pub async fn send_login_account_chatgpt_request(&mut self) -> anyhow::Result<i64> {
let params = serde_json::json!({
"type": "chatgpt"
});
self.send_request("account/login/start", Some(params)).await
}
/// Send an `account/login/cancel` JSON-RPC request.
pub async fn send_cancel_login_account_request(
&mut self,
params: CancelLoginAccountParams,
) -> anyhow::Result<i64> {
let params = Some(serde_json::to_value(params)?);
self.send_request("account/login/cancel", params).await
}
/// Send a `fuzzyFileSearch` JSON-RPC request.
pub async fn send_fuzzy_file_search_request(
&mut self,

View File

@@ -29,6 +29,25 @@ pub async fn create_mock_chat_completions_server(responses: Vec<String>) -> Mock
server
}
/// Same as `create_mock_chat_completions_server` but does not enforce an
/// expectation on the number of calls.
pub async fn create_mock_chat_completions_server_unchecked(responses: Vec<String>) -> MockServer {
let server = MockServer::start().await;
let seq_responder = SeqResponder {
num_calls: AtomicUsize::new(0),
responses,
};
Mock::given(method("POST"))
.and(path("/v1/chat/completions"))
.respond_with(seq_responder)
.mount(&server)
.await;
server
}
struct SeqResponder {
num_calls: AtomicUsize,
responses: Vec<String>,

View File

@@ -0,0 +1,82 @@
use anyhow::Result;
use codex_protocol::ConversationId;
use codex_protocol::protocol::SessionMeta;
use codex_protocol::protocol::SessionSource;
use serde_json::json;
use std::fs;
use std::path::Path;
use std::path::PathBuf;
use uuid::Uuid;
/// Create a minimal rollout file under `CODEX_HOME/sessions/YYYY/MM/DD/`.
///
/// - `filename_ts` is the filename timestamp component in `YYYY-MM-DDThh-mm-ss` format.
/// - `meta_rfc3339` is the envelope timestamp used in JSON lines.
/// - `preview` is the user message preview text.
/// - `model_provider` optionally sets the provider in the session meta payload.
///
/// Returns the generated conversation/session UUID as a string.
pub fn create_fake_rollout(
codex_home: &Path,
filename_ts: &str,
meta_rfc3339: &str,
preview: &str,
model_provider: Option<&str>,
) -> Result<String> {
let uuid = Uuid::new_v4();
let uuid_str = uuid.to_string();
let conversation_id = ConversationId::from_string(&uuid_str)?;
// sessions/YYYY/MM/DD derived from filename_ts (YYYY-MM-DDThh-mm-ss)
let year = &filename_ts[0..4];
let month = &filename_ts[5..7];
let day = &filename_ts[8..10];
let dir = codex_home.join("sessions").join(year).join(month).join(day);
fs::create_dir_all(&dir)?;
let file_path = dir.join(format!("rollout-{filename_ts}-{uuid}.jsonl"));
// Build JSONL lines
let payload = serde_json::to_value(SessionMeta {
id: conversation_id,
timestamp: meta_rfc3339.to_string(),
cwd: PathBuf::from("/"),
originator: "codex".to_string(),
cli_version: "0.0.0".to_string(),
instructions: None,
source: SessionSource::Cli,
model_provider: model_provider.map(str::to_string),
})?;
let lines = [
json!({
"timestamp": meta_rfc3339,
"type": "session_meta",
"payload": payload
})
.to_string(),
json!({
"timestamp": meta_rfc3339,
"type":"response_item",
"payload": {
"type":"message",
"role":"user",
"content":[{"type":"input_text","text": preview}]
}
})
.to_string(),
json!({
"timestamp": meta_rfc3339,
"type":"event_msg",
"payload": {
"type":"user_message",
"message": preview,
"kind": "plain"
}
})
.to_string(),
];
fs::write(file_path, lines.join("\n") + "\n")?;
Ok(uuid_str)
}
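A sketch of how a test might call this helper (timestamps, preview, and provider are illustrative; assumes `tempfile` as used elsewhere in these tests):

```rust
let codex_home = tempfile::tempdir()?;
let conversation_id = create_fake_rollout(
    codex_home.path(),
    "2025-01-02T03-04-05",  // filename timestamp component
    "2025-01-02T03:04:05Z", // envelope timestamp
    "hello from a fixture", // user message preview
    Some("mock_provider"),  // provider recorded in session meta
)?;
```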

View File

@@ -12,7 +12,7 @@ use std::path::Path;
use tempfile::TempDir;
use tokio::time::timeout;
const DEFAULT_READ_TIMEOUT: std::time::Duration = std::time::Duration::from_secs(10);
const DEFAULT_READ_TIMEOUT: std::time::Duration = std::time::Duration::from_secs(20);
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn archive_conversation_moves_rollout_into_archived_directory() -> Result<()> {

View File

@@ -146,7 +146,7 @@ fn create_config_toml(codex_home: &Path, server_uri: String) -> std::io::Result<
r#"
model = "mock-model"
approval_policy = "never"
sandbox_mode = "danger-full-access"
sandbox_mode = "read-only"
model_provider = "mock_provider"

View File

@@ -1,5 +1,6 @@
use anyhow::Result;
use app_test_support::McpProcess;
use app_test_support::create_fake_rollout;
use app_test_support::to_response;
use codex_app_server_protocol::JSONRPCNotification;
use codex_app_server_protocol::JSONRPCResponse;
@@ -15,12 +16,8 @@ use codex_core::protocol::EventMsg;
use codex_protocol::models::ContentItem;
use codex_protocol::models::ResponseItem;
use pretty_assertions::assert_eq;
use serde_json::json;
use std::fs;
use std::path::Path;
use tempfile::TempDir;
use tokio::time::timeout;
use uuid::Uuid;
const DEFAULT_READ_TIMEOUT: std::time::Duration = std::time::Duration::from_secs(10);
@@ -357,70 +354,3 @@ async fn test_list_and_resume_conversations() -> Result<()> {
Ok(())
}
fn create_fake_rollout(
codex_home: &Path,
filename_ts: &str,
meta_rfc3339: &str,
preview: &str,
model_provider: Option<&str>,
) -> Result<()> {
let uuid = Uuid::new_v4();
// sessions/YYYY/MM/DD/ derived from filename_ts (YYYY-MM-DDThh-mm-ss)
let year = &filename_ts[0..4];
let month = &filename_ts[5..7];
let day = &filename_ts[8..10];
let dir = codex_home.join("sessions").join(year).join(month).join(day);
fs::create_dir_all(&dir)?;
let file_path = dir.join(format!("rollout-{filename_ts}-{uuid}.jsonl"));
let mut lines = Vec::new();
// Meta line with timestamp (flattened meta in payload for new schema)
let mut payload = json!({
"id": uuid,
"timestamp": meta_rfc3339,
"cwd": "/",
"originator": "codex",
"cli_version": "0.0.0",
"instructions": null,
});
if let Some(provider) = model_provider {
payload["model_provider"] = json!(provider);
}
lines.push(
json!({
"timestamp": meta_rfc3339,
"type": "session_meta",
"payload": payload
})
.to_string(),
);
// Minimal user message entry as a persisted response item (with envelope timestamp)
lines.push(
json!({
"timestamp": meta_rfc3339,
"type":"response_item",
"payload": {
"type":"message",
"role":"user",
"content":[{"type":"input_text","text": preview}]
}
})
.to_string(),
);
// Add a matching user message event line to satisfy filters
lines.push(
json!({
"timestamp": meta_rfc3339,
"type":"event_msg",
"payload": {
"type":"user_message",
"message": preview,
"kind": "plain"
}
})
.to_string(),
);
fs::write(file_path, lines.join("\n") + "\n")?;
Ok(())
}

View File

@@ -13,3 +13,4 @@ mod send_message;
mod set_default_model;
mod user_agent;
mod user_info;
mod v2;

View File

@@ -6,9 +6,9 @@ use app_test_support::McpProcess;
use app_test_support::to_response;
use codex_app_server_protocol::JSONRPCError;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::ListModelsParams;
use codex_app_server_protocol::ListModelsResponse;
use codex_app_server_protocol::Model;
use codex_app_server_protocol::ModelListParams;
use codex_app_server_protocol::ModelListResponse;
use codex_app_server_protocol::ReasoningEffortOption;
use codex_app_server_protocol::RequestId;
use codex_protocol::config_types::ReasoningEffort;
@@ -27,8 +27,8 @@ async fn list_models_returns_all_models_with_large_limit() -> Result<()> {
timeout(DEFAULT_TIMEOUT, mcp.initialize()).await??;
let request_id = mcp
.send_list_models_request(ListModelsParams {
page_size: Some(100),
.send_list_models_request(ModelListParams {
limit: Some(100),
cursor: None,
})
.await?;
@@ -39,14 +39,17 @@ async fn list_models_returns_all_models_with_large_limit() -> Result<()> {
)
.await??;
let ListModelsResponse { items, next_cursor } = to_response::<ListModelsResponse>(response)?;
let ModelListResponse {
data: items,
next_cursor,
} = to_response::<ModelListResponse>(response)?;
let expected_models = vec![
Model {
id: "gpt-5-codex".to_string(),
model: "gpt-5-codex".to_string(),
display_name: "gpt-5-codex".to_string(),
description: "Optimized for coding tasks with many tools.".to_string(),
description: "Optimized for codex.".to_string(),
supported_reasoning_efforts: vec![
ReasoningEffortOption {
reasoning_effort: ReasoningEffort::Low,
@@ -111,8 +114,8 @@ async fn list_models_pagination_works() -> Result<()> {
timeout(DEFAULT_TIMEOUT, mcp.initialize()).await??;
let first_request = mcp
.send_list_models_request(ListModelsParams {
page_size: Some(1),
.send_list_models_request(ModelListParams {
limit: Some(1),
cursor: None,
})
.await?;
@@ -123,18 +126,18 @@ async fn list_models_pagination_works() -> Result<()> {
)
.await??;
let ListModelsResponse {
items: first_items,
let ModelListResponse {
data: first_items,
next_cursor: first_cursor,
} = to_response::<ListModelsResponse>(first_response)?;
} = to_response::<ModelListResponse>(first_response)?;
assert_eq!(first_items.len(), 1);
assert_eq!(first_items[0].id, "gpt-5-codex");
let next_cursor = first_cursor.ok_or_else(|| anyhow!("cursor for second page"))?;
let second_request = mcp
.send_list_models_request(ListModelsParams {
page_size: Some(1),
.send_list_models_request(ModelListParams {
limit: Some(1),
cursor: Some(next_cursor.clone()),
})
.await?;
@@ -145,10 +148,10 @@ async fn list_models_pagination_works() -> Result<()> {
)
.await??;
let ListModelsResponse {
items: second_items,
let ModelListResponse {
data: second_items,
next_cursor: second_cursor,
} = to_response::<ListModelsResponse>(second_response)?;
} = to_response::<ModelListResponse>(second_response)?;
assert_eq!(second_items.len(), 1);
assert_eq!(second_items[0].id, "gpt-5");
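For orientation, a minimal sketch of draining every page with the renamed cursor API; the helper names come from the surrounding tests, and the `Option<String>` cursor type is an assumption:

```
// Walk pages of size 1 until the server returns a null nextCursor.
let mut cursor: Option<String> = None;
let mut models = Vec::new();
loop {
    let id = mcp
        .send_list_models_request(ModelListParams { limit: Some(1), cursor: cursor.clone() })
        .await?;
    let resp: JSONRPCResponse = timeout(
        DEFAULT_TIMEOUT,
        mcp.read_stream_until_response_message(RequestId::Integer(id)),
    )
    .await??;
    let ModelListResponse { data, next_cursor } = to_response::<ModelListResponse>(resp)?;
    models.extend(data);
    match next_cursor {
        Some(next) => cursor = Some(next), // more pages remain
        None => break,                     // null nextCursor marks the last page
    }
}
```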
@@ -164,8 +167,8 @@ async fn list_models_rejects_invalid_cursor() -> Result<()> {
timeout(DEFAULT_TIMEOUT, mcp.initialize()).await??;
let request_id = mcp
.send_list_models_request(ListModelsParams {
page_size: None,
.send_list_models_request(ModelListParams {
limit: None,
cursor: Some("invalid".to_string()),
})
.await?;

View File

@@ -7,10 +7,10 @@ use codex_app_server_protocol::GetAccountRateLimitsResponse;
use codex_app_server_protocol::JSONRPCError;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::LoginApiKeyParams;
use codex_app_server_protocol::RateLimitSnapshot;
use codex_app_server_protocol::RateLimitWindow;
use codex_app_server_protocol::RequestId;
use codex_core::auth::AuthCredentialsStoreMode;
use codex_protocol::protocol::RateLimitSnapshot;
use codex_protocol::protocol::RateLimitWindow;
use pretty_assertions::assert_eq;
use serde_json::json;
use std::path::Path;
@@ -143,13 +143,13 @@ async fn get_account_rate_limits_returns_snapshot() -> Result<()> {
let expected = GetAccountRateLimitsResponse {
rate_limits: RateLimitSnapshot {
primary: Some(RateLimitWindow {
used_percent: 42.0,
window_minutes: Some(60),
used_percent: 42,
window_duration_mins: Some(60),
resets_at: Some(primary_reset_timestamp),
}),
secondary: Some(RateLimitWindow {
used_percent: 5.0,
window_minutes: Some(1440),
used_percent: 5,
window_duration_mins: Some(1440),
resets_at: Some(secondary_reset_timestamp),
}),
},

View File

@@ -0,0 +1,309 @@
use anyhow::Result;
use anyhow::bail;
use app_test_support::McpProcess;
use app_test_support::to_response;
use codex_app_server_protocol::AuthMode;
use codex_app_server_protocol::CancelLoginAccountParams;
use codex_app_server_protocol::CancelLoginAccountResponse;
use codex_app_server_protocol::GetAuthStatusParams;
use codex_app_server_protocol::GetAuthStatusResponse;
use codex_app_server_protocol::JSONRPCError;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::LoginAccountResponse;
use codex_app_server_protocol::LogoutAccountResponse;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::ServerNotification;
use codex_core::auth::AuthCredentialsStoreMode;
use codex_login::login_with_api_key;
use pretty_assertions::assert_eq;
use serial_test::serial;
use std::path::Path;
use std::time::Duration;
use tempfile::TempDir;
use tokio::time::timeout;
const DEFAULT_READ_TIMEOUT: std::time::Duration = std::time::Duration::from_secs(10);
// Helper to create a minimal config.toml for the app server
fn create_config_toml(
codex_home: &Path,
forced_method: Option<&str>,
forced_workspace_id: Option<&str>,
) -> std::io::Result<()> {
let config_toml = codex_home.join("config.toml");
let forced_line = if let Some(method) = forced_method {
format!("forced_login_method = \"{method}\"\n")
} else {
String::new()
};
let forced_workspace_line = if let Some(ws) = forced_workspace_id {
format!("forced_chatgpt_workspace_id = \"{ws}\"\n")
} else {
String::new()
};
let contents = format!(
r#"
model = "mock-model"
approval_policy = "never"
sandbox_mode = "danger-full-access"
{forced_line}
{forced_workspace_line}
model_provider = "mock_provider"
[model_providers.mock_provider]
name = "Mock provider for test"
base_url = "http://127.0.0.1:0/v1"
wire_api = "chat"
request_max_retries = 0
stream_max_retries = 0
"#
);
std::fs::write(config_toml, contents)
}
#[tokio::test]
async fn logout_account_removes_auth_and_notifies() -> Result<()> {
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), None, None)?;
login_with_api_key(
codex_home.path(),
"sk-test-key",
AuthCredentialsStoreMode::File,
)?;
assert!(codex_home.path().join("auth.json").exists());
let mut mcp = McpProcess::new_with_env(codex_home.path(), &[("OPENAI_API_KEY", None)]).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
let id = mcp.send_logout_account_request().await?;
let resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(id)),
)
.await??;
let _ok: LogoutAccountResponse = to_response(resp)?;
let note = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("account/updated"),
)
.await??;
let parsed: ServerNotification = note.try_into()?;
let ServerNotification::AccountUpdated(payload) = parsed else {
bail!("unexpected notification: {parsed:?}");
};
assert!(
payload.auth_mode.is_none(),
"auth_method should be None after logout"
);
assert!(
!codex_home.path().join("auth.json").exists(),
"auth.json should be deleted"
);
let status_id = mcp
.send_get_auth_status_request(GetAuthStatusParams {
include_token: Some(true),
refresh_token: Some(false),
})
.await?;
let status_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(status_id)),
)
.await??;
let status: GetAuthStatusResponse = to_response(status_resp)?;
assert_eq!(status.auth_method, None);
assert_eq!(status.auth_token, None);
Ok(())
}
#[tokio::test]
async fn login_account_api_key_succeeds_and_notifies() -> Result<()> {
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), None, None)?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
let req_id = mcp
.send_login_account_api_key_request("sk-test-key")
.await?;
let resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(req_id)),
)
.await??;
let login: LoginAccountResponse = to_response(resp)?;
assert_eq!(login, LoginAccountResponse::ApiKey {});
let note = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("account/login/completed"),
)
.await??;
let parsed: ServerNotification = note.try_into()?;
let ServerNotification::AccountLoginCompleted(payload) = parsed else {
bail!("unexpected notification: {parsed:?}");
};
pretty_assertions::assert_eq!(payload.login_id, None);
pretty_assertions::assert_eq!(payload.success, true);
pretty_assertions::assert_eq!(payload.error, None);
let note = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("account/updated"),
)
.await??;
let parsed: ServerNotification = note.try_into()?;
let ServerNotification::AccountUpdated(payload) = parsed else {
bail!("unexpected notification: {parsed:?}");
};
pretty_assertions::assert_eq!(payload.auth_mode, Some(AuthMode::ApiKey));
assert!(codex_home.path().join("auth.json").exists());
Ok(())
}
#[tokio::test]
async fn login_account_api_key_rejected_when_forced_chatgpt() -> Result<()> {
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), Some("chatgpt"), None)?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
let request_id = mcp
.send_login_account_api_key_request("sk-test-key")
.await?;
let err: JSONRPCError = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_error_message(RequestId::Integer(request_id)),
)
.await??;
assert_eq!(
err.error.message,
"API key login is disabled. Use ChatGPT login instead."
);
Ok(())
}
#[tokio::test]
async fn login_account_chatgpt_rejected_when_forced_api() -> Result<()> {
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), Some("api"), None)?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
let request_id = mcp.send_login_account_chatgpt_request().await?;
let err: JSONRPCError = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_error_message(RequestId::Integer(request_id)),
)
.await??;
assert_eq!(
err.error.message,
"ChatGPT login is disabled. Use API key login instead."
);
Ok(())
}
#[tokio::test]
// Serialize tests that launch the login server since it binds to a fixed port.
#[serial(login_port)]
async fn login_account_chatgpt_start() -> Result<()> {
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), None, None)?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
let request_id = mcp.send_login_account_chatgpt_request().await?;
let resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(request_id)),
)
.await??;
let login: LoginAccountResponse = to_response(resp)?;
let LoginAccountResponse::Chatgpt { login_id, auth_url } = login else {
bail!("unexpected login response: {login:?}");
};
assert!(
auth_url.contains("redirect_uri=http%3A%2F%2Flocalhost"),
"auth_url should contain a redirect_uri to localhost"
);
let cancel_id = mcp
.send_cancel_login_account_request(CancelLoginAccountParams {
login_id: login_id.clone(),
})
.await?;
let cancel_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(cancel_id)),
)
.await??;
let _ok: CancelLoginAccountResponse = to_response(cancel_resp)?;
let note = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("account/login/completed"),
)
.await??;
let parsed: ServerNotification = note.try_into()?;
let ServerNotification::AccountLoginCompleted(payload) = parsed else {
bail!("unexpected notification: {parsed:?}");
};
pretty_assertions::assert_eq!(payload.login_id, Some(login_id));
pretty_assertions::assert_eq!(payload.success, false);
assert!(
payload.error.is_some(),
"expected a non-empty error on cancel"
);
let maybe_updated = timeout(
Duration::from_millis(500),
mcp.read_stream_until_notification_message("account/updated"),
)
.await;
assert!(
maybe_updated.is_err(),
"account/updated should not be emitted when login is cancelled"
);
Ok(())
}
#[tokio::test]
// Serialize tests that launch the login server since it binds to a fixed port.
#[serial(login_port)]
async fn login_account_chatgpt_includes_forced_workspace_query_param() -> Result<()> {
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), None, Some("ws-forced"))?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
let request_id = mcp.send_login_account_chatgpt_request().await?;
let resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(request_id)),
)
.await??;
let login: LoginAccountResponse = to_response(resp)?;
let LoginAccountResponse::Chatgpt { auth_url, .. } = login else {
bail!("unexpected login response: {login:?}");
};
assert!(
auth_url.contains("allowed_workspace_id=ws-forced"),
"auth URL should include forced workspace"
);
Ok(())
}

View File

@@ -0,0 +1,7 @@
mod account;
mod thread_archive;
mod thread_list;
mod thread_resume;
mod thread_start;
mod turn_interrupt;
mod turn_start;

View File

@@ -0,0 +1,93 @@
use anyhow::Result;
use app_test_support::McpProcess;
use app_test_support::to_response;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::ThreadArchiveParams;
use codex_app_server_protocol::ThreadArchiveResponse;
use codex_app_server_protocol::ThreadStartParams;
use codex_app_server_protocol::ThreadStartResponse;
use codex_core::ARCHIVED_SESSIONS_SUBDIR;
use codex_core::find_conversation_path_by_id_str;
use std::path::Path;
use tempfile::TempDir;
use tokio::time::timeout;
const DEFAULT_READ_TIMEOUT: std::time::Duration = std::time::Duration::from_secs(10);
#[tokio::test]
async fn thread_archive_moves_rollout_into_archived_directory() -> Result<()> {
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path())?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
// Start a thread.
let start_id = mcp
.send_thread_start_request(ThreadStartParams {
model: Some("mock-model".to_string()),
..Default::default()
})
.await?;
let start_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(start_id)),
)
.await??;
let ThreadStartResponse { thread } = to_response::<ThreadStartResponse>(start_resp)?;
assert!(!thread.id.is_empty());
// Locate the rollout path recorded for this thread id.
let rollout_path = find_conversation_path_by_id_str(codex_home.path(), &thread.id)
.await?
.expect("expected rollout path for thread id to exist");
assert!(
rollout_path.exists(),
"expected {} to exist",
rollout_path.display()
);
// Archive the thread.
let archive_id = mcp
.send_thread_archive_request(ThreadArchiveParams {
thread_id: thread.id.clone(),
})
.await?;
let archive_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(archive_id)),
)
.await??;
let _: ThreadArchiveResponse = to_response::<ThreadArchiveResponse>(archive_resp)?;
// Verify file moved.
let archived_directory = codex_home.path().join(ARCHIVED_SESSIONS_SUBDIR);
// The archived file keeps the original filename (rollout-...-<id>.jsonl).
let archived_rollout_path =
archived_directory.join(rollout_path.file_name().expect("rollout file name"));
assert!(
!rollout_path.exists(),
"expected rollout path {} to be moved",
rollout_path.display()
);
assert!(
archived_rollout_path.exists(),
"expected archived rollout path {} to exist",
archived_rollout_path.display()
);
Ok(())
}
fn create_config_toml(codex_home: &Path) -> std::io::Result<()> {
let config_toml = codex_home.join("config.toml");
std::fs::write(config_toml, config_contents())
}
fn config_contents() -> &'static str {
r#"model = "mock-model"
approval_policy = "never"
sandbox_mode = "read-only"
"#
}

View File

@@ -0,0 +1,205 @@
use anyhow::Result;
use app_test_support::McpProcess;
use app_test_support::create_fake_rollout;
use app_test_support::to_response;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::ThreadListParams;
use codex_app_server_protocol::ThreadListResponse;
use serde_json::json;
use tempfile::TempDir;
use tokio::time::timeout;
use uuid::Uuid;
const DEFAULT_READ_TIMEOUT: std::time::Duration = std::time::Duration::from_secs(10);
#[tokio::test]
async fn thread_list_basic_empty() -> Result<()> {
let codex_home = TempDir::new()?;
create_minimal_config(codex_home.path())?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
// List threads in an empty CODEX_HOME; should return an empty page with nextCursor: null.
let list_id = mcp
.send_thread_list_request(ThreadListParams {
cursor: None,
limit: Some(10),
model_providers: None,
})
.await?;
let list_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(list_id)),
)
.await??;
let ThreadListResponse { data, next_cursor } = to_response::<ThreadListResponse>(list_resp)?;
assert!(data.is_empty());
assert_eq!(next_cursor, None);
Ok(())
}
// Minimal config.toml for listing.
fn create_minimal_config(codex_home: &std::path::Path) -> std::io::Result<()> {
let config_toml = codex_home.join("config.toml");
std::fs::write(
config_toml,
r#"
model = "mock-model"
approval_policy = "never"
"#,
)
}
#[tokio::test]
async fn thread_list_pagination_next_cursor_none_on_last_page() -> Result<()> {
let codex_home = TempDir::new()?;
create_minimal_config(codex_home.path())?;
// Create three rollouts so we can paginate with limit=2.
let _a = create_fake_rollout(
codex_home.path(),
"2025-01-02T12-00-00",
"2025-01-02T12:00:00Z",
"Hello",
Some("mock_provider"),
)?;
let _b = create_fake_rollout(
codex_home.path(),
"2025-01-01T13-00-00",
"2025-01-01T13:00:00Z",
"Hello",
Some("mock_provider"),
)?;
let _c = create_fake_rollout(
codex_home.path(),
"2025-01-01T12-00-00",
"2025-01-01T12:00:00Z",
"Hello",
Some("mock_provider"),
)?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
// Page 1: limit 2 → expect next_cursor Some.
let page1_id = mcp
.send_thread_list_request(ThreadListParams {
cursor: None,
limit: Some(2),
model_providers: Some(vec!["mock_provider".to_string()]),
})
.await?;
let page1_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(page1_id)),
)
.await??;
let ThreadListResponse {
data: data1,
next_cursor: cursor1,
} = to_response::<ThreadListResponse>(page1_resp)?;
assert_eq!(data1.len(), 2);
let cursor1 = cursor1.expect("expected nextCursor on first page");
// Page 2: with cursor → expect next_cursor None when no more results.
let page2_id = mcp
.send_thread_list_request(ThreadListParams {
cursor: Some(cursor1),
limit: Some(2),
model_providers: Some(vec!["mock_provider".to_string()]),
})
.await?;
let page2_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(page2_id)),
)
.await??;
let ThreadListResponse {
data: data2,
next_cursor: cursor2,
} = to_response::<ThreadListResponse>(page2_resp)?;
assert!(data2.len() <= 2);
assert_eq!(cursor2, None, "expected nextCursor to be null on last page");
Ok(())
}
#[tokio::test]
async fn thread_list_respects_provider_filter() -> Result<()> {
let codex_home = TempDir::new()?;
create_minimal_config(codex_home.path())?;
// Create rollouts under two providers.
let _a = create_fake_rollout(
codex_home.path(),
"2025-01-02T10-00-00",
"2025-01-02T10:00:00Z",
"X",
Some("mock_provider"),
)?; // mock_provider
// One rollout with a different provider.
let uuid = Uuid::new_v4();
let dir = codex_home
.path()
.join("sessions")
.join("2025")
.join("01")
.join("02");
std::fs::create_dir_all(&dir)?;
let file_path = dir.join(format!("rollout-2025-01-02T11-00-00-{uuid}.jsonl"));
let lines = [
json!({
"timestamp": "2025-01-02T11:00:00Z",
"type": "session_meta",
"payload": {
"id": uuid,
"timestamp": "2025-01-02T11:00:00Z",
"cwd": "/",
"originator": "codex",
"cli_version": "0.0.0",
"instructions": null,
"source": "vscode",
"model_provider": "other_provider"
}
})
.to_string(),
json!({
"timestamp": "2025-01-02T11:00:00Z",
"type":"response_item",
"payload": {"type":"message","role":"user","content":[{"type":"input_text","text":"X"}]}
})
.to_string(),
json!({
"timestamp": "2025-01-02T11:00:00Z",
"type":"event_msg",
"payload": {"type":"user_message","message":"X","kind":"plain"}
})
.to_string(),
];
std::fs::write(file_path, lines.join("\n") + "\n")?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
// Filter to only other_provider; expect 1 item, nextCursor None.
let list_id = mcp
.send_thread_list_request(ThreadListParams {
cursor: None,
limit: Some(10),
model_providers: Some(vec!["other_provider".to_string()]),
})
.await?;
let resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(list_id)),
)
.await??;
let ThreadListResponse { data, next_cursor } = to_response::<ThreadListResponse>(resp)?;
assert_eq!(data.len(), 1);
assert_eq!(next_cursor, None);
Ok(())
}

View File

@@ -0,0 +1,79 @@
use anyhow::Result;
use app_test_support::McpProcess;
use app_test_support::create_mock_chat_completions_server;
use app_test_support::to_response;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::ThreadResumeParams;
use codex_app_server_protocol::ThreadResumeResponse;
use codex_app_server_protocol::ThreadStartParams;
use codex_app_server_protocol::ThreadStartResponse;
use tempfile::TempDir;
use tokio::time::timeout;
const DEFAULT_READ_TIMEOUT: std::time::Duration = std::time::Duration::from_secs(10);
#[tokio::test]
async fn thread_resume_returns_existing_thread() -> Result<()> {
let server = create_mock_chat_completions_server(vec![]).await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri())?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
// Start a thread.
let start_id = mcp
.send_thread_start_request(ThreadStartParams {
model: Some("gpt-5-codex".to_string()),
..Default::default()
})
.await?;
let start_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(start_id)),
)
.await??;
let ThreadStartResponse { thread } = to_response::<ThreadStartResponse>(start_resp)?;
// Resume it via v2 API.
let resume_id = mcp
.send_thread_resume_request(ThreadResumeParams {
thread_id: thread.id.clone(),
})
.await?;
let resume_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(resume_id)),
)
.await??;
let ThreadResumeResponse { thread: resumed } =
to_response::<ThreadResumeResponse>(resume_resp)?;
assert_eq!(resumed.id, thread.id);
Ok(())
}
// Helper to create a config.toml pointing at the mock model server.
fn create_config_toml(codex_home: &std::path::Path, server_uri: &str) -> std::io::Result<()> {
let config_toml = codex_home.join("config.toml");
std::fs::write(
config_toml,
format!(
r#"
model = "mock-model"
approval_policy = "never"
sandbox_mode = "read-only"
model_provider = "mock_provider"
[model_providers.mock_provider]
name = "Mock provider for test"
base_url = "{server_uri}/v1"
wire_api = "chat"
request_max_retries = 0
stream_max_retries = 0
"#
),
)
}

View File

@@ -0,0 +1,81 @@
use anyhow::Result;
use app_test_support::McpProcess;
use app_test_support::create_mock_chat_completions_server;
use app_test_support::to_response;
use codex_app_server_protocol::JSONRPCNotification;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::ThreadStartParams;
use codex_app_server_protocol::ThreadStartResponse;
use codex_app_server_protocol::ThreadStartedNotification;
use std::path::Path;
use tempfile::TempDir;
use tokio::time::timeout;
const DEFAULT_READ_TIMEOUT: std::time::Duration = std::time::Duration::from_secs(10);
#[tokio::test]
async fn thread_start_creates_thread_and_emits_started() -> Result<()> {
// Provide a mock server and config so model wiring is valid.
let server = create_mock_chat_completions_server(vec![]).await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri())?;
// Start server and initialize.
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
// Start a v2 thread with an explicit model override.
let req_id = mcp
.send_thread_start_request(ThreadStartParams {
model: Some("gpt-5".to_string()),
..Default::default()
})
.await?;
// Expect a proper JSON-RPC response with a thread id.
let resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(req_id)),
)
.await??;
let ThreadStartResponse { thread } = to_response::<ThreadStartResponse>(resp)?;
assert!(!thread.id.is_empty(), "thread id should not be empty");
// A corresponding thread/started notification should arrive.
let notif: JSONRPCNotification = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("thread/started"),
)
.await??;
let started: ThreadStartedNotification =
serde_json::from_value(notif.params.expect("params must be present"))?;
assert_eq!(started.thread.id, thread.id);
Ok(())
}
// Helper to create a config.toml pointing at the mock model server.
fn create_config_toml(codex_home: &Path, server_uri: &str) -> std::io::Result<()> {
let config_toml = codex_home.join("config.toml");
std::fs::write(
config_toml,
format!(
r#"
model = "mock-model"
approval_policy = "never"
sandbox_mode = "read-only"
model_provider = "mock_provider"
[model_providers.mock_provider]
name = "Mock provider for test"
base_url = "{server_uri}/v1"
wire_api = "chat"
request_max_retries = 0
stream_max_retries = 0
"#
),
)
}

View File

@@ -0,0 +1,128 @@
#![cfg(unix)]
use anyhow::Result;
use app_test_support::McpProcess;
use app_test_support::create_mock_chat_completions_server;
use app_test_support::create_shell_sse_response;
use app_test_support::to_response;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::ThreadStartParams;
use codex_app_server_protocol::ThreadStartResponse;
use codex_app_server_protocol::TurnInterruptParams;
use codex_app_server_protocol::TurnInterruptResponse;
use codex_app_server_protocol::TurnStartParams;
use codex_app_server_protocol::TurnStartResponse;
use codex_app_server_protocol::UserInput as V2UserInput;
use tempfile::TempDir;
use tokio::time::timeout;
const DEFAULT_READ_TIMEOUT: std::time::Duration = std::time::Duration::from_secs(10);
#[tokio::test]
async fn turn_interrupt_aborts_running_turn() -> Result<()> {
// Use a portable sleep command to keep the turn running.
#[cfg(target_os = "windows")]
let shell_command = vec![
"powershell".to_string(),
"-Command".to_string(),
"Start-Sleep -Seconds 10".to_string(),
];
#[cfg(not(target_os = "windows"))]
let shell_command = vec!["sleep".to_string(), "10".to_string()];
let tmp = TempDir::new()?;
let codex_home = tmp.path().join("codex_home");
std::fs::create_dir(&codex_home)?;
let working_directory = tmp.path().join("workdir");
std::fs::create_dir(&working_directory)?;
// Mock server: a long-running shell command; nothing else is needed after the abort.
let server = create_mock_chat_completions_server(vec![create_shell_sse_response(
shell_command.clone(),
Some(&working_directory),
Some(10_000),
"call_sleep",
)?])
.await;
create_config_toml(&codex_home, &server.uri())?;
let mut mcp = McpProcess::new(&codex_home).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
// Start a v2 thread and capture its id.
let thread_req = mcp
.send_thread_start_request(ThreadStartParams {
model: Some("mock-model".to_string()),
..Default::default()
})
.await?;
let thread_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(thread_req)),
)
.await??;
let ThreadStartResponse { thread } = to_response::<ThreadStartResponse>(thread_resp)?;
// Start a turn that triggers a long-running command.
let turn_req = mcp
.send_turn_start_request(TurnStartParams {
thread_id: thread.id.clone(),
input: vec![V2UserInput::Text {
text: "run sleep".to_string(),
}],
cwd: Some(working_directory.clone()),
..Default::default()
})
.await?;
let turn_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(turn_req)),
)
.await??;
let TurnStartResponse { turn } = to_response::<TurnStartResponse>(turn_resp)?;
// Give the command a brief moment to start.
tokio::time::sleep(std::time::Duration::from_secs(1)).await;
// Interrupt the in-progress turn by id (v2 API).
let interrupt_id = mcp
.send_turn_interrupt_request(TurnInterruptParams {
thread_id: thread.id,
turn_id: turn.id,
})
.await?;
let interrupt_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(interrupt_id)),
)
.await??;
let _resp: TurnInterruptResponse = to_response::<TurnInterruptResponse>(interrupt_resp)?;
// No fields to assert on; successful deserialization confirms proper response shape.
Ok(())
}
// Helper to create a config.toml pointing at the mock model server.
fn create_config_toml(codex_home: &std::path::Path, server_uri: &str) -> std::io::Result<()> {
let config_toml = codex_home.join("config.toml");
std::fs::write(
config_toml,
format!(
r#"
model = "mock-model"
approval_policy = "never"
sandbox_mode = "workspace-write"
model_provider = "mock_provider"
[model_providers.mock_provider]
name = "Mock provider for test"
base_url = "{server_uri}/v1"
wire_api = "chat"
request_max_retries = 0
stream_max_retries = 0
"#
),
)
}

View File

@@ -0,0 +1,486 @@
use anyhow::Result;
use app_test_support::McpProcess;
use app_test_support::create_final_assistant_message_sse_response;
use app_test_support::create_mock_chat_completions_server;
use app_test_support::create_mock_chat_completions_server_unchecked;
use app_test_support::create_shell_sse_response;
use app_test_support::to_response;
use codex_app_server_protocol::JSONRPCNotification;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::ServerRequest;
use codex_app_server_protocol::ThreadStartParams;
use codex_app_server_protocol::ThreadStartResponse;
use codex_app_server_protocol::TurnStartParams;
use codex_app_server_protocol::TurnStartResponse;
use codex_app_server_protocol::TurnStartedNotification;
use codex_app_server_protocol::UserInput as V2UserInput;
use codex_core::protocol_config_types::ReasoningEffort;
use codex_core::protocol_config_types::ReasoningSummary;
use codex_protocol::parse_command::ParsedCommand;
use codex_protocol::protocol::Event;
use codex_protocol::protocol::EventMsg;
use core_test_support::skip_if_no_network;
use pretty_assertions::assert_eq;
use std::path::Path;
use tempfile::TempDir;
use tokio::time::timeout;
const DEFAULT_READ_TIMEOUT: std::time::Duration = std::time::Duration::from_secs(10);
#[tokio::test]
async fn turn_start_emits_notifications_and_accepts_model_override() -> Result<()> {
// Provide a mock server and config so model wiring is valid.
// Three Codex turns hit the mock model (session start + two turn/start calls).
let responses = vec![
create_final_assistant_message_sse_response("Done")?,
create_final_assistant_message_sse_response("Done")?,
create_final_assistant_message_sse_response("Done")?,
];
let server = create_mock_chat_completions_server_unchecked(responses).await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri(), "never")?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
// Start a thread (v2) and capture its id.
let thread_req = mcp
.send_thread_start_request(ThreadStartParams {
model: Some("mock-model".to_string()),
..Default::default()
})
.await?;
let thread_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(thread_req)),
)
.await??;
let ThreadStartResponse { thread } = to_response::<ThreadStartResponse>(thread_resp)?;
// Start a turn with only input and thread_id set (no overrides).
let turn_req = mcp
.send_turn_start_request(TurnStartParams {
thread_id: thread.id.clone(),
input: vec![V2UserInput::Text {
text: "Hello".to_string(),
}],
..Default::default()
})
.await?;
let turn_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(turn_req)),
)
.await??;
let TurnStartResponse { turn } = to_response::<TurnStartResponse>(turn_resp)?;
assert!(!turn.id.is_empty());
// Expect a turn/started notification.
let notif: JSONRPCNotification = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("turn/started"),
)
.await??;
let started: TurnStartedNotification =
serde_json::from_value(notif.params.expect("params must be present"))?;
assert_eq!(
started.turn.status,
codex_app_server_protocol::TurnStatus::InProgress
);
// Send a second turn that exercises the overrides path: change the model.
let turn_req2 = mcp
.send_turn_start_request(TurnStartParams {
thread_id: thread.id.clone(),
input: vec![V2UserInput::Text {
text: "Second".to_string(),
}],
model: Some("mock-model-override".to_string()),
..Default::default()
})
.await?;
let turn_resp2: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(turn_req2)),
)
.await??;
let TurnStartResponse { turn: turn2 } = to_response::<TurnStartResponse>(turn_resp2)?;
assert!(!turn2.id.is_empty());
// Ensure the second turn has a different id than the first.
assert_ne!(turn.id, turn2.id);
// Expect a second turn/started notification as well.
let _notif2: JSONRPCNotification = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("turn/started"),
)
.await??;
// And we should ultimately get a task_complete without having to add a
// legacy conversation listener explicitly (auto-attached by thread/start).
let _task_complete: JSONRPCNotification = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("codex/event/task_complete"),
)
.await??;
Ok(())
}
#[tokio::test]
async fn turn_start_accepts_local_image_input() -> Result<()> {
// Two Codex turns hit the mock model (session start + turn/start).
let responses = vec![
create_final_assistant_message_sse_response("Done")?,
create_final_assistant_message_sse_response("Done")?,
];
// Use the unchecked variant because the request payload includes a LocalImage
// which the strict matcher does not currently cover.
let server = create_mock_chat_completions_server_unchecked(responses).await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri(), "never")?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
let thread_req = mcp
.send_thread_start_request(ThreadStartParams {
model: Some("mock-model".to_string()),
..Default::default()
})
.await?;
let thread_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(thread_req)),
)
.await??;
let ThreadStartResponse { thread } = to_response::<ThreadStartResponse>(thread_resp)?;
let image_path = codex_home.path().join("image.png");
// No need to actually write the file; we just exercise the input path.
let turn_req = mcp
.send_turn_start_request(TurnStartParams {
thread_id: thread.id.clone(),
input: vec![V2UserInput::LocalImage { path: image_path }],
..Default::default()
})
.await?;
let turn_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(turn_req)),
)
.await??;
let TurnStartResponse { turn } = to_response::<TurnStartResponse>(turn_resp)?;
assert!(!turn.id.is_empty());
// This test only validates that turn/start responds and returns a turn.
Ok(())
}
#[tokio::test]
async fn turn_start_exec_approval_toggle_v2() -> Result<()> {
skip_if_no_network!(Ok(()));
let tmp = TempDir::new()?;
let codex_home = tmp.path().to_path_buf();
// Mock server: the first turn requests a shell call (elicitation), then completes.
// The second turn does the same, but we set approval_policy=never to avoid elicitation.
let responses = vec![
create_shell_sse_response(
vec![
"python3".to_string(),
"-c".to_string(),
"print(42)".to_string(),
],
None,
Some(5000),
"call1",
)?,
create_final_assistant_message_sse_response("done 1")?,
create_shell_sse_response(
vec![
"python3".to_string(),
"-c".to_string(),
"print(42)".to_string(),
],
None,
Some(5000),
"call2",
)?,
create_final_assistant_message_sse_response("done 2")?,
];
let server = create_mock_chat_completions_server(responses).await;
// Default approval is untrusted to force elicitation on first turn.
create_config_toml(codex_home.as_path(), &server.uri(), "untrusted")?;
let mut mcp = McpProcess::new(codex_home.as_path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
// thread/start
let start_id = mcp
.send_thread_start_request(ThreadStartParams {
model: Some("mock-model".to_string()),
..Default::default()
})
.await?;
let start_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(start_id)),
)
.await??;
let ThreadStartResponse { thread } = to_response::<ThreadStartResponse>(start_resp)?;
// turn/start — expect ExecCommandApproval request from server
let first_turn_id = mcp
.send_turn_start_request(TurnStartParams {
thread_id: thread.id.clone(),
input: vec![V2UserInput::Text {
text: "run python".to_string(),
}],
..Default::default()
})
.await?;
// Acknowledge RPC
timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(first_turn_id)),
)
.await??;
// Receive elicitation
let server_req = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_request_message(),
)
.await??;
let ServerRequest::ExecCommandApproval { request_id, params } = server_req else {
panic!("expected ExecCommandApproval request");
};
assert_eq!(params.call_id, "call1");
assert_eq!(
params.parsed_cmd,
vec![ParsedCommand::Unknown {
cmd: "python3 -c 'print(42)'".to_string()
}]
);
// Approve and wait for task completion
mcp.send_response(
request_id,
serde_json::json!({ "decision": codex_core::protocol::ReviewDecision::Approved }),
)
.await?;
timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("codex/event/task_complete"),
)
.await??;
// Second turn with approval_policy=never should not elicit approval
let second_turn_id = mcp
.send_turn_start_request(TurnStartParams {
thread_id: thread.id.clone(),
input: vec![V2UserInput::Text {
text: "run python again".to_string(),
}],
approval_policy: Some(codex_app_server_protocol::AskForApproval::Never),
sandbox_policy: Some(codex_app_server_protocol::SandboxPolicy::DangerFullAccess),
model: Some("mock-model".to_string()),
effort: Some(ReasoningEffort::Medium),
summary: Some(ReasoningSummary::Auto),
..Default::default()
})
.await?;
timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(second_turn_id)),
)
.await??;
// Ensure we do NOT receive an ExecCommandApproval request before task completes
timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("codex/event/task_complete"),
)
.await??;
Ok(())
}
#[tokio::test]
async fn turn_start_updates_sandbox_and_cwd_between_turns_v2() -> Result<()> {
// When returning Result from a test, pass an Ok(()) to the skip macro
// so the early return type matches. The no-arg form returns unit.
skip_if_no_network!(Ok(()));
let tmp = TempDir::new()?;
let codex_home = tmp.path().join("codex_home");
std::fs::create_dir(&codex_home)?;
let workspace_root = tmp.path().join("workspace");
std::fs::create_dir(&workspace_root)?;
let first_cwd = workspace_root.join("turn1");
let second_cwd = workspace_root.join("turn2");
std::fs::create_dir(&first_cwd)?;
std::fs::create_dir(&second_cwd)?;
let responses = vec![
create_shell_sse_response(
vec![
"bash".to_string(),
"-lc".to_string(),
"echo first turn".to_string(),
],
None,
Some(5000),
"call-first",
)?,
create_final_assistant_message_sse_response("done first")?,
create_shell_sse_response(
vec![
"bash".to_string(),
"-lc".to_string(),
"echo second turn".to_string(),
],
None,
Some(5000),
"call-second",
)?,
create_final_assistant_message_sse_response("done second")?,
];
let server = create_mock_chat_completions_server(responses).await;
create_config_toml(&codex_home, &server.uri(), "untrusted")?;
let mut mcp = McpProcess::new(&codex_home).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
// thread/start
let start_id = mcp
.send_thread_start_request(ThreadStartParams {
model: Some("mock-model".to_string()),
..Default::default()
})
.await?;
let start_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(start_id)),
)
.await??;
let ThreadStartResponse { thread } = to_response::<ThreadStartResponse>(start_resp)?;
// First turn with a workspace-write sandbox and first_cwd.
let first_turn = mcp
.send_turn_start_request(TurnStartParams {
thread_id: thread.id.clone(),
input: vec![V2UserInput::Text {
text: "first turn".to_string(),
}],
cwd: Some(first_cwd.clone()),
approval_policy: Some(codex_app_server_protocol::AskForApproval::Never),
sandbox_policy: Some(codex_app_server_protocol::SandboxPolicy::WorkspaceWrite {
writable_roots: vec![first_cwd.clone()],
network_access: false,
exclude_tmpdir_env_var: false,
exclude_slash_tmp: false,
}),
model: Some("mock-model".to_string()),
effort: Some(ReasoningEffort::Medium),
summary: Some(ReasoningSummary::Auto),
})
.await?;
timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(first_turn)),
)
.await??;
timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("codex/event/task_complete"),
)
.await??;
// Second turn with danger-full-access and second_cwd; ensure exec begins in second_cwd.
let second_turn = mcp
.send_turn_start_request(TurnStartParams {
thread_id: thread.id.clone(),
input: vec![V2UserInput::Text {
text: "second turn".to_string(),
}],
cwd: Some(second_cwd.clone()),
approval_policy: Some(codex_app_server_protocol::AskForApproval::Never),
sandbox_policy: Some(codex_app_server_protocol::SandboxPolicy::DangerFullAccess),
model: Some("mock-model".to_string()),
effort: Some(ReasoningEffort::Medium),
summary: Some(ReasoningSummary::Auto),
})
.await?;
timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(second_turn)),
)
.await??;
let exec_begin_notification = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("codex/event/exec_command_begin"),
)
.await??;
let params = exec_begin_notification
.params
.clone()
.expect("exec_command_begin params");
let event: Event = serde_json::from_value(params).expect("deserialize exec begin event");
let exec_begin = match event.msg {
EventMsg::ExecCommandBegin(exec_begin) => exec_begin,
other => panic!("expected ExecCommandBegin event, got {other:?}"),
};
assert_eq!(exec_begin.cwd, second_cwd);
assert_eq!(
exec_begin.command,
vec![
"bash".to_string(),
"-lc".to_string(),
"echo second turn".to_string()
]
);
timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("codex/event/task_complete"),
)
.await??;
Ok(())
}
// Helper to create a config.toml pointing at the mock model server.
fn create_config_toml(
codex_home: &Path,
server_uri: &str,
approval_policy: &str,
) -> std::io::Result<()> {
let config_toml = codex_home.join("config.toml");
std::fs::write(
config_toml,
format!(
r#"
model = "mock-model"
approval_policy = "{approval_policy}"
sandbox_mode = "read-only"
model_provider = "mock_provider"
[model_providers.mock_provider]
name = "Mock provider for test"
base_url = "{server_uri}/v1"
wire_api = "chat"
request_max_retries = 0
stream_max_retries = 0
"#
),
)
}

View File

@@ -39,6 +39,7 @@ ctor = { workspace = true }
owo-colors = { workspace = true }
serde_json = { workspace = true }
supports-color = { workspace = true }
toml = { workspace = true }
tokio = { workspace = true, features = [
"io-std",
"macros",

View File

@@ -124,6 +124,7 @@ async fn run_command_under_sandbox(
let cwd_clone = cwd.clone();
let env_map = env.clone();
let command_vec = command.clone();
let base_dir = config.codex_home.clone();
let res = tokio::task::spawn_blocking(move || {
run_windows_sandbox_capture(
policy_str,
@@ -132,6 +133,7 @@ async fn run_command_under_sandbox(
&cwd_clone,
env_map,
None,
Some(base_dir.as_path()),
)
})
.await;

View File

@@ -510,15 +510,21 @@ async fn cli_main(codex_linux_sandbox_exe: Option<PathBuf>) -> anyhow::Result<()
Some(Subcommand::Features(FeaturesCli { sub })) => match sub {
FeaturesSubcommand::List => {
// Respect root-level `-c` overrides plus top-level flags like `--profile`.
let cli_kv_overrides = root_config_overrides
let mut cli_kv_overrides = root_config_overrides
.parse_overrides()
.map_err(anyhow::Error::msg)?;
// Honor `--search` via the new feature toggle.
if interactive.web_search {
cli_kv_overrides.push((
"features.web_search_request".to_string(),
toml::Value::Boolean(true),
));
}
// Thread through relevant top-level flags (at minimum, `--profile`).
// Also honor `--search` since it maps to a feature toggle.
let overrides = ConfigOverrides {
config_profile: interactive.config_profile.clone(),
tools_web_search_request: interactive.web_search.then_some(true),
..Default::default()
};

View File

@@ -353,7 +353,9 @@ async fn run_login(config_overrides: &CliConfigOverrides, login_args: LoginArgs)
.context("failed to load configuration")?;
if !config.features.enabled(Feature::RmcpClient) {
bail!("OAuth login is only supported when [feature].rmcp_client is true in config.toml.");
bail!(
"OAuth login is only supported when [features].rmcp_client is true in config.toml. See https://github.com/openai/codex/blob/main/docs/config.md#feature-flags for details."
);
}
let LoginArgs { name, scopes } = login_args;

View File

@@ -1044,7 +1044,7 @@ pub async fn run_main(cli: Cli, _codex_linux_sandbox_exe: Option<PathBuf>) -> an
// Close task modal/pending apply if present before opening env modal
app.diff_overlay = None;
app.env_modal = Some(app::EnvModalState { query: String::new(), selected: 0 });
// Cache environments until user explicitly refreshes with 'r' inside the modal.
// Cache environments while the modal is open to avoid repeated fetches.
let should_fetch = app.environments.is_empty();
if should_fetch {
app.env_loading = true;
@@ -1115,7 +1115,7 @@ pub async fn run_main(cli: Cli, _codex_linux_sandbox_exe: Option<PathBuf>) -> an
let _ = tx.send(evt);
});
} else {
app.status = "No environment selected (press 'e' to choose)".to_string();
app.status = "No environment selected".to_string();
}
}
needs_redraw = true;
@@ -1313,18 +1313,6 @@ pub async fn run_main(cli: Cli, _codex_linux_sandbox_exe: Option<PathBuf>) -> an
// Environment modal key handling
match key.code {
KeyCode::Esc => { app.env_modal = None; needs_redraw = true; }
KeyCode::Char('r') | KeyCode::Char('R') => {
// Trigger refresh of environments
app.env_loading = true; app.env_error = None; needs_redraw = true;
let _ = frame_tx.send(Instant::now() + Duration::from_millis(100));
let tx = tx.clone();
tokio::spawn(async move {
let base_url = crate::util::normalize_base_url(&std::env::var("CODEX_CLOUD_TASKS_BASE_URL").unwrap_or_else(|_| "https://chatgpt.com/backend-api".to_string()));
let headers = crate::util::build_chatgpt_headers().await;
let res = crate::env_detect::list_environments(&base_url, &headers).await;
let _ = tx.send(app::AppEvent::EnvironmentsLoaded(res));
});
}
KeyCode::Char(ch) if !key.modifiers.contains(KeyModifiers::CONTROL) && !key.modifiers.contains(KeyModifiers::ALT) => {
if let Some(m) = app.env_modal.as_mut() { m.query.push(ch); }
needs_redraw = true;
@@ -1431,7 +1419,7 @@ pub async fn run_main(cli: Cli, _codex_linux_sandbox_exe: Option<PathBuf>) -> an
}
KeyCode::Char('o') | KeyCode::Char('O') => {
app.env_modal = Some(app::EnvModalState { query: String::new(), selected: 0 });
// Cache environments until user explicitly refreshes with 'r' inside the modal.
// Cache environments while the modal is open to avoid repeated fetches.
let should_fetch = app.environments.is_empty();
if should_fetch { app.env_loading = true; app.env_error = None; }
needs_redraw = true;

View File

@@ -945,9 +945,7 @@ pub fn draw_env_modal(frame: &mut Frame, area: Rect, app: &mut App) {
// Subheader with usage hints (dim cyan)
let subheader = Paragraph::new(Line::from(
"Type to search, Enter select, Esc cancel; r refresh"
.cyan()
.dim(),
"Type to search, Enter select, Esc cancel".cyan().dim(),
))
.wrap(Wrap { trim: true });
frame.render_widget(subheader, rows[0]);

View File

@@ -34,7 +34,7 @@ const PRESETS: &[ModelPreset] = &[
id: "gpt-5-codex",
model: "gpt-5-codex",
display_name: "gpt-5-codex",
description: "Optimized for coding tasks with many tools.",
description: "Optimized for codex.",
default_reasoning_effort: ReasoningEffort::Medium,
supported_reasoning_efforts: &[
ReasoningEffortPreset {
@@ -52,6 +52,24 @@ const PRESETS: &[ModelPreset] = &[
],
is_default: true,
},
ModelPreset {
id: "gpt-5-codex-mini",
model: "gpt-5-codex-mini",
display_name: "gpt-5-codex-mini",
description: "Optimized for codex. Cheaper, faster, but less capable.",
default_reasoning_effort: ReasoningEffort::Medium,
supported_reasoning_efforts: &[
ReasoningEffortPreset {
effort: ReasoningEffort::Medium,
description: "Dynamically adjusts reasoning based on the task",
},
ReasoningEffortPreset {
effort: ReasoningEffort::High,
description: "Maximizes reasoning depth for complex or ambiguous problems",
},
],
is_default: false,
},
ModelPreset {
id: "gpt-5",
model: "gpt-5",
@@ -80,8 +98,13 @@ const PRESETS: &[ModelPreset] = &[
},
];
pub fn builtin_model_presets(_auth_mode: Option<AuthMode>) -> Vec<ModelPreset> {
PRESETS.to_vec()
pub fn builtin_model_presets(auth_mode: Option<AuthMode>) -> Vec<ModelPreset> {
let allow_codex_mini = matches!(auth_mode, Some(AuthMode::ChatGPT));
PRESETS
.iter()
.filter(|preset| allow_codex_mini || preset.id != "gpt-5-codex-mini")
.copied()
.collect()
}
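A hedged illustration of the new gating (hypothetical assertions, not tests from this diff): the mini preset should only surface for ChatGPT auth.

```
// codex-mini is offered to ChatGPT-authenticated clients...
let chatgpt = builtin_model_presets(Some(AuthMode::ChatGPT));
assert!(chatgpt.iter().any(|p| p.id == "gpt-5-codex-mini"));
// ...and filtered out for everyone else (API key, or no auth).
let api_key = builtin_model_presets(Some(AuthMode::ApiKey));
assert!(api_key.iter().all(|p| p.id != "gpt-5-codex-mini"));
```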
#[cfg(test)]

View File

@@ -80,7 +80,7 @@ toml_edit = { workspace = true }
tracing = { workspace = true, features = ["log"] }
tree-sitter = { workspace = true }
tree-sitter-bash = { workspace = true }
uuid = { workspace = true, features = ["serde", "v4"] }
uuid = { workspace = true, features = ["serde", "v4", "v5"] }
which = { workspace = true }
wildmatch = { workspace = true }
codex_windows_sandbox = { package = "codex-windows-sandbox", path = "../windows-sandbox-rs" }

View File

@@ -1,12 +1,14 @@
mod storage;
use chrono::Utc;
use reqwest::StatusCode;
use serde::Deserialize;
use serde::Serialize;
#[cfg(test)]
use serial_test::serial;
use std::env;
use std::fmt::Debug;
use std::io::ErrorKind;
use std::path::Path;
use std::path::PathBuf;
use std::sync::Arc;
@@ -22,10 +24,14 @@ use crate::auth::storage::AuthStorageBackend;
use crate::auth::storage::create_auth_storage;
use crate::config::Config;
use crate::default_client::CodexHttpClient;
use crate::error::RefreshTokenFailedError;
use crate::error::RefreshTokenFailedReason;
use crate::token_data::PlanType;
use crate::token_data::TokenData;
use crate::token_data::parse_id_token;
use crate::util::try_parse_error_message;
use serde_json::Value;
use thiserror::Error;
#[derive(Debug, Clone)]
pub struct CodexAuth {
@@ -46,18 +52,54 @@ impl PartialEq for CodexAuth {
// TODO(pakrym): use token exp field to check for expiration instead
const TOKEN_REFRESH_INTERVAL: i64 = 8;
const REFRESH_TOKEN_EXPIRED_MESSAGE: &str = "Your access token could not be refreshed because your refresh token has expired. Please log out and sign in again.";
const REFRESH_TOKEN_REUSED_MESSAGE: &str = "Your access token could not be refreshed because your refresh token was already used. Please log out and sign in again.";
const REFRESH_TOKEN_INVALIDATED_MESSAGE: &str = "Your access token could not be refreshed because your refresh token was revoked. Please log out and sign in again.";
const REFRESH_TOKEN_UNKNOWN_MESSAGE: &str =
"Your access token could not be refreshed. Please log out and sign in again.";
const REFRESH_TOKEN_URL: &str = "https://auth.openai.com/oauth/token";
pub const REFRESH_TOKEN_URL_OVERRIDE_ENV_VAR: &str = "CODEX_REFRESH_TOKEN_URL_OVERRIDE";
#[derive(Debug, Error)]
pub enum RefreshTokenError {
#[error("{0}")]
Permanent(#[from] RefreshTokenFailedError),
#[error(transparent)]
Transient(#[from] std::io::Error),
}
impl RefreshTokenError {
pub fn failed_reason(&self) -> Option<RefreshTokenFailedReason> {
match self {
Self::Permanent(error) => Some(error.reason),
Self::Transient(_) => None,
}
}
fn other_with_message(message: impl Into<String>) -> Self {
Self::Transient(std::io::Error::other(message.into()))
}
}
impl From<RefreshTokenError> for std::io::Error {
fn from(err: RefreshTokenError) -> Self {
match err {
RefreshTokenError::Permanent(failed) => std::io::Error::other(failed),
RefreshTokenError::Transient(inner) => inner,
}
}
}
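A sketch of how a caller might branch on permanence (`err` is a `RefreshTokenError`; the real handling appears in the client diff further down):

```
match err {
    // Permanent failures carry a RefreshTokenFailedReason and should surface
    // a "log out and sign in again" style message to the user.
    RefreshTokenError::Permanent(failed) => eprintln!("{failed}"),
    // Transient failures (network errors, 5xx, timeouts) are safe to retry.
    RefreshTokenError::Transient(io_err) => eprintln!("retry later: {io_err}"),
}
```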
impl CodexAuth {
pub async fn refresh_token(&self) -> Result<String, std::io::Error> {
pub async fn refresh_token(&self) -> Result<String, RefreshTokenError> {
tracing::info!("Refreshing token");
let token_data = self
.get_current_token_data()
.ok_or(std::io::Error::other("Token data is not available."))?;
let token_data = self.get_current_token_data().ok_or_else(|| {
RefreshTokenError::Transient(std::io::Error::other("Token data is not available."))
})?;
let token = token_data.refresh_token;
let refresh_response = try_refresh_token(token, &self.client)
.await
.map_err(std::io::Error::other)?;
let refresh_response = try_refresh_token(token, &self.client).await?;
let updated = update_tokens(
&self.storage,
@@ -65,7 +107,8 @@ impl CodexAuth {
refresh_response.access_token,
refresh_response.refresh_token,
)
.await?;
.await
.map_err(RefreshTokenError::from)?;
if let Ok(mut auth_lock) = self.auth_dot_json.lock() {
*auth_lock = Some(updated.clone());
@@ -74,7 +117,7 @@ impl CodexAuth {
let access = match updated.tokens {
Some(t) => t.access_token,
None => {
return Err(std::io::Error::other(
return Err(RefreshTokenError::other_with_message(
"Token data is not available after refresh.",
));
}
@@ -99,15 +142,21 @@ impl CodexAuth {
..
}) => {
if last_refresh < Utc::now() - chrono::Duration::days(TOKEN_REFRESH_INTERVAL) {
let refresh_response = tokio::time::timeout(
let refresh_result = tokio::time::timeout(
Duration::from_secs(60),
try_refresh_token(tokens.refresh_token.clone(), &self.client),
)
.await
.map_err(|_| {
std::io::Error::other("timed out while refreshing OpenAI API key")
})?
.map_err(std::io::Error::other)?;
.await;
let refresh_response = match refresh_result {
Ok(Ok(response)) => response,
Ok(Err(err)) => return Err(err.into()),
Err(_) => {
return Err(std::io::Error::new(
ErrorKind::TimedOut,
"timed out while refreshing OpenAI API key",
));
}
};
let updated_auth_dot_json = update_tokens(
&self.storage,
@@ -425,7 +474,7 @@ async fn update_tokens(
async fn try_refresh_token(
refresh_token: String,
client: &CodexHttpClient,
) -> std::io::Result<RefreshResponse> {
) -> Result<RefreshResponse, RefreshTokenError> {
let refresh_request = RefreshRequest {
client_id: CLIENT_ID,
grant_type: "refresh_token",
@@ -433,30 +482,93 @@ async fn try_refresh_token(
scope: "openid profile email",
};
let endpoint = refresh_token_endpoint();
// Use shared client factory to include standard headers
let response = client
.post("https://auth.openai.com/oauth/token")
.post(endpoint.as_str())
.header("Content-Type", "application/json")
.json(&refresh_request)
.send()
.await
.map_err(std::io::Error::other)?;
.map_err(|err| RefreshTokenError::Transient(std::io::Error::other(err)))?;
if response.status().is_success() {
let status = response.status();
if status.is_success() {
let refresh_response = response
.json::<RefreshResponse>()
.await
.map_err(std::io::Error::other)?;
.map_err(|err| RefreshTokenError::Transient(std::io::Error::other(err)))?;
Ok(refresh_response)
} else {
Err(std::io::Error::other(format!(
"Failed to refresh token: {}: {}",
response.status(),
try_parse_error_message(&response.text().await.unwrap_or_default()),
)))
let body = response.text().await.unwrap_or_default();
if status == StatusCode::UNAUTHORIZED {
let failed = classify_refresh_token_failure(&body);
Err(RefreshTokenError::Permanent(failed))
} else {
let message = try_parse_error_message(&body);
Err(RefreshTokenError::Transient(std::io::Error::other(
format!("Failed to refresh token: {status}: {message}"),
)))
}
}
}
fn classify_refresh_token_failure(body: &str) -> RefreshTokenFailedError {
let code = extract_refresh_token_error_code(body);
let normalized_code = code.as_deref().map(str::to_ascii_lowercase);
let reason = match normalized_code.as_deref() {
Some("refresh_token_expired") => RefreshTokenFailedReason::Expired,
Some("refresh_token_reused") => RefreshTokenFailedReason::Exhausted,
Some("refresh_token_invalidated") => RefreshTokenFailedReason::Revoked,
_ => RefreshTokenFailedReason::Other,
};
if reason == RefreshTokenFailedReason::Other {
tracing::warn!(
backend_code = normalized_code.as_deref(),
backend_body = body,
"Encountered unknown 401 response while refreshing token"
);
}
let message = match reason {
RefreshTokenFailedReason::Expired => REFRESH_TOKEN_EXPIRED_MESSAGE.to_string(),
RefreshTokenFailedReason::Exhausted => REFRESH_TOKEN_REUSED_MESSAGE.to_string(),
RefreshTokenFailedReason::Revoked => REFRESH_TOKEN_INVALIDATED_MESSAGE.to_string(),
RefreshTokenFailedReason::Other => REFRESH_TOKEN_UNKNOWN_MESSAGE.to_string(),
};
RefreshTokenFailedError::new(reason, message)
}
fn extract_refresh_token_error_code(body: &str) -> Option<String> {
if body.trim().is_empty() {
return None;
}
let Value::Object(map) = serde_json::from_str::<Value>(body).ok()? else {
return None;
};
if let Some(error_value) = map.get("error") {
match error_value {
Value::Object(obj) => {
if let Some(code) = obj.get("code").and_then(Value::as_str) {
return Some(code.to_string());
}
}
Value::String(code) => {
return Some(code.to_string());
}
_ => {}
}
}
map.get("code").and_then(Value::as_str).map(str::to_string)
}
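For reference, a sketch of the body shapes the extractor above accepts, written as a test that could sit in this module; the JSON payloads are illustrative, not captured backend responses.
```
#[cfg(test)]
mod refresh_code_extraction_sketch {
    use super::extract_refresh_token_error_code;

    #[test]
    fn handles_each_supported_body_shape() {
        // Nested object form: {"error": {"code": ...}}
        assert_eq!(
            extract_refresh_token_error_code(r#"{"error":{"code":"refresh_token_expired"}}"#),
            Some("refresh_token_expired".to_string())
        );
        // String form: {"error": "..."}
        assert_eq!(
            extract_refresh_token_error_code(r#"{"error":"refresh_token_reused"}"#),
            Some("refresh_token_reused".to_string())
        );
        // Top-level code form: {"code": "..."}
        assert_eq!(
            extract_refresh_token_error_code(r#"{"code":"refresh_token_invalidated"}"#),
            Some("refresh_token_invalidated".to_string())
        );
        // Empty or non-JSON bodies yield None.
        assert_eq!(extract_refresh_token_error_code("   "), None);
        assert_eq!(extract_refresh_token_error_code("not json"), None);
    }
}
```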
#[derive(Serialize)]
struct RefreshRequest {
client_id: &'static str,
@@ -475,6 +587,11 @@ struct RefreshResponse {
// Shared constant for token refresh (client ID used for the OAuth token refresh flow)
pub const CLIENT_ID: &str = "app_EMoamEEZ73f0CkXaXp7hrann";
fn refresh_token_endpoint() -> String {
std::env::var(REFRESH_TOKEN_URL_OVERRIDE_ENV_VAR)
.unwrap_or_else(|_| REFRESH_TOKEN_URL.to_string())
}
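This override makes the refresh flow testable: setting the env var redirects token refresh, for example at a local mock server. A self-contained sketch of the pattern; the env var's literal value below is an assumption, not the crate's actual constant:
```
// Sketch of the endpoint-override pattern; the env var name is illustrative.
const REFRESH_TOKEN_URL: &str = "https://auth.openai.com/oauth/token";
const REFRESH_TOKEN_URL_OVERRIDE_ENV_VAR: &str = "CODEX_REFRESH_TOKEN_URL_OVERRIDE"; // assumed

fn refresh_token_endpoint() -> String {
    std::env::var(REFRESH_TOKEN_URL_OVERRIDE_ENV_VAR)
        .unwrap_or_else(|_| REFRESH_TOKEN_URL.to_string())
}

fn main() {
    // With no override set, the production endpoint is used.
    std::env::remove_var(REFRESH_TOKEN_URL_OVERRIDE_ENV_VAR);
    assert_eq!(refresh_token_endpoint(), REFRESH_TOKEN_URL);

    // A test can redirect token refresh to a local mock server.
    std::env::set_var(REFRESH_TOKEN_URL_OVERRIDE_ENV_VAR, "http://127.0.0.1:4010/token");
    assert_eq!(refresh_token_endpoint(), "http://127.0.0.1:4010/token");
}
```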
use std::sync::RwLock;
/// Internal cached auth state.
@@ -965,7 +1082,9 @@ impl AuthManager {
/// Attempt to refresh the current auth token (if any). On success, reload
/// the auth state from disk so other components observe the refreshed token.
pub async fn refresh_token(&self) -> std::io::Result<Option<String>> {
/// If the token refresh fails in a permanent (non-transient) way, logs the
/// user out to clear the invalid auth state.
pub async fn refresh_token(&self) -> Result<Option<String>, RefreshTokenError> {
let auth = match self.auth() {
Some(a) => a,
None => return Ok(None),

View File

@@ -31,6 +31,7 @@ use tracing::warn;
use crate::AuthManager;
use crate::auth::CodexAuth;
use crate::auth::RefreshTokenError;
use crate::chat_completions::AggregateStreamExt;
use crate::chat_completions::stream_chat_completions;
use crate::client_common::Prompt;
@@ -389,12 +390,17 @@ impl ModelClient {
&& let Some(manager) = auth_manager.as_ref()
&& let Some(auth) = auth.as_ref()
&& auth.mode == AuthMode::ChatGPT
&& let Err(err) = manager.refresh_token().await
{
manager.refresh_token().await.map_err(|err| {
StreamAttemptError::Fatal(CodexErr::Fatal(format!(
"Failed to refresh ChatGPT credentials: {err}"
)))
})?;
let stream_error = match err {
RefreshTokenError::Permanent(failed) => {
StreamAttemptError::Fatal(CodexErr::RefreshTokenFailed(failed))
}
RefreshTokenError::Transient(other) => {
StreamAttemptError::RetryableTransportError(CodexErr::Io(other))
}
};
return Err(stream_error);
}
// The OpenAI Responses endpoint returns structured JSON bodies even for 4xx/5xx
@@ -674,6 +680,33 @@ fn parse_header_str<'a>(headers: &'a HeaderMap, name: &str) -> Option<&'a str> {
headers.get(name)?.to_str().ok()
}
async fn emit_completed(
tx_event: &mpsc::Sender<Result<ResponseEvent>>,
otel_event_manager: &OtelEventManager,
completed: ResponseCompleted,
) {
if let Some(token_usage) = &completed.usage {
otel_event_manager.sse_event_completed(
token_usage.input_tokens,
token_usage.output_tokens,
token_usage
.input_tokens_details
.as_ref()
.map(|d| d.cached_tokens),
token_usage
.output_tokens_details
.as_ref()
.map(|d| d.reasoning_tokens),
token_usage.total_tokens,
);
}
let event = ResponseEvent::Completed {
response_id: completed.id.clone(),
token_usage: completed.usage.map(Into::into),
};
let _ = tx_event.send(Ok(event)).await;
}
async fn process_sse<S>(
stream: S,
tx_event: mpsc::Sender<Result<ResponseEvent>>,
@@ -686,7 +719,7 @@ async fn process_sse<S>(
// If the stream stays completely silent for an extended period, treat it as disconnected.
// The response id returned from the "complete" message.
let mut response_completed: Option<ResponseCompleted> = None;
let response_completed: Option<ResponseCompleted> = None;
let mut response_error: Option<CodexErr> = None;
loop {
@@ -705,30 +738,8 @@ async fn process_sse<S>(
}
Ok(None) => {
match response_completed {
Some(ResponseCompleted {
id: response_id,
usage,
}) => {
if let Some(token_usage) = &usage {
otel_event_manager.sse_event_completed(
token_usage.input_tokens,
token_usage.output_tokens,
token_usage
.input_tokens_details
.as_ref()
.map(|d| d.cached_tokens),
token_usage
.output_tokens_details
.as_ref()
.map(|d| d.reasoning_tokens),
token_usage.total_tokens,
);
}
let event = ResponseEvent::Completed {
response_id,
token_usage: usage.map(Into::into),
};
let _ = tx_event.send(Ok(event)).await;
Some(completed) => {
emit_completed(&tx_event, &otel_event_manager, completed).await
}
None => {
let error = response_error.unwrap_or(CodexErr::Stream(
@@ -858,7 +869,8 @@ async fn process_sse<S>(
if let Some(resp_val) = event.response {
match serde_json::from_value::<ResponseCompleted>(resp_val) {
Ok(r) => {
response_completed = Some(r);
emit_completed(&tx_event, &otel_event_manager, r).await;
return;
}
Err(e) => {
let error = format!("failed to parse ResponseCompleted: {e}");
@@ -931,8 +943,10 @@ async fn stream_from_fixture(
fn rate_limit_regex() -> &'static Regex {
static RE: OnceLock<Regex> = OnceLock::new();
// Match both OpenAI-style messages like "Please try again in 1.898s"
// and Azure OpenAI-style messages like "Try again in 35 seconds".
#[expect(clippy::unwrap_used)]
RE.get_or_init(|| Regex::new(r"Please try again in (\d+(?:\.\d+)?)(s|ms)").unwrap())
RE.get_or_init(|| Regex::new(r"(?i)try again in\s*(\d+(?:\.\d+)?)\s*(s|ms|seconds?)").unwrap())
}
fn try_parse_retry_after(err: &Error) -> Option<Duration> {
@@ -940,7 +954,8 @@ fn try_parse_retry_after(err: &Error) -> Option<Duration> {
return None;
}
// parse the Please try again in 1.898s format using regex
// parse retry hints like "try again in 1.898s" or
// "Try again in 35 seconds" using regex
let re = rate_limit_regex();
if let Some(message) = &err.message
&& let Some(captures) = re.captures(message)
@@ -950,9 +965,9 @@ fn try_parse_retry_after(err: &Error) -> Option<Duration> {
if let (Some(value), Some(unit)) = (seconds, unit) {
let value = value.as_str().parse::<f64>().ok()?;
let unit = unit.as_str();
let unit = unit.as_str().to_ascii_lowercase();
if unit == "s" {
if unit == "s" || unit.starts_with("second") {
return Some(Duration::from_secs_f64(value));
} else if unit == "ms" {
return Some(Duration::from_millis(value as u64));
@@ -1427,6 +1442,19 @@ mod tests {
assert_eq!(delay, Some(Duration::from_secs_f64(1.898)));
}
#[test]
fn test_try_parse_retry_after_azure() {
let err = Error {
r#type: None,
message: Some("Rate limit exceeded. Try again in 35 seconds.".to_string()),
code: Some("rate_limit_exceeded".to_string()),
plan_type: None,
resets_at: None,
};
let delay = try_parse_retry_after(&err);
assert_eq!(delay, Some(Duration::from_secs(35)));
}
#[test]
fn error_response_deserializes_schema_known_plan_type_and_serializes_back() {
use crate::token_data::KnownPlan;

View File

@@ -58,7 +58,7 @@ use crate::client_common::ResponseEvent;
use crate::config::Config;
use crate::config::types::McpServerTransportConfig;
use crate::config::types::ShellEnvironmentPolicy;
use crate::conversation_history::ConversationHistory;
use crate::context_manager::ContextManager;
use crate::environment_context::EnvironmentContext;
use crate::error::CodexErr;
use crate::error::Result as CodexResult;
@@ -553,7 +553,7 @@ impl Session {
None
} else {
Some(format!(
"You can either enable it using the CLI with `--enable {canonical}` or through the config.toml file with `[features].{canonical}`"
"Enable it with `--enable {canonical}` or `[features].{canonical}` in config.toml. See https://github.com/openai/codex/blob/main/docs/config.md#feature-flags for details."
))
};
post_session_configured_events.push(Event {
@@ -945,7 +945,7 @@ impl Session {
turn_context: &TurnContext,
rollout_items: &[RolloutItem],
) -> Vec<ResponseItem> {
let mut history = ConversationHistory::new();
let mut history = ContextManager::new();
for item in rollout_items {
match item {
RolloutItem::ResponseItem(response_item) => {
@@ -1032,7 +1032,7 @@ impl Session {
}
}
pub(crate) async fn clone_history(&self) -> ConversationHistory {
pub(crate) async fn clone_history(&self) -> ContextManager {
let state = self.state.lock().await;
state.clone_history()
}
@@ -1773,19 +1773,14 @@ pub(crate) async fn run_task(
sess.clone_history().await.get_history_for_prompt()
};
let turn_input_messages: Vec<String> = turn_input
let turn_input_messages = turn_input
.iter()
.filter_map(|item| match item {
ResponseItem::Message { content, .. } => Some(content),
.filter_map(|item| match parse_turn_item(item) {
Some(TurnItem::UserMessage(user_message)) => Some(user_message),
_ => None,
})
.flat_map(|content| {
content.iter().filter_map(|item| match item {
ContentItem::OutputText { text } => Some(text.clone()),
_ => None,
})
})
.collect();
.map(|user_message| user_message.message())
.collect::<Vec<String>>();
match run_turn(
Arc::clone(&sess),
Arc::clone(&turn_context),
@@ -1933,6 +1928,7 @@ async fn run_turn(
return Err(CodexErr::UsageLimitReached(e));
}
Err(CodexErr::UsageNotIncluded) => return Err(CodexErr::UsageNotIncluded),
Err(e @ CodexErr::RefreshTokenFailed(_)) => return Err(e),
Err(e) => {
// Use the configured provider-specific stream retry budget.
let max_retries = turn_context.client.get_provider().stream_max_retries();
@@ -1951,7 +1947,7 @@ async fn run_turn(
// at a seemingly frozen screen.
sess.notify_stream_error(
&turn_context,
format!("Re-connecting... {retries}/{max_retries}"),
format!("Reconnecting... {retries}/{max_retries}"),
)
.await;
@@ -2839,7 +2835,7 @@ mod tests {
turn_context: &TurnContext,
) -> (Vec<RolloutItem>, Vec<ResponseItem>) {
let mut rollout_items = Vec::new();
let mut live_history = ConversationHistory::new();
let mut live_history = ContextManager::new();
let initial_context = session.build_initial_context(turn_context);
for item in &initial_context {

View File

@@ -16,7 +16,6 @@ use crate::protocol::TurnContextItem;
use crate::protocol::WarningEvent;
use crate::truncate::truncate_middle;
use crate::util::backoff;
use askama::Template;
use codex_protocol::items::TurnItem;
use codex_protocol::models::ContentItem;
use codex_protocol::models::ResponseInputItem;
@@ -29,13 +28,6 @@ use tracing::error;
pub const SUMMARIZATION_PROMPT: &str = include_str!("../../templates/compact/prompt.md");
const COMPACT_USER_MESSAGE_MAX_TOKENS: usize = 20_000;
#[derive(Template)]
#[template(path = "compact/history_bridge.md", escape = "none")]
struct HistoryBridgeTemplate<'a> {
user_messages_text: &'a str,
summary_text: &'a str,
}
pub(crate) async fn run_inline_auto_compact_task(
sess: Arc<Session>,
turn_context: Arc<TurnContext>,
@@ -150,6 +142,7 @@ async fn run_compact_task_inner(
let history_snapshot = sess.clone_history().await.get_history();
let summary_text = get_last_assistant_message_from_turn(&history_snapshot).unwrap_or_default();
let user_messages = collect_user_messages(&history_snapshot);
let initial_context = sess.build_initial_context(turn_context.as_ref());
let mut new_history = build_compacted_history(initial_context, &user_messages, &summary_text);
let ghost_snapshots: Vec<ResponseItem> = history_snapshot
@@ -224,33 +217,47 @@ fn build_compacted_history_with_limit(
summary_text: &str,
max_bytes: usize,
) -> Vec<ResponseItem> {
let mut user_messages_text = if user_messages.is_empty() {
"(none)".to_string()
} else {
user_messages.join("\n\n")
};
// Truncate the concatenated prior user messages so the bridge message
// stays well under the context window (approx. 4 bytes/token).
if user_messages_text.len() > max_bytes {
user_messages_text = truncate_middle(&user_messages_text, max_bytes).0;
let mut selected_messages: Vec<String> = Vec::new();
if max_bytes > 0 {
let mut remaining = max_bytes;
for message in user_messages.iter().rev() {
if remaining == 0 {
break;
}
if message.len() <= remaining {
selected_messages.push(message.clone());
remaining = remaining.saturating_sub(message.len());
} else {
let (truncated, _) = truncate_middle(message, remaining);
selected_messages.push(truncated);
break;
}
}
selected_messages.reverse();
}
for message in &selected_messages {
history.push(ResponseItem::Message {
id: None,
role: "user".to_string(),
content: vec![ContentItem::InputText {
text: message.clone(),
}],
});
}
let summary_text = if summary_text.is_empty() {
"(no summary available)".to_string()
} else {
summary_text.to_string()
};
let Ok(bridge) = HistoryBridgeTemplate {
user_messages_text: &user_messages_text,
summary_text: &summary_text,
}
.render() else {
return vec![];
};
history.push(ResponseItem::Message {
id: None,
role: "user".to_string(),
content: vec![ContentItem::InputText { text: bridge }],
content: vec![ContentItem::InputText { text: summary_text }],
});
history
}
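The net effect: the byte budget is now spent newest-first across whole messages, and only the message that straddles the limit is middle-truncated. A standalone sketch of the selection loop, with `truncate_middle` stubbed to a head-only cut for brevity:
```
// Self-contained sketch; the real truncate_middle keeps head and tail around
// an elision marker, while the stub here just keeps the head.
fn truncate_middle(s: &str, max_bytes: usize) -> (String, usize) {
    (s.chars().take(max_bytes).collect(), 0)
}

fn select_messages(user_messages: &[String], max_bytes: usize) -> Vec<String> {
    let mut selected: Vec<String> = Vec::new();
    let mut remaining = max_bytes;
    for message in user_messages.iter().rev() {
        if remaining == 0 {
            break;
        }
        if message.len() <= remaining {
            selected.push(message.clone());
            remaining -= message.len();
        } else {
            let (truncated, _) = truncate_middle(message, remaining);
            selected.push(truncated);
            break;
        }
    }
    selected.reverse(); // restore oldest-to-newest order
    selected
}

fn main() {
    let msgs = vec![
        "oldest".to_string(),
        "middle".to_string(),
        "newest".to_string(),
    ];
    // A 9-byte budget keeps "newest" whole and truncates "middle" to 3 bytes.
    assert_eq!(select_messages(&msgs, 9), vec!["mid".to_string(), "newest".to_string()]);
}
```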
@@ -390,30 +397,55 @@ mod tests {
"SUMMARY",
max_bytes,
);
assert_eq!(history.len(), 2);
// Expect exactly one bridge message added to history (plus any initial context we provided, which is none).
assert_eq!(history.len(), 1);
let truncated_message = &history[0];
let summary_message = &history[1];
// Extract the text content of the bridge message.
let bridge_text = match &history[0] {
let truncated_text = match truncated_message {
ResponseItem::Message { role, content, .. } if role == "user" => {
content_items_to_text(content).unwrap_or_default()
}
other => panic!("unexpected item in history: {other:?}"),
};
// The bridge should contain the truncation marker and not the full original payload.
assert!(
bridge_text.contains("tokens truncated"),
"expected truncation marker in bridge message"
truncated_text.contains("tokens truncated"),
"expected truncation marker in truncated user message"
);
assert!(
!bridge_text.contains(&big),
"bridge should not include the full oversized user text"
!truncated_text.contains(&big),
"truncated user message should not include the full oversized user text"
);
let summary_text = match summary_message {
ResponseItem::Message { role, content, .. } if role == "user" => {
content_items_to_text(content).unwrap_or_default()
}
other => panic!("unexpected item in history: {other:?}"),
};
assert_eq!(summary_text, "SUMMARY");
}
#[test]
fn build_compacted_history_appends_summary_message() {
let initial_context: Vec<ResponseItem> = Vec::new();
let user_messages = vec!["first user message".to_string()];
let summary_text = "summary text";
let history = build_compacted_history(initial_context, &user_messages, summary_text);
assert!(
bridge_text.contains("SUMMARY"),
"bridge should include the provided summary text"
!history.is_empty(),
"expected compacted history to include summary"
);
let last = history.last().expect("history should have a summary entry");
let summary = match last {
ResponseItem::Message { role, content, .. } if role == "user" => {
content_items_to_text(content).unwrap_or_default()
}
other => panic!("expected summary message, found {other:?}"),
};
assert_eq!(summary, summary_text);
}
}

View File

@@ -158,6 +158,11 @@ async fn forward_events(
) {
while let Ok(event) = codex.next_event().await {
match event {
// ignore all legacy delta events
Event {
id: _,
msg: EventMsg::AgentMessageDelta(_) | EventMsg::AgentReasoningDelta(_),
} => continue,
Event {
id: _,
msg: EventMsg::SessionConfigured(_),

View File

@@ -0,0 +1,174 @@
use codex_protocol::models::FunctionCallOutputPayload;
use codex_protocol::models::ResponseItem;
use codex_protocol::protocol::TokenUsage;
use codex_protocol::protocol::TokenUsageInfo;
use std::ops::Deref;
use crate::context_manager::normalize;
use crate::context_manager::truncate::format_output_for_model_body;
use crate::context_manager::truncate::globally_truncate_function_output_items;
/// Transcript of conversation history
#[derive(Debug, Clone, Default)]
pub(crate) struct ContextManager {
/// The oldest items are at the beginning of the vector.
items: Vec<ResponseItem>,
token_info: Option<TokenUsageInfo>,
}
impl ContextManager {
pub(crate) fn new() -> Self {
Self {
items: Vec::new(),
token_info: TokenUsageInfo::new_or_append(&None, &None, None),
}
}
pub(crate) fn token_info(&self) -> Option<TokenUsageInfo> {
self.token_info.clone()
}
pub(crate) fn set_token_usage_full(&mut self, context_window: i64) {
match &mut self.token_info {
Some(info) => info.fill_to_context_window(context_window),
None => {
self.token_info = Some(TokenUsageInfo::full_context_window(context_window));
}
}
}
/// `items` is ordered from oldest to newest.
pub(crate) fn record_items<I>(&mut self, items: I)
where
I: IntoIterator,
I::Item: std::ops::Deref<Target = ResponseItem>,
{
for item in items {
let item_ref = item.deref();
let is_ghost_snapshot = matches!(item_ref, ResponseItem::GhostSnapshot { .. });
if !is_api_message(item_ref) && !is_ghost_snapshot {
continue;
}
let processed = Self::process_item(&item);
self.items.push(processed);
}
}
pub(crate) fn get_history(&mut self) -> Vec<ResponseItem> {
self.normalize_history();
self.contents()
}
// Returns the history prepared for sending to the model,
// with extra response items filtered out and ghost snapshots removed.
pub(crate) fn get_history_for_prompt(&mut self) -> Vec<ResponseItem> {
let mut history = self.get_history();
Self::remove_ghost_snapshots(&mut history);
history
}
pub(crate) fn remove_first_item(&mut self) {
if !self.items.is_empty() {
// Remove the oldest item (front of the list). Items are ordered from
// oldest → newest, so index 0 is the first entry recorded.
let removed = self.items.remove(0);
// If the removed item participates in a call/output pair, also remove
// its corresponding counterpart to keep the invariants intact without
// running a full normalization pass.
normalize::remove_corresponding_for(&mut self.items, &removed);
}
}
pub(crate) fn replace(&mut self, items: Vec<ResponseItem>) {
self.items = items;
}
pub(crate) fn update_token_info(
&mut self,
usage: &TokenUsage,
model_context_window: Option<i64>,
) {
self.token_info = TokenUsageInfo::new_or_append(
&self.token_info,
&Some(usage.clone()),
model_context_window,
);
}
/// This function enforces a couple of invariants on the in-memory history:
/// 1. every call (function/custom) has a corresponding output entry
/// 2. every output has a corresponding call entry
fn normalize_history(&mut self) {
// all function/tool calls must have a corresponding output
normalize::ensure_call_outputs_present(&mut self.items);
// all outputs must have a corresponding function/tool call
normalize::remove_orphan_outputs(&mut self.items);
}
/// Returns a clone of the contents in the transcript.
fn contents(&self) -> Vec<ResponseItem> {
self.items.clone()
}
fn remove_ghost_snapshots(items: &mut Vec<ResponseItem>) {
items.retain(|item| !matches!(item, ResponseItem::GhostSnapshot { .. }));
}
fn process_item(item: &ResponseItem) -> ResponseItem {
match item {
ResponseItem::FunctionCallOutput { call_id, output } => {
let truncated = format_output_for_model_body(output.content.as_str());
let truncated_items = output
.content_items
.as_ref()
.map(|items| globally_truncate_function_output_items(items));
ResponseItem::FunctionCallOutput {
call_id: call_id.clone(),
output: FunctionCallOutputPayload {
content: truncated,
content_items: truncated_items,
success: output.success,
},
}
}
ResponseItem::CustomToolCallOutput { call_id, output } => {
let truncated = format_output_for_model_body(output);
ResponseItem::CustomToolCallOutput {
call_id: call_id.clone(),
output: truncated,
}
}
ResponseItem::Message { .. }
| ResponseItem::Reasoning { .. }
| ResponseItem::LocalShellCall { .. }
| ResponseItem::FunctionCall { .. }
| ResponseItem::WebSearchCall { .. }
| ResponseItem::CustomToolCall { .. }
| ResponseItem::GhostSnapshot { .. }
| ResponseItem::Other => item.clone(),
}
}
}
/// API messages include every non-system item (user/assistant messages, reasoning,
/// tool calls, tool outputs, shell calls, and web-search calls).
fn is_api_message(message: &ResponseItem) -> bool {
match message {
ResponseItem::Message { role, .. } => role.as_str() != "system",
ResponseItem::FunctionCallOutput { .. }
| ResponseItem::FunctionCall { .. }
| ResponseItem::CustomToolCall { .. }
| ResponseItem::CustomToolCallOutput { .. }
| ResponseItem::LocalShellCall { .. }
| ResponseItem::Reasoning { .. }
| ResponseItem::WebSearchCall { .. } => true,
ResponseItem::GhostSnapshot { .. } => false,
ResponseItem::Other => false,
}
}
#[cfg(test)]
#[path = "history_tests.rs"]
mod tests;

View File

@@ -0,0 +1,841 @@
use super::*;
use crate::context_manager::truncate;
use codex_git::GhostCommit;
use codex_protocol::models::ContentItem;
use codex_protocol::models::FunctionCallOutputContentItem;
use codex_protocol::models::FunctionCallOutputPayload;
use codex_protocol::models::LocalShellAction;
use codex_protocol::models::LocalShellExecAction;
use codex_protocol::models::LocalShellStatus;
use codex_protocol::models::ReasoningItemContent;
use codex_protocol::models::ReasoningItemReasoningSummary;
use pretty_assertions::assert_eq;
use regex_lite::Regex;
fn assistant_msg(text: &str) -> ResponseItem {
ResponseItem::Message {
id: None,
role: "assistant".to_string(),
content: vec![ContentItem::OutputText {
text: text.to_string(),
}],
}
}
fn create_history_with_items(items: Vec<ResponseItem>) -> ContextManager {
let mut h = ContextManager::new();
h.record_items(items.iter());
h
}
fn user_msg(text: &str) -> ResponseItem {
ResponseItem::Message {
id: None,
role: "user".to_string(),
content: vec![ContentItem::OutputText {
text: text.to_string(),
}],
}
}
fn reasoning_msg(text: &str) -> ResponseItem {
ResponseItem::Reasoning {
id: String::new(),
summary: vec![ReasoningItemReasoningSummary::SummaryText {
text: "summary".to_string(),
}],
content: Some(vec![ReasoningItemContent::ReasoningText {
text: text.to_string(),
}]),
encrypted_content: None,
}
}
#[test]
fn filters_non_api_messages() {
let mut h = ContextManager::default();
// System message is not API messages; Other is ignored.
let system = ResponseItem::Message {
id: None,
role: "system".to_string(),
content: vec![ContentItem::OutputText {
text: "ignored".to_string(),
}],
};
let reasoning = reasoning_msg("thinking...");
h.record_items([&system, &reasoning, &ResponseItem::Other]);
// User and assistant should be retained.
let u = user_msg("hi");
let a = assistant_msg("hello");
h.record_items([&u, &a]);
let items = h.contents();
assert_eq!(
items,
vec![
ResponseItem::Reasoning {
id: String::new(),
summary: vec![ReasoningItemReasoningSummary::SummaryText {
text: "summary".to_string(),
}],
content: Some(vec![ReasoningItemContent::ReasoningText {
text: "thinking...".to_string(),
}]),
encrypted_content: None,
},
ResponseItem::Message {
id: None,
role: "user".to_string(),
content: vec![ContentItem::OutputText {
text: "hi".to_string()
}]
},
ResponseItem::Message {
id: None,
role: "assistant".to_string(),
content: vec![ContentItem::OutputText {
text: "hello".to_string()
}]
}
]
);
}
#[test]
fn get_history_for_prompt_drops_ghost_commits() {
let items = vec![ResponseItem::GhostSnapshot {
ghost_commit: GhostCommit::new("ghost-1".to_string(), None, Vec::new(), Vec::new()),
}];
let mut history = create_history_with_items(items);
let filtered = history.get_history_for_prompt();
assert_eq!(filtered, vec![]);
}
#[test]
fn remove_first_item_removes_matching_output_for_function_call() {
let items = vec![
ResponseItem::FunctionCall {
id: None,
name: "do_it".to_string(),
arguments: "{}".to_string(),
call_id: "call-1".to_string(),
},
ResponseItem::FunctionCallOutput {
call_id: "call-1".to_string(),
output: FunctionCallOutputPayload {
content: "ok".to_string(),
..Default::default()
},
},
];
let mut h = create_history_with_items(items);
h.remove_first_item();
assert_eq!(h.contents(), vec![]);
}
#[test]
fn remove_first_item_removes_matching_call_for_output() {
let items = vec![
ResponseItem::FunctionCallOutput {
call_id: "call-2".to_string(),
output: FunctionCallOutputPayload {
content: "ok".to_string(),
..Default::default()
},
},
ResponseItem::FunctionCall {
id: None,
name: "do_it".to_string(),
arguments: "{}".to_string(),
call_id: "call-2".to_string(),
},
];
let mut h = create_history_with_items(items);
h.remove_first_item();
assert_eq!(h.contents(), vec![]);
}
#[test]
fn remove_first_item_handles_local_shell_pair() {
let items = vec![
ResponseItem::LocalShellCall {
id: None,
call_id: Some("call-3".to_string()),
status: LocalShellStatus::Completed,
action: LocalShellAction::Exec(LocalShellExecAction {
command: vec!["echo".to_string(), "hi".to_string()],
timeout_ms: None,
working_directory: None,
env: None,
user: None,
}),
},
ResponseItem::FunctionCallOutput {
call_id: "call-3".to_string(),
output: FunctionCallOutputPayload {
content: "ok".to_string(),
..Default::default()
},
},
];
let mut h = create_history_with_items(items);
h.remove_first_item();
assert_eq!(h.contents(), vec![]);
}
#[test]
fn remove_first_item_handles_custom_tool_pair() {
let items = vec![
ResponseItem::CustomToolCall {
id: None,
status: None,
call_id: "tool-1".to_string(),
name: "my_tool".to_string(),
input: "{}".to_string(),
},
ResponseItem::CustomToolCallOutput {
call_id: "tool-1".to_string(),
output: "ok".to_string(),
},
];
let mut h = create_history_with_items(items);
h.remove_first_item();
assert_eq!(h.contents(), vec![]);
}
#[test]
fn normalization_retains_local_shell_outputs() {
let items = vec![
ResponseItem::LocalShellCall {
id: None,
call_id: Some("shell-1".to_string()),
status: LocalShellStatus::Completed,
action: LocalShellAction::Exec(LocalShellExecAction {
command: vec!["echo".to_string(), "hi".to_string()],
timeout_ms: None,
working_directory: None,
env: None,
user: None,
}),
},
ResponseItem::FunctionCallOutput {
call_id: "shell-1".to_string(),
output: FunctionCallOutputPayload {
content: "ok".to_string(),
..Default::default()
},
},
];
let mut history = create_history_with_items(items.clone());
let normalized = history.get_history();
assert_eq!(normalized, items);
}
#[test]
fn record_items_truncates_function_call_output_content() {
let mut history = ContextManager::new();
let long_line = "a very long line to trigger truncation\n";
let long_output = long_line.repeat(2_500);
let item = ResponseItem::FunctionCallOutput {
call_id: "call-100".to_string(),
output: FunctionCallOutputPayload {
content: long_output.clone(),
success: Some(true),
..Default::default()
},
};
history.record_items([&item]);
assert_eq!(history.items.len(), 1);
match &history.items[0] {
ResponseItem::FunctionCallOutput { output, .. } => {
assert_ne!(output.content, long_output);
assert!(
output.content.starts_with("Total output lines:"),
"expected truncated summary, got {}",
output.content
);
}
other => panic!("unexpected history item: {other:?}"),
}
}
#[test]
fn record_items_truncates_custom_tool_call_output_content() {
let mut history = ContextManager::new();
let line = "custom output that is very long\n";
let long_output = line.repeat(2_500);
let item = ResponseItem::CustomToolCallOutput {
call_id: "tool-200".to_string(),
output: long_output.clone(),
};
history.record_items([&item]);
assert_eq!(history.items.len(), 1);
match &history.items[0] {
ResponseItem::CustomToolCallOutput { output, .. } => {
assert_ne!(output, &long_output);
assert!(
output.starts_with("Total output lines:"),
"expected truncated summary, got {output}"
);
}
other => panic!("unexpected history item: {other:?}"),
}
}
fn assert_truncated_message_matches(message: &str, line: &str, total_lines: usize) {
let pattern = truncated_message_pattern(line, total_lines);
let regex = Regex::new(&pattern).unwrap_or_else(|err| {
panic!("failed to compile regex {pattern}: {err}");
});
let captures = regex
.captures(message)
.unwrap_or_else(|| panic!("message failed to match pattern {pattern}: {message}"));
let body = captures
.name("body")
.expect("missing body capture")
.as_str();
assert!(
body.len() <= truncate::MODEL_FORMAT_MAX_BYTES,
"body exceeds byte limit: {} bytes",
body.len()
);
}
fn truncated_message_pattern(line: &str, total_lines: usize) -> String {
let head_take = truncate::MODEL_FORMAT_HEAD_LINES.min(total_lines);
let tail_take = truncate::MODEL_FORMAT_TAIL_LINES.min(total_lines.saturating_sub(head_take));
let omitted = total_lines.saturating_sub(head_take + tail_take);
let escaped_line = regex_lite::escape(line);
if omitted == 0 {
return format!(
r"(?s)^Total output lines: {total_lines}\n\n(?P<body>{escaped_line}.*\n\[\.{{3}} output truncated to fit {max_bytes} bytes \.{{3}}]\n\n.*)$",
max_bytes = truncate::MODEL_FORMAT_MAX_BYTES,
);
}
format!(
r"(?s)^Total output lines: {total_lines}\n\n(?P<body>{escaped_line}.*\n\[\.{{3}} omitted {omitted} of {total_lines} lines \.{{3}}]\n\n.*)$",
)
}
#[test]
fn format_exec_output_truncates_large_error() {
let line = "very long execution error line that should trigger truncation\n";
let large_error = line.repeat(2_500); // way beyond both byte and line limits
let truncated = truncate::format_output_for_model_body(&large_error);
let total_lines = large_error.lines().count();
assert_truncated_message_matches(&truncated, line, total_lines);
assert_ne!(truncated, large_error);
}
#[test]
fn format_exec_output_marks_byte_truncation_without_omitted_lines() {
let long_line = "a".repeat(truncate::MODEL_FORMAT_MAX_BYTES + 50);
let truncated = truncate::format_output_for_model_body(&long_line);
assert_ne!(truncated, long_line);
let marker_line = format!(
"[... output truncated to fit {} bytes ...]",
truncate::MODEL_FORMAT_MAX_BYTES
);
assert!(
truncated.contains(&marker_line),
"missing byte truncation marker: {truncated}"
);
assert!(
!truncated.contains("omitted"),
"line omission marker should not appear when no lines were dropped: {truncated}"
);
}
#[test]
fn format_exec_output_returns_original_when_within_limits() {
let content = "example output\n".repeat(10);
assert_eq!(truncate::format_output_for_model_body(&content), content);
}
#[test]
fn format_exec_output_reports_omitted_lines_and_keeps_head_and_tail() {
let total_lines = truncate::MODEL_FORMAT_MAX_LINES + 100;
let content: String = (0..total_lines)
.map(|idx| format!("line-{idx}\n"))
.collect();
let truncated = truncate::format_output_for_model_body(&content);
let omitted = total_lines - truncate::MODEL_FORMAT_MAX_LINES;
let expected_marker = format!("[... omitted {omitted} of {total_lines} lines ...]");
assert!(
truncated.contains(&expected_marker),
"missing omitted marker: {truncated}"
);
assert!(
truncated.contains("line-0\n"),
"expected head line to remain: {truncated}"
);
let last_line = format!("line-{}\n", total_lines - 1);
assert!(
truncated.contains(&last_line),
"expected tail line to remain: {truncated}"
);
}
#[test]
fn format_exec_output_prefers_line_marker_when_both_limits_exceeded() {
let total_lines = truncate::MODEL_FORMAT_MAX_LINES + 42;
let long_line = "x".repeat(256);
let content: String = (0..total_lines)
.map(|idx| format!("line-{idx}-{long_line}\n"))
.collect();
let truncated = truncate::format_output_for_model_body(&content);
assert!(
truncated.contains("[... omitted 42 of 298 lines ...]"),
"expected omitted marker when line count exceeds limit: {truncated}"
);
assert!(
!truncated.contains("output truncated to fit"),
"line omission marker should take precedence over byte marker: {truncated}"
);
}
#[test]
fn truncates_across_multiple_under_limit_texts_and_reports_omitted() {
// Arrange: several text items, none exceeding per-item limit, but total exceeds budget.
let budget = truncate::MODEL_FORMAT_MAX_BYTES;
let t1_len = (budget / 2).saturating_sub(10);
let t2_len = (budget / 2).saturating_sub(10);
let remaining_after_t1_t2 = budget.saturating_sub(t1_len + t2_len);
let t3_len = 50; // gets truncated to remaining_after_t1_t2
let t4_len = 5; // omitted
let t5_len = 7; // omitted
let t1 = "a".repeat(t1_len);
let t2 = "b".repeat(t2_len);
let t3 = "c".repeat(t3_len);
let t4 = "d".repeat(t4_len);
let t5 = "e".repeat(t5_len);
let item = ResponseItem::FunctionCallOutput {
call_id: "call-omit".to_string(),
output: FunctionCallOutputPayload {
content: "irrelevant".to_string(),
content_items: Some(vec![
FunctionCallOutputContentItem::InputText { text: t1 },
FunctionCallOutputContentItem::InputText { text: t2 },
FunctionCallOutputContentItem::InputImage {
image_url: "img:mid".to_string(),
},
FunctionCallOutputContentItem::InputText { text: t3 },
FunctionCallOutputContentItem::InputText { text: t4 },
FunctionCallOutputContentItem::InputText { text: t5 },
]),
success: Some(true),
},
};
let mut history = ContextManager::new();
history.record_items([&item]);
assert_eq!(history.items.len(), 1);
let json = serde_json::to_value(&history.items[0]).expect("serialize to json");
let output = json
.get("output")
.expect("output field")
.as_array()
.expect("array output");
// Expect: t1 (full), t2 (full), image, t3 (truncated), summary mentioning 2 omitted.
assert_eq!(output.len(), 5);
let first = output[0].as_object().expect("first obj");
assert_eq!(first.get("type").unwrap(), "input_text");
let first_text = first.get("text").unwrap().as_str().unwrap();
assert_eq!(first_text.len(), t1_len);
let second = output[1].as_object().expect("second obj");
assert_eq!(second.get("type").unwrap(), "input_text");
let second_text = second.get("text").unwrap().as_str().unwrap();
assert_eq!(second_text.len(), t2_len);
assert_eq!(
output[2],
serde_json::json!({"type": "input_image", "image_url": "img:mid"})
);
let fourth = output[3].as_object().expect("fourth obj");
assert_eq!(fourth.get("type").unwrap(), "input_text");
let fourth_text = fourth.get("text").unwrap().as_str().unwrap();
assert_eq!(fourth_text.len(), remaining_after_t1_t2);
let summary = output[4].as_object().expect("summary obj");
assert_eq!(summary.get("type").unwrap(), "input_text");
let summary_text = summary.get("text").unwrap().as_str().unwrap();
assert!(summary_text.contains("omitted 2 text items"));
}
// TODO(aibrahim): run CI in release mode.
#[cfg(not(debug_assertions))]
#[test]
fn normalize_adds_missing_output_for_function_call() {
let items = vec![ResponseItem::FunctionCall {
id: None,
name: "do_it".to_string(),
arguments: "{}".to_string(),
call_id: "call-x".to_string(),
}];
let mut h = create_history_with_items(items);
h.normalize_history();
assert_eq!(
h.contents(),
vec![
ResponseItem::FunctionCall {
id: None,
name: "do_it".to_string(),
arguments: "{}".to_string(),
call_id: "call-x".to_string(),
},
ResponseItem::FunctionCallOutput {
call_id: "call-x".to_string(),
output: FunctionCallOutputPayload {
content: "aborted".to_string(),
..Default::default()
},
},
]
);
}
#[cfg(not(debug_assertions))]
#[test]
fn normalize_adds_missing_output_for_custom_tool_call() {
let items = vec![ResponseItem::CustomToolCall {
id: None,
status: None,
call_id: "tool-x".to_string(),
name: "custom".to_string(),
input: "{}".to_string(),
}];
let mut h = create_history_with_items(items);
h.normalize_history();
assert_eq!(
h.contents(),
vec![
ResponseItem::CustomToolCall {
id: None,
status: None,
call_id: "tool-x".to_string(),
name: "custom".to_string(),
input: "{}".to_string(),
},
ResponseItem::CustomToolCallOutput {
call_id: "tool-x".to_string(),
output: "aborted".to_string(),
},
]
);
}
#[cfg(not(debug_assertions))]
#[test]
fn normalize_adds_missing_output_for_local_shell_call_with_id() {
let items = vec![ResponseItem::LocalShellCall {
id: None,
call_id: Some("shell-1".to_string()),
status: LocalShellStatus::Completed,
action: LocalShellAction::Exec(LocalShellExecAction {
command: vec!["echo".to_string(), "hi".to_string()],
timeout_ms: None,
working_directory: None,
env: None,
user: None,
}),
}];
let mut h = create_history_with_items(items);
h.normalize_history();
assert_eq!(
h.contents(),
vec![
ResponseItem::LocalShellCall {
id: None,
call_id: Some("shell-1".to_string()),
status: LocalShellStatus::Completed,
action: LocalShellAction::Exec(LocalShellExecAction {
command: vec!["echo".to_string(), "hi".to_string()],
timeout_ms: None,
working_directory: None,
env: None,
user: None,
}),
},
ResponseItem::FunctionCallOutput {
call_id: "shell-1".to_string(),
output: FunctionCallOutputPayload {
content: "aborted".to_string(),
..Default::default()
},
},
]
);
}
#[cfg(not(debug_assertions))]
#[test]
fn normalize_removes_orphan_function_call_output() {
let items = vec![ResponseItem::FunctionCallOutput {
call_id: "orphan-1".to_string(),
output: FunctionCallOutputPayload {
content: "ok".to_string(),
..Default::default()
},
}];
let mut h = create_history_with_items(items);
h.normalize_history();
assert_eq!(h.contents(), vec![]);
}
#[cfg(not(debug_assertions))]
#[test]
fn normalize_removes_orphan_custom_tool_call_output() {
let items = vec![ResponseItem::CustomToolCallOutput {
call_id: "orphan-2".to_string(),
output: "ok".to_string(),
}];
let mut h = create_history_with_items(items);
h.normalize_history();
assert_eq!(h.contents(), vec![]);
}
#[cfg(not(debug_assertions))]
#[test]
fn normalize_mixed_inserts_and_removals() {
let items = vec![
// Will get an inserted output
ResponseItem::FunctionCall {
id: None,
name: "f1".to_string(),
arguments: "{}".to_string(),
call_id: "c1".to_string(),
},
// Orphan output that should be removed
ResponseItem::FunctionCallOutput {
call_id: "c2".to_string(),
output: FunctionCallOutputPayload {
content: "ok".to_string(),
..Default::default()
},
},
// Will get an inserted custom tool output
ResponseItem::CustomToolCall {
id: None,
status: None,
call_id: "t1".to_string(),
name: "tool".to_string(),
input: "{}".to_string(),
},
// Local shell call also gets an inserted function call output
ResponseItem::LocalShellCall {
id: None,
call_id: Some("s1".to_string()),
status: LocalShellStatus::Completed,
action: LocalShellAction::Exec(LocalShellExecAction {
command: vec!["echo".to_string()],
timeout_ms: None,
working_directory: None,
env: None,
user: None,
}),
},
];
let mut h = create_history_with_items(items);
h.normalize_history();
assert_eq!(
h.contents(),
vec![
ResponseItem::FunctionCall {
id: None,
name: "f1".to_string(),
arguments: "{}".to_string(),
call_id: "c1".to_string(),
},
ResponseItem::FunctionCallOutput {
call_id: "c1".to_string(),
output: FunctionCallOutputPayload {
content: "aborted".to_string(),
..Default::default()
},
},
ResponseItem::CustomToolCall {
id: None,
status: None,
call_id: "t1".to_string(),
name: "tool".to_string(),
input: "{}".to_string(),
},
ResponseItem::CustomToolCallOutput {
call_id: "t1".to_string(),
output: "aborted".to_string(),
},
ResponseItem::LocalShellCall {
id: None,
call_id: Some("s1".to_string()),
status: LocalShellStatus::Completed,
action: LocalShellAction::Exec(LocalShellExecAction {
command: vec!["echo".to_string()],
timeout_ms: None,
working_directory: None,
env: None,
user: None,
}),
},
ResponseItem::FunctionCallOutput {
call_id: "s1".to_string(),
output: FunctionCallOutputPayload {
content: "aborted".to_string(),
..Default::default()
},
},
]
);
}
// In debug builds we panic on normalization errors instead of silently fixing them.
#[cfg(debug_assertions)]
#[test]
#[should_panic]
fn normalize_adds_missing_output_for_function_call_panics_in_debug() {
let items = vec![ResponseItem::FunctionCall {
id: None,
name: "do_it".to_string(),
arguments: "{}".to_string(),
call_id: "call-x".to_string(),
}];
let mut h = create_history_with_items(items);
h.normalize_history();
}
#[cfg(debug_assertions)]
#[test]
#[should_panic]
fn normalize_adds_missing_output_for_custom_tool_call_panics_in_debug() {
let items = vec![ResponseItem::CustomToolCall {
id: None,
status: None,
call_id: "tool-x".to_string(),
name: "custom".to_string(),
input: "{}".to_string(),
}];
let mut h = create_history_with_items(items);
h.normalize_history();
}
#[cfg(debug_assertions)]
#[test]
#[should_panic]
fn normalize_adds_missing_output_for_local_shell_call_with_id_panics_in_debug() {
let items = vec![ResponseItem::LocalShellCall {
id: None,
call_id: Some("shell-1".to_string()),
status: LocalShellStatus::Completed,
action: LocalShellAction::Exec(LocalShellExecAction {
command: vec!["echo".to_string(), "hi".to_string()],
timeout_ms: None,
working_directory: None,
env: None,
user: None,
}),
}];
let mut h = create_history_with_items(items);
h.normalize_history();
}
#[cfg(debug_assertions)]
#[test]
#[should_panic]
fn normalize_removes_orphan_function_call_output_panics_in_debug() {
let items = vec![ResponseItem::FunctionCallOutput {
call_id: "orphan-1".to_string(),
output: FunctionCallOutputPayload {
content: "ok".to_string(),
..Default::default()
},
}];
let mut h = create_history_with_items(items);
h.normalize_history();
}
#[cfg(debug_assertions)]
#[test]
#[should_panic]
fn normalize_removes_orphan_custom_tool_call_output_panics_in_debug() {
let items = vec![ResponseItem::CustomToolCallOutput {
call_id: "orphan-2".to_string(),
output: "ok".to_string(),
}];
let mut h = create_history_with_items(items);
h.normalize_history();
}
#[cfg(debug_assertions)]
#[test]
#[should_panic]
fn normalize_mixed_inserts_and_removals_panics_in_debug() {
let items = vec![
ResponseItem::FunctionCall {
id: None,
name: "f1".to_string(),
arguments: "{}".to_string(),
call_id: "c1".to_string(),
},
ResponseItem::FunctionCallOutput {
call_id: "c2".to_string(),
output: FunctionCallOutputPayload {
content: "ok".to_string(),
..Default::default()
},
},
ResponseItem::CustomToolCall {
id: None,
status: None,
call_id: "t1".to_string(),
name: "tool".to_string(),
input: "{}".to_string(),
},
ResponseItem::LocalShellCall {
id: None,
call_id: Some("s1".to_string()),
status: LocalShellStatus::Completed,
action: LocalShellAction::Exec(LocalShellExecAction {
command: vec!["echo".to_string()],
timeout_ms: None,
working_directory: None,
env: None,
user: None,
}),
},
];
let mut h = create_history_with_items(items);
h.normalize_history();
}

View File

@@ -0,0 +1,6 @@
mod history;
mod normalize;
mod truncate;
pub(crate) use history::ContextManager;
pub(crate) use truncate::format_output_for_model_body;

View File

@@ -0,0 +1,213 @@
use std::collections::HashSet;
use codex_protocol::models::FunctionCallOutputPayload;
use codex_protocol::models::ResponseItem;
use crate::util::error_or_panic;
pub(crate) fn ensure_call_outputs_present(items: &mut Vec<ResponseItem>) {
// Collect synthetic outputs to insert immediately after their calls.
// Store the insertion position (index of call) alongside the item so
// we can insert in reverse order and avoid index shifting.
let mut missing_outputs_to_insert: Vec<(usize, ResponseItem)> = Vec::new();
for (idx, item) in items.iter().enumerate() {
match item {
ResponseItem::FunctionCall { call_id, .. } => {
let has_output = items.iter().any(|i| match i {
ResponseItem::FunctionCallOutput {
call_id: existing, ..
} => existing == call_id,
_ => false,
});
if !has_output {
error_or_panic(format!(
"Function call output is missing for call id: {call_id}"
));
missing_outputs_to_insert.push((
idx,
ResponseItem::FunctionCallOutput {
call_id: call_id.clone(),
output: FunctionCallOutputPayload {
content: "aborted".to_string(),
..Default::default()
},
},
));
}
}
ResponseItem::CustomToolCall { call_id, .. } => {
let has_output = items.iter().any(|i| match i {
ResponseItem::CustomToolCallOutput {
call_id: existing, ..
} => existing == call_id,
_ => false,
});
if !has_output {
error_or_panic(format!(
"Custom tool call output is missing for call id: {call_id}"
));
missing_outputs_to_insert.push((
idx,
ResponseItem::CustomToolCallOutput {
call_id: call_id.clone(),
output: "aborted".to_string(),
},
));
}
}
// LocalShellCall is represented in upstream streams by a FunctionCallOutput
ResponseItem::LocalShellCall { call_id, .. } => {
if let Some(call_id) = call_id.as_ref() {
let has_output = items.iter().any(|i| match i {
ResponseItem::FunctionCallOutput {
call_id: existing, ..
} => existing == call_id,
_ => false,
});
if !has_output {
error_or_panic(format!(
"Local shell call output is missing for call id: {call_id}"
));
missing_outputs_to_insert.push((
idx,
ResponseItem::FunctionCallOutput {
call_id: call_id.clone(),
output: FunctionCallOutputPayload {
content: "aborted".to_string(),
..Default::default()
},
},
));
}
}
}
_ => {}
}
}
// Insert synthetic outputs in reverse index order to avoid re-indexing.
for (idx, output_item) in missing_outputs_to_insert.into_iter().rev() {
items.insert(idx + 1, output_item);
}
}
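A note on why the synthetic outputs are inserted in reverse index order: inserting at the smallest index first would shift every later recorded position. A tiny standalone demonstration:
```
fn main() {
    let mut items = vec!["call-a", "call-b"];
    // Positions recorded while scanning: each call wants an output at idx + 1.
    let pending = vec![(0, "out-a"), (1, "out-b")];
    for (idx, out) in pending.into_iter().rev() {
        items.insert(idx + 1, out);
    }
    assert_eq!(items, vec!["call-a", "out-a", "call-b", "out-b"]);
    // Forward order would have produced ["call-a", "out-a", "out-b", "call-b"]:
    // the first insertion shifts "call-b" to index 2, so (1, "out-b") lands early.
}
```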
pub(crate) fn remove_orphan_outputs(items: &mut Vec<ResponseItem>) {
let function_call_ids: HashSet<String> = items
.iter()
.filter_map(|i| match i {
ResponseItem::FunctionCall { call_id, .. } => Some(call_id.clone()),
_ => None,
})
.collect();
let local_shell_call_ids: HashSet<String> = items
.iter()
.filter_map(|i| match i {
ResponseItem::LocalShellCall {
call_id: Some(call_id),
..
} => Some(call_id.clone()),
_ => None,
})
.collect();
let custom_tool_call_ids: HashSet<String> = items
.iter()
.filter_map(|i| match i {
ResponseItem::CustomToolCall { call_id, .. } => Some(call_id.clone()),
_ => None,
})
.collect();
items.retain(|item| match item {
ResponseItem::FunctionCallOutput { call_id, .. } => {
let has_match =
function_call_ids.contains(call_id) || local_shell_call_ids.contains(call_id);
if !has_match {
error_or_panic(format!(
"Orphan function call output for call id: {call_id}"
));
}
has_match
}
ResponseItem::CustomToolCallOutput { call_id, .. } => {
let has_match = custom_tool_call_ids.contains(call_id);
if !has_match {
error_or_panic(format!(
"Orphan custom tool call output for call id: {call_id}"
));
}
has_match
}
_ => true,
});
}
pub(crate) fn remove_corresponding_for(items: &mut Vec<ResponseItem>, item: &ResponseItem) {
match item {
ResponseItem::FunctionCall { call_id, .. } => {
remove_first_matching(items, |i| {
matches!(
i,
ResponseItem::FunctionCallOutput {
call_id: existing, ..
} if existing == call_id
)
});
}
ResponseItem::FunctionCallOutput { call_id, .. } => {
if let Some(pos) = items.iter().position(|i| {
matches!(i, ResponseItem::FunctionCall { call_id: existing, .. } if existing == call_id)
}) {
items.remove(pos);
} else if let Some(pos) = items.iter().position(|i| {
matches!(i, ResponseItem::LocalShellCall { call_id: Some(existing), .. } if existing == call_id)
}) {
items.remove(pos);
}
}
ResponseItem::CustomToolCall { call_id, .. } => {
remove_first_matching(items, |i| {
matches!(
i,
ResponseItem::CustomToolCallOutput {
call_id: existing, ..
} if existing == call_id
)
});
}
ResponseItem::CustomToolCallOutput { call_id, .. } => {
remove_first_matching(
items,
|i| matches!(i, ResponseItem::CustomToolCall { call_id: existing, .. } if existing == call_id),
);
}
ResponseItem::LocalShellCall {
call_id: Some(call_id),
..
} => {
remove_first_matching(items, |i| {
matches!(
i,
ResponseItem::FunctionCallOutput {
call_id: existing, ..
} if existing == call_id
)
});
}
_ => {}
}
}
fn remove_first_matching<F>(items: &mut Vec<ResponseItem>, predicate: F)
where
F: Fn(&ResponseItem) -> bool,
{
if let Some(pos) = items.iter().position(predicate) {
items.remove(pos);
}
}

View File

@@ -0,0 +1,128 @@
use codex_protocol::models::FunctionCallOutputContentItem;
use codex_utils_string::take_bytes_at_char_boundary;
use codex_utils_string::take_last_bytes_at_char_boundary;
// Model-formatting limits: clients get full streams; only content sent to the model is truncated.
pub(crate) const MODEL_FORMAT_MAX_BYTES: usize = 10 * 1024; // 10 KiB
pub(crate) const MODEL_FORMAT_MAX_LINES: usize = 256; // lines
pub(crate) const MODEL_FORMAT_HEAD_LINES: usize = MODEL_FORMAT_MAX_LINES / 2;
pub(crate) const MODEL_FORMAT_TAIL_LINES: usize = MODEL_FORMAT_MAX_LINES - MODEL_FORMAT_HEAD_LINES; // 128
pub(crate) const MODEL_FORMAT_HEAD_BYTES: usize = MODEL_FORMAT_MAX_BYTES / 2;
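Spelled out, the constants above derive to the following budgets (a quick sanity check; the values follow directly from the definitions):
```
assert_eq!(MODEL_FORMAT_MAX_BYTES, 10_240); // 10 KiB total for the model copy
assert_eq!(MODEL_FORMAT_MAX_LINES, 256);
assert_eq!(MODEL_FORMAT_HEAD_LINES, 128);   // half of the line cap
assert_eq!(MODEL_FORMAT_TAIL_LINES, 128);   // the remaining half
assert_eq!(MODEL_FORMAT_HEAD_BYTES, 5_120); // head gets half the byte budget
```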
pub(crate) fn globally_truncate_function_output_items(
items: &[FunctionCallOutputContentItem],
) -> Vec<FunctionCallOutputContentItem> {
let mut out: Vec<FunctionCallOutputContentItem> = Vec::with_capacity(items.len());
let mut remaining = MODEL_FORMAT_MAX_BYTES;
let mut omitted_text_items = 0usize;
for it in items {
match it {
FunctionCallOutputContentItem::InputText { text } => {
if remaining == 0 {
omitted_text_items += 1;
continue;
}
let len = text.len();
if len <= remaining {
out.push(FunctionCallOutputContentItem::InputText { text: text.clone() });
remaining -= len;
} else {
let slice = take_bytes_at_char_boundary(text, remaining);
if !slice.is_empty() {
out.push(FunctionCallOutputContentItem::InputText {
text: slice.to_string(),
});
}
remaining = 0;
}
}
// TODO(aibrahim): handle input images (resize instead of passing through unchanged).
FunctionCallOutputContentItem::InputImage { image_url } => {
out.push(FunctionCallOutputContentItem::InputImage {
image_url: image_url.clone(),
});
}
}
}
if omitted_text_items > 0 {
out.push(FunctionCallOutputContentItem::InputText {
text: format!("[omitted {omitted_text_items} text items ...]"),
});
}
out
}
pub(crate) fn format_output_for_model_body(content: &str) -> String {
// Head+tail truncation for the model: show the beginning and end with an elision.
// Clients still receive full streams; only this formatted summary is capped.
let total_lines = content.lines().count();
if content.len() <= MODEL_FORMAT_MAX_BYTES && total_lines <= MODEL_FORMAT_MAX_LINES {
return content.to_string();
}
let output = truncate_formatted_exec_output(content, total_lines);
format!("Total output lines: {total_lines}\n\n{output}")
}
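A sketch of the two paths through `format_output_for_model_body`, as a test that could sit next to it; the input sizes are chosen to trip only the line limit:
```
#[test]
fn sketch_format_output_paths() {
    // Within both limits: returned verbatim, no header added.
    let short = "ok\n".repeat(4);
    assert_eq!(format_output_for_model_body(&short), short);

    // One line over the cap: gains the line-count header plus an elision marker.
    let long = "x\n".repeat(MODEL_FORMAT_MAX_LINES + 1);
    let formatted = format_output_for_model_body(&long);
    assert!(formatted.starts_with("Total output lines: 257"));
    assert!(formatted.contains("[... omitted 1 of 257 lines ...]"));
}
```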
fn truncate_formatted_exec_output(content: &str, total_lines: usize) -> String {
let segments: Vec<&str> = content.split_inclusive('\n').collect();
let head_take = MODEL_FORMAT_HEAD_LINES.min(segments.len());
let tail_take = MODEL_FORMAT_TAIL_LINES.min(segments.len().saturating_sub(head_take));
let omitted = segments.len().saturating_sub(head_take + tail_take);
let head_slice_end: usize = segments
.iter()
.take(head_take)
.map(|segment| segment.len())
.sum();
let tail_slice_start: usize = if tail_take == 0 {
content.len()
} else {
content.len()
- segments
.iter()
.rev()
.take(tail_take)
.map(|segment| segment.len())
.sum::<usize>()
};
let head_slice = &content[..head_slice_end];
let tail_slice = &content[tail_slice_start..];
let truncated_by_bytes = content.len() > MODEL_FORMAT_MAX_BYTES;
// Slightly inaccurate: the count includes metadata lines, not just shell output lines.
let marker = if omitted > 0 {
Some(format!(
"\n[... omitted {omitted} of {total_lines} lines ...]\n\n"
))
} else if truncated_by_bytes {
Some(format!(
"\n[... output truncated to fit {MODEL_FORMAT_MAX_BYTES} bytes ...]\n\n"
))
} else {
None
};
let marker_len = marker.as_ref().map_or(0, String::len);
let base_head_budget = MODEL_FORMAT_HEAD_BYTES.min(MODEL_FORMAT_MAX_BYTES);
let head_budget = base_head_budget.min(MODEL_FORMAT_MAX_BYTES.saturating_sub(marker_len));
let head_part = take_bytes_at_char_boundary(head_slice, head_budget);
let mut result = String::with_capacity(MODEL_FORMAT_MAX_BYTES.min(content.len()));
result.push_str(head_part);
if let Some(marker_text) = marker.as_ref() {
result.push_str(marker_text);
}
let remaining = MODEL_FORMAT_MAX_BYTES.saturating_sub(result.len());
if remaining == 0 {
return result;
}
let tail_part = take_last_bytes_at_char_boundary(tail_slice, remaining);
result.push_str(tail_part);
result
}

File diff suppressed because it is too large

View File

@@ -32,12 +32,11 @@ pub async fn discover_prompts_in_excluding(
while let Ok(Some(entry)) = entries.next_entry().await {
let path = entry.path();
let is_file = entry
.file_type()
let is_file_like = fs::metadata(&path)
.await
.map(|ft| ft.is_file())
.map(|m| m.is_file())
.unwrap_or(false);
if !is_file {
if !is_file_like {
continue;
}
// Only include Markdown files with a .md extension.
@@ -197,6 +196,25 @@ mod tests {
assert_eq!(names, vec!["good"]);
}
#[tokio::test]
#[cfg(unix)]
async fn discovers_symlinked_md_files() {
let tmp = tempdir().expect("create TempDir");
let dir = tmp.path();
// Create a real file
fs::write(dir.join("real.md"), b"real content").unwrap();
// Create a symlink to the real file
std::os::unix::fs::symlink(dir.join("real.md"), dir.join("link.md")).unwrap();
let found = discover_prompts_in(dir).await;
let names: Vec<String> = found.into_iter().map(|e| e.name).collect();
// Both real and link should be discovered, sorted alphabetically
assert_eq!(names, vec!["link", "real"]);
}
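The production change this exercises is the switch from `DirEntry::file_type()` to `fs::metadata(&path)`: `metadata` follows symlinks, while `file_type` describes the link itself. A minimal Unix-only illustration of the difference using `tokio::fs`:
```
use tokio::fs;

#[tokio::main]
async fn main() -> std::io::Result<()> {
    let dir = std::env::temp_dir().join("symlink-metadata-sketch");
    fs::create_dir_all(&dir).await?;
    fs::write(dir.join("real.md"), b"content").await?;
    let link = dir.join("link.md");
    let _ = fs::remove_file(&link).await; // clean up any previous run
    #[cfg(unix)]
    std::os::unix::fs::symlink(dir.join("real.md"), &link)?;
    // metadata() follows the link, so the target's type is reported...
    assert!(fs::metadata(&link).await?.is_file());
    // ...while symlink_metadata() describes the link itself.
    assert!(fs::symlink_metadata(&link).await?.file_type().is_symlink());
    Ok(())
}
```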
#[tokio::test]
async fn parses_frontmatter_and_strips_from_body() {
let tmp = tempdir().expect("create TempDir");

View File

@@ -135,6 +135,9 @@ pub enum CodexErr {
#[error("unsupported operation: {0}")]
UnsupportedOperation(String),
#[error("{0}")]
RefreshTokenFailed(RefreshTokenFailedError),
#[error("Fatal error: {0}")]
Fatal(String),
@@ -201,6 +204,30 @@ impl std::fmt::Display for ResponseStreamFailed {
}
}
#[derive(Debug, Clone, PartialEq, Eq, Error)]
#[error("{message}")]
pub struct RefreshTokenFailedError {
pub reason: RefreshTokenFailedReason,
pub message: String,
}
impl RefreshTokenFailedError {
pub fn new(reason: RefreshTokenFailedReason, message: impl Into<String>) -> Self {
Self {
reason,
message: message.into(),
}
}
}
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum RefreshTokenFailedReason {
Expired,
Exhausted,
Revoked,
Other,
}
#[derive(Debug)]
pub struct UnexpectedResponseError {
pub status: StatusCode,
@@ -255,7 +282,7 @@ impl std::fmt::Display for UsageLimitReachedError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let message = match self.plan_type.as_ref() {
Some(PlanType::Known(KnownPlan::Plus)) => format!(
"You've hit your usage limit. Upgrade to Pro (https://openai.com/chatgpt/pricing), visit chatgpt.com/codex/settings/usage to purchase more credits{}",
"You've hit your usage limit. Upgrade to Pro (https://openai.com/chatgpt/pricing), visit https://chatgpt.com/codex/settings/usage to purchase more credits{}",
retry_suffix_after_or(self.resets_at.as_ref())
),
Some(PlanType::Known(KnownPlan::Team)) | Some(PlanType::Known(KnownPlan::Business)) => {
@@ -269,7 +296,7 @@ impl std::fmt::Display for UsageLimitReachedError {
.to_string()
}
Some(PlanType::Known(KnownPlan::Pro)) => format!(
"You've hit your usage limit. Visit chatgpt.com/codex/settings/usage to purchase more credits{}",
"You've hit your usage limit. Visit https://chatgpt.com/codex/settings/usage to purchase more credits{}",
retry_suffix_after_or(self.resets_at.as_ref())
),
Some(PlanType::Known(KnownPlan::Enterprise))
@@ -460,7 +487,7 @@ mod tests {
};
assert_eq!(
err.to_string(),
"You've hit your usage limit. Upgrade to Pro (https://openai.com/chatgpt/pricing), visit chatgpt.com/codex/settings/usage to purchase more credits or try again later."
"You've hit your usage limit. Upgrade to Pro (https://openai.com/chatgpt/pricing), visit https://chatgpt.com/codex/settings/usage to purchase more credits or try again later."
);
}
@@ -613,7 +640,7 @@ mod tests {
rate_limits: Some(rate_limit_snapshot()),
};
let expected = format!(
"You've hit your usage limit. Visit chatgpt.com/codex/settings/usage to purchase more credits or try again at {expected_time}."
"You've hit your usage limit. Visit https://chatgpt.com/codex/settings/usage to purchase more credits or try again at {expected_time}."
);
assert_eq!(err.to_string(), expected);
});
@@ -647,7 +674,7 @@ mod tests {
rate_limits: Some(rate_limit_snapshot()),
};
let expected = format!(
"You've hit your usage limit. Upgrade to Pro (https://openai.com/chatgpt/pricing), visit chatgpt.com/codex/settings/usage to purchase more credits or try again at {expected_time}."
"You've hit your usage limit. Upgrade to Pro (https://openai.com/chatgpt/pricing), visit https://chatgpt.com/codex/settings/usage to purchase more credits or try again at {expected_time}."
);
assert_eq!(err.to_string(), expected);
});

View File

@@ -171,6 +171,7 @@ async fn exec_windows_sandbox(
params: ExecParams,
sandbox_policy: &SandboxPolicy,
) -> Result<RawExecToolCallOutput> {
use crate::config::find_codex_home;
use codex_windows_sandbox::run_windows_sandbox_capture;
let ExecParams {
@@ -188,8 +189,17 @@ async fn exec_windows_sandbox(
};
let sandbox_cwd = cwd.clone();
let logs_base_dir = find_codex_home().ok();
let spawn_res = tokio::task::spawn_blocking(move || {
run_windows_sandbox_capture(policy_str, &sandbox_cwd, command, &cwd, env, timeout_ms)
run_windows_sandbox_capture(
policy_str,
&sandbox_cwd,
command,
&cwd,
env,
timeout_ms,
logs_base_dir.as_deref(),
)
})
.await;

View File

@@ -123,7 +123,7 @@ fn set_if_some(
if let Some(enabled) = maybe_value {
set_feature(features, feature, enabled);
log_alias(alias_key, feature);
features.record_legacy_usage_force(alias_key, feature);
features.record_legacy_usage(alias_key, feature);
}
}

View File

@@ -18,7 +18,7 @@ mod codex_delegate;
mod command_safety;
pub mod config;
pub mod config_loader;
mod conversation_history;
mod context_manager;
pub mod custom_prompts;
mod environment_context;
pub mod error;
@@ -75,6 +75,7 @@ pub use rollout::find_conversation_path_by_id_str;
pub use rollout::list::ConversationItem;
pub use rollout::list::ConversationsPage;
pub use rollout::list::Cursor;
pub use rollout::list::parse_cursor;
pub use rollout::list::read_head_for_summary;
mod function_tool;
mod state;
@@ -85,6 +86,7 @@ pub mod util;
pub use apply_patch::CODEX_APPLY_PATCH_ARG1;
pub use command_safety::is_safe_command;
pub use safety::get_platform_sandbox;
pub use safety::set_windows_sandbox_enabled;
// Re-export the protocol types from the standalone `codex-protocol` crate so existing
// `codex_core::protocol::...` references continue to work across the workspace.
pub use codex_protocol::protocol;

View File

@@ -273,7 +273,7 @@ async fn traverse_directories_for_paths(
/// Pagination cursor token format: "<file_ts>|<uuid>" where `file_ts` matches the
/// filename timestamp portion (YYYY-MM-DDThh-mm-ss) used in rollout filenames.
/// The cursor orders files by timestamp desc, then UUID desc.
fn parse_cursor(token: &str) -> Option<Cursor> {
pub fn parse_cursor(token: &str) -> Option<Cursor> {
let (file_ts, uuid_str) = token.split_once('|')?;
let Ok(uuid) = Uuid::parse_str(uuid_str) else {

View File
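Exporting `parse_cursor` lets callers reconstruct pagination state from the opaque token. A minimal sketch of the round trip, assuming a token in the "<file_ts>|<uuid>" shape documented above (the real `Cursor` layout in rollout::list may differ):

// Hypothetical illustration of the documented "<file_ts>|<uuid>" token format.
// This stand-in only mirrors the doc comment, not the actual Cursor type.
fn demo_parse(token: &str) -> Option<(String, uuid::Uuid)> {
    let (file_ts, uuid_str) = token.split_once('|')?;
    let uuid = uuid::Uuid::parse_str(uuid_str).ok()?;
    Some((file_ts.to_string(), uuid))
}

// "2025-11-06T10-34-20|67e55044-10b1-426f-9247-bb680e5fe0c8" splits into the
// filename-timestamp half and the UUID half; pages are then ordered by
// timestamp desc, UUID desc.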

@@ -25,16 +25,6 @@ use tracing::warn;
const SANDBOX_ASSESSMENT_TIMEOUT: Duration = Duration::from_secs(5);
const SANDBOX_RISK_CATEGORY_VALUES: &[&str] = &[
"data_deletion",
"data_exfiltration",
"privilege_escalation",
"system_modification",
"network_access",
"resource_exhaustion",
"compliance",
];
#[derive(Template)]
#[template(path = "sandboxing/assessment_prompt.md", escape = "none")]
struct SandboxAssessmentPromptTemplate<'a> {
@@ -176,27 +166,26 @@ pub(crate) async fn assess_command(
call_id,
"success",
Some(assessment.risk_level),
&assessment.risk_categories,
duration,
);
return Some(assessment);
}
Err(err) => {
warn!("failed to parse sandbox assessment JSON: {err}");
parent_otel.sandbox_assessment(call_id, "parse_error", None, &[], duration);
parent_otel.sandbox_assessment(call_id, "parse_error", None, duration);
}
},
Ok(Ok(None)) => {
warn!("sandbox assessment response did not include any message");
parent_otel.sandbox_assessment(call_id, "no_output", None, &[], duration);
parent_otel.sandbox_assessment(call_id, "no_output", None, duration);
}
Ok(Err(err)) => {
warn!("sandbox assessment failed: {err}");
parent_otel.sandbox_assessment(call_id, "model_error", None, &[], duration);
parent_otel.sandbox_assessment(call_id, "model_error", None, duration);
}
Err(_) => {
warn!("sandbox assessment timed out");
parent_otel.sandbox_assessment(call_id, "timeout", None, &[], duration);
parent_otel.sandbox_assessment(call_id, "timeout", None, duration);
}
}
@@ -229,7 +218,7 @@ fn sandbox_roots_for_prompt(policy: &SandboxPolicy, cwd: &Path) -> Vec<PathBuf>
fn sandbox_assessment_schema() -> serde_json::Value {
json!({
"type": "object",
"required": ["description", "risk_level", "risk_categories"],
"required": ["description", "risk_level"],
"properties": {
"description": {
"type": "string",
@@ -240,13 +229,6 @@ fn sandbox_assessment_schema() -> serde_json::Value {
"type": "string",
"enum": ["low", "medium", "high"]
},
"risk_categories": {
"type": "array",
"items": {
"type": "string",
"enum": SANDBOX_RISK_CATEGORY_VALUES
}
}
},
"additionalProperties": false
})

View File

@@ -67,7 +67,8 @@ pub(crate) async fn spawn_child_async(
// This relies on prctl(2), so it only works on Linux.
#[cfg(target_os = "linux")]
unsafe {
cmd.pre_exec(|| {
let parent_pid = libc::getpid();
cmd.pre_exec(move || {
// This prctl call effectively requests, "deliver SIGTERM when my
// current parent dies."
if libc::prctl(libc::PR_SET_PDEATHSIG, libc::SIGTERM) == -1 {
@@ -76,9 +77,10 @@ pub(crate) async fn spawn_child_async(
// Though if there was a race condition and this pre_exec() block is
// run _after_ the parent (i.e., the Codex process) has already
// exited, then the parent is the _init_ process (which will never
// die), so we should just terminate the child process now.
if libc::getppid() == 1 {
// exited, then the parent will be the closest configured "subreaper"
// ancestor process, or PID 1 (init). If the Codex process has already
// exited, the child process should exit too.
if libc::getppid() != parent_pid {
libc::raise(libc::SIGTERM);
}
Ok(())

View File
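Capturing the parent PID before `pre_exec` runs is what closes the race: after the fork, `getppid()` can only differ from the captured value if the parent already died and the child was re-parented (to a subreaper or init). A condensed, Linux-only sketch of the pattern, assuming `std::process::Command` and the `libc` crate — the surrounding spawn plumbing is omitted:

// Sketch of PR_SET_PDEATHSIG plus the re-parenting check from the hunk above.
#[cfg(target_os = "linux")]
fn arm_parent_death_signal(cmd: &mut std::process::Command) {
    use std::os::unix::process::CommandExt;
    let parent_pid = unsafe { libc::getpid() }; // captured pre-fork
    unsafe {
        cmd.pre_exec(move || {
            // Ask the kernel to deliver SIGTERM when our parent dies.
            if libc::prctl(libc::PR_SET_PDEATHSIG, libc::SIGTERM) == -1 {
                return Err(std::io::Error::last_os_error());
            }
            // If the parent already exited, we were re-parented, so getppid()
            // no longer matches the captured PID: terminate ourselves now.
            if libc::getppid() != parent_pid {
                libc::raise(libc::SIGTERM);
            }
            Ok(())
        });
    }
}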

@@ -3,7 +3,7 @@
use codex_protocol::models::ResponseItem;
use crate::codex::SessionConfiguration;
use crate::conversation_history::ConversationHistory;
use crate::context_manager::ContextManager;
use crate::protocol::RateLimitSnapshot;
use crate::protocol::TokenUsage;
use crate::protocol::TokenUsageInfo;
@@ -11,7 +11,7 @@ use crate::protocol::TokenUsageInfo;
/// Persistent, session-scoped state previously stored directly on `Session`.
pub(crate) struct SessionState {
pub(crate) session_configuration: SessionConfiguration,
pub(crate) history: ConversationHistory,
pub(crate) history: ContextManager,
pub(crate) latest_rate_limits: Option<RateLimitSnapshot>,
}
@@ -20,7 +20,7 @@ impl SessionState {
pub(crate) fn new(session_configuration: SessionConfiguration) -> Self {
Self {
session_configuration,
history: ConversationHistory::new(),
history: ContextManager::new(),
latest_rate_limits: None,
}
}
@@ -34,7 +34,7 @@ impl SessionState {
self.history.record_items(items)
}
pub(crate) fn clone_history(&self) -> ConversationHistory {
pub(crate) fn clone_history(&self) -> ContextManager {
self.history.clone()
}

View File

@@ -1,26 +1,27 @@
use std::sync::Arc;
use async_trait::async_trait;
use codex_protocol::models::ShellToolCallParams;
use codex_protocol::user_input::UserInput;
use tokio::sync::Mutex;
use tokio_util::sync::CancellationToken;
use tracing::error;
use uuid::Uuid;
use crate::codex::TurnContext;
use crate::exec::SandboxType;
use crate::exec::StdoutStream;
use crate::exec_env::create_env;
use crate::protocol::EventMsg;
use crate::protocol::TaskStartedEvent;
use crate::sandboxing::ExecEnv;
use crate::state::TaskKind;
use crate::tools::events::ToolEmitter;
use crate::tools::events::ToolEventCtx;
use crate::tools::sandboxing::ToolError;
use crate::tools::context::ToolPayload;
use crate::tools::parallel::ToolCallRuntime;
use crate::tools::router::ToolCall;
use crate::tools::router::ToolRouter;
use crate::turn_diff_tracker::TurnDiffTracker;
use super::SessionTask;
use super::SessionTaskContext;
const USER_SHELL_TOOL_NAME: &str = "local_shell";
#[derive(Clone)]
pub(crate) struct UserShellCommandTask {
@@ -52,8 +53,9 @@ impl SessionTask for UserShellCommandTask {
let session = session.clone_session();
session.send_event(turn_context.as_ref(), event).await;
// Execute the user's script under their default shell when known. Use
// execute_exec_env directly to avoid approvals/sandbox overhead.
// Execute the user's script under their default shell when known; this
// allows commands that use shell features (pipes, &&, redirects, etc.).
// We do not source rc files or otherwise reformat the script.
let shell_invocation = match session.user_shell() {
crate::shell::Shell::Zsh(zsh) => vec![
zsh.shell_path.clone(),
@@ -76,56 +78,35 @@ impl SessionTask for UserShellCommandTask {
}
};
// Emit shell begin event to keep UI consistent with tool runs.
let call_id = Uuid::new_v4().to_string();
let emitter = ToolEmitter::shell(shell_invocation.clone(), turn_context.cwd.clone(), true);
let event_ctx = ToolEventCtx::new(session.as_ref(), turn_context.as_ref(), &call_id, None);
emitter.begin(event_ctx).await;
let env = ExecEnv {
let params = ShellToolCallParams {
command: shell_invocation,
cwd: turn_context.cwd.clone(),
env: create_env(&turn_context.shell_environment_policy),
workdir: None,
timeout_ms: None,
sandbox: SandboxType::None,
with_escalated_permissions: None,
justification: None,
arg0: None,
};
let stream = Some(StdoutStream {
sub_id: turn_context.sub_id.clone(),
call_id: call_id.clone(),
tx_event: session.get_tx_event(),
});
let policy = turn_context.sandbox_policy.clone();
let mut exec_task = tokio::spawn(async move {
crate::exec::execute_exec_env(env, &policy, stream).await
});
let tool_call = ToolCall {
tool_name: USER_SHELL_TOOL_NAME.to_string(),
call_id: Uuid::new_v4().to_string(),
payload: ToolPayload::LocalShell { params },
};
tokio::select! {
res = &mut exec_task => {
match res {
Ok(Ok(output)) => {
let event_ctx = ToolEventCtx::new(session.as_ref(), turn_context.as_ref(), &call_id, None);
let _ = emitter.finish(event_ctx, Ok(output)).await;
}
Ok(Err(err)) => {
let event_ctx = ToolEventCtx::new(session.as_ref(), turn_context.as_ref(), &call_id, None);
let _ = emitter.finish(event_ctx, Err(ToolError::Codex(err))).await;
}
Err(join_err) => {
error!("user shell exec task join error: {join_err:?}");
}
}
}
_ = cancellation_token.cancelled() => {
exec_task.abort();
// Session will emit TurnAborted; do not finish emitter to avoid
// emitting ExecCommandEnd for an interrupted run.
}
let router = Arc::new(ToolRouter::from_config(&turn_context.tools_config, None));
let tracker = Arc::new(Mutex::new(TurnDiffTracker::new()));
let runtime = ToolCallRuntime::new(
Arc::clone(&router),
Arc::clone(&session),
Arc::clone(&turn_context),
Arc::clone(&tracker),
);
if let Err(err) = runtime
.handle_tool_call(tool_call, cancellation_token)
.await
{
error!("user shell command failed: {err:?}");
}
None
}
}

View File

@@ -40,7 +40,6 @@ pub enum ToolPayload {
},
LocalShell {
params: ShellToolCallParams,
is_user_shell_command: bool,
},
UnifiedExec {
arguments: String,
@@ -57,7 +56,7 @@ impl ToolPayload {
match self {
ToolPayload::Function { arguments } => Cow::Borrowed(arguments),
ToolPayload::Custom { input } => Cow::Borrowed(input),
ToolPayload::LocalShell { params, .. } => Cow::Owned(params.command.join(" ")),
ToolPayload::LocalShell { params } => Cow::Owned(params.command.join(" ")),
ToolPayload::UnifiedExec { arguments } => Cow::Borrowed(arguments),
ToolPayload::Mcp { raw_arguments, .. } => Cow::Borrowed(raw_arguments),
}

View File

@@ -82,10 +82,7 @@ impl ToolHandler for ShellHandler {
)
.await
}
ToolPayload::LocalShell {
params,
is_user_shell_command,
} => {
ToolPayload::LocalShell { params } => {
let exec_params = Self::to_exec_params(params, turn.as_ref());
Self::run_exec_like(
tool_name.as_str(),
@@ -94,7 +91,7 @@ impl ToolHandler for ShellHandler {
turn,
tracker,
call_id,
is_user_shell_command,
true,
)
.await
}
@@ -222,7 +219,6 @@ impl ShellHandler {
env: exec_params.env.clone(),
with_escalated_permissions: exec_params.with_escalated_permissions,
justification: exec_params.justification.clone(),
is_user_shell_command,
};
let mut orchestrator = ToolOrchestrator::new();
let mut runtime = ShellRuntime::new();

View File

@@ -9,7 +9,7 @@ pub mod runtimes;
pub mod sandboxing;
pub mod spec;
use crate::conversation_history::format_output_for_model_body;
use crate::context_manager::format_output_for_model_body;
use crate::exec::ExecToolCallOutput;
pub use router::ToolRouter;
use serde::Serialize;

View File

@@ -54,12 +54,21 @@ impl ToolOrchestrator {
let mut already_approved = false;
if needs_initial_approval {
let mut risk = None;
if let Some(metadata) = req.sandbox_retry_data() {
risk = tool_ctx
.session
.assess_sandbox_command(turn_ctx, &tool_ctx.call_id, &metadata.command, None)
.await;
}
let approval_ctx = ApprovalCtx {
session: tool_ctx.session,
turn: turn_ctx,
call_id: &tool_ctx.call_id,
retry_reason: None,
risk: None,
risk,
};
let decision = tool.start_approval_async(req, approval_ctx).await;

View File

@@ -1,4 +1,5 @@
use std::sync::Arc;
use std::time::Instant;
use tokio::sync::RwLock;
use tokio_util::either::Either;
@@ -53,13 +54,16 @@ impl ToolCallRuntime {
let turn = Arc::clone(&self.turn_context);
let tracker = Arc::clone(&self.tracker);
let lock = Arc::clone(&self.parallel_execution);
let aborted_response = Self::aborted_response(&call);
let started = Instant::now();
let readiness = self.turn_context.tool_call_gate.clone();
let handle: AbortOnDropHandle<Result<ResponseInputItem, FunctionCallError>> =
AbortOnDropHandle::new(tokio::spawn(async move {
tokio::select! {
_ = cancellation_token.cancelled() => Ok(aborted_response),
_ = cancellation_token.cancelled() => {
let secs = started.elapsed().as_secs_f32().max(0.1);
Ok(Self::aborted_response(&call, secs))
},
res = async {
tracing::info!("waiting for tool gate");
readiness.wait_ready().await;
@@ -71,7 +75,7 @@ impl ToolCallRuntime {
};
router
.dispatch_tool_call(session, turn, tracker, call)
.dispatch_tool_call(session, turn, tracker, call.clone())
.await
} => res,
}
@@ -91,23 +95,32 @@ impl ToolCallRuntime {
}
impl ToolCallRuntime {
fn aborted_response(call: &ToolCall) -> ResponseInputItem {
fn aborted_response(call: &ToolCall, secs: f32) -> ResponseInputItem {
match &call.payload {
ToolPayload::Custom { .. } => ResponseInputItem::CustomToolCallOutput {
call_id: call.call_id.clone(),
output: "aborted".to_string(),
output: Self::abort_message(call, secs),
},
ToolPayload::Mcp { .. } => ResponseInputItem::McpToolCallOutput {
call_id: call.call_id.clone(),
result: Err("aborted".to_string()),
result: Err(Self::abort_message(call, secs)),
},
_ => ResponseInputItem::FunctionCallOutput {
call_id: call.call_id.clone(),
output: FunctionCallOutputPayload {
content: "aborted".to_string(),
content: Self::abort_message(call, secs),
..Default::default()
},
},
}
}
fn abort_message(call: &ToolCall, secs: f32) -> String {
match call.tool_name.as_str() {
"shell" | "container.exec" | "local_shell" | "unified_exec" => {
format!("Wall time: {secs:.1} seconds\naborted by user")
}
_ => format!("aborted by user after {secs:.1}s"),
}
}
}

View File
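The abort payload now embeds wall-clock time, which the interrupt test further down matches with a regex. Illustrative only, assuming a tool interrupted after roughly 2.3 seconds:

// Expected shapes of the two abort messages produced by the match arms above.
let secs = 2.3_f32;
assert_eq!(
    format!("Wall time: {secs:.1} seconds\naborted by user"),
    "Wall time: 2.3 seconds\naborted by user", // shell / exec-family tools
);
assert_eq!(
    format!("aborted by user after {secs:.1}s"),
    "aborted by user after 2.3s", // all other tools (e.g. MCP)
);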

@@ -120,10 +120,7 @@ impl ToolRouter {
Ok(Some(ToolCall {
tool_name: "local_shell".to_string(),
call_id,
payload: ToolPayload::LocalShell {
params,
is_user_shell_command: false,
},
payload: ToolPayload::LocalShell { params },
}))
}
}

View File

@@ -34,7 +34,6 @@ pub struct ShellRequest {
pub env: std::collections::HashMap<String, String>,
pub with_escalated_permissions: Option<bool>,
pub justification: Option<String>,
pub is_user_shell_command: bool,
}
impl ProvidesSandboxRetryData for ShellRequest {
@@ -122,9 +121,6 @@ impl Approvable<ShellRequest> for ShellRuntime {
policy: AskForApproval,
sandbox_policy: &SandboxPolicy,
) -> bool {
if req.is_user_shell_command {
return false;
}
if is_known_safe_command(&req.command) {
return false;
}
@@ -150,7 +146,7 @@ impl Approvable<ShellRequest> for ShellRuntime {
}
fn wants_escalated_first_attempt(&self, req: &ShellRequest) -> bool {
req.is_user_shell_command || req.with_escalated_permissions.unwrap_or(false)
req.with_escalated_permissions.unwrap_or(false)
}
}

View File

@@ -1,7 +0,0 @@
You were originally given instructions from a user over one or more turns. Here were the user messages:
{{ user_messages_text }}
Another language model started to solve this problem and produced a summary of its thinking process. You also have access to the state of the tools that were used by that language model. Use this to build on the work that has already been done and avoid duplicating work. Here is the summary produced by the other language model, use the information in this summary to assist with your own analysis:
{{ summary_text }}

View File

@@ -1,5 +1,9 @@
You have exceeded the maximum number of tokens, please stop coding and instead write a short memento message for the next agent. Your note should:
- Summarize what you finished and what still needs work. If there was a recent update_plan call, repeat its steps verbatim.
- List outstanding TODOs with file paths / line numbers so they're easy to find.
- Flag code that needs more tests (edge cases, performance, integration, etc.).
- Record any open bugs, quirks, or setup steps that will make it easier for the next agent to pick up where you left off.
You are performing a CONTEXT CHECKPOINT COMPACTION. Create a handoff summary for another LLM that will resume the task.
Include:
- Current progress and key decisions made
- Important context, constraints, or user preferences
- What remains to be done (clear next steps)
- Any critical data, examples, or references needed to continue
Be concise, structured, and focused on helping the next LLM seamlessly continue the work.

View File

@@ -1,13 +1,10 @@
You are a security analyst evaluating shell commands that were blocked by a sandbox. Given the provided metadata, summarize the command's likely intent and assess the risk. Return strictly valid JSON with the keys:
- description (concise summary, at most two sentences)
You are a security analyst evaluating shell commands that were blocked by a sandbox. Given the provided metadata, summarize the command's likely intent and assess the risk to help the user decide whether to approve command execution. Return strictly valid JSON with the keys:
- description (concise summary of command intent and potential effects, no more than one sentence, use present tense)
- risk_level ("low", "medium", or "high")
- risk_categories (optional array of zero or more category strings)
Risk level examples:
- low: read-only inspections, listing files, printing configuration
- medium: modifying project files, installing dependencies, fetching artifacts from trusted sources
- low: read-only inspections, listing files, printing configuration, fetching artifacts from trusted sources
- medium: modifying project files, installing dependencies
- high: deleting or overwriting data, exfiltrating secrets, escalating privileges, or disabling security controls
Recognized risk_categories: data_deletion, data_exfiltration, privilege_escalation, system_modification, network_access, resource_exhaustion, compliance.
Use multiple categories when appropriate.
If information is insufficient, choose the most cautious risk level supported by the evidence.
Respond with JSON only, without markdown code fences or extra commentary.

View File
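With `risk_categories` dropped from both the prompt and the schema, a conforming model reply carries only the two remaining keys, and `"additionalProperties": false` rejects anything else. A hypothetical response that would validate:

// Example payload (illustrative, not from the repo) satisfying the trimmed schema.
let example = serde_json::json!({
    "description": "Deletes the build directory and all of its contents.",
    "risk_level": "medium"
});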

@@ -244,7 +244,7 @@ pub mod fs_wait {
if path.exists() {
Ok(path)
} else {
Err(anyhow!("timed out waiting for {:?}", path))
Err(anyhow!("timed out waiting for {path:?}"))
}
}
@@ -284,7 +284,7 @@ pub mod fs_wait {
if let Some(found) = scan_for_match(&root, predicate) {
Ok(found)
} else {
Err(anyhow!("timed out waiting for matching file in {:?}", root))
Err(anyhow!("timed out waiting for matching file in {root:?}"))
}
}

View File

@@ -1,3 +1,4 @@
use assert_matches::assert_matches;
use std::sync::Arc;
use std::time::Duration;
@@ -13,6 +14,7 @@ use core_test_support::responses::sse;
use core_test_support::responses::start_mock_server;
use core_test_support::test_codex::test_codex;
use core_test_support::wait_for_event_with_timeout;
use regex_lite::Regex;
use serde_json::json;
/// Integration test: spawn a longrunning shell tool via a mocked Responses SSE
@@ -123,6 +125,7 @@ async fn interrupt_tool_records_history_entries() {
)
.await;
tokio::time::sleep(Duration::from_secs_f32(0.1)).await;
codex.submit(Op::Interrupt).await.unwrap();
wait_for_event_with_timeout(
@@ -159,9 +162,26 @@ async fn interrupt_tool_records_history_entries() {
response_mock.saw_function_call(call_id),
"function call not recorded in responses payload"
);
assert_eq!(
response_mock.function_call_output_text(call_id).as_deref(),
Some("aborted"),
"aborted function call output not recorded in responses payload"
let output = response_mock
.function_call_output_text(call_id)
.expect("missing function_call_output text");
let re = Regex::new(r"^Wall time: ([0-9]+(?:\.[0-9])?) seconds\naborted by user$")
.expect("compile regex");
let captures = re.captures(&output);
assert_matches!(
captures.as_ref(),
Some(caps) if caps.get(1).is_some(),
"aborted message with elapsed seconds"
);
let secs: f32 = captures
.expect("aborted message with elapsed seconds")
.get(1)
.unwrap()
.as_str()
.parse()
.unwrap();
assert!(
secs >= 0.1,
"expected at least one tenth of a second of elapsed time, got {secs}"
);
}

View File

@@ -0,0 +1,272 @@
use anyhow::Context;
use anyhow::Result;
use base64::Engine;
use chrono::Duration;
use chrono::Utc;
use codex_core::CodexAuth;
use codex_core::auth::AuthCredentialsStoreMode;
use codex_core::auth::AuthDotJson;
use codex_core::auth::REFRESH_TOKEN_URL_OVERRIDE_ENV_VAR;
use codex_core::auth::RefreshTokenError;
use codex_core::auth::load_auth_dot_json;
use codex_core::auth::save_auth;
use codex_core::error::RefreshTokenFailedReason;
use codex_core::token_data::IdTokenInfo;
use codex_core::token_data::TokenData;
use core_test_support::skip_if_no_network;
use pretty_assertions::assert_eq;
use serde::Serialize;
use serde_json::json;
use std::ffi::OsString;
use tempfile::TempDir;
use wiremock::Mock;
use wiremock::MockServer;
use wiremock::ResponseTemplate;
use wiremock::matchers::method;
use wiremock::matchers::path;
const INITIAL_ACCESS_TOKEN: &str = "initial-access-token";
const INITIAL_REFRESH_TOKEN: &str = "initial-refresh-token";
#[serial_test::serial(auth_refresh)]
#[tokio::test]
async fn refresh_token_succeeds_updates_storage() -> Result<()> {
skip_if_no_network!(Ok(()));
let server = MockServer::start().await;
Mock::given(method("POST"))
.and(path("/oauth/token"))
.respond_with(ResponseTemplate::new(200).set_body_json(json!({
"access_token": "new-access-token",
"refresh_token": "new-refresh-token"
})))
.expect(1)
.mount(&server)
.await;
let ctx = RefreshTokenTestContext::new(&server)?;
let auth = ctx.auth.clone();
let access = auth
.refresh_token()
.await
.context("refresh should succeed")?;
assert_eq!(access, "new-access-token");
let stored = ctx.load_auth()?;
let tokens = stored.tokens.as_ref().context("tokens should exist")?;
assert_eq!(tokens.access_token, "new-access-token");
assert_eq!(tokens.refresh_token, "new-refresh-token");
let refreshed_at = stored
.last_refresh
.as_ref()
.context("last_refresh should be recorded")?;
assert!(
*refreshed_at >= ctx.initial_last_refresh,
"last_refresh should advance"
);
let cached = auth
.get_token_data()
.await
.context("token data should be cached")?;
assert_eq!(cached.access_token, "new-access-token");
server.verify().await;
Ok(())
}
#[serial_test::serial(auth_refresh)]
#[tokio::test]
async fn refresh_token_returns_permanent_error_for_expired_refresh_token() -> Result<()> {
skip_if_no_network!(Ok(()));
let server = MockServer::start().await;
Mock::given(method("POST"))
.and(path("/oauth/token"))
.respond_with(ResponseTemplate::new(401).set_body_json(json!({
"error": {
"code": "refresh_token_expired"
}
})))
.expect(1)
.mount(&server)
.await;
let ctx = RefreshTokenTestContext::new(&server)?;
let auth = ctx.auth.clone();
let err = auth
.refresh_token()
.await
.err()
.context("refresh should fail")?;
assert_eq!(err.failed_reason(), Some(RefreshTokenFailedReason::Expired));
let stored = ctx.load_auth()?;
let tokens = stored.tokens.as_ref().context("tokens should remain")?;
assert_eq!(tokens.access_token, INITIAL_ACCESS_TOKEN);
assert_eq!(tokens.refresh_token, INITIAL_REFRESH_TOKEN);
assert_eq!(
*stored
.last_refresh
.as_ref()
.context("last_refresh should remain unchanged")?,
ctx.initial_last_refresh,
);
server.verify().await;
Ok(())
}
#[serial_test::serial(auth_refresh)]
#[tokio::test]
async fn refresh_token_returns_transient_error_on_server_failure() -> Result<()> {
skip_if_no_network!(Ok(()));
let server = MockServer::start().await;
Mock::given(method("POST"))
.and(path("/oauth/token"))
.respond_with(ResponseTemplate::new(500).set_body_json(json!({
"error": "temporary-failure"
})))
.expect(1)
.mount(&server)
.await;
let ctx = RefreshTokenTestContext::new(&server)?;
let auth = ctx.auth.clone();
let err = auth
.refresh_token()
.await
.err()
.context("refresh should fail")?;
assert!(matches!(err, RefreshTokenError::Transient(_)));
assert_eq!(err.failed_reason(), None);
let stored = ctx.load_auth()?;
let tokens = stored.tokens.as_ref().context("tokens should remain")?;
assert_eq!(tokens.access_token, INITIAL_ACCESS_TOKEN);
assert_eq!(tokens.refresh_token, INITIAL_REFRESH_TOKEN);
assert_eq!(
*stored
.last_refresh
.as_ref()
.context("last_refresh should remain unchanged")?,
ctx.initial_last_refresh,
);
server.verify().await;
Ok(())
}
struct RefreshTokenTestContext {
codex_home: TempDir,
auth: CodexAuth,
initial_last_refresh: chrono::DateTime<Utc>,
_env_guard: EnvGuard,
}
impl RefreshTokenTestContext {
fn new(server: &MockServer) -> Result<Self> {
let codex_home = TempDir::new()?;
let initial_last_refresh = Utc::now() - Duration::days(1);
let mut id_token = IdTokenInfo::default();
id_token.raw_jwt = minimal_jwt();
let tokens = TokenData {
id_token,
access_token: INITIAL_ACCESS_TOKEN.to_string(),
refresh_token: INITIAL_REFRESH_TOKEN.to_string(),
account_id: Some("account-id".to_string()),
};
let auth_dot_json = AuthDotJson {
openai_api_key: None,
tokens: Some(tokens),
last_refresh: Some(initial_last_refresh),
};
save_auth(
codex_home.path(),
&auth_dot_json,
AuthCredentialsStoreMode::File,
)?;
let endpoint = format!("{}/oauth/token", server.uri());
let env_guard = EnvGuard::set(REFRESH_TOKEN_URL_OVERRIDE_ENV_VAR, endpoint);
let auth = CodexAuth::from_auth_storage(codex_home.path(), AuthCredentialsStoreMode::File)?
.context("auth should load from storage")?;
Ok(Self {
codex_home,
auth,
initial_last_refresh,
_env_guard: env_guard,
})
}
fn load_auth(&self) -> Result<AuthDotJson> {
load_auth_dot_json(self.codex_home.path(), AuthCredentialsStoreMode::File)
.context("load auth.json")?
.context("auth.json should exist")
}
}
struct EnvGuard {
key: &'static str,
original: Option<OsString>,
}
impl EnvGuard {
fn set(key: &'static str, value: String) -> Self {
let original = std::env::var_os(key);
// SAFETY: these tests execute serially, so updating the process environment is safe.
unsafe {
std::env::set_var(key, &value);
}
Self { key, original }
}
}
impl Drop for EnvGuard {
fn drop(&mut self) {
// SAFETY: the guard restores the original environment value before other tests run.
unsafe {
match &self.original {
Some(value) => std::env::set_var(self.key, value),
None => std::env::remove_var(self.key),
}
}
}
}
fn minimal_jwt() -> String {
#[derive(Serialize)]
struct Header {
alg: &'static str,
typ: &'static str,
}
let header = Header {
alg: "none",
typ: "JWT",
};
let payload = json!({ "sub": "user-123" });
fn b64(data: &[u8]) -> String {
base64::engine::general_purpose::URL_SAFE_NO_PAD.encode(data)
}
let header_bytes = match serde_json::to_vec(&header) {
Ok(bytes) => bytes,
Err(err) => panic!("serialize header: {err}"),
};
let payload_bytes = match serde_json::to_vec(&payload) {
Ok(bytes) => bytes,
Err(err) => panic!("serialize payload: {err}"),
};
let header_b64 = b64(&header_bytes);
let payload_b64 = b64(&payload_bytes);
let signature_b64 = b64(b"sig");
format!("{header_b64}.{payload_b64}.{signature_b64}")
}

View File

@@ -8,6 +8,8 @@ use core_test_support::responses::ev_apply_patch_function_call;
use core_test_support::responses::ev_assistant_message;
use core_test_support::responses::ev_completed;
use core_test_support::responses::ev_function_call;
use core_test_support::responses::ev_reasoning_item_added;
use core_test_support::responses::ev_reasoning_summary_text_delta;
use core_test_support::responses::ev_response_created;
use core_test_support::responses::mount_sse_sequence;
use core_test_support::responses::sse;
@@ -15,6 +17,7 @@ use core_test_support::responses::start_mock_server;
use core_test_support::skip_if_no_network;
use core_test_support::test_codex::test_codex;
use core_test_support::wait_for_event;
use pretty_assertions::assert_eq;
/// Delegate should surface ExecApprovalRequest from sub-agent and proceed
/// after parent submits an approval decision.
@@ -171,3 +174,52 @@ async fn codex_delegate_forwards_patch_approval_and_proceeds_on_decision() {
.await;
wait_for_event(&test.codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn codex_delegate_ignores_legacy_deltas() {
skip_if_no_network!();
// Single response with reasoning summary deltas.
let sse_stream = sse(vec![
ev_response_created("resp-1"),
ev_reasoning_item_added("reason-1", &["initial"]),
ev_reasoning_summary_text_delta("think-1"),
ev_completed("resp-1"),
]);
let server = start_mock_server().await;
mount_sse_sequence(&server, vec![sse_stream]).await;
let mut builder = test_codex();
let test = builder.build(&server).await.expect("build test codex");
// Kick off review (delegated).
test.codex
.submit(Op::Review {
review_request: ReviewRequest {
prompt: "Please review".to_string(),
user_facing_hint: "review".to_string(),
},
})
.await
.expect("submit review");
let mut reasoning_delta_count = 0;
let mut legacy_reasoning_delta_count = 0;
loop {
let ev = wait_for_event(&test.codex, |_| true).await;
match ev {
EventMsg::ReasoningContentDelta(_) => reasoning_delta_count += 1,
EventMsg::AgentReasoningDelta(_) => legacy_reasoning_delta_count += 1,
EventMsg::TaskComplete(_) => break,
_ => {}
}
}
assert_eq!(reasoning_delta_count, 1, "expected one new reasoning delta");
assert_eq!(
legacy_reasoning_delta_count, 1,
"expected one legacy reasoning delta"
);
}

View File

@@ -3,6 +3,7 @@ use codex_core::ConversationManager;
use codex_core::ModelProviderInfo;
use codex_core::NewConversation;
use codex_core::built_in_model_providers;
use codex_core::config::Config;
use codex_core::protocol::ErrorEvent;
use codex_core::protocol::EventMsg;
use codex_core::protocol::Op;
@@ -13,9 +14,9 @@ use codex_protocol::user_input::UserInput;
use core_test_support::load_default_config_for_test;
use core_test_support::skip_if_no_network;
use core_test_support::wait_for_event;
use std::collections::VecDeque;
use tempfile::TempDir;
use codex_core::codex::compact::SUMMARIZATION_PROMPT;
use core_test_support::responses::ev_assistant_message;
use core_test_support::responses::ev_completed;
use core_test_support::responses::ev_completed_with_tokens;
@@ -27,6 +28,7 @@ use core_test_support::responses::sse;
use core_test_support::responses::sse_failed;
use core_test_support::responses::start_mock_server;
use pretty_assertions::assert_eq;
use serde_json::json;
// --- Test helpers -----------------------------------------------------------
pub(super) const FIRST_REPLY: &str = "FIRST_REPLY";
@@ -46,8 +48,39 @@ const CONTEXT_LIMIT_MESSAGE: &str =
const DUMMY_FUNCTION_NAME: &str = "unsupported_tool";
const DUMMY_CALL_ID: &str = "call-multi-auto";
const FUNCTION_CALL_LIMIT_MSG: &str = "function call limit push";
const POST_AUTO_USER_MSG: &str = "post auto follow-up";
const COMPACT_PROMPT_MARKER: &str =
"You are performing a CONTEXT CHECKPOINT COMPACTION for a tool.";
pub(super) const TEST_COMPACT_PROMPT: &str =
"You are performing a CONTEXT CHECKPOINT COMPACTION for a tool.\nTest-only compact prompt.";
pub(super) const COMPACT_WARNING_MESSAGE: &str = "Heads up: Long conversations and multiple compactions can cause the model to be less accurate. Start a new conversation when possible to keep conversations small and targeted.";
fn auto_summary(summary: &str) -> String {
summary.to_string()
}
fn drop_call_id(value: &mut serde_json::Value) {
match value {
serde_json::Value::Object(obj) => {
obj.retain(|k, _| k != "call_id");
for v in obj.values_mut() {
drop_call_id(v);
}
}
serde_json::Value::Array(arr) => {
for v in arr {
drop_call_id(v);
}
}
_ => {}
}
}
fn set_test_compact_prompt(config: &mut Config) {
config.compact_prompt = Some(TEST_COMPACT_PROMPT.to_string());
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn summarize_context_three_requests_and_instructions() {
skip_if_no_network!();
@@ -73,22 +106,21 @@ async fn summarize_context_three_requests_and_instructions() {
// Mount three expectations, one per request, matched by body content.
let first_matcher = |req: &wiremock::Request| {
let body = std::str::from_utf8(&req.body).unwrap_or("");
body.contains("\"text\":\"hello world\"")
&& !body.contains("You have exceeded the maximum number of tokens")
body.contains("\"text\":\"hello world\"") && !body.contains(COMPACT_PROMPT_MARKER)
};
mount_sse_once_match(&server, first_matcher, sse1).await;
let first_request_mock = mount_sse_once_match(&server, first_matcher, sse1).await;
let second_matcher = |req: &wiremock::Request| {
let body = std::str::from_utf8(&req.body).unwrap_or("");
body.contains("You have exceeded the maximum number of tokens")
body.contains(COMPACT_PROMPT_MARKER)
};
mount_sse_once_match(&server, second_matcher, sse2).await;
let second_request_mock = mount_sse_once_match(&server, second_matcher, sse2).await;
let third_matcher = |req: &wiremock::Request| {
let body = std::str::from_utf8(&req.body).unwrap_or("");
body.contains(&format!("\"text\":\"{THIRD_USER_MSG}\""))
};
mount_sse_once_match(&server, third_matcher, sse3).await;
let third_request_mock = mount_sse_once_match(&server, third_matcher, sse3).await;
// Build config pointing to the mock server and spawn Codex.
let model_provider = ModelProviderInfo {
@@ -98,6 +130,7 @@ async fn summarize_context_three_requests_and_instructions() {
let home = TempDir::new().unwrap();
let mut config = load_default_config_for_test(&home);
config.model_provider = model_provider;
set_test_compact_prompt(&mut config);
config.model_auto_compact_token_limit = Some(200_000);
let conversation_manager = ConversationManager::with_auth(CodexAuth::from_api_key("dummy"));
let NewConversation {
@@ -139,16 +172,13 @@ async fn summarize_context_three_requests_and_instructions() {
wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
// Inspect the three captured requests.
let requests = server.received_requests().await.unwrap();
assert_eq!(requests.len(), 3, "expected exactly three requests");
let req1 = first_request_mock.single_request();
let req2 = second_request_mock.single_request();
let req3 = third_request_mock.single_request();
let req1 = &requests[0];
let req2 = &requests[1];
let req3 = &requests[2];
let body1 = req1.body_json::<serde_json::Value>().unwrap();
let body2 = req2.body_json::<serde_json::Value>().unwrap();
let body3 = req3.body_json::<serde_json::Value>().unwrap();
let body1 = req1.body_json();
let body2 = req2.body_json();
let body3 = req3.body_json();
// Manual compact should keep the baseline developer instructions.
let instr1 = body1.get("instructions").and_then(|v| v.as_str()).unwrap();
@@ -166,11 +196,11 @@ async fn summarize_context_three_requests_and_instructions() {
assert_eq!(last2.get("role").unwrap().as_str().unwrap(), "user");
let text2 = last2["content"][0]["text"].as_str().unwrap();
assert_eq!(
text2, SUMMARIZATION_PROMPT,
text2, TEST_COMPACT_PROMPT,
"expected summarize trigger, got `{text2}`"
);
// Third request must contain the refreshed instructions, bridge summary message and new user msg.
// Third request must contain the refreshed instructions, compacted user history, and new user message.
let input3 = body3.get("input").and_then(|v| v.as_array()).unwrap();
assert!(
@@ -178,13 +208,21 @@ async fn summarize_context_three_requests_and_instructions() {
"expected refreshed context and new user message in third request"
);
// Collect all (role, text) message tuples.
let mut messages: Vec<(String, String)> = Vec::new();
for item in input3 {
if item["type"].as_str() == Some("message") {
let role = item["role"].as_str().unwrap_or_default().to_string();
let text = item["content"][0]["text"]
.as_str()
if let Some("message") = item.get("type").and_then(|v| v.as_str()) {
let role = item
.get("role")
.and_then(|v| v.as_str())
.unwrap_or_default()
.to_string();
let text = item
.get("content")
.and_then(|v| v.as_array())
.and_then(|arr| arr.first())
.and_then(|entry| entry.get("text"))
.and_then(|v| v.as_str())
.unwrap_or_default()
.to_string();
messages.push((role, text));
@@ -200,26 +238,22 @@ async fn summarize_context_three_requests_and_instructions() {
.any(|(r, t)| r == "user" && t == THIRD_USER_MSG),
"third request should include the new user message"
);
let Some((_, bridge_text)) = messages.iter().find(|(role, text)| {
role == "user"
&& (text.contains("Here were the user messages")
|| text.contains("Here are all the user messages"))
&& text.contains(SUMMARY_TEXT)
}) else {
panic!("expected a bridge message containing the summary");
};
assert!(
bridge_text.contains("hello world"),
"bridge should capture earlier user messages"
messages
.iter()
.any(|(r, t)| r == "user" && t == "hello world"),
"third request should include the original user message"
);
assert!(
!bridge_text.contains(SUMMARIZATION_PROMPT),
"bridge text should not echo the summarize trigger"
messages
.iter()
.any(|(r, t)| r == "user" && t == SUMMARY_TEXT),
"third request should include the summary message"
);
assert!(
!messages
.iter()
.any(|(_, text)| text.contains(SUMMARIZATION_PROMPT)),
.any(|(_, text)| text.contains(TEST_COMPACT_PROMPT)),
"third request should not include the summarize trigger"
);
@@ -323,7 +357,7 @@ async fn manual_compact_uses_custom_prompt() {
if text == custom_prompt {
found_custom_prompt = true;
}
if text == SUMMARIZATION_PROMPT {
if text == TEST_COMPACT_PROMPT {
found_default_prompt = true;
}
}
@@ -354,12 +388,17 @@ async fn auto_compact_runs_after_token_limit_hit() {
ev_assistant_message("m3", AUTO_SUMMARY_TEXT),
ev_completed_with_tokens("r3", 200),
]);
let sse_resume = sse(vec![ev_completed("r3-resume")]);
let sse4 = sse(vec![
ev_assistant_message("m4", FINAL_REPLY),
ev_completed_with_tokens("r4", 120),
]);
let first_matcher = |req: &wiremock::Request| {
let body = std::str::from_utf8(&req.body).unwrap_or("");
body.contains(FIRST_AUTO_MSG)
&& !body.contains(SECOND_AUTO_MSG)
&& !body.contains("You have exceeded the maximum number of tokens")
&& !body.contains(COMPACT_PROMPT_MARKER)
};
mount_sse_once_match(&server, first_matcher, sse1).await;
@@ -367,16 +406,30 @@ async fn auto_compact_runs_after_token_limit_hit() {
let body = std::str::from_utf8(&req.body).unwrap_or("");
body.contains(SECOND_AUTO_MSG)
&& body.contains(FIRST_AUTO_MSG)
&& !body.contains("You have exceeded the maximum number of tokens")
&& !body.contains(COMPACT_PROMPT_MARKER)
};
mount_sse_once_match(&server, second_matcher, sse2).await;
let third_matcher = |req: &wiremock::Request| {
let body = std::str::from_utf8(&req.body).unwrap_or("");
body.contains("You have exceeded the maximum number of tokens")
body.contains(COMPACT_PROMPT_MARKER)
};
mount_sse_once_match(&server, third_matcher, sse3).await;
let resume_matcher = |req: &wiremock::Request| {
let body = std::str::from_utf8(&req.body).unwrap_or("");
body.contains(AUTO_SUMMARY_TEXT)
&& !body.contains(COMPACT_PROMPT_MARKER)
&& !body.contains(POST_AUTO_USER_MSG)
};
mount_sse_once_match(&server, resume_matcher, sse_resume).await;
let fourth_matcher = |req: &wiremock::Request| {
let body = std::str::from_utf8(&req.body).unwrap_or("");
body.contains(POST_AUTO_USER_MSG) && !body.contains(COMPACT_PROMPT_MARKER)
};
mount_sse_once_match(&server, fourth_matcher, sse4).await;
let model_provider = ModelProviderInfo {
base_url: Some(format!("{}/v1", server.uri())),
..built_in_model_providers()["openai"].clone()
@@ -385,6 +438,7 @@ async fn auto_compact_runs_after_token_limit_hit() {
let home = TempDir::new().unwrap();
let mut config = load_default_config_for_test(&home);
config.model_provider = model_provider;
set_test_compact_prompt(&mut config);
config.model_auto_compact_token_limit = Some(200_000);
let conversation_manager = ConversationManager::with_auth(CodexAuth::from_api_key("dummy"));
let codex = conversation_manager
@@ -414,18 +468,29 @@ async fn auto_compact_runs_after_token_limit_hit() {
.unwrap();
wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
codex
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: POST_AUTO_USER_MSG.into(),
}],
})
.await
.unwrap();
wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
let requests = server.received_requests().await.unwrap();
assert!(
requests.len() >= 3,
"auto compact should add at least a third request, got {}",
assert_eq!(
requests.len(),
5,
"expected user turns, a compaction request, a resumed turn, and the follow-up turn; got {}",
requests.len()
);
let is_auto_compact = |req: &wiremock::Request| {
std::str::from_utf8(&req.body)
.unwrap_or("")
.contains("You have exceeded the maximum number of tokens")
.contains(COMPACT_PROMPT_MARKER)
};
let auto_compact_count = requests.iter().filter(|req| is_auto_compact(req)).count();
assert_eq!(
@@ -442,11 +507,41 @@ async fn auto_compact_runs_after_token_limit_hit() {
"auto compact should add a third request"
);
let resume_index = requests
.iter()
.enumerate()
.find_map(|(idx, req)| {
let body = std::str::from_utf8(&req.body).unwrap_or("");
(body.contains(AUTO_SUMMARY_TEXT)
&& !body.contains(COMPACT_PROMPT_MARKER)
&& !body.contains(POST_AUTO_USER_MSG))
.then_some(idx)
})
.expect("resume request missing after compaction");
let follow_up_index = requests
.iter()
.enumerate()
.rev()
.find_map(|(idx, req)| {
let body = std::str::from_utf8(&req.body).unwrap_or("");
(body.contains(POST_AUTO_USER_MSG) && !body.contains(COMPACT_PROMPT_MARKER))
.then_some(idx)
})
.expect("follow-up request missing");
assert_eq!(follow_up_index, 4, "follow-up request should be last");
let body_first = requests[0].body_json::<serde_json::Value>().unwrap();
let body3 = requests[auto_compact_index]
let body_auto = requests[auto_compact_index]
.body_json::<serde_json::Value>()
.unwrap();
let instructions = body3
let body_resume = requests[resume_index]
.body_json::<serde_json::Value>()
.unwrap();
let body_follow_up = requests[follow_up_index]
.body_json::<serde_json::Value>()
.unwrap();
let instructions = body_auto
.get("instructions")
.and_then(|v| v.as_str())
.unwrap_or_default();
@@ -460,13 +555,16 @@ async fn auto_compact_runs_after_token_limit_hit() {
"auto compact should keep the standard developer instructions",
);
let input3 = body3.get("input").and_then(|v| v.as_array()).unwrap();
let last3 = input3
let input_auto = body_auto.get("input").and_then(|v| v.as_array()).unwrap();
let last_auto = input_auto
.last()
.expect("auto compact request should append a user message");
assert_eq!(last3.get("type").and_then(|v| v.as_str()), Some("message"));
assert_eq!(last3.get("role").and_then(|v| v.as_str()), Some("user"));
let last_text = last3
assert_eq!(
last_auto.get("type").and_then(|v| v.as_str()),
Some("message")
);
assert_eq!(last_auto.get("role").and_then(|v| v.as_str()), Some("user"));
let last_text = last_auto
.get("content")
.and_then(|v| v.as_array())
.and_then(|items| items.first())
@@ -474,9 +572,59 @@ async fn auto_compact_runs_after_token_limit_hit() {
.and_then(|text| text.as_str())
.unwrap_or_default();
assert_eq!(
last_text, SUMMARIZATION_PROMPT,
last_text, TEST_COMPACT_PROMPT,
"auto compact should send the summarization prompt as a user message",
);
let input_resume = body_resume.get("input").and_then(|v| v.as_array()).unwrap();
assert!(
input_resume.iter().any(|item| {
item.get("type").and_then(|v| v.as_str()) == Some("message")
&& item.get("role").and_then(|v| v.as_str()) == Some("user")
&& item
.get("content")
.and_then(|v| v.as_array())
.and_then(|arr| arr.first())
.and_then(|entry| entry.get("text"))
.and_then(|v| v.as_str())
== Some(AUTO_SUMMARY_TEXT)
}),
"resume request should include compacted history"
);
let input_follow_up = body_follow_up
.get("input")
.and_then(|v| v.as_array())
.unwrap();
let user_texts: Vec<String> = input_follow_up
.iter()
.filter(|item| item.get("type").and_then(|v| v.as_str()) == Some("message"))
.filter(|item| item.get("role").and_then(|v| v.as_str()) == Some("user"))
.filter_map(|item| {
item.get("content")
.and_then(|v| v.as_array())
.and_then(|arr| arr.first())
.and_then(|entry| entry.get("text"))
.and_then(|v| v.as_str())
.map(std::string::ToString::to_string)
})
.collect();
assert!(
user_texts.iter().any(|text| text == FIRST_AUTO_MSG),
"auto compact follow-up request should include the first user message"
);
assert!(
user_texts.iter().any(|text| text == SECOND_AUTO_MSG),
"auto compact follow-up request should include the second user message"
);
assert!(
user_texts.iter().any(|text| text == POST_AUTO_USER_MSG),
"auto compact follow-up request should include the new user message"
);
assert!(
user_texts.iter().any(|text| text == AUTO_SUMMARY_TEXT),
"auto compact follow-up request should include the summary message"
);
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
@@ -495,8 +643,9 @@ async fn auto_compact_persists_rollout_entries() {
ev_completed_with_tokens("r2", 330_000),
]);
let auto_summary_payload = auto_summary(AUTO_SUMMARY_TEXT);
let sse3 = sse(vec![
ev_assistant_message("m3", AUTO_SUMMARY_TEXT),
ev_assistant_message("m3", &auto_summary_payload),
ev_completed_with_tokens("r3", 200),
]);
@@ -504,7 +653,7 @@ async fn auto_compact_persists_rollout_entries() {
let body = std::str::from_utf8(&req.body).unwrap_or("");
body.contains(FIRST_AUTO_MSG)
&& !body.contains(SECOND_AUTO_MSG)
&& !body.contains("You have exceeded the maximum number of tokens")
&& !body.contains(COMPACT_PROMPT_MARKER)
};
mount_sse_once_match(&server, first_matcher, sse1).await;
@@ -512,13 +661,13 @@ async fn auto_compact_persists_rollout_entries() {
let body = std::str::from_utf8(&req.body).unwrap_or("");
body.contains(SECOND_AUTO_MSG)
&& body.contains(FIRST_AUTO_MSG)
&& !body.contains("You have exceeded the maximum number of tokens")
&& !body.contains(COMPACT_PROMPT_MARKER)
};
mount_sse_once_match(&server, second_matcher, sse2).await;
let third_matcher = |req: &wiremock::Request| {
let body = std::str::from_utf8(&req.body).unwrap_or("");
body.contains("You have exceeded the maximum number of tokens")
body.contains(COMPACT_PROMPT_MARKER)
};
mount_sse_once_match(&server, third_matcher, sse3).await;
@@ -530,6 +679,7 @@ async fn auto_compact_persists_rollout_entries() {
let home = TempDir::new().unwrap();
let mut config = load_default_config_for_test(&home);
config.model_provider = model_provider;
set_test_compact_prompt(&mut config);
let conversation_manager = ConversationManager::with_auth(CodexAuth::from_api_key("dummy"));
let NewConversation {
conversation: codex,
@@ -603,8 +753,9 @@ async fn auto_compact_stops_after_failed_attempt() {
ev_completed_with_tokens("r1", 500),
]);
let summary_payload = auto_summary(SUMMARY_TEXT);
let sse2 = sse(vec![
ev_assistant_message("m2", SUMMARY_TEXT),
ev_assistant_message("m2", &summary_payload),
ev_completed_with_tokens("r2", 50),
]);
@@ -615,21 +766,19 @@ async fn auto_compact_stops_after_failed_attempt() {
let first_matcher = |req: &wiremock::Request| {
let body = std::str::from_utf8(&req.body).unwrap_or("");
body.contains(FIRST_AUTO_MSG)
&& !body.contains("You have exceeded the maximum number of tokens")
body.contains(FIRST_AUTO_MSG) && !body.contains(COMPACT_PROMPT_MARKER)
};
mount_sse_once_match(&server, first_matcher, sse1.clone()).await;
let second_matcher = |req: &wiremock::Request| {
let body = std::str::from_utf8(&req.body).unwrap_or("");
body.contains("You have exceeded the maximum number of tokens")
body.contains(COMPACT_PROMPT_MARKER)
};
mount_sse_once_match(&server, second_matcher, sse2.clone()).await;
let third_matcher = |req: &wiremock::Request| {
let body = std::str::from_utf8(&req.body).unwrap_or("");
!body.contains("You have exceeded the maximum number of tokens")
&& body.contains(SUMMARY_TEXT)
!body.contains(COMPACT_PROMPT_MARKER) && body.contains(SUMMARY_TEXT)
};
mount_sse_once_match(&server, third_matcher, sse3.clone()).await;
@@ -641,6 +790,7 @@ async fn auto_compact_stops_after_failed_attempt() {
let home = TempDir::new().unwrap();
let mut config = load_default_config_for_test(&home);
config.model_provider = model_provider;
set_test_compact_prompt(&mut config);
config.model_auto_compact_token_limit = Some(200);
let conversation_manager = ConversationManager::with_auth(CodexAuth::from_api_key("dummy"));
let codex = conversation_manager
@@ -689,7 +839,7 @@ async fn auto_compact_stops_after_failed_attempt() {
.and_then(|items| items.first())
.and_then(|entry| entry.get("text"))
.and_then(|text| text.as_str())
.map(|text| text == SUMMARIZATION_PROMPT)
.map(|text| text == TEST_COMPACT_PROMPT)
.unwrap_or(false)
});
assert!(
@@ -736,6 +886,7 @@ async fn manual_compact_retries_after_context_window_error() {
let home = TempDir::new().unwrap();
let mut config = load_default_config_for_test(&home);
config.model_provider = model_provider;
set_test_compact_prompt(&mut config);
config.model_auto_compact_token_limit = Some(200_000);
let codex = ConversationManager::with_auth(CodexAuth::from_api_key("dummy"))
.new_conversation(config)
@@ -795,7 +946,7 @@ async fn manual_compact_retries_after_context_window_error() {
.and_then(|items| items.first())
.and_then(|entry| entry.get("text"))
.and_then(|text| text.as_str()),
Some(SUMMARIZATION_PROMPT),
Some(TEST_COMPACT_PROMPT),
"compact attempt should include summarization prompt"
);
assert_eq!(
@@ -806,7 +957,7 @@ async fn manual_compact_retries_after_context_window_error() {
.and_then(|items| items.first())
.and_then(|entry| entry.get("text"))
.and_then(|text| text.as_str()),
Some(SUMMARIZATION_PROMPT),
Some(TEST_COMPACT_PROMPT),
"retry attempt should include summarization prompt"
);
assert_eq!(
@@ -826,6 +977,228 @@ async fn manual_compact_retries_after_context_window_error() {
}
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn manual_compact_twice_preserves_latest_user_messages() {
skip_if_no_network!();
let first_user_message = "first manual turn";
let second_user_message = "second manual turn";
let final_user_message = "post compact follow-up";
let first_summary = "FIRST_MANUAL_SUMMARY";
let second_summary = "SECOND_MANUAL_SUMMARY";
let server = start_mock_server().await;
let first_turn = sse(vec![
ev_assistant_message("m1", FIRST_REPLY),
ev_completed("r1"),
]);
let first_compact_summary = auto_summary(first_summary);
let first_compact = sse(vec![
ev_assistant_message("m2", &first_compact_summary),
ev_completed("r2"),
]);
let second_turn = sse(vec![
ev_assistant_message("m3", SECOND_LARGE_REPLY),
ev_completed("r3"),
]);
let second_compact_summary = auto_summary(second_summary);
let second_compact = sse(vec![
ev_assistant_message("m4", &second_compact_summary),
ev_completed("r4"),
]);
let final_turn = sse(vec![
ev_assistant_message("m5", FINAL_REPLY),
ev_completed("r5"),
]);
let responses_mock = mount_sse_sequence(
&server,
vec![
first_turn,
first_compact,
second_turn,
second_compact,
final_turn,
],
)
.await;
let model_provider = ModelProviderInfo {
base_url: Some(format!("{}/v1", server.uri())),
..built_in_model_providers()["openai"].clone()
};
let home = TempDir::new().unwrap();
let mut config = load_default_config_for_test(&home);
config.model_provider = model_provider;
set_test_compact_prompt(&mut config);
let codex = ConversationManager::with_auth(CodexAuth::from_api_key("dummy"))
.new_conversation(config)
.await
.unwrap()
.conversation;
codex
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: first_user_message.into(),
}],
})
.await
.unwrap();
wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
codex.submit(Op::Compact).await.unwrap();
wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
codex
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: second_user_message.into(),
}],
})
.await
.unwrap();
wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
codex.submit(Op::Compact).await.unwrap();
wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
codex
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: final_user_message.into(),
}],
})
.await
.unwrap();
wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
let requests = responses_mock.requests();
assert_eq!(
requests.len(),
5,
"expected exactly 5 requests (user turn, compact, user turn, compact, final turn)"
);
let contains_user_text = |input: &[serde_json::Value], expected: &str| -> bool {
input.iter().any(|item| {
item.get("type").and_then(|v| v.as_str()) == Some("message")
&& item.get("role").and_then(|v| v.as_str()) == Some("user")
&& item
.get("content")
.and_then(|v| v.as_array())
.map(|arr| {
arr.iter().any(|entry| {
entry.get("text").and_then(|v| v.as_str()) == Some(expected)
})
})
.unwrap_or(false)
})
};
let first_turn_input = requests[0].input();
assert!(
contains_user_text(&first_turn_input, first_user_message),
"first turn request missing first user message"
);
assert!(
!contains_user_text(&first_turn_input, TEST_COMPACT_PROMPT),
"first turn request should not include summarization prompt"
);
let first_compact_input = requests[1].input();
assert!(
contains_user_text(&first_compact_input, TEST_COMPACT_PROMPT),
"first compact request should include summarization prompt"
);
assert!(
contains_user_text(&first_compact_input, first_user_message),
"first compact request should include history before compaction"
);
let second_turn_input = requests[2].input();
assert!(
contains_user_text(&second_turn_input, second_user_message),
"second turn request missing second user message"
);
assert!(
contains_user_text(&second_turn_input, first_user_message),
"second turn request should include the compacted user history"
);
let second_compact_input = requests[3].input();
assert!(
contains_user_text(&second_compact_input, TEST_COMPACT_PROMPT),
"second compact request should include summarization prompt"
);
assert!(
contains_user_text(&second_compact_input, second_user_message),
"second compact request should include latest history"
);
let mut final_output = requests
.last()
.unwrap_or_else(|| panic!("final turn request missing for {final_user_message}"))
.input()
.into_iter()
.collect::<VecDeque<_>>();
// System prompt
final_output.pop_front();
// Developer instructions
final_output.pop_front();
let _ = final_output
.iter_mut()
.map(drop_call_id)
.collect::<Vec<_>>();
let expected = vec![
json!({
"content": vec![json!({
"text": first_user_message,
"type": "input_text",
})],
"role": "user",
"type": "message",
}),
json!({
"content": vec![json!({
"text": first_summary,
"type": "input_text",
})],
"role": "user",
"type": "message",
}),
json!({
"content": vec![json!({
"text": second_user_message,
"type": "input_text",
})],
"role": "user",
"type": "message",
}),
json!({
"content": vec![json!({
"text": second_summary,
"type": "input_text",
})],
"role": "user",
"type": "message",
}),
json!({
"content": vec![json!({
"text": final_user_message,
"type": "input_text",
})],
"role": "user",
"type": "message",
}),
];
assert_eq!(final_output, expected);
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn auto_compact_allows_multiple_attempts_when_interleaved_with_other_turn_events() {
skip_if_no_network!();
@@ -836,8 +1209,9 @@ async fn auto_compact_allows_multiple_attempts_when_interleaved_with_other_turn_
ev_assistant_message("m1", FIRST_REPLY),
ev_completed_with_tokens("r1", 500),
]);
let first_summary_payload = auto_summary(FIRST_AUTO_SUMMARY);
let sse2 = sse(vec![
ev_assistant_message("m2", FIRST_AUTO_SUMMARY),
ev_assistant_message("m2", &first_summary_payload),
ev_completed_with_tokens("r2", 50),
]);
let sse3 = sse(vec![
@@ -848,8 +1222,9 @@ async fn auto_compact_allows_multiple_attempts_when_interleaved_with_other_turn_
ev_assistant_message("m4", SECOND_LARGE_REPLY),
ev_completed_with_tokens("r4", 450),
]);
let second_summary_payload = auto_summary(SECOND_AUTO_SUMMARY);
let sse5 = sse(vec![
ev_assistant_message("m5", SECOND_AUTO_SUMMARY),
ev_assistant_message("m5", &second_summary_payload),
ev_completed_with_tokens("r5", 60),
]);
let sse6 = sse(vec![
@@ -867,6 +1242,7 @@ async fn auto_compact_allows_multiple_attempts_when_interleaved_with_other_turn_
let home = TempDir::new().unwrap();
let mut config = load_default_config_for_test(&home);
config.model_provider = model_provider;
set_test_compact_prompt(&mut config);
config.model_auto_compact_token_limit = Some(200);
let conversation_manager = ConversationManager::with_auth(CodexAuth::from_api_key("dummy"));
let codex = conversation_manager
@@ -925,7 +1301,7 @@ async fn auto_compact_allows_multiple_attempts_when_interleaved_with_other_turn_
"first request should contain the user input"
);
assert!(
request_bodies[1].contains("You have exceeded the maximum number of tokens"),
request_bodies[1].contains(COMPACT_PROMPT_MARKER),
"first auto compact request should include the summarization prompt"
);
assert!(
@@ -933,7 +1309,7 @@ async fn auto_compact_allows_multiple_attempts_when_interleaved_with_other_turn_
"function call output should be sent before the second auto compact"
);
assert!(
request_bodies[4].contains("You have exceeded the maximum number of tokens"),
request_bodies[4].contains(COMPACT_PROMPT_MARKER),
"second auto compact request should include the summarization prompt"
);
}
@@ -956,8 +1332,9 @@ async fn auto_compact_triggers_after_function_call_over_95_percent_usage() {
ev_assistant_message("m2", FINAL_REPLY),
ev_completed_with_tokens("r2", over_limit_tokens),
]);
let auto_summary_payload = auto_summary(AUTO_SUMMARY_TEXT);
let auto_compact_turn = sse(vec![
ev_assistant_message("m3", AUTO_SUMMARY_TEXT),
ev_assistant_message("m3", &auto_summary_payload),
ev_completed_with_tokens("r3", 10),
]);
let post_auto_compact_turn = sse(vec![ev_completed_with_tokens("r4", 10)]);
@@ -977,6 +1354,7 @@ async fn auto_compact_triggers_after_function_call_over_95_percent_usage() {
let home = TempDir::new().unwrap();
let mut config = load_default_config_for_test(&home);
config.model_provider = model_provider;
set_test_compact_prompt(&mut config);
config.model_context_window = Some(context_window);
config.model_auto_compact_token_limit = Some(limit);
@@ -1027,7 +1405,7 @@ async fn auto_compact_triggers_after_function_call_over_95_percent_usage() {
let auto_compact_body = auto_compact_mock.single_request().body_json().to_string();
assert!(
auto_compact_body.contains("You have exceeded the maximum number of tokens"),
auto_compact_body.contains(COMPACT_PROMPT_MARKER),
"auto compact request should include the summarization prompt after exceeding 95% (limit {limit})"
);
}

View File

@@ -10,13 +10,13 @@
use super::compact::COMPACT_WARNING_MESSAGE;
use super::compact::FIRST_REPLY;
use super::compact::SUMMARY_TEXT;
use super::compact::TEST_COMPACT_PROMPT;
use codex_core::CodexAuth;
use codex_core::CodexConversation;
use codex_core::ConversationManager;
use codex_core::ModelProviderInfo;
use codex_core::NewConversation;
use codex_core::built_in_model_providers;
use codex_core::codex::compact::SUMMARIZATION_PROMPT;
use codex_core::config::Config;
use codex_core::config::OPENAI_DEFAULT_MODEL;
use codex_core::protocol::EventMsg;
@@ -38,6 +38,8 @@ use tempfile::TempDir;
use wiremock::MockServer;
const AFTER_SECOND_RESUME: &str = "AFTER_SECOND_RESUME";
const COMPACT_PROMPT_MARKER: &str =
"You are performing a CONTEXT CHECKPOINT COMPACTION for a tool.";
fn network_disabled() -> bool {
std::env::var(CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR).is_ok()
@@ -66,6 +68,27 @@ fn is_ghost_snapshot_message(item: &Value) -> bool {
.is_some_and(|text| text.trim_start().starts_with("<ghost_snapshot>"))
}
fn extract_summary_message(request: &Value, summary_text: &str) -> Value {
request
.get("input")
.and_then(Value::as_array)
.and_then(|items| {
items.iter().find(|item| {
item.get("type").and_then(Value::as_str) == Some("message")
&& item.get("role").and_then(Value::as_str) == Some("user")
&& item
.get("content")
.and_then(Value::as_array)
.and_then(|arr| arr.first())
.and_then(|entry| entry.get("text"))
.and_then(Value::as_str)
== Some(summary_text)
})
})
.cloned()
.unwrap_or_else(|| panic!("expected summary message {summary_text}"))
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
/// Scenario: compact an initial conversation, resume it, fork one turn back, and
/// ensure the model-visible history matches expectations at each request.
@@ -157,6 +180,9 @@ async fn compact_resume_and_fork_preserve_model_history_view() {
.unwrap_or_default()
.to_string();
let expected_model = OPENAI_DEFAULT_MODEL;
let summary_after_compact = extract_summary_message(&requests[2], SUMMARY_TEXT);
let summary_after_resume = extract_summary_message(&requests[3], SUMMARY_TEXT);
let summary_after_fork = extract_summary_message(&requests[4], SUMMARY_TEXT);
let user_turn_1 = json!(
{
"model": expected_model,
@@ -257,7 +283,7 @@ async fn compact_resume_and_fork_preserve_model_history_view() {
"content": [
{
"type": "input_text",
"text": SUMMARIZATION_PROMPT
"text": TEST_COMPACT_PROMPT
}
]
}
@@ -306,16 +332,11 @@ async fn compact_resume_and_fork_preserve_model_history_view() {
"content": [
{
"type": "input_text",
"text": "You were originally given instructions from a user over one or more turns. Here were the user messages:
hello world
Another language model started to solve this problem and produced a summary of its thinking process. You also have access to the state of the tools that were used by that language model. Use this to build on the work that has already been done and avoid duplicating work. Here is the summary produced by the other language model, use the information in this summary to assist with your own analysis:
SUMMARY_ONLY_CONTEXT"
"text": "hello world"
}
]
},
summary_after_compact,
{
"type": "message",
"role": "user",
@@ -371,16 +392,11 @@ SUMMARY_ONLY_CONTEXT"
"content": [
{
"type": "input_text",
"text": "You were originally given instructions from a user over one or more turns. Here were the user messages:
hello world
Another language model started to solve this problem and produced a summary of its thinking process. You also have access to the state of the tools that were used by that language model. Use this to build on the work that has already been done and avoid duplicating work. Here is the summary produced by the other language model, use the information in this summary to assist with your own analysis:
SUMMARY_ONLY_CONTEXT"
"text": "hello world"
}
]
},
summary_after_resume,
{
"type": "message",
"role": "user",
@@ -456,16 +472,11 @@ SUMMARY_ONLY_CONTEXT"
"content": [
{
"type": "input_text",
"text": "You were originally given instructions from a user over one or more turns. Here were the user messages:
hello world
Another language model started to solve this problem and produced a summary of its thinking process. You also have access to the state of the tools that were used by that language model. Use this to build on the work that has already been done and avoid duplicating work. Here is the summary produced by the other language model, use the information in this summary to assist with your own analysis:
SUMMARY_ONLY_CONTEXT"
"text": "hello world"
}
]
},
summary_after_fork,
{
"type": "message",
"role": "user",
@@ -605,6 +616,11 @@ async fn compact_resume_after_second_compaction_preserves_history() {
.unwrap_or_default()
.to_string();
// Build expected final request input: initial context + forked user message +
// compacted summary + post-compact user message + resumed user message.
let summary_after_second_compact =
extract_summary_message(&requests[requests.len() - 3], SUMMARY_TEXT);
let mut expected = json!([
{
"instructions": prompt,
@@ -635,10 +651,11 @@ async fn compact_resume_after_second_compaction_preserves_history() {
"content": [
{
"type": "input_text",
"text": "You were originally given instructions from a user over one or more turns. Here were the user messages:\n\nAFTER_FORK\n\nAnother language model started to solve this problem and produced a summary of its thinking process. You also have access to the state of the tools that were used by that language model. Use this to build on the work that has already been done and avoid duplicating work. Here is the summary produced by the other language model, use the information in this summary to assist with your own analysis:\n\nSUMMARY_ONLY_CONTEXT"
"text": "AFTER_FORK"
}
]
},
summary_after_second_compact,
{
"type": "message",
"role": "user",
@@ -724,7 +741,7 @@ async fn mount_initial_flow(server: &MockServer) {
let match_first = |req: &wiremock::Request| {
let body = std::str::from_utf8(&req.body).unwrap_or("");
body.contains("\"text\":\"hello world\"")
&& !body.contains("You have exceeded the maximum number of tokens")
&& !body.contains(COMPACT_PROMPT_MARKER)
&& !body.contains(&format!("\"text\":\"{SUMMARY_TEXT}\""))
&& !body.contains("\"text\":\"AFTER_COMPACT\"")
&& !body.contains("\"text\":\"AFTER_RESUME\"")
@@ -734,7 +751,7 @@ async fn mount_initial_flow(server: &MockServer) {
let match_compact = |req: &wiremock::Request| {
let body = std::str::from_utf8(&req.body).unwrap_or("");
body.contains("You have exceeded the maximum number of tokens")
body.contains(COMPACT_PROMPT_MARKER)
};
mount_sse_once_match(server, match_compact, sse2).await;
@@ -768,8 +785,7 @@ async fn mount_second_compact_flow(server: &MockServer) {
let match_second_compact = |req: &wiremock::Request| {
let body = std::str::from_utf8(&req.body).unwrap_or("");
body.contains("You have exceeded the maximum number of tokens")
&& body.contains("AFTER_FORK")
body.contains(COMPACT_PROMPT_MARKER) && body.contains("AFTER_FORK")
};
mount_sse_once_match(server, match_second_compact, sse6).await;
@@ -790,6 +806,7 @@ async fn start_test_conversation(
let home = TempDir::new().expect("create temp dir");
let mut config = load_default_config_for_test(&home);
config.model_provider = model_provider;
config.compact_prompt = Some(TEST_COMPACT_PROMPT.to_string());
let manager = ConversationManager::with_auth(CodexAuth::from_api_key("dummy"));
let NewConversation { conversation, .. } = manager

View File

@@ -43,7 +43,7 @@ async fn emits_deprecation_notice_for_legacy_feature_flag() -> anyhow::Result<()
assert_eq!(
details.as_deref(),
Some(
"You can either enable it using the CLI with `--enable streamable_shell` or through the config.toml file with `[features].streamable_shell`"
"Enable it with `--enable streamable_shell` or `[features].streamable_shell` in config.toml. See https://github.com/openai/codex/blob/main/docs/config.md#feature-flags for details."
),
);

View File

@@ -8,6 +8,7 @@ mod apply_patch_cli;
mod apply_patch_freeform;
#[cfg(not(target_os = "windows"))]
mod approvals;
mod auth_refresh;
mod cli_stream;
mod client;
mod codex_delegate;

View File

@@ -11,6 +11,9 @@ use core_test_support::skip_if_no_network;
use core_test_support::test_codex::TestCodex;
use core_test_support::test_codex::test_codex;
use core_test_support::wait_for_event;
use pretty_assertions::assert_eq;
use serde_json::Value;
use serde_json::json;
use tempfile::TempDir;
use wiremock::matchers::any;
@@ -21,6 +24,10 @@ use responses::start_mock_server;
use std::time::Duration;
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
#[ignore = "flaky on ubuntu-24.04-arm - aarch64-unknown-linux-gnu"]
// The notify script gets far enough to create (and therefore surface) the file,
// but hasn't flushed the JSON yet. Reading an empty file produces "EOF while parsing
// a value at line 1 column 0". This may be caused by a slow runner.
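// (Hypothetical mitigation, not part of this change: poll until the file is
// non-empty before parsing, bounded by the same timeout, e.g.
//     let deadline = tokio::time::Instant::now() + Duration::from_secs(5);
//     let notify_payload_raw = loop {
//         let contents = tokio::fs::read_to_string(&notify_file).await?;
//         if !contents.trim().is_empty() { break contents; }
//         anyhow::ensure!(tokio::time::Instant::now() < deadline, "notify.txt stayed empty");
//         tokio::time::sleep(Duration::from_millis(10)).await;
//     };
// )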
async fn summarize_context_three_requests_and_instructions() -> anyhow::Result<()> {
skip_if_no_network!(Ok(()));
@@ -61,6 +68,12 @@ echo -n "${@: -1}" > $(dirname "${0}")/notify.txt"#,
// We fork the notify script, so we need to wait for it to write to the file.
fs_wait::wait_for_path_exists(&notify_file, Duration::from_secs(5)).await?;
let notify_payload_raw = tokio::fs::read_to_string(&notify_file).await?;
let payload: Value = serde_json::from_str(&notify_payload_raw)?;
assert_eq!(payload["type"], json!("agent-turn-complete"));
assert_eq!(payload["input-messages"], json!(["hello world"]));
assert_eq!(payload["last-assistant-message"], json!("Done"));
Ok(())
}

View File

@@ -5,7 +5,7 @@
lib,
...
}:
rustPlatform.buildRustPackage (finalAttrs: {
rustPlatform.buildRustPackage (_: {
env = {
PKG_CONFIG_PATH = "${openssl.dev}/lib/pkgconfig:$PKG_CONFIG_PATH";
};
@@ -21,6 +21,7 @@ rustPlatform.buildRustPackage (finalAttrs: {
cargoLock.outputHashes = {
"ratatui-0.29.0" = "sha256-HBvT5c8GsiCxMffNjJGLmHnvG77A6cqEL+1ARurBXho=";
"crossterm-0.28.1" = "sha256-6qCtfSMuXACKFb9ATID39XyFDIEMFDmbx6SSmNe+728=";
};
meta = with lib; {

View File

@@ -159,6 +159,8 @@ pub fn run(
.threads(num_walk_builder_threads)
// Allow hidden entries.
.hidden(false)
// Follow symlinks to search their contents.
.follow_links(true)
// Don't require git to be present to apply git-related ignore rules.
.require_git(false);
if !respect_gitignore {

View File

@@ -8,7 +8,6 @@ use codex_protocol::models::ResponseItem;
use codex_protocol::protocol::AskForApproval;
use codex_protocol::protocol::ReviewDecision;
use codex_protocol::protocol::SandboxPolicy;
use codex_protocol::protocol::SandboxRiskCategory;
use codex_protocol::protocol::SandboxRiskLevel;
use codex_protocol::user_input::UserInput;
use eventsource_stream::Event as StreamEvent;
@@ -373,19 +372,9 @@ impl OtelEventManager {
call_id: &str,
status: &str,
risk_level: Option<SandboxRiskLevel>,
risk_categories: &[SandboxRiskCategory],
duration: Duration,
) {
let level = risk_level.map(|level| level.as_str());
let categories = if risk_categories.is_empty() {
String::new()
} else {
risk_categories
.iter()
.map(SandboxRiskCategory::as_str)
.collect::<Vec<_>>()
.join(", ")
};
tracing::event!(
tracing::Level::INFO,
@@ -402,7 +391,6 @@ impl OtelEventManager {
call_id = %call_id,
status = %status,
risk_level = level,
risk_categories = categories,
duration_ms = %duration.as_millis(),
);
}

View File

@@ -16,24 +16,10 @@ pub enum SandboxRiskLevel {
High,
}
#[derive(Debug, Clone, Copy, Deserialize, Serialize, PartialEq, Eq, Hash, JsonSchema, TS)]
#[serde(rename_all = "snake_case")]
pub enum SandboxRiskCategory {
DataDeletion,
DataExfiltration,
PrivilegeEscalation,
SystemModification,
NetworkAccess,
ResourceExhaustion,
Compliance,
}
#[derive(Debug, Clone, Deserialize, Serialize, PartialEq, Eq, JsonSchema, TS)]
pub struct SandboxCommandAssessment {
pub description: String,
pub risk_level: SandboxRiskLevel,
#[serde(default, skip_serializing_if = "Vec::is_empty")]
pub risk_categories: Vec<SandboxRiskCategory>,
}
impl SandboxRiskLevel {
@@ -46,20 +32,6 @@ impl SandboxRiskLevel {
}
}
impl SandboxRiskCategory {
pub fn as_str(&self) -> &'static str {
match self {
Self::DataDeletion => "data_deletion",
Self::DataExfiltration => "data_exfiltration",
Self::PrivilegeEscalation => "privilege_escalation",
Self::SystemModification => "system_modification",
Self::NetworkAccess => "network_access",
Self::ResourceExhaustion => "resource_exhaustion",
Self::Compliance => "compliance",
}
}
}
#[derive(Debug, Clone, Deserialize, Serialize, JsonSchema, TS)]
pub struct ExecApprovalRequestEvent {
/// Identifier for the associated exec call, if available.

View File

@@ -37,7 +37,6 @@ use ts_rs::TS;
pub use crate::approvals::ApplyPatchApprovalRequestEvent;
pub use crate::approvals::ExecApprovalRequestEvent;
pub use crate::approvals::SandboxCommandAssessment;
pub use crate::approvals::SandboxRiskCategory;
pub use crate::approvals::SandboxRiskLevel;
/// Open/close tags for special user-input blocks. Used across crates to avoid
@@ -813,12 +812,8 @@ impl TokenUsage {
(self.non_cached_input() + self.output_tokens.max(0)).max(0)
}
/// For estimating what % of the model's context window is used, we need to account
/// for reasoning output tokens from prior turns being dropped from the context window.
/// We approximate this here by subtracting reasoning output tokens from the total.
/// This will be off for the current turn and pending function calls.
pub fn tokens_in_context_window(&self) -> i64 {
(self.total_tokens - self.reasoning_output_tokens).max(0)
self.total_tokens
}
/// Estimate the remaining user-controllable percentage of the model's context window.

View File

@@ -40,12 +40,23 @@ curl --fail --silent --show-error "${PROXY_BASE_URL}/shutdown"
## CLI
```
codex-responses-api-proxy [--port <PORT>] [--server-info <FILE>] [--http-shutdown]
codex-responses-api-proxy [--port <PORT>] [--server-info <FILE>] [--http-shutdown] [--upstream-url <URL>]
```
- `--port <PORT>`: Port to bind on `127.0.0.1`. If omitted, an ephemeral port is chosen.
- `--server-info <FILE>`: If set, the proxy writes a single line of JSON with `{ "port": <PORT>, "pid": <PID> }` once listening.
- `--http-shutdown`: If set, enables `GET /shutdown` to exit the process with code `0`.
- `--upstream-url <URL>`: Absolute URL to forward requests to. Defaults to `https://api.openai.com/v1/responses`.
- Authentication is fixed to `Authorization: Bearer <key>` to match the Codex CLI's expectations.
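For the default OpenAI upstream, a minimal invocation (assuming `OPENAI_API_KEY` is exported) looks like:

```shell
printenv OPENAI_API_KEY | codex-responses-api-proxy \
  --http-shutdown \
  --server-info /tmp/server-info.json
```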
For example, to target Azure (ensure your deployment accepts `Authorization: Bearer <key>`):
```shell
printenv AZURE_OPENAI_API_KEY | env -u AZURE_OPENAI_API_KEY codex-responses-api-proxy \
--http-shutdown \
--server-info /tmp/server-info.json \
--upstream-url "https://YOUR_PROJECT_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT/responses?api-version=2025-04-01-preview"
```
## Notes
@@ -57,7 +68,7 @@ codex-responses-api-proxy [--port <PORT>] [--server-info <FILE>] [--http-shutdow
Care is taken to restrict access/copying to the value of `OPENAI_API_KEY` retained in memory:
- We leverage [`codex_process_hardening`](https://github.com/openai/codex/blob/main/codex-rs/process-hardening/README.md) so `codex-responses-api-proxy` is run with standard process-hardening techniques.
- At startup, we allocate a `1024` byte buffer on the stack and write `"Bearer "` as the first `7` bytes.
- At startup, we allocate a `1024` byte buffer on the stack and copy `"Bearer "` into the start of the buffer.
- We then read from `stdin`, copying the contents into the buffer after `"Bearer "`.
- After verifying the key matches `/^[a-zA-Z0-9_-]+$/` (and does not exceed the buffer), we create a `String` from that buffer (so the data is now on the heap).
- We zero out the stack-allocated buffer using https://crates.io/crates/zeroize so it is not optimized away by the compiler.
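A minimal sketch of that stack-buffer-plus-zeroize pattern (illustrative only; the buffer size and helper name here are assumptions, not the crate's actual code):

```rust
use std::io::Read;
use zeroize::Zeroize;

const PREFIX: &[u8] = b"Bearer ";

fn read_auth_header() -> anyhow::Result<String> {
    // Fixed-size stack buffer with the header prefix written up front.
    let mut buf = [0u8; 1024];
    buf[..PREFIX.len()].copy_from_slice(PREFIX);

    // Copy the key from stdin directly after "Bearer ".
    let mut n = std::io::stdin().read(&mut buf[PREFIX.len()..])?;
    while n > 0 && matches!(buf[PREFIX.len() + n - 1], b'\n' | b'\r') {
        n -= 1; // drop the trailing newline left by `printenv ... |`
    }

    // Enforce /^[a-zA-Z0-9_-]+$/ before the key ever reaches the heap.
    let key = &buf[PREFIX.len()..PREFIX.len() + n];
    if n == 0 || !key.iter().all(|b| b.is_ascii_alphanumeric() || *b == b'-' || *b == b'_') {
        buf.zeroize();
        anyhow::bail!("API key may only contain ASCII letters, numbers, '-' or '_'");
    }

    // Move "Bearer <key>" to the heap, then wipe the stack copy; zeroize
    // guarantees the wipe is not optimized away by the compiler.
    let header = String::from_utf8(buf[..PREFIX.len() + n].to_vec())?;
    buf.zeroize();
    Ok(header)
}
```

The real implementation additionally reads in a loop until newline/EOF and rejects keys that would overflow the buffer, as the error-message changes later in this diff show.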

View File

@@ -12,6 +12,7 @@ use anyhow::Context;
use anyhow::Result;
use anyhow::anyhow;
use clap::Parser;
use reqwest::Url;
use reqwest::blocking::Client;
use reqwest::header::AUTHORIZATION;
use reqwest::header::HOST;
@@ -44,6 +45,10 @@ pub struct Args {
/// Enable HTTP shutdown endpoint at GET /shutdown
#[arg(long)]
pub http_shutdown: bool,
/// Absolute URL the proxy should forward requests to (defaults to OpenAI).
#[arg(long, default_value = "https://api.openai.com/v1/responses")]
pub upstream_url: String,
}
#[derive(Serialize)]
@@ -52,10 +57,29 @@ struct ServerInfo {
pid: u32,
}
struct ForwardConfig {
upstream_url: Url,
host_header: HeaderValue,
}
/// Entry point for the library main, for parity with other crates.
pub fn run_main(args: Args) -> Result<()> {
let auth_header = read_auth_header_from_stdin()?;
let upstream_url = Url::parse(&args.upstream_url).context("parsing --upstream-url")?;
let host = match (upstream_url.host_str(), upstream_url.port()) {
(Some(host), Some(port)) => format!("{host}:{port}"),
(Some(host), None) => host.to_string(),
_ => return Err(anyhow!("upstream URL must include a host")),
};
let host_header =
HeaderValue::from_str(&host).context("constructing Host header from upstream URL")?;
let forward_config = Arc::new(ForwardConfig {
upstream_url,
host_header,
});
let (listener, bound_addr) = bind_listener(args.port)?;
if let Some(path) = args.server_info.as_ref() {
write_server_info(path, bound_addr.port())?;
@@ -75,13 +99,14 @@ pub fn run_main(args: Args) -> Result<()> {
let http_shutdown = args.http_shutdown;
for request in server.incoming_requests() {
let client = client.clone();
let forward_config = forward_config.clone();
std::thread::spawn(move || {
if http_shutdown && request.method() == &Method::Get && request.url() == "/shutdown" {
let _ = request.respond(Response::new_empty(StatusCode(200)));
std::process::exit(0);
}
if let Err(e) = forward_request(&client, auth_header, request) {
if let Err(e) = forward_request(&client, auth_header, &forward_config, request) {
eprintln!("forwarding error: {e}");
}
});
@@ -115,7 +140,12 @@ fn write_server_info(path: &Path, port: u16) -> Result<()> {
Ok(())
}
fn forward_request(client: &Client, auth_header: &'static str, mut req: Request) -> Result<()> {
fn forward_request(
client: &Client,
auth_header: &'static str,
config: &ForwardConfig,
mut req: Request,
) -> Result<()> {
// Only allow POST /v1/responses exactly, no query string.
let method = req.method().clone();
let url_path = req.url().to_string();
@@ -157,11 +187,10 @@ fn forward_request(client: &Client, auth_header: &'static str, mut req: Request)
auth_header_value.set_sensitive(true);
headers.insert(AUTHORIZATION, auth_header_value);
headers.insert(HOST, HeaderValue::from_static("api.openai.com"));
headers.insert(HOST, config.host_header.clone());
let upstream = "https://api.openai.com/v1/responses";
let upstream_resp = client
.post(upstream)
.post(config.upstream_url.clone())
.headers(headers)
.body(body)
.send()

View File

@@ -121,7 +121,7 @@ where
if total_read == capacity && !saw_newline && !saw_eof {
buf.zeroize();
return Err(anyhow!(
"OPENAI_API_KEY is too large to fit in the 512-byte buffer"
"API key is too large to fit in the {BUFFER_SIZE}-byte buffer"
));
}
@@ -133,7 +133,7 @@ where
if total == AUTH_HEADER_PREFIX.len() {
buf.zeroize();
return Err(anyhow!(
"OPENAI_API_KEY must be provided via stdin (e.g. printenv OPENAI_API_KEY | codex responses-api-proxy)"
"API key must be provided via stdin (e.g. printenv OPENAI_API_KEY | codex responses-api-proxy)"
));
}
@@ -214,7 +214,7 @@ fn validate_auth_header_bytes(key_bytes: &[u8]) -> Result<()> {
}
Err(anyhow!(
"OPENAI_API_KEY may only contain ASCII letters, numbers, '-' or '_'"
"API key may only contain ASCII letters, numbers, '-' or '_'"
))
}
@@ -290,7 +290,9 @@ mod tests {
})
.unwrap_err();
let message = format!("{err:#}");
assert!(message.contains("OPENAI_API_KEY is too large to fit in the 512-byte buffer"));
let expected_error =
format!("API key is too large to fit in the {BUFFER_SIZE}-byte buffer");
assert!(message.contains(&expected_error));
}
#[test]
@@ -317,9 +319,7 @@ mod tests {
.unwrap_err();
let message = format!("{err:#}");
assert!(
message.contains("OPENAI_API_KEY may only contain ASCII letters, numbers, '-' or '_'")
);
assert!(message.contains("API key may only contain ASCII letters, numbers, '-' or '_'"));
}
#[test]
@@ -337,8 +337,6 @@ mod tests {
.unwrap_err();
let message = format!("{err:#}");
assert!(
message.contains("OPENAI_API_KEY may only contain ASCII letters, numbers, '-' or '_'")
);
assert!(message.contains("API key may only contain ASCII letters, numbers, '-' or '_'"));
}
}

View File

@@ -10,9 +10,11 @@ use axum::extract::State;
use axum::http::Request;
use axum::http::StatusCode;
use axum::http::header::AUTHORIZATION;
use axum::http::header::CONTENT_TYPE;
use axum::middleware;
use axum::middleware::Next;
use axum::response::Response;
use axum::routing::get;
use rmcp::ErrorData as McpError;
use rmcp::handler::server::ServerHandler;
use rmcp::model::CallToolRequestParam;
@@ -256,14 +258,35 @@ async fn main() -> Result<(), Box<dyn std::error::Error>> {
};
eprintln!("starting rmcp streamable http test server on http://{bind_addr}/mcp");
let router = Router::new().nest_service(
"/mcp",
StreamableHttpService::new(
|| Ok(TestToolServer::new()),
Arc::new(LocalSessionManager::default()),
StreamableHttpServerConfig::default(),
),
);
let router = Router::new()
.route(
"/.well-known/oauth-authorization-server/mcp",
get({
move || async move {
let metadata_base = format!("http://{bind_addr}");
#[expect(clippy::expect_used)]
Response::builder()
.status(StatusCode::OK)
.header(CONTENT_TYPE, "application/json")
.body(Body::from(
serde_json::to_vec(&json!({
"authorization_endpoint": format!("{metadata_base}/oauth/authorize"),
"token_endpoint": format!("{metadata_base}/oauth/token"),
"scopes_supported": [""],
})).expect("failed to serialize metadata"),
))
.expect("valid metadata response")
}
}),
)
.nest_service(
"/mcp",
StreamableHttpService::new(
|| Ok(TestToolServer::new()),
Arc::new(LocalSessionManager::default()),
StreamableHttpServerConfig::default(),
),
);
let router = if let Ok(token) = std::env::var("MCP_EXPECT_BEARER") {
let expected = Arc::new(format!("Bearer {token}"));
@@ -282,6 +305,9 @@ async fn require_bearer(
request: Request<Body>,
next: Next,
) -> Result<Response, StatusCode> {
if request.uri().path().contains("/.well-known/") {
return Ok(next.run(request).await);
}
if request
.headers()
.get(AUTHORIZATION)

View File

@@ -41,7 +41,10 @@ codex-protocol = { workspace = true }
codex-app-server-protocol = { workspace = true }
codex-feedback = { workspace = true }
color-eyre = { workspace = true }
crossterm = { workspace = true, features = ["bracketed-paste", "event-stream"] }
crossterm = { workspace = true, features = [
"bracketed-paste",
"event-stream",
] }
diffy = { workspace = true }
dirs = { workspace = true }
dunce = { workspace = true }
@@ -103,3 +106,4 @@ insta = { workspace = true }
pretty_assertions = { workspace = true }
rand = { workspace = true }
vt100 = { workspace = true }
serial_test = { workspace = true }

View File

@@ -9,6 +9,7 @@ use crate::file_search::FileSearchManager;
use crate::history_cell::HistoryCell;
use crate::pager_overlay::Overlay;
use crate::render::highlight::highlight_bash_to_lines;
use crate::render::renderable::Renderable;
use crate::resume_picker::ResumeSelection;
use crate::tui;
use crate::tui::TuiEvent;
@@ -233,7 +234,7 @@ impl App {
tui.draw(
self.chat_widget.desired_height(tui.terminal.size()?.width),
|frame| {
frame.render_widget_ref(&self.chat_widget, frame.area());
self.chat_widget.render(frame.area(), frame.buffer);
if let Some((x, y)) = self.chat_widget.cursor_pos(frame.area()) {
frame.set_cursor_position((x, y));
}

View File

@@ -20,7 +20,6 @@ use codex_core::protocol::FileChange;
use codex_core::protocol::Op;
use codex_core::protocol::ReviewDecision;
use codex_core::protocol::SandboxCommandAssessment;
use codex_core::protocol::SandboxRiskCategory;
use codex_core::protocol::SandboxRiskLevel;
use crossterm::event::KeyCode;
use crossterm::event::KeyEvent;
@@ -261,10 +260,6 @@ impl BottomPaneView for ApprovalOverlay {
self.enqueue_request(request);
None
}
fn cursor_pos(&self, area: Rect) -> Option<(u16, u16)> {
self.list.cursor_pos(area)
}
}
impl Renderable for ApprovalOverlay {
@@ -275,6 +270,10 @@ impl Renderable for ApprovalOverlay {
fn render(&self, area: Rect, buf: &mut Buffer) {
self.list.render(area, buf);
}
fn cursor_pos(&self, area: Rect) -> Option<(u16, u16)> {
self.list.cursor_pos(area)
}
}
struct ApprovalRequestState {
@@ -356,35 +355,11 @@ fn render_risk_lines(risk: &SandboxCommandAssessment) -> Vec<Line<'static>> {
]));
}
let mut spans: Vec<Span<'static>> = vec!["Risk: ".into(), level_span];
if !risk.risk_categories.is_empty() {
spans.push(" (".into());
for (idx, category) in risk.risk_categories.iter().enumerate() {
if idx > 0 {
spans.push(", ".into());
}
spans.push(risk_category_label(*category).into());
}
spans.push(")".into());
}
lines.push(Line::from(spans));
lines.push(vec!["Risk: ".into(), level_span].into());
lines.push(Line::from(""));
lines
}
fn risk_category_label(category: SandboxRiskCategory) -> &'static str {
match category {
SandboxRiskCategory::DataDeletion => "data deletion",
SandboxRiskCategory::DataExfiltration => "data exfiltration",
SandboxRiskCategory::PrivilegeEscalation => "privilege escalation",
SandboxRiskCategory::SystemModification => "system modification",
SandboxRiskCategory::NetworkAccess => "network access",
SandboxRiskCategory::ResourceExhaustion => "resource exhaustion",
SandboxRiskCategory::Compliance => "compliance",
}
}
#[derive(Clone)]
enum ApprovalVariant {
Exec { id: String, command: Vec<String> },

View File

@@ -1,7 +1,6 @@
use crate::bottom_pane::ApprovalRequest;
use crate::render::renderable::Renderable;
use crossterm::event::KeyEvent;
use ratatui::layout::Rect;
use super::CancellationEvent;
@@ -27,11 +26,6 @@ pub(crate) trait BottomPaneView: Renderable {
false
}
/// Cursor position when this view is active.
fn cursor_pos(&self, _area: Rect) -> Option<(u16, u16)> {
None
}
/// Try to handle approval request; return the original value if not
/// consumed.
fn try_consume_approval_request(

View File

@@ -35,6 +35,9 @@ use crate::bottom_pane::prompt_args::parse_slash_name;
use crate::bottom_pane::prompt_args::prompt_argument_names;
use crate::bottom_pane::prompt_args::prompt_command_with_arg_placeholders;
use crate::bottom_pane::prompt_args::prompt_has_numeric_placeholders;
use crate::render::Insets;
use crate::render::RectExt;
use crate::render::renderable::Renderable;
use crate::slash_command::SlashCommand;
use crate::slash_command::built_in_slash_commands;
use crate::style::user_message_style;
@@ -158,24 +161,6 @@ impl ChatComposer {
this
}
pub fn desired_height(&self, width: u16) -> u16 {
let footer_props = self.footer_props();
let footer_hint_height = self
.custom_footer_height()
.unwrap_or_else(|| footer_height(footer_props));
let footer_spacing = Self::footer_spacing(footer_hint_height);
let footer_total_height = footer_hint_height + footer_spacing;
const COLS_WITH_MARGIN: u16 = LIVE_PREFIX_COLS + 1;
self.textarea
.desired_height(width.saturating_sub(COLS_WITH_MARGIN))
+ 2
+ match &self.active_popup {
ActivePopup::None => footer_total_height,
ActivePopup::Command(c) => c.calculate_required_height(width),
ActivePopup::File(c) => c.calculate_required_height(),
}
}
fn layout_areas(&self, area: Rect) -> [Rect; 3] {
let footer_props = self.footer_props();
let footer_hint_height = self
@@ -190,18 +175,9 @@ impl ChatComposer {
ActivePopup::File(popup) => Constraint::Max(popup.calculate_required_height()),
ActivePopup::None => Constraint::Max(footer_total_height),
};
let mut area = area;
if area.height > 1 {
area.height -= 1;
area.y += 1;
}
let [composer_rect, popup_rect] =
Layout::vertical([Constraint::Min(1), popup_constraint]).areas(area);
let mut textarea_rect = composer_rect;
textarea_rect.width = textarea_rect.width.saturating_sub(
LIVE_PREFIX_COLS + 1, /* keep a one-column right margin for wrapping */
);
textarea_rect.x = textarea_rect.x.saturating_add(LIVE_PREFIX_COLS);
Layout::vertical([Constraint::Min(3), popup_constraint]).areas(area);
let textarea_rect = composer_rect.inset(Insets::tlbr(1, LIVE_PREFIX_COLS, 1, 1));
[composer_rect, textarea_rect, popup_rect]
}
@@ -213,12 +189,6 @@ impl ChatComposer {
}
}
pub fn cursor_pos(&self, area: Rect) -> Option<(u16, u16)> {
let [_, textarea_rect, _] = self.layout_areas(area);
let state = *self.textarea_state.borrow();
self.textarea.cursor_pos_with_state(textarea_rect, state)
}
/// Returns true if the composer currently contains no user input.
pub(crate) fn is_empty(&self) -> bool {
self.textarea.is_empty()
@@ -1541,8 +1511,32 @@ impl ChatComposer {
}
}
impl WidgetRef for ChatComposer {
fn render_ref(&self, area: Rect, buf: &mut Buffer) {
impl Renderable for ChatComposer {
fn cursor_pos(&self, area: Rect) -> Option<(u16, u16)> {
let [_, textarea_rect, _] = self.layout_areas(area);
let state = *self.textarea_state.borrow();
self.textarea.cursor_pos_with_state(textarea_rect, state)
}
fn desired_height(&self, width: u16) -> u16 {
let footer_props = self.footer_props();
let footer_hint_height = self
.custom_footer_height()
.unwrap_or_else(|| footer_height(footer_props));
let footer_spacing = Self::footer_spacing(footer_hint_height);
let footer_total_height = footer_hint_height + footer_spacing;
const COLS_WITH_MARGIN: u16 = LIVE_PREFIX_COLS + 1;
self.textarea
.desired_height(width.saturating_sub(COLS_WITH_MARGIN))
+ 2
+ match &self.active_popup {
ActivePopup::None => footer_total_height,
ActivePopup::Command(c) => c.calculate_required_height(width),
ActivePopup::File(c) => c.calculate_required_height(),
}
}
fn render(&self, area: Rect, buf: &mut Buffer) {
let [composer_rect, textarea_rect, popup_rect] = self.layout_areas(area);
match &self.active_popup {
ActivePopup::Command(popup) => {
@@ -1591,16 +1585,15 @@ impl WidgetRef for ChatComposer {
}
}
let style = user_message_style();
let mut block_rect = composer_rect;
block_rect.y = composer_rect.y.saturating_sub(1);
block_rect.height = composer_rect.height.saturating_add(1);
Block::default().style(style).render_ref(block_rect, buf);
buf.set_span(
composer_rect.x,
composer_rect.y,
&"".bold(),
composer_rect.width,
);
Block::default().style(style).render_ref(composer_rect, buf);
if !textarea_rect.is_empty() {
buf.set_span(
textarea_rect.x - LIVE_PREFIX_COLS,
textarea_rect.y,
&"".bold(),
textarea_rect.width,
);
}
let mut state = self.textarea_state.borrow_mut();
StatefulWidgetRef::render_ref(&(&self.textarea), textarea_rect, buf, &mut state);
@@ -1692,7 +1685,7 @@ mod tests {
let area = Rect::new(0, 0, 40, 6);
let mut buf = Buffer::empty(area);
composer.render_ref(area, &mut buf);
composer.render(area, &mut buf);
let row_to_string = |y: u16| {
let mut row = String::new();
@@ -1756,7 +1749,7 @@ mod tests {
let height = footer_lines + footer_spacing + 8;
let mut terminal = Terminal::new(TestBackend::new(width, height)).unwrap();
terminal
.draw(|f| f.render_widget_ref(composer, f.area()))
.draw(|f| composer.render(f.area(), f.buffer_mut()))
.unwrap();
insta::assert_snapshot!(name, terminal.backend());
}
@@ -2276,7 +2269,7 @@ mod tests {
}
terminal
.draw(|f| f.render_widget_ref(composer, f.area()))
.draw(|f| composer.render(f.area(), f.buffer_mut()))
.unwrap_or_else(|e| panic!("Failed to draw {name} composer: {e}"));
insta::assert_snapshot!(name, terminal.backend());
@@ -2302,12 +2295,12 @@ mod tests {
// Type "/mo" humanlike so paste-burst doesn't interfere.
type_chars_humanlike(&mut composer, &['/', 'm', 'o']);
let mut terminal = match Terminal::new(TestBackend::new(60, 4)) {
let mut terminal = match Terminal::new(TestBackend::new(60, 5)) {
Ok(t) => t,
Err(e) => panic!("Failed to create terminal: {e}"),
};
terminal
.draw(|f| f.render_widget_ref(composer, f.area()))
.draw(|f| composer.render(f.area(), f.buffer_mut()))
.unwrap_or_else(|e| panic!("Failed to draw composer: {e}"));
// Visual snapshot should show the slash popup with /model as the first entry.

View File

@@ -103,26 +103,6 @@ impl BottomPaneView for CustomPromptView {
self.textarea.insert_str(&pasted);
true
}
fn cursor_pos(&self, area: Rect) -> Option<(u16, u16)> {
if area.height < 2 || area.width <= 2 {
return None;
}
let text_area_height = self.input_height(area.width).saturating_sub(1);
if text_area_height == 0 {
return None;
}
let extra_offset: u16 = if self.context_label.is_some() { 1 } else { 0 };
let top_line_count = 1u16 + extra_offset;
let textarea_rect = Rect {
x: area.x.saturating_add(2),
y: area.y.saturating_add(top_line_count).saturating_add(1),
width: area.width.saturating_sub(2),
height: text_area_height,
};
let state = *self.textarea_state.borrow();
self.textarea.cursor_pos_with_state(textarea_rect, state)
}
}
impl Renderable for CustomPromptView {
@@ -232,6 +212,26 @@ impl Renderable for CustomPromptView {
);
}
}
fn cursor_pos(&self, area: Rect) -> Option<(u16, u16)> {
if area.height < 2 || area.width <= 2 {
return None;
}
let text_area_height = self.input_height(area.width).saturating_sub(1);
if text_area_height == 0 {
return None;
}
let extra_offset: u16 = if self.context_label.is_some() { 1 } else { 0 };
let top_line_count = 1u16 + extra_offset;
let textarea_rect = Rect {
x: area.x.saturating_add(2),
y: area.y.saturating_add(top_line_count).saturating_add(1),
width: area.width.saturating_sub(2),
height: text_area_height,
};
let state = *self.textarea_state.borrow();
self.textarea.cursor_pos_with_state(textarea_rect, state)
}
}
impl CustomPromptView {

View File

@@ -163,6 +163,12 @@ impl BottomPaneView for FeedbackNoteView {
self.textarea.insert_str(&pasted);
true
}
}
impl Renderable for FeedbackNoteView {
fn desired_height(&self, width: u16) -> u16 {
1u16 + self.input_height(width) + 3u16
}
fn cursor_pos(&self, area: Rect) -> Option<(u16, u16)> {
if area.height < 2 || area.width <= 2 {
@@ -182,12 +188,6 @@ impl BottomPaneView for FeedbackNoteView {
let state = *self.textarea_state.borrow();
self.textarea.cursor_pos_with_state(textarea_rect, state)
}
}
impl Renderable for FeedbackNoteView {
fn desired_height(&self, width: u16) -> u16 {
1u16 + self.input_height(width) + 3u16
}
fn render(&self, area: Rect, buf: &mut Buffer) {
if area.height == 0 || area.width == 0 {

View File

@@ -3,19 +3,16 @@ use std::path::PathBuf;
use crate::app_event_sender::AppEventSender;
use crate::bottom_pane::queued_user_messages::QueuedUserMessages;
use crate::render::Insets;
use crate::render::RectExt;
use crate::render::renderable::Renderable as _;
use crate::render::renderable::FlexRenderable;
use crate::render::renderable::Renderable;
use crate::render::renderable::RenderableItem;
use crate::tui::FrameRequester;
use bottom_pane_view::BottomPaneView;
use codex_file_search::FileMatch;
use crossterm::event::KeyCode;
use crossterm::event::KeyEvent;
use ratatui::buffer::Buffer;
use ratatui::layout::Constraint;
use ratatui::layout::Layout;
use ratatui::layout::Rect;
use ratatui::widgets::WidgetRef;
use std::time::Duration;
mod approval_overlay;
@@ -126,77 +123,6 @@ impl BottomPane {
self.request_redraw();
}
pub fn desired_height(&self, width: u16) -> u16 {
let top_margin = 1;
// Base height depends on whether a modal/overlay is active.
let base = match self.active_view().as_ref() {
Some(view) => view.desired_height(width),
None => {
let status_height = self
.status
.as_ref()
.map_or(0, |status| status.desired_height(width));
let queue_height = self.queued_user_messages.desired_height(width);
let spacing_height = if status_height == 0 && queue_height == 0 {
0
} else {
1
};
self.composer
.desired_height(width)
.saturating_add(spacing_height)
.saturating_add(status_height)
.saturating_add(queue_height)
}
};
// Account for bottom padding rows. Top spacing is handled in layout().
base.saturating_add(top_margin)
}
fn layout(&self, area: Rect) -> [Rect; 2] {
// At small heights, bottom pane takes the entire height.
let top_margin = if area.height <= 1 { 0 } else { 1 };
let area = area.inset(Insets::tlbr(top_margin, 0, 0, 0));
if self.active_view().is_some() {
return [Rect::ZERO, area];
}
let has_queue = !self.queued_user_messages.messages.is_empty();
let mut status_height = self
.status
.as_ref()
.map_or(0, |status| status.desired_height(area.width))
.min(area.height.saturating_sub(1));
if has_queue && status_height > 1 {
status_height = status_height.saturating_sub(1);
}
let combined_height = status_height
.saturating_add(self.queued_user_messages.desired_height(area.width))
.min(area.height.saturating_sub(1));
let [status_area, _, content_area] = Layout::vertical([
Constraint::Length(combined_height),
Constraint::Length(if combined_height == 0 { 0 } else { 1 }),
Constraint::Min(1),
])
.areas(area);
[status_area, content_area]
}
pub fn cursor_pos(&self, area: Rect) -> Option<(u16, u16)> {
// Hide the cursor whenever an overlay view is active (e.g. the
// status indicator shown while a task is running, or approval modal).
// In these states the textarea is not interactable, so we should not
// show its caret.
let [_, content] = self.layout(area);
if let Some(view) = self.active_view() {
view.cursor_pos(content)
} else {
self.composer.cursor_pos(content)
}
}
/// Forward a key event to the active view or the composer.
pub fn handle_key_event(&mut self, key_event: KeyEvent) -> InputResult {
// If a modal/view is active, handle it here; otherwise forward to composer.
@@ -540,39 +466,36 @@ impl BottomPane {
pub(crate) fn take_recent_submission_images(&mut self) -> Vec<PathBuf> {
self.composer.take_recent_submission_images()
}
fn as_renderable(&'_ self) -> RenderableItem<'_> {
if let Some(view) = self.active_view() {
RenderableItem::Borrowed(view)
} else {
let mut flex = FlexRenderable::new();
if let Some(status) = &self.status {
flex.push(0, RenderableItem::Borrowed(status));
}
flex.push(1, RenderableItem::Borrowed(&self.queued_user_messages));
if self.status.is_some() || !self.queued_user_messages.messages.is_empty() {
flex.push(0, RenderableItem::Owned("".into()));
}
let mut flex2 = FlexRenderable::new();
flex2.push(1, RenderableItem::Owned(flex.into()));
flex2.push(0, RenderableItem::Borrowed(&self.composer));
RenderableItem::Owned(Box::new(flex2))
}
}
}
impl WidgetRef for &BottomPane {
fn render_ref(&self, area: Rect, buf: &mut Buffer) {
let [top_area, content_area] = self.layout(area);
// When a modal view is active, it owns the whole content area.
if let Some(view) = self.active_view() {
view.render(content_area, buf);
} else {
let status_height = self
.status
.as_ref()
.map(|status| status.desired_height(top_area.width).min(top_area.height))
.unwrap_or(0);
if let Some(status) = &self.status
&& status_height > 0
{
status.render_ref(top_area, buf);
}
let queue_area = Rect {
x: top_area.x,
y: top_area.y.saturating_add(status_height),
width: top_area.width,
height: top_area.height.saturating_sub(status_height),
};
if queue_area.height > 0 {
self.queued_user_messages.render(queue_area, buf);
}
self.composer.render_ref(content_area, buf);
}
impl Renderable for BottomPane {
fn render(&self, area: Rect, buf: &mut Buffer) {
self.as_renderable().render(area, buf);
}
fn desired_height(&self, width: u16) -> u16 {
self.as_renderable().desired_height(width)
}
fn cursor_pos(&self, area: Rect) -> Option<(u16, u16)> {
self.as_renderable().cursor_pos(area)
}
}
@@ -599,7 +522,7 @@ mod tests {
fn render_snapshot(pane: &BottomPane, area: Rect) -> String {
let mut buf = Buffer::empty(area);
(&pane).render_ref(area, &mut buf);
pane.render(area, &mut buf);
snapshot_buffer(&buf)
}
@@ -651,7 +574,7 @@ mod tests {
// Render and verify the top row does not include an overlay.
let area = Rect::new(0, 0, 60, 6);
let mut buf = Buffer::empty(area);
(&pane).render_ref(area, &mut buf);
pane.render(area, &mut buf);
let mut r0 = String::new();
for x in 0..area.width {
@@ -665,7 +588,7 @@ mod tests {
#[test]
fn composer_shown_after_denied_while_task_running() {
let (tx_raw, rx) = unbounded_channel::<AppEvent>();
let (tx_raw, _rx) = unbounded_channel::<AppEvent>();
let tx = AppEventSender::new(tx_raw);
let mut pane = BottomPane::new(BottomPaneParams {
app_event_tx: tx,
@@ -700,14 +623,14 @@ mod tests {
std::thread::sleep(Duration::from_millis(120));
let area = Rect::new(0, 0, 40, 6);
let mut buf = Buffer::empty(area);
(&pane).render_ref(area, &mut buf);
let mut row1 = String::new();
pane.render(area, &mut buf);
let mut row0 = String::new();
for x in 0..area.width {
row1.push(buf[(x, 1)].symbol().chars().next().unwrap_or(' '));
row0.push(buf[(x, 0)].symbol().chars().next().unwrap_or(' '));
}
assert!(
row1.contains("Working"),
"expected Working header after denial on row 1: {row1:?}"
row0.contains("Working"),
"expected Working header after denial on row 0: {row0:?}"
);
// Composer placeholder should be visible somewhere below.
@@ -726,9 +649,6 @@ mod tests {
found_composer,
"expected composer visible under status line"
);
// Drain the channel to avoid unused warnings.
drop(rx);
}
#[test]
@@ -750,16 +670,10 @@ mod tests {
// Use a height that allows the status line to be visible above the composer.
let area = Rect::new(0, 0, 40, 6);
let mut buf = Buffer::empty(area);
(&pane).render_ref(area, &mut buf);
pane.render(area, &mut buf);
let mut row0 = String::new();
for x in 0..area.width {
row0.push(buf[(x, 1)].symbol().chars().next().unwrap_or(' '));
}
assert!(
row0.contains("Working"),
"expected Working header: {row0:?}"
);
let bufs = snapshot_buffer(&buf);
assert!(bufs.contains("• Working"), "expected Working header");
}
#[test]
@@ -791,36 +705,6 @@ mod tests {
);
}
#[test]
fn status_hidden_when_height_too_small() {
let (tx_raw, _rx) = unbounded_channel::<AppEvent>();
let tx = AppEventSender::new(tx_raw);
let mut pane = BottomPane::new(BottomPaneParams {
app_event_tx: tx,
frame_requester: FrameRequester::test_dummy(),
has_input_focus: true,
enhanced_keys_supported: false,
placeholder_text: "Ask Codex to do anything".to_string(),
disable_paste_burst: false,
});
pane.set_task_running(true);
// Height=2 → composer takes the full space; status collapses when there is no room.
let area2 = Rect::new(0, 0, 20, 2);
assert_snapshot!(
"status_hidden_when_height_too_small_height_2",
render_snapshot(&pane, area2)
);
// Height=1 → no padding; single row is the composer (status hidden).
let area1 = Rect::new(0, 0, 20, 1);
assert_snapshot!(
"status_hidden_when_height_too_small_height_1",
render_snapshot(&pane, area1)
);
}
#[test]
fn queued_messages_visible_when_status_hidden_snapshot() {
let (tx_raw, _rx) = unbounded_channel::<AppEvent>();

View File

@@ -4,5 +4,6 @@ expression: terminal.backend()
---
" "
" /mo "
" "
" /model choose what model and reasoning effort to use "
" /mention mention a file "

View File

@@ -2,7 +2,6 @@
source: tui/src/bottom_pane/mod.rs
expression: "render_snapshot(&pane, area)"
---
↳ Queued follow-up question
⌥ + ↑ edit

Some files were not shown because too many files have changed in this diff.