Compare commits

...

31 Commits

Author SHA1 Message Date
Ahmed Ibrahim
9b7329699a Merge branch 'fix-timeout' of https://github.com/openai/codex into fix-timeout 2025-11-06 10:34:20 -08:00
Ahmed Ibrahim
8307d9bf3b fix 2025-11-06 10:34:11 -08:00
Ahmed Ibrahim
294dafcacf Merge branch 'main' into fix-timeout 2025-11-06 10:29:56 -08:00
Ahmed Ibrahim
0b26c76047 fix 2025-11-06 10:27:24 -08:00
Ahmed Ibrahim
dbad5eeec6 chore: fix grammar mistakes (#6326) 2025-11-06 09:48:59 -08:00
Ahmed Ibrahim
ca9f9c6f5d usage 2025-11-06 09:25:43 -08:00
vladislav doster
4b4252210b docs: Fix code fence and typo in advanced guide (#6295)
- add `bash` to code fence
- fix spelling of `JavaScript`
2025-11-06 09:00:28 -08:00
Owen Lin
6582554926 [app-server] feat: v2 Turn APIs (#6216)
Implements:
```
turn/start
turn/interrupt
```

along with their integration tests. These are relatively light wrappers
around the existing core logic, and changes to core logic are minimal.

However, one improvement was made for developer ergonomics:
- `turn/start` replaces both `SendUserMessage` (no turn overrides) and
`SendUserTurn` (can override model, approval policy, etc.); a sketch
follows below.
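
For illustration, a minimal sketch of such a call, assuming the crate path
`codex_app_server_protocol::protocol::v2` (the field shapes come from the
`TurnStartParams` struct later in this compare; the ID and model strings are
placeholders):

```rust
use codex_app_server_protocol::protocol::v2::TurnStartParams;

// TurnStartParams derives Default, so unset overrides (cwd, sandbox_policy,
// effort, summary, ...) simply fall back to the thread's current settings.
let params = TurnStartParams {
    thread_id: "thread-id-placeholder".to_string(),
    model: Some("model-name-placeholder".to_string()), // per-turn override
    ..Default::default()
};
```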
2025-11-06 16:36:36 +00:00
Thibault Sottiaux
649ce520c4 chore: rename for clarity (#6319)
Co-authored-by: Ahmed Ibrahim <aibrahim@openai.com>
2025-11-06 08:32:57 -08:00
Thibault Sottiaux
667e841d3e feat: support models with single reasoning effort (#6300) 2025-11-05 23:06:45 -08:00
Ahmed Ibrahim
63e1ef25af feat: add model nudge for queries (#6286) 2025-11-06 03:42:59 +00:00
Celia Chen
229d18f4d2 [App-server] Add account/login/cancel v2 endpoint (#6288)
Add `account/login/cancel` v2 endpoint for auth. This is a similar
implementation to the `cancelLoginChatgpt` v1 endpoint.
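
For illustration, the wire shape would be along these lines (values are
placeholders; the method name and camelCase params follow this compare's
renames):

```rust
// A minimal sketch of an account/login/cancel request.
let request = serde_json::json!({
    "method": "account/login/cancel",
    "id": 1,
    "params": { "loginId": "login-id-placeholder" }
});
```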
2025-11-06 01:13:55 +00:00
wizard
4a1a7f9685 fix: ToC so it doesn’t include itself or duplicate the end marker (#4388)
turns out the ToC was including itself when generating, which messed up
comparisons and sometimes made the file rewrite endlessly.

also fixed the slice so `<!-- End ToC -->` doesn’t get duplicated when
we insert the new ToC.

should behave nicely now - no extra rewrites, no doubled markers.
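
In spirit (a sketch, not the repo's actual script; the begin marker name is
an assumption), the fixed slice looks like:

```rust
const START: &str = "<!-- Begin ToC -->"; // assumed marker name
const END: &str = "<!-- End ToC -->";

/// Replace the region between the markers, keeping exactly one end marker
/// and excluding the old ToC from the text being regenerated.
fn replace_toc(doc: &str, new_toc: &str) -> String {
    match (doc.find(START), doc.find(END)) {
        (Some(s), Some(e)) if e > s => {
            // Slice stops at the end marker, so it never gets duplicated.
            let after = &doc[e + END.len()..];
            format!("{}{START}\n{new_toc}\n{END}{after}", &doc[..s])
        }
        _ => doc.to_string(),
    }
}
```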

Co-authored-by: Eric Traut <etraut@openai.com>
2025-11-05 14:52:51 -08:00
Eric Traut
86c149ae8e Prevent dismissal of login menu in TUI (#6285)
We currently allow the user to dismiss the login menu via Ctrl+C. This
leaves them in a bad state where they're not auth'ed but have an input
prompt. In the extension, this isn't a problem because we don't allow
the user to dismiss the login screen.

Testing: I confirmed that Ctrl+C no longer dismisses the login menu.

This is an alternative (simpler) fix for a [community
PR](https://github.com/openai/codex/pull/3234).
2025-11-05 14:25:58 -08:00
Celia Chen
05f0b4f590 [App-server] Implement v2 for account/login/start and account/login/completed (#6183)
This PR implements `account/login/start` and `account/login/completed`.
Instead of having separate endpoints for login with ChatGPT and with an
API key, we have a single enum handling the different login methods. For
sync auth methods like signing in with an API key, we still send a
`completed` notification back to stay compatible with the async login flow.
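
Sketched from the enum shape in this compare (crate path assumed; the key is
a placeholder):

```rust
use codex_app_server_protocol::protocol::v2::LoginAccountParams;

// Serializes as {"type":"apiKey","apiKey":"..."}.
let api_key_login = LoginAccountParams::ApiKey {
    api_key: "api-key-placeholder".to_string(),
};
// Serializes as {"type":"chatgpt"}; completion arrives asynchronously via
// the account/login/completed notification.
let chatgpt_login = LoginAccountParams::Chatgpt;
```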
2025-11-05 13:52:50 -08:00
easong-openai
d4eda9d10b stop capturing r when environment selection modal is open (#6249)
This fixes an issue where you can't select environments with an "r" in their name while the selection modal is open.
2025-11-05 13:23:46 -08:00
Eric Traut
d7953aed74 Fixes intermittent test failures in CI (#6282)
I'm seeing two tests fail intermittently in CI. This PR attempts to
address (or at least mitigate) the flakiness.

* summarize_context_three_requests_and_instructions - The test snapshots
server.received_requests() immediately after observing TaskComplete.
Because the OpenAI /v1/responses call is streamed, the HTTP request can
still be draining when that event fires, so wiremock occasionally
reports only two captured requests. The fix is to wait for the async
activity to complete (sketched below).
* archive_conversation_moves_rollout_into_archived_directory - times out
on a slow CI run. The mitigation is to increase the timeout value from
10s to 20s.
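
One way to express that wait, as a sketch (not necessarily the exact fix in
this PR): poll wiremock until the expected request count lands or a deadline
passes.

```rust
use std::time::{Duration, Instant};

// Wait until the mock server has captured `expected` requests, so a streamed
// request that is still draining doesn't get missed by the snapshot.
async fn wait_for_requests(server: &wiremock::MockServer, expected: usize) {
    let deadline = Instant::now() + Duration::from_secs(10);
    loop {
        let captured = server.received_requests().await.unwrap_or_default().len();
        if captured >= expected || Instant::now() >= deadline {
            return;
        }
        tokio::time::sleep(Duration::from_millis(25)).await;
    }
}
```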
2025-11-05 13:12:25 -08:00
Owen Lin
2ab1650d4d [app-server] feat: v2 Thread APIs (#6214)
Implements:
```
thread/list
thread/start
thread/resume
thread/archive
```

along with their integration tests. These are relatively light wrappers
around the existing core logic, and changes to core logic are minimal.

However, one improvement was made for developer ergonomics:
- `thread/start` and `thread/resume` automatically attach a
conversation listener internally, so clients don't have to make a
separate `AddConversationListener` call like they do today (a sketch
follows below).

For consistency, also updated `model/list` and `feedback/upload` (naming
conventions, list API params).
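
As a sketch (crate path assumed; the path value is a placeholder), the
minimal call needs nothing beyond defaults, and the listener comes attached:

```rust
use codex_app_server_protocol::protocol::v2::ThreadStartParams;

// Every field is an optional override; Default gives a plain new thread.
// Per this PR there is no separate AddConversationListener round trip.
let params = ThreadStartParams {
    cwd: Some("/path/to/project".to_string()),
    ..Default::default()
};
```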
2025-11-05 20:28:43 +00:00
Gabriel Peal
79aa83ee39 Update rmcp to 0.8.5 (#6261)
Picks up https://github.com/modelcontextprotocol/rust-sdk/pull/511 which
should fix OAuth for Todoist and some other MCP servers, and may further
resolve issues in https://github.com/openai/codex/issues/5045
2025-11-05 14:20:30 -05:00
Eric Traut
c4ebe4b078 Improved token refresh handling to address "Re-connecting" behavior (#6231)
Currently, when the access token expires, we attempt to use the refresh
token to acquire a new access token. This works most of the time.
However, there are situations where the refresh token is expired,
exhausted (already used to perform a refresh), or revoked. In those
cases, the current logic treats the error as transient and attempts to
retry it repeatedly.

This PR changes the token refresh logic to differentiate between
permanent and transient errors. It also changes callers to treat the
permanent errors as fatal rather than retrying them. And it provides
better error messages to users so they understand how to address the
problem. These error messages should also help us further understand why
we're seeing examples of refresh token exhaustion.
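
In spirit (these are not the PR's actual types), the classification looks
like:

```rust
// Sketch of splitting refresh failures into retryable and fatal classes.
enum RefreshError {
    /// Expired, exhausted, or revoked refresh token: surface to the user,
    /// don't retry.
    Permanent(String),
    /// Network hiccups and 5xx responses: safe to retry with backoff.
    Transient(String),
}

fn should_retry(err: &RefreshError) -> bool {
    matches!(err, RefreshError::Transient(_))
}
```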

Here is the error message in the CLI. The same text appears within the
extension.

[screenshot: CLI error message]

I also corrected the spelling of "Re-connecting", which shouldn't have a
hyphen in it.

Testing: I manually tested these code paths by adding temporary code to
programmatically cause my refresh token to be exhausted (by calling the
token refresh endpoint in a tight loop more than 50 times). I then
simulated an access token expiration, which caused the token refresh
logic to be invoked. I confirmed that the updated logic properly handled
the error condition.

Note: We earlier discussed the idea of forcefully logging out the user
at the point where token refresh failed. I made several attempts to do
this, and all of them resulted in a bad UX. It's important to surface
this error to users in a way that explains the problem and tells them
that they need to log in again. We also previously discussed deleting
the auth.json file when this condition is detected. That also creates
problems because it effectively changes the auth status from logged in
to logged out, and this causes odd failures and inconsistent UX. I think
it's therefore better not to delete auth.json in this case. If the user
closes the CLI or VSCE and starts it again, we properly detect that the
access token is expired and the refresh token is "dead", and we force
the user to go through the login flow at that time.

This should address aspects of #6191, #5679, and #5505
2025-11-05 10:51:57 -08:00
Ahmed Ibrahim
1a89f70015 refactor Conversation history file into its own directory (#6229)
This is just a refactor of the `conversation_history` file, breaking it up
into multiple smaller files with helpers. This refactor will help us move
more functionality related to context management here in a clean way.
2025-11-05 10:49:35 -08:00
Jeremy Rose
62474a30e8 tui: refactor ChatWidget and BottomPane to use Renderables (#5565)
- introduce RenderableItem to support both owned and borrowed children
in composite Renderables (a rough sketch follows below)
- refactor some of our gnarlier manual layouts, BottomPane and
ChatWidget, to use ColumnRenderable
- Renderable and friends now handle cursor_pos()
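
A rough sketch of the owned/borrowed idea (trait and variant shapes assumed,
not the repo's actual definitions):

```rust
trait Renderable {
    fn render(&self);
}

// A composite can own some children and borrow others.
enum RenderableItem<'a> {
    Owned(Box<dyn Renderable + 'a>),
    Borrowed(&'a dyn Renderable),
}

impl<'a> RenderableItem<'a> {
    // Either variant yields a plain &dyn Renderable for layout/rendering.
    fn as_renderable(&self) -> &dyn Renderable {
        match self {
            RenderableItem::Owned(r) => r.as_ref(),
            RenderableItem::Borrowed(r) => *r,
        }
    }
}
```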
2025-11-05 09:50:40 -08:00
Dan Hernandez
9a10e80ab7 Add modelReasoningEffort option to TypeScript SDK (#6237)
## Summary
- Adds `ModelReasoningEffort` type to TypeScript SDK with values:
`minimal`, `low`, `medium`, `high`
- Adds `modelReasoningEffort` option to `ThreadOptions`
- Forwards the option to the codex CLI via `--config
model_reasoning_effort="<value>"`
- Includes test coverage for the new option

## Changes
- `sdk/typescript/src/threadOptions.ts`: Define `ModelReasoningEffort`
type and add to `ThreadOptions`
- `sdk/typescript/src/index.ts`: Export `ModelReasoningEffort` type
- `sdk/typescript/src/exec.ts`: Forward `modelReasoningEffort` to CLI as
config flag
- `sdk/typescript/src/thread.ts`: Pass option through to exec (+ debug
logging)
- `sdk/typescript/tests/run.test.ts`: Add test for
`modelReasoningEffort` flag forwarding

---------

Co-authored-by: Eric Traut <etraut@openai.com>
2025-11-05 08:51:03 -08:00
Gabriel Peal
9b538a8672 Upgrade rmcp to 0.8.4 (#6234)
Picks up https://github.com/modelcontextprotocol/rust-sdk/pull/509 which
fixes https://github.com/openai/codex/issues/6164
2025-11-05 00:23:24 -05:00
Andrew Dirksen
95af417923 allow codex to be run from pid 1 (#4200)
Previously it was not possible for Codex to run commands as the init
process (pid 1) on Linux. Commands run in containers tend to see their
own pid as 1. See https://github.com/openai/codex/issues/4198

This PR implements the solution mentioned in that issue.

Co-authored-by: Eric Traut <etraut@openai.com>
2025-11-04 17:54:46 -08:00
Soroush Yousefpour
fff576cf98 fix(core): load custom prompts from symlinked Markdown files (#3643)
- Discover prompts via fs::metadata to follow symlinks (see the sketch below)

- Add Unix-only symlink test in custom_prompts.rs

- Update docs/prompts.md to mention symlinks

Fixes #3637
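
The key std behavior, sketched (the helper name is hypothetical):
`fs::metadata` follows symlinks, while `fs::symlink_metadata` does not.

```rust
use std::fs;
use std::path::Path;

// A symlinked prompt passes this check because fs::metadata resolves the
// link before reporting the file type.
fn is_markdown_file(path: &Path) -> bool {
    fs::metadata(path).map(|m| m.is_file()).unwrap_or(false)
        && path.extension().is_some_and(|ext| ext == "md")
}
```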

---------

Signed-off-by: Soroush Yousefpour <h.yusefpour@gmail.com>
Co-authored-by: dedrisian-oai <dedrisian@openai.com>
Co-authored-by: Eric Traut <etraut@openai.com>
2025-11-04 17:44:02 -08:00
Lukas
1575f0504c Fix nix build (#6230)
Previously, the `nix build .#default` command failed due to a missing
output hash in `./codex-rs/default.nix` for `crossterm-0.28.1`:

```
error: No hash was found while vendoring the git dependency crossterm-0.28.1. You can add
a hash through the `outputHashes` argument of `importCargoLock`:

outputHashes = {
 "crossterm-0.28.1" = "<hash>";
};

If you use `buildRustPackage`, you can add this attribute to the `cargoLock`
attribute set.
```

This PR adds the missing hash:

```diff
cargoLock.outputHashes = {
  "ratatui-0.29.0" = "sha256-HBvT5c8GsiCxMffNjJGLmHnvG77A6cqEL+1ARurBXho=";
+ "crossterm-0.28.1" = "sha256-6qCtfSMuXACKFb9ATID39XyFDIEMFDmbx6SSmNe+728=";
};
```

With this change, `nix build .#default` succeeds:

```
> nix build .#default --max-jobs 1 --cores 2

warning: Git tree '/home/lukas/r/github.com/lukasl-dev/codex' is dirty
[1/0/1 built] building codex-rs-0.1.0 (buildPhase)

> ./result/bin/codex
  You are running Codex in /home/lukas/r/github.com/lukasl-dev/codex

  Since this folder is version controlled, you may wish to allow Codex to work in this folder without asking for approval.
  ...
```
2025-11-04 17:07:37 -08:00
Owen Lin
edf4c3f627 [app-server] feat: export.rs supports a v2 namespace, initial v2 notifications (#6212)
**TypeScript and JSON schema exports**
While working on Thread/Turn/Items type definitions, I realized we will
run into name conflicts between v1 and v2 APIs (e.g. `RateLimitWindow`,
which won't be reusable since v1 uses `RateLimitWindow` from `protocol/`,
which uses snake_case, but we want to expose camelCase everywhere, so
we'll define a v2 version of that struct that serializes as camelCase).
To set us up for a clean and isolated v2 API, generate types into a
`v2/` namespace for both TypeScript and JSON schema.
- TypeScript: v2 types emit under `out_dir/v2/*.ts`, and the root index.ts
now re-exports them via `export * as v2 from "./v2";`.
- JSON Schemas: v2 definitions bundle under `#/definitions/v2/*` rather
than at the root (illustrated below).

The location of the original types (v1 and types pulled from
`protocol/` and other core crates) hasn't changed; they are still at the
root. This is for backwards compatibility: no breaking changes to
existing usages of v1 APIs and types.
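
Concretely, the bundling rewrites schema references along these lines
(illustrative values):

```rust
// Before: a root-level reference in the bundled schema.
let v1_ref = serde_json::json!({ "$ref": "#/definitions/RateLimitWindow" });
// After: the v2 copy lives under the v2 namespace.
let v2_ref = serde_json::json!({ "$ref": "#/definitions/v2/RateLimitWindow" });
```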

**Notifications**
While working on export.rs, I:
- refactored server/client notifications with macros (like we already do
for methods) so they also get exported (I noticed they weren't being
exported at all).
- removed the hardcoded list of types to export as JSON schema by
leveraging the existing macros instead
- and took a stab at API V2 notifications. These aren't wired up yet,
and I expect to iterate on these this week.
2025-11-05 01:02:39 +00:00
Ahmed Ibrahim
d40a6b7f73 fix: Update the deprecation message to link to the docs (#6211)
The deprecation message is currently a bit confusing. Users may not
understand what `[features].x` is. I updated the docs and the
deprecation message for more guidance.

---------

Co-authored-by: Gabriel Peal <gpeal@users.noreply.github.com>
2025-11-04 21:02:27 +00:00
Dylan Hurd
3a22018edd Revert "fix: pin musl 1.2.5 for DNS fixes" (#6222)
Reverts openai/codex#6189
2025-11-04 11:56:40 -08:00
Ahmed Ibrahim
fe54c216a3 ignore deltas in codex_delegate (#6208)
Ignore legacy deltas in `codex-delegate` to avoid this
[issue](https://github.com/openai/codex/pull/6202).
2025-11-04 19:21:35 +00:00
100 changed files with 6699 additions and 2665 deletions

View File

@@ -1,47 +0,0 @@
name: Setup musl 1.2.5 toolchain
description: Install musl 1.2.5 from source and configure the linker for the requested target.
inputs:
target:
description: Cargo target triple that requires musl (e.g., x86_64-unknown-linux-musl).
required: true
runs:
using: composite
steps:
- name: Install musl 1.2.5
shell: bash
env:
MUSL_VERSION: 1.2.5
MUSL_PREFIX: /opt/musl-1.2.5
DEBIAN_FRONTEND: noninteractive
run: |
set -euo pipefail
sudo apt-get -y update -o Acquire::Retries=3
sudo apt-get -y install --no-install-recommends build-essential curl pkg-config
curl -sSfL --retry 3 --retry-delay 1 "https://musl.libc.org/releases/musl-${MUSL_VERSION}.tar.gz" -o /tmp/musl.tar.gz
tar -xf /tmp/musl.tar.gz -C /tmp
pushd "/tmp/musl-${MUSL_VERSION}"
./configure --prefix="${MUSL_PREFIX}"
make -j"$(nproc)"
sudo make install
popd
echo "${MUSL_PREFIX}/bin" >> "$GITHUB_PATH"
musl_gcc="${MUSL_PREFIX}/bin/musl-gcc"
"${musl_gcc}" --version
case "${{ inputs.target }}" in
x86_64-unknown-linux-musl)
echo "CC_x86_64_unknown_linux_musl=${musl_gcc}" >> "$GITHUB_ENV"
echo "CARGO_TARGET_X86_64_UNKNOWN_LINUX_MUSL_LINKER=${musl_gcc}" >> "$GITHUB_ENV"
;;
aarch64-unknown-linux-musl)
echo "CC_aarch64_unknown_linux_musl=${musl_gcc}" >> "$GITHUB_ENV"
echo "CARGO_TARGET_AARCH64_UNKNOWN_LINUX_MUSL_LINKER=${musl_gcc}" >> "$GITHUB_ENV"
;;
*)
echo "Unsupported musl target '${{ inputs.target }}'" >&2
exit 1
;;
esac

View File

@@ -217,10 +217,14 @@ jobs:
key: apt-${{ matrix.runner }}-${{ matrix.target }}-v1
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl'}}
name: Setup musl 1.2.5 toolchain
uses: ./.github/actions/setup-musl-1_2_5
with:
target: ${{ matrix.target }}
name: Install musl build tools
env:
DEBIAN_FRONTEND: noninteractive
shell: bash
run: |
set -euo pipefail
sudo apt-get -y update -o Acquire::Retries=3
sudo apt-get -y install --no-install-recommends musl-tools pkg-config
- name: Install cargo-chef
if: ${{ matrix.profile == 'release' }}

View File

@@ -92,10 +92,10 @@ jobs:
key: cargo-${{ matrix.runner }}-${{ matrix.target }}-release-${{ hashFiles('**/Cargo.lock') }}
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl'}}
name: Setup musl 1.2.5 toolchain
uses: ./.github/actions/setup-musl-1_2_5
with:
target: ${{ matrix.target }}
name: Install musl build tools
run: |
sudo apt-get update
sudo apt-get install -y musl-tools pkg-config
- name: Cargo build
run: cargo build --target ${{ matrix.target }} --release --bin codex --bin codex-responses-api-proxy

codex-rs/Cargo.lock generated
View File

@@ -186,9 +186,11 @@ dependencies = [
"chrono",
"codex-app-server-protocol",
"codex-core",
"codex-protocol",
"serde",
"serde_json",
"tokio",
"uuid",
"wiremock",
]
@@ -871,6 +873,7 @@ dependencies = [
"anyhow",
"clap",
"codex-protocol",
"mcp-types",
"paste",
"pretty_assertions",
"schemars 0.8.22",
@@ -5009,9 +5012,9 @@ dependencies = [
[[package]]
name = "rmcp"
version = "0.8.3"
version = "0.8.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1fdad1258f7259fdc0f2dfc266939c82c3b5d1fd72bcde274d600cdc27e60243"
checksum = "e5947688160b56fb6c827e3c20a72c90392a1d7e9dec74749197aa1780ac42ca"
dependencies = [
"base64",
"bytes",
@@ -5043,9 +5046,9 @@ dependencies = [
[[package]]
name = "rmcp-macros"
version = "0.8.3"
version = "0.8.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ede0589a208cc7ce81d1be68aa7e74b917fcd03c81528408bab0457e187dcd9b"
checksum = "01263441d3f8635c628e33856c468b96ebbce1af2d3699ea712ca71432d4ee7a"
dependencies = [
"darling 0.21.3",
"proc-macro2",

View File

@@ -162,7 +162,7 @@ ratatui = "0.29.0"
ratatui-macros = "0.6.0"
regex-lite = "0.1.7"
reqwest = "0.12"
rmcp = { version = "0.8.3", default-features = false }
rmcp = { version = "0.8.5", default-features = false }
schemars = "0.8.22"
seccompiler = "0.5.0"
sentry = "0.34.0"
@@ -256,7 +256,12 @@ unwrap_used = "deny"
# cargo-shear cannot see the platform-specific openssl-sys usage, so we
# silence the false positive here instead of deleting a real dependency.
[workspace.metadata.cargo-shear]
ignored = ["icu_provider", "openssl-sys", "codex-utils-readiness", "codex-utils-tokenizer"]
ignored = [
"icu_provider",
"openssl-sys",
"codex-utils-readiness",
"codex-utils-tokenizer",
]
[profile.release]
lto = "fat"

View File

@@ -14,6 +14,7 @@ workspace = true
anyhow = { workspace = true }
clap = { workspace = true, features = ["derive"] }
codex-protocol = { workspace = true }
mcp-types = { workspace = true }
paste = { workspace = true }
schemars = { workspace = true }
serde = { workspace = true, features = ["derive"] }

View File

@@ -2,20 +2,27 @@ use crate::ClientNotification;
use crate::ClientRequest;
use crate::ServerNotification;
use crate::ServerRequest;
use crate::export_client_notification_schemas;
use crate::export_client_param_schemas;
use crate::export_client_response_schemas;
use crate::export_client_responses;
use crate::export_server_notification_schemas;
use crate::export_server_param_schemas;
use crate::export_server_response_schemas;
use crate::export_server_responses;
use anyhow::Context;
use anyhow::Result;
use anyhow::anyhow;
use codex_protocol::parse_command::ParsedCommand;
use codex_protocol::protocol::EventMsg;
use codex_protocol::protocol::FileChange;
use codex_protocol::protocol::SandboxPolicy;
use schemars::JsonSchema;
use schemars::schema::RootSchema;
use schemars::schema_for;
use serde::Serialize;
use serde_json::Map;
use serde_json::Value;
use std::collections::BTreeMap;
use std::collections::HashMap;
use std::collections::HashSet;
use std::ffi::OsStr;
use std::fs;
@@ -28,84 +35,29 @@ use ts_rs::TS;
const HEADER: &str = "// GENERATED CODE! DO NOT MODIFY BY HAND!\n\n";
macro_rules! for_each_schema_type {
($macro:ident) => {
$macro!(crate::RequestId);
$macro!(crate::JSONRPCMessage);
$macro!(crate::JSONRPCRequest);
$macro!(crate::JSONRPCNotification);
$macro!(crate::JSONRPCResponse);
$macro!(crate::JSONRPCError);
$macro!(crate::JSONRPCErrorError);
$macro!(crate::AddConversationListenerParams);
$macro!(crate::AddConversationSubscriptionResponse);
$macro!(crate::ApplyPatchApprovalParams);
$macro!(crate::ApplyPatchApprovalResponse);
$macro!(crate::ArchiveConversationParams);
$macro!(crate::ArchiveConversationResponse);
$macro!(crate::AuthMode);
$macro!(crate::AccountUpdatedNotification);
$macro!(crate::AuthStatusChangeNotification);
$macro!(crate::CancelLoginChatGptParams);
$macro!(crate::CancelLoginChatGptResponse);
$macro!(crate::ClientInfo);
$macro!(crate::ClientNotification);
$macro!(crate::ClientRequest);
$macro!(crate::ConversationSummary);
$macro!(crate::ExecCommandApprovalParams);
$macro!(crate::ExecCommandApprovalResponse);
$macro!(crate::ExecOneOffCommandParams);
$macro!(crate::ExecOneOffCommandResponse);
$macro!(crate::FuzzyFileSearchParams);
$macro!(crate::FuzzyFileSearchResponse);
$macro!(crate::FuzzyFileSearchResult);
$macro!(crate::GetAuthStatusParams);
$macro!(crate::GetAuthStatusResponse);
$macro!(crate::GetUserAgentResponse);
$macro!(crate::GetUserSavedConfigResponse);
$macro!(crate::GitDiffToRemoteParams);
$macro!(crate::GitDiffToRemoteResponse);
$macro!(crate::GitSha);
$macro!(crate::InitializeParams);
$macro!(crate::InitializeResponse);
$macro!(crate::InputItem);
$macro!(crate::InterruptConversationParams);
$macro!(crate::InterruptConversationResponse);
$macro!(crate::ListConversationsParams);
$macro!(crate::ListConversationsResponse);
$macro!(crate::LoginApiKeyParams);
$macro!(crate::LoginApiKeyResponse);
$macro!(crate::LoginChatGptCompleteNotification);
$macro!(crate::LoginChatGptResponse);
$macro!(crate::LogoutChatGptParams);
$macro!(crate::LogoutChatGptResponse);
$macro!(crate::NewConversationParams);
$macro!(crate::NewConversationResponse);
$macro!(crate::Profile);
$macro!(crate::RemoveConversationListenerParams);
$macro!(crate::RemoveConversationSubscriptionResponse);
$macro!(crate::ResumeConversationParams);
$macro!(crate::ResumeConversationResponse);
$macro!(crate::SandboxSettings);
$macro!(crate::SendUserMessageParams);
$macro!(crate::SendUserMessageResponse);
$macro!(crate::SendUserTurnParams);
$macro!(crate::SendUserTurnResponse);
$macro!(crate::ServerNotification);
$macro!(crate::ServerRequest);
$macro!(crate::SessionConfiguredNotification);
$macro!(crate::SetDefaultModelParams);
$macro!(crate::SetDefaultModelResponse);
$macro!(crate::Tools);
$macro!(crate::UserInfoResponse);
$macro!(crate::UserSavedConfig);
$macro!(codex_protocol::protocol::EventMsg);
$macro!(codex_protocol::protocol::FileChange);
$macro!(codex_protocol::parse_command::ParsedCommand);
$macro!(codex_protocol::protocol::SandboxPolicy);
};
#[derive(Clone)]
pub struct GeneratedSchema {
namespace: Option<String>,
logical_name: String,
value: Value,
in_v1_dir: bool,
}
impl GeneratedSchema {
fn namespace(&self) -> Option<&str> {
self.namespace.as_deref()
}
fn logical_name(&self) -> &str {
&self.logical_name
}
fn value(&self) -> &Value {
&self.value
}
}
type JsonSchemaEmitter = fn(&Path) -> Result<GeneratedSchema>;
pub fn generate_types(out_dir: &Path, prettier: Option<&Path>) -> Result<()> {
generate_ts(out_dir, prettier)?;
generate_json(out_dir)?;
@@ -113,7 +65,9 @@ pub fn generate_types(out_dir: &Path, prettier: Option<&Path>) -> Result<()> {
}
pub fn generate_ts(out_dir: &Path, prettier: Option<&Path>) -> Result<()> {
let v2_out_dir = out_dir.join("v2");
ensure_dir(out_dir)?;
ensure_dir(&v2_out_dir)?;
ClientRequest::export_all_to(out_dir)?;
export_client_responses(out_dir)?;
@@ -124,12 +78,15 @@ pub fn generate_ts(out_dir: &Path, prettier: Option<&Path>) -> Result<()> {
ServerNotification::export_all_to(out_dir)?;
generate_index_ts(out_dir)?;
generate_index_ts(&v2_out_dir)?;
let ts_files = ts_files_in(out_dir)?;
// Ensure our header is present on all TS files (root + subdirs like v2/).
let ts_files = ts_files_in_recursive(out_dir)?;
for file in &ts_files {
prepend_header_if_missing(file)?;
}
// Optionally run Prettier on all generated TS files.
if let Some(prettier_bin) = prettier
&& !ts_files.is_empty()
{
@@ -148,23 +105,47 @@ pub fn generate_ts(out_dir: &Path, prettier: Option<&Path>) -> Result<()> {
pub fn generate_json(out_dir: &Path) -> Result<()> {
ensure_dir(out_dir)?;
let mut bundle: BTreeMap<String, RootSchema> = BTreeMap::new();
let envelope_emitters: &[JsonSchemaEmitter] = &[
|d| write_json_schema_with_return::<crate::RequestId>(d, "RequestId"),
|d| write_json_schema_with_return::<crate::JSONRPCMessage>(d, "JSONRPCMessage"),
|d| write_json_schema_with_return::<crate::JSONRPCRequest>(d, "JSONRPCRequest"),
|d| write_json_schema_with_return::<crate::JSONRPCNotification>(d, "JSONRPCNotification"),
|d| write_json_schema_with_return::<crate::JSONRPCResponse>(d, "JSONRPCResponse"),
|d| write_json_schema_with_return::<crate::JSONRPCError>(d, "JSONRPCError"),
|d| write_json_schema_with_return::<crate::JSONRPCErrorError>(d, "JSONRPCErrorError"),
|d| write_json_schema_with_return::<crate::ClientRequest>(d, "ClientRequest"),
|d| write_json_schema_with_return::<crate::ServerRequest>(d, "ServerRequest"),
|d| write_json_schema_with_return::<crate::ClientNotification>(d, "ClientNotification"),
|d| write_json_schema_with_return::<crate::ServerNotification>(d, "ServerNotification"),
|d| write_json_schema_with_return::<EventMsg>(d, "EventMsg"),
|d| write_json_schema_with_return::<FileChange>(d, "FileChange"),
|d| write_json_schema_with_return::<crate::protocol::v1::InputItem>(d, "InputItem"),
|d| write_json_schema_with_return::<ParsedCommand>(d, "ParsedCommand"),
|d| write_json_schema_with_return::<SandboxPolicy>(d, "SandboxPolicy"),
];
macro_rules! add_schema {
($ty:path) => {{
let name = type_basename(stringify!($ty));
let schema = write_json_schema_with_return::<$ty>(out_dir, &name)?;
bundle.insert(name, schema);
}};
let mut schemas: Vec<GeneratedSchema> = Vec::new();
for emit in envelope_emitters {
schemas.push(emit(out_dir)?);
}
for_each_schema_type!(add_schema);
schemas.extend(export_client_param_schemas(out_dir)?);
schemas.extend(export_client_response_schemas(out_dir)?);
schemas.extend(export_server_param_schemas(out_dir)?);
schemas.extend(export_server_response_schemas(out_dir)?);
schemas.extend(export_client_notification_schemas(out_dir)?);
schemas.extend(export_server_notification_schemas(out_dir)?);
export_client_response_schemas(out_dir)?;
export_server_response_schemas(out_dir)?;
let bundle = build_schema_bundle(schemas)?;
write_pretty_json(
out_dir.join("codex_app_server_protocol.schemas.json"),
&bundle,
)?;
let mut definitions = Map::new();
Ok(())
}
fn build_schema_bundle(schemas: Vec<GeneratedSchema>) -> Result<Value> {
const SPECIAL_DEFINITIONS: &[&str] = &[
"ClientNotification",
"ClientRequest",
@@ -177,22 +158,62 @@ pub fn generate_json(out_dir: &Path) -> Result<()> {
"ServerRequest",
];
for (name, schema) in bundle {
let mut schema_value = serde_json::to_value(schema)?;
annotate_schema(&mut schema_value, Some(name.as_str()));
let namespaced_types = collect_namespaced_types(&schemas);
let mut definitions = Map::new();
if let Value::Object(ref mut obj) = schema_value
for schema in schemas {
let GeneratedSchema {
namespace,
logical_name,
mut value,
in_v1_dir,
} = schema;
if let Some(ref ns) = namespace {
rewrite_refs_to_namespace(&mut value, ns);
}
let mut forced_namespace_refs: Vec<(String, String)> = Vec::new();
if let Value::Object(ref mut obj) = value
&& let Some(defs) = obj.remove("definitions")
&& let Value::Object(defs_obj) = defs
{
for (def_name, mut def_schema) in defs_obj {
if !SPECIAL_DEFINITIONS.contains(&def_name.as_str()) {
annotate_schema(&mut def_schema, Some(def_name.as_str()));
if SPECIAL_DEFINITIONS.contains(&def_name.as_str()) {
continue;
}
annotate_schema(&mut def_schema, Some(def_name.as_str()));
let target_namespace = match namespace {
Some(ref ns) => Some(ns.clone()),
None => namespace_for_definition(&def_name, &namespaced_types)
.cloned()
.filter(|_| !in_v1_dir),
};
if let Some(ref ns) = target_namespace {
if namespace.as_deref() == Some(ns.as_str()) {
rewrite_refs_to_namespace(&mut def_schema, ns);
insert_into_namespace(&mut definitions, ns, def_name.clone(), def_schema)?;
} else if !forced_namespace_refs
.iter()
.any(|(name, existing_ns)| name == &def_name && existing_ns == ns)
{
forced_namespace_refs.push((def_name.clone(), ns.clone()));
}
} else {
definitions.insert(def_name, def_schema);
}
}
}
definitions.insert(name, schema_value);
for (name, ns) in forced_namespace_refs {
rewrite_named_ref_to_namespace(&mut value, &ns, &name);
}
if let Some(ref ns) = namespace {
insert_into_namespace(&mut definitions, ns, logical_name.clone(), value)?;
} else {
definitions.insert(logical_name, value);
}
}
let mut root = Map::new();
@@ -207,15 +228,28 @@ pub fn generate_json(out_dir: &Path) -> Result<()> {
root.insert("type".to_string(), Value::String("object".into()));
root.insert("definitions".to_string(), Value::Object(definitions));
write_pretty_json(
out_dir.join("codex_app_server_protocol.schemas.json"),
&Value::Object(root),
)?;
Ok(())
Ok(Value::Object(root))
}
fn write_json_schema_with_return<T>(out_dir: &Path, name: &str) -> Result<RootSchema>
fn insert_into_namespace(
definitions: &mut Map<String, Value>,
namespace: &str,
name: String,
schema: Value,
) -> Result<()> {
let entry = definitions
.entry(namespace.to_string())
.or_insert_with(|| Value::Object(Map::new()));
match entry {
Value::Object(map) => {
map.insert(name, schema);
Ok(())
}
_ => Err(anyhow!("expected namespace {namespace} to be an object")),
}
}
fn write_json_schema_with_return<T>(out_dir: &Path, name: &str) -> Result<GeneratedSchema>
where
T: JsonSchema,
{
@@ -223,17 +257,37 @@ where
let schema = schema_for!(T);
let mut schema_value = serde_json::to_value(schema)?;
annotate_schema(&mut schema_value, Some(file_stem));
write_pretty_json(out_dir.join(format!("{file_stem}.json")), &schema_value)
// If the name looks like a namespaced path (e.g., "v2::Type"), mirror
// the TypeScript layout and write to out_dir/v2/Type.json. Otherwise
// write alongside the legacy files.
let (raw_namespace, logical_name) = split_namespace(file_stem);
let out_path = if let Some(ns) = raw_namespace {
let dir = out_dir.join(ns);
ensure_dir(&dir)?;
dir.join(format!("{logical_name}.json"))
} else {
out_dir.join(format!("{file_stem}.json"))
};
write_pretty_json(out_path, &schema_value)
.with_context(|| format!("Failed to write JSON schema for {file_stem}"))?;
let annotated_schema = serde_json::from_value(schema_value)?;
Ok(annotated_schema)
let namespace = match raw_namespace {
Some("v1") | None => None,
Some(ns) => Some(ns.to_string()),
};
Ok(GeneratedSchema {
in_v1_dir: raw_namespace == Some("v1"),
namespace,
logical_name: logical_name.to_string(),
value: schema_value,
})
}
pub(crate) fn write_json_schema<T>(out_dir: &Path, name: &str) -> Result<()>
pub(crate) fn write_json_schema<T>(out_dir: &Path, name: &str) -> Result<GeneratedSchema>
where
T: JsonSchema,
{
write_json_schema_with_return::<T>(out_dir, name).map(|_| ())
write_json_schema_with_return::<T>(out_dir, name)
}
fn write_pretty_json(path: PathBuf, value: &impl Serialize) -> Result<()> {
@@ -242,13 +296,73 @@ fn write_pretty_json(path: PathBuf, value: &impl Serialize) -> Result<()> {
fs::write(&path, json).with_context(|| format!("Failed to write {}", path.display()))?;
Ok(())
}
fn type_basename(type_path: &str) -> String {
type_path
.rsplit_once("::")
.map(|(_, name)| name)
.unwrap_or(type_path)
.trim()
.to_string()
/// Split a fully-qualified type name like "v2::Type" into its namespace and logical name.
fn split_namespace(name: &str) -> (Option<&str>, &str) {
name.split_once("::")
.map_or((None, name), |(ns, rest)| (Some(ns), rest))
}
/// Recursively rewrite $ref values that point at "#/definitions/..." so that
/// they point to a namespaced location under the bundle.
fn rewrite_refs_to_namespace(value: &mut Value, ns: &str) {
match value {
Value::Object(obj) => {
if let Some(Value::String(r)) = obj.get_mut("$ref")
&& let Some(suffix) = r.strip_prefix("#/definitions/")
{
let prefix = format!("{ns}/");
if !suffix.starts_with(&prefix) {
*r = format!("#/definitions/{ns}/{suffix}");
}
}
for v in obj.values_mut() {
rewrite_refs_to_namespace(v, ns);
}
}
Value::Array(items) => {
for v in items.iter_mut() {
rewrite_refs_to_namespace(v, ns);
}
}
_ => {}
}
}
fn collect_namespaced_types(schemas: &[GeneratedSchema]) -> HashMap<String, String> {
let mut types = HashMap::new();
for schema in schemas {
if let Some(ns) = schema.namespace() {
types
.entry(schema.logical_name().to_string())
.or_insert_with(|| ns.to_string());
if let Some(Value::Object(defs)) = schema.value().get("definitions") {
for key in defs.keys() {
types.entry(key.clone()).or_insert_with(|| ns.to_string());
}
}
if let Some(Value::Object(defs)) = schema.value().get("$defs") {
for key in defs.keys() {
types.entry(key.clone()).or_insert_with(|| ns.to_string());
}
}
}
}
types
}
fn namespace_for_definition<'a>(
name: &str,
types: &'a HashMap<String, String>,
) -> Option<&'a String> {
if let Some(ns) = types.get(name) {
return Some(ns);
}
let trimmed = name.trim_end_matches(|c: char| c.is_ascii_digit());
if trimmed != name {
return types.get(trimmed);
}
None
}
fn variant_definition_name(base: &str, variant: &Value) -> Option<String> {
@@ -468,6 +582,33 @@ fn ensure_dir(dir: &Path) -> Result<()> {
.with_context(|| format!("Failed to create output directory {}", dir.display()))
}
fn rewrite_named_ref_to_namespace(value: &mut Value, ns: &str, name: &str) {
let direct = format!("#/definitions/{name}");
let prefixed = format!("{direct}/");
let replacement = format!("#/definitions/{ns}/{name}");
let replacement_prefixed = format!("{replacement}/");
match value {
Value::Object(obj) => {
if let Some(Value::String(reference)) = obj.get_mut("$ref") {
if reference == &direct {
*reference = replacement;
} else if let Some(rest) = reference.strip_prefix(&prefixed) {
*reference = format!("{replacement_prefixed}{rest}");
}
}
for child in obj.values_mut() {
rewrite_named_ref_to_namespace(child, ns, name);
}
}
Value::Array(items) => {
for child in items {
rewrite_named_ref_to_namespace(child, ns, name);
}
}
_ => {}
}
}
fn prepend_header_if_missing(path: &Path) -> Result<()> {
let mut content = String::new();
{
@@ -505,6 +646,26 @@ fn ts_files_in(dir: &Path) -> Result<Vec<PathBuf>> {
Ok(files)
}
fn ts_files_in_recursive(dir: &Path) -> Result<Vec<PathBuf>> {
let mut files = Vec::new();
let mut stack = vec![dir.to_path_buf()];
while let Some(d) = stack.pop() {
for entry in
fs::read_dir(&d).with_context(|| format!("Failed to read dir {}", d.display()))?
{
let entry = entry?;
let path = entry.path();
if path.is_dir() {
stack.push(path);
} else if path.is_file() && path.extension() == Some(OsStr::new("ts")) {
files.push(path);
}
}
}
files.sort();
Ok(files)
}
fn generate_index_ts(out_dir: &Path) -> Result<PathBuf> {
let mut entries: Vec<String> = Vec::new();
let mut stems: Vec<String> = ts_files_in(out_dir)?
@@ -521,6 +682,14 @@ fn generate_index_ts(out_dir: &Path) -> Result<PathBuf> {
entries.push(format!("export type {{ {name} }} from \"./{name}\";\n"));
}
// If this is the root out_dir and a ./v2 folder exists with TS files,
// expose it as a namespace to avoid symbol collisions at the root.
let v2_dir = out_dir.join("v2");
let has_v2_ts = ts_files_in(&v2_dir).map(|v| !v.is_empty()).unwrap_or(false);
if has_v2_ts {
entries.push("export * as v2 from \"./v2\";\n".to_string());
}
let mut content =
String::with_capacity(HEADER.len() + entries.iter().map(String::len).sum::<usize>());
content.push_str(HEADER);
@@ -547,6 +716,7 @@ mod tests {
#[test]
fn generated_ts_has_no_optional_nullable_fields() -> Result<()> {
// Assert that there are no types of the form "?: T | null" in the generated TS files.
let output_dir = std::env::temp_dir().join(format!("codex_ts_types_{}", Uuid::now_v7()));
fs::create_dir(&output_dir)?;

View File

@@ -1,15 +1,17 @@
use std::collections::HashMap;
use std::path::Path;
use std::path::PathBuf;
use crate::JSONRPCNotification;
use crate::JSONRPCRequest;
use crate::RequestId;
use crate::export::GeneratedSchema;
use crate::export::write_json_schema;
use crate::protocol::v1;
use crate::protocol::v2;
use codex_protocol::ConversationId;
use codex_protocol::parse_command::ParsedCommand;
use codex_protocol::protocol::FileChange;
use codex_protocol::protocol::RateLimitSnapshot;
use codex_protocol::protocol::ReviewDecision;
use codex_protocol::protocol::SandboxCommandAssessment;
use paste::paste;
@@ -74,33 +76,97 @@ macro_rules! client_request_definitions {
Ok(())
}
#[allow(clippy::vec_init_then_push)]
pub fn export_client_response_schemas(
out_dir: &::std::path::Path,
) -> ::anyhow::Result<()> {
) -> ::anyhow::Result<Vec<GeneratedSchema>> {
let mut schemas = Vec::new();
$(
crate::export::write_json_schema::<$response>(out_dir, stringify!($response))?;
schemas.push(write_json_schema::<$response>(out_dir, stringify!($response))?);
)*
Ok(())
Ok(schemas)
}
#[allow(clippy::vec_init_then_push)]
pub fn export_client_param_schemas(
out_dir: &::std::path::Path,
) -> ::anyhow::Result<Vec<GeneratedSchema>> {
let mut schemas = Vec::new();
$(
schemas.push(write_json_schema::<$params>(out_dir, stringify!($params))?);
)*
Ok(schemas)
}
};
}
client_request_definitions! {
/// NEW APIs
#[serde(rename = "model/list")]
#[ts(rename = "model/list")]
ListModels {
params: v2::ListModelsParams,
response: v2::ListModelsResponse,
// Thread lifecycle
#[serde(rename = "thread/start")]
#[ts(rename = "thread/start")]
ThreadStart {
params: v2::ThreadStartParams,
response: v2::ThreadStartResponse,
},
#[serde(rename = "thread/resume")]
#[ts(rename = "thread/resume")]
ThreadResume {
params: v2::ThreadResumeParams,
response: v2::ThreadResumeResponse,
},
#[serde(rename = "thread/archive")]
#[ts(rename = "thread/archive")]
ThreadArchive {
params: v2::ThreadArchiveParams,
response: v2::ThreadArchiveResponse,
},
#[serde(rename = "thread/list")]
#[ts(rename = "thread/list")]
ThreadList {
params: v2::ThreadListParams,
response: v2::ThreadListResponse,
},
#[serde(rename = "thread/compact")]
#[ts(rename = "thread/compact")]
ThreadCompact {
params: v2::ThreadCompactParams,
response: v2::ThreadCompactResponse,
},
#[serde(rename = "turn/start")]
#[ts(rename = "turn/start")]
TurnStart {
params: v2::TurnStartParams,
response: v2::TurnStartResponse,
},
#[serde(rename = "turn/interrupt")]
#[ts(rename = "turn/interrupt")]
TurnInterrupt {
params: v2::TurnInterruptParams,
response: v2::TurnInterruptResponse,
},
#[serde(rename = "account/login")]
#[ts(rename = "account/login")]
#[serde(rename = "model/list")]
#[ts(rename = "model/list")]
ModelList {
params: v2::ModelListParams,
response: v2::ModelListResponse,
},
#[serde(rename = "account/login/start")]
#[ts(rename = "account/login/start")]
LoginAccount {
params: v2::LoginAccountParams,
response: v2::LoginAccountResponse,
},
#[serde(rename = "account/login/cancel")]
#[ts(rename = "account/login/cancel")]
CancelLoginAccount {
params: v2::CancelLoginAccountParams,
response: v2::CancelLoginAccountResponse,
},
#[serde(rename = "account/logout")]
#[ts(rename = "account/logout")]
LogoutAccount {
@@ -117,9 +183,9 @@ client_request_definitions! {
#[serde(rename = "feedback/upload")]
#[ts(rename = "feedback/upload")]
UploadFeedback {
params: v2::UploadFeedbackParams,
response: v2::UploadFeedbackResponse,
FeedbackUpload {
params: v2::FeedbackUploadParams,
response: v2::FeedbackUploadResponse,
},
#[serde(rename = "account/read")]
@@ -188,6 +254,7 @@ client_request_definitions! {
params: #[ts(type = "undefined")] #[serde(skip_serializing_if = "Option::is_none")] Option<()>,
response: v1::LoginChatGptResponse,
},
// DEPRECATED in favor of CancelLoginAccount
CancelLoginChatGpt {
params: v1::CancelLoginChatGptParams,
response: v1::CancelLoginChatGptResponse,
@@ -276,13 +343,101 @@ macro_rules! server_request_definitions {
Ok(())
}
#[allow(clippy::vec_init_then_push)]
pub fn export_server_response_schemas(
out_dir: &::std::path::Path,
) -> ::anyhow::Result<()> {
out_dir: &Path,
) -> ::anyhow::Result<Vec<GeneratedSchema>> {
let mut schemas = Vec::new();
paste! {
$(crate::export::write_json_schema::<[<$variant Response>]>(out_dir, stringify!([<$variant Response>]))?;)*
$(schemas.push(crate::export::write_json_schema::<[<$variant Response>]>(out_dir, stringify!([<$variant Response>]))?);)*
}
Ok(())
Ok(schemas)
}
#[allow(clippy::vec_init_then_push)]
pub fn export_server_param_schemas(
out_dir: &Path,
) -> ::anyhow::Result<Vec<GeneratedSchema>> {
let mut schemas = Vec::new();
paste! {
$(schemas.push(crate::export::write_json_schema::<[<$variant Params>]>(out_dir, stringify!([<$variant Params>]))?);)*
}
Ok(schemas)
}
};
}
/// Generates `ServerNotification` enum and helpers, including a JSON Schema
/// exporter for each notification.
macro_rules! server_notification_definitions {
(
$(
$(#[$variant_meta:meta])*
$variant:ident $(=> $wire:literal)? ( $payload:ty )
),* $(,)?
) => {
/// Notification sent from the server to the client.
#[derive(Serialize, Deserialize, Debug, Clone, JsonSchema, TS, Display)]
#[serde(tag = "method", content = "params", rename_all = "camelCase")]
#[strum(serialize_all = "camelCase")]
pub enum ServerNotification {
$(
$(#[$variant_meta])*
$(#[serde(rename = $wire)] #[ts(rename = $wire)] #[strum(serialize = $wire)])?
$variant($payload),
)*
}
impl ServerNotification {
pub fn to_params(self) -> Result<serde_json::Value, serde_json::Error> {
match self {
$(Self::$variant(params) => serde_json::to_value(params),)*
}
}
}
impl TryFrom<JSONRPCNotification> for ServerNotification {
type Error = serde_json::Error;
fn try_from(value: JSONRPCNotification) -> Result<Self, Self::Error> {
serde_json::from_value(serde_json::to_value(value)?)
}
}
#[allow(clippy::vec_init_then_push)]
pub fn export_server_notification_schemas(
out_dir: &::std::path::Path,
) -> ::anyhow::Result<Vec<GeneratedSchema>> {
let mut schemas = Vec::new();
$(schemas.push(crate::export::write_json_schema::<$payload>(out_dir, stringify!($payload))?);)*
Ok(schemas)
}
};
}
/// Notifications sent from the client to the server.
macro_rules! client_notification_definitions {
(
$(
$(#[$variant_meta:meta])*
$variant:ident $( ( $payload:ty ) )?
),* $(,)?
) => {
#[derive(Serialize, Deserialize, Debug, Clone, JsonSchema, TS, Display)]
#[serde(tag = "method", content = "params", rename_all = "camelCase")]
#[strum(serialize_all = "camelCase")]
pub enum ClientNotification {
$(
$(#[$variant_meta])*
$variant $( ( $payload ) )?,
)*
}
pub fn export_client_notification_schemas(
_out_dir: &::std::path::Path,
) -> ::anyhow::Result<Vec<GeneratedSchema>> {
let schemas = Vec::new();
$( $(schemas.push(crate::export::write_json_schema::<$payload>(_out_dir, stringify!($payload))?);)? )*
Ok(schemas)
}
};
}
@@ -366,58 +521,33 @@ pub struct FuzzyFileSearchResponse {
pub files: Vec<FuzzyFileSearchResult>,
}
/// Notification sent from the server to the client.
#[derive(Serialize, Deserialize, Debug, Clone, JsonSchema, TS, Display)]
#[serde(tag = "method", content = "params", rename_all = "camelCase")]
#[strum(serialize_all = "camelCase")]
pub enum ServerNotification {
server_notification_definitions! {
/// NEW NOTIFICATIONS
#[serde(rename = "account/updated")]
#[ts(rename = "account/updated")]
#[strum(serialize = "account/updated")]
AccountUpdated(v2::AccountUpdatedNotification),
ThreadStarted => "thread/started" (v2::ThreadStartedNotification),
TurnStarted => "turn/started" (v2::TurnStartedNotification),
TurnCompleted => "turn/completed" (v2::TurnCompletedNotification),
ItemStarted => "item/started" (v2::ItemStartedNotification),
ItemCompleted => "item/completed" (v2::ItemCompletedNotification),
AgentMessageDelta => "item/agentMessage/delta" (v2::AgentMessageDeltaNotification),
CommandExecutionOutputDelta => "item/commandExecution/outputDelta" (v2::CommandExecutionOutputDeltaNotification),
McpToolCallProgress => "item/mcpToolCall/progress" (v2::McpToolCallProgressNotification),
AccountUpdated => "account/updated" (v2::AccountUpdatedNotification),
AccountRateLimitsUpdated => "account/rateLimits/updated" (v2::AccountRateLimitsUpdatedNotification),
#[serde(rename = "account/rateLimits/updated")]
#[ts(rename = "account/rateLimits/updated")]
#[strum(serialize = "account/rateLimits/updated")]
AccountRateLimitsUpdated(RateLimitSnapshot),
#[serde(rename = "account/login/completed")]
#[ts(rename = "account/login/completed")]
#[strum(serialize = "account/login/completed")]
AccountLoginCompleted(v2::AccountLoginCompletedNotification),
/// DEPRECATED NOTIFICATIONS below
/// Authentication status changed
AuthStatusChange(v1::AuthStatusChangeNotification),
/// ChatGPT login flow completed
/// Deprecated: use `account/login/completed` instead.
LoginChatGptComplete(v1::LoginChatGptCompleteNotification),
/// The special session configured event for a new or resumed conversation.
SessionConfigured(v1::SessionConfiguredNotification),
}
impl ServerNotification {
pub fn to_params(self) -> Result<serde_json::Value, serde_json::Error> {
match self {
ServerNotification::AccountUpdated(params) => serde_json::to_value(params),
ServerNotification::AccountRateLimitsUpdated(params) => serde_json::to_value(params),
ServerNotification::AuthStatusChange(params) => serde_json::to_value(params),
ServerNotification::LoginChatGptComplete(params) => serde_json::to_value(params),
ServerNotification::SessionConfigured(params) => serde_json::to_value(params),
}
}
}
impl TryFrom<JSONRPCNotification> for ServerNotification {
type Error = serde_json::Error;
fn try_from(value: JSONRPCNotification) -> Result<Self, Self::Error> {
serde_json::from_value(serde_json::to_value(value)?)
}
}
/// Notification sent from the client to the server.
#[derive(Serialize, Deserialize, Debug, Clone, JsonSchema, TS, Display)]
#[serde(tag = "method", content = "params", rename_all = "camelCase")]
#[strum(serialize_all = "camelCase")]
pub enum ClientNotification {
client_notification_definitions! {
Initialized,
}
@@ -577,7 +707,7 @@ mod tests {
};
assert_eq!(
json!({
"method": "account/login",
"method": "account/login/start",
"id": 2,
"params": {
"type": "apiKey",
@@ -593,11 +723,11 @@ mod tests {
fn serialize_account_login_chatgpt() -> Result<()> {
let request = ClientRequest::LoginAccount {
request_id: RequestId::Integer(3),
params: v2::LoginAccountParams::ChatGpt,
params: v2::LoginAccountParams::Chatgpt,
};
assert_eq!(
json!({
"method": "account/login",
"method": "account/login/start",
"id": 3,
"params": {
"type": "chatgpt"
@@ -653,7 +783,7 @@ mod tests {
serde_json::to_value(&api_key)?,
);
let chatgpt = v2::Account::ChatGpt {
let chatgpt = v2::Account::Chatgpt {
email: Some("user@example.com".to_string()),
plan_type: PlanType::Plus,
};
@@ -671,16 +801,16 @@ mod tests {
#[test]
fn serialize_list_models() -> Result<()> {
let request = ClientRequest::ListModels {
let request = ClientRequest::ModelList {
request_id: RequestId::Integer(6),
params: v2::ListModelsParams::default(),
params: v2::ModelListParams::default(),
};
assert_eq!(
json!({
"method": "model/list",
"id": 6,
"params": {
"pageSize": null,
"limit": null,
"cursor": null
}
}),

View File

@@ -374,10 +374,9 @@ pub enum InputItem {
LocalImage { path: PathBuf },
}
// Deprecated notifications (v1)
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
/// Deprecated in favor of AccountLoginCompletedNotification.
pub struct LoginChatGptCompleteNotification {
#[schemars(with = "String")]
pub login_id: Uuid,

View File

@@ -1,17 +1,125 @@
use std::collections::HashMap;
use std::path::PathBuf;
use crate::protocol::common::AuthMode;
use codex_protocol::ConversationId;
use codex_protocol::account::PlanType;
use codex_protocol::config_types::ReasoningEffort;
use codex_protocol::protocol::RateLimitSnapshot;
use codex_protocol::config_types::ReasoningSummary;
use codex_protocol::protocol::RateLimitSnapshot as CoreRateLimitSnapshot;
use codex_protocol::protocol::RateLimitWindow as CoreRateLimitWindow;
use codex_protocol::user_input::UserInput as CoreUserInput;
use mcp_types::ContentBlock as McpContentBlock;
use schemars::JsonSchema;
use serde::Deserialize;
use serde::Serialize;
use serde_json::Value as JsonValue;
use ts_rs::TS;
use uuid::Uuid;
// Macro to declare a camelCased API v2 enum mirroring a core enum which
// tends to use kebab-case.
macro_rules! v2_enum_from_core {
(
pub enum $Name:ident from $Src:path { $( $Variant:ident ),+ $(,)? }
) => {
#[derive(Serialize, Deserialize, Debug, Clone, Copy, PartialEq, Eq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub enum $Name { $( $Variant ),+ }
impl $Name {
pub fn to_core(self) -> $Src {
match self { $( $Name::$Variant => <$Src>::$Variant ),+ }
}
}
impl From<$Src> for $Name {
fn from(value: $Src) -> Self {
match value { $( <$Src>::$Variant => $Name::$Variant ),+ }
}
}
};
}
v2_enum_from_core!(
pub enum AskForApproval from codex_protocol::protocol::AskForApproval {
UnlessTrusted, OnFailure, OnRequest, Never
}
);
v2_enum_from_core!(
pub enum SandboxMode from codex_protocol::config_types::SandboxMode {
ReadOnly, WorkspaceWrite, DangerFullAccess
}
);
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq, JsonSchema, TS)]
#[serde(tag = "mode", rename_all = "camelCase")]
#[ts(tag = "mode")]
#[ts(export_to = "v2/")]
pub enum SandboxPolicy {
DangerFullAccess,
ReadOnly,
WorkspaceWrite {
#[serde(default)]
writable_roots: Vec<PathBuf>,
#[serde(default)]
network_access: bool,
#[serde(default)]
exclude_tmpdir_env_var: bool,
#[serde(default)]
exclude_slash_tmp: bool,
},
}
impl SandboxPolicy {
pub fn to_core(&self) -> codex_protocol::protocol::SandboxPolicy {
match self {
SandboxPolicy::DangerFullAccess => {
codex_protocol::protocol::SandboxPolicy::DangerFullAccess
}
SandboxPolicy::ReadOnly => codex_protocol::protocol::SandboxPolicy::ReadOnly,
SandboxPolicy::WorkspaceWrite {
writable_roots,
network_access,
exclude_tmpdir_env_var,
exclude_slash_tmp,
} => codex_protocol::protocol::SandboxPolicy::WorkspaceWrite {
writable_roots: writable_roots.clone(),
network_access: *network_access,
exclude_tmpdir_env_var: *exclude_tmpdir_env_var,
exclude_slash_tmp: *exclude_slash_tmp,
},
}
}
}
impl From<codex_protocol::protocol::SandboxPolicy> for SandboxPolicy {
fn from(value: codex_protocol::protocol::SandboxPolicy) -> Self {
match value {
codex_protocol::protocol::SandboxPolicy::DangerFullAccess => {
SandboxPolicy::DangerFullAccess
}
codex_protocol::protocol::SandboxPolicy::ReadOnly => SandboxPolicy::ReadOnly,
codex_protocol::protocol::SandboxPolicy::WorkspaceWrite {
writable_roots,
network_access,
exclude_tmpdir_env_var,
exclude_slash_tmp,
} => SandboxPolicy::WorkspaceWrite {
writable_roots,
network_access,
exclude_tmpdir_env_var,
exclude_slash_tmp,
},
}
}
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(tag = "type", rename_all = "camelCase")]
#[ts(tag = "type")]
#[ts(export_to = "v2/")]
pub enum Account {
#[serde(rename = "apiKey", rename_all = "camelCase")]
#[ts(rename = "apiKey", rename_all = "camelCase")]
@@ -19,7 +127,7 @@ pub enum Account {
#[serde(rename = "chatgpt", rename_all = "camelCase")]
#[ts(rename = "chatgpt", rename_all = "camelCase")]
ChatGpt {
Chatgpt {
email: Option<String>,
plan_type: PlanType,
},
@@ -28,9 +136,10 @@ pub enum Account {
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(tag = "type")]
#[ts(tag = "type")]
#[ts(export_to = "v2/")]
pub enum LoginAccountParams {
#[serde(rename = "apiKey")]
#[ts(rename = "apiKey")]
#[serde(rename = "apiKey", rename_all = "camelCase")]
#[ts(rename = "apiKey", rename_all = "camelCase")]
ApiKey {
#[serde(rename = "apiKey")]
#[ts(rename = "apiKey")]
@@ -38,48 +147,72 @@ pub enum LoginAccountParams {
},
#[serde(rename = "chatgpt")]
#[ts(rename = "chatgpt")]
ChatGpt,
Chatgpt,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(tag = "type", rename_all = "camelCase")]
#[ts(tag = "type")]
#[ts(export_to = "v2/")]
pub enum LoginAccountResponse {
#[serde(rename = "apiKey", rename_all = "camelCase")]
#[ts(rename = "apiKey", rename_all = "camelCase")]
ApiKey {},
#[serde(rename = "chatgpt", rename_all = "camelCase")]
#[ts(rename = "chatgpt", rename_all = "camelCase")]
Chatgpt {
// Use plain String for identifiers to avoid TS/JSON Schema quirks around uuid-specific types.
// Convert to/from UUIDs at the application layer as needed.
login_id: String,
/// URL the client should open in a browser to initiate the OAuth flow.
auth_url: String,
},
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct LoginAccountResponse {
/// Only set if the login method is ChatGPT.
#[schemars(with = "String")]
pub login_id: Option<Uuid>,
/// URL the client should open in a browser to initiate the OAuth flow.
/// Only set if the login method is ChatGPT.
pub auth_url: Option<String>,
#[ts(export_to = "v2/")]
pub struct CancelLoginAccountParams {
pub login_id: String,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct CancelLoginAccountResponse {}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct LogoutAccountResponse {}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct GetAccountRateLimitsResponse {
pub rate_limits: RateLimitSnapshot,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct GetAccountResponse {
pub account: Account,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Default, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct ListModelsParams {
/// Optional page size; defaults to a reasonable server-side value.
pub page_size: Option<usize>,
#[ts(export_to = "v2/")]
pub struct ModelListParams {
/// Opaque pagination cursor returned by a previous call.
pub cursor: Option<String>,
/// Optional page size; defaults to a reasonable server-side value.
pub limit: Option<u32>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct Model {
pub id: String,
pub model: String,
@@ -93,6 +226,7 @@ pub struct Model {
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct ReasoningEffortOption {
pub reasoning_effort: ReasoningEffort,
pub description: String,
@@ -100,16 +234,18 @@ pub struct ReasoningEffortOption {
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct ListModelsResponse {
pub items: Vec<Model>,
#[ts(export_to = "v2/")]
pub struct ModelListResponse {
pub data: Vec<Model>,
/// Opaque cursor to pass to the next call to continue after the last item.
/// if None, there are no more items to return.
/// If None, there are no more items to return.
pub next_cursor: Option<String>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct UploadFeedbackParams {
#[ts(export_to = "v2/")]
pub struct FeedbackUploadParams {
pub classification: String,
pub reason: Option<String>,
pub conversation_id: Option<ConversationId>,
@@ -118,12 +254,446 @@ pub struct UploadFeedbackParams {
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct UploadFeedbackResponse {
#[ts(export_to = "v2/")]
pub struct FeedbackUploadResponse {
pub thread_id: String,
}
// === Threads, Turns, and Items ===
// Thread APIs
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Default, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct ThreadStartParams {
pub model: Option<String>,
pub model_provider: Option<String>,
pub cwd: Option<String>,
pub approval_policy: Option<AskForApproval>,
pub sandbox: Option<SandboxMode>,
pub config: Option<HashMap<String, serde_json::Value>>,
pub base_instructions: Option<String>,
pub developer_instructions: Option<String>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct ThreadStartResponse {
pub thread: Thread,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct ThreadResumeParams {
pub thread_id: String,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
pub struct AccountUpdatedNotification {
pub auth_method: Option<AuthMode>,
#[ts(export_to = "v2/")]
pub struct ThreadResumeResponse {
pub thread: Thread,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct ThreadArchiveParams {
pub thread_id: String,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct ThreadArchiveResponse {}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct ThreadListParams {
/// Opaque pagination cursor returned by a previous call.
pub cursor: Option<String>,
/// Optional page size; defaults to a reasonable server-side value.
pub limit: Option<u32>,
/// Optional provider filter; when set, only sessions recorded under these
/// providers are returned. When present but empty, includes all providers.
pub model_providers: Option<Vec<String>>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct ThreadListResponse {
pub data: Vec<Thread>,
/// Opaque cursor to pass to the next call to continue after the last item.
/// If None, there are no more items to return.
pub next_cursor: Option<String>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct ThreadCompactParams {
pub thread_id: String,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct ThreadCompactResponse {}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct Thread {
pub id: String,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct AccountUpdatedNotification {
pub auth_mode: Option<AuthMode>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct Turn {
pub id: String,
pub items: Vec<ThreadItem>,
pub status: TurnStatus,
pub error: Option<TurnError>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct TurnError {
pub message: String,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub enum TurnStatus {
Completed,
Interrupted,
Failed,
InProgress,
}
// Turn APIs
#[derive(Serialize, Deserialize, Debug, Default, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct TurnStartParams {
pub thread_id: String,
pub input: Vec<UserInput>,
/// Override the working directory for this turn and subsequent turns.
pub cwd: Option<PathBuf>,
/// Override the approval policy for this turn and subsequent turns.
pub approval_policy: Option<AskForApproval>,
/// Override the sandbox policy for this turn and subsequent turns.
pub sandbox_policy: Option<SandboxPolicy>,
/// Override the model for this turn and subsequent turns.
pub model: Option<String>,
/// Override the reasoning effort for this turn and subsequent turns.
pub effort: Option<ReasoningEffort>,
/// Override the reasoning summary for this turn and subsequent turns.
pub summary: Option<ReasoningSummary>,
}
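// Illustrative sketch (not part of the diff): a `turn/start` request that
// overrides the model for this and subsequent turns, per the doc comments
// above. The thread id and request id are placeholders.
fn example_turn_start_request() -> serde_json::Value {
    serde_json::json!({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "turn/start",
        "params": {
            "threadId": "thread-123",                       // camelCase on the wire
            "input": [{ "type": "text", "text": "Hello" }], // UserInput::Text
            "model": "gpt-5"                                // optional override
        }
    })
}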
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct TurnStartResponse {
pub turn: Turn,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct TurnInterruptParams {
pub thread_id: String,
pub turn_id: String,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct TurnInterruptResponse {}
// User input types
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(tag = "type", rename_all = "camelCase")]
#[ts(tag = "type")]
#[ts(export_to = "v2/")]
pub enum UserInput {
Text { text: String },
Image { url: String },
LocalImage { path: PathBuf },
}
impl UserInput {
pub fn into_core(self) -> CoreUserInput {
match self {
UserInput::Text { text } => CoreUserInput::Text { text },
UserInput::Image { url } => CoreUserInput::Image { image_url: url },
UserInput::LocalImage { path } => CoreUserInput::LocalImage { path },
}
}
}
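// Illustrative sketch (not part of the diff): the internally tagged encoding
// produced by the `#[serde(tag = "type", rename_all = "camelCase")]` attributes above.
fn example_user_input_wire_format() {
    let input = UserInput::LocalImage {
        path: std::path::PathBuf::from("/tmp/screenshot.png"),
    };
    assert_eq!(
        serde_json::to_value(&input).expect("UserInput serializes"),
        serde_json::json!({ "type": "localImage", "path": "/tmp/screenshot.png" })
    );
}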
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(tag = "type", rename_all = "camelCase")]
#[ts(tag = "type")]
#[ts(export_to = "v2/")]
pub enum ThreadItem {
UserMessage {
id: String,
content: Vec<UserInput>,
},
AgentMessage {
id: String,
text: String,
},
Reasoning {
id: String,
text: String,
},
CommandExecution {
id: String,
command: String,
aggregated_output: String,
exit_code: Option<i32>,
status: CommandExecutionStatus,
duration_ms: Option<i64>,
},
FileChange {
id: String,
changes: Vec<FileUpdateChange>,
status: PatchApplyStatus,
},
McpToolCall {
id: String,
server: String,
tool: String,
status: McpToolCallStatus,
arguments: JsonValue,
result: Option<McpToolCallResult>,
error: Option<McpToolCallError>,
},
WebSearch {
id: String,
query: String,
},
TodoList {
id: String,
items: Vec<TodoItem>,
},
ImageView {
id: String,
path: String,
},
CodeReview {
id: String,
review: String,
},
}
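// Illustrative sketch (not part of the diff): a client folding selected item
// variants into display text; `emit` is a hypothetical sink, not protocol.
fn render_item(item: &ThreadItem, emit: &mut impl FnMut(String)) {
    match item {
        ThreadItem::AgentMessage { text, .. } => emit(text.clone()),
        ThreadItem::CommandExecution { command, exit_code, .. } => {
            emit(format!("$ {command} (exit: {exit_code:?})"));
        }
        ThreadItem::WebSearch { query, .. } => emit(format!("searched: {query}")),
        _ => {} // remaining variants elided here
    }
}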
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub enum CommandExecutionStatus {
InProgress,
Completed,
Failed,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct FileUpdateChange {
pub path: String,
pub kind: PatchChangeKind,
pub diff: String,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub enum PatchChangeKind {
Add,
Delete,
Update,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub enum PatchApplyStatus {
Completed,
Failed,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub enum McpToolCallStatus {
InProgress,
Completed,
Failed,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct McpToolCallResult {
pub content: Vec<McpContentBlock>,
pub structured_content: JsonValue,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct McpToolCallError {
pub message: String,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct TodoItem {
pub id: String,
pub text: String,
pub completed: bool,
}
// === Server Notifications ===
// Thread/Turn lifecycle notifications and item progress events
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct ThreadStartedNotification {
pub thread: Thread,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct TurnStartedNotification {
pub turn: Turn,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct Usage {
pub input_tokens: i32,
pub cached_input_tokens: i32,
pub output_tokens: i32,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct TurnCompletedNotification {
pub turn: Turn,
// TODO: should usage be stored on the Turn object and returned from there instead?
pub usage: Usage,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct ItemStartedNotification {
pub item: ThreadItem,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct ItemCompletedNotification {
pub item: ThreadItem,
}
// Item-specific progress notifications
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct AgentMessageDeltaNotification {
pub item_id: String,
pub delta: String,
}
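// Illustrative sketch (not part of the diff): deltas are keyed by `item_id`,
// so a client can fold them into the growing message text; the final text is
// confirmed by the corresponding ItemCompleted notification.
fn apply_agent_message_delta(
    buffers: &mut std::collections::HashMap<String, String>,
    notification: AgentMessageDeltaNotification,
) {
    buffers
        .entry(notification.item_id)
        .or_default()
        .push_str(&notification.delta);
}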
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct CommandExecutionOutputDeltaNotification {
pub item_id: String,
pub delta: String,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct McpToolCallProgressNotification {
pub item_id: String,
pub message: String,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct AccountRateLimitsUpdatedNotification {
pub rate_limits: RateLimitSnapshot,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct RateLimitSnapshot {
pub primary: Option<RateLimitWindow>,
pub secondary: Option<RateLimitWindow>,
}
impl From<CoreRateLimitSnapshot> for RateLimitSnapshot {
fn from(value: CoreRateLimitSnapshot) -> Self {
Self {
primary: value.primary.map(RateLimitWindow::from),
secondary: value.secondary.map(RateLimitWindow::from),
}
}
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct RateLimitWindow {
pub used_percent: i32,
pub window_duration_mins: Option<i64>,
pub resets_at: Option<i64>,
}
impl From<CoreRateLimitWindow> for RateLimitWindow {
fn from(value: CoreRateLimitWindow) -> Self {
Self {
used_percent: value.used_percent.round() as i32,
window_duration_mins: value.window_minutes,
resets_at: value.resets_at,
}
}
}
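// Illustrative sketch (not part of the diff): the conversion above rounds the
// core f64 percentage to i32; `f64::round` rounds half away from zero.
fn rounding_examples() {
    assert_eq!(42.4_f64.round() as i32, 42);
    assert_eq!(42.5_f64.round() as i32, 43);
}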
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct AccountLoginCompletedNotification {
// Use plain String for identifiers to avoid TS/JSON Schema quirks around uuid-specific types.
// Convert to/from UUIDs at the application layer as needed.
pub login_id: Option<String>,
pub success: bool,
pub error: Option<String>,
}
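// Illustrative sketch (not part of the diff): per the comment above,
// applications that need a typed UUID can convert the wire String at the edge.
fn parse_login_id(n: &AccountLoginCompletedNotification) -> Option<uuid::Uuid> {
    n.login_id.as_deref().and_then(|s| uuid::Uuid::parse_str(s).ok())
}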

File diff suppressed because it is too large


@@ -1,11 +1,12 @@
use codex_app_server_protocol::AuthMode;
use codex_app_server_protocol::Model;
use codex_app_server_protocol::ReasoningEffortOption;
use codex_common::model_presets::ModelPreset;
use codex_common::model_presets::ReasoningEffortPreset;
use codex_common::model_presets::builtin_model_presets;
pub fn supported_models() -> Vec<Model> {
builtin_model_presets(None)
pub fn supported_models(auth_mode: Option<AuthMode>) -> Vec<Model> {
builtin_model_presets(auth_mode)
.into_iter()
.map(model_from_preset)
.collect()


@@ -141,11 +141,13 @@ pub(crate) struct OutgoingError {
#[cfg(test)]
mod tests {
use codex_app_server_protocol::AccountLoginCompletedNotification;
use codex_app_server_protocol::AccountRateLimitsUpdatedNotification;
use codex_app_server_protocol::AccountUpdatedNotification;
use codex_app_server_protocol::AuthMode;
use codex_app_server_protocol::LoginChatGptCompleteNotification;
use codex_protocol::protocol::RateLimitSnapshot;
use codex_protocol::protocol::RateLimitWindow;
use codex_app_server_protocol::RateLimitSnapshot;
use codex_app_server_protocol::RateLimitWindow;
use pretty_assertions::assert_eq;
use serde_json::json;
use uuid::Uuid;
@@ -178,27 +180,57 @@ mod tests {
}
#[test]
fn verify_account_rate_limits_notification_serialization() {
let notification = ServerNotification::AccountRateLimitsUpdated(RateLimitSnapshot {
primary: Some(RateLimitWindow {
used_percent: 25.0,
window_minutes: Some(15),
resets_at: Some(123),
fn verify_account_login_completed_notification_serialization() {
let notification =
ServerNotification::AccountLoginCompleted(AccountLoginCompletedNotification {
login_id: Some(Uuid::nil().to_string()),
success: true,
error: None,
});
let jsonrpc_notification = OutgoingMessage::AppServerNotification(notification);
assert_eq!(
json!({
"method": "account/login/completed",
"params": {
"loginId": Uuid::nil().to_string(),
"success": true,
"error": null,
},
}),
secondary: None,
});
serde_json::to_value(jsonrpc_notification)
.expect("ensure the notification serializes correctly"),
"ensure the notification serializes correctly"
);
}
#[test]
fn verify_account_rate_limits_notification_serialization() {
let notification =
ServerNotification::AccountRateLimitsUpdated(AccountRateLimitsUpdatedNotification {
rate_limits: RateLimitSnapshot {
primary: Some(RateLimitWindow {
used_percent: 25,
window_duration_mins: Some(15),
resets_at: Some(123),
}),
secondary: None,
},
});
let jsonrpc_notification = OutgoingMessage::AppServerNotification(notification);
assert_eq!(
json!({
"method": "account/rateLimits/updated",
"params": {
"primary": {
"used_percent": 25.0,
"window_minutes": 15,
"resets_at": 123,
},
"secondary": null,
"rateLimits": {
"primary": {
"usedPercent": 25,
"windowDurationMins": 15,
"resetsAt": 123
},
"secondary": null
}
},
}),
serde_json::to_value(jsonrpc_notification)
@@ -210,7 +242,7 @@ mod tests {
#[test]
fn verify_account_updated_notification_serialization() {
let notification = ServerNotification::AccountUpdated(AccountUpdatedNotification {
auth_method: Some(AuthMode::ApiKey),
auth_mode: Some(AuthMode::ApiKey),
});
let jsonrpc_notification = OutgoingMessage::AppServerNotification(notification);
@@ -218,7 +250,7 @@ mod tests {
json!({
"method": "account/updated",
"params": {
"authMethod": "apikey"
"authMode": "apikey"
},
}),
serde_json::to_value(jsonrpc_notification)


@@ -13,6 +13,7 @@ base64 = { workspace = true }
chrono = { workspace = true }
codex-app-server-protocol = { workspace = true }
codex-core = { workspace = true }
codex-protocol = { workspace = true }
serde = { workspace = true }
serde_json = { workspace = true }
tokio = { workspace = true, features = [
@@ -21,4 +22,5 @@ tokio = { workspace = true, features = [
"process",
"rt-multi-thread",
] }
uuid = { workspace = true }
wiremock = { workspace = true }


@@ -2,6 +2,7 @@ mod auth_fixtures;
mod mcp_process;
mod mock_model_server;
mod responses;
mod rollout;
pub use auth_fixtures::ChatGptAuthFixture;
pub use auth_fixtures::ChatGptIdTokenClaims;
@@ -10,9 +11,11 @@ pub use auth_fixtures::write_chatgpt_auth;
use codex_app_server_protocol::JSONRPCResponse;
pub use mcp_process::McpProcess;
pub use mock_model_server::create_mock_chat_completions_server;
pub use mock_model_server::create_mock_chat_completions_server_unchecked;
pub use responses::create_apply_patch_sse_response;
pub use responses::create_final_assistant_message_sse_response;
pub use responses::create_shell_sse_response;
pub use rollout::create_fake_rollout;
use serde::de::DeserializeOwned;
pub fn to_response<T: DeserializeOwned>(response: JSONRPCResponse) -> anyhow::Result<T> {


@@ -14,30 +14,36 @@ use anyhow::Context;
use assert_cmd::prelude::*;
use codex_app_server_protocol::AddConversationListenerParams;
use codex_app_server_protocol::ArchiveConversationParams;
use codex_app_server_protocol::CancelLoginAccountParams;
use codex_app_server_protocol::CancelLoginChatGptParams;
use codex_app_server_protocol::ClientInfo;
use codex_app_server_protocol::ClientNotification;
use codex_app_server_protocol::FeedbackUploadParams;
use codex_app_server_protocol::GetAuthStatusParams;
use codex_app_server_protocol::InitializeParams;
use codex_app_server_protocol::InterruptConversationParams;
use codex_app_server_protocol::ListConversationsParams;
use codex_app_server_protocol::ListModelsParams;
use codex_app_server_protocol::LoginApiKeyParams;
use codex_app_server_protocol::NewConversationParams;
use codex_app_server_protocol::RemoveConversationListenerParams;
use codex_app_server_protocol::ResumeConversationParams;
use codex_app_server_protocol::SendUserMessageParams;
use codex_app_server_protocol::SendUserTurnParams;
use codex_app_server_protocol::ServerRequest;
use codex_app_server_protocol::SetDefaultModelParams;
use codex_app_server_protocol::UploadFeedbackParams;
use codex_app_server_protocol::JSONRPCError;
use codex_app_server_protocol::JSONRPCMessage;
use codex_app_server_protocol::JSONRPCNotification;
use codex_app_server_protocol::JSONRPCRequest;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::ListConversationsParams;
use codex_app_server_protocol::LoginApiKeyParams;
use codex_app_server_protocol::ModelListParams;
use codex_app_server_protocol::NewConversationParams;
use codex_app_server_protocol::RemoveConversationListenerParams;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::ResumeConversationParams;
use codex_app_server_protocol::SendUserMessageParams;
use codex_app_server_protocol::SendUserTurnParams;
use codex_app_server_protocol::ServerRequest;
use codex_app_server_protocol::SetDefaultModelParams;
use codex_app_server_protocol::ThreadArchiveParams;
use codex_app_server_protocol::ThreadListParams;
use codex_app_server_protocol::ThreadResumeParams;
use codex_app_server_protocol::ThreadStartParams;
use codex_app_server_protocol::TurnInterruptParams;
use codex_app_server_protocol::TurnStartParams;
use std::process::Command as StdCommand;
use tokio::process::Command;
@@ -244,9 +250,9 @@ impl McpProcess {
}
/// Send a `feedback/upload` JSON-RPC request.
pub async fn send_upload_feedback_request(
pub async fn send_feedback_upload_request(
&mut self,
params: UploadFeedbackParams,
params: FeedbackUploadParams,
) -> anyhow::Result<i64> {
let params = Some(serde_json::to_value(params)?);
self.send_request("feedback/upload", params).await
@@ -275,10 +281,46 @@ impl McpProcess {
self.send_request("listConversations", params).await
}
/// Send a `thread/start` JSON-RPC request.
pub async fn send_thread_start_request(
&mut self,
params: ThreadStartParams,
) -> anyhow::Result<i64> {
let params = Some(serde_json::to_value(params)?);
self.send_request("thread/start", params).await
}
/// Send a `thread/resume` JSON-RPC request.
pub async fn send_thread_resume_request(
&mut self,
params: ThreadResumeParams,
) -> anyhow::Result<i64> {
let params = Some(serde_json::to_value(params)?);
self.send_request("thread/resume", params).await
}
/// Send a `thread/archive` JSON-RPC request.
pub async fn send_thread_archive_request(
&mut self,
params: ThreadArchiveParams,
) -> anyhow::Result<i64> {
let params = Some(serde_json::to_value(params)?);
self.send_request("thread/archive", params).await
}
/// Send a `thread/list` JSON-RPC request.
pub async fn send_thread_list_request(
&mut self,
params: ThreadListParams,
) -> anyhow::Result<i64> {
let params = Some(serde_json::to_value(params)?);
self.send_request("thread/list", params).await
}
/// Send a `model/list` JSON-RPC request.
pub async fn send_list_models_request(
&mut self,
params: ListModelsParams,
params: ModelListParams,
) -> anyhow::Result<i64> {
let params = Some(serde_json::to_value(params)?);
self.send_request("model/list", params).await
@@ -307,6 +349,24 @@ impl McpProcess {
self.send_request("loginChatGpt", None).await
}
/// Send a `turn/start` JSON-RPC request (v2).
pub async fn send_turn_start_request(
&mut self,
params: TurnStartParams,
) -> anyhow::Result<i64> {
let params = Some(serde_json::to_value(params)?);
self.send_request("turn/start", params).await
}
/// Send a `turn/interrupt` JSON-RPC request (v2).
pub async fn send_turn_interrupt_request(
&mut self,
params: TurnInterruptParams,
) -> anyhow::Result<i64> {
let params = Some(serde_json::to_value(params)?);
self.send_request("turn/interrupt", params).await
}
/// Send a `cancelLoginChatGpt` JSON-RPC request.
pub async fn send_cancel_login_chat_gpt_request(
&mut self,
@@ -326,6 +386,35 @@ impl McpProcess {
self.send_request("account/logout", None).await
}
/// Send an `account/login/start` JSON-RPC request for API key login.
pub async fn send_login_account_api_key_request(
&mut self,
api_key: &str,
) -> anyhow::Result<i64> {
let params = serde_json::json!({
"type": "apiKey",
"apiKey": api_key,
});
self.send_request("account/login/start", Some(params)).await
}
/// Send an `account/login/start` JSON-RPC request for ChatGPT login.
pub async fn send_login_account_chatgpt_request(&mut self) -> anyhow::Result<i64> {
let params = serde_json::json!({
"type": "chatgpt"
});
self.send_request("account/login/start", Some(params)).await
}
/// Send an `account/login/cancel` JSON-RPC request.
pub async fn send_cancel_login_account_request(
&mut self,
params: CancelLoginAccountParams,
) -> anyhow::Result<i64> {
let params = Some(serde_json::to_value(params)?);
self.send_request("account/login/cancel", params).await
}
/// Send a `fuzzyFileSearch` JSON-RPC request.
pub async fn send_fuzzy_file_search_request(
&mut self,


@@ -29,6 +29,25 @@ pub async fn create_mock_chat_completions_server(responses: Vec<String>) -> Mock
server
}
/// Same as `create_mock_chat_completions_server` but does not enforce an
/// expectation on the number of calls.
pub async fn create_mock_chat_completions_server_unchecked(responses: Vec<String>) -> MockServer {
let server = MockServer::start().await;
let seq_responder = SeqResponder {
num_calls: AtomicUsize::new(0),
responses,
};
Mock::given(method("POST"))
.and(path("/v1/chat/completions"))
.respond_with(seq_responder)
.mount(&server)
.await;
server
}
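// Illustrative note (not part of the diff): prefer the unchecked variant when
// the number of model round trips is nondeterministic, e.g. a turn may be
// interrupted before every queued response is consumed, so a strict
// call-count expectation would make the test flaky:
//
//     let server = create_mock_chat_completions_server_unchecked(responses).await;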
struct SeqResponder {
num_calls: AtomicUsize,
responses: Vec<String>,


@@ -0,0 +1,82 @@
use anyhow::Result;
use codex_protocol::ConversationId;
use codex_protocol::protocol::SessionMeta;
use codex_protocol::protocol::SessionSource;
use serde_json::json;
use std::fs;
use std::path::Path;
use std::path::PathBuf;
use uuid::Uuid;
/// Create a minimal rollout file under `CODEX_HOME/sessions/YYYY/MM/DD/`.
///
/// - `filename_ts` is the filename timestamp component in `YYYY-MM-DDThh-mm-ss` format.
/// - `meta_rfc3339` is the envelope timestamp used in JSON lines.
/// - `preview` is the user message preview text.
/// - `model_provider` optionally sets the provider in the session meta payload.
///
/// Returns the generated conversation/session UUID as a string.
pub fn create_fake_rollout(
codex_home: &Path,
filename_ts: &str,
meta_rfc3339: &str,
preview: &str,
model_provider: Option<&str>,
) -> Result<String> {
let uuid = Uuid::new_v4();
let uuid_str = uuid.to_string();
let conversation_id = ConversationId::from_string(&uuid_str)?;
// sessions/YYYY/MM/DD derived from filename_ts (YYYY-MM-DDThh-mm-ss)
let year = &filename_ts[0..4];
let month = &filename_ts[5..7];
let day = &filename_ts[8..10];
let dir = codex_home.join("sessions").join(year).join(month).join(day);
fs::create_dir_all(&dir)?;
let file_path = dir.join(format!("rollout-{filename_ts}-{uuid}.jsonl"));
// Build JSONL lines
let payload = serde_json::to_value(SessionMeta {
id: conversation_id,
timestamp: meta_rfc3339.to_string(),
cwd: PathBuf::from("/"),
originator: "codex".to_string(),
cli_version: "0.0.0".to_string(),
instructions: None,
source: SessionSource::Cli,
model_provider: model_provider.map(str::to_string),
})?;
let lines = [
json!({
"timestamp": meta_rfc3339,
"type": "session_meta",
"payload": payload
})
.to_string(),
json!({
"timestamp": meta_rfc3339,
"type":"response_item",
"payload": {
"type":"message",
"role":"user",
"content":[{"type":"input_text","text": preview}]
}
})
.to_string(),
json!({
"timestamp": meta_rfc3339,
"type":"event_msg",
"payload": {
"type":"user_message",
"message": preview,
"kind": "plain"
}
})
.to_string(),
];
fs::write(file_path, lines.join("\n") + "\n")?;
Ok(uuid_str)
}
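// Illustrative sketch (not part of the diff): a typical call, mirroring the
// thread/list tests later in this diff; returns the session UUID as a String.
fn example_fake_rollout(codex_home: &Path) -> Result<String> {
    create_fake_rollout(
        codex_home,
        "2025-01-02T12-00-00",  // filename timestamp component
        "2025-01-02T12:00:00Z", // RFC 3339 envelope timestamp
        "Hello",                // user message preview
        Some("mock_provider"),  // provider recorded in session meta
    )
}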


@@ -12,7 +12,7 @@ use std::path::Path;
use tempfile::TempDir;
use tokio::time::timeout;
const DEFAULT_READ_TIMEOUT: std::time::Duration = std::time::Duration::from_secs(10);
const DEFAULT_READ_TIMEOUT: std::time::Duration = std::time::Duration::from_secs(20);
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn archive_conversation_moves_rollout_into_archived_directory() -> Result<()> {


@@ -146,7 +146,7 @@ fn create_config_toml(codex_home: &Path, server_uri: String) -> std::io::Result<
r#"
model = "mock-model"
approval_policy = "never"
sandbox_mode = "danger-full-access"
sandbox_mode = "read-only"
model_provider = "mock_provider"


@@ -1,5 +1,6 @@
use anyhow::Result;
use app_test_support::McpProcess;
use app_test_support::create_fake_rollout;
use app_test_support::to_response;
use codex_app_server_protocol::JSONRPCNotification;
use codex_app_server_protocol::JSONRPCResponse;
@@ -15,12 +16,8 @@ use codex_core::protocol::EventMsg;
use codex_protocol::models::ContentItem;
use codex_protocol::models::ResponseItem;
use pretty_assertions::assert_eq;
use serde_json::json;
use std::fs;
use std::path::Path;
use tempfile::TempDir;
use tokio::time::timeout;
use uuid::Uuid;
const DEFAULT_READ_TIMEOUT: std::time::Duration = std::time::Duration::from_secs(10);
@@ -357,70 +354,3 @@ async fn test_list_and_resume_conversations() -> Result<()> {
Ok(())
}
fn create_fake_rollout(
codex_home: &Path,
filename_ts: &str,
meta_rfc3339: &str,
preview: &str,
model_provider: Option<&str>,
) -> Result<()> {
let uuid = Uuid::new_v4();
// sessions/YYYY/MM/DD/ derived from filename_ts (YYYY-MM-DDThh-mm-ss)
let year = &filename_ts[0..4];
let month = &filename_ts[5..7];
let day = &filename_ts[8..10];
let dir = codex_home.join("sessions").join(year).join(month).join(day);
fs::create_dir_all(&dir)?;
let file_path = dir.join(format!("rollout-{filename_ts}-{uuid}.jsonl"));
let mut lines = Vec::new();
// Meta line with timestamp (flattened meta in payload for new schema)
let mut payload = json!({
"id": uuid,
"timestamp": meta_rfc3339,
"cwd": "/",
"originator": "codex",
"cli_version": "0.0.0",
"instructions": null,
});
if let Some(provider) = model_provider {
payload["model_provider"] = json!(provider);
}
lines.push(
json!({
"timestamp": meta_rfc3339,
"type": "session_meta",
"payload": payload
})
.to_string(),
);
// Minimal user message entry as a persisted response item (with envelope timestamp)
lines.push(
json!({
"timestamp": meta_rfc3339,
"type":"response_item",
"payload": {
"type":"message",
"role":"user",
"content":[{"type":"input_text","text": preview}]
}
})
.to_string(),
);
// Add a matching user message event line to satisfy filters
lines.push(
json!({
"timestamp": meta_rfc3339,
"type":"event_msg",
"payload": {
"type":"user_message",
"message": preview,
"kind": "plain"
}
})
.to_string(),
);
fs::write(file_path, lines.join("\n") + "\n")?;
Ok(())
}


@@ -6,9 +6,9 @@ use app_test_support::McpProcess;
use app_test_support::to_response;
use codex_app_server_protocol::JSONRPCError;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::ListModelsParams;
use codex_app_server_protocol::ListModelsResponse;
use codex_app_server_protocol::Model;
use codex_app_server_protocol::ModelListParams;
use codex_app_server_protocol::ModelListResponse;
use codex_app_server_protocol::ReasoningEffortOption;
use codex_app_server_protocol::RequestId;
use codex_protocol::config_types::ReasoningEffort;
@@ -27,8 +27,8 @@ async fn list_models_returns_all_models_with_large_limit() -> Result<()> {
timeout(DEFAULT_TIMEOUT, mcp.initialize()).await??;
let request_id = mcp
.send_list_models_request(ListModelsParams {
page_size: Some(100),
.send_list_models_request(ModelListParams {
limit: Some(100),
cursor: None,
})
.await?;
@@ -39,14 +39,17 @@ async fn list_models_returns_all_models_with_large_limit() -> Result<()> {
)
.await??;
let ListModelsResponse { items, next_cursor } = to_response::<ListModelsResponse>(response)?;
let ModelListResponse {
data: items,
next_cursor,
} = to_response::<ModelListResponse>(response)?;
let expected_models = vec![
Model {
id: "gpt-5-codex".to_string(),
model: "gpt-5-codex".to_string(),
display_name: "gpt-5-codex".to_string(),
description: "Optimized for coding tasks with many tools.".to_string(),
description: "Optimized for codex.".to_string(),
supported_reasoning_efforts: vec![
ReasoningEffortOption {
reasoning_effort: ReasoningEffort::Low,
@@ -111,8 +114,8 @@ async fn list_models_pagination_works() -> Result<()> {
timeout(DEFAULT_TIMEOUT, mcp.initialize()).await??;
let first_request = mcp
.send_list_models_request(ListModelsParams {
page_size: Some(1),
.send_list_models_request(ModelListParams {
limit: Some(1),
cursor: None,
})
.await?;
@@ -123,18 +126,18 @@ async fn list_models_pagination_works() -> Result<()> {
)
.await??;
let ListModelsResponse {
items: first_items,
let ModelListResponse {
data: first_items,
next_cursor: first_cursor,
} = to_response::<ListModelsResponse>(first_response)?;
} = to_response::<ModelListResponse>(first_response)?;
assert_eq!(first_items.len(), 1);
assert_eq!(first_items[0].id, "gpt-5-codex");
let next_cursor = first_cursor.ok_or_else(|| anyhow!("cursor for second page"))?;
let second_request = mcp
.send_list_models_request(ListModelsParams {
page_size: Some(1),
.send_list_models_request(ModelListParams {
limit: Some(1),
cursor: Some(next_cursor.clone()),
})
.await?;
@@ -145,10 +148,10 @@ async fn list_models_pagination_works() -> Result<()> {
)
.await??;
let ListModelsResponse {
items: second_items,
let ModelListResponse {
data: second_items,
next_cursor: second_cursor,
} = to_response::<ListModelsResponse>(second_response)?;
} = to_response::<ModelListResponse>(second_response)?;
assert_eq!(second_items.len(), 1);
assert_eq!(second_items[0].id, "gpt-5");
@@ -164,8 +167,8 @@ async fn list_models_rejects_invalid_cursor() -> Result<()> {
timeout(DEFAULT_TIMEOUT, mcp.initialize()).await??;
let request_id = mcp
.send_list_models_request(ListModelsParams {
page_size: None,
.send_list_models_request(ModelListParams {
limit: None,
cursor: Some("invalid".to_string()),
})
.await?;


@@ -7,10 +7,10 @@ use codex_app_server_protocol::GetAccountRateLimitsResponse;
use codex_app_server_protocol::JSONRPCError;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::LoginApiKeyParams;
use codex_app_server_protocol::RateLimitSnapshot;
use codex_app_server_protocol::RateLimitWindow;
use codex_app_server_protocol::RequestId;
use codex_core::auth::AuthCredentialsStoreMode;
use codex_protocol::protocol::RateLimitSnapshot;
use codex_protocol::protocol::RateLimitWindow;
use pretty_assertions::assert_eq;
use serde_json::json;
use std::path::Path;
@@ -143,13 +143,13 @@ async fn get_account_rate_limits_returns_snapshot() -> Result<()> {
let expected = GetAccountRateLimitsResponse {
rate_limits: RateLimitSnapshot {
primary: Some(RateLimitWindow {
used_percent: 42.0,
window_minutes: Some(60),
used_percent: 42,
window_duration_mins: Some(60),
resets_at: Some(primary_reset_timestamp),
}),
secondary: Some(RateLimitWindow {
used_percent: 5.0,
window_minutes: Some(1440),
used_percent: 5,
window_duration_mins: Some(1440),
resets_at: Some(secondary_reset_timestamp),
}),
},


@@ -2,30 +2,52 @@ use anyhow::Result;
use anyhow::bail;
use app_test_support::McpProcess;
use app_test_support::to_response;
use codex_app_server_protocol::AuthMode;
use codex_app_server_protocol::CancelLoginAccountParams;
use codex_app_server_protocol::CancelLoginAccountResponse;
use codex_app_server_protocol::GetAuthStatusParams;
use codex_app_server_protocol::GetAuthStatusResponse;
use codex_app_server_protocol::JSONRPCError;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::LoginAccountResponse;
use codex_app_server_protocol::LogoutAccountResponse;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::ServerNotification;
use codex_core::auth::AuthCredentialsStoreMode;
use codex_login::login_with_api_key;
use pretty_assertions::assert_eq;
use serial_test::serial;
use std::path::Path;
use std::time::Duration;
use tempfile::TempDir;
use tokio::time::timeout;
const DEFAULT_READ_TIMEOUT: std::time::Duration = std::time::Duration::from_secs(10);
// Helper to create a minimal config.toml for the app server
fn create_config_toml(codex_home: &Path) -> std::io::Result<()> {
fn create_config_toml(
codex_home: &Path,
forced_method: Option<&str>,
forced_workspace_id: Option<&str>,
) -> std::io::Result<()> {
let config_toml = codex_home.join("config.toml");
std::fs::write(
config_toml,
let forced_line = if let Some(method) = forced_method {
format!("forced_login_method = \"{method}\"\n")
} else {
String::new()
};
let forced_workspace_line = if let Some(ws) = forced_workspace_id {
format!("forced_chatgpt_workspace_id = \"{ws}\"\n")
} else {
String::new()
};
let contents = format!(
r#"
model = "mock-model"
approval_policy = "never"
sandbox_mode = "danger-full-access"
{forced_line}
{forced_workspace_line}
model_provider = "mock_provider"
@@ -35,14 +57,15 @@ base_url = "http://127.0.0.1:0/v1"
wire_api = "chat"
request_max_retries = 0
stream_max_retries = 0
"#,
)
"#
);
std::fs::write(config_toml, contents)
}
#[tokio::test]
async fn logout_account_removes_auth_and_notifies() -> Result<()> {
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path())?;
create_config_toml(codex_home.path(), None, None)?;
login_with_api_key(
codex_home.path(),
@@ -72,7 +95,7 @@ async fn logout_account_removes_auth_and_notifies() -> Result<()> {
bail!("unexpected notification: {parsed:?}");
};
assert!(
payload.auth_method.is_none(),
payload.auth_mode.is_none(),
"auth_method should be None after logout"
);
@@ -97,3 +120,190 @@ async fn logout_account_removes_auth_and_notifies() -> Result<()> {
assert_eq!(status.auth_token, None);
Ok(())
}
#[tokio::test]
async fn login_account_api_key_succeeds_and_notifies() -> Result<()> {
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), None, None)?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
let req_id = mcp
.send_login_account_api_key_request("sk-test-key")
.await?;
let resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(req_id)),
)
.await??;
let login: LoginAccountResponse = to_response(resp)?;
assert_eq!(login, LoginAccountResponse::ApiKey {});
let note = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("account/login/completed"),
)
.await??;
let parsed: ServerNotification = note.try_into()?;
let ServerNotification::AccountLoginCompleted(payload) = parsed else {
bail!("unexpected notification: {parsed:?}");
};
pretty_assertions::assert_eq!(payload.login_id, None);
pretty_assertions::assert_eq!(payload.success, true);
pretty_assertions::assert_eq!(payload.error, None);
let note = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("account/updated"),
)
.await??;
let parsed: ServerNotification = note.try_into()?;
let ServerNotification::AccountUpdated(payload) = parsed else {
bail!("unexpected notification: {parsed:?}");
};
pretty_assertions::assert_eq!(payload.auth_mode, Some(AuthMode::ApiKey));
assert!(codex_home.path().join("auth.json").exists());
Ok(())
}
#[tokio::test]
async fn login_account_api_key_rejected_when_forced_chatgpt() -> Result<()> {
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), Some("chatgpt"), None)?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
let request_id = mcp
.send_login_account_api_key_request("sk-test-key")
.await?;
let err: JSONRPCError = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_error_message(RequestId::Integer(request_id)),
)
.await??;
assert_eq!(
err.error.message,
"API key login is disabled. Use ChatGPT login instead."
);
Ok(())
}
#[tokio::test]
async fn login_account_chatgpt_rejected_when_forced_api() -> Result<()> {
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), Some("api"), None)?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
let request_id = mcp.send_login_account_chatgpt_request().await?;
let err: JSONRPCError = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_error_message(RequestId::Integer(request_id)),
)
.await??;
assert_eq!(
err.error.message,
"ChatGPT login is disabled. Use API key login instead."
);
Ok(())
}
#[tokio::test]
// Serialize tests that launch the login server since it binds to a fixed port.
#[serial(login_port)]
async fn login_account_chatgpt_start() -> Result<()> {
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), None, None)?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
let request_id = mcp.send_login_account_chatgpt_request().await?;
let resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(request_id)),
)
.await??;
let login: LoginAccountResponse = to_response(resp)?;
let LoginAccountResponse::Chatgpt { login_id, auth_url } = login else {
bail!("unexpected login response: {login:?}");
};
assert!(
auth_url.contains("redirect_uri=http%3A%2F%2Flocalhost"),
"auth_url should contain a redirect_uri to localhost"
);
let cancel_id = mcp
.send_cancel_login_account_request(CancelLoginAccountParams {
login_id: login_id.clone(),
})
.await?;
let cancel_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(cancel_id)),
)
.await??;
let _ok: CancelLoginAccountResponse = to_response(cancel_resp)?;
let note = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("account/login/completed"),
)
.await??;
let parsed: ServerNotification = note.try_into()?;
let ServerNotification::AccountLoginCompleted(payload) = parsed else {
bail!("unexpected notification: {parsed:?}");
};
pretty_assertions::assert_eq!(payload.login_id, Some(login_id));
pretty_assertions::assert_eq!(payload.success, false);
assert!(
payload.error.is_some(),
"expected a non-empty error on cancel"
);
let maybe_updated = timeout(
Duration::from_millis(500),
mcp.read_stream_until_notification_message("account/updated"),
)
.await;
assert!(
maybe_updated.is_err(),
"account/updated should not be emitted when login is cancelled"
);
Ok(())
}
#[tokio::test]
// Serialize tests that launch the login server since it binds to a fixed port.
#[serial(login_port)]
async fn login_account_chatgpt_includes_forced_workspace_query_param() -> Result<()> {
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), None, Some("ws-forced"))?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
let request_id = mcp.send_login_account_chatgpt_request().await?;
let resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(request_id)),
)
.await??;
let login: LoginAccountResponse = to_response(resp)?;
let LoginAccountResponse::Chatgpt { auth_url, .. } = login else {
bail!("unexpected login response: {login:?}");
};
assert!(
auth_url.contains("allowed_workspace_id=ws-forced"),
"auth URL should include forced workspace"
);
Ok(())
}


@@ -1,2 +1,7 @@
// v2 test suite modules
mod account;
mod thread_archive;
mod thread_list;
mod thread_resume;
mod thread_start;
mod turn_interrupt;
mod turn_start;


@@ -0,0 +1,93 @@
use anyhow::Result;
use app_test_support::McpProcess;
use app_test_support::to_response;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::ThreadArchiveParams;
use codex_app_server_protocol::ThreadArchiveResponse;
use codex_app_server_protocol::ThreadStartParams;
use codex_app_server_protocol::ThreadStartResponse;
use codex_core::ARCHIVED_SESSIONS_SUBDIR;
use codex_core::find_conversation_path_by_id_str;
use std::path::Path;
use tempfile::TempDir;
use tokio::time::timeout;
const DEFAULT_READ_TIMEOUT: std::time::Duration = std::time::Duration::from_secs(10);
#[tokio::test]
async fn thread_archive_moves_rollout_into_archived_directory() -> Result<()> {
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path())?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
// Start a thread.
let start_id = mcp
.send_thread_start_request(ThreadStartParams {
model: Some("mock-model".to_string()),
..Default::default()
})
.await?;
let start_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(start_id)),
)
.await??;
let ThreadStartResponse { thread } = to_response::<ThreadStartResponse>(start_resp)?;
assert!(!thread.id.is_empty());
// Locate the rollout path recorded for this thread id.
let rollout_path = find_conversation_path_by_id_str(codex_home.path(), &thread.id)
.await?
.expect("expected rollout path for thread id to exist");
assert!(
rollout_path.exists(),
"expected {} to exist",
rollout_path.display()
);
// Archive the thread.
let archive_id = mcp
.send_thread_archive_request(ThreadArchiveParams {
thread_id: thread.id.clone(),
})
.await?;
let archive_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(archive_id)),
)
.await??;
let _: ThreadArchiveResponse = to_response::<ThreadArchiveResponse>(archive_resp)?;
// Verify file moved.
let archived_directory = codex_home.path().join(ARCHIVED_SESSIONS_SUBDIR);
// The archived file keeps the original filename (rollout-...-<id>.jsonl).
let archived_rollout_path =
archived_directory.join(rollout_path.file_name().expect("rollout file name"));
assert!(
!rollout_path.exists(),
"expected rollout path {} to be moved",
rollout_path.display()
);
assert!(
archived_rollout_path.exists(),
"expected archived rollout path {} to exist",
archived_rollout_path.display()
);
Ok(())
}
fn create_config_toml(codex_home: &Path) -> std::io::Result<()> {
let config_toml = codex_home.join("config.toml");
std::fs::write(config_toml, config_contents())
}
fn config_contents() -> &'static str {
r#"model = "mock-model"
approval_policy = "never"
sandbox_mode = "read-only"
"#
}


@@ -0,0 +1,205 @@
use anyhow::Result;
use app_test_support::McpProcess;
use app_test_support::create_fake_rollout;
use app_test_support::to_response;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::ThreadListParams;
use codex_app_server_protocol::ThreadListResponse;
use serde_json::json;
use tempfile::TempDir;
use tokio::time::timeout;
use uuid::Uuid;
const DEFAULT_READ_TIMEOUT: std::time::Duration = std::time::Duration::from_secs(10);
#[tokio::test]
async fn thread_list_basic_empty() -> Result<()> {
let codex_home = TempDir::new()?;
create_minimal_config(codex_home.path())?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
// List threads in an empty CODEX_HOME; should return an empty page with nextCursor: null.
let list_id = mcp
.send_thread_list_request(ThreadListParams {
cursor: None,
limit: Some(10),
model_providers: None,
})
.await?;
let list_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(list_id)),
)
.await??;
let ThreadListResponse { data, next_cursor } = to_response::<ThreadListResponse>(list_resp)?;
assert!(data.is_empty());
assert_eq!(next_cursor, None);
Ok(())
}
// Minimal config.toml for listing.
fn create_minimal_config(codex_home: &std::path::Path) -> std::io::Result<()> {
let config_toml = codex_home.join("config.toml");
std::fs::write(
config_toml,
r#"
model = "mock-model"
approval_policy = "never"
"#,
)
}
#[tokio::test]
async fn thread_list_pagination_next_cursor_none_on_last_page() -> Result<()> {
let codex_home = TempDir::new()?;
create_minimal_config(codex_home.path())?;
// Create three rollouts so we can paginate with limit=2.
let _a = create_fake_rollout(
codex_home.path(),
"2025-01-02T12-00-00",
"2025-01-02T12:00:00Z",
"Hello",
Some("mock_provider"),
)?;
let _b = create_fake_rollout(
codex_home.path(),
"2025-01-01T13-00-00",
"2025-01-01T13:00:00Z",
"Hello",
Some("mock_provider"),
)?;
let _c = create_fake_rollout(
codex_home.path(),
"2025-01-01T12-00-00",
"2025-01-01T12:00:00Z",
"Hello",
Some("mock_provider"),
)?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
// Page 1: limit 2 → expect next_cursor Some.
let page1_id = mcp
.send_thread_list_request(ThreadListParams {
cursor: None,
limit: Some(2),
model_providers: Some(vec!["mock_provider".to_string()]),
})
.await?;
let page1_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(page1_id)),
)
.await??;
let ThreadListResponse {
data: data1,
next_cursor: cursor1,
} = to_response::<ThreadListResponse>(page1_resp)?;
assert_eq!(data1.len(), 2);
let cursor1 = cursor1.expect("expected nextCursor on first page");
// Page 2: with cursor → expect next_cursor None when no more results.
let page2_id = mcp
.send_thread_list_request(ThreadListParams {
cursor: Some(cursor1),
limit: Some(2),
model_providers: Some(vec!["mock_provider".to_string()]),
})
.await?;
let page2_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(page2_id)),
)
.await??;
let ThreadListResponse {
data: data2,
next_cursor: cursor2,
} = to_response::<ThreadListResponse>(page2_resp)?;
assert!(data2.len() <= 2);
assert_eq!(cursor2, None, "expected nextCursor to be null on last page");
Ok(())
}
#[tokio::test]
async fn thread_list_respects_provider_filter() -> Result<()> {
let codex_home = TempDir::new()?;
create_minimal_config(codex_home.path())?;
// Create rollouts under two providers.
let _a = create_fake_rollout(
codex_home.path(),
"2025-01-02T10-00-00",
"2025-01-02T10:00:00Z",
"X",
Some("mock_provider"),
)?; // mock_provider
// one with a different provider
let uuid = Uuid::new_v4();
let dir = codex_home
.path()
.join("sessions")
.join("2025")
.join("01")
.join("02");
std::fs::create_dir_all(&dir)?;
let file_path = dir.join(format!("rollout-2025-01-02T11-00-00-{uuid}.jsonl"));
let lines = [
json!({
"timestamp": "2025-01-02T11:00:00Z",
"type": "session_meta",
"payload": {
"id": uuid,
"timestamp": "2025-01-02T11:00:00Z",
"cwd": "/",
"originator": "codex",
"cli_version": "0.0.0",
"instructions": null,
"source": "vscode",
"model_provider": "other_provider"
}
})
.to_string(),
json!({
"timestamp": "2025-01-02T11:00:00Z",
"type":"response_item",
"payload": {"type":"message","role":"user","content":[{"type":"input_text","text":"X"}]}
})
.to_string(),
json!({
"timestamp": "2025-01-02T11:00:00Z",
"type":"event_msg",
"payload": {"type":"user_message","message":"X","kind":"plain"}
})
.to_string(),
];
std::fs::write(file_path, lines.join("\n") + "\n")?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
// Filter to only other_provider; expect 1 item, nextCursor None.
let list_id = mcp
.send_thread_list_request(ThreadListParams {
cursor: None,
limit: Some(10),
model_providers: Some(vec!["other_provider".to_string()]),
})
.await?;
let resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(list_id)),
)
.await??;
let ThreadListResponse { data, next_cursor } = to_response::<ThreadListResponse>(resp)?;
assert_eq!(data.len(), 1);
assert_eq!(next_cursor, None);
Ok(())
}


@@ -0,0 +1,79 @@
use anyhow::Result;
use app_test_support::McpProcess;
use app_test_support::create_mock_chat_completions_server;
use app_test_support::to_response;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::ThreadResumeParams;
use codex_app_server_protocol::ThreadResumeResponse;
use codex_app_server_protocol::ThreadStartParams;
use codex_app_server_protocol::ThreadStartResponse;
use tempfile::TempDir;
use tokio::time::timeout;
const DEFAULT_READ_TIMEOUT: std::time::Duration = std::time::Duration::from_secs(10);
#[tokio::test]
async fn thread_resume_returns_existing_thread() -> Result<()> {
let server = create_mock_chat_completions_server(vec![]).await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri())?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
// Start a thread.
let start_id = mcp
.send_thread_start_request(ThreadStartParams {
model: Some("gpt-5-codex".to_string()),
..Default::default()
})
.await?;
let start_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(start_id)),
)
.await??;
let ThreadStartResponse { thread } = to_response::<ThreadStartResponse>(start_resp)?;
// Resume it via v2 API.
let resume_id = mcp
.send_thread_resume_request(ThreadResumeParams {
thread_id: thread.id.clone(),
})
.await?;
let resume_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(resume_id)),
)
.await??;
let ThreadResumeResponse { thread: resumed } =
to_response::<ThreadResumeResponse>(resume_resp)?;
assert_eq!(resumed.id, thread.id);
Ok(())
}
// Helper to create a config.toml pointing at the mock model server.
fn create_config_toml(codex_home: &std::path::Path, server_uri: &str) -> std::io::Result<()> {
let config_toml = codex_home.join("config.toml");
std::fs::write(
config_toml,
format!(
r#"
model = "mock-model"
approval_policy = "never"
sandbox_mode = "read-only"
model_provider = "mock_provider"
[model_providers.mock_provider]
name = "Mock provider for test"
base_url = "{server_uri}/v1"
wire_api = "chat"
request_max_retries = 0
stream_max_retries = 0
"#
),
)
}


@@ -0,0 +1,81 @@
use anyhow::Result;
use app_test_support::McpProcess;
use app_test_support::create_mock_chat_completions_server;
use app_test_support::to_response;
use codex_app_server_protocol::JSONRPCNotification;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::ThreadStartParams;
use codex_app_server_protocol::ThreadStartResponse;
use codex_app_server_protocol::ThreadStartedNotification;
use std::path::Path;
use tempfile::TempDir;
use tokio::time::timeout;
const DEFAULT_READ_TIMEOUT: std::time::Duration = std::time::Duration::from_secs(10);
#[tokio::test]
async fn thread_start_creates_thread_and_emits_started() -> Result<()> {
// Provide a mock server and config so model wiring is valid.
let server = create_mock_chat_completions_server(vec![]).await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri())?;
// Start server and initialize.
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
// Start a v2 thread with an explicit model override.
let req_id = mcp
.send_thread_start_request(ThreadStartParams {
model: Some("gpt-5".to_string()),
..Default::default()
})
.await?;
// Expect a proper JSON-RPC response with a thread id.
let resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(req_id)),
)
.await??;
let ThreadStartResponse { thread } = to_response::<ThreadStartResponse>(resp)?;
assert!(!thread.id.is_empty(), "thread id should not be empty");
// A corresponding thread/started notification should arrive.
let notif: JSONRPCNotification = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("thread/started"),
)
.await??;
let started: ThreadStartedNotification =
serde_json::from_value(notif.params.expect("params must be present"))?;
assert_eq!(started.thread.id, thread.id);
Ok(())
}
// Helper to create a config.toml pointing at the mock model server.
fn create_config_toml(codex_home: &Path, server_uri: &str) -> std::io::Result<()> {
let config_toml = codex_home.join("config.toml");
std::fs::write(
config_toml,
format!(
r#"
model = "mock-model"
approval_policy = "never"
sandbox_mode = "read-only"
model_provider = "mock_provider"
[model_providers.mock_provider]
name = "Mock provider for test"
base_url = "{server_uri}/v1"
wire_api = "chat"
request_max_retries = 0
stream_max_retries = 0
"#
),
)
}


@@ -0,0 +1,128 @@
#![cfg(unix)]
use anyhow::Result;
use app_test_support::McpProcess;
use app_test_support::create_mock_chat_completions_server;
use app_test_support::create_shell_sse_response;
use app_test_support::to_response;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::ThreadStartParams;
use codex_app_server_protocol::ThreadStartResponse;
use codex_app_server_protocol::TurnInterruptParams;
use codex_app_server_protocol::TurnInterruptResponse;
use codex_app_server_protocol::TurnStartParams;
use codex_app_server_protocol::TurnStartResponse;
use codex_app_server_protocol::UserInput as V2UserInput;
use tempfile::TempDir;
use tokio::time::timeout;
const DEFAULT_READ_TIMEOUT: std::time::Duration = std::time::Duration::from_secs(10);
#[tokio::test]
async fn turn_interrupt_aborts_running_turn() -> Result<()> {
// Use a portable sleep command to keep the turn running.
#[cfg(target_os = "windows")]
let shell_command = vec![
"powershell".to_string(),
"-Command".to_string(),
"Start-Sleep -Seconds 10".to_string(),
];
#[cfg(not(target_os = "windows"))]
let shell_command = vec!["sleep".to_string(), "10".to_string()];
let tmp = TempDir::new()?;
let codex_home = tmp.path().join("codex_home");
std::fs::create_dir(&codex_home)?;
let working_directory = tmp.path().join("workdir");
std::fs::create_dir(&working_directory)?;
// Mock server: one long-running shell command; after the abort, nothing else is needed.
let server = create_mock_chat_completions_server(vec![create_shell_sse_response(
shell_command.clone(),
Some(&working_directory),
Some(10_000),
"call_sleep",
)?])
.await;
create_config_toml(&codex_home, &server.uri())?;
let mut mcp = McpProcess::new(&codex_home).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
// Start a v2 thread and capture its id.
let thread_req = mcp
.send_thread_start_request(ThreadStartParams {
model: Some("mock-model".to_string()),
..Default::default()
})
.await?;
let thread_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(thread_req)),
)
.await??;
let ThreadStartResponse { thread } = to_response::<ThreadStartResponse>(thread_resp)?;
// Start a turn that triggers a long-running command.
let turn_req = mcp
.send_turn_start_request(TurnStartParams {
thread_id: thread.id.clone(),
input: vec![V2UserInput::Text {
text: "run sleep".to_string(),
}],
cwd: Some(working_directory.clone()),
..Default::default()
})
.await?;
let turn_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(turn_req)),
)
.await??;
let TurnStartResponse { turn } = to_response::<TurnStartResponse>(turn_resp)?;
// Give the command a brief moment to start.
tokio::time::sleep(std::time::Duration::from_secs(1)).await;
// Interrupt the in-progress turn by id (v2 API).
let interrupt_id = mcp
.send_turn_interrupt_request(TurnInterruptParams {
thread_id: thread.id,
turn_id: turn.id,
})
.await?;
let interrupt_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(interrupt_id)),
)
.await??;
let _resp: TurnInterruptResponse = to_response::<TurnInterruptResponse>(interrupt_resp)?;
// No fields to assert on; successful deserialization confirms proper response shape.
Ok(())
}
// Helper to create a config.toml pointing at the mock model server.
fn create_config_toml(codex_home: &std::path::Path, server_uri: &str) -> std::io::Result<()> {
let config_toml = codex_home.join("config.toml");
std::fs::write(
config_toml,
format!(
r#"
model = "mock-model"
approval_policy = "never"
sandbox_mode = "workspace-write"
model_provider = "mock_provider"
[model_providers.mock_provider]
name = "Mock provider for test"
base_url = "{server_uri}/v1"
wire_api = "chat"
request_max_retries = 0
stream_max_retries = 0
"#
),
)
}


@@ -0,0 +1,486 @@
use anyhow::Result;
use app_test_support::McpProcess;
use app_test_support::create_final_assistant_message_sse_response;
use app_test_support::create_mock_chat_completions_server;
use app_test_support::create_mock_chat_completions_server_unchecked;
use app_test_support::create_shell_sse_response;
use app_test_support::to_response;
use codex_app_server_protocol::JSONRPCNotification;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::ServerRequest;
use codex_app_server_protocol::ThreadStartParams;
use codex_app_server_protocol::ThreadStartResponse;
use codex_app_server_protocol::TurnStartParams;
use codex_app_server_protocol::TurnStartResponse;
use codex_app_server_protocol::TurnStartedNotification;
use codex_app_server_protocol::UserInput as V2UserInput;
use codex_core::protocol_config_types::ReasoningEffort;
use codex_core::protocol_config_types::ReasoningSummary;
use codex_protocol::parse_command::ParsedCommand;
use codex_protocol::protocol::Event;
use codex_protocol::protocol::EventMsg;
use core_test_support::skip_if_no_network;
use pretty_assertions::assert_eq;
use std::path::Path;
use tempfile::TempDir;
use tokio::time::timeout;
const DEFAULT_READ_TIMEOUT: std::time::Duration = std::time::Duration::from_secs(10);
#[tokio::test]
async fn turn_start_emits_notifications_and_accepts_model_override() -> Result<()> {
// Provide a mock server and config so model wiring is valid.
// Three Codex turns hit the mock model (session start + two turn/start calls).
let responses = vec![
create_final_assistant_message_sse_response("Done")?,
create_final_assistant_message_sse_response("Done")?,
create_final_assistant_message_sse_response("Done")?,
];
let server = create_mock_chat_completions_server_unchecked(responses).await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri(), "never")?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
// Start a thread (v2) and capture its id.
let thread_req = mcp
.send_thread_start_request(ThreadStartParams {
model: Some("mock-model".to_string()),
..Default::default()
})
.await?;
let thread_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(thread_req)),
)
.await??;
let ThreadStartResponse { thread } = to_response::<ThreadStartResponse>(thread_resp)?;
// Start a turn with only input and thread_id set (no overrides).
let turn_req = mcp
.send_turn_start_request(TurnStartParams {
thread_id: thread.id.clone(),
input: vec![V2UserInput::Text {
text: "Hello".to_string(),
}],
..Default::default()
})
.await?;
let turn_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(turn_req)),
)
.await??;
let TurnStartResponse { turn } = to_response::<TurnStartResponse>(turn_resp)?;
assert!(!turn.id.is_empty());
// Expect a turn/started notification.
let notif: JSONRPCNotification = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("turn/started"),
)
.await??;
let started: TurnStartedNotification =
serde_json::from_value(notif.params.expect("params must be present"))?;
assert_eq!(
started.turn.status,
codex_app_server_protocol::TurnStatus::InProgress
);
// Send a second turn that exercises the overrides path: change the model.
let turn_req2 = mcp
.send_turn_start_request(TurnStartParams {
thread_id: thread.id.clone(),
input: vec![V2UserInput::Text {
text: "Second".to_string(),
}],
model: Some("mock-model-override".to_string()),
..Default::default()
})
.await?;
let turn_resp2: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(turn_req2)),
)
.await??;
let TurnStartResponse { turn: turn2 } = to_response::<TurnStartResponse>(turn_resp2)?;
assert!(!turn2.id.is_empty());
// Ensure the second turn has a different id than the first.
assert_ne!(turn.id, turn2.id);
// Expect a second turn/started notification as well.
let _notif2: JSONRPCNotification = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("turn/started"),
)
.await??;
// And we should ultimately get a task_complete without having to add a
// legacy conversation listener explicitly (auto-attached by thread/start).
let _task_complete: JSONRPCNotification = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("codex/event/task_complete"),
)
.await??;
Ok(())
}
#[tokio::test]
async fn turn_start_accepts_local_image_input() -> Result<()> {
// Two Codex turns hit the mock model (session start + turn/start).
let responses = vec![
create_final_assistant_message_sse_response("Done")?,
create_final_assistant_message_sse_response("Done")?,
];
// Use the unchecked variant because the request payload includes a LocalImage
// which the strict matcher does not currently cover.
let server = create_mock_chat_completions_server_unchecked(responses).await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri(), "never")?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
let thread_req = mcp
.send_thread_start_request(ThreadStartParams {
model: Some("mock-model".to_string()),
..Default::default()
})
.await?;
let thread_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(thread_req)),
)
.await??;
let ThreadStartResponse { thread } = to_response::<ThreadStartResponse>(thread_resp)?;
let image_path = codex_home.path().join("image.png");
// No need to actually write the file; we just exercise the input path.
let turn_req = mcp
.send_turn_start_request(TurnStartParams {
thread_id: thread.id.clone(),
input: vec![V2UserInput::LocalImage { path: image_path }],
..Default::default()
})
.await?;
let turn_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(turn_req)),
)
.await??;
let TurnStartResponse { turn } = to_response::<TurnStartResponse>(turn_resp)?;
assert!(!turn.id.is_empty());
// This test only validates that turn/start responds and returns a turn.
Ok(())
}
#[tokio::test]
async fn turn_start_exec_approval_toggle_v2() -> Result<()> {
skip_if_no_network!(Ok(()));
let tmp = TempDir::new()?;
let codex_home = tmp.path().to_path_buf();
// Mock server: the first turn requests a shell call (elicitation), then completes.
// The second turn does the same, but we set approval_policy=never to avoid elicitation.
let responses = vec![
create_shell_sse_response(
vec![
"python3".to_string(),
"-c".to_string(),
"print(42)".to_string(),
],
None,
Some(5000),
"call1",
)?,
create_final_assistant_message_sse_response("done 1")?,
create_shell_sse_response(
vec![
"python3".to_string(),
"-c".to_string(),
"print(42)".to_string(),
],
None,
Some(5000),
"call2",
)?,
create_final_assistant_message_sse_response("done 2")?,
];
let server = create_mock_chat_completions_server(responses).await;
// Default approval is untrusted to force elicitation on first turn.
create_config_toml(codex_home.as_path(), &server.uri(), "untrusted")?;
let mut mcp = McpProcess::new(codex_home.as_path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
// thread/start
let start_id = mcp
.send_thread_start_request(ThreadStartParams {
model: Some("mock-model".to_string()),
..Default::default()
})
.await?;
let start_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(start_id)),
)
.await??;
let ThreadStartResponse { thread } = to_response::<ThreadStartResponse>(start_resp)?;
// turn/start — expect ExecCommandApproval request from server
let first_turn_id = mcp
.send_turn_start_request(TurnStartParams {
thread_id: thread.id.clone(),
input: vec![V2UserInput::Text {
text: "run python".to_string(),
}],
..Default::default()
})
.await?;
// Acknowledge RPC
timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(first_turn_id)),
)
.await??;
// Receive elicitation
let server_req = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_request_message(),
)
.await??;
let ServerRequest::ExecCommandApproval { request_id, params } = server_req else {
panic!("expected ExecCommandApproval request");
};
assert_eq!(params.call_id, "call1");
assert_eq!(
params.parsed_cmd,
vec![ParsedCommand::Unknown {
cmd: "python3 -c 'print(42)'".to_string()
}]
);
// Approve and wait for task completion
mcp.send_response(
request_id,
serde_json::json!({ "decision": codex_core::protocol::ReviewDecision::Approved }),
)
.await?;
timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("codex/event/task_complete"),
)
.await??;
// Second turn with approval_policy=never should not elicit approval
let second_turn_id = mcp
.send_turn_start_request(TurnStartParams {
thread_id: thread.id.clone(),
input: vec![V2UserInput::Text {
text: "run python again".to_string(),
}],
approval_policy: Some(codex_app_server_protocol::AskForApproval::Never),
sandbox_policy: Some(codex_app_server_protocol::SandboxPolicy::DangerFullAccess),
model: Some("mock-model".to_string()),
effort: Some(ReasoningEffort::Medium),
summary: Some(ReasoningSummary::Auto),
..Default::default()
})
.await?;
timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(second_turn_id)),
)
.await??;
// Ensure we do NOT receive an ExecCommandApproval request before task completes
timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("codex/event/task_complete"),
)
.await??;
Ok(())
}
#[tokio::test]
async fn turn_start_updates_sandbox_and_cwd_between_turns_v2() -> Result<()> {
// When returning Result from a test, pass an Ok(()) to the skip macro
// so the early return type matches. The no-arg form returns unit.
skip_if_no_network!(Ok(()));
let tmp = TempDir::new()?;
let codex_home = tmp.path().join("codex_home");
std::fs::create_dir(&codex_home)?;
let workspace_root = tmp.path().join("workspace");
std::fs::create_dir(&workspace_root)?;
let first_cwd = workspace_root.join("turn1");
let second_cwd = workspace_root.join("turn2");
std::fs::create_dir(&first_cwd)?;
std::fs::create_dir(&second_cwd)?;
let responses = vec![
create_shell_sse_response(
vec![
"bash".to_string(),
"-lc".to_string(),
"echo first turn".to_string(),
],
None,
Some(5000),
"call-first",
)?,
create_final_assistant_message_sse_response("done first")?,
create_shell_sse_response(
vec![
"bash".to_string(),
"-lc".to_string(),
"echo second turn".to_string(),
],
None,
Some(5000),
"call-second",
)?,
create_final_assistant_message_sse_response("done second")?,
];
let server = create_mock_chat_completions_server(responses).await;
create_config_toml(&codex_home, &server.uri(), "untrusted")?;
let mut mcp = McpProcess::new(&codex_home).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
// thread/start
let start_id = mcp
.send_thread_start_request(ThreadStartParams {
model: Some("mock-model".to_string()),
..Default::default()
})
.await?;
let start_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(start_id)),
)
.await??;
let ThreadStartResponse { thread } = to_response::<ThreadStartResponse>(start_resp)?;
// first turn with workspace-write sandbox and first_cwd
let first_turn = mcp
.send_turn_start_request(TurnStartParams {
thread_id: thread.id.clone(),
input: vec![V2UserInput::Text {
text: "first turn".to_string(),
}],
cwd: Some(first_cwd.clone()),
approval_policy: Some(codex_app_server_protocol::AskForApproval::Never),
sandbox_policy: Some(codex_app_server_protocol::SandboxPolicy::WorkspaceWrite {
writable_roots: vec![first_cwd.clone()],
network_access: false,
exclude_tmpdir_env_var: false,
exclude_slash_tmp: false,
}),
model: Some("mock-model".to_string()),
effort: Some(ReasoningEffort::Medium),
summary: Some(ReasoningSummary::Auto),
})
.await?;
timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(first_turn)),
)
.await??;
timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("codex/event/task_complete"),
)
.await??;
// Second turn switches to danger-full-access and second_cwd; ensure exec begins in second_cwd.
let second_turn = mcp
.send_turn_start_request(TurnStartParams {
thread_id: thread.id.clone(),
input: vec![V2UserInput::Text {
text: "second turn".to_string(),
}],
cwd: Some(second_cwd.clone()),
approval_policy: Some(codex_app_server_protocol::AskForApproval::Never),
sandbox_policy: Some(codex_app_server_protocol::SandboxPolicy::DangerFullAccess),
model: Some("mock-model".to_string()),
effort: Some(ReasoningEffort::Medium),
summary: Some(ReasoningSummary::Auto),
})
.await?;
timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(second_turn)),
)
.await??;
let exec_begin_notification = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("codex/event/exec_command_begin"),
)
.await??;
let params = exec_begin_notification
.params
.clone()
.expect("exec_command_begin params");
let event: Event = serde_json::from_value(params).expect("deserialize exec begin event");
let exec_begin = match event.msg {
EventMsg::ExecCommandBegin(exec_begin) => exec_begin,
other => panic!("expected ExecCommandBegin event, got {other:?}"),
};
assert_eq!(exec_begin.cwd, second_cwd);
assert_eq!(
exec_begin.command,
vec![
"bash".to_string(),
"-lc".to_string(),
"echo second turn".to_string()
]
);
timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("codex/event/task_complete"),
)
.await??;
Ok(())
}
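An aside on the `skip_if_no_network!` usage flagged in the comment above: the macro argument is the value its early return evaluates to, so it must match the test's return type. A minimal sketch of both forms, assuming the macro expands to a plain early `return`:

```rust
#[tokio::test]
async fn plain_test() {
    // No-arg form: early-returns `()` in tests that return unit.
    skip_if_no_network!();
}

#[tokio::test]
async fn result_test() -> Result<()> {
    // With an argument, the macro early-returns that value (here `Ok(())`),
    // matching the test's `Result<()>` return type.
    skip_if_no_network!(Ok(()));
    Ok(())
}
```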
// Helper to create a config.toml pointing at the mock model server.
fn create_config_toml(
codex_home: &Path,
server_uri: &str,
approval_policy: &str,
) -> std::io::Result<()> {
let config_toml = codex_home.join("config.toml");
std::fs::write(
config_toml,
format!(
r#"
model = "mock-model"
approval_policy = "{approval_policy}"
sandbox_mode = "read-only"
model_provider = "mock_provider"
[model_providers.mock_provider]
name = "Mock provider for test"
base_url = "{server_uri}/v1"
wire_api = "chat"
request_max_retries = 0
stream_max_retries = 0
"#
),
)
}


@@ -353,7 +353,9 @@ async fn run_login(config_overrides: &CliConfigOverrides, login_args: LoginArgs)
.context("failed to load configuration")?;
if !config.features.enabled(Feature::RmcpClient) {
bail!("OAuth login is only supported when [features].rmcp_client is true in config.toml.");
bail!(
"OAuth login is only supported when [features].rmcp_client is true in config.toml. See https://github.com/openai/codex/blob/main/docs/config.md#feature-flags for details."
);
}
let LoginArgs { name, scopes } = login_args;


@@ -1044,7 +1044,7 @@ pub async fn run_main(cli: Cli, _codex_linux_sandbox_exe: Option<PathBuf>) -> an
// Close task modal/pending apply if present before opening env modal
app.diff_overlay = None;
app.env_modal = Some(app::EnvModalState { query: String::new(), selected: 0 });
// Cache environments until user explicitly refreshes with 'r' inside the modal.
// Cache environments while the modal is open to avoid repeated fetches.
let should_fetch = app.environments.is_empty();
if should_fetch {
app.env_loading = true;
@@ -1115,7 +1115,7 @@ pub async fn run_main(cli: Cli, _codex_linux_sandbox_exe: Option<PathBuf>) -> an
let _ = tx.send(evt);
});
} else {
app.status = "No environment selected (press 'e' to choose)".to_string();
app.status = "No environment selected".to_string();
}
}
needs_redraw = true;
@@ -1313,18 +1313,6 @@ pub async fn run_main(cli: Cli, _codex_linux_sandbox_exe: Option<PathBuf>) -> an
// Environment modal key handling
match key.code {
KeyCode::Esc => { app.env_modal = None; needs_redraw = true; }
KeyCode::Char('r') | KeyCode::Char('R') => {
// Trigger refresh of environments
app.env_loading = true; app.env_error = None; needs_redraw = true;
let _ = frame_tx.send(Instant::now() + Duration::from_millis(100));
let tx = tx.clone();
tokio::spawn(async move {
let base_url = crate::util::normalize_base_url(&std::env::var("CODEX_CLOUD_TASKS_BASE_URL").unwrap_or_else(|_| "https://chatgpt.com/backend-api".to_string()));
let headers = crate::util::build_chatgpt_headers().await;
let res = crate::env_detect::list_environments(&base_url, &headers).await;
let _ = tx.send(app::AppEvent::EnvironmentsLoaded(res));
});
}
KeyCode::Char(ch) if !key.modifiers.contains(KeyModifiers::CONTROL) && !key.modifiers.contains(KeyModifiers::ALT) => {
if let Some(m) = app.env_modal.as_mut() { m.query.push(ch); }
needs_redraw = true;
@@ -1431,7 +1419,7 @@ pub async fn run_main(cli: Cli, _codex_linux_sandbox_exe: Option<PathBuf>) -> an
}
KeyCode::Char('o') | KeyCode::Char('O') => {
app.env_modal = Some(app::EnvModalState { query: String::new(), selected: 0 });
// Cache environments until user explicitly refreshes with 'r' inside the modal.
// Cache environments while the modal is open to avoid repeated fetches.
let should_fetch = app.environments.is_empty();
if should_fetch { app.env_loading = true; app.env_error = None; }
needs_redraw = true;


@@ -945,9 +945,7 @@ pub fn draw_env_modal(frame: &mut Frame, area: Rect, app: &mut App) {
// Subheader with usage hints (dim cyan)
let subheader = Paragraph::new(Line::from(
"Type to search, Enter select, Esc cancel; r refresh"
.cyan()
.dim(),
"Type to search, Enter select, Esc cancel".cyan().dim(),
))
.wrap(Wrap { trim: true });
frame.render_widget(subheader, rows[0]);


@@ -34,7 +34,7 @@ const PRESETS: &[ModelPreset] = &[
id: "gpt-5-codex",
model: "gpt-5-codex",
display_name: "gpt-5-codex",
description: "Optimized for coding tasks with many tools.",
description: "Optimized for codex.",
default_reasoning_effort: ReasoningEffort::Medium,
supported_reasoning_efforts: &[
ReasoningEffortPreset {
@@ -52,6 +52,24 @@ const PRESETS: &[ModelPreset] = &[
],
is_default: true,
},
ModelPreset {
id: "gpt-5-codex-mini",
model: "gpt-5-codex-mini",
display_name: "gpt-5-codex-mini",
description: "Optimized for codex. Cheaper, faster, but less capable.",
default_reasoning_effort: ReasoningEffort::Medium,
supported_reasoning_efforts: &[
ReasoningEffortPreset {
effort: ReasoningEffort::Medium,
description: "Dynamically adjusts reasoning based on the task",
},
ReasoningEffortPreset {
effort: ReasoningEffort::High,
description: "Maximizes reasoning depth for complex or ambiguous problems",
},
],
is_default: false,
},
ModelPreset {
id: "gpt-5",
model: "gpt-5",
@@ -80,8 +98,13 @@ const PRESETS: &[ModelPreset] = &[
},
];
pub fn builtin_model_presets(_auth_mode: Option<AuthMode>) -> Vec<ModelPreset> {
PRESETS.to_vec()
pub fn builtin_model_presets(auth_mode: Option<AuthMode>) -> Vec<ModelPreset> {
let allow_codex_mini = matches!(auth_mode, Some(AuthMode::ChatGPT));
PRESETS
.iter()
.filter(|preset| allow_codex_mini || preset.id != "gpt-5-codex-mini")
.copied()
.collect()
}
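To make the auth gating above concrete, here is a sketch of a unit test one might add (assuming `AuthMode` is in scope and the preset ids shown in this diff remain stable):

```rust
#[test]
fn gpt_5_codex_mini_requires_chatgpt_auth() {
    // API-key (or unauthenticated) callers should not see the mini preset.
    let presets = builtin_model_presets(None);
    assert!(presets.iter().all(|preset| preset.id != "gpt-5-codex-mini"));

    // ChatGPT-authenticated callers should.
    let presets = builtin_model_presets(Some(AuthMode::ChatGPT));
    assert!(presets.iter().any(|preset| preset.id == "gpt-5-codex-mini"));
}
```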
#[cfg(test)]


@@ -1,12 +1,14 @@
mod storage;
use chrono::Utc;
use reqwest::StatusCode;
use serde::Deserialize;
use serde::Serialize;
#[cfg(test)]
use serial_test::serial;
use std::env;
use std::fmt::Debug;
use std::io::ErrorKind;
use std::path::Path;
use std::path::PathBuf;
use std::sync::Arc;
@@ -22,10 +24,14 @@ use crate::auth::storage::AuthStorageBackend;
use crate::auth::storage::create_auth_storage;
use crate::config::Config;
use crate::default_client::CodexHttpClient;
use crate::error::RefreshTokenFailedError;
use crate::error::RefreshTokenFailedReason;
use crate::token_data::PlanType;
use crate::token_data::TokenData;
use crate::token_data::parse_id_token;
use crate::util::try_parse_error_message;
use serde_json::Value;
use thiserror::Error;
#[derive(Debug, Clone)]
pub struct CodexAuth {
@@ -46,18 +52,54 @@ impl PartialEq for CodexAuth {
// TODO(pakrym): use token exp field to check for expiration instead
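// Measured in days; compared against tokens.last_refresh via chrono::Duration::days below.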
const TOKEN_REFRESH_INTERVAL: i64 = 8;
const REFRESH_TOKEN_EXPIRED_MESSAGE: &str = "Your access token could not be refreshed because your refresh token has expired. Please log out and sign in again.";
const REFRESH_TOKEN_REUSED_MESSAGE: &str = "Your access token could not be refreshed because your refresh token was already used. Please log out and sign in again.";
const REFRESH_TOKEN_INVALIDATED_MESSAGE: &str = "Your access token could not be refreshed because your refresh token was revoked. Please log out and sign in again.";
const REFRESH_TOKEN_UNKNOWN_MESSAGE: &str =
"Your access token could not be refreshed. Please log out and sign in again.";
const REFRESH_TOKEN_URL: &str = "https://auth.openai.com/oauth/token";
pub const REFRESH_TOKEN_URL_OVERRIDE_ENV_VAR: &str = "CODEX_REFRESH_TOKEN_URL_OVERRIDE";
#[derive(Debug, Error)]
pub enum RefreshTokenError {
#[error("{0}")]
Permanent(#[from] RefreshTokenFailedError),
#[error(transparent)]
Transient(#[from] std::io::Error),
}
impl RefreshTokenError {
pub fn failed_reason(&self) -> Option<RefreshTokenFailedReason> {
match self {
Self::Permanent(error) => Some(error.reason),
Self::Transient(_) => None,
}
}
fn other_with_message(message: impl Into<String>) -> Self {
Self::Transient(std::io::Error::other(message.into()))
}
}
impl From<RefreshTokenError> for std::io::Error {
fn from(err: RefreshTokenError) -> Self {
match err {
RefreshTokenError::Permanent(failed) => std::io::Error::other(failed),
RefreshTokenError::Transient(inner) => inner,
}
}
}
impl CodexAuth {
pub async fn refresh_token(&self) -> Result<String, std::io::Error> {
pub async fn refresh_token(&self) -> Result<String, RefreshTokenError> {
tracing::info!("Refreshing token");
let token_data = self
.get_current_token_data()
.ok_or(std::io::Error::other("Token data is not available."))?;
let token_data = self.get_current_token_data().ok_or_else(|| {
RefreshTokenError::Transient(std::io::Error::other("Token data is not available."))
})?;
let token = token_data.refresh_token;
let refresh_response = try_refresh_token(token, &self.client)
.await
.map_err(std::io::Error::other)?;
let refresh_response = try_refresh_token(token, &self.client).await?;
let updated = update_tokens(
&self.storage,
@@ -65,7 +107,8 @@ impl CodexAuth {
refresh_response.access_token,
refresh_response.refresh_token,
)
.await?;
.await
.map_err(RefreshTokenError::from)?;
if let Ok(mut auth_lock) = self.auth_dot_json.lock() {
*auth_lock = Some(updated.clone());
@@ -74,7 +117,7 @@ impl CodexAuth {
let access = match updated.tokens {
Some(t) => t.access_token,
None => {
return Err(std::io::Error::other(
return Err(RefreshTokenError::other_with_message(
"Token data is not available after refresh.",
));
}
@@ -99,15 +142,21 @@ impl CodexAuth {
..
}) => {
if last_refresh < Utc::now() - chrono::Duration::days(TOKEN_REFRESH_INTERVAL) {
let refresh_response = tokio::time::timeout(
let refresh_result = tokio::time::timeout(
Duration::from_secs(60),
try_refresh_token(tokens.refresh_token.clone(), &self.client),
)
.await
.map_err(|_| {
std::io::Error::other("timed out while refreshing OpenAI API key")
})?
.map_err(std::io::Error::other)?;
.await;
let refresh_response = match refresh_result {
Ok(Ok(response)) => response,
Ok(Err(err)) => return Err(err.into()),
Err(_) => {
return Err(std::io::Error::new(
ErrorKind::TimedOut,
"timed out while refreshing OpenAI API key",
));
}
};
let updated_auth_dot_json = update_tokens(
&self.storage,
@@ -425,7 +474,7 @@ async fn update_tokens(
async fn try_refresh_token(
refresh_token: String,
client: &CodexHttpClient,
) -> std::io::Result<RefreshResponse> {
) -> Result<RefreshResponse, RefreshTokenError> {
let refresh_request = RefreshRequest {
client_id: CLIENT_ID,
grant_type: "refresh_token",
@@ -433,30 +482,93 @@ async fn try_refresh_token(
scope: "openid profile email",
};
let endpoint = refresh_token_endpoint();
// Use shared client factory to include standard headers
let response = client
.post("https://auth.openai.com/oauth/token")
.post(endpoint.as_str())
.header("Content-Type", "application/json")
.json(&refresh_request)
.send()
.await
.map_err(std::io::Error::other)?;
.map_err(|err| RefreshTokenError::Transient(std::io::Error::other(err)))?;
if response.status().is_success() {
let status = response.status();
if status.is_success() {
let refresh_response = response
.json::<RefreshResponse>()
.await
.map_err(std::io::Error::other)?;
.map_err(|err| RefreshTokenError::Transient(std::io::Error::other(err)))?;
Ok(refresh_response)
} else {
Err(std::io::Error::other(format!(
"Failed to refresh token: {}: {}",
response.status(),
try_parse_error_message(&response.text().await.unwrap_or_default()),
)))
let body = response.text().await.unwrap_or_default();
if status == StatusCode::UNAUTHORIZED {
let failed = classify_refresh_token_failure(&body);
Err(RefreshTokenError::Permanent(failed))
} else {
let message = try_parse_error_message(&body);
Err(RefreshTokenError::Transient(std::io::Error::other(
format!("Failed to refresh token: {status}: {message}"),
)))
}
}
}
fn classify_refresh_token_failure(body: &str) -> RefreshTokenFailedError {
let code = extract_refresh_token_error_code(body);
let normalized_code = code.as_deref().map(str::to_ascii_lowercase);
let reason = match normalized_code.as_deref() {
Some("refresh_token_expired") => RefreshTokenFailedReason::Expired,
Some("refresh_token_reused") => RefreshTokenFailedReason::Exhausted,
Some("refresh_token_invalidated") => RefreshTokenFailedReason::Revoked,
_ => RefreshTokenFailedReason::Other,
};
if reason == RefreshTokenFailedReason::Other {
tracing::warn!(
backend_code = normalized_code.as_deref(),
backend_body = body,
"Encountered unknown 401 response while refreshing token"
);
}
let message = match reason {
RefreshTokenFailedReason::Expired => REFRESH_TOKEN_EXPIRED_MESSAGE.to_string(),
RefreshTokenFailedReason::Exhausted => REFRESH_TOKEN_REUSED_MESSAGE.to_string(),
RefreshTokenFailedReason::Revoked => REFRESH_TOKEN_INVALIDATED_MESSAGE.to_string(),
RefreshTokenFailedReason::Other => REFRESH_TOKEN_UNKNOWN_MESSAGE.to_string(),
};
RefreshTokenFailedError::new(reason, message)
}
fn extract_refresh_token_error_code(body: &str) -> Option<String> {
if body.trim().is_empty() {
return None;
}
let Value::Object(map) = serde_json::from_str::<Value>(body).ok()? else {
return None;
};
if let Some(error_value) = map.get("error") {
match error_value {
Value::Object(obj) => {
if let Some(code) = obj.get("code").and_then(Value::as_str) {
return Some(code.to_string());
}
}
Value::String(code) => {
return Some(code.to_string());
}
_ => {}
}
}
map.get("code").and_then(Value::as_str).map(str::to_string)
}
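A sketch of the 401 body shapes `classify_refresh_token_failure` and `extract_refresh_token_error_code` are written to handle, expressed as a test (assuming `RefreshTokenFailedReason` derives the usual `Debug`/`PartialEq`):

```rust
#[test]
fn classifies_known_401_bodies() {
    // Nested object form: {"error": {"code": "..."}}.
    let err = classify_refresh_token_failure(r#"{"error":{"code":"refresh_token_expired"}}"#);
    assert_eq!(err.reason, RefreshTokenFailedReason::Expired);

    // Bare string form, matched case-insensitively: {"error": "..."}.
    let err = classify_refresh_token_failure(r#"{"error":"REFRESH_TOKEN_REUSED"}"#);
    assert_eq!(err.reason, RefreshTokenFailedReason::Exhausted);

    // Top-level code form: {"code": "..."}.
    let err = classify_refresh_token_failure(r#"{"code":"refresh_token_invalidated"}"#);
    assert_eq!(err.reason, RefreshTokenFailedReason::Revoked);

    // Anything unrecognized (including non-JSON) falls back to Other.
    let err = classify_refresh_token_failure("unexpected body");
    assert_eq!(err.reason, RefreshTokenFailedReason::Other);
}
```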
#[derive(Serialize)]
struct RefreshRequest {
client_id: &'static str,
@@ -475,6 +587,11 @@ struct RefreshResponse {
// Shared constant for token refresh (client id used for oauth token refresh flow)
pub const CLIENT_ID: &str = "app_EMoamEEZ73f0CkXaXp7hrann";
fn refresh_token_endpoint() -> String {
std::env::var(REFRESH_TOKEN_URL_OVERRIDE_ENV_VAR)
.unwrap_or_else(|_| REFRESH_TOKEN_URL.to_string())
}
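Because `refresh_token_endpoint` reads the process environment, tests that exercise the override need to run serially; the `serial_test::serial` import near the top of this file suggests exactly that pattern. A sketch (the `unsafe` blocks are required on editions where `set_var`/`remove_var` are unsafe):

```rust
#[test]
#[serial]
fn refresh_endpoint_prefers_override() {
    unsafe {
        std::env::set_var(REFRESH_TOKEN_URL_OVERRIDE_ENV_VAR, "http://127.0.0.1:0/oauth/token");
    }
    assert_eq!(refresh_token_endpoint(), "http://127.0.0.1:0/oauth/token");

    unsafe {
        std::env::remove_var(REFRESH_TOKEN_URL_OVERRIDE_ENV_VAR);
    }
    assert_eq!(refresh_token_endpoint(), REFRESH_TOKEN_URL);
}
```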
use std::sync::RwLock;
/// Internal cached auth state.
@@ -965,7 +1082,9 @@ impl AuthManager {
/// Attempt to refresh the current auth token (if any). On success, reload
/// the auth state from disk so other components observe refreshed token.
pub async fn refresh_token(&self) -> std::io::Result<Option<String>> {
/// If the token refresh fails in a permanent (non-transient) way, logs out
/// to clear invalid auth state.
pub async fn refresh_token(&self) -> Result<Option<String>, RefreshTokenError> {
let auth = match self.auth() {
Some(a) => a,
None => return Ok(None),


@@ -31,6 +31,7 @@ use tracing::warn;
use crate::AuthManager;
use crate::auth::CodexAuth;
use crate::auth::RefreshTokenError;
use crate::chat_completions::AggregateStreamExt;
use crate::chat_completions::stream_chat_completions;
use crate::client_common::Prompt;
@@ -389,12 +390,17 @@ impl ModelClient {
&& let Some(manager) = auth_manager.as_ref()
&& let Some(auth) = auth.as_ref()
&& auth.mode == AuthMode::ChatGPT
&& let Err(err) = manager.refresh_token().await
{
manager.refresh_token().await.map_err(|err| {
StreamAttemptError::Fatal(CodexErr::Fatal(format!(
"Failed to refresh ChatGPT credentials: {err}"
)))
})?;
let stream_error = match err {
RefreshTokenError::Permanent(failed) => {
StreamAttemptError::Fatal(CodexErr::RefreshTokenFailed(failed))
}
RefreshTokenError::Transient(other) => {
StreamAttemptError::RetryableTransportError(CodexErr::Io(other))
}
};
return Err(stream_error);
}
// The OpenAI Responses endpoint returns structured JSON bodies even for 4xx/5xx
@@ -674,6 +680,33 @@ fn parse_header_str<'a>(headers: &'a HeaderMap, name: &str) -> Option<&'a str> {
headers.get(name)?.to_str().ok()
}
async fn emit_completed(
tx_event: &mpsc::Sender<Result<ResponseEvent>>,
otel_event_manager: &OtelEventManager,
completed: ResponseCompleted,
) {
if let Some(token_usage) = &completed.usage {
otel_event_manager.sse_event_completed(
token_usage.input_tokens,
token_usage.output_tokens,
token_usage
.input_tokens_details
.as_ref()
.map(|d| d.cached_tokens),
token_usage
.output_tokens_details
.as_ref()
.map(|d| d.reasoning_tokens),
token_usage.total_tokens,
);
}
let event = ResponseEvent::Completed {
response_id: completed.id.clone(),
token_usage: completed.usage.map(Into::into),
};
let _ = tx_event.send(Ok(event)).await;
}
async fn process_sse<S>(
stream: S,
tx_event: mpsc::Sender<Result<ResponseEvent>>,
@@ -686,7 +719,7 @@ async fn process_sse<S>(
// If the stream stays completely silent for an extended period treat it as disconnected.
// The response id returned from the "complete" message.
let mut response_completed: Option<ResponseCompleted> = None;
let response_completed: Option<ResponseCompleted> = None;
let mut response_error: Option<CodexErr> = None;
loop {
@@ -705,30 +738,8 @@ async fn process_sse<S>(
}
Ok(None) => {
match response_completed {
Some(ResponseCompleted {
id: response_id,
usage,
}) => {
if let Some(token_usage) = &usage {
otel_event_manager.sse_event_completed(
token_usage.input_tokens,
token_usage.output_tokens,
token_usage
.input_tokens_details
.as_ref()
.map(|d| d.cached_tokens),
token_usage
.output_tokens_details
.as_ref()
.map(|d| d.reasoning_tokens),
token_usage.total_tokens,
);
}
let event = ResponseEvent::Completed {
response_id,
token_usage: usage.map(Into::into),
};
let _ = tx_event.send(Ok(event)).await;
Some(completed) => {
emit_completed(&tx_event, &otel_event_manager, completed).await
}
None => {
let error = response_error.unwrap_or(CodexErr::Stream(
@@ -858,7 +869,8 @@ async fn process_sse<S>(
if let Some(resp_val) = event.response {
match serde_json::from_value::<ResponseCompleted>(resp_val) {
Ok(r) => {
response_completed = Some(r);
emit_completed(&tx_event, &otel_event_manager, r).await;
return;
}
Err(e) => {
let error = format!("failed to parse ResponseCompleted: {e}");


@@ -58,7 +58,7 @@ use crate::client_common::ResponseEvent;
use crate::config::Config;
use crate::config::types::McpServerTransportConfig;
use crate::config::types::ShellEnvironmentPolicy;
use crate::conversation_history::ConversationHistory;
use crate::context_manager::ContextManager;
use crate::environment_context::EnvironmentContext;
use crate::error::CodexErr;
use crate::error::Result as CodexResult;
@@ -553,7 +553,7 @@ impl Session {
None
} else {
Some(format!(
"You can either enable it using the CLI with `--enable {canonical}` or through the config.toml file with `[features].{canonical}`"
"Enable it with `--enable {canonical}` or `[features].{canonical}` in config.toml. See https://github.com/openai/codex/blob/main/docs/config.md#feature-flags for details."
))
};
post_session_configured_events.push(Event {
@@ -945,7 +945,7 @@ impl Session {
turn_context: &TurnContext,
rollout_items: &[RolloutItem],
) -> Vec<ResponseItem> {
let mut history = ConversationHistory::new();
let mut history = ContextManager::new();
for item in rollout_items {
match item {
RolloutItem::ResponseItem(response_item) => {
@@ -1032,7 +1032,7 @@ impl Session {
}
}
pub(crate) async fn clone_history(&self) -> ConversationHistory {
pub(crate) async fn clone_history(&self) -> ContextManager {
let state = self.state.lock().await;
state.clone_history()
}
@@ -1928,6 +1928,7 @@ async fn run_turn(
return Err(CodexErr::UsageLimitReached(e));
}
Err(CodexErr::UsageNotIncluded) => return Err(CodexErr::UsageNotIncluded),
Err(e @ CodexErr::RefreshTokenFailed(_)) => return Err(e),
Err(e) => {
// Use the configured provider-specific stream retry budget.
let max_retries = turn_context.client.get_provider().stream_max_retries();
@@ -1946,7 +1947,7 @@ async fn run_turn(
// at a seemingly frozen screen.
sess.notify_stream_error(
&turn_context,
format!("Re-connecting... {retries}/{max_retries}"),
format!("Reconnecting... {retries}/{max_retries}"),
)
.await;
@@ -2834,7 +2835,7 @@ mod tests {
turn_context: &TurnContext,
) -> (Vec<RolloutItem>, Vec<ResponseItem>) {
let mut rollout_items = Vec::new();
let mut live_history = ConversationHistory::new();
let mut live_history = ContextManager::new();
let initial_context = session.build_initial_context(turn_context);
for item in &initial_context {


@@ -158,6 +158,11 @@ async fn forward_events(
) {
while let Ok(event) = codex.next_event().await {
match event {
// ignore all legacy delta events
Event {
id: _,
msg: EventMsg::AgentMessageDelta(_) | EventMsg::AgentReasoningDelta(_),
} => continue,
Event {
id: _,
msg: EventMsg::SessionConfigured(_),


@@ -0,0 +1,174 @@
use codex_protocol::models::FunctionCallOutputPayload;
use codex_protocol::models::ResponseItem;
use codex_protocol::protocol::TokenUsage;
use codex_protocol::protocol::TokenUsageInfo;
use std::ops::Deref;
use crate::context_manager::normalize;
use crate::context_manager::truncate::format_output_for_model_body;
use crate::context_manager::truncate::globally_truncate_function_output_items;
/// Transcript of conversation history
#[derive(Debug, Clone, Default)]
pub(crate) struct ContextManager {
/// The oldest items are at the beginning of the vector.
items: Vec<ResponseItem>,
token_info: Option<TokenUsageInfo>,
}
impl ContextManager {
pub(crate) fn new() -> Self {
Self {
items: Vec::new(),
token_info: TokenUsageInfo::new_or_append(&None, &None, None),
}
}
pub(crate) fn token_info(&self) -> Option<TokenUsageInfo> {
self.token_info.clone()
}
pub(crate) fn set_token_usage_full(&mut self, context_window: i64) {
match &mut self.token_info {
Some(info) => info.fill_to_context_window(context_window),
None => {
self.token_info = Some(TokenUsageInfo::full_context_window(context_window));
}
}
}
/// `items` is ordered from oldest to newest.
pub(crate) fn record_items<I>(&mut self, items: I)
where
I: IntoIterator,
I::Item: std::ops::Deref<Target = ResponseItem>,
{
for item in items {
let item_ref = item.deref();
let is_ghost_snapshot = matches!(item_ref, ResponseItem::GhostSnapshot { .. });
if !is_api_message(item_ref) && !is_ghost_snapshot {
continue;
}
let processed = Self::process_item(&item);
self.items.push(processed);
}
}
pub(crate) fn get_history(&mut self) -> Vec<ResponseItem> {
self.normalize_history();
self.contents()
}
// Returns the history prepared for sending to the model, with extra
// response items filtered out and ghost snapshots removed.
pub(crate) fn get_history_for_prompt(&mut self) -> Vec<ResponseItem> {
let mut history = self.get_history();
Self::remove_ghost_snapshots(&mut history);
history
}
pub(crate) fn remove_first_item(&mut self) {
if !self.items.is_empty() {
// Remove the oldest item (front of the list). Items are ordered from
// oldest → newest, so index 0 is the first entry recorded.
let removed = self.items.remove(0);
// If the removed item participates in a call/output pair, also remove
// its corresponding counterpart to keep the invariants intact without
// running a full normalization pass.
normalize::remove_corresponding_for(&mut self.items, &removed);
}
}
pub(crate) fn replace(&mut self, items: Vec<ResponseItem>) {
self.items = items;
}
pub(crate) fn update_token_info(
&mut self,
usage: &TokenUsage,
model_context_window: Option<i64>,
) {
self.token_info = TokenUsageInfo::new_or_append(
&self.token_info,
&Some(usage.clone()),
model_context_window,
);
}
/// This function enforces a couple of invariants on the in-memory history:
/// 1. every call (function/custom) has a corresponding output entry
/// 2. every output has a corresponding call entry
fn normalize_history(&mut self) {
// all function/tool calls must have a corresponding output
normalize::ensure_call_outputs_present(&mut self.items);
// all outputs must have a corresponding function/tool call
normalize::remove_orphan_outputs(&mut self.items);
}
/// Returns a clone of the contents in the transcript.
fn contents(&self) -> Vec<ResponseItem> {
self.items.clone()
}
fn remove_ghost_snapshots(items: &mut Vec<ResponseItem>) {
items.retain(|item| !matches!(item, ResponseItem::GhostSnapshot { .. }));
}
fn process_item(item: &ResponseItem) -> ResponseItem {
match item {
ResponseItem::FunctionCallOutput { call_id, output } => {
let truncated = format_output_for_model_body(output.content.as_str());
let truncated_items = output
.content_items
.as_ref()
.map(|items| globally_truncate_function_output_items(items));
ResponseItem::FunctionCallOutput {
call_id: call_id.clone(),
output: FunctionCallOutputPayload {
content: truncated,
content_items: truncated_items,
success: output.success,
},
}
}
ResponseItem::CustomToolCallOutput { call_id, output } => {
let truncated = format_output_for_model_body(output);
ResponseItem::CustomToolCallOutput {
call_id: call_id.clone(),
output: truncated,
}
}
ResponseItem::Message { .. }
| ResponseItem::Reasoning { .. }
| ResponseItem::LocalShellCall { .. }
| ResponseItem::FunctionCall { .. }
| ResponseItem::WebSearchCall { .. }
| ResponseItem::CustomToolCall { .. }
| ResponseItem::GhostSnapshot { .. }
| ResponseItem::Other => item.clone(),
}
}
}
/// API messages include every non-system item (user/assistant messages, reasoning,
/// tool calls, tool outputs, shell calls, and web-search calls).
fn is_api_message(message: &ResponseItem) -> bool {
match message {
ResponseItem::Message { role, .. } => role.as_str() != "system",
ResponseItem::FunctionCallOutput { .. }
| ResponseItem::FunctionCall { .. }
| ResponseItem::CustomToolCall { .. }
| ResponseItem::CustomToolCallOutput { .. }
| ResponseItem::LocalShellCall { .. }
| ResponseItem::Reasoning { .. }
| ResponseItem::WebSearchCall { .. } => true,
ResponseItem::GhostSnapshot { .. } => false,
ResponseItem::Other => false,
}
}
#[cfg(test)]
#[path = "history_tests.rs"]
mod tests;


@@ -0,0 +1,841 @@
use super::*;
use crate::context_manager::truncate;
use codex_git::GhostCommit;
use codex_protocol::models::ContentItem;
use codex_protocol::models::FunctionCallOutputContentItem;
use codex_protocol::models::FunctionCallOutputPayload;
use codex_protocol::models::LocalShellAction;
use codex_protocol::models::LocalShellExecAction;
use codex_protocol::models::LocalShellStatus;
use codex_protocol::models::ReasoningItemContent;
use codex_protocol::models::ReasoningItemReasoningSummary;
use pretty_assertions::assert_eq;
use regex_lite::Regex;
fn assistant_msg(text: &str) -> ResponseItem {
ResponseItem::Message {
id: None,
role: "assistant".to_string(),
content: vec![ContentItem::OutputText {
text: text.to_string(),
}],
}
}
fn create_history_with_items(items: Vec<ResponseItem>) -> ContextManager {
let mut h = ContextManager::new();
h.record_items(items.iter());
h
}
fn user_msg(text: &str) -> ResponseItem {
ResponseItem::Message {
id: None,
role: "user".to_string(),
content: vec![ContentItem::OutputText {
text: text.to_string(),
}],
}
}
fn reasoning_msg(text: &str) -> ResponseItem {
ResponseItem::Reasoning {
id: String::new(),
summary: vec![ReasoningItemReasoningSummary::SummaryText {
text: "summary".to_string(),
}],
content: Some(vec![ReasoningItemContent::ReasoningText {
text: text.to_string(),
}]),
encrypted_content: None,
}
}
#[test]
fn filters_non_api_messages() {
let mut h = ContextManager::default();
// System messages are not API messages; Other is ignored.
let system = ResponseItem::Message {
id: None,
role: "system".to_string(),
content: vec![ContentItem::OutputText {
text: "ignored".to_string(),
}],
};
let reasoning = reasoning_msg("thinking...");
h.record_items([&system, &reasoning, &ResponseItem::Other]);
// User and assistant should be retained.
let u = user_msg("hi");
let a = assistant_msg("hello");
h.record_items([&u, &a]);
let items = h.contents();
assert_eq!(
items,
vec![
ResponseItem::Reasoning {
id: String::new(),
summary: vec![ReasoningItemReasoningSummary::SummaryText {
text: "summary".to_string(),
}],
content: Some(vec![ReasoningItemContent::ReasoningText {
text: "thinking...".to_string(),
}]),
encrypted_content: None,
},
ResponseItem::Message {
id: None,
role: "user".to_string(),
content: vec![ContentItem::OutputText {
text: "hi".to_string()
}]
},
ResponseItem::Message {
id: None,
role: "assistant".to_string(),
content: vec![ContentItem::OutputText {
text: "hello".to_string()
}]
}
]
);
}
#[test]
fn get_history_for_prompt_drops_ghost_commits() {
let items = vec![ResponseItem::GhostSnapshot {
ghost_commit: GhostCommit::new("ghost-1".to_string(), None, Vec::new(), Vec::new()),
}];
let mut history = create_history_with_items(items);
let filtered = history.get_history_for_prompt();
assert_eq!(filtered, vec![]);
}
#[test]
fn remove_first_item_removes_matching_output_for_function_call() {
let items = vec![
ResponseItem::FunctionCall {
id: None,
name: "do_it".to_string(),
arguments: "{}".to_string(),
call_id: "call-1".to_string(),
},
ResponseItem::FunctionCallOutput {
call_id: "call-1".to_string(),
output: FunctionCallOutputPayload {
content: "ok".to_string(),
..Default::default()
},
},
];
let mut h = create_history_with_items(items);
h.remove_first_item();
assert_eq!(h.contents(), vec![]);
}
#[test]
fn remove_first_item_removes_matching_call_for_output() {
let items = vec![
ResponseItem::FunctionCallOutput {
call_id: "call-2".to_string(),
output: FunctionCallOutputPayload {
content: "ok".to_string(),
..Default::default()
},
},
ResponseItem::FunctionCall {
id: None,
name: "do_it".to_string(),
arguments: "{}".to_string(),
call_id: "call-2".to_string(),
},
];
let mut h = create_history_with_items(items);
h.remove_first_item();
assert_eq!(h.contents(), vec![]);
}
#[test]
fn remove_first_item_handles_local_shell_pair() {
let items = vec![
ResponseItem::LocalShellCall {
id: None,
call_id: Some("call-3".to_string()),
status: LocalShellStatus::Completed,
action: LocalShellAction::Exec(LocalShellExecAction {
command: vec!["echo".to_string(), "hi".to_string()],
timeout_ms: None,
working_directory: None,
env: None,
user: None,
}),
},
ResponseItem::FunctionCallOutput {
call_id: "call-3".to_string(),
output: FunctionCallOutputPayload {
content: "ok".to_string(),
..Default::default()
},
},
];
let mut h = create_history_with_items(items);
h.remove_first_item();
assert_eq!(h.contents(), vec![]);
}
#[test]
fn remove_first_item_handles_custom_tool_pair() {
let items = vec![
ResponseItem::CustomToolCall {
id: None,
status: None,
call_id: "tool-1".to_string(),
name: "my_tool".to_string(),
input: "{}".to_string(),
},
ResponseItem::CustomToolCallOutput {
call_id: "tool-1".to_string(),
output: "ok".to_string(),
},
];
let mut h = create_history_with_items(items);
h.remove_first_item();
assert_eq!(h.contents(), vec![]);
}
#[test]
fn normalization_retains_local_shell_outputs() {
let items = vec![
ResponseItem::LocalShellCall {
id: None,
call_id: Some("shell-1".to_string()),
status: LocalShellStatus::Completed,
action: LocalShellAction::Exec(LocalShellExecAction {
command: vec!["echo".to_string(), "hi".to_string()],
timeout_ms: None,
working_directory: None,
env: None,
user: None,
}),
},
ResponseItem::FunctionCallOutput {
call_id: "shell-1".to_string(),
output: FunctionCallOutputPayload {
content: "ok".to_string(),
..Default::default()
},
},
];
let mut history = create_history_with_items(items.clone());
let normalized = history.get_history();
assert_eq!(normalized, items);
}
#[test]
fn record_items_truncates_function_call_output_content() {
let mut history = ContextManager::new();
let long_line = "a very long line to trigger truncation\n";
let long_output = long_line.repeat(2_500);
let item = ResponseItem::FunctionCallOutput {
call_id: "call-100".to_string(),
output: FunctionCallOutputPayload {
content: long_output.clone(),
success: Some(true),
..Default::default()
},
};
history.record_items([&item]);
assert_eq!(history.items.len(), 1);
match &history.items[0] {
ResponseItem::FunctionCallOutput { output, .. } => {
assert_ne!(output.content, long_output);
assert!(
output.content.starts_with("Total output lines:"),
"expected truncated summary, got {}",
output.content
);
}
other => panic!("unexpected history item: {other:?}"),
}
}
#[test]
fn record_items_truncates_custom_tool_call_output_content() {
let mut history = ContextManager::new();
let line = "custom output that is very long\n";
let long_output = line.repeat(2_500);
let item = ResponseItem::CustomToolCallOutput {
call_id: "tool-200".to_string(),
output: long_output.clone(),
};
history.record_items([&item]);
assert_eq!(history.items.len(), 1);
match &history.items[0] {
ResponseItem::CustomToolCallOutput { output, .. } => {
assert_ne!(output, &long_output);
assert!(
output.starts_with("Total output lines:"),
"expected truncated summary, got {output}"
);
}
other => panic!("unexpected history item: {other:?}"),
}
}
fn assert_truncated_message_matches(message: &str, line: &str, total_lines: usize) {
let pattern = truncated_message_pattern(line, total_lines);
let regex = Regex::new(&pattern).unwrap_or_else(|err| {
panic!("failed to compile regex {pattern}: {err}");
});
let captures = regex
.captures(message)
.unwrap_or_else(|| panic!("message failed to match pattern {pattern}: {message}"));
let body = captures
.name("body")
.expect("missing body capture")
.as_str();
assert!(
body.len() <= truncate::MODEL_FORMAT_MAX_BYTES,
"body exceeds byte limit: {} bytes",
body.len()
);
}
fn truncated_message_pattern(line: &str, total_lines: usize) -> String {
let head_take = truncate::MODEL_FORMAT_HEAD_LINES.min(total_lines);
let tail_take = truncate::MODEL_FORMAT_TAIL_LINES.min(total_lines.saturating_sub(head_take));
let omitted = total_lines.saturating_sub(head_take + tail_take);
let escaped_line = regex_lite::escape(line);
if omitted == 0 {
return format!(
r"(?s)^Total output lines: {total_lines}\n\n(?P<body>{escaped_line}.*\n\[\.{{3}} output truncated to fit {max_bytes} bytes \.{{3}}]\n\n.*)$",
max_bytes = truncate::MODEL_FORMAT_MAX_BYTES,
);
}
format!(
r"(?s)^Total output lines: {total_lines}\n\n(?P<body>{escaped_line}.*\n\[\.{{3}} omitted {omitted} of {total_lines} lines \.{{3}}]\n\n.*)$",
)
}
#[test]
fn format_exec_output_truncates_large_error() {
let line = "very long execution error line that should trigger truncation\n";
let large_error = line.repeat(2_500); // way beyond both byte and line limits
let truncated = truncate::format_output_for_model_body(&large_error);
let total_lines = large_error.lines().count();
assert_truncated_message_matches(&truncated, line, total_lines);
assert_ne!(truncated, large_error);
}
#[test]
fn format_exec_output_marks_byte_truncation_without_omitted_lines() {
let long_line = "a".repeat(truncate::MODEL_FORMAT_MAX_BYTES + 50);
let truncated = truncate::format_output_for_model_body(&long_line);
assert_ne!(truncated, long_line);
let marker_line = format!(
"[... output truncated to fit {} bytes ...]",
truncate::MODEL_FORMAT_MAX_BYTES
);
assert!(
truncated.contains(&marker_line),
"missing byte truncation marker: {truncated}"
);
assert!(
!truncated.contains("omitted"),
"line omission marker should not appear when no lines were dropped: {truncated}"
);
}
#[test]
fn format_exec_output_returns_original_when_within_limits() {
let content = "example output\n".repeat(10);
assert_eq!(truncate::format_output_for_model_body(&content), content);
}
#[test]
fn format_exec_output_reports_omitted_lines_and_keeps_head_and_tail() {
let total_lines = truncate::MODEL_FORMAT_MAX_LINES + 100;
let content: String = (0..total_lines)
.map(|idx| format!("line-{idx}\n"))
.collect();
let truncated = truncate::format_output_for_model_body(&content);
let omitted = total_lines - truncate::MODEL_FORMAT_MAX_LINES;
let expected_marker = format!("[... omitted {omitted} of {total_lines} lines ...]");
assert!(
truncated.contains(&expected_marker),
"missing omitted marker: {truncated}"
);
assert!(
truncated.contains("line-0\n"),
"expected head line to remain: {truncated}"
);
let last_line = format!("line-{}\n", total_lines - 1);
assert!(
truncated.contains(&last_line),
"expected tail line to remain: {truncated}"
);
}
#[test]
fn format_exec_output_prefers_line_marker_when_both_limits_exceeded() {
let total_lines = truncate::MODEL_FORMAT_MAX_LINES + 42;
let long_line = "x".repeat(256);
let content: String = (0..total_lines)
.map(|idx| format!("line-{idx}-{long_line}\n"))
.collect();
let truncated = truncate::format_output_for_model_body(&content);
assert!(
truncated.contains("[... omitted 42 of 298 lines ...]"),
"expected omitted marker when line count exceeds limit: {truncated}"
);
assert!(
!truncated.contains("output truncated to fit"),
"line omission marker should take precedence over byte marker: {truncated}"
);
}
#[test]
fn truncates_across_multiple_under_limit_texts_and_reports_omitted() {
// Arrange: several text items, none exceeding per-item limit, but total exceeds budget.
let budget = truncate::MODEL_FORMAT_MAX_BYTES;
let t1_len = (budget / 2).saturating_sub(10);
let t2_len = (budget / 2).saturating_sub(10);
let remaining_after_t1_t2 = budget.saturating_sub(t1_len + t2_len);
let t3_len = 50; // gets truncated to remaining_after_t1_t2
let t4_len = 5; // omitted
let t5_len = 7; // omitted
let t1 = "a".repeat(t1_len);
let t2 = "b".repeat(t2_len);
let t3 = "c".repeat(t3_len);
let t4 = "d".repeat(t4_len);
let t5 = "e".repeat(t5_len);
let item = ResponseItem::FunctionCallOutput {
call_id: "call-omit".to_string(),
output: FunctionCallOutputPayload {
content: "irrelevant".to_string(),
content_items: Some(vec![
FunctionCallOutputContentItem::InputText { text: t1 },
FunctionCallOutputContentItem::InputText { text: t2 },
FunctionCallOutputContentItem::InputImage {
image_url: "img:mid".to_string(),
},
FunctionCallOutputContentItem::InputText { text: t3 },
FunctionCallOutputContentItem::InputText { text: t4 },
FunctionCallOutputContentItem::InputText { text: t5 },
]),
success: Some(true),
},
};
let mut history = ContextManager::new();
history.record_items([&item]);
assert_eq!(history.items.len(), 1);
let json = serde_json::to_value(&history.items[0]).expect("serialize to json");
let output = json
.get("output")
.expect("output field")
.as_array()
.expect("array output");
// Expect: t1 (full), t2 (full), image, t3 (truncated), summary mentioning 2 omitted.
assert_eq!(output.len(), 5);
let first = output[0].as_object().expect("first obj");
assert_eq!(first.get("type").unwrap(), "input_text");
let first_text = first.get("text").unwrap().as_str().unwrap();
assert_eq!(first_text.len(), t1_len);
let second = output[1].as_object().expect("second obj");
assert_eq!(second.get("type").unwrap(), "input_text");
let second_text = second.get("text").unwrap().as_str().unwrap();
assert_eq!(second_text.len(), t2_len);
assert_eq!(
output[2],
serde_json::json!({"type": "input_image", "image_url": "img:mid"})
);
let fourth = output[3].as_object().expect("fourth obj");
assert_eq!(fourth.get("type").unwrap(), "input_text");
let fourth_text = fourth.get("text").unwrap().as_str().unwrap();
assert_eq!(fourth_text.len(), remaining_after_t1_t2);
let summary = output[4].as_object().expect("summary obj");
assert_eq!(summary.get("type").unwrap(), "input_text");
let summary_text = summary.get("text").unwrap().as_str().unwrap();
assert!(summary_text.contains("omitted 2 text items"));
}
// TODO(aibrahim): run CI in release mode.
#[cfg(not(debug_assertions))]
#[test]
fn normalize_adds_missing_output_for_function_call() {
let items = vec![ResponseItem::FunctionCall {
id: None,
name: "do_it".to_string(),
arguments: "{}".to_string(),
call_id: "call-x".to_string(),
}];
let mut h = create_history_with_items(items);
h.normalize_history();
assert_eq!(
h.contents(),
vec![
ResponseItem::FunctionCall {
id: None,
name: "do_it".to_string(),
arguments: "{}".to_string(),
call_id: "call-x".to_string(),
},
ResponseItem::FunctionCallOutput {
call_id: "call-x".to_string(),
output: FunctionCallOutputPayload {
content: "aborted".to_string(),
..Default::default()
},
},
]
);
}
#[cfg(not(debug_assertions))]
#[test]
fn normalize_adds_missing_output_for_custom_tool_call() {
let items = vec![ResponseItem::CustomToolCall {
id: None,
status: None,
call_id: "tool-x".to_string(),
name: "custom".to_string(),
input: "{}".to_string(),
}];
let mut h = create_history_with_items(items);
h.normalize_history();
assert_eq!(
h.contents(),
vec![
ResponseItem::CustomToolCall {
id: None,
status: None,
call_id: "tool-x".to_string(),
name: "custom".to_string(),
input: "{}".to_string(),
},
ResponseItem::CustomToolCallOutput {
call_id: "tool-x".to_string(),
output: "aborted".to_string(),
},
]
);
}
#[cfg(not(debug_assertions))]
#[test]
fn normalize_adds_missing_output_for_local_shell_call_with_id() {
let items = vec![ResponseItem::LocalShellCall {
id: None,
call_id: Some("shell-1".to_string()),
status: LocalShellStatus::Completed,
action: LocalShellAction::Exec(LocalShellExecAction {
command: vec!["echo".to_string(), "hi".to_string()],
timeout_ms: None,
working_directory: None,
env: None,
user: None,
}),
}];
let mut h = create_history_with_items(items);
h.normalize_history();
assert_eq!(
h.contents(),
vec![
ResponseItem::LocalShellCall {
id: None,
call_id: Some("shell-1".to_string()),
status: LocalShellStatus::Completed,
action: LocalShellAction::Exec(LocalShellExecAction {
command: vec!["echo".to_string(), "hi".to_string()],
timeout_ms: None,
working_directory: None,
env: None,
user: None,
}),
},
ResponseItem::FunctionCallOutput {
call_id: "shell-1".to_string(),
output: FunctionCallOutputPayload {
content: "aborted".to_string(),
..Default::default()
},
},
]
);
}
#[cfg(not(debug_assertions))]
#[test]
fn normalize_removes_orphan_function_call_output() {
let items = vec![ResponseItem::FunctionCallOutput {
call_id: "orphan-1".to_string(),
output: FunctionCallOutputPayload {
content: "ok".to_string(),
..Default::default()
},
}];
let mut h = create_history_with_items(items);
h.normalize_history();
assert_eq!(h.contents(), vec![]);
}
#[cfg(not(debug_assertions))]
#[test]
fn normalize_removes_orphan_custom_tool_call_output() {
let items = vec![ResponseItem::CustomToolCallOutput {
call_id: "orphan-2".to_string(),
output: "ok".to_string(),
}];
let mut h = create_history_with_items(items);
h.normalize_history();
assert_eq!(h.contents(), vec![]);
}
#[cfg(not(debug_assertions))]
#[test]
fn normalize_mixed_inserts_and_removals() {
let items = vec![
// Will get an inserted output
ResponseItem::FunctionCall {
id: None,
name: "f1".to_string(),
arguments: "{}".to_string(),
call_id: "c1".to_string(),
},
// Orphan output that should be removed
ResponseItem::FunctionCallOutput {
call_id: "c2".to_string(),
output: FunctionCallOutputPayload {
content: "ok".to_string(),
..Default::default()
},
},
// Will get an inserted custom tool output
ResponseItem::CustomToolCall {
id: None,
status: None,
call_id: "t1".to_string(),
name: "tool".to_string(),
input: "{}".to_string(),
},
// Local shell call also gets an inserted function call output
ResponseItem::LocalShellCall {
id: None,
call_id: Some("s1".to_string()),
status: LocalShellStatus::Completed,
action: LocalShellAction::Exec(LocalShellExecAction {
command: vec!["echo".to_string()],
timeout_ms: None,
working_directory: None,
env: None,
user: None,
}),
},
];
let mut h = create_history_with_items(items);
h.normalize_history();
assert_eq!(
h.contents(),
vec![
ResponseItem::FunctionCall {
id: None,
name: "f1".to_string(),
arguments: "{}".to_string(),
call_id: "c1".to_string(),
},
ResponseItem::FunctionCallOutput {
call_id: "c1".to_string(),
output: FunctionCallOutputPayload {
content: "aborted".to_string(),
..Default::default()
},
},
ResponseItem::CustomToolCall {
id: None,
status: None,
call_id: "t1".to_string(),
name: "tool".to_string(),
input: "{}".to_string(),
},
ResponseItem::CustomToolCallOutput {
call_id: "t1".to_string(),
output: "aborted".to_string(),
},
ResponseItem::LocalShellCall {
id: None,
call_id: Some("s1".to_string()),
status: LocalShellStatus::Completed,
action: LocalShellAction::Exec(LocalShellExecAction {
command: vec!["echo".to_string()],
timeout_ms: None,
working_directory: None,
env: None,
user: None,
}),
},
ResponseItem::FunctionCallOutput {
call_id: "s1".to_string(),
output: FunctionCallOutputPayload {
content: "aborted".to_string(),
..Default::default()
},
},
]
);
}
// In debug builds we panic on normalization errors instead of silently fixing them.
#[cfg(debug_assertions)]
#[test]
#[should_panic]
fn normalize_adds_missing_output_for_function_call_panics_in_debug() {
let items = vec![ResponseItem::FunctionCall {
id: None,
name: "do_it".to_string(),
arguments: "{}".to_string(),
call_id: "call-x".to_string(),
}];
let mut h = create_history_with_items(items);
h.normalize_history();
}
#[cfg(debug_assertions)]
#[test]
#[should_panic]
fn normalize_adds_missing_output_for_custom_tool_call_panics_in_debug() {
let items = vec![ResponseItem::CustomToolCall {
id: None,
status: None,
call_id: "tool-x".to_string(),
name: "custom".to_string(),
input: "{}".to_string(),
}];
let mut h = create_history_with_items(items);
h.normalize_history();
}
#[cfg(debug_assertions)]
#[test]
#[should_panic]
fn normalize_adds_missing_output_for_local_shell_call_with_id_panics_in_debug() {
let items = vec![ResponseItem::LocalShellCall {
id: None,
call_id: Some("shell-1".to_string()),
status: LocalShellStatus::Completed,
action: LocalShellAction::Exec(LocalShellExecAction {
command: vec!["echo".to_string(), "hi".to_string()],
timeout_ms: None,
working_directory: None,
env: None,
user: None,
}),
}];
let mut h = create_history_with_items(items);
h.normalize_history();
}
#[cfg(debug_assertions)]
#[test]
#[should_panic]
fn normalize_removes_orphan_function_call_output_panics_in_debug() {
let items = vec![ResponseItem::FunctionCallOutput {
call_id: "orphan-1".to_string(),
output: FunctionCallOutputPayload {
content: "ok".to_string(),
..Default::default()
},
}];
let mut h = create_history_with_items(items);
h.normalize_history();
}
#[cfg(debug_assertions)]
#[test]
#[should_panic]
fn normalize_removes_orphan_custom_tool_call_output_panics_in_debug() {
let items = vec![ResponseItem::CustomToolCallOutput {
call_id: "orphan-2".to_string(),
output: "ok".to_string(),
}];
let mut h = create_history_with_items(items);
h.normalize_history();
}
#[cfg(debug_assertions)]
#[test]
#[should_panic]
fn normalize_mixed_inserts_and_removals_panics_in_debug() {
let items = vec![
ResponseItem::FunctionCall {
id: None,
name: "f1".to_string(),
arguments: "{}".to_string(),
call_id: "c1".to_string(),
},
ResponseItem::FunctionCallOutput {
call_id: "c2".to_string(),
output: FunctionCallOutputPayload {
content: "ok".to_string(),
..Default::default()
},
},
ResponseItem::CustomToolCall {
id: None,
status: None,
call_id: "t1".to_string(),
name: "tool".to_string(),
input: "{}".to_string(),
},
ResponseItem::LocalShellCall {
id: None,
call_id: Some("s1".to_string()),
status: LocalShellStatus::Completed,
action: LocalShellAction::Exec(LocalShellExecAction {
command: vec!["echo".to_string()],
timeout_ms: None,
working_directory: None,
env: None,
user: None,
}),
},
];
let mut h = create_history_with_items(items);
h.normalize_history();
}
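
These debug-build tests hinge on `crate::util::error_or_panic` (imported by the normalization module below): debug builds panic so invariant violations fail fast, while release builds log and let the repair logic proceed. A plausible sketch of that helper, assuming `tracing` for logging; the actual implementation is not part of this diff:

```rust
use std::fmt::Display;

// Hypothetical reconstruction of crate::util::error_or_panic.
pub(crate) fn error_or_panic(message: impl Display) {
    if cfg!(debug_assertions) {
        // Debug builds: surface history corruption immediately.
        panic!("{message}");
    } else {
        // Release builds: record the problem and let normalization repair it.
        tracing::error!("{}", message);
    }
}
```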

View File

@@ -0,0 +1,6 @@
mod history;
mod normalize;
mod truncate;
pub(crate) use history::ContextManager;
pub(crate) use truncate::format_output_for_model_body;

View File

@@ -0,0 +1,213 @@
use std::collections::HashSet;
use codex_protocol::models::FunctionCallOutputPayload;
use codex_protocol::models::ResponseItem;
use crate::util::error_or_panic;
pub(crate) fn ensure_call_outputs_present(items: &mut Vec<ResponseItem>) {
// Collect synthetic outputs to insert immediately after their calls.
// Store the insertion position (index of call) alongside the item so
// we can insert in reverse order and avoid index shifting.
let mut missing_outputs_to_insert: Vec<(usize, ResponseItem)> = Vec::new();
for (idx, item) in items.iter().enumerate() {
match item {
ResponseItem::FunctionCall { call_id, .. } => {
let has_output = items.iter().any(|i| match i {
ResponseItem::FunctionCallOutput {
call_id: existing, ..
} => existing == call_id,
_ => false,
});
if !has_output {
error_or_panic(format!(
"Function call output is missing for call id: {call_id}"
));
missing_outputs_to_insert.push((
idx,
ResponseItem::FunctionCallOutput {
call_id: call_id.clone(),
output: FunctionCallOutputPayload {
content: "aborted".to_string(),
..Default::default()
},
},
));
}
}
ResponseItem::CustomToolCall { call_id, .. } => {
let has_output = items.iter().any(|i| match i {
ResponseItem::CustomToolCallOutput {
call_id: existing, ..
} => existing == call_id,
_ => false,
});
if !has_output {
error_or_panic(format!(
"Custom tool call output is missing for call id: {call_id}"
));
missing_outputs_to_insert.push((
idx,
ResponseItem::CustomToolCallOutput {
call_id: call_id.clone(),
output: "aborted".to_string(),
},
));
}
}
// A LocalShellCall's output is represented in upstream streams by a FunctionCallOutput.
ResponseItem::LocalShellCall { call_id, .. } => {
if let Some(call_id) = call_id.as_ref() {
let has_output = items.iter().any(|i| match i {
ResponseItem::FunctionCallOutput {
call_id: existing, ..
} => existing == call_id,
_ => false,
});
if !has_output {
error_or_panic(format!(
"Local shell call output is missing for call id: {call_id}"
));
missing_outputs_to_insert.push((
idx,
ResponseItem::FunctionCallOutput {
call_id: call_id.clone(),
output: FunctionCallOutputPayload {
content: "aborted".to_string(),
..Default::default()
},
},
));
}
}
}
_ => {}
}
}
// Insert synthetic outputs in reverse index order to avoid re-indexing.
for (idx, output_item) in missing_outputs_to_insert.into_iter().rev() {
items.insert(idx + 1, output_item);
}
}
pub(crate) fn remove_orphan_outputs(items: &mut Vec<ResponseItem>) {
let function_call_ids: HashSet<String> = items
.iter()
.filter_map(|i| match i {
ResponseItem::FunctionCall { call_id, .. } => Some(call_id.clone()),
_ => None,
})
.collect();
let local_shell_call_ids: HashSet<String> = items
.iter()
.filter_map(|i| match i {
ResponseItem::LocalShellCall {
call_id: Some(call_id),
..
} => Some(call_id.clone()),
_ => None,
})
.collect();
let custom_tool_call_ids: HashSet<String> = items
.iter()
.filter_map(|i| match i {
ResponseItem::CustomToolCall { call_id, .. } => Some(call_id.clone()),
_ => None,
})
.collect();
items.retain(|item| match item {
ResponseItem::FunctionCallOutput { call_id, .. } => {
let has_match =
function_call_ids.contains(call_id) || local_shell_call_ids.contains(call_id);
if !has_match {
error_or_panic(format!(
"Orphan function call output for call id: {call_id}"
));
}
has_match
}
ResponseItem::CustomToolCallOutput { call_id, .. } => {
let has_match = custom_tool_call_ids.contains(call_id);
if !has_match {
error_or_panic(format!(
"Orphan custom tool call output for call id: {call_id}"
));
}
has_match
}
_ => true,
});
}
pub(crate) fn remove_corresponding_for(items: &mut Vec<ResponseItem>, item: &ResponseItem) {
match item {
ResponseItem::FunctionCall { call_id, .. } => {
remove_first_matching(items, |i| {
matches!(
i,
ResponseItem::FunctionCallOutput {
call_id: existing, ..
} if existing == call_id
)
});
}
ResponseItem::FunctionCallOutput { call_id, .. } => {
if let Some(pos) = items.iter().position(|i| {
matches!(i, ResponseItem::FunctionCall { call_id: existing, .. } if existing == call_id)
}) {
items.remove(pos);
} else if let Some(pos) = items.iter().position(|i| {
matches!(i, ResponseItem::LocalShellCall { call_id: Some(existing), .. } if existing == call_id)
}) {
items.remove(pos);
}
}
ResponseItem::CustomToolCall { call_id, .. } => {
remove_first_matching(items, |i| {
matches!(
i,
ResponseItem::CustomToolCallOutput {
call_id: existing, ..
} if existing == call_id
)
});
}
ResponseItem::CustomToolCallOutput { call_id, .. } => {
remove_first_matching(
items,
|i| matches!(i, ResponseItem::CustomToolCall { call_id: existing, .. } if existing == call_id),
);
}
ResponseItem::LocalShellCall {
call_id: Some(call_id),
..
} => {
remove_first_matching(items, |i| {
matches!(
i,
ResponseItem::FunctionCallOutput {
call_id: existing, ..
} if existing == call_id
)
});
}
_ => {}
}
}
fn remove_first_matching<F>(items: &mut Vec<ResponseItem>, predicate: F)
where
F: Fn(&ResponseItem) -> bool,
{
if let Some(pos) = items.iter().position(predicate) {
items.remove(pos);
}
}
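
Both passes mutate the same buffer: the first synthesizes `"aborted"` outputs for calls that never completed, the second drops outputs whose originating call is gone. A sketch of how `ContextManager::normalize_history` might chain them; hypothetical, since the `history.rs` diff is suppressed later in this view and the field name `items` is assumed:

```rust
impl ContextManager {
    pub(crate) fn normalize_history(&mut self) {
        // Insert synthetic outputs for dangling calls first, then prune
        // outputs that no longer have a matching call.
        ensure_call_outputs_present(&mut self.items);
        remove_orphan_outputs(&mut self.items);
    }
}
```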

View File

@@ -0,0 +1,128 @@
use codex_protocol::models::FunctionCallOutputContentItem;
use codex_utils_string::take_bytes_at_char_boundary;
use codex_utils_string::take_last_bytes_at_char_boundary;
// Model-formatting limits: clients get full streams; only content sent to the model is truncated.
pub(crate) const MODEL_FORMAT_MAX_BYTES: usize = 10 * 1024; // 10 KiB
pub(crate) const MODEL_FORMAT_MAX_LINES: usize = 256; // lines
pub(crate) const MODEL_FORMAT_HEAD_LINES: usize = MODEL_FORMAT_MAX_LINES / 2;
pub(crate) const MODEL_FORMAT_TAIL_LINES: usize = MODEL_FORMAT_MAX_LINES - MODEL_FORMAT_HEAD_LINES; // 128
pub(crate) const MODEL_FORMAT_HEAD_BYTES: usize = MODEL_FORMAT_MAX_BYTES / 2;
pub(crate) fn globally_truncate_function_output_items(
items: &[FunctionCallOutputContentItem],
) -> Vec<FunctionCallOutputContentItem> {
let mut out: Vec<FunctionCallOutputContentItem> = Vec::with_capacity(items.len());
let mut remaining = MODEL_FORMAT_MAX_BYTES;
let mut omitted_text_items = 0usize;
for it in items {
match it {
FunctionCallOutputContentItem::InputText { text } => {
if remaining == 0 {
omitted_text_items += 1;
continue;
}
let len = text.len();
if len <= remaining {
out.push(FunctionCallOutputContentItem::InputText { text: text.clone() });
remaining -= len;
} else {
let slice = take_bytes_at_char_boundary(text, remaining);
if !slice.is_empty() {
out.push(FunctionCallOutputContentItem::InputText {
text: slice.to_string(),
});
}
remaining = 0;
}
}
// todo(aibrahim): handle input images; resize
FunctionCallOutputContentItem::InputImage { image_url } => {
out.push(FunctionCallOutputContentItem::InputImage {
image_url: image_url.clone(),
});
}
}
}
if omitted_text_items > 0 {
out.push(FunctionCallOutputContentItem::InputText {
text: format!("[omitted {omitted_text_items} text items ...]"),
});
}
out
}
pub(crate) fn format_output_for_model_body(content: &str) -> String {
// Head+tail truncation for the model: show the beginning and end with an elision.
// Clients still receive full streams; only this formatted summary is capped.
let total_lines = content.lines().count();
if content.len() <= MODEL_FORMAT_MAX_BYTES && total_lines <= MODEL_FORMAT_MAX_LINES {
return content.to_string();
}
let output = truncate_formatted_exec_output(content, total_lines);
format!("Total output lines: {total_lines}\n\n{output}")
}
fn truncate_formatted_exec_output(content: &str, total_lines: usize) -> String {
let segments: Vec<&str> = content.split_inclusive('\n').collect();
let head_take = MODEL_FORMAT_HEAD_LINES.min(segments.len());
let tail_take = MODEL_FORMAT_TAIL_LINES.min(segments.len().saturating_sub(head_take));
let omitted = segments.len().saturating_sub(head_take + tail_take);
let head_slice_end: usize = segments
.iter()
.take(head_take)
.map(|segment| segment.len())
.sum();
let tail_slice_start: usize = if tail_take == 0 {
content.len()
} else {
content.len()
- segments
.iter()
.rev()
.take(tail_take)
.map(|segment| segment.len())
.sum::<usize>()
};
let head_slice = &content[..head_slice_end];
let tail_slice = &content[tail_slice_start..];
let truncated_by_bytes = content.len() > MODEL_FORMAT_MAX_BYTES;
// This is slightly inaccurate: the line count includes metadata lines, not just shell output lines.
let marker = if omitted > 0 {
Some(format!(
"\n[... omitted {omitted} of {total_lines} lines ...]\n\n"
))
} else if truncated_by_bytes {
Some(format!(
"\n[... output truncated to fit {MODEL_FORMAT_MAX_BYTES} bytes ...]\n\n"
))
} else {
None
};
let marker_len = marker.as_ref().map_or(0, String::len);
let base_head_budget = MODEL_FORMAT_HEAD_BYTES.min(MODEL_FORMAT_MAX_BYTES);
let head_budget = base_head_budget.min(MODEL_FORMAT_MAX_BYTES.saturating_sub(marker_len));
let head_part = take_bytes_at_char_boundary(head_slice, head_budget);
let mut result = String::with_capacity(MODEL_FORMAT_MAX_BYTES.min(content.len()));
result.push_str(head_part);
if let Some(marker_text) = marker.as_ref() {
result.push_str(marker_text);
}
let remaining = MODEL_FORMAT_MAX_BYTES.saturating_sub(result.len());
if remaining == 0 {
return result;
}
let tail_part = take_last_bytes_at_char_boundary(tail_slice, remaining);
result.push_str(tail_part);
result
}
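
As a quick illustration of the behavior (a hedged sketch reusing the constants above): short content passes through verbatim, while content over the line limit gains the line-count header and an elision marker.

```rust
// Short output: returned unchanged.
assert_eq!(format_output_for_model_body("ok\n"), "ok\n");

// 1,000 lines exceeds MODEL_FORMAT_MAX_LINES (256), so the result is
// prefixed with the total line count and elided in the middle.
let big: String = (0..1_000).map(|i| format!("line {i}\n")).collect();
let formatted = format_output_for_model_body(&big);
assert!(formatted.starts_with("Total output lines: 1000"));
assert!(formatted.contains("omitted"));
```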

File diff suppressed because it is too large

View File

@@ -32,12 +32,11 @@ pub async fn discover_prompts_in_excluding(
while let Ok(Some(entry)) = entries.next_entry().await {
let path = entry.path();
let is_file = entry
.file_type()
let is_file_like = fs::metadata(&path)
.await
.map(|ft| ft.is_file())
.map(|m| m.is_file())
.unwrap_or(false);
if !is_file {
if !is_file_like {
continue;
}
// Only include Markdown files with a .md extension.
@@ -197,6 +196,25 @@ mod tests {
assert_eq!(names, vec!["good"]);
}
#[tokio::test]
#[cfg(unix)]
async fn discovers_symlinked_md_files() {
let tmp = tempdir().expect("create TempDir");
let dir = tmp.path();
// Create a real file
fs::write(dir.join("real.md"), b"real content").unwrap();
// Create a symlink to the real file
std::os::unix::fs::symlink(dir.join("real.md"), dir.join("link.md")).unwrap();
let found = discover_prompts_in(dir).await;
let names: Vec<String> = found.into_iter().map(|e| e.name).collect();
// Both real and link should be discovered, sorted alphabetically
assert_eq!(names, vec!["link", "real"]);
}
#[tokio::test]
async fn parses_frontmatter_and_strips_from_body() {
let tmp = tempdir().expect("create TempDir");

View File

@@ -135,6 +135,9 @@ pub enum CodexErr {
#[error("unsupported operation: {0}")]
UnsupportedOperation(String),
#[error("{0}")]
RefreshTokenFailed(RefreshTokenFailedError),
#[error("Fatal error: {0}")]
Fatal(String),
@@ -201,6 +204,30 @@ impl std::fmt::Display for ResponseStreamFailed {
}
}
#[derive(Debug, Clone, PartialEq, Eq, Error)]
#[error("{message}")]
pub struct RefreshTokenFailedError {
pub reason: RefreshTokenFailedReason,
pub message: String,
}
impl RefreshTokenFailedError {
pub fn new(reason: RefreshTokenFailedReason, message: impl Into<String>) -> Self {
Self {
reason,
message: message.into(),
}
}
}
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum RefreshTokenFailedReason {
Expired,
Exhausted,
Revoked,
Other,
}
#[derive(Debug)]
pub struct UnexpectedResponseError {
pub status: StatusCode,

View File

@@ -18,7 +18,7 @@ mod codex_delegate;
mod command_safety;
pub mod config;
pub mod config_loader;
mod conversation_history;
mod context_manager;
pub mod custom_prompts;
mod environment_context;
pub mod error;
@@ -75,6 +75,7 @@ pub use rollout::find_conversation_path_by_id_str;
pub use rollout::list::ConversationItem;
pub use rollout::list::ConversationsPage;
pub use rollout::list::Cursor;
pub use rollout::list::parse_cursor;
pub use rollout::list::read_head_for_summary;
mod function_tool;
mod state;

View File

@@ -273,7 +273,7 @@ async fn traverse_directories_for_paths(
/// Pagination cursor token format: "<file_ts>|<uuid>" where `file_ts` matches the
/// filename timestamp portion (YYYY-MM-DDThh-mm-ss) used in rollout filenames.
/// The cursor orders files by timestamp desc, then UUID desc.
fn parse_cursor(token: &str) -> Option<Cursor> {
pub fn parse_cursor(token: &str) -> Option<Cursor> {
let (file_ts, uuid_str) = token.split_once('|')?;
let Ok(uuid) = Uuid::parse_str(uuid_str) else {
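
For illustration, a token matching the documented shape parses, while anything missing the `|` separator is rejected (values below are hypothetical):

```rust
// "<file_ts>|<uuid>", with the timestamp in rollout-filename form.
let token = "2025-11-06T10-34-20|67e55044-10b1-426f-9247-bb680e5fe0c8";
assert!(parse_cursor(token).is_some());
assert!(parse_cursor("not-a-cursor").is_none());
```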

View File

@@ -67,7 +67,8 @@ pub(crate) async fn spawn_child_async(
// This relies on prctl(2), so it only works on Linux.
#[cfg(target_os = "linux")]
unsafe {
cmd.pre_exec(|| {
let parent_pid = libc::getpid();
cmd.pre_exec(move || {
// This prctl call effectively requests, "deliver SIGTERM when my
// current parent dies."
if libc::prctl(libc::PR_SET_PDEATHSIG, libc::SIGTERM) == -1 {
@@ -76,9 +77,10 @@ pub(crate) async fn spawn_child_async(
// Though if there was a race condition and this pre_exec() block is
// run _after_ the parent (i.e., the Codex process) has already
// exited, then the parent is the _init_ process (which will never
// die), so we should just terminate the child process now.
if libc::getppid() == 1 {
// exited, then the parent will be the closest configured "subreaper"
// ancestor process, or PID 1 (init). If the Codex process has already
// exited, the child process should exit as well.
if libc::getppid() != parent_pid {
libc::raise(libc::SIGTERM);
}
Ok(())
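
Assembled from the hunks above, the resulting guard reads roughly like this sketch (the early-error return inside the `prctl` branch is assumed, since the diff elides those lines):

```rust
#[cfg(target_os = "linux")]
unsafe {
    // Capture the Codex process's PID while still in the parent.
    let parent_pid = libc::getpid();
    cmd.pre_exec(move || {
        // Ask the kernel to deliver SIGTERM when our parent dies.
        if libc::prctl(libc::PR_SET_PDEATHSIG, libc::SIGTERM) == -1 {
            return Err(std::io::Error::last_os_error()); // assumed
        }
        // Already reparented (to init or a subreaper)? Then Codex has
        // exited, so terminate the child now.
        if libc::getppid() != parent_pid {
            libc::raise(libc::SIGTERM);
        }
        Ok(())
    });
}
```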

View File

@@ -3,7 +3,7 @@
use codex_protocol::models::ResponseItem;
use crate::codex::SessionConfiguration;
use crate::conversation_history::ConversationHistory;
use crate::context_manager::ContextManager;
use crate::protocol::RateLimitSnapshot;
use crate::protocol::TokenUsage;
use crate::protocol::TokenUsageInfo;
@@ -11,7 +11,7 @@ use crate::protocol::TokenUsageInfo;
/// Persistent, session-scoped state previously stored directly on `Session`.
pub(crate) struct SessionState {
pub(crate) session_configuration: SessionConfiguration,
pub(crate) history: ConversationHistory,
pub(crate) history: ContextManager,
pub(crate) latest_rate_limits: Option<RateLimitSnapshot>,
}
@@ -20,7 +20,7 @@ impl SessionState {
pub(crate) fn new(session_configuration: SessionConfiguration) -> Self {
Self {
session_configuration,
history: ConversationHistory::new(),
history: ContextManager::new(),
latest_rate_limits: None,
}
}
@@ -34,7 +34,7 @@ impl SessionState {
self.history.record_items(items)
}
pub(crate) fn clone_history(&self) -> ConversationHistory {
pub(crate) fn clone_history(&self) -> ContextManager {
self.history.clone()
}

View File

@@ -10,8 +10,6 @@ use codex_protocol::protocol::Event;
use codex_protocol::protocol::EventMsg;
use codex_protocol::protocol::ExitedReviewModeEvent;
use codex_protocol::protocol::ItemCompletedEvent;
use codex_protocol::protocol::ReasoningContentDeltaEvent;
use codex_protocol::protocol::ReasoningRawContentDeltaEvent;
use codex_protocol::protocol::ReviewOutputEvent;
use tokio_util::sync::CancellationToken;
@@ -124,9 +122,7 @@ async fn process_review_events(
..
})
| EventMsg::AgentMessageDelta(AgentMessageDeltaEvent { .. })
| EventMsg::AgentMessageContentDelta(AgentMessageContentDeltaEvent { .. })
| EventMsg::ReasoningContentDelta(ReasoningContentDeltaEvent { .. })
| EventMsg::ReasoningRawContentDelta(ReasoningRawContentDeltaEvent { .. }) => {}
| EventMsg::AgentMessageContentDelta(AgentMessageContentDeltaEvent { .. }) => {}
EventMsg::TaskComplete(task_complete) => {
// Parse review output from the last agent message (if present).
let out = task_complete

View File

@@ -9,7 +9,7 @@ pub mod runtimes;
pub mod sandboxing;
pub mod spec;
use crate::conversation_history::format_output_for_model_body;
use crate::context_manager::format_output_for_model_body;
use crate::exec::ExecToolCallOutput;
pub use router::ToolRouter;
use serde::Serialize;

View File

@@ -0,0 +1,272 @@
use anyhow::Context;
use anyhow::Result;
use base64::Engine;
use chrono::Duration;
use chrono::Utc;
use codex_core::CodexAuth;
use codex_core::auth::AuthCredentialsStoreMode;
use codex_core::auth::AuthDotJson;
use codex_core::auth::REFRESH_TOKEN_URL_OVERRIDE_ENV_VAR;
use codex_core::auth::RefreshTokenError;
use codex_core::auth::load_auth_dot_json;
use codex_core::auth::save_auth;
use codex_core::error::RefreshTokenFailedReason;
use codex_core::token_data::IdTokenInfo;
use codex_core::token_data::TokenData;
use core_test_support::skip_if_no_network;
use pretty_assertions::assert_eq;
use serde::Serialize;
use serde_json::json;
use std::ffi::OsString;
use tempfile::TempDir;
use wiremock::Mock;
use wiremock::MockServer;
use wiremock::ResponseTemplate;
use wiremock::matchers::method;
use wiremock::matchers::path;
const INITIAL_ACCESS_TOKEN: &str = "initial-access-token";
const INITIAL_REFRESH_TOKEN: &str = "initial-refresh-token";
#[serial_test::serial(auth_refresh)]
#[tokio::test]
async fn refresh_token_succeeds_updates_storage() -> Result<()> {
skip_if_no_network!(Ok(()));
let server = MockServer::start().await;
Mock::given(method("POST"))
.and(path("/oauth/token"))
.respond_with(ResponseTemplate::new(200).set_body_json(json!({
"access_token": "new-access-token",
"refresh_token": "new-refresh-token"
})))
.expect(1)
.mount(&server)
.await;
let ctx = RefreshTokenTestContext::new(&server)?;
let auth = ctx.auth.clone();
let access = auth
.refresh_token()
.await
.context("refresh should succeed")?;
assert_eq!(access, "new-access-token");
let stored = ctx.load_auth()?;
let tokens = stored.tokens.as_ref().context("tokens should exist")?;
assert_eq!(tokens.access_token, "new-access-token");
assert_eq!(tokens.refresh_token, "new-refresh-token");
let refreshed_at = stored
.last_refresh
.as_ref()
.context("last_refresh should be recorded")?;
assert!(
*refreshed_at >= ctx.initial_last_refresh,
"last_refresh should advance"
);
let cached = auth
.get_token_data()
.await
.context("token data should be cached")?;
assert_eq!(cached.access_token, "new-access-token");
server.verify().await;
Ok(())
}
#[serial_test::serial(auth_refresh)]
#[tokio::test]
async fn refresh_token_returns_permanent_error_for_expired_refresh_token() -> Result<()> {
skip_if_no_network!(Ok(()));
let server = MockServer::start().await;
Mock::given(method("POST"))
.and(path("/oauth/token"))
.respond_with(ResponseTemplate::new(401).set_body_json(json!({
"error": {
"code": "refresh_token_expired"
}
})))
.expect(1)
.mount(&server)
.await;
let ctx = RefreshTokenTestContext::new(&server)?;
let auth = ctx.auth.clone();
let err = auth
.refresh_token()
.await
.err()
.context("refresh should fail")?;
assert_eq!(err.failed_reason(), Some(RefreshTokenFailedReason::Expired));
let stored = ctx.load_auth()?;
let tokens = stored.tokens.as_ref().context("tokens should remain")?;
assert_eq!(tokens.access_token, INITIAL_ACCESS_TOKEN);
assert_eq!(tokens.refresh_token, INITIAL_REFRESH_TOKEN);
assert_eq!(
*stored
.last_refresh
.as_ref()
.context("last_refresh should remain unchanged")?,
ctx.initial_last_refresh,
);
server.verify().await;
Ok(())
}
#[serial_test::serial(auth_refresh)]
#[tokio::test]
async fn refresh_token_returns_transient_error_on_server_failure() -> Result<()> {
skip_if_no_network!(Ok(()));
let server = MockServer::start().await;
Mock::given(method("POST"))
.and(path("/oauth/token"))
.respond_with(ResponseTemplate::new(500).set_body_json(json!({
"error": "temporary-failure"
})))
.expect(1)
.mount(&server)
.await;
let ctx = RefreshTokenTestContext::new(&server)?;
let auth = ctx.auth.clone();
let err = auth
.refresh_token()
.await
.err()
.context("refresh should fail")?;
assert!(matches!(err, RefreshTokenError::Transient(_)));
assert_eq!(err.failed_reason(), None);
let stored = ctx.load_auth()?;
let tokens = stored.tokens.as_ref().context("tokens should remain")?;
assert_eq!(tokens.access_token, INITIAL_ACCESS_TOKEN);
assert_eq!(tokens.refresh_token, INITIAL_REFRESH_TOKEN);
assert_eq!(
*stored
.last_refresh
.as_ref()
.context("last_refresh should remain unchanged")?,
ctx.initial_last_refresh,
);
server.verify().await;
Ok(())
}
struct RefreshTokenTestContext {
codex_home: TempDir,
auth: CodexAuth,
initial_last_refresh: chrono::DateTime<Utc>,
_env_guard: EnvGuard,
}
impl RefreshTokenTestContext {
fn new(server: &MockServer) -> Result<Self> {
let codex_home = TempDir::new()?;
let initial_last_refresh = Utc::now() - Duration::days(1);
let mut id_token = IdTokenInfo::default();
id_token.raw_jwt = minimal_jwt();
let tokens = TokenData {
id_token,
access_token: INITIAL_ACCESS_TOKEN.to_string(),
refresh_token: INITIAL_REFRESH_TOKEN.to_string(),
account_id: Some("account-id".to_string()),
};
let auth_dot_json = AuthDotJson {
openai_api_key: None,
tokens: Some(tokens),
last_refresh: Some(initial_last_refresh),
};
save_auth(
codex_home.path(),
&auth_dot_json,
AuthCredentialsStoreMode::File,
)?;
let endpoint = format!("{}/oauth/token", server.uri());
let env_guard = EnvGuard::set(REFRESH_TOKEN_URL_OVERRIDE_ENV_VAR, endpoint);
let auth = CodexAuth::from_auth_storage(codex_home.path(), AuthCredentialsStoreMode::File)?
.context("auth should load from storage")?;
Ok(Self {
codex_home,
auth,
initial_last_refresh,
_env_guard: env_guard,
})
}
fn load_auth(&self) -> Result<AuthDotJson> {
load_auth_dot_json(self.codex_home.path(), AuthCredentialsStoreMode::File)
.context("load auth.json")?
.context("auth.json should exist")
}
}
struct EnvGuard {
key: &'static str,
original: Option<OsString>,
}
impl EnvGuard {
fn set(key: &'static str, value: String) -> Self {
let original = std::env::var_os(key);
// SAFETY: these tests execute serially, so updating the process environment is safe.
unsafe {
std::env::set_var(key, &value);
}
Self { key, original }
}
}
impl Drop for EnvGuard {
fn drop(&mut self) {
// SAFETY: the guard restores the original environment value before other tests run.
unsafe {
match &self.original {
Some(value) => std::env::set_var(self.key, value),
None => std::env::remove_var(self.key),
}
}
}
}
fn minimal_jwt() -> String {
#[derive(Serialize)]
struct Header {
alg: &'static str,
typ: &'static str,
}
let header = Header {
alg: "none",
typ: "JWT",
};
let payload = json!({ "sub": "user-123" });
fn b64(data: &[u8]) -> String {
base64::engine::general_purpose::URL_SAFE_NO_PAD.encode(data)
}
let header_bytes = match serde_json::to_vec(&header) {
Ok(bytes) => bytes,
Err(err) => panic!("serialize header: {err}"),
};
let payload_bytes = match serde_json::to_vec(&payload) {
Ok(bytes) => bytes,
Err(err) => panic!("serialize payload: {err}"),
};
let header_b64 = b64(&header_bytes);
let payload_b64 = b64(&payload_bytes);
let signature_b64 = b64(b"sig");
format!("{header_b64}.{payload_b64}.{signature_b64}")
}

View File

@@ -8,6 +8,8 @@ use core_test_support::responses::ev_apply_patch_function_call;
use core_test_support::responses::ev_assistant_message;
use core_test_support::responses::ev_completed;
use core_test_support::responses::ev_function_call;
use core_test_support::responses::ev_reasoning_item_added;
use core_test_support::responses::ev_reasoning_summary_text_delta;
use core_test_support::responses::ev_response_created;
use core_test_support::responses::mount_sse_sequence;
use core_test_support::responses::sse;
@@ -15,6 +17,7 @@ use core_test_support::responses::start_mock_server;
use core_test_support::skip_if_no_network;
use core_test_support::test_codex::test_codex;
use core_test_support::wait_for_event;
use pretty_assertions::assert_eq;
/// Delegate should surface ExecApprovalRequest from the sub-agent and proceed
/// after the parent submits an approval decision.
@@ -171,3 +174,52 @@ async fn codex_delegate_forwards_patch_approval_and_proceeds_on_decision() {
.await;
wait_for_event(&test.codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn codex_delegate_ignores_legacy_deltas() {
skip_if_no_network!();
// Single response with reasoning summary deltas.
let sse_stream = sse(vec![
ev_response_created("resp-1"),
ev_reasoning_item_added("reason-1", &["initial"]),
ev_reasoning_summary_text_delta("think-1"),
ev_completed("resp-1"),
]);
let server = start_mock_server().await;
mount_sse_sequence(&server, vec![sse_stream]).await;
let mut builder = test_codex();
let test = builder.build(&server).await.expect("build test codex");
// Kick off review (delegated).
test.codex
.submit(Op::Review {
review_request: ReviewRequest {
prompt: "Please review".to_string(),
user_facing_hint: "review".to_string(),
},
})
.await
.expect("submit review");
let mut reasoning_delta_count = 0;
let mut legacy_reasoning_delta_count = 0;
loop {
let ev = wait_for_event(&test.codex, |_| true).await;
match ev {
EventMsg::ReasoningContentDelta(_) => reasoning_delta_count += 1,
EventMsg::AgentReasoningDelta(_) => legacy_reasoning_delta_count += 1,
EventMsg::TaskComplete(_) => break,
_ => {}
}
}
assert_eq!(reasoning_delta_count, 1, "expected one new reasoning delta");
assert_eq!(
legacy_reasoning_delta_count, 1,
"expected one legacy reasoning delta"
);
}

View File

@@ -108,19 +108,19 @@ async fn summarize_context_three_requests_and_instructions() {
let body = std::str::from_utf8(&req.body).unwrap_or("");
body.contains("\"text\":\"hello world\"") && !body.contains(COMPACT_PROMPT_MARKER)
};
mount_sse_once_match(&server, first_matcher, sse1).await;
let first_request_mock = mount_sse_once_match(&server, first_matcher, sse1).await;
let second_matcher = |req: &wiremock::Request| {
let body = std::str::from_utf8(&req.body).unwrap_or("");
body.contains(COMPACT_PROMPT_MARKER)
};
mount_sse_once_match(&server, second_matcher, sse2).await;
let second_request_mock = mount_sse_once_match(&server, second_matcher, sse2).await;
let third_matcher = |req: &wiremock::Request| {
let body = std::str::from_utf8(&req.body).unwrap_or("");
body.contains(&format!("\"text\":\"{THIRD_USER_MSG}\""))
};
mount_sse_once_match(&server, third_matcher, sse3).await;
let third_request_mock = mount_sse_once_match(&server, third_matcher, sse3).await;
// Build config pointing to the mock server and spawn Codex.
let model_provider = ModelProviderInfo {
@@ -172,16 +172,13 @@ async fn summarize_context_three_requests_and_instructions() {
wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
// Inspect the three captured requests.
let requests = server.received_requests().await.unwrap();
assert_eq!(requests.len(), 3, "expected exactly three requests");
let req1 = first_request_mock.single_request();
let req2 = second_request_mock.single_request();
let req3 = third_request_mock.single_request();
let req1 = &requests[0];
let req2 = &requests[1];
let req3 = &requests[2];
let body1 = req1.body_json::<serde_json::Value>().unwrap();
let body2 = req2.body_json::<serde_json::Value>().unwrap();
let body3 = req3.body_json::<serde_json::Value>().unwrap();
let body1 = req1.body_json();
let body2 = req2.body_json();
let body3 = req3.body_json();
// Manual compact should keep the baseline developer instructions.
let instr1 = body1.get("instructions").and_then(|v| v.as_str()).unwrap();

View File

@@ -43,7 +43,7 @@ async fn emits_deprecation_notice_for_legacy_feature_flag() -> anyhow::Result<()
assert_eq!(
details.as_deref(),
Some(
"You can either enable it using the CLI with `--enable streamable_shell` or through the config.toml file with `[features].streamable_shell`"
"Enable it with `--enable streamable_shell` or `[features].streamable_shell` in config.toml. See https://github.com/openai/codex/blob/main/docs/config.md#feature-flags for details."
),
);

View File

@@ -8,6 +8,7 @@ mod apply_patch_cli;
mod apply_patch_freeform;
#[cfg(not(target_os = "windows"))]
mod approvals;
mod auth_refresh;
mod cli_stream;
mod client;
mod codex_delegate;

View File

@@ -24,6 +24,10 @@ use responses::start_mock_server;
use std::time::Duration;
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
#[ignore = "flaky on ubuntu-24.04-arm - aarch64-unknown-linux-gnu"]
// The notify script gets far enough to create (and therefore surface) the file,
// but hasn't flushed the JSON yet. Reading an empty file produces EOF while parsing
// a value at line 1 column 0. May be caused by a slow runner.
async fn summarize_context_three_requests_and_instructions() -> anyhow::Result<()> {
skip_if_no_network!(Ok(()));

View File

@@ -5,7 +5,7 @@
lib,
...
}:
rustPlatform.buildRustPackage (finalAttrs: {
rustPlatform.buildRustPackage (_: {
env = {
PKG_CONFIG_PATH = "${openssl.dev}/lib/pkgconfig:$PKG_CONFIG_PATH";
};
@@ -21,6 +21,7 @@ rustPlatform.buildRustPackage (finalAttrs: {
cargoLock.outputHashes = {
"ratatui-0.29.0" = "sha256-HBvT5c8GsiCxMffNjJGLmHnvG77A6cqEL+1ARurBXho=";
"crossterm-0.28.1" = "sha256-6qCtfSMuXACKFb9ATID39XyFDIEMFDmbx6SSmNe+728=";
};
meta = with lib; {

View File

@@ -10,9 +10,11 @@ use axum::extract::State;
use axum::http::Request;
use axum::http::StatusCode;
use axum::http::header::AUTHORIZATION;
use axum::http::header::CONTENT_TYPE;
use axum::middleware;
use axum::middleware::Next;
use axum::response::Response;
use axum::routing::get;
use rmcp::ErrorData as McpError;
use rmcp::handler::server::ServerHandler;
use rmcp::model::CallToolRequestParam;
@@ -256,14 +258,35 @@ async fn main() -> Result<(), Box<dyn std::error::Error>> {
};
eprintln!("starting rmcp streamable http test server on http://{bind_addr}/mcp");
let router = Router::new().nest_service(
"/mcp",
StreamableHttpService::new(
|| Ok(TestToolServer::new()),
Arc::new(LocalSessionManager::default()),
StreamableHttpServerConfig::default(),
),
);
let router = Router::new()
.route(
"/.well-known/oauth-authorization-server/mcp",
get({
move || async move {
let metadata_base = format!("http://{bind_addr}");
#[expect(clippy::expect_used)]
Response::builder()
.status(StatusCode::OK)
.header(CONTENT_TYPE, "application/json")
.body(Body::from(
serde_json::to_vec(&json!({
"authorization_endpoint": format!("{metadata_base}/oauth/authorize"),
"token_endpoint": format!("{metadata_base}/oauth/token"),
"scopes_supported": [""],
})).expect("failed to serialize metadata"),
))
.expect("valid metadata response")
}
}),
)
.nest_service(
"/mcp",
StreamableHttpService::new(
|| Ok(TestToolServer::new()),
Arc::new(LocalSessionManager::default()),
StreamableHttpServerConfig::default(),
),
);
let router = if let Ok(token) = std::env::var("MCP_EXPECT_BEARER") {
let expected = Arc::new(format!("Bearer {token}"));
@@ -282,6 +305,9 @@ async fn require_bearer(
request: Request<Body>,
next: Next,
) -> Result<Response, StatusCode> {
if request.uri().path().contains("/.well-known/") {
return Ok(next.run(request).await);
}
if request
.headers()
.get(AUTHORIZATION)

View File

@@ -9,6 +9,7 @@ use crate::file_search::FileSearchManager;
use crate::history_cell::HistoryCell;
use crate::pager_overlay::Overlay;
use crate::render::highlight::highlight_bash_to_lines;
use crate::render::renderable::Renderable;
use crate::resume_picker::ResumeSelection;
use crate::tui;
use crate::tui::TuiEvent;
@@ -233,7 +234,7 @@ impl App {
tui.draw(
self.chat_widget.desired_height(tui.terminal.size()?.width),
|frame| {
frame.render_widget_ref(&self.chat_widget, frame.area());
self.chat_widget.render(frame.area(), frame.buffer);
if let Some((x, y)) = self.chat_widget.cursor_pos(frame.area()) {
frame.set_cursor_position((x, y));
}

View File

@@ -260,10 +260,6 @@ impl BottomPaneView for ApprovalOverlay {
self.enqueue_request(request);
None
}
fn cursor_pos(&self, area: Rect) -> Option<(u16, u16)> {
self.list.cursor_pos(area)
}
}
impl Renderable for ApprovalOverlay {
@@ -274,6 +270,10 @@ impl Renderable for ApprovalOverlay {
fn render(&self, area: Rect, buf: &mut Buffer) {
self.list.render(area, buf);
}
fn cursor_pos(&self, area: Rect) -> Option<(u16, u16)> {
self.list.cursor_pos(area)
}
}
struct ApprovalRequestState {

View File

@@ -1,7 +1,6 @@
use crate::bottom_pane::ApprovalRequest;
use crate::render::renderable::Renderable;
use crossterm::event::KeyEvent;
use ratatui::layout::Rect;
use super::CancellationEvent;
@@ -27,11 +26,6 @@ pub(crate) trait BottomPaneView: Renderable {
false
}
/// Cursor position when this view is active.
fn cursor_pos(&self, _area: Rect) -> Option<(u16, u16)> {
None
}
/// Try to handle approval request; return the original value if not
/// consumed.
fn try_consume_approval_request(

View File

@@ -35,6 +35,9 @@ use crate::bottom_pane::prompt_args::parse_slash_name;
use crate::bottom_pane::prompt_args::prompt_argument_names;
use crate::bottom_pane::prompt_args::prompt_command_with_arg_placeholders;
use crate::bottom_pane::prompt_args::prompt_has_numeric_placeholders;
use crate::render::Insets;
use crate::render::RectExt;
use crate::render::renderable::Renderable;
use crate::slash_command::SlashCommand;
use crate::slash_command::built_in_slash_commands;
use crate::style::user_message_style;
@@ -158,24 +161,6 @@ impl ChatComposer {
this
}
pub fn desired_height(&self, width: u16) -> u16 {
let footer_props = self.footer_props();
let footer_hint_height = self
.custom_footer_height()
.unwrap_or_else(|| footer_height(footer_props));
let footer_spacing = Self::footer_spacing(footer_hint_height);
let footer_total_height = footer_hint_height + footer_spacing;
const COLS_WITH_MARGIN: u16 = LIVE_PREFIX_COLS + 1;
self.textarea
.desired_height(width.saturating_sub(COLS_WITH_MARGIN))
+ 2
+ match &self.active_popup {
ActivePopup::None => footer_total_height,
ActivePopup::Command(c) => c.calculate_required_height(width),
ActivePopup::File(c) => c.calculate_required_height(),
}
}
fn layout_areas(&self, area: Rect) -> [Rect; 3] {
let footer_props = self.footer_props();
let footer_hint_height = self
@@ -190,18 +175,9 @@ impl ChatComposer {
ActivePopup::File(popup) => Constraint::Max(popup.calculate_required_height()),
ActivePopup::None => Constraint::Max(footer_total_height),
};
let mut area = area;
if area.height > 1 {
area.height -= 1;
area.y += 1;
}
let [composer_rect, popup_rect] =
Layout::vertical([Constraint::Min(1), popup_constraint]).areas(area);
let mut textarea_rect = composer_rect;
textarea_rect.width = textarea_rect.width.saturating_sub(
LIVE_PREFIX_COLS + 1, /* keep a one-column right margin for wrapping */
);
textarea_rect.x = textarea_rect.x.saturating_add(LIVE_PREFIX_COLS);
Layout::vertical([Constraint::Min(3), popup_constraint]).areas(area);
let textarea_rect = composer_rect.inset(Insets::tlbr(1, LIVE_PREFIX_COLS, 1, 1));
[composer_rect, textarea_rect, popup_rect]
}
@@ -213,12 +189,6 @@ impl ChatComposer {
}
}
pub fn cursor_pos(&self, area: Rect) -> Option<(u16, u16)> {
let [_, textarea_rect, _] = self.layout_areas(area);
let state = *self.textarea_state.borrow();
self.textarea.cursor_pos_with_state(textarea_rect, state)
}
/// Returns true if the composer currently contains no user input.
pub(crate) fn is_empty(&self) -> bool {
self.textarea.is_empty()
@@ -1541,8 +1511,32 @@ impl ChatComposer {
}
}
impl WidgetRef for ChatComposer {
fn render_ref(&self, area: Rect, buf: &mut Buffer) {
impl Renderable for ChatComposer {
fn cursor_pos(&self, area: Rect) -> Option<(u16, u16)> {
let [_, textarea_rect, _] = self.layout_areas(area);
let state = *self.textarea_state.borrow();
self.textarea.cursor_pos_with_state(textarea_rect, state)
}
fn desired_height(&self, width: u16) -> u16 {
let footer_props = self.footer_props();
let footer_hint_height = self
.custom_footer_height()
.unwrap_or_else(|| footer_height(footer_props));
let footer_spacing = Self::footer_spacing(footer_hint_height);
let footer_total_height = footer_hint_height + footer_spacing;
const COLS_WITH_MARGIN: u16 = LIVE_PREFIX_COLS + 1;
self.textarea
.desired_height(width.saturating_sub(COLS_WITH_MARGIN))
+ 2
+ match &self.active_popup {
ActivePopup::None => footer_total_height,
ActivePopup::Command(c) => c.calculate_required_height(width),
ActivePopup::File(c) => c.calculate_required_height(),
}
}
fn render(&self, area: Rect, buf: &mut Buffer) {
let [composer_rect, textarea_rect, popup_rect] = self.layout_areas(area);
match &self.active_popup {
ActivePopup::Command(popup) => {
@@ -1591,16 +1585,15 @@ impl WidgetRef for ChatComposer {
}
}
let style = user_message_style();
let mut block_rect = composer_rect;
block_rect.y = composer_rect.y.saturating_sub(1);
block_rect.height = composer_rect.height.saturating_add(1);
Block::default().style(style).render_ref(block_rect, buf);
buf.set_span(
composer_rect.x,
composer_rect.y,
&"".bold(),
composer_rect.width,
);
Block::default().style(style).render_ref(composer_rect, buf);
if !textarea_rect.is_empty() {
buf.set_span(
textarea_rect.x - LIVE_PREFIX_COLS,
textarea_rect.y,
&"".bold(),
textarea_rect.width,
);
}
let mut state = self.textarea_state.borrow_mut();
StatefulWidgetRef::render_ref(&(&self.textarea), textarea_rect, buf, &mut state);
@@ -1692,7 +1685,7 @@ mod tests {
let area = Rect::new(0, 0, 40, 6);
let mut buf = Buffer::empty(area);
composer.render_ref(area, &mut buf);
composer.render(area, &mut buf);
let row_to_string = |y: u16| {
let mut row = String::new();
@@ -1756,7 +1749,7 @@ mod tests {
let height = footer_lines + footer_spacing + 8;
let mut terminal = Terminal::new(TestBackend::new(width, height)).unwrap();
terminal
.draw(|f| f.render_widget_ref(composer, f.area()))
.draw(|f| composer.render(f.area(), f.buffer_mut()))
.unwrap();
insta::assert_snapshot!(name, terminal.backend());
}
@@ -2276,7 +2269,7 @@ mod tests {
}
terminal
.draw(|f| f.render_widget_ref(composer, f.area()))
.draw(|f| composer.render(f.area(), f.buffer_mut()))
.unwrap_or_else(|e| panic!("Failed to draw {name} composer: {e}"));
insta::assert_snapshot!(name, terminal.backend());
@@ -2302,12 +2295,12 @@ mod tests {
// Type "/mo" humanlike so paste-burst doesnt interfere.
type_chars_humanlike(&mut composer, &['/', 'm', 'o']);
let mut terminal = match Terminal::new(TestBackend::new(60, 4)) {
let mut terminal = match Terminal::new(TestBackend::new(60, 5)) {
Ok(t) => t,
Err(e) => panic!("Failed to create terminal: {e}"),
};
terminal
.draw(|f| f.render_widget_ref(composer, f.area()))
.draw(|f| composer.render(f.area(), f.buffer_mut()))
.unwrap_or_else(|e| panic!("Failed to draw composer: {e}"));
// Visual snapshot should show the slash popup with /model as the first entry.

View File

@@ -103,26 +103,6 @@ impl BottomPaneView for CustomPromptView {
self.textarea.insert_str(&pasted);
true
}
fn cursor_pos(&self, area: Rect) -> Option<(u16, u16)> {
if area.height < 2 || area.width <= 2 {
return None;
}
let text_area_height = self.input_height(area.width).saturating_sub(1);
if text_area_height == 0 {
return None;
}
let extra_offset: u16 = if self.context_label.is_some() { 1 } else { 0 };
let top_line_count = 1u16 + extra_offset;
let textarea_rect = Rect {
x: area.x.saturating_add(2),
y: area.y.saturating_add(top_line_count).saturating_add(1),
width: area.width.saturating_sub(2),
height: text_area_height,
};
let state = *self.textarea_state.borrow();
self.textarea.cursor_pos_with_state(textarea_rect, state)
}
}
impl Renderable for CustomPromptView {
@@ -232,6 +212,26 @@ impl Renderable for CustomPromptView {
);
}
}
fn cursor_pos(&self, area: Rect) -> Option<(u16, u16)> {
if area.height < 2 || area.width <= 2 {
return None;
}
let text_area_height = self.input_height(area.width).saturating_sub(1);
if text_area_height == 0 {
return None;
}
let extra_offset: u16 = if self.context_label.is_some() { 1 } else { 0 };
let top_line_count = 1u16 + extra_offset;
let textarea_rect = Rect {
x: area.x.saturating_add(2),
y: area.y.saturating_add(top_line_count).saturating_add(1),
width: area.width.saturating_sub(2),
height: text_area_height,
};
let state = *self.textarea_state.borrow();
self.textarea.cursor_pos_with_state(textarea_rect, state)
}
}
impl CustomPromptView {

View File

@@ -163,6 +163,12 @@ impl BottomPaneView for FeedbackNoteView {
self.textarea.insert_str(&pasted);
true
}
}
impl Renderable for FeedbackNoteView {
fn desired_height(&self, width: u16) -> u16 {
1u16 + self.input_height(width) + 3u16
}
fn cursor_pos(&self, area: Rect) -> Option<(u16, u16)> {
if area.height < 2 || area.width <= 2 {
@@ -182,12 +188,6 @@ impl BottomPaneView for FeedbackNoteView {
let state = *self.textarea_state.borrow();
self.textarea.cursor_pos_with_state(textarea_rect, state)
}
}
impl Renderable for FeedbackNoteView {
fn desired_height(&self, width: u16) -> u16 {
1u16 + self.input_height(width) + 3u16
}
fn render(&self, area: Rect, buf: &mut Buffer) {
if area.height == 0 || area.width == 0 {

View File

@@ -3,19 +3,16 @@ use std::path::PathBuf;
use crate::app_event_sender::AppEventSender;
use crate::bottom_pane::queued_user_messages::QueuedUserMessages;
use crate::render::Insets;
use crate::render::RectExt;
use crate::render::renderable::Renderable as _;
use crate::render::renderable::FlexRenderable;
use crate::render::renderable::Renderable;
use crate::render::renderable::RenderableItem;
use crate::tui::FrameRequester;
use bottom_pane_view::BottomPaneView;
use codex_file_search::FileMatch;
use crossterm::event::KeyCode;
use crossterm::event::KeyEvent;
use ratatui::buffer::Buffer;
use ratatui::layout::Constraint;
use ratatui::layout::Layout;
use ratatui::layout::Rect;
use ratatui::widgets::WidgetRef;
use std::time::Duration;
mod approval_overlay;
@@ -126,77 +123,6 @@ impl BottomPane {
self.request_redraw();
}
pub fn desired_height(&self, width: u16) -> u16 {
let top_margin = 1;
// Base height depends on whether a modal/overlay is active.
let base = match self.active_view().as_ref() {
Some(view) => view.desired_height(width),
None => {
let status_height = self
.status
.as_ref()
.map_or(0, |status| status.desired_height(width));
let queue_height = self.queued_user_messages.desired_height(width);
let spacing_height = if status_height == 0 && queue_height == 0 {
0
} else {
1
};
self.composer
.desired_height(width)
.saturating_add(spacing_height)
.saturating_add(status_height)
.saturating_add(queue_height)
}
};
// Account for bottom padding rows. Top spacing is handled in layout().
base.saturating_add(top_margin)
}
fn layout(&self, area: Rect) -> [Rect; 2] {
// At small heights, bottom pane takes the entire height.
let top_margin = if area.height <= 1 { 0 } else { 1 };
let area = area.inset(Insets::tlbr(top_margin, 0, 0, 0));
if self.active_view().is_some() {
return [Rect::ZERO, area];
}
let has_queue = !self.queued_user_messages.messages.is_empty();
let mut status_height = self
.status
.as_ref()
.map_or(0, |status| status.desired_height(area.width))
.min(area.height.saturating_sub(1));
if has_queue && status_height > 1 {
status_height = status_height.saturating_sub(1);
}
let combined_height = status_height
.saturating_add(self.queued_user_messages.desired_height(area.width))
.min(area.height.saturating_sub(1));
let [status_area, _, content_area] = Layout::vertical([
Constraint::Length(combined_height),
Constraint::Length(if combined_height == 0 { 0 } else { 1 }),
Constraint::Min(1),
])
.areas(area);
[status_area, content_area]
}
pub fn cursor_pos(&self, area: Rect) -> Option<(u16, u16)> {
// Hide the cursor whenever an overlay view is active (e.g. the
// status indicator shown while a task is running, or approval modal).
// In these states the textarea is not interactable, so we should not
// show its caret.
let [_, content] = self.layout(area);
if let Some(view) = self.active_view() {
view.cursor_pos(content)
} else {
self.composer.cursor_pos(content)
}
}
/// Forward a key event to the active view or the composer.
pub fn handle_key_event(&mut self, key_event: KeyEvent) -> InputResult {
// If a modal/view is active, handle it here; otherwise forward to composer.
@@ -540,39 +466,36 @@ impl BottomPane {
pub(crate) fn take_recent_submission_images(&mut self) -> Vec<PathBuf> {
self.composer.take_recent_submission_images()
}
fn as_renderable(&'_ self) -> RenderableItem<'_> {
if let Some(view) = self.active_view() {
RenderableItem::Borrowed(view)
} else {
let mut flex = FlexRenderable::new();
if let Some(status) = &self.status {
flex.push(0, RenderableItem::Borrowed(status));
}
flex.push(1, RenderableItem::Borrowed(&self.queued_user_messages));
if self.status.is_some() || !self.queued_user_messages.messages.is_empty() {
flex.push(0, RenderableItem::Owned("".into()));
}
let mut flex2 = FlexRenderable::new();
flex2.push(1, RenderableItem::Owned(flex.into()));
flex2.push(0, RenderableItem::Borrowed(&self.composer));
RenderableItem::Owned(Box::new(flex2))
}
}
}
impl WidgetRef for &BottomPane {
fn render_ref(&self, area: Rect, buf: &mut Buffer) {
let [top_area, content_area] = self.layout(area);
// When a modal view is active, it owns the whole content area.
if let Some(view) = self.active_view() {
view.render(content_area, buf);
} else {
let status_height = self
.status
.as_ref()
.map(|status| status.desired_height(top_area.width).min(top_area.height))
.unwrap_or(0);
if let Some(status) = &self.status
&& status_height > 0
{
status.render_ref(top_area, buf);
}
let queue_area = Rect {
x: top_area.x,
y: top_area.y.saturating_add(status_height),
width: top_area.width,
height: top_area.height.saturating_sub(status_height),
};
if queue_area.height > 0 {
self.queued_user_messages.render(queue_area, buf);
}
self.composer.render_ref(content_area, buf);
}
impl Renderable for BottomPane {
fn render(&self, area: Rect, buf: &mut Buffer) {
self.as_renderable().render(area, buf);
}
fn desired_height(&self, width: u16) -> u16 {
self.as_renderable().desired_height(width)
}
fn cursor_pos(&self, area: Rect) -> Option<(u16, u16)> {
self.as_renderable().cursor_pos(area)
}
}
@@ -599,7 +522,7 @@ mod tests {
fn render_snapshot(pane: &BottomPane, area: Rect) -> String {
let mut buf = Buffer::empty(area);
(&pane).render_ref(area, &mut buf);
pane.render(area, &mut buf);
snapshot_buffer(&buf)
}
@@ -651,7 +574,7 @@ mod tests {
// Render and verify the top row does not include an overlay.
let area = Rect::new(0, 0, 60, 6);
let mut buf = Buffer::empty(area);
(&pane).render_ref(area, &mut buf);
pane.render(area, &mut buf);
let mut r0 = String::new();
for x in 0..area.width {
@@ -665,7 +588,7 @@ mod tests {
#[test]
fn composer_shown_after_denied_while_task_running() {
let (tx_raw, rx) = unbounded_channel::<AppEvent>();
let (tx_raw, _rx) = unbounded_channel::<AppEvent>();
let tx = AppEventSender::new(tx_raw);
let mut pane = BottomPane::new(BottomPaneParams {
app_event_tx: tx,
@@ -700,14 +623,14 @@ mod tests {
std::thread::sleep(Duration::from_millis(120));
let area = Rect::new(0, 0, 40, 6);
let mut buf = Buffer::empty(area);
(&pane).render_ref(area, &mut buf);
let mut row1 = String::new();
pane.render(area, &mut buf);
let mut row0 = String::new();
for x in 0..area.width {
row1.push(buf[(x, 1)].symbol().chars().next().unwrap_or(' '));
row0.push(buf[(x, 0)].symbol().chars().next().unwrap_or(' '));
}
assert!(
row1.contains("Working"),
"expected Working header after denial on row 1: {row1:?}"
row0.contains("Working"),
"expected Working header after denial on row 0: {row0:?}"
);
// Composer placeholder should be visible somewhere below.
@@ -726,9 +649,6 @@ mod tests {
found_composer,
"expected composer visible under status line"
);
// Drain the channel to avoid unused warnings.
drop(rx);
}
#[test]
@@ -750,16 +670,10 @@ mod tests {
// Use a height that allows the status line to be visible above the composer.
let area = Rect::new(0, 0, 40, 6);
let mut buf = Buffer::empty(area);
(&pane).render_ref(area, &mut buf);
pane.render(area, &mut buf);
let mut row0 = String::new();
for x in 0..area.width {
row0.push(buf[(x, 1)].symbol().chars().next().unwrap_or(' '));
}
assert!(
row0.contains("Working"),
"expected Working header: {row0:?}"
);
let bufs = snapshot_buffer(&buf);
assert!(bufs.contains("• Working"), "expected Working header");
}
#[test]
@@ -791,36 +705,6 @@ mod tests {
);
}
#[test]
fn status_hidden_when_height_too_small() {
let (tx_raw, _rx) = unbounded_channel::<AppEvent>();
let tx = AppEventSender::new(tx_raw);
let mut pane = BottomPane::new(BottomPaneParams {
app_event_tx: tx,
frame_requester: FrameRequester::test_dummy(),
has_input_focus: true,
enhanced_keys_supported: false,
placeholder_text: "Ask Codex to do anything".to_string(),
disable_paste_burst: false,
});
pane.set_task_running(true);
// Height=2 → composer takes the full space; status collapses when there is no room.
let area2 = Rect::new(0, 0, 20, 2);
assert_snapshot!(
"status_hidden_when_height_too_small_height_2",
render_snapshot(&pane, area2)
);
// Height=1 → no padding; single row is the composer (status hidden).
let area1 = Rect::new(0, 0, 20, 1);
assert_snapshot!(
"status_hidden_when_height_too_small_height_1",
render_snapshot(&pane, area1)
);
}
#[test]
fn queued_messages_visible_when_status_hidden_snapshot() {
let (tx_raw, _rx) = unbounded_channel::<AppEvent>();

View File

@@ -4,5 +4,6 @@ expression: terminal.backend()
---
" "
" /mo "
" "
" /model choose what model and reasoning effort to use "
" /mention mention a file "

View File

@@ -2,7 +2,6 @@
source: tui/src/bottom_pane/mod.rs
expression: "render_snapshot(&pane, area)"
---
↳ Queued follow-up question
⌥ + ↑ edit

View File

@@ -2,7 +2,6 @@
source: tui/src/bottom_pane/mod.rs
expression: "render_snapshot(&pane, area)"
---
• Working (0s • esc to interru

View File

@@ -2,7 +2,6 @@
source: tui/src/bottom_pane/mod.rs
expression: "render_snapshot(&pane, area)"
---
• Working (0s • esc to interrupt)
↳ Queued follow-up question
⌥ + ↑ edit

View File

@@ -1,6 +0,0 @@
---
source: tui/src/bottom_pane/mod.rs
expression: "render_snapshot(&pane, area2)"
---
Ask Codex to do a

View File

@@ -54,15 +54,11 @@ use crossterm::event::KeyEventKind;
use crossterm::event::KeyModifiers;
use rand::Rng;
use ratatui::buffer::Buffer;
use ratatui::layout::Constraint;
use ratatui::layout::Layout;
use ratatui::layout::Rect;
use ratatui::style::Color;
use ratatui::style::Stylize;
use ratatui::text::Line;
use ratatui::widgets::Paragraph;
use ratatui::widgets::Widget;
use ratatui::widgets::WidgetRef;
use ratatui::widgets::Wrap;
use tokio::sync::mpsc::UnboundedSender;
use tracing::debug;
@@ -92,8 +88,12 @@ use crate::history_cell::McpToolCallCell;
use crate::markdown::append_markdown;
#[cfg(target_os = "windows")]
use crate::onboarding::WSL_INSTRUCTIONS;
use crate::render::Insets;
use crate::render::renderable::ColumnRenderable;
use crate::render::renderable::FlexRenderable;
use crate::render::renderable::Renderable;
use crate::render::renderable::RenderableExt;
use crate::render::renderable::RenderableItem;
use crate::slash_command::SlashCommand;
use crate::status::RateLimitSnapshotDisplay;
use crate::text_formatting::truncate_text;
@@ -132,6 +132,8 @@ struct RunningCommand {
}
const RATE_LIMIT_WARNING_THRESHOLDS: [f64; 3] = [75.0, 90.0, 95.0];
const NUDGE_MODEL_SLUG: &str = "gpt-5-codex-mini";
const RATE_LIMIT_SWITCH_PROMPT_THRESHOLD: f64 = 90.0;
#[derive(Default)]
struct RateLimitWarningState {
@@ -230,6 +232,14 @@ pub(crate) struct ChatWidgetInit {
pub(crate) feedback: codex_feedback::CodexFeedback,
}
#[derive(Default)]
enum RateLimitSwitchPromptState {
#[default]
Idle,
Pending,
Shown,
}
pub(crate) struct ChatWidget {
app_event_tx: AppEventSender,
codex_op_tx: UnboundedSender<Op>,
@@ -242,6 +252,7 @@ pub(crate) struct ChatWidget {
token_info: Option<TokenUsageInfo>,
rate_limit_snapshot: Option<RateLimitSnapshotDisplay>,
rate_limit_warnings: RateLimitWarningState,
rate_limit_switch_prompt: RateLimitSwitchPromptState,
// Stream lifecycle controller
stream_controller: Option<StreamController>,
running_commands: HashMap<String, RunningCommand>,
@@ -293,6 +304,15 @@ impl From<String> for UserMessage {
}
}
impl From<&str> for UserMessage {
fn from(text: &str) -> Self {
Self {
text: text.to_string(),
image_paths: Vec::new(),
}
}
}
fn create_initial_user_message(text: String, image_paths: Vec<PathBuf>) -> Option<UserMessage> {
if text.is_empty() && image_paths.is_empty() {
None
@@ -454,6 +474,8 @@ impl ChatWidget {
self.notify(Notification::AgentTurnComplete {
response: last_agent_message.unwrap_or_default(),
});
self.maybe_show_pending_rate_limit_prompt();
}
pub(crate) fn set_token_info(&mut self, info: Option<TokenUsageInfo>) {
@@ -488,6 +510,27 @@ impl ChatWidget {
.and_then(|window| window.window_minutes),
);
let high_usage = snapshot
.secondary
.as_ref()
.map(|w| w.used_percent >= RATE_LIMIT_SWITCH_PROMPT_THRESHOLD)
.unwrap_or(false)
|| snapshot
.primary
.as_ref()
.map(|w| w.used_percent >= RATE_LIMIT_SWITCH_PROMPT_THRESHOLD)
.unwrap_or(false);
if high_usage
&& self.config.model != NUDGE_MODEL_SLUG
&& !matches!(
self.rate_limit_switch_prompt,
RateLimitSwitchPromptState::Shown
)
{
self.rate_limit_switch_prompt = RateLimitSwitchPromptState::Pending;
}
let display = crate::status::rate_limit_snapshot_display(&snapshot, Local::now());
self.rate_limit_snapshot = Some(display);
@@ -509,6 +552,7 @@ impl ChatWidget {
self.bottom_pane.set_task_running(false);
self.running_commands.clear();
self.stream_controller = None;
self.maybe_show_pending_rate_limit_prompt();
}
fn on_error(&mut self, message: String) {
@@ -951,27 +995,6 @@ impl ChatWidget {
}
}
fn layout_areas(&self, area: Rect) -> [Rect; 3] {
let bottom_min = self.bottom_pane.desired_height(area.width).min(area.height);
let remaining = area.height.saturating_sub(bottom_min);
let active_desired = self
.active_cell
.as_ref()
.map_or(0, |c| c.desired_height(area.width) + 1);
let active_height = active_desired.min(remaining);
// Note: no header area; remaining is not used beyond computing active height.
let header_height = 0u16;
Layout::vertical([
Constraint::Length(header_height),
Constraint::Length(active_height),
Constraint::Min(bottom_min),
])
.areas(area)
}
pub(crate) fn new(
common: ChatWidgetInit,
conversation_manager: Arc<ConversationManager>,
@@ -1013,6 +1036,7 @@ impl ChatWidget {
token_info: None,
rate_limit_snapshot: None,
rate_limit_warnings: RateLimitWarningState::default(),
rate_limit_switch_prompt: RateLimitSwitchPromptState::default(),
stream_controller: None,
running_commands: HashMap::new(),
task_complete_pending: false,
@@ -1079,6 +1103,7 @@ impl ChatWidget {
token_info: None,
rate_limit_snapshot: None,
rate_limit_warnings: RateLimitWarningState::default(),
rate_limit_switch_prompt: RateLimitSwitchPromptState::default(),
stream_controller: None,
running_commands: HashMap::new(),
task_complete_pending: false,
@@ -1100,14 +1125,6 @@ impl ChatWidget {
}
}
pub fn desired_height(&self, width: u16) -> u16 {
self.bottom_pane.desired_height(width)
+ self
.active_cell
.as_ref()
.map_or(0, |c| c.desired_height(width) + 1)
}
pub(crate) fn handle_key_event(&mut self, key_event: KeyEvent) {
match key_event {
KeyEvent {
@@ -1158,12 +1175,7 @@ impl ChatWidget {
text,
image_paths: self.bottom_pane.take_recent_submission_images(),
};
if self.bottom_pane.is_task_running() {
self.queued_user_messages.push_back(user_message);
self.refresh_queued_user_messages();
} else {
self.submit_user_message(user_message);
}
self.queue_user_message(user_message);
}
InputResult::Command(cmd) => {
self.dispatch_command(cmd);
@@ -1220,7 +1232,7 @@ impl ChatWidget {
return;
}
const INIT_PROMPT: &str = include_str!("../prompt_for_init_command.md");
self.submit_text_message(INIT_PROMPT.to_string());
self.submit_user_message(INIT_PROMPT.to_string().into());
}
SlashCommand::Compact => {
self.clear_token_usage();
@@ -1368,6 +1380,15 @@ impl ChatWidget {
self.app_event_tx.send(AppEvent::InsertHistoryCell(cell));
}
fn queue_user_message(&mut self, user_message: UserMessage) {
if self.bottom_pane.is_task_running() {
self.queued_user_messages.push_back(user_message);
self.refresh_queued_user_messages();
} else {
self.submit_user_message(user_message);
}
}
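`queue_user_message` centralizes the defer-or-submit decision that the key handler previously inlined. A toy model of that rule, under stand-in types (nothing below is the repo's API):

```rust
// Sketch only: defer input while a turn is running, submit when idle.
use std::collections::VecDeque;

struct Composer {
    task_running: bool,
    queued: VecDeque<String>, // messages held until the current turn ends
    sent: Vec<String>,        // messages dispatched immediately
}

impl Composer {
    fn queue_user_message(&mut self, msg: String) {
        if self.task_running {
            // A turn is in flight: hold the message for later.
            self.queued.push_back(msg);
        } else {
            // Idle: submit right away.
            self.sent.push(msg);
        }
    }
}

fn main() {
    let mut c = Composer { task_running: true, queued: VecDeque::new(), sent: Vec::new() };
    c.queue_user_message("first".into());
    assert_eq!(c.queued.len(), 1); // deferred while the task runs
    c.task_running = false;
    c.queue_user_message("second".into());
    assert_eq!(c.sent.len(), 1); // dispatched immediately once idle
}
```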
fn submit_user_message(&mut self, user_message: UserMessage) {
let UserMessage { text, image_paths } = user_message;
if text.is_empty() && image_paths.is_empty() {
@@ -1682,6 +1703,83 @@ impl ChatWidget {
));
}
fn lower_cost_preset(&self) -> Option<ModelPreset> {
let auth_mode = self.auth_manager.auth().map(|auth| auth.mode);
builtin_model_presets(auth_mode)
.into_iter()
.find(|preset| preset.model == NUDGE_MODEL_SLUG)
}
fn maybe_show_pending_rate_limit_prompt(&mut self) {
if !matches!(
self.rate_limit_switch_prompt,
RateLimitSwitchPromptState::Pending
) {
return;
}
if let Some(preset) = self.lower_cost_preset() {
self.open_rate_limit_switch_prompt(preset);
self.rate_limit_switch_prompt = RateLimitSwitchPromptState::Shown;
} else {
self.rate_limit_switch_prompt = RateLimitSwitchPromptState::Idle;
}
}
fn open_rate_limit_switch_prompt(&mut self, preset: ModelPreset) {
let switch_model = preset.model.to_string();
let display_name = preset.display_name.to_string();
let default_effort: ReasoningEffortConfig = preset.default_reasoning_effort;
let switch_actions: Vec<SelectionAction> = vec![Box::new(move |tx| {
tx.send(AppEvent::CodexOp(Op::OverrideTurnContext {
cwd: None,
approval_policy: None,
sandbox_policy: None,
model: Some(switch_model.clone()),
effort: Some(Some(default_effort)),
summary: None,
}));
tx.send(AppEvent::UpdateModel(switch_model.clone()));
tx.send(AppEvent::UpdateReasoningEffort(Some(default_effort)));
})];
let keep_actions: Vec<SelectionAction> = Vec::new();
let description = if preset.description.is_empty() {
Some("Uses fewer credits for upcoming turns.".to_string())
} else {
Some(preset.description.to_string())
};
let items = vec![
SelectionItem {
name: format!("Switch to {display_name}"),
description,
selected_description: None,
is_current: false,
actions: switch_actions,
dismiss_on_select: true,
..Default::default()
},
SelectionItem {
name: "Keep current model".to_string(),
description: None,
selected_description: None,
is_current: false,
actions: keep_actions,
dismiss_on_select: true,
..Default::default()
},
];
self.bottom_pane.show_selection_view(SelectionViewParams {
title: Some("Approaching rate limits".to_string()),
subtitle: Some(format!("Switch to {display_name} for lower credit usage?")),
footer_hint: Some(standard_popup_hint_line()),
items,
..Default::default()
});
}
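The functions above form a small state machine: `on_rate_limit_snapshot` arms the prompt once either usage window crosses `RATE_LIMIT_SWITCH_PROMPT_THRESHOLD`, and `maybe_show_pending_rate_limit_prompt` surfaces it at most once per session, only between turns. A minimal self-contained sketch of that flow; the type and field names are stand-ins, and the threshold value is assumed (the diff only shows the constant's name):

```rust
// Sketch of the Idle -> Pending -> Shown lifecycle; not the repo's code.
const THRESHOLD: f64 = 75.0; // assumed value

#[derive(Default, Debug, PartialEq)]
enum PromptState {
    #[default]
    Idle,
    Pending, // high usage seen mid-task; show once the turn ends
    Shown,   // offered already; never re-prompt this session
}

struct Nudge {
    state: PromptState,
    on_lower_cost_model: bool, // stands in for config.model == NUDGE_MODEL_SLUG
}

impl Nudge {
    // Mirrors on_rate_limit_snapshot: arm the prompt, but never re-arm after Shown.
    fn on_snapshot(&mut self, used_percent: f64) {
        if used_percent >= THRESHOLD
            && !self.on_lower_cost_model
            && self.state != PromptState::Shown
        {
            self.state = PromptState::Pending;
        }
    }

    // Mirrors maybe_show_pending_rate_limit_prompt: fire only between turns,
    // and fall back to Idle when no lower-cost preset exists.
    fn on_turn_complete(&mut self, has_lower_cost_preset: bool) -> bool {
        if self.state != PromptState::Pending {
            return false;
        }
        self.state = if has_lower_cost_preset {
            PromptState::Shown
        } else {
            PromptState::Idle
        };
        has_lower_cost_preset
    }
}

fn main() {
    let mut n = Nudge { state: PromptState::default(), on_lower_cost_model: false };
    n.on_snapshot(92.0); // mid-task: arm without interrupting
    assert_eq!(n.state, PromptState::Pending);
    assert!(n.on_turn_complete(true)); // turn done: show the prompt once
    n.on_snapshot(95.0); // later spikes never re-arm
    assert_eq!(n.state, PromptState::Shown);
}
```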
/// Open a popup to choose the model (stage 1). After selecting a model,
/// a second popup is shown to choose the reasoning effort.
pub(crate) fn open_model_popup(&mut self) {
@@ -1698,6 +1796,7 @@ impl ChatWidget {
};
let is_current = preset.model == current_model;
let preset_for_action = preset;
let single_supported_effort = preset_for_action.supported_reasoning_efforts.len() == 1;
let actions: Vec<SelectionAction> = vec![Box::new(move |tx| {
tx.send(AppEvent::OpenReasoningPopup {
model: preset_for_action,
@@ -1708,7 +1807,7 @@ impl ChatWidget {
description,
is_current,
actions,
dismiss_on_select: false,
dismiss_on_select: single_supported_effort,
..Default::default()
});
}
@@ -1747,6 +1846,15 @@ impl ChatWidget {
});
}
if choices.len() == 1 {
if let Some(effort) = choices.first().and_then(|c| c.stored) {
self.apply_model_and_effort(preset.model.to_string(), Some(effort));
} else {
self.apply_model_and_effort(preset.model.to_string(), None);
}
return;
}
let default_choice: Option<ReasoningEffortConfig> = choices
.iter()
.any(|choice| choice.stored == Some(default_effort))
@@ -1785,7 +1893,7 @@ impl ChatWidget {
let warning = "⚠ High reasoning effort can quickly consume Plus plan rate limits.";
let show_warning =
preset.model == "gpt-5-codex" && effort == ReasoningEffortConfig::High;
preset.model.starts_with("gpt-5-codex") && effort == ReasoningEffortConfig::High;
let selected_description = show_warning.then(|| {
description
.as_ref()
@@ -1842,6 +1950,32 @@ impl ChatWidget {
});
}
fn apply_model_and_effort(&self, model: String, effort: Option<ReasoningEffortConfig>) {
self.app_event_tx
.send(AppEvent::CodexOp(Op::OverrideTurnContext {
cwd: None,
approval_policy: None,
sandbox_policy: None,
model: Some(model.clone()),
effort: Some(effort),
summary: None,
}));
self.app_event_tx.send(AppEvent::UpdateModel(model.clone()));
self.app_event_tx
.send(AppEvent::UpdateReasoningEffort(effort));
self.app_event_tx.send(AppEvent::PersistModelSelection {
model: model.clone(),
effort,
});
tracing::info!(
"Selected model: {}, Selected effort: {}",
model,
effort
.map(|e| e.to_string())
.unwrap_or_else(|| "default".to_string())
);
}
/// Open a popup to choose the approvals mode (ask for approval policy + sandbox policy).
pub(crate) fn open_approvals_popup(&mut self) {
let current_approval = self.config.approval_policy;
@@ -2322,16 +2456,6 @@ impl ChatWidget {
self.bottom_pane.show_view(Box::new(view));
}
/// Programmatically submit a user text message as if typed in the
/// composer. The text will be added to conversation history and sent to
/// the agent.
pub(crate) fn submit_text_message(&mut self, text: String) {
if text.is_empty() {
return;
}
self.submit_user_message(text.into());
}
pub(crate) fn token_usage(&self) -> TokenUsage {
self.token_info
.as_ref()
@@ -2357,30 +2481,34 @@ impl ChatWidget {
self.token_info = None;
}
pub fn cursor_pos(&self, area: Rect) -> Option<(u16, u16)> {
let [_, _, bottom_pane_area] = self.layout_areas(area);
self.bottom_pane.cursor_pos(bottom_pane_area)
fn as_renderable(&self) -> RenderableItem<'_> {
let active_cell_renderable = match &self.active_cell {
Some(cell) => RenderableItem::Borrowed(cell).inset(Insets::tlbr(1, 0, 0, 0)),
None => RenderableItem::Owned(Box::new(())),
};
let mut flex = FlexRenderable::new();
flex.push(1, active_cell_renderable);
flex.push(
0,
RenderableItem::Borrowed(&self.bottom_pane).inset(Insets::tlbr(1, 0, 0, 0)),
);
RenderableItem::Owned(Box::new(flex))
}
}
impl WidgetRef for &ChatWidget {
fn render_ref(&self, area: Rect, buf: &mut Buffer) {
let [_, active_cell_area, bottom_pane_area] = self.layout_areas(area);
(&self.bottom_pane).render(bottom_pane_area, buf);
if !active_cell_area.is_empty()
&& let Some(cell) = &self.active_cell
{
let mut area = active_cell_area;
area.y = area.y.saturating_add(1);
area.height = area.height.saturating_sub(1);
if let Some(exec) = cell.as_any().downcast_ref::<ExecCell>() {
exec.render_ref(area, buf);
} else if let Some(tool) = cell.as_any().downcast_ref::<McpToolCallCell>() {
tool.render_ref(area, buf);
}
}
impl Renderable for ChatWidget {
fn render(&self, area: Rect, buf: &mut Buffer) {
self.as_renderable().render(area, buf);
self.last_rendered_width.set(Some(area.width as usize));
}
fn desired_height(&self, width: u16) -> u16 {
self.as_renderable().desired_height(width)
}
fn cursor_pos(&self, area: Rect) -> Option<(u16, u16)> {
self.as_renderable().cursor_pos(area)
}
}
enum Notification {

View File

@@ -2,4 +2,4 @@
source: tui/src/chatwidget/tests.rs
expression: terminal.backend()
---
" Ask Codex to do anything "
" "

View File

@@ -3,4 +3,4 @@ source: tui/src/chatwidget/tests.rs
expression: terminal.backend()
---
" "
" Ask Codex to do anything "
" "

View File

@@ -1,8 +1,7 @@
---
source: tui/src/chatwidget/tests.rs
assertion_line: 1470
expression: terminal.backend()
---
" "
" "
" Ask Codex to do anything "
" "

View File

@@ -2,4 +2,4 @@
source: tui/src/chatwidget/tests.rs
expression: terminal.backend()
---
" Ask Codex to do anything "
" "

View File

@@ -1,7 +1,6 @@
---
source: tui/src/chatwidget/tests.rs
assertion_line: 1500
expression: terminal.backend()
---
" "
" Ask Codex to do anything "
" "

View File

@@ -1,8 +1,7 @@
---
source: tui/src/chatwidget/tests.rs
assertion_line: 1500
expression: terminal.backend()
---
" "
"• Thinking (0s • esc to interrupt) "
" Ask Codex to do anything "
" "
" "

View File

@@ -0,0 +1,27 @@
---
source: tui/src/chatwidget/tests.rs
expression: term.backend().vt100().screen().contents()
---
• Working (0s • esc to interrupt)
↳ Hello, world! 0
↳ Hello, world! 1
↳ Hello, world! 2
↳ Hello, world! 3
↳ Hello, world! 4
↳ Hello, world! 5
↳ Hello, world! 6
↳ Hello, world! 7
↳ Hello, world! 8
↳ Hello, world! 9
↳ Hello, world! 10
↳ Hello, world! 11
↳ Hello, world! 12
↳ Hello, world! 13
↳ Hello, world! 14
↳ Hello, world! 15
↳ Hello, world! 16
Ask Codex to do anything
100% context left · ? for shortcuts

View File

@@ -5,7 +5,7 @@ expression: popup
Select Model and Effort
Switch the model for this and future Codex CLI sessions
1. gpt-5-codex (current) Optimized for coding tasks with many tools.
1. gpt-5-codex (current) Optimized for codex.
2. gpt-5 Broad world knowledge with strong general
reasoning.

View File

@@ -0,0 +1,12 @@
---
source: tui/src/chatwidget/tests.rs
expression: popup
---
Approaching rate limits
Switch to gpt-5-codex-mini for lower credit usage?
1. Switch to gpt-5-codex-mini Optimized for codex. Cheaper, faster, but
less capable.
2. Keep current model
Press enter to confirm or esc to go back

View File

@@ -5,6 +5,8 @@ use crate::test_backend::VT100Backend;
use crate::tui::FrameRequester;
use assert_matches::assert_matches;
use codex_common::approval_presets::builtin_approval_presets;
use codex_common::model_presets::ModelPreset;
use codex_common::model_presets::ReasoningEffortPreset;
use codex_core::AuthManager;
use codex_core::CodexAuth;
use codex_core::config::Config;
@@ -26,6 +28,7 @@ use codex_core::protocol::FileChange;
use codex_core::protocol::Op;
use codex_core::protocol::PatchApplyBeginEvent;
use codex_core::protocol::PatchApplyEndEvent;
use codex_core::protocol::RateLimitWindow;
use codex_core::protocol::ReviewCodeLocation;
use codex_core::protocol::ReviewFinding;
use codex_core::protocol::ReviewLineRange;
@@ -102,6 +105,17 @@ fn upgrade_event_payload_for_tests(mut payload: serde_json::Value) -> serde_json
payload
}
fn snapshot(percent: f64) -> RateLimitSnapshot {
RateLimitSnapshot {
primary: Some(RateLimitWindow {
used_percent: percent,
window_minutes: Some(60),
resets_at: None,
}),
secondary: None,
}
}
#[test]
fn resumed_initial_messages_render_history() {
let (mut chat, mut rx, _ops) = make_chatwidget_manual();
@@ -285,6 +299,7 @@ fn make_chatwidget_manual() -> (
token_info: None,
rate_limit_snapshot: None,
rate_limit_warnings: RateLimitWarningState::default(),
rate_limit_switch_prompt: RateLimitSwitchPromptState::default(),
stream_controller: None,
running_commands: HashMap::new(),
task_complete_pending: false,
@@ -393,6 +408,82 @@ fn test_rate_limit_warnings_monthly() {
);
}
#[test]
fn rate_limit_switch_prompt_skips_when_on_lower_cost_model() {
let (mut chat, _, _) = make_chatwidget_manual();
chat.auth_manager =
AuthManager::from_auth_for_testing(CodexAuth::create_dummy_chatgpt_auth_for_testing());
chat.config.model = NUDGE_MODEL_SLUG.to_string();
chat.on_rate_limit_snapshot(Some(snapshot(95.0)));
assert!(matches!(
chat.rate_limit_switch_prompt,
RateLimitSwitchPromptState::Idle
));
}
#[test]
fn rate_limit_switch_prompt_shows_once_per_session() {
let auth = CodexAuth::create_dummy_chatgpt_auth_for_testing();
let (mut chat, _, _) = make_chatwidget_manual();
chat.config.model = "gpt-5".to_string();
chat.auth_manager = AuthManager::from_auth_for_testing(auth);
chat.on_rate_limit_snapshot(Some(snapshot(90.0)));
assert!(
chat.rate_limit_warnings.primary_index >= 1,
"warnings not emitted"
);
chat.maybe_show_pending_rate_limit_prompt();
assert!(matches!(
chat.rate_limit_switch_prompt,
RateLimitSwitchPromptState::Shown
));
chat.on_rate_limit_snapshot(Some(snapshot(95.0)));
assert!(matches!(
chat.rate_limit_switch_prompt,
RateLimitSwitchPromptState::Shown
));
}
#[test]
fn rate_limit_switch_prompt_defers_until_task_complete() {
let auth = CodexAuth::create_dummy_chatgpt_auth_for_testing();
let (mut chat, _, _) = make_chatwidget_manual();
chat.config.model = "gpt-5".to_string();
chat.auth_manager = AuthManager::from_auth_for_testing(auth);
chat.bottom_pane.set_task_running(true);
chat.on_rate_limit_snapshot(Some(snapshot(90.0)));
assert!(matches!(
chat.rate_limit_switch_prompt,
RateLimitSwitchPromptState::Pending
));
chat.bottom_pane.set_task_running(false);
chat.maybe_show_pending_rate_limit_prompt();
assert!(matches!(
chat.rate_limit_switch_prompt,
RateLimitSwitchPromptState::Shown
));
}
#[test]
fn rate_limit_switch_prompt_popup_snapshot() {
let (mut chat, _rx, _op_rx) = make_chatwidget_manual();
chat.auth_manager =
AuthManager::from_auth_for_testing(CodexAuth::create_dummy_chatgpt_auth_for_testing());
chat.config.model = "gpt-5".to_string();
chat.on_rate_limit_snapshot(Some(snapshot(92.0)));
chat.maybe_show_pending_rate_limit_prompt();
let popup = render_bottom_popup(&chat, 80);
assert_snapshot!("rate_limit_switch_prompt_popup", popup);
}
// (removed experimental resize snapshot test)
#[test]
@@ -424,7 +515,7 @@ fn exec_approval_emits_proposed_command_and_decision_history() {
// The approval modal should display the command snippet for user confirmation.
let area = Rect::new(0, 0, 80, chat.desired_height(80));
let mut buf = ratatui::buffer::Buffer::empty(area);
(&chat).render_ref(area, &mut buf);
chat.render(area, &mut buf);
assert_snapshot!("exec_approval_modal_exec", format!("{buf:?}"));
// Approve via keyboard and verify a concise decision history line is added
@@ -465,7 +556,7 @@ fn exec_approval_decision_truncates_multiline_and_long_commands() {
let area = Rect::new(0, 0, 80, chat.desired_height(80));
let mut buf = ratatui::buffer::Buffer::empty(area);
(&chat).render_ref(area, &mut buf);
chat.render(area, &mut buf);
let mut saw_first_line = false;
for y in 0..area.height {
let mut row = String::new();
@@ -1039,7 +1130,7 @@ fn review_commit_picker_shows_subjects_without_timestamps() {
let height = chat.desired_height(width);
let area = ratatui::layout::Rect::new(0, 0, width, height);
let mut buf = ratatui::buffer::Buffer::empty(area);
(&chat).render_ref(area, &mut buf);
chat.render(area, &mut buf);
let mut blob = String::new();
for y in 0..area.height {
@@ -1267,7 +1358,7 @@ fn render_bottom_first_row(chat: &ChatWidget, width: u16) -> String {
let height = chat.desired_height(width);
let area = Rect::new(0, 0, width, height);
let mut buf = Buffer::empty(area);
(chat).render_ref(area, &mut buf);
chat.render(area, &mut buf);
for y in 0..area.height {
let mut row = String::new();
for x in 0..area.width {
@@ -1289,7 +1380,7 @@ fn render_bottom_popup(chat: &ChatWidget, width: u16) -> String {
let height = chat.desired_height(width);
let area = Rect::new(0, 0, width, height);
let mut buf = Buffer::empty(area);
(chat).render_ref(area, &mut buf);
chat.render(area, &mut buf);
let mut lines: Vec<String> = (0..area.height)
.map(|row| {
@@ -1410,6 +1501,44 @@ fn model_reasoning_selection_popup_snapshot() {
assert_snapshot!("model_reasoning_selection_popup", popup);
}
#[test]
fn single_reasoning_option_skips_selection() {
let (mut chat, mut rx, _op_rx) = make_chatwidget_manual();
static SINGLE_EFFORT: [ReasoningEffortPreset; 1] = [ReasoningEffortPreset {
effort: ReasoningEffortConfig::High,
description: "Maximizes reasoning depth for complex or ambiguous problems",
}];
let preset = ModelPreset {
id: "model-with-single-reasoning",
model: "model-with-single-reasoning",
display_name: "model-with-single-reasoning",
description: "",
default_reasoning_effort: ReasoningEffortConfig::High,
supported_reasoning_efforts: &SINGLE_EFFORT,
is_default: false,
};
chat.open_reasoning_popup(preset);
let popup = render_bottom_popup(&chat, 80);
assert!(
!popup.contains("Select Reasoning Level"),
"expected reasoning selection popup to be skipped"
);
let mut events = Vec::new();
while let Ok(ev) = rx.try_recv() {
events.push(ev);
}
assert!(
events
.iter()
.any(|ev| matches!(ev, AppEvent::UpdateReasoningEffort(Some(effort)) if *effort == ReasoningEffortConfig::High)),
"expected reasoning effort to be applied automatically; events: {events:?}"
);
}
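This test exercises the fast path added in `open_reasoning_popup` (the `choices.len() == 1` hunk above): with a single supported effort, the second popup is skipped and the effort, or the model default when none is stored, is applied directly. The same decision reduced to a stand-in `Effort` type:

```rust
// Sketch of the single-choice fast path; names are ours, not the repo's.
#[derive(Clone, Copy, Debug, PartialEq)]
enum Effort { Low, Medium, High }

struct Choice { stored: Option<Effort> }

/// Returns the effort to apply immediately, or None when a popup is needed.
fn fast_path(choices: &[Choice]) -> Option<Option<Effort>> {
    if choices.len() == 1 {
        // One supported effort: apply it (or the model default when the sole
        // choice stores none) without showing the selection popup.
        Some(choices.first().and_then(|c| c.stored))
    } else {
        None // multiple choices: fall through to the popup
    }
}

fn main() {
    assert_eq!(fast_path(&[Choice { stored: Some(Effort::High) }]), Some(Some(Effort::High)));
    assert_eq!(fast_path(&[Choice { stored: None }]), Some(None));
    let two = [Choice { stored: Some(Effort::Low) }, Choice { stored: Some(Effort::Medium) }];
    assert!(fast_path(&two).is_none());
}
```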
#[test]
fn feedback_selection_popup_snapshot() {
let (mut chat, _rx, _op_rx) = make_chatwidget_manual();
@@ -1701,7 +1830,7 @@ fn approval_modal_exec_snapshot() {
terminal.set_viewport_area(viewport);
terminal
.draw(|f| f.render_widget_ref(&chat, f.area()))
.draw(|f| chat.render(f.area(), f.buffer_mut()))
.expect("draw approval modal");
assert!(
terminal
@@ -1742,7 +1871,7 @@ fn approval_modal_exec_without_reason_snapshot() {
ratatui::Terminal::new(VT100Backend::new(80, height)).expect("create terminal");
terminal.set_viewport_area(Rect::new(0, 0, 80, height));
terminal
.draw(|f| f.render_widget_ref(&chat, f.area()))
.draw(|f| chat.render(f.area(), f.buffer_mut()))
.expect("draw approval modal (no reason)");
assert_snapshot!(
"approval_modal_exec_no_reason",
@@ -1781,7 +1910,7 @@ fn approval_modal_patch_snapshot() {
ratatui::Terminal::new(VT100Backend::new(80, height)).expect("create terminal");
terminal.set_viewport_area(Rect::new(0, 0, 80, height));
terminal
.draw(|f| f.render_widget_ref(&chat, f.area()))
.draw(|f| chat.render(f.area(), f.buffer_mut()))
.expect("draw patch approval modal");
assert_snapshot!(
"approval_modal_patch",
@@ -1873,7 +2002,7 @@ fn ui_snapshots_small_heights_idle() {
let name = format!("chat_small_idle_h{h}");
let mut terminal = Terminal::new(TestBackend::new(40, h)).expect("create terminal");
terminal
.draw(|f| f.render_widget_ref(&chat, f.area()))
.draw(|f| chat.render(f.area(), f.buffer_mut()))
.expect("draw chat idle");
assert_snapshot!(name, terminal.backend());
}
@@ -1903,7 +2032,7 @@ fn ui_snapshots_small_heights_task_running() {
let name = format!("chat_small_running_h{h}");
let mut terminal = Terminal::new(TestBackend::new(40, h)).expect("create terminal");
terminal
.draw(|f| f.render_widget_ref(&chat, f.area()))
.draw(|f| chat.render(f.area(), f.buffer_mut()))
.expect("draw chat running");
assert_snapshot!(name, terminal.backend());
}
@@ -1953,7 +2082,7 @@ fn status_widget_and_approval_modal_snapshot() {
let mut terminal = ratatui::Terminal::new(ratatui::backend::TestBackend::new(80, height))
.expect("create terminal");
terminal
.draw(|f| f.render_widget_ref(&chat, f.area()))
.draw(|f| chat.render(f.area(), f.buffer_mut()))
.expect("draw status + approval modal");
assert_snapshot!("status_widget_and_approval_modal", terminal.backend());
}
@@ -1982,7 +2111,7 @@ fn status_widget_active_snapshot() {
let mut terminal = ratatui::Terminal::new(ratatui::backend::TestBackend::new(80, height))
.expect("create terminal");
terminal
.draw(|f| f.render_widget_ref(&chat, f.area()))
.draw(|f| chat.render(f.area(), f.buffer_mut()))
.expect("draw status widget");
assert_snapshot!("status_widget_active", terminal.backend());
}
@@ -2017,7 +2146,7 @@ fn apply_patch_events_emit_history_cells() {
let area = Rect::new(0, 0, 80, chat.desired_height(80));
let mut buf = ratatui::buffer::Buffer::empty(area);
(&chat).render_ref(area, &mut buf);
chat.render(area, &mut buf);
let mut saw_summary = false;
for y in 0..area.height {
let mut row = String::new();
@@ -2304,7 +2433,7 @@ fn apply_patch_untrusted_shows_approval_modal() {
// Render and ensure the approval modal title is present
let area = Rect::new(0, 0, 80, 12);
let mut buf = Buffer::empty(area);
(&chat).render_ref(area, &mut buf);
chat.render(area, &mut buf);
let mut contains_title = false;
for y in 0..area.height {
@@ -2358,7 +2487,7 @@ fn apply_patch_request_shows_diff_summary() {
let area = Rect::new(0, 0, 80, chat.desired_height(80));
let mut buf = ratatui::buffer::Buffer::empty(area);
(&chat).render_ref(area, &mut buf);
chat.render(area, &mut buf);
let mut saw_header = false;
let mut saw_line1 = false;
@@ -2683,7 +2812,7 @@ fn chatwidget_exec_and_status_layout_vt100_snapshot() {
}
term.draw(|f| {
(&chat).render_ref(f.area(), f.buffer_mut());
chat.render(f.area(), f.buffer_mut());
})
.unwrap();
@@ -2781,3 +2910,28 @@ printf 'fenced within fenced\n'
assert_snapshot!(term.backend().vt100().screen().contents());
}
#[test]
fn chatwidget_tall() {
let (mut chat, _rx, _op_rx) = make_chatwidget_manual();
chat.handle_codex_event(Event {
id: "t1".into(),
msg: EventMsg::TaskStarted(TaskStartedEvent {
model_context_window: None,
}),
});
for i in 0..30 {
chat.queue_user_message(format!("Hello, world! {i}").into());
}
let width: u16 = 80;
let height: u16 = 24;
let backend = VT100Backend::new(width, height);
let mut term = crate::custom_terminal::Terminal::with_options(backend).expect("terminal");
let desired_height = chat.desired_height(width).min(height);
term.set_viewport_area(Rect::new(0, height - desired_height, width, desired_height));
term.draw(|f| {
chat.render(f.area(), f.buffer_mut());
})
.unwrap();
assert_snapshot!(term.backend().vt100().screen().contents());
}

View File

@@ -67,7 +67,7 @@ impl From<DiffSummary> for Box<dyn Renderable> {
rows.push(Box::new(path));
rows.push(Box::new(RtLine::from("")));
rows.push(Box::new(InsetRenderable::new(
row.change,
Box::new(row.change) as Box<dyn Renderable>,
Insets::tlbr(0, 2, 0, 0),
)));
}

View File

@@ -19,9 +19,6 @@ use itertools::Itertools;
use ratatui::prelude::*;
use ratatui::style::Modifier;
use ratatui::style::Stylize;
use ratatui::widgets::Paragraph;
use ratatui::widgets::WidgetRef;
use ratatui::widgets::Wrap;
use textwrap::WordSplitter;
use unicode_width::UnicodeWidthStr;
@@ -205,31 +202,6 @@ impl HistoryCell for ExecCell {
}
}
impl WidgetRef for &ExecCell {
fn render_ref(&self, area: Rect, buf: &mut Buffer) {
if area.height == 0 {
return;
}
let content_area = Rect {
x: area.x,
y: area.y,
width: area.width,
height: area.height,
};
let lines = self.display_lines(area.width);
let max_rows = area.height as usize;
let rendered = if lines.len() > max_rows {
lines[lines.len() - max_rows..].to_vec()
} else {
lines
};
Paragraph::new(Text::from(rendered))
.wrap(Wrap { trim: false })
.render(content_area, buf);
}
}
impl ExecCell {
fn exploring_display_lines(&self, width: u16) -> Vec<Line<'static>> {
let mut out: Vec<Line<'static>> = Vec::new();

View File

@@ -11,6 +11,7 @@ use crate::markdown::append_markdown;
use crate::render::line_utils::line_to_static;
use crate::render::line_utils::prefix_lines;
use crate::render::line_utils::push_owned_lines;
use crate::render::renderable::Renderable;
use crate::style::user_message_style;
use crate::text_formatting::format_and_truncate_tool_result;
use crate::text_formatting::truncate_text;
@@ -45,7 +46,6 @@ use ratatui::style::Style;
use ratatui::style::Styled;
use ratatui::style::Stylize;
use ratatui::widgets::Paragraph;
use ratatui::widgets::WidgetRef;
use ratatui::widgets::Wrap;
use std::any::Any;
use std::collections::HashMap;
@@ -99,6 +99,24 @@ pub(crate) trait HistoryCell: std::fmt::Debug + Send + Sync + Any {
}
}
impl Renderable for Box<dyn HistoryCell> {
fn render(&self, area: Rect, buf: &mut Buffer) {
let lines = self.display_lines(area.width);
let y = if area.height == 0 {
0
} else {
let overflow = lines.len().saturating_sub(usize::from(area.height));
u16::try_from(overflow).unwrap_or(u16::MAX)
};
Paragraph::new(Text::from(lines))
.scroll((y, 0))
.render(area, buf);
}
fn desired_height(&self, width: u16) -> u16 {
HistoryCell::desired_height(self.as_ref(), width)
}
}
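The `Renderable` impl above bottom-anchors an overflowing cell: the scroll offset passed to `Paragraph::scroll` is exactly the number of rows that do not fit, so the tail of the content stays visible. A small restatement of that computation (the function name is ours):

```rust
// Bottom-anchored scroll offset, as derived in the impl above: skip just
// enough leading rows that the last `viewport_height` rows remain on screen.
fn scroll_offset(total_lines: usize, viewport_height: u16) -> u16 {
    if viewport_height == 0 {
        return 0; // mirrors the area.height == 0 branch above
    }
    let overflow = total_lines.saturating_sub(usize::from(viewport_height));
    u16::try_from(overflow).unwrap_or(u16::MAX)
}

fn main() {
    assert_eq!(scroll_offset(30, 10), 20); // rows 20..30 remain visible
    assert_eq!(scroll_offset(5, 10), 0);   // fits entirely; no scrolling
}
```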
impl dyn HistoryCell {
pub(crate) fn as_any(&self) -> &dyn Any {
self
@@ -929,23 +947,6 @@ impl HistoryCell for McpToolCallCell {
}
}
impl WidgetRef for &McpToolCallCell {
fn render_ref(&self, area: Rect, buf: &mut Buffer) {
if area.height == 0 {
return;
}
let lines = self.display_lines(area.width);
let max_rows = area.height as usize;
let rendered = if lines.len() > max_rows {
lines[lines.len() - max_rows..].to_vec()
} else {
lines
};
Text::from(rendered).render(area, buf);
}
}
pub(crate) fn new_active_mcp_tool_call(
call_id: String,
invocation: McpInvocation,

View File

@@ -170,6 +170,12 @@ impl OnboardingScreen {
out
}
fn is_auth_in_progress(&self) -> bool {
self.steps.iter().any(|step| {
matches!(step, Step::Auth(_)) && matches!(step.get_step_state(), StepState::InProgress)
})
}
pub(crate) fn is_done(&self) -> bool {
self.is_done
|| !self
@@ -216,7 +222,9 @@ impl KeyboardHandler for OnboardingScreen {
kind: KeyEventKind::Press,
..
} => {
self.is_done = true;
if !self.is_auth_in_progress() {
self.is_done = true;
}
}
_ => {
if let Some(Step::Welcome(widget)) = self

View File

@@ -7,13 +7,13 @@
use crossterm::event::KeyEvent;
use ratatui::buffer::Buffer;
use ratatui::layout::Rect;
use ratatui::widgets::WidgetRef;
use std::time::Duration;
use crate::app_event::AppEvent;
use crate::app_event_sender::AppEventSender;
use crate::bottom_pane::ChatComposer;
use crate::bottom_pane::InputResult;
use crate::render::renderable::Renderable;
/// Action returned from feeding a key event into the ComposerInput.
pub enum ComposerAction {
@@ -94,7 +94,7 @@ impl ComposerInput {
/// Render the input into the provided buffer at `area`.
pub fn render_ref(&self, area: Rect, buf: &mut Buffer) {
WidgetRef::render_ref(&self.inner, area, buf);
self.inner.render(area, buf);
}
/// Return true if a paste-burst detection is currently active.

View File

@@ -13,9 +13,49 @@ use crate::render::RectExt as _;
pub trait Renderable {
fn render(&self, area: Rect, buf: &mut Buffer);
fn desired_height(&self, width: u16) -> u16;
fn cursor_pos(&self, _area: Rect) -> Option<(u16, u16)> {
None
}
}
impl<R: Renderable + 'static> From<R> for Box<dyn Renderable> {
pub enum RenderableItem<'a> {
Owned(Box<dyn Renderable + 'a>),
Borrowed(&'a dyn Renderable),
}
impl<'a> Renderable for RenderableItem<'a> {
fn render(&self, area: Rect, buf: &mut Buffer) {
match self {
RenderableItem::Owned(child) => child.render(area, buf),
RenderableItem::Borrowed(child) => child.render(area, buf),
}
}
fn desired_height(&self, width: u16) -> u16 {
match self {
RenderableItem::Owned(child) => child.desired_height(width),
RenderableItem::Borrowed(child) => child.desired_height(width),
}
}
fn cursor_pos(&self, area: Rect) -> Option<(u16, u16)> {
match self {
RenderableItem::Owned(child) => child.cursor_pos(area),
RenderableItem::Borrowed(child) => child.cursor_pos(area),
}
}
}
impl<'a> From<Box<dyn Renderable + 'a>> for RenderableItem<'a> {
fn from(value: Box<dyn Renderable + 'a>) -> Self {
RenderableItem::Owned(value)
}
}
impl<'a, R> From<R> for Box<dyn Renderable + 'a>
where
R: Renderable + 'a,
{
fn from(value: R) -> Self {
Box::new(value)
}
@@ -98,11 +138,11 @@ impl<R: Renderable> Renderable for Arc<R> {
}
}
pub struct ColumnRenderable {
children: Vec<Box<dyn Renderable>>,
pub struct ColumnRenderable<'a> {
children: Vec<RenderableItem<'a>>,
}
impl Renderable for ColumnRenderable {
impl Renderable for ColumnRenderable<'_> {
fn render(&self, area: Rect, buf: &mut Buffer) {
let mut y = area.y;
for child in &self.children {
@@ -121,29 +161,166 @@ impl Renderable for ColumnRenderable {
.map(|child| child.desired_height(width))
.sum()
}
/// Returns the cursor position of the first child that has a cursor position, offset by the
/// child's position in the column.
///
/// It is generally assumed that either zero or one child will have a cursor position.
fn cursor_pos(&self, area: Rect) -> Option<(u16, u16)> {
let mut y = area.y;
for child in &self.children {
let child_area = Rect::new(area.x, y, area.width, child.desired_height(area.width))
.intersection(area);
if !child_area.is_empty()
&& let Some((px, py)) = child.cursor_pos(child_area)
{
return Some((px, py));
}
y += child_area.height;
}
None
}
}
impl ColumnRenderable {
impl<'a> ColumnRenderable<'a> {
pub fn new() -> Self {
Self::with(vec![])
Self { children: vec![] }
}
pub fn with(children: impl IntoIterator<Item = Box<dyn Renderable>>) -> Self {
pub fn with<I, T>(children: I) -> Self
where
I: IntoIterator<Item = T>,
T: Into<RenderableItem<'a>>,
{
Self {
children: children.into_iter().collect(),
children: children.into_iter().map(Into::into).collect(),
}
}
pub fn push(&mut self, child: impl Into<Box<dyn Renderable>>) {
self.children.push(child.into());
pub fn push(&mut self, child: impl Into<Box<dyn Renderable + 'a>>) {
self.children.push(RenderableItem::Owned(child.into()));
}
#[allow(dead_code)]
pub fn push_ref<R>(&mut self, child: &'a R)
where
R: Renderable + 'a,
{
self.children
.push(RenderableItem::Borrowed(child as &'a dyn Renderable));
}
}
pub struct RowRenderable {
children: Vec<(u16, Box<dyn Renderable>)>,
pub struct FlexChild<'a> {
flex: i32,
child: RenderableItem<'a>,
}
impl Renderable for RowRenderable {
pub struct FlexRenderable<'a> {
children: Vec<FlexChild<'a>>,
}
/// Lays out children in a column, with the ability to specify a flex factor for each child.
///
/// Children with flex factor > 0 will be allocated the remaining space after the non-flex children,
/// proportional to the flex factor.
impl<'a> FlexRenderable<'a> {
pub fn new() -> Self {
Self { children: vec![] }
}
pub fn push(&mut self, flex: i32, child: impl Into<RenderableItem<'a>>) {
self.children.push(FlexChild {
flex,
child: child.into(),
});
}
/// Loosely inspired by Flutter's Flex widget.
///
/// Ref https://github.com/flutter/flutter/blob/3fd81edbf1e015221e143c92b2664f4371bdc04a/packages/flutter/lib/src/rendering/flex.dart#L1205-L1209
fn allocate(&self, area: Rect) -> Vec<Rect> {
let mut allocated_rects = Vec::with_capacity(self.children.len());
let mut child_sizes = vec![0; self.children.len()];
let mut allocated_size = 0;
let mut total_flex = 0;
// 1. Allocate space to non-flex children.
let max_size = area.height;
let mut last_flex_child_idx = 0;
for (i, FlexChild { flex, child }) in self.children.iter().enumerate() {
if *flex > 0 {
total_flex += flex;
last_flex_child_idx = i;
} else {
child_sizes[i] = child
.desired_height(area.width)
.min(max_size.saturating_sub(allocated_size));
allocated_size += child_sizes[i];
}
}
let free_space = max_size.saturating_sub(allocated_size);
// 2. Allocate space to flex children, proportional to their flex factor.
let mut allocated_flex_space = 0;
if total_flex > 0 {
let space_per_flex = free_space / total_flex as u16;
for (i, FlexChild { flex, child }) in self.children.iter().enumerate() {
if *flex > 0 {
// Last flex child gets all the remaining space, to prevent a rounding error
// from not allocating all the space.
let max_child_extent = if i == last_flex_child_idx {
free_space - allocated_flex_space
} else {
space_per_flex * *flex as u16
};
let child_size = child.desired_height(area.width).min(max_child_extent);
child_sizes[i] = child_size;
allocated_size += child_size;
allocated_flex_space += child_size;
}
}
}
let mut y = area.y;
for size in child_sizes {
let child_area = Rect::new(area.x, y, area.width, size);
allocated_rects.push(child_area);
y += child_area.height;
}
allocated_rects
}
}
impl<'a> Renderable for FlexRenderable<'a> {
fn render(&self, area: Rect, buf: &mut Buffer) {
self.allocate(area)
.into_iter()
.zip(self.children.iter())
.for_each(|(rect, child)| {
child.child.render(rect, buf);
});
}
fn desired_height(&self, width: u16) -> u16 {
self.allocate(Rect::new(0, 0, width, u16::MAX))
.last()
.map(|rect| rect.bottom())
.unwrap_or(0)
}
fn cursor_pos(&self, area: Rect) -> Option<(u16, u16)> {
self.allocate(area)
.into_iter()
.zip(self.children.iter())
.find_map(|(rect, child)| child.child.cursor_pos(rect))
}
}
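Elsewhere in this change, `ChatWidget::as_renderable` pushes the active cell with flex 1 and the bottom pane with flex 0, so the pane always receives its desired rows and the cell is clamped to the leftovers. A toy two-child re-derivation of `allocate` under that shape (the function and the two-child restriction are ours):

```rust
// Sketch: one fixed (flex 0) child and one flex child, as in the chat layout.
fn allocate_two(fixed_desired: u16, flex_desired: u16, height: u16) -> (u16, u16) {
    // 1. Non-flex children take their desired height first, clamped to the area.
    let fixed = fixed_desired.min(height);
    // 2. Flex children share the remaining space, never exceeding their desire.
    let flex = flex_desired.min(height.saturating_sub(fixed));
    (fixed, flex)
}

fn main() {
    // Bottom pane wants 5 rows, the active cell would take 100, screen is 24:
    // the pane keeps its 5 rows and the cell is clamped to the 19 left over.
    assert_eq!(allocate_two(5, 100, 24), (5, 19));
    // When everything fits, nothing is clamped.
    assert_eq!(allocate_two(5, 10, 24), (5, 10));
}
```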
pub struct RowRenderable<'a> {
children: Vec<(u16, RenderableItem<'a>)>,
}
impl Renderable for RowRenderable<'_> {
fn render(&self, area: Rect, buf: &mut Buffer) {
let mut x = area.x;
for (width, child) in &self.children {
@@ -172,24 +349,49 @@ impl Renderable for RowRenderable {
}
max_height
}
fn cursor_pos(&self, area: Rect) -> Option<(u16, u16)> {
let mut x = area.x;
for (width, child) in &self.children {
let available_width = area.width.saturating_sub(x - area.x);
let child_area = Rect::new(x, area.y, (*width).min(available_width), area.height);
if !child_area.is_empty()
&& let Some(pos) = child.cursor_pos(child_area)
{
return Some(pos);
}
x = x.saturating_add(*width);
}
None
}
}
impl RowRenderable {
impl<'a> RowRenderable<'a> {
pub fn new() -> Self {
Self { children: vec![] }
}
pub fn push(&mut self, width: u16, child: impl Into<Box<dyn Renderable>>) {
self.children.push((width, child.into()));
self.children
.push((width, RenderableItem::Owned(child.into())));
}
#[allow(dead_code)]
pub fn push_ref<R>(&mut self, width: u16, child: &'a R)
where
R: Renderable + 'a,
{
self.children
.push((width, RenderableItem::Borrowed(child as &'a dyn Renderable)));
}
}
pub struct InsetRenderable {
child: Box<dyn Renderable>,
pub struct InsetRenderable<'a> {
child: RenderableItem<'a>,
insets: Insets,
}
impl Renderable for InsetRenderable {
impl<'a> Renderable for InsetRenderable<'a> {
fn render(&self, area: Rect, buf: &mut Buffer) {
self.child.render(area.inset(self.insets), buf);
}
@@ -199,10 +401,13 @@ impl Renderable for InsetRenderable {
+ self.insets.top
+ self.insets.bottom
}
fn cursor_pos(&self, area: Rect) -> Option<(u16, u16)> {
self.child.cursor_pos(area.inset(self.insets))
}
}
impl InsetRenderable {
pub fn new(child: impl Into<Box<dyn Renderable>>, insets: Insets) -> Self {
impl<'a> InsetRenderable<'a> {
pub fn new(child: impl Into<RenderableItem<'a>>, insets: Insets) -> Self {
Self {
child: child.into(),
insets,
@@ -210,15 +415,17 @@ impl InsetRenderable {
}
}
pub trait RenderableExt {
fn inset(self, insets: Insets) -> Box<dyn Renderable>;
pub trait RenderableExt<'a> {
fn inset(self, insets: Insets) -> RenderableItem<'a>;
}
impl<R: Into<Box<dyn Renderable>>> RenderableExt for R {
fn inset(self, insets: Insets) -> Box<dyn Renderable> {
Box::new(InsetRenderable {
child: self.into(),
insets,
})
impl<'a, R> RenderableExt<'a> for R
where
R: Renderable + 'a,
{
fn inset(self, insets: Insets) -> RenderableItem<'a> {
let child: RenderableItem<'a> =
RenderableItem::Owned(Box::new(self) as Box<dyn Renderable + 'a>);
RenderableItem::Owned(Box::new(InsetRenderable { child, insets }))
}
}
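`RenderableItem` lets composites mix throwaway owned children with borrowed long-lived widgets, which is what allows `ChatWidget::as_renderable` to build a fresh `FlexRenderable` each frame around `&self.bottom_pane` without cloning it. A compressed model of the split, eliding `Rect`/`Buffer` (the trait and types below are stand-ins):

```rust
// Sketch of the Owned/Borrowed split; the real trait renders into a Buffer.
trait Render {
    fn height(&self) -> u16;
}

enum Item<'a> {
    Owned(Box<dyn Render + 'a>), // freshly built, throwaway renderable
    Borrowed(&'a dyn Render),    // long-lived widget composed by reference
}

impl Render for Item<'_> {
    fn height(&self) -> u16 {
        match self {
            Item::Owned(c) => c.height(),
            Item::Borrowed(c) => c.height(),
        }
    }
}

struct Pane(u16);
impl Render for Pane {
    fn height(&self) -> u16 {
        self.0
    }
}

fn main() {
    let pane = Pane(3);
    let b = Item::Borrowed(&pane);        // compose per frame without moving it
    let o = Item::Owned(Box::new(Pane(5))); // or hand over an ad-hoc child
    assert_eq!(b.height() + o.height(), 8);
}
```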

View File

@@ -16,6 +16,7 @@ use crate::app_event::AppEvent;
use crate::app_event_sender::AppEventSender;
use crate::exec_cell::spinner;
use crate::key_hint;
use crate::render::renderable::Renderable;
use crate::shimmer::shimmer_spans;
use crate::tui::FrameRequester;
@@ -62,10 +63,6 @@ impl StatusIndicatorWidget {
}
}
pub fn desired_height(&self, _width: u16) -> u16 {
1
}
pub(crate) fn interrupt(&self) {
self.app_event_tx.send(AppEvent::CodexOp(Op::Interrupt));
}
@@ -75,15 +72,15 @@ impl StatusIndicatorWidget {
self.header = header;
}
pub(crate) fn set_interrupt_hint_visible(&mut self, visible: bool) {
self.show_interrupt_hint = visible;
}
#[cfg(test)]
pub(crate) fn header(&self) -> &str {
&self.header
}
pub(crate) fn set_interrupt_hint_visible(&mut self, visible: bool) {
self.show_interrupt_hint = visible;
}
#[cfg(test)]
pub(crate) fn interrupt_hint_visible(&self) -> bool {
self.show_interrupt_hint
@@ -131,8 +128,12 @@ impl StatusIndicatorWidget {
}
}
impl WidgetRef for StatusIndicatorWidget {
fn render_ref(&self, area: Rect, buf: &mut Buffer) {
impl Renderable for StatusIndicatorWidget {
fn desired_height(&self, _width: u16) -> u16 {
1
}
fn render(&self, area: Rect, buf: &mut Buffer) {
if area.is_empty() {
return;
}
@@ -200,7 +201,7 @@ mod tests {
// Render into a fixed-size test terminal and snapshot the backend.
let mut terminal = Terminal::new(TestBackend::new(80, 2)).expect("terminal");
terminal
.draw(|f| w.render_ref(f.area(), f.buffer_mut()))
.draw(|f| w.render(f.area(), f.buffer_mut()))
.expect("draw");
insta::assert_snapshot!(terminal.backend());
}
@@ -214,7 +215,7 @@ mod tests {
// Render into a fixed-size test terminal and snapshot the backend.
let mut terminal = Terminal::new(TestBackend::new(20, 2)).expect("terminal");
terminal
.draw(|f| w.render_ref(f.area(), f.buffer_mut()))
.draw(|f| w.render(f.area(), f.buffer_mut()))
.expect("draw");
insta::assert_snapshot!(terminal.backend());
}

View File

@@ -12,7 +12,7 @@ Because Codex is written in Rust, it honors the `RUST_LOG` environment variable
The TUI defaults to `RUST_LOG=codex_core=info,codex_tui=info,codex_rmcp_client=info` and log messages are written to `~/.codex/log/codex-tui.log`, so you can leave the following running in a separate terminal to monitor log messages as they are written:
```
```bash
tail -F ~/.codex/log/codex-tui.log
```
@@ -67,7 +67,7 @@ Use the MCP inspector and `codex mcp-server` to build a simple tic-tac-toe game
**approval-policy:** never
**prompt:** Implement a simple tic-tac-toe game with HTML, Javascript, and CSS. Write the game in a single file called index.html.
**prompt:** Implement a simple tic-tac-toe game with HTML, JavaScript, and CSS. Write the game in a single file called index.html.
**sandbox:** workspace-write

View File

@@ -4,6 +4,7 @@ Codex configuration gives you fine-grained control over the model, execution env
## Quick navigation
- [Feature flags](#feature-flags)
- [Model selection](#model-selection)
- [Execution environment](#execution-environment)
- [MCP integration](#mcp-integration)
@@ -26,6 +27,36 @@ Codex supports several mechanisms for setting config values:
Both the `--config` flag and the `config.toml` file support the following options:
## Feature flags
Optional and experimental capabilities are toggled via the `[features]` table in `$CODEX_HOME/config.toml`. If you see a deprecation notice mentioning a legacy key (for example `experimental_use_exec_command_tool`), move the setting into `[features]` or pass `--enable <feature>`.
```toml
[features]
streamable_shell = true # enable the streamable exec tool
web_search_request = true # allow the model to request web searches
# view_image_tool defaults to true; omit to keep defaults
```
Supported features:
| Key | Default | Stage | Description |
| ----------------------------------------- | :-----: | ------------ | ---------------------------------------------------- |
| `unified_exec` | false | Experimental | Use the unified PTY-backed exec tool |
| `streamable_shell` | false | Experimental | Use the streamable exec-command/write-stdin pair |
| `rmcp_client` | false | Experimental | Enable OAuth support for streamable HTTP MCP servers |
| `apply_patch_freeform` | false | Beta | Include the freeform `apply_patch` tool |
| `view_image_tool` | true | Stable | Include the `view_image` tool |
| `web_search_request` | false | Stable | Allow the model to issue web searches |
| `experimental_sandbox_command_assessment` | false | Experimental | Enable model-based sandbox risk assessment |
| `ghost_commit` | false | Experimental | Create a ghost commit each turn |
| `enable_experimental_windows_sandbox` | false | Experimental | Use the Windows restricted-token sandbox |
Notes:
- Omit a key to accept its default.
- Legacy booleans such as `experimental_use_exec_command_tool`, `experimental_use_unified_exec_tool`, `include_apply_patch_tool`, and similar `experimental_use_*` keys are deprecated; setting the corresponding `[features].<key>` avoids repeated warnings.
## Model selection
### model

View File

@@ -5,7 +5,7 @@ Custom prompts turn your repeatable instructions into reusable slash commands, s
### Where prompts live
- Location: store prompts in `$CODEX_HOME/prompts/` (defaults to `~/.codex/prompts/`). Set `CODEX_HOME` if you want to use a different folder.
- File type: Codex only loads `.md` files. Non-Markdown files are ignored.
- File type: Codex only loads `.md` files. Non-Markdown files are ignored. Both regular files and symlinks to Markdown files are supported.
- Naming: The filename (without `.md`) becomes the prompt name. A file called `review.md` registers the prompt `review`.
- Refresh: Prompts are loaded when a session starts. Restart Codex (or start a new session) after adding or editing files.
- Conflicts: Files whose names collide with built-in commands (like `init`) stay hidden in the slash popup, but you can still invoke them with `/prompts:<name>`.

View File

@@ -87,8 +87,9 @@ def check_or_fix(readme_path: Path, fix: bool) -> int:
# extract current ToC list items
current_block = lines[begin_idx + 1 : end_idx]
current = [l for l in current_block if l.lstrip().startswith("- [")]
# generate expected ToC
expected = generate_toc_lines(content)
# generate expected ToC from content without current ToC
toc_content = lines[:begin_idx] + lines[end_idx+1:]
expected = generate_toc_lines("\n".join(toc_content))
if current == expected:
return 0
if not fix:
@@ -108,7 +109,7 @@ def check_or_fix(readme_path: Path, fix: bool) -> int:
return 1
# rebuild file with updated ToC
prefix = lines[: begin_idx + 1]
suffix = lines[end_idx:]
suffix = lines[end_idx+1:]
new_lines = prefix + [""] + expected + [""] + suffix
readme_path.write_text("\n".join(new_lines) + "\n", encoding="utf-8")
print(f"Updated ToC in {readme_path}.")

View File

@@ -3,7 +3,7 @@ import path from "node:path";
import readline from "node:readline";
import { fileURLToPath } from "node:url";
import { SandboxMode } from "./threadOptions";
import { SandboxMode, ModelReasoningEffort } from "./threadOptions";
export type CodexExecArgs = {
input: string;
@@ -22,6 +22,8 @@ export type CodexExecArgs = {
skipGitRepoCheck?: boolean;
// --output-schema
outputSchemaFile?: string;
// --config model_reasoning_effort
modelReasoningEffort?: ModelReasoningEffort;
};
const INTERNAL_ORIGINATOR_ENV = "CODEX_INTERNAL_ORIGINATOR_OVERRIDE";
@@ -56,6 +58,10 @@ export class CodexExec {
commandArgs.push("--output-schema", args.outputSchemaFile);
}
if (args.modelReasoningEffort) {
commandArgs.push("--config", `model_reasoning_effort="${args.modelReasoningEffort}"`);
}
if (args.images?.length) {
for (const image of args.images) {
commandArgs.push("--image", image);

View File

@@ -30,5 +30,10 @@ export { Codex } from "./codex";
export type { CodexOptions } from "./codexOptions";
export type { ThreadOptions, ApprovalMode, SandboxMode } from "./threadOptions";
export type {
ThreadOptions,
ApprovalMode,
SandboxMode,
ModelReasoningEffort,
} from "./threadOptions";
export type { TurnOptions } from "./turnOptions";

View File

@@ -85,6 +85,7 @@ export class Thread {
workingDirectory: options?.workingDirectory,
skipGitRepoCheck: options?.skipGitRepoCheck,
outputSchemaFile: schemaPath,
modelReasoningEffort: options?.modelReasoningEffort,
});
try {
for await (const item of generator) {

View File

@@ -2,9 +2,12 @@ export type ApprovalMode = "never" | "on-request" | "on-failure" | "untrusted";
export type SandboxMode = "read-only" | "workspace-write" | "danger-full-access";
export type ModelReasoningEffort = "minimal" | "low" | "medium" | "high";
export type ThreadOptions = {
model?: string;
sandboxMode?: SandboxMode;
workingDirectory?: string;
skipGitRepoCheck?: boolean;
modelReasoningEffort?: ModelReasoningEffort;
};

View File

@@ -223,6 +223,37 @@ describe("Codex", () => {
}
});
it("passes modelReasoningEffort to exec", async () => {
const { url, close } = await startResponsesTestProxy({
statusCode: 200,
responseBodies: [
sse(
responseStarted("response_1"),
assistantMessage("Reasoning effort applied", "item_1"),
responseCompleted("response_1"),
),
],
});
const { args: spawnArgs, restore } = codexExecSpy();
try {
const client = new Codex({ codexPathOverride: codexExecPath, baseUrl: url, apiKey: "test" });
const thread = client.startThread({
modelReasoningEffort: "high",
});
await thread.run("apply reasoning effort");
const commandArgs = spawnArgs[0];
expect(commandArgs).toBeDefined();
expectPair(commandArgs, ["--config", 'model_reasoning_effort="high"']);
} finally {
restore();
await close();
}
});
it("writes output schema to a temporary file and forwards it", async () => {
const { url, close, requests } = await startResponsesTestProxy({
statusCode: 200,