Compare commits

...

44 Commits

Author SHA1 Message Date
zhao-oai
27018edc50 Merge branch 'main' into patch-guard 2025-10-15 09:38:48 -07:00
Gabriel Peal
8a281cd1f4 [MCP] Prompt mcp login when adding a streamable HTTP server that supports oauth (#5193)
1. If Codex detects that a `codex mcp add --url …` server supports oauth,
it will auto-initiate the login flow.
2. If the TUI starts and an MCP server supports oauth but isn't logged
in, it will give the user an explicit warning telling them to log in.
2025-10-15 12:27:40 -04:00
Shijie Rao
e8863b233b feat: updated github issue template (#5191)
### Update GitHub issue template for bug submission.
* Add subscription field
* Require codex cli/extension version
* Require subscription plan
* Require error message with added context
2025-10-15 07:27:24 -07:00
jif-oai
8fed0b53c4 test: reduce time dependency on test harness (#5053)
Tightened the CLI integration tests to stop relying on wall-clock
sleeps: a new fs-watcher helper waits for session files instead of timing
out, and SSE mocks/fixtures make the flows deterministic.
2025-10-15 09:56:59 +01:00
Dylan
00debb6399 fix(core) use regex for all shell_serialization tests (#5189)
## Summary
Thought I switched all of these to using a regex instead, but missed 2.
This should address our [flaky test
problem](https://github.com/openai/codex/actions/runs/18511206616/job/52752341520?pr=5185).

## Test Plan
- [x] Only updated unit tests
2025-10-14 16:29:02 -07:00
kevin zhao
0144fb4fab initial commit 2025-10-14 14:45:15 -07:00
Dylan
0a0a10d8b3 fix: apply_patch shell_serialization tests (#4786)
## Summary
Adds additional shell_serialization tests specifically for apply_patch
and other cases.

## Test Plan
- [x] These are all tests
2025-10-14 13:00:49 -07:00
Javi
13035561cd feat: pass codex thread ID in notifier metadata (#4582) 2025-10-14 11:55:10 -07:00
Jeremy Rose
9be704a934 tui: reserve 1 cell right margin for composer and user history (#5026)
keep a 1 cell margin at the right edge of the screen in the composer
(and in the user message in history).

this lets us print clear-to-EOL 1 char before the end of the line in
history, so that resizing the terminal will keep the background color
(at least in iterm/terminal.app). it also stops the cursor in the
textarea from floating off the right edge.

---------

Co-authored-by: joshka-oai <joshka@openai.com>
2025-10-14 18:02:11 +00:00
jif-oai
f7b4e29609 feat: feature flag (#4948)
Add a proper feature flag mechanism instead of having custom flags for
everything. This is just for the experimental/WIP parts of the code.
It can be used through the CLI:
```bash
codex --enable unified_exec --disable view_image_tool
```

Or in the `config.toml`
```toml
# Global toggles applied to every profile unless overridden.
[features]
apply_patch_freeform = true
view_image_tool = false
```
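
Under the hood (see the `FeatureToggles` change further down this diff),
each `--enable foo` / `--disable bar` is folded into the equivalent
`-c features.foo=true` / `-c features.bar=false` config override before
any subcommand runs.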

Follow-up:
In a following PR, the goal is to have default `bundles` of
features that we can associate with a model
2025-10-14 17:50:00 +00:00
Jeremy Rose
d6c5df9a0a detect Bun installs in CLI update banner (#5074)
## Summary
- detect Bun-managed installs in the JavaScript launcher and set a
dedicated environment flag
- show a Bun-specific upgrade command in the update banner when that
flag is present

Fixes #5012

------
https://chatgpt.com/codex/tasks/task_i_68e95c439494832c835bdf34b3b1774e

---------

Co-authored-by: Michael Bolin <mbolin@openai.com>
2025-10-14 17:49:44 +00:00
Jeremy Rose
8662162f45 cloud: codex cloud exec (#5060)
By analogy to `codex exec`, this kicks off a task in codex cloud
noninteractively.
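
Going by the CLI definition later in this diff, usage looks like
`codex cloud exec --env <ENV_ID> --attempts 2 "<prompt>"`; with no
positional query (or with `-`), the prompt is read from stdin, and the
command prints the created task's URL.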
2025-10-14 10:49:17 -07:00
jif-oai
57584d6f34 fix: the 7 omitted lines issue (#5141)
Before, the CLI was always showing `... +7 lines` (with a constant 7)
due to a double truncation

<img width="263" height="127" alt="Screenshot 2025-10-13 at 10 28 11"
src="https://github.com/user-attachments/assets/49a92d2b-c28a-4e2f-96d1-1818955470b8"
/>
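
A hedged sketch of the bug's shape (illustrative, not the actual codex
code): if the omitted-line count is computed from output that an earlier
stage already clipped, the suffix becomes a constant.

```rust
// Illustrative reconstruction: `limit + 7` and the names are assumptions.
fn render_tail(all_lines: &[&str], limit: usize) -> String {
    // Bug shape: a previous stage already truncated to `limit + 7` lines,
    // so this count comes out as 7 for any sufficiently long output.
    let clipped: Vec<&str> = all_lines.iter().copied().take(limit + 7).collect();
    let omitted = clipped.len().saturating_sub(limit);
    format!("... +{omitted} lines")
}

fn main() {
    let lines: Vec<String> = (0..100).map(|i| format!("line {i}")).collect();
    let refs: Vec<&str> = lines.iter().map(String::as_str).collect();
    // 95 lines were actually omitted, but the UI reports 7.
    assert_eq!(render_tail(&refs, 5), "... +7 lines");
}
```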
2025-10-14 18:15:47 +01:00
jif-oai
268a10f917 feat: add header for task kind (#5142)
Add a header to the responses API request for the task kind (compact,
review, ...) for observability purposes.
The header name is `codex-task-type`
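
A hedged sketch of attaching such a header; only the header name comes
from this commit, while `TaskKind` and its string values are
illustrative assumptions.

```rust
use reqwest::RequestBuilder;

// Assumed enum for illustration; not the actual codex type.
#[derive(Clone, Copy)]
enum TaskKind {
    Compact,
    Review,
}

fn tag_task_kind(req: RequestBuilder, kind: TaskKind) -> RequestBuilder {
    let value = match kind {
        TaskKind::Compact => "compact",
        TaskKind::Review => "review",
    };
    // Header name per the commit message above.
    req.header("codex-task-type", value)
}
```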
2025-10-14 15:17:00 +00:00
jif-oai
5346cc422d feat: discard prompt starting with a slash (#5048)
This does not treat lines starting with a space, or lines containing
multiple `/`, as commands
<img width="550" height="362" alt="Screenshot 2025-10-13 at 10 00 08"
src="https://github.com/user-attachments/assets/17f7347f-db24-47cb-9845-b0eb6fb139cb"
/>
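
A minimal sketch of that heuristic (not the actual implementation):

```rust
// Only a line that starts with `/` in the first column and contains no
// further `/` is treated as a slash command; names are assumptions.
fn looks_like_slash_command(input: &str) -> bool {
    match input.strip_prefix('/') {
        None => false,                     // leading space or ordinary text
        Some(rest) => !rest.contains('/'), // e.g. `/usr/bin/ls` is a path
    }
}

fn main() {
    assert!(looks_like_slash_command("/diff"));
    assert!(!looks_like_slash_command(" /diff"));
    assert!(!looks_like_slash_command("/foo/bar"));
}
```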
2025-10-14 09:47:20 +01:00
Shijie Rao
26f7c46856 fixes #5011: update mcp server doc (#5014) 2025-10-10 17:23:41 -07:00
Jeremy Rose
90af046c5c tui: include the image name in the textarea placeholder (#5056)
Fixes #5013
2025-10-10 09:56:18 -07:00
jif-oai
961ed31901 feat: make shortcut works even with capslock (#5049)
Shortcuts were not working with Caps Lock enabled; this fixes that.
2025-10-10 14:35:28 +00:00
jif-oai
85e7357973 fix: workflow cache (#5050)
Decouple cache saving to fix the `verify` steps never being run due to a
cache saving issue
2025-10-10 15:57:47 +02:00
jif-oai
f98fa85b44 feat: message when stream get correctly resumed (#4988)
<img width="366" height="109" alt="Screenshot 2025-10-09 at 17 44 16"
src="https://github.com/user-attachments/assets/26bc6f60-11bc-4fc6-a1cc-430ca1203969"
/>
2025-10-10 09:07:14 +00:00
Jeremy Rose
ddcaf3dccd tui: fix crash when alt+bksp past unicode nbsp (#5016)
notably, screenshot filenames on macOS by default contain U+202F right
before the "AM/PM" part of the filename.
2025-10-09 15:07:04 -07:00
Jeremy Rose
56296cad82 tui: /diff mode wraps long lines (#4891)
fixes a regression that stopped long lines from being wrapped when
viewing `/diff`.
2025-10-09 14:01:45 -07:00
Jeremy Rose
95b41dd7f1 tui: fix wrapping in trust_directory (#5007)
Refactor trust_directory to use ColumnRenderable & friends, thus
correcting wrapping behavior at small widths. Also introduce
RowRenderable with fixed-width rows.

- fixed wrapping in trust_directory
- changed selector cursor to match other list item selections
- allow y/n to work as well as 1/2
- fixed key_hint to be standard

before:
<img width="661" height="550" alt="Screenshot 2025-10-09 at 9 50 36 AM"
src="https://github.com/user-attachments/assets/e01627aa-bee4-4e25-8eca-5575c43f05bf"
/>

after:
<img width="661" height="550" alt="Screenshot 2025-10-09 at 9 51 31 AM"
src="https://github.com/user-attachments/assets/cb816cbd-7609-4c83-b62f-b4dba392d79a"
/>
2025-10-09 17:39:45 +00:00
Jeremy Rose
bf82353f45 tui: fix wrapping in user approval decisions (#5008)
before:
<img width="706" height="71" alt="Screenshot 2025-10-09 at 10 20 57 AM"
src="https://github.com/user-attachments/assets/ff758b77-4e64-4736-b867-7ebf596e4e65"
/>

after:
<img width="706" height="71" alt="Screenshot 2025-10-09 at 10 20 35 AM"
src="https://github.com/user-attachments/assets/6a44efc0-d9ee-40ce-a709-cce969d6e3b3"
/>
2025-10-09 10:37:13 -07:00
pakrym-oai
0308febc23 Remove unused type (#5003)
It was never exported
2025-10-09 10:29:22 -07:00
Shijie Rao
7b4a4c2219 Shijie/codesign binary (#4899)
### Summary
* Added code signing for macOS.

### Before - UNSIGNED codex-aarch64
<img width="716" height="334" alt="Screenshot 2025-10-08 at 11 53 28 AM"
src="https://github.com/user-attachments/assets/276000f1-8be2-4b89-9aff-858fac28b4d4"
/>

### After - SIGNED codex-aarch64
<img width="706" height="410" alt="Screenshot 2025-10-08 at 11 52 20 AM"
src="https://github.com/user-attachments/assets/927528f8-2686-4d15-b3cb-c47a8f11ef29"
/>
2025-10-09 09:42:24 -07:00
jif-oai
3ddd4d47d0 fix: lagged output in unified_exec buffer (#4992)
Handle the `Lagged` error when the unified_exec broadcast buffer is
full
2025-10-09 16:06:07 +00:00
jif-oai
ca6a0358de bug: sandbox denied error logs (#4874)
Check STDOUT/STDERR or the aggregated output for tell-tale logs when
the sandbox denies an operation
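
A hedged sketch of the shape of such a check; the marker substrings are
assumptions, not the patterns codex actually matches.

```rust
fn looks_sandbox_denied(stdout: &str, stderr: &str, aggregated: &str) -> bool {
    // Assumed markers for illustration only.
    const MARKERS: &[&str] = &["Operation not permitted", "Permission denied"];
    [stdout, stderr, aggregated]
        .into_iter()
        .any(|out| MARKERS.iter().any(|m| out.contains(m)))
}
```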
2025-10-09 16:01:01 +00:00
jif-oai
0026b12615 feat: indentation mode for read_file (#4887)
Add a read_file mode that selects a region of the file based on its
indentation level
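
A hedged sketch of the idea; the names and the blank-line handling are
illustrative assumptions.

```rust
/// Grow a region around `target` to cover every contiguous line indented
/// at least as deeply as the target line (blank lines don't break it).
fn indentation_region(lines: &[&str], target: usize) -> (usize, usize) {
    let depth = |s: &str| s.len() - s.trim_start().len();
    let base = depth(lines[target]);
    let keep = |s: &str| s.trim().is_empty() || depth(s) >= base;
    let mut start = target;
    while start > 0 && keep(lines[start - 1]) {
        start -= 1;
    }
    let mut end = target;
    while end + 1 < lines.len() && keep(lines[end + 1]) {
        end += 1;
    }
    (start, end)
}
```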
2025-10-09 15:55:02 +00:00
dedrisian-oai
4300236681 revert /name for now (#4978)
There was a regression where we'd read entire rollout contents if there
was no /name present.
2025-10-08 17:13:49 -07:00
dedrisian-oai
ec238a2c39 feat: Set chat name (#4974)
Set a chat name with `/name` so it appears on the codex resume page:


https://github.com/user-attachments/assets/c0252bba-3a53-44c7-a740-f4690a3ad405
2025-10-08 16:35:35 -07:00
rakesh-oai
b6165aee0c Create alias (#4971)
# External (non-OpenAI) Pull Request Requirements

Before opening this Pull Request, please read the dedicated
"Contributing" markdown file or your PR may be closed:
https://github.com/openai/codex/blob/main/docs/contributing.md

If your PR conforms to our contribution guidelines, replace this text
with a detailed and high quality description of your changes.
2025-10-08 22:29:20 +00:00
Jeremy Rose
f4bc03d7c0 tui: fix off-by-16 in terminal_palette (#4967)
caught by a bad refactor in #4957
2025-10-08 14:57:32 -07:00
Gabriel Peal
3c5e12e2a4 [MCP] Add auth status to MCP servers (#4918)
This adds a queryable auth status for MCP servers, which is useful:
1. To determine whether a streamable HTTP server supports auth, based
on whether or not it supports RFC 8414 §3.2
2. To allow us to build a better user experience on top of MCP status
2025-10-08 17:37:57 -04:00
dedrisian-oai
c89229db97 Make context line permanent (#4699)
https://github.com/user-attachments/assets/f72c64de-8d6a-45b6-93df-f3a68038067f
2025-10-08 14:32:54 -07:00
Gabriel Peal
d3820f4782 [MCP] Add an enabled config field (#4917)
This lets users more easily toggle MCP servers.
2025-10-08 16:24:51 -04:00
Jeremy Rose
e896db1180 tui: hardcode xterm palette, shimmer blends between fg and bg (#4957)
Instead of querying all 256 terminal colors on startup, which was slow
in some terminals, hardcode the default xterm palette.

Additionally, tweak the shimmer so that it blends between default_fg and
default_bg, instead of "dark gray" (according to the terminal) and pure
white (regardless of terminal theme).
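
A minimal sketch of such a blend, assuming RGB tuples for
default_fg/default_bg (the real shimmer code differs in detail):

```rust
fn blend(fg: (u8, u8, u8), bg: (u8, u8, u8), t: f32) -> (u8, u8, u8) {
    // Linear interpolation per channel; t sweeps 0.0..=1.0 over the cycle.
    let mix = |a: u8, b: u8| (f32::from(a) + (f32::from(b) - f32::from(a)) * t).round() as u8;
    (mix(fg.0, bg.0), mix(fg.1, bg.1), mix(fg.2, bg.2))
}

fn main() {
    assert_eq!(blend((0, 0, 0), (255, 255, 255), 0.5), (128, 128, 128));
}
```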
2025-10-08 20:23:13 +00:00
dedrisian-oai
96acb8a74e Fix transcript mode rendering issue when showing tab chars (#4911)
There's a weird rendering issue with transcript mode: Tab chars bleed
through when scrolling up/down.

e.g. `nl -ba ...` adds tab chars to each line, which makes scrolling
look glitchy in transcript mode.

Before:


https://github.com/user-attachments/assets/631ee7fc-6083-4d35-aaf0-a0b08e734470

After:


https://github.com/user-attachments/assets/bbba6111-4bfc-4862-8357-0f51aa2a21ac
2025-10-08 11:42:09 -07:00
jif-oai
687a13bbe5 feat: truncate on compact (#4942)
Truncate the message during compaction if it is just too large.
Do it iteratively, as tokenization is basically free on the server side.
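
A hedged sketch of the iterative approach; the 10% step and
`count_tokens` are illustrative assumptions.

```rust
fn truncate_for_compaction(
    mut message: String,
    max_tokens: usize,
    count_tokens: impl Fn(&str) -> usize,
) -> String {
    // Re-measure after each cut instead of predicting a cut point up
    // front; tokenization is cheap, so a few extra rounds are fine.
    while count_tokens(&message) > max_tokens && !message.is_empty() {
        let mut cut = message.len() * 9 / 10; // drop ~10% per round
        while !message.is_char_boundary(cut) {
            cut -= 1; // never split a UTF-8 code point
        }
        message.truncate(cut);
    }
    message
}
```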
2025-10-08 18:11:08 +01:00
Michael Bolin
fe8122e514 fix: change log_sse_event() so it no longer takes a closure (#4953)
Unlikely fix for https://github.com/openai/codex/issues/4381, but worth a shot given that https://github.com/openai/codex/pull/2103 changed around the same time.
2025-10-08 16:53:35 +00:00
jif-oai
876d4f450a bug: fix CLI UP/ENTER (#4944)
Clear the history cursor before checking for duplicate submissions so
sending the same message twice exits history mode. This prevents Up/Down
from staying stuck in history browsing after duplicate sends.
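
A hedged sketch of the ordering fix; the field and method names are
assumptions.

```rust
struct Composer {
    history_cursor: Option<usize>,
    last_sent: Option<String>,
}

impl Composer {
    fn on_enter(&mut self, text: String) -> bool {
        // Fix: exit history browsing *before* the duplicate check, so an
        // early return below no longer leaves Up/Down stuck in history.
        self.history_cursor = None;
        if self.last_sent.as_deref() == Some(text.as_str()) {
            return false; // duplicate submission: dropped, cursor already reset
        }
        self.last_sent = Some(text);
        true
    }
}
```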
2025-10-08 07:07:29 -07:00
jif-oai
f52320be86 feat: grep_files as a tool (#4820)
Add `grep_files` to be able to perform more actions in parallel
2025-10-08 11:02:50 +01:00
Gabriel Peal
a43ae86b6c [MCP] Add support for streamable http servers with codex mcp add and replace bearer token handling (#4904)
1. You can now add streamable http servers via the CLI
2. As part of this, I'm also replacing the existing bearer_token
plain-text config field with an env var

```
mcp add github --url https://api.githubcopilot.com/mcp/ --bearer-token-env-var=GITHUB_PAT
```
2025-10-07 23:21:37 -04:00
Gabriel Peal
496cb801e1 [MCP] Add the ability to explicitly specify a credentials store (#4857)
This lets users/companies explicitly choose whether to force/disallow
the keyring/fallback file storage for mcp credentials.

People who develop with Codex will want to use this until we sign
binaries, or else each ad-hoc debug build will require keychain access
on every build. I don't love this and am open to other ideas for how to
handle that.


```toml
mcp_oauth_credentials_store = "auto"
mcp_oauth_credentials_store = "file"
mcp_oauth_credentials_store = "keyrung"
```
Defaults to `auto`
2025-10-07 22:39:32 -04:00
124 changed files with 7158 additions and 1402 deletions

View File

@@ -20,6 +20,14 @@ body:
attributes:
label: What version of Codex is running?
description: Copy the output of `codex --version`
validations:
required: true
- type: input
id: plan
attributes:
label: What subscription do you have?
validations:
required: true
- type: input
id: model
attributes:
@@ -32,11 +40,18 @@ body:
description: |
For MacOS and Linux: copy the output of `uname -mprs`
For Windows: copy the output of `"$([Environment]::OSVersion | ForEach-Object VersionString) $(if ([Environment]::Is64BitOperatingSystem) { "x64" } else { "x86" })"` in the PowerShell console
- type: textarea
id: actual
attributes:
label: What issue are you seeing?
description: Please include the full error messages and prompts with PII redacted. If possible, please provide text instead of a screenshot.
validations:
required: true
- type: textarea
id: steps
attributes:
label: What steps can reproduce the bug?
description: Explain the bug and provide a code snippet that can reproduce it.
description: Explain the bug and provide a code snippet that can reproduce it. Please include session id, token limit usage, context window usage if applicable.
validations:
required: true
- type: textarea
@@ -44,11 +59,6 @@ body:
attributes:
label: What is the expected behavior?
description: If possible, please provide text instead of a screenshot.
- type: textarea
id: actual
attributes:
label: What do you see instead?
description: If possible, please provide text instead of a screenshot.
- type: textarea
id: notes
attributes:

View File

@@ -14,11 +14,21 @@ body:
id: version
attributes:
label: What version of the VS Code extension are you using?
validations:
required: true
- type: input
id: plan
attributes:
label: What subscription do you have?
validations:
required: true
- type: input
id: ide
attributes:
label: Which IDE are you using?
description: Like `VS Code`, `Cursor`, `Windsurf`, etc.
validations:
required: true
- type: input
id: platform
attributes:
@@ -26,11 +36,18 @@ body:
description: |
For MacOS and Linux: copy the output of `uname -mprs`
For Windows: copy the output of `"$([Environment]::OSVersion | ForEach-Object VersionString) $(if ([Environment]::Is64BitOperatingSystem) { "x64" } else { "x86" })"` in the PowerShell console
- type: textarea
id: actual
attributes:
label: What issue are you seeing?
description: Please include the full error messages and prompts with PII redacted. If possible, please provide text instead of a screenshot.
validations:
required: true
- type: textarea
id: steps
attributes:
label: What steps can reproduce the bug?
description: Explain the bug and provide a code snippet that can reproduce it.
description: Explain the bug and provide a code snippet that can reproduce it. Please include session id, token limit usage, context window usage if applicable.
validations:
required: true
- type: textarea
@@ -38,11 +55,6 @@ body:
attributes:
label: What is the expected behavior?
description: If possible, please provide text instead of a screenshot.
- type: textarea
id: actual
attributes:
label: What do you see instead?
description: If possible, please provide text instead of a screenshot.
- type: textarea
id: notes
attributes:

View File

@@ -148,15 +148,26 @@ jobs:
targets: ${{ matrix.target }}
components: clippy
- uses: actions/cache@v4
# Explicit cache restore: split cargo home vs target, so we can
# avoid caching the large target dir on the gnu-dev job.
- name: Restore cargo home cache
id: cache_cargo_home_restore
uses: actions/cache/restore@v4
with:
path: |
~/.cargo/bin/
~/.cargo/registry/index/
~/.cargo/registry/cache/
~/.cargo/git/db/
${{ github.workspace }}/codex-rs/target/
key: cargo-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-${{ hashFiles('**/Cargo.lock') }}
key: cargo-home-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-${{ hashFiles('**/Cargo.lock') }}
- name: Restore target cache (except gnu-dev)
id: cache_target_restore
if: ${{ !(matrix.target == 'x86_64-unknown-linux-gnu' && matrix.profile != 'release') }}
uses: actions/cache/restore@v4
with:
path: ${{ github.workspace }}/codex-rs/target/
key: cargo-target-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-${{ hashFiles('**/Cargo.lock') }}
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl'}}
name: Install musl build tools
@@ -194,6 +205,31 @@ jobs:
env:
RUST_BACKTRACE: 1
# Save caches explicitly; make non-fatal so cache packaging
# never fails the overall job. Only save when key wasn't hit.
- name: Save cargo home cache
if: always() && !cancelled() && steps.cache_cargo_home_restore.outputs.cache-hit != 'true'
continue-on-error: true
uses: actions/cache/save@v4
with:
path: |
~/.cargo/bin/
~/.cargo/registry/index/
~/.cargo/registry/cache/
~/.cargo/git/db/
key: cargo-home-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-${{ hashFiles('**/Cargo.lock') }}
- name: Save target cache (except gnu-dev)
if: >-
always() && !cancelled() &&
(steps.cache_target_restore.outputs.cache-hit != 'true') &&
!(matrix.target == 'x86_64-unknown-linux-gnu' && matrix.profile != 'release')
continue-on-error: true
uses: actions/cache/save@v4
with:
path: ${{ github.workspace }}/codex-rs/target/
key: cargo-target-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-${{ hashFiles('**/Cargo.lock') }}
# Fail the job if any of the previous steps failed.
- name: verify all steps passed
if: |

View File

@@ -47,7 +47,7 @@ jobs:
build:
needs: tag-check
name: ${{ matrix.runner }} - ${{ matrix.target }}
name: Build - ${{ matrix.runner }} - ${{ matrix.target }}
runs-on: ${{ matrix.runner }}
timeout-minutes: 30
defaults:
@@ -94,11 +94,118 @@ jobs:
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl'}}
name: Install musl build tools
run: |
sudo apt install -y musl-tools pkg-config
sudo apt-get update
sudo apt-get install -y musl-tools pkg-config
- name: Cargo build
run: cargo build --target ${{ matrix.target }} --release --bin codex --bin codex-responses-api-proxy
- if: ${{ matrix.runner == 'macos-14' }}
name: Configure Apple code signing
shell: bash
env:
KEYCHAIN_PASSWORD: actions
APPLE_CERTIFICATE: ${{ secrets.APPLE_CERTIFICATE_P12 }}
APPLE_CERTIFICATE_PASSWORD: ${{ secrets.APPLE_CERTIFICATE_PASSWORD }}
run: |
set -euo pipefail
if [[ -z "${APPLE_CERTIFICATE:-}" ]]; then
echo "APPLE_CERTIFICATE is required for macOS signing"
exit 1
fi
if [[ -z "${APPLE_CERTIFICATE_PASSWORD:-}" ]]; then
echo "APPLE_CERTIFICATE_PASSWORD is required for macOS signing"
exit 1
fi
cert_path="${RUNNER_TEMP}/apple_signing_certificate.p12"
echo "$APPLE_CERTIFICATE" | base64 -d > "$cert_path"
keychain_path="${RUNNER_TEMP}/codex-signing.keychain-db"
security create-keychain -p "$KEYCHAIN_PASSWORD" "$keychain_path"
security set-keychain-settings -lut 21600 "$keychain_path"
security unlock-keychain -p "$KEYCHAIN_PASSWORD" "$keychain_path"
keychain_args=()
cleanup_keychain() {
if ((${#keychain_args[@]} > 0)); then
security list-keychains -s "${keychain_args[@]}" || true
security default-keychain -s "${keychain_args[0]}" || true
else
security list-keychains -s || true
fi
if [[ -f "$keychain_path" ]]; then
security delete-keychain "$keychain_path" || true
fi
}
while IFS= read -r keychain; do
[[ -n "$keychain" ]] && keychain_args+=("$keychain")
done < <(security list-keychains | sed 's/^[[:space:]]*//;s/[[:space:]]*$//;s/"//g')
if ((${#keychain_args[@]} > 0)); then
security list-keychains -s "$keychain_path" "${keychain_args[@]}"
else
security list-keychains -s "$keychain_path"
fi
security default-keychain -s "$keychain_path"
security import "$cert_path" -k "$keychain_path" -P "$APPLE_CERTIFICATE_PASSWORD" -T /usr/bin/codesign -T /usr/bin/security
security set-key-partition-list -S apple-tool:,apple: -s -k "$KEYCHAIN_PASSWORD" "$keychain_path" > /dev/null
codesign_hashes=()
while IFS= read -r hash; do
[[ -n "$hash" ]] && codesign_hashes+=("$hash")
done < <(security find-identity -v -p codesigning "$keychain_path" \
| sed -n 's/.*\([0-9A-F]\{40\}\).*/\1/p' \
| sort -u)
if ((${#codesign_hashes[@]} == 0)); then
echo "No signing identities found in $keychain_path"
cleanup_keychain
rm -f "$cert_path"
exit 1
fi
if ((${#codesign_hashes[@]} > 1)); then
echo "Multiple signing identities found in $keychain_path:"
printf ' %s\n' "${codesign_hashes[@]}"
cleanup_keychain
rm -f "$cert_path"
exit 1
fi
APPLE_CODESIGN_IDENTITY="${codesign_hashes[0]}"
rm -f "$cert_path"
echo "APPLE_CODESIGN_IDENTITY=$APPLE_CODESIGN_IDENTITY" >> "$GITHUB_ENV"
echo "APPLE_CODESIGN_KEYCHAIN=$keychain_path" >> "$GITHUB_ENV"
echo "::add-mask::$APPLE_CODESIGN_IDENTITY"
- if: ${{ matrix.runner == 'macos-14' }}
name: Sign macOS binaries
shell: bash
run: |
set -euo pipefail
if [[ -z "${APPLE_CODESIGN_IDENTITY:-}" ]]; then
echo "APPLE_CODESIGN_IDENTITY is required for macOS signing"
exit 1
fi
keychain_args=()
if [[ -n "${APPLE_CODESIGN_KEYCHAIN:-}" && -f "${APPLE_CODESIGN_KEYCHAIN}" ]]; then
keychain_args+=(--keychain "${APPLE_CODESIGN_KEYCHAIN}")
fi
for binary in codex codex-responses-api-proxy; do
path="target/${{ matrix.target }}/release/${binary}"
codesign --force --options runtime --timestamp --sign "$APPLE_CODESIGN_IDENTITY" "${keychain_args[@]}" "$path"
done
- name: Stage artifacts
shell: bash
run: |
@@ -157,6 +264,29 @@ jobs:
zstd -T0 -19 --rm "$dest/$base"
done
- name: Remove signing keychain
if: ${{ always() && matrix.runner == 'macos-14' }}
shell: bash
env:
APPLE_CODESIGN_KEYCHAIN: ${{ env.APPLE_CODESIGN_KEYCHAIN }}
run: |
set -euo pipefail
if [[ -n "${APPLE_CODESIGN_KEYCHAIN:-}" ]]; then
keychain_args=()
while IFS= read -r keychain; do
[[ "$keychain" == "$APPLE_CODESIGN_KEYCHAIN" ]] && continue
[[ -n "$keychain" ]] && keychain_args+=("$keychain")
done < <(security list-keychains | sed 's/^[[:space:]]*//;s/[[:space:]]*$//;s/"//g')
if ((${#keychain_args[@]} > 0)); then
security list-keychains -s "${keychain_args[@]}"
security default-keychain -s "${keychain_args[0]}"
fi
if [[ -f "$APPLE_CODESIGN_KEYCHAIN" ]]; then
security delete-keychain "$APPLE_CODESIGN_KEYCHAIN"
fi
fi
- uses: actions/upload-artifact@v4
with:
name: ${{ matrix.target }}

codex-cli/bin/codex.js Executable file → Normal file
View File

@@ -80,6 +80,32 @@ function getUpdatedPath(newDirs) {
return updatedPath;
}
/**
* Use heuristics to detect the package manager that was used to install Codex
* in order to give the user a hint about how to update it.
*/
function detectPackageManager() {
const userAgent = process.env.npm_config_user_agent || "";
if (/\bbun\//.test(userAgent)) {
return "bun";
}
const execPath = process.env.npm_execpath || "";
if (execPath.includes("bun")) {
return "bun";
}
if (
process.env.BUN_INSTALL ||
process.env.BUN_INSTALL_GLOBAL_DIR ||
process.env.BUN_INSTALL_BIN_DIR
) {
return "bun";
}
return userAgent ? "npm" : null;
}
const additionalDirs = [];
const pathDir = path.join(archRoot, "path");
if (existsSync(pathDir)) {
@@ -87,9 +113,16 @@ if (existsSync(pathDir)) {
}
const updatedPath = getUpdatedPath(additionalDirs);
const env = { ...process.env, PATH: updatedPath };
const packageManagerEnvVar =
detectPackageManager() === "bun"
? "CODEX_MANAGED_BY_BUN"
: "CODEX_MANAGED_BY_NPM";
env[packageManagerEnvVar] = "1";
const child = spawn(binaryPath, process.argv.slice(2), {
stdio: "inherit",
env: { ...process.env, PATH: updatedPath, CODEX_MANAGED_BY_NPM: "1" },
env,
});
child.on("error", (err) => {

codex-rs/Cargo.lock generated
View File

@@ -1051,6 +1051,7 @@ dependencies = [
"escargot",
"eventsource-stream",
"futures",
"ignore",
"indexmap 2.10.0",
"landlock",
"libc",
@@ -1069,6 +1070,7 @@ dependencies = [
"serde_json",
"serial_test",
"sha1",
"sha2",
"shlex",
"similar",
"strum_macros 0.27.2",
@@ -1352,6 +1354,7 @@ version = "0.0.0"
dependencies = [
"anyhow",
"axum",
"codex-protocol",
"dirs",
"futures",
"keyring",
@@ -1576,10 +1579,12 @@ dependencies = [
"anyhow",
"assert_cmd",
"codex-core",
"notify",
"regex-lite",
"serde_json",
"tempfile",
"tokio",
"walkdir",
"wiremock",
]
@@ -2370,6 +2375,15 @@ dependencies = [
"percent-encoding",
]
[[package]]
name = "fsevent-sys"
version = "4.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "76ee7a02da4d231650c7cea31349b889be2f45ddb3ef3032d2ec8185f6313fd2"
dependencies = [
"libc",
]
[[package]]
name = "futures"
version = "0.3.31"
@@ -3056,6 +3070,26 @@ version = "2.0.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f4c7245a08504955605670dbf141fceab975f15ca21570696aebe9d2e71576bd"
[[package]]
name = "inotify"
version = "0.11.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f37dccff2791ab604f9babef0ba14fbe0be30bd368dc541e2b08d07c8aa908f3"
dependencies = [
"bitflags 2.9.1",
"inotify-sys",
"libc",
]
[[package]]
name = "inotify-sys"
version = "0.1.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e05c02b5e89bff3b946cedeca278abc628fe811e604f027c45a8aa3cf793d0eb"
dependencies = [
"libc",
]
[[package]]
name = "inout"
version = "0.1.4"
@@ -3256,6 +3290,26 @@ dependencies = [
"zeroize",
]
[[package]]
name = "kqueue"
version = "1.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "eac30106d7dce88daf4a3fcb4879ea939476d5074a9b7ddd0fb97fa4bed5596a"
dependencies = [
"kqueue-sys",
"libc",
]
[[package]]
name = "kqueue-sys"
version = "1.0.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ed9625ffda8729b85e45cf04090035ac368927b8cebc34898e7c120f52e4838b"
dependencies = [
"bitflags 1.3.2",
"libc",
]
[[package]]
name = "lalrpop"
version = "0.19.12"
@@ -3655,6 +3709,30 @@ version = "0.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "61807f77802ff30975e01f4f071c8ba10c022052f98b3294119f3e615d13e5be"
[[package]]
name = "notify"
version = "8.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4d3d07927151ff8575b7087f245456e549fea62edf0ec4e565a5ee50c8402bc3"
dependencies = [
"bitflags 2.9.1",
"fsevent-sys",
"inotify",
"kqueue",
"libc",
"log",
"mio",
"notify-types",
"walkdir",
"windows-sys 0.60.2",
]
[[package]]
name = "notify-types"
version = "2.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5e0826a989adedc2a244799e823aece04662b66609d96af8dff7ac6df9a8925d"
[[package]]
name = "nu-ansi-term"
version = "0.50.1"
@@ -4772,9 +4850,9 @@ dependencies = [
[[package]]
name = "rmcp"
version = "0.8.0"
version = "0.8.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "583d060e99feb3a3683fb48a1e4bf5f8d4a50951f429726f330ee5ff548837f8"
checksum = "6f35acda8f89fca5fd8c96cae3c6d5b4c38ea0072df4c8030915f3b5ff469c1c"
dependencies = [
"base64",
"bytes",
@@ -4806,9 +4884,9 @@ dependencies = [
[[package]]
name = "rmcp-macros"
version = "0.8.0"
version = "0.8.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "421d8b0ba302f479214889486f9550e63feca3af310f1190efcf6e2016802693"
checksum = "c9f1d5220aaa23b79c3d02e18f7a554403b3ccea544bbb6c69d6bcb3e854a274"
dependencies = [
"darling 0.21.3",
"proc-macro2",

View File

@@ -122,6 +122,7 @@ log = "0.4"
maplit = "1.0.2"
mime_guess = "2.0.5"
multimap = "0.10.0"
notify = "8.2.0"
nucleo-matcher = "0.3.1"
openssl-sys = "*"
opentelemetry = "0.30.0"

View File

@@ -3,11 +3,30 @@ use ansi_to_tui::IntoText;
use ratatui::text::Line;
use ratatui::text::Text;
// Expand tabs in a best-effort way for transcript rendering.
// Tabs can interact poorly with left-gutter prefixes in our TUI and CLI
// transcript views (e.g., `nl` separates line numbers from content with a tab).
// Replacing tabs with spaces avoids odd visual artifacts without changing
// semantics for our use cases.
fn expand_tabs(s: &str) -> std::borrow::Cow<'_, str> {
if s.contains('\t') {
// Keep it simple: replace each tab with 4 spaces.
// We do not try to align to tab stops since most usages (like `nl`)
// look acceptable with a fixed substitution and this avoids stateful math
// across spans.
std::borrow::Cow::Owned(s.replace('\t', "    "))
} else {
std::borrow::Cow::Borrowed(s)
}
}
/// This function should be used when the contents of `s` are expected to match
/// a single line. If multiple lines are found, a warning is logged and only the
/// first line is returned.
pub fn ansi_escape_line(s: &str) -> Line<'static> {
let text = ansi_escape(s);
// Normalize tabs to spaces to avoid odd gutter collisions in transcript mode.
let s = expand_tabs(s);
let text = ansi_escape(&s);
match text.lines.as_slice() {
[] => "".into(),
[only] => only.clone(),

View File

@@ -26,6 +26,8 @@ use supports_color::Stream;
mod mcp_cmd;
use crate::mcp_cmd::McpCli;
use codex_core::config::Config;
use codex_core::config::ConfigOverrides;
/// Codex CLI
///
@@ -45,6 +47,9 @@ struct MultitoolCli {
#[clap(flatten)]
pub config_overrides: CliConfigOverrides,
#[clap(flatten)]
pub feature_toggles: FeatureToggles,
#[clap(flatten)]
interactive: TuiCli,
@@ -97,6 +102,9 @@ enum Subcommand {
/// Internal: run the responses API proxy.
#[clap(hide = true)]
ResponsesApiProxy(ResponsesApiProxyArgs),
/// Inspect feature flags.
Features(FeaturesCli),
}
#[derive(Debug, Parser)]
@@ -157,7 +165,7 @@ struct LoginCommand {
)]
api_key: Option<String>,
#[arg(long = "use-device-code")]
#[arg(long = "device-auth")]
use_device_code: bool,
/// EXPERIMENTAL: Use custom OAuth issuer base URL (advanced)
@@ -231,6 +239,53 @@ fn print_exit_messages(exit_info: AppExitInfo) {
}
}
#[derive(Debug, Default, Parser, Clone)]
struct FeatureToggles {
/// Enable a feature (repeatable). Equivalent to `-c features.<name>=true`.
#[arg(long = "enable", value_name = "FEATURE", action = clap::ArgAction::Append, global = true)]
enable: Vec<String>,
/// Disable a feature (repeatable). Equivalent to `-c features.<name>=false`.
#[arg(long = "disable", value_name = "FEATURE", action = clap::ArgAction::Append, global = true)]
disable: Vec<String>,
}
impl FeatureToggles {
fn to_overrides(&self) -> Vec<String> {
let mut v = Vec::new();
for k in &self.enable {
v.push(format!("features.{k}=true"));
}
for k in &self.disable {
v.push(format!("features.{k}=false"));
}
v
}
}
#[derive(Debug, Parser)]
struct FeaturesCli {
#[command(subcommand)]
sub: FeaturesSubcommand,
}
#[derive(Debug, Parser)]
enum FeaturesSubcommand {
/// List known features with their stage and effective state.
List,
}
fn stage_str(stage: codex_core::features::Stage) -> &'static str {
use codex_core::features::Stage;
match stage {
Stage::Experimental => "experimental",
Stage::Beta => "beta",
Stage::Stable => "stable",
Stage::Deprecated => "deprecated",
Stage::Removed => "removed",
}
}
/// As early as possible in the process lifecycle, apply hardening measures. We
/// skip this in debug builds to avoid interfering with debugging.
#[ctor::ctor]
@@ -248,11 +303,17 @@ fn main() -> anyhow::Result<()> {
async fn cli_main(codex_linux_sandbox_exe: Option<PathBuf>) -> anyhow::Result<()> {
let MultitoolCli {
config_overrides: root_config_overrides,
config_overrides: mut root_config_overrides,
feature_toggles,
mut interactive,
subcommand,
} = MultitoolCli::parse();
// Fold --enable/--disable into config overrides so they flow to all subcommands.
root_config_overrides
.raw_overrides
.extend(feature_toggles.to_overrides());
match subcommand {
None => {
prepend_config_flags(
@@ -381,6 +442,30 @@ async fn cli_main(codex_linux_sandbox_exe: Option<PathBuf>) -> anyhow::Result<()
Some(Subcommand::GenerateTs(gen_cli)) => {
codex_protocol_ts::generate_ts(&gen_cli.out_dir, gen_cli.prettier.as_deref())?;
}
Some(Subcommand::Features(FeaturesCli { sub })) => match sub {
FeaturesSubcommand::List => {
// Respect root-level `-c` overrides plus top-level flags like `--profile`.
let cli_kv_overrides = root_config_overrides
.parse_overrides()
.map_err(|e| anyhow::anyhow!(e))?;
// Thread through relevant top-level flags (at minimum, `--profile`).
// Also honor `--search` since it maps to a feature toggle.
let overrides = ConfigOverrides {
config_profile: interactive.config_profile.clone(),
tools_web_search_request: interactive.web_search.then_some(true),
..Default::default()
};
let config = Config::load_with_cli_overrides(cli_kv_overrides, overrides).await?;
for def in codex_core::features::FEATURES.iter() {
let name = def.key;
let stage = stage_str(def.stage);
let enabled = config.features.enabled(def.id);
println!("{name}\t{stage}\t{enabled}");
}
}
},
}
Ok(())
@@ -484,6 +569,7 @@ mod tests {
interactive,
config_overrides: root_overrides,
subcommand,
feature_toggles: _,
} = cli;
let Subcommand::Resume(ResumeCommand {

View File

@@ -4,6 +4,7 @@ use anyhow::Context;
use anyhow::Result;
use anyhow::anyhow;
use anyhow::bail;
use clap::ArgGroup;
use codex_common::CliConfigOverrides;
use codex_core::config::Config;
use codex_core::config::ConfigOverrides;
@@ -12,8 +13,12 @@ use codex_core::config::load_global_mcp_servers;
use codex_core::config::write_global_mcp_servers;
use codex_core::config_types::McpServerConfig;
use codex_core::config_types::McpServerTransportConfig;
use codex_core::features::Feature;
use codex_core::mcp::auth::compute_auth_statuses;
use codex_core::protocol::McpAuthStatus;
use codex_rmcp_client::delete_oauth_tokens;
use codex_rmcp_client::perform_oauth_login;
use codex_rmcp_client::supports_oauth_login;
/// [experimental] Launch Codex as an MCP server or manage configured MCP servers.
///
@@ -77,13 +82,61 @@ pub struct AddArgs {
/// Name for the MCP server configuration.
pub name: String,
/// Environment variables to set when launching the server.
#[arg(long, value_parser = parse_env_pair, value_name = "KEY=VALUE")]
pub env: Vec<(String, String)>,
#[command(flatten)]
pub transport_args: AddMcpTransportArgs,
}
#[derive(Debug, clap::Args)]
#[command(
group(
ArgGroup::new("transport")
.args(["command", "url"])
.required(true)
.multiple(false)
)
)]
pub struct AddMcpTransportArgs {
#[command(flatten)]
pub stdio: Option<AddMcpStdioArgs>,
#[command(flatten)]
pub streamable_http: Option<AddMcpStreamableHttpArgs>,
}
#[derive(Debug, clap::Args)]
pub struct AddMcpStdioArgs {
/// Command to launch the MCP server.
#[arg(trailing_var_arg = true, num_args = 1..)]
/// Use --url for a streamable HTTP server.
#[arg(
trailing_var_arg = true,
num_args = 0..,
)]
pub command: Vec<String>,
/// Environment variables to set when launching the server.
/// Only valid with stdio servers.
#[arg(
long,
value_parser = parse_env_pair,
value_name = "KEY=VALUE",
)]
pub env: Vec<(String, String)>,
}
#[derive(Debug, clap::Args)]
pub struct AddMcpStreamableHttpArgs {
/// URL for a streamable HTTP MCP server.
#[arg(long)]
pub url: String,
/// Optional environment variable to read for a bearer token.
/// Only valid with streamable HTTP servers.
#[arg(
long = "bearer-token-env-var",
value_name = "ENV_VAR",
requires = "url"
)]
pub bearer_token_env_var: Option<String>,
}
#[derive(Debug, clap::Parser)]
@@ -138,39 +191,61 @@ impl McpCli {
async fn run_add(config_overrides: &CliConfigOverrides, add_args: AddArgs) -> Result<()> {
// Validate any provided overrides even though they are not currently applied.
config_overrides.parse_overrides().map_err(|e| anyhow!(e))?;
let overrides = config_overrides.parse_overrides().map_err(|e| anyhow!(e))?;
let config = Config::load_with_cli_overrides(overrides, ConfigOverrides::default())
.await
.context("failed to load configuration")?;
let AddArgs { name, env, command } = add_args;
let AddArgs {
name,
transport_args,
} = add_args;
validate_server_name(&name)?;
let mut command_parts = command.into_iter();
let command_bin = command_parts
.next()
.ok_or_else(|| anyhow!("command is required"))?;
let command_args: Vec<String> = command_parts.collect();
let env_map = if env.is_empty() {
None
} else {
let mut map = HashMap::new();
for (key, value) in env {
map.insert(key, value);
}
Some(map)
};
let codex_home = find_codex_home().context("failed to resolve CODEX_HOME")?;
let mut servers = load_global_mcp_servers(&codex_home)
.await
.with_context(|| format!("failed to load MCP servers from {}", codex_home.display()))?;
let new_entry = McpServerConfig {
transport: McpServerTransportConfig::Stdio {
command: command_bin,
args: command_args,
env: env_map,
let transport = match transport_args {
AddMcpTransportArgs {
stdio: Some(stdio), ..
} => {
let mut command_parts = stdio.command.into_iter();
let command_bin = command_parts
.next()
.ok_or_else(|| anyhow!("command is required"))?;
let command_args: Vec<String> = command_parts.collect();
let env_map = if stdio.env.is_empty() {
None
} else {
Some(stdio.env.into_iter().collect::<HashMap<_, _>>())
};
McpServerTransportConfig::Stdio {
command: command_bin,
args: command_args,
env: env_map,
}
}
AddMcpTransportArgs {
streamable_http:
Some(AddMcpStreamableHttpArgs {
url,
bearer_token_env_var,
}),
..
} => McpServerTransportConfig::StreamableHttp {
url,
bearer_token_env_var,
},
AddMcpTransportArgs { .. } => bail!("exactly one of --command or --url must be provided"),
};
let new_entry = McpServerConfig {
transport: transport.clone(),
enabled: true,
startup_timeout_sec: None,
tool_timeout_sec: None,
};
@@ -182,6 +257,17 @@ async fn run_add(config_overrides: &CliConfigOverrides, add_args: AddArgs) -> Re
println!("Added global MCP server '{name}'.");
if let McpServerTransportConfig::StreamableHttp {
url,
bearer_token_env_var: None,
} = transport
&& matches!(supports_oauth_login(&url).await, Ok(true))
{
println!("Detected OAuth support. Starting OAuth flow…");
perform_oauth_login(&name, &url, config.mcp_oauth_credentials_store_mode).await?;
println!("Successfully logged in.");
}
Ok(())
}
@@ -219,7 +305,7 @@ async fn run_login(config_overrides: &CliConfigOverrides, login_args: LoginArgs)
.await
.context("failed to load configuration")?;
if !config.use_experimental_use_rmcp_client {
if !config.features.enabled(Feature::RmcpClient) {
bail!(
"OAuth login is only supported when experimental_use_rmcp_client is true in config.toml."
);
@@ -236,7 +322,7 @@ async fn run_login(config_overrides: &CliConfigOverrides, login_args: LoginArgs)
_ => bail!("OAuth login is only supported for streamable HTTP servers."),
};
perform_oauth_login(&name, &url).await?;
perform_oauth_login(&name, &url, config.mcp_oauth_credentials_store_mode).await?;
println!("Successfully logged in to MCP server '{name}'.");
Ok(())
}
@@ -259,7 +345,7 @@ async fn run_logout(config_overrides: &CliConfigOverrides, logout_args: LogoutAr
_ => bail!("OAuth logout is only supported for streamable_http transports."),
};
match delete_oauth_tokens(&name, &url) {
match delete_oauth_tokens(&name, &url, config.mcp_oauth_credentials_store_mode) {
Ok(true) => println!("Removed OAuth credentials for '{name}'."),
Ok(false) => println!("No OAuth credentials stored for '{name}'."),
Err(err) => return Err(anyhow!("failed to delete OAuth credentials: {err}")),
@@ -276,11 +362,20 @@ async fn run_list(config_overrides: &CliConfigOverrides, list_args: ListArgs) ->
let mut entries: Vec<_> = config.mcp_servers.iter().collect();
entries.sort_by(|(a, _), (b, _)| a.cmp(b));
let auth_statuses = compute_auth_statuses(
config.mcp_servers.iter(),
config.mcp_oauth_credentials_store_mode,
)
.await;
if list_args.json {
let json_entries: Vec<_> = entries
.into_iter()
.map(|(name, cfg)| {
let auth_status = auth_statuses
.get(name.as_str())
.copied()
.unwrap_or(McpAuthStatus::Unsupported);
let transport = match &cfg.transport {
McpServerTransportConfig::Stdio { command, args, env } => serde_json::json!({
"type": "stdio",
@@ -288,17 +383,21 @@ async fn run_list(config_overrides: &CliConfigOverrides, list_args: ListArgs) ->
"args": args,
"env": env,
}),
McpServerTransportConfig::StreamableHttp { url, bearer_token } => {
McpServerTransportConfig::StreamableHttp {
url,
bearer_token_env_var,
} => {
serde_json::json!({
"type": "streamable_http",
"url": url,
"bearer_token": bearer_token,
"bearer_token_env_var": bearer_token_env_var,
})
}
};
serde_json::json!({
"name": name,
"enabled": cfg.enabled,
"transport": transport,
"startup_timeout_sec": cfg
.startup_timeout_sec
@@ -306,6 +405,7 @@ async fn run_list(config_overrides: &CliConfigOverrides, list_args: ListArgs) ->
"tool_timeout_sec": cfg
.tool_timeout_sec
.map(|timeout| timeout.as_secs_f64()),
"auth_status": auth_status,
})
})
.collect();
@@ -319,8 +419,8 @@ async fn run_list(config_overrides: &CliConfigOverrides, list_args: ListArgs) ->
return Ok(());
}
let mut stdio_rows: Vec<[String; 4]> = Vec::new();
let mut http_rows: Vec<[String; 3]> = Vec::new();
let mut stdio_rows: Vec<[String; 6]> = Vec::new();
let mut http_rows: Vec<[String; 5]> = Vec::new();
for (name, cfg) in entries {
match &cfg.transport {
@@ -343,21 +443,59 @@ async fn run_list(config_overrides: &CliConfigOverrides, list_args: ListArgs) ->
.join(", ")
}
};
stdio_rows.push([name.clone(), command.clone(), args_display, env_display]);
}
McpServerTransportConfig::StreamableHttp { url, bearer_token } => {
let has_bearer = if bearer_token.is_some() {
"True"
let status = if cfg.enabled {
"enabled".to_string()
} else {
"False"
"disabled".to_string()
};
http_rows.push([name.clone(), url.clone(), has_bearer.into()]);
let auth_status = auth_statuses
.get(name.as_str())
.copied()
.unwrap_or(McpAuthStatus::Unsupported)
.to_string();
stdio_rows.push([
name.clone(),
command.clone(),
args_display,
env_display,
status,
auth_status,
]);
}
McpServerTransportConfig::StreamableHttp {
url,
bearer_token_env_var,
} => {
let status = if cfg.enabled {
"enabled".to_string()
} else {
"disabled".to_string()
};
let auth_status = auth_statuses
.get(name.as_str())
.copied()
.unwrap_or(McpAuthStatus::Unsupported)
.to_string();
http_rows.push([
name.clone(),
url.clone(),
bearer_token_env_var.clone().unwrap_or("-".to_string()),
status,
auth_status,
]);
}
}
}
if !stdio_rows.is_empty() {
let mut widths = ["Name".len(), "Command".len(), "Args".len(), "Env".len()];
let mut widths = [
"Name".len(),
"Command".len(),
"Args".len(),
"Env".len(),
"Status".len(),
"Auth".len(),
];
for row in &stdio_rows {
for (i, cell) in row.iter().enumerate() {
widths[i] = widths[i].max(cell.len());
@@ -365,28 +503,36 @@ async fn run_list(config_overrides: &CliConfigOverrides, list_args: ListArgs) ->
}
println!(
"{:<name_w$} {:<cmd_w$} {:<args_w$} {:<env_w$}",
"Name",
"Command",
"Args",
"Env",
"{name:<name_w$} {command:<cmd_w$} {args:<args_w$} {env:<env_w$} {status:<status_w$} {auth:<auth_w$}",
name = "Name",
command = "Command",
args = "Args",
env = "Env",
status = "Status",
auth = "Auth",
name_w = widths[0],
cmd_w = widths[1],
args_w = widths[2],
env_w = widths[3],
status_w = widths[4],
auth_w = widths[5],
);
for row in &stdio_rows {
println!(
"{:<name_w$} {:<cmd_w$} {:<args_w$} {:<env_w$}",
row[0],
row[1],
row[2],
row[3],
"{name:<name_w$} {command:<cmd_w$} {args:<args_w$} {env:<env_w$} {status:<status_w$} {auth:<auth_w$}",
name = row[0].as_str(),
command = row[1].as_str(),
args = row[2].as_str(),
env = row[3].as_str(),
status = row[4].as_str(),
auth = row[5].as_str(),
name_w = widths[0],
cmd_w = widths[1],
args_w = widths[2],
env_w = widths[3],
status_w = widths[4],
auth_w = widths[5],
);
}
}
@@ -396,7 +542,13 @@ async fn run_list(config_overrides: &CliConfigOverrides, list_args: ListArgs) ->
}
if !http_rows.is_empty() {
let mut widths = ["Name".len(), "Url".len(), "Has Bearer Token".len()];
let mut widths = [
"Name".len(),
"Url".len(),
"Bearer Token Env Var".len(),
"Status".len(),
"Auth".len(),
];
for row in &http_rows {
for (i, cell) in row.iter().enumerate() {
widths[i] = widths[i].max(cell.len());
@@ -404,24 +556,32 @@ async fn run_list(config_overrides: &CliConfigOverrides, list_args: ListArgs) ->
}
println!(
"{:<name_w$} {:<url_w$} {:<token_w$}",
"Name",
"Url",
"Has Bearer Token",
"{name:<name_w$} {url:<url_w$} {token:<token_w$} {status:<status_w$} {auth:<auth_w$}",
name = "Name",
url = "Url",
token = "Bearer Token Env Var",
status = "Status",
auth = "Auth",
name_w = widths[0],
url_w = widths[1],
token_w = widths[2],
status_w = widths[3],
auth_w = widths[4],
);
for row in &http_rows {
println!(
"{:<name_w$} {:<url_w$} {:<token_w$}",
row[0],
row[1],
row[2],
"{name:<name_w$} {url:<url_w$} {token:<token_w$} {status:<status_w$} {auth:<auth_w$}",
name = row[0].as_str(),
url = row[1].as_str(),
token = row[2].as_str(),
status = row[3].as_str(),
auth = row[4].as_str(),
name_w = widths[0],
url_w = widths[1],
token_w = widths[2],
status_w = widths[3],
auth_w = widths[4],
);
}
}
@@ -447,14 +607,18 @@ async fn run_get(config_overrides: &CliConfigOverrides, get_args: GetArgs) -> Re
"args": args,
"env": env,
}),
McpServerTransportConfig::StreamableHttp { url, bearer_token } => serde_json::json!({
McpServerTransportConfig::StreamableHttp {
url,
bearer_token_env_var,
} => serde_json::json!({
"type": "streamable_http",
"url": url,
"bearer_token": bearer_token,
"bearer_token_env_var": bearer_token_env_var,
}),
};
let output = serde_json::to_string_pretty(&serde_json::json!({
"name": get_args.name,
"enabled": server.enabled,
"transport": transport,
"startup_timeout_sec": server
.startup_timeout_sec
@@ -468,6 +632,7 @@ async fn run_get(config_overrides: &CliConfigOverrides, get_args: GetArgs) -> Re
}
println!("{}", get_args.name);
println!(" enabled: {}", server.enabled);
match &server.transport {
McpServerTransportConfig::Stdio { command, args, env } => {
println!(" transport: stdio");
@@ -493,11 +658,14 @@ async fn run_get(config_overrides: &CliConfigOverrides, get_args: GetArgs) -> Re
};
println!(" env: {env_display}");
}
McpServerTransportConfig::StreamableHttp { url, bearer_token } => {
McpServerTransportConfig::StreamableHttp {
url,
bearer_token_env_var,
} => {
println!(" transport: streamable_http");
println!(" url: {url}");
let bearer = bearer_token.as_deref().unwrap_or("-");
println!(" bearer_token: {bearer}");
let env_var = bearer_token_env_var.as_deref().unwrap_or("-");
println!(" bearer_token_env_var: {env_var}");
}
}
if let Some(timeout) = server.startup_timeout_sec {

View File

@@ -35,6 +35,7 @@ async fn add_and_remove_server_updates_global_config() -> Result<()> {
}
other => panic!("unexpected transport: {other:?}"),
}
assert!(docs.enabled);
let mut remove_cmd = codex_command(codex_home.path())?;
remove_cmd
@@ -90,6 +91,122 @@ async fn add_with_env_preserves_key_order_and_values() -> Result<()> {
assert_eq!(env.len(), 2);
assert_eq!(env.get("FOO"), Some(&"bar".to_string()));
assert_eq!(env.get("ALPHA"), Some(&"beta".to_string()));
assert!(envy.enabled);
Ok(())
}
#[tokio::test]
async fn add_streamable_http_without_manual_token() -> Result<()> {
let codex_home = TempDir::new()?;
let mut add_cmd = codex_command(codex_home.path())?;
add_cmd
.args(["mcp", "add", "github", "--url", "https://example.com/mcp"])
.assert()
.success();
let servers = load_global_mcp_servers(codex_home.path()).await?;
let github = servers.get("github").expect("github server should exist");
match &github.transport {
McpServerTransportConfig::StreamableHttp {
url,
bearer_token_env_var,
} => {
assert_eq!(url, "https://example.com/mcp");
assert!(bearer_token_env_var.is_none());
}
other => panic!("unexpected transport: {other:?}"),
}
assert!(github.enabled);
assert!(!codex_home.path().join(".credentials.json").exists());
assert!(!codex_home.path().join(".env").exists());
Ok(())
}
#[tokio::test]
async fn add_streamable_http_with_custom_env_var() -> Result<()> {
let codex_home = TempDir::new()?;
let mut add_cmd = codex_command(codex_home.path())?;
add_cmd
.args([
"mcp",
"add",
"issues",
"--url",
"https://example.com/issues",
"--bearer-token-env-var",
"GITHUB_TOKEN",
])
.assert()
.success();
let servers = load_global_mcp_servers(codex_home.path()).await?;
let issues = servers.get("issues").expect("issues server should exist");
match &issues.transport {
McpServerTransportConfig::StreamableHttp {
url,
bearer_token_env_var,
} => {
assert_eq!(url, "https://example.com/issues");
assert_eq!(bearer_token_env_var.as_deref(), Some("GITHUB_TOKEN"));
}
other => panic!("unexpected transport: {other:?}"),
}
assert!(issues.enabled);
Ok(())
}
#[tokio::test]
async fn add_streamable_http_rejects_removed_flag() -> Result<()> {
let codex_home = TempDir::new()?;
let mut add_cmd = codex_command(codex_home.path())?;
add_cmd
.args([
"mcp",
"add",
"github",
"--url",
"https://example.com/mcp",
"--with-bearer-token",
])
.assert()
.failure()
.stderr(contains("--with-bearer-token"));
let servers = load_global_mcp_servers(codex_home.path()).await?;
assert!(servers.is_empty());
Ok(())
}
#[tokio::test]
async fn add_cant_add_command_and_url() -> Result<()> {
let codex_home = TempDir::new()?;
let mut add_cmd = codex_command(codex_home.path())?;
add_cmd
.args([
"mcp",
"add",
"github",
"--url",
"https://example.com/mcp",
"--command",
"--",
"echo",
"hello",
])
.assert()
.failure()
.stderr(contains("unexpected argument '--command' found"));
let servers = load_global_mcp_servers(codex_home.path()).await?;
assert!(servers.is_empty());
Ok(())
}

View File

@@ -1,6 +1,7 @@
use std::path::Path;
use anyhow::Result;
use predicates::prelude::PredicateBooleanExt;
use predicates::str::contains;
use pretty_assertions::assert_eq;
use serde_json::Value as JsonValue;
@@ -53,6 +54,10 @@ fn list_and_get_render_expected_output() -> Result<()> {
assert!(stdout.contains("docs"));
assert!(stdout.contains("docs-server"));
assert!(stdout.contains("TOKEN=secret"));
assert!(stdout.contains("Status"));
assert!(stdout.contains("Auth"));
assert!(stdout.contains("enabled"));
assert!(stdout.contains("Unsupported"));
let mut list_json_cmd = codex_command(codex_home.path())?;
let json_output = list_json_cmd.args(["mcp", "list", "--json"]).output()?;
@@ -64,6 +69,7 @@ fn list_and_get_render_expected_output() -> Result<()> {
json!([
{
"name": "docs",
"enabled": true,
"transport": {
"type": "stdio",
"command": "docs-server",
@@ -76,7 +82,8 @@ fn list_and_get_render_expected_output() -> Result<()> {
}
},
"startup_timeout_sec": null,
"tool_timeout_sec": null
"tool_timeout_sec": null,
"auth_status": "unsupported"
}
]
)
@@ -91,6 +98,7 @@ fn list_and_get_render_expected_output() -> Result<()> {
assert!(stdout.contains("command: docs-server"));
assert!(stdout.contains("args: --port 4000"));
assert!(stdout.contains("env: TOKEN=secret"));
assert!(stdout.contains("enabled: true"));
assert!(stdout.contains("remove: codex mcp remove docs"));
let mut get_json_cmd = codex_command(codex_home.path())?;
@@ -98,7 +106,7 @@ fn list_and_get_render_expected_output() -> Result<()> {
.args(["mcp", "get", "docs", "--json"])
.assert()
.success()
.stdout(contains("\"name\": \"docs\""));
.stdout(contains("\"name\": \"docs\"").and(contains("\"enabled\": true")));
Ok(())
}

View File

@@ -1,3 +1,4 @@
use clap::Args;
use clap::Parser;
use codex_common::CliConfigOverrides;
@@ -6,4 +7,43 @@ use codex_common::CliConfigOverrides;
pub struct Cli {
#[clap(skip)]
pub config_overrides: CliConfigOverrides,
#[command(subcommand)]
pub command: Option<Command>,
}
#[derive(Debug, clap::Subcommand)]
pub enum Command {
/// Submit a new Codex Cloud task without launching the TUI.
Exec(ExecCommand),
}
#[derive(Debug, Args)]
pub struct ExecCommand {
/// Task prompt to run in Codex Cloud.
#[arg(value_name = "QUERY")]
pub query: Option<String>,
/// Target environment identifier (see `codex cloud` to browse).
#[arg(long = "env", value_name = "ENV_ID")]
pub environment: String,
/// Number of assistant attempts (best-of-N).
#[arg(
long = "attempts",
default_value_t = 1usize,
value_parser = parse_attempts
)]
pub attempts: usize,
}
fn parse_attempts(input: &str) -> Result<usize, String> {
let value: usize = input
.parse()
.map_err(|_| "attempts must be an integer between 1 and 4".to_string())?;
if (1..=4).contains(&value) {
Ok(value)
} else {
Err("attempts must be between 1 and 4".to_string())
}
}

View File

@@ -7,7 +7,9 @@ mod ui;
pub mod util;
pub use cli::Cli;
use anyhow::anyhow;
use std::io::IsTerminal;
use std::io::Read;
use std::path::PathBuf;
use std::sync::Arc;
use std::time::Duration;
@@ -23,6 +25,175 @@ struct ApplyJob {
diff_override: Option<String>,
}
struct BackendContext {
backend: Arc<dyn codex_cloud_tasks_client::CloudBackend>,
base_url: String,
}
async fn init_backend(user_agent_suffix: &str) -> anyhow::Result<BackendContext> {
let use_mock = matches!(
std::env::var("CODEX_CLOUD_TASKS_MODE").ok().as_deref(),
Some("mock") | Some("MOCK")
);
let base_url = std::env::var("CODEX_CLOUD_TASKS_BASE_URL")
.unwrap_or_else(|_| "https://chatgpt.com/backend-api".to_string());
set_user_agent_suffix(user_agent_suffix);
if use_mock {
return Ok(BackendContext {
backend: Arc::new(codex_cloud_tasks_client::MockClient),
base_url,
});
}
let ua = codex_core::default_client::get_codex_user_agent();
let mut http = codex_cloud_tasks_client::HttpClient::new(base_url.clone())?.with_user_agent(ua);
let style = if base_url.contains("/backend-api") {
"wham"
} else {
"codex-api"
};
append_error_log(format!("startup: base_url={base_url} path_style={style}"));
let auth = match codex_core::config::find_codex_home()
.ok()
.map(|home| codex_login::AuthManager::new(home, false))
.and_then(|am| am.auth())
{
Some(auth) => auth,
None => {
eprintln!(
"Not signed in. Please run 'codex login' to sign in with ChatGPT, then re-run 'codex cloud'."
);
std::process::exit(1);
}
};
if let Some(acc) = auth.get_account_id() {
append_error_log(format!("auth: mode=ChatGPT account_id={acc}"));
}
let token = match auth.get_token().await {
Ok(t) if !t.is_empty() => t,
_ => {
eprintln!(
"Not signed in. Please run 'codex login' to sign in with ChatGPT, then re-run 'codex cloud'."
);
std::process::exit(1);
}
};
http = http.with_bearer_token(token.clone());
if let Some(acc) = auth
.get_account_id()
.or_else(|| util::extract_chatgpt_account_id(&token))
{
append_error_log(format!("auth: set ChatGPT-Account-Id header: {acc}"));
http = http.with_chatgpt_account_id(acc);
}
Ok(BackendContext {
backend: Arc::new(http),
base_url,
})
}
async fn run_exec_command(args: crate::cli::ExecCommand) -> anyhow::Result<()> {
let crate::cli::ExecCommand {
query,
environment,
attempts,
} = args;
let ctx = init_backend("codex_cloud_tasks_exec").await?;
let prompt = resolve_query_input(query)?;
let env_id = resolve_environment_id(&ctx, &environment).await?;
let created = codex_cloud_tasks_client::CloudBackend::create_task(
&*ctx.backend,
&env_id,
&prompt,
"main",
false,
attempts,
)
.await?;
let url = util::task_url(&ctx.base_url, &created.id.0);
println!("{url}");
Ok(())
}
async fn resolve_environment_id(ctx: &BackendContext, requested: &str) -> anyhow::Result<String> {
let trimmed = requested.trim();
if trimmed.is_empty() {
return Err(anyhow!("environment id must not be empty"));
}
let normalized = util::normalize_base_url(&ctx.base_url);
let headers = util::build_chatgpt_headers().await;
let environments = crate::env_detect::list_environments(&normalized, &headers).await?;
if environments.is_empty() {
return Err(anyhow!(
"no cloud environments are available for this workspace"
));
}
if let Some(row) = environments.iter().find(|row| row.id == trimmed) {
return Ok(row.id.clone());
}
let label_matches = environments
.iter()
.filter(|row| {
row.label
.as_deref()
.map(|label| label.eq_ignore_ascii_case(trimmed))
.unwrap_or(false)
})
.collect::<Vec<_>>();
match label_matches.as_slice() {
[] => Err(anyhow!(
"environment '{trimmed}' not found; run `codex cloud` to list available environments"
)),
[single] => Ok(single.id.clone()),
[first, rest @ ..] => {
let first_id = &first.id;
if rest.iter().all(|row| row.id == *first_id) {
Ok(first_id.clone())
} else {
Err(anyhow!(
"environment label '{trimmed}' is ambiguous; run `codex cloud` to pick the desired environment id"
))
}
}
}
}
fn resolve_query_input(query_arg: Option<String>) -> anyhow::Result<String> {
match query_arg {
Some(q) if q != "-" => Ok(q),
maybe_dash => {
let force_stdin = matches!(maybe_dash.as_deref(), Some("-"));
if std::io::stdin().is_terminal() && !force_stdin {
return Err(anyhow!(
"no query provided. Pass one as an argument or pipe it via stdin."
));
}
if !force_stdin {
eprintln!("Reading query from stdin...");
}
let mut buffer = String::new();
std::io::stdin()
.read_to_string(&mut buffer)
.map_err(|e| anyhow!("failed to read query from stdin: {e}"))?;
if buffer.trim().is_empty() {
return Err(anyhow!(
"no query provided via stdin (received empty input)."
));
}
Ok(buffer)
}
}
}
fn level_from_status(status: codex_cloud_tasks_client::ApplyStatus) -> app::ApplyResultLevel {
match status {
codex_cloud_tasks_client::ApplyStatus::Success => app::ApplyResultLevel::Success,
@@ -148,7 +319,14 @@ fn spawn_apply(
// (no standalone patch summarizer needed; the UI displays raw diffs)
/// Entry point for the `codex cloud` subcommand.
pub async fn run_main(_cli: Cli, _codex_linux_sandbox_exe: Option<PathBuf>) -> anyhow::Result<()> {
pub async fn run_main(cli: Cli, _codex_linux_sandbox_exe: Option<PathBuf>) -> anyhow::Result<()> {
if let Some(command) = cli.command {
return match command {
crate::cli::Command::Exec(args) => run_exec_command(args).await,
};
}
let Cli { .. } = cli;
// Very minimal logging setup; mirrors other crates' pattern.
let default_level = "error";
let _ = tracing_subscriber::fmt()
@@ -162,72 +340,8 @@ pub async fn run_main(_cli: Cli, _codex_linux_sandbox_exe: Option<PathBuf>) -> a
.try_init();
info!("Launching Cloud Tasks list UI");
set_user_agent_suffix("codex_cloud_tasks_tui");
// Default to online unless explicitly configured to use mock.
let use_mock = matches!(
std::env::var("CODEX_CLOUD_TASKS_MODE").ok().as_deref(),
Some("mock") | Some("MOCK")
);
let backend: Arc<dyn codex_cloud_tasks_client::CloudBackend> = if use_mock {
Arc::new(codex_cloud_tasks_client::MockClient)
} else {
// Build an HTTP client against the configured (or default) base URL.
let base_url = std::env::var("CODEX_CLOUD_TASKS_BASE_URL")
.unwrap_or_else(|_| "https://chatgpt.com/backend-api".to_string());
let ua = codex_core::default_client::get_codex_user_agent();
let mut http =
codex_cloud_tasks_client::HttpClient::new(base_url.clone())?.with_user_agent(ua);
// Log which base URL and path style we're going to use.
let style = if base_url.contains("/backend-api") {
"wham"
} else {
"codex-api"
};
append_error_log(format!("startup: base_url={base_url} path_style={style}"));
// Require ChatGPT login (SWIC). Exit with a clear message if missing.
let _token = match codex_core::config::find_codex_home()
.ok()
.map(|home| codex_login::AuthManager::new(home, false))
.and_then(|am| am.auth())
{
Some(auth) => {
// Log account context for debugging workspace selection.
if let Some(acc) = auth.get_account_id() {
append_error_log(format!("auth: mode=ChatGPT account_id={acc}"));
}
match auth.get_token().await {
Ok(t) if !t.is_empty() => {
// Attach token and ChatGPT-Account-Id header if available
http = http.with_bearer_token(t.clone());
if let Some(acc) = auth
.get_account_id()
.or_else(|| util::extract_chatgpt_account_id(&t))
{
append_error_log(format!("auth: set ChatGPT-Account-Id header: {acc}"));
http = http.with_chatgpt_account_id(acc);
}
t
}
_ => {
eprintln!(
"Not signed in. Please run 'codex login' to sign in with ChatGPT, then re-run 'codex cloud'."
);
std::process::exit(1);
}
}
}
None => {
eprintln!(
"Not signed in. Please run 'codex login' to sign in with ChatGPT, then re-run 'codex cloud'."
);
std::process::exit(1);
}
};
Arc::new(http)
};
let BackendContext { backend, .. } = init_backend("codex_cloud_tasks_tui").await?;
// Terminal setup
use crossterm::ExecutableCommand;

View File

@@ -91,3 +91,18 @@ pub async fn build_chatgpt_headers() -> HeaderMap {
}
headers
}
/// Construct a browser-friendly task URL for the given backend base URL.
pub fn task_url(base_url: &str, task_id: &str) -> String {
let normalized = normalize_base_url(base_url);
if let Some(root) = normalized.strip_suffix("/backend-api") {
return format!("{root}/codex/tasks/{task_id}");
}
if let Some(root) = normalized.strip_suffix("/api/codex") {
return format!("{root}/codex/tasks/{task_id}");
}
if normalized.ends_with("/codex") {
return format!("{normalized}/tasks/{task_id}");
}
format!("{normalized}/codex/tasks/{task_id}")
}
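// A sketch of the expected mappings, not part of the original change; it
// assumes `normalize_base_url` leaves these already-normalized URLs unchanged.
#[cfg(test)]
mod task_url_examples {
use super::task_url;
#[test]
fn maps_known_base_url_shapes() {
assert_eq!(
task_url("https://chatgpt.com/backend-api", "t1"),
"https://chatgpt.com/codex/tasks/t1"
);
assert_eq!(
task_url("https://example.com/api/codex", "t1"),
"https://example.com/codex/tasks/t1"
);
assert_eq!(
task_url("https://example.com/codex", "t1"),
"https://example.com/codex/tasks/t1"
);
// Fallback: any other base gets `/codex/tasks/<id>` appended.
assert_eq!(
task_url("https://example.com", "t1"),
"https://example.com/codex/tasks/t1"
);
}
}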

View File

@@ -33,6 +33,7 @@ env-flags = { workspace = true }
eventsource-stream = { workspace = true }
futures = { workspace = true }
indexmap = { workspace = true }
ignore = { workspace = true }
libc = { workspace = true }
mcp-types = { workspace = true }
os_info = { workspace = true }
@@ -43,6 +44,7 @@ reqwest = { workspace = true, features = ["json", "stream"] }
serde = { workspace = true, features = ["derive"] }
serde_json = { workspace = true }
sha1 = { workspace = true }
sha2 = { workspace = true }
shlex = { workspace = true }
similar = { workspace = true }
strum_macros = { workspace = true }

View File

@@ -389,10 +389,12 @@ async fn process_chat_sse<S>(
let mut reasoning_text = String::new();
loop {
let sse = match otel_event_manager
.log_sse_event(|| timeout(idle_timeout, stream.next()))
.await
{
let start = std::time::Instant::now();
let response = timeout(idle_timeout, stream.next()).await;
let duration = start.elapsed();
otel_event_manager.log_sse_event(&response, duration);
let sse = match response {
Ok(Some(Ok(ev))) => ev,
Ok(Some(Err(e))) => {
let _ = tx_event

View File

@@ -47,6 +47,7 @@ use crate::openai_tools::create_tools_json_for_responses_api;
use crate::protocol::RateLimitSnapshot;
use crate::protocol::RateLimitWindow;
use crate::protocol::TokenUsage;
use crate::state::TaskKind;
use crate::token_data::PlanType;
use crate::util::backoff;
use codex_otel::otel_event_manager::OtelEventManager;
@@ -123,8 +124,16 @@ impl ModelClient {
/// the provider config. Public callers always invoke `stream()`; the
/// specialised helpers are private to avoid accidental misuse.
pub async fn stream(&self, prompt: &Prompt) -> Result<ResponseStream> {
self.stream_with_task_kind(prompt, TaskKind::Regular).await
}
pub(crate) async fn stream_with_task_kind(
&self,
prompt: &Prompt,
task_kind: TaskKind,
) -> Result<ResponseStream> {
match self.provider.wire_api {
WireApi::Responses => self.stream_responses(prompt).await,
WireApi::Responses => self.stream_responses(prompt, task_kind).await,
WireApi::Chat => {
// Create the raw streaming connection first.
let response_stream = stream_chat_completions(
@@ -165,7 +174,11 @@ impl ModelClient {
}
/// Implementation for the OpenAI *Responses* experimental API.
async fn stream_responses(&self, prompt: &Prompt) -> Result<ResponseStream> {
async fn stream_responses(
&self,
prompt: &Prompt,
task_kind: TaskKind,
) -> Result<ResponseStream> {
if let Some(path) = &*CODEX_RS_SSE_FIXTURE {
// short circuit for tests
warn!(path, "Streaming from fixture");
@@ -244,7 +257,7 @@ impl ModelClient {
let max_attempts = self.provider.request_max_retries();
for attempt in 0..=max_attempts {
match self
.attempt_stream_responses(attempt, &payload_json, &auth_manager)
.attempt_stream_responses(attempt, &payload_json, &auth_manager, task_kind)
.await
{
Ok(stream) => {
@@ -272,6 +285,7 @@ impl ModelClient {
attempt: u64,
payload_json: &Value,
auth_manager: &Option<Arc<AuthManager>>,
task_kind: TaskKind,
) -> std::result::Result<ResponseStream, StreamAttemptError> {
// Always fetch the latest auth in case a prior attempt refreshed the token.
let auth = auth_manager.as_ref().and_then(|m| m.auth());
@@ -294,6 +308,7 @@ impl ModelClient {
.header("conversation_id", self.conversation_id.to_string())
.header("session_id", self.conversation_id.to_string())
.header(reqwest::header::ACCEPT, "text/event-stream")
.header("Codex-Task-Type", task_kind.header_value())
.json(payload_json);
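// `header_value()` is assumed to yield a stable per-variant string so the
// backend can distinguish regular turns from compaction traffic; the mapping
// lives on `crate::state::TaskKind`.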
if let Some(auth) = auth.as_ref()
@@ -649,10 +664,12 @@ async fn process_sse<S>(
let mut response_error: Option<CodexErr> = None;
loop {
let sse = match otel_event_manager
.log_sse_event(|| timeout(idle_timeout, stream.next()))
.await
{
let start = std::time::Instant::now();
let response = timeout(idle_timeout, stream.next()).await;
let duration = start.elapsed();
otel_event_manager.log_sse_event(&response, duration);
let sse = match response {
Ok(Some(Ok(sse))) => sse,
Ok(Some(Err(e))) => {
debug!("SSE Error: {e:#}");

View File

@@ -0,0 +1,168 @@
use std::fmt::Write;
use codex_protocol::models::ContentItem;
use codex_protocol::models::ResponseItem;
use crate::codebase_snapshot::SnapshotDiff;
pub(crate) const CODEBASE_CHANGE_NOTICE_MAX_PATHS: usize = 40;
#[derive(Clone, Debug, PartialEq, Eq)]
pub(crate) struct CodebaseChangeNotice {
added: Vec<String>,
removed: Vec<String>,
modified: Vec<String>,
truncated: bool,
}
impl CodebaseChangeNotice {
pub(crate) fn new(diff: SnapshotDiff, limit: usize) -> Self {
let mut remaining = limit;
let mut truncated = false;
let added = take_paths(diff.added, &mut remaining, &mut truncated);
let removed = take_paths(diff.removed, &mut remaining, &mut truncated);
let modified = take_paths(diff.modified, &mut remaining, &mut truncated);
Self {
added,
removed,
modified,
truncated,
}
}
pub(crate) fn is_empty(&self) -> bool {
self.added.is_empty() && self.removed.is_empty() && self.modified.is_empty()
}
pub(crate) fn serialize_to_xml(&self) -> String {
let mut output = String::new();
if self.truncated {
let _ = writeln!(output, "<codebase_changes truncated=\"true\">");
} else {
let _ = writeln!(output, "<codebase_changes>");
}
let mut summary_parts = Vec::new();
if !self.added.is_empty() {
summary_parts.push(format!("added {}", self.added.len()));
}
if !self.removed.is_empty() {
summary_parts.push(format!("removed {}", self.removed.len()));
}
if !self.modified.is_empty() {
summary_parts.push(format!("modified {}", self.modified.len()));
}
if summary_parts.is_empty() {
let _ = writeln!(output, " <summary>no changes</summary>");
} else {
let summary = summary_parts.join(", ");
let _ = writeln!(output, " <summary>{summary}</summary>");
}
serialize_section(&mut output, "added", &self.added);
serialize_section(&mut output, "removed", &self.removed);
serialize_section(&mut output, "modified", &self.modified);
if self.truncated {
let _ = writeln!(output, " <note>additional paths omitted</note>");
}
let _ = writeln!(output, "</codebase_changes>");
output
}
}
fn take_paths(mut paths: Vec<String>, remaining: &mut usize, truncated: &mut bool) -> Vec<String> {
if *remaining == 0 {
if !paths.is_empty() {
*truncated = true;
}
return Vec::new();
}
if paths.len() > *remaining {
paths.truncate(*remaining);
*truncated = true;
}
*remaining -= paths.len();
paths
}
fn serialize_section(output: &mut String, tag: &str, paths: &[String]) {
if paths.is_empty() {
return;
}
let _ = writeln!(output, " <{tag}>");
for path in paths {
let _ = writeln!(output, " <path>{}</path>", escape_xml(path));
}
let _ = writeln!(output, " </{tag}>");
}
fn escape_xml(value: &str) -> String {
let mut escaped = String::with_capacity(value.len());
for ch in value.chars() {
match ch {
'&' => escaped.push_str("&amp;"),
'<' => escaped.push_str("&lt;"),
'>' => escaped.push_str("&gt;"),
'"' => escaped.push_str("&quot;"),
'\'' => escaped.push_str("&apos;"),
other => escaped.push(other),
}
}
escaped
}
impl From<CodebaseChangeNotice> for ResponseItem {
fn from(notice: CodebaseChangeNotice) -> Self {
ResponseItem::Message {
id: None,
role: "user".to_string(),
content: vec![ContentItem::InputText {
text: notice.serialize_to_xml(),
}],
}
}
}
#[cfg(test)]
mod tests {
use super::*;
use pretty_assertions::assert_eq;
#[test]
fn constructs_notice_with_limit() {
let diff = SnapshotDiff {
added: vec!["a.rs".to_string(), "b.rs".to_string()],
removed: vec!["c.rs".to_string()],
modified: vec!["d.rs".to_string(), "e.rs".to_string()],
};
let notice = CodebaseChangeNotice::new(diff, 3);
assert!(notice.truncated);
assert_eq!(
notice.added.len() + notice.removed.len() + notice.modified.len(),
3
);
}
#[test]
fn serializes_notice() {
let diff = SnapshotDiff {
added: vec!["src/lib.rs".to_string()],
removed: Vec::new(),
modified: vec!["src/main.rs".to_string()],
};
let notice = CodebaseChangeNotice::new(diff, CODEBASE_CHANGE_NOTICE_MAX_PATHS);
let xml = notice.serialize_to_xml();
assert!(xml.contains("<added>"));
assert!(xml.contains("<modified>"));
assert!(xml.contains("src/lib.rs"));
assert!(xml.contains("src/main.rs"));
}
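// Added sketch: `escape_xml` is module-private, so cover the five escaped
// characters directly here.
#[test]
fn escapes_xml_special_characters() {
assert_eq!(escape_xml("a<b&c>\"d'"), "a&lt;b&amp;c&gt;&quot;d&apos;");
}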
}

View File

@@ -0,0 +1,278 @@
use std::borrow::Cow;
use std::collections::BTreeMap;
use std::fs::File;
use std::io::Read;
use std::path::Path;
use std::path::PathBuf;
use std::time::SystemTime;
use anyhow::Context;
use anyhow::Result;
use ignore::WalkBuilder;
use sha2::Digest;
use sha2::Sha256;
use tokio::task;
use tracing::warn;
#[derive(Clone, Debug, PartialEq, Eq)]
pub(crate) struct CodebaseSnapshot {
root: PathBuf,
entries: BTreeMap<String, EntryFingerprint>,
root_digest: DigestBytes,
}
#[derive(Clone, Debug, PartialEq, Eq)]
pub(crate) struct EntryFingerprint {
pub kind: EntryKind,
pub digest: DigestBytes,
pub size: u64,
pub modified_millis: Option<u128>,
}
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
#[repr(u8)]
pub(crate) enum EntryKind {
File,
Symlink,
}
#[derive(Clone, Debug, PartialEq, Eq, Default)]
pub(crate) struct SnapshotDiff {
pub added: Vec<String>,
pub removed: Vec<String>,
pub modified: Vec<String>,
}
impl SnapshotDiff {
pub fn is_empty(&self) -> bool {
self.added.is_empty() && self.removed.is_empty() && self.modified.is_empty()
}
}
pub(crate) type DigestBytes = [u8; 32];
impl CodebaseSnapshot {
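/// Hashes the tree on a blocking thread so large checkouts don't stall the
/// async runtime.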
pub(crate) async fn capture(root: PathBuf) -> Result<Self> {
task::spawn_blocking(move || Self::from_disk(&root))
.await
.map_err(|e| anyhow::anyhow!("codebase snapshot task failed: {e}"))?
}
pub(crate) fn from_disk(root: &Path) -> Result<Self> {
if !root.exists() {
return Ok(Self::empty(root));
}
let mut entries: BTreeMap<String, EntryFingerprint> = BTreeMap::new();
let mut walker = WalkBuilder::new(root);
walker
.hidden(false)
.git_ignore(true)
.git_exclude(true)
.parents(true)
.ignore(true)
.follow_links(false);
for result in walker.build() {
let entry = match result {
Ok(entry) => entry,
Err(err) => {
warn!("codebase snapshot failed to read entry: {err}");
continue;
}
};
let path = entry.path();
if entry.depth() == 0 {
continue;
}
let relative = match path.strip_prefix(root) {
Ok(rel) => rel,
Err(_) => continue,
};
if relative.as_os_str().is_empty() {
continue;
}
let rel_string = normalize_rel_path(relative);
let file_type = match entry.file_type() {
Some(file_type) => file_type,
None => continue,
};
if file_type.is_dir() {
continue;
}
if file_type.is_file() {
match fingerprint_file(path) {
Ok(fp) => {
entries.insert(rel_string, fp);
}
Err(err) => {
warn!(
"codebase snapshot failed to hash file {}: {err}",
path.display()
);
}
}
continue;
}
if file_type.is_symlink() {
match fingerprint_symlink(path) {
Ok(fp) => {
entries.insert(rel_string, fp);
}
Err(err) => {
warn!(
"codebase snapshot failed to hash symlink {}: {err}",
path.display()
);
}
}
continue;
}
}
let root_digest = compute_root_digest(&entries);
Ok(Self {
root: root.to_path_buf(),
entries,
root_digest,
})
}
pub(crate) fn diff(&self, newer: &CodebaseSnapshot) -> SnapshotDiff {
let mut diff = SnapshotDiff::default();
for (path, fingerprint) in &newer.entries {
match self.entries.get(path) {
None => diff.added.push(path.clone()),
Some(existing) if existing != fingerprint => diff.modified.push(path.clone()),
_ => {}
}
}
for path in self.entries.keys() {
if !newer.entries.contains_key(path) {
diff.removed.push(path.clone());
}
}
diff
}
pub(crate) fn root(&self) -> &Path {
&self.root
}
fn empty(root: &Path) -> Self {
Self {
root: root.to_path_buf(),
entries: BTreeMap::new(),
root_digest: Sha256::digest(b"").into(),
}
}
}
fn fingerprint_file(path: &Path) -> Result<EntryFingerprint> {
let metadata = path
.metadata()
.with_context(|| format!("metadata {}", path.display()))?;
let mut file = File::open(path).with_context(|| format!("open {}", path.display()))?;
let mut hasher = Sha256::new();
let mut buf = [0u8; 64 * 1024];
loop {
let read = file.read(&mut buf)?;
if read == 0 {
break;
}
hasher.update(&buf[..read]);
}
Ok(EntryFingerprint {
kind: EntryKind::File,
digest: hasher.finalize().into(),
size: metadata.len(),
modified_millis: metadata.modified().ok().and_then(system_time_to_millis),
})
}
fn fingerprint_symlink(path: &Path) -> Result<EntryFingerprint> {
let target =
std::fs::read_link(path).with_context(|| format!("read_link {}", path.display()))?;
let mut hasher = Sha256::new();
let target_str = normalize_rel_path(&target);
hasher.update(target_str.as_bytes());
Ok(EntryFingerprint {
kind: EntryKind::Symlink,
digest: hasher.finalize().into(),
size: 0,
modified_millis: None,
})
}
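/// `entries` is a `BTreeMap`, so iteration is sorted by path and the root
/// digest is deterministic for a given tree.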
fn compute_root_digest(entries: &BTreeMap<String, EntryFingerprint>) -> DigestBytes {
let mut hasher = Sha256::new();
for (path, fingerprint) in entries {
hasher.update(path.as_bytes());
hasher.update(fingerprint.digest);
hasher.update([fingerprint.kind as u8]);
hasher.update(fingerprint.size.to_le_bytes());
if let Some(modified) = fingerprint.modified_millis {
hasher.update(modified.to_le_bytes());
}
}
hasher.finalize().into()
}
fn normalize_rel_path(path: &Path) -> String {
let s = path_to_cow(path);
if s.is_empty() {
String::new()
} else {
s.replace('\\', "/")
}
}
fn path_to_cow(path: &Path) -> Cow<'_, str> {
path.to_string_lossy()
}
fn system_time_to_millis(ts: SystemTime) -> Option<u128> {
ts.duration_since(SystemTime::UNIX_EPOCH)
.map(|duration| duration.as_millis())
.ok()
}
#[cfg(test)]
mod tests {
use super::*;
use pretty_assertions::assert_eq;
use tempfile::tempdir;
#[test]
fn diff_tracks_added_modified_removed() {
let dir = tempdir().unwrap();
let root = dir.path();
std::fs::write(root.join("file_a.txt"), "alpha").unwrap();
std::fs::write(root.join("file_b.txt"), "bravo").unwrap();
let snapshot_one = CodebaseSnapshot::from_disk(root).unwrap();
std::fs::write(root.join("file_a.txt"), "alpha-updated").unwrap();
std::fs::remove_file(root.join("file_b.txt")).unwrap();
std::fs::write(root.join("file_c.txt"), "charlie").unwrap();
let snapshot_two = CodebaseSnapshot::from_disk(root).unwrap();
let diff = snapshot_one.diff(&snapshot_two);
assert_eq!(diff.added, vec!["file_c.txt".to_string()]);
assert_eq!(diff.modified, vec!["file_a.txt".to_string()]);
assert_eq!(diff.removed, vec!["file_b.txt".to_string()]);
}
}

View File

@@ -1,5 +1,6 @@
use std::borrow::Cow;
use std::fmt::Debug;
use std::path::Path;
use std::path::PathBuf;
use std::sync::Arc;
use std::sync::atomic::AtomicU64;
@@ -17,6 +18,7 @@ use codex_apply_patch::ApplyPatchAction;
use codex_protocol::ConversationId;
use codex_protocol::protocol::ConversationPathResponseEvent;
use codex_protocol::protocol::ExitedReviewModeEvent;
use codex_protocol::protocol::McpAuthStatus;
use codex_protocol::protocol::ReviewRequest;
use codex_protocol::protocol::RolloutItem;
use codex_protocol::protocol::SessionSource;
@@ -42,6 +44,9 @@ use crate::apply_patch::convert_apply_patch_to_protocol;
use crate::client::ModelClient;
use crate::client_common::Prompt;
use crate::client_common::ResponseEvent;
use crate::codebase_change_notice::CODEBASE_CHANGE_NOTICE_MAX_PATHS;
use crate::codebase_change_notice::CodebaseChangeNotice;
use crate::codebase_snapshot::CodebaseSnapshot;
use crate::config::Config;
use crate::config_types::ShellEnvironmentPolicy;
use crate::conversation_history::ConversationHistory;
@@ -57,6 +62,7 @@ use crate::exec_command::WriteStdinParams;
use crate::executor::Executor;
use crate::executor::ExecutorConfig;
use crate::executor::normalize_exec_result;
use crate::mcp::auth::compute_auth_statuses;
use crate::mcp_connection_manager::McpConnectionManager;
use crate::model_family::find_family_for_model;
use crate::openai_model_info::get_model_info;
@@ -98,6 +104,7 @@ use crate::rollout::RolloutRecorderParams;
use crate::shell;
use crate::state::ActiveTurn;
use crate::state::SessionServices;
use crate::state::TaskKind;
use crate::tasks::CompactTask;
use crate::tasks::RegularTask;
use crate::tasks::ReviewTask;
@@ -363,14 +370,32 @@ impl Session {
let mcp_fut = McpConnectionManager::new(
config.mcp_servers.clone(),
config.use_experimental_use_rmcp_client,
config
.features
.enabled(crate::features::Feature::RmcpClient),
config.mcp_oauth_credentials_store_mode,
);
let default_shell_fut = shell::default_user_shell();
let history_meta_fut = crate::message_history::history_metadata(&config);
let auth_statuses_fut = compute_auth_statuses(
config.mcp_servers.iter(),
config.mcp_oauth_credentials_store_mode,
);
// Join all independent futures.
let (rollout_recorder, mcp_res, default_shell, (history_log_id, history_entry_count)) =
tokio::join!(rollout_fut, mcp_fut, default_shell_fut, history_meta_fut);
let (
rollout_recorder,
mcp_res,
default_shell,
(history_log_id, history_entry_count),
auth_statuses,
) = tokio::join!(
rollout_fut,
mcp_fut,
default_shell_fut,
history_meta_fut,
auth_statuses_fut
);
let rollout_recorder = rollout_recorder.map_err(|e| {
error!("failed to initialize rollout recorder: {e:#}");
@@ -397,11 +422,24 @@ impl Session {
// Surface individual client start-up failures to the user.
if !failed_clients.is_empty() {
for (server_name, err) in failed_clients {
let message = format!("MCP client for `{server_name}` failed to start: {err:#}");
error!("{message}");
let log_message =
format!("MCP client for `{server_name}` failed to start: {err:#}");
error!("{log_message}");
let display_message = if matches!(
auth_statuses.get(&server_name),
Some(McpAuthStatus::NotLoggedIn)
) {
format!(
"The {server_name} MCP server is not logged in. Run `codex mcp login {server_name}` to log in."
)
} else {
log_message
};
post_session_configured_error_events.push(Event {
id: INITIAL_SUBMIT_ID.to_owned(),
msg: EventMsg::Error(ErrorEvent { message }),
msg: EventMsg::Error(ErrorEvent {
message: display_message,
}),
});
}
}
@@ -444,12 +482,7 @@ impl Session {
client,
tools_config: ToolsConfig::new(&ToolsConfigParams {
model_family: &config.model_family,
include_plan_tool: config.include_plan_tool,
include_apply_patch_tool: config.include_apply_patch_tool,
include_web_search_request: config.tools_web_search_request,
use_streamable_shell_tool: config.use_experimental_streamable_shell_tool,
include_view_image_tool: config.include_view_image_tool,
experimental_unified_exec_tool: config.use_experimental_unified_exec_tool,
features: &config.features,
}),
user_instructions,
base_instructions,
@@ -717,6 +750,73 @@ impl Session {
self.persist_rollout_items(&rollout_items).await;
}
async fn stored_snapshot_for_root(&self, root: &Path) -> Option<CodebaseSnapshot> {
let state = self.state.lock().await;
state
.codebase_snapshot
.as_ref()
.filter(|snapshot| snapshot.root() == root)
.cloned()
}
async fn set_codebase_snapshot(&self, snapshot: CodebaseSnapshot) {
let mut state = self.state.lock().await;
state.codebase_snapshot = Some(snapshot);
}
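/// Compares the last stored snapshot for the turn's cwd against a fresh
/// capture; when paths changed, records a `<codebase_changes>` notice in the
/// conversation and emits matching events before storing the new snapshot.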
pub(crate) async fn emit_codebase_delta_if_changed(
&self,
turn_context: &TurnContext,
sub_id: &str,
) -> anyhow::Result<()> {
let cwd = turn_context.cwd.clone();
let previous = self.stored_snapshot_for_root(&cwd).await;
let latest = CodebaseSnapshot::capture(cwd.clone()).await?;
if let Some(previous_snapshot) = previous {
let diff = previous_snapshot.diff(&latest);
if diff.is_empty() {
self.set_codebase_snapshot(latest).await;
return Ok(());
}
let notice = CodebaseChangeNotice::new(diff, CODEBASE_CHANGE_NOTICE_MAX_PATHS);
if notice.is_empty() {
self.set_codebase_snapshot(latest).await;
return Ok(());
}
let response_item: ResponseItem = notice.into();
self.record_conversation_items(std::slice::from_ref(&response_item))
.await;
for msg in
map_response_item_to_event_messages(&response_item, self.show_raw_agent_reasoning())
{
let event = Event {
id: sub_id.to_string(),
msg,
};
self.send_event(event).await;
}
self.set_codebase_snapshot(latest).await;
return Ok(());
}
self.set_codebase_snapshot(latest).await;
Ok(())
}
pub(crate) async fn refresh_codebase_snapshot(
&self,
turn_context: &TurnContext,
) -> anyhow::Result<()> {
let snapshot = CodebaseSnapshot::capture(turn_context.cwd.clone()).await?;
self.set_codebase_snapshot(snapshot).await;
Ok(())
}
pub(crate) fn build_initial_context(&self, turn_context: &TurnContext) -> Vec<ResponseItem> {
let mut items = Vec::<ResponseItem>::with_capacity(2);
if let Some(user_instructions) = turn_context.user_instructions.as_deref() {
@@ -1193,12 +1293,7 @@ async fn submission_loop(
let tools_config = ToolsConfig::new(&ToolsConfigParams {
model_family: &effective_family,
include_plan_tool: config.include_plan_tool,
include_apply_patch_tool: config.include_apply_patch_tool,
include_web_search_request: config.tools_web_search_request,
use_streamable_shell_tool: config.use_experimental_streamable_shell_tool,
include_view_image_tool: config.include_view_image_tool,
experimental_unified_exec_tool: config.use_experimental_unified_exec_tool,
features: &config.features,
});
let new_turn_context = TurnContext {
@@ -1295,14 +1390,7 @@ async fn submission_loop(
client,
tools_config: ToolsConfig::new(&ToolsConfigParams {
model_family: &model_family,
include_plan_tool: config.include_plan_tool,
include_apply_patch_tool: config.include_apply_patch_tool,
include_web_search_request: config.tools_web_search_request,
use_streamable_shell_tool: config
.use_experimental_streamable_shell_tool,
include_view_image_tool: config.include_view_image_tool,
experimental_unified_exec_tool: config
.use_experimental_unified_exec_tool,
features: &config.features,
}),
user_instructions: turn_context.user_instructions.clone(),
base_instructions: turn_context.base_instructions.clone(),
@@ -1402,10 +1490,18 @@ async fn submission_loop(
// This is a cheap lookup from the connection manager's cache.
let tools = sess.services.mcp_connection_manager.list_all_tools();
let auth_statuses = compute_auth_statuses(
config.mcp_servers.iter(),
config.mcp_oauth_credentials_store_mode,
)
.await;
let event = Event {
id: sub_id,
msg: EventMsg::McpListToolsResponse(
crate::protocol::McpListToolsResponseEvent { tools },
crate::protocol::McpListToolsResponseEvent {
tools,
auth_statuses,
},
),
};
sess.send_event(event).await;
@@ -1526,14 +1622,15 @@ async fn spawn_review_thread(
let model = config.review_model.clone();
let review_model_family = find_family_for_model(&model)
.unwrap_or_else(|| parent_turn_context.client.get_model_family());
// For reviews, disable plan, web_search, view_image, and streamable shell regardless of global settings.
let mut review_features = config.features.clone();
review_features.disable(crate::features::Feature::PlanTool);
review_features.disable(crate::features::Feature::WebSearchRequest);
review_features.disable(crate::features::Feature::ViewImageTool);
review_features.disable(crate::features::Feature::StreamableShell);
let tools_config = ToolsConfig::new(&ToolsConfigParams {
model_family: &review_model_family,
include_plan_tool: false,
include_apply_patch_tool: config.include_apply_patch_tool,
include_web_search_request: false,
use_streamable_shell_tool: false,
include_view_image_tool: false,
experimental_unified_exec_tool: config.use_experimental_unified_exec_tool,
features: &review_features,
});
let base_instructions = REVIEW_PROMPT.to_string();
@@ -1624,6 +1721,7 @@ pub(crate) async fn run_task(
turn_context: Arc<TurnContext>,
sub_id: String,
input: Vec<InputItem>,
task_kind: TaskKind,
) -> Option<String> {
if input.is_empty() {
return None;
@@ -1651,6 +1749,14 @@ pub(crate) async fn run_task(
.await;
}
if !is_review_mode
&& let Err(err) = sess
.emit_codebase_delta_if_changed(turn_context.as_ref(), &sub_id)
.await
{
warn!(error = ?err, "failed to compute codebase changes");
}
let mut last_agent_message: Option<String> = None;
// Although from the perspective of codex.rs, TurnDiffTracker has the lifecycle of a Task which contains
// many turns, from the perspective of the user, it is a single turn.
@@ -1707,6 +1813,7 @@ pub(crate) async fn run_task(
Arc::clone(&turn_diff_tracker),
sub_id.clone(),
turn_input,
task_kind,
)
.await
{
@@ -1859,6 +1966,7 @@ pub(crate) async fn run_task(
);
sess.notifier()
.notify(&UserNotification::AgentTurnComplete {
thread_id: sess.conversation_id.to_string(),
turn_id: sub_id.clone(),
input_messages: turn_input_messages,
last_assistant_message: last_agent_message.clone(),
@@ -1882,6 +1990,11 @@ pub(crate) async fn run_task(
}
}
if !is_review_mode && let Err(err) = sess.refresh_codebase_snapshot(turn_context.as_ref()).await
{
warn!(error = ?err, "failed to refresh codebase snapshot");
}
// If this was a review thread and we have a final assistant message,
// try to parse it as a ReviewOutput.
//
@@ -1932,6 +2045,7 @@ async fn run_turn(
turn_diff_tracker: SharedTurnDiffTracker,
sub_id: String,
input: Vec<ResponseItem>,
task_kind: TaskKind,
) -> CodexResult<TurnRunResult> {
let mcp_tools = sess.services.mcp_connection_manager.list_all_tools();
let router = Arc::new(ToolRouter::from_config(
@@ -1961,6 +2075,7 @@ async fn run_turn(
Arc::clone(&turn_diff_tracker),
&sub_id,
&prompt,
task_kind,
)
.await
{
@@ -1998,9 +2113,7 @@ async fn run_turn(
// at a seemingly frozen screen.
sess.notify_stream_error(
&sub_id,
format!(
"stream error: {e}; retrying {retries}/{max_retries} in {delay:?}"
),
format!("Re-connecting... {retries}/{max_retries}"),
)
.await;
@@ -2036,6 +2149,7 @@ async fn try_run_turn(
turn_diff_tracker: SharedTurnDiffTracker,
sub_id: &str,
prompt: &Prompt,
task_kind: TaskKind,
) -> CodexResult<TurnRunResult> {
// call_ids that are part of this response.
let completed_call_ids = prompt
@@ -2101,7 +2215,11 @@ async fn try_run_turn(
summary: turn_context.client.get_reasoning_summary(),
});
sess.persist_rollout_items(&[rollout_item]).await;
let mut stream = turn_context.client.clone().stream(&prompt).await?;
let mut stream = turn_context
.client
.clone()
.stream_with_task_kind(prompt.as_ref(), task_kind)
.await?;
let tool_runtime = ToolCallRuntime::new(
Arc::clone(&router),
@@ -2740,12 +2858,7 @@ mod tests {
);
let tools_config = ToolsConfig::new(&ToolsConfigParams {
model_family: &config.model_family,
include_plan_tool: config.include_plan_tool,
include_apply_patch_tool: config.include_apply_patch_tool,
include_web_search_request: config.tools_web_search_request,
use_streamable_shell_tool: config.use_experimental_streamable_shell_tool,
include_view_image_tool: config.include_view_image_tool,
experimental_unified_exec_tool: config.use_experimental_unified_exec_tool,
features: &config.features,
});
let turn_context = TurnContext {
client,
@@ -2813,12 +2926,7 @@ mod tests {
);
let tools_config = ToolsConfig::new(&ToolsConfigParams {
model_family: &config.model_family,
include_plan_tool: config.include_plan_tool,
include_apply_patch_tool: config.include_apply_patch_tool,
include_web_search_request: config.tools_web_search_request,
use_streamable_shell_tool: config.use_experimental_streamable_shell_tool,
include_view_image_tool: config.include_view_image_tool,
experimental_unified_exec_tool: config.use_experimental_unified_exec_tool,
features: &config.features,
});
let turn_context = Arc::new(TurnContext {
client,

View File

@@ -16,6 +16,7 @@ use crate::protocol::InputItem;
use crate::protocol::InputMessageKind;
use crate::protocol::TaskStartedEvent;
use crate::protocol::TurnContextItem;
use crate::state::TaskKind;
use crate::truncate::truncate_middle;
use crate::util::backoff;
use askama::Template;
@@ -70,14 +71,10 @@ async fn run_compact_task_inner(
input: Vec<InputItem>,
) {
let initial_input_for_turn: ResponseInputItem = ResponseInputItem::from(input);
let turn_input = sess
let mut turn_input = sess
.turn_input_with_history(vec![initial_input_for_turn.clone().into()])
.await;
let prompt = Prompt {
input: turn_input,
..Default::default()
};
let mut truncated_count = 0usize;
let max_retries = turn_context.client.get_provider().stream_max_retries();
let mut retries = 0;
@@ -93,17 +90,36 @@ async fn run_compact_task_inner(
sess.persist_rollout_items(&[rollout_item]).await;
loop {
let prompt = Prompt {
input: turn_input.clone(),
..Default::default()
};
let attempt_result =
drain_to_completed(&sess, turn_context.as_ref(), &sub_id, &prompt).await;
match attempt_result {
Ok(()) => {
if truncated_count > 0 {
sess.notify_background_event(
&sub_id,
format!(
"Trimmed {truncated_count} older conversation item(s) before compacting so the prompt fits the model context window."
),
)
.await;
}
break;
}
Err(CodexErr::Interrupted) => {
return;
}
Err(e @ CodexErr::ContextWindowExceeded) => {
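// Drop the oldest history item and retry; resetting `retries` counts each
// successful trim as progress rather than a failed attempt.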
if turn_input.len() > 1 {
turn_input.remove(0);
truncated_count += 1;
retries = 0;
continue;
}
sess.set_total_tokens_full(&sub_id, turn_context.as_ref())
.await;
let event = Event {
@@ -121,9 +137,7 @@ async fn run_compact_task_inner(
let delay = backoff(retries);
sess.notify_stream_error(
&sub_id,
format!(
"stream error: {e}; retrying {retries}/{max_retries} in {delay:?}"
),
format!("Re-connecting... {retries}/{max_retries}"),
)
.await;
tokio::time::sleep(delay).await;
@@ -245,7 +259,11 @@ async fn drain_to_completed(
sub_id: &str,
prompt: &Prompt,
) -> CodexResult<()> {
let mut stream = turn_context.client.clone().stream(prompt).await?;
let mut stream = turn_context
.client
.clone()
.stream_with_task_kind(prompt, TaskKind::Compact)
.await?;
loop {
let maybe_event = stream.next().await;
let Some(event) = maybe_event else {

View File

@@ -17,6 +17,10 @@ use crate::config_types::ShellEnvironmentPolicy;
use crate::config_types::ShellEnvironmentPolicyToml;
use crate::config_types::Tui;
use crate::config_types::UriBasedFileOpener;
use crate::features::Feature;
use crate::features::FeatureOverrides;
use crate::features::Features;
use crate::features::FeaturesToml;
use crate::git_info::resolve_root_git_project_for_trust;
use crate::model_family::ModelFamily;
use crate::model_family::derive_default_model_family;
@@ -33,12 +37,15 @@ use codex_protocol::config_types::ReasoningEffort;
use codex_protocol::config_types::ReasoningSummary;
use codex_protocol::config_types::SandboxMode;
use codex_protocol::config_types::Verbosity;
use codex_rmcp_client::OAuthCredentialsStoreMode;
use dirs::home_dir;
use serde::Deserialize;
use std::collections::BTreeMap;
use std::collections::HashMap;
use std::io::ErrorKind;
use std::path::Path;
use std::path::PathBuf;
use tempfile::NamedTempFile;
use toml::Value as TomlValue;
use toml_edit::Array as TomlArray;
@@ -142,6 +149,15 @@ pub struct Config {
/// Definition for MCP servers that Codex can reach out to for tool calls.
pub mcp_servers: HashMap<String, McpServerConfig>,
/// Preferred store for MCP OAuth credentials.
/// - keyring: use an OS-specific keyring service. Credentials stored in the
/// keyring are only readable by Codex unless the user explicitly grants
/// access through OS-level keyring controls.
/// https://github.com/openai/codex/blob/main/codex-rs/rmcp-client/src/oauth.rs#L2
/// - file: `CODEX_HOME/.credentials.json`, readable by Codex and any other
/// application running as the same user.
/// - auto (default): keyring if available, otherwise file.
pub mcp_oauth_credentials_store_mode: OAuthCredentialsStoreMode,
/// Combined provider map (defaults merged with user-defined overrides).
pub model_providers: HashMap<String, ModelProviderInfo>,
@@ -206,6 +222,9 @@ pub struct Config {
/// Include the `view_image` tool that lets the agent attach a local image path to context.
pub include_view_image_tool: bool,
/// Centralized feature flags; source of truth for feature gating.
pub features: Features,
/// The active profile name used to derive this `Config` (if any).
pub active_profile: Option<String>,
@@ -301,12 +320,35 @@ pub async fn load_global_mcp_servers(
return Ok(BTreeMap::new());
};
ensure_no_inline_bearer_tokens(servers_value)?;
servers_value
.clone()
.try_into()
.map_err(|e| std::io::Error::new(std::io::ErrorKind::InvalidData, e))
}
/// We briefly allowed plain-text `bearer_token` fields in MCP server configs.
/// This check warns users who recently added the field; it can be removed after a few months.
fn ensure_no_inline_bearer_tokens(value: &TomlValue) -> std::io::Result<()> {
let Some(servers_table) = value.as_table() else {
return Ok(());
};
for (server_name, server_value) in servers_table {
if let Some(server_table) = server_value.as_table()
&& server_table.contains_key("bearer_token")
{
let message = format!(
"mcp_servers.{server_name} uses unsupported `bearer_token`; set `bearer_token_env_var`."
);
return Err(std::io::Error::new(ErrorKind::InvalidData, message));
}
}
Ok(())
}
pub fn write_global_mcp_servers(
codex_home: &Path,
servers: &BTreeMap<String, McpServerConfig>,
@@ -355,14 +397,21 @@ pub fn write_global_mcp_servers(
entry["env"] = TomlItem::Table(env_table);
}
}
McpServerTransportConfig::StreamableHttp { url, bearer_token } => {
McpServerTransportConfig::StreamableHttp {
url,
bearer_token_env_var,
} => {
entry["url"] = toml_edit::value(url.clone());
if let Some(token) = bearer_token {
entry["bearer_token"] = toml_edit::value(token.clone());
if let Some(env_var) = bearer_token_env_var {
entry["bearer_token_env_var"] = toml_edit::value(env_var.clone());
}
}
}
if !config.enabled {
entry["enabled"] = toml_edit::value(false);
}
if let Some(timeout) = config.startup_timeout_sec {
entry["startup_timeout_sec"] = toml_edit::value(timeout.as_secs_f64());
}
@@ -694,6 +743,14 @@ pub struct ConfigToml {
#[serde(default)]
pub mcp_servers: HashMap<String, McpServerConfig>,
/// Preferred backend for storing MCP OAuth credentials.
/// - keyring: use an OS-specific keyring service.
/// https://github.com/openai/codex/blob/main/codex-rs/rmcp-client/src/oauth.rs#L2
/// - file: use a file in the Codex home directory.
/// - auto (default): use the OS-specific keyring service if available, otherwise use a file.
#[serde(default)]
pub mcp_oauth_credentials_store: Option<OAuthCredentialsStoreMode>,
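// Example (config.toml), per the tests below; "auto" is assumed to be the
// serialized name of the default variant:
//     mcp_oauth_credentials_store = "keyring"   # or "file" / "auto"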
/// User-defined provider entries that extend/override the built-in list.
#[serde(default)]
pub model_providers: HashMap<String, ModelProviderInfo>,
@@ -744,19 +801,15 @@ pub struct ConfigToml {
/// Base URL for requests to ChatGPT (as opposed to the OpenAI API).
pub chatgpt_base_url: Option<String>,
/// Experimental path to a file whose contents replace the built-in BASE_INSTRUCTIONS.
pub experimental_instructions_file: Option<PathBuf>,
pub experimental_use_exec_command_tool: Option<bool>,
pub experimental_use_unified_exec_tool: Option<bool>,
pub experimental_use_rmcp_client: Option<bool>,
pub experimental_use_freeform_apply_patch: Option<bool>,
pub projects: Option<HashMap<String, ProjectConfig>>,
/// Nested tools section for feature toggles
pub tools: Option<ToolsToml>,
/// Centralized feature flags (new). Prefer this over individual toggles.
#[serde(default)]
pub features: Option<FeaturesToml>,
/// When true, disables burst-paste detection for typed input entirely.
/// All characters are inserted as they are received, and no buffering
/// or placeholder replacement will occur for fast keypress bursts.
@@ -767,6 +820,13 @@ pub struct ConfigToml {
/// Tracks whether the Windows onboarding screen has been acknowledged.
pub windows_wsl_setup_acknowledged: Option<bool>,
/// Legacy toggles, superseded by the `features` table.
pub experimental_instructions_file: Option<PathBuf>,
pub experimental_use_exec_command_tool: Option<bool>,
pub experimental_use_unified_exec_tool: Option<bool>,
pub experimental_use_rmcp_client: Option<bool>,
pub experimental_use_freeform_apply_patch: Option<bool>,
}
impl From<ConfigToml> for UserSavedConfig {
@@ -930,9 +990,9 @@ impl Config {
config_profile: config_profile_key,
codex_linux_sandbox_exe,
base_instructions,
include_plan_tool,
include_apply_patch_tool,
include_view_image_tool,
include_plan_tool: include_plan_tool_override,
include_apply_patch_tool: include_apply_patch_tool_override,
include_view_image_tool: include_view_image_tool_override,
show_raw_agent_reasoning,
tools_web_search_request: override_tools_web_search_request,
} = overrides;
@@ -955,6 +1015,15 @@ impl Config {
None => ConfigProfile::default(),
};
let feature_overrides = FeatureOverrides {
include_plan_tool: include_plan_tool_override,
include_apply_patch_tool: include_apply_patch_tool_override,
include_view_image_tool: include_view_image_tool_override,
web_search_request: override_tools_web_search_request,
};
let features = Features::from_config(&cfg, &config_profile, feature_overrides);
let sandbox_policy = cfg.derive_sandbox_policy(sandbox_mode);
let mut model_providers = built_in_model_providers();
@@ -1000,13 +1069,13 @@ impl Config {
let history = cfg.history.unwrap_or_default();
let tools_web_search_request = override_tools_web_search_request
.or(cfg.tools.as_ref().and_then(|t| t.web_search))
.unwrap_or(false);
let include_view_image_tool = include_view_image_tool
.or(cfg.tools.as_ref().and_then(|t| t.view_image))
.unwrap_or(true);
let include_plan_tool_flag = features.enabled(Feature::PlanTool);
let include_apply_patch_tool_flag = features.enabled(Feature::ApplyPatchFreeform);
let include_view_image_tool_flag = features.enabled(Feature::ViewImageTool);
let tools_web_search_request = features.enabled(Feature::WebSearchRequest);
let use_experimental_streamable_shell_tool = features.enabled(Feature::StreamableShell);
let use_experimental_unified_exec_tool = features.enabled(Feature::UnifiedExec);
let use_experimental_use_rmcp_client = features.enabled(Feature::RmcpClient);
let model = model
.or(config_profile.model)
@@ -1074,6 +1143,9 @@ impl Config {
user_instructions,
base_instructions,
mcp_servers: cfg.mcp_servers,
// The config.toml key omits the "_mode" suffix; in code the suffix
// distinguishes the selected mode from the store implementation itself.
mcp_oauth_credentials_store_mode: cfg.mcp_oauth_credentials_store.unwrap_or_default(),
model_providers,
project_doc_max_bytes: cfg.project_doc_max_bytes.unwrap_or(PROJECT_DOC_MAX_BYTES),
project_doc_fallback_filenames: cfg
@@ -1111,19 +1183,14 @@ impl Config {
.chatgpt_base_url
.or(cfg.chatgpt_base_url)
.unwrap_or("https://chatgpt.com/backend-api/".to_string()),
include_plan_tool: include_plan_tool.unwrap_or(false),
include_apply_patch_tool: include_apply_patch_tool
.or(cfg.experimental_use_freeform_apply_patch)
.unwrap_or(false),
include_plan_tool: include_plan_tool_flag,
include_apply_patch_tool: include_apply_patch_tool_flag,
tools_web_search_request,
use_experimental_streamable_shell_tool: cfg
.experimental_use_exec_command_tool
.unwrap_or(false),
use_experimental_unified_exec_tool: cfg
.experimental_use_unified_exec_tool
.unwrap_or(false),
use_experimental_use_rmcp_client: cfg.experimental_use_rmcp_client.unwrap_or(false),
include_view_image_tool,
use_experimental_streamable_shell_tool,
use_experimental_unified_exec_tool,
use_experimental_use_rmcp_client,
include_view_image_tool: include_view_image_tool_flag,
features,
active_profile: active_profile_name,
windows_wsl_setup_acknowledged: cfg.windows_wsl_setup_acknowledged.unwrap_or(false),
disable_paste_burst: cfg.disable_paste_burst.unwrap_or(false),
@@ -1256,6 +1323,7 @@ pub fn log_dir(cfg: &Config) -> std::io::Result<PathBuf> {
mod tests {
use crate::config_types::HistoryPersistence;
use crate::config_types::Notifications;
use crate::features::Feature;
use super::*;
use pretty_assertions::assert_eq;
@@ -1364,6 +1432,172 @@ exclude_slash_tmp = true
);
}
#[test]
fn config_defaults_to_auto_oauth_store_mode() -> std::io::Result<()> {
let codex_home = TempDir::new()?;
let cfg = ConfigToml::default();
let config = Config::load_from_base_config_with_overrides(
cfg,
ConfigOverrides::default(),
codex_home.path().to_path_buf(),
)?;
assert_eq!(
config.mcp_oauth_credentials_store_mode,
OAuthCredentialsStoreMode::Auto,
);
Ok(())
}
#[test]
fn profile_legacy_toggles_override_base() -> std::io::Result<()> {
let codex_home = TempDir::new()?;
let mut profiles = HashMap::new();
profiles.insert(
"work".to_string(),
ConfigProfile {
include_plan_tool: Some(true),
include_view_image_tool: Some(false),
..Default::default()
},
);
let cfg = ConfigToml {
profiles,
profile: Some("work".to_string()),
..Default::default()
};
let config = Config::load_from_base_config_with_overrides(
cfg,
ConfigOverrides::default(),
codex_home.path().to_path_buf(),
)?;
assert!(config.features.enabled(Feature::PlanTool));
assert!(!config.features.enabled(Feature::ViewImageTool));
assert!(config.include_plan_tool);
assert!(!config.include_view_image_tool);
Ok(())
}
#[test]
fn feature_table_overrides_legacy_flags() -> std::io::Result<()> {
let codex_home = TempDir::new()?;
let mut entries = BTreeMap::new();
entries.insert("plan_tool".to_string(), false);
entries.insert("apply_patch_freeform".to_string(), false);
let cfg = ConfigToml {
features: Some(crate::features::FeaturesToml { entries }),
..Default::default()
};
let config = Config::load_from_base_config_with_overrides(
cfg,
ConfigOverrides::default(),
codex_home.path().to_path_buf(),
)?;
assert!(!config.features.enabled(Feature::PlanTool));
assert!(!config.features.enabled(Feature::ApplyPatchFreeform));
assert!(!config.include_plan_tool);
assert!(!config.include_apply_patch_tool);
Ok(())
}
#[test]
fn legacy_toggles_map_to_features() -> std::io::Result<()> {
let codex_home = TempDir::new()?;
let cfg = ConfigToml {
experimental_use_exec_command_tool: Some(true),
experimental_use_unified_exec_tool: Some(true),
experimental_use_rmcp_client: Some(true),
experimental_use_freeform_apply_patch: Some(true),
..Default::default()
};
let config = Config::load_from_base_config_with_overrides(
cfg,
ConfigOverrides::default(),
codex_home.path().to_path_buf(),
)?;
assert!(config.features.enabled(Feature::ApplyPatchFreeform));
assert!(config.features.enabled(Feature::StreamableShell));
assert!(config.features.enabled(Feature::UnifiedExec));
assert!(config.features.enabled(Feature::RmcpClient));
assert!(config.include_apply_patch_tool);
assert!(config.use_experimental_streamable_shell_tool);
assert!(config.use_experimental_unified_exec_tool);
assert!(config.use_experimental_use_rmcp_client);
Ok(())
}
#[test]
fn config_honors_explicit_file_oauth_store_mode() -> std::io::Result<()> {
let codex_home = TempDir::new()?;
let cfg = ConfigToml {
mcp_oauth_credentials_store: Some(OAuthCredentialsStoreMode::File),
..Default::default()
};
let config = Config::load_from_base_config_with_overrides(
cfg,
ConfigOverrides::default(),
codex_home.path().to_path_buf(),
)?;
assert_eq!(
config.mcp_oauth_credentials_store_mode,
OAuthCredentialsStoreMode::File,
);
Ok(())
}
#[tokio::test]
async fn managed_config_overrides_oauth_store_mode() -> anyhow::Result<()> {
let codex_home = TempDir::new()?;
let managed_path = codex_home.path().join("managed_config.toml");
let config_path = codex_home.path().join(CONFIG_TOML_FILE);
std::fs::write(&config_path, "mcp_oauth_credentials_store = \"file\"\n")?;
std::fs::write(&managed_path, "mcp_oauth_credentials_store = \"keyring\"\n")?;
let overrides = crate::config_loader::LoaderOverrides {
managed_config_path: Some(managed_path.clone()),
#[cfg(target_os = "macos")]
managed_preferences_base64: None,
};
let root_value = load_resolved_config(codex_home.path(), Vec::new(), overrides).await?;
let cfg: ConfigToml = root_value.try_into().map_err(|e| {
tracing::error!("Failed to deserialize overridden config: {e}");
std::io::Error::new(std::io::ErrorKind::InvalidData, e)
})?;
assert_eq!(
cfg.mcp_oauth_credentials_store,
Some(OAuthCredentialsStoreMode::Keyring),
);
let final_config = Config::load_from_base_config_with_overrides(
cfg,
ConfigOverrides::default(),
codex_home.path().to_path_buf(),
)?;
assert_eq!(
final_config.mcp_oauth_credentials_store_mode,
OAuthCredentialsStoreMode::Keyring,
);
Ok(())
}
#[tokio::test]
async fn load_global_mcp_servers_returns_empty_if_missing() -> anyhow::Result<()> {
let codex_home = TempDir::new()?;
@@ -1387,6 +1621,7 @@ exclude_slash_tmp = true
args: vec!["hello".to_string()],
env: None,
},
enabled: true,
startup_timeout_sec: Some(Duration::from_secs(3)),
tool_timeout_sec: Some(Duration::from_secs(5)),
},
@@ -1407,6 +1642,7 @@ exclude_slash_tmp = true
}
assert_eq!(docs.startup_timeout_sec, Some(Duration::from_secs(3)));
assert_eq!(docs.tool_timeout_sec, Some(Duration::from_secs(5)));
assert!(docs.enabled);
let empty = BTreeMap::new();
write_global_mcp_servers(codex_home.path(), &empty)?;
@@ -1471,6 +1707,31 @@ startup_timeout_ms = 2500
Ok(())
}
#[tokio::test]
async fn load_global_mcp_servers_rejects_inline_bearer_token() -> anyhow::Result<()> {
let codex_home = TempDir::new()?;
let config_path = codex_home.path().join(CONFIG_TOML_FILE);
std::fs::write(
&config_path,
r#"
[mcp_servers.docs]
url = "https://example.com/mcp"
bearer_token = "secret"
"#,
)?;
let err = load_global_mcp_servers(codex_home.path())
.await
.expect_err("bearer_token entries should be rejected");
assert_eq!(err.kind(), std::io::ErrorKind::InvalidData);
assert!(err.to_string().contains("bearer_token"));
assert!(err.to_string().contains("bearer_token_env_var"));
Ok(())
}
#[tokio::test]
async fn write_global_mcp_servers_serializes_env_sorted() -> anyhow::Result<()> {
let codex_home = TempDir::new()?;
@@ -1486,6 +1747,7 @@ startup_timeout_ms = 2500
("ALPHA_VAR".to_string(), "1".to_string()),
])),
},
enabled: true,
startup_timeout_sec: None,
tool_timeout_sec: None,
},
@@ -1534,8 +1796,9 @@ ZIG_VAR = "3"
McpServerConfig {
transport: McpServerTransportConfig::StreamableHttp {
url: "https://example.com/mcp".to_string(),
bearer_token: Some("secret-token".to_string()),
bearer_token_env_var: Some("MCP_TOKEN".to_string()),
},
enabled: true,
startup_timeout_sec: Some(Duration::from_secs(2)),
tool_timeout_sec: None,
},
@@ -1549,7 +1812,7 @@ ZIG_VAR = "3"
serialized,
r#"[mcp_servers.docs]
url = "https://example.com/mcp"
bearer_token = "secret-token"
bearer_token_env_var = "MCP_TOKEN"
startup_timeout_sec = 2.0
"#
);
@@ -1557,9 +1820,12 @@ startup_timeout_sec = 2.0
let loaded = load_global_mcp_servers(codex_home.path()).await?;
let docs = loaded.get("docs").expect("docs entry");
match &docs.transport {
McpServerTransportConfig::StreamableHttp { url, bearer_token } => {
McpServerTransportConfig::StreamableHttp {
url,
bearer_token_env_var,
} => {
assert_eq!(url, "https://example.com/mcp");
assert_eq!(bearer_token.as_deref(), Some("secret-token"));
assert_eq!(bearer_token_env_var.as_deref(), Some("MCP_TOKEN"));
}
other => panic!("unexpected transport {other:?}"),
}
@@ -1570,8 +1836,9 @@ startup_timeout_sec = 2.0
McpServerConfig {
transport: McpServerTransportConfig::StreamableHttp {
url: "https://example.com/mcp".to_string(),
bearer_token: None,
bearer_token_env_var: None,
},
enabled: true,
startup_timeout_sec: None,
tool_timeout_sec: None,
},
@@ -1589,9 +1856,12 @@ url = "https://example.com/mcp"
let loaded = load_global_mcp_servers(codex_home.path()).await?;
let docs = loaded.get("docs").expect("docs entry");
match &docs.transport {
McpServerTransportConfig::StreamableHttp { url, bearer_token } => {
McpServerTransportConfig::StreamableHttp {
url,
bearer_token_env_var,
} => {
assert_eq!(url, "https://example.com/mcp");
assert!(bearer_token.is_none());
assert!(bearer_token_env_var.is_none());
}
other => panic!("unexpected transport {other:?}"),
}
@@ -1599,6 +1869,40 @@ url = "https://example.com/mcp"
Ok(())
}
#[tokio::test]
async fn write_global_mcp_servers_serializes_disabled_flag() -> anyhow::Result<()> {
let codex_home = TempDir::new()?;
let servers = BTreeMap::from([(
"docs".to_string(),
McpServerConfig {
transport: McpServerTransportConfig::Stdio {
command: "docs-server".to_string(),
args: Vec::new(),
env: None,
},
enabled: false,
startup_timeout_sec: None,
tool_timeout_sec: None,
},
)]);
write_global_mcp_servers(codex_home.path(), &servers)?;
let config_path = codex_home.path().join(CONFIG_TOML_FILE);
let serialized = std::fs::read_to_string(&config_path)?;
assert!(
serialized.contains("enabled = false"),
"serialized config missing disabled flag:\n{serialized}"
);
let loaded = load_global_mcp_servers(codex_home.path()).await?;
let docs = loaded.get("docs").expect("docs entry");
assert!(!docs.enabled);
Ok(())
}
#[tokio::test]
async fn persist_model_selection_updates_defaults() -> anyhow::Result<()> {
let codex_home = TempDir::new()?;
@@ -1896,6 +2200,7 @@ model_verbosity = "high"
notify: None,
cwd: fixture.cwd(),
mcp_servers: HashMap::new(),
mcp_oauth_credentials_store_mode: Default::default(),
model_providers: fixture.model_provider_map.clone(),
project_doc_max_bytes: PROJECT_DOC_MAX_BYTES,
project_doc_fallback_filenames: Vec::new(),
@@ -1917,6 +2222,7 @@ model_verbosity = "high"
use_experimental_unified_exec_tool: false,
use_experimental_use_rmcp_client: false,
include_view_image_tool: true,
features: Features::with_defaults(),
active_profile: Some("o3".to_string()),
windows_wsl_setup_acknowledged: false,
disable_paste_burst: false,
@@ -1958,6 +2264,7 @@ model_verbosity = "high"
notify: None,
cwd: fixture.cwd(),
mcp_servers: HashMap::new(),
mcp_oauth_credentials_store_mode: Default::default(),
model_providers: fixture.model_provider_map.clone(),
project_doc_max_bytes: PROJECT_DOC_MAX_BYTES,
project_doc_fallback_filenames: Vec::new(),
@@ -1979,6 +2286,7 @@ model_verbosity = "high"
use_experimental_unified_exec_tool: false,
use_experimental_use_rmcp_client: false,
include_view_image_tool: true,
features: Features::with_defaults(),
active_profile: Some("gpt3".to_string()),
windows_wsl_setup_acknowledged: false,
disable_paste_burst: false,
@@ -2035,6 +2343,7 @@ model_verbosity = "high"
notify: None,
cwd: fixture.cwd(),
mcp_servers: HashMap::new(),
mcp_oauth_credentials_store_mode: Default::default(),
model_providers: fixture.model_provider_map.clone(),
project_doc_max_bytes: PROJECT_DOC_MAX_BYTES,
project_doc_fallback_filenames: Vec::new(),
@@ -2056,6 +2365,7 @@ model_verbosity = "high"
use_experimental_unified_exec_tool: false,
use_experimental_use_rmcp_client: false,
include_view_image_tool: true,
features: Features::with_defaults(),
active_profile: Some("zdr".to_string()),
windows_wsl_setup_acknowledged: false,
disable_paste_burst: false,
@@ -2098,6 +2408,7 @@ model_verbosity = "high"
notify: None,
cwd: fixture.cwd(),
mcp_servers: HashMap::new(),
mcp_oauth_credentials_store_mode: Default::default(),
model_providers: fixture.model_provider_map.clone(),
project_doc_max_bytes: PROJECT_DOC_MAX_BYTES,
project_doc_fallback_filenames: Vec::new(),
@@ -2119,6 +2430,7 @@ model_verbosity = "high"
use_experimental_unified_exec_tool: false,
use_experimental_use_rmcp_client: false,
include_view_image_tool: true,
features: Features::with_defaults(),
active_profile: Some("gpt5".to_string()),
windows_wsl_setup_acknowledged: false,
disable_paste_burst: false,

View File

@@ -20,6 +20,18 @@ pub struct ConfigProfile {
pub model_verbosity: Option<Verbosity>,
pub chatgpt_base_url: Option<String>,
pub experimental_instructions_file: Option<PathBuf>,
pub include_plan_tool: Option<bool>,
pub include_apply_patch_tool: Option<bool>,
pub include_view_image_tool: Option<bool>,
pub experimental_use_unified_exec_tool: Option<bool>,
pub experimental_use_exec_command_tool: Option<bool>,
pub experimental_use_rmcp_client: Option<bool>,
pub experimental_use_freeform_apply_patch: Option<bool>,
pub tools_web_search: Option<bool>,
pub tools_view_image: Option<bool>,
/// Optional feature toggles scoped to this profile.
#[serde(default)]
pub features: Option<crate::features::FeaturesToml>,
}
impl From<ConfigProfile> for codex_app_server_protocol::Profile {

View File

@@ -20,6 +20,10 @@ pub struct McpServerConfig {
#[serde(flatten)]
pub transport: McpServerTransportConfig,
/// When `false`, Codex skips initializing this MCP server.
#[serde(default = "default_enabled")]
pub enabled: bool,
/// Startup timeout in seconds for initializing MCP server & initially listing tools.
#[serde(
default,
@@ -48,6 +52,7 @@ impl<'de> Deserialize<'de> for McpServerConfig {
url: Option<String>,
bearer_token: Option<String>,
bearer_token_env_var: Option<String>,
#[serde(default)]
startup_timeout_sec: Option<f64>,
@@ -55,6 +60,8 @@ impl<'de> Deserialize<'de> for McpServerConfig {
startup_timeout_ms: Option<u64>,
#[serde(default, with = "option_duration_secs")]
tool_timeout_sec: Option<Duration>,
#[serde(default)]
enabled: Option<bool>,
}
let raw = RawMcpServerConfig::deserialize(deserializer)?;
@@ -86,11 +93,15 @@ impl<'de> Deserialize<'de> for McpServerConfig {
args,
env,
url,
bearer_token,
bearer_token_env_var,
..
} => {
throw_if_set("stdio", "url", url.as_ref())?;
throw_if_set("stdio", "bearer_token", bearer_token.as_ref())?;
throw_if_set(
"stdio",
"bearer_token_env_var",
bearer_token_env_var.as_ref(),
)?;
McpServerTransportConfig::Stdio {
command,
args: args.unwrap_or_default(),
@@ -100,6 +111,7 @@ impl<'de> Deserialize<'de> for McpServerConfig {
RawMcpServerConfig {
url: Some(url),
bearer_token,
bearer_token_env_var,
command,
args,
env,
@@ -108,7 +120,11 @@ impl<'de> Deserialize<'de> for McpServerConfig {
throw_if_set("streamable_http", "command", command.as_ref())?;
throw_if_set("streamable_http", "args", args.as_ref())?;
throw_if_set("streamable_http", "env", env.as_ref())?;
McpServerTransportConfig::StreamableHttp { url, bearer_token }
throw_if_set("streamable_http", "bearer_token", bearer_token.as_ref())?;
McpServerTransportConfig::StreamableHttp {
url,
bearer_token_env_var,
}
}
_ => return Err(SerdeError::custom("invalid transport")),
};
@@ -117,10 +133,15 @@ impl<'de> Deserialize<'de> for McpServerConfig {
transport,
startup_timeout_sec,
tool_timeout_sec: raw.tool_timeout_sec,
enabled: raw.enabled.unwrap_or_else(default_enabled),
})
}
}
const fn default_enabled() -> bool {
true
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq)]
#[serde(untagged, deny_unknown_fields, rename_all = "snake_case")]
pub enum McpServerTransportConfig {
@@ -135,11 +156,11 @@ pub enum McpServerTransportConfig {
/// https://modelcontextprotocol.io/specification/2025-06-18/basic/transports#streamable-http
StreamableHttp {
url: String,
/// A plain text bearer token to use for authentication.
/// This bearer token will be included in the HTTP request header as an `Authorization: Bearer <token>` header.
/// This should be used with caution because it lives on disk in clear text.
/// Name of the environment variable to read for an HTTP bearer token.
/// When set, requests will include the token via `Authorization: Bearer <token>`.
/// The actual secret value must be provided via the environment.
#[serde(default, skip_serializing_if = "Option::is_none")]
bearer_token: Option<String>,
bearer_token_env_var: Option<String>,
},
}
@@ -450,6 +471,7 @@ mod tests {
env: None
}
);
assert!(cfg.enabled);
}
#[test]
@@ -470,6 +492,7 @@ mod tests {
env: None
}
);
assert!(cfg.enabled);
}
#[test]
@@ -491,6 +514,20 @@ mod tests {
env: Some(HashMap::from([("FOO".to_string(), "BAR".to_string())]))
}
);
assert!(cfg.enabled);
}
#[test]
fn deserialize_disabled_server_config() {
let cfg: McpServerConfig = toml::from_str(
r#"
command = "echo"
enabled = false
"#,
)
.expect("should deserialize disabled server config");
assert!(!cfg.enabled);
}
#[test]
@@ -506,17 +543,18 @@ mod tests {
cfg.transport,
McpServerTransportConfig::StreamableHttp {
url: "https://example.com/mcp".to_string(),
bearer_token: None
bearer_token_env_var: None
}
);
assert!(cfg.enabled);
}
#[test]
fn deserialize_streamable_http_server_config_with_bearer_token() {
fn deserialize_streamable_http_server_config_with_env_var() {
let cfg: McpServerConfig = toml::from_str(
r#"
url = "https://example.com/mcp"
bearer_token = "secret"
bearer_token_env_var = "GITHUB_TOKEN"
"#,
)
.expect("should deserialize http config");
@@ -525,9 +563,10 @@ mod tests {
cfg.transport,
McpServerTransportConfig::StreamableHttp {
url: "https://example.com/mcp".to_string(),
bearer_token: Some("secret".to_string())
bearer_token_env_var: Some("GITHUB_TOKEN".to_string())
}
);
assert!(cfg.enabled);
}
#[test]
@@ -553,13 +592,18 @@ mod tests {
}
#[test]
fn deserialize_rejects_bearer_token_for_stdio_transport() {
toml::from_str::<McpServerConfig>(
fn deserialize_rejects_inline_bearer_token_field() {
let err = toml::from_str::<McpServerConfig>(
r#"
command = "echo"
url = "https://example.com"
bearer_token = "secret"
"#,
)
.expect_err("should reject bearer token for stdio transport");
.expect_err("should reject bearer_token field");
assert!(
err.to_string().contains("bearer_token is not supported"),
"unexpected error: {err}"
);
}
}
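Putting the new fields together, a streamable HTTP server entry now looks roughly like this (a sketch based on the tests above; the secret itself stays in the named environment variable and is resolved at startup, never written to `config.toml`):

```toml
[mcp_servers.github]
url = "https://example.com/mcp"
bearer_token_env_var = "GITHUB_TOKEN"
enabled = true
```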

View File

@@ -1,6 +1,7 @@
use crate::exec::ExecToolCallOutput;
use crate::token_data::KnownPlan;
use crate::token_data::PlanType;
use crate::truncate::truncate_middle;
use codex_protocol::ConversationId;
use codex_protocol::protocol::RateLimitSnapshot;
use reqwest::StatusCode;
@@ -12,6 +13,9 @@ use tokio::task::JoinError;
pub type Result<T> = std::result::Result<T, CodexErr>;
/// Limit UI error messages to a reasonable size while keeping useful context.
const ERROR_MESSAGE_UI_MAX_BYTES: usize = 2 * 1024; // 2 KiB
#[derive(Error, Debug)]
pub enum SandboxErr {
/// Error from sandbox execution
@@ -304,21 +308,44 @@ impl CodexErr {
}
pub fn get_error_message_ui(e: &CodexErr) -> String {
match e {
CodexErr::Sandbox(SandboxErr::Denied { output }) => output.stderr.text.clone(),
let message = match e {
CodexErr::Sandbox(SandboxErr::Denied { output }) => {
let aggregated = output.aggregated_output.text.trim();
if !aggregated.is_empty() {
output.aggregated_output.text.clone()
} else {
let stderr = output.stderr.text.trim();
let stdout = output.stdout.text.trim();
match (stderr.is_empty(), stdout.is_empty()) {
(false, false) => format!("{stderr}\n{stdout}"),
(false, true) => output.stderr.text.clone(),
(true, false) => output.stdout.text.clone(),
(true, true) => format!(
"command failed inside sandbox with exit code {}",
output.exit_code
),
}
}
}
// Timeouts are not sandbox errors from a UX perspective; present them plainly
CodexErr::Sandbox(SandboxErr::Timeout { output }) => format!(
"error: command timed out after {} ms",
output.duration.as_millis()
),
CodexErr::Sandbox(SandboxErr::Timeout { output }) => {
format!(
"error: command timed out after {} ms",
output.duration.as_millis()
)
}
_ => e.to_string(),
}
};
truncate_middle(&message, ERROR_MESSAGE_UI_MAX_BYTES).0
}
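A minimal sketch of the clamping this adds, assuming only that `truncate_middle` keeps the result within the byte budget by eliding the middle of the string:

```rust
// Oversized sandbox output no longer reaches the UI verbatim.
let message = "x".repeat(8 * 1024);
let clamped = truncate_middle(&message, ERROR_MESSAGE_UI_MAX_BYTES).0;
assert!(clamped.len() < message.len());
```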
#[cfg(test)]
mod tests {
use super::*;
use crate::exec::StreamOutput;
use codex_protocol::protocol::RateLimitWindow;
use pretty_assertions::assert_eq;
fn rate_limit_snapshot() -> RateLimitSnapshot {
RateLimitSnapshot {
@@ -348,6 +375,73 @@ mod tests {
);
}
#[test]
fn sandbox_denied_uses_aggregated_output_when_stderr_empty() {
let output = ExecToolCallOutput {
exit_code: 77,
stdout: StreamOutput::new(String::new()),
stderr: StreamOutput::new(String::new()),
aggregated_output: StreamOutput::new("aggregate detail".to_string()),
duration: Duration::from_millis(10),
timed_out: false,
};
let err = CodexErr::Sandbox(SandboxErr::Denied {
output: Box::new(output),
});
assert_eq!(get_error_message_ui(&err), "aggregate detail");
}
#[test]
fn sandbox_denied_reports_both_streams_when_available() {
let output = ExecToolCallOutput {
exit_code: 9,
stdout: StreamOutput::new("stdout detail".to_string()),
stderr: StreamOutput::new("stderr detail".to_string()),
aggregated_output: StreamOutput::new(String::new()),
duration: Duration::from_millis(10),
timed_out: false,
};
let err = CodexErr::Sandbox(SandboxErr::Denied {
output: Box::new(output),
});
assert_eq!(get_error_message_ui(&err), "stderr detail\nstdout detail");
}
#[test]
fn sandbox_denied_reports_stdout_when_no_stderr() {
let output = ExecToolCallOutput {
exit_code: 11,
stdout: StreamOutput::new("stdout only".to_string()),
stderr: StreamOutput::new(String::new()),
aggregated_output: StreamOutput::new(String::new()),
duration: Duration::from_millis(8),
timed_out: false,
};
let err = CodexErr::Sandbox(SandboxErr::Denied {
output: Box::new(output),
});
assert_eq!(get_error_message_ui(&err), "stdout only");
}
#[test]
fn sandbox_denied_reports_exit_code_when_no_output_available() {
let output = ExecToolCallOutput {
exit_code: 13,
stdout: StreamOutput::new(String::new()),
stderr: StreamOutput::new(String::new()),
aggregated_output: StreamOutput::new(String::new()),
duration: Duration::from_millis(5),
timed_out: false,
};
let err = CodexErr::Sandbox(SandboxErr::Denied {
output: Box::new(output),
});
assert_eq!(
get_error_message_ui(&err),
"command failed inside sandbox with exit code 13"
);
}
#[test]
fn usage_limit_reached_error_formats_free_plan() {
let err = UsageLimitReachedError {

View File

@@ -177,7 +177,7 @@ pub async fn process_exec_tool_call(
}));
}
if exit_code != 0 && is_likely_sandbox_denied(sandbox_type, exit_code) {
if is_likely_sandbox_denied(sandbox_type, &exec_output) {
return Err(CodexErr::Sandbox(SandboxErr::Denied {
output: Box::new(exec_output),
}));
@@ -195,21 +195,57 @@ pub async fn process_exec_tool_call(
/// We don't have a fully deterministic way to tell if our command failed
/// because of the sandbox - a command in the user's zshrc file might hit an
/// error, but the command itself might fail or succeed for other reasons.
/// For now, we conservatively check for 'command not found' (exit code 127),
/// and can add additional cases as necessary.
fn is_likely_sandbox_denied(sandbox_type: SandboxType, exit_code: i32) -> bool {
if sandbox_type == SandboxType::None {
/// For now, we conservatively check for well-known command failure exit codes and
/// also look for common sandbox denial keywords in the command output.
fn is_likely_sandbox_denied(sandbox_type: SandboxType, exec_output: &ExecToolCallOutput) -> bool {
if sandbox_type == SandboxType::None || exec_output.exit_code == 0 {
return false;
}
// Quick rejects: well-known non-sandbox shell exit codes
// 127: command not found, 2: misuse of shell builtins
if exit_code == 127 {
// 2: misuse of shell builtins
// 126: permission denied
// 127: command not found
const QUICK_REJECT_EXIT_CODES: [i32; 3] = [2, 126, 127];
if QUICK_REJECT_EXIT_CODES.contains(&exec_output.exit_code) {
return false;
}
// For all other cases, we assume the sandbox is the cause
true
const SANDBOX_DENIED_KEYWORDS: [&str; 6] = [
"operation not permitted",
"permission denied",
"read-only file system",
"seccomp",
"sandbox",
"landlock",
];
if [
&exec_output.stderr.text,
&exec_output.stdout.text,
&exec_output.aggregated_output.text,
]
.into_iter()
.any(|section| {
let lower = section.to_lowercase();
SANDBOX_DENIED_KEYWORDS
.iter()
.any(|needle| lower.contains(needle))
}) {
return true;
}
#[cfg(unix)]
{
const SIGSYS_CODE: i32 = libc::SIGSYS;
if sandbox_type == SandboxType::LinuxSeccomp
&& exec_output.exit_code == EXIT_CODE_SIGNAL_BASE + SIGSYS_CODE
{
return true;
}
}
false
}
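The SIGSYS branch relies on the usual shell convention that a signal death is reported as 128 plus the signal number. A small sketch (the value of `EXIT_CODE_SIGNAL_BASE` is not shown in this diff; 128 is assumed here):

```rust
// seccomp kills a violating process with SIGSYS (31 on Linux), so the
// synthesized exit code is 128 + 31 = 159.
const EXIT_CODE_SIGNAL_BASE: i32 = 128; // assumed; defined elsewhere in this file
assert_eq!(EXIT_CODE_SIGNAL_BASE + libc::SIGSYS, 159);
```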
#[derive(Debug)]
@@ -436,3 +472,77 @@ fn synthetic_exit_status(code: i32) -> ExitStatus {
#[expect(clippy::unwrap_used)]
std::process::ExitStatus::from_raw(code.try_into().unwrap())
}
#[cfg(test)]
mod tests {
use super::*;
use std::time::Duration;
fn make_exec_output(
exit_code: i32,
stdout: &str,
stderr: &str,
aggregated: &str,
) -> ExecToolCallOutput {
ExecToolCallOutput {
exit_code,
stdout: StreamOutput::new(stdout.to_string()),
stderr: StreamOutput::new(stderr.to_string()),
aggregated_output: StreamOutput::new(aggregated.to_string()),
duration: Duration::from_millis(1),
timed_out: false,
}
}
#[test]
fn sandbox_detection_requires_keywords() {
let output = make_exec_output(1, "", "", "");
assert!(!is_likely_sandbox_denied(
SandboxType::LinuxSeccomp,
&output
));
}
#[test]
fn sandbox_detection_identifies_keyword_in_stderr() {
let output = make_exec_output(1, "", "Operation not permitted", "");
assert!(is_likely_sandbox_denied(SandboxType::LinuxSeccomp, &output));
}
#[test]
fn sandbox_detection_respects_quick_reject_exit_codes() {
let output = make_exec_output(127, "", "command not found", "");
assert!(!is_likely_sandbox_denied(
SandboxType::LinuxSeccomp,
&output
));
}
#[test]
fn sandbox_detection_ignores_non_sandbox_mode() {
let output = make_exec_output(1, "", "Operation not permitted", "");
assert!(!is_likely_sandbox_denied(SandboxType::None, &output));
}
#[test]
fn sandbox_detection_uses_aggregated_output() {
let output = make_exec_output(
101,
"",
"",
"cargo failed: Read-only file system when writing target",
);
assert!(is_likely_sandbox_denied(
SandboxType::MacosSeatbelt,
&output
));
}
#[cfg(unix)]
#[test]
fn sandbox_detection_flags_sigsys_exit_code() {
let exit_code = EXIT_CODE_SIGNAL_BASE + libc::SIGSYS;
let output = make_exec_output(exit_code, "", "", "");
assert!(is_likely_sandbox_denied(SandboxType::LinuxSeccomp, &output));
}
}

View File

@@ -6,6 +6,7 @@ use async_trait::async_trait;
use crate::CODEX_APPLY_PATCH_ARG1;
use crate::apply_patch::ApplyPatchExec;
use crate::exec::ExecParams;
use crate::executor::ExecutorConfig;
use crate::function_tool::FunctionCallError;
pub(crate) enum ExecutionMode {
@@ -22,6 +23,7 @@ pub(crate) trait ExecutionBackend: Send + Sync {
params: ExecParams,
// Required for downcasting the apply_patch.
mode: &ExecutionMode,
config: &ExecutorConfig,
) -> Result<ExecParams, FunctionCallError>;
fn stream_stdout(&self, _mode: &ExecutionMode) -> bool {
@@ -47,6 +49,7 @@ impl ExecutionBackend for ShellBackend {
&self,
params: ExecParams,
mode: &ExecutionMode,
_config: &ExecutorConfig,
) -> Result<ExecParams, FunctionCallError> {
match mode {
ExecutionMode::Shell => Ok(params),
@@ -65,17 +68,22 @@ impl ExecutionBackend for ApplyPatchBackend {
&self,
params: ExecParams,
mode: &ExecutionMode,
config: &ExecutorConfig,
) -> Result<ExecParams, FunctionCallError> {
match mode {
ExecutionMode::ApplyPatch(exec) => {
let path_to_codex = env::current_exe()
.ok()
.map(|p| p.to_string_lossy().to_string())
.ok_or_else(|| {
FunctionCallError::RespondToModel(
"failed to determine path to codex executable".to_string(),
)
})?;
let path_to_codex = if let Some(exe_path) = &config.codex_exe {
exe_path.to_string_lossy().to_string()
} else {
env::current_exe()
.ok()
.map(|p| p.to_string_lossy().to_string())
.ok_or_else(|| {
FunctionCallError::RespondToModel(
"failed to determine path to codex executable".to_string(),
)
})?
};
let patch = exec.action.patch.clone();
Ok(ExecParams {

View File

@@ -30,19 +30,19 @@ use codex_otel::otel_event_manager::ToolDecisionSource;
pub(crate) struct ExecutorConfig {
pub(crate) sandbox_policy: SandboxPolicy,
pub(crate) sandbox_cwd: PathBuf,
codex_linux_sandbox_exe: Option<PathBuf>,
pub(crate) codex_exe: Option<PathBuf>,
}
impl ExecutorConfig {
pub(crate) fn new(
sandbox_policy: SandboxPolicy,
sandbox_cwd: PathBuf,
codex_linux_sandbox_exe: Option<PathBuf>,
codex_exe: Option<PathBuf>,
) -> Self {
Self {
sandbox_policy,
sandbox_cwd,
codex_linux_sandbox_exe,
codex_exe,
}
}
}
@@ -86,7 +86,14 @@ impl Executor {
maybe_translate_shell_command(request.params, session, request.use_shell_profile);
}
// Step 1: Normalise parameters via the selected backend.
// Step 1: Snapshot sandbox configuration so it stays stable for this run.
let config = self
.config
.read()
.map_err(|_| ExecError::rejection("executor config poisoned"))?
.clone();
// Step 2: Normalise parameters via the selected backend.
let backend = backend_for_mode(&request.mode);
let stdout_stream = if backend.stream_stdout(&request.mode) {
request.stdout_stream.clone()
@@ -94,16 +101,9 @@ impl Executor {
None
};
request.params = backend
.prepare(request.params, &request.mode)
.prepare(request.params, &request.mode, &config)
.map_err(ExecError::from)?;
// Step 2: Snapshot sandbox configuration so it stays stable for this run.
let config = self
.config
.read()
.map_err(|_| ExecError::rejection("executor config poisoned"))?
.clone();
// Step 3: Decide sandbox placement, prompting for approval when needed.
let sandbox_decision = select_sandbox(
&request,
@@ -227,7 +227,7 @@ impl Executor {
sandbox,
&config.sandbox_policy,
&config.sandbox_cwd,
&config.codex_linux_sandbox_exe,
&config.codex_exe,
stdout_stream,
)
.await
@@ -380,6 +380,23 @@ mod tests {
assert_eq!(message, "failed in sandbox: sandbox stderr");
}
#[test]
fn sandbox_failure_message_falls_back_to_aggregated_output() {
let output = ExecToolCallOutput {
exit_code: 101,
stdout: StreamOutput::new(String::new()),
stderr: StreamOutput::new(String::new()),
aggregated_output: StreamOutput::new("aggregate text".to_string()),
duration: Duration::from_millis(10),
timed_out: false,
};
let err = SandboxErr::Denied {
output: Box::new(output),
};
let message = sandbox_failure_message(err);
assert_eq!(message, "failed in sandbox: aggregate text");
}
#[test]
fn normalize_function_error_synthesizes_payload() {
let err = FunctionCallError::RespondToModel("boom".to_string());

View File

@@ -0,0 +1,250 @@
//! Centralized feature flags and metadata.
//!
//! This module defines a small set of toggles that gate experimental and
//! optional behavior across the codebase. Instead of wiring individual
//! booleans through multiple types, call sites consult a single `Features`
//! container attached to `Config`.
use crate::config::ConfigToml;
use crate::config_profile::ConfigProfile;
use serde::Deserialize;
use std::collections::BTreeMap;
use std::collections::BTreeSet;
mod legacy;
pub(crate) use legacy::LegacyFeatureToggles;
/// High-level lifecycle stage for a feature.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum Stage {
Experimental,
Beta,
Stable,
Deprecated,
Removed,
}
/// Unique features toggled via configuration.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Hash)]
pub enum Feature {
/// Use the single unified PTY-backed exec tool.
UnifiedExec,
/// Use the streamable exec-command/write-stdin tool pair.
StreamableShell,
/// Use the official Rust MCP client (rmcp).
RmcpClient,
/// Include the plan tool.
PlanTool,
/// Include the freeform apply_patch tool.
ApplyPatchFreeform,
/// Include the view_image tool.
ViewImageTool,
/// Allow the model to request web searches.
WebSearchRequest,
}
impl Feature {
pub fn key(self) -> &'static str {
self.info().key
}
pub fn stage(self) -> Stage {
self.info().stage
}
pub fn default_enabled(self) -> bool {
self.info().default_enabled
}
fn info(self) -> &'static FeatureSpec {
FEATURES
.iter()
.find(|spec| spec.id == self)
.unwrap_or_else(|| unreachable!("missing FeatureSpec for {:?}", self))
}
}
/// Holds the effective set of enabled features.
#[derive(Debug, Clone, Default, PartialEq)]
pub struct Features {
enabled: BTreeSet<Feature>,
}
#[derive(Debug, Clone, Default)]
pub struct FeatureOverrides {
pub include_plan_tool: Option<bool>,
pub include_apply_patch_tool: Option<bool>,
pub include_view_image_tool: Option<bool>,
pub web_search_request: Option<bool>,
}
impl FeatureOverrides {
fn apply(self, features: &mut Features) {
LegacyFeatureToggles {
include_plan_tool: self.include_plan_tool,
include_apply_patch_tool: self.include_apply_patch_tool,
include_view_image_tool: self.include_view_image_tool,
tools_web_search: self.web_search_request,
..Default::default()
}
.apply(features);
}
}
impl Features {
/// Starts with built-in defaults.
pub fn with_defaults() -> Self {
let mut set = BTreeSet::new();
for spec in FEATURES {
if spec.default_enabled {
set.insert(spec.id);
}
}
Self { enabled: set }
}
pub fn enabled(&self, f: Feature) -> bool {
self.enabled.contains(&f)
}
pub fn enable(&mut self, f: Feature) {
self.enabled.insert(f);
}
pub fn disable(&mut self, f: Feature) {
self.enabled.remove(&f);
}
/// Apply a table of key -> bool toggles (e.g. from TOML).
pub fn apply_map(&mut self, m: &BTreeMap<String, bool>) {
for (k, v) in m {
match feature_for_key(k) {
Some(feat) => {
if *v {
self.enable(feat);
} else {
self.disable(feat);
}
}
None => {
tracing::warn!("unknown feature key in config: {k}");
}
}
}
}
pub fn from_config(
cfg: &ConfigToml,
config_profile: &ConfigProfile,
overrides: FeatureOverrides,
) -> Self {
let mut features = Features::with_defaults();
let base_legacy = LegacyFeatureToggles {
experimental_use_freeform_apply_patch: cfg.experimental_use_freeform_apply_patch,
experimental_use_exec_command_tool: cfg.experimental_use_exec_command_tool,
experimental_use_unified_exec_tool: cfg.experimental_use_unified_exec_tool,
experimental_use_rmcp_client: cfg.experimental_use_rmcp_client,
tools_web_search: cfg.tools.as_ref().and_then(|t| t.web_search),
tools_view_image: cfg.tools.as_ref().and_then(|t| t.view_image),
..Default::default()
};
base_legacy.apply(&mut features);
if let Some(base_features) = cfg.features.as_ref() {
features.apply_map(&base_features.entries);
}
let profile_legacy = LegacyFeatureToggles {
include_plan_tool: config_profile.include_plan_tool,
include_apply_patch_tool: config_profile.include_apply_patch_tool,
include_view_image_tool: config_profile.include_view_image_tool,
experimental_use_freeform_apply_patch: config_profile
.experimental_use_freeform_apply_patch,
experimental_use_exec_command_tool: config_profile.experimental_use_exec_command_tool,
experimental_use_unified_exec_tool: config_profile.experimental_use_unified_exec_tool,
experimental_use_rmcp_client: config_profile.experimental_use_rmcp_client,
tools_web_search: config_profile.tools_web_search,
tools_view_image: config_profile.tools_view_image,
};
profile_legacy.apply(&mut features);
if let Some(profile_features) = config_profile.features.as_ref() {
features.apply_map(&profile_features.entries);
}
overrides.apply(&mut features);
features
}
}
/// Keys accepted in `[features]` tables.
fn feature_for_key(key: &str) -> Option<Feature> {
for spec in FEATURES {
if spec.key == key {
return Some(spec.id);
}
}
legacy::feature_for_key(key)
}
/// Deserializable features table for TOML.
#[derive(Deserialize, Debug, Clone, Default, PartialEq)]
pub struct FeaturesToml {
#[serde(flatten)]
pub entries: BTreeMap<String, bool>,
}
/// Single, easy-to-read registry of all feature definitions.
#[derive(Debug, Clone, Copy)]
pub struct FeatureSpec {
pub id: Feature,
pub key: &'static str,
pub stage: Stage,
pub default_enabled: bool,
}
pub const FEATURES: &[FeatureSpec] = &[
FeatureSpec {
id: Feature::UnifiedExec,
key: "unified_exec",
stage: Stage::Experimental,
default_enabled: false,
},
FeatureSpec {
id: Feature::StreamableShell,
key: "streamable_shell",
stage: Stage::Experimental,
default_enabled: false,
},
FeatureSpec {
id: Feature::RmcpClient,
key: "rmcp_client",
stage: Stage::Experimental,
default_enabled: false,
},
FeatureSpec {
id: Feature::PlanTool,
key: "plan_tool",
stage: Stage::Stable,
default_enabled: false,
},
FeatureSpec {
id: Feature::ApplyPatchFreeform,
key: "apply_patch_freeform",
stage: Stage::Beta,
default_enabled: false,
},
FeatureSpec {
id: Feature::ViewImageTool,
key: "view_image_tool",
stage: Stage::Stable,
default_enabled: true,
},
FeatureSpec {
id: Feature::WebSearchRequest,
key: "web_search_request",
stage: Stage::Stable,
default_enabled: false,
},
];
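A short sketch of how call sites use the container (all names are from this file; toggles are applied in the same order as `Features::from_config`: built-in defaults, base config, profile, then explicit overrides):

```rust
use codex_core::features::{Feature, Features};

let mut features = Features::with_defaults();
features.enable(Feature::UnifiedExec); // e.g. `--enable unified_exec`
features.disable(Feature::ViewImageTool); // e.g. `--disable view_image_tool`

assert!(features.enabled(Feature::UnifiedExec));
assert!(!features.enabled(Feature::ViewImageTool));
```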

View File

@@ -0,0 +1,158 @@
use super::Feature;
use super::Features;
use tracing::info;
#[derive(Clone, Copy)]
struct Alias {
legacy_key: &'static str,
feature: Feature,
}
const ALIASES: &[Alias] = &[
Alias {
legacy_key: "experimental_use_unified_exec_tool",
feature: Feature::UnifiedExec,
},
Alias {
legacy_key: "experimental_use_exec_command_tool",
feature: Feature::StreamableShell,
},
Alias {
legacy_key: "experimental_use_rmcp_client",
feature: Feature::RmcpClient,
},
Alias {
legacy_key: "experimental_use_freeform_apply_patch",
feature: Feature::ApplyPatchFreeform,
},
Alias {
legacy_key: "include_apply_patch_tool",
feature: Feature::ApplyPatchFreeform,
},
Alias {
legacy_key: "include_plan_tool",
feature: Feature::PlanTool,
},
Alias {
legacy_key: "include_view_image_tool",
feature: Feature::ViewImageTool,
},
Alias {
legacy_key: "web_search",
feature: Feature::WebSearchRequest,
},
];
pub(crate) fn feature_for_key(key: &str) -> Option<Feature> {
ALIASES
.iter()
.find(|alias| alias.legacy_key == key)
.map(|alias| {
log_alias(alias.legacy_key, alias.feature);
alias.feature
})
}
#[derive(Debug, Default)]
pub struct LegacyFeatureToggles {
pub include_plan_tool: Option<bool>,
pub include_apply_patch_tool: Option<bool>,
pub include_view_image_tool: Option<bool>,
pub experimental_use_freeform_apply_patch: Option<bool>,
pub experimental_use_exec_command_tool: Option<bool>,
pub experimental_use_unified_exec_tool: Option<bool>,
pub experimental_use_rmcp_client: Option<bool>,
pub tools_web_search: Option<bool>,
pub tools_view_image: Option<bool>,
}
impl LegacyFeatureToggles {
pub fn apply(self, features: &mut Features) {
set_if_some(
features,
Feature::PlanTool,
self.include_plan_tool,
"include_plan_tool",
);
set_if_some(
features,
Feature::ApplyPatchFreeform,
self.include_apply_patch_tool,
"include_apply_patch_tool",
);
set_if_some(
features,
Feature::ApplyPatchFreeform,
self.experimental_use_freeform_apply_patch,
"experimental_use_freeform_apply_patch",
);
set_if_some(
features,
Feature::StreamableShell,
self.experimental_use_exec_command_tool,
"experimental_use_exec_command_tool",
);
set_if_some(
features,
Feature::UnifiedExec,
self.experimental_use_unified_exec_tool,
"experimental_use_unified_exec_tool",
);
set_if_some(
features,
Feature::RmcpClient,
self.experimental_use_rmcp_client,
"experimental_use_rmcp_client",
);
set_if_some(
features,
Feature::WebSearchRequest,
self.tools_web_search,
"tools.web_search",
);
set_if_some(
features,
Feature::ViewImageTool,
self.include_view_image_tool,
"include_view_image_tool",
);
set_if_some(
features,
Feature::ViewImageTool,
self.tools_view_image,
"tools.view_image",
);
}
}
fn set_if_some(
features: &mut Features,
feature: Feature,
maybe_value: Option<bool>,
alias_key: &'static str,
) {
if let Some(enabled) = maybe_value {
set_feature(features, feature, enabled);
log_alias(alias_key, feature);
}
}
fn set_feature(features: &mut Features, feature: Feature, enabled: bool) {
if enabled {
features.enable(feature);
} else {
features.disable(feature);
}
}
fn log_alias(alias: &str, feature: Feature) {
let canonical = feature.key();
if alias == canonical {
return;
}
info!(
%alias,
canonical,
"legacy feature toggle detected; prefer `[features].{canonical}`"
);
}
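A unit test one could add to pin the alias mapping (a sketch; `feature_for_key` is crate-private, so it would live in this module):

```rust
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn legacy_keys_resolve_to_canonical_features() {
        assert_eq!(feature_for_key("include_plan_tool"), Some(Feature::PlanTool));
        assert_eq!(feature_for_key("web_search"), Some(Feature::WebSearchRequest));
        assert_eq!(feature_for_key("no_such_toggle"), None);
    }
}
```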

View File

@@ -11,6 +11,8 @@ pub mod bash;
mod chat_completions;
mod client;
mod client_common;
mod codebase_change_notice;
mod codebase_snapshot;
pub mod codex;
mod codex_conversation;
pub mod token_data;
@@ -29,9 +31,11 @@ pub mod exec;
mod exec_command;
pub mod exec_env;
pub mod executor;
pub mod features;
mod flags;
pub mod git_info;
pub mod landlock;
pub mod mcp;
mod mcp_connection_manager;
mod mcp_tool_call;
mod message_history;

View File

@@ -0,0 +1,58 @@
use std::collections::HashMap;
use anyhow::Result;
use codex_protocol::protocol::McpAuthStatus;
use codex_rmcp_client::OAuthCredentialsStoreMode;
use codex_rmcp_client::determine_streamable_http_auth_status;
use futures::future::join_all;
use tracing::warn;
use crate::config_types::McpServerConfig;
use crate::config_types::McpServerTransportConfig;
pub async fn compute_auth_statuses<'a, I>(
servers: I,
store_mode: OAuthCredentialsStoreMode,
) -> HashMap<String, McpAuthStatus>
where
I: IntoIterator<Item = (&'a String, &'a McpServerConfig)>,
{
let futures = servers.into_iter().map(|(name, config)| {
let name = name.clone();
let config = config.clone();
async move {
let status = match compute_auth_status(&name, &config, store_mode).await {
Ok(status) => status,
Err(error) => {
warn!("failed to determine auth status for MCP server `{name}`: {error:?}");
McpAuthStatus::Unsupported
}
};
(name, status)
}
});
join_all(futures).await.into_iter().collect()
}
async fn compute_auth_status(
server_name: &str,
config: &McpServerConfig,
store_mode: OAuthCredentialsStoreMode,
) -> Result<McpAuthStatus> {
match &config.transport {
McpServerTransportConfig::Stdio { .. } => Ok(McpAuthStatus::Unsupported),
McpServerTransportConfig::StreamableHttp {
url,
bearer_token_env_var,
} => {
determine_streamable_http_auth_status(
server_name,
url,
bearer_token_env_var.as_deref(),
store_mode,
)
.await
}
}
}
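Callers can fan out over the whole server map in one await; a sketch (`config` and `store_mode` are assumed to be in scope, and `McpAuthStatus` is assumed to implement `Debug`):

```rust
// Statuses are computed concurrently via join_all and keyed by server name.
let statuses = compute_auth_statuses(config.mcp_servers.iter(), store_mode).await;
if let Some(status) = statuses.get("github") {
    println!("github auth status: {status:?}");
}
```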

View File

@@ -0,0 +1 @@
pub mod auth;

View File

@@ -8,6 +8,7 @@
use std::collections::HashMap;
use std::collections::HashSet;
use std::env;
use std::ffi::OsString;
use std::sync::Arc;
use std::time::Duration;
@@ -16,6 +17,7 @@ use anyhow::Context;
use anyhow::Result;
use anyhow::anyhow;
use codex_mcp_client::McpClient;
use codex_rmcp_client::OAuthCredentialsStoreMode;
use codex_rmcp_client::RmcpClient;
use mcp_types::ClientCapabilities;
use mcp_types::Implementation;
@@ -125,9 +127,11 @@ impl McpClientAdapter {
bearer_token: Option<String>,
params: mcp_types::InitializeRequestParams,
startup_timeout: Duration,
store_mode: OAuthCredentialsStoreMode,
) -> Result<Self> {
let client = Arc::new(
RmcpClient::new_streamable_http_client(&server_name, &url, bearer_token).await?,
RmcpClient::new_streamable_http_client(&server_name, &url, bearer_token, store_mode)
.await?,
);
client.initialize(params, Some(startup_timeout)).await?;
Ok(McpClientAdapter::Rmcp(client))
@@ -182,6 +186,7 @@ impl McpConnectionManager {
pub async fn new(
mcp_servers: HashMap<String, McpServerConfig>,
use_rmcp_client: bool,
store_mode: OAuthCredentialsStoreMode,
) -> Result<(Self, ClientStartErrors)> {
// Early exit if no servers are configured.
if mcp_servers.is_empty() {
@@ -202,9 +207,21 @@ impl McpConnectionManager {
continue;
}
if !cfg.enabled {
continue;
}
let startup_timeout = cfg.startup_timeout_sec.unwrap_or(DEFAULT_STARTUP_TIMEOUT);
let tool_timeout = cfg.tool_timeout_sec.unwrap_or(DEFAULT_TOOL_TIMEOUT);
let resolved_bearer_token = match &cfg.transport {
McpServerTransportConfig::StreamableHttp {
bearer_token_env_var,
..
} => resolve_bearer_token(&server_name, bearer_token_env_var.as_deref()),
_ => Ok(None),
};
join_set.spawn(async move {
let McpServerConfig { transport, .. } = cfg;
let params = mcp_types::InitializeRequestParams {
@@ -242,13 +259,14 @@ impl McpConnectionManager {
)
.await
}
McpServerTransportConfig::StreamableHttp { url, bearer_token } => {
McpServerTransportConfig::StreamableHttp { url, .. } => {
McpClientAdapter::new_streamable_http_client(
server_name.clone(),
url,
bearer_token,
resolved_bearer_token.unwrap_or_default(),
params,
startup_timeout,
store_mode,
)
.await
}
@@ -336,6 +354,33 @@ impl McpConnectionManager {
}
}
fn resolve_bearer_token(
server_name: &str,
bearer_token_env_var: Option<&str>,
) -> Result<Option<String>> {
let Some(env_var) = bearer_token_env_var else {
return Ok(None);
};
match env::var(env_var) {
Ok(value) => {
if value.is_empty() {
Err(anyhow!(
"Environment variable {env_var} for MCP server '{server_name}' is empty"
))
} else {
Ok(Some(value))
}
}
Err(env::VarError::NotPresent) => Err(anyhow!(
"Environment variable {env_var} for MCP server '{server_name}' is not set"
)),
Err(env::VarError::NotUnicode(_)) => Err(anyhow!(
"Environment variable {env_var} for MCP server '{server_name}' contains invalid Unicode"
)),
}
}
/// Query every server for its available tools and return a single map that
/// contains **all** tools. Each key is the fully-qualified name for the tool.
async fn list_all_tools(clients: &HashMap<String, ManagedClient>) -> Result<Vec<ToolInfo>> {

View File

@@ -119,9 +119,10 @@ pub fn find_family_for_model(mut slug: &str) -> Option<ModelFamily> {
reasoning_summary_format: ReasoningSummaryFormat::Experimental,
base_instructions: GPT_5_CODEX_INSTRUCTIONS.to_string(),
experimental_supported_tools: vec![
"read_file".to_string(),
"grep_files".to_string(),
"list_dir".to_string(),
"test_sync_tool".to_string()
"read_file".to_string(),
"test_sync_tool".to_string(),
],
supports_parallel_tool_calls: true,
)
@@ -134,7 +135,11 @@ pub fn find_family_for_model(mut slug: &str) -> Option<ModelFamily> {
reasoning_summary_format: ReasoningSummaryFormat::Experimental,
base_instructions: GPT_5_CODEX_INSTRUCTIONS.to_string(),
apply_patch_tool_type: Some(ApplyPatchToolType::Freeform),
experimental_supported_tools: vec!["read_file".to_string(), "list_dir".to_string()],
experimental_supported_tools: vec![
"grep_files".to_string(),
"list_dir".to_string(),
"read_file".to_string(),
],
supports_parallel_tool_calls: true,
)

View File

@@ -2,6 +2,7 @@
use codex_protocol::models::ResponseItem;
use crate::codebase_snapshot::CodebaseSnapshot;
use crate::conversation_history::ConversationHistory;
use crate::protocol::RateLimitSnapshot;
use crate::protocol::TokenUsage;
@@ -13,6 +14,7 @@ pub(crate) struct SessionState {
pub(crate) history: ConversationHistory,
pub(crate) token_info: Option<TokenUsageInfo>,
pub(crate) latest_rate_limits: Option<RateLimitSnapshot>,
pub(crate) codebase_snapshot: Option<CodebaseSnapshot>,
}
impl SessionState {

View File

@@ -34,6 +34,16 @@ pub(crate) enum TaskKind {
Compact,
}
impl TaskKind {
pub(crate) fn header_value(self) -> &'static str {
match self {
TaskKind::Regular => "standard",
TaskKind::Review => "review",
TaskKind::Compact => "compact",
}
}
}
#[derive(Clone)]
pub(crate) struct RunningTask {
pub(crate) handle: AbortHandle,
@@ -113,3 +123,15 @@ impl ActiveTurn {
}
}
}
#[cfg(test)]
mod tests {
use super::TaskKind;
#[test]
fn header_value_matches_expected_labels() {
assert_eq!(TaskKind::Regular.header_value(), "standard");
assert_eq!(TaskKind::Review.header_value(), "review");
assert_eq!(TaskKind::Compact.header_value(), "compact");
}
}

View File

@@ -27,6 +27,6 @@ impl SessionTask for RegularTask {
input: Vec<InputItem>,
) -> Option<String> {
let sess = session.clone_session();
run_task(sess, ctx, sub_id, input).await
run_task(sess, ctx, sub_id, input, TaskKind::Regular).await
}
}

View File

@@ -28,7 +28,7 @@ impl SessionTask for ReviewTask {
input: Vec<InputItem>,
) -> Option<String> {
let sess = session.clone_session();
run_task(sess, ctx, sub_id, input).await
run_task(sess, ctx, sub_id, input, TaskKind::Review).await
}
async fn abort(&self, session: Arc<SessionTaskContext>, sub_id: &str) {

View File

@@ -0,0 +1,272 @@
use std::path::Path;
use std::time::Duration;
use async_trait::async_trait;
use serde::Deserialize;
use tokio::process::Command;
use tokio::time::timeout;
use crate::function_tool::FunctionCallError;
use crate::tools::context::ToolInvocation;
use crate::tools::context::ToolOutput;
use crate::tools::context::ToolPayload;
use crate::tools::registry::ToolHandler;
use crate::tools::registry::ToolKind;
pub struct GrepFilesHandler;
const DEFAULT_LIMIT: usize = 100;
const MAX_LIMIT: usize = 2000;
const COMMAND_TIMEOUT: Duration = Duration::from_secs(30);
fn default_limit() -> usize {
DEFAULT_LIMIT
}
#[derive(Deserialize)]
struct GrepFilesArgs {
pattern: String,
#[serde(default)]
include: Option<String>,
#[serde(default)]
path: Option<String>,
#[serde(default = "default_limit")]
limit: usize,
}
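The wire format is plain JSON; a sketch of what the handler parses (field names match `GrepFilesArgs` above):

```rust
// `include` and `path` are optional; `limit` falls back to DEFAULT_LIMIT (100).
let args: GrepFilesArgs = serde_json::from_str(
    r#"{"pattern": "fn main", "include": "*.rs", "limit": 50}"#,
)
.expect("valid grep_files arguments");
assert_eq!(args.limit, 50);
```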
#[async_trait]
impl ToolHandler for GrepFilesHandler {
fn kind(&self) -> ToolKind {
ToolKind::Function
}
async fn handle(&self, invocation: ToolInvocation) -> Result<ToolOutput, FunctionCallError> {
let ToolInvocation { payload, turn, .. } = invocation;
let arguments = match payload {
ToolPayload::Function { arguments } => arguments,
_ => {
return Err(FunctionCallError::RespondToModel(
"grep_files handler received unsupported payload".to_string(),
));
}
};
let args: GrepFilesArgs = serde_json::from_str(&arguments).map_err(|err| {
FunctionCallError::RespondToModel(format!(
"failed to parse function arguments: {err:?}"
))
})?;
let pattern = args.pattern.trim();
if pattern.is_empty() {
return Err(FunctionCallError::RespondToModel(
"pattern must not be empty".to_string(),
));
}
if args.limit == 0 {
return Err(FunctionCallError::RespondToModel(
"limit must be greater than zero".to_string(),
));
}
let limit = args.limit.min(MAX_LIMIT);
let search_path = turn.resolve_path(args.path.clone());
verify_path_exists(&search_path).await?;
let include = args.include.as_deref().map(str::trim).and_then(|val| {
if val.is_empty() {
None
} else {
Some(val.to_string())
}
});
let search_results =
run_rg_search(pattern, include.as_deref(), &search_path, limit, &turn.cwd).await?;
if search_results.is_empty() {
Ok(ToolOutput::Function {
content: "No matches found.".to_string(),
success: Some(false),
})
} else {
Ok(ToolOutput::Function {
content: search_results.join("\n"),
success: Some(true),
})
}
}
}
async fn verify_path_exists(path: &Path) -> Result<(), FunctionCallError> {
tokio::fs::metadata(path).await.map_err(|err| {
FunctionCallError::RespondToModel(format!("unable to access `{}`: {err}", path.display()))
})?;
Ok(())
}
async fn run_rg_search(
pattern: &str,
include: Option<&str>,
search_path: &Path,
limit: usize,
cwd: &Path,
) -> Result<Vec<String>, FunctionCallError> {
let mut command = Command::new("rg");
command
.current_dir(cwd)
.arg("--files-with-matches")
.arg("--sortr=modified")
.arg("--regexp")
.arg(pattern)
.arg("--no-messages");
if let Some(glob) = include {
command.arg("--glob").arg(glob);
}
command.arg("--").arg(search_path);
let output = timeout(COMMAND_TIMEOUT, command.output())
.await
.map_err(|_| {
FunctionCallError::RespondToModel("rg timed out after 30 seconds".to_string())
})?
.map_err(|err| {
FunctionCallError::RespondToModel(format!(
"failed to launch rg: {err}. Ensure ripgrep is installed and on PATH."
))
})?;
match output.status.code() {
Some(0) => Ok(parse_results(&output.stdout, limit)),
Some(1) => Ok(Vec::new()),
_ => {
let stderr = String::from_utf8_lossy(&output.stderr);
Err(FunctionCallError::RespondToModel(format!(
"rg failed: {stderr}"
)))
}
}
}
fn parse_results(stdout: &[u8], limit: usize) -> Vec<String> {
let mut results = Vec::new();
for line in stdout.split(|byte| *byte == b'\n') {
if line.is_empty() {
continue;
}
if let Ok(text) = std::str::from_utf8(line) {
if text.is_empty() {
continue;
}
results.push(text.to_string());
if results.len() == limit {
break;
}
}
}
results
}
#[cfg(test)]
mod tests {
use super::*;
use std::process::Command as StdCommand;
use tempfile::tempdir;
#[test]
fn parses_basic_results() {
let stdout = b"/tmp/file_a.rs\n/tmp/file_b.rs\n";
let parsed = parse_results(stdout, 10);
assert_eq!(
parsed,
vec!["/tmp/file_a.rs".to_string(), "/tmp/file_b.rs".to_string()]
);
}
#[test]
fn parse_truncates_after_limit() {
let stdout = b"/tmp/file_a.rs\n/tmp/file_b.rs\n/tmp/file_c.rs\n";
let parsed = parse_results(stdout, 2);
assert_eq!(
parsed,
vec!["/tmp/file_a.rs".to_string(), "/tmp/file_b.rs".to_string()]
);
}
#[tokio::test]
async fn run_search_returns_results() -> anyhow::Result<()> {
if !rg_available() {
return Ok(());
}
let temp = tempdir().expect("create temp dir");
let dir = temp.path();
std::fs::write(dir.join("match_one.txt"), "alpha beta gamma").unwrap();
std::fs::write(dir.join("match_two.txt"), "alpha delta").unwrap();
std::fs::write(dir.join("other.txt"), "omega").unwrap();
let results = run_rg_search("alpha", None, dir, 10, dir).await?;
assert_eq!(results.len(), 2);
assert!(results.iter().any(|path| path.ends_with("match_one.txt")));
assert!(results.iter().any(|path| path.ends_with("match_two.txt")));
Ok(())
}
#[tokio::test]
async fn run_search_with_glob_filter() -> anyhow::Result<()> {
if !rg_available() {
return Ok(());
}
let temp = tempdir().expect("create temp dir");
let dir = temp.path();
std::fs::write(dir.join("match_one.rs"), "alpha beta gamma").unwrap();
std::fs::write(dir.join("match_two.txt"), "alpha delta").unwrap();
let results = run_rg_search("alpha", Some("*.rs"), dir, 10, dir).await?;
assert_eq!(results.len(), 1);
assert!(results.iter().all(|path| path.ends_with("match_one.rs")));
Ok(())
}
#[tokio::test]
async fn run_search_respects_limit() -> anyhow::Result<()> {
if !rg_available() {
return Ok(());
}
let temp = tempdir().expect("create temp dir");
let dir = temp.path();
std::fs::write(dir.join("one.txt"), "alpha one").unwrap();
std::fs::write(dir.join("two.txt"), "alpha two").unwrap();
std::fs::write(dir.join("three.txt"), "alpha three").unwrap();
let results = run_rg_search("alpha", None, dir, 2, dir).await?;
assert_eq!(results.len(), 2);
Ok(())
}
#[tokio::test]
async fn run_search_handles_no_matches() -> anyhow::Result<()> {
if !rg_available() {
return Ok(());
}
let temp = tempdir().expect("create temp dir");
let dir = temp.path();
std::fs::write(dir.join("one.txt"), "omega").unwrap();
let results = run_rg_search("alpha", None, dir, 5, dir).await?;
assert!(results.is_empty());
Ok(())
}
fn rg_available() -> bool {
StdCommand::new("rg")
.arg("--version")
.output()
.map(|output| output.status.success())
.unwrap_or(false)
}
}

View File

@@ -1,5 +1,6 @@
pub mod apply_patch;
mod exec_stream;
mod grep_files;
mod list_dir;
mod mcp;
mod plan;
@@ -13,6 +14,7 @@ pub use plan::PLAN_TOOL;
pub use apply_patch::ApplyPatchHandler;
pub use exec_stream::ExecStreamHandler;
pub use grep_files::GrepFilesHandler;
pub use list_dir::ListDirHandler;
pub use mcp::McpHandler;
pub use plan::PlanHandler;

File diff suppressed because it is too large

View File

@@ -1,5 +1,7 @@
use crate::client_common::tools::ResponsesApiTool;
use crate::client_common::tools::ToolSpec;
use crate::features::Feature;
use crate::features::Features;
use crate::model_family::ModelFamily;
use crate::tools::handlers::PLAN_TOOL;
use crate::tools::handlers::apply_patch::ApplyPatchToolType;
@@ -33,26 +35,23 @@ pub(crate) struct ToolsConfig {
pub(crate) struct ToolsConfigParams<'a> {
pub(crate) model_family: &'a ModelFamily,
pub(crate) include_plan_tool: bool,
pub(crate) include_apply_patch_tool: bool,
pub(crate) include_web_search_request: bool,
pub(crate) use_streamable_shell_tool: bool,
pub(crate) include_view_image_tool: bool,
pub(crate) experimental_unified_exec_tool: bool,
pub(crate) features: &'a Features,
}
impl ToolsConfig {
pub fn new(params: &ToolsConfigParams) -> Self {
let ToolsConfigParams {
model_family,
include_plan_tool,
include_apply_patch_tool,
include_web_search_request,
use_streamable_shell_tool,
include_view_image_tool,
experimental_unified_exec_tool,
features,
} = params;
let shell_type = if *use_streamable_shell_tool {
let use_streamable_shell_tool = features.enabled(Feature::StreamableShell);
let experimental_unified_exec_tool = features.enabled(Feature::UnifiedExec);
let include_plan_tool = features.enabled(Feature::PlanTool);
let include_apply_patch_tool = features.enabled(Feature::ApplyPatchFreeform);
let include_web_search_request = features.enabled(Feature::WebSearchRequest);
let include_view_image_tool = features.enabled(Feature::ViewImageTool);
let shell_type = if use_streamable_shell_tool {
ConfigShellToolType::Streamable
} else if model_family.uses_local_shell_tool {
ConfigShellToolType::Local
@@ -64,7 +63,7 @@ impl ToolsConfig {
Some(ApplyPatchToolType::Freeform) => Some(ApplyPatchToolType::Freeform),
Some(ApplyPatchToolType::Function) => Some(ApplyPatchToolType::Function),
None => {
if *include_apply_patch_tool {
if include_apply_patch_tool {
Some(ApplyPatchToolType::Freeform)
} else {
None
@@ -74,11 +73,11 @@ impl ToolsConfig {
Self {
shell_type,
plan_tool: *include_plan_tool,
plan_tool: include_plan_tool,
apply_patch_tool_type,
web_search_request: *include_web_search_request,
include_view_image_tool: *include_view_image_tool,
experimental_unified_exec_tool: *experimental_unified_exec_tool,
web_search_request: include_web_search_request,
include_view_image_tool,
experimental_unified_exec_tool,
experimental_supported_tools: model_family.experimental_supported_tools.clone(),
}
}
@@ -320,6 +319,56 @@ fn create_test_sync_tool() -> ToolSpec {
})
}
fn create_grep_files_tool() -> ToolSpec {
let mut properties = BTreeMap::new();
properties.insert(
"pattern".to_string(),
JsonSchema::String {
description: Some("Regular expression pattern to search for.".to_string()),
},
);
properties.insert(
"include".to_string(),
JsonSchema::String {
description: Some(
"Optional glob that limits which files are searched (e.g. \"*.rs\" or \
\"*.{ts,tsx}\")."
.to_string(),
),
},
);
properties.insert(
"path".to_string(),
JsonSchema::String {
description: Some(
"Directory or file path to search. Defaults to the session's working directory."
.to_string(),
),
},
);
properties.insert(
"limit".to_string(),
JsonSchema::Number {
description: Some(
"Maximum number of file paths to return (defaults to 100).".to_string(),
),
},
);
ToolSpec::Function(ResponsesApiTool {
name: "grep_files".to_string(),
description: "Finds files whose contents match the pattern and lists them by modification \
time."
.to_string(),
strict: false,
parameters: JsonSchema::Object {
properties,
required: Some(vec!["pattern".to_string()]),
additional_properties: Some(false.into()),
},
})
}
fn create_read_file_tool() -> ToolSpec {
let mut properties = BTreeMap::new();
properties.insert(
@@ -342,11 +391,72 @@ fn create_read_file_tool() -> ToolSpec {
description: Some("The maximum number of lines to return.".to_string()),
},
);
properties.insert(
"mode".to_string(),
JsonSchema::String {
description: Some(
"Optional mode selector: \"slice\" for simple ranges (default) or \"indentation\" \
to expand around an anchor line."
.to_string(),
),
},
);
let mut indentation_properties = BTreeMap::new();
indentation_properties.insert(
"anchor_line".to_string(),
JsonSchema::Number {
description: Some(
"Anchor line to center the indentation lookup on (defaults to offset).".to_string(),
),
},
);
indentation_properties.insert(
"max_levels".to_string(),
JsonSchema::Number {
description: Some(
"How many parent indentation levels (smaller indents) to include.".to_string(),
),
},
);
indentation_properties.insert(
"include_siblings".to_string(),
JsonSchema::Boolean {
description: Some(
"When true, include additional blocks that share the anchor indentation."
.to_string(),
),
},
);
indentation_properties.insert(
"include_header".to_string(),
JsonSchema::Boolean {
description: Some(
"Include doc comments or attributes directly above the selected block.".to_string(),
),
},
);
indentation_properties.insert(
"max_lines".to_string(),
JsonSchema::Number {
description: Some(
"Hard cap on the number of lines returned when using indentation mode.".to_string(),
),
},
);
properties.insert(
"indentation".to_string(),
JsonSchema::Object {
properties: indentation_properties,
required: None,
additional_properties: Some(false.into()),
},
);
ToolSpec::Function(ResponsesApiTool {
name: "read_file".to_string(),
description:
"Reads a local file with 1-indexed line numbers and returns up to the requested number of lines."
"Reads a local file with 1-indexed line numbers, supporting slice and indentation-aware block modes."
.to_string(),
strict: false,
parameters: JsonSchema::Object {
@@ -610,6 +720,7 @@ pub(crate) fn build_specs(
use crate::exec_command::create_write_stdin_tool_for_responses_api;
use crate::tools::handlers::ApplyPatchHandler;
use crate::tools::handlers::ExecStreamHandler;
use crate::tools::handlers::GrepFilesHandler;
use crate::tools::handlers::ListDirHandler;
use crate::tools::handlers::McpHandler;
use crate::tools::handlers::PlanHandler;
@@ -678,8 +789,16 @@ pub(crate) fn build_specs(
if config
.experimental_supported_tools
.iter()
.any(|tool| tool == "read_file")
.contains(&"grep_files".to_string())
{
let grep_files_handler = Arc::new(GrepFilesHandler);
builder.push_spec_with_parallel_support(create_grep_files_tool(), true);
builder.register_handler("grep_files", grep_files_handler);
}
if config
.experimental_supported_tools
.contains(&"read_file".to_string())
{
let read_file_handler = Arc::new(ReadFileHandler);
builder.push_spec_with_parallel_support(create_read_file_tool(), true);
@@ -698,8 +817,7 @@ pub(crate) fn build_specs(
if config
.experimental_supported_tools
.iter()
.any(|tool| tool == "test_sync_tool")
.contains(&"test_sync_tool".to_string())
{
let test_sync_handler = Arc::new(TestSyncHandler);
builder.push_spec_with_parallel_support(create_test_sync_tool(), true);
@@ -787,14 +905,13 @@ mod tests {
fn test_build_specs() {
let model_family = find_family_for_model("codex-mini-latest")
.expect("codex-mini-latest should be a valid model family");
let mut features = Features::with_defaults();
features.enable(Feature::PlanTool);
features.enable(Feature::WebSearchRequest);
features.enable(Feature::UnifiedExec);
let config = ToolsConfig::new(&ToolsConfigParams {
model_family: &model_family,
include_plan_tool: true,
include_apply_patch_tool: false,
include_web_search_request: true,
use_streamable_shell_tool: false,
include_view_image_tool: true,
experimental_unified_exec_tool: true,
features: &features,
});
let (tools, _) = build_specs(&config, Some(HashMap::new())).build();
@@ -807,14 +924,13 @@ mod tests {
#[test]
fn test_build_specs_default_shell() {
let model_family = find_family_for_model("o3").expect("o3 should be a valid model family");
let mut features = Features::with_defaults();
features.enable(Feature::PlanTool);
features.enable(Feature::WebSearchRequest);
features.enable(Feature::UnifiedExec);
let config = ToolsConfig::new(&ToolsConfigParams {
model_family: &model_family,
include_plan_tool: true,
include_apply_patch_tool: false,
include_web_search_request: true,
use_streamable_shell_tool: false,
include_view_image_tool: true,
experimental_unified_exec_tool: true,
features: &features,
});
let (tools, _) = build_specs(&config, Some(HashMap::new())).build();
@@ -829,34 +945,30 @@ mod tests {
fn test_parallel_support_flags() {
let model_family = find_family_for_model("gpt-5-codex")
.expect("codex-mini-latest should be a valid model family");
let mut features = Features::with_defaults();
features.disable(Feature::ViewImageTool);
features.enable(Feature::UnifiedExec);
let config = ToolsConfig::new(&ToolsConfigParams {
model_family: &model_family,
include_plan_tool: false,
include_apply_patch_tool: false,
include_web_search_request: false,
use_streamable_shell_tool: false,
include_view_image_tool: false,
experimental_unified_exec_tool: true,
features: &features,
});
let (tools, _) = build_specs(&config, None).build();
assert!(!find_tool(&tools, "unified_exec").supports_parallel_tool_calls);
assert!(find_tool(&tools, "read_file").supports_parallel_tool_calls);
assert!(find_tool(&tools, "grep_files").supports_parallel_tool_calls);
assert!(find_tool(&tools, "list_dir").supports_parallel_tool_calls);
assert!(find_tool(&tools, "read_file").supports_parallel_tool_calls);
}
#[test]
fn test_test_model_family_includes_sync_tool() {
let model_family = find_family_for_model("test-gpt-5-codex")
.expect("test-gpt-5-codex should be a valid model family");
let mut features = Features::with_defaults();
features.disable(Feature::ViewImageTool);
let config = ToolsConfig::new(&ToolsConfigParams {
model_family: &model_family,
include_plan_tool: false,
include_apply_patch_tool: false,
include_web_search_request: false,
use_streamable_shell_tool: false,
include_view_image_tool: false,
experimental_unified_exec_tool: false,
features: &features,
});
let (tools, _) = build_specs(&config, None).build();
@@ -870,20 +982,23 @@ mod tests {
.iter()
.any(|tool| tool_name(&tool.spec) == "read_file")
);
assert!(
tools
.iter()
.any(|tool| tool_name(&tool.spec) == "grep_files")
);
assert!(tools.iter().any(|tool| tool_name(&tool.spec) == "list_dir"));
}
#[test]
fn test_build_specs_mcp_tools() {
let model_family = find_family_for_model("o3").expect("o3 should be a valid model family");
let mut features = Features::with_defaults();
features.enable(Feature::UnifiedExec);
features.enable(Feature::WebSearchRequest);
let config = ToolsConfig::new(&ToolsConfigParams {
model_family: &model_family,
include_plan_tool: false,
include_apply_patch_tool: false,
include_web_search_request: true,
use_streamable_shell_tool: false,
include_view_image_tool: true,
experimental_unified_exec_tool: true,
features: &features,
});
let (tools, _) = build_specs(
&config,
@@ -981,14 +1096,11 @@ mod tests {
#[test]
fn test_build_specs_mcp_tools_sorted_by_name() {
let model_family = find_family_for_model("o3").expect("o3 should be a valid model family");
let mut features = Features::with_defaults();
features.enable(Feature::UnifiedExec);
let config = ToolsConfig::new(&ToolsConfigParams {
model_family: &model_family,
include_plan_tool: false,
include_apply_patch_tool: false,
include_web_search_request: false,
use_streamable_shell_tool: false,
include_view_image_tool: true,
experimental_unified_exec_tool: true,
features: &features,
});
// Intentionally construct a map with keys that would sort alphabetically.
@@ -1058,14 +1170,12 @@ mod tests {
fn test_mcp_tool_property_missing_type_defaults_to_string() {
let model_family = find_family_for_model("gpt-5-codex")
.expect("gpt-5-codex should be a valid model family");
let mut features = Features::with_defaults();
features.enable(Feature::UnifiedExec);
features.enable(Feature::WebSearchRequest);
let config = ToolsConfig::new(&ToolsConfigParams {
model_family: &model_family,
include_plan_tool: false,
include_apply_patch_tool: false,
include_web_search_request: true,
use_streamable_shell_tool: false,
include_view_image_tool: true,
experimental_unified_exec_tool: true,
features: &features,
});
let (tools, _) = build_specs(
@@ -1127,14 +1237,12 @@ mod tests {
fn test_mcp_tool_integer_normalized_to_number() {
let model_family = find_family_for_model("gpt-5-codex")
.expect("gpt-5-codex should be a valid model family");
let mut features = Features::with_defaults();
features.enable(Feature::UnifiedExec);
features.enable(Feature::WebSearchRequest);
let config = ToolsConfig::new(&ToolsConfigParams {
model_family: &model_family,
include_plan_tool: false,
include_apply_patch_tool: false,
include_web_search_request: true,
use_streamable_shell_tool: false,
include_view_image_tool: true,
experimental_unified_exec_tool: true,
features: &features,
});
let (tools, _) = build_specs(
@@ -1191,14 +1299,13 @@ mod tests {
fn test_mcp_tool_array_without_items_gets_default_string_items() {
let model_family = find_family_for_model("gpt-5-codex")
.expect("gpt-5-codex should be a valid model family");
let mut features = Features::with_defaults();
features.enable(Feature::UnifiedExec);
features.enable(Feature::WebSearchRequest);
features.enable(Feature::ApplyPatchFreeform);
let config = ToolsConfig::new(&ToolsConfigParams {
model_family: &model_family,
include_plan_tool: false,
include_apply_patch_tool: true,
include_web_search_request: true,
use_streamable_shell_tool: false,
include_view_image_tool: true,
experimental_unified_exec_tool: true,
features: &features,
});
let (tools, _) = build_specs(
@@ -1258,14 +1365,12 @@ mod tests {
fn test_mcp_tool_anyof_defaults_to_string() {
let model_family = find_family_for_model("gpt-5-codex")
.expect("gpt-5-codex should be a valid model family");
let mut features = Features::with_defaults();
features.enable(Feature::UnifiedExec);
features.enable(Feature::WebSearchRequest);
let config = ToolsConfig::new(&ToolsConfigParams {
model_family: &model_family,
include_plan_tool: false,
include_apply_patch_tool: false,
include_web_search_request: true,
use_streamable_shell_tool: false,
include_view_image_tool: true,
experimental_unified_exec_tool: true,
features: &features,
});
let (tools, _) = build_specs(
@@ -1337,14 +1442,12 @@ mod tests {
fn test_get_openai_tools_mcp_tools_with_additional_properties_schema() {
let model_family = find_family_for_model("gpt-5-codex")
.expect("gpt-5-codex should be a valid model family");
let mut features = Features::with_defaults();
features.enable(Feature::UnifiedExec);
features.enable(Feature::WebSearchRequest);
let config = ToolsConfig::new(&ToolsConfigParams {
model_family: &model_family,
include_plan_tool: false,
include_apply_patch_tool: false,
include_web_search_request: true,
use_streamable_shell_tool: false,
include_view_image_tool: true,
experimental_unified_exec_tool: true,
features: &features,
});
let (tools, _) = build_specs(
&config,

View File

@@ -110,11 +110,22 @@ impl ManagedUnifiedExecSession {
let buffer_clone = Arc::clone(&output_buffer);
let notify_clone = Arc::clone(&output_notify);
let output_task = tokio::spawn(async move {
while let Ok(chunk) = receiver.recv().await {
let mut guard = buffer_clone.lock().await;
guard.push_chunk(chunk);
drop(guard);
notify_clone.notify_waiters();
loop {
match receiver.recv().await {
Ok(chunk) => {
let mut guard = buffer_clone.lock().await;
guard.push_chunk(chunk);
drop(guard);
notify_clone.notify_waiters();
}
// If we lag behind the broadcast buffer, skip missed
// messages but keep the task alive to continue streaming.
Err(tokio::sync::broadcast::error::RecvError::Lagged(_)) => {
continue;
}
// When the sender closes, exit the task.
Err(tokio::sync::broadcast::error::RecvError::Closed) => break,
}
}
});
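The `Lagged` arm matches tokio's broadcast semantics: a slow receiver observes one `Lagged(n)` error, then resumes from the oldest message still buffered. A standalone sketch:

```rust
use tokio::sync::broadcast;

#[tokio::main]
async fn main() {
    let (tx, mut rx) = broadcast::channel(2);
    for i in 0..4 {
        tx.send(i).unwrap();
    }
    // The two oldest values were overwritten, so the receiver sees the
    // lag once and then continues with what is still retained.
    assert!(matches!(
        rx.recv().await,
        Err(broadcast::error::RecvError::Lagged(2))
    ));
    assert_eq!(rx.recv().await.unwrap(), 2);
}
```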

View File

@@ -49,6 +49,7 @@ impl UserNotifier {
pub(crate) enum UserNotification {
#[serde(rename_all = "kebab-case")]
AgentTurnComplete {
thread_id: String,
turn_id: String,
/// Messages that the user sent to the agent to initiate the turn.
@@ -67,6 +68,7 @@ mod tests {
#[test]
fn test_user_notification() -> Result<()> {
let notification = UserNotification::AgentTurnComplete {
thread_id: "b5f6c1c2-1111-2222-3333-444455556666".to_string(),
turn_id: "12345".to_string(),
input_messages: vec!["Rename `foo` to `bar` and update the callsites.".to_string()],
last_assistant_message: Some(
@@ -76,7 +78,7 @@ mod tests {
let serialized = serde_json::to_string(&notification)?;
assert_eq!(
serialized,
r#"{"type":"agent-turn-complete","turn-id":"12345","input-messages":["Rename `foo` to `bar` and update the callsites."],"last-assistant-message":"Rename complete and verified `cargo build` succeeds."}"#
r#"{"type":"agent-turn-complete","thread-id":"b5f6c1c2-1111-2222-3333-444455556666","turn-id":"12345","input-messages":["Rename `foo` to `bar` and update the callsites."],"last-assistant-message":"Rename complete and verified `cargo build` succeeds."}"#
);
Ok(())
}

View File

@@ -10,8 +10,10 @@ path = "lib.rs"
anyhow = { workspace = true }
assert_cmd = { workspace = true }
codex-core = { workspace = true }
notify = { workspace = true }
regex-lite = { workspace = true }
serde_json = { workspace = true }
tempfile = { workspace = true }
tokio = { workspace = true, features = ["time"] }
walkdir = { workspace = true }
wiremock = { workspace = true }

View File

@@ -164,6 +164,149 @@ pub fn sandbox_network_env_var() -> &'static str {
codex_core::spawn::CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR
}
pub mod fs_wait {
use anyhow::Result;
use anyhow::anyhow;
use notify::RecursiveMode;
use notify::Watcher;
use std::path::Path;
use std::path::PathBuf;
use std::sync::mpsc;
use std::sync::mpsc::RecvTimeoutError;
use std::time::Duration;
use std::time::Instant;
use tokio::task;
use walkdir::WalkDir;
pub async fn wait_for_path_exists(
path: impl Into<PathBuf>,
timeout: Duration,
) -> Result<PathBuf> {
let path = path.into();
task::spawn_blocking(move || wait_for_path_exists_blocking(path, timeout)).await?
}
pub async fn wait_for_matching_file(
root: impl Into<PathBuf>,
timeout: Duration,
predicate: impl FnMut(&Path) -> bool + Send + 'static,
) -> Result<PathBuf> {
let root = root.into();
task::spawn_blocking(move || {
let mut predicate = predicate;
blocking_find_matching_file(root, timeout, &mut predicate)
})
.await?
}
fn wait_for_path_exists_blocking(path: PathBuf, timeout: Duration) -> Result<PathBuf> {
if path.exists() {
return Ok(path);
}
let watch_root = nearest_existing_ancestor(&path);
let (tx, rx) = mpsc::channel();
let mut watcher = notify::recommended_watcher(move |res| {
let _ = tx.send(res);
})?;
watcher.watch(&watch_root, RecursiveMode::Recursive)?;
let deadline = Instant::now() + timeout;
loop {
if path.exists() {
return Ok(path.clone());
}
let now = Instant::now();
if now >= deadline {
break;
}
let remaining = deadline.saturating_duration_since(now);
match rx.recv_timeout(remaining) {
Ok(Ok(_event)) => {
if path.exists() {
return Ok(path.clone());
}
}
Ok(Err(err)) => return Err(err.into()),
Err(RecvTimeoutError::Timeout) => break,
Err(RecvTimeoutError::Disconnected) => break,
}
}
if path.exists() {
Ok(path)
} else {
Err(anyhow!("timed out waiting for {:?}", path))
}
}
fn blocking_find_matching_file(
root: PathBuf,
timeout: Duration,
predicate: &mut impl FnMut(&Path) -> bool,
) -> Result<PathBuf> {
let root = wait_for_path_exists_blocking(root, timeout)?;
if let Some(found) = scan_for_match(&root, predicate) {
return Ok(found);
}
let (tx, rx) = mpsc::channel();
let mut watcher = notify::recommended_watcher(move |res| {
let _ = tx.send(res);
})?;
watcher.watch(&root, RecursiveMode::Recursive)?;
let deadline = Instant::now() + timeout;
while Instant::now() < deadline {
let remaining = deadline.saturating_duration_since(Instant::now());
match rx.recv_timeout(remaining) {
Ok(Ok(_event)) => {
if let Some(found) = scan_for_match(&root, predicate) {
return Ok(found);
}
}
Ok(Err(err)) => return Err(err.into()),
Err(RecvTimeoutError::Timeout) => break,
Err(RecvTimeoutError::Disconnected) => break,
}
}
if let Some(found) = scan_for_match(&root, predicate) {
Ok(found)
} else {
Err(anyhow!("timed out waiting for matching file in {:?}", root))
}
}
fn scan_for_match(root: &Path, predicate: &mut impl FnMut(&Path) -> bool) -> Option<PathBuf> {
for entry in WalkDir::new(root).into_iter().filter_map(Result::ok) {
let path = entry.path();
if !entry.file_type().is_file() {
continue;
}
if predicate(path) {
return Some(path.to_path_buf());
}
}
None
}
fn nearest_existing_ancestor(path: &Path) -> PathBuf {
let mut current = path;
loop {
if current.exists() {
return current.to_path_buf();
}
match current.parent() {
Some(parent) => current = parent,
None => return PathBuf::from("."),
}
}
}
}
#[macro_export]
macro_rules! skip_if_sandbox {
() => {{

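A hedged usage sketch of the new `fs_wait` helpers, mirroring how the rollout tests below call them (assumes `tokio`, `tempfile`, and `anyhow` as dev-dependencies):

```rust
use std::time::Duration;

use core_test_support::fs_wait;

#[tokio::test]
async fn waits_for_session_file() -> anyhow::Result<()> {
    let home = tempfile::TempDir::new()?;
    let sessions_dir = home.path().join("sessions");
    std::fs::create_dir_all(&sessions_dir)?;
    std::fs::write(sessions_dir.join("rollout.jsonl"), "{}\n")?;

    // Resolves as soon as a .jsonl file exists under sessions_dir (here,
    // immediately), or errors once the timeout elapses.
    let found = fs_wait::wait_for_matching_file(&sessions_dir, Duration::from_secs(10), |p| {
        p.extension().and_then(|ext| ext.to_str()) == Some("jsonl")
    })
    .await?;
    assert!(found.ends_with("rollout.jsonl"));
    Ok(())
}
```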
View File

@@ -1,4 +1,5 @@
use std::mem::swap;
use std::path::PathBuf;
use std::sync::Arc;
use codex_core::CodexAuth;
@@ -39,6 +40,12 @@ impl TestCodexBuilder {
let mut config = load_default_config_for_test(&home);
config.cwd = cwd.path().to_path_buf();
config.model_provider = model_provider;
config.codex_linux_sandbox_exe = Some(PathBuf::from(
assert_cmd::Command::cargo_bin("codex")?
.get_program()
.to_os_string(),
));
let mut mutators = vec![];
swap(&mut self.config_mutators, &mut mutators);

View File

@@ -0,0 +1,102 @@
use std::sync::Arc;
use codex_app_server_protocol::AuthMode;
use codex_core::ContentItem;
use codex_core::ModelClient;
use codex_core::ModelProviderInfo;
use codex_core::Prompt;
use codex_core::ResponseEvent;
use codex_core::ResponseItem;
use codex_core::WireApi;
use codex_otel::otel_event_manager::OtelEventManager;
use codex_protocol::ConversationId;
use core_test_support::load_default_config_for_test;
use core_test_support::responses;
use futures::StreamExt;
use tempfile::TempDir;
use wiremock::matchers::header;
#[tokio::test]
async fn responses_stream_includes_task_type_header() {
core_test_support::skip_if_no_network!();
let server = responses::start_mock_server().await;
let response_body = responses::sse(vec![
responses::ev_response_created("resp-1"),
responses::ev_completed("resp-1"),
]);
let request_recorder = responses::mount_sse_once_match(
&server,
header("Codex-Task-Type", "standard"),
response_body,
)
.await;
let provider = ModelProviderInfo {
name: "mock".into(),
base_url: Some(format!("{}/v1", server.uri())),
env_key: None,
env_key_instructions: None,
wire_api: WireApi::Responses,
query_params: None,
http_headers: None,
env_http_headers: None,
request_max_retries: Some(0),
stream_max_retries: Some(0),
stream_idle_timeout_ms: Some(5_000),
requires_openai_auth: false,
};
let codex_home = TempDir::new().expect("failed to create TempDir");
let mut config = load_default_config_for_test(&codex_home);
config.model_provider_id = provider.name.clone();
config.model_provider = provider.clone();
let effort = config.model_reasoning_effort;
let summary = config.model_reasoning_summary;
let config = Arc::new(config);
let conversation_id = ConversationId::new();
let otel_event_manager = OtelEventManager::new(
conversation_id,
config.model.as_str(),
config.model_family.slug.as_str(),
None,
Some(AuthMode::ChatGPT),
false,
"test".to_string(),
);
let client = ModelClient::new(
Arc::clone(&config),
None,
otel_event_manager,
provider,
effort,
summary,
conversation_id,
);
let mut prompt = Prompt::default();
prompt.input = vec![ResponseItem::Message {
id: None,
role: "user".into(),
content: vec![ContentItem::InputText {
text: "hello".into(),
}],
}];
let mut stream = client.stream(&prompt).await.expect("stream failed");
while let Some(event) = stream.next().await {
if matches!(event, Ok(ResponseEvent::Completed { .. })) {
break;
}
}
let request = request_recorder.single_request();
assert_eq!(
request.header("Codex-Task-Type").as_deref(),
Some("standard")
);
}

View File

@@ -1,12 +1,11 @@
use assert_cmd::Command as AssertCommand;
use codex_core::RolloutRecorder;
use codex_core::protocol::GitInfo;
use core_test_support::fs_wait;
use core_test_support::skip_if_no_network;
use std::time::Duration;
use std::time::Instant;
use tempfile::TempDir;
use uuid::Uuid;
use walkdir::WalkDir;
use wiremock::Mock;
use wiremock::MockServer;
use wiremock::ResponseTemplate;
@@ -211,12 +210,12 @@ async fn responses_api_stream_cli() {
/// End-to-end: create a session (writes rollout), verify the file, then resume and confirm append.
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn integration_creates_and_checks_session_file() {
async fn integration_creates_and_checks_session_file() -> anyhow::Result<()> {
// Honor sandbox network restrictions for CI parity with the other tests.
skip_if_no_network!();
skip_if_no_network!(Ok(()));
// 1. Temp home so we read/write isolated session files.
let home = TempDir::new().unwrap();
let home = TempDir::new()?;
// 2. Unique marker we'll look for in the session log.
let marker = format!("integration-test-{}", Uuid::new_v4());
@@ -254,63 +253,20 @@ async fn integration_creates_and_checks_session_file() {
// Wait for sessions dir to appear.
let sessions_dir = home.path().join("sessions");
let dir_deadline = Instant::now() + Duration::from_secs(5);
while !sessions_dir.exists() && Instant::now() < dir_deadline {
std::thread::sleep(Duration::from_millis(50));
}
assert!(sessions_dir.exists(), "sessions directory never appeared");
fs_wait::wait_for_path_exists(&sessions_dir, Duration::from_secs(5)).await?;
// Find the session file that contains `marker`.
let deadline = Instant::now() + Duration::from_secs(10);
let mut matching_path: Option<std::path::PathBuf> = None;
while Instant::now() < deadline && matching_path.is_none() {
for entry in WalkDir::new(&sessions_dir) {
let entry = match entry {
Ok(e) => e,
Err(_) => continue,
};
if !entry.file_type().is_file() {
continue;
}
if !entry.file_name().to_string_lossy().ends_with(".jsonl") {
continue;
}
let path = entry.path();
let Ok(content) = std::fs::read_to_string(path) else {
continue;
};
let mut lines = content.lines();
if lines.next().is_none() {
continue;
}
for line in lines {
if line.trim().is_empty() {
continue;
}
let item: serde_json::Value = match serde_json::from_str(line) {
Ok(v) => v,
Err(_) => continue,
};
if item.get("type").and_then(|t| t.as_str()) == Some("response_item")
&& let Some(payload) = item.get("payload")
&& payload.get("type").and_then(|t| t.as_str()) == Some("message")
&& let Some(c) = payload.get("content")
&& c.to_string().contains(&marker)
{
matching_path = Some(path.to_path_buf());
break;
}
}
let marker_clone = marker.clone();
let path = fs_wait::wait_for_matching_file(&sessions_dir, Duration::from_secs(10), move |p| {
if p.extension().and_then(|ext| ext.to_str()) != Some("jsonl") {
return false;
}
if matching_path.is_none() {
std::thread::sleep(Duration::from_millis(50));
}
}
let path = match matching_path {
Some(p) => p,
None => panic!("No session file containing the marker was found"),
};
let Ok(content) = std::fs::read_to_string(p) else {
return false;
};
content.contains(&marker_clone)
})
.await?;
// Basic sanity checks on location and metadata.
let rel = match path.strip_prefix(&sessions_dir) {
@@ -418,42 +374,25 @@ async fn integration_creates_and_checks_session_file() {
assert!(output2.status.success(), "resume codex-cli run failed");
// Find the new session file containing the resumed marker.
let deadline = Instant::now() + Duration::from_secs(10);
let mut resumed_path: Option<std::path::PathBuf> = None;
while Instant::now() < deadline && resumed_path.is_none() {
for entry in WalkDir::new(&sessions_dir) {
let entry = match entry {
Ok(e) => e,
Err(_) => continue,
};
if !entry.file_type().is_file() {
continue;
let marker2_clone = marker2.clone();
let resumed_path =
fs_wait::wait_for_matching_file(&sessions_dir, Duration::from_secs(10), move |p| {
if p.extension().and_then(|ext| ext.to_str()) != Some("jsonl") {
return false;
}
if !entry.file_name().to_string_lossy().ends_with(".jsonl") {
continue;
}
let p = entry.path();
let Ok(c) = std::fs::read_to_string(p) else {
continue;
};
if c.contains(&marker2) {
resumed_path = Some(p.to_path_buf());
break;
}
}
if resumed_path.is_none() {
std::thread::sleep(Duration::from_millis(50));
}
}
std::fs::read_to_string(p)
.map(|content| content.contains(&marker2_clone))
.unwrap_or(false)
})
.await?;
let resumed_path = resumed_path.expect("No resumed session file found containing the marker2");
// Resume should write to the existing log file.
assert_eq!(
resumed_path, path,
"resume should create a new session file"
);
let resumed_content = std::fs::read_to_string(&resumed_path).unwrap();
let resumed_content = std::fs::read_to_string(&resumed_path)?;
assert!(
resumed_content.contains(&marker),
"resumed file missing original marker"
@@ -462,6 +401,7 @@ async fn integration_creates_and_checks_session_file() {
resumed_content.contains(&marker2),
"resumed file missing resumed marker"
);
Ok(())
}
/// Integration test to verify git info is collected and recorded in session files.

View File

@@ -22,6 +22,7 @@ use core_test_support::responses::ev_function_call;
use core_test_support::responses::mount_sse_once_match;
use core_test_support::responses::mount_sse_sequence;
use core_test_support::responses::sse;
use core_test_support::responses::sse_failed;
use core_test_support::responses::start_mock_server;
use pretty_assertions::assert_eq;
// --- Test helpers -----------------------------------------------------------
@@ -38,6 +39,8 @@ const SECOND_LARGE_REPLY: &str = "SECOND_LARGE_REPLY";
const FIRST_AUTO_SUMMARY: &str = "FIRST_AUTO_SUMMARY";
const SECOND_AUTO_SUMMARY: &str = "SECOND_AUTO_SUMMARY";
const FINAL_REPLY: &str = "FINAL_REPLY";
const CONTEXT_LIMIT_MESSAGE: &str =
"Your input exceeds the context window of this model. Please adjust your input and try again.";
const DUMMY_FUNCTION_NAME: &str = "unsupported_tool";
const DUMMY_CALL_ID: &str = "call-multi-auto";
@@ -622,6 +625,130 @@ async fn auto_compact_stops_after_failed_attempt() {
);
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn manual_compact_retries_after_context_window_error() {
skip_if_no_network!();
let server = start_mock_server().await;
let user_turn = sse(vec![
ev_assistant_message("m1", FIRST_REPLY),
ev_completed("r1"),
]);
let compact_failed = sse_failed(
"resp-fail",
"context_length_exceeded",
CONTEXT_LIMIT_MESSAGE,
);
let compact_succeeds = sse(vec![
ev_assistant_message("m2", SUMMARY_TEXT),
ev_completed("r2"),
]);
let request_log = mount_sse_sequence(
&server,
vec![
user_turn.clone(),
compact_failed.clone(),
compact_succeeds.clone(),
],
)
.await;
let model_provider = ModelProviderInfo {
base_url: Some(format!("{}/v1", server.uri())),
..built_in_model_providers()["openai"].clone()
};
let home = TempDir::new().unwrap();
let mut config = load_default_config_for_test(&home);
config.model_provider = model_provider;
config.model_auto_compact_token_limit = Some(200_000);
let codex = ConversationManager::with_auth(CodexAuth::from_api_key("dummy"))
.new_conversation(config)
.await
.unwrap()
.conversation;
codex
.submit(Op::UserInput {
items: vec![InputItem::Text {
text: "first turn".into(),
}],
})
.await
.unwrap();
wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
codex.submit(Op::Compact).await.unwrap();
let EventMsg::BackgroundEvent(event) =
wait_for_event(&codex, |ev| matches!(ev, EventMsg::BackgroundEvent(_))).await
else {
panic!("expected background event after compact retry");
};
assert!(
event.message.contains("Trimmed 1 older conversation item"),
"background event should mention trimmed item count: {}",
event.message
);
wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
let requests = request_log.requests();
assert_eq!(
requests.len(),
3,
"expected user turn and two compact attempts"
);
let compact_attempt = requests[1].body_json();
let retry_attempt = requests[2].body_json();
let compact_input = compact_attempt["input"]
.as_array()
.unwrap_or_else(|| panic!("compact attempt missing input array: {compact_attempt}"));
let retry_input = retry_attempt["input"]
.as_array()
.unwrap_or_else(|| panic!("retry attempt missing input array: {retry_attempt}"));
assert_eq!(
compact_input
.last()
.and_then(|item| item.get("content"))
.and_then(|v| v.as_array())
.and_then(|items| items.first())
.and_then(|entry| entry.get("text"))
.and_then(|text| text.as_str()),
Some(SUMMARIZATION_PROMPT),
"compact attempt should include summarization prompt"
);
assert_eq!(
retry_input
.last()
.and_then(|item| item.get("content"))
.and_then(|v| v.as_array())
.and_then(|items| items.first())
.and_then(|entry| entry.get("text"))
.and_then(|text| text.as_str()),
Some(SUMMARIZATION_PROMPT),
"retry attempt should include summarization prompt"
);
assert_eq!(
retry_input.len(),
compact_input.len().saturating_sub(1),
"retry should drop exactly one history item (before {} vs after {})",
compact_input.len(),
retry_input.len()
);
if let (Some(first_before), Some(first_after)) = (compact_input.first(), retry_input.first()) {
assert_ne!(
first_before, first_after,
"retry should drop the oldest conversation item"
);
} else {
panic!("expected non-empty compact inputs");
}
}
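The assertions above pin down the retry policy: on a context-window error during compaction, drop the oldest history item and resend once. A generic sketch of that trimming step (a hypothetical helper, not the crate's code):

```rust
// Drop the oldest conversation item before retrying a compact request.
fn drop_oldest<T>(mut history: Vec<T>) -> Vec<T> {
    if !history.is_empty() {
        history.remove(0);
    }
    history
}

fn main() {
    let before = vec!["turn-1", "turn-2", "summary-prompt"];
    let after = drop_oldest(before.clone());
    // Mirrors the test: exactly one item shorter, oldest item gone.
    assert_eq!(after.len(), before.len() - 1);
    assert_ne!(after.first(), before.first());
}
```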
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn auto_compact_allows_multiple_attempts_when_interleaved_with_other_turn_events() {
skip_if_no_network!();

View File

@@ -0,0 +1,237 @@
#![cfg(not(target_os = "windows"))]
use anyhow::Result;
use codex_core::model_family::find_family_for_model;
use codex_core::protocol::AskForApproval;
use codex_core::protocol::EventMsg;
use codex_core::protocol::InputItem;
use codex_core::protocol::Op;
use codex_core::protocol::SandboxPolicy;
use codex_protocol::config_types::ReasoningSummary;
use core_test_support::responses;
use core_test_support::responses::ev_assistant_message;
use core_test_support::responses::ev_completed;
use core_test_support::responses::ev_function_call;
use core_test_support::responses::ev_response_created;
use core_test_support::responses::sse;
use core_test_support::responses::start_mock_server;
use core_test_support::skip_if_no_network;
use core_test_support::test_codex::TestCodex;
use core_test_support::test_codex::test_codex;
use core_test_support::wait_for_event;
use serde_json::Value;
use std::collections::HashSet;
use std::path::Path;
use std::process::Command as StdCommand;
use wiremock::matchers::any;
const MODEL_WITH_TOOL: &str = "test-gpt-5-codex";
fn ripgrep_available() -> bool {
StdCommand::new("rg")
.arg("--version")
.output()
.map(|output| output.status.success())
.unwrap_or(false)
}
macro_rules! skip_if_ripgrep_missing {
($ret:expr $(,)?) => {{
if !ripgrep_available() {
eprintln!("rg not available in PATH; skipping test");
return $ret;
}
}};
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn grep_files_tool_collects_matches() -> Result<()> {
skip_if_no_network!(Ok(()));
skip_if_ripgrep_missing!(Ok(()));
let server = start_mock_server().await;
let test = build_test_codex(&server).await?;
let search_dir = test.cwd.path().join("src");
std::fs::create_dir_all(&search_dir)?;
let alpha = search_dir.join("alpha.rs");
let beta = search_dir.join("beta.rs");
let gamma = search_dir.join("gamma.txt");
std::fs::write(&alpha, "alpha needle\n")?;
std::fs::write(&beta, "beta needle\n")?;
std::fs::write(&gamma, "needle in text but excluded\n")?;
let call_id = "grep-files-collect";
let arguments = serde_json::json!({
"pattern": "needle",
"path": search_dir.to_string_lossy(),
"include": "*.rs",
})
.to_string();
mount_tool_sequence(&server, call_id, &arguments, "grep_files").await;
submit_turn(&test, "please find uses of needle").await?;
let bodies = recorded_bodies(&server).await?;
let tool_output = find_tool_output(&bodies, call_id).expect("tool output present");
let payload = tool_output.get("output").expect("output field present");
let (content_opt, success_opt) = extract_content_and_success(payload);
let content = content_opt.expect("content present");
let success = success_opt.unwrap_or(true);
assert!(success, "expected success for matches, got {payload:?}");
let entries = collect_file_names(content);
assert_eq!(entries.len(), 2, "content: {content}");
assert!(
entries.contains("alpha.rs"),
"missing alpha.rs in {entries:?}"
);
assert!(
entries.contains("beta.rs"),
"missing beta.rs in {entries:?}"
);
assert!(
!entries.contains("gamma.txt"),
"txt file should be filtered out: {entries:?}"
);
Ok(())
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn grep_files_tool_reports_empty_results() -> Result<()> {
skip_if_no_network!(Ok(()));
skip_if_ripgrep_missing!(Ok(()));
let server = start_mock_server().await;
let test = build_test_codex(&server).await?;
let search_dir = test.cwd.path().join("logs");
std::fs::create_dir_all(&search_dir)?;
std::fs::write(search_dir.join("output.txt"), "no hits here")?;
let call_id = "grep-files-empty";
let arguments = serde_json::json!({
"pattern": "needle",
"path": search_dir.to_string_lossy(),
"limit": 5,
})
.to_string();
mount_tool_sequence(&server, call_id, &arguments, "grep_files").await;
submit_turn(&test, "search again").await?;
let bodies = recorded_bodies(&server).await?;
let tool_output = find_tool_output(&bodies, call_id).expect("tool output present");
let payload = tool_output.get("output").expect("output field present");
let (content_opt, success_opt) = extract_content_and_success(payload);
let content = content_opt.expect("content present");
if let Some(success) = success_opt {
assert!(!success, "expected success=false payload: {payload:?}");
}
assert_eq!(content, "No matches found.");
Ok(())
}
#[allow(clippy::expect_used)]
async fn build_test_codex(server: &wiremock::MockServer) -> Result<TestCodex> {
let mut builder = test_codex().with_config(|config| {
config.model = MODEL_WITH_TOOL.to_string();
config.model_family =
find_family_for_model(MODEL_WITH_TOOL).expect("model family for test model");
});
builder.build(server).await
}
async fn submit_turn(test: &TestCodex, prompt: &str) -> Result<()> {
let session_model = test.session_configured.model.clone();
test.codex
.submit(Op::UserTurn {
items: vec![InputItem::Text {
text: prompt.into(),
}],
final_output_json_schema: None,
cwd: test.cwd.path().to_path_buf(),
approval_policy: AskForApproval::Never,
sandbox_policy: SandboxPolicy::DangerFullAccess,
model: session_model,
effort: None,
summary: ReasoningSummary::Auto,
})
.await?;
wait_for_event(&test.codex, |event| {
matches!(event, EventMsg::TaskComplete(_))
})
.await;
Ok(())
}
async fn mount_tool_sequence(
server: &wiremock::MockServer,
call_id: &str,
arguments: &str,
tool_name: &str,
) {
let first_response = sse(vec![
ev_response_created("resp-1"),
ev_function_call(call_id, tool_name, arguments),
ev_completed("resp-1"),
]);
responses::mount_sse_once_match(server, any(), first_response).await;
let second_response = sse(vec![
ev_assistant_message("msg-1", "done"),
ev_completed("resp-2"),
]);
responses::mount_sse_once_match(server, any(), second_response).await;
}
#[allow(clippy::expect_used)]
async fn recorded_bodies(server: &wiremock::MockServer) -> Result<Vec<Value>> {
let requests = server.received_requests().await.expect("requests recorded");
Ok(requests
.iter()
.map(|req| req.body_json::<Value>().expect("request json"))
.collect())
}
fn find_tool_output<'a>(requests: &'a [Value], call_id: &str) -> Option<&'a Value> {
requests.iter().find_map(|body| {
body.get("input")
.and_then(Value::as_array)
.and_then(|items| {
items.iter().find(|item| {
item.get("type").and_then(Value::as_str) == Some("function_call_output")
&& item.get("call_id").and_then(Value::as_str) == Some(call_id)
})
})
})
}
fn collect_file_names(content: &str) -> HashSet<String> {
content
.lines()
.filter_map(|line| {
if line.trim().is_empty() {
return None;
}
Path::new(line)
.file_name()
.map(|name| name.to_string_lossy().into_owned())
})
.collect()
}
fn extract_content_and_success(value: &Value) -> (Option<&str>, Option<bool>) {
match value {
Value::String(text) => (Some(text.as_str()), None),
Value::Object(obj) => (
obj.get("content").and_then(Value::as_str),
obj.get("success").and_then(Value::as_bool),
),
_ => (None, None),
}
}
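For reference, the two payload shapes `extract_content_and_success` accepts, as a small illustrative check:

```rust
use serde_json::json;

fn main() {
    // Bare string payload: content only, no success flag.
    let plain = json!("No matches found.");
    // Structured payload: content plus an explicit success flag.
    let structured = json!({ "content": "alpha.rs\nbeta.rs", "success": true });

    assert!(plain.as_str().is_some());
    assert_eq!(structured["success"].as_bool(), Some(true));
}
```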

View File

@@ -9,6 +9,7 @@ mod compact_resume_fork;
mod exec;
mod exec_stream_events;
mod fork_conversation;
mod grep_files;
mod json_result;
mod list_dir;
mod live_cli;

View File

@@ -4,6 +4,7 @@ use codex_core::CodexAuth;
use codex_core::ConversationManager;
use codex_core::ModelProviderInfo;
use codex_core::built_in_model_providers;
use codex_core::features::Feature;
use codex_core::model_family::find_family_for_model;
use codex_core::protocol::EventMsg;
use codex_core::protocol::InputItem;
@@ -56,12 +57,12 @@ async fn collect_tool_identifiers_for_model(model: &str) -> Vec<String> {
config.model = model.to_string();
config.model_family =
find_family_for_model(model).unwrap_or_else(|| panic!("unknown model family for {model}"));
config.include_plan_tool = false;
config.include_apply_patch_tool = false;
config.include_view_image_tool = false;
config.tools_web_search_request = false;
config.use_experimental_streamable_shell_tool = false;
config.use_experimental_unified_exec_tool = false;
config.features.disable(Feature::PlanTool);
config.features.disable(Feature::ApplyPatchFreeform);
config.features.disable(Feature::ViewImageTool);
config.features.disable(Feature::WebSearchRequest);
config.features.disable(Feature::StreamableShell);
config.features.disable(Feature::UnifiedExec);
let conversation_manager =
ConversationManager::with_auth(CodexAuth::from_api_key("Test API Key"));
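This migration replaces per-tool config booleans with a central feature set. A minimal sketch of such a toggle type (an assumption for illustration; `codex_core`'s real `Features` likely also tracks per-model defaults):

```rust
use std::collections::HashSet;

#[derive(Clone, Copy, PartialEq, Eq, Hash)]
enum Feature {
    PlanTool,
    ViewImageTool,
    UnifiedExec,
}

#[derive(Default)]
struct Features(HashSet<Feature>);

impl Features {
    fn enable(&mut self, feature: Feature) {
        self.0.insert(feature);
    }
    fn disable(&mut self, feature: Feature) {
        self.0.remove(&feature);
    }
    fn enabled(&self, feature: Feature) -> bool {
        self.0.contains(&feature)
    }
}

fn main() {
    let mut features = Features::default();
    features.enable(Feature::PlanTool);
    features.disable(Feature::UnifiedExec);
    assert!(features.enabled(Feature::PlanTool));
    assert!(!features.enabled(Feature::UnifiedExec));
}
```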

View File

@@ -5,6 +5,7 @@ use codex_core::ConversationManager;
use codex_core::ModelProviderInfo;
use codex_core::built_in_model_providers;
use codex_core::config::OPENAI_DEFAULT_MODEL;
use codex_core::features::Feature;
use codex_core::model_family::find_family_for_model;
use codex_core::protocol::AskForApproval;
use codex_core::protocol::EventMsg;
@@ -99,10 +100,10 @@ async fn codex_mini_latest_tools() {
config.cwd = cwd.path().to_path_buf();
config.model_provider = model_provider;
config.user_instructions = Some("be consistent and helpful".to_string());
config.features.disable(Feature::ApplyPatchFreeform);
let conversation_manager =
ConversationManager::with_auth(CodexAuth::from_api_key("Test API Key"));
config.include_apply_patch_tool = false;
config.model = "codex-mini-latest".to_string();
config.model_family = find_family_for_model("codex-mini-latest").unwrap();
@@ -185,7 +186,7 @@ async fn prompt_tools_are_consistent_across_requests() {
config.cwd = cwd.path().to_path_buf();
config.model_provider = model_provider;
config.user_instructions = Some("be consistent and helpful".to_string());
config.include_plan_tool = true;
config.features.enable(Feature::PlanTool);
let conversation_manager =
ConversationManager::with_auth(CodexAuth::from_api_key("Test API Key"));

View File

@@ -9,6 +9,7 @@ use std::time::UNIX_EPOCH;
use codex_core::config_types::McpServerConfig;
use codex_core::config_types::McpServerTransportConfig;
use codex_core::features::Feature;
use codex_core::protocol::AskForApproval;
use codex_core::protocol::EventMsg;
@@ -74,7 +75,7 @@ async fn stdio_server_round_trip() -> anyhow::Result<()> {
let fixture = test_codex()
.with_config(move |config| {
config.use_experimental_use_rmcp_client = true;
config.features.enable(Feature::RmcpClient);
config.mcp_servers.insert(
server_name.to_string(),
McpServerConfig {
@@ -86,6 +87,7 @@ async fn stdio_server_round_trip() -> anyhow::Result<()> {
expected_env_value.to_string(),
)])),
},
enabled: true,
startup_timeout_sec: Some(Duration::from_secs(10)),
tool_timeout_sec: None,
},
@@ -226,14 +228,15 @@ async fn streamable_http_tool_call_round_trip() -> anyhow::Result<()> {
let fixture = test_codex()
.with_config(move |config| {
config.use_experimental_use_rmcp_client = true;
config.features.enable(Feature::RmcpClient);
config.mcp_servers.insert(
server_name.to_string(),
McpServerConfig {
transport: McpServerTransportConfig::StreamableHttp {
url: server_url,
bearer_token: None,
bearer_token_env_var: None,
},
enabled: true,
startup_timeout_sec: Some(Duration::from_secs(10)),
tool_timeout_sec: None,
},
@@ -406,14 +409,15 @@ async fn streamable_http_with_oauth_round_trip() -> anyhow::Result<()> {
let fixture = test_codex()
.with_config(move |config| {
config.use_experimental_use_rmcp_client = true;
config.features.enable(Feature::RmcpClient);
config.mcp_servers.insert(
server_name.to_string(),
McpServerConfig {
transport: McpServerTransportConfig::StreamableHttp {
url: server_url,
bearer_token: None,
bearer_token_env_var: None,
},
enabled: true,
startup_timeout_sec: Some(Duration::from_secs(10)),
tool_timeout_sec: None,
},

View File

@@ -1,6 +1,7 @@
#![cfg(not(target_os = "windows"))]
use anyhow::Result;
use codex_core::features::Feature;
use codex_core::model_family::find_family_for_model;
use codex_core::protocol::AskForApproval;
use codex_core::protocol::EventMsg;
@@ -9,9 +10,12 @@ use codex_core::protocol::Op;
use codex_core::protocol::SandboxPolicy;
use codex_protocol::config_types::ReasoningSummary;
use core_test_support::assert_regex_match;
use core_test_support::responses::ev_apply_patch_function_call;
use core_test_support::responses::ev_assistant_message;
use core_test_support::responses::ev_completed;
use core_test_support::responses::ev_custom_tool_call;
use core_test_support::responses::ev_function_call;
use core_test_support::responses::ev_local_shell_call;
use core_test_support::responses::ev_response_created;
use core_test_support::responses::mount_sse_sequence;
use core_test_support::responses::sse;
@@ -20,8 +24,11 @@ use core_test_support::skip_if_no_network;
use core_test_support::test_codex::TestCodex;
use core_test_support::test_codex::test_codex;
use core_test_support::wait_for_event;
use pretty_assertions::assert_eq;
use regex_lite::Regex;
use serde_json::Value;
use serde_json::json;
use std::fs;
async fn submit_turn(test: &TestCodex, prompt: &str, sandbox_policy: SandboxPolicy) -> Result<()> {
let session_model = test.session_configured.model.clone();
@@ -71,13 +78,28 @@ fn find_function_call_output<'a>(bodies: &'a [Value], call_id: &str) -> Option<&
None
}
fn find_custom_tool_call_output<'a>(bodies: &'a [Value], call_id: &str) -> Option<&'a Value> {
for body in bodies {
if let Some(items) = body.get("input").and_then(Value::as_array) {
for item in items {
if item.get("type").and_then(Value::as_str) == Some("custom_tool_call_output")
&& item.get("call_id").and_then(Value::as_str) == Some(call_id)
{
return Some(item);
}
}
}
}
None
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn shell_output_stays_json_without_freeform_apply_patch() -> Result<()> {
skip_if_no_network!(Ok(()));
let server = start_mock_server().await;
let mut builder = test_codex().with_config(|config| {
config.include_apply_patch_tool = false;
config.features.disable(Feature::ApplyPatchFreeform);
config.model = "gpt-5".to_string();
config.model_family = find_family_for_model("gpt-5").expect("gpt-5 is a model family");
});
@@ -119,7 +141,12 @@ async fn shell_output_stays_json_without_freeform_apply_patch() -> Result<()> {
.and_then(Value::as_str)
.expect("shell output string");
let parsed: Value = serde_json::from_str(output)?;
let mut parsed: Value = serde_json::from_str(output)?;
if let Some(metadata) = parsed.get_mut("metadata").and_then(Value::as_object_mut) {
// duration_seconds is non-deterministic; remove it for deep equality
let _ = metadata.remove("duration_seconds");
}
assert_eq!(
parsed
.get("metadata")
@@ -143,7 +170,7 @@ async fn shell_output_is_structured_with_freeform_apply_patch() -> Result<()> {
let server = start_mock_server().await;
let mut builder = test_codex().with_config(|config| {
config.include_apply_patch_tool = true;
config.features.enable(Feature::ApplyPatchFreeform);
});
let test = builder.build(&server).await?;
@@ -198,6 +225,83 @@ freeform shell
Ok(())
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn shell_output_for_freeform_tool_records_duration() -> Result<()> {
skip_if_no_network!(Ok(()));
let server = start_mock_server().await;
let mut builder = test_codex().with_config(|config| {
config.include_apply_patch_tool = true;
});
let test = builder.build(&server).await?;
#[cfg(target_os = "linux")]
let sleep_cmd = vec!["/bin/bash", "-c", "sleep 1"];
#[cfg(target_os = "macos")]
let sleep_cmd = vec!["/bin/bash", "-c", "sleep 1"];
#[cfg(windows)]
let sleep_cmd = "timeout 1";
let call_id = "shell-structured";
let args = json!({
"command": sleep_cmd,
"timeout_ms": 2_000,
});
let responses = vec![
sse(vec![
json!({"type": "response.created", "response": {"id": "resp-1"}}),
ev_function_call(call_id, "shell", &serde_json::to_string(&args)?),
ev_completed("resp-1"),
]),
sse(vec![
ev_assistant_message("msg-1", "done"),
ev_completed("resp-2"),
]),
];
mount_sse_sequence(&server, responses).await;
submit_turn(
&test,
"run the structured shell command",
SandboxPolicy::DangerFullAccess,
)
.await?;
let requests = server
.received_requests()
.await
.expect("recorded requests present");
let bodies = request_bodies(&requests)?;
let output_item =
find_function_call_output(&bodies, call_id).expect("structured output present");
let output = output_item
.get("output")
.and_then(Value::as_str)
.expect("structured output string");
let expected_pattern = r#"(?s)^Exit code: 0
Wall time: [0-9]+(?:\.[0-9]+)? seconds
Output:
$"#;
assert_regex_match(expected_pattern, output);
let wall_time_regex = Regex::new(r"(?m)^Wall time: ([0-9]+(?:\.[0-9]+)?) seconds$")
.expect("compile wall time regex");
let wall_time_seconds = wall_time_regex
.captures(output)
.and_then(|caps| caps.get(1))
.and_then(|value| value.as_str().parse::<f32>().ok())
.expect("expected structured shell output to contain wall time seconds");
assert!(
wall_time_seconds > 0.5,
"expected wall time to exceed 0.5 seconds, got {wall_time_seconds}"
);
Ok(())
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn shell_output_reserializes_truncated_content() -> Result<()> {
skip_if_no_network!(Ok(()));
@@ -213,7 +317,7 @@ async fn shell_output_reserializes_truncated_content() -> Result<()> {
let call_id = "shell-truncated";
let args = json!({
"command": ["/bin/sh", "-c", "seq 1 400"],
"timeout_ms": 1_000,
"timeout_ms": 5_000,
});
let responses = vec![
sse(vec![
@@ -275,3 +379,428 @@ $"#;
Ok(())
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn apply_patch_custom_tool_output_is_structured() -> Result<()> {
skip_if_no_network!(Ok(()));
let server = start_mock_server().await;
let mut builder = test_codex().with_config(|config| {
config.include_apply_patch_tool = true;
});
let test = builder.build(&server).await?;
let call_id = "apply-patch-structured";
let file_name = "structured.txt";
let patch = format!(
r#"*** Begin Patch
*** Add File: {file_name}
+from custom tool
*** End Patch
"#
);
let responses = vec![
sse(vec![
json!({"type": "response.created", "response": {"id": "resp-1"}}),
ev_custom_tool_call(call_id, "apply_patch", &patch),
ev_completed("resp-1"),
]),
sse(vec![
ev_assistant_message("msg-1", "done"),
ev_completed("resp-2"),
]),
];
mount_sse_sequence(&server, responses).await;
submit_turn(
&test,
"apply the patch via custom tool",
SandboxPolicy::DangerFullAccess,
)
.await?;
let requests = server
.received_requests()
.await
.expect("recorded requests present");
let bodies = request_bodies(&requests)?;
let output_item =
find_custom_tool_call_output(&bodies, call_id).expect("apply_patch output present");
let output = output_item
.get("output")
.and_then(Value::as_str)
.expect("apply_patch output string");
let expected_pattern = format!(
r"(?s)^Exit code: 0
Wall time: [0-9]+(?:\.[0-9]+)? seconds
Output:
Success. Updated the following files:
A {file_name}
?$"
);
assert_regex_match(&expected_pattern, output);
Ok(())
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn apply_patch_custom_tool_call_creates_file() -> Result<()> {
skip_if_no_network!(Ok(()));
let server = start_mock_server().await;
let mut builder = test_codex().with_config(|config| {
config.include_apply_patch_tool = true;
});
let test = builder.build(&server).await?;
let call_id = "apply-patch-add-file";
let file_name = "custom_tool_apply_patch.txt";
let patch = format!(
"*** Begin Patch\n*** Add File: {file_name}\n+custom tool content\n*** End Patch\n"
);
let responses = vec![
sse(vec![
json!({"type": "response.created", "response": {"id": "resp-1"}}),
ev_custom_tool_call(call_id, "apply_patch", &patch),
ev_completed("resp-1"),
]),
sse(vec![
ev_assistant_message("msg-1", "apply_patch done"),
ev_completed("resp-2"),
]),
];
mount_sse_sequence(&server, responses).await;
submit_turn(
&test,
"apply the patch via custom tool to create a file",
SandboxPolicy::DangerFullAccess,
)
.await?;
let requests = server
.received_requests()
.await
.expect("recorded requests present");
let bodies = request_bodies(&requests)?;
let output_item =
find_custom_tool_call_output(&bodies, call_id).expect("apply_patch output present");
let output = output_item
.get("output")
.and_then(Value::as_str)
.expect("apply_patch output string");
let expected_pattern = format!(
r"(?s)^Exit code: 0
Wall time: [0-9]+(?:\.[0-9]+)? seconds
Output:
Success. Updated the following files:
A {file_name}
?$"
);
assert_regex_match(&expected_pattern, output);
let new_file_path = test.cwd.path().join(file_name);
let created_contents = fs::read_to_string(&new_file_path)?;
assert_eq!(
created_contents, "custom tool content\n",
"expected file contents for {file_name}"
);
Ok(())
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn apply_patch_custom_tool_call_updates_existing_file() -> Result<()> {
skip_if_no_network!(Ok(()));
let server = start_mock_server().await;
let mut builder = test_codex().with_config(|config| {
config.include_apply_patch_tool = true;
});
let test = builder.build(&server).await?;
let call_id = "apply-patch-update-file";
let file_name = "custom_tool_apply_patch_existing.txt";
let file_path = test.cwd.path().join(file_name);
fs::write(&file_path, "before\n")?;
let patch = format!(
"*** Begin Patch\n*** Update File: {file_name}\n@@\n-before\n+after\n*** End Patch\n"
);
let responses = vec![
sse(vec![
json!({"type": "response.created", "response": {"id": "resp-1"}}),
ev_custom_tool_call(call_id, "apply_patch", &patch),
ev_completed("resp-1"),
]),
sse(vec![
ev_assistant_message("msg-1", "apply_patch update done"),
ev_completed("resp-2"),
]),
];
mount_sse_sequence(&server, responses).await;
submit_turn(
&test,
"apply the patch via custom tool to update a file",
SandboxPolicy::DangerFullAccess,
)
.await?;
let requests = server
.received_requests()
.await
.expect("recorded requests present");
let bodies = request_bodies(&requests)?;
let output_item =
find_custom_tool_call_output(&bodies, call_id).expect("apply_patch output present");
let output = output_item
.get("output")
.and_then(Value::as_str)
.expect("apply_patch output string");
let expected_pattern = format!(
r"(?s)^Exit code: 0
Wall time: [0-9]+(?:\.[0-9]+)? seconds
Output:
Success. Updated the following files:
M {file_name}
?$"
);
assert_regex_match(&expected_pattern, output);
let updated_contents = fs::read_to_string(file_path)?;
assert_eq!(updated_contents, "after\n", "expected updated file content");
Ok(())
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn apply_patch_custom_tool_call_reports_failure_output() -> Result<()> {
skip_if_no_network!(Ok(()));
let server = start_mock_server().await;
let mut builder = test_codex().with_config(|config| {
config.include_apply_patch_tool = true;
});
let test = builder.build(&server).await?;
let call_id = "apply-patch-failure";
let missing_file = "missing_custom_tool_apply_patch.txt";
let patch = format!(
"*** Begin Patch\n*** Update File: {missing_file}\n@@\n-before\n+after\n*** End Patch\n"
);
let responses = vec![
sse(vec![
json!({"type": "response.created", "response": {"id": "resp-1"}}),
ev_custom_tool_call(call_id, "apply_patch", &patch),
ev_completed("resp-1"),
]),
sse(vec![
ev_assistant_message("msg-1", "apply_patch failure done"),
ev_completed("resp-2"),
]),
];
mount_sse_sequence(&server, responses).await;
submit_turn(
&test,
"attempt a failing apply_patch via custom tool",
SandboxPolicy::DangerFullAccess,
)
.await?;
let requests = server
.received_requests()
.await
.expect("recorded requests present");
let bodies = request_bodies(&requests)?;
let output_item =
find_custom_tool_call_output(&bodies, call_id).expect("apply_patch output present");
let output = output_item
.get("output")
.and_then(Value::as_str)
.expect("apply_patch output string");
let expected_output = format!(
"apply_patch verification failed: Failed to read file to update {}/{missing_file}: No such file or directory (os error 2)",
test.cwd.path().to_string_lossy()
);
assert_eq!(output, expected_output);
Ok(())
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn apply_patch_function_call_output_is_structured() -> Result<()> {
skip_if_no_network!(Ok(()));
let server = start_mock_server().await;
let mut builder = test_codex().with_config(|config| {
config.include_apply_patch_tool = true;
});
let test = builder.build(&server).await?;
let call_id = "apply-patch-function";
let file_name = "function_apply_patch.txt";
let patch =
format!("*** Begin Patch\n*** Add File: {file_name}\n+via function call\n*** End Patch\n");
let responses = vec![
sse(vec![
json!({"type": "response.created", "response": {"id": "resp-1"}}),
ev_apply_patch_function_call(call_id, &patch),
ev_completed("resp-1"),
]),
sse(vec![
ev_assistant_message("msg-1", "apply_patch function done"),
ev_completed("resp-2"),
]),
];
mount_sse_sequence(&server, responses).await;
submit_turn(
&test,
"apply the patch via function-call apply_patch",
SandboxPolicy::DangerFullAccess,
)
.await?;
let requests = server
.received_requests()
.await
.expect("recorded requests present");
let bodies = request_bodies(&requests)?;
let output_item =
find_function_call_output(&bodies, call_id).expect("apply_patch function output present");
let output = output_item
.get("output")
.and_then(Value::as_str)
.expect("apply_patch output string");
let expected_pattern = format!(
r"(?s)^Exit code: 0
Wall time: [0-9]+(?:\.[0-9]+)? seconds
Output:
Success. Updated the following files:
A {file_name}
?$"
);
assert_regex_match(&expected_pattern, output);
Ok(())
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn shell_output_is_structured_for_nonzero_exit() -> Result<()> {
skip_if_no_network!(Ok(()));
let server = start_mock_server().await;
let mut builder = test_codex().with_config(|config| {
config.model = "gpt-5-codex".to_string();
config.model_family =
find_family_for_model("gpt-5-codex").expect("gpt-5-codex is a model family");
config.include_apply_patch_tool = true;
});
let test = builder.build(&server).await?;
let call_id = "shell-nonzero-exit";
let args = json!({
"command": ["/bin/sh", "-c", "exit 42"],
"timeout_ms": 1_000,
});
let responses = vec![
sse(vec![
json!({"type": "response.created", "response": {"id": "resp-1"}}),
ev_function_call(call_id, "shell", &serde_json::to_string(&args)?),
ev_completed("resp-1"),
]),
sse(vec![
ev_assistant_message("msg-1", "shell failure handled"),
ev_completed("resp-2"),
]),
];
mount_sse_sequence(&server, responses).await;
submit_turn(
&test,
"run the failing shell command",
SandboxPolicy::DangerFullAccess,
)
.await?;
let requests = server
.received_requests()
.await
.expect("recorded requests present");
let bodies = request_bodies(&requests)?;
let output_item = find_function_call_output(&bodies, call_id).expect("shell output present");
let output = output_item
.get("output")
.and_then(Value::as_str)
.expect("shell output string");
let expected_pattern = r"(?s)^Exit code: 42
Wall time: [0-9]+(?:\.[0-9]+)? seconds
Output:
?$";
assert_regex_match(expected_pattern, output);
Ok(())
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn local_shell_call_output_is_structured() -> Result<()> {
skip_if_no_network!(Ok(()));
let server = start_mock_server().await;
let mut builder = test_codex().with_config(|config| {
config.model = "gpt-5-codex".to_string();
config.model_family =
find_family_for_model("gpt-5-codex").expect("gpt-5-codex is a model family");
config.include_apply_patch_tool = true;
});
let test = builder.build(&server).await?;
let call_id = "local-shell-call";
let responses = vec![
sse(vec![
json!({"type": "response.created", "response": {"id": "resp-1"}}),
ev_local_shell_call(call_id, "completed", vec!["/bin/echo", "local shell"]),
ev_completed("resp-1"),
]),
sse(vec![
ev_assistant_message("msg-1", "local shell done"),
ev_completed("resp-2"),
]),
];
mount_sse_sequence(&server, responses).await;
submit_turn(
&test,
"run the local shell command",
SandboxPolicy::DangerFullAccess,
)
.await?;
let requests = server
.received_requests()
.await
.expect("recorded requests present");
let bodies = request_bodies(&requests)?;
let output_item =
find_function_call_output(&bodies, call_id).expect("local shell output present");
let output = output_item
.get("output")
.and_then(Value::as_str)
.expect("local shell output string");
let expected_pattern = r"(?s)^Exit code: 0
Wall time: [0-9]+(?:\.[0-9]+)? seconds
Output:
local shell
?$";
assert_regex_match(expected_pattern, output);
Ok(())
}

View File

@@ -1,6 +1,9 @@
#![cfg(not(target_os = "windows"))]
use std::fs;
use assert_matches::assert_matches;
use codex_core::features::Feature;
use codex_core::model_family::find_family_for_model;
use codex_core::protocol::AskForApproval;
use codex_core::protocol::EventMsg;
@@ -104,7 +107,7 @@ async fn update_plan_tool_emits_plan_update_event() -> anyhow::Result<()> {
let server = start_mock_server().await;
let mut builder = test_codex().with_config(|config| {
config.include_plan_tool = true;
config.features.enable(Feature::PlanTool);
});
let TestCodex {
codex,
@@ -191,7 +194,7 @@ async fn update_plan_tool_rejects_malformed_payload() -> anyhow::Result<()> {
let server = start_mock_server().await;
let mut builder = test_codex().with_config(|config| {
config.include_plan_tool = true;
config.features.enable(Feature::PlanTool);
});
let TestCodex {
codex,
@@ -285,7 +288,7 @@ async fn apply_patch_tool_executes_and_emits_patch_events() -> anyhow::Result<()
let server = start_mock_server().await;
let mut builder = test_codex().with_config(|config| {
config.include_apply_patch_tool = true;
config.features.enable(Feature::ApplyPatchFreeform);
});
let TestCodex {
codex,
@@ -294,15 +297,19 @@ async fn apply_patch_tool_executes_and_emits_patch_events() -> anyhow::Result<()
..
} = builder.build(&server).await?;
let file_name = "notes.txt";
let file_path = cwd.path().join(file_name);
let call_id = "apply-patch-call";
let patch_content = r#"*** Begin Patch
*** Add File: notes.txt
let patch_content = format!(
r#"*** Begin Patch
*** Add File: {file_name}
+Tool harness apply patch
*** End Patch"#;
*** End Patch"#
);
let first_response = sse(vec![
ev_response_created("resp-1"),
ev_apply_patch_function_call(call_id, patch_content),
ev_apply_patch_function_call(call_id, &patch_content),
ev_completed("resp-1"),
]);
responses::mount_sse_once_match(&server, any(), first_response).await;
@@ -351,6 +358,7 @@ async fn apply_patch_tool_executes_and_emits_patch_events() -> anyhow::Result<()
assert!(saw_patch_begin, "expected PatchApplyBegin event");
let patch_end_success =
patch_end_success.expect("expected PatchApplyEnd event to capture success flag");
assert!(patch_end_success);
let req = second_mock.single_request();
let output_item = req.function_call_output(call_id);
@@ -360,38 +368,21 @@ async fn apply_patch_tool_executes_and_emits_patch_events() -> anyhow::Result<()
);
let output_text = extract_output_text(&output_item).expect("output text present");
if let Ok(exec_output) = serde_json::from_str::<Value>(output_text) {
let exit_code = exec_output["metadata"]["exit_code"]
.as_i64()
.expect("exit_code present");
let summary = exec_output["output"].as_str().expect("output field");
assert_eq!(
exit_code, 0,
"expected apply_patch exit_code=0, got {exit_code}, summary: {summary:?}"
);
assert!(
patch_end_success,
"expected PatchApplyEnd success flag, summary: {summary:?}"
);
assert!(
summary.contains("Success."),
"expected apply_patch summary to note success, got {summary:?}"
);
let expected_pattern = format!(
r"(?s)^Exit code: 0
Wall time: [0-9]+(?:\.[0-9]+)? seconds
Output:
Success. Updated the following files:
A {file_name}
?$"
);
assert_regex_match(&expected_pattern, output_text);
let patched_path = cwd.path().join("notes.txt");
let contents = std::fs::read_to_string(&patched_path)
.unwrap_or_else(|e| panic!("failed reading {}: {e}", patched_path.display()));
assert_eq!(contents, "Tool harness apply patch\n");
} else {
assert!(
output_text.contains("codex-run-as-apply-patch"),
"expected apply_patch failure message to mention codex-run-as-apply-patch, got {output_text:?}"
);
assert!(
!patch_end_success,
"expected PatchApplyEnd to report success=false when apply_patch invocation fails"
);
}
let updated_contents = fs::read_to_string(file_path)?;
assert_eq!(
updated_contents, "Tool harness apply patch\n",
"expected updated file content"
);
Ok(())
}
@@ -403,7 +394,7 @@ async fn apply_patch_reports_parse_diagnostics() -> anyhow::Result<()> {
let server = start_mock_server().await;
let mut builder = test_codex().with_config(|config| {
config.include_apply_patch_tool = true;
config.features.enable(Feature::ApplyPatchFreeform);
});
let TestCodex {
codex,

View File

@@ -63,8 +63,9 @@ async fn build_codex_with_test_tool(server: &wiremock::MockServer) -> anyhow::Re
}
fn assert_parallel_duration(actual: Duration) {
// Allow headroom for runtime overhead while still differentiating from serial execution.
assert!(
actual < Duration::from_millis(500),
actual < Duration::from_millis(750),
"expected parallel execution to finish quickly, got {actual:?}"
);
}

View File

@@ -2,6 +2,7 @@
#![allow(clippy::unwrap_used, clippy::expect_used)]
use anyhow::Result;
use codex_core::features::Feature;
use codex_core::model_family::find_family_for_model;
use codex_core::protocol::AskForApproval;
use codex_core::protocol::EventMsg;
@@ -293,7 +294,11 @@ async fn collect_tools(use_unified_exec: bool) -> Result<Vec<String>> {
let mock = mount_sse_sequence(&server, responses).await;
let mut builder = test_codex().with_config(move |config| {
config.use_experimental_unified_exec_tool = use_unified_exec;
if use_unified_exec {
config.features.enable(Feature::UnifiedExec);
} else {
config.features.disable(Feature::UnifiedExec);
}
});
let test = builder.build(&server).await?;
@@ -403,79 +408,6 @@ async fn shell_timeout_includes_timeout_prefix_and_metadata() -> Result<()> {
Ok(())
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn shell_sandbox_denied_truncates_error_output() -> Result<()> {
skip_if_no_network!(Ok(()));
let server = start_mock_server().await;
let mut builder = test_codex();
let test = builder.build(&server).await?;
let call_id = "shell-denied";
let long_line = "this is a long stderr line that should trigger truncation 0123456789abcdefghijklmnopqrstuvwxyz";
let script = format!(
"for i in $(seq 1 500); do >&2 echo '{long_line}'; done; cat <<'EOF' > denied.txt\ncontent\nEOF",
);
let args = json!({
"command": ["/bin/sh", "-c", script],
"timeout_ms": 1_000,
});
mount_sse_once(
&server,
sse(vec![
ev_response_created("resp-1"),
ev_function_call(call_id, "shell", &serde_json::to_string(&args)?),
ev_completed("resp-1"),
]),
)
.await;
let second_mock = mount_sse_once(
&server,
sse(vec![
ev_assistant_message("msg-1", "done"),
ev_completed("resp-2"),
]),
)
.await;
submit_turn(
&test,
"attempt to write in read-only sandbox",
AskForApproval::Never,
SandboxPolicy::ReadOnly,
)
.await?;
let denied_item = second_mock.single_request().function_call_output(call_id);
let output = denied_item
.get("output")
.and_then(Value::as_str)
.expect("denied output string");
let sandbox_pattern = r#"(?s)^Exit code: -?\d+
Wall time: [0-9]+(?:\.[0-9]+)? seconds
Total output lines: \d+
Output:
failed in sandbox: .*?(?:Operation not permitted|Permission denied|Read-only file system).*?
\[\.{3} omitted \d+ of \d+ lines \.{3}\]
.*this is a long stderr line that should trigger truncation 0123456789abcdefghijklmnopqrstuvwxyz.*
\n?$"#;
let sandbox_regex = Regex::new(sandbox_pattern)?;
if !sandbox_regex.is_match(output) {
let fallback_pattern = r#"(?s)^Total output lines: \d+
failed in sandbox: this is a long stderr line that should trigger truncation 0123456789abcdefghijklmnopqrstuvwxyz
.*this is a long stderr line that should trigger truncation 0123456789abcdefghijklmnopqrstuvwxyz.*
.*(?:Operation not permitted|Permission denied|Read-only file system).*$"#;
assert_regex_match(fallback_pattern, output);
}
Ok(())
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn shell_spawn_failure_truncates_exec_error() -> Result<()> {
skip_if_no_network!(Ok(()));

View File

@@ -3,6 +3,7 @@
use std::collections::HashMap;
use anyhow::Result;
use codex_core::features::Feature;
use codex_core::protocol::AskForApproval;
use codex_core::protocol::EventMsg;
use codex_core::protocol::InputItem;
@@ -42,7 +43,13 @@ fn collect_tool_outputs(bodies: &[Value]) -> Result<HashMap<String, Value>> {
if let Some(call_id) = item.get("call_id").and_then(Value::as_str) {
let content = extract_output_text(item)
.ok_or_else(|| anyhow::anyhow!("missing tool output content"))?;
let parsed: Value = serde_json::from_str(content)?;
let trimmed = content.trim();
if trimmed.is_empty() {
continue;
}
let parsed: Value = serde_json::from_str(trimmed).map_err(|err| {
anyhow::anyhow!("failed to parse tool output content {trimmed:?}: {err}")
})?;
outputs.insert(call_id.to_string(), parsed);
}
}
@@ -59,7 +66,7 @@ async fn unified_exec_reuses_session_via_stdin() -> Result<()> {
let server = start_mock_server().await;
let mut builder = test_codex().with_config(|config| {
config.use_experimental_unified_exec_tool = true;
config.features.enable(Feature::UnifiedExec);
});
let TestCodex {
codex,
@@ -168,7 +175,7 @@ async fn unified_exec_reuses_session_via_stdin() -> Result<()> {
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn unified_exec_timeout_and_followup_poll() -> Result<()> {
async fn unified_exec_streams_after_lagged_output() -> Result<()> {
skip_if_no_network!(Ok(()));
skip_if_sandbox!(Ok(()));
@@ -176,6 +183,132 @@ async fn unified_exec_timeout_and_followup_poll() -> Result<()> {
let mut builder = test_codex().with_config(|config| {
config.use_experimental_unified_exec_tool = true;
config.features.enable(Feature::UnifiedExec);
});
let TestCodex {
codex,
cwd,
session_configured,
..
} = builder.build(&server).await?;
let script = r#"python3 - <<'PY'
import sys
import time
chunk = b'x' * (1 << 20)
for _ in range(4):
sys.stdout.buffer.write(chunk)
sys.stdout.flush()
time.sleep(0.2)
for _ in range(5):
sys.stdout.write("TAIL-MARKER\n")
sys.stdout.flush()
time.sleep(0.05)
time.sleep(0.2)
PY
"#;
let first_call_id = "uexec-lag-start";
let first_args = serde_json::json!({
"input": ["/bin/sh", "-c", script],
"timeout_ms": 25,
});
let second_call_id = "uexec-lag-poll";
let second_args = serde_json::json!({
"input": Vec::<String>::new(),
"session_id": "0",
"timeout_ms": 2_000,
});
let responses = vec![
sse(vec![
ev_response_created("resp-1"),
ev_function_call(
first_call_id,
"unified_exec",
&serde_json::to_string(&first_args)?,
),
ev_completed("resp-1"),
]),
sse(vec![
ev_response_created("resp-2"),
ev_function_call(
second_call_id,
"unified_exec",
&serde_json::to_string(&second_args)?,
),
ev_completed("resp-2"),
]),
sse(vec![
ev_assistant_message("msg-1", "lag handled"),
ev_completed("resp-3"),
]),
];
mount_sse_sequence(&server, responses).await;
let session_model = session_configured.model.clone();
codex
.submit(Op::UserTurn {
items: vec![InputItem::Text {
text: "exercise lag handling".into(),
}],
final_output_json_schema: None,
cwd: cwd.path().to_path_buf(),
approval_policy: AskForApproval::Never,
sandbox_policy: SandboxPolicy::DangerFullAccess,
model: session_model,
effort: None,
summary: ReasoningSummary::Auto,
})
.await?;
wait_for_event(&codex, |event| matches!(event, EventMsg::TaskComplete(_))).await;
let requests = server.received_requests().await.expect("recorded requests");
assert!(!requests.is_empty(), "expected at least one POST request");
let bodies = requests
.iter()
.map(|req| req.body_json::<Value>().expect("request json"))
.collect::<Vec<_>>();
let outputs = collect_tool_outputs(&bodies)?;
let start_output = outputs
.get(first_call_id)
.expect("missing initial unified_exec output");
let session_id = start_output["session_id"].as_str().unwrap_or_default();
assert!(
!session_id.is_empty(),
"expected session id from initial unified_exec response"
);
let poll_output = outputs
.get(second_call_id)
.expect("missing poll unified_exec output");
let poll_text = poll_output["output"].as_str().unwrap_or_default();
assert!(
poll_text.contains("TAIL-MARKER"),
"expected poll output to contain tail marker, got {poll_text:?}"
);
Ok(())
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn unified_exec_timeout_and_followup_poll() -> Result<()> {
skip_if_no_network!(Ok(()));
skip_if_sandbox!(Ok(()));
let server = start_mock_server().await;
let mut builder = test_codex().with_config(|config| {
config.features.enable(Feature::UnifiedExec);
});
let TestCodex {
codex,

View File

@@ -5,6 +5,7 @@ use std::os::unix::fs::PermissionsExt;
use codex_core::protocol::EventMsg;
use codex_core::protocol::InputItem;
use codex_core::protocol::Op;
use core_test_support::fs_wait;
use core_test_support::responses;
use core_test_support::skip_if_no_network;
use core_test_support::test_codex::TestCodex;
@@ -17,8 +18,7 @@ use responses::ev_assistant_message;
use responses::ev_completed;
use responses::sse;
use responses::start_mock_server;
use tokio::time::Duration;
use tokio::time::sleep;
use std::time::Duration;
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn summarize_context_three_requests_and_instructions() -> anyhow::Result<()> {
@@ -60,14 +60,7 @@ echo -n "${@: -1}" > $(dirname "${0}")/notify.txt"#,
wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
// We fork the notify script, so we need to wait for it to write to the file.
for _ in 0..100u32 {
if notify_file.exists() {
break;
}
sleep(Duration::from_millis(100)).await;
}
assert!(notify_file.exists());
fs_wait::wait_for_path_exists(&notify_file, Duration::from_secs(5)).await?;
Ok(())
}

View File

@@ -148,21 +148,15 @@ impl OtelEventManager {
response
}
-pub async fn log_sse_event<Next, Fut, E>(
+pub fn log_sse_event<E>(
&self,
-next: Next,
-) -> Result<Option<Result<StreamEvent, StreamError<E>>>, Elapsed>
-where
-Next: FnOnce() -> Fut,
-Fut: Future<Output = Result<Option<Result<StreamEvent, StreamError<E>>>, Elapsed>>,
+response: &Result<Option<Result<StreamEvent, StreamError<E>>>, Elapsed>,
+duration: Duration,
+) where
E: Display,
{
-let start = std::time::Instant::now();
-let response = next().await;
-let duration = start.elapsed();
match response {
-Ok(Some(Ok(ref sse))) => {
+Ok(Some(Ok(sse))) => {
if sse.data.trim() == "[DONE]" {
self.sse_event(&sse.event, duration);
} else {
@@ -191,7 +185,7 @@ impl OtelEventManager {
}
}
}
-Ok(Some(Err(ref error))) => {
+Ok(Some(Err(error))) => {
self.sse_event_failed(None, duration, error);
}
Ok(None) => {}
@@ -199,8 +193,6 @@ impl OtelEventManager {
self.sse_event_failed(None, duration, &"idle timeout waiting for SSE");
}
}
-response
}
fn sse_event(&self, kind: &str, duration: Duration) {
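The refactor turns `log_sse_event` into a plain synchronous method, so timing moves to the caller. A hedged sketch of what a call site could look like; the wrapper function name is hypothetical, while `OtelEventManager`, `StreamEvent`, `StreamError`, and `Elapsed` are the types from the diff above:

```rust
use std::fmt::Display;
use std::future::Future;
use std::time::Instant;

// Hypothetical call-site sketch: await one poll of the SSE stream, measure
// the elapsed time, then hand both the result and the duration to the
// logger, which no longer drives the future itself.
async fn next_event_logged<E: Display>(
    manager: &OtelEventManager,
    poll: impl Future<Output = Result<Option<Result<StreamEvent, StreamError<E>>>, Elapsed>>,
) -> Result<Option<Result<StreamEvent, StreamError<E>>>, Elapsed> {
    let start = Instant::now();
    let response = poll.await;
    manager.log_sse_event(&response, start.elapsed());
    response
}
```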

View File

@@ -1203,6 +1203,11 @@ pub struct StreamErrorEvent {
pub message: String,
}
#[derive(Debug, Clone, Deserialize, Serialize, TS)]
pub struct StreamInfoEvent {
pub message: String,
}
#[derive(Debug, Clone, Deserialize, Serialize, TS)]
pub struct PatchApplyBeginEvent {
/// Identifier so this can be paired with the PatchApplyEnd event.
@@ -1243,6 +1248,30 @@ pub struct GetHistoryEntryResponseEvent {
pub struct McpListToolsResponseEvent {
/// Fully qualified tool name -> tool definition.
pub tools: std::collections::HashMap<String, McpTool>,
/// Authentication status for each configured MCP server.
pub auth_statuses: std::collections::HashMap<String, McpAuthStatus>,
}
#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize, TS)]
#[serde(rename_all = "snake_case")]
#[ts(rename_all = "snake_case")]
pub enum McpAuthStatus {
Unsupported,
NotLoggedIn,
BearerToken,
OAuth,
}
impl fmt::Display for McpAuthStatus {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
let text = match self {
McpAuthStatus::Unsupported => "Unsupported",
McpAuthStatus::NotLoggedIn => "Not logged in",
McpAuthStatus::BearerToken => "Bearer token",
McpAuthStatus::OAuth => "OAuth",
};
f.write_str(text)
}
}
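A small illustrative check, not part of the diff: the `serde(rename_all = "snake_case")` attribute governs the wire format, while the `Display` impl above provides the human-readable form.

```rust
#[test]
fn mcp_auth_status_wire_and_display_forms() -> anyhow::Result<()> {
    // Wire format is snake_case per the serde attribute.
    assert_eq!(
        serde_json::to_string(&McpAuthStatus::NotLoggedIn)?,
        r#""not_logged_in""#
    );
    // Display is the text shown to users.
    assert_eq!(McpAuthStatus::BearerToken.to_string(), "Bearer token");
    Ok(())
}
```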
/// Response payload for `Op::ListCustomPrompts`.

View File

@@ -12,6 +12,7 @@ axum = { workspace = true, default-features = false, features = [
"http1",
"tokio",
] }
codex-protocol = { workspace = true }
keyring = { workspace = true, features = [
"apple-native",
"crypto-rust",

View File

@@ -0,0 +1,125 @@
use std::time::Duration;
use anyhow::Error;
use anyhow::Result;
use codex_protocol::protocol::McpAuthStatus;
use reqwest::Client;
use reqwest::StatusCode;
use reqwest::Url;
use serde::Deserialize;
use tracing::debug;
use crate::OAuthCredentialsStoreMode;
use crate::oauth::has_oauth_tokens;
const DISCOVERY_TIMEOUT: Duration = Duration::from_secs(5);
const OAUTH_DISCOVERY_HEADER: &str = "MCP-Protocol-Version";
const OAUTH_DISCOVERY_VERSION: &str = "2024-11-05";
/// Determine the authentication status for a streamable HTTP MCP server.
pub async fn determine_streamable_http_auth_status(
server_name: &str,
url: &str,
bearer_token_env_var: Option<&str>,
store_mode: OAuthCredentialsStoreMode,
) -> Result<McpAuthStatus> {
if bearer_token_env_var.is_some() {
return Ok(McpAuthStatus::BearerToken);
}
if has_oauth_tokens(server_name, url, store_mode)? {
return Ok(McpAuthStatus::OAuth);
}
match supports_oauth_login(url).await {
Ok(true) => Ok(McpAuthStatus::NotLoggedIn),
Ok(false) => Ok(McpAuthStatus::Unsupported),
Err(error) => {
debug!(
"failed to detect OAuth support for MCP server `{server_name}` at {url}: {error:?}"
);
Ok(McpAuthStatus::Unsupported)
}
}
}
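A hedged usage sketch of the function above, as a TUI startup check might call it. The crate path, server name, and URL are assumptions for illustration, not taken from this diff:

```rust
use codex_rmcp_client::McpAuthStatus;
use codex_rmcp_client::OAuthCredentialsStoreMode;
use codex_rmcp_client::determine_streamable_http_auth_status;

// Hypothetical sketch: warn when a configured server advertises OAuth but
// has no stored credentials.
async fn warn_if_login_needed() -> anyhow::Result<()> {
    let status = determine_streamable_http_auth_status(
        "docs",                        // example server name
        "https://mcp.example.com/api", // example URL
        None,                          // no bearer-token env var configured
        OAuthCredentialsStoreMode::Auto,
    )
    .await?;
    if status == McpAuthStatus::NotLoggedIn {
        eprintln!("MCP server `docs` supports OAuth but is not logged in");
    }
    Ok(())
}
```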
/// Attempt to determine whether a streamable HTTP MCP server advertises OAuth login.
pub async fn supports_oauth_login(url: &str) -> Result<bool> {
let base_url = Url::parse(url)?;
let client = Client::builder().timeout(DISCOVERY_TIMEOUT).build()?;
let mut last_error: Option<Error> = None;
for candidate_path in discovery_paths(base_url.path()) {
let mut discovery_url = base_url.clone();
discovery_url.set_path(&candidate_path);
let response = match client
.get(discovery_url.clone())
.header(OAUTH_DISCOVERY_HEADER, OAUTH_DISCOVERY_VERSION)
.send()
.await
{
Ok(response) => response,
Err(err) => {
last_error = Some(err.into());
continue;
}
};
if response.status() != StatusCode::OK {
continue;
}
let metadata = match response.json::<OAuthDiscoveryMetadata>().await {
Ok(metadata) => metadata,
Err(err) => {
last_error = Some(err.into());
continue;
}
};
if metadata.authorization_endpoint.is_some() && metadata.token_endpoint.is_some() {
return Ok(true);
}
}
if let Some(err) = last_error {
debug!("OAuth discovery requests failed for {url}: {err:?}");
}
Ok(false)
}
#[derive(Debug, Deserialize)]
struct OAuthDiscoveryMetadata {
#[serde(default)]
authorization_endpoint: Option<String>,
#[serde(default)]
token_endpoint: Option<String>,
}
/// Implements RFC 8414 section 3.1 for discovering well-known OAuth endpoints.
/// MCP servers that support OAuth are required to expose this metadata.
/// https://datatracker.ietf.org/doc/html/rfc8414#section-3.1
/// https://github.com/modelcontextprotocol/rust-sdk/blob/main/crates/rmcp/src/transport/auth.rs#L182
fn discovery_paths(base_path: &str) -> Vec<String> {
let trimmed = base_path.trim_start_matches('/').trim_end_matches('/');
let canonical = "/.well-known/oauth-authorization-server".to_string();
if trimmed.is_empty() {
return vec![canonical];
}
let mut candidates = Vec::new();
let mut push_unique = |candidate: String| {
if !candidates.contains(&candidate) {
candidates.push(candidate);
}
};
push_unique(format!("{canonical}/{trimmed}"));
push_unique(format!("/{trimmed}/.well-known/oauth-authorization-server"));
push_unique(canonical);
candidates
}
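To make the candidate ordering concrete, here is a small check derived directly from the function above: path-aware candidates come first, then the canonical root document, and a bare root path yields only the canonical location.

```rust
#[test]
fn discovery_paths_orders_candidates_per_rfc8414() {
    assert_eq!(
        discovery_paths("/api/mcp"),
        vec![
            "/.well-known/oauth-authorization-server/api/mcp".to_string(),
            "/api/mcp/.well-known/oauth-authorization-server".to_string(),
            "/.well-known/oauth-authorization-server".to_string(),
        ]
    );
    assert_eq!(
        discovery_paths("/"),
        vec!["/.well-known/oauth-authorization-server".to_string()]
    );
}
```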

View File

@@ -1,3 +1,4 @@
mod auth_status;
mod find_codex_home;
mod logging_client_handler;
mod oauth;
@@ -5,6 +6,10 @@ mod perform_oauth_login;
mod rmcp_client;
mod utils;
pub use auth_status::determine_streamable_http_auth_status;
pub use auth_status::supports_oauth_login;
pub use codex_protocol::protocol::McpAuthStatus;
pub use oauth::OAuthCredentialsStoreMode;
pub use oauth::StoredOAuthTokens;
pub use oauth::WrappedOAuthTokenResponse;
pub use oauth::delete_oauth_tokens;

View File

@@ -58,6 +58,21 @@ pub struct StoredOAuthTokens {
pub token_response: WrappedOAuthTokenResponse,
}
/// Determine where Codex should store and read MCP credentials.
#[derive(Debug, Default, Copy, Clone, PartialEq, Eq, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
pub enum OAuthCredentialsStoreMode {
/// `Keyring` when available; otherwise, `File`.
/// Credentials stored in the keyring are readable only by Codex unless the user explicitly grants access through the OS keyring.
#[default]
Auto,
/// CODEX_HOME/.credentials.json
/// This file will be readable to Codex and other applications running as the same user.
File,
/// Keyring when available, otherwise fail.
Keyring,
}
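An illustrative check, not part of the diff: `#[default]` makes `Auto` the fallback when the store mode is omitted, and `rename_all = "lowercase"` means the serialized values are `auto`, `file`, and `keyring`.

```rust
#[test]
fn store_mode_defaults_to_auto_and_serializes_lowercase() -> anyhow::Result<()> {
    assert_eq!(
        OAuthCredentialsStoreMode::default(),
        OAuthCredentialsStoreMode::Auto
    );
    assert_eq!(
        serde_json::to_string(&OAuthCredentialsStoreMode::Keyring)?,
        r#""keyring""#
    );
    Ok(())
}
```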
#[derive(Debug)]
struct CredentialStoreError(anyhow::Error);
@@ -83,15 +98,15 @@ impl fmt::Display for CredentialStoreError {
impl std::error::Error for CredentialStoreError {}
-trait CredentialStore {
+trait KeyringStore {
fn load(&self, service: &str, account: &str) -> Result<Option<String>, CredentialStoreError>;
fn save(&self, service: &str, account: &str, value: &str) -> Result<(), CredentialStoreError>;
fn delete(&self, service: &str, account: &str) -> Result<bool, CredentialStoreError>;
}
-struct KeyringCredentialStore;
+struct DefaultKeyringStore;
-impl CredentialStore for KeyringCredentialStore {
+impl KeyringStore for DefaultKeyringStore {
fn load(&self, service: &str, account: &str) -> Result<Option<String>, CredentialStoreError> {
let entry = Entry::new(service, account).map_err(CredentialStoreError::new)?;
match entry.get_password() {
@@ -129,47 +144,93 @@ impl PartialEq for WrappedOAuthTokenResponse {
}
}
-pub(crate) fn load_oauth_tokens(server_name: &str, url: &str) -> Result<Option<StoredOAuthTokens>> {
-let store = KeyringCredentialStore;
-load_oauth_tokens_with_store(&store, server_name, url)
+pub(crate) fn load_oauth_tokens(
+server_name: &str,
+url: &str,
+store_mode: OAuthCredentialsStoreMode,
+) -> Result<Option<StoredOAuthTokens>> {
+let keyring_store = DefaultKeyringStore;
+match store_mode {
+OAuthCredentialsStoreMode::Auto => {
+load_oauth_tokens_from_keyring_with_fallback_to_file(&keyring_store, server_name, url)
+}
+OAuthCredentialsStoreMode::File => load_oauth_tokens_from_file(server_name, url),
+OAuthCredentialsStoreMode::Keyring => {
+load_oauth_tokens_from_keyring(&keyring_store, server_name, url)
+.with_context(|| "failed to read OAuth tokens from keyring".to_string())
+}
+}
}
-fn load_oauth_tokens_with_store<C: CredentialStore>(
-store: &C,
+pub(crate) fn has_oauth_tokens(
server_name: &str,
url: &str,
+store_mode: OAuthCredentialsStoreMode,
+) -> Result<bool> {
+Ok(load_oauth_tokens(server_name, url, store_mode)?.is_some())
+}
+fn load_oauth_tokens_from_keyring_with_fallback_to_file<K: KeyringStore>(
+keyring_store: &K,
+server_name: &str,
+url: &str,
) -> Result<Option<StoredOAuthTokens>> {
+match load_oauth_tokens_from_keyring(keyring_store, server_name, url) {
+Ok(Some(tokens)) => Ok(Some(tokens)),
+Ok(None) => load_oauth_tokens_from_file(server_name, url),
+Err(error) => {
+warn!("failed to read OAuth tokens from keyring: {error}");
+load_oauth_tokens_from_file(server_name, url)
+.with_context(|| format!("failed to read OAuth tokens from keyring: {error}"))
+}
+}
+}
+fn load_oauth_tokens_from_keyring<K: KeyringStore>(
+keyring_store: &K,
+server_name: &str,
+url: &str,
+) -> Result<Option<StoredOAuthTokens>> {
let key = compute_store_key(server_name, url)?;
-match store.load(KEYRING_SERVICE, &key) {
+match keyring_store.load(KEYRING_SERVICE, &key) {
Ok(Some(serialized)) => {
let tokens: StoredOAuthTokens = serde_json::from_str(&serialized)
.context("failed to deserialize OAuth tokens from keyring")?;
Ok(Some(tokens))
}
-Ok(None) => load_oauth_tokens_from_file(server_name, url),
-Err(error) => {
-let message = error.message();
-warn!("failed to read OAuth tokens from keyring: {message}");
-load_oauth_tokens_from_file(server_name, url)
-.with_context(|| format!("failed to read OAuth tokens from keyring: {message}"))
+Ok(None) => Ok(None),
+Err(error) => Err(error.into_error()),
}
}
+pub fn save_oauth_tokens(
+server_name: &str,
+tokens: &StoredOAuthTokens,
+store_mode: OAuthCredentialsStoreMode,
+) -> Result<()> {
+let keyring_store = DefaultKeyringStore;
+match store_mode {
+OAuthCredentialsStoreMode::Auto => save_oauth_tokens_with_keyring_with_fallback_to_file(
+&keyring_store,
+server_name,
+tokens,
+),
+OAuthCredentialsStoreMode::File => save_oauth_tokens_to_file(tokens),
+OAuthCredentialsStoreMode::Keyring => {
+save_oauth_tokens_with_keyring(&keyring_store, server_name, tokens)
+}
+}
+}
-pub fn save_oauth_tokens(server_name: &str, tokens: &StoredOAuthTokens) -> Result<()> {
-let store = KeyringCredentialStore;
-save_oauth_tokens_with_store(&store, server_name, tokens)
-}
-fn save_oauth_tokens_with_store<C: CredentialStore>(
-store: &C,
+fn save_oauth_tokens_with_keyring<K: KeyringStore>(
+keyring_store: &K,
server_name: &str,
tokens: &StoredOAuthTokens,
) -> Result<()> {
let serialized = serde_json::to_string(tokens).context("failed to serialize OAuth tokens")?;
let key = compute_store_key(server_name, &tokens.url)?;
-match store.save(KEYRING_SERVICE, &key, &serialized) {
+match keyring_store.save(KEYRING_SERVICE, &key, &serialized) {
Ok(()) => {
if let Err(error) = delete_oauth_tokens_from_file(&key) {
warn!("failed to remove OAuth tokens from fallback storage: {error:?}");
@@ -177,31 +238,61 @@ fn save_oauth_tokens_with_store<C: CredentialStore>(
Ok(())
}
Err(error) => {
-let message = error.message();
-warn!("failed to write OAuth tokens to keyring: {message}");
+let message = format!(
+"failed to write OAuth tokens to keyring: {}",
+error.message()
+);
+warn!("{message}");
+Err(error.into_error().context(message))
}
}
}
fn save_oauth_tokens_with_keyring_with_fallback_to_file<K: KeyringStore>(
keyring_store: &K,
server_name: &str,
tokens: &StoredOAuthTokens,
) -> Result<()> {
match save_oauth_tokens_with_keyring(keyring_store, server_name, tokens) {
Ok(()) => Ok(()),
Err(error) => {
let message = error.to_string();
warn!("falling back to file storage for OAuth tokens: {message}");
save_oauth_tokens_to_file(tokens)
.with_context(|| format!("failed to write OAuth tokens to keyring: {message}"))
}
}
}
-pub fn delete_oauth_tokens(server_name: &str, url: &str) -> Result<bool> {
-let store = KeyringCredentialStore;
-delete_oauth_tokens_with_store(&store, server_name, url)
+pub fn delete_oauth_tokens(
+server_name: &str,
+url: &str,
+store_mode: OAuthCredentialsStoreMode,
+) -> Result<bool> {
+let keyring_store = DefaultKeyringStore;
+delete_oauth_tokens_from_keyring_and_file(&keyring_store, store_mode, server_name, url)
}
-fn delete_oauth_tokens_with_store<C: CredentialStore>(
-store: &C,
+fn delete_oauth_tokens_from_keyring_and_file<K: KeyringStore>(
+keyring_store: &K,
+store_mode: OAuthCredentialsStoreMode,
server_name: &str,
url: &str,
) -> Result<bool> {
let key = compute_store_key(server_name, url)?;
-let keyring_removed = match store.delete(KEYRING_SERVICE, &key) {
+let keyring_result = keyring_store.delete(KEYRING_SERVICE, &key);
+let keyring_removed = match keyring_result {
Ok(removed) => removed,
Err(error) => {
-let message = error.message();
-warn!("failed to delete OAuth tokens from keyring: {message}");
-return Err(error.into_error()).context("failed to delete OAuth tokens from keyring");
+match store_mode {
+OAuthCredentialsStoreMode::Auto | OAuthCredentialsStoreMode::Keyring => {
+return Err(error.into_error())
+.context("failed to delete OAuth tokens from keyring");
+}
+OAuthCredentialsStoreMode::File => false,
+}
}
};
@@ -218,6 +309,7 @@ struct OAuthPersistorInner {
server_name: String,
url: String,
authorization_manager: Arc<Mutex<AuthorizationManager>>,
store_mode: OAuthCredentialsStoreMode,
last_credentials: Mutex<Option<StoredOAuthTokens>>,
}
@@ -225,14 +317,16 @@ impl OAuthPersistor {
pub(crate) fn new(
server_name: String,
url: String,
-manager: Arc<Mutex<AuthorizationManager>>,
+authorization_manager: Arc<Mutex<AuthorizationManager>>,
+store_mode: OAuthCredentialsStoreMode,
initial_credentials: Option<StoredOAuthTokens>,
) -> Self {
Self {
inner: Arc::new(OAuthPersistorInner {
server_name,
url,
-authorization_manager: manager,
+authorization_manager,
+store_mode,
last_credentials: Mutex::new(initial_credentials),
}),
}
@@ -257,15 +351,18 @@ impl OAuthPersistor {
};
let mut last_credentials = self.inner.last_credentials.lock().await;
if last_credentials.as_ref() != Some(&stored) {
-save_oauth_tokens(&self.inner.server_name, &stored)?;
+save_oauth_tokens(&self.inner.server_name, &stored, self.inner.store_mode)?;
*last_credentials = Some(stored);
}
}
None => {
let mut last_serialized = self.inner.last_credentials.lock().await;
if last_serialized.take().is_some()
-&& let Err(error) =
-delete_oauth_tokens(&self.inner.server_name, &self.inner.url)
+&& let Err(error) = delete_oauth_tokens(
+&self.inner.server_name,
+&self.inner.url,
+self.inner.store_mode,
+)
{
warn!(
"failed to remove OAuth tokens for server {}: {error}",
@@ -542,7 +639,7 @@ mod tests {
}
}
-impl CredentialStore for MockCredentialStore {
+impl KeyringStore for MockCredentialStore {
fn load(
&self,
_service: &str,
@@ -643,7 +740,8 @@ mod tests {
let key = super::compute_store_key(&tokens.server_name, &tokens.url)?;
store.save(KEYRING_SERVICE, &key, &serialized)?;
-let loaded = super::load_oauth_tokens_with_store(&store, &tokens.server_name, &tokens.url)?;
+let loaded =
+super::load_oauth_tokens_from_keyring(&store, &tokens.server_name, &tokens.url)?;
assert_eq!(loaded, Some(expected));
Ok(())
}
@@ -657,8 +755,12 @@ mod tests {
super::save_oauth_tokens_to_file(&tokens)?;
-let loaded = super::load_oauth_tokens_with_store(&store, &tokens.server_name, &tokens.url)?
-.expect("tokens should load from fallback");
+let loaded = super::load_oauth_tokens_from_keyring_with_fallback_to_file(
+&store,
+&tokens.server_name,
+&tokens.url,
+)?
+.expect("tokens should load from fallback");
assert_tokens_match_without_expiry(&loaded, &expected);
Ok(())
}
@@ -674,8 +776,12 @@ mod tests {
super::save_oauth_tokens_to_file(&tokens)?;
-let loaded = super::load_oauth_tokens_with_store(&store, &tokens.server_name, &tokens.url)?
-.expect("tokens should load from fallback");
+let loaded = super::load_oauth_tokens_from_keyring_with_fallback_to_file(
+&store,
+&tokens.server_name,
+&tokens.url,
+)?
+.expect("tokens should load from fallback");
assert_tokens_match_without_expiry(&loaded, &expected);
Ok(())
}
@@ -689,7 +795,11 @@ mod tests {
super::save_oauth_tokens_to_file(&tokens)?;
-super::save_oauth_tokens_with_store(&store, &tokens.server_name, &tokens)?;
+super::save_oauth_tokens_with_keyring_with_fallback_to_file(
+&store,
+&tokens.server_name,
+&tokens,
+)?;
let fallback_path = super::fallback_file_path()?;
assert!(!fallback_path.exists(), "fallback file should be removed");
@@ -706,7 +816,11 @@ mod tests {
let key = super::compute_store_key(&tokens.server_name, &tokens.url)?;
store.set_error(&key, KeyringError::Invalid("error".into(), "save".into()));
-super::save_oauth_tokens_with_store(&store, &tokens.server_name, &tokens)?;
+super::save_oauth_tokens_with_keyring_with_fallback_to_file(
+&store,
+&tokens.server_name,
+&tokens,
+)?;
let fallback_path = super::fallback_file_path()?;
assert!(fallback_path.exists(), "fallback file should be created");
@@ -734,8 +848,34 @@ mod tests {
store.save(KEYRING_SERVICE, &key, &serialized)?;
super::save_oauth_tokens_to_file(&tokens)?;
-let removed =
-super::delete_oauth_tokens_with_store(&store, &tokens.server_name, &tokens.url)?;
+let removed = super::delete_oauth_tokens_from_keyring_and_file(
+&store,
+OAuthCredentialsStoreMode::Auto,
+&tokens.server_name,
+&tokens.url,
+)?;
assert!(removed);
assert!(!store.contains(&key));
assert!(!super::fallback_file_path()?.exists());
Ok(())
}
#[test]
fn delete_oauth_tokens_file_mode_removes_keyring_only_entry() -> Result<()> {
let _env = TempCodexHome::new();
let store = MockCredentialStore::default();
let tokens = sample_tokens();
let serialized = serde_json::to_string(&tokens)?;
let key = super::compute_store_key(&tokens.server_name, &tokens.url)?;
store.save(KEYRING_SERVICE, &key, &serialized)?;
assert!(store.contains(&key));
let removed = super::delete_oauth_tokens_from_keyring_and_file(
&store,
OAuthCredentialsStoreMode::Auto,
&tokens.server_name,
&tokens.url,
)?;
assert!(removed);
assert!(!store.contains(&key));
assert!(!super::fallback_file_path()?.exists());
@@ -751,8 +891,12 @@ mod tests {
store.set_error(&key, KeyringError::Invalid("error".into(), "delete".into()));
super::save_oauth_tokens_to_file(&tokens).unwrap();
-let result =
-super::delete_oauth_tokens_with_store(&store, &tokens.server_name, &tokens.url);
+let result = super::delete_oauth_tokens_from_keyring_and_file(
+&store,
+OAuthCredentialsStoreMode::Auto,
+&tokens.server_name,
+&tokens.url,
+);
assert!(result.is_err());
assert!(super::fallback_file_path().unwrap().exists());
Ok(())

View File

@@ -12,6 +12,7 @@ use tokio::sync::oneshot;
use tokio::time::timeout;
use urlencoding::decode;
use crate::OAuthCredentialsStoreMode;
use crate::StoredOAuthTokens;
use crate::WrappedOAuthTokenResponse;
use crate::save_oauth_tokens;
@@ -26,7 +27,11 @@ impl Drop for CallbackServerGuard {
}
}
-pub async fn perform_oauth_login(server_name: &str, server_url: &str) -> Result<()> {
+pub async fn perform_oauth_login(
+server_name: &str,
+server_url: &str,
+store_mode: OAuthCredentialsStoreMode,
+) -> Result<()> {
let server = Arc::new(Server::http("127.0.0.1:0").map_err(|err| anyhow!(err))?);
let guard = CallbackServerGuard {
server: Arc::clone(&server),
@@ -81,7 +86,7 @@ pub async fn perform_oauth_login(server_name: &str, server_url: &str) -> Result<
client_id,
token_response: WrappedOAuthTokenResponse(credentials),
};
-save_oauth_tokens(server_name, &stored)?;
+save_oauth_tokens(server_name, &stored, store_mode)?;
drop(guard);
Ok(())

View File

@@ -35,6 +35,7 @@ use tracing::warn;
use crate::load_oauth_tokens;
use crate::logging_client_handler::LoggingClientHandler;
use crate::oauth::OAuthCredentialsStoreMode;
use crate::oauth::OAuthPersistor;
use crate::oauth::StoredOAuthTokens;
use crate::utils::convert_call_tool_result;
@@ -119,10 +120,11 @@ impl RmcpClient {
server_name: &str,
url: &str,
bearer_token: Option<String>,
store_mode: OAuthCredentialsStoreMode,
) -> Result<Self> {
let initial_oauth_tokens = match bearer_token {
Some(_) => None,
-None => match load_oauth_tokens(server_name, url) {
+None => match load_oauth_tokens(server_name, url, store_mode) {
Ok(tokens) => tokens,
Err(err) => {
warn!("failed to read tokens for server `{server_name}`: {err}");
@@ -132,7 +134,8 @@ impl RmcpClient {
};
let transport = if let Some(initial_tokens) = initial_oauth_tokens.clone() {
let (transport, oauth_persistor) =
-create_oauth_transport_and_runtime(server_name, url, initial_tokens).await?;
+create_oauth_transport_and_runtime(server_name, url, initial_tokens, store_mode)
+.await?;
PendingTransport::StreamableHttpWithOAuth {
transport,
oauth_persistor,
@@ -286,6 +289,7 @@ async fn create_oauth_transport_and_runtime(
server_name: &str,
url: &str,
initial_tokens: StoredOAuthTokens,
credentials_store: OAuthCredentialsStoreMode,
) -> Result<(
StreamableHttpClientTransport<AuthClient<reqwest::Client>>,
OAuthPersistor,
@@ -320,6 +324,7 @@ async fn create_oauth_transport_and_runtime(
server_name.to_string(),
url.to_string(),
auth_manager,
credentials_store,
Some(initial_tokens),
);

View File

@@ -16,7 +16,6 @@ use crate::key_hint::KeyBinding;
use crate::render::highlight::highlight_bash_to_lines;
use crate::render::renderable::ColumnRenderable;
use crate::render::renderable::Renderable;
use crate::text_formatting::truncate_text;
use codex_core::protocol::FileChange;
use codex_core::protocol::Op;
use codex_core::protocol::ReviewDecision;
@@ -105,9 +104,9 @@ impl ApprovalOverlay {
),
};
-let header = Box::new(ColumnRenderable::new([
-Box::new(Line::from(title.bold())),
-Box::new(Line::from("")),
+let header = Box::new(ColumnRenderable::with([
+Line::from(title.bold()).into(),
+Line::from("").into(),
header,
]));
@@ -160,11 +159,8 @@ impl ApprovalOverlay {
}
fn handle_exec_decision(&self, id: &str, command: &[String], decision: ReviewDecision) {
-if let Some(lines) = build_exec_history_lines(command.to_vec(), decision) {
-self.app_event_tx.send(AppEvent::InsertHistoryCell(Box::new(
-history_cell::new_user_approval_decision(lines),
-)));
-}
+let cell = history_cell::new_approval_decision_cell(command.to_vec(), decision);
+self.app_event_tx.send(AppEvent::InsertHistoryCell(cell));
self.app_event_tx.send(AppEvent::CodexOp(Op::ExecApproval {
id: id.to_string(),
decision,
@@ -327,7 +323,7 @@ impl From<ApprovalRequest> for ApprovalRequestState {
header.push(DiffSummary::new(changes, cwd).into());
Self {
variant: ApprovalVariant::ApplyPatch { id },
-header: Box::new(ColumnRenderable::new(header)),
+header: Box::new(ColumnRenderable::with(header)),
}
}
}
@@ -396,91 +392,11 @@ fn patch_options() -> Vec<ApprovalOption> {
]
}
-fn build_exec_history_lines(
-command: Vec<String>,
-decision: ReviewDecision,
-) -> Option<Vec<Line<'static>>> {
-use ReviewDecision::*;
-let (symbol, summary): (Span<'static>, Vec<Span<'static>>) = match decision {
-Approved => {
-let snippet = Span::from(exec_snippet(&command)).dim();
-(
-"".green(),
-vec![
-"You ".into(),
-"approved".bold(),
-" codex to run ".into(),
-snippet,
-" this time".bold(),
-],
-)
-}
-ApprovedForSession => {
-let snippet = Span::from(exec_snippet(&command)).dim();
-(
-"".green(),
-vec![
-"You ".into(),
-"approved".bold(),
-" codex to run ".into(),
-snippet,
-" every time this session".bold(),
-],
-)
-}
-Denied => {
-let snippet = Span::from(exec_snippet(&command)).dim();
-(
-"".red(),
-vec![
-"You ".into(),
-"did not approve".bold(),
-" codex to run ".into(),
-snippet,
-],
-)
-}
-Abort => {
-let snippet = Span::from(exec_snippet(&command)).dim();
-(
-"".red(),
-vec![
-"You ".into(),
-"canceled".bold(),
-" the request to run ".into(),
-snippet,
-],
-)
-}
-};
-let mut lines = Vec::new();
-let mut spans = Vec::new();
-spans.push(symbol);
-spans.extend(summary);
-lines.push(Line::from(spans));
-Some(lines)
-}
-fn truncate_exec_snippet(full_cmd: &str) -> String {
-let mut snippet = match full_cmd.split_once('\n') {
-Some((first, _)) => format!("{first} ..."),
-None => full_cmd.to_string(),
-};
-snippet = truncate_text(&snippet, 80);
-snippet
-}
-fn exec_snippet(command: &[String]) -> String {
-let full_cmd = strip_bash_lc_and_escape(command);
-truncate_exec_snippet(&full_cmd)
-}
#[cfg(test)]
mod tests {
use super::*;
use crate::app_event::AppEvent;
use pretty_assertions::assert_eq;
use tokio::sync::mpsc::unbounded_channel;
fn make_exec_request() -> ApprovalRequest {
@@ -550,6 +466,34 @@ mod tests {
);
}
#[test]
fn exec_history_cell_wraps_with_two_space_indent() {
let command = vec![
"/bin/zsh".into(),
"-lc".into(),
"git add tui/src/render/mod.rs tui/src/render/renderable.rs".into(),
];
let cell = history_cell::new_approval_decision_cell(command, ReviewDecision::Approved);
let lines = cell.display_lines(28);
let rendered: Vec<String> = lines
.iter()
.map(|line| {
line.spans
.iter()
.map(|span| span.content.as_ref())
.collect::<String>()
})
.collect();
let expected = vec![
"✔ You approved codex to".to_string(),
" run /bin/zsh -lc 'git add".to_string(),
" tui/src/render/mod.rs tui/".to_string(),
" src/render/renderable.rs'".to_string(),
" this time".to_string(),
];
assert_eq!(rendered, expected);
}
#[test]
fn enter_sets_last_selected_index_without_dismissing() {
let (tx_raw, mut rx) = unbounded_channel::<AppEvent>();

View File

@@ -149,7 +149,7 @@ impl ChatComposer {
paste_burst: PasteBurst::default(),
disable_paste_burst: false,
custom_prompts: Vec::new(),
-footer_mode: FooterMode::ShortcutPrompt,
+footer_mode: FooterMode::ShortcutSummary,
footer_hint_override: None,
context_window_percent: None,
};
@@ -165,8 +165,9 @@ impl ChatComposer {
.unwrap_or_else(|| footer_height(footer_props));
let footer_spacing = Self::footer_spacing(footer_hint_height);
let footer_total_height = footer_hint_height + footer_spacing;
+const COLS_WITH_MARGIN: u16 = LIVE_PREFIX_COLS + 1;
self.textarea
-.desired_height(width.saturating_sub(LIVE_PREFIX_COLS))
+.desired_height(width.saturating_sub(COLS_WITH_MARGIN))
+ 2
+ match &self.active_popup {
ActivePopup::None => footer_total_height,
@@ -197,7 +198,9 @@ impl ChatComposer {
let [composer_rect, popup_rect] =
Layout::vertical([Constraint::Min(1), popup_constraint]).areas(area);
let mut textarea_rect = composer_rect;
-textarea_rect.width = textarea_rect.width.saturating_sub(LIVE_PREFIX_COLS);
+textarea_rect.width = textarea_rect.width.saturating_sub(
+LIVE_PREFIX_COLS + 1, /* keep a one-column right margin for wrapping */
+);
textarea_rect.x = textarea_rect.x.saturating_add(LIVE_PREFIX_COLS);
[composer_rect, textarea_rect, popup_rect]
}
@@ -319,8 +322,12 @@ impl ChatComposer {
}
/// Attempt to start a burst by retro-capturing recent chars before the cursor.
-pub fn attach_image(&mut self, path: PathBuf, width: u32, height: u32, format_label: &str) {
-let placeholder = format!("[image {width}x{height} {format_label}]");
+pub fn attach_image(&mut self, path: PathBuf, width: u32, height: u32, _format_label: &str) {
+let file_label = path
+.file_name()
+.map(|name| name.to_string_lossy().into_owned())
+.unwrap_or_else(|| "image".to_string());
+let placeholder = format!("[{file_label} {width}x{height}]");
// Insert as an element to match large paste placeholder behavior:
// styled distinctly and treated atomically for cursor/mutations.
self.textarea.insert_element(&placeholder);
@@ -958,6 +965,7 @@ impl ChatComposer {
}
let mut text = self.textarea.text().to_string();
let original_input = text.clone();
let input_starts_with_space = original_input.starts_with(' ');
self.textarea.set_text("");
// Replace all pending pastes in the text
@@ -971,6 +979,35 @@ impl ChatComposer {
// If there is neither text nor attachments, suppress submission entirely.
let has_attachments = !self.attached_images.is_empty();
text = text.trim().to_string();
if let Some((name, _rest)) = parse_slash_name(&text) {
let treat_as_plain_text = input_starts_with_space || name.contains('/');
if !treat_as_plain_text {
let is_builtin = built_in_slash_commands()
.into_iter()
.any(|(command_name, _)| command_name == name);
let prompt_prefix = format!("{PROMPTS_CMD_PREFIX}:");
let is_known_prompt = name
.strip_prefix(&prompt_prefix)
.map(|prompt_name| {
self.custom_prompts
.iter()
.any(|prompt| prompt.name == prompt_name)
})
.unwrap_or(false);
if !is_builtin && !is_known_prompt {
let message = format!(
r#"Unrecognized command '/{name}'. Type "/" for a list of supported commands."#
);
self.app_event_tx.send(AppEvent::InsertHistoryCell(Box::new(
history_cell::new_info_event(message, None),
)));
self.textarea.set_text(&original_input);
self.textarea.set_cursor(original_input.len());
return (InputResult::None, true);
}
}
}
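As an aside, the dispatch rule the block above implements can be boiled down to a small predicate. This is a sketch for illustration only; `parse_slash_name` is assumed (it is referenced but not shown in this diff) to return the token following a leading `/`:

```rust
// Hypothetical condensation of the rule above: a leading space opts out of
// command handling, and names containing '/' are treated as filesystem
// paths rather than slash commands.
fn treat_as_slash_command(input: &str, name: &str) -> bool {
    !input.starts_with(' ') && !name.contains('/')
}

// "/init"                               -> candidate command, validated next
// " /this-looks-like-a-command"         -> plain text (leading space)
// "/Users/example/project/src/main.rs"  -> plain text (path-like name)
```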
let expanded_prompt = match expand_custom_prompt(&text, &self.custom_prompts) {
Ok(expanded) => expanded,
Err(err) => {
@@ -1345,8 +1382,8 @@ impl ChatComposer {
FooterMode::EscHint => FooterMode::EscHint,
FooterMode::ShortcutOverlay => FooterMode::ShortcutOverlay,
FooterMode::CtrlCReminder => FooterMode::CtrlCReminder,
-FooterMode::ShortcutPrompt if self.ctrl_c_quit_hint => FooterMode::CtrlCReminder,
-FooterMode::ShortcutPrompt if !self.is_empty() => FooterMode::Empty,
+FooterMode::ShortcutSummary if self.ctrl_c_quit_hint => FooterMode::CtrlCReminder,
+FooterMode::ShortcutSummary if !self.is_empty() => FooterMode::ContextOnly,
other => other,
}
}
@@ -1779,11 +1816,11 @@ mod tests {
// Toggle back to prompt mode so subsequent typing captures characters.
let _ = composer.handle_key_event(KeyEvent::new(KeyCode::Char('?'), KeyModifiers::NONE));
-assert_eq!(composer.footer_mode, FooterMode::ShortcutPrompt);
+assert_eq!(composer.footer_mode, FooterMode::ShortcutSummary);
type_chars_humanlike(&mut composer, &['h']);
assert_eq!(composer.textarea.text(), "h");
-assert_eq!(composer.footer_mode(), FooterMode::Empty);
+assert_eq!(composer.footer_mode(), FooterMode::ContextOnly);
let (result, needs_redraw) =
composer.handle_key_event(KeyEvent::new(KeyCode::Char('?'), KeyModifiers::NONE));
@@ -1792,8 +1829,8 @@ mod tests {
std::thread::sleep(ChatComposer::recommended_paste_flush_delay());
let _ = composer.flush_paste_burst_if_due();
assert_eq!(composer.textarea.text(), "h?");
-assert_eq!(composer.footer_mode, FooterMode::ShortcutPrompt);
-assert_eq!(composer.footer_mode(), FooterMode::Empty);
+assert_eq!(composer.footer_mode, FooterMode::ShortcutSummary);
+assert_eq!(composer.footer_mode(), FooterMode::ContextOnly);
}
#[test]
@@ -2581,7 +2618,7 @@ mod tests {
let (result, _) =
composer.handle_key_event(KeyEvent::new(KeyCode::Enter, KeyModifiers::NONE));
match result {
-InputResult::Submitted(text) => assert_eq!(text, "[image 32x16 PNG] hi"),
+InputResult::Submitted(text) => assert_eq!(text, "[image1.png 32x16] hi"),
_ => panic!("expected Submitted"),
}
let imgs = composer.take_recent_submission_images();
@@ -2604,7 +2641,7 @@ mod tests {
let (result, _) =
composer.handle_key_event(KeyEvent::new(KeyCode::Enter, KeyModifiers::NONE));
match result {
-InputResult::Submitted(text) => assert_eq!(text, "[image 10x5 PNG]"),
+InputResult::Submitted(text) => assert_eq!(text, "[image2.png 10x5]"),
_ => panic!("expected Submitted"),
}
let imgs = composer.take_recent_submission_images();
@@ -2677,7 +2714,12 @@ mod tests {
composer.handle_key_event(KeyEvent::new(KeyCode::Backspace, KeyModifiers::NONE));
assert_eq!(composer.attached_images.len(), 1);
-assert!(composer.textarea.text().starts_with("[image 10x5 PNG]"));
+assert!(
+composer
+.textarea
+.text()
+.starts_with("[image_multibyte.png 10x5]")
+);
}
#[test]
@@ -2700,21 +2742,31 @@ mod tests {
composer.handle_paste(" ".into());
composer.attach_image(path2.clone(), 10, 5, "PNG");
-let ph = composer.attached_images[0].placeholder.clone();
+let placeholder1 = composer.attached_images[0].placeholder.clone();
+let placeholder2 = composer.attached_images[1].placeholder.clone();
let text = composer.textarea.text().to_string();
-let start1 = text.find(&ph).expect("first placeholder present");
-let end1 = start1 + ph.len();
+let start1 = text.find(&placeholder1).expect("first placeholder present");
+let end1 = start1 + placeholder1.len();
composer.textarea.set_cursor(end1);
// Backspace should delete the first placeholder and its mapping.
composer.handle_key_event(KeyEvent::new(KeyCode::Backspace, KeyModifiers::NONE));
let new_text = composer.textarea.text().to_string();
-assert_eq!(1, new_text.matches(&ph).count(), "one placeholder remains");
+assert_eq!(
+0,
+new_text.matches(&placeholder1).count(),
+"first placeholder removed"
+);
+assert_eq!(
+1,
+new_text.matches(&placeholder2).count(),
+"second placeholder remains"
+);
assert_eq!(
vec![AttachedImage {
path: path2,
-placeholder: "[image 10x5 PNG]".to_string()
+placeholder: "[image_dup2.png 10x5]".to_string()
}],
composer.attached_images,
"one image mapping remains"
@@ -2741,7 +2793,12 @@ mod tests {
let needs_redraw = composer.handle_paste(tmp_path.to_string_lossy().to_string());
assert!(needs_redraw);
-assert!(composer.textarea.text().starts_with("[image 3x2 PNG] "));
+assert!(
+composer
+.textarea
+.text()
+.starts_with("[codex_tui_test_paste_image.png 3x2] ")
+);
let imgs = composer.take_recent_submission_images();
assert_eq!(imgs, vec![tmp_path]);
@@ -2853,6 +2910,76 @@ mod tests {
assert!(composer.textarea.is_empty());
}
#[test]
fn slash_path_input_submits_without_command_error() {
use crossterm::event::KeyCode;
use crossterm::event::KeyEvent;
use crossterm::event::KeyModifiers;
let (tx, mut rx) = unbounded_channel::<AppEvent>();
let sender = AppEventSender::new(tx);
let mut composer = ChatComposer::new(
true,
sender,
false,
"Ask Codex to do anything".to_string(),
false,
);
composer
.textarea
.set_text("/Users/example/project/src/main.rs");
let (result, _needs_redraw) =
composer.handle_key_event(KeyEvent::new(KeyCode::Enter, KeyModifiers::NONE));
if let InputResult::Submitted(text) = result {
assert_eq!(text, "/Users/example/project/src/main.rs");
} else {
panic!("expected Submitted");
}
assert!(composer.textarea.is_empty());
match rx.try_recv() {
Ok(event) => panic!("unexpected event: {event:?}"),
Err(tokio::sync::mpsc::error::TryRecvError::Empty) => {}
Err(err) => panic!("unexpected channel state: {err:?}"),
}
}
#[test]
fn slash_with_leading_space_submits_as_text() {
use crossterm::event::KeyCode;
use crossterm::event::KeyEvent;
use crossterm::event::KeyModifiers;
let (tx, mut rx) = unbounded_channel::<AppEvent>();
let sender = AppEventSender::new(tx);
let mut composer = ChatComposer::new(
true,
sender,
false,
"Ask Codex to do anything".to_string(),
false,
);
composer.textarea.set_text(" /this-looks-like-a-command");
let (result, _needs_redraw) =
composer.handle_key_event(KeyEvent::new(KeyCode::Enter, KeyModifiers::NONE));
if let InputResult::Submitted(text) = result {
assert_eq!(text, "/this-looks-like-a-command");
} else {
panic!("expected Submitted");
}
assert!(composer.textarea.is_empty());
match rx.try_recv() {
Ok(event) => panic!("unexpected event: {event:?}"),
Err(tokio::sync::mpsc::error::TryRecvError::Empty) => {}
Err(err) => panic!("unexpected channel state: {err:?}"),
}
}
#[test]
fn custom_prompt_invalid_args_reports_error() {
let (tx, mut rx) = unbounded_channel::<AppEvent>();

View File

@@ -59,14 +59,15 @@ impl ChatComposerHistory {
return;
}
-self.history_cursor = None;
-self.last_history_text = None;
// Avoid inserting a duplicate if identical to the previous entry.
if self.local_history.last().is_some_and(|prev| prev == text) {
return;
}
self.local_history.push(text.to_string());
+self.history_cursor = None;
+self.last_history_text = None;
}
/// Should Up/Down key presses be interpreted as history navigation given

View File

@@ -23,10 +23,10 @@ pub(crate) struct FooterProps {
#[derive(Clone, Copy, Debug, Eq, PartialEq)]
pub(crate) enum FooterMode {
CtrlCReminder,
-ShortcutPrompt,
+ShortcutSummary,
ShortcutOverlay,
EscHint,
-Empty,
+ContextOnly,
}
pub(crate) fn toggle_shortcut_mode(current: FooterMode, ctrl_c_hint: bool) -> FooterMode {
@@ -35,7 +35,7 @@ pub(crate) fn toggle_shortcut_mode(current: FooterMode, ctrl_c_hint: bool) -> Fo
}
match current {
-FooterMode::ShortcutOverlay | FooterMode::CtrlCReminder => FooterMode::ShortcutPrompt,
+FooterMode::ShortcutOverlay | FooterMode::CtrlCReminder => FooterMode::ShortcutSummary,
_ => FooterMode::ShortcutOverlay,
}
}
@@ -53,7 +53,7 @@ pub(crate) fn reset_mode_after_activity(current: FooterMode) -> FooterMode {
FooterMode::EscHint
| FooterMode::ShortcutOverlay
| FooterMode::CtrlCReminder
-| FooterMode::Empty => FooterMode::ShortcutPrompt,
+| FooterMode::ContextOnly => FooterMode::ShortcutSummary,
other => other,
}
}
@@ -72,26 +72,29 @@ pub(crate) fn render_footer(area: Rect, buf: &mut Buffer, props: FooterProps) {
}
fn footer_lines(props: FooterProps) -> Vec<Line<'static>> {
// Show the context indicator on the left, appended after the primary hint
// (e.g., "? for shortcuts"). Keep it visible even when typing (i.e., when
// the shortcut hint is hidden). Hide it only for the multi-line
// ShortcutOverlay.
match props.mode {
FooterMode::CtrlCReminder => vec![ctrl_c_reminder_line(CtrlCReminderState {
is_task_running: props.is_task_running,
})],
-FooterMode::ShortcutPrompt => {
-if props.is_task_running {
-vec![context_window_line(props.context_window_percent)]
-} else {
-vec![Line::from(vec![
-key_hint::plain(KeyCode::Char('?')).into(),
-" for shortcuts".dim(),
-])]
-}
+FooterMode::ShortcutSummary => {
+let mut line = context_window_line(props.context_window_percent);
+line.push_span(" · ".dim());
+line.extend(vec![
+key_hint::plain(KeyCode::Char('?')).into(),
+" for shortcuts".dim(),
+]);
+vec![line]
}
FooterMode::ShortcutOverlay => shortcut_overlay_lines(ShortcutsState {
use_shift_enter_hint: props.use_shift_enter_hint,
esc_backtrack_hint: props.esc_backtrack_hint,
}),
FooterMode::EscHint => vec![esc_hint_line(props.esc_backtrack_hint)],
-FooterMode::Empty => Vec::new(),
+FooterMode::ContextOnly => vec![context_window_line(props.context_window_percent)],
}
}
@@ -219,18 +222,8 @@ fn build_columns(entries: Vec<Line<'static>>) -> Vec<Line<'static>> {
}
fn context_window_line(percent: Option<u8>) -> Line<'static> {
-let mut spans: Vec<Span<'static>> = Vec::new();
-match percent {
-Some(percent) => {
-spans.push(format!("{percent}%").dim());
-spans.push(" context left".dim());
-}
-None => {
-spans.push(key_hint::plain(KeyCode::Char('?')).into());
-spans.push(" for shortcuts".dim());
-}
-}
-Line::from(spans)
+let percent = percent.unwrap_or(100);
+Line::from(vec![Span::from(format!("{percent}% context left")).dim()])
}
#[derive(Clone, Copy, Debug, Eq, PartialEq)]
@@ -402,7 +395,7 @@ mod tests {
snapshot_footer(
"footer_shortcuts_default",
FooterProps {
-mode: FooterMode::ShortcutPrompt,
+mode: FooterMode::ShortcutSummary,
esc_backtrack_hint: false,
use_shift_enter_hint: false,
is_task_running: false,
@@ -468,7 +461,7 @@ mod tests {
snapshot_footer(
"footer_shortcuts_context_running",
FooterProps {
-mode: FooterMode::ShortcutPrompt,
+mode: FooterMode::ShortcutSummary,
esc_backtrack_hint: false,
use_shift_enter_hint: false,
is_task_running: true,

View File

@@ -87,7 +87,7 @@ impl ListSelectionView {
if params.title.is_some() || params.subtitle.is_some() {
let title = params.title.map(|title| Line::from(title.bold()));
let subtitle = params.subtitle.map(|subtitle| Line::from(subtitle.dim()));
-header = Box::new(ColumnRenderable::new([
+header = Box::new(ColumnRenderable::with([
header,
Box::new(title),
Box::new(subtitle),

View File

@@ -11,4 +11,4 @@ expression: terminal.backend()
" "
" "
" "
-" "
+" 100% context left "

View File

@@ -1,6 +1,5 @@
---
source: tui/src/bottom_pane/chat_composer.rs
-assertion_line: 1938
expression: terminal.backend()
---
" "
@@ -12,4 +11,4 @@ expression: terminal.backend()
" "
" "
" "
-" ? for shortcuts "
+" 100% context left · ? for shortcuts "

View File

@@ -1,6 +1,5 @@
---
source: tui/src/bottom_pane/chat_composer.rs
-assertion_line: 1497
expression: terminal.backend()
---
" "

View File

@@ -1,6 +1,5 @@
---
source: tui/src/bottom_pane/chat_composer.rs
-assertion_line: 1497
expression: terminal.backend()
---
" "

View File

@@ -1,6 +1,5 @@
---
source: tui/src/bottom_pane/chat_composer.rs
-assertion_line: 1497
expression: terminal.backend()
---
" "

View File

@@ -1,6 +1,5 @@
---
source: tui/src/bottom_pane/chat_composer.rs
-assertion_line: 1497
expression: terminal.backend()
---
" "

View File

@@ -1,6 +1,5 @@
---
source: tui/src/bottom_pane/chat_composer.rs
-assertion_line: 1497
expression: terminal.backend()
---
" "

View File

@@ -10,3 +10,4 @@ expression: terminal.backend()
" "
" "
" "
" 100% context left "

View File

@@ -1,6 +1,5 @@
---
source: tui/src/bottom_pane/chat_composer.rs
-assertion_line: 1497
expression: terminal.backend()
---
" "

View File

@@ -11,4 +11,4 @@ expression: terminal.backend()
" "
" "
" "
-" "
+" 100% context left "

View File

@@ -11,4 +11,4 @@ expression: terminal.backend()
" "
" "
" "
-" "
+" 100% context left "

View File

@@ -11,4 +11,4 @@ expression: terminal.backend()
" "
" "
" "
-" "
+" 100% context left "

View File

@@ -1,6 +1,5 @@
---
source: tui/src/bottom_pane/footer.rs
-assertion_line: 389
expression: terminal.backend()
---
" ctrl + c again to quit "

View File

@@ -1,6 +1,5 @@
---
source: tui/src/bottom_pane/footer.rs
-assertion_line: 389
expression: terminal.backend()
---
" ctrl + c again to interrupt "

View File

@@ -1,6 +1,5 @@
---
source: tui/src/bottom_pane/footer.rs
-assertion_line: 389
expression: terminal.backend()
---
" esc esc to edit previous message "

View File

@@ -1,6 +1,5 @@
---
source: tui/src/bottom_pane/footer.rs
-assertion_line: 389
expression: terminal.backend()
---
" esc again to edit previous message "

View File

@@ -2,4 +2,4 @@
source: tui/src/bottom_pane/footer.rs
expression: terminal.backend()
---
" 72% context left "
" 72% context left · ? for shortcuts "

View File

@@ -1,6 +1,5 @@
---
source: tui/src/bottom_pane/footer.rs
-assertion_line: 389
expression: terminal.backend()
---
-" ? for shortcuts "
+" 100% context left · ? for shortcuts "

View File

@@ -8,4 +8,4 @@ expression: "render_snapshot(&pane, area)"
Ask Codex to do anything
-? for shortcuts
+100% context left · ? for sh

View File

@@ -2,4 +2,4 @@
source: tui/src/bottom_pane/mod.rs
expression: "render_snapshot(&pane, area1)"
---
-Ask Codex to do an
+Ask Codex to do a

View File

@@ -3,4 +3,4 @@ source: tui/src/bottom_pane/mod.rs
expression: "render_snapshot(&pane, area2)"
---
-Ask Codex to do an
+Ask Codex to do a

View File

@@ -799,16 +799,22 @@ impl TextArea {
}
pub(crate) fn beginning_of_previous_word(&self) -> usize {
-if let Some(first_non_ws) = self.text[..self.cursor_pos].rfind(|c: char| !c.is_whitespace())
-{
-let candidate = self.text[..first_non_ws]
-.rfind(|c: char| c.is_whitespace())
-.map(|i| i + 1)
-.unwrap_or(0);
-self.adjust_pos_out_of_elements(candidate, true)
-} else {
-0
-}
+let prefix = &self.text[..self.cursor_pos];
+let Some((first_non_ws_idx, _)) = prefix
+.char_indices()
+.rev()
+.find(|&(_, ch)| !ch.is_whitespace())
+else {
+return 0;
+};
+let before = &prefix[..first_non_ws_idx];
+let candidate = before
+.char_indices()
+.rev()
+.find(|&(_, ch)| ch.is_whitespace())
+.map(|(idx, ch)| idx + ch.len_utf8())
+.unwrap_or(0);
+self.adjust_pos_out_of_elements(candidate, true)
}
pub(crate) fn end_of_next_word(&self) -> usize {
@@ -1262,6 +1268,15 @@ mod tests {
assert_eq!(t.cursor(), 6);
}
#[test]
fn delete_backward_word_handles_narrow_no_break_space() {
let mut t = ta_with("32\u{202F}AM");
t.set_cursor(t.text().len());
t.input(KeyEvent::new(KeyCode::Backspace, KeyModifiers::ALT));
pretty_assertions::assert_eq!(t.text(), "32\u{202F}");
pretty_assertions::assert_eq!(t.cursor(), t.text().len());
}
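The fix above matters because the old code added 1 to a byte index, which can land inside a multibyte code point. A standalone illustration of the boundary arithmetic, added here for clarity:

```rust
#[test]
fn multibyte_whitespace_needs_len_utf8() {
    // U+202F NARROW NO-BREAK SPACE is three bytes in UTF-8, so `idx + 1`
    // points inside the code point, while `idx + ch.len_utf8()` lands on
    // the next character boundary.
    let s = "32\u{202F}AM";
    let (idx, ch) = s.char_indices().find(|&(_, c)| c.is_whitespace()).unwrap();
    assert_eq!(ch.len_utf8(), 3);
    assert!(s.is_char_boundary(idx + ch.len_utf8()));
    assert!(!s.is_char_boundary(idx + 1));
}
```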
#[test]
fn delete_forward_word_with_without_alt_modifier() {
let mut t = ta_with("hello world");

View File

@@ -240,6 +240,10 @@ pub(crate) struct ChatWidget {
reasoning_buffer: String,
// Accumulates full reasoning content for transcript-only recording
full_reasoning_buffer: String,
// Current status header shown in the status indicator.
current_status_header: String,
// Previous status header to restore after a transient stream retry.
retry_status_header: Option<String>,
conversation_id: Option<ConversationId>,
frame_requester: FrameRequester,
// Whether to include the initial welcome banner on session configured
@@ -303,6 +307,14 @@ impl ChatWidget {
}
}
fn set_status_header(&mut self, header: String) {
if self.current_status_header == header {
return;
}
self.current_status_header = header.clone();
self.bottom_pane.update_status_header(header);
}
// --- Small event handlers ---
fn on_session_configured(&mut self, event: codex_core::protocol::SessionConfiguredEvent) {
self.bottom_pane
@@ -352,7 +364,7 @@ impl ChatWidget {
if let Some(header) = extract_first_bold(&self.reasoning_buffer) {
// Update the shimmer header to the extracted reasoning chunk header.
-self.bottom_pane.update_status_header(header);
+self.set_status_header(header);
} else {
// Fallback while we don't yet have a bold header: leave existing header as-is.
}
@@ -386,6 +398,8 @@ impl ChatWidget {
fn on_task_started(&mut self) {
self.bottom_pane.clear_ctrl_c_quit_hint();
self.bottom_pane.set_task_running(true);
self.retry_status_header = None;
self.set_status_header(String::from("Working"));
self.full_reasoning_buffer.clear();
self.reasoning_buffer.clear();
self.request_redraw();
@@ -621,9 +635,10 @@ impl ChatWidget {
}
fn on_stream_error(&mut self, message: String) {
-// Show stream errors in the transcript so users see retry/backoff info.
-self.add_to_history(history_cell::new_stream_error_event(message));
-self.request_redraw();
+if self.retry_status_header.is_none() {
+self.retry_status_header = Some(self.current_status_header.clone());
+}
+self.set_status_header(message);
}
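The diff saves the pre-error header into `retry_status_header` but does not show where it is restored. Purely as a hypothetical sketch of the counterpart hook, inside the same `impl ChatWidget` block:

```rust
// Hypothetical restore sketch (not shown in this diff): once the stream
// recovers from the transient retry, put the saved header back.
fn on_stream_resumed(&mut self) {
    if let Some(header) = self.retry_status_header.take() {
        self.set_status_header(header);
    }
}
```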
/// Periodic tick to commit at most one queued line to history with a small delay,
@@ -928,6 +943,8 @@ impl ChatWidget {
interrupts: InterruptManager::new(),
reasoning_buffer: String::new(),
full_reasoning_buffer: String::new(),
current_status_header: String::from("Working"),
retry_status_header: None,
conversation_id: None,
queued_user_messages: VecDeque::new(),
show_welcome_banner: true,
@@ -991,6 +1008,8 @@ impl ChatWidget {
interrupts: InterruptManager::new(),
reasoning_buffer: String::new(),
full_reasoning_buffer: String::new(),
current_status_header: String::from("Working"),
retry_status_header: None,
conversation_id: None,
queued_user_messages: VecDeque::new(),
show_welcome_banner: true,
@@ -1015,20 +1034,20 @@ impl ChatWidget {
pub(crate) fn handle_key_event(&mut self, key_event: KeyEvent) {
match key_event {
KeyEvent {
-code: KeyCode::Char('c'),
-modifiers: crossterm::event::KeyModifiers::CONTROL,
+code: KeyCode::Char(c),
+modifiers,
kind: KeyEventKind::Press,
..
-} => {
+} if modifiers.contains(KeyModifiers::CONTROL) && c.eq_ignore_ascii_case(&'c') => {
self.on_ctrl_c();
return;
}
KeyEvent {
-code: KeyCode::Char('v'),
-modifiers: KeyModifiers::CONTROL,
+code: KeyCode::Char(c),
+modifiers,
kind: KeyEventKind::Press,
..
-} => {
+} if modifiers.contains(KeyModifiers::CONTROL) && c.eq_ignore_ascii_case(&'v') => {
if let Ok((path, info)) = paste_image_to_temp_png() {
self.attach_image(path, info.width, info.height, info.encoded_format.label());
}
@@ -1906,7 +1925,11 @@ impl ChatWidget {
}
fn on_list_mcp_tools(&mut self, ev: McpListToolsResponseEvent) {
-self.add_to_history(history_cell::new_mcp_tools_output(&self.config, ev.tools));
+self.add_to_history(history_cell::new_mcp_tools_output(
+&self.config,
+ev.tools,
+&ev.auth_statuses,
+));
}
fn on_list_custom_prompts(&mut self, ev: ListCustomPromptsResponseEvent) {

View File

@@ -1,6 +1,5 @@
---
source: tui/src/chatwidget/tests.rs
-assertion_line: 1152
expression: "lines[start_idx..].join(\"\\n\")"
---
• I need to check the codex-rs repository to explain why the project's binaries
@@ -33,7 +32,7 @@ expression: "lines[start_idx..].join(\"\\n\")"
│ … +1 lines
└ --- ansi-escape/Cargo.toml
[package]
-… +7 lines
+… +243 lines
] }
tracing = { version

Some files were not shown because too many files have changed in this diff.