Compare commits


16 Commits

Author SHA1 Message Date
Dylan Hurd
6ce70dbb5c scenario cleanup 2026-01-14 23:34:25 -08:00
Dylan Hurd
961270022d detach unit tests from scenarios 2026-01-14 23:34:24 -08:00
Dylan Hurd
28e8306f6a clean up test scenarios 2026-01-14 23:34:24 -08:00
Dylan Hurd
14f85c1d31 update scenarios 2026-01-14 23:34:24 -08:00
Dylan Hurd
050382098a more tests 2026-01-14 23:34:24 -08:00
Dylan Hurd
badd55aae3 fix(apply-patch) Support repeated @@ lines 2026-01-14 23:34:24 -08:00
sayan-oai
5675af5190 chore: clarify default shell for unified_exec (#8997)
The description of the `shell` arg for `exec_command` states the default
is `/bin/bash`, but AFAICT it's the user's default shell.

Default logic
[here](2a06d64bc9/codex-rs/core/src/tools/handlers/unified_exec.rs (L123)).

EDIT: #9004 has an alternative where we inform the model of the default
shell itself.
2026-01-13 21:37:41 -08:00
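A minimal sketch of the fallback this note describes: consult the user's `$SHELL` and only then fall back to `/bin/bash`. The env-var lookup and fallback order here are illustrative assumptions, not the actual `unified_exec` code.

```rust
use std::env;

/// Resolve the shell used when the `shell` arg is omitted: prefer the
/// user's default shell from $SHELL, then fall back to /bin/bash.
fn default_shell() -> String {
    env::var("SHELL")
        .ok()
        .filter(|s| !s.is_empty())
        .unwrap_or_else(|| "/bin/bash".to_string())
}

fn main() {
    println!("default shell: {}", default_shell());
}
```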
Eric Traut
31d9b6f4d2 Improve handling of config and rules errors for app server clients (#9182)
When an invalid config.toml key or value is detected, the CLI currently
just quits. This leaves the VSCE in a dead state.

This PR changes the behavior to not quit and instead bubble the config
error up to users so it is actionable. It also surfaces errors related to
"rules" parsing.

This allows us to surface these errors to users in the VSCE, like this:

[Screenshot 2026-01-13 at 4 29 22 PM]

[Screenshot 2026-01-13 at 4 45 06 PM]
2026-01-13 17:57:09 -08:00
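A condensed sketch of the fallback flow this PR describes: record a `ConfigWarningNotification` and continue with defaults instead of quitting. The closure-based `load` parameter and the local struct are simplifications of the real `ConfigBuilder`/`Config` types shown in the diffs below.

```rust
/// Simplified mirror of the notification payload added in this PR.
struct ConfigWarningNotification {
    summary: String,
    details: Option<String>,
}

/// Load config, falling back to defaults and recording a warning
/// instead of exiting when the user's config.toml is invalid.
fn load_config_or_default(
    load: impl FnOnce() -> Result<String, String>, // stand-in for ConfigBuilder::build
    warnings: &mut Vec<ConfigWarningNotification>,
) -> String {
    match load() {
        Ok(config) => config,
        Err(err) => {
            warnings.push(ConfigWarningNotification {
                summary: "Invalid configuration; using defaults.".to_string(),
                details: Some(err),
            });
            // Stand-in for Config::load_default_with_cli_overrides.
            "default config".to_string()
        }
    }
}

fn main() {
    let mut warnings = Vec::new();
    let config = load_config_or_default(|| Err("bad config.toml key".into()), &mut warnings);
    for w in &warnings {
        eprintln!("{} {:?}", w.summary, w.details);
    }
    println!("loaded: {config}");
}
```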
Ahmed Ibrahim
5a82a72d93 Use offline cache for tui migrations (#9186) 2026-01-13 17:44:12 -08:00
Josh McKinney
ce49e92848 fix(tui): harden paste-burst state transitions (#9124)
User-facing symptom: On terminals that deliver pastes as rapid
KeyCode::Char/Enter streams (notably Windows), paste-burst transient state
can leak into the next input. Users can see Enter insert a newline when
they meant to submit, or see characters appear late / handled through the
wrong path.

System problem: PasteBurst is time-based. Clearing only the
classification window (e.g. via clear_window_after_non_char()) can erase
last_plain_char_time without emitting buffered text. If a buffer is still
non-empty after that, flush_if_due() no longer has a timeout clock to
flush against, so the buffer can get "stuck" until another plain char
arrives.

This was surfaced while adding deterministic regression tests for
paste-burst behavior.

Fix: when disabling burst detection, defuse any in-flight burst state:
flush held/buffered text through handle_paste() (so it follows normal
paste integration), then clear timing and Enter suppression.

Document the rationale inline and update docs/tui-chat-composer.md so
"disable_paste_burst" matches the actual behavior.
2026-01-14 01:42:21 +00:00
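A minimal sketch of the "defuse" fix described above; the field and method names are simplified stand-ins for the real `PasteBurst` state, not the actual TUI code.

```rust
use std::time::Instant;

/// Simplified stand-in for the TUI's paste-burst state.
struct PasteBurst {
    buffer: String,
    last_plain_char_time: Option<Instant>,
    suppress_enter: bool,
}

impl PasteBurst {
    /// Disabling burst detection must not strand buffered text: flush
    /// anything held through the normal paste path first, then clear
    /// timing and Enter suppression.
    fn disable(&mut self, handle_paste: impl FnOnce(String)) {
        if !self.buffer.is_empty() {
            handle_paste(std::mem::take(&mut self.buffer));
        }
        self.last_plain_char_time = None;
        self.suppress_enter = false;
    }
}

fn main() {
    let mut burst = PasteBurst {
        buffer: "pasted text".to_string(),
        last_plain_char_time: Some(Instant::now()),
        suppress_enter: true,
    };
    burst.disable(|text| println!("flushed via handle_paste: {text}"));
    assert!(burst.buffer.is_empty());
}
```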
Ahmed Ibrahim
4d787a2cc2 Renew cache ttl on etag match (#9174)
So we don't make unnecessary fetches when the server's etag is unchanged.
2026-01-14 01:21:41 +00:00
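A rough sketch of the idea with hypothetical names (the real implementation is the `renew_cache_ttl` method in the cache-manager diff further down): on an etag match, bump `fetched_at` and keep the cached body instead of re-downloading.

```rust
use std::time::Instant;

struct Cache {
    etag: Option<String>,
    fetched_at: Instant,
    body: Vec<u8>,
}

/// On an etag match, renew the TTL clock and keep the cached body;
/// only fetch when the etag actually changed.
fn on_response_etag(cache: &mut Cache, server_etag: &str, fetch: impl FnOnce() -> Vec<u8>) {
    if cache.etag.as_deref() == Some(server_etag) {
        cache.fetched_at = Instant::now(); // renew TTL, skip the fetch
    } else {
        cache.body = fetch();
        cache.etag = Some(server_etag.to_string());
        cache.fetched_at = Instant::now();
    }
}

fn main() {
    let mut cache = Cache {
        etag: Some("abc".to_string()),
        fetched_at: Instant::now(),
        body: Vec::new(),
    };
    on_response_etag(&mut cache, "abc", || unreachable!("etag matched; no fetch"));
}
```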
Ahmed Ibrahim
c96c26cf5b [CODEX-4427] improve parsed commands (#8933)
**Make command summaries more accurate by distinguishing
list/search/read operations across common CLI tools.** (A toy
classification sketch follows this entry.)
- Added parsing helpers to centralize operand/flag handling (cd_target,
sed_read_path, first_non_flag_operand, single_non_flag_operand,
parse_grep_like, awk_data_file_operand, python_walks_files,
is_python_command) and reused them in summarize_main_tokens/shell
parsing.
- Newly parsed list-files commands: git ls-files, rg --files (incl.
rga/ripgrep-all), eza/exa, tree, du, python -c file-walks, plus fd/find
map to ListFiles when no query.
- Newly parsed search commands: git grep, grep/egrep/fgrep, ag/ack/pt,
rg/rga files-with-matches flags (-l/-L, --files-with-matches,
--files-without-match), with improved flag skipping to avoid
misclassifying args as paths.
- Newly parsed read commands: bat/batcat, less, more, awk <file>, and
more flexible sed -n range + file detection.
- Refined “small formatting command” detection for awk/sed, handled cd
with -- or multiple operands, and kept pipeline summaries focused on the
primary command.
2026-01-13 16:59:33 -08:00
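A toy illustration of the classification direction; the real helpers named above are far more thorough, and this `ParsedCommand` shape and tool list are simplified assumptions.

```rust
#[derive(Debug, PartialEq)]
enum ParsedCommand {
    ListFiles,
    Search,
    Read,
    Other,
}

/// Classify a command by its leading token, roughly in the spirit of
/// summarize_main_tokens and the helpers above.
fn summarize(tokens: &[&str]) -> ParsedCommand {
    match tokens.first().copied() {
        Some("tree") | Some("eza") | Some("exa") | Some("du") => ParsedCommand::ListFiles,
        Some("grep") | Some("egrep") | Some("fgrep") | Some("ag") | Some("ack") => {
            ParsedCommand::Search
        }
        // `rg --files` lists files; with a pattern it searches.
        Some("rg") | Some("rga") => {
            if tokens.iter().any(|t| *t == "--files") {
                ParsedCommand::ListFiles
            } else {
                ParsedCommand::Search
            }
        }
        Some("bat") | Some("batcat") | Some("less") | Some("more") => ParsedCommand::Read,
        _ => ParsedCommand::Other,
    }
}

fn main() {
    assert_eq!(summarize(&["rg", "--files"]), ParsedCommand::ListFiles);
    assert_eq!(summarize(&["rg", "TODO", "src/"]), ParsedCommand::Search);
    assert_eq!(summarize(&["bat", "README.md"]), ParsedCommand::Read);
}
```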
Ahmed Ibrahim
7e33ac7eb6 clean models manager (#9168)
Keep only the following methods:
- `list_models`: get the currently available models
- `try_list_models`: sync version with no refresh, for TUI use
- `get_default_model`: get the default model (should be tightened to
core and received on session configuration)
- `get_model_info`: get `ModelInfo` for a specific model (should be
tightened to core but used in tests)
- `refresh_if_new_etag`: trigger a refresh when the etag differs

Also move the cache into its own struct. (A paraphrased sketch of this
surface follows this entry.)
2026-01-13 16:55:33 -08:00
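Paraphrased as a sketch, the trimmed-down surface reads roughly like the trait below; the signatures are inferred from the method list and the diffs further down, and the placeholder types stand in for the real core types.

```rust
// Placeholder types standing in for codex-core's real ones.
struct Config;
struct ModelPreset;
struct ModelInfo;
struct TryLockError;
enum RefreshStrategy {
    Online,
    Offline,
    OnlineIfUncached,
}

trait ModelsManagerApi {
    /// Current available models, refreshed per `RefreshStrategy`.
    async fn list_models(&self, config: &Config, refresh: RefreshStrategy) -> Vec<ModelPreset>;
    /// Sync variant with no refresh, for TUI use.
    fn try_list_models(&self, config: &Config) -> Result<Vec<ModelPreset>, TryLockError>;
    /// The default model (to be tightened to core).
    async fn get_default_model(&self, model: &str, config: &Config) -> String;
    /// `ModelInfo` for a specific model (to be tightened to core).
    async fn get_model_info(&self, model: &str, config: &Config) -> Option<ModelInfo>;
    /// Trigger a refresh when the etag differs.
    async fn refresh_if_new_etag(&self, etag: String, config: &Config);
}
```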
github-actions[bot]
ebbbee70c6 Update models.json (#9136)
Automated update of models.json.

Co-authored-by: aibrahim-oai <219906144+aibrahim-oai@users.noreply.github.com>
2026-01-13 16:46:24 -08:00
pakrym-oai
5a70b1568f WebSocket test server script (#9175)
A very simple test server for exercising the Responses API WebSocket
transport integration. (A generic echo-server sketch follows this entry.)
2026-01-13 16:21:14 -08:00
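For flavor, a generic tokio-tungstenite echo server of the same shape (a sketch under assumed dependencies, not the script added in this PR):

```rust
// Assumed Cargo deps: tokio = { version = "1", features = ["full"] },
// tokio-tungstenite = "0.21", futures-util = "0.3".
use futures_util::{SinkExt, StreamExt};
use tokio::net::TcpListener;

#[tokio::main]
async fn main() -> std::io::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:9001").await?;
    println!("listening on ws://127.0.0.1:9001");
    while let Ok((stream, addr)) = listener.accept().await {
        tokio::spawn(async move {
            // Perform the WebSocket handshake, then echo every message back.
            let mut ws = tokio_tungstenite::accept_async(stream)
                .await
                .expect("websocket handshake failed");
            println!("client connected: {addr}");
            while let Some(Ok(msg)) = ws.next().await {
                if (msg.is_text() || msg.is_binary()) && ws.send(msg).await.is_err() {
                    break;
                }
            }
        });
    }
    Ok(())
}
```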
Michael Bolin
903a0c0933 feat: add bazel-codex entry to justfile (#9177)
This is less straightforward than I realized, so I created an entry for
this in our `justfile`.

Verified that running `just bazel-codex` from anywhere in the repo uses
the user's `$PWD` as the directory in which to run Codex.

While here, I updated the `MODULE.bazel.lock`, though it looks like I
need to add a CI job that runs `bazel mod deps --lockfile_mode=error` or
something similar.
2026-01-13 16:16:22 -08:00
64 changed files with 2573 additions and 562 deletions

MODULE.bazel.lock (generated)

@@ -451,6 +451,7 @@
"darling_macro_0.20.11": "{\"dependencies\":[{\"name\":\"darling_core\",\"req\":\"=0.20.11\"},{\"name\":\"quote\",\"req\":\"^1.0.18\"},{\"name\":\"syn\",\"req\":\"^2.0.15\"}],\"features\":{}}",
"darling_macro_0.21.3": "{\"dependencies\":[{\"name\":\"darling_core\",\"req\":\"=0.21.3\"},{\"name\":\"quote\",\"req\":\"^1.0.18\"},{\"name\":\"syn\",\"req\":\"^2.0.15\"}],\"features\":{}}",
"darling_macro_0.23.0": "{\"dependencies\":[{\"name\":\"darling_core\",\"req\":\"=0.23.0\"},{\"name\":\"quote\",\"req\":\"^1.0.18\"},{\"name\":\"syn\",\"req\":\"^2.0.15\"}],\"features\":{}}",
"data-encoding_2.10.0": "{\"dependencies\":[],\"features\":{\"alloc\":[],\"default\":[\"std\"],\"std\":[\"alloc\"]}}",
"dbus-secret-service_4.1.0": "{\"dependencies\":[{\"name\":\"aes\",\"optional\":true,\"req\":\"^0.8\"},{\"features\":[\"std\"],\"name\":\"block-padding\",\"optional\":true,\"req\":\"^0.3\"},{\"features\":[\"block-padding\",\"alloc\"],\"name\":\"cbc\",\"optional\":true,\"req\":\"^0.1\"},{\"name\":\"dbus\",\"req\":\"^0.9\"},{\"name\":\"fastrand\",\"optional\":true,\"req\":\"^2.3\"},{\"name\":\"hkdf\",\"optional\":true,\"req\":\"^0.12\"},{\"name\":\"num\",\"optional\":true,\"req\":\"^0.4\"},{\"name\":\"once_cell\",\"optional\":true,\"req\":\"^1\"},{\"name\":\"openssl\",\"optional\":true,\"req\":\"^0.10.55\"},{\"name\":\"sha2\",\"optional\":true,\"req\":\"^0.10\"},{\"features\":[\"derive\"],\"name\":\"zeroize\",\"req\":\"^1.8\"}],\"features\":{\"crypto-openssl\":[\"dep:fastrand\",\"dep:num\",\"dep:once_cell\",\"dep:openssl\"],\"crypto-rust\":[\"dep:aes\",\"dep:block-padding\",\"dep:cbc\",\"dep:fastrand\",\"dep:hkdf\",\"dep:num\",\"dep:once_cell\",\"dep:sha2\"],\"vendored\":[\"dbus/vendored\",\"openssl?/vendored\"]}}",
"dbus_0.9.9": "{\"dependencies\":[{\"name\":\"futures-channel\",\"optional\":true,\"req\":\"^0.3\"},{\"name\":\"futures-executor\",\"optional\":true,\"req\":\"^0.3\"},{\"default_features\":false,\"name\":\"futures-util\",\"optional\":true,\"req\":\"^0.3\"},{\"name\":\"libc\",\"req\":\"^0.2.66\"},{\"name\":\"libdbus-sys\",\"req\":\"^0.2.6\"},{\"kind\":\"dev\",\"name\":\"tempfile\",\"req\":\"^3\"},{\"features\":[\"Win32_Networking_WinSock\"],\"name\":\"windows-sys\",\"req\":\"^0.59.0\",\"target\":\"cfg(windows)\"}],\"features\":{\"futures\":[\"futures-util\",\"futures-channel\"],\"no-string-validation\":[],\"stdfd\":[],\"vendored\":[\"libdbus-sys/vendored\"]}}",
"deadpool-runtime_0.1.4": "{\"dependencies\":[{\"features\":[\"unstable\"],\"name\":\"async-std_1\",\"optional\":true,\"package\":\"async-std\",\"req\":\"^1.0\"},{\"features\":[\"time\",\"rt\"],\"name\":\"tokio_1\",\"optional\":true,\"package\":\"tokio\",\"req\":\"^1.0\"}],\"features\":{}}",
@@ -906,6 +907,7 @@
"tokio-rustls_0.26.2": "{\"dependencies\":[{\"kind\":\"dev\",\"name\":\"argh\",\"req\":\"^0.1.1\"},{\"kind\":\"dev\",\"name\":\"futures-util\",\"req\":\"^0.3.1\"},{\"kind\":\"dev\",\"name\":\"lazy_static\",\"req\":\"^1.1\"},{\"features\":[\"pem\"],\"kind\":\"dev\",\"name\":\"rcgen\",\"req\":\"^0.13\"},{\"default_features\":false,\"features\":[\"std\"],\"name\":\"rustls\",\"req\":\"^0.23.22\"},{\"name\":\"tokio\",\"req\":\"^1.0\"},{\"features\":[\"full\"],\"kind\":\"dev\",\"name\":\"tokio\",\"req\":\"^1.0\"},{\"kind\":\"dev\",\"name\":\"webpki-roots\",\"req\":\"^0.26\"}],\"features\":{\"aws-lc-rs\":[\"aws_lc_rs\"],\"aws_lc_rs\":[\"rustls/aws_lc_rs\"],\"default\":[\"logging\",\"tls12\",\"aws_lc_rs\"],\"early-data\":[],\"fips\":[\"rustls/fips\"],\"logging\":[\"rustls/logging\"],\"ring\":[\"rustls/ring\"],\"tls12\":[\"rustls/tls12\"]}}",
"tokio-stream_0.1.18": "{\"dependencies\":[{\"kind\":\"dev\",\"name\":\"async-stream\",\"req\":\"^0.3\"},{\"default_features\":false,\"kind\":\"dev\",\"name\":\"futures\",\"req\":\"^0.3\"},{\"name\":\"futures-core\",\"req\":\"^0.3.0\"},{\"kind\":\"dev\",\"name\":\"parking_lot\",\"req\":\"^0.12.0\"},{\"name\":\"pin-project-lite\",\"req\":\"^0.2.11\"},{\"features\":[\"sync\"],\"name\":\"tokio\",\"req\":\"^1.15.0\"},{\"features\":[\"full\",\"test-util\"],\"kind\":\"dev\",\"name\":\"tokio\",\"req\":\"^1.2.0\"},{\"kind\":\"dev\",\"name\":\"tokio-test\",\"req\":\"^0.4\"},{\"name\":\"tokio-util\",\"optional\":true,\"req\":\"^0.7.0\"}],\"features\":{\"default\":[\"time\"],\"fs\":[\"tokio/fs\"],\"full\":[\"time\",\"net\",\"io-util\",\"fs\",\"sync\",\"signal\"],\"io-util\":[\"tokio/io-util\"],\"net\":[\"tokio/net\"],\"signal\":[\"tokio/signal\"],\"sync\":[\"tokio/sync\",\"tokio-util\"],\"time\":[\"tokio/time\"]}}",
"tokio-test_0.4.4": "{\"dependencies\":[{\"name\":\"async-stream\",\"req\":\"^0.3.3\"},{\"name\":\"bytes\",\"req\":\"^1.0.0\"},{\"name\":\"futures-core\",\"req\":\"^0.3.0\"},{\"kind\":\"dev\",\"name\":\"futures-util\",\"req\":\"^0.3.0\"},{\"features\":[\"rt\",\"sync\",\"time\",\"test-util\"],\"name\":\"tokio\",\"req\":\"^1.2.0\"},{\"features\":[\"full\"],\"kind\":\"dev\",\"name\":\"tokio\",\"req\":\"^1.2.0\"},{\"name\":\"tokio-stream\",\"req\":\"^0.1.1\"}],\"features\":{}}",
"tokio-tungstenite_0.21.0": "{\"dependencies\":[{\"kind\":\"dev\",\"name\":\"env_logger\",\"req\":\"^0.10.0\"},{\"kind\":\"dev\",\"name\":\"futures-channel\",\"req\":\"^0.3.28\"},{\"default_features\":false,\"features\":[\"sink\",\"std\"],\"name\":\"futures-util\",\"req\":\"^0.3.28\"},{\"default_features\":false,\"features\":[\"http1\",\"server\",\"tcp\"],\"kind\":\"dev\",\"name\":\"hyper\",\"req\":\"^0.14.25\"},{\"name\":\"log\",\"req\":\"^0.4.17\"},{\"name\":\"native-tls-crate\",\"optional\":true,\"package\":\"native-tls\",\"req\":\"^0.2.11\"},{\"name\":\"rustls\",\"optional\":true,\"req\":\"^0.22.0\"},{\"name\":\"rustls-native-certs\",\"optional\":true,\"req\":\"^0.7.0\"},{\"name\":\"rustls-pki-types\",\"optional\":true,\"req\":\"^1.0\"},{\"default_features\":false,\"features\":[\"io-util\"],\"name\":\"tokio\",\"req\":\"^1.0.0\"},{\"default_features\":false,\"features\":[\"io-std\",\"macros\",\"net\",\"rt-multi-thread\",\"time\"],\"kind\":\"dev\",\"name\":\"tokio\",\"req\":\"^1.27.0\"},{\"name\":\"tokio-native-tls\",\"optional\":true,\"req\":\"^0.3.1\"},{\"name\":\"tokio-rustls\",\"optional\":true,\"req\":\"^0.25.0\"},{\"default_features\":false,\"name\":\"tungstenite\",\"req\":\"^0.21.0\"},{\"kind\":\"dev\",\"name\":\"url\",\"req\":\"^2.3.1\"},{\"name\":\"webpki-roots\",\"optional\":true,\"req\":\"^0.26.0\"}],\"features\":{\"__rustls-tls\":[\"rustls\",\"rustls-pki-types\",\"tokio-rustls\",\"stream\",\"tungstenite/__rustls-tls\",\"handshake\"],\"connect\":[\"stream\",\"tokio/net\",\"handshake\"],\"default\":[\"connect\",\"handshake\"],\"handshake\":[\"tungstenite/handshake\"],\"native-tls\":[\"native-tls-crate\",\"tokio-native-tls\",\"stream\",\"tungstenite/native-tls\",\"handshake\"],\"native-tls-vendored\":[\"native-tls\",\"native-tls-crate/vendored\",\"tungstenite/native-tls-vendored\"],\"rustls-tls-native-roots\":[\"__rustls-tls\",\"rustls-native-certs\"],\"rustls-tls-webpki-roots\":[\"__rustls-tls\",\"webpki-roots\"],\"stream\":[]}}",
"tokio-util_0.7.18": "{\"dependencies\":[{\"kind\":\"dev\",\"name\":\"async-stream\",\"req\":\"^0.3.0\"},{\"name\":\"bytes\",\"req\":\"^1.5.0\"},{\"kind\":\"dev\",\"name\":\"futures\",\"req\":\"^0.3.0\"},{\"name\":\"futures-core\",\"req\":\"^0.3.0\"},{\"name\":\"futures-io\",\"optional\":true,\"req\":\"^0.3.0\"},{\"name\":\"futures-sink\",\"req\":\"^0.3.0\"},{\"kind\":\"dev\",\"name\":\"futures-test\",\"req\":\"^0.3.5\"},{\"name\":\"futures-util\",\"optional\":true,\"req\":\"^0.3.0\"},{\"default_features\":false,\"name\":\"hashbrown\",\"optional\":true,\"req\":\"^0.15.0\"},{\"features\":[\"futures\",\"checkpoint\"],\"kind\":\"dev\",\"name\":\"loom\",\"req\":\"^0.7\",\"target\":\"cfg(loom)\"},{\"kind\":\"dev\",\"name\":\"parking_lot\",\"req\":\"^0.12.0\"},{\"name\":\"pin-project-lite\",\"req\":\"^0.2.11\"},{\"name\":\"slab\",\"optional\":true,\"req\":\"^0.4.4\"},{\"kind\":\"dev\",\"name\":\"tempfile\",\"req\":\"^3.1.0\"},{\"features\":[\"sync\"],\"name\":\"tokio\",\"req\":\"^1.44.0\"},{\"features\":[\"full\"],\"kind\":\"dev\",\"name\":\"tokio\",\"req\":\"^1.0.0\"},{\"kind\":\"dev\",\"name\":\"tokio-stream\",\"req\":\"^0.1\"},{\"kind\":\"dev\",\"name\":\"tokio-test\",\"req\":\"^0.4.0\"},{\"default_features\":false,\"features\":[\"std\"],\"name\":\"tracing\",\"optional\":true,\"req\":\"^0.1.29\"}],\"features\":{\"__docs_rs\":[\"futures-util\"],\"codec\":[],\"compat\":[\"futures-io\"],\"default\":[],\"full\":[\"codec\",\"compat\",\"io-util\",\"time\",\"net\",\"rt\",\"join-map\"],\"io\":[],\"io-util\":[\"io\",\"tokio/rt\",\"tokio/io-util\"],\"join-map\":[\"rt\",\"hashbrown\"],\"net\":[\"tokio/net\"],\"rt\":[\"tokio/rt\",\"tokio/sync\",\"futures-util\"],\"time\":[\"tokio/time\",\"slab\"]}}",
"tokio_1.48.0": "{\"dependencies\":[{\"kind\":\"dev\",\"name\":\"async-stream\",\"req\":\"^0.3\"},{\"name\":\"backtrace\",\"optional\":true,\"req\":\"^0.3.58\",\"target\":\"cfg(all(tokio_unstable, target_os = \\\"linux\\\"))\"},{\"name\":\"bytes\",\"optional\":true,\"req\":\"^1.2.1\"},{\"features\":[\"async-await\"],\"kind\":\"dev\",\"name\":\"futures\",\"req\":\"^0.3.0\"},{\"kind\":\"dev\",\"name\":\"futures-concurrency\",\"req\":\"^7.6.3\"},{\"default_features\":false,\"name\":\"io-uring\",\"optional\":true,\"req\":\"^0.7.6\",\"target\":\"cfg(all(tokio_unstable, target_os = \\\"linux\\\"))\"},{\"name\":\"libc\",\"optional\":true,\"req\":\"^0.2.168\",\"target\":\"cfg(all(tokio_unstable, target_os = \\\"linux\\\"))\"},{\"name\":\"libc\",\"optional\":true,\"req\":\"^0.2.168\",\"target\":\"cfg(unix)\"},{\"kind\":\"dev\",\"name\":\"libc\",\"req\":\"^0.2.168\",\"target\":\"cfg(unix)\"},{\"features\":[\"futures\",\"checkpoint\"],\"kind\":\"dev\",\"name\":\"loom\",\"req\":\"^0.7\",\"target\":\"cfg(loom)\"},{\"default_features\":false,\"name\":\"mio\",\"optional\":true,\"req\":\"^1.0.1\"},{\"default_features\":false,\"features\":[\"os-poll\",\"os-ext\"],\"name\":\"mio\",\"optional\":true,\"req\":\"^1.0.1\",\"target\":\"cfg(all(tokio_unstable, target_os = \\\"linux\\\"))\"},{\"features\":[\"tokio\"],\"kind\":\"dev\",\"name\":\"mio-aio\",\"req\":\"^1\",\"target\":\"cfg(target_os = \\\"freebsd\\\")\"},{\"kind\":\"dev\",\"name\":\"mockall\",\"req\":\"^0.13.0\"},{\"default_features\":false,\"features\":[\"aio\",\"fs\",\"socket\"],\"kind\":\"dev\",\"name\":\"nix\",\"req\":\"^0.29.0\",\"target\":\"cfg(unix)\"},{\"name\":\"parking_lot\",\"optional\":true,\"req\":\"^0.12.0\"},{\"name\":\"pin-project-lite\",\"req\":\"^0.2.11\"},{\"kind\":\"dev\",\"name\":\"proptest\",\"req\":\"^1\",\"target\":\"cfg(not(target_family = \\\"wasm\\\"))\"},{\"kind\":\"dev\",\"name\":\"rand\",\"req\":\"^0.9\",\"target\":\"cfg(not(all(target_family = \\\"wasm\\\", target_os = \\\"unknown\\\")))\"},{\"name\":\"signal-hook-registry\",\"optional\":true,\"req\":\"^1.1.1\",\"target\":\"cfg(unix)\"},{\"name\":\"slab\",\"optional\":true,\"req\":\"^0.4.9\",\"target\":\"cfg(all(tokio_unstable, target_os = \\\"linux\\\"))\"},{\"features\":[\"all\"],\"name\":\"socket2\",\"optional\":true,\"req\":\"^0.6.0\",\"target\":\"cfg(not(target_family = \\\"wasm\\\"))\"},{\"kind\":\"dev\",\"name\":\"socket2\",\"req\":\"^0.6.0\",\"target\":\"cfg(not(target_family = \\\"wasm\\\"))\"},{\"kind\":\"dev\",\"name\":\"tempfile\",\"req\":\"^3.1.0\",\"target\":\"cfg(not(target_family = \\\"wasm\\\"))\"},{\"name\":\"tokio-macros\",\"optional\":true,\"req\":\"~2.6.0\"},{\"kind\":\"dev\",\"name\":\"tokio-stream\",\"req\":\"^0.1\"},{\"kind\":\"dev\",\"name\":\"tokio-test\",\"req\":\"^0.4.0\"},{\"features\":[\"rt\"],\"kind\":\"dev\",\"name\":\"tokio-util\",\"req\":\"^0.7\"},{\"default_features\":false,\"features\":[\"std\"],\"name\":\"tracing\",\"optional\":true,\"req\":\"^0.1.29\",\"target\":\"cfg(tokio_unstable)\"},{\"kind\":\"dev\",\"name\":\"tracing-mock\",\"req\":\"=0.1.0-beta.1\",\"target\":\"cfg(all(tokio_unstable, target_has_atomic = \\\"64\\\"))\"},{\"kind\":\"dev\",\"name\":\"wasm-bindgen-test\",\"req\":\"^0.3.0\",\"target\":\"cfg(all(target_family = \\\"wasm\\\", not(target_os = 
\\\"wasi\\\")))\"},{\"name\":\"windows-sys\",\"optional\":true,\"req\":\"^0.61\",\"target\":\"cfg(windows)\"},{\"features\":[\"Win32_Foundation\",\"Win32_Security_Authorization\"],\"kind\":\"dev\",\"name\":\"windows-sys\",\"req\":\"^0.61\",\"target\":\"cfg(windows)\"}],\"features\":{\"default\":[],\"fs\":[],\"full\":[\"fs\",\"io-util\",\"io-std\",\"macros\",\"net\",\"parking_lot\",\"process\",\"rt\",\"rt-multi-thread\",\"signal\",\"sync\",\"time\"],\"io-std\":[],\"io-uring\":[\"dep:io-uring\",\"libc\",\"mio/os-poll\",\"mio/os-ext\",\"dep:slab\"],\"io-util\":[\"bytes\"],\"macros\":[\"tokio-macros\"],\"net\":[\"libc\",\"mio/os-poll\",\"mio/os-ext\",\"mio/net\",\"socket2\",\"windows-sys/Win32_Foundation\",\"windows-sys/Win32_Security\",\"windows-sys/Win32_Storage_FileSystem\",\"windows-sys/Win32_System_Pipes\",\"windows-sys/Win32_System_SystemServices\"],\"process\":[\"bytes\",\"libc\",\"mio/os-poll\",\"mio/os-ext\",\"mio/net\",\"signal-hook-registry\",\"windows-sys/Win32_Foundation\",\"windows-sys/Win32_System_Threading\",\"windows-sys/Win32_System_WindowsProgramming\"],\"rt\":[],\"rt-multi-thread\":[\"rt\"],\"signal\":[\"libc\",\"mio/os-poll\",\"mio/net\",\"mio/os-ext\",\"signal-hook-registry\",\"windows-sys/Win32_Foundation\",\"windows-sys/Win32_System_Console\"],\"sync\":[],\"taskdump\":[\"dep:backtrace\"],\"test-util\":[\"rt\",\"sync\",\"time\"],\"time\":[]}}",
"toml_0.5.11": "{\"dependencies\":[{\"name\":\"indexmap\",\"optional\":true,\"req\":\"^1.0\"},{\"name\":\"serde\",\"req\":\"^1.0.97\"},{\"kind\":\"dev\",\"name\":\"serde_derive\",\"req\":\"^1.0\"},{\"kind\":\"dev\",\"name\":\"serde_json\",\"req\":\"^1.0\"}],\"features\":{\"default\":[],\"preserve_order\":[\"indexmap\"]}}",
@@ -940,6 +942,7 @@
"ts-rs-macros_11.1.0": "{\"dependencies\":[{\"name\":\"proc-macro2\",\"req\":\"^1\"},{\"name\":\"quote\",\"req\":\"^1\"},{\"features\":[\"full\",\"extra-traits\"],\"name\":\"syn\",\"req\":\"^2.0.28\"},{\"name\":\"termcolor\",\"optional\":true,\"req\":\"^1\"}],\"features\":{\"no-serde-warnings\":[],\"serde-compat\":[\"termcolor\"]}}",
"ts-rs_11.1.0": "{\"dependencies\":[{\"features\":[\"serde\"],\"name\":\"bigdecimal\",\"optional\":true,\"req\":\">=0.0.13, <0.5\"},{\"name\":\"bson\",\"optional\":true,\"req\":\"^2\"},{\"name\":\"bytes\",\"optional\":true,\"req\":\"^1\"},{\"name\":\"chrono\",\"optional\":true,\"req\":\"^0.4\"},{\"features\":[\"serde\"],\"kind\":\"dev\",\"name\":\"chrono\",\"req\":\"^0.4\"},{\"name\":\"dprint-plugin-typescript\",\"optional\":true,\"req\":\"=0.95\"},{\"name\":\"heapless\",\"optional\":true,\"req\":\">=0.7, <0.9\"},{\"name\":\"indexmap\",\"optional\":true,\"req\":\"^2\"},{\"name\":\"ordered-float\",\"optional\":true,\"req\":\">=3, <6\"},{\"name\":\"semver\",\"optional\":true,\"req\":\"^1\"},{\"features\":[\"derive\"],\"kind\":\"dev\",\"name\":\"serde\",\"req\":\"^1.0\"},{\"name\":\"serde_json\",\"optional\":true,\"req\":\"^1\"},{\"kind\":\"dev\",\"name\":\"serde_json\",\"req\":\"^1\"},{\"name\":\"smol_str\",\"optional\":true,\"req\":\"^0.3\"},{\"name\":\"thiserror\",\"req\":\"^2\"},{\"features\":[\"sync\"],\"name\":\"tokio\",\"optional\":true,\"req\":\"^1\"},{\"features\":[\"sync\",\"rt\"],\"kind\":\"dev\",\"name\":\"tokio\",\"req\":\"^1.40\"},{\"name\":\"ts-rs-macros\",\"req\":\"=11.1.0\"},{\"name\":\"url\",\"optional\":true,\"req\":\"^2\"},{\"name\":\"uuid\",\"optional\":true,\"req\":\"^1\"}],\"features\":{\"bigdecimal-impl\":[\"bigdecimal\"],\"bson-uuid-impl\":[\"bson\"],\"bytes-impl\":[\"bytes\"],\"chrono-impl\":[\"chrono\"],\"default\":[\"serde-compat\"],\"format\":[\"dprint-plugin-typescript\"],\"heapless-impl\":[\"heapless\"],\"import-esm\":[],\"indexmap-impl\":[\"indexmap\"],\"no-serde-warnings\":[\"ts-rs-macros/no-serde-warnings\"],\"ordered-float-impl\":[\"ordered-float\"],\"semver-impl\":[\"semver\"],\"serde-compat\":[\"ts-rs-macros/serde-compat\"],\"serde-json-impl\":[\"serde_json\"],\"smol_str-impl\":[\"smol_str\"],\"tokio-impl\":[\"tokio\"],\"url-impl\":[\"url\"],\"uuid-impl\":[\"uuid\"]}}",
"tui-scrollbar_0.2.2": "{\"dependencies\":[{\"kind\":\"dev\",\"name\":\"color-eyre\",\"req\":\"^0.6\"},{\"name\":\"crossterm_0_28\",\"optional\":true,\"package\":\"crossterm\",\"req\":\"^0.28\"},{\"name\":\"crossterm_0_29\",\"optional\":true,\"package\":\"crossterm\",\"req\":\"^0.29\"},{\"name\":\"document-features\",\"req\":\"^0.2.11\"},{\"kind\":\"dev\",\"name\":\"ratatui\",\"req\":\"^0.30.0\"},{\"name\":\"ratatui-core\",\"req\":\"^0.1\"}],\"features\":{\"crossterm\":[\"crossterm_0_29\"],\"crossterm_0_28\":[\"dep:crossterm_0_28\"],\"crossterm_0_29\":[\"dep:crossterm_0_29\"],\"default\":[]}}",
"tungstenite_0.21.0": "{\"dependencies\":[{\"name\":\"byteorder\",\"req\":\"^1.3.2\"},{\"name\":\"bytes\",\"req\":\"^1.0\"},{\"kind\":\"dev\",\"name\":\"criterion\",\"req\":\"^0.5.0\"},{\"name\":\"data-encoding\",\"optional\":true,\"req\":\"^2\"},{\"kind\":\"dev\",\"name\":\"env_logger\",\"req\":\"^0.10.0\"},{\"name\":\"http\",\"optional\":true,\"req\":\"^1.0\"},{\"name\":\"httparse\",\"optional\":true,\"req\":\"^1.3.4\"},{\"kind\":\"dev\",\"name\":\"input_buffer\",\"req\":\"^0.5.0\"},{\"name\":\"log\",\"req\":\"^0.4.8\"},{\"name\":\"native-tls-crate\",\"optional\":true,\"package\":\"native-tls\",\"req\":\"^0.2.3\"},{\"name\":\"rand\",\"req\":\"^0.8.0\"},{\"kind\":\"dev\",\"name\":\"rand\",\"req\":\"^0.8.4\"},{\"name\":\"rustls\",\"optional\":true,\"req\":\"^0.22.0\"},{\"name\":\"rustls-native-certs\",\"optional\":true,\"req\":\"^0.7.0\"},{\"name\":\"rustls-pki-types\",\"optional\":true,\"req\":\"^1.0\"},{\"name\":\"sha1\",\"optional\":true,\"req\":\"^0.10\"},{\"kind\":\"dev\",\"name\":\"socket2\",\"req\":\"^0.5.5\"},{\"name\":\"thiserror\",\"req\":\"^1.0.23\"},{\"name\":\"url\",\"optional\":true,\"req\":\"^2.1.0\"},{\"name\":\"utf-8\",\"req\":\"^0.7.5\"},{\"name\":\"webpki-roots\",\"optional\":true,\"req\":\"^0.26\"}],\"features\":{\"__rustls-tls\":[\"rustls\",\"rustls-pki-types\"],\"default\":[\"handshake\"],\"handshake\":[\"data-encoding\",\"http\",\"httparse\",\"sha1\",\"url\"],\"native-tls\":[\"native-tls-crate\"],\"native-tls-vendored\":[\"native-tls\",\"native-tls-crate/vendored\"],\"rustls-tls-native-roots\":[\"__rustls-tls\",\"rustls-native-certs\"],\"rustls-tls-webpki-roots\":[\"__rustls-tls\",\"webpki-roots\"]}}",
"typenum_1.18.0": "{\"dependencies\":[{\"default_features\":false,\"name\":\"scale-info\",\"optional\":true,\"req\":\"^1.0\"}],\"features\":{\"const-generics\":[],\"force_unix_path_separator\":[],\"i128\":[],\"no_std\":[],\"scale_info\":[\"scale-info/derive\"],\"strict\":[]}}",
"uds_windows_1.1.0": "{\"dependencies\":[{\"name\":\"memoffset\",\"req\":\"^0.9.0\"},{\"name\":\"tempfile\",\"req\":\"^3\",\"target\":\"cfg(windows)\"},{\"features\":[\"winsock2\",\"ws2def\",\"minwinbase\",\"ntdef\",\"processthreadsapi\",\"handleapi\",\"ws2tcpip\",\"winbase\"],\"name\":\"winapi\",\"req\":\"^0.3.9\",\"target\":\"cfg(windows)\"}],\"features\":{}}",
"uname_0.1.1": "{\"dependencies\":[{\"name\":\"libc\",\"req\":\"^0.2\"}],\"features\":{}}",

codex-rs/Cargo.lock (generated)

@@ -3443,13 +3443,13 @@ dependencies = [
"http 1.3.1",
"hyper",
"hyper-util",
"rustls 0.23.29",
"rustls",
"rustls-native-certs",
"rustls-pki-types",
"tokio",
"tokio-rustls 0.26.2",
"tokio-rustls",
"tower-service",
"webpki-roots 1.0.2",
"webpki-roots",
]
[[package]]
@@ -5335,7 +5335,7 @@ dependencies = [
"quinn-proto",
"quinn-udp",
"rustc-hash",
"rustls 0.23.29",
"rustls",
"socket2 0.6.1",
"thiserror 2.0.17",
"tokio",
@@ -5355,7 +5355,7 @@ dependencies = [
"rand 0.9.2",
"ring",
"rustc-hash",
"rustls 0.23.29",
"rustls",
"rustls-pki-types",
"slab",
"thiserror 2.0.17",
@@ -5639,7 +5639,7 @@ dependencies = [
"percent-encoding",
"pin-project-lite",
"quinn",
"rustls 0.23.29",
"rustls",
"rustls-native-certs",
"rustls-pki-types",
"serde",
@@ -5648,7 +5648,7 @@ dependencies = [
"sync_wrapper",
"tokio",
"tokio-native-tls",
"tokio-rustls 0.26.2",
"tokio-rustls",
"tokio-util",
"tower",
"tower-http",
@@ -5658,7 +5658,7 @@ dependencies = [
"wasm-bindgen-futures",
"wasm-streams",
"web-sys",
"webpki-roots 1.0.2",
"webpki-roots",
]
[[package]]
@@ -5770,20 +5770,6 @@ dependencies = [
"windows-sys 0.52.0",
]
[[package]]
name = "rustls"
version = "0.22.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "bf4ef73721ac7bcd79b2b315da7779d8fc09718c6b3d2d1b2d94850eb8c18432"
dependencies = [
"log",
"ring",
"rustls-pki-types",
"rustls-webpki 0.102.8",
"subtle",
"zeroize",
]
[[package]]
name = "rustls"
version = "0.23.29"
@@ -5794,7 +5780,7 @@ dependencies = [
"once_cell",
"ring",
"rustls-pki-types",
"rustls-webpki 0.103.4",
"rustls-webpki",
"subtle",
"zeroize",
]
@@ -5821,17 +5807,6 @@ dependencies = [
"zeroize",
]
[[package]]
name = "rustls-webpki"
version = "0.102.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "64ca1bc8749bd4cf37b5ce386cc146580777b4e8572c7b97baf22c83f444bee9"
dependencies = [
"ring",
"rustls-pki-types",
"untrusted",
]
[[package]]
name = "rustls-webpki"
version = "0.103.4"
@@ -7129,24 +7104,13 @@ dependencies = [
"tokio",
]
[[package]]
name = "tokio-rustls"
version = "0.25.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "775e0c0f0adb3a2f22a00c4745d728b479985fc15ee7ca6a2608388c5569860f"
dependencies = [
"rustls 0.22.4",
"rustls-pki-types",
"tokio",
]
[[package]]
name = "tokio-rustls"
version = "0.26.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8e727b36a1a0e8b74c376ac2211e40c2c8af09fb4013c60d910495810f008e9b"
dependencies = [
"rustls 0.23.29",
"rustls",
"tokio",
]
@@ -7183,12 +7147,8 @@ checksum = "c83b561d025642014097b66e6c1bb422783339e0909e4429cde4749d1990bc38"
dependencies = [
"futures-util",
"log",
"rustls 0.22.4",
"rustls-pki-types",
"tokio",
"tokio-rustls 0.25.0",
"tungstenite",
"webpki-roots 0.26.11",
]
[[package]]
@@ -7299,7 +7259,7 @@ dependencies = [
"rustls-native-certs",
"sync_wrapper",
"tokio",
"tokio-rustls 0.26.2",
"tokio-rustls",
"tokio-stream",
"tower",
"tower-layer",
@@ -7598,8 +7558,6 @@ dependencies = [
"httparse",
"log",
"rand 0.8.5",
"rustls 0.22.4",
"rustls-pki-types",
"sha1",
"thiserror 1.0.69",
"url",
@@ -8073,15 +8031,6 @@ dependencies = [
"rustls-pki-types",
]
[[package]]
name = "webpki-roots"
version = "0.26.11"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "521bc38abb08001b01866da9f51eb7c5d647a19260e00054a8c7fd5f9e57f7a9"
dependencies = [
"webpki-roots 1.0.2",
]
[[package]]
name = "webpki-roots"
version = "1.0.2"


@@ -212,7 +212,7 @@ tiny_http = "0.12"
tokio = "1"
tokio-stream = "0.1.18"
tokio-test = "0.4"
tokio-tungstenite = { version = "0.21.0", features = ["rustls-tls-webpki-roots"] }
tokio-tungstenite = "0.21.0"
tokio-util = "0.7.18"
toml = "0.9.5"
toml_edit = "0.24.0"


@@ -567,6 +567,7 @@ server_notification_definitions! {
ReasoningTextDelta => "item/reasoning/textDelta" (v2::ReasoningTextDeltaNotification),
ContextCompacted => "thread/compacted" (v2::ContextCompactedNotification),
DeprecationNotice => "deprecationNotice" (v2::DeprecationNoticeNotification),
ConfigWarning => "configWarning" (v2::ConfigWarningNotification),
/// Notifies the user of world-writable directories on Windows, which cannot be protected by the sandbox.
WindowsWorldWritableWarning => "windows/worldWritableWarning" (v2::WindowsWorldWritableWarningNotification),


@@ -2107,6 +2107,16 @@ pub struct DeprecationNoticeNotification {
pub details: Option<String>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct ConfigWarningNotification {
/// Concise summary of the warning.
pub summary: String,
/// Optional extra guidance or error details.
pub details: Option<String>,
}
#[cfg(test)]
mod tests {
use super::*;


@@ -1,6 +1,7 @@
#![deny(clippy::print_stdout, clippy::print_stderr)]
use codex_common::CliConfigOverrides;
use codex_core::config::Config;
use codex_core::config::ConfigBuilder;
use codex_core::config_loader::LoaderOverrides;
use std::io::ErrorKind;
@@ -10,7 +11,9 @@ use std::path::PathBuf;
use crate::message_processor::MessageProcessor;
use crate::outgoing_message::OutgoingMessage;
use crate::outgoing_message::OutgoingMessageSender;
use codex_app_server_protocol::ConfigWarningNotification;
use codex_app_server_protocol::JSONRPCMessage;
use codex_core::check_execpolicy_for_warnings;
use codex_feedback::CodexFeedback;
use tokio::io::AsyncBufReadExt;
use tokio::io::AsyncWriteExt;
@@ -82,14 +85,38 @@ pub async fn run_main(
)
})?;
let loader_overrides_for_config_api = loader_overrides.clone();
let config = ConfigBuilder::default()
let mut config_warnings = Vec::new();
let config = match ConfigBuilder::default()
.cli_overrides(cli_kv_overrides.clone())
.loader_overrides(loader_overrides)
.build()
.await
.map_err(|e| {
std::io::Error::new(ErrorKind::InvalidData, format!("error loading config: {e}"))
})?;
{
Ok(config) => config,
Err(err) => {
let message = ConfigWarningNotification {
summary: "Invalid configuration; using defaults.".to_string(),
details: Some(err.to_string()),
};
config_warnings.push(message);
Config::load_default_with_cli_overrides(cli_kv_overrides.clone()).map_err(|e| {
std::io::Error::new(
ErrorKind::InvalidData,
format!("error loading default config after config error: {e}"),
)
})?
}
};
if let Ok(Some(err)) =
check_execpolicy_for_warnings(&config.features, &config.config_layer_stack).await
{
let message = ConfigWarningNotification {
summary: "Error parsing rules; custom rules not applied.".to_string(),
details: Some(err.to_string()),
};
config_warnings.push(message);
}
let feedback = CodexFeedback::new();
@@ -127,6 +154,12 @@ pub async fn run_main(
.with(otel_logger_layer)
.with(otel_tracing_layer)
.try_init();
for warning in &config_warnings {
match &warning.details {
Some(details) => error!("{} {}", warning.summary, details),
None => error!("{}", warning.summary),
}
}
// Task: process incoming messages.
let processor_handle = tokio::spawn({
@@ -140,6 +173,7 @@ pub async fn run_main(
cli_overrides,
loader_overrides,
feedback.clone(),
config_warnings,
);
async move {
while let Some(msg) = incoming_rx.recv().await {


@@ -10,6 +10,7 @@ use codex_app_server_protocol::ClientRequest;
use codex_app_server_protocol::ConfigBatchWriteParams;
use codex_app_server_protocol::ConfigReadParams;
use codex_app_server_protocol::ConfigValueWriteParams;
use codex_app_server_protocol::ConfigWarningNotification;
use codex_app_server_protocol::InitializeResponse;
use codex_app_server_protocol::JSONRPCError;
use codex_app_server_protocol::JSONRPCErrorError;
@@ -17,6 +18,7 @@ use codex_app_server_protocol::JSONRPCNotification;
use codex_app_server_protocol::JSONRPCRequest;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::ServerNotification;
use codex_core::AuthManager;
use codex_core::ThreadManager;
use codex_core::config::Config;
@@ -34,6 +36,7 @@ pub(crate) struct MessageProcessor {
codex_message_processor: CodexMessageProcessor,
config_api: ConfigApi,
initialized: bool,
config_warnings: Vec<ConfigWarningNotification>,
}
impl MessageProcessor {
@@ -46,6 +49,7 @@ impl MessageProcessor {
cli_overrides: Vec<(String, TomlValue)>,
loader_overrides: LoaderOverrides,
feedback: CodexFeedback,
config_warnings: Vec<ConfigWarningNotification>,
) -> Self {
let outgoing = Arc::new(outgoing);
let auth_manager = AuthManager::shared(
@@ -74,6 +78,7 @@ impl MessageProcessor {
codex_message_processor,
config_api,
initialized: false,
config_warnings,
}
}
@@ -155,6 +160,16 @@ impl MessageProcessor {
self.initialized = true;
if !self.config_warnings.is_empty() {
for notification in self.config_warnings.drain(..) {
self.outgoing
.send_server_notification(ServerNotification::ConfigWarning(
notification,
))
.await;
}
}
return;
}
}


@@ -4,12 +4,13 @@ use codex_app_server_protocol::Model;
use codex_app_server_protocol::ReasoningEffortOption;
use codex_core::ThreadManager;
use codex_core::config::Config;
use codex_core::models_manager::manager::RefreshStrategy;
use codex_protocol::openai_models::ModelPreset;
use codex_protocol::openai_models::ReasoningEffortPreset;
pub async fn supported_models(thread_manager: Arc<ThreadManager>, config: &Config) -> Vec<Model> {
thread_manager
.list_models(config)
.list_models(config, RefreshStrategy::OnlineIfUncached)
.await
.into_iter()
.filter(|preset| preset.show_in_picker)


@@ -162,6 +162,7 @@ mod tests {
use codex_app_server_protocol::AccountRateLimitsUpdatedNotification;
use codex_app_server_protocol::AccountUpdatedNotification;
use codex_app_server_protocol::AuthMode;
use codex_app_server_protocol::ConfigWarningNotification;
use codex_app_server_protocol::LoginChatGptCompleteNotification;
use codex_app_server_protocol::RateLimitSnapshot;
use codex_app_server_protocol::RateLimitWindow;
@@ -279,4 +280,26 @@ mod tests {
"ensure the notification serializes correctly"
);
}
#[test]
fn verify_config_warning_notification_serialization() {
let notification = ServerNotification::ConfigWarning(ConfigWarningNotification {
summary: "Config error: using defaults".to_string(),
details: Some("error loading config: bad config".to_string()),
});
let jsonrpc_notification = OutgoingMessage::AppServerNotification(notification);
assert_eq!(
json!( {
"method": "configWarning",
"params": {
"summary": "Config error: using defaults",
"details": "error loading config: bad config",
},
}),
serde_json::to_value(jsonrpc_notification)
.expect("ensure the notification serializes correctly"),
"ensure the notification serializes correctly"
);
}
}


@@ -383,9 +383,12 @@ fn compute_replacements(
let mut line_index: usize = 0;
for chunk in chunks {
// If a chunk has a `change_context`, we use seek_sequence to find it, then
// adjust our `line_index` to continue from there.
if let Some(ctx_line) = &chunk.change_context {
// If a chunk has one or more `change_context` lines, seek them in order
// to progressively narrow down the position of the chunk. This supports
// multiple @@ context headers such as:
// @@ class BaseClass:
// @@ def method():
for ctx_line in &chunk.change_context {
if let Some(idx) = seek_sequence::seek_sequence(
original_lines,
std::slice::from_ref(ctx_line),
@@ -824,6 +827,51 @@ mod tests {
assert_eq!(String::from_utf8(stderr).unwrap(), "");
}
#[test]
fn test_apply_patch_multiple_change_contexts_success() {
let dir = tempdir().unwrap();
let path = dir.path().join("example.py");
let original = r#"
class BaseClass:
def method():
# untouched
pass
class OtherClass:
def method():
# to_remove
pass
"#;
fs::write(&path, original).unwrap();
let patch = wrap_patch(&format!(
r#"*** Update File: {}
@@ class OtherClass:
@@ def method():
- # to_remove
+ # to_add"#,
path.display()
));
let mut stdout = Vec::new();
let mut stderr = Vec::new();
apply_patch(&patch, &mut stdout, &mut stderr).unwrap();
let contents = fs::read_to_string(&path).unwrap();
let expected = r#"
class BaseClass:
def method():
# untouched
pass
class OtherClass:
def method():
# to_add
pass
"#;
assert_eq!(contents, expected);
}
#[test]
fn test_unified_diff() {
// Start with a file containing four lines.
@@ -1062,4 +1110,40 @@ g
let result = apply_patch(&patch, &mut stdout, &mut stderr);
assert!(result.is_err());
}
#[test]
fn test_apply_patch_multiple_change_contexts_missing_context() {
let dir = tempdir().unwrap();
let path = dir.path().join("example.py");
let original = r#"class BaseClass:
def method():
# to_remove
pass
class OtherClass:
def method():
# untouched
pass"#;
fs::write(&path, original).unwrap();
let patch = wrap_patch(&format!(
r#"*** Update File: {}
@@ class BaseClass:
@@ def missing():
- # to_remove
+ # to_add"#,
path.display()
));
let mut stdout = Vec::new();
let mut stderr = Vec::new();
let result = apply_patch(&patch, &mut stdout, &mut stderr);
assert!(matches!(
result,
Err(ApplyPatchError::IoError(IoError { context, .. }))
if context.contains("Failed to find context ' def missing():'")
&& context.contains(&path.display().to_string())
));
}
}


@@ -89,12 +89,13 @@ use Hunk::*;
#[derive(Debug, PartialEq, Clone)]
pub struct UpdateFileChunk {
/// A single line of context used to narrow down the position of the chunk
/// (this is usually a class, method, or function definition.)
pub change_context: Option<String>,
/// Lines of context used to narrow down the position of the chunk.
/// These are usually class, method, or function definitions. If empty,
/// the chunk has no explicit context.
pub change_context: Vec<String>,
/// A contiguous block of lines that should be replaced with `new_lines`.
/// `old_lines` must occur strictly after `change_context`.
/// `old_lines` must occur strictly after the last `change_context` line.
pub old_lines: Vec<String>,
pub new_lines: Vec<String>,
@@ -351,28 +352,42 @@ fn parse_update_file_chunk(
line_number,
});
}
// If we see an explicit context marker @@ or @@ <context>, consume it; otherwise, optionally
// allow treating the chunk as starting directly with diff lines.
let (change_context, start_index) = if lines[0] == EMPTY_CHANGE_CONTEXT_MARKER {
(None, 1)
} else if let Some(context) = lines[0].strip_prefix(CHANGE_CONTEXT_MARKER) {
(Some(context.to_string()), 1)
} else {
if !allow_missing_context {
return Err(InvalidHunkError {
message: format!(
"Expected update hunk to start with a @@ context marker, got: '{}'",
lines[0]
),
line_number,
});
// Consume one or more explicit context markers (`@@` or `@@ <context>`). This
// supports multiple `@@` lines to narrow down the target location, e.g.:
// @@ class BaseClass:
// @@ def method():
// -old
// +new
let mut change_context: Vec<String> = Vec::new();
let mut start_index = 0;
while start_index < lines.len() {
let line = lines[start_index];
if line == EMPTY_CHANGE_CONTEXT_MARKER {
start_index += 1;
continue;
}
(None, 0)
};
if let Some(context) = line.strip_prefix(CHANGE_CONTEXT_MARKER) {
change_context.push(context.to_string());
start_index += 1;
continue;
}
break;
}
if start_index == 0 && !allow_missing_context {
return Err(InvalidHunkError {
message: format!(
"Expected update hunk to start with a @@ context marker, got: '{}'",
lines[0]
),
line_number,
});
}
if start_index >= lines.len() {
return Err(InvalidHunkError {
message: "Update hunk does not contain any lines".to_string(),
line_number: line_number + 1,
line_number: line_number + start_index,
});
}
let mut chunk = UpdateFileChunk {
@@ -517,7 +532,7 @@ fn test_parse_patch() {
path: PathBuf::from("path/update.py"),
move_path: Some(PathBuf::from("path/update2.py")),
chunks: vec![UpdateFileChunk {
change_context: Some("def f():".to_string()),
change_context: vec!["def f():".to_string()],
old_lines: vec![" pass".to_string()],
new_lines: vec![" return 123".to_string()],
is_end_of_file: false
@@ -544,7 +559,7 @@ fn test_parse_patch() {
path: PathBuf::from("file.py"),
move_path: None,
chunks: vec![UpdateFileChunk {
change_context: None,
change_context: Vec::new(),
old_lines: vec![],
new_lines: vec!["line".to_string()],
is_end_of_file: false
@@ -574,7 +589,7 @@ fn test_parse_patch() {
path: PathBuf::from("file2.py"),
move_path: None,
chunks: vec![UpdateFileChunk {
change_context: None,
change_context: Vec::new(),
old_lines: vec!["import foo".to_string()],
new_lines: vec!["import foo".to_string(), "bar".to_string()],
is_end_of_file: false,
@@ -594,7 +609,7 @@ fn test_parse_patch_lenient() {
path: PathBuf::from("file2.py"),
move_path: None,
chunks: vec![UpdateFileChunk {
change_context: None,
change_context: Vec::new(),
old_lines: vec!["import foo".to_string()],
new_lines: vec!["import foo".to_string(), "bar".to_string()],
is_end_of_file: false,
@@ -730,7 +745,7 @@ fn test_update_file_chunk() {
),
Ok((
(UpdateFileChunk {
change_context: Some("change_context".to_string()),
change_context: vec!["change_context".to_string()],
old_lines: vec![
"".to_string(),
"context".to_string(),
@@ -752,7 +767,7 @@ fn test_update_file_chunk() {
parse_update_file_chunk(&["@@", "+line", "*** End of File"], 123, false),
Ok((
(UpdateFileChunk {
change_context: None,
change_context: Vec::new(),
old_lines: vec![],
new_lines: vec!["line".to_string()],
is_end_of_file: true
@@ -761,3 +776,31 @@ fn test_update_file_chunk() {
))
);
}
#[test]
fn test_update_file_chunk_multiple_change_context_lines() {
assert_eq!(
parse_update_file_chunk(
&[
"@@ class BaseClass:",
"@@ def method():",
"- # to_remove",
"+ # to_add",
],
200,
false
),
Ok((
(UpdateFileChunk {
change_context: vec![
"class BaseClass:".to_string(),
" def method():".to_string()
],
old_lines: vec![" # to_remove".to_string()],
new_lines: vec![" # to_add".to_string()],
is_end_of_file: false
}),
4
))
);
}


@@ -0,0 +1,10 @@
class BaseClass:
def method(self):
# to_add
pass
class OtherClass:
def method(self):
# untouched
pass


@@ -0,0 +1,10 @@
class BaseClass:
def method(self):
# untouched
pass
class OtherClass:
def method(self):
# to_add
pass


@@ -0,0 +1,10 @@
class BaseClass:
def method(self):
# to_remove
pass
class OtherClass:
def method(self):
# untouched
pass


@@ -0,0 +1,10 @@
class BaseClass:
def method(self):
# untouched
pass
class OtherClass:
def method(self):
# to_remove
pass


@@ -0,0 +1,12 @@
*** Begin Patch
*** Update File: first.py
@@ class BaseClass:
@@ def method(self):
- # to_remove
+ # to_add
*** Update File: second.py
@@ class OtherClass:
@@ def method(self):
- # to_remove
+ # to_add
*** End Patch


@@ -0,0 +1,5 @@
foo
bar
bar
bar
new


@@ -0,0 +1,5 @@
foo
bar
bar
bar
baz


@@ -0,0 +1,7 @@
*** Begin Patch
*** Update File: foo.txt
@@ bar
@@ bar
-baz
+new
*** End Patch


@@ -0,0 +1,6 @@
foo
bar
one
bar
bar
two


@@ -0,0 +1,6 @@
foo
bar
bar
bar
bar
baz


@@ -0,0 +1,11 @@
*** Begin Patch
*** Update File: foo.txt
@@ foo
@@ bar
-bar
+one
@@ bar
@@ bar
-baz
+two
*** End Patch


@@ -0,0 +1,4 @@
class Foo:
def foo(self):
# to_remove
pass


@@ -0,0 +1,10 @@
class BaseClass:
def method(self):
# to_add
pass
class OtherClass:
def method(self):
# untouched
pass


@@ -0,0 +1,4 @@
class Foo:
def foo(self):
# to_remove
pass


@@ -0,0 +1,10 @@
class BaseClass:
def method(self):
# to_add
pass
class OtherClass:
def method(self):
# untouched
pass


@@ -0,0 +1,12 @@
*** Begin Patch
*** Update File: success.py
@@ class BaseClass:
@@ def method(self):
- # to_remove
+ # to_add
*** Update File: failure.py
@@ class Foo:
@@ def missing(self):
- # to_remove
+ # to_add
*** End Patch


@@ -0,0 +1,5 @@
class Foo:
# this is a comment
def foo(self):
# to_add
pass


@@ -0,0 +1,5 @@
class Foo:
# this is a comment
def foo(self):
# to_remove
pass


@@ -0,0 +1,7 @@
*** Begin Patch
*** Update File: skip.py
@@ class Foo:
@@ def foo(self):
- # to_remove
+ # to_add
*** End Patch


@@ -16,7 +16,7 @@ pub use sandbox_mode_cli_arg::SandboxModeCliArg;
#[cfg(feature = "cli")]
pub mod format_env_display;
#[cfg(any(feature = "cli", test))]
#[cfg(feature = "cli")]
mod config_override;
#[cfg(feature = "cli")]

File diff suppressed because one or more lines are too long


@@ -251,17 +251,22 @@ impl Codex {
let exec_policy = ExecPolicyManager::load(&config.features, &config.config_layer_stack)
.await
.map_err(|err| CodexErr::Fatal(format!("failed to load execpolicy: {err}")))?;
.map_err(|err| CodexErr::Fatal(format!("failed to load rules: {err}")))?;
let config = Arc::new(config);
if config.features.enabled(Feature::RemoteModels)
&& let Err(err) = models_manager
.refresh_available_models_with_cache(&config)
.await
{
error!("failed to refresh available models: {err:?}");
}
let model = models_manager.get_model(&config.model, &config).await;
let _ = models_manager
.list_models(
&config,
crate::models_manager::manager::RefreshStrategy::OnlineIfUncached,
)
.await;
let model = models_manager
.get_default_model(
&config.model,
&config,
crate::models_manager::manager::RefreshStrategy::OnlineIfUncached,
)
.await;
let session_configuration = SessionConfiguration {
provider: config.model_provider.clone(),
model: model.clone(),
@@ -965,7 +970,7 @@ impl Session {
let model_info = self
.services
.models_manager
.construct_model_info(session_configuration.model.as_str(), &per_turn_config)
.get_model_info(session_configuration.model.as_str(), &per_turn_config)
.await;
let mut turn_context: TurnContext = Self::make_turn_context(
Some(Arc::clone(&self.services.auth_manager)),
@@ -988,6 +993,14 @@ impl Session {
.await
}
async fn get_config(&self) -> std::sync::Arc<Config> {
let state = self.state.lock().await;
state
.session_configuration
.original_config_do_not_use
.clone()
}
pub(crate) async fn new_default_turn_with_sub_id(&self, sub_id: String) -> Arc<TurnContext> {
let session_configuration = {
let state = self.state.lock().await;
@@ -2374,7 +2387,7 @@ async fn spawn_review_thread(
let review_model_info = sess
.services
.models_manager
.construct_model_info(&model, &config)
.get_model_info(&model, &config)
.await;
// For reviews, disable web_search and view_image regardless of global settings.
let mut review_features = sess.features.clone();
@@ -2904,9 +2917,10 @@ async fn try_run_turn(
}
ResponseEvent::ModelsEtag(etag) => {
// Update internal state with latest models etag
let config = sess.get_config().await;
sess.services
.models_manager
.refresh_if_new_etag(etag, sess.features.enabled(Feature::RemoteModels))
.refresh_if_new_etag(etag, &config)
.await;
}
ResponseEvent::Completed {


@@ -460,6 +460,28 @@ impl Config {
.await
}
/// Load a default configuration when user config files are invalid.
pub fn load_default_with_cli_overrides(
cli_overrides: Vec<(String, TomlValue)>,
) -> std::io::Result<Self> {
let codex_home = find_codex_home()?;
let mut merged = toml::Value::try_from(ConfigToml::default()).map_err(|e| {
std::io::Error::new(
std::io::ErrorKind::InvalidData,
format!("failed to serialize default config: {e}"),
)
})?;
let cli_layer = crate::config_loader::build_cli_overrides_layer(&cli_overrides);
crate::config_loader::merge_toml_values(&mut merged, &cli_layer);
let config_toml = deserialize_config_toml_with_base(merged, &codex_home)?;
Self::load_config_with_layer_stack(
config_toml,
ConfigOverrides::default(),
codex_home,
ConfigLayerStack::default(),
)
}
/// This is a secondary way of creating [Config], which is appropriate when
/// the harness is meant to be used with a specific configuration that
/// ignores user settings. For example, the `codex exec` subcommand is


@@ -31,6 +31,7 @@ pub use config_requirements::McpServerRequirement;
pub use config_requirements::RequirementSource;
pub use config_requirements::SandboxModeRequirement;
pub use merge::merge_toml_values;
pub(crate) use overrides::build_cli_overrides_layer;
pub use state::ConfigLayerEntry;
pub use state::ConfigLayerStack;
pub use state::ConfigLayerStackOrdering;


@@ -1,10 +1,10 @@
use toml::Value as TomlValue;
pub(super) fn default_empty_table() -> TomlValue {
pub(crate) fn default_empty_table() -> TomlValue {
TomlValue::Table(Default::default())
}
pub(super) fn build_cli_overrides_layer(cli_overrides: &[(String, TomlValue)]) -> TomlValue {
pub(crate) fn build_cli_overrides_layer(cli_overrides: &[(String, TomlValue)]) -> TomlValue {
let mut root = default_empty_table();
for (path, value) in cli_overrides {
apply_toml_override(&mut root, path, value.clone());


@@ -46,19 +46,19 @@ fn is_policy_match(rule_match: &RuleMatch) -> bool {
#[derive(Debug, Error)]
pub enum ExecPolicyError {
#[error("failed to read execpolicy files from {dir}: {source}")]
#[error("failed to read rules files from {dir}: {source}")]
ReadDir {
dir: PathBuf,
source: std::io::Error,
},
#[error("failed to read execpolicy file {path}: {source}")]
#[error("failed to read rules file {path}: {source}")]
ReadFile {
path: PathBuf,
source: std::io::Error,
},
#[error("failed to parse execpolicy file {path}: {source}")]
#[error("failed to parse rules file {path}: {source}")]
ParsePolicy {
path: String,
source: codex_execpolicy::Error,
@@ -67,19 +67,19 @@ pub enum ExecPolicyError {
#[derive(Debug, Error)]
pub enum ExecPolicyUpdateError {
#[error("failed to update execpolicy file {path}: {source}")]
#[error("failed to update rules file {path}: {source}")]
AppendRule { path: PathBuf, source: AmendError },
#[error("failed to join blocking execpolicy update task: {source}")]
#[error("failed to join blocking rules update task: {source}")]
JoinBlockingTask { source: tokio::task::JoinError },
#[error("failed to update in-memory execpolicy: {source}")]
#[error("failed to update in-memory rules: {source}")]
AddRule {
#[from]
source: ExecPolicyRuleError,
},
#[error("cannot append execpolicy rule because execpolicy feature is disabled")]
#[error("cannot append rule because rules feature is disabled")]
FeatureDisabled,
}
@@ -98,7 +98,11 @@ impl ExecPolicyManager {
features: &Features,
config_stack: &ConfigLayerStack,
) -> Result<Self, ExecPolicyError> {
let policy = load_exec_policy_for_features(features, config_stack).await?;
let (policy, warning) =
load_exec_policy_for_features_with_warning(features, config_stack).await?;
if let Some(err) = warning.as_ref() {
tracing::warn!("failed to parse rules: {err}");
}
Ok(Self::new(Arc::new(policy)))
}
@@ -195,14 +199,26 @@ impl Default for ExecPolicyManager {
}
}
async fn load_exec_policy_for_features(
pub async fn check_execpolicy_for_warnings(
features: &Features,
config_stack: &ConfigLayerStack,
) -> Result<Policy, ExecPolicyError> {
) -> Result<Option<ExecPolicyError>, ExecPolicyError> {
let (_, warning) = load_exec_policy_for_features_with_warning(features, config_stack).await?;
Ok(warning)
}
async fn load_exec_policy_for_features_with_warning(
features: &Features,
config_stack: &ConfigLayerStack,
) -> Result<(Policy, Option<ExecPolicyError>), ExecPolicyError> {
if !features.enabled(Feature::ExecPolicy) {
Ok(Policy::empty())
} else {
load_exec_policy(config_stack).await
return Ok((Policy::empty(), None));
}
match load_exec_policy(config_stack).await {
Ok(policy) => Ok((policy, None)),
Err(err @ ExecPolicyError::ParsePolicy { .. }) => Ok((Policy::empty(), Some(err))),
Err(err) => Err(err),
}
}
@@ -239,7 +255,7 @@ pub async fn load_exec_policy(config_stack: &ConfigLayerStack) -> Result<Policy,
}
let policy = parser.build();
tracing::debug!("loaded execpolicy from {} files", policy_paths.len());
tracing::debug!("loaded rules from {} files", policy_paths.len());
Ok(policy)
}


@@ -114,6 +114,7 @@ pub use apply_patch::CODEX_APPLY_PATCH_ARG1;
pub use command_safety::is_dangerous_command;
pub use command_safety::is_safe_command;
pub use exec_policy::ExecPolicyError;
pub use exec_policy::check_execpolicy_for_warnings;
pub use exec_policy::load_exec_policy;
pub use safety::get_platform_sandbox;
pub use safety::is_windows_elevated_sandbox_enabled;


@@ -5,9 +5,105 @@ use serde::Deserialize;
use serde::Serialize;
use std::io;
use std::io::ErrorKind;
use std::path::Path;
use std::path::PathBuf;
use std::time::Duration;
use tokio::fs;
use tracing::error;
/// Manages loading and saving of models cache to disk.
#[derive(Debug)]
pub(crate) struct ModelsCacheManager {
cache_path: PathBuf,
cache_ttl: Duration,
}
impl ModelsCacheManager {
/// Create a new cache manager with the given path and TTL.
pub(crate) fn new(cache_path: PathBuf, cache_ttl: Duration) -> Self {
Self {
cache_path,
cache_ttl,
}
}
/// Attempt to load a fresh cache entry. Returns `None` if the cache doesn't exist or is stale.
pub(crate) async fn load_fresh(&self) -> Option<ModelsCache> {
let cache = match self.load().await {
Ok(cache) => cache?,
Err(err) => {
error!("failed to load models cache: {err}");
return None;
}
};
if !cache.is_fresh(self.cache_ttl) {
return None;
}
Some(cache)
}
/// Persist the cache to disk, creating parent directories as needed.
pub(crate) async fn persist_cache(&self, models: &[ModelInfo], etag: Option<String>) {
let cache = ModelsCache {
fetched_at: Utc::now(),
etag,
models: models.to_vec(),
};
if let Err(err) = self.save_internal(&cache).await {
error!("failed to write models cache: {err}");
}
}
/// Renew the cache TTL by updating the fetched_at timestamp to now.
pub(crate) async fn renew_cache_ttl(&self) -> io::Result<()> {
let mut cache = match self.load().await? {
Some(cache) => cache,
None => return Err(io::Error::new(ErrorKind::NotFound, "cache not found")),
};
cache.fetched_at = Utc::now();
self.save_internal(&cache).await
}
async fn load(&self) -> io::Result<Option<ModelsCache>> {
match fs::read(&self.cache_path).await {
Ok(contents) => {
let cache = serde_json::from_slice(&contents)
.map_err(|err| io::Error::new(ErrorKind::InvalidData, err.to_string()))?;
Ok(Some(cache))
}
Err(err) if err.kind() == ErrorKind::NotFound => Ok(None),
Err(err) => Err(err),
}
}
async fn save_internal(&self, cache: &ModelsCache) -> io::Result<()> {
if let Some(parent) = self.cache_path.parent() {
fs::create_dir_all(parent).await?;
}
let json = serde_json::to_vec_pretty(cache)
.map_err(|err| io::Error::new(ErrorKind::InvalidData, err.to_string()))?;
fs::write(&self.cache_path, json).await
}
#[cfg(test)]
/// Set the cache TTL.
pub(crate) fn set_ttl(&mut self, ttl: Duration) {
self.cache_ttl = ttl;
}
#[cfg(test)]
/// Manipulate cache file for testing. Allows setting a custom fetched_at timestamp.
pub(crate) async fn manipulate_cache_for_test<F>(&self, f: F) -> io::Result<()>
where
F: FnOnce(&mut DateTime<Utc>),
{
let mut cache = match self.load().await? {
Some(cache) => cache,
None => return Err(io::Error::new(ErrorKind::NotFound, "cache not found")),
};
f(&mut cache.fetched_at);
self.save_internal(&cache).await
}
}
/// Serialized snapshot of models and metadata cached on disk.
#[derive(Debug, Clone, Serialize, Deserialize)]
@@ -20,7 +116,7 @@ pub(crate) struct ModelsCache {
impl ModelsCache {
/// Returns `true` when the cache entry has not exceeded the configured TTL.
pub(crate) fn is_fresh(&self, ttl: Duration) -> bool {
fn is_fresh(&self, ttl: Duration) -> bool {
if ttl.is_zero() {
return false;
}
@@ -31,26 +127,3 @@ impl ModelsCache {
age <= ttl_duration
}
}
/// Read and deserialize the cache file if it exists.
pub(crate) async fn load_cache(path: &Path) -> io::Result<Option<ModelsCache>> {
match fs::read(path).await {
Ok(contents) => {
let cache = serde_json::from_slice(&contents)
.map_err(|err| io::Error::new(ErrorKind::InvalidData, err.to_string()))?;
Ok(Some(cache))
}
Err(err) if err.kind() == ErrorKind::NotFound => Ok(None),
Err(err) => Err(err),
}
}
/// Persist the cache contents to disk, creating parent directories as needed.
pub(crate) async fn save_cache(path: &Path, cache: &ModelsCache) -> io::Result<()> {
if let Some(parent) = path.parent() {
fs::create_dir_all(parent).await?;
}
let json = serde_json::to_vec_pretty(cache)
.map_err(|err| io::Error::new(ErrorKind::InvalidData, err.to_string()))?;
fs::write(path, json).await
}


@@ -1,4 +1,3 @@
use chrono::Utc;
use codex_api::ModelsClient;
use codex_api::ReqwestTransport;
use codex_app_server_protocol::AuthMode;
@@ -6,7 +5,6 @@ use codex_protocol::openai_models::ModelInfo;
use codex_protocol::openai_models::ModelPreset;
use codex_protocol::openai_models::ModelsResponse;
use http::HeaderMap;
use std::collections::HashSet;
use std::path::PathBuf;
use std::sync::Arc;
use std::time::Duration;
@@ -15,8 +13,7 @@ use tokio::sync::TryLockError;
use tokio::time::timeout;
use tracing::error;
use super::cache;
use super::cache::ModelsCache;
use super::cache::ModelsCacheManager;
use crate::api_bridge::auth_provider_from_auth;
use crate::api_bridge::map_api_error;
use crate::auth::AuthManager;
@@ -36,6 +33,17 @@ const OPENAI_DEFAULT_API_MODEL: &str = "gpt-5.1-codex-max";
const OPENAI_DEFAULT_CHATGPT_MODEL: &str = "gpt-5.2-codex";
const CODEX_AUTO_BALANCED_MODEL: &str = "codex-auto-balanced";
/// Strategy for refreshing available models.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum RefreshStrategy {
/// Always fetch from the network, ignoring cache.
Online,
/// Only use cached data, never fetch from the network.
Offline,
/// Use cache if available and fresh, otherwise fetch from the network.
OnlineIfUncached,
}
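// A minimal usage sketch (call sites assumed from elsewhere in this change):
// startup paths that must not block on the network pass `Offline`, while
// interactive listings tolerate a fetch.
//
//     let presets = manager
//         .list_models(&config, RefreshStrategy::OnlineIfUncached)
//         .await;
//     let model = manager
//         .get_default_model(&None, &config, RefreshStrategy::Offline)
//         .await;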
/// Coordinates remote model discovery plus cached metadata on disk.
#[derive(Debug)]
pub struct ModelsManager {
@@ -43,64 +51,157 @@ pub struct ModelsManager {
remote_models: RwLock<Vec<ModelInfo>>,
auth_manager: Arc<AuthManager>,
etag: RwLock<Option<String>>,
codex_home: PathBuf,
cache_ttl: Duration,
cache_manager: ModelsCacheManager,
provider: ModelProviderInfo,
}
impl ModelsManager {
/// Construct a manager scoped to the provided `AuthManager`.
///
/// Uses `codex_home` to store cached model metadata and initializes with built-in presets.
pub fn new(codex_home: PathBuf, auth_manager: Arc<AuthManager>) -> Self {
let cache_path = codex_home.join(MODEL_CACHE_FILE);
let cache_manager = ModelsCacheManager::new(cache_path, DEFAULT_MODEL_CACHE_TTL);
Self {
local_models: builtin_model_presets(auth_manager.get_auth_mode()),
remote_models: RwLock::new(Self::load_remote_models_from_file().unwrap_or_default()),
auth_manager,
etag: RwLock::new(None),
codex_home,
cache_ttl: DEFAULT_MODEL_CACHE_TTL,
cache_manager,
provider: ModelProviderInfo::create_openai_provider(),
}
}
#[cfg(any(test, feature = "test-support"))]
/// Construct a manager scoped to the provided `AuthManager` with a specific provider. Used for integration tests.
pub fn with_provider(
codex_home: PathBuf,
auth_manager: Arc<AuthManager>,
provider: ModelProviderInfo,
) -> Self {
Self {
local_models: builtin_model_presets(auth_manager.get_auth_mode()),
remote_models: RwLock::new(Self::load_remote_models_from_file().unwrap_or_default()),
auth_manager,
etag: RwLock::new(None),
codex_home,
cache_ttl: DEFAULT_MODEL_CACHE_TTL,
provider,
/// List all available models, refreshing according to the specified strategy.
///
/// Returns model presets sorted by priority and filtered by auth mode and visibility.
pub async fn list_models(
&self,
config: &Config,
refresh_strategy: RefreshStrategy,
) -> Vec<ModelPreset> {
if let Err(err) = self
.refresh_available_models(config, refresh_strategy)
.await
{
error!("failed to refresh available models: {err}");
}
let remote_models = self.get_remote_models(config).await;
self.build_available_models(remote_models)
}
/// Attempt to list models without blocking, using the current cached state.
///
/// Returns an error if the internal lock cannot be acquired.
pub fn try_list_models(&self, config: &Config) -> Result<Vec<ModelPreset>, TryLockError> {
let remote_models = self.try_get_remote_models(config)?;
Ok(self.build_available_models(remote_models))
}
// todo(aibrahim): should be visible to core only and sent on session_configured event
/// Get the model identifier to use, refreshing according to the specified strategy.
///
/// If `model` is provided, returns it directly. Otherwise selects the default based on
/// auth mode and available models (prefers `codex-auto-balanced` for ChatGPT auth).
pub async fn get_default_model(
&self,
model: &Option<String>,
config: &Config,
refresh_strategy: RefreshStrategy,
) -> String {
if let Some(model) = model.as_ref() {
return model.to_string();
}
if let Err(err) = self
.refresh_available_models(config, refresh_strategy)
.await
{
error!("failed to refresh available models: {err}");
}
// If codex-auto-balanced is available and the user is signed in with ChatGPT auth, return it; otherwise fall back to the default model.
let auth_mode = self.auth_manager.get_auth_mode();
let remote_models = self.get_remote_models(config).await;
if auth_mode == Some(AuthMode::ChatGPT) {
let has_auto_balanced = self
.build_available_models(remote_models)
.iter()
.any(|model| model.model == CODEX_AUTO_BALANCED_MODEL && model.show_in_picker);
if has_auto_balanced {
return CODEX_AUTO_BALANCED_MODEL.to_string();
}
return OPENAI_DEFAULT_CHATGPT_MODEL.to_string();
}
OPENAI_DEFAULT_API_MODEL.to_string()
}
// todo(aibrahim): see whether this can be tightened to pub(crate)
/// Look up model metadata, applying remote overrides and config adjustments.
pub async fn get_model_info(&self, model: &str, config: &Config) -> ModelInfo {
let remote = self
.get_remote_models(config)
.await
.into_iter()
.find(|m| m.slug == model);
let model = if let Some(remote) = remote {
remote
} else {
model_info::find_model_info_for_slug(model)
};
model_info::with_config_overrides(model, config)
}
/// Refresh models if the provided ETag differs from the cached ETag.
///
/// Uses `Online` strategy to fetch latest models when ETags differ.
pub(crate) async fn refresh_if_new_etag(&self, etag: String, config: &Config) {
let current_etag = self.get_etag().await;
if current_etag.as_deref() == Some(etag.as_str()) {
if let Err(err) = self.cache_manager.renew_cache_ttl().await {
error!("failed to renew cache TTL: {err}");
}
return;
}
if let Err(err) = self
.refresh_available_models(config, RefreshStrategy::Online)
.await
{
error!("failed to refresh available models: {err}");
}
}
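// Net effect of the ETag handshake (header name taken from the integration
// test added in this change):
// - `X-Models-Etag` on a response matches the cached etag -> only
//   `renew_cache_ttl()` runs; no /models request is issued.
// - header differs, or no etag is cached -> a `RefreshStrategy::Online`
//   refetch rewrites both the cache and the stored etag.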
/// Fetch the latest remote models, using the on-disk cache when still fresh.
pub async fn refresh_available_models_with_cache(&self, config: &Config) -> CoreResult<()> {
/// Refresh available models according to the specified strategy.
async fn refresh_available_models(
&self,
config: &Config,
refresh_strategy: RefreshStrategy,
) -> CoreResult<()> {
if !config.features.enabled(Feature::RemoteModels)
|| self.auth_manager.get_auth_mode() == Some(AuthMode::ApiKey)
{
return Ok(());
}
if self.try_load_cache().await {
return Ok(());
match refresh_strategy {
RefreshStrategy::Offline => {
// Only try to load from cache, never fetch
self.try_load_cache().await;
Ok(())
}
RefreshStrategy::OnlineIfUncached => {
// Try cache first, fall back to online if unavailable
if self.try_load_cache().await {
return Ok(());
}
self.fetch_and_update_models().await
}
RefreshStrategy::Online => {
// Always fetch from network
self.fetch_and_update_models().await
}
}
self.refresh_available_models_no_cache(config.features.enabled(Feature::RemoteModels))
.await
}
pub(crate) async fn refresh_available_models_no_cache(
&self,
remote_models_feature: bool,
) -> CoreResult<()> {
if !remote_models_feature || self.auth_manager.get_auth_mode() == Some(AuthMode::ApiKey) {
return Ok(());
}
async fn fetch_and_update_models(&self) -> CoreResult<()> {
let auth = self.auth_manager.auth().await;
let api_provider = self.provider.to_api_provider(Some(AuthMode::ChatGPT))?;
let api_auth = auth_provider_from_auth(auth.clone(), &self.provider)?;
@@ -118,84 +219,10 @@ impl ModelsManager {
self.apply_remote_models(models.clone()).await;
*self.etag.write().await = etag.clone();
self.persist_cache(&models, etag).await;
self.cache_manager.persist_cache(&models, etag).await;
Ok(())
}
pub async fn list_models(&self, config: &Config) -> Vec<ModelPreset> {
if let Err(err) = self.refresh_available_models_with_cache(config).await {
error!("failed to refresh available models: {err}");
}
let remote_models = self.remote_models(config).await;
self.build_available_models(remote_models)
}
pub fn try_list_models(&self, config: &Config) -> Result<Vec<ModelPreset>, TryLockError> {
let remote_models = self.try_get_remote_models(config)?;
Ok(self.build_available_models(remote_models))
}
/// Look up the requested model metadata while applying remote metadata overrides.
pub async fn construct_model_info(&self, model: &str, config: &Config) -> ModelInfo {
let remote = self
.remote_models(config)
.await
.into_iter()
.find(|m| m.slug == model);
let model = if let Some(remote) = remote {
remote
} else {
model_info::find_model_info_for_slug(model)
};
model_info::with_config_overrides(model, config)
}
pub async fn get_model(&self, model: &Option<String>, config: &Config) -> String {
if let Some(model) = model.as_ref() {
return model.to_string();
}
if let Err(err) = self.refresh_available_models_with_cache(config).await {
error!("failed to refresh available models: {err}");
}
// if codex-auto-balanced exists & signed in with chatgpt mode, return it, otherwise return the default model
let auth_mode = self.auth_manager.get_auth_mode();
let remote_models = self.remote_models(config).await;
if auth_mode == Some(AuthMode::ChatGPT) {
let has_auto_balanced = self
.build_available_models(remote_models)
.iter()
.any(|model| model.model == CODEX_AUTO_BALANCED_MODEL && model.show_in_picker);
if has_auto_balanced {
return CODEX_AUTO_BALANCED_MODEL.to_string();
}
return OPENAI_DEFAULT_CHATGPT_MODEL.to_string();
}
OPENAI_DEFAULT_API_MODEL.to_string()
}
pub async fn refresh_if_new_etag(&self, etag: String, remote_models_feature: bool) {
let current_etag = self.get_etag().await;
if current_etag.clone().is_some() && current_etag.as_deref() == Some(etag.as_str()) {
return;
}
if let Err(err) = self
.refresh_available_models_no_cache(remote_models_feature)
.await
{
error!("failed to refresh available models: {err}");
}
}
#[cfg(any(test, feature = "test-support"))]
pub fn get_model_offline(model: Option<&str>) -> String {
model.unwrap_or(OPENAI_DEFAULT_CHATGPT_MODEL).to_string()
}
#[cfg(any(test, feature = "test-support"))]
/// Offline helper that builds a `ModelInfo` without consulting remote state.
pub fn construct_model_info_offline(model: &str, config: &Config) -> ModelInfo {
model_info::with_config_overrides(model_info::find_model_info_for_slug(model), config)
}
async fn get_etag(&self) -> Option<String> {
self.etag.read().await.clone()
}
@@ -213,49 +240,25 @@ impl ModelsManager {
/// Attempt to satisfy the refresh from the cache when it matches the provider and TTL.
async fn try_load_cache(&self) -> bool {
// todo(aibrahim): consider storing fetched_at in ModelsManager so we don't always need to read from disk
let cache_path = self.cache_path();
let cache = match cache::load_cache(&cache_path).await {
Ok(cache) => cache,
Err(err) => {
error!("failed to load models cache: {err}");
return false;
}
};
let cache = match cache {
let cache = match self.cache_manager.load_fresh().await {
Some(cache) => cache,
None => return false,
};
if !cache.is_fresh(self.cache_ttl) {
return false;
}
let models = cache.models.clone();
*self.etag.write().await = cache.etag.clone();
self.apply_remote_models(models.clone()).await;
true
}
/// Serialize the latest fetch to disk for reuse across future processes.
async fn persist_cache(&self, models: &[ModelInfo], etag: Option<String>) {
let cache = ModelsCache {
fetched_at: Utc::now(),
etag,
models: models.to_vec(),
};
let cache_path = self.cache_path();
if let Err(err) = cache::save_cache(&cache_path, &cache).await {
error!("failed to write models cache: {err}");
}
}
/// Merge remote model metadata into picker-ready presets, preserving existing entries.
fn build_available_models(&self, mut remote_models: Vec<ModelInfo>) -> Vec<ModelPreset> {
remote_models.sort_by(|a, b| a.priority.cmp(&b.priority));
let remote_presets: Vec<ModelPreset> = remote_models.into_iter().map(Into::into).collect();
let existing_presets = self.local_models.clone();
let mut merged_presets = Self::merge_presets(remote_presets, existing_presets);
merged_presets = self.filter_visible_models(merged_presets);
let mut merged_presets = ModelPreset::merge(remote_presets, existing_presets);
let chatgpt_mode = self.auth_manager.get_auth_mode() == Some(AuthMode::ChatGPT);
merged_presets = ModelPreset::filter_by_auth(merged_presets, chatgpt_mode);
let has_default = merged_presets.iter().any(|preset| preset.is_default);
if !has_default {
@@ -272,40 +275,7 @@ impl ModelsManager {
merged_presets
}
fn filter_visible_models(&self, models: Vec<ModelPreset>) -> Vec<ModelPreset> {
let chatgpt_mode = self.auth_manager.get_auth_mode() == Some(AuthMode::ChatGPT);
models
.into_iter()
.filter(|model| chatgpt_mode || model.supported_in_api)
.collect()
}
fn merge_presets(
remote_presets: Vec<ModelPreset>,
existing_presets: Vec<ModelPreset>,
) -> Vec<ModelPreset> {
if remote_presets.is_empty() {
return existing_presets;
}
let remote_slugs: HashSet<&str> = remote_presets
.iter()
.map(|preset| preset.model.as_str())
.collect();
let mut merged_presets = remote_presets.clone();
for mut preset in existing_presets {
if remote_slugs.contains(preset.model.as_str()) {
continue;
}
preset.is_default = false;
merged_presets.push(preset);
}
merged_presets
}
async fn remote_models(&self, config: &Config) -> Vec<ModelInfo> {
async fn get_remote_models(&self, config: &Config) -> Vec<ModelInfo> {
if config.features.enabled(Feature::RemoteModels) {
self.remote_models.read().await.clone()
} else {
@@ -321,8 +291,35 @@ impl ModelsManager {
}
}
fn cache_path(&self) -> PathBuf {
self.codex_home.join(MODEL_CACHE_FILE)
#[cfg(any(test, feature = "test-support"))]
/// Construct a manager with a specific provider for testing.
pub fn with_provider(
codex_home: PathBuf,
auth_manager: Arc<AuthManager>,
provider: ModelProviderInfo,
) -> Self {
let cache_path = codex_home.join(MODEL_CACHE_FILE);
let cache_manager = ModelsCacheManager::new(cache_path, DEFAULT_MODEL_CACHE_TTL);
Self {
local_models: builtin_model_presets(auth_manager.get_auth_mode()),
remote_models: RwLock::new(Self::load_remote_models_from_file().unwrap_or_default()),
auth_manager,
etag: RwLock::new(None),
cache_manager,
provider,
}
}
#[cfg(any(test, feature = "test-support"))]
/// Get model identifier without consulting remote state or cache.
pub fn get_model_offline(model: Option<&str>) -> String {
model.unwrap_or(OPENAI_DEFAULT_CHATGPT_MODEL).to_string()
}
#[cfg(any(test, feature = "test-support"))]
/// Build `ModelInfo` without consulting remote state or cache.
pub fn construct_model_info_offline(model: &str, config: &Config) -> ModelInfo {
model_info::with_config_overrides(model_info::find_model_info_for_slug(model), config)
}
}
@@ -338,13 +335,13 @@ fn format_client_version_to_whole() -> String {
#[cfg(test)]
mod tests {
use super::cache::ModelsCache;
use super::*;
use crate::CodexAuth;
use crate::auth::AuthCredentialsStoreMode;
use crate::config::ConfigBuilder;
use crate::features::Feature;
use crate::model_provider_info::WireApi;
use chrono::Utc;
use codex_protocol::openai_models::ModelsResponse;
use core_test_support::responses::mount_models_once;
use pretty_assertions::assert_eq;
@@ -434,13 +431,15 @@ mod tests {
ModelsManager::with_provider(codex_home.path().to_path_buf(), auth_manager, provider);
manager
.refresh_available_models_with_cache(&config)
.refresh_available_models(&config, RefreshStrategy::OnlineIfUncached)
.await
.expect("refresh succeeds");
let cached_remote = manager.remote_models(&config).await;
let cached_remote = manager.get_remote_models(&config).await;
assert_eq!(cached_remote, remote_models);
let available = manager.list_models(&config).await;
let available = manager
.list_models(&config, RefreshStrategy::OnlineIfUncached)
.await;
let high_idx = available
.iter()
.position(|model| model.model == "priority-high")
@@ -494,22 +493,22 @@ mod tests {
ModelsManager::with_provider(codex_home.path().to_path_buf(), auth_manager, provider);
manager
.refresh_available_models_with_cache(&config)
.refresh_available_models(&config, RefreshStrategy::OnlineIfUncached)
.await
.expect("first refresh succeeds");
assert_eq!(
manager.remote_models(&config).await,
manager.get_remote_models(&config).await,
remote_models,
"remote cache should store fetched models"
);
// Second call should read from cache and avoid the network.
manager
.refresh_available_models_with_cache(&config)
.refresh_available_models(&config, RefreshStrategy::OnlineIfUncached)
.await
.expect("cached refresh succeeds");
assert_eq!(
manager.remote_models(&config).await,
manager.get_remote_models(&config).await,
remote_models,
"cache path should not mutate stored models"
);
@@ -549,19 +548,18 @@ mod tests {
ModelsManager::with_provider(codex_home.path().to_path_buf(), auth_manager, provider);
manager
.refresh_available_models_with_cache(&config)
.refresh_available_models(&config, RefreshStrategy::OnlineIfUncached)
.await
.expect("initial refresh succeeds");
// Rewrite cache with an old timestamp so it is treated as stale.
let cache_path = codex_home.path().join(MODEL_CACHE_FILE);
let contents =
std::fs::read_to_string(&cache_path).expect("cache file should exist after refresh");
let mut cache: ModelsCache =
serde_json::from_str(&contents).expect("cache should deserialize");
cache.fetched_at = Utc::now() - chrono::Duration::hours(1);
std::fs::write(&cache_path, serde_json::to_string_pretty(&cache).unwrap())
.expect("cache rewrite succeeds");
manager
.cache_manager
.manipulate_cache_for_test(|fetched_at| {
*fetched_at = Utc::now() - chrono::Duration::hours(1);
})
.await
.expect("cache manipulation succeeds");
let updated_models = vec![remote_model("fresh", "Fresh", 9)];
server.reset().await;
@@ -574,11 +572,11 @@ mod tests {
.await;
manager
.refresh_available_models_with_cache(&config)
.refresh_available_models(&config, RefreshStrategy::OnlineIfUncached)
.await
.expect("second refresh succeeds");
assert_eq!(
manager.remote_models(&config).await,
manager.get_remote_models(&config).await,
updated_models,
"stale cache should trigger refetch"
);
@@ -618,10 +616,10 @@ mod tests {
let provider = provider_for(server.uri());
let mut manager =
ModelsManager::with_provider(codex_home.path().to_path_buf(), auth_manager, provider);
manager.cache_ttl = Duration::ZERO;
manager.cache_manager.set_ttl(Duration::ZERO);
manager
.refresh_available_models_with_cache(&config)
.refresh_available_models(&config, RefreshStrategy::OnlineIfUncached)
.await
.expect("initial refresh succeeds");
@@ -636,7 +634,7 @@ mod tests {
.await;
manager
.refresh_available_models_with_cache(&config)
.refresh_available_models(&config, RefreshStrategy::OnlineIfUncached)
.await
.expect("second refresh succeeds");

File diff suppressed because it is too large

View File

@@ -138,8 +138,15 @@ impl ThreadManager {
self.state.models_manager.clone()
}
pub async fn list_models(&self, config: &Config) -> Vec<ModelPreset> {
self.state.models_manager.list_models(config).await
pub async fn list_models(
&self,
config: &Config,
refresh_strategy: crate::models_manager::manager::RefreshStrategy,
) -> Vec<ModelPreset> {
self.state
.models_manager
.list_models(config, refresh_strategy)
.await
}
pub async fn list_thread_ids(&self) -> Vec<ThreadId> {

View File

@@ -157,7 +157,7 @@ fn create_exec_command_tool() -> ToolSpec {
(
"shell".to_string(),
JsonSchema::String {
description: Some("Shell binary to launch. Defaults to /bin/bash.".to_string()),
description: Some("Shell binary to launch. Defaults to the user's default shell.".to_string()),
},
),
(

View File

@@ -2,6 +2,7 @@ use anyhow::Result;
use codex_core::CodexAuth;
use codex_core::ThreadManager;
use codex_core::built_in_model_providers;
use codex_core::models_manager::manager::RefreshStrategy;
use codex_protocol::openai_models::ModelPreset;
use codex_protocol::openai_models::ReasoningEffort;
use codex_protocol::openai_models::ReasoningEffortPreset;
@@ -18,7 +19,9 @@ async fn list_models_returns_api_key_models() -> Result<()> {
CodexAuth::from_api_key("sk-test"),
built_in_model_providers()["openai"].clone(),
);
let models = manager.list_models(&config).await;
let models = manager
.list_models(&config, RefreshStrategy::OnlineIfUncached)
.await;
let expected_models = expected_models_for_api_key();
assert_eq!(expected_models, models);
@@ -34,7 +37,9 @@ async fn list_models_returns_chatgpt_models() -> Result<()> {
CodexAuth::create_dummy_chatgpt_auth_for_testing(),
built_in_model_providers()["openai"].clone(),
);
let models = manager.list_models(&config).await;
let models = manager
.list_models(&config, RefreshStrategy::OnlineIfUncached)
.await;
let expected_models = expected_models_for_chatgpt();
assert_eq!(expected_models, models);

View File

@@ -42,6 +42,7 @@ mod live_cli;
mod model_info_overrides;
mod model_overrides;
mod model_tools;
mod models_cache_ttl;
mod models_etag_responses;
mod otel;
mod pending_input;

View File

@@ -0,0 +1,184 @@
use std::path::Path;
use std::sync::Arc;
use anyhow::Result;
use chrono::DateTime;
use chrono::TimeZone;
use chrono::Utc;
use codex_core::CodexAuth;
use codex_core::features::Feature;
use codex_core::models_manager::manager::RefreshStrategy;
use codex_core::protocol::EventMsg;
use codex_core::protocol::Op;
use codex_core::protocol::SandboxPolicy;
use codex_protocol::config_types::ReasoningSummary;
use codex_protocol::openai_models::ConfigShellToolType;
use codex_protocol::openai_models::ModelInfo;
use codex_protocol::openai_models::ModelVisibility;
use codex_protocol::openai_models::ModelsResponse;
use codex_protocol::openai_models::ReasoningEffort;
use codex_protocol::openai_models::ReasoningEffortPreset;
use codex_protocol::openai_models::TruncationPolicyConfig;
use codex_protocol::user_input::UserInput;
use core_test_support::responses;
use core_test_support::responses::ev_assistant_message;
use core_test_support::responses::ev_completed;
use core_test_support::responses::ev_response_created;
use core_test_support::responses::sse;
use core_test_support::responses::sse_response;
use core_test_support::test_codex::test_codex;
use core_test_support::wait_for_event;
use pretty_assertions::assert_eq;
use serde::Deserialize;
use serde::Serialize;
use wiremock::MockServer;
const ETAG: &str = "\"models-etag-ttl\"";
const CACHE_FILE: &str = "models_cache.json";
const REMOTE_MODEL: &str = "codex-test-ttl";
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn renews_cache_ttl_on_matching_models_etag() -> Result<()> {
let server = MockServer::start().await;
let remote_model = test_remote_model(REMOTE_MODEL, 1);
let models_mock = responses::mount_models_once_with_etag(
&server,
ModelsResponse {
models: vec![remote_model.clone()],
},
ETAG,
)
.await;
let mut builder = test_codex().with_auth(CodexAuth::create_dummy_chatgpt_auth_for_testing());
builder = builder.with_config(|config| {
config.features.enable(Feature::RemoteModels);
config.model = Some("gpt-5".to_string());
config.model_provider.request_max_retries = Some(0);
config.model_provider.stream_max_retries = Some(1);
});
let test = builder.build(&server).await?;
let codex = Arc::clone(&test.codex);
let config = test.config.clone();
// Populate cache via initial refresh.
let models_manager = test.thread_manager.get_models_manager();
let _ = models_manager
.list_models(&config, RefreshStrategy::OnlineIfUncached)
.await;
let cache_path = config.codex_home.join(CACHE_FILE);
let stale_time = Utc.timestamp_opt(0, 0).single().expect("valid epoch");
rewrite_cache_timestamp(&cache_path, stale_time).await?;
// Trigger a /responses turn whose reply carries a matching ETag, which should renew the cache TTL without another /models request.
let response_body = sse(vec![
ev_response_created("resp-1"),
ev_assistant_message("msg-1", "done"),
ev_completed("resp-1"),
]);
let _responses_mock = responses::mount_response_once(
&server,
sse_response(response_body).insert_header("X-Models-Etag", ETAG),
)
.await;
codex
.submit(Op::UserTurn {
items: vec![UserInput::Text { text: "hi".into() }],
final_output_json_schema: None,
cwd: test.cwd_path().to_path_buf(),
approval_policy: codex_core::protocol::AskForApproval::Never,
sandbox_policy: SandboxPolicy::DangerFullAccess,
model: test.session_configured.model.clone(),
effort: None,
summary: ReasoningSummary::Auto,
})
.await?;
let _ = wait_for_event(&codex, |event| matches!(event, EventMsg::TurnComplete(_))).await;
let refreshed_cache = read_cache(&cache_path).await?;
assert!(
refreshed_cache.fetched_at > stale_time,
"cache TTL should be renewed"
);
assert_eq!(
models_mock.requests().len(),
1,
"/models should not refetch on matching etag"
);
// Cached models remain usable offline.
let offline_models = test
.thread_manager
.list_models(&config, RefreshStrategy::Offline)
.await;
assert!(
offline_models
.iter()
.any(|preset| preset.model == REMOTE_MODEL),
"offline listing should use renewed cache"
);
Ok(())
}
async fn rewrite_cache_timestamp(path: &Path, fetched_at: DateTime<Utc>) -> Result<()> {
let mut cache = read_cache(path).await?;
cache.fetched_at = fetched_at;
let contents = serde_json::to_vec_pretty(&cache)?;
tokio::fs::write(path, contents).await?;
Ok(())
}
async fn read_cache(path: &Path) -> Result<ModelsCache> {
let contents = tokio::fs::read(path).await?;
let cache = serde_json::from_slice(&contents)?;
Ok(cache)
}
#[derive(Debug, Clone, Serialize, Deserialize)]
struct ModelsCache {
fetched_at: DateTime<Utc>,
#[serde(default)]
etag: Option<String>,
models: Vec<ModelInfo>,
}
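// With `serde_json::to_vec_pretty` on the core side, the on-disk file this
// struct mirrors looks roughly like (models elided; `fetched_at` uses
// chrono's RFC 3339 form):
//
//     {
//       "fetched_at": "1970-01-01T00:00:00Z",
//       "etag": "\"models-etag-ttl\"",
//       "models": [ ... ]
//     }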
fn test_remote_model(slug: &str, priority: i32) -> ModelInfo {
ModelInfo {
slug: slug.to_string(),
display_name: "Remote Test".to_string(),
description: Some("remote model".to_string()),
default_reasoning_level: Some(ReasoningEffort::Medium),
supported_reasoning_levels: vec![
ReasoningEffortPreset {
effort: ReasoningEffort::Low,
description: "low".to_string(),
},
ReasoningEffortPreset {
effort: ReasoningEffort::Medium,
description: "medium".to_string(),
},
],
shell_type: ConfigShellToolType::ShellCommand,
visibility: ModelVisibility::List,
supported_in_api: true,
priority,
upgrade: None,
base_instructions: "base instructions".to_string(),
supports_reasoning_summaries: false,
support_verbosity: false,
default_verbosity: None,
apply_patch_tool_type: None,
truncation_policy: TruncationPolicyConfig::bytes(10_000),
supports_parallel_tool_calls: false,
context_window: Some(272_000),
auto_compact_token_limit: None,
effective_context_window_percent: 95,
experimental_supported_tools: Vec::new(),
}
}

View File

@@ -85,7 +85,7 @@ async fn prompt_tools_are_consistent_across_requests() -> anyhow::Result<()> {
.await?;
let base_instructions = thread_manager
.get_models_manager()
.construct_model_info(
.get_model_info(
config
.model
.as_deref()

View File

@@ -7,9 +7,9 @@ use codex_core::CodexAuth;
use codex_core::ModelProviderInfo;
use codex_core::built_in_model_providers;
use codex_core::config::Config;
use codex_core::error::CodexErr;
use codex_core::features::Feature;
use codex_core::models_manager::manager::ModelsManager;
use codex_core::models_manager::manager::RefreshStrategy;
use codex_core::protocol::AskForApproval;
use codex_core::protocol::EventMsg;
use codex_core::protocol::ExecCommandSource;
@@ -127,7 +127,7 @@ async fn remote_models_remote_model_uses_unified_exec() -> Result<()> {
assert_eq!(requests[0].url.path(), "/v1/models");
let model_info = models_manager
.construct_model_info(REMOTE_MODEL_SLUG, &config)
.get_model_info(REMOTE_MODEL_SLUG, &config)
.await;
assert_eq!(model_info.shell_type, ConfigShellToolType::UnifiedExec);
@@ -225,9 +225,7 @@ async fn remote_models_truncation_policy_without_override_preserves_remote() ->
let models_manager = test.thread_manager.get_models_manager();
wait_for_model_available(&models_manager, slug, &test.config).await;
let model_info = models_manager
.construct_model_info(slug, &test.config)
.await;
let model_info = models_manager.get_model_info(slug, &test.config).await;
assert_eq!(
model_info.truncation_policy,
TruncationPolicyConfig::bytes(12_000)
@@ -273,9 +271,7 @@ async fn remote_models_truncation_policy_with_tool_output_override() -> Result<(
let models_manager = test.thread_manager.get_models_manager();
wait_for_model_available(&models_manager, slug, &test.config).await;
let model_info = models_manager
.construct_model_info(slug, &test.config)
.await;
let model_info = models_manager.get_model_info(slug, &test.config).await;
assert_eq!(
model_info.truncation_policy,
TruncationPolicyConfig::bytes(200)
@@ -423,12 +419,9 @@ async fn remote_models_preserve_builtin_presets() -> Result<()> {
provider,
);
manager
.refresh_available_models_with_cache(&config)
.await
.expect("refresh succeeds");
let available = manager.list_models(&config).await;
let available = manager
.list_models(&config, RefreshStrategy::OnlineIfUncached)
.await;
let remote = available
.iter()
.find(|model| model.model == "remote-alpha")
@@ -483,22 +476,25 @@ async fn remote_models_request_times_out_after_5s() -> Result<()> {
);
let start = Instant::now();
let refresh = timeout(
let model = timeout(
Duration::from_secs(7),
manager.refresh_available_models_with_cache(&config),
manager.get_default_model(&None, &config, RefreshStrategy::OnlineIfUncached),
)
.await;
let elapsed = start.elapsed();
let err = refresh
.expect("refresh should finish")
.expect_err("refresh should time out");
let request_summaries: Vec<String> = server
// get_default_model should return a default model even when refresh times out
let default_model = model.expect("get_default_model should finish and return a default model");
assert_eq!(
default_model, "gpt-5.2-codex",
"get_default_model should return the default model when refresh times out"
);
let _ = server
.received_requests()
.await
.expect("mock server should capture requests")
.iter()
.map(|req| format!("{} {}", req.method, req.url.path()))
.collect();
.collect::<Vec<String>>();
assert!(
elapsed >= Duration::from_millis(4_500),
"expected models call to block near the timeout; took {elapsed:?}"
@@ -507,10 +503,6 @@ async fn remote_models_request_times_out_after_5s() -> Result<()> {
elapsed < Duration::from_millis(5_800),
"expected models call to time out before the delayed response; took {elapsed:?}"
);
match err {
CodexErr::Timeout => {}
other => panic!("expected timeout error, got {other:?}; requests: {request_summaries:?}"),
}
assert_eq!(
models_mock.requests().len(),
1,
@@ -550,10 +542,14 @@ async fn remote_models_hide_picker_only_models() -> Result<()> {
provider,
);
let selected = manager.get_model(&None, &config).await;
let selected = manager
.get_default_model(&None, &config, RefreshStrategy::OnlineIfUncached)
.await;
assert_eq!(selected, "gpt-5.2-codex");
let available = manager.list_models(&config).await;
let available = manager
.list_models(&config, RefreshStrategy::OnlineIfUncached)
.await;
let hidden = available
.iter()
.find(|model| model.model == "codex-auto-balanced")
@@ -571,7 +567,9 @@ async fn wait_for_model_available(
let deadline = Instant::now() + Duration::from_secs(2);
loop {
if let Some(model) = {
let guard = manager.list_models(config).await;
let guard = manager
.list_models(config, RefreshStrategy::OnlineIfUncached)
.await;
guard.iter().find(|model| model.model == slug).cloned()
} {
return model;

View File

@@ -29,6 +29,7 @@ use codex_core::config::find_codex_home;
use codex_core::config::load_config_as_toml_with_cli_overrides;
use codex_core::config::resolve_oss_provider;
use codex_core::git_info::get_git_repo_root;
use codex_core::models_manager::manager::RefreshStrategy;
use codex_core::protocol::AskForApproval;
use codex_core::protocol::Event;
use codex_core::protocol::EventMsg;
@@ -233,15 +234,17 @@ pub async fn run_main(cli: Cli, codex_linux_sandbox_exe: Option<PathBuf>) -> any
}
};
let otel =
codex_core::otel_init::build_provider(&config, env!("CARGO_PKG_VERSION"), None, false);
#[allow(clippy::print_stderr)]
let otel = match otel {
Ok(otel) => otel,
Err(e) => {
let otel = match std::panic::catch_unwind(std::panic::AssertUnwindSafe(|| {
codex_core::otel_init::build_provider(&config, env!("CARGO_PKG_VERSION"), None, false)
})) {
Ok(Ok(otel)) => otel,
Ok(Err(e)) => {
eprintln!("Could not create otel exporter: {e}");
std::process::exit(1);
None
}
Err(_) => {
eprintln!("Could not create otel exporter: panicked during initialization");
None
}
};
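// Rationale: `build_provider` may return an error, and the `catch_unwind`
// guard treats a panic during initialization the same way; both paths now
// degrade to `None` (telemetry disabled) instead of aborting startup as the
// old `std::process::exit(1)` did.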
@@ -310,7 +313,7 @@ pub async fn run_main(cli: Cli, codex_linux_sandbox_exe: Option<PathBuf>) -> any
);
let default_model = thread_manager
.get_models_manager()
.get_model(&config.model, &config)
.get_default_model(&config.model, &config, RefreshStrategy::OnlineIfUncached)
.await;
// Handle resume subcommand by resolving a rollout path and using explicit resume API.

View File

@@ -1,4 +1,5 @@
use std::collections::HashMap;
use std::collections::HashSet;
use schemars::JsonSchema;
use serde::Deserialize;
@@ -243,6 +244,46 @@ impl From<ModelInfo> for ModelPreset {
}
}
impl ModelPreset {
/// Filter models based on authentication mode.
///
/// In ChatGPT mode, all models are visible. Otherwise, only API-supported models are shown.
pub fn filter_by_auth(models: Vec<ModelPreset>, chatgpt_mode: bool) -> Vec<ModelPreset> {
models
.into_iter()
.filter(|model| chatgpt_mode || model.supported_in_api)
.collect()
}
/// Merge remote presets with existing presets, preferring remote when slugs match.
///
/// Remote presets take precedence. Existing presets not in remote are appended with `is_default` set to false.
pub fn merge(
remote_presets: Vec<ModelPreset>,
existing_presets: Vec<ModelPreset>,
) -> Vec<ModelPreset> {
if remote_presets.is_empty() {
return existing_presets;
}
let remote_slugs: HashSet<&str> = remote_presets
.iter()
.map(|preset| preset.model.as_str())
.collect();
let mut merged_presets = remote_presets.clone();
for mut preset in existing_presets {
if remote_slugs.contains(preset.model.as_str()) {
continue;
}
preset.is_default = false;
merged_presets.push(preset);
}
merged_presets
}
}
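// A small illustration of the merge contract with hypothetical slugs and a
// hypothetical `preset(slug, is_default)` helper (remote wins on slug
// collisions; carried-over local presets lose their default flag):
//
//     let remote = vec![preset("gpt-a", true)];
//     let local = vec![preset("gpt-a", true), preset("gpt-b", true)];
//     let merged = ModelPreset::merge(remote, local);
//     // merged: remote "gpt-a" (still default), then "gpt-b" with
//     // is_default == false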
fn reasoning_effort_mapping_from_presets(
presets: &[ReasoningEffortPreset],
) -> Option<HashMap<ReasoningEffort, ReasoningEffort>> {

View File

@@ -32,7 +32,7 @@ use codex_core::config::edit::ConfigEdit;
use codex_core::config::edit::ConfigEditsBuilder;
#[cfg(target_os = "windows")]
use codex_core::features::Feature;
use codex_core::models_manager::manager::ModelsManager;
use codex_core::models_manager::manager::RefreshStrategy;
use codex_core::models_manager::model_presets::HIDE_GPT_5_1_CODEX_MAX_MIGRATION_PROMPT_CONFIG;
use codex_core::models_manager::model_presets::HIDE_GPT5_1_MIGRATION_PROMPT_CONFIG;
use codex_core::protocol::DeprecationNoticeEvent;
@@ -210,9 +210,8 @@ async fn handle_model_migration_prompt_if_needed(
config: &mut Config,
model: &str,
app_event_tx: &AppEventSender,
models_manager: Arc<ModelsManager>,
available_models: Vec<ModelPreset>,
) -> Option<AppExitInfo> {
let available_models = models_manager.list_models(config).await;
let upgrade = available_models
.iter()
.find(|preset| preset.model == model)
@@ -383,14 +382,18 @@ impl App {
));
let mut model = thread_manager
.get_models_manager()
.get_model(&config.model, &config)
.get_default_model(&config.model, &config, RefreshStrategy::Offline)
.await;
let available_models = thread_manager
.get_models_manager()
.list_models(&config, RefreshStrategy::Offline)
.await;
let exit_info = handle_model_migration_prompt_if_needed(
tui,
&mut config,
model.as_str(),
&app_event_tx,
thread_manager.get_models_manager(),
available_models,
)
.await;
if let Some(exit_info) = exit_info {
@@ -619,7 +622,7 @@ impl App {
let model_info = self
.server
.get_models_manager()
.construct_model_info(self.current_model.as_str(), &self.config)
.get_model_info(self.current_model.as_str(), &self.config)
.await;
match event {
AppEvent::NewSession => {

View File

@@ -33,7 +33,8 @@
//! burst detection for actual paste streams.
//!
//! The burst detector can also be disabled (`disable_paste_burst`), which bypasses the state
//! machine and treats the key stream as normal typing.
//! machine and treats the key stream as normal typing. When toggling from enabled → disabled, the
//! composer flushes/clears any in-flight burst state so it cannot leak into subsequent input.
//!
//! For the detailed burst state machine, see `codex-rs/tui/src/bottom_pane/paste_burst.rs`.
//! For a narrative overview of the combined state machine, see `docs/tui-chat-composer.md`.
@@ -379,16 +380,27 @@ impl ChatComposer {
/// `disable_paste_burst` is an escape hatch for terminals/platforms where the burst heuristic
/// is unwanted or has already been handled elsewhere.
///
/// When enabling the flag we clear the burst classification window so subsequent input cannot
/// be incorrectly grouped into a previous burst.
/// When transitioning from enabled → disabled, we "defuse" any in-flight burst state so it
/// cannot affect subsequent normal typing:
///
/// This does not flush any in-progress buffer; callers should avoid toggling this mid-burst
/// (or should flush first).
/// - First, flush any held/buffered text immediately via
/// [`PasteBurst::flush_before_modified_input`], and feed it through `handle_paste(String)`.
/// This preserves user input and routes it through the same integration path as explicit
/// pastes (large-paste placeholders, image-path detection, and popup sync).
/// - Then clear the burst timing and Enter-suppression window via
/// [`PasteBurst::clear_after_explicit_paste`].
///
/// We intentionally do not use `clear_window_after_non_char()` here: it clears timing state
/// without emitting any buffered text, which can leave a non-empty buffer unable to flush
/// later (because `flush_if_due()` relies on `last_plain_char_time` to time out).
pub(crate) fn set_disable_paste_burst(&mut self, disabled: bool) {
let was_disabled = self.disable_paste_burst;
self.disable_paste_burst = disabled;
if disabled && !was_disabled {
self.paste_burst.clear_window_after_non_char();
if let Some(pasted) = self.paste_burst.flush_before_modified_input() {
self.handle_paste(pasted);
}
self.paste_burst.clear_after_explicit_paste();
}
}
@@ -799,6 +811,15 @@ impl ChatComposer {
/// the cursor to a UTF-8 char boundary before slicing `textarea.text()`.
#[inline]
fn handle_non_ascii_char(&mut self, input: KeyEvent) -> (InputResult, bool) {
if self.disable_paste_burst {
// When burst detection is disabled, treat IME/non-ASCII input as normal typing.
// In particular, do not retro-capture or buffer already-inserted prefix text.
self.textarea.input(input);
let text_after = self.textarea.text();
self.pending_pastes
.retain(|(placeholder, _)| text_after.contains(placeholder));
return (InputResult::None, true);
}
if let KeyEvent {
code: KeyCode::Char(ch),
..
@@ -1372,7 +1393,7 @@ impl ChatComposer {
.next()
.unwrap_or("")
.starts_with('/');
if self.paste_burst.is_active() && !in_slash_context {
if !self.disable_paste_burst && self.paste_burst.is_active() && !in_slash_context {
let now = Instant::now();
if self.paste_burst.append_newline_if_active(now) {
return (InputResult::None, true);
@@ -1381,10 +1402,11 @@ impl ChatComposer {
// During a paste-like burst, treat Enter/Ctrl+Shift+Q as a newline instead of submit.
let now = Instant::now();
if self
.paste_burst
.newline_should_insert_instead_of_submit(now)
&& !in_slash_context
if !in_slash_context
&& !self.disable_paste_burst
&& self
.paste_burst
.newline_should_insert_instead_of_submit(now)
{
self.textarea.insert_str("\n");
self.paste_burst.extend_window(now);
@@ -1580,6 +1602,7 @@ impl ChatComposer {
// If we're capturing a burst and receive Enter, accumulate it instead of inserting.
if matches!(input.code, KeyCode::Enter)
&& !self.disable_paste_burst
&& self.paste_burst.is_active()
&& self.paste_burst.append_newline_if_active(now)
{
@@ -1598,7 +1621,7 @@ impl ChatComposer {
} = input
{
let has_ctrl_or_alt = has_ctrl_or_alt(modifiers);
if !has_ctrl_or_alt {
if !has_ctrl_or_alt && !self.disable_paste_burst {
// Non-ASCII characters (e.g., from IMEs) can arrive in quick bursts, so avoid
// holding the first char while still allowing burst detection for paste input.
if !ch.is_ascii() {
@@ -1644,6 +1667,17 @@ impl ChatComposer {
}
}
// Flush any buffered burst before applying a non-char input (arrow keys, etc).
//
// `clear_window_after_non_char()` clears `last_plain_char_time`. If we cleared that while
// `PasteBurst.buffer` is non-empty, `flush_if_due()` would no longer have a timestamp to
// time out against, and the buffered paste could remain stuck until another plain char
// arrives.
if !matches!(input.code, KeyCode::Char(_) | KeyCode::Enter)
&& let Some(pasted) = self.paste_burst.flush_before_modified_input()
{
self.handle_paste(pasted);
}
// Backspace at the start of an image placeholder should delete that placeholder (rather
// than deleting content before it). Do this without scanning the full text by consulting
// the textarea's element list.
@@ -2921,6 +2955,104 @@ mod tests {
assert_eq!(composer.textarea.text(), "hi\nthere");
}
/// Behavior: even if Enter suppression would normally be active for a burst, Enter should
/// still dispatch a built-in slash command when the first line begins with `/`.
#[test]
fn slash_context_enter_ignores_paste_burst_enter_suppression() {
use crate::slash_command::SlashCommand;
use crossterm::event::KeyCode;
use crossterm::event::KeyEvent;
use crossterm::event::KeyModifiers;
let (tx, _rx) = unbounded_channel::<AppEvent>();
let sender = AppEventSender::new(tx);
let mut composer = ChatComposer::new(
true,
sender,
false,
"Ask Codex to do anything".to_string(),
false,
);
composer.textarea.set_text("/diff");
composer.textarea.set_cursor("/diff".len());
composer
.paste_burst
.begin_with_retro_grabbed(String::new(), Instant::now());
let (result, _) =
composer.handle_key_event(KeyEvent::new(KeyCode::Enter, KeyModifiers::NONE));
assert!(matches!(result, InputResult::Command(SlashCommand::Diff)));
}
/// Behavior: if a burst is buffering text and the user presses a non-char key, flush the
/// buffered burst *before* applying that key so the buffer cannot get stuck.
#[test]
fn non_char_key_flushes_active_burst_before_input() {
use crossterm::event::KeyCode;
use crossterm::event::KeyEvent;
use crossterm::event::KeyModifiers;
let (tx, _rx) = unbounded_channel::<AppEvent>();
let sender = AppEventSender::new(tx);
let mut composer = ChatComposer::new(
true,
sender,
false,
"Ask Codex to do anything".to_string(),
false,
);
// Force an active burst so we can deterministically buffer characters without relying on
// timing.
composer
.paste_burst
.begin_with_retro_grabbed(String::new(), Instant::now());
let _ = composer.handle_key_event(KeyEvent::new(KeyCode::Char('h'), KeyModifiers::NONE));
let _ = composer.handle_key_event(KeyEvent::new(KeyCode::Char('i'), KeyModifiers::NONE));
assert!(composer.textarea.text().is_empty());
assert!(composer.is_in_paste_burst());
let _ = composer.handle_key_event(KeyEvent::new(KeyCode::Left, KeyModifiers::NONE));
assert_eq!(composer.textarea.text(), "hi");
assert_eq!(composer.textarea.cursor(), 1);
assert!(!composer.is_in_paste_burst());
}
/// Behavior: enabling `disable_paste_burst` flushes any held first character (flicker
/// suppression) and then inserts subsequent chars immediately without creating burst state.
#[test]
fn disable_paste_burst_flushes_pending_first_char_and_inserts_immediately() {
use crossterm::event::KeyCode;
use crossterm::event::KeyEvent;
use crossterm::event::KeyModifiers;
let (tx, _rx) = unbounded_channel::<AppEvent>();
let sender = AppEventSender::new(tx);
let mut composer = ChatComposer::new(
true,
sender,
false,
"Ask Codex to do anything".to_string(),
false,
);
// First ASCII char is normally held briefly. Flip the config mid-stream and ensure the
// held char is not dropped.
let _ = composer.handle_key_event(KeyEvent::new(KeyCode::Char('a'), KeyModifiers::NONE));
assert!(composer.is_in_paste_burst());
assert!(composer.textarea.text().is_empty());
composer.set_disable_paste_burst(true);
assert_eq!(composer.textarea.text(), "a");
assert!(!composer.is_in_paste_burst());
let _ = composer.handle_key_event(KeyEvent::new(KeyCode::Char('b'), KeyModifiers::NONE));
assert_eq!(composer.textarea.text(), "ab");
assert!(!composer.is_in_paste_burst());
}
/// Behavior: a small explicit paste inserts text directly (no placeholder), and the submitted
/// text matches what is visible in the textarea.
#[test]

View File

@@ -301,15 +301,23 @@ pub async fn run_main(
ensure_oss_provider_ready(provider_id, &config).await?;
}
let otel =
codex_core::otel_init::build_provider(&config, env!("CARGO_PKG_VERSION"), None, true);
#[allow(clippy::print_stderr)]
let otel = match otel {
Ok(otel) => otel,
Err(e) => {
eprintln!("Could not create otel exporter: {e}");
std::process::exit(1);
let otel = match std::panic::catch_unwind(std::panic::AssertUnwindSafe(|| {
codex_core::otel_init::build_provider(&config, env!("CARGO_PKG_VERSION"), None, true)
})) {
Ok(Ok(otel)) => otel,
Ok(Err(e)) => {
#[allow(clippy::print_stderr)]
{
eprintln!("Could not create otel exporter: {e}");
}
None
}
Err(_) => {
#[allow(clippy::print_stderr)]
{
eprintln!("Could not create otel exporter: panicked during initialization");
}
None
}
};

View File

@@ -35,14 +35,14 @@ model_provider = "ollama"
std::fs::write(codex_home.join("config.toml"), config_contents)?;
let CodexCliOutput { exit_code, output } = run_codex_cli(codex_home, cwd).await?;
assert_eq!(1, exit_code, "Codex CLI should exit nonzero.");
assert_ne!(0, exit_code, "Codex CLI should exit nonzero.");
assert!(
output.contains("ERROR: Failed to initialize codex:"),
"expected startup error in output, got: {output}"
);
assert!(
output.contains("failed to read execpolicy files"),
"expected execpolicy read error in output, got: {output}"
output.contains("failed to read rules files"),
"expected rules read error in output, got: {output}"
);
Ok(())
}
@@ -63,7 +63,7 @@ async fn run_codex_cli(
codex_home.as_ref().display().to_string(),
);
let args = vec!["-c".to_string(), "analytics_enabled=false".to_string()];
let args = vec!["-c".to_string(), "analytics.enabled=false".to_string()];
let spawned = codex_utils_pty::spawn_pty_process(
codex_cli.to_string_lossy().as_ref(),
&args,

View File

@@ -49,7 +49,7 @@ use codex_core::config::Config;
use codex_core::config::edit::ConfigEditsBuilder;
#[cfg(target_os = "windows")]
use codex_core::features::Feature;
use codex_core::models_manager::manager::ModelsManager;
use codex_core::models_manager::manager::RefreshStrategy;
use codex_core::models_manager::model_presets::HIDE_GPT_5_1_CODEX_MAX_MIGRATION_PROMPT_CONFIG;
use codex_core::models_manager::model_presets::HIDE_GPT5_1_MIGRATION_PROMPT_CONFIG;
use codex_core::protocol::DeprecationNoticeEvent;
@@ -247,9 +247,8 @@ async fn handle_model_migration_prompt_if_needed(
config: &mut Config,
model: &str,
app_event_tx: &AppEventSender,
models_manager: Arc<ModelsManager>,
available_models: Vec<ModelPreset>,
) -> Option<AppExitInfo> {
let available_models = models_manager.list_models(config).await;
let upgrade = available_models
.iter()
.find(|preset| preset.model == model)
@@ -451,14 +450,18 @@ impl App {
));
let mut model = thread_manager
.get_models_manager()
.get_model(&config.model, &config)
.get_default_model(&config.model, &config, RefreshStrategy::Offline)
.await;
let available_models = thread_manager
.get_models_manager()
.list_models(&config, RefreshStrategy::Offline)
.await;
let exit_info = handle_model_migration_prompt_if_needed(
tui,
&mut config,
model.as_str(),
&app_event_tx,
thread_manager.get_models_manager(),
available_models,
)
.await;
if let Some(exit_info) = exit_info {

View File

@@ -32,7 +32,8 @@
//! burst detection for actual paste streams.
//!
//! The burst detector can also be disabled (`disable_paste_burst`), which bypasses the state
//! machine and treats the key stream as normal typing.
//! machine and treats the key stream as normal typing. When toggling from enabled → disabled, the
//! composer flushes/clears any in-flight burst state so it cannot leak into subsequent input.
//!
//! For the detailed burst state machine, see `codex-rs/tui2/src/bottom_pane/paste_burst.rs`.
//! For a narrative overview of the combined state machine, see `docs/tui-chat-composer.md`.
@@ -391,16 +392,27 @@ impl ChatComposer {
/// `disable_paste_burst` is an escape hatch for terminals/platforms where the burst heuristic
/// is unwanted or has already been handled elsewhere.
///
/// When enabling the flag we clear the burst classification window so subsequent input cannot
/// be incorrectly grouped into a previous burst.
/// When transitioning from enabled → disabled, we "defuse" any in-flight burst state so it
/// cannot affect subsequent normal typing:
///
/// This does not flush any in-progress buffer; callers should avoid toggling this mid-burst
/// (or should flush first).
/// - First, flush any held/buffered text immediately via
/// [`PasteBurst::flush_before_modified_input`], and feed it through `handle_paste(String)`.
/// This preserves user input and routes it through the same integration path as explicit
/// pastes (large-paste placeholders, image-path detection, and popup sync).
/// - Then clear the burst timing and Enter-suppression window via
/// [`PasteBurst::clear_after_explicit_paste`].
///
/// We intentionally do not use `clear_window_after_non_char()` here: it clears timing state
/// without emitting any buffered text, which can leave a non-empty buffer unable to flush
/// later (because `flush_if_due()` relies on `last_plain_char_time` to time out).
pub(crate) fn set_disable_paste_burst(&mut self, disabled: bool) {
let was_disabled = self.disable_paste_burst;
self.disable_paste_burst = disabled;
if disabled && !was_disabled {
self.paste_burst.clear_window_after_non_char();
if let Some(pasted) = self.paste_burst.flush_before_modified_input() {
self.handle_paste(pasted);
}
self.paste_burst.clear_after_explicit_paste();
}
}
@@ -703,6 +715,7 @@ impl ChatComposer {
}
#[inline]
/// Clamp a cursor index to a UTF-8 char boundary.
fn clamp_to_char_boundary(text: &str, pos: usize) -> usize {
let mut p = pos.min(text.len());
if p < text.len() && !text.is_char_boundary(p) {
@@ -732,6 +745,15 @@ impl ChatComposer {
/// the cursor to a UTF-8 char boundary before slicing `textarea.text()`.
#[inline]
fn handle_non_ascii_char(&mut self, input: KeyEvent) -> (InputResult, bool) {
if self.disable_paste_burst {
// When burst detection is disabled, treat IME/non-ASCII input as normal typing.
// In particular, do not retro-capture or buffer already-inserted prefix text.
self.textarea.input(input);
let text_after = self.textarea.text();
self.pending_pastes
.retain(|(placeholder, _)| text_after.contains(placeholder));
return (InputResult::None, true);
}
if let KeyEvent {
code: KeyCode::Char(ch),
..
@@ -1305,7 +1327,7 @@ impl ChatComposer {
.next()
.unwrap_or("")
.starts_with('/');
if self.paste_burst.is_active() && !in_slash_context {
if !self.disable_paste_burst && self.paste_burst.is_active() && !in_slash_context {
let now = Instant::now();
if self.paste_burst.append_newline_if_active(now) {
return (InputResult::None, true);
@@ -1314,10 +1336,11 @@ impl ChatComposer {
// During a paste-like burst, treat Enter/Ctrl+Shift+Q as a newline instead of submit.
let now = Instant::now();
if self
.paste_burst
.newline_should_insert_instead_of_submit(now)
&& !in_slash_context
if !in_slash_context
&& !self.disable_paste_burst
&& self
.paste_burst
.newline_should_insert_instead_of_submit(now)
{
self.textarea.insert_str("\n");
self.paste_burst.extend_window(now);
@@ -1519,6 +1542,7 @@ impl ChatComposer {
// If we're capturing a burst and receive Enter, accumulate it instead of inserting.
if matches!(input.code, KeyCode::Enter)
&& !self.disable_paste_burst
&& self.paste_burst.is_active()
&& self.paste_burst.append_newline_if_active(now)
{
@@ -1537,7 +1561,7 @@ impl ChatComposer {
} = input
{
let has_ctrl_or_alt = has_ctrl_or_alt(modifiers);
if !has_ctrl_or_alt {
if !has_ctrl_or_alt && !self.disable_paste_burst {
// Non-ASCII characters (e.g., from IMEs) can arrive in quick bursts, so avoid
// holding the first char while still allowing burst detection for paste input.
if !ch.is_ascii() {
@@ -1583,6 +1607,18 @@ impl ChatComposer {
}
}
// Flush any buffered burst before applying a non-char input (arrow keys, etc).
//
// `clear_window_after_non_char()` clears `last_plain_char_time`. If we cleared that while
// `PasteBurst.buffer` is non-empty, `flush_if_due()` would no longer have a timestamp to
// time out against, and the buffered paste could remain stuck until another plain char
// arrives.
if !matches!(input.code, KeyCode::Char(_) | KeyCode::Enter)
&& let Some(pasted) = self.paste_burst.flush_before_modified_input()
{
self.handle_paste(pasted);
}
// Backspace at the start of an image placeholder should delete that placeholder (rather
// than deleting content before it). Do this without scanning the full text by consulting
// the textarea's element list.
@@ -2896,6 +2932,136 @@ mod tests {
assert_eq!(composer.textarea.text(), "hi\nthere");
}
/// Behavior: even if Enter suppression would normally be active for a burst, Enter should
/// still dispatch a built-in slash command when the first line begins with `/`.
#[test]
fn slash_context_enter_ignores_paste_burst_enter_suppression() {
use crate::slash_command::SlashCommand;
use crossterm::event::KeyCode;
use crossterm::event::KeyEvent;
use crossterm::event::KeyModifiers;
let (tx, _rx) = unbounded_channel::<AppEvent>();
let sender = AppEventSender::new(tx);
let mut composer = ChatComposer::new(
true,
sender,
false,
"Ask Codex to do anything".to_string(),
false,
);
composer.textarea.set_text("/diff");
composer.textarea.set_cursor("/diff".len());
composer
.paste_burst
.begin_with_retro_grabbed(String::new(), Instant::now());
let (result, _) =
composer.handle_key_event(KeyEvent::new(KeyCode::Enter, KeyModifiers::NONE));
assert!(matches!(result, InputResult::Command(SlashCommand::Diff)));
}
/// Behavior: if a burst is buffering text and the user presses a non-char key, flush the
/// buffered burst *before* applying that key so the buffer cannot get stuck.
#[test]
fn non_char_key_flushes_active_burst_before_input() {
use crossterm::event::KeyCode;
use crossterm::event::KeyEvent;
use crossterm::event::KeyModifiers;
let (tx, _rx) = unbounded_channel::<AppEvent>();
let sender = AppEventSender::new(tx);
let mut composer = ChatComposer::new(
true,
sender,
false,
"Ask Codex to do anything".to_string(),
false,
);
// Force an active burst so we can deterministically buffer characters without relying on
// timing.
composer
.paste_burst
.begin_with_retro_grabbed(String::new(), Instant::now());
let _ = composer.handle_key_event(KeyEvent::new(KeyCode::Char('h'), KeyModifiers::NONE));
let _ = composer.handle_key_event(KeyEvent::new(KeyCode::Char('i'), KeyModifiers::NONE));
assert!(composer.textarea.text().is_empty());
assert!(composer.is_in_paste_burst());
let _ = composer.handle_key_event(KeyEvent::new(KeyCode::Left, KeyModifiers::NONE));
assert_eq!(composer.textarea.text(), "hi");
assert_eq!(composer.textarea.cursor(), 1);
assert!(!composer.is_in_paste_burst());
}
/// Behavior: enabling `disable_paste_burst` flushes any held first character (flicker
/// suppression) and then inserts subsequent chars immediately without creating burst state.
#[test]
fn disable_paste_burst_flushes_pending_first_char_and_inserts_immediately() {
use crossterm::event::KeyCode;
use crossterm::event::KeyEvent;
use crossterm::event::KeyModifiers;
let (tx, _rx) = unbounded_channel::<AppEvent>();
let sender = AppEventSender::new(tx);
let mut composer = ChatComposer::new(
true,
sender,
false,
"Ask Codex to do anything".to_string(),
false,
);
// First ASCII char is normally held briefly. Flip the config mid-stream and ensure the
// held char is not dropped.
let _ = composer.handle_key_event(KeyEvent::new(KeyCode::Char('a'), KeyModifiers::NONE));
assert!(composer.is_in_paste_burst());
assert!(composer.textarea.text().is_empty());
composer.set_disable_paste_burst(true);
assert_eq!(composer.textarea.text(), "a");
assert!(!composer.is_in_paste_burst());
let _ = composer.handle_key_event(KeyEvent::new(KeyCode::Char('b'), KeyModifiers::NONE));
assert_eq!(composer.textarea.text(), "ab");
assert!(!composer.is_in_paste_burst());
}
/// Behavior: when a burst is already active, a non-ASCII char should be captured into the
/// burst buffer via the `try_append_char_if_active` fast-path.
#[test]
fn non_ascii_appends_to_active_burst_buffer() {
use crossterm::event::KeyCode;
use crossterm::event::KeyEvent;
use crossterm::event::KeyModifiers;
let (tx, _rx) = unbounded_channel::<AppEvent>();
let sender = AppEventSender::new(tx);
let mut composer = ChatComposer::new(
true,
sender,
false,
"Ask Codex to do anything".to_string(),
false,
);
// Force an active burst so the non-ASCII char takes the fast-path
// (try_append_char_if_active) into the burst buffer.
composer
.paste_burst
.begin_with_retro_grabbed(String::new(), Instant::now());
let _ = composer.handle_key_event(KeyEvent::new(KeyCode::Char('1'), KeyModifiers::NONE));
let _ = composer.handle_key_event(KeyEvent::new(KeyCode::Char('あ'), KeyModifiers::NONE));
assert!(composer.textarea.text().is_empty());
let _ = flush_after_paste_burst(&mut composer);
assert_eq!(composer.textarea.text(), "1あ");
}
/// Behavior: a small explicit paste inserts text directly (no placeholder), and the submitted
/// text matches what is visible in the textarea.
#[test]

View File

@@ -317,15 +317,23 @@ pub async fn run_main(
         ensure_oss_provider_ready(provider_id, &config).await?;
     }

-    let otel =
-        codex_core::otel_init::build_provider(&config, env!("CARGO_PKG_VERSION"), None, true);
-    #[allow(clippy::print_stderr)]
-    let otel = match otel {
-        Ok(otel) => otel,
-        Err(e) => {
-            eprintln!("Could not create otel exporter: {e}");
-            std::process::exit(1);
+    let otel = match std::panic::catch_unwind(std::panic::AssertUnwindSafe(|| {
+        codex_core::otel_init::build_provider(&config, env!("CARGO_PKG_VERSION"), None, true)
+    })) {
+        Ok(Ok(otel)) => otel,
+        Ok(Err(e)) => {
+            #[allow(clippy::print_stderr)]
+            {
+                eprintln!("Could not create otel exporter: {e}");
+            }
+            None
+        }
+        Err(_) => {
+            #[allow(clippy::print_stderr)]
+            {
+                eprintln!("Could not create otel exporter: panicked during initialization");
+            }
+            None
         }
     };
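The shape of the fix: exporter construction is now wrapped in `catch_unwind`, so both `Err` results and panics during initialization degrade to `None` (telemetry off) instead of aborting startup. A minimal self-contained sketch of the same pattern — `init_exporter` and its argument are illustrative stand-ins, not the real `otel_init` API:

use std::panic::{catch_unwind, AssertUnwindSafe};

// Illustrative stand-in for a fallible initializer that may also panic.
fn init_exporter(endpoint: &str) -> Result<String, String> {
    if endpoint.is_empty() {
        panic!("empty endpoint");
    }
    Ok(format!("exporter -> {endpoint}"))
}

fn main() {
    // Catch both Err returns and panics; fall back to None either way so the
    // program keeps running without telemetry instead of exiting.
    let exporter: Option<String> = match catch_unwind(AssertUnwindSafe(|| init_exporter(""))) {
        Ok(Ok(e)) => Some(e),
        Ok(Err(e)) => {
            eprintln!("Could not create exporter: {e}");
            None
        }
        Err(_) => {
            eprintln!("Could not create exporter: panicked during initialization");
            None
        }
    };
    println!("telemetry enabled: {}", exporter.is_some());
}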

View File

@@ -35,14 +35,14 @@ model_provider = "ollama"
     std::fs::write(codex_home.join("config.toml"), config_contents)?;
     let CodexCliOutput { exit_code, output } = run_codex_cli(codex_home, cwd).await?;
-    assert_eq!(1, exit_code, "Codex CLI should exit nonzero.");
+    assert_ne!(0, exit_code, "Codex CLI should exit nonzero.");
     assert!(
         output.contains("ERROR: Failed to initialize codex:"),
         "expected startup error in output, got: {output}"
     );
     assert!(
-        output.contains("failed to read execpolicy files"),
-        "expected execpolicy read error in output, got: {output}"
+        output.contains("failed to read rules files"),
+        "expected rules read error in output, got: {output}"
     );
     Ok(())
 }
@@ -63,7 +63,7 @@ async fn run_codex_cli(
         codex_home.as_ref().display().to_string(),
     );
-    let args = vec!["-c".to_string(), "analytics_enabled=false".to_string()];
+    let args = vec!["-c".to_string(), "analytics.enabled=false".to_string()];
    let spawned = codex_utils_pty::spawn_pty_process(
        codex_cli.to_string_lossy().as_ref(),
        &args,

View File

@@ -92,9 +92,9 @@ When enabled:
 - The burst detector is bypassed for new input (no flicker suppression hold and no burst buffering
   decisions for incoming characters).
 - The key stream is treated as normal typing (including normal slash command behavior).
-- Enabling the flag clears the burst classification window. In the current implementation it does
-  **not** flush or clear an already-buffered burst, so callers should avoid toggling this flag
-  mid-burst (or should flush first).
+- Enabling the flag flushes any held/buffered burst text through the normal paste path
+  (`ChatComposer::handle_paste`) and then clears the burst timing and Enter-suppression windows so
+  transient burst state cannot leak into subsequent input.

 ### Enter handling
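The flush-then-clear ordering documented above matters: buffered text is routed through the normal paste path before the timing state is reset, so nothing is silently dropped. A toy model of that ordering — every name below is an illustrative stand-in, not the real `ChatComposer`/`PasteBurst` API:

// Toy model of the flush-then-clear ordering; names are illustrative.
#[derive(Default)]
struct PasteBurstModel {
    buffered: String,
    window_open: bool,
}

#[derive(Default)]
struct ComposerModel {
    text: String,
    burst: PasteBurstModel,
}

impl ComposerModel {
    // Stand-in for the normal paste-integration path.
    fn handle_paste(&mut self, pasted: String) {
        self.text.push_str(&pasted);
    }

    fn set_disable_paste_burst(&mut self) {
        // 1) Flush held/buffered burst text through the paste path first...
        let pending = std::mem::take(&mut self.burst.buffered);
        if !pending.is_empty() {
            self.handle_paste(pending);
        }
        // 2) ...then clear the timing/suppression state so nothing leaks.
        self.burst.window_open = false;
    }
}

fn main() {
    let mut composer = ComposerModel::default();
    composer.burst.buffered = "hi".to_string();
    composer.burst.window_open = true;
    composer.set_disable_paste_burst();
    assert_eq!(composer.text, "hi"); // buffered text survived the toggle
    assert!(!composer.burst.window_open);
    println!("ok: {:?}", composer.text);
}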

View File

@@ -44,6 +44,13 @@ install:
 test:
     cargo nextest run --no-fail-fast

+# Build and run Codex from source using Bazel.
+# Note we have to use the combination of `[no-cd]` and `--run_under="cd $PWD &&"`
+# to ensure that Bazel runs the command in the current working directory.
+[no-cd]
+bazel-codex *args:
+    bazel run //codex-rs/cli:codex --run_under="cd $PWD &&" -- "$@"
+
 bazel-test:
     bazel test //... --keep_going
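With this recipe in place, an invocation like `just bazel-codex --version` (the arguments are illustrative) should build `//codex-rs/cli:codex` and run it from the directory where `just` was invoked rather than from Bazel's runfiles tree — the behavior the `[no-cd]` / `--run_under="cd $PWD &&"` pairing exists to guarantee.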

View File

@@ -0,0 +1,195 @@
#!/usr/bin/env python3
import argparse
import asyncio
import datetime as dt
import json
import sys
from typing import Any
import websockets
HOST = "127.0.0.1"
DEFAULT_PORT = 8765
PATH = "/v1/responses"
CALL_ID = "shell-command-call"
FUNCTION_NAME = "shell_command"
FUNCTION_ARGS_JSON = json.dumps({"command": "echo websocket"}, separators=(",", ":"))
ASSISTANT_TEXT = "done"
def _utc_iso() -> str:
return dt.datetime.now(tz=dt.timezone.utc).isoformat(timespec="milliseconds")
def _default_usage() -> dict[str, Any]:
return {
"input_tokens": 0,
"input_tokens_details": None,
"output_tokens": 0,
"output_tokens_details": None,
"total_tokens": 0,
}
def _event_response_created(response_id: str) -> dict[str, Any]:
return {"type": "response.created", "response": {"id": response_id}}
def _event_response_done() -> dict[str, Any]:
return {"type": "response.done", "response": {"usage": _default_usage()}}
def _event_response_completed(response_id: str) -> dict[str, Any]:
return {"type": "response.completed", "response": {"id": response_id, "usage": _default_usage()}}
def _event_function_call(call_id: str, name: str, arguments_json: str) -> dict[str, Any]:
return {
"type": "response.output_item.done",
"item": {"type": "function_call", "call_id": call_id, "name": name, "arguments": arguments_json},
}
def _event_assistant_message(message_id: str, text: str) -> dict[str, Any]:
return {
"type": "response.output_item.done",
"item": {
"type": "message",
"role": "assistant",
"id": message_id,
"content": [{"type": "output_text", "text": text}],
},
}
def _dump_json(payload: Any) -> str:
return json.dumps(payload, ensure_ascii=False, separators=(",", ":"))
def _print_request(prefix: str, payload: Any) -> None:
pretty = json.dumps(payload, ensure_ascii=False, indent=2, sort_keys=True)
sys.stdout.write(f"{prefix} {_utc_iso()}\n{pretty}\n")
sys.stdout.flush()
async def _handle_connection(
websocket: Any,
*,
expected_path: str = PATH,
) -> None:
# websockets v15 exposes the request path here.
path = getattr(getattr(websocket, "request", None), "path", None)
if path is None:
# Older handler signatures could pass `path` separately; accept if unavailable.
path = "(unknown)"
sys.stdout.write(f"[conn] {_utc_iso()} connected path={path}\n")
sys.stdout.flush()
path_no_qs = path.split("?", 1)[0] if path != "(unknown)" else path
if path_no_qs != "(unknown)" and path_no_qs != expected_path:
sys.stdout.write(f"[conn] {_utc_iso()} rejecting unexpected path (expected {expected_path})\n")
sys.stdout.flush()
await websocket.close(code=1008, reason="unexpected websocket path")
return
async def recv_json(label: str) -> Any:
msg = await websocket.recv()
if isinstance(msg, bytes):
payload = json.loads(msg.decode("utf-8"))
else:
payload = json.loads(msg)
_print_request(f"[{label}] recv", payload)
return payload
async def send_event(ev: dict[str, Any]) -> None:
sys.stdout.write(f"[conn] {_utc_iso()} send {_dump_json(ev)}\n")
await websocket.send(_dump_json(ev))
# Request 1: provoke a function call (mirrors `codex-rs/core/tests/suite/agent_websocket.rs`).
await recv_json("req1")
await send_event(_event_response_created("resp-1"))
await send_event(_event_function_call(CALL_ID, FUNCTION_NAME, FUNCTION_ARGS_JSON))
await send_event(_event_response_done())
# Request 2: expect appended tool output; send final assistant message.
await recv_json("req2")
await send_event(_event_response_created("resp-2"))
await send_event(_event_assistant_message("msg-1", ASSISTANT_TEXT))
await send_event(_event_response_completed("resp-2"))
sys.stdout.write(f"[conn] {_utc_iso()} closing\n")
sys.stdout.flush()
await websocket.close()
async def _serve(port: int) -> int:
async def handler(ws: Any) -> None:
try:
await _handle_connection(ws, expected_path=PATH)
except websockets.exceptions.ConnectionClosedOK:
return
try:
server = await websockets.serve(handler, HOST, port)
except OSError as err:
sys.stderr.write(f"[server] failed to bind ws://{HOST}:{port}: {err}\n")
return 2
bound_port = server.sockets[0].getsockname()[1]
ws_uri = f"ws://{HOST}:{bound_port}"
sys.stdout.write("[server] mock Responses WebSocket server running\n")
sys.stdout.write(f"""Add this to your config.toml:
[model_providers.localapi_ws]
base_url = "{ws_uri}/v1"
name = "localapi_ws"
wire_api = "responses_websocket"
env_key = "OPENAI_API_KEY_STAGING"
[profiles.localapi_ws]
model = "gpt-5.2"
model_provider = "localapi_ws"
model_reasoning_effort = "medium"
start codex with `codex --profile localapi_ws`
""")
sys.stdout.flush()
try:
await asyncio.Future()
finally:
server.close()
await server.wait_closed()
return 0
def main() -> int:
parser = argparse.ArgumentParser(
description=(
"Mock a minimal Responses API WebSocket endpoint for the `test_codex` flow.\n"
f"Binds to {HOST}:{DEFAULT_PORT} by default and logs incoming JSON requests to stdout."
),
formatter_class=argparse.RawTextHelpFormatter,
)
parser.add_argument(
"--port",
type=int,
default=DEFAULT_PORT,
help=f"Bind port (default: {DEFAULT_PORT}; use 0 for random free port).",
)
args = parser.parse_args()
try:
return asyncio.run(_serve(args.port))
except KeyboardInterrupt:
return 0
if __name__ == "__main__":
raise SystemExit(main())
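To exercise the flow end to end: run the script (e.g. `python3 mock_responses_ws.py --port 0` — the filename is assumed here, and `--port 0` binds a random free port), copy the printed `model_providers.localapi_ws` and `profiles.localapi_ws` stanzas into `config.toml`, set the `OPENAI_API_KEY_STAGING` variable referenced by `env_key` (any non-empty value should do for a local mock), and start `codex --profile localapi_ws`. The mock then drives one round trip: a `shell_command` function call in response to the first request, and the final assistant message "done" once the tool output comes back.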