Compare commits

...

12 Commits

Author SHA1 Message Date
David Zbarsky
3545251f3c longpaths 2026-02-10 10:26:32 -05:00
David Zbarsky
017374c6c8 more hacks 2026-02-10 09:24:49 -05:00
David Zbarsky
0e76aa3ec1 windows exec hacks 2026-02-10 08:42:24 -05:00
David Zbarsky
9b13e10025 [bazel] Enable windows builds? 2026-02-10 06:54:41 -05:00
zbarsky-openai
86183847fd [bazel] Upgrade some rulesets in preparation for enabling windows, part 2 (#11197)
https://github.com/openai/codex/pull/11109 had automerge set, so I
didn't get to address feedback before merging, oops!
2026-02-09 20:08:10 +00:00
pakrym-oai
086d02fb14 Try to stop small helper methods (#11203) 2026-02-09 20:01:30 +00:00
pakrym-oai
7044511ae8 Move warmup to the task level (#11216)
Instead of storing a special connection on the client level make the
regular task responsible for establishing a normal client session and
open a connection on it.

Then when the turn is started we pass in a pre-established session.
2026-02-09 11:58:53 -08:00
pakrym-oai
ccd17374cb Move warmup to the task level (#11216)
Instead of storing a special connection on the client level make the
regular task responsible for establishing a normal client session and
open a connection on it.

Then when the turn is started we pass in a pre-established session.
2026-02-09 10:57:52 -08:00
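The warmup commit above describes a structural move: the task establishes a normal client session eagerly, and the turn is handed that pre-established session instead of the client caching a special warmup connection. A minimal sketch of that shape (types and names are hypothetical, not the actual codex-rs definitions):

```rust
// Hypothetical illustration of "warmup at the task level": the task
// opens the session once, up front, and each turn receives it already
// established rather than knowing how to connect itself.

struct Session {
    connected: bool,
}

impl Session {
    // Task-level warmup: establish a regular client session eagerly.
    fn establish() -> Session {
        Session { connected: true }
    }
}

struct Turn<'a> {
    session: &'a Session,
}

impl<'a> Turn<'a> {
    // The turn is started with a pre-established session; it never
    // opens a connection of its own.
    fn start(session: &'a Session) -> Turn<'a> {
        assert!(session.connected, "task must warm up before starting a turn");
        Turn { session }
    }

    fn is_ready(&self) -> bool {
        self.session.connected
    }
}
```

The design benefit claimed by the commit is that there is no special-cased connection state on the client; the regular task lifecycle owns the session.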
Eric Traut
9346d321d2 Fixed bug in file watcher that results in spurious skills update events and large log files (#11217)
On some platforms, the "notify" file watcher library emits events for
file opens and reads, not just file modifications or deletes. The
previous implementation didn't take this into account.

Furthermore, the `tracing.info!` call that I previously added was
emitting a lot of logs. I had assumed incorrectly that `info` level
logging was disabled by default, but it's apparently enabled for this
crate. This is resulting in large logs (hundreds of MB) for some users.
2026-02-09 10:33:57 -08:00
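The fix described above amounts to filtering watcher events: on some platforms the `notify` crate reports access events (file opens and reads) alongside real mutations, and only mutations should trigger a skills update. A hedged sketch of that filter, using a local enum that merely mirrors the relevant shape of `notify`'s event kinds rather than the real type:

```rust
// Illustrative only: a local stand-in for the watcher's event kinds.
// `Access` covers opens/reads, which some platforms emit even though
// no content changed.

#[derive(Debug, PartialEq)]
enum WatchEvent {
    Access, // file opened or read; no content change
    Create,
    Modify,
    Remove,
}

// Only mutation events should trigger a skills update; access events
// are ignored so reads no longer produce spurious updates (and logs).
fn is_relevant(event: &WatchEvent) -> bool {
    !matches!(event, WatchEvent::Access)
}
```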
Rasmus Rygaard
b2d3843109 Translate websocket errors (#10937)
When getting errors over a websocket connection, translate the error
into our regular API error format
2026-02-09 17:53:09 +00:00
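Translating websocket errors into the regular API error format means callers handle one error shape regardless of transport. A minimal sketch of such a mapping, with hypothetical types (not the actual codex-rs error definitions):

```rust
// Hypothetical: the common API error shape callers already handle.
#[derive(Debug, PartialEq)]
struct ApiError {
    code: String,
    message: String,
}

// Hypothetical: raw failures surfaced by the websocket transport.
enum WsError {
    ConnectionClosed,
    Protocol(String),
}

// Map each transport-level failure onto the regular API error format,
// so websocket and HTTP paths report errors identically.
fn translate_ws_error(err: WsError) -> ApiError {
    match err {
        WsError::ConnectionClosed => ApiError {
            code: "connection_closed".to_string(),
            message: "websocket connection closed".to_string(),
        },
        WsError::Protocol(detail) => ApiError {
            code: "protocol_error".to_string(),
            message: detail,
        },
    }
}
```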
jif-oai
cfce286459 tools: remove get_memory tool and tests (#11198)
Drop this memory tool as the design changed
2026-02-09 17:47:36 +00:00
Charley Cunningham
0883e5d3e5 core: account for all post-response items in auto-compact token checks (#11132)
## Summary
- change compaction pre-check accounting to include all items added
after the last model-generated item, not only trailing codex-generated
outputs
- use that boundary consistently in get_total_token_usage() and
get_total_token_usage_breakdown()
- update history tests to cover user/tool-output items after the last
model item

## Why
last_token_usage.total_tokens is API-reported for the last successful
model response. After that point, local history may gain additional
items (user messages, injected context, tool outputs). Compaction
triggering must account for all of those items to avoid late compaction
attempts that can overflow context.

## Testing
- just fmt
- cargo test -p codex-core
2026-02-09 08:34:38 -08:00
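The accounting change above can be sketched in a few lines: the API-reported token count only covers history up to the last successful model response, so the compaction pre-check must add an estimate for every item appended after that boundary (user messages, injected context, tool outputs), not just trailing model-generated output. Names here are illustrative, not the actual codex-core API:

```rust
// Total usage = API-reported count for the last model response, plus
// estimates for *all* items added locally after that boundary.
fn total_token_estimate(last_api_reported: u64, post_response_items: &[u64]) -> u64 {
    last_api_reported + post_response_items.iter().sum::<u64>()
}

// Trigger compaction once the estimate crosses a fraction of the
// context limit, so it fires before the context can overflow.
fn should_compact(total: u64, context_limit: u64, threshold: f64) -> bool {
    (total as f64) >= (context_limit as f64) * threshold
}
```

Counting only trailing model output would undercount here and delay compaction until the next request had already overflowed the context.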
26 changed files with 921 additions and 651 deletions

View File

@@ -7,7 +7,7 @@ common --disk_cache=~/.cache/bazel-disk-cache
 common --repo_contents_cache=~/.cache/bazel-repo-contents-cache
 common --repository_cache=~/.cache/bazel-repo-cache
 common --remote_cache_compression
-startup --experimental_remote_repo_contents_cache
+#startup --experimental_remote_repo_contents_cache
 common --experimental_platform_in_output_dir
@@ -17,6 +17,7 @@ common --noenable_runfiles
 common --enable_platform_specific_config
 common:linux --host_platform=//:local_linux
+common:windows --host_platform=//:local_windows
 common --@rules_cc//cc/toolchains/args/archiver_flags:use_libtool_on_macos=False
 common --@toolchains_llvm_bootstrapped//config:experimental_stub_libgcc_s

View File

@@ -10,11 +10,6 @@ on:
       - main
   workflow_dispatch:
-concurrency:
-  # Cancel previous actions from the same PR or branch except 'main' branch.
-  # See https://docs.github.com/en/actions/using-jobs/using-concurrency and https://docs.github.com/en/actions/learn-github-actions/contexts for more info.
-  group: concurrency-group::${{ github.workflow }}::${{ github.event.pull_request.number > 0 && format('pr-{0}', github.event.pull_request.number) || github.ref_name }}${{ github.ref_name == 'main' && format('::{0}', github.run_id) || ''}}
-  cancel-in-progress: ${{ github.ref_name != 'main' }}
 jobs:
   test:
     strategy:
@@ -36,9 +31,10 @@ jobs:
           - target: aarch64-unknown-linux-musl
           - os: ubuntu-24.04
             target: x86_64-unknown-linux-musl
-          # TODO: Enable Windows once we fix the toolchain issues there.
-          #- os: windows-latest
-          #  target: x86_64-pc-windows-gnullvm
+          # Windows
+          - os: windows-latest
+            target: x86_64-pc-windows-gnullvm
     runs-on: ${{ matrix.os }}
     # Configure a human readable name for each job
@@ -95,7 +91,12 @@ jobs:
         shell: pwsh
         run: |
           # Use a very short path to reduce argv/path length issues.
-          "BAZEL_STARTUP_ARGS=--output_user_root=C:\" | Out-File -FilePath $env:GITHUB_ENV -Encoding utf8 -Append
+          "BAZEL_STARTUP_ARGS=--output_user_root=D:\" | Out-File -FilePath $env:GITHUB_ENV -Encoding utf8 -Append
+      - name: Enable Git long paths (Windows)
+        if: runner.os == 'Windows'
+        shell: pwsh
+        run: git config --global core.longpaths true
       - name: bazel test //...
         env:

View File

@@ -23,5 +23,5 @@ common:macos --config=remote
 common:macos --strategy=remote
 common:macos --strategy=TestRunner=darwin-sandbox,local
-# On windows we can not cross-build the tests but run them locally due to what appears to be a Bazel bug
+# On windows we cannot cross-build the tests but run them locally due to what appears to be a Bazel bug
 # (windows vs unix path confusion)

View File

@@ -15,13 +15,14 @@ In the codex-rs folder where the rust code lives:
 - When writing tests, prefer comparing the equality of entire objects over fields one by one.
 - When making a change that adds or changes an API, ensure that the documentation in the `docs/` folder is up to date if applicable.
 - If you change `ConfigToml` or nested config types, run `just write-config-schema` to update `codex-rs/core/config.schema.json`.
+- Do not create small helper methods that are referenced only once.
 Run `just fmt` (in `codex-rs` directory) automatically after you have finished making Rust code changes; do not ask for approval to run it. Additionally, run the tests:
 1. Run the test for the specific project that was changed. For example, if changes were made in `codex-rs/tui`, run `cargo test -p codex-tui`.
 2. Once those pass, if any changes were made in common, core, or protocol, run the complete test suite with `cargo test --all-features`. project-specific or individual tests can be run without asking the user, but do ask the user before running the complete test suite.
-Before finalizing a large change to `codex-rs`, run `just fix -p <project>` (in `codex-rs` directory) to fix any linter issues in the code. Prefer scoping with `-p` to avoid slow workspacewide Clippy builds; only run `just fix` without `-p` if you changed shared crates.
+Before finalizing a large change to `codex-rs`, run `just fix -p <project>` (in `codex-rs` directory) to fix any linter issues in the code. Prefer scoping with `-p` to avoid slow workspacewide Clippy builds; only run `just fix` without `-p` if you changed shared crates. Do not re-run tests after running `fix` or `fmt`.
 ## TUI style conventions

View File

@@ -1,11 +1,12 @@
 bazel_dep(name = "platforms", version = "1.0.0")
-bazel_dep(name = "toolchains_llvm_bootstrapped", version = "0.5.2")
+bazel_dep(name = "toolchains_llvm_bootstrapped", version = "0.5.4")
 single_version_override(
     module_name = "toolchains_llvm_bootstrapped",
     patch_strip = 1,
     patches = [
+        "//patches:toolchains_llvm_bootstrapped_mingw_cfguard.patch",
         "//patches:toolchains_llvm_bootstrapped_mingw_ssp.patch",
         "//patches:toolchains_llvm_bootstrapped_resource_dir.patch",
         "//patches:toolchains_llvm_private_frameworks.patch",
     ],
 )
@@ -39,9 +40,13 @@ bazel_dep(name = "rules_rs", version = "0.0.23")
 # Special toolchains branch
 archive_override(
     module_name = "rules_rs",
-    integrity = "sha256-YbDRjZos4UmfIPY98znK1BgBWRQ1/ui3CtL6RqxE30I=",
-    strip_prefix = "rules_rs-6cf3d940fdc48baf3ebd6c37daf8e0be8fc73ecb",
-    url = "https://github.com/dzbarsky/rules_rs/archive/6cf3d940fdc48baf3ebd6c37daf8e0be8fc73ecb.tar.gz",
+    integrity = "sha256-C5nTK7oyhrP+qWGx+AXHlWk4hz6PlC2F9sTvO/gM77g=",
+    patch_strip = 1,
+    patches = [
+        "//patches:rules_rs_windows_gnullvm_exec.patch",
+    ],
+    strip_prefix = "rules_rs-7a47423f9f52de881b82beda8b1d8a76968775aa",
+    url = "https://github.com/dzbarsky/rules_rs/archive/7a47423f9f52de881b82beda8b1d8a76968775aa.tar.gz",
 )
 rules_rust = use_extension("@rules_rs//rs/experimental:rules_rust.bzl", "rules_rust")
@@ -50,16 +55,15 @@ use_repo(rules_rust, "rules_rust")
 toolchains = use_extension("@rules_rs//rs/experimental/toolchains:module_extension.bzl", "toolchains")
 toolchains.toolchain(
     edition = "2024",
-    # TODO(zbarsky): bump to 1.93 after fixing mingw
-    version = "1.92.0",
+    version = "1.93.0",
 )
 use_repo(
     toolchains,
-    "experimental_rust_toolchains_1_92_0",
-    "rust_toolchain_artifacts_macos_aarch64_1_92_0",
+    "experimental_rust_toolchains_1_93_0",
+    "rust_toolchain_artifacts_macos_aarch64_1_93_0",
 )
-register_toolchains("@experimental_rust_toolchains_1_92_0//:all")
+register_toolchains("@experimental_rust_toolchains_1_93_0//:all")
 crate = use_extension("@rules_rs//rs:extensions.bzl", "crate")
 crate.from_cargo(
@@ -70,10 +74,12 @@ crate.from_cargo(
         "aarch64-unknown-linux-musl",
         "aarch64-apple-darwin",
         "aarch64-pc-windows-gnullvm",
+        "aarch64-pc-windows-msvc",
         "x86_64-unknown-linux-gnu",
         "x86_64-unknown-linux-musl",
         "x86_64-apple-darwin",
         "x86_64-pc-windows-gnullvm",
+        "x86_64-pc-windows-msvc",
     ],
 )

MODULE.bazel.lock (generated, 188 changed lines)
View File

@@ -49,7 +49,8 @@
 "https://bcr.bazel.build/modules/bazel_features/1.9.0/MODULE.bazel": "885151d58d90d8d9c811eb75e3288c11f850e1d6b481a8c9f766adee4712358b",
 "https://bcr.bazel.build/modules/bazel_features/1.9.1/MODULE.bazel": "8f679097876a9b609ad1f60249c49d68bfab783dd9be012faf9d82547b14815a",
 "https://bcr.bazel.build/modules/bazel_lib/3.0.0/MODULE.bazel": "22b70b80ac89ad3f3772526cd9feee2fa412c2b01933fea7ed13238a448d370d",
-"https://bcr.bazel.build/modules/bazel_lib/3.0.0/source.json": "895f21909c6fba01d7c17914bb6c8e135982275a1b18cdaa4e62272217ef1751",
+"https://bcr.bazel.build/modules/bazel_lib/3.2.0/MODULE.bazel": "39b50d94b9be6bda507862254e20c263f9b950e3160112348d10a938be9ce2c2",
+"https://bcr.bazel.build/modules/bazel_lib/3.2.0/source.json": "a6f45a903134bebbf33a6166dd42b4c7ab45169de094b37a85f348ca41170a84",
 "https://bcr.bazel.build/modules/bazel_skylib/1.0.3/MODULE.bazel": "bcb0fd896384802d1ad283b4e4eb4d718eebd8cb820b0a2c3a347fb971afd9d8",
 "https://bcr.bazel.build/modules/bazel_skylib/1.1.1/MODULE.bazel": "1add3e7d93ff2e6998f9e118022c84d163917d912f5afafb3058e3d2f1545b5e",
 "https://bcr.bazel.build/modules/bazel_skylib/1.2.0/MODULE.bazel": "44fe84260e454ed94ad326352a698422dbe372b21a1ac9f3eab76eb531223686",
@@ -217,7 +218,8 @@
 "https://bcr.bazel.build/modules/tar.bzl/0.6.0/MODULE.bazel": "a3584b4edcfafcabd9b0ef9819808f05b372957bbdff41601429d5fd0aac2e7c",
 "https://bcr.bazel.build/modules/tar.bzl/0.6.0/source.json": "4a620381df075a16cb3a7ed57bd1d05f7480222394c64a20fa51bdb636fda658",
 "https://bcr.bazel.build/modules/toolchains_llvm_bootstrapped/0.5.2/MODULE.bazel": "f7c822cea99caef928d7cbe695498096e53c4b2c0ea45997e9a64bf6b77b43b0",
-"https://bcr.bazel.build/modules/toolchains_llvm_bootstrapped/0.5.2/source.json": "13d260b3a10804b3b2ab822c49e329c36ef5cd325fa01d0f9a1616c5364b7fff",
+"https://bcr.bazel.build/modules/toolchains_llvm_bootstrapped/0.5.4/MODULE.bazel": "a23e1b57a4deac3d8bd617f3bbdb1e88198a5a7dfeadd7a4f2a43e87f419ad2a",
+"https://bcr.bazel.build/modules/toolchains_llvm_bootstrapped/0.5.4/source.json": "335ee2e79e7338a2941121a2cbeafe5eed28a6e309c96f8ab66324b5aa906d13",
 "https://bcr.bazel.build/modules/upb/0.0.0-20220923-a547704/MODULE.bazel": "7298990c00040a0e2f121f6c32544bab27d4452f80d9ce51349b1a28f3005c43",
 "https://bcr.bazel.build/modules/with_cfg.bzl/0.12.0/MODULE.bazel": "b573395fe63aef4299ba095173e2f62ccfee5ad9bbf7acaa95dba73af9fc2b38",
 "https://bcr.bazel.build/modules/with_cfg.bzl/0.12.0/source.json": "3f3fbaeafecaf629877ad152a2c9def21f8d330d91aa94c5dc75bbb98c10b8b8",
@@ -1500,95 +1502,99 @@
"zvariant_utils_2.1.0": "{\"dependencies\":[{\"name\":\"proc-macro2\",\"req\":\"^1.0.81\"},{\"name\":\"quote\",\"req\":\"^1.0.36\"},{\"features\":[\"extra-traits\",\"full\"],\"name\":\"syn\",\"req\":\"^2.0.64\"}],\"features\":{}}"
},
"@@rules_rs+//rs/experimental/toolchains:module_extension.bzl%toolchains": {
"cargo-1.92.0-aarch64-apple-darwin.tar.xz": "bce6e7def37240c5a63115828017a9fc0ebcb31e64115382f5943b62b71aa34a",
"cargo-1.92.0-aarch64-pc-windows-gnullvm.tar.xz": "56d38a6a89e2a38ec777938d74feb93a3498bc8788df96a94fcb4eac2e07338b",
"cargo-1.92.0-aarch64-unknown-linux-gnu.tar.xz": "cb2ce6be6411b986e25c71ad8a813f9dfbe3461738136fd684e3644f8dd75df4",
"cargo-1.92.0-x86_64-apple-darwin.tar.xz": "b033a7c33aba8af947c9d0ab2785f9696347cded228ffe731897f1c627466262",
"cargo-1.92.0-x86_64-pc-windows-gnullvm.tar.xz": "c27d5936e1c11feb33f3221b85741c33f783c8723fca84552e7c2a5a73959352",
"cargo-1.92.0-x86_64-unknown-linux-gnu.tar.xz": "e5e12be2c7126a7036c8adf573078a28b92611f5767cc9bd0a6f7c83081df103",
"clippy-1.92.0-aarch64-apple-darwin.tar.xz": "08c65b6cf8faae3861706f8c97acf2aa6b784ed9455354c3b13495a7cfe5cb84",
"clippy-1.92.0-aarch64-pc-windows-gnullvm.tar.xz": "ea3e63c684273f629f918e8f50d510225c48a35ec28eaf026fa68c27f273bbd6",
"clippy-1.92.0-aarch64-unknown-linux-gnu.tar.xz": "333ab38c673b589468b8293b525e5704fb52515d9d516ee28d3d34dd5a63d3c3",
"clippy-1.92.0-x86_64-apple-darwin.tar.xz": "39cce87aab3d8b71350edcb3f943fba7bc59581ce1e65e158ee01e64cf0f1cf5",
"clippy-1.92.0-x86_64-pc-windows-gnullvm.tar.xz": "b22f615210f328aabafa8d0eab3fed62784fa771956c51a92ae24a4dfd2073ed",
"clippy-1.92.0-x86_64-unknown-linux-gnu.tar.xz": "2c1bf6e7da8ec50feba03fe188fc9a744ba59e2c6ece7970c13e201d08defa9a",
"rust-std-1.92.0-aarch64-apple-darwin.tar.xz": "ea619984fcb8e24b05dbd568d599b8e10d904435ab458dfba6469e03e0fd69aa",
"rust-std-1.92.0-aarch64-apple-ios-macabi.tar.xz": "d0453906db0abe9efb595e1ed59feb131a94c0312c0bc72da6482936667ff8da",
"rust-std-1.92.0-aarch64-apple-ios-sim.tar.xz": "9362b66fbaf2503276ea34f34b41b171269db9ba4dce05dde3472ea9d1852601",
"rust-std-1.92.0-aarch64-apple-ios.tar.xz": "81fb496f94a3f52ec2818a76a7107905b13b37d490ca849d22c0a7a7d7b0125e",
"rust-std-1.92.0-aarch64-linux-android.tar.xz": "ce6350bd43856c630773c93e40310989c6cb98a1178233c44e512a31761943d0",
"rust-std-1.92.0-aarch64-pc-windows-gnullvm.tar.xz": "4e79f57b80040757a3425568b5978986f026daf771649a64021c74bcc138214b",
"rust-std-1.92.0-aarch64-pc-windows-msvc.tar.xz": "b20c5c696af4ecfb683370ca0ee3c76ab8726fe6470ce9f1368d41a5b06ea065",
"rust-std-1.92.0-aarch64-unknown-fuchsia.tar.xz": "6021246b6c0d9d6104c0b350f7cd48a31d5707edaa8063f77f5636fe07bdb79d",
"rust-std-1.92.0-aarch64-unknown-linux-gnu.tar.xz": "ce2ab42c09d633b0a8b4b65a297c700ae0fad47aae890f75894782f95be7e36d",
"rust-std-1.92.0-aarch64-unknown-linux-musl.tar.xz": "94b9f84f21d29825c55a27fbb6b4b9fb9296a4a841aa54d85b95c42445623363",
"rust-std-1.92.0-aarch64-unknown-none-softfloat.tar.xz": "0dc46fafaaa36f53eec49e14a69e1d6d9ac6f0b9624a01081ad311d8139a2be0",
"rust-std-1.92.0-aarch64-unknown-none.tar.xz": "ab6a2edab924739fc2c86e9f8fd8068b379e881a6261a177d66608d3ea4cacb1",
"rust-std-1.92.0-aarch64-unknown-uefi.tar.xz": "f98001222bf23743598382c232b08d3137035b53645a420a1425288d501808af",
"rust-std-1.92.0-arm-linux-androideabi.tar.xz": "d41ec7255556b605dda04201a23e4445b5b86bc6786c703f91eb985bddc9f4ca",
"rust-std-1.92.0-arm-unknown-linux-gnueabi.tar.xz": "35478e20f8cc13912b31f2905b313a2820ddae564b363a66ab7a5da39d12787f",
"rust-std-1.92.0-arm-unknown-linux-gnueabihf.tar.xz": "836ada282b65c57d71f9b7e6fb490832410c954aac905c5437fb0bf851b53c83",
"rust-std-1.92.0-arm-unknown-linux-musleabi.tar.xz": "91d3c5fdbda9ba2e778bb638e3a5d060f3021bbc7a60edf22b0035be4e611b30",
"rust-std-1.92.0-arm-unknown-linux-musleabihf.tar.xz": "20411e3858308add0dadde9ce2059d82cdd6c5339881fa93ac995e733df0f832",
"rust-std-1.92.0-armv7-linux-androideabi.tar.xz": "06ac2f08dcf5c480e7767c0541c9bd7460754ec3a369a4b7097b328eca28fab9",
"rust-std-1.92.0-armv7-unknown-linux-gnueabi.tar.xz": "3d5e3fb14441ea8e6fc6307cbd89efd69be42143ff7f2b8dfb19818ddca993c0",
"rust-std-1.92.0-armv7-unknown-linux-gnueabihf.tar.xz": "e3ac810db43067d8af9e17679d47534e871f1daad8cd0762e67da935681e9e19",
"rust-std-1.92.0-armv7-unknown-linux-musleabi.tar.xz": "89cbf7db934d543754b446a52398081ec40ee6b98ed5bca93ac1dbd5faf48c16",
"rust-std-1.92.0-armv7-unknown-linux-musleabihf.tar.xz": "c7c08a389bd351226fd52266bfe2d2c444597e1bbb5d0307da44bdfa4df62c99",
"rust-std-1.92.0-i686-linux-android.tar.xz": "e45832b005556f65c6d26f05f34bd4ab5ceea8d9b9fefc4175d52b0780ca89db",
"rust-std-1.92.0-i686-pc-windows-gnu.tar.xz": "b3eefe10d4aed3ccbeaff3ae9cd2e0e0a36894c0635d0e69af1c9a698679d75f",
"rust-std-1.92.0-i686-pc-windows-gnullvm.tar.xz": "e6d709f85dea51d81f2f1a030845b732b9f7761d95d93c57c7276b0a451c2993",
"rust-std-1.92.0-i686-pc-windows-msvc.tar.xz": "e5671b276047647e994a7cab99c90ee70c46787572fbe4e266a13c6edb84d5ce",
"rust-std-1.92.0-i686-unknown-freebsd.tar.xz": "e008a0506ec4d5eff30abdf376c7933e235670bd6c5e1131c52bcda097a21116",
"rust-std-1.92.0-i686-unknown-linux-gnu.tar.xz": "abc840631a4462a4c8ec61341110ff653ab2ef86ef3b10f84489d00cc8a9310d",
"rust-std-1.92.0-i686-unknown-linux-musl.tar.xz": "c796874b1343721f575203fa179dc258e09ac45cd95dd6c35c4d5979a3870494",
"rust-std-1.92.0-i686-unknown-uefi.tar.xz": "90da7759e28e62fb82454d4eb4e02ac14320b8da3372e02f9ca4f47f7afbfe40",
"rust-std-1.92.0-powerpc-unknown-linux-gnu.tar.xz": "c3e809a324b00eb53096c58df38645bb496c6560de334dfe04ed0b77c0605aaa",
"rust-std-1.92.0-powerpc64-unknown-linux-gnu.tar.xz": "2ce706afa4a46b6773340854de877fc63618a40e351298a4e3da8eb482619863",
"rust-std-1.92.0-powerpc64le-unknown-linux-gnu.tar.xz": "eba59766c2d9805c0a1fc82fd723acbb36569e1bec1088c037bba84d965f70ba",
"rust-std-1.92.0-powerpc64le-unknown-linux-musl.tar.xz": "8b515e18b6ac8f8d37ea3cabe644b7f571984333c3b4192b7f5877e79eae7893",
"rust-std-1.92.0-riscv32imc-unknown-none-elf.tar.xz": "e1c6968ce25ab78f2c5b5460691763a2d39cb71b01d6d54c208d4a8b735b0584",
"rust-std-1.92.0-riscv64gc-unknown-linux-gnu.tar.xz": "8ee20dcf3b1063fa6069b3ce85e1fcf42794dfa783263314865cb53fff42d9e4",
"rust-std-1.92.0-riscv64gc-unknown-linux-musl.tar.xz": "f20d822309900fd6c7230688694baf91e900154e44e6247feca49b7b7a203a57",
"rust-std-1.92.0-riscv64gc-unknown-none-elf.tar.xz": "7eacf6a98786d58ef912643fd5aedc35111712e96a47b7e5d3ccddcdf9fb166d",
"rust-std-1.92.0-s390x-unknown-linux-gnu.tar.xz": "ebf944dc95015498d322504a54e4f9cdb28590f7790aa3a9eb86d6cf4b6c93ff",
"rust-std-1.92.0-thumbv6m-none-eabi.tar.xz": "f55de77126b60e1da38f8a5cdd127db872155ce9fbb53d4459502dd47b1dd476",
"rust-std-1.92.0-thumbv7em-none-eabi.tar.xz": "fdee017dcebfa8675220c20ca65b945b6eaee744d5d19b92cb0ca5480dd19ba6",
"rust-std-1.92.0-thumbv7em-none-eabihf.tar.xz": "6bfd083ce4917440fb4b04ca39ab4dadc9f40d9dc2775056899857bfa8911fd0",
"rust-std-1.92.0-thumbv7m-none-eabi.tar.xz": "7dc4c92e97db5ce1ddd9fbb7fbb1ad2d367f1c2e20d296a6d6427480c900c315",
"rust-std-1.92.0-thumbv8m.main-none-eabi.tar.xz": "61a6a80d03ebdb80ab06a044d4ec60e3c2bd8dc4d6011e920cf41592df4f0646",
"rust-std-1.92.0-thumbv8m.main-none-eabihf.tar.xz": "24c2f65371a2a5c6a40b51ae0e276ea9afff0479255f180f5e1549a0a4d58fb3",
"rust-std-1.92.0-wasm32-unknown-emscripten.tar.xz": "59b7adf18c0cc416a005fad7f704203b965905eb1c0ed9c556daa3c14048b9f4",
"rust-std-1.92.0-wasm32-unknown-unknown.tar.xz": "6c73f053ccd6adc886f802270ba960fd854e5e1111e4b5cfb875f1fdcfc0eb60",
"rust-std-1.92.0-wasm32-wasip1-threads.tar.xz": "feea056dd657a26560dfddfe4b53daeef3ded83c140b453b6dbdebaabd2a664c",
"rust-std-1.92.0-wasm32-wasip1.tar.xz": "8107dc35f0b6b744d998e766351419b4e0d27cddd6456c461337753a25f29fcd",
"rust-std-1.92.0-wasm32-wasip2.tar.xz": "e69ce601b6b24eea08b0d9e1fb0d9bf2a4188b3109353b272be4d995f687c943",
"rust-std-1.92.0-x86_64-apple-darwin.tar.xz": "6ce143bf9e83c71e200f4180e8774ab22c8c8c2351c88484b13ff13be82c8d57",
"rust-std-1.92.0-x86_64-apple-ios-macabi.tar.xz": "6a292d774653f2deaac743060ec68bfd5d7ff01d9b364e1a7d1e57a679650b47",
"rust-std-1.92.0-x86_64-apple-ios.tar.xz": "b6e38e5f8c9e6fb294681a7951301856b8f9424af4589e14493c0c939338814c",
"rust-std-1.92.0-x86_64-linux-android.tar.xz": "ffd39429435ff2f0763b940dd6afb4b9ccb1ed443eeef4fff9f1e9b3c5730026",
"rust-std-1.92.0-x86_64-pc-windows-gnu.tar.xz": "d4043304ef0e4792fb79a1153cbeca41308aac37cb1af3aa6bc3f0bb6d2276e1",
"rust-std-1.92.0-x86_64-pc-windows-gnullvm.tar.xz": "6169605b3073a7c2d6960bc1c74cb9cd6b65f45ee34e7ede02368e07ce4564cf",
"rust-std-1.92.0-x86_64-pc-windows-msvc.tar.xz": "b4e53a9c9b96a1a0618364791b7728a1c8978101ec6d1ee78fe930e1ef061994",
"rust-std-1.92.0-x86_64-unknown-freebsd.tar.xz": "151929a4255175d14b2cbcb69ef46d9217add23be268b9cd1446f8eab16616f2",
"rust-std-1.92.0-x86_64-unknown-fuchsia.tar.xz": "51ec26391d166e2f190a329919e3c7bd793861f02284aba461a086268725d9f1",
"rust-std-1.92.0-x86_64-unknown-linux-gnu.tar.xz": "5f106805ed86ebf8df287039e53a45cf974391ef4d088c2760776b05b8e48b5d",
"rust-std-1.92.0-x86_64-unknown-linux-musl.tar.xz": "11f0b7efccdb5e7972e3c0fc23693a487abc28b624675c08161d055a016d527e",
"rust-std-1.92.0-x86_64-unknown-netbsd.tar.xz": "db6a8d3a091551701b12e40cc58d4a541adfb63f250074aae90d250329beb8de",
"rust-std-1.92.0-x86_64-unknown-none.tar.xz": "1d8420ab8eb241a35e38b76470277c722bd5a7aa4ac0c7a565ad6f30b37cb852",
"rust-std-1.92.0-x86_64-unknown-uefi.tar.xz": "1b849250cf095269f3a2c7bc2087a919386da7da28e80dc289e6268bc705142d",
"rustc-1.92.0-aarch64-apple-darwin.tar.xz": "15dee753c9217dff4cf45d734b29dc13ce6017d8a55fe34eed75022b39a63ff0",
"rustc-1.92.0-aarch64-pc-windows-gnullvm.tar.xz": "a7556b86bce94dd8c078a62f71a9a0a3f4b3e841bc5d4fae546d797b78186eb1",
"rustc-1.92.0-aarch64-unknown-linux-gnu.tar.xz": "7c8706fad4c038b5eacab0092e15db54d2b365d5f3323ca046fe987f814e7826",
"rustc-1.92.0-x86_64-apple-darwin.tar.xz": "0facbd5d2742c8e97c53d59c9b5b81db6088cfc285d9ecb99523a50d6765fc5c",
"rustc-1.92.0-x86_64-pc-windows-gnullvm.tar.xz": "e67ec11b4c6e04e95effa7b4063ae8327c2861c6d08a7c692d69a0f1adcd8ecb",
"rustc-1.92.0-x86_64-unknown-linux-gnu.tar.xz": "78b2dd9c6b1fcd2621fa81c611cf5e2d6950690775038b585c64f364422886e0",
"rustfmt-1.92.0-aarch64-apple-darwin.tar.xz": "5d8ea865a7999dc9141603be8a9352745bf8440da051eb1c43f0fcaaf6845441",
"rustfmt-1.92.0-aarch64-pc-windows-gnullvm.tar.xz": "272d3af11e41ebdcc605ff8b6163f20b55526ed87e812fd1e0d001141a0c9a8a",
"rustfmt-1.92.0-aarch64-unknown-linux-gnu.tar.xz": "1dce37aea2a7cb801f1756ffc531d7140428315a3d2c2f836272547eb7b9dacd",
"rustfmt-1.92.0-x86_64-apple-darwin.tar.xz": "e038bda323ed7f4d417efc5be44c4245d74b8394f9f8393b9d464d662c3a9499",
"rustfmt-1.92.0-x86_64-pc-windows-gnullvm.tar.xz": "c03f55deeaaffb9520426d573e42ece889e760788012bdc97d2d27146033b0be",
"rustfmt-1.92.0-x86_64-unknown-linux-gnu.tar.xz": "38951ee55f21e9170236fc98c8ba373ae4338d863087c6b0a5fa8c4e797d52c4"
"cargo-1.93.0-aarch64-apple-darwin.tar.xz": "6443909350322ad07f09bb5edfd9ff29268e6fe88c7d78bfba7a5e254248dc25",
"cargo-1.93.0-aarch64-pc-windows-msvc.tar.xz": "155bff7a16aa7054e7ed7c3a82e362d4b302b3882d751b823e06ff63ae3f103d",
"cargo-1.93.0-aarch64-unknown-linux-gnu.tar.xz": "5998940b8b97286bb67facb1a85535eeb3d4d7a61e36a85e386e5c0c5cfe5266",
"cargo-1.93.0-x86_64-apple-darwin.tar.xz": "95a47c5ed797c35419908f04188d8b7de09946e71073c4b72632b16f5b10dfae",
"cargo-1.93.0-x86_64-pc-windows-gnullvm.tar.xz": "f19766837559f90476508140cb95cc708220012ec00a854fa9f99187b1f246b6",
"cargo-1.93.0-x86_64-pc-windows-msvc.tar.xz": "e59c5e2baa9ec17261f2cda6676ebf7b68b21a860e3f7451c4d964728951da75",
"cargo-1.93.0-x86_64-unknown-linux-gnu.tar.xz": "c23de3ae709ff33eed5e4ae59d1f9bcd75fa4dbaa9fb92f7b06bfb534b8db880",
"clippy-1.93.0-aarch64-apple-darwin.tar.xz": "0b6e943a8d12be0e68575acf59c9ea102daf795055fcbbf862b0bfd35ec40039",
"clippy-1.93.0-aarch64-pc-windows-msvc.tar.xz": "07bcf2edb88cdf5ead2f02e4a8493e9b0ef935a31253fac6f9f3378d8023f113",
"clippy-1.93.0-aarch64-unknown-linux-gnu.tar.xz": "872ae6d68d625946d281b91d928332e6b74f6ab269b6af842338df4338805a60",
"clippy-1.93.0-x86_64-apple-darwin.tar.xz": "e6d0b1afb9607c14a1172d09ee194a032bbb3e48af913d55c5a473e0559eddde",
"clippy-1.93.0-x86_64-pc-windows-gnullvm.tar.xz": "b6f1f7264ed6943c59dedfb9531fbadcc3c0fcf273c940a63d58898b14a1060f",
"clippy-1.93.0-x86_64-pc-windows-msvc.tar.xz": "25fb103390bf392980b4689ac09b2ec2ab4beefb7022a983215b613ad05eab57",
"clippy-1.93.0-x86_64-unknown-linux-gnu.tar.xz": "793108977514b15c0f45ade28ae35c58b05370cb0f22e89bd98fdfa61eabf55d",
"rust-std-1.93.0-aarch64-apple-darwin.tar.xz": "8603c63715349636ed85b4fe716c4e827a727918c840e54aff5b243cedadf19b",
"rust-std-1.93.0-aarch64-apple-ios-macabi.tar.xz": "24d47e615ce101869ff452a572a6b77ed7cf70f2454d0b50892ac849e8c7ac4d",
"rust-std-1.93.0-aarch64-apple-ios-sim.tar.xz": "d1d5e2d1b79206f2cc9fb7f6a2958cfe0f4bbc9147fda8dbc3608aa4be5e6816",
"rust-std-1.93.0-aarch64-apple-ios.tar.xz": "49228e70387773a71cf144509baf39979ab2cdb604340fff64b483ab41f61617",
"rust-std-1.93.0-aarch64-linux-android.tar.xz": "59c16648d9a29c07f63a1749cae6b73078f20fef1206c5e0f81c784ae8796cdb",
"rust-std-1.93.0-aarch64-pc-windows-gnullvm.tar.xz": "9a270d50eaaacc7cb1925565a8b65ff831686aa1592b7034bb9848d7f2a9738d",
"rust-std-1.93.0-aarch64-pc-windows-msvc.tar.xz": "f7bd3d25baf3643c8769b8c4d2e6cde45bb25042fac698e0daf19fc9f58f8568",
"rust-std-1.93.0-aarch64-unknown-fuchsia.tar.xz": "d1e46c443a9607603c810942e99a95a1acfb105d1312426b468ff68febaabf77",
"rust-std-1.93.0-aarch64-unknown-linux-gnu.tar.xz": "84e82ff52c39c64dfd0e1c2d58fd3d5309d1d2502378131544c0d486b44af20a",
"rust-std-1.93.0-aarch64-unknown-linux-musl.tar.xz": "bab885a87da586040064064bd1c314d707164d8dc0fefee39d59be7f15ce6f7d",
"rust-std-1.93.0-aarch64-unknown-none-softfloat.tar.xz": "0f6305daf658a7d6c0efd075859cb60432c13b82e7ecee0d097074e4e1873391",
"rust-std-1.93.0-aarch64-unknown-none.tar.xz": "3cf1aa3309a8903e89bb20537113155ca4e88844c8cc9c34c43865d5ce5a6192",
"rust-std-1.93.0-aarch64-unknown-uefi.tar.xz": "317b0af124e0e124bd76b8e5a2fb0c600279177d0bed9c841a3202df2d0f7f8e",
"rust-std-1.93.0-arm-linux-androideabi.tar.xz": "d010b26fc88e28a93cc94ea6ca5d2c90efed7f9846fae1e40db7c183b50575e2",
"rust-std-1.93.0-arm-unknown-linux-gnueabi.tar.xz": "deedc54ffce099781986eed4aec0843866f1bf72904ab0f5cdb115b9c7af540e",
"rust-std-1.93.0-arm-unknown-linux-gnueabihf.tar.xz": "89e44e042bc1241b3999191c385fec8433d60a5a9fc373583cd3b2d9408d5074",
"rust-std-1.93.0-arm-unknown-linux-musleabi.tar.xz": "641a17acb5104637d4dc9c4be022a7927ae721eb08759fea96ecfaf5c60be4dc",
"rust-std-1.93.0-arm-unknown-linux-musleabihf.tar.xz": "94a92b454bf3b0aab046b257f555ccb08f16dc2dc281bea6a4ef17ea8f58cbdc",
"rust-std-1.93.0-armv7-linux-androideabi.tar.xz": "e295f26bb219a7a4ebb5c2e07fedfebb075be6830aaf910c742a57cd21018b6d",
"rust-std-1.93.0-armv7-unknown-linux-gnueabi.tar.xz": "8a7bd5227c78294864095edb07837ff32ff6c07cd1a4a418f9bcc3ebd7e79874",
"rust-std-1.93.0-armv7-unknown-linux-gnueabihf.tar.xz": "f015f9b2d588454a9dc62942ab2e800d82c602e4eab6f89f8213419491bcd203",
"rust-std-1.93.0-armv7-unknown-linux-musleabi.tar.xz": "d15d24c9fb7c15243e1341cea53590002df271060118914bd0efcda8ccbd0731",
"rust-std-1.93.0-armv7-unknown-linux-musleabihf.tar.xz": "a2e5ec22ed35fb51a503d1e10b37447b0fa7333f079585bc0b6a2eb599de43f3",
"rust-std-1.93.0-i686-linux-android.tar.xz": "68fd86f62dd63221717d1741210f0f5cf75da7a1e32eed5a46b1e67c9d9430e1",
"rust-std-1.93.0-i686-pc-windows-gnu.tar.xz": "cb613d5d1eb245e8a1f4c0b25f93c2997cd06c1cc3fc202155f2997aebf44d4d",
"rust-std-1.93.0-i686-pc-windows-gnullvm.tar.xz": "0f713dc252a6de706519fe6cdaab6d66aaf1b555133b536cc0ab28061aa4269c",
"rust-std-1.93.0-i686-pc-windows-msvc.tar.xz": "33dc1951e2dc21bd05361160d52f496eecf48e0b95df5083172698b1cd5b9a3f",
"rust-std-1.93.0-i686-unknown-freebsd.tar.xz": "67718aae1381879fdcca5699051eb87e0cda3d2fd0fe75e306ba0948b79df7db",
"rust-std-1.93.0-i686-unknown-linux-gnu.tar.xz": "b8b7020a61418b95c1ea26badaf8db6979778e28dbadddc81fb5010fe27c935b",
"rust-std-1.93.0-i686-unknown-linux-musl.tar.xz": "867e54b3e89dc0b6d2b7a538a80443add6c3990bb4bd2260dea2ed98a0dc9870",
"rust-std-1.93.0-i686-unknown-uefi.tar.xz": "929fd484b08d5b2077ff864f5f2d24b51a78f1b6e837b9eab9d7e8fb7f31adce",
"rust-std-1.93.0-powerpc-unknown-linux-gnu.tar.xz": "e851c0fa3e726ce3f7139c5803198a1aa9723594394734ac9e373c93d92e5ea3",
"rust-std-1.93.0-powerpc64-unknown-linux-gnu.tar.xz": "f729bb7d95705e12a92eb072e494b93d8822ca40aa4802ca780b0cf33b56d401",
"rust-std-1.93.0-powerpc64le-unknown-linux-gnu.tar.xz": "d209ac698a69ca9b9035adb97a0ed8e60a08db52960198c3e03b9ee714c1a46b",
"rust-std-1.93.0-powerpc64le-unknown-linux-musl.tar.xz": "34b98d5eca2fdbd6ba41b0faf14160ef1ebd038f6ecaa264d318ad33263e1cf1",
"rust-std-1.93.0-riscv32imc-unknown-none-elf.tar.xz": "71af84c81241cbc7811b267927990be025f30d7d3dc55df4b56da7ac250f7c78",
"rust-std-1.93.0-riscv64gc-unknown-linux-gnu.tar.xz": "b769fb6c9f3e0419a6bd0b7b79f9191bbd7a48a9f243b23eb7d135711aa6de1b",
"rust-std-1.93.0-riscv64gc-unknown-linux-musl.tar.xz": "a7ced602573d814d875d69022e026c1ccb520b4b2de9d430ddfd0966ec6c9643",
"rust-std-1.93.0-riscv64gc-unknown-none-elf.tar.xz": "842f72913f288a0c76601438e67ccd88c816dbf187587928e48bf8b9ce74cbf3",
"rust-std-1.93.0-s390x-unknown-linux-gnu.tar.xz": "41a65db45a288eb3eedb187b366f132d5b3615767de7ce994b123b342ac7a848",
"rust-std-1.93.0-thumbv6m-none-eabi.tar.xz": "be3f8aad5680dabb203300847dcbbabc15729170ba5c3a9c499efae4df410a9e",
"rust-std-1.93.0-thumbv7em-none-eabi.tar.xz": "8f93eefca39c0da417feddab64775f862c72bbe80da11914dcf47babef8f1644",
"rust-std-1.93.0-thumbv7em-none-eabihf.tar.xz": "a3b6914b966ac93dbe7531016d5f570b189445603c43614a60e0b9ea12674bd3",
"rust-std-1.93.0-thumbv7m-none-eabi.tar.xz": "cfa6227214f3ae58c06b36547c5bd6f0f6787764afa48cfa4ff3488264deab6c",
"rust-std-1.93.0-thumbv8m.main-none-eabi.tar.xz": "5f24df0aa8322561125575e365be7ad13a5bb26cf73c7fc9a3f4bcfa58e0febc",
"rust-std-1.93.0-thumbv8m.main-none-eabihf.tar.xz": "906b07580be2df277cced2b56bc03cb565b758c382bf3e82cbd8375b459815dd",
"rust-std-1.93.0-wasm32-unknown-emscripten.tar.xz": "63cdbb1ea7f353060539c00f7346f4f5fb0d6f09899cacddc1f172ef07c4af8b",
"rust-std-1.93.0-wasm32-unknown-unknown.tar.xz": "3100cb920ddac646943243f0eddd331128836b9161dd5f7b0a6c76375d39cc5e",
"rust-std-1.93.0-wasm32-wasip1-threads.tar.xz": "439c65dea31e855f0258632b6d19435ba8a80561297fa6dc6be48048c5cd1871",
"rust-std-1.93.0-wasm32-wasip1.tar.xz": "075de970ef865678dad258f1566d7cfe76a594698e9bf93dd69fa5cfdfcf1a6f",
"rust-std-1.93.0-wasm32-wasip2.tar.xz": "0ef01bb552036ab44456f5505015b13c88d3694629804d7af46452c8b0a48f8c",
"rust-std-1.93.0-x86_64-apple-darwin.tar.xz": "f112d41c8a31794f0f561d37fe77010ed0b405fa70284a2910891869d8c52418",
"rust-std-1.93.0-x86_64-apple-ios-macabi.tar.xz": "a543dd545747d372d973ace8b485a13603ce96c110c7ae734d605e45f6e162c5",
"rust-std-1.93.0-x86_64-apple-ios.tar.xz": "e151013b9bc5990e178285a33e62bae7700d8c48c06e97771abb1643aa907d75",
"rust-std-1.93.0-x86_64-linux-android.tar.xz": "dc05ca79d9fecc5ce3643adb9c6f89fd35c8e1d7146bf9b62e30bad41f9fb6a7",
"rust-std-1.93.0-x86_64-pc-windows-gnu.tar.xz": "a07c6ab596fad15ca7acd63ee7f2a5fea93fd421179252067e309c2aa0b2021b",
"rust-std-1.93.0-x86_64-pc-windows-gnullvm.tar.xz": "ef6cf0977bc5aa4bbd594afb9df4ba76fdd4f0fc5685cddbefff49ceed202a91",
"rust-std-1.93.0-x86_64-pc-windows-msvc.tar.xz": "2593e29af0b8def34ceb1185b8e85bd93a9e0d3b0c108d704c1b31370c50a48c",
"rust-std-1.93.0-x86_64-unknown-freebsd.tar.xz": "51b2feaff7c2d28633504ed92ab442a55d112e6a2bf09c91188f00dbaf03378a",
"rust-std-1.93.0-x86_64-unknown-fuchsia.tar.xz": "41f0f3eb96cedfc13ab5fd4f15065063f262d035c1f71d96c42587acdacdbabe",
"rust-std-1.93.0-x86_64-unknown-linux-gnu.tar.xz": "a849a418d0f27e69573e41763c395e924a0b98c16fcdc55599c1c79c27c1c777",
"rust-std-1.93.0-x86_64-unknown-linux-musl.tar.xz": "874658d2ced1ed2b9bf66c148b78a2e10cad475d0a4db32e68a08900905b89b8",
"rust-std-1.93.0-x86_64-unknown-netbsd.tar.xz": "aad63193af89772031f9a85f193afc0b15f8e6d4a9a4983c5f5d3802f69a89e8",
"rust-std-1.93.0-x86_64-unknown-none.tar.xz": "01dcca7ae4b7e82fbfa399adb5e160afaa13143e5a17e1e0737c38cf07365fb3",
"rust-std-1.93.0-x86_64-unknown-uefi.tar.xz": "ec4e439d9485ce752b56999e8e41ed82373fc833a005cf2531c6f7ef7e785392",
"rustc-1.93.0-aarch64-apple-darwin.tar.xz": "092be03c02b44c405dab1232541c84f32b2d9e8295747568c3d531dd137221dc",
"rustc-1.93.0-aarch64-pc-windows-msvc.tar.xz": "a3ac1a8e411de8470f71b366f89d187718c431526912b181692ed0a18c56c7ad",
"rustc-1.93.0-aarch64-unknown-linux-gnu.tar.xz": "1a9045695892ec08d8e9751bf7cf7db71fe27a6202dd12ce13aca48d0602dbde",
"rustc-1.93.0-x86_64-apple-darwin.tar.xz": "594bb293f0a4f444656cf8dec2149fcb979c606260efee9e09bcf8c9c6ed6ae7",
"rustc-1.93.0-x86_64-pc-windows-gnullvm.tar.xz": "0cdaa8de66f5ce21d1ea73917efc5c64f408bda49f678ddde19465ced9d5ec63",
"rustc-1.93.0-x86_64-pc-windows-msvc.tar.xz": "fa17677eee0d83eb055b309953184bf87ba634923d8897f860cda65d55c6e350",
"rustc-1.93.0-x86_64-unknown-linux-gnu.tar.xz": "00c6e6740ea6a795e33568cd7514855d58408a1180cd820284a7bbf7c46af715",
"rustfmt-1.93.0-aarch64-apple-darwin.tar.xz": "0dd1faedf0768ef362f4aae4424b34e8266f2b9cf5e76ea4fcaf780220b363a0",
"rustfmt-1.93.0-aarch64-pc-windows-msvc.tar.xz": "24eed108489567133bbfe40c8eacda1567be55fae4c526911b39eb33eb27a6cb",
"rustfmt-1.93.0-aarch64-unknown-linux-gnu.tar.xz": "92e1acb45ae642136258b4dabb39302af2d53c83e56ebd5858bc969f9e5c141a",
"rustfmt-1.93.0-x86_64-apple-darwin.tar.xz": "c8453b4c5758eb39423042ffa9c23ed6128cbed2b15b581e5e1192c9cc0b1d4e",
"rustfmt-1.93.0-x86_64-pc-windows-gnullvm.tar.xz": "47167e9e78db9be4503a060dee02f4df2cda252da32175dbf44331f965a747b9",
"rustfmt-1.93.0-x86_64-pc-windows-msvc.tar.xz": "5becc7c2dba4b9ab5199012cad30829235a7f7fb5d85a238697e8f0e44cbd9af",
"rustfmt-1.93.0-x86_64-unknown-linux-gnu.tar.xz": "7f81f6c17d11a7fda5b4e1b111942fb3b23d30dcec767e13e340ebfb762a5e33"
}
}
}


@@ -13,7 +13,12 @@ use codex_client::TransportError;
use futures::SinkExt;
use futures::StreamExt;
use http::HeaderMap;
use http::HeaderName;
use http::HeaderValue;
use http::StatusCode;
use serde::Deserialize;
use serde_json::Value;
use serde_json::map::Map as JsonMap;
use std::sync::Arc;
use std::sync::OnceLock;
use std::time::Duration;
@@ -252,6 +257,83 @@ fn map_ws_error(err: WsError, url: &Url) -> ApiError {
}
}
#[derive(Debug, Deserialize)]
struct WrappedWebsocketErrorEvent {
#[serde(rename = "type")]
kind: String,
#[serde(alias = "status_code")]
status: Option<u16>,
#[serde(default)]
error: Option<Value>,
#[serde(default)]
headers: Option<JsonMap<String, Value>>,
}
fn parse_wrapped_websocket_error_event(payload: &str) -> Option<WrappedWebsocketErrorEvent> {
let event: WrappedWebsocketErrorEvent = serde_json::from_str(payload).ok()?;
if event.kind != "error" {
return None;
}
Some(event)
}
fn map_wrapped_websocket_error_event(event: WrappedWebsocketErrorEvent) -> Option<ApiError> {
let WrappedWebsocketErrorEvent {
status,
error,
headers,
..
} = event;
let status = StatusCode::from_u16(status?).ok()?;
if status.is_success() {
return None;
}
let body = error.map(|error| {
serde_json::to_string_pretty(&serde_json::json!({
"error": error
}))
.unwrap_or_else(|_| {
serde_json::json!({
"error": error
})
.to_string()
})
});
Some(ApiError::Transport(TransportError::Http {
status,
url: None,
headers: headers.map(json_headers_to_http_headers),
body,
}))
}
fn json_headers_to_http_headers(headers: JsonMap<String, Value>) -> HeaderMap {
let mut mapped = HeaderMap::new();
for (name, value) in headers {
let Ok(header_name) = HeaderName::from_bytes(name.as_bytes()) else {
continue;
};
let Some(header_value) = json_header_value(value) else {
continue;
};
mapped.insert(header_name, header_value);
}
mapped
}
fn json_header_value(value: Value) -> Option<HeaderValue> {
let value = match value {
Value::String(value) => value,
Value::Number(value) => value.to_string(),
Value::Bool(value) => value.to_string(),
_ => return None,
};
HeaderValue::from_str(&value).ok()
}
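The conversion above leans on the `http` crate's `HeaderValue`, but the core filtering rule — scalar JSON values (strings, numbers, booleans) become header text, structured values are dropped — can be sketched std-only. The `Json` enum here is a hypothetical stand-in for `serde_json::Value`, not the real type:

```rust
// Minimal stand-in for serde_json::Value; only the variants the
// conversion distinguishes are modeled here.
#[derive(Debug, Clone, PartialEq)]
enum Json {
    String(String),
    Number(f64),
    Bool(bool),
    Array(Vec<Json>),
}

// Mirrors the shape of `json_header_value`: scalars are rendered as
// strings, anything structured is rejected (returns None).
fn header_value_text(value: Json) -> Option<String> {
    match value {
        Json::String(s) => Some(s),
        Json::Number(n) => Some(n.to_string()),
        Json::Bool(b) => Some(b.to_string()),
        Json::Array(_) => None,
    }
}
```

In the real code the resulting string must additionally pass `HeaderValue::from_str`, which rejects characters that are illegal in header values; invalid names and values are silently skipped rather than failing the whole map.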
async fn run_websocket_response_stream(
ws_stream: &mut WsStream,
tx_event: mpsc::Sender<std::result::Result<ResponseEvent, ApiError>>,
@@ -306,6 +388,12 @@ async fn run_websocket_response_stream(
match message {
Message::Text(text) => {
trace!("websocket event: {text}");
if let Some(wrapped_error) = parse_wrapped_websocket_error_event(&text)
&& let Some(error) = map_wrapped_websocket_error_event(wrapped_error)
{
return Err(error);
}
let event = match serde_json::from_str::<ResponsesStreamEvent>(&text) {
Ok(event) => event,
Err(err) => {
@@ -357,10 +445,124 @@ async fn run_websocket_response_stream(
#[cfg(test)]
mod tests {
use super::*;
use pretty_assertions::assert_eq;
use serde_json::json;
#[test]
fn websocket_config_enables_permessage_deflate() {
let config = websocket_config();
assert!(config.extensions.permessage_deflate.is_some());
}
#[test]
fn parse_wrapped_websocket_error_event_maps_to_transport_http() {
let payload = json!({
"type": "error",
"status": 429,
"error": {
"type": "usage_limit_reached",
"message": "The usage limit has been reached",
"plan_type": "pro",
"resets_at": 1738888888
},
"headers": {
"x-codex-primary-used-percent": "100.0",
"x-codex-primary-window-minutes": 15
}
})
.to_string();
let wrapped_error = parse_wrapped_websocket_error_event(&payload)
.expect("expected websocket error payload to be parsed");
let api_error = map_wrapped_websocket_error_event(wrapped_error)
.expect("expected websocket error payload to map to ApiError");
let ApiError::Transport(TransportError::Http {
status,
headers,
body,
..
}) = api_error
else {
panic!("expected ApiError::Transport(Http)");
};
assert_eq!(status, StatusCode::TOO_MANY_REQUESTS);
let headers = headers.expect("expected headers");
assert_eq!(
headers
.get("x-codex-primary-used-percent")
.and_then(|value| value.to_str().ok()),
Some("100.0")
);
assert_eq!(
headers
.get("x-codex-primary-window-minutes")
.and_then(|value| value.to_str().ok()),
Some("15")
);
let body = body.expect("expected body");
assert!(body.contains("usage_limit_reached"));
assert!(body.contains("The usage limit has been reached"));
}
#[test]
fn parse_wrapped_websocket_error_event_ignores_non_error_payloads() {
let payload = json!({
"type": "response.created",
"response": {
"id": "resp-1"
}
})
.to_string();
let wrapped_error = parse_wrapped_websocket_error_event(&payload);
assert!(wrapped_error.is_none());
}
#[test]
fn parse_wrapped_websocket_error_event_with_status_maps_invalid_request() {
let payload = json!({
"type": "error",
"status": 400,
"error": {
"type": "invalid_request_error",
"message": "Model does not support image inputs"
}
})
.to_string();
let wrapped_error = parse_wrapped_websocket_error_event(&payload)
.expect("expected websocket error payload to be parsed");
let api_error = map_wrapped_websocket_error_event(wrapped_error)
.expect("expected websocket error payload to map to ApiError");
let ApiError::Transport(TransportError::Http { status, body, .. }) = api_error else {
panic!("expected ApiError::Transport(Http)");
};
assert_eq!(status, StatusCode::BAD_REQUEST);
let body = body.expect("expected body");
assert!(body.contains("invalid_request_error"));
assert!(body.contains("Model does not support image inputs"));
}
#[test]
fn parse_wrapped_websocket_error_event_without_status_is_not_mapped() {
let payload = json!({
"type": "error",
"error": {
"type": "usage_limit_reached",
"message": "The usage limit has been reached"
},
"headers": {
"x-codex-primary-used-percent": "100.0",
"x-codex-primary-window-minutes": 15
}
})
.to_string();
let wrapped_error = parse_wrapped_websocket_error_event(&payload)
.expect("expected websocket error payload to be parsed");
let api_error = map_wrapped_websocket_error_event(wrapped_error);
assert!(api_error.is_none());
}
}
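The tests above pin down the status gate in `map_wrapped_websocket_error_event`: an event is only surfaced as a transport error when it carries a status that parses as a valid HTTP code and is not 2xx. A std-only sketch of that gate, with a bare `u16` standing in for `http::StatusCode`:

```rust
// Sketch of the status gate, assuming a plain u16 in place of
// `http::StatusCode`. `StatusCode::from_u16` accepts 100..=999 and
// `is_success` covers 200..=299, so:
fn should_surface_error(status: Option<u16>) -> bool {
    match status {
        // 2xx: the event is not treated as an error.
        Some(code) if (200..300).contains(&code) => false,
        // Any other valid status code is mapped to TransportError::Http.
        Some(code) if (100..1000).contains(&code) => true,
        // No status, or out of range: skip mapping entirely
        // (matches the "without_status_is_not_mapped" test above).
        _ => false,
    }
}
```

This matches the tests: 429 and 400 map to errors, while an `"error"` event with no status is ignored so the stream-level handling can deal with it instead.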


@@ -9,26 +9,24 @@
//! call site.
//!
//! A [`ModelClientSession`] is created per turn and is used to stream one or more Responses API
//! requests during that turn. It caches a Responses WebSocket connection (opened lazily, or reused
//! from a session-level preconnect) and stores per-turn state such as the `x-codex-turn-state`
//! token used for sticky routing.
//! requests during that turn. It caches a Responses WebSocket connection (opened lazily) and stores
//! per-turn state such as the `x-codex-turn-state` token used for sticky routing.
//!
//! Preconnect is intentionally handshake-only: it may warm a socket and capture sticky-routing
//! Prewarm is intentionally handshake-only: it may warm a socket and capture sticky-routing
//! state, but the first `response.create` payload is still sent only when a turn starts.
//!
//! Internally, startup preconnect stores a single task handle. On first use in a turn, the session
//! awaits that task and adopts the warmed socket if it succeeds; if it fails, the stream attempt
//! fails and the normal retry/fallback loop decides what to do next.
//! Startup prewarm is owned by turn-scoped callers (for example, a pre-created regular task). When
//! a warmed [`ModelClientSession`] is available, turn execution can reuse it; otherwise the turn
//! lazily opens a websocket on first stream call.
//!
//! ## Retry-Budget Tradeoff
//!
//! Startup preconnect is treated as the first websocket connection attempt for the first turn. If
//! Startup prewarm is treated as the first websocket connection attempt for the first turn. If
//! it fails, the stream attempt fails and the retry/fallback loop decides whether to retry or fall
//! back. This avoids duplicate handshakes but means a failed preconnect can consume one retry
//! back. This avoids duplicate handshakes but means a failed prewarm can consume one retry
//! budget slot before any turn payload is sent.
use std::sync::Arc;
use std::sync::Mutex;
use std::sync::OnceLock;
use std::sync::atomic::AtomicBool;
use std::sync::atomic::Ordering;
@@ -73,7 +71,6 @@ use codex_protocol::protocol::SessionSource;
use eventsource_stream::Event;
use eventsource_stream::EventStreamError;
use futures::StreamExt;
use futures::future::BoxFuture;
use http::HeaderMap as ApiHeaderMap;
use http::HeaderValue;
use http::StatusCode as HttpStatusCode;
@@ -83,7 +80,6 @@ use std::time::Duration;
use tokio::sync::mpsc;
use tokio::sync::oneshot;
use tokio::sync::oneshot::error::TryRecvError;
use tokio::task::JoinHandle;
use tokio_tungstenite::tungstenite::Error;
use tokio_tungstenite::tungstenite::Message;
use tracing::warn;
@@ -109,17 +105,11 @@ pub const X_CODEX_TURN_METADATA_HEADER: &str = "x-codex-turn-metadata";
pub const X_RESPONSESAPI_INCLUDE_TIMING_METRICS_HEADER: &str =
"x-responsesapi-include-timing-metrics";
const RESPONSES_WEBSOCKETS_V2_BETA_HEADER_VALUE: &str = "responses_websockets=2026-02-06";
struct PreconnectedWebSocket {
connection: ApiWebSocketConnection,
turn_state: Option<String>,
}
type PreconnectTask = JoinHandle<Option<PreconnectedWebSocket>>;
/// Session-scoped state shared by all [`ModelClient`] clones.
///
/// This is intentionally kept minimal so `ModelClient` does not need to hold a full `Config`. Most
/// configuration is per turn and is passed explicitly to streaming/unary methods.
#[derive(Debug)]
struct ModelClientState {
auth_manager: Option<Arc<AuthManager>>,
conversation_id: ThreadId,
@@ -132,40 +122,11 @@ struct ModelClientState {
include_timing_metrics: bool,
beta_features_header: Option<String>,
disable_websockets: AtomicBool,
preconnect: Mutex<Option<PreconnectTask>>,
}
impl std::fmt::Debug for ModelClientState {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.debug_struct("ModelClientState")
.field("auth_manager", &self.auth_manager)
.field("conversation_id", &self.conversation_id)
.field("provider", &self.provider)
.field("session_source", &self.session_source)
.field("model_verbosity", &self.model_verbosity)
.field(
"enable_responses_websockets",
&self.enable_responses_websockets,
)
.field(
"enable_request_compression",
&self.enable_request_compression,
)
.field("include_timing_metrics", &self.include_timing_metrics)
.field("beta_features_header", &self.beta_features_header)
.field(
"disable_websockets",
&self.disable_websockets.load(Ordering::Relaxed),
)
.field("preconnect", &"<opaque>")
.finish()
}
}
/// Resolved API client setup for a single request attempt.
///
/// Keeping this as a single bundle ensures preconnect and normal request paths
/// Keeping this as a single bundle ensures prewarm and normal request paths
/// share the same auth/provider setup flow.
struct CurrentClientSetup {
auth: Option<CodexAuth>,
@@ -192,17 +153,14 @@ pub struct ModelClient {
/// A turn-scoped streaming session created from a [`ModelClient`].
///
/// The session establishes a Responses WebSocket connection lazily (or adopts a preconnected one)
/// and reuses it across multiple requests within the turn. It also caches per-turn state:
/// The session establishes a Responses WebSocket connection lazily and reuses it across multiple
/// requests within the turn. It also caches per-turn state:
///
/// - The last request's input items, so subsequent calls can use `response.append` when the input
/// is an incremental extension of the previous request.
/// - The `x-codex-turn-state` sticky-routing token, which must be replayed for all requests within
/// the same turn.
///
/// When startup preconnect is still running, first use of this session awaits that in-flight task
/// before opening a new websocket so preconnect acts as the first connection attempt for the turn.
///
/// Create a fresh `ModelClientSession` for each Codex turn. Reusing it across turns would replay
/// the previous turn's sticky-routing token into the next turn, which violates the client/server
/// contract and can cause routing bugs.
@@ -261,16 +219,14 @@ impl ModelClient {
include_timing_metrics,
beta_features_header,
disable_websockets: AtomicBool::new(false),
preconnect: Mutex::new(None),
}),
}
}
/// Creates a fresh turn-scoped streaming session.
///
/// This constructor does not perform network I/O itself. The returned session either adopts a
/// previously preconnected websocket or opens a websocket lazily when the first stream request
/// is issued.
/// This constructor does not perform network I/O itself; the session opens a websocket lazily
/// when the first stream request is issued.
pub fn new_session(&self) -> ModelClientSession {
ModelClientSession {
client: self.clone(),
@@ -282,79 +238,6 @@ impl ModelClient {
}
}
/// Spawns a best-effort task that warms a websocket for the first turn.
///
/// This call performs only connection setup; it never sends prompt payloads.
///
/// A timeout when computing turn metadata is treated the same as "no metadata" so startup
/// cannot block indefinitely on optional preconnect context.
pub fn pre_establish_connection(
&self,
otel_manager: OtelManager,
turn_metadata_header: BoxFuture<'static, Option<String>>,
) {
if !self.responses_websocket_enabled() || self.disable_websockets() {
return;
}
let model_client = self.clone();
let handle = tokio::spawn(async move {
let turn_metadata_header = turn_metadata_header.await;
model_client
.preconnect(&otel_manager, turn_metadata_header.as_deref())
.await
});
self.set_preconnected_task(Some(handle));
}
/// Opportunistically pre-establishes a Responses WebSocket connection for this session.
///
/// This method is best-effort: it returns an error on setup/connect failure and the caller
/// can decide whether to ignore it. A successful preconnect reduces first-turn latency but
/// never sends an initial prompt; the first `response.create` is still sent only when a turn
/// starts.
///
/// The preconnected slot is single-consumer and single-use: the next `ModelClientSession` may
/// adopt it once, after which later turns either keep using that same turn-local connection or
/// create a new one.
async fn preconnect(
&self,
otel_manager: &OtelManager,
turn_metadata_header: Option<&str>,
) -> Option<PreconnectedWebSocket> {
if !self.responses_websocket_enabled() || self.disable_websockets() {
return None;
}
let client_setup = self
.current_client_setup()
.await
.map_err(|err| {
ApiError::Stream(format!(
"failed to build websocket preconnect client setup: {err}"
))
})
.ok()?;
let turn_state = Arc::new(OnceLock::new());
let connection = self
.connect_websocket(
otel_manager,
client_setup.api_provider,
client_setup.api_auth,
Some(Arc::clone(&turn_state)),
turn_metadata_header,
)
.await
.ok()?;
Some(PreconnectedWebSocket {
connection,
turn_state: turn_state.get().cloned(),
})
}
/// Compacts the current conversation history using the Compact endpoint.
///
/// This is a unary call (no streaming) that returns a new list of
@@ -475,7 +358,7 @@ impl ModelClient {
/// Returns auth + provider configuration resolved from the current session auth state.
///
/// This centralizes setup used by both preconnect and normal request paths so they stay in
/// This centralizes setup used by both prewarm and normal request paths so they stay in
/// lockstep when auth/provider resolution changes.
async fn current_client_setup(&self) -> Result<CurrentClientSetup> {
let auth = match self.state.auth_manager.as_ref() {
@@ -496,7 +379,7 @@ impl ModelClient {
/// Opens a websocket connection using the same header and telemetry wiring as normal turns.
///
/// Both startup preconnect and in-turn `needs_new` reconnects call this path so handshake
/// Both startup prewarm and in-turn `needs_new` reconnects call this path so handshake
/// behavior remains consistent across both flows.
async fn connect_websocket(
&self,
@@ -513,7 +396,7 @@ impl ModelClient {
.await
}
/// Builds websocket handshake headers for both preconnect and turn-time reconnect.
/// Builds websocket handshake headers for both prewarm and turn-time reconnect.
///
/// Callers should pass the current turn-state lock when available so sticky-routing state is
/// replayed on reconnect within the same turn.
@@ -548,28 +431,6 @@ impl ModelClient {
}
headers
}
/// Consumes the warmed websocket task slot.
fn take_preconnected_task(&self) -> Option<PreconnectTask> {
let mut state = self
.state
.preconnect
.lock()
.unwrap_or_else(std::sync::PoisonError::into_inner);
state.take()
}
fn set_preconnected_task(&self, task: Option<PreconnectTask>) {
let mut state = self
.state
.preconnect
.lock()
.unwrap_or_else(std::sync::PoisonError::into_inner);
if let Some(running_task) = state.take() {
running_task.abort();
}
*state = task;
}
}
impl ModelClientSession {
@@ -772,13 +633,42 @@ impl ModelClientSession {
)
}
/// Returns a websocket connection for this turn, reusing preconnect when possible.
/// Opportunistically warms a websocket for this turn-scoped client session.
///
/// This method first tries to adopt the session-level preconnect slot, then falls back to a
/// fresh websocket handshake only when the turn has no live connection. If startup preconnect
/// is still running, it is awaited first so that task acts as the first connection attempt for
/// this turn instead of racing a second handshake. If that attempt fails, the normal connect
/// and stream retry flow continues unchanged.
/// This performs only connection setup; it never sends prompt payloads.
pub async fn prewarm_websocket(
&mut self,
otel_manager: &OtelManager,
turn_metadata_header: Option<&str>,
) -> std::result::Result<(), ApiError> {
if !self.client.responses_websocket_enabled() || self.client.disable_websockets() {
return Ok(());
}
if self.connection.is_some() {
return Ok(());
}
let client_setup = self.client.current_client_setup().await.map_err(|err| {
ApiError::Stream(format!(
"failed to build websocket prewarm client setup: {err}"
))
})?;
let connection = self
.client
.connect_websocket(
otel_manager,
client_setup.api_provider,
client_setup.api_auth,
Some(Arc::clone(&self.turn_state)),
turn_metadata_header,
)
.await?;
self.connection = Some(connection);
Ok(())
}
/// Returns a websocket connection for this turn.
async fn websocket_connection(
&mut self,
otel_manager: &OtelManager,
@@ -787,27 +677,6 @@ impl ModelClientSession {
turn_metadata_header: Option<&str>,
options: &ApiResponsesOptions,
) -> std::result::Result<&ApiWebSocketConnection, ApiError> {
// Prefer the session-level preconnect slot before creating a new websocket.
if self.connection.is_none()
&& let Some(task) = self.client.take_preconnected_task()
{
match task.await {
Ok(Some(preconnected)) => {
let PreconnectedWebSocket {
connection,
turn_state,
} = preconnected;
if let Some(turn_state) = turn_state {
let _ = self.turn_state.set(turn_state);
}
self.connection = Some(connection);
}
_ => {
warn!("startup websocket preconnect task failed");
}
};
}
let needs_new = match self.connection.as_ref() {
Some(conn) => conn.is_closed().await,
None => true,
@@ -1083,8 +952,7 @@ impl ModelClientSession {
/// Permanently disables WebSockets for this Codex session and resets WebSocket state.
///
/// This is used after exhausting the provider retry budget, to force subsequent requests onto
/// the HTTP transport. It also clears any warmed websocket preconnect state so future turns
/// cannot accidentally adopt a stale socket after fallback has been activated.
/// the HTTP transport.
///
/// Returns `true` if this call activated fallback, or `false` if fallback was already active.
pub(crate) fn try_switch_fallback_transport(&mut self, otel_manager: &OtelManager) -> bool {
@@ -1098,7 +966,6 @@ impl ModelClientSession {
&[("from_wire_api", "responses_websocket")],
);
self.client.set_preconnected_task(None);
self.connection = None;
self.websocket_last_items.clear();
}


@@ -200,6 +200,7 @@ use crate::state::SessionServices;
use crate::state::SessionState;
use crate::state_db;
use crate::tasks::GhostSnapshotTask;
use crate::tasks::RegularTask;
use crate::tasks::ReviewTask;
use crate::tasks::SessionTask;
use crate::tasks::SessionTaskContext;
@@ -566,8 +567,8 @@ impl TurnContext {
/// Resolves the per-turn metadata header under a shared timeout policy.
///
/// This uses the same timeout helper as websocket startup preconnect so both turn execution
/// and background preconnect observe identical "timeout means best-effort fallback" behavior.
/// This uses the same timeout helper as websocket startup prewarm so both turn execution and
/// background prewarm observe identical "timeout means best-effort fallback" behavior.
pub async fn resolve_turn_metadata_header(&self) -> Option<String> {
resolve_turn_metadata_header_with_timeout(
self.build_turn_metadata_header(),
@@ -579,7 +580,7 @@ impl TurnContext {
/// Starts best-effort background computation of turn metadata.
///
/// This warms the cached value used by [`TurnContext::resolve_turn_metadata_header`] so turns
/// and websocket preconnect are less likely to pay metadata construction latency on demand.
/// and websocket prewarm are less likely to pay metadata construction latency on demand.
pub fn spawn_turn_metadata_header_task(self: &Arc<Self>) {
let context = Arc::clone(self);
tokio::spawn(async move {
@@ -1044,7 +1045,7 @@ impl Session {
}
};
session_configuration.thread_name = thread_name.clone();
let state = SessionState::new(session_configuration.clone());
let mut state = SessionState::new(session_configuration.clone());
let services = SessionServices {
mcp_connection_manager: Arc::new(RwLock::new(McpConnectionManager::default())),
@@ -1082,6 +1083,18 @@ impl Session {
),
};
let turn_metadata_header = resolve_turn_metadata_header_with_timeout(
build_turn_metadata_header(session_configuration.cwd.clone()),
None,
)
.boxed();
let startup_regular_task = RegularTask::with_startup_prewarm(
services.model_client.clone(),
services.otel_manager.clone(),
turn_metadata_header,
);
state.set_startup_regular_task(startup_regular_task);
let sess = Arc::new(Session {
conversation_id,
tx_event: tx_event.clone(),
@@ -1094,18 +1107,6 @@ impl Session {
next_internal_sub_id: AtomicU64::new(0),
});
// Warm a websocket in the background so the first turn can reuse it.
// This performs only connection setup; user input is still sent later via response.create
// when submit_turn() runs.
let turn_metadata_header = resolve_turn_metadata_header_with_timeout(
build_turn_metadata_header(session_configuration.cwd.clone()),
None,
)
.boxed();
sess.services
.model_client
.pre_establish_connection(sess.services.otel_manager.clone(), turn_metadata_header);
// Dispatch the SessionConfiguredEvent first and then report any errors.
// If resuming, include converted initial messages in the payload so UIs can render them immediately.
let initial_messages = initial_history.get_event_msgs();
@@ -1493,6 +1494,11 @@ impl Session {
.await
}
pub(crate) async fn take_startup_regular_task(&self) -> Option<RegularTask> {
let mut state = self.state.lock().await;
state.take_startup_regular_task()
}
async fn get_config(&self) -> std::sync::Arc<Config> {
let state = self.state.lock().await;
state
@@ -2849,7 +2855,6 @@ mod handlers {
use crate::review_prompts::resolve_review_request;
use crate::rollout::session_index;
use crate::tasks::CompactTask;
use crate::tasks::RegularTask;
use crate::tasks::UndoTask;
use crate::tasks::UserShellCommandMode;
use crate::tasks::UserShellCommandTask;
@@ -2992,7 +2997,8 @@ mod handlers {
sess.refresh_mcp_servers_if_requested(&current_context)
.await;
sess.spawn_task(Arc::clone(&current_context), items, RegularTask)
let regular_task = sess.take_startup_regular_task().await.unwrap_or_default();
sess.spawn_task(Arc::clone(&current_context), items, regular_task)
.await;
*previous_context = Some(current_context);
}
@@ -3708,6 +3714,7 @@ pub(crate) async fn run_turn(
sess: Arc<Session>,
turn_context: Arc<TurnContext>,
input: Vec<UserInput>,
prewarmed_client_session: Option<ModelClientSession>,
cancellation_token: CancellationToken,
) -> Option<String> {
if input.is_empty() {
@@ -3825,7 +3832,8 @@ pub(crate) async fn run_turn(
let turn_metadata_header = turn_context.resolve_turn_metadata_header().await;
// `ModelClientSession` is turn-scoped and caches WebSocket + sticky routing state, so we reuse
// one instance across retries within this turn.
let mut client_session = sess.services.model_client.new_session();
let mut client_session =
prewarmed_client_session.unwrap_or_else(|| sess.services.model_client.new_session());
loop {
// Note that pending_input would be something like a message the user


@@ -236,15 +236,23 @@ impl ContextManager {
})
}
fn get_trailing_codex_generated_items_tokens(&self) -> i64 {
let mut total = 0i64;
for item in self.items.iter().rev() {
if !is_codex_generated_item(item) {
break;
}
total = total.saturating_add(estimate_item_token_count(item));
}
total
// These are local items added after the most recent model-emitted item.
// They are not reflected in `last_token_usage.total_tokens`.
fn items_after_last_model_generated_item(&self) -> &[ResponseItem] {
let start = self
.items
.iter()
.rposition(is_model_generated_item)
.map_or(self.items.len(), |index| index.saturating_add(1));
&self.items[start..]
}
fn get_items_after_last_model_generated_tokens(&self) -> i64 {
self.items_after_last_model_generated_item()
.iter()
.fold(0i64, |acc, item| {
acc.saturating_add(estimate_item_token_count(item))
})
}
/// When true, the server already accounted for past reasoning tokens and
@@ -255,13 +263,14 @@ impl ContextManager {
.as_ref()
.map(|info| info.last_token_usage.total_tokens)
.unwrap_or(0);
let trailing_codex_generated_tokens = self.get_trailing_codex_generated_items_tokens();
let items_after_last_model_generated_tokens =
self.get_items_after_last_model_generated_tokens();
if server_reasoning_included {
last_tokens.saturating_add(trailing_codex_generated_tokens)
last_tokens.saturating_add(items_after_last_model_generated_tokens)
} else {
last_tokens
.saturating_add(self.get_non_last_reasoning_items_tokens())
.saturating_add(trailing_codex_generated_tokens)
.saturating_add(items_after_last_model_generated_tokens)
}
}
@@ -367,6 +376,22 @@ fn estimate_item_token_count(item: &ResponseItem) -> i64 {
}
}
fn is_model_generated_item(item: &ResponseItem) -> bool {
match item {
ResponseItem::Message { role, .. } => role == "assistant",
ResponseItem::Reasoning { .. }
| ResponseItem::FunctionCall { .. }
| ResponseItem::WebSearchCall { .. }
| ResponseItem::CustomToolCall { .. }
| ResponseItem::LocalShellCall { .. }
| ResponseItem::Compaction { .. } => true,
ResponseItem::FunctionCallOutput { .. }
| ResponseItem::CustomToolCallOutput { .. }
| ResponseItem::GhostSnapshot { .. }
| ResponseItem::Other => false,
}
}
pub(crate) fn is_codex_generated_item(item: &ResponseItem) -> bool {
matches!(
item,


@@ -62,13 +62,6 @@ fn user_input_text_msg(text: &str) -> ResponseItem {
}
}
fn function_call_output(call_id: &str, content: &str) -> ResponseItem {
ResponseItem::FunctionCallOutput {
call_id: call_id.to_string(),
output: FunctionCallOutputPayload::from_text(content.to_string()),
}
}
fn custom_tool_call_output(call_id: &str, output: &str) -> ResponseItem {
ResponseItem::CustomToolCallOutput {
call_id: call_id.to_string(),
@@ -189,48 +182,32 @@ fn non_last_reasoning_tokens_ignore_entries_after_last_user() {
}
#[test]
fn trailing_codex_generated_tokens_stop_at_first_non_generated_item() {
let earlier_output = function_call_output("call-earlier", "earlier output");
let trailing_function_output = function_call_output("call-tail-1", "tail function output");
let trailing_custom_output = custom_tool_call_output("call-tail-2", "tail custom output");
fn items_after_last_model_generated_tokens_include_user_and_tool_output() {
let history = create_history_with_items(vec![
earlier_output,
user_msg("boundary item"),
trailing_function_output.clone(),
trailing_custom_output.clone(),
assistant_msg("already counted by API"),
user_msg("new user message"),
custom_tool_call_output("call-tail", "new tool output"),
]);
let expected_tokens = estimate_item_token_count(&trailing_function_output)
.saturating_add(estimate_item_token_count(&trailing_custom_output));
let expected_tokens = estimate_item_token_count(&user_msg("new user message")).saturating_add(
estimate_item_token_count(&custom_tool_call_output("call-tail", "new tool output")),
);
assert_eq!(
history.get_trailing_codex_generated_items_tokens(),
history.get_items_after_last_model_generated_tokens(),
expected_tokens
);
}
#[test]
fn trailing_codex_generated_tokens_exclude_function_call_tail() {
let history = create_history_with_items(vec![ResponseItem::FunctionCall {
id: None,
name: "not-generated".to_string(),
arguments: "{}".to_string(),
call_id: "call-tail".to_string(),
}]);
fn items_after_last_model_generated_tokens_are_zero_without_model_generated_items() {
let history = create_history_with_items(vec![user_msg("no model output yet")]);
assert_eq!(history.get_trailing_codex_generated_items_tokens(), 0);
assert_eq!(history.get_items_after_last_model_generated_tokens(), 0);
}
#[test]
fn total_token_usage_includes_only_trailing_codex_generated_items() {
let non_trailing_output = function_call_output("call-before-message", "not trailing");
let trailing_assistant = assistant_msg("assistant boundary");
let trailing_output = custom_tool_call_output("tool-tail", "trailing output");
let mut history = create_history_with_items(vec![
non_trailing_output,
user_msg("boundary"),
trailing_assistant,
trailing_output.clone(),
]);
fn total_token_usage_includes_all_items_after_last_model_generated_item() {
let mut history = create_history_with_items(vec![assistant_msg("already counted by API")]);
history.update_token_info(
&TokenUsage {
total_tokens: 100,
@@ -238,10 +215,17 @@ fn total_token_usage_includes_only_trailing_codex_generated_items() {
},
None,
);
let added_user = user_msg("new user message");
let added_tool_output = custom_tool_call_output("tool-tail", "new tool output");
history.record_items(
[&added_user, &added_tool_output],
TruncationPolicy::Tokens(10_000),
);
assert_eq!(
history.get_total_token_usage(true),
100 + estimate_item_token_count(&trailing_output)
100 + estimate_item_token_count(&added_user)
+ estimate_item_token_count(&added_tool_output)
);
}

@@ -11,6 +11,7 @@ use std::sync::RwLock;
use std::time::Duration;
use notify::Event;
use notify::EventKind;
use notify::RecommendedWatcher;
use notify::RecursiveMode;
use notify::Watcher;
@@ -19,7 +20,6 @@ use tokio::sync::broadcast;
use tokio::sync::mpsc;
use tokio::time::Instant;
use tokio::time::sleep_until;
use tracing::info;
use tracing::warn;
use crate::config::Config;
@@ -163,12 +163,6 @@ impl FileWatcher {
res = raw_rx.recv() => {
match res {
Some(Ok(event)) => {
info!(
event_kind = ?event.kind,
event_paths = ?event.paths,
event_attrs = ?event.attrs,
"file watcher received filesystem event"
);
let skills_paths = classify_event(&event, &state);
let now = Instant::now();
skills.add(skills_paths);
@@ -245,6 +239,13 @@ impl FileWatcher {
}
fn classify_event(event: &Event, state: &RwLock<WatchState>) -> Vec<PathBuf> {
if !matches!(
event.kind,
EventKind::Create(_) | EventKind::Modify(_) | EventKind::Remove(_)
) {
return Vec::new();
}
let mut skills_paths = Vec::new();
let skills_roots = match state.read() {
Ok(state) => state.skills_roots.clone(),
@@ -271,6 +272,11 @@ fn is_skills_path(path: &Path, roots: &HashSet<PathBuf>) -> bool {
mod tests {
use super::*;
use notify::EventKind;
use notify::event::AccessKind;
use notify::event::AccessMode;
use notify::event::CreateKind;
use notify::event::ModifyKind;
use notify::event::RemoveKind;
use pretty_assertions::assert_eq;
use tokio::time::timeout;
@@ -278,8 +284,8 @@ mod tests {
PathBuf::from(name)
}
- fn notify_event(paths: Vec<PathBuf>) -> Event {
- let mut event = Event::new(EventKind::Any);
+ fn notify_event(kind: EventKind, paths: Vec<PathBuf>) -> Event {
+ let mut event = Event::new(kind);
for path in paths {
event = event.add_path(path);
}
@@ -327,10 +333,13 @@ mod tests {
let state = RwLock::new(WatchState {
skills_roots: HashSet::from([root.clone()]),
});
- let event = notify_event(vec![
- root.join("demo/SKILL.md"),
- path("/tmp/other/not-a-skill.txt"),
- ]);
+ let event = notify_event(
+ EventKind::Create(CreateKind::Any),
+ vec![
+ root.join("demo/SKILL.md"),
+ path("/tmp/other/not-a-skill.txt"),
+ ],
+ );
let classified = classify_event(&event, &state);
assert_eq!(classified, vec![root.join("demo/SKILL.md")]);
@@ -343,11 +352,14 @@ mod tests {
let state = RwLock::new(WatchState {
skills_roots: HashSet::from([root_a.clone(), root_b.clone()]),
});
- let event = notify_event(vec![
- root_a.join("alpha/SKILL.md"),
- path("/tmp/skills-extra/not-under-skills.txt"),
- root_b.join("beta/SKILL.md"),
- ]);
+ let event = notify_event(
+ EventKind::Modify(ModifyKind::Any),
+ vec![
+ root_a.join("alpha/SKILL.md"),
+ path("/tmp/skills-extra/not-under-skills.txt"),
+ root_b.join("beta/SKILL.md"),
+ ],
+ );
let classified = classify_event(&event, &state);
assert_eq!(
@@ -356,6 +368,27 @@ mod tests {
);
}
#[test]
fn classify_event_ignores_non_mutating_event_kinds() {
let root = path("/tmp/skills");
let state = RwLock::new(WatchState {
skills_roots: HashSet::from([root.clone()]),
});
let path = root.join("demo/SKILL.md");
let access_event = notify_event(
EventKind::Access(AccessKind::Open(AccessMode::Any)),
vec![path.clone()],
);
assert_eq!(classify_event(&access_event, &state), Vec::<PathBuf>::new());
let any_event = notify_event(EventKind::Any, vec![path.clone()]);
assert_eq!(classify_event(&any_event, &state), Vec::<PathBuf>::new());
let other_event = notify_event(EventKind::Other, vec![path]);
assert_eq!(classify_event(&other_event, &state), Vec::<PathBuf>::new());
}
#[test]
fn register_skills_root_dedupes_state_entries() {
let watcher = FileWatcher::noop();
@@ -382,7 +415,10 @@ mod tests {
watcher.spawn_event_loop(raw_rx, Arc::clone(&watcher.state), tx);
raw_tx
- .send(Ok(notify_event(vec![root.join("a/SKILL.md")])))
+ .send(Ok(notify_event(
+ EventKind::Create(CreateKind::File),
+ vec![root.join("a/SKILL.md")],
+ )))
.expect("send first event");
let first = timeout(Duration::from_secs(2), rx.recv())
.await
@@ -396,7 +432,10 @@ mod tests {
);
raw_tx
- .send(Ok(notify_event(vec![root.join("b/SKILL.md")])))
+ .send(Ok(notify_event(
+ EventKind::Remove(RemoveKind::File),
+ vec![root.join("b/SKILL.md")],
+ )))
.expect("send second event");
drop(raw_tx);

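The filter added to `classify_event` above is the core of the fix: on some platforms the `notify` watcher also reports file opens and reads, so only mutating event kinds may produce skills-update candidates. A std-only sketch of that filtering pattern (the `FsEventKind` enum here is a stand-in for `notify::EventKind`, not the real type):

```rust
use std::path::PathBuf;

// Stand-in for notify::EventKind; the real crate has richer nested variants.
#[derive(Debug, Clone, Copy, PartialEq)]
enum FsEventKind {
    Access, // opens/reads: must be ignored to avoid spurious updates
    Create,
    Modify,
    Remove,
    Other,
}

struct FsEvent {
    kind: FsEventKind,
    paths: Vec<PathBuf>,
}

// Only mutating kinds yield candidate paths; everything else is dropped early.
fn classify(event: &FsEvent) -> Vec<PathBuf> {
    if !matches!(
        event.kind,
        FsEventKind::Create | FsEventKind::Modify | FsEventKind::Remove
    ) {
        return Vec::new();
    }
    event.paths.clone()
}

fn main() {
    let p = PathBuf::from("/tmp/skills/demo/SKILL.md");
    let access = FsEvent { kind: FsEventKind::Access, paths: vec![p.clone()] };
    let modify = FsEvent { kind: FsEventKind::Modify, paths: vec![p.clone()] };
    assert!(classify(&access).is_empty());
    assert_eq!(classify(&modify), vec![p]);
}
```

Checking the kind before touching the path set keeps the hot path cheap: non-mutating events return without taking the `skills_roots` lock at all.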
@@ -9,6 +9,7 @@ use crate::context_manager::ContextManager;
use crate::protocol::RateLimitSnapshot;
use crate::protocol::TokenUsage;
use crate::protocol::TokenUsageInfo;
use crate::tasks::RegularTask;
use crate::truncate::TruncationPolicy;
/// Persistent, session-scoped state previously stored directly on `Session`.
@@ -26,6 +27,8 @@ pub(crate) struct SessionState {
pub(crate) initial_context_seeded: bool,
/// Previous rollout model for one-shot model-switch handling on first turn after resume.
pub(crate) pending_resume_previous_model: Option<String>,
/// Startup regular task pre-created during session initialization.
pub(crate) startup_regular_task: Option<RegularTask>,
}
impl SessionState {
@@ -41,6 +44,7 @@ impl SessionState {
mcp_dependency_prompted: HashSet::new(),
initial_context_seeded: false,
pending_resume_previous_model: None,
startup_regular_task: None,
}
}
@@ -128,6 +132,14 @@ impl SessionState {
pub(crate) fn dependency_env(&self) -> HashMap<String, String> {
self.dependency_env.clone()
}
pub(crate) fn set_startup_regular_task(&mut self, task: RegularTask) {
self.startup_regular_task = Some(task);
}
pub(crate) fn take_startup_regular_task(&mut self) -> Option<RegularTask> {
self.startup_regular_task.take()
}
}
// Sometimes new snapshots don't include credits or plan information.

@@ -1,19 +1,82 @@
use std::sync::Arc;
use std::sync::Mutex;
use crate::client::ModelClient;
use crate::client::ModelClientSession;
use crate::codex::TurnContext;
use crate::codex::run_turn;
use crate::state::TaskKind;
use async_trait::async_trait;
use codex_otel::OtelManager;
use codex_protocol::user_input::UserInput;
use futures::future::BoxFuture;
use tokio::task::JoinHandle;
use tokio_util::sync::CancellationToken;
use tracing::Instrument;
use tracing::trace_span;
use tracing::warn;
use super::SessionTask;
use super::SessionTaskContext;
- #[derive(Clone, Copy, Default)]
- pub(crate) struct RegularTask;
+ type PrewarmedSessionTask = JoinHandle<Option<ModelClientSession>>;
+ pub(crate) struct RegularTask {
+ prewarmed_session_task: Mutex<Option<PrewarmedSessionTask>>,
+ }
impl Default for RegularTask {
fn default() -> Self {
Self {
prewarmed_session_task: Mutex::new(None),
}
}
}
impl RegularTask {
pub(crate) fn with_startup_prewarm(
model_client: ModelClient,
otel_manager: OtelManager,
turn_metadata_header: BoxFuture<'static, Option<String>>,
) -> Self {
let prewarmed_session_task = tokio::spawn(async move {
let mut client_session = model_client.new_session();
let turn_metadata_header = turn_metadata_header.await;
match client_session
.prewarm_websocket(&otel_manager, turn_metadata_header.as_deref())
.await
{
Ok(()) => Some(client_session),
Err(err) => {
warn!("startup websocket prewarm task failed: {err}");
None
}
}
});
Self {
prewarmed_session_task: Mutex::new(Some(prewarmed_session_task)),
}
}
async fn take_prewarmed_session(&self) -> Option<ModelClientSession> {
let prewarmed_session_task = self
.prewarmed_session_task
.lock()
.unwrap_or_else(std::sync::PoisonError::into_inner)
.take();
match prewarmed_session_task {
Some(task) => match task.await {
Ok(client_session) => client_session,
Err(err) => {
warn!("startup websocket prewarm task join failed: {err}");
None
}
},
None => None,
}
}
}
#[async_trait]
impl SessionTask for RegularTask {
@@ -34,8 +97,15 @@ impl SessionTask for RegularTask {
sess.services
.otel_manager
.apply_traceparent_parent(&run_turn_span);
- run_turn(sess, ctx, input, cancellation_token)
- .instrument(run_turn_span)
- .await
+ let prewarmed_client_session = self.take_prewarmed_session().await;
+ run_turn(
+ sess,
+ ctx,
+ input,
+ prewarmed_client_session,
+ cancellation_token,
+ )
+ .instrument(run_turn_span)
+ .await
}
}

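The `RegularTask` change above follows a common take-once shape: spawn the prewarm work eagerly, park the join handle in a `Mutex<Option<_>>`, and let the first turn consume it; later turns (and a failed prewarm) simply get `None` and fall back to a fresh session. A std-thread sketch of that pattern (the names are illustrative, not the codex API):

```rust
use std::sync::Mutex;
use std::thread::{self, JoinHandle};

struct PrewarmedTask {
    // Option so the handle can be taken exactly once; Mutex for shared access.
    handle: Mutex<Option<JoinHandle<Option<String>>>>,
}

impl PrewarmedTask {
    // Spawn the prewarm work eagerly at construction time.
    fn spawn(payload: &'static str) -> Self {
        let handle = thread::spawn(move || Some(payload.to_string()));
        Self { handle: Mutex::new(Some(handle)) }
    }

    // First caller gets the prewarmed value; later callers get None.
    fn take(&self) -> Option<String> {
        let handle = self
            .handle
            .lock()
            .unwrap_or_else(std::sync::PoisonError::into_inner)
            .take()?;
        match handle.join() {
            Ok(value) => value,
            // Treat a failed prewarm as "no session", mirroring the warn! path.
            Err(_) => None,
        }
    }
}

fn main() {
    let task = PrewarmedTask::spawn("warm session");
    assert_eq!(task.take(), Some("warm session".to_string()));
    assert_eq!(task.take(), None); // handle already consumed
}
```

Because every failure mode collapses to `None`, the turn code needs no special-casing: it either receives a warm session or establishes one itself.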
@@ -1,72 +0,0 @@
use crate::function_tool::FunctionCallError;
use crate::state_db;
use crate::tools::context::ToolInvocation;
use crate::tools::context::ToolOutput;
use crate::tools::context::ToolPayload;
use crate::tools::handlers::parse_arguments;
use crate::tools::registry::ToolHandler;
use crate::tools::registry::ToolKind;
use async_trait::async_trait;
use codex_protocol::ThreadId;
use codex_protocol::models::FunctionCallOutputBody;
use serde::Deserialize;
use serde_json::json;
pub struct GetMemoryHandler;
#[derive(Deserialize)]
struct GetMemoryArgs {
memory_id: String,
}
#[async_trait]
impl ToolHandler for GetMemoryHandler {
fn kind(&self) -> ToolKind {
ToolKind::Function
}
async fn handle(&self, invocation: ToolInvocation) -> Result<ToolOutput, FunctionCallError> {
let ToolInvocation {
session, payload, ..
} = invocation;
let arguments = match payload {
ToolPayload::Function { arguments } => arguments,
_ => {
return Err(FunctionCallError::RespondToModel(
"get_memory handler received unsupported payload".to_string(),
));
}
};
let args: GetMemoryArgs = parse_arguments(&arguments)?;
let thread_id = ThreadId::from_string(args.memory_id.as_str()).map_err(|err| {
FunctionCallError::RespondToModel(format!("memory_id must be a valid thread id: {err}"))
})?;
let state_db_ctx = session.state_db();
let memory =
state_db::get_thread_memory(state_db_ctx.as_deref(), thread_id, "get_memory_tool")
.await
.ok_or_else(|| {
FunctionCallError::RespondToModel(format!(
"memory not found for memory_id={}",
args.memory_id
))
})?;
let content = serde_json::to_string_pretty(&json!({
"memory_id": args.memory_id,
"trace_summary": memory.trace_summary,
"memory_summary": memory.memory_summary,
}))
.map_err(|err| {
FunctionCallError::Fatal(format!("failed to serialize memory payload: {err}"))
})?;
Ok(ToolOutput::Function {
body: FunctionCallOutputBody::Text(content),
success: Some(true),
})
}
}

@@ -1,7 +1,6 @@
pub mod apply_patch;
pub(crate) mod collab;
mod dynamic;
mod get_memory;
mod grep_files;
mod list_dir;
mod mcp;
@@ -21,7 +20,6 @@ use crate::function_tool::FunctionCallError;
pub use apply_patch::ApplyPatchHandler;
pub use collab::CollabHandler;
pub use dynamic::DynamicToolHandler;
pub use get_memory::GetMemoryHandler;
pub use grep_files::GrepFilesHandler;
pub use list_dir::ListDirHandler;
pub use mcp::McpHandler;

@@ -33,7 +33,6 @@ pub(crate) struct ToolsConfig {
pub supports_image_input: bool,
pub collab_tools: bool,
pub collaboration_modes_tools: bool,
pub memory_tools: bool,
pub request_rule_enabled: bool,
pub experimental_supported_tools: Vec<String>,
}
@@ -54,7 +53,6 @@ impl ToolsConfig {
let include_apply_patch_tool = features.enabled(Feature::ApplyPatchFreeform);
let include_collab_tools = features.enabled(Feature::Collab);
let include_collaboration_modes_tools = features.enabled(Feature::CollaborationModes);
let include_memory_tools = features.enabled(Feature::MemoryTool);
let request_rule_enabled = features.enabled(Feature::RequestRule);
let shell_type = if !features.enabled(Feature::ShellTool) {
@@ -89,7 +87,6 @@ impl ToolsConfig {
supports_image_input: model_info.input_modalities.contains(&InputModality::Image),
collab_tools: include_collab_tools,
collaboration_modes_tools: include_collaboration_modes_tools,
memory_tools: include_memory_tools,
request_rule_enabled,
experimental_supported_tools: model_info.experimental_supported_tools.clone(),
}
@@ -663,28 +660,6 @@ fn create_request_user_input_tool() -> ToolSpec {
})
}
fn create_get_memory_tool() -> ToolSpec {
let properties = BTreeMap::from([(
"memory_id".to_string(),
JsonSchema::String {
description: Some(
"Memory ID to fetch. Uses the thread ID as the memory identifier.".to_string(),
),
},
)]);
ToolSpec::Function(ResponsesApiTool {
name: "get_memory".to_string(),
description: "Loads the full stored memory payload for a memory_id.".to_string(),
strict: false,
parameters: JsonSchema::Object {
properties,
required: Some(vec!["memory_id".to_string()]),
additional_properties: Some(false.into()),
},
})
}
fn create_close_agent_tool() -> ToolSpec {
let mut properties = BTreeMap::new();
properties.insert(
@@ -1279,7 +1254,6 @@ pub(crate) fn build_specs(
use crate::tools::handlers::ApplyPatchHandler;
use crate::tools::handlers::CollabHandler;
use crate::tools::handlers::DynamicToolHandler;
use crate::tools::handlers::GetMemoryHandler;
use crate::tools::handlers::GrepFilesHandler;
use crate::tools::handlers::ListDirHandler;
use crate::tools::handlers::McpHandler;
@@ -1301,7 +1275,6 @@ pub(crate) fn build_specs(
let plan_handler = Arc::new(PlanHandler);
let apply_patch_handler = Arc::new(ApplyPatchHandler);
let dynamic_tool_handler = Arc::new(DynamicToolHandler);
let get_memory_handler = Arc::new(GetMemoryHandler);
let view_image_handler = Arc::new(ViewImageHandler);
let mcp_handler = Arc::new(McpHandler);
let mcp_resource_handler = Arc::new(McpResourceHandler);
@@ -1361,11 +1334,6 @@ pub(crate) fn build_specs(
builder.register_handler("request_user_input", request_user_input_handler);
}
if config.memory_tools {
builder.push_spec(create_get_memory_tool());
builder.register_handler("get_memory", get_memory_handler);
}
if let Some(apply_patch_tool_type) = &config.apply_patch_tool_type {
match apply_patch_tool_type {
ApplyPatchToolType::Freeform => {
@@ -1742,33 +1710,6 @@ mod tests {
assert_contains_tool_names(&tools, &["request_user_input"]);
}
#[test]
fn get_memory_requires_memory_tool_feature() {
let config = test_config();
let model_info = ModelsManager::construct_model_info_offline("gpt-5-codex", &config);
let mut features = Features::with_defaults();
features.disable(Feature::MemoryTool);
let tools_config = ToolsConfig::new(&ToolsConfigParams {
model_info: &model_info,
features: &features,
web_search_mode: Some(WebSearchMode::Cached),
});
let (tools, _) = build_specs(&tools_config, None, &[]).build();
assert!(
!tools.iter().any(|t| t.spec.name() == "get_memory"),
"get_memory should be disabled when memory_tool feature is off"
);
features.enable(Feature::MemoryTool);
let tools_config = ToolsConfig::new(&ToolsConfigParams {
model_info: &model_info,
features: &features,
web_search_mode: Some(WebSearchMode::Cached),
});
let (tools, _) = build_specs(&tools_config, None, &[]).build();
assert_contains_tool_names(&tools, &["get_memory"]);
}
fn assert_model_tools(
model_slug: &str,
features: &Features,

@@ -1,7 +1,7 @@
//! Helpers for computing and resolving optional per-turn metadata headers.
//!
//! This module owns both metadata construction and the shared timeout policy used by
- //! turn execution and startup websocket preconnect. Keeping timeout behavior centralized
+ //! turn execution and startup websocket prewarm. Keeping timeout behavior centralized
//! ensures both call sites treat timeout as the same best-effort fallback condition.
use std::collections::BTreeMap;
@@ -23,7 +23,7 @@ pub(crate) const TURN_METADATA_HEADER_TIMEOUT: Duration = Duration::from_millis(
/// On timeout, this logs a warning and returns the provided fallback header.
///
/// Keeping this helper centralized avoids drift between turn-time metadata resolution and startup
- /// websocket preconnect, both of which need identical timeout semantics.
+ /// websocket prewarm, both of which need identical timeout semantics.
pub(crate) async fn resolve_turn_metadata_header_with_timeout<F>(
build_header: F,
fallback_on_timeout: Option<String>,

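The module doc above describes one shared timeout policy: wait briefly for the metadata header, and on timeout log a warning and use the provided fallback. A std-only sketch of that best-effort shape, using a channel's `recv_timeout` in place of the async timeout (names and the timeout value are illustrative):

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Wait up to `timeout` for a computed header; fall back on timeout.
fn resolve_header_with_timeout(
    build_header: impl FnOnce() -> Option<String> + Send + 'static,
    fallback_on_timeout: Option<String>,
    timeout: Duration,
) -> Option<String> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        // Receiver may already be gone on timeout; ignore the send error.
        let _ = tx.send(build_header());
    });
    match rx.recv_timeout(timeout) {
        Ok(header) => header,
        Err(_) => {
            eprintln!("turn metadata header timed out; using fallback");
            fallback_on_timeout
        }
    }
}

fn main() {
    // Fast builder resolves within the window.
    let fast = resolve_header_with_timeout(
        || Some("md=1".to_string()),
        None,
        Duration::from_millis(250),
    );
    assert_eq!(fast, Some("md=1".to_string()));

    // Slow builder hits the timeout and yields the fallback.
    let slow = resolve_header_with_timeout(
        || {
            thread::sleep(Duration::from_millis(500));
            Some("late".to_string())
        },
        Some("fallback".to_string()),
        Duration::from_millis(50),
    );
    assert_eq!(slow, Some("fallback".to_string()));
}
```

Centralizing this helper is what lets turn execution and prewarm share identical "timeout means best-effort fallback" semantics instead of drifting apart.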
codex-rs/core/tests/suite/client_websockets.rs Normal file → Executable file
@@ -12,6 +12,8 @@ use codex_core::WireApi;
use codex_core::X_RESPONSESAPI_INCLUDE_TIMING_METRICS_HEADER;
use codex_core::features::Feature;
use codex_core::models_manager::manager::ModelsManager;
use codex_core::protocol::EventMsg;
use codex_core::protocol::Op;
use codex_core::protocol::SessionSource;
use codex_otel::OtelManager;
use codex_otel::TelemetryAuthMode;
@@ -22,6 +24,7 @@ use codex_protocol::account::PlanType;
use codex_protocol::config_types::ReasoningSummary;
use codex_protocol::openai_models::ModelInfo;
use codex_protocol::openai_models::ReasoningEffort as ReasoningEffortConfig;
use codex_protocol::user_input::UserInput;
use core_test_support::load_default_config_for_test;
use core_test_support::responses::WebSocketConnectionConfig;
use core_test_support::responses::WebSocketTestServer;
@@ -30,7 +33,8 @@ use core_test_support::responses::ev_response_created;
use core_test_support::responses::start_websocket_server;
use core_test_support::responses::start_websocket_server_with_headers;
use core_test_support::skip_if_no_network;
use futures::FutureExt;
use core_test_support::test_codex::test_codex;
use core_test_support::wait_for_event;
use futures::StreamExt;
use opentelemetry_sdk::metrics::InMemoryMetricExporter;
use pretty_assertions::assert_eq;
@@ -98,11 +102,11 @@ async fn responses_websocket_preconnect_reuses_connection() {
.await;
let harness = websocket_harness(&server).await;
- harness
- .client
- .pre_establish_connection(harness.otel_manager.clone(), async { None }.boxed());
+ let mut client_session = harness.client.new_session();
+ client_session
+ .prewarm_websocket(&harness.otel_manager, None)
+ .await
+ .expect("websocket prewarm failed");
let prompt = prompt_with_input(vec![message_item("hello")]);
stream_until_complete(&mut client_session, &harness, &prompt).await;
@@ -123,11 +127,11 @@ async fn responses_websocket_preconnect_is_reused_even_with_header_changes() {
.await;
let harness = websocket_harness(&server).await;
- harness
- .client
- .pre_establish_connection(harness.otel_manager.clone(), async { None }.boxed());
+ let mut client_session = harness.client.new_session();
+ client_session
+ .prewarm_websocket(&harness.otel_manager, None)
+ .await
+ .expect("websocket prewarm failed");
let prompt = prompt_with_input(vec![message_item("hello")]);
let mut stream = client_session
.stream(
@@ -393,6 +397,143 @@ async fn responses_websocket_emits_rate_limit_events() {
server.shutdown().await;
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn responses_websocket_usage_limit_error_emits_rate_limit_event() {
skip_if_no_network!();
let usage_limit_error = json!({
"type": "error",
"status": 429,
"error": {
"type": "usage_limit_reached",
"message": "The usage limit has been reached",
"plan_type": "pro",
"resets_at": 1704067242,
"resets_in_seconds": 1234
},
"headers": {
"x-codex-primary-used-percent": "100.0",
"x-codex-secondary-used-percent": "87.5",
"x-codex-primary-over-secondary-limit-percent": "95.0",
"x-codex-primary-window-minutes": "15",
"x-codex-secondary-window-minutes": "60"
}
});
let server = start_websocket_server(vec![vec![vec![usage_limit_error]]]).await;
let mut builder = test_codex().with_config(|config| {
config.model_provider.request_max_retries = Some(0);
config.model_provider.stream_max_retries = Some(0);
});
let test = builder
.build_with_websocket_server(&server)
.await
.expect("build websocket codex");
let submission_id = test
.codex
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "hello".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
.await
.expect("submission should succeed while emitting usage limit error events");
let token_event =
wait_for_event(&test.codex, |msg| matches!(msg, EventMsg::TokenCount(_))).await;
let EventMsg::TokenCount(event) = token_event else {
unreachable!();
};
let event_json = serde_json::to_value(&event).expect("serialize token count event");
pretty_assertions::assert_eq!(
event_json,
json!({
"info": null,
"rate_limits": {
"primary": {
"used_percent": 100.0,
"window_minutes": 15,
"resets_at": null
},
"secondary": {
"used_percent": 87.5,
"window_minutes": 60,
"resets_at": null
},
"credits": null,
"plan_type": null
}
})
);
let error_event = wait_for_event(&test.codex, |msg| matches!(msg, EventMsg::Error(_))).await;
let EventMsg::Error(error_event) = error_event else {
unreachable!();
};
assert!(
error_event.message.to_lowercase().contains("usage limit"),
"unexpected error message for submission {submission_id}: {}",
error_event.message
);
server.shutdown().await;
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn responses_websocket_invalid_request_error_with_status_is_forwarded() {
skip_if_no_network!();
let invalid_request_error = json!({
"type": "error",
"status": 400,
"error": {
"type": "invalid_request_error",
"message": "Model 'castor-raikou-0205-ev3' does not support image inputs."
}
});
let server = start_websocket_server(vec![vec![vec![invalid_request_error]]]).await;
let mut builder = test_codex().with_config(|config| {
config.model_provider.request_max_retries = Some(0);
config.model_provider.stream_max_retries = Some(0);
});
let test = builder
.build_with_websocket_server(&server)
.await
.expect("build websocket codex");
let submission_id = test
.codex
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "hello".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
.await
.expect("submission should succeed while emitting invalid request events");
let error_event = wait_for_event(&test.codex, |msg| matches!(msg, EventMsg::Error(_))).await;
let EventMsg::Error(error_event) = error_event else {
unreachable!();
};
assert!(
error_event
.message
.to_lowercase()
.contains("does not support image inputs"),
"unexpected error message for submission {submission_id}: {}",
error_event.message
);
server.shutdown().await;
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn responses_websocket_appends_on_prefix() {
skip_if_no_network!();

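The usage-limit test above shows the websocket error frame carrying rate-limit data in `x-codex-*` headers, which the client translates into the `TokenCount` event's rate-limit snapshot. A hedged sketch of that header-to-window translation (only the header names come from the test; the struct and parsing are assumptions):

```rust
use std::collections::HashMap;

#[derive(Debug, PartialEq)]
struct RateLimitWindow {
    used_percent: f64,
    window_minutes: u64,
}

// Pull one window's fields out of the header map, if both are present and parse.
fn window_from_headers(
    headers: &HashMap<String, String>,
    percent_key: &str,
    minutes_key: &str,
) -> Option<RateLimitWindow> {
    let used_percent = headers.get(percent_key)?.parse().ok()?;
    let window_minutes = headers.get(minutes_key)?.parse().ok()?;
    Some(RateLimitWindow { used_percent, window_minutes })
}

fn main() {
    let headers: HashMap<String, String> = [
        ("x-codex-primary-used-percent", "100.0"),
        ("x-codex-primary-window-minutes", "15"),
        ("x-codex-secondary-used-percent", "87.5"),
        ("x-codex-secondary-window-minutes", "60"),
    ]
    .into_iter()
    .map(|(k, v)| (k.to_string(), v.to_string()))
    .collect();

    let primary = window_from_headers(
        &headers,
        "x-codex-primary-used-percent",
        "x-codex-primary-window-minutes",
    );
    assert_eq!(
        primary,
        Some(RateLimitWindow { used_percent: 100.0, window_minutes: 15 })
    );
}
```

Returning `Option` per window matches the test's expectations, where absent headers serialize as `null` fields rather than failing the whole event.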
@@ -1,101 +0,0 @@
#![allow(clippy::expect_used, clippy::unwrap_used)]
use anyhow::Result;
use codex_core::features::Feature;
use core_test_support::responses::ev_assistant_message;
use core_test_support::responses::ev_completed;
use core_test_support::responses::ev_response_created;
use core_test_support::responses::mount_function_call_agent_response;
use core_test_support::responses::mount_sse_once;
use core_test_support::responses::sse;
use core_test_support::responses::start_mock_server;
use core_test_support::skip_if_no_network;
use core_test_support::test_codex::test_codex;
use pretty_assertions::assert_eq;
use serde_json::Value;
use serde_json::json;
use tokio::time::Duration;
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn get_memory_tool_returns_persisted_thread_memory() -> Result<()> {
skip_if_no_network!(Ok(()));
let server = start_mock_server().await;
let mut builder = test_codex().with_config(|config| {
config.features.enable(Feature::Sqlite);
config.features.enable(Feature::MemoryTool);
});
let test = builder.build(&server).await?;
let db = test.codex.state_db().expect("state db enabled");
let thread_id = test.session_configured.session_id;
let thread_id_string = thread_id.to_string();
mount_sse_once(
&server,
sse(vec![
ev_response_created("resp-init"),
ev_assistant_message("msg-init", "Materialized"),
ev_completed("resp-init"),
]),
)
.await;
test.submit_turn("materialize thread before memory write")
.await?;
let mut thread_exists = false;
// Wait for DB creation.
for _ in 0..100 {
if db.get_thread(thread_id).await?.is_some() {
thread_exists = true;
break;
}
tokio::time::sleep(Duration::from_millis(25)).await;
}
assert!(thread_exists, "thread should exist in state db");
let trace_summary = "trace summary from sqlite";
let memory_summary = "memory summary from sqlite";
db.upsert_thread_memory(thread_id, trace_summary, memory_summary)
.await?;
let call_id = "memory-call-1";
let arguments = json!({
"memory_id": thread_id_string,
})
.to_string();
let mocks =
mount_function_call_agent_response(&server, call_id, &arguments, "get_memory").await;
test.submit_turn("load the saved memory").await?;
let initial_request = mocks.function_call.single_request().body_json();
assert!(
initial_request["tools"]
.as_array()
.expect("tools array")
.iter()
.filter_map(|tool| tool.get("name").and_then(Value::as_str))
.any(|name| name == "get_memory"),
"get_memory tool should be exposed when memory_tool feature is enabled"
);
let completion_request = mocks.completion.single_request();
let (content_opt, success_opt) = completion_request
.function_call_output_content_and_success(call_id)
.expect("function_call_output should be present");
let success = success_opt.unwrap_or(true);
assert!(success, "expected successful get_memory tool call output");
let content = content_opt.expect("function_call_output content should be present");
let payload: Value = serde_json::from_str(&content)?;
assert_eq!(
payload,
json!({
"memory_id": thread_id_string,
"trace_summary": trace_summary,
"memory_summary": memory_summary,
})
);
Ok(())
}

@@ -82,7 +82,6 @@ mod list_dir;
mod list_models;
mod live_cli;
mod live_reload;
mod memory_tool;
mod model_info_overrides;
mod model_overrides;
mod model_switching;

@@ -85,7 +85,6 @@ def codex_rust_crate(
proc_macro_dev_deps = all_crate_deps(proc_macro_dev = True)
test_env = {
"INSTA_REQUIRE_FULL_MATCH": "0",
"INSTA_WORKSPACE_ROOT": ".",
"INSTA_SNAPSHOT_PATH": "src",
}

@@ -0,0 +1,115 @@
diff --git a/rs/experimental/platforms/triples.bzl b/rs/experimental/platforms/triples.bzl
index d4892e922..e4a3cd534 100644
--- a/rs/experimental/platforms/triples.bzl
+++ b/rs/experimental/platforms/triples.bzl
@@ -30,6 +30,7 @@ SUPPORTED_EXEC_TRIPLES = [
"x86_64-unknown-linux-gnu",
"aarch64-unknown-linux-gnu",
"x86_64-pc-windows-msvc",
+ "x86_64-pc-windows-gnullvm",
"aarch64-pc-windows-msvc",
"x86_64-apple-darwin",
"aarch64-apple-darwin",
diff --git a/rs/experimental/toolchains/declare_toolchains.bzl b/rs/experimental/toolchains/declare_toolchains.bzl
index 377e758eb..ab2908d14 100644
--- a/rs/experimental/toolchains/declare_toolchains.bzl
+++ b/rs/experimental/toolchains/declare_toolchains.bzl
@@ -12,6 +12,11 @@ def _channel(version):
return "beta"
return "stable"
+def _exec_triple_suffix(exec_triple):
+ if exec_triple.system == "windows":
+ return "{}_{}_{}".format(exec_triple.system, exec_triple.arch, exec_triple.abi)
+ return "{}_{}".format(exec_triple.system, exec_triple.arch)
+
def declare_toolchains(
*,
version,
@@ -26,13 +31,12 @@ def declare_toolchains(
# Rustfmt
for triple in execs:
exec_triple = _parse_triple(triple)
- triple_suffix = exec_triple.system + "_" + exec_triple.arch
+ triple_suffix = _exec_triple_suffix(exec_triple)
repo_label = "@rust_toolchain_artifacts_{}_{}//:".format(triple_suffix, version_key)
- rustfmt_toolchain_name = "{}_{}_{}_rustfmt_toolchain".format(
- exec_triple.system,
- exec_triple.arch,
+ rustfmt_toolchain_name = "{}_{}_rustfmt_toolchain".format(
+ triple_suffix,
version_key,
)
@@ -46,11 +50,8 @@ def declare_toolchains(
)
native.toolchain(
- name = "{}_{}_rustfmt_{}".format(exec_triple.system, exec_triple.arch, version_key),
- exec_compatible_with = [
- "@platforms//os:" + exec_triple.system,
- "@platforms//cpu:" + exec_triple.arch,
- ],
+ name = "{}_rustfmt_{}".format(triple_suffix, version_key),
+ exec_compatible_with = triple_to_constraint_set(triple),
target_compatible_with = [],
target_settings = [
"@rules_rust//rust/toolchain/channel:" + channel,
@@ -63,13 +64,12 @@ def declare_toolchains(
# Rustc
for triple in execs:
exec_triple = _parse_triple(triple)
- triple_suffix = exec_triple.system + "_" + exec_triple.arch
+ triple_suffix = _exec_triple_suffix(exec_triple)
repo_label = "@rust_toolchain_artifacts_{}_{}//:".format(triple_suffix, version_key)
- rust_toolchain_name = "{}_{}_{}_rust_toolchain".format(
- exec_triple.system,
- exec_triple.arch,
+ rust_toolchain_name = "{}_{}_rust_toolchain".format(
+ triple_suffix,
version_key,
)
@@ -130,11 +130,8 @@ def declare_toolchains(
target_key = sanitize_triple(target_triple)
native.toolchain(
- name = "{}_{}_to_{}_{}".format(exec_triple.system, exec_triple.arch, target_key, version_key),
- exec_compatible_with = [
- "@platforms//os:" + exec_triple.system,
- "@platforms//cpu:" + exec_triple.arch,
- ],
+ name = "{}_to_{}_{}".format(triple_suffix, target_key, version_key),
+ exec_compatible_with = triple_to_constraint_set(triple),
target_compatible_with = triple_to_constraint_set(target_triple),
target_settings = [
"@rules_rust//rust/toolchain/channel:" + channel,
diff --git a/rs/experimental/toolchains/module_extension.bzl b/rs/experimental/toolchains/module_extension.bzl
index 04477fefa..55fe0fce0 100644
--- a/rs/experimental/toolchains/module_extension.bzl
+++ b/rs/experimental/toolchains/module_extension.bzl
@@ -37,6 +37,11 @@ def _normalize_arch_name(arch):
return "aarch64"
return arch
+def _exec_triple_suffix(exec_triple):
+ if exec_triple.system == "windows":
+ return "{}_{}_{}".format(exec_triple.system, exec_triple.arch, exec_triple.abi)
+ return "{}_{}".format(exec_triple.system, exec_triple.arch)
+
def _sanitize_path_fragment(path):
return path.replace("/", "_").replace(":", "_")
@@ -230,7 +235,7 @@ def _toolchains_impl(mctx):
if cargo_sha == None:
fail("Could not determine cargo sha for {}".format(triple))
- triple_suffix = exec_triple.system + "_" + exec_triple.arch
+ triple_suffix = _exec_triple_suffix(exec_triple)
rust_toolchain_artifacts(
name = "rust_toolchain_artifacts_{}_{}".format(triple_suffix, version_key),

@@ -0,0 +1,12 @@
diff --git a/runtimes/mingw/crt_sources.bzl b/runtimes/mingw/crt_sources.bzl
index 744f076..1a8a204 100644
--- a/runtimes/mingw/crt_sources.bzl
+++ b/runtimes/mingw/crt_sources.bzl
@@ -336,6 +336,7 @@ MINGWEX_TEXTUAL_SRCS = [
]
MINGWEX_X86_SRCS = [
+ "cfguard/guard_dispatch.S",
"math/cbrtl.c",
"math/erfl.c",
"math/fdiml.c",

@@ -0,0 +1,28 @@
--- a/runtimes/mingw/BUILD.bazel
+++ b/runtimes/mingw/BUILD.bazel
@@ -284,6 +284,16 @@
# TODO(zbarsky): Hack for now until we have real libstdc++ support...
stub_library(
name = "stdc++",
+)
+
+# Clang may inject -lssp and -lssp_nonshared for windows-gnu links.
+# Provide compatibility archives in the MinGW runtime search directory.
+stub_library(
+ name = "ssp",
+)
+
+stub_library(
+ name = "ssp_nonshared",
)
copy_to_directory(
@@ -293,6 +303,8 @@
":mingwex",
":moldname",
":stdc++",
+ ":ssp",
+ ":ssp_nonshared",
":ucrt",
":ucrtbase",
":ucrtbased",

@@ -1,12 +0,0 @@
diff --git a/extensions/osx.bzl b/extensions/osx.bzl
index 1e25dc0..8f3e2f9 100644
--- a/extensions/osx.bzl
+++ b/extensions/osx.bzl
@@ -41,6 +41,7 @@ def _osx_extension_impl(mctx):
for framework in frameworks:
includes.append("System/Library/Frameworks/%s.framework/*" % framework)
+ includes.append("System/Library/PrivateFrameworks/%s.framework/*" % framework)
# The following directories are unused, deprecated, or private headers.
# These components: