## Why
The app-server watcher relocation leaves the generic filesystem watcher
as the last watcher-specific implementation still living inside
`codex-core`. Moving that code to a small crate keeps `codex-core`
focused on thread execution and lets app-server depend on the watcher
without reaching back into core for filesystem watching primitives.
This PR is stacked on #21287.
## What changed
- Added a new `codex-file-watcher` crate containing the existing watcher
implementation and its unit tests.
- Updated app-server `fs_watch`, `skills_watcher`, and listener state to
import watcher types from `codex-file-watcher`.
- Removed the `file_watcher` module and `notify` dependency from
`codex-core`.
- Updated Cargo workspace metadata and `Cargo.lock` for the new internal
crate.
## Validation
- `cargo check -p codex-file-watcher -p codex-core -p codex-app-server`
- `cargo test -p codex-file-watcher`
- `cargo test -p codex-app-server
skills_changed_notification_is_emitted_after_skill_change`
- `just bazel-lock-update`
- `just bazel-lock-check`
- `just fix -p codex-file-watcher`
- `just fix -p codex-core`
- `just fix -p codex-app-server`
## Why
Desktop and mobile Codex clients need a machine-readable way to
bootstrap and manage `codex app-server` on remote machines reached over
SSH. The same flow is also useful for bringing up app-server with
`remote_control` enabled on a fresh developer machine and keeping that
managed install current without requiring a human session.
## What changed
- add the new experimental `codex-app-server-daemon` crate and wire it
into `codex app-server daemon` lifecycle commands: `start`, `restart`,
`stop`, `version`, and `bootstrap`
- add explicit `enable-remote-control` and `disable-remote-control`
commands that persist the launch setting and restart a running managed
daemon so the change takes effect immediately
- emit JSON success responses for daemon commands so remote callers can
consume them directly
- support a Unix-only pidfile-backed detached backend for lifecycle
management (sketched after this list)
- assume the standalone `install.sh` layout for daemon-managed binaries
and always launch `CODEX_HOME/packages/standalone/current/codex`
- add bootstrap support for the standalone managed install plus a
detached hourly updater loop
- harden lifecycle management around concurrent operations, pidfile
ownership, stale state cleanup, updater ownership, managed-binary
preflight, Unix-only rejection, forced shutdown after the graceful
window, and updater process-group tracking/cleanup
- document the experimental Unix-only support boundary plus the
standalone bootstrap/update flow in
`codex-rs/app-server-daemon/README.md`
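A minimal sketch of what a pidfile-backed lifecycle check can look like
(Unix-only, assuming a `libc` dependency); the names, paths, and error
handling are illustrative, not the crate's actual API:

```rust
use std::fs;
use std::path::Path;

/// Read the managed daemon's pid from its pidfile, if one exists.
fn daemon_pid(pidfile: &Path) -> Option<i32> {
    let raw = fs::read_to_string(pidfile).ok()?;
    raw.trim().parse::<i32>().ok()
}

/// Signal 0 performs the permission/liveness check without delivering a signal.
fn is_running(pid: i32) -> bool {
    unsafe { libc::kill(pid, 0) == 0 }
}

fn daemon_status(pidfile: &Path) -> &'static str {
    match daemon_pid(pidfile) {
        Some(pid) if is_running(pid) => "running",
        Some(_) => "stale pidfile", // process exited; `start` can clean this up
        None => "stopped",
    }
}
```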
## Verification
- `cargo test -p codex-app-server-daemon -p codex-cli`
- live pid validation on `cb4`: `bootstrap --remote-control`, `restart`,
`version`, `stop`
## Follow-up
- Add updater self-refresh so the long-lived `pid-update-loop` can
replace its own executable image after installing a newer managed Codex
binary.
## Summary
Support registry-backed remote executors end to end so downstream
services can resolve an executor id into an exec-server URL and make
that environment available to Codex without relying on the legacy cloud
environments flow.
## What changed
- switch remote executor registration to the executor registry bootstrap
contract
- allow named remote environments to be inserted into
`EnvironmentManager` at runtime
- add the experimental app-server RPC `environment/add` so initialized
experimental clients can register those remote environments for later
`thread/start` and `turn/start` selection
## Validation
Ran focused validation locally:
- `cargo test -p codex-exec-server environment_manager_`
- `cargo test -p codex-exec-server
register_executor_posts_with_bearer_token_header`
- `cargo test -p codex-app-server-protocol`
## Summary
Startup tool construction currently depends on connector directory
metadata for `tool_suggest` discoverables. On a cold directory cache,
that can put slow connector-directory requests on the blocking path even
though the tools array only needs directory data for install
suggestions, not for the live connector MCP tools themselves.
This PR keeps the discoverables path off that cold network fetch:
- read connector directory metadata from cache only when building
discoverable tools
- persist connector directory metadata to
`~/.codex/cache/codex_app_directory/<hash>.json` and use it to hydrate
the in-memory cache on later runs before the normal refresh path updates
it (see the sketch after this list)
- use connector-directory-specific cache naming to distinguish this
metadata cache from the separate Codex Apps tools-spec cache
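A minimal sketch of the hydrate-from-disk step, assuming
`serde`/`serde_json`; the struct and field names are illustrative, not
the actual `codex-connectors` types:

```rust
use std::fs;
use std::path::Path;

// Illustrative stand-in for the persisted directory metadata shape.
#[derive(serde::Deserialize)]
struct DirectoryMetadata {
    connectors: Vec<serde_json::Value>,
}

/// Returns cached metadata if a prior run persisted it; never touches the
/// network, so a cold cache simply yields `None` and skips discoverables.
fn hydrate_from_disk(cache_file: &Path) -> Option<DirectoryMetadata> {
    let bytes = fs::read(cache_file).ok()?;
    serde_json::from_slice(&bytes).ok()
}
```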
This reduces first-turn startup work without changing how live connector
MCP tools are sourced. Longer term, directory-backed install suggestions
should move to a search-based flow so they no longer need to be inlined
into the tools prompt at all.
## Testing
- `cargo test -p codex-connectors`
- `cargo test -p codex-chatgpt`
- `cargo test -p codex-core
request_plugin_install_is_available_without_search_tool_after_discovery_attempts`
- `cargo test -p codex-core
tool_suggest_uses_connector_id_fallback_when_directory_cache_is_empty`
## Summary
In https://github.com/openai/codex/pull/21584, we disabled doctests for
crates that lack any doctests. We can enforce that property via `cargo
shear --deny-warnings`: crates that lack doctests will be flagged if
doctests are enabled, and crates with doctests will be flagged if
doctests are disabled.
A few additional notes:
- By adding `--deny-warnings`, `cargo shear` also flagged a number of
modules that were not reachable at all. Some of those have been removed.
- This PR removes a usage of `windows_modules!` (since `cargo shear` and
`rustfmt` couldn't see through it) in favor of plain `#[cfg(target_os =
"windows")]` attributes, as shown after this list. As a consequence,
many of these files exhibit churn in this PR, since they weren't being
formatted by `rustfmt` at all on main.
- Again, to make the code more analyzable, this PR also removes some
usages of `#[path = "cwd_junction.rs"]` in favor of a more standard
module structure. The bin sidecar structure is still retained, but,
e.g., `windows-sandbox-rs/src/bin/command_runner.rs` was moved to
`windows-sandbox-rs/src/bin/command_runner/main.rs`, and so on.
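Illustrative shape of the `windows_modules!` change (the macro's exact
invocation syntax is a guess):

```rust
// Before (shape is illustrative): a declarative macro hid the cfg'd
// modules from `rustfmt` and `cargo shear`:
//
//     windows_modules! { cwd_junction }
//
// After: a plain cfg attribute that both tools can see through.
#[cfg(target_os = "windows")]
mod cwd_junction;
```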
---------
Co-authored-by: Codex <noreply@openai.com>
## Summary
Codex's Amazon Bedrock provider signs Mantle requests with SigV4 using
credentials resolved by the AWS SDK. That worked for standard AWS
profiles and environment credentials, but AWS CLI console-login profiles
created by `aws login` require the SDK's `credentials-login` feature to
resolve `login_session` credentials.
This change enables that credential provider so Bedrock can use AWS
console-login credentials through the existing provider-owned AWS auth
path.
While testing the console-login path, we also hit a Mantle-specific
SigV4 regression from the new split between `session_id` and
`thread_id`. Mantle does not preserve legacy OpenAI compatibility
headers that use `snake_case` before SigV4 verification, so signing
those headers can make the server reconstruct a different canonical
request. The Bedrock auth path now removes that header class before
signing, keeping preserved hyphenated Codex/AWS headers such as
`x-codex-turn-metadata` signed normally.
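A hedged sketch of that stripping rule using the `http` crate's
`HeaderMap`; the actual Bedrock signing path may classify headers
differently:

```rust
use http::HeaderMap;

/// Drop legacy snake_case compatibility headers (e.g. `session_id`,
/// `thread_id`) before SigV4 signing; hyphenated Codex/AWS headers such
/// as `x-codex-turn-metadata` are untouched and stay in the signature.
fn strip_snake_case_headers(headers: &mut HeaderMap) {
    let snake_case: Vec<_> = headers
        .keys()
        .filter(|name| name.as_str().contains('_'))
        .cloned()
        .collect();
    for name in snake_case {
        headers.remove(&name);
    }
}
```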
## Changes
- Enable `aws-config`'s `credentials-login` feature in
`codex-rs/aws-auth`.
- Add a compile-time regression test for
`aws_config::login::LoginCredentialsProvider`.
- Strip `snake_case` compatibility headers from Bedrock Mantle SigV4
requests before signing.
- Expand the Bedrock auth regression test to cover `session_id`,
`thread_id`, and future headers of the same shape.
- Refresh Cargo and Bazel lockfiles for the added `aws-sdk-signin`
dependency.
## Tests
- tested with `aws login` locally and verified that it works as
intended.
## Why
After stdio transports and provider-owned defaults exist, Codex needs a
config-backed provider that can describe more than the single legacy
`CODEX_EXEC_SERVER_URL` remote. This PR adds that provider without
activating it in product entrypoints yet, keeping parser/validation
review separate from runtime wiring.
**Stack position:** this is PR 4 of 5. It builds on PR 3's
provider/default model and adds the `environments.toml` provider used by
PR 5.
## What Changed
- Add `environment_toml.rs` as the TOML-specific home for parsing,
validation, and provider construction.
- Keep the TOML schema/provider structs private; the public constructor
added here is `EnvironmentManager::from_codex_home(...)`.
- Add `TomlEnvironmentProvider`, including validation (sketched after
this list) for:
- reserved ids such as `local` and `none`
- duplicate ids
- unknown explicit defaults
- empty programs or URLs
- exactly one of `url` or `program` per configured environment
- Support websocket environments with `url = "ws://..."` / `wss://...`.
- Support stdio-command environments with `program = "..."`.
- Add helpers to load `environments.toml` from `CODEX_HOME`, but do not
wire entrypoints to call them yet.
- Add the `toml` dependency for parsing.
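A sketch of the parse-and-validate idea under an assumed
`environments.toml` shape; the real schema is private to
`environment_toml.rs` and may differ:

```rust
use std::collections::BTreeMap;

// Assumed entry shape: one top-level TOML table per environment id.
#[derive(serde::Deserialize)]
struct EnvironmentEntry {
    url: Option<String>,     // e.g. "wss://exec.example.com"
    program: Option<String>, // e.g. a stdio exec-server command
}

fn validate(id: &str, entry: &EnvironmentEntry) -> Result<(), String> {
    if matches!(id, "local" | "none") {
        return Err(format!("`{id}` is a reserved environment id"));
    }
    match (&entry.url, &entry.program) {
        (Some(_), Some(_)) | (None, None) => {
            Err(format!("`{id}` must set exactly one of `url` or `program`"))
        }
        (Some(url), None) if url.is_empty() => Err(format!("`{id}` has an empty url")),
        (None, Some(program)) if program.is_empty() => {
            Err(format!("`{id}` has an empty program"))
        }
        _ => Ok(()),
    }
}

fn parse(raw: &str) -> Result<BTreeMap<String, EnvironmentEntry>, toml::de::Error> {
    // Map keys are unique, and TOML itself rejects duplicate table names,
    // so duplicate ids fail at parse time under this assumed shape.
    toml::from_str(raw)
}
```

Under this assumed shape, a file containing `[staging]` with
`url = "wss://exec.example.com"` parses into one entry keyed `staging`.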
## Stack
- 1. https://github.com/openai/codex/pull/20663 - Add stdio exec-server
listener
- 2. https://github.com/openai/codex/pull/20664 - Add stdio exec-server
client transport
- 3. https://github.com/openai/codex/pull/20665 - Make environment
providers own default selection
- **4. This PR:** https://github.com/openai/codex/pull/20666 - Add
CODEX_HOME environments TOML provider
- 5. https://github.com/openai/codex/pull/20667 - Load configured
environments from CODEX_HOME
Split from original draft: https://github.com/openai/codex/pull/20508
## Validation
Not run locally; this was split out of the original draft stack.
## Documentation
This introduces the config shape for `environments.toml`; user-facing
documentation should be added before this stack is treated as a
documented public workflow.
---------
Co-authored-by: Codex <noreply@openai.com>
Remove the remote thread-store backend and checked-in protobuf
artifacts. We've moved these into another crate that links against this
one.
Also remove the config settings for thread store backend selection,
since we'll instead pass an instantiated thread store into the core-api
crate's main entrypoint.
## DISCLAIMER
This is experimental; no production service should rely on it.
## Why
Built-in MCPs are product-owned runtime capabilities, but they were
previously flattened into the same config-backed stdio path as
user-configured servers. That made them depend on a hidden `codex
builtin-mcp` re-exec path, exposed them through config-oriented CLI
flows, and erased distinctions the runtime needs to preserve—most
notably whether an MCP call should count as external context for
memory-mode pollution.
## What changed
- Model product-owned built-ins separately from config-backed MCP
servers via `BuiltinMcpServer` and `EffectiveMcpServer` (illustrated
after this list).
- Launch built-ins in process through a reusable async transport instead
of the hidden `builtin-mcp` stdio subcommand.
- Keep config-oriented CLI operations such as `codex mcp
list/get/login/logout` scoped to configured servers, while merging
built-ins only into the effective runtime server set.
- Retain server metadata after launch so parallel-tool support and
context classification come from the live server set; built-in
`memories` is now classified as local Codex state rather than external
context.
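An illustrative shape for that split; the real types live in
`codex-mcp` and may differ:

```rust
struct BuiltinMcpServer {
    name: &'static str,
    /// Drives context classification: built-in `memories` is local Codex
    /// state, so its calls do not count as external context.
    is_external_context: bool,
}

struct ConfiguredMcpServer {
    name: String, // from config.toml [mcp_servers.*]
}

/// The effective runtime server set merges both kinds, while
/// config-oriented CLI flows (`codex mcp list/get/...`) stay scoped to
/// configured servers only.
enum EffectiveMcpServer {
    Builtin(BuiltinMcpServer),
    Configured(ConfiguredMcpServer),
}
```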
## Test plan
- `cargo test -p codex-mcp`
- `cargo test -p codex-core --test suite
builtin_memories_mcp_call_does_not_mark_thread_memory_mode_polluted_when_configured`
---------
Co-authored-by: Codex <noreply@openai.com>
Supersedes the abandoned #19859, rebuilt on latest `main`.
# Why
PR #19705 adds discovery for hooks bundled with plugins, but `/plugins`
still only shows skills, apps, and MCP servers. This follow-up makes
bundled hooks visible in the same plugin detail view so users can
inspect the full plugin surface in one place.
We also need `PluginHookSummary` to populate Plugin Hooks in the app;
`hooks/list` is not enough there because plugin detail needs to show
hooks for disabled plugins too.
# What
- extend `plugin/read` with `PluginHookSummary` entries for bundled
hooks
- summarize plugin hooks while loading plugin details
- render a `Hooks` row in the `/plugins` detail popup
<img width="3456" height="848" alt="CleanShot 2026-04-27 at 11 45 34@2x"
src="https://github.com/user-attachments/assets/fe3a38d6-a260-4351-8513-fb04c93d725b"
/>
## Why
Reverts #20689 to restore the previous optional state DB plumbing. The
conflict resolution keeps the newer installation ID and session/thread
identity changes that landed after #20689, while removing the mandatory
state DB and agent graph store dependency from ThreadManager
construction.
## What changed
- Restored `Option<StateDbHandle>` through app-server, MCP server,
prompt debug, and test entry points.
- Removed the `codex-core` dependency on `codex-agent-graph-store` and
reverted descendant lookup back to the existing state DB path when
available.
- Kept newer `installation_id` forwarding by passing it beside the
optional DB handle.
- Kept local thread-name updates working when the optional state DB
handle is absent.
## Validation
- `git diff --check`
- `cargo test -p codex-thread-store`
- `cargo test -p codex-state -p codex-rollout -p
codex-app-server-protocol`
- Attempted `env CARGO_INCREMENTAL=0 cargo test -p codex-core -p
codex-app-server -p codex-app-server-client -p codex-mcp-server -p
codex-thread-manager-sample -p codex-tui`; blocked locally by a rustc
ICE while compiling `v8 v146.4.0` with `rustc 1.93.0 (254b59607
2026-01-19)` on `aarch64-apple-darwin`.
## Why
The core `Op::ListMcpTools` request path is no longer needed. Keeping it
around left a dead request/response surface alongside the app-server MCP
inventory APIs that own current server status listing.
## What Changed
- Removed `Op::ListMcpTools`, `EventMsg::McpListToolsResponse`, and the
core handler that built the MCP snapshot response.
- Removed the now-unused `codex-mcp` snapshot wrapper/export and passive
event handling arms in rollout and MCP-server consumers.
- Updated tests that used the old op as a synchronization hook to wait
on existing startup/skills events, and deleted the plugin test that only
exercised the removed listing op.
## Validation
- `cargo test -p codex-protocol`
- `cargo test -p codex-mcp`
- `cargo test -p codex-rollout -p codex-rollout-trace -p
codex-mcp-server`
- `cargo test -p codex-core --test all
pending_input::queued_inter_agent_mail`
- `cargo test -p codex-core --test all
rmcp_client::stdio_mcp_tool_call_includes_sandbox_state_meta`
- `cargo test -p codex-core --test all
rmcp_client::stdio_image_responses`
- `just fix -p codex-core -p codex-protocol -p codex-mcp -p
codex-rollout -p codex-rollout-trace -p codex-mcp-server`
## Why
Message history was implemented inside `codex-core` and surfaced through
core protocol ops and `SessionConfiguredEvent` fields even though the
current consumer is TUI-local prompt recall. That made core own UI
history persistence and exposed `history_log_id` / `history_entry_count`
through surfaces that app-server and other clients do not need.
This change moves message history persistence out of core and keeps the
recall plumbing local to the TUI.
## What changed
- Added a new `codex-message-history` crate for appending, looking up,
trimming, and reading metadata from `history.jsonl` (sketched after
this list).
- Removed core protocol history ops/events: `AddToHistory`,
`GetHistoryEntryRequest`, and `GetHistoryEntryResponse`.
- Removed `history_log_id` and `history_entry_count` from
`SessionConfiguredEvent` and updated exec/MCP/test fixtures accordingly.
- Updated the TUI to dispatch local app events for message-history
append/lookup and keep its persistent-history metadata in TUI session
state.
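A minimal append sketch, assuming `serde`/`serde_json`; the entry
fields are illustrative, not the crate's actual record shape:

```rust
use std::fs::OpenOptions;
use std::io::Write;
use std::path::Path;

// Hypothetical record shape; one JSON object per line in history.jsonl.
#[derive(serde::Serialize)]
struct HistoryEntry<'a> {
    session_id: &'a str,
    ts: u64,
    text: &'a str,
}

fn append_entry(history_file: &Path, entry: &HistoryEntry) -> std::io::Result<()> {
    // Append-only: each recall entry becomes one newline-terminated JSON line.
    let mut file = OpenOptions::new().create(true).append(true).open(history_file)?;
    let line = serde_json::to_string(entry)?;
    writeln!(file, "{line}")
}
```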
## Validation
- `cargo test -p codex-message-history -p codex-protocol`
- `cargo test -p codex-exec event_processor_with_json_output`
- `cargo test -p codex-mcp-server outgoing_message`
- `cargo test -p codex-tui`
- `just fix -p codex-message-history -p codex-protocol -p codex-core -p
codex-tui -p codex-exec -p codex-mcp-server`
## Why
The in-process `app-server-client` tests were still building their
configs from the ambient `codex_home` and letting the embedded app
server create its own state DB when `state_db` was absent. That matters
because in-process startup falls back to
`init_state_db_from_config(...)` in that case, so tests can otherwise
share persisted state instead of getting isolated fixtures:
[`app-server/src/in_process.rs`](a98623511b/codex-rs/app-server/src/in_process.rs (L368-L373)).
## What changed
- Give each in-process test client its own temporary `codex_home`.
- Initialize the matching state DB from that per-client config and pass
it into the client explicitly.
- Keep the temp directory alive for the lifetime of the test client
through a small `TestClient` wrapper.
- Add `tempfile` as a dev dependency for the new harness.
The updated setup lives in
[`app-server-client/src/lib.rs`](35c1133d45/codex-rs/app-server-client/src/lib.rs (L982-L1055)).
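A sketch of the ownership trick, with a stub standing in for the real
client type; the point is simply that the `TempDir` must live as long
as the client:

```rust
// Stand-in for the real in-process client; constructor is hypothetical.
struct AppServerClient;

impl AppServerClient {
    fn in_process(_codex_home: &std::path::Path) -> std::io::Result<Self> {
        Ok(Self) // would spawn the embedded app server against this codex_home
    }
}

struct TestClient {
    client: AppServerClient,
    // Never read; held so the temporary codex_home outlives the client
    // instead of being deleted when the setup function returns.
    _codex_home: tempfile::TempDir,
}

fn new_test_client() -> std::io::Result<TestClient> {
    let codex_home = tempfile::tempdir()?;
    let client = AppServerClient::in_process(codex_home.path())?;
    Ok(TestClient { client, _codex_home: codex_home })
}
```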
## Testing
- Existing `codex-app-server-client` tests continue to exercise the
updated in-process client path through the isolated helper.
**Summary**
- Add `codex-bwrap`, a standalone `bwrap` binary built from the existing
vendored bubblewrap sources.
- Remove the linked vendored bwrap path from `codex-linux-sandbox`;
runtime now prefers system `bwrap` and falls back to bundled
`codex-resources/bwrap`.
- Add bundled SHA-256 verification with missing/all-zero digest as the
dev-mode skip value, then exec the verified file through
`/proc/self/fd` (sketched after this list).
- Keep `launcher.rs` focused on choosing and dispatching the preferred
launcher. Bundled lookup, digest verification, and bundled exec now live
in `linux-sandbox/src/bundled_bwrap.rs`; Bazel runfiles lookup lives in
`linux-sandbox/src/bazel_bwrap.rs`; shared argv/fd exec helpers live in
`linux-sandbox/src/exec_util.rs`.
- Teach Bazel tests to surface the Bazel-built `//codex-rs/bwrap:bwrap`
through `CARGO_BIN_EXE_bwrap`; `codex-linux-sandbox` only honors that
fallback in debug Bazel runfiles environments so release/user runtime
lookup stays tied to `codex-resources/bwrap`.
- Allow `codex-exec-server` filesystem helpers to preserve just the
Bazel bwrap/runfiles variables they need in debug Bazel builds, since
those helpers intentionally rebuild a small environment before spawning
`codex-linux-sandbox`.
- Verify the Bazel bwrap target in Linux release CI with a build-only
check. Running `bwrap --version` is too strong for GitHub runners
because bubblewrap still attempts namespace setup there.
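A hedged sketch of the verify-then-exec idea, assuming the `sha2` and
`hex` crates; constants, signatures, and error handling are
illustrative:

```rust
use sha2::{Digest, Sha256};
use std::fs::File;
use std::io::Read;
use std::os::fd::AsRawFd;
use std::os::unix::process::CommandExt;
use std::process::Command;

/// Verify the bundled binary's digest, then exec through the fd we already
/// hold so the bytes we verified are the bytes that run. Returns only on error.
fn exec_verified(path: &str, expected_hex: &str, args: &[String]) -> std::io::Error {
    let mut file = match File::open(path) {
        Ok(f) => f,
        Err(e) => return e,
    };
    let mut bytes = Vec::new();
    if let Err(e) = file.read_to_end(&mut bytes) {
        return e;
    }
    if hex::encode(Sha256::digest(&bytes)) != expected_hex {
        return std::io::Error::other("bundled bwrap digest mismatch");
    }
    // Exec via the open fd; replaces the current process image on success.
    let fd_path = format!("/proc/self/fd/{}", file.as_raw_fd());
    Command::new(fd_path).args(args).exec()
}
```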
**Verification**
- Latest update: `cargo test -p codex-linux-sandbox`
- Latest update: `just fix -p codex-linux-sandbox`
- `cargo check --target x86_64-unknown-linux-gnu -p codex-linux-sandbox`
could not run locally because this macOS machine does not have
`x86_64-linux-gnu-gcc`; GitHub Linux Bazel CI is expected to cover the
Linux-only modules.
- Earlier in this PR: `cargo test -p codex-bwrap`
- Earlier in this PR: `cargo test -p codex-exec-server`
- Earlier in this PR: `cargo check --release -p codex-exec-server`
- Earlier in this PR: `just fix -p codex-linux-sandbox -p
codex-exec-server`
- Earlier in this PR: `bazel test --nobuild
//codex-rs/linux-sandbox:linux-sandbox-all-test
//codex-rs/core:core-all-test
//codex-rs/exec-server:exec-server-file_system-test
//codex-rs/app-server:app-server-all-test` (analysis completed; Bazel
then refuses to run tests under `--nobuild`)
- Earlier in this PR: `bazel build --nobuild //codex-rs/bwrap:bwrap`
- Prior to this update: `just bazel-lock-update`, `just
bazel-lock-check`, and YAML parse check for
`.github/workflows/bazel.yml`
---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/21255).
* #21257
* #21256
* __->__ #21255
## Summary
This PR adds the first `codex-rs` milestone for remote-exec e2e: a local
`codex exec-server` can now register itself with
`codex-cloud-environments` and attach to the returned rendezvous
websocket.
At a high level, `codex exec-server --cloud ...` now:
- loads ChatGPT auth from normal Codex config
- registers an executor with `codex-cloud-environments`
- receives a signed rendezvous websocket URL
- serves the existing exec-server JSON-RPC protocol over that websocket
## What Changed
- Added `--cloud`, `--cloud-base-url`, `--cloud-environment-id`, and
`--cloud-name` to `codex exec-server`
- Added a new `exec-server/src/cloud.rs` module that handles:
- registration requests
- auth/header setup
- bounded auth retry on `401/403`
- reconnect/backoff after websocket disconnects (sketched after this
list)
- Reused the existing `ConnectionProcessor` / `ExecServerHandler` path
so cloud mode serves the same exec/filesystem RPC surface as local
websocket mode
- Added cloud-specific error variants and minimal docs for the new mode
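An illustrative reconnect loop with capped exponential backoff
(assuming `tokio`); the real `cloud.rs` retry policy may use different
delays, caps, and shutdown conditions:

```rust
use std::time::Duration;

async fn serve_with_reconnect(url: &str) {
    let mut delay = Duration::from_secs(1);
    loop {
        match connect_and_serve(url).await {
            // Deliberate shutdown: stop rather than reconnect.
            Ok(()) => break,
            Err(err) => {
                eprintln!("websocket disconnected: {err}; retrying in {delay:?}");
                tokio::time::sleep(delay).await;
                // Capped exponential backoff between attempts.
                delay = (delay * 2).min(Duration::from_secs(60));
            }
        }
    }
}

// Stand-in for registering the executor, attaching to the rendezvous
// websocket, and serving the exec-server JSON-RPC protocol until disconnect.
async fn connect_and_serve(_url: &str) -> std::io::Result<()> {
    Ok(())
}
```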
## Testing
Manual e2e test that fully goes through exec server flow with our codex
cloud agent as orchestrator
## Why
We want the agent graph store to be passed down the stack as a real
dependency, the same way we already treat the thread store.
That lets us support implementations other than the local
SQLite-backed one. Right now most code instantiates a state DB and an
agent graph store just-in-time. Ideally, we would not depend on the
state DB directly but only read through the higher-level interfaces.
This change makes the dependency boundaries explicit and moves state DB
initialization to process bootstrap instead of hiding it inside local
store implementations.
## What changed
- `ThreadManager` now requires a `StateDbHandle` and an
`AgentGraphStore` at construction time instead of treating them as
optional internals.
- The local store constructors no longer lazily initialize SQLite.
Callers now initialize the state DB once per process and use that shared
handle to build:
- `LocalThreadStore`
- `LocalAgentGraphStore`
- App bootstraps (`app-server`, `mcp-server`, `prompt_debug`, and the
thread-manager sample) now initialize the state DB up front and inject
the resulting handle down the stack.
- `app-server` now consistently uses its process-scoped state DB handle
instead of reopening SQLite or trying to recover it from loaded threads.
- Device-key storage now reuses the shared state DB handle instead of
maintaining its own lazy opener.
- The thread archive / descendant traversal paths now use the injected
`AgentGraphStore` instead of reaching through local
thread-store-specific state.
## Verification
- `cargo check -p codex-core -p codex-thread-store -p codex-app-server
-p codex-mcp-server -p codex-thread-manager-sample --tests`
- `cargo test -p codex-thread-store`
- `cargo test -p codex-core
thread_manager_accepts_separate_agent_graph_store_and_thread_store --
--nocapture`
- `cargo test -p codex-app-server
thread_archive_archives_spawned_descendants -- --nocapture`
# Why
We want shared hook trust that both the app and the TUI can build on,
but the metadata is only useful if runtime behavior agrees with it. This
PR adds a single backend trust model for hooks so unmanaged hooks cannot
run until the current definition has been reviewed, while managed hooks
remain runnable and non-configurable.
# What
- persist `trusted_hash` alongside hook state in `config.toml`
- expose `currentHash` and derived `trustStatus` through `hooks/list`
- derive trust from normalized hook definitions so equivalent hooks from
`config.toml` and `hooks.json` share the same trust identity (sketched
after this list)
- gate unmanaged hooks on trust before they enter the runnable handler
set
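A sketch of content-addressed trust, assuming `sha2`/`hex`; the real
normalization and status derivation live in
`hooks/src/engine/discovery.rs` and may differ:

```rust
use sha2::{Digest, Sha256};

#[derive(PartialEq)]
enum TrustStatus {
    Trusted,
    Untrusted,
}

/// Hash a normalized hook definition so equivalent hooks from config.toml
/// and hooks.json share one trust identity.
fn current_hash(normalized_definition: &str) -> String {
    hex::encode(Sha256::digest(normalized_definition.as_bytes()))
}

/// Unmanaged hooks only enter the runnable set when the persisted
/// trusted_hash matches the current definition's hash.
fn trust_status(trusted_hash: Option<&str>, normalized_definition: &str) -> TrustStatus {
    match trusted_hash {
        Some(hash) if hash == current_hash(normalized_definition) => TrustStatus::Trusted,
        _ => TrustStatus::Untrusted, // definition changed or never reviewed
    }
}
```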
# Reviewer Notes
- key file to review is `codex-rs/hooks/src/engine/discovery.rs`
- the only **core** change is schema related
## Why
Large hook outputs can enter model-visible context through hook-specific
paths such as `additionalContext` and `Stop` continuation prompts.
Without a dedicated cap, one hook can inject a large blob directly into
conversation history instead of leaving a bounded preview for the model
and preserving the full text elsewhere.
## What
- spill hook text once it exceeds a fixed `2_500`-token budget,
preserving the full output on disk and leaving a head/tail preview plus
saved path in context (sketched after this list)
- add shared hook-output spilling under
`CODEX_HOME/hook_outputs/<thread_id>/<uuid>.txt`
- apply the cap to `additionalContext`, `feedback_message`, and `Stop`
continuation fragments
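A sketch of the spill shape, assuming the `uuid` crate; token counting
and the preview format are simplified stand-ins for the real
implementation:

```rust
use std::path::{Path, PathBuf};

const HOOK_OUTPUT_TOKEN_BUDGET: usize = 2_500;

fn spill_if_large(
    hook_outputs_dir: &Path, // CODEX_HOME/hook_outputs/<thread_id>
    text: &str,
    approx_tokens: usize,
) -> std::io::Result<String> {
    if approx_tokens <= HOOK_OUTPUT_TOKEN_BUDGET {
        return Ok(text.to_string());
    }
    // Over budget: preserve the full output on disk...
    std::fs::create_dir_all(hook_outputs_dir)?;
    let path: PathBuf = hook_outputs_dir.join(format!("{}.txt", uuid::Uuid::new_v4()));
    std::fs::write(&path, text)?;
    // ...and leave a bounded head/tail preview plus the saved path in context.
    let head: String = text.chars().take(1_000).collect();
    let tail: String = text
        .chars()
        .rev()
        .take(1_000)
        .collect::<Vec<_>>()
        .into_iter()
        .rev()
        .collect();
    Ok(format!("{head}\n[... truncated; full output: {}]\n{tail}", path.display()))
}
```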
## Why
The memories MCP server currently keeps handwritten JSON Schema beside
the Rust types that actually serialize and deserialize the tool
payloads:
[`schema.rs`](2f5c06a29c/codex-rs/memories/mcp/src/schema.rs (L4-L133)),
[`server.rs`](2f5c06a29c/codex-rs/memories/mcp/src/server.rs (L44-L75)),
and
[`backend.rs`](2f5c06a29c/codex-rs/memories/mcp/src/backend.rs (L41-L117)).
That duplicates the tool contract and makes schema drift easier as the
API evolves.
## What changed
- derive `JsonSchema` for the memories tool arguments, responses, and
nested response types (illustrated after this list)
- replace the handwritten schema builders with shared `schemars`
generation
- preserve the existing wire shape while generating schemas, including
nullable output `Option` fields and non-nullable optional input fields
- wire the `list`, `read`, and `search` tools to the generated schemas
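An illustrative derive (assuming `schemars` and `serde`); the actual
memories tool types differ, but the point is that one struct now yields
both the wire (de)serialization and the schema:

```rust
use schemars::{schema_for, JsonSchema};
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize, JsonSchema)]
struct SearchArgs {
    query: String,
    /// Optional input field: omitted on the wire rather than null.
    #[serde(skip_serializing_if = "Option::is_none")]
    limit: Option<u32>,
}

fn main() {
    // Generated schema replaces the handwritten JSON Schema builder.
    let schema = schema_for!(SearchArgs);
    println!("{}", serde_json::to_string_pretty(&schema).unwrap());
}
```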
## Verification
- CI pending
Refs:
https://linear.app/openai/issue/SE-6311/login-fails-for-experian-users-behind-tls-inspecting-proxy
## Summary
- When a custom CA bundle is configured, force the shared `codex-client`
reqwest builder onto rustls before registering custom roots.
- Add the `rustls-tls-native-roots` reqwest feature so the rustls client
preserves native roots plus the enterprise CA bundle.
- Add subprocess TLS coverage for both a direct local TLS 1.3 server and
a hermetic local CONNECT TLS-intercepting proxy that forwards a
token-exchange-shaped POST to a local origin.
## Plain-language explanation
Experian users are behind a TLS-inspecting proxy, so the login token
exchange needs to trust the enterprise CA bundle from
`CODEX_CA_CERTIFICATE` or `SSL_CERT_FILE`. Before this change, that
custom-CA branch still used reqwest default TLS selection, which could
fail in the proxy environment. Now, only when a custom CA is configured,
Codex selects rustls first and then adds the custom CA roots, matching
the validated behavior from the Experian test build while leaving normal
system-root clients unchanged.
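A hedged sketch of that custom-CA branch, assuming reqwest built with
the `rustls-tls` and `rustls-tls-native-roots` features; the real
`codex-client` builder wiring may differ:

```rust
use reqwest::{Certificate, Client};

fn build_client(custom_ca_pem: Option<&[u8]>) -> reqwest::Result<Client> {
    let mut builder = Client::builder();
    if let Some(pem) = custom_ca_pem {
        // Force rustls first so root registration behaves predictably,
        // then add the enterprise CA on top of the native roots.
        builder = builder
            .use_rustls_tls()
            .add_root_certificate(Certificate::from_pem(pem)?);
    }
    // No custom CA configured: default TLS selection, unchanged behavior.
    builder.build()
}
```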
The new regression test recreates the enterprise-proxy shape locally:
the probe client sends an HTTPS `POST /oauth/token` through an explicit
HTTP CONNECT proxy, the proxy presents a leaf certificate signed by a
runtime-generated test CA, decrypts the request, forwards it to a local
origin, and relays the `ok` response back.
## Scope note
- The actual production fix is the first commit: `8368119282 Fix custom
CA reqwest clients to use rustls`.
- The second commit is integration-test coverage only. It generates all
test CA and localhost certificate material at runtime.
## Validation
- `cd codex-rs && cargo test -p codex-client --test ca_env
posts_to_token_origin_through_tls_intercepting_proxy_with_custom_ca_bundle
-- --nocapture`
- `cd codex-rs && cargo test -p codex-client`
- `cd codex-rs && cargo test -p codex-login`
- `cd codex-rs && just fmt`
- `cd codex-rs && just bazel-lock-update`
- `cd codex-rs && just bazel-lock-check`
- `cd codex-rs && just fix -p codex-client`
## Why
`codex-app-server` currently owns both request-processing code and
transport implementation details. Splitting the transport layer into its
own crate makes that boundary explicit, reduces the amount of
transport-specific dependency surface carried by `codex-app-server`, and
gives future transport work a narrower place to evolve.
## What changed
- Added `codex-app-server-transport` and moved the existing transport
tree into it, including stdio, unix socket, websocket, remote-control
transport, and websocket auth.
- Moved shared transport-facing message types into the new crate so both
the transport implementation and `codex-app-server` use the same
definitions.
- Kept processor-facing connection state and outbound routing in
`codex-app-server`, with the routing tests moved next to that local
wrapper.
- Updated workspace metadata, Bazel crate metadata, and
`codex-app-server` dependencies for the new crate boundary.
## Validation
- `cargo metadata --locked --no-deps`
- `git diff --check`
- Attempted `cargo test -p codex-app-server-transport`, `cargo test -p
codex-app-server`, `just fix -p codex-app-server-transport`, and `just
fix -p codex-app-server`; all were blocked before compilation by the
existing `packageproxy` resolution failure for locked `rustls-webpki =
0.103.13`.
- Attempted Bazel build / lockfile validation; those were blocked by
external fetch failures against BuildBuddy / GitHub while resolving
`v8`.
## Summary
This PR installs a first wave of WFP (Windows Filtering Platform)
filters that reduce the surface area of network egress vulnerabilities
for the Windows Sandbox.
- Add persistent Windows Filtering Platform provider, sublayer, and
filters for the Windows sandbox offline account.
- Install WFP filters during elevated full setup, log failures
non-fatally, and emit setup metrics when analytics are enabled.
- Bump the Windows sandbox setup version so existing users rerun full
setup and receive the new filters.
## What WFP is
Windows Filtering Platform (WFP) is the low-level Windows networking
policy engine underneath things like Windows Firewall. It lets
privileged code install persistent filtering rules at specific network
stack layers, with conditions like "only traffic from this Windows
account" or "only this remote port," and an action like block.
In this change, we create a Codex-owned persistent WFP provider and
sublayer, then install block filters scoped to the Windows sandbox's
offline user account via `ALE_USER_ID`. That means the filters are
targeted at sandboxed processes running as that account, rather than
globally affecting the host.
## Initial filter set
We are starting with 12 concrete WFP filters across a few high-value
bypass surfaces. The table below describes the filter families rather
than one filter per row:
| Area | Concrete filters | Purpose |
| --- | --- | --- |
| ICMP | 4 filters: ICMP v4/v6 on `ALE_AUTH_CONNECT` and `ALE_RESOURCE_ASSIGNMENT` | Block direct ping-style network reachability checks from the offline account. |
| DNS | 2 filters: remote port `53` on `ALE_AUTH_CONNECT_V4/V6` | Block direct DNS queries that bypass our intended proxy/offline path. |
| DNS-over-TLS | 2 filters: remote port `853` on `ALE_AUTH_CONNECT_V4/V6` | Block encrypted DNS attempts that could bypass ordinary DNS interception. |
| SMB / NetBIOS | 4 filters: remote ports `445` and `139` on `ALE_AUTH_CONNECT_V4/V6` | Block Windows file-sharing/network share traffic from sandboxed processes. |
For IPv4/IPv6 coverage, the port-based filters are installed on both
`ALE_AUTH_CONNECT_V4` and `ALE_AUTH_CONNECT_V6`. ICMP also gets both
connect-layer and resource-assignment-layer coverage because ICMP
traffic is shaped differently from ordinary TCP/UDP port traffic.
## Validation
- `cargo fmt -p codex-windows-sandbox` (completed with existing
stable-rustfmt warnings about `imports_granularity = Item`)
- `cargo test -p codex-windows-sandbox wfp::tests`
- `cargo test -p codex-windows-sandbox` (fails in existing legacy
PowerShell sandbox tests because `Microsoft.PowerShell.Utility` could
not be loaded; WFP tests passed before that failure)
## Why
`/status` was showing the configured `ModelProviderInfo.base_url` for
Amazon Bedrock, which can be stale or misleading because the actual
Bedrock Mantle endpoint is derived at runtime from the resolved AWS
region. This made sessions report the wrong provider endpoint even
though requests used the correct runtime URL.
## What changed
- Added `ModelProvider::runtime_base_url()` so provider implementations
can expose the request-time base URL through the shared runtime provider
abstraction (sketched after this list).
- Moved Bedrock region-to-Mantle URL resolution into
`amazon_bedrock::mantle::runtime_base_url()`, keeping region resolution
private to the Mantle module.
- Overrode `runtime_base_url()` for Amazon Bedrock so it returns the
resolved Mantle endpoint instead of the configured default.
- Resolved and cached the runtime provider base URL during TUI startup,
then used that cached value when rendering `/status`.
- Added status coverage that verifies Bedrock displays the runtime URL
and ignores the configured Bedrock `base_url` when they differ.
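A hedged sketch of the abstraction; real signatures in `codex-rs` may
differ, and both URL shapes shown are illustrative only:

```rust
trait ModelProvider {
    fn configured_base_url(&self) -> String;

    /// Default: the configured URL. Providers that derive their endpoint
    /// at request time (like Bedrock's region-based Mantle URL) override this.
    fn runtime_base_url(&self) -> String {
        self.configured_base_url()
    }
}

struct AmazonBedrock {
    resolved_region: String,
}

impl ModelProvider for AmazonBedrock {
    fn configured_base_url(&self) -> String {
        // Possibly stale config value; /status should not display this.
        "https://configured.example".to_string()
    }

    fn runtime_base_url(&self) -> String {
        // Illustrative URL shape; the real mapping is private to
        // amazon_bedrock::mantle::runtime_base_url().
        format!("https://mantle.{}.example", self.resolved_region)
    }
}
```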
## Verification
Model provider is resolved correctly in a local build:
<img width="696" height="245" alt="Screenshot 2026-04-29 at 5 01 36 PM"
src="https://github.com/user-attachments/assets/a13c10a5-3720-41ab-8ace-3c4bc573f971"
/>
## Why
Follow-up to #20291.
The v2 item-event-to-notification translation had been embedded in
`app-server/src/bespoke_event_handling.rs`, which made it hard to reuse
anywhere else. This PR moves that stateless mapping into shared protocol
code so other entry points can produce the same `ServerNotification`
payloads without copying app-server logic.
That also lets `thread-manager-sample` demonstrate the same notification
surface that the app server exposes, instead of only printing the final
assistant message.
## What changed
- move `item_event_to_server_notification` into
`codex-app-server-protocol::protocol::event_mapping`
- keep the mapper tests next to the shared implementation in
`codex-app-server-protocol`
- re-export the mapper from `codex-core-api` so lightweight consumers
can use it without reaching into `app-server-protocol` directly
- simplify `app-server/src/bespoke_event_handling.rs` so it delegates
the stateless event-to-notification projection to the shared helper
- update `thread-manager-sample` to:
- print mapped notifications as newline-delimited JSON
- use the shared mapper through `codex-core-api`
- enable the default feature set so the sample exposes the normal tool
surface
- use a `read_only` permission profile so shell commands can run in the
sample without widening permissions
## Testing
- `cargo test -p codex-app-server-protocol`
- `cargo test -p codex-core-api`
- `cargo test -p codex-app-server bespoke_event_handling::tests`
- `cargo test -p codex-thread-manager-sample`
- `cargo run -p codex-thread-manager-sample -- "briefly explore the repo
with pwd and ls, then summarize it"`
## Why
We need a way to list the available hooks to expose via the TUI and App
so users can view and manage their hooks.
## What
- Adds `hooks/list` for one or more `cwd` values that returns discovered
hook metadata
## Stack
1. openai/codex#19705
2. This PR - openai/codex#19778
3. openai/codex#19840
4. openai/codex#19882
## Review Notes
The generated schema files account for most of the raw diff; these
files have the core change:
- `hooks/src/engine/discovery.rs` builds the inventory entries during
hook discovery while leaving runtime handlers focused on execution.
- `app-server/src/codex_message_processor.rs` wires `hooks/list` into
the app-server flow for each requested `cwd`.
- `app-server-protocol/src/protocol/v2.rs` defines the new v2
request/response payloads exposed on the wire.
### Core Changes
`core/src/plugins/manager.rs` adds `plugins_for_layer_stack(...)` so
`skills/list` and `hooks/list` can resolve plugin state for each
requested `cwd`.
---------
Co-authored-by: Codex <noreply@openai.com>
Summary:
- Add a checked-in codex-core public API listing generated by
cargo-public-api.
- Add scripts/regen-public-api.sh with an embedded crate list,
auto-install for cargo-public-api 0.51.0, pinned nightly, and --check
mode.
- Add Rust CI jobs on the codex Linux x64 runner pool to verify the
listing stays up to date.
Testing:
- bash -n scripts/regen-public-api.sh
- just regen-public-api --check
- yq '.' .github/workflows/rust-ci.yml
.github/workflows/rust-ci-full.yml
- git diff --check
## Summary
Persisted subagent parent/child topology currently leaks through
`StateRuntime`'s SQLite-specific thread-spawn helpers. This PR
introduces a narrow `AgentGraphStore` boundary so follow-up work can
route graph operations through a local or remote store without coupling
orchestration code directly to the state DB graph API.
## Changes
- Adds the new `codex-agent-graph-store` crate.
- Defines a flat `AgentGraphStore` trait for the v1 graph surface:
upsert edge, set edge status, list direct children, and list
descendants.
- Adds public graph types for `ThreadSpawnEdgeStatus`,
`AgentGraphStoreError`, and `AgentGraphStoreResult`.
- Implements `LocalAgentGraphStore` on top of an existing
`codex_state::StateRuntime`, preserving today's SQLite-backed
`thread_spawn_edges` behavior.
- Registers the crate in Cargo/Bazel metadata.
This PR only adds the local contract and implementation; call-site
migration and the remote gRPC store are left to the follow-up PRs in the
stack.
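A hedged sketch of that v1 surface; the real signatures (likely async,
with dedicated id types) may differ:

```rust
type ThreadId = String; // stand-in for the real thread id type

enum ThreadSpawnEdgeStatus {
    Open,
    Closed,
}

#[derive(Debug)]
struct AgentGraphStoreError(String);

type AgentGraphStoreResult<T> = Result<T, AgentGraphStoreError>;

/// Flat v1 graph surface: no transactions, no traversal callbacks,
/// just the four operations orchestration code needs.
trait AgentGraphStore {
    fn upsert_edge(&self, parent: &ThreadId, child: &ThreadId) -> AgentGraphStoreResult<()>;
    fn set_edge_status(
        &self,
        parent: &ThreadId,
        child: &ThreadId,
        status: ThreadSpawnEdgeStatus,
    ) -> AgentGraphStoreResult<()>;
    fn list_children(&self, parent: &ThreadId) -> AgentGraphStoreResult<Vec<ThreadId>>;
    /// Breadth-first, so direct children come before grandchildren.
    fn list_descendants(&self, root: &ThreadId) -> AgentGraphStoreResult<Vec<ThreadId>>;
}
```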
## Testing
- `cargo test -p codex-agent-graph-store`
The new unit tests cover local parity with the existing `StateRuntime`
graph methods, `Open`/`Closed` filtering, status updates, and stable
breadth-first descendant ordering.
## Why
`x-codex-turn-metadata` is sent as an HTTP/WebSocket header, but Codex
was serializing the metadata JSON with raw UTF-8 string contents. When a
workspace path contains non-ASCII characters, common HTTP stacks can
reject or corrupt that header before the request reaches the provider.
Fixes #17468. Also addresses the duplicate WebSocket report in #19581.
## What changed
- Added `codex_utils_string::to_ascii_json_string`, a shared helper that
serializes JSON normally while escaping non-ASCII string content as
`\uXXXX` (sketched after this list).
- Switched turn metadata header serialization, including merged
Responses API client metadata, to use the ASCII-safe JSON helper.
- Added coverage for non-ASCII workspace paths and non-ASCII client
metadata while preserving the same parsed JSON values.
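A sketch of the escaping idea. The real helper escapes during
serialization; this stand-in post-processes finished JSON, which is
equivalent for payload content because valid JSON can only carry
non-ASCII characters inside strings:

```rust
/// Rewrite every non-ASCII character as \uXXXX UTF-16 code units, so the
/// result is pure ASCII and safe to place in an HTTP/WebSocket header.
fn escape_non_ascii(json: &str) -> String {
    let mut out = String::with_capacity(json.len());
    for ch in json.chars() {
        if ch.is_ascii() {
            out.push(ch);
        } else {
            // Characters above U+FFFF become a surrogate pair, i.e. two units.
            let mut units = [0u16; 2];
            for unit in ch.encode_utf16(&mut units).iter() {
                out.push_str(&format!("\\u{unit:04X}"));
            }
        }
    }
    out
}
```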
## Verification
- `cargo test -p codex-utils-string`
- `cargo test -p codex-core turn_metadata`
- `just bazel-lock-check`
Summary:
- Add codex-thread-manager-sample, a one-shot binary that starts a
ThreadManager thread, submits a prompt, and prints the final assistant
output.
- Pass ThreadStore into ThreadManager::new and expose
thread_store_from_config for existing callsites.
- Build the sample Config directly with only --model and prompt inputs.
Verification:
- just fmt
- cargo check -p codex-thread-manager-sample -p codex-app-server -p
codex-mcp-server
- git diff --check
Tests: Not run per request.
## Why
This PR expands the migration path so Codex can detect and import MCP
server config, hooks, commands, and subagent configs in a Codex-native
shape.
## What changed
- Added a `codex-external-agent-migration` crate that owns conversion
logic for external-agent MCP servers, hooks, commands, and subagents.
- Extended the app-server external-agent config detection/import API
with migration item types for MCP server config, hooks, commands, and
subagents.
## Migration strategy
The migration is intentionally conservative: Codex only imports
external-agent config that can be represented safely in Codex today.
Unsupported or ambiguous config is skipped instead of being partially
translated into behavior that may not match the source system.
- **MCP servers**: import supported stdio and HTTP MCP server
definitions into `mcp_servers`. Disabled servers and servers filtered
out by source `enabledMcpjsonServers` / `disabledMcpjsonServers` are
skipped. Project-scoped MCP entries from `.claude.json` are included
when they match the repo path.
- **Hooks**: import only supported command hooks into
`.codex/hooks.json`. Unsupported hook features such as conditional
groups, async handlers, prompt/http hooks, or unknown fields are
skipped. Referenced hook scripts are copied into `.codex/hooks/`,
preserving any existing target scripts.
- **Commands**: import supported external commands as Codex skills under
`.agents/skills/source-command-*`. Commands that rely on source runtime
expansion such as `$ARGUMENTS`, `$1`, `@file` references, shell
interpolation, or colliding generated names are skipped.
- **Subagents**: import valid subagent Markdown files into
`.codex/agents/*.toml` when they have the minimum Codex agent fields.
Source model names are not migrated, so imported agents keep the user’s
Codex default model; compatible reasoning effort and sandbox mode are
migrated when present.
- **Skills and project guidance**: copy missing skill directories into
`.agents/skills` and migrate `CLAUDE.md` guidance into `AGENTS.md`,
rewriting source-agent terminology to Codex terminology where
appropriate.
- **Detection details**: detected migration items include lightweight
details for UI preview, such as MCP server names, hook event names,
generated command skill names, and subagent names. Import still
recomputes from disk instead of trusting details as the source of truth.
- Adds focused coverage for the new migration behavior and app-server
import flow.
## Verification
- `cargo test -p codex-external-agent-migration`
- `cargo test -p codex-hooks`
- `cargo test -p codex-app-server external_agent_config`
- `just bazel-lock-check`
## Why
Plugins can bundle lifecycle hooks, but Codex previously only discovered
hooks from user, project, and managed config layers. This adds the
plugin discovery and runtime plumbing needed for plugin-bundled hooks
while keeping execution behind the `plugin_hooks` feature flag.
## What
- Discovers plugin hook sources from each plugin's default
`hooks/hooks.json`.
- Supports `plugin.json` manifest `hooks` entries as either relative
paths or inline hook objects (illustrated after this list).
- Plumbs discovered plugin hook sources through plugin loading into the
hook runtime when `plugin_hooks` is enabled.
- Marks plugin-originated hook runs as `HookSource::Plugin`.
- Injects `PLUGIN_ROOT` and `CLAUDE_PLUGIN_ROOT` into plugin hook
command environments.
- Updates generated schemas and hook source metadata for the plugin hook
source.
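A sketch of that either-or parsing with an untagged `serde` enum; the
field names are hypothetical, not the actual `plugin.json` schema:

```rust
use serde::Deserialize;

#[derive(Deserialize)]
#[serde(untagged)]
enum ManifestHooksEntry {
    /// e.g. "hooks/hooks.json", resolved relative to the plugin root.
    RelativePath(String),
    /// e.g. { "event": "Stop", "command": ["./notify.sh"] } inline in the manifest.
    Inline(InlineHook),
}

// Hypothetical inline hook shape for illustration only.
#[derive(Deserialize)]
struct InlineHook {
    event: String,
    command: Vec<String>,
}
```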
## Stack
1. This PR - openai/codex#19705
2. openai/codex#19778
3. openai/codex#19840
4. openai/codex#19882
## Reviewer Notes
- Core logic is in `codex-rs/core-plugins/src/loader.rs` and
`codex-rs/hooks/src/engine/discovery.rs`
- Moved existing / adding new tests to
`codex-rs/core-plugins/src/loader_tests.rs` hence the large diff there
- Otherwise mostly plumbing and minor schema updates
### Core Changes
The `codex-rs/core` changes are limited to wiring plugin hook support
into existing core flows:
- `core/src/session/session.rs` conditionally pulls effective plugin
hook sources and plugin hook load warnings from `PluginsManager` when
`plugin_hooks` is enabled, then passes them into `HooksConfig`.
- `core/src/hook_runtime.rs` adds the `plugin` metric tag for
`HookSource::Plugin`.
- `core/config.schema.json` picks up the new `plugin_hooks` feature
flag, and `core/src/plugins/manager_tests.rs` updates fixtures for the
added plugin hook fields.
---------
Co-authored-by: Codex <noreply@openai.com>
## Why
The Linux managed-proxy bridge helpers are long-lived child processes in
the sandbox networking path. Before this change they stayed dumpable and
the network seccomp profile did not block cross-process memory syscalls,
so another same-user process could potentially inspect or modify bridge
memory instead of interacting only through the intended proxy interface.
## What changed
- reuse the shared `codex-process-hardening` helper to mark bridge
helper children non-dumpable before they begin serving (sketched after
this list)
- deny `process_vm_readv` and `process_vm_writev` in the existing
network seccomp filter
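A minimal sketch of the non-dumpable step on Linux via `libc`; the
shared `codex-process-hardening` helper presumably wraps something like
this:

```rust
/// PR_SET_DUMPABLE = 0 blocks core dumps and ptrace attach from ordinary
/// same-user peers, closing the cross-process inspection path.
fn mark_non_dumpable() -> std::io::Result<()> {
    let rc = unsafe { libc::prctl(libc::PR_SET_DUMPABLE, 0u64, 0u64, 0u64, 0u64) };
    if rc == 0 {
        Ok(())
    } else {
        Err(std::io::Error::last_os_error())
    }
}
```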
## Security impact
Bridge helpers are less exposed to same-user cross-process inspection or
memory writes, which reduces the chance that sandboxed code can
interfere with proxy support processes outside the intended IPC path.
## Verification
- `cargo test -p codex-process-hardening`
- `cargo test -p codex-linux-sandbox`
- attempted `cargo check -p codex-linux-sandbox --target
x86_64-unknown-linux-gnu`; blocked on missing `x86_64-linux-gnu-gcc` on
this macOS host
---------
Co-authored-by: Codex <noreply@openai.com>
## Why
The migration away from `SandboxPolicy` needs new configs to start from
permissions profiles instead of deriving profiles from legacy sandbox
modes. Existing users can have empty `config.toml` files, and we should
not rewrite user-owned config files that may live in shared
repositories.
This PR introduces built-in profile names so an empty config can resolve
to a canonical `PermissionProfile`, while explicit named `[permissions]`
profiles still behave predictably.
## What changed
- Adds built-in `default_permissions` profile names (mapped in the
sketch after this list):
- `:read-only` maps to `PermissionProfile::read_only()`.
- `:workspace` maps to the workspace-write profile, including
project-root metadata carveouts.
- `:danger-no-sandbox` maps to `PermissionProfile::Disabled`, preserving
the distinction between no sandbox and a broad managed sandbox.
- Reserves the `:` prefix for built-in profiles so user-defined
`[permissions]` profiles cannot collide with future built-ins.
- Allows `default_permissions` to reference a built-in profile without
requiring a `[permissions]` table.
- Makes an otherwise empty config choose a built-in profile by
trust/platform context: trusted or untrusted project roots use
`:workspace` when the platform supports that sandbox, while roots
without a trust decision use `:read-only`.
- Keeps legacy `sandbox_mode` configs on the legacy path, and still
rejects user-defined `[permissions]` profiles that omit
`default_permissions` so we do not silently guess among custom profiles.
- Preserves compatibility behavior for implicit defaults: bare
`network.enabled = true` allows runtime network without starting the
managed proxy, explicit profile proxy policy still starts the proxy, and
implicit workspace/add-dir roots keep legacy metadata carveouts.
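A sketch of the reserved-name mapping; the `PermissionProfile` stub and
constructor signatures here are illustrative:

```rust
// Stub shape for illustration; the real PermissionProfile is richer.
enum PermissionProfile {
    ReadOnly,
    WorkspaceWrite,
    Disabled,
}

impl PermissionProfile {
    fn read_only() -> Self {
        Self::ReadOnly
    }
    fn workspace_write() -> Self {
        Self::WorkspaceWrite
    }
}

fn builtin_profile(name: &str) -> Option<PermissionProfile> {
    match name {
        ":read-only" => Some(PermissionProfile::read_only()),
        ":workspace" => Some(PermissionProfile::workspace_write()),
        ":danger-no-sandbox" => Some(PermissionProfile::Disabled),
        // `:` is reserved for built-ins, so other `:` names are rejected
        // upstream; plain names resolve against user [permissions] profiles.
        _ => None,
    }
}
```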
## Verification
- `cargo test -p codex-core builtin --lib`
- `cargo test -p codex-core profile_network_proxy_config`
- `cargo test -p codex-core
implicit_builtin_workspace_profile_preserves_add_dir_metadata_carveouts`
- `cargo test -p codex-core
permissions_profiles_network_enabled_allows_runtime_network_without_proxy`
- `cargo test -p codex-core
permissions_profiles_proxy_policy_starts_managed_network_proxy`
## Documentation
Public Codex config docs should mention these built-in names when the
`[permissions]` config format is ready to document as stable.
---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/19900).
* #20041
* #20040
* #20037
* #20035
* #20034
* #20033
* #20032
* #20030
* #20028
* #20027
* #20026
* #20024
* #20021
* #20018
* #20016
* #20015
* #20013
* #20011
* #20010
* #20008
* __->__ #19900
## Summary
This extends external agent detection/import beyond config artifacts so
Codex can detect recent sessions files from the external agent home and
import them into Codex rollout history.
## What changed
- Added a focused `external_agent_sessions` module for:
- session discovery
- source-record parsing
- rollout construction
- import ledger tracking
- Wired session detection/import into the app-server external agent
config API.
- Added compaction handling so large imported sessions can be resumed
safely before the first follow-up turn.
## Testing
Added coverage for:
- recent-session detection
- custom-title handling
- recency filtering
- dedupe and re-detect-after-source-change behavior
- visible imported turn construction
- backward-compatible import payload deserialization
- end-to-end RPC import flow
- rejection of undetected session paths
- repeat-import behavior
- large-session compaction before first follow-up
Ran:
- `cargo test -p codex-app-server external_agent_config_import_ --test
all`
## Why
Memory startup runs in the background after an eligible turn, but it can
consume Codex backend quota at exactly the wrong time: when the user is
already near a rate-limit boundary. This PR adds a guard so the memory
pipeline backs off when the Codex rate-limit snapshot says the remaining
budget is too low.
## What Changed
- Added `memories.min_rate_limit_remaining_percent` with a default of
`25`, clamped to `0..=100`, and regenerated `core/config.schema.json`.
- Added `codex-rs/memories/write/src/guard.rs`, which fetches Codex
backend rate limits before memory startup and skips phase 1 / phase 2
when the Codex limit is reached or either tracked window is above the
configured usage ceiling (see the sketch after this list).
- Keeps startup best-effort: non-Codex auth or rate-limit fetch/client
failures preserve the existing memory startup behavior.
- Records a `codex.memory.startup` counter with
`status=skipped_rate_limit` when startup is skipped.
- Added config parsing/clamping coverage and guard unit tests.
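A sketch of the skip decision; the snapshot fields are illustrative
stand-ins for the Codex backend rate-limit payload:

```rust
struct RateLimitWindow {
    used_percent: f64,
}

struct RateLimitSnapshot {
    limit_reached: bool,
    primary: Option<RateLimitWindow>,
    secondary: Option<RateLimitWindow>,
}

/// `min_remaining_percent` comes from
/// memories.min_rate_limit_remaining_percent (default 25, clamped 0..=100).
fn should_skip_memory_startup(snapshot: &RateLimitSnapshot, min_remaining_percent: f64) -> bool {
    let usage_ceiling = 100.0 - min_remaining_percent;
    let over = |w: &Option<RateLimitWindow>| {
        w.as_ref().is_some_and(|w| w.used_percent > usage_ceiling)
    };
    snapshot.limit_reached || over(&snapshot.primary) || over(&snapshot.secondary)
}
```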
## Verification
- Added `codex-rs/memories/write/src/guard_tests.rs` for threshold,
primary/secondary window, and reached-limit behavior.
- Added config tests for TOML parsing and clamping.
Keep extracting memories out of core by moving the write trigger into
the app-server.
This is temporary; it should move to the client level as a follow-up.
This makes core fully independent from `codex-memories-write`.
---------
Co-authored-by: Codex <noreply@openai.com>
## Summary
- Remove `ghost_snapshot` / `GhostCommit` from the Responses API surface
and generated SDK/schema artifacts.
- Keep legacy config loading compatible, but make undo a no-op that
reports the feature is unavailable.
- Clean up core history, compaction, telemetry, rollout, and tests to
stop carrying ghost snapshot items.
## Testing
- Unit tests passed for `codex-protocol`, `codex-core` targeted undo and
compaction flows, `codex-rollout`, and `codex-app-server-protocol`.
- Regenerated config and app-server schemas plus Python SDK artifacts
and verified they match the checked-in outputs.
## Summary
- Extracted the shared filesystem types and `ExecutorFileSystem` trait
into a new `codex-file-system` crate
- Switched `codex-config` and `codex-git-utils` to depend on that crate
instead of `codex-exec-server`
- Kept `codex-exec-server` re-exporting the same API for existing
callers
## Testing
- Ran `cargo test -p codex-file-system`
- Ran `cargo test -p codex-git-utils`
- Ran `cargo test -p codex-config`
- Ran `cargo test -p codex-exec-server`
- Ran `just fix -p codex-file-system`, `just fix -p codex-git-utils`,
`just fix -p codex-config`, `just fix -p codex-exec-server`
- Ran `just fmt`
- Updated and verified the Bazel module lockfile
## Why
Config loading had become split across crates: `codex-config` owned the
config types and merge logic, while `codex-core` still owned the loader
that assembled the layer stack. This change consolidates that
responsibility in `codex-config`, so the crate that defines config
behavior also owns how configs are discovered and loaded.
To make that move possible without reintroducing the old dependency
cycle, the shell-environment policy types and helpers that
`codex-exec-server` needs now live in `codex-protocol` instead of
flowing through `codex-config`.
This also makes the migrated loader tests more deterministic on machines
that already have managed or system Codex config installed by letting
tests override the system config and requirements paths instead of
reading the host's `/etc/codex`.
## What Changed
- moved the config loader implementation from `codex-core` into
`codex-config::loader` and deleted the old `core::config_loader` module
instead of leaving a compatibility shim
- moved shell-environment policy types and helpers into
`codex-protocol`, then updated `codex-exec-server` and other downstream
crates to import them from their new home
- updated downstream callers to use loader/config APIs from
`codex-config`
- added test-only loader overrides for system config and requirements
paths so loader-focused tests do not depend on host-managed config state
- cleaned up now-unused dependency entries and platform-specific cfgs
that were surfaced by post-push CI
## Testing
- `cargo test -p codex-config`
- `cargo test -p codex-core config_loader_tests::`
- `cargo test -p codex-protocol -p codex-exec-server -p
codex-cloud-requirements -p codex-rmcp-client --lib`
- `cargo test --lib -p codex-app-server-client -p codex-exec`
- `cargo test --no-run --lib -p codex-app-server`
- `cargo test -p codex-linux-sandbox --lib`
- `cargo shear`
- `just bazel-lock-check`
## Notes
- I did not chase unrelated full-suite failures outside the migrated
loader surface.
- `cargo test -p codex-core --lib` still hits unrelated proxy-sensitive
failures on this machine, and Windows CI still shows unrelated
long-running/timeouting test noise outside the loader migration itself.
## Summary
This PR lets programmatic AgentIdentity users provide one token through
either stdin login or environment auth.
`codex login --with-agent-identity` reads an Agent Identity JWT from
stdin, validates that it has the required claims, and stores that token
as the `agent_identity` value in `auth.json`. The file format is
token-only; the decoded account and key fields are runtime state, not
hand-authored auth.json fields.
The Agent Identity JWT claim shape and decoder live in
`codex-agent-identity`; `codex-login` only owns env/storage precedence
and conversion into `CodexAuth::AgentIdentity`.
When env auth is enabled, `CODEX_AGENT_IDENTITY` can provide the same
JWT without writing auth state to disk. `CODEX_API_KEY` still wins if
both env vars are set.
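A sketch of that documented precedence, with `CodexAuth` as a stub; the
real conversion and storage precedence live in `codex-login`:

```rust
use std::env;

// Stub for illustration; the real CodexAuth enum has more variants.
enum CodexAuth {
    ApiKey(String),
    AgentIdentity(String),
}

/// CODEX_API_KEY wins when both env vars are set; neither path writes
/// auth state to disk.
fn auth_from_env() -> Option<CodexAuth> {
    if let Ok(key) = env::var("CODEX_API_KEY") {
        if !key.is_empty() {
            return Some(CodexAuth::ApiKey(key));
        }
    }
    env::var("CODEX_AGENT_IDENTITY")
        .ok()
        .filter(|jwt| !jwt.is_empty())
        .map(CodexAuth::AgentIdentity)
}
```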
Reference old stack: https://github.com/openai/codex/pull/17387/changes
Reference JWT/env stack: https://github.com/openai/codex/pull/18176
## Stack
1. https://github.com/openai/codex/pull/18757: full revert
2. https://github.com/openai/codex/pull/18871: isolated Agent Identity
crate
3. https://github.com/openai/codex/pull/18785: explicit AgentIdentity
auth mode and startup task allocation
4. https://github.com/openai/codex/pull/18811: migrate Codex backend
auth callsites through AuthProvider
5. This PR: accept AgentIdentity JWTs through login/env
## Testing
Tests: targeted login and Agent Identity crate tests, CLI checks, scoped
formatter/linter cleanup, and CI.
---------
Co-authored-by: Shijie Rao <shijie.rao@openai.com>
This removes the hidden `codex responses` CLI subcommand after
confirming no downstream callers rely on it. The change deletes the raw
Responses passthrough implementation, unregisters the subcommand, and
drops the now-unused CLI dependencies on `codex-api` and
`codex-model-provider`.