## Summary

`cargo test` entails both running standard Rust tests and doctests. It turns out that doctest discovery is fairly slow, and it's a cost you pay even for crates that don't include any doctests. This PR disables doctests with `doctest = false` for crates that lack any doctests. For the collection of crates below, this speeds up test execution by >4x.

E.g., before this PR:

```
Benchmark 1: cargo test -p codex-utils-absolute-path -p codex-utils-cache -p codex-utils-cli -p codex-utils-home-dir -p codex-utils-output-truncation -p codex-utils-path -p codex-utils-string -p codex-utils-template -p codex-utils-elapsed -p codex-utils-json-to-toml
  Time (mean ± σ):      1.849 s ±  4.455 s    [User: 0.752 s, System: 1.367 s]
  Range (min … max):    0.418 s … 14.529 s    10 runs
```

And after:

```
Benchmark 1: cargo test -p codex-utils-absolute-path -p codex-utils-cache -p codex-utils-cli -p codex-utils-home-dir -p codex-utils-output-truncation -p codex-utils-path -p codex-utils-string -p codex-utils-template -p codex-utils-elapsed -p codex-utils-json-to-toml
  Time (mean ± σ):     428.6 ms ±   6.9 ms    [User: 187.7 ms, System: 219.7 ms]
  Range (min … max):   418.0 ms … 436.8 ms    10 runs
```

For a single crate, with a >2x speedup, before:

```
Benchmark 1: cargo test -p codex-utils-string
  Time (mean ± σ):     491.1 ms ±   9.0 ms    [User: 229.8 ms, System: 234.9 ms]
  Range (min … max):   480.9 ms … 512.0 ms    10 runs
```

And after:

```
Benchmark 1: cargo test -p codex-utils-string
  Time (mean ± σ):     213.9 ms ±   4.3 ms    [User: 112.8 ms, System: 84.0 ms]
  Range (min … max):   206.8 ms … 221.0 ms    13 runs
```

Co-authored-by: Codex <noreply@openai.com>
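For reference, the mechanism is the standard per-target `doctest` flag in each affected crate's manifest. A minimal sketch of the relevant fragment, using one of the crates listed above:

```toml
# Cargo.toml of a crate that has no doctests, e.g. codex-utils-string.
[lib]
# Skip doctest collection for this library target; unit and integration
# tests still run under `cargo test`.
doctest = false
```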
# codex-app-server-client
Shared in-process app-server client used by conversational CLI surfaces:

- `codex-exec`
- `codex-tui`
## Purpose
This crate centralizes startup and lifecycle management for an in-process
`codex-app-server` runtime, so CLI clients do not need to duplicate:
- app-server bootstrap and initialize handshake
- in-memory request/event transport wiring
- lifecycle orchestration around caller-provided startup identity
- graceful shutdown behavior
## Startup identity
Callers pass both the app-server `SessionSource` and the `initialize`
`client_info.name` explicitly when starting the facade.
That keeps thread metadata (for example in `thread/list` and `thread/read`)
aligned with the originating runtime without baking TUI/exec-specific policy
into the shared client layer.
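Purely as an illustration of that contract, here is a minimal sketch of a caller supplying its identity up front. `SessionSource` mirrors the app-server type named above but its variants here are invented, and `StartupIdentity` / `start_in_process_client` are hypothetical stand-ins, not the crate's actual API:

```rust
// Hypothetical sketch only: stand-in types for the startup-identity contract.
#[derive(Debug, Clone)]
enum SessionSource {
    Exec,
    Tui,
}

#[derive(Debug, Clone)]
struct StartupIdentity {
    /// Recorded on threads so `thread/list` / `thread/read` can attribute them.
    session_source: SessionSource,
    /// Sent as `client_info.name` in the `initialize` handshake.
    client_name: String,
}

/// Stand-in for the facade's startup entry point: the caller supplies its
/// identity explicitly instead of the shared layer hard-coding surface policy.
fn start_in_process_client(identity: StartupIdentity) {
    println!(
        "starting in-process app-server client for {:?} as {}",
        identity.session_source, identity.client_name
    );
}

fn main() {
    // Each surface identifies itself explicitly at startup.
    start_in_process_client(StartupIdentity {
        session_source: SessionSource::Exec,
        client_name: "codex-exec".to_string(),
    });
    start_in_process_client(StartupIdentity {
        session_source: SessionSource::Tui,
        client_name: "codex-tui".to_string(),
    });
}
```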
## Transport model
The in-process path uses typed channels:

- client -> server: `ClientRequest` / `ClientNotification`
- server -> client: `InProcessServerEvent` / `ServerRequest` / `ServerNotification` / `LegacyNotification`
JSON serialization is still used at external transport boundaries (stdio/websocket), but the in-process hot path is typed.
Typed requests still receive app-server responses through the JSON-RPC result envelope internally. That is intentional: the in-process path is meant to preserve app-server semantics while removing the process boundary, not to introduce a second response contract.
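To make the shape concrete, here is a toy sketch of the two-direction typed-channel layout, using std's bounded channels and invented message enums. The real crate uses its own typed protocol enums (`ClientRequest`, `InProcessServerEvent`, and friends) and async channel primitives rather than these stand-ins:

```rust
use std::sync::mpsc::sync_channel;

// Invented message shapes standing in for the crate's typed protocol types.
#[derive(Debug)]
enum ClientMessage {
    Request { id: u64, method: String },
    Notification { method: String },
}

#[derive(Debug)]
enum ServerEvent {
    Response { id: u64, result: String },
    Notification { method: String },
}

fn main() {
    // One bounded, typed channel per direction; no JSON on this hot path.
    let (client_tx, client_rx) = sync_channel::<ClientMessage>(64);
    let (server_tx, server_rx) = sync_channel::<ServerEvent>(64);

    // "Server" side: answer requests, acknowledge notifications with an event.
    let server = std::thread::spawn(move || {
        while let Ok(msg) = client_rx.recv() {
            let event = match msg {
                ClientMessage::Request { id, method } => ServerEvent::Response {
                    id,
                    result: format!("handled {method}"),
                },
                ClientMessage::Notification { method } => ServerEvent::Notification {
                    method: format!("ack {method}"),
                },
            };
            if server_tx.send(event).is_err() {
                break;
            }
        }
    });

    client_tx
        .send(ClientMessage::Request { id: 1, method: "example/request".into() })
        .unwrap();
    client_tx
        .send(ClientMessage::Notification { method: "example/notify".into() })
        .unwrap();
    drop(client_tx); // Close the client side so the server loop exits.

    while let Ok(event) = server_rx.recv() {
        println!("{event:?}");
    }
    server.join().unwrap();
}
```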
## Bootstrap behavior
The client facade starts an already-initialized in-process runtime, but thread bootstrap still follows normal app-server flow:
- caller sends `thread/start` or `thread/resume`
- app-server returns the immediate typed response
- richer session metadata may arrive later as a `SessionConfigured` legacy event
Surfaces such as TUI and exec may therefore need a short bootstrap phase where they reconcile startup response data with later events.
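As an illustration of that reconciliation (not the actual surface code), a hypothetical bootstrap state might merge the immediate response with the later legacy event like this; all type and field names below are invented:

```rust
// Hypothetical sketch: the immediate `thread/start` response supplies a thread
// id, while a later `SessionConfigured`-style legacy event carries richer
// session metadata; the surface reconciles both into one bootstrap state.
#[derive(Debug, Default)]
struct ThreadBootstrap {
    thread_id: Option<String>,
    model: Option<String>,
}

#[derive(Debug)]
enum IncomingEvent {
    /// Immediate typed response to `thread/start` / `thread/resume`.
    ThreadStarted { thread_id: String },
    /// Later legacy event with richer session metadata.
    SessionConfigured { model: String },
}

impl ThreadBootstrap {
    fn apply(&mut self, event: IncomingEvent) {
        match event {
            IncomingEvent::ThreadStarted { thread_id } => self.thread_id = Some(thread_id),
            IncomingEvent::SessionConfigured { model } => self.model = Some(model),
        }
    }

    /// The bootstrap phase ends once both sources have been reconciled.
    fn is_complete(&self) -> bool {
        self.thread_id.is_some() && self.model.is_some()
    }
}

fn main() {
    let mut bootstrap = ThreadBootstrap::default();
    bootstrap.apply(IncomingEvent::ThreadStarted { thread_id: "thread-1".into() });
    assert!(!bootstrap.is_complete());
    bootstrap.apply(IncomingEvent::SessionConfigured { model: "example-model".into() });
    assert!(bootstrap.is_complete());
    println!("{bootstrap:?}");
}
```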
## Backpressure and shutdown
- Queues are bounded and use `DEFAULT_IN_PROCESS_CHANNEL_CAPACITY` by default.
- Full queues return explicit overload behavior instead of unbounded growth.
- `shutdown()` performs a bounded graceful shutdown and then aborts if the timeout is exceeded.
If the client falls behind on event consumption, the worker emits
`InProcessServerEvent::Lagged` and may reject pending server requests so
approval flows do not hang indefinitely behind a saturated queue.
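A toy sketch of how a consumer might treat lag and a bounded shutdown wait, with invented event variants (only `Lagged` mirrors a name from this crate) and std channels standing in for the real async transport:

```rust
use std::sync::mpsc::{sync_channel, RecvTimeoutError};
use std::time::Duration;

// Invented event type: `Lagged` mirrors `InProcessServerEvent::Lagged` in
// spirit (consumer fell behind, events were dropped); everything else here is
// a stand-in for the real transport and worker.
#[derive(Debug)]
enum Event {
    Notification(String),
    Lagged { dropped: usize },
    ShutdownComplete,
}

fn main() {
    let (tx, rx) = sync_channel::<Event>(8);

    // Pretend worker: emits an event, signals lag, then confirms shutdown.
    let worker = std::thread::spawn(move || {
        tx.send(Event::Notification("example event".into())).unwrap();
        tx.send(Event::Lagged { dropped: 42 }).unwrap();
        tx.send(Event::ShutdownComplete).unwrap();
    });

    // Consumer: treat `Lagged` as "resynchronise, don't assume every event was
    // seen", and bound the wait for shutdown so it cannot hang forever.
    loop {
        match rx.recv_timeout(Duration::from_secs(2)) {
            Ok(Event::Lagged { dropped }) => {
                eprintln!("fell behind; {dropped} events dropped, refreshing state");
            }
            Ok(Event::ShutdownComplete) => break,
            Ok(other) => println!("event: {other:?}"),
            // Timeout or channel gone: stop waiting gracefully and bail out.
            Err(RecvTimeoutError::Timeout | RecvTimeoutError::Disconnected) => break,
        }
    }
    worker.join().unwrap();
}
```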