Mirror of https://github.com/openai/codex.git, synced 2026-05-16 01:02:48 +00:00.
This is the first mechanical cleanup in a stack whose higher-level goal is to enable Clippy coverage for async guards held across `.await` points. The follow-up commits enable Clippy's [`await_holding_lock`](https://rust-lang.github.io/rust-clippy/master/index.html#await_holding_lock) lint and the configurable [`await_holding_invalid_type`](https://rust-lang.github.io/rust-clippy/master/index.html#await_holding_invalid_type) lint for Tokio guard types.

This PR handles the cases where the underlying issue is not protected shared mutable state, but a `tokio::sync::mpsc::UnboundedReceiver` wrapped in `Arc<Mutex<_>>` so cloned owners can call `recv().await`. Using a mutex for that shape forces the receiver lock guard to live across `.await`. Switching these paths to `async-channel` gives us cloneable `Receiver`s, so each owner can hold a receiver handle directly and await messages without an async mutex guard.

## What changed

- In `codex-rs/code-mode`, replace the turn-message `mpsc::UnboundedSender`/`UnboundedReceiver` plus `Arc<Mutex<Receiver>>` with `async_channel::Sender`/`Receiver`.
- In `codex-rs/codex-api`, replace the realtime websocket event receiver with an `async_channel::Receiver`, allowing `RealtimeWebsocketEvents` clones to receive without locking.
- Add `async-channel` as a dependency for `codex-code-mode` and `codex-api`, and update `Cargo.lock`.

## Verification

- The split stack was verified at the final lint-enabling head with `just clippy`.
# codex-api
Typed clients for Codex/OpenAI APIs built on top of the generic transport in `codex-client`.
- Hosts the request/response models and request builders for Responses and Compact APIs.
- Owns provider configuration (base URLs, headers, query params), auth header injection, retry tuning, and stream idle settings.
- Parses SSE streams into `ResponseEvent`/`ResponseStream`, including rate-limit snapshots and API-specific error mapping.
- Serves as the wire-level layer consumed by `codex-core`; higher layers handle auth refresh and business logic.
## Core interface
The public interface of this crate is intentionally small and uniform:
- Responses endpoint
  - Input:
    - `ResponsesApiRequest` for the request body (`model`, `instructions`, `input`, `tools`, `parallel_tool_calls`, reasoning/text controls).
    - `ResponsesOptions` for transport/header concerns (`conversation_id`, `session_source`, `extra_headers`, `compression`, `turn_state`).
  - Output: a `ResponseStream` of `ResponseEvent` (both re-exported from `common`).
- Compaction endpoint
  - Input: `CompactionInput<'a>` (re-exported as `codex_api::CompactionInput`):
    - `model: &str`.
    - `input: &[ResponseItem]` – history to compact.
    - `instructions: &str` – fully-resolved compaction instructions.
  - Output: `Vec<ResponseItem>`. `CompactClient::compact_input(&CompactionInput, extra_headers)` wraps the JSON encoding and retry/telemetry wiring.
- Memory summarize endpoint
  - Input: `MemorySummarizeInput` (re-exported as `codex_api::MemorySummarizeInput`):
    - `model: String`.
    - `raw_memories: Vec<RawMemory>` (serialized as `traces` for wire compatibility). `RawMemory` includes `id`, `metadata.source_path`, and normalized `items`.
    - `reasoning: Option<Reasoning>`.
  - Output: `Vec<MemorySummarizeOutput>`. `MemoriesClient::summarize_input(&MemorySummarizeInput, extra_headers)` wraps JSON encoding and retry/telemetry wiring.
All HTTP details (URLs, headers, retry/backoff policies, SSE framing) are encapsulated in `codex-api` and `codex-client`. Callers construct prompts/inputs using protocol types and work with typed streams of `ResponseEvent` or compacted `ResponseItem` values.