Mirror of https://github.com/openai/codex.git, synced 2026-02-01 22:47:52 +00:00
Adds a new feature, `enable_request_compression`, that compresses requests to the codex-backend with zstd. Currently only the codex-backend supports this, so even when the feature is enabled, compression is applied only for OpenAI providers using `chatgpt::auth`.

Also adds a new info log line for evaluating the compression ratio and the overhead of compressing before each request. You can enable it with `RUST_LOG=$RUST_LOG,codex_client::transport=info`:

```
2026-01-06T00:09:48.272113Z INFO codex_client::transport: Compressed request body with zstd pre_compression_bytes=28914 post_compression_bytes=11485 compression_duration_ms=0
```
codex-client
Generic transport layer that wraps HTTP requests, retries, and streaming primitives without any Codex/OpenAI awareness.
- Defines `HttpTransport` and a default `ReqwestTransport`, plus thin `Request`/`Response` types.
- Provides retry utilities (`RetryPolicy`, `RetryOn`, `run_with_retry`, `backoff`) that callers plug into for unary and streaming calls.
- Supplies the `sse_stream` helper to turn byte streams into raw SSE `data:` frames, with idle timeouts and surfaced stream errors.
- Consumed by higher-level crates like `codex-api`; it stays neutral on endpoints, headers, and API-specific error shapes.
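To make the retry bullet concrete, here is a minimal, self-contained sketch of the retry-with-exponential-backoff pattern that utilities like `run_with_retry` and `backoff` provide; the function name and signature here are illustrative assumptions, not the crate's actual API:

```rust
use std::thread::sleep;
use std::time::Duration;

/// Illustrative retry helper (the real codex-client signatures may
/// differ): retry a fallible call up to `max_attempts` times, sleeping
/// `base_delay * 2^attempt` between failures.
fn run_with_retry<T, E>(
    max_attempts: u32,
    base_delay: Duration,
    mut call: impl FnMut() -> Result<T, E>,
) -> Result<T, E> {
    let mut attempt = 0;
    loop {
        match call() {
            Ok(value) => return Ok(value),
            // Out of attempts: surface the last error to the caller.
            Err(err) if attempt + 1 >= max_attempts => return Err(err),
            Err(_) => {
                // Exponential backoff; production policies typically
                // add jitter and cap the delay.
                sleep(base_delay * 2u32.pow(attempt));
                attempt += 1;
            }
        }
    }
}

fn main() {
    let mut calls = 0;
    // Fails twice, then succeeds on the third attempt.
    let result: Result<&str, &str> =
        run_with_retry(3, Duration::from_millis(1), || {
            calls += 1;
            if calls < 3 { Err("transient") } else { Ok("ok") }
        });
    assert_eq!(result, Ok("ok"));
    assert_eq!(calls, 3);
}
```

A caller-supplied policy (which errors to retry, how many attempts, what backoff curve) keeps the transport layer neutral, matching the crate's stated design of staying unaware of API-specific error shapes.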