Mirror of https://github.com/openai/codex.git, synced 2026-05-14 16:22:51 +00:00
## Why

`/responses/compact` should preserve the request-affinity fields that apply to the active auth mode. ChatGPT-auth compact requests need the effective `service_tier`, and compact requests for every auth mode need the stable `prompt_cache_key`, so compaction does not quietly lose routing or cache behavior that normal sampling already has.

This follows the request-parity direction from #20719, but keeps the net change focused on the compact payload fields needed here.

## What changed

- Add `service_tier` and `prompt_cache_key` to the compact endpoint input payload.
- Build the remote compact payload from the existing responses request builder output so `Fast` still maps to `priority` when compact sends a service tier.
- Pass the turn service tier into remote compaction, but only include it in compact payloads for ChatGPT-backed auth.
- Keep `prompt_cache_key` on compact payloads for all auth modes.
- Add request-body diff snapshot coverage in `core/tests/suite/compact_remote.rs` for:
  - API-key auth reusing `prompt_cache_key` while omitting `service_tier` even when `Fast` is configured.
  - ChatGPT auth reusing both `service_tier` and `prompt_cache_key`.
- Drive the snapshot coverage through varied turns: plain text, multi-part text, tool-call continuation, image+text input, local-shell continuation, and final-turn reasoning output.

## Verification

- Added insta snapshots for compact request-body parity against the last normal `/responses` request after five varied turns.
- Not run locally per repo guidance; relying on GitHub CI for test execution.

---------

Co-authored-by: Codex <noreply@openai.com>
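The gating described above can be sketched as follows. This is a minimal illustration, not the actual codex implementation: the names `AuthMode`, `CompactPayload`, and `build_compact_payload` are hypothetical, and the real payload builder works on the full responses request builder output rather than bare strings.

```rust
// Hypothetical sketch of the compact-payload field gating:
// `prompt_cache_key` is kept for all auth modes, while `service_tier`
// is only included for ChatGPT-backed auth, with `Fast` mapped to
// the wire value `priority` as in normal sampling requests.

#[derive(Clone, Copy, PartialEq, Debug)]
enum AuthMode {
    ApiKey,
    ChatGpt,
}

#[derive(Debug, PartialEq)]
struct CompactPayload {
    service_tier: Option<String>,
    prompt_cache_key: String,
}

fn build_compact_payload(
    auth: AuthMode,
    effective_service_tier: Option<&str>,
    prompt_cache_key: &str,
) -> CompactPayload {
    // Reuse the same tier mapping the responses request builder applies.
    let mapped_tier = effective_service_tier.map(|t| {
        if t == "Fast" {
            "priority".to_string()
        } else {
            t.to_string()
        }
    });
    CompactPayload {
        // Only ChatGPT-backed auth carries a service tier on compact.
        service_tier: match auth {
            AuthMode::ChatGpt => mapped_tier,
            AuthMode::ApiKey => None,
        },
        // The stable cache key is preserved for every auth mode.
        prompt_cache_key: prompt_cache_key.to_string(),
    }
}

fn main() {
    // API-key auth: cache key reused, tier omitted even when Fast is set.
    let api = build_compact_payload(AuthMode::ApiKey, Some("Fast"), "cache-123");
    assert_eq!(api.service_tier, None);
    assert_eq!(api.prompt_cache_key, "cache-123");

    // ChatGPT auth: both fields reused, with Fast mapped to priority.
    let chatgpt = build_compact_payload(AuthMode::ChatGpt, Some("Fast"), "cache-123");
    assert_eq!(chatgpt.service_tier, Some("priority".to_string()));
    assert_eq!(chatgpt.prompt_cache_key, "cache-123");
}
```

This mirrors what the snapshot tests assert: an API-key compact request body diffs clean against the last normal request except for the absent `service_tier`, while a ChatGPT compact request reuses both fields.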