Mirror of https://github.com/openai/codex.git, synced 2026-05-14 08:12:36 +00:00
## Why

Large hook outputs can enter model-visible context through hook-specific paths such as `additionalContext` and `Stop` continuation prompts. Without a dedicated cap, a single hook can inject a large blob directly into conversation history instead of leaving a bounded preview for the model while preserving the full text elsewhere.

## What

- spill hook text once it exceeds a fixed `2_500`-token budget, preserving the full output on disk and leaving a head/tail preview plus the saved path in context
- add shared hook-output spilling under `CODEX_HOME/hook_outputs/<thread_id>/<uuid>.txt`
- apply the cap to `additionalContext`, `feedback_message`, and `Stop` continuation fragments
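A minimal sketch of the spill-and-preview behavior described above. The function name, the whitespace-based token approximation, and the preview sizes are illustrative assumptions, not the actual codex implementation:

```rust
use std::fs;
use std::io;
use std::path::{Path, PathBuf};

// Illustrative constants; the real budget is 2_500 tokens, the preview
// sizes here are assumptions for the sketch.
const TOKEN_BUDGET: usize = 2_500;
const PREVIEW_TOKENS: usize = 100;

/// Hypothetical helper: if `text` fits the budget, return it unchanged;
/// otherwise write the full output to `<dir>/<id>.txt` and return a
/// head/tail preview that includes the saved path.
/// Tokens are approximated as whitespace-separated words for illustration.
fn spill_hook_output(text: &str, dir: &Path, id: &str) -> io::Result<String> {
    let tokens: Vec<&str> = text.split_whitespace().collect();
    if tokens.len() <= TOKEN_BUDGET {
        // Small outputs stay in model-visible context as-is.
        return Ok(text.to_string());
    }
    // Preserve the full output on disk before truncating the in-context copy.
    fs::create_dir_all(dir)?;
    let path: PathBuf = dir.join(format!("{id}.txt"));
    fs::write(&path, text)?;
    let head = tokens[..PREVIEW_TOKENS].join(" ");
    let tail = tokens[tokens.len() - PREVIEW_TOKENS..].join(" ");
    Ok(format!(
        "{head}\n[... {} tokens spilled to {} ...]\n{tail}",
        tokens.len() - 2 * PREVIEW_TOKENS,
        path.display()
    ))
}
```

Because the same helper serves `additionalContext`, `feedback_message`, and `Stop` continuation fragments, each of those paths gets an identical bounded preview rather than hook-specific truncation rules.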