Mirror of https://github.com/openai/codex.git, synced 2026-05-02 10:26:45 +00:00
Surface reasoning tokens in exec JSON usage (#19308)
## Summary

Fixes #19022. `codex exec --json` currently emits `turn.completed.usage` with input, cached input, and output token counts, but drops the reasoning-token split that Codex already receives through thread token usage updates. Programmatic consumers that rely on the JSON stream, especially ephemeral runs that do not write rollout files, need this field to accurately display reasoning-model usage.

This PR adds `reasoning_output_tokens` to the public exec JSON `Usage` payload and maps it from the existing `ThreadTokenUsageUpdated` total token usage data.

## Verification

- Added coverage to `event_processor_with_json_output::token_usage_update_is_emitted_on_turn_completion` so `turn.completed.usage.reasoning_output_tokens` is asserted.
- Updated SDK expectations for `run()` and `runStreamed()` so TypeScript consumers see the new usage field.
- Ran `cargo test -p codex-exec`.
- Ran `pnpm --filter ./sdk/typescript run build`.
- Ran `pnpm --filter ./sdk/typescript run lint`.
- Ran `pnpm --filter ./sdk/typescript exec jest --runInBand --testTimeout=30000`.
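For illustration, a minimal TypeScript sketch of a consumer reading the new field from one line of the `codex exec --json` stream. The field names match the test fixture in this PR; the `{"type": ..., "usage": ...}` event envelope and the `reasoningShare` helper are assumptions for the sketch, not part of the change itself.

```typescript
// Usage payload as extended by this PR: the three existing counters
// plus the new reasoning-token split.
interface Usage {
  input_tokens: number;
  cached_input_tokens: number;
  output_tokens: number;
  reasoning_output_tokens: number; // new in this PR
}

// Hypothetical helper: fraction of output tokens spent on reasoning.
function reasoningShare(usage: Usage): number {
  return usage.output_tokens === 0
    ? 0
    : usage.reasoning_output_tokens / usage.output_tokens;
}

// One line of the JSON stream, using the values from the test fixture.
const line =
  '{"type":"turn.completed","usage":{"input_tokens":10,"cached_input_tokens":3,"output_tokens":29,"reasoning_output_tokens":7}}';
const event = JSON.parse(line) as { type: string; usage: Usage };

console.log(event.usage.reasoning_output_tokens); // 7
```

Before this change, the `usage` object on `turn.completed` carried only the first three counters, so a consumer like the one above had no way to split reasoning tokens out of `output_tokens`.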
```diff
@@ -1232,6 +1232,7 @@ fn token_usage_update_is_emitted_on_turn_completion() {
                     input_tokens: 10,
                     cached_input_tokens: 3,
                     output_tokens: 29,
+                    reasoning_output_tokens: 7,
                 },
             })],
             status: CodexStatus::InitiateShutdown,
```