Mirror of https://github.com/openai/codex.git (synced 2026-04-24 06:35:50 +00:00)
Raise image byte estimate for compaction token accounting (#12717)
Increase `IMAGE_BYTES_ESTIMATE` from 340 bytes to 7,373 bytes so that the existing 4-bytes/token heuristic yields an image estimate of ~1,844 tokens instead of ~85. This makes auto-compaction more conservative for image-heavy transcripts and avoids underestimating context usage, which could otherwise cause compaction to fail when too little free context remains. The new value was chosen to match the image resolution cap used by our latest models. Follow-up to [#12419](https://github.com/openai/codex/pull/12419). Refs [#11845](https://github.com/openai/codex/issues/11845).
```diff
@@ -421,9 +421,9 @@ fn estimate_item_token_count(item: &ResponseItem) -> i64 {
 /// Approximate model-visible byte cost for one image input.
 ///
-/// The estimator later converts bytes to tokens using a 4-bytes/token heuristic,
-/// so 340 bytes is approximately 85 tokens.
-const IMAGE_BYTES_ESTIMATE: i64 = 340;
+/// The estimator later converts bytes to tokens using a 4-bytes/token heuristic
+/// with ceiling division, so 7,373 bytes maps to approximately 1,844 tokens.
+const IMAGE_BYTES_ESTIMATE: i64 = 7373;

 pub(crate) fn estimate_response_item_model_visible_bytes(item: &ResponseItem) -> i64 {
     match item {
```
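The arithmetic behind the new constant can be checked directly. The sketch below assumes a hypothetical `bytes_to_tokens` helper; the real conversion happens inside `estimate_item_token_count`, but any ceiling division by 4 reproduces the numbers quoted in the commit message.

```rust
/// New per-image byte estimate introduced by this commit.
const IMAGE_BYTES_ESTIMATE: i64 = 7373;

/// Hypothetical helper: the 4-bytes/token heuristic with ceiling division.
fn bytes_to_tokens(bytes: i64) -> i64 {
    (bytes + 3) / 4
}

fn main() {
    // Old estimate: 340 bytes rounds up to 85 tokens.
    assert_eq!(bytes_to_tokens(340), 85);
    // New estimate: 7,373 bytes rounds up to 1,844 tokens.
    assert_eq!(bytes_to_tokens(IMAGE_BYTES_ESTIMATE), 1844);
    println!("{}", bytes_to_tokens(IMAGE_BYTES_ESTIMATE));
}
```

Ceiling division matters at the margin: plain integer division would give 1,843 tokens for 7,373 bytes, slightly understating the estimate the commit intends to make conservative.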