mirror of
https://github.com/openai/codex.git
synced 2026-05-02 18:37:01 +00:00
Add the utility to truncate by tokens (#6746)
This PR puts truncation by tokens on the path. It will initially be used by unified exec and the context manager (responsible mainly for MCP calls).

- Expose a new config option, `calls_output_max_tokens`.
- Use tokens as the main budget unit, but truncate according to the model family by introducing `TruncationPolicy`.
- Introduce `truncate_text` as a router that dispatches truncation based on the mode.

In follow-up PRs:

- Remove `truncate_with_line_bytes_budget`.
- Add the ability for the model to override the token budget.
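The shape of the change can be sketched as follows. This is a minimal, hypothetical illustration, not the PR's actual code: `TruncationPolicy` and `truncate_text` are names from the commit message, but their definitions here, the byte-based variant, and the `APPROX_BYTES_PER_TOKEN` heuristic are assumptions made for the sake of a runnable example.

```rust
/// Hypothetical sketch of a truncation router keyed on a budget mode.
#[derive(Clone, Copy)]
enum TruncationPolicy {
    /// Budget expressed in tokens (the new main unit).
    Tokens(usize),
    /// Legacy budget expressed in bytes.
    Bytes(usize),
}

/// Assumed heuristic: roughly 4 bytes per token when no tokenizer is at hand.
const APPROX_BYTES_PER_TOKEN: usize = 4;

/// Route truncation based on the policy, cutting only on UTF-8 char boundaries.
fn truncate_text(text: &str, policy: TruncationPolicy) -> String {
    let max_bytes = match policy {
        TruncationPolicy::Tokens(tokens) => tokens * APPROX_BYTES_PER_TOKEN,
        TruncationPolicy::Bytes(bytes) => bytes,
    };
    if text.len() <= max_bytes {
        return text.to_string();
    }
    // Back up until we land on a char boundary so a multi-byte
    // sequence is never split in half.
    let mut end = max_bytes;
    while !text.is_char_boundary(end) {
        end -= 1;
    }
    format!("{}[truncated]", &text[..end])
}

fn main() {
    // Short input fits the budget and passes through untouched.
    println!("{}", truncate_text("short", TruncationPolicy::Tokens(10)));
    // Long input gets cut to the approximate byte budget.
    let long = "a".repeat(100);
    println!("{}", truncate_text(&long, TruncationPolicy::Tokens(5)));
}
```

A real implementation would swap the byte heuristic for an actual tokenizer count per model family; the router shape (one entry point, per-policy dispatch) is the point being illustrated.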
@@ -1,7 +1,5 @@
mod history;
mod normalize;

pub(crate) use crate::truncate::MODEL_FORMAT_MAX_BYTES;
pub(crate) use crate::truncate::MODEL_FORMAT_MAX_LINES;
pub(crate) use crate::truncate::format_output_for_model_body;
pub(crate) use crate::truncate::truncate_with_line_bytes_budget;
pub(crate) use history::ContextManager;