Mirror of https://github.com/openai/codex.git (synced 2026-05-04 19:36:45 +00:00)
perf(tui2): cache transcript view rendering (#8693)
The transcript viewport draws every frame. Ratatui's Line::render_ref does grapheme segmentation and span layout, so repeated redraws can burn CPU during streaming even when the visible transcript hasn't changed. Introduce TranscriptViewCache to reduce per-frame work:

- WrappedTranscriptCache memoizes flattened+wrapped transcript lines per width, appends incrementally as new cells arrive, and rebuilds on width change, truncation (backtrack), or transcript replacement.
- TranscriptRasterCache caches rasterized rows (Vec<Cell>) per line index and user-row styling; redraws copy cells instead of rerendering spans.

The caches are width-scoped and store only base transcript content; selection highlighting and copy affordances are applied after drawing. User rows include the row-wide base style in the cached raster.

Refactor transcript_render to expose append_wrapped_transcript_cell for incremental building, and add a test that incremental append matches the full build. Add docs/tui2/performance-testing.md as a playbook for macOS sample profiles and hotspot greps. Expand transcript_view_cache tests to cover rebuild conditions, raster equivalence vs direct rendering, user-row caching, and eviction.

Test: cargo test -p codex-tui2
@@ -81,6 +81,7 @@ mod transcript_copy_ui;
 mod transcript_multi_click;
 mod transcript_render;
 mod transcript_selection;
+mod transcript_view_cache;
 mod tui;
 mod ui_consts;
 pub mod update_action;