Mirror of https://github.com/openai/codex.git, synced 2026-05-16 17:23:57 +00:00.
## Why

Users have shared that the TUI can feel too visually flat because themes mostly show up in code syntax highlighting. The configurable statusline is a natural place to make the active theme more visible, while still letting users keep the existing monotone statusline if they prefer it.

## What Changed

- Added a statusline styling helper that builds the rendered statusline from `(StatusLineItem, text)` segments, preserving item identity while keeping the plain-text output unchanged.
- Derived foreground accent colors from the active syntax theme by looking up TextMate scopes through the existing syntax highlighter, with conservative ANSI fallbacks when a scope does not provide a foreground.
- Tuned theme-derived colors to keep the accents visible without making the statusline feel overly bright.
- Added `[tui].status_line_use_colors`, defaulting to `true`, plus a separate `/statusline` toggle so users can enable or disable theme-derived statusline colors from the setup UI.
- Updated the live statusline and the `/statusline` preview to use the same styled builder, while keeping terminal-title preview text plain.
- Kept statusline separators and active-agent add-ons subdued while removing blanket dimming from the whole passive statusline.

## Verification

- `cargo test -p codex-tui status_line`
- `cargo test -p codex-tui theme_picker`
- `cargo test -p codex-tui foreground_style_for_scopes`
- `cargo test -p codex-tui`
- `cargo test -p codex-config`
- `cargo test -p codex-core status_line_use_colors`
- `cargo insta pending-snapshots --manifest-path tui/Cargo.toml`

## Visual

<img width="369" height="23" alt="Screenshot 2026-04-30 at 6 16 08 PM" src="https://github.com/user-attachments/assets/11d03efb-8e4f-4450-8f4d-00a9659ef4cd" /> <img width="385" height="23" alt="Screenshot 2026-04-30 at 6 16 02 PM" src="https://github.com/user-attachments/assets/a3d89f36-bdc1-42e8-8e84-61350e3999e2" />
# ThreadManager Sample

Small one-shot binary that starts a Codex thread with `ThreadManager` from
`codex-core-api`, submits a single user turn, and prints the final assistant
message.

```shell
cargo run -p codex-thread-manager-sample -- "Say hello"
```

Use `--model` to override the configured default model:

```shell
cargo run -p codex-thread-manager-sample -- --model gpt-5.2 "Say hello"
```

The prompt can also be piped through stdin:

```shell
printf 'Say hello\n' | cargo run -p codex-thread-manager-sample
```