## Problem

When a turn streamed a preamble line before any tool activity, `ChatWidget` hid the status row while committing streamed lines and did not restore it until a later event (commonly `ExecCommandBegin`). During that idle gap, the UI looked finished even though the turn was still active.

## Mental model

The bottom status row and the transcript stream are separate progress affordances:

- The transcript stream shows committed output.
- The status row (spinner/shimmer + header) shows liveness of an active turn.

While stream output is actively committing, hiding the status row is acceptable to avoid redundant visual noise. Once stream controllers go idle, an active turn must restore the status row immediately so liveness remains visible across preamble-to-tool gaps.

## Non-goals

- No changes to streaming chunking policy or pacing.
- No changes to final completion behavior (status still hides when the task actually ends).
- No refactor of status lifecycle ownership between `ChatWidget` and `BottomPane`.

## Tradeoffs

- We keep the existing behavior of hiding the status row during active stream commits.
- We add explicit restoration at the idle boundary when the task is still running.
- This introduces one extra status update on idle transitions, which is small overhead but makes liveness semantics consistent.

## Architecture

`run_commit_tick_with_scope` in `chatwidget.rs` now documents and enforces a two-phase contract:

1. For each committed streamed cell, hide the status row and append transcript output.
2. If controllers are present and all idle, restore the status row iff the task is still running, preserving the current header.

This keeps status ownership in `ChatWidget` while relying on `BottomPane` helpers:

- `hide_status_indicator()` during active stream commits
- `ensure_status_indicator()` + `set_status_header(current_status_header)` at the stream-idle boundary

Documentation pass additions:

- Clarified the function-level contract and lifecycle intent in `run_commit_tick_with_scope`.
- Added an explicit regression snapshot test comment describing the failing sequence.

## Observability

Signals that the fix is present:

- In the preamble-idle state, rendered output still includes `• Working (… esc to interrupt)`.
- New snapshot: `codex_tui__chatwidget__tests__preamble_keeps_working_status.snap`.

Debug path for future regressions:

- Start at `run_commit_tick_with_scope` for hide/restore transitions.
- Verify `bottom_pane.is_task_running()` at the idle transition.
- Confirm `current_status_header` continuity when the status row is recreated.
- Use the new snapshot and targeted test sequence to reproduce deterministic preamble-idle behavior.

## Tests

- Updated regression assertion:
  - `streaming_final_answer_keeps_task_running_state` now expects the status widget to remain present while the turn is running.
- Renamed/updated behavioral regression:
  - `preamble_keeps_status_indicator_visible_until_exec_begin`.
- Added snapshot regression coverage:
  - `preamble_keeps_working_status_snapshot`.
  - Snapshot file: `tui/src/chatwidget/snapshots/codex_tui__chatwidget__tests__preamble_keeps_working_status.snap`.

Commands run:

- `just fmt`
- `cargo test -p codex-tui preamble_keeps_status_indicator_visible_until_exec_begin`
- `cargo test -p codex-tui preamble_keeps_working_status_snapshot`

## Risks / Inconsistencies

- Status visibility policy is still split across multiple event paths (commit tick, turn complete, exec begin), so future regressions can reintroduce ordering gaps.
- Restoration depends on `is_task_running()` correctness; if task lifecycle flags drift, status behavior will drift too.
- The snapshot proves rendered state, not animation cadence; cadence still relies on frame scheduling behavior elsewhere.
```shell
npm i -g @openai/codex
```

or

```shell
brew install --cask codex
```
Codex CLI is a coding agent from OpenAI that runs locally on your computer.
If you want Codex in your code editor (VS Code, Cursor, Windsurf), install the Codex extension in your IDE.
If you are looking for the cloud-based agent from OpenAI, Codex Web, go to chatgpt.com/codex.
## Quickstart

### Installing and running Codex CLI
Install globally with your preferred package manager:

```shell
# Install using npm
npm install -g @openai/codex

# Install using Homebrew
brew install --cask codex
```

Then run `codex` to get started.
You can also go to the latest GitHub Release and download the appropriate binary for your platform.
Each GitHub Release contains many executables, but in practice, you likely want one of these:
- macOS
  - Apple Silicon/arm64: `codex-aarch64-apple-darwin.tar.gz`
  - x86_64 (older Mac hardware): `codex-x86_64-apple-darwin.tar.gz`
- Linux
  - x86_64: `codex-x86_64-unknown-linux-musl.tar.gz`
  - arm64: `codex-aarch64-unknown-linux-musl.tar.gz`
Each archive contains a single entry with the platform baked into the name (e.g., `codex-x86_64-unknown-linux-musl`), so you likely want to rename it to `codex` after extracting it.
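For illustration, the extract-and-rename steps might look like the following. A stand-in archive is created locally here so the sketch is self-contained and runnable; with a real release asset downloaded from GitHub, you would start at the `tar -xzf` line.

```shell
set -e
workdir="$(mktemp -d)"
cd "$workdir"

# Stand-in for the single entry inside a real release archive
# (here a tiny script instead of the actual codex binary).
printf '#!/bin/sh\necho codex-ok\n' > codex-x86_64-unknown-linux-musl
chmod +x codex-x86_64-unknown-linux-musl
tar -czf codex-x86_64-unknown-linux-musl.tar.gz codex-x86_64-unknown-linux-musl
rm codex-x86_64-unknown-linux-musl

# The steps you would actually run after downloading the archive:
tar -xzf codex-x86_64-unknown-linux-musl.tar.gz
mv codex-x86_64-unknown-linux-musl codex
./codex
```

Adjust the platform triple in the file names to match the asset you downloaded; you may also want to move the renamed `codex` binary somewhere on your `PATH`.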
### Using Codex with your ChatGPT plan

Run `codex` and select **Sign in with ChatGPT**. We recommend signing in to your ChatGPT account to use Codex as part of your Plus, Pro, Team, Edu, or Enterprise plan. Learn more about what's included in your ChatGPT plan.
You can also use Codex with an API key, but this requires additional setup.
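As a minimal sketch of that setup, assuming Codex reads the standard `OPENAI_API_KEY` environment variable (check the project docs for the exact configuration your version expects):

```shell
# Assumption: the CLI picks up the standard OpenAI API key variable.
# Replace the placeholder with a real key from your OpenAI account.
export OPENAI_API_KEY="sk-your-api-key"

# Then launch the CLI as usual:
# codex
```

Keys set this way last only for the current shell session; put the `export` in your shell profile if you want it to persist.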
## License

This repository is licensed under the Apache-2.0 License.
