## Summary

- submit `Enter` steers immediately while a turn is already running instead of routing them through `queued_user_messages`
- keep those submitted steers visible in the footer as `pending_steers` until core records them as a user message or aborts the turn
- reconcile pending steers on `ItemCompleted(UserMessage)`, not `RawResponseItem`
- emit user-message item lifecycle for leftover pending input at task finish, then remove the TUI `TurnComplete` fallback
- keep `queued_user_messages` for actual queued drafts, rendered below pending steers

## Problem

While the assistant was generating, pressing `Enter` could send the input into `queued_user_messages`. That queue only drains after the turn ends, so ordinary steers behaved like queued drafts instead of landing at the next core sampling boundary.

The first version of this fix also used `RawResponseItem` to decide when a steer had landed. Review feedback was that this is the wrong abstraction for client behavior.

There was also a late edge case in core: if pending steer input was accepted after the final sampling decision but before `TurnComplete`, core would record that user message into history at task finish without emitting `ItemStarted(UserMessage)` / `ItemCompleted(UserMessage)`. The TUI had a fallback to paper over that gap locally.
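The core-side gap can be sketched roughly as follows. This is a minimal illustration of the intended behavior, not the actual codex-core implementation; the event and type names here are hypothetical:

```rust
// Hypothetical sketch: when the task finishes with leftover pending user
// input, record it into history AND emit the same user-message item
// lifecycle other paths emit, before TurnComplete. Names are illustrative.

#[derive(Debug, PartialEq)]
enum Event {
    ItemStarted(String),   // user message text
    ItemCompleted(String), // user message text
    TurnComplete,
}

struct Task {
    pending_user_input: Vec<String>,
    history: Vec<String>,
}

impl Task {
    fn on_task_finished(&mut self, events: &mut Vec<Event>) {
        // Previously: pending input was silently recorded into history,
        // leaving clients without ItemStarted/ItemCompleted for it.
        for input in self.pending_user_input.drain(..) {
            events.push(Event::ItemStarted(input.clone()));
            self.history.push(input.clone());
            events.push(Event::ItemCompleted(input));
        }
        events.push(Event::TurnComplete);
    }
}
```

With the lifecycle emitted here, a client can rely on `ItemCompleted(UserMessage)` alone and drop any `TurnComplete`-time fallback.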
## Approach

- `Enter` during an active turn now submits a normal `Op::UserTurn` immediately
- TUI keeps a local pending-steer preview instead of rendering that user message into history immediately
- when core records the steer as `ItemCompleted(UserMessage)`, TUI matches and removes the corresponding pending preview, then renders the committed user message
- core now emits the same user-message lifecycle when `on_task_finished(...)` drains leftover pending user input, before `TurnComplete`
- with that lifecycle gap closed in core, TUI no longer needs to flush pending steers into history on `TurnComplete`
- if the turn is interrupted, pending steers and queued drafts are both restored into the composer, with pending steers first

## Notes

- `Tab` still uses the real queued-message path
- `queued_user_messages` and `pending_steers` are separate state with separate semantics
- the pending-steer matching key is built directly from `UserInput`
- this removes the new TUI dependency on `RawResponseItem`

## Validation

- `just fmt`
- `cargo test -p codex-core task_finish_emits_turn_item_lifecycle_for_leftover_pending_user_input -- --nocapture`
- `cargo test -p codex-tui`
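The TUI-side bookkeeping described in the approach can be sketched as below. Type and field names are illustrative assumptions, not the real codex-tui API; in particular, the matching key here is simplified to the raw input text:

```rust
// Hypothetical sketch of the TUI pending-steer state and its reconciliation
// against committed user messages. Names are illustrative, not codex-tui's.

#[derive(Debug, Clone, PartialEq)]
struct SteerKey(String); // built directly from the submitted UserInput

#[derive(Default)]
struct ChatState {
    pending_steers: Vec<SteerKey>,     // mid-turn steers shown in the footer
    queued_user_messages: Vec<String>, // real queued drafts (the Tab path)
}

impl ChatState {
    // Enter during an active turn: the Op::UserTurn is submitted immediately;
    // keep a local preview until core commits the user message.
    fn on_steer_submitted(&mut self, input: &str) {
        self.pending_steers.push(SteerKey(input.to_string()));
    }

    // ItemCompleted(UserMessage): match and drop the corresponding preview,
    // then render the committed message normally. Returns whether a preview
    // was matched.
    fn on_user_message_committed(&mut self, text: &str) -> bool {
        if let Some(pos) = self.pending_steers.iter().position(|k| k.0 == text) {
            self.pending_steers.remove(pos);
            true
        } else {
            false
        }
    }

    // Turn interrupted: restore pending steers first, then queued drafts,
    // back into the composer.
    fn restore_to_composer(&mut self) -> Vec<String> {
        let mut restored: Vec<String> =
            self.pending_steers.drain(..).map(|k| k.0).collect();
        restored.extend(self.queued_user_messages.drain(..));
        restored
    }
}
```

Keeping `pending_steers` and `queued_user_messages` as separate fields preserves the separate semantics called out in the notes: one is an in-flight preview, the other is a draft queue.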
```shell
npm i -g @openai/codex
```

or

```shell
brew install --cask codex
```
Codex CLI is a coding agent from OpenAI that runs locally on your computer.
If you want Codex in your code editor (VS Code, Cursor, Windsurf), install in your IDE.
If you want the desktop app experience, run `codex app` or visit the Codex App page.
If you are looking for the cloud-based agent from OpenAI, Codex Web, go to chatgpt.com/codex.
## Quickstart

### Installing and running Codex CLI
Install globally with your preferred package manager:
```shell
# Install using npm
npm install -g @openai/codex

# Install using Homebrew
brew install --cask codex
```
Then simply run `codex` to get started.
You can also go to the latest GitHub Release and download the appropriate binary for your platform.
Each GitHub Release contains many executables, but in practice, you likely want one of these:
- macOS
  - Apple Silicon/arm64: `codex-aarch64-apple-darwin.tar.gz`
  - x86_64 (older Mac hardware): `codex-x86_64-apple-darwin.tar.gz`
- Linux
  - x86_64: `codex-x86_64-unknown-linux-musl.tar.gz`
  - arm64: `codex-aarch64-unknown-linux-musl.tar.gz`
Each archive contains a single entry with the platform baked into the name (e.g., `codex-x86_64-unknown-linux-musl`), so you likely want to rename it to `codex` after extracting it.
### Using Codex with your ChatGPT plan
Run `codex` and select **Sign in with ChatGPT**. We recommend signing into your ChatGPT account to use Codex as part of your Plus, Pro, Team, Edu, or Enterprise plan. Learn more about what's included in your ChatGPT plan.
You can also use Codex with an API key, but this requires additional setup.
## Docs
This repository is licensed under the Apache-2.0 License.
