## TL;DR

When running codex with `-c features.tui_app_server=true`, we see corruption when streaming large amounts of data. This PR marks additional event types as _critical_ by making them _must-deliver_.

## Problem

When the TUI consumer falls behind the app-server event stream, the bounded `mpsc` channel fills up and the forwarding layer drops events via `try_send`. Previously only `TurnCompleted` was marked as must-deliver. Streamed assistant text (`AgentMessageDelta`) and the authoritative final item (`ItemCompleted`) were treated as droppable, the same as ephemeral command output deltas. Because the TUI renders markdown incrementally from these deltas, dropping any of them produces permanently corrupted or incomplete paragraphs that persist for the rest of the session.

## Mental model

The app-server event stream has two tiers of importance:

1. **Lossless (transcript + terminal):** Events that form the authoritative record of what the assistant said, or that signal turn lifecycle transitions. Losing any of these corrupts the visible output or leaves surfaces waiting forever. These are: `AgentMessageDelta`, `PlanDelta`, `ReasoningSummaryTextDelta`, `ReasoningTextDelta`, `ItemCompleted`, and `TurnCompleted`.
2. **Best-effort (everything else):** Ephemeral status events like `CommandExecutionOutputDelta` and progress notifications. Dropping these under load causes cosmetic gaps but no permanent corruption.

The forwarding layer uses `try_send` for best-effort events (dropping on backpressure) and blocking `send().await` for lossless events (applying backpressure to the producer until the consumer catches up).

## Non-goals

- Eliminating backpressure entirely. The bounded queue is intentional; this change only widens the set of events that survive it.
- Changing the event protocol or adding new notification types.
- Addressing root causes of consumer slowness (e.g. TUI render cost).
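The two-tier forwarding decision can be sketched with a bounded channel. This is a simplified model using `std::sync::mpsc` rather than the actual async tokio channel, and the `Event` enum, `requires_delivery`, and `forward` helpers below are illustrative stand-ins, not the real codex types:

```rust
use std::sync::mpsc::{SyncSender, TrySendError};

// Illustrative stand-in for the app-server notification types.
#[derive(Debug, Clone, PartialEq)]
enum Event {
    AgentMessageDelta(String),           // lossless: transcript content
    CommandExecutionOutputDelta(String), // best-effort: ephemeral status
    TurnCompleted,                       // lossless: lifecycle transition
}

// Mirrors the shared classification: transcript and lifecycle
// events must be delivered; everything else may be dropped.
fn requires_delivery(ev: &Event) -> bool {
    matches!(ev, Event::AgentMessageDelta(_) | Event::TurnCompleted)
}

// Forwarding decision: a blocking send for lossless events (applying
// backpressure to the producer), try_send for best-effort events
// (dropping them when the queue is full and counting the skip).
fn forward(tx: &SyncSender<Event>, ev: Event, skipped: &mut u64) {
    if requires_delivery(&ev) {
        let _ = tx.send(ev); // waits until the consumer catches up
    } else if let Err(TrySendError::Full(_)) = tx.try_send(ev) {
        *skipped += 1; // best-effort event dropped under backpressure
    }
}
```

Under backpressure (a full capacity-1 channel), a `CommandExecutionOutputDelta` is counted and dropped, while an `AgentMessageDelta` blocks the producer until the consumer drains the queue and then arrives intact.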
## Tradeoffs

Blocking on transcript events means a slow consumer can now stall the producer for the duration of those events. This is acceptable because: (a) the alternative is permanently broken output, which is worse; (b) the consumer already had to keep up with `TurnCompleted` blocking sends; and (c) transcript events arrive at model-output speed, not burst speed, so sustained saturation is unlikely in practice.

## Architecture

Two parallel changes, one per transport:

- **In-process path** (`lib.rs`): The inline forwarding logic was extracted into `forward_in_process_event`, a standalone async function that encapsulates the lag-marker / must-deliver / try-send decision tree. The worker loop now delegates to it. A new `server_notification_requires_delivery` function (shared `pub(crate)`) centralizes the notification classification.
- **Remote path** (`remote.rs`): The local `event_requires_delivery` now delegates to the same shared `server_notification_requires_delivery`, keeping both transports in sync.

## Observability

No new metrics or log lines. The existing `warn!` on event drops continues to fire for best-effort events. Lossless events that block will not produce a log line (they simply wait).

## Tests

- `event_requires_delivery_marks_transcript_and_terminal_events`: unit test confirming the expanded classification covers `AgentMessageDelta`, `ItemCompleted`, and `TurnCompleted`, and excludes `CommandExecutionOutputDelta` and `Lagged`.
- `forward_in_process_event_preserves_transcript_notifications_under_backpressure`: integration-style test that fills a capacity-1 channel, verifies a best-effort event is dropped (skipped count increments), then sends lossless transcript events and confirms they all arrive in order with the correct lag marker preceding them.
- `remote_backpressure_preserves_transcript_notifications`: end-to-end test over a real websocket that verifies the remote transport preserves transcript events under the same backpressure scenario.
- `event_requires_delivery_marks_transcript_and_disconnect_events` (remote): unit test confirming the remote-side classification covers transcript events and `Disconnected`. --------- Co-authored-by: Eric Traut <etraut@openai.com>
```shell
npm i -g @openai/codex
```

or

```shell
brew install --cask codex
```
Codex CLI is a coding agent from OpenAI that runs locally on your computer.
If you want Codex in your code editor (VS Code, Cursor, Windsurf), install in your IDE.
If you want the desktop app experience, run `codex app` or visit the Codex App page.
If you are looking for the cloud-based agent from OpenAI, Codex Web, go to chatgpt.com/codex.
## Quickstart

### Installing and running Codex CLI
Install globally with your preferred package manager:

```shell
# Install using npm
npm install -g @openai/codex

# Install using Homebrew
brew install --cask codex
```

Then simply run `codex` to get started.
You can also go to the latest GitHub Release and download the appropriate binary for your platform.
Each GitHub Release contains many executables, but in practice, you likely want one of these:
- macOS
  - Apple Silicon/arm64: `codex-aarch64-apple-darwin.tar.gz`
  - x86_64 (older Mac hardware): `codex-x86_64-apple-darwin.tar.gz`
- Linux
  - x86_64: `codex-x86_64-unknown-linux-musl.tar.gz`
  - arm64: `codex-aarch64-unknown-linux-musl.tar.gz`
Each archive contains a single entry with the platform baked into the name (e.g., `codex-x86_64-unknown-linux-musl`), so you likely want to rename it to `codex` after extracting it.
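For example, the extract-and-rename step looks like the following for the x86_64 Linux archive (this sketch fabricates a placeholder archive so the commands are runnable offline; in practice the `.tar.gz` comes from the GitHub Release page):

```shell
# Stand-in for the downloaded release archive (in practice: download it
# from the latest GitHub Release instead of creating it here).
cd "$(mktemp -d)"
touch codex-x86_64-unknown-linux-musl
tar -czf codex-x86_64-unknown-linux-musl.tar.gz codex-x86_64-unknown-linux-musl
rm codex-x86_64-unknown-linux-musl

# Extract, then rename the platform-suffixed binary to plain `codex`.
tar -xzf codex-x86_64-unknown-linux-musl.tar.gz
mv codex-x86_64-unknown-linux-musl codex
chmod +x codex
ls codex   # → codex
```

You may also want to move the renamed binary somewhere on your `PATH`.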
### Using Codex with your ChatGPT plan
Run `codex` and select **Sign in with ChatGPT**. We recommend signing into your ChatGPT account to use Codex as part of your Plus, Pro, Team, Edu, or Enterprise plan. Learn more about what's included in your ChatGPT plan.
You can also use Codex with an API key, but this requires additional setup.
## Docs

## License

This repository is licensed under the Apache-2.0 License.
