Commit Graph

6 Commits

Eric Traut
36712d8546 Install rustls provider for remote websocket client (#17288)
Addresses #17283

Problem: `codex --remote wss://...` could panic because
app-server-client did not install rustls' process-level crypto provider
before opening TLS websocket connections.

Solution: Add the existing rustls provider utility dependency and
install it before the remote websocket connect.
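The fix follows the install-once pattern: a process-level crypto provider must be registered exactly once, before the first TLS connection. A std-only sketch of that guard (the actual fix installs rustls' crypto provider inside the closure; `connect_remote` and the counter below are illustrative):

```rust
use std::sync::atomic::{AtomicU32, Ordering};
use std::sync::OnceLock;

static PROVIDER_INSTALLED: OnceLock<()> = OnceLock::new();
static INSTALL_COUNT: AtomicU32 = AtomicU32::new(0);

// Install the process-level crypto provider exactly once. In the real fix
// the closure would perform the rustls provider install; the counter here
// just proves the guard fires a single time across repeated connects.
fn ensure_crypto_provider() {
    PROVIDER_INSTALLED.get_or_init(|| {
        INSTALL_COUNT.fetch_add(1, Ordering::SeqCst);
    });
}

fn connect_remote(_url: &str) {
    ensure_crypto_provider(); // must run before opening any TLS websocket
    // ... open the websocket here ...
}

fn main() {
    connect_remote("wss://example.invalid");
    connect_remote("wss://example.invalid");
    assert_eq!(INSTALL_COUNT.load(Ordering::SeqCst), 1);
}
```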
2026-04-09 20:29:12 -07:00
Eric Traut
d65deec617 Remove the legacy TUI split (#15922)
This is part 1 of 2 PRs that will delete the `tui` /
`tui_app_server` split. This part simply deletes the existing `tui`
directory and marks the `tui_app_server` feature flag as removed. I left
the feature flag in place for now so its presence doesn't result in an
error; it is simply ignored.

Part 2 will rename the `tui_app_server` directory to `tui`. I split this
into two parts to reduce visible code churn.
2026-03-27 22:56:44 +00:00
Eric Traut
1ff39b6fa8 Wire remote app-server auth through the client (#14853)
For app-server websocket auth, support the two server-side mechanisms
from PR #14847:

- `--ws-auth capability-token --ws-token-file /abs/path`
- `--ws-auth signed-bearer-token --ws-shared-secret-file /abs/path`
  with optional `--ws-issuer`, `--ws-audience`, and
  `--ws-max-clock-skew-seconds`

On the client side, add interactive remote support via:

- `--remote ws://host:port` or `--remote wss://host:port`
- `--remote-auth-token-env <ENV_VAR>`

Codex reads the bearer token from the named environment variable and
sends it as `Authorization: Bearer <token>` during the websocket
handshake. Remote auth tokens are only allowed for `wss://` URLs or
loopback `ws://` URLs.
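The scheme/loopback restriction can be sketched with plain string checks (the helper names are hypothetical and the real client's URL parsing may differ):

```rust
use std::env;

// Decide whether a bearer token may be attached to this websocket URL.
// Tokens over plaintext ws:// are only safe when the peer is loopback.
fn token_allowed(url: &str) -> bool {
    if url.starts_with("wss://") {
        return true;
    }
    if let Some(rest) = url.strip_prefix("ws://") {
        let host = rest.split('/').next().unwrap_or("");
        // Strip a trailing :port if present.
        let host = host.rsplit_once(':').map(|(h, _)| h).unwrap_or(host);
        return host == "localhost" || host == "127.0.0.1" || host == "[::1]";
    }
    false
}

// Read the bearer token from the environment variable named by
// --remote-auth-token-env (hypothetical wrapper).
fn bearer_header(env_var: &str) -> Option<String> {
    env::var(env_var).ok().map(|t| format!("Bearer {t}"))
}

fn main() {
    assert!(token_allowed("wss://host:1234"));
    assert!(token_allowed("ws://127.0.0.1:8080/path"));
    assert!(!token_allowed("ws://example.com:8080"));
    assert!(bearer_header("SOME_UNSET_VAR_FOR_DEMO").is_none());
}
```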

Testing:
- manually tested both auth methods, confirming both successful
connections and rejections
2026-03-25 22:17:03 -06:00
Felipe Coury
e9996ec62a fix(tui_app_server): preserve transcript events under backpressure (#15759)
## TL;DR

When running codex with `-c features.tui_app_server=true`, we see
corrupted output when streaming large amounts of data. This PR fixes
that by marking additional event types as _must-deliver_ so they survive
backpressure.

## Problem

When the TUI consumer falls behind the app-server event stream, the
bounded `mpsc` channel fills up and the forwarding layer drops events
via `try_send`. Previously only `TurnCompleted` was marked as
must-deliver. Streamed assistant text (`AgentMessageDelta`) and the
authoritative final item (`ItemCompleted`) were treated as droppable —
the same as ephemeral command output deltas. Because the TUI renders
markdown incrementally from these deltas, dropping any of them produces
permanently corrupted or incomplete paragraphs that persist for the rest
of the session.

## Mental model

The app-server event stream has two tiers of importance:

1. **Lossless (transcript + terminal):** Events that form the
authoritative record of what the assistant said or that signal turn
lifecycle transitions. Losing any of these corrupts the visible output
or leaves surfaces waiting forever. These are: `AgentMessageDelta`,
`PlanDelta`, `ReasoningSummaryTextDelta`, `ReasoningTextDelta`,
`ItemCompleted`, and `TurnCompleted`.

2. **Best-effort (everything else):** Ephemeral status events like
`CommandExecutionOutputDelta` and progress notifications. Dropping these
under load causes cosmetic gaps but no permanent corruption.
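Under this model, the split reduces to a pure classification function. A simplified sketch (the enum is a stand-in; the real crate's `server_notification_requires_delivery` covers more variants):

```rust
// Simplified stand-in for the app-server notification type.
#[derive(Debug)]
enum Notification {
    AgentMessageDelta,
    PlanDelta,
    ReasoningSummaryTextDelta,
    ReasoningTextDelta,
    ItemCompleted,
    TurnCompleted,
    CommandExecutionOutputDelta,
}

// Lossless events must survive backpressure; everything else is best-effort.
fn requires_delivery(n: &Notification) -> bool {
    matches!(
        n,
        Notification::AgentMessageDelta
            | Notification::PlanDelta
            | Notification::ReasoningSummaryTextDelta
            | Notification::ReasoningTextDelta
            | Notification::ItemCompleted
            | Notification::TurnCompleted
    )
}

fn main() {
    assert!(requires_delivery(&Notification::AgentMessageDelta));
    assert!(requires_delivery(&Notification::TurnCompleted));
    assert!(!requires_delivery(&Notification::CommandExecutionOutputDelta));
}
```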

The forwarding layer uses `try_send` for best-effort events (dropping on
backpressure) and blocking `send().await` for lossless events (applying
back-pressure to the producer until the consumer catches up).
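The two dispatch paths can be sketched with std's bounded channel, where `try_send` drops on a full queue and `send` blocks until the consumer drains it (illustrative only; the real code uses an async `mpsc` with `send().await`):

```rust
use std::sync::mpsc::{sync_channel, TrySendError};
use std::thread;
use std::time::Duration;

// Fill a capacity-1 channel, show a best-effort event being dropped via
// try_send, then show a lossless event blocking in send() until a slow
// consumer drains the queue. Returns (was_dropped, events_seen_in_order).
fn demo_backpressure() -> (bool, Vec<&'static str>) {
    let (tx, rx) = sync_channel::<&str>(1); // bounded queue, capacity 1
    tx.send("fill").unwrap(); // queue is now full

    // Best-effort event: dropped under backpressure instead of blocking.
    let dropped = matches!(tx.try_send("output-delta"), Err(TrySendError::Full(_)));

    // Slow consumer drains the channel after a delay.
    let consumer = thread::spawn(move || {
        thread::sleep(Duration::from_millis(50));
        let mut seen = Vec::new();
        while let Ok(ev) = rx.recv() {
            seen.push(ev);
        }
        seen
    });

    // Lossless event: send() blocks until the consumer catches up.
    tx.send("agent-message-delta").unwrap();
    drop(tx); // close the channel so the consumer's recv loop ends
    (dropped, consumer.join().unwrap())
}

fn main() {
    let (dropped, seen) = demo_backpressure();
    assert!(dropped);
    assert_eq!(seen, vec!["fill", "agent-message-delta"]);
}
```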

## Non-goals

- Eliminating backpressure entirely. The bounded queue is intentional;
this change only widens the set of events that survive it.
- Changing the event protocol or adding new notification types.
- Addressing root causes of consumer slowness (e.g. TUI render cost).

## Tradeoffs

Blocking on transcript events means a slow consumer can now stall the
producer for the duration of those events. This is acceptable because:
(a) the alternative is permanently broken output, which is worse; (b)
the consumer already had to keep up with `TurnCompleted` blocking sends;
and (c) transcript events arrive at model-output speed, not burst speed,
so sustained saturation is unlikely in practice.

## Architecture

Two parallel changes, one per transport:

- **In-process path** (`lib.rs`): The inline forwarding logic was
extracted into `forward_in_process_event`, a standalone async function
that encapsulates the lag-marker / must-deliver / try-send decision
tree. The worker loop now delegates to it. A new
`server_notification_requires_delivery` function (shared `pub(crate)`)
centralizes the notification classification.

- **Remote path** (`remote.rs`): The local `event_requires_delivery` now
delegates to the same shared `server_notification_requires_delivery`,
keeping both transports in sync.

## Observability

No new metrics or log lines. The existing `warn!` on event drops
continues to fire for best-effort events. Lossless events that block
will not produce a log line (they simply wait).

## Tests

- `event_requires_delivery_marks_transcript_and_terminal_events`: unit
test confirming the expanded classification covers `AgentMessageDelta`,
`ItemCompleted`, `TurnCompleted`, and excludes
`CommandExecutionOutputDelta` and `Lagged`.
- `forward_in_process_event_preserves_transcript_notifications_under_backpressure`:
integration-style test that fills a capacity-1 channel, verifies a
best-effort event is dropped (skipped count increments), then sends
lossless transcript events and confirms they all arrive in order with
the correct lag marker preceding them.
- `remote_backpressure_preserves_transcript_notifications`: end-to-end
test over a real websocket that verifies the remote transport preserves
transcript events under the same backpressure scenario.
- `event_requires_delivery_marks_transcript_and_disconnect_events`
(remote): unit test confirming the remote-side classification covers
transcript events and `Disconnected`.

---------

Co-authored-by: Eric Traut <etraut@openai.com>
2026-03-25 13:50:39 -06:00
Eric Traut
45f68843b8 Finish moving codex exec to app-server (#15424)
This PR completes the conversion of non-interactive `codex exec` to use
the app server rather than using core events and methods directly.

### Summary
- move `codex-exec` off exec-owned `AuthManager` and `ThreadManager`
state
- route exec bootstrap, resume, and auth refresh through existing
app-server paths
- replace legacy `codex/event/*` decoding in exec with typed app-server
notification handling
- update human and JSONL exec output adapters to translate existing
app-server notifications only
- clean up the "app server client" layer by eliminating support for
legacy notifications, which is no longer needed
- remove exposure of `authManager` and `threadManager` from "app server
client" layer
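The shift from string-keyed `codex/event/*` decoding to typed notification handling can be illustrated as converting a method name to an enum once at the boundary, then matching on the enum downstream (method names and variants below are hypothetical):

```rust
// Hypothetical subset of typed app-server notifications.
#[derive(Debug, PartialEq)]
enum ServerNotification {
    AgentMessageDelta(String),
    TurnCompleted,
}

// Decode a (method, payload) pair into a typed notification once, at the
// boundary; downstream code matches on the enum instead of raw strings.
fn decode(method: &str, payload: &str) -> Option<ServerNotification> {
    match method {
        "agentMessageDelta" => {
            Some(ServerNotification::AgentMessageDelta(payload.to_string()))
        }
        "turnCompleted" => Some(ServerNotification::TurnCompleted),
        _ => None, // unknown or legacy notifications are dropped here
    }
}

fn main() {
    assert_eq!(
        decode("agentMessageDelta", "hi"),
        Some(ServerNotification::AgentMessageDelta("hi".into()))
    );
    assert_eq!(decode("turnCompleted", ""), Some(ServerNotification::TurnCompleted));
    assert_eq!(decode("codex/event/legacy", "{}"), None);
}
```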

### Testing
- `exec` has pretty extensive unit and integration tests already, and
these all pass
- In addition, I asked Codex to put together a comprehensive manual set
of tests to cover all of the `codex exec` functionality (including
command-line options), and it successfully generated and ran these tests
2026-03-24 08:51:32 -06:00
Eric Traut
db89b73a9c Move TUI on top of app server (parallel code) (#14717)
This PR replicates the `tui` code directory into a temporary parallel
`tui_app_server` directory. It also adds a new feature flag,
`tui_app_server`, to select between the two TUI implementations.

Once the new app-server-based TUI is stabilized, we'll delete the old
`tui` directory and feature flag.
2026-03-16 10:49:19 -06:00