Commit Graph

519 Commits

Author SHA1 Message Date
Eric Traut
e4edafe1a8 Log ChatGPT user ID for feedback tags (#13901)
There are some bug investigations that currently require us to ask users
for their user ID even though they've already uploaded logs and session
details via `/feedback`. This frustrates users and increases the time
for diagnosis.

This PR includes the ChatGPT user ID in the metadata uploaded for
`/feedback` (both the TUI and app-server).
2026-03-10 09:57:41 -06:00
Matthew Zeng
566e4cee4b [apps] Fix apps enablement condition. (#14011)
- [x] Fix apps enablement condition to check both the feature flag and
that the user is not an API key user.
2026-03-09 22:25:43 -07:00
pash-openai
63597d1b2d tui: only show fast status for gpt-5.4 (#14135) 2026-03-09 21:12:05 -07:00
Andrei Eternal
244b2d53f4 start of hooks engine (#13276)
(Experimental)

This PR adds a first MVP for hooks, with SessionStart and Stop

The core design is:

- hooks live in a dedicated engine under codex-rs/hooks
- each hook type has its own event-specific file
- hook execution is synchronous and blocks normal turn progression while
running
- matching hooks run in parallel, then their results are aggregated into
a normalized HookRunSummary

On the AppServer side, hooks are exposed as operational metadata rather
than transcript-native items:

- new live notifications: hook/started, hook/completed
- persisted/replayed hook results live on Turn.hookRuns
- we intentionally did not add hook-specific ThreadItem variants

Hook messages are not persisted; they remain ephemeral. The context
changes they add are persisted (they get appended to the user's prompt).
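
To make the aggregation step concrete, here is a minimal sketch (assuming tokio; `HookOutcome` and the `HookRunSummary` fields are placeholders rather than the actual types in `codex-rs/hooks`) of matching hooks running in parallel and being folded into a single normalized summary:

```rust
use tokio::task::JoinSet;

#[derive(Debug, Clone)]
struct HookOutcome {
    hook_name: String,
    exit_code: i32,
    added_context: Option<String>, // context that gets appended to the user's prompt
}

#[derive(Debug, Default)]
struct HookRunSummary {
    outcomes: Vec<HookOutcome>,
}

// Run every matching hook concurrently, then aggregate the results. Turn
// progression would block on this call while the hooks execute.
async fn run_matching_hooks(hook_names: Vec<String>) -> HookRunSummary {
    let mut set = JoinSet::new();
    for hook_name in hook_names {
        // A real engine would spawn the configured hook command here instead
        // of returning a canned outcome.
        set.spawn(async move {
            HookOutcome {
                hook_name,
                exit_code: 0,
                added_context: None,
            }
        });
    }
    let mut summary = HookRunSummary::default();
    while let Some(Ok(outcome)) = set.join_next().await {
        summary.outcomes.push(outcome);
    }
    summary
}
```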
2026-03-10 04:11:31 +00:00
Won Park
42f20a6845 pass on save info to model + ui tweaks (#14123)
Passing on more information to the model for context purposes, to
streamline image-identification.
2026-03-09 20:10:15 +00:00
Dylan Hurd
06f82c123c feat(tui) render request_permissions calls (#14004)
## Summary
Adds support for tui rendering of request_permission calls

<img width="724" height="245" alt="Screenshot 2026-03-08 at 9 04 07 PM"
src="https://github.com/user-attachments/assets/e1997825-a496-4bfb-bbda-43d0006460a5"
/>


## Testing
- [x] Added snapshot test
2026-03-09 04:24:04 +00:00
Jack Mousseau
e6b93841c5 Add request permissions tool (#13092)
Adds a built-in `request_permissions` tool and wires it through the
Codex core, protocol, and app-server layers so a running turn can ask
the client for additional permissions instead of relying on a static
session policy.

The new flow emits a `RequestPermissions` event from core, tracks the
pending request by call ID, forwards it through app-server v2 as an
`item/permissions/requestApproval` request, and resumes the tool call
once the client returns an approved subset of the requested permission
profile.
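
A minimal sketch of the "track the pending request by call ID, resume once the client responds" part of that flow, assuming tokio oneshot channels; the type and method names below are illustrative, not the actual core implementation:

```rust
use std::collections::HashMap;
use tokio::sync::oneshot;

#[derive(Debug)]
struct PermissionGrant {
    // The approved subset of the requested permission profile.
    approved: Vec<String>,
}

#[derive(Default)]
struct PendingPermissionRequests {
    by_call_id: HashMap<String, oneshot::Sender<PermissionGrant>>,
}

impl PendingPermissionRequests {
    /// Register a pending request; the returned receiver resolves once the
    /// client answers `item/permissions/requestApproval`.
    fn register(&mut self, call_id: String) -> oneshot::Receiver<PermissionGrant> {
        let (tx, rx) = oneshot::channel();
        self.by_call_id.insert(call_id, tx);
        rx
    }

    /// Resume the waiting tool call once the client returns its decision.
    fn resolve(&mut self, call_id: &str, grant: PermissionGrant) {
        if let Some(tx) = self.by_call_id.remove(call_id) {
            let _ = tx.send(grant);
        }
    }
}
```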
2026-03-08 20:23:06 -07:00
Charley Cunningham
4ad3b59de3 tui: clarify pending steer follow-ups (#13841)
## Summary
- split the pending input preview into labeled pending-steer and queued
follow-up sections
- explain that pending steers submit after the next tool call and that
Esc can interrupt and send them immediately
- treat Esc as an interrupt-plus-resubmit path when pending steers
exist, with updated TUI snapshots and tests

Queues and steers:
<img width="1038" height="263" alt="Screenshot 2026-03-07 at 10 17
17 PM"
src="https://github.com/user-attachments/assets/4ef433ef-27a3-4b7c-ad69-2046f6eb89e6"
/>

After pressing Esc:
<img width="1046" height="320" alt="Screenshot 2026-03-07 at 10 17
21 PM"
src="https://github.com/user-attachments/assets/0f4d89e0-b6b9-486a-9f04-b6021f169ba7"
/>

## Codex author
`codex resume 019cc6f4-2cca-7803-b717-8264526dbd97`

---------

Co-authored-by: Codex <noreply@openai.com>
2026-03-08 20:13:21 -07:00
Eric Traut
e8d7ede83c Fix TUI context window display before first TokenCount (#13896)
The TUI was showing the raw configured `model_context_window` until the
first `TokenCount` event arrived, even though core had already emitted
the effective runtime window on `TurnStarted`. This made the footer,
status-line context window, and `/status` output briefly inconsistent
for models/configs where the effective window differs from the
configured value, such as the `gpt-5.4` 1,000,000-token override
reported in #13623.

Update the TUI to cache `TurnStarted.model_context_window` immediately
so pre-token-count displays use the runtime effective window, and add
regression coverage for the startup path.
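
A small sketch of that caching behavior (field names are assumptions, not the actual TUI state): prefer the runtime effective window once `TurnStarted` has been seen, and fall back to the configured value before then.

```rust
struct ContextWindowDisplay {
    configured_context_window: Option<u64>,
    runtime_context_window: Option<u64>,
}

impl ContextWindowDisplay {
    // Cache the effective window as soon as TurnStarted reports it.
    fn on_turn_started(&mut self, model_context_window: Option<u64>) {
        if model_context_window.is_some() {
            self.runtime_context_window = model_context_window;
        }
    }

    /// Value used by the footer, status line, and `/status` before the first
    /// `TokenCount` event arrives.
    fn effective_context_window(&self) -> Option<u64> {
        self.runtime_context_window.or(self.configured_context_window)
    }
}
```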

---------

Co-authored-by: Charles Cunningham <ccunningham@openai.com>
Co-authored-by: Codex <noreply@openai.com>
2026-03-07 17:01:47 -07:00
Eric Traut
8df4d9b3b2 Add Fast mode status-line indicator (#13670)
Addresses feature request #13660

Adds a new option to `/statusline` so the status line can display "fast
on" or "fast off"

Summary
- introduce a `FastMode` status-line item so `/statusline` can render
explicit `Fast on`/`Fast off` text for the service tier
- wire the item into the picker metadata and resolve its string from
`ChatWidget` without adding any unrelated `thread-name` logic or storage
changes
- ensure the refresh paths keep the cached footer in sync when the
service tier (fast mode) changes

Testing
- Manually tested

Here's what it looks like when enabled:

<img width="366" height="75" alt="image"
src="https://github.com/user-attachments/assets/7f992d2b-6dab-49ed-aa43-ad496f56f193"
/>
2026-03-07 00:42:08 -07:00
Owen Lin
289ed549cf chore(otel): rename OtelManager to SessionTelemetry (#13808)
## Summary
This is a purely mechanical refactor of `OtelManager` ->
`SessionTelemetry` to better convey what the struct is doing. No
behavior change.

## Why

`OtelManager` ended up sounding much broader than what this type
actually does. It doesn't manage OTEL globally; it's the session-scoped
telemetry surface for emitting log/trace events and recording metrics
with consistent session metadata (`app_version`, `model`, `slug`,
`originator`, etc.).

`SessionTelemetry` is a more accurate name, and updating the call sites
makes that boundary a lot easier to follow.

## Validation

- `just fmt`
- `cargo test -p codex-otel`
- `cargo test -p codex-core`
2026-03-06 16:23:30 -08:00
sayan-oai
8a54d3caaa feat: structured plugin parsing (#13711)
#### What

Add structured `@plugin` parsing and TUI support for plugin mentions.

- Core: switch from plain-text `@display_name` parsing to structured
`plugin://...` mentions via `UserInput::Mention` and
`[$...](plugin://...)` links in text, same pattern as apps/skills.
- TUI: add plugin mention popup, autocomplete, and chips when typing
`$`. Load plugin capability summaries and feed them into the composer;
plugin mentions appear alongside skills and apps.
- Generalize mention parsing to a sigil parameter, still defaults to `$`

<img width="797" height="119" alt="image"
src="https://github.com/user-attachments/assets/f0fe2658-d908-4927-9139-73f850805ceb"
/>

Builds on #13510. Currently clients have to build their own `id` via
`plugin@marketplace` and filter plugins to show by `enabled`, but we
will add `id` and `available` as fields returned from `plugin/list`
soon.

#### Tests

Added tests, verified locally.
2026-03-06 11:08:36 -08:00
jif-oai
b6d43ec8eb feat: status line with real data (#13619) 2026-03-06 11:01:40 +01:00
Matthew Zeng
98dca99db7 [elicitations] Switch to use MCP style elicitation payload for mcp tool approvals. (#13621)
- [x] Switch to use MCP style elicitation payload for mcp tool
approvals.
- [ ] TODO: Update the UI to support the full spec.
2026-03-06 01:50:26 -08:00
rhan-oai
9fcbbeb5ae [diagnostics] show diagnostics earlier in workflow (#13604)
<img width="591" height="243" alt="Screenshot 2026-03-05 at 10 17 06 AM"
src="https://github.com/user-attachments/assets/84a6658b-6017-4602-b1f8-2098b9b5eff9"
/>

- show feedback earlier
- preserve raw literal env vars (no trimming, sanitizing, etc.)
2026-03-05 11:23:47 -08:00
Owen Lin
926b2f19e8 feat(app-server): support mcp elicitations in v2 api (#13425)
This adds a first-class server request for MCP server elicitations:
`mcpServer/elicitation/request`.

Until now, MCP elicitation requests only showed up as a raw
`codex/event/elicitation_request` event from core. That made it hard for
v2 clients to handle elicitations using the same request/response flow
as other server-driven interactions (like shell and `apply_patch`
tools).

This also updates the underlying MCP elicitation request handling in
core to pass through the full MCP request (including URL and form data)
so we can expose it properly in app-server.

### Why not `item/mcpToolCall/elicitationRequest`?
This is because MCP elicitations are related to MCP servers first, and
only optionally to a specific MCP tool call.

In the MCP protocol, elicitation is a server-to-client capability: the
server sends `elicitation/create`, and the client replies with an
elicitation result. RMCP models it that way as well.

In practice an elicitation is often triggered by an MCP tool call, but
not always.

### What changed
- add `mcpServer/elicitation/request` to the v2 app-server API
- translate core `codex/event/elicitation_request` events into the new
v2 server request
- map client responses back into `Op::ResolveElicitation` so the MCP
server can continue
- update app-server docs and generated protocol schema
- add an end-to-end app-server test that covers the full round trip
through a real RMCP elicitation flow
- The new test exercises a realistic case where an MCP tool call
triggers an elicitation, the app-server emits
mcpServer/elicitation/request, the client accepts it, and the tool call
resumes and completes successfully.

### app-server API flow
- Client starts a thread with `thread/start`.
- Client starts a turn with `turn/start`.
- App-server sends `item/started` for the `mcpToolCall`.
- While that tool call is in progress, app-server sends
`mcpServer/elicitation/request`.
- Client responds to that request with `{ action: "accept" | "decline" |
"cancel" }`.
- App-server sends `serverRequest/resolved`.
- App-server sends `item/completed` for the mcpToolCall.
- App-server sends `turn/completed`.
- If the turn is interrupted while the elicitation is pending,
app-server still sends `serverRequest/resolved` before the turn
finishes.
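
For illustration only, plausible payload shapes for the new round trip (built with `serde_json`; the exact v2 wire fields beyond `action` are not spelled out here and are assumptions):

```rust
use serde_json::json;

fn main() {
    // Server -> client request sent while the mcpToolCall item is in progress.
    let request = json!({
        "method": "mcpServer/elicitation/request",
        "params": {
            "server": "example-mcp-server",                  // assumed field name
            "message": "Grant access to the reports folder?" // assumed field name
        }
    });

    // Client response; the three documented actions are accept/decline/cancel.
    let response = json!({ "action": "accept" });

    println!("{request}");
    println!("{response}");
}
```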
2026-03-05 07:20:20 -08:00
pash-openai
1ce1712aeb [tui] Show speed in session header (#13446)
- add a speed row to the startup/session header under the model row
- render the speed row with the same styling pattern as the model row,
using `/fast` to change it
- show only Fast or Standard to users and update the affected snapshots

---------

Co-authored-by: Codex <noreply@openai.com>
2026-03-05 00:00:16 -08:00
Matthew Zeng
3336639213 [apps] Fix the issue where apps is not enabled after codex resume. (#13533)
- [x] Fix the issue where apps is not enabled after codex resume.
2026-03-04 22:39:31 -08:00
Won Park
229e6d0347 image-gen-event/client_processing (#13512)
Enable the client side to process image-generation capabilities
(app-server setting).
2026-03-04 16:54:38 -08:00
Eric Traut
f80e5d979d Notify TUI about plan mode prompts and user input requests (#13495)
Addresses #13478

Summary
- Add two new scopes for `tui.notifications` config: `plan-mode-prompt`
and `user-input-requested`.
- Add Plan Mode prompt and user-input-requested notifications to the TUI
so these events surface consistently outside of plan mode
- Add helpers and tests to ensure the new notification types publish the
right titles, summaries, and type tags for filtering
- Add prioritization mechanism to fix an existing bug where one
notification event could arbitrarily overwrite others

Testing
- Manually tested plan mode to ensure that notification appeared
2026-03-04 15:08:57 -07:00
jif-oai
e6b2e3a9f7 fix: bad merge (#13461) 2026-03-04 11:00:48 +00:00
jif-oai
e4a202ea52 fix: pending messages in /agent (#13240) 2026-03-04 10:17:29 +00:00
Val Kharitonov
4f6c4bb143 support 'flex' tier in app-server in addition to 'fast' (#13391) 2026-03-03 22:46:05 -08:00
Michael Bolin
bfff0c729f config: enforce enterprise feature requirements (#13388)
## Why

Enterprises can already constrain approvals, sandboxing, and web search
through `requirements.toml` and MDM, but feature flags were still only
configurable as managed defaults. That meant an enterprise could suggest
feature values, but it could not actually pin them.

This change closes that gap and makes enterprise feature requirements
behave like the other constrained settings. The effective feature set
now stays consistent with enterprise requirements during config load,
when config writes are validated, and when runtime code mutates feature
flags later in the session.

It also tightens the runtime API for managed features. `ManagedFeatures`
now follows the same constraint-oriented shape as `Constrained<T>`
instead of exposing panic-prone mutation helpers, and production code
can no longer construct it through an unconstrained `From<Features>`
path.
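
A toy, self-contained sketch of what that constraint-oriented shape can look like (the real `ManagedFeatures` is backed by `ConstrainedWithSource<Features>`; the types and error shape below are assumptions for illustration):

```rust
use std::collections::HashMap;

#[derive(Debug)]
enum ConstraintError {
    PinnedByRequirements { key: String, required: bool },
}

#[derive(Default)]
struct ManagedFeaturesSketch {
    effective: HashMap<String, bool>,
    // Values pinned by the requirements-side [features] table.
    pinned: HashMap<String, bool>,
}

impl ManagedFeaturesSketch {
    fn can_set(&self, key: &str, enabled: bool) -> bool {
        self.pinned.get(key).map_or(true, |&required| required == enabled)
    }

    fn set_enabled(&mut self, key: &str, enabled: bool) -> Result<(), ConstraintError> {
        if !self.can_set(key, enabled) {
            return Err(ConstraintError::PinnedByRequirements {
                key: key.to_string(),
                required: self.pinned[key],
            });
        }
        self.effective.insert(key.to_string(), enabled);
        Ok(())
    }
}
```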

The PR also hardens the `compact_resume_fork` integration coverage on
Windows. After the feature-management changes,
`compact_resume_after_second_compaction_preserves_history` was
overflowing the libtest/Tokio thread stacks on Windows, so the test now
uses an explicit larger-stack harness as a pragmatic mitigation. That
may not be the ideal root-cause fix, and it merits a parallel
investigation into whether part of the async future chain should be
boxed to reduce stack pressure instead.

## What Changed

Enterprises can now pin feature values in `requirements.toml` with the
requirements-side `features` table:

```toml
[features]
personality = true
unified_exec = false
```

Only canonical feature keys are allowed in the requirements `features`
table; omitted keys remain unconstrained.

- Added a requirements-side pinned feature map to
`ConfigRequirementsToml`, threaded it through source-preserving
requirements merge and normalization in `codex-config`, and made the
TOML surface use `[features]` (while still accepting legacy
`[feature_requirements]` for compatibility).
- Exposed `featureRequirements` from `configRequirements/read`,
regenerated the JSON/TypeScript schema artifacts, and updated the
app-server README.
- Wrapped the effective feature set in `ManagedFeatures`, backed by
`ConstrainedWithSource<Features>`, and changed its API to mirror
`Constrained<T>`: `can_set(...)`, `set(...) -> ConstraintResult<()>`,
and result-returning `enable` / `disable` / `set_enabled` helpers.
- Removed the legacy-usage and bulk-map passthroughs from
`ManagedFeatures`; callers that need those behaviors now mutate a plain
`Features` value and reapply it through `set(...)`, so the constrained
wrapper remains the enforcement boundary.
- Removed the production loophole for constructing unconstrained
`ManagedFeatures`. Non-test code now creates it through the configured
feature-loading path, and `impl From<Features> for ManagedFeatures` is
restricted to `#[cfg(test)]`.
- Rejected legacy feature aliases in enterprise feature requirements,
and return a load error when a pinned combination cannot survive
dependency normalization.
- Validated config writes against enterprise feature requirements before
persisting changes, including explicit conflicting writes and
profile-specific feature states that normalize into invalid
combinations.
- Updated runtime and TUI feature-toggle paths to use the constrained
setter API and to persist or apply the effective post-constraint value
rather than the requested value.
- Updated the `core_test_support` Bazel target to include the bundled
core model-catalog fixtures in its runtime data, so helper code that
resolves `core/models.json` through runfiles works in remote Bazel test
environments.
- Renamed the core config test coverage to emphasize that effective
feature values are normalized at runtime, while conflicting persisted
config writes are rejected.
- Ran `compact_resume_after_second_compaction_preserves_history` inside
an explicit 8 MiB test thread and Tokio runtime worker stack, following
the existing larger-stack integration-test pattern, to keep the Windows
`compact_resume_fork` test slice from aborting while a parallel
investigation continues into whether some of the underlying async
futures should be boxed.

## Verification

- `cargo test -p codex-config`
- `cargo test -p codex-core feature_requirements_ -- --nocapture`
- `cargo test -p codex-core
load_requirements_toml_produces_expected_constraints -- --nocapture`
- `cargo test -p codex-core
compact_resume_after_second_compaction_preserves_history -- --nocapture`
- `cargo test -p codex-core compact_resume_fork -- --nocapture`
- Re-ran the built `codex-core` `tests/all` binary with
`RUST_MIN_STACK=262144` for
`compact_resume_after_second_compaction_preserves_history` to confirm
the explicit-stack harness fixes the deterministic low-stack repro.
- `cargo test -p codex-core`
- This still fails locally in unrelated integration areas that expect
the `codex` / `test_stdio_server` binaries or hit existing `search_tool`
wiremock mismatches.

## Docs

`developers.openai.com/codex` should document the requirements-side
`[features]` table for enterprise and MDM-managed configuration,
including that it only accepts canonical feature keys and that
conflicting config writes are rejected.
2026-03-04 04:40:22 +00:00
rhan-oai
e951ef4374 [feedback] diagnostics (#13292)
- added header logic to display diagnostics on cli
- added logic for collecting env vars

<img width="606" height="327" alt="Screenshot 2026-03-03 at 3 49 31 PM"
src="https://github.com/user-attachments/assets/05e78c56-8cb3-47fa-abaf-3e57f1fdd8e2"
/>

<img width="690" height="353" alt="Screenshot 2026-03-02 at 6 47 54 PM"
src="https://github.com/user-attachments/assets/e470b559-13f4-44d9-897f-bc398943c6d1"
/>
2026-03-03 16:34:11 -08:00
Charley Cunningham
299b8ac445 tui: align pending steers with core acceptance (#12868)
## Summary
- submit `Enter` steers immediately while a turn is already running
instead of routing them through `queued_user_messages`
- keep those submitted steers visible in the footer as `pending_steers`
until core records them as a user message or aborts the turn
- reconcile pending steers on `ItemCompleted(UserMessage)`, not
`RawResponseItem`
- emit user-message item lifecycle for leftover pending input at task
finish, then remove the TUI `TurnComplete` fallback
- keep `queued_user_messages` for actual queued drafts, rendered below
pending steers

## Problem
While the assistant was generating, pressing `Enter` could send the
input into `queued_user_messages`. That queue only drains after the turn
ends, so ordinary steers behaved like queued drafts instead of landing
at the next core sampling boundary.

The first version of this fix also used `RawResponseItem` to decide when
a steer had landed. Review feedback was that this is the wrong
abstraction for client behavior.

There was also a late edge case in core: if pending steer input was
accepted after the final sampling decision but before `TurnComplete`,
core would record that user message into history at task finish without
emitting `ItemStarted(UserMessage)` / `ItemCompleted(UserMessage)`. TUI
had a fallback to paper over that gap locally.

## Approach
- `Enter` during an active turn now submits a normal `Op::UserTurn`
immediately
- TUI keeps a local pending-steer preview instead of rendering that user
message into history immediately
- when core records the steer as `ItemCompleted(UserMessage)`, TUI
matches and removes the corresponding pending preview, then renders the
committed user message
- core now emits the same user-message lifecycle when
`on_task_finished(...)` drains leftover pending user input, before
`TurnComplete`
- with that lifecycle gap closed in core, TUI no longer needs to flush
pending steers into history on `TurnComplete`
- if the turn is interrupted, pending steers and queued drafts are both
restored into the composer, with pending steers first

## Notes
- `Tab` still uses the real queued-message path
- `queued_user_messages` and `pending_steers` are separate state with
separate semantics
- the pending-steer matching key is built directly from `UserInput`
- this removes the new TUI dependency on `RawResponseItem`
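
A self-contained sketch of the pending-steer bookkeeping described above (all types are placeholders): previews are keyed from the submitted input, removed when core reports the matching committed user message, and drained back into the composer on interrupt.

```rust
use std::collections::VecDeque;

#[derive(Debug, Clone, PartialEq, Eq)]
struct SteerKey(String); // built directly from the submitted UserInput text

#[derive(Default)]
struct PendingSteers {
    previews: VecDeque<SteerKey>,
}

impl PendingSteers {
    /// Enter during an active turn: submit to core and keep a local preview.
    fn submit(&mut self, text: &str) {
        self.previews.push_back(SteerKey(text.to_string()));
    }

    /// On `ItemCompleted(UserMessage)`: drop the matching preview so the
    /// committed user message from core is rendered instead.
    fn reconcile_completed_user_message(&mut self, text: &str) -> bool {
        if let Some(pos) = self.previews.iter().position(|key| key.0 == text) {
            self.previews.remove(pos);
            true
        } else {
            false
        }
    }

    /// On interrupt, pending steers are restored into the composer first,
    /// ahead of queued drafts.
    fn drain_for_composer(&mut self) -> Vec<String> {
        self.previews.drain(..).map(|key| key.0).collect()
    }
}
```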

## Validation
- `just fmt`
- `cargo test -p codex-core
task_finish_emits_turn_item_lifecycle_for_leftover_pending_user_input --
--nocapture`
- `cargo test -p codex-tui`
2026-03-03 15:31:52 -08:00
pash-openai
2f5b01abd6 add fast mode toggle (#13212)
- add a local Fast mode setting in codex-core (similar to how model id
is currently stored on disk locally)
- send `service_tier=priority` on requests when Fast is enabled
- add `/fast` in the TUI and persist it locally
- feature flag
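
A minimal sketch of the `service_tier` behavior (assumed struct and field names, serde for serialization): the field is sent as `"priority"` only when Fast is enabled and omitted otherwise.

```rust
use serde::Serialize;

#[derive(Serialize)]
struct ChatRequestSketch {
    model: String,
    #[serde(skip_serializing_if = "Option::is_none")]
    service_tier: Option<String>,
}

fn build_request(model: &str, fast_enabled: bool) -> ChatRequestSketch {
    ChatRequestSketch {
        model: model.to_string(),
        // Fast mode maps to the "priority" service tier on the request.
        service_tier: fast_enabled.then(|| "priority".to_string()),
    }
}
```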
2026-03-02 20:29:33 -08:00
jif-oai
f8838fd6f3 feat: enable ma through /agent (#13246)
<img width="639" height="139" alt="Screenshot 2026-03-02 at 16 06 41"
src="https://github.com/user-attachments/assets/c006fcec-c1e7-41ce-bb84-c121d5ffb501"
/>

Then
<img width="372" height="37" alt="Screenshot 2026-03-02 at 16 06 49"
src="https://github.com/user-attachments/assets/aa4ad703-e7e7-4620-9032-f5cd4f48ff79"
/>
2026-03-02 18:37:29 +00:00
jif-oai
9a42a56d8f chore: /multiagent alias for /agent (#13249)
Add a `/multi-agents` alias for `/agent` and update the wording
2026-03-02 16:51:54 +00:00
jif-oai
b08bdd91e3 fix: /status when sub-agent (#13130)
Fix https://github.com/openai/codex/issues/13066
2026-03-02 11:57:15 +00:00
xl-openai
752402c4fe feat: load from plugins (#12864)
Support loading plugins.

Plugins can now be enabled via [plugins.<name>] in config.toml. They are
loaded as first-class entities through PluginsManager, and their default
skills/ and .mcp.json contributions are integrated into the existing
skills and MCP flows.
2026-03-01 10:50:56 -08:00
jif-oai
2b38b4e03b feat: approval for sub-agent in the TUI (#12995)
<img width="766" height="290" alt="Screenshot 2026-02-27 at 10 50 48"
src="https://github.com/user-attachments/assets/3bc96cd9-ed2c-4d67-a317-8f7b60abbbb1"
/>
2026-02-28 14:07:07 +01:00
Ahmed Ibrahim
ec6f6aacbf Add model availability NUX tooltips (#13021)
- override startup tooltips with model availability NUX and persist
per-model show counts in config
- stop showing each model after four exposures and fall back to normal
tooltips
2026-02-27 17:14:06 -08:00
Ahmed Ibrahim
f90e97e414 Add realtime audio device picker (#12850)
## Summary
- add a dedicated /audio picker for realtime microphone and speaker
selection
- persist realtime audio choices and prompt to restart only local audio
when voice is live
- add snapshot coverage for the new picker surfaces

## Validation
- cargo test -p codex-tui
- cargo insta accept
- just fix -p codex-tui
- just fmt
2026-02-26 17:27:44 -08:00
pakrym-oai
951a389654 Allow clients not to send summary as an option (#12950)
Summary is a required parameter on UserTurn. Ideally we'd like the core
to decide the appropriate summary level.

Make the summary optional and don't send it when not needed.
2026-02-26 14:37:38 -08:00
pakrym-oai
ba41e84a50 Use model catalog default for reasoning summary fallback (#12873)
## Summary
- make `Config.model_reasoning_summary` optional so unset means use
model default
- resolve the optional config value to a concrete summary when building
`TurnContext`
- add protocol support for `default_reasoning_summary` in model metadata
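
The fallback itself is simple; a sketch with placeholder variants (not the actual protocol values):

```rust
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
enum ReasoningSummarySketch {
    Auto,
    Concise,
    Detailed,
}

/// Resolve the optional config value to a concrete summary when building the
/// turn context: unset means "use the model catalog default".
fn resolve_reasoning_summary(
    configured: Option<ReasoningSummarySketch>,
    model_default: ReasoningSummarySketch,
) -> ReasoningSummarySketch {
    configured.unwrap_or(model_default)
}
```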

## Validation
- `cargo test -p codex-core --lib client::tests -- --nocapture`

---------

Co-authored-by: Codex <noreply@openai.com>
2026-02-26 09:31:13 -08:00
Michael Bolin
14116ade8d feat: include available decisions in command approval requests (#12758)
Command-approval clients currently infer which choices to show from
side-channel fields like `networkApprovalContext`,
`proposedExecpolicyAmendment`, and `additionalPermissions`. That makes
the request shape harder to evolve, and it forces each client to
replicate the server's heuristics instead of receiving the exact
decision list for the prompt.

This PR introduces a mapping between `CommandExecutionApprovalDecision`
and `codex_protocol::protocol::ReviewDecision`:

```rust
impl From<CoreReviewDecision> for CommandExecutionApprovalDecision {
    fn from(value: CoreReviewDecision) -> Self {
        match value {
            CoreReviewDecision::Approved => Self::Accept,
            CoreReviewDecision::ApprovedExecpolicyAmendment {
                proposed_execpolicy_amendment,
            } => Self::AcceptWithExecpolicyAmendment {
                execpolicy_amendment: proposed_execpolicy_amendment.into(),
            },
            CoreReviewDecision::ApprovedForSession => Self::AcceptForSession,
            CoreReviewDecision::NetworkPolicyAmendment {
                network_policy_amendment,
            } => Self::ApplyNetworkPolicyAmendment {
                network_policy_amendment: network_policy_amendment.into(),
            },
            CoreReviewDecision::Abort => Self::Cancel,
            CoreReviewDecision::Denied => Self::Decline,
        }
    }
}
```

And updates `CommandExecutionRequestApprovalParams` to have a new field:

```rust
available_decisions: Option<Vec<CommandExecutionApprovalDecision>>
```

which, if specified, should make it easier for clients to display an
appropriate list of options in the UI.

This makes it possible for `CoreShellActionProvider::prompt()` in
`unix_escalation.rs` to specify the `Vec<ReviewDecision>` directly,
adding support for `ApprovedForSession` when approving a skill script,
which was previously missing in the TUI.

Note this results in a significant change to `exec_options()` in
`approval_overlay.rs`, as the displayed options are now derived from
`available_decisions: &[ReviewDecision]`.

## What Changed

- Add `available_decisions` to
[`ExecApprovalRequestEvent`](de00e932dd/codex-rs/protocol/src/approvals.rs (L111-L175)),
including helpers to derive the legacy default choices when older
senders omit the field.
- Map `codex_protocol::protocol::ReviewDecision` to app-server
`CommandExecutionApprovalDecision` and expose the ordered list as
experimental `availableDecisions` in
[`CommandExecutionRequestApprovalParams`](de00e932dd/codex-rs/app-server-protocol/src/protocol/v2.rs (L3798-L3807)).
- Thread optional `available_decisions` through the core approval path
so Unix shell escalation can explicitly request `ApprovedForSession` for
session-scoped approvals instead of relying on client heuristics.
[`unix_escalation.rs`](de00e932dd/codex-rs/core/src/tools/runtimes/shell/unix_escalation.rs (L194-L214))
- Update the TUI approval overlay to build its buttons from the ordered
decision list, while preserving the legacy fallback when
`available_decisions` is missing.
- Update the app-server README, test client output, and generated schema
artifacts to document and surface the new field.

## Testing

- Add `approval_overlay.rs` coverage for explicit decision lists,
including the generic `ApprovedForSession` path and network approval
options.
- Update `chatwidget/tests.rs` and app-server protocol tests to populate
the new optional field and keep older event shapes working.

## Developers Docs

- If we document `item/commandExecution/requestApproval` on
[developers.openai.com/codex](https://developers.openai.com/codex), add
experimental `availableDecisions` as the preferred source of approval
choices and note that older servers may omit it.
2026-02-26 01:10:46 +00:00
Celia Chen
4f45668106 Revert "Add skill approval event/response (#12633)" (#12811)
This reverts commit https://github.com/openai/codex/pull/12633. We no
longer need this PR, because we favor sending a normal exec command
approval server request with `additional_permissions` carrying the skill
permissions instead.
2026-02-26 01:02:42 +00:00
Ahmed Ibrahim
e76b1a2853 Remove steer feature flag (#12026)
All code should go in the direction that steer is enabled

---------

Co-authored-by: Codex <noreply@openai.com>
2026-02-25 15:41:42 -08:00
Owen Lin
a0fd94bde6 feat(app-server): add ThreadItem::DynamicToolCall (#12732)
Previously, clients would call `thread/start` with `dynamic_tools` set,
and when the model invoked a dynamic tool, the server would just make
the server->client `item/tool/call` request and wait for the client's
response to complete the tool call. This works, but it doesn't emit an
`item/started` or `item/completed` event.

Now we are doing this:
- [new] emit `item/started` with `DynamicToolCall` populated with the
call arguments
- send an `item/tool/call` server request
- [new] once the client responds, emit `item/completed` with
`DynamicToolCall` populated with the response.

Also, with `persistExtendedHistory: true`, dynamic tool calls are now
reconstructable in `thread/read` and `thread/resume` as
`ThreadItem::DynamicToolCall`.
2026-02-25 12:00:10 -08:00
jif-oai
bcd6e68054 Display pending child-thread approvals in TUI (#12767)
Summary
- propagate approval policy from parent to spawned agents and drop the
Never override so sub-agents respect the caller’s request
- refresh the pending-approval list whenever events arrive or the active
thread changes and surface the list above the composer for inactive
threads
- add widgets, helpers, and tests covering the new pending-thread
approval UI state

2026-02-25 11:40:11 +00:00
viyatb-oai
c086b36b58 feat(ui): add network approval persistence plumbing (#12358)
## Summary
- add TUI approval options for persistent network host rules
- add app-server v2 approval payload plumbing for network approval
context + proposed network policy amendments
- add app-server handling to translate `applyNetworkPolicyAmendment`
decisions back into core review decisions
- update docs/test client output and generated app-server schemas/types
2026-02-25 07:06:19 +00:00
Michael Bolin
ddfa032eb8 fix: chatwidget was not honoring approval_id for an ExecApprovalRequestEvent (#12746)
## Why

`ExecApprovalRequestEvent` can carry a distinct `approval_id` for
subcommand approvals, including the `execve`-intercepted zsh-fork path.

The session registers the pending approval callback under `approval_id`
when one is present, but `ChatWidget` was stashing `call_id` in the
approval modal state. When the user approved the command in the TUI, the
response was sent back with the wrong identifier, so the pending
approval could not be matched and the approval callback would not
resolve.

Note `approval_id` was introduced in
https://github.com/openai/codex/pull/12051.

## What changed

- In `tui/src/chatwidget.rs`, `ChatWidget` now uses
`ExecApprovalRequestEvent::effective_approval_id()` when constructing
`ApprovalRequest::Exec`.
- That preserves the existing behavior for normal shell and
`unified_exec` approvals, where `approval_id` is absent and the
effective id still falls back to `call_id`.
- For subcommand approvals that provide a distinct `approval_id`, the
TUI now sends back the same key that
`Session::request_command_approval()` registered.

## Verification

- Traced the approval flow end to end to confirm the same effective
approval id is now used on both sides of the round trip:
- `Session::request_command_approval()` registers the pending callback
under `approval_id.unwrap_or(call_id)`.
- `ChatWidget` now emits `Op::ExecApproval` with that same effective id.
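
A toy illustration of the fallback both sides now agree on (the struct is a stand-in, not the real `ExecApprovalRequestEvent`): use the distinct approval id when the event carries one, otherwise fall back to the call id.

```rust
struct ExecApprovalRequestSketch {
    call_id: String,
    approval_id: Option<String>,
}

impl ExecApprovalRequestSketch {
    /// Same key that `Session::request_command_approval()` registers the
    /// pending callback under: `approval_id.unwrap_or(call_id)`.
    fn effective_approval_id(&self) -> &str {
        self.approval_id.as_deref().unwrap_or(self.call_id.as_str())
    }
}
```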
2026-02-24 22:27:05 -08:00
Michael Bolin
448fb6ac22 fix: clarify the value of SkillMetadata.path (#12729)
Rename `SkillMetadata.path` to `SkillMetadata.path_to_skills_md` for
clarity.

Would ideally change the type to `AbsolutePathBuf`, but that can be done
later.
2026-02-24 17:15:54 -08:00
Won Park
ee1520e79e feat(tui) - /copy (#12613)
# /copy!

/copy allows you to copy the latest **complete** message from Codex on
the TUI.
2026-02-24 14:17:01 -08:00
Ahmed Ibrahim
b6ab2214e3 Add TUI realtime conversation mode (#12687)
- Add a hidden `realtime_conversation` feature flag and `/realtime`
slash command for start/stop live voice sessions.
- Reuse transcription composer/footer UI for live metering, stream mic
audio, play assistant audio, render realtime user text events, and
force-close on feature disable.

---------

Co-authored-by: Codex <noreply@openai.com>
2026-02-24 12:54:30 -08:00
Won Park
ca556fa313 ctrl-L (clears terminal but does not start a new chat) (#12628)
# ctrl-L

- Clears your terminal window
- Does not start a new chat
2026-02-24 10:03:42 -08:00
Dylan Hurd
f6053fdfb3 feat(core) Introduce Feature::RequestPermissions (#11871)
## Summary
Introduces the initial implementation of Feature::RequestPermissions.
RequestPermissions allows the model to request that a command be run
inside the sandbox, with additional permissions, like writing to a
specific folder. Eventually this will include other rules as well, and
the ability to persist these permissions, but this PR is already quite
large - let's get the core flow working and go from there!

<img width="1279" height="541" alt="Screenshot 2026-02-15 at 2 26 22 PM"
src="https://github.com/user-attachments/assets/0ee3ec0f-02ec-4509-91a2-809ac80be368"
/>

## Testing
- [x] Added tests
- [x] Tested locally
- [x] Feature
2026-02-24 09:48:57 -08:00
jif-oai
0679e70bfc fix: replay after /agent (#12663)
Filter the events after a `/agent` replay to prevent replaying decision
events.
2026-02-24 12:08:38 +00:00
pakrym-oai
58763afa0f Add skill approval event/response (#12633)
Set the stage for skill-level permission approval in addition to
command-level.

Behind a feature flag.
2026-02-23 22:28:58 -08:00