Commit Graph

26 Commits

Author SHA1 Message Date
starr-openai
3971afd6d0 Route unified-exec through exec-server
Co-authored-by: Codex <noreply@openai.com>
2026-03-18 01:50:14 +00:00
Charley Cunningham
bc24017d64 Add Smart Approvals guardian review across core, app-server, and TUI (#13860)
## Summary
- add `approvals_reviewer = "user" | "guardian_subagent"` as the runtime
control for who reviews approval requests
- route Smart Approvals guardian review through core for command
execution, file changes, managed-network approvals, MCP approvals, and
delegated/subagent approval flows
- expose guardian review in app-server with temporary unstable
`item/autoApprovalReview/{started,completed}` notifications carrying
`targetItemId`, `review`, and `action`
- update the TUI so Smart Approvals can be enabled from `/experimental`,
aligned with the matching `/approvals` mode, and surfaced clearly while
reviews are pending or resolved

## Runtime model
This PR does not introduce a new `approval_policy`.

Instead:
- `approval_policy` still controls when approval is needed
- `approvals_reviewer` controls whom reviewable approval requests are
routed to:
  - `user`
  - `guardian_subagent`

`guardian_subagent` is a carefully prompted reviewer subagent that
gathers relevant context and applies a risk-based decision framework
before approving or denying the request.

The `smart_approvals` feature flag is a rollout/UI gate. Core runtime
behavior keys off `approvals_reviewer`.

When Smart Approvals is enabled from the TUI, it also switches the
current `/approvals` settings to the matching Smart Approvals mode so
users immediately see guardian review in the active thread:
- `approval_policy = on-request`
- `approvals_reviewer = guardian_subagent`
- `sandbox_mode = workspace-write`
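For reference, the equivalent settings as a minimal `config.toml` sketch
(keys and values exactly as listed above):

```toml
approval_policy = "on-request"
approvals_reviewer = "guardian_subagent"
sandbox_mode = "workspace-write"
```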

Users can still change `/approvals` afterward.

Config-load behavior stays intentionally narrow:
- plain `smart_approvals = true` in `config.toml` remains just the
rollout/UI gate and does not auto-set `approvals_reviewer`
- the deprecated `guardian_approval = true` alias migration does
backfill `approvals_reviewer = "guardian_subagent"` in the same scope
when that reviewer is not already configured there, so old configs
preserve their original guardian-enabled behavior
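A minimal sketch of the two config-load cases described above:

```toml
# Rollout/UI gate only; does not auto-set the reviewer:
smart_approvals = true

# Deprecated alias; its migration backfills
# approvals_reviewer = "guardian_subagent" in the same scope
# when no reviewer is already configured there:
guardian_approval = true
```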

ARC remains a separate safety check. For MCP tool approvals, ARC
escalations now flow into the configured reviewer instead of always
bypassing guardian and forcing manual review.

## Config stability
The runtime reviewer override is stable, but the config-backed
app-server protocol shape is still settling.

- `thread/start`, `thread/resume`, and `turn/start` keep stable
`approvalsReviewer` overrides
- the config-backed `approvals_reviewer` exposure returned via
`config/read` (including profile-level config) is now marked
`[UNSTABLE]` / experimental in the app-server protocol until we are more
confident in that config surface

## App-server surface
This PR intentionally keeps the guardian app-server shape narrow and
temporary.

It adds generic unstable lifecycle notifications:
- `item/autoApprovalReview/started`
- `item/autoApprovalReview/completed`

with payloads of the form:
- `{ threadId, turnId, targetItemId, review, action? }`

`review` is currently:
- `{ status, riskScore?, riskLevel?, rationale? }`
- where `status` is one of `inProgress`, `approved`, `denied`, or
`aborted`

`action` carries the guardian action summary payload from core when
available. This lets clients render temporary standalone pending-review
UI, including parallel reviews, even when the underlying tool item has
not been emitted yet.

These notifications are explicitly documented as `[UNSTABLE]` and
expected to change soon.

This PR does **not** persist guardian review state onto `thread/read`
tool items. The intended follow-up is to attach guardian review state to
the reviewed tool item lifecycle instead, which would improve
consistency with manual approvals and allow thread history / reconnect
flows to replay guardian review state directly.

## TUI behavior
- `/experimental` exposes the rollout gate as `Smart Approvals`
- enabling it in the TUI enables the feature and switches the current
session to the matching Smart Approvals `/approvals` mode
- disabling it in the TUI clears the persisted `approvals_reviewer`
override when appropriate and returns the session to default manual
review when the effective reviewer changes
- `/approvals` still exposes the reviewer choice directly
- the TUI renders:
  - pending guardian review state in the live status footer, including
    parallel review aggregation
  - resolved approval/denial state in history

## Scope notes
This PR includes the supporting core/runtime work needed to make Smart
Approvals usable end-to-end:
- shell / unified-exec / apply_patch / managed-network / MCP guardian
review
- delegated/subagent approval routing into guardian review
- guardian review risk metadata and action summaries for app-server/TUI
- config/profile/TUI handling for `smart_approvals`, `guardian_approval`
alias migration, and `approvals_reviewer`
- a small internal cleanup of delegated approval forwarding to dedupe
fallback paths and simplify guardian-vs-parent approval waiting (no
intended behavior change)

Out of scope for this PR:
- redesigning the existing manual approval protocol shapes
- persisting guardian review state onto app-server `ThreadItem`s
- delegated MCP elicitation auto-review (the current delegated MCP
guardian shim only covers the legacy `RequestUserInput` path)

---------

Co-authored-by: Codex <noreply@openai.com>
2026-03-13 15:27:00 -07:00
Rohan Mehta
61098c7f51 Allow full web search tool config (#13675)
Previously, we could only configure whether web search was on/off.

This PR enables sending along a full web search config, covering
everything the Responses API supports: filters, location, etc.
2026-03-07 00:50:50 +00:00
pash-openai
2f5b01abd6 add fast mode toggle (#13212)
- add a local Fast mode setting in codex-core (similar to how model id
is currently stored on disk locally)
- send `service_tier=priority` on requests when Fast is enabled
- add `/fast` in the TUI and persist it locally
- gate it behind a feature flag
2026-03-02 20:29:33 -08:00
sayan-oai
5a635f3427 profile-level model_catalog_json override (#12410)
Enable the `model_catalog_json` config value on `ConfigProfile` as well.
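A minimal sketch, assuming the usual `[profiles.<name>]` table syntax;
the profile name and value here are hypothetical:

```toml
[profiles.my-profile]
model_catalog_json = "/path/to/model_catalog.json"  # hypothetical value
```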
2026-02-21 19:39:02 +00:00
Charley Cunningham
4c1744afb2 Improve Plan mode reasoning selection flow (#12303)
Addresses https://github.com/openai/codex/issues/11013

## Summary
- add a Plan implementation path in the TUI that lets users choose
reasoning before switching to Default mode and implementing
- add Plan-mode reasoning scope handling (Plan-only override vs
all-modes default), including config/schema/docs plumbing for
`plan_mode_reasoning_effort`
- remove the hardcoded Plan preset medium default and make the reasoning
popup reflect the active Plan override as `(current)`
- split the collaboration-mode switch notification UI hint into #12307
to keep this diff focused

If I have `plan_mode_reasoning_effort = "medium"` set in my
`config.toml`:
<img width="699" height="127" alt="Screenshot 2026-02-20 at 6 59 37 PM"
src="https://github.com/user-attachments/assets/b33abf04-6b7a-49ed-b2e9-d24b99795369"
/>

If I don't have `plan_mode_reasoning_effort` set in my `config.toml`:
<img width="704" height="129" alt="Screenshot 2026-02-20 at 7 01 51 PM"
src="https://github.com/user-attachments/assets/88a086d4-d2f1-49c7-8be4-f6f0c0fa1b8d"
/>

## Codex author
`codex resume 019c78a2-726b-7fe3-adac-3fa4523dcc2a`
2026-02-20 20:08:56 -08:00
aaronl-openai
f600453699 [js_repl] paths for node module resolution can be specified for js_repl (#11944)
In `js_repl` mode, module resolution currently starts from
`js_repl_kernel.js`, which is written to a per-kernel temp dir. This
effectively means that bare imports will not resolve.

This PR adds a new config option, `js_repl_node_module_dirs`, which is a
list of dirs that are used (in order) to resolve a bare import. If none
of those work, the current working directory of the thread is used.

For example:
```toml
js_repl_node_module_dirs = [
    "/path/to/node_modules/",
    "/other/path/to/node_modules/",
]
```
2026-02-17 23:29:49 -08:00
Owen Lin
edacbf7b6e feat(core): zsh exec bridge (#12052)
zsh fork PR stack:
- https://github.com/openai/codex/pull/12051 
- https://github.com/openai/codex/pull/12052 👈 

### Summary
This PR introduces a feature-gated native shell runtime path that routes
shell execution through a patched zsh exec bridge, removing MCP-specific
behavior from the shell hot path while preserving existing
CommandExecution lifecycle semantics.

When `shell_zsh_fork` is enabled, shell commands run via patched zsh with
per-`execve` interception through `EXEC_WRAPPER`. Core receives wrapper
IPC requests over a Unix socket, applies existing approval policy, and
returns allow/deny before the subcommand executes.

### What’s included
**1) New zsh exec bridge runtime in core**
- Wrapper-mode entrypoint (`maybe_run_zsh_exec_wrapper_mode`) for
`EXEC_WRAPPER` invocations.
- Per-execution Unix-socket IPC handling for wrapper requests/responses.
- Approval callback integration using existing core approval
orchestration.
- Streaming stdout/stderr deltas to existing command output event
pipeline.
- Error handling for malformed IPC, denial/abort, and execution
failures.

**2) Session lifecycle integration**
SessionServices now owns a `ZshExecBridge`.
Session startup initializes bridge state; shutdown tears it down
cleanly.

**3) Shell runtime routing (feature-gated)**
When `shell_zsh_fork` is enabled:
- Build execution env/spec as usual.
- Add wrapper socket env wiring.
- Execute via `zsh_exec_bridge.execute_shell_request(...)` instead of
the regular shell path.
- Non-zsh-fork behavior remains unchanged.

**4) Config + feature wiring**
- Added `Feature::ShellZshFork` (under development).
- Added config support for `zsh_path` (optional absolute path to patched
zsh):
  - `Config`, `ConfigToml`, `ConfigProfile`, overrides, and schema.
- Session startup validates that `zsh_path` exists/usable when zsh-fork
is enabled.
- Added startup test for missing `zsh_path` failure mode.
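A hedged sketch of enabling this path, assuming `shell_zsh_fork` lives
under the `[features]` table like other feature flags; the zsh location
is hypothetical:

```toml
# Optional absolute path to the patched zsh (hypothetical location):
zsh_path = "/opt/codex/zsh-fork/bin/zsh"

[features]
shell_zsh_fork = true
```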

**5) Seatbelt/sandbox updates for wrapper IPC**
- Extended seatbelt policy generation to optionally allow outbound
connection to explicitly permitted Unix sockets.
- Wired sandboxing path to pass wrapper socket path through to seatbelt
policy generation.
- Added/updated seatbelt tests for explicit socket allow rule and
argument emission.

**6) Runtime entrypoint hooks**
- Allows the same binary to act as the zsh wrapper subprocess when
invoked via `EXEC_WRAPPER`.

**7) Tool selection behavior**
- `ToolsConfig` now prefers the `ShellCommand` type when `shell_zsh_fork`
is enabled.
- Added test coverage for precedence with unified-exec enabled.
2026-02-17 20:19:53 -08:00
Curtis 'Fjord' Hawthorne
42e22f3bde Add feature-gated freeform js_repl core runtime (#10674)
## Summary

This PR adds an **experimental, feature-gated `js_repl` core runtime**
so models can execute JavaScript in a persistent REPL context across
tool calls.

The implementation integrates with existing feature gating, tool
registration, prompt composition, config/schema docs, and tests.

## What changed

- Added new experimental feature flag: `features.js_repl`.
- Added freeform `js_repl` tool and companion `js_repl_reset` tool.
- Gated tool availability behind `Feature::JsRepl`.
- Added conditional prompt-section injection for JS REPL instructions
via marker-based prompt processing.
- Implemented JS REPL handlers, including freeform parsing and pragma
support (timeout/reset controls).
- Added runtime resolution order for Node:
  1. `CODEX_JS_REPL_NODE_PATH`
  2. `js_repl_node_path` in config
  3. `PATH`
- Added JS runtime assets/version files and updated docs/schema.

## Why

This enables richer agent workflows that require incremental JavaScript
execution with preserved state, while keeping rollout safe behind an
explicit feature flag.

## Testing

Coverage includes:

- Feature-flag gating behavior for tool exposure.
- Freeform parser/pragma handling edge cases.
- Runtime behavior (state persistence across calls and top-level `await`
support).

## Usage

```toml
[features]
js_repl = true
```

Optional runtime override:

- `CODEX_JS_REPL_NODE_PATH`, or
- `js_repl_node_path` in config.
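For example, a config-side override (the path value is hypothetical):

```toml
js_repl_node_path = "/usr/local/bin/node"  # hypothetical path
```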

#### [git stack](https://github.com/magus/git-stack-cli)
- 👉 `1` https://github.com/openai/codex/pull/10674
-  `2` https://github.com/openai/codex/pull/10672
-  `3` https://github.com/openai/codex/pull/10671
-  `4` https://github.com/openai/codex/pull/10673
-  `5` https://github.com/openai/codex/pull/10670
2026-02-11 12:05:02 -08:00
iceweasel-oai
87279de434 Promote Windows Sandbox (#11341)
1. Move the Windows Sandbox NUX to right after the trust-directory screen
2. Don't offer read-only as an option in the Sandbox NUX; the choices are
Elevated/Legacy/Quit
3. Don't allow new untrusted directories: it's trust or quit
4. Move the experimental sandbox features to `[windows]
sandbox = "elevated|unelevated"`
5. Copy tweaks: elevated -> default, non-elevated -> non-admin
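Item 4 in `config.toml` form, as a sketch:

```toml
[windows]
sandbox = "elevated"  # or "unelevated"
```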
2026-02-11 11:48:33 -08:00
Dylan Hurd
a33fa4bfe5 chore(config) Rename config setting to personality (#10314)
## Summary
Let's make the setting name consistent with the SlashCommand!

## Testing
- [x] Updated tests
2026-01-31 19:38:06 -08:00
Michael Bolin
f4d55319d1 feat: rename experimental_instructions_file to model_instructions_file (#9555)
A user who has `experimental_instructions_file` set will now see this:

<img width="888" height="660" alt="image"
src="https://github.com/user-attachments/assets/51c98312-eb9b-4881-81f1-bea6677e158d"
/>

And a `codex exec` would include this warning:

<img width="888" height="660" alt="image"
src="https://github.com/user-attachments/assets/a89f62be-1edf-4593-a75e-e0b4a762ed7d"
/>
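A minimal before/after sketch of the rename (the file path is
hypothetical):

```toml
# Before (deprecated):
# experimental_instructions_file = "/path/to/instructions.md"

# After:
model_instructions_file = "/path/to/instructions.md"
```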
2026-01-21 02:25:08 +00:00
Dylan Hurd
714151eb4e feat(personality) introduce model_personality config (#9459)
## Summary
Introduces the concept of a `model_personality` config. I would consider
this an MVP for testing out the feature. There are a number of
follow-ups to this PR:

- More sophisticated templating with validation
- In-product experience to manage this
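As introduced here, a minimal sketch assuming a simple string value; the
key is from this PR, but the value shape and example are hypothetical:

```toml
model_personality = "concise"  # hypothetical value
```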

## Testing
- [x] Testing locally
2026-01-20 11:06:14 -08:00
sayan-oai
5e426ac270 add WebSearchMode enum (#9216)
### What
Add `WebSearchMode` enum (disabled, cached, live; defaults to cached) to
config + V2 protocol. This enum takes precedence over legacy flags:
`web_search_cached`, `web_search_request`, and `tools.web_search`.

Keep `--search` as live.

### Tests
Added tests
2026-01-14 12:51:42 -08:00
sayan-oai
40e2405998 add generated jsonschema for config.toml (#8956)
### What
Add JSON Schema generation for `config.toml`, with checked-in
`docs/config.schema.json`. We can move the schema elsewhere if preferred
(and host it if there's demand).

Add fixture test to prevent drift and `just write-config-schema` to
regenerate on schema changes.

Generate MCP config schema from `RawMcpServerConfig` instead of
`McpServerConfig` because that is the runtime type used for
deserialization.

Populate feature flag values into generated schema so they can be
autocompleted.

### Tests
Added tests + regenerate script to prevent drift. Tested autocompletions
using generated jsonschema locally with Even Better TOML.



https://github.com/user-attachments/assets/5aa7cd39-520c-4a63-96fb-63798183d0bc
2026-01-13 10:22:51 -08:00
Javi
915352b10c feat: add analytics config setting (#8350) 2026-01-06 19:04:13 +00:00
Michael Bolin
8ff16a7714 feat: support in-repo .codex/config.toml entries as sources of config info (#8354)
- We now support `.codex/config.toml` in repo (from `cwd` up to the
first `.git` found, if any) as layers in `ConfigLayerStack`. A new
`ConfigLayerSource::Project` variant was added to support this.
- In doing this work, I realized that we were resolving relative paths
in `config.toml` after merging everything into one `toml::Value`, which
is wrong: paths should be relativized with respect to the folder
containing the `config.toml` that was deserialized. This PR introduces a
deserialize/re-serialize strategy to account for this in
`resolve_config_paths()`. (This is why `Serialize` is added to so many
types as part of this PR.)
- Added tests to verify this new behavior.
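A sketch of a project-level layer, illustrating the relative-path rule
above; the key and path are illustrative:

```toml
# <repo>/.codex/config.toml
# Relative paths here resolve against this file's folder,
# not against the process cwd:
model_instructions_file = "../docs/agent-instructions.md"  # hypothetical
```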



---
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/8354).
* #8359
* __->__ #8354
2025-12-22 11:07:36 -08:00
Shijie Rao
987dd7fde3 Chore: remove rmcp feature and exp flag usages (#8087)
### Summary
With codesigning on Mac, Windows and Linux, we should be able to safely
remove `features.rmcp_client` and `use_experimental_use_rmcp_client`
checks from the codebase now.
2025-12-20 14:18:00 -08:00
Michael Bolin
1e9babe178 fix: PathBuf -> AbsolutePathBuf in ConfigToml struct (#8205)
We should not have any `PathBuf` fields in `ConfigToml` or any of the
transitive structs it includes; we should use `AbsolutePathBuf` instead,
so that we do not have to keep track of the file from which `ConfigToml`
was loaded in order to resolve relative paths later when its values are
used.

I only found two instances of this: `experimental_instructions_file` and
`experimental_compact_prompt_file`. Incidentally, when these were
specified as relative paths, they were resolved against `cwd` rather
than `config.toml`'s parent, which seems wrong to me. I changed the
behavior so they are resolved against the parent folder of the
`config.toml` being parsed, which we get "for free" due to the
introduction of `AbsolutePathBufGuard` in
https://github.com/openai/codex/pull/7796.

While it is not great to change the behavior of a released feature,
these fields are prefixed with `experimental_`, which I interpret to
mean we have the liberty to change the contract.

For reference:

- `experimental_instructions_file` was introduced in
https://github.com/openai/codex/pull/1803
- `experimental_compact_prompt_file` was introduced in
https://github.com/openai/codex/pull/5959
2025-12-17 12:08:18 -08:00
Eric Traut
c4af707e09 Removed experimental "command risk assessment" feature (#7799)
This experimental feature received lukewarm reception during internal
testing. Removing from the code base.
2025-12-10 09:48:11 -08:00
Ahmed Ibrahim
71504325d3 Migrate model preset (#7542)
- Introduce `openai_models` in `/core`
- Move `PRESETS` under it
- Move `ModelPreset`, `ModelUpgrade`, and `ReasoningEffortPreset` to
`protocol`
- Introduce `Op::ListModels` and `EventMsg::AvailableModels`

Next steps:
- migrate `app-server` and `tui` to use the introduced Operation
2025-12-03 20:30:43 +00:00
rugvedS07
837bc98a1d LM Studio OSS Support (#2312)
## Overview

Adds LM Studio OSS support. Closes #1883


### Changes
This PR enhances the behavior of the `--oss` flag to support LM Studio as
a provider. Additionally, it introduces a new flag, `--local-provider`,
which can take `lmstudio` or `ollama` as values if the user wants to
explicitly choose which one to use.

If no provider is specified `codex --oss` will auto-select the provider
based on whichever is running.

#### Additional enhancements 
The default can be set using `oss_provider` in config like:

```toml
oss_provider = "lmstudio"
```

Non-interactive users will need to either pass the provider as an arg or
set it in their `config.toml`.

### Notes
For best performance, [set the default context
length](https://lmstudio.ai/docs/app/advanced/per-model) for gpt-oss to
the maximum your machine can support

---------

Co-authored-by: Matt Clayton <matt@lmstudio.ai>
Co-authored-by: Eric Traut <etraut@openai.com>
2025-11-17 11:49:09 -08:00
pakrym-oai
c368c6aeea Remove shell tool when unified exec is enabled (#6345)
Also drop the streamable shell, which is just an alias for unified exec.
2025-11-06 15:46:24 -08:00
Celia Chen
6ef658a9f9 [Hygiene] Remove include_view_image_tool config (#5976)
There's still some debate about whether we want to expose
`tools.view_image` or `feature.view_image`, so those are left unchanged
for now, but this old `include_view_image_tool` config is good to go.
Also updated the doc to reflect that the `view_image` tool now defaults
to true.
2025-10-30 13:23:24 -07:00
jif-oai
f4f9695978 feat: compaction prompt configurable (#5959)
```
codex -c compact_prompt="Summarize in bullet points"
```
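Since `-c key=value` overrides a `config.toml` key, the equivalent config
entry would presumably be:

```toml
compact_prompt = "Summarize in bullet points"
```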
2025-10-30 14:24:24 +00:00
jif-oai
aa76003e28 chore: unify config crates (#5958) 2025-10-30 10:28:32 +00:00