83 Commits

Author SHA1 Message Date
Matthew Zeng
a2c829a808 [connectors] Support connectors part 1 - App server & MCP (#9667)
To make Codex work with connectors, we add a built-in gateway
MCP that acts as a transparent proxy between the client and the
connectors. The gateway MCP collects the actions that are accessible to
the user and sends them down to the user; when a connector action is
chosen to be called, the client invokes the action through the gateway
MCP as well.

 - [x] Add the system built-in gateway MCP to list and run connectors.
 - [x] Add the app server methods and protocol
2026-01-22 16:48:43 -08:00
Josh McKinney
4283a7432b tui: double-press Ctrl+C/Ctrl+D to quit (#8936)
## Problem

Codex’s TUI quit behavior has historically been easy to trigger
accidentally and hard to reason
about.

- `Ctrl+C`/`Ctrl+D` could terminate the UI immediately, even though
these are common keys to press while trying
  to dismiss a modal, cancel a command, or recover from a stuck state.
- “Quit” and “shutdown” were not consistently separated, so some exit
paths could bypass the
  shutdown/cleanup work that should run before the process terminates.

This PR makes quitting both safer (harder to do by accident) and more
uniform across quit
gestures, while keeping the shutdown-first semantics explicit.

## Mental model

After this change, the system treats quitting as a UI request that is
coordinated by the app
layer.

- The UI requests exit via `AppEvent::Exit(ExitMode)`.
- `ExitMode::ShutdownFirst` is the normal user path: the app triggers
`Op::Shutdown`, continues
rendering while shutdown runs, and only ends the UI loop once shutdown
has completed.
- `ExitMode::Immediate` exists as an escape hatch (and as the
post-shutdown “now actually exit”
signal); it bypasses cleanup and should not be the default for
user-triggered quits.

User-facing quit gestures are intentionally “two-step” for safety:

- `Ctrl+C` and `Ctrl+D` no longer exit immediately.
- The first press arms a 1-second window and shows a footer hint (“ctrl
+ <key> again to quit”).
- Pressing the same key again within the window requests a
shutdown-first quit; otherwise the
  hint expires and the next press starts a fresh window.

Key routing remains modal-first:

- A modal/popup gets first chance to consume `Ctrl+C`.
- If a modal handles `Ctrl+C`, any armed quit shortcut is cleared so
dismissing a modal cannot
  prime a subsequent `Ctrl+C` to quit.
- `Ctrl+D` only participates in quitting when the composer is empty and
no modal/popup is active.

The design doc `docs/exit-confirmation-prompt-design.md` captures the
intended routing and the
invariants the UI should maintain.

## Non-goals

- This does not attempt to redesign modal UX or make modals uniformly
dismissible via `Ctrl+C`.
It only ensures modals get priority and that quit arming does not leak
across modal handling.
- This does not introduce a persistent confirmation prompt/menu for
quitting; the goal is to keep
  the exit gesture lightweight and consistent.
- This does not change the semantics of core shutdown itself; it changes
how the UI requests and
  sequences it.

## Tradeoffs

- Quitting via `Ctrl+C`/`Ctrl+D` now requires a deliberate second
keypress, which adds friction for
  users who relied on the old “instant quit” behavior.
- The UI now maintains a small time-bounded state machine for the armed
shortcut, which increases
  complexity and introduces timing-dependent behavior.

This design was chosen over alternatives (a modal confirmation prompt or
a long-lived “are you
sure” state) because it provides an explicit safety barrier while
keeping the flow fast and
keyboard-native.

## Architecture

- `ChatWidget` owns the quit-shortcut state machine and decides when a
quit gesture is allowed
  (idle vs cancellable work, composer state, etc.).
- `BottomPane` owns rendering and local input routing for modals/popups.
It is responsible for
consuming cancellation keys when a view is active and for
showing/expiring the footer hint.
- `App` owns shutdown sequencing: translating
`AppEvent::Exit(ShutdownFirst)` into `Op::Shutdown`
  and only terminating the UI loop when exit is safe.

This keeps “what should happen” decisions (quit vs interrupt vs ignore)
in the chat/widget layer,
while keeping “how it looks and which view gets the key” in the
bottom-pane layer.

## Observability

You can tell this is working by running the TUIs and exercising the quit
gestures:

- While idle: pressing `Ctrl+C` (or `Ctrl+D` with an empty composer and
no modal) shows a footer
hint for ~1 second; pressing again within that window exits via
shutdown-first.
- While streaming/tools/review are active: `Ctrl+C` interrupts work
rather than quitting.
- With a modal/popup open: `Ctrl+C` dismisses/handles the modal (if it
chooses to) and does not
arm a quit shortcut; a subsequent quick `Ctrl+C` should not quit unless
the user re-arms it.

Failure modes are visible as:

- Quits that happen immediately (no hint window) from `Ctrl+C`/`Ctrl+D`.
- Quits that occur while a modal is open and consuming `Ctrl+C`.
- UI termination before shutdown completes (cleanup skipped).

## Tests

- Updated/added unit and snapshot coverage in `codex-tui` and
`codex-tui2` to validate:
  - The quit hint appears and expires on the expected key.
  - Double-press within the window triggers a shutdown-first quit request.
  - Modal-first routing prevents quit bypass and clears any armed shortcut
    when a modal consumes `Ctrl+C`.

These tests focus on the UI-level invariants and rendered output; they
do not attempt to validate
real terminal key-repeat timing or end-to-end process shutdown behavior.

---
Screenshot:
<img width="912" height="740" alt="Screenshot 2026-01-13 at 1 05 28 PM"
src="https://github.com/user-attachments/assets/18f3d22e-2557-47f2-a369-ae7a9531f29f"
/>
2026-01-14 17:42:52 +00:00
sayan-oai
40e2405998 add generated jsonschema for config.toml (#8956)
### What
Add JSON Schema generation for `config.toml`, with checked‑in
`docs/config.schema.json`. We can move the schema elsewhere if preferred
(and host it if there's demand).

Add fixture test to prevent drift and `just write-config-schema` to
regenerate on schema changes.

Generate MCP config schema from `RawMcpServerConfig` instead of
`McpServerConfig` because that is the runtime type used for
deserialization.

Populate feature flag values into generated schema so they can be
autocompleted.

### Tests
Added tests + regenerate script to prevent drift. Tested autocompletions
using generated jsonschema locally with Even Better TOML.
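
One way to exercise the checked-in schema locally is Even Better TOML's `#:schema` directive at the top of `config.toml`; the path below is illustrative and assumes a local checkout:

```toml
#:schema ./codex/docs/config.schema.json
# With the schema associated, keys and feature-flag values autocomplete in the editor.
model = "gpt-5.1-codex"
```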



https://github.com/user-attachments/assets/5aa7cd39-520c-4a63-96fb-63798183d0bc
2026-01-13 10:22:51 -08:00
Thibault Sottiaux
ee9d441777 chore: update outdated docs (#8701) 2026-01-03 02:19:52 -08:00
Eric Traut
ab753387cc Replaced user documentation with links to developers docs site (#8662)
This eliminates redundant user documentation and allows us to focus our
documentation investments.

I left tombstone files for most of the existing ".md" docs files to
avoid broken links. These now contain brief links to the developers docs
site.
2026-01-02 13:01:53 -07:00
Ahmed Ibrahim
40de81e7af Remove reasoning format (#8484)
This isn't a very useful parameter.

logic:
```
if model puts `**` in their reasoning, trim it and visualize the header.
if couldn't trim: don't render
if model doesn't support: don't render
```

We can simplify to:
```
if could trim, visualize header.
if not, don't render
```
2025-12-23 16:01:46 -08:00
Michael Bolin
314937fb11 feat: add support for project_root_markers in config.toml (#8359)
- allow configuring `project_root_markers` in `config.toml`
(user/system/MDM) to control project discovery beyond `.git`
- honor the markers after merging pre-project layers; default to
`[".git"]` when unset and skip ancestor walk when set to an empty array
- document the option and add coverage for alternate markers in config
loader tests
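
A minimal sketch of the new option, assuming it is a top-level key in `config.toml` (the extra marker name is illustrative):

```toml
# Defaults to [".git"] when unset.
project_root_markers = [".git", ".hg"]

# Setting an empty array skips the ancestor walk entirely:
# project_root_markers = []
```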
2025-12-22 19:45:45 +00:00
jif-oai
45727b9ed3 chore: drop undo from the docs (#8431) 2025-12-22 15:09:48 +00:00
Robby He
372de6d2c5 docs: add developer_instructions config option and update descriptions (#8376)
Updates the configuration documentation to clarify and improve the
description of the `developer_instructions` and `instructions` fields.

Documentation updates:

* Added a description for the `developer_instructions` field in
`docs/config.md`, clarifying that it provides additional developer
instructions.
* Updated the comments in `docs/example-config.md` to specify that
`developer_instructions` is injected before `AGENTS.md`, and clarified
that the `instructions` field is ignored and that `AGENTS.md` is
preferred.
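
For illustration, a hedged sketch of the documented relationship, assuming both are top-level keys in `config.toml` (the instruction text is a placeholder):

```toml
# Injected before AGENTS.md / user instructions.
developer_instructions = "Prefer small, focused diffs."

# Ignored by Codex; AGENTS.md is preferred.
# instructions = "..."
```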

___

ref #7973 

Thanks to @miraclebakelaser for the message. I have double-confirmed
that developer instructions are always injected before user
instructions. According to the source code
[codex_core::codex::Session::build_initial_context](https://github.com/openai/codex/blob/rust-v0.77.0-alpha.2/codex-rs/core/src/codex.rs#L1279),
we can see the specific order of these instructions.
2025-12-22 07:37:37 -07:00
Charlie Weems
99cbba8ea5 Update ghost_commit flag reference to undo (#8091)
Minor documentation update to fix #7966 (documentation of undo flag).
2025-12-21 23:27:54 -08:00
Shijie Rao
987dd7fde3 Chore: remove rmcp feature and exp flag usages (#8087)
### Summary
With codesigning on Mac, Windows, and Linux, we should be able to safely
remove the `features.rmcp_client` and `use_experimental_use_rmcp_client`
checks from the codebase now.
2025-12-20 14:18:00 -08:00
Josh McKinney
63942b883c feat(tui2): tune scrolling input based on (#8357)
## TUI2: Normalize Mouse Scroll Input Across Terminals (Wheel +
Trackpad)

This changes TUI2 scrolling to a stream-based model that normalizes
terminal scroll event density into consistent wheel behavior (default:
~3 transcript lines per physical wheel notch) while keeping trackpad
input higher fidelity via fractional accumulation.

Primary code: `codex-rs/tui2/src/tui/scrolling/mouse.rs`

Doc of record (model + probe-derived data):
`codex-rs/tui2/docs/scroll_input_model.md`

### Why

Terminals encode both mouse wheels and trackpads as discrete scroll
up/down events with direction but no magnitude, and they vary widely in
how many raw events they emit per physical wheel notch (commonly 1, 3,
or 9+). Timing alone doesn’t reliably distinguish wheel vs trackpad, so
cadence-based heuristics are unstable across terminals/hardware.

This PR treats scroll input as short *streams* separated by silence or
direction flips, normalizes raw event density into tick-equivalents,
coalesces redraws for dense streams, and exposes explicit config
overrides.

### What Changed

#### Scroll Model (TUI2)

- Stream detection
  - Start a stream on the first scroll event.
  - End a stream on an idle gap (`STREAM_GAP_MS`) or a direction flip.
- Normalization
  - Convert raw events into tick-equivalents using per-terminal `tui.scroll_events_per_tick`.
- Wheel-like vs trackpad-like behavior
  - Wheel-like: fixed “classic” lines per wheel notch; flush immediately for responsiveness.
  - Trackpad-like: fractional accumulation + carry across stream boundaries; coalesce flushes to ~60Hz to avoid floods and reduce “stop lag / overshoot”.
  - Trackpad divisor is intentionally capped: `min(scroll_events_per_tick, 3)` so terminals with dense wheel ticks (e.g. 9 events per notch) don’t make trackpads feel artificially slow.
- Auto mode (default)
  - Start conservatively as trackpad-like (avoid overshoot).
  - Promote to wheel-like if the first tick-worth of events arrives quickly.
  - Fallback for 1-event-per-tick terminals (no tick-completion timing signal).

#### Trackpad Acceleration

Some terminals produce relatively low vertical event density for
trackpad gestures, which makes large/faster swipes feel sluggish even
when small motions feel correct. To address that, trackpad-like streams
apply a bounded multiplier based on event count:

- `multiplier = clamp(1 + abs(events) / scroll_trackpad_accel_events,
1..scroll_trackpad_accel_max)`

The multiplier is applied to the trackpad stream’s computed line delta
(including carried fractional remainder). Defaults are conservative and
bounded.
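
For example, with the defaults listed below (`scroll_trackpad_accel_events = 30`, `scroll_trackpad_accel_max = 3`), a 45-event trackpad stream gets:

$$ \text{multiplier} = \mathrm{clamp}\bigl(1 + \tfrac{45}{30},\ 1,\ 3\bigr) = 2.5 $$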

#### Config Knobs (TUI2)

All keys live under `[tui]`:

- `scroll_wheel_lines`: lines per physical wheel notch (default: 3).
- `scroll_events_per_tick`: raw vertical scroll events per physical wheel notch (terminal-specific default; fallback: 3).
  - Wheel-like per-event contribution: `scroll_wheel_lines / scroll_events_per_tick`.
- `scroll_trackpad_lines`: baseline trackpad sensitivity (default: 1).
  - Trackpad-like per-event contribution: `scroll_trackpad_lines / min(scroll_events_per_tick, 3)`.
- `scroll_trackpad_accel_events` / `scroll_trackpad_accel_max`: bounded trackpad acceleration (defaults: 30 / 3).
- `scroll_mode = auto|wheel|trackpad`: force behavior or use the heuristic (default: `auto`).
- `scroll_wheel_tick_detect_max_ms`: auto-mode promotion threshold (ms).
- `scroll_wheel_like_max_duration_ms`: auto-mode fallback for 1-event-per-tick terminals (ms).
- `scroll_invert`: invert scroll direction (applies to wheel + trackpad).
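
Putting the knobs together, a sample `[tui]` block using the defaults stated above (keys without a stated default are shown with illustrative values):

```toml
[tui]
scroll_wheel_lines = 3            # lines per physical wheel notch
scroll_events_per_tick = 3        # fallback when no terminal-specific default applies
scroll_trackpad_lines = 1         # baseline trackpad sensitivity
scroll_trackpad_accel_events = 30 # bounded trackpad acceleration
scroll_trackpad_accel_max = 3
scroll_mode = "auto"              # auto | wheel | trackpad
scroll_invert = false             # illustrative; inverts scroll direction when true
```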

Config docs: `docs/config.md` and field docs in
`codex-rs/core/src/config/types.rs`.

#### App Integration

- The app schedules follow-up ticks to close idle streams (via
`ScrollUpdate::next_tick_in` and `schedule_frame_in`) and finalizes
streams on draw ticks.
  - `codex-rs/tui2/src/app.rs`

#### Docs

- Single doc of record describing the model + preserved probe
findings/spec:
  - `codex-rs/tui2/docs/scroll_input_model.md`

#### Other (jj-only friendliness)

- `codex-rs/tui2/src/diff_render.rs`: prefer stable cwd-relative paths
when the file is under the cwd even if there’s no `.git`.

### Terminal Defaults

Per-terminal defaults are derived from scroll-probe logs (see doc).
Notable:

- Ghostty currently defaults to `scroll_events_per_tick = 3` even though
logs measured ~9 in one setup. This is a deliberate stopgap; if your
Ghostty build emits ~9 events per wheel notch, set:

  ```toml
  [tui]
  scroll_events_per_tick = 9
  ```

### Testing

- `just fmt`
- `just fix -p codex-core --allow-no-vcs`
- `cargo test -p codex-core --lib` (pass)
- `cargo test -p codex-tui2` (scroll tests pass; remaining failures are
known flaky VT100 color tests in `insert_history`)

### Review Focus

- Stream finalization + frame scheduling in `codex-rs/tui2/src/app.rs`.
- Auto-mode promotion thresholds and the 1-event-per-tick fallback
behavior.
- Trackpad divisor cap (`min(events_per_tick, 3)`) and acceleration
defaults.
- Ghostty default tradeoff (3 vs ~9) and whether we should change it.
2025-12-20 12:48:12 -08:00
Andrew Ambrosino
9fb9ed6cea Set exclude to true by default in app server (#8281) 2025-12-18 14:28:30 -08:00
jif-oai
3d92b443b0 feat: add config to disable warnings around ghost snapshot (#8178) 2025-12-17 18:50:22 +00:00
Michael Bolin
bef36f4ae7 feat: if .codex is a sub-folder of a writable root, then make it read-only to the sandbox (#8088)
In preparation for in-repo configuration support, this updates
`WritableRoot::get_writable_roots_with_cwd()` to include the `.codex`
subfolder in `WritableRoot.read_only_subpaths`, if it exists, as we
already do for `.git`.

As noted, like `.git`, `.codex` will currently only be read-only under
macOS Seatbelt, but we plan to bring support to other OSes as well.

Updated the integration test in `seatbelt.rs` so that it actually
attempts to run the generated Seatbelt commands, verifying that:

- trying to write to `.codex/config.toml` in a writable root fails
- trying to write to `.git/hooks/pre-commit` in a writable root fails
- trying to write to the writable root containing the `.codex` and
`.git` subfolders succeeds
2025-12-15 22:54:43 -08:00
Lucas Kim
54def78a22 docs: fix gpt-5.2 typo in config.md (#8079)
Fix small typo in docs/config.md: `gpt5-2` -> `gpt-5.2`
2025-12-15 15:15:14 -08:00
Eric Traut
5b472c933d Fixed formatting issue (#8069) 2025-12-15 06:18:33 -08:00
Mikhail Beliakov
4501c0ece4 Update config.md (#8066)
Update supporting docs with the actual options
2025-12-15 06:12:52 -08:00
jif-oai
4274e6189a feat: config ghost commits (#7873) 2025-12-15 09:13:06 +01:00
Victor Vannara
7c6a47958a docs: document enabling experimental skills (#8024)
## Notes

Skills are behind the experimental `skills` feature flag (disabled by
default), but the skills guide didn't explain how to turn them on.

- Add an explicit enable section to `docs/skills.md` (config +
`--enable`)
- Add the skills flag to `docs/config.md` and `docs/example-config.md`
- Document the `/skills` slash command
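
As a sketch, enabling the flag in `config.toml` would look roughly like this, assuming the `skills` flag lives under the `[features]` table like other feature flags:

```toml
[features]
skills = true  # or enable per-invocation with: codex --enable skills
```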
2025-12-14 14:34:22 -08:00
Victor Vannara
190fa9e104 docs: clarify xhigh reasoning effort on gpt-5.2 (#7911)
## Changes
- Update config docs and example config comments to state that "xhigh"
is supported on gpt-5.2 as well as gpt-5.1-codex-max
- Adjust the FAQ model-support section to reflect broader xhigh
availability
2025-12-11 21:18:47 -08:00
dank-openai
36610d975a Fix toasts on Windows under WSL 2 (#7137)
Before this: no notifications or toasts when using Codex CLI in WSL 2.

After this: I get toasts from Codex
2025-12-11 15:09:00 -08:00
Eric Traut
c4af707e09 Removed experimental "command risk assessment" feature (#7799)
This experimental feature received a lukewarm reception during internal
testing. Removing it from the code base.
2025-12-10 09:48:11 -08:00
Josh McKinney
0c8828c5e2 feat(tui2): add feature-flagged tui2 frontend (#7793)
Introduce a new codex-tui2 crate that re-exports the existing
interactive TUI surface and delegates run_main directly to codex-tui.
This keeps behavior identical while giving tui2 its own crate for future
viewport work.

Wire the codex CLI to select the frontend via the tui2 feature flag.
When the merged CLI overrides include features.tui2=true (e.g. via
--enable tui2), interactive runs are routed through
codex_tui2::run_main; otherwise they continue to use the original
codex_tui::run_main.

Register Feature::Tui2 in the core feature registry and add the tui2
crate and dependency entries so the new frontend builds alongside the
existing TUI.

This is a stub that only wires up the feature flag for this.
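
Based on the `features.tui2=true` override described above, opting in via `config.toml` looks roughly like:

```toml
[features]
tui2 = true  # equivalent to passing --enable tui2 on the CLI
```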

<img width="619" height="364" alt="image"
src="https://github.com/user-attachments/assets/4893f030-932f-471e-a443-63fe6b5d8ed9"
/>
2025-12-09 16:23:53 -08:00
gameofby
98923654d0 fix: refine the warning message and docs for deprecated tools config (#7685)
Issue #7661 revealed that users are confused by deprecation warnings
like:
> `tools.web_search` is deprecated. Use `web_search_request` instead.

This message misleadingly suggests renaming the config key from
`web_search` to `web_search_request`, when the actual required change is
to **move and rename the configuration from the `[tools]` section to the
`[features]` section**.

This PR clarifies the warning messages and documentation to make it
clear that deprecated `[tools]` configurations should be moved to
`[features]` (see the sketch below). Changes made:
- Updated deprecation warning format in `codex-rs/core/src/codex.rs:520`
to include `[features].` prefix
- Updated corresponding test expectations in
`codex-rs/core/tests/suite/deprecation_notice.rs:39`
- Improved documentation in `docs/config.md` to clarify upfront that
`[tools]` options are deprecated in favor of `[features]`
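
A before/after sketch of the migration the warning now points at (using `web_search` as the example from the issue):

```toml
# Deprecated: [tools] section
[tools]
web_search = true

# Preferred: move and rename under [features]
[features]
web_search_request = true
```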
2025-12-08 01:23:21 -08:00
Robby He
57ba9fa100 fix(doc): TOML otel exporter example — multi-line inline table is invalid (#7668) (#7669)

The `otel` exporter example in `docs/config.md` is misleading and will
cause
the configuration parser to fail if copied verbatim.

Summary
-------
The example uses a TOML inline table but spreads the inline-table braces
across multiple lines. TOML inline tables must be contained on a single
line
(`key = { a = 1, b = 2 }`); placing newlines inside the braces triggers
a
parse error in most TOML parsers and prevents Codex from starting.

Reproduction
------------
1. Paste the snippet below into `~/.codex/config.toml` (or your project
config).
2. Run `codex` (or the command that loads the config).
3. The process will fail to start with a TOML parse error similar to:

```text
Error loading config.toml: TOML parse error at line 55, column 27
   |
55 | exporter = { otlp-http = {
   |                           ^
newlines are unsupported in inline tables, expected nothing
```

Problematic snippet (as currently shown in the docs)
---------------------------------------------------
```toml
[otel]
exporter = { otlp-http = {
  endpoint = "https://otel.example.com/v1/logs",
  protocol = "binary",
  headers = { "x-otlp-api-key" = "${OTLP_TOKEN}" }
}}
```

Recommended fixes
------------------
```toml
[otel.exporter."otlp-http"]
endpoint = "https://otel.example.com/v1/logs"
protocol = "binary"

[otel.exporter."otlp-http".headers]
"x-otlp-api-key" = "${OTLP_TOKEN}"
```

Or, keep an inline table but write it on one line (valid but less
readable):

```toml
[otel]
exporter = { "otlp-http" = { endpoint = "https://otel.example.com/v1/logs", protocol = "binary", headers = { "x-otlp-api-key" = "${OTLP_TOKEN}" } } }
```
2025-12-08 01:20:23 -08:00
Jay Sabva
315b1e957d docs: fix documentation of rmcp client flag (#7665)
## Summary
- Updated the rmcp client flag's documentation in the `config.md` file
- Changed the flag name from `experimental_use_rmcp_client` to `rmcp_client`
2025-12-06 10:17:18 -08:00
zhao-oai
3d35cb4619 Refactor execpolicy fallback evaluation (#7544)
## Refactor of the `execpolicy` crate

To illustrate why we need this refactor, consider an agent attempting to
run `apple | rm -rf ./`. Suppose `apple` is allowed by `execpolicy`.
Before this PR, `execpolicy` would consider both commands in the
pipeline but only render one rule match: `Allow`. We would skip any
heuristic checks on `rm -rf ./` and immediately approve
`apple | rm -rf ./` to run.

To fix this, we now thread a `fallback` evaluation function into
`execpolicy` that runs when no `execpolicy` rules match a given command.
In our example, we would run `fallback` on `rm -rf ./` and prevent
`apple | rm -rf ./` from being run without approval.
2025-12-03 23:39:48 -08:00
zhao-oai
e925a380dc whitelist command prefix integration in core and tui (#7033)
this PR enables TUI to approve commands and add their prefixes to an
allowlist:
<img width="708" height="605" alt="Screenshot 2025-11-21 at 4 18 07 PM"
src="https://github.com/user-attachments/assets/56a19893-4553-4770-a881-becf79eeda32"
/>

Note: we only show the option to whitelist a command when
1) the command is not multi-part (e.g. `git add -A && git commit -m 'hello
world'`)
2) the command is not already matched by an existing rule
2025-12-03 23:17:02 -08:00
liam
4d4778ec1c Trim history.jsonl when history.max_bytes is set (#6242)
This PR honors the `history.max_bytes` configuration parameter by
trimming `history.jsonl` whenever it grows past the configured limit.
While appending new entries we retain the newest record, drop the oldest
lines to stay within the byte budget, and serialize the compacted file
back to disk under the same lock to keep writers safe.
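
For reference, a hedged example of the setting being honored (the byte value is illustrative):

```toml
[history]
max_bytes = 1048576  # trim history.jsonl once it grows past ~1 MiB
```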
2025-12-02 14:01:05 -08:00
Kaden Gruizenga
41760f8a09 docs: clarify codex max defaults and xhigh availability (#7449)
## Summary
Adds the missing `xhigh` reasoning level everywhere it should have been
documented, and makes clear it only works with `gpt-5.1-codex-max`.

## Changes

* `docs/config.md`
  * Add `xhigh` to the official list of reasoning levels with a note that `xhigh` is exclusive to Codex Max.

* `docs/example-config.md`
  * Update the example comment adding `xhigh` as a valid option, but only for Codex Max.

* `docs/faq.md`
  * Update the model recommendation to `GPT-5.1 Codex Max`.
  * Mention that users can choose `high` or the newly documented `xhigh` level when using Codex Max.
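
A minimal sketch of selecting the documented level, assuming the standard `model` and `model_reasoning_effort` keys in `config.toml`:

```toml
model = "gpt-5.1-codex-max"
model_reasoning_effort = "xhigh"  # per this change, xhigh is exclusive to Codex Max
```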
2025-12-01 10:46:53 -08:00
Gabriel Peal
3741f387e9 Allow enterprises to skip upgrade checks and messages (#7213)
This is a feature primarily for enterprises who centrally manage Codex
updates.
2025-11-24 15:04:49 -05:00
Eric Traut
207d94b0e7 Removed streamable_shell from docs (#7235)
This config option no longer exists

Addresses #7207
2025-11-24 11:47:57 -08:00
jif-oai
af65666561 chore: drop model_max_output_tokens (#7100) 2025-11-21 17:42:54 +00:00
Eric Traut
d909048a85 Added feature switch to disable animations in TUI (#6870)
This PR adds support for a new feature flag, `tui.animations`. By
default, the TUI uses animations in its welcome screen, "working"
spinners, and "shimmer" effects. These animations can interfere with
screen readers, so it's good to provide a way to disable them (example
below).

This change is inspired by [a
PR](https://github.com/openai/codex/pull/4014) contributed by @Orinks.
That PR has faltered a bit, but I think the core idea is sound. This
version incorporates feedback from @aibrahim-oai. In particular:
1. It uses a feature flag (`tui.animations`) rather than the unqualified
CLI key `no-animations`. Feature flags are the preferred way to expose
boolean switches. They are also exposed via CLI command switches.
2. It includes more complete documentation.
3. It disables a few animations that the other PR omitted.
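
A minimal example of opting out, assuming `tui.animations` maps onto the `[tui]` table like other `tui.*` settings:

```toml
[tui]
animations = false  # disable welcome-screen, spinner, and shimmer animations
```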
2025-11-20 10:40:08 -08:00
Ahmed Ibrahim
d5dfba2509 feat: arcticfox in the wild (#6906)
<img width="485" height="600" alt="image"
src="https://github.com/user-attachments/assets/4341740d-dd58-4a3e-b69a-33a3be0606c5"
/>

---------

Co-authored-by: jif-oai <jif@openai.com>
2025-11-19 16:31:06 +00:00
Ahmed Ibrahim
793063070b fix: typos in model picker (#6859)
2025-11-19 06:29:02 +00:00
simister
0bf857bc91 Fix typo in config.md for MCP server (#6845)
2025-11-18 14:06:13 -08:00
Anton Panasenko
f7a921039c [codex][otel] support mtls configuration (#6228)
fix for https://github.com/openai/codex/issues/6153

supports mTLS configuration and includes TLS features in the library
build to enable secure HTTPS connections with custom root certificates.

grpc:
https://docs.rs/tonic/0.13.1/src/tonic/transport/channel/endpoint.rs.html#63
https:
https://docs.rs/reqwest/0.12.23/src/reqwest/async_impl/client.rs.html#516
2025-11-18 14:01:01 -08:00
Ahmed Ibrahim
3de8790714 Add the utility to truncate by tokens (#6746)
- This PR puts us on the path toward truncating by tokens. This path will
initially be used by unified exec and the context manager (responsible
mainly for MCP calls).
- We are exposing a new config, `calls_output_max_tokens` (see the
example below).
- Use tokens as the main budget unit but truncate based on the model
family by introducing `TruncationPolicy`.
- Introduce `truncate_text` as a router for truncation based on the
mode.
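
A hedged sketch of the new knob (placement and value are illustrative; the PR only names the key):

```toml
calls_output_max_tokens = 10000  # illustrative token budget for call output truncation
```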

In next PRs:
- remove truncate_with_line_bytes_budget
- Add the ability to the model to override the token budget.
2025-11-18 11:36:23 -08:00
Ahmed Ibrahim
ddcc60a085 Update defaults to gpt-5.1 (#6652)
## Summary
- update documentation, example configs, and automation defaults to
reference gpt-5.1 / gpt-5.1-codex
- bump the CLI and core configuration defaults, model presets, and error
messaging to the new models while keeping the model-family/tool coverage
for legacy slugs
- refresh tests, fixtures, and TUI snapshots so they expect the upgraded
defaults

## Testing
- `cargo test -p codex-core
config::tests::test_precedence_fixture_with_gpt5_profile`


------
[Codex
Task](https://chatgpt.com/codex/tasks/task_i_6916c5b3c2b08321ace04ee38604fc6b)
2025-11-17 17:40:11 -08:00
rugvedS07
837bc98a1d LM Studio OSS Support (#2312)
## Overview

Adds LM Studio OSS support. Closes #1883


### Changes
This PR enhances the behavior of the `--oss` flag to support LM Studio
as a provider. Additionally, it introduces a new flag,
`--local-provider`, which accepts `lmstudio` or `ollama` if the user
wants to explicitly choose which one to use.

If no provider is specified `codex --oss` will auto-select the provider
based on whichever is running.

#### Additional enhancements 
The default can be set using `oss_provider` in config like:

```toml
oss_provider = "lmstudio"
```

Non-interactive users will need to either provide the provider as an
arg or have it in their `config.toml`.

### Notes
For best performance, [set the default context
length](https://lmstudio.ai/docs/app/advanced/per-model) for gpt-oss to
the maximum your machine can support

---------

Co-authored-by: Matt Clayton <matt@lmstudio.ai>
Co-authored-by: Eric Traut <etraut@openai.com>
2025-11-17 11:49:09 -08:00
Jeremy Rose
799364de87 Enable TUI notifications by default (#6633)
## Summary
- default the `tui.notifications` setting to enabled so desktop
notifications work out of the box
- update configuration tests and documentation to reflect the new
default
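
To opt back out of the new default, a minimal `config.toml` sketch:

```toml
[tui]
notifications = false  # notifications are now enabled by default
```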

## Testing
- `cargo test -p codex-core` *(fails:
`exec::tests::kill_child_process_group_kills_grandchildren_on_timeout`
is flaky in this sandbox because the spawned grandchild process stays
alive)*
- `cargo test -p codex-core
exec::tests::kill_child_process_group_kills_grandchildren_on_timeout`
*(fails: same sandbox limitation as above)*

------
[Codex
Task](https://chatgpt.com/codex/tasks/task_i_69166f811144832c9e8aaf8ee2642373)
2025-11-14 09:28:09 -08:00
Eric Traut
65cb1a1b77 Updated docs to reflect recent changes in web_search configuration (#6376)
This is a simplified version of [a
PR](https://github.com/openai/codex/pull/6134) supplied by a community
member.

It updates the docs to reflect a recent config deprecation.
2025-11-10 07:57:56 -08:00
Ahmed Ibrahim
d40a6b7f73 fix: Update the deprecation message to link to the docs (#6211)
The deprecation message is currently a bit confusing. Users may not
understand what `[features].x` is. I updated the docs and the
deprecation message to provide more guidance.

---------

Co-authored-by: Gabriel Peal <gpeal@users.noreply.github.com>
2025-11-04 21:02:27 +00:00
Tony Dong
1ac4fb45d2 Fixing small typo in docs (#5659)
Fixing a typo in the docs

Co-authored-by: Eric Traut <etraut@openai.com>
2025-10-31 16:41:05 -07:00
Celia Chen
6ef658a9f9 [Hygiene] Remove include_view_image_tool config (#5976)
There's still some debate about whether we want to expose
`tools.view_image` or `feature.view_image`, so those are left unchanged
for now, but this old `include_view_image_tool` config is good to go.
Also updated the doc to reflect that the `view_image` tool now defaults
to true.
2025-10-30 13:23:24 -07:00
Celia Chen
4a42c4e142 [Auth] Choose which auth storage to use based on config (#5792)
This PR is a follow-up to #5591. It allows users to choose which auth
storage mode they want by using the new
`cli_auth_credentials_store_mode` config.
2025-10-27 19:41:49 -07:00
Gabriel Peal
7aab45e060 [MCP] Minor docs clarifications around stdio tokens (#5676)
Noticed
[here](https://github.com/openai/codex/issues/4707#issuecomment-3446547561)
2025-10-26 13:38:30 -04:00
Gabriel Peal
4cd6b01494 [MCP] Remove the legacy stdio client in favor of rmcp (#5529)
I haven't heard of any issues with the stdio rmcp client, so let's
remove the legacy one and default to the new one.

Any code changes are moving code from the adapter inline but there
should be no meaningful functionality changes.
2025-10-22 12:06:59 -07:00