…alid (#7668)
The `otel` exporter example in `docs/config.md` is misleading and will
cause
the configuration parser to fail if copied verbatim.
Summary
-------
The example uses a TOML inline table but spreads the inline-table braces
across multiple lines. TOML inline tables must be contained on a single
line
(`key = { a = 1, b = 2 }`); placing newlines inside the braces triggers
a
parse error in most TOML parsers and prevents Codex from starting.
Reproduction
------------
1. Paste the snippet below into `~/.codex/config.toml` (or your project
config).
2. Run `codex` (or the command that loads the config).
3. The process will fail to start with a TOML parse error similar to:
```text
Error loading config.toml: TOML parse error at line 55, column 27
|
55 | exporter = { otlp-http = {
| ^
newlines are unsupported in inline tables, expected nothing
```
Problematic snippet (as currently shown in the docs)
---------------------------------------------------
```toml
[otel]
exporter = { otlp-http = {
endpoint = "https://otel.example.com/v1/logs",
protocol = "binary",
headers = { "x-otlp-api-key" = "${OTLP_TOKEN}" }
}}
```
Recommended fixes
------------------
```toml
[otel.exporter."otlp-http"]
endpoint = "https://otel.example.com/v1/logs"
protocol = "binary"
[otel.exporter."otlp-http".headers]
"x-otlp-api-key" = "${OTLP_TOKEN}"
```
Or, keep an inline table but write it on one line (valid but less
readable):
```toml
[otel]
exporter = { "otlp-http" = { endpoint = "https://otel.example.com/v1/logs", protocol = "binary", headers = { "x-otlp-api-key" = "${OTLP_TOKEN}" } } }
```
Update install and contributing guides to use the root justfile helpers
(`just fmt`, `just fix -p <crate>`, and targeted tests) instead of the
older cargo fmt/clippy/test instructions that have been in place since
459363e17b. This matches the justfile relocation to the repo root in
952d6c946 and the current lint/test workflow for CI (see
`.github/workflows/rust-ci.yml`).
## Refactor of the `execpolicy` crate
To illustrate why we need this refactor, consider an agent attempting to
run `apple | rm -rf ./`. Suppose `apple` is allowed by `execpolicy`.
Before this PR, `execpolicy` would consider both `apple` and `rm -rf ./`
and render only one rule match: `Allow`. We would skip any heuristics
checks on `rm -rf ./` and immediately approve `apple | rm -rf ./` to run.
To fix this, we now thread a `fallback` evaluation function into
`execpolicy` that runs when no `execpolicy` rules match a given command.
In our example, we would run `fallback` on `rm -rf ./` and prevent
`apple | rm -rf ./` from being run without approval.
This PR enables the TUI to approve commands and add their prefixes to an
allowlist:
<img width="708" height="605" alt="Screenshot 2025-11-21 at 4 18 07 PM"
src="https://github.com/user-attachments/assets/56a19893-4553-4770-a881-becf79eeda32"
/>
Note: we only show the option to allowlist the command when
1) the command is not multi-part (e.g. `git add -A && git commit -m 'hello
world'`), and
2) the command is not already matched by an existing rule.
This PR honors the `history.max_bytes` configuration parameter by
trimming `history.jsonl` whenever it grows past the configured limit.
While appending new entries, we retain the newest record, drop the oldest
lines to stay within the byte budget, and serialize the compacted file
back to disk under the same lock to keep writers safe.
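For reference, a minimal sketch of the setting this PR honors; the value
shown is illustrative, not a default:
```toml
[history]
# Trim history.jsonl once it grows past this many bytes (~10 MiB here).
max_bytes = 10485760
```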
This change prototypes support for Skills with the CLI. This is an
**experimental** feature for internal testing.
---------
Co-authored-by: Gav Verma <gverma@openai.com>
## Summary
Adds the missing `xhigh` reasoning level everywhere it should have been
documented, and makes clear it only works with `gpt-5.1-codex-max`.
## Changes
* `docs/config.md`
* Add `xhigh` to the official list of reasoning levels with a note that
`xhigh` is exclusive to Codex Max.
* `docs/example-config.md`
* Update the example comment adding `xhigh` as a valid option but only
for Codex Max.
* `docs/faq.md`
* Update the model recommendation to `GPT-5.1 Codex Max`.
* Mention that users can choose `high` or the newly documented `xhigh`
level when using Codex Max.
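As a sketch of the newly documented level in use (assuming the existing
`model` and `model_reasoning_effort` keys in `config.toml`):
```toml
model = "gpt-5.1-codex-max"
# "xhigh" is exclusive to gpt-5.1-codex-max; other models top out at "high".
model_reasoning_effort = "xhigh"
```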
This PR is a modified version of [a
PR](https://github.com/openai/codex/pull/7316) submitted by @yydrowz3.
* Removes a redundant `experimental_sandbox_command_assessment` flag
* Moves `mcp_oauth_credentials_store` out of the `[features]` table, where
it doesn't belong
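After the move, the key presumably sits at the top level of `config.toml`;
a sketch (the `"keyring"` value is an assumption, not taken from this PR):
```toml
# No longer under [features]; the value shown here is an assumption.
mcp_oauth_credentials_store = "keyring"
```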
This PR is documentation-only; it:
- addresses #7231 by adding a paragraph in `docs/getting-started.md`
in the tips category to encourage users to load everything needed in
their environment
- corrects link referencing in `docs/platform-sandboxing.md` so that the
page link opens at the right section
- removes the explicit heading IDs like {#my-id} in `docs/advanced.md`
which are not supported by GitHub and are **not** rendered in the UI:
<img width="1198" height="849" alt="Screenshot 2025-11-26 at 16 25 31"
src="https://github.com/user-attachments/assets/308d33c3-81d3-4785-a6c1-e9377e6d3ea6"
/>
This caused the following links in `README.md` to not work in `main` but
to work in this branch (you can test by going to
https://github.com/openai/codex/blob/docs/getting-started-enhancement/README.md)
- the MCP link goes straight to the correct section now:
```markdown
- [**Advanced**](./docs/advanced.md)
- [Tracing / verbose logging](./docs/advanced.md#tracing--verbose-logging)
- [Model Context Protocol (MCP)](./docs/advanced.md#model-context-protocol-mcp)
```
---------
Signed-off-by: lionel-oai <lionel@openai.com>
Signed-off-by: lionelchg <lionel.cheng@hotmail.fr>
Co-authored-by: lionelchg <lionel.cheng@hotmail.fr>
This PR adds support for a new feature flag, `tui.animations`. By
default, the TUI uses animations in its welcome screen, "working"
spinners, and "shimmer" effects. These animations can interfere with
screen readers, so it's good to provide a way to disable them.
This change is inspired by [a
PR](https://github.com/openai/codex/pull/4014) contributed by @Orinks.
That PR has faltered a bit, but I think the core idea is sound. This
version incorporates feedback from @aibrahim-oai. In particular:
1. It uses a feature flag (`tui.animations`) rather than the unqualified
CLI key `no-animations`. Feature flags are the preferred way to expose
boolean switches. They are also exposed via CLI command switches.
2. It includes more complete documentation.
3. It disables a few animations that the other PR omitted.
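A minimal sketch of opting out of animations (placement under `[tui]` is
assumed here, mirroring other `tui.*` settings such as `tui.notifications`):
```toml
[tui]
# Disable welcome-screen, spinner, and shimmer animations (e.g. for screen readers).
animations = false
```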
By default, show only sessions whose cwd matches the current cwd.
`--all` shows all sessions across all cwds. Also, show the branch name
from the rollout metadata.
<img width="1091" height="638" alt="Screenshot 2025-11-04 at 3 30 47 PM"
src="https://github.com/user-attachments/assets/aae90308-6115-455f-aff7-22da5f1d9681"
/>
- This PR puts us on the path to truncating by tokens. This path will
initially be used by unified exec and the context manager (mainly
responsible for MCP calls).
- We expose a new config, `calls_output_max_tokens`.
- Use `tokens` as the main budget unit, but truncate based on the model
family by introducing `TruncationPolicy`.
- Introduce `truncate_text` as a router for truncation based on the mode.
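A sketch of the new knob (the top-level placement and the value are
assumptions; only the key name comes from this PR):
```toml
# Token budget for call outputs (unified exec, MCP calls); value illustrative.
calls_output_max_tokens = 4096
```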
In next PRs:
- Remove `truncate_with_line_bytes_budget`.
- Add the ability for the model to override the token budget.
## Summary
- update documentation, example configs, and automation defaults to
reference gpt-5.1 / gpt-5.1-codex
- bump the CLI and core configuration defaults, model presets, and error
messaging to the new models while keeping the model-family/tool coverage
for legacy slugs
- refresh tests, fixtures, and TUI snapshots so they expect the upgraded
defaults
## Testing
- `cargo test -p codex-core
config::tests::test_precedence_fixture_with_gpt5_profile`
------
[Codex
Task](https://chatgpt.com/codex/tasks/task_i_6916c5b3c2b08321ace04ee38604fc6b)
## Overview
Adds LM Studio OSS support. Closes #1883
### Changes
This PR enhances the behavior of the `--oss` flag to support LM Studio as
a provider. Additionally, it introduces a new flag, `--local-provider`,
which can take `lmstudio` or `ollama` as values if the user wants to
explicitly choose which one to use.
If no provider is specified, `codex --oss` will auto-select the provider
based on whichever is running.
#### Additional enhancements
The default can be set using `oss_provider` in config like:
```toml
oss_provider = "lmstudio"
```
Non-interactive users will need to either provide the provider as an arg
or set it in their `config.toml`.
### Notes
For best performance, [set the default context
length](https://lmstudio.ai/docs/app/advanced/per-model) for gpt-oss to
the maximum your machine can support.
---------
Co-authored-by: Matt Clayton <matt@lmstudio.ai>
Co-authored-by: Eric Traut <etraut@openai.com>
The Custom Prompts documentation (docs/prompts.md) was incomplete for
named arguments:
1. **Documentation for custom prompts was incomplete** - named argument
usage was mentioned briefly but lacked comprehensive canonical examples
showing proper syntax and behavior.
2. **Fixed by adding canonical, tested syntax and examples:**
- Example 1: Basic named arguments with TICKET_ID and TICKET_TITLE
- Example 2: Mixed positional and named arguments with FILE and FOCUS
- Example 3: Using positional arguments
- Example 4: Updated draftpr example to use proper $FEATURE_NAME syntax
- Added clear usage examples showing KEY=value syntax
- Added expanded prompt examples showing the result
- Documented error handling and validation requirements
3. **Added Implementation Reference section** that references the
relevant feature implementation from the codebase (PRs #4470 and #4474
for initial implementation, #5332 and #5403 for clarifications).
This addresses issue #5039 by providing complete, accurate documentation
for named argument usage in custom prompts.
---------
Co-authored-by: Eric Traut <etraut@openai.com>
## Summary
- default the `tui.notifications` setting to enabled so desktop
notifications work out of the box
- update configuration tests and documentation to reflect the new
default
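Users who prefer the old behavior can opt back out; a minimal sketch:
```toml
[tui]
# Now enabled by default; set to false to silence desktop notifications.
notifications = false
```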
## Testing
- `cargo test -p codex-core` *(fails:
`exec::tests::kill_child_process_group_kills_grandchildren_on_timeout`
is flaky in this sandbox because the spawned grandchild process stays
alive)*
- `cargo test -p codex-core
exec::tests::kill_child_process_group_kills_grandchildren_on_timeout`
*(fails: same sandbox limitation as above)*
------
[Codex
Task](https://chatgpt.com/codex/tasks/task_i_69166f811144832c9e8aaf8ee2642373)
## Summary
- add a `hide_rate_limit_model_nudge` notice flag plus config edit
plumbing so the rate limit reminder preference is persisted and
documented
- extend the chat widget prompt with a "never show again" option, and
wire new app events so selecting it hides future nudges immediately and
writes the config
- add unit coverage and refresh the snapshot for the three-option prompt
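A sketch of the persisted preference (placement under a `[notices]` table
is an assumption based on the flag being described as a notice flag):
```toml
[notices]
# Written automatically when the user picks "never show again".
hide_rate_limit_model_nudge = true
```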
## Testing
- `just fmt`
- `just fix -p codex-tui`
- `just fix -p codex-core`
- `cargo test -p codex-tui`
- `cargo test -p codex-core` *(fails at
`exec::tests::kill_child_process_group_kills_grandchildren_on_timeout`:
grandchild process still alive)*
------
[Codex
Task](https://chatgpt.com/codex/tasks/task_i_6910d7f407748321b2661fc355416994)
This is a simplified version of [a
PR](https://github.com/openai/codex/pull/6134) supplied by a community
member.
It updates the docs to reflect a recent config deprecation.
Some PRs are being submitted without reference to existing bug reports
or feature requests. This updates the PR template and contributing
guidelines to request that all PRs from the community contain such a
link. This provides additional context and helps prioritize, track, and
assess PRs.
The deprecation message is currently a bit confusing. Users may not
understand what `[features].x` is. I updated the docs and the
deprecation message to give more guidance.
---------
Co-authored-by: Gabriel Peal <gpeal@users.noreply.github.com>
I was missing an example config.toml, and following docs/config.md alone
was slower. I had GPT-5 scan the codebase for every accepted config key,
check the defaults, and generate a single example config.toml with
annotations. It lists all keys Codex reads from TOML, sets each to its
effective default where it exists, leaves optional ones commented, and
adds short comments on purpose and valid values. This should make
onboarding faster and reduce configuration errors. I can rename it to
config.example.toml or move it under docs/ if you prefer.
This pull request adds a new documentation section to explain the
available slash commands in Codex. The update introduces a clear
overview and a reference table for built-in commands, making it easier
for users to understand and utilize these features.
Documentation updates:
* Added a new section to `docs/slash_commands.md` describing what slash
commands are and listing all built-in commands with their purposes in a
formatted table.
## Summary
This PR fixes a broken self-referencing link in the contributing
documentation.
## Changes
- Removed the phrase 'Following the [development
setup](#development-workflow) instructions above' from the Development
workflow section
- The link referenced a non-existent section and the phrase didn't make
logical sense in context
## Before
The text referenced 'development setup instructions above' but:
1. No section called 'development setup' exists
2. There were no instructions 'above' that point
3. The link pointed to the same section it was in
## After
Simplified to: 'Ensure your change is free of lint warnings and test
failures.'
## Type
Documentation fix
Co-authored-by: Ritesh Chauhan <sagar.chauhn11@gmail.com>
- Added the new codex-windows-sandbox crate that builds both a library
entry point (`run_windows_sandbox_capture`) and a CLI executable to launch
commands inside a Windows restricted-token sandbox, including ACL
management, capability SID provisioning, network lockdown, and output
capture (windows-sandbox-rs/src/lib.rs:167, windows-sandbox-rs/src/main.rs:54).
- Introduced the experimental WindowsSandbox feature flag and wiring so
Windows builds can opt into the sandbox (see the sketch after this list):
SandboxType::WindowsRestrictedToken, the in-process execution path, and
platform sandbox selection now honor the flag (core/src/features.rs:47,
core/src/config.rs:1224, core/src/safety.rs:19,
core/src/sandboxing/mod.rs:69, core/src/exec.rs:79,
core/src/exec.rs:172).
- Updated workspace metadata to include the new crate and its
Windows-specific dependencies so the core crate can link against it
(codex-rs/Cargo.toml:91, core/Cargo.toml:86).
- Added a PowerShell bootstrap script that installs the Windows
toolchain, required CLI utilities, and builds the workspace to ease
development
on the platform (scripts/setup-windows.ps1:1).
- Landed a Python smoke-test suite that exercises
read-only/workspace-write policies, ACL behavior, and network denial for
the Windows sandbox
binary (windows-sandbox-rs/sandbox_smoketests.py:1).
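A sketch of the opt-in mentioned in the second bullet (the exact key name
under `[features]` is an assumption; only the flag itself comes from this
PR):
```toml
[features]
# Hypothetical key name for the experimental WindowsSandbox flag.
windows_sandbox = true
```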
There's still some debate about whether we want to expose
`tools.view_image` or `feature.view_image`, so those are left unchanged
for now, but this old `include_view_image_tool` config is good to go.
Also updated the doc to reflect that the `view_image` tool now defaults
to true.
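For anyone pinning the legacy key explicitly, a sketch matching the new
default:
```toml
# Existing key, kept working by this PR; true is now the default.
include_view_image_tool = true
```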
This PR is a follow-up to #5591. It allows users to choose which auth
storage mode they want by using the new
`cli_auth_credentials_store_mode` config.
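A sketch of the new config (the `"keyring"` value is an assumption; the
valid modes aren't enumerated here):
```toml
# Choose where CLI auth credentials are stored; the value is an assumption.
cli_auth_credentials_store_mode = "keyring"
```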