### Motivation
- Avoid placing PATH entries under the system temp directory by creating
the helper directory under `CODEX_HOME` instead of
`std::env::temp_dir()`.
- Fail fast on unsafe configuration by rejecting `CODEX_HOME` values
that live under the system temp root, which would otherwise reintroduce
writable PATH entries (see the sketch below).
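For illustration, here is a minimal sketch of that guard. The
`resolve_helper_dir` name and the `bin` subdirectory are assumptions for
this sketch, not the actual codex-arg0 code:

```rust
use std::env;
use std::path::{Path, PathBuf};

// Sketch only: reject a CODEX_HOME that lives under the system temp root,
// since that would put a world-writable location on PATH. Real code may
// also want to canonicalize both paths (e.g. /tmp vs /private/tmp on macOS).
fn resolve_helper_dir(codex_home: &Path) -> Result<PathBuf, String> {
    let temp_root = env::temp_dir();
    if codex_home.starts_with(&temp_root) {
        return Err(format!(
            "CODEX_HOME ({}) must not be under the system temp directory ({})",
            codex_home.display(),
            temp_root.display()
        ));
    }
    Ok(codex_home.join("bin"))
}
```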
### Testing
- Ran `just fmt`, which completed with a non-blocking
`imports_granularity` warning.
- Ran `just fix -p codex-arg0` (Clippy fixes), which completed
successfully.
- Ran `cargo test -p codex-arg0`; the test run completed successfully.
This PR configures Codex CLI so it can be built with
[Bazel](https://bazel.build) in addition to Cargo. The `.bazelrc`
includes configuration so that remote builds can be done using
[BuildBuddy](https://www.buildbuddy.io).
If you are familiar with Bazel, things should work as you expect, e.g.,
run `bazel test //... --keep-going` to run all the tests in the repo,
but we have also added some new aliases in the `justfile` for
convenience:
- `just bazel-test` to run tests locally
- `just bazel-remote-test` to run tests remotely (currently, the remote
build is for x86_64 Linux regardless of your host platform). Note we are
currently seeing the following test failures in the remote build, so we
still need to figure out what is happening here:
```
failures:
suite::compact::manual_compact_twice_preserves_latest_user_messages
suite::compact_resume_fork::compact_resume_after_second_compaction_preserves_history
suite::compact_resume_fork::compact_resume_and_fork_preserve_model_history_view
```
- `just build-for-release` to build release binaries for all
platforms/architectures remotely
To set up remote execution:
- [Create a buildbuddy account](https://app.buildbuddy.io/) (OpenAI
employees should also request org access at
https://openai.buildbuddy.io/join/ with their `@openai.com` email
address.)
- [Copy your API key](https://app.buildbuddy.io/docs/setup/) to
`~/.bazelrc` (add the line `build
--remote_header=x-buildbuddy-api-key=YOUR_KEY`)
- Use `--config=remote` in your `bazel` invocations (or add `common
--config=remote` to your `~/.bazelrc`, or use the `just` commands)
## CI
This PR introduces `.github/workflows/bazel.yml`, which
uses Bazel to run the tests _locally_ on Mac and Linux GitHub runners
(we are working on supporting Windows, but that is not ready yet). Note
that the failures we are seeing in `just bazel-remote-test` do not occur
on these GitHub CI jobs, so everything in `.github/workflows/bazel.yml`
is green right now.
The `bazel.yml` workflow uses extra config in
`.github/workflows/ci.bazelrc` so that macOS CI jobs cross-compile the
macOS artifacts _remotely_ on Linux hosts (using the
`docker://docker.io/mbolin491/codex-bazel` Docker image declared in the
root `BUILD.bazel`). These artifacts are then downloaded to GitHub's
macOS runner so the tests can be executed natively. This is the relevant
config that enables this:
```
common:macos --config=remote
common:macos --strategy=remote
common:macos --strategy=TestRunner=darwin-sandbox,local
```
Because of the remote caching benefits we get from BuildBuddy, these new
CI jobs can be extremely fast! For example, consider these two jobs that
ran all the tests on Linux x86_64:
- Bazel 1m37s
https://github.com/openai/codex/actions/runs/20861063212/job/59940545209?pr=8875
- Cargo 9m20s
https://github.com/openai/codex/actions/runs/20861063192/job/59940559592?pr=8875
For now, we will continue to run both the Bazel and Cargo jobs for PRs,
but once we add support for Windows and running Clippy, we should be
able to cut over to using Bazel exclusively for PRs, which should still
speed things up considerably. We will probably continue to run the Cargo
jobs post-merge for commits that land on `main` as a sanity check.
Release builds will also continue to be done by Cargo for now.
Earlier attempt at this PR: https://github.com/openai/codex/pull/8832
Earlier attempt to add support for Buck2, now abandoned:
https://github.com/openai/codex/pull/8504
---------
Co-authored-by: David Zbarsky <dzbarsky@gmail.com>
Co-authored-by: Michael Bolin <mbolin@openai.com>
Fixes #2558
Codex uses alternate screen mode (CSI 1049), which, per the xterm spec,
doesn't support scrollback. Zellij follows this strictly, so users can't
scroll back through output.
**Changes:**
- Add `tui.alternate_screen` config: `auto` (default), `always`, `never`
- Add `--no-alt-screen` CLI flag
- Auto-detect Zellij and skip alt screen (uses existing `ZELLIJ` env var
detection)
**Usage:**
```bash
# CLI flag
codex --no-alt-screen
# Or in config.toml
[tui]
alternate_screen = "never"
```
With the default `auto` mode, Zellij users get working scrollback
without any config changes.
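For reference, a sketch of how the `auto` decision can work; the enum
and function names here are illustrative, not the actual codex-rs
identifiers:

```rust
#[derive(Clone, Copy)]
enum AlternateScreenMode {
    Auto,
    Always,
    Never,
}

fn use_alternate_screen(mode: AlternateScreenMode) -> bool {
    match mode {
        AlternateScreenMode::Always => true,
        AlternateScreenMode::Never => false,
        // In `auto` mode, skip the alternate screen inside Zellij so its
        // scrollback keeps working (reuses the existing ZELLIJ detection).
        AlternateScreenMode::Auto => std::env::var_os("ZELLIJ").is_none(),
    }
}
```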
---------
Co-authored-by: Josh McKinney <joshka@openai.com>
Some enterprises do not want their users to be able to use `/feedback`.
<img width="395" height="325" alt="image"
src="https://github.com/user-attachments/assets/2dae9c0b-20c3-4a15-bcd3-0187857ebbd8"
/>
Adds to `config.toml`:
```toml
[feedback]
enabled = false
```
I've deliberately decided to:
1. leave other references to `/feedback` (e.g. in the interrupt message,
tips of the day) unchanged. I think we should continue to promote the
feature even if it is not usable currently.
2. leave the `/feedback` menu item selectable and display an error
saying it's disabled, rather than remove the menu item (which I believe
would raise more questions).
but I'm happy to discuss both.
This will be followed by a change to `requirements.toml` that admins can
use to force the value of `feedback.enabled`.
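To illustrate decision 2, a hypothetical sketch of the gate; the
`FeedbackConfig` type and the error text are assumptions:

```rust
// Hypothetical config type; only the `[feedback] enabled` key is real.
struct FeedbackConfig {
    enabled: bool,
}

impl Default for FeedbackConfig {
    fn default() -> Self {
        // Feedback stays enabled unless the enterprise opts out.
        Self { enabled: true }
    }
}

// The menu item stays selectable; selecting it surfaces an error instead.
fn on_feedback_selected(config: &FeedbackConfig) -> Result<(), String> {
    if !config.enabled {
        return Err("/feedback is disabled by your organization".to_string());
    }
    // ... open the normal feedback flow ...
    Ok(())
}
```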
**Motivation**
The `originator` header is important for codex-backend’s Responses API
proxy because it identifies the real end client (codex cli, codex vscode
extension, codex exec, future IDEs) and is used to categorize requests
by client for our enterprise compliance API.
Today the `originator` header is set by either:
- the `CODEX_INTERNAL_ORIGINATOR_OVERRIDE` env var (our VSCode extension
does this)
- calling `set_default_originator()`, which sets a global immutable
singleton (`codex exec` does this)
For `codex app-server`, we want the `initialize` JSON-RPC request to set
that header because it is a natural place to do so. Example:
```json
{
"method": "initialize",
"id": 0,
"params": {
"clientInfo": {
"name": "codex_vscode",
"title": "Codex VS Code Extension",
"version": "0.1.0"
}
}
}
```
and when app-server receives that request, it can call
`set_default_originator()`. This is a much more natural interface than
asking third party developers to set an env var.
One hiccup is that `originator()` reads the global singleton and locks
in the value, preventing a later `set_default_originator()` call from
setting it. This would be acceptable, but it is brittle: any codepath
that calls `originator()` before app-server can process an `initialize`
JSON-RPC call would prevent app-server from setting it. This was
actually the case with OTEL initialization, which runs on boot, and I
also saw this behavior in certain tests.
Instead, what we now do is:
- [unchanged] If the `CODEX_INTERNAL_ORIGINATOR_OVERRIDE` env var is set,
`originator()` returns that value, and calling `set_default_originator()`
with some other value does NOT override it.
- [new] If no env var is set, `originator()` returns the default value,
`codex_cli_rs`, UNTIL `set_default_originator()` is called once, at
which point the new value is locked in and becomes immutable. Later
calls to `set_default_originator()` return
`SetOriginatorError::AlreadyInitialized`.
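A minimal sketch of these semantics; this assumes a
`std::sync::OnceLock`-style implementation, which may differ from the
actual code:

```rust
use std::sync::OnceLock;

static ORIGINATOR_OVERRIDE: OnceLock<String> = OnceLock::new();

const DEFAULT_ORIGINATOR: &str = "codex_cli_rs";

#[derive(Debug, PartialEq)]
pub enum SetOriginatorError {
    AlreadyInitialized,
}

pub fn originator() -> String {
    // The env var, when present, always wins and cannot be overridden.
    if let Ok(value) = std::env::var("CODEX_INTERNAL_ORIGINATOR_OVERRIDE") {
        return value;
    }
    // Otherwise fall back to the default until set_default_originator()
    // has been called once.
    ORIGINATOR_OVERRIDE
        .get()
        .cloned()
        .unwrap_or_else(|| DEFAULT_ORIGINATOR.to_string())
}

pub fn set_default_originator(value: String) -> Result<(), SetOriginatorError> {
    // Only the first call wins; later calls report AlreadyInitialized.
    ORIGINATOR_OVERRIDE
        .set(value)
        .map_err(|_| SetOriginatorError::AlreadyInitialized)
}
```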
**Other notes**
- I updated `codex_core::otel_init::build_provider` to accept a service
name override, and app-server sends a hardcoded `codex_app_server`
service name to distinguish it from the default `codex_cli_rs` (used by
e.g. the TUI).
**Next steps**
- Update VSCE to set the proper value for `clientInfo.name` on
`initialize` and drop the `CODEX_INTERNAL_ORIGINATOR_OVERRIDE` env var.
- Delete support for `CODEX_INTERNAL_ORIGINATOR_OVERRIDE` in codex-rs.
This is a proposed fix for #8912.
Information provided by Codex:
no_proxy means “don’t use any system proxy settings for this client,”
even if macOS has proxies configured in System Settings or via
environment. On macOS, reqwest’s proxy discovery can call into the
system-configuration framework; that’s the code path that was panicking
with “Attempted to create a NULL object.” By forcing a direct connection
for the OAuth discovery request, we avoid that proxy-resolution path
entirely, so the system-configuration crate never gets invoked and the
panic disappears.
Effectively:
- With proxies: reqwest asks the OS for proxy config →
system-configuration gets touched → panic.
- With no_proxy: reqwest skips proxy lookup → no system-configuration
call → no panic.
So the fix doesn’t change any MCP protocol behavior; it just prevents
the OAuth discovery probe from touching the macOS proxy APIs that are
crashing in the reported environment.
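In reqwest terms, this amounts to building the probe's client with
`ClientBuilder::no_proxy()`; a minimal sketch (the function name and
timeout are illustrative):

```rust
use std::time::Duration;

// Build the client used for the OAuth discovery probe with proxy
// resolution disabled, so reqwest never consults the macOS
// system-configuration framework.
fn discovery_client() -> reqwest::Result<reqwest::Client> {
    reqwest::Client::builder()
        // Skip all system/env proxy lookup for this client only.
        .no_proxy()
        .timeout(Duration::from_secs(10))
        .build()
}
```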
This fix changes behavior for the OAuth discovery probe used in `codex
mcp list`/auth status detection. With no_proxy, that probe won't use
system or env proxy settings, so:
- If a server is only reachable via a proxy, the discovery call may fail
and we'll show auth as Unsupported/NotLoggedIn incorrectly.
- If the server is reachable directly (the common case), behavior is
unchanged.
As an alternative, we could try to get a fix into the
[system-configuration](https://github.com/mullvad/system-configuration-rs)
library. It looks like this library is still under development but has
a slow release pace.
As explained in https://github.com/openai/codex/issues/8945 and
https://github.com/openai/codex/issues/8472, there are legitimate cases
where users expect processes spawned by Codex to inherit environment
variables such as `LD_LIBRARY_PATH` and `DYLD_LIBRARY_PATH`, and failing
to do so can cause significant performance issues.
This PR removes the use of
`codex_process_hardening::pre_main_hardening()` in Codex CLI (which was
added not in response to a known security issue, but because it seemed
like a prudent thing to do from a security perspective:
https://github.com/openai/codex/pull/4521), but we will continue to use
it in `codex-responses-api-proxy`. At some point, we probably want to
introduce a slightly different version of
`codex_process_hardening::pre_main_hardening()` in Codex CLI that
excludes said environment variables from the Codex process itself, but
continues to propagate them to subprocesses.
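A hedged sketch of that future direction; all names here are
illustrative, and none of this is in the current PR:

```rust
use std::process::Command;

const PROPAGATED_VARS: &[&str] = &["LD_LIBRARY_PATH", "DYLD_LIBRARY_PATH"];

/// Capture the values once at startup, before any hardening scrubs them
/// from the Codex process itself.
fn capture_propagated_vars() -> Vec<(String, String)> {
    PROPAGATED_VARS
        .iter()
        .filter_map(|name| std::env::var(name).ok().map(|v| (name.to_string(), v)))
        .collect()
}

/// Re-inject the captured values into a subprocess's environment.
fn apply_to_subprocess(cmd: &mut Command, captured: &[(String, String)]) {
    for (name, value) in captured {
        cmd.env(name, value);
    }
}
```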
Handle null tool arguments in the MCP resource handler so optional
resource tools accept `null` without failing, while preserving normal
JSON parsing for non-null payloads. This improves robustness when models
emit `null` and avoids spurious argument parse errors for list/read MCP
resource calls.
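A minimal sketch of the null-tolerant parsing, assuming a hypothetical
`parse_arguments` helper around serde_json:

```rust
use serde::de::DeserializeOwned;
use serde_json::Value;

// A `null` (or absent) `arguments` payload now means "no arguments"
// instead of a parse failure; non-null payloads parse exactly as before.
fn parse_arguments<T: DeserializeOwned + Default>(
    arguments: Option<Value>,
) -> serde_json::Result<T> {
    match arguments {
        None | Some(Value::Null) => Ok(T::default()),
        Some(value) => serde_json::from_value(value),
    }
}
```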
Elevated Sandbox NUX:
* prompt for elevated sandbox setup when agent mode is selected (via
`/approvals` or at startup)
* prompt for the degraded sandbox if elevated setup is declined or fails
* introduce a `/elevate-sandbox` command to upgrade from the degraded
experience.
This seems to be necessary to get the Bazel builds on ARM Linux to go
green on https://github.com/openai/codex/pull/8875.
I don't feel great about timeout-whack-a-mole, but we're still learning
here...
Fix flakiness of CI test:
https://github.com/openai/codex/actions/runs/20350530276/job/58473691434?pr=8282
This PR does two things:
1. move the flaky test to use the responses API instead of the chat
completions API
2. make mcp_process agnostic to the order of
responses/notifications/requests that come in, by buffering messages
that have not yet been read (see the sketch after this list)
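A sketch of the order-agnostic reading from item 2; the struct and
method names are illustrative, and the real test helper is async:

```rust
use std::collections::VecDeque;

use serde_json::Value;

struct BufferedReader {
    buffered: VecDeque<Value>,
}

impl BufferedReader {
    fn next_matching(
        &mut self,
        mut read_message: impl FnMut() -> Value,
        matches: impl Fn(&Value) -> bool,
    ) -> Value {
        // First serve anything we previously read but skipped.
        if let Some(pos) = self.buffered.iter().position(|m| matches(m)) {
            return self.buffered.remove(pos).expect("position is valid");
        }
        // Otherwise keep reading, buffering messages that don't match so
        // a later caller can still consume them in any order.
        loop {
            let msg = read_message();
            if matches(&msg) {
                return msg;
            }
            self.buffered.push_back(msg);
        }
    }
}
```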
I have seen this test flake out sometimes when running the macOS build
using Bazel in CI: https://github.com/openai/codex/pull/8875. Perhaps
Bazel runs with greater parallelism, inducing a heavier load that
triggers the issue?
Historically we started with a `CodexAuth` that knew how to refresh its
own tokens and then added `AuthManager`, which did a different kind of
refresh (re-reading from disk).
I don't think it makes sense for both `CodexAuth` and `AuthManager` to
be mutable and contain behaviors.
Move all refresh logic into `AuthManager` and keep `CodexAuth` as a data
object.
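A sketch of the target shape; field and method names are illustrative:

```rust
use std::sync::RwLock;

// Plain data: no refresh behavior lives here anymore.
#[derive(Clone)]
pub struct CodexAuth {
    pub id_token: String,
    pub access_token: String,
}

pub struct AuthManager {
    current: RwLock<Option<CodexAuth>>,
}

impl AuthManager {
    /// The single place refresh logic lives: re-read auth from disk and
    /// swap in a new plain-data `CodexAuth`.
    pub fn reload(&self) {
        let refreshed = read_auth_from_disk();
        *self.current.write().expect("lock poisoned") = refreshed;
    }
}

fn read_auth_from_disk() -> Option<CodexAuth> {
    // Placeholder: the real code parses auth.json here.
    None
}
```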
This updates core shell environment policy handling to match Windows
environment variable names case-insensitively and adds a Windows-only
regression test, so `Path`/`TEMP` are no longer dropped when
`inherit=core`.
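A sketch of the case-insensitive match; the actual policy code is
structured differently:

```rust
// On Windows, environment variable names are case-insensitive, so `Path`
// must match a policy entry written as `PATH`.
fn name_matches(policy_name: &str, var_name: &str) -> bool {
    if cfg!(windows) {
        policy_name.eq_ignore_ascii_case(var_name)
    } else {
        policy_name == var_name
    }
}
```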
Fix flakiness of CI tests:
https://github.com/openai/codex/actions/runs/20350530276/job/58473691443?pr=8282
This PR does two things:
1. test with the responses API instead of the chat completions API in
thread_resume tests;
2. add a new responses API fixture that mocks out an arbitrary number of
responses API calls (including zero calls) and returns the same repeated
response.
Tested by CI
**Before:**
```
Error loading configuration: value `Never` is not in the allowed set [OnRequest]
```
**After:**
```
Error loading configuration: invalid value for `approval_policy`: `Never` is not in the
allowed set [OnRequest] (set by MDM com.openai.codex:requirements_toml_base64)
```
Done by introducing a new struct, `ConfigRequirementsWithSources`, onto
which we now `merge_unset_fields`. It also introduces a pairing of a
requirement value with its `RequirementSource` (inspired by
`ConfigLayerSource`):
```rust
pub struct Sourced<T> {
pub value: T,
pub source: RequirementSource,
}
```
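A hypothetical example of how a `Sourced` value can feed the improved
message; the `RequirementSource` variant and its `Display` impl below
are assumptions, not the actual codex-rs definitions:

```rust
use std::fmt;

// Hypothetical stand-in; only `Sourced` itself appears above.
pub enum RequirementSource {
    Mdm { key: String },
}

impl fmt::Display for RequirementSource {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            RequirementSource::Mdm { key } => write!(f, "MDM {key}"),
        }
    }
}

// Assumed formatting path from a `Sourced<T>` to the "After" output above.
fn rejection_message<T: fmt::Display>(
    field: &str,
    rejected: &Sourced<T>,
    allowed: &str,
) -> String {
    format!(
        "invalid value for `{field}`: `{}` is not in the allowed set [{allowed}] (set by {})",
        rejected.value, rejected.source
    )
}
```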
## Summary
- avoid setting a new process group when stdio is inherited (keeps child
in foreground PG)
- keep process-group isolation when stdio is redirected so killpg
cleanup still works
- prevents macOS job-control SIGTTIN stops that look like hangs after
output
## Testing
- `cargo build -p codex-cli`
- `GIT_CONFIG_GLOBAL=/dev/null GIT_CONFIG_NOSYSTEM=1
CARGO_BIN_EXE_codex=/Users/denis/Code/codex/codex-rs/target/debug/codex
/opt/homebrew/bin/timeout 30m cargo test -p codex-core -p codex-exec`
## Context
This fixes macOS sandbox hangs for commands like `elixir -v` / `erl
-noshell`, where the child was moved into a new process group while
still attached to the controlling TTY. See issue #8690.
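A sketch of the described policy using `CommandExt::process_group`
(Unix-only; the function name is illustrative):

```rust
use std::os::unix::process::CommandExt;
use std::process::Command;

// Isolate the child in a new process group only when its stdio is
// redirected, so killpg-based cleanup keeps working. When stdio is
// inherited, the child stays in the foreground process group, so macOS
// job control won't stop it with SIGTTIN.
fn configure_process_group(cmd: &mut Command, stdio_inherited: bool) {
    if !stdio_inherited {
        // pgroup = 0 puts the child in a fresh group whose id is its pid.
        cmd.process_group(0);
    }
}
```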
## Authorship & collaboration
- This change and analysis were authored by **Codex** (AI coding agent).
- Human collaborator: @seeekr provided repro environment, context, and
review guidance.
- CLI used: `codex-cli 0.77.0`.
- Model: `gpt-5.2-codex (xhigh)`.
Co-authored-by: Eric Traut <etraut@openai.com>