Seeing issues with Azure after default-enabling web search: #10071,
#10257.
We need to work with Azure to fix this API-side; for now, turning off
the default-enabling of `web_search` for Azure.
The diff is large because logic was moved so it can be reused.
## Summary
Let's dial in this API contract a bit more, with more robust fallback
behavior when `model_instructions_template` is false. This switches to a
more explicit template/variables structure with more fallbacks.
## Testing
- [x] Added unit tests
- [x] Tested locally
### Summary
- Parse all `web_search` tool actions (`search`, `find_in_page`,
`open_page`).
- Previously we only parsed + displayed `search`, which made the TUI
appear to pause when the other actions were being used.
- Show in-progress `web_search` calls as `Searching the web` (a rough
sketch of the rendering logic follows the screenshot below).
- Previously we only showed completed tool calls
<img width="308" height="149" alt="image"
src="https://github.com/user-attachments/assets/90a4e8ff-b06a-48ff-a282-b57b31121845"
/>
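The rendering decision is roughly the sketch below; the type and function names are illustrative rather than the actual TUI code. The point is that every `web_search` action variant now maps to a display string, and in-progress calls render as `Searching the web`.

```rust
// Illustrative sketch only; the real types live in the TUI history cells.
enum WebSearchAction {
    Search { query: String },
    FindInPage { pattern: String },
    OpenPage { url: String },
}

fn display_line(action: Option<&WebSearchAction>, completed: bool) -> String {
    if !completed {
        // Previously nothing was shown until the call completed.
        return "Searching the web".to_string();
    }
    match action {
        Some(WebSearchAction::Search { query }) => format!("Searched the web for \"{query}\""),
        Some(WebSearchAction::FindInPage { pattern }) => format!("Found \"{pattern}\" in page"),
        Some(WebSearchAction::OpenPage { url }) => format!("Opened {url}"),
        // Unparsed actions no longer make the TUI appear to pause.
        None => "Searched the web".to_string(),
    }
}
```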
### Tests
Added + updated tests, tested locally
### Follow ups
Update VSCode extension to display these as well
## Summary
#9555 is the start of a rename, so I'm starting to standardize here.
Sets up `model_instructions` templating with a strongly-typed object for
injecting a personality block into the model instructions.
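As a rough illustration of the strongly-typed approach, assuming hypothetical field and placeholder names (not the real API):

```rust
// Hypothetical sketch: a typed variables object rendered into the template,
// rather than ad-hoc string concatenation.
struct ModelInstructionsVariables {
    personality: Option<String>,
}

fn render_model_instructions(template: &str, vars: &ModelInstructionsVariables) -> String {
    // When no personality block is provided, the placeholder collapses to nothing.
    let personality = vars.personality.as_deref().unwrap_or("");
    template.replace("{{personality}}", personality)
}
```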
## Testing
- [x] Added tests
- [x] Ran locally
**Goal**: Prevent `response.failed` events with `invalid_prompt` from
being treated as retryable errors, so the UI shows the actual error
message instead of continually retrying.
**Before**: Codex would continue to retry despite the prompt being
marked as disallowed.
**After**: Codex stops retrying once the prompt is marked disallowed.
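Conceptually, the change is a retryability check along these lines; the function name and error representation are illustrative, not the actual error-handling code:

```rust
// Sketch: decide whether a `response.failed` event should be retried.
fn is_retryable_failure(error_code: Option<&str>) -> bool {
    match error_code {
        // A disallowed prompt will never succeed on retry; surface it to the UI.
        Some("invalid_prompt") => false,
        // Other failures may be transient and keep the existing retry behavior.
        _ => true,
    }
}
```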
- Capture the header from SSE/WS handshakes, store it per
`ModelClientSession` using `OnceLock`, echo it on turn-scoped requests,
and add SSE + WS integration tests for within-turn persistence and
cross-turn reset (a minimal sketch follows this list).
- Keep `x-codex-turn-state` sticky within a user turn to maintain
routing continuity for retries and tool follow-ups.
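A minimal sketch of the per-session storage, assuming a simplified `ModelClientSession` (the real struct has more fields and the header value comes from the handshake response):

```rust
use std::sync::OnceLock;

struct ModelClientSession {
    // Set from the first SSE/WS handshake of the turn, then read-only.
    turn_state: OnceLock<String>,
}

impl ModelClientSession {
    fn record_turn_state(&self, value: String) {
        // First writer wins; later handshakes within the same turn are ignored.
        let _ = self.turn_state.set(value);
    }

    fn turn_state_header(&self) -> Option<&str> {
        // Echoed as `x-codex-turn-state` on turn-scoped requests.
        self.turn_state.get().map(String::as_str)
    }
}
```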
This PR configures Codex CLI so it can be built with
[Bazel](https://bazel.build) in addition to Cargo. The `.bazelrc`
includes configuration so that remote builds can be done using
[BuildBuddy](https://www.buildbuddy.io).
If you are familiar with Bazel, things should work as you expect, e.g.,
run `bazel test //... --keep-going` to run all the tests in the repo,
but we have also added some new aliases in the `justfile` for
convenience:
- `just bazel-test` to run tests locally
- `just bazel-remote-test` to run tests remotely (currently, the remote
build is for x86_64 Linux regardless of your host platform). Note we are
currently seeing the following test failures in the remote build, so we
still need to figure out what is happening here:
```
failures:
suite::compact::manual_compact_twice_preserves_latest_user_messages
suite::compact_resume_fork::compact_resume_after_second_compaction_preserves_history
suite::compact_resume_fork::compact_resume_and_fork_preserve_model_history_view
```
- `just build-for-release` to build release binaries for all
platforms/architectures remotely
To set up remote execution:
- [Create a buildbuddy account](https://app.buildbuddy.io/) (OpenAI
employees should also request org access at
https://openai.buildbuddy.io/join/ with their `@openai.com` email
address.)
- [Copy your API key](https://app.buildbuddy.io/docs/setup/) to
`~/.bazelrc` (add the line `build
--remote_header=x-buildbuddy-api-key=YOUR_KEY`)
- Use `--config=remote` in your `bazel` invocations (or add `common
--config=remote` to your `~/.bazelrc`, or use the `just` commands); a
complete `~/.bazelrc` example is shown below
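Putting those together, a `~/.bazelrc` set up for remote execution might look like this, with your own API key substituted:

```
build --remote_header=x-buildbuddy-api-key=YOUR_KEY
common --config=remote
```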
## CI
In terms of CI, this PR introduces `.github/workflows/bazel.yml`, which
uses Bazel to run the tests _locally_ on Mac and Linux GitHub runners
(we are working on supporting Windows, but that is not ready yet). Note
that the failures we are seeing in `just bazel-remote-test` do not occur
on these GitHub CI jobs, so everything in `.github/workflows/bazel.yml`
is green right now.
The `bazel.yml` uses extra config in `.github/workflows/ci.bazelrc` so
that macOS CI jobs build _remotely_ on Linux hosts (using the
`docker://docker.io/mbolin491/codex-bazel` Docker image declared in the
root `BUILD.bazel`) using cross-compilation to build the macOS
artifacts. Then these artifacts are downloaded locally to GitHub's macOS
runner so the tests can be executed natively. Here is the relevant
config that enables this:
```
common:macos --config=remote
common:macos --strategy=remote
common:macos --strategy=TestRunner=darwin-sandbox,local
```
Because of the remote caching benefits we get from BuildBuddy, these new
CI jobs can be extremely fast! For example, consider these two jobs that
ran all the tests on Linux x86_64:
- Bazel 1m37s
https://github.com/openai/codex/actions/runs/20861063212/job/59940545209?pr=8875
- Cargo 9m20s
https://github.com/openai/codex/actions/runs/20861063192/job/59940559592?pr=8875
For now, we will continue to run both the Bazel and Cargo jobs for PRs,
but once we add support for Windows and running Clippy, we should be
able to cut over to using Bazel exclusively for PRs, which should still
speed things up considerably. We will probably continue to run the Cargo
jobs post-merge for commits that land on `main` as a sanity check.
Release builds will also continue to be done by Cargo for now.
Earlier attempt at this PR: https://github.com/openai/codex/pull/8832
Earlier attempt to add support for Buck2, now abandoned:
https://github.com/openai/codex/pull/8504
---------
Co-authored-by: David Zbarsky <dzbarsky@gmail.com>
Co-authored-by: Michael Bolin <mbolin@openai.com>
Adds a new feature,
`enable_request_compression`, that compresses requests to the
codex-backend with zstd. Compression currently applies only to the codex-backend, so it is only enabled for OpenAI providers using `chatgpt::auth`, even when the feature is turned on.
Also added a new info log line for evaluating the compression ratio and
the overhead of compressing before sending the request. You can enable
it with `RUST_LOG=$RUST_LOG,codex_client::transport=info`:
```
2026-01-06T00:09:48.272113Z INFO codex_client::transport: Compressed request body with zstd pre_compression_bytes=28914 post_compression_bytes=11485 compression_duration_ms=0
```
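For reference, the core of the feature is roughly the following simplified sketch (not the actual transport code); the `tracing` fields mirror the log line above:

```rust
use std::time::Instant;

// Sketch: compress the serialized request body with zstd when the feature is on,
// and log the before/after sizes so the ratio and overhead can be evaluated.
fn maybe_compress(body: &[u8], enable_request_compression: bool) -> std::io::Result<Vec<u8>> {
    if !enable_request_compression {
        return Ok(body.to_vec());
    }
    let start = Instant::now();
    let compressed = zstd::encode_all(body, 0)?; // level 0 = zstd's default level
    tracing::info!(
        pre_compression_bytes = body.len(),
        post_compression_bytes = compressed.len(),
        compression_duration_ms = start.elapsed().as_millis() as u64,
        "Compressed request body with zstd"
    );
    Ok(compressed)
}
```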
- Merge ModelFamily into ModelInfo
- Remove logic for adding instructions to apply patch
- Add compaction limit and visible context window to `ModelInfo`
### Context
- This code parses Server-Sent Events (SSE) from the legacy Chat
Completions streaming API (`wire_api = "chat"`).
- The upstream protocol terminates a stream with a final sentinel event:
`data: [DONE]`.
- Some of our test stubs/helpers historically end the stream with
`data: DONE` (no brackets).
### How this was found
- GitHub Actions on Windows failed in `codex-app-server` integration
tests with wiremock verification errors (expected multiple POSTs, got 1).
### Diagnosis
- The job logs included: `codex_api::sse::chat: Failed to parse
ChatCompletions SSE event ... data: DONE`.
- `eventsource_stream` surfaces the sentinel as a normal SSE event; it
does not automatically close the stream.
- The parser previously attempted to JSON-decode every `data:` payload.
The sentinel is not JSON, so we logged and skipped it, then continued
polling.
- On servers that keep the HTTP connection open after emitting the
sentinel (notably wiremock on Windows), skipping the sentinel meant we
never emitted `ResponseEvent::Completed`.
- Higher layers wait for completion before progressing (emitting
approval requests and issuing follow-up model calls), so the test never
reached the subsequent requests and wiremock panicked when its
expected-call count was not met.
### Fix
- Treat both `data: [DONE]` and `data: DONE` as explicit end-of-stream
sentinels.
- When a sentinel is seen, flush any pending assistant/reasoning items
and emit `ResponseEvent::Completed` once (a minimal sketch follows).
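A simplified sketch of the sentinel handling (the real parser lives in `codex_api::sse::chat` and handles many more event types):

```rust
// Illustrative types only.
#[derive(Debug, PartialEq)]
enum ResponseEvent {
    Completed,
}

fn is_done_sentinel(data: &str) -> bool {
    // Accept both the spec'd `[DONE]` and the bare `DONE` used by some test stubs.
    matches!(data.trim(), "[DONE]" | "DONE")
}

fn on_sse_data(data: &str, already_completed: &mut bool) -> Option<ResponseEvent> {
    if is_done_sentinel(data) {
        if std::mem::replace(already_completed, true) {
            return None; // Completed has already been emitted once.
        }
        // In the real parser, pending assistant/reasoning items are flushed here.
        return Some(ResponseEvent::Completed);
    }
    // Non-sentinel payloads continue to be JSON-decoded as before.
    None
}
```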
### Tests
- Add a regression unit test asserting we complete on the sentinel even
if the underlying connection is not closed.
Fixes https://github.com/openai/codex/issues/8479.
The issue is that the Chat Completions API expects all of the tool calls
in a single assistant message and then all of the tool call outputs in a
single response message.
This isn't a very useful parameter.
The current logic is:
```
if the model puts `**` in its reasoning: trim it and visualize the header
if we couldn't trim: don't render
if the model doesn't support it: don't render
```
We can simplify this to:
```
if we could trim: visualize the header
if not: don't render
```
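A minimal Rust sketch of the simplified rule (the function name and return shape are illustrative):

```rust
// Render a section header only when the reasoning text starts with a
// `**`-wrapped title we can strip; otherwise render nothing special.
fn try_extract_header(reasoning: &str) -> Option<(&str, &str)> {
    let rest = reasoning.strip_prefix("**")?;
    let (header, body) = rest.split_once("**")?;
    Some((header.trim(), body))
}
```

If this returns `None`, the header is simply not rendered; there is no longer a separate "model doesn't support it" branch.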
- Introduce `with_remote_overrides` and update
`refresh_available_models`
- Store `auth_manager` instead of `auth_mode` on `models_manager`
- Remove `ShellType` and `ReasoningLevel` in favor of already existing
structs
- Introduce `openai_models` in `/core`
- Move `PRESETS` under it
- Move `ModelPreset`, `ModelUpgrade`, and `ReasoningEffortPreset` to
`protocol`
- Introduce `Op::ListModels` and `EventMsg::AvailableModels`
Next steps:
- Migrate `app-server` and `tui` to use the newly introduced operation
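For orientation, a rough sketch of the new protocol surface; the variant and field shapes here are illustrative, and the real definitions live in `protocol`:

```rust
// Illustrative only: the actual enums carry many more variants and fields.
struct ModelPreset {
    id: String,
    display_name: String,
}

enum Op {
    // ...existing operations...
    /// Ask the models manager for the models currently available.
    ListModels,
}

enum EventMsg {
    // ...existing events...
    /// Response to `Op::ListModels`.
    AvailableModels { models: Vec<ModelPreset> },
}
```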