Commit Graph

47 Commits

Author SHA1 Message Date
willwang-openai
2a299317d2 display promo message in usage error (#10285)
If a promo message is attached to a rate limit response, then display it
in the error message.
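
For illustration, a minimal sketch of the idea; the `RateLimitError` shape and field names below are hypothetical, not the actual types in the codebase:

```rust
/// Hypothetical shape of a rate-limit response; field and type names are illustrative.
struct RateLimitError {
    message: String,
    promo_message: Option<String>,
}

/// Build the user-facing usage error text, appending the promo message when present.
fn usage_error_text(err: &RateLimitError) -> String {
    match &err.promo_message {
        Some(promo) => format!("{}\n\n{}", err.message, promo),
        None => err.message.clone(),
    }
}
```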
2026-01-31 08:13:25 -08:00
sayan-oai
31d1e49340 fix: dont auto-enable web_search for azure (#10266)
We are seeing issues with Azure after default-enabling web search: #10071,
#10257.

We need to work with Azure to fix this API-side; for now, turn off
default-enabling of `web_search` for Azure.

The diff is large because I moved the logic so it can be reused.
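
Roughly the kind of gating this implies, sketched with hypothetical parameter names (`provider_is_azure`, `enabled_by_user`); the real config fields and call sites differ:

```rust
/// Decide whether to enable the web_search tool for a turn.
/// Default-enabling is skipped for Azure until the API-side issue is resolved.
fn web_search_enabled(provider_is_azure: bool, enabled_by_user: Option<bool>) -> bool {
    match enabled_by_user {
        // An explicit user setting always wins.
        Some(explicit) => explicit,
        // Otherwise default-enable everywhere except Azure.
        None => !provider_is_azure,
    }
}
```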
2026-01-30 22:52:37 +00:00
Dylan Hurd
e3ab0bd973 chore(personality) new schema with fallbacks (#10147)
## Summary
Let's tighten this API contract a bit more, with more robust fallback
behavior when `model_instructions_template` is false.

Switches to a more explicit template/variables structure, with more
fallbacks.

## Testing
- [x] Added unit tests
- [x] Tested locally
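
A minimal sketch of the template/variables idea with the fallback behavior described above; the struct and field names here are assumptions for illustration, not the actual schema:

```rust
use serde::Deserialize;

/// Illustrative schema only; the real fields live on ModelInfo.
#[derive(Deserialize)]
struct InstructionsTemplate {
    /// Template text containing placeholders such as "{personality}".
    template: Option<String>,
    /// Variables substituted into the template.
    variables: Option<TemplateVariables>,
}

#[derive(Deserialize)]
struct TemplateVariables {
    personality: Option<String>,
}

/// Fall back to the plain base instructions whenever the template or its
/// variables are missing, rather than failing the turn.
fn render_instructions(base: &str, tpl: Option<&InstructionsTemplate>) -> String {
    let Some(tpl) = tpl else { return base.to_string() };
    let Some(template) = tpl.template.as_deref() else { return base.to_string() };
    let personality = tpl
        .variables
        .as_ref()
        .and_then(|v| v.personality.as_deref())
        .unwrap_or("");
    template.replace("{personality}", personality)
}
```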
2026-01-30 00:10:12 -07:00
sayan-oai
86adf53235 fix: handle all web_search actions and in progress invocations (#9960)
### Summary
- Parse all `web_search` tool actions (`search`, `find_in_page`,
`open_page`).
  - Previously we only parsed and displayed `search`, which made the TUI
appear to pause when the other actions were being used.
- Show in-progress `web_search` calls as `Searching the web`.
  - Previously we only showed completed tool calls.

<img width="308" height="149" alt="image"
src="https://github.com/user-attachments/assets/90a4e8ff-b06a-48ff-a282-b57b31121845"
/>

### Tests
Added + updated tests, tested locally

### Follow ups
Update the VS Code extension to display these as well.
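
For context, a rough sketch of mapping the three actions to an in-progress status line; the enum and function names are hypothetical and only illustrate the shape of the change:

```rust
/// Hypothetical mirror of the web_search tool actions; real names may differ.
enum WebSearchAction {
    Search { query: String },
    FindInPage { pattern: String },
    OpenPage { url: String },
}

/// Status line shown in the TUI while a web_search call is still running.
fn in_progress_label(action: &WebSearchAction) -> String {
    match action {
        WebSearchAction::Search { query } => format!("Searching the web: {query}"),
        WebSearchAction::FindInPage { pattern } => format!("Finding in page: {pattern}"),
        WebSearchAction::OpenPage { url } => format!("Opening page: {url}"),
    }
}
```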
2026-01-27 03:33:48 +00:00
jif-oai
515ac2cd19 feat: add thread spawn source for collab tools (#9769) 2026-01-24 14:21:34 +00:00
Anton Panasenko
e117a3ff33 feat: support proxy for ws connection (#9719)
Reapply the websocket changes without changing the TLS library.
2026-01-22 15:23:15 -08:00
pakrym-oai
b511c38ddb Support end_turn flag (#9698)
Experimental flag that signals the end of the turn.
2026-01-22 17:27:48 +00:00
pakrym-oai
4d48d4e0c2 Revert "feat: support proxy for ws connection" (#9693)
Reverts openai/codex#9409
2026-01-22 15:57:18 +00:00
Dylan Hurd
96a72828be feat(core) ModelInfo.model_instructions_template (#9597)
## Summary
#9555 is the start of a rename, so I'm starting to standardize here.
Sets up `model_instructions` templating with a strongly-typed object for
injecting a personality block into the model instructions.

## Testing
- [x] Added tests
- [x] Ran locally
2026-01-21 18:11:18 -08:00
pakrym-oai
f2e1ad59bc Add websockets logging (#9633)
To help with debugging.
2026-01-21 21:35:38 +00:00
pakrym-oai
527b7b4c02 Feature to auto-enable websockets transport (#9578) 2026-01-20 20:32:06 -08:00
Anton Panasenko
7b27aa7707 feat: support proxy for ws connection (#9409)
Unfortunately, tokio-tungstenite doesn't support proxy configuration out of
the box. While https://github.com/snapview/tokio-tungstenite/pull/370 is in
review, we can depend on the source code for now.
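
For context, a rough sketch of tunneling the WebSocket handshake through an HTTP proxy via CONNECT; the proxy address, target host, and URL are placeholders, and a real client would parse the proxy response properly and layer TLS for `wss://` targets:

```rust
use tokio::io::{AsyncReadExt, AsyncWriteExt};
use tokio::net::TcpStream;

/// Open a CONNECT tunnel through an HTTP proxy, then run the tungstenite
/// client handshake over the tunneled stream.
async fn ws_via_proxy(proxy_addr: &str, target_host: &str, ws_url: &str) -> anyhow::Result<()> {
    let mut stream = TcpStream::connect(proxy_addr).await?;
    let connect = format!("CONNECT {target_host} HTTP/1.1\r\nHost: {target_host}\r\n\r\n");
    stream.write_all(connect.as_bytes()).await?;

    // Minimal check that the proxy accepted the tunnel; a real client would
    // parse the status line and headers instead of prefix-matching.
    let mut buf = [0u8; 1024];
    let n = stream.read(&mut buf).await?;
    anyhow::ensure!(
        std::str::from_utf8(&buf[..n])?.starts_with("HTTP/1.1 200"),
        "proxy refused CONNECT"
    );

    // Hand the tunneled stream to tokio-tungstenite for the WS upgrade.
    let (_ws_stream, _response) = tokio_tungstenite::client_async(ws_url, stream).await?;
    Ok(())
}
```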
2026-01-20 09:36:30 -08:00
Ahmed Ibrahim
b11e96fb04 Act on reasoning-included per turn (#9402)
- Reset reasoning-included flag each turn and update compaction test
2026-01-19 11:23:25 -08:00
Fouad Matin
93a5e0fe1c fix(codex-api): treat invalid_prompt as non-retryable (#9400)
**Goal**: Prevent response.failed events with `invalid_prompt` from
being treated as retryable errors so the UI shows the actual error
message instead of continually retrying.

**Before**: Codex would continue to retry despite the prompt being
marked as disallowed.
**After**: Codex will stop retrying once the prompt is marked disallowed.
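
The core of the fix is a retryability check along these lines; treating every other error code as retryable is an assumption of this sketch, not the actual classification table:

```rust
/// Decide whether a response.failed error should be retried.
fn is_retryable(error_code: &str) -> bool {
    // `invalid_prompt` means the prompt itself was rejected; retrying can
    // never succeed, so surface the actual error message instead of looping.
    !matches!(error_code, "invalid_prompt")
}
```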
2026-01-16 22:22:08 -08:00
Ahmed Ibrahim
ebdd8795e9 Turn-state sticky routing per turn (#9332)
- Capture the header from SSE/WS handshakes, store it per
`ModelClientSession` using `OnceLock`, echo it on turn-scoped requests,
and add SSE + WS integration tests for within-turn persistence and
cross-turn reset.

- Keep `x-codex-turn-state` sticky within a user turn to maintain
routing continuity for retries/tool follow-ups.
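
A minimal sketch of the sticky-header mechanics, assuming a hypothetical holder type (the real state lives on `ModelClientSession`); constructing a fresh holder for the next user turn gives the cross-turn reset:

```rust
use std::sync::OnceLock;

/// Illustrative per-session holder for the routing header.
#[derive(Default)]
struct TurnState {
    turn_state: OnceLock<String>,
}

impl TurnState {
    /// Capture the header from the first SSE/WS handshake of the turn;
    /// later responses within the same turn do not overwrite it.
    fn capture(&self, header_value: &str) {
        let _ = self.turn_state.set(header_value.to_string());
    }

    /// Echo the sticky value on turn-scoped requests, if we have one.
    fn header(&self) -> Option<(&'static str, &str)> {
        self.turn_state
            .get()
            .map(|v| ("x-codex-turn-state", v.as_str()))
    }
}
```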
2026-01-16 09:30:11 -08:00
Michael Bolin
3728db11b8 fix: eliminate unnecessary clone() for each SSE event (#9238)
Given how many SSE events we get, this seems worth fixing.
2026-01-15 00:06:09 +00:00
Celia Chen
02f67bace8 fix: Emit response.completed immediately for Responses SSE (#9170)
We see Windows test failures like this:
https://github.com/openai/codex/actions/runs/20930055601/job/60138344260.

The issue is that SSE connections sometimes remain open after the
completion event, especially on Windows. We should emit the completion
event and return immediately. This is consistent with the protocol:

> The Model streams responses back in an SSE, which are collected until
"completed" message and the SSE terminates

from
https://github.com/openai/codex/blob/dev/cc/fix-windows-test/codex-rs/docs/protocol_v1.md#L37.

This helps us achieve parity with the Responses websocket logic here:
https://github.com/openai/codex/blob/dev/cc/fix-windows-test/codex-rs/codex-api/src/endpoint/responses_websocket.rs#L220-L227.
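
A simplified sketch of the read loop after the change; `SseEvent` and `ResponseEvent` are stand-in types for illustration, not the real ones in codex-api:

```rust
use futures::StreamExt;
use tokio::sync::mpsc::Sender;

/// Stand-in types for the sketch.
struct SseEvent {
    kind: String,
}
enum ResponseEvent {
    Completed,
}

async fn drive_sse(mut events: impl futures::Stream<Item = SseEvent> + Unpin, tx: Sender<ResponseEvent>) {
    while let Some(event) = events.next().await {
        if event.kind == "response.completed" {
            let _ = tx.send(ResponseEvent::Completed).await;
            // Return immediately: the SSE connection can stay open after the
            // completion event (seen on Windows), which previously stalled the turn.
            return;
        }
        // ... handle deltas and other event kinds here ...
    }
}
```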
2026-01-14 10:05:00 -08:00
pakrym-oai
2d56519ecd Support response.done and add integration tests (#9129)
The agent loop uses a persistent, incremental WebSocket connection.
2026-01-13 16:12:30 +00:00
pakrym-oai
e726a82c8a Websocket append support (#9128)
Support an incremental append request in websocket transport.
2026-01-13 06:07:13 +00:00
pakrym-oai
d75626ad99 Reuse websocket connection (#9127)
Reuses the connection but still sends full requests.
2026-01-13 03:30:09 +00:00
pakrym-oai
490c1c1fdd Add model client sessions (#9102)
Maintain a long-running session.
2026-01-13 01:15:56 +00:00
pakrym-oai
3a6a43ff5c Extract single responses SSE event parsing (#9114)
To be reused in WebSockets parsing.
2026-01-12 13:59:51 -08:00
pakrym-oai
5dfa780f3d Remove unused conversation_id header (#9107)
It's an exact copy of `session_id`.
2026-01-12 21:01:07 +00:00
pakrym-oai
cabf85aa18 Log unhandled sse events (#8949) 2026-01-09 12:36:07 -08:00
zbarsky-openai
2a06d64bc9 feat: add support for building with Bazel (#8875)
This PR configures Codex CLI so it can be built with
[Bazel](https://bazel.build) in addition to Cargo. The `.bazelrc`
includes configuration so that remote builds can be done using
[BuildBuddy](https://www.buildbuddy.io).

If you are familiar with Bazel, things should work as you expect, e.g.,
run `bazel test //... --keep-going` to run all the tests in the repo,
but we have also added some new aliases in the `justfile` for
convenience:

- `just bazel-test` to run tests locally
- `just bazel-remote-test` to run tests remotely (currently, the remote
build is for x86_64 Linux regardless of your host platform). Note we are
currently seeing the following test failures in the remote build, so we
still need to figure out what is happening here:

```
failures:
    suite::compact::manual_compact_twice_preserves_latest_user_messages
    suite::compact_resume_fork::compact_resume_after_second_compaction_preserves_history
    suite::compact_resume_fork::compact_resume_and_fork_preserve_model_history_view
```

- `just build-for-release` to build release binaries for all
platforms/architectures remotely

To set up remote execution:
- [Create a buildbuddy account](https://app.buildbuddy.io/) (OpenAI
employees should also request org access at
https://openai.buildbuddy.io/join/ with their `@openai.com` email
address.)
- [Copy your API key](https://app.buildbuddy.io/docs/setup/) to
`~/.bazelrc` (add the line `build
--remote_header=x-buildbuddy-api-key=YOUR_KEY`)
- Use `--config=remote` in your `bazel` invocations (or add `common
--config=remote` to your `~/.bazelrc`, or use the `just` commands)

## CI

In terms of CI, this PR introduces `.github/workflows/bazel.yml`, which
uses Bazel to run the tests _locally_ on Mac and Linux GitHub runners
(we are working on supporting Windows, but that is not ready yet). Note
that the failures we are seeing in `just bazel-remote-test` do not occur
on these GitHub CI jobs, so everything in `.github/workflows/bazel.yml`
is green right now.

The `bazel.yml` uses extra config in `.github/workflows/ci.bazelrc` so
that macOS CI jobs build _remotely_ on Linux hosts (using the
`docker://docker.io/mbolin491/codex-bazel` Docker image declared in the
root `BUILD.bazel`) using cross-compilation to build the macOS
artifacts. Then these artifacts are downloaded locally to GitHub's macOS
runner so the tests can be executed natively. This is the relevant
config that enables this:

```
common:macos --config=remote
common:macos --strategy=remote
common:macos --strategy=TestRunner=darwin-sandbox,local
```

Because of the remote caching benefits we get from BuildBuddy, these new
CI jobs can be extremely fast! For example, consider these two jobs that
ran all the tests on Linux x86_64:

- Bazel 1m37s
https://github.com/openai/codex/actions/runs/20861063212/job/59940545209?pr=8875
- Cargo 9m20s
https://github.com/openai/codex/actions/runs/20861063192/job/59940559592?pr=8875

For now, we will continue to run both the Bazel and Cargo jobs for PRs,
but once we add support for Windows and running Clippy, we should be
able to cutover to using Bazel exclusively for PRs, which should still
speed things up considerably. We will probably continue to run the Cargo
jobs post-merge for commits that land on `main` as a sanity check.

Release builds will also continue to be done by Cargo for now.

Earlier attempt at this PR: https://github.com/openai/codex/pull/8832
Earlier attempt to add support for Buck2, now abandoned:
https://github.com/openai/codex/pull/8504

---------

Co-authored-by: David Zbarsky <dzbarsky@gmail.com>
Co-authored-by: Michael Bolin <mbolin@openai.com>
2026-01-09 11:09:43 -08:00
Channing Conger
21c6d40a44 Add feature for optional request compression (#8767)
Adds a new feature, `enable_request_compression`, that compresses requests
to the codex-backend using zstd. It currently applies only to the
codex-backend, so it is only enabled for OpenAI providers when using
`chatgpt::auth`, even when the feature is enabled.

Also added a new info-level log line for evaluating the compression ratio
and the overhead of compressing before sending the request. You can enable
it with `RUST_LOG=$RUST_LOG,codex_client::transport=info`:

```
2026-01-06T00:09:48.272113Z  INFO codex_client::transport: Compressed request body with zstd pre_compression_bytes=28914 post_compression_bytes=11485 compression_duration_ms=0
```
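
A rough sketch of the compress-and-log step; the compression level and log field names here are assumptions for illustration:

```rust
use std::io::Cursor;
use std::time::Instant;

/// Compress a request body with zstd and log the ratio.
fn compress_body(body: &[u8]) -> std::io::Result<Vec<u8>> {
    let start = Instant::now();
    let compressed = zstd::encode_all(Cursor::new(body), 3)?; // level 3 is an assumed default
    tracing::info!(
        pre_compression_bytes = body.len(),
        post_compression_bytes = compressed.len(),
        compression_duration_ms = start.elapsed().as_millis() as u64,
        "Compressed request body with zstd"
    );
    Ok(compressed)
}
```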
2026-01-07 13:21:40 -08:00
Ahmed Ibrahim
9179c9deac Merge Modelfamily into modelinfo (#8763)
- Merge ModelFamily into ModelInfo
- Remove logic for adding instructions to apply patch
- Add compaction limit and visible context window to `ModelInfo`
2026-01-07 10:35:09 -08:00
Josh McKinney
bba5e5e0d4 fix(codex-api): handle Chat Completions DONE sentinel (#8708)
Context
- This code parses Server-Sent Events (SSE) from the legacy Chat
Completions streaming API (wire_api = "chat").
- The upstream protocol terminates a stream with a final sentinel event:
data: [DONE].
- Some of our test stubs/helpers historically end the stream with data:
DONE (no brackets).

How this was found
- GitHub Actions on Windows failed in codex-app-server integration tests
with wiremock verification errors (expected multiple POSTs, got 1).

Diagnosis
- The job logs included: codex_api::sse::chat: Failed to parse
ChatCompletions SSE event ... data: DONE.
- eventsource_stream surfaces the sentinel as a normal SSE event; it
does not automatically close the stream.
- The parser previously attempted to JSON-decode every data: payload.
The sentinel is not JSON, so we logged and skipped it, then continued
polling.
- On servers that keep the HTTP connection open after emitting the
sentinel (notably wiremock on Windows), skipping the sentinel meant we
never emitted ResponseEvent::Completed.
- Higher layers wait for completion before progressing (emitting
approval requests and issuing follow-up model calls), so the test never
reached the subsequent requests and wiremock panicked when its
expected-call count was not met.

Fix
- Treat both data: [DONE] and data: DONE as explicit end-of-stream
sentinels.
- When a sentinel is seen, flush any pending assistant/reasoning items
and emit ResponseEvent::Completed once.

Tests
- Add a regression unit test asserting we complete on the sentinel even
if the underlying connection is not closed.
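
The heart of the fix is a sentinel check before JSON decoding, roughly along these lines; the event type and function names are illustrative, not the actual parser API:

```rust
/// Treat both "data: [DONE]" and "data: DONE" as end-of-stream sentinels.
fn is_done_sentinel(data: &str) -> bool {
    matches!(data.trim(), "[DONE]" | "DONE")
}

/// Stand-in event type for the sketch.
enum ChatEvent {
    Chunk(serde_json::Value),
    Completed,
}

fn handle_data_line(data: &str) -> Option<ChatEvent> {
    if is_done_sentinel(data) {
        // Flush any pending assistant/reasoning items and complete exactly once,
        // even if the underlying connection stays open.
        return Some(ChatEvent::Completed);
    }
    serde_json::from_str(data).ok().map(ChatEvent::Chunk)
}
```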
2026-01-05 09:29:42 -08:00
jif-oai
649badd102 fix: chat multiple tool calls (#8556)
Fix this: https://github.com/openai/codex/issues/8479

The issue is that the Chat Completions API expects all the tool calls in a
single assistant message, followed by all the tool call outputs in a single
response message.
2026-01-05 10:37:43 +00:00
Ahmed Ibrahim
66b7c673e9 Refresh on models etag mismatch (#8491)
- Send models etag
- Refresh models on 412
- This wires `ModelsManager` to `ModelFamily` so we don't mutate it
mid-turn
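
A sketch of the etag round-trip described above; the header name (`If-Match`) and the `/models` path are assumptions for illustration, not the exact wire contract:

```rust
use reqwest::{Client, StatusCode};

/// Send the cached models etag; on 412 the cache is stale, so refetch.
async fn fetch_models_if_stale(
    client: &Client,
    base_url: &str,
    cached_etag: &str,
) -> reqwest::Result<Option<String>> {
    let resp = client
        .get(format!("{base_url}/models"))
        .header("If-Match", cached_etag) // echo the etag we have on disk
        .send()
        .await?;
    if resp.status() == StatusCode::PRECONDITION_FAILED {
        // 412: our cached models list no longer matches; fetch a fresh copy.
        let fresh = client.get(format!("{base_url}/models")).send().await?;
        return Ok(Some(fresh.text().await?));
    }
    Ok(None)
}
```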
2026-01-01 11:41:16 -08:00
Ahmed Ibrahim
40de81e7af Remove reasoning format (#8484)
This isn't a very useful parameter.

The current logic:
```
if model puts `**` in their reasoning, trim it and visualize the header.
if couldn't trim: don't render
if model doesn't support: don't render
```

We can simplify to:
```
if could trim, visualize header.
if not, don't render
```
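
The simplified rule could look like this; the function name and exact parsing are assumptions for illustration:

```rust
/// If the reasoning text starts with a bold "**Header**" marker, trim it out
/// and return (header, remaining body); otherwise return None and render no header.
fn extract_reasoning_header(text: &str) -> Option<(&str, &str)> {
    let rest = text.strip_prefix("**")?;
    let end = rest.find("**")?;
    let header = &rest[..end];
    let body = rest[end + 2..].trim_start();
    Some((header, body))
}
```

When this returns `None`, the header simply isn't rendered, which covers both the "couldn't trim" and "model doesn't emit `**`" cases.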
2025-12-23 16:01:46 -08:00
Ahmed Ibrahim
6b2ef216f1 remove minimal client version (#8447)
This value isn't needed by the client.
2025-12-22 12:52:24 -08:00
jif-oai
ac6ba286aa feat: experimental menu (#8071)
This will automatically render any `Stage::Beta` features.

The change only gets applied to the *next session*. This started as a
bug, but it is actually a good thing to prevent an out-of-distribution
push.

<img width="986" height="288" alt="Screenshot 2025-12-15 at 15 38 35"
src="https://github.com/user-attachments/assets/78b7a71d-0e43-4828-a118-91c5237909c7"
/>


<img width="509" height="109" alt="Screenshot 2025-12-15 at 17 35 44"
src="https://github.com/user-attachments/assets/6933de52-9b66-4abf-b58b-a5f26d5747e2"
/>
2025-12-17 17:08:03 +00:00
jif-oai
d7482510b1 nit: trace span for regular task (#8053)
Logs are too spammy

---------

Co-authored-by: Anton Panasenko <apanasenko@openai.com>
2025-12-16 16:53:15 +00:00
Anton Panasenko
ad7b9d63c3 [codex] add otel tracing (#7844) 2025-12-12 17:07:17 -08:00
pakrym-oai
b3ddd50eee Remote compact for API-key users (#7835) 2025-12-12 10:05:02 -08:00
Ahmed Ibrahim
b7fa7ca8e9 Update Model Info (#7853) 2025-12-11 14:06:07 -08:00
Ahmed Ibrahim
cacfd003ac override instructions using ModelInfo (#7754)
Making sure we can override base instructions
2025-12-08 17:30:42 -08:00
Ahmed Ibrahim
222a491570 load models from disk and set a ttl and etag (#7722)
2025-12-08 13:43:04 -08:00
Ahmed Ibrahim
53a486f7ea Add remote models feature flag (#7648)
2025-12-07 09:47:48 -08:00
jif-oai
5f80ad6da8 fix: chat completion with parallel tool call (#7634) 2025-12-05 10:20:36 -08:00
zhao-oai
b8eab7ce90 fix: taking plan type from usage endpoint instead of thru auth token (#7610)
Pull the plan type from the usage endpoint, persist it in session state /
TUI state, and propagate it through rate limit snapshots.
2025-12-04 23:34:13 -08:00
Ahmed Ibrahim
7b359c9c8e Call models endpoint in models manager (#7616)
- Introduce `with_remote_overrides` and update
`refresh_available_models`
- Put `auth_manager` instead of `auth_mode` on `models_manager`
- Remove `ShellType` and `ReasoningLevel` to use already existing
structs
2025-12-04 18:28:03 -08:00
jif-oai
6736d1828d fix: sse for chat (#7594) 2025-12-04 16:46:56 -08:00
Ahmed Ibrahim
903b7774bc Add models endpoint (#7603)
- Use the codex-api crate to introduce models endpoint. 
- Add `models` to codex core tests helpers
- Add `ModelsInfo` for the endpoint return type
2025-12-04 12:57:54 -08:00
Ahmed Ibrahim
71504325d3 Migrate model preset (#7542)
- Introduce `openai_models` in `/core`
- Move `PRESETS` under it
- Move `ModelPreset`, `ModelUpgrade`, and `ReasoningEffortPreset` to
`protocol`
- Introduce `Op::ListModels` and `EventMsg::AvailableModels`

Next steps:
- migrate `app-server` and `tui` to use the introduced Operation
2025-12-03 20:30:43 +00:00
jif-oai
4502b1b263 chore: proper client extraction (#6996) 2025-11-25 18:06:12 +00:00