Compare commits

...

163 Commits

Author SHA1 Message Date
jif-oai
5cba950f25 Review app-server request loop 2026-02-10 15:00:35 +00:00
jif-oai
66f9f631a2 Fix app-server outbound connection 2026-02-10 14:46:51 +00:00
jif-oai
9a64991e69 Plan outbound routing fixes 2026-02-10 14:38:45 +00:00
jif-oai
bd89f60348 Plan outbound router ordering fixes 2026-02-10 14:24:07 +00:00
jif-oai
f742019bec Discuss outbound routing fixes 2026-02-10 14:15:50 +00:00
jif-oai
aee456ef18 Document app-server backpressure 2026-02-10 14:08:48 +00:00
jif-oai
223fadc760 Fix spawn_agent input type (#11304) 2026-02-10 12:16:39 +00:00
jif-oai
87ccc5bbae feat: add connector capabilities to sub-agents (#11191) 2026-02-10 11:53:01 +00:00
jif-oai
6049ff02a0 memories: add extraction and prompt module foundation (#11200)
## Summary
- add the new `core/src/memories` module (phase-one parsing, rollout
filtering, storage, selection, prompts)
- add Askama-backed memory templates for stage-one input/system and
consolidation prompts
- add module tests for parsing, filtering, path bucketing, and summary
maintenance

## Testing
- just fmt
- cargo test -p codex-core --lib memories::
2026-02-10 10:10:24 +00:00
Michael Bolin
44ebf4588f feat: retain NetworkProxy, when appropriate (#11207)
As of this PR, `SessionServices` retains a
`Option<StartedNetworkProxy>`, if appropriate.

Now the `network` field on `Config` is `Option<NetworkProxySpec>`
instead of `Option<NetworkProxy>`.

Over in `Session::new()`, we invoke `NetworkProxySpec::start_proxy()` to
create the `StartedNetworkProxy`, which is a new struct that retains the
`NetworkProxy` as well as the `NetworkProxyHandle`. (Note that `Drop` is
implemented for `NetworkProxyHandle` to ensure the proxies are shut down
when it is dropped.)

The `NetworkProxy` from the `StartedNetworkProxy` is threaded through to
the appropriate places.
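
A minimal sketch of the ownership shape described above; the type names follow the PR text, but the field contents are assumed:

```rust
/// Assumed stand-ins for the real types; only the ownership shape matters here.
struct NetworkProxy {/* proxy endpoints, allow/deny rules, ... */}
struct NetworkProxyHandle {/* shutdown signal(s) for the spawned proxy servers */}

/// Returned by `NetworkProxySpec::start_proxy()`: keeps the `NetworkProxy`
/// alive alongside the handle whose `Drop` shuts the proxies down.
struct StartedNetworkProxy {
    proxy: NetworkProxy,
    handle: NetworkProxyHandle,
}

impl Drop for NetworkProxyHandle {
    fn drop(&mut self) {
        // Signal the proxy tasks to shut down so nothing outlives the session.
    }
}
```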


---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/11207).
* #11285
* __->__ #11207
2026-02-10 02:09:23 -08:00
Michael Bolin
8e240a13be chore: put crypto provider logic in a shared crate (#11294)
Ensures a process-wide rustls crypto provider is installed.

Both the `codex-network-proxy` and `codex-api` crates need this.
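
A minimal sketch of what such a shared helper might look like with rustls 0.23's process-wide provider API (function name assumed):

```rust
/// Install the ring-backed provider once per process.
/// `install_default` returns Err if a provider is already installed, which is
/// fine here, so the result is ignored.
pub fn ensure_default_crypto_provider() {
    let _ = rustls::crypto::ring::default_provider().install_default();
}
```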
2026-02-10 01:04:31 -08:00
alexsong-oai
9fded117ac feat: support configurable metric_exporter (#10940) 2026-02-10 08:14:28 +00:00
viyatb-oai
3391e5ea86 feat(sandbox): enforce proxy-aware network routing in sandbox (#11113)
## Summary
- expand proxy env injection to cover common tool env vars
(`HTTP_PROXY`/`HTTPS_PROXY`/`ALL_PROXY`/`NO_PROXY` families +
tool-specific variants)
- harden macOS Seatbelt network policy generation to route through
inferred loopback proxy endpoints and fail closed when proxy env is
malformed
- thread proxy-aware Linux sandbox flags and add minimal bwrap netns
isolation hook for restricted non-proxy runs
- add/refresh tests for proxy env wiring, Seatbelt policy generation,
and Linux sandbox argument wiring
2026-02-10 07:44:21 +00:00
Dylan Hurd
b61ea47e83 chore(tui) cleanup /approvals (#10215)
## Summary
Consolidate on the new `/permissions` flow

## Testing
- [x] updated snapshots
2026-02-09 23:24:06 -08:00
alexsong-oai
91704c5672 feat: add SkillPolicy to skill metadata and support allow_implicit_invocation (#11244)
Tested by setting the policy in agents/openai.yaml to true, false, and
leaving it unset (default).
```
policy:
  allow_implicit_invocation: false
```
<img width="847" height="289" alt="Screenshot 2026-02-09 at 3 42 41 PM"
src="https://github.com/user-attachments/assets/d3476264-3355-47cf-894a-4ffba53e3481"
/>
2026-02-09 23:13:27 -08:00
Matthew Zeng
005e040f97 [apps] Add thread_id param to optionally load thread config for apps feature check. (#11279)
- [x] Add thread_id param to optionally load thread config for apps
feature check
2026-02-09 23:10:26 -08:00
Michael Bolin
503186b31f feat: reserve loopback ephemeral listeners for managed proxy (#11269)
Codex may run many per-thread proxy instances, so hardcoded proxy ports
are brittle and conflict-prone. The previous "ephemeral" approach still
had a race: `build()` read `local_addr()` from temporary listeners and
dropped them before `run()` rebound the ports. That left a
[TOCTOU](https://en.wikipedia.org/wiki/Time-of-check_to_time-of-use)
window where the OS (or another process) could reuse the same port,
causing intermittent `EADDRINUSE` and partial proxy startup.

Change the managed proxy path to reserve real listener sockets up front
and keep them alive until startup:

- add `ReservedListeners` on `NetworkProxy` to hold HTTP/SOCKS/admin std
listeners allocated during `build()`
- in managed mode, bind `127.0.0.1:0` for each listener and carry those
bound sockets into `run()` instead of rebinding by address later
- add `run_*_with_std_listener` entry points for HTTP, SOCKS5, and admin
servers so `run()` can start services from already-reserved sockets
- keep static/configured ports only when `managed_by_codex(false)`,
including explicit `socks_addr` override support
- remove fallback synthetic port allocation and add tests for managed
ephemeral loopback binding and unmanaged configured-port behavior

This makes managed startup deterministic, avoids port collisions, and
preserves the intended distinction between Codex-managed ephemeral ports
and externally managed fixed ports.
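
A minimal sketch of the reserve-then-hand-off pattern (the struct name comes from the PR, everything else is assumed):

```rust
use std::net::TcpListener;

/// Bind loopback ephemeral ports during build() and keep the sockets alive,
/// so the OS cannot hand the same ports to anyone else before run() starts.
struct ReservedListeners {
    http: TcpListener,
    socks: TcpListener,
    admin: TcpListener,
}

fn reserve_loopback_listeners() -> std::io::Result<ReservedListeners> {
    Ok(ReservedListeners {
        http: TcpListener::bind("127.0.0.1:0")?,
        socks: TcpListener::bind("127.0.0.1:0")?,
        admin: TcpListener::bind("127.0.0.1:0")?,
    })
}

/// In run(), start each server from the already-bound socket instead of
/// rebinding by address (this is where the old TOCTOU window used to be).
fn into_tokio(listener: TcpListener) -> std::io::Result<tokio::net::TcpListener> {
    listener.set_nonblocking(true)?;
    tokio::net::TcpListener::from_std(listener)
}
```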
2026-02-10 06:11:02 +00:00
Eric Traut
bb974c78de Disable dynamic model refresh for custom model providers (#11239)
The dynamic model refresh feature (`https://api.openai.com/v1/models`
endpoint) is currently gated on a runtime check for an auth method other
than API Key. It should be gated on a check specifically for ChatGPT
Auth because some custom model providers (e.g. for local models) use no
auth mechanism. A call to `self.auth_manager.auth_mode()` will return
`None` in this case.

Addresses #11213
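
A minimal sketch of the tightened gate; the enum and helper are illustrative, not the actual `auth_manager` API:

```rust
#[derive(Clone, Copy, PartialEq)]
enum AuthMode {
    ChatGpt,
    ApiKey,
}

/// Refresh the dynamic model list only for ChatGPT auth; custom providers
/// (including local models with no auth at all, i.e. `None`) skip it.
fn should_refresh_models(auth_mode: Option<AuthMode>) -> bool {
    matches!(auth_mode, Some(AuthMode::ChatGpt))
}
```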
2026-02-09 21:36:09 -08:00
dependabot[bot]
c0994b363d chore(deps): bump regex from 1.12.2 to 1.12.3 in /codex-rs (#11138)
Bumps [regex](https://github.com/rust-lang/regex) from 1.12.2 to 1.12.3.
Changelog (1.12.3, 2025-02-03): this release excludes some unnecessary
files from the archive published to crates.io; specifically, fuzzing data
and various shell scripts are no longer shipped. #1319 switches from a
Cargo `exclude` list to an `include` list and drops the unnecessary files.

Upstream commits: b028e4f (1.12.3), 5e195de (regex-automata 0.4.14),
a3433f6 (regex-syntax 0.8.9), 0c07fae (regex-lite 0.1.9), 6a81006 (cargo:
exclude development scripts and fuzzing data), 4733e28 (automata: fix
`onepass::DFA::try_search_slots` panic when too many slots are ...).
Full diff: https://github.com/rust-lang/regex/compare/1.12.2...1.12.3

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-09 21:34:22 -08:00
dependabot[bot]
cd7f8c6dab chore(deps): bump anyhow from 1.0.100 to 1.0.101 in /codex-rs (#11139)
Bumps [anyhow](https://github.com/dtolnay/anyhow) from 1.0.100 to
1.0.101.
Release notes (1.0.101): add #[inline] to the anyhow::Ok helper (#437,
thanks @Ibitier).

Upstream commits: 80bfe29 (Release 1.0.101), dff8c43 / 85d9ea9 (add
#[inline] to anyhow::Ok helper, #437), 54036cc (update ui test suite to
nightly-2026-01-21), cce0579 / f2c598c (update actions/upload-artifact),
2c0bda4 (update to 2021 edition), 0d82268 (remove rustc version
requirement from readme), 67df012 (#436), c898488 (raise required
compiler to Rust 1.68); additional commits in the compare view.
Full diff: https://github.com/dtolnay/anyhow/compare/1.0.100...1.0.101

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-09 21:33:56 -08:00
dependabot[bot]
10b1214606 chore(deps): bump insta from 1.46.2 to 1.46.3 in /codex-rs (#11140)
Bumps [insta](https://github.com/mitsuhiko/insta) from 1.46.2 to 1.46.3.
Release notes and changelog (1.46.3): fix inline escaped snapshots
incorrectly stripping leading newlines when content contains control
characters like carriage returns. The escaped format (used for snapshots
with control chars) now correctly preserves the original content without
stripping a non-existent formatting newline (#865).
Upstream commits: 1324590 (Release 1.46.3, #870), b26bc7f (fix escaped
format inline snapshots not stripping formatting newline, #869).
Full diff: https://github.com/mitsuhiko/insta/compare/1.46.2...1.46.3

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-09 21:33:31 -08:00
Owen Lin
53741013ab fix(app-server): for external auth, replace id_token with chatgpt_acc… (#11240)
…ount_id and chatgpt_plan_type

### Summary
Following up on external auth mode which was introduced here:
https://github.com/openai/codex/pull/10012

Turns out some clients have a differently shaped ID token and don't have
a chosen workspace (aka chatgpt_account_id) encoded in their ID token.
So, let's replace `id_token` param with `chatgpt_account_id` and
`chatgpt_plan_type` (optional) when initializing the external ChatGPT
auth mode (`account/login/start` with `chatgptAuthTokens`).

The client was able to test end-to-end with a Codex build from this
branch and verified it worked!
2026-02-09 20:48:58 -08:00
Dylan Hurd
168c359b71 Adjust shell command timeouts for Windows (#11247)
Summary
- add platform-aware defaults for shell command timeouts so Windows
tests get longer waits
- keep the medium timeout longer on Windows to reduce flakiness

Testing
- Not run (not requested)
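
A minimal sketch of platform-aware defaults along these lines; the durations are illustrative, not the values used in the change:

```rust
use std::time::Duration;

/// Windows process startup and I/O are slower in CI, so give shell commands
/// a larger default budget there.
fn default_shell_timeout() -> Duration {
    if cfg!(windows) {
        Duration::from_secs(60)
    } else {
        Duration::from_secs(30)
    }
}
```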
2026-02-09 20:03:32 -08:00
Josh McKinney
de59e550c0 test: deflake nextest child-process leak in MCP harnesses (#11263)
## Summary
- add deterministic child-process cleanup to both test `McpProcess`
helpers
- keep Tokio `kill_on_drop(true)` but also reap via bounded `try_wait()`
polling in `Drop`
- document the failure mode and why this avoids nondeterministic `LEAK`
flakes

## Why
`cargo nextest` leak detection can intermittently report `LEAK` when a
spawned server outlives test teardown, making CI flaky.

## Testing
- `just fmt`
- `cargo test -p codex-app-server`
- `cargo test -p codex-mcp-server`


## Failing CI Reference
- Original failing job:
https://github.com/openai/codex/actions/runs/21845226299/job/63039443593?pr=11245
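
A minimal sketch of the cleanup pattern described above, using a std `Child` for brevity (the real helpers wrap a Tokio child with `kill_on_drop(true)`):

```rust
use std::process::Child;
use std::time::{Duration, Instant};

struct McpServerGuard {
    child: Child,
}

impl Drop for McpServerGuard {
    fn drop(&mut self) {
        // Best-effort kill, then reap with bounded try_wait() polling so the
        // child is gone before nextest's leak detection looks for stragglers.
        let _ = self.child.kill();
        let deadline = Instant::now() + Duration::from_secs(5);
        while Instant::now() < deadline {
            match self.child.try_wait() {
                Ok(Some(_)) | Err(_) => return, // reaped (or nothing left to do)
                Ok(None) => std::thread::sleep(Duration::from_millis(25)),
            }
        }
    }
}
```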
2026-02-10 03:43:24 +00:00
Michael Bolin
862ab63071 chore: change ConfigState so it no longer depends on a single config.toml file for reloading (#11262)
If anything, it should depend on `ConfigLayerStack`.

---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/11262).
* #11207
* __->__ #11262
2026-02-09 19:26:39 -08:00
Ahmed Ibrahim
d1df3bd63b Revert "Revert "Update models.json"" (#11256)
Reverts openai/codex#11255
2026-02-09 19:22:41 -08:00
Josh McKinney
34c88d10ea deflake linux-sandbox NoNewPrivs timeout (#11245)
Deflake `codex-linux-sandbox::all
suite::landlock::test_no_new_privs_is_enabled`.

CI has intermittently failed with `Sandbox(Timeout)` (exit 124) because
the sandboxed
`grep '^NoNewPrivs:' /proc/self/status` can run close to the short
timeout budget.

This updates only this test to use `LONG_TIMEOUT_MS`, which removes the
near-threshold timeout
behavior while keeping the rest of the suite unchanged.

Refs (previous failures):
- PR:
https://github.com/openai/codex/actions/runs/21836764823/job/63009902779
- PR:
https://github.com/openai/codex/actions/runs/21837427251/job/63012470353
- main:
https://github.com/openai/codex/actions/runs/21830746538/job/62988079964

Validation:
- Local: `cd codex-rs && cargo test -p codex-linux-sandbox` (non-Linux
runs 0 tests)
2026-02-10 03:03:58 +00:00
Ahmed Ibrahim
03adb5db3e Revert "Update models.json" (#11255)
Reverts openai/codex#9739
2026-02-09 17:44:11 -08:00
github-actions[bot]
c816c430a0 Update models.json (#9739)
Automated update of models.json.

---------

Co-authored-by: aibrahim-oai <219906144+aibrahim-oai@users.noreply.github.com>
Co-authored-by: Ahmed Ibrahim <aibrahim@openai.com>
2026-02-09 17:20:18 -08:00
Ahmed Ibrahim
a1abd53b6a Remove offline fallback for models (#11238)
2026-02-09 16:58:54 -08:00
Josh McKinney
a3e4bd3bc0 fix(tui): tab submits when no task running in steer mode (#10035)
When steer mode is enabled, Tab used to only queue while a task was
running and otherwise did nothing. Treat Tab as an immediate submit when
no task is running so input isn't dropped when the inflight turn ends
mid-typing.

Adds a regression test and updates docs/tooltips.
2026-02-10 00:39:09 +00:00
Philipp Mildenberger
c9271cdff2 fix: nix build by adding missing dependencies and fix outputHashes (#11185)
Fixes #11020

I do think `nix build` should run in CI; I had multiple issues
trying to build the flake in the past, as it's continuously out of sync
with the rest of the repo. (like a few days ago I didn't need the
updated outputHashes, just the missing packages).

Co-authored-by: Eric Traut <etraut@openai.com>
2026-02-09 15:25:48 -08:00
Dylan Hurd
d65f09b913 fix(feature) UnderDevelopment feature must be off (#11242)
## Summary
1. Bump RemoteModels to Stable
2. Assert that all UnderDevelopment features are off by default

## Testing
- [x] Added unit test
2026-02-09 15:14:15 -08:00
Ahmed Ibrahim
481145e959 Use longest remote model prefix matching (#11228)
Match model metadata by longest matching remote slug prefix before local
fallback.

- Update `get_model_info` to prefer the most specific remote slug prefix
for the requested model.
- Add an integration test to assert `gpt-5.3-codex-test` resolves to
`gpt-5.3-codex` over `gpt-5.3`.
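
A minimal sketch of the longest-prefix selection described above (names assumed):

```rust
/// Prefer the most specific remote slug that prefixes the requested model,
/// e.g. "gpt-5.3-codex-test" resolves to "gpt-5.3-codex" rather than "gpt-5.3".
fn longest_prefix_match<'a>(requested: &str, remote_slugs: &'a [String]) -> Option<&'a str> {
    remote_slugs
        .iter()
        .map(String::as_str)
        .filter(|slug| requested.starts_with(slug))
        .max_by_key(|slug| slug.len())
}
```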
2026-02-09 15:05:56 -08:00
Matthew Zeng
d90df4761b [apps] Add gated instructions for Apps. (#10924)
- [x] Add gated instructions for Apps.
2026-02-09 14:48:09 -08:00
natea-oai
ed977dbeda adding image support for gif and webp (#11237)
Adds image support for gif and webp images. Tested using webp and gif
(both single and multi image gif files)
2026-02-09 14:47:22 -08:00
alexsong-oai
373f5467ef Add originator to otel metadata tags (#11232) 2026-02-09 14:29:19 -08:00
Josh McKinney
2bdf9617bb fix(tui): keep unified exec summary on working line (#10962)
## Problem
When unified-exec background sessions appear while the status indicator
is visible, the bottom pane can grow by one row to show a dedicated
footer line. That row insertion/removal makes the composer jump
vertically and produces visible jitter/flicker during streaming turns.

## Mental model
The bottom pane should expose one canonical background-exec summary
string, but it should surface that string in only one place at a time:
- if the status indicator row is visible, show the summary inline on
that row;
- if the status indicator row is hidden, show the summary as the
standalone unified-exec footer row.

This keeps status information visible while preserving a stable pane
height.

## Non-goals
This change does not alter unified-exec lifecycle, process tracking, or
`/ps` behavior. It does not redesign status text copy, spinner timing,
or interrupt handling semantics.

## Tradeoffs
Inlining the summary preserves layout stability and keeps interrupt
affordances in a fixed location, but it reduces horizontal space for
long status/detail text in narrow terminals. We accept that truncation
risk in exchange for removing vertical jitter and keeping the composer
anchored.

## Architecture
`UnifiedExecFooter` remains the source of truth for background-process
summary copy via `summary_text()`. `BottomPane` mirrors that text into
`StatusIndicatorWidget::update_inline_message()` whenever process state
changes or a status widget is created. Rendering enforces single-surface
output: the standalone footer row is skipped while status is present,
and the status row appends the summary after the elapsed/interrupt
segment.

## Documentation pass
Added non-functional docs/comments that make the new invariant explicit:
- status row owns inline summary when present;
- unified-exec footer row renders only when status row is absent;
- summary ordering keeps elapsed/interrupt affordance in a stable
position.

## Observability
No new telemetry or logs are introduced. The behavior is traceable
through:
- `BottomPane::set_unified_exec_processes()` for state updates,
- `BottomPane::sync_status_inline_message()` for status-row
synchronization,
- `StatusIndicatorWidget::render()` for final inline ordering.

## Tests
- Added
`bottom_pane::tests::unified_exec_summary_does_not_increase_height_when_status_visible`
to lock the no-height-growth invariant.
- Updated the unified-exec status restoration snapshot to match inline
rendering order.
- Validated with:
  - `just fmt`
  - `cargo test -p codex-tui --lib`

---------

Co-authored-by: Sayan Sisodiya <sayan@openai.com>
2026-02-09 14:25:32 -08:00
jif-oai
ffd4bd345c feat: tie shell snapshot to cwd (#11231)
Fix for this: https://github.com/openai/codex/issues/11223

Basically we tie the shell snapshot to a `cwd` to handle `cwd`-based env
setups
2026-02-09 22:14:39 +00:00
jif-oai
c2ca51273f feat: use a notify instead of grace to close ue process (#11219) 2026-02-09 22:14:33 +00:00
xl-openai
cca13fb03a skill-creator: Remove invalid reference. (#10960)
Remove references to two files that do not exist.
2026-02-09 13:37:27 -08:00
xl-openai
a33ee46e3b feat: extend skills/list to support additional roots. (#10835)
Add an optional perCwdExtraUserRoots
2026-02-09 13:30:38 -08:00
jif-oai
74ecd6e3b2 state: add memory consolidation lock primitives (#11199)
## Summary
- add a migration for memory_consolidation_locks
- add acquire/release lock primitives to codex-state runtime
- add core/state_db wrappers and cwd normalization for memory queries
and lock keys

## Testing
- cargo test -p codex-state memory_consolidation_lock_
- cargo test -p codex-core --lib state_db::
2026-02-09 21:04:20 +00:00
Anton Panasenko
becc3a0424 feat: search_tool (#10657)
**Why We Did This**
- The goal is to reduce MCP tool context pollution by not exposing the
full MCP tool list up front
- It forces an explicit discovery step (`search_tool_bm25`) so the model
narrows tool scope before making MCP calls, which helps relevance and
lowers prompt/tool clutter.

**What It Changed**
- Added a new experimental feature flag `search_tool` in
`core/src/features.rs:90` and `core/src/features.rs:430`.
- Added config/schema support for that flag in
`core/config.schema.json:214` and `core/config.schema.json:1235`.
- Added BM25 dependency (`bm25`) in `Cargo.toml:129` and
`core/Cargo.toml:23`.
- Added new tool handler `search_tool_bm25` in
`core/src/tools/handlers/search_tool_bm25.rs:18`.
- Registered the handler and tool spec in
`core/src/tools/handlers/mod.rs:11` and `core/src/tools/spec.rs:780` and
`core/src/tools/spec.rs:1344`.
- Extended `ToolsConfig` to carry `search_tool` enablement in
`core/src/tools/spec.rs:32` and `core/src/tools/spec.rs:56`.
- Injected dedicated developer instructions for tool-discovery workflow
in `core/src/codex.rs:483` and `core/src/codex.rs:1976`, using
`core/templates/search_tool/developer_instructions.md:1`.
- Added session state to store one-shot selected MCP tools in
`core/src/state/session.rs:27` and `core/src/state/session.rs:131`.
- Added filtering so when feature is enabled, only selected MCP tools
are exposed on the next request (then consumed) in
`core/src/codex.rs:3800` and `core/src/codex.rs:3843`.
- Added E2E suite coverage for
enablement/instructions/hide-until-search/one-turn-selection in
`core/tests/suite/search_tool.rs:72`,
`core/tests/suite/search_tool.rs:109`,
`core/tests/suite/search_tool.rs:147`, and
`core/tests/suite/search_tool.rs:218`.
- Refactored test helper utilities to support config-driven tool
collection in `core/tests/suite/tools.rs:281`.

**Net Behavioral Effect**
- With `search_tool` **off**: existing MCP behavior (tools exposed
normally).
- With `search_tool` **on**: MCP tools start hidden, model must call
`search_tool_bm25`, and only returned `selected_tools` are available for
the next model call.
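
A minimal sketch of the one-shot selection state this implies (names and types assumed):

```rust
use std::collections::HashSet;

/// Tools picked by `search_tool_bm25` are exposed on the next model call only,
/// then consumed; before any search, MCP tools stay hidden.
#[derive(Default)]
struct SelectedMcpTools {
    pending: Option<HashSet<String>>,
}

impl SelectedMcpTools {
    fn record_search_result(&mut self, selected: impl IntoIterator<Item = String>) {
        self.pending = Some(selected.into_iter().collect());
    }

    /// Filter the full MCP tool list down to the pending selection and clear it.
    fn filter_for_next_request(&mut self, all_mcp_tools: &[String]) -> Vec<String> {
        match self.pending.take() {
            Some(selected) => all_mcp_tools
                .iter()
                .filter(|name| selected.contains(*name))
                .cloned()
                .collect(),
            None => Vec::new(), // nothing searched yet: keep MCP tools hidden
        }
    }
}
```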
2026-02-09 12:53:50 -08:00
Charley Cunningham
9450cd9ce5 core: add focused diagnostics for remote compaction overflows (#11133)
## Summary
- add targeted remote-compaction failure diagnostics in compact_remote
logging
- log the specific values needed to explain overflow timing:
  - last_api_response_total_tokens
  - estimated_tokens_of_items_added_since_last_successful_api_response
  - estimated_bytes_of_items_added_since_last_successful_api_response
  - failing_compaction_request_body_bytes
- simplify breakdown naming and remove
last_api_response_total_bytes_estimate (it was an approximation and not
useful for debugging)

## Why
When compaction fails with context_length_exceeded, we need concrete,
low-ambiguity numbers that map directly to:
1) what the API most recently reported, and
2) what local history added since then.

This keeps the failure logs actionable without adding broad, noisy
metrics.
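
A minimal sketch of the structured log this describes; the field names come from the summary above, but the emission point and how the values are computed are assumed:

```rust
fn log_remote_compaction_overflow(
    last_api_response_total_tokens: u64,
    estimated_tokens_of_items_added_since_last_successful_api_response: u64,
    estimated_bytes_of_items_added_since_last_successful_api_response: u64,
    failing_compaction_request_body_bytes: u64,
) {
    // One warn-level event with exactly the values needed to explain timing.
    tracing::warn!(
        last_api_response_total_tokens,
        estimated_tokens_of_items_added_since_last_successful_api_response,
        estimated_bytes_of_items_added_since_last_successful_api_response,
        failing_compaction_request_body_bytes,
        "remote compaction request exceeded the model context window"
    );
}
```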

## Testing
- just fmt
- cargo test -p codex-core
2026-02-09 12:42:20 -08:00
Charley Cunningham
f88667042e TUI: fix request_user_input wrapping for long option labels (#11123)
## Summary

This PR fixes long-text rendering in the `request_user_input` TUI
overlay while preserving a clear two-column option layout. (Issue
https://github.com/openai/codex/issues/11093)

Before:
- very long option labels could push description text into a narrow
right-edge strip
- option labels were effectively single-line when descriptions were
present, causing truncation/poor readability
- label and description wrapping interacted in one combined wrapped line

<img width="504" height="409" alt="Screenshot 2026-02-08 at 2 27 25 PM"
src="https://github.com/user-attachments/assets/a9afd108-d792-4522-bce1-e43b3cce882b"
/>

After:
- option labels wrap inside the left column
- descriptions wrap independently inside the right column
- row measurement and row rendering use the same wrapping path, so
layout stays stable

<img width="582" height="211" alt="Screenshot 2026-02-09 at 10 28 02 AM"
src="https://github.com/user-attachments/assets/47885a1c-07e5-4b0f-b992-032b149f1b0d"
/>

## Problem

`request_user_input` needs to handle verbose prompts/options. With
oversized labels:
- descriptions could collapse into a thin, hard-to-read column
- important label context was lost

## Root Cause

In shared row rendering (`selection_popup_common`):
- rows were wrapped as a single combined line
- auto column sizing could still place `desc_col` too far right for long
labels
- `request_user_input` rows did not provide wrap metadata to align
continuation lines after the option prefix

## What Changed

### 1) `request_user_input` rows opt into wrapped labels
File: `codex-rs/tui/src/bottom_pane/request_user_input/mod.rs`

- In `option_rows()`, compute the rendered option prefix (`› 1. ` / ` 2.
`) and set `wrap_indent` from its display width.
- Apply the same behavior to the synthetic “None of the above” row.
- Add long-text snapshot test coverage
(`question_with_very_long_option_text` +
`request_user_input_long_option_text_snapshot`).

### 2) Shared renderer now has an opt-in two-column wrapping path
File: `codex-rs/tui/src/bottom_pane/selection_popup_common.rs`

- Add focused helpers:
  - `should_wrap_name_in_column`
  - `wrap_two_column_row`
  - `wrap_standard_row`
  - `wrap_row_lines`
  - `apply_row_state_style`
- For opted-in rows (plain option rows with `wrap_indent` +
description), wrap label and description independently in their own
columns.
- Keep the legacy standard wrapping path for non-opted rows.
- Use the same `wrap_row_lines` function in both rendering and height
measurement to keep them in sync.

### 3) Keep column sizing simple and derived from existing fixed split
constants
File: `codex-rs/tui/src/bottom_pane/selection_popup_common.rs`

- Keep fixed mode at `3/10` left column (`30/70` split).
- In auto modes, cap label width using those same fixed constants (max
70% label, min 30% description), instead of extra special-case
constants/branches.
- Add/keep narrow-width safety guard in `wrap_two_column_row` so
extremely small widths do not panic.

### 4) Snapshot coverage
File: `codex-rs/tui/src/bottom_pane/request_user_input/snapshots/codex_tui__bottom_pane__request_user_input__tests__request_user_input_long_option_text.snap`

- Add snapshot for long-label/long-description two-column rendering
behavior.
2026-02-09 12:23:31 -08:00
jif-oai
c2bfd1e473 Revert "chore: enable sub agents" (#11230)
Reverts openai/codex#11173
2026-02-09 20:22:38 +00:00
viyatb-oai
c2c6bc90f8 chore: remove network-proxy-cli crate (#11158)
## Summary
- remove `network-proxy-cli` from the Rust workspace members
- delete the dedicated `codex-network-proxy-cli` crate files
- remove the stale `codex-network-proxy-cli` package entry from
`Cargo.lock`

## Testing
- just fmt
- cargo test -p codex-network-proxy
2026-02-09 12:13:55 -08:00
zbarsky-openai
86183847fd [bazel] Upgrade some rulesets in preparation for enabling windows, part 2 (#11197)
https://github.com/openai/codex/pull/11109 had automerge set, so I
didn't get to address feedback before merging, oops!
2026-02-09 20:08:10 +00:00
pakrym-oai
086d02fb14 Try to stop small helper methods (#11203) 2026-02-09 20:01:30 +00:00
pakrym-oai
7044511ae8 Move warmup to the task level (#11216)
Instead of storing a special connection on the client level make the
regular task responsible for establishing a normal client session and
open a connection on it.

Then when the turn is started we pass in a pre-established session.
2026-02-09 11:58:53 -08:00
pakrym-oai
ccd17374cb Move warmup to the task level (#11216)
Instead of storing a special connection on the client level make the
regular task responsible for establishing a normal client session and
open a connection on it.

Then when the turn is started we pass in a pre-established session.
2026-02-09 10:57:52 -08:00
Eric Traut
9346d321d2 Fixed bug in file watcher that results in spurious skills update events and large log files (#11217)
On some platforms, the "notify" file watcher library emits events for
file opens and reads, not just file modifications or deletes. The
previous implementation didn't take this into account.
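
A minimal sketch of the kind of event filter this implies, using the `notify` crate's event kinds (the actual filter in the watcher may differ):

```rust
use notify::{Event, EventKind};

/// Only data-changing events should trigger a skills reload; on some platforms
/// `notify` also reports opens/reads as `Access` events, which we ignore.
fn is_relevant_change(event: &Event) -> bool {
    matches!(
        event.kind,
        EventKind::Create(_) | EventKind::Modify(_) | EventKind::Remove(_)
    )
}
```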

Furthermore, the `tracing::info!` call that I previously added was
emitting a lot of logs. I had assumed incorrectly that `info` level
logging was disabled by default, but it's apparently enabled for this
crate. This is resulting in large logs (hundreds of MB) for some users.
2026-02-09 10:33:57 -08:00
Rasmus Rygaard
b2d3843109 Translate websocket errors (#10937)
When getting errors over a websocket connection, translate the error
into our regular API error format
2026-02-09 17:53:09 +00:00
jif-oai
cfce286459 tools: remove get_memory tool and tests (#11198)
Drop this memory tool as the design changed
2026-02-09 17:47:36 +00:00
Charley Cunningham
0883e5d3e5 core: account for all post-response items in auto-compact token checks (#11132)
## Summary
- change compaction pre-check accounting to include all items added
after the last model-generated item, not only trailing codex-generated
outputs
- use that boundary consistently in get_total_token_usage() and
get_total_token_usage_breakdown()
- update history tests to cover user/tool-output items after the last
model item

## Why
last_token_usage.total_tokens is API-reported for the last successful
model response. After that point, local history may gain additional
items (user messages, injected context, tool outputs). Compaction
triggering must account for all of those items to avoid late compaction
attempts that can overflow context.
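
A minimal sketch of the boundary rule described above (types simplified to illustrative stand-ins):

```rust
struct HistoryItem {
    is_model_generated: bool,
    estimated_tokens: u64,
}

/// Count everything added after the last model-generated item, not just
/// trailing codex-generated outputs, so the pre-check sees the same history
/// the next request would carry.
fn tokens_added_since_last_model_item(history: &[HistoryItem]) -> u64 {
    let boundary = history
        .iter()
        .rposition(|item| item.is_model_generated)
        .map_or(0, |i| i + 1);
    history[boundary..].iter().map(|item| item.estimated_tokens).sum()
}
```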

## Testing
- just fmt
- cargo test -p codex-core
2026-02-09 08:34:38 -08:00
gt-oai
9fe925b15a Load requirements on windows (#10770)
We support requirements on Unix, loading from
`/etc/codex/requirements.toml`. On macOS, we also support MDM.

Now, on Windows, we'll load requirements from
`%ProgramData%\OpenAI\Codex\requirements.toml`
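
A minimal sketch of resolving that path (helper name assumed):

```rust
use std::path::PathBuf;

/// Resolve %ProgramData%\OpenAI\Codex\requirements.toml, which is typically
/// C:\ProgramData\OpenAI\Codex\requirements.toml.
fn windows_requirements_path() -> Option<PathBuf> {
    let program_data = std::env::var_os("ProgramData")?;
    Some(
        PathBuf::from(program_data)
            .join("OpenAI")
            .join("Codex")
            .join("requirements.toml"),
    )
}
```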
2026-02-09 16:05:38 +00:00
gt-oai
54b401aa5f Deflake mixed parallel tools timing test (#11193)
```
        FAIL [   1.903s] (1926/3311) codex-core::all suite::tool_parallelism::mixed_parallel_tools_run_in_parallel
  stdout ───

    running 1 test
    test suite::tool_parallelism::mixed_parallel_tools_run_in_parallel ... FAILED

    failures:

    failures:
        suite::tool_parallelism::mixed_parallel_tools_run_in_parallel

    test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 684 filtered out; finished in 1.86s
    
  stderr ───

    thread 'suite::tool_parallelism::mixed_parallel_tools_run_in_parallel' (205083) panicked at core/tests/suite/tool_parallelism.rs:74:5:
    expected parallel execution to finish quickly, got 1.406255993s
    stack backtrace:
       0: __rustc::rust_begin_unwind
                 at /rustc/254b59607d4417e9dffbc307138ae5c86280fe4c/library/std/src/panicking.rs:689:5
       1: core::panicking::panic_fmt
                 at /rustc/254b59607d4417e9dffbc307138ae5c86280fe4c/library/core/src/panicking.rs:80:14
       2: all::suite::tool_parallelism::assert_parallel_duration
                 at ./tests/suite/tool_parallelism.rs:74:5
       3: all::suite::tool_parallelism::mixed_parallel_tools_run_in_parallel::{{closure}}
                 at ./tests/suite/tool_parallelism.rs:206:5
       4: <core::pin::Pin<P> as core::future::future::Future>::poll
                 at /home/runner/.rustup/toolchains/1.93.0-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/future/future.rs:133:9
       5: tokio::runtime::park::CachedParkThread::block_on::{{closure}}
                 at /home/runner/.cargo/registry/src/index.crates.io-1949cf8c6b5b557f/tokio-1.49.0/src/runtime/park.rs:284:71
       6: tokio::task::coop::with_budget
                 at /home/runner/.cargo/registry/src/index.crates.io-1949cf8c6b5b557f/tokio-1.49.0/src/task/coop/mod.rs:167:5
       7: tokio::task::coop::budget
                 at /home/runner/.cargo/registry/src/index.crates.io-1949cf8c6b5b557f/tokio-1.49.0/src/task/coop/mod.rs:133:5
       8: tokio::runtime::park::CachedParkThread::block_on
                 at /home/runner/.cargo/registry/src/index.crates.io-1949cf8c6b5b557f/tokio-1.49.0/src/runtime/park.rs:284:31
       9: tokio::runtime::context::blocking::BlockingRegionGuard::block_on
                 at /home/runner/.cargo/registry/src/index.crates.io-1949cf8c6b5b557f/tokio-1.49.0/src/runtime/context/blocking.rs:66:14
      10: tokio::runtime::scheduler::multi_thread::MultiThread::block_on::{{closure}}
                 at /home/runner/.cargo/registry/src/index.crates.io-1949cf8c6b5b557f/tokio-1.49.0/src/runtime/scheduler/multi_thread/mod.rs:89:22
      11: tokio::runtime::context::runtime::enter_runtime
                 at /home/runner/.cargo/registry/src/index.crates.io-1949cf8c6b5b557f/tokio-1.49.0/src/runtime/context/runtime.rs:65:16
      12: tokio::runtime::scheduler::multi_thread::MultiThread::block_on
                 at /home/runner/.cargo/registry/src/index.crates.io-1949cf8c6b5b557f/tokio-1.49.0/src/runtime/scheduler/multi_thread/mod.rs:88:9
      13: tokio::runtime::runtime::Runtime::block_on_inner
                 at /home/runner/.cargo/registry/src/index.crates.io-1949cf8c6b5b557f/tokio-1.49.0/src/runtime/runtime.rs:370:50
      14: tokio::runtime::runtime::Runtime::block_on
                 at /home/runner/.cargo/registry/src/index.crates.io-1949cf8c6b5b557f/tokio-1.49.0/src/runtime/runtime.rs:342:18
      15: all::suite::tool_parallelism::mixed_parallel_tools_run_in_parallel
                 at ./tests/suite/tool_parallelism.rs:208:7
      16: all::suite::tool_parallelism::mixed_parallel_tools_run_in_parallel::{{closure}}
                 at ./tests/suite/tool_parallelism.rs:178:52
      17: core::ops::function::FnOnce::call_once
                 at /home/runner/.rustup/toolchains/1.93.0-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/ops/function.rs:250:5
      18: core::ops::function::FnOnce::call_once
                 at /rustc/254b59607d4417e9dffbc307138ae5c86280fe4c/library/core/src/ops/function.rs:250:5
    note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
```
2026-02-09 15:16:54 +00:00
jif-oai
284c03ceab chore: enable sub agents (#11173) 2026-02-09 11:25:37 +00:00
jif-oai
13de744296 fix: do not show closed agents in /agent (#11175) 2026-02-09 11:25:31 +00:00
jif-oai
753821c90f chore: enable shell snapshot (#11172) 2026-02-09 11:23:59 +00:00
jif-oai
6cf61725d0 feat: do not close unified exec processes across turns (#10799)
With this PR we do not close the unified exec processes (i.e. background
terminals) at the end of a turn unless:
* The user interrupts the turn
* The user decides to clean up the processes through `app-server` or
`/clean`

I made sure that `codex exec` correctly kills all the processes
2026-02-09 10:27:46 +00:00
rakan-oai
4e9e6ca243 tui: avoid no-op status-line redraws (#11155)
Rate-limit snapshots are polled every 60s, which causes unconditional
redraws.
This causes spurious "tab changed" indicators in terminal apps.
2026-02-09 00:13:19 -08:00
Michael Bolin
383b45279e feat: include NetworkConfig through ExecParams (#11105)
This PR adds the following field to `Config`:

```rust
pub network: Option<NetworkProxy>,
```

Though for the moment, it will always be initialized as `None` (this
will be addressed in a subsequent PR).

This PR does the work to thread `network` through to `execute_exec_env()`, `process_exec_tool_call()`, and `UnifiedExecRuntime.run()` to ensure it is available whenever we spawn a process.
2026-02-09 03:32:17 +00:00
Michael Bolin
ff74aaae21 chore: reverse the codex-network-proxy -> codex-core dependency (#11121) 2026-02-08 17:03:24 -08:00
Matthew Zeng
45b7763c3f [apps] Improve app loading. (#10994)
There are two concepts of apps that we load in the harness:

- Directory apps, which is all the apps that the user can install.
- Accessible apps, which are the apps the user has actually installed; these
can be $ inserted and used by the model. They are extracted from the tools
that are loaded through the gateway MCP.

Previously we waited for both sets of apps before returning the full apps
list, which caused many issues because accessible apps wouldn't be available
to the UI or the model if directory apps weren't loaded or failed to load.

In this PR we are separating them so that accessible apps can be loaded
separately and are instantly available to be shown in the UI and to be
provided in model context. We also added an app-server event so that
clients can subscribe to also get accessible apps without being blocked
on the full app list.

- [x] Separate accessible apps and directory apps loading.
- [x] `app/list` requests will also emit `app/list/updated` notifications
that app-server clients can subscribe to, which allows clients to get the
accessible apps list to render in the $ menu without being blocked by
directory apps.
- [x] Cache both accessible and directory apps with 1 hour TTL to avoid
reloading them when creating new threads.
- [x] TUI improvements to redraw $ menu and /apps menu when app list is
updated.
2026-02-08 15:24:56 -08:00
Michael Bolin
181b721ba5 feat: include [experimental_network] in <environment_context> (#11044)
If `NetworkConstraints` is set, then include the relevant settings on `<environment_context>`. Example:

```xml
<environment_context>
  <cwd>/repo</cwd>
  <shell>bash</shell>
  <network enabled="true">
    <allowed>api.example.com</allowed>
    <allowed>*.openai.com</allowed>
    <denied>blocked.example.com</denied>
  </network>
</environment_context>
```
2026-02-08 15:16:50 -08:00
Matthew Zeng
9f1009540b Upgrade rmcp to 0.14 (#10718)
- [x] Upgrade rmcp to 0.14
2026-02-08 15:07:53 -08:00
Michael Bolin
ef5d26e586 chore: refactor network-proxy so that ConfigReloader is injectable behavior (#11114)
Currently, `codex-network-proxy` depends on `codex-core`, but this
should be the other way around. As a first step, refactor out
`ConfigReloader`, which should make it easier to move
`codex-rs/network-proxy/src/state.rs` to `codex-core` in a subsequent
commit.
2026-02-08 22:28:20 +00:00
zbarsky-openai
44a1355133 [bazel] Upgrade some rulesets in preparation for enabling windows (#11109) 2026-02-08 13:40:32 -08:00
Tom
409ec76fbc Gate view_image tool by model input_modalities (#11051)
- Plumb input modalities from model catalog through the openai model
protocol. Default to text and image.
- Conditionally add the view_image tool only if input modalities support
image.
2026-02-08 10:45:26 -08:00
Michael Bolin
91a3e17960 fix: remove config.schema.json from tag check (#10980)
Given that we have https://github.com/openai/codex/pull/10977, the
existing "Verify config schema fixture" step seems unnecessary. Further,
because it happens as part of the `tag-check` job (which is meant to be
fast), it slows down the entire build process because it delays the more
expensive steps from starting.
2026-02-08 08:49:43 -08:00
Eric Traut
b3de6c7f2b Defer persistence of rollout file (#11028)
- Defer rollout persistence for fresh threads (`InitialHistory::New`):
keep rollout events in memory and only materialize rollout file + state
DB row on first `EventMsg::UserMessage`.
- Keep precomputed rollout path available before materialization.
- Change `thread/start` to build thread response from live config
snapshot and optional precomputed path.
- Improve pre-materialization behavior in app-server/TUI: clearer
invalid-request errors for file-backed ops and a friendlier `/fork` “not
ready yet” UX.
- Update tests to match deferred semantics across
start/read/archive/unarchive/fork/resume/review flows.
- Improved resilience of the user_shell test, which should be unrelated to
this change but is apparently affected by timing changes

For Reviewers:
* The primary change is in recorder.rs
* Most of the other changes were to fix up broken assumptions in
existing tests

Testing:
* Manually tested CLI
* Exercised app server paths by manually running IDE Extension with
rebuilt CLI binary
* Only user-visible change is that `/fork` in TUI generates visible
error if used prior to first turn
2026-02-07 23:05:03 -08:00
pakrym-oai
6d08298f4e Fallback to HTTP on UPGRADE_REQUIRED (#10824)
Allow the server to trigger a connection downgrade in case the protocol
changes in incompatible ways.
2026-02-08 05:06:33 +00:00
Chriss4123
d68e9c0f19 fix(tui): rehydrate drafts and restore image placeholders (#9040)
Fixes #9050

When a draft is stashed with Ctrl+C, we now persist the full draft state
(text elements, local image paths, and pending paste payloads) in local
history. Up/Down recall rehydrates placeholder elements and attachments
so styling remains correct and large pastes still expand on submit.
Persistent (cross‑session) history remains text‑only.

Backtrack prefills now reuse the selected user message’s text elements
and local image paths, so image placeholders/attachments rehydrate when
rolling back.

External editor replacements keep only attachments whose placeholders
remain and then normalize image placeholders to `[Image #1]..[Image #N]`
to keep the attachment mapping consistent.

Docs:
- docs/tui-chat-composer.md

Testing:
- just fix -p codex-tui
- cargo test -p codex-tui

Co-authored-by: Eric Traut <etraut@openai.com>
2026-02-07 20:08:45 -08:00
Anton Panasenko
a94505a92a feat: enable permessage-deflate for websockets (#10966)
note:
unfortunately, tokio-tungstenite / tungstenite upgrade triggers some
problems with linker of rama-tls-boring with openssl:
```
error: linking with `/Users/apanasenko/Library/Caches/cargo-zigbuild/0.20.1/zigcc-x86_64-unknown-linux-musl-ff6a.sh` failed: exit status: 1
  |
  = note:  "/Users/apanasenko/Library/Caches/cargo-zigbuild/0.20.1/zigcc-x86_64-unknown-linux-musl-ff6a.sh" "-m64" "<sysroot>/lib/rustlib/x86_64-unknown-linux-musl/lib/self-contained/rcrt1.o" "<sysroot>/lib/rustlib/x86_64-unknown-linux-musl/lib/self-contained/crti.o" "<sysroot>/lib/rustlib/x86_64-unknown-linux-musl/lib/self-contained/crtbeginS.o" "<1 object files omitted>" "-Wl,--as-needed" "-Wl,-Bstatic" "/var/folders/kt/52y_g75x3ng8ktvk3rfwm6400000gp/T/rustcyGQdYm/{liblzma_sys-662a82316f96ec30,libbzip2_sys-bf78a2d58d5cbce6,liblibsqlite3_sys-6c004987fd67a36a,libtree_sitter_bash-220b99a97d331ab7,libtree_sitter-858f0a1dbfea58bd,libzstd_sys-6eb237deec748c5b,libring-2a87376483bf916f,libopenssl_sys-7c189e68b37fe2bb,liblibz_sys-4344eef4345520b1,librama_boring_sys-0414e98115015ee0}.rlib" "-lc++" "-lc++abi" "-lunwind" "-lc" "<sysroot>/lib/rustlib/x86_64-unknown-linux-musl/lib/libcompiler_builtins-*.rlib" "-L" "/var/folders/kt/52y_g75x3ng8ktvk3rfwm6400000gp/T/rustcyGQdYm/raw-dylibs" "-Wl,-Bdynamic" "-Wl,--eh-frame-hdr" "-Wl,-z,noexecstack" "-nostartfiles" "-L" "/Users/apanasenko/code/codex/codex-rs/target/x86_64-unknown-linux-musl/release/build/libz-sys-ff5ea50d88c28ffb/out/lib" "-L" "/Users/apanasenko/code/codex/codex-rs/target/x86_64-unknown-linux-musl/release/build/ring-bdec3dddc19f5a5e/out" "-L" "/Users/apanasenko/code/codex/codex-rs/target/x86_64-unknown-linux-musl/release/build/openssl-sys-96e0870de3ca22bc/out/openssl-build/install/lib" "-L" "/Users/apanasenko/code/codex/codex-rs/target/x86_64-unknown-linux-musl/release/build/zstd-sys-0cc37a5da1481740/out" "-L" "/Users/apanasenko/code/codex/codex-rs/target/x86_64-unknown-linux-musl/release/build/tree-sitter-72d2418073317c0f/out" "-L" "/Users/apanasenko/code/codex/codex-rs/target/x86_64-unknown-linux-musl/release/build/tree-sitter-bash-bfd293a9f333ce6a/out" "-L" "/Users/apanasenko/code/codex/codex-rs/target/x86_64-unknown-linux-musl/release/build/libsqlite3-sys-b78b2cfb81a330fc/out" "-L" "/Users/apanasenko/code/codex/codex-rs/target/x86_64-unknown-linux-musl/release/build/bzip2-sys-69a145cc859ef275/out/lib" "-L" "/Users/apanasenko/code/codex/codex-rs/target/x86_64-unknown-linux-musl/release/build/lzma-sys-07e92d0b6baa6fd4/out" "-L" "/Users/apanasenko/code/codex/codex-rs/target/x86_64-unknown-linux-musl/release/build/rama-boring-sys-0bc2dfbf669addc4/out/build/crypto/" "-L" "/Users/apanasenko/code/codex/codex-rs/target/x86_64-unknown-linux-musl/release/build/rama-boring-sys-0bc2dfbf669addc4/out/build/ssl/" "-L" "/Users/apanasenko/code/codex/codex-rs/target/x86_64-unknown-linux-musl/release/build/rama-boring-sys-0bc2dfbf669addc4/out/build/" "-L" "/Users/apanasenko/code/codex/codex-rs/target/x86_64-unknown-linux-musl/release/build/rama-boring-sys-0bc2dfbf669addc4/out/build" "-L" "<sysroot>/lib/rustlib/x86_64-unknown-linux-musl/lib/self-contained" "-L" "<sysroot>/lib/rustlib/x86_64-unknown-linux-musl/lib" "-o" "/Users/apanasenko/code/codex/codex-rs/target/x86_64-unknown-linux-musl/release/deps/codex_network_proxy-d08268b863517761" "-Wl,--gc-sections" "-static-pie" "-Wl,-z,relro,-z,now" "-Wl,-O1" "-Wl,--strip-all" "-nodefaultlibs" "<sysroot>/lib/rustlib/x86_64-unknown-linux-musl/lib/self-contained/crtendS.o" "<sysroot>/lib/rustlib/x86_64-unknown-linux-musl/lib/self-contained/crtn.o"
  = note: some arguments are omitted. use `--verbose` to show all linker arguments
  = note: warning: ignoring deprecated linker optimization setting '1'
          warning: unable to open library directory '/Users/apanasenko/code/codex/codex-rs/target/x86_64-unknown-linux-musl/release/build/rama-boring-sys-0bc2dfbf669addc4/out/build/crypto/': FileNotFound
          ld.lld: error: duplicate symbol: SSL_export_keying_material
          >>> defined at ssl_lib.c:3816 (ssl/ssl_lib.c:3816)
          >>>            libssl-lib-ssl_lib.o:(SSL_export_keying_material) in archive /var/folders/kt/52y_g75x3ng8ktvk3rfwm6400000gp/T/rustcyGQdYm/libopenssl_sys-7c189e68b37fe2bb.rlib
          >>> defined at t1_enc.cc:205 (/Users/apanasenko/code/codex/codex-rs/target/x86_64-unknown-linux-musl/release/build/rama-boring-sys-0bc2dfbf669addc4/out/boringssl/ssl/t1_enc.cc:205)
          >>>            t1_enc.cc.o:(.text.SSL_export_keying_material+0x0) in archive /var/folders/kt/52y_g75x3ng8ktvk3rfwm6400000gp/T/rustcyGQdYm/librama_boring_sys-0414e98115015ee0.rlib

          ld.lld: error: duplicate symbol: d2i_ASN1_TIME
          >>> defined at a_time.c:27 (crypto/asn1/a_time.c:27)
          >>>            libcrypto-lib-a_time.o:(d2i_ASN1_TIME) in archive /var/folders/kt/52y_g75x3ng8ktvk3rfwm6400000gp/T/rustcyGQdYm/libopenssl_sys-7c189e68b37fe2bb.rlib
          >>> defined at a_time.cc:34 (/Users/apanasenko/code/codex/codex-rs/target/x86_64-unknown-linux-musl/release/build/rama-boring-sys-0bc2dfbf669addc4/out/boringssl/crypto/asn1/a_time.cc:34)
          >>>            a_time.cc.o:(.text.d2i_ASN1_TIME+0x0) in archive /var/folders/kt/52y_g75x3ng8ktvk3rfwm6400000gp/T/rustcyGQdYm/librama_boring_sys-0414e98115015ee0.rlib
``` 

that force me to migrate away from rama-tls-boring to rama-tls-rustls
and pin `ring` for rustls.
2026-02-07 17:59:34 -08:00
pakrym-oai
8fe5066bcc Simplify pre-connect (#11040) 2026-02-07 15:52:03 -08:00
Michael Bolin
2e89cb9117 feat: include state of [experimental_network] in /debug-config output (#11039)
#10958 introduced experimental support for a network config in
`/etc/codex/requirements.toml`, so this extends `/debug-config` to
surface this information, if set, which should make it easier to debug.
2026-02-07 21:38:12 +00:00
Charley Cunningham
e6662d6387 app-server: treat null mode developer instructions as built-in defaults (#10983)
## Summary
- make `turn/start` normalize
`collaborationMode.settings.developer_instructions: null` to the
built-in instructions for the selected mode
- prevent app-server clients from accidentally clearing mode-switch
developer instructions by sending `null`
- document this behavior in the v2 protocol and app-server docs

## What changed
- `codex-rs/app-server/src/codex_message_processor.rs`
  - added a small `normalize_turn_start_collaboration_mode` helper
  - in `turn_start`, apply normalization before `OverrideTurnContext`
- `codex-rs/app-server/tests/suite/v2/turn_start.rs`
- extended `turn_start_accepts_collaboration_mode_override_v2` to assert
the outgoing request includes default-mode instruction text when the
client sends `developer_instructions: null`
- `codex-rs/app-server-protocol/src/protocol/v2.rs`
- clarified `TurnStartParams.collaboration_mode` docs:
`settings.developer_instructions: null` means use built-in mode
instructions
- regenerated schema fixture:
- `codex-rs/app-server-protocol/schema/typescript/v2/TurnStartParams.ts`
- docs:
  - `codex-rs/app-server/README.md`
  - `codex-rs/docs/codex_mcp_interface.md`
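
A minimal sketch of the normalization rule described above (types simplified to strings; the real helper operates on the collaboration-mode settings):

```rust
/// `developer_instructions: null` from the client means "use the built-in
/// instructions for the selected mode", never "clear them".
fn normalize_developer_instructions(
    client_value: Option<String>,
    builtin_for_mode: &str,
) -> String {
    client_value.unwrap_or_else(|| builtin_for_mode.to_string())
}
```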
2026-02-07 12:59:41 -08:00
viyatb-oai
739908a12c feat(core): add network constraints schema to requirements.toml (#10958)
## Summary

Add `requirements.toml` schema support for admin-defined network
constraints in the requirements layer

example config:

```toml
[experimental_network]
enabled = true
allowed_domains = ["api.openai.com"]
denied_domains = ["example.com"]
```
2026-02-07 19:48:24 +00:00
Eric Traut
16e7cf05d2 Fixed a flaky Windows test that is consistently causing a CI failure (#10987)
Loop wait_for_complete/wait_for_updates_at_least until deadline to
prevent Windows CI false timeouts in query-change session tests.
2026-02-07 09:08:13 -08:00
Eric Traut
10336068db Fix flaky windows CI test (#10993)
Hardens the PTY Python REPL test and makes MCP test startup deterministic

**Summary**
- `utils/pty/src/tests.rs`
- Added a REPL readiness handshake (`wait_for_python_repl_ready`) that
repeatedly sends a marker and waits for it in PTY output before sending
test commands.
  - Updated `pty_python_repl_emits_output_and_exits` to:
    - wait for readiness first,
    - preserve startup output,
    - append output collected through process exit.
- Reduces Windows/ConPTY flakiness from early stdin writes racing REPL
startup.

- `mcp-server/tests/suite/codex_tool.rs`
- Avoid remote model refresh during MCP test startup, reducing
timeout-prone nondeterminism.
2026-02-07 08:55:42 -08:00
jif-oai
83c74125bc Bootstrap shell commands via user shell snapshot (#10909)
Summary
- wrap `shell -lc` executions that use a snapshot with the session shell
so the saved environment is sourced before delegating to the original
shell
- escape single quotes in the generated wrapper and add tests covering
Bash/Zsh/sh session bootstrapping
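
A minimal sketch of the quoting involved, with hypothetical `shell_single_quote`/`wrap_with_snapshot` helpers standing in for the real implementation:

```rust
/// Escape a string for use inside single quotes in a POSIX shell command.
/// Each embedded `'` becomes `'\''` (close quote, escaped quote, reopen quote).
fn shell_single_quote(s: &str) -> String {
    format!("'{}'", s.replace('\'', r#"'\''"#))
}

/// Build an illustrative wrapper that sources a saved environment snapshot
/// before running the original `-lc` command in the user's shell.
fn wrap_with_snapshot(user_shell: &str, snapshot_path: &str, original_cmd: &str) -> Vec<String> {
    let inner = format!(". {} && {}", shell_single_quote(snapshot_path), original_cmd);
    vec![user_shell.to_string(), "-c".to_string(), inner]
}

fn main() {
    let argv = wrap_with_snapshot("/bin/zsh", "/tmp/env.snapshot", "echo 'hello world'");
    println!("{argv:?}");
}
```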

Testing
- Not run (not requested)
2026-02-07 17:36:44 +01:00
jif-oai
62605fa471 Add resume_agent collab tool (#10903)
Summary
- add the new resume_agent collab tool path through core, protocol, and
the app server API, including the resume events
- update the schema/TypeScript definitions plus docs so resume_agent
appears in generated artifacts and README
- note that resumed agents rehydrate rollout history without overwriting
their base instructions

Testing
- Not run (not requested)
2026-02-07 17:31:45 +01:00
Michael Bolin
4cd0c42a28 fix: normalize line endings when reading file on Windows (#10988)
I did not wait for CI on https://github.com/openai/codex/pull/10980
because it was blocking an alpha release, but apparently it broke the
Windows build.
2026-02-06 23:49:19 -08:00
Charley Cunningham
f3f35526a8 Show left/right arrows to navigate in tui request_user_input (#10921)
<img width="785" height="185" alt="Screenshot 2026-02-06 at 10 25 13 AM"
src="https://github.com/user-attachments/assets/402a6e79-4626-4df9-b3da-bc2f28e64611"
/>

<img width="784" height="213" alt="Screenshot 2026-02-06 at 10 26 37 AM"
src="https://github.com/user-attachments/assets/cf9614b2-aa1e-4c61-8579-1d2c7e1c7dc1"
/>

"left/right to navigate questions" in request_user_input footer
2026-02-06 23:41:08 -08:00
Eric Traut
3779b52e2d Do not poll for usage when using API Key auth (#10973)
Fixes #10869

- Gate TUI rate-limit polling on ChatGPT-auth providers only.
- `prefetch_rate_limits()` now checks `should_prefetch_rate_limits()`.
- New gate requires:
  - `config.model_provider.requires_openai_auth`
  - cached auth is ChatGPT (`CodexAuth::is_chatgpt_auth`)
- Prevents `/wham/usage` polling in API/custom-endpoint profiles.
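
A minimal sketch of the gate, with illustrative types standing in for the real config and cached-auth objects:

```rust
// Illustrative shapes only; the real gate reads Config and CodexAuth state.
struct ModelProvider {
    requires_openai_auth: bool,
}

#[derive(PartialEq)]
enum CachedAuth {
    ChatGpt,
    ApiKey,
}

/// Poll usage/rate limits only when the provider uses OpenAI auth *and*
/// the cached credentials are ChatGPT auth (not an API key).
fn should_prefetch_rate_limits(provider: &ModelProvider, auth: Option<&CachedAuth>) -> bool {
    provider.requires_openai_auth && auth == Some(&CachedAuth::ChatGpt)
}

fn main() {
    let provider = ModelProvider { requires_openai_auth: true };
    assert!(should_prefetch_rate_limits(&provider, Some(&CachedAuth::ChatGpt)));
    assert!(!should_prefetch_rate_limits(&provider, Some(&CachedAuth::ApiKey)));
    assert!(!should_prefetch_rate_limits(&provider, None));
}
```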
2026-02-06 23:26:44 -08:00
Michael Bolin
18bb25557c fix: use expected line ending in codex-rs/core/config.schema.json (#10977)
Fixes a line ending that was altered in https://github.com/openai/codex/pull/10861.

This is breaking the release due to:

a118494323/.github/workflows/rust-release.yml (L54-L55)

This PR updates the test to check for this so we should catch it in CI (or when running tests locally):

a118494323/codex-rs/core/src/config/schema.rs (L105-L131)
2026-02-06 22:30:57 -08:00
Michael Bolin
a118494323 feat: add support for allowed_web_search_modes in requirements.toml (#10964)
This PR makes it possible to disable live web search via an enterprise
config even if the user is running in `--yolo` mode (though cached web
search will still be available). To do this, create
`/etc/codex/requirements.toml` as follows:

```toml
# "live" is not allowed; "disabled" is allowed even though not listed explicitly.
allowed_web_search_modes = ["cached"]
```

Or set `requirements_toml_base64` MDM as explained on
https://developers.openai.com/codex/security/#locations.

### Why
- Enforce admin/MDM/`requirements.toml` constraints on web-search
behavior, independent of user config and per-turn sandbox defaults.
- Ensure per-turn config resolution and review-mode overrides never
crash when constraints are present.

### What
- Add `allowed_web_search_modes` to requirements parsing and surface it
in app-server v2 `ConfigRequirements` (`allowedWebSearchModes`), with
fixtures updated.
- Define a requirements allowlist type (`WebSearchModeRequirement`) and
normalize semantics:
  - `disabled` is always implicitly allowed (even if not listed).
  - An empty list is treated as `["disabled"]`.
- Make `Config.web_search_mode` a `Constrained<WebSearchMode>` and apply
requirements via `ConstrainedWithSource<WebSearchMode>`.
- Update per-turn resolution (`resolve_web_search_mode_for_turn`) to:
- Prefer `Live → Cached → Disabled` when
`SandboxPolicy::DangerFullAccess` is active (subject to requirements),
unless the user preference is explicitly `Disabled`.
- Otherwise, honor the user’s preferred mode, falling back to an allowed
mode when necessary.
- Update TUI `/debug-config` and app-server mapping to display
normalized `allowed_web_search_modes` (including implicit `disabled`).
- Fix web-search integration tests to assert cached behavior under
`SandboxPolicy::ReadOnly` (since `DangerFullAccess` legitimately prefers
`live` when allowed).
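
The normalization and per-turn preference order described above, as a minimal sketch (types and function names are illustrative, not the crate's actual API):

```rust
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
enum WebSearchMode {
    Live,
    Cached,
    Disabled,
}

/// Normalize the requirements allowlist: `Disabled` is always implicitly
/// allowed, and an empty list means "only Disabled".
fn normalize_allowed(mut allowed: Vec<WebSearchMode>) -> Vec<WebSearchMode> {
    if !allowed.contains(&WebSearchMode::Disabled) {
        allowed.push(WebSearchMode::Disabled);
    }
    allowed
}

/// Pick the effective mode for a turn under the documented preference order.
fn resolve_for_turn(
    user_pref: WebSearchMode,
    full_access_sandbox: bool,
    allowed: &[WebSearchMode],
) -> WebSearchMode {
    let is_allowed = |m: WebSearchMode| allowed.contains(&m);
    if full_access_sandbox && user_pref != WebSearchMode::Disabled {
        // Prefer Live, then Cached, then Disabled, subject to requirements.
        for m in [WebSearchMode::Live, WebSearchMode::Cached, WebSearchMode::Disabled] {
            if is_allowed(m) {
                return m;
            }
        }
    }
    // Otherwise honor the user's preference, falling back to an allowed mode.
    if is_allowed(user_pref) {
        user_pref
    } else {
        *allowed.first().unwrap_or(&WebSearchMode::Disabled)
    }
}

fn main() {
    // e.g. allowed_web_search_modes = ["cached"] in requirements.toml
    let allowed = normalize_allowed(vec![WebSearchMode::Cached]);
    assert_eq!(resolve_for_turn(WebSearchMode::Live, true, &allowed), WebSearchMode::Cached);
}
```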
2026-02-07 05:55:15 +00:00
Eric Traut
82c981cafc Process-group cleanup for stdio MCP servers to prevent orphan process storms (#10710)
This PR changes stdio MCP child processes to run in their own process
group
* Add guarded teardown in codex-rmcp-client: send SIGTERM to the group
first, then SIGKILL after a short grace period.
* Add terminate_process_group helper in process_group.rs.
* Add Unix regression test in process_group_cleanup.rs to verify wrapper
+ grandchild are reaped on client drop.

Addresses reported MCP process/thread storm: #10581
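
A minimal Unix-only sketch of the teardown sequence, assuming the `libc` crate; the real helper lives in `process_group.rs`:

```rust
use std::os::unix::process::CommandExt;
use std::process::Command;
use std::{thread, time::Duration};

/// SIGTERM the whole group first, wait a short grace period, then SIGKILL
/// whatever is still alive. Errors are ignored: the group may already be gone.
fn terminate_process_group(pgid: libc::pid_t, grace: Duration) {
    unsafe { libc::killpg(pgid, libc::SIGTERM) };
    thread::sleep(grace);
    unsafe { libc::killpg(pgid, libc::SIGKILL) };
}

fn main() -> std::io::Result<()> {
    // Spawn the child in its own process group so teardown also reaps grandchildren.
    let mut child = Command::new("sleep").arg("60").process_group(0).spawn()?;
    terminate_process_group(child.id() as libc::pid_t, Duration::from_millis(500));
    child.wait()?; // reap the direct child after the group was signalled
    Ok(())
}
```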
2026-02-06 21:26:36 -08:00
Eric Traut
4d52428fa2 Fixed a flaky test (#10970)
## Summary

Stabilize v2 review integration tests by making them hermetic with
respect to model discovery.

`app-server` review tests were intermittently timing out in CI
(especially on Windows runners) because their test config allowed remote
model refresh. During `thread/start`, the test process could issue live
`/v1/models` requests, introducing external network latency and
nondeterministic timing before review flow assertions.

This change disables remote model fetching in the review test config
helper used by these tests.
2026-02-06 21:26:26 -08:00
viyatb-oai
8cd46ebad6 refactor(network-proxy): flatten network config under [network] (#10965)
Summary:
- Rename config table from network_proxy to network.
- Flatten allowed_domains, denied_domains, allow_unix_sockets, and
allow_local_binding onto NetworkProxySettings.
- Update runtime, state constraints, tests, and README to the new config
shape.
2026-02-07 05:22:44 +00:00
sayan-oai
5d2702f6b8 fix(tui): conditionally restore status indicator using message phase (#10947)
TLDR: use new message phase field emitted by preamble-supported models
to determine whether an AgentMessage is mid-turn commentary. If so,
restore the status indicator afterwards to indicate the turn has not
completed.

### Problem
`commit_tick` hides the status indicator while streaming assistant text.
For preamble-capable models, that text can be commentary mid-turn, so
hiding was correct during streaming but restore timing mattered:
- restoring too aggressively caused jitter/flashing
- not restoring caused indicator to stay hidden before subsequent work
(tool calls, web search, etc.)

### Fix
- Add optional `phase` to `AgentMessageItem` and propagate it from
`ResponseItem::Message`
- Keep indicator hidden during streamed commit ticks, restore only when:
  - assistant item completes as `phase=commentary`, and
  - stream queues are idle + task is still running.
- Treat `phase=None` as final-answer behavior (no restore) to keep
existing behavior for non-preamble models
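
A toy decision helper capturing the restore condition above (the real check lives in the chat widget and reads actual stream/task state):

```rust
#[derive(Clone, Copy, PartialEq, Eq)]
enum MessagePhase {
    Commentary,
    FinalAnswer,
}

/// Restore the status indicator only when a completed assistant message was
/// mid-turn commentary and the turn is still doing work. `phase: None` keeps
/// the old (non-preamble) behavior of not restoring.
fn should_restore_status(
    phase: Option<MessagePhase>,
    stream_queues_idle: bool,
    task_running: bool,
) -> bool {
    matches!(phase, Some(MessagePhase::Commentary)) && stream_queues_idle && task_running
}

fn main() {
    assert!(should_restore_status(Some(MessagePhase::Commentary), true, true));
    assert!(!should_restore_status(None, true, true));
    assert!(!should_restore_status(Some(MessagePhase::FinalAnswer), true, true));
}
```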

### Tests
Add/update tests for:
- no idle-tick restore without commentary completion
- commentary completion restoring status before tool begin
- snapshot coverage for preamble/status behavior

---------

Co-authored-by: Josh McKinney <joshka@openai.com>
2026-02-07 02:39:52 +00:00
canvrno-oai
1446bd2b23 Mark Config.apps as experimental, correct schema generation issue (#10938)
This PR makes `Config.apps` experimental-only and fixes a TS schema
post-processing bug that removed needed imports. The bug happened
because import pruning only checked the inner type body after filtering,
not the full alias, so `JsonValue` got dropped from `Config.ts`. We now
prune against the full alias body and added a regression test for this
scenario.
2026-02-06 16:30:41 -08:00
Javi
87ce50f118 app-server: print help message to console when starting websockets server (#10943)
Follow-up to https://github.com/openai/codex/pull/10693

<img width="596" height="77" alt="image"
src="https://github.com/user-attachments/assets/9140df70-01d1-4c5a-85ee-ca15a09a0e77"
/>
2026-02-07 00:18:42 +00:00
daniel-oai
84bce2b8e6 TUI/Core: preserve duplicate skill/app mention selection across submit + resume (#10855)
## What changed

- In `codex-rs/core/src/skills/injection.rs`, we now honor explicit
`UserInput::Skill { name, path }` first, then fall back to text mentions
only when safe.
- In `codex-rs/tui/src/bottom_pane/chat_composer.rs`, mention selection
is now token-bound (selected mention is tied to the specific inserted
`$token`), and we snapshot bindings at submit time so selection is not
lost.
- In `codex-rs/tui/src/chatwidget.rs` and
`codex-rs/tui/src/bottom_pane/mod.rs`, submit/queue paths now consume
the submit-time mention snapshot (instead of rereading cleared composer
state).
- In `codex-rs/tui/src/mention_codec.rs` and
`codex-rs/tui/src/bottom_pane/chat_composer_history.rs`, history now
round-trips mention targets so resume restores the same selected
duplicate.
- In `codex-rs/tui/src/bottom_pane/skill_popup.rs` and
`codex-rs/tui/src/bottom_pane/chat_composer.rs`, duplicate labels are
normalized to `[Repo]` / `[App]`, app rows no longer show `Connected -`,
and description space is a bit wider.

<img width="550" height="163" alt="Screenshot 2026-02-05 at 9 56 56 PM"
src="https://github.com/user-attachments/assets/346a7eb2-a342-4a49-aec8-68dfec0c7d89"
/>
<img width="550" height="163" alt="Screenshot 2026-02-05 at 9 57 09 PM"
src="https://github.com/user-attachments/assets/5e04d9af-cccf-4932-98b3-c37183e445ed"
/>


## Before vs now

- Before: selecting a duplicate could still submit the default/repo
match, and resume could lose which duplicate was originally selected.
- Now: the exact selected target (skill path or app id) is preserved
through submit, queue/restore, and resume.

## Manual test

1. Build and run this branch locally:
   - `cd /Users/daniels/code/codex/codex-rs`
   - `cargo build -p codex-cli --bin codex`
   - `./target/debug/codex`
2. Open mention picker with `$` and pick a duplicate entry (not the
first one).
3. Confirm duplicate UI:
   - repo duplicate rows show `[Repo]`
   - app duplicate rows show `[App]`
   - app description does **not** start with `Connected -`
4. Submit the prompt, then press Up to restore draft and submit again.  
   Expected: it keeps the same selected duplicate target.
5. Use `/resume` to reopen the session and send again.  
Expected: restored mention still resolves to the same duplicate target.
2026-02-06 15:59:00 -08:00
alexsong-oai
daeef06bec add originator to otel (#10826) 2026-02-06 15:13:56 -08:00
Brian Yu
1fbf5ed06f Support alternative websocket API (#10861)
**Test plan**

```
cargo build -p codex-cli && RUST_LOG='codex_api::endpoint::responses_websocket=trace,codex_core::client=debug,codex_core::codex=debug' \
  ./target/debug/codex \
    --enable responses_websockets_v2 \
    --profile byok \
    --full-auto
```
2026-02-06 14:40:50 -08:00
Ahmed Ibrahim
ba8b5d9018 Treat compaction failure as failure state (#10927)
- Return compaction errors from local and remote compaction flows.
- Stop turns/tasks when auto-compaction fails instead of continuing execution.
2026-02-06 13:51:46 -08:00
Owen Lin
1751116ec6 chore(app-server): add experimental annotation to relevant fields (#10928)
These fields had always been documented as experimental/unstable with
docstrings, but now let's actually use the `experimental` annotation to
be more explicit.

- thread/start.experimentalRawEvents
- thread/resume.history
- thread/resume.path
- thread/fork.path
- turn/start.collaborationMode
- account/login/start.chatgptAuthTokens
2026-02-06 20:48:04 +00:00
Owen Lin
731f0f384a chore(app-server): update AGENTS.md for config + optional collection guidance (#10914)
Based on recent app-server PRs
2026-02-06 12:45:27 -08:00
Charley Cunningham
143daadb31 core: refresh developer instructions after compaction replacement history (#10574)
## Summary

When replaying compacted history (especially `replacement_history` from
remote compaction), we should not keep stale developer messages from
older session state. This PR trims developer-
role messages from compacted replacement history and reinjects fresh
developer instructions derived from current turn/session state.

This aligns compaction replay behavior with the intended "fresh
instructions after summary" model.

## Problem

Compaction replay had two paths:

- `Compacted { replacement_history: None }`: rebuilt with fresh initial
context
- `Compacted { replacement_history: Some(...) }`: previously used raw
replacement history as-is

The second path could carry stale developer instructions
(permissions/personality/collab-mode guidance) across session changes.

## What Changed

### 1) Added helper to refresh compacted developer instructions

- **File:** `codex-rs/core/src/compact.rs`
- **Function:** `refresh_compacted_developer_instructions(...)`

Behavior:
- remove all `ResponseItem::Message { role: "developer", .. }` from
compacted history
- append fresh developer messages from current
`build_initial_context(...)`

### 2) Applied helper in remote compaction flow

- **File:** `codex-rs/core/src/compact_remote.rs`
- After receiving compact endpoint output, refresh developer
instructions before replacing history and persisting
`replacement_history`.

### 3) Applied helper while reconstructing history from rollout

- **File:** `codex-rs/core/src/codex.rs`
- In `reconstruct_history_from_rollout(...)`, when processing
`Compacted` entries with `replacement_history`, refresh developer
instructions instead of directly replacing with raw history.

## Non-Goals / Follow-up

This PR does **not** address the existing first-turn-after-resume
double-injection behavior.
A follow-up PR will handle resume-time dedup/idempotence separately.

## Codex author
`codex fork 019c25e6-706e-75d1-9198-688ec00a8256`
2026-02-06 12:25:08 -08:00
Josh McKinney
e416e578bb core: preconnect Responses websocket for first turn (#10698)
## Problem
The first user turn can pay websocket handshake latency even when a
session has already started. We want to reduce that initial delay while
preserving turn semantics and avoiding any prompt send during startup.

Reviewer feedback also called out duplicated connect/setup paths and
unnecessary preconnect state complexity.

## Mental model
`ModelClient` owns session-scoped transport state. During session
startup, it can opportunistically warm one websocket handshake slot. A
turn-scoped `ModelClientSession` adopts that slot once if available,
restores captured sticky turn-state, and otherwise opens a websocket
through the same shared connect path.

If startup preconnect is still in flight, first turn setup awaits that
task and treats it as the first connection attempt for the turn.

Preconnect is handshake-only. The first `response.create` is still sent
only when a turn starts.
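
A toy model of the single-slot cache described above (the in-flight state and captured turn-state are omitted; `Conn` and the `dial` closures stand in for the real websocket handshake):

```rust
enum Preconnect<Conn> {
    Idle,
    Ready(Conn),
}

struct ModelClient<Conn> {
    preconnect: Preconnect<Conn>,
}

impl<Conn> ModelClient<Conn> {
    fn new() -> Self {
        Self { preconnect: Preconnect::Idle }
    }

    /// Best-effort warmup at session startup; failures just leave the slot idle.
    fn pre_establish_connection(&mut self, dial: impl FnOnce() -> Option<Conn>) {
        if matches!(self.preconnect, Preconnect::Idle) {
            if let Some(conn) = dial() {
                self.preconnect = Preconnect::Ready(conn);
            }
        }
    }

    /// A turn adopts the warmed slot if present, otherwise dials through the
    /// same shared connect path. Either way the slot is empty afterwards.
    fn connection_for_turn(&mut self, dial: impl FnOnce() -> Option<Conn>) -> Option<Conn> {
        match std::mem::replace(&mut self.preconnect, Preconnect::Idle) {
            Preconnect::Ready(conn) => Some(conn),
            Preconnect::Idle => dial(),
        }
    }
}

fn main() {
    let mut client: ModelClient<&'static str> = ModelClient::new();
    client.pre_establish_connection(|| Some("warmed websocket"));
    // First turn adopts the warmed connection; the next one dials fresh.
    assert_eq!(client.connection_for_turn(|| Some("fresh websocket")), Some("warmed websocket"));
    assert_eq!(client.connection_for_turn(|| Some("fresh websocket")), Some("fresh websocket"));
}
```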

## Non-goals
This change does not make preconnect required for correctness and does
not change prompt/turn payload semantics. It also does not expand
fallback behavior beyond clearing preconnect state when fallback
activates.

## Tradeoffs
The implementation prioritizes simpler ownership and shared connection
code over header-match gating for reuse. The single-slot cache keeps
lifecycle straightforward but only benefits the immediate next turn.

Awaiting in-flight preconnect has the same app-level connect-timeout
semantics as existing websocket connect behavior (no new timeout class
introduced by this PR).

## Architecture
`core/src/client.rs`:
- Added session-level preconnect lifecycle state (`Idle` / `InFlight` /
`Ready`) carrying one warmed websocket plus optional captured
turn-state.
- Added `pre_establish_connection()` startup warmup and `preconnect()`
handshake-only setup.
- Deduped auth/provider resolution into `current_client_setup()` and
websocket handshake wiring into `connect_websocket()` /
`build_websocket_headers()`.
- Updated turn websocket path to adopt preconnect first, await in-flight
preconnect when present, then create a new websocket only when needed.
- Ensured fallback activation clears warmed preconnect state.
- Added documentation for lifecycle, ownership, sticky-routing
invariants, and timeout semantics.

`core/src/codex.rs`:
- Session startup invokes `model_client.pre_establish_connection(...)`.
- Turn metadata resolution uses the shared timeout helper.

`core/src/turn_metadata.rs`:
- Centralized shared timeout helper used by both turn-time metadata
resolution and startup preconnect metadata building.

`core/tests/common/responses.rs` + websocket test suites:
- Added deterministic handshake waiting helper (`wait_for_handshakes`)
with bounded polling.
- Added startup preconnect and in-flight preconnect reuse coverage.
- Fallback expectations now assert exactly two websocket attempts in
covered scenarios (startup preconnect + turn attempt before fallback
sticks).

## Observability
Preconnect remains best-effort and non-fatal. Existing
websocket/fallback telemetry remains in place, and debug logs now make
preconnect-await behavior and preconnect failures easier to reason
about.

## Tests
Validated with:
1. `just fmt`
2. `cargo test -p codex-core websocket_preconnect -- --nocapture`
3. `cargo test -p codex-core websocket_fallback -- --nocapture`
4. `cargo test -p codex-core
websocket_first_turn_waits_for_inflight_preconnect -- --nocapture`
2026-02-06 19:08:24 +00:00
viyatb-oai
8896ca0ee6 fix(linux-sandbox): block io_uring syscalls in no-network seccomp policy (#10814)
## Summary

- Add seccomp deny rules for `io_uring` syscalls in the Linux sandbox
network policy.
- Specifically deny:
  - `SYS_io_uring_setup`
  - `SYS_io_uring_enter`
  - `SYS_io_uring_register`
2026-02-06 11:00:54 -08:00
viyatb-oai
db0d8710d5 feat(network-proxy): add structured policy decision to blocked errors (#10420)
## Summary
Add explicit, model-visible network policy decision metadata to blocked
proxy responses/errors.

Introduces a standardized prefix line: `CODEX_NETWORK_POLICY_DECISION
{json}`

and wires it through blocked paths for:
- HTTP requests
- HTTPS CONNECT
- SOCKS5 TCP/UDP denials
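
A minimal sketch of emitting such a prefix line, assuming the `serde_json` crate; the actual decision schema is defined by the PR, and these field names are illustrative:

```rust
fn blocked_decision_line(
    reason: &str,
    source: &str,
    protocol: &str,
    host: &str,
    port: u16,
) -> String {
    // Compact JSON after the standardized prefix so the model can parse it.
    let decision = serde_json::json!({
        "reason": reason,
        "source": source,
        "protocol": protocol,
        "host": host,
        "port": port,
    });
    format!("CODEX_NETWORK_POLICY_DECISION {decision}")
}

fn main() {
    // e.g. appended to the error body for a blocked HTTPS CONNECT.
    println!(
        "{}",
        blocked_decision_line("denied_domain", "requirements", "https_connect", "example.com", 443)
    );
}
```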

## Why
The model should see *why* a request was blocked
(reason/source/protocol/host/port) so it can choose the correct next
action.

## Notes
- This PR is intentionally independent of config-layering/network-rule
runtime integration.
- Focus is blocked decision surface only.
2026-02-06 10:46:50 -08:00
canvrno-oai
36c16e0c58 Add app configs to config.toml (#10822)
Adds app configs to config.toml + tests
2026-02-06 10:29:08 -08:00
Charley Cunningham
b7ecd166a6 Queue nudges while plan generating (#10457)
## Summary

This PR fixes a UI/streaming race when nudged or steer-enabled messages
are queued during an active Plan stream.

Previously, `submit_user_message_with_mode` switched collaboration mode
immediately (via `set_collaboration_mask`) even when the message was
queued. If that happened mid-Plan stream, `active_mode_kind` could flip
away from Plan before the turn finished, causing subsequent
`on_plan_delta` updates to be ignored in the UI.

Now, mode switching is deferred until the queued message is actually
submitted.

## What changed

- Added a per-message deferred mode override on `UserMessage`:
  - `collaboration_mode_override: Option<CollaborationModeMask>`
- Updated `submit_user_message_with_mode` to:
  - create a `UserMessage` carrying the mode override
- queue or submit that message without mutating global mode immediately
- Updated `submit_user_message` to:
- apply `collaboration_mode_override` just before constructing/sending
`Op::UserTurn`
- Kept queueing condition scoped to active Plan stream rendering:
- queue only while plan output is actively streaming in TUI
(`plan_stream_controller.is_some()`)

## Why

This preserves Plan mode for the remainder of the in-flight Plan turn,
so streamed plan deltas continue rendering correctly, while still
ensuring the follow-up queued message is sent with the intended
collaboration mode.

## Behavior after this change

- If a nudged/steer submission happens while Plan output is actively
streaming:
  - message is queued
  - UI stays in Plan mode for the running turn
- once dequeued/submitted, mode override is applied and the message is
sent in the intended mode
- If no Plan stream is active:
- submission proceeds immediately and mode override is applied as before

## Tests

Added/updated coverage in `tui/src/chatwidget/tests.rs`:

- `submit_user_message_with_mode_queues_while_plan_stream_is_active`
  - asserts mode remains Plan while queued
- asserts mode switches to Code when queued message is actually
submitted
- `submit_user_message_with_mode_submits_when_plan_stream_is_not_active`
- `steer_enter_queues_while_plan_stream_is_active`
- `steer_enter_submits_when_plan_stream_is_not_active`

Also updated existing `UserMessage { ... }` test fixtures to include the
new field.

## Codex author
`codex fork 019c1047-d5d5-7c92-a357-6009604dc7e8`
2026-02-06 09:43:00 -08:00
Eric Traut
4521a6e852 Removed "exec_policy" feature flag (#10851)
This is no longer needed because it's on by default
2026-02-06 08:59:47 -08:00
jif-oai
aab61934af Handle required MCP startup failures across components (#10902)
Summary
- add a `required` flag for MCP servers everywhere config/CLI data is
touched so mandatory helpers can be round-tripped
- have `codex exec` and `codex app-server` thread start/resume fail fast
when required MCPs fail to initialize
2026-02-06 17:14:37 +01:00
jif-oai
3800173459 feat: backfill async again (#10894) 2026-02-06 15:41:52 +01:00
jif-oai
1020872eca nit: test an (#10892)
2026-02-06 14:41:53 +01:00
jif-oai
66554abfb9 sec: fix version of time to prevent vulnerability (#10876)
RUSTSEC-2026-0009
2026-02-06 12:10:07 +01:00
Eric Traut
dd80e332c4 Removed the "remote_compaction" feature flag (#10840)
This feature is always on now
2026-02-05 23:54:57 -08:00
Eric Traut
f61226d32a Personality setting is no longer available in experimental menu (#10852)
This PR removes the inaccurate "Disable in /experimental." statement now
that the "personality" feature flag is no longer experimental.

This addresses #10850
2026-02-05 22:19:09 -08:00
Eric Traut
e5c1a2d6fb Log an event (info only) when we receive a file watcher event (#10843) 2026-02-05 20:24:16 -08:00
Ahmed Ibrahim
048e0f3888 Gate app tooltips to macOS (#10784)
- Gate app promo tips to macOS and use non-app copy elsewhere.
2026-02-05 19:18:08 -08:00
Anton Panasenko
4ee039744e feat: expose detailed metrics to runtime metrics (#10699) 2026-02-05 18:22:30 -08:00
gt-oai
d74fa8edd1 Print warning when config does not meet requirements (#10792)
<img width="1019" height="284" alt="Screenshot 2026-02-05 at 23 34 08"
src="https://github.com/user-attachments/assets/19ec3ce1-3c3b-40f5-b251-a31d964bf3bb"
/>

Currently, if a config value is set that fails the requirements, we exit
Codex.

Now, instead of this, we print a warning and default to a
requirements-permitting value.
2026-02-06 01:12:44 +00:00
Owen Lin
0d8b2b74c4 feat(app-server): turn/steer API (#10821)
This PR adds a dedicated `turn/steer` API for appending user input to an
in-flight turn.

## Motivation
Currently, steering in the app is implemented by just calling
`turn/start` while a turn is running. This has some really weird quirks:
- Client gets back a new `turn.id`, even though streamed
events/approvals remained tied to the original active turn ID.
- All the various turn-level override params on `turn/start` do not
apply to the "steer", and would only apply to the next real turn.
- There can also be a race condition where the client thinks the turn is
active but the server has already completed it, so there might be bugs
if the client has baked in some client-specific behavior thinking it's a
steer when in fact the server kicked off a new turn. This is
particularly possible when running a client against a remote app-server.

Having a dedicated `turn/steer` API eliminates all those quirks.

`turn/steer` behavior:
- Requires an active turn on threadId. Returns a JSON-RPC error if there
is no active turn.
- If expectedTurnId is provided, it must match the active turn (more
useful when connecting to a remote app-server).
- Does not emit `turn/started`.
- Does not accept turn overrides (`cwd`, `model`, `sandbox`, etc.) or
`outputSchema` to accurately reflect that these are not applied when
steering.
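
A hedged sketch of what the wire payload could look like, assuming `serde` (with derive) and `serde_json`; the real `TurnSteerParams` in `app-server-protocol` may differ in fields:

```rust
use serde::{Deserialize, Serialize};

#[derive(Debug, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
struct TurnSteerParams {
    thread_id: String,
    /// Optional guard: must match the active turn if provided.
    expected_turn_id: Option<String>,
    /// The user input appended to the in-flight turn (simplified here to text).
    input: String,
}

fn main() {
    let params = TurnSteerParams {
        thread_id: "thr_123".into(),
        expected_turn_id: Some("turn_456".into()),
        input: "also add unit tests".into(),
    };
    // Serializes with camelCase keys: threadId, expectedTurnId, input.
    println!("{}", serde_json::to_string_pretty(&params).unwrap());
}
```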
2026-02-06 00:35:04 +00:00
Matthew Zeng
729b016515 Add stage field for experimental flags. (#10793)
- [x] Add stage field for experimental flags.
2026-02-05 23:31:04 +00:00
Noah Jorgensen
dcea972db8 updates: use brew api for version check (#10809)
## Problem

`codex` currently prompts you to update via `brew upgrade --cask codex`
but the brew api does not return the new version

> <img width="1500" height="822" alt="Screenshot 2026-02-05 at 12 36
09 PM"
src="https://github.com/user-attachments/assets/9e12929d-95e8-43f4-8fba-ab93f5f76e73"
/>

## Solution

`codex-rs/tui/src/updates.rs` was using the [latest cask in
github](https://github.com/Homebrew/homebrew-cask/blob/HEAD/Casks/c/codex.rb)
but this does not agree with the brew api, which leads to the issue
above. Instead we use the [brew api json
endpoint](https://github.com/Homebrew/homebrew-cask/blob/HEAD/Casks/c/codex.rb)
to ensure our version check agrees with the upgrade command.
2026-02-05 15:12:27 -08:00
pakrym-oai
dbe47ea01a Send beta header with websocket connects (#10727) 2026-02-05 15:05:02 -08:00
sayan-oai
378f1cabe8 go back to auto-enabling web_search for azure (#10820)
###### What
Remove special-casing that prevented auto-enabling `web_search` for
Azure model provider users. Addresses #10071, #10257.

###### Why
Azure fixed their responsesapi implementation; `web_search` is now
supported on models it wasn't before (like `gpt-5.1-codex-max`).

This request now works:
```
curl "$AZURE_API_ENDPOINT" -H "Content-Type: application/json" -H "Authorization: Bearer $AZURE_API_KEY" -d '{
  "model": "gpt-5.1-codex-max",
  "tools": [
    { "type": "web_search" }
  ],
  "tool_choice": "auto",
  "input": "Find the sunrise time in Paris today and cite the source."
}'
```

###### Tests
Tested with above curl, removed Azure-specific tests.
2026-02-05 14:57:07 -08:00
xl-openai
43a7290f11 Sync app-server requirements API with refreshed cloud loader (#10815)
configRequirements/read now returns updated cloud requirements after
login.
2026-02-05 14:43:31 -08:00
jif-oai
e65f76947f other announcement (#10818) 2026-02-05 22:21:02 +00:00
Max Johnson
8473096efb Add app-server transport layer with websocket support (#10693)
- Adds --listen <URL> to codex app-server with two listen modes:
      - stdio:// (default, existing behavior)
      - ws://IP:PORT (new websocket transport)
  - Refactors message routing to be connection-aware:
- Tracks per-connection session state (initialize/experimental
capability)
      - Routes responses/errors to the originating connection
- Broadcasts server notifications/requests to initialized connections
- Updates initialization semantics to be per connection (not
process-global), and updates app-server docs accordingly.
- Adds websocket accept/read/write handling (JSON-RPC per text frame,
ping/pong handling, connection lifecycle events).

Testing

- Unit tests for transport URL parsing and targeted response/error
routing.
  - New websocket integration test validating:
      - per-connection initialization requirements
      - no cross-connection response leakage
      - same request IDs on different connections route independently.
2026-02-05 20:56:34 +00:00
jif-oai
428a9f6035 feat: wait for backfill to be ready (#10790) 2026-02-05 20:45:16 +00:00
pap-openai
529b539564 Add analytics for /rename and /fork (#10655) 2026-02-05 20:18:29 +00:00
sayan-oai
5602edc1d0 chore: limit update to 0.98.0 NUX to < 0.98.0 ver (#10787)
Seems like a footgun if we forget to remove this before releasing 0.99.0,
so the announcement is limited to versions < 0.98.0.
2026-02-05 12:11:32 -08:00
Matthew Zeng
7e81f63698 [app-server] Add a method to list experimental features. (#10721)
- [x] Add a method to list experimental features.
2026-02-05 20:04:01 +00:00
jif-oai
ddd09a9368 fix: announcement in prio (#10783) 2026-02-05 19:57:57 +00:00
sayan-oai
5fdf6f5efa chore: rm web-search-eligible header (#10660)
Default enablement of web_search is now client-side, so there is no need
to send eligibility headers to the backend.

Tested locally, headers no longer sent.

will wait for corresponding backend change to deploy before merging
2026-02-05 11:48:34 -08:00
iceweasel-oai
901d5b8fd6 add sandbox policy and sandbox name to codex.tool.call metrics (#10711)
This will give visibility into the success rate of the Windows sandbox
implementations compared to other platforms.
2026-02-05 11:42:12 -08:00
jif-oai
4df9f2020b nit: gpt-5.3-codex announcement 2 (#10782) 2026-02-05 19:22:24 +00:00
jif-oai
ddfb8bfd77 nit: gpt-5.3-codex announcement (#10775) 2026-02-05 19:17:04 +00:00
Owen Lin
3582b74d01 fix(auth): isolate chatgptAuthTokens concept to auth manager and app-server (#10423)
So that the rest of the codebase (like the TUI) doesn't need to be concerned with
whether ChatGPT auth was handled by Codex itself or passed in via
app-server's external auth mode.
2026-02-05 10:46:06 -08:00
Owen Lin
5c0fd62ff1 fix(tui): fix resume_picker_orders_by_updated_at test (#10769)
I think this was due to https://github.com/openai/codex/issues/10752
landing without being rebased on top of
9ee746afd6
2026-02-05 18:03:10 +00:00
Felipe Coury
22545bf206 feat(tui): add sortable resume picker with created/updated timestamp toggle (#10752)
## Summary

- Add sorting support to the resume session picker with Tab key toggle
- Sessions can now be sorted by either creation time or last updated
time
- Display the current sort mode in the picker header
- Default to sorting by creation time (most recent first)

## Changes

- Add `sort_key` field to `PickerState` to track current sort order
- Pass sort key to `RolloutRecorder::list_threads()` for proper backend
sorting
- Add Tab key handler to toggle between `CreatedAt` and `UpdatedAt`
sorting
- Show current sort mode ("Created at" / "Updated at") in header
- Add "Tab to toggle sort" keyboard hint
- Intelligently hide secondary date column when terminal is narrow
- Reload session list when sort order changes

## Test plan

- [x] Unit tests for sort key toggle functionality
- [x] Snapshot tests updated for new header format
- [x] Test that Tab key triggers reload with new sort key
- [x] Test column visibility adapts to narrow terminals
2026-02-05 09:08:31 -08:00
Felipe Coury
b0e5a6305b feat(tui): add /statusline command for interactive status line configuration (#10546)
## Summary
- Adds a new `/statusline` command to configure TUI footer status line
- Introduces reusable `MultiSelectPicker` component with keyboard
navigation, optional ordering and toggle support
- Implement status line setup modal that persist configuration to
config.toml

  ## Status Line Items
  The following items can be displayed in the status line:
  - **Model**: Current model name (with optional reasoning level)
  - **Context**: Remaining/used context window percentage
  - **Rate Limits**: 5-day and weekly usage limits
  - **Git**: Current branch (with optimized lookups)
  - **Tokens**: Used tokens, input/output token counts
  - **Session**: Session ID (full or shortened prefix)
  - **Paths**: Current directory, project root
  - **Version**: Codex version

  ## Features
  - Live preview while configuring status line items
  - Fuzzy search filtering in the picker
  - Intelligent truncation when items don't fit
  - Items gracefully omit when data is unavailable
  - Configuration persists to `config.toml`
  - Validates and warns about invalid status line items

  ## Test plan
  - [x] Run `/statusline` and verify picker UI appears
  - [x] Toggle items on/off and verify live preview updates
  - [x] Confirm selection persists after restart
  - [x] Verify truncation behavior with many items selected
  - [x] Test git branch detection in and out of git repos

---------

Co-authored-by: Josh McKinney <joshka@openai.com>
2026-02-05 08:50:21 -08:00
gt-oai
3b54fd7336 Add hooks implementation and wire up to notify (#9691)
This introduces a `Hooks` service. It registers hooks from config and
dispatches hook events at runtime.

N.B. The hook config is not wired up to this yet. But for legacy
reasons, we wire up `notify` from config and power it using hooks now.
Nothing about the `notify` interface has changed.

I'd start by reviewing `hooks/types.rs`

Some things to note:
  - hook names subject to change
  - no hook result yet
  - stopping semantics yet to be introduced
  - additional hooks yet to be introduced
2026-02-05 16:49:35 +00:00
jif-oai
9ee746afd6 Leverage state DB metadata for thread summaries (#10621)
Summary:
- read conversation summaries and cwd info from the state DB when
possible so we no longer rely on rollout files for metadata and avoid
extra I/O
- persist CLI version in thread metadata, surface it through summary
builders, and add the necessary DB migration hooks
- simplify thread listing by using enriched state DB data directly
rather than reading rollout heads

Testing:
- Not run (not requested)
2026-02-05 16:39:11 +00:00
jif-oai
68e82e5dc9 nit: add DB version is discrepancy recording (#10762) 2026-02-05 16:24:18 +00:00
jif-oai
901215e310 feat: repair DB in case of missing lines (#10751) 2026-02-05 16:21:49 +00:00
jif-oai
41f3b1ba0b feat: add memory tool (#10637)
Add a tool for memory to retrieve a full memory based on the memory ID
2026-02-05 16:16:31 +00:00
jif-oai
fe1cbd0f38 chore: handle shutdown correctly in tui (#10756) 2026-02-05 16:07:50 +00:00
jif-oai
d337b51741 feat: wire ephemeral in codex exec (#10758) 2026-02-05 15:49:57 +00:00
jif-oai
4033f905c6 feat: resumable backfill (#10745)
## Summary

This PR makes SQLite rollout backfill resumable and repeatable instead
of one-shot-on-db-create.

## What changed

- Added a persisted backfill state table:
  - state/migrations/0008_backfill_state.sql
- Tracks status (pending|running|complete), last_watermark, and
last_success_at.
- Added backfill state model/types in codex-state:
  - BackfillState, BackfillStatus (state/src/model/backfill_state.rs)
- Added runtime APIs to manage backfill lifecycle/progress:
  - get_backfill_state
  - mark_backfill_running
  - checkpoint_backfill
  - mark_backfill_complete
- Updated core startup behavior:
- Backfill now runs whenever state is not Complete (not only when DB
file is newly created).
- Reworked backfill execution:
- Collect rollout files, derive deterministic watermark per path, sort,
resume from last_watermark.
- Process in batches (BACKFILL_BATCH_SIZE = 200), checkpoint after each
batch.
  - Mark complete with last_success_at at the end.

## Why

Previous behavior could leave users permanently partially backfilled if
the process exited during initial async backfill. This change allows
safe continuation across restarts and avoids restarting from scratch.
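
A minimal sketch of the resume-from-watermark loop under those rules (store access is abstracted into closures here; the real lifecycle calls are `get_backfill_state`, `mark_backfill_running`, `checkpoint_backfill`, and `mark_backfill_complete`):

```rust
const BACKFILL_BATCH_SIZE: usize = 200;

#[allow(dead_code)]
#[derive(Clone, Copy, PartialEq)]
enum BackfillStatus {
    Pending,
    Running,
    Complete,
}

struct RolloutFile {
    watermark: String, // deterministic per path, used for ordering and resume
    path: String,
}

fn run_backfill(
    status: BackfillStatus,
    last_watermark: Option<String>,
    mut files: Vec<RolloutFile>,
    mut ingest: impl FnMut(&RolloutFile),
    mut checkpoint: impl FnMut(&str),
) {
    if status == BackfillStatus::Complete {
        return; // an earlier run already finished
    }
    // Sort by watermark and skip everything already persisted.
    files.sort_by(|a, b| a.watermark.cmp(&b.watermark));
    let resume_from = last_watermark.unwrap_or_default();
    let pending: Vec<_> = files.into_iter().filter(|f| f.watermark > resume_from).collect();
    for batch in pending.chunks(BACKFILL_BATCH_SIZE) {
        for file in batch {
            ingest(file);
        }
        // Persist progress so a crash resumes here instead of from scratch.
        if let Some(last) = batch.last() {
            checkpoint(&last.watermark);
        }
    }
    // The caller would mark the backfill Complete (with last_success_at) here.
}

fn main() {
    let files = vec![
        RolloutFile { watermark: "a".into(), path: "a.jsonl".into() },
        RolloutFile { watermark: "b".into(), path: "b.jsonl".into() },
    ];
    run_backfill(
        BackfillStatus::Pending,
        Some("a".into()),
        files,
        |f| println!("ingest {}", f.path),
        |w| println!("checkpoint {w}"),
    );
}
```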
2026-02-05 14:34:34 +00:00
iceweasel-oai
f2ffc4e5d0 Include real OS info in metrics. (#10425)
Calculates a hashed user ID from either the auth user ID or the API key.
Also correctly populates the OS field.

These will make our metrics more useful and powerful for analysis.
2026-02-05 06:30:31 -08:00
jif-oai
040ecee715 Update explorer role default model (#10748)
Summary
- switch the explorer role in core agent configuration to use
`gpt-5.1-codex-mini` as the default model override
- leave other role defaults untouched

Testing
- Not run (not requested)
2026-02-05 13:51:53 +00:00
pap-openai
b2424cb635 adding fork information (UI) when forking (#10246)
- shows `/fork` command that ran in prev session
- shows `session forked from name (uuid) || uuid (if name is not set)` as an event in new session
2026-02-05 13:24:55 +00:00
jif-oai
aa46b5cf99 nit: backfill stronger (#10738) 2026-02-05 12:30:16 +00:00
jif-oai
97582ac52d Allow user shell commands to run alongside active turns (#10513)
Summary
- refactor user shell command execution into a shared helper and add
modes for standalone vs active-turn execution
- run user shell commands asynchronously when a turn is already active
so they don’t replace or abort the current turn
- extend the tests to cover the new behavior and add the generated Codex
environment manifest

Testing
- Not run (not requested)
2026-02-05 11:11:00 +00:00
jif-oai
c67120f4a0 fix: flaky landlock (#10689)
https://openai.slack.com/archives/C095U48JNL9/p1770243347893959
2026-02-05 10:30:18 +00:00
Ashutosh Kumar Singh
7b28b350e1 fix(tui): flush input buffer on init to prevent early exit on Windows (#10729)
Fixes #10661.

### Problem
On Windows, the sign-in menu can exit immediately if the OS-level input
buffer contains trailing characters (like the Enter key from running the
command).

### Solution
**Flush Input Buffer on Init**: Use FlushConsoleInputBuffer on Windows
(and cflush on Unix) in ui::init() to discard any input captured before
the TUI was ready.

Verified by @CodebyAmbrose in #10661.
2026-02-05 00:59:32 -08:00
Dylan Hurd
fe8b474acd fix(core,app-server) resume with different model (#10719)
## Summary
When resuming with a different model, we should also append a developer
message with the model instructions

## Testing
- [x] Added unit tests
2026-02-05 00:40:05 -08:00
xl-openai
1e1146cd29 Reload cloud requirements after user login (#10725)
Reload cloud requirements after user login so they take effect
immediately.
2026-02-05 00:27:16 -08:00
Charley Cunningham
dc7007beaa Fix remote compaction estimator/payload instruction small mismatch (#10692)
## Summary
This PR fixes a deterministic mismatch in remote compaction where
pre-trim estimation and the `/v1/responses/compact` payload could use
different base instructions.

Before this change:
- pre-trim estimation used model-derived instructions
(`model_info.get_model_instructions(...)`)
- compact payload used session base instructions
(`sess.get_base_instructions()`)

After this change:
- remote pre-trim estimation and compact payload both use the same
`BaseInstructions` instance from session state.

## Changes
- Added a shared estimator entry point in `ContextManager`:
- `estimate_token_count_with_base_instructions(&self, base_instructions:
&BaseInstructions) -> Option<i64>`
- Kept `estimate_token_count(&TurnContext)` as a thin wrapper that
resolves model/personality instructions and delegates to the new helper.
- Updated remote compaction flow to fetch base instructions once and
reuse it for both:
  - trim preflight estimation
  - compact request payload construction
- Added regression coverage for parity and behavior:
  - unit test verifying explicit-base estimator behavior
- integration test proving remote compaction uses session override
instructions and trims accordingly

## Why this matters
This removes a deterministic divergence source where pre-trim could
think the request fits while the actual compact request exceeded context
because its instructions were longer/different.

## Scope
In scope:
- estimator/payload base-instructions parity in remote compaction

Out of scope:
- retry-on-`context_length_exceeded`
- compaction threshold/headroom policy changes
- broader trimming policy changes

## Codex author:
`codex fork 019c2b24-c2df-7b31-a482-fb8cf7a28559`
2026-02-04 23:24:06 -08:00
Ahmed Ibrahim
cd5f49a619 Make steer stable by default (#10690)
Promotes the Steer feature from Experimental to Stable and enables it by
default.

## What is Steer mode?

Steer mode changes how message submission works in the TUI:

- **With Steer enabled (new default)**: 
  - `Enter` submits messages immediately, even when a task is running
- `Tab` queues messages when a task is running (allows building up a
queue)
  
- **With Steer disabled (old behavior)**:
  - `Enter` queues messages when a task is running
  - This preserves the previous "queue while a task is running" behavior

## How Steer vs Queue work

The key difference is in the submission behavior:

1. **Steer mode** (`steer_enabled = true`):
- Enter → `InputResult::Submitted` → sends immediately via
`submit_user_message()`
- Tab → `InputResult::Queued` → queues via `queue_user_message()` if a
task is running
- This gives users direct control: Enter for immediate submission, Tab
for queuing

2. **Queue mode** (`steer_enabled = false`, previous default):
- Enter → `InputResult::Queued` → always queues when a task is running
   - Tab → `InputResult::Queued` → queues when a task is running
- This preserves the original behavior where Enter respects the running
task queue

## Implementation details

The behavior is controlled in
`ChatComposer::handle_key_event_without_popup()`:
- When `steer_enabled` is true, Enter calls `handle_submission(false)`
(submit immediately)
- When `steer_enabled` is false, Enter calls `handle_submission(true)`
(queue)

See `codex-rs/tui/src/bottom_pane/chat_composer.rs` for the
implementation.
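
A toy mapping of that key handling (illustrative only; the real logic in `chat_composer.rs` covers many more cases):

```rust
#[derive(Debug, PartialEq)]
enum InputResult {
    Submitted,
    Queued,
}

/// Enter: submit immediately when steer is enabled (or nothing is running),
/// otherwise queue behind the running task (the old default behavior).
fn enter_behavior(steer_enabled: bool, task_running: bool) -> InputResult {
    if steer_enabled || !task_running {
        InputResult::Submitted
    } else {
        InputResult::Queued
    }
}

/// Tab: queue while a task is running; other cases are left out of this sketch.
fn tab_behavior(task_running: bool) -> Option<InputResult> {
    task_running.then_some(InputResult::Queued)
}

fn main() {
    assert_eq!(enter_behavior(true, true), InputResult::Submitted);
    assert_eq!(enter_behavior(false, true), InputResult::Queued);
    assert_eq!(tab_behavior(true), Some(InputResult::Queued));
}
```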

## Documentation

For more details on the chat composer behavior, see:
- [TUI Chat Composer documentation](docs/tui-chat-composer.md)
- Feature flag definition: `codex-rs/core/src/features.rs`
2026-02-04 23:12:59 -08:00
Charley Cunningham
41b4962b0a Sync collaboration mode naming across Default prompt, tools, and TUI (#10666)
## Summary
- add shared `ModeKind` helpers for display names, TUI visibility, and
`request_user_input` availability
- derive TUI mode filtering/labels from shared `ModeKind` metadata
instead of local hardcoded matches
- derive `request_user_input` availability text and unavailable error
mode names from shared mode metadata
- replace hardcoded known mode names in the Default collaboration-mode
template with `{{KNOWN_MODE_NAMES}}` and fill it from
`TUI_VISIBLE_COLLABORATION_MODES`
- add regression tests for mode metadata sync and placeholder
replacement

## Notes
- `cargo test -p codex-core` integration target (`tests/all`) still
shows pre-existing env-specific failures in this environment due to missing
`test_stdio_server` binary resolution; core unit tests are green.

## Codex author
`codex resume 019c26ff-dfe7-7173-bc04-c9e1fff1e447`
2026-02-04 23:03:28 -08:00
Dylan Hurd
e482978261 fix(core) switching model appends model instructions (#10651)
## Summary
When switching models, we should append the instructions of the new
model to the conversation as a developer message.

## Test
- [x] Adds a unit test
2026-02-05 05:50:38 +00:00
Dylan Hurd
a05aadfa1b chore(config) Default Personality Pragmatic (#10705)
## Summary
Switch back to Pragmatic personality

## Testing
- [x] Updated unit tests
2026-02-04 21:22:47 -08:00
cryptonerdcn
1dc06b6ffc fix: ensure resume args precede image args (#10709)
## Summary
Fixes argument ordering when `resumeThread()` is used with
`local_image`. The SDK previously emitted CLI args with `--image` before
`resume <threadId>`, which caused the Codex CLI to treat `resume`/UUID
as image paths and start a new session. This PR moves `resume
<threadId>` before any `--image` flags and adds a regression test.

## Bug Report / Links
- OpenAI issue: https://github.com/openai/codex/issues/10708
- Repro repo:
https://github.com/cryptonerdcn/codex-resume-local-image-repro
- Repro issue (repo):
https://github.com/cryptonerdcn/codex-resume-local-image-repro/issues/1

## Repro (pre-fix)
1. Build SDK from source
2. Run resume + local_image
3. Args order: `--image <path> resume <id>`
4. Result: new session created (thread id changes)

## Fix
Move `resume <threadId>` before `--image` in `CodexExec.run` and add a
regression test to assert ordering.

## Tests
- `cd sdk/typescript && npm test`
  - **Failed**: `codex-rs/target/debug/codex` missing (ENOENT)

## Notes
- I can rerun tests in an environment with `codex-rs` built and report
results.
2026-02-04 21:19:56 -08:00
sayan-oai
4ed8d74aab fix: ensure status indicator present earlier in exec path (#10700)
ensure status indicator present in all classifications of exec tool.
fixes indicator disappearing after preambles, will look into using
`phase` to avoid this class of error in a few hours.

commands parsed as unknown faced this issue

tested locally, added test for specific failure flow
2026-02-05 03:56:50 +00:00
443 changed files with 33089 additions and 6209 deletions

View File

@@ -15,8 +15,8 @@ common --experimental_platform_in_output_dir
common --noenable_runfiles
common --enable_platform_specific_config
# TODO(zbarsky): We need to untangle these libc constraints to get linux remote builds working.
common:linux --host_platform=//:local
common:linux --host_platform=//:local_linux
common:windows --host_platform=//:local_windows
common --@rules_cc//cc/toolchains/args/archiver_flags:use_libtool_on_macos=False
common --@toolchains_llvm_bootstrapped//config:experimental_stub_libgcc_s
@@ -28,7 +28,14 @@ common:windows --@rules_rust//rust/settings:experimental_use_sh_toolchain_for_bo
common --incompatible_strict_action_env
# Not ideal, but We need to allow dotslash to be found
common --test_env=PATH=/opt/homebrew/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin
common:linux --test_env=PATH=/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin
common:macos --test_env=PATH=/opt/homebrew/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin
# Pass through some env vars Windows needs to use powershell?
common:windows --test_env=PATH
common:windows --test_env=SYSTEMROOT
common:windows --test_env=COMSPEC
common:windows --test_env=WINDIR
common --test_output=errors
common --bes_results_url=https://app.buildbuddy.io/invocation/

View File

@@ -77,6 +77,12 @@ for arg in "\$@"; do
fi
continue
;;
-Wp,-U_FORTIFY_SOURCE)
# aws-lc-sys emits this GCC preprocessor forwarding form in debug
# builds, but zig cc expects the define flag directly.
args+=("-U_FORTIFY_SOURCE")
continue
;;
esac
args+=("\${arg}")
done
@@ -96,15 +102,23 @@ for arg in "\$@"; do
fi
case "\${arg}" in
--target)
# Drop explicit --target and its value: we always pass zig's -target below.
skip_next=1
continue
;;
--target=*|-target=*|-target)
# Zig expects -target and rejects Rust triples like *-unknown-linux-musl.
if [[ "\${arg}" == "-target" ]]; then
skip_next=1
fi
continue
;;
-Wp,-U_FORTIFY_SOURCE)
# aws-lc-sys emits this GCC forwarding form in debug builds; zig c++
# expects the define flag directly.
args+=("-U_FORTIFY_SOURCE")
continue
;;
esac
args+=("\${arg}")
done

View File

@@ -1,6 +1,13 @@
common --remote_download_minimal
common --nobuild_runfile_links
common --keep_going
common --verbose_failures
# Disable disk cache since we have remote one and aren't using persistent workers.
common --disk_cache=
# Rearrange caches on Windows so they're on the same volume as the checkout.
common:windows --repo_contents_cache=D:/a/.cache/bazel-repo-contents-cache
common:windows --repository_cache=D:/a/.cache/bazel-repo-cache
# We prefer to run the build actions entirely remotely so we can dial up the concurrency.
# We have platform-specific tests, so we want to execute the tests on all platforms using the strongest sandboxing available on each platform.
@@ -16,5 +23,5 @@ common:macos --config=remote
common:macos --strategy=remote
common:macos --strategy=TestRunner=darwin-sandbox,local
common:windows --strategy=TestRunner=local
# On windows we cannot cross-build the tests but run them locally due to what appears to be a Bazel bug
# (windows vs unix path confusion)

View File

@@ -45,15 +45,6 @@ jobs:
echo "✅ Tag and Cargo.toml agree (${tag_ver})"
echo "::endgroup::"
- name: Verify config schema fixture
shell: bash
working-directory: codex-rs
run: |
set -euo pipefail
echo "If this fails, run: just write-config-schema to overwrite fixture with intentional changes."
cargo run -p codex-core --bin codex-write-config-schema
git diff --exit-code core/config.schema.json
build:
needs: tag-check
name: Build - ${{ matrix.runner }} - ${{ matrix.target }}

View File

@@ -15,13 +15,14 @@ In the codex-rs folder where the rust code lives:
- When writing tests, prefer comparing the equality of entire objects over fields one by one.
- When making a change that adds or changes an API, ensure that the documentation in the `docs/` folder is up to date if applicable.
- If you change `ConfigToml` or nested config types, run `just write-config-schema` to update `codex-rs/core/config.schema.json`.
- Do not create small helper methods that are referenced only once.
Run `just fmt` (in `codex-rs` directory) automatically after you have finished making Rust code changes; do not ask for approval to run it. Additionally, run the tests:
1. Run the test for the specific project that was changed. For example, if changes were made in `codex-rs/tui`, run `cargo test -p codex-tui`.
2. Once those pass, if any changes were made in common, core, or protocol, run the complete test suite with `cargo test --all-features`. project-specific or individual tests can be run without asking the user, but do ask the user before running the complete test suite.
Before finalizing a large change to `codex-rs`, run `just fix -p <project>` (in `codex-rs` directory) to fix any linter issues in the code. Prefer scoping with `-p` to avoid slow workspacewide Clippy builds; only run `just fix` without `-p` if you changed shared crates.
Before finalizing a large change to `codex-rs`, run `just fix -p <project>` (in `codex-rs` directory) to fix any linter issues in the code. Prefer scoping with `-p` to avoid slow workspacewide Clippy builds; only run `just fix` without `-p` if you changed shared crates. Do not re-run tests after running `fix` or `fmt`.
## TUI style conventions
@@ -128,15 +129,11 @@ These guidelines apply to app-server protocol work in `codex-rs`, especially:
`*Params` for request payloads, `*Response` for responses, and `*Notification` for notifications.
- Expose RPC methods as `<resource>/<method>` and keep `<resource>` singular (for example, `thread/read`, `app/list`).
- Always expose fields as camelCase on the wire with `#[serde(rename_all = "camelCase")]` unless a tagged union or explicit compatibility requirement needs a targeted rename.
- Exception: config RPC payloads are expected to use snake_case to mirror config.toml keys (see the config read/write/list APIs in `app-server-protocol/src/protocol/v2.rs`).
- Always set `#[ts(export_to = "v2/")]` on v2 request/response/notification types so generated TypeScript lands in the correct namespace.
- Never use `#[serde(skip_serializing_if = "Option::is_none")]` for v2 API payload fields.
Exception: client->server requests that intentionally have no params may use:
`params: #[ts(type = "undefined")] #[serde(skip_serializing_if = "Option::is_none")] Option<()>`.
- For client->server JSON-RPC request payloads (`*Params`) only, every optional field must be annotated with `#[ts(optional = nullable)]`. Do not use `#[ts(optional = nullable)]` outside client->server request payloads (`*Params`).
- For client->server JSON-RPC request payloads only, and you want to express a boolean field where omission means `false`, use `#[serde(default, skip_serializing_if = "std::ops::Not::not")] pub field: bool` over `Option<bool>`.
- For new list methods, implement cursor pagination by default:
request fields `pub cursor: Option<String>` and `pub limit: Option<u32>`,
response fields `pub data: Vec<...>` and `pub next_cursor: Option<String>`.
- Keep Rust and TS wire renames aligned. If a field or variant uses `#[serde(rename = "...")]`, add matching `#[ts(rename = "...")]`.
- For discriminated unions, use explicit tagging in both serializers:
`#[serde(tag = "type", ...)]` and `#[ts(tag = "type", ...)]`.
@@ -145,6 +142,15 @@ These guidelines apply to app-server protocol work in `codex-rs`, especially:
- For experimental API surface area:
use `#[experimental("method/or/field")]`, derive `ExperimentalApi` when field-level gating is needed, and use `inspect_params: true` in `common.rs` when only some fields of a method are experimental.
### Client->server request payloads (`*Params`)
- Every optional field must be annotated with `#[ts(optional = nullable)]`. Do not use `#[ts(optional = nullable)]` outside client->server request payloads (`*Params`).
- Optional collection fields (for example `Vec`, `HashMap`) must use `Option<...>` + `#[ts(optional = nullable)]`. Do not use `#[serde(default)]` to model optional collections, and do not use `skip_serializing_if` on v2 payload fields.
- When you want omission to mean `false` for boolean fields, use `#[serde(default, skip_serializing_if = "std::ops::Not::not")] pub field: bool` over `Option<bool>`.
- For new list methods, implement cursor pagination by default:
request fields `pub cursor: Option<String>` and `pub limit: Option<u32>`,
response fields `pub data: Vec<...>` and `pub next_cursor: Option<String>`.
### Development Workflow
- Update docs/examples when API behavior changes (at minimum `app-server/README.md`).

View File

@@ -6,13 +6,21 @@ xcode_config(name = "disable_xcode")
# TODO(zbarsky): Upstream a better libc constraint into rules_rust.
# We only enable this on linux though for sanity, and because it breaks remote execution.
platform(
name = "local",
name = "local_linux",
constraint_values = [
# We mark the local platform as glibc-compatible because musl-built rust cannot dlopen proc macros.
"@toolchains_llvm_bootstrapped//constraints/libc:gnu.2.28",
],
parents = [
"@platforms//host",
parents = ["@platforms//host"],
)
platform(
name = "local_windows",
constraint_values = [
# We just need to pick one of the ABIs. Do the same one we target.
"@rules_rs//rs/experimental/platforms/constraints:windows_gnullvm",
],
parents = ["@platforms//host"],
)
alias(

View File

@@ -1,13 +1,14 @@
bazel_dep(name = "platforms", version = "1.0.0")
bazel_dep(name = "toolchains_llvm_bootstrapped", version = "0.3.1")
archive_override(
bazel_dep(name = "toolchains_llvm_bootstrapped", version = "0.5.3")
single_version_override(
module_name = "toolchains_llvm_bootstrapped",
integrity = "sha256-4/2h4tYSUSptxFVI9G50yJxWGOwHSeTeOGBlaLQBV8g=",
strip_prefix = "toolchains_llvm_bootstrapped-d20baf67e04d8e2887e3779022890d1dc5e6b948",
urls = ["https://github.com/cerisier/toolchains_llvm_bootstrapped/archive/d20baf67e04d8e2887e3779022890d1dc5e6b948.tar.gz"],
patch_strip = 1,
patches = [
"//patches:toolchains_llvm_bootstrapped_resource_dir.patch",
],
)
osx = use_extension("@toolchains_llvm_bootstrapped//toolchain/extension:osx.bzl", "osx")
osx = use_extension("@toolchains_llvm_bootstrapped//extensions:osx.bzl", "osx")
osx.framework(name = "ApplicationServices")
osx.framework(name = "AppKit")
osx.framework(name = "ColorSync")
@@ -16,6 +17,7 @@ osx.framework(name = "CoreGraphics")
osx.framework(name = "CoreServices")
osx.framework(name = "CoreText")
osx.framework(name = "CFNetwork")
osx.framework(name = "FontServices")
osx.framework(name = "Foundation")
osx.framework(name = "ImageIO")
osx.framework(name = "Kernel")
@@ -31,48 +33,92 @@ register_toolchains(
bazel_dep(name = "apple_support", version = "2.1.0")
bazel_dep(name = "rules_cc", version = "0.2.16")
bazel_dep(name = "rules_platform", version = "0.1.0")
bazel_dep(name = "rules_rust", version = "0.68.1")
single_version_override(
module_name = "rules_rust",
patch_strip = 1,
patches = [
"//patches:rules_rust.patch",
"//patches:rules_rust_windows_gnu.patch",
"//patches:rules_rust_musl.patch",
],
)
RUST_TRIPLES = [
"aarch64-unknown-linux-musl",
"aarch64-apple-darwin",
"aarch64-pc-windows-gnullvm",
"x86_64-unknown-linux-musl",
"x86_64-apple-darwin",
"x86_64-pc-windows-gnullvm",
]
rust = use_extension("@rules_rust//rust:extensions.bzl", "rust")
rust.toolchain(
edition = "2024",
extra_target_triples = RUST_TRIPLES,
versions = ["1.93.0"],
)
use_repo(rust, "rust_toolchains")
register_toolchains("@rust_toolchains//:all")
bazel_dep(name = "rules_rs", version = "0.0.23")
# Special toolchains branch
archive_override(
module_name = "rules_rs",
integrity = "sha256-YbDRjZos4UmfIPY98znK1BgBWRQ1/ui3CtL6RqxE30I=",
strip_prefix = "rules_rs-6cf3d940fdc48baf3ebd6c37daf8e0be8fc73ecb",
url = "https://github.com/dzbarsky/rules_rs/archive/6cf3d940fdc48baf3ebd6c37daf8e0be8fc73ecb.tar.gz",
)
rules_rust = use_extension("@rules_rs//rs/experimental:rules_rust.bzl", "rules_rust")
use_repo(rules_rust, "rules_rust")
toolchains = use_extension("@rules_rs//rs/experimental/toolchains:module_extension.bzl", "toolchains")
toolchains.toolchain(
edition = "2024",
version = "1.93.0",
)
use_repo(
toolchains,
"experimental_rust_toolchains_1_93_0",
"rust_toolchain_artifacts_macos_aarch64_1_93_0",
)
register_toolchains("@experimental_rust_toolchains_1_93_0//:all")
crate = use_extension("@rules_rs//rs:extensions.bzl", "crate")
crate.from_cargo(
cargo_lock = "//codex-rs:Cargo.lock",
cargo_toml = "//codex-rs:Cargo.toml",
platform_triples = RUST_TRIPLES,
platform_triples = [
"aarch64-unknown-linux-gnu",
"aarch64-unknown-linux-musl",
"aarch64-apple-darwin",
"aarch64-pc-windows-gnullvm",
"x86_64-unknown-linux-gnu",
"x86_64-unknown-linux-musl",
"x86_64-apple-darwin",
"x86_64-pc-windows-gnullvm",
],
)
bazel_dep(name = "zstd", version = "1.5.7")
crate.annotation(
crate = "zstd-sys",
gen_build_script = "off",
deps = ["@zstd"],
)
crate.annotation(
crate = "nucleo-matcher",
strip_prefix = "matcher",
version = "0.3.1",
build_script_env = {
"AWS_LC_SYS_NO_JITTER_ENTROPY": "1",
},
crate = "aws-lc-sys",
patch_args = ["-p1"],
patches = [
"//patches:aws-lc-sys_memcmp_check.patch",
],
)
inject_repo(crate, "zstd")
bazel_dep(name = "bzip2", version = "1.0.8.bcr.3")
crate.annotation(
crate = "bzip2-sys",
gen_build_script = "off",
deps = ["@bzip2//:bz2"],
)
inject_repo(crate, "bzip2")
bazel_dep(name = "zlib", version = "1.3.1.bcr.8")
crate.annotation(
crate = "libz-sys",
gen_build_script = "off",
deps = ["@zlib"],
)
inject_repo(crate, "zlib")
# TODO(zbarsky): Enable annotation after fixing windows arm64 builds.
crate.annotation(
crate = "lzma-sys",
gen_build_script = "on",
)
bazel_dep(name = "openssl", version = "3.5.4.bcr.0")

MODULE.bazel.lock (generated, 228 changed lines): diff suppressed because one or more lines are too long.


@@ -15,3 +15,9 @@ target_app = "cli"
content = "This is a test announcement"
version_regex = "^0\\.0\\.0$"
to_date = "2026-05-10"
[[announcements]]
content = "**BREAKING NEWS**: `gpt-5.3-codex` is out! Upgrade to `0.98.0` for a faster, smarter, more steerable agent."
from_date = "2026-02-01"
to_date = "2026-02-16"
version_regex = "^0\\.(?:[0-9]|[1-8][0-9]|9[0-7])\\."

codex-rs/Cargo.lock (generated, 605 changed lines): diff suppressed because it is too large.


@@ -46,6 +46,7 @@ members = [
"utils/home-dir",
"utils/pty",
"utils/readiness",
"utils/rustls-provider",
"utils/string",
"codex-client",
"codex-api",
@@ -93,6 +94,7 @@ codex-linux-sandbox = { path = "linux-sandbox" }
codex-lmstudio = { path = "lmstudio" }
codex-login = { path = "login" }
codex-mcp-server = { path = "mcp-server" }
codex-network-proxy = { path = "network-proxy" }
codex-ollama = { path = "ollama" }
codex-otel = { path = "otel" }
codex-process-hardening = { path = "process-hardening" }
@@ -110,6 +112,7 @@ codex-utils-json-to-toml = { path = "utils/json-to-toml" }
codex-utils-home-dir = { path = "utils/home-dir" }
codex-utils-pty = { path = "utils/pty" }
codex-utils-readiness = { path = "utils/readiness" }
codex-utils-rustls-provider = { path = "utils/rustls-provider" }
codex-utils-string = { path = "utils/string" }
codex-windows-sandbox = { path = "windows-sandbox-rs" }
core_test_support = { path = "core/tests/common" }
@@ -127,8 +130,10 @@ assert_matches = "1.5.0"
async-channel = "2.3.1"
async-stream = "0.3.6"
async-trait = "0.1.89"
askama = "0.15.4"
axum = { version = "0.8", default-features = false }
base64 = "0.22.1"
bm25 = "2.3.2"
bytes = "1.10.1"
chardetng = "0.1.17"
chrono = "0.4.43"
@@ -158,7 +163,7 @@ indoc = "2.0"
image = { version = "^0.25.9", default-features = false }
include_dir = "0.7.4"
indexmap = "2.12.0"
insta = "1.46.0"
insta = "1.46.3"
inventory = "0.3.19"
itertools = "0.14.0"
keyring = { version = "3.6", default-features = false }
@@ -191,10 +196,11 @@ pulldown-cmark = "0.10"
rand = "0.9"
ratatui = "0.29.0"
ratatui-macros = "0.6.0"
regex = "1.12.2"
regex = "1.12.3"
regex-lite = "0.1.8"
reqwest = "0.12"
rmcp = { version = "0.12.0", default-features = false }
rmcp = { version = "0.14.0", default-features = false }
rustls = { version = "0.23", default-features = false, features = ["ring", "std"] }
runfiles = { git = "https://github.com/dzbarsky/rules_rust", rev = "b56cbaa8465e74127f1ea216f813cd377295ad81" }
schemars = "0.8.22"
seccompiler = "0.5.0"
@@ -221,12 +227,13 @@ tempfile = "3.23.0"
test-log = "0.2.19"
textwrap = "0.16.2"
thiserror = "2.0.17"
time = "0.3"
time = "0.3.47"
tiny_http = "0.12"
tokio = "1"
tokio-stream = "0.1.18"
tokio-test = "0.4"
tokio-tungstenite = { version = "0.28.0", features = ["proxy", "rustls-tls-native-roots"] }
tungstenite = { version = "0.27.0", features = ["deflate", "proxy"] }
tokio-util = "0.7.18"
toml = "0.9.5"
toml_edit = "0.24.0"
@@ -317,10 +324,11 @@ opt-level = 0
# ratatui = { path = "../../ratatui" }
crossterm = { git = "https://github.com/nornagon/crossterm", branch = "nornagon/color-query" }
ratatui = { git = "https://github.com/nornagon/ratatui", branch = "nornagon-v0.29.0-patch" }
tokio-tungstenite = { git = "https://github.com/JakkuSakura/tokio-tungstenite", rev = "2ae536b0de793f3ddf31fc2f22d445bf1ef2023d" }
tokio-tungstenite = { git = "https://github.com/openai-oss-forks/tokio-tungstenite", rev = "132f5b39c862e3a970f731d709608b3e6276d5f6" }
tungstenite = { git = "https://github.com/openai-oss-forks/tungstenite-rs", rev = "9200079d3b54a1ff51072e24d81fd354f085156f" }
# Uncomment to debug local changes.
# rmcp = { path = "../../rust-sdk/crates/rmcp" }
[patch."ssh://git@github.com/JakkuSakura/tungstenite-rs.git"]
tungstenite = { git = "https://github.com/JakkuSakura/tungstenite-rs", rev = "f514de8644821113e5d18a027d6d28a5c8cc0a6e" }
[patch."ssh://git@github.com/openai-oss-forks/tungstenite-rs.git"]
tungstenite = { git = "https://github.com/openai-oss-forks/tungstenite-rs", rev = "9200079d3b54a1ff51072e24d81fd354f085156f" }


@@ -51,6 +51,7 @@ You can enable notifications by configuring a script that is run whenever the ag
### `codex exec` to run Codex programmatically/non-interactively
To run Codex non-interactively, run `codex exec PROMPT` (you can also pass the prompt via `stdin`); Codex will work on your task until it decides it is done, then exit. Output is printed directly to the terminal. Set the `RUST_LOG` environment variable to see more detail about what is going on.
Use `codex exec --ephemeral ...` to run without persisting session rollout files to disk.
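The non-interactive flow can also be scripted. A small hedged sketch in Rust (it assumes a `codex` binary is on `PATH`, and the prompt text is made up):

```rust
use std::process::Command;

fn main() -> std::io::Result<()> {
    // Roughly equivalent to running
    //   RUST_LOG=info codex exec --ephemeral "summarize this repository"
    // in a shell: Codex works until it decides it is done, prints its output to
    // the terminal, and (with --ephemeral) persists no session rollout files.
    let status = Command::new("codex")
        .env("RUST_LOG", "info") // more detail about what is going on
        .args(["exec", "--ephemeral", "summarize this repository"])
        .status()?;
    println!("codex exec finished with {status}");
    Ok(())
}
```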
### Experimenting with the Codex Sandbox


@@ -15,7 +15,7 @@
},
"properties": {
"previousAccountId": {
"description": "Workspace/account identifier that Codex was previously using.\n\nClients that manage multiple accounts/workspaces can use this as a hint to refresh the token for the correct workspace.\n\nThis may be `null` when the prior ID token did not include a workspace identifier (`chatgpt_account_id`) or when the token could not be parsed.",
"description": "Workspace/account identifier that Codex was previously using.\n\nClients that manage multiple accounts/workspaces can use this as a hint to refresh the token for the correct workspace.\n\nThis may be `null` when the prior auth state did not include a workspace identifier (`chatgpt_account_id`).",
"type": [
"string",
"null"


@@ -4,13 +4,19 @@
"accessToken": {
"type": "string"
},
"idToken": {
"chatgptAccountId": {
"type": "string"
},
"chatgptPlanType": {
"type": [
"string",
"null"
]
}
},
"required": [
"accessToken",
"idToken"
"chatgptAccountId"
],
"title": "ChatgptAuthTokensRefreshResponse",
"type": "object"


@@ -21,6 +21,7 @@
"type": "object"
},
"AppsListParams": {
"description": "EXPERIMENTAL - list available apps/connectors.",
"properties": {
"cursor": {
"description": "Opaque pagination cursor returned by a previous call.",
@@ -29,6 +30,10 @@
"null"
]
},
"forceRefetch": {
"description": "When true, bypass app caches and fetch the latest data from sources.",
"type": "boolean"
},
"limit": {
"description": "Optional page size; defaults to a reasonable server-side value.",
"format": "uint32",
@@ -37,6 +42,13 @@
"integer",
"null"
]
},
"threadId": {
"description": "Optional thread id used to evaluate app feature gating from that thread's config.",
"type": [
"string",
"null"
]
}
},
"type": "object"
@@ -422,6 +434,27 @@
],
"type": "object"
},
"ExperimentalFeatureListParams": {
"properties": {
"cursor": {
"description": "Opaque pagination cursor returned by a previous call.",
"type": [
"string",
"null"
]
},
"limit": {
"description": "Optional page size; defaults to a reasonable server-side value.",
"format": "uint32",
"minimum": 0.0,
"type": [
"integer",
"null"
]
}
},
"type": "object"
},
"FeedbackUploadParams": {
"properties": {
"classification": {
@@ -972,13 +1005,20 @@
"description": "[UNSTABLE] FOR OPENAI INTERNAL USE ONLY - DO NOT USE. The access token must contain the same scopes that Codex-managed ChatGPT auth tokens have.",
"properties": {
"accessToken": {
"description": "Access token (JWT) supplied by the client. This token is used for backend API requests.",
"description": "Access token (JWT) supplied by the client. This token is used for backend API requests and email extraction.",
"type": "string"
},
"idToken": {
"description": "ID token (JWT) supplied by the client.\n\nThis token is used for identity and account metadata (email, plan type, workspace id).",
"chatgptAccountId": {
"description": "Workspace/account identifier supplied by the client.",
"type": "string"
},
"chatgptPlanType": {
"description": "Optional plan type supplied by the client.\n\nWhen `null`, Codex attempts to derive the plan type from access-token claims. If unavailable, the plan defaults to `unknown`.",
"type": [
"string",
"null"
]
},
"type": {
"enum": [
"chatgptAuthTokens"
@@ -989,7 +1029,7 @@
},
"required": [
"accessToken",
"idToken",
"chatgptAccountId",
"type"
],
"title": "ChatgptAuthTokensLoginAccountParams",
@@ -1043,11 +1083,23 @@
"type": "string"
},
"MessagePhase": {
"enum": [
"commentary",
"final_answer"
],
"type": "string"
"description": "Classifies an assistant message as interim commentary or final answer text.\n\nProviders do not emit this consistently, so callers must treat `None` as \"phase unknown\" and keep compatibility behavior for legacy models.",
"oneOf": [
{
"description": "Mid-turn assistant text (for example preamble/progress narration).\n\nAdditional tool calls or assistant output may follow before turn completion.",
"enum": [
"commentary"
],
"type": "string"
},
{
"description": "The assistant's terminal answer text for the current turn.",
"enum": [
"final_answer"
],
"type": "string"
}
]
},
"ModeKind": {
"description": "Initial collaboration mode to use when the TUI starts.",
@@ -2172,6 +2224,24 @@
],
"type": "object"
},
"SkillsListExtraRootsForCwd": {
"properties": {
"cwd": {
"type": "string"
},
"extraUserRoots": {
"items": {
"type": "string"
},
"type": "array"
}
},
"required": [
"cwd",
"extraUserRoots"
],
"type": "object"
},
"SkillsListParams": {
"properties": {
"cwds": {
@@ -2184,6 +2254,17 @@
"forceReload": {
"description": "When true, bypass the skills cache and re-scan skills from disk.",
"type": "boolean"
},
"perCwdExtraUserRoots": {
"default": null,
"description": "Optional per-cwd extra roots to scan as user-scoped skills.",
"items": {
"$ref": "#/definitions/SkillsListExtraRootsForCwd"
},
"type": [
"array",
"null"
]
}
},
"type": "object"
@@ -2302,13 +2383,6 @@
"null"
]
},
"path": {
"description": "[UNSTABLE] Specify the rollout path to fork from. If specified, the thread_id param will be ignored.",
"type": [
"string",
"null"
]
},
"sandbox": {
"anyOf": [
{
@@ -2465,16 +2539,6 @@
"null"
]
},
"history": {
"description": "[UNSTABLE] FOR CODEX CLOUD - DO NOT USE. If specified, the thread will be resumed with the provided history instead of loaded from disk.",
"items": {
"$ref": "#/definitions/ResponseItem"
},
"type": [
"array",
"null"
]
},
"model": {
"description": "Configuration overrides for the resumed thread, if any.",
"type": [
@@ -2488,13 +2552,6 @@
"null"
]
},
"path": {
"description": "[UNSTABLE] Specify the rollout path to resume from. If specified, the thread_id param will be ignored.",
"type": [
"string",
"null"
]
},
"personality": {
"anyOf": [
{
@@ -2622,11 +2679,6 @@
"null"
]
},
"experimentalRawEvents": {
"default": false,
"description": "If true, opt into emitting raw response items on the event stream.\n\nThis is for internal use only (e.g. Codex Cloud). (TODO): Figure out a better way to categorize internal / experimental events & protocols.",
"type": "boolean"
},
"model": {
"type": [
"string",
@@ -2701,17 +2753,6 @@
],
"description": "Override the approval policy for this turn and subsequent turns."
},
"collaborationMode": {
"anyOf": [
{
"$ref": "#/definitions/CollaborationMode"
},
{
"type": "null"
}
],
"description": "EXPERIMENTAL - set a pre-set collaboration mode. Takes precedence over model, reasoning_effort, and developer instructions if set."
},
"cwd": {
"description": "Override the working directory for this turn and subsequent turns.",
"type": [
@@ -2789,6 +2830,29 @@
],
"type": "object"
},
"TurnSteerParams": {
"properties": {
"expectedTurnId": {
"description": "Required active turn id precondition. The request fails when it does not match the currently active turn.",
"type": "string"
},
"input": {
"items": {
"$ref": "#/definitions/UserInput"
},
"type": "array"
},
"threadId": {
"type": "string"
}
},
"required": [
"expectedTurnId",
"input",
"threadId"
],
"type": "object"
},
"UserInput": {
"oneOf": [
{
@@ -3490,6 +3554,30 @@
"title": "Turn/startRequest",
"type": "object"
},
{
"properties": {
"id": {
"$ref": "#/definitions/RequestId"
},
"method": {
"enum": [
"turn/steer"
],
"title": "Turn/steerRequestMethod",
"type": "string"
},
"params": {
"$ref": "#/definitions/TurnSteerParams"
}
},
"required": [
"id",
"method",
"params"
],
"title": "Turn/steerRequest",
"type": "object"
},
{
"properties": {
"id": {
@@ -3562,6 +3650,30 @@
"title": "Model/listRequest",
"type": "object"
},
{
"properties": {
"id": {
"$ref": "#/definitions/RequestId"
},
"method": {
"enum": [
"experimentalFeature/list"
],
"title": "ExperimentalFeature/listRequestMethod",
"type": "string"
},
"params": {
"$ref": "#/definitions/ExperimentalFeatureListParams"
}
},
"required": [
"id",
"method",
"params"
],
"title": "ExperimentalFeature/listRequest",
"type": "object"
},
{
"properties": {
"id": {


@@ -887,6 +887,17 @@
"model_provider_id": {
"type": "string"
},
"network_proxy": {
"anyOf": [
{
"$ref": "#/definitions/SessionNetworkProxyRuntime"
},
{
"type": "null"
}
],
"description": "Runtime proxy bind addresses, when the managed proxy was started for this session."
},
"reasoning_effort": {
"anyOf": [
{
@@ -2775,6 +2786,95 @@
],
"title": "CollabCloseEndEventMsg",
"type": "object"
},
{
"description": "Collab interaction: resume begin.",
"properties": {
"call_id": {
"description": "Identifier for the collab tool call.",
"type": "string"
},
"receiver_thread_id": {
"allOf": [
{
"$ref": "#/definitions/ThreadId"
}
],
"description": "Thread ID of the receiver."
},
"sender_thread_id": {
"allOf": [
{
"$ref": "#/definitions/ThreadId"
}
],
"description": "Thread ID of the sender."
},
"type": {
"enum": [
"collab_resume_begin"
],
"title": "CollabResumeBeginEventMsgType",
"type": "string"
}
},
"required": [
"call_id",
"receiver_thread_id",
"sender_thread_id",
"type"
],
"title": "CollabResumeBeginEventMsg",
"type": "object"
},
{
"description": "Collab interaction: resume end.",
"properties": {
"call_id": {
"description": "Identifier for the collab tool call.",
"type": "string"
},
"receiver_thread_id": {
"allOf": [
{
"$ref": "#/definitions/ThreadId"
}
],
"description": "Thread ID of the receiver."
},
"sender_thread_id": {
"allOf": [
{
"$ref": "#/definitions/ThreadId"
}
],
"description": "Thread ID of the sender."
},
"status": {
"allOf": [
{
"$ref": "#/definitions/AgentStatus"
}
],
"description": "Last known status of the receiver agent reported to the sender agent after resume."
},
"type": {
"enum": [
"collab_resume_end"
],
"title": "CollabResumeEndEventMsgType",
"type": "string"
}
},
"required": [
"call_id",
"receiver_thread_id",
"sender_thread_id",
"status",
"type"
],
"title": "CollabResumeEndEventMsg",
"type": "object"
}
]
},
@@ -3169,11 +3269,23 @@
]
},
"MessagePhase": {
"enum": [
"commentary",
"final_answer"
],
"type": "string"
"description": "Classifies an assistant message as interim commentary or final answer text.\n\nProviders do not emit this consistently, so callers must treat `None` as \"phase unknown\" and keep compatibility behavior for legacy models.",
"oneOf": [
{
"description": "Mid-turn assistant text (for example preamble/progress narration).\n\nAdditional tool calls or assistant output may follow before turn completion.",
"enum": [
"commentary"
],
"type": "string"
},
{
"description": "The assistant's terminal answer text for the current turn.",
"enum": [
"final_answer"
],
"type": "string"
}
]
},
"ModeKind": {
"description": "Initial collaboration mode to use when the TUI starts.",
@@ -4326,6 +4438,25 @@
}
]
},
"SessionNetworkProxyRuntime": {
"properties": {
"admin_addr": {
"type": "string"
},
"http_addr": {
"type": "string"
},
"socks_addr": {
"type": "string"
}
},
"required": [
"admin_addr",
"http_addr",
"socks_addr"
],
"type": "object"
},
"SkillDependencies": {
"properties": {
"tools": {
@@ -4685,6 +4816,7 @@
"type": "object"
},
{
"description": "Assistant-authored message payload used in turn-item streams.\n\n`phase` is optional because not all providers/models emit it. Consumers should use it when present, but retain legacy completion semantics when it is `None`.",
"properties": {
"content": {
"items": {
@@ -4695,6 +4827,17 @@
"id": {
"type": "string"
},
"phase": {
"anyOf": [
{
"$ref": "#/definitions/MessagePhase"
},
{
"type": "null"
}
],
"description": "Optional phase metadata carried through from `ResponseItem::Message`.\n\nThis is currently used by TUI rendering to distinguish mid-turn commentary from a final answer and avoid status-indicator jitter."
},
"type": {
"enum": [
"AgentMessage"
@@ -5487,6 +5630,17 @@
"model_provider_id": {
"type": "string"
},
"network_proxy": {
"anyOf": [
{
"$ref": "#/definitions/SessionNetworkProxyRuntime"
},
{
"type": "null"
}
],
"description": "Runtime proxy bind addresses, when the managed proxy was started for this session."
},
"reasoning_effort": {
"anyOf": [
{
@@ -7375,6 +7529,95 @@
],
"title": "CollabCloseEndEventMsg",
"type": "object"
},
{
"description": "Collab interaction: resume begin.",
"properties": {
"call_id": {
"description": "Identifier for the collab tool call.",
"type": "string"
},
"receiver_thread_id": {
"allOf": [
{
"$ref": "#/definitions/ThreadId"
}
],
"description": "Thread ID of the receiver."
},
"sender_thread_id": {
"allOf": [
{
"$ref": "#/definitions/ThreadId"
}
],
"description": "Thread ID of the sender."
},
"type": {
"enum": [
"collab_resume_begin"
],
"title": "CollabResumeBeginEventMsgType",
"type": "string"
}
},
"required": [
"call_id",
"receiver_thread_id",
"sender_thread_id",
"type"
],
"title": "CollabResumeBeginEventMsg",
"type": "object"
},
{
"description": "Collab interaction: resume end.",
"properties": {
"call_id": {
"description": "Identifier for the collab tool call.",
"type": "string"
},
"receiver_thread_id": {
"allOf": [
{
"$ref": "#/definitions/ThreadId"
}
],
"description": "Thread ID of the receiver."
},
"sender_thread_id": {
"allOf": [
{
"$ref": "#/definitions/ThreadId"
}
],
"description": "Thread ID of the sender."
},
"status": {
"allOf": [
{
"$ref": "#/definitions/AgentStatus"
}
],
"description": "Last known status of the receiver agent reported to the sender agent after resume."
},
"type": {
"enum": [
"collab_resume_end"
],
"title": "CollabResumeEndEventMsgType",
"type": "string"
}
},
"required": [
"call_id",
"receiver_thread_id",
"sender_thread_id",
"status",
"type"
],
"title": "CollabResumeEndEventMsg",
"type": "object"
}
],
"title": "EventMsg"


@@ -165,6 +165,71 @@
}
]
},
"AppInfo": {
"description": "EXPERIMENTAL - app metadata returned by app-list APIs.",
"properties": {
"description": {
"type": [
"string",
"null"
]
},
"distributionChannel": {
"type": [
"string",
"null"
]
},
"id": {
"type": "string"
},
"installUrl": {
"type": [
"string",
"null"
]
},
"isAccessible": {
"default": false,
"type": "boolean"
},
"logoUrl": {
"type": [
"string",
"null"
]
},
"logoUrlDark": {
"type": [
"string",
"null"
]
},
"name": {
"type": "string"
}
},
"required": [
"id",
"name"
],
"type": "object"
},
"AppListUpdatedNotification": {
"description": "EXPERIMENTAL - notification emitted when the app list changes.",
"properties": {
"data": {
"items": {
"$ref": "#/definitions/AppInfo"
},
"type": "array"
}
},
"required": [
"data"
],
"type": "object"
},
"AskForApproval": {
"description": "Determines the conditions under which the user is consulted to approve running the command proposed by Codex.",
"oneOf": [
@@ -617,6 +682,7 @@
"enum": [
"spawnAgent",
"sendInput",
"resumeAgent",
"wait",
"closeAgent"
],
@@ -1465,6 +1531,17 @@
"model_provider_id": {
"type": "string"
},
"network_proxy": {
"anyOf": [
{
"$ref": "#/definitions/SessionNetworkProxyRuntime"
},
{
"type": "null"
}
],
"description": "Runtime proxy bind addresses, when the managed proxy was started for this session."
},
"reasoning_effort": {
"anyOf": [
{
@@ -3353,6 +3430,95 @@
],
"title": "CollabCloseEndEventMsg",
"type": "object"
},
{
"description": "Collab interaction: resume begin.",
"properties": {
"call_id": {
"description": "Identifier for the collab tool call.",
"type": "string"
},
"receiver_thread_id": {
"allOf": [
{
"$ref": "#/definitions/ThreadId"
}
],
"description": "Thread ID of the receiver."
},
"sender_thread_id": {
"allOf": [
{
"$ref": "#/definitions/ThreadId"
}
],
"description": "Thread ID of the sender."
},
"type": {
"enum": [
"collab_resume_begin"
],
"title": "CollabResumeBeginEventMsgType",
"type": "string"
}
},
"required": [
"call_id",
"receiver_thread_id",
"sender_thread_id",
"type"
],
"title": "CollabResumeBeginEventMsg",
"type": "object"
},
{
"description": "Collab interaction: resume end.",
"properties": {
"call_id": {
"description": "Identifier for the collab tool call.",
"type": "string"
},
"receiver_thread_id": {
"allOf": [
{
"$ref": "#/definitions/ThreadId"
}
],
"description": "Thread ID of the receiver."
},
"sender_thread_id": {
"allOf": [
{
"$ref": "#/definitions/ThreadId"
}
],
"description": "Thread ID of the sender."
},
"status": {
"allOf": [
{
"$ref": "#/definitions/AgentStatus"
}
],
"description": "Last known status of the receiver agent reported to the sender agent after resume."
},
"type": {
"enum": [
"collab_resume_end"
],
"title": "CollabResumeEndEventMsgType",
"type": "string"
}
},
"required": [
"call_id",
"receiver_thread_id",
"sender_thread_id",
"status",
"type"
],
"title": "CollabResumeEndEventMsg",
"type": "object"
}
]
},
@@ -3948,11 +4114,23 @@
"type": "string"
},
"MessagePhase": {
"enum": [
"commentary",
"final_answer"
],
"type": "string"
"description": "Classifies an assistant message as interim commentary or final answer text.\n\nProviders do not emit this consistently, so callers must treat `None` as \"phase unknown\" and keep compatibility behavior for legacy models.",
"oneOf": [
{
"description": "Mid-turn assistant text (for example preamble/progress narration).\n\nAdditional tool calls or assistant output may follow before turn completion.",
"enum": [
"commentary"
],
"type": "string"
},
{
"description": "The assistant's terminal answer text for the current turn.",
"enum": [
"final_answer"
],
"type": "string"
}
]
},
"ModeKind": {
"description": "Initial collaboration mode to use when the TUI starts.",
@@ -5416,6 +5594,25 @@
],
"type": "object"
},
"SessionNetworkProxyRuntime": {
"properties": {
"admin_addr": {
"type": "string"
},
"http_addr": {
"type": "string"
},
"socks_addr": {
"type": "string"
}
},
"required": [
"admin_addr",
"http_addr",
"socks_addr"
],
"type": "object"
},
"SessionSource": {
"oneOf": [
{
@@ -6686,6 +6883,7 @@
"type": "object"
},
{
"description": "Assistant-authored message payload used in turn-item streams.\n\n`phase` is optional because not all providers/models emit it. Consumers should use it when present, but retain legacy completion semantics when it is `None`.",
"properties": {
"content": {
"items": {
@@ -6696,6 +6894,17 @@
"id": {
"type": "string"
},
"phase": {
"anyOf": [
{
"$ref": "#/definitions/MessagePhase"
},
{
"type": "null"
}
],
"description": "Optional phase metadata carried through from `ResponseItem::Message`.\n\nThis is currently used by TUI rendering to distinguish mid-turn commentary from a final answer and avoid status-indicator jitter."
},
"type": {
"enum": [
"AgentMessage"
@@ -7772,6 +7981,26 @@
"title": "Account/rateLimits/updatedNotification",
"type": "object"
},
{
"properties": {
"method": {
"enum": [
"app/list/updated"
],
"title": "App/list/updatedNotificationMethod",
"type": "string"
},
"params": {
"$ref": "#/definitions/AppListUpdatedNotification"
}
},
"required": [
"method",
"params"
],
"title": "App/list/updatedNotification",
"type": "object"
},
{
"properties": {
"method": {


@@ -41,7 +41,7 @@
"ChatgptAuthTokensRefreshParams": {
"properties": {
"previousAccountId": {
"description": "Workspace/account identifier that Codex was previously using.\n\nClients that manage multiple accounts/workspaces can use this as a hint to refresh the token for the correct workspace.\n\nThis may be `null` when the prior ID token did not include a workspace identifier (`chatgpt_account_id`) or when the token could not be parsed.",
"description": "Workspace/account identifier that Codex was previously using.\n\nClients that manage multiple accounts/workspaces can use this as a hint to refresh the token for the correct workspace.\n\nThis may be `null` when the prior auth state did not include a workspace identifier (`chatgpt_account_id`).",
"type": [
"string",
"null"


@@ -338,7 +338,7 @@
"$schema": "http://json-schema.org/draft-07/schema#",
"properties": {
"previousAccountId": {
"description": "Workspace/account identifier that Codex was previously using.\n\nClients that manage multiple accounts/workspaces can use this as a hint to refresh the token for the correct workspace.\n\nThis may be `null` when the prior ID token did not include a workspace identifier (`chatgpt_account_id`) or when the token could not be parsed.",
"description": "Workspace/account identifier that Codex was previously using.\n\nClients that manage multiple accounts/workspaces can use this as a hint to refresh the token for the correct workspace.\n\nThis may be `null` when the prior auth state did not include a workspace identifier (`chatgpt_account_id`).",
"type": [
"string",
"null"
@@ -371,13 +371,19 @@
"accessToken": {
"type": "string"
},
"idToken": {
"chatgptAccountId": {
"type": "string"
},
"chatgptPlanType": {
"type": [
"string",
"null"
]
}
},
"required": [
"accessToken",
"idToken"
"chatgptAccountId"
],
"title": "ChatgptAuthTokensRefreshResponse",
"type": "object"
@@ -862,6 +868,30 @@
"title": "Turn/startRequest",
"type": "object"
},
{
"properties": {
"id": {
"$ref": "#/definitions/RequestId"
},
"method": {
"enum": [
"turn/steer"
],
"title": "Turn/steerRequestMethod",
"type": "string"
},
"params": {
"$ref": "#/definitions/v2/TurnSteerParams"
}
},
"required": [
"id",
"method",
"params"
],
"title": "Turn/steerRequest",
"type": "object"
},
{
"properties": {
"id": {
@@ -934,6 +964,30 @@
"title": "Model/listRequest",
"type": "object"
},
{
"properties": {
"id": {
"$ref": "#/definitions/RequestId"
},
"method": {
"enum": [
"experimentalFeature/list"
],
"title": "ExperimentalFeature/listRequestMethod",
"type": "string"
},
"params": {
"$ref": "#/definitions/v2/ExperimentalFeatureListParams"
}
},
"required": [
"id",
"method",
"params"
],
"title": "ExperimentalFeature/listRequest",
"type": "object"
},
{
"properties": {
"id": {
@@ -2846,6 +2900,17 @@
"model_provider_id": {
"type": "string"
},
"network_proxy": {
"anyOf": [
{
"$ref": "#/definitions/SessionNetworkProxyRuntime"
},
{
"type": "null"
}
],
"description": "Runtime proxy bind addresses, when the managed proxy was started for this session."
},
"reasoning_effort": {
"anyOf": [
{
@@ -4734,6 +4799,95 @@
],
"title": "CollabCloseEndEventMsg",
"type": "object"
},
{
"description": "Collab interaction: resume begin.",
"properties": {
"call_id": {
"description": "Identifier for the collab tool call.",
"type": "string"
},
"receiver_thread_id": {
"allOf": [
{
"$ref": "#/definitions/v2/ThreadId"
}
],
"description": "Thread ID of the receiver."
},
"sender_thread_id": {
"allOf": [
{
"$ref": "#/definitions/v2/ThreadId"
}
],
"description": "Thread ID of the sender."
},
"type": {
"enum": [
"collab_resume_begin"
],
"title": "CollabResumeBeginEventMsgType",
"type": "string"
}
},
"required": [
"call_id",
"receiver_thread_id",
"sender_thread_id",
"type"
],
"title": "CollabResumeBeginEventMsg",
"type": "object"
},
{
"description": "Collab interaction: resume end.",
"properties": {
"call_id": {
"description": "Identifier for the collab tool call.",
"type": "string"
},
"receiver_thread_id": {
"allOf": [
{
"$ref": "#/definitions/v2/ThreadId"
}
],
"description": "Thread ID of the receiver."
},
"sender_thread_id": {
"allOf": [
{
"$ref": "#/definitions/v2/ThreadId"
}
],
"description": "Thread ID of the sender."
},
"status": {
"allOf": [
{
"$ref": "#/definitions/AgentStatus"
}
],
"description": "Last known status of the receiver agent reported to the sender agent after resume."
},
"type": {
"enum": [
"collab_resume_end"
],
"title": "CollabResumeEndEventMsgType",
"type": "string"
}
},
"required": [
"call_id",
"receiver_thread_id",
"sender_thread_id",
"status",
"type"
],
"title": "CollabResumeEndEventMsg",
"type": "object"
}
],
"title": "EventMsg"
@@ -6016,11 +6170,23 @@
]
},
"MessagePhase": {
"enum": [
"commentary",
"final_answer"
],
"type": "string"
"description": "Classifies an assistant message as interim commentary or final answer text.\n\nProviders do not emit this consistently, so callers must treat `None` as \"phase unknown\" and keep compatibility behavior for legacy models.",
"oneOf": [
{
"description": "Mid-turn assistant text (for example preamble/progress narration).\n\nAdditional tool calls or assistant output may follow before turn completion.",
"enum": [
"commentary"
],
"type": "string"
},
{
"description": "The assistant's terminal answer text for the current turn.",
"enum": [
"final_answer"
],
"type": "string"
}
]
},
"ModeKind": {
"description": "Initial collaboration mode to use when the TUI starts.",
@@ -8043,6 +8209,26 @@
"title": "Account/rateLimits/updatedNotification",
"type": "object"
},
{
"properties": {
"method": {
"enum": [
"app/list/updated"
],
"title": "App/list/updatedNotificationMethod",
"type": "string"
},
"params": {
"$ref": "#/definitions/v2/AppListUpdatedNotification"
}
},
"required": [
"method",
"params"
],
"title": "App/list/updatedNotification",
"type": "object"
},
{
"properties": {
"method": {
@@ -8503,6 +8689,25 @@
"title": "SessionConfiguredNotification",
"type": "object"
},
"SessionNetworkProxyRuntime": {
"properties": {
"admin_addr": {
"type": "string"
},
"http_addr": {
"type": "string"
},
"socks_addr": {
"type": "string"
}
},
"required": [
"admin_addr",
"http_addr",
"socks_addr"
],
"type": "object"
},
"SessionSource": {
"oneOf": [
{
@@ -9098,6 +9303,7 @@
"type": "object"
},
{
"description": "Assistant-authored message payload used in turn-item streams.\n\n`phase` is optional because not all providers/models emit it. Consumers should use it when present, but retain legacy completion semantics when it is `None`.",
"properties": {
"content": {
"items": {
@@ -9108,6 +9314,17 @@
"id": {
"type": "string"
},
"phase": {
"anyOf": [
{
"$ref": "#/definitions/MessagePhase"
},
{
"type": "null"
}
],
"description": "Optional phase metadata carried through from `ResponseItem::Message`.\n\nThis is currently used by TUI rendering to distinguish mid-turn commentary from a final answer and avoid status-indicator jitter."
},
"type": {
"enum": [
"AgentMessage"
@@ -9777,7 +9994,34 @@
},
"type": "object"
},
"AppConfig": {
"properties": {
"disabled_reason": {
"anyOf": [
{
"$ref": "#/definitions/v2/AppDisabledReason"
},
{
"type": "null"
}
]
},
"enabled": {
"default": true,
"type": "boolean"
}
},
"type": "object"
},
"AppDisabledReason": {
"enum": [
"unknown",
"user"
],
"type": "string"
},
"AppInfo": {
"description": "EXPERIMENTAL - app metadata returned by app-list APIs.",
"properties": {
"description": {
"type": [
@@ -9826,8 +10070,29 @@
],
"type": "object"
},
"AppListUpdatedNotification": {
"$schema": "http://json-schema.org/draft-07/schema#",
"description": "EXPERIMENTAL - notification emitted when the app list changes.",
"properties": {
"data": {
"items": {
"$ref": "#/definitions/v2/AppInfo"
},
"type": "array"
}
},
"required": [
"data"
],
"title": "AppListUpdatedNotification",
"type": "object"
},
"AppsConfig": {
"type": "object"
},
"AppsListParams": {
"$schema": "http://json-schema.org/draft-07/schema#",
"description": "EXPERIMENTAL - list available apps/connectors.",
"properties": {
"cursor": {
"description": "Opaque pagination cursor returned by a previous call.",
@@ -9836,6 +10101,10 @@
"null"
]
},
"forceRefetch": {
"description": "When true, bypass app caches and fetch the latest data from sources.",
"type": "boolean"
},
"limit": {
"description": "Optional page size; defaults to a reasonable server-side value.",
"format": "uint32",
@@ -9844,6 +10113,13 @@
"integer",
"null"
]
},
"threadId": {
"description": "Optional thread id used to evaluate app feature gating from that thread's config.",
"type": [
"string",
"null"
]
}
},
"title": "AppsListParams",
@@ -9851,6 +10127,7 @@
},
"AppsListResponse": {
"$schema": "http://json-schema.org/draft-07/schema#",
"description": "EXPERIMENTAL - app list response.",
"properties": {
"data": {
"items": {
@@ -10133,6 +10410,7 @@
"enum": [
"spawnAgent",
"sendInput",
"resumeAgent",
"wait",
"closeAgent"
],
@@ -10900,6 +11178,15 @@
"null"
]
},
"allowedWebSearchModes": {
"items": {
"$ref": "#/definitions/v2/WebSearchMode"
},
"type": [
"array",
"null"
]
},
"enforceResidency": {
"anyOf": [
{
@@ -11204,6 +11491,143 @@
"title": "ErrorNotification",
"type": "object"
},
"ExperimentalFeature": {
"properties": {
"announcement": {
"description": "Announcement copy shown to users when the feature is introduced. Null when this feature is not in beta.",
"type": [
"string",
"null"
]
},
"defaultEnabled": {
"description": "Whether this feature is enabled by default.",
"type": "boolean"
},
"description": {
"description": "Short summary describing what the feature does. Null when this feature is not in beta.",
"type": [
"string",
"null"
]
},
"displayName": {
"description": "User-facing display name shown in the experimental features UI. Null when this feature is not in beta.",
"type": [
"string",
"null"
]
},
"enabled": {
"description": "Whether this feature is currently enabled in the loaded config.",
"type": "boolean"
},
"name": {
"description": "Stable key used in config.toml and CLI flag toggles.",
"type": "string"
},
"stage": {
"allOf": [
{
"$ref": "#/definitions/v2/ExperimentalFeatureStage"
}
],
"description": "Lifecycle stage of this feature flag."
}
},
"required": [
"defaultEnabled",
"enabled",
"name",
"stage"
],
"type": "object"
},
"ExperimentalFeatureListParams": {
"$schema": "http://json-schema.org/draft-07/schema#",
"properties": {
"cursor": {
"description": "Opaque pagination cursor returned by a previous call.",
"type": [
"string",
"null"
]
},
"limit": {
"description": "Optional page size; defaults to a reasonable server-side value.",
"format": "uint32",
"minimum": 0.0,
"type": [
"integer",
"null"
]
}
},
"title": "ExperimentalFeatureListParams",
"type": "object"
},
"ExperimentalFeatureListResponse": {
"$schema": "http://json-schema.org/draft-07/schema#",
"properties": {
"data": {
"items": {
"$ref": "#/definitions/v2/ExperimentalFeature"
},
"type": "array"
},
"nextCursor": {
"description": "Opaque cursor to pass to the next call to continue after the last item. If None, there are no more items to return.",
"type": [
"string",
"null"
]
}
},
"required": [
"data"
],
"title": "ExperimentalFeatureListResponse",
"type": "object"
},
"ExperimentalFeatureStage": {
"oneOf": [
{
"description": "Feature is available for user testing and feedback.",
"enum": [
"beta"
],
"type": "string"
},
{
"description": "Feature is still being built and not ready for broad use.",
"enum": [
"underDevelopment"
],
"type": "string"
},
{
"description": "Feature is production-ready.",
"enum": [
"stable"
],
"type": "string"
},
{
"description": "Feature is deprecated and should be avoided.",
"enum": [
"deprecated"
],
"type": "string"
},
{
"description": "Feature flag is retained only for backwards compatibility.",
"enum": [
"removed"
],
"type": "string"
}
]
},
"FeedbackUploadParams": {
"$schema": "http://json-schema.org/draft-07/schema#",
"properties": {
@@ -11690,13 +12114,20 @@
"description": "[UNSTABLE] FOR OPENAI INTERNAL USE ONLY - DO NOT USE. The access token must contain the same scopes that Codex-managed ChatGPT auth tokens have.",
"properties": {
"accessToken": {
"description": "Access token (JWT) supplied by the client. This token is used for backend API requests.",
"description": "Access token (JWT) supplied by the client. This token is used for backend API requests and email extraction.",
"type": "string"
},
"idToken": {
"description": "ID token (JWT) supplied by the client.\n\nThis token is used for identity and account metadata (email, plan type, workspace id).",
"chatgptAccountId": {
"description": "Workspace/account identifier supplied by the client.",
"type": "string"
},
"chatgptPlanType": {
"description": "Optional plan type supplied by the client.\n\nWhen `null`, Codex attempts to derive the plan type from access-token claims. If unavailable, the plan defaults to `unknown`.",
"type": [
"string",
"null"
]
},
"type": {
"enum": [
"chatgptAuthTokens"
@@ -11707,7 +12138,7 @@
},
"required": [
"accessToken",
"idToken",
"chatgptAccountId",
"type"
],
"title": "ChatgptAuthTokensv2::LoginAccountParams",
@@ -11964,11 +12395,23 @@
"type": "string"
},
"MessagePhase": {
"enum": [
"commentary",
"final_answer"
],
"type": "string"
"description": "Classifies an assistant message as interim commentary or final answer text.\n\nProviders do not emit this consistently, so callers must treat `None` as \"phase unknown\" and keep compatibility behavior for legacy models.",
"oneOf": [
{
"description": "Mid-turn assistant text (for example preamble/progress narration).\n\nAdditional tool calls or assistant output may follow before turn completion.",
"enum": [
"commentary"
],
"type": "string"
},
{
"description": "The assistant's terminal answer text for the current turn.",
"enum": [
"final_answer"
],
"type": "string"
}
]
},
"ModeKind": {
"description": "Initial collaboration mode to use when the TUI starts.",
@@ -12089,6 +12532,84 @@
],
"type": "string"
},
"NetworkRequirements": {
"properties": {
"allowLocalBinding": {
"type": [
"boolean",
"null"
]
},
"allowUnixSockets": {
"items": {
"type": "string"
},
"type": [
"array",
"null"
]
},
"allowUpstreamProxy": {
"type": [
"boolean",
"null"
]
},
"allowedDomains": {
"items": {
"type": "string"
},
"type": [
"array",
"null"
]
},
"dangerouslyAllowNonLoopbackAdmin": {
"type": [
"boolean",
"null"
]
},
"dangerouslyAllowNonLoopbackProxy": {
"type": [
"boolean",
"null"
]
},
"deniedDomains": {
"items": {
"type": "string"
},
"type": [
"array",
"null"
]
},
"enabled": {
"type": [
"boolean",
"null"
]
},
"httpPort": {
"format": "uint16",
"minimum": 0.0,
"type": [
"integer",
"null"
]
},
"socksPort": {
"format": "uint16",
"minimum": 0.0,
"type": [
"integer",
"null"
]
}
},
"type": "object"
},
"OverriddenMetadata": {
"properties": {
"effectiveValue": true,
@@ -13608,6 +14129,24 @@
],
"type": "object"
},
"SkillsListExtraRootsForCwd": {
"properties": {
"cwd": {
"type": "string"
},
"extraUserRoots": {
"items": {
"type": "string"
},
"type": "array"
}
},
"required": [
"cwd",
"extraUserRoots"
],
"type": "object"
},
"SkillsListParams": {
"$schema": "http://json-schema.org/draft-07/schema#",
"properties": {
@@ -13621,6 +14160,17 @@
"forceReload": {
"description": "When true, bypass the skills cache and re-scan skills from disk.",
"type": "boolean"
},
"perCwdExtraUserRoots": {
"default": null,
"description": "Optional per-cwd extra roots to scan as user-scoped skills.",
"items": {
"$ref": "#/definitions/v2/SkillsListExtraRootsForCwd"
},
"type": [
"array",
"null"
]
}
},
"title": "SkillsListParams",
@@ -14005,13 +14555,6 @@
"null"
]
},
"path": {
"description": "[UNSTABLE] Specify the rollout path to fork from. If specified, the thread_id param will be ignored.",
"type": [
"string",
"null"
]
},
"sandbox": {
"anyOf": [
{
@@ -14770,16 +15313,6 @@
"null"
]
},
"history": {
"description": "[UNSTABLE] FOR CODEX CLOUD - DO NOT USE. If specified, the thread will be resumed with the provided history instead of loaded from disk.",
"items": {
"$ref": "#/definitions/v2/ResponseItem"
},
"type": [
"array",
"null"
]
},
"model": {
"description": "Configuration overrides for the resumed thread, if any.",
"type": [
@@ -14793,13 +15326,6 @@
"null"
]
},
"path": {
"description": "[UNSTABLE] Specify the rollout path to resume from. If specified, the thread_id param will be ignored.",
"type": [
"string",
"null"
]
},
"personality": {
"anyOf": [
{
@@ -14999,11 +15525,6 @@
"null"
]
},
"experimentalRawEvents": {
"default": false,
"description": "If true, opt into emitting raw response items on the event stream.\n\nThis is for internal use only (e.g. Codex Cloud). (TODO): Figure out a better way to categorize internal / experimental events & protocols.",
"type": "boolean"
},
"model": {
"type": [
"string",
@@ -15440,17 +15961,6 @@
],
"description": "Override the approval policy for this turn and subsequent turns."
},
"collaborationMode": {
"anyOf": [
{
"$ref": "#/definitions/v2/CollaborationMode"
},
{
"type": "null"
}
],
"description": "EXPERIMENTAL - set a pre-set collaboration mode. Takes precedence over model, reasoning_effort, and developer instructions if set."
},
"cwd": {
"description": "Override the working directory for this turn and subsequent turns.",
"type": [
@@ -15568,6 +16078,44 @@
],
"type": "string"
},
"TurnSteerParams": {
"$schema": "http://json-schema.org/draft-07/schema#",
"properties": {
"expectedTurnId": {
"description": "Required active turn id precondition. The request fails when it does not match the currently active turn.",
"type": "string"
},
"input": {
"items": {
"$ref": "#/definitions/v2/UserInput"
},
"type": "array"
},
"threadId": {
"type": "string"
}
},
"required": [
"expectedTurnId",
"input",
"threadId"
],
"title": "TurnSteerParams",
"type": "object"
},
"TurnSteerResponse": {
"$schema": "http://json-schema.org/draft-07/schema#",
"properties": {
"turnId": {
"type": "string"
}
},
"required": [
"turnId"
],
"title": "TurnSteerResponse",
"type": "object"
},
"UserInput": {
"oneOf": [
{


@@ -887,6 +887,17 @@
"model_provider_id": {
"type": "string"
},
"network_proxy": {
"anyOf": [
{
"$ref": "#/definitions/SessionNetworkProxyRuntime"
},
{
"type": "null"
}
],
"description": "Runtime proxy bind addresses, when the managed proxy was started for this session."
},
"reasoning_effort": {
"anyOf": [
{
@@ -2775,6 +2786,95 @@
],
"title": "CollabCloseEndEventMsg",
"type": "object"
},
{
"description": "Collab interaction: resume begin.",
"properties": {
"call_id": {
"description": "Identifier for the collab tool call.",
"type": "string"
},
"receiver_thread_id": {
"allOf": [
{
"$ref": "#/definitions/ThreadId"
}
],
"description": "Thread ID of the receiver."
},
"sender_thread_id": {
"allOf": [
{
"$ref": "#/definitions/ThreadId"
}
],
"description": "Thread ID of the sender."
},
"type": {
"enum": [
"collab_resume_begin"
],
"title": "CollabResumeBeginEventMsgType",
"type": "string"
}
},
"required": [
"call_id",
"receiver_thread_id",
"sender_thread_id",
"type"
],
"title": "CollabResumeBeginEventMsg",
"type": "object"
},
{
"description": "Collab interaction: resume end.",
"properties": {
"call_id": {
"description": "Identifier for the collab tool call.",
"type": "string"
},
"receiver_thread_id": {
"allOf": [
{
"$ref": "#/definitions/ThreadId"
}
],
"description": "Thread ID of the receiver."
},
"sender_thread_id": {
"allOf": [
{
"$ref": "#/definitions/ThreadId"
}
],
"description": "Thread ID of the sender."
},
"status": {
"allOf": [
{
"$ref": "#/definitions/AgentStatus"
}
],
"description": "Last known status of the receiver agent reported to the sender agent after resume."
},
"type": {
"enum": [
"collab_resume_end"
],
"title": "CollabResumeEndEventMsgType",
"type": "string"
}
},
"required": [
"call_id",
"receiver_thread_id",
"sender_thread_id",
"status",
"type"
],
"title": "CollabResumeEndEventMsg",
"type": "object"
}
]
},
@@ -3169,11 +3269,23 @@
]
},
"MessagePhase": {
"enum": [
"commentary",
"final_answer"
],
"type": "string"
"description": "Classifies an assistant message as interim commentary or final answer text.\n\nProviders do not emit this consistently, so callers must treat `None` as \"phase unknown\" and keep compatibility behavior for legacy models.",
"oneOf": [
{
"description": "Mid-turn assistant text (for example preamble/progress narration).\n\nAdditional tool calls or assistant output may follow before turn completion.",
"enum": [
"commentary"
],
"type": "string"
},
{
"description": "The assistant's terminal answer text for the current turn.",
"enum": [
"final_answer"
],
"type": "string"
}
]
},
"ModeKind": {
"description": "Initial collaboration mode to use when the TUI starts.",
@@ -4326,6 +4438,25 @@
}
]
},
"SessionNetworkProxyRuntime": {
"properties": {
"admin_addr": {
"type": "string"
},
"http_addr": {
"type": "string"
},
"socks_addr": {
"type": "string"
}
},
"required": [
"admin_addr",
"http_addr",
"socks_addr"
],
"type": "object"
},
"SkillDependencies": {
"properties": {
"tools": {
@@ -4685,6 +4816,7 @@
"type": "object"
},
{
"description": "Assistant-authored message payload used in turn-item streams.\n\n`phase` is optional because not all providers/models emit it. Consumers should use it when present, but retain legacy completion semantics when it is `None`.",
"properties": {
"content": {
"items": {
@@ -4695,6 +4827,17 @@
"id": {
"type": "string"
},
"phase": {
"anyOf": [
{
"$ref": "#/definitions/MessagePhase"
},
{
"type": "null"
}
],
"description": "Optional phase metadata carried through from `ResponseItem::Message`.\n\nThis is currently used by TUI rendering to distinguish mid-turn commentary from a final answer and avoid status-indicator jitter."
},
"type": {
"enum": [
"AgentMessage"


@@ -271,11 +271,23 @@
"type": "string"
},
"MessagePhase": {
"enum": [
"commentary",
"final_answer"
],
"type": "string"
"description": "Classifies an assistant message as interim commentary or final answer text.\n\nProviders do not emit this consistently, so callers must treat `None` as \"phase unknown\" and keep compatibility behavior for legacy models.",
"oneOf": [
{
"description": "Mid-turn assistant text (for example preamble/progress narration).\n\nAdditional tool calls or assistant output may follow before turn completion.",
"enum": [
"commentary"
],
"type": "string"
},
{
"description": "The assistant's terminal answer text for the current turn.",
"enum": [
"final_answer"
],
"type": "string"
}
]
},
"NewConversationParams": {
"properties": {


@@ -887,6 +887,17 @@
"model_provider_id": {
"type": "string"
},
"network_proxy": {
"anyOf": [
{
"$ref": "#/definitions/SessionNetworkProxyRuntime"
},
{
"type": "null"
}
],
"description": "Runtime proxy bind addresses, when the managed proxy was started for this session."
},
"reasoning_effort": {
"anyOf": [
{
@@ -2775,6 +2786,95 @@
],
"title": "CollabCloseEndEventMsg",
"type": "object"
},
{
"description": "Collab interaction: resume begin.",
"properties": {
"call_id": {
"description": "Identifier for the collab tool call.",
"type": "string"
},
"receiver_thread_id": {
"allOf": [
{
"$ref": "#/definitions/ThreadId"
}
],
"description": "Thread ID of the receiver."
},
"sender_thread_id": {
"allOf": [
{
"$ref": "#/definitions/ThreadId"
}
],
"description": "Thread ID of the sender."
},
"type": {
"enum": [
"collab_resume_begin"
],
"title": "CollabResumeBeginEventMsgType",
"type": "string"
}
},
"required": [
"call_id",
"receiver_thread_id",
"sender_thread_id",
"type"
],
"title": "CollabResumeBeginEventMsg",
"type": "object"
},
{
"description": "Collab interaction: resume end.",
"properties": {
"call_id": {
"description": "Identifier for the collab tool call.",
"type": "string"
},
"receiver_thread_id": {
"allOf": [
{
"$ref": "#/definitions/ThreadId"
}
],
"description": "Thread ID of the receiver."
},
"sender_thread_id": {
"allOf": [
{
"$ref": "#/definitions/ThreadId"
}
],
"description": "Thread ID of the sender."
},
"status": {
"allOf": [
{
"$ref": "#/definitions/AgentStatus"
}
],
"description": "Last known status of the receiver agent reported to the sender agent after resume."
},
"type": {
"enum": [
"collab_resume_end"
],
"title": "CollabResumeEndEventMsgType",
"type": "string"
}
},
"required": [
"call_id",
"receiver_thread_id",
"sender_thread_id",
"status",
"type"
],
"title": "CollabResumeEndEventMsg",
"type": "object"
}
]
},
@@ -3169,11 +3269,23 @@
]
},
"MessagePhase": {
"enum": [
"commentary",
"final_answer"
],
"type": "string"
"description": "Classifies an assistant message as interim commentary or final answer text.\n\nProviders do not emit this consistently, so callers must treat `None` as \"phase unknown\" and keep compatibility behavior for legacy models.",
"oneOf": [
{
"description": "Mid-turn assistant text (for example preamble/progress narration).\n\nAdditional tool calls or assistant output may follow before turn completion.",
"enum": [
"commentary"
],
"type": "string"
},
{
"description": "The assistant's terminal answer text for the current turn.",
"enum": [
"final_answer"
],
"type": "string"
}
]
},
"ModeKind": {
"description": "Initial collaboration mode to use when the TUI starts.",
@@ -4326,6 +4438,25 @@
}
]
},
"SessionNetworkProxyRuntime": {
"properties": {
"admin_addr": {
"type": "string"
},
"http_addr": {
"type": "string"
},
"socks_addr": {
"type": "string"
}
},
"required": [
"admin_addr",
"http_addr",
"socks_addr"
],
"type": "object"
},
"SkillDependencies": {
"properties": {
"tools": {
@@ -4685,6 +4816,7 @@
"type": "object"
},
{
"description": "Assistant-authored message payload used in turn-item streams.\n\n`phase` is optional because not all providers/models emit it. Consumers should use it when present, but retain legacy completion semantics when it is `None`.",
"properties": {
"content": {
"items": {
@@ -4695,6 +4827,17 @@
"id": {
"type": "string"
},
"phase": {
"anyOf": [
{
"$ref": "#/definitions/MessagePhase"
},
{
"type": "null"
}
],
"description": "Optional phase metadata carried through from `ResponseItem::Message`.\n\nThis is currently used by TUI rendering to distinguish mid-turn commentary from a final answer and avoid status-indicator jitter."
},
"type": {
"enum": [
"AgentMessage"


@@ -887,6 +887,17 @@
"model_provider_id": {
"type": "string"
},
"network_proxy": {
"anyOf": [
{
"$ref": "#/definitions/SessionNetworkProxyRuntime"
},
{
"type": "null"
}
],
"description": "Runtime proxy bind addresses, when the managed proxy was started for this session."
},
"reasoning_effort": {
"anyOf": [
{
@@ -2775,6 +2786,95 @@
],
"title": "CollabCloseEndEventMsg",
"type": "object"
},
{
"description": "Collab interaction: resume begin.",
"properties": {
"call_id": {
"description": "Identifier for the collab tool call.",
"type": "string"
},
"receiver_thread_id": {
"allOf": [
{
"$ref": "#/definitions/ThreadId"
}
],
"description": "Thread ID of the receiver."
},
"sender_thread_id": {
"allOf": [
{
"$ref": "#/definitions/ThreadId"
}
],
"description": "Thread ID of the sender."
},
"type": {
"enum": [
"collab_resume_begin"
],
"title": "CollabResumeBeginEventMsgType",
"type": "string"
}
},
"required": [
"call_id",
"receiver_thread_id",
"sender_thread_id",
"type"
],
"title": "CollabResumeBeginEventMsg",
"type": "object"
},
{
"description": "Collab interaction: resume end.",
"properties": {
"call_id": {
"description": "Identifier for the collab tool call.",
"type": "string"
},
"receiver_thread_id": {
"allOf": [
{
"$ref": "#/definitions/ThreadId"
}
],
"description": "Thread ID of the receiver."
},
"sender_thread_id": {
"allOf": [
{
"$ref": "#/definitions/ThreadId"
}
],
"description": "Thread ID of the sender."
},
"status": {
"allOf": [
{
"$ref": "#/definitions/AgentStatus"
}
],
"description": "Last known status of the receiver agent reported to the sender agent after resume."
},
"type": {
"enum": [
"collab_resume_end"
],
"title": "CollabResumeEndEventMsgType",
"type": "string"
}
},
"required": [
"call_id",
"receiver_thread_id",
"sender_thread_id",
"status",
"type"
],
"title": "CollabResumeEndEventMsg",
"type": "object"
}
]
},
@@ -3169,11 +3269,23 @@
]
},
"MessagePhase": {
"enum": [
"commentary",
"final_answer"
],
"type": "string"
"description": "Classifies an assistant message as interim commentary or final answer text.\n\nProviders do not emit this consistently, so callers must treat `None` as \"phase unknown\" and keep compatibility behavior for legacy models.",
"oneOf": [
{
"description": "Mid-turn assistant text (for example preamble/progress narration).\n\nAdditional tool calls or assistant output may follow before turn completion.",
"enum": [
"commentary"
],
"type": "string"
},
{
"description": "The assistant's terminal answer text for the current turn.",
"enum": [
"final_answer"
],
"type": "string"
}
]
},
"ModeKind": {
"description": "Initial collaboration mode to use when the TUI starts.",
@@ -4326,6 +4438,25 @@
}
]
},
"SessionNetworkProxyRuntime": {
"properties": {
"admin_addr": {
"type": "string"
},
"http_addr": {
"type": "string"
},
"socks_addr": {
"type": "string"
}
},
"required": [
"admin_addr",
"http_addr",
"socks_addr"
],
"type": "object"
},
"SkillDependencies": {
"properties": {
"tools": {
@@ -4685,6 +4816,7 @@
"type": "object"
},
{
"description": "Assistant-authored message payload used in turn-item streams.\n\n`phase` is optional because not all providers/models emit it. Consumers should use it when present, but retain legacy completion semantics when it is `None`.",
"properties": {
"content": {
"items": {
@@ -4695,6 +4827,17 @@
"id": {
"type": "string"
},
"phase": {
"anyOf": [
{
"$ref": "#/definitions/MessagePhase"
},
{
"type": "null"
}
],
"description": "Optional phase metadata carried through from `ResponseItem::Message`.\n\nThis is currently used by TUI rendering to distinguish mid-turn commentary from a final answer and avoid status-indicator jitter."
},
"type": {
"enum": [
"AgentMessage"


@@ -0,0 +1,69 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"definitions": {
"AppInfo": {
"description": "EXPERIMENTAL - app metadata returned by app-list APIs.",
"properties": {
"description": {
"type": [
"string",
"null"
]
},
"distributionChannel": {
"type": [
"string",
"null"
]
},
"id": {
"type": "string"
},
"installUrl": {
"type": [
"string",
"null"
]
},
"isAccessible": {
"default": false,
"type": "boolean"
},
"logoUrl": {
"type": [
"string",
"null"
]
},
"logoUrlDark": {
"type": [
"string",
"null"
]
},
"name": {
"type": "string"
}
},
"required": [
"id",
"name"
],
"type": "object"
}
},
"description": "EXPERIMENTAL - notification emitted when the app list changes.",
"properties": {
"data": {
"items": {
"$ref": "#/definitions/AppInfo"
},
"type": "array"
}
},
"required": [
"data"
],
"title": "AppListUpdatedNotification",
"type": "object"
}

View File

@@ -1,5 +1,6 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"description": "EXPERIMENTAL - list available apps/connectors.",
"properties": {
"cursor": {
"description": "Opaque pagination cursor returned by a previous call.",
@@ -8,6 +9,10 @@
"null"
]
},
"forceRefetch": {
"description": "When true, bypass app caches and fetch the latest data from sources.",
"type": "boolean"
},
"limit": {
"description": "Optional page size; defaults to a reasonable server-side value.",
"format": "uint32",
@@ -16,6 +21,13 @@
"integer",
"null"
]
},
"threadId": {
"description": "Optional thread id used to evaluate app feature gating from that thread's config.",
"type": [
"string",
"null"
]
}
},
"title": "AppsListParams",

View File

@@ -2,6 +2,7 @@
"$schema": "http://json-schema.org/draft-07/schema#",
"definitions": {
"AppInfo": {
"description": "EXPERIMENTAL - app metadata returned by app-list APIs.",
"properties": {
"description": {
"type": [
@@ -51,6 +52,7 @@
"type": "object"
}
},
"description": "EXPERIMENTAL - app list response.",
"properties": {
"data": {
"items": {

View File

@@ -17,6 +17,35 @@
},
"type": "object"
},
"AppConfig": {
"properties": {
"disabled_reason": {
"anyOf": [
{
"$ref": "#/definitions/AppDisabledReason"
},
{
"type": "null"
}
]
},
"enabled": {
"default": true,
"type": "boolean"
}
},
"type": "object"
},
"AppDisabledReason": {
"enum": [
"unknown",
"user"
],
"type": "string"
},
"AppsConfig": {
"type": "object"
},
"AskForApproval": {
"enum": [
"untrusted",

View File

@@ -30,6 +30,15 @@
"null"
]
},
"allowedWebSearchModes": {
"items": {
"$ref": "#/definitions/WebSearchMode"
},
"type": [
"array",
"null"
]
},
"enforceResidency": {
"anyOf": [
{
@@ -43,6 +52,84 @@
},
"type": "object"
},
"NetworkRequirements": {
"properties": {
"allowLocalBinding": {
"type": [
"boolean",
"null"
]
},
"allowUnixSockets": {
"items": {
"type": "string"
},
"type": [
"array",
"null"
]
},
"allowUpstreamProxy": {
"type": [
"boolean",
"null"
]
},
"allowedDomains": {
"items": {
"type": "string"
},
"type": [
"array",
"null"
]
},
"dangerouslyAllowNonLoopbackAdmin": {
"type": [
"boolean",
"null"
]
},
"dangerouslyAllowNonLoopbackProxy": {
"type": [
"boolean",
"null"
]
},
"deniedDomains": {
"items": {
"type": "string"
},
"type": [
"array",
"null"
]
},
"enabled": {
"type": [
"boolean",
"null"
]
},
"httpPort": {
"format": "uint16",
"minimum": 0.0,
"type": [
"integer",
"null"
]
},
"socksPort": {
"format": "uint16",
"minimum": 0.0,
"type": [
"integer",
"null"
]
}
},
"type": "object"
},
"ResidencyRequirement": {
"enum": [
"us"
@@ -56,6 +143,14 @@
"danger-full-access"
],
"type": "string"
},
"WebSearchMode": {
"enum": [
"disabled",
"cached",
"live"
],
"type": "string"
}
},
"properties": {

View File

@@ -0,0 +1,23 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"properties": {
"cursor": {
"description": "Opaque pagination cursor returned by a previous call.",
"type": [
"string",
"null"
]
},
"limit": {
"description": "Optional page size; defaults to a reasonable server-side value.",
"format": "uint32",
"minimum": 0.0,
"type": [
"integer",
"null"
]
}
},
"title": "ExperimentalFeatureListParams",
"type": "object"
}

View File

@@ -0,0 +1,116 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"definitions": {
"ExperimentalFeature": {
"properties": {
"announcement": {
"description": "Announcement copy shown to users when the feature is introduced. Null when this feature is not in beta.",
"type": [
"string",
"null"
]
},
"defaultEnabled": {
"description": "Whether this feature is enabled by default.",
"type": "boolean"
},
"description": {
"description": "Short summary describing what the feature does. Null when this feature is not in beta.",
"type": [
"string",
"null"
]
},
"displayName": {
"description": "User-facing display name shown in the experimental features UI. Null when this feature is not in beta.",
"type": [
"string",
"null"
]
},
"enabled": {
"description": "Whether this feature is currently enabled in the loaded config.",
"type": "boolean"
},
"name": {
"description": "Stable key used in config.toml and CLI flag toggles.",
"type": "string"
},
"stage": {
"allOf": [
{
"$ref": "#/definitions/ExperimentalFeatureStage"
}
],
"description": "Lifecycle stage of this feature flag."
}
},
"required": [
"defaultEnabled",
"enabled",
"name",
"stage"
],
"type": "object"
},
"ExperimentalFeatureStage": {
"oneOf": [
{
"description": "Feature is available for user testing and feedback.",
"enum": [
"beta"
],
"type": "string"
},
{
"description": "Feature is still being built and not ready for broad use.",
"enum": [
"underDevelopment"
],
"type": "string"
},
{
"description": "Feature is production-ready.",
"enum": [
"stable"
],
"type": "string"
},
{
"description": "Feature is deprecated and should be avoided.",
"enum": [
"deprecated"
],
"type": "string"
},
{
"description": "Feature flag is retained only for backwards compatibility.",
"enum": [
"removed"
],
"type": "string"
}
]
}
},
"properties": {
"data": {
"items": {
"$ref": "#/definitions/ExperimentalFeature"
},
"type": "array"
},
"nextCursor": {
"description": "Opaque cursor to pass to the next call to continue after the last item. If None, there are no more items to return.",
"type": [
"string",
"null"
]
}
},
"required": [
"data"
],
"title": "ExperimentalFeatureListResponse",
"type": "object"
}

View File

@@ -52,6 +52,7 @@
"enum": [
"spawnAgent",
"sendInput",
"resumeAgent",
"wait",
"closeAgent"
],

View File

@@ -52,6 +52,7 @@
"enum": [
"spawnAgent",
"sendInput",
"resumeAgent",
"wait",
"closeAgent"
],

View File

@@ -41,13 +41,20 @@
"description": "[UNSTABLE] FOR OPENAI INTERNAL USE ONLY - DO NOT USE. The access token must contain the same scopes that Codex-managed ChatGPT auth tokens have.",
"properties": {
"accessToken": {
"description": "Access token (JWT) supplied by the client. This token is used for backend API requests.",
"description": "Access token (JWT) supplied by the client. This token is used for backend API requests and email extraction.",
"type": "string"
},
"idToken": {
"description": "ID token (JWT) supplied by the client.\n\nThis token is used for identity and account metadata (email, plan type, workspace id).",
"chatgptAccountId": {
"description": "Workspace/account identifier supplied by the client.",
"type": "string"
},
"chatgptPlanType": {
"description": "Optional plan type supplied by the client.\n\nWhen `null`, Codex attempts to derive the plan type from access-token claims. If unavailable, the plan defaults to `unknown`.",
"type": [
"string",
"null"
]
},
"type": {
"enum": [
"chatgptAuthTokens"
@@ -58,7 +65,7 @@
},
"required": [
"accessToken",
"idToken",
"chatgptAccountId",
"type"
],
"title": "ChatgptAuthTokensv2::LoginAccountParams",

View File

@@ -238,11 +238,23 @@
"type": "string"
},
"MessagePhase": {
"enum": [
"commentary",
"final_answer"
],
"type": "string"
"description": "Classifies an assistant message as interim commentary or final answer text.\n\nProviders do not emit this consistently, so callers must treat `None` as \"phase unknown\" and keep compatibility behavior for legacy models.",
"oneOf": [
{
"description": "Mid-turn assistant text (for example preamble/progress narration).\n\nAdditional tool calls or assistant output may follow before turn completion.",
"enum": [
"commentary"
],
"type": "string"
},
{
"description": "The assistant's terminal answer text for the current turn.",
"enum": [
"final_answer"
],
"type": "string"
}
]
},
"ReasoningItemContent": {
"oneOf": [

View File

@@ -194,6 +194,7 @@
"enum": [
"spawnAgent",
"sendInput",
"resumeAgent",
"wait",
"closeAgent"
],

View File

@@ -1,5 +1,25 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"definitions": {
"SkillsListExtraRootsForCwd": {
"properties": {
"cwd": {
"type": "string"
},
"extraUserRoots": {
"items": {
"type": "string"
},
"type": "array"
}
},
"required": [
"cwd",
"extraUserRoots"
],
"type": "object"
}
},
"properties": {
"cwds": {
"description": "When empty, defaults to the current session working directory.",
@@ -11,6 +31,17 @@
"forceReload": {
"description": "When true, bypass the skills cache and re-scan skills from disk.",
"type": "boolean"
},
"perCwdExtraUserRoots": {
"default": null,
"description": "Optional per-cwd extra roots to scan as user-scoped skills.",
"items": {
"$ref": "#/definitions/SkillsListExtraRootsForCwd"
},
"type": [
"array",
"null"
]
}
},
"title": "SkillsListParams",

View File

@@ -69,13 +69,6 @@
"null"
]
},
"path": {
"description": "[UNSTABLE] Specify the rollout path to fork from. If specified, the thread_id param will be ignored.",
"type": [
"string",
"null"
]
},
"sandbox": {
"anyOf": [
{

View File

@@ -207,6 +207,7 @@
"enum": [
"spawnAgent",
"sendInput",
"resumeAgent",
"wait",
"closeAgent"
],

View File

@@ -194,6 +194,7 @@
"enum": [
"spawnAgent",
"sendInput",
"resumeAgent",
"wait",
"closeAgent"
],

View File

@@ -194,6 +194,7 @@
"enum": [
"spawnAgent",
"sendInput",
"resumeAgent",
"wait",
"closeAgent"
],

View File

@@ -247,11 +247,23 @@
"type": "string"
},
"MessagePhase": {
"enum": [
"commentary",
"final_answer"
],
"type": "string"
"description": "Classifies an assistant message as interim commentary or final answer text.\n\nProviders do not emit this consistently, so callers must treat `None` as \"phase unknown\" and keep compatibility behavior for legacy models.",
"oneOf": [
{
"description": "Mid-turn assistant text (for example preamble/progress narration).\n\nAdditional tool calls or assistant output may follow before turn completion.",
"enum": [
"commentary"
],
"type": "string"
},
{
"description": "The assistant's terminal answer text for the current turn.",
"enum": [
"final_answer"
],
"type": "string"
}
]
},
"Personality": {
"enum": [
@@ -832,16 +844,6 @@
"null"
]
},
"history": {
"description": "[UNSTABLE] FOR CODEX CLOUD - DO NOT USE. If specified, the thread will be resumed with the provided history instead of loaded from disk.",
"items": {
"$ref": "#/definitions/ResponseItem"
},
"type": [
"array",
"null"
]
},
"model": {
"description": "Configuration overrides for the resumed thread, if any.",
"type": [
@@ -855,13 +857,6 @@
"null"
]
},
"path": {
"description": "[UNSTABLE] Specify the rollout path to resume from. If specified, the thread_id param will be ignored.",
"type": [
"string",
"null"
]
},
"personality": {
"anyOf": [
{

View File

@@ -207,6 +207,7 @@
"enum": [
"spawnAgent",
"sendInput",
"resumeAgent",
"wait",
"closeAgent"
],

View File

@@ -194,6 +194,7 @@
"enum": [
"spawnAgent",
"sendInput",
"resumeAgent",
"wait",
"closeAgent"
],

View File

@@ -86,11 +86,6 @@
"null"
]
},
"experimentalRawEvents": {
"default": false,
"description": "If true, opt into emitting raw response items on the event stream.\n\nThis is for internal use only (e.g. Codex Cloud). (TODO): Figure out a better way to categorize internal / experimental events & protocols.",
"type": "boolean"
},
"model": {
"type": [
"string",

View File

@@ -207,6 +207,7 @@
"enum": [
"spawnAgent",
"sendInput",
"resumeAgent",
"wait",
"closeAgent"
],

View File

@@ -194,6 +194,7 @@
"enum": [
"spawnAgent",
"sendInput",
"resumeAgent",
"wait",
"closeAgent"
],

View File

@@ -194,6 +194,7 @@
"enum": [
"spawnAgent",
"sendInput",
"resumeAgent",
"wait",
"closeAgent"
],

View File

@@ -194,6 +194,7 @@
"enum": [
"spawnAgent",
"sendInput",
"resumeAgent",
"wait",
"closeAgent"
],

View File

@@ -383,17 +383,6 @@
],
"description": "Override the approval policy for this turn and subsequent turns."
},
"collaborationMode": {
"anyOf": [
{
"$ref": "#/definitions/CollaborationMode"
},
{
"type": "null"
}
],
"description": "EXPERIMENTAL - set a pre-set collaboration mode. Takes precedence over model, reasoning_effort, and developer instructions if set."
},
"cwd": {
"description": "Override the working directory for this turn and subsequent turns.",
"type": [

View File

@@ -194,6 +194,7 @@
"enum": [
"spawnAgent",
"sendInput",
"resumeAgent",
"wait",
"closeAgent"
],

View File

@@ -194,6 +194,7 @@
"enum": [
"spawnAgent",
"sendInput",
"resumeAgent",
"wait",
"closeAgent"
],

View File

@@ -0,0 +1,189 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"definitions": {
"ByteRange": {
"properties": {
"end": {
"format": "uint",
"minimum": 0.0,
"type": "integer"
},
"start": {
"format": "uint",
"minimum": 0.0,
"type": "integer"
}
},
"required": [
"end",
"start"
],
"type": "object"
},
"TextElement": {
"properties": {
"byteRange": {
"allOf": [
{
"$ref": "#/definitions/ByteRange"
}
],
"description": "Byte range in the parent `text` buffer that this element occupies."
},
"placeholder": {
"description": "Optional human-readable placeholder for the element, displayed in the UI.",
"type": [
"string",
"null"
]
}
},
"required": [
"byteRange"
],
"type": "object"
},
"UserInput": {
"oneOf": [
{
"properties": {
"text": {
"type": "string"
},
"text_elements": {
"default": [],
"description": "UI-defined spans within `text` used to render or persist special elements.",
"items": {
"$ref": "#/definitions/TextElement"
},
"type": "array"
},
"type": {
"enum": [
"text"
],
"title": "TextUserInputType",
"type": "string"
}
},
"required": [
"text",
"type"
],
"title": "TextUserInput",
"type": "object"
},
{
"properties": {
"type": {
"enum": [
"image"
],
"title": "ImageUserInputType",
"type": "string"
},
"url": {
"type": "string"
}
},
"required": [
"type",
"url"
],
"title": "ImageUserInput",
"type": "object"
},
{
"properties": {
"path": {
"type": "string"
},
"type": {
"enum": [
"localImage"
],
"title": "LocalImageUserInputType",
"type": "string"
}
},
"required": [
"path",
"type"
],
"title": "LocalImageUserInput",
"type": "object"
},
{
"properties": {
"name": {
"type": "string"
},
"path": {
"type": "string"
},
"type": {
"enum": [
"skill"
],
"title": "SkillUserInputType",
"type": "string"
}
},
"required": [
"name",
"path",
"type"
],
"title": "SkillUserInput",
"type": "object"
},
{
"properties": {
"name": {
"type": "string"
},
"path": {
"type": "string"
},
"type": {
"enum": [
"mention"
],
"title": "MentionUserInputType",
"type": "string"
}
},
"required": [
"name",
"path",
"type"
],
"title": "MentionUserInput",
"type": "object"
}
]
}
},
"properties": {
"expectedTurnId": {
"description": "Required active turn id precondition. The request fails when it does not match the currently active turn.",
"type": "string"
},
"input": {
"items": {
"$ref": "#/definitions/UserInput"
},
"type": "array"
},
"threadId": {
"type": "string"
}
},
"required": [
"expectedTurnId",
"input",
"threadId"
],
"title": "TurnSteerParams",
"type": "object"
}

View File

@@ -0,0 +1,13 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"properties": {
"turnId": {
"type": "string"
}
},
"required": [
"turnId"
],
"title": "TurnSteerResponse",
"type": "object"
}
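
A minimal sketch of a `turn/steer` request body mirroring the `TurnSteerParams` and `TurnSteerResponse` schemas above (the generated TypeScript binding is only imported by `ClientRequest` in this diff, so the object is left untyped here). The ids, text, and byte offsets are placeholders, and the example assumes `byteRange.end` is exclusive.

```
// Hypothetical values throughout; `expectedTurnId` must match the currently
// active turn or the request fails.
const steerParams = {
  threadId: "thread-123",
  expectedTurnId: "turn-456",
  input: [
    {
      type: "text",
      text: "Also update the changelog @CHANGELOG.md",
      // Optional UI-defined spans into `text`; offsets index the byte buffer
      // (assumed end-exclusive in this sketch).
      text_elements: [
        { byteRange: { start: 26, end: 39 }, placeholder: "CHANGELOG.md" },
      ],
    },
  ],
};
// TurnSteerResponse carries a single `turnId` string, presumably identifying
// the turn that absorbed the steered input.
```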

View File

@@ -2,5 +2,20 @@
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
import type { AgentMessageContent } from "./AgentMessageContent";
import type { MessagePhase } from "./MessagePhase";
export type AgentMessageItem = { id: string, content: Array<AgentMessageContent>, };
/**
* Assistant-authored message payload used in turn-item streams.
*
* `phase` is optional because not all providers/models emit it. Consumers
* should use it when present, but retain legacy completion semantics when it
* is `None`.
*/
export type AgentMessageItem = { id: string, content: Array<AgentMessageContent>,
/**
* Optional phase metadata carried through from `ResponseItem::Message`.
*
* This is currently used by TUI rendering to distinguish mid-turn
* commentary from a final answer and avoid status-indicator jitter.
*/
phase?: MessagePhase, };

View File

@@ -27,6 +27,7 @@ import type { CommandExecParams } from "./v2/CommandExecParams";
import type { ConfigBatchWriteParams } from "./v2/ConfigBatchWriteParams";
import type { ConfigReadParams } from "./v2/ConfigReadParams";
import type { ConfigValueWriteParams } from "./v2/ConfigValueWriteParams";
import type { ExperimentalFeatureListParams } from "./v2/ExperimentalFeatureListParams";
import type { FeedbackUploadParams } from "./v2/FeedbackUploadParams";
import type { GetAccountParams } from "./v2/GetAccountParams";
import type { ListMcpServerStatusParams } from "./v2/ListMcpServerStatusParams";
@@ -51,8 +52,9 @@ import type { ThreadStartParams } from "./v2/ThreadStartParams";
import type { ThreadUnarchiveParams } from "./v2/ThreadUnarchiveParams";
import type { TurnInterruptParams } from "./v2/TurnInterruptParams";
import type { TurnStartParams } from "./v2/TurnStartParams";
import type { TurnSteerParams } from "./v2/TurnSteerParams";
/**
* Request from the client to the server.
*/
export type ClientRequest ={ "method": "initialize", id: RequestId, params: InitializeParams, } | { "method": "thread/start", id: RequestId, params: ThreadStartParams, } | { "method": "thread/resume", id: RequestId, params: ThreadResumeParams, } | { "method": "thread/fork", id: RequestId, params: ThreadForkParams, } | { "method": "thread/archive", id: RequestId, params: ThreadArchiveParams, } | { "method": "thread/name/set", id: RequestId, params: ThreadSetNameParams, } | { "method": "thread/unarchive", id: RequestId, params: ThreadUnarchiveParams, } | { "method": "thread/compact/start", id: RequestId, params: ThreadCompactStartParams, } | { "method": "thread/rollback", id: RequestId, params: ThreadRollbackParams, } | { "method": "thread/list", id: RequestId, params: ThreadListParams, } | { "method": "thread/loaded/list", id: RequestId, params: ThreadLoadedListParams, } | { "method": "thread/read", id: RequestId, params: ThreadReadParams, } | { "method": "skills/list", id: RequestId, params: SkillsListParams, } | { "method": "skills/remote/read", id: RequestId, params: SkillsRemoteReadParams, } | { "method": "skills/remote/write", id: RequestId, params: SkillsRemoteWriteParams, } | { "method": "app/list", id: RequestId, params: AppsListParams, } | { "method": "skills/config/write", id: RequestId, params: SkillsConfigWriteParams, } | { "method": "turn/start", id: RequestId, params: TurnStartParams, } | { "method": "turn/interrupt", id: RequestId, params: TurnInterruptParams, } | { "method": "review/start", id: RequestId, params: ReviewStartParams, } | { "method": "model/list", id: RequestId, params: ModelListParams, } | { "method": "mcpServer/oauth/login", id: RequestId, params: McpServerOauthLoginParams, } | { "method": "config/mcpServer/reload", id: RequestId, params: undefined, } | { "method": "mcpServerStatus/list", id: RequestId, params: ListMcpServerStatusParams, } | { "method": "account/login/start", id: RequestId, params: LoginAccountParams, } | { "method": "account/login/cancel", id: RequestId, params: CancelLoginAccountParams, } | { "method": "account/logout", id: RequestId, params: undefined, } | { "method": "account/rateLimits/read", id: RequestId, params: undefined, } | { "method": "feedback/upload", id: RequestId, params: FeedbackUploadParams, } | { "method": "command/exec", id: RequestId, params: CommandExecParams, } | { "method": "config/read", id: RequestId, params: ConfigReadParams, } | { "method": "config/value/write", id: RequestId, params: ConfigValueWriteParams, } | { "method": "config/batchWrite", id: RequestId, params: ConfigBatchWriteParams, } | { "method": "configRequirements/read", id: RequestId, params: undefined, } | { "method": "account/read", id: RequestId, params: GetAccountParams, } | { "method": "newConversation", id: RequestId, params: NewConversationParams, } | { "method": "getConversationSummary", id: RequestId, params: GetConversationSummaryParams, } | { "method": "listConversations", id: RequestId, params: ListConversationsParams, } | { "method": "resumeConversation", id: RequestId, params: ResumeConversationParams, } | { "method": "forkConversation", id: RequestId, params: ForkConversationParams, } | { "method": "archiveConversation", id: RequestId, params: ArchiveConversationParams, } | { "method": "sendUserMessage", id: RequestId, params: SendUserMessageParams, } | { "method": "sendUserTurn", id: RequestId, params: SendUserTurnParams, } | { "method": "interruptConversation", id: RequestId, params: InterruptConversationParams, } | { "method": 
"addConversationListener", id: RequestId, params: AddConversationListenerParams, } | { "method": "removeConversationListener", id: RequestId, params: RemoveConversationListenerParams, } | { "method": "gitDiffToRemote", id: RequestId, params: GitDiffToRemoteParams, } | { "method": "loginApiKey", id: RequestId, params: LoginApiKeyParams, } | { "method": "loginChatGpt", id: RequestId, params: undefined, } | { "method": "cancelLoginChatGpt", id: RequestId, params: CancelLoginChatGptParams, } | { "method": "logoutChatGpt", id: RequestId, params: undefined, } | { "method": "getAuthStatus", id: RequestId, params: GetAuthStatusParams, } | { "method": "getUserSavedConfig", id: RequestId, params: undefined, } | { "method": "setDefaultModel", id: RequestId, params: SetDefaultModelParams, } | { "method": "getUserAgent", id: RequestId, params: undefined, } | { "method": "userInfo", id: RequestId, params: undefined, } | { "method": "fuzzyFileSearch", id: RequestId, params: FuzzyFileSearchParams, } | { "method": "execOneOffCommand", id: RequestId, params: ExecOneOffCommandParams, };
export type ClientRequest ={ "method": "initialize", id: RequestId, params: InitializeParams, } | { "method": "thread/start", id: RequestId, params: ThreadStartParams, } | { "method": "thread/resume", id: RequestId, params: ThreadResumeParams, } | { "method": "thread/fork", id: RequestId, params: ThreadForkParams, } | { "method": "thread/archive", id: RequestId, params: ThreadArchiveParams, } | { "method": "thread/name/set", id: RequestId, params: ThreadSetNameParams, } | { "method": "thread/unarchive", id: RequestId, params: ThreadUnarchiveParams, } | { "method": "thread/compact/start", id: RequestId, params: ThreadCompactStartParams, } | { "method": "thread/rollback", id: RequestId, params: ThreadRollbackParams, } | { "method": "thread/list", id: RequestId, params: ThreadListParams, } | { "method": "thread/loaded/list", id: RequestId, params: ThreadLoadedListParams, } | { "method": "thread/read", id: RequestId, params: ThreadReadParams, } | { "method": "skills/list", id: RequestId, params: SkillsListParams, } | { "method": "skills/remote/read", id: RequestId, params: SkillsRemoteReadParams, } | { "method": "skills/remote/write", id: RequestId, params: SkillsRemoteWriteParams, } | { "method": "app/list", id: RequestId, params: AppsListParams, } | { "method": "skills/config/write", id: RequestId, params: SkillsConfigWriteParams, } | { "method": "turn/start", id: RequestId, params: TurnStartParams, } | { "method": "turn/steer", id: RequestId, params: TurnSteerParams, } | { "method": "turn/interrupt", id: RequestId, params: TurnInterruptParams, } | { "method": "review/start", id: RequestId, params: ReviewStartParams, } | { "method": "model/list", id: RequestId, params: ModelListParams, } | { "method": "experimentalFeature/list", id: RequestId, params: ExperimentalFeatureListParams, } | { "method": "mcpServer/oauth/login", id: RequestId, params: McpServerOauthLoginParams, } | { "method": "config/mcpServer/reload", id: RequestId, params: undefined, } | { "method": "mcpServerStatus/list", id: RequestId, params: ListMcpServerStatusParams, } | { "method": "account/login/start", id: RequestId, params: LoginAccountParams, } | { "method": "account/login/cancel", id: RequestId, params: CancelLoginAccountParams, } | { "method": "account/logout", id: RequestId, params: undefined, } | { "method": "account/rateLimits/read", id: RequestId, params: undefined, } | { "method": "feedback/upload", id: RequestId, params: FeedbackUploadParams, } | { "method": "command/exec", id: RequestId, params: CommandExecParams, } | { "method": "config/read", id: RequestId, params: ConfigReadParams, } | { "method": "config/value/write", id: RequestId, params: ConfigValueWriteParams, } | { "method": "config/batchWrite", id: RequestId, params: ConfigBatchWriteParams, } | { "method": "configRequirements/read", id: RequestId, params: undefined, } | { "method": "account/read", id: RequestId, params: GetAccountParams, } | { "method": "newConversation", id: RequestId, params: NewConversationParams, } | { "method": "getConversationSummary", id: RequestId, params: GetConversationSummaryParams, } | { "method": "listConversations", id: RequestId, params: ListConversationsParams, } | { "method": "resumeConversation", id: RequestId, params: ResumeConversationParams, } | { "method": "forkConversation", id: RequestId, params: ForkConversationParams, } | { "method": "archiveConversation", id: RequestId, params: ArchiveConversationParams, } | { "method": "sendUserMessage", id: RequestId, params: SendUserMessageParams, } | { "method": 
"sendUserTurn", id: RequestId, params: SendUserTurnParams, } | { "method": "interruptConversation", id: RequestId, params: InterruptConversationParams, } | { "method": "addConversationListener", id: RequestId, params: AddConversationListenerParams, } | { "method": "removeConversationListener", id: RequestId, params: RemoveConversationListenerParams, } | { "method": "gitDiffToRemote", id: RequestId, params: GitDiffToRemoteParams, } | { "method": "loginApiKey", id: RequestId, params: LoginApiKeyParams, } | { "method": "loginChatGpt", id: RequestId, params: undefined, } | { "method": "cancelLoginChatGpt", id: RequestId, params: CancelLoginChatGptParams, } | { "method": "logoutChatGpt", id: RequestId, params: undefined, } | { "method": "getAuthStatus", id: RequestId, params: GetAuthStatusParams, } | { "method": "getUserSavedConfig", id: RequestId, params: undefined, } | { "method": "setDefaultModel", id: RequestId, params: SetDefaultModelParams, } | { "method": "getUserAgent", id: RequestId, params: undefined, } | { "method": "userInfo", id: RequestId, params: undefined, } | { "method": "fuzzyFileSearch", id: RequestId, params: FuzzyFileSearchParams, } | { "method": "execOneOffCommand", id: RequestId, params: ExecOneOffCommandParams, };

View File

@@ -0,0 +1,18 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
import type { ThreadId } from "./ThreadId";
export type CollabResumeBeginEvent = {
/**
* Identifier for the collab tool call.
*/
call_id: string,
/**
* Thread ID of the sender.
*/
sender_thread_id: ThreadId,
/**
* Thread ID of the receiver.
*/
receiver_thread_id: ThreadId, };

View File

@@ -0,0 +1,24 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
import type { AgentStatus } from "./AgentStatus";
import type { ThreadId } from "./ThreadId";
export type CollabResumeEndEvent = {
/**
* Identifier for the collab tool call.
*/
call_id: string,
/**
* Thread ID of the sender.
*/
sender_thread_id: ThreadId,
/**
* Thread ID of the receiver.
*/
receiver_thread_id: ThreadId,
/**
* Last known status of the receiver agent reported to the sender agent after
* resume.
*/
status: AgentStatus, };
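
A short sketch, assuming it sits alongside the generated `EventMsg` bindings, of narrowing the two new collab resume variants; the returned strings are illustrative only.

```
import type { EventMsg } from "./EventMsg";

function describeCollabResume(event: EventMsg): string | null {
  if (event.type === "collab_resume_begin") {
    return `resume ${event.call_id}: ${event.sender_thread_id} -> ${event.receiver_thread_id}`;
  }
  if (event.type === "collab_resume_end") {
    // `status` is the receiver's last known AgentStatus after the resume.
    return `resume ${event.call_id} finished with status ${event.status}`;
  }
  return null; // not a collab resume event
}
```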

View File

@@ -17,6 +17,8 @@ import type { CollabAgentSpawnBeginEvent } from "./CollabAgentSpawnBeginEvent";
import type { CollabAgentSpawnEndEvent } from "./CollabAgentSpawnEndEvent";
import type { CollabCloseBeginEvent } from "./CollabCloseBeginEvent";
import type { CollabCloseEndEvent } from "./CollabCloseEndEvent";
import type { CollabResumeBeginEvent } from "./CollabResumeBeginEvent";
import type { CollabResumeEndEvent } from "./CollabResumeEndEvent";
import type { CollabWaitingBeginEvent } from "./CollabWaitingBeginEvent";
import type { CollabWaitingEndEvent } from "./CollabWaitingEndEvent";
import type { ContextCompactedEvent } from "./ContextCompactedEvent";
@@ -72,4 +74,4 @@ import type { WebSearchEndEvent } from "./WebSearchEndEvent";
* Response event from the agent
* NOTE: Make sure none of these values have optional types, as it will mess up the extension code-gen.
*/
export type EventMsg = { "type": "error" } & ErrorEvent | { "type": "warning" } & WarningEvent | { "type": "context_compacted" } & ContextCompactedEvent | { "type": "thread_rolled_back" } & ThreadRolledBackEvent | { "type": "task_started" } & TurnStartedEvent | { "type": "task_complete" } & TurnCompleteEvent | { "type": "token_count" } & TokenCountEvent | { "type": "agent_message" } & AgentMessageEvent | { "type": "user_message" } & UserMessageEvent | { "type": "agent_message_delta" } & AgentMessageDeltaEvent | { "type": "agent_reasoning" } & AgentReasoningEvent | { "type": "agent_reasoning_delta" } & AgentReasoningDeltaEvent | { "type": "agent_reasoning_raw_content" } & AgentReasoningRawContentEvent | { "type": "agent_reasoning_raw_content_delta" } & AgentReasoningRawContentDeltaEvent | { "type": "agent_reasoning_section_break" } & AgentReasoningSectionBreakEvent | { "type": "session_configured" } & SessionConfiguredEvent | { "type": "thread_name_updated" } & ThreadNameUpdatedEvent | { "type": "mcp_startup_update" } & McpStartupUpdateEvent | { "type": "mcp_startup_complete" } & McpStartupCompleteEvent | { "type": "mcp_tool_call_begin" } & McpToolCallBeginEvent | { "type": "mcp_tool_call_end" } & McpToolCallEndEvent | { "type": "web_search_begin" } & WebSearchBeginEvent | { "type": "web_search_end" } & WebSearchEndEvent | { "type": "exec_command_begin" } & ExecCommandBeginEvent | { "type": "exec_command_output_delta" } & ExecCommandOutputDeltaEvent | { "type": "terminal_interaction" } & TerminalInteractionEvent | { "type": "exec_command_end" } & ExecCommandEndEvent | { "type": "view_image_tool_call" } & ViewImageToolCallEvent | { "type": "exec_approval_request" } & ExecApprovalRequestEvent | { "type": "request_user_input" } & RequestUserInputEvent | { "type": "dynamic_tool_call_request" } & DynamicToolCallRequest | { "type": "elicitation_request" } & ElicitationRequestEvent | { "type": "apply_patch_approval_request" } & ApplyPatchApprovalRequestEvent | { "type": "deprecation_notice" } & DeprecationNoticeEvent | { "type": "background_event" } & BackgroundEventEvent | { "type": "undo_started" } & UndoStartedEvent | { "type": "undo_completed" } & UndoCompletedEvent | { "type": "stream_error" } & StreamErrorEvent | { "type": "patch_apply_begin" } & PatchApplyBeginEvent | { "type": "patch_apply_end" } & PatchApplyEndEvent | { "type": "turn_diff" } & TurnDiffEvent | { "type": "get_history_entry_response" } & GetHistoryEntryResponseEvent | { "type": "mcp_list_tools_response" } & McpListToolsResponseEvent | { "type": "list_custom_prompts_response" } & ListCustomPromptsResponseEvent | { "type": "list_skills_response" } & ListSkillsResponseEvent | { "type": "list_remote_skills_response" } & ListRemoteSkillsResponseEvent | { "type": "remote_skill_downloaded" } & RemoteSkillDownloadedEvent | { "type": "skills_update_available" } | { "type": "plan_update" } & UpdatePlanArgs | { "type": "turn_aborted" } & TurnAbortedEvent | { "type": "shutdown_complete" } | { "type": "entered_review_mode" } & ReviewRequest | { "type": "exited_review_mode" } & ExitedReviewModeEvent | { "type": "raw_response_item" } & RawResponseItemEvent | { "type": "item_started" } & ItemStartedEvent | { "type": "item_completed" } & ItemCompletedEvent | { "type": "agent_message_content_delta" } & AgentMessageContentDeltaEvent | { "type": "plan_delta" } & PlanDeltaEvent | { "type": "reasoning_content_delta" } & ReasoningContentDeltaEvent | { "type": "reasoning_raw_content_delta" } & ReasoningRawContentDeltaEvent | { "type": 
"collab_agent_spawn_begin" } & CollabAgentSpawnBeginEvent | { "type": "collab_agent_spawn_end" } & CollabAgentSpawnEndEvent | { "type": "collab_agent_interaction_begin" } & CollabAgentInteractionBeginEvent | { "type": "collab_agent_interaction_end" } & CollabAgentInteractionEndEvent | { "type": "collab_waiting_begin" } & CollabWaitingBeginEvent | { "type": "collab_waiting_end" } & CollabWaitingEndEvent | { "type": "collab_close_begin" } & CollabCloseBeginEvent | { "type": "collab_close_end" } & CollabCloseEndEvent;
export type EventMsg = { "type": "error" } & ErrorEvent | { "type": "warning" } & WarningEvent | { "type": "context_compacted" } & ContextCompactedEvent | { "type": "thread_rolled_back" } & ThreadRolledBackEvent | { "type": "task_started" } & TurnStartedEvent | { "type": "task_complete" } & TurnCompleteEvent | { "type": "token_count" } & TokenCountEvent | { "type": "agent_message" } & AgentMessageEvent | { "type": "user_message" } & UserMessageEvent | { "type": "agent_message_delta" } & AgentMessageDeltaEvent | { "type": "agent_reasoning" } & AgentReasoningEvent | { "type": "agent_reasoning_delta" } & AgentReasoningDeltaEvent | { "type": "agent_reasoning_raw_content" } & AgentReasoningRawContentEvent | { "type": "agent_reasoning_raw_content_delta" } & AgentReasoningRawContentDeltaEvent | { "type": "agent_reasoning_section_break" } & AgentReasoningSectionBreakEvent | { "type": "session_configured" } & SessionConfiguredEvent | { "type": "thread_name_updated" } & ThreadNameUpdatedEvent | { "type": "mcp_startup_update" } & McpStartupUpdateEvent | { "type": "mcp_startup_complete" } & McpStartupCompleteEvent | { "type": "mcp_tool_call_begin" } & McpToolCallBeginEvent | { "type": "mcp_tool_call_end" } & McpToolCallEndEvent | { "type": "web_search_begin" } & WebSearchBeginEvent | { "type": "web_search_end" } & WebSearchEndEvent | { "type": "exec_command_begin" } & ExecCommandBeginEvent | { "type": "exec_command_output_delta" } & ExecCommandOutputDeltaEvent | { "type": "terminal_interaction" } & TerminalInteractionEvent | { "type": "exec_command_end" } & ExecCommandEndEvent | { "type": "view_image_tool_call" } & ViewImageToolCallEvent | { "type": "exec_approval_request" } & ExecApprovalRequestEvent | { "type": "request_user_input" } & RequestUserInputEvent | { "type": "dynamic_tool_call_request" } & DynamicToolCallRequest | { "type": "elicitation_request" } & ElicitationRequestEvent | { "type": "apply_patch_approval_request" } & ApplyPatchApprovalRequestEvent | { "type": "deprecation_notice" } & DeprecationNoticeEvent | { "type": "background_event" } & BackgroundEventEvent | { "type": "undo_started" } & UndoStartedEvent | { "type": "undo_completed" } & UndoCompletedEvent | { "type": "stream_error" } & StreamErrorEvent | { "type": "patch_apply_begin" } & PatchApplyBeginEvent | { "type": "patch_apply_end" } & PatchApplyEndEvent | { "type": "turn_diff" } & TurnDiffEvent | { "type": "get_history_entry_response" } & GetHistoryEntryResponseEvent | { "type": "mcp_list_tools_response" } & McpListToolsResponseEvent | { "type": "list_custom_prompts_response" } & ListCustomPromptsResponseEvent | { "type": "list_skills_response" } & ListSkillsResponseEvent | { "type": "list_remote_skills_response" } & ListRemoteSkillsResponseEvent | { "type": "remote_skill_downloaded" } & RemoteSkillDownloadedEvent | { "type": "skills_update_available" } | { "type": "plan_update" } & UpdatePlanArgs | { "type": "turn_aborted" } & TurnAbortedEvent | { "type": "shutdown_complete" } | { "type": "entered_review_mode" } & ReviewRequest | { "type": "exited_review_mode" } & ExitedReviewModeEvent | { "type": "raw_response_item" } & RawResponseItemEvent | { "type": "item_started" } & ItemStartedEvent | { "type": "item_completed" } & ItemCompletedEvent | { "type": "agent_message_content_delta" } & AgentMessageContentDeltaEvent | { "type": "plan_delta" } & PlanDeltaEvent | { "type": "reasoning_content_delta" } & ReasoningContentDeltaEvent | { "type": "reasoning_raw_content_delta" } & ReasoningRawContentDeltaEvent | { "type": 
"collab_agent_spawn_begin" } & CollabAgentSpawnBeginEvent | { "type": "collab_agent_spawn_end" } & CollabAgentSpawnEndEvent | { "type": "collab_agent_interaction_begin" } & CollabAgentInteractionBeginEvent | { "type": "collab_agent_interaction_end" } & CollabAgentInteractionEndEvent | { "type": "collab_waiting_begin" } & CollabWaitingBeginEvent | { "type": "collab_waiting_end" } & CollabWaitingEndEvent | { "type": "collab_close_begin" } & CollabCloseBeginEvent | { "type": "collab_close_end" } & CollabCloseEndEvent | { "type": "collab_resume_begin" } & CollabResumeBeginEvent | { "type": "collab_resume_end" } & CollabResumeEndEvent;

View File

@@ -2,4 +2,10 @@
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
/**
* Classifies an assistant message as interim commentary or final answer text.
*
* Providers do not emit this consistently, so callers must treat `None` as
* "phase unknown" and keep compatibility behavior for legacy models.
*/
export type MessagePhase = "commentary" | "final_answer";
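
A sketch of the consumer-side rule the doc comment spells out: use `phase` when present and keep legacy completion semantics when it is missing. The `render*` helpers are hypothetical placeholders for whatever the UI actually does.

```
import type { AgentMessageItem } from "./AgentMessageItem";

function handleAgentMessage(item: AgentMessageItem): void {
  switch (item.phase) {
    case "commentary":
      // Mid-turn narration: more tool calls or output may still follow.
      renderCommentary(item);
      break;
    case "final_answer":
      // Terminal answer text for the current turn.
      renderFinalAnswer(item);
      break;
    default:
      // `phase` missing means the provider did not emit it; fall back to the
      // legacy completion behavior rather than guessing.
      renderWithLegacySemantics(item);
  }
}

declare function renderCommentary(item: AgentMessageItem): void;
declare function renderFinalAnswer(item: AgentMessageItem): void;
declare function renderWithLegacySemantics(item: AgentMessageItem): void;
```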

View File

@@ -8,6 +8,7 @@ import type { AccountLoginCompletedNotification } from "./v2/AccountLoginComplet
import type { AccountRateLimitsUpdatedNotification } from "./v2/AccountRateLimitsUpdatedNotification";
import type { AccountUpdatedNotification } from "./v2/AccountUpdatedNotification";
import type { AgentMessageDeltaNotification } from "./v2/AgentMessageDeltaNotification";
import type { AppListUpdatedNotification } from "./v2/AppListUpdatedNotification";
import type { CommandExecutionOutputDeltaNotification } from "./v2/CommandExecutionOutputDeltaNotification";
import type { ConfigWarningNotification } from "./v2/ConfigWarningNotification";
import type { ContextCompactedNotification } from "./v2/ContextCompactedNotification";
@@ -36,4 +37,4 @@ import type { WindowsWorldWritableWarningNotification } from "./v2/WindowsWorldW
/**
* Notification sent from the server to the client.
*/
export type ServerNotification = { "method": "error", "params": ErrorNotification } | { "method": "thread/started", "params": ThreadStartedNotification } | { "method": "thread/name/updated", "params": ThreadNameUpdatedNotification } | { "method": "thread/tokenUsage/updated", "params": ThreadTokenUsageUpdatedNotification } | { "method": "turn/started", "params": TurnStartedNotification } | { "method": "turn/completed", "params": TurnCompletedNotification } | { "method": "turn/diff/updated", "params": TurnDiffUpdatedNotification } | { "method": "turn/plan/updated", "params": TurnPlanUpdatedNotification } | { "method": "item/started", "params": ItemStartedNotification } | { "method": "item/completed", "params": ItemCompletedNotification } | { "method": "rawResponseItem/completed", "params": RawResponseItemCompletedNotification } | { "method": "item/agentMessage/delta", "params": AgentMessageDeltaNotification } | { "method": "item/plan/delta", "params": PlanDeltaNotification } | { "method": "item/commandExecution/outputDelta", "params": CommandExecutionOutputDeltaNotification } | { "method": "item/commandExecution/terminalInteraction", "params": TerminalInteractionNotification } | { "method": "item/fileChange/outputDelta", "params": FileChangeOutputDeltaNotification } | { "method": "item/mcpToolCall/progress", "params": McpToolCallProgressNotification } | { "method": "mcpServer/oauthLogin/completed", "params": McpServerOauthLoginCompletedNotification } | { "method": "account/updated", "params": AccountUpdatedNotification } | { "method": "account/rateLimits/updated", "params": AccountRateLimitsUpdatedNotification } | { "method": "item/reasoning/summaryTextDelta", "params": ReasoningSummaryTextDeltaNotification } | { "method": "item/reasoning/summaryPartAdded", "params": ReasoningSummaryPartAddedNotification } | { "method": "item/reasoning/textDelta", "params": ReasoningTextDeltaNotification } | { "method": "thread/compacted", "params": ContextCompactedNotification } | { "method": "deprecationNotice", "params": DeprecationNoticeNotification } | { "method": "configWarning", "params": ConfigWarningNotification } | { "method": "windows/worldWritableWarning", "params": WindowsWorldWritableWarningNotification } | { "method": "account/login/completed", "params": AccountLoginCompletedNotification } | { "method": "authStatusChange", "params": AuthStatusChangeNotification } | { "method": "loginChatGptComplete", "params": LoginChatGptCompleteNotification } | { "method": "sessionConfigured", "params": SessionConfiguredNotification };
export type ServerNotification = { "method": "error", "params": ErrorNotification } | { "method": "thread/started", "params": ThreadStartedNotification } | { "method": "thread/name/updated", "params": ThreadNameUpdatedNotification } | { "method": "thread/tokenUsage/updated", "params": ThreadTokenUsageUpdatedNotification } | { "method": "turn/started", "params": TurnStartedNotification } | { "method": "turn/completed", "params": TurnCompletedNotification } | { "method": "turn/diff/updated", "params": TurnDiffUpdatedNotification } | { "method": "turn/plan/updated", "params": TurnPlanUpdatedNotification } | { "method": "item/started", "params": ItemStartedNotification } | { "method": "item/completed", "params": ItemCompletedNotification } | { "method": "rawResponseItem/completed", "params": RawResponseItemCompletedNotification } | { "method": "item/agentMessage/delta", "params": AgentMessageDeltaNotification } | { "method": "item/plan/delta", "params": PlanDeltaNotification } | { "method": "item/commandExecution/outputDelta", "params": CommandExecutionOutputDeltaNotification } | { "method": "item/commandExecution/terminalInteraction", "params": TerminalInteractionNotification } | { "method": "item/fileChange/outputDelta", "params": FileChangeOutputDeltaNotification } | { "method": "item/mcpToolCall/progress", "params": McpToolCallProgressNotification } | { "method": "mcpServer/oauthLogin/completed", "params": McpServerOauthLoginCompletedNotification } | { "method": "account/updated", "params": AccountUpdatedNotification } | { "method": "account/rateLimits/updated", "params": AccountRateLimitsUpdatedNotification } | { "method": "app/list/updated", "params": AppListUpdatedNotification } | { "method": "item/reasoning/summaryTextDelta", "params": ReasoningSummaryTextDeltaNotification } | { "method": "item/reasoning/summaryPartAdded", "params": ReasoningSummaryPartAddedNotification } | { "method": "item/reasoning/textDelta", "params": ReasoningTextDeltaNotification } | { "method": "thread/compacted", "params": ContextCompactedNotification } | { "method": "deprecationNotice", "params": DeprecationNoticeNotification } | { "method": "configWarning", "params": ConfigWarningNotification } | { "method": "windows/worldWritableWarning", "params": WindowsWorldWritableWarningNotification } | { "method": "account/login/completed", "params": AccountLoginCompletedNotification } | { "method": "authStatusChange", "params": AuthStatusChangeNotification } | { "method": "loginChatGptComplete", "params": LoginChatGptCompleteNotification } | { "method": "sessionConfigured", "params": SessionConfiguredNotification };

View File

@@ -5,6 +5,7 @@ import type { AskForApproval } from "./AskForApproval";
import type { EventMsg } from "./EventMsg";
import type { ReasoningEffort } from "./ReasoningEffort";
import type { SandboxPolicy } from "./SandboxPolicy";
import type { SessionNetworkProxyRuntime } from "./SessionNetworkProxyRuntime";
import type { ThreadId } from "./ThreadId";
export type SessionConfiguredEvent = { session_id: ThreadId, forked_from_id: ThreadId | null,
@@ -46,6 +47,10 @@ history_entry_count: number,
* When present, UIs can use these to seed the history.
*/
initial_messages: Array<EventMsg> | null,
/**
* Runtime proxy bind addresses, when the managed proxy was started for this session.
*/
network_proxy?: SessionNetworkProxyRuntime,
/**
* Path in which the rollout is stored. Can be `None` for ephemeral threads
*/

View File

@@ -0,0 +1,5 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
export type SessionNetworkProxyRuntime = { http_addr: string, socks_addr: string, admin_addr: string, };
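
A sketch of how a client might turn the optional `network_proxy` field on `SessionConfiguredEvent` into proxy environment variables for child processes; the `http://`/`socks5://` schemes and the variable names are assumptions, not part of the protocol.

```
import type { SessionConfiguredEvent } from "./SessionConfiguredEvent";
import type { SessionNetworkProxyRuntime } from "./SessionNetworkProxyRuntime";

function toProxyEnv(proxy: SessionNetworkProxyRuntime): Record<string, string> {
  // http_addr / socks_addr are runtime bind addresses such as "127.0.0.1:3128";
  // the URL schemes below are assumptions for illustration.
  return {
    HTTP_PROXY: `http://${proxy.http_addr}`,
    HTTPS_PROXY: `http://${proxy.http_addr}`,
    ALL_PROXY: `socks5://${proxy.socks_addr}`,
  };
}

function onSessionConfigured(event: SessionConfiguredEvent): Record<string, string> {
  // `network_proxy` is only present when the managed proxy was started for
  // this session, so fall back to an empty environment otherwise.
  return event.network_proxy ? toProxyEnv(event.network_proxy) : {};
}
```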

View File

@@ -37,6 +37,8 @@ export type { CollabAgentSpawnBeginEvent } from "./CollabAgentSpawnBeginEvent";
export type { CollabAgentSpawnEndEvent } from "./CollabAgentSpawnEndEvent";
export type { CollabCloseBeginEvent } from "./CollabCloseBeginEvent";
export type { CollabCloseEndEvent } from "./CollabCloseEndEvent";
export type { CollabResumeBeginEvent } from "./CollabResumeBeginEvent";
export type { CollabResumeEndEvent } from "./CollabResumeEndEvent";
export type { CollabWaitingBeginEvent } from "./CollabWaitingBeginEvent";
export type { CollabWaitingEndEvent } from "./CollabWaitingEndEvent";
export type { CollaborationMode } from "./CollaborationMode";
@@ -173,6 +175,7 @@ export type { ServerNotification } from "./ServerNotification";
export type { ServerRequest } from "./ServerRequest";
export type { SessionConfiguredEvent } from "./SessionConfiguredEvent";
export type { SessionConfiguredNotification } from "./SessionConfiguredNotification";
export type { SessionNetworkProxyRuntime } from "./SessionNetworkProxyRuntime";
export type { SessionSource } from "./SessionSource";
export type { SetDefaultModelParams } from "./SetDefaultModelParams";
export type { SetDefaultModelResponse } from "./SetDefaultModelResponse";

View File

@@ -0,0 +1,5 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
export type AppDisabledReason = "unknown" | "user";

View File

@@ -2,4 +2,7 @@
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
/**
* EXPERIMENTAL - app metadata returned by app-list APIs.
*/
export type AppInfo = { id: string, name: string, description: string | null, logoUrl: string | null, logoUrlDark: string | null, distributionChannel: string | null, installUrl: string | null, isAccessible: boolean, };

View File

@@ -0,0 +1,9 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
import type { AppInfo } from "./AppInfo";
/**
* EXPERIMENTAL - notification emitted when the app list changes.
*/
export type AppListUpdatedNotification = { data: Array<AppInfo>, };

View File

@@ -0,0 +1,6 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
import type { AppDisabledReason } from "./AppDisabledReason";
export type AppsConfig = { [key in string]?: { enabled: boolean, disabled_reason: AppDisabledReason | null, } };
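
A small sketch of the per-app enablement map; the app ids and values are placeholders.

```
import type { AppsConfig } from "./AppsConfig";

const appsConfig: AppsConfig = {
  calendar: { enabled: true, disabled_reason: null },
  // Disabled by the user, so the reason is recorded alongside the flag.
  drive: { enabled: false, disabled_reason: "user" },
};
```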

View File

@@ -2,6 +2,9 @@
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
/**
* EXPERIMENTAL - list available apps/connectors.
*/
export type AppsListParams = {
/**
* Opaque pagination cursor returned by a previous call.
@@ -10,4 +13,12 @@ cursor?: string | null,
/**
* Optional page size; defaults to a reasonable server-side value.
*/
limit?: number | null, };
limit?: number | null,
/**
* Optional thread id used to evaluate app feature gating from that thread's config.
*/
threadId?: string | null,
/**
* When true, bypass app caches and fetch the latest data from sources.
*/
forceRefetch?: boolean, };
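
A sketch, assuming it lives next to the generated v2 bindings, of building `app/list` params and reacting to the new `app/list/updated` notification; the thread id and logging are illustrative.

```
import type { AppsListParams } from "./AppsListParams";
import type { AppListUpdatedNotification } from "./AppListUpdatedNotification";

// Params for an `app/list` request (see the ClientRequest union in this diff).
const params: AppsListParams = {
  limit: 50,              // optional page size
  threadId: "thread-123", // evaluate app feature gating from this thread's config
  forceRefetch: true,     // bypass app caches and refetch from sources
};

// The `app/list/updated` notification pushes the refreshed list in `data`.
function onAppListUpdated(notification: AppListUpdatedNotification): void {
  for (const app of notification.data) {
    console.log(`${app.id}: ${app.name} (accessible=${app.isAccessible})`);
  }
}
```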

View File

@@ -3,6 +3,9 @@
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
import type { AppInfo } from "./AppInfo";
/**
* EXPERIMENTAL - app list response.
*/
export type AppsListResponse = { data: Array<AppInfo>,
/**
* Opaque cursor to pass to the next call to continue after the last item.

View File

@@ -10,7 +10,7 @@ export type ChatgptAuthTokensRefreshParams = { reason: ChatgptAuthTokensRefreshR
* Clients that manage multiple accounts/workspaces can use this as a hint
* to refresh the token for the correct workspace.
*
* This may be `null` when the prior ID token did not include a workspace
* identifier (`chatgpt_account_id`) or when the token could not be parsed.
* This may be `null` when the prior auth state did not include a workspace
* identifier (`chatgpt_account_id`).
*/
previousAccountId?: string | null, };

View File

@@ -2,4 +2,4 @@
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
export type ChatgptAuthTokensRefreshResponse = { idToken: string, accessToken: string, };
export type ChatgptAuthTokensRefreshResponse = { accessToken: string, chatgptAccountId: string, chatgptPlanType: string | null, };

View File

@@ -2,4 +2,4 @@
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
export type CollabAgentTool = "spawnAgent" | "sendInput" | "wait" | "closeAgent";
export type CollabAgentTool = "spawnAgent" | "sendInput" | "resumeAgent" | "wait" | "closeAgent";

View File

@@ -14,4 +14,4 @@ import type { SandboxMode } from "./SandboxMode";
import type { SandboxWorkspaceWrite } from "./SandboxWorkspaceWrite";
import type { ToolsV2 } from "./ToolsV2";
export type Config = { model: string | null, review_model: string | null, model_context_window: bigint | null, model_auto_compact_token_limit: bigint | null, model_provider: string | null, approval_policy: AskForApproval | null, sandbox_mode: SandboxMode | null, sandbox_workspace_write: SandboxWorkspaceWrite | null, forced_chatgpt_workspace_id: string | null, forced_login_method: ForcedLoginMethod | null, web_search: WebSearchMode | null, tools: ToolsV2 | null, profile: string | null, profiles: { [key in string]?: ProfileV2 }, instructions: string | null, developer_instructions: string | null, compact_prompt: string | null, model_reasoning_effort: ReasoningEffort | null, model_reasoning_summary: ReasoningSummary | null, model_verbosity: Verbosity | null, analytics: AnalyticsConfig | null, } & ({ [key in string]?: number | string | boolean | Array<JsonValue> | { [key in string]?: JsonValue } | null });
export type Config = {model: string | null, review_model: string | null, model_context_window: bigint | null, model_auto_compact_token_limit: bigint | null, model_provider: string | null, approval_policy: AskForApproval | null, sandbox_mode: SandboxMode | null, sandbox_workspace_write: SandboxWorkspaceWrite | null, forced_chatgpt_workspace_id: string | null, forced_login_method: ForcedLoginMethod | null, web_search: WebSearchMode | null, tools: ToolsV2 | null, profile: string | null, profiles: { [key in string]?: ProfileV2 }, instructions: string | null, developer_instructions: string | null, compact_prompt: string | null, model_reasoning_effort: ReasoningEffort | null, model_reasoning_summary: ReasoningSummary | null, model_verbosity: Verbosity | null, analytics: AnalyticsConfig | null} & ({ [key in string]?: number | string | boolean | Array<JsonValue> | { [key in string]?: JsonValue } | null });

View File

@@ -1,8 +1,9 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
import type { WebSearchMode } from "../WebSearchMode";
import type { AskForApproval } from "./AskForApproval";
import type { ResidencyRequirement } from "./ResidencyRequirement";
import type { SandboxMode } from "./SandboxMode";
export type ConfigRequirements = { allowedApprovalPolicies: Array<AskForApproval> | null, allowedSandboxModes: Array<SandboxMode> | null, enforceResidency: ResidencyRequirement | null, };
export type ConfigRequirements = {allowedApprovalPolicies: Array<AskForApproval> | null, allowedSandboxModes: Array<SandboxMode> | null, allowedWebSearchModes: Array<WebSearchMode> | null, enforceResidency: ResidencyRequirement | null};
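
A sketch of a requirements object using the new `allowedWebSearchModes` list; the chosen policies are illustrative and assume the v2 `AskForApproval` binding is the usual string union.

```
import type { ConfigRequirements } from "./ConfigRequirements";

const requirements: ConfigRequirements = {
  allowedApprovalPolicies: ["untrusted"],
  allowedSandboxModes: null,                       // no sandbox restriction
  allowedWebSearchModes: ["disabled", "cached"],   // "live" is not permitted
  enforceResidency: null,
};
```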

View File

@@ -0,0 +1,37 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
import type { ExperimentalFeatureStage } from "./ExperimentalFeatureStage";
export type ExperimentalFeature = {
/**
* Stable key used in config.toml and CLI flag toggles.
*/
name: string,
/**
* Lifecycle stage of this feature flag.
*/
stage: ExperimentalFeatureStage,
/**
* User-facing display name shown in the experimental features UI.
* Null when this feature is not in beta.
*/
displayName: string | null,
/**
* Short summary describing what the feature does.
* Null when this feature is not in beta.
*/
description: string | null,
/**
* Announcement copy shown to users when the feature is introduced.
* Null when this feature is not in beta.
*/
announcement: string | null,
/**
* Whether this feature is currently enabled in the loaded config.
*/
enabled: boolean,
/**
* Whether this feature is enabled by default.
*/
defaultEnabled: boolean, };

View File

@@ -0,0 +1,13 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
export type ExperimentalFeatureListParams = {
/**
* Opaque pagination cursor returned by a previous call.
*/
cursor?: string | null,
/**
* Optional page size; defaults to a reasonable server-side value.
*/
limit?: number | null, };

View File

@@ -0,0 +1,11 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
import type { ExperimentalFeature } from "./ExperimentalFeature";
export type ExperimentalFeatureListResponse = { data: Array<ExperimentalFeature>,
/**
* Opaque cursor to pass to the next call to continue after the last item.
* If None, there are no more items to return.
*/
nextCursor: string | null, };
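
A sketch of paging through `experimentalFeature/list` with the cursor/nextCursor pair; `sendRequest` is a hypothetical transport helper, and the method string matches the `ClientRequest` union in this diff.

```
import type { ExperimentalFeature } from "./ExperimentalFeature";
import type { ExperimentalFeatureListParams } from "./ExperimentalFeatureListParams";
import type { ExperimentalFeatureListResponse } from "./ExperimentalFeatureListResponse";

declare function sendRequest(
  method: "experimentalFeature/list",
  params: ExperimentalFeatureListParams,
): Promise<ExperimentalFeatureListResponse>;

async function listBetaFeatures(): Promise<ExperimentalFeature[]> {
  const features: ExperimentalFeature[] = [];
  let cursor: string | null = null;
  do {
    const page = await sendRequest("experimentalFeature/list", { cursor, limit: 100 });
    features.push(...page.data);
    cursor = page.nextCursor; // null when there are no more items
  } while (cursor !== null);
  // Only beta-stage features carry displayName/description/announcement copy.
  return features.filter((f) => f.stage === "beta");
}
```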

View File

@@ -0,0 +1,5 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
export type ExperimentalFeatureStage = "beta" | "underDevelopment" | "stable" | "deprecated" | "removed";

View File

@@ -3,15 +3,19 @@
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
export type LoginAccountParams = { "type": "apiKey", apiKey: string, } | { "type": "chatgpt" } | { "type": "chatgptAuthTokens",
/**
* ID token (JWT) supplied by the client.
*
* This token is used for identity and account metadata (email, plan type,
* workspace id).
*/
idToken: string,
/**
* Access token (JWT) supplied by the client.
* This token is used for backend API requests.
* This token is used for backend API requests and email extraction.
*/
accessToken: string, };
accessToken: string,
/**
* Workspace/account identifier supplied by the client.
*/
chatgptAccountId: string,
/**
* Optional plan type supplied by the client.
*
* When `null`, Codex attempts to derive the plan type from access-token
* claims. If unavailable, the plan defaults to `unknown`.
*/
chatgptPlanType?: string | null, };
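
A sketch of the revised `chatgptAuthTokens` login variant, which now takes `chatgptAccountId` instead of an ID token; the token and workspace values are placeholders.

```
import type { LoginAccountParams } from "./LoginAccountParams";

const login: LoginAccountParams = {
  type: "chatgptAuthTokens",
  accessToken: "<jwt-access-token>",  // used for backend API requests and email extraction
  chatgptAccountId: "<workspace-id>", // workspace/account identifier, now required
  // chatgptPlanType omitted: Codex derives the plan type from access-token
  // claims and defaults to `unknown` when that is unavailable.
};
```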

View File

@@ -0,0 +1,5 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
export type NetworkRequirements = { enabled: boolean | null, httpPort: number | null, socksPort: number | null, allowUpstreamProxy: boolean | null, dangerouslyAllowNonLoopbackProxy: boolean | null, dangerouslyAllowNonLoopbackAdmin: boolean | null, allowedDomains: Array<string> | null, deniedDomains: Array<string> | null, allowUnixSockets: Array<string> | null, allowLocalBinding: boolean | null, };
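
A sketch of a fully spelled-out `NetworkRequirements` value; every field is nullable rather than optional, so unspecified aspects are set to `null` explicitly. The ports and domains are placeholders.

```
import type { NetworkRequirements } from "./NetworkRequirements";

const requirements: NetworkRequirements = {
  enabled: true,
  httpPort: 3128,
  socksPort: 1080,
  allowUpstreamProxy: false,
  dangerouslyAllowNonLoopbackProxy: null,
  dangerouslyAllowNonLoopbackAdmin: null,
  allowedDomains: ["api.example.com"],
  deniedDomains: null,
  allowUnixSockets: null,
  allowLocalBinding: false,
};
```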

View File

@@ -0,0 +1,5 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
export type SkillsListExtraRootsForCwd = { cwd: string, extraUserRoots: Array<string>, };

View File

@@ -1,6 +1,7 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
import type { SkillsListExtraRootsForCwd } from "./SkillsListExtraRootsForCwd";
export type SkillsListParams = {
/**
@@ -10,4 +11,8 @@ cwds?: Array<string>,
/**
* When true, bypass the skills cache and re-scan skills from disk.
*/
forceReload?: boolean, };
forceReload?: boolean,
/**
* Optional per-cwd extra roots to scan as user-scoped skills.
*/
perCwdExtraUserRoots?: Array<SkillsListExtraRootsForCwd> | null, };
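
A sketch of a `skills/list` request that uses the new `perCwdExtraUserRoots` field; the paths are placeholders.

```
import type { SkillsListParams } from "./SkillsListParams";

const params: SkillsListParams = {
  cwds: ["/work/repo"], // empty defaults to the current session working directory
  forceReload: true,    // bypass the skills cache and re-scan from disk
  perCwdExtraUserRoots: [
    // Scan an extra user-scoped root for this cwd.
    { cwd: "/work/repo", extraUserRoots: ["/work/shared-skills"] },
  ],
};
```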

View File

@@ -14,13 +14,11 @@ import type { SandboxMode } from "./SandboxMode";
*
* Prefer using thread_id whenever possible.
*/
export type ThreadForkParams = { threadId: string,
/**
export type ThreadForkParams = {threadId: string, /**
* [UNSTABLE] Specify the rollout path to fork from.
* If specified, the thread_id param will be ignored.
*/
path?: string | null,
/**
path?: string | null, /**
* Configuration overrides for the forked thread, if any.
*/
model?: string | null, modelProvider?: string | null, cwd?: string | null, approvalPolicy?: AskForApproval | null, sandbox?: SandboxMode | null, config?: { [key in string]?: JsonValue } | null, baseInstructions?: string | null, developerInstructions?: string | null, };
model?: string | null, modelProvider?: string | null, cwd?: string | null, approvalPolicy?: AskForApproval | null, sandbox?: SandboxMode | null, config?: { [key in string]?: JsonValue } | null, baseInstructions?: string | null, developerInstructions?: string | null};

View File

@@ -18,19 +18,16 @@ import type { SandboxMode } from "./SandboxMode";
*
* Prefer using thread_id whenever possible.
*/
export type ThreadResumeParams = { threadId: string,
/**
export type ThreadResumeParams = {threadId: string, /**
* [UNSTABLE] FOR CODEX CLOUD - DO NOT USE.
* If specified, the thread will be resumed with the provided history
* instead of loaded from disk.
*/
history?: Array<ResponseItem> | null,
/**
history?: Array<ResponseItem> | null, /**
* [UNSTABLE] Specify the rollout path to resume from.
* If specified, the thread_id param will be ignored.
*/
path?: string | null,
/**
path?: string | null, /**
* Configuration overrides for the resumed thread, if any.
*/
model?: string | null, modelProvider?: string | null, cwd?: string | null, approvalPolicy?: AskForApproval | null, sandbox?: SandboxMode | null, config?: { [key in string]?: JsonValue } | null, baseInstructions?: string | null, developerInstructions?: string | null, personality?: Personality | null, };
model?: string | null, modelProvider?: string | null, cwd?: string | null, approvalPolicy?: AskForApproval | null, sandbox?: SandboxMode | null, config?: { [key in string]?: JsonValue } | null, baseInstructions?: string | null, developerInstructions?: string | null, personality?: Personality | null};

View File

@@ -7,9 +7,7 @@ import type { AskForApproval } from "./AskForApproval";
import type { SandboxMode } from "./SandboxMode";
export type ThreadStartParams = {model?: string | null, modelProvider?: string | null, cwd?: string | null, approvalPolicy?: AskForApproval | null, sandbox?: SandboxMode | null, config?: { [key in string]?: JsonValue } | null, baseInstructions?: string | null, developerInstructions?: string | null, personality?: Personality | null, ephemeral?: boolean | null, /**
* If true, opt into emitting raw response items on the event stream.
*
* If true, opt into emitting raw Responses API items on the event stream.
* This is for internal use only (e.g. Codex Cloud).
* (TODO): Figure out a better way to categorize internal / experimental events & protocols.
*/
experimentalRawEvents: boolean};

View File

@@ -10,41 +10,35 @@ import type { AskForApproval } from "./AskForApproval";
import type { SandboxPolicy } from "./SandboxPolicy";
import type { UserInput } from "./UserInput";
export type TurnStartParams = { threadId: string, input: Array<UserInput>,
/**
export type TurnStartParams = {threadId: string, input: Array<UserInput>, /**
* Override the working directory for this turn and subsequent turns.
*/
cwd?: string | null,
/**
cwd?: string | null, /**
* Override the approval policy for this turn and subsequent turns.
*/
approvalPolicy?: AskForApproval | null,
/**
approvalPolicy?: AskForApproval | null, /**
* Override the sandbox policy for this turn and subsequent turns.
*/
sandboxPolicy?: SandboxPolicy | null,
/**
sandboxPolicy?: SandboxPolicy | null, /**
* Override the model for this turn and subsequent turns.
*/
model?: string | null,
/**
model?: string | null, /**
* Override the reasoning effort for this turn and subsequent turns.
*/
effort?: ReasoningEffort | null,
/**
effort?: ReasoningEffort | null, /**
* Override the reasoning summary for this turn and subsequent turns.
*/
summary?: ReasoningSummary | null,
/**
summary?: ReasoningSummary | null, /**
* Override the personality for this turn and subsequent turns.
*/
personality?: Personality | null,
/**
personality?: Personality | null, /**
* Optional JSON Schema used to constrain the final assistant message for this turn.
*/
outputSchema?: JsonValue | null,
/**
* EXPERIMENTAL - set a pre-set collaboration mode.
outputSchema?: JsonValue | null, /**
* EXPERIMENTAL - Set a pre-set collaboration mode.
* Takes precedence over model, reasoning_effort, and developer instructions if set.
*
* For `collaboration_mode.settings.developer_instructions`, `null` means
* "use the built-in instructions for the selected mode".
*/
collaborationMode?: CollaborationMode | null, };
collaborationMode?: CollaborationMode | null};

View File

@@ -0,0 +1,11 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
import type { UserInput } from "./UserInput";
export type TurnSteerParams = { threadId: string, input: Array<UserInput>,
/**
* Required active turn id precondition. The request fails when it does not
* match the currently active turn.
*/
expectedTurnId: string, };

View File

@@ -0,0 +1,5 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
export type TurnSteerResponse = { turnId: string, };

View File

@@ -6,7 +6,10 @@ export type { AccountRateLimitsUpdatedNotification } from "./AccountRateLimitsUp
export type { AccountUpdatedNotification } from "./AccountUpdatedNotification";
export type { AgentMessageDeltaNotification } from "./AgentMessageDeltaNotification";
export type { AnalyticsConfig } from "./AnalyticsConfig";
export type { AppDisabledReason } from "./AppDisabledReason";
export type { AppInfo } from "./AppInfo";
export type { AppListUpdatedNotification } from "./AppListUpdatedNotification";
export type { AppsConfig } from "./AppsConfig";
export type { AppsListParams } from "./AppsListParams";
export type { AppsListResponse } from "./AppsListResponse";
export type { AskForApproval } from "./AskForApproval";
@@ -52,6 +55,10 @@ export type { DynamicToolCallResponse } from "./DynamicToolCallResponse";
export type { DynamicToolSpec } from "./DynamicToolSpec";
export type { ErrorNotification } from "./ErrorNotification";
export type { ExecPolicyAmendment } from "./ExecPolicyAmendment";
export type { ExperimentalFeature } from "./ExperimentalFeature";
export type { ExperimentalFeatureListParams } from "./ExperimentalFeatureListParams";
export type { ExperimentalFeatureListResponse } from "./ExperimentalFeatureListResponse";
export type { ExperimentalFeatureStage } from "./ExperimentalFeatureStage";
export type { FeedbackUploadParams } from "./FeedbackUploadParams";
export type { FeedbackUploadResponse } from "./FeedbackUploadResponse";
export type { FileChangeApprovalDecision } from "./FileChangeApprovalDecision";
@@ -85,6 +92,7 @@ export type { Model } from "./Model";
export type { ModelListParams } from "./ModelListParams";
export type { ModelListResponse } from "./ModelListResponse";
export type { NetworkAccess } from "./NetworkAccess";
export type { NetworkRequirements } from "./NetworkRequirements";
export type { OverriddenMetadata } from "./OverriddenMetadata";
export type { PatchApplyStatus } from "./PatchApplyStatus";
export type { PatchChangeKind } from "./PatchChangeKind";
@@ -116,6 +124,7 @@ export type { SkillToolDependency } from "./SkillToolDependency";
export type { SkillsConfigWriteParams } from "./SkillsConfigWriteParams";
export type { SkillsConfigWriteResponse } from "./SkillsConfigWriteResponse";
export type { SkillsListEntry } from "./SkillsListEntry";
export type { SkillsListExtraRootsForCwd } from "./SkillsListExtraRootsForCwd";
export type { SkillsListParams } from "./SkillsListParams";
export type { SkillsListResponse } from "./SkillsListResponse";
export type { SkillsRemoteReadParams } from "./SkillsRemoteReadParams";
@@ -176,6 +185,8 @@ export type { TurnStartParams } from "./TurnStartParams";
export type { TurnStartResponse } from "./TurnStartResponse";
export type { TurnStartedNotification } from "./TurnStartedNotification";
export type { TurnStatus } from "./TurnStatus";
export type { TurnSteerParams } from "./TurnSteerParams";
export type { TurnSteerResponse } from "./TurnSteerResponse";
export type { UserInput } from "./UserInput";
export type { WebSearchAction } from "./WebSearchAction";
export type { WindowsWorldWritableWarningNotification } from "./WindowsWorldWritableWarningNotification";

View File

@@ -238,7 +238,10 @@ fn filter_client_request_ts(out_dir: &Path, experimental_methods: &[&str]) -> Re
.collect();
let new_body = filtered_arms.join(" | ");
content = format!("{prefix}{new_body}{suffix}");
content = prune_unused_type_imports(content, &new_body);
let import_usage_scope = split_type_alias(&content)
.map(|(_, body, _)| body)
.unwrap_or_else(|| new_body.clone());
content = prune_unused_type_imports(content, &import_usage_scope);
fs::write(&path, content).with_context(|| format!("Failed to write {}", path.display()))?;
Ok(())
@@ -296,7 +299,10 @@ fn filter_experimental_fields_in_ts_file(
let prefix = &content[..open_brace + 1];
let suffix = &content[close_brace..];
content = format!("{prefix}{new_inner}{suffix}");
content = prune_unused_type_imports(content, &new_inner);
let import_usage_scope = split_type_alias(&content)
.map(|(_, body, _)| body)
.unwrap_or_else(|| new_inner.clone());
content = prune_unused_type_imports(content, &import_usage_scope);
fs::write(path, content).with_context(|| format!("Failed to write {}", path.display()))?;
Ok(())
}
@@ -1745,6 +1751,50 @@ mod tests {
Ok(())
}
#[test]
fn experimental_type_fields_ts_filter_keeps_imports_used_in_intersection_suffix() -> Result<()>
{
let output_dir = std::env::temp_dir().join(format!("codex_ts_filter_{}", Uuid::now_v7()));
fs::create_dir_all(&output_dir)?;
struct TempDirGuard(PathBuf);
impl Drop for TempDirGuard {
fn drop(&mut self) {
let _ = fs::remove_dir_all(&self.0);
}
}
let _guard = TempDirGuard(output_dir.clone());
let path = output_dir.join("Config.ts");
let content = r#"import type { JsonValue } from "../serde_json/JsonValue";
import type { Keep } from "./Keep";
export type Config = { stableField: Keep, unstableField: string | null } & ({ [key in string]?: number | string | boolean | Array<JsonValue> | { [key in string]?: JsonValue } | null });
"#;
fs::write(&path, content)?;
static CUSTOM_FIELD: crate::experimental_api::ExperimentalField =
crate::experimental_api::ExperimentalField {
type_name: "Config",
field_name: "unstableField",
reason: "custom/unstableField",
};
filter_experimental_type_fields_ts(&output_dir, &[&CUSTOM_FIELD])?;
let filtered = fs::read_to_string(&path)?;
assert_eq!(filtered.contains("unstableField"), false);
assert_eq!(
filtered.contains(r#"import type { JsonValue } from "../serde_json/JsonValue";"#),
true
);
assert_eq!(
filtered.contains(r#"import type { Keep } from "./Keep";"#),
true
);
Ok(())
}
#[test]
fn stable_schema_filter_removes_mock_experimental_method() -> Result<()> {
let output_dir = std::env::temp_dir().join(format!("codex_schema_{}", Uuid::now_v7()));

View File

@@ -190,10 +190,12 @@ client_request_definitions! {
},
ThreadResume => "thread/resume" {
params: v2::ThreadResumeParams,
inspect_params: true,
response: v2::ThreadResumeResponse,
},
ThreadFork => "thread/fork" {
params: v2::ThreadForkParams,
inspect_params: true,
response: v2::ThreadForkResponse,
},
ThreadArchive => "thread/archive" {
@@ -212,6 +214,11 @@ client_request_definitions! {
params: v2::ThreadCompactStartParams,
response: v2::ThreadCompactStartResponse,
},
#[experimental("thread/backgroundTerminals/clean")]
ThreadBackgroundTerminalsClean => "thread/backgroundTerminals/clean" {
params: v2::ThreadBackgroundTerminalsCleanParams,
response: v2::ThreadBackgroundTerminalsCleanResponse,
},
ThreadRollback => "thread/rollback" {
params: v2::ThreadRollbackParams,
response: v2::ThreadRollbackResponse,
@@ -250,8 +257,13 @@ client_request_definitions! {
},
TurnStart => "turn/start" {
params: v2::TurnStartParams,
inspect_params: true,
response: v2::TurnStartResponse,
},
TurnSteer => "turn/steer" {
params: v2::TurnSteerParams,
response: v2::TurnSteerResponse,
},
TurnInterrupt => "turn/interrupt" {
params: v2::TurnInterruptParams,
response: v2::TurnInterruptResponse,
@@ -265,6 +277,10 @@ client_request_definitions! {
params: v2::ModelListParams,
response: v2::ModelListResponse,
},
ExperimentalFeatureList => "experimentalFeature/list" {
params: v2::ExperimentalFeatureListParams,
response: v2::ExperimentalFeatureListResponse,
},
#[experimental("collaborationMode/list")]
/// Lists collaboration mode presets.
CollaborationModeList => "collaborationMode/list" {
@@ -295,6 +311,7 @@ client_request_definitions! {
LoginAccount => "account/login/start" {
params: v2::LoginAccountParams,
inspect_params: true,
response: v2::LoginAccountResponse,
},
@@ -709,6 +726,7 @@ server_notification_definitions! {
McpServerOauthLoginCompleted => "mcpServer/oauthLogin/completed" (v2::McpServerOauthLoginCompletedNotification),
AccountUpdated => "account/updated" (v2::AccountUpdatedNotification),
AccountRateLimitsUpdated => "account/rateLimits/updated" (v2::AccountRateLimitsUpdatedNotification),
AppListUpdated => "app/list/updated" (v2::AppListUpdatedNotification),
ReasoningSummaryTextDelta => "item/reasoning/summaryTextDelta" (v2::ReasoningSummaryTextDeltaNotification),
ReasoningSummaryPartAdded => "item/reasoning/summaryPartAdded" (v2::ReasoningSummaryPartAddedNotification),
ReasoningTextDelta => "item/reasoning/textDelta" (v2::ReasoningTextDeltaNotification),
@@ -985,7 +1003,8 @@ mod tests {
request_id: RequestId::Integer(5),
params: v2::LoginAccountParams::ChatgptAuthTokens {
access_token: "access-token".to_string(),
id_token: "id-token".to_string(),
chatgpt_account_id: "org-123".to_string(),
chatgpt_plan_type: Some("business".to_string()),
},
};
assert_eq!(
@@ -995,7 +1014,8 @@ mod tests {
"params": {
"type": "chatgptAuthTokens",
"accessToken": "access-token",
"idToken": "id-token"
"chatgptAccountId": "org-123",
"chatgptPlanType": "business"
}
}),
serde_json::to_value(&request)?,
@@ -1087,6 +1107,68 @@ mod tests {
Ok(())
}
#[test]
fn serialize_list_apps() -> Result<()> {
let request = ClientRequest::AppsList {
request_id: RequestId::Integer(8),
params: v2::AppsListParams::default(),
};
assert_eq!(
json!({
"method": "app/list",
"id": 8,
"params": {
"cursor": null,
"limit": null,
"threadId": null
}
}),
serde_json::to_value(&request)?,
);
Ok(())
}
#[test]
fn serialize_list_experimental_features() -> Result<()> {
let request = ClientRequest::ExperimentalFeatureList {
request_id: RequestId::Integer(8),
params: v2::ExperimentalFeatureListParams::default(),
};
assert_eq!(
json!({
"method": "experimentalFeature/list",
"id": 8,
"params": {
"cursor": null,
"limit": null
}
}),
serde_json::to_value(&request)?,
);
Ok(())
}
#[test]
fn serialize_thread_background_terminals_clean() -> Result<()> {
let request = ClientRequest::ThreadBackgroundTerminalsClean {
request_id: RequestId::Integer(8),
params: v2::ThreadBackgroundTerminalsCleanParams {
thread_id: "thr_123".to_string(),
},
};
assert_eq!(
json!({
"method": "thread/backgroundTerminals/clean",
"id": 8,
"params": {
"threadId": "thr_123"
}
}),
serde_json::to_value(&request)?,
);
Ok(())
}
#[test]
fn mock_experimental_method_is_marked_experimental() {
let request = ClientRequest::MockExperimentalMethod {

View File

@@ -377,6 +377,36 @@ pub struct AnalyticsConfig {
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "snake_case")]
#[ts(export_to = "v2/")]
pub enum AppDisabledReason {
Unknown,
User,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "snake_case")]
#[ts(export_to = "v2/")]
pub struct AppConfig {
#[serde(default = "default_enabled")]
pub enabled: bool,
pub disabled_reason: Option<AppDisabledReason>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "snake_case")]
#[ts(export_to = "v2/")]
pub struct AppsConfig {
#[serde(default, flatten)]
#[schemars(with = "HashMap<String, AppConfig>")]
pub apps: HashMap<String, AppConfig>,
}
const fn default_enabled() -> bool {
true
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS, ExperimentalApi)]
#[serde(rename_all = "snake_case")]
#[ts(export_to = "v2/")]
pub struct Config {
pub model: Option<String>,
pub review_model: Option<String>,
@@ -400,6 +430,9 @@ pub struct Config {
pub model_reasoning_summary: Option<ReasoningSummary>,
pub model_verbosity: Option<Verbosity>,
pub analytics: Option<AnalyticsConfig>,
#[experimental("config/read.apps")]
#[serde(default)]
pub apps: Option<AppsConfig>,
#[serde(default, flatten)]
pub additional: HashMap<String, JsonValue>,
}
@@ -494,13 +527,32 @@ pub struct ConfigReadResponse {
pub layers: Option<Vec<ConfigLayer>>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS, ExperimentalApi)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct ConfigRequirements {
pub allowed_approval_policies: Option<Vec<AskForApproval>>,
pub allowed_sandbox_modes: Option<Vec<SandboxMode>>,
pub allowed_web_search_modes: Option<Vec<WebSearchMode>>,
pub enforce_residency: Option<ResidencyRequirement>,
#[experimental("configRequirements/read.network")]
pub network: Option<NetworkRequirements>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct NetworkRequirements {
pub enabled: Option<bool>,
pub http_port: Option<u16>,
pub socks_port: Option<u16>,
pub allow_upstream_proxy: Option<bool>,
pub dangerously_allow_non_loopback_proxy: Option<bool>,
pub dangerously_allow_non_loopback_admin: Option<bool>,
pub allowed_domains: Option<Vec<String>>,
pub denied_domains: Option<Vec<String>>,
pub allow_unix_sockets: Option<Vec<String>>,
pub allow_local_binding: Option<bool>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq, JsonSchema, TS)]
@@ -835,7 +887,7 @@ pub enum Account {
Chatgpt { email: String, plan_type: PlanType },
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS, ExperimentalApi)]
#[serde(tag = "type")]
#[ts(tag = "type")]
#[ts(export_to = "v2/")]
@@ -852,21 +904,21 @@ pub enum LoginAccountParams {
Chatgpt,
/// [UNSTABLE] FOR OPENAI INTERNAL USE ONLY - DO NOT USE.
/// The access token must contain the same scopes that Codex-managed ChatGPT auth tokens have.
#[serde(rename = "chatgptAuthTokens")]
#[ts(rename = "chatgptAuthTokens")]
#[experimental("account/login/start.chatgptAuthTokens")]
#[serde(rename = "chatgptAuthTokens", rename_all = "camelCase")]
#[ts(rename = "chatgptAuthTokens", rename_all = "camelCase")]
ChatgptAuthTokens {
/// ID token (JWT) supplied by the client.
///
/// This token is used for identity and account metadata (email, plan type,
/// workspace id).
#[serde(rename = "idToken")]
#[ts(rename = "idToken")]
id_token: String,
/// Access token (JWT) supplied by the client.
/// This token is used for backend API requests.
#[serde(rename = "accessToken")]
#[ts(rename = "accessToken")]
/// This token is used for backend API requests and email extraction.
access_token: String,
/// Workspace/account identifier supplied by the client.
chatgpt_account_id: String,
/// Optional plan type supplied by the client.
///
/// When `null`, Codex attempts to derive the plan type from access-token
/// claims. If unavailable, the plan defaults to `unknown`.
#[ts(optional = nullable)]
chatgpt_plan_type: Option<String>,
},
}
@@ -938,8 +990,8 @@ pub struct ChatgptAuthTokensRefreshParams {
/// Clients that manage multiple accounts/workspaces can use this as a hint
/// to refresh the token for the correct workspace.
///
/// This may be `null` when the prior ID token did not include a workspace
/// identifier (`chatgpt_account_id`) or when the token could not be parsed.
/// This may be `null` when the prior auth state did not include a workspace
/// identifier (`chatgpt_account_id`).
#[ts(optional = nullable)]
pub previous_account_id: Option<String>,
}
@@ -948,8 +1000,9 @@ pub struct ChatgptAuthTokensRefreshParams {
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct ChatgptAuthTokensRefreshResponse {
pub id_token: String,
pub access_token: String,
pub chatgpt_account_id: String,
pub chatgpt_plan_type: Option<String>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
@@ -1043,6 +1096,67 @@ pub struct CollaborationModeListResponse {
pub data: Vec<CollaborationModeMask>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Default, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct ExperimentalFeatureListParams {
/// Opaque pagination cursor returned by a previous call.
#[ts(optional = nullable)]
pub cursor: Option<String>,
/// Optional page size; defaults to a reasonable server-side value.
#[ts(optional = nullable)]
pub limit: Option<u32>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub enum ExperimentalFeatureStage {
/// Feature is available for user testing and feedback.
Beta,
/// Feature is still being built and not ready for broad use.
UnderDevelopment,
/// Feature is production-ready.
Stable,
/// Feature is deprecated and should be avoided.
Deprecated,
/// Feature flag is retained only for backwards compatibility.
Removed,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct ExperimentalFeature {
/// Stable key used in config.toml and CLI flag toggles.
pub name: String,
/// Lifecycle stage of this feature flag.
pub stage: ExperimentalFeatureStage,
/// User-facing display name shown in the experimental features UI.
/// Null when this feature is not in beta.
pub display_name: Option<String>,
/// Short summary describing what the feature does.
/// Null when this feature is not in beta.
pub description: Option<String>,
/// Announcement copy shown to users when the feature is introduced.
/// Null when this feature is not in beta.
pub announcement: Option<String>,
/// Whether this feature is currently enabled in the loaded config.
pub enabled: bool,
/// Whether this feature is enabled by default.
pub default_enabled: bool,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct ExperimentalFeatureListResponse {
pub data: Vec<ExperimentalFeature>,
/// Opaque cursor to pass to the next call to continue after the last item.
/// If None, there are no more items to return.
pub next_cursor: Option<String>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
@@ -1079,6 +1193,7 @@ pub struct ListMcpServerStatusResponse {
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Default, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
/// EXPERIMENTAL - list available apps/connectors.
pub struct AppsListParams {
/// Opaque pagination cursor returned by a previous call.
#[ts(optional = nullable)]
@@ -1086,11 +1201,18 @@ pub struct AppsListParams {
/// Optional page size; defaults to a reasonable server-side value.
#[ts(optional = nullable)]
pub limit: Option<u32>,
/// Optional thread id used to evaluate app feature gating from that thread's config.
#[ts(optional = nullable)]
pub thread_id: Option<String>,
/// When true, bypass app caches and fetch the latest data from sources.
#[serde(default, skip_serializing_if = "std::ops::Not::not")]
pub force_refetch: bool,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
/// EXPERIMENTAL - app metadata returned by app-list APIs.
pub struct AppInfo {
pub id: String,
pub name: String,
@@ -1106,6 +1228,7 @@ pub struct AppInfo {
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
/// EXPERIMENTAL - app list response.
pub struct AppsListResponse {
pub data: Vec<AppInfo>,
/// Opaque cursor to pass to the next call to continue after the last item.
@@ -1113,6 +1236,14 @@ pub struct AppsListResponse {
pub next_cursor: Option<String>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
/// EXPERIMENTAL - notification emitted when the app list changes.
pub struct AppListUpdatedNotification {
pub data: Vec<AppInfo>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
@@ -1221,10 +1352,9 @@ pub struct ThreadStartParams {
#[experimental("thread/start.mockExperimentalField")]
#[ts(optional = nullable)]
pub mock_experimental_field: Option<String>,
/// If true, opt into emitting raw response items on the event stream.
///
/// If true, opt into emitting raw Responses API items on the event stream.
/// This is for internal use only (e.g. Codex Cloud).
/// (TODO): Figure out a better way to categorize internal / experimental events & protocols.
#[experimental("thread/start.experimentalRawEvents")]
#[serde(default)]
pub experimental_raw_events: bool,
}
@@ -1259,7 +1389,9 @@ pub struct ThreadStartResponse {
pub reasoning_effort: Option<ReasoningEffort>,
}
#[derive(Serialize, Deserialize, Debug, Default, Clone, PartialEq, JsonSchema, TS)]
#[derive(
Serialize, Deserialize, Debug, Default, Clone, PartialEq, JsonSchema, TS, ExperimentalApi,
)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
/// There are three ways to resume a thread:
@@ -1277,11 +1409,13 @@ pub struct ThreadResumeParams {
/// [UNSTABLE] FOR CODEX CLOUD - DO NOT USE.
/// If specified, the thread will be resumed with the provided history
/// instead of loaded from disk.
#[experimental("thread/resume.history")]
#[ts(optional = nullable)]
pub history: Option<Vec<ResponseItem>>,
/// [UNSTABLE] Specify the rollout path to resume from.
/// If specified, the thread_id param will be ignored.
#[experimental("thread/resume.path")]
#[ts(optional = nullable)]
pub path: Option<PathBuf>,
@@ -1319,7 +1453,9 @@ pub struct ThreadResumeResponse {
pub reasoning_effort: Option<ReasoningEffort>,
}
#[derive(Serialize, Deserialize, Debug, Default, Clone, PartialEq, JsonSchema, TS)]
#[derive(
Serialize, Deserialize, Debug, Default, Clone, PartialEq, JsonSchema, TS, ExperimentalApi,
)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
/// There are two ways to fork a thread:
@@ -1334,6 +1470,7 @@ pub struct ThreadForkParams {
/// [UNSTABLE] Specify the rollout path to fork from.
/// If specified, the thread_id param will be ignored.
#[experimental("thread/fork.path")]
#[ts(optional = nullable)]
pub path: Option<PathBuf>,
@@ -1420,6 +1557,18 @@ pub struct ThreadCompactStartParams {
#[ts(export_to = "v2/")]
pub struct ThreadCompactStartResponse {}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct ThreadBackgroundTerminalsCleanParams {
pub thread_id: String,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct ThreadBackgroundTerminalsCleanResponse {}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
@@ -1558,6 +1707,19 @@ pub struct SkillsListParams {
/// When true, bypass the skills cache and re-scan skills from disk.
#[serde(default, skip_serializing_if = "std::ops::Not::not")]
pub force_reload: bool,
/// Optional per-cwd extra roots to scan as user-scoped skills.
#[serde(default)]
#[ts(optional = nullable)]
pub per_cwd_extra_user_roots: Option<Vec<SkillsListExtraRootsForCwd>>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct SkillsListExtraRootsForCwd {
pub cwd: PathBuf,
pub extra_user_roots: Vec<PathBuf>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
@@ -1934,7 +2096,9 @@ pub enum TurnStatus {
}
// Turn APIs
#[derive(Serialize, Deserialize, Debug, Default, Clone, PartialEq, JsonSchema, TS)]
#[derive(
Serialize, Deserialize, Debug, Default, Clone, PartialEq, JsonSchema, TS, ExperimentalApi,
)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct TurnStartParams {
@@ -1965,8 +2129,12 @@ pub struct TurnStartParams {
#[ts(optional = nullable)]
pub output_schema: Option<JsonValue>,
/// EXPERIMENTAL - set a pre-set collaboration mode.
/// EXPERIMENTAL - Set a pre-set collaboration mode.
/// Takes precedence over model, reasoning_effort, and developer instructions if set.
///
/// For `collaboration_mode.settings.developer_instructions`, `null` means
/// "use the built-in instructions for the selected mode".
#[experimental("turn/start.collaborationMode")]
#[ts(optional = nullable)]
pub collaboration_mode: Option<CollaborationMode>,
}
@@ -2031,6 +2199,24 @@ pub struct TurnStartResponse {
pub turn: Turn,
}
#[derive(Serialize, Deserialize, Debug, Default, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct TurnSteerParams {
pub thread_id: String,
pub input: Vec<UserInput>,
/// Required active turn id precondition. The request fails when it does not
/// match the currently active turn.
pub expected_turn_id: String,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct TurnSteerResponse {
pub turn_id: String,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
@@ -2377,6 +2563,7 @@ pub enum CommandExecutionStatus {
pub enum CollabAgentTool {
SpawnAgent,
SendInput,
ResumeAgent,
Wait,
CloseAgent,
}
@@ -3127,6 +3314,7 @@ mod tests {
text: "world".to_string(),
},
],
phase: None,
});
assert_eq!(
@@ -3180,20 +3368,36 @@ mod tests {
serde_json::to_value(SkillsListParams {
cwds: Vec::new(),
force_reload: false,
per_cwd_extra_user_roots: None,
})
.unwrap(),
json!({}),
json!({
"perCwdExtraUserRoots": null,
}),
);
assert_eq!(
serde_json::to_value(SkillsListParams {
cwds: vec![PathBuf::from("/repo")],
force_reload: true,
per_cwd_extra_user_roots: Some(vec![SkillsListExtraRootsForCwd {
cwd: PathBuf::from("/repo"),
extra_user_roots: vec![
PathBuf::from("/shared/skills"),
PathBuf::from("/tmp/x")
],
}]),
})
.unwrap(),
json!({
"cwds": ["/repo"],
"forceReload": true,
"perCwdExtraUserRoots": [
{
"cwd": "/repo",
"extraUserRoots": ["/shared/skills", "/tmp/x"],
}
],
}),
);
}

View File

@@ -33,6 +33,9 @@ codex-rmcp-client = { workspace = true }
codex-utils-absolute-path = { workspace = true }
codex-utils-json-to-toml = { workspace = true }
chrono = { workspace = true }
clap = { workspace = true, features = ["derive"] }
futures = { workspace = true }
owo-colors = { workspace = true, features = ["supports-colors"] }
serde = { workspace = true, features = ["derive"] }
serde_json = { workspace = true }
tempfile = { workspace = true }
@@ -45,6 +48,7 @@ tokio = { workspace = true, features = [
"rt-multi-thread",
"signal",
] }
tokio-tungstenite = { workspace = true }
tracing = { workspace = true, features = ["log"] }
tracing-subscriber = { workspace = true, features = ["env-filter", "fmt"] }
uuid = { workspace = true, features = ["serde", "v7"] }
@@ -59,6 +63,7 @@ axum = { workspace = true, default-features = false, features = [
base64 = { workspace = true }
codex-execpolicy = { workspace = true }
core_test_support = { workspace = true }
codex-utils-cargo-bin = { workspace = true }
os_info = { workspace = true }
pretty_assertions = { workspace = true }
rmcp = { workspace = true, default-features = false, features = [
@@ -66,5 +71,6 @@ rmcp = { workspace = true, default-features = false, features = [
"transport-streamable-http-server",
] }
serial_test = { workspace = true }
tokio-tungstenite = { workspace = true }
wiremock = { workspace = true }
shlex = { workspace = true }

View File

@@ -19,7 +19,20 @@
## Protocol
Similar to [MCP](https://modelcontextprotocol.io/), `codex app-server` supports bidirectional communication, streaming JSONL over stdio. The protocol is JSON-RPC 2.0, though the `"jsonrpc":"2.0"` header is omitted.
Similar to [MCP](https://modelcontextprotocol.io/), `codex app-server` supports bidirectional communication using JSON-RPC 2.0 messages (with the `"jsonrpc":"2.0"` header omitted on the wire).
Supported transports:
- stdio (`--listen stdio://`, default): newline-delimited JSON (JSONL)
- websocket (`--listen ws://IP:PORT`): one JSON-RPC message per websocket text frame (**experimental / unsupported**)
Websocket transport is currently experimental and unsupported. Do not rely on it for production workloads.
Backpressure behavior:
- The server uses bounded queues between transport ingress, request processing, and outbound writes.
- When request ingress is saturated, new requests are rejected with a JSON-RPC error code `-32001` and message `"Server overloaded; retry later."`.
- Clients should treat this as retryable and use exponential backoff with jitter (see the retry sketch below).
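A minimal client-side sketch of that retry guidance, assuming a hypothetical `sendRequest` helper that performs one JSON-RPC call on the connection and returns either a result or the JSON-RPC error object (only the `-32001` code and message come from the contract above):

```ts
// Hypothetical helper: performs one JSON-RPC call on the connection and
// resolves with either a result or the JSON-RPC error object.
type JsonRpcError = { code: number; message: string };
declare function sendRequest<T>(
  method: string,
  params: unknown,
): Promise<{ result: T } | { error: JsonRpcError }>;

const SERVER_OVERLOADED = -32001;

async function sendWithBackoff<T>(method: string, params: unknown, maxAttempts = 5): Promise<T> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const outcome = await sendRequest<T>(method, params);
    if ("result" in outcome) {
      return outcome.result;
    }
    if (outcome.error.code !== SERVER_OVERLOADED) {
      throw new Error(`${method} failed: ${outcome.error.message}`);
    }
    // Exponential backoff with full jitter: base 100ms, 200ms, 400ms, ... capped at 5s.
    const base = Math.min(100 * 2 ** attempt, 5_000);
    await new Promise((resolve) => setTimeout(resolve, Math.random() * base));
  }
  throw new Error(`${method} still overloaded after ${maxAttempts} attempts`);
}
```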
## Message Schema
@@ -42,7 +55,7 @@ Use the thread APIs to create, list, or archive conversations. Drive a conversat
## Lifecycle Overview
- Initialize once: Immediately after launching the codex app-server process, send an `initialize` request with your client metadata, then emit an `initialized` notification. Any other request before this handshake gets rejected.
- Initialize once per connection: Immediately after opening a transport connection, send an `initialize` request with your client metadata, then emit an `initialized` notification. Any other request on that connection before this handshake gets rejected.
- Start (or resume) a thread: Call `thread/start` to open a fresh conversation. The response returns the thread object and you'll also get a `thread/started` notification. If you're continuing an existing conversation, call `thread/resume` with its ID instead. If you want to branch from an existing conversation, call `thread/fork` to create a new thread id with copied history.
- Begin a turn: To send user input, call `turn/start` with the target `threadId` and the user's input. Optional fields let you override model, cwd, sandbox policy, etc. This immediately returns the new turn object and triggers a `turn/started` notification.
- Stream events: After `turn/start`, keep reading JSON-RPC notifications on stdout. You'll see `item/started`, `item/completed`, deltas like `item/agentMessage/delta`, tool progress, etc. These represent streaming model output plus any side effects (commands, tool calls, reasoning notes).
@@ -50,7 +63,7 @@ Use the thread APIs to create, list, or archive conversations. Drive a conversat
## Initialization
Clients must send a single `initialize` request before invoking any other method, then acknowledge with an `initialized` notification. The server returns the user agent string it will present to upstream services; subsequent requests issued before initialization receive a `"Not initialized"` error, and repeated `initialize` calls receive an `"Already initialized"` error.
Clients must send a single `initialize` request per transport connection before invoking any other method on that connection, then acknowledge with an `initialized` notification. The server returns the user agent string it will present to upstream services; subsequent requests issued before initialization receive a `"Not initialized"` error, and repeated `initialize` calls on the same connection receive an `"Already initialized"` error.
Applications building on top of `codex app-server` should identify themselves via the `clientInfo` parameter.
@@ -86,12 +99,15 @@ Example (from OpenAI's official VSCode extension):
- `thread/name/set` — set or update a thread's user-facing name; returns `{}` on success. Thread names are not required to be unique; name lookups resolve to the most recently updated thread.
- `thread/unarchive` — move an archived rollout file back into the sessions directory; returns the restored `thread` on success.
- `thread/compact/start` — trigger conversation history compaction for a thread; returns `{}` immediately while progress streams through standard turn/item notifications.
- `thread/backgroundTerminals/clean` — terminate all running background terminals for a thread (experimental; requires `capabilities.experimentalApi`); returns `{}` when the cleanup request is accepted.
- `thread/rollback` — drop the last N turns from the agent's in-memory context and persist a rollback marker in the rollout so future resumes see the pruned history; returns the updated `thread` (with `turns` populated) on success.
- `turn/start` — add user input to a thread and begin Codex generation; responds with the initial `turn` object and streams `turn/started`, `item/*`, and `turn/completed` notifications.
- `turn/start` — add user input to a thread and begin Codex generation; responds with the initial `turn` object and streams `turn/started`, `item/*`, and `turn/completed` notifications. For `collaborationMode`, `settings.developer_instructions: null` means "use built-in instructions for the selected mode".
- `turn/steer` — add user input to an already in-flight turn without starting a new turn; returns the active `turnId` that accepted the input.
- `turn/interrupt` — request cancellation of an in-flight turn by `(thread_id, turn_id)`; success is an empty `{}` response and the turn finishes with `status: "interrupted"`.
- `review/start` — kick off Codex's automated reviewer for a thread; responds like `turn/start` and emits `item/started`/`item/completed` notifications with `enteredReviewMode` and `exitedReviewMode` items, plus a final assistant `agentMessage` containing the review.
- `command/exec` — run a single command under the server sandbox without starting a thread/turn (handy for utilities and validation).
- `model/list` — list available models (with reasoning effort options and optional `upgrade` model ids).
- `experimentalFeature/list` — list feature flags with stage metadata (`beta`, `underDevelopment`, `stable`, etc.), enabled/default-enabled state, and cursor pagination. For non-beta flags, `displayName`/`description`/`announcement` are `null`. A cursor-paging sketch follows this method list.
- `collaborationMode/list` — list available collaboration mode presets (experimental, no pagination).
- `skills/list` — list skills for one or more `cwd` values (optional `forceReload`).
- `skills/remote/read` — list public remote skills (**under development; do not call from production clients yet**).
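As an illustration of the cursor pagination on `experimentalFeature/list`, a sketch that pages until `nextCursor` is `null`; `sendRequest` is a hypothetical transport helper, and the local type aliases mirror the generated TypeScript bindings elsewhere in this diff:

```ts
// Hypothetical transport helper; the type aliases mirror the generated v2
// TypeScript bindings (camelCase wire format).
declare function sendRequest<T>(method: string, params: unknown): Promise<T>;

type ExperimentalFeatureStage = "beta" | "underDevelopment" | "stable" | "deprecated" | "removed";
type ExperimentalFeature = {
  name: string;
  stage: ExperimentalFeatureStage;
  displayName: string | null; // null unless the feature is in beta
  description: string | null;
  announcement: string | null;
  enabled: boolean;
  defaultEnabled: boolean;
};
type ExperimentalFeatureListResponse = { data: ExperimentalFeature[]; nextCursor: string | null };

async function listAllExperimentalFeatures(): Promise<ExperimentalFeature[]> {
  const features: ExperimentalFeature[] = [];
  let cursor: string | null = null;
  do {
    const page = await sendRequest<ExperimentalFeatureListResponse>("experimentalFeature/list", {
      cursor,
      limit: 50,
    });
    features.push(...page.data);
    cursor = page.nextCursor;
  } while (cursor !== null);
  return features;
}
```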
@@ -107,7 +123,7 @@ Example (from OpenAI's official VSCode extension):
- `config/read` — fetch the effective config on disk after resolving config layering.
- `config/value/write` — write a single config key/value to the user's config.toml on disk.
- `config/batchWrite` — apply multiple config edits atomically to the user's config.toml on disk.
- `configRequirements/read` — fetch the loaded requirements allow-lists and `enforceResidency` from `requirements.toml` and/or MDM (or `null` if none are configured).
- `configRequirements/read` — fetch loaded requirements constraints from `requirements.toml` and/or MDM (or `null` if none are configured), including allow-lists (`allowedApprovalPolicies`, `allowedSandboxModes`, `allowedWebSearchModes`), `enforceResidency`, and `network` constraints. A sample `network` value follows this list.
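For the new `network` constraints, a purely illustrative value typed against the generated `NetworkRequirements` shape from this diff (the concrete ports and domains mirror the protocol unit test further down and are not recommendations):

```ts
// Mirrors the generated v2 NetworkRequirements TypeScript type in this diff;
// every constraint is nullable because requirements may leave it unset.
type NetworkRequirements = {
  enabled: boolean | null;
  httpPort: number | null;
  socksPort: number | null;
  allowUpstreamProxy: boolean | null;
  dangerouslyAllowNonLoopbackProxy: boolean | null;
  dangerouslyAllowNonLoopbackAdmin: boolean | null;
  allowedDomains: Array<string> | null;
  deniedDomains: Array<string> | null;
  allowUnixSockets: Array<string> | null;
  allowLocalBinding: boolean | null;
};

// Illustrative values only, not a recommendation.
const network: NetworkRequirements = {
  enabled: true,
  httpPort: 8080,
  socksPort: 1080,
  allowUpstreamProxy: false,
  dangerouslyAllowNonLoopbackProxy: false,
  dangerouslyAllowNonLoopbackAdmin: false,
  allowedDomains: ["api.openai.com"],
  deniedDomains: ["example.com"],
  allowUnixSockets: ["/tmp/proxy.sock"],
  allowLocalBinding: true,
};
```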
### Example: Start or resume a thread
@@ -355,6 +371,33 @@ You can cancel a running Turn with `turn/interrupt`.
The server requests cancellations for running subprocesses, then emits a `turn/completed` event with `status: "interrupted"`. Rely on the `turn/completed` to know when Codex-side cleanup is done.
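A sketch of that interrupt-then-wait pattern, assuming hypothetical `sendRequest`/`notifications()` helpers; the exact notification field paths are an assumption based on the `turn/completed` payloads described in this document:

```ts
// Hypothetical helpers: sendRequest issues a JSON-RPC request; notifications()
// yields server notifications as { method, params } objects as they arrive.
declare function sendRequest<T>(method: string, params: unknown): Promise<T>;
declare function notifications(): AsyncIterable<{ method: string; params: any }>;

async function interruptAndWait(threadId: string, turnId: string): Promise<void> {
  // The empty {} response only acknowledges the request; cleanup is finished
  // once the matching turn/completed notification reports status "interrupted".
  await sendRequest("turn/interrupt", { threadId, turnId });
  for await (const note of notifications()) {
    if (
      note.method === "turn/completed" &&
      note.params?.turn?.id === turnId &&
      note.params?.turn?.status === "interrupted"
    ) {
      return;
    }
  }
}
```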
### Example: Clean background terminals
Use `thread/backgroundTerminals/clean` to terminate all running background terminals associated with a thread. This method is experimental and requires `capabilities.experimentalApi = true`.
```json
{ "method": "thread/backgroundTerminals/clean", "id": 35, "params": {
"threadId": "thr_123"
} }
{ "id": 35, "result": {} }
```
### Example: Steer an active turn
Use `turn/steer` to append additional user input to the currently active turn. This does not emit
`turn/started` and does not accept turn context overrides.
```json
{ "method": "turn/steer", "id": 32, "params": {
"threadId": "thr_123",
"input": [ { "type": "text", "text": "Actually focus on failing tests first." } ],
"expectedTurnId": "turn_456"
} }
{ "id": 32, "result": { "turnId": "turn_456" } }
```
`expectedTurnId` is required. If there is no active turn (or `expectedTurnId` does not match the active turn), the request fails with an `invalid request` error.
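A sketch of the steer-or-start decision a client might make, assuming a hypothetical `sendRequest` helper that rejects when the server returns the `invalid request` error described above:

```ts
// Hypothetical transport helper that rejects when the server returns a JSON-RPC error.
declare function sendRequest<T>(method: string, params: unknown): Promise<T>;

// Steer the active turn when the expected id still matches; otherwise start a
// fresh turn. The server rejects a missing/stale expectedTurnId with an
// "invalid request" error, which is what the catch branch relies on.
async function steerOrStart(threadId: string, activeTurnId: string | null, text: string) {
  const input = [{ type: "text", text }];
  if (activeTurnId !== null) {
    try {
      return await sendRequest("turn/steer", { threadId, input, expectedTurnId: activeTurnId });
    } catch {
      // Fall through: no active turn, or a different turn is now active.
    }
  }
  return sendRequest("turn/start", { threadId, input });
}
```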
### Example: Request a code review
Use `review/start` to run Codex's reviewer on the currently checked-out project. The request takes the thread id plus a `target` describing what should be reviewed:
@@ -472,7 +515,7 @@ Today both notifications carry an empty `items` array even when item events were
- `commandExecution``{id, command, cwd, status, commandActions, aggregatedOutput?, exitCode?, durationMs?}` for sandboxed commands; `status` is `inProgress`, `completed`, `failed`, or `declined`.
- `fileChange``{id, changes, status}` describing proposed edits; `changes` list `{path, kind, diff}` and `status` is `inProgress`, `completed`, `failed`, or `declined`.
- `mcpToolCall``{id, server, tool, status, arguments, result?, error?}` describing MCP calls; `status` is `inProgress`, `completed`, or `failed`.
- `collabToolCall``{id, tool, status, senderThreadId, receiverThreadId?, newThreadId?, prompt?, agentStatus?}` describing collab tool calls (`spawn_agent`, `send_input`, `wait`, `close_agent`); `status` is `inProgress`, `completed`, or `failed`.
- `collabToolCall``{id, tool, status, senderThreadId, receiverThreadId?, newThreadId?, prompt?, agentStatus?}` describing collab tool calls (`spawn_agent`, `send_input`, `resume_agent`, `wait`, `close_agent`); `status` is `inProgress`, `completed`, or `failed`.
- `webSearch``{id, query, action?}` for a web search request issued by the agent; `action` mirrors the Responses API web_search action payload (`search`, `open_page`, `find_in_page`) and may be omitted until completion.
- `imageView``{id, path}` emitted when the agent invokes the image viewer tool.
- `enteredReviewMode``{id, review}` sent when the reviewer starts; `review` is a short user-facing label such as `"current changes"` or the requested target description.
@@ -626,11 +669,20 @@ $skill-creator Add a new skill for triaging flaky CI and include step-by-step us
```
Use `skills/list` to fetch the available skills (optionally scoped by `cwds`, with `forceReload`).
You can also add `perCwdExtraUserRoots` to scan additional absolute paths as `user` scope for specific `cwd` entries.
Entries whose `cwd` is not present in `cwds` are ignored.
`skills/list` might reuse a cached skills result per `cwd`; setting `forceReload` to `true` refreshes the result from disk.
```json
{ "method": "skills/list", "id": 25, "params": {
"cwds": ["/Users/me/project"],
"forceReload": false
"cwds": ["/Users/me/project", "/Users/me/other-project"],
"forceReload": true,
"perCwdExtraUserRoots": [
{
"cwd": "/Users/me/project",
"extraUserRoots": ["/Users/me/shared-skills"]
}
]
} }
{ "id": 25, "result": {
"data": [{
@@ -675,7 +727,9 @@ Use `app/list` to fetch available apps (connectors). Each entry includes metadat
```json
{ "method": "app/list", "id": 50, "params": {
"cursor": null,
"limit": 50
"limit": 50,
"threadId": "thr_123",
"forceRefetch": false
} }
{ "id": 50, "result": {
"data": [
@@ -694,6 +748,32 @@ Use `app/list` to fetch available apps (connectors). Each entry includes metadat
} }
```
When `threadId` is provided, app feature gating (`Feature::Apps`) is evaluated using that thread's config snapshot. When omitted, the latest global config is used.
`app/list` returns after both accessible apps and directory apps are loaded. Set `forceRefetch: true` to bypass app caches and fetch fresh data from sources. Cache entries are only replaced when those refetches succeed.
The server also emits `app/list/updated` notifications whenever either source (accessible apps or directory apps) finishes loading. Each notification includes the latest merged app list.
```json
{
"method": "app/list/updated",
"params": {
"data": [
{
"id": "demo-app",
"name": "Demo App",
"description": "Example connector for documentation.",
"logoUrl": "https://example.com/demo-app.png",
"logoUrlDark": null,
"distributionChannel": null,
"installUrl": "https://chatgpt.com/apps/demo-app/demo-app",
"isAccessible": true
}
]
}
}
```
Invoke an app by inserting `$<app-slug>` in the text input. The slug is derived from the app name: lowercased, with non-alphanumeric characters replaced by `-` (for example, "Demo App" becomes `$demo-app`; see the sketch below). Add a `mention` input item (recommended) so the server uses the exact `app://<connector-id>` path rather than guessing by name.
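A hypothetical helper (not part of the API) illustrating that slug rule; the exact handling of consecutive non-alphanumeric characters may differ from the server's implementation:

```ts
// Hypothetical helper illustrating the slug rule above; collapses runs of
// non-alphanumeric characters to a single "-" and trims leading/trailing dashes.
function appSlug(name: string): string {
  return name
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-")
    .replace(/^-+|-+$/g, "");
}

appSlug("Demo App"); // "demo-app" -> type "$demo-app" in the input box
```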
Example:
@@ -888,6 +968,7 @@ Examples of descriptor strings:
- `thread/start.mockExperimentalField` (field-level gate)
### For maintainers: Adding experimental fields and methods
Use this checklist when introducing a field/method that should only be available when the client opts into experimental APIs.
At runtime, clients must send `initialize` with `capabilities.experimentalApi = true` to use experimental methods or fields.
@@ -908,7 +989,7 @@ At runtime, clients must send `initialize` with `capabilities.experimentalApi =
# Include experimental API fields/methods in fixtures.
just write-app-server-schema --experimental
```
5. Verify the protocol crate:
```bash

View File

@@ -596,6 +596,28 @@ pub(crate) async fn apply_bespoke_event_handling(
.send_server_notification(ServerNotification::ItemCompleted(notification))
.await;
}
EventMsg::CollabResumeBegin(begin_event) => {
let item = collab_resume_begin_item(begin_event);
let notification = ItemStartedNotification {
thread_id: conversation_id.to_string(),
turn_id: event_turn_id.clone(),
item,
};
outgoing
.send_server_notification(ServerNotification::ItemStarted(notification))
.await;
}
EventMsg::CollabResumeEnd(end_event) => {
let item = collab_resume_end_item(end_event);
let notification = ItemCompletedNotification {
thread_id: conversation_id.to_string(),
turn_id: event_turn_id.clone(),
item,
};
outgoing
.send_server_notification(ServerNotification::ItemCompleted(notification))
.await;
}
EventMsg::AgentMessageContentDelta(event) => {
let codex_protocol::protocol::AgentMessageContentDeltaEvent { item_id, delta, .. } =
event;
@@ -1093,7 +1115,7 @@ pub(crate) async fn apply_bespoke_event_handling(
),
data: None,
};
outgoing.send_error(request_id, error).await;
outgoing.send_error(request_id.clone(), error).await;
return;
}
}
@@ -1107,7 +1129,7 @@ pub(crate) async fn apply_bespoke_event_handling(
),
data: None,
};
outgoing.send_error(request_id, error).await;
outgoing.send_error(request_id.clone(), error).await;
return;
}
};
@@ -1758,6 +1780,44 @@ async fn on_command_execution_request_approval_response(
}
}
fn collab_resume_begin_item(
begin_event: codex_core::protocol::CollabResumeBeginEvent,
) -> ThreadItem {
ThreadItem::CollabAgentToolCall {
id: begin_event.call_id,
tool: CollabAgentTool::ResumeAgent,
status: V2CollabToolCallStatus::InProgress,
sender_thread_id: begin_event.sender_thread_id.to_string(),
receiver_thread_ids: vec![begin_event.receiver_thread_id.to_string()],
prompt: None,
agents_states: HashMap::new(),
}
}
fn collab_resume_end_item(end_event: codex_core::protocol::CollabResumeEndEvent) -> ThreadItem {
let status = match &end_event.status {
codex_protocol::protocol::AgentStatus::Errored(_)
| codex_protocol::protocol::AgentStatus::NotFound => V2CollabToolCallStatus::Failed,
_ => V2CollabToolCallStatus::Completed,
};
let receiver_id = end_event.receiver_thread_id.to_string();
let agents_states = [(
receiver_id.clone(),
V2CollabAgentStatus::from(end_event.status),
)]
.into_iter()
.collect();
ThreadItem::CollabAgentToolCall {
id: end_event.call_id,
tool: CollabAgentTool::ResumeAgent,
status,
sender_thread_id: end_event.sender_thread_id.to_string(),
receiver_thread_ids: vec![receiver_id],
prompt: None,
agents_states,
}
}
/// similar to handle_mcp_tool_call_begin in exec
async fn construct_mcp_tool_call_notification(
begin_event: McpToolCallBeginEvent,
@@ -1831,12 +1891,15 @@ async fn construct_mcp_tool_call_end_notification(
mod tests {
use super::*;
use crate::CHANNEL_CAPACITY;
use crate::outgoing_message::OutgoingEnvelope;
use crate::outgoing_message::OutgoingMessage;
use crate::outgoing_message::OutgoingMessageSender;
use anyhow::Result;
use anyhow::anyhow;
use anyhow::bail;
use codex_app_server_protocol::TurnPlanStepStatus;
use codex_core::protocol::CollabResumeBeginEvent;
use codex_core::protocol::CollabResumeEndEvent;
use codex_core::protocol::CreditsSnapshot;
use codex_core::protocol::McpInvocation;
use codex_core::protocol::RateLimitSnapshot;
@@ -1858,6 +1921,21 @@ mod tests {
Arc::new(Mutex::new(HashMap::new()))
}
async fn recv_broadcast_message(
rx: &mut mpsc::Receiver<OutgoingEnvelope>,
) -> Result<OutgoingMessage> {
let envelope = rx
.recv()
.await
.ok_or_else(|| anyhow!("should send one message"))?;
match envelope {
OutgoingEnvelope::Broadcast { message } => Ok(message),
OutgoingEnvelope::ToConnection { connection_id, .. } => {
bail!("unexpected targeted message for connection {connection_id:?}")
}
}
}
#[test]
fn file_change_accept_for_session_maps_to_approved_for_session() {
let (decision, completion_status) =
@@ -1866,6 +1944,55 @@ mod tests {
assert_eq!(completion_status, None);
}
#[test]
fn collab_resume_begin_maps_to_item_started_resume_agent() {
let event = CollabResumeBeginEvent {
call_id: "call-1".to_string(),
sender_thread_id: ThreadId::new(),
receiver_thread_id: ThreadId::new(),
};
let item = collab_resume_begin_item(event.clone());
let expected = ThreadItem::CollabAgentToolCall {
id: event.call_id,
tool: CollabAgentTool::ResumeAgent,
status: V2CollabToolCallStatus::InProgress,
sender_thread_id: event.sender_thread_id.to_string(),
receiver_thread_ids: vec![event.receiver_thread_id.to_string()],
prompt: None,
agents_states: HashMap::new(),
};
assert_eq!(item, expected);
}
#[test]
fn collab_resume_end_maps_to_item_completed_resume_agent() {
let event = CollabResumeEndEvent {
call_id: "call-2".to_string(),
sender_thread_id: ThreadId::new(),
receiver_thread_id: ThreadId::new(),
status: codex_protocol::protocol::AgentStatus::NotFound,
};
let item = collab_resume_end_item(event.clone());
let receiver_id = event.receiver_thread_id.to_string();
let expected = ThreadItem::CollabAgentToolCall {
id: event.call_id,
tool: CollabAgentTool::ResumeAgent,
status: V2CollabToolCallStatus::Failed,
sender_thread_id: event.sender_thread_id.to_string(),
receiver_thread_ids: vec![receiver_id.clone()],
prompt: None,
agents_states: [(
receiver_id,
V2CollabAgentStatus::from(codex_protocol::protocol::AgentStatus::NotFound),
)]
.into_iter()
.collect(),
};
assert_eq!(item, expected);
}
#[tokio::test]
async fn test_handle_error_records_message() -> Result<()> {
let conversation_id = ThreadId::new();
@@ -1910,10 +2037,7 @@ mod tests {
)
.await;
let msg = rx
.recv()
.await
.ok_or_else(|| anyhow!("should send one notification"))?;
let msg = recv_broadcast_message(&mut rx).await?;
match msg {
OutgoingMessage::AppServerNotification(ServerNotification::TurnCompleted(n)) => {
assert_eq!(n.turn.id, event_turn_id);
@@ -1952,10 +2076,7 @@ mod tests {
)
.await;
let msg = rx
.recv()
.await
.ok_or_else(|| anyhow!("should send one notification"))?;
let msg = recv_broadcast_message(&mut rx).await?;
match msg {
OutgoingMessage::AppServerNotification(ServerNotification::TurnCompleted(n)) => {
assert_eq!(n.turn.id, event_turn_id);
@@ -1994,10 +2115,7 @@ mod tests {
)
.await;
let msg = rx
.recv()
.await
.ok_or_else(|| anyhow!("should send one notification"))?;
let msg = recv_broadcast_message(&mut rx).await?;
match msg {
OutgoingMessage::AppServerNotification(ServerNotification::TurnCompleted(n)) => {
assert_eq!(n.turn.id, event_turn_id);
@@ -2046,10 +2164,7 @@ mod tests {
)
.await;
let msg = rx
.recv()
.await
.ok_or_else(|| anyhow!("should send one notification"))?;
let msg = recv_broadcast_message(&mut rx).await?;
match msg {
OutgoingMessage::AppServerNotification(ServerNotification::TurnPlanUpdated(n)) => {
assert_eq!(n.thread_id, conversation_id.to_string());
@@ -2117,10 +2232,7 @@ mod tests {
)
.await;
let first = rx
.recv()
.await
.ok_or_else(|| anyhow!("expected usage notification"))?;
let first = recv_broadcast_message(&mut rx).await?;
match first {
OutgoingMessage::AppServerNotification(
ServerNotification::ThreadTokenUsageUpdated(payload),
@@ -2136,10 +2248,7 @@ mod tests {
other => bail!("unexpected notification: {other:?}"),
}
let second = rx
.recv()
.await
.ok_or_else(|| anyhow!("expected rate limit notification"))?;
let second = recv_broadcast_message(&mut rx).await?;
match second {
OutgoingMessage::AppServerNotification(
ServerNotification::AccountRateLimitsUpdated(payload),
@@ -2276,10 +2385,7 @@ mod tests {
.await;
// Verify: A turn 1
let msg = rx
.recv()
.await
.ok_or_else(|| anyhow!("should send first notification"))?;
let msg = recv_broadcast_message(&mut rx).await?;
match msg {
OutgoingMessage::AppServerNotification(ServerNotification::TurnCompleted(n)) => {
assert_eq!(n.turn.id, a_turn1);
@@ -2297,10 +2403,7 @@ mod tests {
}
// Verify: B turn 1
let msg = rx
.recv()
.await
.ok_or_else(|| anyhow!("should send second notification"))?;
let msg = recv_broadcast_message(&mut rx).await?;
match msg {
OutgoingMessage::AppServerNotification(ServerNotification::TurnCompleted(n)) => {
assert_eq!(n.turn.id, b_turn1);
@@ -2318,10 +2421,7 @@ mod tests {
}
// Verify: A turn 2
let msg = rx
.recv()
.await
.ok_or_else(|| anyhow!("should send third notification"))?;
let msg = recv_broadcast_message(&mut rx).await?;
match msg {
OutgoingMessage::AppServerNotification(ServerNotification::TurnCompleted(n)) => {
assert_eq!(n.turn.id, a_turn2);
@@ -2487,10 +2587,7 @@ mod tests {
)
.await;
let msg = rx
.recv()
.await
.ok_or_else(|| anyhow!("should send one notification"))?;
let msg = recv_broadcast_message(&mut rx).await?;
match msg {
OutgoingMessage::AppServerNotification(ServerNotification::TurnDiffUpdated(
notification,

File diff suppressed because it is too large

View File

@@ -9,6 +9,7 @@ use codex_app_server_protocol::ConfigValueWriteParams;
use codex_app_server_protocol::ConfigWriteErrorCode;
use codex_app_server_protocol::ConfigWriteResponse;
use codex_app_server_protocol::JSONRPCErrorError;
use codex_app_server_protocol::NetworkRequirements;
use codex_app_server_protocol::SandboxMode;
use codex_core::config::ConfigService;
use codex_core::config::ConfigServiceError;
@@ -17,13 +18,19 @@ use codex_core::config_loader::ConfigRequirementsToml;
use codex_core::config_loader::LoaderOverrides;
use codex_core::config_loader::ResidencyRequirement as CoreResidencyRequirement;
use codex_core::config_loader::SandboxModeRequirement as CoreSandboxModeRequirement;
use codex_protocol::config_types::WebSearchMode;
use serde_json::json;
use std::path::PathBuf;
use std::sync::Arc;
use std::sync::RwLock;
use toml::Value as TomlValue;
#[derive(Clone)]
pub(crate) struct ConfigApi {
service: ConfigService,
codex_home: PathBuf,
cli_overrides: Vec<(String, TomlValue)>,
loader_overrides: LoaderOverrides,
cloud_requirements: Arc<RwLock<CloudRequirementsLoader>>,
}
impl ConfigApi {
@@ -31,30 +38,42 @@ impl ConfigApi {
codex_home: PathBuf,
cli_overrides: Vec<(String, TomlValue)>,
loader_overrides: LoaderOverrides,
cloud_requirements: CloudRequirementsLoader,
cloud_requirements: Arc<RwLock<CloudRequirementsLoader>>,
) -> Self {
Self {
service: ConfigService::new(
codex_home,
cli_overrides,
loader_overrides,
cloud_requirements,
),
codex_home,
cli_overrides,
loader_overrides,
cloud_requirements,
}
}
fn config_service(&self) -> ConfigService {
let cloud_requirements = self
.cloud_requirements
.read()
.map(|guard| guard.clone())
.unwrap_or_default();
ConfigService::new(
self.codex_home.clone(),
self.cli_overrides.clone(),
self.loader_overrides.clone(),
cloud_requirements,
)
}
pub(crate) async fn read(
&self,
params: ConfigReadParams,
) -> Result<ConfigReadResponse, JSONRPCErrorError> {
self.service.read(params).await.map_err(map_error)
self.config_service().read(params).await.map_err(map_error)
}
pub(crate) async fn config_requirements_read(
&self,
) -> Result<ConfigRequirementsReadResponse, JSONRPCErrorError> {
let requirements = self
.service
.config_service()
.read_requirements()
.await
.map_err(map_error)?
@@ -67,14 +86,20 @@ impl ConfigApi {
&self,
params: ConfigValueWriteParams,
) -> Result<ConfigWriteResponse, JSONRPCErrorError> {
self.service.write_value(params).await.map_err(map_error)
self.config_service()
.write_value(params)
.await
.map_err(map_error)
}
pub(crate) async fn batch_write(
&self,
params: ConfigBatchWriteParams,
) -> Result<ConfigWriteResponse, JSONRPCErrorError> {
self.service.batch_write(params).await.map_err(map_error)
self.config_service()
.batch_write(params)
.await
.map_err(map_error)
}
}
@@ -92,9 +117,20 @@ fn map_requirements_toml_to_api(requirements: ConfigRequirementsToml) -> ConfigR
.filter_map(map_sandbox_mode_requirement_to_api)
.collect()
}),
allowed_web_search_modes: requirements.allowed_web_search_modes.map(|modes| {
let mut normalized = modes
.into_iter()
.map(Into::into)
.collect::<Vec<WebSearchMode>>();
if !normalized.contains(&WebSearchMode::Disabled) {
normalized.push(WebSearchMode::Disabled);
}
normalized
}),
enforce_residency: requirements
.enforce_residency
.map(map_residency_requirement_to_api),
network: requirements.network.map(map_network_requirements_to_api),
}
}
@@ -115,6 +151,23 @@ fn map_residency_requirement_to_api(
}
}
fn map_network_requirements_to_api(
network: codex_core::config_loader::NetworkRequirementsToml,
) -> NetworkRequirements {
NetworkRequirements {
enabled: network.enabled,
http_port: network.http_port,
socks_port: network.socks_port,
allow_upstream_proxy: network.allow_upstream_proxy,
dangerously_allow_non_loopback_proxy: network.dangerously_allow_non_loopback_proxy,
dangerously_allow_non_loopback_admin: network.dangerously_allow_non_loopback_admin,
allowed_domains: network.allowed_domains,
denied_domains: network.denied_domains,
allow_unix_sockets: network.allow_unix_sockets,
allow_local_binding: network.allow_local_binding,
}
}
fn map_error(err: ConfigServiceError) -> JSONRPCErrorError {
if let Some(code) = err.write_error_code() {
return config_write_error(code, err.to_string());
@@ -140,6 +193,7 @@ fn config_write_error(code: ConfigWriteErrorCode, message: impl Into<String>) ->
#[cfg(test)]
mod tests {
use super::*;
use codex_core::config_loader::NetworkRequirementsToml as CoreNetworkRequirementsToml;
use codex_protocol::protocol::AskForApproval as CoreAskForApproval;
use pretty_assertions::assert_eq;
@@ -154,9 +208,24 @@ mod tests {
CoreSandboxModeRequirement::ReadOnly,
CoreSandboxModeRequirement::ExternalSandbox,
]),
allowed_web_search_modes: Some(vec![
codex_core::config_loader::WebSearchModeRequirement::Cached,
]),
mcp_servers: None,
rules: None,
enforce_residency: Some(CoreResidencyRequirement::Us),
network: Some(CoreNetworkRequirementsToml {
enabled: Some(true),
http_port: Some(8080),
socks_port: Some(1080),
allow_upstream_proxy: Some(false),
dangerously_allow_non_loopback_proxy: Some(false),
dangerously_allow_non_loopback_admin: Some(false),
allowed_domains: Some(vec!["api.openai.com".to_string()]),
denied_domains: Some(vec!["example.com".to_string()]),
allow_unix_sockets: Some(vec!["/tmp/proxy.sock".to_string()]),
allow_local_binding: Some(true),
}),
};
let mapped = map_requirements_toml_to_api(requirements);
@@ -172,9 +241,48 @@ mod tests {
mapped.allowed_sandbox_modes,
Some(vec![SandboxMode::ReadOnly]),
);
assert_eq!(
mapped.allowed_web_search_modes,
Some(vec![WebSearchMode::Cached, WebSearchMode::Disabled]),
);
assert_eq!(
mapped.enforce_residency,
Some(codex_app_server_protocol::ResidencyRequirement::Us),
);
assert_eq!(
mapped.network,
Some(NetworkRequirements {
enabled: Some(true),
http_port: Some(8080),
socks_port: Some(1080),
allow_upstream_proxy: Some(false),
dangerously_allow_non_loopback_proxy: Some(false),
dangerously_allow_non_loopback_admin: Some(false),
allowed_domains: Some(vec!["api.openai.com".to_string()]),
denied_domains: Some(vec!["example.com".to_string()]),
allow_unix_sockets: Some(vec!["/tmp/proxy.sock".to_string()]),
allow_local_binding: Some(true),
}),
);
}
#[test]
fn map_requirements_toml_to_api_normalizes_allowed_web_search_modes() {
let requirements = ConfigRequirementsToml {
allowed_approval_policies: None,
allowed_sandbox_modes: None,
allowed_web_search_modes: Some(Vec::new()),
mcp_servers: None,
rules: None,
enforce_residency: None,
network: None,
};
let mapped = map_requirements_toml_to_api(requirements);
assert_eq!(
mapped.allowed_web_search_modes,
Some(vec![WebSearchMode::Disabled])
);
}
}
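
In the config_api.rs change above, ConfigApi stops holding a single long-lived ConfigService and instead stores its construction inputs, snapshotting the cloud requirements out of an `Arc<RwLock<...>>` and rebuilding the service per request. A minimal, self-contained sketch of that snapshot-per-call pattern (the `Requirements`/`Service`/`Api` types below are invented stand-ins, not the crate's real ones):

```rust
use std::sync::Arc;
use std::sync::RwLock;

// Illustrative stand-ins for CloudRequirementsLoader / ConfigService;
// the field names are invented for this sketch.
#[derive(Clone, Default)]
struct Requirements {
    allowed_domains: Vec<String>,
}

struct Service {
    requirements: Requirements,
}

struct Api {
    // Shared, mutable requirements; another task may refresh them at any time.
    requirements: Arc<RwLock<Requirements>>,
}

impl Api {
    // Rebuild the service on every call so each request sees the latest
    // requirements instead of whatever was loaded at construction time.
    fn service(&self) -> Service {
        let requirements = self
            .requirements
            .read()
            .map(|guard| guard.clone())
            // A poisoned lock falls back to defaults rather than panicking.
            .unwrap_or_default();
        Service { requirements }
    }
}

fn main() {
    let shared = Arc::new(RwLock::new(Requirements::default()));
    let api = Api {
        requirements: Arc::clone(&shared),
    };

    // Another component updates the shared requirements...
    shared
        .write()
        .expect("lock poisoned")
        .allowed_domains
        .push("api.openai.com".to_string());

    // ...and the next call observes the update without rebuilding `Api`.
    assert_eq!(api.service().requirements.allowed_domains.len(), 1);
}
```

Cloning under the read lock keeps the hold time short: any slow work happens on the snapshot, not while the guard is held.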

View File

@@ -1,2 +1,3 @@
pub(crate) const INVALID_REQUEST_ERROR_CODE: i64 = -32600;
pub(crate) const INTERNAL_ERROR_CODE: i64 = -32603;
pub(crate) const OVERLOADED_ERROR_CODE: i64 = -32001;
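
The new OVERLOADED_ERROR_CODE joins the standard JSON-RPC codes; presumably it is returned when the server cannot keep up with a client (for example when a bounded outbound channel is full), though this diff view does not show its call sites. For reference, a generic JSON-RPC 2.0 error response carrying that code would look like the following sketch, built with serde_json rather than the crate's own protocol types:

```rust
use serde_json::json;

const OVERLOADED_ERROR_CODE: i64 = -32001;

fn main() {
    // Generic JSON-RPC 2.0 error shape; field names follow the spec,
    // not necessarily this crate's serializers.
    let response = json!({
        "jsonrpc": "2.0",
        "id": 42,
        "error": {
            "code": OVERLOADED_ERROR_CODE,
            "message": "server overloaded; retry later"
        }
    });
    println!("{response}");
}
```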

View File

@@ -8,14 +8,27 @@ use codex_core::config::ConfigBuilder;
use codex_core::config_loader::CloudRequirementsLoader;
use codex_core::config_loader::ConfigLayerStackOrdering;
use codex_core::config_loader::LoaderOverrides;
use std::collections::HashMap;
use std::collections::VecDeque;
use std::io::ErrorKind;
use std::io::Result as IoResult;
use std::path::PathBuf;
use std::sync::Arc;
use std::sync::atomic::AtomicBool;
use crate::message_processor::MessageProcessor;
use crate::message_processor::MessageProcessorArgs;
use crate::outgoing_message::OutgoingMessage;
use crate::outgoing_message::ConnectionId;
use crate::outgoing_message::OutgoingEnvelope;
use crate::outgoing_message::OutgoingMessageSender;
use crate::transport::CHANNEL_CAPACITY;
use crate::transport::ConnectionState;
use crate::transport::OutboundConnectionState;
use crate::transport::TransportEvent;
use crate::transport::has_initialized_connections;
use crate::transport::route_outgoing_envelope;
use crate::transport::start_stdio_connection;
use crate::transport::start_websocket_acceptor;
use codex_app_server_protocol::ConfigLayerSource;
use codex_app_server_protocol::ConfigWarningNotification;
use codex_app_server_protocol::JSONRPCMessage;
@@ -26,13 +39,9 @@ use codex_core::check_execpolicy_for_warnings;
use codex_core::config_loader::ConfigLoadError;
use codex_core::config_loader::TextRange as CoreTextRange;
use codex_feedback::CodexFeedback;
use tokio::io::AsyncBufReadExt;
use tokio::io::AsyncWriteExt;
use tokio::io::BufReader;
use tokio::io::{self};
use tokio::sync::mpsc;
use tokio::task::JoinHandle;
use toml::Value as TomlValue;
use tracing::debug;
use tracing::error;
use tracing::info;
use tracing::warn;
@@ -51,11 +60,29 @@ mod fuzzy_file_search;
mod message_processor;
mod models;
mod outgoing_message;
mod transport;
/// Size of the bounded channels used to communicate between tasks. The value
/// is a balance between throughput and memory usage; 128 messages should be
/// plenty for an interactive CLI.
const CHANNEL_CAPACITY: usize = 128;
pub use crate::transport::AppServerTransport;
/// Control-plane messages from the processor/transport side to the outbound router task.
///
/// `run_main_with_transport` now uses two loops/tasks:
/// - processor loop: handles incoming JSON-RPC and request dispatch
/// - outbound loop: performs potentially slow writes to per-connection writers
///
/// `OutboundControlEvent` keeps those loops coordinated without sharing mutable
/// connection state directly. In particular, the outbound loop needs to know
/// when a connection opens/closes so it can route messages correctly.
enum OutboundControlEvent {
/// Register a new writer for an opened connection.
Opened {
connection_id: ConnectionId,
writer: mpsc::Sender<crate::outgoing_message::OutgoingMessage>,
initialized: Arc<AtomicBool>,
},
/// Remove state for a closed/disconnected connection.
Closed { connection_id: ConnectionId },
}
fn config_warning_from_error(
summary: impl Into<String>,
@@ -173,32 +200,41 @@ pub async fn run_main(
loader_overrides: LoaderOverrides,
default_analytics_enabled: bool,
) -> IoResult<()> {
// Set up channels.
let (incoming_tx, mut incoming_rx) = mpsc::channel::<JSONRPCMessage>(CHANNEL_CAPACITY);
let (outgoing_tx, mut outgoing_rx) = mpsc::channel::<OutgoingMessage>(CHANNEL_CAPACITY);
run_main_with_transport(
codex_linux_sandbox_exe,
cli_config_overrides,
loader_overrides,
default_analytics_enabled,
AppServerTransport::Stdio,
)
.await
}
// Task: read from stdin, push to `incoming_tx`.
let stdin_reader_handle = tokio::spawn({
async move {
let stdin = io::stdin();
let reader = BufReader::new(stdin);
let mut lines = reader.lines();
pub async fn run_main_with_transport(
codex_linux_sandbox_exe: Option<PathBuf>,
cli_config_overrides: CliConfigOverrides,
loader_overrides: LoaderOverrides,
default_analytics_enabled: bool,
transport: AppServerTransport,
) -> IoResult<()> {
let (transport_event_tx, mut transport_event_rx) =
mpsc::channel::<TransportEvent>(CHANNEL_CAPACITY);
let (outgoing_tx, mut outgoing_rx) = mpsc::channel::<OutgoingEnvelope>(CHANNEL_CAPACITY);
let (outbound_control_tx, mut outbound_control_rx) =
mpsc::channel::<OutboundControlEvent>(CHANNEL_CAPACITY);
while let Some(line) = lines.next_line().await.unwrap_or_default() {
match serde_json::from_str::<JSONRPCMessage>(&line) {
Ok(msg) => {
if incoming_tx.send(msg).await.is_err() {
// Receiver gone; nothing left to do.
break;
}
}
Err(e) => error!("Failed to deserialize JSONRPCMessage: {e}"),
}
}
debug!("stdin reader finished (EOF)");
let mut stdio_handles = Vec::<JoinHandle<()>>::new();
let mut websocket_accept_handle = None;
match transport {
AppServerTransport::Stdio => {
start_stdio_connection(transport_event_tx.clone(), &mut stdio_handles).await?;
}
});
AppServerTransport::WebSocket { bind_address } => {
websocket_accept_handle =
Some(start_websocket_acceptor(bind_address, transport_event_tx.clone()).await?);
}
}
let shutdown_when_no_connections = matches!(transport, AppServerTransport::Stdio);
// Parse CLI overrides once and derive the base Config eagerly so later
// components do not need to work with raw TOML values.
@@ -267,9 +303,7 @@ pub async fn run_main(
}
};
if let Ok(Some(err)) =
check_execpolicy_for_warnings(&config.features, &config.config_layer_stack).await
{
if let Ok(Some(err)) = check_execpolicy_for_warnings(&config.config_layer_stack).await {
let (path, range) = exec_policy_warning_location(&err);
let message = ConfigWarningNotification {
summary: "Error parsing rules; custom rules not applied.".to_string(),
@@ -327,15 +361,71 @@ pub async fn run_main(
}
}
// Task: process incoming messages.
let transport_event_tx_for_outbound = transport_event_tx.clone();
let outbound_handle = tokio::spawn(async move {
let mut outbound_connections = HashMap::<ConnectionId, OutboundConnectionState>::new();
let mut pending_closed_connections = VecDeque::<ConnectionId>::new();
loop {
tokio::select! {
biased;
event = outbound_control_rx.recv() => {
let Some(event) = event else {
break;
};
match event {
OutboundControlEvent::Opened {
connection_id,
writer,
initialized,
} => {
outbound_connections.insert(
connection_id,
OutboundConnectionState::new(writer, initialized),
);
}
OutboundControlEvent::Closed { connection_id } => {
outbound_connections.remove(&connection_id);
}
}
}
envelope = outgoing_rx.recv() => {
let Some(envelope) = envelope else {
break;
};
let disconnected_connections =
route_outgoing_envelope(&mut outbound_connections, envelope).await;
pending_closed_connections.extend(disconnected_connections);
}
}
while let Some(connection_id) = pending_closed_connections.front().copied() {
match transport_event_tx_for_outbound
.try_send(TransportEvent::ConnectionClosed { connection_id })
{
Ok(()) => {
pending_closed_connections.pop_front();
}
Err(mpsc::error::TrySendError::Full(_)) => {
break;
}
Err(mpsc::error::TrySendError::Closed(_)) => {
return;
}
}
}
}
info!("outbound router task exited (channel closed)");
});
let processor_handle = tokio::spawn({
let outgoing_message_sender = OutgoingMessageSender::new(outgoing_tx);
let outgoing_message_sender = Arc::new(OutgoingMessageSender::new(outgoing_tx));
let outbound_control_tx = outbound_control_tx;
let cli_overrides: Vec<(String, TomlValue)> = cli_kv_overrides.clone();
let loader_overrides = loader_overrides_for_config_api;
let mut processor = MessageProcessor::new(MessageProcessorArgs {
outgoing: outgoing_message_sender,
codex_linux_sandbox_exe,
config: std::sync::Arc::new(config),
config: Arc::new(config),
cli_overrides,
loader_overrides,
cloud_requirements: cloud_requirements.clone(),
@@ -343,25 +433,83 @@ pub async fn run_main(
config_warnings,
});
let mut thread_created_rx = processor.thread_created_receiver();
let mut connections = HashMap::<ConnectionId, ConnectionState>::new();
async move {
let mut listen_for_threads = true;
loop {
tokio::select! {
msg = incoming_rx.recv() => {
let Some(msg) = msg else {
event = transport_event_rx.recv() => {
let Some(event) = event else {
break;
};
match msg {
JSONRPCMessage::Request(r) => processor.process_request(r).await,
JSONRPCMessage::Response(r) => processor.process_response(r).await,
JSONRPCMessage::Notification(n) => processor.process_notification(n).await,
JSONRPCMessage::Error(e) => processor.process_error(e).await,
match event {
TransportEvent::ConnectionOpened { connection_id, writer } => {
let outbound_initialized = Arc::new(AtomicBool::new(false));
if outbound_control_tx
.send(OutboundControlEvent::Opened {
connection_id,
writer,
initialized: Arc::clone(&outbound_initialized),
})
.await
.is_err()
{
break;
}
connections.insert(connection_id, ConnectionState::new(outbound_initialized));
}
TransportEvent::ConnectionClosed { connection_id } => {
if outbound_control_tx
.send(OutboundControlEvent::Closed { connection_id })
.await
.is_err()
{
break;
}
connections.remove(&connection_id);
if shutdown_when_no_connections && connections.is_empty() {
break;
}
}
TransportEvent::IncomingMessage { connection_id, message } => {
match message {
JSONRPCMessage::Request(request) => {
let Some(connection_state) = connections.get_mut(&connection_id) else {
warn!("dropping request from unknown connection: {:?}", connection_id);
continue;
};
let was_initialized = connection_state.session.initialized;
processor
.process_request(
connection_id,
request,
&mut connection_state.session,
&connection_state.outbound_initialized,
)
.await;
if !was_initialized && connection_state.session.initialized {
processor.send_initialize_notifications().await;
}
}
JSONRPCMessage::Response(response) => {
processor.process_response(response).await;
}
JSONRPCMessage::Notification(notification) => {
processor.process_notification(notification).await;
}
JSONRPCMessage::Error(err) => {
processor.process_error(err).await;
}
}
}
}
}
created = thread_created_rx.recv(), if listen_for_threads => {
match created {
Ok(thread_id) => {
processor.try_attach_thread_listener(thread_id).await;
if has_initialized_connections(&connections) {
processor.try_attach_thread_listener(thread_id).await;
}
}
Err(tokio::sync::broadcast::error::RecvError::Lagged(_)) => {
// TODO(jif) handle lag.
@@ -382,33 +530,18 @@ pub async fn run_main(
}
});
// Task: write outgoing messages to stdout.
let stdout_writer_handle = tokio::spawn(async move {
let mut stdout = io::stdout();
while let Some(outgoing_message) = outgoing_rx.recv().await {
let Ok(value) = serde_json::to_value(outgoing_message) else {
error!("Failed to convert OutgoingMessage to JSON value");
continue;
};
match serde_json::to_string(&value) {
Ok(mut json) => {
json.push('\n');
if let Err(e) = stdout.write_all(json.as_bytes()).await {
error!("Failed to write to stdout: {e}");
break;
}
}
Err(e) => error!("Failed to serialize JSONRPCMessage: {e}"),
}
}
drop(transport_event_tx);
info!("stdout writer exited (channel closed)");
});
let _ = processor_handle.await;
let _ = outbound_handle.await;
// Wait for all tasks to finish. The typical exit path is the stdin reader
// hitting EOF, which, once it drops `incoming_tx`, propagates shutdown to
// the processor and then to the stdout task.
let _ = tokio::join!(stdin_reader_handle, processor_handle, stdout_writer_handle);
if let Some(handle) = websocket_accept_handle {
handle.abort();
}
for handle in stdio_handles {
let _ = handle.await;
}
Ok(())
}
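
The run_main refactor above splits the server into a processor loop and an outbound router loop: the processor registers and removes per-connection writers via OutboundControlEvent, while the router performs the potentially slow writes and reports dead connections back over a bounded channel, parking their ids in a queue whenever try_send reports that channel is full. A stripped-down sketch of that select-plus-pending-queue shape, using simplified placeholder types rather than the crate's OutgoingEnvelope/ConnectionId machinery:

```rust
use std::collections::HashMap;
use std::collections::VecDeque;
use tokio::sync::mpsc;

type ConnId = u64;

// Placeholder for OutboundControlEvent: register or drop a writer.
enum Control {
    Opened { id: ConnId, writer: mpsc::Sender<String> },
    Closed { id: ConnId },
}

#[tokio::main(flavor = "current_thread")]
async fn main() {
    let (control_tx, mut control_rx) = mpsc::channel::<Control>(16);
    let (out_tx, mut out_rx) = mpsc::channel::<(ConnId, String)>(16);
    let (closed_tx, _closed_rx) = mpsc::channel::<ConnId>(1);

    let router = tokio::spawn(async move {
        let mut writers = HashMap::<ConnId, mpsc::Sender<String>>::new();
        let mut pending_closed = VecDeque::<ConnId>::new();
        loop {
            // `biased` checks control events first so the routing table is
            // up to date before any message is routed.
            tokio::select! {
                biased;
                event = control_rx.recv() => {
                    let Some(event) = event else { break };
                    match event {
                        Control::Opened { id, writer } => { writers.insert(id, writer); }
                        Control::Closed { id } => { writers.remove(&id); }
                    }
                }
                msg = out_rx.recv() => {
                    let Some((id, line)) = msg else { break };
                    let delivered = match writers.get(&id) {
                        Some(writer) => writer.send(line).await.is_ok(),
                        None => true, // unknown connection: drop silently
                    };
                    if !delivered {
                        // The writer task is gone; report the dead connection later.
                        writers.remove(&id);
                        pending_closed.push_back(id);
                    }
                }
            }
            // Drain the queue without blocking: if the notification channel is
            // full, keep the ids and retry on the next loop iteration.
            while let Some(id) = pending_closed.front().copied() {
                match closed_tx.try_send(id) {
                    Ok(()) => { pending_closed.pop_front(); }
                    Err(mpsc::error::TrySendError::Full(_)) => break,
                    Err(mpsc::error::TrySendError::Closed(_)) => return,
                }
            }
        }
    });

    // Minimal usage: open one connection, route one message to it, shut down.
    let (writer_tx, mut writer_rx) = mpsc::channel::<String>(16);
    control_tx
        .send(Control::Opened { id: 1, writer: writer_tx })
        .await
        .expect("router alive");
    out_tx.send((1, "hello".to_string())).await.expect("router alive");
    assert_eq!(writer_rx.recv().await.as_deref(), Some("hello"));

    drop(control_tx);
    drop(out_tx);
    router.await.expect("router task");
}
```

Keeping the dead-connection ids in a local queue rather than awaiting the send is what prevents the router from ever blocking on the control plane while it still has envelopes to write.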

View File

@@ -1,4 +1,6 @@
use codex_app_server::run_main;
use clap::Parser;
use codex_app_server::AppServerTransport;
use codex_app_server::run_main_with_transport;
use codex_arg0::arg0_dispatch_or_else;
use codex_common::CliConfigOverrides;
use codex_core::config_loader::LoaderOverrides;
@@ -8,19 +10,34 @@ use std::path::PathBuf;
// managed config file without writing to /etc.
const MANAGED_CONFIG_PATH_ENV_VAR: &str = "CODEX_APP_SERVER_MANAGED_CONFIG_PATH";
#[derive(Debug, Parser)]
struct AppServerArgs {
/// Transport endpoint URL. Supported values: `stdio://` (default),
/// `ws://IP:PORT`.
#[arg(
long = "listen",
value_name = "URL",
default_value = AppServerTransport::DEFAULT_LISTEN_URL
)]
listen: AppServerTransport,
}
fn main() -> anyhow::Result<()> {
arg0_dispatch_or_else(|codex_linux_sandbox_exe| async move {
let args = AppServerArgs::parse();
let managed_config_path = managed_config_path_from_debug_env();
let loader_overrides = LoaderOverrides {
managed_config_path,
..Default::default()
};
let transport = args.listen;
run_main(
run_main_with_transport(
codex_linux_sandbox_exe,
CliConfigOverrides::default(),
loader_overrides,
false,
transport,
)
.await?;
Ok(())
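
The new `--listen` flag turns a URL into the transport via clap; the parsing itself lives in the transport module, which this diff does not show. A hedged sketch of what a parser for the two documented forms (`stdio://` and `ws://IP:PORT`) could look like, using a stand-in enum rather than the real AppServerTransport:

```rust
use std::net::SocketAddr;
use std::str::FromStr;

// Stand-in for AppServerTransport; the real parsing is not shown in this diff.
#[derive(Debug, PartialEq)]
enum ListenTransport {
    Stdio,
    WebSocket { bind_address: SocketAddr },
}

impl FromStr for ListenTransport {
    type Err = String;

    fn from_str(url: &str) -> Result<Self, Self::Err> {
        if url == "stdio://" {
            return Ok(Self::Stdio);
        }
        if let Some(addr) = url.strip_prefix("ws://") {
            let bind_address = addr
                .parse::<SocketAddr>()
                .map_err(|e| format!("invalid ws bind address `{addr}`: {e}"))?;
            return Ok(Self::WebSocket { bind_address });
        }
        Err(format!(
            "unsupported listen URL `{url}`; expected stdio:// or ws://IP:PORT"
        ))
    }
}

fn main() {
    assert_eq!("stdio://".parse(), Ok(ListenTransport::Stdio));
    assert_eq!(
        "ws://127.0.0.1:4500".parse(),
        Ok(ListenTransport::WebSocket {
            bind_address: "127.0.0.1:4500".parse().unwrap(),
        })
    );
    assert!("tcp://127.0.0.1:80".parse::<ListenTransport>().is_err());
}
```

With a `FromStr` impl of this shape, clap's derive can typically accept `--listen ws://127.0.0.1:4500` directly through its FromStr fallback, which matches how the `listen: AppServerTransport` field is declared above.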

Some files were not shown because too many files have changed in this diff