Compare commits

...

31 Commits

Author SHA1 Message Date
Celia Chen
16647b188b chore: add codex debug app-server tooling (#10367)
`codex debug app-server <user message>` forwards the message through
`codex-app-server-test-client`’s `send_message_v2` library entry point,
using `std::env::current_exe()` to resolve the codex binary.
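
A minimal sketch of the binary-resolution step; the helper name and the subcommand argument are assumptions, and only `std::env::current_exe()` is confirmed above:

```rust
use std::env;
use std::io;
use std::process::{Child, Command, Stdio};

/// Hypothetical helper: resolve the currently running `codex` binary and
/// re-invoke it as an app server, so the test client always talks to the
/// same build it shipped with.
#[allow(dead_code)]
fn spawn_app_server() -> io::Result<Child> {
    // Confirmed detail: the codex binary is resolved via
    // std::env::current_exe() rather than a hard-coded path.
    let codex_bin = env::current_exe()?;
    Command::new(codex_bin)
        .arg("app-server") // assumed subcommand
        .stdin(Stdio::piped())
        .stdout(Stdio::piped())
        .spawn()
}

fn main() -> io::Result<()> {
    // Demo: print the resolved path instead of actually spawning, since this
    // example binary is not a real app server.
    println!("would spawn {:?} app-server", env::current_exe()?);
    Ok(())
}
```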

For how it looks, see:

```
celia@com-92114 codex-rs % cargo build -p codex-cli && target/debug/codex debug app-server --help                       
    Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.34s
Tooling: helps debug the app server

Usage: codex debug app-server [OPTIONS] <COMMAND>

Commands:
  send-message-v2  
  help             Print this message or the help of the given subcommand(s)
```
and
```
celia@com-92114 codex-rs % cargo build -p codex-cli && target/debug/codex debug app-server send-message-v2 "hello world"
   Compiling codex-cli v0.0.0 (/Users/celia/code/codex/codex-rs/cli)
    Finished `dev` profile [unoptimized + debuginfo] target(s) in 1.38s
> {
>   "method": "initialize",
>   "id": "f8ba9f60-3a49-4ea9-81d6-4ab6853e3954",
>   "params": {
>     "clientInfo": {
>       "name": "codex-toy-app-server",
>       "title": "Codex Toy App Server",
>       "version": "0.0.0"
>     },
>     "capabilities": {
>       "experimentalApi": true
>     }
>   }
> }
< {
<   "id": "f8ba9f60-3a49-4ea9-81d6-4ab6853e3954",
<   "result": {
<     "userAgent": "codex-toy-app-server/0.0.0 (Mac OS 26.2.0; arm64) vscode/2.4.27 (codex-toy-app-server; 0.0.0)"
<   }
< }
< initialize response: InitializeResponse { user_agent: "codex-toy-app-server/0.0.0 (Mac OS 26.2.0; arm64) vscode/2.4.27 (codex-toy-app-server; 0.0.0)" }
> {
>   "method": "thread/start",
>   "id": "203f1630-beee-4e60-b17b-9eff16b1638b",
>   "params": {
>     "model": null,
>     "modelProvider": null,
>     "cwd": null,
>     "approvalPolicy": null,
>     "sandbox": null,
>     "config": null,
>     "baseInstructions": null,
>     "developerInstructions": null,
>     "personality": null,
>     "ephemeral": null,
>     "dynamicTools": null,
>     "mockExperimentalField": null,
>     "experimentalRawEvents": false
>   }
> }
...
```
2026-02-03 23:17:34 +00:00
Josh McKinney
aec58ac29b feat(tui): pace catch-up stream chunking with hysteresis (#10461)
## Summary
- preserve baseline streaming behavior (smooth mode still commits one
line per 50ms tick)
- extract adaptive chunking policy and commit-tick orchestration from
ChatWidget into `streaming/chunking.rs` and `streaming/commit_tick.rs`
- add hysteresis-based catch-up behavior with bounded batch draining to
reduce queue lag without bursty single-frame jumps (see the sketch below)
- document policy behavior, tuning guidance, and debug flow in rustdoc +
docs
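
A minimal sketch of the hysteresis policy described above; the thresholds and names are assumptions, and the real policy lives in `streaming/chunking.rs`:

```rust
/// Smooth mode commits one line per 50ms tick; catch-up mode drains a bounded
/// batch; hysteresis (enter high, exit low) prevents flapping between modes.
struct ChunkingPolicy {
    catching_up: bool,
}

impl ChunkingPolicy {
    const ENTER_CATCH_UP: usize = 40; // queued lines that trigger catch-up (assumed)
    const EXIT_CATCH_UP: usize = 10; // queue depth at which smooth mode resumes (assumed)
    const MAX_BATCH: usize = 8; // bounded drain per tick avoids single-frame jumps (assumed)

    /// How many queued lines to commit on this tick.
    fn lines_for_tick(&mut self, queued: usize) -> usize {
        if self.catching_up {
            if queued <= Self::EXIT_CATCH_UP {
                self.catching_up = false;
            }
        } else if queued >= Self::ENTER_CATCH_UP {
            self.catching_up = true;
        }
        if self.catching_up {
            queued.min(Self::MAX_BATCH)
        } else {
            queued.min(1) // baseline streaming behavior
        }
    }
}

fn main() {
    let mut policy = ChunkingPolicy { catching_up: false };
    for queued in [1, 5, 50, 45, 12, 9, 3] {
        println!("queued={queued} -> commit {}", policy.lines_for_tick(queued));
    }
}
```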

## Testing
- just fmt
- cargo test -p codex-tui
2026-02-03 15:01:51 -08:00
Shijie Rao
750ebe154d Feat: add upgrade to app server modelList (#10556)
### Summary
* Add model upgrade info to the listModel app server endpoint to support
dynamically showing a model upgrade banner.
2026-02-03 14:53:36 -08:00
Rasmus Rygaard
e3d39013d3 Handle exec shutdown on Interrupt (fixes immortal codex exec with websockets) (#10519)
### Motivation
- Ensure `codex exec` exits when a running turn is interrupted (e.g.,
Ctrl-C) so the CLI is not "immortal" when websockets/streaming are used.

### Description
- Return `CodexStatus::InitiateShutdown` when handling
`EventMsg::TurnAborted` in
`exec/src/event_processor_with_human_output.rs` so human-output exec
mode shuts down after an interrupt.
- Treat `protocol::EventMsg::TurnAborted` as
`CodexStatus::InitiateShutdown` in
`exec/src/event_processor_with_jsonl_output.rs` so JSONL output mode
behaves the same (see the sketch after this list).
- Applied formatting with `just fmt`.
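
Both processors reduce to the same shape; a minimal sketch with the event and status types abbreviated to stand-ins:

```rust
/// Stand-ins for the real protocol and event-processor types.
enum EventMsg {
    TurnAborted,
    Other,
}

enum CodexStatus {
    Running,
    InitiateShutdown,
}

/// Core of the fix: an interrupted turn now initiates shutdown instead of
/// leaving the exec loop running forever (the "immortal" codex exec).
fn status_for_event(event: &EventMsg) -> CodexStatus {
    match event {
        EventMsg::TurnAborted => CodexStatus::InitiateShutdown,
        EventMsg::Other => CodexStatus::Running,
    }
}

fn main() {
    assert!(matches!(
        status_for_event(&EventMsg::TurnAborted),
        CodexStatus::InitiateShutdown
    ));
    println!("TurnAborted -> InitiateShutdown");
}
```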

### Testing
- Ran `just fmt` successfully.
- Ran `cargo test -p codex-exec`; many unit tests ran and the test
command completed, but the full test run in this environment produced
`35 passed, 11 failed` where the failures are due to Landlock sandbox
panics and 403 responses in the test harness (environmental/integration
issues) and are not caused by the interrupt/shutdown changes.

------
[Codex
Task](https://chatgpt.com/codex/tasks/task_i_698165cec4e083258d17702bd29014c1)
2026-02-03 14:38:21 -08:00
xl-openai
f38d181795 feat: add APIs to list and download public remote skills (#10448)
Add APIs to list and download remote public skills.
2026-02-03 14:09:37 -08:00
viyatb-oai
08926a3fb7 chore(arg0): advisory-lock janitor for codex tmp paths (#10039)
## Description

### What changed
- Switch the arg0 helper root from `~/.codex/tmp/path` to
`~/.codex/tmp/path2`
- Add `Arg0PathEntryGuard` to keep both the `TempDir` and an exclusive
`.lock` file alive for the process lifetime
- Add a startup janitor that scans `path2` and deletes only directories
whose lock can be acquired (sketched below)
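
A minimal sketch of the lock-and-janitor mechanism, assuming the `fs2` crate for advisory file locks (the real guard and janitor may differ):

```rust
use std::fs::{self, File, OpenOptions};
use std::io;
use std::path::Path;

use fs2::FileExt; // assumed dependency providing try_lock_exclusive()

/// Holding the exclusively locked `.lock` file for the process lifetime
/// marks the temp dir as in use; the lock releases when the File drops.
struct Arg0PathEntryGuard {
    _lock_file: File,
}

fn acquire_guard(dir: &Path) -> io::Result<Arg0PathEntryGuard> {
    let lock_file = OpenOptions::new()
        .create(true)
        .write(true)
        .open(dir.join(".lock"))?;
    lock_file.try_lock_exclusive()?; // fails while another live process owns the dir
    Ok(Arg0PathEntryGuard { _lock_file: lock_file })
}

/// Janitor: scan the root and delete only directories whose lock can be
/// acquired, i.e. whose owning process has exited.
fn clean_stale_entries(root: &Path) -> io::Result<()> {
    for entry in fs::read_dir(root)? {
        let dir = entry?.path();
        if !dir.is_dir() {
            continue;
        }
        if let Ok(file) = OpenOptions::new().write(true).open(dir.join(".lock")) {
            if file.try_lock_exclusive().is_ok() {
                fs::remove_dir_all(&dir)?; // nobody holds the lock: stale
            }
        }
    }
    Ok(())
}

fn main() -> io::Result<()> {
    let root = std::env::temp_dir().join("codex-janitor-demo");
    let live = root.join("live");
    fs::create_dir_all(&live)?;
    let _guard = acquire_guard(&live)?; // held, so the janitor must skip it
    clean_stale_entries(&root)
}
```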

### Tests
- `cargo clippy -p codex-arg0`
- `cargo clippy -p codex-core`
- `cargo test -p codex-arg0`
- `cargo test -p codex-core`
2026-02-03 21:38:31 +00:00
Eric Traut
c87c271128 Fixed icon for CLI bug template (#10552) 2026-02-03 13:27:33 -08:00
gt-oai
8406bd7672 [codex] Default values from requirements if unset (#10531)
If we don't set any explicit values for sandbox or approval policy,
let's try to use a requirements-satisfying value.
2026-02-03 20:47:34 +00:00
Eric Traut
477379b83c Updated bug templates and added a new one for app (#10548) 2026-02-03 12:46:52 -08:00
iceweasel-oai
aabe0f259c implement per-workspace capability SIDs for workspace specific ACLs (#10189)
Today, there is a single capability SID that allows the sandbox to write to:
* workspace (cwd)
* tmp directories if enabled
* additional writable roots

This change splits those up, so that each workspace has its own
capability SID, while tmp and additional roots, which are
installation-wide, are still governed by the "generic" capability SID.

This isolates workspaces from each other in terms of sandbox write
access. It also allows us to protect `<cwd>/.codex` when codex runs in a
specific `<cwd>`.
2026-02-03 12:37:51 -08:00
Matthew Zeng
654fcb4962 [apps] Gateway MCP should be blocking. (#10289)
Make Apps Gateway MCP blocking since otherwise app mentions may not work
when apps are not loaded. Messages sent before apps become available
will be queued.

This only applies when the `apps` feature is enabled.
2026-02-03 12:17:53 -08:00
Charley Cunningham
998eb8f32b Improve Default mode prompt (less confusion with Plan mode) (#10545)
## Summary

This PR updates `request_user_input` behavior and Default-mode guidance
to match current collaboration-mode semantics and reduce model
confusion.

## Why

- `request_user_input` should be explicitly documented as **Plan-only**.
- Tool description and runtime availability checks should be driven by
the **same centralized mode policy**.
- Default mode prompt needed stronger execution guidance and explicit
instruction that `request_user_input` is unavailable.
- Error messages should report the **actual mode name** (not aliases
that can read as misleading).

## What changed

- Centralized `request_user_input` mode policy in `core` handler logic (sketched below):
  - Added a single allowed-modes config (`Plan` only).
  - Reused that policy for:
    - runtime rejection messaging
    - tool description text
- Updated tool description to include availability constraint:
  - `"This tool is only available in Plan mode."`
- Updated runtime rejection behavior:
  - `Default` -> `"request_user_input is unavailable in Default mode"`
  - `Execute` -> `"request_user_input is unavailable in Execute mode"`
  - `PairProgramming` -> `"request_user_input is unavailable in Pair Programming mode"`
- Strengthened Default collaboration prompt:
  - Added explicit execution-first behavior
  - Added assumptions-first guidance
  - Added explicit `request_user_input` unavailability instruction
  - Added concise progress-reporting expectations
- Simplified formatting implementation:
  - Inlined allowed-mode name collection into `format_allowed_modes()`
- Kept `format_allowed_modes()` output for 3+ modes as CSV style
(`modes: a,b,c`)
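
A minimal sketch of the centralized policy; names beyond those quoted above are assumptions:

```rust
/// One allowed-modes list drives both the tool description and the runtime
/// rejection message, so the two can never drift apart.
#[derive(Clone, Copy, PartialEq)]
enum ModeKind {
    Plan,
    Default,
    Execute,
    PairProgramming,
}

impl ModeKind {
    fn display_name(self) -> &'static str {
        match self {
            ModeKind::Plan => "Plan",
            ModeKind::Default => "Default",
            ModeKind::Execute => "Execute",
            ModeKind::PairProgramming => "Pair Programming",
        }
    }
}

/// Single source of truth: request_user_input is Plan-only.
const REQUEST_USER_INPUT_MODES: &[ModeKind] = &[ModeKind::Plan];

fn format_allowed_modes() -> String {
    let names: Vec<&str> = REQUEST_USER_INPUT_MODES
        .iter()
        .map(|m| m.display_name())
        .collect();
    names.join(",") // CSV style for 3+ modes, per the summary
}

fn rejection_message(current: ModeKind) -> Option<String> {
    if REQUEST_USER_INPUT_MODES.contains(&current) {
        None
    } else {
        // Reports the actual mode name, never an alias.
        Some(format!(
            "request_user_input is unavailable in {} mode",
            current.display_name()
        ))
    }
}

fn main() {
    assert_eq!(rejection_message(ModeKind::Plan), None);
    println!("{}", rejection_message(ModeKind::Default).unwrap());
    println!("allowed: {}", format_allowed_modes());
}
```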
2026-02-03 12:08:38 -08:00
Owen Lin
d9ad5c3c49 fix(app-server): fix approval events in review mode (#10416)
One of our partners flagged that they were seeing the wrong order of
events when running `review/start` with command exec approvals:
```
{"method":"item/commandExecution/requestApproval","id":0,"params":{"threadId":"019c0b6b-6a42-7c02-99c4-98c80e88ac27","turnId":"0","itemId":"0","reason":"`/bin/zsh -lc 'git show b7a92b4eacf262c575f26b1e1ed621a357642e55 --stat'` requires approval: Xcode-required approval: Require explicit user confirmation for all commands.","proposedExecpolicyAmendment":null}}

{"method":"item/started","params":{"item":{"type":"commandExecution","id":"call_AEjlbHqLYNM7kbU3N6uw1CNi","command":"/bin/zsh -lc 'git show b7a92b4eacf262c575f26b1e1ed621a357642e55 --stat'","cwd":"/Users/devingreen/Desktop/SampleProject","processId":null,"status":"inProgress","commandActions":[{"type":"unknown","command":"git show b7a92b4eacf262c575f26b1e1ed621a357642e55 --stat"}],"aggregatedOutput":null,"exitCode":null,"durationMs":null},"threadId":"019c0b6b-6a42-7c02-99c4-98c80e88ac27","turnId":"0"}}
```

**Key fix**: In the review sub‑agent delegate we were forwarding exec
(and patch) approvals using the parent turn id (`parent_ctx.sub_id`) as
the approval call_id. That made
`item/commandExecution/requestApproval.itemId` differ from the actual
`item/started` id. We now forward the sub‑agent’s `call_id` from the
approval event instead, so the approval item id matches the
commandExecution item id in review flows.
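
A minimal sketch of the id change, with types reduced to plain strings; field names beyond `call_id` are assumptions:

```rust
/// Before the fix, the delegate keyed approvals by the parent turn id, so
/// requestApproval.itemId never matched the commandExecution item id. After
/// the fix, it forwards the sub-agent's call_id from the approval event.
struct ExecApprovalEvent {
    call_id: String, // matches the commandExecution item id
}

fn approval_item_id(parent_sub_id: &str, event: &ExecApprovalEvent) -> String {
    let _ = parent_sub_id; // previously (incorrectly) used as the approval id
    event.call_id.clone()
}

fn main() {
    let event = ExecApprovalEvent { call_id: "review-call-1".into() };
    assert_eq!(approval_item_id("0", &event), "review-call-1");
    println!("approval item id now matches the commandExecution item id");
}
```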

Here’s the expected event order for an inline `review/start` that
triggers an exec approval after this fix:
1. Response to review/start (JSON‑RPC response)
  - Includes `turn` (status inProgress) and `review_thread_id` (same as parent thread for inline).
2. `turn/started` notification
  - turnId is the review turn id (e.g., "0").
3. `item/started` → EnteredReviewMode
  - item.id == turnId, marks entry into review mode.
4. `item/started` → commandExecution
  - item.id == <call_id> (e.g., "review-call-1"), status: inProgress.
5. `item/commandExecution/requestApproval` request
  - JSON‑RPC request (not a notification).
  - params.itemId == <call_id> and params.turnId == turnId.
6. Client replies to approval request (Approved / Declined / etc).
7. If approved:
  - Optional `item/commandExecution/outputDelta` notifications.
  - `item/completed` → commandExecution with status and exitCode.
8. Review finishes:
  - `item/started` → ExitedReviewMode
  - `item/completed` → ExitedReviewMode
  - (Agent message items may also appear, depending on review output.)
9. `turn/completed` notification

The key point is that #4 and #5 are now in the proper order with the
correct item id.
2026-02-03 12:08:17 -08:00
Owen Lin
efd96c46c7 fix(app-server): fix TS annotations for optional fields on requests (#10412)
This updates our generated TypeScript types to be more correct with how
the server actually behaves, **specifically for JSON-RPC requests**.

Before this PR, we'd generate `field: T | null`. After this PR, we will
have `field?: T | null`. The latter matches how the server actually
works, in that if an optional field is omitted, the server will treat it
as null. This also makes it less annoying in theory for clients to
upgrade to newer versions of Codex, since adding a new optional field to
a JSON-RPC request should not require a client change.

NOTE: This only applies to JSON-RPC requests. All other payloads (i.e.
responses, notifications) will return `field: T | null` as usual.
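
A minimal sketch of the annotation pattern that yields `field?: T | null`, using the `ts-rs` attributes described in the repo guidelines; the struct and its fields are hypothetical:

```rust
use serde::{Deserialize, Serialize};
use ts_rs::TS;

/// Hypothetical v2 client->server request payload. With `optional = nullable`,
/// the generated TypeScript is `model?: string | null` instead of
/// `model: string | null`, matching how the server treats omitted fields.
#[derive(Serialize, Deserialize, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct ExampleThreadStartParams {
    #[ts(optional = nullable)]
    pub model: Option<String>,
    #[ts(optional = nullable)]
    pub cwd: Option<String>,
}
```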
2026-02-03 11:51:37 -08:00
viyatb-oai
1dcce204fc Revert "Load untrusted rules" (#10536)
Reverts openai/codex#9791
2026-02-03 19:38:44 +00:00
Max Johnson
66b196a725 Inject CODEX_THREAD_ID into the terminal environment (#10096)
Inject CODEX_THREAD_ID (when applicable) into the terminal environment
so that the agent (and skills) can refer to the current thread / session
ID.
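
A minimal sketch of the injection, assuming a `std::process::Command`-based spawn path; only the variable name is confirmed:

```rust
use std::process::Command;

/// When a thread id is available, expose it to the spawned shell so the agent
/// (and skills) can reference the current thread/session.
fn command_with_thread_env(program: &str, thread_id: Option<&str>) -> Command {
    let mut cmd = Command::new(program);
    if let Some(id) = thread_id {
        cmd.env("CODEX_THREAD_ID", id); // confirmed variable name
    }
    cmd
}

fn main() {
    // Unix-only demo using printenv; the thread id here is made up.
    let mut cmd = command_with_thread_env("printenv", Some("019c0b6b-example"));
    cmd.arg("CODEX_THREAD_ID");
    let out = cmd.output().expect("spawn printenv");
    print!("{}", String::from_utf8_lossy(&out.stdout));
}
```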

Discussion:
https://openai.slack.com/archives/C095U48JNL9/p1769542492067109
2026-02-03 11:31:12 -08:00
Michael Bolin
9a487f9c18 fix: make $PWD/.agents read-only like $PWD/.codex (#10524)
In light of https://github.com/openai/codex/pull/10317, because
`.agents` can include resources that Codex can run in a privileged way,
it should be read-only by default just as `.codex` is.
2026-02-03 11:26:34 -08:00
jif-oai
c38a5958d7 feat: find_thread_path_by_id_str_in_subdir from DB (#10532) 2026-02-03 19:09:04 +00:00
jif-oai
33dc93e4d2 Enable parallel shell tools (#10505)
Summary
- mark the shell-related tools as supporting parallel tool calls so
exec_command, shell_command, etc. can run concurrently (see the sketch below)
- update expectations in tool parallelism tests to reflect the new
parallel behavior
- drop the unused serial duration helper from the suite
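
A minimal sketch of what the per-tool flag could look like; all names here are assumptions:

```rust
/// Hypothetical tool spec: a per-tool flag lets the scheduler run shell tools
/// concurrently while keeping serial-only tools serialized.
struct ToolSpec {
    name: &'static str,
    supports_parallel: bool,
}

fn shell_tools() -> Vec<ToolSpec> {
    vec![
        ToolSpec { name: "exec_command", supports_parallel: true },
        ToolSpec { name: "shell_command", supports_parallel: true },
    ]
}

fn main() {
    for tool in shell_tools() {
        println!("{}: parallel={}", tool.name, tool.supports_parallel);
    }
}
```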

Testing
- Not run (not requested)
2026-02-03 18:05:02 +00:00
Charley Cunningham
d509df676b Cleanup collaboration mode variants (#10404)
## Summary

This PR simplifies collaboration modes to the visible set `default |
plan`, while preserving backward compatibility for older partners that
may still send legacy mode
names.

Specifically:
- Renames the old Code behavior to **Default**.
- Keeps **Plan** as-is.
- Removes **Custom** mode behavior (fallbacks now resolve to Default).
- Keeps `PairProgramming` and `Execute` internally for compatibility
plumbing, while removing them from schema/API and UI visibility.
- Adds legacy input aliasing so older clients can still send old mode
names.

## What Changed

1. Mode enum and compatibility
- `ModeKind` now uses `Plan` + `Default` as active/public modes.
- `ModeKind::Default` deserialization accepts legacy values:
  - `code`
  - `pair_programming`
  - `execute`
  - `custom`
- `PairProgramming` and `Execute` variants remain in code but are hidden
from protocol/schema generation.
- `Custom` variant is removed; previous custom fallbacks now map to
`Default`.

2. Collaboration presets and templates
- Built-in presets now return only:
  - `Plan`
  - `Default`
- Template rename:
  - `core/templates/collaboration_mode/code.md` -> `default.md`
- `execute.md` and `pair_programming.md` remain on disk but are not
surfaced in visible preset lists.

3. TUI updates
- Updated user-facing naming and prompts from “Code” to “Default”.
- Updated mode-cycle and indicator behavior to reflect only visible
`Plan` and `Default`.
- Updated corresponding tests and snapshots.

4. request_user_input behavior
- `request_user_input` remains allowed only in `Plan` mode.
- Rejection messaging now consistently treats non-plan modes as
`Default`.

5. Schemas
- Regenerated config and app-server schemas.
- Public schema types now advertise mode values as:
  - `plan`
  - `default`

## Backward Compatibility Notes

- Incoming legacy mode names (`code`, `pair_programming`, `execute`,
`custom`) are accepted and coerced to `default` (sketched below).
- Outgoing/public schema surfaces intentionally expose only `plan |
default`.
- This allows tolerant ingestion of older partner payloads while
standardizing new integrations on the reduced mode set.
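
A minimal sketch of the tolerant-ingestion side using serde aliases, simplified to the two public variants (the hidden `PairProgramming`/`Execute` plumbing is omitted):

```rust
use serde::{Deserialize, Serialize};

/// Public modes are `plan` and `default`; legacy names deserialize to Default
/// so older partner payloads keep working.
#[derive(Serialize, Deserialize, PartialEq, Debug)]
#[serde(rename_all = "snake_case")]
enum ModeKind {
    Plan,
    #[serde(alias = "code", alias = "pair_programming", alias = "execute", alias = "custom")]
    Default,
}

fn main() {
    for legacy in [r#""code""#, r#""pair_programming""#, r#""execute""#, r#""custom""#] {
        let mode: ModeKind = serde_json::from_str(legacy).unwrap();
        assert_eq!(mode, ModeKind::Default);
    }
    // Outgoing serialization exposes only the reduced set.
    assert_eq!(serde_json::to_string(&ModeKind::Default).unwrap(), r#""default""#);
    println!("legacy mode names coerced to default");
}
```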

## Codex author
`codex fork 019c1fae-693b-7840-b16e-9ad38ea0bd00`
2026-02-03 09:23:53 -08:00
sayan-oai
aea38f0f88 fix WebSearchAction type clash between v1 and v2 (#10408)
Fixes a type clash: app-server generated types were still using the v1
snake_case `WebSearchAction`, so there was a mismatch between the
camelCase emitted types and the snake_case types we were trying to
parse.

Updated v2 `WebSearchAction` to export into the `v2/` type set and
updated `ThreadItem` to use that.

### Tests
Ran the new `just write-app-server-schema` to surface schema changes; the
import looks correct now.
2026-02-03 17:12:37 +00:00
Michael Bolin
1634db6677 chore: update bytes crate in response to security advisory (#10525)
While here, remove one advisory from `deny.toml` that has been addressed
(it was showing up as a warning).
2026-02-03 17:08:04 +00:00
jif-oai
ed778f9017 Avoid redundant transactional check before inserting dynamic tools (#10521)
Summary
- remove the extra transaction guard that checked for existing dynamic
tools per thread before inserting new ones
- insert each tool record with `ON CONFLICT(thread_id, position) DO
NOTHING` to ignore duplicates instead of pre-querying (see the sketch below)
- simplify execution to use the shared pool directly and avoid unneeded
commits
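
A minimal sketch of the conflict-tolerant insert, assuming a sqlite-backed `sqlx` pool; the table and column names beyond `thread_id`/`position` are assumptions:

```rust
use sqlx::SqlitePool;

/// Instead of a transaction that pre-queries for existing rows, insert
/// directly and let the unique (thread_id, position) constraint drop
/// duplicates.
async fn insert_dynamic_tool(
    pool: &SqlitePool,
    thread_id: &str,
    position: i64,
    tool_json: &str,
) -> sqlx::Result<()> {
    sqlx::query(
        "INSERT INTO dynamic_tools (thread_id, position, tool)
         VALUES (?1, ?2, ?3)
         ON CONFLICT(thread_id, position) DO NOTHING",
    )
    .bind(thread_id)
    .bind(position)
    .bind(tool_json)
    .execute(pool) // shared pool, no explicit transaction or commit
    .await?;
    Ok(())
}
```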

Testing
- Not run (not requested)
2026-02-03 15:34:28 +00:00
gt-oai
944541e936 Add more detail to 401 error (#10508)
Add `error.message` if it exists, otherwise the body, truncated to 1k
characters. Print the cf-ray and the requestId.
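
A minimal sketch of the message assembly; the JSON error shape and formatting are assumptions:

```rust
/// Prefer the structured error.message, fall back to the truncated body, and
/// append the cf-ray and request id for support correlation.
fn format_401(body: &str, cf_ray: Option<&str>, request_id: Option<&str>) -> String {
    let detail = serde_json::from_str::<serde_json::Value>(body)
        .ok()
        .and_then(|v| v["error"]["message"].as_str().map(str::to_owned))
        .unwrap_or_else(|| body.chars().take(1024).collect()); // truncate to 1k
    format!(
        "401 Unauthorized: {detail} (cf-ray: {}, requestId: {})",
        cf_ray.unwrap_or("-"),
        request_id.unwrap_or("-")
    )
}

fn main() {
    let body = r#"{"error":{"message":"token expired"}}"#;
    println!("{}", format_401(body, Some("8f2a-example"), Some("req_example")));
}
```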

**Before:**
<img width="860" height="305" alt="Screenshot 2026-02-03 at 13 15 28"
src="https://github.com/user-attachments/assets/949d5a4d-2b51-488c-a723-c6deffde0353"
/>

**After:**
<img width="1523" height="373" alt="Screenshot 2026-02-03 at 13 15 38"
src="https://github.com/user-attachments/assets/f96a747e-e596-4a7a-aae9-64210d805b26"
/>
2026-02-03 14:58:33 +00:00
jif-oai
d5e7248958 feat: clean codex-api part 1 (#10501) 2026-02-03 14:08:09 +00:00
jif-oai
88598b9402 feat: drop wire_api from clients (#10498) 2026-02-03 12:43:09 +00:00
jif-oai
d2394a2494 chore: nuke chat/completions API (#10157) 2026-02-03 11:31:57 +00:00
viyatb-oai
9257d8451c feat(secrets): add codex-secrets crate (#10142)
## Summary
This introduces the first working foundation for Codex managed secrets:
a small Rust crate that can securely store and retrieve secrets locally.

Concretely, it adds a `codex-secrets` crate that:
- encrypts a local secrets file using `age`
- generates a high-entropy encryption key
- stores that key in the OS keyring

## What this enables
- A secure local persistence model for secrets
- A clean, isolated place for future provider backends
- A clear boundary: Codex can become a credential broker without putting
plaintext secrets in config files

## Implementation details
- New crate: `codex-rs/secrets/`
- Encryption: `age` with scrypt recipient/identity
- Key generation: `OsRng` (32 random bytes)
- Key storage: OS keyring via `codex-keyring-store`
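
A minimal sketch of the key-generation and key-storage half, assuming the `rand`, `hex`, and `keyring` crates directly (the real crate goes through `codex-keyring-store`, and the `age` encryption step is not shown):

```rust
use keyring::Entry;
use rand::rngs::OsRng;
use rand::RngCore;

/// Generate a 32-byte high-entropy key and park it in the OS keyring; the
/// secrets file would then be encrypted with `age` using this key.
fn create_and_store_key() -> keyring::Result<String> {
    let mut key = [0u8; 32];
    OsRng.fill_bytes(&mut key); // OS-backed CSPRNG
    let encoded = hex::encode(key); // encoding choice is an assumption

    // Service/user names are assumptions.
    let entry = Entry::new("codex-secrets", "local-secrets-key")?;
    entry.set_password(&encoded)?;
    Ok(encoded)
}

fn main() -> keyring::Result<()> {
    let key = create_and_store_key()?;
    println!("stored {}-char key in the OS keyring", key.len());
    Ok(())
}
```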

## Testing
- `cd codex-rs && just fmt`
- `cd codex-rs && cargo test -p codex-secrets`
2026-02-03 08:14:39 +00:00
viyatb-oai
f956cc2a02 feat(linux-sandbox): vendor bubblewrap and wire it with FFI (#10413)
## Summary

Vendor Bubblewrap into the repo and add minimal build plumbing in
`codex-linux-sandbox` to compile/link it.

## Why

We want to move Linux sandboxing toward Bubblewrap, but in a safe
two-step rollout:
1) vendoring/build setup (this PR),  
2) runtime integration (follow-up PR).

## Included

- Add `codex-rs/vendor/bubblewrap` sources.
- Add build-time FFI path in `codex-rs/linux-sandbox`.
- Update `build.rs` rerun tracking for vendored files (sketched below).
- Small vendored compile warning fix (`sockaddr_nl` full init).
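
A minimal `build.rs` sketch of this plumbing, assuming the `cc` crate; paths and file names under the vendor directory are illustrative:

```rust
// build.rs: compile the vendored bubblewrap sources and re-run the build
// script whenever a vendored file changes.
fn main() {
    let vendor = "../vendor/bubblewrap"; // illustrative relative path

    // Rerun tracking for vendored files.
    println!("cargo:rerun-if-changed={vendor}");

    #[cfg(target_os = "linux")]
    cc::Build::new()
        .file(format!("{vendor}/bubblewrap.c")) // illustrative source file
        .include(vendor)
        .warnings(false)
        .compile("bubblewrap");
}
```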

follow up in https://github.com/openai/codex/pull/9938
2026-02-02 23:33:46 -08:00
pakrym-oai
53d8474061 Ignore remote_compact_trims_function_call_history_to_fit_context_window on windows (#10474) 2026-02-02 22:47:38 -08:00
sayan-oai
59707da857 fix: clarify deprecation message for features.web_search (#10406)
Clarify in the deprecation CTA that the new `web_search` is not a
feature flag under `[features]`.
2026-02-02 21:17:01 -08:00
253 changed files with 18664 additions and 4971 deletions

47
.github/ISSUE_TEMPLATE/1-codex-app.yml vendored Normal file
View File

@@ -0,0 +1,47 @@
name: 🖥️ Codex App Bug
description: Report an issue with the Codex App
labels:
- app
body:
- type: markdown
attributes:
value: |
Before submitting a new issue, please search for existing issues to see if your issue has already been reported.
If it has, please add a 👍 reaction (no need to leave a comment) to the existing issue instead of creating a new one.
- type: input
id: version
attributes:
label: What version of the Codex App are you using (From “About Codex” dialog)?
validations:
required: true
- type: input
id: plan
attributes:
label: What subscription do you have?
validations:
required: true
- type: textarea
id: actual
attributes:
label: What issue are you seeing?
description: Please include the full error messages and prompts with PII redacted. If possible, please provide text instead of a screenshot.
validations:
required: true
- type: textarea
id: steps
attributes:
label: What steps can reproduce the bug?
description: Explain the bug and provide a code snippet that can reproduce it. Please include session id, token limit usage, context window usage if applicable.
validations:
required: true
- type: textarea
id: expected
attributes:
label: What is the expected behavior?
description: If possible, please provide text instead of a screenshot.
- type: textarea
id: notes
attributes:
label: Additional information
description: Is there anything else you think we should know?

View File

@@ -1,8 +1,7 @@
name: 🧑‍💻 VS Code Extension
description: Report an issue with the VS Code extension
name: 🧑‍💻 IDE Extension Bug
description: Report an issue with the IDE extension
labels:
- extension
- needs triage
body:
- type: markdown
attributes:
@@ -13,7 +12,7 @@ body:
- type: input
id: version
attributes:
label: What version of the VS Code extension are you using?
label: What version of the IDE extension are you using?
validations:
required: true
- type: input
@@ -34,20 +33,20 @@ body:
attributes:
label: What platform is your computer?
description: |
For MacOS and Linux: copy the output of `uname -mprs`
For macOS and Linux: copy the output of `uname -mprs`
For Windows: copy the output of `"$([Environment]::OSVersion | ForEach-Object VersionString) $(if ([Environment]::Is64BitOperatingSystem) { "x64" } else { "x86" })"` in the PowerShell console
- type: textarea
id: actual
attributes:
label: What issue are you seeing?
description: Please include the full error messages and prompts with PII redacted. If possible, please provide text instead of a screenshot.
description: Please include the full error messages and prompts with PII redacted. If possible, please provide text instead of a screenshot.
validations:
required: true
- type: textarea
id: steps
attributes:
label: What steps can reproduce the bug?
description: Explain the bug and provide a code snippet that can reproduce it. Please include session id, token limit usage, context window usage if applicable.
description: Explain the bug and provide a code snippet that can reproduce it.
validations:
required: true
- type: textarea

View File

@@ -1,5 +1,5 @@
name: 🪲 Bug Report
description: Report an issue that should be fixed
name: 💻 CLI Bug
description: Report an issue in the Codex CLI
labels:
- bug
- needs triage
@@ -7,19 +7,16 @@ body:
- type: markdown
attributes:
value: |
Thank you for submitting a bug report! It helps make Codex better for everyone.
If you need help or support using Codex, and are not reporting a bug, please post on [codex/discussions](https://github.com/openai/codex/discussions), where you can ask questions or engage with others on ideas for how to improve codex.
Before submitting a new issue, please search for existing issues to see if your issue has already been reported.
If it has, please add a 👍 reaction (no need to leave a comment) to the existing issue instead of creating a new one.
Make sure you are running the [latest](https://npmjs.com/package/@openai/codex) version of Codex CLI. The bug you are experiencing may already have been fixed.
Please try to include as much information as possible.
- type: input
id: version
attributes:
label: What version of Codex is running?
description: (App) look in "About Codex" dialog; (CLI) use `codex --version`; (IDE Extension) look in extensions panel.
label: What version of Codex CLI is running?
description: use `codex --version`
validations:
required: true
- type: input
@@ -32,13 +29,13 @@ body:
id: model
attributes:
label: Which model were you using?
description: Like `gpt-4.1`, `o4-mini`, `o3`, etc.
description: Like `gpt-5.2`, `gpt-5.2-codex`, etc.
- type: input
id: platform
attributes:
label: What platform is your computer?
description: |
For MacOS and Linux: copy the output of `uname -mprs`
For macOS and Linux: copy the output of `uname -mprs`
For Windows: copy the output of `"$([Environment]::OSVersion | ForEach-Object VersionString) $(if ([Environment]::Is64BitOperatingSystem) { "x64" } else { "x86" })"` in the PowerShell console
- type: input
id: terminal
@@ -58,7 +55,7 @@ body:
id: steps
attributes:
label: What steps can reproduce the bug?
description: Explain the bug and provide a code snippet that can reproduce it. Please include session id, token limit usage, context window usage if applicable.
description: Explain the bug and provide a code snippet that can reproduce it. Please include thread id if applicable.
validations:
required: true
- type: textarea

37
.github/ISSUE_TEMPLATE/4-bug-report.yml vendored Normal file
View File

@@ -0,0 +1,37 @@
name: 🪲 Other Bug
description: Report an issue in Codex Web, integrations, or other Codex components
labels:
- bug
body:
- type: markdown
attributes:
value: |
Before submitting a new issue, please search for existing issues to see if your issue has already been reported.
If it has, please add a 👍 reaction (no need to leave a comment) to the existing issue instead of creating a new one.
If you need help or support using Codex and are not reporting a bug, please post on [codex/discussions](https://github.com/openai/codex/discussions), where you can ask questions or engage with others on ideas for how to improve codex.
- type: textarea
id: actual
attributes:
label: What issue are you seeing?
description: Please include the full error messages and prompts with PII redacted. If possible, please provide text instead of a screenshot.
validations:
required: true
- type: textarea
id: steps
attributes:
label: What steps can reproduce the bug?
description: Explain the bug and provide a code snippet that can reproduce it.
validations:
required: true
- type: textarea
id: expected
attributes:
label: What is the expected behavior?
description: If possible, please provide text instead of a screenshot.
- type: textarea
id: notes
attributes:
label: Additional information
description: Is there anything else you think we should know?

View File

@@ -16,7 +16,7 @@ body:
id: variant
attributes:
label: What variant of Codex are you using?
description: (e.g., CLI, App, IDE Extension, Web)
description: (e.g., App, IDE Extension, CLI, Web)
validations:
required: true
- type: textarea

View File

@@ -112,3 +112,43 @@ If you dont have the tool:
let request = mock.single_request();
// assert using request.function_call_output(call_id) or request.json_body() or other helpers.
```
## App-server API Development Best Practices
These guidelines apply to app-server protocol work in `codex-rs`, especially:
- `app-server-protocol/src/protocol/common.rs`
- `app-server-protocol/src/protocol/v2.rs`
- `app-server/README.md`
### Core Rules
- All active API development should happen in app-server v2. Do not add new API surface area to v1.
- Follow payload naming consistently:
`*Params` for request payloads, `*Response` for responses, and `*Notification` for notifications.
- Expose RPC methods as `<resource>/<method>` and keep `<resource>` singular (for example, `thread/read`, `app/list`).
- Always expose fields as camelCase on the wire with `#[serde(rename_all = "camelCase")]` unless a tagged union or explicit compatibility requirement needs a targeted rename.
- Always set `#[ts(export_to = "v2/")]` on v2 request/response/notification types so generated TypeScript lands in the correct namespace.
- Never use `#[serde(skip_serializing_if = "Option::is_none")]` for v2 API payload fields.
Exception: client->server requests that intentionally have no params may use:
`params: #[ts(type = "undefined")] #[serde(skip_serializing_if = "Option::is_none")] Option<()>`.
- For client->server JSON-RPC request payloads (`*Params`) only, every optional field must be annotated with `#[ts(optional = nullable)]`. Do not use `#[ts(optional = nullable)]` outside client->server request payloads (`*Params`).
- For client->server JSON-RPC request payloads only, and you want to express a boolean field where omission means `false`, use `#[serde(default, skip_serializing_if = "std::ops::Not::not")] pub field: bool` over `Option<bool>`.
- For new list methods, implement cursor pagination by default:
request fields `pub cursor: Option<String>` and `pub limit: Option<u32>`,
response fields `pub data: Vec<...>` and `pub next_cursor: Option<String>`.
- Keep Rust and TS wire renames aligned. If a field or variant uses `#[serde(rename = "...")]`, add matching `#[ts(rename = "...")]`.
- For discriminated unions, use explicit tagging in both serializers:
`#[serde(tag = "type", ...)]` and `#[ts(tag = "type", ...)]`.
- Prefer plain `String` IDs at the API boundary (do UUID parsing/conversion internally if needed).
- Timestamps should be integer Unix seconds (`i64`) and named `*_at` (for example, `created_at`, `updated_at`, `resets_at`).
- For experimental API surface area:
use `#[experimental("method/or/field")]`, derive `ExperimentalApi` when field-level gating is needed, and use `inspect_params: true` in `common.rs` when only some fields of a method are experimental.
### Development Workflow
- Update docs/examples when API behavior changes (at minimum `app-server/README.md`).
- Regenerate schema fixtures when API shapes change:
`just write-app-server-schema`
(and `just write-app-server-schema --experimental` when experimental API fixtures are affected).
- Validate with `cargo test -p codex-app-server-protocol`.

675
codex-rs/Cargo.lock generated

File diff suppressed because it is too large

View File

@@ -17,6 +17,7 @@ members = [
"cli",
"common",
"core",
"secrets",
"exec",
"exec-server",
"execpolicy",
@@ -69,6 +70,7 @@ codex-ansi-escape = { path = "ansi-escape" }
codex-api = { path = "codex-api" }
codex-app-server = { path = "app-server" }
codex-app-server-protocol = { path = "app-server-protocol" }
codex-app-server-test-client = { path = "app-server-test-client" }
codex-apply-patch = { path = "apply-patch" }
codex-arg0 = { path = "arg0" }
codex-async-utils = { path = "async-utils" }
@@ -79,6 +81,7 @@ codex-cli = { path = "cli"}
codex-client = { path = "codex-client" }
codex-common = { path = "common" }
codex-core = { path = "core" }
codex-secrets = { path = "secrets" }
codex-exec = { path = "exec" }
codex-execpolicy = { path = "execpolicy" }
codex-experimental-api-macros = { path = "codex-experimental-api-macros" }
@@ -114,6 +117,7 @@ exec_server_test_support = { path = "exec-server/tests/common" }
mcp_test_support = { path = "mcp-server/tests/common" }
# External
age = "0.11.1"
allocative = "0.3.3"
ansi-to-tui = "7.0.0"
anyhow = "1"
@@ -246,6 +250,7 @@ walkdir = "2.5.0"
webbrowser = "1.0"
which = "8"
wildmatch = "2.6.1"
zip = "2.4.2"
wiremock = "0.6"
zeroize = "1.8.2"
@@ -291,7 +296,7 @@ unwrap_used = "deny"
# cargo-shear cannot see the platform-specific openssl-sys usage, so we
# silence the false positive here instead of deleting a real dependency.
[workspace.metadata.cargo-shear]
ignored = ["icu_provider", "openssl-sys", "codex-utils-readiness"]
ignored = ["icu_provider", "openssl-sys", "codex-utils-readiness", "codex-secrets"]
[profile.release]
lto = "fat"

View File

@@ -526,7 +526,7 @@
]
},
"FunctionCallOutputPayload": {
"description": "The payload we send back to OpenAI when reporting a tool call result.\n\n`content` preserves the historical plain-string payload so downstream integrations (tests, logging, etc.) can keep treating tool output as `String`. When an MCP server returns richer data we additionally populate `content_items` with the structured form that the Responses/Chat Completions APIs understand.",
"description": "The payload we send back to OpenAI when reporting a tool call result.\n\n`content` preserves the historical plain-string payload so downstream integrations (tests, logging, etc.) can keep treating tool output as `String`. When an MCP server returns richer data we additionally populate `content_items` with the structured form that the Responses API understands.",
"properties": {
"content": {
"type": "string"
@@ -1049,10 +1049,7 @@
"description": "Initial collaboration mode to use when the TUI starts.",
"enum": [
"plan",
"code",
"pair_programming",
"execute",
"custom"
"default"
],
"type": "string"
},
@@ -1410,7 +1407,7 @@
]
},
"id": {
"description": "Set when using the chat completions API.",
"description": "Legacy id field retained for compatibility with older payloads.",
"type": [
"string",
"null"
@@ -2186,6 +2183,24 @@
},
"type": "object"
},
"SkillsRemoteReadParams": {
"type": "object"
},
"SkillsRemoteWriteParams": {
"properties": {
"hazelnutId": {
"type": "string"
},
"isPreload": {
"type": "boolean"
}
},
"required": [
"hazelnutId",
"isPreload"
],
"type": "object"
},
"TextElement": {
"properties": {
"byteRange": {
@@ -3315,6 +3330,54 @@
"title": "Skills/listRequest",
"type": "object"
},
{
"properties": {
"id": {
"$ref": "#/definitions/RequestId"
},
"method": {
"enum": [
"skills/remote/read"
],
"title": "Skills/remote/readRequestMethod",
"type": "string"
},
"params": {
"$ref": "#/definitions/SkillsRemoteReadParams"
}
},
"required": [
"id",
"method",
"params"
],
"title": "Skills/remote/readRequest",
"type": "object"
},
{
"properties": {
"id": {
"$ref": "#/definitions/RequestId"
},
"method": {
"enum": [
"skills/remote/write"
],
"title": "Skills/remote/writeRequestMethod",
"type": "string"
},
"params": {
"$ref": "#/definitions/SkillsRemoteWriteParams"
}
},
"required": [
"id",
"method",
"params"
],
"title": "Skills/remote/writeRequest",
"type": "object"
},
{
"properties": {
"id": {

View File

@@ -551,7 +551,7 @@
"$ref": "#/definitions/ModeKind"
}
],
"default": "code"
"default": "default"
},
"model_context_window": {
"format": "int64",
@@ -2012,6 +2012,59 @@
"title": "ListSkillsResponseEventMsg",
"type": "object"
},
{
"description": "List of remote skills available to the agent.",
"properties": {
"skills": {
"items": {
"$ref": "#/definitions/RemoteSkillSummary"
},
"type": "array"
},
"type": {
"enum": [
"list_remote_skills_response"
],
"title": "ListRemoteSkillsResponseEventMsgType",
"type": "string"
}
},
"required": [
"skills",
"type"
],
"title": "ListRemoteSkillsResponseEventMsg",
"type": "object"
},
{
"description": "Remote skill downloaded to local cache.",
"properties": {
"id": {
"type": "string"
},
"name": {
"type": "string"
},
"path": {
"type": "string"
},
"type": {
"enum": [
"remote_skill_downloaded"
],
"title": "RemoteSkillDownloadedEventMsgType",
"type": "string"
}
},
"required": [
"id",
"name",
"path",
"type"
],
"title": "RemoteSkillDownloadedEventMsg",
"type": "object"
},
{
"description": "Notification that skill data may have been updated and clients may want to reload.",
"properties": {
@@ -2857,7 +2910,7 @@
]
},
"FunctionCallOutputPayload": {
"description": "The payload we send back to OpenAI when reporting a tool call result.\n\n`content` preserves the historical plain-string payload so downstream integrations (tests, logging, etc.) can keep treating tool output as `String`. When an MCP server returns richer data we additionally populate `content_items` with the structured form that the Responses/Chat Completions APIs understand.",
"description": "The payload we send back to OpenAI when reporting a tool call result.\n\n`content` preserves the historical plain-string payload so downstream integrations (tests, logging, etc.) can keep treating tool output as `String`. When an MCP server returns richer data we additionally populate `content_items` with the structured form that the Responses API understands.",
"properties": {
"content": {
"type": "string"
@@ -3122,10 +3175,7 @@
"description": "Initial collaboration mode to use when the TUI starts.",
"enum": [
"plan",
"code",
"pair_programming",
"execute",
"custom"
"default"
],
"type": "string"
},
@@ -3432,6 +3482,25 @@
}
]
},
"RemoteSkillSummary": {
"properties": {
"description": {
"type": "string"
},
"id": {
"type": "string"
},
"name": {
"type": "string"
}
},
"required": [
"description",
"id",
"name"
],
"type": "object"
},
"RequestId": {
"anyOf": [
{
@@ -3689,7 +3758,7 @@
]
},
"id": {
"description": "Set when using the chat completions API.",
"description": "Legacy id field retained for compatibility with older payloads.",
"type": [
"string",
"null"
@@ -5078,7 +5147,7 @@
"$ref": "#/definitions/ModeKind"
}
],
"default": "code"
"default": "default"
},
"model_context_window": {
"format": "int64",
@@ -6539,6 +6608,59 @@
"title": "ListSkillsResponseEventMsg",
"type": "object"
},
{
"description": "List of remote skills available to the agent.",
"properties": {
"skills": {
"items": {
"$ref": "#/definitions/RemoteSkillSummary"
},
"type": "array"
},
"type": {
"enum": [
"list_remote_skills_response"
],
"title": "ListRemoteSkillsResponseEventMsgType",
"type": "string"
}
},
"required": [
"skills",
"type"
],
"title": "ListRemoteSkillsResponseEventMsg",
"type": "object"
},
{
"description": "Remote skill downloaded to local cache.",
"properties": {
"id": {
"type": "string"
},
"name": {
"type": "string"
},
"path": {
"type": "string"
},
"type": {
"enum": [
"remote_skill_downloaded"
],
"title": "RemoteSkillDownloadedEventMsgType",
"type": "string"
}
},
"required": [
"id",
"name",
"path",
"type"
],
"title": "RemoteSkillDownloadedEventMsg",
"type": "object"
},
{
"description": "Notification that skill data may have been updated and clients may want to reload.",
"properties": {

View File

@@ -1129,7 +1129,7 @@
"$ref": "#/definitions/ModeKind"
}
],
"default": "code"
"default": "default"
},
"model_context_window": {
"format": "int64",
@@ -2590,6 +2590,59 @@
"title": "ListSkillsResponseEventMsg",
"type": "object"
},
{
"description": "List of remote skills available to the agent.",
"properties": {
"skills": {
"items": {
"$ref": "#/definitions/RemoteSkillSummary"
},
"type": "array"
},
"type": {
"enum": [
"list_remote_skills_response"
],
"title": "ListRemoteSkillsResponseEventMsgType",
"type": "string"
}
},
"required": [
"skills",
"type"
],
"title": "ListRemoteSkillsResponseEventMsg",
"type": "object"
},
{
"description": "Remote skill downloaded to local cache.",
"properties": {
"id": {
"type": "string"
},
"name": {
"type": "string"
},
"path": {
"type": "string"
},
"type": {
"enum": [
"remote_skill_downloaded"
],
"title": "RemoteSkillDownloadedEventMsgType",
"type": "string"
}
},
"required": [
"id",
"name",
"path",
"type"
],
"title": "RemoteSkillDownloadedEventMsg",
"type": "object"
},
{
"description": "Notification that skill data may have been updated and clients may want to reload.",
"properties": {
@@ -3477,7 +3530,7 @@
]
},
"FunctionCallOutputPayload": {
"description": "The payload we send back to OpenAI when reporting a tool call result.\n\n`content` preserves the historical plain-string payload so downstream integrations (tests, logging, etc.) can keep treating tool output as `String`. When an MCP server returns richer data we additionally populate `content_items` with the structured form that the Responses/Chat Completions APIs understand.",
"description": "The payload we send back to OpenAI when reporting a tool call result.\n\n`content` preserves the historical plain-string payload so downstream integrations (tests, logging, etc.) can keep treating tool output as `String`. When an MCP server returns richer data we additionally populate `content_items` with the structured form that the Responses API understands.",
"properties": {
"content": {
"type": "string"
@@ -3901,10 +3954,7 @@
"description": "Initial collaboration mode to use when the TUI starts.",
"enum": [
"plan",
"code",
"pair_programming",
"execute",
"custom"
"default"
],
"type": "string"
},
@@ -4472,6 +4522,25 @@
],
"type": "object"
},
"RemoteSkillSummary": {
"properties": {
"description": {
"type": "string"
},
"id": {
"type": "string"
},
"name": {
"type": "string"
}
},
"required": [
"description",
"id",
"name"
],
"type": "object"
},
"RequestId": {
"anyOf": [
{
@@ -4729,7 +4798,7 @@
]
},
"id": {
"description": "Set when using the chat completions API.",
"description": "Legacy id field retained for compatibility with older payloads.",
"type": [
"string",
"null"

View File

@@ -718,6 +718,54 @@
"title": "Skills/listRequest",
"type": "object"
},
{
"properties": {
"id": {
"$ref": "#/definitions/RequestId"
},
"method": {
"enum": [
"skills/remote/read"
],
"title": "Skills/remote/readRequestMethod",
"type": "string"
},
"params": {
"$ref": "#/definitions/v2/SkillsRemoteReadParams"
}
},
"required": [
"id",
"method",
"params"
],
"title": "Skills/remote/readRequest",
"type": "object"
},
{
"properties": {
"id": {
"$ref": "#/definitions/RequestId"
},
"method": {
"enum": [
"skills/remote/write"
],
"title": "Skills/remote/writeRequestMethod",
"type": "string"
},
"params": {
"$ref": "#/definitions/v2/SkillsRemoteWriteParams"
}
},
"required": [
"id",
"method",
"params"
],
"title": "Skills/remote/writeRequest",
"type": "object"
},
{
"properties": {
"id": {
@@ -2391,7 +2439,7 @@
"$ref": "#/definitions/v2/ModeKind"
}
],
"default": "code"
"default": "default"
},
"model_context_window": {
"format": "int64",
@@ -3852,6 +3900,59 @@
"title": "ListSkillsResponseEventMsg",
"type": "object"
},
{
"description": "List of remote skills available to the agent.",
"properties": {
"skills": {
"items": {
"$ref": "#/definitions/v2/RemoteSkillSummary"
},
"type": "array"
},
"type": {
"enum": [
"list_remote_skills_response"
],
"title": "ListRemoteSkillsResponseEventMsgType",
"type": "string"
}
},
"required": [
"skills",
"type"
],
"title": "ListRemoteSkillsResponseEventMsg",
"type": "object"
},
{
"description": "Remote skill downloaded to local cache.",
"properties": {
"id": {
"type": "string"
},
"name": {
"type": "string"
},
"path": {
"type": "string"
},
"type": {
"enum": [
"remote_skill_downloaded"
],
"title": "RemoteSkillDownloadedEventMsgType",
"type": "string"
}
},
"required": [
"id",
"name",
"path",
"type"
],
"title": "RemoteSkillDownloadedEventMsg",
"type": "object"
},
{
"description": "Notification that skill data may have been updated and clients may want to reload.",
"properties": {
@@ -4965,7 +5066,7 @@
]
},
"FunctionCallOutputPayload": {
"description": "The payload we send back to OpenAI when reporting a tool call result.\n\n`content` preserves the historical plain-string payload so downstream integrations (tests, logging, etc.) can keep treating tool output as `String`. When an MCP server returns richer data we additionally populate `content_items` with the structured form that the Responses/Chat Completions APIs understand.",
"description": "The payload we send back to OpenAI when reporting a tool call result.\n\n`content` preserves the historical plain-string payload so downstream integrations (tests, logging, etc.) can keep treating tool output as `String`. When an MCP server returns richer data we additionally populate `content_items` with the structured form that the Responses API understands.",
"properties": {
"content": {
"type": "string"
@@ -5850,10 +5951,7 @@
"description": "Initial collaboration mode to use when the TUI starts.",
"enum": [
"plan",
"code",
"pair_programming",
"execute",
"custom"
"default"
],
"type": "string"
},
@@ -6354,6 +6452,25 @@
}
]
},
"RemoteSkillSummary": {
"properties": {
"description": {
"type": "string"
},
"id": {
"type": "string"
},
"name": {
"type": "string"
}
},
"required": [
"description",
"id",
"name"
],
"type": "object"
},
"RemoveConversationListenerParams": {
"$schema": "http://json-schema.org/draft-07/schema#",
"properties": {
@@ -6629,7 +6746,7 @@
]
},
"id": {
"description": "Set when using the chat completions API.",
"description": "Legacy id field retained for compatibility with older payloads.",
"type": [
"string",
"null"
@@ -11151,7 +11268,7 @@
]
},
"FunctionCallOutputPayload": {
"description": "The payload we send back to OpenAI when reporting a tool call result.\n\n`content` preserves the historical plain-string payload so downstream integrations (tests, logging, etc.) can keep treating tool output as `String`. When an MCP server returns richer data we additionally populate `content_items` with the structured form that the Responses/Chat Completions APIs understand.",
"description": "The payload we send back to OpenAI when reporting a tool call result.\n\n`content` preserves the historical plain-string payload so downstream integrations (tests, logging, etc.) can keep treating tool output as `String`. When an MCP server returns richer data we additionally populate `content_items` with the structured form that the Responses API understands.",
"properties": {
"content": {
"type": "string"
@@ -11778,10 +11895,7 @@
"description": "Initial collaboration mode to use when the TUI starts.",
"enum": [
"plan",
"code",
"pair_programming",
"execute",
"custom"
"default"
],
"type": "string"
},
@@ -11824,6 +11938,12 @@
"supportsPersonality": {
"default": false,
"type": "boolean"
},
"upgrade": {
"type": [
"string",
"null"
]
}
},
"required": [
@@ -12388,6 +12508,25 @@
"title": "ReasoningTextDeltaNotification",
"type": "object"
},
"RemoteSkillSummary": {
"properties": {
"description": {
"type": "string"
},
"id": {
"type": "string"
},
"name": {
"type": "string"
}
},
"required": [
"description",
"id",
"name"
],
"type": "object"
},
"ResidencyRequirement": {
"enum": [
"us"
@@ -12588,7 +12727,7 @@
]
},
"id": {
"description": "Set when using the chat completions API.",
"description": "Legacy id field retained for compatibility with older payloads.",
"type": [
"string",
"null"
@@ -13423,6 +13562,65 @@
"title": "SkillsListResponse",
"type": "object"
},
"SkillsRemoteReadParams": {
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "SkillsRemoteReadParams",
"type": "object"
},
"SkillsRemoteReadResponse": {
"$schema": "http://json-schema.org/draft-07/schema#",
"properties": {
"data": {
"items": {
"$ref": "#/definitions/v2/RemoteSkillSummary"
},
"type": "array"
}
},
"required": [
"data"
],
"title": "SkillsRemoteReadResponse",
"type": "object"
},
"SkillsRemoteWriteParams": {
"$schema": "http://json-schema.org/draft-07/schema#",
"properties": {
"hazelnutId": {
"type": "string"
},
"isPreload": {
"type": "boolean"
}
},
"required": [
"hazelnutId",
"isPreload"
],
"title": "SkillsRemoteWriteParams",
"type": "object"
},
"SkillsRemoteWriteResponse": {
"$schema": "http://json-schema.org/draft-07/schema#",
"properties": {
"id": {
"type": "string"
},
"name": {
"type": "string"
},
"path": {
"type": "string"
}
},
"required": [
"id",
"name",
"path"
],
"title": "SkillsRemoteWriteResponse",
"type": "object"
},
"SubAgentSource": {
"oneOf": [
{

View File

@@ -551,7 +551,7 @@
"$ref": "#/definitions/ModeKind"
}
],
"default": "code"
"default": "default"
},
"model_context_window": {
"format": "int64",
@@ -2012,6 +2012,59 @@
"title": "ListSkillsResponseEventMsg",
"type": "object"
},
{
"description": "List of remote skills available to the agent.",
"properties": {
"skills": {
"items": {
"$ref": "#/definitions/RemoteSkillSummary"
},
"type": "array"
},
"type": {
"enum": [
"list_remote_skills_response"
],
"title": "ListRemoteSkillsResponseEventMsgType",
"type": "string"
}
},
"required": [
"skills",
"type"
],
"title": "ListRemoteSkillsResponseEventMsg",
"type": "object"
},
{
"description": "Remote skill downloaded to local cache.",
"properties": {
"id": {
"type": "string"
},
"name": {
"type": "string"
},
"path": {
"type": "string"
},
"type": {
"enum": [
"remote_skill_downloaded"
],
"title": "RemoteSkillDownloadedEventMsgType",
"type": "string"
}
},
"required": [
"id",
"name",
"path",
"type"
],
"title": "RemoteSkillDownloadedEventMsg",
"type": "object"
},
{
"description": "Notification that skill data may have been updated and clients may want to reload.",
"properties": {
@@ -2857,7 +2910,7 @@
]
},
"FunctionCallOutputPayload": {
"description": "The payload we send back to OpenAI when reporting a tool call result.\n\n`content` preserves the historical plain-string payload so downstream integrations (tests, logging, etc.) can keep treating tool output as `String`. When an MCP server returns richer data we additionally populate `content_items` with the structured form that the Responses/Chat Completions APIs understand.",
"description": "The payload we send back to OpenAI when reporting a tool call result.\n\n`content` preserves the historical plain-string payload so downstream integrations (tests, logging, etc.) can keep treating tool output as `String`. When an MCP server returns richer data we additionally populate `content_items` with the structured form that the Responses API understands.",
"properties": {
"content": {
"type": "string"
@@ -3122,10 +3175,7 @@
"description": "Initial collaboration mode to use when the TUI starts.",
"enum": [
"plan",
"code",
"pair_programming",
"execute",
"custom"
"default"
],
"type": "string"
},
@@ -3432,6 +3482,25 @@
}
]
},
"RemoteSkillSummary": {
"properties": {
"description": {
"type": "string"
},
"id": {
"type": "string"
},
"name": {
"type": "string"
}
},
"required": [
"description",
"id",
"name"
],
"type": "object"
},
"RequestId": {
"anyOf": [
{
@@ -3689,7 +3758,7 @@
]
},
"id": {
"description": "Set when using the chat completions API.",
"description": "Legacy id field retained for compatibility with older payloads.",
"type": [
"string",
"null"

View File

@@ -144,7 +144,7 @@
]
},
"FunctionCallOutputPayload": {
"description": "The payload we send back to OpenAI when reporting a tool call result.\n\n`content` preserves the historical plain-string payload so downstream integrations (tests, logging, etc.) can keep treating tool output as `String`. When an MCP server returns richer data we additionally populate `content_items` with the structured form that the Responses/Chat Completions APIs understand.",
"description": "The payload we send back to OpenAI when reporting a tool call result.\n\n`content` preserves the historical plain-string payload so downstream integrations (tests, logging, etc.) can keep treating tool output as `String`. When an MCP server returns richer data we additionally populate `content_items` with the structured form that the Responses API understands.",
"properties": {
"content": {
"type": "string"
@@ -530,7 +530,7 @@
]
},
"id": {
"description": "Set when using the chat completions API.",
"description": "Legacy id field retained for compatibility with older payloads.",
"type": [
"string",
"null"

View File

@@ -551,7 +551,7 @@
"$ref": "#/definitions/ModeKind"
}
],
"default": "code"
"default": "default"
},
"model_context_window": {
"format": "int64",
@@ -2012,6 +2012,59 @@
"title": "ListSkillsResponseEventMsg",
"type": "object"
},
{
"description": "List of remote skills available to the agent.",
"properties": {
"skills": {
"items": {
"$ref": "#/definitions/RemoteSkillSummary"
},
"type": "array"
},
"type": {
"enum": [
"list_remote_skills_response"
],
"title": "ListRemoteSkillsResponseEventMsgType",
"type": "string"
}
},
"required": [
"skills",
"type"
],
"title": "ListRemoteSkillsResponseEventMsg",
"type": "object"
},
{
"description": "Remote skill downloaded to local cache.",
"properties": {
"id": {
"type": "string"
},
"name": {
"type": "string"
},
"path": {
"type": "string"
},
"type": {
"enum": [
"remote_skill_downloaded"
],
"title": "RemoteSkillDownloadedEventMsgType",
"type": "string"
}
},
"required": [
"id",
"name",
"path",
"type"
],
"title": "RemoteSkillDownloadedEventMsg",
"type": "object"
},
{
"description": "Notification that skill data may have been updated and clients may want to reload.",
"properties": {
@@ -2857,7 +2910,7 @@
]
},
"FunctionCallOutputPayload": {
"description": "The payload we send back to OpenAI when reporting a tool call result.\n\n`content` preserves the historical plain-string payload so downstream integrations (tests, logging, etc.) can keep treating tool output as `String`. When an MCP server returns richer data we additionally populate `content_items` with the structured form that the Responses/Chat Completions APIs understand.",
"description": "The payload we send back to OpenAI when reporting a tool call result.\n\n`content` preserves the historical plain-string payload so downstream integrations (tests, logging, etc.) can keep treating tool output as `String`. When an MCP server returns richer data we additionally populate `content_items` with the structured form that the Responses API understands.",
"properties": {
"content": {
"type": "string"
@@ -3122,10 +3175,7 @@
"description": "Initial collaboration mode to use when the TUI starts.",
"enum": [
"plan",
"code",
"pair_programming",
"execute",
"custom"
"default"
],
"type": "string"
},
@@ -3432,6 +3482,25 @@
}
]
},
"RemoteSkillSummary": {
"properties": {
"description": {
"type": "string"
},
"id": {
"type": "string"
},
"name": {
"type": "string"
}
},
"required": [
"description",
"id",
"name"
],
"type": "object"
},
"RequestId": {
"anyOf": [
{
@@ -3689,7 +3758,7 @@
]
},
"id": {
"description": "Set when using the chat completions API.",
"description": "Legacy id field retained for compatibility with older payloads.",
"type": [
"string",
"null"

View File

@@ -551,7 +551,7 @@
"$ref": "#/definitions/ModeKind"
}
],
"default": "code"
"default": "default"
},
"model_context_window": {
"format": "int64",
@@ -2012,6 +2012,59 @@
"title": "ListSkillsResponseEventMsg",
"type": "object"
},
{
"description": "List of remote skills available to the agent.",
"properties": {
"skills": {
"items": {
"$ref": "#/definitions/RemoteSkillSummary"
},
"type": "array"
},
"type": {
"enum": [
"list_remote_skills_response"
],
"title": "ListRemoteSkillsResponseEventMsgType",
"type": "string"
}
},
"required": [
"skills",
"type"
],
"title": "ListRemoteSkillsResponseEventMsg",
"type": "object"
},
{
"description": "Remote skill downloaded to local cache.",
"properties": {
"id": {
"type": "string"
},
"name": {
"type": "string"
},
"path": {
"type": "string"
},
"type": {
"enum": [
"remote_skill_downloaded"
],
"title": "RemoteSkillDownloadedEventMsgType",
"type": "string"
}
},
"required": [
"id",
"name",
"path",
"type"
],
"title": "RemoteSkillDownloadedEventMsg",
"type": "object"
},
{
"description": "Notification that skill data may have been updated and clients may want to reload.",
"properties": {
@@ -2857,7 +2910,7 @@
]
},
"FunctionCallOutputPayload": {
"description": "The payload we send back to OpenAI when reporting a tool call result.\n\n`content` preserves the historical plain-string payload so downstream integrations (tests, logging, etc.) can keep treating tool output as `String`. When an MCP server returns richer data we additionally populate `content_items` with the structured form that the Responses/Chat Completions APIs understand.",
"description": "The payload we send back to OpenAI when reporting a tool call result.\n\n`content` preserves the historical plain-string payload so downstream integrations (tests, logging, etc.) can keep treating tool output as `String`. When an MCP server returns richer data we additionally populate `content_items` with the structured form that the Responses API understands.",
"properties": {
"content": {
"type": "string"
@@ -3122,10 +3175,7 @@
"description": "Initial collaboration mode to use when the TUI starts.",
"enum": [
"plan",
"code",
"pair_programming",
"execute",
"custom"
"default"
],
"type": "string"
},
@@ -3432,6 +3482,25 @@
}
]
},
"RemoteSkillSummary": {
"properties": {
"description": {
"type": "string"
},
"id": {
"type": "string"
},
"name": {
"type": "string"
}
},
"required": [
"description",
"id",
"name"
],
"type": "object"
},
"RequestId": {
"anyOf": [
{
@@ -3689,7 +3758,7 @@
]
},
"id": {
"description": "Set when using the chat completions API.",
"description": "Legacy id field retained for compatibility with older payloads.",
"type": [
"string",
"null"

View File

@@ -59,6 +59,12 @@
"supportsPersonality": {
"default": false,
"type": "boolean"
},
"upgrade": {
"type": [
"string",
"null"
]
}
},
"required": [

View File

@@ -111,7 +111,7 @@
]
},
"FunctionCallOutputPayload": {
"description": "The payload we send back to OpenAI when reporting a tool call result.\n\n`content` preserves the historical plain-string payload so downstream integrations (tests, logging, etc.) can keep treating tool output as `String`. When an MCP server returns richer data we additionally populate `content_items` with the structured form that the Responses/Chat Completions APIs understand.",
"description": "The payload we send back to OpenAI when reporting a tool call result.\n\n`content` preserves the historical plain-string payload so downstream integrations (tests, logging, etc.) can keep treating tool output as `String`. When an MCP server returns richer data we additionally populate `content_items` with the structured form that the Responses API understands.",
"properties": {
"content": {
"type": "string"
@@ -417,7 +417,7 @@
]
},
"id": {
"description": "Set when using the chat completions API.",
"description": "Legacy id field retained for compatibility with older payloads.",
"type": [
"string",
"null"

View File

@@ -0,0 +1,5 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "SkillsRemoteReadParams",
"type": "object"
}

View File

@@ -0,0 +1,37 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"definitions": {
"RemoteSkillSummary": {
"properties": {
"description": {
"type": "string"
},
"id": {
"type": "string"
},
"name": {
"type": "string"
}
},
"required": [
"description",
"id",
"name"
],
"type": "object"
}
},
"properties": {
"data": {
"items": {
"$ref": "#/definitions/RemoteSkillSummary"
},
"type": "array"
}
},
"required": [
"data"
],
"title": "SkillsRemoteReadResponse",
"type": "object"
}

View File

@@ -0,0 +1,17 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"properties": {
"hazelnutId": {
"type": "string"
},
"isPreload": {
"type": "boolean"
}
},
"required": [
"hazelnutId",
"isPreload"
],
"title": "SkillsRemoteWriteParams",
"type": "object"
}

View File

@@ -0,0 +1,21 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"properties": {
"id": {
"type": "string"
},
"name": {
"type": "string"
},
"path": {
"type": "string"
}
},
"required": [
"id",
"name",
"path"
],
"title": "SkillsRemoteWriteResponse",
"type": "object"
}

View File

@@ -120,7 +120,7 @@
]
},
"FunctionCallOutputPayload": {
"description": "The payload we send back to OpenAI when reporting a tool call result.\n\n`content` preserves the historical plain-string payload so downstream integrations (tests, logging, etc.) can keep treating tool output as `String`. When an MCP server returns richer data we additionally populate `content_items` with the structured form that the Responses/Chat Completions APIs understand.",
"description": "The payload we send back to OpenAI when reporting a tool call result.\n\n`content` preserves the historical plain-string payload so downstream integrations (tests, logging, etc.) can keep treating tool output as `String`. When an MCP server returns richer data we additionally populate `content_items` with the structured form that the Responses API understands.",
"properties": {
"content": {
"type": "string"
@@ -433,7 +433,7 @@
]
},
"id": {
"description": "Set when using the chat completions API.",
"description": "Legacy id field retained for compatibility with older payloads.",
"type": [
"string",
"null"

View File

@@ -53,10 +53,7 @@
"description": "Initial collaboration mode to use when the TUI starts.",
"enum": [
"plan",
"code",
"pair_programming",
"execute",
"custom"
"default"
],
"type": "string"
},

View File

@@ -36,6 +36,8 @@ import type { ModelListParams } from "./v2/ModelListParams";
import type { ReviewStartParams } from "./v2/ReviewStartParams";
import type { SkillsConfigWriteParams } from "./v2/SkillsConfigWriteParams";
import type { SkillsListParams } from "./v2/SkillsListParams";
import type { SkillsRemoteReadParams } from "./v2/SkillsRemoteReadParams";
import type { SkillsRemoteWriteParams } from "./v2/SkillsRemoteWriteParams";
import type { ThreadArchiveParams } from "./v2/ThreadArchiveParams";
import type { ThreadForkParams } from "./v2/ThreadForkParams";
import type { ThreadListParams } from "./v2/ThreadListParams";
@@ -52,4 +54,4 @@ import type { TurnStartParams } from "./v2/TurnStartParams";
/**
* Request from the client to the server.
*/
export type ClientRequest ={ "method": "initialize", id: RequestId, params: InitializeParams, } | { "method": "thread/start", id: RequestId, params: ThreadStartParams, } | { "method": "thread/resume", id: RequestId, params: ThreadResumeParams, } | { "method": "thread/fork", id: RequestId, params: ThreadForkParams, } | { "method": "thread/archive", id: RequestId, params: ThreadArchiveParams, } | { "method": "thread/name/set", id: RequestId, params: ThreadSetNameParams, } | { "method": "thread/unarchive", id: RequestId, params: ThreadUnarchiveParams, } | { "method": "thread/rollback", id: RequestId, params: ThreadRollbackParams, } | { "method": "thread/list", id: RequestId, params: ThreadListParams, } | { "method": "thread/loaded/list", id: RequestId, params: ThreadLoadedListParams, } | { "method": "thread/read", id: RequestId, params: ThreadReadParams, } | { "method": "skills/list", id: RequestId, params: SkillsListParams, } | { "method": "app/list", id: RequestId, params: AppsListParams, } | { "method": "skills/config/write", id: RequestId, params: SkillsConfigWriteParams, } | { "method": "turn/start", id: RequestId, params: TurnStartParams, } | { "method": "turn/interrupt", id: RequestId, params: TurnInterruptParams, } | { "method": "review/start", id: RequestId, params: ReviewStartParams, } | { "method": "model/list", id: RequestId, params: ModelListParams, } | { "method": "mcpServer/oauth/login", id: RequestId, params: McpServerOauthLoginParams, } | { "method": "config/mcpServer/reload", id: RequestId, params: undefined, } | { "method": "mcpServerStatus/list", id: RequestId, params: ListMcpServerStatusParams, } | { "method": "account/login/start", id: RequestId, params: LoginAccountParams, } | { "method": "account/login/cancel", id: RequestId, params: CancelLoginAccountParams, } | { "method": "account/logout", id: RequestId, params: undefined, } | { "method": "account/rateLimits/read", id: RequestId, params: undefined, } | { "method": "feedback/upload", id: RequestId, params: FeedbackUploadParams, } | { "method": "command/exec", id: RequestId, params: CommandExecParams, } | { "method": "config/read", id: RequestId, params: ConfigReadParams, } | { "method": "config/value/write", id: RequestId, params: ConfigValueWriteParams, } | { "method": "config/batchWrite", id: RequestId, params: ConfigBatchWriteParams, } | { "method": "configRequirements/read", id: RequestId, params: undefined, } | { "method": "account/read", id: RequestId, params: GetAccountParams, } | { "method": "newConversation", id: RequestId, params: NewConversationParams, } | { "method": "getConversationSummary", id: RequestId, params: GetConversationSummaryParams, } | { "method": "listConversations", id: RequestId, params: ListConversationsParams, } | { "method": "resumeConversation", id: RequestId, params: ResumeConversationParams, } | { "method": "forkConversation", id: RequestId, params: ForkConversationParams, } | { "method": "archiveConversation", id: RequestId, params: ArchiveConversationParams, } | { "method": "sendUserMessage", id: RequestId, params: SendUserMessageParams, } | { "method": "sendUserTurn", id: RequestId, params: SendUserTurnParams, } | { "method": "interruptConversation", id: RequestId, params: InterruptConversationParams, } | { "method": "addConversationListener", id: RequestId, params: AddConversationListenerParams, } | { "method": "removeConversationListener", id: RequestId, params: RemoveConversationListenerParams, } | { "method": "gitDiffToRemote", id: RequestId, params: GitDiffToRemoteParams, } | 
{ "method": "loginApiKey", id: RequestId, params: LoginApiKeyParams, } | { "method": "loginChatGpt", id: RequestId, params: undefined, } | { "method": "cancelLoginChatGpt", id: RequestId, params: CancelLoginChatGptParams, } | { "method": "logoutChatGpt", id: RequestId, params: undefined, } | { "method": "getAuthStatus", id: RequestId, params: GetAuthStatusParams, } | { "method": "getUserSavedConfig", id: RequestId, params: undefined, } | { "method": "setDefaultModel", id: RequestId, params: SetDefaultModelParams, } | { "method": "getUserAgent", id: RequestId, params: undefined, } | { "method": "userInfo", id: RequestId, params: undefined, } | { "method": "fuzzyFileSearch", id: RequestId, params: FuzzyFileSearchParams, } | { "method": "execOneOffCommand", id: RequestId, params: ExecOneOffCommandParams, };
export type ClientRequest ={ "method": "initialize", id: RequestId, params: InitializeParams, } | { "method": "thread/start", id: RequestId, params: ThreadStartParams, } | { "method": "thread/resume", id: RequestId, params: ThreadResumeParams, } | { "method": "thread/fork", id: RequestId, params: ThreadForkParams, } | { "method": "thread/archive", id: RequestId, params: ThreadArchiveParams, } | { "method": "thread/name/set", id: RequestId, params: ThreadSetNameParams, } | { "method": "thread/unarchive", id: RequestId, params: ThreadUnarchiveParams, } | { "method": "thread/rollback", id: RequestId, params: ThreadRollbackParams, } | { "method": "thread/list", id: RequestId, params: ThreadListParams, } | { "method": "thread/loaded/list", id: RequestId, params: ThreadLoadedListParams, } | { "method": "thread/read", id: RequestId, params: ThreadReadParams, } | { "method": "skills/list", id: RequestId, params: SkillsListParams, } | { "method": "skills/remote/read", id: RequestId, params: SkillsRemoteReadParams, } | { "method": "skills/remote/write", id: RequestId, params: SkillsRemoteWriteParams, } | { "method": "app/list", id: RequestId, params: AppsListParams, } | { "method": "skills/config/write", id: RequestId, params: SkillsConfigWriteParams, } | { "method": "turn/start", id: RequestId, params: TurnStartParams, } | { "method": "turn/interrupt", id: RequestId, params: TurnInterruptParams, } | { "method": "review/start", id: RequestId, params: ReviewStartParams, } | { "method": "model/list", id: RequestId, params: ModelListParams, } | { "method": "mcpServer/oauth/login", id: RequestId, params: McpServerOauthLoginParams, } | { "method": "config/mcpServer/reload", id: RequestId, params: undefined, } | { "method": "mcpServerStatus/list", id: RequestId, params: ListMcpServerStatusParams, } | { "method": "account/login/start", id: RequestId, params: LoginAccountParams, } | { "method": "account/login/cancel", id: RequestId, params: CancelLoginAccountParams, } | { "method": "account/logout", id: RequestId, params: undefined, } | { "method": "account/rateLimits/read", id: RequestId, params: undefined, } | { "method": "feedback/upload", id: RequestId, params: FeedbackUploadParams, } | { "method": "command/exec", id: RequestId, params: CommandExecParams, } | { "method": "config/read", id: RequestId, params: ConfigReadParams, } | { "method": "config/value/write", id: RequestId, params: ConfigValueWriteParams, } | { "method": "config/batchWrite", id: RequestId, params: ConfigBatchWriteParams, } | { "method": "configRequirements/read", id: RequestId, params: undefined, } | { "method": "account/read", id: RequestId, params: GetAccountParams, } | { "method": "newConversation", id: RequestId, params: NewConversationParams, } | { "method": "getConversationSummary", id: RequestId, params: GetConversationSummaryParams, } | { "method": "listConversations", id: RequestId, params: ListConversationsParams, } | { "method": "resumeConversation", id: RequestId, params: ResumeConversationParams, } | { "method": "forkConversation", id: RequestId, params: ForkConversationParams, } | { "method": "archiveConversation", id: RequestId, params: ArchiveConversationParams, } | { "method": "sendUserMessage", id: RequestId, params: SendUserMessageParams, } | { "method": "sendUserTurn", id: RequestId, params: SendUserTurnParams, } | { "method": "interruptConversation", id: RequestId, params: InterruptConversationParams, } | { "method": "addConversationListener", id: RequestId, params: AddConversationListenerParams, } | { "method": 
"removeConversationListener", id: RequestId, params: RemoveConversationListenerParams, } | { "method": "gitDiffToRemote", id: RequestId, params: GitDiffToRemoteParams, } | { "method": "loginApiKey", id: RequestId, params: LoginApiKeyParams, } | { "method": "loginChatGpt", id: RequestId, params: undefined, } | { "method": "cancelLoginChatGpt", id: RequestId, params: CancelLoginChatGptParams, } | { "method": "logoutChatGpt", id: RequestId, params: undefined, } | { "method": "getAuthStatus", id: RequestId, params: GetAuthStatusParams, } | { "method": "getUserSavedConfig", id: RequestId, params: undefined, } | { "method": "setDefaultModel", id: RequestId, params: SetDefaultModelParams, } | { "method": "getUserAgent", id: RequestId, params: undefined, } | { "method": "userInfo", id: RequestId, params: undefined, } | { "method": "fuzzyFileSearch", id: RequestId, params: FuzzyFileSearchParams, } | { "method": "execOneOffCommand", id: RequestId, params: ExecOneOffCommandParams, };

View File

@@ -33,6 +33,7 @@ import type { GetHistoryEntryResponseEvent } from "./GetHistoryEntryResponseEven
import type { ItemCompletedEvent } from "./ItemCompletedEvent";
import type { ItemStartedEvent } from "./ItemStartedEvent";
import type { ListCustomPromptsResponseEvent } from "./ListCustomPromptsResponseEvent";
import type { ListRemoteSkillsResponseEvent } from "./ListRemoteSkillsResponseEvent";
import type { ListSkillsResponseEvent } from "./ListSkillsResponseEvent";
import type { McpListToolsResponseEvent } from "./McpListToolsResponseEvent";
import type { McpStartupCompleteEvent } from "./McpStartupCompleteEvent";
@@ -45,6 +46,7 @@ import type { PlanDeltaEvent } from "./PlanDeltaEvent";
import type { RawResponseItemEvent } from "./RawResponseItemEvent";
import type { ReasoningContentDeltaEvent } from "./ReasoningContentDeltaEvent";
import type { ReasoningRawContentDeltaEvent } from "./ReasoningRawContentDeltaEvent";
import type { RemoteSkillDownloadedEvent } from "./RemoteSkillDownloadedEvent";
import type { RequestUserInputEvent } from "./RequestUserInputEvent";
import type { ReviewRequest } from "./ReviewRequest";
import type { SessionConfiguredEvent } from "./SessionConfiguredEvent";
@@ -70,4 +72,4 @@ import type { WebSearchEndEvent } from "./WebSearchEndEvent";
* Response event from the agent
* NOTE: Make sure none of these values have optional types, as it will mess up the extension code-gen.
*/
export type EventMsg = { "type": "error" } & ErrorEvent | { "type": "warning" } & WarningEvent | { "type": "context_compacted" } & ContextCompactedEvent | { "type": "thread_rolled_back" } & ThreadRolledBackEvent | { "type": "task_started" } & TurnStartedEvent | { "type": "task_complete" } & TurnCompleteEvent | { "type": "token_count" } & TokenCountEvent | { "type": "agent_message" } & AgentMessageEvent | { "type": "user_message" } & UserMessageEvent | { "type": "agent_message_delta" } & AgentMessageDeltaEvent | { "type": "agent_reasoning" } & AgentReasoningEvent | { "type": "agent_reasoning_delta" } & AgentReasoningDeltaEvent | { "type": "agent_reasoning_raw_content" } & AgentReasoningRawContentEvent | { "type": "agent_reasoning_raw_content_delta" } & AgentReasoningRawContentDeltaEvent | { "type": "agent_reasoning_section_break" } & AgentReasoningSectionBreakEvent | { "type": "session_configured" } & SessionConfiguredEvent | { "type": "thread_name_updated" } & ThreadNameUpdatedEvent | { "type": "mcp_startup_update" } & McpStartupUpdateEvent | { "type": "mcp_startup_complete" } & McpStartupCompleteEvent | { "type": "mcp_tool_call_begin" } & McpToolCallBeginEvent | { "type": "mcp_tool_call_end" } & McpToolCallEndEvent | { "type": "web_search_begin" } & WebSearchBeginEvent | { "type": "web_search_end" } & WebSearchEndEvent | { "type": "exec_command_begin" } & ExecCommandBeginEvent | { "type": "exec_command_output_delta" } & ExecCommandOutputDeltaEvent | { "type": "terminal_interaction" } & TerminalInteractionEvent | { "type": "exec_command_end" } & ExecCommandEndEvent | { "type": "view_image_tool_call" } & ViewImageToolCallEvent | { "type": "exec_approval_request" } & ExecApprovalRequestEvent | { "type": "request_user_input" } & RequestUserInputEvent | { "type": "dynamic_tool_call_request" } & DynamicToolCallRequest | { "type": "elicitation_request" } & ElicitationRequestEvent | { "type": "apply_patch_approval_request" } & ApplyPatchApprovalRequestEvent | { "type": "deprecation_notice" } & DeprecationNoticeEvent | { "type": "background_event" } & BackgroundEventEvent | { "type": "undo_started" } & UndoStartedEvent | { "type": "undo_completed" } & UndoCompletedEvent | { "type": "stream_error" } & StreamErrorEvent | { "type": "patch_apply_begin" } & PatchApplyBeginEvent | { "type": "patch_apply_end" } & PatchApplyEndEvent | { "type": "turn_diff" } & TurnDiffEvent | { "type": "get_history_entry_response" } & GetHistoryEntryResponseEvent | { "type": "mcp_list_tools_response" } & McpListToolsResponseEvent | { "type": "list_custom_prompts_response" } & ListCustomPromptsResponseEvent | { "type": "list_skills_response" } & ListSkillsResponseEvent | { "type": "skills_update_available" } | { "type": "plan_update" } & UpdatePlanArgs | { "type": "turn_aborted" } & TurnAbortedEvent | { "type": "shutdown_complete" } | { "type": "entered_review_mode" } & ReviewRequest | { "type": "exited_review_mode" } & ExitedReviewModeEvent | { "type": "raw_response_item" } & RawResponseItemEvent | { "type": "item_started" } & ItemStartedEvent | { "type": "item_completed" } & ItemCompletedEvent | { "type": "agent_message_content_delta" } & AgentMessageContentDeltaEvent | { "type": "plan_delta" } & PlanDeltaEvent | { "type": "reasoning_content_delta" } & ReasoningContentDeltaEvent | { "type": "reasoning_raw_content_delta" } & ReasoningRawContentDeltaEvent | { "type": "collab_agent_spawn_begin" } & CollabAgentSpawnBeginEvent | { "type": "collab_agent_spawn_end" } & CollabAgentSpawnEndEvent | { "type": 
"collab_agent_interaction_begin" } & CollabAgentInteractionBeginEvent | { "type": "collab_agent_interaction_end" } & CollabAgentInteractionEndEvent | { "type": "collab_waiting_begin" } & CollabWaitingBeginEvent | { "type": "collab_waiting_end" } & CollabWaitingEndEvent | { "type": "collab_close_begin" } & CollabCloseBeginEvent | { "type": "collab_close_end" } & CollabCloseEndEvent;
export type EventMsg = { "type": "error" } & ErrorEvent | { "type": "warning" } & WarningEvent | { "type": "context_compacted" } & ContextCompactedEvent | { "type": "thread_rolled_back" } & ThreadRolledBackEvent | { "type": "task_started" } & TurnStartedEvent | { "type": "task_complete" } & TurnCompleteEvent | { "type": "token_count" } & TokenCountEvent | { "type": "agent_message" } & AgentMessageEvent | { "type": "user_message" } & UserMessageEvent | { "type": "agent_message_delta" } & AgentMessageDeltaEvent | { "type": "agent_reasoning" } & AgentReasoningEvent | { "type": "agent_reasoning_delta" } & AgentReasoningDeltaEvent | { "type": "agent_reasoning_raw_content" } & AgentReasoningRawContentEvent | { "type": "agent_reasoning_raw_content_delta" } & AgentReasoningRawContentDeltaEvent | { "type": "agent_reasoning_section_break" } & AgentReasoningSectionBreakEvent | { "type": "session_configured" } & SessionConfiguredEvent | { "type": "thread_name_updated" } & ThreadNameUpdatedEvent | { "type": "mcp_startup_update" } & McpStartupUpdateEvent | { "type": "mcp_startup_complete" } & McpStartupCompleteEvent | { "type": "mcp_tool_call_begin" } & McpToolCallBeginEvent | { "type": "mcp_tool_call_end" } & McpToolCallEndEvent | { "type": "web_search_begin" } & WebSearchBeginEvent | { "type": "web_search_end" } & WebSearchEndEvent | { "type": "exec_command_begin" } & ExecCommandBeginEvent | { "type": "exec_command_output_delta" } & ExecCommandOutputDeltaEvent | { "type": "terminal_interaction" } & TerminalInteractionEvent | { "type": "exec_command_end" } & ExecCommandEndEvent | { "type": "view_image_tool_call" } & ViewImageToolCallEvent | { "type": "exec_approval_request" } & ExecApprovalRequestEvent | { "type": "request_user_input" } & RequestUserInputEvent | { "type": "dynamic_tool_call_request" } & DynamicToolCallRequest | { "type": "elicitation_request" } & ElicitationRequestEvent | { "type": "apply_patch_approval_request" } & ApplyPatchApprovalRequestEvent | { "type": "deprecation_notice" } & DeprecationNoticeEvent | { "type": "background_event" } & BackgroundEventEvent | { "type": "undo_started" } & UndoStartedEvent | { "type": "undo_completed" } & UndoCompletedEvent | { "type": "stream_error" } & StreamErrorEvent | { "type": "patch_apply_begin" } & PatchApplyBeginEvent | { "type": "patch_apply_end" } & PatchApplyEndEvent | { "type": "turn_diff" } & TurnDiffEvent | { "type": "get_history_entry_response" } & GetHistoryEntryResponseEvent | { "type": "mcp_list_tools_response" } & McpListToolsResponseEvent | { "type": "list_custom_prompts_response" } & ListCustomPromptsResponseEvent | { "type": "list_skills_response" } & ListSkillsResponseEvent | { "type": "list_remote_skills_response" } & ListRemoteSkillsResponseEvent | { "type": "remote_skill_downloaded" } & RemoteSkillDownloadedEvent | { "type": "skills_update_available" } | { "type": "plan_update" } & UpdatePlanArgs | { "type": "turn_aborted" } & TurnAbortedEvent | { "type": "shutdown_complete" } | { "type": "entered_review_mode" } & ReviewRequest | { "type": "exited_review_mode" } & ExitedReviewModeEvent | { "type": "raw_response_item" } & RawResponseItemEvent | { "type": "item_started" } & ItemStartedEvent | { "type": "item_completed" } & ItemCompletedEvent | { "type": "agent_message_content_delta" } & AgentMessageContentDeltaEvent | { "type": "plan_delta" } & PlanDeltaEvent | { "type": "reasoning_content_delta" } & ReasoningContentDeltaEvent | { "type": "reasoning_raw_content_delta" } & ReasoningRawContentDeltaEvent | { "type": 
"collab_agent_spawn_begin" } & CollabAgentSpawnBeginEvent | { "type": "collab_agent_spawn_end" } & CollabAgentSpawnEndEvent | { "type": "collab_agent_interaction_begin" } & CollabAgentInteractionBeginEvent | { "type": "collab_agent_interaction_end" } & CollabAgentInteractionEndEvent | { "type": "collab_waiting_begin" } & CollabWaitingBeginEvent | { "type": "collab_waiting_end" } & CollabWaitingEndEvent | { "type": "collab_close_begin" } & CollabCloseBeginEvent | { "type": "collab_close_end" } & CollabCloseEndEvent;

View File

@@ -9,7 +9,6 @@ import type { FunctionCallOutputContentItem } from "./FunctionCallOutputContentI
* `content` preserves the historical plain-string payload so downstream
* integrations (tests, logging, etc.) can keep treating tool output as
* `String`. When an MCP server returns richer data we additionally populate
* `content_items` with the structured form that the Responses/Chat
* Completions APIs understand.
* `content_items` with the structured form that the Responses API understands.
*/
export type FunctionCallOutputPayload = { content: string, content_items: Array<FunctionCallOutputContentItem> | null, success: boolean | null, };
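
As a quick illustration of the split the doc comment describes, here is a minimal sketch; the import path and the literal values are assumptions for illustration, not part of this diff:

```ts
import type { FunctionCallOutputPayload } from "./FunctionCallOutputPayload";

// Plain-string tool result: consumers keep treating `content` as a String.
// `content_items` stays null unless an MCP server returned structured data.
const payload: FunctionCallOutputPayload = {
  content: "3 files changed, 12 insertions(+)",
  content_items: null,
  success: true,
};
```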

View File

@@ -0,0 +1,9 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
import type { RemoteSkillSummary } from "./RemoteSkillSummary";
/**
* Response payload for `Op::ListRemoteSkills`.
*/
export type ListRemoteSkillsResponseEvent = { skills: Array<RemoteSkillSummary>, };

View File

@@ -5,4 +5,4 @@
/**
* Initial collaboration mode to use when the TUI starts.
*/
export type ModeKind = "plan" | "code" | "pair_programming" | "execute" | "custom";
export type ModeKind = "plan" | "default";

View File

@@ -0,0 +1,8 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
/**
* Response payload for `Op::DownloadRemoteSkill`.
*/
export type RemoteSkillDownloadedEvent = { id: string, name: string, path: string, };

View File

@@ -0,0 +1,5 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
export type RemoteSkillSummary = { id: string, name: string, description: string, };

View File

@@ -98,6 +98,7 @@ export type { ItemStartedEvent } from "./ItemStartedEvent";
export type { ListConversationsParams } from "./ListConversationsParams";
export type { ListConversationsResponse } from "./ListConversationsResponse";
export type { ListCustomPromptsResponseEvent } from "./ListCustomPromptsResponseEvent";
export type { ListRemoteSkillsResponseEvent } from "./ListRemoteSkillsResponseEvent";
export type { ListSkillsResponseEvent } from "./ListSkillsResponseEvent";
export type { LocalShellAction } from "./LocalShellAction";
export type { LocalShellExecAction } from "./LocalShellExecAction";
@@ -140,6 +141,8 @@ export type { ReasoningItemContent } from "./ReasoningItemContent";
export type { ReasoningItemReasoningSummary } from "./ReasoningItemReasoningSummary";
export type { ReasoningRawContentDeltaEvent } from "./ReasoningRawContentDeltaEvent";
export type { ReasoningSummary } from "./ReasoningSummary";
export type { RemoteSkillDownloadedEvent } from "./RemoteSkillDownloadedEvent";
export type { RemoteSkillSummary } from "./RemoteSkillSummary";
export type { RemoveConversationListenerParams } from "./RemoveConversationListenerParams";
export type { RemoveConversationSubscriptionResponse } from "./RemoveConversationSubscriptionResponse";
export type { RequestId } from "./RequestId";

View File

@@ -6,8 +6,8 @@ export type AppsListParams = {
/**
* Opaque pagination cursor returned by a previous call.
*/
cursor: string | null,
cursor?: string | null,
/**
* Optional page size; defaults to a reasonable server-side value.
*/
limit: number | null, };
limit?: number | null, };

View File

@@ -13,4 +13,4 @@ export type ChatgptAuthTokensRefreshParams = { reason: ChatgptAuthTokensRefreshR
* This may be `null` when the prior ID token did not include a workspace
* identifier (`chatgpt_account_id`) or when the token could not be parsed.
*/
previousAccountId: string | null, };
previousAccountId?: string | null, };

View File

@@ -3,4 +3,4 @@
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
import type { SandboxPolicy } from "./SandboxPolicy";
export type CommandExecParams = { command: Array<string>, timeoutMs: number | null, cwd: string | null, sandboxPolicy: SandboxPolicy | null, };
export type CommandExecParams = { command: Array<string>, timeoutMs?: number | null, cwd?: string | null, sandboxPolicy?: SandboxPolicy | null, };

View File

@@ -8,20 +8,20 @@ export type CommandExecutionRequestApprovalParams = { threadId: string, turnId:
/**
* Optional explanatory reason (e.g. request for network access).
*/
reason: string | null,
reason?: string | null,
/**
* The command to be executed.
*/
command?: string,
command?: string | null,
/**
* The command's working directory.
*/
cwd?: string,
cwd?: string | null,
/**
* Best-effort parsed command actions for friendly display.
*/
commandActions?: Array<CommandAction>,
commandActions?: Array<CommandAction> | null,
/**
* Optional proposed execpolicy amendment to allow similar commands without prompting.
*/
proposedExecpolicyAmendment: ExecPolicyAmendment | null, };
proposedExecpolicyAmendment?: ExecPolicyAmendment | null, };

View File

@@ -7,4 +7,4 @@ export type ConfigBatchWriteParams = { edits: Array<ConfigEdit>,
/**
* Path to the config file to write; defaults to the user's `config.toml` when omitted.
*/
filePath: string | null, expectedVersion: string | null, };
filePath?: string | null, expectedVersion?: string | null, };

View File

@@ -8,4 +8,4 @@ export type ConfigReadParams = { includeLayers: boolean,
* return the effective config as seen from that directory (i.e., including any
* project layers between `cwd` and the project/repo root).
*/
cwd: string | null, };
cwd?: string | null, };

View File

@@ -8,4 +8,4 @@ export type ConfigValueWriteParams = { keyPath: string, value: JsonValue, mergeS
/**
* Path to the config file to write; defaults to the user's `config.toml` when omitted.
*/
filePath: string | null, expectedVersion: string | null, };
filePath?: string | null, expectedVersion?: string | null, };

View File

@@ -2,4 +2,4 @@
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
export type FeedbackUploadParams = { classification: string, reason: string | null, threadId: string | null, includeLogs: boolean, };
export type FeedbackUploadParams = { classification: string, reason?: string | null, threadId?: string | null, includeLogs: boolean, };

View File

@@ -6,9 +6,9 @@ export type FileChangeRequestApprovalParams = { threadId: string, turnId: string
/**
* Optional explanatory reason (e.g. request for extra write access).
*/
reason: string | null,
reason?: string | null,
/**
* [UNSTABLE] When set, the agent is asking the user to allow writes under this root
* for the remainder of the session (unclear if this is honored today).
*/
grantRoot: string | null, };
grantRoot?: string | null, };

View File

@@ -6,8 +6,8 @@ export type ListMcpServerStatusParams = {
/**
* Opaque pagination cursor returned by a previous call.
*/
cursor: string | null,
cursor?: string | null,
/**
* Optional page size; defaults to a server-defined value.
*/
limit: number | null, };
limit?: number | null, };

View File

@@ -2,4 +2,4 @@
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
export type McpServerOauthLoginParams = { name: string, scopes?: Array<string>, timeoutSecs?: bigint, };
export type McpServerOauthLoginParams = { name: string, scopes?: Array<string> | null, timeoutSecs?: bigint | null, };

View File

@@ -5,4 +5,4 @@ import type { InputModality } from "../InputModality";
import type { ReasoningEffort } from "../ReasoningEffort";
import type { ReasoningEffortOption } from "./ReasoningEffortOption";
export type Model = { id: string, model: string, displayName: string, description: string, supportedReasoningEfforts: Array<ReasoningEffortOption>, defaultReasoningEffort: ReasoningEffort, inputModalities: Array<InputModality>, supportsPersonality: boolean, isDefault: boolean, };
export type Model = { id: string, model: string, upgrade: string | null, displayName: string, description: string, supportedReasoningEfforts: Array<ReasoningEffortOption>, defaultReasoningEffort: ReasoningEffort, inputModalities: Array<InputModality>, supportsPersonality: boolean, isDefault: boolean, };
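
One plausible client-side use of the new `upgrade` field, sketched under the assumption that a non-null value names a suggested replacement model; the helper name and import path are illustrative:

```ts
import type { Model } from "./v2/Model";

// Hypothetical client check: a non-null `upgrade` names the model id the
// server suggests switching to, so the UI can surface an upgrade banner.
function shouldShowUpgradeBanner(model: Model): boolean {
  return model.upgrade !== null;
}
```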

View File

@@ -6,8 +6,8 @@ export type ModelListParams = {
/**
* Opaque pagination cursor returned by a previous call.
*/
cursor: string | null,
cursor?: string | null,
/**
* Optional page size; defaults to a reasonable server-side value.
*/
limit: number | null, };
limit?: number | null, };

View File

@@ -0,0 +1,5 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
export type RemoteSkillSummary = { id: string, name: string, description: string, };

View File

@@ -9,4 +9,4 @@ export type ReviewStartParams = { threadId: string, target: ReviewTarget,
* Where to run the review: inline (default) on the current thread or
* detached on a new thread (returned in `reviewThreadId`).
*/
delivery: ReviewDelivery | null, };
delivery?: ReviewDelivery | null, };

View File

@@ -0,0 +1,5 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
export type SkillsRemoteReadParams = Record<string, never>;

View File

@@ -0,0 +1,6 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
import type { RemoteSkillSummary } from "./RemoteSkillSummary";
export type SkillsRemoteReadResponse = { data: Array<RemoteSkillSummary>, };

View File

@@ -0,0 +1,5 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
export type SkillsRemoteWriteParams = { hazelnutId: string, isPreload: boolean, };

View File

@@ -0,0 +1,5 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
export type SkillsRemoteWriteResponse = { id: string, name: string, path: string, };

View File

@@ -19,8 +19,8 @@ export type ThreadForkParams = { threadId: string,
* [UNSTABLE] Specify the rollout path to fork from.
* If specified, the thread_id param will be ignored.
*/
path: string | null,
path?: string | null,
/**
* Configuration overrides for the forked thread, if any.
*/
model: string | null, modelProvider: string | null, cwd: string | null, approvalPolicy: AskForApproval | null, sandbox: SandboxMode | null, config: { [key in string]?: JsonValue } | null, baseInstructions: string | null, developerInstructions: string | null, };
model?: string | null, modelProvider?: string | null, cwd?: string | null, approvalPolicy?: AskForApproval | null, sandbox?: SandboxMode | null, config?: { [key in string]?: JsonValue } | null, baseInstructions?: string | null, developerInstructions?: string | null, };

View File

@@ -1,7 +1,6 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
import type { WebSearchAction } from "../WebSearchAction";
import type { JsonValue } from "../serde_json/JsonValue";
import type { CollabAgentState } from "./CollabAgentState";
import type { CollabAgentTool } from "./CollabAgentTool";
@@ -14,6 +13,7 @@ import type { McpToolCallResult } from "./McpToolCallResult";
import type { McpToolCallStatus } from "./McpToolCallStatus";
import type { PatchApplyStatus } from "./PatchApplyStatus";
import type { UserInput } from "./UserInput";
import type { WebSearchAction } from "./WebSearchAction";
export type ThreadItem = { "type": "userMessage", id: string, content: Array<UserInput>, } | { "type": "agentMessage", id: string, text: string, } | { "type": "plan", id: string, text: string, } | { "type": "reasoning", id: string, summary: Array<string>, content: Array<string>, } | { "type": "commandExecution", id: string,
/**

View File

@@ -8,27 +8,27 @@ export type ThreadListParams = {
/**
* Opaque pagination cursor returned by a previous call.
*/
cursor: string | null,
cursor?: string | null,
/**
* Optional page size; defaults to a reasonable server-side value.
*/
limit: number | null,
limit?: number | null,
/**
* Optional sort key; defaults to created_at.
*/
sortKey: ThreadSortKey | null,
sortKey?: ThreadSortKey | null,
/**
* Optional provider filter; when set, only sessions recorded under these
* providers are returned. When present but empty, includes all providers.
*/
modelProviders: Array<string> | null,
modelProviders?: Array<string> | null,
/**
* Optional source filter; when set, only sessions from these source kinds
* are returned. When omitted or empty, defaults to interactive sources.
*/
sourceKinds: Array<ThreadSourceKind> | null,
sourceKinds?: Array<ThreadSourceKind> | null,
/**
* Optional archived filter; when set to true, only archived threads are returned.
* If false or null, only non-archived threads are returned.
*/
archived: boolean | null, };
archived?: boolean | null, };
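
Since every filter on `ThreadListParams` is now optional as well as nullable, TypeScript callers can omit fields entirely. A minimal sketch (the cursor value and import path are illustrative):

```ts
import type { ThreadListParams } from "./v2/ThreadListParams";

// First page: every filter may simply be left out now.
const firstPage: ThreadListParams = {};

// Follow-up page: pass only what you need; omitted fields fall back to
// the server-side defaults the doc comments describe.
const nextPage: ThreadListParams = {
  cursor: "opaque-cursor-from-previous-call",
  limit: 25,
  archived: false,
};
```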

View File

@@ -6,8 +6,8 @@ export type ThreadLoadedListParams = {
/**
* Opaque pagination cursor returned by a previous call.
*/
cursor: string | null,
cursor?: string | null,
/**
* Optional page size; defaults to no limit.
*/
limit: number | null, };
limit?: number | null, };

View File

@@ -24,13 +24,13 @@ export type ThreadResumeParams = { threadId: string,
* If specified, the thread will be resumed with the provided history
* instead of loaded from disk.
*/
history: Array<ResponseItem> | null,
history?: Array<ResponseItem> | null,
/**
* [UNSTABLE] Specify the rollout path to resume from.
* If specified, the thread_id param will be ignored.
*/
path: string | null,
path?: string | null,
/**
* Configuration overrides for the resumed thread, if any.
*/
model: string | null, modelProvider: string | null, cwd: string | null, approvalPolicy: AskForApproval | null, sandbox: SandboxMode | null, config: { [key in string]?: JsonValue } | null, baseInstructions: string | null, developerInstructions: string | null, personality: Personality | null, };
model?: string | null, modelProvider?: string | null, cwd?: string | null, approvalPolicy?: AskForApproval | null, sandbox?: SandboxMode | null, config?: { [key in string]?: JsonValue } | null, baseInstructions?: string | null, developerInstructions?: string | null, personality?: Personality | null, };

View File

@@ -6,7 +6,7 @@ import type { JsonValue } from "../serde_json/JsonValue";
import type { AskForApproval } from "./AskForApproval";
import type { SandboxMode } from "./SandboxMode";
export type ThreadStartParams = {model: string | null, modelProvider: string | null, cwd: string | null, approvalPolicy: AskForApproval | null, sandbox: SandboxMode | null, config: { [key in string]?: JsonValue } | null, baseInstructions: string | null, developerInstructions: string | null, personality: Personality | null, ephemeral: boolean | null, /**
export type ThreadStartParams = {model?: string | null, modelProvider?: string | null, cwd?: string | null, approvalPolicy?: AskForApproval | null, sandbox?: SandboxMode | null, config?: { [key in string]?: JsonValue } | null, baseInstructions?: string | null, developerInstructions?: string | null, personality?: Personality | null, ephemeral?: boolean | null, /**
* If true, opt into emitting raw response items on the event stream.
*
* This is for internal use only (e.g. Codex Cloud).

View File

@@ -14,37 +14,37 @@ export type TurnStartParams = { threadId: string, input: Array<UserInput>,
/**
* Override the working directory for this turn and subsequent turns.
*/
cwd: string | null,
cwd?: string | null,
/**
* Override the approval policy for this turn and subsequent turns.
*/
approvalPolicy: AskForApproval | null,
approvalPolicy?: AskForApproval | null,
/**
* Override the sandbox policy for this turn and subsequent turns.
*/
sandboxPolicy: SandboxPolicy | null,
sandboxPolicy?: SandboxPolicy | null,
/**
* Override the model for this turn and subsequent turns.
*/
model: string | null,
model?: string | null,
/**
* Override the reasoning effort for this turn and subsequent turns.
*/
effort: ReasoningEffort | null,
effort?: ReasoningEffort | null,
/**
* Override the reasoning summary for this turn and subsequent turns.
*/
summary: ReasoningSummary | null,
summary?: ReasoningSummary | null,
/**
* Override the personality for this turn and subsequent turns.
*/
personality: Personality | null,
personality?: Personality | null,
/**
* Optional JSON Schema used to constrain the final assistant message for this turn.
*/
outputSchema: JsonValue | null,
outputSchema?: JsonValue | null,
/**
* EXPERIMENTAL - set a pre-set collaboration mode.
* Takes precedence over model, reasoning_effort, and developer instructions if set.
*/
collaborationMode: CollaborationMode | null, };
collaborationMode?: CollaborationMode | null, };

View File

@@ -0,0 +1,5 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
export type WebSearchAction = { "type": "search", query: string | null, queries: Array<string> | null, } | { "type": "openPage", url: string | null, } | { "type": "findInPage", url: string | null, pattern: string | null, } | { "type": "other" };
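
A small consumer sketch for the relocated `WebSearchAction` union, assuming a client that renders a one-line summary per variant; the function name and import path are illustrative:

```ts
import type { WebSearchAction } from "./WebSearchAction";

// Narrow on the `type` discriminant; every payload field is nullable.
function describeAction(action: WebSearchAction): string {
  switch (action.type) {
    case "search":
      return `search: ${action.query ?? action.queries?.join(", ") ?? "(unknown)"}`;
    case "openPage":
      return `open page: ${action.url ?? "(unknown)"}`;
    case "findInPage":
      return `find "${action.pattern ?? ""}" in ${action.url ?? "(unknown)"}`;
    case "other":
      return "other web search activity";
  }
}
```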

View File

@@ -96,6 +96,7 @@ export type { ReasoningEffortOption } from "./ReasoningEffortOption";
export type { ReasoningSummaryPartAddedNotification } from "./ReasoningSummaryPartAddedNotification";
export type { ReasoningSummaryTextDeltaNotification } from "./ReasoningSummaryTextDeltaNotification";
export type { ReasoningTextDeltaNotification } from "./ReasoningTextDeltaNotification";
export type { RemoteSkillSummary } from "./RemoteSkillSummary";
export type { ResidencyRequirement } from "./ResidencyRequirement";
export type { ReviewDelivery } from "./ReviewDelivery";
export type { ReviewStartParams } from "./ReviewStartParams";
@@ -116,6 +117,10 @@ export type { SkillsConfigWriteResponse } from "./SkillsConfigWriteResponse";
export type { SkillsListEntry } from "./SkillsListEntry";
export type { SkillsListParams } from "./SkillsListParams";
export type { SkillsListResponse } from "./SkillsListResponse";
export type { SkillsRemoteReadParams } from "./SkillsRemoteReadParams";
export type { SkillsRemoteReadResponse } from "./SkillsRemoteReadResponse";
export type { SkillsRemoteWriteParams } from "./SkillsRemoteWriteParams";
export type { SkillsRemoteWriteResponse } from "./SkillsRemoteWriteResponse";
export type { TerminalInteractionNotification } from "./TerminalInteractionNotification";
export type { TextElement } from "./TextElement";
export type { TextPosition } from "./TextPosition";
@@ -169,5 +174,6 @@ export type { TurnStartResponse } from "./TurnStartResponse";
export type { TurnStartedNotification } from "./TurnStartedNotification";
export type { TurnStatus } from "./TurnStatus";
export type { UserInput } from "./UserInput";
export type { WebSearchAction } from "./WebSearchAction";
export type { WindowsWorldWritableWarningNotification } from "./WindowsWorldWritableWarningNotification";
export type { WriteStatus } from "./WriteStatus";

View File

@@ -1402,8 +1402,8 @@ mod tests {
use uuid::Uuid;
#[test]
fn generated_ts_has_no_optional_nullable_fields() -> Result<()> {
// Assert that there are no types of the form "?: T | null" in the generated TS files.
fn generated_ts_optional_nullable_fields_only_in_params() -> Result<()> {
// Assert that "?: T | null" only appears in generated *Params types.
let output_dir = std::env::temp_dir().join(format!("codex_ts_types_{}", Uuid::now_v7()));
fs::create_dir(&output_dir)?;
@@ -1463,6 +1463,13 @@ mod tests {
}
if matches!(path.extension().and_then(|ext| ext.to_str()), Some("ts")) {
// Only allow "?: T | null" in objects representing JSON-RPC requests,
// which we assume are called "*Params".
let allow_optional_nullable = path
.file_stem()
.and_then(|stem| stem.to_str())
.is_some_and(|stem| stem.ends_with("Params"));
let contents = fs::read_to_string(&path)?;
if contents.contains("| undefined") {
undefined_offenders.push(path.clone());
@@ -1583,9 +1590,11 @@ mod tests {
}
// If the last non-whitespace before ':' is '?', then this is an
// optional field with a nullable type (i.e., "?: T | null"),
// which we explicitly disallow.
if field_prefix.chars().rev().find(|c| !c.is_whitespace()) == Some('?') {
// optional field with a nullable type (i.e., "?: T | null").
// These are only allowed in *Params types.
if field_prefix.chars().rev().find(|c| !c.is_whitespace()) == Some('?')
&& !allow_optional_nullable
{
let line_number =
contents[..abs_idx].chars().filter(|c| *c == '\n').count() + 1;
let offending_line_end = contents[line_start_idx..]
@@ -1613,12 +1622,12 @@ mod tests {
"Generated TypeScript still includes unions with `undefined` in {undefined_offenders:?}"
);
// If this assertion fails, it means a field was generated as
// "?: T | null" — i.e., both optional (undefined) and nullable (null).
// We only want either "?: T" or ": T | null".
// If this assertion fails, it means a field was generated as "?: T | null",
// which is both optional (undefined) and nullable (null), for a type not ending
// in "Params" (which represent JSON-RPC requests).
assert!(
optional_nullable_offenders.is_empty(),
"Generated TypeScript has optional fields with nullable types (disallowed '?: T | null'), add #[ts(optional)] to fix:\n{optional_nullable_offenders:?}"
"Generated TypeScript has optional nullable fields outside *Params types (disallowed '?: T | null'):\n{optional_nullable_offenders:?}"
);
Ok(())
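
To make the relaxed rule concrete, a sketch of what the test now accepts versus rejects; the type names are illustrative, not taken from the generated output:

```ts
// Allowed: a *Params type (a JSON-RPC request) may be optional AND nullable,
// so callers can omit the field or pass an explicit null.
export type ExampleListParams = { cursor?: string | null };

// Still disallowed outside *Params: pick exactly one of the two forms.
export type ExampleListResponse = { cursor: string | null };
// export type ExampleListResponse = { cursor?: string | null }; // fails the test
```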

View File

@@ -228,6 +228,14 @@ client_request_definitions! {
params: v2::SkillsListParams,
response: v2::SkillsListResponse,
},
SkillsRemoteRead => "skills/remote/read" {
params: v2::SkillsRemoteReadParams,
response: v2::SkillsRemoteReadResponse,
},
SkillsRemoteWrite => "skills/remote/write" {
params: v2::SkillsRemoteWriteParams,
response: v2::SkillsRemoteWriteResponse,
},
AppsList => "app/list" {
params: v2::AppsListParams,
response: v2::AppsListResponse,

View File

@@ -480,6 +480,7 @@ pub struct ConfigReadParams {
/// Optional working directory to resolve project config layers. If specified,
/// return the effective config as seen from that directory (i.e., including any
/// project layers between `cwd` and the project/repo root).
#[ts(optional = nullable)]
pub cwd: Option<String>,
}
@@ -525,7 +526,9 @@ pub struct ConfigValueWriteParams {
pub value: JsonValue,
pub merge_strategy: MergeStrategy,
/// Path to the config file to write; defaults to the user's `config.toml` when omitted.
#[ts(optional = nullable)]
pub file_path: Option<String>,
#[ts(optional = nullable)]
pub expected_version: Option<String>,
}
@@ -535,7 +538,9 @@ pub struct ConfigValueWriteParams {
pub struct ConfigBatchWriteParams {
pub edits: Vec<ConfigEdit>,
/// Path to the config file to write; defaults to the user's `config.toml` when omitted.
#[ts(optional = nullable)]
pub file_path: Option<String>,
#[ts(optional = nullable)]
pub expected_version: Option<String>,
}
@@ -935,6 +940,7 @@ pub struct ChatgptAuthTokensRefreshParams {
///
/// This may be `null` when the prior ID token did not include a workspace
/// identifier (`chatgpt_account_id`) or when the token could not be parsed.
#[ts(optional = nullable)]
pub previous_account_id: Option<String>,
}
@@ -979,8 +985,10 @@ pub struct GetAccountResponse {
#[ts(export_to = "v2/")]
pub struct ModelListParams {
/// Opaque pagination cursor returned by a previous call.
#[ts(optional = nullable)]
pub cursor: Option<String>,
/// Optional page size; defaults to a reasonable server-side value.
#[ts(optional = nullable)]
pub limit: Option<u32>,
}
@@ -990,6 +998,7 @@ pub struct ModelListParams {
pub struct Model {
pub id: String,
pub model: String,
pub upgrade: Option<String>,
pub display_name: String,
pub description: String,
pub supported_reasoning_efforts: Vec<ReasoningEffortOption>,
@@ -1039,8 +1048,10 @@ pub struct CollaborationModeListResponse {
#[ts(export_to = "v2/")]
pub struct ListMcpServerStatusParams {
/// Opaque pagination cursor returned by a previous call.
#[ts(optional = nullable)]
pub cursor: Option<String>,
/// Optional page size; defaults to a server-defined value.
#[ts(optional = nullable)]
pub limit: Option<u32>,
}
@@ -1070,8 +1081,10 @@ pub struct ListMcpServerStatusResponse {
#[ts(export_to = "v2/")]
pub struct AppsListParams {
/// Opaque pagination cursor returned by a previous call.
#[ts(optional = nullable)]
pub cursor: Option<String>,
/// Optional page size; defaults to a reasonable server-side value.
#[ts(optional = nullable)]
pub limit: Option<u32>,
}
@@ -1116,10 +1129,10 @@ pub struct McpServerRefreshResponse {}
pub struct McpServerOauthLoginParams {
pub name: String,
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
#[ts(optional = nullable)]
pub scopes: Option<Vec<String>>,
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
#[ts(optional = nullable)]
pub timeout_secs: Option<i64>,
}
@@ -1135,7 +1148,9 @@ pub struct McpServerOauthLoginResponse {
#[ts(export_to = "v2/")]
pub struct FeedbackUploadParams {
pub classification: String,
#[ts(optional = nullable)]
pub reason: Option<String>,
#[ts(optional = nullable)]
pub thread_id: Option<String>,
pub include_logs: bool,
}
@@ -1153,8 +1168,11 @@ pub struct FeedbackUploadResponse {
pub struct CommandExecParams {
pub command: Vec<String>,
#[ts(type = "number | null")]
#[ts(optional = nullable)]
pub timeout_ms: Option<i64>,
#[ts(optional = nullable)]
pub cwd: Option<PathBuf>,
#[ts(optional = nullable)]
pub sandbox_policy: Option<SandboxPolicy>,
}
@@ -1175,21 +1193,33 @@ pub struct CommandExecResponse {
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct ThreadStartParams {
#[ts(optional = nullable)]
pub model: Option<String>,
#[ts(optional = nullable)]
pub model_provider: Option<String>,
#[ts(optional = nullable)]
pub cwd: Option<String>,
#[ts(optional = nullable)]
pub approval_policy: Option<AskForApproval>,
#[ts(optional = nullable)]
pub sandbox: Option<SandboxMode>,
#[ts(optional = nullable)]
pub config: Option<HashMap<String, JsonValue>>,
#[ts(optional = nullable)]
pub base_instructions: Option<String>,
#[ts(optional = nullable)]
pub developer_instructions: Option<String>,
#[ts(optional = nullable)]
pub personality: Option<Personality>,
#[ts(optional = nullable)]
pub ephemeral: Option<bool>,
#[experimental("thread/start.dynamicTools")]
#[ts(optional = nullable)]
pub dynamic_tools: Option<Vec<DynamicToolSpec>>,
/// Test-only experimental field used to validate experimental gating and
/// schema filtering behavior in a stable way.
#[experimental("thread/start.mockExperimentalField")]
#[ts(optional = nullable)]
pub mock_experimental_field: Option<String>,
/// If true, opt into emitting raw response items on the event stream.
///
@@ -1204,6 +1234,7 @@ pub struct ThreadStartParams {
#[ts(export_to = "v2/")]
pub struct MockExperimentalMethodParams {
/// Test-only payload field.
#[ts(optional = nullable)]
pub value: Option<String>,
}
@@ -1246,21 +1277,32 @@ pub struct ThreadResumeParams {
/// [UNSTABLE] FOR CODEX CLOUD - DO NOT USE.
/// If specified, the thread will be resumed with the provided history
/// instead of loaded from disk.
#[ts(optional = nullable)]
pub history: Option<Vec<ResponseItem>>,
/// [UNSTABLE] Specify the rollout path to resume from.
/// If specified, the thread_id param will be ignored.
#[ts(optional = nullable)]
pub path: Option<PathBuf>,
/// Configuration overrides for the resumed thread, if any.
#[ts(optional = nullable)]
pub model: Option<String>,
#[ts(optional = nullable)]
pub model_provider: Option<String>,
#[ts(optional = nullable)]
pub cwd: Option<String>,
#[ts(optional = nullable)]
pub approval_policy: Option<AskForApproval>,
#[ts(optional = nullable)]
pub sandbox: Option<SandboxMode>,
#[ts(optional = nullable)]
pub config: Option<HashMap<String, serde_json::Value>>,
#[ts(optional = nullable)]
pub base_instructions: Option<String>,
#[ts(optional = nullable)]
pub developer_instructions: Option<String>,
#[ts(optional = nullable)]
pub personality: Option<Personality>,
}
@@ -1292,16 +1334,25 @@ pub struct ThreadForkParams {
/// [UNSTABLE] Specify the rollout path to fork from.
/// If specified, the thread_id param will be ignored.
#[ts(optional = nullable)]
pub path: Option<PathBuf>,
/// Configuration overrides for the forked thread, if any.
#[ts(optional = nullable)]
pub model: Option<String>,
#[ts(optional = nullable)]
pub model_provider: Option<String>,
#[ts(optional = nullable)]
pub cwd: Option<String>,
#[ts(optional = nullable)]
pub approval_policy: Option<AskForApproval>,
#[ts(optional = nullable)]
pub sandbox: Option<SandboxMode>,
#[ts(optional = nullable)]
pub config: Option<HashMap<String, serde_json::Value>>,
#[ts(optional = nullable)]
pub base_instructions: Option<String>,
#[ts(optional = nullable)]
pub developer_instructions: Option<String>,
}
@@ -1386,19 +1437,25 @@ pub struct ThreadRollbackResponse {
#[ts(export_to = "v2/")]
pub struct ThreadListParams {
/// Opaque pagination cursor returned by a previous call.
#[ts(optional = nullable)]
pub cursor: Option<String>,
/// Optional page size; defaults to a reasonable server-side value.
#[ts(optional = nullable)]
pub limit: Option<u32>,
/// Optional sort key; defaults to created_at.
#[ts(optional = nullable)]
pub sort_key: Option<ThreadSortKey>,
/// Optional provider filter; when set, only sessions recorded under these
/// providers are returned. When present but empty, includes all providers.
#[ts(optional = nullable)]
pub model_providers: Option<Vec<String>>,
/// Optional source filter; when set, only sessions from these source kinds
/// are returned. When omitted or empty, defaults to interactive sources.
#[ts(optional = nullable)]
pub source_kinds: Option<Vec<ThreadSourceKind>>,
/// Optional archived filter; when set to true, only archived threads are returned.
/// If false or null, only non-archived threads are returned.
#[ts(optional = nullable)]
pub archived: Option<bool>,
}
@@ -1443,8 +1500,10 @@ pub struct ThreadListResponse {
#[ts(export_to = "v2/")]
pub struct ThreadLoadedListParams {
/// Opaque pagination cursor returned by a previous call.
#[ts(optional = nullable)]
pub cursor: Option<String>,
/// Optional page size; defaults to no limit.
#[ts(optional = nullable)]
pub limit: Option<u32>,
}
@@ -1496,6 +1555,44 @@ pub struct SkillsListResponse {
pub data: Vec<SkillsListEntry>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct SkillsRemoteReadParams {}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct RemoteSkillSummary {
pub id: String,
pub name: String,
pub description: String,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct SkillsRemoteReadResponse {
pub data: Vec<RemoteSkillSummary>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct SkillsRemoteWriteParams {
pub hazelnut_id: String,
pub is_preload: bool,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct SkillsRemoteWriteResponse {
pub id: String,
pub name: String,
pub path: PathBuf,
}
#[derive(Serialize, Deserialize, Debug, Clone, Copy, PartialEq, Eq, JsonSchema, TS)]
#[serde(rename_all = "snake_case")]
#[ts(rename_all = "snake_case")]
@@ -1832,24 +1929,33 @@ pub struct TurnStartParams {
pub thread_id: String,
pub input: Vec<UserInput>,
/// Override the working directory for this turn and subsequent turns.
#[ts(optional = nullable)]
pub cwd: Option<PathBuf>,
/// Override the approval policy for this turn and subsequent turns.
#[ts(optional = nullable)]
pub approval_policy: Option<AskForApproval>,
/// Override the sandbox policy for this turn and subsequent turns.
#[ts(optional = nullable)]
pub sandbox_policy: Option<SandboxPolicy>,
/// Override the model for this turn and subsequent turns.
#[ts(optional = nullable)]
pub model: Option<String>,
/// Override the reasoning effort for this turn and subsequent turns.
#[ts(optional = nullable)]
pub effort: Option<ReasoningEffort>,
/// Override the reasoning summary for this turn and subsequent turns.
#[ts(optional = nullable)]
pub summary: Option<ReasoningSummary>,
/// Override the personality for this turn and subsequent turns.
#[ts(optional = nullable)]
pub personality: Option<Personality>,
/// Optional JSON Schema used to constrain the final assistant message for this turn.
#[ts(optional = nullable)]
pub output_schema: Option<JsonValue>,
/// EXPERIMENTAL - set a pre-set collaboration mode.
/// Takes precedence over model, reasoning_effort, and developer instructions if set.
#[ts(optional = nullable)]
pub collaboration_mode: Option<CollaborationMode>,
}
@@ -1863,6 +1969,7 @@ pub struct ReviewStartParams {
/// Where to run the review: inline (default) on the current thread or
/// detached on a new thread (returned in `reviewThreadId`).
#[serde(default)]
#[ts(optional = nullable)]
pub delivery: Option<ReviewDelivery>,
}
@@ -2170,6 +2277,7 @@ pub enum ThreadItem {
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(tag = "type", rename_all = "camelCase")]
#[ts(tag = "type", rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub enum WebSearchAction {
Search {
query: Option<String>,
@@ -2643,20 +2751,22 @@ pub struct CommandExecutionRequestApprovalParams {
pub turn_id: String,
pub item_id: String,
/// Optional explanatory reason (e.g. request for network access).
#[ts(optional = nullable)]
pub reason: Option<String>,
/// The command to be executed.
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
#[ts(optional = nullable)]
pub command: Option<String>,
/// The command's working directory.
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
#[ts(optional = nullable)]
pub cwd: Option<PathBuf>,
/// Best-effort parsed command actions for friendly display.
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
#[ts(optional = nullable)]
pub command_actions: Option<Vec<CommandAction>>,
/// Optional proposed execpolicy amendment to allow similar commands without prompting.
#[ts(optional = nullable)]
pub proposed_execpolicy_amendment: Option<ExecPolicyAmendment>,
}
@@ -2675,9 +2785,11 @@ pub struct FileChangeRequestApprovalParams {
pub turn_id: String,
pub item_id: String,
/// Optional explanatory reason (e.g. request for extra write access).
#[ts(optional = nullable)]
pub reason: Option<String>,
/// [UNSTABLE] When set, the agent is asking the user to allow writes under this root
/// for the remainder of the session (unclear if this is honored today).
#[ts(optional = nullable)]
pub grant_root: Option<PathBuf>,
}

View File

@@ -1,6 +1,6 @@
load("//:defs.bzl", "codex_rust_crate")
codex_rust_crate(
name = "codex-app-server-test-client",
name = "app-server-test-client",
crate_name = "codex_app_server_test_client",
)

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -90,9 +90,11 @@ Example (from OpenAI's official VSCode extension):
- `turn/interrupt` — request cancellation of an in-flight turn by `(thread_id, turn_id)`; success is an empty `{}` response and the turn finishes with `status: "interrupted"`.
- `review/start` — kick off Codex's automated reviewer for a thread; responds like `turn/start` and emits `item/started`/`item/completed` notifications with `enteredReviewMode` and `exitedReviewMode` items, plus a final assistant `agentMessage` containing the review.
- `command/exec` — run a single command under the server sandbox without starting a thread/turn (handy for utilities and validation).
- `model/list` — list available models (with reasoning effort options).
- `model/list` — list available models (with reasoning effort options and optional `upgrade` model ids); a request/response sketch follows this list.
- `collaborationMode/list` — list available collaboration mode presets (experimental, no pagination).
- `skills/list` — list skills for one or more `cwd` values (optional `forceReload`).
- `skills/remote/read` — list public remote skills (**under development; do not call from production clients yet**).
- `skills/remote/write` — download a public remote skill by `hazelnutId`; `isPreload=true` writes to `.codex/vendor_imports/skills` under `codex_home` (**under development; do not call from production clients yet**).
- `app/list` — list available apps.
- `skills/config/write` — write user-level skill config by path.
- `mcpServer/oauth/login` — start an OAuth login for a configured MCP server; returns an `authorization_url` and later emits `mcpServer/oauthLogin/completed` once the browser flow finishes.
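To make the `model/list` entry concrete, a hedged sketch of the request and of one entry in the response `data` array; the `Model` field names come from this diff (serialized camelCase), while the empty `params` object and the elided fields are assumptions:

```rust
use serde_json::json;

fn main() {
    let request = json!({
        "method": "model/list",
        "id": "9a1b2c3d-0000-0000-0000-000000000000",
        "params": {}
    });
    // One response entry; `upgrade` is the newly added optional id of the
    // model a client may suggest upgrading to. Remaining fields such as
    // supportedReasoningEfforts are elided here.
    let model = json!({
        "id": "gpt-5.1-codex-max",
        "model": "gpt-5.1-codex-max",
        "upgrade": "gpt-5.2-codex",
        "displayName": "gpt-5.1-codex-max",
        "description": "Codex-optimized flagship for deep and fast reasoning."
    });
    println!("{request:#}\n{model:#}");
}
```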

View File

@@ -97,6 +97,10 @@ use codex_app_server_protocol::SkillsConfigWriteParams;
use codex_app_server_protocol::SkillsConfigWriteResponse;
use codex_app_server_protocol::SkillsListParams;
use codex_app_server_protocol::SkillsListResponse;
use codex_app_server_protocol::SkillsRemoteReadParams;
use codex_app_server_protocol::SkillsRemoteReadResponse;
use codex_app_server_protocol::SkillsRemoteWriteParams;
use codex_app_server_protocol::SkillsRemoteWriteResponse;
use codex_app_server_protocol::Thread;
use codex_app_server_protocol::ThreadArchiveParams;
use codex_app_server_protocol::ThreadArchiveResponse;
@@ -176,6 +180,8 @@ use codex_core::read_head_for_summary;
use codex_core::read_session_meta_line;
use codex_core::rollout_date_parts;
use codex_core::sandboxing::SandboxPermissions;
use codex_core::skills::remote::download_remote_skill;
use codex_core::skills::remote::list_remote_skills;
use codex_core::state_db::get_state_db;
use codex_core::token_data::parse_id_token;
use codex_core::windows_sandbox::WindowsSandboxLevelExt;
@@ -464,6 +470,12 @@ impl CodexMessageProcessor {
ClientRequest::SkillsList { request_id, params } => {
self.skills_list(request_id, params).await;
}
ClientRequest::SkillsRemoteRead { request_id, params } => {
self.skills_remote_read(request_id, params).await;
}
ClientRequest::SkillsRemoteWrite { request_id, params } => {
self.skills_remote_write(request_id, params).await;
}
ClientRequest::AppsList { request_id, params } => {
self.apps_list(request_id, params).await;
}
@@ -1430,7 +1442,7 @@ impl CodexMessageProcessor {
}
let cwd = params.cwd.unwrap_or_else(|| self.config.cwd.clone());
let env = create_env(&self.config.shell_environment_policy);
let env = create_env(&self.config.shell_environment_policy, None);
let timeout_ms = params
.timeout_ms
.and_then(|timeout_ms| u64::try_from(timeout_ms).ok());
@@ -4053,6 +4065,61 @@ impl CodexMessageProcessor {
.await;
}
async fn skills_remote_read(&self, request_id: RequestId, _params: SkillsRemoteReadParams) {
match list_remote_skills(&self.config).await {
Ok(skills) => {
let data = skills
.into_iter()
.map(|skill| codex_app_server_protocol::RemoteSkillSummary {
id: skill.id,
name: skill.name,
description: skill.description,
})
.collect();
self.outgoing
.send_response(request_id, SkillsRemoteReadResponse { data })
.await;
}
Err(err) => {
self.send_internal_error(
request_id,
format!("failed to read remote skills: {err}"),
)
.await;
}
}
}
async fn skills_remote_write(&self, request_id: RequestId, params: SkillsRemoteWriteParams) {
let SkillsRemoteWriteParams {
hazelnut_id,
is_preload,
} = params;
let response = download_remote_skill(&self.config, hazelnut_id.as_str(), is_preload).await;
match response {
Ok(downloaded) => {
self.outgoing
.send_response(
request_id,
SkillsRemoteWriteResponse {
id: downloaded.id,
name: downloaded.name,
path: downloaded.path,
},
)
.await;
}
Err(err) => {
self.send_internal_error(
request_id,
format!("failed to download remote skill: {err}"),
)
.await;
}
}
}
async fn skills_config_write(&self, request_id: RequestId, params: SkillsConfigWriteParams) {
let SkillsConfigWriteParams { path, enabled } = params;
let edits = vec![ConfigEdit::SetSkillConfig { path, enabled }];

View File

@@ -22,6 +22,7 @@ fn model_from_preset(preset: ModelPreset) -> Model {
Model {
id: preset.id.to_string(),
model: preset.model.to_string(),
upgrade: preset.upgrade.map(|upgrade| upgrade.id),
display_name: preset.display_name.to_string(),
description: preset.description.to_string(),
supported_reasoning_efforts: reasoning_efforts_from_preset(

View File

@@ -1,8 +1,8 @@
//! Validates that the collaboration mode list endpoint returns the expected default presets.
//!
//! The test drives the app server through the MCP harness and asserts that the list response
//! includes the plan, coding, pair programming, and execute modes with their default model and reasoning
//! effort settings, which keeps the API contract visible in one place.
//! includes the plan and default modes with their default model and reasoning effort
//! settings, which keeps the API contract visible in one place.
#![allow(clippy::unwrap_used)]
@@ -45,23 +45,8 @@ async fn list_collaboration_modes_returns_presets() -> Result<()> {
let CollaborationModeListResponse { data: items } =
to_response::<CollaborationModeListResponse>(response)?;
let expected = [
plan_preset(),
code_preset(),
pair_programming_preset(),
execute_preset(),
];
assert_eq!(expected.len(), items.len());
for (expected_mask, actual_mask) in expected.iter().zip(items.iter()) {
assert_eq!(expected_mask.name, actual_mask.name);
assert_eq!(expected_mask.mode, actual_mask.mode);
assert_eq!(expected_mask.model, actual_mask.model);
assert_eq!(expected_mask.reasoning_effort, actual_mask.reasoning_effort);
assert_eq!(
expected_mask.developer_instructions,
actual_mask.developer_instructions
);
}
let expected = vec![plan_preset(), default_preset()];
assert_eq!(expected, items);
Ok(())
}
@@ -77,35 +62,11 @@ fn plan_preset() -> CollaborationModeMask {
.unwrap()
}
/// Builds the pair programming preset that the list response is expected to return.
///
/// The helper keeps the expected model and reasoning defaults co-located with the test
/// so that mismatches point directly at the API contract being exercised.
fn pair_programming_preset() -> CollaborationModeMask {
/// Builds the default preset that the list response is expected to return.
fn default_preset() -> CollaborationModeMask {
let presets = test_builtin_collaboration_mode_presets();
presets
.into_iter()
.find(|p| p.mode == Some(ModeKind::PairProgramming))
.unwrap()
}
/// Builds the code preset that the list response is expected to return.
fn code_preset() -> CollaborationModeMask {
let presets = test_builtin_collaboration_mode_presets();
presets
.into_iter()
.find(|p| p.mode == Some(ModeKind::Code))
.unwrap()
}
/// Builds the execute preset that the list response is expected to return.
///
/// The execute preset uses a different reasoning effort to capture the higher-effort
/// execution contract the server currently exposes.
fn execute_preset() -> CollaborationModeMask {
let presets = test_builtin_collaboration_mode_presets();
presets
.into_iter()
.find(|p| p.mode == Some(ModeKind::Execute))
.find(|p| p.mode == Some(ModeKind::Default))
.unwrap()
}

View File

@@ -51,6 +51,7 @@ async fn list_models_returns_all_models_with_large_limit() -> Result<()> {
Model {
id: "gpt-5.2-codex".to_string(),
model: "gpt-5.2-codex".to_string(),
upgrade: None,
display_name: "gpt-5.2-codex".to_string(),
description: "Latest frontier agentic coding model.".to_string(),
supported_reasoning_efforts: vec![
@@ -80,6 +81,7 @@ async fn list_models_returns_all_models_with_large_limit() -> Result<()> {
Model {
id: "gpt-5.1-codex-max".to_string(),
model: "gpt-5.1-codex-max".to_string(),
upgrade: Some("gpt-5.2-codex".to_string()),
display_name: "gpt-5.1-codex-max".to_string(),
description: "Codex-optimized flagship for deep and fast reasoning.".to_string(),
supported_reasoning_efforts: vec![
@@ -109,6 +111,7 @@ async fn list_models_returns_all_models_with_large_limit() -> Result<()> {
Model {
id: "gpt-5.1-codex-mini".to_string(),
model: "gpt-5.1-codex-mini".to_string(),
upgrade: Some("gpt-5.2-codex".to_string()),
display_name: "gpt-5.1-codex-mini".to_string(),
description: "Optimized for codex. Cheaper, faster, but less capable.".to_string(),
supported_reasoning_efforts: vec![
@@ -130,6 +133,7 @@ async fn list_models_returns_all_models_with_large_limit() -> Result<()> {
Model {
id: "gpt-5.2".to_string(),
model: "gpt-5.2".to_string(),
upgrade: Some("gpt-5.2-codex".to_string()),
display_name: "gpt-5.2".to_string(),
description:
"Latest frontier model with improvements across knowledge, reasoning and coding"

View File

@@ -1,6 +1,9 @@
use anyhow::Result;
use app_test_support::McpProcess;
use app_test_support::create_final_assistant_message_sse_response;
use app_test_support::create_mock_responses_server_repeating_assistant;
use app_test_support::create_mock_responses_server_sequence;
use app_test_support::create_shell_command_sse_response;
use app_test_support::to_response;
use codex_app_server_protocol::ItemCompletedNotification;
use codex_app_server_protocol::ItemStartedNotification;
@@ -12,6 +15,7 @@ use codex_app_server_protocol::ReviewDelivery;
use codex_app_server_protocol::ReviewStartParams;
use codex_app_server_protocol::ReviewStartResponse;
use codex_app_server_protocol::ReviewTarget;
use codex_app_server_protocol::ServerRequest;
use codex_app_server_protocol::ThreadItem;
use codex_app_server_protocol::ThreadStartParams;
use codex_app_server_protocol::ThreadStartResponse;
@@ -129,6 +133,91 @@ async fn review_start_runs_review_turn_and_emits_code_review_item() -> Result<()
Ok(())
}
#[tokio::test]
async fn review_start_exec_approval_item_id_matches_command_execution_item() -> Result<()> {
let responses = vec![
create_shell_command_sse_response(
vec![
"git".to_string(),
"rev-parse".to_string(),
"HEAD".to_string(),
],
None,
Some(5000),
"review-call-1",
)?,
create_final_assistant_message_sse_response("done")?,
];
let server = create_mock_responses_server_sequence(responses).await;
let codex_home = TempDir::new()?;
create_config_toml_with_approval_policy(codex_home.path(), &server.uri(), "untrusted")?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
let thread_id = start_default_thread(&mut mcp).await?;
let review_req = mcp
.send_review_start_request(ReviewStartParams {
thread_id,
delivery: Some(ReviewDelivery::Inline),
target: ReviewTarget::Commit {
sha: "1234567deadbeef".to_string(),
title: Some("Check review approvals".to_string()),
},
})
.await?;
let review_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(review_req)),
)
.await??;
let ReviewStartResponse { turn, .. } = to_response::<ReviewStartResponse>(review_resp)?;
let turn_id = turn.id.clone();
let server_req = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_request_message(),
)
.await??;
let ServerRequest::CommandExecutionRequestApproval { request_id, params } = server_req else {
panic!("expected CommandExecutionRequestApproval request");
};
assert_eq!(params.item_id, "review-call-1");
assert_eq!(params.turn_id, turn_id);
let mut command_item_id = None;
for _ in 0..10 {
let item_started: JSONRPCNotification = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("item/started"),
)
.await??;
let started: ItemStartedNotification =
serde_json::from_value(item_started.params.expect("params must be present"))?;
if let ThreadItem::CommandExecution { id, .. } = started.item {
command_item_id = Some(id);
break;
}
}
let command_item_id = command_item_id.expect("did not observe command execution item");
assert_eq!(command_item_id, params.item_id);
mcp.send_response(
request_id,
serde_json::json!({ "decision": codex_core::protocol::ReviewDecision::Approved }),
)
.await?;
timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("turn/completed"),
)
.await??;
Ok(())
}
#[tokio::test]
async fn review_start_rejects_empty_base_branch() -> Result<()> {
let server = create_mock_responses_server_repeating_assistant("Done").await;
@@ -299,13 +388,21 @@ async fn start_default_thread(mcp: &mut McpProcess) -> Result<String> {
}
fn create_config_toml(codex_home: &std::path::Path, server_uri: &str) -> std::io::Result<()> {
create_config_toml_with_approval_policy(codex_home, server_uri, "never")
}
fn create_config_toml_with_approval_policy(
codex_home: &std::path::Path,
server_uri: &str,
approval_policy: &str,
) -> std::io::Result<()> {
let config_toml = codex_home.join("config.toml");
std::fs::write(
config_toml,
format!(
r#"
model = "mock-model"
approval_policy = "never"
approval_policy = "{approval_policy}"
sandbox_mode = "read-only"
model_provider = "mock_provider"

View File

@@ -365,7 +365,7 @@ async fn turn_start_accepts_collaboration_mode_override_v2() -> Result<()> {
let ThreadStartResponse { thread, .. } = to_response::<ThreadStartResponse>(thread_resp)?;
let collaboration_mode = CollaborationMode {
mode: ModeKind::Custom,
mode: ModeKind::Default,
settings: Settings {
model: "mock-model-collab".to_string(),
reasoning_effort: Some(ReasoningEffort::High),

View File

@@ -1,3 +1,4 @@
use std::fs::File;
use std::future::Future;
use std::path::Path;
use std::path::PathBuf;
@@ -10,8 +11,24 @@ use tempfile::TempDir;
const LINUX_SANDBOX_ARG0: &str = "codex-linux-sandbox";
const APPLY_PATCH_ARG0: &str = "apply_patch";
const MISSPELLED_APPLY_PATCH_ARG0: &str = "applypatch";
const LOCK_FILENAME: &str = ".lock";
pub fn arg0_dispatch() -> Option<TempDir> {
/// Keeps the per-session PATH entry alive and locked for the process lifetime.
pub struct Arg0PathEntryGuard {
_temp_dir: TempDir,
_lock_file: File,
}
impl Arg0PathEntryGuard {
fn new(temp_dir: TempDir, lock_file: File) -> Self {
Self {
_temp_dir: temp_dir,
_lock_file: lock_file,
}
}
}
pub fn arg0_dispatch() -> Option<Arg0PathEntryGuard> {
// Determine if we were invoked via the special alias.
let mut args = std::env::args_os();
let argv0 = args.next().unwrap_or_default();
@@ -149,7 +166,7 @@ where
///
/// IMPORTANT: This function modifies the PATH environment variable, so it MUST
/// be called before multiple threads are spawned.
pub fn prepend_path_entry_for_codex_aliases() -> std::io::Result<TempDir> {
pub fn prepend_path_entry_for_codex_aliases() -> std::io::Result<Arg0PathEntryGuard> {
let codex_home = codex_core::config::find_codex_home()?;
#[cfg(not(debug_assertions))]
{
@@ -167,7 +184,7 @@ pub fn prepend_path_entry_for_codex_aliases() -> std::io::Result<TempDir> {
std::fs::create_dir_all(&codex_home)?;
// Use a CODEX_HOME-scoped temp root to avoid cluttering the top-level directory.
let temp_root = codex_home.join("tmp").join("path");
let temp_root = codex_home.join("tmp").join("arg0");
std::fs::create_dir_all(&temp_root)?;
#[cfg(unix)]
{
@@ -177,11 +194,25 @@ pub fn prepend_path_entry_for_codex_aliases() -> std::io::Result<TempDir> {
std::fs::set_permissions(&temp_root, std::fs::Permissions::from_mode(0o700))?;
}
// Best-effort cleanup of stale per-session dirs. Ignore failures so startup proceeds.
if let Err(err) = janitor_cleanup(&temp_root) {
eprintln!("WARNING: failed to clean up stale arg0 temp dirs: {err}");
}
let temp_dir = tempfile::Builder::new()
.prefix("codex-arg0")
.tempdir_in(&temp_root)?;
let path = temp_dir.path();
let lock_path = path.join(LOCK_FILENAME);
let lock_file = File::options()
.read(true)
.write(true)
.create(true)
.truncate(false)
.open(&lock_path)?;
lock_file.try_lock()?;
for filename in &[
APPLY_PATCH_ARG0,
MISSPELLED_APPLY_PATCH_ARG0,
@@ -231,5 +262,107 @@ pub fn prepend_path_entry_for_codex_aliases() -> std::io::Result<TempDir> {
std::env::set_var("PATH", updated_path_env_var);
}
Ok(temp_dir)
Ok(Arg0PathEntryGuard::new(temp_dir, lock_file))
}
fn janitor_cleanup(temp_root: &Path) -> std::io::Result<()> {
let entries = match std::fs::read_dir(temp_root) {
Ok(entries) => entries,
Err(err) if err.kind() == std::io::ErrorKind::NotFound => return Ok(()),
Err(err) => return Err(err),
};
for entry in entries.flatten() {
let path = entry.path();
if !path.is_dir() {
continue;
}
// Skip the directory if locking fails or the lock is currently held.
let Some(_lock_file) = try_lock_dir(&path)? else {
continue;
};
match std::fs::remove_dir_all(&path) {
Ok(()) => {}
// Expected TOCTOU race: directory can disappear after read_dir/lock checks.
Err(err) if err.kind() == std::io::ErrorKind::NotFound => continue,
Err(err) => return Err(err),
}
}
Ok(())
}
fn try_lock_dir(dir: &Path) -> std::io::Result<Option<File>> {
let lock_path = dir.join(LOCK_FILENAME);
let lock_file = match File::options().read(true).write(true).open(&lock_path) {
Ok(file) => file,
Err(err) if err.kind() == std::io::ErrorKind::NotFound => return Ok(None),
Err(err) => return Err(err),
};
match lock_file.try_lock() {
Ok(()) => Ok(Some(lock_file)),
Err(std::fs::TryLockError::WouldBlock) => Ok(None),
Err(err) => Err(err.into()),
}
}
#[cfg(test)]
mod tests {
use super::LOCK_FILENAME;
use super::janitor_cleanup;
use std::fs;
use std::fs::File;
use std::path::Path;
fn create_lock(dir: &Path) -> std::io::Result<File> {
let lock_path = dir.join(LOCK_FILENAME);
File::options()
.read(true)
.write(true)
.create(true)
.truncate(false)
.open(lock_path)
}
#[test]
fn janitor_skips_dirs_without_lock_file() -> std::io::Result<()> {
let root = tempfile::tempdir()?;
let dir = root.path().join("no-lock");
fs::create_dir(&dir)?;
janitor_cleanup(root.path())?;
assert!(dir.exists());
Ok(())
}
#[test]
fn janitor_skips_dirs_with_held_lock() -> std::io::Result<()> {
let root = tempfile::tempdir()?;
let dir = root.path().join("locked");
fs::create_dir(&dir)?;
let lock_file = create_lock(&dir)?;
lock_file.try_lock()?;
janitor_cleanup(root.path())?;
assert!(dir.exists());
Ok(())
}
#[test]
fn janitor_removes_dirs_with_unlocked_lock() -> std::io::Result<()> {
let root = tempfile::tempdir()?;
let dir = root.path().join("stale");
fs::create_dir(&dir)?;
create_lock(&dir)?;
janitor_cleanup(root.path())?;
assert!(!dir.exists());
Ok(())
}
}
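Taken together, the guard and the janitor form a small liveness protocol: a live session holds `.lock`, so `try_lock` fails with `WouldBlock` and the janitor skips its directory; once the process exits, the lock is released and a later startup reclaims the stale dir. A minimal sketch of the intended call-site pattern (the crate path and `main` wiring are assumptions; only `prepend_path_entry_for_codex_aliases` itself is shown in this diff):

```rust
fn main() -> std::io::Result<()> {
    // Hold the guard for the whole process lifetime. Dropping it releases the
    // `.lock` file and removes the TempDir, so a future janitor pass no longer
    // races against a live session.
    let _arg0_guard = codex_arg0::prepend_path_entry_for_codex_aliases()?;

    // ... run the CLI; PATH now resolves `apply_patch` and friends. Per the
    // warning above, this must run before any additional threads are spawned.
    Ok(())
}
```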

View File

@@ -21,6 +21,7 @@ clap = { workspace = true, features = ["derive"] }
clap_complete = { workspace = true }
codex-app-server = { workspace = true }
codex-app-server-protocol = { workspace = true }
codex-app-server-test-client = { workspace = true }
codex-arg0 = { workspace = true }
codex-chatgpt = { workspace = true }
codex-cloud-tasks = { path = "../cloud-tasks" }

View File

@@ -130,7 +130,7 @@ async fn run_command_under_sandbox(
let sandbox_policy_cwd = cwd.clone();
let stdio_policy = StdioPolicy::Inherit;
let env = create_env(&config.shell_environment_policy);
let env = create_env(&config.shell_environment_policy, None);
// Special-case Windows sandbox: execute and exit the process to emulate inherited stdio.
if let SandboxType::Windows = sandbox_type {

View File

@@ -110,9 +110,11 @@ enum Subcommand {
Completion(CompletionCommand),
/// Run commands within a Codex-provided sandbox.
#[clap(visible_alias = "debug")]
Sandbox(SandboxArgs),
/// Debugging tools.
Debug(DebugCommand),
/// Execpolicy tooling.
#[clap(hide = true)]
Execpolicy(ExecpolicyCommand),
@@ -150,6 +152,36 @@ struct CompletionCommand {
shell: Shell,
}
#[derive(Debug, Parser)]
struct DebugCommand {
#[command(subcommand)]
subcommand: DebugSubcommand,
}
#[derive(Debug, clap::Subcommand)]
enum DebugSubcommand {
/// Tooling: helps debug the app server.
AppServer(DebugAppServerCommand),
}
#[derive(Debug, Parser)]
struct DebugAppServerCommand {
#[command(subcommand)]
subcommand: DebugAppServerSubcommand,
}
#[derive(Debug, clap::Subcommand)]
enum DebugAppServerSubcommand {
/// Send a message to the app server (v2).
SendMessageV2(DebugAppServerSendMessageV2Command),
}
#[derive(Debug, Parser)]
struct DebugAppServerSendMessageV2Command {
#[arg(value_name = "USER_MESSAGE", required = true)]
user_message: String,
}
#[derive(Debug, Parser)]
struct ResumeCommand {
/// Conversation/session id (UUID) or thread name. A UUID takes precedence if the value parses as one.
@@ -425,6 +457,15 @@ fn run_execpolicycheck(cmd: ExecPolicyCheckCommand) -> anyhow::Result<()> {
cmd.run()
}
fn run_debug_app_server_command(cmd: DebugAppServerCommand) -> anyhow::Result<()> {
match cmd.subcommand {
DebugAppServerSubcommand::SendMessageV2(cmd) => {
let codex_bin = std::env::current_exe()?;
codex_app_server_test_client::send_message_v2(&codex_bin, &[], cmd.user_message, &None)
}
}
}
#[derive(Debug, Default, Parser, Clone)]
struct FeatureToggles {
/// Enable a feature (repeatable). Equivalent to `-c features.<name>=true`.
@@ -693,6 +734,11 @@ async fn cli_main(codex_linux_sandbox_exe: Option<PathBuf>) -> anyhow::Result<()
.await?;
}
},
Some(Subcommand::Debug(DebugCommand { subcommand })) => match subcommand {
DebugSubcommand::AppServer(cmd) => {
run_debug_app_server_command(cmd)?;
}
},
Some(Subcommand::Execpolicy(ExecpolicyCommand { sub })) => match sub {
ExecpolicySubcommand::Check(cmd) => run_execpolicycheck(cmd)?,
},

View File

@@ -2,7 +2,7 @@
Typed clients for Codex/OpenAI APIs built on top of the generic transport in `codex-client`.
- Hosts the request/response models and prompt helpers for Responses, Chat Completions, and Compact APIs.
- Hosts the request/response models and prompt helpers for Responses and Compact APIs.
- Owns provider configuration (base URLs, headers, query params), auth header injection, retry tuning, and stream idle settings.
- Parses SSE streams into `ResponseEvent`/`ResponseStream`, including rate-limit snapshots and API-specific error mapping.
- Serves as the wire-level layer consumed by `codex-core`; higher layers handle auth refresh and business logic.
@@ -11,7 +11,7 @@ Typed clients for Codex/OpenAI APIs built on top of the generic transport in `co
The public interface of this crate is intentionally small and uniform:
- **Prompted endpoints (Chat + Responses)**
- **Prompted endpoints (Responses)**
- Input: a single `Prompt` plus endpoint-specific options.
- `Prompt` (re-exported as `codex_api::Prompt`) carries:
- `instructions: String` the fully-resolved system prompt for this turn.

View File

@@ -1,4 +1,6 @@
use codex_client::Request;
use http::HeaderMap;
use http::HeaderValue;
/// Provides bearer and account identity information for API requests.
///
@@ -12,16 +14,20 @@ pub trait AuthProvider: Send + Sync {
}
}
pub(crate) fn add_auth_headers<A: AuthProvider>(auth: &A, mut req: Request) -> Request {
pub(crate) fn add_auth_headers_to_header_map<A: AuthProvider>(auth: &A, headers: &mut HeaderMap) {
if let Some(token) = auth.bearer_token()
&& let Ok(header) = format!("Bearer {token}").parse()
&& let Ok(header) = HeaderValue::from_str(&format!("Bearer {token}"))
{
let _ = req.headers.insert(http::header::AUTHORIZATION, header);
let _ = headers.insert(http::header::AUTHORIZATION, header);
}
if let Some(account_id) = auth.account_id()
&& let Ok(header) = account_id.parse()
&& let Ok(header) = HeaderValue::from_str(&account_id)
{
let _ = req.headers.insert("ChatGPT-Account-ID", header);
let _ = headers.insert("ChatGPT-Account-ID", header);
}
}
pub(crate) fn add_auth_headers<A: AuthProvider>(auth: &A, mut req: Request) -> Request {
add_auth_headers_to_header_map(auth, &mut req.headers);
req
}

View File

@@ -13,7 +13,7 @@ use std::task::Context;
use std::task::Poll;
use tokio::sync::mpsc;
/// Canonical prompt input for Chat and Responses endpoints.
/// Canonical prompt input for Responses endpoints.
#[derive(Debug, Clone)]
pub struct Prompt {
/// Fully-resolved system instructions for this turn.

View File

@@ -1,111 +1,21 @@
use crate::ChatRequest;
use crate::auth::AuthProvider;
use crate::common::Prompt as ApiPrompt;
use crate::common::ResponseEvent;
use crate::common::ResponseStream;
use crate::endpoint::streaming::StreamingClient;
use crate::error::ApiError;
use crate::provider::Provider;
use crate::provider::WireApi;
use crate::sse::chat::spawn_chat_stream;
use crate::telemetry::SseTelemetry;
use codex_client::HttpTransport;
use codex_client::RequestCompression;
use codex_client::RequestTelemetry;
use codex_protocol::models::ContentItem;
use codex_protocol::models::ReasoningItemContent;
use codex_protocol::models::ResponseItem;
use codex_protocol::protocol::SessionSource;
use futures::Stream;
use http::HeaderMap;
use serde_json::Value;
use std::collections::VecDeque;
use std::pin::Pin;
use std::sync::Arc;
use std::task::Context;
use std::task::Poll;
pub struct ChatClient<T: HttpTransport, A: AuthProvider> {
streaming: StreamingClient<T, A>,
}
impl<T: HttpTransport, A: AuthProvider> ChatClient<T, A> {
pub fn new(transport: T, provider: Provider, auth: A) -> Self {
Self {
streaming: StreamingClient::new(transport, provider, auth),
}
}
pub fn with_telemetry(
self,
request: Option<Arc<dyn RequestTelemetry>>,
sse: Option<Arc<dyn SseTelemetry>>,
) -> Self {
Self {
streaming: self.streaming.with_telemetry(request, sse),
}
}
pub async fn stream_request(&self, request: ChatRequest) -> Result<ResponseStream, ApiError> {
self.stream(request.body, request.headers).await
}
pub async fn stream_prompt(
&self,
model: &str,
prompt: &ApiPrompt,
conversation_id: Option<String>,
session_source: Option<SessionSource>,
) -> Result<ResponseStream, ApiError> {
use crate::requests::ChatRequestBuilder;
let request =
ChatRequestBuilder::new(model, &prompt.instructions, &prompt.input, &prompt.tools)
.conversation_id(conversation_id)
.session_source(session_source)
.build(self.streaming.provider())?;
self.stream_request(request).await
}
fn path(&self) -> &'static str {
match self.streaming.provider().wire {
WireApi::Chat => "chat/completions",
_ => "responses",
}
}
pub async fn stream(
&self,
body: Value,
extra_headers: HeaderMap,
) -> Result<ResponseStream, ApiError> {
self.streaming
.stream(
self.path(),
body,
extra_headers,
RequestCompression::None,
spawn_chat_stream,
None,
)
.await
}
}
#[derive(Copy, Clone, Eq, PartialEq)]
pub enum AggregateMode {
AggregatedOnly,
Streaming,
}
/// Stream adapter that merges token deltas into a single assistant message per turn.
pub struct AggregatedStream {
inner: ResponseStream,
cumulative: String,
cumulative_reasoning: String,
pending: VecDeque<ResponseEvent>,
mode: AggregateMode,
}
impl Stream for AggregatedStream {
@@ -122,7 +32,7 @@ impl Stream for AggregatedStream {
match Pin::new(&mut this.inner).poll_next(cx) {
Poll::Pending => return Poll::Pending,
Poll::Ready(None) => return Poll::Ready(None),
Poll::Ready(Some(Err(e))) => return Poll::Ready(Some(Err(e))),
Poll::Ready(Some(Err(err))) => return Poll::Ready(Some(Err(err))),
Poll::Ready(Some(Ok(ResponseEvent::OutputItemDone(item)))) => {
let is_assistant_message = matches!(
&item,
@@ -130,29 +40,16 @@ impl Stream for AggregatedStream {
);
if is_assistant_message {
match this.mode {
AggregateMode::AggregatedOnly => {
if this.cumulative.is_empty()
&& let ResponseItem::Message { content, .. } = &item
&& let Some(text) = content.iter().find_map(|c| match c {
ContentItem::OutputText { text } => Some(text),
_ => None,
})
{
this.cumulative.push_str(text);
}
continue;
}
AggregateMode::Streaming => {
if this.cumulative.is_empty() {
return Poll::Ready(Some(Ok(ResponseEvent::OutputItemDone(
item,
))));
} else {
continue;
}
}
if this.cumulative.is_empty()
&& let ResponseItem::Message { content, .. } = &item
&& let Some(text) = content.iter().find_map(|c| match c {
ContentItem::OutputText { text } => Some(text),
_ => None,
})
{
this.cumulative.push_str(text);
}
continue;
}
return Poll::Ready(Some(Ok(ResponseEvent::OutputItemDone(item))));
@@ -216,35 +113,20 @@ impl Stream for AggregatedStream {
token_usage,
})));
}
Poll::Ready(Some(Ok(ResponseEvent::Created))) => {
continue;
}
Poll::Ready(Some(Ok(ResponseEvent::Created))) => continue,
Poll::Ready(Some(Ok(ResponseEvent::OutputTextDelta(delta)))) => {
this.cumulative.push_str(&delta);
if matches!(this.mode, AggregateMode::Streaming) {
return Poll::Ready(Some(Ok(ResponseEvent::OutputTextDelta(delta))));
} else {
continue;
}
continue;
}
Poll::Ready(Some(Ok(ResponseEvent::ReasoningContentDelta {
delta,
content_index,
content_index: _,
}))) => {
this.cumulative_reasoning.push_str(&delta);
if matches!(this.mode, AggregateMode::Streaming) {
return Poll::Ready(Some(Ok(ResponseEvent::ReasoningContentDelta {
delta,
content_index,
})));
} else {
continue;
}
}
Poll::Ready(Some(Ok(ResponseEvent::ReasoningSummaryDelta { .. }))) => continue,
Poll::Ready(Some(Ok(ResponseEvent::ReasoningSummaryPartAdded { .. }))) => {
continue;
}
Poll::Ready(Some(Ok(ResponseEvent::ReasoningSummaryDelta { .. }))) => continue,
Poll::Ready(Some(Ok(ResponseEvent::ReasoningSummaryPartAdded { .. }))) => continue,
Poll::Ready(Some(Ok(ResponseEvent::OutputItemAdded(item)))) => {
return Poll::Ready(Some(Ok(ResponseEvent::OutputItemAdded(item))));
}
@@ -255,28 +137,21 @@ impl Stream for AggregatedStream {
pub trait AggregateStreamExt {
fn aggregate(self) -> AggregatedStream;
fn streaming_mode(self) -> ResponseStream;
}
impl AggregateStreamExt for ResponseStream {
fn aggregate(self) -> AggregatedStream {
AggregatedStream::new(self, AggregateMode::AggregatedOnly)
}
fn streaming_mode(self) -> ResponseStream {
self
AggregatedStream::new(self)
}
}
impl AggregatedStream {
fn new(inner: ResponseStream, mode: AggregateMode) -> Self {
fn new(inner: ResponseStream) -> Self {
AggregatedStream {
inner,
cumulative: String::new(),
cumulative_reasoning: String::new(),
pending: VecDeque::new(),
mode,
}
}
}
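With `AggregateMode` gone, `aggregate()` is the only adapter and always behaves like the old `AggregatedOnly` mode: token deltas are folded into the cumulative buffers and the consumer sees a single merged assistant message per turn. A hedged usage sketch built from the re-exports visible in this diff (`Unpin` on the stream and the exact event flow at turn completion are assumptions):

```rust
use codex_api::AggregateStreamExt;
use codex_api::ApiError;
use codex_api::ResponseEvent;
use codex_api::ResponseStream;
use futures::StreamExt;

async fn print_final_message(stream: ResponseStream) -> Result<(), ApiError> {
    let mut aggregated = stream.aggregate();
    while let Some(event) = aggregated.next().await {
        match event? {
            // Deltas never reach this point; the adapter swallows them and
            // surfaces the merged assistant message once the turn completes.
            ResponseEvent::OutputItemDone(item) => println!("{item:?}"),
            _ => {}
        }
    }
    Ok(())
}
```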

View File

@@ -1,10 +1,8 @@
use crate::auth::AuthProvider;
use crate::auth::add_auth_headers;
use crate::common::CompactionInput;
use crate::endpoint::session::EndpointSession;
use crate::error::ApiError;
use crate::provider::Provider;
use crate::provider::WireApi;
use crate::telemetry::run_with_request_telemetry;
use codex_client::HttpTransport;
use codex_client::RequestTelemetry;
use codex_protocol::models::ResponseItem;
@@ -15,34 +13,24 @@ use serde_json::to_value;
use std::sync::Arc;
pub struct CompactClient<T: HttpTransport, A: AuthProvider> {
transport: T,
provider: Provider,
auth: A,
request_telemetry: Option<Arc<dyn RequestTelemetry>>,
session: EndpointSession<T, A>,
}
impl<T: HttpTransport, A: AuthProvider> CompactClient<T, A> {
pub fn new(transport: T, provider: Provider, auth: A) -> Self {
Self {
transport,
provider,
auth,
request_telemetry: None,
session: EndpointSession::new(transport, provider, auth),
}
}
pub fn with_telemetry(mut self, request: Option<Arc<dyn RequestTelemetry>>) -> Self {
self.request_telemetry = request;
self
pub fn with_telemetry(self, request: Option<Arc<dyn RequestTelemetry>>) -> Self {
Self {
session: self.session.with_request_telemetry(request),
}
}
fn path(&self) -> Result<&'static str, ApiError> {
match self.provider.wire {
WireApi::Compact | WireApi::Responses => Ok("responses/compact"),
WireApi::Chat => Err(ApiError::Stream(
"compact endpoint requires responses wire api".to_string(),
)),
}
fn path() -> &'static str {
"responses/compact"
}
pub async fn compact(
@@ -50,21 +38,10 @@ impl<T: HttpTransport, A: AuthProvider> CompactClient<T, A> {
body: serde_json::Value,
extra_headers: HeaderMap,
) -> Result<Vec<ResponseItem>, ApiError> {
let path = self.path()?;
let builder = || {
let mut req = self.provider.build_request(Method::POST, path);
req.headers.extend(extra_headers.clone());
req.body = Some(body.clone());
add_auth_headers(&self.auth, req)
};
let resp = run_with_request_telemetry(
self.provider.retry.to_policy(),
self.request_telemetry.clone(),
builder,
|req| self.transport.execute(req),
)
.await?;
let resp = self
.session
.execute(Method::POST, Self::path(), extra_headers, Some(body))
.await?;
let parsed: CompactHistoryResponse =
serde_json::from_slice(&resp.body).map_err(|e| ApiError::Stream(e.to_string()))?;
Ok(parsed.output)
@@ -89,14 +66,11 @@ struct CompactHistoryResponse {
#[cfg(test)]
mod tests {
use super::*;
use crate::provider::RetryConfig;
use async_trait::async_trait;
use codex_client::Request;
use codex_client::Response;
use codex_client::StreamResponse;
use codex_client::TransportError;
use http::HeaderMap;
use std::time::Duration;
#[derive(Clone, Default)]
struct DummyTransport;
@@ -121,42 +95,11 @@ mod tests {
}
}
fn provider(wire: WireApi) -> Provider {
Provider {
name: "test".to_string(),
base_url: "https://example.com/v1".to_string(),
query_params: None,
wire,
headers: HeaderMap::new(),
retry: RetryConfig {
max_attempts: 1,
base_delay: Duration::from_millis(1),
retry_429: false,
retry_5xx: true,
retry_transport: true,
},
stream_idle_timeout: Duration::from_secs(1),
}
}
#[tokio::test]
async fn errors_when_wire_is_chat() {
let client = CompactClient::new(DummyTransport, provider(WireApi::Chat), DummyAuth);
let input = CompactionInput {
model: "gpt-test",
input: &[],
instructions: "inst",
};
let err = client
.compact_input(&input, HeaderMap::new())
.await
.expect_err("expected wire mismatch to fail");
match err {
ApiError::Stream(msg) => {
assert_eq!(msg, "compact endpoint requires responses wire api");
}
other => panic!("unexpected error: {other:?}"),
}
#[test]
fn path_is_responses_compact() {
assert_eq!(
CompactClient::<DummyTransport, DummyAuth>::path(),
"responses/compact"
);
}
}

View File

@@ -1,6 +1,6 @@
pub mod chat;
pub mod aggregate;
pub mod compact;
pub mod models;
pub mod responses;
pub mod responses_websocket;
mod streaming;
mod session;

View File

@@ -1,8 +1,7 @@
use crate::auth::AuthProvider;
use crate::auth::add_auth_headers;
use crate::endpoint::session::EndpointSession;
use crate::error::ApiError;
use crate::provider::Provider;
use crate::telemetry::run_with_request_telemetry;
use codex_client::HttpTransport;
use codex_client::RequestTelemetry;
use codex_protocol::openai_models::ModelInfo;
@@ -13,53 +12,42 @@ use http::header::ETAG;
use std::sync::Arc;
pub struct ModelsClient<T: HttpTransport, A: AuthProvider> {
transport: T,
provider: Provider,
auth: A,
request_telemetry: Option<Arc<dyn RequestTelemetry>>,
session: EndpointSession<T, A>,
}
impl<T: HttpTransport, A: AuthProvider> ModelsClient<T, A> {
pub fn new(transport: T, provider: Provider, auth: A) -> Self {
Self {
transport,
provider,
auth,
request_telemetry: None,
session: EndpointSession::new(transport, provider, auth),
}
}
pub fn with_telemetry(mut self, request: Option<Arc<dyn RequestTelemetry>>) -> Self {
self.request_telemetry = request;
self
pub fn with_telemetry(self, request: Option<Arc<dyn RequestTelemetry>>) -> Self {
Self {
session: self.session.with_request_telemetry(request),
}
}
fn path(&self) -> &'static str {
fn path() -> &'static str {
"models"
}
fn append_client_version_query(req: &mut codex_client::Request, client_version: &str) {
let separator = if req.url.contains('?') { '&' } else { '?' };
req.url = format!("{}{}client_version={client_version}", req.url, separator);
}
pub async fn list_models(
&self,
client_version: &str,
extra_headers: HeaderMap,
) -> Result<(Vec<ModelInfo>, Option<String>), ApiError> {
let builder = || {
let mut req = self.provider.build_request(Method::GET, self.path());
req.headers.extend(extra_headers.clone());
let separator = if req.url.contains('?') { '&' } else { '?' };
req.url = format!("{}{}client_version={client_version}", req.url, separator);
add_auth_headers(&self.auth, req)
};
let resp = run_with_request_telemetry(
self.provider.retry.to_policy(),
self.request_telemetry.clone(),
builder,
|req| self.transport.execute(req),
)
.await?;
let resp = self
.session
.execute_with(Method::GET, Self::path(), extra_headers, None, |req| {
Self::append_client_version_query(req, client_version);
})
.await?;
let header_etag = resp
.headers
@@ -83,7 +71,6 @@ impl<T: HttpTransport, A: AuthProvider> ModelsClient<T, A> {
mod tests {
use super::*;
use crate::provider::RetryConfig;
use crate::provider::WireApi;
use async_trait::async_trait;
use codex_client::Request;
use codex_client::Response;
@@ -149,7 +136,6 @@ mod tests {
name: "test".to_string(),
base_url: base_url.to_string(),
query_params: None,
wire: WireApi::Responses,
headers: HeaderMap::new(),
retry: RetryConfig {
max_attempts: 1,

View File

@@ -3,10 +3,9 @@ use crate::common::Prompt as ApiPrompt;
use crate::common::Reasoning;
use crate::common::ResponseStream;
use crate::common::TextControls;
use crate::endpoint::streaming::StreamingClient;
use crate::endpoint::session::EndpointSession;
use crate::error::ApiError;
use crate::provider::Provider;
use crate::provider::WireApi;
use crate::requests::ResponsesRequest;
use crate::requests::ResponsesRequestBuilder;
use crate::requests::responses::Compression;
@@ -17,13 +16,16 @@ use codex_client::RequestCompression;
use codex_client::RequestTelemetry;
use codex_protocol::protocol::SessionSource;
use http::HeaderMap;
use http::HeaderValue;
use http::Method;
use serde_json::Value;
use std::sync::Arc;
use std::sync::OnceLock;
use tracing::instrument;
pub struct ResponsesClient<T: HttpTransport, A: AuthProvider> {
streaming: StreamingClient<T, A>,
session: EndpointSession<T, A>,
sse_telemetry: Option<Arc<dyn SseTelemetry>>,
}
#[derive(Default)]
@@ -43,7 +45,8 @@ pub struct ResponsesOptions {
impl<T: HttpTransport, A: AuthProvider> ResponsesClient<T, A> {
pub fn new(transport: T, provider: Provider, auth: A) -> Self {
Self {
streaming: StreamingClient::new(transport, provider, auth),
session: EndpointSession::new(transport, provider, auth),
sse_telemetry: None,
}
}
@@ -53,7 +56,8 @@ impl<T: HttpTransport, A: AuthProvider> ResponsesClient<T, A> {
sse: Option<Arc<dyn SseTelemetry>>,
) -> Self {
Self {
streaming: self.streaming.with_telemetry(request, sse),
session: self.session.with_request_telemetry(request),
sse_telemetry: sse,
}
}
@@ -103,16 +107,13 @@ impl<T: HttpTransport, A: AuthProvider> ResponsesClient<T, A> {
.store_override(store_override)
.extra_headers(extra_headers)
.compression(compression)
.build(self.streaming.provider())?;
.build(self.session.provider())?;
self.stream_request(request, turn_state).await
}
fn path(&self) -> &'static str {
match self.streaming.provider().wire {
WireApi::Responses | WireApi::Compact => "responses",
WireApi::Chat => "chat/completions",
}
fn path() -> &'static str {
"responses"
}
pub async fn stream(
@@ -122,20 +123,33 @@ impl<T: HttpTransport, A: AuthProvider> ResponsesClient<T, A> {
compression: Compression,
turn_state: Option<Arc<OnceLock<String>>>,
) -> Result<ResponseStream, ApiError> {
let compression = match compression {
let request_compression = match compression {
Compression::None => RequestCompression::None,
Compression::Zstd => RequestCompression::Zstd,
};
self.streaming
.stream(
self.path(),
body,
let stream_response = self
.session
.stream_with(
Method::POST,
Self::path(),
extra_headers,
compression,
spawn_response_stream,
turn_state,
Some(body),
|req| {
req.headers.insert(
http::header::ACCEPT,
HeaderValue::from_static("text/event-stream"),
);
req.compression = request_compression;
},
)
.await
.await?;
Ok(spawn_response_stream(
stream_response,
self.session.provider().stream_idle_timeout,
self.sse_telemetry.clone(),
turn_state,
))
}
}

View File

@@ -1,4 +1,5 @@
use crate::auth::AuthProvider;
use crate::auth::add_auth_headers_to_header_map;
use crate::common::ResponseEvent;
use crate::common::ResponseStream;
use crate::common::ResponsesWsRequest;
@@ -11,7 +12,6 @@ use codex_client::TransportError;
use futures::SinkExt;
use futures::StreamExt;
use http::HeaderMap;
use http::HeaderValue;
use serde_json::Value;
use std::sync::Arc;
use std::sync::OnceLock;
@@ -134,7 +134,7 @@ impl<A: AuthProvider> ResponsesWebsocketClient<A> {
let mut headers = self.provider.headers.clone();
headers.extend(extra_headers);
apply_auth_headers(&mut headers, &self.auth);
add_auth_headers_to_header_map(&self.auth, &mut headers);
let (stream, server_reasoning_included) =
connect_websocket(ws_url, headers, turn_state).await?;
@@ -147,20 +147,6 @@ impl<A: AuthProvider> ResponsesWebsocketClient<A> {
}
}
// TODO (pakrym): share with /auth
fn apply_auth_headers(headers: &mut HeaderMap, auth: &impl AuthProvider) {
if let Some(token) = auth.bearer_token()
&& let Ok(header) = HeaderValue::from_str(&format!("Bearer {token}"))
{
let _ = headers.insert(http::header::AUTHORIZATION, header);
}
if let Some(account_id) = auth.account_id()
&& let Ok(header) = HeaderValue::from_str(&account_id)
{
let _ = headers.insert("ChatGPT-Account-ID", header);
}
}
async fn connect_websocket(
url: Url,
headers: HeaderMap,

View File

@@ -0,0 +1,126 @@
use crate::auth::AuthProvider;
use crate::auth::add_auth_headers;
use crate::error::ApiError;
use crate::provider::Provider;
use crate::telemetry::run_with_request_telemetry;
use codex_client::HttpTransport;
use codex_client::Request;
use codex_client::RequestTelemetry;
use codex_client::Response;
use codex_client::StreamResponse;
use http::HeaderMap;
use http::Method;
use serde_json::Value;
use std::sync::Arc;
pub(crate) struct EndpointSession<T: HttpTransport, A: AuthProvider> {
transport: T,
provider: Provider,
auth: A,
request_telemetry: Option<Arc<dyn RequestTelemetry>>,
}
impl<T: HttpTransport, A: AuthProvider> EndpointSession<T, A> {
pub(crate) fn new(transport: T, provider: Provider, auth: A) -> Self {
Self {
transport,
provider,
auth,
request_telemetry: None,
}
}
pub(crate) fn with_request_telemetry(
mut self,
request: Option<Arc<dyn RequestTelemetry>>,
) -> Self {
self.request_telemetry = request;
self
}
pub(crate) fn provider(&self) -> &Provider {
&self.provider
}
fn make_request(
&self,
method: &Method,
path: &str,
extra_headers: &HeaderMap,
body: Option<&Value>,
) -> Request {
let mut req = self.provider.build_request(method.clone(), path);
req.headers.extend(extra_headers.clone());
if let Some(body) = body {
req.body = Some(body.clone());
}
add_auth_headers(&self.auth, req)
}
pub(crate) async fn execute(
&self,
method: Method,
path: &str,
extra_headers: HeaderMap,
body: Option<Value>,
) -> Result<Response, ApiError> {
self.execute_with(method, path, extra_headers, body, |_| {})
.await
}
pub(crate) async fn execute_with<C>(
&self,
method: Method,
path: &str,
extra_headers: HeaderMap,
body: Option<Value>,
configure: C,
) -> Result<Response, ApiError>
where
C: Fn(&mut Request),
{
let make_request = || {
let mut req = self.make_request(&method, path, &extra_headers, body.as_ref());
configure(&mut req);
req
};
let response = run_with_request_telemetry(
self.provider.retry.to_policy(),
self.request_telemetry.clone(),
make_request,
|req| self.transport.execute(req),
)
.await?;
Ok(response)
}
pub(crate) async fn stream_with<C>(
&self,
method: Method,
path: &str,
extra_headers: HeaderMap,
body: Option<Value>,
configure: C,
) -> Result<StreamResponse, ApiError>
where
C: Fn(&mut Request),
{
let make_request = || {
let mut req = self.make_request(&method, path, &extra_headers, body.as_ref());
configure(&mut req);
req
};
let stream = run_with_request_telemetry(
self.provider.retry.to_policy(),
self.request_telemetry.clone(),
make_request,
|req| self.transport.stream(req),
)
.await?;
Ok(stream)
}
}
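This new module is the shared plumbing the refactor converges on: each endpoint client now owns an `EndpointSession` and funnels requests through `execute`/`execute_with`/`stream_with`, so auth-header injection, retry policy, and request telemetry live in one place. A hedged sketch of how an additional in-crate client would compose it, mirroring `ModelsClient` and `CompactClient` above (the `ExampleClient` name and `"example"` path are hypothetical):

```rust
// Hypothetical in-crate client; EndpointSession is pub(crate), so this would
// live inside codex-api alongside the other endpoint modules.
pub(crate) struct ExampleClient<T: HttpTransport, A: AuthProvider> {
    session: EndpointSession<T, A>,
}

impl<T: HttpTransport, A: AuthProvider> ExampleClient<T, A> {
    pub(crate) fn new(transport: T, provider: Provider, auth: A) -> Self {
        Self {
            session: EndpointSession::new(transport, provider, auth),
        }
    }

    pub(crate) async fn fetch(&self, extra_headers: HeaderMap) -> Result<Response, ApiError> {
        // Retries, telemetry, and auth headers are all applied by the session.
        self.session
            .execute(Method::GET, "example", extra_headers, None)
            .await
    }
}
```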

View File

@@ -1,95 +0,0 @@
use crate::auth::AuthProvider;
use crate::auth::add_auth_headers;
use crate::common::ResponseStream;
use crate::error::ApiError;
use crate::provider::Provider;
use crate::telemetry::SseTelemetry;
use crate::telemetry::run_with_request_telemetry;
use codex_client::HttpTransport;
use codex_client::RequestCompression;
use codex_client::RequestTelemetry;
use codex_client::StreamResponse;
use http::HeaderMap;
use http::Method;
use serde_json::Value;
use std::sync::Arc;
use std::sync::OnceLock;
use std::time::Duration;
pub(crate) struct StreamingClient<T: HttpTransport, A: AuthProvider> {
transport: T,
provider: Provider,
auth: A,
request_telemetry: Option<Arc<dyn RequestTelemetry>>,
sse_telemetry: Option<Arc<dyn SseTelemetry>>,
}
type StreamSpawner = fn(
StreamResponse,
Duration,
Option<Arc<dyn SseTelemetry>>,
Option<Arc<OnceLock<String>>>,
) -> ResponseStream;
impl<T: HttpTransport, A: AuthProvider> StreamingClient<T, A> {
pub(crate) fn new(transport: T, provider: Provider, auth: A) -> Self {
Self {
transport,
provider,
auth,
request_telemetry: None,
sse_telemetry: None,
}
}
pub(crate) fn with_telemetry(
mut self,
request: Option<Arc<dyn RequestTelemetry>>,
sse: Option<Arc<dyn SseTelemetry>>,
) -> Self {
self.request_telemetry = request;
self.sse_telemetry = sse;
self
}
pub(crate) fn provider(&self) -> &Provider {
&self.provider
}
pub(crate) async fn stream(
&self,
path: &str,
body: Value,
extra_headers: HeaderMap,
compression: RequestCompression,
spawner: StreamSpawner,
turn_state: Option<Arc<OnceLock<String>>>,
) -> Result<ResponseStream, ApiError> {
let builder = || {
let mut req = self.provider.build_request(Method::POST, path);
req.headers.extend(extra_headers.clone());
req.headers.insert(
http::header::ACCEPT,
http::HeaderValue::from_static("text/event-stream"),
);
req.body = Some(body.clone());
req.compression = compression;
add_auth_headers(&self.auth, req)
};
let stream_response = run_with_request_telemetry(
self.provider.retry.to_policy(),
self.request_telemetry.clone(),
builder,
|req| self.transport.stream(req),
)
.await?;
Ok(spawner(
stream_response,
self.provider.stream_idle_timeout,
self.sse_telemetry.clone(),
turn_state,
))
}
}

View File

@@ -22,8 +22,7 @@ pub use crate::common::ResponseEvent;
pub use crate::common::ResponseStream;
pub use crate::common::ResponsesApiRequest;
pub use crate::common::create_text_param_for_request;
pub use crate::endpoint::chat::AggregateStreamExt;
pub use crate::endpoint::chat::ChatClient;
pub use crate::endpoint::aggregate::AggregateStreamExt;
pub use crate::endpoint::compact::CompactClient;
pub use crate::endpoint::models::ModelsClient;
pub use crate::endpoint::responses::ResponsesClient;
@@ -32,10 +31,7 @@ pub use crate::endpoint::responses_websocket::ResponsesWebsocketClient;
pub use crate::endpoint::responses_websocket::ResponsesWebsocketConnection;
pub use crate::error::ApiError;
pub use crate::provider::Provider;
pub use crate::provider::WireApi;
pub use crate::provider::is_azure_responses_wire_base_url;
pub use crate::requests::ChatRequest;
pub use crate::requests::ChatRequestBuilder;
pub use crate::requests::ResponsesRequest;
pub use crate::requests::ResponsesRequestBuilder;
pub use crate::sse::stream_from_fixture;

View File

@@ -8,14 +8,6 @@ use std::collections::HashMap;
use std::time::Duration;
use url::Url;
/// Wire-level APIs supported by a `Provider`.
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum WireApi {
Responses,
Chat,
Compact,
}
/// High-level retry configuration for a provider.
///
/// This is converted into a `RetryPolicy` used by `codex-client` to drive
@@ -52,7 +44,6 @@ pub struct Provider {
pub name: String,
pub base_url: String,
pub query_params: Option<HashMap<String, String>>,
pub wire: WireApi,
pub headers: HeaderMap,
pub retry: RetryConfig,
pub stream_idle_timeout: Duration,
@@ -95,7 +86,7 @@ impl Provider {
}
pub fn is_azure_responses_endpoint(&self) -> bool {
is_azure_responses_wire_base_url(self.wire.clone(), &self.name, Some(&self.base_url))
is_azure_responses_wire_base_url(&self.name, Some(&self.base_url))
}
pub fn websocket_url_for_path(&self, path: &str) -> Result<Url, url::ParseError> {
@@ -112,11 +103,7 @@ impl Provider {
}
}
pub fn is_azure_responses_wire_base_url(wire: WireApi, name: &str, base_url: Option<&str>) -> bool {
if wire != WireApi::Responses {
return false;
}
pub fn is_azure_responses_wire_base_url(name: &str, base_url: Option<&str>) -> bool {
if name.eq_ignore_ascii_case("azure") {
return true;
}
@@ -157,13 +144,12 @@ mod tests {
for base_url in positive_cases {
assert!(
is_azure_responses_wire_base_url(WireApi::Responses, "test", Some(base_url)),
is_azure_responses_wire_base_url("test", Some(base_url)),
"expected {base_url} to be detected as Azure"
);
}
assert!(is_azure_responses_wire_base_url(
WireApi::Responses,
"Azure",
Some("https://example.com")
));
@@ -176,15 +162,9 @@ mod tests {
for base_url in negative_cases {
assert!(
!is_azure_responses_wire_base_url(WireApi::Responses, "test", Some(base_url)),
!is_azure_responses_wire_base_url("test", Some(base_url)),
"expected {base_url} not to be detected as Azure"
);
}
assert!(!is_azure_responses_wire_base_url(
WireApi::Chat,
"Azure",
Some("https://foo.openai.azure.com/openai")
));
}
}

View File

@@ -1,494 +0,0 @@
use crate::error::ApiError;
use crate::provider::Provider;
use crate::requests::headers::build_conversation_headers;
use crate::requests::headers::insert_header;
use crate::requests::headers::subagent_header;
use codex_protocol::models::ContentItem;
use codex_protocol::models::FunctionCallOutputContentItem;
use codex_protocol::models::ReasoningItemContent;
use codex_protocol::models::ResponseItem;
use codex_protocol::protocol::SessionSource;
use http::HeaderMap;
use serde_json::Value;
use serde_json::json;
use std::collections::HashMap;
/// Assembled request body plus headers for Chat Completions streaming calls.
pub struct ChatRequest {
pub body: Value,
pub headers: HeaderMap,
}
pub struct ChatRequestBuilder<'a> {
model: &'a str,
instructions: &'a str,
input: &'a [ResponseItem],
tools: &'a [Value],
conversation_id: Option<String>,
session_source: Option<SessionSource>,
}
impl<'a> ChatRequestBuilder<'a> {
pub fn new(
model: &'a str,
instructions: &'a str,
input: &'a [ResponseItem],
tools: &'a [Value],
) -> Self {
Self {
model,
instructions,
input,
tools,
conversation_id: None,
session_source: None,
}
}
pub fn conversation_id(mut self, id: Option<String>) -> Self {
self.conversation_id = id;
self
}
pub fn session_source(mut self, source: Option<SessionSource>) -> Self {
self.session_source = source;
self
}
pub fn build(self, _provider: &Provider) -> Result<ChatRequest, ApiError> {
let mut messages = Vec::<Value>::new();
messages.push(json!({"role": "system", "content": self.instructions}));
let input = self.input;
let mut reasoning_by_anchor_index: HashMap<usize, String> = HashMap::new();
let mut last_emitted_role: Option<&str> = None;
for item in input {
match item {
ResponseItem::Message { role, .. } => last_emitted_role = Some(role.as_str()),
ResponseItem::FunctionCall { .. } | ResponseItem::LocalShellCall { .. } => {
last_emitted_role = Some("assistant")
}
ResponseItem::FunctionCallOutput { .. } => last_emitted_role = Some("tool"),
ResponseItem::Reasoning { .. } | ResponseItem::Other => {}
ResponseItem::CustomToolCall { .. } => {}
ResponseItem::CustomToolCallOutput { .. } => {}
ResponseItem::WebSearchCall { .. } => {}
ResponseItem::GhostSnapshot { .. } => {}
ResponseItem::Compaction { .. } => {}
}
}
let mut last_user_index: Option<usize> = None;
for (idx, item) in input.iter().enumerate() {
if let ResponseItem::Message { role, .. } = item
&& role == "user"
{
last_user_index = Some(idx);
}
}
if !matches!(last_emitted_role, Some("user")) {
for (idx, item) in input.iter().enumerate() {
if let Some(u_idx) = last_user_index
&& idx <= u_idx
{
continue;
}
if let ResponseItem::Reasoning {
content: Some(items),
..
} = item
{
let mut text = String::new();
for entry in items {
match entry {
ReasoningItemContent::ReasoningText { text: segment }
| ReasoningItemContent::Text { text: segment } => {
text.push_str(segment)
}
}
}
if text.trim().is_empty() {
continue;
}
let mut attached = false;
if idx > 0
&& let ResponseItem::Message { role, .. } = &input[idx - 1]
&& role == "assistant"
{
reasoning_by_anchor_index
.entry(idx - 1)
.and_modify(|v| v.push_str(&text))
.or_insert(text.clone());
attached = true;
}
if !attached && idx + 1 < input.len() {
match &input[idx + 1] {
ResponseItem::FunctionCall { .. }
| ResponseItem::LocalShellCall { .. } => {
reasoning_by_anchor_index
.entry(idx + 1)
.and_modify(|v| v.push_str(&text))
.or_insert(text.clone());
}
ResponseItem::Message { role, .. } if role == "assistant" => {
reasoning_by_anchor_index
.entry(idx + 1)
.and_modify(|v| v.push_str(&text))
.or_insert(text.clone());
}
_ => {}
}
}
}
}
}
let mut last_assistant_text: Option<String> = None;
for (idx, item) in input.iter().enumerate() {
match item {
ResponseItem::Message { role, content, .. } => {
let mut text = String::new();
let mut items: Vec<Value> = Vec::new();
let mut saw_image = false;
for c in content {
match c {
ContentItem::InputText { text: t }
| ContentItem::OutputText { text: t } => {
text.push_str(t);
items.push(json!({"type":"text","text": t}));
}
ContentItem::InputImage { image_url } => {
saw_image = true;
items.push(
json!({"type":"image_url","image_url": {"url": image_url}}),
);
}
}
}
if role == "assistant" {
if let Some(prev) = &last_assistant_text
&& prev == &text
{
continue;
}
last_assistant_text = Some(text.clone());
}
let content_value = if role == "assistant" {
json!(text)
} else if saw_image {
json!(items)
} else {
json!(text)
};
let mut msg = json!({"role": role, "content": content_value});
if role == "assistant"
&& let Some(reasoning) = reasoning_by_anchor_index.get(&idx)
&& let Some(obj) = msg.as_object_mut()
{
obj.insert("reasoning".to_string(), json!(reasoning));
}
messages.push(msg);
}
ResponseItem::FunctionCall {
name,
arguments,
call_id,
..
} => {
let reasoning = reasoning_by_anchor_index.get(&idx).map(String::as_str);
let tool_call = json!({
"id": call_id,
"type": "function",
"function": {
"name": name,
"arguments": arguments,
}
});
push_tool_call_message(&mut messages, tool_call, reasoning);
}
ResponseItem::LocalShellCall {
id,
call_id: _,
status,
action,
} => {
let reasoning = reasoning_by_anchor_index.get(&idx).map(String::as_str);
let tool_call = json!({
"id": id.clone().unwrap_or_default(),
"type": "local_shell_call",
"status": status,
"action": action,
});
push_tool_call_message(&mut messages, tool_call, reasoning);
}
ResponseItem::FunctionCallOutput { call_id, output } => {
let content_value = if let Some(items) = &output.content_items {
let mapped: Vec<Value> = items
.iter()
.map(|it| match it {
FunctionCallOutputContentItem::InputText { text } => {
json!({"type":"text","text": text})
}
FunctionCallOutputContentItem::InputImage { image_url } => {
json!({"type":"image_url","image_url": {"url": image_url}})
}
})
.collect();
json!(mapped)
} else {
json!(output.content)
};
messages.push(json!({
"role": "tool",
"tool_call_id": call_id,
"content": content_value,
}));
}
ResponseItem::CustomToolCall {
id,
call_id: _,
name,
input,
status: _,
} => {
let tool_call = json!({
"id": id,
"type": "custom",
"custom": {
"name": name,
"input": input,
}
});
let reasoning = reasoning_by_anchor_index.get(&idx).map(String::as_str);
push_tool_call_message(&mut messages, tool_call, reasoning);
}
ResponseItem::CustomToolCallOutput { call_id, output } => {
messages.push(json!({
"role": "tool",
"tool_call_id": call_id,
"content": output,
}));
}
ResponseItem::GhostSnapshot { .. } => {
continue;
}
ResponseItem::Reasoning { .. }
| ResponseItem::WebSearchCall { .. }
| ResponseItem::Other
| ResponseItem::Compaction { .. } => {
continue;
}
}
}
let payload = json!({
"model": self.model,
"messages": messages,
"stream": true,
"tools": self.tools,
});
let mut headers = build_conversation_headers(self.conversation_id);
if let Some(subagent) = subagent_header(&self.session_source) {
insert_header(&mut headers, "x-openai-subagent", &subagent);
}
Ok(ChatRequest {
body: payload,
headers,
})
}
}
fn push_tool_call_message(messages: &mut Vec<Value>, tool_call: Value, reasoning: Option<&str>) {
// Chat Completions requires that tool calls are grouped into a single assistant message
// (with `tool_calls: [...]`) followed by tool role responses.
if let Some(Value::Object(obj)) = messages.last_mut()
&& obj.get("role").and_then(Value::as_str) == Some("assistant")
&& obj.get("content").is_some_and(Value::is_null)
&& let Some(tool_calls) = obj.get_mut("tool_calls").and_then(Value::as_array_mut)
{
tool_calls.push(tool_call);
if let Some(reasoning) = reasoning {
if let Some(Value::String(existing)) = obj.get_mut("reasoning") {
if !existing.is_empty() {
existing.push('\n');
}
existing.push_str(reasoning);
} else {
obj.insert(
"reasoning".to_string(),
Value::String(reasoning.to_string()),
);
}
}
return;
}
let mut msg = json!({
"role": "assistant",
"content": null,
"tool_calls": [tool_call],
});
if let Some(reasoning) = reasoning
&& let Some(obj) = msg.as_object_mut()
{
obj.insert("reasoning".to_string(), json!(reasoning));
}
messages.push(msg);
}
#[cfg(test)]
mod tests {
use super::*;
use crate::provider::RetryConfig;
use crate::provider::WireApi;
use codex_protocol::models::FunctionCallOutputPayload;
use codex_protocol::protocol::SessionSource;
use codex_protocol::protocol::SubAgentSource;
use http::HeaderValue;
use pretty_assertions::assert_eq;
use std::time::Duration;
fn provider() -> Provider {
Provider {
name: "openai".to_string(),
base_url: "https://api.openai.com/v1".to_string(),
query_params: None,
wire: WireApi::Chat,
headers: HeaderMap::new(),
retry: RetryConfig {
max_attempts: 1,
base_delay: Duration::from_millis(10),
retry_429: false,
retry_5xx: true,
retry_transport: true,
},
stream_idle_timeout: Duration::from_secs(1),
}
}
#[test]
fn attaches_conversation_and_subagent_headers() {
let prompt_input = vec![ResponseItem::Message {
id: None,
role: "user".to_string(),
content: vec![ContentItem::InputText {
text: "hi".to_string(),
}],
end_turn: None,
phase: None,
}];
let req = ChatRequestBuilder::new("gpt-test", "inst", &prompt_input, &[])
.conversation_id(Some("conv-1".into()))
.session_source(Some(SessionSource::SubAgent(SubAgentSource::Review)))
.build(&provider())
.expect("request");
assert_eq!(
req.headers.get("session_id"),
Some(&HeaderValue::from_static("conv-1"))
);
assert_eq!(
req.headers.get("x-openai-subagent"),
Some(&HeaderValue::from_static("review"))
);
}
#[test]
fn groups_consecutive_tool_calls_into_a_single_assistant_message() {
let prompt_input = vec![
ResponseItem::Message {
id: None,
role: "user".to_string(),
content: vec![ContentItem::InputText {
text: "read these".to_string(),
}],
end_turn: None,
phase: None,
},
ResponseItem::FunctionCall {
id: None,
name: "read_file".to_string(),
arguments: r#"{"path":"a.txt"}"#.to_string(),
call_id: "call-a".to_string(),
},
ResponseItem::FunctionCall {
id: None,
name: "read_file".to_string(),
arguments: r#"{"path":"b.txt"}"#.to_string(),
call_id: "call-b".to_string(),
},
ResponseItem::FunctionCall {
id: None,
name: "read_file".to_string(),
arguments: r#"{"path":"c.txt"}"#.to_string(),
call_id: "call-c".to_string(),
},
ResponseItem::FunctionCallOutput {
call_id: "call-a".to_string(),
output: FunctionCallOutputPayload {
content: "A".to_string(),
..Default::default()
},
},
ResponseItem::FunctionCallOutput {
call_id: "call-b".to_string(),
output: FunctionCallOutputPayload {
content: "B".to_string(),
..Default::default()
},
},
ResponseItem::FunctionCallOutput {
call_id: "call-c".to_string(),
output: FunctionCallOutputPayload {
content: "C".to_string(),
..Default::default()
},
},
];
let req = ChatRequestBuilder::new("gpt-test", "inst", &prompt_input, &[])
.build(&provider())
.expect("request");
let messages = req
.body
.get("messages")
.and_then(|v| v.as_array())
.expect("messages array");
// system + user + assistant(tool_calls=[...]) + 3 tool outputs
assert_eq!(messages.len(), 6);
assert_eq!(messages[0]["role"], "system");
assert_eq!(messages[1]["role"], "user");
let tool_calls_msg = &messages[2];
assert_eq!(tool_calls_msg["role"], "assistant");
assert_eq!(tool_calls_msg["content"], serde_json::Value::Null);
let tool_calls = tool_calls_msg["tool_calls"]
.as_array()
.expect("tool_calls array");
assert_eq!(tool_calls.len(), 3);
assert_eq!(tool_calls[0]["id"], "call-a");
assert_eq!(tool_calls[1]["id"], "call-b");
assert_eq!(tool_calls[2]["id"], "call-c");
assert_eq!(messages[3]["role"], "tool");
assert_eq!(messages[3]["tool_call_id"], "call-a");
assert_eq!(messages[4]["role"], "tool");
assert_eq!(messages[4]["tool_call_id"], "call-b");
assert_eq!(messages[5]["role"], "tool");
assert_eq!(messages[5]["tool_call_id"], "call-c");
}
}

@@ -1,8 +1,5 @@
pub mod chat;
pub(crate) mod headers;
pub mod responses;
pub use chat::ChatRequest;
pub use chat::ChatRequestBuilder;
pub use responses::ResponsesRequest;
pub use responses::ResponsesRequestBuilder;

@@ -191,7 +191,6 @@ fn attach_item_ids(payload_json: &mut Value, original_items: &[ResponseItem]) {
mod tests {
use super::*;
use crate::provider::RetryConfig;
use crate::provider::WireApi;
use codex_protocol::protocol::SubAgentSource;
use http::HeaderValue;
use pretty_assertions::assert_eq;
@@ -202,7 +201,6 @@ mod tests {
name: name.to_string(),
base_url: base_url.to_string(),
query_params: None,
wire: WireApi::Responses,
headers: HeaderMap::new(),
retry: RetryConfig {
max_attempts: 1,

@@ -1,717 +0,0 @@
use crate::common::ResponseEvent;
use crate::common::ResponseStream;
use crate::error::ApiError;
use crate::telemetry::SseTelemetry;
use codex_client::StreamResponse;
use codex_protocol::models::ContentItem;
use codex_protocol::models::ReasoningItemContent;
use codex_protocol::models::ResponseItem;
use eventsource_stream::Eventsource;
use futures::Stream;
use futures::StreamExt;
use std::collections::HashMap;
use std::collections::HashSet;
use std::sync::Arc;
use std::sync::OnceLock;
use std::time::Duration;
use tokio::sync::mpsc;
use tokio::time::Instant;
use tokio::time::timeout;
use tracing::debug;
use tracing::trace;
pub(crate) fn spawn_chat_stream(
stream_response: StreamResponse,
idle_timeout: Duration,
telemetry: Option<Arc<dyn SseTelemetry>>,
_turn_state: Option<Arc<OnceLock<String>>>,
) -> ResponseStream {
let (tx_event, rx_event) = mpsc::channel::<Result<ResponseEvent, ApiError>>(1600);
tokio::spawn(async move {
process_chat_sse(stream_response.bytes, tx_event, idle_timeout, telemetry).await;
});
ResponseStream { rx_event }
}
/// Processes Server-Sent Events from the legacy Chat Completions streaming API.
///
/// The upstream protocol terminates a streaming response with a final sentinel event
/// (`data: [DONE]`). Historically, some of our test stubs have emitted `data: DONE`
/// (without brackets) instead.
///
/// `eventsource_stream` delivers these sentinels as regular events rather than signaling
/// end-of-stream. If we simply tried to parse them as JSON, we would log and skip them,
/// then keep polling for more events.
///
/// On servers that keep the HTTP connection open after emitting the sentinel (notably
/// wiremock on Windows), skipping the sentinel would mean we never emit
/// `ResponseEvent::Completed`. Higher-level workflows/tests that wait for completion
/// before issuing subsequent model calls would then stall, showing up as
/// "expected N requests, got 1" verification failures in the mock server. We therefore
/// match the sentinel explicitly, flush any pending items, and complete the stream.
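///
/// An end-of-stream exchange therefore looks like this (illustrative frames):
///
/// ```text
/// data: {"choices":[{"delta":{"content":"Hi"}}]}
///
/// data: [DONE]
/// ```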
pub async fn process_chat_sse<S>(
stream: S,
tx_event: mpsc::Sender<Result<ResponseEvent, ApiError>>,
idle_timeout: Duration,
telemetry: Option<std::sync::Arc<dyn SseTelemetry>>,
) where
S: Stream<Item = Result<bytes::Bytes, codex_client::TransportError>> + Unpin,
{
let mut stream = stream.eventsource();
#[derive(Default, Debug)]
struct ToolCallState {
id: Option<String>,
name: Option<String>,
arguments: String,
}
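// Bookkeeping for interleaved tool-call deltas: `tool_calls` holds per-index
// state, `tool_call_order` / `tool_call_order_seen` preserve first-seen emission
// order, and `tool_call_index_by_id` lets deltas that only carry an `id` find
// the slot they belong to.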
let mut tool_calls: HashMap<usize, ToolCallState> = HashMap::new();
let mut tool_call_order: Vec<usize> = Vec::new();
let mut tool_call_order_seen: HashSet<usize> = HashSet::new();
let mut tool_call_index_by_id: HashMap<String, usize> = HashMap::new();
let mut next_tool_call_index = 0usize;
let mut last_tool_call_index: Option<usize> = None;
let mut assistant_item: Option<ResponseItem> = None;
let mut reasoning_item: Option<ResponseItem> = None;
let mut completed_sent = false;
async fn flush_and_complete(
tx_event: &mpsc::Sender<Result<ResponseEvent, ApiError>>,
reasoning_item: &mut Option<ResponseItem>,
assistant_item: &mut Option<ResponseItem>,
) {
if let Some(reasoning) = reasoning_item.take() {
let _ = tx_event
.send(Ok(ResponseEvent::OutputItemDone(reasoning)))
.await;
}
if let Some(assistant) = assistant_item.take() {
let _ = tx_event
.send(Ok(ResponseEvent::OutputItemDone(assistant)))
.await;
}
let _ = tx_event
.send(Ok(ResponseEvent::Completed {
response_id: String::new(),
token_usage: None,
}))
.await;
}
loop {
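// Wait for the next SSE frame, bounded by the idle timeout, and report the
// poll latency to telemetry when a sink is installed.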
let start = Instant::now();
let response = timeout(idle_timeout, stream.next()).await;
if let Some(t) = telemetry.as_ref() {
t.on_sse_poll(&response, start.elapsed());
}
let sse = match response {
Ok(Some(Ok(sse))) => sse,
Ok(Some(Err(e))) => {
let _ = tx_event.send(Err(ApiError::Stream(e.to_string()))).await;
return;
}
Ok(None) => {
if !completed_sent {
flush_and_complete(&tx_event, &mut reasoning_item, &mut assistant_item).await;
}
return;
}
Err(_) => {
let _ = tx_event
.send(Err(ApiError::Stream("idle timeout waiting for SSE".into())))
.await;
return;
}
};
trace!("SSE event: {}", sse.data);
let data = sse.data.trim();
if data.is_empty() {
continue;
}
if data == "[DONE]" || data == "DONE" {
if !completed_sent {
flush_and_complete(&tx_event, &mut reasoning_item, &mut assistant_item).await;
}
return;
}
let value: serde_json::Value = match serde_json::from_str(data) {
Ok(val) => val,
Err(err) => {
debug!(
"Failed to parse ChatCompletions SSE event: {err}, data: {}",
data
);
continue;
}
};
let Some(choices) = value.get("choices").and_then(|c| c.as_array()) else {
continue;
};
for choice in choices {
if let Some(delta) = choice.get("delta") {
if let Some(reasoning) = delta.get("reasoning") {
if let Some(text) = reasoning.as_str() {
append_reasoning_text(&tx_event, &mut reasoning_item, text.to_string())
.await;
} else if let Some(text) = reasoning.get("text").and_then(|v| v.as_str()) {
append_reasoning_text(&tx_event, &mut reasoning_item, text.to_string())
.await;
} else if let Some(text) = reasoning.get("content").and_then(|v| v.as_str()) {
append_reasoning_text(&tx_event, &mut reasoning_item, text.to_string())
.await;
}
}
if let Some(content) = delta.get("content") {
if let Some(items) = content.as_array() {
for item in items {
if let Some(text) = item.get("text").and_then(|t| t.as_str()) {
append_assistant_text(
&tx_event,
&mut assistant_item,
text.to_string(),
)
.await;
}
}
} else if let Some(text) = content.as_str() {
append_assistant_text(&tx_event, &mut assistant_item, text.to_string())
.await;
}
}
if let Some(tool_call_values) = delta.get("tool_calls").and_then(|c| c.as_array()) {
for tool_call in tool_call_values {
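// Resolve which in-flight call this delta belongs to: an `id` seen on an
// earlier delta wins, then the delta's explicit `index`, then the most recent
// call; otherwise a fresh slot is allocated.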
let mut index = tool_call
.get("index")
.and_then(serde_json::Value::as_u64)
.map(|i| i as usize);
let mut call_id_for_lookup = None;
if let Some(call_id) = tool_call.get("id").and_then(|i| i.as_str()) {
call_id_for_lookup = Some(call_id.to_string());
if let Some(existing) = tool_call_index_by_id.get(call_id) {
index = Some(*existing);
}
}
if index.is_none() && call_id_for_lookup.is_none() {
index = last_tool_call_index;
}
let index = index.unwrap_or_else(|| {
while tool_calls.contains_key(&next_tool_call_index) {
next_tool_call_index += 1;
}
let idx = next_tool_call_index;
next_tool_call_index += 1;
idx
});
let call_state = tool_calls.entry(index).or_default();
if tool_call_order_seen.insert(index) {
tool_call_order.push(index);
}
if let Some(id) = tool_call.get("id").and_then(|i| i.as_str()) {
call_state.id.get_or_insert_with(|| id.to_string());
tool_call_index_by_id.entry(id.to_string()).or_insert(index);
}
if let Some(func) = tool_call.get("function") {
if let Some(fname) = func.get("name").and_then(|n| n.as_str())
&& !fname.is_empty()
{
call_state.name.get_or_insert_with(|| fname.to_string());
}
if let Some(arguments) = func.get("arguments").and_then(|a| a.as_str())
{
call_state.arguments.push_str(arguments);
}
}
last_tool_call_index = Some(index);
}
}
}
if let Some(message) = choice.get("message")
&& let Some(reasoning) = message.get("reasoning")
{
if let Some(text) = reasoning.as_str() {
append_reasoning_text(&tx_event, &mut reasoning_item, text.to_string()).await;
} else if let Some(text) = reasoning.get("text").and_then(|v| v.as_str()) {
append_reasoning_text(&tx_event, &mut reasoning_item, text.to_string()).await;
} else if let Some(text) = reasoning.get("content").and_then(|v| v.as_str()) {
append_reasoning_text(&tx_event, &mut reasoning_item, text.to_string()).await;
}
}
let finish_reason = choice.get("finish_reason").and_then(|r| r.as_str());
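// finish_reason mapping: "stop" flushes pending items and completes, "length"
// surfaces ApiError::ContextWindowExceeded, and "tool_calls" emits the
// accumulated calls in first-seen order.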
if finish_reason == Some("stop") {
if let Some(reasoning) = reasoning_item.take() {
let _ = tx_event
.send(Ok(ResponseEvent::OutputItemDone(reasoning)))
.await;
}
if let Some(assistant) = assistant_item.take() {
let _ = tx_event
.send(Ok(ResponseEvent::OutputItemDone(assistant)))
.await;
}
if !completed_sent {
let _ = tx_event
.send(Ok(ResponseEvent::Completed {
response_id: String::new(),
token_usage: None,
}))
.await;
completed_sent = true;
}
continue;
}
if finish_reason == Some("length") {
let _ = tx_event.send(Err(ApiError::ContextWindowExceeded)).await;
return;
}
if finish_reason == Some("tool_calls") {
if let Some(reasoning) = reasoning_item.take() {
let _ = tx_event
.send(Ok(ResponseEvent::OutputItemDone(reasoning)))
.await;
}
for index in tool_call_order.drain(..) {
let Some(state) = tool_calls.remove(&index) else {
continue;
};
tool_call_order_seen.remove(&index);
let ToolCallState {
id,
name,
arguments,
} = state;
let Some(name) = name else {
debug!("Skipping tool call at index {index} because name is missing");
continue;
};
let item = ResponseItem::FunctionCall {
id: None,
name,
arguments,
call_id: id.unwrap_or_else(|| format!("tool-call-{index}")),
};
let _ = tx_event.send(Ok(ResponseEvent::OutputItemDone(item))).await;
}
}
}
}
}
async fn append_assistant_text(
tx_event: &mpsc::Sender<Result<ResponseEvent, ApiError>>,
assistant_item: &mut Option<ResponseItem>,
text: String,
) {
if assistant_item.is_none() {
let item = ResponseItem::Message {
id: None,
role: "assistant".to_string(),
content: vec![],
end_turn: None,
phase: None,
};
*assistant_item = Some(item.clone());
let _ = tx_event
.send(Ok(ResponseEvent::OutputItemAdded(item)))
.await;
}
if let Some(ResponseItem::Message { content, .. }) = assistant_item {
content.push(ContentItem::OutputText { text: text.clone() });
let _ = tx_event
.send(Ok(ResponseEvent::OutputTextDelta(text.clone())))
.await;
}
}
async fn append_reasoning_text(
tx_event: &mpsc::Sender<Result<ResponseEvent, ApiError>>,
reasoning_item: &mut Option<ResponseItem>,
text: String,
) {
if reasoning_item.is_none() {
let item = ResponseItem::Reasoning {
id: String::new(),
summary: Vec::new(),
content: Some(vec![]),
encrypted_content: None,
};
*reasoning_item = Some(item.clone());
let _ = tx_event
.send(Ok(ResponseEvent::OutputItemAdded(item)))
.await;
}
if let Some(ResponseItem::Reasoning {
content: Some(content),
..
}) = reasoning_item
{
let content_index = content.len() as i64;
content.push(ReasoningItemContent::ReasoningText { text: text.clone() });
let _ = tx_event
.send(Ok(ResponseEvent::ReasoningContentDelta {
delta: text.clone(),
content_index,
}))
.await;
}
}
#[cfg(test)]
mod tests {
use super::*;
use assert_matches::assert_matches;
use codex_protocol::models::ResponseItem;
use futures::TryStreamExt;
use serde_json::json;
use tokio::sync::mpsc;
use tokio_util::io::ReaderStream;
fn build_body(events: &[serde_json::Value]) -> String {
let mut body = String::new();
for e in events {
body.push_str(&format!("event: message\ndata: {e}\n\n"));
}
body
}
/// Regression test: the stream should complete when we see a `[DONE]` sentinel.
///
/// This is important for tests/mocks that don't immediately close the underlying
/// connection after emitting the sentinel.
#[tokio::test]
async fn completes_on_done_sentinel_without_json() {
let events = collect_events("event: message\ndata: [DONE]\n\n").await;
assert_matches!(&events[..], [ResponseEvent::Completed { .. }]);
}
async fn collect_events(body: &str) -> Vec<ResponseEvent> {
let reader = ReaderStream::new(std::io::Cursor::new(body.to_string()))
.map_err(|err| codex_client::TransportError::Network(err.to_string()));
let (tx, mut rx) = mpsc::channel::<Result<ResponseEvent, ApiError>>(16);
tokio::spawn(process_chat_sse(
reader,
tx,
Duration::from_millis(1000),
None,
));
let mut out = Vec::new();
while let Some(ev) = rx.recv().await {
out.push(ev.expect("stream error"));
}
out
}
#[tokio::test]
async fn concatenates_tool_call_arguments_across_deltas() {
let delta_name = json!({
"choices": [{
"delta": {
"tool_calls": [{
"id": "call_a",
"index": 0,
"function": { "name": "do_a" }
}]
}
}]
});
let delta_args_1 = json!({
"choices": [{
"delta": {
"tool_calls": [{
"index": 0,
"function": { "arguments": "{ \"foo\":" }
}]
}
}]
});
let delta_args_2 = json!({
"choices": [{
"delta": {
"tool_calls": [{
"index": 0,
"function": { "arguments": "1}" }
}]
}
}]
});
let finish = json!({
"choices": [{
"finish_reason": "tool_calls"
}]
});
let body = build_body(&[delta_name, delta_args_1, delta_args_2, finish]);
let events = collect_events(&body).await;
assert_matches!(
&events[..],
[
ResponseEvent::OutputItemDone(ResponseItem::FunctionCall { call_id, name, arguments, .. }),
ResponseEvent::Completed { .. }
] if call_id == "call_a" && name == "do_a" && arguments == "{ \"foo\":1}"
);
}
#[tokio::test]
async fn emits_multiple_tool_calls() {
let delta_a = json!({
"choices": [{
"delta": {
"tool_calls": [{
"id": "call_a",
"function": { "name": "do_a", "arguments": "{\"foo\":1}" }
}]
}
}]
});
let delta_b = json!({
"choices": [{
"delta": {
"tool_calls": [{
"id": "call_b",
"function": { "name": "do_b", "arguments": "{\"bar\":2}" }
}]
}
}]
});
let finish = json!({
"choices": [{
"finish_reason": "tool_calls"
}]
});
let body = build_body(&[delta_a, delta_b, finish]);
let events = collect_events(&body).await;
assert_matches!(
&events[..],
[
ResponseEvent::OutputItemDone(ResponseItem::FunctionCall { call_id: call_a, name: name_a, arguments: args_a, .. }),
ResponseEvent::OutputItemDone(ResponseItem::FunctionCall { call_id: call_b, name: name_b, arguments: args_b, .. }),
ResponseEvent::Completed { .. }
] if call_a == "call_a" && name_a == "do_a" && args_a == "{\"foo\":1}" && call_b == "call_b" && name_b == "do_b" && args_b == "{\"bar\":2}"
);
}
#[tokio::test]
async fn emits_tool_calls_for_multiple_choices() {
let payload = json!({
"choices": [
{
"delta": {
"tool_calls": [{
"id": "call_a",
"index": 0,
"function": { "name": "do_a", "arguments": "{}" }
}]
},
"finish_reason": "tool_calls"
},
{
"delta": {
"tool_calls": [{
"id": "call_b",
"index": 0,
"function": { "name": "do_b", "arguments": "{}" }
}]
},
"finish_reason": "tool_calls"
}
]
});
let body = build_body(&[payload]);
let events = collect_events(&body).await;
assert_matches!(
&events[..],
[
ResponseEvent::OutputItemDone(ResponseItem::FunctionCall { call_id: call_a, name: name_a, arguments: args_a, .. }),
ResponseEvent::OutputItemDone(ResponseItem::FunctionCall { call_id: call_b, name: name_b, arguments: args_b, .. }),
ResponseEvent::Completed { .. }
] if call_a == "call_a" && name_a == "do_a" && args_a == "{}" && call_b == "call_b" && name_b == "do_b" && args_b == "{}"
);
}
#[tokio::test]
async fn merges_tool_calls_by_index_when_id_missing_on_subsequent_deltas() {
let delta_with_id = json!({
"choices": [{
"delta": {
"tool_calls": [{
"index": 0,
"id": "call_a",
"function": { "name": "do_a", "arguments": "{ \"foo\":" }
}]
}
}]
});
let delta_without_id = json!({
"choices": [{
"delta": {
"tool_calls": [{
"index": 0,
"function": { "arguments": "1}" }
}]
}
}]
});
let finish = json!({
"choices": [{
"finish_reason": "tool_calls"
}]
});
let body = build_body(&[delta_with_id, delta_without_id, finish]);
let events = collect_events(&body).await;
assert_matches!(
&events[..],
[
ResponseEvent::OutputItemDone(ResponseItem::FunctionCall { call_id, name, arguments, .. }),
ResponseEvent::Completed { .. }
] if call_id == "call_a" && name == "do_a" && arguments == "{ \"foo\":1}"
);
}
#[tokio::test]
async fn preserves_tool_call_name_when_empty_deltas_arrive() {
let delta_with_name = json!({
"choices": [{
"delta": {
"tool_calls": [{
"id": "call_a",
"function": { "name": "do_a" }
}]
}
}]
});
let delta_with_empty_name = json!({
"choices": [{
"delta": {
"tool_calls": [{
"id": "call_a",
"function": { "name": "", "arguments": "{}" }
}]
}
}]
});
let finish = json!({
"choices": [{
"finish_reason": "tool_calls"
}]
});
let body = build_body(&[delta_with_name, delta_with_empty_name, finish]);
let events = collect_events(&body).await;
assert_matches!(
&events[..],
[
ResponseEvent::OutputItemDone(ResponseItem::FunctionCall { name, arguments, .. }),
ResponseEvent::Completed { .. }
] if name == "do_a" && arguments == "{}"
);
}
#[tokio::test]
async fn emits_tool_calls_even_when_content_and_reasoning_present() {
let delta_content_and_tools = json!({
"choices": [{
"delta": {
"content": [{"text": "hi"}],
"reasoning": "because",
"tool_calls": [{
"id": "call_a",
"function": { "name": "do_a", "arguments": "{}" }
}]
}
}]
});
let finish = json!({
"choices": [{
"finish_reason": "tool_calls"
}]
});
let body = build_body(&[delta_content_and_tools, finish]);
let events = collect_events(&body).await;
assert_matches!(
&events[..],
[
ResponseEvent::OutputItemAdded(ResponseItem::Reasoning { .. }),
ResponseEvent::ReasoningContentDelta { .. },
ResponseEvent::OutputItemAdded(ResponseItem::Message { .. }),
ResponseEvent::OutputTextDelta(delta),
ResponseEvent::OutputItemDone(ResponseItem::Reasoning { .. }),
ResponseEvent::OutputItemDone(ResponseItem::FunctionCall { call_id, name, .. }),
ResponseEvent::OutputItemDone(ResponseItem::Message { .. }),
ResponseEvent::Completed { .. }
] if delta == "hi" && call_id == "call_a" && name == "do_a"
);
}
#[tokio::test]
async fn drops_partial_tool_calls_on_stop_finish_reason() {
let delta_tool = json!({
"choices": [{
"delta": {
"tool_calls": [{
"id": "call_a",
"function": { "name": "do_a", "arguments": "{}" }
}]
}
}]
});
let finish_stop = json!({
"choices": [{
"finish_reason": "stop"
}]
});
let body = build_body(&[delta_tool, finish_stop]);
let events = collect_events(&body).await;
assert!(!events.iter().any(|ev| {
matches!(
ev,
ResponseEvent::OutputItemDone(ResponseItem::FunctionCall { .. })
)
}));
assert_matches!(events.last(), Some(ResponseEvent::Completed { .. }));
}
}

@@ -1,4 +1,3 @@
pub mod chat;
pub mod responses;
pub use responses::process_sse;

@@ -6,11 +6,9 @@ use anyhow::Result;
use async_trait::async_trait;
use bytes::Bytes;
use codex_api::AuthProvider;
use codex_api::ChatClient;
use codex_api::Provider;
use codex_api::ResponsesClient;
use codex_api::ResponsesOptions;
use codex_api::WireApi;
use codex_api::requests::responses::Compression;
use codex_client::HttpTransport;
use codex_client::Request;
@@ -119,12 +117,11 @@ impl AuthProvider for StaticAuth {
}
}
fn provider(name: &str, wire: WireApi) -> Provider {
fn provider(name: &str) -> Provider {
Provider {
name: name.to_string(),
base_url: "https://example.com/v1".to_string(),
query_params: None,
wire,
headers: HeaderMap::new(),
retry: codex_api::provider::RetryConfig {
max_attempts: 1,
@@ -196,38 +193,10 @@ data: {"id":"resp-1","output":[{"type":"message","role":"assistant","content":[{
}
#[tokio::test]
async fn chat_client_uses_chat_completions_path_for_chat_wire() -> Result<()> {
async fn responses_client_uses_responses_path() -> Result<()> {
let state = RecordingState::default();
let transport = RecordingTransport::new(state.clone());
let client = ChatClient::new(transport, provider("openai", WireApi::Chat), NoAuth);
let body = serde_json::json!({ "echo": true });
let _stream = client.stream(body, HeaderMap::new()).await?;
let requests = state.take_stream_requests();
assert_path_ends_with(&requests, "/chat/completions");
Ok(())
}
#[tokio::test]
async fn chat_client_uses_responses_path_for_responses_wire() -> Result<()> {
let state = RecordingState::default();
let transport = RecordingTransport::new(state.clone());
let client = ChatClient::new(transport, provider("openai", WireApi::Responses), NoAuth);
let body = serde_json::json!({ "echo": true });
let _stream = client.stream(body, HeaderMap::new()).await?;
let requests = state.take_stream_requests();
assert_path_ends_with(&requests, "/responses");
Ok(())
}
#[tokio::test]
async fn responses_client_uses_responses_path_for_responses_wire() -> Result<()> {
let state = RecordingState::default();
let transport = RecordingTransport::new(state.clone());
let client = ResponsesClient::new(transport, provider("openai", WireApi::Responses), NoAuth);
let client = ResponsesClient::new(transport, provider("openai"), NoAuth);
let body = serde_json::json!({ "echo": true });
let _stream = client
@@ -239,28 +208,12 @@ async fn responses_client_uses_responses_path_for_responses_wire() -> Result<()>
Ok(())
}
#[tokio::test]
async fn responses_client_uses_chat_path_for_chat_wire() -> Result<()> {
let state = RecordingState::default();
let transport = RecordingTransport::new(state.clone());
let client = ResponsesClient::new(transport, provider("openai", WireApi::Chat), NoAuth);
let body = serde_json::json!({ "echo": true });
let _stream = client
.stream(body, HeaderMap::new(), Compression::None, None)
.await?;
let requests = state.take_stream_requests();
assert_path_ends_with(&requests, "/chat/completions");
Ok(())
}
#[tokio::test]
async fn streaming_client_adds_auth_headers() -> Result<()> {
let state = RecordingState::default();
let transport = RecordingTransport::new(state.clone());
let auth = StaticAuth::new("secret-token", "acct-1");
let client = ResponsesClient::new(transport, provider("openai", WireApi::Responses), auth);
let client = ResponsesClient::new(transport, provider("openai"), auth);
let body = serde_json::json!({ "model": "gpt-test" });
let _stream = client
@@ -295,7 +248,7 @@ async fn streaming_client_adds_auth_headers() -> Result<()> {
async fn streaming_client_retries_on_transport_error() -> Result<()> {
let transport = FlakyTransport::new();
let mut provider = provider("openai", WireApi::Responses);
let mut provider = provider("openai");
provider.retry.max_attempts = 2;
let client = ResponsesClient::new(transport.clone(), provider, NoAuth);

@@ -2,7 +2,6 @@ use codex_api::AuthProvider;
use codex_api::ModelsClient;
use codex_api::provider::Provider;
use codex_api::provider::RetryConfig;
use codex_api::provider::WireApi;
use codex_client::ReqwestTransport;
use codex_protocol::openai_models::ConfigShellToolType;
use codex_protocol::openai_models::ModelInfo;
@@ -34,7 +33,6 @@ fn provider(base_url: &str) -> Provider {
name: "test".to_string(),
base_url: base_url.to_string(),
query_params: None,
wire: WireApi::Responses,
headers: HeaderMap::new(),
retry: RetryConfig {
max_attempts: 1,

@@ -8,7 +8,6 @@ use codex_api::AuthProvider;
use codex_api::Provider;
use codex_api::ResponseEvent;
use codex_api::ResponsesClient;
use codex_api::WireApi;
use codex_api::requests::responses::Compression;
use codex_client::HttpTransport;
use codex_client::Request;
@@ -61,12 +60,11 @@ impl AuthProvider for NoAuth {
}
}
fn provider(name: &str, wire: WireApi) -> Provider {
fn provider(name: &str) -> Provider {
Provider {
name: name.to_string(),
base_url: "https://example.com/v1".to_string(),
query_params: None,
wire,
headers: HeaderMap::new(),
retry: codex_api::provider::RetryConfig {
max_attempts: 1,
@@ -122,7 +120,7 @@ async fn responses_stream_parses_items_and_completed_end_to_end() -> Result<()>
let body = build_responses_body(vec![item1, item2, completed]);
let transport = FixtureSseTransport::new(body);
let client = ResponsesClient::new(transport, provider("openai", WireApi::Responses), NoAuth);
let client = ResponsesClient::new(transport, provider("openai"), NoAuth);
let mut stream = client
.stream(
@@ -192,7 +190,7 @@ async fn responses_stream_aggregates_output_text_deltas() -> Result<()> {
let body = build_responses_body(vec![delta1, delta2, completed]);
let transport = FixtureSseTransport::new(body);
let client = ResponsesClient::new(transport, provider("openai", WireApi::Responses), NoAuth);
let client = ResponsesClient::new(transport, provider("openai"), NoAuth);
let stream = client
.stream(

@@ -1,52 +1,18 @@
//! OSS provider utilities shared between TUI and exec.
use codex_core::LMSTUDIO_OSS_PROVIDER_ID;
use codex_core::OLLAMA_CHAT_PROVIDER_ID;
use codex_core::OLLAMA_OSS_PROVIDER_ID;
use codex_core::WireApi;
use codex_core::config::Config;
use codex_core::protocol::DeprecationNoticeEvent;
use std::io;
/// Returns the default model for a given OSS provider.
pub fn get_default_model_for_oss_provider(provider_id: &str) -> Option<&'static str> {
match provider_id {
LMSTUDIO_OSS_PROVIDER_ID => Some(codex_lmstudio::DEFAULT_OSS_MODEL),
OLLAMA_OSS_PROVIDER_ID | OLLAMA_CHAT_PROVIDER_ID => Some(codex_ollama::DEFAULT_OSS_MODEL),
OLLAMA_OSS_PROVIDER_ID => Some(codex_ollama::DEFAULT_OSS_MODEL),
_ => None,
}
}
/// Returns a deprecation notice if Ollama doesn't support the responses wire API.
pub async fn ollama_chat_deprecation_notice(
config: &Config,
) -> io::Result<Option<DeprecationNoticeEvent>> {
if config.model_provider_id != OLLAMA_OSS_PROVIDER_ID
|| config.model_provider.wire_api != WireApi::Responses
{
return Ok(None);
}
if let Some(detection) = codex_ollama::detect_wire_api(&config.model_provider).await?
&& detection.wire_api == WireApi::Chat
{
let version_suffix = detection
.version
.as_ref()
.map(|version| format!(" (version {version})"))
.unwrap_or_default();
let summary = format!(
"Your Ollama server{version_suffix} doesn't support the Responses API. Either update Ollama or set `oss_provider = \"{OLLAMA_CHAT_PROVIDER_ID}\"` (or `model_provider = \"{OLLAMA_CHAT_PROVIDER_ID}\"`) in your config.toml to use the \"chat\" wire API. Support for the \"chat\" wire API is deprecated and will soon be removed."
);
return Ok(Some(DeprecationNoticeEvent {
summary,
details: None,
}));
}
Ok(None)
}
/// Ensures the specified OSS provider is ready (models downloaded, service reachable).
pub async fn ensure_oss_provider_ready(
provider_id: &str,
@@ -58,7 +24,8 @@ pub async fn ensure_oss_provider_ready(
.await
.map_err(|e| std::io::Error::other(format!("OSS setup failed: {e}")))?;
}
OLLAMA_OSS_PROVIDER_ID | OLLAMA_CHAT_PROVIDER_ID => {
OLLAMA_OSS_PROVIDER_ID => {
codex_ollama::ensure_responses_supported(&config.model_provider).await?;
codex_ollama::ensure_oss_ready(config)
.await
.map_err(|e| std::io::Error::other(format!("OSS setup failed: {e}")))?;

Some files were not shown because too many files have changed in this diff.