Compare commits

...

27 Commits

Author SHA1 Message Date
gt-oai
1150449df2 Merge branch 'main' into gt/default-val 2026-02-03 15:47:40 +00:00
jif-oai
ed778f9017 Avoid redundant transactional check before inserting dynamic tools (#10521)
Summary
- remove the extra transaction guard that checked for existing dynamic
tools per thread before inserting new ones
- insert each tool record with `ON CONFLICT(thread_id, position) DO
NOTHING` to ignore duplicates instead of pre-querying
- simplify execution to use the shared pool directly and avoid unneeded
commits
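
A minimal sketch of the duplicate-tolerant insert described above, assuming a SQLite-backed `sqlx` pool and a placeholder `dynamic_tools` table (not the actual Codex schema):

```rust
use sqlx::SqlitePool;

/// Insert a dynamic tool record, silently skipping rows that already exist
/// for the same (thread_id, position) pair instead of pre-querying for them.
async fn insert_dynamic_tool(
    pool: &SqlitePool,
    thread_id: &str,
    position: i64,
    tool_json: &str,
) -> sqlx::Result<()> {
    sqlx::query(
        "INSERT INTO dynamic_tools (thread_id, position, tool_json)
         VALUES (?1, ?2, ?3)
         ON CONFLICT(thread_id, position) DO NOTHING",
    )
    .bind(thread_id)
    .bind(position)
    .bind(tool_json)
    .execute(pool) // shared pool, no explicit transaction or commit
    .await?;
    Ok(())
}
```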

Testing
- Not run (not requested)
2026-02-03 15:34:28 +00:00
gt-oai
944541e936 Add more detail to 401 error (#10508)
Include `error.message` if it exists, otherwise the response body, truncated to 1k characters. Also print the `cf-ray` header and the `requestId`.
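
As a rough illustration (not the code in this PR), the detail selection could look like the following; the 1k cap and header names follow the description above, everything else is assumed:

```rust
/// Build the detail string for a 401 response: prefer `error.message`,
/// otherwise fall back to the (truncated) response body, and include the
/// `cf-ray` and request id values for support correlation.
fn unauthorized_detail(
    error_message: Option<&str>,
    body: &str,
    cf_ray: Option<&str>,
    request_id: Option<&str>,
) -> String {
    const MAX_BODY: usize = 1024; // truncate body to 1k characters

    let detail = match error_message {
        Some(msg) => msg.to_string(),
        None => body.chars().take(MAX_BODY).collect(),
    };

    format!(
        "401 Unauthorized: {detail} (cf-ray: {}, request-id: {})",
        cf_ray.unwrap_or("unknown"),
        request_id.unwrap_or("unknown"),
    )
}
```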

**Before:**
<img width="860" height="305" alt="Screenshot 2026-02-03 at 13 15 28"
src="https://github.com/user-attachments/assets/949d5a4d-2b51-488c-a723-c6deffde0353"
/>

**After:**
<img width="1523" height="373" alt="Screenshot 2026-02-03 at 13 15 38"
src="https://github.com/user-attachments/assets/f96a747e-e596-4a7a-aae9-64210d805b26"
/>
2026-02-03 14:58:33 +00:00
gt-oai
c312b807d2 Use default value in constraint 2026-02-03 14:23:16 +00:00
jif-oai
d5e7248958 feat: clean codex-api part 1 (#10501) 2026-02-03 14:08:09 +00:00
jif-oai
88598b9402 feat: drop wire_api from clients (#10498) 2026-02-03 12:43:09 +00:00
jif-oai
d2394a2494 chore: nuke chat/completions API (#10157) 2026-02-03 11:31:57 +00:00
viyatb-oai
9257d8451c feat(secrets): add codex-secrets crate (#10142)
## Summary
This introduces the first working foundation for Codex managed secrets:
a small Rust crate that can securely store and retrieve secrets locally.

Concretely, it adds a `codex-secrets` crate that:
- encrypts a local secrets file using `age`
- generates a high-entropy encryption key
- stores that key in the OS keyring

## What this enables
- A secure local persistence model for secrets
- A clean, isolated place for future provider backends
- A clear boundary: Codex can become a credential broker without putting
plaintext secrets in config files

## Implementation details
- New crate: `codex-rs/secrets/`
- Encryption: `age` with scrypt recipient/identity
- Key generation: `OsRng` (32 random bytes)
- Key storage: OS keyring via `codex-keyring-store`
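
A minimal sketch of the key-generation and keyring-storage steps, using the generic `keyring` crate as a stand-in for `codex-keyring-store` and leaving the `age` scrypt encryption out; names here are illustrative only:

```rust
use rand::rngs::OsRng;
use rand::RngCore;

/// Generate a 32-byte, high-entropy key and store it (hex-encoded) in the
/// OS keyring so the encrypted secrets file never sits next to its key.
fn create_and_store_key(service: &str, user: &str) -> anyhow::Result<[u8; 32]> {
    let mut key = [0u8; 32];
    OsRng.fill_bytes(&mut key);

    let encoded = key.iter().map(|b| format!("{b:02x}")).collect::<String>();
    let entry = keyring::Entry::new(service, user)?;
    entry.set_password(&encoded)?;

    Ok(key)
}
```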

## Testing
- `cd codex-rs && just fmt`
- `cd codex-rs && cargo test -p codex-secrets`
2026-02-03 08:14:39 +00:00
viyatb-oai
f956cc2a02 feat(linux-sandbox): vendor bubblewrap and wire it with FFI (#10413)
## Summary

Vendor Bubblewrap into the repo and add minimal build plumbing in
`codex-linux-sandbox` to compile/link it.

## Why

We want to move Linux sandboxing toward Bubblewrap, but in a safe
two-step rollout:
1) vendoring/build setup (this PR),  
2) runtime integration (follow-up PR).

## Included

- Add `codex-rs/vendor/bubblewrap` sources.
- Add build-time FFI path in `codex-rs/linux-sandbox`.
- Update `build.rs` rerun tracking for vendored files.
- Small vendored compile warning fix (`sockaddr_nl` full init).
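
For reference, build plumbing of this shape usually boils down to something like the sketch below, using the `cc` crate plus `cargo:rerun-if-changed`; the paths and file list are placeholders, not the actual vendored layout:

```rust
// build.rs (sketch)
fn main() {
    // Recompile when any vendored bubblewrap source changes.
    println!("cargo:rerun-if-changed=../vendor/bubblewrap");

    // Compile the vendored C sources and link them into the crate so the
    // Rust side can call into bubblewrap over FFI.
    cc::Build::new()
        .file("../vendor/bubblewrap/bubblewrap.c")
        .include("../vendor/bubblewrap")
        .compile("bubblewrap");
}
```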

follow up in https://github.com/openai/codex/pull/9938
2026-02-02 23:33:46 -08:00
pakrym-oai
53d8474061 Ignore remote_compact_trims_function_call_history_to_fit_context_window on windows (#10474) 2026-02-02 22:47:38 -08:00
sayan-oai
59707da857 fix: clarify deprecation message for features.web_search (#10406)
Clarify in the deprecation CTA that the new `web_search` setting is not a feature flag under `[features]`.
2026-02-02 21:17:01 -08:00
pakrym-oai
bf87468c2b Restore status after preamble (#10465) 2026-02-02 20:35:50 -08:00
Eric Traut
8b280367b1 Updated bug and feature templates (#10453)
The current bug template uses CLI-specific instructions for getting the
version.

The current feature template doesn't ask the user to provide the Codex
variant (surface) they are using.

This PR addresses these problems.
2026-02-02 20:08:08 -08:00
pakrym-oai
cbfd2a37cc Trim compaction input (#10374)
Two fixes:

1. Include trailing tool output in the total context size calculation. Otherwise, when checking whether compaction should run, we ignore newly added outputs.
2. Trim trailing tool output/tool calls until the request fits the model context size; otherwise the compaction endpoint will fail to compact. We only trim items that the model can reproduce again (tool calls, tool call outputs); see the sketch below.
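
A conceptual sketch of the trimming in point 2, with toy types and a placeholder token estimate standing in for the real ones:

```rust
// Sketch types; the real items live in codex-protocol.
enum Item {
    Message { text: String },
    ToolCall { args: String },
    ToolCallOutput { output: String },
}

/// Items the model can reproduce on its own, so they are safe to trim.
fn is_reproducible(item: &Item) -> bool {
    matches!(item, Item::ToolCall { .. } | Item::ToolCallOutput { .. })
}

/// Very rough token estimate (placeholder: ~4 bytes per token), including
/// any trailing tool output, per fix 1 above.
fn estimated_tokens(items: &[Item]) -> usize {
    items
        .iter()
        .map(|i| match i {
            Item::Message { text } => text.len(),
            Item::ToolCall { args } => args.len(),
            Item::ToolCallOutput { output } => output.len(),
        })
        .sum::<usize>()
        / 4
}

/// Trim trailing reproducible items until the request fits the context window.
fn trim_to_fit(items: &mut Vec<Item>, context_window_tokens: usize) {
    while estimated_tokens(items) > context_window_tokens {
        match items.last() {
            Some(last) if is_reproducible(last) => {
                items.pop();
            }
            _ => break, // only reproducible items may be dropped
        }
    }
}
```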
2026-02-02 19:03:11 -08:00
Colin Young
7e07ec8f73 [Codex][CLI] Gate image inputs by model modalities (#10271)
###### Summary

- Add input_modalities to model metadata so clients can determine
supported input types.
- Gate image paste/attach in TUI when the selected model does not
support images.
- Block submits that include images for unsupported models and show a
clear warning.
- Propagate modality metadata through app-server protocol/model-list
responses.
- Update related tests/fixtures.

###### Rationale

- Models support different input modalities.
- Clients need an explicit capability signal to prevent unsupported requests.
- Backward-compatible defaults preserve existing behavior when modality metadata is absent.

###### Scope

- codex-rs/protocol, codex-rs/core, codex-rs/tui
- codex-rs/app-server-protocol, codex-rs/app-server
- Generated app-server types / schema fixtures

###### Trade-offs

- Default behavior assumes text + image when the field is absent, for compatibility.
- Server-side validation remains the source of truth.

###### Follow-up

- Non-TUI clients should consume input_modalities to disable unsupported attachments.
- Model catalogs should explicitly set input_modalities for text-only models.

###### Testing

- cargo fmt --all
- cargo test -p codex-tui
- env -u GITHUB_APP_KEY cargo test -p codex-core --lib
- just write-app-server-schema
- cargo run -p codex-cli --bin codex -- app-server generate-ts --out app-server-types
- test against local backend
  
<img width="695" height="199" alt="image"
src="https://github.com/user-attachments/assets/d22dd04f-5eba-4db9-a7c5-a2506f60ec44"
/>
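
A minimal sketch of the gating rule with assumed type names; note the backward-compatible default (text + image) when no modality metadata is present:

```rust
#[derive(PartialEq)]
enum InputModality {
    Text,
    Image,
}

/// Absent metadata defaults to text + image so existing models keep working.
fn supports_image_input(input_modalities: Option<&[InputModality]>) -> bool {
    match input_modalities {
        None => true,
        Some(modalities) => modalities.contains(&InputModality::Image),
    }
}

/// Gate a submit that includes image attachments.
fn check_submit(
    has_images: bool,
    input_modalities: Option<&[InputModality]>,
) -> Result<(), String> {
    if has_images && !supports_image_input(input_modalities) {
        return Err("The selected model does not support image input.".to_string());
    }
    Ok(())
}
```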

---------

Co-authored-by: Josh McKinney <joshka@openai.com>
2026-02-02 18:56:39 -08:00
Ahmed Ibrahim
b8addcddb9 Require models refresh on cli version mismatch (#10414) 2026-02-02 18:55:25 -08:00
sayan-oai
fc05374344 chore: add phase to message responseitem (#10455)
### What

Add wiring for a `phase` field on `ResponseItem::Message` to lay the groundwork for differentiating model preambles from final messages. The field is currently optional.

Follows the pattern in #9698.

Updated schemas with `just write-app-server-schema` so the type changes are visible.
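
Going by the schema diff in this stack, the wiring has roughly this shape (a sketch; the serde attributes and exact field placement are assumptions):

```rust
use serde::{Deserialize, Serialize};

/// Distinguishes model preambles from the final answer.
#[derive(Serialize, Deserialize, Clone, Copy, Debug)]
#[serde(rename_all = "snake_case")]
enum MessagePhase {
    Commentary,
    FinalAnswer,
}

/// Simplified stand-in for `ResponseItem::Message` showing the new field.
#[derive(Serialize, Deserialize, Debug)]
struct Message {
    role: String,
    content: Vec<String>,
    /// Optional for now; absent in older payloads.
    #[serde(skip_serializing_if = "Option::is_none")]
    phase: Option<MessagePhase>,
}
```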

### Tests
Updated existing tests for SSE parsing and hydrating from history
2026-02-03 02:52:26 +00:00
Ahmed Ibrahim
0999fd82b9 app tool tip (#10454)
2026-02-03 02:37:01 +00:00
Michael Bolin
891ed87409 chore: remove deprecated mcp-types crate (#10357)
https://github.com/openai/codex/pull/10349 migrated us off of
`mcp-types`, so this PR deletes the code.

---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/10357).
* __->__ #10357
* #10349
* #10356
2026-02-03 02:33:16 +00:00
Ahmed Ibrahim
97ff090104 Hide short worked-for label in final separator (#10452)
- Hide the "Worked for" label in the final message separator unless
elapsed time is over one minute.\n- Update/add tests to cover both
hidden (<60s) and shown (>=61s) behavior.
2026-02-03 02:29:20 +00:00
Eric Traut
8dd41e229b Fixed sandbox mode inconsistency if untrusted is selected (#10415)
This PR addresses #10395

When a user is asked to pick the trust level of a project, the code
currently reloads the config if they select "trusted". It doesn't reload
the config in the "untrusted" case but should. This causes the sandbox
mode to be reported incorrectly in `/status` during the first run (it's
displayed as `read-only` even though it acts as though it's
`workspace-write`).
2026-02-03 02:00:35 +00:00
Michael Bolin
66447d5d2c feat: replace custom mcp-types crate with equivalents from rmcp (#10349)
We started working with MCP in Codex before
https://crates.io/crates/rmcp was mature, so we had our own crate for
MCP types that was generated from the MCP schema:


8b95d3e082/codex-rs/mcp-types/README.md

Now that `rmcp` is more mature, it makes more sense to use their MCP
types in Rust, as they handle details (like the `_meta` field) that our
custom version ignored. One advantage our custom types did have is that they implemented `JsonSchema` and `ts_rs::TS`, whereas the types in `rmcp` do not. As such, part of the work in this PR is leveraging the adapters, introduced in #10356, between the `rmcp` types and the serializable
types that form our API surface (app server and MCP).

Note this PR results in a number of changes to
`codex-rs/app-server-protocol/schema`, which merit special attention
during review. We must ensure that these changes are still
backwards-compatible, which is possible because we have:

```diff
- export type CallToolResult = { content: Array<ContentBlock>, isError?: boolean, structuredContent?: JsonValue, };
+ export type CallToolResult = { content: Array<JsonValue>, structuredContent?: JsonValue, isError?: boolean, _meta?: JsonValue, };
```

so `ContentBlock` has been replaced with the more general `JsonValue`.
Note that `ContentBlock` was defined as:

```typescript
export type ContentBlock = TextContent | ImageContent | AudioContent | ResourceLink | EmbeddedResource;
```

so the deletion of those individual variants should not be a cause of
great concern.

Similarly, we have the following change in
`codex-rs/app-server-protocol/schema/typescript/Tool.ts`:

```
- export type Tool = { annotations?: ToolAnnotations, description?: string, inputSchema: ToolInputSchema, name: string, outputSchema?: ToolOutputSchema, title?: string, };
+ export type Tool = { name: string, title?: string, description?: string, inputSchema: JsonValue, outputSchema?: JsonValue, annotations?: JsonValue, icons?: Array<JsonValue>, _meta?: JsonValue, };
```

so:

- `annotations?: ToolAnnotations` ➡️ `JsonValue`
- `inputSchema: ToolInputSchema` ➡️ `JsonValue`
- `outputSchema?: ToolOutputSchema` ➡️ `JsonValue`

and two new fields: `icons?: Array<JsonValue>, _meta?: JsonValue`

---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/10349).
* #10357
* __->__ #10349
* #10356
2026-02-02 17:41:55 -08:00
Charley Cunningham
8f5edddf71 TUI: Render request_user_input results in history and simplify interrupt handling (#10064)
## Summary
This PR improves the TUI experience for `request_user_input` by
rendering submitted question/answer sets directly in conversation
history with clear, structured formatting.

It also intentionally simplifies interrupt behavior for now: on `Esc` /
`Ctrl+C`, the questions overlay interrupts the turn without attempting
to submit partial answers.

<img width="1344" height="573" alt="Screenshot 2026-02-02 at 4 51 40 PM"
src="https://github.com/user-attachments/assets/ff752131-7060-44c1-9ded-af061969a533"
/>

## Scope
- TUI-only changes.
- No core/protocol/app-server behavior changes in this PR.
- Resume reconstruction of interrupted question sets is out of scope for
this PR.

## What Changed
- Added a new history cell: `RequestUserInputResultCell` in
`codex-rs/tui/src/history_cell.rs`.
- On normal `request_user_input` submission, TUI now inserts that
history cell immediately after sending `Op::UserInputAnswer`.
- Rendering includes a `Questions` header with `answered/total` count.
- Rendering shows each question as a bullet item.
- Rendering styles submitted answer lines in cyan.
- Rendering styles notes (for option questions) as `note:` lines in
cyan.
- Rendering styles freeform text (for no-option questions) as `answer:`
lines in cyan.
- Rendering dims only the `(unanswered)` suffix.
- Rendering can include an interrupted suffix and summary text when the
cell is marked interrupted.
- Rendering redacts secret questions as `••••••` instead of showing raw
values.
- Added `wrap_with_prefix(...)` in `history_cell.rs` for wrapped
prefixed lines.
- Added `split_request_user_input_answer(...)` in `history_cell.rs` for
decoding `"user_note: ..."` entries.

## Interrupt Behavior (Intentional for this PR)
- `Esc` / `Ctrl+C` in the questions overlay now performs `Op::Interrupt`
and exits the overlay.
- It does **not** submit partial/committed answers on interrupt.
- Added TODO comments in `request_user_input` overlay interrupt paths
indicating where interrupted partial result emission should be
reintroduced once core support is finalized.
- Queued `request_user_input` overlays are discarded on interrupt in the
current behavior.

## Tests Updated
- Updated/added overlay tests in
`codex-rs/tui/src/bottom_pane/request_user_input/mod.rs` to reflect
interrupt-only behavior.
- Added helper assertion for interrupt-only event expectation.
- Existing submission-path tests now validate history insertion behavior
and expected answer maps.

## Behavior Notes
- Completed question flows now produce a readable `Questions` block in
transcript history.
- Interrupted flows currently do not persist partial answers to
model-visible tool output.

## Follow-ups
- Reintroduce partial-answer-on-interrupt semantics once core can
persist/sequence interrupted `request_user_input` outputs safely.
- Optionally add replay/resume rendering for interrupted question sets
as a separate PR.

## Codex author
`codex fork 019bfb8d-2a65-7313-9be2-ea7100d19a61`
2026-02-02 17:41:30 -08:00
Charley Cunningham
1096d6453c Fix plan implementation prompt reappearing after /agent thread switch (#10447)
## Summary

This fixes a UX bug (https://github.com/openai/codex/issues/10442) where
the **"Implement this plan?"** prompt could reappear after switching
agents with `/agent` and then switching back to the original agent
during plan execution.

## Root Cause

On thread switch, the TUI rebuilds `ChatWidget`, replays buffered thread
events, then drains any queued live events.

In this flow, a `TurnComplete` can be handled twice for the same logical
turn:
1. replayed (`from_replay = true`)
2. then live (`from_replay = false`)

`ChatWidget` used `saw_plan_item_this_turn` to decide whether to show
the plan implementation prompt, but that flag was only reset on
`TurnStarted`.
If duplicate completion events occurred, stale `saw_plan_item_this_turn
= true` could cause the prompt to re-trigger unexpectedly.

## Fix

- Clear `saw_plan_item_this_turn` at the end of `on_task_complete`,
after prompt gating runs.
- This keeps the flag truly turn-scoped and prevents duplicate
`TurnComplete` handling from reopening the prompt.
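
In outline, the fix comes down to where a single reset lives (a sketch with assumed surrounding code):

```rust
struct ChatWidget {
    saw_plan_item_this_turn: bool,
}

impl ChatWidget {
    /// Handle a `TurnComplete`, which may arrive twice (replayed, then live).
    fn on_task_complete(&mut self) {
        if self.saw_plan_item_this_turn {
            self.maybe_show_implement_plan_prompt();
        }
        // Clear after prompt gating so the flag stays strictly turn-scoped
        // and a duplicate completion event cannot reopen the prompt.
        self.saw_plan_item_this_turn = false;
    }

    fn maybe_show_implement_plan_prompt(&mut self) {
        // Prompt-gating details omitted in this sketch.
    }
}
```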
2026-02-02 17:40:05 -08:00
Ahmed Ibrahim
d02db8b43d Add codex app macOS launcher (#10418)
- Add `codex app <path>` to launch the Codex Desktop app.
- On macOS, it auto-downloads the DMG if missing; on other platforms it prints a link to chatgpt.com/codex.
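
A sketch of the platform branching; the install check and launch command are assumptions, not the actual implementation:

```rust
fn launch_codex_app(path: &str) -> anyhow::Result<()> {
    if cfg!(target_os = "macos") {
        // Assumed helper: fetch and mount the DMG when the app is missing.
        ensure_desktop_app_installed()?;
        std::process::Command::new("open")
            .arg("-a")
            .arg("Codex") // assumed app name
            .arg(path)
            .status()?;
    } else {
        println!("Codex Desktop is packaged for macOS only; see https://chatgpt.com/codex");
    }
    Ok(())
}

fn ensure_desktop_app_installed() -> anyhow::Result<()> {
    // Download/mount logic omitted in this sketch.
    Ok(())
}
```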
2026-02-02 17:37:04 -08:00
pash-openai
019d89ff86 make codex better at git (#10145)
Adds basic git context to the session prefix so the model can anchor git actions and be a bit more version-aware. The context is structured in a multiroot-friendly shape even though we only have one root today.
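
Roughly, the prefix could carry a structure like the one below; the field names are assumptions based on this description, not the actual shape:

```rust
/// Git context attached to the session prefix, shaped for multiple
/// workspace roots even though only one root exists today.
struct GitContext {
    roots: Vec<GitRootContext>,
}

struct GitRootContext {
    root_path: String,
    current_branch: Option<String>,
    head_commit: Option<String>,
}
```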
2026-02-02 16:57:29 -08:00
Gav Verma
e24058b7a8 feat: Read personal skills from .agents/skills (#10437)
- Issue: https://github.com/agentskills/agentskills/issues/15
- Follow-up to https://github.com/openai/codex/pull/10317 (for team/repo
skills)
- This change now also loads personal/user skills from `$HOME/.agents/skills` (or `~/.agents/skills`), in addition to loading from `.agents/skills` inside git repos.
- The location of `.system` skills remains unchanged.
- Backwards compatibility with `~/.codex/skills` is kept for now until we fully deprecate it (see the sketch below).
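
A sketch of the resulting personal lookup locations (helper name is illustrative):

```rust
use std::path::{Path, PathBuf};

/// Personal skill directories scanned in addition to `.agents/skills`
/// inside each git repo: the new location plus the legacy one kept for
/// backwards compatibility.
fn personal_skill_dirs(home: &Path) -> Vec<PathBuf> {
    vec![
        home.join(".agents").join("skills"), // new personal/user skills
        home.join(".codex").join("skills"),  // legacy, until fully deprecated
    ]
}
```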

With skills in both personal folders:
<img width="831" height="421" alt="image"
src="https://github.com/user-attachments/assets/ad8ac918-bfe6-4a2d-8a8e-d608c9d3d701"
/>

We load from both places:
<img width="607" height="236" alt="image"
src="https://github.com/user-attachments/assets/480f4db0-ae64-4dc1-bdf5-c5de98c16f5c"
/>
2026-02-02 16:49:23 -08:00
275 changed files with 17308 additions and 19006 deletions

View File

@@ -19,7 +19,7 @@ body:
id: version
attributes:
label: What version of Codex is running?
description: Copy the output of `codex --version`
description: (App) look in "About Codex" dialog; (CLI) use `codex --version`; (IDE Extension) look in extensions panel.
validations:
required: true
- type: input

View File

@@ -12,6 +12,13 @@ body:
1. Search existing issues for similar features. If you find one, 👍 it rather than opening a new one.
2. The Codex team will try to balance the varying needs of the community when prioritizing or rejecting new features. Not all features will be accepted. See [Contributing](https://github.com/openai/codex#contributing) for more details.
- type: input
id: variant
attributes:
label: What variant of Codex are you using?
description: (e.g., CLI, App, IDE Extension, Web)
validations:
required: true
- type: textarea
id: feature
attributes:

View File

@@ -64,8 +64,6 @@ jobs:
components: rustfmt
- name: cargo fmt
run: cargo fmt -- --config imports_granularity=Item --check
- name: Verify codegen for mcp-types
run: ./mcp-types/check_lib_rs.py
cargo_shear:
name: cargo shear

codex-rs/Cargo.lock (generated, 571 changed lines)

File diff suppressed because it is too large.

View File

@@ -17,6 +17,7 @@ members = [
"cli",
"common",
"core",
"secrets",
"exec",
"exec-server",
"execpolicy",
@@ -27,7 +28,6 @@ members = [
"lmstudio",
"login",
"mcp-server",
"mcp-types",
"network-proxy",
"ollama",
"process-hardening",
@@ -80,6 +80,7 @@ codex-cli = { path = "cli"}
codex-client = { path = "codex-client" }
codex-common = { path = "common" }
codex-core = { path = "core" }
codex-secrets = { path = "secrets" }
codex-exec = { path = "exec" }
codex-execpolicy = { path = "execpolicy" }
codex-experimental-api-macros = { path = "codex-experimental-api-macros" }
@@ -112,10 +113,10 @@ codex-utils-string = { path = "utils/string" }
codex-windows-sandbox = { path = "windows-sandbox-rs" }
core_test_support = { path = "core/tests/common" }
exec_server_test_support = { path = "exec-server/tests/common" }
mcp-types = { path = "mcp-types" }
mcp_test_support = { path = "mcp-server/tests/common" }
# External
age = "0.11.1"
allocative = "0.3.3"
ansi-to-tui = "7.0.0"
anyhow = "1"
@@ -293,7 +294,7 @@ unwrap_used = "deny"
# cargo-shear cannot see the platform-specific openssl-sys usage, so we
# silence the false positive here instead of deleting a real dependency.
[workspace.metadata.cargo-shear]
ignored = ["icu_provider", "openssl-sys", "codex-utils-readiness"]
ignored = ["icu_provider", "openssl-sys", "codex-utils-readiness", "codex-secrets"]
[profile.release]
lto = "fat"

View File

@@ -17,7 +17,6 @@ clap = { workspace = true, features = ["derive"] }
codex-protocol = { workspace = true }
codex-experimental-api-macros = { workspace = true }
codex-utils-absolute-path = { workspace = true }
mcp-types = { workspace = true }
schemars = { workspace = true }
serde = { workspace = true, features = ["derive"] }
serde_json = { workspace = true }

View File

@@ -526,7 +526,7 @@
]
},
"FunctionCallOutputPayload": {
"description": "The payload we send back to OpenAI when reporting a tool call result.\n\n`content` preserves the historical plain-string payload so downstream integrations (tests, logging, etc.) can keep treating tool output as `String`. When an MCP server returns richer data we additionally populate `content_items` with the structured form that the Responses/Chat Completions APIs understand.",
"description": "The payload we send back to OpenAI when reporting a tool call result.\n\n`content` preserves the historical plain-string payload so downstream integrations (tests, logging, etc.) can keep treating tool output as `String`. When an MCP server returns richer data we additionally populate `content_items` with the structured form that the Responses API understands.",
"properties": {
"content": {
"type": "string"
@@ -1038,6 +1038,13 @@
],
"type": "string"
},
"MessagePhase": {
"enum": [
"commentary",
"final_answer"
],
"type": "string"
},
"ModeKind": {
"description": "Initial collaboration mode to use when the TUI starts.",
"enum": [
@@ -1317,6 +1324,16 @@
],
"writeOnly": true
},
"phase": {
"anyOf": [
{
"$ref": "#/definitions/MessagePhase"
},
{
"type": "null"
}
]
},
"role": {
"type": "string"
},
@@ -1393,7 +1410,7 @@
]
},
"id": {
"description": "Set when using the chat completions API.",
"description": "Legacy id field retained for compatibility with older payloads.",
"type": [
"string",
"null"

View File

@@ -93,34 +93,6 @@
}
]
},
"Annotations": {
"description": "Optional annotations for the client. The client can use annotations to inform how objects are used or displayed",
"properties": {
"audience": {
"items": {
"$ref": "#/definitions/Role"
},
"type": [
"array",
"null"
]
},
"lastModified": {
"type": [
"string",
"null"
]
},
"priority": {
"format": "double",
"type": [
"number",
"null"
]
}
},
"type": "object"
},
"AskForApproval": {
"description": "Determines the conditions under which the user is consulted to approve running the command proposed by Codex.",
"oneOf": [
@@ -154,57 +126,6 @@
}
]
},
"AudioContent": {
"description": "Audio provided to or from an LLM.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"data": {
"type": "string"
},
"mimeType": {
"type": "string"
},
"type": {
"type": "string"
}
},
"required": [
"data",
"mimeType",
"type"
],
"type": "object"
},
"BlobResourceContents": {
"properties": {
"blob": {
"type": "string"
},
"mimeType": {
"type": [
"string",
"null"
]
},
"uri": {
"type": "string"
}
},
"required": [
"blob",
"uri"
],
"type": "object"
},
"ByteRange": {
"properties": {
"end": {
@@ -229,10 +150,9 @@
"CallToolResult": {
"description": "The server's response to a tool call.",
"properties": {
"_meta": true,
"content": {
"items": {
"$ref": "#/definitions/ContentBlock"
},
"items": true,
"type": "array"
},
"isError": {
@@ -390,25 +310,6 @@
}
]
},
"ContentBlock": {
"anyOf": [
{
"$ref": "#/definitions/TextContent"
},
{
"$ref": "#/definitions/ImageContent"
},
{
"$ref": "#/definitions/AudioContent"
},
{
"$ref": "#/definitions/ResourceLink"
},
{
"$ref": "#/definitions/EmbeddedResource"
}
]
},
"ContentItem": {
"oneOf": [
{
@@ -544,42 +445,6 @@
],
"type": "object"
},
"EmbeddedResource": {
"description": "The contents of a resource, embedded into a prompt or tool call result.\n\nIt is up to the client how best to render embedded resources for the benefit of the LLM and/or the user.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"resource": {
"$ref": "#/definitions/EmbeddedResourceResource"
},
"type": {
"type": "string"
}
},
"required": [
"resource",
"type"
],
"type": "object"
},
"EmbeddedResourceResource": {
"anyOf": [
{
"$ref": "#/definitions/TextResourceContents"
},
{
"$ref": "#/definitions/BlobResourceContents"
}
]
},
"EventMsg": {
"description": "Response event from the agent NOTE: Make sure none of these values have optional types, as it will mess up the extension code-gen.",
"oneOf": [
@@ -2992,7 +2857,7 @@
]
},
"FunctionCallOutputPayload": {
"description": "The payload we send back to OpenAI when reporting a tool call result.\n\n`content` preserves the historical plain-string payload so downstream integrations (tests, logging, etc.) can keep treating tool output as `String`. When an MCP server returns richer data we additionally populate `content_items` with the structured form that the Responses/Chat Completions APIs understand.",
"description": "The payload we send back to OpenAI when reporting a tool call result.\n\n`content` preserves the historical plain-string payload so downstream integrations (tests, logging, etc.) can keep treating tool output as `String`. When an MCP server returns richer data we additionally populate `content_items` with the structured form that the Responses API understands.",
"properties": {
"content": {
"type": "string"
@@ -3071,36 +2936,6 @@
],
"type": "object"
},
"ImageContent": {
"description": "An image provided to or from an LLM.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"data": {
"type": "string"
},
"mimeType": {
"type": "string"
},
"type": {
"type": "string"
}
},
"required": [
"data",
"mimeType",
"type"
],
"type": "object"
},
"LocalShellAction": {
"oneOf": [
{
@@ -3276,6 +3111,13 @@
}
]
},
"MessagePhase": {
"enum": [
"commentary",
"final_answer"
],
"type": "string"
},
"ModeKind": {
"description": "Initial collaboration mode to use when the TUI starts.",
"enum": [
@@ -3599,7 +3441,8 @@
"format": "int64",
"type": "integer"
}
]
],
"description": "ID of a request, which can be either a string or an integer."
},
"RequestUserInputQuestion": {
"properties": {
@@ -3655,22 +3498,21 @@
"Resource": {
"description": "A known resource that the server is capable of reading.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"_meta": true,
"annotations": true,
"description": {
"type": [
"string",
"null"
]
},
"icons": {
"items": true,
"type": [
"array",
"null"
]
},
"mimeType": {
"type": [
"string",
@@ -3703,74 +3545,10 @@
],
"type": "object"
},
"ResourceLink": {
"description": "A resource that the server is capable of reading, included in a prompt or tool call result.\n\nNote: resource links returned by tools are not guaranteed to appear in the results of `resources/list` requests.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"description": {
"type": [
"string",
"null"
]
},
"mimeType": {
"type": [
"string",
"null"
]
},
"name": {
"type": "string"
},
"size": {
"format": "int64",
"type": [
"integer",
"null"
]
},
"title": {
"type": [
"string",
"null"
]
},
"type": {
"type": "string"
},
"uri": {
"type": "string"
}
},
"required": [
"name",
"type",
"uri"
],
"type": "object"
},
"ResourceTemplate": {
"description": "A template description for resources available on the server.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"annotations": true,
"description": {
"type": [
"string",
@@ -3825,6 +3603,16 @@
],
"writeOnly": true
},
"phase": {
"anyOf": [
{
"$ref": "#/definitions/MessagePhase"
},
{
"type": "null"
}
]
},
"role": {
"type": "string"
},
@@ -3901,7 +3689,7 @@
]
},
"id": {
"description": "Set when using the chat completions API.",
"description": "Legacy id field retained for compatibility with older payloads.",
"type": [
"string",
"null"
@@ -4361,14 +4149,6 @@
}
]
},
"Role": {
"description": "The sender or recipient of messages and data in a conversation.",
"enum": [
"assistant",
"user"
],
"type": "string"
},
"SandboxPolicy": {
"description": "Determines execution restrictions for model shell commands.",
"oneOf": [
@@ -4678,32 +4458,6 @@
],
"type": "string"
},
"TextContent": {
"description": "Text provided to or from an LLM.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"text": {
"type": "string"
},
"type": {
"type": "string"
}
},
"required": [
"text",
"type"
],
"type": "object"
},
"TextElement": {
"properties": {
"byte_range": {
@@ -4727,27 +4481,6 @@
],
"type": "object"
},
"TextResourceContents": {
"properties": {
"mimeType": {
"type": [
"string",
"null"
]
},
"text": {
"type": "string"
},
"uri": {
"type": "string"
}
},
"required": [
"text",
"uri"
],
"type": "object"
},
"ThreadId": {
"type": "string"
},
@@ -4808,38 +4541,26 @@
"Tool": {
"description": "Definition for a tool the client can call.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/ToolAnnotations"
},
{
"type": "null"
}
]
},
"_meta": true,
"annotations": true,
"description": {
"type": [
"string",
"null"
]
},
"inputSchema": {
"$ref": "#/definitions/ToolInputSchema"
"icons": {
"items": true,
"type": [
"array",
"null"
]
},
"inputSchema": true,
"name": {
"type": "string"
},
"outputSchema": {
"anyOf": [
{
"$ref": "#/definitions/ToolOutputSchema"
},
{
"type": "null"
}
]
},
"outputSchema": true,
"title": {
"type": [
"string",
@@ -4853,82 +4574,6 @@
],
"type": "object"
},
"ToolAnnotations": {
"description": "Additional properties describing a Tool to clients.\n\nNOTE: all properties in ToolAnnotations are **hints**. They are not guaranteed to provide a faithful description of tool behavior (including descriptive properties like `title`).\n\nClients should never make tool use decisions based on ToolAnnotations received from untrusted servers.",
"properties": {
"destructiveHint": {
"type": [
"boolean",
"null"
]
},
"idempotentHint": {
"type": [
"boolean",
"null"
]
},
"openWorldHint": {
"type": [
"boolean",
"null"
]
},
"readOnlyHint": {
"type": [
"boolean",
"null"
]
},
"title": {
"type": [
"string",
"null"
]
}
},
"type": "object"
},
"ToolInputSchema": {
"description": "A JSON Schema object defining the expected parameters for the tool.",
"properties": {
"properties": true,
"required": {
"items": {
"type": "string"
},
"type": [
"array",
"null"
]
},
"type": {
"default": "object",
"type": "string"
}
},
"type": "object"
},
"ToolOutputSchema": {
"description": "An optional JSON Schema object defining the structure of the tool's output returned in the structuredContent field of a CallToolResult.",
"properties": {
"properties": true,
"required": {
"items": {
"type": "string"
},
"type": [
"array",
"null"
]
},
"type": {
"default": "object",
"type": "string"
}
},
"type": "object"
},
"TurnAbortReason": {
"enum": [
"interrupted",

View File

@@ -165,34 +165,6 @@
}
]
},
"Annotations": {
"description": "Optional annotations for the client. The client can use annotations to inform how objects are used or displayed",
"properties": {
"audience": {
"items": {
"$ref": "#/definitions/Role"
},
"type": [
"array",
"null"
]
},
"lastModified": {
"type": [
"string",
"null"
]
},
"priority": {
"format": "double",
"type": [
"number",
"null"
]
}
},
"type": "object"
},
"AskForApproval": {
"description": "Determines the conditions under which the user is consulted to approve running the command proposed by Codex.",
"oneOf": [
@@ -226,36 +198,6 @@
}
]
},
"AudioContent": {
"description": "Audio provided to or from an LLM.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"data": {
"type": "string"
},
"mimeType": {
"type": "string"
},
"type": {
"type": "string"
}
},
"required": [
"data",
"mimeType",
"type"
],
"type": "object"
},
"AuthMode": {
"description": "Authentication mode for OpenAI-backed providers.",
"oneOf": [
@@ -298,27 +240,6 @@
},
"type": "object"
},
"BlobResourceContents": {
"properties": {
"blob": {
"type": "string"
},
"mimeType": {
"type": [
"string",
"null"
]
},
"uri": {
"type": "string"
}
},
"required": [
"blob",
"uri"
],
"type": "object"
},
"ByteRange": {
"properties": {
"end": {
@@ -362,10 +283,9 @@
"CallToolResult": {
"description": "The server's response to a tool call.",
"properties": {
"_meta": true,
"content": {
"items": {
"$ref": "#/definitions/ContentBlock"
},
"items": true,
"type": "array"
},
"isError": {
@@ -889,25 +809,6 @@
],
"type": "object"
},
"ContentBlock": {
"anyOf": [
{
"$ref": "#/definitions/TextContent"
},
{
"$ref": "#/definitions/ImageContent"
},
{
"$ref": "#/definitions/AudioContent"
},
{
"$ref": "#/definitions/ResourceLink"
},
{
"$ref": "#/definitions/EmbeddedResource"
}
]
},
"ContentItem": {
"oneOf": [
{
@@ -1099,42 +1000,6 @@
],
"type": "object"
},
"EmbeddedResource": {
"description": "The contents of a resource, embedded into a prompt or tool call result.\n\nIt is up to the client how best to render embedded resources for the benefit of the LLM and/or the user.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"resource": {
"$ref": "#/definitions/EmbeddedResourceResource"
},
"type": {
"type": "string"
}
},
"required": [
"resource",
"type"
],
"type": "object"
},
"EmbeddedResourceResource": {
"anyOf": [
{
"$ref": "#/definitions/TextResourceContents"
},
{
"$ref": "#/definitions/BlobResourceContents"
}
]
},
"ErrorNotification": {
"properties": {
"error": {
@@ -3612,7 +3477,7 @@
]
},
"FunctionCallOutputPayload": {
"description": "The payload we send back to OpenAI when reporting a tool call result.\n\n`content` preserves the historical plain-string payload so downstream integrations (tests, logging, etc.) can keep treating tool output as `String`. When an MCP server returns richer data we additionally populate `content_items` with the structured form that the Responses/Chat Completions APIs understand.",
"description": "The payload we send back to OpenAI when reporting a tool call result.\n\n`content` preserves the historical plain-string payload so downstream integrations (tests, logging, etc.) can keep treating tool output as `String`. When an MCP server returns richer data we additionally populate `content_items` with the structured form that the Responses API understands.",
"properties": {
"content": {
"type": "string"
@@ -3714,36 +3579,6 @@
],
"type": "object"
},
"ImageContent": {
"description": "An image provided to or from an LLM.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"data": {
"type": "string"
},
"mimeType": {
"type": "string"
},
"type": {
"type": "string"
}
},
"required": [
"data",
"mimeType",
"type"
],
"type": "object"
},
"ItemCompletedNotification": {
"properties": {
"item": {
@@ -4037,9 +3872,7 @@
"McpToolCallResult": {
"properties": {
"content": {
"items": {
"$ref": "#/definitions/ContentBlock"
},
"items": true,
"type": "array"
},
"structuredContent": true
@@ -4057,6 +3890,13 @@
],
"type": "string"
},
"MessagePhase": {
"enum": [
"commentary",
"final_answer"
],
"type": "string"
},
"ModeKind": {
"description": "Initial collaboration mode to use when the TUI starts.",
"enum": [
@@ -4641,7 +4481,8 @@
"format": "int64",
"type": "integer"
}
]
],
"description": "ID of a request, which can be either a string or an integer."
},
"RequestUserInputQuestion": {
"properties": {
@@ -4697,22 +4538,21 @@
"Resource": {
"description": "A known resource that the server is capable of reading.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"_meta": true,
"annotations": true,
"description": {
"type": [
"string",
"null"
]
},
"icons": {
"items": true,
"type": [
"array",
"null"
]
},
"mimeType": {
"type": [
"string",
@@ -4745,74 +4585,10 @@
],
"type": "object"
},
"ResourceLink": {
"description": "A resource that the server is capable of reading, included in a prompt or tool call result.\n\nNote: resource links returned by tools are not guaranteed to appear in the results of `resources/list` requests.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"description": {
"type": [
"string",
"null"
]
},
"mimeType": {
"type": [
"string",
"null"
]
},
"name": {
"type": "string"
},
"size": {
"format": "int64",
"type": [
"integer",
"null"
]
},
"title": {
"type": [
"string",
"null"
]
},
"type": {
"type": "string"
},
"uri": {
"type": "string"
}
},
"required": [
"name",
"type",
"uri"
],
"type": "object"
},
"ResourceTemplate": {
"description": "A template description for resources available on the server.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"annotations": true,
"description": {
"type": [
"string",
@@ -4867,6 +4643,16 @@
],
"writeOnly": true
},
"phase": {
"anyOf": [
{
"$ref": "#/definitions/MessagePhase"
},
{
"type": "null"
}
]
},
"role": {
"type": "string"
},
@@ -4943,7 +4729,7 @@
]
},
"id": {
"description": "Set when using the chat completions API.",
"description": "Legacy id field retained for compatibility with older payloads.",
"type": [
"string",
"null"
@@ -5403,14 +5189,6 @@
}
]
},
"Role": {
"description": "The sender or recipient of messages and data in a conversation.",
"enum": [
"assistant",
"user"
],
"type": "string"
},
"SandboxPolicy": {
"description": "Determines execution restrictions for model shell commands.",
"oneOf": [
@@ -5874,32 +5652,6 @@
],
"type": "object"
},
"TextContent": {
"description": "Text provided to or from an LLM.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"text": {
"type": "string"
},
"type": {
"type": "string"
}
},
"required": [
"text",
"type"
],
"type": "object"
},
"TextElement": {
"properties": {
"byteRange": {
@@ -5982,27 +5734,6 @@
],
"type": "object"
},
"TextResourceContents": {
"properties": {
"mimeType": {
"type": [
"string",
"null"
]
},
"text": {
"type": "string"
},
"uri": {
"type": "string"
}
},
"required": [
"text",
"uri"
],
"type": "object"
},
"Thread": {
"properties": {
"cliVersion": {
@@ -6714,38 +6445,26 @@
"Tool": {
"description": "Definition for a tool the client can call.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/ToolAnnotations"
},
{
"type": "null"
}
]
},
"_meta": true,
"annotations": true,
"description": {
"type": [
"string",
"null"
]
},
"inputSchema": {
"$ref": "#/definitions/ToolInputSchema"
"icons": {
"items": true,
"type": [
"array",
"null"
]
},
"inputSchema": true,
"name": {
"type": "string"
},
"outputSchema": {
"anyOf": [
{
"$ref": "#/definitions/ToolOutputSchema"
},
{
"type": "null"
}
]
},
"outputSchema": true,
"title": {
"type": [
"string",
@@ -6759,82 +6478,6 @@
],
"type": "object"
},
"ToolAnnotations": {
"description": "Additional properties describing a Tool to clients.\n\nNOTE: all properties in ToolAnnotations are **hints**. They are not guaranteed to provide a faithful description of tool behavior (including descriptive properties like `title`).\n\nClients should never make tool use decisions based on ToolAnnotations received from untrusted servers.",
"properties": {
"destructiveHint": {
"type": [
"boolean",
"null"
]
},
"idempotentHint": {
"type": [
"boolean",
"null"
]
},
"openWorldHint": {
"type": [
"boolean",
"null"
]
},
"readOnlyHint": {
"type": [
"boolean",
"null"
]
},
"title": {
"type": [
"string",
"null"
]
}
},
"type": "object"
},
"ToolInputSchema": {
"description": "A JSON Schema object defining the expected parameters for the tool.",
"properties": {
"properties": true,
"required": {
"items": {
"type": "string"
},
"type": [
"array",
"null"
]
},
"type": {
"default": "object",
"type": "string"
}
},
"type": "object"
},
"ToolOutputSchema": {
"description": "An optional JSON Schema object defining the structure of the tool's output returned in the structuredContent field of a CallToolResult.",
"properties": {
"properties": true,
"required": {
"items": {
"type": "string"
},
"type": [
"array",
"null"
]
},
"type": {
"default": "object",
"type": "string"
}
},
"type": "object"
},
"Turn": {
"properties": {
"error": {

View File

@@ -93,34 +93,6 @@
}
]
},
"Annotations": {
"description": "Optional annotations for the client. The client can use annotations to inform how objects are used or displayed",
"properties": {
"audience": {
"items": {
"$ref": "#/definitions/Role"
},
"type": [
"array",
"null"
]
},
"lastModified": {
"type": [
"string",
"null"
]
},
"priority": {
"format": "double",
"type": [
"number",
"null"
]
}
},
"type": "object"
},
"AskForApproval": {
"description": "Determines the conditions under which the user is consulted to approve running the command proposed by Codex.",
"oneOf": [
@@ -154,57 +126,6 @@
}
]
},
"AudioContent": {
"description": "Audio provided to or from an LLM.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"data": {
"type": "string"
},
"mimeType": {
"type": "string"
},
"type": {
"type": "string"
}
},
"required": [
"data",
"mimeType",
"type"
],
"type": "object"
},
"BlobResourceContents": {
"properties": {
"blob": {
"type": "string"
},
"mimeType": {
"type": [
"string",
"null"
]
},
"uri": {
"type": "string"
}
},
"required": [
"blob",
"uri"
],
"type": "object"
},
"ByteRange": {
"properties": {
"end": {
@@ -229,10 +150,9 @@
"CallToolResult": {
"description": "The server's response to a tool call.",
"properties": {
"_meta": true,
"content": {
"items": {
"$ref": "#/definitions/ContentBlock"
},
"items": true,
"type": "array"
},
"isError": {
@@ -390,25 +310,6 @@
}
]
},
"ContentBlock": {
"anyOf": [
{
"$ref": "#/definitions/TextContent"
},
{
"$ref": "#/definitions/ImageContent"
},
{
"$ref": "#/definitions/AudioContent"
},
{
"$ref": "#/definitions/ResourceLink"
},
{
"$ref": "#/definitions/EmbeddedResource"
}
]
},
"ContentItem": {
"oneOf": [
{
@@ -544,42 +445,6 @@
],
"type": "object"
},
"EmbeddedResource": {
"description": "The contents of a resource, embedded into a prompt or tool call result.\n\nIt is up to the client how best to render embedded resources for the benefit of the LLM and/or the user.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"resource": {
"$ref": "#/definitions/EmbeddedResourceResource"
},
"type": {
"type": "string"
}
},
"required": [
"resource",
"type"
],
"type": "object"
},
"EmbeddedResourceResource": {
"anyOf": [
{
"$ref": "#/definitions/TextResourceContents"
},
{
"$ref": "#/definitions/BlobResourceContents"
}
]
},
"EventMsg": {
"description": "Response event from the agent NOTE: Make sure none of these values have optional types, as it will mess up the extension code-gen.",
"oneOf": [
@@ -2992,7 +2857,7 @@
]
},
"FunctionCallOutputPayload": {
"description": "The payload we send back to OpenAI when reporting a tool call result.\n\n`content` preserves the historical plain-string payload so downstream integrations (tests, logging, etc.) can keep treating tool output as `String`. When an MCP server returns richer data we additionally populate `content_items` with the structured form that the Responses/Chat Completions APIs understand.",
"description": "The payload we send back to OpenAI when reporting a tool call result.\n\n`content` preserves the historical plain-string payload so downstream integrations (tests, logging, etc.) can keep treating tool output as `String`. When an MCP server returns richer data we additionally populate `content_items` with the structured form that the Responses API understands.",
"properties": {
"content": {
"type": "string"
@@ -3071,36 +2936,6 @@
],
"type": "object"
},
"ImageContent": {
"description": "An image provided to or from an LLM.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"data": {
"type": "string"
},
"mimeType": {
"type": "string"
},
"type": {
"type": "string"
}
},
"required": [
"data",
"mimeType",
"type"
],
"type": "object"
},
"LocalShellAction": {
"oneOf": [
{
@@ -3276,6 +3111,13 @@
}
]
},
"MessagePhase": {
"enum": [
"commentary",
"final_answer"
],
"type": "string"
},
"ModeKind": {
"description": "Initial collaboration mode to use when the TUI starts.",
"enum": [
@@ -3599,7 +3441,8 @@
"format": "int64",
"type": "integer"
}
]
],
"description": "ID of a request, which can be either a string or an integer."
},
"RequestUserInputQuestion": {
"properties": {
@@ -3655,22 +3498,21 @@
"Resource": {
"description": "A known resource that the server is capable of reading.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"_meta": true,
"annotations": true,
"description": {
"type": [
"string",
"null"
]
},
"icons": {
"items": true,
"type": [
"array",
"null"
]
},
"mimeType": {
"type": [
"string",
@@ -3703,74 +3545,10 @@
],
"type": "object"
},
"ResourceLink": {
"description": "A resource that the server is capable of reading, included in a prompt or tool call result.\n\nNote: resource links returned by tools are not guaranteed to appear in the results of `resources/list` requests.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"description": {
"type": [
"string",
"null"
]
},
"mimeType": {
"type": [
"string",
"null"
]
},
"name": {
"type": "string"
},
"size": {
"format": "int64",
"type": [
"integer",
"null"
]
},
"title": {
"type": [
"string",
"null"
]
},
"type": {
"type": "string"
},
"uri": {
"type": "string"
}
},
"required": [
"name",
"type",
"uri"
],
"type": "object"
},
"ResourceTemplate": {
"description": "A template description for resources available on the server.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"annotations": true,
"description": {
"type": [
"string",
@@ -3825,6 +3603,16 @@
],
"writeOnly": true
},
"phase": {
"anyOf": [
{
"$ref": "#/definitions/MessagePhase"
},
{
"type": "null"
}
]
},
"role": {
"type": "string"
},
@@ -3901,7 +3689,7 @@
]
},
"id": {
"description": "Set when using the chat completions API.",
"description": "Legacy id field retained for compatibility with older payloads.",
"type": [
"string",
"null"
@@ -4361,14 +4149,6 @@
}
]
},
"Role": {
"description": "The sender or recipient of messages and data in a conversation.",
"enum": [
"assistant",
"user"
],
"type": "string"
},
"SandboxPolicy": {
"description": "Determines execution restrictions for model shell commands.",
"oneOf": [
@@ -4678,32 +4458,6 @@
],
"type": "string"
},
"TextContent": {
"description": "Text provided to or from an LLM.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"text": {
"type": "string"
},
"type": {
"type": "string"
}
},
"required": [
"text",
"type"
],
"type": "object"
},
"TextElement": {
"properties": {
"byte_range": {
@@ -4727,27 +4481,6 @@
],
"type": "object"
},
"TextResourceContents": {
"properties": {
"mimeType": {
"type": [
"string",
"null"
]
},
"text": {
"type": "string"
},
"uri": {
"type": "string"
}
},
"required": [
"text",
"uri"
],
"type": "object"
},
"ThreadId": {
"type": "string"
},
@@ -4808,38 +4541,26 @@
"Tool": {
"description": "Definition for a tool the client can call.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/ToolAnnotations"
},
{
"type": "null"
}
]
},
"_meta": true,
"annotations": true,
"description": {
"type": [
"string",
"null"
]
},
"inputSchema": {
"$ref": "#/definitions/ToolInputSchema"
"icons": {
"items": true,
"type": [
"array",
"null"
]
},
"inputSchema": true,
"name": {
"type": "string"
},
"outputSchema": {
"anyOf": [
{
"$ref": "#/definitions/ToolOutputSchema"
},
{
"type": "null"
}
]
},
"outputSchema": true,
"title": {
"type": [
"string",
@@ -4853,82 +4574,6 @@
],
"type": "object"
},
"ToolAnnotations": {
"description": "Additional properties describing a Tool to clients.\n\nNOTE: all properties in ToolAnnotations are **hints**. They are not guaranteed to provide a faithful description of tool behavior (including descriptive properties like `title`).\n\nClients should never make tool use decisions based on ToolAnnotations received from untrusted servers.",
"properties": {
"destructiveHint": {
"type": [
"boolean",
"null"
]
},
"idempotentHint": {
"type": [
"boolean",
"null"
]
},
"openWorldHint": {
"type": [
"boolean",
"null"
]
},
"readOnlyHint": {
"type": [
"boolean",
"null"
]
},
"title": {
"type": [
"string",
"null"
]
}
},
"type": "object"
},
"ToolInputSchema": {
"description": "A JSON Schema object defining the expected parameters for the tool.",
"properties": {
"properties": true,
"required": {
"items": {
"type": "string"
},
"type": [
"array",
"null"
]
},
"type": {
"default": "object",
"type": "string"
}
},
"type": "object"
},
"ToolOutputSchema": {
"description": "An optional JSON Schema object defining the structure of the tool's output returned in the structuredContent field of a CallToolResult.",
"properties": {
"properties": true,
"required": {
"items": {
"type": "string"
},
"type": [
"array",
"null"
]
},
"type": {
"default": "object",
"type": "string"
}
},
"type": "object"
},
"TurnAbortReason": {
"enum": [
"interrupted",

View File

@@ -144,7 +144,7 @@
]
},
"FunctionCallOutputPayload": {
"description": "The payload we send back to OpenAI when reporting a tool call result.\n\n`content` preserves the historical plain-string payload so downstream integrations (tests, logging, etc.) can keep treating tool output as `String`. When an MCP server returns richer data we additionally populate `content_items` with the structured form that the Responses/Chat Completions APIs understand.",
"description": "The payload we send back to OpenAI when reporting a tool call result.\n\n`content` preserves the historical plain-string payload so downstream integrations (tests, logging, etc.) can keep treating tool output as `String`. When an MCP server returns richer data we additionally populate `content_items` with the structured form that the Responses API understands.",
"properties": {
"content": {
"type": "string"
@@ -266,6 +266,13 @@
],
"type": "string"
},
"MessagePhase": {
"enum": [
"commentary",
"final_answer"
],
"type": "string"
},
"NewConversationParams": {
"properties": {
"approvalPolicy": {
@@ -437,6 +444,16 @@
],
"writeOnly": true
},
"phase": {
"anyOf": [
{
"$ref": "#/definitions/MessagePhase"
},
{
"type": "null"
}
]
},
"role": {
"type": "string"
},
@@ -513,7 +530,7 @@
]
},
"id": {
"description": "Set when using the chat completions API.",
"description": "Legacy id field retained for compatibility with older payloads.",
"type": [
"string",
"null"

View File

@@ -93,34 +93,6 @@
}
]
},
"Annotations": {
"description": "Optional annotations for the client. The client can use annotations to inform how objects are used or displayed",
"properties": {
"audience": {
"items": {
"$ref": "#/definitions/Role"
},
"type": [
"array",
"null"
]
},
"lastModified": {
"type": [
"string",
"null"
]
},
"priority": {
"format": "double",
"type": [
"number",
"null"
]
}
},
"type": "object"
},
"AskForApproval": {
"description": "Determines the conditions under which the user is consulted to approve running the command proposed by Codex.",
"oneOf": [
@@ -154,57 +126,6 @@
}
]
},
"AudioContent": {
"description": "Audio provided to or from an LLM.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"data": {
"type": "string"
},
"mimeType": {
"type": "string"
},
"type": {
"type": "string"
}
},
"required": [
"data",
"mimeType",
"type"
],
"type": "object"
},
"BlobResourceContents": {
"properties": {
"blob": {
"type": "string"
},
"mimeType": {
"type": [
"string",
"null"
]
},
"uri": {
"type": "string"
}
},
"required": [
"blob",
"uri"
],
"type": "object"
},
"ByteRange": {
"properties": {
"end": {
@@ -229,10 +150,9 @@
"CallToolResult": {
"description": "The server's response to a tool call.",
"properties": {
"_meta": true,
"content": {
"items": {
"$ref": "#/definitions/ContentBlock"
},
"items": true,
"type": "array"
},
"isError": {
@@ -390,25 +310,6 @@
}
]
},
"ContentBlock": {
"anyOf": [
{
"$ref": "#/definitions/TextContent"
},
{
"$ref": "#/definitions/ImageContent"
},
{
"$ref": "#/definitions/AudioContent"
},
{
"$ref": "#/definitions/ResourceLink"
},
{
"$ref": "#/definitions/EmbeddedResource"
}
]
},
"ContentItem": {
"oneOf": [
{
@@ -544,42 +445,6 @@
],
"type": "object"
},
"EmbeddedResource": {
"description": "The contents of a resource, embedded into a prompt or tool call result.\n\nIt is up to the client how best to render embedded resources for the benefit of the LLM and/or the user.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"resource": {
"$ref": "#/definitions/EmbeddedResourceResource"
},
"type": {
"type": "string"
}
},
"required": [
"resource",
"type"
],
"type": "object"
},
"EmbeddedResourceResource": {
"anyOf": [
{
"$ref": "#/definitions/TextResourceContents"
},
{
"$ref": "#/definitions/BlobResourceContents"
}
]
},
"EventMsg": {
"description": "Response event from the agent NOTE: Make sure none of these values have optional types, as it will mess up the extension code-gen.",
"oneOf": [
@@ -2992,7 +2857,7 @@
]
},
"FunctionCallOutputPayload": {
"description": "The payload we send back to OpenAI when reporting a tool call result.\n\n`content` preserves the historical plain-string payload so downstream integrations (tests, logging, etc.) can keep treating tool output as `String`. When an MCP server returns richer data we additionally populate `content_items` with the structured form that the Responses/Chat Completions APIs understand.",
"description": "The payload we send back to OpenAI when reporting a tool call result.\n\n`content` preserves the historical plain-string payload so downstream integrations (tests, logging, etc.) can keep treating tool output as `String`. When an MCP server returns richer data we additionally populate `content_items` with the structured form that the Responses API understands.",
"properties": {
"content": {
"type": "string"
@@ -3071,36 +2936,6 @@
],
"type": "object"
},
"ImageContent": {
"description": "An image provided to or from an LLM.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"data": {
"type": "string"
},
"mimeType": {
"type": "string"
},
"type": {
"type": "string"
}
},
"required": [
"data",
"mimeType",
"type"
],
"type": "object"
},
"LocalShellAction": {
"oneOf": [
{
@@ -3276,6 +3111,13 @@
}
]
},
"MessagePhase": {
"enum": [
"commentary",
"final_answer"
],
"type": "string"
},
"ModeKind": {
"description": "Initial collaboration mode to use when the TUI starts.",
"enum": [
@@ -3599,7 +3441,8 @@
"format": "int64",
"type": "integer"
}
]
],
"description": "ID of a request, which can be either a string or an integer."
},
"RequestUserInputQuestion": {
"properties": {
@@ -3655,22 +3498,21 @@
"Resource": {
"description": "A known resource that the server is capable of reading.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"_meta": true,
"annotations": true,
"description": {
"type": [
"string",
"null"
]
},
"icons": {
"items": true,
"type": [
"array",
"null"
]
},
"mimeType": {
"type": [
"string",
@@ -3703,74 +3545,10 @@
],
"type": "object"
},
"ResourceLink": {
"description": "A resource that the server is capable of reading, included in a prompt or tool call result.\n\nNote: resource links returned by tools are not guaranteed to appear in the results of `resources/list` requests.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"description": {
"type": [
"string",
"null"
]
},
"mimeType": {
"type": [
"string",
"null"
]
},
"name": {
"type": "string"
},
"size": {
"format": "int64",
"type": [
"integer",
"null"
]
},
"title": {
"type": [
"string",
"null"
]
},
"type": {
"type": "string"
},
"uri": {
"type": "string"
}
},
"required": [
"name",
"type",
"uri"
],
"type": "object"
},
"ResourceTemplate": {
"description": "A template description for resources available on the server.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"annotations": true,
"description": {
"type": [
"string",
@@ -3825,6 +3603,16 @@
],
"writeOnly": true
},
"phase": {
"anyOf": [
{
"$ref": "#/definitions/MessagePhase"
},
{
"type": "null"
}
]
},
"role": {
"type": "string"
},
@@ -3901,7 +3689,7 @@
]
},
"id": {
"description": "Set when using the chat completions API.",
"description": "Legacy id field retained for compatibility with older payloads.",
"type": [
"string",
"null"
@@ -4361,14 +4149,6 @@
}
]
},
"Role": {
"description": "The sender or recipient of messages and data in a conversation.",
"enum": [
"assistant",
"user"
],
"type": "string"
},
"SandboxPolicy": {
"description": "Determines execution restrictions for model shell commands.",
"oneOf": [
@@ -4678,32 +4458,6 @@
],
"type": "string"
},
"TextContent": {
"description": "Text provided to or from an LLM.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"text": {
"type": "string"
},
"type": {
"type": "string"
}
},
"required": [
"text",
"type"
],
"type": "object"
},
"TextElement": {
"properties": {
"byte_range": {
@@ -4727,27 +4481,6 @@
],
"type": "object"
},
"TextResourceContents": {
"properties": {
"mimeType": {
"type": [
"string",
"null"
]
},
"text": {
"type": "string"
},
"uri": {
"type": "string"
}
},
"required": [
"text",
"uri"
],
"type": "object"
},
"ThreadId": {
"type": "string"
},
@@ -4808,38 +4541,26 @@
"Tool": {
"description": "Definition for a tool the client can call.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/ToolAnnotations"
},
{
"type": "null"
}
]
},
"_meta": true,
"annotations": true,
"description": {
"type": [
"string",
"null"
]
},
"inputSchema": {
"$ref": "#/definitions/ToolInputSchema"
"icons": {
"items": true,
"type": [
"array",
"null"
]
},
"inputSchema": true,
"name": {
"type": "string"
},
"outputSchema": {
"anyOf": [
{
"$ref": "#/definitions/ToolOutputSchema"
},
{
"type": "null"
}
]
},
"outputSchema": true,
"title": {
"type": [
"string",
@@ -4853,82 +4574,6 @@
],
"type": "object"
},
"ToolAnnotations": {
"description": "Additional properties describing a Tool to clients.\n\nNOTE: all properties in ToolAnnotations are **hints**. They are not guaranteed to provide a faithful description of tool behavior (including descriptive properties like `title`).\n\nClients should never make tool use decisions based on ToolAnnotations received from untrusted servers.",
"properties": {
"destructiveHint": {
"type": [
"boolean",
"null"
]
},
"idempotentHint": {
"type": [
"boolean",
"null"
]
},
"openWorldHint": {
"type": [
"boolean",
"null"
]
},
"readOnlyHint": {
"type": [
"boolean",
"null"
]
},
"title": {
"type": [
"string",
"null"
]
}
},
"type": "object"
},
"ToolInputSchema": {
"description": "A JSON Schema object defining the expected parameters for the tool.",
"properties": {
"properties": true,
"required": {
"items": {
"type": "string"
},
"type": [
"array",
"null"
]
},
"type": {
"default": "object",
"type": "string"
}
},
"type": "object"
},
"ToolOutputSchema": {
"description": "An optional JSON Schema object defining the structure of the tool's output returned in the structuredContent field of a CallToolResult.",
"properties": {
"properties": true,
"required": {
"items": {
"type": "string"
},
"type": [
"array",
"null"
]
},
"type": {
"default": "object",
"type": "string"
}
},
"type": "object"
},
"TurnAbortReason": {
"enum": [
"interrupted",

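The hunks above introduce a `MessagePhase` enum (`commentary` / `final_answer`) and add an optional, nullable `phase` field alongside `role` on the message definition in this schema. A minimal sketch of a message object exercising the new field — the parent definition's name and its remaining properties are not visible in the truncated hunk, so everything except `role` and `phase` is omitted:

```json
{
  "role": "assistant",
  "phase": "commentary"
}
```

Because `phase` is declared as `MessagePhase` or `null`, omitting it (or setting it to `null`) stays valid for older payloads.
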
View File

@@ -93,34 +93,6 @@
}
]
},
"Annotations": {
"description": "Optional annotations for the client. The client can use annotations to inform how objects are used or displayed",
"properties": {
"audience": {
"items": {
"$ref": "#/definitions/Role"
},
"type": [
"array",
"null"
]
},
"lastModified": {
"type": [
"string",
"null"
]
},
"priority": {
"format": "double",
"type": [
"number",
"null"
]
}
},
"type": "object"
},
"AskForApproval": {
"description": "Determines the conditions under which the user is consulted to approve running the command proposed by Codex.",
"oneOf": [
@@ -154,57 +126,6 @@
}
]
},
"AudioContent": {
"description": "Audio provided to or from an LLM.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"data": {
"type": "string"
},
"mimeType": {
"type": "string"
},
"type": {
"type": "string"
}
},
"required": [
"data",
"mimeType",
"type"
],
"type": "object"
},
"BlobResourceContents": {
"properties": {
"blob": {
"type": "string"
},
"mimeType": {
"type": [
"string",
"null"
]
},
"uri": {
"type": "string"
}
},
"required": [
"blob",
"uri"
],
"type": "object"
},
"ByteRange": {
"properties": {
"end": {
@@ -229,10 +150,9 @@
"CallToolResult": {
"description": "The server's response to a tool call.",
"properties": {
"_meta": true,
"content": {
"items": {
"$ref": "#/definitions/ContentBlock"
},
"items": true,
"type": "array"
},
"isError": {
@@ -390,25 +310,6 @@
}
]
},
"ContentBlock": {
"anyOf": [
{
"$ref": "#/definitions/TextContent"
},
{
"$ref": "#/definitions/ImageContent"
},
{
"$ref": "#/definitions/AudioContent"
},
{
"$ref": "#/definitions/ResourceLink"
},
{
"$ref": "#/definitions/EmbeddedResource"
}
]
},
"ContentItem": {
"oneOf": [
{
@@ -544,42 +445,6 @@
],
"type": "object"
},
"EmbeddedResource": {
"description": "The contents of a resource, embedded into a prompt or tool call result.\n\nIt is up to the client how best to render embedded resources for the benefit of the LLM and/or the user.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"resource": {
"$ref": "#/definitions/EmbeddedResourceResource"
},
"type": {
"type": "string"
}
},
"required": [
"resource",
"type"
],
"type": "object"
},
"EmbeddedResourceResource": {
"anyOf": [
{
"$ref": "#/definitions/TextResourceContents"
},
{
"$ref": "#/definitions/BlobResourceContents"
}
]
},
"EventMsg": {
"description": "Response event from the agent NOTE: Make sure none of these values have optional types, as it will mess up the extension code-gen.",
"oneOf": [
@@ -2992,7 +2857,7 @@
]
},
"FunctionCallOutputPayload": {
"description": "The payload we send back to OpenAI when reporting a tool call result.\n\n`content` preserves the historical plain-string payload so downstream integrations (tests, logging, etc.) can keep treating tool output as `String`. When an MCP server returns richer data we additionally populate `content_items` with the structured form that the Responses/Chat Completions APIs understand.",
"description": "The payload we send back to OpenAI when reporting a tool call result.\n\n`content` preserves the historical plain-string payload so downstream integrations (tests, logging, etc.) can keep treating tool output as `String`. When an MCP server returns richer data we additionally populate `content_items` with the structured form that the Responses API understands.",
"properties": {
"content": {
"type": "string"
@@ -3071,36 +2936,6 @@
],
"type": "object"
},
"ImageContent": {
"description": "An image provided to or from an LLM.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"data": {
"type": "string"
},
"mimeType": {
"type": "string"
},
"type": {
"type": "string"
}
},
"required": [
"data",
"mimeType",
"type"
],
"type": "object"
},
"LocalShellAction": {
"oneOf": [
{
@@ -3276,6 +3111,13 @@
}
]
},
"MessagePhase": {
"enum": [
"commentary",
"final_answer"
],
"type": "string"
},
"ModeKind": {
"description": "Initial collaboration mode to use when the TUI starts.",
"enum": [
@@ -3599,7 +3441,8 @@
"format": "int64",
"type": "integer"
}
]
],
"description": "ID of a request, which can be either a string or an integer."
},
"RequestUserInputQuestion": {
"properties": {
@@ -3655,22 +3498,21 @@
"Resource": {
"description": "A known resource that the server is capable of reading.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"_meta": true,
"annotations": true,
"description": {
"type": [
"string",
"null"
]
},
"icons": {
"items": true,
"type": [
"array",
"null"
]
},
"mimeType": {
"type": [
"string",
@@ -3703,74 +3545,10 @@
],
"type": "object"
},
"ResourceLink": {
"description": "A resource that the server is capable of reading, included in a prompt or tool call result.\n\nNote: resource links returned by tools are not guaranteed to appear in the results of `resources/list` requests.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"description": {
"type": [
"string",
"null"
]
},
"mimeType": {
"type": [
"string",
"null"
]
},
"name": {
"type": "string"
},
"size": {
"format": "int64",
"type": [
"integer",
"null"
]
},
"title": {
"type": [
"string",
"null"
]
},
"type": {
"type": "string"
},
"uri": {
"type": "string"
}
},
"required": [
"name",
"type",
"uri"
],
"type": "object"
},
"ResourceTemplate": {
"description": "A template description for resources available on the server.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"annotations": true,
"description": {
"type": [
"string",
@@ -3825,6 +3603,16 @@
],
"writeOnly": true
},
"phase": {
"anyOf": [
{
"$ref": "#/definitions/MessagePhase"
},
{
"type": "null"
}
]
},
"role": {
"type": "string"
},
@@ -3901,7 +3689,7 @@
]
},
"id": {
"description": "Set when using the chat completions API.",
"description": "Legacy id field retained for compatibility with older payloads.",
"type": [
"string",
"null"
@@ -4361,14 +4149,6 @@
}
]
},
"Role": {
"description": "The sender or recipient of messages and data in a conversation.",
"enum": [
"assistant",
"user"
],
"type": "string"
},
"SandboxPolicy": {
"description": "Determines execution restrictions for model shell commands.",
"oneOf": [
@@ -4678,32 +4458,6 @@
],
"type": "string"
},
"TextContent": {
"description": "Text provided to or from an LLM.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"text": {
"type": "string"
},
"type": {
"type": "string"
}
},
"required": [
"text",
"type"
],
"type": "object"
},
"TextElement": {
"properties": {
"byte_range": {
@@ -4727,27 +4481,6 @@
],
"type": "object"
},
"TextResourceContents": {
"properties": {
"mimeType": {
"type": [
"string",
"null"
]
},
"text": {
"type": "string"
},
"uri": {
"type": "string"
}
},
"required": [
"text",
"uri"
],
"type": "object"
},
"ThreadId": {
"type": "string"
},
@@ -4808,38 +4541,26 @@
"Tool": {
"description": "Definition for a tool the client can call.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/ToolAnnotations"
},
{
"type": "null"
}
]
},
"_meta": true,
"annotations": true,
"description": {
"type": [
"string",
"null"
]
},
"inputSchema": {
"$ref": "#/definitions/ToolInputSchema"
"icons": {
"items": true,
"type": [
"array",
"null"
]
},
"inputSchema": true,
"name": {
"type": "string"
},
"outputSchema": {
"anyOf": [
{
"$ref": "#/definitions/ToolOutputSchema"
},
{
"type": "null"
}
]
},
"outputSchema": true,
"title": {
"type": [
"string",
@@ -4853,82 +4574,6 @@
],
"type": "object"
},
"ToolAnnotations": {
"description": "Additional properties describing a Tool to clients.\n\nNOTE: all properties in ToolAnnotations are **hints**. They are not guaranteed to provide a faithful description of tool behavior (including descriptive properties like `title`).\n\nClients should never make tool use decisions based on ToolAnnotations received from untrusted servers.",
"properties": {
"destructiveHint": {
"type": [
"boolean",
"null"
]
},
"idempotentHint": {
"type": [
"boolean",
"null"
]
},
"openWorldHint": {
"type": [
"boolean",
"null"
]
},
"readOnlyHint": {
"type": [
"boolean",
"null"
]
},
"title": {
"type": [
"string",
"null"
]
}
},
"type": "object"
},
"ToolInputSchema": {
"description": "A JSON Schema object defining the expected parameters for the tool.",
"properties": {
"properties": true,
"required": {
"items": {
"type": "string"
},
"type": [
"array",
"null"
]
},
"type": {
"default": "object",
"type": "string"
}
},
"type": "object"
},
"ToolOutputSchema": {
"description": "An optional JSON Schema object defining the structure of the tool's output returned in the structuredContent field of a CallToolResult.",
"properties": {
"properties": true,
"required": {
"items": {
"type": "string"
},
"type": [
"array",
"null"
]
},
"type": {
"default": "object",
"type": "string"
}
},
"type": "object"
},
"TurnAbortReason": {
"enum": [
"interrupted",

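This file picks up the same `RequestId` change as the previous one: the `anyOf` of string and `int64` integer now carries the description "ID of a request, which can be either a string or an integer." An illustrative pair of request stubs, where everything except the `id` member is elided and the values are placeholders:

```json
[
  { "id": "req-abc123" },
  { "id": 42 }
]
```
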
View File

@@ -1,85 +1,6 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"definitions": {
"Annotations": {
"description": "Optional annotations for the client. The client can use annotations to inform how objects are used or displayed",
"properties": {
"audience": {
"items": {
"$ref": "#/definitions/Role"
},
"type": [
"array",
"null"
]
},
"lastModified": {
"type": [
"string",
"null"
]
},
"priority": {
"format": "double",
"type": [
"number",
"null"
]
}
},
"type": "object"
},
"AudioContent": {
"description": "Audio provided to or from an LLM.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"data": {
"type": "string"
},
"mimeType": {
"type": "string"
},
"type": {
"type": "string"
}
},
"required": [
"data",
"mimeType",
"type"
],
"type": "object"
},
"BlobResourceContents": {
"properties": {
"blob": {
"type": "string"
},
"mimeType": {
"type": [
"string",
"null"
]
},
"uri": {
"type": "string"
}
},
"required": [
"blob",
"uri"
],
"type": "object"
},
"ByteRange": {
"properties": {
"end": {
@@ -263,61 +184,6 @@
],
"type": "string"
},
"ContentBlock": {
"anyOf": [
{
"$ref": "#/definitions/TextContent"
},
{
"$ref": "#/definitions/ImageContent"
},
{
"$ref": "#/definitions/AudioContent"
},
{
"$ref": "#/definitions/ResourceLink"
},
{
"$ref": "#/definitions/EmbeddedResource"
}
]
},
"EmbeddedResource": {
"description": "The contents of a resource, embedded into a prompt or tool call result.\n\nIt is up to the client how best to render embedded resources for the benefit of the LLM and/or the user.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"resource": {
"$ref": "#/definitions/EmbeddedResourceResource"
},
"type": {
"type": "string"
}
},
"required": [
"resource",
"type"
],
"type": "object"
},
"EmbeddedResourceResource": {
"anyOf": [
{
"$ref": "#/definitions/TextResourceContents"
},
{
"$ref": "#/definitions/BlobResourceContents"
}
]
},
"FileUpdateChange": {
"properties": {
"diff": {
@@ -337,36 +203,6 @@
],
"type": "object"
},
"ImageContent": {
"description": "An image provided to or from an LLM.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"data": {
"type": "string"
},
"mimeType": {
"type": "string"
},
"type": {
"type": "string"
}
},
"required": [
"data",
"mimeType",
"type"
],
"type": "object"
},
"McpToolCallError": {
"properties": {
"message": {
@@ -381,9 +217,7 @@
"McpToolCallResult": {
"properties": {
"content": {
"items": {
"$ref": "#/definitions/ContentBlock"
},
"items": true,
"type": "array"
},
"structuredContent": true
@@ -468,95 +302,6 @@
}
]
},
"ResourceLink": {
"description": "A resource that the server is capable of reading, included in a prompt or tool call result.\n\nNote: resource links returned by tools are not guaranteed to appear in the results of `resources/list` requests.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"description": {
"type": [
"string",
"null"
]
},
"mimeType": {
"type": [
"string",
"null"
]
},
"name": {
"type": "string"
},
"size": {
"format": "int64",
"type": [
"integer",
"null"
]
},
"title": {
"type": [
"string",
"null"
]
},
"type": {
"type": "string"
},
"uri": {
"type": "string"
}
},
"required": [
"name",
"type",
"uri"
],
"type": "object"
},
"Role": {
"description": "The sender or recipient of messages and data in a conversation.",
"enum": [
"assistant",
"user"
],
"type": "string"
},
"TextContent": {
"description": "Text provided to or from an LLM.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"text": {
"type": "string"
},
"type": {
"type": "string"
}
},
"required": [
"text",
"type"
],
"type": "object"
},
"TextElement": {
"properties": {
"byteRange": {
@@ -580,27 +325,6 @@
],
"type": "object"
},
"TextResourceContents": {
"properties": {
"mimeType": {
"type": [
"string",
"null"
]
},
"text": {
"type": "string"
},
"uri": {
"type": "string"
}
},
"required": [
"text",
"uri"
],
"type": "object"
},
"ThreadItem": {
"oneOf": [
{

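In `McpToolCallResult` above, `content` items no longer reference the removed `ContentBlock` union and are accepted as-is (`items: true`), with `structuredContent` likewise unconstrained. A sketch of a result that would validate against the loosened schema — the text entry mirrors the shape of the removed `TextContent` definition, and the file names are placeholder values:

```json
{
  "content": [
    { "type": "text", "text": "Found 2 matching files." }
  ],
  "structuredContent": {
    "files": ["src/lib.rs", "src/main.rs"]
  }
}
```
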
View File

@@ -1,85 +1,6 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"definitions": {
"Annotations": {
"description": "Optional annotations for the client. The client can use annotations to inform how objects are used or displayed",
"properties": {
"audience": {
"items": {
"$ref": "#/definitions/Role"
},
"type": [
"array",
"null"
]
},
"lastModified": {
"type": [
"string",
"null"
]
},
"priority": {
"format": "double",
"type": [
"number",
"null"
]
}
},
"type": "object"
},
"AudioContent": {
"description": "Audio provided to or from an LLM.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"data": {
"type": "string"
},
"mimeType": {
"type": "string"
},
"type": {
"type": "string"
}
},
"required": [
"data",
"mimeType",
"type"
],
"type": "object"
},
"BlobResourceContents": {
"properties": {
"blob": {
"type": "string"
},
"mimeType": {
"type": [
"string",
"null"
]
},
"uri": {
"type": "string"
}
},
"required": [
"blob",
"uri"
],
"type": "object"
},
"ByteRange": {
"properties": {
"end": {
@@ -263,61 +184,6 @@
],
"type": "string"
},
"ContentBlock": {
"anyOf": [
{
"$ref": "#/definitions/TextContent"
},
{
"$ref": "#/definitions/ImageContent"
},
{
"$ref": "#/definitions/AudioContent"
},
{
"$ref": "#/definitions/ResourceLink"
},
{
"$ref": "#/definitions/EmbeddedResource"
}
]
},
"EmbeddedResource": {
"description": "The contents of a resource, embedded into a prompt or tool call result.\n\nIt is up to the client how best to render embedded resources for the benefit of the LLM and/or the user.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"resource": {
"$ref": "#/definitions/EmbeddedResourceResource"
},
"type": {
"type": "string"
}
},
"required": [
"resource",
"type"
],
"type": "object"
},
"EmbeddedResourceResource": {
"anyOf": [
{
"$ref": "#/definitions/TextResourceContents"
},
{
"$ref": "#/definitions/BlobResourceContents"
}
]
},
"FileUpdateChange": {
"properties": {
"diff": {
@@ -337,36 +203,6 @@
],
"type": "object"
},
"ImageContent": {
"description": "An image provided to or from an LLM.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"data": {
"type": "string"
},
"mimeType": {
"type": "string"
},
"type": {
"type": "string"
}
},
"required": [
"data",
"mimeType",
"type"
],
"type": "object"
},
"McpToolCallError": {
"properties": {
"message": {
@@ -381,9 +217,7 @@
"McpToolCallResult": {
"properties": {
"content": {
"items": {
"$ref": "#/definitions/ContentBlock"
},
"items": true,
"type": "array"
},
"structuredContent": true
@@ -468,95 +302,6 @@
}
]
},
"ResourceLink": {
"description": "A resource that the server is capable of reading, included in a prompt or tool call result.\n\nNote: resource links returned by tools are not guaranteed to appear in the results of `resources/list` requests.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"description": {
"type": [
"string",
"null"
]
},
"mimeType": {
"type": [
"string",
"null"
]
},
"name": {
"type": "string"
},
"size": {
"format": "int64",
"type": [
"integer",
"null"
]
},
"title": {
"type": [
"string",
"null"
]
},
"type": {
"type": "string"
},
"uri": {
"type": "string"
}
},
"required": [
"name",
"type",
"uri"
],
"type": "object"
},
"Role": {
"description": "The sender or recipient of messages and data in a conversation.",
"enum": [
"assistant",
"user"
],
"type": "string"
},
"TextContent": {
"description": "Text provided to or from an LLM.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"text": {
"type": "string"
},
"type": {
"type": "string"
}
},
"required": [
"text",
"type"
],
"type": "object"
},
"TextElement": {
"properties": {
"byteRange": {
@@ -580,27 +325,6 @@
],
"type": "object"
},
"TextResourceContents": {
"properties": {
"mimeType": {
"type": [
"string",
"null"
]
},
"text": {
"type": "string"
},
"uri": {
"type": "string"
}
},
"required": [
"text",
"uri"
],
"type": "object"
},
"ThreadItem": {
"oneOf": [
{

View File

@@ -1,34 +1,6 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"definitions": {
"Annotations": {
"description": "Optional annotations for the client. The client can use annotations to inform how objects are used or displayed",
"properties": {
"audience": {
"items": {
"$ref": "#/definitions/Role"
},
"type": [
"array",
"null"
]
},
"lastModified": {
"type": [
"string",
"null"
]
},
"priority": {
"format": "double",
"type": [
"number",
"null"
]
}
},
"type": "object"
},
"McpAuthStatus": {
"enum": [
"unsupported",
@@ -77,22 +49,21 @@
"Resource": {
"description": "A known resource that the server is capable of reading.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"_meta": true,
"annotations": true,
"description": {
"type": [
"string",
"null"
]
},
"icons": {
"items": true,
"type": [
"array",
"null"
]
},
"mimeType": {
"type": [
"string",
@@ -128,16 +99,7 @@
"ResourceTemplate": {
"description": "A template description for resources available on the server.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"annotations": true,
"description": {
"type": [
"string",
@@ -169,49 +131,29 @@
],
"type": "object"
},
"Role": {
"description": "The sender or recipient of messages and data in a conversation.",
"enum": [
"assistant",
"user"
],
"type": "string"
},
"Tool": {
"description": "Definition for a tool the client can call.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/ToolAnnotations"
},
{
"type": "null"
}
]
},
"_meta": true,
"annotations": true,
"description": {
"type": [
"string",
"null"
]
},
"inputSchema": {
"$ref": "#/definitions/ToolInputSchema"
"icons": {
"items": true,
"type": [
"array",
"null"
]
},
"inputSchema": true,
"name": {
"type": "string"
},
"outputSchema": {
"anyOf": [
{
"$ref": "#/definitions/ToolOutputSchema"
},
{
"type": "null"
}
]
},
"outputSchema": true,
"title": {
"type": [
"string",
@@ -224,82 +166,6 @@
"name"
],
"type": "object"
},
"ToolAnnotations": {
"description": "Additional properties describing a Tool to clients.\n\nNOTE: all properties in ToolAnnotations are **hints**. They are not guaranteed to provide a faithful description of tool behavior (including descriptive properties like `title`).\n\nClients should never make tool use decisions based on ToolAnnotations received from untrusted servers.",
"properties": {
"destructiveHint": {
"type": [
"boolean",
"null"
]
},
"idempotentHint": {
"type": [
"boolean",
"null"
]
},
"openWorldHint": {
"type": [
"boolean",
"null"
]
},
"readOnlyHint": {
"type": [
"boolean",
"null"
]
},
"title": {
"type": [
"string",
"null"
]
}
},
"type": "object"
},
"ToolInputSchema": {
"description": "A JSON Schema object defining the expected parameters for the tool.",
"properties": {
"properties": true,
"required": {
"items": {
"type": "string"
},
"type": [
"array",
"null"
]
},
"type": {
"default": "object",
"type": "string"
}
},
"type": "object"
},
"ToolOutputSchema": {
"description": "An optional JSON Schema object defining the structure of the tool's output returned in the structuredContent field of a CallToolResult.",
"properties": {
"properties": true,
"required": {
"items": {
"type": "string"
},
"type": [
"array",
"null"
]
},
"type": {
"default": "object",
"type": "string"
}
},
"type": "object"
}
},
"properties": {

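The `Tool` definition above now passes `annotations`, `inputSchema`, `outputSchema`, and `_meta` through unvalidated (`true` schemas) and adds an optional `icons` array, with `name` among the required properties. A hedged sketch of a tool entry that would satisfy the loosened definition — the tool name, description, and schema contents are hypothetical, and `readOnlyHint` mirrors the removed `ToolAnnotations` shape:

```json
{
  "name": "list_files",
  "title": "List files",
  "description": "List files under a directory.",
  "icons": [],
  "inputSchema": {
    "type": "object",
    "properties": { "path": { "type": "string" } },
    "required": ["path"]
  },
  "outputSchema": { "type": "object" },
  "annotations": { "readOnlyHint": true },
  "_meta": {}
}
```
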
View File

@@ -1,6 +1,25 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"definitions": {
"InputModality": {
"description": "Canonical user-input modality tags advertised by a model.",
"oneOf": [
{
"description": "Plain text turns and tool payloads.",
"enum": [
"text"
],
"type": "string"
},
{
"description": "Image attachments included in user turns.",
"enum": [
"image"
],
"type": "string"
}
]
},
"Model": {
"properties": {
"defaultReasoningEffort": {
@@ -15,6 +34,16 @@
"id": {
"type": "string"
},
"inputModalities": {
"default": [
"text",
"image"
],
"items": {
"$ref": "#/definitions/InputModality"
},
"type": "array"
},
"isDefault": {
"type": "boolean"
},

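This file adds an `InputModality` definition (`text` or `image`) and an `inputModalities` array on `Model`, defaulting to `["text", "image"]` when omitted. A sketch of two model entries exercising the new field — the ids are placeholders and the other `Model` properties (e.g. `defaultReasoningEffort`) are elided:

```json
[
  {
    "id": "example-multimodal",
    "isDefault": true,
    "inputModalities": ["text", "image"]
  },
  {
    "id": "example-text-only",
    "isDefault": false,
    "inputModalities": ["text"]
  }
]
```
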
View File

@@ -111,7 +111,7 @@
]
},
"FunctionCallOutputPayload": {
"description": "The payload we send back to OpenAI when reporting a tool call result.\n\n`content` preserves the historical plain-string payload so downstream integrations (tests, logging, etc.) can keep treating tool output as `String`. When an MCP server returns richer data we additionally populate `content_items` with the structured form that the Responses/Chat Completions APIs understand.",
"description": "The payload we send back to OpenAI when reporting a tool call result.\n\n`content` preserves the historical plain-string payload so downstream integrations (tests, logging, etc.) can keep treating tool output as `String`. When an MCP server returns richer data we additionally populate `content_items` with the structured form that the Responses API understands.",
"properties": {
"content": {
"type": "string"
@@ -233,6 +233,13 @@
],
"type": "string"
},
"MessagePhase": {
"enum": [
"commentary",
"final_answer"
],
"type": "string"
},
"ReasoningItemContent": {
"oneOf": [
{
@@ -324,6 +331,16 @@
],
"writeOnly": true
},
"phase": {
"anyOf": [
{
"$ref": "#/definitions/MessagePhase"
},
{
"type": "null"
}
]
},
"role": {
"type": "string"
},
@@ -400,7 +417,7 @@
]
},
"id": {
"description": "Set when using the chat completions API.",
"description": "Legacy id field retained for compatibility with older payloads.",
"type": [
"string",
"null"

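The `FunctionCallOutputPayload` description above is reworded to mention only the Responses API: `content` stays a plain string while `content_items` optionally carries the structured form. The shape of `content_items` entries is not shown in this hunk, so the text-style item below is only a placeholder sketch of how the two fields relate:

```json
{
  "content": "2 files changed, 14 insertions(+)",
  "content_items": [
    { "type": "text", "text": "2 files changed, 14 insertions(+)" }
  ]
}
```
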
View File

@@ -1,85 +1,6 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"definitions": {
"Annotations": {
"description": "Optional annotations for the client. The client can use annotations to inform how objects are used or displayed",
"properties": {
"audience": {
"items": {
"$ref": "#/definitions/Role"
},
"type": [
"array",
"null"
]
},
"lastModified": {
"type": [
"string",
"null"
]
},
"priority": {
"format": "double",
"type": [
"number",
"null"
]
}
},
"type": "object"
},
"AudioContent": {
"description": "Audio provided to or from an LLM.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"data": {
"type": "string"
},
"mimeType": {
"type": "string"
},
"type": {
"type": "string"
}
},
"required": [
"data",
"mimeType",
"type"
],
"type": "object"
},
"BlobResourceContents": {
"properties": {
"blob": {
"type": "string"
},
"mimeType": {
"type": [
"string",
"null"
]
},
"uri": {
"type": "string"
}
},
"required": [
"blob",
"uri"
],
"type": "object"
},
"ByteRange": {
"properties": {
"end": {
@@ -405,61 +326,6 @@
],
"type": "string"
},
"ContentBlock": {
"anyOf": [
{
"$ref": "#/definitions/TextContent"
},
{
"$ref": "#/definitions/ImageContent"
},
{
"$ref": "#/definitions/AudioContent"
},
{
"$ref": "#/definitions/ResourceLink"
},
{
"$ref": "#/definitions/EmbeddedResource"
}
]
},
"EmbeddedResource": {
"description": "The contents of a resource, embedded into a prompt or tool call result.\n\nIt is up to the client how best to render embedded resources for the benefit of the LLM and/or the user.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"resource": {
"$ref": "#/definitions/EmbeddedResourceResource"
},
"type": {
"type": "string"
}
},
"required": [
"resource",
"type"
],
"type": "object"
},
"EmbeddedResourceResource": {
"anyOf": [
{
"$ref": "#/definitions/TextResourceContents"
},
{
"$ref": "#/definitions/BlobResourceContents"
}
]
},
"FileUpdateChange": {
"properties": {
"diff": {
@@ -479,36 +345,6 @@
],
"type": "object"
},
"ImageContent": {
"description": "An image provided to or from an LLM.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"data": {
"type": "string"
},
"mimeType": {
"type": "string"
},
"type": {
"type": "string"
}
},
"required": [
"data",
"mimeType",
"type"
],
"type": "object"
},
"McpToolCallError": {
"properties": {
"message": {
@@ -523,9 +359,7 @@
"McpToolCallResult": {
"properties": {
"content": {
"items": {
"$ref": "#/definitions/ContentBlock"
},
"items": true,
"type": "array"
},
"structuredContent": true
@@ -610,95 +444,6 @@
}
]
},
"ResourceLink": {
"description": "A resource that the server is capable of reading, included in a prompt or tool call result.\n\nNote: resource links returned by tools are not guaranteed to appear in the results of `resources/list` requests.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"description": {
"type": [
"string",
"null"
]
},
"mimeType": {
"type": [
"string",
"null"
]
},
"name": {
"type": "string"
},
"size": {
"format": "int64",
"type": [
"integer",
"null"
]
},
"title": {
"type": [
"string",
"null"
]
},
"type": {
"type": "string"
},
"uri": {
"type": "string"
}
},
"required": [
"name",
"type",
"uri"
],
"type": "object"
},
"Role": {
"description": "The sender or recipient of messages and data in a conversation.",
"enum": [
"assistant",
"user"
],
"type": "string"
},
"TextContent": {
"description": "Text provided to or from an LLM.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"text": {
"type": "string"
},
"type": {
"type": "string"
}
},
"required": [
"text",
"type"
],
"type": "object"
},
"TextElement": {
"properties": {
"byteRange": {
@@ -722,27 +467,6 @@
],
"type": "object"
},
"TextResourceContents": {
"properties": {
"mimeType": {
"type": [
"string",
"null"
]
},
"text": {
"type": "string"
},
"uri": {
"type": "string"
}
},
"required": [
"text",
"uri"
],
"type": "object"
},
"ThreadItem": {
"oneOf": [
{

View File

@@ -5,34 +5,6 @@
"description": "A path that is guaranteed to be absolute and normalized (though it is not guaranteed to be canonicalized or exist on the filesystem).\n\nIMPORTANT: When deserializing an `AbsolutePathBuf`, a base path must be set using [AbsolutePathBufGuard::new]. If no base path is set, the deserialization will fail unless the path being deserialized is already absolute.",
"type": "string"
},
"Annotations": {
"description": "Optional annotations for the client. The client can use annotations to inform how objects are used or displayed",
"properties": {
"audience": {
"items": {
"$ref": "#/definitions/Role"
},
"type": [
"array",
"null"
]
},
"lastModified": {
"type": [
"string",
"null"
]
},
"priority": {
"format": "double",
"type": [
"number",
"null"
]
}
},
"type": "object"
},
"AskForApproval": {
"enum": [
"untrusted",
@@ -42,57 +14,6 @@
],
"type": "string"
},
"AudioContent": {
"description": "Audio provided to or from an LLM.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"data": {
"type": "string"
},
"mimeType": {
"type": "string"
},
"type": {
"type": "string"
}
},
"required": [
"data",
"mimeType",
"type"
],
"type": "object"
},
"BlobResourceContents": {
"properties": {
"blob": {
"type": "string"
},
"mimeType": {
"type": [
"string",
"null"
]
},
"uri": {
"type": "string"
}
},
"required": [
"blob",
"uri"
],
"type": "object"
},
"ByteRange": {
"properties": {
"end": {
@@ -418,61 +339,6 @@
],
"type": "string"
},
"ContentBlock": {
"anyOf": [
{
"$ref": "#/definitions/TextContent"
},
{
"$ref": "#/definitions/ImageContent"
},
{
"$ref": "#/definitions/AudioContent"
},
{
"$ref": "#/definitions/ResourceLink"
},
{
"$ref": "#/definitions/EmbeddedResource"
}
]
},
"EmbeddedResource": {
"description": "The contents of a resource, embedded into a prompt or tool call result.\n\nIt is up to the client how best to render embedded resources for the benefit of the LLM and/or the user.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"resource": {
"$ref": "#/definitions/EmbeddedResourceResource"
},
"type": {
"type": "string"
}
},
"required": [
"resource",
"type"
],
"type": "object"
},
"EmbeddedResourceResource": {
"anyOf": [
{
"$ref": "#/definitions/TextResourceContents"
},
{
"$ref": "#/definitions/BlobResourceContents"
}
]
},
"FileUpdateChange": {
"properties": {
"diff": {
@@ -515,36 +381,6 @@
},
"type": "object"
},
"ImageContent": {
"description": "An image provided to or from an LLM.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"data": {
"type": "string"
},
"mimeType": {
"type": "string"
},
"type": {
"type": "string"
}
},
"required": [
"data",
"mimeType",
"type"
],
"type": "object"
},
"McpToolCallError": {
"properties": {
"message": {
@@ -559,9 +395,7 @@
"McpToolCallResult": {
"properties": {
"content": {
"items": {
"$ref": "#/definitions/ContentBlock"
},
"items": true,
"type": "array"
},
"structuredContent": true
@@ -665,69 +499,6 @@
],
"type": "string"
},
"ResourceLink": {
"description": "A resource that the server is capable of reading, included in a prompt or tool call result.\n\nNote: resource links returned by tools are not guaranteed to appear in the results of `resources/list` requests.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"description": {
"type": [
"string",
"null"
]
},
"mimeType": {
"type": [
"string",
"null"
]
},
"name": {
"type": "string"
},
"size": {
"format": "int64",
"type": [
"integer",
"null"
]
},
"title": {
"type": [
"string",
"null"
]
},
"type": {
"type": "string"
},
"uri": {
"type": "string"
}
},
"required": [
"name",
"type",
"uri"
],
"type": "object"
},
"Role": {
"description": "The sender or recipient of messages and data in a conversation.",
"enum": [
"assistant",
"user"
],
"type": "string"
},
"SandboxPolicy": {
"oneOf": [
{
@@ -900,32 +671,6 @@
}
]
},
"TextContent": {
"description": "Text provided to or from an LLM.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"text": {
"type": "string"
},
"type": {
"type": "string"
}
},
"required": [
"text",
"type"
],
"type": "object"
},
"TextElement": {
"properties": {
"byteRange": {
@@ -949,27 +694,6 @@
],
"type": "object"
},
"TextResourceContents": {
"properties": {
"mimeType": {
"type": [
"string",
"null"
]
},
"text": {
"type": "string"
},
"uri": {
"type": "string"
}
},
"required": [
"text",
"uri"
],
"type": "object"
},
"Thread": {
"properties": {
"cliVersion": {

View File

@@ -1,85 +1,6 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"definitions": {
"Annotations": {
"description": "Optional annotations for the client. The client can use annotations to inform how objects are used or displayed",
"properties": {
"audience": {
"items": {
"$ref": "#/definitions/Role"
},
"type": [
"array",
"null"
]
},
"lastModified": {
"type": [
"string",
"null"
]
},
"priority": {
"format": "double",
"type": [
"number",
"null"
]
}
},
"type": "object"
},
"AudioContent": {
"description": "Audio provided to or from an LLM.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"data": {
"type": "string"
},
"mimeType": {
"type": "string"
},
"type": {
"type": "string"
}
},
"required": [
"data",
"mimeType",
"type"
],
"type": "object"
},
"BlobResourceContents": {
"properties": {
"blob": {
"type": "string"
},
"mimeType": {
"type": [
"string",
"null"
]
},
"uri": {
"type": "string"
}
},
"required": [
"blob",
"uri"
],
"type": "object"
},
"ByteRange": {
"properties": {
"end": {
@@ -405,61 +326,6 @@
],
"type": "string"
},
"ContentBlock": {
"anyOf": [
{
"$ref": "#/definitions/TextContent"
},
{
"$ref": "#/definitions/ImageContent"
},
{
"$ref": "#/definitions/AudioContent"
},
{
"$ref": "#/definitions/ResourceLink"
},
{
"$ref": "#/definitions/EmbeddedResource"
}
]
},
"EmbeddedResource": {
"description": "The contents of a resource, embedded into a prompt or tool call result.\n\nIt is up to the client how best to render embedded resources for the benefit of the LLM and/or the user.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"resource": {
"$ref": "#/definitions/EmbeddedResourceResource"
},
"type": {
"type": "string"
}
},
"required": [
"resource",
"type"
],
"type": "object"
},
"EmbeddedResourceResource": {
"anyOf": [
{
"$ref": "#/definitions/TextResourceContents"
},
{
"$ref": "#/definitions/BlobResourceContents"
}
]
},
"FileUpdateChange": {
"properties": {
"diff": {
@@ -502,36 +368,6 @@
},
"type": "object"
},
"ImageContent": {
"description": "An image provided to or from an LLM.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"data": {
"type": "string"
},
"mimeType": {
"type": "string"
},
"type": {
"type": "string"
}
},
"required": [
"data",
"mimeType",
"type"
],
"type": "object"
},
"McpToolCallError": {
"properties": {
"message": {
@@ -546,9 +382,7 @@
"McpToolCallResult": {
"properties": {
"content": {
"items": {
"$ref": "#/definitions/ContentBlock"
},
"items": true,
"type": "array"
},
"structuredContent": true
@@ -633,69 +467,6 @@
}
]
},
"ResourceLink": {
"description": "A resource that the server is capable of reading, included in a prompt or tool call result.\n\nNote: resource links returned by tools are not guaranteed to appear in the results of `resources/list` requests.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"description": {
"type": [
"string",
"null"
]
},
"mimeType": {
"type": [
"string",
"null"
]
},
"name": {
"type": "string"
},
"size": {
"format": "int64",
"type": [
"integer",
"null"
]
},
"title": {
"type": [
"string",
"null"
]
},
"type": {
"type": "string"
},
"uri": {
"type": "string"
}
},
"required": [
"name",
"type",
"uri"
],
"type": "object"
},
"Role": {
"description": "The sender or recipient of messages and data in a conversation.",
"enum": [
"assistant",
"user"
],
"type": "string"
},
"SessionSource": {
"oneOf": [
{
@@ -773,32 +544,6 @@
}
]
},
"TextContent": {
"description": "Text provided to or from an LLM.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"text": {
"type": "string"
},
"type": {
"type": "string"
}
},
"required": [
"text",
"type"
],
"type": "object"
},
"TextElement": {
"properties": {
"byteRange": {
@@ -822,27 +567,6 @@
],
"type": "object"
},
"TextResourceContents": {
"properties": {
"mimeType": {
"type": [
"string",
"null"
]
},
"text": {
"type": "string"
},
"uri": {
"type": "string"
}
},
"required": [
"text",
"uri"
],
"type": "object"
},
"Thread": {
"properties": {
"cliVersion": {

View File

@@ -1,85 +1,6 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"definitions": {
"Annotations": {
"description": "Optional annotations for the client. The client can use annotations to inform how objects are used or displayed",
"properties": {
"audience": {
"items": {
"$ref": "#/definitions/Role"
},
"type": [
"array",
"null"
]
},
"lastModified": {
"type": [
"string",
"null"
]
},
"priority": {
"format": "double",
"type": [
"number",
"null"
]
}
},
"type": "object"
},
"AudioContent": {
"description": "Audio provided to or from an LLM.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"data": {
"type": "string"
},
"mimeType": {
"type": "string"
},
"type": {
"type": "string"
}
},
"required": [
"data",
"mimeType",
"type"
],
"type": "object"
},
"BlobResourceContents": {
"properties": {
"blob": {
"type": "string"
},
"mimeType": {
"type": [
"string",
"null"
]
},
"uri": {
"type": "string"
}
},
"required": [
"blob",
"uri"
],
"type": "object"
},
"ByteRange": {
"properties": {
"end": {
@@ -405,61 +326,6 @@
],
"type": "string"
},
"ContentBlock": {
"anyOf": [
{
"$ref": "#/definitions/TextContent"
},
{
"$ref": "#/definitions/ImageContent"
},
{
"$ref": "#/definitions/AudioContent"
},
{
"$ref": "#/definitions/ResourceLink"
},
{
"$ref": "#/definitions/EmbeddedResource"
}
]
},
"EmbeddedResource": {
"description": "The contents of a resource, embedded into a prompt or tool call result.\n\nIt is up to the client how best to render embedded resources for the benefit of the LLM and/or the user.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"resource": {
"$ref": "#/definitions/EmbeddedResourceResource"
},
"type": {
"type": "string"
}
},
"required": [
"resource",
"type"
],
"type": "object"
},
"EmbeddedResourceResource": {
"anyOf": [
{
"$ref": "#/definitions/TextResourceContents"
},
{
"$ref": "#/definitions/BlobResourceContents"
}
]
},
"FileUpdateChange": {
"properties": {
"diff": {
@@ -502,36 +368,6 @@
},
"type": "object"
},
"ImageContent": {
"description": "An image provided to or from an LLM.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"data": {
"type": "string"
},
"mimeType": {
"type": "string"
},
"type": {
"type": "string"
}
},
"required": [
"data",
"mimeType",
"type"
],
"type": "object"
},
"McpToolCallError": {
"properties": {
"message": {
@@ -546,9 +382,7 @@
"McpToolCallResult": {
"properties": {
"content": {
"items": {
"$ref": "#/definitions/ContentBlock"
},
"items": true,
"type": "array"
},
"structuredContent": true
@@ -633,69 +467,6 @@
}
]
},
"ResourceLink": {
"description": "A resource that the server is capable of reading, included in a prompt or tool call result.\n\nNote: resource links returned by tools are not guaranteed to appear in the results of `resources/list` requests.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"description": {
"type": [
"string",
"null"
]
},
"mimeType": {
"type": [
"string",
"null"
]
},
"name": {
"type": "string"
},
"size": {
"format": "int64",
"type": [
"integer",
"null"
]
},
"title": {
"type": [
"string",
"null"
]
},
"type": {
"type": "string"
},
"uri": {
"type": "string"
}
},
"required": [
"name",
"type",
"uri"
],
"type": "object"
},
"Role": {
"description": "The sender or recipient of messages and data in a conversation.",
"enum": [
"assistant",
"user"
],
"type": "string"
},
"SessionSource": {
"oneOf": [
{
@@ -773,32 +544,6 @@
}
]
},
"TextContent": {
"description": "Text provided to or from an LLM.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"text": {
"type": "string"
},
"type": {
"type": "string"
}
},
"required": [
"text",
"type"
],
"type": "object"
},
"TextElement": {
"properties": {
"byteRange": {
@@ -822,27 +567,6 @@
],
"type": "object"
},
"TextResourceContents": {
"properties": {
"mimeType": {
"type": [
"string",
"null"
]
},
"text": {
"type": "string"
},
"uri": {
"type": "string"
}
},
"required": [
"text",
"uri"
],
"type": "object"
},
"Thread": {
"properties": {
"cliVersion": {

View File

@@ -120,7 +120,7 @@
]
},
"FunctionCallOutputPayload": {
"description": "The payload we send back to OpenAI when reporting a tool call result.\n\n`content` preserves the historical plain-string payload so downstream integrations (tests, logging, etc.) can keep treating tool output as `String`. When an MCP server returns richer data we additionally populate `content_items` with the structured form that the Responses/Chat Completions APIs understand.",
"description": "The payload we send back to OpenAI when reporting a tool call result.\n\n`content` preserves the historical plain-string payload so downstream integrations (tests, logging, etc.) can keep treating tool output as `String`. When an MCP server returns richer data we additionally populate `content_items` with the structured form that the Responses API understands.",
"properties": {
"content": {
"type": "string"
@@ -242,6 +242,13 @@
],
"type": "string"
},
"MessagePhase": {
"enum": [
"commentary",
"final_answer"
],
"type": "string"
},
"Personality": {
"enum": [
"friendly",
@@ -340,6 +347,16 @@
],
"writeOnly": true
},
"phase": {
"anyOf": [
{
"$ref": "#/definitions/MessagePhase"
},
{
"type": "null"
}
]
},
"role": {
"type": "string"
},
@@ -416,7 +433,7 @@
]
},
"id": {
"description": "Set when using the chat completions API.",
"description": "Legacy id field retained for compatibility with older payloads.",
"type": [
"string",
"null"

View File

@@ -5,34 +5,6 @@
"description": "A path that is guaranteed to be absolute and normalized (though it is not guaranteed to be canonicalized or exist on the filesystem).\n\nIMPORTANT: When deserializing an `AbsolutePathBuf`, a base path must be set using [AbsolutePathBufGuard::new]. If no base path is set, the deserialization will fail unless the path being deserialized is already absolute.",
"type": "string"
},
"Annotations": {
"description": "Optional annotations for the client. The client can use annotations to inform how objects are used or displayed",
"properties": {
"audience": {
"items": {
"$ref": "#/definitions/Role"
},
"type": [
"array",
"null"
]
},
"lastModified": {
"type": [
"string",
"null"
]
},
"priority": {
"format": "double",
"type": [
"number",
"null"
]
}
},
"type": "object"
},
"AskForApproval": {
"enum": [
"untrusted",
@@ -42,57 +14,6 @@
],
"type": "string"
},
"AudioContent": {
"description": "Audio provided to or from an LLM.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"data": {
"type": "string"
},
"mimeType": {
"type": "string"
},
"type": {
"type": "string"
}
},
"required": [
"data",
"mimeType",
"type"
],
"type": "object"
},
"BlobResourceContents": {
"properties": {
"blob": {
"type": "string"
},
"mimeType": {
"type": [
"string",
"null"
]
},
"uri": {
"type": "string"
}
},
"required": [
"blob",
"uri"
],
"type": "object"
},
"ByteRange": {
"properties": {
"end": {
@@ -418,61 +339,6 @@
],
"type": "string"
},
"ContentBlock": {
"anyOf": [
{
"$ref": "#/definitions/TextContent"
},
{
"$ref": "#/definitions/ImageContent"
},
{
"$ref": "#/definitions/AudioContent"
},
{
"$ref": "#/definitions/ResourceLink"
},
{
"$ref": "#/definitions/EmbeddedResource"
}
]
},
"EmbeddedResource": {
"description": "The contents of a resource, embedded into a prompt or tool call result.\n\nIt is up to the client how best to render embedded resources for the benefit of the LLM and/or the user.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"resource": {
"$ref": "#/definitions/EmbeddedResourceResource"
},
"type": {
"type": "string"
}
},
"required": [
"resource",
"type"
],
"type": "object"
},
"EmbeddedResourceResource": {
"anyOf": [
{
"$ref": "#/definitions/TextResourceContents"
},
{
"$ref": "#/definitions/BlobResourceContents"
}
]
},
"FileUpdateChange": {
"properties": {
"diff": {
@@ -515,36 +381,6 @@
},
"type": "object"
},
"ImageContent": {
"description": "An image provided to or from an LLM.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"data": {
"type": "string"
},
"mimeType": {
"type": "string"
},
"type": {
"type": "string"
}
},
"required": [
"data",
"mimeType",
"type"
],
"type": "object"
},
"McpToolCallError": {
"properties": {
"message": {
@@ -559,9 +395,7 @@
"McpToolCallResult": {
"properties": {
"content": {
"items": {
"$ref": "#/definitions/ContentBlock"
},
"items": true,
"type": "array"
},
"structuredContent": true
@@ -665,69 +499,6 @@
],
"type": "string"
},
"ResourceLink": {
"description": "A resource that the server is capable of reading, included in a prompt or tool call result.\n\nNote: resource links returned by tools are not guaranteed to appear in the results of `resources/list` requests.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"description": {
"type": [
"string",
"null"
]
},
"mimeType": {
"type": [
"string",
"null"
]
},
"name": {
"type": "string"
},
"size": {
"format": "int64",
"type": [
"integer",
"null"
]
},
"title": {
"type": [
"string",
"null"
]
},
"type": {
"type": "string"
},
"uri": {
"type": "string"
}
},
"required": [
"name",
"type",
"uri"
],
"type": "object"
},
"Role": {
"description": "The sender or recipient of messages and data in a conversation.",
"enum": [
"assistant",
"user"
],
"type": "string"
},
"SandboxPolicy": {
"oneOf": [
{
@@ -900,32 +671,6 @@
}
]
},
"TextContent": {
"description": "Text provided to or from an LLM.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"text": {
"type": "string"
},
"type": {
"type": "string"
}
},
"required": [
"text",
"type"
],
"type": "object"
},
"TextElement": {
"properties": {
"byteRange": {
@@ -949,27 +694,6 @@
],
"type": "object"
},
"TextResourceContents": {
"properties": {
"mimeType": {
"type": [
"string",
"null"
]
},
"text": {
"type": "string"
},
"uri": {
"type": "string"
}
},
"required": [
"text",
"uri"
],
"type": "object"
},
"Thread": {
"properties": {
"cliVersion": {

View File

@@ -1,85 +1,6 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"definitions": {
"Annotations": {
"description": "Optional annotations for the client. The client can use annotations to inform how objects are used or displayed",
"properties": {
"audience": {
"items": {
"$ref": "#/definitions/Role"
},
"type": [
"array",
"null"
]
},
"lastModified": {
"type": [
"string",
"null"
]
},
"priority": {
"format": "double",
"type": [
"number",
"null"
]
}
},
"type": "object"
},
"AudioContent": {
"description": "Audio provided to or from an LLM.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"data": {
"type": "string"
},
"mimeType": {
"type": "string"
},
"type": {
"type": "string"
}
},
"required": [
"data",
"mimeType",
"type"
],
"type": "object"
},
"BlobResourceContents": {
"properties": {
"blob": {
"type": "string"
},
"mimeType": {
"type": [
"string",
"null"
]
},
"uri": {
"type": "string"
}
},
"required": [
"blob",
"uri"
],
"type": "object"
},
"ByteRange": {
"properties": {
"end": {
@@ -405,61 +326,6 @@
],
"type": "string"
},
"ContentBlock": {
"anyOf": [
{
"$ref": "#/definitions/TextContent"
},
{
"$ref": "#/definitions/ImageContent"
},
{
"$ref": "#/definitions/AudioContent"
},
{
"$ref": "#/definitions/ResourceLink"
},
{
"$ref": "#/definitions/EmbeddedResource"
}
]
},
"EmbeddedResource": {
"description": "The contents of a resource, embedded into a prompt or tool call result.\n\nIt is up to the client how best to render embedded resources for the benefit of the LLM and/or the user.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"resource": {
"$ref": "#/definitions/EmbeddedResourceResource"
},
"type": {
"type": "string"
}
},
"required": [
"resource",
"type"
],
"type": "object"
},
"EmbeddedResourceResource": {
"anyOf": [
{
"$ref": "#/definitions/TextResourceContents"
},
{
"$ref": "#/definitions/BlobResourceContents"
}
]
},
"FileUpdateChange": {
"properties": {
"diff": {
@@ -502,36 +368,6 @@
},
"type": "object"
},
"ImageContent": {
"description": "An image provided to or from an LLM.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"data": {
"type": "string"
},
"mimeType": {
"type": "string"
},
"type": {
"type": "string"
}
},
"required": [
"data",
"mimeType",
"type"
],
"type": "object"
},
"McpToolCallError": {
"properties": {
"message": {
@@ -546,9 +382,7 @@
"McpToolCallResult": {
"properties": {
"content": {
"items": {
"$ref": "#/definitions/ContentBlock"
},
"items": true,
"type": "array"
},
"structuredContent": true
@@ -633,69 +467,6 @@
}
]
},
"ResourceLink": {
"description": "A resource that the server is capable of reading, included in a prompt or tool call result.\n\nNote: resource links returned by tools are not guaranteed to appear in the results of `resources/list` requests.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"description": {
"type": [
"string",
"null"
]
},
"mimeType": {
"type": [
"string",
"null"
]
},
"name": {
"type": "string"
},
"size": {
"format": "int64",
"type": [
"integer",
"null"
]
},
"title": {
"type": [
"string",
"null"
]
},
"type": {
"type": "string"
},
"uri": {
"type": "string"
}
},
"required": [
"name",
"type",
"uri"
],
"type": "object"
},
"Role": {
"description": "The sender or recipient of messages and data in a conversation.",
"enum": [
"assistant",
"user"
],
"type": "string"
},
"SessionSource": {
"oneOf": [
{
@@ -773,32 +544,6 @@
}
]
},
"TextContent": {
"description": "Text provided to or from an LLM.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"text": {
"type": "string"
},
"type": {
"type": "string"
}
},
"required": [
"text",
"type"
],
"type": "object"
},
"TextElement": {
"properties": {
"byteRange": {
@@ -822,27 +567,6 @@
],
"type": "object"
},
"TextResourceContents": {
"properties": {
"mimeType": {
"type": [
"string",
"null"
]
},
"text": {
"type": "string"
},
"uri": {
"type": "string"
}
},
"required": [
"text",
"uri"
],
"type": "object"
},
"Thread": {
"properties": {
"cliVersion": {

View File

@@ -5,34 +5,6 @@
"description": "A path that is guaranteed to be absolute and normalized (though it is not guaranteed to be canonicalized or exist on the filesystem).\n\nIMPORTANT: When deserializing an `AbsolutePathBuf`, a base path must be set using [AbsolutePathBufGuard::new]. If no base path is set, the deserialization will fail unless the path being deserialized is already absolute.",
"type": "string"
},
"Annotations": {
"description": "Optional annotations for the client. The client can use annotations to inform how objects are used or displayed",
"properties": {
"audience": {
"items": {
"$ref": "#/definitions/Role"
},
"type": [
"array",
"null"
]
},
"lastModified": {
"type": [
"string",
"null"
]
},
"priority": {
"format": "double",
"type": [
"number",
"null"
]
}
},
"type": "object"
},
"AskForApproval": {
"enum": [
"untrusted",
@@ -42,57 +14,6 @@
],
"type": "string"
},
"AudioContent": {
"description": "Audio provided to or from an LLM.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"data": {
"type": "string"
},
"mimeType": {
"type": "string"
},
"type": {
"type": "string"
}
},
"required": [
"data",
"mimeType",
"type"
],
"type": "object"
},
"BlobResourceContents": {
"properties": {
"blob": {
"type": "string"
},
"mimeType": {
"type": [
"string",
"null"
]
},
"uri": {
"type": "string"
}
},
"required": [
"blob",
"uri"
],
"type": "object"
},
"ByteRange": {
"properties": {
"end": {
@@ -418,61 +339,6 @@
],
"type": "string"
},
"ContentBlock": {
"anyOf": [
{
"$ref": "#/definitions/TextContent"
},
{
"$ref": "#/definitions/ImageContent"
},
{
"$ref": "#/definitions/AudioContent"
},
{
"$ref": "#/definitions/ResourceLink"
},
{
"$ref": "#/definitions/EmbeddedResource"
}
]
},
"EmbeddedResource": {
"description": "The contents of a resource, embedded into a prompt or tool call result.\n\nIt is up to the client how best to render embedded resources for the benefit of the LLM and/or the user.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"resource": {
"$ref": "#/definitions/EmbeddedResourceResource"
},
"type": {
"type": "string"
}
},
"required": [
"resource",
"type"
],
"type": "object"
},
"EmbeddedResourceResource": {
"anyOf": [
{
"$ref": "#/definitions/TextResourceContents"
},
{
"$ref": "#/definitions/BlobResourceContents"
}
]
},
"FileUpdateChange": {
"properties": {
"diff": {
@@ -515,36 +381,6 @@
},
"type": "object"
},
"ImageContent": {
"description": "An image provided to or from an LLM.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"data": {
"type": "string"
},
"mimeType": {
"type": "string"
},
"type": {
"type": "string"
}
},
"required": [
"data",
"mimeType",
"type"
],
"type": "object"
},
"McpToolCallError": {
"properties": {
"message": {
@@ -559,9 +395,7 @@
"McpToolCallResult": {
"properties": {
"content": {
"items": {
"$ref": "#/definitions/ContentBlock"
},
"items": true,
"type": "array"
},
"structuredContent": true
@@ -665,69 +499,6 @@
],
"type": "string"
},
"ResourceLink": {
"description": "A resource that the server is capable of reading, included in a prompt or tool call result.\n\nNote: resource links returned by tools are not guaranteed to appear in the results of `resources/list` requests.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"description": {
"type": [
"string",
"null"
]
},
"mimeType": {
"type": [
"string",
"null"
]
},
"name": {
"type": "string"
},
"size": {
"format": "int64",
"type": [
"integer",
"null"
]
},
"title": {
"type": [
"string",
"null"
]
},
"type": {
"type": "string"
},
"uri": {
"type": "string"
}
},
"required": [
"name",
"type",
"uri"
],
"type": "object"
},
"Role": {
"description": "The sender or recipient of messages and data in a conversation.",
"enum": [
"assistant",
"user"
],
"type": "string"
},
"SandboxPolicy": {
"oneOf": [
{
@@ -900,32 +671,6 @@
}
]
},
"TextContent": {
"description": "Text provided to or from an LLM.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"text": {
"type": "string"
},
"type": {
"type": "string"
}
},
"required": [
"text",
"type"
],
"type": "object"
},
"TextElement": {
"properties": {
"byteRange": {
@@ -949,27 +694,6 @@
],
"type": "object"
},
"TextResourceContents": {
"properties": {
"mimeType": {
"type": [
"string",
"null"
]
},
"text": {
"type": "string"
},
"uri": {
"type": "string"
}
},
"required": [
"text",
"uri"
],
"type": "object"
},
"Thread": {
"properties": {
"cliVersion": {

View File

@@ -1,85 +1,6 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"definitions": {
"Annotations": {
"description": "Optional annotations for the client. The client can use annotations to inform how objects are used or displayed",
"properties": {
"audience": {
"items": {
"$ref": "#/definitions/Role"
},
"type": [
"array",
"null"
]
},
"lastModified": {
"type": [
"string",
"null"
]
},
"priority": {
"format": "double",
"type": [
"number",
"null"
]
}
},
"type": "object"
},
"AudioContent": {
"description": "Audio provided to or from an LLM.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"data": {
"type": "string"
},
"mimeType": {
"type": "string"
},
"type": {
"type": "string"
}
},
"required": [
"data",
"mimeType",
"type"
],
"type": "object"
},
"BlobResourceContents": {
"properties": {
"blob": {
"type": "string"
},
"mimeType": {
"type": [
"string",
"null"
]
},
"uri": {
"type": "string"
}
},
"required": [
"blob",
"uri"
],
"type": "object"
},
"ByteRange": {
"properties": {
"end": {
@@ -405,61 +326,6 @@
],
"type": "string"
},
"ContentBlock": {
"anyOf": [
{
"$ref": "#/definitions/TextContent"
},
{
"$ref": "#/definitions/ImageContent"
},
{
"$ref": "#/definitions/AudioContent"
},
{
"$ref": "#/definitions/ResourceLink"
},
{
"$ref": "#/definitions/EmbeddedResource"
}
]
},
"EmbeddedResource": {
"description": "The contents of a resource, embedded into a prompt or tool call result.\n\nIt is up to the client how best to render embedded resources for the benefit of the LLM and/or the user.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"resource": {
"$ref": "#/definitions/EmbeddedResourceResource"
},
"type": {
"type": "string"
}
},
"required": [
"resource",
"type"
],
"type": "object"
},
"EmbeddedResourceResource": {
"anyOf": [
{
"$ref": "#/definitions/TextResourceContents"
},
{
"$ref": "#/definitions/BlobResourceContents"
}
]
},
"FileUpdateChange": {
"properties": {
"diff": {
@@ -502,36 +368,6 @@
},
"type": "object"
},
"ImageContent": {
"description": "An image provided to or from an LLM.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"data": {
"type": "string"
},
"mimeType": {
"type": "string"
},
"type": {
"type": "string"
}
},
"required": [
"data",
"mimeType",
"type"
],
"type": "object"
},
"McpToolCallError": {
"properties": {
"message": {
@@ -546,9 +382,7 @@
"McpToolCallResult": {
"properties": {
"content": {
"items": {
"$ref": "#/definitions/ContentBlock"
},
"items": true,
"type": "array"
},
"structuredContent": true
@@ -633,69 +467,6 @@
}
]
},
"ResourceLink": {
"description": "A resource that the server is capable of reading, included in a prompt or tool call result.\n\nNote: resource links returned by tools are not guaranteed to appear in the results of `resources/list` requests.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"description": {
"type": [
"string",
"null"
]
},
"mimeType": {
"type": [
"string",
"null"
]
},
"name": {
"type": "string"
},
"size": {
"format": "int64",
"type": [
"integer",
"null"
]
},
"title": {
"type": [
"string",
"null"
]
},
"type": {
"type": "string"
},
"uri": {
"type": "string"
}
},
"required": [
"name",
"type",
"uri"
],
"type": "object"
},
"Role": {
"description": "The sender or recipient of messages and data in a conversation.",
"enum": [
"assistant",
"user"
],
"type": "string"
},
"SessionSource": {
"oneOf": [
{
@@ -773,32 +544,6 @@
}
]
},
"TextContent": {
"description": "Text provided to or from an LLM.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"text": {
"type": "string"
},
"type": {
"type": "string"
}
},
"required": [
"text",
"type"
],
"type": "object"
},
"TextElement": {
"properties": {
"byteRange": {
@@ -822,27 +567,6 @@
],
"type": "object"
},
"TextResourceContents": {
"properties": {
"mimeType": {
"type": [
"string",
"null"
]
},
"text": {
"type": "string"
},
"uri": {
"type": "string"
}
},
"required": [
"text",
"uri"
],
"type": "object"
},
"Thread": {
"properties": {
"cliVersion": {

View File

@@ -1,85 +1,6 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"definitions": {
"Annotations": {
"description": "Optional annotations for the client. The client can use annotations to inform how objects are used or displayed",
"properties": {
"audience": {
"items": {
"$ref": "#/definitions/Role"
},
"type": [
"array",
"null"
]
},
"lastModified": {
"type": [
"string",
"null"
]
},
"priority": {
"format": "double",
"type": [
"number",
"null"
]
}
},
"type": "object"
},
"AudioContent": {
"description": "Audio provided to or from an LLM.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"data": {
"type": "string"
},
"mimeType": {
"type": "string"
},
"type": {
"type": "string"
}
},
"required": [
"data",
"mimeType",
"type"
],
"type": "object"
},
"BlobResourceContents": {
"properties": {
"blob": {
"type": "string"
},
"mimeType": {
"type": [
"string",
"null"
]
},
"uri": {
"type": "string"
}
},
"required": [
"blob",
"uri"
],
"type": "object"
},
"ByteRange": {
"properties": {
"end": {
@@ -405,61 +326,6 @@
],
"type": "string"
},
"ContentBlock": {
"anyOf": [
{
"$ref": "#/definitions/TextContent"
},
{
"$ref": "#/definitions/ImageContent"
},
{
"$ref": "#/definitions/AudioContent"
},
{
"$ref": "#/definitions/ResourceLink"
},
{
"$ref": "#/definitions/EmbeddedResource"
}
]
},
"EmbeddedResource": {
"description": "The contents of a resource, embedded into a prompt or tool call result.\n\nIt is up to the client how best to render embedded resources for the benefit of the LLM and/or the user.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"resource": {
"$ref": "#/definitions/EmbeddedResourceResource"
},
"type": {
"type": "string"
}
},
"required": [
"resource",
"type"
],
"type": "object"
},
"EmbeddedResourceResource": {
"anyOf": [
{
"$ref": "#/definitions/TextResourceContents"
},
{
"$ref": "#/definitions/BlobResourceContents"
}
]
},
"FileUpdateChange": {
"properties": {
"diff": {
@@ -502,36 +368,6 @@
},
"type": "object"
},
"ImageContent": {
"description": "An image provided to or from an LLM.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"data": {
"type": "string"
},
"mimeType": {
"type": "string"
},
"type": {
"type": "string"
}
},
"required": [
"data",
"mimeType",
"type"
],
"type": "object"
},
"McpToolCallError": {
"properties": {
"message": {
@@ -546,9 +382,7 @@
"McpToolCallResult": {
"properties": {
"content": {
"items": {
"$ref": "#/definitions/ContentBlock"
},
"items": true,
"type": "array"
},
"structuredContent": true
@@ -633,69 +467,6 @@
}
]
},
"ResourceLink": {
"description": "A resource that the server is capable of reading, included in a prompt or tool call result.\n\nNote: resource links returned by tools are not guaranteed to appear in the results of `resources/list` requests.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"description": {
"type": [
"string",
"null"
]
},
"mimeType": {
"type": [
"string",
"null"
]
},
"name": {
"type": "string"
},
"size": {
"format": "int64",
"type": [
"integer",
"null"
]
},
"title": {
"type": [
"string",
"null"
]
},
"type": {
"type": "string"
},
"uri": {
"type": "string"
}
},
"required": [
"name",
"type",
"uri"
],
"type": "object"
},
"Role": {
"description": "The sender or recipient of messages and data in a conversation.",
"enum": [
"assistant",
"user"
],
"type": "string"
},
"SessionSource": {
"oneOf": [
{
@@ -773,32 +544,6 @@
}
]
},
"TextContent": {
"description": "Text provided to or from an LLM.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"text": {
"type": "string"
},
"type": {
"type": "string"
}
},
"required": [
"text",
"type"
],
"type": "object"
},
"TextElement": {
"properties": {
"byteRange": {
@@ -822,27 +567,6 @@
],
"type": "object"
},
"TextResourceContents": {
"properties": {
"mimeType": {
"type": [
"string",
"null"
]
},
"text": {
"type": "string"
},
"uri": {
"type": "string"
}
},
"required": [
"text",
"uri"
],
"type": "object"
},
"Thread": {
"properties": {
"cliVersion": {

View File

@@ -1,85 +1,6 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"definitions": {
"Annotations": {
"description": "Optional annotations for the client. The client can use annotations to inform how objects are used or displayed",
"properties": {
"audience": {
"items": {
"$ref": "#/definitions/Role"
},
"type": [
"array",
"null"
]
},
"lastModified": {
"type": [
"string",
"null"
]
},
"priority": {
"format": "double",
"type": [
"number",
"null"
]
}
},
"type": "object"
},
"AudioContent": {
"description": "Audio provided to or from an LLM.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"data": {
"type": "string"
},
"mimeType": {
"type": "string"
},
"type": {
"type": "string"
}
},
"required": [
"data",
"mimeType",
"type"
],
"type": "object"
},
"BlobResourceContents": {
"properties": {
"blob": {
"type": "string"
},
"mimeType": {
"type": [
"string",
"null"
]
},
"uri": {
"type": "string"
}
},
"required": [
"blob",
"uri"
],
"type": "object"
},
"ByteRange": {
"properties": {
"end": {
@@ -405,61 +326,6 @@
],
"type": "string"
},
"ContentBlock": {
"anyOf": [
{
"$ref": "#/definitions/TextContent"
},
{
"$ref": "#/definitions/ImageContent"
},
{
"$ref": "#/definitions/AudioContent"
},
{
"$ref": "#/definitions/ResourceLink"
},
{
"$ref": "#/definitions/EmbeddedResource"
}
]
},
"EmbeddedResource": {
"description": "The contents of a resource, embedded into a prompt or tool call result.\n\nIt is up to the client how best to render embedded resources for the benefit of the LLM and/or the user.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"resource": {
"$ref": "#/definitions/EmbeddedResourceResource"
},
"type": {
"type": "string"
}
},
"required": [
"resource",
"type"
],
"type": "object"
},
"EmbeddedResourceResource": {
"anyOf": [
{
"$ref": "#/definitions/TextResourceContents"
},
{
"$ref": "#/definitions/BlobResourceContents"
}
]
},
"FileUpdateChange": {
"properties": {
"diff": {
@@ -479,36 +345,6 @@
],
"type": "object"
},
"ImageContent": {
"description": "An image provided to or from an LLM.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"data": {
"type": "string"
},
"mimeType": {
"type": "string"
},
"type": {
"type": "string"
}
},
"required": [
"data",
"mimeType",
"type"
],
"type": "object"
},
"McpToolCallError": {
"properties": {
"message": {
@@ -523,9 +359,7 @@
"McpToolCallResult": {
"properties": {
"content": {
"items": {
"$ref": "#/definitions/ContentBlock"
},
"items": true,
"type": "array"
},
"structuredContent": true
@@ -610,95 +444,6 @@
}
]
},
"ResourceLink": {
"description": "A resource that the server is capable of reading, included in a prompt or tool call result.\n\nNote: resource links returned by tools are not guaranteed to appear in the results of `resources/list` requests.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"description": {
"type": [
"string",
"null"
]
},
"mimeType": {
"type": [
"string",
"null"
]
},
"name": {
"type": "string"
},
"size": {
"format": "int64",
"type": [
"integer",
"null"
]
},
"title": {
"type": [
"string",
"null"
]
},
"type": {
"type": "string"
},
"uri": {
"type": "string"
}
},
"required": [
"name",
"type",
"uri"
],
"type": "object"
},
"Role": {
"description": "The sender or recipient of messages and data in a conversation.",
"enum": [
"assistant",
"user"
],
"type": "string"
},
"TextContent": {
"description": "Text provided to or from an LLM.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"text": {
"type": "string"
},
"type": {
"type": "string"
}
},
"required": [
"text",
"type"
],
"type": "object"
},
"TextElement": {
"properties": {
"byteRange": {
@@ -722,27 +467,6 @@
],
"type": "object"
},
"TextResourceContents": {
"properties": {
"mimeType": {
"type": [
"string",
"null"
]
},
"text": {
"type": "string"
},
"uri": {
"type": "string"
}
},
"required": [
"text",
"uri"
],
"type": "object"
},
"ThreadItem": {
"oneOf": [
{

View File

@@ -1,85 +1,6 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"definitions": {
"Annotations": {
"description": "Optional annotations for the client. The client can use annotations to inform how objects are used or displayed",
"properties": {
"audience": {
"items": {
"$ref": "#/definitions/Role"
},
"type": [
"array",
"null"
]
},
"lastModified": {
"type": [
"string",
"null"
]
},
"priority": {
"format": "double",
"type": [
"number",
"null"
]
}
},
"type": "object"
},
"AudioContent": {
"description": "Audio provided to or from an LLM.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"data": {
"type": "string"
},
"mimeType": {
"type": "string"
},
"type": {
"type": "string"
}
},
"required": [
"data",
"mimeType",
"type"
],
"type": "object"
},
"BlobResourceContents": {
"properties": {
"blob": {
"type": "string"
},
"mimeType": {
"type": [
"string",
"null"
]
},
"uri": {
"type": "string"
}
},
"required": [
"blob",
"uri"
],
"type": "object"
},
"ByteRange": {
"properties": {
"end": {
@@ -405,61 +326,6 @@
],
"type": "string"
},
"ContentBlock": {
"anyOf": [
{
"$ref": "#/definitions/TextContent"
},
{
"$ref": "#/definitions/ImageContent"
},
{
"$ref": "#/definitions/AudioContent"
},
{
"$ref": "#/definitions/ResourceLink"
},
{
"$ref": "#/definitions/EmbeddedResource"
}
]
},
"EmbeddedResource": {
"description": "The contents of a resource, embedded into a prompt or tool call result.\n\nIt is up to the client how best to render embedded resources for the benefit of the LLM and/or the user.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"resource": {
"$ref": "#/definitions/EmbeddedResourceResource"
},
"type": {
"type": "string"
}
},
"required": [
"resource",
"type"
],
"type": "object"
},
"EmbeddedResourceResource": {
"anyOf": [
{
"$ref": "#/definitions/TextResourceContents"
},
{
"$ref": "#/definitions/BlobResourceContents"
}
]
},
"FileUpdateChange": {
"properties": {
"diff": {
@@ -479,36 +345,6 @@
],
"type": "object"
},
"ImageContent": {
"description": "An image provided to or from an LLM.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"data": {
"type": "string"
},
"mimeType": {
"type": "string"
},
"type": {
"type": "string"
}
},
"required": [
"data",
"mimeType",
"type"
],
"type": "object"
},
"McpToolCallError": {
"properties": {
"message": {
@@ -523,9 +359,7 @@
"McpToolCallResult": {
"properties": {
"content": {
"items": {
"$ref": "#/definitions/ContentBlock"
},
"items": true,
"type": "array"
},
"structuredContent": true
@@ -610,95 +444,6 @@
}
]
},
"ResourceLink": {
"description": "A resource that the server is capable of reading, included in a prompt or tool call result.\n\nNote: resource links returned by tools are not guaranteed to appear in the results of `resources/list` requests.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"description": {
"type": [
"string",
"null"
]
},
"mimeType": {
"type": [
"string",
"null"
]
},
"name": {
"type": "string"
},
"size": {
"format": "int64",
"type": [
"integer",
"null"
]
},
"title": {
"type": [
"string",
"null"
]
},
"type": {
"type": "string"
},
"uri": {
"type": "string"
}
},
"required": [
"name",
"type",
"uri"
],
"type": "object"
},
"Role": {
"description": "The sender or recipient of messages and data in a conversation.",
"enum": [
"assistant",
"user"
],
"type": "string"
},
"TextContent": {
"description": "Text provided to or from an LLM.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"text": {
"type": "string"
},
"type": {
"type": "string"
}
},
"required": [
"text",
"type"
],
"type": "object"
},
"TextElement": {
"properties": {
"byteRange": {
@@ -722,27 +467,6 @@
],
"type": "object"
},
"TextResourceContents": {
"properties": {
"mimeType": {
"type": [
"string",
"null"
]
},
"text": {
"type": "string"
},
"uri": {
"type": "string"
}
},
"required": [
"text",
"uri"
],
"type": "object"
},
"ThreadItem": {
"oneOf": [
{

View File

@@ -1,85 +1,6 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"definitions": {
"Annotations": {
"description": "Optional annotations for the client. The client can use annotations to inform how objects are used or displayed",
"properties": {
"audience": {
"items": {
"$ref": "#/definitions/Role"
},
"type": [
"array",
"null"
]
},
"lastModified": {
"type": [
"string",
"null"
]
},
"priority": {
"format": "double",
"type": [
"number",
"null"
]
}
},
"type": "object"
},
"AudioContent": {
"description": "Audio provided to or from an LLM.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"data": {
"type": "string"
},
"mimeType": {
"type": "string"
},
"type": {
"type": "string"
}
},
"required": [
"data",
"mimeType",
"type"
],
"type": "object"
},
"BlobResourceContents": {
"properties": {
"blob": {
"type": "string"
},
"mimeType": {
"type": [
"string",
"null"
]
},
"uri": {
"type": "string"
}
},
"required": [
"blob",
"uri"
],
"type": "object"
},
"ByteRange": {
"properties": {
"end": {
@@ -405,61 +326,6 @@
],
"type": "string"
},
"ContentBlock": {
"anyOf": [
{
"$ref": "#/definitions/TextContent"
},
{
"$ref": "#/definitions/ImageContent"
},
{
"$ref": "#/definitions/AudioContent"
},
{
"$ref": "#/definitions/ResourceLink"
},
{
"$ref": "#/definitions/EmbeddedResource"
}
]
},
"EmbeddedResource": {
"description": "The contents of a resource, embedded into a prompt or tool call result.\n\nIt is up to the client how best to render embedded resources for the benefit of the LLM and/or the user.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"resource": {
"$ref": "#/definitions/EmbeddedResourceResource"
},
"type": {
"type": "string"
}
},
"required": [
"resource",
"type"
],
"type": "object"
},
"EmbeddedResourceResource": {
"anyOf": [
{
"$ref": "#/definitions/TextResourceContents"
},
{
"$ref": "#/definitions/BlobResourceContents"
}
]
},
"FileUpdateChange": {
"properties": {
"diff": {
@@ -479,36 +345,6 @@
],
"type": "object"
},
"ImageContent": {
"description": "An image provided to or from an LLM.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"data": {
"type": "string"
},
"mimeType": {
"type": "string"
},
"type": {
"type": "string"
}
},
"required": [
"data",
"mimeType",
"type"
],
"type": "object"
},
"McpToolCallError": {
"properties": {
"message": {
@@ -523,9 +359,7 @@
"McpToolCallResult": {
"properties": {
"content": {
"items": {
"$ref": "#/definitions/ContentBlock"
},
"items": true,
"type": "array"
},
"structuredContent": true
@@ -610,95 +444,6 @@
}
]
},
"ResourceLink": {
"description": "A resource that the server is capable of reading, included in a prompt or tool call result.\n\nNote: resource links returned by tools are not guaranteed to appear in the results of `resources/list` requests.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"description": {
"type": [
"string",
"null"
]
},
"mimeType": {
"type": [
"string",
"null"
]
},
"name": {
"type": "string"
},
"size": {
"format": "int64",
"type": [
"integer",
"null"
]
},
"title": {
"type": [
"string",
"null"
]
},
"type": {
"type": "string"
},
"uri": {
"type": "string"
}
},
"required": [
"name",
"type",
"uri"
],
"type": "object"
},
"Role": {
"description": "The sender or recipient of messages and data in a conversation.",
"enum": [
"assistant",
"user"
],
"type": "string"
},
"TextContent": {
"description": "Text provided to or from an LLM.",
"properties": {
"annotations": {
"anyOf": [
{
"$ref": "#/definitions/Annotations"
},
{
"type": "null"
}
]
},
"text": {
"type": "string"
},
"type": {
"type": "string"
}
},
"required": [
"text",
"type"
],
"type": "object"
},
"TextElement": {
"properties": {
"byteRange": {
@@ -722,27 +467,6 @@
],
"type": "object"
},
"TextResourceContents": {
"properties": {
"mimeType": {
"type": [
"string",
"null"
]
},
"text": {
"type": "string"
},
"uri": {
"type": "string"
}
},
"required": [
"text",
"uri"
],
"type": "object"
},
"ThreadItem": {
"oneOf": [
{

View File

@@ -1,9 +0,0 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
import type { Role } from "./Role";
/**
* Optional annotations for the client. The client can use annotations to inform how objects are used or displayed
*/
export type Annotations = { audience?: Array<Role>, lastModified?: string, priority?: number, };

View File

@@ -1,9 +0,0 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
import type { Annotations } from "./Annotations";
/**
* Audio provided to or from an LLM.
*/
export type AudioContent = { annotations?: Annotations, data: string, mimeType: string, type: string, };

View File

@@ -1,5 +0,0 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
export type BlobResourceContents = { blob: string, mimeType?: string, uri: string, };

View File

@@ -1,10 +1,9 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
import type { ContentBlock } from "./ContentBlock";
import type { JsonValue } from "./serde_json/JsonValue";
/**
* The server's response to a tool call.
*/
export type CallToolResult = { content: Array<ContentBlock>, isError?: boolean, structuredContent?: JsonValue, };
export type CallToolResult = { content: Array<JsonValue>, structuredContent?: JsonValue, isError?: boolean, _meta?: JsonValue, };

View File

@@ -1,10 +0,0 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
import type { AudioContent } from "./AudioContent";
import type { EmbeddedResource } from "./EmbeddedResource";
import type { ImageContent } from "./ImageContent";
import type { ResourceLink } from "./ResourceLink";
import type { TextContent } from "./TextContent";
export type ContentBlock = TextContent | ImageContent | AudioContent | ResourceLink | EmbeddedResource;

View File

@@ -1,6 +1,5 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
import type { RequestId } from "./RequestId";
export type ElicitationRequestEvent = { server_name: string, id: RequestId, message: string, };
export type ElicitationRequestEvent = { server_name: string, id: string | number, message: string, };

View File

@@ -1,13 +0,0 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
import type { Annotations } from "./Annotations";
import type { EmbeddedResourceResource } from "./EmbeddedResourceResource";
/**
* The contents of a resource, embedded into a prompt or tool call result.
*
* It is up to the client how best to render embedded resources for the benefit
* of the LLM and/or the user.
*/
export type EmbeddedResource = { annotations?: Annotations, resource: EmbeddedResourceResource, type: string, };

View File

@@ -1,7 +0,0 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
import type { BlobResourceContents } from "./BlobResourceContents";
import type { TextResourceContents } from "./TextResourceContents";
export type EmbeddedResourceResource = TextResourceContents | BlobResourceContents;

View File

@@ -9,7 +9,6 @@ import type { FunctionCallOutputContentItem } from "./FunctionCallOutputContentI
* `content` preserves the historical plain-string payload so downstream
* integrations (tests, logging, etc.) can keep treating tool output as
* `String`. When an MCP server returns richer data we additionally populate
* `content_items` with the structured form that the Responses/Chat
* Completions APIs understand.
* `content_items` with the structured form that the Responses API understands.
*/
export type FunctionCallOutputPayload = { content: string, content_items: Array<FunctionCallOutputContentItem> | null, success: boolean | null, };

View File

@@ -1,9 +0,0 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
import type { Annotations } from "./Annotations";
/**
* An image provided to or from an LLM.
*/
export type ImageContent = { annotations?: Annotations, data: string, mimeType: string, type: string, };

View File

@@ -3,6 +3,6 @@
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
/**
* The sender or recipient of messages and data in a conversation.
* Canonical user-input modality tags advertised by a model.
*/
export type Role = "assistant" | "user";
export type InputModality = "text" | "image";

View File

@@ -2,4 +2,4 @@
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
export type TextResourceContents = { mimeType?: string, text: string, uri: string, };
export type MessagePhase = "commentary" | "final_answer";

View File

@@ -1,9 +1,9 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
import type { Annotations } from "./Annotations";
import type { JsonValue } from "./serde_json/JsonValue";
/**
* A known resource that the server is capable of reading.
*/
export type Resource = { annotations?: Annotations, description?: string, mimeType?: string, name: string, size?: bigint, title?: string, uri: string, };
export type Resource = { annotations?: JsonValue, description?: string, mimeType?: string, name: string, size?: number, title?: string, uri: string, icons?: Array<JsonValue>, _meta?: JsonValue, };

View File

@@ -1,11 +0,0 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
import type { Annotations } from "./Annotations";
/**
* A resource that the server is capable of reading, included in a prompt or tool call result.
*
* Note: resource links returned by tools are not guaranteed to appear in the results of `resources/list` requests.
*/
export type ResourceLink = { annotations?: Annotations, description?: string, mimeType?: string, name: string, size?: bigint, title?: string, type: string, uri: string, };

View File

@@ -1,9 +1,9 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
import type { Annotations } from "./Annotations";
import type { JsonValue } from "./serde_json/JsonValue";
/**
* A template description for resources available on the server.
*/
export type ResourceTemplate = { annotations?: Annotations, description?: string, mimeType?: string, name: string, title?: string, uriTemplate: string, };
export type ResourceTemplate = { annotations?: JsonValue, uriTemplate: string, name: string, title?: string, description?: string, mimeType?: string, };

View File

@@ -6,11 +6,12 @@ import type { FunctionCallOutputPayload } from "./FunctionCallOutputPayload";
import type { GhostCommit } from "./GhostCommit";
import type { LocalShellAction } from "./LocalShellAction";
import type { LocalShellStatus } from "./LocalShellStatus";
import type { MessagePhase } from "./MessagePhase";
import type { ReasoningItemContent } from "./ReasoningItemContent";
import type { ReasoningItemReasoningSummary } from "./ReasoningItemReasoningSummary";
import type { WebSearchAction } from "./WebSearchAction";
export type ResponseItem = { "type": "message", role: string, content: Array<ContentItem>, end_turn?: boolean, } | { "type": "reasoning", summary: Array<ReasoningItemReasoningSummary>, content?: Array<ReasoningItemContent>, encrypted_content: string | null, } | { "type": "local_shell_call",
export type ResponseItem = { "type": "message", role: string, content: Array<ContentItem>, end_turn?: boolean, phase?: MessagePhase, } | { "type": "reasoning", summary: Array<ReasoningItemReasoningSummary>, content?: Array<ReasoningItemContent>, encrypted_content: string | null, } | { "type": "local_shell_call",
/**
* Set when using the Responses API.
*/

View File

@@ -1,9 +0,0 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
import type { Annotations } from "./Annotations";
/**
* Text provided to or from an LLM.
*/
export type TextContent = { annotations?: Annotations, text: string, type: string, };

View File

@@ -1,11 +1,9 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
import type { ToolAnnotations } from "./ToolAnnotations";
import type { ToolInputSchema } from "./ToolInputSchema";
import type { ToolOutputSchema } from "./ToolOutputSchema";
import type { JsonValue } from "./serde_json/JsonValue";
/**
* Definition for a tool the client can call.
*/
export type Tool = { annotations?: ToolAnnotations, description?: string, inputSchema: ToolInputSchema, name: string, outputSchema?: ToolOutputSchema, title?: string, };
export type Tool = { name: string, title?: string, description?: string, inputSchema: JsonValue, outputSchema?: JsonValue, annotations?: JsonValue, icons?: Array<JsonValue>, _meta?: JsonValue, };

View File

@@ -1,15 +0,0 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
/**
* Additional properties describing a Tool to clients.
*
* NOTE: all properties in ToolAnnotations are **hints**.
* They are not guaranteed to provide a faithful description of
* tool behavior (including descriptive properties like `title`).
*
* Clients should never make tool use decisions based on ToolAnnotations
* received from untrusted servers.
*/
export type ToolAnnotations = { destructiveHint?: boolean, idempotentHint?: boolean, openWorldHint?: boolean, readOnlyHint?: boolean, title?: string, };

View File

@@ -1,9 +0,0 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
import type { JsonValue } from "./serde_json/JsonValue";
/**
* A JSON Schema object defining the expected parameters for the tool.
*/
export type ToolInputSchema = { properties?: JsonValue, required?: Array<string>, type: string, };

View File

@@ -1,10 +0,0 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
import type { JsonValue } from "./serde_json/JsonValue";
/**
* An optional JSON Schema object defining the structure of the tool's output returned in
* the structuredContent field of a CallToolResult.
*/
export type ToolOutputSchema = { properties?: JsonValue, required?: Array<string>, type: string, };

View File

@@ -14,18 +14,15 @@ export type { AgentReasoningRawContentDeltaEvent } from "./AgentReasoningRawCont
export type { AgentReasoningRawContentEvent } from "./AgentReasoningRawContentEvent";
export type { AgentReasoningSectionBreakEvent } from "./AgentReasoningSectionBreakEvent";
export type { AgentStatus } from "./AgentStatus";
export type { Annotations } from "./Annotations";
export type { ApplyPatchApprovalParams } from "./ApplyPatchApprovalParams";
export type { ApplyPatchApprovalRequestEvent } from "./ApplyPatchApprovalRequestEvent";
export type { ApplyPatchApprovalResponse } from "./ApplyPatchApprovalResponse";
export type { ArchiveConversationParams } from "./ArchiveConversationParams";
export type { ArchiveConversationResponse } from "./ArchiveConversationResponse";
export type { AskForApproval } from "./AskForApproval";
export type { AudioContent } from "./AudioContent";
export type { AuthMode } from "./AuthMode";
export type { AuthStatusChangeNotification } from "./AuthStatusChangeNotification";
export type { BackgroundEventEvent } from "./BackgroundEventEvent";
export type { BlobResourceContents } from "./BlobResourceContents";
export type { ByteRange } from "./ByteRange";
export type { CallToolResult } from "./CallToolResult";
export type { CancelLoginChatGptParams } from "./CancelLoginChatGptParams";
@@ -44,7 +41,6 @@ export type { CollabWaitingBeginEvent } from "./CollabWaitingBeginEvent";
export type { CollabWaitingEndEvent } from "./CollabWaitingEndEvent";
export type { CollaborationMode } from "./CollaborationMode";
export type { CollaborationModeMask } from "./CollaborationModeMask";
export type { ContentBlock } from "./ContentBlock";
export type { ContentItem } from "./ContentItem";
export type { ContextCompactedEvent } from "./ContextCompactedEvent";
export type { ContextCompactionItem } from "./ContextCompactionItem";
@@ -55,8 +51,6 @@ export type { CustomPrompt } from "./CustomPrompt";
export type { DeprecationNoticeEvent } from "./DeprecationNoticeEvent";
export type { DynamicToolCallRequest } from "./DynamicToolCallRequest";
export type { ElicitationRequestEvent } from "./ElicitationRequestEvent";
export type { EmbeddedResource } from "./EmbeddedResource";
export type { EmbeddedResourceResource } from "./EmbeddedResourceResource";
export type { ErrorEvent } from "./ErrorEvent";
export type { EventMsg } from "./EventMsg";
export type { ExecApprovalRequestEvent } from "./ExecApprovalRequestEvent";
@@ -92,11 +86,11 @@ export type { GitDiffToRemoteParams } from "./GitDiffToRemoteParams";
export type { GitDiffToRemoteResponse } from "./GitDiffToRemoteResponse";
export type { GitSha } from "./GitSha";
export type { HistoryEntry } from "./HistoryEntry";
export type { ImageContent } from "./ImageContent";
export type { InitializeCapabilities } from "./InitializeCapabilities";
export type { InitializeParams } from "./InitializeParams";
export type { InitializeResponse } from "./InitializeResponse";
export type { InputItem } from "./InputItem";
export type { InputModality } from "./InputModality";
export type { InterruptConversationParams } from "./InterruptConversationParams";
export type { InterruptConversationResponse } from "./InterruptConversationResponse";
export type { ItemCompletedEvent } from "./ItemCompletedEvent";
@@ -122,6 +116,7 @@ export type { McpStartupStatus } from "./McpStartupStatus";
export type { McpStartupUpdateEvent } from "./McpStartupUpdateEvent";
export type { McpToolCallBeginEvent } from "./McpToolCallBeginEvent";
export type { McpToolCallEndEvent } from "./McpToolCallEndEvent";
export type { MessagePhase } from "./MessagePhase";
export type { ModeKind } from "./ModeKind";
export type { NetworkAccess } from "./NetworkAccess";
export type { NewConversationParams } from "./NewConversationParams";
@@ -152,7 +147,6 @@ export type { RequestUserInputEvent } from "./RequestUserInputEvent";
export type { RequestUserInputQuestion } from "./RequestUserInputQuestion";
export type { RequestUserInputQuestionOption } from "./RequestUserInputQuestionOption";
export type { Resource } from "./Resource";
export type { ResourceLink } from "./ResourceLink";
export type { ResourceTemplate } from "./ResourceTemplate";
export type { ResponseItem } from "./ResponseItem";
export type { ResumeConversationParams } from "./ResumeConversationParams";
@@ -164,7 +158,6 @@ export type { ReviewLineRange } from "./ReviewLineRange";
export type { ReviewOutputEvent } from "./ReviewOutputEvent";
export type { ReviewRequest } from "./ReviewRequest";
export type { ReviewTarget } from "./ReviewTarget";
export type { Role } from "./Role";
export type { SandboxMode } from "./SandboxMode";
export type { SandboxPolicy } from "./SandboxPolicy";
export type { SandboxSettings } from "./SandboxSettings";
@@ -191,9 +184,7 @@ export type { StepStatus } from "./StepStatus";
export type { StreamErrorEvent } from "./StreamErrorEvent";
export type { SubAgentSource } from "./SubAgentSource";
export type { TerminalInteractionEvent } from "./TerminalInteractionEvent";
export type { TextContent } from "./TextContent";
export type { TextElement } from "./TextElement";
export type { TextResourceContents } from "./TextResourceContents";
export type { ThreadId } from "./ThreadId";
export type { ThreadNameUpdatedEvent } from "./ThreadNameUpdatedEvent";
export type { ThreadRolledBackEvent } from "./ThreadRolledBackEvent";
@@ -201,9 +192,6 @@ export type { TokenCountEvent } from "./TokenCountEvent";
export type { TokenUsage } from "./TokenUsage";
export type { TokenUsageInfo } from "./TokenUsageInfo";
export type { Tool } from "./Tool";
export type { ToolAnnotations } from "./ToolAnnotations";
export type { ToolInputSchema } from "./ToolInputSchema";
export type { ToolOutputSchema } from "./ToolOutputSchema";
export type { Tools } from "./Tools";
export type { TurnAbortReason } from "./TurnAbortReason";
export type { TurnAbortedEvent } from "./TurnAbortedEvent";

View File

@@ -1,7 +1,6 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
import type { ContentBlock } from "../ContentBlock";
import type { JsonValue } from "../serde_json/JsonValue";
export type McpToolCallResult = { content: Array<ContentBlock>, structuredContent: JsonValue | null, };
export type McpToolCallResult = { content: Array<JsonValue>, structuredContent: JsonValue | null, };

View File

@@ -1,7 +1,8 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
import type { InputModality } from "../InputModality";
import type { ReasoningEffort } from "../ReasoningEffort";
import type { ReasoningEffortOption } from "./ReasoningEffortOption";
export type Model = { id: string, model: string, displayName: string, description: string, supportedReasoningEfforts: Array<ReasoningEffortOption>, defaultReasoningEffort: ReasoningEffort, supportsPersonality: boolean, isDefault: boolean, };
export type Model = { id: string, model: string, displayName: string, description: string, supportedReasoningEfforts: Array<ReasoningEffortOption>, defaultReasoningEffort: ReasoningEffort, inputModalities: Array<InputModality>, supportsPersonality: boolean, isDefault: boolean, };

View File

@@ -15,8 +15,13 @@ use codex_protocol::config_types::Verbosity;
use codex_protocol::config_types::WebSearchMode;
use codex_protocol::items::AgentMessageContent as CoreAgentMessageContent;
use codex_protocol::items::TurnItem as CoreTurnItem;
use codex_protocol::mcp::Resource as McpResource;
use codex_protocol::mcp::ResourceTemplate as McpResourceTemplate;
use codex_protocol::mcp::Tool as McpTool;
use codex_protocol::models::ResponseItem;
use codex_protocol::openai_models::InputModality;
use codex_protocol::openai_models::ReasoningEffort;
use codex_protocol::openai_models::default_input_modalities;
use codex_protocol::parse_command::ParsedCommand as CoreParsedCommand;
use codex_protocol::plan_tool::PlanItemArg as CorePlanItemArg;
use codex_protocol::plan_tool::StepStatus as CorePlanStepStatus;
@@ -41,10 +46,6 @@ use codex_protocol::user_input::ByteRange as CoreByteRange;
use codex_protocol::user_input::TextElement as CoreTextElement;
use codex_protocol::user_input::UserInput as CoreUserInput;
use codex_utils_absolute_path::AbsolutePathBuf;
use mcp_types::ContentBlock as McpContentBlock;
use mcp_types::Resource as McpResource;
use mcp_types::ResourceTemplate as McpResourceTemplate;
use mcp_types::Tool as McpTool;
use schemars::JsonSchema;
use serde::Deserialize;
use serde::Serialize;
@@ -993,6 +994,8 @@ pub struct Model {
pub description: String,
pub supported_reasoning_efforts: Vec<ReasoningEffortOption>,
pub default_reasoning_effort: ReasoningEffort,
#[serde(default = "default_input_modalities")]
pub input_modalities: Vec<InputModality>,
#[serde(default)]
pub supports_personality: bool,
// Only one model should be marked as default.
@@ -2360,7 +2363,12 @@ impl From<CoreAgentStatus> for CollabAgentState {
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct McpToolCallResult {
pub content: Vec<McpContentBlock>,
// NOTE: `rmcp::model::Content` (and its `RawContent` variants) would be a more precise Rust
// representation of MCP content blocks. We intentionally use `serde_json::Value` here because
// this crate exports JSON schema + TS types (`schemars`/`ts-rs`), and the rmcp model types
// aren't set up to be schema/TS friendly (and would introduce heavier coupling to rmcp's Rust
// representations). Using `JsonValue` keeps the payload wire-shaped and easy to export.
pub content: Vec<JsonValue>,
pub structured_content: Option<JsonValue>,
}
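Aside (not part of the diff): a minimal sketch of what the looser `JsonValue`-based payload looks like in practice once `content` is a list of plain JSON values rather than typed MCP content blocks. The shape of the text block is assumed from the MCP wire format, not taken from this change.

// Hypothetical illustration only: build a wire-shaped content list with serde_json.
use serde_json::{json, Value};

fn main() {
    // An MCP text content block, kept as raw JSON instead of an rmcp Rust type.
    let content: Vec<Value> = vec![json!({ "type": "text", "text": "{\"resources\":[]}" })];
    let structured_content: Option<Value> = Some(json!({ "resources": [] }));
    println!("content = {content:?}, structured = {structured_content:?}");
}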

View File

@@ -2,6 +2,7 @@ use anyhow::Context;
use anyhow::Result;
use serde_json::Map;
use serde_json::Value;
use std::cmp::Ordering;
use std::collections::BTreeMap;
use std::path::Path;
use std::path::PathBuf;
@@ -92,7 +93,49 @@ fn read_file_bytes(path: &Path) -> Result<Vec<u8>> {
fn canonicalize_json(value: &Value) -> Value {
match value {
Value::Array(items) => Value::Array(items.iter().map(canonicalize_json).collect()),
Value::Array(items) => {
// NOTE: We sort some JSON arrays to make schema fixture comparisons stable across
// platforms.
//
// In general, JSON array ordering is significant. However, this code path is used
// only by `schema_fixtures_match_generated` to compare our *vendored* JSON schema
// files against freshly generated output. Some parts of schema generation end up
// with non-deterministic ordering across platforms (often due to map iteration order
// upstream), which can cause Windows CI failures even when the generated schema is
// semantically equivalent.
//
// JSON Schema itself also contains a number of array-valued keywords whose ordering
// does not affect validation semantics (e.g. `required`, `type`, `enum`, `anyOf`,
// `oneOf`, `allOf`). That makes it reasonable to treat many schema-emitted arrays as
// order-insensitive for the purpose of fixture diffs.
//
// To avoid accidentally changing the meaning of arrays where order *could* matter
// (e.g. tuple validation / `prefixItems`-style arrays), we only sort arrays when we
// can derive a stable sort key for *every* element. If we cannot, we preserve the
// original ordering.
let items = items.iter().map(canonicalize_json).collect::<Vec<_>>();
let mut sortable = Vec::with_capacity(items.len());
for item in &items {
let Some(key) = schema_array_item_sort_key(item) else {
return Value::Array(items);
};
let stable = serde_json::to_string(item).unwrap_or_default();
sortable.push((key, stable));
}
let mut items = items.into_iter().zip(sortable).collect::<Vec<_>>();
items.sort_by(
|(_, (key_left, stable_left)), (_, (key_right, stable_right))| match key_left
.cmp(key_right)
{
Ordering::Equal => stable_left.cmp(stable_right),
other => other,
},
);
Value::Array(items.into_iter().map(|(item, _)| item).collect())
}
Value::Object(map) => {
let mut entries: Vec<_> = map.iter().collect();
entries.sort_by(|(left, _), (right, _)| left.cmp(right));
@@ -106,6 +149,25 @@ fn canonicalize_json(value: &Value) -> Value {
}
}
fn schema_array_item_sort_key(item: &Value) -> Option<String> {
match item {
Value::Null => Some("null".to_string()),
Value::Bool(b) => Some(format!("b:{b}")),
Value::Number(n) => Some(format!("n:{n}")),
Value::String(s) => Some(format!("s:{s}")),
Value::Object(map) => {
if let Some(Value::String(reference)) = map.get("$ref") {
Some(format!("ref:{reference}"))
} else if let Some(Value::String(title)) = map.get("title") {
Some(format!("title:{title}"))
} else {
None
}
}
Value::Array(_) => None,
}
}
fn collect_files_recursive(root: &Path) -> Result<BTreeMap<PathBuf, Vec<u8>>> {
let mut files = BTreeMap::new();
@@ -146,3 +208,29 @@ fn collect_files_recursive(root: &Path) -> Result<BTreeMap<PathBuf, Vec<u8>>> {
Ok(files)
}
#[cfg(test)]
mod tests {
use super::*;
use pretty_assertions::assert_eq;
#[test]
fn canonicalize_json_sorts_string_arrays() {
let value = serde_json::json!(["b", "a"]);
let expected = serde_json::json!(["a", "b"]);
assert_eq!(canonicalize_json(&value), expected);
}
#[test]
fn canonicalize_json_sorts_schema_ref_arrays() {
let value = serde_json::json!([
{"$ref": "#/definitions/B"},
{"$ref": "#/definitions/A"}
]);
let expected = serde_json::json!([
{"$ref": "#/definitions/A"},
{"$ref": "#/definitions/B"}
]);
assert_eq!(canonicalize_json(&value), expected);
}
}
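Aside (not part of the diff): the note above says arrays are only sorted when every element yields a stable key. A hypothetical extra test, assuming the `canonicalize_json` and `schema_array_item_sort_key` functions shown in this file, illustrates the fallback path:

// Hypothetical test (would sit in the `tests` module above): an object with no
// `$ref` or `title` produces no sort key, so the whole array keeps its original order.
#[test]
fn canonicalize_json_preserves_order_for_unsortable_arrays() {
    let value = serde_json::json!([{"properties": {}}, "a"]);
    assert_eq!(canonicalize_json(&value), value);
}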

View File

@@ -175,7 +175,6 @@ dependencies = [
"base64",
"icu_decimal",
"icu_locale_core",
"mcp-types",
"mime_guess",
"serde",
"serde_json",
@@ -521,16 +520,6 @@ version = "0.4.28"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "34080505efa8e45a4b816c349525ebe327ceaa8559756f0356cba97ef3bf7432"
[[package]]
name = "mcp-types"
version = "0.45.0"
source = "git+https://github.com/openai/codex.git?tag=rust-v0.45.0#a7c7869c23f88f6c468281e6f438ba4a91b81f26"
dependencies = [
"serde",
"serde_json",
"ts-rs",
]
[[package]]
name = "memchr"
version = "2.7.6"

View File

@@ -35,7 +35,6 @@ codex-utils-json-to-toml = { workspace = true }
chrono = { workspace = true }
serde = { workspace = true, features = ["derive"] }
serde_json = { workspace = true }
mcp-types = { workspace = true }
tempfile = { workspace = true }
time = { workspace = true }
toml = { workspace = true }
@@ -60,7 +59,6 @@ axum = { workspace = true, default-features = false, features = [
base64 = { workspace = true }
codex-execpolicy = { workspace = true }
core_test_support = { workspace = true }
mcp-types = { workspace = true }
os_info = { workspace = true }
pretty_assertions = { workspace = true }
rmcp = { workspace = true, default-features = false, features = [

View File

@@ -1841,12 +1841,11 @@ mod tests {
use codex_core::protocol::RateLimitWindow;
use codex_core::protocol::TokenUsage;
use codex_core::protocol::TokenUsageInfo;
use codex_protocol::mcp::CallToolResult;
use codex_protocol::plan_tool::PlanItemArg;
use codex_protocol::plan_tool::StepStatus;
use mcp_types::CallToolResult;
use mcp_types::ContentBlock;
use mcp_types::TextContent;
use pretty_assertions::assert_eq;
use rmcp::model::Content;
use serde_json::Value as JsonValue;
use std::collections::HashMap;
use std::time::Duration;
@@ -2374,15 +2373,15 @@ mod tests {
#[tokio::test]
async fn test_construct_mcp_tool_call_end_notification_success() {
let content = vec![ContentBlock::TextContent(TextContent {
annotations: None,
text: "{\"resources\":[]}".to_string(),
r#type: "text".to_string(),
})];
let content = vec![
serde_json::to_value(Content::text("{\"resources\":[]}"))
.expect("content should serialize"),
];
let result = CallToolResult {
content: content.clone(),
is_error: Some(false),
structured_content: None,
meta: None,
};
let end_event = McpToolCallEndEvent {

View File

@@ -28,6 +28,7 @@ fn model_from_preset(preset: ModelPreset) -> Model {
preset.supported_reasoning_efforts,
),
default_reasoning_effort: preset.default_reasoning_effort,
input_modalities: preset.input_modalities,
supports_personality: preset.supports_personality,
is_default: preset.is_default,
}

View File

@@ -6,6 +6,7 @@ use codex_protocol::openai_models::ModelInfo;
use codex_protocol::openai_models::ModelPreset;
use codex_protocol::openai_models::ModelVisibility;
use codex_protocol::openai_models::TruncationPolicyConfig;
use codex_protocol::openai_models::default_input_modalities;
use serde_json::json;
use std::path::Path;
@@ -38,6 +39,7 @@ fn preset_to_info(preset: &ModelPreset, priority: i32) -> ModelInfo {
auto_compact_token_limit: None,
effective_context_window_percent: 95,
experimental_supported_tools: Vec::new(),
input_modalities: default_input_modalities(),
}
}
@@ -75,9 +77,11 @@ pub fn write_models_cache_with_models(
let cache_path = codex_home.join("models_cache.json");
// DateTime<Utc> serializes to RFC3339 format by default with serde
let fetched_at: DateTime<Utc> = Utc::now();
let client_version = codex_core::models_manager::client_version_to_whole();
let cache = json!({
"fetched_at": fetched_at,
"etag": null,
"client_version": client_version,
"models": models
});
std::fs::write(cache_path, serde_json::to_string_pretty(&cache)?)

View File

@@ -308,6 +308,7 @@ async fn test_list_and_resume_conversations() -> Result<()> {
text: fork_history_text.to_string(),
}],
end_turn: None,
phase: None,
}];
let resume_with_history_req_id = mcp
.send_resume_conversation_request(ResumeConversationParams {

View File

@@ -126,6 +126,7 @@ async fn auto_compaction_remote_emits_started_and_completed_items() -> Result<()
text: "REMOTE_COMPACT_SUMMARY".to_string(),
}],
end_turn: None,
phase: None,
},
ResponseItem::Compaction {
encrypted_content: "ENCRYPTED_COMPACTION_SUMMARY".to_string(),

View File

@@ -12,6 +12,7 @@ use codex_app_server_protocol::ModelListParams;
use codex_app_server_protocol::ModelListResponse;
use codex_app_server_protocol::ReasoningEffortOption;
use codex_app_server_protocol::RequestId;
use codex_protocol::openai_models::InputModality;
use codex_protocol::openai_models::ReasoningEffort;
use pretty_assertions::assert_eq;
use tempfile::TempDir;
@@ -72,6 +73,7 @@ async fn list_models_returns_all_models_with_large_limit() -> Result<()> {
},
],
default_reasoning_effort: ReasoningEffort::Medium,
input_modalities: vec![InputModality::Text, InputModality::Image],
supports_personality: false,
is_default: true,
},
@@ -100,6 +102,7 @@ async fn list_models_returns_all_models_with_large_limit() -> Result<()> {
},
],
default_reasoning_effort: ReasoningEffort::Medium,
input_modalities: vec![InputModality::Text, InputModality::Image],
supports_personality: false,
is_default: false,
},
@@ -120,6 +123,7 @@ async fn list_models_returns_all_models_with_large_limit() -> Result<()> {
},
],
default_reasoning_effort: ReasoningEffort::Medium,
input_modalities: vec![InputModality::Text, InputModality::Image],
supports_personality: false,
is_default: false,
},
@@ -154,6 +158,7 @@ async fn list_models_returns_all_models_with_large_limit() -> Result<()> {
},
],
default_reasoning_effort: ReasoningEffort::Medium,
input_modalities: vec![InputModality::Text, InputModality::Image],
supports_personality: false,
is_default: false,
},

View File

@@ -338,6 +338,7 @@ async fn thread_resume_supports_history_and_overrides() -> Result<()> {
text: history_text.to_string(),
}],
end_turn: None,
phase: None,
}];
// Resume with explicit history and override the model.

View File

@@ -40,6 +40,7 @@ owo-colors = { workspace = true }
regex-lite = { workspace = true }
serde_json = { workspace = true }
supports-color = { workspace = true }
tempfile = { workspace = true }
tokio = { workspace = true, features = [
"io-std",
"macros",
@@ -59,4 +60,3 @@ assert_matches = { workspace = true }
codex-utils-cargo-bin = { workspace = true }
predicates = { workspace = true }
pretty_assertions = { workspace = true }
tempfile = { workspace = true }

View File

@@ -0,0 +1,21 @@
use clap::Parser;
use std::path::PathBuf;
const DEFAULT_CODEX_DMG_URL: &str = "https://persistent.oaistatic.com/codex-app-prod/Codex.dmg";
#[derive(Debug, Parser)]
pub struct AppCommand {
/// Workspace path to open in Codex Desktop.
#[arg(value_name = "PATH", default_value = ".")]
pub path: PathBuf,
/// Override the macOS DMG download URL (advanced).
#[arg(long, default_value = DEFAULT_CODEX_DMG_URL)]
pub download_url: String,
}
#[cfg(target_os = "macos")]
pub async fn run_app(cmd: AppCommand) -> anyhow::Result<()> {
let workspace = std::fs::canonicalize(&cmd.path).unwrap_or(cmd.path);
crate::desktop_app::run_app_open_or_install(workspace, cmd.download_url).await
}
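Aside (not part of the diff): a hypothetical usage sketch of the `AppCommand` parser above, assuming `clap::Parser` is in scope; the paths and argument values are illustrative only.

// Hypothetical: parse_from takes the binary name first, then the positional path.
let cmd = AppCommand::parse_from(["codex", "/tmp/project"]);
assert_eq!(cmd.path, std::path::PathBuf::from("/tmp/project"));
// With --download-url omitted, the default DMG URL from the attribute is used.
assert_eq!(cmd.download_url, DEFAULT_CODEX_DMG_URL);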

View File

@@ -0,0 +1,281 @@
use anyhow::Context as _;
use std::path::Path;
use std::path::PathBuf;
use tempfile::Builder;
use tokio::process::Command;
pub async fn run_mac_app_open_or_install(
workspace: PathBuf,
download_url: String,
) -> anyhow::Result<()> {
if let Some(app_path) = find_existing_codex_app_path() {
eprintln!(
"Opening Codex Desktop at {app_path}...",
app_path = app_path.display()
);
open_codex_app(&app_path, &workspace).await?;
return Ok(());
}
eprintln!("Codex Desktop not found; downloading installer...");
let installed_app = download_and_install_codex_to_user_applications(&download_url)
.await
.context("failed to download/install Codex Desktop")?;
eprintln!(
"Launching Codex Desktop from {installed_app}...",
installed_app = installed_app.display()
);
open_codex_app(&installed_app, &workspace).await?;
Ok(())
}
fn find_existing_codex_app_path() -> Option<PathBuf> {
candidate_codex_app_paths()
.into_iter()
.find(|candidate| candidate.is_dir())
}
fn candidate_codex_app_paths() -> Vec<PathBuf> {
let mut paths = vec![PathBuf::from("/Applications/Codex.app")];
if let Some(home) = std::env::var_os("HOME") {
paths.push(PathBuf::from(home).join("Applications").join("Codex.app"));
}
paths
}
async fn open_codex_app(app_path: &Path, workspace: &Path) -> anyhow::Result<()> {
eprintln!(
"Opening workspace {workspace}...",
workspace = workspace.display()
);
let status = Command::new("open")
.arg("-a")
.arg(app_path)
.arg(workspace)
.status()
.await
.context("failed to invoke `open`")?;
if status.success() {
return Ok(());
}
anyhow::bail!(
"`open -a {app_path} {workspace}` exited with {status}",
app_path = app_path.display(),
workspace = workspace.display()
);
}
async fn download_and_install_codex_to_user_applications(dmg_url: &str) -> anyhow::Result<PathBuf> {
let temp_dir = Builder::new()
.prefix("codex-app-installer-")
.tempdir()
.context("failed to create temp dir")?;
let tmp_root = temp_dir.path().to_path_buf();
let _temp_dir = temp_dir;
let dmg_path = tmp_root.join("Codex.dmg");
download_dmg(dmg_url, &dmg_path).await?;
eprintln!("Mounting Codex Desktop installer...");
let mount_point = mount_dmg(&dmg_path).await?;
eprintln!(
"Installer mounted at {mount_point}.",
mount_point = mount_point.display()
);
let result = async {
let app_in_volume = find_codex_app_in_mount(&mount_point)
.context("failed to locate Codex.app in mounted dmg")?;
install_codex_app_bundle(&app_in_volume).await
}
.await;
let detach_result = detach_dmg(&mount_point).await;
if let Err(err) = detach_result {
eprintln!(
"warning: failed to detach dmg at {mount_point}: {err}",
mount_point = mount_point.display()
);
}
result
}
async fn install_codex_app_bundle(app_in_volume: &Path) -> anyhow::Result<PathBuf> {
for applications_dir in candidate_applications_dirs()? {
eprintln!(
"Installing Codex Desktop into {applications_dir}...",
applications_dir = applications_dir.display()
);
std::fs::create_dir_all(&applications_dir).with_context(|| {
format!(
"failed to create applications dir {applications_dir}",
applications_dir = applications_dir.display()
)
})?;
let dest_app = applications_dir.join("Codex.app");
if dest_app.is_dir() {
return Ok(dest_app);
}
match copy_app_bundle(app_in_volume, &dest_app).await {
Ok(()) => return Ok(dest_app),
Err(err) => {
eprintln!(
"warning: failed to install Codex.app to {applications_dir}: {err}",
applications_dir = applications_dir.display()
);
}
}
}
anyhow::bail!("failed to install Codex.app to any applications directory");
}
fn candidate_applications_dirs() -> anyhow::Result<Vec<PathBuf>> {
let mut dirs = vec![PathBuf::from("/Applications")];
dirs.push(user_applications_dir()?);
Ok(dirs)
}
async fn download_dmg(url: &str, dest: &Path) -> anyhow::Result<()> {
eprintln!("Downloading installer...");
let status = Command::new("curl")
.arg("-fL")
.arg("--retry")
.arg("3")
.arg("--retry-delay")
.arg("1")
.arg("-o")
.arg(dest)
.arg(url)
.status()
.await
.context("failed to invoke `curl`")?;
if status.success() {
return Ok(());
}
anyhow::bail!("curl download failed with {status}");
}
async fn mount_dmg(dmg_path: &Path) -> anyhow::Result<PathBuf> {
let output = Command::new("hdiutil")
.arg("attach")
.arg("-nobrowse")
.arg("-readonly")
.arg(dmg_path)
.output()
.await
.context("failed to invoke `hdiutil attach`")?;
if !output.status.success() {
anyhow::bail!(
"`hdiutil attach` failed with {status}: {stderr}",
status = output.status,
stderr = String::from_utf8_lossy(&output.stderr)
);
}
let stdout = String::from_utf8_lossy(&output.stdout);
parse_hdiutil_attach_mount_point(&stdout)
.map(PathBuf::from)
.with_context(|| format!("failed to parse mount point from hdiutil output:\n{stdout}"))
}
async fn detach_dmg(mount_point: &Path) -> anyhow::Result<()> {
let status = Command::new("hdiutil")
.arg("detach")
.arg(mount_point)
.status()
.await
.context("failed to invoke `hdiutil detach`")?;
if status.success() {
return Ok(());
}
anyhow::bail!("hdiutil detach failed with {status}");
}
fn find_codex_app_in_mount(mount_point: &Path) -> anyhow::Result<PathBuf> {
let direct = mount_point.join("Codex.app");
if direct.is_dir() {
return Ok(direct);
}
for entry in std::fs::read_dir(mount_point).with_context(|| {
format!(
"failed to read {mount_point}",
mount_point = mount_point.display()
)
})? {
let entry = entry.context("failed to read mount directory entry")?;
let path = entry.path();
if path.extension().is_some_and(|ext| ext == "app") && path.is_dir() {
return Ok(path);
}
}
anyhow::bail!(
"no .app bundle found at {mount_point}",
mount_point = mount_point.display()
);
}
async fn copy_app_bundle(src_app: &Path, dest_app: &Path) -> anyhow::Result<()> {
let status = Command::new("ditto")
.arg(src_app)
.arg(dest_app)
.status()
.await
.context("failed to invoke `ditto`")?;
if status.success() {
return Ok(());
}
anyhow::bail!("ditto copy failed with {status}");
}
fn user_applications_dir() -> anyhow::Result<PathBuf> {
let home = std::env::var_os("HOME").context("HOME is not set")?;
Ok(PathBuf::from(home).join("Applications"))
}
fn parse_hdiutil_attach_mount_point(output: &str) -> Option<String> {
output.lines().find_map(|line| {
if !line.contains("/Volumes/") {
return None;
}
if let Some((_, mount)) = line.rsplit_once('\t') {
return Some(mount.trim().to_string());
}
line.split_whitespace()
.find(|field| field.starts_with("/Volumes/"))
.map(str::to_string)
})
}
#[cfg(test)]
mod tests {
use super::parse_hdiutil_attach_mount_point;
use pretty_assertions::assert_eq;
#[test]
fn parses_mount_point_from_tab_separated_hdiutil_output() {
let output = "/dev/disk2s1\tApple_HFS\tCodex\t/Volumes/Codex\n";
assert_eq!(
parse_hdiutil_attach_mount_point(output).as_deref(),
Some("/Volumes/Codex")
);
}
#[test]
fn parses_mount_point_with_spaces() {
let output = "/dev/disk2s1\tApple_HFS\tCodex Installer\t/Volumes/Codex Installer\n";
assert_eq!(
parse_hdiutil_attach_mount_point(output).as_deref(),
Some("/Volumes/Codex Installer")
);
}
}

View File

@@ -0,0 +1,11 @@
#[cfg(target_os = "macos")]
mod mac;
/// Run the app install/open logic for the current OS.
#[cfg(target_os = "macos")]
pub async fn run_app_open_or_install(
workspace: std::path::PathBuf,
download_url: String,
) -> anyhow::Result<()> {
mac::run_mac_app_open_or_install(workspace, download_url).await
}
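
A hedged usage sketch for this shim, assuming a caller inside the same crate; the workspace path and installer URL below are placeholders, not values from this change:

```rust
// Hypothetical caller of the shim above; both arguments are placeholders.
#[cfg(target_os = "macos")]
async fn launch_desktop_example() -> anyhow::Result<()> {
    let workspace = std::path::PathBuf::from("/Users/me/project"); // placeholder path
    let download_url = "https://example.com/Codex.dmg".to_string(); // placeholder URL
    run_app_open_or_install(workspace, download_url).await
}
```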

View File

@@ -31,6 +31,10 @@ use std::io::IsTerminal;
use std::path::PathBuf;
use supports_color::Stream;
#[cfg(target_os = "macos")]
mod app_cmd;
#[cfg(target_os = "macos")]
mod desktop_app;
mod mcp_cmd;
#[cfg(not(windows))]
mod wsl_paths;
@@ -98,6 +102,10 @@ enum Subcommand {
/// [experimental] Run the app server or related tooling.
AppServer(AppServerCommand),
/// Launch the Codex desktop app (downloads the macOS installer if missing).
#[cfg(target_os = "macos")]
App(app_cmd::AppCommand),
/// Generate shell completion scripts.
Completion(CompletionCommand),
@@ -564,6 +572,10 @@ async fn cli_main(codex_linux_sandbox_exe: Option<PathBuf>) -> anyhow::Result<()
)?;
}
},
#[cfg(target_os = "macos")]
Some(Subcommand::App(app_cli)) => {
app_cmd::run_app(app_cli).await?;
}
Some(Subcommand::Resume(ResumeCommand {
session_id,
last,

View File

@@ -2,7 +2,7 @@
Typed clients for Codex/OpenAI APIs built on top of the generic transport in `codex-client`.
- Hosts the request/response models and prompt helpers for Responses, Chat Completions, and Compact APIs.
- Hosts the request/response models and prompt helpers for Responses and Compact APIs.
- Owns provider configuration (base URLs, headers, query params), auth header injection, retry tuning, and stream idle settings.
- Parses SSE streams into `ResponseEvent`/`ResponseStream`, including rate-limit snapshots and API-specific error mapping.
- Serves as the wire-level layer consumed by `codex-core`; higher layers handle auth refresh and business logic.
@@ -11,7 +11,7 @@ Typed clients for Codex/OpenAI APIs built on top of the generic transport in `co
The public interface of this crate is intentionally small and uniform:
- **Prompted endpoints (Chat + Responses)**
- **Prompted endpoints (Responses)**
- Input: a single `Prompt` plus endpoint-specific options.
- `Prompt` (re-exported as `codex_api::Prompt`) carries:
- `instructions: String` the fully-resolved system prompt for this turn.
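
To make the uniform input concrete, here is a minimal self-contained sketch that mirrors the fields named above; it intentionally does not construct the real `codex_api::Prompt`, since only `instructions`, `input`, and `tools` are visible in this diff, and the stand-in element types are simplifications (the real crate uses `ResponseItem` for input and JSON values for tools):

```rust
// Simplified mirror of the documented Prompt shape (not the real type).
struct PromptSketch {
    /// Fully-resolved system prompt for the turn.
    instructions: String,
    /// Conversation items to send to the Responses endpoint.
    input: Vec<String>,
    /// Tool definitions advertised for the turn.
    tools: Vec<String>,
}

fn main() {
    let prompt = PromptSketch {
        instructions: "You are a coding agent.".to_string(),
        input: vec!["user: summarize the repo".to_string()],
        tools: Vec::new(),
    };
    println!(
        "{} instruction bytes, {} input item(s), {} tool(s)",
        prompt.instructions.len(),
        prompt.input.len(),
        prompt.tools.len()
    );
}
```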

View File

@@ -1,4 +1,6 @@
use codex_client::Request;
use http::HeaderMap;
use http::HeaderValue;
/// Provides bearer and account identity information for API requests.
///
@@ -12,16 +14,20 @@ pub trait AuthProvider: Send + Sync {
}
}
pub(crate) fn add_auth_headers<A: AuthProvider>(auth: &A, mut req: Request) -> Request {
pub(crate) fn add_auth_headers_to_header_map<A: AuthProvider>(auth: &A, headers: &mut HeaderMap) {
if let Some(token) = auth.bearer_token()
&& let Ok(header) = format!("Bearer {token}").parse()
&& let Ok(header) = HeaderValue::from_str(&format!("Bearer {token}"))
{
let _ = req.headers.insert(http::header::AUTHORIZATION, header);
let _ = headers.insert(http::header::AUTHORIZATION, header);
}
if let Some(account_id) = auth.account_id()
&& let Ok(header) = account_id.parse()
&& let Ok(header) = HeaderValue::from_str(&account_id)
{
let _ = req.headers.insert("ChatGPT-Account-ID", header);
let _ = headers.insert("ChatGPT-Account-ID", header);
}
}
pub(crate) fn add_auth_headers<A: AuthProvider>(auth: &A, mut req: Request) -> Request {
add_auth_headers_to_header_map(auth, &mut req.headers);
req
}

View File

@@ -13,7 +13,7 @@ use std::task::Context;
use std::task::Poll;
use tokio::sync::mpsc;
/// Canonical prompt input for Chat and Responses endpoints.
/// Canonical prompt input for Responses endpoints.
#[derive(Debug, Clone)]
pub struct Prompt {
/// Fully-resolved system instructions for this turn.

View File

@@ -1,111 +1,21 @@
use crate::ChatRequest;
use crate::auth::AuthProvider;
use crate::common::Prompt as ApiPrompt;
use crate::common::ResponseEvent;
use crate::common::ResponseStream;
use crate::endpoint::streaming::StreamingClient;
use crate::error::ApiError;
use crate::provider::Provider;
use crate::provider::WireApi;
use crate::sse::chat::spawn_chat_stream;
use crate::telemetry::SseTelemetry;
use codex_client::HttpTransport;
use codex_client::RequestCompression;
use codex_client::RequestTelemetry;
use codex_protocol::models::ContentItem;
use codex_protocol::models::ReasoningItemContent;
use codex_protocol::models::ResponseItem;
use codex_protocol::protocol::SessionSource;
use futures::Stream;
use http::HeaderMap;
use serde_json::Value;
use std::collections::VecDeque;
use std::pin::Pin;
use std::sync::Arc;
use std::task::Context;
use std::task::Poll;
pub struct ChatClient<T: HttpTransport, A: AuthProvider> {
streaming: StreamingClient<T, A>,
}
impl<T: HttpTransport, A: AuthProvider> ChatClient<T, A> {
pub fn new(transport: T, provider: Provider, auth: A) -> Self {
Self {
streaming: StreamingClient::new(transport, provider, auth),
}
}
pub fn with_telemetry(
self,
request: Option<Arc<dyn RequestTelemetry>>,
sse: Option<Arc<dyn SseTelemetry>>,
) -> Self {
Self {
streaming: self.streaming.with_telemetry(request, sse),
}
}
pub async fn stream_request(&self, request: ChatRequest) -> Result<ResponseStream, ApiError> {
self.stream(request.body, request.headers).await
}
pub async fn stream_prompt(
&self,
model: &str,
prompt: &ApiPrompt,
conversation_id: Option<String>,
session_source: Option<SessionSource>,
) -> Result<ResponseStream, ApiError> {
use crate::requests::ChatRequestBuilder;
let request =
ChatRequestBuilder::new(model, &prompt.instructions, &prompt.input, &prompt.tools)
.conversation_id(conversation_id)
.session_source(session_source)
.build(self.streaming.provider())?;
self.stream_request(request).await
}
fn path(&self) -> &'static str {
match self.streaming.provider().wire {
WireApi::Chat => "chat/completions",
_ => "responses",
}
}
pub async fn stream(
&self,
body: Value,
extra_headers: HeaderMap,
) -> Result<ResponseStream, ApiError> {
self.streaming
.stream(
self.path(),
body,
extra_headers,
RequestCompression::None,
spawn_chat_stream,
None,
)
.await
}
}
#[derive(Copy, Clone, Eq, PartialEq)]
pub enum AggregateMode {
AggregatedOnly,
Streaming,
}
/// Stream adapter that merges token deltas into a single assistant message per turn.
pub struct AggregatedStream {
inner: ResponseStream,
cumulative: String,
cumulative_reasoning: String,
pending: VecDeque<ResponseEvent>,
mode: AggregateMode,
}
impl Stream for AggregatedStream {
@@ -122,7 +32,7 @@ impl Stream for AggregatedStream {
match Pin::new(&mut this.inner).poll_next(cx) {
Poll::Pending => return Poll::Pending,
Poll::Ready(None) => return Poll::Ready(None),
Poll::Ready(Some(Err(e))) => return Poll::Ready(Some(Err(e))),
Poll::Ready(Some(Err(err))) => return Poll::Ready(Some(Err(err))),
Poll::Ready(Some(Ok(ResponseEvent::OutputItemDone(item)))) => {
let is_assistant_message = matches!(
&item,
@@ -130,29 +40,16 @@ impl Stream for AggregatedStream {
);
if is_assistant_message {
match this.mode {
AggregateMode::AggregatedOnly => {
if this.cumulative.is_empty()
&& let ResponseItem::Message { content, .. } = &item
&& let Some(text) = content.iter().find_map(|c| match c {
ContentItem::OutputText { text } => Some(text),
_ => None,
})
{
this.cumulative.push_str(text);
}
continue;
}
AggregateMode::Streaming => {
if this.cumulative.is_empty() {
return Poll::Ready(Some(Ok(ResponseEvent::OutputItemDone(
item,
))));
} else {
continue;
}
}
if this.cumulative.is_empty()
&& let ResponseItem::Message { content, .. } = &item
&& let Some(text) = content.iter().find_map(|c| match c {
ContentItem::OutputText { text } => Some(text),
_ => None,
})
{
this.cumulative.push_str(text);
}
continue;
}
return Poll::Ready(Some(Ok(ResponseEvent::OutputItemDone(item))));
@@ -194,6 +91,7 @@ impl Stream for AggregatedStream {
text: std::mem::take(&mut this.cumulative),
}],
end_turn: None,
phase: None,
};
this.pending
.push_back(ResponseEvent::OutputItemDone(aggregated_message));
@@ -215,35 +113,20 @@ impl Stream for AggregatedStream {
token_usage,
})));
}
Poll::Ready(Some(Ok(ResponseEvent::Created))) => {
continue;
}
Poll::Ready(Some(Ok(ResponseEvent::Created))) => continue,
Poll::Ready(Some(Ok(ResponseEvent::OutputTextDelta(delta)))) => {
this.cumulative.push_str(&delta);
if matches!(this.mode, AggregateMode::Streaming) {
return Poll::Ready(Some(Ok(ResponseEvent::OutputTextDelta(delta))));
} else {
continue;
}
continue;
}
Poll::Ready(Some(Ok(ResponseEvent::ReasoningContentDelta {
delta,
content_index,
content_index: _,
}))) => {
this.cumulative_reasoning.push_str(&delta);
if matches!(this.mode, AggregateMode::Streaming) {
return Poll::Ready(Some(Ok(ResponseEvent::ReasoningContentDelta {
delta,
content_index,
})));
} else {
continue;
}
}
Poll::Ready(Some(Ok(ResponseEvent::ReasoningSummaryDelta { .. }))) => continue,
Poll::Ready(Some(Ok(ResponseEvent::ReasoningSummaryPartAdded { .. }))) => {
continue;
}
Poll::Ready(Some(Ok(ResponseEvent::ReasoningSummaryDelta { .. }))) => continue,
Poll::Ready(Some(Ok(ResponseEvent::ReasoningSummaryPartAdded { .. }))) => continue,
Poll::Ready(Some(Ok(ResponseEvent::OutputItemAdded(item)))) => {
return Poll::Ready(Some(Ok(ResponseEvent::OutputItemAdded(item))));
}
@@ -254,28 +137,21 @@ impl Stream for AggregatedStream {
pub trait AggregateStreamExt {
fn aggregate(self) -> AggregatedStream;
fn streaming_mode(self) -> ResponseStream;
}
impl AggregateStreamExt for ResponseStream {
fn aggregate(self) -> AggregatedStream {
AggregatedStream::new(self, AggregateMode::AggregatedOnly)
}
fn streaming_mode(self) -> ResponseStream {
self
AggregatedStream::new(self)
}
}
impl AggregatedStream {
fn new(inner: ResponseStream, mode: AggregateMode) -> Self {
fn new(inner: ResponseStream) -> Self {
AggregatedStream {
inner,
cumulative: String::new(),
cumulative_reasoning: String::new(),
pending: VecDeque::new(),
mode,
}
}
}
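
A hedged usage sketch of the `aggregate()` extension above; the event variants, stream type, and error type appear elsewhere in this change, but the surrounding wiring is illustrative only:

```rust
// Illustrative consumer; assumes the codex_api re-exports shown later in this
// change (AggregateStreamExt, ResponseEvent, ResponseStream, ApiError) and the
// futures crate for StreamExt::next.
use codex_api::AggregateStreamExt;
use codex_api::ApiError;
use codex_api::ResponseEvent;
use codex_api::ResponseStream;
use futures::StreamExt;

async fn wait_for_completion(stream: ResponseStream) -> Result<(), ApiError> {
    let mut aggregated = stream.aggregate();
    while let Some(event) = aggregated.next().await {
        // Token deltas are merged internally; the assistant message arrives as a
        // single OutputItemDone before Completed.
        if let ResponseEvent::Completed { .. } = event? {
            return Ok(());
        }
    }
    Ok(())
}
```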

View File

@@ -1,10 +1,8 @@
use crate::auth::AuthProvider;
use crate::auth::add_auth_headers;
use crate::common::CompactionInput;
use crate::endpoint::session::EndpointSession;
use crate::error::ApiError;
use crate::provider::Provider;
use crate::provider::WireApi;
use crate::telemetry::run_with_request_telemetry;
use codex_client::HttpTransport;
use codex_client::RequestTelemetry;
use codex_protocol::models::ResponseItem;
@@ -15,34 +13,24 @@ use serde_json::to_value;
use std::sync::Arc;
pub struct CompactClient<T: HttpTransport, A: AuthProvider> {
transport: T,
provider: Provider,
auth: A,
request_telemetry: Option<Arc<dyn RequestTelemetry>>,
session: EndpointSession<T, A>,
}
impl<T: HttpTransport, A: AuthProvider> CompactClient<T, A> {
pub fn new(transport: T, provider: Provider, auth: A) -> Self {
Self {
transport,
provider,
auth,
request_telemetry: None,
session: EndpointSession::new(transport, provider, auth),
}
}
pub fn with_telemetry(mut self, request: Option<Arc<dyn RequestTelemetry>>) -> Self {
self.request_telemetry = request;
self
pub fn with_telemetry(self, request: Option<Arc<dyn RequestTelemetry>>) -> Self {
Self {
session: self.session.with_request_telemetry(request),
}
}
fn path(&self) -> Result<&'static str, ApiError> {
match self.provider.wire {
WireApi::Compact | WireApi::Responses => Ok("responses/compact"),
WireApi::Chat => Err(ApiError::Stream(
"compact endpoint requires responses wire api".to_string(),
)),
}
fn path() -> &'static str {
"responses/compact"
}
pub async fn compact(
@@ -50,21 +38,10 @@ impl<T: HttpTransport, A: AuthProvider> CompactClient<T, A> {
body: serde_json::Value,
extra_headers: HeaderMap,
) -> Result<Vec<ResponseItem>, ApiError> {
let path = self.path()?;
let builder = || {
let mut req = self.provider.build_request(Method::POST, path);
req.headers.extend(extra_headers.clone());
req.body = Some(body.clone());
add_auth_headers(&self.auth, req)
};
let resp = run_with_request_telemetry(
self.provider.retry.to_policy(),
self.request_telemetry.clone(),
builder,
|req| self.transport.execute(req),
)
.await?;
let resp = self
.session
.execute(Method::POST, Self::path(), extra_headers, Some(body))
.await?;
let parsed: CompactHistoryResponse =
serde_json::from_slice(&resp.body).map_err(|e| ApiError::Stream(e.to_string()))?;
Ok(parsed.output)
@@ -89,14 +66,11 @@ struct CompactHistoryResponse {
#[cfg(test)]
mod tests {
use super::*;
use crate::provider::RetryConfig;
use async_trait::async_trait;
use codex_client::Request;
use codex_client::Response;
use codex_client::StreamResponse;
use codex_client::TransportError;
use http::HeaderMap;
use std::time::Duration;
#[derive(Clone, Default)]
struct DummyTransport;
@@ -121,42 +95,11 @@ mod tests {
}
}
fn provider(wire: WireApi) -> Provider {
Provider {
name: "test".to_string(),
base_url: "https://example.com/v1".to_string(),
query_params: None,
wire,
headers: HeaderMap::new(),
retry: RetryConfig {
max_attempts: 1,
base_delay: Duration::from_millis(1),
retry_429: false,
retry_5xx: true,
retry_transport: true,
},
stream_idle_timeout: Duration::from_secs(1),
}
}
#[tokio::test]
async fn errors_when_wire_is_chat() {
let client = CompactClient::new(DummyTransport, provider(WireApi::Chat), DummyAuth);
let input = CompactionInput {
model: "gpt-test",
input: &[],
instructions: "inst",
};
let err = client
.compact_input(&input, HeaderMap::new())
.await
.expect_err("expected wire mismatch to fail");
match err {
ApiError::Stream(msg) => {
assert_eq!(msg, "compact endpoint requires responses wire api");
}
other => panic!("unexpected error: {other:?}"),
}
#[test]
fn path_is_responses_compact() {
assert_eq!(
CompactClient::<DummyTransport, DummyAuth>::path(),
"responses/compact"
);
}
}

View File

@@ -1,6 +1,6 @@
pub mod chat;
pub mod aggregate;
pub mod compact;
pub mod models;
pub mod responses;
pub mod responses_websocket;
mod streaming;
mod session;

View File

@@ -1,8 +1,7 @@
use crate::auth::AuthProvider;
use crate::auth::add_auth_headers;
use crate::endpoint::session::EndpointSession;
use crate::error::ApiError;
use crate::provider::Provider;
use crate::telemetry::run_with_request_telemetry;
use codex_client::HttpTransport;
use codex_client::RequestTelemetry;
use codex_protocol::openai_models::ModelInfo;
@@ -13,53 +12,42 @@ use http::header::ETAG;
use std::sync::Arc;
pub struct ModelsClient<T: HttpTransport, A: AuthProvider> {
transport: T,
provider: Provider,
auth: A,
request_telemetry: Option<Arc<dyn RequestTelemetry>>,
session: EndpointSession<T, A>,
}
impl<T: HttpTransport, A: AuthProvider> ModelsClient<T, A> {
pub fn new(transport: T, provider: Provider, auth: A) -> Self {
Self {
transport,
provider,
auth,
request_telemetry: None,
session: EndpointSession::new(transport, provider, auth),
}
}
pub fn with_telemetry(mut self, request: Option<Arc<dyn RequestTelemetry>>) -> Self {
self.request_telemetry = request;
self
pub fn with_telemetry(self, request: Option<Arc<dyn RequestTelemetry>>) -> Self {
Self {
session: self.session.with_request_telemetry(request),
}
}
fn path(&self) -> &'static str {
fn path() -> &'static str {
"models"
}
fn append_client_version_query(req: &mut codex_client::Request, client_version: &str) {
let separator = if req.url.contains('?') { '&' } else { '?' };
req.url = format!("{}{}client_version={client_version}", req.url, separator);
}
pub async fn list_models(
&self,
client_version: &str,
extra_headers: HeaderMap,
) -> Result<(Vec<ModelInfo>, Option<String>), ApiError> {
let builder = || {
let mut req = self.provider.build_request(Method::GET, self.path());
req.headers.extend(extra_headers.clone());
let separator = if req.url.contains('?') { '&' } else { '?' };
req.url = format!("{}{}client_version={client_version}", req.url, separator);
add_auth_headers(&self.auth, req)
};
let resp = run_with_request_telemetry(
self.provider.retry.to_policy(),
self.request_telemetry.clone(),
builder,
|req| self.transport.execute(req),
)
.await?;
let resp = self
.session
.execute_with(Method::GET, Self::path(), extra_headers, None, |req| {
Self::append_client_version_query(req, client_version);
})
.await?;
let header_etag = resp
.headers
@@ -83,7 +71,6 @@ impl<T: HttpTransport, A: AuthProvider> ModelsClient<T, A> {
mod tests {
use super::*;
use crate::provider::RetryConfig;
use crate::provider::WireApi;
use async_trait::async_trait;
use codex_client::Request;
use codex_client::Response;
@@ -149,7 +136,6 @@ mod tests {
name: "test".to_string(),
base_url: base_url.to_string(),
query_params: None,
wire: WireApi::Responses,
headers: HeaderMap::new(),
retry: RetryConfig {
max_attempts: 1,

View File

@@ -3,10 +3,9 @@ use crate::common::Prompt as ApiPrompt;
use crate::common::Reasoning;
use crate::common::ResponseStream;
use crate::common::TextControls;
use crate::endpoint::streaming::StreamingClient;
use crate::endpoint::session::EndpointSession;
use crate::error::ApiError;
use crate::provider::Provider;
use crate::provider::WireApi;
use crate::requests::ResponsesRequest;
use crate::requests::ResponsesRequestBuilder;
use crate::requests::responses::Compression;
@@ -17,13 +16,16 @@ use codex_client::RequestCompression;
use codex_client::RequestTelemetry;
use codex_protocol::protocol::SessionSource;
use http::HeaderMap;
use http::HeaderValue;
use http::Method;
use serde_json::Value;
use std::sync::Arc;
use std::sync::OnceLock;
use tracing::instrument;
pub struct ResponsesClient<T: HttpTransport, A: AuthProvider> {
streaming: StreamingClient<T, A>,
session: EndpointSession<T, A>,
sse_telemetry: Option<Arc<dyn SseTelemetry>>,
}
#[derive(Default)]
@@ -43,7 +45,8 @@ pub struct ResponsesOptions {
impl<T: HttpTransport, A: AuthProvider> ResponsesClient<T, A> {
pub fn new(transport: T, provider: Provider, auth: A) -> Self {
Self {
streaming: StreamingClient::new(transport, provider, auth),
session: EndpointSession::new(transport, provider, auth),
sse_telemetry: None,
}
}
@@ -53,7 +56,8 @@ impl<T: HttpTransport, A: AuthProvider> ResponsesClient<T, A> {
sse: Option<Arc<dyn SseTelemetry>>,
) -> Self {
Self {
streaming: self.streaming.with_telemetry(request, sse),
session: self.session.with_request_telemetry(request),
sse_telemetry: sse,
}
}
@@ -103,16 +107,13 @@ impl<T: HttpTransport, A: AuthProvider> ResponsesClient<T, A> {
.store_override(store_override)
.extra_headers(extra_headers)
.compression(compression)
.build(self.streaming.provider())?;
.build(self.session.provider())?;
self.stream_request(request, turn_state).await
}
fn path(&self) -> &'static str {
match self.streaming.provider().wire {
WireApi::Responses | WireApi::Compact => "responses",
WireApi::Chat => "chat/completions",
}
fn path() -> &'static str {
"responses"
}
pub async fn stream(
@@ -122,20 +123,33 @@ impl<T: HttpTransport, A: AuthProvider> ResponsesClient<T, A> {
compression: Compression,
turn_state: Option<Arc<OnceLock<String>>>,
) -> Result<ResponseStream, ApiError> {
let compression = match compression {
let request_compression = match compression {
Compression::None => RequestCompression::None,
Compression::Zstd => RequestCompression::Zstd,
};
self.streaming
.stream(
self.path(),
body,
let stream_response = self
.session
.stream_with(
Method::POST,
Self::path(),
extra_headers,
compression,
spawn_response_stream,
turn_state,
Some(body),
|req| {
req.headers.insert(
http::header::ACCEPT,
HeaderValue::from_static("text/event-stream"),
);
req.compression = request_compression;
},
)
.await
.await?;
Ok(spawn_response_stream(
stream_response,
self.session.provider().stream_idle_timeout,
self.sse_telemetry.clone(),
turn_state,
))
}
}

View File

@@ -1,4 +1,5 @@
use crate::auth::AuthProvider;
use crate::auth::add_auth_headers_to_header_map;
use crate::common::ResponseEvent;
use crate::common::ResponseStream;
use crate::common::ResponsesWsRequest;
@@ -11,7 +12,6 @@ use codex_client::TransportError;
use futures::SinkExt;
use futures::StreamExt;
use http::HeaderMap;
use http::HeaderValue;
use serde_json::Value;
use std::sync::Arc;
use std::sync::OnceLock;
@@ -134,7 +134,7 @@ impl<A: AuthProvider> ResponsesWebsocketClient<A> {
let mut headers = self.provider.headers.clone();
headers.extend(extra_headers);
apply_auth_headers(&mut headers, &self.auth);
add_auth_headers_to_header_map(&self.auth, &mut headers);
let (stream, server_reasoning_included) =
connect_websocket(ws_url, headers, turn_state).await?;
@@ -147,20 +147,6 @@ impl<A: AuthProvider> ResponsesWebsocketClient<A> {
}
}
// TODO (pakrym): share with /auth
fn apply_auth_headers(headers: &mut HeaderMap, auth: &impl AuthProvider) {
if let Some(token) = auth.bearer_token()
&& let Ok(header) = HeaderValue::from_str(&format!("Bearer {token}"))
{
let _ = headers.insert(http::header::AUTHORIZATION, header);
}
if let Some(account_id) = auth.account_id()
&& let Ok(header) = HeaderValue::from_str(&account_id)
{
let _ = headers.insert("ChatGPT-Account-ID", header);
}
}
async fn connect_websocket(
url: Url,
headers: HeaderMap,

View File

@@ -0,0 +1,126 @@
use crate::auth::AuthProvider;
use crate::auth::add_auth_headers;
use crate::error::ApiError;
use crate::provider::Provider;
use crate::telemetry::run_with_request_telemetry;
use codex_client::HttpTransport;
use codex_client::Request;
use codex_client::RequestTelemetry;
use codex_client::Response;
use codex_client::StreamResponse;
use http::HeaderMap;
use http::Method;
use serde_json::Value;
use std::sync::Arc;
pub(crate) struct EndpointSession<T: HttpTransport, A: AuthProvider> {
transport: T,
provider: Provider,
auth: A,
request_telemetry: Option<Arc<dyn RequestTelemetry>>,
}
impl<T: HttpTransport, A: AuthProvider> EndpointSession<T, A> {
pub(crate) fn new(transport: T, provider: Provider, auth: A) -> Self {
Self {
transport,
provider,
auth,
request_telemetry: None,
}
}
pub(crate) fn with_request_telemetry(
mut self,
request: Option<Arc<dyn RequestTelemetry>>,
) -> Self {
self.request_telemetry = request;
self
}
pub(crate) fn provider(&self) -> &Provider {
&self.provider
}
fn make_request(
&self,
method: &Method,
path: &str,
extra_headers: &HeaderMap,
body: Option<&Value>,
) -> Request {
let mut req = self.provider.build_request(method.clone(), path);
req.headers.extend(extra_headers.clone());
if let Some(body) = body {
req.body = Some(body.clone());
}
add_auth_headers(&self.auth, req)
}
pub(crate) async fn execute(
&self,
method: Method,
path: &str,
extra_headers: HeaderMap,
body: Option<Value>,
) -> Result<Response, ApiError> {
self.execute_with(method, path, extra_headers, body, |_| {})
.await
}
pub(crate) async fn execute_with<C>(
&self,
method: Method,
path: &str,
extra_headers: HeaderMap,
body: Option<Value>,
configure: C,
) -> Result<Response, ApiError>
where
C: Fn(&mut Request),
{
let make_request = || {
let mut req = self.make_request(&method, path, &extra_headers, body.as_ref());
configure(&mut req);
req
};
let response = run_with_request_telemetry(
self.provider.retry.to_policy(),
self.request_telemetry.clone(),
make_request,
|req| self.transport.execute(req),
)
.await?;
Ok(response)
}
pub(crate) async fn stream_with<C>(
&self,
method: Method,
path: &str,
extra_headers: HeaderMap,
body: Option<Value>,
configure: C,
) -> Result<StreamResponse, ApiError>
where
C: Fn(&mut Request),
{
let make_request = || {
let mut req = self.make_request(&method, path, &extra_headers, body.as_ref());
configure(&mut req);
req
};
let stream = run_with_request_telemetry(
self.provider.retry.to_policy(),
self.request_telemetry.clone(),
make_request,
|req| self.transport.stream(req),
)
.await?;
Ok(stream)
}
}

View File

@@ -1,95 +0,0 @@
use crate::auth::AuthProvider;
use crate::auth::add_auth_headers;
use crate::common::ResponseStream;
use crate::error::ApiError;
use crate::provider::Provider;
use crate::telemetry::SseTelemetry;
use crate::telemetry::run_with_request_telemetry;
use codex_client::HttpTransport;
use codex_client::RequestCompression;
use codex_client::RequestTelemetry;
use codex_client::StreamResponse;
use http::HeaderMap;
use http::Method;
use serde_json::Value;
use std::sync::Arc;
use std::sync::OnceLock;
use std::time::Duration;
pub(crate) struct StreamingClient<T: HttpTransport, A: AuthProvider> {
transport: T,
provider: Provider,
auth: A,
request_telemetry: Option<Arc<dyn RequestTelemetry>>,
sse_telemetry: Option<Arc<dyn SseTelemetry>>,
}
type StreamSpawner = fn(
StreamResponse,
Duration,
Option<Arc<dyn SseTelemetry>>,
Option<Arc<OnceLock<String>>>,
) -> ResponseStream;
impl<T: HttpTransport, A: AuthProvider> StreamingClient<T, A> {
pub(crate) fn new(transport: T, provider: Provider, auth: A) -> Self {
Self {
transport,
provider,
auth,
request_telemetry: None,
sse_telemetry: None,
}
}
pub(crate) fn with_telemetry(
mut self,
request: Option<Arc<dyn RequestTelemetry>>,
sse: Option<Arc<dyn SseTelemetry>>,
) -> Self {
self.request_telemetry = request;
self.sse_telemetry = sse;
self
}
pub(crate) fn provider(&self) -> &Provider {
&self.provider
}
pub(crate) async fn stream(
&self,
path: &str,
body: Value,
extra_headers: HeaderMap,
compression: RequestCompression,
spawner: StreamSpawner,
turn_state: Option<Arc<OnceLock<String>>>,
) -> Result<ResponseStream, ApiError> {
let builder = || {
let mut req = self.provider.build_request(Method::POST, path);
req.headers.extend(extra_headers.clone());
req.headers.insert(
http::header::ACCEPT,
http::HeaderValue::from_static("text/event-stream"),
);
req.body = Some(body.clone());
req.compression = compression;
add_auth_headers(&self.auth, req)
};
let stream_response = run_with_request_telemetry(
self.provider.retry.to_policy(),
self.request_telemetry.clone(),
builder,
|req| self.transport.stream(req),
)
.await?;
Ok(spawner(
stream_response,
self.provider.stream_idle_timeout,
self.sse_telemetry.clone(),
turn_state,
))
}
}

View File

@@ -22,8 +22,7 @@ pub use crate::common::ResponseEvent;
pub use crate::common::ResponseStream;
pub use crate::common::ResponsesApiRequest;
pub use crate::common::create_text_param_for_request;
pub use crate::endpoint::chat::AggregateStreamExt;
pub use crate::endpoint::chat::ChatClient;
pub use crate::endpoint::aggregate::AggregateStreamExt;
pub use crate::endpoint::compact::CompactClient;
pub use crate::endpoint::models::ModelsClient;
pub use crate::endpoint::responses::ResponsesClient;
@@ -32,10 +31,7 @@ pub use crate::endpoint::responses_websocket::ResponsesWebsocketClient;
pub use crate::endpoint::responses_websocket::ResponsesWebsocketConnection;
pub use crate::error::ApiError;
pub use crate::provider::Provider;
pub use crate::provider::WireApi;
pub use crate::provider::is_azure_responses_wire_base_url;
pub use crate::requests::ChatRequest;
pub use crate::requests::ChatRequestBuilder;
pub use crate::requests::ResponsesRequest;
pub use crate::requests::ResponsesRequestBuilder;
pub use crate::sse::stream_from_fixture;

View File

@@ -8,14 +8,6 @@ use std::collections::HashMap;
use std::time::Duration;
use url::Url;
/// Wire-level APIs supported by a `Provider`.
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum WireApi {
Responses,
Chat,
Compact,
}
/// High-level retry configuration for a provider.
///
/// This is converted into a `RetryPolicy` used by `codex-client` to drive
@@ -52,7 +44,6 @@ pub struct Provider {
pub name: String,
pub base_url: String,
pub query_params: Option<HashMap<String, String>>,
pub wire: WireApi,
pub headers: HeaderMap,
pub retry: RetryConfig,
pub stream_idle_timeout: Duration,
@@ -95,7 +86,7 @@ impl Provider {
}
pub fn is_azure_responses_endpoint(&self) -> bool {
is_azure_responses_wire_base_url(self.wire.clone(), &self.name, Some(&self.base_url))
is_azure_responses_wire_base_url(&self.name, Some(&self.base_url))
}
pub fn websocket_url_for_path(&self, path: &str) -> Result<Url, url::ParseError> {
@@ -112,11 +103,7 @@ impl Provider {
}
}
pub fn is_azure_responses_wire_base_url(wire: WireApi, name: &str, base_url: Option<&str>) -> bool {
if wire != WireApi::Responses {
return false;
}
pub fn is_azure_responses_wire_base_url(name: &str, base_url: Option<&str>) -> bool {
if name.eq_ignore_ascii_case("azure") {
return true;
}
@@ -157,13 +144,12 @@ mod tests {
for base_url in positive_cases {
assert!(
is_azure_responses_wire_base_url(WireApi::Responses, "test", Some(base_url)),
is_azure_responses_wire_base_url("test", Some(base_url)),
"expected {base_url} to be detected as Azure"
);
}
assert!(is_azure_responses_wire_base_url(
WireApi::Responses,
"Azure",
Some("https://example.com")
));
@@ -176,15 +162,9 @@ mod tests {
for base_url in negative_cases {
assert!(
!is_azure_responses_wire_base_url(WireApi::Responses, "test", Some(base_url)),
!is_azure_responses_wire_base_url("test", Some(base_url)),
"expected {base_url} not to be detected as Azure"
);
}
assert!(!is_azure_responses_wire_base_url(
WireApi::Chat,
"Azure",
Some("https://foo.openai.azure.com/openai")
));
}
}

View File

@@ -1,492 +0,0 @@
use crate::error::ApiError;
use crate::provider::Provider;
use crate::requests::headers::build_conversation_headers;
use crate::requests::headers::insert_header;
use crate::requests::headers::subagent_header;
use codex_protocol::models::ContentItem;
use codex_protocol::models::FunctionCallOutputContentItem;
use codex_protocol::models::ReasoningItemContent;
use codex_protocol::models::ResponseItem;
use codex_protocol::protocol::SessionSource;
use http::HeaderMap;
use serde_json::Value;
use serde_json::json;
use std::collections::HashMap;
/// Assembled request body plus headers for Chat Completions streaming calls.
pub struct ChatRequest {
pub body: Value,
pub headers: HeaderMap,
}
pub struct ChatRequestBuilder<'a> {
model: &'a str,
instructions: &'a str,
input: &'a [ResponseItem],
tools: &'a [Value],
conversation_id: Option<String>,
session_source: Option<SessionSource>,
}
impl<'a> ChatRequestBuilder<'a> {
pub fn new(
model: &'a str,
instructions: &'a str,
input: &'a [ResponseItem],
tools: &'a [Value],
) -> Self {
Self {
model,
instructions,
input,
tools,
conversation_id: None,
session_source: None,
}
}
pub fn conversation_id(mut self, id: Option<String>) -> Self {
self.conversation_id = id;
self
}
pub fn session_source(mut self, source: Option<SessionSource>) -> Self {
self.session_source = source;
self
}
pub fn build(self, _provider: &Provider) -> Result<ChatRequest, ApiError> {
let mut messages = Vec::<Value>::new();
messages.push(json!({"role": "system", "content": self.instructions}));
let input = self.input;
let mut reasoning_by_anchor_index: HashMap<usize, String> = HashMap::new();
let mut last_emitted_role: Option<&str> = None;
for item in input {
match item {
ResponseItem::Message { role, .. } => last_emitted_role = Some(role.as_str()),
ResponseItem::FunctionCall { .. } | ResponseItem::LocalShellCall { .. } => {
last_emitted_role = Some("assistant")
}
ResponseItem::FunctionCallOutput { .. } => last_emitted_role = Some("tool"),
ResponseItem::Reasoning { .. } | ResponseItem::Other => {}
ResponseItem::CustomToolCall { .. } => {}
ResponseItem::CustomToolCallOutput { .. } => {}
ResponseItem::WebSearchCall { .. } => {}
ResponseItem::GhostSnapshot { .. } => {}
ResponseItem::Compaction { .. } => {}
}
}
let mut last_user_index: Option<usize> = None;
for (idx, item) in input.iter().enumerate() {
if let ResponseItem::Message { role, .. } = item
&& role == "user"
{
last_user_index = Some(idx);
}
}
if !matches!(last_emitted_role, Some("user")) {
for (idx, item) in input.iter().enumerate() {
if let Some(u_idx) = last_user_index
&& idx <= u_idx
{
continue;
}
if let ResponseItem::Reasoning {
content: Some(items),
..
} = item
{
let mut text = String::new();
for entry in items {
match entry {
ReasoningItemContent::ReasoningText { text: segment }
| ReasoningItemContent::Text { text: segment } => {
text.push_str(segment)
}
}
}
if text.trim().is_empty() {
continue;
}
let mut attached = false;
if idx > 0
&& let ResponseItem::Message { role, .. } = &input[idx - 1]
&& role == "assistant"
{
reasoning_by_anchor_index
.entry(idx - 1)
.and_modify(|v| v.push_str(&text))
.or_insert(text.clone());
attached = true;
}
if !attached && idx + 1 < input.len() {
match &input[idx + 1] {
ResponseItem::FunctionCall { .. }
| ResponseItem::LocalShellCall { .. } => {
reasoning_by_anchor_index
.entry(idx + 1)
.and_modify(|v| v.push_str(&text))
.or_insert(text.clone());
}
ResponseItem::Message { role, .. } if role == "assistant" => {
reasoning_by_anchor_index
.entry(idx + 1)
.and_modify(|v| v.push_str(&text))
.or_insert(text.clone());
}
_ => {}
}
}
}
}
}
let mut last_assistant_text: Option<String> = None;
for (idx, item) in input.iter().enumerate() {
match item {
ResponseItem::Message { role, content, .. } => {
let mut text = String::new();
let mut items: Vec<Value> = Vec::new();
let mut saw_image = false;
for c in content {
match c {
ContentItem::InputText { text: t }
| ContentItem::OutputText { text: t } => {
text.push_str(t);
items.push(json!({"type":"text","text": t}));
}
ContentItem::InputImage { image_url } => {
saw_image = true;
items.push(
json!({"type":"image_url","image_url": {"url": image_url}}),
);
}
}
}
if role == "assistant" {
if let Some(prev) = &last_assistant_text
&& prev == &text
{
continue;
}
last_assistant_text = Some(text.clone());
}
let content_value = if role == "assistant" {
json!(text)
} else if saw_image {
json!(items)
} else {
json!(text)
};
let mut msg = json!({"role": role, "content": content_value});
if role == "assistant"
&& let Some(reasoning) = reasoning_by_anchor_index.get(&idx)
&& let Some(obj) = msg.as_object_mut()
{
obj.insert("reasoning".to_string(), json!(reasoning));
}
messages.push(msg);
}
ResponseItem::FunctionCall {
name,
arguments,
call_id,
..
} => {
let reasoning = reasoning_by_anchor_index.get(&idx).map(String::as_str);
let tool_call = json!({
"id": call_id,
"type": "function",
"function": {
"name": name,
"arguments": arguments,
}
});
push_tool_call_message(&mut messages, tool_call, reasoning);
}
ResponseItem::LocalShellCall {
id,
call_id: _,
status,
action,
} => {
let reasoning = reasoning_by_anchor_index.get(&idx).map(String::as_str);
let tool_call = json!({
"id": id.clone().unwrap_or_default(),
"type": "local_shell_call",
"status": status,
"action": action,
});
push_tool_call_message(&mut messages, tool_call, reasoning);
}
ResponseItem::FunctionCallOutput { call_id, output } => {
let content_value = if let Some(items) = &output.content_items {
let mapped: Vec<Value> = items
.iter()
.map(|it| match it {
FunctionCallOutputContentItem::InputText { text } => {
json!({"type":"text","text": text})
}
FunctionCallOutputContentItem::InputImage { image_url } => {
json!({"type":"image_url","image_url": {"url": image_url}})
}
})
.collect();
json!(mapped)
} else {
json!(output.content)
};
messages.push(json!({
"role": "tool",
"tool_call_id": call_id,
"content": content_value,
}));
}
ResponseItem::CustomToolCall {
id,
call_id: _,
name,
input,
status: _,
} => {
let tool_call = json!({
"id": id,
"type": "custom",
"custom": {
"name": name,
"input": input,
}
});
let reasoning = reasoning_by_anchor_index.get(&idx).map(String::as_str);
push_tool_call_message(&mut messages, tool_call, reasoning);
}
ResponseItem::CustomToolCallOutput { call_id, output } => {
messages.push(json!({
"role": "tool",
"tool_call_id": call_id,
"content": output,
}));
}
ResponseItem::GhostSnapshot { .. } => {
continue;
}
ResponseItem::Reasoning { .. }
| ResponseItem::WebSearchCall { .. }
| ResponseItem::Other
| ResponseItem::Compaction { .. } => {
continue;
}
}
}
let payload = json!({
"model": self.model,
"messages": messages,
"stream": true,
"tools": self.tools,
});
let mut headers = build_conversation_headers(self.conversation_id);
if let Some(subagent) = subagent_header(&self.session_source) {
insert_header(&mut headers, "x-openai-subagent", &subagent);
}
Ok(ChatRequest {
body: payload,
headers,
})
}
}
fn push_tool_call_message(messages: &mut Vec<Value>, tool_call: Value, reasoning: Option<&str>) {
// Chat Completions requires that tool calls are grouped into a single assistant message
// (with `tool_calls: [...]`) followed by tool role responses.
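// For illustration (shapes taken from the tests below), three consecutive calls
// collapse into one assistant message followed by three tool replies:
//   {"role": "assistant", "content": null, "tool_calls": [{"id": "call-a", ...}, {"id": "call-b", ...}, {"id": "call-c", ...}]}
//   {"role": "tool", "tool_call_id": "call-a", "content": "A"}
//   {"role": "tool", "tool_call_id": "call-b", "content": "B"}
//   {"role": "tool", "tool_call_id": "call-c", "content": "C"}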
if let Some(Value::Object(obj)) = messages.last_mut()
&& obj.get("role").and_then(Value::as_str) == Some("assistant")
&& obj.get("content").is_some_and(Value::is_null)
&& let Some(tool_calls) = obj.get_mut("tool_calls").and_then(Value::as_array_mut)
{
tool_calls.push(tool_call);
if let Some(reasoning) = reasoning {
if let Some(Value::String(existing)) = obj.get_mut("reasoning") {
if !existing.is_empty() {
existing.push('\n');
}
existing.push_str(reasoning);
} else {
obj.insert(
"reasoning".to_string(),
Value::String(reasoning.to_string()),
);
}
}
return;
}
let mut msg = json!({
"role": "assistant",
"content": null,
"tool_calls": [tool_call],
});
if let Some(reasoning) = reasoning
&& let Some(obj) = msg.as_object_mut()
{
obj.insert("reasoning".to_string(), json!(reasoning));
}
messages.push(msg);
}
#[cfg(test)]
mod tests {
use super::*;
use crate::provider::RetryConfig;
use crate::provider::WireApi;
use codex_protocol::models::FunctionCallOutputPayload;
use codex_protocol::protocol::SessionSource;
use codex_protocol::protocol::SubAgentSource;
use http::HeaderValue;
use pretty_assertions::assert_eq;
use std::time::Duration;
fn provider() -> Provider {
Provider {
name: "openai".to_string(),
base_url: "https://api.openai.com/v1".to_string(),
query_params: None,
wire: WireApi::Chat,
headers: HeaderMap::new(),
retry: RetryConfig {
max_attempts: 1,
base_delay: Duration::from_millis(10),
retry_429: false,
retry_5xx: true,
retry_transport: true,
},
stream_idle_timeout: Duration::from_secs(1),
}
}
#[test]
fn attaches_conversation_and_subagent_headers() {
let prompt_input = vec![ResponseItem::Message {
id: None,
role: "user".to_string(),
content: vec![ContentItem::InputText {
text: "hi".to_string(),
}],
end_turn: None,
}];
let req = ChatRequestBuilder::new("gpt-test", "inst", &prompt_input, &[])
.conversation_id(Some("conv-1".into()))
.session_source(Some(SessionSource::SubAgent(SubAgentSource::Review)))
.build(&provider())
.expect("request");
assert_eq!(
req.headers.get("session_id"),
Some(&HeaderValue::from_static("conv-1"))
);
assert_eq!(
req.headers.get("x-openai-subagent"),
Some(&HeaderValue::from_static("review"))
);
}
#[test]
fn groups_consecutive_tool_calls_into_a_single_assistant_message() {
let prompt_input = vec![
ResponseItem::Message {
id: None,
role: "user".to_string(),
content: vec![ContentItem::InputText {
text: "read these".to_string(),
}],
end_turn: None,
},
ResponseItem::FunctionCall {
id: None,
name: "read_file".to_string(),
arguments: r#"{"path":"a.txt"}"#.to_string(),
call_id: "call-a".to_string(),
},
ResponseItem::FunctionCall {
id: None,
name: "read_file".to_string(),
arguments: r#"{"path":"b.txt"}"#.to_string(),
call_id: "call-b".to_string(),
},
ResponseItem::FunctionCall {
id: None,
name: "read_file".to_string(),
arguments: r#"{"path":"c.txt"}"#.to_string(),
call_id: "call-c".to_string(),
},
ResponseItem::FunctionCallOutput {
call_id: "call-a".to_string(),
output: FunctionCallOutputPayload {
content: "A".to_string(),
..Default::default()
},
},
ResponseItem::FunctionCallOutput {
call_id: "call-b".to_string(),
output: FunctionCallOutputPayload {
content: "B".to_string(),
..Default::default()
},
},
ResponseItem::FunctionCallOutput {
call_id: "call-c".to_string(),
output: FunctionCallOutputPayload {
content: "C".to_string(),
..Default::default()
},
},
];
let req = ChatRequestBuilder::new("gpt-test", "inst", &prompt_input, &[])
.build(&provider())
.expect("request");
let messages = req
.body
.get("messages")
.and_then(|v| v.as_array())
.expect("messages array");
// system + user + assistant(tool_calls=[...]) + 3 tool outputs
assert_eq!(messages.len(), 6);
assert_eq!(messages[0]["role"], "system");
assert_eq!(messages[1]["role"], "user");
let tool_calls_msg = &messages[2];
assert_eq!(tool_calls_msg["role"], "assistant");
assert_eq!(tool_calls_msg["content"], serde_json::Value::Null);
let tool_calls = tool_calls_msg["tool_calls"]
.as_array()
.expect("tool_calls array");
assert_eq!(tool_calls.len(), 3);
assert_eq!(tool_calls[0]["id"], "call-a");
assert_eq!(tool_calls[1]["id"], "call-b");
assert_eq!(tool_calls[2]["id"], "call-c");
assert_eq!(messages[3]["role"], "tool");
assert_eq!(messages[3]["tool_call_id"], "call-a");
assert_eq!(messages[4]["role"], "tool");
assert_eq!(messages[4]["tool_call_id"], "call-b");
assert_eq!(messages[5]["role"], "tool");
assert_eq!(messages[5]["tool_call_id"], "call-c");
}
}

View File

@@ -1,8 +1,5 @@
pub mod chat;
pub(crate) mod headers;
pub mod responses;
pub use chat::ChatRequest;
pub use chat::ChatRequestBuilder;
pub use responses::ResponsesRequest;
pub use responses::ResponsesRequestBuilder;

View File

@@ -191,7 +191,6 @@ fn attach_item_ids(payload_json: &mut Value, original_items: &[ResponseItem]) {
mod tests {
use super::*;
use crate::provider::RetryConfig;
use crate::provider::WireApi;
use codex_protocol::protocol::SubAgentSource;
use http::HeaderValue;
use pretty_assertions::assert_eq;
@@ -202,7 +201,6 @@ mod tests {
name: name.to_string(),
base_url: base_url.to_string(),
query_params: None,
wire: WireApi::Responses,
headers: HeaderMap::new(),
retry: RetryConfig {
max_attempts: 1,
@@ -224,12 +222,14 @@ mod tests {
role: "assistant".into(),
content: Vec::new(),
end_turn: None,
phase: None,
},
ResponseItem::Message {
id: None,
role: "assistant".into(),
content: Vec::new(),
end_turn: None,
phase: None,
},
];

View File

@@ -1,716 +0,0 @@
use crate::common::ResponseEvent;
use crate::common::ResponseStream;
use crate::error::ApiError;
use crate::telemetry::SseTelemetry;
use codex_client::StreamResponse;
use codex_protocol::models::ContentItem;
use codex_protocol::models::ReasoningItemContent;
use codex_protocol::models::ResponseItem;
use eventsource_stream::Eventsource;
use futures::Stream;
use futures::StreamExt;
use std::collections::HashMap;
use std::collections::HashSet;
use std::sync::Arc;
use std::sync::OnceLock;
use std::time::Duration;
use tokio::sync::mpsc;
use tokio::time::Instant;
use tokio::time::timeout;
use tracing::debug;
use tracing::trace;
pub(crate) fn spawn_chat_stream(
stream_response: StreamResponse,
idle_timeout: Duration,
telemetry: Option<Arc<dyn SseTelemetry>>,
_turn_state: Option<Arc<OnceLock<String>>>,
) -> ResponseStream {
let (tx_event, rx_event) = mpsc::channel::<Result<ResponseEvent, ApiError>>(1600);
tokio::spawn(async move {
process_chat_sse(stream_response.bytes, tx_event, idle_timeout, telemetry).await;
});
ResponseStream { rx_event }
}
/// Processes Server-Sent Events from the legacy Chat Completions streaming API.
///
/// The upstream protocol terminates a streaming response with a final sentinel event
/// (`data: [DONE]`). Historically, some of our test stubs have emitted `data: DONE`
/// (without brackets) instead.
///
/// `eventsource_stream` delivers these sentinels as regular events rather than signaling
/// end-of-stream. If we try to parse them as JSON, we log and skip them, then keep
/// polling for more events.
///
/// On servers that keep the HTTP connection open after emitting the sentinel (notably
/// wiremock on Windows), skipping the sentinel means we never emit `ResponseEvent::Completed`.
/// Higher-level workflows/tests that wait for completion before issuing subsequent model
/// calls will then stall, which shows up as "expected N requests, got 1" verification
/// failures in the mock server.
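///
/// As an illustration (mirroring the test fixtures below, not a verbatim capture),
/// a sentinel-terminated body from such a stub looks like:
///
///   event: message
///   data: {"choices":[{"delta":{"content":"hi"}}]}
///
///   event: message
///   data: [DONE]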
pub async fn process_chat_sse<S>(
stream: S,
tx_event: mpsc::Sender<Result<ResponseEvent, ApiError>>,
idle_timeout: Duration,
telemetry: Option<std::sync::Arc<dyn SseTelemetry>>,
) where
S: Stream<Item = Result<bytes::Bytes, codex_client::TransportError>> + Unpin,
{
let mut stream = stream.eventsource();
#[derive(Default, Debug)]
struct ToolCallState {
id: Option<String>,
name: Option<String>,
arguments: String,
}
let mut tool_calls: HashMap<usize, ToolCallState> = HashMap::new();
let mut tool_call_order: Vec<usize> = Vec::new();
let mut tool_call_order_seen: HashSet<usize> = HashSet::new();
let mut tool_call_index_by_id: HashMap<String, usize> = HashMap::new();
let mut next_tool_call_index = 0usize;
let mut last_tool_call_index: Option<usize> = None;
let mut assistant_item: Option<ResponseItem> = None;
let mut reasoning_item: Option<ResponseItem> = None;
let mut completed_sent = false;
async fn flush_and_complete(
tx_event: &mpsc::Sender<Result<ResponseEvent, ApiError>>,
reasoning_item: &mut Option<ResponseItem>,
assistant_item: &mut Option<ResponseItem>,
) {
if let Some(reasoning) = reasoning_item.take() {
let _ = tx_event
.send(Ok(ResponseEvent::OutputItemDone(reasoning)))
.await;
}
if let Some(assistant) = assistant_item.take() {
let _ = tx_event
.send(Ok(ResponseEvent::OutputItemDone(assistant)))
.await;
}
let _ = tx_event
.send(Ok(ResponseEvent::Completed {
response_id: String::new(),
token_usage: None,
}))
.await;
}
loop {
let start = Instant::now();
let response = timeout(idle_timeout, stream.next()).await;
if let Some(t) = telemetry.as_ref() {
t.on_sse_poll(&response, start.elapsed());
}
let sse = match response {
Ok(Some(Ok(sse))) => sse,
Ok(Some(Err(e))) => {
let _ = tx_event.send(Err(ApiError::Stream(e.to_string()))).await;
return;
}
Ok(None) => {
if !completed_sent {
flush_and_complete(&tx_event, &mut reasoning_item, &mut assistant_item).await;
}
return;
}
Err(_) => {
let _ = tx_event
.send(Err(ApiError::Stream("idle timeout waiting for SSE".into())))
.await;
return;
}
};
trace!("SSE event: {}", sse.data);
let data = sse.data.trim();
if data.is_empty() {
continue;
}
if data == "[DONE]" || data == "DONE" {
if !completed_sent {
flush_and_complete(&tx_event, &mut reasoning_item, &mut assistant_item).await;
}
return;
}
let value: serde_json::Value = match serde_json::from_str(data) {
Ok(val) => val,
Err(err) => {
debug!(
"Failed to parse ChatCompletions SSE event: {err}, data: {}",
data
);
continue;
}
};
let Some(choices) = value.get("choices").and_then(|c| c.as_array()) else {
continue;
};
for choice in choices {
if let Some(delta) = choice.get("delta") {
if let Some(reasoning) = delta.get("reasoning") {
if let Some(text) = reasoning.as_str() {
append_reasoning_text(&tx_event, &mut reasoning_item, text.to_string())
.await;
} else if let Some(text) = reasoning.get("text").and_then(|v| v.as_str()) {
append_reasoning_text(&tx_event, &mut reasoning_item, text.to_string())
.await;
} else if let Some(text) = reasoning.get("content").and_then(|v| v.as_str()) {
append_reasoning_text(&tx_event, &mut reasoning_item, text.to_string())
.await;
}
}
if let Some(content) = delta.get("content") {
if content.is_array() {
for item in content.as_array().unwrap_or(&vec![]) {
if let Some(text) = item.get("text").and_then(|t| t.as_str()) {
append_assistant_text(
&tx_event,
&mut assistant_item,
text.to_string(),
)
.await;
}
}
} else if let Some(text) = content.as_str() {
append_assistant_text(&tx_event, &mut assistant_item, text.to_string())
.await;
}
}
if let Some(tool_call_values) = delta.get("tool_calls").and_then(|c| c.as_array()) {
for tool_call in tool_call_values {
let mut index = tool_call
.get("index")
.and_then(serde_json::Value::as_u64)
.map(|i| i as usize);
let mut call_id_for_lookup = None;
if let Some(call_id) = tool_call.get("id").and_then(|i| i.as_str()) {
call_id_for_lookup = Some(call_id.to_string());
if let Some(existing) = tool_call_index_by_id.get(call_id) {
index = Some(*existing);
}
}
if index.is_none() && call_id_for_lookup.is_none() {
index = last_tool_call_index;
}
let index = index.unwrap_or_else(|| {
while tool_calls.contains_key(&next_tool_call_index) {
next_tool_call_index += 1;
}
let idx = next_tool_call_index;
next_tool_call_index += 1;
idx
});
let call_state = tool_calls.entry(index).or_default();
if tool_call_order_seen.insert(index) {
tool_call_order.push(index);
}
if let Some(id) = tool_call.get("id").and_then(|i| i.as_str()) {
call_state.id.get_or_insert_with(|| id.to_string());
tool_call_index_by_id.entry(id.to_string()).or_insert(index);
}
if let Some(func) = tool_call.get("function") {
if let Some(fname) = func.get("name").and_then(|n| n.as_str())
&& !fname.is_empty()
{
call_state.name.get_or_insert_with(|| fname.to_string());
}
if let Some(arguments) = func.get("arguments").and_then(|a| a.as_str())
{
call_state.arguments.push_str(arguments);
}
}
last_tool_call_index = Some(index);
}
}
}
if let Some(message) = choice.get("message")
&& let Some(reasoning) = message.get("reasoning")
{
if let Some(text) = reasoning.as_str() {
append_reasoning_text(&tx_event, &mut reasoning_item, text.to_string()).await;
} else if let Some(text) = reasoning.get("text").and_then(|v| v.as_str()) {
append_reasoning_text(&tx_event, &mut reasoning_item, text.to_string()).await;
} else if let Some(text) = reasoning.get("content").and_then(|v| v.as_str()) {
append_reasoning_text(&tx_event, &mut reasoning_item, text.to_string()).await;
}
}
let finish_reason = choice.get("finish_reason").and_then(|r| r.as_str());
if finish_reason == Some("stop") {
if let Some(reasoning) = reasoning_item.take() {
let _ = tx_event
.send(Ok(ResponseEvent::OutputItemDone(reasoning)))
.await;
}
if let Some(assistant) = assistant_item.take() {
let _ = tx_event
.send(Ok(ResponseEvent::OutputItemDone(assistant)))
.await;
}
if !completed_sent {
let _ = tx_event
.send(Ok(ResponseEvent::Completed {
response_id: String::new(),
token_usage: None,
}))
.await;
completed_sent = true;
}
continue;
}
if finish_reason == Some("length") {
let _ = tx_event.send(Err(ApiError::ContextWindowExceeded)).await;
return;
}
if finish_reason == Some("tool_calls") {
if let Some(reasoning) = reasoning_item.take() {
let _ = tx_event
.send(Ok(ResponseEvent::OutputItemDone(reasoning)))
.await;
}
for index in tool_call_order.drain(..) {
let Some(state) = tool_calls.remove(&index) else {
continue;
};
tool_call_order_seen.remove(&index);
let ToolCallState {
id,
name,
arguments,
} = state;
let Some(name) = name else {
debug!("Skipping tool call at index {index} because name is missing");
continue;
};
let item = ResponseItem::FunctionCall {
id: None,
name,
arguments,
call_id: id.unwrap_or_else(|| format!("tool-call-{index}")),
};
let _ = tx_event.send(Ok(ResponseEvent::OutputItemDone(item))).await;
}
}
}
}
}
async fn append_assistant_text(
tx_event: &mpsc::Sender<Result<ResponseEvent, ApiError>>,
assistant_item: &mut Option<ResponseItem>,
text: String,
) {
if assistant_item.is_none() {
let item = ResponseItem::Message {
id: None,
role: "assistant".to_string(),
content: vec![],
end_turn: None,
};
*assistant_item = Some(item.clone());
let _ = tx_event
.send(Ok(ResponseEvent::OutputItemAdded(item)))
.await;
}
if let Some(ResponseItem::Message { content, .. }) = assistant_item {
content.push(ContentItem::OutputText { text: text.clone() });
let _ = tx_event
.send(Ok(ResponseEvent::OutputTextDelta(text.clone())))
.await;
}
}
async fn append_reasoning_text(
tx_event: &mpsc::Sender<Result<ResponseEvent, ApiError>>,
reasoning_item: &mut Option<ResponseItem>,
text: String,
) {
if reasoning_item.is_none() {
let item = ResponseItem::Reasoning {
id: String::new(),
summary: Vec::new(),
content: Some(vec![]),
encrypted_content: None,
};
*reasoning_item = Some(item.clone());
let _ = tx_event
.send(Ok(ResponseEvent::OutputItemAdded(item)))
.await;
}
if let Some(ResponseItem::Reasoning {
content: Some(content),
..
}) = reasoning_item
{
let content_index = content.len() as i64;
content.push(ReasoningItemContent::ReasoningText { text: text.clone() });
let _ = tx_event
.send(Ok(ResponseEvent::ReasoningContentDelta {
delta: text.clone(),
content_index,
}))
.await;
}
}
#[cfg(test)]
mod tests {
use super::*;
use assert_matches::assert_matches;
use codex_protocol::models::ResponseItem;
use futures::TryStreamExt;
use serde_json::json;
use tokio::sync::mpsc;
use tokio_util::io::ReaderStream;
fn build_body(events: &[serde_json::Value]) -> String {
let mut body = String::new();
for e in events {
body.push_str(&format!("event: message\ndata: {e}\n\n"));
}
body
}
/// Regression test: the stream should complete when we see a `[DONE]` sentinel.
///
/// This is important for tests/mocks that don't immediately close the underlying
/// connection after emitting the sentinel.
#[tokio::test]
async fn completes_on_done_sentinel_without_json() {
let events = collect_events("event: message\ndata: [DONE]\n\n").await;
assert_matches!(&events[..], [ResponseEvent::Completed { .. }]);
}
async fn collect_events(body: &str) -> Vec<ResponseEvent> {
let reader = ReaderStream::new(std::io::Cursor::new(body.to_string()))
.map_err(|err| codex_client::TransportError::Network(err.to_string()));
let (tx, mut rx) = mpsc::channel::<Result<ResponseEvent, ApiError>>(16);
tokio::spawn(process_chat_sse(
reader,
tx,
Duration::from_millis(1000),
None,
));
let mut out = Vec::new();
while let Some(ev) = rx.recv().await {
out.push(ev.expect("stream error"));
}
out
}
#[tokio::test]
async fn concatenates_tool_call_arguments_across_deltas() {
let delta_name = json!({
"choices": [{
"delta": {
"tool_calls": [{
"id": "call_a",
"index": 0,
"function": { "name": "do_a" }
}]
}
}]
});
let delta_args_1 = json!({
"choices": [{
"delta": {
"tool_calls": [{
"index": 0,
"function": { "arguments": "{ \"foo\":" }
}]
}
}]
});
let delta_args_2 = json!({
"choices": [{
"delta": {
"tool_calls": [{
"index": 0,
"function": { "arguments": "1}" }
}]
}
}]
});
let finish = json!({
"choices": [{
"finish_reason": "tool_calls"
}]
});
let body = build_body(&[delta_name, delta_args_1, delta_args_2, finish]);
let events = collect_events(&body).await;
assert_matches!(
&events[..],
[
ResponseEvent::OutputItemDone(ResponseItem::FunctionCall { call_id, name, arguments, .. }),
ResponseEvent::Completed { .. }
] if call_id == "call_a" && name == "do_a" && arguments == "{ \"foo\":1}"
);
}
#[tokio::test]
async fn emits_multiple_tool_calls() {
let delta_a = json!({
"choices": [{
"delta": {
"tool_calls": [{
"id": "call_a",
"function": { "name": "do_a", "arguments": "{\"foo\":1}" }
}]
}
}]
});
let delta_b = json!({
"choices": [{
"delta": {
"tool_calls": [{
"id": "call_b",
"function": { "name": "do_b", "arguments": "{\"bar\":2}" }
}]
}
}]
});
let finish = json!({
"choices": [{
"finish_reason": "tool_calls"
}]
});
let body = build_body(&[delta_a, delta_b, finish]);
let events = collect_events(&body).await;
assert_matches!(
&events[..],
[
ResponseEvent::OutputItemDone(ResponseItem::FunctionCall { call_id: call_a, name: name_a, arguments: args_a, .. }),
ResponseEvent::OutputItemDone(ResponseItem::FunctionCall { call_id: call_b, name: name_b, arguments: args_b, .. }),
ResponseEvent::Completed { .. }
] if call_a == "call_a" && name_a == "do_a" && args_a == "{\"foo\":1}" && call_b == "call_b" && name_b == "do_b" && args_b == "{\"bar\":2}"
);
}
#[tokio::test]
async fn emits_tool_calls_for_multiple_choices() {
let payload = json!({
"choices": [
{
"delta": {
"tool_calls": [{
"id": "call_a",
"index": 0,
"function": { "name": "do_a", "arguments": "{}" }
}]
},
"finish_reason": "tool_calls"
},
{
"delta": {
"tool_calls": [{
"id": "call_b",
"index": 0,
"function": { "name": "do_b", "arguments": "{}" }
}]
},
"finish_reason": "tool_calls"
}
]
});
let body = build_body(&[payload]);
let events = collect_events(&body).await;
assert_matches!(
&events[..],
[
ResponseEvent::OutputItemDone(ResponseItem::FunctionCall { call_id: call_a, name: name_a, arguments: args_a, .. }),
ResponseEvent::OutputItemDone(ResponseItem::FunctionCall { call_id: call_b, name: name_b, arguments: args_b, .. }),
ResponseEvent::Completed { .. }
] if call_a == "call_a" && name_a == "do_a" && args_a == "{}" && call_b == "call_b" && name_b == "do_b" && args_b == "{}"
);
}
#[tokio::test]
async fn merges_tool_calls_by_index_when_id_missing_on_subsequent_deltas() {
let delta_with_id = json!({
"choices": [{
"delta": {
"tool_calls": [{
"index": 0,
"id": "call_a",
"function": { "name": "do_a", "arguments": "{ \"foo\":" }
}]
}
}]
});
let delta_without_id = json!({
"choices": [{
"delta": {
"tool_calls": [{
"index": 0,
"function": { "arguments": "1}" }
}]
}
}]
});
let finish = json!({
"choices": [{
"finish_reason": "tool_calls"
}]
});
let body = build_body(&[delta_with_id, delta_without_id, finish]);
let events = collect_events(&body).await;
assert_matches!(
&events[..],
[
ResponseEvent::OutputItemDone(ResponseItem::FunctionCall { call_id, name, arguments, .. }),
ResponseEvent::Completed { .. }
] if call_id == "call_a" && name == "do_a" && arguments == "{ \"foo\":1}"
);
}
#[tokio::test]
async fn preserves_tool_call_name_when_empty_deltas_arrive() {
let delta_with_name = json!({
"choices": [{
"delta": {
"tool_calls": [{
"id": "call_a",
"function": { "name": "do_a" }
}]
}
}]
});
let delta_with_empty_name = json!({
"choices": [{
"delta": {
"tool_calls": [{
"id": "call_a",
"function": { "name": "", "arguments": "{}" }
}]
}
}]
});
let finish = json!({
"choices": [{
"finish_reason": "tool_calls"
}]
});
let body = build_body(&[delta_with_name, delta_with_empty_name, finish]);
let events = collect_events(&body).await;
assert_matches!(
&events[..],
[
ResponseEvent::OutputItemDone(ResponseItem::FunctionCall { name, arguments, .. }),
ResponseEvent::Completed { .. }
] if name == "do_a" && arguments == "{}"
);
}
#[tokio::test]
async fn emits_tool_calls_even_when_content_and_reasoning_present() {
let delta_content_and_tools = json!({
"choices": [{
"delta": {
"content": [{"text": "hi"}],
"reasoning": "because",
"tool_calls": [{
"id": "call_a",
"function": { "name": "do_a", "arguments": "{}" }
}]
}
}]
});
let finish = json!({
"choices": [{
"finish_reason": "tool_calls"
}]
});
let body = build_body(&[delta_content_and_tools, finish]);
let events = collect_events(&body).await;
assert_matches!(
&events[..],
[
ResponseEvent::OutputItemAdded(ResponseItem::Reasoning { .. }),
ResponseEvent::ReasoningContentDelta { .. },
ResponseEvent::OutputItemAdded(ResponseItem::Message { .. }),
ResponseEvent::OutputTextDelta(delta),
ResponseEvent::OutputItemDone(ResponseItem::Reasoning { .. }),
ResponseEvent::OutputItemDone(ResponseItem::FunctionCall { call_id, name, .. }),
ResponseEvent::OutputItemDone(ResponseItem::Message { .. }),
ResponseEvent::Completed { .. }
] if delta == "hi" && call_id == "call_a" && name == "do_a"
);
}
#[tokio::test]
async fn drops_partial_tool_calls_on_stop_finish_reason() {
let delta_tool = json!({
"choices": [{
"delta": {
"tool_calls": [{
"id": "call_a",
"function": { "name": "do_a", "arguments": "{}" }
}]
}
}]
});
let finish_stop = json!({
"choices": [{
"finish_reason": "stop"
}]
});
let body = build_body(&[delta_tool, finish_stop]);
let events = collect_events(&body).await;
assert!(!events.iter().any(|ev| {
matches!(
ev,
ResponseEvent::OutputItemDone(ResponseItem::FunctionCall { .. })
)
}));
assert_matches!(events.last(), Some(ResponseEvent::Completed { .. }));
}
}


@@ -1,4 +1,3 @@
pub mod chat;
pub mod responses;
pub use responses::process_sse;


@@ -429,6 +429,7 @@ mod tests {
use super::*;
use assert_matches::assert_matches;
use bytes::Bytes;
use codex_protocol::models::MessagePhase;
use codex_protocol::models::ResponseItem;
use futures::stream;
use pretty_assertions::assert_eq;
@@ -492,7 +493,8 @@ mod tests {
"item": {
"type": "message",
"role": "assistant",
"content": [{"type": "output_text", "text": "Hello"}]
"content": [{"type": "output_text", "text": "Hello"}],
"phase": "commentary"
}
})
.to_string();
@@ -523,8 +525,11 @@ mod tests {
assert_matches!(
&events[0],
Ok(ResponseEvent::OutputItemDone(ResponseItem::Message { role, .. }))
if role == "assistant"
Ok(ResponseEvent::OutputItemDone(ResponseItem::Message {
role,
phase: Some(MessagePhase::Commentary),
..
})) if role == "assistant"
);
assert_matches!(


@@ -6,11 +6,9 @@ use anyhow::Result;
use async_trait::async_trait;
use bytes::Bytes;
use codex_api::AuthProvider;
use codex_api::ChatClient;
use codex_api::Provider;
use codex_api::ResponsesClient;
use codex_api::ResponsesOptions;
use codex_api::WireApi;
use codex_api::requests::responses::Compression;
use codex_client::HttpTransport;
use codex_client::Request;
@@ -119,12 +117,11 @@ impl AuthProvider for StaticAuth {
}
}
fn provider(name: &str, wire: WireApi) -> Provider {
fn provider(name: &str) -> Provider {
Provider {
name: name.to_string(),
base_url: "https://example.com/v1".to_string(),
query_params: None,
wire,
headers: HeaderMap::new(),
retry: codex_api::provider::RetryConfig {
max_attempts: 1,
@@ -196,38 +193,10 @@ data: {"id":"resp-1","output":[{"type":"message","role":"assistant","content":[{
}
#[tokio::test]
async fn chat_client_uses_chat_completions_path_for_chat_wire() -> Result<()> {
async fn responses_client_uses_responses_path() -> Result<()> {
let state = RecordingState::default();
let transport = RecordingTransport::new(state.clone());
let client = ChatClient::new(transport, provider("openai", WireApi::Chat), NoAuth);
let body = serde_json::json!({ "echo": true });
let _stream = client.stream(body, HeaderMap::new()).await?;
let requests = state.take_stream_requests();
assert_path_ends_with(&requests, "/chat/completions");
Ok(())
}
#[tokio::test]
async fn chat_client_uses_responses_path_for_responses_wire() -> Result<()> {
let state = RecordingState::default();
let transport = RecordingTransport::new(state.clone());
let client = ChatClient::new(transport, provider("openai", WireApi::Responses), NoAuth);
let body = serde_json::json!({ "echo": true });
let _stream = client.stream(body, HeaderMap::new()).await?;
let requests = state.take_stream_requests();
assert_path_ends_with(&requests, "/responses");
Ok(())
}
#[tokio::test]
async fn responses_client_uses_responses_path_for_responses_wire() -> Result<()> {
let state = RecordingState::default();
let transport = RecordingTransport::new(state.clone());
let client = ResponsesClient::new(transport, provider("openai", WireApi::Responses), NoAuth);
let client = ResponsesClient::new(transport, provider("openai"), NoAuth);
let body = serde_json::json!({ "echo": true });
let _stream = client
@@ -239,28 +208,12 @@ async fn responses_client_uses_responses_path_for_responses_wire() -> Result<()>
Ok(())
}
#[tokio::test]
async fn responses_client_uses_chat_path_for_chat_wire() -> Result<()> {
let state = RecordingState::default();
let transport = RecordingTransport::new(state.clone());
let client = ResponsesClient::new(transport, provider("openai", WireApi::Chat), NoAuth);
let body = serde_json::json!({ "echo": true });
let _stream = client
.stream(body, HeaderMap::new(), Compression::None, None)
.await?;
let requests = state.take_stream_requests();
assert_path_ends_with(&requests, "/chat/completions");
Ok(())
}
#[tokio::test]
async fn streaming_client_adds_auth_headers() -> Result<()> {
let state = RecordingState::default();
let transport = RecordingTransport::new(state.clone());
let auth = StaticAuth::new("secret-token", "acct-1");
let client = ResponsesClient::new(transport, provider("openai", WireApi::Responses), auth);
let client = ResponsesClient::new(transport, provider("openai"), auth);
let body = serde_json::json!({ "model": "gpt-test" });
let _stream = client
@@ -295,7 +248,7 @@ async fn streaming_client_adds_auth_headers() -> Result<()> {
async fn streaming_client_retries_on_transport_error() -> Result<()> {
let transport = FlakyTransport::new();
let mut provider = provider("openai", WireApi::Responses);
let mut provider = provider("openai");
provider.retry.max_attempts = 2;
let client = ResponsesClient::new(transport.clone(), provider, NoAuth);
@@ -309,6 +262,7 @@ async fn streaming_client_retries_on_transport_error() -> Result<()> {
text: "hi".to_string(),
}],
end_turn: None,
phase: None,
}],
tools: Vec::<Value>::new(),
parallel_tool_calls: false,


@@ -2,7 +2,6 @@ use codex_api::AuthProvider;
use codex_api::ModelsClient;
use codex_api::provider::Provider;
use codex_api::provider::RetryConfig;
use codex_api::provider::WireApi;
use codex_client::ReqwestTransport;
use codex_protocol::openai_models::ConfigShellToolType;
use codex_protocol::openai_models::ModelInfo;
@@ -11,6 +10,7 @@ use codex_protocol::openai_models::ModelsResponse;
use codex_protocol::openai_models::ReasoningEffort;
use codex_protocol::openai_models::ReasoningEffortPreset;
use codex_protocol::openai_models::TruncationPolicyConfig;
use codex_protocol::openai_models::default_input_modalities;
use http::HeaderMap;
use http::Method;
use wiremock::Mock;
@@ -33,7 +33,6 @@ fn provider(base_url: &str) -> Provider {
name: "test".to_string(),
base_url: base_url.to_string(),
query_params: None,
wire: WireApi::Responses,
headers: HeaderMap::new(),
retry: RetryConfig {
max_attempts: 1,
@@ -88,6 +87,7 @@ async fn models_client_hits_models_endpoint() {
auto_compact_token_limit: None,
effective_context_window_percent: 95,
experimental_supported_tools: Vec::new(),
input_modalities: default_input_modalities(),
}],
};


@@ -8,7 +8,6 @@ use codex_api::AuthProvider;
use codex_api::Provider;
use codex_api::ResponseEvent;
use codex_api::ResponsesClient;
use codex_api::WireApi;
use codex_api::requests::responses::Compression;
use codex_client::HttpTransport;
use codex_client::Request;
@@ -61,12 +60,11 @@ impl AuthProvider for NoAuth {
}
}
fn provider(name: &str, wire: WireApi) -> Provider {
fn provider(name: &str) -> Provider {
Provider {
name: name.to_string(),
base_url: "https://example.com/v1".to_string(),
query_params: None,
wire,
headers: HeaderMap::new(),
retry: codex_api::provider::RetryConfig {
max_attempts: 1,
@@ -122,7 +120,7 @@ async fn responses_stream_parses_items_and_completed_end_to_end() -> Result<()>
let body = build_responses_body(vec![item1, item2, completed]);
let transport = FixtureSseTransport::new(body);
let client = ResponsesClient::new(transport, provider("openai", WireApi::Responses), NoAuth);
let client = ResponsesClient::new(transport, provider("openai"), NoAuth);
let mut stream = client
.stream(
@@ -192,7 +190,7 @@ async fn responses_stream_aggregates_output_text_deltas() -> Result<()> {
let body = build_responses_body(vec![delta1, delta2, completed]);
let transport = FixtureSseTransport::new(body);
let client = ResponsesClient::new(transport, provider("openai", WireApi::Responses), NoAuth);
let client = ResponsesClient::new(transport, provider("openai"), NoAuth);
let stream = client
.stream(


@@ -1,52 +1,18 @@
//! OSS provider utilities shared between TUI and exec.
use codex_core::LMSTUDIO_OSS_PROVIDER_ID;
use codex_core::OLLAMA_CHAT_PROVIDER_ID;
use codex_core::OLLAMA_OSS_PROVIDER_ID;
use codex_core::WireApi;
use codex_core::config::Config;
use codex_core::protocol::DeprecationNoticeEvent;
use std::io;
/// Returns the default model for a given OSS provider.
pub fn get_default_model_for_oss_provider(provider_id: &str) -> Option<&'static str> {
match provider_id {
LMSTUDIO_OSS_PROVIDER_ID => Some(codex_lmstudio::DEFAULT_OSS_MODEL),
OLLAMA_OSS_PROVIDER_ID | OLLAMA_CHAT_PROVIDER_ID => Some(codex_ollama::DEFAULT_OSS_MODEL),
OLLAMA_OSS_PROVIDER_ID => Some(codex_ollama::DEFAULT_OSS_MODEL),
_ => None,
}
}
/// Returns a deprecation notice if Ollama doesn't support the responses wire API.
pub async fn ollama_chat_deprecation_notice(
config: &Config,
) -> io::Result<Option<DeprecationNoticeEvent>> {
if config.model_provider_id != OLLAMA_OSS_PROVIDER_ID
|| config.model_provider.wire_api != WireApi::Responses
{
return Ok(None);
}
if let Some(detection) = codex_ollama::detect_wire_api(&config.model_provider).await?
&& detection.wire_api == WireApi::Chat
{
let version_suffix = detection
.version
.as_ref()
.map(|version| format!(" (version {version})"))
.unwrap_or_default();
let summary = format!(
"Your Ollama server{version_suffix} doesn't support the Responses API. Either update Ollama or set `oss_provider = \"{OLLAMA_CHAT_PROVIDER_ID}\"` (or `model_provider = \"{OLLAMA_CHAT_PROVIDER_ID}\"`) in your config.toml to use the \"chat\" wire API. Support for the \"chat\" wire API is deprecated and will soon be removed."
);
return Ok(Some(DeprecationNoticeEvent {
summary,
details: None,
}));
}
Ok(None)
}
/// Ensures the specified OSS provider is ready (models downloaded, service reachable).
pub async fn ensure_oss_provider_ready(
provider_id: &str,
@@ -58,7 +24,8 @@ pub async fn ensure_oss_provider_ready(
.await
.map_err(|e| std::io::Error::other(format!("OSS setup failed: {e}")))?;
}
OLLAMA_OSS_PROVIDER_ID | OLLAMA_CHAT_PROVIDER_ID => {
OLLAMA_OSS_PROVIDER_ID => {
codex_ollama::ensure_responses_supported(&config.model_provider).await?;
codex_ollama::ensure_oss_ready(config)
.await
.map_err(|e| std::io::Error::other(format!("OSS setup failed: {e}")))?;


@@ -45,6 +45,7 @@ codex-utils-pty = { workspace = true }
codex-utils-readiness = { workspace = true }
codex-utils-string = { workspace = true }
codex-windows-sandbox = { package = "codex-windows-sandbox", path = "../windows-sandbox-rs" }
dirs = { workspace = true }
dunce = { workspace = true }
encoding_rs = { workspace = true }
env-flags = { workspace = true }
@@ -56,7 +57,6 @@ indexmap = { workspace = true }
indoc = { workspace = true }
keyring = { workspace = true, features = ["crypto-rust"] }
libc = { workspace = true }
mcp-types = { workspace = true }
multimap = { workspace = true }
once_cell = { workspace = true }
os_info = { workspace = true }
@@ -64,6 +64,12 @@ rand = { workspace = true }
regex = { workspace = true }
regex-lite = { workspace = true }
reqwest = { workspace = true, features = ["json", "stream"] }
rmcp = { workspace = true, default-features = false, features = [
"base64",
"macros",
"schemars",
"server",
] }
schemars = { workspace = true }
serde = { workspace = true, features = ["derive"] }
serde_json = { workspace = true }


@@ -464,7 +464,7 @@
"$ref": "#/definitions/WireApi"
}
],
"default": "chat",
"default": "responses",
"description": "Which wire protocol this provider expects."
}
},
@@ -1087,7 +1087,7 @@
"type": "string"
},
"WireApi": {
"description": "Wire protocol that the provider speaks. Most third-party services only implement the classic OpenAI Chat Completions JSON schema, whereas OpenAI itself (and a handful of others) additionally expose the more modern *Responses* API. The two protocols use different request/response shapes and *cannot* be auto-detected at runtime, therefore each provider entry must declare which one it expects.",
"description": "Wire protocol that the provider speaks.",
"oneOf": [
{
"description": "The Responses API exposed by OpenAI at `/v1/responses`.",
@@ -1095,13 +1095,6 @@
"responses"
],
"type": "string"
},
{
"description": "Regular Chat Completions compatible with `/v1/chat/completions`.",
"enum": [
"chat"
],
"type": "string"
}
]
}
@@ -1423,7 +1416,7 @@
"type": "array"
},
"oss_provider": {
"description": "Preferred OSS provider for local models, e.g. \"lmstudio\", \"ollama\", or \"ollama-chat\".",
"description": "Preferred OSS provider for local models, e.g. \"lmstudio\" or \"ollama\".",
"type": "string"
},
"otel": {


@@ -28,6 +28,7 @@ pub(crate) fn map_api_error(err: ApiError) -> CodexErr {
status,
body: message,
url: None,
cf_ray: None,
request_id: None,
}),
ApiError::InvalidRequest { message } => CodexErr::InvalidRequest(message),
@@ -89,13 +90,14 @@ pub(crate) fn map_api_error(err: ApiError) -> CodexErr {
CodexErr::RetryLimit(RetryLimitReachedError {
status,
request_id: extract_request_id(headers.as_ref()),
request_id: extract_request_tracking_id(headers.as_ref()),
})
} else {
CodexErr::UnexpectedStatus(UnexpectedResponseError {
status,
body: body_text,
url,
cf_ray: extract_header(headers.as_ref(), CF_RAY_HEADER),
request_id: extract_request_id(headers.as_ref()),
})
}
@@ -115,6 +117,9 @@ pub(crate) fn map_api_error(err: ApiError) -> CodexErr {
const MODEL_CAP_MODEL_HEADER: &str = "x-codex-model-cap-model";
const MODEL_CAP_RESET_AFTER_HEADER: &str = "x-codex-model-cap-reset-after-seconds";
const REQUEST_ID_HEADER: &str = "x-request-id";
const OAI_REQUEST_ID_HEADER: &str = "x-oai-request-id";
const CF_RAY_HEADER: &str = "cf-ray";
#[cfg(test)]
mod tests {
@@ -149,15 +154,20 @@ mod tests {
}
}
fn extract_request_tracking_id(headers: Option<&HeaderMap>) -> Option<String> {
extract_request_id(headers).or_else(|| extract_header(headers, CF_RAY_HEADER))
}
fn extract_request_id(headers: Option<&HeaderMap>) -> Option<String> {
extract_header(headers, REQUEST_ID_HEADER)
.or_else(|| extract_header(headers, OAI_REQUEST_ID_HEADER))
}
fn extract_header(headers: Option<&HeaderMap>, name: &str) -> Option<String> {
headers.and_then(|map| {
["cf-ray", "x-request-id", "x-oai-request-id"]
.iter()
.find_map(|name| {
map.get(*name)
.and_then(|v| v.to_str().ok())
.map(str::to_string)
})
map.get(name)
.and_then(|value| value.to_str().ok())
.map(str::to_string)
})
}
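
The replacement helpers above make the tracking-id fallback explicit: `x-request-id` is preferred, then `x-oai-request-id`, and `cf-ray` is used only when neither request-id header is present. A minimal standalone sketch of that chain, taking a plain `http::HeaderMap` rather than the `Option<&HeaderMap>` the real helpers accept:

```rust
// Sketch only: mirrors the fallback order of extract_request_tracking_id above,
// but takes &HeaderMap directly for brevity.
use http::HeaderMap;
use http::HeaderValue;

fn extract_header(headers: &HeaderMap, name: &str) -> Option<String> {
    headers
        .get(name)
        .and_then(|value| value.to_str().ok())
        .map(str::to_string)
}

fn extract_request_tracking_id(headers: &HeaderMap) -> Option<String> {
    extract_header(headers, "x-request-id")
        .or_else(|| extract_header(headers, "x-oai-request-id"))
        .or_else(|| extract_header(headers, "cf-ray"))
}

fn main() {
    let mut headers = HeaderMap::new();
    headers.insert("cf-ray", HeaderValue::from_static("ray-abc"));
    // With no request-id headers, the tracking id falls back to cf-ray.
    assert_eq!(extract_request_tracking_id(&headers), Some("ray-abc".to_string()));

    headers.insert("x-oai-request-id", HeaderValue::from_static("req-123"));
    // Any request-id header takes precedence over cf-ray.
    assert_eq!(extract_request_tracking_id(&headers), Some("req-123".to_string()));
}
```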


@@ -1,12 +1,13 @@
use std::path::PathBuf;
use std::sync::Arc;
use std::sync::OnceLock;
use std::sync::RwLock;
use crate::api_bridge::CoreAuthProvider;
use crate::api_bridge::auth_provider_from_auth;
use crate::api_bridge::map_api_error;
use crate::auth::UnauthorizedRecovery;
use codex_api::AggregateStreamExt;
use codex_api::ChatClient as ApiChatClient;
use crate::turn_metadata::build_turn_metadata_header;
use codex_api::CompactClient as ApiCompactClient;
use codex_api::CompactionInput as ApiCompactionInput;
use codex_api::Prompt as ApiPrompt;
@@ -14,7 +15,6 @@ use codex_api::RequestTelemetry;
use codex_api::ReqwestTransport;
use codex_api::ResponseAppendWsRequest;
use codex_api::ResponseCreateWsRequest;
use codex_api::ResponseStream as ApiResponseStream;
use codex_api::ResponsesClient as ApiResponsesClient;
use codex_api::ResponsesOptions as ApiResponsesOptions;
use codex_api::ResponsesWebsocketClient as ApiWebSocketResponsesClient;
@@ -66,12 +66,18 @@ use crate::features::Feature;
use crate::flags::CODEX_RS_SSE_FIXTURE;
use crate::model_provider_info::ModelProviderInfo;
use crate::model_provider_info::WireApi;
use crate::tools::spec::create_tools_json_for_chat_completions_api;
use crate::tools::spec::create_tools_json_for_responses_api;
use crate::transport_manager::TransportManager;
pub const WEB_SEARCH_ELIGIBLE_HEADER: &str = "x-oai-web-search-eligible";
pub const X_CODEX_TURN_STATE_HEADER: &str = "x-codex-turn-state";
pub const X_CODEX_TURN_METADATA_HEADER: &str = "x-codex-turn-metadata";
#[derive(Debug, Default)]
struct TurnMetadataCache {
cwd: Option<PathBuf>,
header: Option<HeaderValue>,
}
#[derive(Debug)]
struct ModelClientState {
@@ -85,6 +91,7 @@ struct ModelClientState {
summary: ReasoningSummaryConfig,
session_source: SessionSource,
transport_manager: TransportManager,
turn_metadata_cache: Arc<RwLock<TurnMetadataCache>>,
}
#[derive(Debug, Clone)]
@@ -136,11 +143,13 @@ impl ModelClient {
summary,
session_source,
transport_manager,
turn_metadata_cache: Arc::new(RwLock::new(TurnMetadataCache::default())),
}),
}
}
pub fn new_session(&self) -> ModelClientSession {
pub fn new_session(&self, turn_metadata_cwd: Option<PathBuf>) -> ModelClientSession {
self.prewarm_turn_metadata_header(turn_metadata_cwd);
ModelClientSession {
state: Arc::clone(&self.state),
connection: None,
@@ -149,6 +158,38 @@ impl ModelClient {
turn_state: Arc::new(OnceLock::new()),
}
}
/// Refresh turn metadata in the background and update a cached header that request
/// builders can read without blocking.
fn prewarm_turn_metadata_header(&self, turn_metadata_cwd: Option<PathBuf>) {
let turn_metadata_cwd =
turn_metadata_cwd.map(|cwd| std::fs::canonicalize(&cwd).unwrap_or(cwd));
if let Ok(mut cache) = self.state.turn_metadata_cache.write()
&& cache.cwd != turn_metadata_cwd
{
cache.cwd = turn_metadata_cwd.clone();
cache.header = None;
}
let Some(cwd) = turn_metadata_cwd else {
return;
};
let turn_metadata_cache = Arc::clone(&self.state.turn_metadata_cache);
if let Ok(handle) = tokio::runtime::Handle::try_current() {
let _task = handle.spawn(async move {
let header = build_turn_metadata_header(cwd.as_path())
.await
.and_then(|value| HeaderValue::from_str(value.as_str()).ok());
if let Ok(mut cache) = turn_metadata_cache.write()
&& cache.cwd.as_ref() == Some(&cwd)
{
cache.header = header;
}
});
}
}
}
impl ModelClient {
@@ -257,11 +298,15 @@ impl ModelClient {
}
impl ModelClientSession {
/// Streams a single model turn using either the Responses or Chat
/// Completions wire API, depending on the configured provider.
///
/// For Chat providers, the underlying stream is optionally aggregated
/// based on the `show_raw_agent_reasoning` flag in the config.
fn turn_metadata_header(&self) -> Option<HeaderValue> {
self.state
.turn_metadata_cache
.try_read()
.ok()
.and_then(|cache| cache.header.clone())
}
/// Streams a single model turn using the configured Responses transport.
pub async fn stream(&mut self, prompt: &Prompt) -> Result<ResponseStream> {
let wire_api = self.state.provider.wire_api;
match wire_api {
@@ -275,21 +320,6 @@ impl ModelClientSession {
self.stream_responses_api(prompt).await
}
}
WireApi::Chat => {
let api_stream = self.stream_chat_completions(prompt).await?;
if self.state.config.show_raw_agent_reasoning {
Ok(map_response_stream(
api_stream.streaming_mode(),
self.state.otel_manager.clone(),
))
} else {
Ok(map_response_stream(
api_stream.aggregate(),
self.state.otel_manager.clone(),
))
}
}
}
}
@@ -332,6 +362,7 @@ impl ModelClientSession {
prompt: &Prompt,
compression: Compression,
) -> ApiResponsesOptions {
let turn_metadata_header = self.turn_metadata_header();
let model_info = &self.state.model_info;
let default_reasoning_effort = model_info.default_reasoning_level;
@@ -380,7 +411,11 @@ impl ModelClientSession {
store_override: None,
conversation_id: Some(conversation_id),
session_source: Some(self.state.session_source.clone()),
extra_headers: build_responses_headers(&self.state.config, Some(&self.turn_state)),
extra_headers: build_responses_headers(
&self.state.config,
Some(&self.turn_state),
turn_metadata_header.as_ref(),
),
compression,
turn_state: Some(Arc::clone(&self.turn_state)),
}
@@ -486,64 +521,6 @@ impl ModelClientSession {
}
}
/// Streams a turn via the OpenAI Chat Completions API.
///
/// This path is only used when the provider is configured with
/// `WireApi::Chat`; it does not support `output_schema` today.
async fn stream_chat_completions(&self, prompt: &Prompt) -> Result<ApiResponseStream> {
if prompt.output_schema.is_some() {
return Err(CodexErr::UnsupportedOperation(
"output_schema is not supported for Chat Completions API".to_string(),
));
}
let auth_manager = self.state.auth_manager.clone();
let instructions = prompt.base_instructions.text.clone();
let tools_json = create_tools_json_for_chat_completions_api(&prompt.tools)?;
let api_prompt = build_api_prompt(prompt, instructions, tools_json);
let conversation_id = self.state.conversation_id.to_string();
let session_source = self.state.session_source.clone();
let mut auth_recovery = auth_manager
.as_ref()
.map(super::auth::AuthManager::unauthorized_recovery);
loop {
let auth = match auth_manager.as_ref() {
Some(manager) => manager.auth().await,
None => None,
};
let api_provider = self
.state
.provider
.to_api_provider(auth.as_ref().map(CodexAuth::internal_auth_mode))?;
let api_auth = auth_provider_from_auth(auth.clone(), &self.state.provider)?;
let transport = ReqwestTransport::new(build_reqwest_client());
let (request_telemetry, sse_telemetry) = self.build_streaming_telemetry();
let client = ApiChatClient::new(transport, api_provider, api_auth)
.with_telemetry(Some(request_telemetry), Some(sse_telemetry));
let stream_result = client
.stream_prompt(
&self.state.model_info.slug,
&api_prompt,
Some(conversation_id.clone()),
Some(session_source.clone()),
)
.await;
match stream_result {
Ok(stream) => return Ok(stream),
Err(ApiError::Transport(TransportError::Http { status, .. }))
if status == StatusCode::UNAUTHORIZED =>
{
handle_unauthorized(status, &mut auth_recovery).await?;
continue;
}
Err(err) => return Err(map_api_error(err)),
}
}
}
/// Streams a turn via the OpenAI Responses API.
///
/// Handles SSE fixtures, reasoning summaries, verbosity, and the
@@ -590,10 +567,10 @@ impl ModelClientSession {
Ok(stream) => {
return Ok(map_response_stream(stream, self.state.otel_manager.clone()));
}
Err(ApiError::Transport(TransportError::Http { status, .. }))
if status == StatusCode::UNAUTHORIZED =>
{
handle_unauthorized(status, &mut auth_recovery).await?;
Err(ApiError::Transport(
unauthorized_transport @ TransportError::Http { status, .. },
)) if status == StatusCode::UNAUTHORIZED => {
handle_unauthorized(unauthorized_transport, &mut auth_recovery).await?;
continue;
}
Err(err) => return Err(map_api_error(err)),
@@ -629,10 +606,10 @@ impl ModelClientSession {
.await
{
Ok(connection) => connection,
Err(ApiError::Transport(TransportError::Http { status, .. }))
if status == StatusCode::UNAUTHORIZED =>
{
handle_unauthorized(status, &mut auth_recovery).await?;
Err(ApiError::Transport(
unauthorized_transport @ TransportError::Http { status, .. },
)) if status == StatusCode::UNAUTHORIZED => {
handle_unauthorized(unauthorized_transport, &mut auth_recovery).await?;
continue;
}
Err(err) => return Err(map_api_error(err)),
@@ -651,7 +628,7 @@ impl ModelClientSession {
}
}
/// Builds request and SSE telemetry for streaming API calls (Chat/Responses).
/// Builds request and SSE telemetry for streaming API calls.
fn build_streaming_telemetry(&self) -> (Arc<dyn RequestTelemetry>, Arc<dyn SseTelemetry>) {
let telemetry = Arc::new(ApiTelemetry::new(self.state.otel_manager.clone()));
let request_telemetry: Arc<dyn RequestTelemetry> = telemetry.clone();
@@ -713,6 +690,7 @@ fn experimental_feature_headers(config: &Config) -> ApiHeaderMap {
fn build_responses_headers(
config: &Config,
turn_state: Option<&Arc<OnceLock<String>>>,
turn_metadata_header: Option<&HeaderValue>,
) -> ApiHeaderMap {
let mut headers = experimental_feature_headers(config);
headers.insert(
@@ -731,6 +709,9 @@ fn build_responses_headers(
{
headers.insert(X_CODEX_TURN_STATE_HEADER, header_value);
}
if let Some(header_value) = turn_metadata_header {
headers.insert(X_CODEX_TURN_METADATA_HEADER, header_value.clone());
}
headers
}
@@ -799,7 +780,7 @@ where
/// When refresh succeeds, the caller should retry the API call; otherwise
/// the mapped `CodexErr` is returned to the caller.
async fn handle_unauthorized(
status: StatusCode,
transport: TransportError,
auth_recovery: &mut Option<UnauthorizedRecovery>,
) -> Result<()> {
if let Some(recovery) = auth_recovery
@@ -812,16 +793,7 @@ async fn handle_unauthorized(
};
}
Err(map_unauthorized_status(status))
}
fn map_unauthorized_status(status: StatusCode) -> CodexErr {
map_api_error(ApiError::Transport(TransportError::Http {
status,
url: None,
headers: None,
body: None,
}))
Err(map_api_error(ApiError::Transport(transport)))
}
struct ApiTelemetry {

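The `prewarm_turn_metadata_header` / `turn_metadata_header` additions above follow a small cache-and-refresh pattern: invalidate the cached header when the working directory changes, recompute it on a background task that only stores the result if the directory still matches, and read it on the request path with a non-blocking `try_read`. Below is a minimal self-contained sketch of that pattern; the types and `compute_header` are simplified stand-ins rather than the codex-core implementation, and it assumes the `tokio` crate with its usual features:

```rust
// Sketch of the cache-and-refresh pattern: background refresh guarded by the
// cache key, non-blocking read on the hot path.
use std::path::PathBuf;
use std::sync::Arc;
use std::sync::RwLock;

#[derive(Default)]
struct Cache {
    cwd: Option<PathBuf>,
    header: Option<String>,
}

async fn compute_header(cwd: &PathBuf) -> Option<String> {
    // Stand-in for the real (async, potentially slow) metadata lookup.
    Some(format!("cwd={}", cwd.display()))
}

fn prewarm(cache: Arc<RwLock<Cache>>, cwd: PathBuf) {
    // Invalidate the cached header if the key changed.
    if let Ok(mut guard) = cache.write() {
        if guard.cwd.as_ref() != Some(&cwd) {
            guard.cwd = Some(cwd.clone());
            guard.header = None;
        }
    }
    // Refresh in the background; only store the result if the key still matches.
    let _refresh = tokio::spawn(async move {
        let header = compute_header(&cwd).await;
        if let Ok(mut guard) = cache.write() {
            if guard.cwd.as_ref() == Some(&cwd) {
                guard.header = header;
            }
        }
    });
}

fn cached_header(cache: &Arc<RwLock<Cache>>) -> Option<String> {
    // Never block the request path: if the lock is contended, just skip the header.
    cache.try_read().ok().and_then(|guard| guard.header.clone())
}

#[tokio::main]
async fn main() {
    let cache = Arc::new(RwLock::new(Cache::default()));
    prewarm(Arc::clone(&cache), PathBuf::from("/tmp"));
    // Give the background refresh a moment to run, then read without blocking.
    tokio::time::sleep(std::time::Duration::from_millis(10)).await;
    println!("{:?}", cached_header(&cache));
}
```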

@@ -3,9 +3,7 @@ use std::collections::HashSet;
use std::fmt::Debug;
use std::path::PathBuf;
use std::sync::Arc;
use std::sync::atomic::AtomicBool;
use std::sync::atomic::AtomicU64;
use std::sync::atomic::Ordering;
use crate::AuthManager;
use crate::CodexAuth;
@@ -50,6 +48,7 @@ use codex_protocol::dynamic_tools::DynamicToolSpec;
use codex_protocol::items::PlanItem;
use codex_protocol::items::TurnItem;
use codex_protocol::items::UserMessageItem;
use codex_protocol::mcp::CallToolResult;
use codex_protocol::models::BaseInstructions;
use codex_protocol::models::format_allow_prefixes;
use codex_protocol::openai_models::ModelInfo;
@@ -72,14 +71,12 @@ use codex_rmcp_client::OAuthCredentialsStoreMode;
use futures::future::BoxFuture;
use futures::prelude::*;
use futures::stream::FuturesOrdered;
use mcp_types::CallToolResult;
use mcp_types::ListResourceTemplatesRequestParams;
use mcp_types::ListResourceTemplatesResult;
use mcp_types::ListResourcesRequestParams;
use mcp_types::ListResourcesResult;
use mcp_types::ReadResourceRequestParams;
use mcp_types::ReadResourceResult;
use mcp_types::RequestId;
use rmcp::model::ListResourceTemplatesResult;
use rmcp::model::ListResourcesResult;
use rmcp::model::PaginatedRequestParam;
use rmcp::model::ReadResourceRequestParam;
use rmcp::model::ReadResourceResult;
use rmcp::model::RequestId;
use serde_json;
use serde_json::Value;
use tokio::sync::Mutex;
@@ -97,7 +94,6 @@ use tracing::trace_span;
use tracing::warn;
use crate::ModelProviderInfo;
use crate::WireApi;
use crate::client::ModelClient;
use crate::client::ModelClientSession;
use crate::client_common::Prompt;
@@ -119,6 +115,7 @@ use crate::error::Result as CodexResult;
use crate::exec::StreamOutput;
use crate::exec_policy::ExecPolicyUpdateError;
use crate::feedback_tags;
use crate::git_info::get_git_repo_root;
use crate::instructions::UserInstructions;
use crate::mcp::CODEX_APPS_MCP_SERVER_NAME;
use crate::mcp::auth::compute_auth_statuses;
@@ -130,7 +127,6 @@ use crate::mentions::build_connector_slug_counts;
use crate::mentions::build_skill_name_counts;
use crate::mentions::collect_explicit_app_paths;
use crate::mentions::collect_tool_mentions_from_messages;
use crate::model_provider_info::CHAT_WIRE_API_DEPRECATION_SUMMARY;
use crate::project_doc::get_user_instructions;
use crate::proposed_plan_parser::ProposedPlanParser;
use crate::proposed_plan_parser::ProposedPlanSegment;
@@ -243,31 +239,6 @@ pub struct CodexSpawnOk {
pub(crate) const INITIAL_SUBMIT_ID: &str = "";
pub(crate) const SUBMISSION_CHANNEL_CAPACITY: usize = 64;
static CHAT_WIRE_API_DEPRECATION_EMITTED: AtomicBool = AtomicBool::new(false);
fn maybe_push_chat_wire_api_deprecation(
config: &Config,
post_session_configured_events: &mut Vec<Event>,
) {
if config.model_provider.wire_api != WireApi::Chat {
return;
}
if CHAT_WIRE_API_DEPRECATION_EMITTED
.compare_exchange(false, true, Ordering::SeqCst, Ordering::SeqCst)
.is_err()
{
return;
}
post_session_configured_events.push(Event {
id: INITIAL_SUBMIT_ID.to_owned(),
msg: EventMsg::DeprecationNotice(DeprecationNoticeEvent {
summary: CHAT_WIRE_API_DEPRECATION_SUMMARY.to_string(),
details: None,
}),
});
}
impl Codex {
/// Spawn a new [`Codex`] and initialize the session.
@@ -531,7 +502,6 @@ pub(crate) struct TurnContext {
pub(crate) truncation_policy: TruncationPolicy,
pub(crate) dynamic_tools: Vec<DynamicToolSpec>,
}
impl TurnContext {
pub(crate) fn resolve_path(&self, path: Option<String>) -> PathBuf {
path.as_ref()
@@ -866,7 +836,6 @@ impl Session {
}),
});
}
maybe_push_chat_wire_api_deprecation(&config, &mut post_session_configured_events);
maybe_push_unstable_features_warning(&config, &mut post_session_configured_events);
let auth = auth.as_ref();
@@ -1052,6 +1021,11 @@ impl Session {
state.get_total_token_usage(state.server_reasoning_included())
}
async fn get_estimated_token_count(&self, turn_context: &TurnContext) -> Option<i64> {
let state = self.state.lock().await;
state.history.estimate_token_count(turn_context)
}
pub(crate) async fn get_base_instructions(&self) -> BaseInstructions {
let state = self.state.lock().await;
BaseInstructions {
@@ -1835,6 +1809,7 @@ impl Session {
text: format!("Warning: {}", message.into()),
}],
end_turn: None,
phase: None,
};
self.record_conversation_items(ctx, &[item]).await;
@@ -2224,7 +2199,7 @@ impl Session {
pub async fn list_resources(
&self,
server: &str,
params: Option<ListResourcesRequestParams>,
params: Option<PaginatedRequestParam>,
) -> anyhow::Result<ListResourcesResult> {
self.services
.mcp_connection_manager
@@ -2237,7 +2212,7 @@ impl Session {
pub async fn list_resource_templates(
&self,
server: &str,
params: Option<ListResourceTemplatesRequestParams>,
params: Option<PaginatedRequestParam>,
) -> anyhow::Result<ListResourceTemplatesResult> {
self.services
.mcp_connection_manager
@@ -2250,7 +2225,7 @@ impl Session {
pub async fn read_resource(
&self,
server: &str,
params: ReadResourceRequestParams,
params: ReadResourceRequestParam,
) -> anyhow::Result<ReadResourceResult> {
self.services
.mcp_connection_manager
@@ -2577,10 +2552,10 @@ mod handlers {
use codex_protocol::config_types::ModeKind;
use codex_protocol::config_types::Settings;
use codex_protocol::dynamic_tools::DynamicToolResponse;
use codex_protocol::mcp::RequestId as ProtocolRequestId;
use codex_protocol::user_input::UserInput;
use codex_rmcp_client::ElicitationAction;
use codex_rmcp_client::ElicitationResponse;
use mcp_types::RequestId;
use std::path::PathBuf;
use std::sync::Arc;
use tracing::info;
@@ -2709,7 +2684,7 @@ mod handlers {
pub async fn resolve_elicitation(
sess: &Arc<Session>,
server_name: String,
request_id: RequestId,
request_id: ProtocolRequestId,
decision: codex_protocol::approvals::ElicitationAction,
) {
let action = match decision {
@@ -2724,6 +2699,12 @@ mod handlers {
ElicitationAction::Decline | ElicitationAction::Cancel => None,
};
let response = ElicitationResponse { action, content };
let request_id = match request_id {
ProtocolRequestId::String(value) => {
rmcp::model::NumberOrString::String(std::sync::Arc::from(value))
}
ProtocolRequestId::Integer(value) => rmcp::model::NumberOrString::Number(value),
};
if let Err(err) = sess
.resolve_elicitation(server_name, request_id, response)
.await
@@ -3304,6 +3285,7 @@ pub(crate) async fn run_turn(
let model_info = turn_context.client.get_model_info();
let auto_compact_limit = model_info.auto_compact_token_limit().unwrap_or(i64::MAX);
let total_usage_tokens = sess.get_total_token_usage().await;
let event = EventMsg::TurnStarted(TurnStartedEvent {
model_context_window: turn_context.client.get_model_context_window(),
collaboration_mode_kind: turn_context.collaboration_mode.mode,
@@ -3406,7 +3388,9 @@ pub(crate) async fn run_turn(
// many turns, from the perspective of the user, it is a single turn.
let turn_diff_tracker = Arc::new(tokio::sync::Mutex::new(TurnDiffTracker::new()));
let mut client_session = turn_context.client.new_session();
let mut client_session = turn_context
.client
.new_session(Some(turn_context.cwd.clone()));
loop {
// Note that pending_input would be something like a message the user
@@ -3457,6 +3441,19 @@ pub(crate) async fn run_turn(
let total_usage_tokens = sess.get_total_token_usage().await;
let token_limit_reached = total_usage_tokens >= auto_compact_limit;
let estimated_token_count =
sess.get_estimated_token_count(turn_context.as_ref()).await;
info!(
turn_id = %turn_context.sub_id,
total_usage_tokens,
estimated_token_count = ?estimated_token_count,
auto_compact_limit,
token_limit_reached,
needs_follow_up,
"post sampling token usage"
);
// As long as compaction reliably brings us well below the token limit, we shouldn't worry about entering an infinite loop here.
if token_limit_reached && needs_follow_up {
run_auto_compact(&sess, &turn_context).await;
@@ -4469,8 +4466,6 @@ pub(super) fn get_last_assistant_message_from_turn(responses: &[ResponseItem]) -
#[cfg(test)]
pub(crate) use tests::make_session_and_context;
use crate::git_info::get_git_repo_root;
#[cfg(test)]
pub(crate) use tests::make_session_and_context_with_rx;
@@ -4516,8 +4511,7 @@ mod tests {
use std::time::Duration;
use tokio::time::sleep;
use mcp_types::ContentBlock;
use mcp_types::TextContent;
use codex_protocol::mcp::CallToolResult as McpCallToolResult;
use pretty_assertions::assert_eq;
use serde::Deserialize;
use serde_json::json;
@@ -4538,6 +4532,7 @@ mod tests {
text: text.to_string(),
}],
end_turn: None,
phase: None,
}
}
@@ -4819,6 +4814,7 @@ mod tests {
text: "turn 1 user".to_string(),
}],
end_turn: None,
phase: None,
},
ResponseItem::Message {
id: None,
@@ -4827,6 +4823,7 @@ mod tests {
text: "turn 1 assistant".to_string(),
}],
end_turn: None,
phase: None,
},
];
sess.record_into_history(&turn_1, tc.as_ref()).await;
@@ -4839,6 +4836,7 @@ mod tests {
text: "turn 2 user".to_string(),
}],
end_turn: None,
phase: None,
},
ResponseItem::Message {
id: None,
@@ -4847,6 +4845,7 @@ mod tests {
text: "turn 2 assistant".to_string(),
}],
end_turn: None,
phase: None,
},
];
sess.record_into_history(&turn_2, tc.as_ref()).await;
@@ -4879,6 +4878,7 @@ mod tests {
text: "turn 1 user".to_string(),
}],
end_turn: None,
phase: None,
}];
sess.record_into_history(&turn_1, tc.as_ref()).await;
@@ -5101,7 +5101,7 @@ mod tests {
#[test]
fn prefers_structured_content_when_present() {
let ctr = CallToolResult {
let ctr = McpCallToolResult {
// Content present but should be ignored because structured_content is set.
content: vec![text_block("ignored")],
is_error: None,
@@ -5109,6 +5109,7 @@ mod tests {
"ok": true,
"value": 42
})),
meta: None,
};
let got = FunctionCallOutputPayload::from(&ctr);
@@ -5147,10 +5148,11 @@ mod tests {
#[test]
fn falls_back_to_content_when_structured_is_null() {
let ctr = CallToolResult {
let ctr = McpCallToolResult {
content: vec![text_block("hello"), text_block("world")],
is_error: None,
structured_content: Some(serde_json::Value::Null),
meta: None,
};
let got = FunctionCallOutputPayload::from(&ctr);
@@ -5166,10 +5168,11 @@ mod tests {
#[test]
fn success_flag_reflects_is_error_true() {
let ctr = CallToolResult {
let ctr = McpCallToolResult {
content: vec![text_block("unused")],
is_error: Some(true),
structured_content: Some(json!({ "message": "bad" })),
meta: None,
};
let got = FunctionCallOutputPayload::from(&ctr);
@@ -5184,10 +5187,11 @@ mod tests {
#[test]
fn success_flag_true_with_no_error_and_content_used() {
let ctr = CallToolResult {
let ctr = McpCallToolResult {
content: vec![text_block("alpha")],
is_error: Some(false),
structured_content: None,
meta: None,
};
let got = FunctionCallOutputPayload::from(&ctr);
@@ -5238,11 +5242,10 @@ mod tests {
}
}
fn text_block(s: &str) -> ContentBlock {
ContentBlock::TextContent(TextContent {
annotations: None,
text: s.to_string(),
r#type: "text".to_string(),
fn text_block(s: &str) -> serde_json::Value {
json!({
"type": "text",
"text": s,
})
}
@@ -5825,6 +5828,7 @@ mod tests {
text: "first user".to_string(),
}],
end_turn: None,
phase: None,
};
live_history.record_items(std::iter::once(&user1), turn_context.truncation_policy);
rollout_items.push(RolloutItem::ResponseItem(user1.clone()));
@@ -5836,6 +5840,7 @@ mod tests {
text: "assistant reply one".to_string(),
}],
end_turn: None,
phase: None,
};
live_history.record_items(std::iter::once(&assistant1), turn_context.truncation_policy);
rollout_items.push(RolloutItem::ResponseItem(assistant1.clone()));
@@ -5861,6 +5866,7 @@ mod tests {
text: "second user".to_string(),
}],
end_turn: None,
phase: None,
};
live_history.record_items(std::iter::once(&user2), turn_context.truncation_policy);
rollout_items.push(RolloutItem::ResponseItem(user2.clone()));
@@ -5872,6 +5878,7 @@ mod tests {
text: "assistant reply two".to_string(),
}],
end_turn: None,
phase: None,
};
live_history.record_items(std::iter::once(&assistant2), turn_context.truncation_policy);
rollout_items.push(RolloutItem::ResponseItem(assistant2.clone()));
@@ -5897,6 +5904,7 @@ mod tests {
text: "third user".to_string(),
}],
end_turn: None,
phase: None,
};
live_history.record_items(std::iter::once(&user3), turn_context.truncation_policy);
rollout_items.push(RolloutItem::ResponseItem(user3));
@@ -5908,6 +5916,7 @@ mod tests {
text: "assistant reply three".to_string(),
}],
end_turn: None,
phase: None,
};
live_history.record_items(std::iter::once(&assistant3), turn_context.truncation_policy);
rollout_items.push(RolloutItem::ResponseItem(assistant3));

Some files were not shown because too many files have changed in this diff.