Compare commits

...

35 Commits

Author SHA1 Message Date
jif-oai
507e43b69c more 2026-01-16 18:29:52 +01:00
jif-oai
8869f662bf mini vite client 2026-01-16 18:23:28 +01:00
jif-oai
1668ca726f chore: close pipe on non-pty processes (#9369)
Close the STDIN of piped processes when starting them, so that commands
like `rg` do not wait for content on STDIN and hang forever.
2026-01-16 15:54:32 +01:00
jif-oai
7905e99d03 prompt collab (#9367) 2026-01-16 15:12:41 +01:00
jif-oai
7fc49697dd feat: CODEX_CI (#9366) 2026-01-16 13:52:16 +00:00
jif-oai
c576756c81 feat: collab wait multiple IDs (#9294) 2026-01-16 12:05:04 +01:00
jif-oai
c1ac5223e1 feat: run user commands under user snapshot (#9357)
The initial goal is for user snapshots to have access to aliases, etc.
2026-01-16 11:49:28 +01:00
jif-oai
f5b3e738fb feat: propagate approval request of unsubscribed threads (#9232)
A thread can now be spawned by another thread. In order to process the
approval requests of such sub-threads, we need to detect those events
and show them in the TUI.

This is a temporary solution while the UX is being figured out. This PR
should be reverted once done
2026-01-16 11:23:01 +01:00
Ahmed Ibrahim
0cce6ebd83 rename model turn to sampling request (#9336)
We have two types of turns now: model and user turns. It's always
confusing to refer to either. A model turn is basically a sampling
request.
2026-01-16 10:06:24 +01:00
Eric Traut
1fc72c647f Fix token estimate during compaction (#9337)
This addresses #9287
2026-01-15 19:48:11 -08:00
Michael Bolin
99f47d6e9a fix(mcp): include threadId in both content and structuredContent in CallToolResult (#9338) 2026-01-15 18:33:11 -08:00
Thanh Nguyen
a6324ab34b fix(tui): only show 'Worked for' separator when actual work was performed (#8958)
Fixes #7919.

This PR addresses a TUI display bug where the "Worked for" separator
would appear prematurely during the planning stage.

**Changes:**
- Added `had_work_activity` flag to `ChatWidget` to track if actual work
(exec commands, MCP tool calls, patches) was performed in the current
turn.
- Updated `handle_streaming_delta` to only display the
`FinalMessageSeparator` if both `needs_final_message_separator` AND
`had_work_activity` are true.
- Updated `handle_exec_end_now`, `handle_patch_apply_end_now`, and
`handle_mcp_end_now` to set `had_work_activity = true`.

**Verification:**
- Ran `cargo test -p codex-tui` to ensure no regressions.
- Manual verification confirms the separator now only appears after
actual work is completed.

---------

Co-authored-by: Josh McKinney <joshka@openai.com>
2026-01-16 01:41:43 +00:00
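The gating described in the bullets above can be sketched as a tiny state holder (hypothetical names mirroring the PR's flags; the real implementation lives in `ChatWidget`):

```javascript
// Sketch of the separator gating: `hadWorkActivity` is set when an exec
// command, MCP tool call, or patch completes; the separator is shown
// only when BOTH flags are true, so planning-only turns show nothing.
class SeparatorState {
  constructor() {
    this.needsFinalMessageSeparator = false;
    this.hadWorkActivity = false;
  }
  onExecEnd() { this.hadWorkActivity = true; }        // also MCP/patch end
  onTurnNeedsSeparator() { this.needsFinalMessageSeparator = true; }
  shouldShowSeparator() {
    return this.needsFinalMessageSeparator && this.hadWorkActivity;
  }
}
```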
Dylan Hurd
3cabb24210 chore(windows) Enable Powershell UTF8 feature (#9195)
## Summary
We've received a lot of positive feedback about this feature, so we're
going to enable it by default.
2026-01-16 01:29:12 +00:00
charley-oai
1fa8350ae7 Add text element metadata to protocol, app server, and core (#9331)
The second part of breaking up PR
https://github.com/openai/codex/pull/9116

Summary:

- Add `TextElement` / `ByteRange` to protocol user inputs and user
message events with defaults.
- Thread `text_elements` through app-server v1/v2 request handling and
history rebuild.
- Preserve UI metadata only in user input/events (not `ContentItem`)
while keeping local image attachments in user events for rehydration.

Details:

- Protocol: `UserInput::Text` carries `text_elements`;
`UserMessageEvent` carries `text_elements` + `local_images`.
Serialization includes empty vectors for backward compatibility.
- app-server-protocol: v1 defines `V1TextElement` / `V1ByteRange` in
camelCase with conversions; v2 uses its own camelCase wrapper.
- app-server: v1/v2 input mapping includes `text_elements`; thread
history rebuilds include them.
- Core: user event emission preserves UI metadata while model history
stays clean; history replay round-trips the metadata.
2026-01-15 17:26:41 -08:00
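The backward-compatibility point above ("serialization includes empty vectors") can be sketched like this (the event shape here is hypothetical, chosen only to illustrate the defaulting):

```javascript
// Sketch: serialize a user message event so that `text_elements` and
// `local_images` are always present (empty arrays by default), so older
// readers see stable fields rather than missing keys.
function serializeUserMessageEvent({ text, textElements = [], localImages = [] }) {
  return JSON.stringify({
    type: "user_message",
    text,
    text_elements: textElements,
    local_images: localImages,
  });
}
```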
Yuvraj Angad Singh
004a74940a fix: send non-null content on elicitation Accept (#9196)
## Summary

- When a user accepts an MCP elicitation request, send `content:
Some(json!({}))` instead of `None`
- MCP servers that use elicitation expect content to be present when
action is Accept
- This matches the expected behavior shown in tests at
`exec-server/tests/common/lib.rs:171`

## Root Cause

In `codex-rs/core/src/codex.rs`, the `resolve_elicitation` function
always sent `content: None`:

```rust
let response = ElicitationResponse {
    action,
    content: None,  // Always None, even for Accept
};
```

## Fix

Send an empty object when accepting:

```rust
let content = match action {
    ElicitationAction::Accept => Some(serde_json::json!({})),
    ElicitationAction::Decline | ElicitationAction::Cancel => None,
};
```

## Test plan

- [x] Code compiles with `cargo check -p codex-core`
- [x] Formatted with `just fmt`
- [ ] Integration test `accept_elicitation_for_prompt_rule` (requires
MCP server binary)

Fixes #9053
2026-01-15 14:20:57 -08:00
Ahmed Ibrahim
749b58366c Revert empty paste image handling (#9318)
Revert #9049 behavior so empty paste events no longer trigger a
clipboard image read.
2026-01-15 14:16:09 -08:00
pap-openai
d886a8646c remove needs_follow_up error log (#9272) 2026-01-15 21:20:54 +00:00
sayan-oai
169201b1b5 [search] allow explicitly disabling web search (#9249)
We are moving the `web_search` rollout server-side, so we need a way to
explicitly disable search and signal eligibility from the client.

- Add an `x-oai-web-search-eligible` header that signifies whether the
request can have web search.
- Only attach the `web_search` tool when the resolved `WebSearchMode` is
`Live` or `Cached`.
2026-01-15 11:28:57 -08:00
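The second bullet can be sketched as a small helper (the mode names come from the PR; the request-building shape is illustrative, not the real client):

```javascript
// Sketch: attach the `web_search` tool only when the resolved
// WebSearchMode is "Live" or "Cached"; any other mode leaves the
// tool list untouched.
function toolsForRequest(resolvedWebSearchMode, baseTools) {
  const searchEnabled =
    resolvedWebSearchMode === "Live" || resolvedWebSearchMode === "Cached";
  return searchEnabled ? [...baseTools, { type: "web_search" }] : baseTools;
}
```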
xl-openai
42fa4c237f Support SKILL.toml file. (#9125)
We’re introducing a new SKILL.toml to hold skill metadata so Codex can
deliver a richer Skills experience.

Initial focus is the interface block:
```
[interface]
display_name = "Optional user-facing name"
short_description = "Optional user-facing description"
icon_small = "./assets/small-400px.png"
icon_large = "./assets/large-logo.svg"
brand_color = "#3B82F6"
default_prompt = "Optional surrounding prompt to use the skill with"
```

All fields are exposed via the app server API.
display_name and short_description are consumed by the TUI.
2026-01-15 11:20:04 -08:00
Eric Traut
5f10548772 Revert recent styling change for input prompt placeholder text (#9307)
A recent change in commit ccba737d26 modified the styling of the
placeholder text (e.g. "Implement {feature}") in the input box of the
CLI, changing it from non-italic to italic. I think this was likely
unintentional. It results in a bad display appearance on some terminal
emulators, and several users have complained about it.

This change switches back to non-italic styling, restoring the older
behavior.

It addresses #9262
2026-01-15 10:58:12 -08:00
jif-oai
da44569fef nit: clean unified exec background processes (#9304)
To fix the occurrences where the End event is received after the
listener stopped listening.
2026-01-15 18:34:33 +00:00
jif-oai
393a5a0311 chore: better orchestrator prompt (#9301) 2026-01-15 18:11:43 +00:00
viyatb-oai
55bda1a0f2 revert: remove pre-Landlock bind mounts apply (#9300)
**Description**

This removes the pre-Landlock read-only bind-mount step from the Linux
sandbox so filesystem restrictions rely solely on Landlock again.
`mounts.rs` is kept in place but left unused. The linux-sandbox README
is updated to match the new behavior and manual test expectations.
2026-01-15 09:47:57 -08:00
李琼羽
b4d240c3ae fix(exec): improve stdin prompt decoding (#9151)
Fixes #8733.

- Read prompt from stdin as raw bytes and decode more helpfully.
- Strip UTF-8 BOM; decode UTF-16LE/UTF-16BE when a BOM is present.
- For other non-UTF8 input, fail with an actionable message (offset +
iconv hint).

Tests: `cargo test -p codex-exec`.
2026-01-15 09:29:05 -08:00
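The decoding rules in the bullets above can be sketched in Node terms (the real fix is in Rust and fails on invalid UTF-8 with an offset and an iconv hint; this sketch only covers the BOM handling):

```javascript
// Sketch of the stdin prompt decoding: strip a UTF-8 BOM, decode
// UTF-16LE/UTF-16BE when a BOM is present, otherwise treat the bytes
// as UTF-8 (Node substitutes U+FFFD instead of failing, unlike the
// actual fix, which reports an actionable error).
function decodePrompt(buf) {
  if (buf.length >= 3 && buf[0] === 0xef && buf[1] === 0xbb && buf[2] === 0xbf) {
    return buf.subarray(3).toString("utf8"); // UTF-8 BOM
  }
  if (buf.length >= 2 && buf[0] === 0xff && buf[1] === 0xfe) {
    return buf.subarray(2).toString("utf16le"); // UTF-16LE BOM
  }
  if (buf.length >= 2 && buf[0] === 0xfe && buf[1] === 0xff) {
    // UTF-16BE BOM: byte-swap a copy, then decode as UTF-16LE
    return Buffer.from(buf.subarray(2)).swap16().toString("utf16le");
  }
  return buf.toString("utf8");
}
```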
gt-oai
f6df1596eb Propagate MCP disabled reason (#9207)
Indicate why MCP servers are disabled when they are disabled by
requirements:

```
➜  codex git:(main) ✗ just codex mcp list
cargo run --bin codex -- "$@"
    Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.27s
     Running `target/debug/codex mcp list`
Name         Command          Args  Env  Cwd  Status                                                                  Auth
docs         docs-mcp         -     -    -    disabled: requirements (MDM com.openai.codex:requirements_toml_base64)  Unsupported
hello_world  hello-world-mcp  -     -    -    disabled: requirements (MDM com.openai.codex:requirements_toml_base64)  Unsupported

➜  codex git:(main) ✗ just c
cargo run --bin codex -- "$@"
    Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.90s
     Running `target/debug/codex`
╭─────────────────────────────────────────────╮
│ >_ OpenAI Codex (v0.0.0)                    │
│                                             │
│ model:     gpt-5.2 xhigh   /model to change │
│ directory: ~/code/codex/codex-rs            │
╰─────────────────────────────────────────────╯

/mcp

🔌  MCP Tools

  • No MCP tools available.

  • docs (disabled)
    • Reason: requirements (MDM com.openai.codex:requirements_toml_base64)

  • hello_world (disabled)
    • Reason: requirements (MDM com.openai.codex:requirements_toml_base64)
```
2026-01-15 17:24:00 +00:00
Eric Traut
ae96a15312 Changed codex resume --last to honor the current cwd (#9245)
This PR changes `codex resume --last` to work consistently with `codex
resume`. Namely, it filters based on the cwd when selecting the last
session. It also supports the `--all` modifier as an override.

This addresses #8700
2026-01-15 17:05:08 +00:00
jif-oai
3fc487e0e0 feat: basic tui for event emission (#9209) 2026-01-15 15:53:02 +00:00
jif-oai
faeb08c1e1 feat: add interrupt capabilities to send_input (#9276) 2026-01-15 14:59:07 +00:00
jif-oai
05b960671d feat: add agent roles to collab tools (#9275)
Add an `agent_type` parameter to the collab tool `spawn_agent` that
contains a preset to apply to the config when spawning this agent.
jif-oai
bad4c12b9d feat: collab tools app-server event mapping (#9213) 2026-01-15 09:03:26 +00:00
viyatb-oai
2259031d64 fix: fallback to Landlock-only when user namespaces unavailable and set PR_SET_NO_NEW_PRIVS early (#9250)
fixes https://github.com/openai/codex/issues/9236

### Motivation
- Prevent sandbox setup from failing when unprivileged user namespaces
are denied so Landlock-only protections can still be applied.
- Ensure `PR_SET_NO_NEW_PRIVS` is set before installing seccomp and
Landlock restrictions to avoid kernel `EPERM`/`LandlockRestrict`
ordering issues.

### Description
- Add `is_permission_denied` helper that detects `EPERM` /
`PermissionDenied` from `CodexErr` to drive fallback logic.
- In `apply_read_only_mounts` skip read-only bind-mount setup and return
`Ok(())` when `unshare_user_and_mount_namespaces()` fails with
permission-denied so Landlock rules can still be installed.
- Add `set_no_new_privs()` and call it from
`apply_sandbox_policy_to_current_thread` before installing seccomp
filters and Landlock rules when disk or network access is restricted.
2026-01-14 22:24:34 -08:00
Ahmed Ibrahim
a09711332a Add migration_markdown in model_info (#9219)
Next step would be to clean up Model Upgrade in model presets.

---------

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: aibrahim-oai <219906144+aibrahim-oai@users.noreply.github.com>
2026-01-15 01:55:22 +00:00
charley-oai
4a9c2bcc5a Add text element metadata to types (#9235)
Initial type tweaking PR to make the diff of
https://github.com/openai/codex/pull/9116 smaller

This should not change any behavior, just adds some fields to types
2026-01-14 16:41:50 -08:00
Michael Bolin
2a68b74b9b fix: increase timeout for release builds from 30 to 60 minutes (#9242)
Windows builds have been tripping the 30 minute timeout. For sure, we
need to improve this, but as a quick fix, let's just increase the
timeout.

Perhaps we should switch to `lto = "thin"` for release builds, at least
for Windows:


3728db11b8/codex-rs/Cargo.toml (L288)

See https://doc.rust-lang.org/cargo/reference/profiles.html#lto for
details.
2026-01-15 00:38:25 +00:00
Michael Bolin
3728db11b8 fix: eliminate unnecessary clone() for each SSE event (#9238)
Given how many SSE events we get, seems worth fixing.
2026-01-15 00:06:09 +00:00
146 changed files with 5869 additions and 880 deletions


@@ -49,7 +49,7 @@ jobs:
needs: tag-check
name: Build - ${{ matrix.runner }} - ${{ matrix.target }}
runs-on: ${{ matrix.runner }}
- timeout-minutes: 30
+ timeout-minutes: 60
permissions:
contents: read
id-token: write

app-server-ui/.gitignore vendored Normal file

@@ -0,0 +1,24 @@
# Logs
logs
*.log
npm-debug.log*
yarn-debug.log*
yarn-error.log*
pnpm-debug.log*
lerna-debug.log*
node_modules
dist
dist-ssr
*.local
# Editor directories and files
.vscode/*
!.vscode/extensions.json
.idea
.DS_Store
*.suo
*.ntvs*
*.njsproj
*.sln
*.sw?

app-server-ui/README.md Normal file

@@ -0,0 +1,30 @@
# Codex App Server UI
Minimal React + Vite client for the codex app-server v2 JSON-RPC protocol.
## Prerequisites
- `codex` CLI available in your PATH (or set `CODEX_BIN`).
- If you are working from this repo, the bridge will prefer the local
`codex-rs/target/debug/codex-app-server` binary when it exists.
- A configured Codex environment (API key or login) as required by the app-server.
## Quickstart
From the repo root:
```bash
pnpm install
pnpm --filter app-server-ui dev
```
This starts:
- a WebSocket bridge at `ws://localhost:8787` that spawns `codex app-server`
- the Vite dev server at `http://localhost:5173`
## Configuration
- `CODEX_BIN`: path to the `codex` executable (default: `codex`).
- `APP_SERVER_BIN` / `CODEX_APP_SERVER_BIN`: path to a `codex-app-server` binary (overrides `CODEX_BIN`).
- `APP_SERVER_UI_PORT`: port for the bridge server (default: `8787`).
- `VITE_APP_SERVER_WS`: WebSocket URL for the UI (default: `ws://localhost:8787`).

app-server-ui/index.html Normal file

@@ -0,0 +1,12 @@
<!doctype html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Codex App Server UI</title>
</head>
<body>
<div id="root"></div>
<script type="module" src="/src/main.tsx"></script>
</body>
</html>


@@ -0,0 +1,26 @@
{
"name": "app-server-ui",
"private": true,
"version": "0.1.0",
"type": "module",
"scripts": {
"dev": "concurrently -k \"pnpm:dev:server\" \"pnpm:dev:client\"",
"dev:client": "vite",
"dev:server": "node server/index.mjs",
"build": "tsc && vite build",
"preview": "vite preview"
},
"dependencies": {
"react": "^18.3.1",
"react-dom": "^18.3.1",
"ws": "^8.18.0"
},
"devDependencies": {
"@types/react": "^18.3.18",
"@types/react-dom": "^18.3.5",
"@vitejs/plugin-react": "^4.3.4",
"concurrently": "^8.2.2",
"typescript": "~5.9.3",
"vite": "^7.2.4"
}
}


@@ -0,0 +1 @@
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" class="iconify iconify--logos" width="31.88" height="32" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 257"><defs><linearGradient id="IconifyId1813088fe1fbc01fb466" x1="-.828%" x2="57.636%" y1="7.652%" y2="78.411%"><stop offset="0%" stop-color="#41D1FF"></stop><stop offset="100%" stop-color="#BD34FE"></stop></linearGradient><linearGradient id="IconifyId1813088fe1fbc01fb467" x1="43.376%" x2="50.316%" y1="2.242%" y2="89.03%"><stop offset="0%" stop-color="#FFEA83"></stop><stop offset="8.333%" stop-color="#FFDD35"></stop><stop offset="100%" stop-color="#FFA800"></stop></linearGradient></defs><path fill="url(#IconifyId1813088fe1fbc01fb466)" d="M255.153 37.938L134.897 252.976c-2.483 4.44-8.862 4.466-11.382.048L.875 37.958c-2.746-4.814 1.371-10.646 6.827-9.67l120.385 21.517a6.537 6.537 0 0 0 2.322-.004l117.867-21.483c5.438-.991 9.574 4.796 6.877 9.62Z"></path><path fill="url(#IconifyId1813088fe1fbc01fb467)" d="M185.432.063L96.44 17.501a3.268 3.268 0 0 0-2.634 3.014l-5.474 92.456a3.268 3.268 0 0 0 3.997 3.378l24.777-5.718c2.318-.535 4.413 1.507 3.936 3.838l-7.361 36.047c-.495 2.426 1.782 4.5 4.151 3.78l15.304-4.649c2.372-.72 4.652 1.36 4.15 3.788l-11.698 56.621c-.732 3.542 3.979 5.473 5.943 2.437l1.313-2.028l72.516-144.72c1.215-2.423-.88-5.186-3.54-4.672l-25.505 4.922c-2.396.462-4.435-1.77-3.759-4.114l16.646-57.705c.677-2.35-1.37-4.583-3.769-4.113Z"></path></svg>


@@ -0,0 +1,181 @@
import { createServer } from "node:http";
import { spawn } from "node:child_process";
import { createInterface } from "node:readline";
import { existsSync } from "node:fs";
import { dirname, resolve } from "node:path";
import { fileURLToPath } from "node:url";
import { WebSocketServer } from "ws";
const port = Number(process.env.APP_SERVER_UI_PORT ?? 8787);
const codexBin = process.env.CODEX_BIN ?? "codex";
const explicitAppServerBin =
process.env.APP_SERVER_BIN ?? process.env.CODEX_APP_SERVER_BIN ?? null;
const here = dirname(fileURLToPath(import.meta.url));
const repoRoot = resolve(here, "..", "..");
const localAppServerBin = resolve(repoRoot, "codex-rs/target/debug/codex-app-server");
const appServerBin =
explicitAppServerBin ?? (existsSync(localAppServerBin) ? localAppServerBin : null);
const sockets = new Set();
let appServer = null;
const broadcast = (payload) => {
const message = JSON.stringify(payload);
for (const ws of sockets) {
if (ws.readyState === ws.OPEN) {
ws.send(message);
}
}
};
const startAppServer = () => {
if (appServer?.child?.exitCode === null) {
return appServer;
}
const command = appServerBin ?? codexBin;
const args = appServerBin ? [] : ["app-server"];
const child = spawn(command, args, {
stdio: ["pipe", "pipe", "pipe"],
env: process.env,
});
const stdout = child.stdout;
const stderr = child.stderr;
const stdoutRl = stdout ? createInterface({ input: stdout }) : null;
const stderrRl = stderr ? createInterface({ input: stderr }) : null;
stdoutRl?.on("line", (line) => {
const trimmed = line.trim();
if (!trimmed) {
return;
}
let payload = trimmed;
try {
const parsed = JSON.parse(trimmed);
payload = JSON.stringify(parsed);
} catch {
payload = JSON.stringify({
method: "ui/raw",
params: { line: trimmed },
});
}
for (const ws of sockets) {
if (ws.readyState === ws.OPEN) {
ws.send(payload);
}
}
});
stderrRl?.on("line", (line) => {
const trimmed = line.trim();
if (!trimmed) {
return;
}
console.error(trimmed);
broadcast({ method: "ui/stderr", params: { line: trimmed } });
});
child.on("error", (err) => {
console.error("codex app-server spawn error:", err);
broadcast({ method: "ui/error", params: { message: "Failed to spawn app-server.", details: String(err) } });
appServer = null;
});
child.on("exit", (code, signal) => {
console.log(`codex app-server exited (code=${code ?? "null"}, signal=${signal ?? "null"})`);
broadcast({ method: "ui/exit", params: { code, signal } });
appServer = null;
});
appServer = { child, stdoutRl, stderrRl };
return appServer;
};
const server = createServer((req, res) => {
if (req.url === "/health") {
res.writeHead(200, { "content-type": "application/json" });
res.end(JSON.stringify({ status: "ok" }));
return;
}
res.writeHead(404, { "content-type": "text/plain" });
res.end("Not found");
});
const wss = new WebSocketServer({ server });
wss.on("connection", (ws) => {
sockets.add(ws);
const running = Boolean(appServer?.child?.exitCode === null);
ws.send(
JSON.stringify({
method: "ui/connected",
params: { pid: appServer?.child?.pid ?? null, running },
}),
);
ws.on("close", () => {
sockets.delete(ws);
});
ws.on("message", (data) => {
const text = typeof data === "string" ? data : data.toString("utf8");
if (!text.trim()) {
return;
}
let parsed;
try {
parsed = JSON.parse(text);
} catch (err) {
ws.send(
JSON.stringify({
method: "ui/error",
params: {
message: "Failed to parse JSON from client.",
details: String(err),
},
}),
);
return;
}
if (!appServer || appServer.child.exitCode !== null || !appServer.child.stdin?.writable) {
startAppServer();
}
if (!appServer || !appServer.child.stdin?.writable) {
ws.send(
JSON.stringify({
method: "ui/error",
params: {
message: "app-server stdin is closed.",
},
}),
);
return;
}
appServer.child.stdin.write(`${JSON.stringify(parsed)}\n`);
});
});
server.listen(port, () => {
console.log(`App server bridge listening on ws://localhost:${port}`);
});
startAppServer();
const shutdown = () => {
appServer?.stdoutRl?.close();
appServer?.stderrRl?.close();
wss.close();
server.close();
appServer?.child?.kill("SIGTERM");
};
process.on("SIGINT", shutdown);
process.on("SIGTERM", shutdown);

app-server-ui/src/App.css Normal file

@@ -0,0 +1,347 @@
@import url("https://fonts.googleapis.com/css2?family=Space+Grotesk:wght@400;500;600&family=IBM+Plex+Mono:wght@400;500&display=swap");
:root {
color-scheme: only light;
font-family: "Space Grotesk", "Segoe UI", system-ui, sans-serif;
color: #1d2327;
background-color: #f5f4f2;
--panel-bg: #ffffff;
--panel-border: #d9d4cd;
--accent: #ff6a3d;
--accent-2: #227c88;
--text-muted: #4f5b62;
--shadow: 0 10px 22px rgba(28, 36, 40, 0.12);
}
* {
box-sizing: border-box;
}
body {
margin: 0;
min-height: 100vh;
background-color: #f5f4f2;
background-image:
linear-gradient(180deg, #f8f7f5 0%, #f0ede9 100%),
repeating-linear-gradient(90deg, rgba(29, 35, 39, 0.05) 0 1px, transparent 1px 120px),
repeating-linear-gradient(0deg, rgba(29, 35, 39, 0.04) 0 1px, transparent 1px 120px);
}
.app {
min-height: 100vh;
padding: 28px 36px 40px;
display: flex;
flex-direction: column;
gap: 22px;
}
.hero {
display: flex;
justify-content: flex-end;
align-items: center;
}
.status {
display: flex;
align-items: center;
gap: 12px;
padding: 10px 14px;
background: #ffffff;
border: 1px solid #cfc8c0;
border-radius: 10px;
box-shadow: var(--shadow);
min-width: 220px;
}
.status-dot {
width: 12px;
height: 12px;
border-radius: 999px;
background: #c2c2c2;
}
.status-dot.on {
background: #3cb371;
}
.status-dot.off {
background: #c46a5e;
}
.status-label {
font-weight: 600;
}
.status-meta {
font-size: 12px;
color: var(--text-muted);
}
.grid {
display: grid;
grid-template-columns: minmax(0, 1.1fr) minmax(0, 0.9fr);
gap: 18px;
}
.panel {
background: var(--panel-bg);
border: 1px solid var(--panel-border);
border-radius: 14px;
padding: 20px;
box-shadow: var(--shadow);
}
.panel-title {
font-weight: 600;
text-transform: uppercase;
letter-spacing: 0.08em;
font-size: 11px;
color: var(--text-muted);
margin-bottom: 12px;
}
.control {
display: flex;
flex-direction: column;
gap: 14px;
}
.control-row {
display: grid;
grid-template-columns: repeat(2, minmax(0, 1fr));
gap: 12px;
}
.label {
font-size: 12px;
text-transform: uppercase;
letter-spacing: 0.08em;
color: var(--text-muted);
}
.value {
font-size: 14px;
font-family: "IBM Plex Mono", ui-monospace, "SFMono-Regular", monospace;
margin-top: 2px;
}
.button-row {
display: flex;
gap: 10px;
flex-wrap: wrap;
}
.btn {
border: 1px solid var(--panel-border);
background: #ffffff;
color: #1f2a2e;
padding: 9px 14px;
border-radius: 8px;
font-weight: 600;
cursor: pointer;
transition: box-shadow 0.2s ease, border-color 0.2s ease;
}
.btn:hover:enabled {
box-shadow: 0 6px 14px rgba(31, 42, 46, 0.12);
border-color: #c3bbb2;
}
.btn.primary {
background: linear-gradient(120deg, var(--accent), #ff946b);
border-color: transparent;
color: #1b1a19;
}
.btn:disabled {
cursor: not-allowed;
opacity: 0.6;
}
.notice {
padding: 10px 12px;
border-radius: 10px;
background: rgba(255, 106, 61, 0.12);
color: #9a3f1e;
font-size: 13px;
}
.composer textarea {
width: 100%;
margin-top: 8px;
margin-bottom: 10px;
border-radius: 10px;
border: 1px solid var(--panel-border);
padding: 10px 12px;
font-family: inherit;
resize: vertical;
min-height: 96px;
background: #ffffff;
}
.thread-list {
display: flex;
flex-direction: column;
gap: 10px;
}
.thread-item {
text-align: left;
padding: 9px 10px;
border-radius: 10px;
border: 1px solid #e4ded7;
background: #ffffff;
font-family: "IBM Plex Mono", ui-monospace, "SFMono-Regular", monospace;
font-size: 12px;
cursor: pointer;
}
.thread-item:hover {
border-color: rgba(34, 124, 136, 0.6);
}
.thread-item.active {
border-color: rgba(255, 106, 61, 0.7);
background: rgba(255, 106, 61, 0.08);
}
.approvals {
display: flex;
flex-direction: column;
gap: 12px;
}
.approval-card {
background: #ffffff;
border: 1px dashed rgba(31, 42, 46, 0.2);
border-radius: 12px;
padding: 12px;
display: flex;
flex-direction: column;
gap: 10px;
}
.approval-header {
display: flex;
justify-content: space-between;
font-size: 13px;
color: var(--text-muted);
}
.approval-time {
font-family: "IBM Plex Mono", ui-monospace, "SFMono-Regular", monospace;
}
.approval-card pre {
margin: 0;
padding: 10px;
border-radius: 8px;
background: #f6f4f1;
font-size: 12px;
font-family: "IBM Plex Mono", ui-monospace, "SFMono-Regular", monospace;
overflow-x: auto;
}
.chat {
display: flex;
flex-direction: column;
gap: 12px;
}
.chat-scroll {
display: flex;
flex-direction: column;
gap: 12px;
max-height: 520px;
overflow-y: auto;
}
.bubble {
border-radius: 14px;
padding: 12px 14px;
border: 1px solid transparent;
background: #ffffff;
}
.bubble.user {
align-self: flex-start;
border-color: rgba(34, 124, 136, 0.35);
background: rgba(34, 124, 136, 0.06);
}
.bubble.assistant {
align-self: flex-end;
border-color: rgba(255, 106, 61, 0.35);
background: rgba(255, 106, 61, 0.07);
}
.bubble-role {
font-size: 11px;
text-transform: uppercase;
letter-spacing: 0.1em;
color: var(--text-muted);
margin-bottom: 4px;
}
.bubble-text {
white-space: pre-wrap;
line-height: 1.5;
}
.logs {
display: flex;
flex-direction: column;
}
.log-scroll {
display: flex;
flex-direction: column;
gap: 8px;
max-height: 520px;
overflow-y: auto;
font-family: "IBM Plex Mono", ui-monospace, "SFMono-Regular", monospace;
font-size: 12px;
}
.log-entry {
padding: 9px 10px;
border-radius: 10px;
border: 1px solid rgba(31, 42, 46, 0.12);
background: #ffffff;
}
.log-entry.out {
border-color: rgba(34, 124, 136, 0.28);
}
.log-meta {
display: flex;
justify-content: space-between;
margin-bottom: 4px;
color: var(--text-muted);
}
.log-detail {
color: #1f2a2e;
word-break: break-word;
}
.empty {
color: var(--text-muted);
font-size: 14px;
}
@media (max-width: 980px) {
.hero {
flex-direction: column;
align-items: stretch;
}
.grid {
grid-template-columns: 1fr;
}
.app {
padding: 20px;
}
}

app-server-ui/src/App.tsx Normal file

@@ -0,0 +1,705 @@
import { useCallback, useEffect, useMemo, useRef, useState } from "react";
type RpcError = {
message?: string;
[key: string]: unknown;
};
type RpcMessage = {
id?: number | string;
method?: string;
params?: Record<string, unknown>;
result?: Record<string, unknown>;
error?: RpcError;
};
type LogEntry = {
id: number;
direction: "in" | "out";
label: string;
detail?: string;
time: string;
};
type ChatMessage = {
id: string;
role: "user" | "assistant";
text: string;
};
type ApprovalRequest = {
id: number | string;
method: string;
params: Record<string, unknown>;
receivedAt: string;
};
type PendingRequest = {
method: string;
};
const wsUrl = import.meta.env.VITE_APP_SERVER_WS ?? "ws://localhost:8787";
const formatTime = () =>
new Date().toLocaleTimeString("en-US", {
hour12: false,
hour: "2-digit",
minute: "2-digit",
second: "2-digit",
});
const summarizeParams = (params: Record<string, unknown> | undefined) => {
if (!params) {
return undefined;
}
try {
return JSON.stringify(params);
} catch {
return "[unserializable params]";
}
};
const shouldLogMethod = (method: string) => {
if (method.includes("/delta")) {
return false;
}
return true;
};
export default function App() {
const wsRef = useRef<WebSocket | null>(null);
const nextIdRef = useRef(1);
const pendingRef = useRef<Map<number | string, PendingRequest>>(new Map());
const agentIndexRef = useRef<Map<string, Map<string, number>>>(new Map());
const selectedThreadIdRef = useRef<string | null>(null);
const userItemIdsRef = useRef<Map<string, Set<string>>>(new Map());
const [connected, setConnected] = useState(false);
const [initialized, setInitialized] = useState(false);
const [threads, setThreads] = useState<string[]>([]);
const [selectedThreadId, setSelectedThreadId] = useState<string | null>(null);
const [activeTurnId, setActiveTurnId] = useState<string | null>(null);
const [activeTurnThreadId, setActiveTurnThreadId] = useState<string | null>(null);
const [input, setInput] = useState("");
const [logs, setLogs] = useState<LogEntry[]>([]);
const [threadMessages, setThreadMessages] = useState<Record<string, ChatMessage[]>>({});
const [approvals, setApprovals] = useState<ApprovalRequest[]>([]);
const [connectionError, setConnectionError] = useState<string | null>(null);
const pushLog = useCallback((entry: Omit<LogEntry, "id" | "time">) => {
setLogs((prev) => {
const next: LogEntry[] = [
{
id: prev.length ? prev[0].id + 1 : 1,
time: formatTime(),
...entry,
},
...prev,
];
return next.slice(0, 200);
});
}, []);
const sendPayload = useCallback(
(payload: RpcMessage, label?: string) => {
const socket = wsRef.current;
if (!socket || socket.readyState !== WebSocket.OPEN) {
return;
}
const json = JSON.stringify(payload);
socket.send(json);
pushLog({
direction: "out",
label: label ?? payload.method ?? "response",
detail: summarizeParams(payload.params) ?? summarizeParams(payload.result),
});
},
[pushLog],
);
const sendRequest = useCallback(
(method: string, params?: Record<string, unknown>) => {
const id = nextIdRef.current++;
pendingRef.current.set(id, { method });
sendPayload({ id, method, params }, method);
return id;
},
[sendPayload],
);
const sendNotification = useCallback(
(method: string, params?: Record<string, unknown>) => {
sendPayload({ method, params }, method);
},
[sendPayload],
);
const selectThread = useCallback((threadId: string | null) => {
selectedThreadIdRef.current = threadId;
setSelectedThreadId(threadId);
}, []);
const ensureThread = useCallback(
(threadId: string) => {
setThreads((prev) => (prev.includes(threadId) ? prev : [...prev, threadId]));
setThreadMessages((prev) => (prev[threadId] ? prev : { ...prev, [threadId]: [] }));
if (!selectedThreadIdRef.current) {
selectThread(threadId);
}
},
[selectThread],
);
const getAgentIndexForThread = useCallback((threadId: string) => {
const existing = agentIndexRef.current.get(threadId);
if (existing) {
return existing;
}
const next = new Map<string, number>();
agentIndexRef.current.set(threadId, next);
return next;
}, []);
const handleInitialize = useCallback(() => {
sendRequest("initialize", {
clientInfo: {
name: "codex_app_server_ui",
title: "Codex App Server UI",
version: "0.1.0",
},
});
}, [sendRequest]);
const handleStartThread = useCallback(() => {
if (!initialized) {
return;
}
sendRequest("thread/start", {});
}, [initialized, sendRequest]);
const handleSendMessage = useCallback(() => {
if (!initialized || !selectedThreadId || !input.trim()) {
return;
}
const text = input.trim();
setInput("");
sendRequest("turn/start", {
threadId: selectedThreadId,
input: [{ type: "text", text }],
});
}, [initialized, selectedThreadId, input, sendRequest]);
const handleApprovalDecision = useCallback(
(approvalId: number | string, decision: "accept" | "decline") => {
sendPayload({
id: approvalId,
result: {
decision,
},
});
setApprovals((prev) => prev.filter((approval) => approval.id !== approvalId));
},
[sendPayload],
);
const updateAgentMessage = useCallback(
(threadId: string, itemId: string, delta: string) => {
setThreadMessages((prev) => {
const threadLog = prev[threadId] ?? [];
const indexMap = getAgentIndexForThread(threadId);
const existingIndex = indexMap.get(itemId);
if (existingIndex === undefined) {
indexMap.set(itemId, threadLog.length);
return {
...prev,
[threadId]: [...threadLog, { id: itemId, role: "assistant", text: delta }],
};
}
const nextThreadLog = [...threadLog];
nextThreadLog[existingIndex] = {
...nextThreadLog[existingIndex],
text: nextThreadLog[existingIndex].text + delta,
};
return { ...prev, [threadId]: nextThreadLog };
});
},
[getAgentIndexForThread],
);
const extractUserText = useCallback((content: unknown) => {
if (!Array.isArray(content)) {
return null;
}
const parts = content
.map((entry) => {
if (entry && typeof entry === "object" && (entry as { type?: string }).type === "text") {
return (entry as { text?: string }).text ?? "";
}
return "";
})
.filter((text) => text.length > 0);
return parts.length ? parts.join("\n") : null;
}, []);
const markUserItemSeen = useCallback((threadId: string, itemId: string) => {
const seen = userItemIdsRef.current.get(threadId) ?? new Set<string>();
if (!userItemIdsRef.current.has(threadId)) {
userItemIdsRef.current.set(threadId, seen);
}
if (seen.has(itemId)) {
return false;
}
seen.add(itemId);
return true;
}, []);
const handleIncomingMessage = useCallback(
(message: RpcMessage) => {
if (message.id !== undefined && message.method) {
const requestId = message.id;
if (
message.method === "item/commandExecution/requestApproval" ||
message.method === "item/fileChange/requestApproval"
) {
setApprovals((prev) => [
...prev,
{
id: requestId,
method: message.method ?? "",
params: message.params ?? {},
receivedAt: formatTime(),
},
]);
pushLog({
direction: "in",
label: message.method,
detail: summarizeParams(message.params),
});
}
return;
}
if (message.id !== undefined) {
const pending = pendingRef.current.get(message.id);
pendingRef.current.delete(message.id);
if (pending) {
if (pending.method === "initialize") {
const errorMessage =
message.error && typeof message.error.message === "string"
? message.error.message
: null;
const alreadyInitialized = errorMessage === "Already initialized";
if (!message.error) {
sendNotification("initialized");
}
if (!message.error || alreadyInitialized) {
setInitialized(true);
sendRequest("thread/loaded/list");
}
}
if (pending.method === "thread/start" || pending.method === "thread/resume") {
const thread = message.result?.thread as { id?: string } | undefined;
if (thread?.id) {
ensureThread(thread.id);
}
}
if (pending.method === "thread/loaded/list") {
const data = message.result?.data;
if (Array.isArray(data)) {
const ids = data.filter((entry): entry is string => typeof entry === "string");
setThreads(ids);
setThreadMessages((prev) => {
const next = { ...prev };
for (const id of ids) {
if (!next[id]) {
next[id] = [];
}
}
return next;
});
if (!selectedThreadIdRef.current && ids.length > 0) {
selectThread(ids[0]);
}
}
}
if (pending.method === "turn/start") {
const turn = message.result?.turn as { id?: string } | undefined;
if (turn?.id) {
setActiveTurnId(turn.id);
setActiveTurnThreadId(selectedThreadIdRef.current);
}
}
}
pushLog({
direction: "in",
label: pending?.method ?? "response",
detail: summarizeParams(message.result) ?? summarizeParams(message.error),
});
return;
}
if (message.method) {
const eventThreadId = message.params?.threadId as string | undefined;
if (eventThreadId) {
ensureThread(eventThreadId);
}
if (shouldLogMethod(message.method)) {
pushLog({
direction: "in",
label: message.method,
detail: summarizeParams(message.params),
});
}
if (message.method === "thread/started") {
const thread = (message.params?.thread as { id?: string } | undefined) ?? undefined;
if (thread?.id) {
ensureThread(thread.id);
}
}
if (message.method === "turn/started") {
const turn = (message.params?.turn as { id?: string } | undefined) ?? undefined;
const threadId = message.params?.threadId as string | undefined;
if (turn?.id) {
setActiveTurnId(turn.id);
setActiveTurnThreadId(threadId ?? null);
}
}
if (message.method === "turn/completed") {
setActiveTurnId(null);
setActiveTurnThreadId(null);
}
if (message.method === "item/started") {
const item = message.params?.item as {
id?: string;
type?: string;
content?: unknown;
text?: string;
} | undefined;
const threadId = message.params?.threadId as string | undefined;
if (!threadId) {
return;
}
const itemId = item?.id;
if (item?.type === "agentMessage" && itemId) {
setThreadMessages((prev) => {
const threadLog = prev[threadId] ?? [];
const indexMap = getAgentIndexForThread(threadId);
indexMap.set(itemId, threadLog.length);
return {
...prev,
[threadId]: [...threadLog, { id: itemId, role: "assistant", text: "" }],
};
});
}
if (item?.type === "userMessage") {
const userText = extractUserText(item.content);
if (userText) {
if (itemId && !markUserItemSeen(threadId, itemId)) {
return;
}
setThreadMessages((prev) => {
const threadLog = prev[threadId] ?? [];
return {
...prev,
[threadId]: [
...threadLog,
{ id: itemId ?? `user-${Date.now()}`, role: "user", text: userText },
],
};
});
}
}
}
if (message.method === "item/agentMessage/delta") {
const itemId = message.params?.itemId as string | undefined;
const threadId = message.params?.threadId as string | undefined;
const delta = message.params?.delta as string | undefined;
if (itemId && delta && threadId) {
updateAgentMessage(threadId, itemId, delta);
}
}
if (message.method === "item/completed") {
const item = message.params?.item as {
id?: string;
type?: string;
text?: string;
content?: unknown;
} | undefined;
const threadId = message.params?.threadId as string | undefined;
if (!threadId) {
return;
}
const itemId = item?.id;
if (item?.type === "agentMessage" && itemId && typeof item.text === "string") {
setThreadMessages((prev) => {
const threadLog = prev[threadId] ?? [];
const index = getAgentIndexForThread(threadId).get(itemId);
if (index === undefined) {
getAgentIndexForThread(threadId).set(itemId, threadLog.length);
return {
...prev,
[threadId]: [...threadLog, { id: itemId, role: "assistant", text: item.text ?? "" }],
};
}
const nextThreadLog = [...threadLog];
nextThreadLog[index] = { ...nextThreadLog[index], text: item.text ?? "" };
return { ...prev, [threadId]: nextThreadLog };
});
}
if (item?.type === "userMessage") {
return;
}
}
}
},
[
ensureThread,
extractUserText,
getAgentIndexForThread,
markUserItemSeen,
pushLog,
sendNotification,
selectThread,
sendRequest,
updateAgentMessage,
],
);
const connect = useCallback(() => {
if (wsRef.current) {
wsRef.current.close();
}
const socket = new WebSocket(wsUrl);
wsRef.current = socket;
socket.onopen = () => {
setConnected(true);
setConnectionError(null);
handleInitialize();
};
socket.onclose = () => {
setConnected(false);
setInitialized(false);
setThreads([]);
selectThread(null);
setActiveTurnId(null);
setActiveTurnThreadId(null);
setApprovals([]);
setThreadMessages({});
agentIndexRef.current.clear();
userItemIdsRef.current.clear();
pendingRef.current.clear();
};
socket.onerror = () => {
setConnectionError("WebSocket error. Check the bridge server.");
};
socket.onmessage = (event) => {
try {
const parsed = JSON.parse(event.data as string) as RpcMessage;
handleIncomingMessage(parsed);
} catch (err) {
pushLog({
direction: "in",
label: "ui/error",
detail: `Failed to parse message: ${String(err)}`,
});
}
};
}, [handleIncomingMessage, handleInitialize, pushLog, selectThread]);
useEffect(() => {
connect();
return () => {
wsRef.current?.close();
wsRef.current = null;
};
}, [connect]);
const statusLabel = useMemo(() => {
if (!connected) {
return "Disconnected";
}
if (!initialized) {
return "Connecting";
}
return "Ready";
}, [connected, initialized]);
const activeMessages = selectedThreadId ? threadMessages[selectedThreadId] ?? [] : [];
const displayedTurnId =
selectedThreadId && activeTurnThreadId === selectedThreadId ? activeTurnId : null;
return (
<div className="app">
<header className="hero">
<div className="status">
<span className={`status-dot ${connected ? "on" : "off"}`} />
<div>
<div className="status-label">{statusLabel}</div>
<div className="status-meta">{wsUrl}</div>
</div>
</div>
</header>
<main className="grid">
<section className="panel control">
<div className="panel-title">Session</div>
<div className="control-row">
<div>
<div className="label">Thread</div>
<div className="value">{selectedThreadId ?? "none"}</div>
</div>
<div>
<div className="label">Turn</div>
<div className="value">{displayedTurnId ?? "idle"}</div>
</div>
</div>
<div className="button-row">
<button className="btn" onClick={connect} type="button">
Reconnect
</button>
<button className="btn primary" onClick={handleStartThread} type="button" disabled={!initialized}>
Start Thread
</button>
</div>
{connectionError ? <div className="notice">{connectionError}</div> : null}
<div className="composer">
<label className="label" htmlFor="message">
Message
</label>
<textarea
id="message"
value={input}
placeholder="Ask Codex for a change or summary..."
onChange={(event) => setInput(event.target.value)}
rows={4}
/>
<button
className="btn primary"
type="button"
onClick={handleSendMessage}
disabled={!initialized || !selectedThreadId || !input.trim()}
>
Send Turn
</button>
</div>
<div className="thread-list">
<div className="panel-title">Subscribed Threads</div>
{threads.length === 0 ? (
<div className="empty">No threads yet.</div>
) : (
threads.map((id) => (
<button
key={id}
type="button"
className={`thread-item ${selectedThreadId === id ? "active" : ""}`}
onClick={() => selectThread(id)}
>
{id}
</button>
))
)}
</div>
{approvals.length ? (
<div className="approvals">
<div className="panel-title">Approvals</div>
{approvals.map((approval) => (
<div className="approval-card" key={String(approval.id)}>
<div className="approval-header">
<span>{approval.method}</span>
<span className="approval-time">{approval.receivedAt}</span>
</div>
<pre>{JSON.stringify(approval.params, null, 2)}</pre>
<div className="button-row">
<button
className="btn"
type="button"
onClick={() => handleApprovalDecision(approval.id, "decline")}
>
Decline
</button>
<button
className="btn primary"
type="button"
onClick={() => handleApprovalDecision(approval.id, "accept")}
>
Accept
</button>
</div>
</div>
))}
</div>
) : null}
</section>
<section className="panel chat">
<div className="panel-title">Conversation</div>
<div className="chat-scroll">
{activeMessages.length === 0 ? (
<div className="empty">No messages yet. Start a thread to begin.</div>
) : (
activeMessages.map((message) => (
<div className={`bubble ${message.role}`} key={message.id}>
<div className="bubble-role">{message.role === "user" ? "You" : "Codex"}</div>
<div className="bubble-text">{message.text}</div>
</div>
))
)}
</div>
</section>
<section className="panel logs">
<div className="panel-title">Event Log</div>
<div className="log-scroll">
{logs.length === 0 ? (
<div className="empty">Events from app-server will appear here.</div>
) : (
logs.map((entry) => (
<div className={`log-entry ${entry.direction}`} key={entry.id}>
<div className="log-meta">
<span className="log-time">{entry.time}</span>
<span className="log-label">{entry.label}</span>
</div>
{entry.detail ? <div className="log-detail">{entry.detail}</div> : null}
</div>
))
)}
</div>
</section>
</main>
</div>
);
}
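The component above drives a fixed request sequence on connect: `initialize`, then `thread/start`, then `turn/start` with the composed text. A minimal sketch of those payloads, assuming the bridge speaks JSON-RPC 2.0 with numeric ids (the wire framing of `sendRequest` is not shown above, so the `jsonrpc` field and the id scheme are assumptions):

```typescript
// Sketch of the three requests App.tsx issues, under the assumption that the
// bridge expects JSON-RPC 2.0 envelopes with client-assigned numeric ids.
type RpcRequest = { jsonrpc: "2.0"; id: number; method: string; params?: unknown };

let nextId = 0;
function makeRequest(method: string, params?: unknown): RpcRequest {
  nextId += 1;
  return { jsonrpc: "2.0", id: nextId, method, params };
}

// 1. Sent from the socket's onopen handler.
const initialize = makeRequest("initialize", {
  clientInfo: { name: "codex_app_server_ui", title: "Codex App Server UI", version: "0.1.0" },
});
// 2. Sent when the user clicks "Start Thread".
const startThread = makeRequest("thread/start", {});
// 3. Sent per message; threadId comes from the thread/start response ("thread-123" is illustrative).
const startTurn = makeRequest("turn/start", {
  threadId: "thread-123",
  input: [{ type: "text", text: "Summarize the repo" }],
});
```

The UI then correlates responses by id (`pendingRef`) and treats id-less messages as notifications, which matches the branching in `handleIncomingMessage`.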

View File

@@ -0,0 +1,16 @@
import React from "react";
import ReactDOM from "react-dom/client";
import App from "./App";
import "./App.css";
const root = document.getElementById("root");
if (!root) {
throw new Error("Root element not found");
}
ReactDOM.createRoot(root).render(
<React.StrictMode>
<App />
</React.StrictMode>,
);

View File

@@ -0,0 +1,27 @@
{
"compilerOptions": {
"target": "ES2022",
"useDefineForClassFields": true,
"module": "ESNext",
"lib": ["ES2022", "DOM", "DOM.Iterable"],
"jsx": "react-jsx",
"types": ["vite/client"],
"skipLibCheck": true,
/* Bundler mode */
"moduleResolution": "bundler",
"allowImportingTsExtensions": true,
"verbatimModuleSyntax": true,
"moduleDetection": "force",
"noEmit": true,
/* Linting */
"strict": true,
"noUnusedLocals": true,
"noUnusedParameters": true,
"erasableSyntaxOnly": true,
"noFallthroughCasesInSwitch": true,
"noUncheckedSideEffectImports": true
},
"include": ["src"]
}

View File

@@ -0,0 +1,10 @@
import { defineConfig } from "vite";
import react from "@vitejs/plugin-react";
export default defineConfig({
plugins: [react()],
server: {
port: 5173,
strictPort: true,
},
});

codex-rs/Cargo.lock generated
View File

@@ -1524,6 +1524,7 @@ dependencies = [
"codex-utils-absolute-path",
"landlock",
"libc",
"pretty_assertions",
"seccompiler",
"tempfile",
"tokio",

View File

@@ -197,6 +197,12 @@ impl ThreadHistoryBuilder {
if !payload.message.trim().is_empty() {
content.push(UserInput::Text {
text: payload.message.clone(),
text_elements: payload
.text_elements
.iter()
.cloned()
.map(Into::into)
.collect(),
});
}
if let Some(images) = &payload.images {
@@ -204,6 +210,9 @@ impl ThreadHistoryBuilder {
content.push(UserInput::Image { url: image.clone() });
}
}
for path in &payload.local_images {
content.push(UserInput::LocalImage { path: path.clone() });
}
content
}
}
@@ -244,6 +253,8 @@ mod tests {
EventMsg::UserMessage(UserMessageEvent {
message: "First turn".into(),
images: Some(vec!["https://example.com/one.png".into()]),
text_elements: Vec::new(),
local_images: Vec::new(),
}),
EventMsg::AgentMessage(AgentMessageEvent {
message: "Hi there".into(),
@@ -257,6 +268,8 @@ mod tests {
EventMsg::UserMessage(UserMessageEvent {
message: "Second turn".into(),
images: None,
text_elements: Vec::new(),
local_images: Vec::new(),
}),
EventMsg::AgentMessage(AgentMessageEvent {
message: "Reply two".into(),
@@ -277,6 +290,7 @@ mod tests {
content: vec![
UserInput::Text {
text: "First turn".into(),
text_elements: Vec::new(),
},
UserInput::Image {
url: "https://example.com/one.png".into(),
@@ -308,7 +322,8 @@ mod tests {
ThreadItem::UserMessage {
id: "item-4".into(),
content: vec![UserInput::Text {
text: "Second turn".into()
text: "Second turn".into(),
text_elements: Vec::new(),
}],
}
);
@@ -327,6 +342,8 @@ mod tests {
EventMsg::UserMessage(UserMessageEvent {
message: "Turn start".into(),
images: None,
text_elements: Vec::new(),
local_images: Vec::new(),
}),
EventMsg::AgentReasoning(AgentReasoningEvent {
text: "first summary".into(),
@@ -371,6 +388,8 @@ mod tests {
EventMsg::UserMessage(UserMessageEvent {
message: "Please do the thing".into(),
images: None,
text_elements: Vec::new(),
local_images: Vec::new(),
}),
EventMsg::AgentMessage(AgentMessageEvent {
message: "Working...".into(),
@@ -381,6 +400,8 @@ mod tests {
EventMsg::UserMessage(UserMessageEvent {
message: "Let's try again".into(),
images: None,
text_elements: Vec::new(),
local_images: Vec::new(),
}),
EventMsg::AgentMessage(AgentMessageEvent {
message: "Second attempt complete.".into(),
@@ -398,7 +419,8 @@ mod tests {
ThreadItem::UserMessage {
id: "item-1".into(),
content: vec![UserInput::Text {
text: "Please do the thing".into()
text: "Please do the thing".into(),
text_elements: Vec::new(),
}],
}
);
@@ -418,7 +440,8 @@ mod tests {
ThreadItem::UserMessage {
id: "item-3".into(),
content: vec![UserInput::Text {
text: "Let's try again".into()
text: "Let's try again".into(),
text_elements: Vec::new(),
}],
}
);
@@ -437,6 +460,8 @@ mod tests {
EventMsg::UserMessage(UserMessageEvent {
message: "First".into(),
images: None,
text_elements: Vec::new(),
local_images: Vec::new(),
}),
EventMsg::AgentMessage(AgentMessageEvent {
message: "A1".into(),
@@ -444,6 +469,8 @@ mod tests {
EventMsg::UserMessage(UserMessageEvent {
message: "Second".into(),
images: None,
text_elements: Vec::new(),
local_images: Vec::new(),
}),
EventMsg::AgentMessage(AgentMessageEvent {
message: "A2".into(),
@@ -452,6 +479,8 @@ mod tests {
EventMsg::UserMessage(UserMessageEvent {
message: "Third".into(),
images: None,
text_elements: Vec::new(),
local_images: Vec::new(),
}),
EventMsg::AgentMessage(AgentMessageEvent {
message: "A3".into(),
@@ -469,6 +498,7 @@ mod tests {
id: "item-1".into(),
content: vec![UserInput::Text {
text: "First".into(),
text_elements: Vec::new(),
}],
},
ThreadItem::AgentMessage {
@@ -486,6 +516,7 @@ mod tests {
id: "item-3".into(),
content: vec![UserInput::Text {
text: "Third".into(),
text_elements: Vec::new(),
}],
},
ThreadItem::AgentMessage {
@@ -504,6 +535,8 @@ mod tests {
EventMsg::UserMessage(UserMessageEvent {
message: "One".into(),
images: None,
text_elements: Vec::new(),
local_images: Vec::new(),
}),
EventMsg::AgentMessage(AgentMessageEvent {
message: "A1".into(),
@@ -511,6 +544,8 @@ mod tests {
EventMsg::UserMessage(UserMessageEvent {
message: "Two".into(),
images: None,
text_elements: Vec::new(),
local_images: Vec::new(),
}),
EventMsg::AgentMessage(AgentMessageEvent {
message: "A2".into(),

View File

@@ -16,6 +16,8 @@ use codex_protocol::protocol::ReviewDecision;
use codex_protocol::protocol::SandboxPolicy;
use codex_protocol::protocol::SessionSource;
use codex_protocol::protocol::TurnAbortReason;
use codex_protocol::user_input::ByteRange as CoreByteRange;
use codex_protocol::user_input::TextElement as CoreTextElement;
use codex_utils_absolute_path::AbsolutePathBuf;
use schemars::JsonSchema;
use serde::Deserialize;
@@ -444,9 +446,74 @@ pub struct RemoveConversationListenerParams {
#[serde(rename_all = "camelCase")]
#[serde(tag = "type", content = "data")]
pub enum InputItem {
Text { text: String },
Image { image_url: String },
LocalImage { path: PathBuf },
Text {
text: String,
/// UI-defined spans within `text` used to render or persist special elements.
#[serde(default)]
text_elements: Vec<V1TextElement>,
},
Image {
image_url: String,
},
LocalImage {
path: PathBuf,
},
}
#[derive(Serialize, Deserialize, Debug, Clone, Copy, PartialEq, Eq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(rename = "ByteRange")]
pub struct V1ByteRange {
/// Start byte offset (inclusive) within the UTF-8 text buffer.
pub start: usize,
/// End byte offset (exclusive) within the UTF-8 text buffer.
pub end: usize,
}
impl From<CoreByteRange> for V1ByteRange {
fn from(value: CoreByteRange) -> Self {
Self {
start: value.start,
end: value.end,
}
}
}
impl From<V1ByteRange> for CoreByteRange {
fn from(value: V1ByteRange) -> Self {
Self {
start: value.start,
end: value.end,
}
}
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(rename = "TextElement")]
pub struct V1TextElement {
/// Byte range in the parent `text` buffer that this element occupies.
pub byte_range: V1ByteRange,
/// Optional human-readable placeholder for the element, displayed in the UI.
pub placeholder: Option<String>,
}
impl From<CoreTextElement> for V1TextElement {
fn from(value: CoreTextElement) -> Self {
Self {
byte_range: value.byte_range.into(),
placeholder: value.placeholder,
}
}
}
impl From<V1TextElement> for CoreTextElement {
fn from(value: V1TextElement) -> Self {
Self {
byte_range: value.byte_range.into(),
placeholder: value.placeholder,
}
}
}
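As the doc comments above note, `byte_range` offsets index into the UTF-8 byte buffer of `text` (start inclusive, end exclusive), not UTF-16 code units. A hedged client-side sketch of computing such a range, using an illustrative `@skill` span and placeholder (not names from this protocol):

```typescript
// Byte ranges are over UTF-8 bytes, so multi-byte characters (like "é") must
// be measured with an encoder rather than with string indices.
const encoder = new TextEncoder();

function byteRangeOf(text: string, span: string): { start: number; end: number } | null {
  const index = text.indexOf(span); // UTF-16 index of the span
  if (index === -1) return null;
  // Re-measure both the prefix and the span in UTF-8 bytes.
  const start = encoder.encode(text.slice(0, index)).length;
  const end = start + encoder.encode(span).length;
  return { start, end };
}

const text = "Résumé: see @skill"; // "é" is 2 bytes in UTF-8
const element = {
  byteRange: byteRangeOf(text, "@skill")!,
  placeholder: "skill mention", // optional; shown in the UI instead of the raw span
};
```

Mixing up byte and code-unit offsets is the classic failure mode here; the two only coincide for pure-ASCII text.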
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]

View File

@@ -16,6 +16,7 @@ use codex_protocol::openai_models::ReasoningEffort;
use codex_protocol::parse_command::ParsedCommand as CoreParsedCommand;
use codex_protocol::plan_tool::PlanItemArg as CorePlanItemArg;
use codex_protocol::plan_tool::StepStatus as CorePlanStepStatus;
use codex_protocol::protocol::AgentStatus as CoreAgentStatus;
use codex_protocol::protocol::AskForApproval as CoreAskForApproval;
use codex_protocol::protocol::CodexErrorInfo as CoreCodexErrorInfo;
use codex_protocol::protocol::CreditsSnapshot as CoreCreditsSnapshot;
@@ -24,10 +25,13 @@ use codex_protocol::protocol::RateLimitSnapshot as CoreRateLimitSnapshot;
use codex_protocol::protocol::RateLimitWindow as CoreRateLimitWindow;
use codex_protocol::protocol::SessionSource as CoreSessionSource;
use codex_protocol::protocol::SkillErrorInfo as CoreSkillErrorInfo;
use codex_protocol::protocol::SkillInterface as CoreSkillInterface;
use codex_protocol::protocol::SkillMetadata as CoreSkillMetadata;
use codex_protocol::protocol::SkillScope as CoreSkillScope;
use codex_protocol::protocol::TokenUsage as CoreTokenUsage;
use codex_protocol::protocol::TokenUsageInfo as CoreTokenUsageInfo;
use codex_protocol::user_input::ByteRange as CoreByteRange;
use codex_protocol::user_input::TextElement as CoreTextElement;
use codex_protocol::user_input::UserInput as CoreUserInput;
use codex_utils_absolute_path::AbsolutePathBuf;
use mcp_types::ContentBlock as McpContentBlock;
@@ -1251,13 +1255,35 @@ pub enum SkillScope {
pub struct SkillMetadata {
pub name: String,
pub description: String,
#[ts(optional)]
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
/// Legacy short_description from SKILL.md. Prefer SKILL.toml interface.short_description.
pub short_description: Option<String>,
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub interface: Option<SkillInterface>,
pub path: PathBuf,
pub scope: SkillScope,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct SkillInterface {
#[ts(optional)]
pub display_name: Option<String>,
#[ts(optional)]
pub short_description: Option<String>,
#[ts(optional)]
pub icon_small: Option<PathBuf>,
#[ts(optional)]
pub icon_large: Option<PathBuf>,
#[ts(optional)]
pub brand_color: Option<String>,
#[ts(optional)]
pub default_prompt: Option<String>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
@@ -1281,12 +1307,26 @@ impl From<CoreSkillMetadata> for SkillMetadata {
name: value.name,
description: value.description,
short_description: value.short_description,
interface: value.interface.map(SkillInterface::from),
path: value.path,
scope: value.scope.into(),
}
}
}
impl From<CoreSkillInterface> for SkillInterface {
fn from(value: CoreSkillInterface) -> Self {
Self {
display_name: value.display_name,
short_description: value.short_description,
brand_color: value.brand_color,
default_prompt: value.default_prompt,
icon_small: value.icon_small,
icon_large: value.icon_large,
}
}
}
impl From<CoreSkillScope> for SkillScope {
fn from(value: CoreSkillScope) -> Self {
match value {
@@ -1543,21 +1583,93 @@ pub struct TurnInterruptParams {
pub struct TurnInterruptResponse {}
// User input types
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct ByteRange {
pub start: usize,
pub end: usize,
}
impl From<CoreByteRange> for ByteRange {
fn from(value: CoreByteRange) -> Self {
Self {
start: value.start,
end: value.end,
}
}
}
impl From<ByteRange> for CoreByteRange {
fn from(value: ByteRange) -> Self {
Self {
start: value.start,
end: value.end,
}
}
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct TextElement {
/// Byte range in the parent `text` buffer that this element occupies.
pub byte_range: ByteRange,
/// Optional human-readable placeholder for the element, displayed in the UI.
pub placeholder: Option<String>,
}
impl From<CoreTextElement> for TextElement {
fn from(value: CoreTextElement) -> Self {
Self {
byte_range: value.byte_range.into(),
placeholder: value.placeholder,
}
}
}
impl From<TextElement> for CoreTextElement {
fn from(value: TextElement) -> Self {
Self {
byte_range: value.byte_range.into(),
placeholder: value.placeholder,
}
}
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(tag = "type", rename_all = "camelCase")]
#[ts(tag = "type")]
#[ts(export_to = "v2/")]
pub enum UserInput {
Text { text: String },
Image { url: String },
LocalImage { path: PathBuf },
Skill { name: String, path: PathBuf },
Text {
text: String,
/// UI-defined spans within `text` used to render or persist special elements.
#[serde(default)]
text_elements: Vec<TextElement>,
},
Image {
url: String,
},
LocalImage {
path: PathBuf,
},
Skill {
name: String,
path: PathBuf,
},
}
impl UserInput {
pub fn into_core(self) -> CoreUserInput {
match self {
UserInput::Text { text } => CoreUserInput::Text { text },
UserInput::Text {
text,
text_elements,
} => CoreUserInput::Text {
text,
text_elements: text_elements.into_iter().map(Into::into).collect(),
},
UserInput::Image { url } => CoreUserInput::Image { image_url: url },
UserInput::LocalImage { path } => CoreUserInput::LocalImage { path },
UserInput::Skill { name, path } => CoreUserInput::Skill { name, path },
@@ -1568,7 +1680,13 @@ impl UserInput {
impl From<CoreUserInput> for UserInput {
fn from(value: CoreUserInput) -> Self {
match value {
CoreUserInput::Text { text } => UserInput::Text { text },
CoreUserInput::Text {
text,
text_elements,
} => UserInput::Text {
text,
text_elements: text_elements.into_iter().map(Into::into).collect(),
},
CoreUserInput::Image { image_url } => UserInput::Image { url: image_url },
CoreUserInput::LocalImage { path } => UserInput::LocalImage { path },
CoreUserInput::Skill { name, path } => UserInput::Skill { name, path },
@@ -1643,6 +1761,25 @@ pub enum ThreadItem {
},
#[serde(rename_all = "camelCase")]
#[ts(rename_all = "camelCase")]
CollabAgentToolCall {
/// Unique identifier for this collab tool call.
id: String,
/// Name of the collab tool that was invoked.
tool: CollabAgentTool,
/// Current status of the collab tool call.
status: CollabAgentToolCallStatus,
/// Thread ID of the agent issuing the collab request.
sender_thread_id: String,
/// Thread IDs of the receiving agents, when applicable. For a spawn operation,
/// this contains the newly spawned agent's thread ID.
receiver_thread_ids: Vec<String>,
/// Prompt text sent as part of the collab tool call, when available.
prompt: Option<String>,
/// Last known status of the target agents, when available.
agents_states: HashMap<String, CollabAgentState>,
},
#[serde(rename_all = "camelCase")]
#[ts(rename_all = "camelCase")]
WebSearch { id: String, query: String },
#[serde(rename_all = "camelCase")]
#[ts(rename_all = "camelCase")]
@@ -1695,6 +1832,16 @@ pub enum CommandExecutionStatus {
Declined,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub enum CollabAgentTool {
SpawnAgent,
SendInput,
Wait,
CloseAgent,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
@@ -1733,6 +1880,66 @@ pub enum McpToolCallStatus {
Failed,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub enum CollabAgentToolCallStatus {
InProgress,
Completed,
Failed,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub enum CollabAgentStatus {
PendingInit,
Running,
Completed,
Errored,
Shutdown,
NotFound,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct CollabAgentState {
pub status: CollabAgentStatus,
pub message: Option<String>,
}
impl From<CoreAgentStatus> for CollabAgentState {
fn from(value: CoreAgentStatus) -> Self {
match value {
CoreAgentStatus::PendingInit => Self {
status: CollabAgentStatus::PendingInit,
message: None,
},
CoreAgentStatus::Running => Self {
status: CollabAgentStatus::Running,
message: None,
},
CoreAgentStatus::Completed(message) => Self {
status: CollabAgentStatus::Completed,
message,
},
CoreAgentStatus::Errored(message) => Self {
status: CollabAgentStatus::Errored,
message: Some(message),
},
CoreAgentStatus::Shutdown => Self {
status: CollabAgentStatus::Shutdown,
message: None,
},
CoreAgentStatus::NotFound => Self {
status: CollabAgentStatus::NotFound,
message: None,
},
}
}
}
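Per the conversion above, only `Completed` and `Errored` carry a message; the other statuses map to `message: None`. A small sketch of how a client might consume the serialized form (statuses are camelCase on the wire per the serde attributes; the label formatting is an assumption, not part of the protocol):

```typescript
// Mirrors CollabAgentState as serialized by serde with rename_all = "camelCase".
type CollabAgentState = {
  status: "pendingInit" | "running" | "completed" | "errored" | "shutdown" | "notFound";
  message: string | null; // populated only for completed/errored agents
};

function describe(state: CollabAgentState): string {
  return state.message ? `${state.status}: ${state.message}` : state.status;
}
```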
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
@@ -2160,6 +2367,7 @@ mod tests {
content: vec![
CoreUserInput::Text {
text: "hello".to_string(),
text_elements: Vec::new(),
},
CoreUserInput::Image {
image_url: "https://example.com/image.png".to_string(),
@@ -2181,6 +2389,7 @@ mod tests {
content: vec![
UserInput::Text {
text: "hello".to_string(),
text_elements: Vec::new(),
},
UserInput::Image {
url: "https://example.com/image.png".to_string(),

View File

@@ -256,7 +256,11 @@ fn send_message_v2_with_policies(
println!("< thread/start response: {thread_response:?}");
let mut turn_params = TurnStartParams {
thread_id: thread_response.thread.id.clone(),
input: vec![V2UserInput::Text { text: user_message }],
input: vec![V2UserInput::Text {
text: user_message,
// Plain text conversion has no UI element ranges.
text_elements: Vec::new(),
}],
..Default::default()
};
turn_params.approval_policy = approval_policy;
@@ -288,6 +292,7 @@ fn send_follow_up_v2(
thread_id: thread_response.thread.id.clone(),
input: vec![V2UserInput::Text {
text: first_message,
text_elements: Vec::new(),
}],
..Default::default()
};
@@ -299,6 +304,7 @@ fn send_follow_up_v2(
thread_id: thread_response.thread.id.clone(),
input: vec![V2UserInput::Text {
text: follow_up_message,
text_elements: Vec::new(),
}],
..Default::default()
};
@@ -471,6 +477,7 @@ impl CodexClient {
conversation_id: *conversation_id,
items: vec![InputItem::Text {
text: message.to_string(),
text_elements: Vec::new(),
}],
},
};

View File

@@ -375,6 +375,7 @@ Today both notifications carry an empty `items` array even when item events were
- `commandExecution``{id, command, cwd, status, commandActions, aggregatedOutput?, exitCode?, durationMs?}` for sandboxed commands; `status` is `inProgress`, `completed`, `failed`, or `declined`.
- `fileChange``{id, changes, status}` describing proposed edits; `changes` list `{path, kind, diff}` and `status` is `inProgress`, `completed`, `failed`, or `declined`.
- `mcpToolCall``{id, server, tool, status, arguments, result?, error?}` describing MCP calls; `status` is `inProgress`, `completed`, or `failed`.
- `collabToolCall``{id, tool, status, senderThreadId, receiverThreadIds, prompt?, agentsStates}` describing collab tool calls (`spawn_agent`, `send_input`, `wait`, `close_agent`); `status` is `inProgress`, `completed`, or `failed`.
- `webSearch``{id, query}` for a web search request issued by the agent.
- `imageView``{id, path}` emitted when the agent invokes the image viewer tool.
- `enteredReviewMode``{id, review}` sent when the reviewer starts; `review` is a short user-facing label such as `"current changes"` or the requested target description.
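The item variants above form a tagged union keyed on `type`, so a TypeScript client can narrow them exhaustively. A hedged sketch using only fields documented in the list (the type names are illustrative, not the generated bindings):

```typescript
// A subset of the documented item payloads, discriminated by the `type` tag.
type ThreadItem =
  | { type: "commandExecution"; id: string; command: string; status: string }
  | { type: "mcpToolCall"; id: string; server: string; tool: string; status: string }
  | { type: "webSearch"; id: string; query: string };

function summarize(item: ThreadItem): string {
  switch (item.type) {
    case "commandExecution":
      return `$ ${item.command} [${item.status}]`;
    case "mcpToolCall":
      return `${item.server}.${item.tool} [${item.status}]`;
    case "webSearch":
      return `search: ${item.query}`;
  }
}
```

Because the switch covers every variant, the compiler flags any newly added item type that the client has not handled yet.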

View File

@@ -14,6 +14,9 @@ use codex_app_server_protocol::AgentMessageDeltaNotification;
use codex_app_server_protocol::ApplyPatchApprovalParams;
use codex_app_server_protocol::ApplyPatchApprovalResponse;
use codex_app_server_protocol::CodexErrorInfo as V2CodexErrorInfo;
use codex_app_server_protocol::CollabAgentState as V2CollabAgentStatus;
use codex_app_server_protocol::CollabAgentTool;
use codex_app_server_protocol::CollabAgentToolCallStatus as V2CollabToolCallStatus;
use codex_app_server_protocol::CommandAction as V2ParsedCommand;
use codex_app_server_protocol::CommandExecutionApprovalDecision;
use codex_app_server_protocol::CommandExecutionOutputDeltaNotification;
@@ -278,6 +281,218 @@ pub(crate) async fn apply_bespoke_event_handling(
.send_server_notification(ServerNotification::ItemCompleted(notification))
.await;
}
EventMsg::CollabAgentSpawnBegin(begin_event) => {
let item = ThreadItem::CollabAgentToolCall {
id: begin_event.call_id,
tool: CollabAgentTool::SpawnAgent,
status: V2CollabToolCallStatus::InProgress,
sender_thread_id: begin_event.sender_thread_id.to_string(),
receiver_thread_ids: Vec::new(),
prompt: Some(begin_event.prompt),
agents_states: HashMap::new(),
};
let notification = ItemStartedNotification {
thread_id: conversation_id.to_string(),
turn_id: event_turn_id.clone(),
item,
};
outgoing
.send_server_notification(ServerNotification::ItemStarted(notification))
.await;
}
EventMsg::CollabAgentSpawnEnd(end_event) => {
let has_receiver = end_event.new_thread_id.is_some();
let status = match &end_event.status {
codex_protocol::protocol::AgentStatus::Errored(_)
| codex_protocol::protocol::AgentStatus::NotFound => V2CollabToolCallStatus::Failed,
_ if has_receiver => V2CollabToolCallStatus::Completed,
_ => V2CollabToolCallStatus::Failed,
};
let (receiver_thread_ids, agents_states) = match end_event.new_thread_id {
Some(id) => {
let receiver_id = id.to_string();
let received_status = V2CollabAgentStatus::from(end_event.status.clone());
(
vec![receiver_id.clone()],
[(receiver_id, received_status)].into_iter().collect(),
)
}
None => (Vec::new(), HashMap::new()),
};
let item = ThreadItem::CollabAgentToolCall {
id: end_event.call_id,
tool: CollabAgentTool::SpawnAgent,
status,
sender_thread_id: end_event.sender_thread_id.to_string(),
receiver_thread_ids,
prompt: Some(end_event.prompt),
agents_states,
};
let notification = ItemCompletedNotification {
thread_id: conversation_id.to_string(),
turn_id: event_turn_id.clone(),
item,
};
outgoing
.send_server_notification(ServerNotification::ItemCompleted(notification))
.await;
}
EventMsg::CollabAgentInteractionBegin(begin_event) => {
let receiver_thread_ids = vec![begin_event.receiver_thread_id.to_string()];
let item = ThreadItem::CollabAgentToolCall {
id: begin_event.call_id,
tool: CollabAgentTool::SendInput,
status: V2CollabToolCallStatus::InProgress,
sender_thread_id: begin_event.sender_thread_id.to_string(),
receiver_thread_ids,
prompt: Some(begin_event.prompt),
agents_states: HashMap::new(),
};
let notification = ItemStartedNotification {
thread_id: conversation_id.to_string(),
turn_id: event_turn_id.clone(),
item,
};
outgoing
.send_server_notification(ServerNotification::ItemStarted(notification))
.await;
}
EventMsg::CollabAgentInteractionEnd(end_event) => {
let status = match &end_event.status {
codex_protocol::protocol::AgentStatus::Errored(_)
| codex_protocol::protocol::AgentStatus::NotFound => V2CollabToolCallStatus::Failed,
_ => V2CollabToolCallStatus::Completed,
};
let receiver_id = end_event.receiver_thread_id.to_string();
let received_status = V2CollabAgentStatus::from(end_event.status);
let item = ThreadItem::CollabAgentToolCall {
id: end_event.call_id,
tool: CollabAgentTool::SendInput,
status,
sender_thread_id: end_event.sender_thread_id.to_string(),
receiver_thread_ids: vec![receiver_id.clone()],
prompt: Some(end_event.prompt),
agents_states: [(receiver_id, received_status)].into_iter().collect(),
};
let notification = ItemCompletedNotification {
thread_id: conversation_id.to_string(),
turn_id: event_turn_id.clone(),
item,
};
outgoing
.send_server_notification(ServerNotification::ItemCompleted(notification))
.await;
}
EventMsg::CollabWaitingBegin(begin_event) => {
let receiver_thread_ids = begin_event
.receiver_thread_ids
.iter()
.map(ToString::to_string)
.collect();
let item = ThreadItem::CollabAgentToolCall {
id: begin_event.call_id,
tool: CollabAgentTool::Wait,
status: V2CollabToolCallStatus::InProgress,
sender_thread_id: begin_event.sender_thread_id.to_string(),
receiver_thread_ids,
prompt: None,
agents_states: HashMap::new(),
};
let notification = ItemStartedNotification {
thread_id: conversation_id.to_string(),
turn_id: event_turn_id.clone(),
item,
};
outgoing
.send_server_notification(ServerNotification::ItemStarted(notification))
.await;
}
EventMsg::CollabWaitingEnd(end_event) => {
let status = if end_event.statuses.values().any(|status| {
matches!(
status,
codex_protocol::protocol::AgentStatus::Errored(_)
| codex_protocol::protocol::AgentStatus::NotFound
)
}) {
V2CollabToolCallStatus::Failed
} else {
V2CollabToolCallStatus::Completed
};
let receiver_thread_ids = end_event.statuses.keys().map(ToString::to_string).collect();
let agents_states = end_event
.statuses
.iter()
.map(|(id, status)| (id.to_string(), V2CollabAgentStatus::from(status.clone())))
.collect();
let item = ThreadItem::CollabAgentToolCall {
id: end_event.call_id,
tool: CollabAgentTool::Wait,
status,
sender_thread_id: end_event.sender_thread_id.to_string(),
receiver_thread_ids,
prompt: None,
agents_states,
};
let notification = ItemCompletedNotification {
thread_id: conversation_id.to_string(),
turn_id: event_turn_id.clone(),
item,
};
outgoing
.send_server_notification(ServerNotification::ItemCompleted(notification))
.await;
}
EventMsg::CollabCloseBegin(begin_event) => {
let item = ThreadItem::CollabAgentToolCall {
id: begin_event.call_id,
tool: CollabAgentTool::CloseAgent,
status: V2CollabToolCallStatus::InProgress,
sender_thread_id: begin_event.sender_thread_id.to_string(),
receiver_thread_ids: vec![begin_event.receiver_thread_id.to_string()],
prompt: None,
agents_states: HashMap::new(),
};
let notification = ItemStartedNotification {
thread_id: conversation_id.to_string(),
turn_id: event_turn_id.clone(),
item,
};
outgoing
.send_server_notification(ServerNotification::ItemStarted(notification))
.await;
}
EventMsg::CollabCloseEnd(end_event) => {
let status = match &end_event.status {
codex_protocol::protocol::AgentStatus::Errored(_)
| codex_protocol::protocol::AgentStatus::NotFound => V2CollabToolCallStatus::Failed,
_ => V2CollabToolCallStatus::Completed,
};
let receiver_id = end_event.receiver_thread_id.to_string();
let agents_states = [(
receiver_id.clone(),
V2CollabAgentStatus::from(end_event.status),
)]
.into_iter()
.collect();
let item = ThreadItem::CollabAgentToolCall {
id: end_event.call_id,
tool: CollabAgentTool::CloseAgent,
status,
sender_thread_id: end_event.sender_thread_id.to_string(),
receiver_thread_ids: vec![receiver_id],
prompt: None,
agents_states,
};
let notification = ItemCompletedNotification {
thread_id: conversation_id.to_string(),
turn_id: event_turn_id.clone(),
item,
};
outgoing
.send_server_notification(ServerNotification::ItemCompleted(notification))
.await;
}
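Across these `End` handlers the same rule recurs: an agent that errored or was not found fails the tool call, anything else completes it, and a `Wait` fails if any awaited agent did. A minimal sketch of that rule, using simplified stand-ins for the protocol enums (the real `AgentStatus` and V2 status types carry more variants and data):

```rust
// Simplified stand-ins for codex_protocol::protocol::AgentStatus and the
// V2 collab tool-call status used in the handlers above.
#[derive(Debug, Clone, PartialEq)]
enum AgentStatus {
    Running,
    Completed,
    Errored(String),
    NotFound,
}

#[derive(Debug, PartialEq)]
enum CollabToolCallStatus {
    InProgress,
    Completed,
    Failed,
}

// Rule applied by the single-receiver End handlers (SendInput, CloseAgent):
// Errored/NotFound fail the call, everything else completes it.
fn tool_call_status(status: &AgentStatus) -> CollabToolCallStatus {
    match status {
        AgentStatus::Errored(_) | AgentStatus::NotFound => CollabToolCallStatus::Failed,
        _ => CollabToolCallStatus::Completed,
    }
}

// Rule applied by CollabWaitingEnd: one bad agent fails the whole wait.
fn wait_status<'a>(mut statuses: impl Iterator<Item = &'a AgentStatus>) -> CollabToolCallStatus {
    if statuses.any(|s| tool_call_status(s) == CollabToolCallStatus::Failed) {
        CollabToolCallStatus::Failed
    } else {
        CollabToolCallStatus::Completed
    }
}
```

`CollabAgentSpawnEnd` is the one exception: it also fails when no receiver thread was created, even if the reported status is otherwise healthy.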
EventMsg::AgentMessageContentDelta(event) => {
let notification = AgentMessageDeltaNotification {
thread_id: conversation_id.to_string(),

View File

@@ -3125,7 +3125,13 @@ impl CodexMessageProcessor {
let mapped_items: Vec<CoreInputItem> = items
.into_iter()
.map(|item| match item {
WireInputItem::Text { text } => CoreInputItem::Text { text },
WireInputItem::Text {
text,
text_elements,
} => CoreInputItem::Text {
text,
text_elements: text_elements.into_iter().map(Into::into).collect(),
},
WireInputItem::Image { image_url } => CoreInputItem::Image { image_url },
WireInputItem::LocalImage { path } => CoreInputItem::LocalImage { path },
})
@@ -3171,7 +3177,13 @@ impl CodexMessageProcessor {
let mapped_items: Vec<CoreInputItem> = items
.into_iter()
.map(|item| match item {
WireInputItem::Text { text } => CoreInputItem::Text { text },
WireInputItem::Text {
text,
text_elements,
} => CoreInputItem::Text {
text,
text_elements: text_elements.into_iter().map(Into::into).collect(),
},
WireInputItem::Image { image_url } => CoreInputItem::Image { image_url },
WireInputItem::LocalImage { path } => CoreInputItem::LocalImage { path },
})
@@ -3333,6 +3345,8 @@ impl CodexMessageProcessor {
id: turn_id.clone(),
content: vec![V2UserInput::Text {
text: display_text.to_string(),
// Review prompt display text is synthesized; no UI element ranges to preserve.
text_elements: Vec::new(),
}],
}]
};
@@ -3885,6 +3899,16 @@ fn skills_to_info(
name: skill.name.clone(),
description: skill.description.clone(),
short_description: skill.short_description.clone(),
interface: skill.interface.clone().map(|interface| {
codex_app_server_protocol::SkillInterface {
display_name: interface.display_name,
short_description: interface.short_description,
icon_small: interface.icon_small,
icon_large: interface.icon_large,
brand_color: interface.brand_color,
default_prompt: interface.default_prompt,
}
}),
path: skill.path.clone(),
scope: skill.scope.into(),
})

View File

@@ -29,6 +29,7 @@ pub use responses::create_exec_command_sse_response;
pub use responses::create_final_assistant_message_sse_response;
pub use responses::create_shell_command_sse_response;
pub use rollout::create_fake_rollout;
pub use rollout::create_fake_rollout_with_text_elements;
use serde::de::DeserializeOwned;
pub fn to_response<T: DeserializeOwned>(response: JSONRPCResponse) -> anyhow::Result<T> {

View File

@@ -25,7 +25,7 @@ fn preset_to_info(preset: &ModelPreset, priority: i32) -> ModelInfo {
},
supported_in_api: true,
priority,
upgrade: preset.upgrade.as_ref().map(|u| u.id.clone()),
upgrade: preset.upgrade.as_ref().map(|u| u.into()),
base_instructions: "base instructions".to_string(),
supports_reasoning_summaries: false,
support_verbosity: false,

View File

@@ -87,3 +87,75 @@ pub fn create_fake_rollout(
fs::write(file_path, lines.join("\n") + "\n")?;
Ok(uuid_str)
}
pub fn create_fake_rollout_with_text_elements(
codex_home: &Path,
filename_ts: &str,
meta_rfc3339: &str,
preview: &str,
text_elements: Vec<serde_json::Value>,
model_provider: Option<&str>,
git_info: Option<GitInfo>,
) -> Result<String> {
let uuid = Uuid::new_v4();
let uuid_str = uuid.to_string();
let conversation_id = ThreadId::from_string(&uuid_str)?;
// sessions/YYYY/MM/DD derived from filename_ts (YYYY-MM-DDThh-mm-ss)
let year = &filename_ts[0..4];
let month = &filename_ts[5..7];
let day = &filename_ts[8..10];
let dir = codex_home.join("sessions").join(year).join(month).join(day);
fs::create_dir_all(&dir)?;
let file_path = dir.join(format!("rollout-{filename_ts}-{uuid}.jsonl"));
// Build JSONL lines
let meta = SessionMeta {
id: conversation_id,
timestamp: meta_rfc3339.to_string(),
cwd: PathBuf::from("/"),
originator: "codex".to_string(),
cli_version: "0.0.0".to_string(),
instructions: None,
source: SessionSource::Cli,
model_provider: model_provider.map(str::to_string),
};
let payload = serde_json::to_value(SessionMetaLine {
meta,
git: git_info,
})?;
let lines = [
json!( {
"timestamp": meta_rfc3339,
"type": "session_meta",
"payload": payload
})
.to_string(),
json!( {
"timestamp": meta_rfc3339,
"type":"response_item",
"payload": {
"type":"message",
"role":"user",
"content":[{"type":"input_text","text": preview}]
}
})
.to_string(),
json!( {
"timestamp": meta_rfc3339,
"type":"event_msg",
"payload": {
"type":"user_message",
"message": preview,
"text_elements": text_elements,
"local_images": []
}
})
.to_string(),
];
fs::write(file_path, lines.join("\n") + "\n")?;
Ok(uuid_str)
}

View File

@@ -114,6 +114,7 @@ async fn test_codex_jsonrpc_conversation_flow() -> Result<()> {
conversation_id,
items: vec![codex_app_server_protocol::InputItem::Text {
text: "text".to_string(),
text_elements: Vec::new(),
}],
})
.await?;
@@ -241,6 +242,7 @@ async fn test_send_user_turn_changes_approval_policy_behavior() -> Result<()> {
conversation_id,
items: vec![codex_app_server_protocol::InputItem::Text {
text: "run python".to_string(),
text_elements: Vec::new(),
}],
})
.await?;
@@ -296,6 +298,7 @@ async fn test_send_user_turn_changes_approval_policy_behavior() -> Result<()> {
conversation_id,
items: vec![codex_app_server_protocol::InputItem::Text {
text: "run python again".to_string(),
text_elements: Vec::new(),
}],
cwd: working_directory.clone(),
approval_policy: AskForApproval::Never,
@@ -405,6 +408,7 @@ async fn test_send_user_turn_updates_sandbox_and_cwd_between_turns() -> Result<(
conversation_id,
items: vec![InputItem::Text {
text: "first turn".to_string(),
text_elements: Vec::new(),
}],
cwd: first_cwd.clone(),
approval_policy: AskForApproval::Never,
@@ -437,6 +441,7 @@ async fn test_send_user_turn_updates_sandbox_and_cwd_between_turns() -> Result<(
conversation_id,
items: vec![InputItem::Text {
text: "second turn".to_string(),
text_elements: Vec::new(),
}],
cwd: second_cwd.clone(),
approval_policy: AskForApproval::Never,

View File

@@ -77,6 +77,7 @@ async fn test_conversation_create_and_send_message_ok() -> Result<()> {
conversation_id,
items: vec![InputItem::Text {
text: "Hello".to_string(),
text_elements: Vec::new(),
}],
})
.await?;

View File

@@ -105,6 +105,7 @@ async fn shell_command_interruption() -> anyhow::Result<()> {
conversation_id,
items: vec![codex_app_server_protocol::InputItem::Text {
text: "run first sleep command".to_string(),
text_elements: Vec::new(),
}],
})
.await?;

View File

@@ -80,6 +80,7 @@ async fn send_user_turn_accepts_output_schema_v1() -> Result<()> {
conversation_id,
items: vec![InputItem::Text {
text: "Hello".to_string(),
text_elements: Vec::new(),
}],
cwd: codex_home.path().to_path_buf(),
approval_policy: AskForApproval::Never,
@@ -181,6 +182,7 @@ async fn send_user_turn_output_schema_is_per_turn_v1() -> Result<()> {
conversation_id,
items: vec![InputItem::Text {
text: "Hello".to_string(),
text_elements: Vec::new(),
}],
cwd: codex_home.path().to_path_buf(),
approval_policy: AskForApproval::Never,
@@ -228,6 +230,7 @@ async fn send_user_turn_output_schema_is_per_turn_v1() -> Result<()> {
conversation_id,
items: vec![InputItem::Text {
text: "Hello again".to_string(),
text_elements: Vec::new(),
}],
cwd: codex_home.path().to_path_buf(),
approval_policy: AskForApproval::Never,

View File

@@ -101,6 +101,7 @@ async fn send_message(
conversation_id,
items: vec![InputItem::Text {
text: message.to_string(),
text_elements: Vec::new(),
}],
})
.await?;
@@ -194,6 +195,7 @@ async fn test_send_message_raw_notifications_opt_in() -> Result<()> {
conversation_id,
items: vec![InputItem::Text {
text: "Hello".to_string(),
text_elements: Vec::new(),
}],
})
.await?;
@@ -245,6 +247,7 @@ async fn test_send_message_session_not_found() -> Result<()> {
conversation_id: unknown,
items: vec![InputItem::Text {
text: "ping".to_string(),
text_elements: Vec::new(),
}],
})
.await?;
@@ -425,7 +428,7 @@ fn content_texts(content: &[ContentItem]) -> Vec<&str> {
content
.iter()
.filter_map(|item| match item {
ContentItem::InputText { text } | ContentItem::OutputText { text } => {
ContentItem::InputText { text, .. } | ContentItem::OutputText { text } => {
Some(text.as_str())
}
_ => None,

View File

@@ -61,6 +61,7 @@ async fn turn_start_accepts_output_schema_v2() -> Result<()> {
thread_id: thread.id.clone(),
input: vec![V2UserInput::Text {
text: "Hello".to_string(),
text_elements: Vec::new(),
}],
output_schema: Some(output_schema.clone()),
..Default::default()
@@ -142,6 +143,7 @@ async fn turn_start_output_schema_is_per_turn_v2() -> Result<()> {
thread_id: thread.id.clone(),
input: vec![V2UserInput::Text {
text: "Hello".to_string(),
text_elements: Vec::new(),
}],
output_schema: Some(output_schema.clone()),
..Default::default()
@@ -183,6 +185,7 @@ async fn turn_start_output_schema_is_per_turn_v2() -> Result<()> {
thread_id: thread.id.clone(),
input: vec![V2UserInput::Text {
text: "Hello again".to_string(),
text_elements: Vec::new(),
}],
output_schema: None,
..Default::default()

View File

@@ -95,7 +95,8 @@ async fn thread_fork_creates_new_thread_and_emits_started() -> Result<()> {
assert_eq!(
content,
&vec![UserInput::Text {
text: preview.to_string()
text: preview.to_string(),
text_elements: Vec::new(),
}]
);
}

View File

@@ -1,6 +1,6 @@
use anyhow::Result;
use app_test_support::McpProcess;
use app_test_support::create_fake_rollout;
use app_test_support::create_fake_rollout_with_text_elements;
use app_test_support::create_mock_responses_server_repeating_assistant;
use app_test_support::to_response;
use codex_app_server_protocol::JSONRPCResponse;
@@ -15,6 +15,9 @@ use codex_app_server_protocol::TurnStatus;
use codex_app_server_protocol::UserInput;
use codex_protocol::models::ContentItem;
use codex_protocol::models::ResponseItem;
use codex_protocol::user_input::ByteRange;
use codex_protocol::user_input::TextElement;
use pretty_assertions::assert_eq;
use std::path::PathBuf;
use tempfile::TempDir;
use tokio::time::timeout;
@@ -71,11 +74,19 @@ async fn thread_resume_returns_rollout_history() -> Result<()> {
create_config_toml(codex_home.path(), &server.uri())?;
let preview = "Saved user message";
let conversation_id = create_fake_rollout(
let text_elements = vec![TextElement {
byte_range: ByteRange { start: 0, end: 5 },
placeholder: Some("<note>".into()),
}];
let conversation_id = create_fake_rollout_with_text_elements(
codex_home.path(),
"2025-01-05T12-00-00",
"2025-01-05T12:00:00Z",
preview,
text_elements
.iter()
.map(|elem| serde_json::to_value(elem).expect("serialize text element"))
.collect(),
Some("mock_provider"),
None,
)?;
@@ -118,7 +129,8 @@ async fn thread_resume_returns_rollout_history() -> Result<()> {
assert_eq!(
content,
&vec![UserInput::Text {
text: preview.to_string()
text: preview.to_string(),
text_elements: text_elements.clone().into_iter().map(Into::into).collect(),
}]
);
}

View File

@@ -57,6 +57,7 @@ async fn thread_rollback_drops_last_turns_and_persists_to_rollout() -> Result<()
thread_id: thread.id.clone(),
input: vec![V2UserInput::Text {
text: first_text.to_string(),
text_elements: Vec::new(),
}],
..Default::default()
})
@@ -77,6 +78,7 @@ async fn thread_rollback_drops_last_turns_and_persists_to_rollout() -> Result<()
thread_id: thread.id.clone(),
input: vec![V2UserInput::Text {
text: "Second".to_string(),
text_elements: Vec::new(),
}],
..Default::default()
})
@@ -115,7 +117,8 @@ async fn thread_rollback_drops_last_turns_and_persists_to_rollout() -> Result<()
assert_eq!(
content,
&vec![V2UserInput::Text {
text: first_text.to_string()
text: first_text.to_string(),
text_elements: Vec::new(),
}]
);
}
@@ -143,7 +146,8 @@ async fn thread_rollback_drops_last_turns_and_persists_to_rollout() -> Result<()
assert_eq!(
content,
&vec![V2UserInput::Text {
text: first_text.to_string()
text: first_text.to_string(),
text_elements: Vec::new(),
}]
);
}

View File

@@ -73,6 +73,7 @@ async fn turn_interrupt_aborts_running_turn() -> Result<()> {
thread_id: thread.id.clone(),
input: vec![V2UserInput::Text {
text: "run sleep".to_string(),
text_elements: Vec::new(),
}],
cwd: Some(working_directory.clone()),
..Default::default()

View File

@@ -8,6 +8,7 @@ use app_test_support::create_mock_responses_server_sequence_unchecked;
use app_test_support::create_shell_command_sse_response;
use app_test_support::format_with_current_shell_display;
use app_test_support::to_response;
use codex_app_server_protocol::ByteRange;
use codex_app_server_protocol::ClientInfo;
use codex_app_server_protocol::CommandExecutionApprovalDecision;
use codex_app_server_protocol::CommandExecutionRequestApprovalResponse;
@@ -23,6 +24,7 @@ use codex_app_server_protocol::PatchApplyStatus;
use codex_app_server_protocol::PatchChangeKind;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::ServerRequest;
use codex_app_server_protocol::TextElement;
use codex_app_server_protocol::ThreadItem;
use codex_app_server_protocol::ThreadStartParams;
use codex_app_server_protocol::ThreadStartResponse;
@@ -80,6 +82,7 @@ async fn turn_start_sends_originator_header() -> Result<()> {
thread_id: thread.id.clone(),
input: vec![V2UserInput::Text {
text: "Hello".to_string(),
text_elements: Vec::new(),
}],
..Default::default()
})
@@ -112,6 +115,87 @@ async fn turn_start_sends_originator_header() -> Result<()> {
Ok(())
}
#[tokio::test]
async fn turn_start_emits_user_message_item_with_text_elements() -> Result<()> {
let responses = vec![create_final_assistant_message_sse_response("Done")?];
let server = create_mock_responses_server_sequence_unchecked(responses).await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri(), "never")?;
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
let thread_req = mcp
.send_thread_start_request(ThreadStartParams {
model: Some("mock-model".to_string()),
..Default::default()
})
.await?;
let thread_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(thread_req)),
)
.await??;
let ThreadStartResponse { thread, .. } = to_response::<ThreadStartResponse>(thread_resp)?;
let text_elements = vec![TextElement {
byte_range: ByteRange { start: 0, end: 5 },
placeholder: Some("<note>".to_string()),
}];
let turn_req = mcp
.send_turn_start_request(TurnStartParams {
thread_id: thread.id.clone(),
input: vec![V2UserInput::Text {
text: "Hello".to_string(),
text_elements: text_elements.clone(),
}],
..Default::default()
})
.await?;
timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(turn_req)),
)
.await??;
let user_message_item = timeout(DEFAULT_READ_TIMEOUT, async {
loop {
let notification = mcp
.read_stream_until_notification_message("item/started")
.await?;
let params = notification.params.expect("item/started params");
let item_started: ItemStartedNotification =
serde_json::from_value(params).expect("deserialize item/started notification");
if let ThreadItem::UserMessage { .. } = item_started.item {
return Ok::<ThreadItem, anyhow::Error>(item_started.item);
}
}
})
.await??;
match user_message_item {
ThreadItem::UserMessage { content, .. } => {
assert_eq!(
content,
vec![V2UserInput::Text {
text: "Hello".to_string(),
text_elements,
}]
);
}
other => panic!("expected user message item, got {other:?}"),
}
timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("turn/completed"),
)
.await??;
Ok(())
}
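For orientation, the `byte_range` in these tests addresses bytes of the accompanying input text: `0..5` over `"Hello"` covers the whole word that the `<note>` placeholder stands in for. A sketch with simplified stand-ins for the protocol's `ByteRange`/`TextElement` (the real types live in `codex_app_server_protocol` and derive serde traits):

```rust
// Simplified stand-ins for the protocol's byte-range element types.
struct ByteRange {
    start: usize,
    end: usize,
}

struct TextElement {
    byte_range: ByteRange,
    placeholder: Option<String>,
}

// Byte ranges index into the UTF-8 text of the user input item.
fn element_slice<'a>(text: &'a str, element: &TextElement) -> &'a str {
    &text[element.byte_range.start..element.byte_range.end]
}
```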
#[tokio::test]
async fn turn_start_emits_notifications_and_accepts_model_override() -> Result<()> {
// Provide a mock server and config so model wiring is valid.
@@ -149,6 +233,7 @@ async fn turn_start_emits_notifications_and_accepts_model_override() -> Result<(
thread_id: thread.id.clone(),
input: vec![V2UserInput::Text {
text: "Hello".to_string(),
text_elements: Vec::new(),
}],
..Default::default()
})
@@ -181,6 +266,7 @@ async fn turn_start_emits_notifications_and_accepts_model_override() -> Result<(
thread_id: thread.id.clone(),
input: vec![V2UserInput::Text {
text: "Second".to_string(),
text_elements: Vec::new(),
}],
model: Some("mock-model-override".to_string()),
..Default::default()
@@ -331,6 +417,7 @@ async fn turn_start_exec_approval_toggle_v2() -> Result<()> {
thread_id: thread.id.clone(),
input: vec![V2UserInput::Text {
text: "run python".to_string(),
text_elements: Vec::new(),
}],
..Default::default()
})
@@ -376,6 +463,7 @@ async fn turn_start_exec_approval_toggle_v2() -> Result<()> {
thread_id: thread.id.clone(),
input: vec![V2UserInput::Text {
text: "run python again".to_string(),
text_elements: Vec::new(),
}],
approval_policy: Some(codex_app_server_protocol::AskForApproval::Never),
sandbox_policy: Some(codex_app_server_protocol::SandboxPolicy::DangerFullAccess),
@@ -452,6 +540,7 @@ async fn turn_start_exec_approval_decline_v2() -> Result<()> {
thread_id: thread.id.clone(),
input: vec![V2UserInput::Text {
text: "run python".to_string(),
text_elements: Vec::new(),
}],
cwd: Some(workspace.clone()),
..Default::default()
@@ -600,6 +689,7 @@ async fn turn_start_updates_sandbox_and_cwd_between_turns_v2() -> Result<()> {
thread_id: thread.id.clone(),
input: vec![V2UserInput::Text {
text: "first turn".to_string(),
text_elements: Vec::new(),
}],
cwd: Some(first_cwd.clone()),
approval_policy: Some(codex_app_server_protocol::AskForApproval::Never),
@@ -633,6 +723,7 @@ async fn turn_start_updates_sandbox_and_cwd_between_turns_v2() -> Result<()> {
thread_id: thread.id.clone(),
input: vec![V2UserInput::Text {
text: "second turn".to_string(),
text_elements: Vec::new(),
}],
cwd: Some(second_cwd.clone()),
approval_policy: Some(codex_app_server_protocol::AskForApproval::Never),
@@ -733,6 +824,7 @@ async fn turn_start_file_change_approval_v2() -> Result<()> {
thread_id: thread.id.clone(),
input: vec![V2UserInput::Text {
text: "apply patch".into(),
text_elements: Vec::new(),
}],
cwd: Some(workspace.clone()),
..Default::default()
@@ -910,6 +1002,7 @@ async fn turn_start_file_change_approval_accept_for_session_persists_v2() -> Res
thread_id: thread.id.clone(),
input: vec![V2UserInput::Text {
text: "apply patch 1".into(),
text_elements: Vec::new(),
}],
cwd: Some(workspace.clone()),
..Default::default()
@@ -986,6 +1079,7 @@ async fn turn_start_file_change_approval_accept_for_session_persists_v2() -> Res
thread_id: thread.id.clone(),
input: vec![V2UserInput::Text {
text: "apply patch 2".into(),
text_elements: Vec::new(),
}],
cwd: Some(workspace.clone()),
..Default::default()
@@ -1083,6 +1177,7 @@ async fn turn_start_file_change_approval_decline_v2() -> Result<()> {
thread_id: thread.id.clone(),
input: vec![V2UserInput::Text {
text: "apply patch".into(),
text_elements: Vec::new(),
}],
cwd: Some(workspace.clone()),
..Default::default()
@@ -1230,6 +1325,7 @@ unified_exec = true
thread_id: thread.id.clone(),
input: vec![V2UserInput::Text {
text: "run a command".to_string(),
text_elements: Vec::new(),
}],
..Default::default()
})

View File

@@ -241,6 +241,7 @@ async fn run_add(config_overrides: &CliConfigOverrides, add_args: AddArgs) -> Re
let new_entry = McpServerConfig {
transport: transport.clone(),
enabled: true,
disabled_reason: None,
startup_timeout_sec: None,
tool_timeout_sec: None,
enabled_tools: None,
@@ -448,6 +449,7 @@ async fn run_list(config_overrides: &CliConfigOverrides, list_args: ListArgs) ->
serde_json::json!({
"name": name,
"enabled": cfg.enabled,
"disabled_reason": cfg.disabled_reason.as_ref().map(ToString::to_string),
"transport": transport,
"startup_timeout_sec": cfg
.startup_timeout_sec
@@ -492,11 +494,7 @@ async fn run_list(config_overrides: &CliConfigOverrides, list_args: ListArgs) ->
.map(|path| path.display().to_string())
.filter(|value| !value.is_empty())
.unwrap_or_else(|| "-".to_string());
let status = if cfg.enabled {
"enabled".to_string()
} else {
"disabled".to_string()
};
let status = format_mcp_status(cfg);
let auth_status = auth_statuses
.get(name.as_str())
.map(|entry| entry.auth_status)
@@ -517,11 +515,7 @@ async fn run_list(config_overrides: &CliConfigOverrides, list_args: ListArgs) ->
bearer_token_env_var,
..
} => {
let status = if cfg.enabled {
"enabled".to_string()
} else {
"disabled".to_string()
};
let status = format_mcp_status(cfg);
let auth_status = auth_statuses
.get(name.as_str())
.map(|entry| entry.auth_status)
@@ -691,6 +685,7 @@ async fn run_get(config_overrides: &CliConfigOverrides, get_args: GetArgs) -> Re
let output = serde_json::to_string_pretty(&serde_json::json!({
"name": get_args.name,
"enabled": server.enabled,
"disabled_reason": server.disabled_reason.as_ref().map(ToString::to_string),
"transport": transport,
"enabled_tools": server.enabled_tools.clone(),
"disabled_tools": server.disabled_tools.clone(),
@@ -706,7 +701,11 @@ async fn run_get(config_overrides: &CliConfigOverrides, get_args: GetArgs) -> Re
}
if !server.enabled {
println!("{} (disabled)", get_args.name);
if let Some(reason) = server.disabled_reason.as_ref() {
println!("{name} (disabled: {reason})", name = get_args.name);
} else {
println!("{name} (disabled)", name = get_args.name);
}
return Ok(());
}
@@ -828,3 +827,13 @@ fn validate_server_name(name: &str) -> Result<()> {
bail!("invalid server name '{name}' (use letters, numbers, '-', '_')");
}
}
fn format_mcp_status(config: &McpServerConfig) -> String {
if config.enabled {
"enabled".to_string()
} else if let Some(reason) = config.disabled_reason.as_ref() {
format!("disabled: {reason}")
} else {
"disabled".to_string()
}
}
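The new `format_mcp_status` helper folds `disabled_reason` into the status column shown by `run_list` and `run_get`. A standalone sketch of the same logic, with a minimal stand-in for `McpServerConfig` (the real struct also carries transport, timeouts, and tool filters):

```rust
// Minimal stand-in for McpServerConfig, keeping only the two fields the
// status formatting reads.
struct McpServerConfig {
    enabled: bool,
    disabled_reason: Option<String>,
}

// Mirrors the helper added in the diff: enabled wins, otherwise the
// disable reason is appended when one is recorded.
fn format_mcp_status(config: &McpServerConfig) -> String {
    if config.enabled {
        "enabled".to_string()
    } else if let Some(reason) = config.disabled_reason.as_ref() {
        format!("disabled: {reason}")
    } else {
        "disabled".to_string()
    }
}
```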

View File

@@ -89,6 +89,7 @@ async fn list_and_get_render_expected_output() -> Result<()> {
{
"name": "docs",
"enabled": true,
"disabled_reason": null,
"transport": {
"type": "stdio",
"command": "docs-server",

View File

@@ -320,8 +320,7 @@ pub async fn process_sse(
}
};
let raw = sse.data.clone();
trace!("SSE event: {raw}");
trace!("SSE event: {}", &sse.data);
let event: ResponsesStreamEvent = match serde_json::from_str(&sse.data) {
Ok(event) => event,

View File

@@ -75,6 +75,9 @@
"apply_patch_freeform": {
"type": "boolean"
},
"child_agents_md": {
"type": "boolean"
},
"collab": {
"type": "boolean"
},
@@ -99,9 +102,6 @@
"experimental_windows_sandbox": {
"type": "boolean"
},
"child_agents_md": {
"type": "boolean"
},
"include_apply_patch_tool": {
"type": "boolean"
},
@@ -373,6 +373,15 @@
"description": "When set to `true`, `AgentReasoningRawContentEvent` events will be shown in the UI/output. Defaults to `false`.",
"type": "boolean"
},
"skills": {
"description": "Additional skill sources to load from local paths or URLs.",
"default": null,
"allOf": [
{
"$ref": "#/definitions/SkillsConfigToml"
}
]
},
"tool_output_token_limit": {
"description": "Token budget applied when storing tool/function outputs in the context manager.",
"type": "integer",
@@ -543,6 +552,9 @@
"apply_patch_freeform": {
"type": "boolean"
},
"child_agents_md": {
"type": "boolean"
},
"collab": {
"type": "boolean"
},
@@ -567,9 +579,6 @@
"experimental_windows_sandbox": {
"type": "boolean"
},
"child_agents_md": {
"type": "boolean"
},
"include_apply_patch_tool": {
"type": "boolean"
},
@@ -1288,6 +1297,55 @@
},
"additionalProperties": false
},
"SkillSourcePathToml": {
"type": "object",
"required": [
"path"
],
"properties": {
"path": {
"$ref": "#/definitions/AbsolutePathBuf"
}
},
"additionalProperties": false
},
"SkillSourceToml": {
"anyOf": [
{
"$ref": "#/definitions/SkillSourcePathToml"
},
{
"$ref": "#/definitions/SkillSourceUrlToml"
}
]
},
"SkillSourceUrlToml": {
"type": "object",
"required": [
"url"
],
"properties": {
"url": {
"type": "string"
}
},
"additionalProperties": false
},
"SkillsConfigToml": {
"description": "Configuration for additional skill sources.",
"type": "object",
"properties": {
"sources": {
"description": "Additional skill sources to load. Each entry can be a local path or a URL.",
"default": [],
"type": "array",
"items": {
"$ref": "#/definitions/SkillSourceToml"
}
}
},
"additionalProperties": false
},
"ToolsToml": {
"type": "object",
"properties": {

File diff suppressed because one or more lines are too long

View File

@@ -56,7 +56,11 @@ impl AgentControl {
.send_op(
agent_id,
Op::UserInput {
items: vec![UserInput::Text { text: prompt }],
items: vec![UserInput::Text {
text: prompt,
// Plain text conversion has no UI element ranges.
text_elements: Vec::new(),
}],
final_output_json_schema: None,
},
)
@@ -67,6 +71,12 @@ impl AgentControl {
result
}
/// Interrupt the current task for an existing agent thread.
pub(crate) async fn interrupt_agent(&self, agent_id: ThreadId) -> CodexResult<String> {
let state = self.upgrade()?;
state.send_op(agent_id, Op::Interrupt).await
}
/// Submit a shutdown request to an existing agent thread.
pub(crate) async fn shutdown_agent(&self, agent_id: ThreadId) -> CodexResult<String> {
let state = self.upgrade()?;
@@ -321,6 +331,7 @@ mod tests {
Op::UserInput {
items: vec![UserInput::Text {
text: "hello from tests".to_string(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
},
@@ -351,6 +362,7 @@ mod tests {
Op::UserInput {
items: vec![UserInput::Text {
text: "spawned".to_string(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
},

View File

@@ -1,6 +1,8 @@
pub(crate) mod control;
pub(crate) mod role;
pub(crate) mod status;
pub(crate) use codex_protocol::protocol::AgentStatus;
pub(crate) use control::AgentControl;
pub(crate) use role::AgentRole;
pub(crate) use status::agent_status_from_event;

View File

@@ -0,0 +1,85 @@
use crate::config::Config;
use crate::protocol::SandboxPolicy;
use serde::Deserialize;
use serde::Serialize;
/// Base instructions for the orchestrator role.
const ORCHESTRATOR_PROMPT: &str = include_str!("../../templates/agents/orchestrator.md");
/// Base instructions for the worker role.
const WORKER_PROMPT: &str = include_str!("../../gpt-5.2-codex_prompt.md");
/// Default worker model override used by the worker role.
const WORKER_MODEL: &str = "gpt-5.2-codex";
/// Enumerated list of all supported agent roles.
const ALL_ROLES: [AgentRole; 3] = [
AgentRole::Default,
AgentRole::Orchestrator,
AgentRole::Worker,
];
/// Hard-coded agent role selection used when spawning sub-agents.
#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize)]
#[serde(rename_all = "snake_case")]
pub enum AgentRole {
/// Inherit the parent agent's configuration unchanged.
Default,
/// Coordination-only agent that delegates to workers.
Orchestrator,
/// Task-executing agent with a fixed model override.
Worker,
}
/// Immutable profile data that drives per-agent configuration overrides.
#[derive(Debug, Clone, Copy, PartialEq, Eq, Default)]
pub struct AgentProfile {
/// Optional base instructions override.
pub base_instructions: Option<&'static str>,
/// Optional model override.
pub model: Option<&'static str>,
/// Whether to force a read-only sandbox policy.
pub read_only: bool,
}
impl AgentRole {
/// Returns the string values used by JSON schema enums.
pub fn enum_values() -> Vec<String> {
ALL_ROLES
.iter()
.filter_map(|role| serde_json::to_string(role).ok())
.collect()
}
/// Returns the hard-coded profile for this role.
pub fn profile(self) -> AgentProfile {
match self {
AgentRole::Default => AgentProfile::default(),
AgentRole::Orchestrator => AgentProfile {
base_instructions: Some(ORCHESTRATOR_PROMPT),
..Default::default()
},
AgentRole::Worker => AgentProfile {
base_instructions: Some(WORKER_PROMPT),
model: Some(WORKER_MODEL),
..Default::default()
},
}
}
/// Applies this role's profile onto the provided config.
pub fn apply_to_config(self, config: &mut Config) -> Result<(), String> {
let profile = self.profile();
if let Some(base_instructions) = profile.base_instructions {
config.base_instructions = Some(base_instructions.to_string());
}
if let Some(model) = profile.model {
config.model = Some(model.to_string());
}
if profile.read_only {
config
.sandbox_policy
.set(SandboxPolicy::new_read_only_policy())
.map_err(|err| format!("sandbox_policy is invalid: {err}"))?;
}
Ok(())
}
}
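The role/profile mapping added in this file is easy to exercise standalone. The following is a minimal sketch, not the real code: `MiniConfig` and `Role` are stand-ins for the actual `Config` and `AgentRole` (which carry many more fields and a third `Orchestrator` variant), modeling only the two overrides the `Worker` role applies.

```rust
// Minimal sketch of the role -> profile override pattern above.
// `MiniConfig` models only the two fields the Worker role touches.
#[derive(Debug, Default, PartialEq)]
struct MiniConfig {
    base_instructions: Option<String>,
    model: Option<String>,
}

#[derive(Clone, Copy)]
enum Role {
    Default,
    Worker,
}

impl Role {
    fn apply(self, config: &mut MiniConfig) {
        match self {
            // Default inherits the parent configuration unchanged.
            Role::Default => {}
            // Worker pins both the base instructions and the model.
            Role::Worker => {
                config.base_instructions = Some("worker prompt".to_string());
                config.model = Some("gpt-5.2-codex".to_string());
            }
        }
    }
}

fn main() {
    let mut config = MiniConfig::default();
    Role::Worker.apply(&mut config);
    assert_eq!(config.model.as_deref(), Some("gpt-5.2-codex"));

    let mut inherited = MiniConfig::default();
    Role::Default.apply(&mut inherited);
    assert_eq!(inherited, MiniConfig::default());
}
```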

View File

@@ -31,6 +31,7 @@ use codex_otel::OtelManager;
use codex_protocol::ThreadId;
use codex_protocol::config_types::ReasoningSummary as ReasoningSummaryConfig;
use codex_protocol::config_types::WebSearchMode;
use codex_protocol::models::ResponseItem;
use codex_protocol::openai_models::ModelInfo;
use codex_protocol::openai_models::ReasoningEffort as ReasoningEffortConfig;
@@ -64,6 +65,8 @@ use crate::model_provider_info::WireApi;
use crate::tools::spec::create_tools_json_for_chat_completions_api;
use crate::tools::spec::create_tools_json_for_responses_api;
pub const WEB_SEARCH_ELIGIBLE_HEADER: &str = "x-oai-web-search-eligible";
#[derive(Debug)]
struct ModelClientState {
config: Arc<Config>,
@@ -319,7 +322,7 @@ impl ModelClientSession {
store_override: None,
conversation_id: Some(conversation_id),
session_source: Some(self.state.session_source.clone()),
extra_headers: beta_feature_headers(&self.state.config),
extra_headers: build_responses_headers(&self.state.config),
compression,
}
}
@@ -635,6 +638,21 @@ fn beta_feature_headers(config: &Config) -> ApiHeaderMap {
headers
}
fn build_responses_headers(config: &Config) -> ApiHeaderMap {
let mut headers = beta_feature_headers(config);
headers.insert(
WEB_SEARCH_ELIGIBLE_HEADER,
HeaderValue::from_static(
if matches!(config.web_search_mode, Some(WebSearchMode::Disabled)) {
"false"
} else {
"true"
},
),
);
headers
}
fn map_response_stream<S>(api_stream: S, otel_manager: OtelManager) -> ResponseStream
where
S: futures::Stream<Item = std::result::Result<ResponseEvent, ApiError>>
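The eligibility header introduced above maps the optional web-search mode to a static string: only an explicit `Disabled` opts out, while an unset mode (`None`) still advertises eligibility. A self-contained sketch of that mapping (with a local copy of the enum, since the real one lives in `codex_protocol`):

```rust
// Sketch of the value logic in `build_responses_headers`: the header is
// "false" only for an explicit Disabled; None and every other mode are "true".
#[derive(Debug, Clone, Copy, PartialEq)]
enum WebSearchMode {
    Disabled,
    Cached,
    Live,
}

fn web_search_eligible_value(mode: Option<WebSearchMode>) -> &'static str {
    if matches!(mode, Some(WebSearchMode::Disabled)) {
        "false"
    } else {
        "true"
    }
}

fn main() {
    assert_eq!(web_search_eligible_value(None), "true");
    assert_eq!(web_search_eligible_value(Some(WebSearchMode::Live)), "true");
    assert_eq!(web_search_eligible_value(Some(WebSearchMode::Cached)), "true");
    assert_eq!(web_search_eligible_value(Some(WebSearchMode::Disabled)), "false");
}
```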

View File

@@ -36,6 +36,7 @@ use codex_protocol::ThreadId;
use codex_protocol::approvals::ExecPolicyAmendment;
use codex_protocol::config_types::WebSearchMode;
use codex_protocol::items::TurnItem;
use codex_protocol::items::UserMessageItem;
use codex_protocol::openai_models::ModelInfo;
use codex_protocol::protocol::FileChange;
use codex_protocol::protocol::HasLegacyEvent;
@@ -120,6 +121,7 @@ use crate::protocol::ReviewDecision;
use crate::protocol::SandboxPolicy;
use crate::protocol::SessionConfiguredEvent;
use crate::protocol::SkillErrorInfo;
use crate::protocol::SkillInterface as ProtocolSkillInterface;
use crate::protocol::SkillMetadata as ProtocolSkillMetadata;
use crate::protocol::StreamErrorEvent;
use crate::protocol::Submission;
@@ -541,24 +543,12 @@ impl Session {
web_search_mode: per_turn_config.web_search_mode,
});
let base_instructions = if per_turn_config.features.enabled(Feature::Collab) {
const COLLAB_INSTRUCTIONS: &str =
include_str!("../templates/collab/experimental_prompt.md");
let base = session_configuration
.base_instructions
.as_deref()
.unwrap_or(model_info.base_instructions.as_str());
Some(format!("{base}\n\n{COLLAB_INSTRUCTIONS}"))
} else {
session_configuration.base_instructions.clone()
};
TurnContext {
sub_id,
client,
cwd: session_configuration.cwd.clone(),
developer_instructions: session_configuration.developer_instructions.clone(),
base_instructions,
base_instructions: session_configuration.base_instructions.clone(),
compact_prompt: session_configuration.compact_prompt.clone(),
user_instructions: session_configuration.user_instructions.clone(),
approval_policy: session_configuration.approval_policy.value(),
@@ -1536,6 +1526,22 @@ impl Session {
}
}
pub(crate) async fn record_user_prompt_and_emit_turn_item(
&self,
turn_context: &TurnContext,
input: &[UserInput],
response_item: ResponseItem,
) {
// Persist the user message to history, but emit the turn item from `UserInput` so
// UI-only `text_elements` are preserved. `ResponseItem::Message` does not carry
// those spans, and `record_response_item_and_emit_turn_item` would drop them.
self.record_conversation_items(turn_context, std::slice::from_ref(&response_item))
.await;
let turn_item = TurnItem::UserMessage(UserMessageItem::new(input));
self.emit_turn_item_started(turn_context, &turn_item).await;
self.emit_turn_item_completed(turn_context, turn_item).await;
}
pub(crate) async fn notify_background_event(
&self,
turn_context: &TurnContext,
@@ -2076,10 +2082,13 @@ mod handlers {
codex_protocol::approvals::ElicitationAction::Decline => ElicitationAction::Decline,
codex_protocol::approvals::ElicitationAction::Cancel => ElicitationAction::Cancel,
};
let response = ElicitationResponse {
action,
content: None,
// When accepting, send an empty object as content to satisfy MCP servers
// that expect non-null content on Accept. For Decline/Cancel, content is None.
let content = match action {
ElicitationAction::Accept => Some(serde_json::json!({})),
ElicitationAction::Decline | ElicitationAction::Cancel => None,
};
let response = ElicitationResponse { action, content };
if let Err(err) = sess
.resolve_elicitation(server_name, request_id, response)
.await
@@ -2260,6 +2269,7 @@ mod handlers {
Arc::clone(&turn_context),
vec![UserInput::Text {
text: turn_context.compact_prompt().to_string(),
text_elements: Vec::new(),
}],
CompactTask,
)
@@ -2415,7 +2425,7 @@ async fn spawn_review_thread(
let tools_config = ToolsConfig::new(&ToolsConfigParams {
model_info: &review_model_info,
features: &review_features,
web_search_mode: review_web_search_mode,
web_search_mode: Some(review_web_search_mode),
});
let base_instructions = REVIEW_PROMPT.to_string();
@@ -2428,7 +2438,7 @@ async fn spawn_review_thread(
let mut per_turn_config = (*config).clone();
per_turn_config.model = Some(model.clone());
per_turn_config.features = review_features.clone();
per_turn_config.web_search_mode = review_web_search_mode;
per_turn_config.web_search_mode = Some(review_web_search_mode);
let otel_manager = parent_turn_context
.client
@@ -2470,6 +2480,7 @@ async fn spawn_review_thread(
// Seed the child task with the review prompt as the initial user message.
let input: Vec<UserInput> = vec![UserInput::Text {
text: review_prompt,
text_elements: Vec::new(),
}];
let tc = Arc::new(review_turn_context);
sess.spawn_task(tc.clone(), input, ReviewTask::new()).await;
@@ -2490,6 +2501,17 @@ fn skills_to_info(skills: &[SkillMetadata]) -> Vec<ProtocolSkillMetadata> {
name: skill.name.clone(),
description: skill.description.clone(),
short_description: skill.short_description.clone(),
interface: skill
.interface
.clone()
.map(|interface| ProtocolSkillInterface {
display_name: interface.display_name,
short_description: interface.short_description,
icon_small: interface.icon_small,
icon_large: interface.icon_large,
brand_color: interface.brand_color,
default_prompt: interface.default_prompt,
}),
path: skill.path.clone(),
scope: skill.scope,
})
@@ -2506,17 +2528,17 @@ fn errors_to_info(errors: &[SkillError]) -> Vec<SkillErrorInfo> {
.collect()
}
/// Takes a user message as input and runs a loop where, at each turn, the model
/// Takes a user message as input and runs a loop where, at each sampling request, the model
/// replies with either:
///
/// - requested function calls
/// - an assistant message
///
/// While it is possible for the model to return multiple of these items in a
/// single turn, in practice, we generally one item per turn:
/// single sampling request, in practice, we generally see one item per sampling request:
///
/// - If the model requests a function call, we execute it and send the output
/// back to the model in the next turn.
/// back to the model in the next sampling request.
/// - If the model sends only an assistant message, we record it in the
/// conversation history and consider the turn complete.
///
@@ -2558,9 +2580,9 @@ pub(crate) async fn run_turn(
.await;
}
let initial_input_for_turn: ResponseInputItem = ResponseInputItem::from(input);
let initial_input_for_turn: ResponseInputItem = ResponseInputItem::from(input.clone());
let response_item: ResponseItem = initial_input_for_turn.clone().into();
sess.record_response_item_and_emit_turn_item(turn_context.as_ref(), response_item)
sess.record_user_prompt_and_emit_turn_item(turn_context.as_ref(), &input, response_item)
.await;
if !skill_items.is_empty() {
@@ -2589,13 +2611,13 @@ pub(crate) async fn run_turn(
.collect::<Vec<ResponseItem>>();
// Construct the input that we will send to the model.
let turn_input: Vec<ResponseItem> = {
let sampling_request_input: Vec<ResponseItem> = {
sess.record_conversation_items(&turn_context, &pending_input)
.await;
sess.clone_history().await.for_prompt()
};
let turn_input_messages = turn_input
let sampling_request_input_messages = sampling_request_input
.iter()
.filter_map(|item| match parse_turn_item(item) {
Some(TurnItem::UserMessage(user_message)) => Some(user_message),
@@ -2603,21 +2625,21 @@ pub(crate) async fn run_turn(
})
.map(|user_message| user_message.message())
.collect::<Vec<String>>();
match run_model_turn(
match run_sampling_request(
Arc::clone(&sess),
Arc::clone(&turn_context),
Arc::clone(&turn_diff_tracker),
&mut client_session,
turn_input,
sampling_request_input,
cancellation_token.child_token(),
)
.await
{
Ok(turn_output) => {
let TurnRunResult {
Ok(sampling_request_output) => {
let SamplingRequestResult {
needs_follow_up,
last_agent_message: turn_last_agent_message,
} = turn_output;
last_agent_message: sampling_request_last_agent_message,
} = sampling_request_output;
let total_usage_tokens = sess.get_total_token_usage().await;
let token_limit_reached = total_usage_tokens >= auto_compact_limit;
@@ -2628,13 +2650,13 @@ pub(crate) async fn run_turn(
}
if !needs_follow_up {
last_agent_message = turn_last_agent_message;
last_agent_message = sampling_request_last_agent_message;
sess.notifier()
.notify(&UserNotification::AgentTurnComplete {
thread_id: sess.conversation_id.to_string(),
turn_id: turn_context.sub_id.clone(),
cwd: turn_context.cwd.display().to_string(),
input_messages: turn_input_messages,
input_messages: sampling_request_input_messages,
last_assistant_message: last_agent_message.clone(),
});
break;
@@ -2690,14 +2712,14 @@ async fn run_auto_compact(sess: &Arc<Session>, turn_context: &Arc<TurnContext>)
cwd = %turn_context.cwd.display()
)
)]
async fn run_model_turn(
async fn run_sampling_request(
sess: Arc<Session>,
turn_context: Arc<TurnContext>,
turn_diff_tracker: SharedTurnDiffTracker,
client_session: &mut ModelClientSession,
input: Vec<ResponseItem>,
cancellation_token: CancellationToken,
) -> CodexResult<TurnRunResult> {
) -> CodexResult<SamplingRequestResult> {
let mcp_tools = sess
.services
.mcp_connection_manager
@@ -2731,7 +2753,7 @@ async fn run_model_turn(
let mut retries = 0;
loop {
let err = match try_run_turn(
let err = match try_run_sampling_request(
Arc::clone(&router),
Arc::clone(&sess),
Arc::clone(&turn_context),
@@ -2771,7 +2793,9 @@ async fn run_model_turn(
}
_ => backoff(retries),
};
warn!("stream disconnected - retrying turn ({retries}/{max_retries} in {delay:?})...",);
warn!(
"stream disconnected - retrying sampling request ({retries}/{max_retries} in {delay:?})...",
);
// Surface retry information to any UI/frontend so the
// user understands what is happening instead of staring
@@ -2791,7 +2815,7 @@ async fn run_model_turn(
}
#[derive(Debug)]
struct TurnRunResult {
struct SamplingRequestResult {
needs_follow_up: bool,
last_agent_message: Option<String>,
}
@@ -2823,7 +2847,7 @@ async fn drain_in_flight(
model = %turn_context.client.get_model()
)
)]
async fn try_run_turn(
async fn try_run_sampling_request(
router: Arc<ToolRouter>,
sess: Arc<Session>,
turn_context: Arc<TurnContext>,
@@ -2831,7 +2855,7 @@ async fn try_run_turn(
turn_diff_tracker: SharedTurnDiffTracker,
prompt: &Prompt,
cancellation_token: CancellationToken,
) -> CodexResult<TurnRunResult> {
) -> CodexResult<SamplingRequestResult> {
let rollout_item = RolloutItem::TurnContext(TurnContextItem {
cwd: turn_context.cwd.clone(),
approval_policy: turn_context.approval_policy,
@@ -2875,7 +2899,7 @@ async fn try_run_turn(
let mut active_item: Option<TurnItem> = None;
let mut should_emit_turn_diff = false;
let receiving_span = trace_span!("receiving_stream");
let outcome: CodexResult<TurnRunResult> = loop {
let outcome: CodexResult<SamplingRequestResult> = loop {
let handle_responses = trace_span!(
parent: &receiving_span,
"handle_responses",
@@ -2960,9 +2984,8 @@ async fn try_run_turn(
should_emit_turn_diff = true;
needs_follow_up |= sess.has_pending_input().await;
error!("needs_follow_up: {needs_follow_up}");
break Ok(TurnRunResult {
break Ok(SamplingRequestResult {
needs_follow_up,
last_agent_message,
});
@@ -3964,6 +3987,7 @@ mod tests {
let (sess, tc, rx) = make_session_and_context_with_rx().await;
let input = vec![UserInput::Text {
text: "hello".to_string(),
text_elements: Vec::new(),
}];
sess.spawn_task(
Arc::clone(&tc),
@@ -3993,6 +4017,7 @@ mod tests {
let (sess, tc, rx) = make_session_and_context_with_rx().await;
let input = vec![UserInput::Text {
text: "hello".to_string(),
text_elements: Vec::new(),
}];
sess.spawn_task(
Arc::clone(&tc),
@@ -4019,6 +4044,7 @@ mod tests {
let (sess, tc, rx) = make_session_and_context_with_rx().await;
let input = vec![UserInput::Text {
text: "start review".to_string(),
text_elements: Vec::new(),
}];
sess.spawn_task(Arc::clone(&tc), input, ReviewTask::new())
.await;
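The doc comment rewritten in this file describes the core loop: each sampling request yields either tool calls (which are executed, with their outputs fed into the next request) or a final assistant message that ends the turn. A toy sketch of that control flow, where `sample` is a hypothetical stand-in for the model client:

```rust
// Toy version of the sampling-request loop described above.
enum ModelOutput {
    FunctionCall(String),
    AssistantMessage(String),
}

// Hypothetical model stand-in: requests one tool call, then answers
// once it sees the tool output in the history.
fn sample(history: &[String]) -> ModelOutput {
    if history.iter().any(|item| item.starts_with("output:")) {
        ModelOutput::AssistantMessage("done".to_string())
    } else {
        ModelOutput::FunctionCall("ls".to_string())
    }
}

fn run_turn(mut history: Vec<String>) -> String {
    loop {
        match sample(&history) {
            ModelOutput::FunctionCall(cmd) => {
                // Execute the call and feed its output into the next request.
                history.push(format!("output: ran {cmd}"));
            }
            // An assistant message completes the turn.
            ModelOutput::AssistantMessage(msg) => return msg,
        }
    }
}

fn main() {
    assert_eq!(run_turn(vec!["user: hello".to_string()]), "done");
}
```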

View File

@@ -44,7 +44,11 @@ pub(crate) async fn run_inline_auto_compact_task(
turn_context: Arc<TurnContext>,
) {
let prompt = turn_context.compact_prompt().to_string();
let input = vec![UserInput::Text { text: prompt }];
let input = vec![UserInput::Text {
text: prompt,
// Plain text conversion has no UI element ranges.
text_elements: Vec::new(),
}];
run_compact_task_inner(sess, turn_context, input).await;
}

View File

@@ -1124,6 +1124,7 @@ gpt-5 = "gpt-5.1"
cwd: None,
},
enabled: true,
disabled_reason: None,
startup_timeout_sec: None,
tool_timeout_sec: None,
enabled_tools: Some(vec!["one".to_string(), "two".to_string()]),
@@ -1145,6 +1146,7 @@ gpt-5 = "gpt-5.1"
env_http_headers: None,
},
enabled: false,
disabled_reason: None,
startup_timeout_sec: Some(std::time::Duration::from_secs(5)),
tool_timeout_sec: None,
enabled_tools: None,
@@ -1209,6 +1211,7 @@ foo = { command = "cmd" }
cwd: None,
},
enabled: true,
disabled_reason: None,
startup_timeout_sec: None,
tool_timeout_sec: None,
enabled_tools: None,
@@ -1252,6 +1255,7 @@ foo = { command = "cmd" } # keep me
cwd: None,
},
enabled: false,
disabled_reason: None,
startup_timeout_sec: None,
tool_timeout_sec: None,
enabled_tools: None,
@@ -1294,6 +1298,7 @@ foo = { command = "cmd", args = ["--flag"] } # keep me
cwd: None,
},
enabled: true,
disabled_reason: None,
startup_timeout_sec: None,
tool_timeout_sec: None,
enabled_tools: None,
@@ -1337,6 +1342,7 @@ foo = { command = "cmd" }
cwd: None,
},
enabled: false,
disabled_reason: None,
startup_timeout_sec: None,
tool_timeout_sec: None,
enabled_tools: None,

View File

@@ -2,6 +2,7 @@ use crate::auth::AuthCredentialsStoreMode;
use crate::config::types::DEFAULT_OTEL_ENVIRONMENT;
use crate::config::types::History;
use crate::config::types::McpServerConfig;
use crate::config::types::McpServerDisabledReason;
use crate::config::types::McpServerTransportConfig;
use crate::config::types::Notice;
use crate::config::types::Notifications;
@@ -19,6 +20,7 @@ use crate::config_loader::ConfigRequirements;
use crate::config_loader::LoaderOverrides;
use crate::config_loader::McpServerIdentity;
use crate::config_loader::McpServerRequirement;
use crate::config_loader::Sourced;
use crate::config_loader::load_config_layers_state;
use crate::features::Feature;
use crate::features::FeatureOverrides;
@@ -337,7 +339,8 @@ pub struct Config {
/// model info's default preference.
pub include_apply_patch_tool: bool,
pub web_search_mode: WebSearchMode,
/// Explicit or feature-derived web search mode.
pub web_search_mode: Option<WebSearchMode>,
/// If set to `true`, use only the experimental unified exec tool.
pub use_experimental_unified_exec_tool: bool,
@@ -539,25 +542,32 @@ fn deserialize_config_toml_with_base(
fn filter_mcp_servers_by_requirements(
mcp_servers: &mut HashMap<String, McpServerConfig>,
mcp_requirements: Option<&BTreeMap<String, McpServerRequirement>>,
mcp_requirements: Option<&Sourced<BTreeMap<String, McpServerRequirement>>>,
) {
let Some(allowlist) = mcp_requirements else {
return;
};
let source = allowlist.source.clone();
for (name, server) in mcp_servers.iter_mut() {
let allowed = allowlist
.value
.get(name)
.is_some_and(|requirement| mcp_server_matches_requirement(requirement, server));
if !allowed {
if allowed {
server.disabled_reason = None;
} else {
server.enabled = false;
server.disabled_reason = Some(McpServerDisabledReason::Requirements {
source: source.clone(),
});
}
}
}
fn constrain_mcp_servers(
mcp_servers: HashMap<String, McpServerConfig>,
mcp_requirements: Option<&BTreeMap<String, McpServerRequirement>>,
mcp_requirements: Option<&Sourced<BTreeMap<String, McpServerRequirement>>>,
) -> ConstraintResult<Constrained<HashMap<String, McpServerConfig>>> {
if mcp_requirements.is_none() {
return Ok(Constrained::allow_any(mcp_servers));
@@ -1182,24 +1192,22 @@ pub fn resolve_oss_provider(
}
}
/// Resolve the web search mode from the config, profile, and features.
/// Resolve the web search mode from explicit config and feature flags.
fn resolve_web_search_mode(
config_toml: &ConfigToml,
config_profile: &ConfigProfile,
features: &Features,
) -> WebSearchMode {
// Explicit config gets precedence over feature flags
) -> Option<WebSearchMode> {
if let Some(mode) = config_profile.web_search.or(config_toml.web_search) {
return mode;
return Some(mode);
}
if features.enabled(Feature::WebSearchCached) {
return WebSearchMode::Cached;
return Some(WebSearchMode::Cached);
}
if features.enabled(Feature::WebSearchRequest) {
return WebSearchMode::Live;
return Some(WebSearchMode::Live);
}
// Fall back to default
WebSearchMode::default()
None
}
impl Config {
@@ -1707,6 +1715,7 @@ mod tests {
use crate::config::types::HistoryPersistence;
use crate::config::types::McpServerTransportConfig;
use crate::config::types::Notifications;
use crate::config_loader::RequirementSource;
use crate::features::Feature;
use super::*;
@@ -1728,6 +1737,7 @@ mod tests {
cwd: None,
},
enabled: true,
disabled_reason: None,
startup_timeout_sec: None,
tool_timeout_sec: None,
enabled_tools: None,
@@ -1744,6 +1754,7 @@ mod tests {
env_http_headers: None,
},
enabled: true,
disabled_reason: None,
startup_timeout_sec: None,
tool_timeout_sec: None,
enabled_tools: None,
@@ -1976,9 +1987,9 @@ trust_level = "trusted"
(MATCHED_URL_SERVER.to_string(), http_mcp(GOOD_URL)),
(DIFFERENT_NAME_SERVER.to_string(), stdio_mcp("same-cmd")),
]);
filter_mcp_servers_by_requirements(
&mut servers,
Some(&BTreeMap::from([
let source = RequirementSource::LegacyManagedConfigTomlFromMdm;
let requirements = Sourced::new(
BTreeMap::from([
(
MISMATCHED_URL_SERVER.to_string(),
McpServerRequirement {
@@ -2011,20 +2022,29 @@ trust_level = "trusted"
},
},
),
])),
]),
source.clone(),
);
filter_mcp_servers_by_requirements(&mut servers, Some(&requirements));
let reason = Some(McpServerDisabledReason::Requirements { source });
assert_eq!(
servers
.iter()
.map(|(name, server)| (name.clone(), server.enabled))
.collect::<HashMap<String, bool>>(),
.map(|(name, server)| (
name.clone(),
(server.enabled, server.disabled_reason.clone())
))
.collect::<HashMap<String, (bool, Option<McpServerDisabledReason>)>>(),
HashMap::from([
(MISMATCHED_URL_SERVER.to_string(), false),
(MISMATCHED_COMMAND_SERVER.to_string(), false),
(MATCHED_URL_SERVER.to_string(), true),
(MATCHED_COMMAND_SERVER.to_string(), true),
(DIFFERENT_NAME_SERVER.to_string(), false),
(MISMATCHED_URL_SERVER.to_string(), (false, reason.clone())),
(
MISMATCHED_COMMAND_SERVER.to_string(),
(false, reason.clone()),
),
(MATCHED_URL_SERVER.to_string(), (true, None)),
(MATCHED_COMMAND_SERVER.to_string(), (true, None)),
(DIFFERENT_NAME_SERVER.to_string(), (false, reason)),
])
);
}
@@ -2041,11 +2061,14 @@ trust_level = "trusted"
assert_eq!(
servers
.iter()
.map(|(name, server)| (name.clone(), server.enabled))
.collect::<HashMap<String, bool>>(),
.map(|(name, server)| (
name.clone(),
(server.enabled, server.disabled_reason.clone())
))
.collect::<HashMap<String, (bool, Option<McpServerDisabledReason>)>>(),
HashMap::from([
("server-a".to_string(), true),
("server-b".to_string(), true),
("server-a".to_string(), (true, None)),
("server-b".to_string(), (true, None)),
])
);
}
@@ -2057,16 +2080,22 @@ trust_level = "trusted"
("server-b".to_string(), http_mcp("https://example.com/b")),
]);
filter_mcp_servers_by_requirements(&mut servers, Some(&BTreeMap::new()));
let source = RequirementSource::LegacyManagedConfigTomlFromMdm;
let requirements = Sourced::new(BTreeMap::new(), source.clone());
filter_mcp_servers_by_requirements(&mut servers, Some(&requirements));
let reason = Some(McpServerDisabledReason::Requirements { source });
assert_eq!(
servers
.iter()
.map(|(name, server)| (name.clone(), server.enabled))
.collect::<HashMap<String, bool>>(),
.map(|(name, server)| (
name.clone(),
(server.enabled, server.disabled_reason.clone())
))
.collect::<HashMap<String, (bool, Option<McpServerDisabledReason>)>>(),
HashMap::from([
("server-a".to_string(), false),
("server-b".to_string(), false),
("server-a".to_string(), (false, reason.clone())),
("server-b".to_string(), (false, reason)),
])
);
}
@@ -2202,15 +2231,12 @@ trust_level = "trusted"
}
#[test]
fn web_search_mode_uses_default_if_unset() {
fn web_search_mode_uses_none_if_unset() {
let cfg = ConfigToml::default();
let profile = ConfigProfile::default();
let features = Features::with_defaults();
assert_eq!(
resolve_web_search_mode(&cfg, &profile, &features),
WebSearchMode::default()
);
assert_eq!(resolve_web_search_mode(&cfg, &profile, &features), None);
}
#[test]
@@ -2225,7 +2251,7 @@ trust_level = "trusted"
assert_eq!(
resolve_web_search_mode(&cfg, &profile, &features),
WebSearchMode::Live
Some(WebSearchMode::Live)
);
}
@@ -2241,7 +2267,7 @@ trust_level = "trusted"
assert_eq!(
resolve_web_search_mode(&cfg, &profile, &features),
WebSearchMode::Disabled
Some(WebSearchMode::Disabled)
);
}
@@ -2491,6 +2517,7 @@ trust_level = "trusted"
cwd: None,
},
enabled: true,
disabled_reason: None,
startup_timeout_sec: Some(Duration::from_secs(3)),
tool_timeout_sec: Some(Duration::from_secs(5)),
enabled_tools: None,
@@ -2644,6 +2671,7 @@ bearer_token = "secret"
cwd: None,
},
enabled: true,
disabled_reason: None,
startup_timeout_sec: None,
tool_timeout_sec: None,
enabled_tools: None,
@@ -2712,6 +2740,7 @@ ZIG_VAR = "3"
cwd: None,
},
enabled: true,
disabled_reason: None,
startup_timeout_sec: None,
tool_timeout_sec: None,
enabled_tools: None,
@@ -2760,6 +2789,7 @@ ZIG_VAR = "3"
cwd: Some(cwd_path.clone()),
},
enabled: true,
disabled_reason: None,
startup_timeout_sec: None,
tool_timeout_sec: None,
enabled_tools: None,
@@ -2806,6 +2836,7 @@ ZIG_VAR = "3"
env_http_headers: None,
},
enabled: true,
disabled_reason: None,
startup_timeout_sec: Some(Duration::from_secs(2)),
tool_timeout_sec: None,
enabled_tools: None,
@@ -2868,6 +2899,7 @@ startup_timeout_sec = 2.0
)])),
},
enabled: true,
disabled_reason: None,
startup_timeout_sec: Some(Duration::from_secs(2)),
tool_timeout_sec: None,
enabled_tools: None,
@@ -2942,6 +2974,7 @@ X-Auth = "DOCS_AUTH"
)])),
},
enabled: true,
disabled_reason: None,
startup_timeout_sec: Some(Duration::from_secs(2)),
tool_timeout_sec: None,
enabled_tools: None,
@@ -2969,6 +3002,7 @@ X-Auth = "DOCS_AUTH"
env_http_headers: None,
},
enabled: true,
disabled_reason: None,
startup_timeout_sec: None,
tool_timeout_sec: None,
enabled_tools: None,
@@ -3034,6 +3068,7 @@ url = "https://example.com/mcp"
)])),
},
enabled: true,
disabled_reason: None,
startup_timeout_sec: Some(Duration::from_secs(2)),
tool_timeout_sec: None,
enabled_tools: None,
@@ -3051,6 +3086,7 @@ url = "https://example.com/mcp"
cwd: None,
},
enabled: true,
disabled_reason: None,
startup_timeout_sec: None,
tool_timeout_sec: None,
enabled_tools: None,
@@ -3131,6 +3167,7 @@ url = "https://example.com/mcp"
cwd: None,
},
enabled: false,
disabled_reason: None,
startup_timeout_sec: None,
tool_timeout_sec: None,
enabled_tools: None,
@@ -3173,6 +3210,7 @@ url = "https://example.com/mcp"
cwd: None,
},
enabled: true,
disabled_reason: None,
startup_timeout_sec: None,
tool_timeout_sec: None,
enabled_tools: Some(vec!["allowed".to_string()]),
@@ -3581,7 +3619,7 @@ model_verbosity = "high"
forced_chatgpt_workspace_id: None,
forced_login_method: None,
include_apply_patch_tool: false,
web_search_mode: WebSearchMode::default(),
web_search_mode: None,
use_experimental_unified_exec_tool: false,
ghost_snapshot: GhostSnapshotConfig::default(),
features: Features::with_defaults(),
@@ -3668,7 +3706,7 @@ model_verbosity = "high"
forced_chatgpt_workspace_id: None,
forced_login_method: None,
include_apply_patch_tool: false,
web_search_mode: WebSearchMode::default(),
web_search_mode: None,
use_experimental_unified_exec_tool: false,
ghost_snapshot: GhostSnapshotConfig::default(),
features: Features::with_defaults(),
@@ -3770,7 +3808,7 @@ model_verbosity = "high"
forced_chatgpt_workspace_id: None,
forced_login_method: None,
include_apply_patch_tool: false,
web_search_mode: WebSearchMode::default(),
web_search_mode: None,
use_experimental_unified_exec_tool: false,
ghost_snapshot: GhostSnapshotConfig::default(),
features: Features::with_defaults(),
@@ -3858,7 +3896,7 @@ model_verbosity = "high"
forced_chatgpt_workspace_id: None,
forced_login_method: None,
include_apply_patch_tool: false,
web_search_mode: WebSearchMode::default(),
web_search_mode: None,
use_experimental_unified_exec_tool: false,
ghost_snapshot: GhostSnapshotConfig::default(),
features: Features::with_defaults(),
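The `resolve_web_search_mode` change in this file replaces the old `WebSearchMode::default()` fallback with `None`. The new precedence, explicit profile/config value first, then feature flags, then no value, can be sketched standalone (with local copies of the enum and boolean flags standing in for the real `ConfigToml`, `ConfigProfile`, and `Features` arguments):

```rust
// Sketch of the new resolution order: explicit value > WebSearchCached
// feature > WebSearchRequest feature > None.
#[derive(Debug, Clone, Copy, PartialEq)]
enum WebSearchMode {
    Disabled,
    Cached,
    Live,
}

fn resolve(
    explicit: Option<WebSearchMode>,
    cached_feature: bool,
    request_feature: bool,
) -> Option<WebSearchMode> {
    // Explicit config gets precedence over feature flags.
    if let Some(mode) = explicit {
        return Some(mode);
    }
    if cached_feature {
        return Some(WebSearchMode::Cached);
    }
    if request_feature {
        return Some(WebSearchMode::Live);
    }
    // Nothing set anywhere: resolve to None rather than a default.
    None
}

fn main() {
    assert_eq!(
        resolve(Some(WebSearchMode::Disabled), true, true),
        Some(WebSearchMode::Disabled)
    );
    assert_eq!(resolve(None, true, false), Some(WebSearchMode::Cached));
    assert_eq!(resolve(None, false, true), Some(WebSearchMode::Live));
    assert_eq!(resolve(None, false, false), None);
}
```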

View File

@@ -3,11 +3,13 @@
// Note this file should generally be restricted to simple struct/enum
// definitions that do not contain business logic.
use crate::config_loader::RequirementSource;
pub use codex_protocol::config_types::AltScreenMode;
pub use codex_protocol::config_types::WebSearchMode;
use codex_utils_absolute_path::AbsolutePathBuf;
use std::collections::BTreeMap;
use std::collections::HashMap;
use std::fmt;
use std::path::PathBuf;
use std::time::Duration;
use wildmatch::WildMatchPattern;
@@ -20,6 +22,23 @@ use serde::de::Error as SerdeError;
pub const DEFAULT_OTEL_ENVIRONMENT: &str = "dev";
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum McpServerDisabledReason {
Unknown,
Requirements { source: RequirementSource },
}
impl fmt::Display for McpServerDisabledReason {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {
McpServerDisabledReason::Unknown => write!(f, "unknown"),
McpServerDisabledReason::Requirements { source } => {
write!(f, "requirements ({source})")
}
}
}
}
#[derive(Serialize, Debug, Clone, PartialEq)]
pub struct McpServerConfig {
#[serde(flatten)]
@@ -29,6 +48,10 @@ pub struct McpServerConfig {
#[serde(default = "default_enabled")]
pub enabled: bool,
/// Reason this server was disabled after applying requirements.
#[serde(skip)]
pub disabled_reason: Option<McpServerDisabledReason>,
/// Startup timeout in seconds for initializing MCP server & initially listing tools.
#[serde(
default,
@@ -160,6 +183,7 @@ impl<'de> Deserialize<'de> for McpServerConfig {
startup_timeout_sec,
tool_timeout_sec,
enabled,
disabled_reason: None,
enabled_tools,
disabled_tools,
})

View File

@@ -44,7 +44,7 @@ impl fmt::Display for RequirementSource {
pub struct ConfigRequirements {
pub approval_policy: Constrained<AskForApproval>,
pub sandbox_policy: Constrained<SandboxPolicy>,
pub mcp_servers: Option<BTreeMap<String, McpServerRequirement>>,
pub mcp_servers: Option<Sourced<BTreeMap<String, McpServerRequirement>>>,
}
impl Default for ConfigRequirements {
@@ -273,7 +273,7 @@ impl TryFrom<ConfigRequirementsWithSources> for ConfigRequirements {
Ok(ConfigRequirements {
approval_policy,
sandbox_policy,
mcp_servers: mcp_servers.map(|sourced| sourced.value),
mcp_servers,
})
}
}
@@ -571,24 +571,27 @@ mod tests {
assert_eq!(
requirements.mcp_servers,
Some(BTreeMap::from([
(
"docs".to_string(),
McpServerRequirement {
identity: McpServerIdentity::Command {
command: "codex-mcp".to_string(),
Some(Sourced::new(
BTreeMap::from([
(
"docs".to_string(),
McpServerRequirement {
identity: McpServerIdentity::Command {
command: "codex-mcp".to_string(),
},
},
},
),
(
"remote".to_string(),
McpServerRequirement {
identity: McpServerIdentity::Url {
url: "https://example.com/mcp".to_string(),
),
(
"remote".to_string(),
McpServerRequirement {
identity: McpServerIdentity::Url {
url: "https://example.com/mcp".to_string(),
},
},
},
),
]))
),
]),
RequirementSource::Unknown,
))
);
Ok(())
}

View File

@@ -30,6 +30,7 @@ pub use config_requirements::McpServerIdentity;
pub use config_requirements::McpServerRequirement;
pub use config_requirements::RequirementSource;
pub use config_requirements::SandboxModeRequirement;
pub use config_requirements::Sourced;
pub use merge::merge_toml_values;
pub(crate) use overrides::build_cli_overrides_layer;
pub use state::ConfigLayerEntry;

View File

@@ -97,7 +97,11 @@ impl ContextManager {
}
| ResponseItem::Compaction {
encrypted_content: content,
} => estimate_reasoning_length(content.len()) as i64,
} => {
let reasoning_bytes = estimate_reasoning_length(content.len());
i64::try_from(approx_tokens_from_byte_count(reasoning_bytes))
.unwrap_or(i64::MAX)
}
item => {
let serialized = serde_json::to_string(item).unwrap_or_default();
i64::try_from(approx_token_count(&serialized)).unwrap_or(i64::MAX)
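The token-estimate fix above relies on a saturating conversion: a count that does not fit in `i64` clamps to `i64::MAX` instead of panicking. The pattern in isolation (the real code converts a `usize`; `u64` is used here so the overflow case is portable):

```rust
// Saturating conversion pattern used for the token estimates above:
// out-of-range counts clamp to i64::MAX rather than panicking.
fn saturating_token_count(count: u64) -> i64 {
    i64::try_from(count).unwrap_or(i64::MAX)
}

fn main() {
    assert_eq!(saturating_token_count(42), 42);
    // u64::MAX exceeds i64::MAX, so the conversion saturates.
    assert_eq!(saturating_token_count(u64::MAX), i64::MAX);
}
```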

View File

@@ -50,7 +50,11 @@ fn parse_user_message(message: &[ContentItem]) -> Option<UserMessageItem> {
if is_session_prefix(text) || is_user_shell_command_text(text) {
return None;
}
content.push(UserInput::Text { text: text.clone() });
content.push(UserInput::Text {
text: text.clone(),
// Plain text conversion has no UI element ranges.
text_elements: Vec::new(),
});
}
ContentItem::InputImage { image_url } => {
content.push(UserInput::Image {
@@ -179,6 +183,7 @@ mod tests {
let expected_content = vec![
UserInput::Text {
text: "Hello world".to_string(),
text_elements: Vec::new(),
},
UserInput::Image { image_url: img1 },
UserInput::Image { image_url: img2 },
@@ -218,7 +223,10 @@ mod tests {
TurnItem::UserMessage(user) => {
let expected_content = vec![
UserInput::Image { image_url },
UserInput::Text { text: user_text },
UserInput::Text {
text: user_text,
text_elements: Vec::new(),
},
];
assert_eq!(user.content, expected_content);
}
@@ -255,7 +263,10 @@ mod tests {
TurnItem::UserMessage(user) => {
let expected_content = vec![
UserInput::Image { image_url },
UserInput::Text { text: user_text },
UserInput::Text {
text: user_text,
text_elements: Vec::new(),
},
];
assert_eq!(user.content, expected_content);
}

View File

@@ -403,7 +403,17 @@ pub const FEATURES: &[FeatureSpec] = &[
FeatureSpec {
id: Feature::PowershellUtf8,
key: "powershell_utf8",
#[cfg(windows)]
stage: Stage::Beta {
name: "Powershell UTF-8 support",
menu_description: "Enable UTF-8 output in Powershell.",
announcement: "Codex now supports UTF-8 output in Powershell. If you are seeing problems, disable in /experimental.",
},
#[cfg(windows)]
default_enabled: true,
#[cfg(not(windows))]
stage: Stage::Experimental,
#[cfg(not(windows))]
default_enabled: false,
},
FeatureSpec {

View File

@@ -111,6 +111,7 @@ mod user_shell_command;
pub mod util;
pub use apply_patch::CODEX_APPLY_PATCH_ARG1;
pub use client::WEB_SEARCH_ELIGIBLE_HEADER;
pub use command_safety::is_dangerous_command;
pub use command_safety::is_safe_command;
pub use exec_policy::ExecPolicyError;

View File

@@ -1171,6 +1171,7 @@ mod tests {
env_http_headers: None,
},
enabled: true,
disabled_reason: None,
startup_timeout_sec: None,
tool_timeout_sec: None,
enabled_tools: None,
@@ -1215,6 +1216,7 @@ mod tests {
env_http_headers: None,
},
enabled: true,
disabled_reason: None,
startup_timeout_sec: None,
tool_timeout_sec: None,
enabled_tools: None,

View File

@@ -28,6 +28,7 @@ use super::policy::is_persisted_response_item;
use crate::config::Config;
use crate::default_client::originator;
use crate::git_info::collect_git_info;
use crate::path_utils;
use codex_protocol::protocol::InitialHistory;
use codex_protocol::protocol::ResumedHistory;
use codex_protocol::protocol::RolloutItem;
@@ -113,6 +114,35 @@ impl RolloutRecorder {
.await
}
/// Find the newest recorded thread path, optionally filtering to a matching cwd.
pub async fn find_latest_thread_path(
codex_home: &Path,
allowed_sources: &[SessionSource],
model_providers: Option<&[String]>,
default_provider: &str,
filter_cwd: Option<&Path>,
) -> std::io::Result<Option<PathBuf>> {
let mut cursor: Option<Cursor> = None;
loop {
let page = Self::list_threads(
codex_home,
25,
cursor.as_ref(),
allowed_sources,
model_providers,
default_provider,
)
.await?;
if let Some(path) = select_resume_path(&page, filter_cwd) {
return Ok(Some(path));
}
cursor = page.next_cursor;
if cursor.is_none() {
return Ok(None);
}
}
}
/// Attempt to create a new [`RolloutRecorder`]. If the sessions directory
/// cannot be created or the rollout file cannot be opened we return the
/// error so the caller can decide whether to disable persistence.
@@ -431,3 +461,36 @@ impl JsonlWriter {
Ok(())
}
}
fn select_resume_path(page: &ThreadsPage, filter_cwd: Option<&Path>) -> Option<PathBuf> {
match filter_cwd {
Some(cwd) => page.items.iter().find_map(|item| {
if session_cwd_matches(&item.head, cwd) {
Some(item.path.clone())
} else {
None
}
}),
None => page.items.first().map(|item| item.path.clone()),
}
}
fn session_cwd_matches(head: &[serde_json::Value], cwd: &Path) -> bool {
let Some(session_cwd) = extract_session_cwd(head) else {
return false;
};
if let (Ok(ca), Ok(cb)) = (
path_utils::normalize_for_path_comparison(&session_cwd),
path_utils::normalize_for_path_comparison(cwd),
) {
return ca == cb;
}
session_cwd == cwd
}
fn extract_session_cwd(head: &[serde_json::Value]) -> Option<PathBuf> {
head.iter().find_map(|value| {
let meta_line = serde_json::from_value::<SessionMetaLine>(value.clone()).ok()?;
Some(meta_line.meta.cwd)
})
}
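`session_cwd_matches` above normalizes both paths before comparing, falling back to raw equality when normalization fails. A simplified sketch (the `normalize` helper here just re-collects path components and is only an assumption about what `path_utils::normalize_for_path_comparison` does):

```rust
use std::path::{Path, PathBuf};

// Assumed stand-in for path_utils::normalize_for_path_comparison:
// re-collecting components drops trailing slashes and interior "." segments.
fn normalize(path: &Path) -> PathBuf {
    path.components().collect()
}

fn session_cwd_matches(session_cwd: &Path, cwd: &Path) -> bool {
    normalize(session_cwd) == normalize(cwd) || session_cwd == cwd
}

fn main() {
    assert!(session_cwd_matches(Path::new("/home/u/proj/"), Path::new("/home/u/proj")));
    assert!(session_cwd_matches(Path::new("/home/u/./proj"), Path::new("/home/u/proj")));
    assert!(!session_cwd_matches(Path::new("/home/u/other"), Path::new("/home/u/proj")));
}
```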

View File

@@ -603,6 +603,8 @@ async fn test_updated_at_uses_file_mtime() -> Result<()> {
item: RolloutItem::EventMsg(EventMsg::UserMessage(UserMessageEvent {
message: "hello".into(),
images: None,
text_elements: Vec::new(),
local_images: Vec::new(),
})),
};
writeln!(file, "{}", serde_json::to_string(&user_event_line)?)?;

View File

@@ -1,6 +1,7 @@
use crate::config::Config;
use crate::config_loader::ConfigLayerStack;
use crate::skills::model::SkillError;
use crate::skills::model::SkillInterface;
use crate::skills::model::SkillLoadOutcome;
use crate::skills::model::SkillMetadata;
use crate::skills::system::system_cache_root_dir;
@@ -13,6 +14,7 @@ use std::collections::VecDeque;
use std::error::Error;
use std::fmt;
use std::fs;
use std::path::Component;
use std::path::Path;
use std::path::PathBuf;
use tracing::error;
@@ -31,11 +33,29 @@ struct SkillFrontmatterMetadata {
short_description: Option<String>,
}
#[derive(Debug, Default, Deserialize)]
struct SkillToml {
#[serde(default)]
interface: Option<Interface>,
}
#[derive(Debug, Default, Deserialize)]
struct Interface {
display_name: Option<String>,
short_description: Option<String>,
icon_small: Option<PathBuf>,
icon_large: Option<PathBuf>,
brand_color: Option<String>,
default_prompt: Option<String>,
}
const SKILLS_FILENAME: &str = "SKILL.md";
const SKILLS_TOML_FILENAME: &str = "SKILL.toml";
const SKILLS_DIR_NAME: &str = "skills";
const MAX_NAME_LEN: usize = 64;
const MAX_DESCRIPTION_LEN: usize = 1024;
const MAX_SHORT_DESCRIPTION_LEN: usize = MAX_DESCRIPTION_LEN;
const MAX_DEFAULT_PROMPT_LEN: usize = MAX_DESCRIPTION_LEN;
// Traversal depth from the skills root.
const MAX_SCAN_DEPTH: usize = 6;
const MAX_SKILLS_DIRS_PER_ROOT: usize = 2000;
@@ -195,7 +215,7 @@ fn discover_skills_under_root(root: &Path, scope: SkillScope, outcome: &mut Skil
}
}
// Follow symlinks for user, admin, and repo skills. System skills are written by Codex itself.
// Follow symlinked directories for user, admin, and repo skills. System skills are written by Codex itself.
let follow_symlinks = matches!(
scope,
SkillScope::Repo | SkillScope::User | SkillScope::Admin
@@ -262,20 +282,6 @@ fn discover_skills_under_root(root: &Path, scope: SkillScope, outcome: &mut Skil
continue;
}
if metadata.is_file() && file_name == SKILLS_FILENAME {
match parse_skill_file(&path, scope) {
Ok(skill) => outcome.skills.push(skill),
Err(err) => {
if scope != SkillScope::System {
outcome.errors.push(SkillError {
path,
message: err.to_string(),
});
}
}
}
}
continue;
}
@@ -336,11 +342,12 @@ fn parse_skill_file(path: &Path, scope: SkillScope) -> Result<SkillMetadata, Ski
.as_deref()
.map(sanitize_single_line)
.filter(|value| !value.is_empty());
let interface = load_skill_interface(path);
validate_field(&name, MAX_NAME_LEN, "name")?;
validate_field(&description, MAX_DESCRIPTION_LEN, "description")?;
validate_len(&name, MAX_NAME_LEN, "name")?;
validate_len(&description, MAX_DESCRIPTION_LEN, "description")?;
if let Some(short_description) = short_description.as_deref() {
validate_field(
validate_len(
short_description,
MAX_SHORT_DESCRIPTION_LEN,
"metadata.short-description",
@@ -353,16 +360,115 @@ fn parse_skill_file(path: &Path, scope: SkillScope) -> Result<SkillMetadata, Ski
name,
description,
short_description,
interface,
path: resolved_path,
scope,
})
}
fn load_skill_interface(skill_path: &Path) -> Option<SkillInterface> {
// Fail open: optional SKILL.toml metadata should not block loading SKILL.md.
let skill_dir = skill_path.parent()?;
let interface_path = skill_dir.join(SKILLS_TOML_FILENAME);
if !interface_path.exists() {
return None;
}
let contents = match fs::read_to_string(&interface_path) {
Ok(contents) => contents,
Err(error) => {
tracing::warn!(
"ignoring {path}: failed to read SKILL.toml: {error}",
path = interface_path.display()
);
return None;
}
};
let parsed: SkillToml = match toml::from_str(&contents) {
Ok(parsed) => parsed,
Err(error) => {
tracing::warn!(
"ignoring {path}: invalid TOML: {error}",
path = interface_path.display()
);
return None;
}
};
let interface = parsed.interface?;
let interface = SkillInterface {
display_name: resolve_str(
interface.display_name,
MAX_NAME_LEN,
"interface.display_name",
),
short_description: resolve_str(
interface.short_description,
MAX_SHORT_DESCRIPTION_LEN,
"interface.short_description",
),
icon_small: resolve_asset_path(skill_dir, "interface.icon_small", interface.icon_small),
icon_large: resolve_asset_path(skill_dir, "interface.icon_large", interface.icon_large),
brand_color: resolve_color_str(interface.brand_color, "interface.brand_color"),
default_prompt: resolve_str(
interface.default_prompt,
MAX_DEFAULT_PROMPT_LEN,
"interface.default_prompt",
),
};
let has_fields = interface.display_name.is_some()
|| interface.short_description.is_some()
|| interface.icon_small.is_some()
|| interface.icon_large.is_some()
|| interface.brand_color.is_some()
|| interface.default_prompt.is_some();
if has_fields { Some(interface) } else { None }
}
fn resolve_asset_path(
skill_dir: &Path,
field: &'static str,
path: Option<PathBuf>,
) -> Option<PathBuf> {
// Icons must be relative paths under the skill's assets/ directory; otherwise return None.
let path = path?;
if path.as_os_str().is_empty() {
return None;
}
let assets_dir = skill_dir.join("assets");
if path.is_absolute() {
tracing::warn!(
"ignoring {field}: icon must be a relative assets path (not {})",
assets_dir.display()
);
return None;
}
let mut components = path.components().peekable();
while matches!(components.peek(), Some(Component::CurDir)) {
components.next();
}
match components.next() {
Some(Component::Normal(component)) if component == "assets" => {}
_ => {
tracing::warn!("ignoring {field}: icon path must be under assets/");
return None;
}
}
if components.any(|component| matches!(component, Component::ParentDir)) {
tracing::warn!("ignoring {field}: icon path must not contain '..'");
return None;
}
Some(skill_dir.join(path))
}
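The checks in `resolve_asset_path` can be exercised in isolation. A minimal sketch of the same validation (the standalone `is_valid_icon_path` helper is illustrative and returns only the accept/reject decision, without the logging or path joining):

```rust
use std::path::{Component, Path};

// Mirrors the validation above: the path must be non-empty and relative,
// start with "assets" after any leading "./" segments, and contain no ".."
// components anywhere in the remainder.
fn is_valid_icon_path(path: &Path) -> bool {
    if path.as_os_str().is_empty() || path.is_absolute() {
        return false;
    }
    let mut components = path.components().peekable();
    while matches!(components.peek(), Some(Component::CurDir)) {
        components.next();
    }
    match components.next() {
        Some(Component::Normal(component)) if component == "assets" => {}
        _ => return false,
    }
    !components.any(|component| matches!(component, Component::ParentDir))
}

fn main() {
    assert!(is_valid_icon_path(Path::new("assets/icon.png")));
    assert!(is_valid_icon_path(Path::new("./assets/logo.svg")));
    assert!(!is_valid_icon_path(Path::new("icon.png")));
    assert!(!is_valid_icon_path(Path::new("/abs/assets/icon.png")));
    assert!(!is_valid_icon_path(Path::new("assets/../secret.png")));
}
```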
fn sanitize_single_line(raw: &str) -> String {
raw.split_whitespace().collect::<Vec<_>>().join(" ")
}
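`sanitize_single_line` collapses every run of whitespace, including newlines, into a single space and trims both ends. For example:

```rust
// Same one-liner as above: split_whitespace skips empty runs, so leading,
// trailing, and repeated whitespace all disappear.
fn sanitize_single_line(raw: &str) -> String {
    raw.split_whitespace().collect::<Vec<_>>().join(" ")
}

fn main() {
    assert_eq!(sanitize_single_line("  UI   Skill \n line "), "UI Skill line");
    assert_eq!(sanitize_single_line("   "), ""); // whitespace-only becomes empty
}
```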
fn validate_field(
fn validate_len(
value: &str,
max_len: usize,
field_name: &'static str,
@@ -379,6 +485,36 @@ fn validate_field(
Ok(())
}
fn resolve_str(value: Option<String>, max_len: usize, field: &'static str) -> Option<String> {
let value = value?;
let value = sanitize_single_line(&value);
if value.is_empty() {
tracing::warn!("ignoring {field}: value is empty");
return None;
}
if value.chars().count() > max_len {
tracing::warn!("ignoring {field}: exceeds maximum length of {max_len} characters");
return None;
}
Some(value)
}
fn resolve_color_str(value: Option<String>, field: &'static str) -> Option<String> {
let value = value?;
let value = value.trim();
if value.is_empty() {
tracing::warn!("ignoring {field}: value is empty");
return None;
}
let mut chars = value.chars();
if value.len() == 7 && chars.next() == Some('#') && chars.all(|c| c.is_ascii_hexdigit()) {
Some(value.to_string())
} else {
tracing::warn!("ignoring {field}: expected #RRGGBB, got {value}");
None
}
}
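The `#RRGGBB` check in `resolve_color_str` can be distilled to a boolean predicate. A sketch (the `is_valid_brand_color` name is illustrative; logging is omitted):

```rust
// Same shape check as resolve_color_str above: exactly 7 bytes after trimming,
// a leading '#', and six ASCII hex digits.
fn is_valid_brand_color(value: &str) -> bool {
    let value = value.trim();
    let mut chars = value.chars();
    value.len() == 7 && chars.next() == Some('#') && chars.all(|c| c.is_ascii_hexdigit())
}

fn main() {
    assert!(is_valid_brand_color("#3B82F6"));
    assert!(is_valid_brand_color(" #3B82F6 ")); // surrounding whitespace is trimmed
    assert!(!is_valid_brand_color("blue"));
    assert!(!is_valid_brand_color("#3B82F"));   // too short
    assert!(!is_valid_brand_color("#GGGGGG"));  // non-hex digits
}
```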
fn extract_frontmatter(contents: &str) -> Option<String> {
let mut lines = contents.lines();
if !matches!(lines.next(), Some(line) if line.trim() == "---") {
@@ -528,6 +664,224 @@ mod tests {
path
}
fn write_skill_interface_at(skill_dir: &Path, contents: &str) -> PathBuf {
let path = skill_dir.join(SKILLS_TOML_FILENAME);
fs::write(&path, contents).unwrap();
path
}
#[tokio::test]
async fn loads_skill_interface_metadata_happy_path() {
let codex_home = tempfile::tempdir().expect("tempdir");
let skill_path = write_skill(&codex_home, "demo", "ui-skill", "from toml");
let skill_dir = skill_path.parent().expect("skill dir");
let normalized_skill_dir = normalized(skill_dir);
write_skill_interface_at(
skill_dir,
r##"
[interface]
display_name = "UI Skill"
short_description = " short desc "
icon_small = "./assets/small-400px.png"
icon_large = "./assets/large-logo.svg"
brand_color = "#3B82F6"
default_prompt = " default prompt "
"##,
);
let cfg = make_config(&codex_home).await;
let outcome = load_skills(&cfg);
assert!(
outcome.errors.is_empty(),
"unexpected errors: {:?}",
outcome.errors
);
assert_eq!(
outcome.skills,
vec![SkillMetadata {
name: "ui-skill".to_string(),
description: "from toml".to_string(),
short_description: None,
interface: Some(SkillInterface {
display_name: Some("UI Skill".to_string()),
short_description: Some("short desc".to_string()),
icon_small: Some(normalized_skill_dir.join("./assets/small-400px.png")),
icon_large: Some(normalized_skill_dir.join("./assets/large-logo.svg")),
brand_color: Some("#3B82F6".to_string()),
default_prompt: Some("default prompt".to_string()),
}),
path: normalized(&skill_path),
scope: SkillScope::User,
}]
);
}
#[tokio::test]
async fn accepts_icon_paths_under_assets_dir() {
let codex_home = tempfile::tempdir().expect("tempdir");
let skill_path = write_skill(&codex_home, "demo", "ui-skill", "from toml");
let skill_dir = skill_path.parent().expect("skill dir");
let normalized_skill_dir = normalized(skill_dir);
write_skill_interface_at(
skill_dir,
r#"
[interface]
display_name = "UI Skill"
icon_small = "assets/icon.png"
icon_large = "./assets/logo.svg"
"#,
);
let cfg = make_config(&codex_home).await;
let outcome = load_skills(&cfg);
assert!(
outcome.errors.is_empty(),
"unexpected errors: {:?}",
outcome.errors
);
assert_eq!(
outcome.skills,
vec![SkillMetadata {
name: "ui-skill".to_string(),
description: "from toml".to_string(),
short_description: None,
interface: Some(SkillInterface {
display_name: Some("UI Skill".to_string()),
short_description: None,
icon_small: Some(normalized_skill_dir.join("assets/icon.png")),
icon_large: Some(normalized_skill_dir.join("./assets/logo.svg")),
brand_color: None,
default_prompt: None,
}),
path: normalized(&skill_path),
scope: SkillScope::User,
}]
);
}
#[tokio::test]
async fn ignores_invalid_brand_color() {
let codex_home = tempfile::tempdir().expect("tempdir");
let skill_path = write_skill(&codex_home, "demo", "ui-skill", "from toml");
let skill_dir = skill_path.parent().expect("skill dir");
write_skill_interface_at(
skill_dir,
r#"
[interface]
brand_color = "blue"
"#,
);
let cfg = make_config(&codex_home).await;
let outcome = load_skills(&cfg);
assert!(
outcome.errors.is_empty(),
"unexpected errors: {:?}",
outcome.errors
);
assert_eq!(
outcome.skills,
vec![SkillMetadata {
name: "ui-skill".to_string(),
description: "from toml".to_string(),
short_description: None,
interface: None,
path: normalized(&skill_path),
scope: SkillScope::User,
}]
);
}
#[tokio::test]
async fn ignores_default_prompt_over_max_length() {
let codex_home = tempfile::tempdir().expect("tempdir");
let skill_path = write_skill(&codex_home, "demo", "ui-skill", "from toml");
let skill_dir = skill_path.parent().expect("skill dir");
let normalized_skill_dir = normalized(skill_dir);
let too_long = "x".repeat(MAX_DEFAULT_PROMPT_LEN + 1);
write_skill_interface_at(
skill_dir,
&format!(
r##"
[interface]
display_name = "UI Skill"
icon_small = "./assets/small-400px.png"
default_prompt = "{too_long}"
"##
),
);
let cfg = make_config(&codex_home).await;
let outcome = load_skills(&cfg);
assert!(
outcome.errors.is_empty(),
"unexpected errors: {:?}",
outcome.errors
);
assert_eq!(
outcome.skills,
vec![SkillMetadata {
name: "ui-skill".to_string(),
description: "from toml".to_string(),
short_description: None,
interface: Some(SkillInterface {
display_name: Some("UI Skill".to_string()),
short_description: None,
icon_small: Some(normalized_skill_dir.join("./assets/small-400px.png")),
icon_large: None,
brand_color: None,
default_prompt: None,
}),
path: normalized(&skill_path),
scope: SkillScope::User,
}]
);
}
#[tokio::test]
async fn drops_interface_when_icons_are_invalid() {
let codex_home = tempfile::tempdir().expect("tempdir");
let skill_path = write_skill(&codex_home, "demo", "ui-skill", "from toml");
let skill_dir = skill_path.parent().expect("skill dir");
write_skill_interface_at(
skill_dir,
r#"
[interface]
icon_small = "icon.png"
icon_large = "./assets/../logo.svg"
"#,
);
let cfg = make_config(&codex_home).await;
let outcome = load_skills(&cfg);
assert!(
outcome.errors.is_empty(),
"unexpected errors: {:?}",
outcome.errors
);
assert_eq!(
outcome.skills,
vec![SkillMetadata {
name: "ui-skill".to_string(),
description: "from toml".to_string(),
short_description: None,
interface: None,
path: normalized(&skill_path),
scope: SkillScope::User,
}]
);
}
#[cfg(unix)]
fn symlink_dir(target: &Path, link: &Path) {
std::os::unix::fs::symlink(target, link).unwrap();
@@ -563,6 +917,7 @@ mod tests {
name: "linked-skill".to_string(),
description: "from link".to_string(),
short_description: None,
interface: None,
path: normalized(&shared_skill_path),
scope: SkillScope::User,
}]
@@ -571,7 +926,7 @@ mod tests {
#[tokio::test]
#[cfg(unix)]
async fn loads_skills_via_symlinked_skill_file_for_user_scope() {
async fn ignores_symlinked_skill_file_for_user_scope() {
let codex_home = tempfile::tempdir().expect("tempdir");
let shared = tempfile::tempdir().expect("tempdir");
@@ -590,16 +945,7 @@ mod tests {
"unexpected errors: {:?}",
outcome.errors
);
assert_eq!(
outcome.skills,
vec![SkillMetadata {
name: "linked-file-skill".to_string(),
description: "from link".to_string(),
short_description: None,
path: normalized(&shared_skill_path),
scope: SkillScope::User,
}]
);
assert_eq!(outcome.skills, Vec::new());
}
#[tokio::test]
@@ -629,6 +975,7 @@ mod tests {
name: "cycle-skill".to_string(),
description: "still loads".to_string(),
short_description: None,
interface: None,
path: normalized(&skill_path),
scope: SkillScope::User,
}]
@@ -662,6 +1009,7 @@ mod tests {
name: "admin-linked-skill".to_string(),
description: "from link".to_string(),
short_description: None,
interface: None,
path: normalized(&shared_skill_path),
scope: SkillScope::Admin,
}]
@@ -699,6 +1047,7 @@ mod tests {
name: "repo-linked-skill".to_string(),
description: "from link".to_string(),
short_description: None,
interface: None,
path: normalized(&linked_skill_path),
scope: SkillScope::Repo,
}]
@@ -759,6 +1108,7 @@ mod tests {
name: "within-depth-skill".to_string(),
description: "loads".to_string(),
short_description: None,
interface: None,
path: normalized(&within_depth_path),
scope: SkillScope::User,
}]
@@ -783,6 +1133,7 @@ mod tests {
name: "demo-skill".to_string(),
description: "does things carefully".to_string(),
short_description: None,
interface: None,
path: normalized(&skill_path),
scope: SkillScope::User,
}]
@@ -811,6 +1162,7 @@ mod tests {
name: "demo-skill".to_string(),
description: "long description".to_string(),
short_description: Some("short summary".to_string()),
interface: None,
path: normalized(&skill_path),
scope: SkillScope::User,
}]
@@ -920,6 +1272,7 @@ mod tests {
name: "repo-skill".to_string(),
description: "from repo".to_string(),
short_description: None,
interface: None,
path: normalized(&skill_path),
scope: SkillScope::Repo,
}]
@@ -970,6 +1323,7 @@ mod tests {
name: "nested-skill".to_string(),
description: "from nested".to_string(),
short_description: None,
interface: None,
path: normalized(&nested_skill_path),
scope: SkillScope::Repo,
},
@@ -977,6 +1331,7 @@ mod tests {
name: "root-skill".to_string(),
description: "from root".to_string(),
short_description: None,
interface: None,
path: normalized(&root_skill_path),
scope: SkillScope::Repo,
},
@@ -1013,6 +1368,7 @@ mod tests {
name: "local-skill".to_string(),
description: "from cwd".to_string(),
short_description: None,
interface: None,
path: normalized(&skill_path),
scope: SkillScope::Repo,
}]
@@ -1050,6 +1406,7 @@ mod tests {
name: "dupe-skill".to_string(),
description: "from repo".to_string(),
short_description: None,
interface: None,
path: normalized(&repo_skill_path),
scope: SkillScope::Repo,
}]
@@ -1077,6 +1434,7 @@ mod tests {
name: "dupe-skill".to_string(),
description: "from user".to_string(),
short_description: None,
interface: None,
path: normalized(&user_skill_path),
scope: SkillScope::User,
}]
@@ -1145,6 +1503,7 @@ mod tests {
name: "repo-skill".to_string(),
description: "from repo".to_string(),
short_description: None,
interface: None,
path: normalized(&skill_path),
scope: SkillScope::Repo,
}]
@@ -1200,6 +1559,7 @@ mod tests {
name: "system-skill".to_string(),
description: "from system".to_string(),
short_description: None,
interface: None,
path: normalized(&skill_path),
scope: SkillScope::System,
}]
@@ -1254,6 +1614,7 @@ mod tests {
name: "dupe-skill".to_string(),
description: "from system".to_string(),
short_description: None,
interface: None,
path: normalized(&system_skill_path),
scope: SkillScope::System,
}]
@@ -1283,6 +1644,7 @@ mod tests {
name: "dupe-skill".to_string(),
description: "from user".to_string(),
short_description: None,
interface: None,
path: normalized(&user_skill_path),
scope: SkillScope::User,
}]
@@ -1321,6 +1683,7 @@ mod tests {
name: "dupe-skill".to_string(),
description: "from repo".to_string(),
short_description: None,
interface: None,
path: normalized(&repo_skill_path),
scope: SkillScope::Repo,
}]
@@ -1371,6 +1734,7 @@ mod tests {
name: "dupe-skill".to_string(),
description: "from nested".to_string(),
short_description: None,
interface: None,
path: expected_path,
scope: SkillScope::Repo,
}],

View File

@@ -7,10 +7,21 @@ pub struct SkillMetadata {
pub name: String,
pub description: String,
pub short_description: Option<String>,
pub interface: Option<SkillInterface>,
pub path: PathBuf,
pub scope: SkillScope,
}
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct SkillInterface {
pub display_name: Option<String>,
pub short_description: Option<String>,
pub icon_small: Option<PathBuf>,
pub icon_large: Option<PathBuf>,
pub brand_color: Option<String>,
pub default_prompt: Option<String>,
}
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct SkillError {
pub path: PathBuf,

View File

@@ -86,7 +86,7 @@ async fn start_review_conversation(
let mut sub_agent_config = config.as_ref().clone();
// Carry over review-only feature restrictions so the delegate cannot
// re-enable blocked tools (web search, view image).
sub_agent_config.web_search_mode = WebSearchMode::Disabled;
sub_agent_config.web_search_mode = Some(WebSearchMode::Disabled);
// Set explicit review rubric for the sub-agent
sub_agent_config.base_instructions = Some(crate::REVIEW_PROMPT.to_string());

View File

@@ -27,6 +27,7 @@ use crate::sandboxing::ExecEnv;
use crate::sandboxing::SandboxPermissions;
use crate::state::TaskKind;
use crate::tools::format_exec_output_str;
use crate::tools::runtimes::maybe_wrap_shell_lc_with_snapshot;
use crate::user_shell_command::user_shell_command_record_item;
use super::SessionTask;
@@ -74,9 +75,9 @@ impl SessionTask for UserShellCommandTask {
// allows commands that use shell features (pipes, &&, redirects, etc.).
// We do not source rc files or otherwise reformat the script.
let use_login_shell = true;
let command = session
.user_shell()
.derive_exec_args(&self.command, use_login_shell);
let session_shell = session.user_shell();
let command = session_shell.derive_exec_args(&self.command, use_login_shell);
let command = maybe_wrap_shell_lc_with_snapshot(&command, session_shell.as_ref());
let call_id = Uuid::new_v4().to_string();
let raw_command = self.command.clone();

View File

@@ -76,11 +76,13 @@ impl ToolHandler for CollabHandler {
mod spawn {
use super::*;
use crate::agent::AgentRole;
use std::sync::Arc;
#[derive(Debug, Deserialize)]
struct SpawnAgentArgs {
message: String,
agent_type: Option<AgentRole>,
}
#[derive(Debug, Serialize)]
@@ -95,6 +97,7 @@ mod spawn {
arguments: String,
) -> Result<ToolOutput, FunctionCallError> {
let args: SpawnAgentArgs = parse_arguments(&arguments)?;
let agent_role = args.agent_type.unwrap_or(AgentRole::Default);
let prompt = args.message;
if prompt.trim().is_empty() {
return Err(FunctionCallError::RespondToModel(
@@ -112,7 +115,10 @@ mod spawn {
.into(),
)
.await;
let config = build_agent_spawn_config(turn.as_ref())?;
let mut config = build_agent_spawn_config(turn.as_ref())?;
agent_role
.apply_to_config(&mut config)
.map_err(FunctionCallError::RespondToModel)?;
let result = session
.services
.agent_control
@@ -164,6 +170,8 @@ mod send_input {
struct SendInputArgs {
id: String,
message: String,
#[serde(default)]
interrupt: bool,
}
#[derive(Debug, Serialize)]
@@ -185,6 +193,14 @@ mod send_input {
"Empty message can't be sent to an agent".to_string(),
));
}
if args.interrupt {
session
.services
.agent_control
.interrupt_agent(receiver_thread_id)
.await
.map_err(|err| collab_agent_error(receiver_thread_id, err))?;
}
session
.send_event(
&turn,
@@ -238,20 +254,26 @@ mod send_input {
mod wait {
use super::*;
use crate::agent::status::is_final;
use futures::FutureExt;
use futures::StreamExt;
use futures::stream::FuturesUnordered;
use std::collections::HashMap;
use std::sync::Arc;
use std::time::Duration;
use tokio::sync::watch::Receiver;
use tokio::time::Instant;
use tokio::time::timeout_at;
#[derive(Debug, Deserialize)]
struct WaitArgs {
id: String,
ids: Vec<String>,
timeout_ms: Option<i64>,
}
#[derive(Debug, Serialize)]
struct WaitResult {
status: AgentStatus,
status: HashMap<ThreadId, AgentStatus>,
timed_out: bool,
}
@@ -262,7 +284,16 @@ mod wait {
arguments: String,
) -> Result<ToolOutput, FunctionCallError> {
let args: WaitArgs = parse_arguments(&arguments)?;
let receiver_thread_id = agent_id(&args.id)?;
if args.ids.is_empty() {
return Err(FunctionCallError::RespondToModel(
"ids must be non-empty".to_owned(),
));
}
let receiver_thread_ids = args
.ids
.iter()
.map(|id| agent_id(id))
.collect::<Result<Vec<_>, _>>()?;
// Validate timeout.
let timeout_ms = args.timeout_ms.unwrap_or(DEFAULT_WAIT_TIMEOUT_MS);
@@ -280,105 +311,131 @@ mod wait {
&turn,
CollabWaitingBeginEvent {
sender_thread_id: session.conversation_id,
receiver_thread_id,
receiver_thread_ids: receiver_thread_ids.clone(),
call_id: call_id.clone(),
}
.into(),
)
.await;
let status_rx = match session
.services
.agent_control
.subscribe_status(receiver_thread_id)
.await
{
Ok(status_rx) => status_rx,
Err(err) => {
let status = session
.services
.agent_control
.get_status(receiver_thread_id)
.await;
session
.send_event(
&turn,
CollabWaitingEndEvent {
sender_thread_id: session.conversation_id,
receiver_thread_id,
call_id: call_id.clone(),
status,
}
.into(),
)
.await;
return Err(collab_agent_error(receiver_thread_id, err));
let mut status_rxs = Vec::with_capacity(receiver_thread_ids.len());
let mut initial_final_statuses = Vec::new();
for id in &receiver_thread_ids {
match session.services.agent_control.subscribe_status(*id).await {
Ok(rx) => {
let status = rx.borrow().clone();
if is_final(&status) {
initial_final_statuses.push((*id, status));
}
status_rxs.push((*id, rx));
}
Err(CodexErr::ThreadNotFound(_)) => {
initial_final_statuses.push((*id, AgentStatus::NotFound));
}
Err(err) => {
let mut statuses = HashMap::with_capacity(1);
statuses.insert(*id, session.services.agent_control.get_status(*id).await);
session
.send_event(
&turn,
CollabWaitingEndEvent {
sender_thread_id: session.conversation_id,
call_id: call_id.clone(),
statuses,
}
.into(),
)
.await;
return Err(collab_agent_error(*id, err));
}
}
}
let statuses = if !initial_final_statuses.is_empty() {
initial_final_statuses
} else {
// Wait for the first agent to reach a final status.
let mut futures = FuturesUnordered::new();
for (id, rx) in status_rxs.into_iter() {
let session = session.clone();
futures.push(wait_for_final_status(session, id, rx));
}
let mut results = Vec::new();
let deadline = Instant::now() + Duration::from_millis(timeout_ms as u64);
loop {
match timeout_at(deadline, futures.next()).await {
Ok(Some(Some(result))) => {
results.push(result);
break;
}
Ok(Some(None)) => continue,
Ok(None) | Err(_) => break,
}
}
if !results.is_empty() {
// Drain any futures that completed concurrently so late results are not lost.
loop {
match futures.next().now_or_never() {
Some(Some(Some(result))) => results.push(result),
Some(Some(None)) => continue,
Some(None) | None => break,
}
}
}
results
};
let result =
wait_for_status(session.as_ref(), receiver_thread_id, timeout_ms, status_rx).await;
// Convert payload.
let statuses_map = statuses.clone().into_iter().collect::<HashMap<_, _>>();
let result = WaitResult {
status: statuses_map.clone(),
timed_out: statuses.is_empty(),
};
// Final event emission.
session
.send_event(
&turn,
CollabWaitingEndEvent {
sender_thread_id: session.conversation_id,
receiver_thread_id,
call_id,
status: result.status.clone(),
statuses: statuses_map,
}
.into(),
)
.await;
if matches!(result.status, AgentStatus::NotFound) {
return Err(FunctionCallError::RespondToModel(format!(
"agent with id {receiver_thread_id} not found"
)));
}
let content = serde_json::to_string(&result).map_err(|err| {
FunctionCallError::Fatal(format!("failed to serialize wait result: {err}"))
})?;
let success = !result.timed_out && !matches!(result.status, AgentStatus::Errored(_));
Ok(ToolOutput::Function {
content,
success: Some(success),
success: None,
content_items: None,
})
}
async fn wait_for_status(
session: &Session,
agent_id: ThreadId,
timeout_ms: i64,
mut status_rx: tokio::sync::watch::Receiver<AgentStatus>,
) -> WaitResult {
// Get last known status.
let mut status = status_rx.borrow_and_update().clone();
let deadline = Instant::now() + Duration::from_millis(timeout_ms as u64);
async fn wait_for_final_status(
session: Arc<Session>,
thread_id: ThreadId,
mut status_rx: Receiver<AgentStatus>,
) -> Option<(ThreadId, AgentStatus)> {
let mut status = status_rx.borrow().clone();
if is_final(&status) {
return Some((thread_id, status));
}
let timed_out = loop {
loop {
if status_rx.changed().await.is_err() {
let latest = session.services.agent_control.get_status(thread_id).await;
return is_final(&latest).then_some((thread_id, latest));
}
status = status_rx.borrow().clone();
if is_final(&status) {
break false;
return Some((thread_id, status));
}
match timeout_at(deadline, status_rx.changed()).await {
Ok(Ok(())) => status = status_rx.borrow().clone(),
Ok(Err(_)) => {
let last_status = session.services.agent_control.get_status(agent_id).await;
if last_status != AgentStatus::NotFound {
// We intentionally keep the last known status if the agent gets dropped.
// This is not expected to happen in practice.
status = last_status;
}
break false;
}
Err(_) => break true,
}
};
WaitResult { status, timed_out }
}
}
}
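The multi-ID `wait` above subscribes to each agent's status channel and returns as soon as the first agent reaches a final status, or reports a timeout. A thread-based sketch of that race (string statuses and an `mpsc` channel stand in for `AgentStatus` watch channels; all names here are illustrative):

```rust
use std::sync::mpsc;
use std::thread;
use std::time::{Duration, Instant};

// Stand-in for crate::agent::status::is_final.
fn is_final(status: &str) -> bool {
    matches!(status, "shutdown" | "errored" | "not_found")
}

fn main() {
    let (tx, rx) = mpsc::channel();
    // Two simulated agents publish status updates; only "b" ever finishes.
    let agents = [
        ("a", vec!["pending_init", "running"]),
        ("b", vec!["running", "shutdown"]),
    ];
    for (id, updates) in agents {
        let tx = tx.clone();
        thread::spawn(move || {
            for status in updates {
                thread::sleep(Duration::from_millis(5));
                tx.send((id, status)).ok();
            }
        });
    }
    drop(tx);

    // Race all updates against a shared deadline; stop at the first final status.
    let deadline = Instant::now() + Duration::from_millis(500);
    let mut first_final = None;
    while let Some(remaining) = deadline.checked_duration_since(Instant::now()) {
        match rx.recv_timeout(remaining) {
            Ok((id, status)) if is_final(status) => {
                first_final = Some((id, status));
                break;
            }
            Ok(_) => continue,          // non-final update; keep waiting
            Err(_) => break,            // timed out or all senders dropped
        }
    }
    assert_eq!(first_final, Some(("b", "shutdown")));
}
```

An empty `first_final` after the loop corresponds to `timed_out: true` in the tool's `WaitResult`.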
@@ -544,7 +601,9 @@ mod tests {
use crate::turn_diff_tracker::TurnDiffTracker;
use codex_protocol::ThreadId;
use pretty_assertions::assert_eq;
use serde::Deserialize;
use serde_json::json;
use std::collections::HashMap;
use std::path::PathBuf;
use std::sync::Arc;
use std::time::Duration;
@@ -717,6 +776,51 @@ mod tests {
);
}
#[tokio::test]
async fn send_input_interrupts_before_prompt() {
let (mut session, turn) = make_session_and_context().await;
let manager = thread_manager();
session.services.agent_control = manager.agent_control();
let config = turn.client.config().as_ref().clone();
let thread = manager.start_thread(config).await.expect("start thread");
let agent_id = thread.thread_id;
let invocation = invocation(
Arc::new(session),
Arc::new(turn),
"send_input",
function_payload(json!({
"id": agent_id.to_string(),
"message": "hi",
"interrupt": true
})),
);
CollabHandler
.handle(invocation)
.await
.expect("send_input should succeed");
let ops = manager.captured_ops();
let ops_for_agent: Vec<&Op> = ops
.iter()
.filter_map(|(id, op)| (*id == agent_id).then_some(op))
.collect();
assert_eq!(ops_for_agent.len(), 2);
assert!(matches!(ops_for_agent[0], Op::Interrupt));
assert!(matches!(ops_for_agent[1], Op::UserInput { .. }));
let _ = thread
.thread
.submit(Op::Shutdown {})
.await
.expect("shutdown should submit");
}
#[derive(Debug, Deserialize, PartialEq, Eq)]
struct WaitResult {
status: HashMap<ThreadId, AgentStatus>,
timed_out: bool,
}
#[tokio::test]
async fn wait_rejects_non_positive_timeout() {
let (session, turn) = make_session_and_context().await;
@@ -724,7 +828,10 @@ mod tests {
Arc::new(session),
Arc::new(turn),
"wait",
function_payload(json!({"id": ThreadId::new().to_string(), "timeout_ms": 0})),
function_payload(json!({
"ids": [ThreadId::new().to_string()],
"timeout_ms": 0
})),
);
let Err(err) = CollabHandler.handle(invocation).await else {
panic!("non-positive timeout should be rejected");
@@ -742,7 +849,7 @@ mod tests {
Arc::new(session),
Arc::new(turn),
"wait",
function_payload(json!({"id": "invalid"})),
function_payload(json!({"ids": ["invalid"]})),
);
let Err(err) = CollabHandler.handle(invocation).await else {
panic!("invalid id should be rejected");
@@ -753,6 +860,65 @@ mod tests {
assert!(msg.starts_with("invalid agent id invalid:"));
}
#[tokio::test]
async fn wait_rejects_empty_ids() {
let (session, turn) = make_session_and_context().await;
let invocation = invocation(
Arc::new(session),
Arc::new(turn),
"wait",
function_payload(json!({"ids": []})),
);
let Err(err) = CollabHandler.handle(invocation).await else {
panic!("empty ids should be rejected");
};
assert_eq!(
err,
FunctionCallError::RespondToModel("ids must be non-empty".to_string())
);
}
#[tokio::test]
async fn wait_returns_not_found_for_missing_agents() {
let (mut session, turn) = make_session_and_context().await;
let manager = thread_manager();
session.services.agent_control = manager.agent_control();
let id_a = ThreadId::new();
let id_b = ThreadId::new();
let invocation = invocation(
Arc::new(session),
Arc::new(turn),
"wait",
function_payload(json!({
"ids": [id_a.to_string(), id_b.to_string()],
"timeout_ms": 1000
})),
);
let output = CollabHandler
.handle(invocation)
.await
.expect("wait should succeed");
let ToolOutput::Function {
content, success, ..
} = output
else {
panic!("expected function output");
};
let result: WaitResult =
serde_json::from_str(&content).expect("wait result should be json");
assert_eq!(
result,
WaitResult {
status: HashMap::from([
(id_a, AgentStatus::NotFound),
(id_b, AgentStatus::NotFound),
]),
timed_out: false
}
);
assert_eq!(success, None);
}
#[tokio::test]
async fn wait_times_out_when_status_is_not_final() {
let (mut session, turn) = make_session_and_context().await;
@@ -765,7 +931,10 @@ mod tests {
Arc::new(session),
Arc::new(turn),
"wait",
function_payload(json!({"id": agent_id.to_string(), "timeout_ms": 10})),
function_payload(json!({
"ids": [agent_id.to_string()],
"timeout_ms": 10
})),
);
let output = CollabHandler
.handle(invocation)
@@ -777,8 +946,16 @@ mod tests {
else {
panic!("expected function output");
};
assert_eq!(content, r#"{"status":"pending_init","timed_out":true}"#);
assert_eq!(success, Some(false));
let result: WaitResult =
serde_json::from_str(&content).expect("wait result should be json");
assert_eq!(
result,
WaitResult {
status: HashMap::new(),
timed_out: true
}
);
assert_eq!(success, None);
let _ = thread
.thread
@@ -814,7 +991,10 @@ mod tests {
Arc::new(session),
Arc::new(turn),
"wait",
function_payload(json!({"id": agent_id.to_string(), "timeout_ms": 1000})),
function_payload(json!({
"ids": [agent_id.to_string()],
"timeout_ms": 1000
})),
);
let output = CollabHandler
.handle(invocation)
@@ -826,8 +1006,16 @@ mod tests {
else {
panic!("expected function output");
};
assert_eq!(content, r#"{"status":"shutdown","timed_out":false}"#);
assert_eq!(success, Some(true));
let result: WaitResult =
serde_json::from_str(&content).expect("wait result should be json");
assert_eq!(
result,
WaitResult {
status: HashMap::from([(agent_id, AgentStatus::Shutdown)]),
timed_out: false
}
);
assert_eq!(success, None);
}
#[tokio::test]

View File

@@ -202,7 +202,7 @@ impl ToolHandler for UnifiedExecHandler {
})
.await
.map_err(|err| {
FunctionCallError::RespondToModel(format!("write_stdin failed: {err:?}"))
FunctionCallError::RespondToModel(format!("write_stdin failed: {err}"))
})?;
let interaction = TerminalInteractionEvent {

View File

@@ -1,3 +1,4 @@
use crate::agent::AgentRole;
use crate::client_common::tools::ResponsesApiTool;
use crate::client_common::tools::ToolSpec;
use crate::features::Feature;
@@ -24,7 +25,7 @@ use std::collections::HashMap;
pub(crate) struct ToolsConfig {
pub shell_type: ConfigShellToolType,
pub apply_patch_tool_type: Option<ApplyPatchToolType>,
pub web_search_mode: WebSearchMode,
pub web_search_mode: Option<WebSearchMode>,
pub collab_tools: bool,
pub experimental_supported_tools: Vec<String>,
}
@@ -32,7 +33,7 @@ pub(crate) struct ToolsConfig {
pub(crate) struct ToolsConfigParams<'a> {
pub(crate) model_info: &'a ModelInfo,
pub(crate) features: &'a Features,
pub(crate) web_search_mode: WebSearchMode,
pub(crate) web_search_mode: Option<WebSearchMode>,
}
impl ToolsConfig {
@@ -441,6 +442,15 @@ fn create_spawn_agent_tool() -> ToolSpec {
description: Some("Initial message to send to the new agent.".to_string()),
},
);
properties.insert(
"agent_type".to_string(),
JsonSchema::String {
description: Some(format!(
"Optional agent type to spawn ({}).",
AgentRole::enum_values().join(", ")
)),
},
);
ToolSpec::Function(ResponsesApiTool {
name: "spawn_agent".to_string(),
@@ -468,6 +478,15 @@ fn create_send_input_tool() -> ToolSpec {
description: Some("Message to send to the agent.".to_string()),
},
);
properties.insert(
"interrupt".to_string(),
JsonSchema::Boolean {
description: Some(
"When true, interrupt the agent's current task before sending the message. When false (default), the message will be processed when the agent is done on its current task."
.to_string(),
),
},
);
ToolSpec::Function(ResponsesApiTool {
name: "send_input".to_string(),
@@ -484,9 +503,10 @@ fn create_send_input_tool() -> ToolSpec {
fn create_wait_tool() -> ToolSpec {
let mut properties = BTreeMap::new();
properties.insert(
"id".to_string(),
JsonSchema::String {
description: Some("Identifier of the agent to wait on.".to_string()),
"ids".to_string(),
JsonSchema::Array {
items: Box::new(JsonSchema::String { description: None }),
description: Some("Identifiers of the agents to wait on.".to_string()),
},
);
properties.insert(
@@ -500,11 +520,13 @@ fn create_wait_tool() -> ToolSpec {
ToolSpec::Function(ResponsesApiTool {
name: "wait".to_string(),
description: "Wait for an agent and return its status.".to_string(),
description:
"Wait for agents and return their statuses. If no agent is done, no status get returned."
.to_string(),
strict: false,
parameters: JsonSchema::Object {
properties,
required: Some(vec!["id".to_string()]),
required: Some(vec!["ids".to_string()]),
additional_properties: Some(false.into()),
},
})
@@ -1225,17 +1247,17 @@ pub(crate) fn build_specs(
}
match config.web_search_mode {
WebSearchMode::Disabled => {}
WebSearchMode::Cached => {
Some(WebSearchMode::Cached) => {
builder.push_spec(ToolSpec::WebSearch {
external_web_access: Some(false),
});
}
WebSearchMode::Live => {
Some(WebSearchMode::Live) => {
builder.push_spec(ToolSpec::WebSearch {
external_web_access: Some(true),
});
}
Some(WebSearchMode::Disabled) | None => {}
}
builder.push_spec_with_parallel_support(create_view_image_tool(), true);
@@ -1379,7 +1401,7 @@ mod tests {
let config = ToolsConfig::new(&ToolsConfigParams {
model_info: &model_info,
features: &features,
web_search_mode: WebSearchMode::Live,
web_search_mode: Some(WebSearchMode::Live),
});
let (tools, _) = build_specs(&config, None).build();
@@ -1441,7 +1463,7 @@ mod tests {
let tools_config = ToolsConfig::new(&ToolsConfigParams {
model_info: &model_info,
features: &features,
web_search_mode: WebSearchMode::Cached,
web_search_mode: Some(WebSearchMode::Cached),
});
let (tools, _) = build_specs(&tools_config, None).build();
assert_contains_tool_names(
@@ -1453,7 +1475,7 @@ mod tests {
fn assert_model_tools(
model_slug: &str,
features: &Features,
web_search_mode: WebSearchMode,
web_search_mode: Option<WebSearchMode>,
expected_tools: &[&str],
) {
let config = test_config();
@@ -1477,7 +1499,7 @@ mod tests {
let tools_config = ToolsConfig::new(&ToolsConfigParams {
model_info: &model_info,
features: &features,
web_search_mode: WebSearchMode::Cached,
web_search_mode: Some(WebSearchMode::Cached),
});
let (tools, _) = build_specs(&tools_config, None).build();
@@ -1499,7 +1521,7 @@ mod tests {
let tools_config = ToolsConfig::new(&ToolsConfigParams {
model_info: &model_info,
features: &features,
web_search_mode: WebSearchMode::Live,
web_search_mode: Some(WebSearchMode::Live),
});
let (tools, _) = build_specs(&tools_config, None).build();
@@ -1517,7 +1539,7 @@ mod tests {
assert_model_tools(
"gpt-5-codex",
&Features::with_defaults(),
WebSearchMode::Cached,
Some(WebSearchMode::Cached),
&[
"shell_command",
"list_mcp_resources",
@@ -1536,7 +1558,7 @@ mod tests {
assert_model_tools(
"gpt-5.1-codex",
&Features::with_defaults(),
WebSearchMode::Cached,
Some(WebSearchMode::Cached),
&[
"shell_command",
"list_mcp_resources",
@@ -1555,7 +1577,7 @@ mod tests {
assert_model_tools(
"gpt-5-codex",
Features::with_defaults().enable(Feature::UnifiedExec),
WebSearchMode::Live,
Some(WebSearchMode::Live),
&[
"exec_command",
"write_stdin",
@@ -1575,7 +1597,7 @@ mod tests {
assert_model_tools(
"gpt-5.1-codex",
Features::with_defaults().enable(Feature::UnifiedExec),
WebSearchMode::Live,
Some(WebSearchMode::Live),
&[
"exec_command",
"write_stdin",
@@ -1595,7 +1617,7 @@ mod tests {
assert_model_tools(
"codex-mini-latest",
&Features::with_defaults(),
WebSearchMode::Cached,
Some(WebSearchMode::Cached),
&[
"local_shell",
"list_mcp_resources",
@@ -1613,7 +1635,7 @@ mod tests {
assert_model_tools(
"gpt-5.1-codex-mini",
&Features::with_defaults(),
WebSearchMode::Cached,
Some(WebSearchMode::Cached),
&[
"shell_command",
"list_mcp_resources",
@@ -1632,7 +1654,7 @@ mod tests {
assert_model_tools(
"gpt-5",
&Features::with_defaults(),
WebSearchMode::Cached,
Some(WebSearchMode::Cached),
&[
"shell",
"list_mcp_resources",
@@ -1650,7 +1672,7 @@ mod tests {
assert_model_tools(
"gpt-5.1",
&Features::with_defaults(),
WebSearchMode::Cached,
Some(WebSearchMode::Cached),
&[
"shell_command",
"list_mcp_resources",
@@ -1669,7 +1691,7 @@ mod tests {
assert_model_tools(
"exp-5.1",
&Features::with_defaults(),
WebSearchMode::Cached,
Some(WebSearchMode::Cached),
&[
"exec_command",
"write_stdin",
@@ -1689,7 +1711,7 @@ mod tests {
assert_model_tools(
"codex-mini-latest",
Features::with_defaults().enable(Feature::UnifiedExec),
WebSearchMode::Live,
Some(WebSearchMode::Live),
&[
"exec_command",
"write_stdin",
@@ -1712,7 +1734,7 @@ mod tests {
let tools_config = ToolsConfig::new(&ToolsConfigParams {
model_info: &model_info,
features: &features,
web_search_mode: WebSearchMode::Live,
web_search_mode: Some(WebSearchMode::Live),
});
let (tools, _) = build_specs(&tools_config, Some(HashMap::new())).build();
@@ -1734,7 +1756,7 @@ mod tests {
let tools_config = ToolsConfig::new(&ToolsConfigParams {
model_info: &model_info,
features: &features,
web_search_mode: WebSearchMode::Cached,
web_search_mode: Some(WebSearchMode::Cached),
});
let (tools, _) = build_specs(&tools_config, None).build();
@@ -1753,7 +1775,7 @@ mod tests {
let tools_config = ToolsConfig::new(&ToolsConfigParams {
model_info: &model_info,
features: &features,
web_search_mode: WebSearchMode::Cached,
web_search_mode: Some(WebSearchMode::Cached),
});
let (tools, _) = build_specs(&tools_config, None).build();
@@ -1784,7 +1806,7 @@ mod tests {
let tools_config = ToolsConfig::new(&ToolsConfigParams {
model_info: &model_info,
features: &features,
web_search_mode: WebSearchMode::Live,
web_search_mode: Some(WebSearchMode::Live),
});
let (tools, _) = build_specs(
&tools_config,
@@ -1879,7 +1901,7 @@ mod tests {
let tools_config = ToolsConfig::new(&ToolsConfigParams {
model_info: &model_info,
features: &features,
web_search_mode: WebSearchMode::Cached,
web_search_mode: Some(WebSearchMode::Cached),
});
// Intentionally construct a map with keys that would sort alphabetically.
@@ -1956,7 +1978,7 @@ mod tests {
let tools_config = ToolsConfig::new(&ToolsConfigParams {
model_info: &model_info,
features: &features,
web_search_mode: WebSearchMode::Cached,
web_search_mode: Some(WebSearchMode::Cached),
});
let (tools, _) = build_specs(
@@ -2013,7 +2035,7 @@ mod tests {
let tools_config = ToolsConfig::new(&ToolsConfigParams {
model_info: &model_info,
features: &features,
web_search_mode: WebSearchMode::Cached,
web_search_mode: Some(WebSearchMode::Cached),
});
let (tools, _) = build_specs(
@@ -2067,7 +2089,7 @@ mod tests {
let tools_config = ToolsConfig::new(&ToolsConfigParams {
model_info: &model_info,
features: &features,
web_search_mode: WebSearchMode::Cached,
web_search_mode: Some(WebSearchMode::Cached),
});
let (tools, _) = build_specs(
@@ -2123,7 +2145,7 @@ mod tests {
let tools_config = ToolsConfig::new(&ToolsConfigParams {
model_info: &model_info,
features: &features,
web_search_mode: WebSearchMode::Cached,
web_search_mode: Some(WebSearchMode::Cached),
});
let (tools, _) = build_specs(
@@ -2235,7 +2257,7 @@ Examples of valid command strings:
let tools_config = ToolsConfig::new(&ToolsConfigParams {
model_info: &model_info,
features: &features,
web_search_mode: WebSearchMode::Cached,
web_search_mode: Some(WebSearchMode::Cached),
});
let (tools, _) = build_specs(
&tools_config,

View File

@@ -10,6 +10,10 @@ pub(crate) enum UnifiedExecError {
UnknownProcessId { process_id: String },
#[error("failed to write to stdin")]
WriteToStdin,
#[error(
"stdin is closed for this session; rerun exec_command with tty=true to keep stdin open"
)]
StdinClosed,
#[error("missing command line for unified exec request")]
MissingCommandLine,
#[error("Command denied by sandbox: {message}")]

View File

@@ -136,6 +136,7 @@ struct ProcessEntry {
call_id: String,
process_id: String,
command: Vec<String>,
tty: bool,
last_used: tokio::time::Instant,
}

View File

@@ -47,7 +47,7 @@ use crate::unified_exec::process::OutputHandles;
use crate::unified_exec::process::UnifiedExecProcess;
use crate::unified_exec::resolve_max_tokens;
const UNIFIED_EXEC_ENV: [(&str, &str); 9] = [
const UNIFIED_EXEC_ENV: [(&str, &str); 10] = [
("NO_COLOR", "1"),
("TERM", "dumb"),
("LANG", "C.UTF-8"),
@@ -57,6 +57,7 @@ const UNIFIED_EXEC_ENV: [(&str, &str); 9] = [
("PAGER", "cat"),
("GIT_PAGER", "cat"),
("GH_PAGER", "cat"),
("CODEX_CI", "1"),
];
fn apply_unified_exec_env(mut env: HashMap<String, String>) -> HashMap<String, String> {
@@ -73,6 +74,7 @@ struct PreparedProcessHandles {
cancellation_token: CancellationToken,
command: Vec<String>,
process_id: String,
tty: bool,
}
impl UnifiedExecProcessManager {
@@ -217,6 +219,7 @@ impl UnifiedExecProcessManager {
cwd.clone(),
start,
process_id,
request.tty,
Arc::clone(&transcript),
)
.await;
@@ -255,10 +258,14 @@ impl UnifiedExecProcessManager {
cancellation_token,
command: session_command,
process_id,
tty,
..
} = self.prepare_process_handles(process_id.as_str()).await?;
if !request.input.is_empty() {
if !tty {
return Err(UnifiedExecError::StdinClosed);
}
Self::send_input(&writer_tx, request.input.as_bytes()).await?;
// Give the remote process a brief window to react so that we are
// more likely to capture its output in the poll below.
@@ -379,6 +386,7 @@ impl UnifiedExecProcessManager {
cancellation_token,
command: entry.command.clone(),
process_id: entry.process_id.clone(),
tty: entry.tty,
})
}
@@ -401,6 +409,7 @@ impl UnifiedExecProcessManager {
cwd: PathBuf,
started_at: Instant,
process_id: String,
tty: bool,
transcript: Arc<tokio::sync::Mutex<HeadTailBuffer>>,
) {
let entry = ProcessEntry {
@@ -408,6 +417,7 @@ impl UnifiedExecProcessManager {
call_id: context.call_id.clone(),
process_id: process_id.clone(),
command: command.to_vec(),
tty,
last_used: started_at,
};
let number_processes = {
@@ -460,7 +470,7 @@ impl UnifiedExecProcessManager {
)
.await
} else {
codex_utils_pty::pipe::spawn_process(
codex_utils_pty::pipe::spawn_process_no_stdin(
program,
args,
env.cwd.as_path(),
@@ -688,6 +698,7 @@ mod tests {
("PAGER".to_string(), "cat".to_string()),
("GIT_PAGER".to_string(), "cat".to_string()),
("GH_PAGER".to_string(), "cat".to_string()),
("CODEX_CI".to_string(), "1".to_string()),
]);
assert_eq!(env, expected);

View File

@@ -0,0 +1,71 @@
You are Codex Orchestrator, based on GPT-5. You are running as an orchestration agent in the Codex CLI on a user's computer.
## Role
* You are the interface between the user and the workers.
* Your job is to understand the task, decompose it, and delegate well-scoped work to workers.
* You coordinate execution, monitor progress, resolve conflicts, and integrate results into a single coherent outcome.
* You may perform lightweight actions (e.g. reading files, basic commands) to understand the task, but all substantive work must be delegated to workers.
* **Your job is not finished until the entire task is fully completed and verified.**
* While the task is incomplete, you must keep monitoring and coordinating workers. You must not return early.
## Core invariants
* **Never stop monitoring workers.**
* **Do not rush workers. Be patient.**
* The orchestrator must not return unless the task is fully accomplished.
## Worker execution semantics
* While a worker is running, you cannot observe intermediate state.
* Messages sent with `send_input` are queued and processed only after the worker finishes, unless interrupted.
* Therefore:
* Do not send messages to “check status” or “ask for progress” unless asked to do so.
* Monitoring happens exclusively via `wait`.
* Sending a message is a commitment for the *next* phase of work.
## Interrupt semantics
* If a worker is taking longer than expected but is still working, do nothing and keep waiting unless asked to intervene.
* Only intervene if you must change, stop, or redirect the *current* work.
* To stop a worker's current task, you **must** use `send_input(interrupt=true)`.
* Use `interrupt=true` sparingly and deliberately.
## Multi-agent workflow
1. Understand the request and determine the optimal set of workers. If the task can be divided into sub-tasks, spawn one worker per sub-task and make them work together.
2. Spawn worker(s) with precise goals, constraints, and expected deliverables.
3. Monitor workers using `wait`.
4. When a worker finishes:
* verify correctness,
* check integration with other work,
* assess whether the global task is closer to completion.
5. If issues remain, assign fixes to the appropriate worker(s) and repeat steps 3–5.
6. Close agents only when no further work is required from them.
7. Return to the user only when the task is fully completed and verified.
## Collaboration rules
* Workers operate in a shared environment. You must tell them this.
* Workers must not revert, overwrite, or conflict with others' work.
* By default, workers must not spawn sub-agents unless explicitly allowed.
* When multiple workers are active, pass multiple IDs to `wait` with a long timeout (e.g. 5 minutes), so you react to the first completion and keep the workflow event-driven.
## Collab tools
* `spawn_agent`: create a worker with an initial prompt (`agent_type` required).
* `send_input`: send follow-ups or fixes (queued unless interrupted).
* `send_input(interrupt=true)`: stop current work and redirect immediately.
* `wait`: wait for one or more workers; returns when at least one finishes.
* `close_agent`: close a worker when fully done.
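The `wait` semantics above can be sketched with stand-in types. Everything here is illustrative — the real collab tools are model-invoked, not a Rust API — but the `AgentStatus` names and the "only finished agents appear in the result" behavior follow the tests earlier in this diff:

```rust
use std::collections::HashMap;

// Stand-in status enum; variant names follow the tests in this diff.
#[derive(Debug, PartialEq, Eq, Clone)]
enum AgentStatus {
    Running,
    Shutdown,
    NotFound,
}

// Stand-in for a `wait` result: only agents with a final status are
// included, and an empty status map means the wait timed out.
struct WaitResult {
    status: HashMap<String, AgentStatus>,
    timed_out: bool,
}

fn wait(agents: &HashMap<String, AgentStatus>, ids: &[&str]) -> WaitResult {
    let status: HashMap<String, AgentStatus> = ids
        .iter()
        .filter_map(|id| {
            // Unknown ids resolve to NotFound, which counts as final.
            let s = agents.get(*id).cloned().unwrap_or(AgentStatus::NotFound);
            (s != AgentStatus::Running).then(|| ((*id).to_string(), s))
        })
        .collect();
    WaitResult {
        timed_out: status.is_empty(),
        status,
    }
}

fn main() {
    let mut agents = HashMap::new();
    agents.insert("worker-a".to_string(), AgentStatus::Running);
    agents.insert("worker-b".to_string(), AgentStatus::Shutdown);
    // Waiting on both returns only the finished worker.
    let result = wait(&agents, &["worker-a", "worker-b"]);
    assert_eq!(result.status.len(), 1);
    assert_eq!(result.status["worker-b"], AgentStatus::Shutdown);
    assert!(!result.timed_out);
}
```

This mirrors the event-driven loop described above: the orchestrator waits on all active worker IDs at once and reacts as soon as any one of them reaches a final status.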
## Final response
* Keep responses concise, factual, and in plain text.
* Summarize:
* what was delegated,
* key outcomes,
* verification performed,
* and any remaining risks.
* If verification failed, state issues clearly and describe what was reassigned.
* Do not dump large files inline; reference paths using backticks.

View File

@@ -270,6 +270,7 @@ impl TestCodex {
.submit(Op::UserTurn {
items: vec![UserInput::Text {
text: prompt.into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
cwd: self.cwd.path().to_path_buf(),

View File

@@ -9,15 +9,18 @@ use codex_core::ModelProviderInfo;
use codex_core::Prompt;
use codex_core::ResponseEvent;
use codex_core::ResponseItem;
use codex_core::WEB_SEARCH_ELIGIBLE_HEADER;
use codex_core::WireApi;
use codex_core::models_manager::manager::ModelsManager;
use codex_otel::OtelManager;
use codex_protocol::ThreadId;
use codex_protocol::config_types::ReasoningSummary;
use codex_protocol::config_types::WebSearchMode;
use codex_protocol::protocol::SessionSource;
use codex_protocol::protocol::SubAgentSource;
use core_test_support::load_default_config_for_test;
use core_test_support::responses;
use core_test_support::test_codex::test_codex;
use futures::StreamExt;
use tempfile::TempDir;
use wiremock::matchers::header;
@@ -213,6 +216,66 @@ async fn responses_stream_includes_subagent_header_on_other() {
);
}
#[tokio::test]
async fn responses_stream_includes_web_search_eligible_header_true_by_default() {
core_test_support::skip_if_no_network!();
let server = responses::start_mock_server().await;
let response_body = responses::sse(vec![
responses::ev_response_created("resp-1"),
responses::ev_completed("resp-1"),
]);
let request_recorder = responses::mount_sse_once_match(
&server,
header(WEB_SEARCH_ELIGIBLE_HEADER, "true"),
response_body,
)
.await;
let test = test_codex().build(&server).await.expect("build test codex");
test.submit_turn("hello").await.expect("submit test prompt");
let request = request_recorder.single_request();
assert_eq!(
request.header(WEB_SEARCH_ELIGIBLE_HEADER).as_deref(),
Some("true")
);
}
#[tokio::test]
async fn responses_stream_includes_web_search_eligible_header_false_when_disabled() {
core_test_support::skip_if_no_network!();
let server = responses::start_mock_server().await;
let response_body = responses::sse(vec![
responses::ev_response_created("resp-1"),
responses::ev_completed("resp-1"),
]);
let request_recorder = responses::mount_sse_once_match(
&server,
header(WEB_SEARCH_ELIGIBLE_HEADER, "false"),
response_body,
)
.await;
let test = test_codex()
.with_config(|config| {
config.web_search_mode = Some(WebSearchMode::Disabled);
})
.build(&server)
.await
.expect("build test codex");
test.submit_turn("hello").await.expect("submit test prompt");
let request = request_recorder.single_request();
assert_eq!(
request.header(WEB_SEARCH_ELIGIBLE_HEADER).as_deref(),
Some("false")
);
}
#[tokio::test]
async fn responses_respects_model_info_overrides_from_config() {
core_test_support::skip_if_no_network!();

View File

@@ -48,6 +48,7 @@ async fn interrupt_long_running_tool_emits_turn_aborted() {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "start sleep".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -101,6 +102,7 @@ async fn interrupt_tool_records_history_entries() {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "start history recording".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -118,6 +120,7 @@ async fn interrupt_tool_records_history_entries() {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "follow up".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})

View File

@@ -302,6 +302,7 @@ async fn apply_patch_cli_move_without_content_change_has_no_turn_diff(
.submit(Op::UserTurn {
items: vec![UserInput::Text {
text: "rename without content change".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
cwd: cwd.path().to_path_buf(),
@@ -888,6 +889,7 @@ async fn apply_patch_shell_command_heredoc_with_cd_emits_turn_diff() -> Result<(
.submit(Op::UserTurn {
items: vec![UserInput::Text {
text: "apply via shell heredoc with cd".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
cwd: cwd.path().to_path_buf(),
@@ -965,6 +967,7 @@ async fn apply_patch_shell_command_failure_propagates_error_and_skips_diff() ->
.submit(Op::UserTurn {
items: vec![UserInput::Text {
text: "apply patch via shell".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
cwd: cwd.path().to_path_buf(),
@@ -1112,6 +1115,7 @@ async fn apply_patch_emits_turn_diff_event_with_unified_diff(
.submit(Op::UserTurn {
items: vec![UserInput::Text {
text: "emit diff".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
cwd: cwd.path().to_path_buf(),
@@ -1172,6 +1176,7 @@ async fn apply_patch_turn_diff_for_rename_with_content_change(
.submit(Op::UserTurn {
items: vec![UserInput::Text {
text: "rename with change".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
cwd: cwd.path().to_path_buf(),
@@ -1240,6 +1245,7 @@ async fn apply_patch_aggregates_diff_across_multiple_tool_calls() -> Result<()>
.submit(Op::UserTurn {
items: vec![UserInput::Text {
text: "aggregate diffs".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
cwd: cwd.path().to_path_buf(),
@@ -1308,6 +1314,7 @@ async fn apply_patch_aggregates_diff_preserves_success_after_failure() -> Result
.submit(Op::UserTurn {
items: vec![UserInput::Text {
text: "apply patch twice with failure".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
cwd: cwd.path().to_path_buf(),

View File

@@ -492,6 +492,7 @@ async fn submit_turn(
.submit(Op::UserTurn {
items: vec![UserInput::Text {
text: prompt.into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
cwd: test.cwd.path().to_path_buf(),

View File

@@ -14,6 +14,7 @@ use codex_core::ThreadManager;
use codex_core::WireApi;
use codex_core::auth::AuthCredentialsStoreMode;
use codex_core::built_in_model_providers;
use codex_core::default_client::originator;
use codex_core::error::CodexErr;
use codex_core::models_manager::manager::ModelsManager;
use codex_core::protocol::EventMsg;
@@ -289,6 +290,7 @@ async fn resume_includes_initial_messages_and_sends_prior_items() {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "hello".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -387,6 +389,7 @@ async fn includes_conversation_id_and_model_headers_in_request() {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "hello".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -404,7 +407,7 @@ async fn includes_conversation_id_and_model_headers_in_request() {
let request_originator = request.header("originator").expect("originator header");
assert_eq!(request_session_id, session_id.to_string());
assert_eq!(request_originator, "codex_cli_rs");
assert_eq!(request_originator, originator().value);
assert_eq!(request_authorization, "Bearer Test API Key");
}
@@ -440,6 +443,7 @@ async fn includes_base_instructions_override_in_request() {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "hello".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -496,6 +500,7 @@ async fn chatgpt_auth_sends_correct_request() {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "hello".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -518,7 +523,7 @@ async fn chatgpt_auth_sends_correct_request() {
let session_id = request.header("session_id").expect("session_id header");
assert_eq!(session_id, thread_id.to_string());
assert_eq!(request_originator, "codex_cli_rs");
assert_eq!(request_originator, originator().value);
assert_eq!(request_authorization, "Bearer Access Token");
assert_eq!(request_chatgpt_account_id, "account_id");
assert!(request_body["stream"].as_bool().unwrap());
@@ -588,6 +593,7 @@ async fn prefers_apikey_when_config_prefers_apikey_even_with_chatgpt_tokens() {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "hello".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -629,6 +635,7 @@ async fn includes_user_instructions_message_in_request() {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "hello".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -708,6 +715,7 @@ async fn skills_append_to_instructions() {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "hello".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -760,6 +768,7 @@ async fn includes_configured_effort_in_request() -> anyhow::Result<()> {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "hello".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -797,6 +806,7 @@ async fn includes_no_effort_in_request() -> anyhow::Result<()> {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "hello".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -832,6 +842,7 @@ async fn includes_default_reasoning_effort_in_request_when_defined_by_model_info
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "hello".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -871,6 +882,7 @@ async fn configured_reasoning_summary_is_sent() -> anyhow::Result<()> {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "hello".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -910,6 +922,7 @@ async fn reasoning_summary_is_omitted_when_disabled() -> anyhow::Result<()> {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "hello".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -943,6 +956,7 @@ async fn includes_default_verbosity_in_request() -> anyhow::Result<()> {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "hello".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -983,6 +997,7 @@ async fn configured_verbosity_not_sent_for_models_without_support() -> anyhow::R
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "hello".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -1022,6 +1037,7 @@ async fn configured_verbosity_is_sent() -> anyhow::Result<()> {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "hello".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -1077,6 +1093,7 @@ async fn includes_developer_instructions_message_in_request() {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "hello".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -1326,6 +1343,7 @@ async fn token_count_includes_rate_limits_snapshot() {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "hello".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -1484,6 +1502,7 @@ async fn usage_limit_error_emits_rate_limit_event() -> anyhow::Result<()> {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "hello".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -1554,6 +1573,7 @@ async fn context_window_error_sets_total_tokens_to_model_window() -> anyhow::Res
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "seed turn".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -1565,6 +1585,7 @@ async fn context_window_error_sets_total_tokens_to_model_window() -> anyhow::Res
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "trigger context window".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -1685,6 +1706,7 @@ async fn azure_overrides_assign_properties_used_for_responses_url() {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "hello".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -1768,6 +1790,7 @@ async fn env_var_overrides_loaded_auth() {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "hello".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -1840,7 +1863,10 @@ async fn history_dedupes_streamed_and_final_messages_across_turns() {
// Turn 1: user sends U1; wait for completion.
codex
.submit(Op::UserInput {
items: vec![UserInput::Text { text: "U1".into() }],
items: vec![UserInput::Text {
text: "U1".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
.await
@@ -1850,7 +1876,10 @@ async fn history_dedupes_streamed_and_final_messages_across_turns() {
// Turn 2: user sends U2; wait for completion.
codex
.submit(Op::UserInput {
items: vec![UserInput::Text { text: "U2".into() }],
items: vec![UserInput::Text {
text: "U2".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
.await
@@ -1860,7 +1889,10 @@ async fn history_dedupes_streamed_and_final_messages_across_turns() {
// Turn 3: user sends U3; wait for completion.
codex
.submit(Op::UserInput {
items: vec![UserInput::Text { text: "U3".into() }],
items: vec![UserInput::Text {
text: "U3".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
.await

View File

@@ -159,6 +159,7 @@ async fn summarize_context_three_requests_and_instructions() {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "hello world".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -180,6 +181,7 @@ async fn summarize_context_three_requests_and_instructions() {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: THIRD_USER_MSG.into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -573,6 +575,7 @@ async fn multiple_auto_compact_per_task_runs_after_token_limit_hit() {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: user_message.into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -1049,6 +1052,7 @@ async fn auto_compact_runs_after_token_limit_hit() {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: FIRST_AUTO_MSG.into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -1061,6 +1065,7 @@ async fn auto_compact_runs_after_token_limit_hit() {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: SECOND_AUTO_MSG.into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -1073,6 +1078,7 @@ async fn auto_compact_runs_after_token_limit_hit() {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: POST_AUTO_USER_MSG.into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -1273,6 +1279,7 @@ async fn auto_compact_runs_after_resume_when_token_usage_is_over_limit() {
.submit(Op::UserTurn {
items: vec![UserInput::Text {
text: follow_up_user.into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
cwd: resumed.cwd.path().to_path_buf(),
@@ -1382,6 +1389,7 @@ async fn auto_compact_persists_rollout_entries() {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: FIRST_AUTO_MSG.into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -1393,6 +1401,7 @@ async fn auto_compact_persists_rollout_entries() {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: SECOND_AUTO_MSG.into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -1404,6 +1413,7 @@ async fn auto_compact_persists_rollout_entries() {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: POST_AUTO_USER_MSG.into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -1496,6 +1506,7 @@ async fn manual_compact_retries_after_context_window_error() {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "first turn".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -1629,6 +1640,7 @@ async fn manual_compact_twice_preserves_latest_user_messages() {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: first_user_message.into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -1643,6 +1655,7 @@ async fn manual_compact_twice_preserves_latest_user_messages() {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: second_user_message.into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -1657,6 +1670,7 @@ async fn manual_compact_twice_preserves_latest_user_messages() {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: final_user_message.into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -1835,7 +1849,10 @@ async fn auto_compact_allows_multiple_attempts_when_interleaved_with_other_turn_
for user in [MULTI_AUTO_MSG, follow_up_user, final_user] {
codex
.submit(Op::UserInput {
items: vec![UserInput::Text { text: user.into() }],
items: vec![UserInput::Text {
text: user.into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
.await
@@ -1948,6 +1965,7 @@ async fn auto_compact_triggers_after_function_call_over_95_percent_usage() {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: FUNCTION_CALL_LIMIT_MSG.into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -1960,6 +1978,7 @@ async fn auto_compact_triggers_after_function_call_over_95_percent_usage() {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: follow_up_user.into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -2075,7 +2094,10 @@ async fn auto_compact_counts_encrypted_reasoning_before_last_user() {
{
codex
.submit(Op::UserInput {
items: vec![UserInput::Text { text: user.into() }],
items: vec![UserInput::Text {
text: user.into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
.await


@@ -73,6 +73,7 @@ async fn remote_compact_replaces_history_for_followups() -> Result<()> {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "hello remote compact".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -86,6 +87,7 @@ async fn remote_compact_replaces_history_for_followups() -> Result<()> {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "after compact".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -192,6 +194,7 @@ async fn remote_compact_runs_automatically() -> Result<()> {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "hello remote compact".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -265,6 +268,7 @@ async fn remote_compact_persists_replacement_history_in_rollout() -> Result<()>
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "needs compaction".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})


@@ -983,7 +983,10 @@ async fn start_test_conversation(
async fn user_turn(conversation: &Arc<CodexThread>, text: &str) {
conversation
.submit(Op::UserInput {
items: vec![UserInput::Text { text: text.into() }],
items: vec![UserInput::Text {
text: text.into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
.await


@@ -72,6 +72,7 @@ async fn execpolicy_blocks_shell_invocation() -> Result<()> {
.submit(Op::UserTurn {
items: vec![UserInput::Text {
text: "run shell command".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
cwd: test.cwd_path().to_path_buf(),


@@ -70,6 +70,7 @@ async fn fork_thread_twice_drops_to_first_message() {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: text.to_string(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})


@@ -117,6 +117,7 @@ async fn copy_paste_local_image_persists_rollout_request_shape() -> anyhow::Resu
},
UserInput::Text {
text: "pasted image".to_string(),
text_elements: Vec::new(),
},
],
final_output_json_schema: None,
@@ -194,6 +195,7 @@ async fn drag_drop_image_persists_rollout_request_shape() -> anyhow::Result<()>
},
UserInput::Text {
text: "dropped image".to_string(),
text_elements: Vec::new(),
},
],
final_output_json_schema: None,


@@ -6,6 +6,8 @@ use codex_core::protocol::ItemCompletedEvent;
use codex_core::protocol::ItemStartedEvent;
use codex_core::protocol::Op;
use codex_protocol::items::TurnItem;
use codex_protocol::user_input::ByteRange;
use codex_protocol::user_input::TextElement;
use codex_protocol::user_input::UserInput;
use core_test_support::responses::ev_assistant_message;
use core_test_support::responses::ev_completed;
@@ -38,11 +40,18 @@ async fn user_message_item_is_emitted() -> anyhow::Result<()> {
let first_response = sse(vec![ev_response_created("resp-1"), ev_completed("resp-1")]);
mount_sse_once(&server, first_response).await;
let text_elements = vec![TextElement {
byte_range: ByteRange { start: 0, end: 6 },
placeholder: Some("<file>".into()),
}];
let expected_input = UserInput::Text {
text: "please inspect sample.txt".into(),
text_elements: text_elements.clone(),
};
codex
.submit(Op::UserInput {
items: (vec![UserInput::Text {
text: "please inspect sample.txt".into(),
}]),
items: vec![expected_input.clone()],
final_output_json_schema: None,
})
.await?;
@@ -65,18 +74,16 @@ async fn user_message_item_is_emitted() -> anyhow::Result<()> {
.await;
assert_eq!(started_item.id, completed_item.id);
assert_eq!(
started_item.content,
vec![UserInput::Text {
text: "please inspect sample.txt".into(),
}]
);
assert_eq!(
completed_item.content,
vec![UserInput::Text {
text: "please inspect sample.txt".into(),
}]
);
assert_eq!(started_item.content, vec![expected_input.clone()]);
assert_eq!(completed_item.content, vec![expected_input]);
let legacy_message = wait_for_event_match(&codex, |ev| match ev {
EventMsg::UserMessage(event) => Some(event.clone()),
_ => None,
})
.await;
assert_eq!(legacy_message.message, "please inspect sample.txt");
assert_eq!(legacy_message.text_elements, text_elements);
Ok(())
}
@@ -99,6 +106,7 @@ async fn assistant_message_item_is_emitted() -> anyhow::Result<()> {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "please summarize results".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -156,6 +164,7 @@ async fn reasoning_item_is_emitted() -> anyhow::Result<()> {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "explain your reasoning".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -215,6 +224,7 @@ async fn web_search_item_is_emitted() -> anyhow::Result<()> {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "find the weather".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -268,6 +278,7 @@ async fn agent_message_content_delta_has_item_metadata() -> anyhow::Result<()> {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "please stream text".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -334,6 +345,7 @@ async fn reasoning_content_delta_has_item_metadata() -> anyhow::Result<()> {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "reason through it".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -392,6 +404,7 @@ async fn reasoning_raw_content_delta_respects_flag() -> anyhow::Result<()> {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "show raw reasoning".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})


@@ -76,6 +76,7 @@ async fn codex_returns_json_result(model: String) -> anyhow::Result<()> {
.submit(Op::UserTurn {
items: vec![UserInput::Text {
text: "hello world".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: Some(serde_json::from_str(SCHEMA)?),
cwd: cwd.path().to_path_buf(),


@@ -36,7 +36,7 @@ async fn collect_tool_identifiers_for_model(model: &str) -> Vec<String> {
let mut builder = test_codex()
.with_model(model)
// Keep tool expectations stable when the default web_search mode changes.
.with_config(|config| config.web_search_mode = WebSearchMode::Cached);
.with_config(|config| config.web_search_mode = Some(WebSearchMode::Cached));
let test = builder
.build(&server)
.await


@@ -87,7 +87,10 @@ async fn renews_cache_ttl_on_matching_models_etag() -> Result<()> {
codex
.submit(Op::UserTurn {
items: vec![UserInput::Text { text: "hi".into() }],
items: vec![UserInput::Text {
text: "hi".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
cwd: test.cwd_path().to_path_buf(),
approval_policy: codex_core::protocol::AskForApproval::Never,


@@ -100,6 +100,7 @@ async fn refresh_models_on_models_etag_mismatch_and_avoid_duplicate_models_fetch
.submit(Op::UserTurn {
items: vec![UserInput::Text {
text: "please run a tool".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
cwd: cwd.path().to_path_buf(),


@@ -45,6 +45,7 @@ async fn responses_api_emits_api_request_event() {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "hello".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -87,6 +88,7 @@ async fn process_sse_emits_tracing_for_output_item() {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "hello".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -126,6 +128,7 @@ async fn process_sse_emits_failed_event_on_parse_error() {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "hello".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -166,6 +169,7 @@ async fn process_sse_records_failed_event_when_stream_closes_without_completed()
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "hello".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -226,6 +230,7 @@ async fn process_sse_failed_event_records_response_error_message() {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "hello".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -284,6 +289,7 @@ async fn process_sse_failed_event_logs_parse_error() {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "hello".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -329,6 +335,7 @@ async fn process_sse_failed_event_logs_missing_error() {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "hello".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -383,6 +390,7 @@ async fn process_sse_failed_event_logs_response_completed_parse_error() {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "hello".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -434,6 +442,7 @@ async fn process_sse_emits_completed_telemetry() {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "hello".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -502,6 +511,7 @@ async fn handle_responses_span_records_response_kind_and_tool_name() {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "hello".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -567,6 +577,7 @@ async fn record_responses_sets_span_fields_for_response_events() {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "hello".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -647,6 +658,7 @@ async fn handle_response_item_records_tool_result_for_custom_tool_call() {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "hello".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -715,6 +727,7 @@ async fn handle_response_item_records_tool_result_for_function_call() {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "hello".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -793,6 +806,7 @@ async fn handle_response_item_records_tool_result_for_local_shell_missing_ids()
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "hello".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -855,6 +869,7 @@ async fn handle_response_item_records_tool_result_for_local_shell_call() {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "hello".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -960,6 +975,7 @@ async fn handle_container_exec_autoapprove_from_config_records_tool_decision() {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "hello".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -1009,6 +1025,7 @@ async fn handle_container_exec_user_approved_records_tool_decision() {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "approved".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -1068,6 +1085,7 @@ async fn handle_container_exec_user_approved_for_session_records_tool_decision()
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "persist".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -1127,6 +1145,7 @@ async fn handle_sandbox_error_user_approves_retry_records_tool_decision() {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "retry".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -1186,6 +1205,7 @@ async fn handle_container_exec_user_denies_records_tool_decision() {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "deny".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -1245,6 +1265,7 @@ async fn handle_sandbox_error_user_approves_for_session_records_tool_decision()
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "persist".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -1305,6 +1326,7 @@ async fn handle_sandbox_error_user_denies_records_tool_decision() {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "deny".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})


@@ -100,6 +100,7 @@ async fn injected_user_input_triggers_follow_up_request_with_deltas() {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "first prompt".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -115,6 +116,7 @@ async fn injected_user_input_triggers_follow_up_request_with_deltas() {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "second prompt".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})


@@ -61,6 +61,7 @@ async fn permissions_message_sent_once_on_start() -> Result<()> {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "hello".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -93,6 +94,7 @@ async fn permissions_message_added_on_override_change() -> Result<()> {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "hello 1".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -114,6 +116,7 @@ async fn permissions_message_added_on_override_change() -> Result<()> {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "hello 2".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -152,6 +155,7 @@ async fn permissions_message_not_added_when_no_change() -> Result<()> {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "hello 1".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -162,6 +166,7 @@ async fn permissions_message_not_added_when_no_change() -> Result<()> {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "hello 2".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -203,6 +208,7 @@ async fn resume_replays_permissions_messages() -> Result<()> {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "hello 1".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -226,6 +232,7 @@ async fn resume_replays_permissions_messages() -> Result<()> {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "hello 2".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -238,6 +245,7 @@ async fn resume_replays_permissions_messages() -> Result<()> {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "after resume".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -276,6 +284,7 @@ async fn resume_and_fork_append_permissions_messages() -> Result<()> {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "hello 1".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -299,6 +308,7 @@ async fn resume_and_fork_append_permissions_messages() -> Result<()> {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "hello 2".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -319,6 +329,7 @@ async fn resume_and_fork_append_permissions_messages() -> Result<()> {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "after resume".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -346,6 +357,7 @@ async fn resume_and_fork_append_permissions_messages() -> Result<()> {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "after fork".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -393,6 +405,7 @@ async fn permissions_message_includes_writable_roots() -> Result<()> {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "hello".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})


@@ -88,7 +88,7 @@ async fn prompt_tools_are_consistent_across_requests() -> anyhow::Result<()> {
config.user_instructions = Some("be consistent and helpful".to_string());
config.model = Some("gpt-5.1-codex-max".to_string());
// Keep tool expectations stable when the default web_search mode changes.
config.web_search_mode = WebSearchMode::Cached;
config.web_search_mode = Some(WebSearchMode::Cached);
})
.build(&server)
.await?;
@@ -108,6 +108,7 @@ async fn prompt_tools_are_consistent_across_requests() -> anyhow::Result<()> {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "hello 1".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -118,6 +119,7 @@ async fn prompt_tools_are_consistent_across_requests() -> anyhow::Result<()> {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "hello 2".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -180,6 +182,7 @@ async fn codex_mini_latest_tools() -> anyhow::Result<()> {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "hello 1".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -190,6 +193,7 @@ async fn codex_mini_latest_tools() -> anyhow::Result<()> {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "hello 2".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -241,6 +245,7 @@ async fn prefixes_context_and_instructions_once_and_consistently_across_requests
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "hello 1".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -251,6 +256,7 @@ async fn prefixes_context_and_instructions_once_and_consistently_across_requests
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "hello 2".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -316,6 +322,7 @@ async fn overrides_turn_context_but_keeps_cached_prefix_and_key_constant() -> an
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "hello 1".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -345,6 +352,7 @@ async fn overrides_turn_context_but_keeps_cached_prefix_and_key_constant() -> an
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "hello 2".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -406,6 +414,7 @@ async fn override_before_first_turn_emits_environment_context() -> anyhow::Resul
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "first message".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -517,6 +526,7 @@ async fn per_turn_overrides_keep_cached_prefix_and_key_constant() -> anyhow::Res
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "hello 1".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -536,6 +546,7 @@ async fn per_turn_overrides_keep_cached_prefix_and_key_constant() -> anyhow::Res
.submit(Op::UserTurn {
items: vec![UserInput::Text {
text: "hello 2".into(),
text_elements: Vec::new(),
}],
cwd: new_cwd.path().to_path_buf(),
approval_policy: AskForApproval::Never,
@@ -627,6 +638,7 @@ async fn send_user_turn_with_no_changes_does_not_send_environment_context() -> a
.submit(Op::UserTurn {
items: vec![UserInput::Text {
text: "hello 1".into(),
text_elements: Vec::new(),
}],
cwd: default_cwd.clone(),
approval_policy: default_approval_policy,
@@ -643,6 +655,7 @@ async fn send_user_turn_with_no_changes_does_not_send_environment_context() -> a
.submit(Op::UserTurn {
items: vec![UserInput::Text {
text: "hello 2".into(),
text_elements: Vec::new(),
}],
cwd: default_cwd.clone(),
approval_policy: default_approval_policy,
@@ -720,6 +733,7 @@ async fn send_user_turn_with_changes_sends_environment_context() -> anyhow::Resu
.submit(Op::UserTurn {
items: vec![UserInput::Text {
text: "hello 1".into(),
text_elements: Vec::new(),
}],
cwd: default_cwd.clone(),
approval_policy: default_approval_policy,
@@ -736,6 +750,7 @@ async fn send_user_turn_with_changes_sends_environment_context() -> anyhow::Resu
.submit(Op::UserTurn {
items: vec![UserInput::Text {
text: "hello 2".into(),
text_elements: Vec::new(),
}],
cwd: default_cwd.clone(),
approval_policy: AskForApproval::Never,


@@ -43,6 +43,7 @@ async fn quota_exceeded_emits_single_error_event() -> Result<()> {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "quota?".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})


@@ -165,6 +165,7 @@ async fn remote_models_remote_model_uses_unified_exec() -> Result<()> {
.submit(Op::UserTurn {
items: vec![UserInput::Text {
text: "run call".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
cwd: cwd.path().to_path_buf(),
@@ -369,6 +370,7 @@ async fn remote_models_apply_remote_base_instructions() -> Result<()> {
.submit(Op::UserTurn {
items: vec![UserInput::Text {
text: "hello remote".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
cwd: cwd.path().to_path_buf(),


@@ -39,6 +39,7 @@ async fn request_body_is_zstd_compressed_for_codex_backend_when_enabled() -> any
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "compress me".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -82,6 +83,7 @@ async fn request_body_is_not_compressed_for_api_key_auth_even_when_enabled() ->
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "do not compress".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})


@@ -1,6 +1,8 @@
use anyhow::Result;
use codex_core::protocol::EventMsg;
use codex_core::protocol::Op;
use codex_protocol::user_input::ByteRange;
use codex_protocol::user_input::TextElement;
use codex_protocol::user_input::UserInput;
use core_test_support::responses::ev_assistant_message;
use core_test_support::responses::ev_completed;
@@ -12,6 +14,7 @@ use core_test_support::responses::start_mock_server;
use core_test_support::skip_if_no_network;
use core_test_support::test_codex::test_codex;
use core_test_support::wait_for_event;
use pretty_assertions::assert_eq;
use std::sync::Arc;
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
@@ -32,10 +35,16 @@ async fn resume_includes_initial_messages_from_rollout_events() -> Result<()> {
]);
mount_sse_once(&server, initial_sse).await;
let text_elements = vec![TextElement {
byte_range: ByteRange { start: 0, end: 6 },
placeholder: Some("<note>".into()),
}];
codex
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "Record some messages".into(),
text_elements: text_elements.clone(),
}],
final_output_json_schema: None,
})
@@ -56,6 +65,7 @@ async fn resume_includes_initial_messages_from_rollout_events() -> Result<()> {
EventMsg::TokenCount(_),
] => {
assert_eq!(first_user.message, "Record some messages");
assert_eq!(first_user.text_elements, text_elements);
assert_eq!(assistant_message.message, "Completed first turn");
}
other => panic!("unexpected initial messages after resume: {other:#?}"),
@@ -89,6 +99,7 @@ async fn resume_includes_initial_messages_from_reasoning_events() -> Result<()>
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "Record reasoning messages".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})


@@ -707,6 +707,7 @@ async fn review_history_surfaces_in_parent_session() {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: followup.clone(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})


@@ -88,6 +88,7 @@ async fn stdio_server_round_trip() -> anyhow::Result<()> {
cwd: None,
},
enabled: true,
disabled_reason: None,
startup_timeout_sec: Some(Duration::from_secs(10)),
tool_timeout_sec: None,
enabled_tools: None,
@@ -108,6 +109,7 @@ async fn stdio_server_round_trip() -> anyhow::Result<()> {
.submit(Op::UserTurn {
items: vec![UserInput::Text {
text: "call the rmcp echo tool".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
cwd: fixture.cwd.path().to_path_buf(),
@@ -224,6 +226,7 @@ async fn stdio_image_responses_round_trip() -> anyhow::Result<()> {
cwd: None,
},
enabled: true,
disabled_reason: None,
startup_timeout_sec: Some(Duration::from_secs(10)),
tool_timeout_sec: None,
enabled_tools: None,
@@ -244,6 +247,7 @@ async fn stdio_image_responses_round_trip() -> anyhow::Result<()> {
.submit(Op::UserTurn {
items: vec![UserInput::Text {
text: "call the rmcp image tool".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
cwd: fixture.cwd.path().to_path_buf(),
@@ -418,6 +422,7 @@ async fn stdio_image_completions_round_trip() -> anyhow::Result<()> {
cwd: None,
},
enabled: true,
disabled_reason: None,
startup_timeout_sec: Some(Duration::from_secs(10)),
tool_timeout_sec: None,
enabled_tools: None,
@@ -438,6 +443,7 @@ async fn stdio_image_completions_round_trip() -> anyhow::Result<()> {
.submit(Op::UserTurn {
items: vec![UserInput::Text {
text: "call the rmcp image tool".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
cwd: fixture.cwd.path().to_path_buf(),
@@ -560,6 +566,7 @@ async fn stdio_server_propagates_whitelisted_env_vars() -> anyhow::Result<()> {
cwd: None,
},
enabled: true,
disabled_reason: None,
startup_timeout_sec: Some(Duration::from_secs(10)),
tool_timeout_sec: None,
enabled_tools: None,
@@ -580,6 +587,7 @@ async fn stdio_server_propagates_whitelisted_env_vars() -> anyhow::Result<()> {
.submit(Op::UserTurn {
items: vec![UserInput::Text {
text: "call the rmcp echo tool".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
cwd: fixture.cwd.path().to_path_buf(),
@@ -713,6 +721,7 @@ async fn streamable_http_tool_call_round_trip() -> anyhow::Result<()> {
env_http_headers: None,
},
enabled: true,
disabled_reason: None,
startup_timeout_sec: Some(Duration::from_secs(10)),
tool_timeout_sec: None,
enabled_tools: None,
@@ -733,6 +742,7 @@ async fn streamable_http_tool_call_round_trip() -> anyhow::Result<()> {
.submit(Op::UserTurn {
items: vec![UserInput::Text {
text: "call the rmcp streamable http echo tool".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
cwd: fixture.cwd.path().to_path_buf(),
@@ -898,6 +908,7 @@ async fn streamable_http_with_oauth_round_trip() -> anyhow::Result<()> {
env_http_headers: None,
},
enabled: true,
disabled_reason: None,
startup_timeout_sec: Some(Duration::from_secs(10)),
tool_timeout_sec: None,
enabled_tools: None,
@@ -918,6 +929,7 @@ async fn streamable_http_with_oauth_round_trip() -> anyhow::Result<()> {
.submit(Op::UserTurn {
items: vec![UserInput::Text {
text: "call the rmcp streamable http oauth echo tool".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
cwd: fixture.cwd.path().to_path_buf(),


@@ -71,6 +71,7 @@ async fn run_snapshot_command(command: &str) -> Result<SnapshotRun> {
.submit(Op::UserTurn {
items: vec![UserInput::Text {
text: "run unified exec with shell snapshot".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
cwd,
@@ -147,6 +148,7 @@ async fn run_shell_command_snapshot(command: &str) -> Result<SnapshotRun> {
.submit(Op::UserTurn {
items: vec![UserInput::Text {
text: "run shell_command with shell snapshot".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
cwd,
@@ -284,6 +286,7 @@ async fn shell_command_snapshot_still_intercepts_apply_patch() -> Result<()> {
.submit(Op::UserTurn {
items: vec![UserInput::Text {
text: "apply patch via shell_command with snapshot".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
cwd: cwd.clone(),


@@ -64,6 +64,7 @@ async fn user_turn_includes_skill_instructions() -> Result<()> {
items: vec![
UserInput::Text {
text: "please use $demo".to_string(),
text_elements: Vec::new(),
},
UserInput::Skill {
name: "demo".to_string(),


@@ -88,6 +88,7 @@ async fn continue_after_stream_error() {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "first message".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})
@@ -106,6 +107,7 @@ async fn continue_after_stream_error() {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "follow up".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})


@@ -95,6 +95,7 @@ async fn retries_on_early_close() {
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "hello".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
})


@@ -82,6 +82,7 @@ async fn shell_tool_executes_command_and_streams_output() -> anyhow::Result<()>
.submit(Op::UserTurn {
items: vec![UserInput::Text {
text: "please run the shell command".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
cwd: cwd.path().to_path_buf(),
@@ -148,6 +149,7 @@ async fn update_plan_tool_emits_plan_update_event() -> anyhow::Result<()> {
.submit(Op::UserTurn {
items: vec![UserInput::Text {
text: "please update the plan".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
cwd: cwd.path().to_path_buf(),
@@ -224,6 +226,7 @@ async fn update_plan_tool_rejects_malformed_payload() -> anyhow::Result<()> {
.submit(Op::UserTurn {
items: vec![UserInput::Text {
text: "please update the plan".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
cwd: cwd.path().to_path_buf(),
@@ -312,6 +315,7 @@ async fn apply_patch_tool_executes_and_emits_patch_events() -> anyhow::Result<()
.submit(Op::UserTurn {
items: vec![UserInput::Text {
text: "please apply a patch".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
cwd: cwd.path().to_path_buf(),
@@ -408,6 +412,7 @@ async fn apply_patch_reports_parse_diagnostics() -> anyhow::Result<()> {
.submit(Op::UserTurn {
items: vec![UserInput::Text {
text: "please apply a patch".into(),
text_elements: Vec::new(),
}],
final_output_json_schema: None,
cwd: cwd.path().to_path_buf(),

Some files were not shown because too many files have changed in this diff.