Compare commits

..

22 Commits

Author SHA1 Message Date
Ahmed Ibrahim
1ac36e67af fix plan mismatch 2025-10-01 14:36:08 -07:00
Ahmed Ibrahim
11c28a319b fix plan mismatch 2025-10-01 14:32:56 -07:00
Ahmed Ibrahim
7b5b4e08a0 fix plan mismatch 2025-10-01 14:19:37 -07:00
Ahmed Ibrahim
2f370e946d Show context window usage while tasks run (#4536)
## Summary
- show the remaining context window percentage in `/status` alongside
existing token usage details
- replace the composer shortcut prompt with the context window
percentage (or an unavailable message) while a task is running
- update TUI snapshots to reflect the new context window line
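
A rough sketch of the arithmetic behind the percentage; the names (`percent_context_left`, `used_tokens`, `context_window`) are illustrative, not the TUI's actual API:

```rust
/// Percent of the context window still available, given tokens used so far
/// and the model's window size. Returns None when the window size is unknown
/// so the caller can show the "unavailable" message instead.
fn percent_context_left(used_tokens: u64, context_window: u64) -> Option<u8> {
    if context_window == 0 {
        return None;
    }
    let used = used_tokens.min(context_window) as f64;
    let remaining = 100.0 * (1.0 - used / context_window as f64);
    Some(remaining.round() as u8)
}

fn main() {
    // Example values only; the real TUI reads these from its token-usage state.
    match percent_context_left(34_785, 272_000) {
        Some(pct) => println!("{pct}% context left"),
        None => println!("context window: unavailable"),
    }
}
```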

## Testing
- cargo test -p codex-tui

------
https://chatgpt.com/codex/tasks/task_i_68dc6e7397ac8321909d7daff25a396c
2025-10-01 18:03:05 +00:00
Ahmed Ibrahim
751b3b50ac Show placeholder for commands with no output (#4509)
## Summary
- show a dim “(no output)” placeholder when an executed command produces
no stdout or stderr so empty runs are visible
- update TUI snapshots to include the new placeholder in history
renderings
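
A minimal sketch of the placeholder logic, assuming the renderer only needs the final string (the function name is hypothetical; the real widget applies the dim styling):

```rust
/// Choose what to show in the history cell for a finished command. When both
/// streams are empty, return a placeholder so the run is still visible.
fn exec_output_for_history(stdout: &str, stderr: &str) -> String {
    if stdout.is_empty() && stderr.is_empty() {
        "(no output)".to_string()
    } else {
        format!("{stdout}{stderr}")
    }
}

fn main() {
    assert_eq!(exec_output_for_history("", ""), "(no output)");
    assert_eq!(exec_output_for_history("ok\n", ""), "ok\n");
}
```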

## Testing
- cargo test -p codex-tui


------
https://chatgpt.com/codex/tasks/task_i_68dc056c1d5883218fe8d9929e9b1657
2025-10-01 10:42:30 -07:00
Ahmed Ibrahim
d78d0764aa Add Updated at time in resume picker (#4468)
<img width="639" height="281" alt="image"
src="https://github.com/user-attachments/assets/92b2ad2b-9e18-4485-9b8d-d7056eb98651"
/>
2025-10-01 10:40:43 -07:00
rakesh-oai
699c121606 Handle trailing backslash properly (#4559)
**Summary**

This PR fixes an issue in the device code login flow where trailing
slashes in the issuer URL could cause malformed URLs during the codex token
exchange step.
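
A minimal sketch of the kind of normalization involved; the `/oauth/token` path and the function name are assumptions for illustration, not necessarily the endpoint the login flow actually uses:

```rust
/// Illustrative only: trim any trailing slashes from the issuer before
/// joining the token path, so "https://issuer/" does not turn into
/// "https://issuer//oauth/token".
fn token_endpoint(issuer: &str) -> String {
    let base = issuer.trim_end_matches('/');
    format!("{base}/oauth/token")
}

fn main() {
    assert_eq!(
        token_endpoint("https://auth.example.com/"),
        "https://auth.example.com/oauth/token"
    );
    assert_eq!(
        token_endpoint("https://auth.example.com"),
        "https://auth.example.com/oauth/token"
    );
}
```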


**Test**


Before the changes

`Error logging in with device code: device code exchange failed: error
decoding response body`

After the changes

`Successfully logged in`
2025-10-01 10:32:09 -07:00
iceweasel-oai
dde615f482 implement command safety for PowerShell commands (#4269)
Implement command safety for PowerShell commands on Windows

This change adds a new Windows-specific command-safety module under
`codex-rs/core/src/command_safety/windows_safe_commands.rs` to strictly
sanitise PowerShell invocations. Key points:

- Introduce `is_safe_command_windows()` to only allow explicitly
read-only PowerShell calls.
- Parse and split PowerShell invocations (including inline `-Command`
scripts and pipelines).
- Block unsafe switches (`-File`, `-EncodedCommand`, `-ExecutionPolicy`,
unknown flags, call operators, redirections, separators).
- Whitelist only read-only cmdlets (`Get-ChildItem`, `Get-Content`,
`Select-Object`, etc.), safe Git subcommands (`status`, `log`, `show`,
`diff`, `cat-file`), and ripgrep without unsafe options.
- Add comprehensive unit tests covering allowed and rejected command
patterns (nested calls, side effects, chaining, redirections).

This ensures Codex on Windows can safely execute discovery-only
PowerShell workflows without risking destructive operations.
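
A heavily simplified sketch of the allow-list approach; the real `windows_safe_commands.rs` additionally parses pipelines, inline `-Command` scripts, and safe `git`/`rg` subcommands, so treat this only as an illustration of the shape:

```rust
/// Greatly simplified: the invocation must start with a known read-only
/// cmdlet, and no later token may be a blocked switch or shell operator.
/// Unrecognized flags are rejected conservatively.
fn is_safe_powershell_sketch(argv: &[&str]) -> bool {
    const READ_ONLY_CMDLETS: &[&str] =
        &["Get-ChildItem", "Get-Content", "Select-Object", "Select-String"];
    const BLOCKED: &[&str] =
        &["-File", "-EncodedCommand", "-ExecutionPolicy", "&", ">", ">>", "|", ";"];

    let Some((first, rest)) = argv.split_first() else {
        return false;
    };
    if !READ_ONLY_CMDLETS.iter().any(|c| first.eq_ignore_ascii_case(c)) {
        return false;
    }
    rest.iter()
        .all(|arg| !BLOCKED.iter().any(|b| arg.eq_ignore_ascii_case(b)) && !arg.starts_with('-'))
}

fn main() {
    assert!(is_safe_powershell_sketch(&["Get-Content", "Cargo.toml"]));
    assert!(!is_safe_powershell_sketch(&["Remove-Item", "Cargo.toml"]));
    assert!(!is_safe_powershell_sketch(&["Get-Content", "a.txt", ">", "b.txt"]));
}
```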
2025-10-01 09:56:48 -07:00
Michael Bolin
325fad1d92 fix: pnpm/action-setup@v4 should run before actions/setup-node@v5 (#4555)
`rust-release.yml` just failed:

https://github.com/openai/codex/actions/runs/18167317356/job/51714366768

The error is:

> Error: Unable to locate executable file: pnpm. Please verify either
the file path exists or the file can be found within a directory
specified by the PATH environment variable. Also check the file mode to
verify the file is executable.

We need to install `pnpm` first like we do in `ci.yml`:


f815157dd9/.github/workflows/ci.yml (L17-L25)
2025-10-01 09:04:14 -07:00
Michael Bolin
f815157dd9 chore: introduce publishing logic for @openai/codex-sdk (#4543)
There was a bit of copypasta I put up with when we were publishing two
packages to npm, but now that it's three, I created some more scripts to
consolidate things.

With this change, I ran:

```shell
./scripts/stage_npm_packages.py --release-version 0.43.0-alpha.8 --package codex --package codex-responses-api-proxy --package codex-sdk
```

Indeed when it finished, I ended up with:

```shell
$ tree dist
dist
└── npm
    ├── codex-npm-0.43.0-alpha.8.tgz
    ├── codex-responses-api-proxy-npm-0.43.0-alpha.8.tgz
    └── codex-sdk-npm-0.43.0-alpha.8.tgz
$ tar tzvf dist/npm/codex-sdk-npm-0.43.0-alpha.8.tgz
-rwxr-xr-x  0 0      0    25476720 Oct 26  1985 package/vendor/aarch64-apple-darwin/codex/codex
-rwxr-xr-x  0 0      0    29871400 Oct 26  1985 package/vendor/aarch64-unknown-linux-musl/codex/codex
-rwxr-xr-x  0 0      0    28368096 Oct 26  1985 package/vendor/x86_64-apple-darwin/codex/codex
-rwxr-xr-x  0 0      0    36029472 Oct 26  1985 package/vendor/x86_64-unknown-linux-musl/codex/codex
-rw-r--r--  0 0      0       10926 Oct 26  1985 package/LICENSE
-rw-r--r--  0 0      0    30187520 Oct 26  1985 package/vendor/aarch64-pc-windows-msvc/codex/codex.exe
-rw-r--r--  0 0      0    35277824 Oct 26  1985 package/vendor/x86_64-pc-windows-msvc/codex/codex.exe
-rw-r--r--  0 0      0        4842 Oct 26  1985 package/dist/index.js
-rw-r--r--  0 0      0        1347 Oct 26  1985 package/package.json
-rw-r--r--  0 0      0        9867 Oct 26  1985 package/dist/index.js.map
-rw-r--r--  0 0      0          12 Oct 26  1985 package/README.md
-rw-r--r--  0 0      0        4287 Oct 26  1985 package/dist/index.d.ts
```
2025-10-01 08:29:59 -07:00
jif-oai
b8195a17e5 chore: sandbox extraction (#4286)
# Extract and Centralize Sandboxing
- Goal: Improve safety and clarity by centralizing sandbox planning and execution.
- Approach (a rough sketch follows below):
  - Add a planner (ExecPlan) and a backend registry (Direct/Seatbelt/Linux) with run_with_plan.
  - Refactor codex.rs to plan-then-execute; handle failures/escalation via the plan.
  - Delegate apply_patch to the codex binary and run it with an empty env for determinism.
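
A hedged sketch of the plan-then-execute shape described above; only the names `ExecPlan`, `run_with_plan`, and the Direct/Seatbelt/Linux backends come from this PR, the rest is illustrative:

```rust
/// Hedged sketch: a planner picks the sandbox backend up front and a single
/// entry point dispatches on the resulting plan. Struct/enum layout is
/// illustrative, not the crate's actual definitions.
#[derive(Debug, Clone, Copy)]
enum SandboxBackend {
    Direct,   // no sandbox
    Seatbelt, // macOS
    Linux,    // Linux sandbox (e.g. Landlock/seccomp)
}

struct ExecPlan {
    backend: SandboxBackend,
    command: Vec<String>,
}

fn plan_exec(command: Vec<String>, sandbox_enabled: bool) -> ExecPlan {
    let backend = if !sandbox_enabled {
        SandboxBackend::Direct
    } else if cfg!(target_os = "macos") {
        SandboxBackend::Seatbelt
    } else {
        SandboxBackend::Linux
    };
    ExecPlan { backend, command }
}

fn run_with_plan(plan: &ExecPlan) {
    // Real code would spawn the process inside the chosen sandbox and handle
    // failures/escalation according to the plan.
    println!("{:?} -> {:?}", plan.backend, plan.command);
}

fn main() {
    let plan = plan_exec(vec!["ls".into(), "-la".into()], true);
    run_with_plan(&plan);
}
```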
2025-10-01 12:05:12 +01:00
rakesh-oai
349ef7edc6 Fix Callback URL for staging and prod environments (#4533)
2025-10-01 02:57:37 +00:00
Michael Bolin
5881c0d6d4 fix: remove mcp-types from app server protocol (#4537)
We continue the separation between `codex app-server` and `codex
mcp-server`.

In particular, we introduce a new crate, `codex-app-server-protocol`,
and migrate `codex-rs/protocol/src/mcp_protocol.rs` into it, renaming it
`codex-rs/app-server-protocol/src/protocol.rs`.

Because `ConversationId` was defined in `mcp_protocol.rs`, we move it
into its own file, `codex-rs/protocol/src/conversation_id.rs`, and
because it is referenced in a ton of places, we have to touch a lot of
files as part of this PR.

We also move away from strict JSON-RPC 2.0 semantics and introduce
`codex-rs/app-server-protocol/src/jsonrpc_lite.rs`, which is essentially the
same `JSONRPCMessage` type defined in `mcp-types` except with all of the
`"jsonrpc": "2.0"` fields removed.

Getting rid of `"jsonrpc": "2.0"` makes our serialization logic
considerably simpler, as we can lean more heavily on serde to serialize
directly into the wire format we now use.
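
A tiny illustration of the resulting wire format (the actual structs appear in the diff further down): a request serializes to just `id`, `method`, and `params`, with no `"jsonrpc": "2.0"` field.

```rust
use serde::Serialize;

// Minimal stand-in for the jsonrpc_lite request shape; field names mirror the
// description above, everything else is illustrative.
#[derive(Serialize)]
struct JsonRpcLiteRequest {
    id: i64,
    method: String,
    #[serde(skip_serializing_if = "Option::is_none")]
    params: Option<serde_json::Value>,
}

fn main() -> serde_json::Result<()> {
    let req = JsonRpcLiteRequest {
        id: 7,
        method: "newConversation".to_string(),
        params: Some(serde_json::json!({ "example": true })),
    };
    // Prints: {"id":7,"method":"newConversation","params":{"example":true}}
    println!("{}", serde_json::to_string(&req)?);
    Ok(())
}
```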
2025-10-01 02:16:26 +00:00
pakrym-oai
8dd771d217 Add executable detection and export Codex from the SDK (#4532)
Executable detection uses the same rules as the codex wrapper.
2025-09-30 18:06:16 -07:00
Michael Bolin
32853ecbc5 fix: use macros to ensure request/response symmetry (#4529)
Manually curating `protocol-ts/src/lib.rs` was error-prone, as expected.
I finally asked Codex to write some Rust macros so we can ensure that:

- For every variant of `ClientRequest` and `ServerRequest`, there is an
associated `params` and `response` type.
- All response types are included automatically in the output of `codex
generate-ts`.
2025-09-30 18:06:05 -07:00
pakrym-oai
7fc3edf8a7 Remove legacy codex exec --json format (#4525)
`codex exec --json` now maps to the behavior of `codex exec
--experimental-json` with new event and item shapes.

Thread events:
- thread.started
- turn.started
- turn.completed
- turn.failed
- item.started
- item.updated
- item.completed

Item types: 
- assistant_message
- reasoning
- command_execution
- file_change
- mcp_tool_call
- web_search
- todo_list
- error

Sample output:

<details>
`codex exec "list my assigned github issues"  --json | jq`

```
{
  "type": "thread.started",
  "thread_id": "01999ce5-f229-7661-8570-53312bd47ea3"
}
{
  "type": "turn.started"
}
{
  "type": "item.completed",
  "item": {
    "id": "item_0",
    "item_type": "reasoning",
    "text": "**Planning to list assigned GitHub issues**"
  }
}
{
  "type": "item.started",
  "item": {
    "id": "item_1",
    "item_type": "mcp_tool_call",
    "server": "github",
    "tool": "search_issues",
    "status": "in_progress"
  }
}
{
  "type": "item.completed",
  "item": {
    "id": "item_1",
    "item_type": "mcp_tool_call",
    "server": "github",
    "tool": "search_issues",
    "status": "completed"
  }
}
{
  "type": "item.completed",
  "item": {
    "id": "item_2",
    "item_type": "reasoning",
    "text": "**Organizing final message structure**"
  }
}
{
  "type": "item.completed",
  "item": {
    "id": "item_3",
    "item_type": "assistant_message",
    "text": "**Assigned Issues**\n- openai/codex#3267 – “stream error: stream disconnected before completion…” (bug) – last update 2025-09-08\n- openai/codex#3257 – “You've hit your usage limit. Try again in 4 days 20 hours 9 minutes.” – last update 2025-09-23\n- openai/codex#3054 – “reqwest SSL panic (library has no ciphers)” (bug) – last update 2025-09-03\n- openai/codex#3051 – “thread 'main' panicked at linux-sandbox/src/linux_run_main.rs:53:5:” (bug) – last update 2025-09-10\n- openai/codex#3004 – “Auto-compact when approaching context limit” (enhancement) – last update 2025-09-26\n- openai/codex#2916 – “Feature request: Add OpenAI service tier support for cost optimization” – last update 2025-09-12\n- openai/codex#1581 – “stream error: stream disconnected before completion: stream closed before response.complete; retrying...” (bug) – last update 2025-09-17"
  }
}
{
  "type": "turn.completed",
  "usage": {
    "input_tokens": 34785,
    "cached_input_tokens": 12544,
    "output_tokens": 560
  }
}
```

</details>
2025-09-30 17:21:37 -07:00
Jeremy Rose
01e6503672 wrap markdown at render time (#4506)
This results in correctly indenting list items with long lines.

<img width="1006" height="251" alt="Screenshot 2025-09-30 at 10 00
48 AM"
src="https://github.com/user-attachments/assets/0a076cf6-ca3c-4efb-b3af-dc07617cdb6f"
/>
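
A hedged sketch of the idea using the `textwrap` crate's hanging-indent options; this is illustrative, not necessarily how the TUI implements it:

```rust
use textwrap::{Options, fill};

// Wrap a list item at render time with a hanging indent so continuation
// lines align under the item's text rather than under the bullet.
fn render_list_item(text: &str, width: usize) -> String {
    let opts = Options::new(width)
        .initial_indent("- ")
        .subsequent_indent("  ");
    fill(text, opts)
}

fn main() {
    let item = "a list item whose text is long enough that it has to wrap onto a second line";
    println!("{}", render_list_item(item, 40));
    // - a list item whose text is long enough
    //   that it has to wrap onto a second line
}
```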
2025-09-30 23:13:55 +00:00
pakrym-oai
9c259737d3 Delete codex proto (#4520) 2025-09-30 22:33:28 +00:00
Michael Bolin
b8e1fe60c5 fix: enable process hardening in Codex CLI for release builds (#4521)
I don't believe there is any upside in making process hardening opt-in
for Codex CLI releases. If you want to tinker with Codex CLI, then build
from source (or run as `root`)?
2025-09-30 14:34:35 -07:00
Michael Bolin
ddfb7eb548 fix: clean up TypeScript exports (#4518)
Fixes:

- Removed overdeclaration of types that were unnecessary because they
were already included by induction.
- Reordered list of response types to match the enum order, making it
easier to identify what was missing.
- Added `ExecArbitraryCommandResponse` because it was missing.
- Leveraged `use codex_protocol::mcp_protocol::*;` to make the file more
readable.
- Removed the crate dependency on `mcp-types` now that we have separated the
app server from the MCP server:
https://github.com/openai/codex/pull/4471

My next move is to come up with some scheme that ensures request types
always have a response type and that the response type is automatically
included with the output of `codex generate-ts`.
2025-09-30 14:08:43 -07:00
Michael Bolin
6910be3224 fix: ensure every variant of ClientRequest has a params field (#4512)
This changes the generated TypeScript type for `ClientRequest`
so that instead of this:

```typescript
/**
 * Request from the client to the server.
 */
export type ClientRequest =
  | { method: "initialize"; id: RequestId; params: InitializeParams }
  | { method: "newConversation"; id: RequestId; params: NewConversationParams }
  // ...
  | { method: "getUserAgent"; id: RequestId }
  | { method: "userInfo"; id: RequestId }
  // ...
```

we have this:

```typescript
/**
 * Request from the client to the server.
 */
export type ClientRequest =
  | { method: "initialize"; id: RequestId; params: InitializeParams }
  | { method: "newConversation"; id: RequestId; params: NewConversationParams }
  // ...
  | { method: "getUserAgent"; id: RequestId; params: undefined }
  | { method: "userInfo"; id: RequestId; params: undefined }
  // ...
```

which makes TypeScript happier when destructuring instances of
`ClientRequest`, because it no longer complains that `params` is not
guaranteed to exist.
2025-09-30 12:03:32 -07:00
pakrym-oai
a534356fe1 Wire up web search item (#4511)
Add handling for web search events.
2025-09-30 12:01:17 -07:00
155 changed files with 4245 additions and 2927 deletions

View File

@@ -27,7 +27,7 @@ jobs:
- name: Install dependencies
run: pnpm install --frozen-lockfile
# build_npm_package.py requires DotSlash when staging releases.
# stage_npm_packages.py requires DotSlash when staging releases.
- uses: facebook/install-dotslash@v2
- name: Stage npm package
@@ -37,10 +37,12 @@ jobs:
run: |
set -euo pipefail
CODEX_VERSION=0.40.0
PACK_OUTPUT="${RUNNER_TEMP}/codex-npm.tgz"
python3 ./codex-cli/scripts/build_npm_package.py \
OUTPUT_DIR="${RUNNER_TEMP}"
python3 ./scripts/stage_npm_packages.py \
--release-version "$CODEX_VERSION" \
--pack-output "$PACK_OUTPUT"
--package codex \
--output-dir "$OUTPUT_DIR"
PACK_OUTPUT="${OUTPUT_DIR}/codex-npm-${CODEX_VERSION}.tgz"
echo "pack_output=$PACK_OUTPUT" >> "$GITHUB_OUTPUT"
- name: Upload staged npm package artifact

View File

@@ -216,31 +216,30 @@ jobs:
echo "npm_tag=" >> "$GITHUB_OUTPUT"
fi
# build_npm_package.py requires DotSlash when staging releases.
- uses: facebook/install-dotslash@v2
- name: Stage codex CLI npm package
env:
GH_TOKEN: ${{ github.token }}
run: |
set -euo pipefail
TMP_DIR="${RUNNER_TEMP}/npm-stage"
./codex-cli/scripts/build_npm_package.py \
--package codex \
--release-version "${{ steps.release_name.outputs.name }}" \
--staging-dir "${TMP_DIR}" \
--pack-output "${GITHUB_WORKSPACE}/dist/npm/codex-npm-${{ steps.release_name.outputs.name }}.tgz"
- name: Setup pnpm
uses: pnpm/action-setup@v4
with:
run_install: false
- name: Stage responses API proxy npm package
- name: Setup Node.js for npm packaging
uses: actions/setup-node@v5
with:
node-version: 22
- name: Install dependencies
run: pnpm install --frozen-lockfile
# stage_npm_packages.py requires DotSlash when staging releases.
- uses: facebook/install-dotslash@v2
- name: Stage npm packages
env:
GH_TOKEN: ${{ github.token }}
run: |
set -euo pipefail
TMP_DIR="${RUNNER_TEMP}/npm-stage-responses"
./codex-cli/scripts/build_npm_package.py \
--package codex-responses-api-proxy \
./scripts/stage_npm_packages.py \
--release-version "${{ steps.release_name.outputs.name }}" \
--staging-dir "${TMP_DIR}" \
--pack-output "${GITHUB_WORKSPACE}/dist/npm/codex-responses-api-proxy-npm-${{ steps.release_name.outputs.name }}.tgz"
--package codex \
--package codex-responses-api-proxy \
--package codex-sdk
- name: Create GitHub Release
uses: softprops/action-gh-release@v2
@@ -300,6 +299,10 @@ jobs:
--repo "${GITHUB_REPOSITORY}" \
--pattern "codex-responses-api-proxy-npm-${version}.tgz" \
--dir dist/npm
gh release download "$tag" \
--repo "${GITHUB_REPOSITORY}" \
--pattern "codex-sdk-npm-${version}.tgz" \
--dir dist/npm
# No NODE_AUTH_TOKEN needed because we use OIDC.
- name: Publish to npm
@@ -316,6 +319,7 @@ jobs:
tarballs=(
"codex-npm-${VERSION}.tgz"
"codex-responses-api-proxy-npm-${VERSION}.tgz"
"codex-sdk-npm-${VERSION}.tgz"
)
for tarball in "${tarballs[@]}"; do

View File

@@ -1,11 +1,19 @@
# npm releases
Run the following:
To build the 0.2.x or later version of the npm module, which runs the Rust version of the CLI, build it as follows:
Use the staging helper in the repo root to generate npm tarballs for a release. For
example, to stage the CLI, responses proxy, and SDK packages for version `0.6.0`:
```bash
./codex-cli/scripts/build_npm_package.py --release-version 0.6.0
./scripts/stage_npm_packages.py \
--release-version 0.6.0 \
--package codex \
--package codex-responses-api-proxy \
--package codex-sdk
```
Note this will create `./codex-cli/vendor/` as a side-effect.
This downloads the native artifacts once, hydrates `vendor/` for each package, and writes
tarballs to `dist/npm/`.
If you need to invoke `build_npm_package.py` directly, run
`codex-cli/scripts/install_native_deps.py` first and pass `--vendor-src` pointing to the
directory that contains the populated `vendor/` tree.

View File

@@ -3,7 +3,6 @@
import argparse
import json
import re
import shutil
import subprocess
import sys
@@ -14,19 +13,25 @@ SCRIPT_DIR = Path(__file__).resolve().parent
CODEX_CLI_ROOT = SCRIPT_DIR.parent
REPO_ROOT = CODEX_CLI_ROOT.parent
RESPONSES_API_PROXY_NPM_ROOT = REPO_ROOT / "codex-rs" / "responses-api-proxy" / "npm"
GITHUB_REPO = "openai/codex"
CODEX_SDK_ROOT = REPO_ROOT / "sdk" / "typescript"
# The docs are not clear on what the expected value/format of
# workflow/workflowName is:
# https://cli.github.com/manual/gh_run_list
WORKFLOW_NAME = ".github/workflows/rust-release.yml"
PACKAGE_NATIVE_COMPONENTS: dict[str, list[str]] = {
"codex": ["codex", "rg"],
"codex-responses-api-proxy": ["codex-responses-api-proxy"],
"codex-sdk": ["codex"],
}
COMPONENT_DEST_DIR: dict[str, str] = {
"codex": "codex",
"codex-responses-api-proxy": "codex-responses-api-proxy",
"rg": "path",
}
def parse_args() -> argparse.Namespace:
parser = argparse.ArgumentParser(description="Build or stage the Codex CLI npm package.")
parser.add_argument(
"--package",
choices=("codex", "codex-responses-api-proxy"),
choices=("codex", "codex-responses-api-proxy", "codex-sdk"),
default="codex",
help="Which npm package to stage (default: codex).",
)
@@ -37,14 +42,9 @@ def parse_args() -> argparse.Namespace:
parser.add_argument(
"--release-version",
help=(
"Version to stage for npm release. When provided, the script also resolves the "
"matching rust-release workflow unless --workflow-url is supplied."
"Version to stage for npm release."
),
)
parser.add_argument(
"--workflow-url",
help="Optional GitHub Actions workflow run URL used to download native binaries.",
)
parser.add_argument(
"--staging-dir",
type=Path,
@@ -64,6 +64,11 @@ def parse_args() -> argparse.Namespace:
type=Path,
help="Path where the generated npm tarball should be written.",
)
parser.add_argument(
"--vendor-src",
type=Path,
help="Directory containing pre-installed native binaries to bundle (vendor root).",
)
return parser.parse_args()
@@ -86,29 +91,19 @@ def main() -> int:
try:
stage_sources(staging_dir, version, package)
workflow_url = args.workflow_url
resolved_head_sha: str | None = None
if not workflow_url:
if release_version:
workflow = resolve_release_workflow(version)
workflow_url = workflow["url"]
resolved_head_sha = workflow.get("headSha")
else:
workflow_url = resolve_latest_alpha_workflow_url()
elif release_version:
try:
workflow = resolve_release_workflow(version)
resolved_head_sha = workflow.get("headSha")
except Exception:
resolved_head_sha = None
vendor_src = args.vendor_src.resolve() if args.vendor_src else None
native_components = PACKAGE_NATIVE_COMPONENTS.get(package, [])
if release_version and resolved_head_sha:
print(f"should `git checkout {resolved_head_sha}`")
if native_components:
if vendor_src is None:
components_str = ", ".join(native_components)
raise RuntimeError(
"Native components "
f"({components_str}) required for package '{package}'. Provide --vendor-src "
"pointing to a directory containing pre-installed binaries."
)
if not workflow_url:
raise RuntimeError("Unable to determine workflow URL for native binaries.")
install_native_binaries(staging_dir, workflow_url, package)
copy_native_binaries(vendor_src, staging_dir, native_components)
if release_version:
staging_dir_str = str(staging_dir)
@@ -119,12 +114,20 @@ def main() -> int:
f" node {staging_dir_str}/bin/codex.js --version\n"
f" node {staging_dir_str}/bin/codex.js --help\n\n"
)
else:
elif package == "codex-responses-api-proxy":
print(
f"Staged version {version} for release in {staging_dir_str}\n\n"
"Verify the responses API proxy:\n"
f" node {staging_dir_str}/bin/codex-responses-api-proxy.js --help\n\n"
)
else:
print(
f"Staged version {version} for release in {staging_dir_str}\n\n"
"Verify the SDK contents:\n"
f" ls {staging_dir_str}/dist\n"
f" ls {staging_dir_str}/vendor\n"
" node -e \"import('./dist/index.js').then(() => console.log('ok'))\"\n\n"
)
else:
print(f"Staged package in {staging_dir}")
@@ -152,10 +155,9 @@ def prepare_staging_dir(staging_dir: Path | None) -> tuple[Path, bool]:
def stage_sources(staging_dir: Path, version: str, package: str) -> None:
bin_dir = staging_dir / "bin"
bin_dir.mkdir(parents=True, exist_ok=True)
if package == "codex":
bin_dir = staging_dir / "bin"
bin_dir.mkdir(parents=True, exist_ok=True)
shutil.copy2(CODEX_CLI_ROOT / "bin" / "codex.js", bin_dir / "codex.js")
rg_manifest = CODEX_CLI_ROOT / "bin" / "rg"
if rg_manifest.exists():
@@ -167,6 +169,8 @@ def stage_sources(staging_dir: Path, version: str, package: str) -> None:
package_json_path = CODEX_CLI_ROOT / "package.json"
elif package == "codex-responses-api-proxy":
bin_dir = staging_dir / "bin"
bin_dir.mkdir(parents=True, exist_ok=True)
launcher_src = RESPONSES_API_PROXY_NPM_ROOT / "bin" / "codex-responses-api-proxy.js"
shutil.copy2(launcher_src, bin_dir / "codex-responses-api-proxy.js")
@@ -175,6 +179,9 @@ def stage_sources(staging_dir: Path, version: str, package: str) -> None:
shutil.copy2(readme_src, staging_dir / "README.md")
package_json_path = RESPONSES_API_PROXY_NPM_ROOT / "package.json"
elif package == "codex-sdk":
package_json_path = CODEX_SDK_ROOT / "package.json"
stage_codex_sdk_sources(staging_dir)
else:
raise RuntimeError(f"Unknown package '{package}'.")
@@ -182,91 +189,85 @@ def stage_sources(staging_dir: Path, version: str, package: str) -> None:
package_json = json.load(fh)
package_json["version"] = version
if package == "codex-sdk":
scripts = package_json.get("scripts")
if isinstance(scripts, dict):
scripts.pop("prepare", None)
files = package_json.get("files")
if isinstance(files, list):
if "vendor" not in files:
files.append("vendor")
else:
package_json["files"] = ["dist", "vendor"]
with open(staging_dir / "package.json", "w", encoding="utf-8") as out:
json.dump(package_json, out, indent=2)
out.write("\n")
def install_native_binaries(staging_dir: Path, workflow_url: str, package: str) -> None:
package_components = {
"codex": ["codex", "rg"],
"codex-responses-api-proxy": ["codex-responses-api-proxy"],
}
components = package_components.get(package)
if components is None:
raise RuntimeError(f"Unknown package '{package}'.")
cmd = ["./scripts/install_native_deps.py", "--workflow-url", workflow_url]
for component in components:
cmd.extend(["--component", component])
cmd.append(str(staging_dir))
subprocess.check_call(cmd, cwd=CODEX_CLI_ROOT)
def run_command(cmd: list[str], cwd: Path | None = None) -> None:
print("+", " ".join(cmd))
subprocess.run(cmd, cwd=cwd, check=True)
def resolve_latest_alpha_workflow_url() -> str:
version = determine_latest_alpha_version()
workflow = resolve_release_workflow(version)
return workflow["url"]
def stage_codex_sdk_sources(staging_dir: Path) -> None:
package_root = CODEX_SDK_ROOT
run_command(["pnpm", "install", "--frozen-lockfile"], cwd=package_root)
run_command(["pnpm", "run", "build"], cwd=package_root)
dist_src = package_root / "dist"
if not dist_src.exists():
raise RuntimeError("codex-sdk build did not produce a dist directory.")
shutil.copytree(dist_src, staging_dir / "dist")
readme_src = package_root / "README.md"
if readme_src.exists():
shutil.copy2(readme_src, staging_dir / "README.md")
license_src = REPO_ROOT / "LICENSE"
if license_src.exists():
shutil.copy2(license_src, staging_dir / "LICENSE")
def determine_latest_alpha_version() -> str:
releases = list_releases()
best_key: tuple[int, int, int, int] | None = None
best_version: str | None = None
pattern = re.compile(r"^rust-v(\d+)\.(\d+)\.(\d+)-alpha\.(\d+)$")
for release in releases:
tag = release.get("tag_name", "")
match = pattern.match(tag)
if not match:
def copy_native_binaries(vendor_src: Path, staging_dir: Path, components: list[str]) -> None:
vendor_src = vendor_src.resolve()
if not vendor_src.exists():
raise RuntimeError(f"Vendor source directory not found: {vendor_src}")
components_set = {component for component in components if component in COMPONENT_DEST_DIR}
if not components_set:
return
vendor_dest = staging_dir / "vendor"
if vendor_dest.exists():
shutil.rmtree(vendor_dest)
vendor_dest.mkdir(parents=True, exist_ok=True)
for target_dir in vendor_src.iterdir():
if not target_dir.is_dir():
continue
key = tuple(int(match.group(i)) for i in range(1, 5))
if best_key is None or key > best_key:
best_key = key
best_version = (
f"{match.group(1)}.{match.group(2)}.{match.group(3)}-alpha.{match.group(4)}"
)
if best_version is None:
raise RuntimeError("No alpha releases found when resolving workflow URL.")
return best_version
dest_target_dir = vendor_dest / target_dir.name
dest_target_dir.mkdir(parents=True, exist_ok=True)
for component in components_set:
dest_dir_name = COMPONENT_DEST_DIR.get(component)
if dest_dir_name is None:
continue
def list_releases() -> list[dict]:
stdout = subprocess.check_output(
["gh", "api", f"/repos/{GITHUB_REPO}/releases?per_page=100"],
text=True,
)
try:
releases = json.loads(stdout or "[]")
except json.JSONDecodeError as exc:
raise RuntimeError("Unable to parse releases JSON.") from exc
if not isinstance(releases, list):
raise RuntimeError("Unexpected response when listing releases.")
return releases
src_component_dir = target_dir / dest_dir_name
if not src_component_dir.exists():
raise RuntimeError(
f"Missing native component '{component}' in vendor source: {src_component_dir}"
)
def resolve_release_workflow(version: str) -> dict:
stdout = subprocess.check_output(
[
"gh",
"run",
"list",
"--branch",
f"rust-v{version}",
"--json",
"workflowName,url,headSha",
"--workflow",
WORKFLOW_NAME,
"--jq",
"first(.[])",
],
text=True,
)
workflow = json.loads(stdout or "[]")
if not workflow:
raise RuntimeError(f"Unable to find rust-release workflow for version {version}.")
return workflow
dest_component_dir = dest_target_dir / dest_dir_name
if dest_component_dir.exists():
shutil.rmtree(dest_component_dir)
shutil.copytree(src_component_dir, dest_component_dir)
def run_npm_pack(staging_dir: Path, output_path: Path) -> Path:

33
codex-rs/Cargo.lock generated
View File

@@ -171,8 +171,7 @@ version = "0.0.0"
dependencies = [
"anyhow",
"assert_cmd",
"codex-protocol",
"mcp-types",
"codex-app-server-protocol",
"serde",
"serde_json",
"tokio",
@@ -657,6 +656,7 @@ dependencies = [
"app_test_support",
"assert_cmd",
"base64",
"codex-app-server-protocol",
"codex-arg0",
"codex-common",
"codex-core",
@@ -665,7 +665,6 @@ dependencies = [
"codex-protocol",
"codex-utils-json-to-toml",
"core_test_support",
"mcp-types",
"os_info",
"pretty_assertions",
"serde",
@@ -679,6 +678,21 @@ dependencies = [
"wiremock",
]
[[package]]
name = "codex-app-server-protocol"
version = "0.0.0"
dependencies = [
"anyhow",
"codex-protocol",
"paste",
"pretty_assertions",
"serde",
"serde_json",
"strum_macros 0.27.2",
"ts-rs",
"uuid",
]
[[package]]
name = "codex-apply-patch"
version = "0.0.0"
@@ -750,6 +764,7 @@ dependencies = [
"clap",
"clap_complete",
"codex-app-server",
"codex-app-server-protocol",
"codex-arg0",
"codex-chatgpt",
"codex-cloud-tasks",
@@ -771,8 +786,6 @@ dependencies = [
"supports-color",
"tempfile",
"tokio",
"tracing",
"tracing-subscriber",
]
[[package]]
@@ -822,6 +835,7 @@ name = "codex-common"
version = "0.0.0"
dependencies = [
"clap",
"codex-app-server-protocol",
"codex-core",
"codex-protocol",
"serde",
@@ -840,6 +854,7 @@ dependencies = [
"base64",
"bytes",
"chrono",
"codex-app-server-protocol",
"codex-apply-patch",
"codex-file-search",
"codex-mcp-client",
@@ -996,8 +1011,8 @@ dependencies = [
"anyhow",
"base64",
"chrono",
"codex-app-server-protocol",
"codex-core",
"codex-protocol",
"core_test_support",
"rand 0.9.2",
"reqwest",
@@ -1073,6 +1088,7 @@ name = "codex-otel"
version = "0.0.0"
dependencies = [
"chrono",
"codex-app-server-protocol",
"codex-protocol",
"eventsource-stream",
"opentelemetry",
@@ -1105,7 +1121,6 @@ dependencies = [
"icu_locale_core",
"mcp-types",
"mime_guess",
"pretty_assertions",
"serde",
"serde_json",
"serde_with",
@@ -1124,8 +1139,7 @@ version = "0.0.0"
dependencies = [
"anyhow",
"clap",
"codex-protocol",
"mcp-types",
"codex-app-server-protocol",
"ts-rs",
]
@@ -1173,6 +1187,7 @@ dependencies = [
"chrono",
"clap",
"codex-ansi-escape",
"codex-app-server-protocol",
"codex-arg0",
"codex-common",
"codex-core",

View File

@@ -3,6 +3,7 @@ members = [
"backend-client",
"ansi-escape",
"app-server",
"app-server-protocol",
"apply-patch",
"arg0",
"codex-backend-openapi-models",
@@ -47,6 +48,7 @@ edition = "2024"
app_test_support = { path = "app-server/tests/common" }
codex-ansi-escape = { path = "ansi-escape" }
codex-app-server = { path = "app-server" }
codex-app-server-protocol = { path = "app-server-protocol" }
codex-apply-patch = { path = "apply-patch" }
codex-arg0 = { path = "arg0" }
codex-chatgpt = { path = "chatgpt" }
@@ -123,6 +125,7 @@ opentelemetry-semantic-conventions = "0.30.0"
opentelemetry_sdk = "0.30.0"
os_info = "3.12.0"
owo-colors = "4.2.0"
paste = "1.0.15"
path-absolutize = "3.1.1"
path-clean = "1.0.1"
pathdiff = "0.2"

View File

@@ -0,0 +1,24 @@
[package]
edition = "2024"
name = "codex-app-server-protocol"
version = { workspace = true }
[lib]
name = "codex_app_server_protocol"
path = "src/lib.rs"
[lints]
workspace = true
[dependencies]
codex-protocol = { workspace = true }
paste = { workspace = true }
serde = { workspace = true, features = ["derive"] }
serde_json = { workspace = true }
strum_macros = { workspace = true }
ts-rs = { workspace = true }
uuid = { workspace = true, features = ["serde", "v7"] }
[dev-dependencies]
anyhow = { workspace = true }
pretty_assertions = { workspace = true }

View File

@@ -0,0 +1,66 @@
//! We do not do true JSON-RPC 2.0, as we neither send nor expect the
//! "jsonrpc": "2.0" field.
use serde::Deserialize;
use serde::Serialize;
use ts_rs::TS;
pub const JSONRPC_VERSION: &str = "2.0";
#[derive(Debug, Clone, PartialEq, Deserialize, Serialize, Hash, Eq, TS)]
#[serde(untagged)]
pub enum RequestId {
String(String),
Integer(i64),
}
pub type Result = serde_json::Value;
/// Refers to any valid JSON-RPC object that can be decoded off the wire, or encoded to be sent.
#[derive(Debug, Clone, PartialEq, Deserialize, Serialize, TS)]
#[serde(untagged)]
pub enum JSONRPCMessage {
Request(JSONRPCRequest),
Notification(JSONRPCNotification),
Response(JSONRPCResponse),
Error(JSONRPCError),
}
/// A request that expects a response.
#[derive(Debug, Clone, PartialEq, Deserialize, Serialize, TS)]
pub struct JSONRPCRequest {
pub id: RequestId,
pub method: String,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub params: Option<serde_json::Value>,
}
/// A notification which does not expect a response.
#[derive(Debug, Clone, PartialEq, Deserialize, Serialize, TS)]
pub struct JSONRPCNotification {
pub method: String,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub params: Option<serde_json::Value>,
}
/// A successful (non-error) response to a request.
#[derive(Debug, Clone, PartialEq, Deserialize, Serialize, TS)]
pub struct JSONRPCResponse {
pub id: RequestId,
pub result: Result,
}
/// A response to a request that indicates an error occurred.
#[derive(Debug, Clone, PartialEq, Deserialize, Serialize, TS)]
pub struct JSONRPCError {
pub error: JSONRPCErrorError,
pub id: RequestId,
}
#[derive(Debug, Clone, PartialEq, Deserialize, Serialize, TS)]
pub struct JSONRPCErrorError {
pub code: i64,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub data: Option<serde_json::Value>,
pub message: String,
}

View File

@@ -0,0 +1,5 @@
mod jsonrpc_lite;
mod protocol;
pub use jsonrpc_lite::*;
pub use protocol::*;

View File

@@ -1,77 +1,27 @@
use std::collections::HashMap;
use std::fmt::Display;
use std::path::PathBuf;
use crate::config_types::ReasoningEffort;
use crate::config_types::ReasoningSummary;
use crate::config_types::SandboxMode;
use crate::config_types::Verbosity;
use crate::protocol::AskForApproval;
use crate::protocol::EventMsg;
use crate::protocol::FileChange;
use crate::protocol::ReviewDecision;
use crate::protocol::SandboxPolicy;
use crate::protocol::TurnAbortReason;
use mcp_types::JSONRPCNotification;
use mcp_types::RequestId;
use crate::JSONRPCNotification;
use crate::JSONRPCRequest;
use crate::RequestId;
use codex_protocol::ConversationId;
use codex_protocol::config_types::ReasoningEffort;
use codex_protocol::config_types::ReasoningSummary;
use codex_protocol::config_types::SandboxMode;
use codex_protocol::config_types::Verbosity;
use codex_protocol::protocol::AskForApproval;
use codex_protocol::protocol::EventMsg;
use codex_protocol::protocol::FileChange;
use codex_protocol::protocol::ReviewDecision;
use codex_protocol::protocol::SandboxPolicy;
use codex_protocol::protocol::TurnAbortReason;
use paste::paste;
use serde::Deserialize;
use serde::Serialize;
use strum_macros::Display;
use ts_rs::TS;
use uuid::Uuid;
#[derive(Debug, Clone, Copy, PartialEq, Eq, TS, Hash)]
#[ts(type = "string")]
pub struct ConversationId {
uuid: Uuid,
}
impl ConversationId {
pub fn new() -> Self {
Self {
uuid: Uuid::now_v7(),
}
}
pub fn from_string(s: &str) -> Result<Self, uuid::Error> {
Ok(Self {
uuid: Uuid::parse_str(s)?,
})
}
}
impl Default for ConversationId {
fn default() -> Self {
Self::new()
}
}
impl Display for ConversationId {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "{}", self.uuid)
}
}
impl Serialize for ConversationId {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
S: serde::Serializer,
{
serializer.collect_str(&self.uuid)
}
}
impl<'de> Deserialize<'de> for ConversationId {
fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
where
D: serde::Deserializer<'de>,
{
let value = String::deserialize(deserializer)?;
let uuid = Uuid::parse_str(&value).map_err(serde::de::Error::custom)?;
Ok(Self { uuid })
}
}
#[derive(Serialize, Deserialize, Clone, Debug, PartialEq, TS)]
#[ts(type = "string")]
pub struct GitSha(pub String);
@@ -89,117 +39,137 @@ pub enum AuthMode {
ChatGPT,
}
/// Request from the client to the server.
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(tag = "method", rename_all = "camelCase")]
pub enum ClientRequest {
/// Generates an `enum ClientRequest` where each variant is a request that the
/// client can send to the server. Each variant has associated `params` and
/// `response` types. Also generates a `export_client_responses()` function to
/// export all response types to TypeScript.
macro_rules! client_request_definitions {
(
$(
$(#[$variant_meta:meta])*
$variant:ident {
params: $(#[$params_meta:meta])* $params:ty,
response: $response:ty,
}
),* $(,)?
) => {
/// Request from the client to the server.
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(tag = "method", rename_all = "camelCase")]
pub enum ClientRequest {
$(
$(#[$variant_meta])*
$variant {
#[serde(rename = "id")]
request_id: RequestId,
$(#[$params_meta])*
params: $params,
},
)*
}
pub fn export_client_responses(
out_dir: &::std::path::Path,
) -> ::std::result::Result<(), ::ts_rs::ExportError> {
$(
<$response as ::ts_rs::TS>::export_all_to(out_dir)?;
)*
Ok(())
}
};
}
client_request_definitions! {
Initialize {
#[serde(rename = "id")]
request_id: RequestId,
params: InitializeParams,
response: InitializeResponse,
},
NewConversation {
#[serde(rename = "id")]
request_id: RequestId,
params: NewConversationParams,
response: NewConversationResponse,
},
/// List recorded Codex conversations (rollouts) with optional pagination and search.
ListConversations {
#[serde(rename = "id")]
request_id: RequestId,
params: ListConversationsParams,
response: ListConversationsResponse,
},
/// Resume a recorded Codex conversation from a rollout file.
ResumeConversation {
#[serde(rename = "id")]
request_id: RequestId,
params: ResumeConversationParams,
response: ResumeConversationResponse,
},
ArchiveConversation {
#[serde(rename = "id")]
request_id: RequestId,
params: ArchiveConversationParams,
response: ArchiveConversationResponse,
},
SendUserMessage {
#[serde(rename = "id")]
request_id: RequestId,
params: SendUserMessageParams,
response: SendUserMessageResponse,
},
SendUserTurn {
#[serde(rename = "id")]
request_id: RequestId,
params: SendUserTurnParams,
response: SendUserTurnResponse,
},
InterruptConversation {
#[serde(rename = "id")]
request_id: RequestId,
params: InterruptConversationParams,
response: InterruptConversationResponse,
},
AddConversationListener {
#[serde(rename = "id")]
request_id: RequestId,
params: AddConversationListenerParams,
response: AddConversationSubscriptionResponse,
},
RemoveConversationListener {
#[serde(rename = "id")]
request_id: RequestId,
params: RemoveConversationListenerParams,
response: RemoveConversationSubscriptionResponse,
},
GitDiffToRemote {
#[serde(rename = "id")]
request_id: RequestId,
params: GitDiffToRemoteParams,
response: GitDiffToRemoteResponse,
},
LoginApiKey {
#[serde(rename = "id")]
request_id: RequestId,
params: LoginApiKeyParams,
response: LoginApiKeyResponse,
},
LoginChatGpt {
#[serde(rename = "id")]
request_id: RequestId,
params: #[ts(type = "undefined")] #[serde(skip_serializing_if = "Option::is_none")] Option<()>,
response: LoginChatGptResponse,
},
CancelLoginChatGpt {
#[serde(rename = "id")]
request_id: RequestId,
params: CancelLoginChatGptParams,
response: CancelLoginChatGptResponse,
},
LogoutChatGpt {
#[serde(rename = "id")]
request_id: RequestId,
params: #[ts(type = "undefined")] #[serde(skip_serializing_if = "Option::is_none")] Option<()>,
response: LogoutChatGptResponse,
},
GetAuthStatus {
#[serde(rename = "id")]
request_id: RequestId,
params: GetAuthStatusParams,
response: GetAuthStatusResponse,
},
GetUserSavedConfig {
#[serde(rename = "id")]
request_id: RequestId,
params: #[ts(type = "undefined")] #[serde(skip_serializing_if = "Option::is_none")] Option<()>,
response: GetUserSavedConfigResponse,
},
SetDefaultModel {
#[serde(rename = "id")]
request_id: RequestId,
params: SetDefaultModelParams,
response: SetDefaultModelResponse,
},
GetUserAgent {
#[serde(rename = "id")]
request_id: RequestId,
params: #[ts(type = "undefined")] #[serde(skip_serializing_if = "Option::is_none")] Option<()>,
response: GetUserAgentResponse,
},
UserInfo {
#[serde(rename = "id")]
request_id: RequestId,
params: #[ts(type = "undefined")] #[serde(skip_serializing_if = "Option::is_none")] Option<()>,
response: UserInfoResponse,
},
FuzzyFileSearch {
#[serde(rename = "id")]
request_id: RequestId,
params: FuzzyFileSearchParams,
response: FuzzyFileSearchResponse,
},
/// Execute a command (argv vector) under the server's sandbox.
ExecOneOffCommand {
#[serde(rename = "id")]
request_id: RequestId,
params: ExecOneOffCommandParams,
response: ExecOneOffCommandResponse,
},
}
@@ -429,7 +399,7 @@ pub struct ExecOneOffCommandParams {
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct ExecArbitraryCommandResponse {
pub struct ExecOneOffCommandResponse {
pub exit_code: i32,
pub stdout: String,
pub stderr: String,
@@ -633,30 +603,74 @@ pub enum InputItem {
},
}
// TODO(mbolin): Need test to ensure these constants match the enum variants.
/// Generates an `enum ServerRequest` where each variant is a request that the
/// server can send to the client along with the corresponding params and
/// response types. It also generates helper types used by the app/server
/// infrastructure (payload enum, request constructor, and export helpers).
macro_rules! server_request_definitions {
(
$(
$(#[$variant_meta:meta])*
$variant:ident
),* $(,)?
) => {
paste! {
/// Request initiated from the server and sent to the client.
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(tag = "method", rename_all = "camelCase")]
pub enum ServerRequest {
$(
$(#[$variant_meta])*
$variant {
#[serde(rename = "id")]
request_id: RequestId,
params: [<$variant Params>],
},
)*
}
pub const APPLY_PATCH_APPROVAL_METHOD: &str = "applyPatchApproval";
pub const EXEC_COMMAND_APPROVAL_METHOD: &str = "execCommandApproval";
#[derive(Debug, Clone, PartialEq)]
pub enum ServerRequestPayload {
$( $variant([<$variant Params>]), )*
}
/// Request initiated from the server and sent to the client.
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(tag = "method", rename_all = "camelCase")]
pub enum ServerRequest {
impl ServerRequestPayload {
pub fn request_with_id(self, request_id: RequestId) -> ServerRequest {
match self {
$(Self::$variant(params) => ServerRequest::$variant { request_id, params },)*
}
}
}
}
pub fn export_server_responses(
out_dir: &::std::path::Path,
) -> ::std::result::Result<(), ::ts_rs::ExportError> {
paste! {
$(<[<$variant Response>] as ::ts_rs::TS>::export_all_to(out_dir)?;)*
}
Ok(())
}
};
}
impl TryFrom<JSONRPCRequest> for ServerRequest {
type Error = serde_json::Error;
fn try_from(value: JSONRPCRequest) -> Result<Self, Self::Error> {
serde_json::from_value(serde_json::to_value(value)?)
}
}
server_request_definitions! {
/// Request to approve a patch.
ApplyPatchApproval {
#[serde(rename = "id")]
request_id: RequestId,
params: ApplyPatchApprovalParams,
},
ApplyPatchApproval,
/// Request to exec a command.
ExecCommandApproval {
#[serde(rename = "id")]
request_id: RequestId,
params: ExecCommandApprovalParams,
},
ExecCommandApproval,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct ApplyPatchApprovalParams {
pub conversation_id: ConversationId,
/// Use to correlate this with [codex_core::protocol::PatchApplyBeginEvent]
@@ -673,6 +687,7 @@ pub struct ApplyPatchApprovalParams {
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, TS)]
#[serde(rename_all = "camelCase")]
pub struct ExecCommandApprovalParams {
pub conversation_id: ConversationId,
/// Use to correlate this with [codex_core::protocol::ExecCommandBeginEvent]
@@ -746,6 +761,7 @@ pub struct SessionConfiguredNotification {
pub history_log_id: u64,
/// Current number of entries in the history log.
#[ts(type = "number")]
pub history_entry_count: usize,
/// Optional initial messages (as events) for resumed sessions.
@@ -842,12 +858,6 @@ mod tests {
Ok(())
}
#[test]
fn test_conversation_id_default_is_not_zeroes() {
let id = ConversationId::default();
assert_ne!(id.uuid, Uuid::nil());
}
#[test]
fn conversation_id_serializes_as_plain_string() -> Result<()> {
let id = ConversationId::from_string("67e55044-10b1-426f-9247-bb680e5fe0c8")?;
@@ -883,4 +893,39 @@ mod tests {
);
Ok(())
}
#[test]
fn serialize_server_request() -> Result<()> {
let conversation_id = ConversationId::from_string("67e55044-10b1-426f-9247-bb680e5fe0c8")?;
let params = ExecCommandApprovalParams {
conversation_id,
call_id: "call-42".to_string(),
command: vec!["echo".to_string(), "hello".to_string()],
cwd: PathBuf::from("/tmp"),
reason: Some("because tests".to_string()),
};
let request = ServerRequest::ExecCommandApproval {
request_id: RequestId::Integer(7),
params: params.clone(),
};
assert_eq!(
json!({
"method": "execCommandApproval",
"id": 7,
"params": {
"conversationId": "67e55044-10b1-426f-9247-bb680e5fe0c8",
"callId": "call-42",
"command": ["echo", "hello"],
"cwd": "/tmp",
"reason": "because tests",
}
}),
serde_json::to_value(&request)?,
);
let payload = ServerRequestPayload::ExecCommandApproval(params);
assert_eq!(payload.request_with_id(RequestId::Integer(7)), request);
Ok(())
}
}

View File

@@ -22,10 +22,8 @@ codex-core = { workspace = true }
codex-file-search = { workspace = true }
codex-login = { workspace = true }
codex-protocol = { workspace = true }
codex-app-server-protocol = { workspace = true }
codex-utils-json-to-toml = { workspace = true }
# We should only be using mcp-types for JSON-RPC types: it would be nice to
# split this out into a separate crate at some point.
mcp-types = { workspace = true }
serde = { workspace = true, features = ["derive"] }
serde_json = { workspace = true }
tokio = { workspace = true, features = [

View File

@@ -3,6 +3,52 @@ use crate::error_code::INVALID_REQUEST_ERROR_CODE;
use crate::fuzzy_file_search::run_fuzzy_file_search;
use crate::outgoing_message::OutgoingMessageSender;
use crate::outgoing_message::OutgoingNotification;
use codex_app_server_protocol::AddConversationListenerParams;
use codex_app_server_protocol::AddConversationSubscriptionResponse;
use codex_app_server_protocol::ApplyPatchApprovalParams;
use codex_app_server_protocol::ApplyPatchApprovalResponse;
use codex_app_server_protocol::ArchiveConversationParams;
use codex_app_server_protocol::ArchiveConversationResponse;
use codex_app_server_protocol::AuthStatusChangeNotification;
use codex_app_server_protocol::ClientRequest;
use codex_app_server_protocol::ConversationSummary;
use codex_app_server_protocol::ExecCommandApprovalParams;
use codex_app_server_protocol::ExecCommandApprovalResponse;
use codex_app_server_protocol::ExecOneOffCommandParams;
use codex_app_server_protocol::ExecOneOffCommandResponse;
use codex_app_server_protocol::FuzzyFileSearchParams;
use codex_app_server_protocol::FuzzyFileSearchResponse;
use codex_app_server_protocol::GetUserAgentResponse;
use codex_app_server_protocol::GetUserSavedConfigResponse;
use codex_app_server_protocol::GitDiffToRemoteResponse;
use codex_app_server_protocol::InputItem as WireInputItem;
use codex_app_server_protocol::InterruptConversationParams;
use codex_app_server_protocol::InterruptConversationResponse;
use codex_app_server_protocol::JSONRPCErrorError;
use codex_app_server_protocol::ListConversationsParams;
use codex_app_server_protocol::ListConversationsResponse;
use codex_app_server_protocol::LoginApiKeyParams;
use codex_app_server_protocol::LoginApiKeyResponse;
use codex_app_server_protocol::LoginChatGptCompleteNotification;
use codex_app_server_protocol::LoginChatGptResponse;
use codex_app_server_protocol::NewConversationParams;
use codex_app_server_protocol::NewConversationResponse;
use codex_app_server_protocol::RemoveConversationListenerParams;
use codex_app_server_protocol::RemoveConversationSubscriptionResponse;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::Result as JsonRpcResult;
use codex_app_server_protocol::ResumeConversationParams;
use codex_app_server_protocol::SendUserMessageParams;
use codex_app_server_protocol::SendUserMessageResponse;
use codex_app_server_protocol::SendUserTurnParams;
use codex_app_server_protocol::SendUserTurnResponse;
use codex_app_server_protocol::ServerNotification;
use codex_app_server_protocol::ServerRequestPayload;
use codex_app_server_protocol::SessionConfiguredNotification;
use codex_app_server_protocol::SetDefaultModelParams;
use codex_app_server_protocol::SetDefaultModelResponse;
use codex_app_server_protocol::UserInfoResponse;
use codex_app_server_protocol::UserSavedConfig;
use codex_core::AuthManager;
use codex_core::CodexConversation;
use codex_core::ConversationManager;
@@ -36,58 +82,12 @@ use codex_core::protocol::ReviewDecision;
use codex_login::ServerOptions as LoginServerOptions;
use codex_login::ShutdownHandle;
use codex_login::run_login_server;
use codex_protocol::mcp_protocol::APPLY_PATCH_APPROVAL_METHOD;
use codex_protocol::mcp_protocol::AddConversationListenerParams;
use codex_protocol::mcp_protocol::AddConversationSubscriptionResponse;
use codex_protocol::mcp_protocol::ApplyPatchApprovalParams;
use codex_protocol::mcp_protocol::ApplyPatchApprovalResponse;
use codex_protocol::mcp_protocol::ArchiveConversationParams;
use codex_protocol::mcp_protocol::ArchiveConversationResponse;
use codex_protocol::mcp_protocol::AuthStatusChangeNotification;
use codex_protocol::mcp_protocol::ClientRequest;
use codex_protocol::mcp_protocol::ConversationId;
use codex_protocol::mcp_protocol::ConversationSummary;
use codex_protocol::mcp_protocol::EXEC_COMMAND_APPROVAL_METHOD;
use codex_protocol::mcp_protocol::ExecArbitraryCommandResponse;
use codex_protocol::mcp_protocol::ExecCommandApprovalParams;
use codex_protocol::mcp_protocol::ExecCommandApprovalResponse;
use codex_protocol::mcp_protocol::ExecOneOffCommandParams;
use codex_protocol::mcp_protocol::FuzzyFileSearchParams;
use codex_protocol::mcp_protocol::FuzzyFileSearchResponse;
use codex_protocol::mcp_protocol::GetUserAgentResponse;
use codex_protocol::mcp_protocol::GetUserSavedConfigResponse;
use codex_protocol::mcp_protocol::GitDiffToRemoteResponse;
use codex_protocol::mcp_protocol::InputItem as WireInputItem;
use codex_protocol::mcp_protocol::InterruptConversationParams;
use codex_protocol::mcp_protocol::InterruptConversationResponse;
use codex_protocol::mcp_protocol::ListConversationsParams;
use codex_protocol::mcp_protocol::ListConversationsResponse;
use codex_protocol::mcp_protocol::LoginApiKeyParams;
use codex_protocol::mcp_protocol::LoginApiKeyResponse;
use codex_protocol::mcp_protocol::LoginChatGptCompleteNotification;
use codex_protocol::mcp_protocol::LoginChatGptResponse;
use codex_protocol::mcp_protocol::NewConversationParams;
use codex_protocol::mcp_protocol::NewConversationResponse;
use codex_protocol::mcp_protocol::RemoveConversationListenerParams;
use codex_protocol::mcp_protocol::RemoveConversationSubscriptionResponse;
use codex_protocol::mcp_protocol::ResumeConversationParams;
use codex_protocol::mcp_protocol::SendUserMessageParams;
use codex_protocol::mcp_protocol::SendUserMessageResponse;
use codex_protocol::mcp_protocol::SendUserTurnParams;
use codex_protocol::mcp_protocol::SendUserTurnResponse;
use codex_protocol::mcp_protocol::ServerNotification;
use codex_protocol::mcp_protocol::SessionConfiguredNotification;
use codex_protocol::mcp_protocol::SetDefaultModelParams;
use codex_protocol::mcp_protocol::SetDefaultModelResponse;
use codex_protocol::mcp_protocol::UserInfoResponse;
use codex_protocol::mcp_protocol::UserSavedConfig;
use codex_protocol::ConversationId;
use codex_protocol::models::ContentItem;
use codex_protocol::models::ResponseItem;
use codex_protocol::protocol::InputMessageKind;
use codex_protocol::protocol::USER_MESSAGE_BEGIN;
use codex_utils_json_to_toml::json_to_toml;
use mcp_types::JSONRPCErrorError;
use mcp_types::RequestId;
use std::collections::HashMap;
use std::ffi::OsStr;
use std::path::PathBuf;
@@ -193,28 +193,43 @@ impl CodexMessageProcessor {
ClientRequest::LoginApiKey { request_id, params } => {
self.login_api_key(request_id, params).await;
}
ClientRequest::LoginChatGpt { request_id } => {
ClientRequest::LoginChatGpt {
request_id,
params: _,
} => {
self.login_chatgpt(request_id).await;
}
ClientRequest::CancelLoginChatGpt { request_id, params } => {
self.cancel_login_chatgpt(request_id, params.login_id).await;
}
ClientRequest::LogoutChatGpt { request_id } => {
ClientRequest::LogoutChatGpt {
request_id,
params: _,
} => {
self.logout_chatgpt(request_id).await;
}
ClientRequest::GetAuthStatus { request_id, params } => {
self.get_auth_status(request_id, params).await;
}
ClientRequest::GetUserSavedConfig { request_id } => {
ClientRequest::GetUserSavedConfig {
request_id,
params: _,
} => {
self.get_user_saved_config(request_id).await;
}
ClientRequest::SetDefaultModel { request_id, params } => {
self.set_default_model(request_id, params).await;
}
ClientRequest::GetUserAgent { request_id } => {
ClientRequest::GetUserAgent {
request_id,
params: _,
} => {
self.get_user_agent(request_id).await;
}
ClientRequest::UserInfo { request_id } => {
ClientRequest::UserInfo {
request_id,
params: _,
} => {
self.get_user_info(request_id).await;
}
ClientRequest::FuzzyFileSearch { request_id, params } => {
@@ -371,7 +386,7 @@ impl CodexMessageProcessor {
self.outgoing
.send_response(
request_id,
codex_protocol::mcp_protocol::CancelLoginChatGptResponse {},
codex_app_server_protocol::CancelLoginChatGptResponse {},
)
.await;
} else {
@@ -407,7 +422,7 @@ impl CodexMessageProcessor {
self.outgoing
.send_response(
request_id,
codex_protocol::mcp_protocol::LogoutChatGptResponse {},
codex_app_server_protocol::LogoutChatGptResponse {},
)
.await;
@@ -425,7 +440,7 @@ impl CodexMessageProcessor {
async fn get_auth_status(
&self,
request_id: RequestId,
params: codex_protocol::mcp_protocol::GetAuthStatusParams,
params: codex_app_server_protocol::GetAuthStatusParams,
) {
let include_token = params.include_token.unwrap_or(false);
let do_refresh = params.refresh_token.unwrap_or(false);
@@ -440,7 +455,7 @@ impl CodexMessageProcessor {
let requires_openai_auth = self.config.model_provider.requires_openai_auth;
let response = if !requires_openai_auth {
codex_protocol::mcp_protocol::GetAuthStatusResponse {
codex_app_server_protocol::GetAuthStatusResponse {
auth_method: None,
auth_token: None,
requires_openai_auth: Some(false),
@@ -460,13 +475,13 @@ impl CodexMessageProcessor {
(None, None)
}
};
codex_protocol::mcp_protocol::GetAuthStatusResponse {
codex_app_server_protocol::GetAuthStatusResponse {
auth_method: reported_auth_method,
auth_token: token_opt,
requires_openai_auth: Some(true),
}
}
None => codex_protocol::mcp_protocol::GetAuthStatusResponse {
None => codex_app_server_protocol::GetAuthStatusResponse {
auth_method: None,
auth_token: None,
requires_openai_auth: Some(true),
@@ -617,7 +632,7 @@ impl CodexMessageProcessor {
.await
{
Ok(output) => {
let response = ExecArbitraryCommandResponse {
let response = ExecOneOffCommandResponse {
exit_code: output.exit_code,
stdout: output.stdout.text,
stderr: output.stderr.text,
@@ -793,7 +808,7 @@ impl CodexMessageProcessor {
});
// Reply with conversation id + model and initial messages (when present)
let response = codex_protocol::mcp_protocol::ResumeConversationResponse {
let response = codex_app_server_protocol::ResumeConversationResponse {
conversation_id,
model: session_configured.model.clone(),
initial_messages,
@@ -1253,9 +1268,8 @@ async fn apply_bespoke_event_handling(
reason,
grant_root,
};
let value = serde_json::to_value(&params).unwrap_or_default();
let rx = outgoing
.send_request(APPLY_PATCH_APPROVAL_METHOD, Some(value))
.send_request(ServerRequestPayload::ApplyPatchApproval(params))
.await;
// TODO(mbolin): Enforce a timeout so this task does not live indefinitely?
tokio::spawn(async move {
@@ -1275,9 +1289,8 @@ async fn apply_bespoke_event_handling(
cwd,
reason,
};
let value = serde_json::to_value(&params).unwrap_or_default();
let rx = outgoing
.send_request(EXEC_COMMAND_APPROVAL_METHOD, Some(value))
.send_request(ServerRequestPayload::ExecCommandApproval(params))
.await;
// TODO(mbolin): Enforce a timeout so this task does not live indefinitely?
@@ -1348,7 +1361,7 @@ fn derive_config_from_params(
async fn on_patch_approval_response(
event_id: String,
receiver: oneshot::Receiver<mcp_types::Result>,
receiver: oneshot::Receiver<JsonRpcResult>,
codex: Arc<CodexConversation>,
) {
let response = receiver.await;
@@ -1390,7 +1403,7 @@ async fn on_patch_approval_response(
async fn on_exec_approval_response(
event_id: String,
receiver: oneshot::Receiver<mcp_types::Result>,
receiver: oneshot::Receiver<JsonRpcResult>,
conversation: Arc<CodexConversation>,
) {
let response = receiver.await;

View File

@@ -4,8 +4,8 @@ use std::path::PathBuf;
use std::sync::Arc;
use std::sync::atomic::AtomicBool;
use codex_app_server_protocol::FuzzyFileSearchResult;
use codex_file_search as file_search;
use codex_protocol::mcp_protocol::FuzzyFileSearchResult;
use tokio::task::JoinSet;
use tracing::warn;

View File

@@ -8,7 +8,7 @@ use codex_common::CliConfigOverrides;
use codex_core::config::Config;
use codex_core::config::ConfigOverrides;
use mcp_types::JSONRPCMessage;
use codex_app_server_protocol::JSONRPCMessage;
use tokio::io::AsyncBufReadExt;
use tokio::io::AsyncWriteExt;
use tokio::io::BufReader;
@@ -111,17 +111,17 @@ pub async fn run_main(
let stdout_writer_handle = tokio::spawn(async move {
let mut stdout = io::stdout();
while let Some(outgoing_message) = outgoing_rx.recv().await {
let msg: JSONRPCMessage = outgoing_message.into();
match serde_json::to_string(&msg) {
Ok(json) => {
let Ok(value) = serde_json::to_value(outgoing_message) else {
error!("Failed to convert OutgoingMessage to JSON value");
continue;
};
match serde_json::to_string(&value) {
Ok(mut json) => {
json.push('\n');
if let Err(e) = stdout.write_all(json.as_bytes()).await {
error!("Failed to write to stdout: {e}");
break;
}
if let Err(e) = stdout.write_all(b"\n").await {
error!("Failed to write newline to stdout: {e}");
break;
}
}
Err(e) => error!("Failed to serialize JSONRPCMessage: {e}"),
}


@@ -3,20 +3,20 @@ use std::path::PathBuf;
use crate::codex_message_processor::CodexMessageProcessor;
use crate::error_code::INVALID_REQUEST_ERROR_CODE;
use crate::outgoing_message::OutgoingMessageSender;
use codex_protocol::mcp_protocol::ClientInfo;
use codex_protocol::mcp_protocol::ClientRequest;
use codex_protocol::mcp_protocol::InitializeResponse;
use codex_app_server_protocol::ClientInfo;
use codex_app_server_protocol::ClientRequest;
use codex_app_server_protocol::InitializeResponse;
use codex_app_server_protocol::JSONRPCError;
use codex_app_server_protocol::JSONRPCErrorError;
use codex_app_server_protocol::JSONRPCNotification;
use codex_app_server_protocol::JSONRPCRequest;
use codex_app_server_protocol::JSONRPCResponse;
use codex_core::AuthManager;
use codex_core::ConversationManager;
use codex_core::config::Config;
use codex_core::default_client::USER_AGENT_SUFFIX;
use codex_core::default_client::get_codex_user_agent;
use mcp_types::JSONRPCError;
use mcp_types::JSONRPCErrorError;
use mcp_types::JSONRPCNotification;
use mcp_types::JSONRPCRequest;
use mcp_types::JSONRPCResponse;
use std::sync::Arc;
pub(crate) struct MessageProcessor {


@@ -2,16 +2,12 @@ use std::collections::HashMap;
use std::sync::atomic::AtomicI64;
use std::sync::atomic::Ordering;
use codex_protocol::mcp_protocol::ServerNotification;
use mcp_types::JSONRPC_VERSION;
use mcp_types::JSONRPCError;
use mcp_types::JSONRPCErrorError;
use mcp_types::JSONRPCMessage;
use mcp_types::JSONRPCNotification;
use mcp_types::JSONRPCRequest;
use mcp_types::JSONRPCResponse;
use mcp_types::RequestId;
use mcp_types::Result;
use codex_app_server_protocol::JSONRPCErrorError;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::Result;
use codex_app_server_protocol::ServerNotification;
use codex_app_server_protocol::ServerRequest;
use codex_app_server_protocol::ServerRequestPayload;
use serde::Serialize;
use tokio::sync::Mutex;
use tokio::sync::mpsc;
@@ -38,8 +34,7 @@ impl OutgoingMessageSender {
pub(crate) async fn send_request(
&self,
method: &str,
params: Option<serde_json::Value>,
request: ServerRequestPayload,
) -> oneshot::Receiver<Result> {
let id = RequestId::Integer(self.next_request_id.fetch_add(1, Ordering::Relaxed));
let outgoing_message_id = id.clone();
@@ -49,11 +44,8 @@ impl OutgoingMessageSender {
request_id_to_callback.insert(id, tx_approve);
}
let outgoing_message = OutgoingMessage::Request(OutgoingRequest {
id: outgoing_message_id,
method: method.to_string(),
params,
});
let outgoing_message =
OutgoingMessage::Request(request.request_with_id(outgoing_message_id));
let _ = self.sender.send(outgoing_message);
rx_approve
}
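The hunk above changes `send_request` so callers no longer pass a raw method string plus optional JSON params; they hand over a typed `ServerRequestPayload`, and the freshly allocated request id is attached via `request_with_id`. A minimal sketch of that pattern with hypothetical, simplified types (the real definitions in `codex-app-server-protocol` use `RequestId` instead of a plain integer and carry many more variants):

```rust
// Hypothetical stand-ins for the real codex-app-server-protocol types.
#[derive(Debug)]
struct ExecCommandApprovalParams {
    command: Vec<String>,
}

// Payload without an id; the id is attached when the request is enqueued.
#[derive(Debug)]
enum ServerRequestPayload {
    ExecCommandApproval(ExecCommandApprovalParams),
}

// Fully formed request, ready to serialize onto the wire.
#[derive(Debug)]
enum ServerRequest {
    ExecCommandApproval {
        id: i64,
        params: ExecCommandApprovalParams,
    },
}

impl ServerRequestPayload {
    // Attach the allocated request id to produce a complete request.
    fn request_with_id(self, id: i64) -> ServerRequest {
        match self {
            ServerRequestPayload::ExecCommandApproval(params) => {
                ServerRequest::ExecCommandApproval { id, params }
            }
        }
    }
}

fn main() {
    let payload = ServerRequestPayload::ExecCommandApproval(ExecCommandApprovalParams {
        command: vec!["git".into(), "status".into()],
    });
    println!("{:?}", payload.request_with_id(7));
}
```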
@@ -116,8 +108,10 @@ impl OutgoingMessageSender {
}
/// Outgoing message from the server to the client.
#[derive(Debug, Clone, Serialize)]
#[serde(untagged)]
pub(crate) enum OutgoingMessage {
Request(OutgoingRequest),
Request(ServerRequest),
Notification(OutgoingNotification),
/// AppServerNotification is specific to the case where this is run as an
/// "app server" as opposed to an MCP server.
@@ -126,64 +120,6 @@ pub(crate) enum OutgoingMessage {
Error(OutgoingError),
}
impl From<OutgoingMessage> for JSONRPCMessage {
fn from(val: OutgoingMessage) -> Self {
use OutgoingMessage::*;
match val {
Request(OutgoingRequest { id, method, params }) => {
JSONRPCMessage::Request(JSONRPCRequest {
jsonrpc: JSONRPC_VERSION.into(),
id,
method,
params,
})
}
Notification(OutgoingNotification { method, params }) => {
JSONRPCMessage::Notification(JSONRPCNotification {
jsonrpc: JSONRPC_VERSION.into(),
method,
params,
})
}
AppServerNotification(notification) => {
let method = notification.to_string();
let params = match notification.to_params() {
Ok(params) => Some(params),
Err(err) => {
warn!("failed to serialize notification params: {err}");
None
}
};
JSONRPCMessage::Notification(JSONRPCNotification {
jsonrpc: JSONRPC_VERSION.into(),
method,
params,
})
}
Response(OutgoingResponse { id, result }) => {
JSONRPCMessage::Response(JSONRPCResponse {
jsonrpc: JSONRPC_VERSION.into(),
id,
result,
})
}
Error(OutgoingError { id, error }) => JSONRPCMessage::Error(JSONRPCError {
jsonrpc: JSONRPC_VERSION.into(),
id,
error,
}),
}
}
}
#[derive(Debug, Clone, PartialEq, Serialize)]
pub(crate) struct OutgoingRequest {
pub id: RequestId,
pub method: String,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub params: Option<serde_json::Value>,
}
#[derive(Debug, Clone, PartialEq, Serialize)]
pub(crate) struct OutgoingNotification {
pub method: String,
@@ -205,7 +141,7 @@ pub(crate) struct OutgoingError {
#[cfg(test)]
mod tests {
use codex_protocol::mcp_protocol::LoginChatGptCompleteNotification;
use codex_app_server_protocol::LoginChatGptCompleteNotification;
use pretty_assertions::assert_eq;
use serde_json::json;
use uuid::Uuid;
@@ -221,18 +157,17 @@ mod tests {
error: None,
});
let jsonrpc_notification: JSONRPCMessage =
OutgoingMessage::AppServerNotification(notification).into();
let jsonrpc_notification = OutgoingMessage::AppServerNotification(notification);
assert_eq!(
JSONRPCMessage::Notification(JSONRPCNotification {
jsonrpc: "2.0".into(),
method: "loginChatGptComplete".into(),
params: Some(json!({
json!({
"method": "loginChatGptComplete",
"params": {
"loginId": Uuid::nil(),
"success": true,
})),
},
}),
jsonrpc_notification,
serde_json::to_value(jsonrpc_notification)
.expect("ensure the strum macros serialize the method field correctly"),
"ensure the strum macros serialize the method field correctly"
);
}
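With `OutgoingMessage` now marked `#[serde(untagged)]` and its `Request` variant holding a typed `ServerRequest`, each variant serializes directly as its inner payload, which is why the hand-written `From<OutgoingMessage> for JSONRPCMessage` conversion above could be deleted and the test now compares against `serde_json::to_value(...)`. A minimal sketch of that serde behavior, using hypothetical stand-in types rather than the real protocol structs:

```rust
use serde::Serialize;
use serde_json::json;

// Hypothetical stand-ins for the real protocol types.
#[derive(Serialize)]
struct Notification {
    method: String,
    params: serde_json::Value,
}

#[derive(Serialize)]
struct Response {
    id: i64,
    result: serde_json::Value,
}

// With `untagged`, each variant serializes as its inner value, so the enum
// already has the JSON-RPC wire shape and no manual conversion impl is needed.
#[derive(Serialize)]
#[serde(untagged)]
enum Outgoing {
    Notification(Notification),
    Response(Response),
}

fn main() {
    let msg = Outgoing::Notification(Notification {
        method: "loginChatGptComplete".to_string(),
        params: json!({ "loginId": "0000", "success": true }),
    });
    // e.g. {"method":"loginChatGptComplete","params":{"loginId":"0000","success":true}}
    println!("{}", serde_json::to_string(&msg).expect("serialize"));
}
```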


@@ -9,8 +9,7 @@ path = "lib.rs"
[dependencies]
anyhow = { workspace = true }
assert_cmd = { workspace = true }
codex-protocol = { workspace = true }
mcp-types = { workspace = true }
codex-app-server-protocol = { workspace = true }
serde = { workspace = true }
serde_json = { workspace = true }
tokio = { workspace = true, features = [


@@ -2,8 +2,8 @@ mod mcp_process;
mod mock_model_server;
mod responses;
use codex_app_server_protocol::JSONRPCResponse;
pub use mcp_process::McpProcess;
use mcp_types::JSONRPCResponse;
pub use mock_model_server::create_mock_chat_completions_server;
pub use responses::create_apply_patch_sse_response;
pub use responses::create_final_assistant_message_sse_response;


@@ -1,3 +1,4 @@
use std::collections::VecDeque;
use std::path::Path;
use std::process::Stdio;
use std::sync::atomic::AtomicI64;
@@ -11,29 +12,30 @@ use tokio::process::ChildStdout;
use anyhow::Context;
use assert_cmd::prelude::*;
use codex_protocol::mcp_protocol::AddConversationListenerParams;
use codex_protocol::mcp_protocol::ArchiveConversationParams;
use codex_protocol::mcp_protocol::CancelLoginChatGptParams;
use codex_protocol::mcp_protocol::ClientInfo;
use codex_protocol::mcp_protocol::ClientNotification;
use codex_protocol::mcp_protocol::GetAuthStatusParams;
use codex_protocol::mcp_protocol::InitializeParams;
use codex_protocol::mcp_protocol::InterruptConversationParams;
use codex_protocol::mcp_protocol::ListConversationsParams;
use codex_protocol::mcp_protocol::LoginApiKeyParams;
use codex_protocol::mcp_protocol::NewConversationParams;
use codex_protocol::mcp_protocol::RemoveConversationListenerParams;
use codex_protocol::mcp_protocol::ResumeConversationParams;
use codex_protocol::mcp_protocol::SendUserMessageParams;
use codex_protocol::mcp_protocol::SendUserTurnParams;
use codex_protocol::mcp_protocol::SetDefaultModelParams;
use codex_app_server_protocol::AddConversationListenerParams;
use codex_app_server_protocol::ArchiveConversationParams;
use codex_app_server_protocol::CancelLoginChatGptParams;
use codex_app_server_protocol::ClientInfo;
use codex_app_server_protocol::ClientNotification;
use codex_app_server_protocol::GetAuthStatusParams;
use codex_app_server_protocol::InitializeParams;
use codex_app_server_protocol::InterruptConversationParams;
use codex_app_server_protocol::ListConversationsParams;
use codex_app_server_protocol::LoginApiKeyParams;
use codex_app_server_protocol::NewConversationParams;
use codex_app_server_protocol::RemoveConversationListenerParams;
use codex_app_server_protocol::ResumeConversationParams;
use codex_app_server_protocol::SendUserMessageParams;
use codex_app_server_protocol::SendUserTurnParams;
use codex_app_server_protocol::ServerRequest;
use codex_app_server_protocol::SetDefaultModelParams;
use mcp_types::JSONRPC_VERSION;
use mcp_types::JSONRPCMessage;
use mcp_types::JSONRPCNotification;
use mcp_types::JSONRPCRequest;
use mcp_types::JSONRPCResponse;
use mcp_types::RequestId;
use codex_app_server_protocol::JSONRPCError;
use codex_app_server_protocol::JSONRPCMessage;
use codex_app_server_protocol::JSONRPCNotification;
use codex_app_server_protocol::JSONRPCRequest;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::RequestId;
use std::process::Command as StdCommand;
use tokio::process::Command;
@@ -46,6 +48,7 @@ pub struct McpProcess {
process: Child,
stdin: ChildStdin,
stdout: BufReader<ChildStdout>,
pending_user_messages: VecDeque<JSONRPCNotification>,
}
impl McpProcess {
@@ -116,6 +119,7 @@ impl McpProcess {
process,
stdin,
stdout,
pending_user_messages: VecDeque::new(),
})
}
@@ -317,7 +321,6 @@ impl McpProcess {
let request_id = self.next_request_id.fetch_add(1, Ordering::Relaxed);
let message = JSONRPCMessage::Request(JSONRPCRequest {
jsonrpc: JSONRPC_VERSION.into(),
id: RequestId::Integer(request_id),
method: method.to_string(),
params,
@@ -331,12 +334,8 @@ impl McpProcess {
id: RequestId,
result: serde_json::Value,
) -> anyhow::Result<()> {
self.send_jsonrpc_message(JSONRPCMessage::Response(JSONRPCResponse {
jsonrpc: JSONRPC_VERSION.into(),
id,
result,
}))
.await
self.send_jsonrpc_message(JSONRPCMessage::Response(JSONRPCResponse { id, result }))
.await
}
pub async fn send_notification(
@@ -345,7 +344,6 @@ impl McpProcess {
) -> anyhow::Result<()> {
let value = serde_json::to_value(notification)?;
self.send_jsonrpc_message(JSONRPCMessage::Notification(JSONRPCNotification {
jsonrpc: JSONRPC_VERSION.into(),
method: value
.get("method")
.and_then(|m| m.as_str())
@@ -373,18 +371,21 @@ impl McpProcess {
Ok(message)
}
pub async fn read_stream_until_request_message(&mut self) -> anyhow::Result<JSONRPCRequest> {
pub async fn read_stream_until_request_message(&mut self) -> anyhow::Result<ServerRequest> {
eprintln!("in read_stream_until_request_message()");
loop {
let message = self.read_jsonrpc_message().await?;
match message {
JSONRPCMessage::Notification(_) => {
eprintln!("notification: {message:?}");
JSONRPCMessage::Notification(notification) => {
eprintln!("notification: {notification:?}");
self.enqueue_user_message(notification);
}
JSONRPCMessage::Request(jsonrpc_request) => {
return Ok(jsonrpc_request);
return jsonrpc_request.try_into().with_context(
|| "failed to deserialize ServerRequest from JSONRPCRequest",
);
}
JSONRPCMessage::Error(_) => {
anyhow::bail!("unexpected JSONRPCMessage::Error: {message:?}");
@@ -405,8 +406,9 @@ impl McpProcess {
loop {
let message = self.read_jsonrpc_message().await?;
match message {
JSONRPCMessage::Notification(_) => {
eprintln!("notification: {message:?}");
JSONRPCMessage::Notification(notification) => {
eprintln!("notification: {notification:?}");
self.enqueue_user_message(notification);
}
JSONRPCMessage::Request(_) => {
anyhow::bail!("unexpected JSONRPCMessage::Request: {message:?}");
@@ -426,12 +428,13 @@ impl McpProcess {
pub async fn read_stream_until_error_message(
&mut self,
request_id: RequestId,
) -> anyhow::Result<mcp_types::JSONRPCError> {
) -> anyhow::Result<JSONRPCError> {
loop {
let message = self.read_jsonrpc_message().await?;
match message {
JSONRPCMessage::Notification(_) => {
eprintln!("notification: {message:?}");
JSONRPCMessage::Notification(notification) => {
eprintln!("notification: {notification:?}");
self.enqueue_user_message(notification);
}
JSONRPCMessage::Request(_) => {
anyhow::bail!("unexpected JSONRPCMessage::Request: {message:?}");
@@ -454,6 +457,10 @@ impl McpProcess {
) -> anyhow::Result<JSONRPCNotification> {
eprintln!("in read_stream_until_notification_message({method})");
if let Some(notification) = self.take_pending_notification_by_method(method) {
return Ok(notification);
}
loop {
let message = self.read_jsonrpc_message().await?;
match message {
@@ -461,6 +468,7 @@ impl McpProcess {
if notification.method == method {
return Ok(notification);
}
self.enqueue_user_message(notification);
}
JSONRPCMessage::Request(_) => {
anyhow::bail!("unexpected JSONRPCMessage::Request: {message:?}");
@@ -474,4 +482,21 @@ impl McpProcess {
}
}
}
fn take_pending_notification_by_method(&mut self, method: &str) -> Option<JSONRPCNotification> {
if let Some(pos) = self
.pending_user_messages
.iter()
.position(|notification| notification.method == method)
{
return self.pending_user_messages.remove(pos);
}
None
}
fn enqueue_user_message(&mut self, notification: JSONRPCNotification) {
if notification.method == "codex/event/user_message" {
self.pending_user_messages.push_back(notification);
}
}
}
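The test-harness change above buffers `codex/event/user_message` notifications that arrive while the reader is waiting for some other message, so a later `read_stream_until_notification_message("codex/event/user_message")` call can still observe them instead of timing out. A self-contained sketch of that buffer-then-drain idea, with simplified hypothetical types rather than the real `McpProcess`:

```rust
use std::collections::VecDeque;

#[derive(Debug)]
struct Notification {
    method: String,
}

#[derive(Default)]
struct Harness {
    pending: VecDeque<Notification>,
}

impl Harness {
    // Buffer user_message notifications we are not currently waiting for
    // instead of dropping them on the floor.
    fn enqueue(&mut self, n: Notification) {
        if n.method == "codex/event/user_message" {
            self.pending.push_back(n);
        }
    }

    // Serve a previously buffered notification before reading more from the stream.
    fn take_pending(&mut self, method: &str) -> Option<Notification> {
        let pos = self.pending.iter().position(|n| n.method == method)?;
        self.pending.remove(pos)
    }
}

fn main() {
    let mut h = Harness::default();
    h.enqueue(Notification {
        method: "codex/event/user_message".into(),
    });
    assert!(h.take_pending("codex/event/user_message").is_some());
    assert!(h.take_pending("codex/event/user_message").is_none());
}
```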


@@ -2,13 +2,13 @@ use std::path::Path;
use app_test_support::McpProcess;
use app_test_support::to_response;
use codex_app_server_protocol::ArchiveConversationParams;
use codex_app_server_protocol::ArchiveConversationResponse;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::NewConversationParams;
use codex_app_server_protocol::NewConversationResponse;
use codex_app_server_protocol::RequestId;
use codex_core::ARCHIVED_SESSIONS_SUBDIR;
use codex_protocol::mcp_protocol::ArchiveConversationParams;
use codex_protocol::mcp_protocol::ArchiveConversationResponse;
use codex_protocol::mcp_protocol::NewConversationParams;
use codex_protocol::mcp_protocol::NewConversationResponse;
use mcp_types::JSONRPCResponse;
use mcp_types::RequestId;
use tempfile::TempDir;
use tokio::time::timeout;


@@ -2,13 +2,13 @@ use std::path::Path;
use app_test_support::McpProcess;
use app_test_support::to_response;
use codex_protocol::mcp_protocol::AuthMode;
use codex_protocol::mcp_protocol::GetAuthStatusParams;
use codex_protocol::mcp_protocol::GetAuthStatusResponse;
use codex_protocol::mcp_protocol::LoginApiKeyParams;
use codex_protocol::mcp_protocol::LoginApiKeyResponse;
use mcp_types::JSONRPCResponse;
use mcp_types::RequestId;
use codex_app_server_protocol::AuthMode;
use codex_app_server_protocol::GetAuthStatusParams;
use codex_app_server_protocol::GetAuthStatusResponse;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::LoginApiKeyParams;
use codex_app_server_protocol::LoginApiKeyResponse;
use codex_app_server_protocol::RequestId;
use pretty_assertions::assert_eq;
use tempfile::TempDir;
use tokio::time::timeout;


@@ -5,25 +5,31 @@ use app_test_support::create_final_assistant_message_sse_response;
use app_test_support::create_mock_chat_completions_server;
use app_test_support::create_shell_sse_response;
use app_test_support::to_response;
use codex_app_server_protocol::AddConversationListenerParams;
use codex_app_server_protocol::AddConversationSubscriptionResponse;
use codex_app_server_protocol::ExecCommandApprovalParams;
use codex_app_server_protocol::InputItem;
use codex_app_server_protocol::JSONRPCNotification;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::NewConversationParams;
use codex_app_server_protocol::NewConversationResponse;
use codex_app_server_protocol::RemoveConversationListenerParams;
use codex_app_server_protocol::RemoveConversationSubscriptionResponse;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::SendUserMessageParams;
use codex_app_server_protocol::SendUserMessageResponse;
use codex_app_server_protocol::SendUserTurnParams;
use codex_app_server_protocol::SendUserTurnResponse;
use codex_app_server_protocol::ServerRequest;
use codex_core::protocol::AskForApproval;
use codex_core::protocol::SandboxPolicy;
use codex_core::protocol_config_types::ReasoningEffort;
use codex_core::protocol_config_types::ReasoningSummary;
use codex_core::spawn::CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR;
use codex_protocol::mcp_protocol::AddConversationListenerParams;
use codex_protocol::mcp_protocol::AddConversationSubscriptionResponse;
use codex_protocol::mcp_protocol::EXEC_COMMAND_APPROVAL_METHOD;
use codex_protocol::mcp_protocol::NewConversationParams;
use codex_protocol::mcp_protocol::NewConversationResponse;
use codex_protocol::mcp_protocol::RemoveConversationListenerParams;
use codex_protocol::mcp_protocol::RemoveConversationSubscriptionResponse;
use codex_protocol::mcp_protocol::SendUserMessageParams;
use codex_protocol::mcp_protocol::SendUserMessageResponse;
use codex_protocol::mcp_protocol::SendUserTurnParams;
use codex_protocol::mcp_protocol::SendUserTurnResponse;
use mcp_types::JSONRPCNotification;
use mcp_types::JSONRPCResponse;
use mcp_types::RequestId;
use codex_protocol::config_types::SandboxMode;
use codex_protocol::protocol::Event;
use codex_protocol::protocol::EventMsg;
use codex_protocol::protocol::InputMessageKind;
use pretty_assertions::assert_eq;
use std::env;
use tempfile::TempDir;
@@ -115,7 +121,7 @@ async fn test_codex_jsonrpc_conversation_flow() {
let send_user_id = mcp
.send_send_user_message_request(SendUserMessageParams {
conversation_id,
items: vec![codex_protocol::mcp_protocol::InputItem::Text {
items: vec![codex_app_server_protocol::InputItem::Text {
text: "text".to_string(),
}],
})
@@ -265,7 +271,7 @@ async fn test_send_user_turn_changes_approval_policy_behavior() {
let send_user_id = mcp
.send_send_user_message_request(SendUserMessageParams {
conversation_id,
items: vec![codex_protocol::mcp_protocol::InputItem::Text {
items: vec![codex_app_server_protocol::InputItem::Text {
text: "run python".to_string(),
}],
})
@@ -290,11 +296,28 @@ async fn test_send_user_turn_changes_approval_policy_behavior() {
.await
.expect("waiting for exec approval request timeout")
.expect("exec approval request");
assert_eq!(request.method, EXEC_COMMAND_APPROVAL_METHOD);
let ServerRequest::ExecCommandApproval { request_id, params } = request else {
panic!("expected ExecCommandApproval request, got: {request:?}");
};
assert_eq!(
ExecCommandApprovalParams {
conversation_id,
call_id: "call1".to_string(),
command: vec![
"python3".to_string(),
"-c".to_string(),
"print(42)".to_string(),
],
cwd: working_directory.clone(),
reason: None,
},
params
);
// Approve so the first turn can complete
mcp.send_response(
request.id,
request_id,
serde_json::json!({ "decision": codex_core::protocol::ReviewDecision::Approved }),
)
.await
@@ -313,7 +336,7 @@ async fn test_send_user_turn_changes_approval_policy_behavior() {
let send_turn_id = mcp
.send_send_user_turn_request(SendUserTurnParams {
conversation_id,
items: vec![codex_protocol::mcp_protocol::InputItem::Text {
items: vec![codex_app_server_protocol::InputItem::Text {
text: "run python again".to_string(),
}],
cwd: working_directory.clone(),
@@ -349,6 +372,234 @@ async fn test_send_user_turn_changes_approval_policy_behavior() {
}
// Helper: minimal config.toml pointing at mock provider.
#[tokio::test(flavor = "multi_thread", worker_threads = 4)]
async fn test_send_user_turn_updates_sandbox_and_cwd_between_turns() {
if env::var(CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR).is_ok() {
println!(
"Skipping test because it cannot execute when network is disabled in a Codex sandbox."
);
return;
}
let tmp = TempDir::new().expect("tmp dir");
let codex_home = tmp.path().join("codex_home");
std::fs::create_dir(&codex_home).expect("create codex home dir");
let workspace_root = tmp.path().join("workspace");
std::fs::create_dir(&workspace_root).expect("create workspace root");
let first_cwd = workspace_root.join("turn1");
let second_cwd = workspace_root.join("turn2");
std::fs::create_dir(&first_cwd).expect("create first cwd");
std::fs::create_dir(&second_cwd).expect("create second cwd");
let responses = vec![
create_shell_sse_response(
vec![
"bash".to_string(),
"-lc".to_string(),
"echo first turn".to_string(),
],
None,
Some(5000),
"call-first",
)
.expect("create first shell response"),
create_final_assistant_message_sse_response("done first")
.expect("create first final assistant message"),
create_shell_sse_response(
vec![
"bash".to_string(),
"-lc".to_string(),
"echo second turn".to_string(),
],
None,
Some(5000),
"call-second",
)
.expect("create second shell response"),
create_final_assistant_message_sse_response("done second")
.expect("create second final assistant message"),
];
let server = create_mock_chat_completions_server(responses).await;
create_config_toml(&codex_home, &server.uri()).expect("write config");
let mut mcp = McpProcess::new(&codex_home)
.await
.expect("spawn mcp process");
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize())
.await
.expect("init timeout")
.expect("init failed");
let new_conv_id = mcp
.send_new_conversation_request(NewConversationParams {
cwd: Some(first_cwd.to_string_lossy().into_owned()),
approval_policy: Some(AskForApproval::Never),
sandbox: Some(SandboxMode::WorkspaceWrite),
..Default::default()
})
.await
.expect("send newConversation");
let new_conv_resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(new_conv_id)),
)
.await
.expect("newConversation timeout")
.expect("newConversation resp");
let NewConversationResponse {
conversation_id,
model,
..
} = to_response::<NewConversationResponse>(new_conv_resp)
.expect("deserialize newConversation response");
let add_listener_id = mcp
.send_add_conversation_listener_request(AddConversationListenerParams { conversation_id })
.await
.expect("send addConversationListener");
timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(add_listener_id)),
)
.await
.expect("addConversationListener timeout")
.expect("addConversationListener resp");
let first_turn_id = mcp
.send_send_user_turn_request(SendUserTurnParams {
conversation_id,
items: vec![InputItem::Text {
text: "first turn".to_string(),
}],
cwd: first_cwd.clone(),
approval_policy: AskForApproval::Never,
sandbox_policy: SandboxPolicy::WorkspaceWrite {
writable_roots: vec![first_cwd.clone()],
network_access: false,
exclude_tmpdir_env_var: false,
exclude_slash_tmp: false,
},
model: model.clone(),
effort: Some(ReasoningEffort::Medium),
summary: ReasoningSummary::Auto,
})
.await
.expect("send first sendUserTurn");
timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(first_turn_id)),
)
.await
.expect("sendUserTurn 1 timeout")
.expect("sendUserTurn 1 resp");
timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("codex/event/task_complete"),
)
.await
.expect("task_complete 1 timeout")
.expect("task_complete 1 notification");
let second_turn_id = mcp
.send_send_user_turn_request(SendUserTurnParams {
conversation_id,
items: vec![InputItem::Text {
text: "second turn".to_string(),
}],
cwd: second_cwd.clone(),
approval_policy: AskForApproval::Never,
sandbox_policy: SandboxPolicy::DangerFullAccess,
model: model.clone(),
effort: Some(ReasoningEffort::Medium),
summary: ReasoningSummary::Auto,
})
.await
.expect("send second sendUserTurn");
timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(second_turn_id)),
)
.await
.expect("sendUserTurn 2 timeout")
.expect("sendUserTurn 2 resp");
let mut env_message: Option<String> = None;
let second_cwd_str = second_cwd.to_string_lossy().into_owned();
for _ in 0..10 {
let notification = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("codex/event/user_message"),
)
.await
.expect("user_message timeout")
.expect("user_message notification");
let params = notification
.params
.clone()
.expect("user_message should include params");
let event: Event = serde_json::from_value(params).expect("deserialize user_message event");
if let EventMsg::UserMessage(user) = event.msg
&& matches!(user.kind, Some(InputMessageKind::EnvironmentContext))
&& user.message.contains(&second_cwd_str)
{
env_message = Some(user.message);
break;
}
}
let env_message = env_message.expect("expected environment context update");
assert!(
env_message.contains("<sandbox_mode>danger-full-access</sandbox_mode>"),
"env context should reflect new sandbox mode: {env_message}"
);
assert!(
env_message.contains("<network_access>enabled</network_access>"),
"env context should enable network access for danger-full-access policy: {env_message}"
);
assert!(
env_message.contains(&second_cwd_str),
"env context should include updated cwd: {env_message}"
);
let exec_begin_notification = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("codex/event/exec_command_begin"),
)
.await
.expect("exec_command_begin timeout")
.expect("exec_command_begin notification");
let params = exec_begin_notification
.params
.clone()
.expect("exec_command_begin params");
let event: Event = serde_json::from_value(params).expect("deserialize exec begin event");
let exec_begin = match event.msg {
EventMsg::ExecCommandBegin(exec_begin) => exec_begin,
other => panic!("expected ExecCommandBegin event, got {other:?}"),
};
assert_eq!(
exec_begin.cwd, second_cwd,
"exec turn should run from updated cwd"
);
assert_eq!(
exec_begin.command,
vec![
"bash".to_string(),
"-lc".to_string(),
"echo second turn".to_string()
],
"exec turn should run expected command"
);
timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("codex/event/task_complete"),
)
.await
.expect("task_complete 2 timeout")
.expect("task_complete 2 notification");
}
fn create_config_toml(codex_home: &Path, server_uri: &str) -> std::io::Result<()> {
let config_toml = codex_home.join("config.toml");
std::fs::write(


@@ -3,18 +3,18 @@ use std::path::Path;
use app_test_support::McpProcess;
use app_test_support::to_response;
use codex_app_server_protocol::GetUserSavedConfigResponse;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::Profile;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::SandboxSettings;
use codex_app_server_protocol::Tools;
use codex_app_server_protocol::UserSavedConfig;
use codex_core::protocol::AskForApproval;
use codex_protocol::config_types::ReasoningEffort;
use codex_protocol::config_types::ReasoningSummary;
use codex_protocol::config_types::SandboxMode;
use codex_protocol::config_types::Verbosity;
use codex_protocol::mcp_protocol::GetUserSavedConfigResponse;
use codex_protocol::mcp_protocol::Profile;
use codex_protocol::mcp_protocol::SandboxSettings;
use codex_protocol::mcp_protocol::Tools;
use codex_protocol::mcp_protocol::UserSavedConfig;
use mcp_types::JSONRPCResponse;
use mcp_types::RequestId;
use pretty_assertions::assert_eq;
use tempfile::TempDir;
use tokio::time::timeout;


@@ -4,15 +4,15 @@ use app_test_support::McpProcess;
use app_test_support::create_final_assistant_message_sse_response;
use app_test_support::create_mock_chat_completions_server;
use app_test_support::to_response;
use codex_protocol::mcp_protocol::AddConversationListenerParams;
use codex_protocol::mcp_protocol::AddConversationSubscriptionResponse;
use codex_protocol::mcp_protocol::InputItem;
use codex_protocol::mcp_protocol::NewConversationParams;
use codex_protocol::mcp_protocol::NewConversationResponse;
use codex_protocol::mcp_protocol::SendUserMessageParams;
use codex_protocol::mcp_protocol::SendUserMessageResponse;
use mcp_types::JSONRPCResponse;
use mcp_types::RequestId;
use codex_app_server_protocol::AddConversationListenerParams;
use codex_app_server_protocol::AddConversationSubscriptionResponse;
use codex_app_server_protocol::InputItem;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::NewConversationParams;
use codex_app_server_protocol::NewConversationResponse;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::SendUserMessageParams;
use codex_app_server_protocol::SendUserMessageResponse;
use pretty_assertions::assert_eq;
use serde_json::json;
use tempfile::TempDir;


@@ -1,6 +1,6 @@
use app_test_support::McpProcess;
use mcp_types::JSONRPCResponse;
use mcp_types::RequestId;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::RequestId;
use pretty_assertions::assert_eq;
use serde_json::json;
use tempfile::TempDir;


@@ -3,17 +3,17 @@
use std::path::Path;
use codex_app_server_protocol::AddConversationListenerParams;
use codex_app_server_protocol::InterruptConversationParams;
use codex_app_server_protocol::InterruptConversationResponse;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::NewConversationParams;
use codex_app_server_protocol::NewConversationResponse;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::SendUserMessageParams;
use codex_app_server_protocol::SendUserMessageResponse;
use codex_core::protocol::TurnAbortReason;
use codex_protocol::mcp_protocol::AddConversationListenerParams;
use codex_protocol::mcp_protocol::InterruptConversationParams;
use codex_protocol::mcp_protocol::InterruptConversationResponse;
use codex_protocol::mcp_protocol::NewConversationParams;
use codex_protocol::mcp_protocol::NewConversationResponse;
use codex_protocol::mcp_protocol::SendUserMessageParams;
use codex_protocol::mcp_protocol::SendUserMessageResponse;
use core_test_support::skip_if_no_network;
use mcp_types::JSONRPCResponse;
use mcp_types::RequestId;
use tempfile::TempDir;
use tokio::time::timeout;
@@ -100,7 +100,7 @@ async fn shell_command_interruption() -> anyhow::Result<()> {
let send_user_id = mcp
.send_send_user_message_request(SendUserMessageParams {
conversation_id,
items: vec![codex_protocol::mcp_protocol::InputItem::Text {
items: vec![codex_app_server_protocol::InputItem::Text {
text: "run first sleep command".to_string(),
}],
})


@@ -3,16 +3,16 @@ use std::path::Path;
use app_test_support::McpProcess;
use app_test_support::to_response;
use codex_protocol::mcp_protocol::ListConversationsParams;
use codex_protocol::mcp_protocol::ListConversationsResponse;
use codex_protocol::mcp_protocol::NewConversationParams; // reused for overrides shape
use codex_protocol::mcp_protocol::ResumeConversationParams;
use codex_protocol::mcp_protocol::ResumeConversationResponse;
use codex_protocol::mcp_protocol::ServerNotification;
use codex_protocol::mcp_protocol::SessionConfiguredNotification;
use mcp_types::JSONRPCNotification;
use mcp_types::JSONRPCResponse;
use mcp_types::RequestId;
use codex_app_server_protocol::JSONRPCNotification;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::ListConversationsParams;
use codex_app_server_protocol::ListConversationsResponse;
use codex_app_server_protocol::NewConversationParams; // reused for overrides shape
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::ResumeConversationParams;
use codex_app_server_protocol::ResumeConversationResponse;
use codex_app_server_protocol::ServerNotification;
use codex_app_server_protocol::SessionConfiguredNotification;
use pretty_assertions::assert_eq;
use serde_json::json;
use tempfile::TempDir;


@@ -3,15 +3,15 @@ use std::time::Duration;
use app_test_support::McpProcess;
use app_test_support::to_response;
use codex_app_server_protocol::CancelLoginChatGptParams;
use codex_app_server_protocol::CancelLoginChatGptResponse;
use codex_app_server_protocol::GetAuthStatusParams;
use codex_app_server_protocol::GetAuthStatusResponse;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::LoginChatGptResponse;
use codex_app_server_protocol::LogoutChatGptResponse;
use codex_app_server_protocol::RequestId;
use codex_login::login_with_api_key;
use codex_protocol::mcp_protocol::CancelLoginChatGptParams;
use codex_protocol::mcp_protocol::CancelLoginChatGptResponse;
use codex_protocol::mcp_protocol::GetAuthStatusParams;
use codex_protocol::mcp_protocol::GetAuthStatusResponse;
use codex_protocol::mcp_protocol::LoginChatGptResponse;
use codex_protocol::mcp_protocol::LogoutChatGptResponse;
use mcp_types::JSONRPCResponse;
use mcp_types::RequestId;
use tempfile::TempDir;
use tokio::time::timeout;


@@ -4,17 +4,17 @@ use app_test_support::McpProcess;
use app_test_support::create_final_assistant_message_sse_response;
use app_test_support::create_mock_chat_completions_server;
use app_test_support::to_response;
use codex_protocol::mcp_protocol::AddConversationListenerParams;
use codex_protocol::mcp_protocol::AddConversationSubscriptionResponse;
use codex_protocol::mcp_protocol::ConversationId;
use codex_protocol::mcp_protocol::InputItem;
use codex_protocol::mcp_protocol::NewConversationParams;
use codex_protocol::mcp_protocol::NewConversationResponse;
use codex_protocol::mcp_protocol::SendUserMessageParams;
use codex_protocol::mcp_protocol::SendUserMessageResponse;
use mcp_types::JSONRPCNotification;
use mcp_types::JSONRPCResponse;
use mcp_types::RequestId;
use codex_app_server_protocol::AddConversationListenerParams;
use codex_app_server_protocol::AddConversationSubscriptionResponse;
use codex_app_server_protocol::InputItem;
use codex_app_server_protocol::JSONRPCNotification;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::NewConversationParams;
use codex_app_server_protocol::NewConversationResponse;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::SendUserMessageParams;
use codex_app_server_protocol::SendUserMessageResponse;
use codex_protocol::ConversationId;
use pretty_assertions::assert_eq;
use tempfile::TempDir;
use tokio::time::timeout;


@@ -2,11 +2,11 @@ use std::path::Path;
use app_test_support::McpProcess;
use app_test_support::to_response;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::SetDefaultModelParams;
use codex_app_server_protocol::SetDefaultModelResponse;
use codex_core::config::ConfigToml;
use codex_protocol::mcp_protocol::SetDefaultModelParams;
use codex_protocol::mcp_protocol::SetDefaultModelResponse;
use mcp_types::JSONRPCResponse;
use mcp_types::RequestId;
use pretty_assertions::assert_eq;
use tempfile::TempDir;
use tokio::time::timeout;

View File

@@ -1,8 +1,8 @@
use app_test_support::McpProcess;
use app_test_support::to_response;
use codex_protocol::mcp_protocol::GetUserAgentResponse;
use mcp_types::JSONRPCResponse;
use mcp_types::RequestId;
use codex_app_server_protocol::GetUserAgentResponse;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::RequestId;
use pretty_assertions::assert_eq;
use tempfile::TempDir;
use tokio::time::timeout;


@@ -5,14 +5,14 @@ use app_test_support::McpProcess;
use app_test_support::to_response;
use base64::Engine;
use base64::engine::general_purpose::URL_SAFE_NO_PAD;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::UserInfoResponse;
use codex_core::auth::AuthDotJson;
use codex_core::auth::get_auth_file;
use codex_core::auth::write_auth_json;
use codex_core::token_data::IdTokenInfo;
use codex_core::token_data::TokenData;
use codex_protocol::mcp_protocol::UserInfoResponse;
use mcp_types::JSONRPCResponse;
use mcp_types::RequestId;
use pretty_assertions::assert_eq;
use serde_json::json;
use tempfile::TempDir;


@@ -28,6 +28,7 @@ codex-login = { workspace = true }
codex-mcp-server = { workspace = true }
codex-process-hardening = { workspace = true }
codex-protocol = { workspace = true }
codex-app-server-protocol = { workspace = true }
codex-protocol-ts = { workspace = true }
codex-responses-api-proxy = { workspace = true }
codex-tui = { workspace = true }
@@ -43,8 +44,6 @@ tokio = { workspace = true, features = [
"rt-multi-thread",
"signal",
] }
tracing = { workspace = true }
tracing-subscriber = { workspace = true }
[dev-dependencies]
assert_cmd = { workspace = true }


@@ -1,7 +1,6 @@
pub mod debug_sandbox;
mod exit_status;
pub mod login;
pub mod proto;
use clap::Parser;
use codex_common::CliConfigOverrides;


@@ -1,3 +1,4 @@
use codex_app_server_protocol::AuthMode;
use codex_common::CliConfigOverrides;
use codex_core::CodexAuth;
use codex_core::auth::CLIENT_ID;
@@ -8,7 +9,6 @@ use codex_core::config::ConfigOverrides;
use codex_login::ServerOptions;
use codex_login::run_device_code_login;
use codex_login::run_login_server;
use codex_protocol::mcp_protocol::AuthMode;
use std::path::PathBuf;
pub async fn login_with_chatgpt(codex_home: PathBuf) -> std::io::Result<()> {


@@ -12,7 +12,6 @@ use codex_cli::login::run_login_with_api_key;
use codex_cli::login::run_login_with_chatgpt;
use codex_cli::login::run_login_with_device_code;
use codex_cli::login::run_logout;
use codex_cli::proto;
use codex_cloud_tasks::Cli as CloudTasksCli;
use codex_common::CliConfigOverrides;
use codex_exec::Cli as ExecCli;
@@ -26,7 +25,6 @@ use supports_color::Stream;
mod mcp_cmd;
use crate::mcp_cmd::McpCli;
use crate::proto::ProtoCli;
/// Codex CLI
///
@@ -74,10 +72,6 @@ enum Subcommand {
/// [experimental] Run the app server.
AppServer,
/// Run the Protocol stream via stdin/stdout
#[clap(visible_alias = "p")]
Proto(ProtoCli),
/// Generate shell completion scripts.
Completion(CompletionCommand),
@@ -224,25 +218,12 @@ fn print_exit_messages(exit_info: AppExitInfo) {
}
}
pub(crate) const CODEX_SECURE_MODE_ENV_VAR: &str = "CODEX_SECURE_MODE";
/// As early as possible in the process lifecycle, apply hardening measures
/// if the CODEX_SECURE_MODE environment variable is set to "1".
/// As early as possible in the process lifecycle, apply hardening measures. We
/// skip this in debug builds to avoid interfering with debugging.
#[ctor::ctor]
#[cfg(not(debug_assertions))]
fn pre_main_hardening() {
let secure_mode = match std::env::var(CODEX_SECURE_MODE_ENV_VAR) {
Ok(value) => value,
Err(_) => return,
};
if secure_mode == "1" {
codex_process_hardening::pre_main_hardening();
}
// Always clear this env var so child processes don't inherit it.
unsafe {
std::env::remove_var(CODEX_SECURE_MODE_ENV_VAR);
}
codex_process_hardening::pre_main_hardening();
}
fn main() -> anyhow::Result<()> {
@@ -332,13 +313,6 @@ async fn cli_main(codex_linux_sandbox_exe: Option<PathBuf>) -> anyhow::Result<()
);
run_logout(logout_cli.config_overrides).await;
}
Some(Subcommand::Proto(mut proto_cli)) => {
prepend_config_flags(
&mut proto_cli.config_overrides,
root_config_overrides.clone(),
);
proto::run_main(proto_cli).await?;
}
Some(Subcommand::Completion(completion_cli)) => {
print_completion(completion_cli);
}
@@ -481,7 +455,7 @@ fn print_completion(cmd: CompletionCommand) {
mod tests {
use super::*;
use codex_core::protocol::TokenUsage;
use codex_protocol::mcp_protocol::ConversationId;
use codex_protocol::ConversationId;
fn finalize_from_args(args: &[&str]) -> TuiCli {
let cli = MultitoolCli::try_parse_from(args).expect("parse");


@@ -1,133 +0,0 @@
use std::io::IsTerminal;
use clap::Parser;
use codex_common::CliConfigOverrides;
use codex_core::AuthManager;
use codex_core::ConversationManager;
use codex_core::NewConversation;
use codex_core::config::Config;
use codex_core::config::ConfigOverrides;
use codex_core::protocol::Event;
use codex_core::protocol::EventMsg;
use codex_core::protocol::Submission;
use tokio::io::AsyncBufReadExt;
use tokio::io::BufReader;
use tracing::error;
use tracing::info;
#[derive(Debug, Parser)]
pub struct ProtoCli {
#[clap(skip)]
pub config_overrides: CliConfigOverrides,
}
pub async fn run_main(opts: ProtoCli) -> anyhow::Result<()> {
if std::io::stdin().is_terminal() {
anyhow::bail!("Protocol mode expects stdin to be a pipe, not a terminal");
}
tracing_subscriber::fmt()
.with_writer(std::io::stderr)
.init();
let ProtoCli { config_overrides } = opts;
let overrides_vec = config_overrides
.parse_overrides()
.map_err(anyhow::Error::msg)?;
let config = Config::load_with_cli_overrides(overrides_vec, ConfigOverrides::default())?;
// Use conversation_manager API to start a conversation
let conversation_manager =
ConversationManager::new(AuthManager::shared(config.codex_home.clone()));
let NewConversation {
conversation_id: _,
conversation,
session_configured,
} = conversation_manager.new_conversation(config).await?;
// Simulate streaming the session_configured event.
let synthetic_event = Event {
// Fake id value.
id: "".to_string(),
msg: EventMsg::SessionConfigured(session_configured),
};
let session_configured_event = match serde_json::to_string(&synthetic_event) {
Ok(s) => s,
Err(e) => {
error!("Failed to serialize session_configured: {e}");
return Err(anyhow::Error::from(e));
}
};
println!("{session_configured_event}");
// Task that reads JSON lines from stdin and forwards to Submission Queue
let sq_fut = {
let conversation = conversation.clone();
async move {
let stdin = BufReader::new(tokio::io::stdin());
let mut lines = stdin.lines();
loop {
let result = tokio::select! {
_ = tokio::signal::ctrl_c() => {
break
},
res = lines.next_line() => res,
};
match result {
Ok(Some(line)) => {
let line = line.trim();
if line.is_empty() {
continue;
}
match serde_json::from_str::<Submission>(line) {
Ok(sub) => {
if let Err(e) = conversation.submit_with_id(sub).await {
error!("{e:#}");
break;
}
}
Err(e) => {
error!("invalid submission: {e}");
}
}
}
_ => {
info!("Submission queue closed");
break;
}
}
}
}
};
// Task that reads events from the agent and prints them as JSON lines to stdout
let eq_fut = async move {
loop {
let event = tokio::select! {
_ = tokio::signal::ctrl_c() => break,
event = conversation.next_event() => event,
};
match event {
Ok(event) => {
let event_str = match serde_json::to_string(&event) {
Ok(s) => s,
Err(e) => {
error!("Failed to serialize event: {e}");
continue;
}
};
println!("{event_str}");
}
Err(e) => {
error!("{e:#}");
break;
}
}
}
info!("Event queue closed");
};
tokio::join!(sq_fut, eq_fut);
Ok(())
}


@@ -10,6 +10,7 @@ workspace = true
clap = { workspace = true, features = ["derive", "wrap_help"], optional = true }
codex-core = { workspace = true }
codex-protocol = { workspace = true }
codex-app-server-protocol = { workspace = true }
serde = { workspace = true, optional = true }
toml = { workspace = true, optional = true }


@@ -1,5 +1,5 @@
use codex_app_server_protocol::AuthMode;
use codex_core::protocol_config_types::ReasoningEffort;
use codex_protocol::mcp_protocol::AuthMode;
/// A simple preset pairing a model slug with a reasoning effort.
#[derive(Debug, Clone, Copy)]


@@ -24,6 +24,7 @@ codex-file-search = { workspace = true }
codex-mcp-client = { workspace = true }
codex-rmcp-client = { workspace = true }
codex-protocol = { workspace = true }
codex-app-server-protocol = { workspace = true }
codex-otel = { workspace = true, features = ["otel"] }
dirs = { workspace = true }
env-flags = { workspace = true }


@@ -27,6 +27,7 @@ pub(crate) enum InternalApplyPatchInvocation {
DelegateToExec(ApplyPatchExec),
}
#[derive(Debug)]
pub(crate) struct ApplyPatchExec {
pub(crate) action: ApplyPatchAction,
pub(crate) user_explicitly_approved_this_action: bool,
@@ -109,3 +110,28 @@ pub(crate) fn convert_apply_patch_to_protocol(
}
result
}
#[cfg(test)]
mod tests {
use super::*;
use pretty_assertions::assert_eq;
use tempfile::tempdir;
#[test]
fn convert_apply_patch_maps_add_variant() {
let tmp = tempdir().expect("tmp");
let p = tmp.path().join("a.txt");
// Create an action with a single Add change
let action = ApplyPatchAction::new_add_for_test(&p, "hello".to_string());
let got = convert_apply_patch_to_protocol(&action);
assert_eq!(
got.get(&p),
Some(&FileChange::Add {
content: "hello".to_string()
})
);
}
}


@@ -15,7 +15,7 @@ use std::sync::Arc;
use std::sync::Mutex;
use std::time::Duration;
use codex_protocol::mcp_protocol::AuthMode;
use codex_app_server_protocol::AuthMode;
use crate::token_data::PlanType;
use crate::token_data::TokenData;


@@ -6,8 +6,8 @@ use std::time::Duration;
use crate::AuthManager;
use crate::auth::CodexAuth;
use bytes::Bytes;
use codex_protocol::mcp_protocol::AuthMode;
use codex_protocol::mcp_protocol::ConversationId;
use codex_app_server_protocol::AuthMode;
use codex_protocol::ConversationId;
use eventsource_stream::Eventsource;
use futures::prelude::*;
use regex_lite::Regex;
@@ -383,6 +383,25 @@ impl ModelClient {
let body = res.json::<ErrorResponse>().await.ok();
if let Some(ErrorResponse { error }) = body {
if error.r#type.as_deref() == Some("usage_limit_reached") {
let context = "usage_limit_reached";
if Self::refresh_on_plan_mismatch(
auth_manager,
&auth,
error.plan_type.clone(),
context,
)
.await
{
return Err(StreamAttemptError::RetryableTransportError(
CodexErr::Stream(
format!(
"plan mismatch detected during {context}; retrying"
),
None,
),
));
}
// Prefer the plan_type provided in the error message if present
// because it's more up to date than the one encoded in the auth
// token.
@@ -397,6 +416,24 @@ impl ModelClient {
});
return Err(StreamAttemptError::Fatal(codex_err));
} else if error.r#type.as_deref() == Some("usage_not_included") {
let context = "usage_not_included";
if Self::refresh_on_plan_mismatch(
auth_manager,
&auth,
error.plan_type.clone(),
context,
)
.await
{
return Err(StreamAttemptError::RetryableTransportError(
CodexErr::Stream(
format!(
"plan mismatch detected during {context}; retrying"
),
None,
),
));
}
return Err(StreamAttemptError::Fatal(CodexErr::UsageNotIncluded));
}
}
@@ -411,6 +448,48 @@ impl ModelClient {
}
}
async fn refresh_on_plan_mismatch(
auth_manager: &Option<Arc<AuthManager>>,
auth: &Option<CodexAuth>,
server_plan_type: Option<PlanType>,
log_context: &str,
) -> bool {
let Some(server_plan) = server_plan_type else {
return false;
};
let jwt_plan = auth.as_ref().and_then(CodexAuth::get_plan_type);
if jwt_plan == Some(server_plan.clone()) {
return false;
}
if let Some(manager) = auth_manager.as_ref() {
match manager.refresh_token().await {
Ok(_) => {
warn!(
context = log_context,
?server_plan,
previous_plan = ?jwt_plan,
"plan mismatch detected; refreshed token and will retry"
);
return true;
}
Err(err) => {
warn!(
context = log_context,
?server_plan,
previous_plan = ?jwt_plan,
error = ?err,
"failed to refresh token after plan mismatch"
);
}
}
}
false
}
pub fn get_provider(&self) -> ModelProviderInfo {
self.provider.clone()
}
@@ -1326,4 +1405,32 @@ mod tests {
let plan_json = serde_json::to_string(&resp.error.plan_type).expect("serialize plan_type");
assert_eq!(plan_json, "\"vip\"");
}
#[tokio::test]
async fn refresh_on_plan_mismatch_retries_when_plan_differs() {
use crate::token_data::KnownPlan;
use crate::token_data::PlanType;
use std::path::PathBuf;
use std::sync::Arc;
let auth = CodexAuth::create_dummy_chatgpt_auth_for_testing();
if let Some(auth_json) = auth.auth_dot_json.lock().unwrap().as_mut()
&& let Some(tokens) = auth_json.tokens.as_mut()
{
tokens.id_token.chatgpt_plan_type = Some(PlanType::Known(KnownPlan::Plus));
tokens.id_token.raw_jwt = "dummy".to_string();
}
let manager = Arc::new(AuthManager::new(PathBuf::new()));
let should_retry = ModelClient::refresh_on_plan_mismatch(
&Some(manager),
&Some(auth),
Some(PlanType::Known(KnownPlan::Team)),
"usage_limit_reached",
)
.await;
assert!(should_retry);
}
}
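The retry path added above only attempts a token refresh when the server reports a plan type that disagrees with the plan encoded in the cached JWT; otherwise it falls through to the existing fatal errors. A minimal sketch of just that decision, with a hypothetical `Plan` enum standing in for the real `PlanType` (the actual helper also performs the refresh via `AuthManager` and logs the outcome):

```rust
#[derive(Debug, PartialEq)]
enum Plan {
    Plus,
    Team,
}

// Refresh is only worth attempting when the server reported a plan and it
// disagrees with the plan baked into the cached JWT.
fn should_attempt_refresh(jwt_plan: Option<Plan>, server_plan: Option<Plan>) -> bool {
    match server_plan {
        Some(server) => jwt_plan != Some(server),
        None => false,
    }
}

fn main() {
    assert!(should_attempt_refresh(Some(Plan::Plus), Some(Plan::Team)));
    assert!(!should_attempt_refresh(Some(Plan::Team), Some(Plan::Team)));
    assert!(!should_attempt_refresh(None, None));
    // A missing JWT plan but a known server plan still counts as a mismatch.
    assert!(should_attempt_refresh(None, Some(Plan::Plus)));
}
```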


@@ -1,11 +1,9 @@
use std::borrow::Cow;
use std::collections::HashMap;
use std::fmt::Debug;
use std::path::Path;
use std::path::PathBuf;
use std::sync::Arc;
use std::sync::atomic::AtomicU64;
use std::time::Duration;
use crate::AuthManager;
use crate::client_common::REVIEW_PROMPT;
@@ -19,7 +17,7 @@ use async_channel::Sender;
use codex_apply_patch::ApplyPatchAction;
use codex_apply_patch::MaybeApplyPatchVerified;
use codex_apply_patch::maybe_parse_apply_patch_verified;
use codex_protocol::mcp_protocol::ConversationId;
use codex_protocol::ConversationId;
use codex_protocol::protocol::ConversationPathResponseEvent;
use codex_protocol::protocol::ExitedReviewModeEvent;
use codex_protocol::protocol::ReviewRequest;
@@ -44,7 +42,6 @@ use tracing::warn;
use crate::ModelProviderInfo;
use crate::apply_patch;
use crate::apply_patch::ApplyPatchExec;
use crate::apply_patch::CODEX_APPLY_PATCH_ARG1;
use crate::apply_patch::InternalApplyPatchInvocation;
use crate::apply_patch::convert_apply_patch_to_protocol;
use crate::client::ModelClient;
@@ -57,19 +54,21 @@ use crate::environment_context::EnvironmentContext;
use crate::error::CodexErr;
use crate::error::Result as CodexResult;
use crate::error::SandboxErr;
use crate::error::get_error_message_ui;
use crate::exec::ExecParams;
use crate::exec::ExecToolCallOutput;
use crate::exec::SandboxType;
use crate::exec::StdoutStream;
#[cfg(test)]
use crate::exec::StreamOutput;
use crate::exec::process_exec_tool_call;
use crate::exec_command::EXEC_COMMAND_TOOL_NAME;
use crate::exec_command::ExecCommandParams;
use crate::exec_command::ExecSessionManager;
use crate::exec_command::WRITE_STDIN_TOOL_NAME;
use crate::exec_command::WriteStdinParams;
use crate::exec_env::create_env;
use crate::executor::ExecutionMode;
use crate::executor::Executor;
use crate::executor::ExecutorConfig;
use crate::executor::normalize_exec_result;
use crate::mcp_connection_manager::McpConnectionManager;
use crate::mcp_tool_call::handle_mcp_tool_call;
use crate::model_family::find_family_for_model;
@@ -113,9 +112,6 @@ use crate::protocol::TurnDiffEvent;
use crate::protocol::WebSearchBeginEvent;
use crate::rollout::RolloutRecorder;
use crate::rollout::RolloutRecorderParams;
use crate::safety::SafetyCheck;
use crate::safety::assess_command_safety;
use crate::safety::assess_safety_for_untrusted_command;
use crate::shell;
use crate::state::ActiveTurn;
use crate::state::SessionServices;
@@ -128,7 +124,6 @@ use crate::user_instructions::UserInstructions;
use crate::user_notification::UserNotification;
use crate::util::backoff;
use codex_otel::otel_event_manager::OtelEventManager;
use codex_otel::otel_event_manager::ToolDecisionSource;
use codex_protocol::config_types::ReasoningEffort as ReasoningEffortConfig;
use codex_protocol::config_types::ReasoningSummary as ReasoningSummaryConfig;
use codex_protocol::custom_prompts::CustomPrompt;
@@ -486,9 +481,13 @@ impl Session {
unified_exec_manager: UnifiedExecSessionManager::default(),
notifier: notify,
rollout: Mutex::new(Some(rollout_recorder)),
codex_linux_sandbox_exe: config.codex_linux_sandbox_exe.clone(),
user_shell: default_shell,
show_raw_agent_reasoning: config.show_raw_agent_reasoning,
executor: Executor::new(ExecutorConfig::new(
turn_context.sandbox_policy.clone(),
turn_context.cwd.clone(),
config.codex_linux_sandbox_exe.clone(),
)),
};
let sess = Arc::new(Session {
@@ -573,6 +572,11 @@ impl Session {
}
}
/// Emit an exec approval request event and await the user's decision.
///
/// The request is keyed by `sub_id`/`call_id` so matching responses are delivered
/// to the correct in-flight turn. If the task is aborted, this returns the
/// default `ReviewDecision` (`Denied`).
pub async fn request_command_approval(
&self,
sub_id: String,
@@ -670,11 +674,6 @@ impl Session {
}
}
pub async fn add_approved_command(&self, cmd: Vec<String>) {
let mut state = self.state.lock().await;
state.add_approved_command(cmd);
}
/// Records input items: always append to conversation history and
/// persist these response items to rollout.
async fn record_conversation_items(&self, items: &[ResponseItem]) {
@@ -832,6 +831,7 @@ impl Session {
command_for_display,
cwd,
apply_patch,
..
} = exec_command_context;
let msg = match apply_patch {
Some(ApplyPatchCommandContext {
@@ -928,45 +928,29 @@ impl Session {
/// command even on error.
///
/// Returns the output of the exec tool call.
async fn run_exec_with_events<'a>(
async fn run_exec_with_events(
&self,
turn_diff_tracker: &mut TurnDiffTracker,
begin_ctx: ExecCommandContext,
exec_args: ExecInvokeArgs<'a>,
) -> crate::error::Result<ExecToolCallOutput> {
let is_apply_patch = begin_ctx.apply_patch.is_some();
let sub_id = begin_ctx.sub_id.clone();
let call_id = begin_ctx.call_id.clone();
prepared: PreparedExec,
approval_policy: AskForApproval,
) -> Result<ExecToolCallOutput, ExecError> {
let PreparedExec { context, request } = prepared;
let is_apply_patch = context.apply_patch.is_some();
let sub_id = context.sub_id.clone();
let call_id = context.call_id.clone();
self.on_exec_command_begin(turn_diff_tracker, begin_ctx.clone())
self.on_exec_command_begin(turn_diff_tracker, context.clone())
.await;
let result = process_exec_tool_call(
exec_args.params,
exec_args.sandbox_type,
exec_args.sandbox_policy,
exec_args.sandbox_cwd,
exec_args.codex_linux_sandbox_exe,
exec_args.stdout_stream,
)
.await;
let result = self
.services
.executor
.run(request, self, approval_policy, &context)
.await;
let normalized = normalize_exec_result(&result);
let borrowed = normalized.event_output();
let output_stderr;
let borrowed: &ExecToolCallOutput = match &result {
Ok(output) => output,
Err(CodexErr::Sandbox(SandboxErr::Timeout { output })) => output,
Err(e) => {
output_stderr = ExecToolCallOutput {
exit_code: -1,
stdout: StreamOutput::new(String::new()),
stderr: StreamOutput::new(get_error_message_ui(e)),
aggregated_output: StreamOutput::new(get_error_message_ui(e)),
duration: Duration::default(),
timed_out: false,
};
&output_stderr
}
};
self.on_exec_command_end(
turn_diff_tracker,
&sub_id,
@@ -976,13 +960,15 @@ impl Session {
)
.await;
drop(normalized);
result
}
/// Helper that emits a BackgroundEvent with the given message. This keeps
/// the callsites terse so adding more diagnostics does not clutter the
/// core agent logic.
async fn notify_background_event(&self, sub_id: &str, message: impl Into<String>) {
pub(crate) async fn notify_background_event(&self, sub_id: &str, message: impl Into<String>) {
let event = Event {
id: sub_id.to_string(),
msg: EventMsg::BackgroundEvent(BackgroundEventEvent {
@@ -1070,7 +1056,7 @@ impl Session {
&self.services.notifier
}
fn user_shell(&self) -> &shell::Shell {
pub(crate) fn user_shell(&self) -> &shell::Shell {
&self.services.user_shell
}
@@ -1092,6 +1078,8 @@ pub(crate) struct ExecCommandContext {
pub(crate) command_for_display: Vec<String>,
pub(crate) cwd: PathBuf,
pub(crate) apply_patch: Option<ApplyPatchCommandContext>,
pub(crate) tool_name: String,
pub(crate) otel_event_manager: OtelEventManager,
}
#[derive(Clone, Debug)]
@@ -1298,8 +1286,19 @@ async fn submission_loop(
let previous_env_context = EnvironmentContext::from(turn_context.as_ref());
let new_env_context = EnvironmentContext::from(&fresh_turn_context);
if !new_env_context.equals_except_shell(&previous_env_context) {
sess.record_conversation_items(&[ResponseItem::from(new_env_context)])
let env_response_item = ResponseItem::from(new_env_context);
sess.record_conversation_items(std::slice::from_ref(&env_response_item))
.await;
for msg in map_response_item_to_event_messages(
&env_response_item,
sess.show_raw_agent_reasoning(),
) {
let event = Event {
id: sub.id.clone(),
msg,
};
sess.send_event(event).await;
}
}
// Install the new persistent context for subsequent tasks/turns.
@@ -2610,33 +2609,6 @@ fn parse_container_exec_arguments(
})
}
pub struct ExecInvokeArgs<'a> {
pub params: ExecParams,
pub sandbox_type: SandboxType,
pub sandbox_policy: &'a SandboxPolicy,
pub sandbox_cwd: &'a Path,
pub codex_linux_sandbox_exe: &'a Option<PathBuf>,
pub stdout_stream: Option<StdoutStream>,
}
fn maybe_translate_shell_command(
params: ExecParams,
sess: &Session,
turn_context: &TurnContext,
) -> ExecParams {
let should_translate = matches!(sess.user_shell(), crate::shell::Shell::PowerShell(_))
|| turn_context.shell_environment_policy.use_profile;
if should_translate
&& let Some(command) = sess
.user_shell()
.format_default_shell_invocation(params.command.clone())
{
return ExecParams { command, ..params };
}
params
}
async fn handle_container_exec_with_params(
tool_name: &str,
params: ExecParams,
@@ -2682,152 +2654,10 @@ async fn handle_container_exec_with_params(
MaybeApplyPatchVerified::NotApplyPatch => None,
};
let (params, safety, command_for_display) = match &apply_patch_exec {
Some(ApplyPatchExec {
action: ApplyPatchAction { patch, cwd, .. },
user_explicitly_approved_this_action,
}) => {
let path_to_codex = std::env::current_exe()
.ok()
.map(|p| p.to_string_lossy().to_string());
let Some(path_to_codex) = path_to_codex else {
return Err(FunctionCallError::RespondToModel(
"failed to determine path to codex executable".to_string(),
));
};
let params = ExecParams {
command: vec![
path_to_codex,
CODEX_APPLY_PATCH_ARG1.to_string(),
patch.clone(),
],
cwd: cwd.clone(),
timeout_ms: params.timeout_ms,
env: HashMap::new(),
with_escalated_permissions: params.with_escalated_permissions,
justification: params.justification.clone(),
};
let safety = if *user_explicitly_approved_this_action {
SafetyCheck::AutoApprove {
sandbox_type: SandboxType::None,
user_explicitly_approved: true,
}
} else {
assess_safety_for_untrusted_command(
turn_context.approval_policy,
&turn_context.sandbox_policy,
params.with_escalated_permissions.unwrap_or(false),
)
};
(
params,
safety,
vec!["apply_patch".to_string(), patch.clone()],
)
}
None => {
let safety = {
let state = sess.state.lock().await;
assess_command_safety(
&params.command,
turn_context.approval_policy,
&turn_context.sandbox_policy,
state.approved_commands_ref(),
params.with_escalated_permissions.unwrap_or(false),
)
};
let command_for_display = params.command.clone();
(params, safety, command_for_display)
}
};
let sandbox_type = match safety {
SafetyCheck::AutoApprove {
sandbox_type,
user_explicitly_approved,
} => {
otel_event_manager.tool_decision(
tool_name,
call_id.as_str(),
ReviewDecision::Approved,
if user_explicitly_approved {
ToolDecisionSource::User
} else {
ToolDecisionSource::Config
},
);
sandbox_type
}
SafetyCheck::AskUser => {
let decision = sess
.request_command_approval(
sub_id.clone(),
call_id.clone(),
params.command.clone(),
params.cwd.clone(),
params.justification.clone(),
)
.await;
match decision {
ReviewDecision::Approved => {
otel_event_manager.tool_decision(
tool_name,
call_id.as_str(),
ReviewDecision::Approved,
ToolDecisionSource::User,
);
}
ReviewDecision::ApprovedForSession => {
otel_event_manager.tool_decision(
tool_name,
call_id.as_str(),
ReviewDecision::ApprovedForSession,
ToolDecisionSource::User,
);
sess.add_approved_command(params.command.clone()).await;
}
ReviewDecision::Denied => {
otel_event_manager.tool_decision(
tool_name,
call_id.as_str(),
ReviewDecision::Denied,
ToolDecisionSource::User,
);
return Err(FunctionCallError::RespondToModel(
"exec command rejected by user".to_string(),
));
}
ReviewDecision::Abort => {
otel_event_manager.tool_decision(
tool_name,
call_id.as_str(),
ReviewDecision::Abort,
ToolDecisionSource::User,
);
return Err(FunctionCallError::RespondToModel(
"exec command aborted by user".to_string(),
));
}
}
// No sandboxing is applied because the user has given
// explicit approval. Often, we end up in this case because
// the command cannot be run in a sandbox, such as
// installing a new dependency that requires network access.
SandboxType::None
}
SafetyCheck::Reject { reason } => {
otel_event_manager.tool_decision(
tool_name,
call_id.as_str(),
ReviewDecision::Denied,
ToolDecisionSource::Config,
);
return Err(FunctionCallError::RespondToModel(format!(
"exec command rejected: {reason:?}"
)));
}
let command_for_display = if let Some(exec) = apply_patch_exec.as_ref() {
vec!["apply_patch".to_string(), exec.action.patch.clone()]
} else {
params.command.clone()
};
let exec_command_context = ExecCommandContext {
@@ -2835,38 +2665,47 @@ async fn handle_container_exec_with_params(
call_id: call_id.clone(),
command_for_display: command_for_display.clone(),
cwd: params.cwd.clone(),
apply_patch: apply_patch_exec.map(
apply_patch: apply_patch_exec.as_ref().map(
|ApplyPatchExec {
action,
user_explicitly_approved_this_action,
}| ApplyPatchCommandContext {
user_explicitly_approved_this_action,
changes: convert_apply_patch_to_protocol(&action),
user_explicitly_approved_this_action: *user_explicitly_approved_this_action,
changes: convert_apply_patch_to_protocol(action),
},
),
tool_name: tool_name.to_string(),
otel_event_manager,
};
let params = maybe_translate_shell_command(params, sess, turn_context);
let mode = match apply_patch_exec {
Some(exec) => ExecutionMode::ApplyPatch(exec),
None => ExecutionMode::Shell,
};
sess.services.executor.update_environment(
turn_context.sandbox_policy.clone(),
turn_context.cwd.clone(),
);
let prepared_exec = PreparedExec::new(
exec_command_context,
params,
command_for_display,
mode,
Some(StdoutStream {
sub_id: sub_id.clone(),
call_id: call_id.clone(),
tx_event: sess.tx_event.clone(),
}),
turn_context.shell_environment_policy.use_profile,
);
let output_result = sess
.run_exec_with_events(
turn_diff_tracker,
exec_command_context.clone(),
ExecInvokeArgs {
params: params.clone(),
sandbox_type,
sandbox_policy: &turn_context.sandbox_policy,
sandbox_cwd: &turn_context.cwd,
codex_linux_sandbox_exe: &sess.services.codex_linux_sandbox_exe,
stdout_stream: if exec_command_context.apply_patch.is_some() {
None
} else {
Some(StdoutStream {
sub_id: sub_id.clone(),
call_id: call_id.clone(),
tx_event: sess.tx_event.clone(),
})
},
},
prepared_exec,
turn_context.approval_policy,
)
.await;
@@ -2880,154 +2719,16 @@ async fn handle_container_exec_with_params(
Err(FunctionCallError::RespondToModel(content))
}
}
Err(CodexErr::Sandbox(error)) => {
handle_sandbox_error(
tool_name,
turn_diff_tracker,
params,
exec_command_context,
error,
sandbox_type,
sess,
turn_context,
&otel_event_manager,
)
.await
}
Err(e) => Err(FunctionCallError::RespondToModel(format!(
"execution error: {e:?}"
Err(ExecError::Function(err)) => Err(err),
Err(ExecError::Codex(CodexErr::Sandbox(SandboxErr::Timeout { output }))) => Err(
FunctionCallError::RespondToModel(format_exec_output(&output)),
),
Err(ExecError::Codex(err)) => Err(FunctionCallError::RespondToModel(format!(
"execution error: {err:?}"
))),
}
}
#[allow(clippy::too_many_arguments)]
async fn handle_sandbox_error(
tool_name: &str,
turn_diff_tracker: &mut TurnDiffTracker,
params: ExecParams,
exec_command_context: ExecCommandContext,
error: SandboxErr,
sandbox_type: SandboxType,
sess: &Session,
turn_context: &TurnContext,
otel_event_manager: &OtelEventManager,
) -> Result<String, FunctionCallError> {
let call_id = exec_command_context.call_id.clone();
let sub_id = exec_command_context.sub_id.clone();
let cwd = exec_command_context.cwd.clone();
if let SandboxErr::Timeout { output } = &error {
let content = format_exec_output(output);
return Err(FunctionCallError::RespondToModel(content));
}
// Early out if either the user never wants to be asked for approval, or
// we're letting the model manage escalation requests. Otherwise, continue
match turn_context.approval_policy {
AskForApproval::Never | AskForApproval::OnRequest => {
return Err(FunctionCallError::RespondToModel(format!(
"failed in sandbox {sandbox_type:?} with execution error: {error:?}"
)));
}
AskForApproval::UnlessTrusted | AskForApproval::OnFailure => (),
}
// Note that when `error` is `SandboxErr::Denied`, it could be a false
// positive. That is, it may have exited with a non-zero exit code, not
// because the sandbox denied it, but because that is its expected behavior,
// i.e., a grep command that did not match anything. Ideally we would
// include additional metadata on the command to indicate whether non-zero
// exit codes merit a retry.
// For now, we categorically ask the user to retry without sandbox and
// emit the raw error as a background event.
sess.notify_background_event(&sub_id, format!("Execution failed: {error}"))
.await;
let decision = sess
.request_command_approval(
sub_id.clone(),
call_id.clone(),
params.command.clone(),
cwd.clone(),
Some("command failed; retry without sandbox?".to_string()),
)
.await;
match decision {
ReviewDecision::Approved | ReviewDecision::ApprovedForSession => {
// Persist this command as preapproved for the
// remainder of the session so future
// executions skip the sandbox directly.
// TODO(ragona): Isn't this a bug? It always saves the command in an | fork?
sess.add_approved_command(params.command.clone()).await;
// Inform UI we are retrying without sandbox.
sess.notify_background_event(&sub_id, "retrying command without sandbox")
.await;
otel_event_manager.tool_decision(
tool_name,
call_id.as_str(),
decision,
ToolDecisionSource::User,
);
// This is an escalated retry; the policy will not be
// examined and the sandbox has been set to `None`.
let retry_output_result = sess
.run_exec_with_events(
turn_diff_tracker,
exec_command_context.clone(),
ExecInvokeArgs {
params,
sandbox_type: SandboxType::None,
sandbox_policy: &turn_context.sandbox_policy,
sandbox_cwd: &turn_context.cwd,
codex_linux_sandbox_exe: &sess.services.codex_linux_sandbox_exe,
stdout_stream: if exec_command_context.apply_patch.is_some() {
None
} else {
Some(StdoutStream {
sub_id: sub_id.clone(),
call_id: call_id.clone(),
tx_event: sess.tx_event.clone(),
})
},
},
)
.await;
match retry_output_result {
Ok(retry_output) => {
let ExecToolCallOutput { exit_code, .. } = &retry_output;
let content = format_exec_output(&retry_output);
if *exit_code == 0 {
Ok(content)
} else {
Err(FunctionCallError::RespondToModel(content))
}
}
Err(e) => Err(FunctionCallError::RespondToModel(format!(
"retry failed: {e}"
))),
}
}
decision @ (ReviewDecision::Denied | ReviewDecision::Abort) => {
otel_event_manager.tool_decision(
tool_name,
call_id.as_str(),
decision,
ToolDecisionSource::User,
);
// Fall through to original failure handling.
Err(FunctionCallError::RespondToModel(
"exec command rejected by user".to_string(),
))
}
}
}
fn format_exec_output_str(exec_output: &ExecToolCallOutput) -> String {
let ExecToolCallOutput {
aggregated_output, ..
@@ -3286,6 +2987,8 @@ pub(crate) async fn exit_review_mode(
.await;
}
use crate::executor::errors::ExecError;
use crate::executor::linkers::PreparedExec;
#[cfg(test)]
pub(crate) use tests::make_session_and_context;
@@ -3301,7 +3004,7 @@ mod tests {
use crate::state::TaskKind;
use crate::tasks::SessionTask;
use crate::tasks::SessionTaskContext;
use codex_protocol::mcp_protocol::AuthMode;
use codex_app_server_protocol::AuthMode;
use codex_protocol::models::ContentItem;
use codex_protocol::models::ResponseItem;
@@ -3599,9 +3302,13 @@ mod tests {
unified_exec_manager: UnifiedExecSessionManager::default(),
notifier: UserNotifier::default(),
rollout: Mutex::new(None),
codex_linux_sandbox_exe: None,
user_shell: shell::Shell::Unknown,
show_raw_agent_reasoning: config.show_raw_agent_reasoning,
executor: Executor::new(ExecutorConfig::new(
turn_context.sandbox_policy.clone(),
turn_context.cwd.clone(),
None,
)),
};
let session = Session {
conversation_id,
@@ -3668,9 +3375,13 @@ mod tests {
unified_exec_manager: UnifiedExecSessionManager::default(),
notifier: UserNotifier::default(),
rollout: Mutex::new(None),
codex_linux_sandbox_exe: None,
user_shell: shell::Shell::Unknown,
show_raw_agent_reasoning: config.show_raw_agent_reasoning,
executor: Executor::new(ExecutorConfig::new(
config.sandbox_policy.clone(),
config.cwd.clone(),
None,
)),
};
let session = Arc::new(Session {
conversation_id,

View File

@@ -1,25 +1,431 @@
// This is a WIP. This will eventually contain a real list of common safe Windows commands.
pub fn is_safe_command_windows(_command: &[String]) -> bool {
use shlex::split as shlex_split;
/// On Windows, we conservatively allow only clearly read-only PowerShell invocations
/// that match a small safelist. Anything else (including direct CMD commands) is unsafe.
pub fn is_safe_command_windows(command: &[String]) -> bool {
if let Some(commands) = try_parse_powershell_command_sequence(command) {
return commands
.iter()
.all(|cmd| is_safe_powershell_command(cmd.as_slice()));
}
// Only PowerShell invocations are allowed on Windows for now; anything else is unsafe.
false
}
/// Returns each command sequence if the invocation starts with a PowerShell binary.
/// For example, the tokens from `pwsh Get-ChildItem | Measure-Object` become two sequences.
fn try_parse_powershell_command_sequence(command: &[String]) -> Option<Vec<Vec<String>>> {
let (exe, rest) = command.split_first()?;
if !is_powershell_executable(exe) {
return None;
}
parse_powershell_invocation(rest)
}
/// Parses a PowerShell invocation into discrete command vectors, rejecting unsafe patterns.
fn parse_powershell_invocation(args: &[String]) -> Option<Vec<Vec<String>>> {
if args.is_empty() {
// Examples rejected here: "pwsh" and "powershell.exe" with no additional arguments.
return None;
}
let mut idx = 0;
while idx < args.len() {
let arg = &args[idx];
let lower = arg.to_ascii_lowercase();
match lower.as_str() {
"-command" | "/command" | "-c" => {
let script = args.get(idx + 1)?;
if idx + 2 != args.len() {
// Reject if there is more than one token representing the actual command.
// Examples rejected here: "pwsh -Command foo bar" and "powershell -c ls extra".
return None;
}
return parse_powershell_script(script);
}
_ if lower.starts_with("-command:") || lower.starts_with("/command:") => {
if idx + 1 != args.len() {
// Reject if there are more tokens after the command itself.
// Examples rejected here: "pwsh -Command:dir C:\\" and "powershell /Command:dir C:\\" with trailing args.
return None;
}
let script = arg.split_once(':')?.1;
return parse_powershell_script(script);
}
// Benign, no-arg flags we tolerate.
"-nologo" | "-noprofile" | "-noninteractive" | "-mta" | "-sta" => {
idx += 1;
continue;
}
// Explicitly forbidden/opaque or unnecessary for read-only operations.
"-encodedcommand" | "-ec" | "-file" | "/file" | "-windowstyle" | "-executionpolicy"
| "-workingdirectory" => {
// Examples rejected here: "pwsh -EncodedCommand ..." and "powershell -File script.ps1".
return None;
}
// Unknown switch → bail conservatively.
_ if lower.starts_with('-') => {
// Examples rejected here: "pwsh -UnknownFlag" and "powershell -foo bar".
return None;
}
// If we hit non-flag tokens, treat the remainder as a command sequence.
// This happens if powershell is invoked without -Command, e.g.
// ["pwsh", "-NoLogo", "git", "-c", "core.pager=cat", "status"]
_ => {
return split_into_commands(args[idx..].to_vec());
}
}
}
// Examples rejected here: "pwsh" and "powershell.exe -NoLogo" without a script.
None
}
/// Tokenizes an inline PowerShell script and delegates to the command splitter.
/// Examples of when this is called: pwsh.exe -Command '<script>' or pwsh.exe -Command:<script>
fn parse_powershell_script(script: &str) -> Option<Vec<Vec<String>>> {
let tokens = shlex_split(script)?;
split_into_commands(tokens)
}
/// Splits tokens into pipeline segments while ensuring no unsafe separators slip through.
/// e.g. Get-ChildItem | Measure-Object -> [['Get-ChildItem'], ['Measure-Object']]
fn split_into_commands(tokens: Vec<String>) -> Option<Vec<Vec<String>>> {
if tokens.is_empty() {
// Examples rejected here: "pwsh -Command ''" and "powershell -Command \"\"".
return None;
}
let mut commands = Vec::new();
let mut current = Vec::new();
for token in tokens.into_iter() {
match token.as_str() {
"|" | "||" | "&&" | ";" => {
if current.is_empty() {
// Examples rejected here: "pwsh -Command '| Get-ChildItem'" and "pwsh -Command '; dir'".
return None;
}
commands.push(current);
current = Vec::new();
}
// Reject if any token embeds separators, redirection, or call operator characters.
_ if token.contains(['|', ';', '>', '<', '&']) || token.contains("$(") => {
// Examples rejected here: "pwsh -Command 'dir|select'" and "pwsh -Command 'echo hi > out.txt'".
return None;
}
_ => current.push(token),
}
}
if current.is_empty() {
// Examples rejected here: "pwsh -Command 'dir |'" and "pwsh -Command 'Get-ChildItem ;'".
return None;
}
commands.push(current);
Some(commands)
}
/// Returns true when the executable name is one of the supported PowerShell binaries.
fn is_powershell_executable(exe: &str) -> bool {
matches!(
exe.to_ascii_lowercase().as_str(),
"powershell" | "powershell.exe" | "pwsh" | "pwsh.exe"
)
}
/// Validates that a parsed PowerShell command stays within our read-only safelist.
/// Everything before this point handles parsing and conservative rejection of suspicious syntax; here the command itself is checked against the read-only safelist.
fn is_safe_powershell_command(words: &[String]) -> bool {
if words.is_empty() {
// Examples rejected here: "pwsh -Command ''" and "pwsh -Command \"\"".
return false;
}
// Reject nested unsafe cmdlets inside parentheses or arguments
for w in words.iter() {
let inner = w
.trim_matches(|c| c == '(' || c == ')')
.trim_start_matches('-')
.to_ascii_lowercase();
if matches!(
inner.as_str(),
"set-content"
| "add-content"
| "out-file"
| "new-item"
| "remove-item"
| "move-item"
| "copy-item"
| "rename-item"
| "start-process"
| "stop-process"
) {
// Examples rejected here: "Write-Output (Set-Content foo6.txt 'abc')" and "Get-Content (New-Item bar.txt)".
return false;
}
}
// Block PowerShell call operator or any redirection explicitly.
if words.iter().any(|w| {
matches!(
w.as_str(),
"&" | ">" | ">>" | "1>" | "2>" | "2>&1" | "*>" | "<" | "<<"
)
}) {
// Examples rejected here: "pwsh -Command '& Remove-Item foo'" and "pwsh -Command 'Get-Content foo > bar'".
return false;
}
let command = words[0]
.trim_matches(|c| c == '(' || c == ')')
.trim_start_matches('-')
.to_ascii_lowercase();
match command.as_str() {
"echo" | "write-output" | "write-host" => true, // (no redirection allowed)
"dir" | "ls" | "get-childitem" | "gci" => true,
"cat" | "type" | "gc" | "get-content" => true,
"select-string" | "sls" | "findstr" => true,
"measure-object" | "measure" => true,
"get-location" | "gl" | "pwd" => true,
"test-path" | "tp" => true,
"resolve-path" | "rvpa" => true,
"select-object" | "select" => true,
"get-item" => true,
"git" => is_safe_git_command(words),
"rg" => is_safe_ripgrep(words),
// Extra safety: explicitly prohibit common side-effecting cmdlets regardless of args.
"set-content" | "add-content" | "out-file" | "new-item" | "remove-item" | "move-item"
| "copy-item" | "rename-item" | "start-process" | "stop-process" => {
// Examples rejected here: "pwsh -Command 'Set-Content notes.txt data'" and "pwsh -Command 'Remove-Item temp.log'".
false
}
_ => {
// Examples rejected here: "pwsh -Command 'Invoke-WebRequest https://example.com'" and "pwsh -Command 'Start-Service Spooler'".
false
}
}
}
/// Checks that an `rg` invocation avoids options that can spawn arbitrary executables.
fn is_safe_ripgrep(words: &[String]) -> bool {
const UNSAFE_RIPGREP_OPTIONS_WITH_ARGS: &[&str] = &["--pre", "--hostname-bin"];
const UNSAFE_RIPGREP_OPTIONS_WITHOUT_ARGS: &[&str] = &["--search-zip", "-z"];
!words.iter().skip(1).any(|arg| {
let arg_lc = arg.to_ascii_lowercase();
// Examples rejected here: "pwsh -Command 'rg --pre cat pattern'" and "pwsh -Command 'rg --search-zip pattern'".
UNSAFE_RIPGREP_OPTIONS_WITHOUT_ARGS.contains(&arg_lc.as_str())
|| UNSAFE_RIPGREP_OPTIONS_WITH_ARGS
.iter()
.any(|opt| arg_lc == *opt || arg_lc.starts_with(&format!("{opt}=")))
})
}
/// Ensures a Git command sticks to whitelisted read-only subcommands and flags.
fn is_safe_git_command(words: &[String]) -> bool {
const SAFE_SUBCOMMANDS: &[&str] = &["status", "log", "show", "diff", "cat-file"];
let mut iter = words.iter().skip(1);
while let Some(arg) = iter.next() {
let arg_lc = arg.to_ascii_lowercase();
if arg.starts_with('-') {
if arg.eq_ignore_ascii_case("-c") || arg.eq_ignore_ascii_case("--config") {
if iter.next().is_none() {
// Examples rejected here: "pwsh -Command 'git -c'" and "pwsh -Command 'git --config'".
return false;
}
continue;
}
if arg_lc.starts_with("-c=")
|| arg_lc.starts_with("--config=")
|| arg_lc.starts_with("--git-dir=")
|| arg_lc.starts_with("--work-tree=")
{
continue;
}
if arg.eq_ignore_ascii_case("--git-dir") || arg.eq_ignore_ascii_case("--work-tree") {
if iter.next().is_none() {
// Examples rejected here: "pwsh -Command 'git --git-dir'" and "pwsh -Command 'git --work-tree'".
return false;
}
continue;
}
continue;
}
return SAFE_SUBCOMMANDS.contains(&arg_lc.as_str());
}
// Examples rejected here: "pwsh -Command 'git'" and "pwsh -Command 'git status --short | Remove-Item foo'".
false
}
#[cfg(test)]
mod tests {
use super::is_safe_command_windows;
use std::string::ToString;
/// Converts a slice of string literals into owned `String`s for the tests.
fn vec_str(args: &[&str]) -> Vec<String> {
args.iter().map(ToString::to_string).collect()
}
#[test]
fn everything_is_unsafe() {
for cmd in [
vec_str(&["powershell.exe", "-NoLogo", "-Command", "echo hello"]),
vec_str(&["copy", "foo", "bar"]),
vec_str(&["del", "file.txt"]),
vec_str(&["powershell.exe", "Get-ChildItem"]),
] {
assert!(!is_safe_command_windows(&cmd));
}
fn recognizes_safe_powershell_wrappers() {
assert!(is_safe_command_windows(&vec_str(&[
"powershell.exe",
"-NoLogo",
"-Command",
"Get-ChildItem -Path .",
])));
assert!(is_safe_command_windows(&vec_str(&[
"powershell.exe",
"-NoProfile",
"-Command",
"git status",
])));
assert!(is_safe_command_windows(&vec_str(&[
"powershell.exe",
"Get-Content",
"Cargo.toml",
])));
// pwsh parity
assert!(is_safe_command_windows(&vec_str(&[
"pwsh.exe",
"-NoProfile",
"-Command",
"Get-ChildItem",
])));
}
#[test]
fn allows_read_only_pipelines_and_git_usage() {
assert!(is_safe_command_windows(&vec_str(&[
"pwsh",
"-NoLogo",
"-NoProfile",
"-Command",
"rg --files-with-matches foo | Measure-Object | Select-Object -ExpandProperty Count",
])));
assert!(is_safe_command_windows(&vec_str(&[
"pwsh",
"-NoLogo",
"-NoProfile",
"-Command",
"Get-Content foo.rs | Select-Object -Skip 200",
])));
assert!(is_safe_command_windows(&vec_str(&[
"pwsh",
"-NoLogo",
"-NoProfile",
"-Command",
"git -c core.pager=cat show HEAD:foo.rs",
])));
assert!(is_safe_command_windows(&vec_str(&[
"pwsh",
"-Command",
"-git cat-file -p HEAD:foo.rs",
])));
assert!(is_safe_command_windows(&vec_str(&[
"pwsh",
"-Command",
"(Get-Content foo.rs -Raw)",
])));
assert!(is_safe_command_windows(&vec_str(&[
"pwsh",
"-Command",
"Get-Item foo.rs | Select-Object Length",
])));
}
#[test]
fn rejects_powershell_commands_with_side_effects() {
assert!(!is_safe_command_windows(&vec_str(&[
"powershell.exe",
"-NoLogo",
"-Command",
"Remove-Item foo.txt",
])));
assert!(!is_safe_command_windows(&vec_str(&[
"powershell.exe",
"-NoProfile",
"-Command",
"rg --pre cat",
])));
assert!(!is_safe_command_windows(&vec_str(&[
"powershell.exe",
"-Command",
"Set-Content foo.txt 'hello'",
])));
// Redirections are blocked
assert!(!is_safe_command_windows(&vec_str(&[
"powershell.exe",
"-Command",
"echo hi > out.txt",
])));
assert!(!is_safe_command_windows(&vec_str(&[
"powershell.exe",
"-Command",
"Get-Content x | Out-File y",
])));
assert!(!is_safe_command_windows(&vec_str(&[
"powershell.exe",
"-Command",
"Write-Output foo 2> err.txt",
])));
// Call operator is blocked
assert!(!is_safe_command_windows(&vec_str(&[
"powershell.exe",
"-Command",
"& Remove-Item foo",
])));
// Chained safe + unsafe must fail
assert!(!is_safe_command_windows(&vec_str(&[
"powershell.exe",
"-Command",
"Get-ChildItem; Remove-Item foo",
])));
// Nested unsafe cmdlet inside safe command must fail
assert!(!is_safe_command_windows(&vec_str(&[
"powershell.exe",
"-Command",
"Write-Output (Set-Content foo6.txt 'abc')",
])));
// Additional nested unsafe cmdlet examples must fail
assert!(!is_safe_command_windows(&vec_str(&[
"powershell.exe",
"-Command",
"Write-Host (Remove-Item foo.txt)",
])));
assert!(!is_safe_command_windows(&vec_str(&[
"powershell.exe",
"-Command",
"Get-Content (New-Item bar.txt)",
])));
}
}
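The tests above leave a few of the documented guards unexercised: the `-Command:<script>` colon form, the blanket `-EncodedCommand` rejection, the ripgrep `--hostname-bin` option, and git subcommands outside the read-only safelist. A minimal sketch of extra assertions that could sit in the same `#[cfg(test)] mod tests` (so `vec_str` and `is_safe_command_windows` are already in scope):
#[test]
fn covers_colon_form_and_remaining_guards() {
    // `-Command:<script>` is parsed like `-Command <script>` and accepted when read-only.
    assert!(is_safe_command_windows(&vec_str(&[
        "pwsh",
        "-Command:Get-ChildItem",
    ])));
    // `-EncodedCommand` is opaque, so it is rejected before the payload is ever inspected.
    assert!(!is_safe_command_windows(&vec_str(&[
        "powershell.exe",
        "-EncodedCommand",
        "AAAA",
    ])));
    // ripgrep options that can spawn arbitrary executables are rejected.
    assert!(!is_safe_command_windows(&vec_str(&[
        "pwsh",
        "-Command",
        "rg --hostname-bin foo pattern",
    ])));
    // git subcommands outside the read-only safelist are rejected.
    assert!(!is_safe_command_windows(&vec_str(&[
        "pwsh",
        "-Command",
        "git push origin main",
    ])));
}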

View File

@@ -23,12 +23,12 @@ use crate::openai_model_info::get_model_info;
use crate::protocol::AskForApproval;
use crate::protocol::SandboxPolicy;
use anyhow::Context;
use codex_app_server_protocol::Tools;
use codex_app_server_protocol::UserSavedConfig;
use codex_protocol::config_types::ReasoningEffort;
use codex_protocol::config_types::ReasoningSummary;
use codex_protocol::config_types::SandboxMode;
use codex_protocol::config_types::Verbosity;
use codex_protocol::mcp_protocol::Tools;
use codex_protocol::mcp_protocol::UserSavedConfig;
use dirs::home_dir;
use serde::Deserialize;
use std::collections::BTreeMap;

View File

@@ -22,7 +22,7 @@ pub struct ConfigProfile {
pub experimental_instructions_file: Option<PathBuf>,
}
impl From<ConfigProfile> for codex_protocol::mcp_protocol::Profile {
impl From<ConfigProfile> for codex_app_server_protocol::Profile {
fn from(config_profile: ConfigProfile) -> Self {
Self {
model: config_profile.model,

View File

@@ -313,7 +313,7 @@ pub struct SandboxWorkspaceWrite {
pub exclude_slash_tmp: bool,
}
impl From<SandboxWorkspaceWrite> for codex_protocol::mcp_protocol::SandboxSettings {
impl From<SandboxWorkspaceWrite> for codex_app_server_protocol::SandboxSettings {
fn from(sandbox_workspace_write: SandboxWorkspaceWrite) -> Self {
Self {
writable_roots: sandbox_workspace_write.writable_roots,

View File

@@ -13,7 +13,7 @@ use crate::protocol::Event;
use crate::protocol::EventMsg;
use crate::protocol::SessionConfiguredEvent;
use crate::rollout::RolloutRecorder;
use codex_protocol::mcp_protocol::ConversationId;
use codex_protocol::ConversationId;
use codex_protocol::models::ResponseItem;
use codex_protocol::protocol::InitialHistory;
use codex_protocol::protocol::RolloutItem;

View File

@@ -1,7 +1,7 @@
use crate::exec::ExecToolCallOutput;
use crate::token_data::KnownPlan;
use crate::token_data::PlanType;
use codex_protocol::mcp_protocol::ConversationId;
use codex_protocol::ConversationId;
use codex_protocol::protocol::RateLimitSnapshot;
use reqwest::StatusCode;
use serde_json;

View File

@@ -0,0 +1,101 @@
use std::collections::HashMap;
use std::env;
use async_trait::async_trait;
use crate::CODEX_APPLY_PATCH_ARG1;
use crate::apply_patch::ApplyPatchExec;
use crate::exec::ExecParams;
use crate::function_tool::FunctionCallError;
pub(crate) enum ExecutionMode {
Shell,
ApplyPatch(ApplyPatchExec),
}
#[async_trait]
/// Backend-specific hooks that prepare and post-process execution requests for a
/// given [`ExecutionMode`].
pub(crate) trait ExecutionBackend: Send + Sync {
fn prepare(
&self,
params: ExecParams,
// The mode is passed so the apply_patch backend can pull the patch action out of it.
mode: &ExecutionMode,
) -> Result<ExecParams, FunctionCallError>;
fn stream_stdout(&self, _mode: &ExecutionMode) -> bool {
true
}
}
static SHELL_BACKEND: ShellBackend = ShellBackend;
static APPLY_PATCH_BACKEND: ApplyPatchBackend = ApplyPatchBackend;
pub(crate) fn backend_for_mode(mode: &ExecutionMode) -> &'static dyn ExecutionBackend {
match mode {
ExecutionMode::Shell => &SHELL_BACKEND,
ExecutionMode::ApplyPatch(_) => &APPLY_PATCH_BACKEND,
}
}
struct ShellBackend;
#[async_trait]
impl ExecutionBackend for ShellBackend {
fn prepare(
&self,
params: ExecParams,
mode: &ExecutionMode,
) -> Result<ExecParams, FunctionCallError> {
match mode {
ExecutionMode::Shell => Ok(params),
_ => Err(FunctionCallError::RespondToModel(
"shell backend invoked with non-shell mode".to_string(),
)),
}
}
}
struct ApplyPatchBackend;
#[async_trait]
impl ExecutionBackend for ApplyPatchBackend {
fn prepare(
&self,
params: ExecParams,
mode: &ExecutionMode,
) -> Result<ExecParams, FunctionCallError> {
match mode {
ExecutionMode::ApplyPatch(exec) => {
let path_to_codex = env::current_exe()
.ok()
.map(|p| p.to_string_lossy().to_string())
.ok_or_else(|| {
FunctionCallError::RespondToModel(
"failed to determine path to codex executable".to_string(),
)
})?;
let patch = exec.action.patch.clone();
Ok(ExecParams {
command: vec![path_to_codex, CODEX_APPLY_PATCH_ARG1.to_string(), patch],
cwd: exec.action.cwd.clone(),
timeout_ms: params.timeout_ms,
// Run apply_patch with a minimal environment for determinism and to
// avoid leaking host environment variables into the patch process.
env: HashMap::new(),
with_escalated_permissions: params.with_escalated_permissions,
justification: params.justification,
})
}
ExecutionMode::Shell => Err(FunctionCallError::RespondToModel(
"apply_patch backend invoked without patch context".to_string(),
)),
}
}
fn stream_stdout(&self, _mode: &ExecutionMode) -> bool {
false
}
}
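For readers skimming the new module layout, this is the whole backend contract: a zero-sized type per `ExecutionMode`, stored as a static and handed out as a `&'static dyn ExecutionBackend`, with `prepare` rewriting the exec parameters and `stream_stdout` gating output streaming. A self-contained sketch of that dispatch shape, using simplified stand-in types rather than the real `ExecParams` and `ExecutionMode`:
// Simplified stand-ins for the real mode/params types.
enum Mode {
    Shell,
    ApplyPatch(String), // patch body
}

struct Params {
    command: Vec<String>,
}

trait Backend: Send + Sync {
    fn prepare(&self, params: Params, mode: &Mode) -> Result<Params, String>;
    // Streaming is on by default; a backend opts out per mode.
    fn stream_stdout(&self, _mode: &Mode) -> bool {
        true
    }
}

struct ShellBackend;
struct ApplyPatchBackend;

impl Backend for ShellBackend {
    fn prepare(&self, params: Params, _mode: &Mode) -> Result<Params, String> {
        Ok(params) // shell commands pass through untouched
    }
}

impl Backend for ApplyPatchBackend {
    fn prepare(&self, _params: Params, mode: &Mode) -> Result<Params, String> {
        match mode {
            // Rewrite the invocation so the patch is applied by a dedicated command.
            Mode::ApplyPatch(patch) => Ok(Params {
                command: vec!["apply_patch".to_string(), patch.clone()],
            }),
            Mode::Shell => Err("apply_patch backend invoked without patch context".to_string()),
        }
    }
    fn stream_stdout(&self, _mode: &Mode) -> bool {
        false // patch output is post-processed, not streamed
    }
}

static SHELL: ShellBackend = ShellBackend;
static APPLY_PATCH: ApplyPatchBackend = ApplyPatchBackend;

fn backend_for(mode: &Mode) -> &'static dyn Backend {
    match mode {
        Mode::Shell => &SHELL,
        Mode::ApplyPatch(_) => &APPLY_PATCH,
    }
}

fn main() {
    let shell_mode = Mode::Shell;
    assert!(backend_for(&shell_mode).stream_stdout(&shell_mode));

    let patch_mode = Mode::ApplyPatch("*** Begin Patch ...".to_string());
    let backend = backend_for(&patch_mode);
    let prepared = backend
        .prepare(Params { command: vec!["unused".to_string()] }, &patch_mode)
        .expect("apply_patch mode carries a patch");
    assert_eq!(prepared.command[0], "apply_patch");
    assert!(!backend.stream_stdout(&patch_mode));
}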

View File

@@ -0,0 +1,51 @@
use std::collections::HashSet;
use std::sync::Arc;
use std::sync::Mutex;
#[derive(Clone, Debug, Default)]
/// Thread-safe store of user approvals so repeated commands can reuse
/// previously granted trust.
pub(crate) struct ApprovalCache {
inner: Arc<Mutex<HashSet<Vec<String>>>>,
}
impl ApprovalCache {
pub(crate) fn insert(&self, command: Vec<String>) {
if command.is_empty() {
return;
}
if let Ok(mut guard) = self.inner.lock() {
guard.insert(command);
}
}
pub(crate) fn snapshot(&self) -> HashSet<Vec<String>> {
self.inner.lock().map(|g| g.clone()).unwrap_or_default()
}
}
#[cfg(test)]
mod tests {
use super::*;
use pretty_assertions::assert_eq;
#[test]
fn insert_ignores_empty_and_dedupes() {
let cache = ApprovalCache::default();
// Empty should be ignored
cache.insert(vec![]);
assert!(cache.snapshot().is_empty());
// Insert a command and verify snapshot contains it
let cmd = vec!["foo".to_string(), "bar".to_string()];
cache.insert(cmd.clone());
let snap1 = cache.snapshot();
assert!(snap1.contains(&cmd));
// Reinserting should not create duplicates
cache.insert(cmd);
let snap2 = cache.snapshot();
assert_eq!(snap1, snap2);
}
}

View File

@@ -0,0 +1,64 @@
mod backends;
mod cache;
mod runner;
mod sandbox;
pub(crate) use backends::ExecutionMode;
pub(crate) use runner::ExecutionRequest;
pub(crate) use runner::Executor;
pub(crate) use runner::ExecutorConfig;
pub(crate) use runner::normalize_exec_result;
pub(crate) mod linkers {
use crate::codex::ExecCommandContext;
use crate::exec::ExecParams;
use crate::exec::StdoutStream;
use crate::executor::backends::ExecutionMode;
use crate::executor::runner::ExecutionRequest;
pub struct PreparedExec {
pub(crate) context: ExecCommandContext,
pub(crate) request: ExecutionRequest,
}
impl PreparedExec {
pub fn new(
context: ExecCommandContext,
params: ExecParams,
approval_command: Vec<String>,
mode: ExecutionMode,
stdout_stream: Option<StdoutStream>,
use_shell_profile: bool,
) -> Self {
let request = ExecutionRequest {
params,
approval_command,
mode,
stdout_stream,
use_shell_profile,
};
Self { context, request }
}
}
}
pub mod errors {
use crate::error::CodexErr;
use crate::function_tool::FunctionCallError;
use thiserror::Error;
#[derive(Debug, Error)]
pub enum ExecError {
#[error(transparent)]
Function(#[from] FunctionCallError),
#[error(transparent)]
Codex(#[from] CodexErr),
}
impl ExecError {
pub(crate) fn rejection(msg: impl Into<String>) -> Self {
FunctionCallError::RespondToModel(msg.into()).into()
}
}
}
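The `errors` module is doing one specific job: both `FunctionCallError` and `CodexErr` convert into `ExecError` via `#[from]`, so executor internals can use `?` on either family and the call site in `handle_container_exec_with_params` can still match on which one it was. A self-contained sketch of the same thiserror pattern with simplified stand-in error types (assuming only the `thiserror` crate):
use thiserror::Error;

// Simplified stand-ins for FunctionCallError / CodexErr.
#[derive(Debug, Error)]
#[error("respond to model: {0}")]
struct ToolError(String);

#[derive(Debug, Error)]
#[error("internal: {0}")]
struct InternalError(String);

// Same shape as ExecError: a transparent sum of the two error families.
#[derive(Debug, Error)]
enum ExecError {
    #[error(transparent)]
    Tool(#[from] ToolError),
    #[error(transparent)]
    Internal(#[from] InternalError),
}

impl ExecError {
    fn rejection(msg: impl Into<String>) -> Self {
        ToolError(msg.into()).into()
    }
}

fn parse_flag(flag: &str) -> Result<(), ToolError> {
    if flag == "--bad" {
        return Err(ToolError("unsupported flag".to_string()));
    }
    Ok(())
}

fn spawn(flag: &str) -> Result<(), InternalError> {
    if flag == "--sandbox-broken" {
        return Err(InternalError("sandbox unavailable".to_string()));
    }
    Ok(())
}

fn run(flag: &str) -> Result<(), ExecError> {
    parse_flag(flag)?; // ToolError -> ExecError via #[from]
    spawn(flag)?; // InternalError -> ExecError via #[from]
    if flag == "--forbidden" {
        return Err(ExecError::rejection("command rejected by policy"));
    }
    Ok(())
}

fn main() {
    for flag in ["--bad", "--sandbox-broken", "--forbidden", "--ok"] {
        match run(flag) {
            Err(ExecError::Tool(e)) => println!("{flag}: surfaced to model ({e})"),
            Err(ExecError::Internal(e)) => println!("{flag}: internal failure ({e})"),
            Ok(()) => println!("{flag}: ok"),
        }
    }
}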

View File

@@ -0,0 +1,387 @@
use std::path::PathBuf;
use std::sync::Arc;
use std::sync::RwLock;
use std::time::Duration;
use super::backends::ExecutionMode;
use super::backends::backend_for_mode;
use super::cache::ApprovalCache;
use crate::codex::ExecCommandContext;
use crate::codex::Session;
use crate::error::CodexErr;
use crate::error::SandboxErr;
use crate::error::get_error_message_ui;
use crate::exec::ExecParams;
use crate::exec::ExecToolCallOutput;
use crate::exec::SandboxType;
use crate::exec::StdoutStream;
use crate::exec::StreamOutput;
use crate::exec::process_exec_tool_call;
use crate::executor::errors::ExecError;
use crate::executor::sandbox::select_sandbox;
use crate::function_tool::FunctionCallError;
use crate::protocol::AskForApproval;
use crate::protocol::ReviewDecision;
use crate::protocol::SandboxPolicy;
use crate::shell;
use codex_otel::otel_event_manager::ToolDecisionSource;
#[derive(Clone, Debug)]
pub(crate) struct ExecutorConfig {
pub(crate) sandbox_policy: SandboxPolicy,
pub(crate) sandbox_cwd: PathBuf,
codex_linux_sandbox_exe: Option<PathBuf>,
}
impl ExecutorConfig {
pub(crate) fn new(
sandbox_policy: SandboxPolicy,
sandbox_cwd: PathBuf,
codex_linux_sandbox_exe: Option<PathBuf>,
) -> Self {
Self {
sandbox_policy,
sandbox_cwd,
codex_linux_sandbox_exe,
}
}
}
/// Coordinates sandbox selection, backend-specific preparation, and command
/// execution for tool calls requested by the model.
pub(crate) struct Executor {
approval_cache: ApprovalCache,
config: Arc<RwLock<ExecutorConfig>>,
}
impl Executor {
pub(crate) fn new(config: ExecutorConfig) -> Self {
Self {
approval_cache: ApprovalCache::default(),
config: Arc::new(RwLock::new(config)),
}
}
/// Updates the sandbox policy and working directory used for future
/// executions without recreating the executor.
pub(crate) fn update_environment(&self, sandbox_policy: SandboxPolicy, sandbox_cwd: PathBuf) {
if let Ok(mut cfg) = self.config.write() {
cfg.sandbox_policy = sandbox_policy;
cfg.sandbox_cwd = sandbox_cwd;
}
}
/// Runs a prepared execution request end-to-end: prepares parameters, decides on
/// sandbox placement (prompting the user when necessary), launches the command,
/// and lets the backend post-process the final output.
pub(crate) async fn run(
&self,
mut request: ExecutionRequest,
session: &Session,
approval_policy: AskForApproval,
context: &ExecCommandContext,
) -> Result<ExecToolCallOutput, ExecError> {
if matches!(request.mode, ExecutionMode::Shell) {
request.params =
maybe_translate_shell_command(request.params, session, request.use_shell_profile);
}
// Step 1: Normalise parameters via the selected backend.
let backend = backend_for_mode(&request.mode);
let stdout_stream = if backend.stream_stdout(&request.mode) {
request.stdout_stream.clone()
} else {
None
};
request.params = backend
.prepare(request.params, &request.mode)
.map_err(ExecError::from)?;
// Step 2: Snapshot sandbox configuration so it stays stable for this run.
let config = self
.config
.read()
.map_err(|_| ExecError::rejection("executor config poisoned"))?
.clone();
// Step 3: Decide sandbox placement, prompting for approval when needed.
let sandbox_decision = select_sandbox(
&request,
approval_policy,
self.approval_cache.snapshot(),
&config,
session,
&context.sub_id,
&context.call_id,
&context.otel_event_manager,
)
.await?;
if sandbox_decision.record_session_approval {
self.approval_cache.insert(request.approval_command.clone());
}
// Step 4: Launch the command within the chosen sandbox.
let first_attempt = self
.spawn(
request.params.clone(),
sandbox_decision.initial_sandbox,
&config,
stdout_stream.clone(),
)
.await;
// Step 5: Handle sandbox outcomes, optionally escalating to an unsandboxed retry.
match first_attempt {
Ok(output) => Ok(output),
Err(CodexErr::Sandbox(SandboxErr::Timeout { output })) => {
Err(CodexErr::Sandbox(SandboxErr::Timeout { output }).into())
}
Err(CodexErr::Sandbox(error)) => {
if sandbox_decision.escalate_on_failure {
self.retry_without_sandbox(
&request,
&config,
session,
context,
stdout_stream,
error,
)
.await
} else {
Err(ExecError::rejection(format!(
"failed in sandbox {:?} with execution error: {error:?}",
sandbox_decision.initial_sandbox
)))
}
}
Err(err) => Err(err.into()),
}
}
/// Fallback path invoked when a sandboxed run is denied so the user can
/// approve rerunning without isolation.
async fn retry_without_sandbox(
&self,
request: &ExecutionRequest,
config: &ExecutorConfig,
session: &Session,
context: &ExecCommandContext,
stdout_stream: Option<StdoutStream>,
sandbox_error: SandboxErr,
) -> Result<ExecToolCallOutput, ExecError> {
session
.notify_background_event(
&context.sub_id,
format!("Execution failed: {sandbox_error}"),
)
.await;
let decision = session
.request_command_approval(
context.sub_id.to_string(),
context.call_id.to_string(),
request.approval_command.clone(),
request.params.cwd.clone(),
Some("command failed; retry without sandbox?".to_string()),
)
.await;
context.otel_event_manager.tool_decision(
&context.tool_name,
&context.call_id,
decision,
ToolDecisionSource::User,
);
match decision {
ReviewDecision::Approved | ReviewDecision::ApprovedForSession => {
if matches!(decision, ReviewDecision::ApprovedForSession) {
self.approval_cache.insert(request.approval_command.clone());
}
session
.notify_background_event(&context.sub_id, "retrying command without sandbox")
.await;
let retry_output = self
.spawn(
request.params.clone(),
SandboxType::None,
config,
stdout_stream,
)
.await?;
Ok(retry_output)
}
ReviewDecision::Denied | ReviewDecision::Abort => {
Err(ExecError::rejection("exec command rejected by user"))
}
}
}
async fn spawn(
&self,
params: ExecParams,
sandbox: SandboxType,
config: &ExecutorConfig,
stdout_stream: Option<StdoutStream>,
) -> Result<ExecToolCallOutput, CodexErr> {
process_exec_tool_call(
params,
sandbox,
&config.sandbox_policy,
&config.sandbox_cwd,
&config.codex_linux_sandbox_exe,
stdout_stream,
)
.await
}
}
fn maybe_translate_shell_command(
params: ExecParams,
session: &Session,
use_shell_profile: bool,
) -> ExecParams {
let should_translate =
matches!(session.user_shell(), shell::Shell::PowerShell(_)) || use_shell_profile;
if should_translate
&& let Some(command) = session
.user_shell()
.format_default_shell_invocation(params.command.clone())
{
return ExecParams { command, ..params };
}
params
}
pub(crate) struct ExecutionRequest {
pub params: ExecParams,
pub approval_command: Vec<String>,
pub mode: ExecutionMode,
pub stdout_stream: Option<StdoutStream>,
pub use_shell_profile: bool,
}
pub(crate) struct NormalizedExecOutput<'a> {
borrowed: Option<&'a ExecToolCallOutput>,
synthetic: Option<ExecToolCallOutput>,
}
impl<'a> NormalizedExecOutput<'a> {
pub(crate) fn event_output(&'a self) -> &'a ExecToolCallOutput {
match (self.borrowed, self.synthetic.as_ref()) {
(Some(output), _) => output,
(None, Some(output)) => output,
(None, None) => unreachable!("normalized exec output missing data"),
}
}
}
/// Converts a raw execution result into a uniform view that always exposes an
/// [`ExecToolCallOutput`], synthesizing error output when the command fails
/// before producing a response.
pub(crate) fn normalize_exec_result(
result: &Result<ExecToolCallOutput, ExecError>,
) -> NormalizedExecOutput<'_> {
match result {
Ok(output) => NormalizedExecOutput {
borrowed: Some(output),
synthetic: None,
},
Err(ExecError::Codex(CodexErr::Sandbox(SandboxErr::Timeout { output }))) => {
NormalizedExecOutput {
borrowed: Some(output.as_ref()),
synthetic: None,
}
}
Err(err) => {
let message = match err {
ExecError::Function(FunctionCallError::RespondToModel(msg)) => msg.clone(),
ExecError::Codex(e) => get_error_message_ui(e),
};
let synthetic = ExecToolCallOutput {
exit_code: -1,
stdout: StreamOutput::new(String::new()),
stderr: StreamOutput::new(message.clone()),
aggregated_output: StreamOutput::new(message),
duration: Duration::default(),
timed_out: false,
};
NormalizedExecOutput {
borrowed: None,
synthetic: Some(synthetic),
}
}
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::error::CodexErr;
use crate::error::EnvVarError;
use crate::error::SandboxErr;
use crate::exec::StreamOutput;
use pretty_assertions::assert_eq;
fn make_output(text: &str) -> ExecToolCallOutput {
ExecToolCallOutput {
exit_code: 1,
stdout: StreamOutput::new(String::new()),
stderr: StreamOutput::new(String::new()),
aggregated_output: StreamOutput::new(text.to_string()),
duration: Duration::from_millis(123),
timed_out: false,
}
}
#[test]
fn normalize_success_borrows() {
let out = make_output("ok");
let result: Result<ExecToolCallOutput, ExecError> = Ok(out);
let normalized = normalize_exec_result(&result);
assert_eq!(normalized.event_output().aggregated_output.text, "ok");
}
#[test]
fn normalize_timeout_borrows_embedded_output() {
let out = make_output("timed out payload");
let err = CodexErr::Sandbox(SandboxErr::Timeout {
output: Box::new(out),
});
let result: Result<ExecToolCallOutput, ExecError> = Err(ExecError::Codex(err));
let normalized = normalize_exec_result(&result);
assert_eq!(
normalized.event_output().aggregated_output.text,
"timed out payload"
);
}
#[test]
fn normalize_function_error_synthesizes_payload() {
let err = FunctionCallError::RespondToModel("boom".to_string());
let result: Result<ExecToolCallOutput, ExecError> = Err(ExecError::Function(err));
let normalized = normalize_exec_result(&result);
assert_eq!(normalized.event_output().aggregated_output.text, "boom");
}
#[test]
fn normalize_codex_error_synthesizes_user_message() {
// Use a simple EnvVar error which formats to a clear message
let e = CodexErr::EnvVar(EnvVarError {
var: "FOO".to_string(),
instructions: Some("set it".to_string()),
});
let result: Result<ExecToolCallOutput, ExecError> = Err(ExecError::Codex(e));
let normalized = normalize_exec_result(&result);
assert!(
normalized
.event_output()
.aggregated_output
.text
.contains("Missing environment variable: `FOO`"),
"expected synthesized user-friendly message"
);
}
}
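One consequence of Step 2 cloning the config out of the `RwLock`: an `update_environment` call that arrives while a command is running changes future executions, not the one in flight. A tiny self-contained sketch of that snapshot-then-update behavior with a simplified stand-in config:
use std::sync::Arc;
use std::sync::RwLock;

#[derive(Clone, Debug, PartialEq)]
struct Config {
    cwd: String,
}

fn main() {
    let shared = Arc::new(RwLock::new(Config { cwd: "/repo".to_string() }));

    // Step 2 analogue: snapshot the configuration for this run.
    let snapshot = shared.read().expect("not poisoned").clone();

    // update_environment analogue arriving while the run is in flight.
    shared.write().expect("not poisoned").cwd = "/other".to_string();

    // The in-flight run keeps its snapshot; the next run sees the new value.
    assert_eq!(snapshot.cwd, "/repo");
    assert_eq!(shared.read().expect("not poisoned").cwd, "/other");
}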

View File

@@ -0,0 +1,405 @@
use crate::apply_patch::ApplyPatchExec;
use crate::codex::Session;
use crate::exec::SandboxType;
use crate::executor::ExecutionMode;
use crate::executor::ExecutionRequest;
use crate::executor::ExecutorConfig;
use crate::executor::errors::ExecError;
use crate::safety::SafetyCheck;
use crate::safety::assess_command_safety;
use crate::safety::assess_patch_safety;
use codex_otel::otel_event_manager::OtelEventManager;
use codex_otel::otel_event_manager::ToolDecisionSource;
use codex_protocol::protocol::AskForApproval;
use codex_protocol::protocol::ReviewDecision;
use std::collections::HashSet;
/// Sandbox placement options selected for an execution run, including whether
/// to escalate after failures and whether approvals should persist.
pub(crate) struct SandboxDecision {
pub(crate) initial_sandbox: SandboxType,
pub(crate) escalate_on_failure: bool,
pub(crate) record_session_approval: bool,
}
impl SandboxDecision {
fn auto(sandbox: SandboxType, escalate_on_failure: bool) -> Self {
Self {
initial_sandbox: sandbox,
escalate_on_failure,
record_session_approval: false,
}
}
fn user_override(record_session_approval: bool) -> Self {
Self {
initial_sandbox: SandboxType::None,
escalate_on_failure: false,
record_session_approval,
}
}
}
fn should_escalate_on_failure(approval: AskForApproval, sandbox: SandboxType) -> bool {
matches!(
(approval, sandbox),
(
AskForApproval::UnlessTrusted | AskForApproval::OnFailure,
SandboxType::MacosSeatbelt | SandboxType::LinuxSeccomp
)
)
}
/// Determines how a command should be sandboxed, prompting the user when
/// policy requires explicit approval.
#[allow(clippy::too_many_arguments)]
pub async fn select_sandbox(
request: &ExecutionRequest,
approval_policy: AskForApproval,
approval_cache: HashSet<Vec<String>>,
config: &ExecutorConfig,
session: &Session,
sub_id: &str,
call_id: &str,
otel_event_manager: &OtelEventManager,
) -> Result<SandboxDecision, ExecError> {
match &request.mode {
ExecutionMode::Shell => {
select_shell_sandbox(
request,
approval_policy,
approval_cache,
config,
session,
sub_id,
call_id,
otel_event_manager,
)
.await
}
ExecutionMode::ApplyPatch(exec) => {
select_apply_patch_sandbox(exec, approval_policy, config)
}
}
}
#[allow(clippy::too_many_arguments)]
async fn select_shell_sandbox(
request: &ExecutionRequest,
approval_policy: AskForApproval,
approved_snapshot: HashSet<Vec<String>>,
config: &ExecutorConfig,
session: &Session,
sub_id: &str,
call_id: &str,
otel_event_manager: &OtelEventManager,
) -> Result<SandboxDecision, ExecError> {
let command_for_safety = if request.approval_command.is_empty() {
request.params.command.clone()
} else {
request.approval_command.clone()
};
let safety = assess_command_safety(
&command_for_safety,
approval_policy,
&config.sandbox_policy,
&approved_snapshot,
request.params.with_escalated_permissions.unwrap_or(false),
);
match safety {
SafetyCheck::AutoApprove {
sandbox_type,
user_explicitly_approved,
} => {
let mut decision = SandboxDecision::auto(
sandbox_type,
should_escalate_on_failure(approval_policy, sandbox_type),
);
if user_explicitly_approved {
decision.record_session_approval = true;
}
let (decision_for_event, source) = if user_explicitly_approved {
(ReviewDecision::ApprovedForSession, ToolDecisionSource::User)
} else {
(ReviewDecision::Approved, ToolDecisionSource::Config)
};
otel_event_manager.tool_decision("local_shell", call_id, decision_for_event, source);
Ok(decision)
}
SafetyCheck::AskUser => {
let decision = session
.request_command_approval(
sub_id.to_string(),
call_id.to_string(),
request.approval_command.clone(),
request.params.cwd.clone(),
request.params.justification.clone(),
)
.await;
otel_event_manager.tool_decision(
"local_shell",
call_id,
decision,
ToolDecisionSource::User,
);
match decision {
ReviewDecision::Approved => Ok(SandboxDecision::user_override(false)),
ReviewDecision::ApprovedForSession => Ok(SandboxDecision::user_override(true)),
ReviewDecision::Denied | ReviewDecision::Abort => {
Err(ExecError::rejection("exec command rejected by user"))
}
}
}
SafetyCheck::Reject { reason } => Err(ExecError::rejection(format!(
"exec command rejected: {reason}"
))),
}
}
fn select_apply_patch_sandbox(
exec: &ApplyPatchExec,
approval_policy: AskForApproval,
config: &ExecutorConfig,
) -> Result<SandboxDecision, ExecError> {
if exec.user_explicitly_approved_this_action {
return Ok(SandboxDecision::user_override(false));
}
match assess_patch_safety(
&exec.action,
approval_policy,
&config.sandbox_policy,
&config.sandbox_cwd,
) {
SafetyCheck::AutoApprove { sandbox_type, .. } => Ok(SandboxDecision::auto(
sandbox_type,
should_escalate_on_failure(approval_policy, sandbox_type),
)),
SafetyCheck::AskUser => Err(ExecError::rejection(
"patch requires approval but none was recorded",
)),
SafetyCheck::Reject { reason } => {
Err(ExecError::rejection(format!("patch rejected: {reason}")))
}
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::codex::make_session_and_context;
use crate::exec::ExecParams;
use crate::function_tool::FunctionCallError;
use crate::protocol::SandboxPolicy;
use codex_apply_patch::ApplyPatchAction;
use pretty_assertions::assert_eq;
#[tokio::test]
async fn select_apply_patch_user_override_when_explicit() {
let (session, ctx) = make_session_and_context();
let tmp = tempfile::tempdir().expect("tmp");
let p = tmp.path().join("a.txt");
let action = ApplyPatchAction::new_add_for_test(&p, "hello".to_string());
let exec = ApplyPatchExec {
action,
user_explicitly_approved_this_action: true,
};
let cfg = ExecutorConfig::new(SandboxPolicy::ReadOnly, std::env::temp_dir(), None);
let request = ExecutionRequest {
params: ExecParams {
command: vec!["apply_patch".into()],
cwd: std::env::temp_dir(),
timeout_ms: None,
env: std::collections::HashMap::new(),
with_escalated_permissions: None,
justification: None,
},
approval_command: vec!["apply_patch".into()],
mode: ExecutionMode::ApplyPatch(exec),
stdout_stream: None,
use_shell_profile: false,
};
let otel_event_manager = ctx.client.get_otel_event_manager();
let decision = select_sandbox(
&request,
AskForApproval::OnRequest,
Default::default(),
&cfg,
&session,
"sub",
"call",
&otel_event_manager,
)
.await
.expect("ok");
// Explicit user override runs without sandbox
assert_eq!(decision.initial_sandbox, SandboxType::None);
assert_eq!(decision.escalate_on_failure, false);
}
#[tokio::test]
async fn select_apply_patch_autoapprove_in_danger() {
let (session, ctx) = make_session_and_context();
let tmp = tempfile::tempdir().expect("tmp");
let p = tmp.path().join("a.txt");
let action = ApplyPatchAction::new_add_for_test(&p, "hello".to_string());
let exec = ApplyPatchExec {
action,
user_explicitly_approved_this_action: false,
};
let cfg = ExecutorConfig::new(SandboxPolicy::DangerFullAccess, std::env::temp_dir(), None);
let request = ExecutionRequest {
params: ExecParams {
command: vec!["apply_patch".into()],
cwd: std::env::temp_dir(),
timeout_ms: None,
env: std::collections::HashMap::new(),
with_escalated_permissions: None,
justification: None,
},
approval_command: vec!["apply_patch".into()],
mode: ExecutionMode::ApplyPatch(exec),
stdout_stream: None,
use_shell_profile: false,
};
let otel_event_manager = ctx.client.get_otel_event_manager();
let decision = select_sandbox(
&request,
AskForApproval::OnRequest,
Default::default(),
&cfg,
&session,
"sub",
"call",
&otel_event_manager,
)
.await
.expect("ok");
// On platforms with a sandbox, DangerFullAccess still prefers it
let expected = crate::safety::get_platform_sandbox().unwrap_or(SandboxType::None);
assert_eq!(decision.initial_sandbox, expected);
assert_eq!(decision.escalate_on_failure, false);
}
#[tokio::test]
async fn select_apply_patch_requires_approval_on_unless_trusted() {
let (session, ctx) = make_session_and_context();
let tempdir = tempfile::tempdir().expect("tmpdir");
let p = tempdir.path().join("a.txt");
let action = ApplyPatchAction::new_add_for_test(&p, "hello".to_string());
let exec = ApplyPatchExec {
action,
user_explicitly_approved_this_action: false,
};
let cfg = ExecutorConfig::new(SandboxPolicy::ReadOnly, std::env::temp_dir(), None);
let request = ExecutionRequest {
params: ExecParams {
command: vec!["apply_patch".into()],
cwd: std::env::temp_dir(),
timeout_ms: None,
env: std::collections::HashMap::new(),
with_escalated_permissions: None,
justification: None,
},
approval_command: vec!["apply_patch".into()],
mode: ExecutionMode::ApplyPatch(exec),
stdout_stream: None,
use_shell_profile: false,
};
let otel_event_manager = ctx.client.get_otel_event_manager();
let result = select_sandbox(
&request,
AskForApproval::UnlessTrusted,
Default::default(),
&cfg,
&session,
"sub",
"call",
&otel_event_manager,
)
.await;
match result {
Ok(_) => panic!("expected error"),
Err(ExecError::Function(FunctionCallError::RespondToModel(msg))) => {
assert!(msg.contains("requires approval"))
}
Err(other) => panic!("unexpected error: {other:?}"),
}
}
#[tokio::test]
async fn select_shell_autoapprove_in_danger_mode() {
let (session, ctx) = make_session_and_context();
let cfg = ExecutorConfig::new(SandboxPolicy::DangerFullAccess, std::env::temp_dir(), None);
let request = ExecutionRequest {
params: ExecParams {
command: vec!["some-unknown".into()],
cwd: std::env::temp_dir(),
timeout_ms: None,
env: std::collections::HashMap::new(),
with_escalated_permissions: None,
justification: None,
},
approval_command: vec!["some-unknown".into()],
mode: ExecutionMode::Shell,
stdout_stream: None,
use_shell_profile: false,
};
let otel_event_manager = ctx.client.get_otel_event_manager();
let decision = select_sandbox(
&request,
AskForApproval::OnRequest,
Default::default(),
&cfg,
&session,
"sub",
"call",
&otel_event_manager,
)
.await
.expect("ok");
assert_eq!(decision.initial_sandbox, SandboxType::None);
assert_eq!(decision.escalate_on_failure, false);
}
#[cfg(any(target_os = "macos", target_os = "linux"))]
#[tokio::test]
async fn select_shell_escalates_on_failure_with_platform_sandbox() {
let (session, ctx) = make_session_and_context();
let cfg = ExecutorConfig::new(SandboxPolicy::ReadOnly, std::env::temp_dir(), None);
let request = ExecutionRequest {
params: ExecParams {
// Unknown command => untrusted but not flagged dangerous
command: vec!["some-unknown".into()],
cwd: std::env::temp_dir(),
timeout_ms: None,
env: std::collections::HashMap::new(),
with_escalated_permissions: None,
justification: None,
},
approval_command: vec!["some-unknown".into()],
mode: ExecutionMode::Shell,
stdout_stream: None,
use_shell_profile: false,
};
let otel_event_manager = ctx.client.get_otel_event_manager();
let decision = select_sandbox(
&request,
AskForApproval::OnFailure,
Default::default(),
&cfg,
&session,
"sub",
"call",
&otel_event_manager,
)
.await
.expect("ok");
// On macOS/Linux we should have a platform sandbox and escalate on failure
assert_ne!(decision.initial_sandbox, SandboxType::None);
assert_eq!(decision.escalate_on_failure, true);
}
}
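The escalation rule is small enough to pin down exhaustively. A sketch of one more test that could live in the same `mod tests` (so `should_escalate_on_failure`, `AskForApproval`, and `SandboxType` are in scope):
#[test]
fn escalation_requires_a_platform_sandbox_and_a_lenient_policy() {
    // Escalate only when a platform sandbox was used and the policy allows a retry prompt.
    assert!(should_escalate_on_failure(
        AskForApproval::OnFailure,
        SandboxType::MacosSeatbelt
    ));
    assert!(should_escalate_on_failure(
        AskForApproval::UnlessTrusted,
        SandboxType::LinuxSeccomp
    ));
    // Never escalate when the command already ran without a sandbox.
    assert!(!should_escalate_on_failure(
        AskForApproval::OnFailure,
        SandboxType::None
    ));
    // Never escalate when the policy forbids asking the user again.
    assert!(!should_escalate_on_failure(
        AskForApproval::Never,
        SandboxType::MacosSeatbelt
    ));
    assert!(!should_escalate_on_failure(
        AskForApproval::OnRequest,
        SandboxType::LinuxSeccomp
    ));
}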

View File

@@ -2,7 +2,7 @@ use std::collections::HashSet;
use std::path::Path;
use std::path::PathBuf;
use codex_protocol::mcp_protocol::GitSha;
use codex_app_server_protocol::GitSha;
use codex_protocol::protocol::GitInfo;
use futures::future::join_all;
use serde::Deserialize;

View File

@@ -27,6 +27,7 @@ pub mod error;
pub mod exec;
mod exec_command;
pub mod exec_env;
pub mod executor;
mod flags;
pub mod git_info;
pub mod landlock;

View File

@@ -30,7 +30,7 @@ use tokio::io::AsyncReadExt;
use crate::config::Config;
use crate::config_types::HistoryPersistence;
use codex_protocol::mcp_protocol::ConversationId;
use codex_protocol::ConversationId;
#[cfg(unix)]
use std::os::unix::fs::OpenOptionsExt;
#[cfg(unix)]

View File

@@ -6,7 +6,7 @@
//! key. These override or extend the defaults at runtime.
use crate::CodexAuth;
use codex_protocol::mcp_protocol::AuthMode;
use codex_app_server_protocol::AuthMode;
use serde::Deserialize;
use serde::Serialize;
use std::collections::HashMap;

View File

@@ -40,10 +40,24 @@ pub struct ConversationItem {
pub head: Vec<serde_json::Value>,
/// Last up to `TAIL_RECORD_LIMIT` JSONL response records parsed as JSON.
pub tail: Vec<serde_json::Value>,
/// RFC3339 timestamp string for when the session was created, if available.
pub created_at: Option<String>,
/// RFC3339 timestamp string for the most recent response in the tail, if available.
pub updated_at: Option<String>,
}
#[derive(Default)]
struct HeadTailSummary {
head: Vec<serde_json::Value>,
tail: Vec<serde_json::Value>,
saw_session_meta: bool,
saw_user_event: bool,
created_at: Option<String>,
updated_at: Option<String>,
}
/// Hard cap to bound worst-case work per request.
const MAX_SCAN_FILES: usize = 100;
const MAX_SCAN_FILES: usize = 10000;
const HEAD_RECORD_LIMIT: usize = 10;
const TAIL_RECORD_LIMIT: usize = 10;
@@ -179,13 +193,26 @@ async fn traverse_directories_for_paths(
}
// Read head and simultaneously detect message events within the same
// first N JSONL records to avoid a second file read.
let (head, tail, saw_session_meta, saw_user_event) =
read_head_and_tail(&path, HEAD_RECORD_LIMIT, TAIL_RECORD_LIMIT)
.await
.unwrap_or((Vec::new(), Vec::new(), false, false));
let summary = read_head_and_tail(&path, HEAD_RECORD_LIMIT, TAIL_RECORD_LIMIT)
.await
.unwrap_or_default();
// Apply filters: must have session meta and at least one user message event
if saw_session_meta && saw_user_event {
items.push(ConversationItem { path, head, tail });
if summary.saw_session_meta && summary.saw_user_event {
let HeadTailSummary {
head,
tail,
created_at,
mut updated_at,
..
} = summary;
updated_at = updated_at.or_else(|| created_at.clone());
items.push(ConversationItem {
path,
head,
tail,
created_at,
updated_at,
});
}
}
}
@@ -293,17 +320,15 @@ async fn read_head_and_tail(
path: &Path,
head_limit: usize,
tail_limit: usize,
) -> io::Result<(Vec<serde_json::Value>, Vec<serde_json::Value>, bool, bool)> {
) -> io::Result<HeadTailSummary> {
use tokio::io::AsyncBufReadExt;
let file = tokio::fs::File::open(path).await?;
let reader = tokio::io::BufReader::new(file);
let mut lines = reader.lines();
let mut head: Vec<serde_json::Value> = Vec::new();
let mut saw_session_meta = false;
let mut saw_user_event = false;
let mut summary = HeadTailSummary::default();
while head.len() < head_limit {
while summary.head.len() < head_limit {
let line_opt = lines.next_line().await?;
let Some(line) = line_opt else { break };
let trimmed = line.trim();
@@ -316,14 +341,22 @@ async fn read_head_and_tail(
match rollout_line.item {
RolloutItem::SessionMeta(session_meta_line) => {
summary.created_at = summary
.created_at
.clone()
.or_else(|| Some(rollout_line.timestamp.clone()));
if let Ok(val) = serde_json::to_value(session_meta_line) {
head.push(val);
saw_session_meta = true;
summary.head.push(val);
summary.saw_session_meta = true;
}
}
RolloutItem::ResponseItem(item) => {
summary.created_at = summary
.created_at
.clone()
.or_else(|| Some(rollout_line.timestamp.clone()));
if let Ok(val) = serde_json::to_value(item) {
head.push(val);
summary.head.push(val);
}
}
RolloutItem::TurnContext(_) => {
@@ -334,28 +367,30 @@ async fn read_head_and_tail(
}
RolloutItem::EventMsg(ev) => {
if matches!(ev, EventMsg::UserMessage(_)) {
saw_user_event = true;
summary.saw_user_event = true;
}
}
}
}
let tail = if tail_limit == 0 {
Vec::new()
} else {
read_tail_records(path, tail_limit).await?
};
Ok((head, tail, saw_session_meta, saw_user_event))
if tail_limit != 0 {
let (tail, updated_at) = read_tail_records(path, tail_limit).await?;
summary.tail = tail;
summary.updated_at = updated_at;
}
Ok(summary)
}
async fn read_tail_records(path: &Path, max_records: usize) -> io::Result<Vec<serde_json::Value>> {
async fn read_tail_records(
path: &Path,
max_records: usize,
) -> io::Result<(Vec<serde_json::Value>, Option<String>)> {
use std::io::SeekFrom;
use tokio::io::AsyncReadExt;
use tokio::io::AsyncSeekExt;
if max_records == 0 {
return Ok(Vec::new());
return Ok((Vec::new(), None));
}
const CHUNK_SIZE: usize = 8192;
@@ -363,24 +398,28 @@ async fn read_tail_records(path: &Path, max_records: usize) -> io::Result<Vec<se
let mut file = tokio::fs::File::open(path).await?;
let mut pos = file.seek(SeekFrom::End(0)).await?;
if pos == 0 {
return Ok(Vec::new());
return Ok((Vec::new(), None));
}
let mut buffer: Vec<u8> = Vec::new();
let mut latest_timestamp: Option<String> = None;
loop {
let slice_start = match (pos > 0, buffer.iter().position(|&b| b == b'\n')) {
(true, Some(idx)) => idx + 1,
_ => 0,
};
let tail = collect_last_response_values(&buffer[slice_start..], max_records);
let (tail, newest_ts) = collect_last_response_values(&buffer[slice_start..], max_records);
if latest_timestamp.is_none() {
latest_timestamp = newest_ts.clone();
}
if tail.len() >= max_records || pos == 0 {
return Ok(tail);
return Ok((tail, latest_timestamp.or(newest_ts)));
}
let read_size = CHUNK_SIZE.min(pos as usize);
if read_size == 0 {
return Ok(tail);
return Ok((tail, latest_timestamp.or(newest_ts)));
}
pos -= read_size as u64;
file.seek(SeekFrom::Start(pos)).await?;
@@ -391,15 +430,19 @@ async fn read_tail_records(path: &Path, max_records: usize) -> io::Result<Vec<se
}
}
fn collect_last_response_values(buffer: &[u8], max_records: usize) -> Vec<serde_json::Value> {
fn collect_last_response_values(
buffer: &[u8],
max_records: usize,
) -> (Vec<serde_json::Value>, Option<String>) {
use std::borrow::Cow;
if buffer.is_empty() || max_records == 0 {
return Vec::new();
return (Vec::new(), None);
}
let text: Cow<'_, str> = String::from_utf8_lossy(buffer);
let mut collected_rev: Vec<serde_json::Value> = Vec::new();
let mut latest_timestamp: Option<String> = None;
for line in text.lines().rev() {
let trimmed = line.trim();
if trimmed.is_empty() {
@@ -407,9 +450,13 @@ fn collect_last_response_values(buffer: &[u8], max_records: usize) -> Vec<serde_
}
let parsed: serde_json::Result<RolloutLine> = serde_json::from_str(trimmed);
let Ok(rollout_line) = parsed else { continue };
if let RolloutItem::ResponseItem(item) = rollout_line.item
&& let Ok(val) = serde_json::to_value(item)
let RolloutLine { timestamp, item } = rollout_line;
if let RolloutItem::ResponseItem(item) = item
&& let Ok(val) = serde_json::to_value(&item)
{
if latest_timestamp.is_none() {
latest_timestamp = Some(timestamp.clone());
}
collected_rev.push(val);
if collected_rev.len() == max_records {
break;
@@ -417,7 +464,7 @@ fn collect_last_response_values(buffer: &[u8], max_records: usize) -> Vec<serde_
}
}
collected_rev.reverse();
collected_rev
(collected_rev, latest_timestamp)
}
/// Locate a recorded conversation rollout file by its UUID string using the existing

View File

@@ -6,7 +6,7 @@ use std::io::Error as IoError;
use std::path::Path;
use std::path::PathBuf;
use codex_protocol::mcp_protocol::ConversationId;
use codex_protocol::ConversationId;
use serde_json::Value;
use time::OffsetDateTime;
use time::format_description::FormatItem;

View File

@@ -18,7 +18,7 @@ use crate::rollout::list::Cursor;
use crate::rollout::list::get_conversation;
use crate::rollout::list::get_conversations;
use anyhow::Result;
use codex_protocol::mcp_protocol::ConversationId;
use codex_protocol::ConversationId;
use codex_protocol::models::ContentItem;
use codex_protocol::models::ResponseItem;
use codex_protocol::protocol::CompactedItem;
@@ -159,16 +159,22 @@ async fn test_list_conversations_latest_first() {
path: p1,
head: head_3,
tail: Vec::new(),
created_at: Some("2025-01-03T12-00-00".into()),
updated_at: Some("2025-01-03T12-00-00".into()),
},
ConversationItem {
path: p2,
head: head_2,
tail: Vec::new(),
created_at: Some("2025-01-02T12-00-00".into()),
updated_at: Some("2025-01-02T12-00-00".into()),
},
ConversationItem {
path: p3,
head: head_1,
tail: Vec::new(),
created_at: Some("2025-01-01T12-00-00".into()),
updated_at: Some("2025-01-01T12-00-00".into()),
},
],
next_cursor: Some(expected_cursor),
@@ -235,11 +241,15 @@ async fn test_pagination_cursor() {
path: p5,
head: head_5,
tail: Vec::new(),
created_at: Some("2025-03-05T09-00-00".into()),
updated_at: Some("2025-03-05T09-00-00".into()),
},
ConversationItem {
path: p4,
head: head_4,
tail: Vec::new(),
created_at: Some("2025-03-04T09-00-00".into()),
updated_at: Some("2025-03-04T09-00-00".into()),
},
],
next_cursor: Some(expected_cursor1.clone()),
@@ -287,11 +297,15 @@ async fn test_pagination_cursor() {
path: p3,
head: head_3,
tail: Vec::new(),
created_at: Some("2025-03-03T09-00-00".into()),
updated_at: Some("2025-03-03T09-00-00".into()),
},
ConversationItem {
path: p2,
head: head_2,
tail: Vec::new(),
created_at: Some("2025-03-02T09-00-00".into()),
updated_at: Some("2025-03-02T09-00-00".into()),
},
],
next_cursor: Some(expected_cursor2.clone()),
@@ -324,6 +338,8 @@ async fn test_pagination_cursor() {
path: p1,
head: head_1,
tail: Vec::new(),
created_at: Some("2025-03-01T09-00-00".into()),
updated_at: Some("2025-03-01T09-00-00".into()),
}],
next_cursor: Some(expected_cursor3),
num_scanned_files: 5, // scanned 05, 04 (anchor), 03, 02 (anchor), 01
@@ -367,6 +383,8 @@ async fn test_get_conversation_contents() {
path: expected_path,
head: expected_head,
tail: Vec::new(),
created_at: Some(ts.into()),
updated_at: Some(ts.into()),
}],
next_cursor: Some(expected_cursor),
num_scanned_files: 1,
@@ -449,18 +467,23 @@ async fn test_tail_includes_last_response_items() -> Result<()> {
let expected: Vec<serde_json::Value> = (total_messages - tail_len..total_messages)
.map(|idx| {
serde_json::to_value(ResponseItem::Message {
id: None,
role: "assistant".into(),
content: vec![ContentItem::OutputText {
text: format!("reply-{idx}"),
}],
serde_json::json!({
"type": "message",
"role": "assistant",
"content": [
{
"type": "output_text",
"text": format!("reply-{idx}"),
}
],
})
.expect("serialize response item")
})
.collect();
assert_eq!(item.tail, expected);
assert_eq!(item.created_at.as_deref(), Some(ts));
let expected_updated = format!("{ts}-{last:02}", last = total_messages - 1);
assert_eq!(item.updated_at.as_deref(), Some(expected_updated.as_str()));
Ok(())
}
@@ -526,18 +549,25 @@ async fn test_tail_handles_short_sessions() -> Result<()> {
let expected: Vec<serde_json::Value> = (0..3)
.map(|idx| {
serde_json::to_value(ResponseItem::Message {
id: None,
role: "assistant".into(),
content: vec![ContentItem::OutputText {
text: format!("short-{idx}"),
}],
serde_json::json!({
"type": "message",
"role": "assistant",
"content": [
{
"type": "output_text",
"text": format!("short-{idx}"),
}
],
})
.expect("serialize response item")
})
.collect();
assert_eq!(tail, &expected);
let expected_updated = format!("{ts}-{last:02}", last = 2);
assert_eq!(
page.items[0].updated_at.as_deref(),
Some(expected_updated.as_str())
);
Ok(())
}
@@ -615,18 +645,25 @@ async fn test_tail_skips_trailing_non_responses() -> Result<()> {
let expected: Vec<serde_json::Value> = (0..4)
.map(|idx| {
serde_json::to_value(ResponseItem::Message {
id: None,
role: "assistant".into(),
content: vec![ContentItem::OutputText {
text: format!("response-{idx}"),
}],
serde_json::json!({
"type": "message",
"role": "assistant",
"content": [
{
"type": "output_text",
"text": format!("response-{idx}"),
}
],
})
.expect("serialize response item")
})
.collect();
assert_eq!(tail, &expected);
let expected_updated = format!("{ts}-{last:02}", last = 3);
assert_eq!(
page.items[0].updated_at.as_deref(),
Some(expected_updated.as_str())
);
Ok(())
}
@@ -676,11 +713,15 @@ async fn test_stable_ordering_same_second_pagination() {
path: p3,
head: head(u3),
tail: Vec::new(),
created_at: Some(ts.to_string()),
updated_at: Some(ts.to_string()),
},
ConversationItem {
path: p2,
head: head(u2),
tail: Vec::new(),
created_at: Some(ts.to_string()),
updated_at: Some(ts.to_string()),
},
],
next_cursor: Some(expected_cursor1.clone()),
@@ -704,6 +745,8 @@ async fn test_stable_ordering_same_second_pagination() {
path: p1,
head: head(u1),
tail: Vec::new(),
created_at: Some(ts.to_string()),
updated_at: Some(ts.to_string()),
}],
next_cursor: Some(expected_cursor2),
num_scanned_files: 3, // scanned u3, u2 (anchor), u1

View File

@@ -125,9 +125,10 @@ pub fn assess_command_safety(
// the session _because_ they know it needs to run outside a sandbox.
if is_known_safe_command(command) || approved.contains(command) {
let user_explicitly_approved = approved.contains(command);
return SafetyCheck::AutoApprove {
sandbox_type: SandboxType::None,
user_explicitly_approved: false,
user_explicitly_approved,
};
}
@@ -380,7 +381,7 @@ mod tests {
safety_check,
SafetyCheck::AutoApprove {
sandbox_type: SandboxType::None,
user_explicitly_approved: false,
user_explicitly_approved: true,
}
);
}
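The fix above threads the real approval source through to the caller: `user_explicitly_approved` now reports whether the command was in the session's approved set instead of always being `false`. A condensed sketch of the corrected decision, using simplified stand-in types rather than the crate's real `SafetyCheck`/`SandboxType`:

```rust
use std::collections::HashSet;

#[derive(Debug, PartialEq)]
enum SandboxType { None }

#[derive(Debug, PartialEq)]
enum SafetyCheck {
    AutoApprove { sandbox_type: SandboxType, user_explicitly_approved: bool },
    AskUser,
}

// Simplified stand-in for the real assessment: known-safe or previously
// approved commands are auto-approved, and the flag records whether the
// approval actually came from the user.
fn assess(command: &[String], known_safe: bool, approved: &HashSet<Vec<String>>) -> SafetyCheck {
    if known_safe || approved.contains(command) {
        return SafetyCheck::AutoApprove {
            sandbox_type: SandboxType::None,
            user_explicitly_approved: approved.contains(command),
        };
    }
    SafetyCheck::AskUser
}
```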

View File

@@ -1,9 +1,9 @@
use crate::RolloutRecorder;
use crate::exec_command::ExecSessionManager;
use crate::executor::Executor;
use crate::mcp_connection_manager::McpConnectionManager;
use crate::unified_exec::UnifiedExecSessionManager;
use crate::user_notification::UserNotifier;
use std::path::PathBuf;
use tokio::sync::Mutex;
pub(crate) struct SessionServices {
@@ -12,7 +12,7 @@ pub(crate) struct SessionServices {
pub(crate) unified_exec_manager: UnifiedExecSessionManager,
pub(crate) notifier: UserNotifier,
pub(crate) rollout: Mutex<Option<RolloutRecorder>>,
pub(crate) codex_linux_sandbox_exe: Option<PathBuf>,
pub(crate) user_shell: crate::shell::Shell,
pub(crate) show_raw_agent_reasoning: bool,
pub(crate) executor: Executor,
}

View File

@@ -1,7 +1,5 @@
//! Session-wide mutable state.
use std::collections::HashSet;
use codex_protocol::models::ResponseItem;
use crate::conversation_history::ConversationHistory;
@@ -12,7 +10,6 @@ use crate::protocol::TokenUsageInfo;
/// Persistent, session-scoped state previously stored directly on `Session`.
#[derive(Default)]
pub(crate) struct SessionState {
pub(crate) approved_commands: HashSet<Vec<String>>,
pub(crate) history: ConversationHistory,
pub(crate) token_info: Option<TokenUsageInfo>,
pub(crate) latest_rate_limits: Option<RateLimitSnapshot>,
@@ -44,15 +41,6 @@ impl SessionState {
self.history.replace(items);
}
// Approved command helpers
pub(crate) fn add_approved_command(&mut self, cmd: Vec<String>) {
self.approved_commands.insert(cmd);
}
pub(crate) fn approved_commands_ref(&self) -> &HashSet<Vec<String>> {
&self.approved_commands
}
// Token/rate limit helpers
pub(crate) fn update_token_info_from_usage(
&mut self,

View File

@@ -1,5 +1,6 @@
use std::sync::Arc;
use codex_app_server_protocol::AuthMode;
use codex_core::ContentItem;
use codex_core::LocalShellAction;
use codex_core::LocalShellExecAction;
@@ -12,8 +13,7 @@ use codex_core::ResponseItem;
use codex_core::WireApi;
use codex_core::spawn::CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR;
use codex_otel::otel_event_manager::OtelEventManager;
use codex_protocol::mcp_protocol::AuthMode;
use codex_protocol::mcp_protocol::ConversationId;
use codex_protocol::ConversationId;
use core_test_support::load_default_config_for_test;
use futures::StreamExt;
use serde_json::Value;

View File

@@ -1,6 +1,7 @@
use std::sync::Arc;
use tracing_test::traced_test;
use codex_app_server_protocol::AuthMode;
use codex_core::ContentItem;
use codex_core::ModelClient;
use codex_core::ModelProviderInfo;
@@ -10,8 +11,7 @@ use codex_core::ResponseItem;
use codex_core::WireApi;
use codex_core::spawn::CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR;
use codex_otel::otel_event_manager::OtelEventManager;
use codex_protocol::mcp_protocol::AuthMode;
use codex_protocol::mcp_protocol::ConversationId;
use codex_protocol::ConversationId;
use core_test_support::load_default_config_for_test;
use futures::StreamExt;
use tempfile::TempDir;

View File

@@ -1,3 +1,4 @@
use codex_app_server_protocol::AuthMode;
use codex_core::CodexAuth;
use codex_core::ContentItem;
use codex_core::ConversationManager;
@@ -17,8 +18,7 @@ use codex_core::protocol::EventMsg;
use codex_core::protocol::InputItem;
use codex_core::protocol::Op;
use codex_otel::otel_event_manager::OtelEventManager;
use codex_protocol::mcp_protocol::AuthMode;
use codex_protocol::mcp_protocol::ConversationId;
use codex_protocol::ConversationId;
use codex_protocol::models::ReasoningItemReasoningSummary;
use codex_protocol::models::WebSearchAction;
use core_test_support::load_default_config_for_test;

View File

@@ -169,6 +169,12 @@ async fn python_getpwuid_works_under_seatbelt() {
return;
}
// For local dev.
if which::which("python3").is_err() {
eprintln!("python3 not found in PATH, skipping test.");
return;
}
// ReadOnly is sufficient here since we are only exercising user lookup.
let policy = SandboxPolicy::ReadOnly;
let command_cwd = std::env::current_dir().expect("getcwd");

View File

@@ -64,20 +64,9 @@ pub struct Cli {
pub color: Color,
/// Print events to stdout as JSONL.
#[arg(
long = "json",
default_value_t = false,
conflicts_with = "experimental_json"
)]
#[arg(long = "json", alias = "experimental-json", default_value_t = false)]
pub json: bool,
#[arg(
long = "experimental-json",
default_value_t = false,
conflicts_with = "json"
)]
pub experimental_json: bool,
/// Whether to include the plan tool in the conversation.
#[arg(long = "include-plan-tool", default_value_t = false)]
pub include_plan_tool: bool,
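The CLI hunk above retires `--experimental-json` as a separate flag and keeps it only as an alias of `--json`. A standalone sketch of the same clap pattern (the struct here is an illustrative subset, not the full CLI):

```rust
use clap::Parser;

/// Illustrative subset of the CLI: `--experimental-json` is accepted as an
/// alias and sets the same `json` field as `--json`.
#[derive(Parser, Debug)]
struct Cli {
    /// Print events to stdout as JSONL.
    #[arg(long = "json", alias = "experimental-json", default_value_t = false)]
    json: bool,
}

fn main() {
    // Both `--json` and `--experimental-json` set `cli.json = true`.
    let cli = Cli::parse_from(["demo", "--experimental-json"]);
    assert!(cli.json);
}
```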

View File

@@ -1,64 +0,0 @@
use std::collections::HashMap;
use std::path::PathBuf;
use codex_core::config::Config;
use codex_core::protocol::Event;
use codex_core::protocol::EventMsg;
use codex_core::protocol::SessionConfiguredEvent;
use codex_core::protocol::TaskCompleteEvent;
use serde_json::json;
use crate::event_processor::CodexStatus;
use crate::event_processor::EventProcessor;
use crate::event_processor::handle_last_message;
use codex_common::create_config_summary_entries;
pub(crate) struct EventProcessorWithJsonOutput {
last_message_path: Option<PathBuf>,
}
impl EventProcessorWithJsonOutput {
pub fn new(last_message_path: Option<PathBuf>) -> Self {
Self { last_message_path }
}
}
impl EventProcessor for EventProcessorWithJsonOutput {
fn print_config_summary(&mut self, config: &Config, prompt: &str, _: &SessionConfiguredEvent) {
let entries = create_config_summary_entries(config)
.into_iter()
.map(|(key, value)| (key.to_string(), value))
.collect::<HashMap<String, String>>();
#[expect(clippy::expect_used)]
let config_json =
serde_json::to_string(&entries).expect("Failed to serialize config summary to JSON");
println!("{config_json}");
let prompt_json = json!({
"prompt": prompt,
});
println!("{prompt_json}");
}
fn process_event(&mut self, event: Event) -> CodexStatus {
match event.msg {
EventMsg::AgentMessageDelta(_) | EventMsg::AgentReasoningDelta(_) => {
// Suppress streaming events in JSON mode.
CodexStatus::Running
}
EventMsg::TaskComplete(TaskCompleteEvent { last_agent_message }) => {
if let Some(output_file) = self.last_message_path.as_deref() {
handle_last_message(last_agent_message.as_deref(), output_file);
}
CodexStatus::InitiateShutdown
}
EventMsg::ShutdownComplete => CodexStatus::Shutdown,
_ => {
if let Ok(line) = serde_json::to_string(&event) {
println!("{line}");
}
CodexStatus::Running
}
}
}
}

View File

@@ -29,6 +29,7 @@ use crate::exec_events::TurnCompletedEvent;
use crate::exec_events::TurnFailedEvent;
use crate::exec_events::TurnStartedEvent;
use crate::exec_events::Usage;
use crate::exec_events::WebSearchItem;
use codex_core::config::Config;
use codex_core::plan_tool::StepStatus;
use codex_core::plan_tool::UpdatePlanArgs;
@@ -46,10 +47,11 @@ use codex_core::protocol::PatchApplyEndEvent;
use codex_core::protocol::SessionConfiguredEvent;
use codex_core::protocol::TaskCompleteEvent;
use codex_core::protocol::TaskStartedEvent;
use codex_core::protocol::WebSearchEndEvent;
use tracing::error;
use tracing::warn;
pub struct ExperimentalEventProcessorWithJsonOutput {
pub struct EventProcessorWithJsonOutput {
last_message_path: Option<PathBuf>,
next_event_id: AtomicU64,
// Tracks running commands by call_id, including the associated item id.
@@ -81,7 +83,7 @@ struct RunningMcpToolCall {
item_id: String,
}
impl ExperimentalEventProcessorWithJsonOutput {
impl EventProcessorWithJsonOutput {
pub fn new(last_message_path: Option<PathBuf>) -> Self {
Self {
last_message_path,
@@ -106,6 +108,8 @@ impl ExperimentalEventProcessorWithJsonOutput {
EventMsg::McpToolCallEnd(ev) => self.handle_mcp_tool_call_end(ev),
EventMsg::PatchApplyBegin(ev) => self.handle_patch_apply_begin(ev),
EventMsg::PatchApplyEnd(ev) => self.handle_patch_apply_end(ev),
EventMsg::WebSearchBegin(_) => Vec::new(),
EventMsg::WebSearchEnd(ev) => self.handle_web_search_end(ev),
EventMsg::TokenCount(ev) => {
if let Some(info) = &ev.info {
self.last_total_token_usage = Some(info.total_token_usage.clone());
@@ -143,6 +147,17 @@ impl ExperimentalEventProcessorWithJsonOutput {
})]
}
fn handle_web_search_end(&self, ev: &WebSearchEndEvent) -> Vec<ThreadEvent> {
let item = ThreadItem {
id: self.get_next_item_id(),
details: ThreadItemDetails::WebSearch(WebSearchItem {
query: ev.query.clone(),
}),
};
vec![ThreadEvent::ItemCompleted(ItemCompletedEvent { item })]
}
fn handle_agent_message(&self, payload: &AgentMessageEvent) -> Vec<ThreadEvent> {
let item = ThreadItem {
id: self.get_next_item_id(),
@@ -405,7 +420,7 @@ impl ExperimentalEventProcessorWithJsonOutput {
}
}
impl EventProcessor for ExperimentalEventProcessorWithJsonOutput {
impl EventProcessor for EventProcessorWithJsonOutput {
fn print_config_summary(&mut self, _: &Config, _: &str, ev: &SessionConfiguredEvent) {
self.process_event(Event {
id: "".to_string(),

View File

@@ -1,9 +1,8 @@
mod cli;
mod event_processor;
mod event_processor_with_human_output;
pub mod event_processor_with_json_output;
pub mod event_processor_with_jsonl_output;
pub mod exec_events;
pub mod experimental_event_processor_with_json_output;
pub use cli::Cli;
use codex_core::AuthManager;
@@ -22,8 +21,7 @@ use codex_core::protocol::TaskCompleteEvent;
use codex_ollama::DEFAULT_OSS_MODEL;
use codex_protocol::config_types::SandboxMode;
use event_processor_with_human_output::EventProcessorWithHumanOutput;
use event_processor_with_json_output::EventProcessorWithJsonOutput;
use experimental_event_processor_with_json_output::ExperimentalEventProcessorWithJsonOutput;
use event_processor_with_jsonl_output::EventProcessorWithJsonOutput;
use opentelemetry_appender_tracing::layer::OpenTelemetryTracingBridge;
use serde_json::Value;
use std::io::IsTerminal;
@@ -59,7 +57,6 @@ pub async fn run_main(cli: Cli, codex_linux_sandbox_exe: Option<PathBuf>) -> any
color,
last_message_file,
json: json_mode,
experimental_json,
sandbox_mode: sandbox_mode_cli_arg,
prompt,
output_schema: output_schema_path,
@@ -212,17 +209,8 @@ pub async fn run_main(cli: Cli, codex_linux_sandbox_exe: Option<PathBuf>) -> any
let _ = tracing_subscriber::registry().with(fmt_layer).try_init();
}
let mut event_processor: Box<dyn EventProcessor> = match (json_mode, experimental_json) {
(_, true) => Box::new(ExperimentalEventProcessorWithJsonOutput::new(
last_message_file.clone(),
)),
(true, _) => {
eprintln!(
"The existing `--json` output format is being deprecated. Please try the new format using `--experimental-json`."
);
Box::new(EventProcessorWithJsonOutput::new(last_message_file.clone()))
}
let mut event_processor: Box<dyn EventProcessor> = match json_mode {
true => Box::new(EventProcessorWithJsonOutput::new(last_message_file.clone())),
_ => Box::new(EventProcessorWithHumanOutput::create_with_ansi(
stdout_with_ansi,
&config,

View File

@@ -12,6 +12,8 @@ use codex_core::protocol::McpToolCallEndEvent;
use codex_core::protocol::PatchApplyBeginEvent;
use codex_core::protocol::PatchApplyEndEvent;
use codex_core::protocol::SessionConfiguredEvent;
use codex_core::protocol::WebSearchEndEvent;
use codex_exec::event_processor_with_jsonl_output::EventProcessorWithJsonOutput;
use codex_exec::exec_events::AssistantMessageItem;
use codex_exec::exec_events::CommandExecutionItem;
use codex_exec::exec_events::CommandExecutionStatus;
@@ -34,7 +36,7 @@ use codex_exec::exec_events::TurnCompletedEvent;
use codex_exec::exec_events::TurnFailedEvent;
use codex_exec::exec_events::TurnStartedEvent;
use codex_exec::exec_events::Usage;
use codex_exec::experimental_event_processor_with_json_output::ExperimentalEventProcessorWithJsonOutput;
use codex_exec::exec_events::WebSearchItem;
use mcp_types::CallToolResult;
use pretty_assertions::assert_eq;
use std::path::PathBuf;
@@ -49,11 +51,10 @@ fn event(id: &str, msg: EventMsg) -> Event {
#[test]
fn session_configured_produces_thread_started_event() {
let mut ep = ExperimentalEventProcessorWithJsonOutput::new(None);
let session_id = codex_protocol::mcp_protocol::ConversationId::from_string(
"67e55044-10b1-426f-9247-bb680e5fe0c8",
)
.unwrap();
let mut ep = EventProcessorWithJsonOutput::new(None);
let session_id =
codex_protocol::ConversationId::from_string("67e55044-10b1-426f-9247-bb680e5fe0c8")
.unwrap();
let rollout_path = PathBuf::from("/tmp/rollout.json");
let ev = event(
"e1",
@@ -78,7 +79,7 @@ fn session_configured_produces_thread_started_event() {
#[test]
fn task_started_produces_turn_started_event() {
let mut ep = ExperimentalEventProcessorWithJsonOutput::new(None);
let mut ep = EventProcessorWithJsonOutput::new(None);
let out = ep.collect_thread_events(&event(
"t1",
EventMsg::TaskStarted(codex_core::protocol::TaskStartedEvent {
@@ -89,13 +90,36 @@ fn task_started_produces_turn_started_event() {
assert_eq!(out, vec![ThreadEvent::TurnStarted(TurnStartedEvent {})]);
}
#[test]
fn web_search_end_emits_item_completed() {
let mut ep = EventProcessorWithJsonOutput::new(None);
let query = "rust async await".to_string();
let out = ep.collect_thread_events(&event(
"w1",
EventMsg::WebSearchEnd(WebSearchEndEvent {
call_id: "call-123".to_string(),
query: query.clone(),
}),
));
assert_eq!(
out,
vec![ThreadEvent::ItemCompleted(ItemCompletedEvent {
item: ThreadItem {
id: "item_0".to_string(),
details: ThreadItemDetails::WebSearch(WebSearchItem { query }),
},
})]
);
}
#[test]
fn plan_update_emits_todo_list_started_updated_and_completed() {
use codex_core::plan_tool::PlanItemArg;
use codex_core::plan_tool::StepStatus;
use codex_core::plan_tool::UpdatePlanArgs;
let mut ep = ExperimentalEventProcessorWithJsonOutput::new(None);
let mut ep = EventProcessorWithJsonOutput::new(None);
// First plan update => item.started (todo_list)
let first = event(
@@ -212,7 +236,7 @@ fn plan_update_emits_todo_list_started_updated_and_completed() {
#[test]
fn mcp_tool_call_begin_and_end_emit_item_events() {
let mut ep = ExperimentalEventProcessorWithJsonOutput::new(None);
let mut ep = EventProcessorWithJsonOutput::new(None);
let invocation = McpInvocation {
server: "server_a".to_string(),
tool: "tool_x".to_string(),
@@ -272,7 +296,7 @@ fn mcp_tool_call_begin_and_end_emit_item_events() {
#[test]
fn mcp_tool_call_failure_sets_failed_status() {
let mut ep = ExperimentalEventProcessorWithJsonOutput::new(None);
let mut ep = EventProcessorWithJsonOutput::new(None);
let invocation = McpInvocation {
server: "server_b".to_string(),
tool: "tool_y".to_string(),
@@ -319,7 +343,7 @@ fn plan_update_after_complete_starts_new_todo_list_with_new_id() {
use codex_core::plan_tool::StepStatus;
use codex_core::plan_tool::UpdatePlanArgs;
let mut ep = ExperimentalEventProcessorWithJsonOutput::new(None);
let mut ep = EventProcessorWithJsonOutput::new(None);
// First turn: start + complete
let start = event(
@@ -364,7 +388,7 @@ fn plan_update_after_complete_starts_new_todo_list_with_new_id() {
#[test]
fn agent_reasoning_produces_item_completed_reasoning() {
let mut ep = ExperimentalEventProcessorWithJsonOutput::new(None);
let mut ep = EventProcessorWithJsonOutput::new(None);
let ev = event(
"e1",
EventMsg::AgentReasoning(AgentReasoningEvent {
@@ -387,7 +411,7 @@ fn agent_reasoning_produces_item_completed_reasoning() {
#[test]
fn agent_message_produces_item_completed_assistant_message() {
let mut ep = ExperimentalEventProcessorWithJsonOutput::new(None);
let mut ep = EventProcessorWithJsonOutput::new(None);
let ev = event(
"e1",
EventMsg::AgentMessage(AgentMessageEvent {
@@ -410,7 +434,7 @@ fn agent_message_produces_item_completed_assistant_message() {
#[test]
fn error_event_produces_error() {
let mut ep = ExperimentalEventProcessorWithJsonOutput::new(None);
let mut ep = EventProcessorWithJsonOutput::new(None);
let out = ep.collect_thread_events(&event(
"e1",
EventMsg::Error(codex_core::protocol::ErrorEvent {
@@ -427,7 +451,7 @@ fn error_event_produces_error() {
#[test]
fn stream_error_event_produces_error() {
let mut ep = ExperimentalEventProcessorWithJsonOutput::new(None);
let mut ep = EventProcessorWithJsonOutput::new(None);
let out = ep.collect_thread_events(&event(
"e1",
EventMsg::StreamError(codex_core::protocol::StreamErrorEvent {
@@ -444,7 +468,7 @@ fn stream_error_event_produces_error() {
#[test]
fn error_followed_by_task_complete_produces_turn_failed() {
let mut ep = ExperimentalEventProcessorWithJsonOutput::new(None);
let mut ep = EventProcessorWithJsonOutput::new(None);
let error_event = event(
"e1",
@@ -477,7 +501,7 @@ fn error_followed_by_task_complete_produces_turn_failed() {
#[test]
fn exec_command_end_success_produces_completed_command_item() {
let mut ep = ExperimentalEventProcessorWithJsonOutput::new(None);
let mut ep = EventProcessorWithJsonOutput::new(None);
// Begin -> no output
let begin = event(
@@ -537,7 +561,7 @@ fn exec_command_end_success_produces_completed_command_item() {
#[test]
fn exec_command_end_failure_produces_failed_command_item() {
let mut ep = ExperimentalEventProcessorWithJsonOutput::new(None);
let mut ep = EventProcessorWithJsonOutput::new(None);
// Begin -> no output
let begin = event(
@@ -596,7 +620,7 @@ fn exec_command_end_failure_produces_failed_command_item() {
#[test]
fn exec_command_end_without_begin_is_ignored() {
let mut ep = ExperimentalEventProcessorWithJsonOutput::new(None);
let mut ep = EventProcessorWithJsonOutput::new(None);
// End event arrives without a prior Begin; should produce no thread events.
let end_only = event(
@@ -617,7 +641,7 @@ fn exec_command_end_without_begin_is_ignored() {
#[test]
fn patch_apply_success_produces_item_completed_patchapply() {
let mut ep = ExperimentalEventProcessorWithJsonOutput::new(None);
let mut ep = EventProcessorWithJsonOutput::new(None);
// Prepare a patch with multiple kinds of changes
let mut changes = std::collections::HashMap::new();
@@ -699,7 +723,7 @@ fn patch_apply_success_produces_item_completed_patchapply() {
#[test]
fn patch_apply_failure_produces_item_completed_patchapply_failed() {
let mut ep = ExperimentalEventProcessorWithJsonOutput::new(None);
let mut ep = EventProcessorWithJsonOutput::new(None);
let mut changes = std::collections::HashMap::new();
changes.insert(
@@ -753,7 +777,7 @@ fn patch_apply_failure_produces_item_completed_patchapply_failed() {
#[test]
fn task_complete_produces_turn_completed_with_usage() {
let mut ep = ExperimentalEventProcessorWithJsonOutput::new(None);
let mut ep = EventProcessorWithJsonOutput::new(None);
// First, feed a TokenCount event with known totals.
let usage = codex_core::protocol::TokenUsage {

View File

@@ -10,7 +10,7 @@ workspace = true
base64 = { workspace = true }
chrono = { workspace = true, features = ["serde"] }
codex-core = { workspace = true }
codex-protocol = { workspace = true }
codex-app-server-protocol = { workspace = true }
rand = { workspace = true }
reqwest = { workspace = true, features = ["json", "blocking"] }
serde = { workspace = true, features = ["derive"] }

View File

@@ -148,20 +148,20 @@ fn print_colored_warning_device_code() {
/// Full device code login flow.
pub async fn run_device_code_login(opts: ServerOptions) -> std::io::Result<()> {
let client = reqwest::Client::new();
let auth_base_url = opts.issuer.trim_end_matches('/').to_owned();
let base_url = opts.issuer.trim_end_matches('/');
let api_base_url = format!("{}/api/accounts", opts.issuer.trim_end_matches('/'));
print_colored_warning_device_code();
println!("⏳ Generating a new 9-digit device code for authentication...\n");
let uc = request_user_code(&client, &auth_base_url, &opts.client_id).await?;
let uc = request_user_code(&client, &api_base_url, &opts.client_id).await?;
println!(
"To authenticate, visit: {}/deviceauth/authorize and enter code: {}",
opts.issuer.trim_end_matches('/'),
uc.user_code
api_base_url, uc.user_code
);
let code_resp = poll_for_token(
&client,
&auth_base_url,
&api_base_url,
&uc.device_auth_id,
&uc.user_code,
uc.interval,
@@ -173,10 +173,10 @@ pub async fn run_device_code_login(opts: ServerOptions) -> std::io::Result<()> {
code_challenge: code_resp.code_challenge,
};
println!("authorization code received");
let redirect_uri = format!("{}/deviceauth/callback", opts.issuer.trim_end_matches('/'));
let redirect_uri = format!("{base_url}/deviceauth/callback");
let tokens = crate::server::exchange_code_for_tokens(
&opts.issuer,
base_url,
&opts.client_id,
&redirect_uri,
&pkce,
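The login hunks above normalize the issuer once by trimming any trailing slash, then derive the `/api/accounts` endpoints and the device-auth redirect URI from that normalized base, which is what prevents the malformed URLs during token exchange. A tiny sketch of that normalization, with an illustrative issuer value:

```rust
// Illustrative only: a trailing slash on the issuer no longer produces a
// double slash in the derived URLs.
fn derived_urls(issuer: &str) -> (String, String) {
    let base_url = issuer.trim_end_matches('/');
    let api_base_url = format!("{base_url}/api/accounts");
    let redirect_uri = format!("{base_url}/deviceauth/callback");
    (api_base_url, redirect_uri)
}

fn main() {
    let (api, redirect) = derived_urls("https://auth.example.com/");
    assert_eq!(api, "https://auth.example.com/api/accounts");
    assert_eq!(redirect, "https://auth.example.com/deviceauth/callback");
}
```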

View File

@@ -9,6 +9,7 @@ pub use server::ShutdownHandle;
pub use server::run_login_server;
// Re-export commonly used auth types and helpers from codex-core for compatibility
pub use codex_app_server_protocol::AuthMode;
pub use codex_core::AuthManager;
pub use codex_core::CodexAuth;
pub use codex_core::auth::AuthDotJson;
@@ -20,4 +21,3 @@ pub use codex_core::auth::logout;
pub use codex_core::auth::try_read_auth_json;
pub use codex_core::auth::write_auth_json;
pub use codex_core::token_data::TokenData;
pub use codex_protocol::mcp_protocol::AuthMode;

View File

@@ -32,7 +32,7 @@ fn make_jwt(payload: serde_json::Value) -> String {
async fn mock_usercode_success(server: &MockServer) {
Mock::given(method("POST"))
.and(path("/deviceauth/usercode"))
.and(path("/api/accounts/deviceauth/usercode"))
.respond_with(ResponseTemplate::new(200).set_body_json(json!({
"device_auth_id": "device-auth-123",
"user_code": "CODE-12345",
@@ -45,7 +45,7 @@ async fn mock_usercode_success(server: &MockServer) {
async fn mock_usercode_failure(server: &MockServer, status: u16) {
Mock::given(method("POST"))
.and(path("/deviceauth/usercode"))
.and(path("/api/accounts/deviceauth/usercode"))
.respond_with(ResponseTemplate::new(status))
.mount(server)
.await;
@@ -58,7 +58,7 @@ async fn mock_poll_token_two_step(
) {
let c = counter.clone();
Mock::given(method("POST"))
.and(path("/deviceauth/token"))
.and(path("/api/accounts/deviceauth/token"))
.respond_with(move |_: &Request| {
let attempt = c.fetch_add(1, Ordering::SeqCst);
if attempt == 0 {
@@ -214,7 +214,7 @@ async fn device_code_login_integration_handles_error_payload() {
// // /deviceauth/token → returns error payload with status 401
mock_poll_token_single(
&mock_server,
"/deviceauth/token",
"/api/accounts/deviceauth/token",
ResponseTemplate::new(401).set_body_json(json!({
"error": "authorization_declined",
"error_description": "Denied"

View File

@@ -22,7 +22,7 @@ use codex_core::protocol::InputItem;
use codex_core::protocol::Op;
use codex_core::protocol::Submission;
use codex_core::protocol::TaskCompleteEvent;
use codex_protocol::mcp_protocol::ConversationId;
use codex_protocol::ConversationId;
use mcp_types::CallToolResult;
use mcp_types::ContentBlock;
use mcp_types::RequestId;

View File

@@ -7,7 +7,7 @@ use crate::codex_tool_config::create_tool_for_codex_tool_call_param;
use crate::codex_tool_config::create_tool_for_codex_tool_call_reply_param;
use crate::error_code::INVALID_REQUEST_ERROR_CODE;
use crate::outgoing_message::OutgoingMessageSender;
use codex_protocol::mcp_protocol::ConversationId;
use codex_protocol::ConversationId;
use codex_core::AuthManager;
use codex_core::ConversationManager;

View File

@@ -234,8 +234,8 @@ mod tests {
use anyhow::Result;
use codex_core::protocol::EventMsg;
use codex_core::protocol::SessionConfiguredEvent;
use codex_protocol::ConversationId;
use codex_protocol::config_types::ReasoningEffort;
use codex_protocol::mcp_protocol::ConversationId;
use pretty_assertions::assert_eq;
use serde_json::json;
use tempfile::NamedTempFile;

View File

@@ -4,9 +4,9 @@ name = "codex-otel"
version = { workspace = true }
[lib]
doctest = false
name = "codex_otel"
path = "src/lib.rs"
doctest = false
[lints]
workspace = true
@@ -15,25 +15,30 @@ workspace = true
# Compile-time gate for OTLP support; disabled by default.
# Downstream crates can enable via `features = ["otel"]`.
default = []
otel = [
"opentelemetry",
"opentelemetry_sdk",
"opentelemetry-otlp",
"tonic",
]
otel = ["opentelemetry", "opentelemetry_sdk", "opentelemetry-otlp", "tonic"]
[dependencies]
codex-protocol = { path = "../protocol" }
chrono = { workspace = true }
tracing = { workspace = true }
opentelemetry = { workspace = true, features = ["logs"], optional = true }
opentelemetry_sdk = { workspace = true, features = ["logs", "rt-tokio"], optional = true }
opentelemetry-otlp = { workspace = true, features = ["grpc-tonic", "http-proto", "http-json", "reqwest", "reqwest-rustls"], optional = true }
opentelemetry-semantic-conventions = { workspace = true }
tonic = { workspace = true, optional = true }
serde = { workspace = true, features = ["derive"] }
strum_macros = { workspace = true }
reqwest = { workspace = true }
codex-app-server-protocol = { workspace = true }
codex-protocol = { workspace = true }
eventsource-stream = { workspace = true }
opentelemetry = { workspace = true, features = ["logs"], optional = true }
opentelemetry-otlp = { workspace = true, features = [
"grpc-tonic",
"http-proto",
"http-json",
"reqwest",
"reqwest-rustls",
], optional = true }
opentelemetry-semantic-conventions = { workspace = true }
opentelemetry_sdk = { workspace = true, features = [
"logs",
"rt-tokio",
], optional = true }
reqwest = { workspace = true }
serde = { workspace = true, features = ["derive"] }
serde_json = { workspace = true }
strum_macros = { workspace = true }
tokio = { workspace = true }
serde_json = { workspace = true }
tonic = { workspace = true, optional = true }
tracing = { workspace = true }

View File

@@ -1,9 +1,9 @@
use chrono::SecondsFormat;
use chrono::Utc;
use codex_app_server_protocol::AuthMode;
use codex_protocol::ConversationId;
use codex_protocol::config_types::ReasoningEffort;
use codex_protocol::config_types::ReasoningSummary;
use codex_protocol::mcp_protocol::AuthMode;
use codex_protocol::mcp_protocol::ConversationId;
use codex_protocol::models::ResponseItem;
use codex_protocol::protocol::AskForApproval;
use codex_protocol::protocol::InputItem;

View File

@@ -16,7 +16,6 @@ path = "src/main.rs"
[dependencies]
anyhow = { workspace = true }
mcp-types = { workspace = true }
codex-protocol = { workspace = true }
ts-rs = { workspace = true }
clap = { workspace = true, features = ["derive"] }
codex-app-server-protocol = { workspace = true }
ts-rs = { workspace = true }

View File

@@ -1,6 +1,12 @@
use anyhow::Context;
use anyhow::Result;
use anyhow::anyhow;
use codex_app_server_protocol::ClientNotification;
use codex_app_server_protocol::ClientRequest;
use codex_app_server_protocol::ServerNotification;
use codex_app_server_protocol::ServerRequest;
use codex_app_server_protocol::export_client_responses;
use codex_app_server_protocol::export_server_responses;
use std::ffi::OsStr;
use std::fs;
use std::io::Read;
@@ -15,44 +21,17 @@ const HEADER: &str = "// GENERATED CODE! DO NOT MODIFY BY HAND!\n\n";
pub fn generate_ts(out_dir: &Path, prettier: Option<&Path>) -> Result<()> {
ensure_dir(out_dir)?;
// Generate TS bindings
mcp_types::InitializeResult::export_all_to(out_dir)?;
codex_protocol::mcp_protocol::ConversationId::export_all_to(out_dir)?;
codex_protocol::mcp_protocol::InputItem::export_all_to(out_dir)?;
codex_protocol::mcp_protocol::ClientRequest::export_all_to(out_dir)?;
codex_protocol::mcp_protocol::ServerRequest::export_all_to(out_dir)?;
codex_protocol::mcp_protocol::InitializeResponse::export_all_to(out_dir)?;
codex_protocol::mcp_protocol::NewConversationResponse::export_all_to(out_dir)?;
codex_protocol::mcp_protocol::ListConversationsResponse::export_all_to(out_dir)?;
codex_protocol::mcp_protocol::ResumeConversationResponse::export_all_to(out_dir)?;
codex_protocol::mcp_protocol::ArchiveConversationResponse::export_all_to(out_dir)?;
codex_protocol::mcp_protocol::AddConversationSubscriptionResponse::export_all_to(out_dir)?;
codex_protocol::mcp_protocol::RemoveConversationSubscriptionResponse::export_all_to(out_dir)?;
codex_protocol::mcp_protocol::SendUserMessageResponse::export_all_to(out_dir)?;
codex_protocol::mcp_protocol::SendUserTurnResponse::export_all_to(out_dir)?;
codex_protocol::mcp_protocol::InterruptConversationResponse::export_all_to(out_dir)?;
codex_protocol::mcp_protocol::GitDiffToRemoteResponse::export_all_to(out_dir)?;
codex_protocol::mcp_protocol::LoginApiKeyParams::export_all_to(out_dir)?;
codex_protocol::mcp_protocol::LoginApiKeyResponse::export_all_to(out_dir)?;
codex_protocol::mcp_protocol::LoginChatGptResponse::export_all_to(out_dir)?;
codex_protocol::mcp_protocol::CancelLoginChatGptResponse::export_all_to(out_dir)?;
codex_protocol::mcp_protocol::LogoutChatGptResponse::export_all_to(out_dir)?;
codex_protocol::mcp_protocol::GetAuthStatusResponse::export_all_to(out_dir)?;
codex_protocol::mcp_protocol::ApplyPatchApprovalResponse::export_all_to(out_dir)?;
codex_protocol::mcp_protocol::ExecCommandApprovalResponse::export_all_to(out_dir)?;
codex_protocol::mcp_protocol::FuzzyFileSearchParams::export_all_to(out_dir)?;
codex_protocol::mcp_protocol::FuzzyFileSearchResult::export_all_to(out_dir)?;
codex_protocol::mcp_protocol::FuzzyFileSearchResponse::export_all_to(out_dir)?;
codex_protocol::mcp_protocol::GetUserSavedConfigResponse::export_all_to(out_dir)?;
codex_protocol::mcp_protocol::SetDefaultModelResponse::export_all_to(out_dir)?;
codex_protocol::mcp_protocol::GetUserAgentResponse::export_all_to(out_dir)?;
codex_protocol::mcp_protocol::UserInfoResponse::export_all_to(out_dir)?;
// Generate the TS bindings client -> server messages.
ClientRequest::export_all_to(out_dir)?;
export_client_responses(out_dir)?;
ClientNotification::export_all_to(out_dir)?;
// All notification types reachable from this enum will be generated by
// induction, so they do not need to be listed individually.
codex_protocol::mcp_protocol::ServerNotification::export_all_to(out_dir)?;
codex_protocol::mcp_protocol::ClientNotification::export_all_to(out_dir)?;
// Generate the TS bindings server -> client messages.
ServerRequest::export_all_to(out_dir)?;
export_server_responses(out_dir)?;
ServerNotification::export_all_to(out_dir)?;
// Generate index.ts that re-exports all types.
generate_index_ts(out_dir)?;
// Prepend header to each generated .ts file

View File

@@ -32,7 +32,6 @@ uuid = { workspace = true, features = ["serde", "v7"] }
[dev-dependencies]
anyhow = { workspace = true }
pretty_assertions = { workspace = true }
tempfile = { workspace = true }
[package.metadata.cargo-shear]

View File

@@ -0,0 +1,68 @@
use std::fmt::Display;
use serde::Deserialize;
use serde::Serialize;
use ts_rs::TS;
use uuid::Uuid;
#[derive(Debug, Clone, Copy, PartialEq, Eq, TS, Hash)]
#[ts(type = "string")]
pub struct ConversationId {
uuid: Uuid,
}
impl ConversationId {
pub fn new() -> Self {
Self {
uuid: Uuid::now_v7(),
}
}
pub fn from_string(s: &str) -> Result<Self, uuid::Error> {
Ok(Self {
uuid: Uuid::parse_str(s)?,
})
}
}
impl Default for ConversationId {
fn default() -> Self {
Self::new()
}
}
impl Display for ConversationId {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "{}", self.uuid)
}
}
impl Serialize for ConversationId {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
S: serde::Serializer,
{
serializer.collect_str(&self.uuid)
}
}
impl<'de> Deserialize<'de> for ConversationId {
fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
where
D: serde::Deserializer<'de>,
{
let value = String::deserialize(deserializer)?;
let uuid = Uuid::parse_str(&value).map_err(serde::de::Error::custom)?;
Ok(Self { uuid })
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_conversation_id_default_is_not_zeroes() {
let id = ConversationId::default();
assert_ne!(id.uuid, Uuid::nil());
}
}
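The new `ConversationId` above is a thin wrapper around a UUID (v7 for fresh ids) that serializes to and from a bare string. A short usage sketch, assuming the type above is in scope along with the `serde_json` and `uuid` crates it already depends on:

```rust
// Usage sketch, assuming the ConversationId type above is in scope.
fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Fresh ids are UUIDv7, so they sort roughly by creation time.
    let id = ConversationId::new();

    // Round-trips as a bare JSON string via the custom Serialize/Deserialize.
    let json = serde_json::to_string(&id)?;
    let parsed: ConversationId = serde_json::from_str(&json)?;
    assert_eq!(id, parsed);

    // Parsing an existing UUID string.
    let fixed = ConversationId::from_string("67e55044-10b1-426f-9247-bb680e5fe0c8")?;
    println!("{fixed}");
    Ok(())
}
```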

View File

@@ -1,6 +1,7 @@
mod conversation_id;
pub use conversation_id::ConversationId;
pub mod config_types;
pub mod custom_prompts;
pub mod mcp_protocol;
pub mod message_history;
pub mod models;
pub mod num_format;

View File

@@ -10,10 +10,10 @@ use std::path::PathBuf;
use std::str::FromStr;
use std::time::Duration;
use crate::ConversationId;
use crate::config_types::ReasoningEffort as ReasoningEffortConfig;
use crate::config_types::ReasoningSummary as ReasoningSummaryConfig;
use crate::custom_prompts::CustomPrompt;
use crate::mcp_protocol::ConversationId;
use crate::message_history::HistoryEntry;
use crate::models::ContentItem;
use crate::models::ResponseItem;

View File

@@ -39,6 +39,7 @@ codex-git-tooling = { workspace = true }
codex-login = { workspace = true }
codex-ollama = { workspace = true }
codex-protocol = { workspace = true }
codex-app-server-protocol = { workspace = true }
color-eyre = { workspace = true }
crossterm = { workspace = true, features = ["bracketed-paste", "event-stream"] }
diffy = { workspace = true }

View File

@@ -16,7 +16,7 @@ use codex_core::config::persist_model_selection;
use codex_core::model_family::find_family_for_model;
use codex_core::protocol::TokenUsage;
use codex_core::protocol_config_types::ReasoningEffort as ReasoningEffortConfig;
use codex_protocol::mcp_protocol::ConversationId;
use codex_protocol::ConversationId;
use color_eyre::eyre::Result;
use color_eyre::eyre::WrapErr;
use crossterm::event::KeyCode;
@@ -451,7 +451,7 @@ mod tests {
use codex_core::CodexAuth;
use codex_core::ConversationManager;
use codex_core::protocol::SessionConfiguredEvent;
use codex_protocol::mcp_protocol::ConversationId;
use codex_protocol::ConversationId;
use ratatui::prelude::Line;
use std::path::PathBuf;
use std::sync::Arc;

View File

@@ -9,7 +9,7 @@ use crate::pager_overlay::Overlay;
use crate::tui;
use crate::tui::TuiEvent;
use codex_core::protocol::ConversationPathResponseEvent;
use codex_protocol::mcp_protocol::ConversationId;
use codex_protocol::ConversationId;
use color_eyre::eyre::Result;
use crossterm::event::KeyCode;
use crossterm::event::KeyEvent;

View File

@@ -108,6 +108,7 @@ pub(crate) struct ChatComposer {
custom_prompts: Vec<CustomPrompt>,
footer_mode: FooterMode,
footer_hint_override: Option<Vec<(String, String)>>,
context_window_percent: Option<u8>,
}
/// Popup state at most one can be visible at any time.
@@ -150,6 +151,7 @@ impl ChatComposer {
custom_prompts: Vec::new(),
footer_mode: FooterMode::ShortcutPrompt,
footer_hint_override: None,
context_window_percent: None,
};
// Apply configuration via the setter to keep side-effects centralized.
this.set_disable_paste_burst(disable_paste_burst);
@@ -1317,6 +1319,7 @@ impl ChatComposer {
esc_backtrack_hint: self.esc_backtrack_hint,
use_shift_enter_hint: self.use_shift_enter_hint,
is_task_running: self.is_task_running,
context_window_percent: self.context_window_percent,
}
}
@@ -1447,6 +1450,12 @@ impl ChatComposer {
self.is_task_running = running;
}
pub(crate) fn set_context_window_percent(&mut self, percent: Option<u8>) {
if self.context_window_percent != percent {
self.context_window_percent = percent;
}
}
pub(crate) fn set_esc_backtrack_hint(&mut self, show: bool) {
self.esc_backtrack_hint = show;
if show {

View File

@@ -5,6 +5,7 @@ use ratatui::buffer::Buffer;
use ratatui::layout::Rect;
use ratatui::style::Stylize;
use ratatui::text::Line;
use ratatui::text::Span;
use ratatui::widgets::WidgetRef;
use std::iter;
@@ -14,6 +15,7 @@ pub(crate) struct FooterProps {
pub(crate) esc_backtrack_hint: bool,
pub(crate) use_shift_enter_hint: bool,
pub(crate) is_task_running: bool,
pub(crate) context_window_percent: Option<u8>,
}
#[derive(Clone, Copy, Debug, Eq, PartialEq)]
@@ -75,7 +77,13 @@ fn footer_lines(props: FooterProps) -> Vec<Line<'static>> {
FooterMode::CtrlCReminder => vec![ctrl_c_reminder_line(CtrlCReminderState {
is_task_running: props.is_task_running,
})],
FooterMode::ShortcutPrompt => vec![dim_line(indent_text("? for shortcuts"))],
FooterMode::ShortcutPrompt => {
if props.is_task_running {
vec![context_window_line(props.context_window_percent)]
} else {
vec![dim_line(indent_text("? for shortcuts"))]
}
}
FooterMode::ShortcutOverlay => shortcut_overlay_lines(ShortcutsState {
use_shift_enter_hint: props.use_shift_enter_hint,
esc_backtrack_hint: props.esc_backtrack_hint,
@@ -211,6 +219,21 @@ fn dim_line(text: String) -> Line<'static> {
Line::from(text).dim()
}
fn context_window_line(percent: Option<u8>) -> Line<'static> {
let mut spans: Vec<Span<'static>> = Vec::new();
spans.push(indent_text("").into());
match percent {
Some(percent) => {
spans.push(format!("{percent}%").bold());
spans.push(" context left".dim());
}
None => {
spans.push("? for shortcuts".dim());
}
}
Line::from(spans)
}
#[derive(Clone, Copy, Debug, Eq, PartialEq)]
enum ShortcutId {
Commands,
@@ -398,6 +421,7 @@ mod tests {
esc_backtrack_hint: false,
use_shift_enter_hint: false,
is_task_running: false,
context_window_percent: None,
},
);
@@ -408,6 +432,7 @@ mod tests {
esc_backtrack_hint: true,
use_shift_enter_hint: true,
is_task_running: false,
context_window_percent: None,
},
);
@@ -418,6 +443,7 @@ mod tests {
esc_backtrack_hint: false,
use_shift_enter_hint: false,
is_task_running: false,
context_window_percent: None,
},
);
@@ -428,6 +454,7 @@ mod tests {
esc_backtrack_hint: false,
use_shift_enter_hint: false,
is_task_running: true,
context_window_percent: None,
},
);
@@ -438,6 +465,7 @@ mod tests {
esc_backtrack_hint: false,
use_shift_enter_hint: false,
is_task_running: false,
context_window_percent: None,
},
);
@@ -448,6 +476,18 @@ mod tests {
esc_backtrack_hint: true,
use_shift_enter_hint: false,
is_task_running: false,
context_window_percent: None,
},
);
snapshot_footer(
"footer_shortcuts_context_running",
FooterProps {
mode: FooterMode::ShortcutPrompt,
esc_backtrack_hint: false,
use_shift_enter_hint: false,
is_task_running: true,
context_window_percent: Some(72),
},
);
}

View File

@@ -68,6 +68,7 @@ pub(crate) struct BottomPane {
status: Option<StatusIndicatorWidget>,
/// Queued user messages to show under the status indicator.
queued_user_messages: Vec<String>,
context_window_percent: Option<u8>,
}
pub(crate) struct BottomPaneParams {
@@ -100,6 +101,7 @@ impl BottomPane {
status: None,
queued_user_messages: Vec::new(),
esc_backtrack_hint: false,
context_window_percent: None,
}
}
@@ -341,6 +343,16 @@ impl BottomPane {
}
}
pub(crate) fn set_context_window_percent(&mut self, percent: Option<u8>) {
if self.context_window_percent == percent {
return;
}
self.context_window_percent = percent;
self.composer.set_context_window_percent(percent);
self.request_redraw();
}
/// Show a generic list selection view with the provided items.
pub(crate) fn show_selection_view(&mut self, params: list_selection_view::SelectionViewParams) {
let view = list_selection_view::ListSelectionView::new(params, self.app_event_tx.clone());

View File

@@ -0,0 +1,5 @@
---
source: tui/src/bottom_pane/footer.rs
expression: terminal.backend()
---
" 72% context left "

View File

@@ -41,7 +41,7 @@ use codex_core::protocol::TurnDiffEvent;
use codex_core::protocol::UserMessageEvent;
use codex_core::protocol::WebSearchBeginEvent;
use codex_core::protocol::WebSearchEndEvent;
use codex_protocol::mcp_protocol::ConversationId;
use codex_protocol::ConversationId;
use codex_protocol::parse_command::ParsedCommand;
use crossterm::event::KeyCode;
use crossterm::event::KeyEvent;
@@ -256,6 +256,8 @@ pub(crate) struct ChatWidget {
ghost_snapshots_disabled: bool,
// Whether to add a final message separator after the last message
needs_final_message_separator: bool,
last_rendered_width: std::cell::Cell<Option<usize>>,
}
struct UserMessage {
@@ -394,8 +396,16 @@ impl ChatWidget {
}
pub(crate) fn set_token_info(&mut self, info: Option<TokenUsageInfo>) {
if info.is_some() {
self.token_info = info;
if let Some(info) = info {
let context_window = info
.model_context_window
.or(self.config.model_context_window);
let percent = context_window.map(|window| {
info.last_token_usage
.percent_of_context_window_remaining(window)
});
self.bottom_pane.set_context_window_percent(percent);
self.token_info = Some(info);
}
}
@@ -658,7 +668,10 @@ impl ChatWidget {
self.add_to_history(history_cell::FinalMessageSeparator::new(elapsed_seconds));
self.needs_final_message_separator = false;
}
self.stream_controller = Some(StreamController::new(self.config.clone()));
self.stream_controller = Some(StreamController::new(
self.config.clone(),
self.last_rendered_width.get().map(|w| w.saturating_sub(2)),
));
}
if let Some(controller) = self.stream_controller.as_mut()
&& controller.push(&delta)
@@ -912,6 +925,7 @@ impl ChatWidget {
ghost_snapshots: Vec::new(),
ghost_snapshots_disabled: true,
needs_final_message_separator: false,
last_rendered_width: std::cell::Cell::new(None),
}
}
@@ -974,6 +988,7 @@ impl ChatWidget {
ghost_snapshots: Vec::new(),
ghost_snapshots_disabled: true,
needs_final_message_separator: false,
last_rendered_width: std::cell::Cell::new(None),
}
}
@@ -1447,7 +1462,7 @@ impl ChatWidget {
} else {
// Show explanation when there are no structured findings.
let mut rendered: Vec<ratatui::text::Line<'static>> = vec!["".into()];
append_markdown(&explanation, &mut rendered, &self.config);
append_markdown(&explanation, None, &mut rendered, &self.config);
let body_cell = AgentMessageCell::new(rendered, false);
self.app_event_tx
.send(AppEvent::InsertHistoryCell(Box::new(body_cell)));
@@ -1456,7 +1471,7 @@ impl ChatWidget {
let message_text =
codex_core::review_format::format_review_findings_block(&output.findings, None);
let mut message_lines: Vec<ratatui::text::Line<'static>> = Vec::new();
append_markdown(&message_text, &mut message_lines, &self.config);
append_markdown(&message_text, None, &mut message_lines, &self.config);
let body_cell = AgentMessageCell::new(message_lines, true);
self.app_event_tx
.send(AppEvent::InsertHistoryCell(Box::new(body_cell)));
@@ -1548,16 +1563,16 @@ impl ChatWidget {
}
pub(crate) fn add_status_output(&mut self) {
let default_usage;
let usage_ref = if let Some(ti) = &self.token_info {
&ti.total_token_usage
let default_usage = TokenUsage::default();
let (total_usage, context_usage) = if let Some(ti) = &self.token_info {
(&ti.total_token_usage, Some(&ti.last_token_usage))
} else {
default_usage = TokenUsage::default();
&default_usage
(&default_usage, Some(&default_usage))
};
self.add_to_history(crate::status::new_status_output(
&self.config,
usage_ref,
total_usage,
context_usage,
&self.conversation_id,
self.rate_limit_snapshot.as_ref(),
));
@@ -1998,6 +2013,7 @@ impl WidgetRef for &ChatWidget {
tool.render_ref(area, buf);
}
}
self.last_rendered_width.set(Some(area.width as usize));
}
}
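`set_token_info` above converts the latest token usage into the percentage the footer renders, via `percent_of_context_window_remaining` on the last usage; that method's body is not part of this diff. A hypothetical sketch of the kind of arithmetic such a helper might perform, assuming a simple tokens-used versus window-size model:

```rust
// Hypothetical arithmetic only; the real TokenUsage method in codex-core
// may weight the usage fields differently.
fn percent_of_context_window_remaining(tokens_used: u64, context_window: u64) -> u8 {
    if context_window == 0 {
        return 0;
    }
    let used = tokens_used.min(context_window);
    let remaining = context_window - used;
    ((remaining * 100) / context_window) as u8
}

fn main() {
    // With these illustrative numbers the footer above would read "72% context left".
    assert_eq!(percent_of_context_window_remaining(71_680, 256_000), 72);
}
```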

View File

@@ -3,3 +3,4 @@ source: tui/src/chatwidget/tests.rs
expression: exec_blob
---
• Ran sleep 1
└ (no output)

View File

@@ -1,6 +1,5 @@
---
source: tui/src/chatwidget/tests.rs
assertion_line: 1445
expression: terminal.backend()
---
" "

View File

@@ -34,7 +34,7 @@ use codex_core::protocol::ReviewRequest;
use codex_core::protocol::StreamErrorEvent;
use codex_core::protocol::TaskCompleteEvent;
use codex_core::protocol::TaskStartedEvent;
use codex_protocol::mcp_protocol::ConversationId;
use codex_protocol::ConversationId;
use crossterm::event::KeyCode;
use crossterm::event::KeyEvent;
use crossterm::event::KeyModifiers;
@@ -341,6 +341,7 @@ fn make_chatwidget_manual() -> (
ghost_snapshots: Vec::new(),
ghost_snapshots_disabled: false,
needs_final_message_separator: false,
last_rendered_width: std::cell::Cell::new(None),
};
(widget, rx, op_rx)
}

Some files were not shown because too many files have changed in this diff