Compare commits

..

57 Commits

Author SHA1 Message Date
starr-openai
e94ba219db Add cloud environment provider
Load remote environments from a cloud-backed provider in environments.toml and simplify executor registry registration to use the current cloud API without mutable environment upserts.

Co-authored-by: Codex <noreply@openai.com>
2026-05-07 16:22:53 -07:00
starr-openai
da52fe6f43 codex: await test environment manager setup (#20667)
Co-authored-by: Codex <noreply@openai.com>
2026-05-07 16:22:42 -07:00
starr-openai
f69cd0bf99 Fix environment startup defaults and sample build
Co-authored-by: Codex <noreply@openai.com>
2026-05-07 16:22:42 -07:00
starr-openai
f4f9839a79 Strengthen multi-environment startup regression
Exercise a non-local configured default so the test proves default-first ordering rather than relying on the implicit local environment already being first.

Co-authored-by: Codex <noreply@openai.com>
2026-05-07 16:22:42 -07:00
starr-openai
8436adb8e3 Attach configured environments on thread startup
Preserve provider environment order and derive startup selections from the configured default plus remaining environments. This lets multi-environment CODEX_HOME setups attach every configured environment by default while keeping the default first as primary.

Co-authored-by: Codex <noreply@openai.com>
2026-05-07 16:22:42 -07:00
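The default-first attach behavior this commit describes can be sketched as a small ordering function. All names and types below are illustrative assumptions, not the actual codex-rs API.

```rust
// Hypothetical sketch of default-first startup ordering: attach the
// configured default environment first, then the remaining environments
// in the provider's original order. `startup_order` is an illustrative
// name, not a function from the codex-rs codebase.
fn startup_order(provider_order: &[String], default_id: Option<&str>) -> Vec<String> {
    let mut ordered = Vec::with_capacity(provider_order.len());
    if let Some(default_id) = default_id {
        // Only honor a default that the provider actually returned.
        if provider_order.iter().any(|id| id == default_id) {
            ordered.push(default_id.to_string());
        }
    }
    for id in provider_order {
        // Every non-default environment keeps its provider order.
        if Some(id.as_str()) != default_id {
            ordered.push(id.clone());
        }
    }
    ordered
}
```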
starr-openai
72453d70e3 Fix prompt debug formatting
Apply the rustfmt indentation reported by the PR20667 Format / etc CI job after the stack rebase.

Co-authored-by: Codex <noreply@openai.com>
2026-05-07 16:22:41 -07:00
starr-openai
394b50eafb Use test environment manager in chat widget helper
Keep the direct ChatWidget test constructor aligned with the production initializer after wiring the active environment manager through the TUI.

Co-authored-by: Codex <noreply@openai.com>
2026-05-07 16:22:41 -07:00
starr-openai
9f2a9b32e7 Use active environment manager for TUI connectors
Pass the app-owned EnvironmentManager into ChatWidget so connector prefetch uses the same environment selection that the session was initialized with, instead of reconstructing it from config.

Co-authored-by: Codex <noreply@openai.com>
2026-05-07 16:22:41 -07:00
starr-openai
5f009d27c1 Load configured environments from CODEX_HOME
Thread codex_home into EnvironmentManager construction so app entrypoints load environments.toml when present and continue falling back to the legacy CODEX_EXEC_SERVER_URL provider otherwise.

Co-authored-by: Codex <noreply@openai.com>
2026-05-07 16:22:41 -07:00
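The fallback this commit describes — prefer environments.toml under codex_home when present, otherwise the legacy CODEX_EXEC_SERVER_URL provider — can be sketched as a source-selection step. `EnvironmentSource` and `select_source` are assumed names for illustration only.

```rust
use std::path::{Path, PathBuf};

// Illustrative sketch of the provider fallback: a config file under
// codex_home wins when it exists, otherwise the legacy URL is used.
#[derive(Debug, PartialEq)]
enum EnvironmentSource {
    ConfigFile(PathBuf),
    LegacyUrl(String),
    None,
}

fn select_source(codex_home: &Path, legacy_url: Option<&str>) -> EnvironmentSource {
    let config = codex_home.join("environments.toml");
    if config.is_file() {
        EnvironmentSource::ConfigFile(config)
    } else if let Some(url) = legacy_url {
        EnvironmentSource::LegacyUrl(url.to_string())
    } else {
        EnvironmentSource::None
    }
}
```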
starr-openai
90e4520bf7 Simplify environment provider defaults
Co-authored-by: Codex <noreply@openai.com>
2026-05-07 16:22:41 -07:00
starr-openai
b49ff33dc9 codex: remove duplicate environment manager constructors (#20666)
Co-authored-by: Codex <noreply@openai.com>
2026-05-07 16:22:06 -07:00
starr-openai
e9f0eaf8ed Expose CODEX_HOME environment manager constructor
Make the environments.toml provider reachable from the exec-server crate API so the provider PR passes clippy before entrypoint wiring lands in the next stack PR.

Co-authored-by: Codex <noreply@openai.com>
2026-05-07 16:22:06 -07:00
starr-openai
525ed52b18 Align TOML provider with snapshot trait
Co-authored-by: Codex <noreply@openai.com>
2026-05-07 16:22:06 -07:00
starr-openai
74df186f8e Narrow exec server URL accessor
Co-authored-by: Codex <noreply@openai.com>
2026-05-07 16:22:06 -07:00
starr-openai
bea492f54d Limit TOML provider test constructor to tests
Avoid keeping the test-only constructor in normal builds now that production construction uses the config-dir aware path.

Co-authored-by: Codex <noreply@openai.com>
2026-05-07 16:22:05 -07:00
starr-openai
84c8e21b31 Fix environments TOML lint coverage
Co-authored-by: Codex <noreply@openai.com>
2026-05-07 16:22:05 -07:00
starr-openai
ef74ece7e6 Add CODEX_HOME environments TOML provider
Add the environments.toml schema, parser, validation, and provider implementation for configured websocket and stdio-command environments. This keeps the provider load helper available but does not make product entrypoints use it yet.

Co-authored-by: Codex <noreply@openai.com>
2026-05-07 16:22:05 -07:00
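As a rough illustration of what a configured-environments file of this kind could look like — the field names and layout below are guesses for the two transport kinds the commit mentions, not the schema this commit actually defines:

```toml
# Hypothetical environments.toml; keys and layout are illustrative only.
default = "remote-ws"

[environments.remote-ws]
type = "websocket"
url = "wss://exec.example.invalid/ws"

[environments.local-cmd]
type = "stdio-command"
command = "exec-server"
args = ["--stdio"]
```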
starr-openai
1fa0fec4dd Represent provider defaults with snapshots
Keep EnvironmentManager construction async to preserve caller behavior while moving provider-owned default selection into a single snapshot object.

Co-authored-by: Codex <noreply@openai.com>
2026-05-07 16:22:05 -07:00
starr-openai
0a7006cebc Simplify environment provider defaults
Co-authored-by: Codex <noreply@openai.com>
2026-05-07 16:22:05 -07:00
starr-openai
0db1e4b4d9 Fix exec-server transport CI failures
Co-authored-by: Codex <noreply@openai.com>
2026-05-07 16:22:05 -07:00
starr-openai
dc926a56c7 Represent provider defaults with snapshots
Keep EnvironmentManager construction async to preserve caller behavior while moving provider-owned default selection into a single snapshot object.

Co-authored-by: Codex <noreply@openai.com>
2026-05-07 16:21:56 -07:00
starr-openai
a2ef8e05b5 Simplify environment provider defaults
Co-authored-by: Codex <noreply@openai.com>
2026-05-07 16:21:56 -07:00
starr-openai
5086768859 Inline provider manager construction
Remove the private from_provider_parts helper. EnvironmentManager::from_provider now performs the provider read, validation, and manager construction directly, and tests use a small provider implementation instead of bypassing that path.

Co-authored-by: Codex <noreply@openai.com>
2026-05-07 16:21:56 -07:00
starr-openai
b8f4ee4439 Split provider environments from default id
Remove the EnvironmentProviderSnapshot wrapper. Providers now expose environments and the selected default id directly, while EnvironmentManager validates that the default id exists in the returned environment map.

Co-authored-by: Codex <noreply@openai.com>
2026-05-07 16:21:56 -07:00
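The validation described above amounts to a membership check of the provider's default id against the returned environment map. The sketch below uses plain strings where the real code has environment types; the function name is illustrative.

```rust
use std::collections::BTreeMap;

// Illustrative sketch: accept the provider's environment map plus an
// optional default id, and reject a default id that is not a key of
// the map. `None` means no default is configured.
fn validate_default(
    environments: &BTreeMap<String, String>,
    default_id: Option<&str>,
) -> Result<(), String> {
    match default_id {
        Some(id) if !environments.contains_key(id) => {
            Err(format!("default environment `{id}` not found"))
        }
        _ => Ok(()),
    }
}
```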
starr-openai
7a8bed96eb Return provider environment snapshots
Make environment providers return the environment map and default id together. This keeps provider-owned startup state in one boundary and removes the separate default callback over a map.

Co-authored-by: Codex <noreply@openai.com>
2026-05-07 16:21:55 -07:00
starr-openai
93f68577ed Simplify provider default environment selection
Have providers return a concrete default environment id after constructing their environment map, using None to disable the default. This removes the DefaultEnvironmentSelection tri-state while preserving legacy derived defaults through the trait's default implementation.

Co-authored-by: Codex <noreply@openai.com>
2026-05-07 16:21:55 -07:00
starr-openai
a970e46442 Fix environment manager clippy lints
Co-authored-by: Codex <noreply@openai.com>
2026-05-07 16:21:55 -07:00
starr-openai
729d8109a3 Make environment providers own default selection
Let environment providers return an explicit default selection and let remote environments track the underlying transport instead of treating only websocket URLs as remote. This prepares the environment layer for stdio-backed remotes without introducing config-file loading.

Co-authored-by: Codex <noreply@openai.com>
2026-05-07 16:21:55 -07:00
starr-openai
1bfe59e42d codex: fix Windows stdio transport clippy (#20664)
Co-authored-by: Codex <noreply@openai.com>
2026-05-07 16:21:42 -07:00
starr-openai
26899a2d5b codex: address stdio transport review feedback (#20664)
Co-authored-by: Codex <noreply@openai.com>
2026-05-07 16:06:20 -07:00
starr-openai
9f125d25cb Use PowerShell for Windows stdio test helper
Avoid cmd.exe echo quoting semantics in the Windows stdio client test by reading stdin and writing the JSON-RPC initialize response from PowerShell.

Co-authored-by: Codex <noreply@openai.com>
2026-05-06 19:19:46 -07:00
starr-openai
256760d6b9 Fix Windows stdio test JSON quoting
Escape the JSON-RPC response quotes in the cmd.exe stdio test command so Windows emits valid JSON before the client initialize timeout.

Co-authored-by: Codex <noreply@openai.com>
2026-05-06 19:19:46 -07:00
starr-openai
e58b331d8f Apply rustfmt to stdio transport guard
Match the rustfmt shape reported by the PR20664 Format / etc CI job after boxing the retained stdio transport guard.

Co-authored-by: Codex <noreply@openai.com>
2026-05-06 19:19:46 -07:00
starr-openai
dd1c9ff41a Box retained stdio transport guard
Avoid the Windows clippy large-enum-variant failure while preserving the retained stdio child cleanup guard behavior.

Co-authored-by: Codex <noreply@openai.com>
2026-05-06 19:19:46 -07:00
starr-openai
62bd368d38 Fix stdio transport clippy issues
Keep the stack-introduced stdio transport variant explicit while avoiding dead-code and redundant-pattern lints reported by PR20664 CI.

Co-authored-by: Codex <noreply@openai.com>
2026-05-06 19:19:45 -07:00
starr-openai
28b23c5cd3 Narrow stdio client lifetime handling
Keep the retained transport ownership needed for stdio child cleanup, but drop the broader AtomicBool closed-state behavior and its targeted tests from this PR.

Co-authored-by: Codex <noreply@openai.com>
2026-05-06 19:19:45 -07:00
starr-openai
3ff901257a Flatten JSON-RPC connection state
Drop the separate JsonRpcConnectionRuntime wrapper so JsonRpcConnection directly owns the channels, disconnect watch, transport tasks, and transport guard. This keeps the lifetime model explicit without helper extraction methods.

Co-authored-by: Codex <noreply@openai.com>
2026-05-06 19:19:45 -07:00
starr-openai
c72f484068 Simplify exec-server connection ownership
Remove the runtime extraction helpers and make JsonRpcConnection ownership explicit at the destructuring sites. Let the stdio transport clean up through Drop so ExecServerClient no longer needs to call an explicit shutdown hook.

Co-authored-by: Codex <noreply@openai.com>
2026-05-06 19:19:45 -07:00
starr-openai
7557a7307a Restore exec-server processor ownership boundary
Keep the server-side connection processor on the original by-value parts API, and move the compatibility needed for that shape into JsonRpcConnection. The client still borrows the connection mutably so it can keep transport ownership with ExecServerClient.

Co-authored-by: Codex <noreply@openai.com>
2026-05-06 19:19:45 -07:00
starr-openai
08795f1b65 Simplify exec-server transport ownership
Remove the Option wrapper used only to force connection drop order and call transport shutdown explicitly instead. Also drop dead-code allowances that are no longer needed.

Co-authored-by: Codex <noreply@openai.com>
2026-05-06 19:19:45 -07:00
starr-openai
f47954caef Remove server disconnect race test
The stdio transport no longer adds a processor-side disconnect side channel, so drop the test that asserted that removed behavior. Client cleanup is covered at the RPC/client transport boundary instead.

Co-authored-by: Codex <noreply@openai.com>
2026-05-06 19:19:44 -07:00
starr-openai
c317a66c61 Simplify exec-server disconnect plumbing
Keep transport shutdown responsible for stdio child cleanup, and remove the separate disconnect watch channel from the JSON-RPC connection/runtime. The RPC client now keeps a single closed flag for rejecting calls after the ordered reader exits.

Co-authored-by: Codex <noreply@openai.com>
2026-05-06 19:19:44 -07:00
starr-openai
d4b347176a Fix exec-server transport CI failures
Co-authored-by: Codex <noreply@openai.com>
2026-05-06 19:19:44 -07:00
starr-openai
6a7112ad21 Rename exec-server transport input params
Co-authored-by: Codex <noreply@openai.com>
2026-05-06 19:19:44 -07:00
starr-openai
b4269e85ff Split JSON-RPC transport variants
Co-authored-by: Codex <noreply@openai.com>
2026-05-06 19:19:44 -07:00
starr-openai
29f8812a83 Model retained JSON-RPC transport generically
Co-authored-by: Codex <noreply@openai.com>
2026-05-06 19:19:43 -07:00
starr-openai
942a674042 Name retained exec-server connection field
Co-authored-by: Codex <noreply@openai.com>
2026-05-06 19:19:43 -07:00
starr-openai
6ed49d62d7 Order exec-server transport teardown before RPC teardown
Co-authored-by: Codex <noreply@openai.com>
2026-05-06 19:19:43 -07:00
starr-openai
045c740618 Clarify exec-server transport connect naming
Co-authored-by: Codex <noreply@openai.com>
2026-05-06 19:19:43 -07:00
starr-openai
21297834ed Simplify stdio exec-server transport ownership
Co-authored-by: Codex <noreply@openai.com>
2026-05-06 19:19:42 -07:00
starr-openai
c00a36e727 Address stdio exec-server review feedback
Spawn stdio exec-server commands directly from structured argv/env/cwd instead of wrapping a shell string, redact the connection label, and tie the stdio child guard to transport disconnect.

Co-authored-by: Codex <noreply@openai.com>
2026-05-06 19:19:42 -07:00
starr-openai
74e96987b8 Simplify exec-server transport internals
Keep environment transport connection policy on ExecServerClient instead of the transport enum, and replace the JSON-RPC connection tuple alias with named connection parts.

Co-authored-by: Codex <noreply@openai.com>
2026-05-06 19:19:42 -07:00
starr-openai
caea51d3b7 Clean up stdio client process groups
Use the existing process-group cleanup pattern for stdio command transports so wrapper shell children are terminated with the client lifetime. Add a regression test that drops the client after spawning a background shell child through the command-backed transport.

Co-authored-by: Codex <noreply@openai.com>
2026-05-06 19:19:42 -07:00
starr-openai
c956939cc6 Clarify exec-server transport lifetime ownership
Co-authored-by: Codex <noreply@openai.com>
2026-05-06 19:19:42 -07:00
starr-openai
0bb3f728e1 Remove duplicate stdio client test import
Co-authored-by: Codex <noreply@openai.com>
2026-05-06 19:19:42 -07:00
starr-openai
995a669971 Make exec-server RPC client Send-safe
Co-authored-by: Codex <noreply@openai.com>
2026-05-06 19:19:42 -07:00
starr-openai
9face2bcbf Add stdio exec-server client transport
Allow exec-server clients to connect through a shell command over stdio. The connection can now retain a drop resource so the spawned child is terminated when the JSON-RPC client is dropped.

Co-authored-by: Codex <noreply@openai.com>
2026-05-06 19:19:41 -07:00
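The retained drop resource mentioned above follows Rust's guard-on-Drop pattern: the connection owns an opaque guard, and dropping the connection runs the guard's cleanup. This sketch flips a flag instead of killing a real child process, and all names are illustrative rather than taken from the codebase.

```rust
use std::sync::Arc;
use std::sync::atomic::{AtomicBool, Ordering};

// Minimal sketch of the retained-guard pattern: in the real code the
// Drop impl would terminate the spawned stdio child; here it just
// records that cleanup ran so the pattern is observable.
struct ChildGuard {
    terminated: Arc<AtomicBool>,
}

impl Drop for ChildGuard {
    fn drop(&mut self) {
        // Real code would kill the child process / process group here.
        self.terminated.store(true, Ordering::SeqCst);
    }
}

struct Connection {
    // Boxed so the guard stays an opaque, cheaply-moved handle.
    _guard: Option<Box<ChildGuard>>,
}
```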
409 changed files with 10148 additions and 9078 deletions


@@ -50,7 +50,7 @@ runs:
- name: Restore bazel repository cache
id: cache_bazel_repository_restore
continue-on-error: true
- uses: actions/cache/restore@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
+ uses: actions/cache/restore@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5
with:
path: ${{ steps.setup_bazel.outputs.repository-cache-path }}
key: ${{ steps.cache_bazel_repository_key.outputs.repository-cache-key }}


@@ -30,7 +30,7 @@ runs:
using: composite
steps:
- name: Azure login for Trusted Signing (OIDC)
- uses: azure/login@a457da9ea143d694b1b9c7c869ebb04ebe844ef5 # v2.3.0
+ uses: azure/login@a457da9ea143d694b1b9c7c869ebb04ebe844ef5 # v2
with:
client-id: ${{ inputs.client-id }}
tenant-id: ${{ inputs.tenant-id }}
@@ -54,7 +54,7 @@ runs:
} >> "$GITHUB_OUTPUT"
- name: Sign Windows binaries with Azure Trusted Signing
- uses: azure/trusted-signing-action@1d365fec12862c4aa68fcac418143d73f0cea293 # v0.5.11
+ uses: azure/trusted-signing-action@1d365fec12862c4aa68fcac418143d73f0cea293 # v0
with:
endpoint: ${{ inputs.endpoint }}
trusted-signing-account-name: ${{ inputs.account-name }}


@@ -6,37 +6,25 @@ updates:
directory: .github/actions/codex
schedule:
interval: weekly
- cooldown:
- default-days: 7
- package-ecosystem: cargo
directories:
- codex-rs
- codex-rs/*
schedule:
interval: weekly
- cooldown:
- default-days: 7
- package-ecosystem: devcontainers
directory: /
schedule:
interval: weekly
- cooldown:
- default-days: 7
- package-ecosystem: docker
directory: codex-cli
schedule:
interval: weekly
- cooldown:
- default-days: 7
- package-ecosystem: github-actions
directory: /
schedule:
interval: weekly
- cooldown:
- default-days: 7
- package-ecosystem: rust-toolchain
directory: codex-rs
schedule:
interval: weekly
- cooldown:
- default-days: 7


@@ -56,7 +56,7 @@ jobs:
name: Bazel test on ${{ matrix.os }} for ${{ matrix.target }}
steps:
- - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
+ - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: Check rusty_v8 MODULE.bazel checksums
if: matrix.os == 'ubuntu-24.04' && matrix.target == 'x86_64-unknown-linux-gnu'
@@ -122,7 +122,7 @@ jobs:
- name: Upload Bazel execution logs
if: always() && !cancelled()
continue-on-error: true
- uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
+ uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
with:
name: bazel-execution-logs-test-${{ matrix.target }}
path: ${{ runner.temp }}/bazel-execution-logs
@@ -133,7 +133,7 @@ jobs:
- name: Save bazel repository cache
if: always() && !cancelled() && steps.prepare_bazel.outputs.repository-cache-hit != 'true'
continue-on-error: true
- uses: actions/cache/save@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
+ uses: actions/cache/save@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5
with:
path: ${{ steps.prepare_bazel.outputs.repository-cache-path }}
key: ${{ steps.prepare_bazel.outputs.repository-cache-key }}
@@ -148,7 +148,7 @@ jobs:
name: Bazel test on windows-latest for x86_64-pc-windows-gnullvm (native main)
steps:
- - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
+ - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: Prepare Bazel CI
id: prepare_bazel
@@ -195,7 +195,7 @@ jobs:
- name: Upload Bazel execution logs
if: always() && !cancelled()
continue-on-error: true
- uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
+ uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
with:
name: bazel-execution-logs-test-windows-native-x86_64-pc-windows-gnullvm
path: ${{ runner.temp }}/bazel-execution-logs
@@ -206,7 +206,7 @@ jobs:
- name: Save bazel repository cache
if: always() && !cancelled() && steps.prepare_bazel.outputs.repository-cache-hit != 'true'
continue-on-error: true
- uses: actions/cache/save@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
+ uses: actions/cache/save@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5
with:
path: ${{ steps.prepare_bazel.outputs.repository-cache-path }}
key: ${{ steps.prepare_bazel.outputs.repository-cache-key }}
@@ -231,7 +231,7 @@ jobs:
name: Bazel clippy on ${{ matrix.os }} for ${{ matrix.target }}
steps:
- - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
+ - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: Prepare Bazel CI
id: prepare_bazel
@@ -286,7 +286,7 @@ jobs:
- name: Upload Bazel execution logs
if: always() && !cancelled()
continue-on-error: true
- uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
+ uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
with:
name: bazel-execution-logs-clippy-${{ matrix.target }}
path: ${{ runner.temp }}/bazel-execution-logs
@@ -297,7 +297,7 @@ jobs:
- name: Save bazel repository cache
if: always() && !cancelled() && steps.prepare_bazel.outputs.repository-cache-hit != 'true'
continue-on-error: true
- uses: actions/cache/save@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
+ uses: actions/cache/save@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5
with:
path: ${{ steps.prepare_bazel.outputs.repository-cache-path }}
key: ${{ steps.prepare_bazel.outputs.repository-cache-key }}
@@ -318,7 +318,7 @@ jobs:
name: Verify release build on ${{ matrix.os }} for ${{ matrix.target }}
steps:
- - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
+ - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: Prepare Bazel CI
id: prepare_bazel
@@ -390,7 +390,7 @@ jobs:
- name: Upload Bazel execution logs
if: always() && !cancelled()
continue-on-error: true
- uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
+ uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
with:
name: bazel-execution-logs-verify-release-build-${{ matrix.target }}
path: ${{ runner.temp }}/bazel-execution-logs
@@ -401,7 +401,7 @@ jobs:
- name: Save bazel repository cache
if: always() && !cancelled() && steps.prepare_bazel.outputs.repository-cache-hit != 'true'
continue-on-error: true
- uses: actions/cache/save@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
+ uses: actions/cache/save@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5
with:
path: ${{ steps.prepare_bazel.outputs.repository-cache-path }}
key: ${{ steps.prepare_bazel.outputs.repository-cache-key }}


@@ -8,7 +8,7 @@ jobs:
name: Blob size policy
runs-on: ubuntu-24.04
steps:
- - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
+ - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
with:
fetch-depth: 0


@@ -14,7 +14,7 @@ jobs:
working-directory: ./codex-rs
steps:
- name: Checkout
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
+ uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: Install Rust toolchain
uses: dtolnay/rust-toolchain@a0b273b48ed29de4470960879e8381ff45632f26 # 1.93.0


@@ -12,7 +12,7 @@ jobs:
NODE_OPTIONS: --max-old-space-size=4096
steps:
- name: Checkout repository
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
+ uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: Verify codex-rs Cargo manifests inherit workspace settings
run: python3 .github/scripts/verify_cargo_workspace_manifests.py
@@ -29,7 +29,7 @@ jobs:
run_install: false
- name: Setup Node.js
- uses: actions/setup-node@53b83947a5a98c8d113130e565377fae1a50d02f # v6.3.0
+ uses: actions/setup-node@53b83947a5a98c8d113130e565377fae1a50d02f # v6
with:
node-version: 22
@@ -63,7 +63,7 @@ jobs:
echo "pack_output=$PACK_OUTPUT" >> "$GITHUB_OUTPUT"
- name: Upload staged npm package artifact
- uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
+ uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
with:
name: codex-npm-staging
path: ${{ steps.stage_npm_package.outputs.pack_output }}


@@ -17,7 +17,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Close inactive PRs from contributors
- uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0
+ uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
script: |


@@ -18,9 +18,9 @@ jobs:
steps:
- name: Checkout
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
+ uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: Annotate locations with typos
- uses: codespell-project/codespell-problem-matcher@b80729f885d32f78a716c2f107b4db1025001c42 # v1.1.0
+ uses: codespell-project/codespell-problem-matcher@b80729f885d32f78a716c2f107b4db1025001c42 # v1
- name: Codespell
uses: codespell-project/actions-codespell@8f01853be192eb0f849a5c7d721450e7a467c579 # v2.2
with:


@@ -19,7 +19,7 @@ jobs:
reason: ${{ steps.normalize-all.outputs.reason }}
has_matches: ${{ steps.normalize-all.outputs.has_matches }}
steps:
- - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
+ - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: Prepare Codex inputs
env:
@@ -155,7 +155,7 @@ jobs:
reason: ${{ steps.normalize-open.outputs.reason }}
has_matches: ${{ steps.normalize-open.outputs.has_matches }}
steps:
- - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
+ - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: Prepare Codex inputs
env:
@@ -342,7 +342,7 @@ jobs:
issues: write
steps:
- name: Comment on issue
- uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0
+ uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
env:
CODEX_OUTPUT: ${{ needs.select-final.outputs.codex_output }}
with:


@@ -17,7 +17,7 @@ jobs:
outputs:
codex_output: ${{ steps.codex.outputs.final-message }}
steps:
- - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
+ - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- id: codex
uses: openai/codex-action@5c3f4ccdb2b8790f73d6b21751ac00e602aa0c02 # v1.7


@@ -17,7 +17,7 @@ jobs:
run:
working-directory: codex-rs
steps:
- - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
+ - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- uses: dtolnay/rust-toolchain@a0b273b48ed29de4470960879e8381ff45632f26 # 1.93.0
with:
components: rustfmt
@@ -31,12 +31,12 @@ jobs:
run:
working-directory: codex-rs
steps:
- - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
+ - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- uses: dtolnay/rust-toolchain@a0b273b48ed29de4470960879e8381ff45632f26 # 1.93.0
- - uses: taiki-e/install-action@44c6d64aa62cd779e873306675c7a58e86d6d532 # v2.62.49
+ - uses: taiki-e/install-action@44c6d64aa62cd779e873306675c7a58e86d6d532 # v2
with:
tool: cargo-shear
- version: 1.11.2
+ version: 1.5.1
- name: cargo shear
run: cargo shear
@@ -47,14 +47,14 @@ jobs:
CARGO_DYLINT_VERSION: 5.0.0
DYLINT_LINK_VERSION: 5.0.0
steps:
- - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
+ - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- uses: dtolnay/rust-toolchain@a0b273b48ed29de4470960879e8381ff45632f26 # 1.93.0
with:
toolchain: nightly-2025-09-18
components: llvm-tools-preview, rustc-dev, rust-src
- name: Cache cargo-dylint tooling
id: cargo_dylint_cache
- uses: actions/cache@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
+ uses: actions/cache@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5
with:
path: |
~/.cargo/bin/cargo-dylint
@@ -97,7 +97,7 @@ jobs:
group: codex-runners
labels: codex-windows-x64
steps:
- - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
+ - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- uses: ./.github/actions/setup-bazel-ci
with:
target: ${{ runner.os }}
@@ -233,7 +233,7 @@ jobs:
labels: codex-windows-arm64
steps:
- - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
+ - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: Install Linux build dependencies
if: ${{ runner.os == 'Linux' }}
shell: bash
@@ -276,7 +276,7 @@ jobs:
# avoid caching the large target dir on the gnu-dev job.
- name: Restore cargo home cache
id: cache_cargo_home_restore
- uses: actions/cache/restore@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
+ uses: actions/cache/restore@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5
with:
path: |
~/.cargo/bin/
@@ -294,7 +294,7 @@ jobs:
# Install and restore sccache cache
- name: Install sccache
if: ${{ env.USE_SCCACHE == 'true' }}
- uses: taiki-e/install-action@44c6d64aa62cd779e873306675c7a58e86d6d532 # v2.62.49
+ uses: taiki-e/install-action@44c6d64aa62cd779e873306675c7a58e86d6d532 # v2
with:
tool: sccache
version: 0.7.5
@@ -321,7 +321,7 @@ jobs:
- name: Restore sccache cache (fallback)
if: ${{ env.USE_SCCACHE == 'true' && env.SCCACHE_GHA_ENABLED != 'true' }}
id: cache_sccache_restore
- uses: actions/cache/restore@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
+ uses: actions/cache/restore@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5
with:
path: ${{ github.workspace }}/.sccache/
key: sccache-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-${{ steps.lockhash.outputs.hash }}-${{ github.run_id }}
@@ -348,7 +348,7 @@ jobs:
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl'}}
name: Restore APT cache (musl)
id: cache_apt_restore
- uses: actions/cache/restore@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
+ uses: actions/cache/restore@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5
with:
path: |
/var/cache/apt
@@ -356,7 +356,7 @@ jobs:
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl'}}
name: Install Zig
- uses: mlugg/setup-zig@d1434d08867e3ee9daa34448df10607b98908d29 # v2.2.1
+ uses: mlugg/setup-zig@d1434d08867e3ee9daa34448df10607b98908d29 # v2
with:
version: 0.14.0
@@ -430,7 +430,7 @@ jobs:
- name: Install cargo-chef
if: ${{ matrix.profile == 'release' }}
- uses: taiki-e/install-action@44c6d64aa62cd779e873306675c7a58e86d6d532 # v2.62.49
+ uses: taiki-e/install-action@44c6d64aa62cd779e873306675c7a58e86d6d532 # v2
with:
tool: cargo-chef
version: 0.1.71
@@ -445,11 +445,11 @@ jobs:
cargo chef cook --recipe-path "$RECIPE" --target ${{ matrix.target }} --release
- name: cargo clippy
- run: cargo clippy --target ${{ matrix.target }} --tests --profile ${{ matrix.profile }} --timings --locked -- -D warnings
+ run: cargo clippy --target ${{ matrix.target }} --tests --profile ${{ matrix.profile }} --timings -- -D warnings
- name: Upload Cargo timings (clippy)
if: always()
- uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
+ uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
with:
name: cargo-timings-rust-ci-clippy-${{ matrix.target }}-${{ matrix.profile }}
path: codex-rs/target/**/cargo-timings/cargo-timing.html
@@ -460,7 +460,7 @@ jobs:
- name: Save cargo home cache
if: always() && !cancelled() && steps.cache_cargo_home_restore.outputs.cache-hit != 'true'
continue-on-error: true
- uses: actions/cache/save@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
+ uses: actions/cache/save@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5
with:
path: |
~/.cargo/bin/
@@ -476,7 +476,7 @@ jobs:
- name: Save sccache cache (fallback)
if: always() && !cancelled() && env.USE_SCCACHE == 'true' && env.SCCACHE_GHA_ENABLED != 'true'
continue-on-error: true
- uses: actions/cache/save@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
+ uses: actions/cache/save@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5
with:
path: ${{ github.workspace }}/.sccache/
key: sccache-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-${{ steps.lockhash.outputs.hash }}-${{ github.run_id }}
@@ -501,7 +501,7 @@ jobs:
- name: Save APT cache (musl)
if: always() && !cancelled() && (matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl') && steps.cache_apt_restore.outputs.cache-hit != 'true'
continue-on-error: true
- uses: actions/cache/save@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
+ uses: actions/cache/save@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5
with:
path: |
/var/cache/apt
@@ -559,7 +559,7 @@ jobs:
labels: codex-windows-arm64
steps:
- - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
+ - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: Install Linux build dependencies
if: ${{ runner.os == 'Linux' }}
shell: bash
@@ -590,7 +590,7 @@ jobs:
- name: Restore cargo home cache
id: cache_cargo_home_restore
- uses: actions/cache/restore@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
+ uses: actions/cache/restore@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5
with:
path: |
~/.cargo/bin/
@@ -603,7 +603,7 @@ jobs:
- name: Install sccache
if: ${{ env.USE_SCCACHE == 'true' }}
- uses: taiki-e/install-action@44c6d64aa62cd779e873306675c7a58e86d6d532 # v2.62.49
+ uses: taiki-e/install-action@44c6d64aa62cd779e873306675c7a58e86d6d532 # v2
with:
tool: sccache
version: 0.7.5
@@ -630,7 +630,7 @@ jobs:
- name: Restore sccache cache (fallback)
if: ${{ env.USE_SCCACHE == 'true' && env.SCCACHE_GHA_ENABLED != 'true' }}
id: cache_sccache_restore
- uses: actions/cache/restore@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
+ uses: actions/cache/restore@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5
with:
path: ${{ github.workspace }}/.sccache/
key: sccache-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-${{ steps.lockhash.outputs.hash }}-${{ github.run_id }}
@@ -638,7 +638,7 @@ jobs:
sccache-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-${{ steps.lockhash.outputs.hash }}-
sccache-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-
-- uses: taiki-e/install-action@44c6d64aa62cd779e873306675c7a58e86d6d532 # v2.62.49
+- uses: taiki-e/install-action@44c6d64aa62cd779e873306675c7a58e86d6d532 # v2
with:
tool: nextest
version: 0.9.103
@@ -674,7 +674,7 @@ jobs:
- name: Upload Cargo timings (nextest)
if: always()
-uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
+uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
with:
name: cargo-timings-rust-ci-nextest-${{ matrix.target }}-${{ matrix.profile }}
path: codex-rs/target/**/cargo-timings/cargo-timing.html
@@ -683,7 +683,7 @@ jobs:
- name: Save cargo home cache
if: always() && !cancelled() && steps.cache_cargo_home_restore.outputs.cache-hit != 'true'
continue-on-error: true
-uses: actions/cache/save@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
+uses: actions/cache/save@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5
with:
path: |
~/.cargo/bin/
@@ -695,7 +695,7 @@ jobs:
- name: Save sccache cache (fallback)
if: always() && !cancelled() && env.USE_SCCACHE == 'true' && env.SCCACHE_GHA_ENABLED != 'true'
continue-on-error: true
-uses: actions/cache/save@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
+uses: actions/cache/save@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5
with:
path: ${{ github.workspace }}/.sccache/
key: sccache-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-${{ steps.lockhash.outputs.hash }}-${{ github.run_id }}
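The key/restore-keys layering repeated throughout these hunks follows the actions/cache contract: the exact key ends in `github.run_id` so it never collides and every run saves a fresh entry, while `restore-keys` fall back to the closest prefix match. A minimal sketch of that pattern; runner, target, and hash values are illustrative, not this repo's:

```yaml
- uses: actions/cache/restore@v5 # pin a full commit SHA in real workflows
  id: cache_sccache_restore
  with:
    path: .sccache/
    # Exact key: unique per run (run_id suffix), so each run saves anew.
    key: sccache-ubuntu-x86_64-release-abc123-1000
    # Fallbacks, most specific first: a prior run with the same lockfile
    # hash seeds the cache; otherwise any same-target cache is reused.
    restore-keys: |
      sccache-ubuntu-x86_64-release-abc123-
      sccache-ubuntu-x86_64-release-
```

The matching `actions/cache/save` step then writes the exact key, which is why the save steps above guard on `cache-hit != 'true'`.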

View File

@@ -14,7 +14,7 @@ jobs:
codex: ${{ steps.detect.outputs.codex }}
workflows: ${{ steps.detect.outputs.workflows }}
steps:
-- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
+- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
with:
fetch-depth: 0
- name: Detect changed paths (no external action)
@@ -61,7 +61,7 @@ jobs:
run:
working-directory: codex-rs
steps:
-- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
+- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- uses: dtolnay/rust-toolchain@a0b273b48ed29de4470960879e8381ff45632f26 # 1.93.0
with:
components: rustfmt
@@ -77,12 +77,12 @@ jobs:
run:
working-directory: codex-rs
steps:
-- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
+- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- uses: dtolnay/rust-toolchain@a0b273b48ed29de4470960879e8381ff45632f26 # 1.93.0
-- uses: taiki-e/install-action@44c6d64aa62cd779e873306675c7a58e86d6d532 # v2.62.49
+- uses: taiki-e/install-action@44c6d64aa62cd779e873306675c7a58e86d6d532 # v2
with:
tool: cargo-shear
-version: 1.11.2
+version: 1.5.1
- name: cargo shear
run: cargo shear
@@ -95,7 +95,7 @@ jobs:
CARGO_DYLINT_VERSION: 5.0.0
DYLINT_LINK_VERSION: 5.0.0
steps:
-- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
+- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- uses: dtolnay/rust-toolchain@a0b273b48ed29de4470960879e8381ff45632f26 # 1.93.0
- name: Install nightly argument-comment-lint toolchain
shell: bash
@@ -109,7 +109,7 @@ jobs:
rustup default nightly-2025-09-18
- name: Cache cargo-dylint tooling
id: cargo_dylint_cache
-uses: actions/cache@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
+uses: actions/cache@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5
with:
path: |
~/.cargo/bin/cargo-dylint
@@ -170,7 +170,7 @@ jobs:
echo "No argument-comment-lint relevant changes."
echo "run=false" >> "$GITHUB_OUTPUT"
-- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
+- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
if: ${{ steps.argument_comment_lint_gate.outputs.run == 'true' }}
- name: Run argument comment lint on codex-rs via Bazel
if: ${{ steps.argument_comment_lint_gate.outputs.run == 'true' }}

View File

@@ -56,7 +56,7 @@ jobs:
labels: codex-windows-x64
steps:
-- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
+- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- uses: dtolnay/rust-toolchain@a0b273b48ed29de4470960879e8381ff45632f26 # 1.93.0
with:
@@ -75,7 +75,7 @@ jobs:
- name: Cargo build
working-directory: tools/argument-comment-lint
shell: bash
-run: cargo build --release --target ${{ matrix.target }} --locked
+run: cargo build --release --target ${{ matrix.target }}
- name: Stage artifact
shell: bash
@@ -100,7 +100,7 @@ jobs:
(cd "${RUNNER_TEMP}" && tar -czf "$GITHUB_WORKSPACE/$archive_path" argument-comment-lint)
fi
-- uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
+- uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
with:
name: argument-comment-lint-${{ matrix.target }}
path: dist/argument-comment-lint/${{ matrix.target }}/*

View File

@@ -18,7 +18,7 @@ jobs:
if: github.repository == 'openai/codex'
runs-on: ubuntu-latest
steps:
-- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
+- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
with:
ref: main
fetch-depth: 0
@@ -43,7 +43,7 @@ jobs:
curl --http1.1 --fail --show-error --location "${headers[@]}" "${url}" | jq '.' > codex-rs/models-manager/models.json
- name: Open pull request (if changed)
-uses: peter-evans/create-pull-request@c0f553fe549906ede9cf27b5156039d195d2ece0 # v8.1.0
+uses: peter-evans/create-pull-request@c0f553fe549906ede9cf27b5156039d195d2ece0 # v8
with:
commit-message: "Update models.json"
title: "Update models.json"

View File

@@ -83,7 +83,7 @@ jobs:
labels: codex-windows-arm64
steps:
-- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
+- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: Print runner specs (Windows)
shell: powershell
run: |
@@ -109,10 +109,10 @@ jobs:
for binary in ${{ matrix.binaries }}; do
build_args+=(--bin "$binary")
done
-cargo build --target ${{ matrix.target }} --release --timings --locked "${build_args[@]}"
+cargo build --target ${{ matrix.target }} --release --timings "${build_args[@]}"
- name: Upload Cargo timings
-uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
+uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
with:
name: cargo-timings-rust-release-windows-${{ matrix.target }}-${{ matrix.bundle }}
path: codex-rs/target/**/cargo-timings/cargo-timing.html
@@ -128,7 +128,7 @@ jobs:
done
- name: Upload Windows binaries
-uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
+uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
with:
name: windows-binaries-${{ matrix.target }}-${{ matrix.bundle }}
path: |
@@ -165,22 +165,22 @@ jobs:
labels: codex-windows-arm64
steps:
-- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
+- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: Download prebuilt Windows primary binaries
-uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1
+uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8
with:
name: windows-binaries-${{ matrix.target }}-primary
path: codex-rs/target/${{ matrix.target }}/release
- name: Download prebuilt Windows helper binaries
-uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1
+uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8
with:
name: windows-binaries-${{ matrix.target }}-helpers
path: codex-rs/target/${{ matrix.target }}/release
- name: Download prebuilt Windows app-server binary
-uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1
+uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8
with:
name: windows-binaries-${{ matrix.target }}-app-server
path: codex-rs/target/${{ matrix.target }}/release
@@ -281,7 +281,7 @@ jobs:
"${GITHUB_WORKSPACE}/.github/workflows/zstd" -T0 -19 "$dest/$base"
done
-- uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
+- uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
with:
name: ${{ matrix.target }}
path: |

View File

@@ -45,7 +45,7 @@ jobs:
git \
libncursesw5-dev
-- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
+- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: Build, smoke-test, and stage zsh artifact
shell: bash
@@ -53,7 +53,7 @@ jobs:
"${GITHUB_WORKSPACE}/.github/scripts/build-zsh-release-artifact.sh" \
"dist/zsh/${{ matrix.target }}/${{ matrix.archive_name }}"
-- uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
+- uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
with:
name: codex-zsh-${{ matrix.target }}
path: dist/zsh/${{ matrix.target }}/*
@@ -81,7 +81,7 @@ jobs:
brew install autoconf
fi
-- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
+- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: Build, smoke-test, and stage zsh artifact
shell: bash
@@ -89,7 +89,7 @@ jobs:
"${GITHUB_WORKSPACE}/.github/scripts/build-zsh-release-artifact.sh" \
"dist/zsh/${{ matrix.target }}/${{ matrix.archive_name }}"
-- uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
+- uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
with:
name: codex-zsh-${{ matrix.target }}
path: dist/zsh/${{ matrix.target }}/*

View File

@@ -19,7 +19,7 @@ jobs:
tag-check:
runs-on: ubuntu-latest
steps:
-- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
+- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- uses: dtolnay/rust-toolchain@a0b273b48ed29de4470960879e8381ff45632f26 # 1.93.0
- name: Validate tag matches Cargo.toml version
shell: bash
@@ -118,7 +118,7 @@ jobs:
build_dmg: "false"
steps:
-- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
+- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: Print runner specs (Linux)
if: ${{ runner.os == 'Linux' }}
shell: bash
@@ -181,7 +181,7 @@ jobs:
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl'}}
name: Install Zig
-uses: mlugg/setup-zig@d1434d08867e3ee9daa34448df10607b98908d29 # v2.2.1
+uses: mlugg/setup-zig@d1434d08867e3ee9daa34448df10607b98908d29 # v2
with:
version: 0.14.0
@@ -261,7 +261,7 @@ jobs:
run: |
set -euo pipefail
target="${{ matrix.target }}"
-cargo build --target "$target" --release --timings --locked --bin bwrap
+cargo build --target "$target" --release --timings --bin bwrap
bwrap_path="target/${target}/release/bwrap"
if [[ ! -f "$bwrap_path" ]]; then
@@ -281,10 +281,10 @@ jobs:
build_args+=(--bin "$binary")
done
echo "CARGO_PROFILE_RELEASE_LTO: ${CARGO_PROFILE_RELEASE_LTO}"
-cargo build --target ${{ matrix.target }} --release --timings --locked "${build_args[@]}"
+cargo build --target ${{ matrix.target }} --release --timings "${build_args[@]}"
- name: Upload Cargo timings
-uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
+uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
with:
name: cargo-timings-rust-release-${{ matrix.target }}-${{ matrix.bundle }}
path: codex-rs/target/**/cargo-timings/cargo-timing.html
@@ -430,7 +430,7 @@ jobs:
zstd -T0 -19 --rm "$dest/$base"
done
-- uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
+- uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
with:
name: ${{ matrix.artifact_name }}
# Upload the per-binary .zst files, .tar.gz equivalents, and any
@@ -476,7 +476,7 @@ jobs:
steps:
- name: Checkout repository
-uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
+uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: Generate release notes from tag commit message
id: release_notes
@@ -498,7 +498,7 @@ jobs:
echo "path=${notes_path}" >> "${GITHUB_OUTPUT}"
-- uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1
+- uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8
with:
path: dist
@@ -553,7 +553,7 @@ jobs:
run_install: false
- name: Setup Node.js for npm packaging
-uses: actions/setup-node@53b83947a5a98c8d113130e565377fae1a50d02f # v6.3.0
+uses: actions/setup-node@53b83947a5a98c8d113130e565377fae1a50d02f # v6
with:
node-version: 22
@@ -579,7 +579,7 @@ jobs:
cp scripts/install/install.ps1 dist/install.ps1
- name: Create GitHub Release
-uses: softprops/action-gh-release@153bb8e04406b158c6c84fc1615b65b24149a1fe # v2.6.1
+uses: softprops/action-gh-release@153bb8e04406b158c6c84fc1615b65b24149a1fe # v2
with:
name: ${{ steps.release_name.outputs.name }}
tag_name: ${{ github.ref_name }}
@@ -638,7 +638,7 @@ jobs:
steps:
- name: Setup Node.js
-uses: actions/setup-node@53b83947a5a98c8d113130e565377fae1a50d02f # v6.3.0
+uses: actions/setup-node@53b83947a5a98c8d113130e565377fae1a50d02f # v6
with:
# Node 24 bundles npm >= 11.5.1, which trusted publishing requires.
node-version: 24

View File

@@ -17,10 +17,10 @@ jobs:
v8_version: ${{ steps.v8_version.outputs.version }}
steps:
-- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
+- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: Set up Python
-uses: actions/setup-python@a309ff8b426b58ec0e2a45f0f869d46889d02405 # v6.2.0
+uses: actions/setup-python@a309ff8b426b58ec0e2a45f0f869d46889d02405 # v6
with:
python-version: "3.12"
@@ -69,7 +69,7 @@ jobs:
target: aarch64-unknown-linux-musl
steps:
-- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
+- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: Set up Bazel
uses: ./.github/actions/setup-bazel-ci
@@ -77,7 +77,7 @@ jobs:
target: ${{ matrix.target }}
- name: Set up Python
-uses: actions/setup-python@a309ff8b426b58ec0e2a45f0f869d46889d02405 # v6.2.0
+uses: actions/setup-python@a309ff8b426b58ec0e2a45f0f869d46889d02405 # v6
with:
python-version: "3.12"
@@ -133,7 +133,7 @@ jobs:
--output-dir "dist/${TARGET}"
- name: Upload staged musl artifacts
-uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
+uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
with:
name: rusty-v8-${{ needs.metadata.outputs.v8_version }}-${{ matrix.target }}
path: dist/${{ matrix.target }}/*
@@ -161,12 +161,12 @@ jobs:
exit 1
fi
-- uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1
+- uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8
with:
path: dist
- name: Create GitHub Release
-uses: softprops/action-gh-release@153bb8e04406b158c6c84fc1615b65b24149a1fe # v2.6.1
+uses: softprops/action-gh-release@153bb8e04406b158c6c84fc1615b65b24149a1fe # v2
with:
tag_name: ${{ needs.metadata.outputs.release_tag }}
name: ${{ needs.metadata.outputs.release_tag }}

View File

@@ -13,7 +13,7 @@ jobs:
timeout-minutes: 10
steps:
- name: Checkout repository
-uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
+uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: Install Linux bwrap build dependencies
shell: bash
@@ -28,7 +28,7 @@ jobs:
run_install: false
- name: Setup Node.js
-uses: actions/setup-node@53b83947a5a98c8d113130e565377fae1a50d02f # v6.3.0
+uses: actions/setup-node@53b83947a5a98c8d113130e565377fae1a50d02f # v6
with:
node-version: 22
cache: pnpm
@@ -115,7 +115,7 @@ jobs:
- name: Save bazel repository cache
if: always() && !cancelled() && steps.setup_bazel.outputs.cache-hit != 'true'
continue-on-error: true
-uses: actions/cache/save@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
+uses: actions/cache/save@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5
with:
path: |
~/.cache/bazel-repo-cache

View File

@@ -40,10 +40,10 @@ jobs:
v8_version: ${{ steps.v8_version.outputs.version }}
steps:
-- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
+- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: Set up Python
-uses: actions/setup-python@a309ff8b426b58ec0e2a45f0f869d46889d02405 # v6.2.0
+uses: actions/setup-python@a309ff8b426b58ec0e2a45f0f869d46889d02405 # v6
with:
python-version: "3.12"
@@ -74,7 +74,7 @@ jobs:
target: aarch64-unknown-linux-musl
steps:
-- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
+- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: Set up Bazel
uses: ./.github/actions/setup-bazel-ci
@@ -82,7 +82,7 @@ jobs:
target: ${{ matrix.target }}
- name: Set up Python
-uses: actions/setup-python@a309ff8b426b58ec0e2a45f0f869d46889d02405 # v6.2.0
+uses: actions/setup-python@a309ff8b426b58ec0e2a45f0f869d46889d02405 # v6
with:
python-version: "3.12"
@@ -132,7 +132,7 @@ jobs:
--output-dir "dist/${TARGET}"
- name: Upload staged musl artifacts
-uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
+uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
with:
name: v8-canary-${{ needs.metadata.outputs.v8_version }}-${{ matrix.target }}
path: dist/${{ matrix.target }}/*

View File

@@ -130,7 +130,7 @@ When UI or text output changes intentionally, update the snapshots as follows:
If you don't have the tool:
-- `cargo install --locked cargo-insta`
+- `cargo install cargo-insta`
### Test assertions

codex-rs/Cargo.lock (generated, 24 changes)
View File

@@ -1866,6 +1866,7 @@ dependencies = [
"codex-config",
"codex-core",
"codex-core-plugins",
+"codex-device-key",
"codex-exec-server",
"codex-external-agent-migration",
"codex-external-agent-sessions",
@@ -2130,10 +2131,10 @@ name = "codex-builtin-mcps"
version = "0.0.0"
dependencies = [
"anyhow",
"codex-config",
"codex-memories-mcp",
"codex-utils-absolute-path",
"pretty_assertions",
"tokio",
]
[[package]]
@@ -2181,6 +2182,7 @@ dependencies = [
"codex-app-server-protocol",
"codex-app-server-test-client",
"codex-arg0",
+"codex-builtin-mcps",
"codex-chatgpt",
"codex-cloud-tasks",
"codex-config",
@@ -2431,6 +2433,7 @@ dependencies = [
"bm25",
"chrono",
"clap",
+"codex-agent-graph-store",
"codex-analytics",
"codex-api",
"codex-app-server-protocol",
@@ -2567,7 +2570,6 @@ dependencies = [
"codex-core-skills",
"codex-exec-server",
"codex-git-utils",
-"codex-hooks",
"codex-login",
"codex-model-provider",
"codex-otel",
@@ -2636,6 +2638,22 @@ dependencies = [
"serde_json",
]
+[[package]]
+name = "codex-device-key"
+version = "0.0.0"
+dependencies = [
+"async-trait",
+"base64 0.22.1",
+"p256",
+"pretty_assertions",
+"rand 0.9.3",
+"serde",
+"serde_json",
+"thiserror 2.0.18",
+"tokio",
+"url",
+]
[[package]]
name = "codex-exec"
version = "0.0.0"
@@ -2705,13 +2723,13 @@ dependencies = [
"serde",
"serde_json",
"serial_test",
"sha2",
"tempfile",
"test-case",
"thiserror 2.0.18",
"tokio",
"tokio-tungstenite",
"tokio-util",
"toml 0.9.11+spec-1.1.0",
"tracing",
"uuid",
"wiremock",

View File

@@ -30,6 +30,7 @@ members = [
"collaboration-mode-templates",
"connectors",
"config",
+"device-key",
"shell-command",
"shell-escalation",
"skills",
@@ -153,6 +154,7 @@ codex-core = { path = "core" }
codex-core-api = { path = "core-api" }
codex-core-plugins = { path = "core-plugins" }
codex-core-skills = { path = "core-skills" }
+codex-device-key = { path = "device-key" }
codex-exec = { path = "exec" }
codex-file-system = { path = "file-system" }
codex-exec-server = { path = "exec-server" }
@@ -317,6 +319,7 @@ os_info = "3.12.0"
owo-colors = "4.3.0"
path-absolutize = "3.1.1"
pathdiff = "0.2"
+p256 = "0.13.2"
portable-pty = "0.9.0"
predicates = "3"
pretty_assertions = "1.4.1"
@@ -475,13 +478,13 @@ ignored = [
[profile.dev]
# Keep line tables/backtraces while avoiding expensive full variable debug info
# across local dev builds.
-debug = "limited"
+debug = 1
[profile.dev-small]
inherits = "dev"
opt-level = 0
-debug = "none"
-strip = "symbols"
+debug = 0
+strip = true
[profile.release]
lto = "fat"
@@ -493,15 +496,8 @@ strip = "symbols"
# See https://github.com/openai/codex/issues/1411 for details.
codegen-units = 1
-[profile.profiling]
-inherits = "release"
-debug = "full"
-lto = false
-strip = false
[profile.ci-test]
# Reduce binary size to reduce disk pressure.
-debug = "limited"
+debug = 1 # Reduce debug symbol size
inherits = "test"
opt-level = 0
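The profile pairs in this hunk are mostly equivalent spellings rather than behavior changes: per the Cargo Book's profile settings, the numeric debug-info levels map onto the newer string names, and `strip = true` is shorthand for stripping symbols. A sketch of the correspondence (illustrative profiles, not this repo's exact ones):

```toml
[profile.dev]
debug = 1          # same setting as debug = "limited" (line tables only)

[profile.dev-small]
debug = 0          # same setting as debug = "none" (or debug = false)
strip = true       # same setting as strip = "symbols"

[profile.release]
debug = 2          # same setting as debug = true or debug = "full"
```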

View File

@@ -7,7 +7,6 @@ version.workspace = true
[lib]
name = "codex_agent_graph_store"
path = "src/lib.rs"
-doctest = false
[lints]
workspace = true

View File

@@ -1012,7 +1012,7 @@ fn command_execution_event_serializes_expected_shape() {
runtime_os_version: "15.3.1".to_string(),
runtime_arch: "aarch64".to_string(),
},
-thread_source: Some(ThreadSource::User),
+thread_source: Some("user"),
subagent_source: None,
parent_thread_id: None,
tool_name: "shell".to_string(),

View File

@@ -70,8 +70,6 @@ pub(crate) enum TrackEventRequest {
CollabAgentToolCall(CodexCollabAgentToolCallEventRequest),
WebSearch(CodexWebSearchEventRequest),
ImageGeneration(CodexImageGenerationEventRequest),
-#[allow(dead_code)]
-ReviewEvent(CodexReviewEventRequest),
PluginUsed(CodexPluginUsedEventRequest),
PluginInstalled(CodexPluginEventRequest),
PluginUninstalled(CodexPluginEventRequest),
@@ -444,7 +442,7 @@ pub(crate) struct CodexToolItemEventBase {
pub(crate) item_id: String,
pub(crate) app_server_client: CodexAppServerClientMetadata,
pub(crate) runtime: CodexRuntimeMetadata,
-pub(crate) thread_source: Option<ThreadSource>,
+pub(crate) thread_source: Option<&'static str>,
pub(crate) subagent_source: Option<String>,
pub(crate) parent_thread_id: Option<String>,
pub(crate) tool_name: String,
@@ -464,83 +462,6 @@ pub(crate) struct CodexToolItemEventBase {
pub(crate) requested_network_access: bool,
}
-#[allow(dead_code)]
-#[derive(Clone, Copy, Debug, Serialize)]
-#[serde(rename_all = "snake_case")]
-pub(crate) enum ReviewSubjectKind {
-CommandExecution,
-FileChange,
-McpToolCall,
-Permissions,
-NetworkAccess,
-}
-#[allow(dead_code)]
-#[derive(Clone, Copy, Debug, Serialize)]
-#[serde(rename_all = "snake_case")]
-pub(crate) enum Reviewer {
-Guardian,
-User,
-}
-#[allow(dead_code)]
-#[derive(Clone, Copy, Debug, Serialize)]
-#[serde(rename_all = "snake_case")]
-pub(crate) enum ReviewTrigger {
-Initial,
-SandboxDenial,
-NetworkPolicyDenial,
-ExecveIntercept,
-}
-#[allow(dead_code)]
-#[derive(Clone, Copy, Debug, Serialize)]
-#[serde(rename_all = "snake_case")]
-pub(crate) enum ReviewStatus {
-Approved,
-Denied,
-Aborted,
-TimedOut,
-}
-#[allow(dead_code)]
-#[derive(Clone, Copy, Debug, Serialize)]
-#[serde(rename_all = "snake_case")]
-pub(crate) enum ReviewResolution {
-None,
-SessionApproval,
-ExecPolicyAmendment,
-NetworkPolicyAmendment,
-}
-#[derive(Serialize)]
-pub(crate) struct CodexReviewEventParams {
-pub(crate) thread_id: String,
-pub(crate) turn_id: String,
-pub(crate) item_id: Option<String>,
-pub(crate) review_id: String,
-pub(crate) app_server_client: CodexAppServerClientMetadata,
-pub(crate) runtime: CodexRuntimeMetadata,
-pub(crate) thread_source: Option<ThreadSource>,
-pub(crate) subagent_source: Option<String>,
-pub(crate) parent_thread_id: Option<String>,
-pub(crate) tool_kind: ReviewSubjectKind,
-pub(crate) tool_name: String,
-pub(crate) reviewer: Reviewer,
-pub(crate) trigger: ReviewTrigger,
-pub(crate) status: ReviewStatus,
-pub(crate) resolution: ReviewResolution,
-pub(crate) started_at_ms: u64,
-pub(crate) completed_at_ms: u64,
-pub(crate) duration_ms: Option<u64>,
-}
-#[derive(Serialize)]
-pub(crate) struct CodexReviewEventRequest {
-pub(crate) event_type: &'static str,
-pub(crate) event_params: CodexReviewEventParams,
-}
#[allow(dead_code)]
#[derive(Clone, Copy, Debug, Serialize)]
#[serde(rename_all = "snake_case")]
pub(crate) enum WebSearchActionKind {

View File

@@ -1447,7 +1447,7 @@ fn tool_item_base(
item_id,
app_server_client: context.connection_state.app_server_client.clone(),
runtime: context.connection_state.runtime.clone(),
-thread_source: thread_metadata.thread_source,
+thread_source: thread_metadata.thread_source.map(ThreadSource::as_str),
subagent_source: thread_metadata.subagent_source.clone(),
parent_thread_id: thread_metadata.parent_thread_id.clone(),
tool_name,
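The change above swaps the telemetry field from the `ThreadSource` enum to a `&'static str` and converts once at the call site. A minimal self-contained sketch of that `map(ThreadSource::as_str)` pattern; the variant `Background` and the wire strings other than `"user"` are invented for illustration:

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
enum ThreadSource {
    User,
    Background, // hypothetical extra variant for illustration
}

impl ThreadSource {
    // Stable wire string per variant; returning &'static str keeps the
    // serialized event struct free of the enum type itself.
    fn as_str(self) -> &'static str {
        match self {
            ThreadSource::User => "user",
            ThreadSource::Background => "background",
        }
    }
}

fn main() {
    let thread_source: Option<ThreadSource> = Some(ThreadSource::User);
    // Equivalent of `thread_metadata.thread_source.map(ThreadSource::as_str)`.
    let as_wire: Option<&'static str> = thread_source.map(ThreadSource::as_str);
    assert_eq!(as_wire, Some("user"));
    println!("{as_wire:?}");
}
```

`Option::map` leaves `None` untouched, so an absent thread source still serializes as a missing field.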

View File

@@ -7,8 +7,6 @@ license.workspace = true
[lib]
name = "codex_ansi_escape"
path = "src/lib.rs"
-test = false
-doctest = false
[lints]
workspace = true

View File

@@ -7,7 +7,6 @@ license.workspace = true
[lib]
name = "codex_app_server_client"
path = "src/lib.rs"
-doctest = false
[lints]
workspace = true

View File

@@ -29,7 +29,6 @@ pub use codex_app_server::in_process::DEFAULT_IN_PROCESS_CHANNEL_CAPACITY;
pub use codex_app_server::in_process::InProcessServerEvent;
use codex_app_server::in_process::InProcessStartArgs;
use codex_app_server::in_process::LogDbLayer;
-pub use codex_app_server::in_process::StateDbHandle;
use codex_app_server_protocol::ClientInfo;
use codex_app_server_protocol::ClientNotification;
use codex_app_server_protocol::ClientRequest;
@@ -47,9 +46,9 @@ use codex_config::LoaderOverrides;
use codex_config::NoopThreadConfigLoader;
use codex_config::RemoteThreadConfigLoader;
use codex_config::ThreadConfigLoader;
+pub use codex_core::StateDbHandle;
use codex_core::config::Config;
pub use codex_exec_server::EnvironmentManager;
pub use codex_exec_server::EnvironmentManagerArgs;
pub use codex_exec_server::ExecServerRuntimePaths;
use codex_feedback::CodexFeedback;
use codex_protocol::protocol::SessionSource;
@@ -951,7 +950,7 @@ mod tests {
use codex_app_server_protocol::ToolRequestUserInputParams;
use codex_app_server_protocol::ToolRequestUserInputQuestion;
use codex_core::config::ConfigBuilder;
-use codex_core::init_state_db;
+use codex_core::init_state_db_from_config;
use futures::SinkExt;
use futures::StreamExt;
use pretty_assertions::assert_eq;
@@ -1017,7 +1016,7 @@ mod tests {
) -> TestClient {
let codex_home = TempDir::new().expect("temp dir");
let config = Arc::new(build_test_config_for_codex_home(codex_home.path()).await);
-let state_db = init_state_db(config.as_ref())
+let state_db = init_state_db_from_config(config.as_ref())
.await
.expect("state db should initialize for in-process test");
let client = InProcessAppServerClient::start(InProcessClientStartArgs {

View File

@@ -7,7 +7,6 @@ license.workspace = true
[lib]
name = "codex_app_server_protocol"
path = "src/lib.rs"
-doctest = false
[lints]
workspace = true

View File

@@ -533,6 +533,200 @@
}
]
},
"DeviceKeyCreateParams": {
"description": "Create a controller-local device key with a random key id.",
"properties": {
"accountUserId": {
"type": "string"
},
"clientId": {
"type": "string"
},
"protectionPolicy": {
"anyOf": [
{
"$ref": "#/definitions/DeviceKeyProtectionPolicy"
},
{
"type": "null"
}
],
"description": "Defaults to `hardware_only` when omitted."
}
},
"required": [
"accountUserId",
"clientId"
],
"type": "object"
},
"DeviceKeyProtectionPolicy": {
"description": "Protection policy for creating or loading a controller-local device key.",
"enum": [
"hardware_only",
"allow_os_protected_nonextractable"
],
"type": "string"
},
"DeviceKeyPublicParams": {
"description": "Fetch a controller-local device key public key by id.",
"properties": {
"keyId": {
"type": "string"
}
},
"required": [
"keyId"
],
"type": "object"
},
"DeviceKeySignParams": {
"description": "Sign an accepted structured payload with a controller-local device key.",
"properties": {
"keyId": {
"type": "string"
},
"payload": {
"$ref": "#/definitions/DeviceKeySignPayload"
}
},
"required": [
"keyId",
"payload"
],
"type": "object"
},
"DeviceKeySignPayload": {
"description": "Structured payloads accepted by `device/key/sign`.",
"oneOf": [
{
"description": "Payload bound to one remote-control controller websocket `/client` connection challenge.",
"properties": {
"accountUserId": {
"type": "string"
},
"audience": {
"$ref": "#/definitions/RemoteControlClientConnectionAudience"
},
"clientId": {
"type": "string"
},
"nonce": {
"type": "string"
},
"scopes": {
"description": "Must contain exactly `remote_control_controller_websocket`.",
"items": {
"type": "string"
},
"type": "array"
},
"sessionId": {
"description": "Backend-issued websocket session id that this proof authorizes.",
"type": "string"
},
"targetOrigin": {
"description": "Origin of the backend endpoint that issued the challenge and will verify this proof.",
"type": "string"
},
"targetPath": {
"description": "Websocket route path that this proof authorizes.",
"type": "string"
},
"tokenExpiresAt": {
"description": "Remote-control token expiration as Unix seconds.",
"format": "int64",
"type": "integer"
},
"tokenSha256Base64url": {
"description": "SHA-256 of the controller-scoped remote-control token, encoded as unpadded base64url.",
"type": "string"
},
"type": {
"enum": [
"remoteControlClientConnection"
],
"title": "RemoteControlClientConnectionDeviceKeySignPayloadType",
"type": "string"
}
},
"required": [
"accountUserId",
"audience",
"clientId",
"nonce",
"scopes",
"sessionId",
"targetOrigin",
"targetPath",
"tokenExpiresAt",
"tokenSha256Base64url",
"type"
],
"title": "RemoteControlClientConnectionDeviceKeySignPayload",
"type": "object"
},
{
"description": "Payload bound to a remote-control client `/client/enroll` ownership challenge.",
"properties": {
"accountUserId": {
"type": "string"
},
"audience": {
"$ref": "#/definitions/RemoteControlClientEnrollmentAudience"
},
"challengeExpiresAt": {
"description": "Enrollment challenge expiration as Unix seconds.",
"format": "int64",
"type": "integer"
},
"challengeId": {
"description": "Backend-issued enrollment challenge id that this proof authorizes.",
"type": "string"
},
"clientId": {
"type": "string"
},
"deviceIdentitySha256Base64url": {
"description": "SHA-256 of the requested device identity operation, encoded as unpadded base64url.",
"type": "string"
},
"nonce": {
"type": "string"
},
"targetOrigin": {
"description": "Origin of the backend endpoint that issued the challenge and will verify this proof.",
"type": "string"
},
"targetPath": {
"description": "HTTP route path that this proof authorizes.",
"type": "string"
},
"type": {
"enum": [
"remoteControlClientEnrollment"
],
"title": "RemoteControlClientEnrollmentDeviceKeySignPayloadType",
"type": "string"
}
},
"required": [
"accountUserId",
"audience",
"challengeExpiresAt",
"challengeId",
"clientId",
"deviceIdentitySha256Base64url",
"nonce",
"targetOrigin",
"targetPath",
"type"
],
"title": "RemoteControlClientEnrollmentDeviceKeySignPayload",
"type": "object"
}
]
},
"DynamicToolSpec": {
"properties": {
"deferLoading": {
@@ -2310,6 +2504,20 @@
}
]
},
"RemoteControlClientConnectionAudience": {
"description": "Audience for a remote-control client connection device-key proof.",
"enum": [
"remote_control_client_websocket"
],
"type": "string"
},
"RemoteControlClientEnrollmentAudience": {
"description": "Audience for a remote-control client enrollment device-key proof.",
"enum": [
"remote_control_client_enrollment"
],
"type": "string"
},
"RequestId": {
"anyOf": [
{
@@ -4096,31 +4304,6 @@
],
"type": "object"
},
"TurnItemsView": {
"oneOf": [
{
"description": "`items` was not loaded for this turn. The field is intentionally empty.",
"enum": [
"notLoaded"
],
"type": "string"
},
{
"description": "`items` contains only a display summary for this turn.",
"enum": [
"summary"
],
"type": "string"
},
{
"description": "`items` contains every ThreadItem available from persisted app-server history for this turn.",
"enum": [
"full"
],
"type": "string"
}
]
},
"TurnStartParams": {
"properties": {
"approvalPolicy": {
@@ -5125,6 +5308,78 @@
"title": "App/listRequest",
"type": "object"
},
{
"properties": {
"id": {
"$ref": "#/definitions/RequestId"
},
"method": {
"enum": [
"device/key/create"
],
"title": "Device/key/createRequestMethod",
"type": "string"
},
"params": {
"$ref": "#/definitions/DeviceKeyCreateParams"
}
},
"required": [
"id",
"method",
"params"
],
"title": "Device/key/createRequest",
"type": "object"
},
{
"properties": {
"id": {
"$ref": "#/definitions/RequestId"
},
"method": {
"enum": [
"device/key/public"
],
"title": "Device/key/publicRequestMethod",
"type": "string"
},
"params": {
"$ref": "#/definitions/DeviceKeyPublicParams"
}
},
"required": [
"id",
"method",
"params"
],
"title": "Device/key/publicRequest",
"type": "object"
},
{
"properties": {
"id": {
"$ref": "#/definitions/RequestId"
},
"method": {
"enum": [
"device/key/sign"
],
"title": "Device/key/signRequestMethod",
"type": "string"
},
"params": {
"$ref": "#/definitions/DeviceKeySignParams"
}
},
"required": [
"id",
"method",
"params"
],
"title": "Device/key/signRequest",
"type": "object"
},
{
"properties": {
"id": {
@@ -6206,4 +6461,4 @@
}
],
"title": "ClientRequest"
}
}
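The new `device/key/*` variants above follow the same JSON-RPC shape as the other `ClientRequest` entries: `id`, `method`, and `params` are all required. A minimal sketch (values are placeholders, not real key ids) of a `device/key/public` request shaped per the schema:

```typescript
// Illustrative only: a device/key/public request per the ClientRequest
// variant above. DeviceKeyPublicParams has a single required field, keyId.
const request = {
  id: 42,
  method: "device/key/public" as const,
  params: { keyId: "key_0123" }, // placeholder key id
};

// The schema marks id, method, and params as required.
for (const key of ["id", "method", "params"]) {
  if (!(key in request)) throw new Error(`missing required field: ${key}`);
}
```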


@@ -906,6 +906,78 @@
"title": "App/listRequest",
"type": "object"
},
{
"properties": {
"id": {
"$ref": "#/definitions/v2/RequestId"
},
"method": {
"enum": [
"device/key/create"
],
"title": "Device/key/createRequestMethod",
"type": "string"
},
"params": {
"$ref": "#/definitions/v2/DeviceKeyCreateParams"
}
},
"required": [
"id",
"method",
"params"
],
"title": "Device/key/createRequest",
"type": "object"
},
{
"properties": {
"id": {
"$ref": "#/definitions/v2/RequestId"
},
"method": {
"enum": [
"device/key/public"
],
"title": "Device/key/publicRequestMethod",
"type": "string"
},
"params": {
"$ref": "#/definitions/v2/DeviceKeyPublicParams"
}
},
"required": [
"id",
"method",
"params"
],
"title": "Device/key/publicRequest",
"type": "object"
},
{
"properties": {
"id": {
"$ref": "#/definitions/v2/RequestId"
},
"method": {
"enum": [
"device/key/sign"
],
"title": "Device/key/signRequestMethod",
"type": "string"
},
"params": {
"$ref": "#/definitions/v2/DeviceKeySignParams"
}
},
"required": [
"id",
"method",
"params"
],
"title": "Device/key/signRequest",
"type": "object"
},
{
"properties": {
"id": {
@@ -7875,6 +7947,300 @@
"title": "DeprecationNoticeNotification",
"type": "object"
},
"DeviceKeyAlgorithm": {
"description": "Device-key algorithm reported at enrollment and signing boundaries.",
"enum": [
"ecdsa_p256_sha256"
],
"type": "string"
},
"DeviceKeyCreateParams": {
"$schema": "http://json-schema.org/draft-07/schema#",
"description": "Create a controller-local device key with a random key id.",
"properties": {
"accountUserId": {
"type": "string"
},
"clientId": {
"type": "string"
},
"protectionPolicy": {
"anyOf": [
{
"$ref": "#/definitions/v2/DeviceKeyProtectionPolicy"
},
{
"type": "null"
}
],
"description": "Defaults to `hardware_only` when omitted."
}
},
"required": [
"accountUserId",
"clientId"
],
"title": "DeviceKeyCreateParams",
"type": "object"
},
"DeviceKeyCreateResponse": {
"$schema": "http://json-schema.org/draft-07/schema#",
"description": "Device-key metadata and public key returned by create/public APIs.",
"properties": {
"algorithm": {
"$ref": "#/definitions/v2/DeviceKeyAlgorithm"
},
"keyId": {
"type": "string"
},
"protectionClass": {
"$ref": "#/definitions/v2/DeviceKeyProtectionClass"
},
"publicKeySpkiDerBase64": {
"description": "SubjectPublicKeyInfo DER encoded as base64.",
"type": "string"
}
},
"required": [
"algorithm",
"keyId",
"protectionClass",
"publicKeySpkiDerBase64"
],
"title": "DeviceKeyCreateResponse",
"type": "object"
},
"DeviceKeyProtectionClass": {
"description": "Platform protection class for a controller-local device key.",
"enum": [
"hardware_secure_enclave",
"hardware_tpm",
"os_protected_nonextractable"
],
"type": "string"
},
"DeviceKeyProtectionPolicy": {
"description": "Protection policy for creating or loading a controller-local device key.",
"enum": [
"hardware_only",
"allow_os_protected_nonextractable"
],
"type": "string"
},
"DeviceKeyPublicParams": {
"$schema": "http://json-schema.org/draft-07/schema#",
"description": "Fetch a controller-local device key public key by id.",
"properties": {
"keyId": {
"type": "string"
}
},
"required": [
"keyId"
],
"title": "DeviceKeyPublicParams",
"type": "object"
},
"DeviceKeyPublicResponse": {
"$schema": "http://json-schema.org/draft-07/schema#",
"description": "Device-key public metadata returned by `device/key/public`.",
"properties": {
"algorithm": {
"$ref": "#/definitions/v2/DeviceKeyAlgorithm"
},
"keyId": {
"type": "string"
},
"protectionClass": {
"$ref": "#/definitions/v2/DeviceKeyProtectionClass"
},
"publicKeySpkiDerBase64": {
"description": "SubjectPublicKeyInfo DER encoded as base64.",
"type": "string"
}
},
"required": [
"algorithm",
"keyId",
"protectionClass",
"publicKeySpkiDerBase64"
],
"title": "DeviceKeyPublicResponse",
"type": "object"
},
"DeviceKeySignParams": {
"$schema": "http://json-schema.org/draft-07/schema#",
"description": "Sign an accepted structured payload with a controller-local device key.",
"properties": {
"keyId": {
"type": "string"
},
"payload": {
"$ref": "#/definitions/v2/DeviceKeySignPayload"
}
},
"required": [
"keyId",
"payload"
],
"title": "DeviceKeySignParams",
"type": "object"
},
"DeviceKeySignPayload": {
"description": "Structured payloads accepted by `device/key/sign`.",
"oneOf": [
{
"description": "Payload bound to one remote-control controller websocket `/client` connection challenge.",
"properties": {
"accountUserId": {
"type": "string"
},
"audience": {
"$ref": "#/definitions/v2/RemoteControlClientConnectionAudience"
},
"clientId": {
"type": "string"
},
"nonce": {
"type": "string"
},
"scopes": {
"description": "Must contain exactly `remote_control_controller_websocket`.",
"items": {
"type": "string"
},
"type": "array"
},
"sessionId": {
"description": "Backend-issued websocket session id that this proof authorizes.",
"type": "string"
},
"targetOrigin": {
"description": "Origin of the backend endpoint that issued the challenge and will verify this proof.",
"type": "string"
},
"targetPath": {
"description": "Websocket route path that this proof authorizes.",
"type": "string"
},
"tokenExpiresAt": {
"description": "Remote-control token expiration as Unix seconds.",
"format": "int64",
"type": "integer"
},
"tokenSha256Base64url": {
"description": "SHA-256 of the controller-scoped remote-control token, encoded as unpadded base64url.",
"type": "string"
},
"type": {
"enum": [
"remoteControlClientConnection"
],
"title": "RemoteControlClientConnectionDeviceKeySignPayloadType",
"type": "string"
}
},
"required": [
"accountUserId",
"audience",
"clientId",
"nonce",
"scopes",
"sessionId",
"targetOrigin",
"targetPath",
"tokenExpiresAt",
"tokenSha256Base64url",
"type"
],
"title": "RemoteControlClientConnectionDeviceKeySignPayload",
"type": "object"
},
{
"description": "Payload bound to a remote-control client `/client/enroll` ownership challenge.",
"properties": {
"accountUserId": {
"type": "string"
},
"audience": {
"$ref": "#/definitions/v2/RemoteControlClientEnrollmentAudience"
},
"challengeExpiresAt": {
"description": "Enrollment challenge expiration as Unix seconds.",
"format": "int64",
"type": "integer"
},
"challengeId": {
"description": "Backend-issued enrollment challenge id that this proof authorizes.",
"type": "string"
},
"clientId": {
"type": "string"
},
"deviceIdentitySha256Base64url": {
"description": "SHA-256 of the requested device identity operation, encoded as unpadded base64url.",
"type": "string"
},
"nonce": {
"type": "string"
},
"targetOrigin": {
"description": "Origin of the backend endpoint that issued the challenge and will verify this proof.",
"type": "string"
},
"targetPath": {
"description": "HTTP route path that this proof authorizes.",
"type": "string"
},
"type": {
"enum": [
"remoteControlClientEnrollment"
],
"title": "RemoteControlClientEnrollmentDeviceKeySignPayloadType",
"type": "string"
}
},
"required": [
"accountUserId",
"audience",
"challengeExpiresAt",
"challengeId",
"clientId",
"deviceIdentitySha256Base64url",
"nonce",
"targetOrigin",
"targetPath",
"type"
],
"title": "RemoteControlClientEnrollmentDeviceKeySignPayload",
"type": "object"
}
]
},
"DeviceKeySignResponse": {
"$schema": "http://json-schema.org/draft-07/schema#",
"description": "ASN.1 DER signature returned by `device/key/sign`.",
"properties": {
"algorithm": {
"$ref": "#/definitions/v2/DeviceKeyAlgorithm"
},
"signatureDerBase64": {
"description": "ECDSA signature DER encoded as base64.",
"type": "string"
},
"signedPayloadBase64": {
"description": "Exact bytes signed by the device key, encoded as base64. Verifiers must verify this byte string directly and must not reserialize `payload`.",
"type": "string"
}
},
"required": [
"algorithm",
"signatureDerBase64",
"signedPayloadBase64"
],
"title": "DeviceKeySignResponse",
"type": "object"
},
"DynamicToolCallOutputContentItem": {
"oneOf": [
{
@@ -11778,12 +12144,6 @@
"null"
]
},
"hooks": {
"items": {
"$ref": "#/definitions/v2/PluginHookSummary"
},
"type": "array"
},
"marketplaceName": {
"type": "string"
},
@@ -11815,7 +12175,6 @@
},
"required": [
"apps",
"hooks",
"marketplaceName",
"mcpServers",
"skills",
@@ -11823,21 +12182,6 @@
],
"type": "object"
},
"PluginHookSummary": {
"properties": {
"eventName": {
"$ref": "#/definitions/v2/HookEventName"
},
"key": {
"type": "string"
}
},
"required": [
"eventName",
"key"
],
"type": "object"
},
"PluginInstallParams": {
"$schema": "http://json-schema.org/draft-07/schema#",
"properties": {
@@ -12187,21 +12531,6 @@
},
"remotePluginId": {
"type": "string"
},
"shareTargets": {
"items": {
"$ref": "#/definitions/v2/PluginSharePrincipal"
},
"type": [
"array",
"null"
]
},
"shareUrl": {
"type": [
"string",
"null"
]
}
},
"required": [
@@ -13244,6 +13573,20 @@
"title": "ReasoningTextDeltaNotification",
"type": "object"
},
"RemoteControlClientConnectionAudience": {
"description": "Audience for a remote-control client connection device-key proof.",
"enum": [
"remote_control_client_websocket"
],
"type": "string"
},
"RemoteControlClientEnrollmentAudience": {
"description": "Audience for a remote-control client enrollment device-key proof.",
"enum": [
"remote_control_client_enrollment"
],
"type": "string"
},
"RemoteControlConnectionStatus": {
"enum": [
"disabled",
@@ -18389,4 +18732,4 @@
},
"title": "CodexAppServerProtocol",
"type": "object"
}
}


@@ -1665,6 +1665,78 @@
"title": "App/listRequest",
"type": "object"
},
{
"properties": {
"id": {
"$ref": "#/definitions/RequestId"
},
"method": {
"enum": [
"device/key/create"
],
"title": "Device/key/createRequestMethod",
"type": "string"
},
"params": {
"$ref": "#/definitions/DeviceKeyCreateParams"
}
},
"required": [
"id",
"method",
"params"
],
"title": "Device/key/createRequest",
"type": "object"
},
{
"properties": {
"id": {
"$ref": "#/definitions/RequestId"
},
"method": {
"enum": [
"device/key/public"
],
"title": "Device/key/publicRequestMethod",
"type": "string"
},
"params": {
"$ref": "#/definitions/DeviceKeyPublicParams"
}
},
"required": [
"id",
"method",
"params"
],
"title": "Device/key/publicRequest",
"type": "object"
},
{
"properties": {
"id": {
"$ref": "#/definitions/RequestId"
},
"method": {
"enum": [
"device/key/sign"
],
"title": "Device/key/signRequestMethod",
"type": "string"
},
"params": {
"$ref": "#/definitions/DeviceKeySignParams"
}
},
"required": [
"id",
"method",
"params"
],
"title": "Device/key/signRequest",
"type": "object"
},
{
"properties": {
"id": {
@@ -4331,6 +4403,300 @@
"title": "DeprecationNoticeNotification",
"type": "object"
},
"DeviceKeyAlgorithm": {
"description": "Device-key algorithm reported at enrollment and signing boundaries.",
"enum": [
"ecdsa_p256_sha256"
],
"type": "string"
},
"DeviceKeyCreateParams": {
"$schema": "http://json-schema.org/draft-07/schema#",
"description": "Create a controller-local device key with a random key id.",
"properties": {
"accountUserId": {
"type": "string"
},
"clientId": {
"type": "string"
},
"protectionPolicy": {
"anyOf": [
{
"$ref": "#/definitions/DeviceKeyProtectionPolicy"
},
{
"type": "null"
}
],
"description": "Defaults to `hardware_only` when omitted."
}
},
"required": [
"accountUserId",
"clientId"
],
"title": "DeviceKeyCreateParams",
"type": "object"
},
"DeviceKeyCreateResponse": {
"$schema": "http://json-schema.org/draft-07/schema#",
"description": "Device-key metadata and public key returned by create/public APIs.",
"properties": {
"algorithm": {
"$ref": "#/definitions/DeviceKeyAlgorithm"
},
"keyId": {
"type": "string"
},
"protectionClass": {
"$ref": "#/definitions/DeviceKeyProtectionClass"
},
"publicKeySpkiDerBase64": {
"description": "SubjectPublicKeyInfo DER encoded as base64.",
"type": "string"
}
},
"required": [
"algorithm",
"keyId",
"protectionClass",
"publicKeySpkiDerBase64"
],
"title": "DeviceKeyCreateResponse",
"type": "object"
},
"DeviceKeyProtectionClass": {
"description": "Platform protection class for a controller-local device key.",
"enum": [
"hardware_secure_enclave",
"hardware_tpm",
"os_protected_nonextractable"
],
"type": "string"
},
"DeviceKeyProtectionPolicy": {
"description": "Protection policy for creating or loading a controller-local device key.",
"enum": [
"hardware_only",
"allow_os_protected_nonextractable"
],
"type": "string"
},
"DeviceKeyPublicParams": {
"$schema": "http://json-schema.org/draft-07/schema#",
"description": "Fetch a controller-local device key public key by id.",
"properties": {
"keyId": {
"type": "string"
}
},
"required": [
"keyId"
],
"title": "DeviceKeyPublicParams",
"type": "object"
},
"DeviceKeyPublicResponse": {
"$schema": "http://json-schema.org/draft-07/schema#",
"description": "Device-key public metadata returned by `device/key/public`.",
"properties": {
"algorithm": {
"$ref": "#/definitions/DeviceKeyAlgorithm"
},
"keyId": {
"type": "string"
},
"protectionClass": {
"$ref": "#/definitions/DeviceKeyProtectionClass"
},
"publicKeySpkiDerBase64": {
"description": "SubjectPublicKeyInfo DER encoded as base64.",
"type": "string"
}
},
"required": [
"algorithm",
"keyId",
"protectionClass",
"publicKeySpkiDerBase64"
],
"title": "DeviceKeyPublicResponse",
"type": "object"
},
"DeviceKeySignParams": {
"$schema": "http://json-schema.org/draft-07/schema#",
"description": "Sign an accepted structured payload with a controller-local device key.",
"properties": {
"keyId": {
"type": "string"
},
"payload": {
"$ref": "#/definitions/DeviceKeySignPayload"
}
},
"required": [
"keyId",
"payload"
],
"title": "DeviceKeySignParams",
"type": "object"
},
"DeviceKeySignPayload": {
"description": "Structured payloads accepted by `device/key/sign`.",
"oneOf": [
{
"description": "Payload bound to one remote-control controller websocket `/client` connection challenge.",
"properties": {
"accountUserId": {
"type": "string"
},
"audience": {
"$ref": "#/definitions/RemoteControlClientConnectionAudience"
},
"clientId": {
"type": "string"
},
"nonce": {
"type": "string"
},
"scopes": {
"description": "Must contain exactly `remote_control_controller_websocket`.",
"items": {
"type": "string"
},
"type": "array"
},
"sessionId": {
"description": "Backend-issued websocket session id that this proof authorizes.",
"type": "string"
},
"targetOrigin": {
"description": "Origin of the backend endpoint that issued the challenge and will verify this proof.",
"type": "string"
},
"targetPath": {
"description": "Websocket route path that this proof authorizes.",
"type": "string"
},
"tokenExpiresAt": {
"description": "Remote-control token expiration as Unix seconds.",
"format": "int64",
"type": "integer"
},
"tokenSha256Base64url": {
"description": "SHA-256 of the controller-scoped remote-control token, encoded as unpadded base64url.",
"type": "string"
},
"type": {
"enum": [
"remoteControlClientConnection"
],
"title": "RemoteControlClientConnectionDeviceKeySignPayloadType",
"type": "string"
}
},
"required": [
"accountUserId",
"audience",
"clientId",
"nonce",
"scopes",
"sessionId",
"targetOrigin",
"targetPath",
"tokenExpiresAt",
"tokenSha256Base64url",
"type"
],
"title": "RemoteControlClientConnectionDeviceKeySignPayload",
"type": "object"
},
{
"description": "Payload bound to a remote-control client `/client/enroll` ownership challenge.",
"properties": {
"accountUserId": {
"type": "string"
},
"audience": {
"$ref": "#/definitions/RemoteControlClientEnrollmentAudience"
},
"challengeExpiresAt": {
"description": "Enrollment challenge expiration as Unix seconds.",
"format": "int64",
"type": "integer"
},
"challengeId": {
"description": "Backend-issued enrollment challenge id that this proof authorizes.",
"type": "string"
},
"clientId": {
"type": "string"
},
"deviceIdentitySha256Base64url": {
"description": "SHA-256 of the requested device identity operation, encoded as unpadded base64url.",
"type": "string"
},
"nonce": {
"type": "string"
},
"targetOrigin": {
"description": "Origin of the backend endpoint that issued the challenge and will verify this proof.",
"type": "string"
},
"targetPath": {
"description": "HTTP route path that this proof authorizes.",
"type": "string"
},
"type": {
"enum": [
"remoteControlClientEnrollment"
],
"title": "RemoteControlClientEnrollmentDeviceKeySignPayloadType",
"type": "string"
}
},
"required": [
"accountUserId",
"audience",
"challengeExpiresAt",
"challengeId",
"clientId",
"deviceIdentitySha256Base64url",
"nonce",
"targetOrigin",
"targetPath",
"type"
],
"title": "RemoteControlClientEnrollmentDeviceKeySignPayload",
"type": "object"
}
]
},
"DeviceKeySignResponse": {
"$schema": "http://json-schema.org/draft-07/schema#",
"description": "ASN.1 DER signature returned by `device/key/sign`.",
"properties": {
"algorithm": {
"$ref": "#/definitions/DeviceKeyAlgorithm"
},
"signatureDerBase64": {
"description": "ECDSA signature DER encoded as base64.",
"type": "string"
},
"signedPayloadBase64": {
"description": "Exact bytes signed by the device key, encoded as base64. Verifiers must verify this byte string directly and must not reserialize `payload`.",
"type": "string"
}
},
"required": [
"algorithm",
"signatureDerBase64",
"signedPayloadBase64"
],
"title": "DeviceKeySignResponse",
"type": "object"
},
"DynamicToolCallOutputContentItem": {
"oneOf": [
{
@@ -8389,12 +8755,6 @@
"null"
]
},
"hooks": {
"items": {
"$ref": "#/definitions/PluginHookSummary"
},
"type": "array"
},
"marketplaceName": {
"type": "string"
},
@@ -8426,7 +8786,6 @@
},
"required": [
"apps",
"hooks",
"marketplaceName",
"mcpServers",
"skills",
@@ -8434,21 +8793,6 @@
],
"type": "object"
},
"PluginHookSummary": {
"properties": {
"eventName": {
"$ref": "#/definitions/HookEventName"
},
"key": {
"type": "string"
}
},
"required": [
"eventName",
"key"
],
"type": "object"
},
"PluginInstallParams": {
"$schema": "http://json-schema.org/draft-07/schema#",
"properties": {
@@ -8798,21 +9142,6 @@
},
"remotePluginId": {
"type": "string"
},
"shareTargets": {
"items": {
"$ref": "#/definitions/PluginSharePrincipal"
},
"type": [
"array",
"null"
]
},
"shareUrl": {
"type": [
"string",
"null"
]
}
},
"required": [
@@ -9855,6 +10184,20 @@
"title": "ReasoningTextDeltaNotification",
"type": "object"
},
"RemoteControlClientConnectionAudience": {
"description": "Audience for a remote-control client connection device-key proof.",
"enum": [
"remote_control_client_websocket"
],
"type": "string"
},
"RemoteControlClientEnrollmentAudience": {
"description": "Audience for a remote-control client enrollment device-key proof.",
"enum": [
"remote_control_client_enrollment"
],
"type": "string"
},
"RemoteControlConnectionStatus": {
"enum": [
"disabled",
@@ -16274,4 +16617,4 @@
},
"title": "CodexAppServerProtocolV2",
"type": "object"
}
}
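As a concrete reading of the enrollment variant of `DeviceKeySignPayload`, the sketch below builds a payload carrying every field the schema marks as required (all values are placeholders, not real challenge data):

```typescript
// Illustrative enrollment payload for device/key/sign. Field values are
// placeholders; the shape mirrors RemoteControlClientEnrollmentDeviceKeySignPayload.
const payload = {
  type: "remoteControlClientEnrollment" as const,
  audience: "remote_control_client_enrollment" as const,
  accountUserId: "u1",
  clientId: "c1",
  challengeId: "ch_1",              // backend-issued enrollment challenge id
  challengeExpiresAt: 1767225600,   // Unix seconds
  deviceIdentitySha256Base64url: "AAAA", // placeholder digest
  nonce: "n1",
  targetOrigin: "https://backend.example", // origin that issued the challenge
  targetPath: "/client/enroll",            // HTTP route this proof authorizes
};

// The required-field list straight from the schema.
const required = [
  "accountUserId", "audience", "challengeExpiresAt", "challengeId", "clientId",
  "deviceIdentitySha256Base64url", "nonce", "targetOrigin", "targetPath", "type",
];
```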


@@ -0,0 +1,39 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"definitions": {
"DeviceKeyProtectionPolicy": {
"description": "Protection policy for creating or loading a controller-local device key.",
"enum": [
"hardware_only",
"allow_os_protected_nonextractable"
],
"type": "string"
}
},
"description": "Create a controller-local device key with a random key id.",
"properties": {
"accountUserId": {
"type": "string"
},
"clientId": {
"type": "string"
},
"protectionPolicy": {
"anyOf": [
{
"$ref": "#/definitions/DeviceKeyProtectionPolicy"
},
{
"type": "null"
}
],
"description": "Defaults to `hardware_only` when omitted."
}
},
"required": [
"accountUserId",
"clientId"
],
"title": "DeviceKeyCreateParams",
"type": "object"
}
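A small TypeScript sketch of the documented default above; the helper name is illustrative, not part of the API:

```typescript
type DeviceKeyProtectionPolicy = "hardware_only" | "allow_os_protected_nonextractable";

interface DeviceKeyCreateParams {
  accountUserId: string;
  clientId: string;
  protectionPolicy?: DeviceKeyProtectionPolicy | null;
}

// Per the schema description, a missing or null policy means "hardware_only".
function effectivePolicy(params: DeviceKeyCreateParams): DeviceKeyProtectionPolicy {
  return params.protectionPolicy ?? "hardware_only";
}

const params: DeviceKeyCreateParams = { accountUserId: "u1", clientId: "c1" };
```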


@@ -0,0 +1,45 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"definitions": {
"DeviceKeyAlgorithm": {
"description": "Device-key algorithm reported at enrollment and signing boundaries.",
"enum": [
"ecdsa_p256_sha256"
],
"type": "string"
},
"DeviceKeyProtectionClass": {
"description": "Platform protection class for a controller-local device key.",
"enum": [
"hardware_secure_enclave",
"hardware_tpm",
"os_protected_nonextractable"
],
"type": "string"
}
},
"description": "Device-key metadata and public key returned by create/public APIs.",
"properties": {
"algorithm": {
"$ref": "#/definitions/DeviceKeyAlgorithm"
},
"keyId": {
"type": "string"
},
"protectionClass": {
"$ref": "#/definitions/DeviceKeyProtectionClass"
},
"publicKeySpkiDerBase64": {
"description": "SubjectPublicKeyInfo DER encoded as base64.",
"type": "string"
}
},
"required": [
"algorithm",
"keyId",
"protectionClass",
"publicKeySpkiDerBase64"
],
"title": "DeviceKeyCreateResponse",
"type": "object"
}


@@ -0,0 +1,14 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"description": "Fetch a controller-local device key public key by id.",
"properties": {
"keyId": {
"type": "string"
}
},
"required": [
"keyId"
],
"title": "DeviceKeyPublicParams",
"type": "object"
}


@@ -0,0 +1,45 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"definitions": {
"DeviceKeyAlgorithm": {
"description": "Device-key algorithm reported at enrollment and signing boundaries.",
"enum": [
"ecdsa_p256_sha256"
],
"type": "string"
},
"DeviceKeyProtectionClass": {
"description": "Platform protection class for a controller-local device key.",
"enum": [
"hardware_secure_enclave",
"hardware_tpm",
"os_protected_nonextractable"
],
"type": "string"
}
},
"description": "Device-key public metadata returned by `device/key/public`.",
"properties": {
"algorithm": {
"$ref": "#/definitions/DeviceKeyAlgorithm"
},
"keyId": {
"type": "string"
},
"protectionClass": {
"$ref": "#/definitions/DeviceKeyProtectionClass"
},
"publicKeySpkiDerBase64": {
"description": "SubjectPublicKeyInfo DER encoded as base64.",
"type": "string"
}
},
"required": [
"algorithm",
"keyId",
"protectionClass",
"publicKeySpkiDerBase64"
],
"title": "DeviceKeyPublicResponse",
"type": "object"
}


@@ -0,0 +1,165 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"definitions": {
"DeviceKeySignPayload": {
"description": "Structured payloads accepted by `device/key/sign`.",
"oneOf": [
{
"description": "Payload bound to one remote-control controller websocket `/client` connection challenge.",
"properties": {
"accountUserId": {
"type": "string"
},
"audience": {
"$ref": "#/definitions/RemoteControlClientConnectionAudience"
},
"clientId": {
"type": "string"
},
"nonce": {
"type": "string"
},
"scopes": {
"description": "Must contain exactly `remote_control_controller_websocket`.",
"items": {
"type": "string"
},
"type": "array"
},
"sessionId": {
"description": "Backend-issued websocket session id that this proof authorizes.",
"type": "string"
},
"targetOrigin": {
"description": "Origin of the backend endpoint that issued the challenge and will verify this proof.",
"type": "string"
},
"targetPath": {
"description": "Websocket route path that this proof authorizes.",
"type": "string"
},
"tokenExpiresAt": {
"description": "Remote-control token expiration as Unix seconds.",
"format": "int64",
"type": "integer"
},
"tokenSha256Base64url": {
"description": "SHA-256 of the controller-scoped remote-control token, encoded as unpadded base64url.",
"type": "string"
},
"type": {
"enum": [
"remoteControlClientConnection"
],
"title": "RemoteControlClientConnectionDeviceKeySignPayloadType",
"type": "string"
}
},
"required": [
"accountUserId",
"audience",
"clientId",
"nonce",
"scopes",
"sessionId",
"targetOrigin",
"targetPath",
"tokenExpiresAt",
"tokenSha256Base64url",
"type"
],
"title": "RemoteControlClientConnectionDeviceKeySignPayload",
"type": "object"
},
{
"description": "Payload bound to a remote-control client `/client/enroll` ownership challenge.",
"properties": {
"accountUserId": {
"type": "string"
},
"audience": {
"$ref": "#/definitions/RemoteControlClientEnrollmentAudience"
},
"challengeExpiresAt": {
"description": "Enrollment challenge expiration as Unix seconds.",
"format": "int64",
"type": "integer"
},
"challengeId": {
"description": "Backend-issued enrollment challenge id that this proof authorizes.",
"type": "string"
},
"clientId": {
"type": "string"
},
"deviceIdentitySha256Base64url": {
"description": "SHA-256 of the requested device identity operation, encoded as unpadded base64url.",
"type": "string"
},
"nonce": {
"type": "string"
},
"targetOrigin": {
"description": "Origin of the backend endpoint that issued the challenge and will verify this proof.",
"type": "string"
},
"targetPath": {
"description": "HTTP route path that this proof authorizes.",
"type": "string"
},
"type": {
"enum": [
"remoteControlClientEnrollment"
],
"title": "RemoteControlClientEnrollmentDeviceKeySignPayloadType",
"type": "string"
}
},
"required": [
"accountUserId",
"audience",
"challengeExpiresAt",
"challengeId",
"clientId",
"deviceIdentitySha256Base64url",
"nonce",
"targetOrigin",
"targetPath",
"type"
],
"title": "RemoteControlClientEnrollmentDeviceKeySignPayload",
"type": "object"
}
]
},
"RemoteControlClientConnectionAudience": {
"description": "Audience for a remote-control client connection device-key proof.",
"enum": [
"remote_control_client_websocket"
],
"type": "string"
},
"RemoteControlClientEnrollmentAudience": {
"description": "Audience for a remote-control client enrollment device-key proof.",
"enum": [
"remote_control_client_enrollment"
],
"type": "string"
}
},
"description": "Sign an accepted structured payload with a controller-local device key.",
"properties": {
"keyId": {
"type": "string"
},
"payload": {
"$ref": "#/definitions/DeviceKeySignPayload"
}
},
"required": [
"keyId",
"payload"
],
"title": "DeviceKeySignParams",
"type": "object"
}
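The connection payload's `tokenSha256Base64url` field is described as a SHA-256 digest encoded as unpadded base64url. A sketch of computing it, assuming Node's `node:crypto` (Node's `"base64url"` digest encoding is already unpadded):

```typescript
import { createHash } from "node:crypto";

// Sketch: SHA-256 of the controller-scoped remote-control token,
// encoded as unpadded base64url, matching the schema description.
function tokenSha256Base64url(token: string): string {
  return createHash("sha256").update(token, "utf8").digest("base64url");
}
```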


@@ -0,0 +1,33 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"definitions": {
"DeviceKeyAlgorithm": {
"description": "Device-key algorithm reported at enrollment and signing boundaries.",
"enum": [
"ecdsa_p256_sha256"
],
"type": "string"
}
},
"description": "ASN.1 DER signature returned by `device/key/sign`.",
"properties": {
"algorithm": {
"$ref": "#/definitions/DeviceKeyAlgorithm"
},
"signatureDerBase64": {
"description": "ECDSA signature DER encoded as base64.",
"type": "string"
},
"signedPayloadBase64": {
"description": "Exact bytes signed by the device key, encoded as base64. Verifiers must verify this byte string directly and must not reserialize `payload`.",
"type": "string"
}
},
"required": [
"algorithm",
"signatureDerBase64",
"signedPayloadBase64"
],
"title": "DeviceKeySignResponse",
"type": "object"
}
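The `signedPayloadBase64` note above is the load-bearing detail: verifiers check the returned byte string directly rather than re-serializing the structured payload. A self-contained sketch of that round trip using Node's `node:crypto` (the local keypair stands in for a controller device key; ASN.1 DER is Node's default ECDSA signature format):

```typescript
import { generateKeyPairSync, createSign, createVerify } from "node:crypto";

// Stand-in for a controller-local device key (prime256v1 is OpenSSL's P-256).
const { publicKey, privateKey } = generateKeyPairSync("ec", { namedCurve: "prime256v1" });

// The exact bytes that get signed; never re-serialized on the verifier side.
const signedPayload = Buffer.from(
  JSON.stringify({ type: "remoteControlClientEnrollment", nonce: "n1" }),
);

// Shaped like a DeviceKeySignResponse (illustrative values).
const response = {
  algorithm: "ecdsa_p256_sha256",
  signatureDerBase64: createSign("SHA256").update(signedPayload).sign(privateKey).toString("base64"),
  signedPayloadBase64: signedPayload.toString("base64"),
};

// Verifier side: decode the byte string and verify it as-is.
const ok = createVerify("SHA256")
  .update(Buffer.from(response.signedPayloadBase64, "base64"))
  .verify(publicKey, Buffer.from(response.signatureDerBase64, "base64"));
```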


@@ -248,21 +248,6 @@
},
"remotePluginId": {
"type": "string"
},
"shareTargets": {
"items": {
"$ref": "#/definitions/PluginSharePrincipal"
},
"type": [
"array",
"null"
]
},
"shareUrl": {
"type": [
"string",
"null"
]
}
},
"required": [
@@ -270,33 +255,6 @@
],
"type": "object"
},
"PluginSharePrincipal": {
"properties": {
"name": {
"type": "string"
},
"principalId": {
"type": "string"
},
"principalType": {
"$ref": "#/definitions/PluginSharePrincipalType"
}
},
"required": [
"name",
"principalId",
"principalType"
],
"type": "object"
},
"PluginSharePrincipalType": {
"enum": [
"user",
"group",
"workspace"
],
"type": "string"
},
"PluginSource": {
"oneOf": [
{


@@ -37,19 +37,6 @@
],
"type": "object"
},
"HookEventName": {
"enum": [
"preToolUse",
"permissionRequest",
"postToolUse",
"preCompact",
"postCompact",
"sessionStart",
"userPromptSubmit",
"stop"
],
"type": "string"
},
"PluginAuthPolicy": {
"enum": [
"ON_INSTALL",
@@ -88,12 +75,6 @@
"null"
]
},
"hooks": {
"items": {
"$ref": "#/definitions/PluginHookSummary"
},
"type": "array"
},
"marketplaceName": {
"type": "string"
},
@@ -125,7 +106,6 @@
},
"required": [
"apps",
"hooks",
"marketplaceName",
"mcpServers",
"skills",
@@ -133,21 +113,6 @@
],
"type": "object"
},
"PluginHookSummary": {
"properties": {
"eventName": {
"$ref": "#/definitions/HookEventName"
},
"key": {
"type": "string"
}
},
"required": [
"eventName",
"key"
],
"type": "object"
},
"PluginInstallPolicy": {
"enum": [
"NOT_AVAILABLE",
@@ -302,21 +267,6 @@
},
"remotePluginId": {
"type": "string"
},
"shareTargets": {
"items": {
"$ref": "#/definitions/PluginSharePrincipal"
},
"type": [
"array",
"null"
]
},
"shareUrl": {
"type": [
"string",
"null"
]
}
},
"required": [
@@ -324,33 +274,6 @@
],
"type": "object"
},
"PluginSharePrincipal": {
"properties": {
"name": {
"type": "string"
},
"principalId": {
"type": "string"
},
"principalType": {
"$ref": "#/definitions/PluginSharePrincipalType"
}
},
"required": [
"name",
"principalId",
"principalType"
],
"type": "object"
},
"PluginSharePrincipalType": {
"enum": [
"user",
"group",
"workspace"
],
"type": "string"
},
"PluginSource": {
"oneOf": [
{


@@ -183,21 +183,6 @@
},
"remotePluginId": {
"type": "string"
},
"shareTargets": {
"items": {
"$ref": "#/definitions/PluginSharePrincipal"
},
"type": [
"array",
"null"
]
},
"shareUrl": {
"type": [
"string",
"null"
]
}
},
"required": [
@@ -230,33 +215,6 @@
],
"type": "object"
},
"PluginSharePrincipal": {
"properties": {
"name": {
"type": "string"
},
"principalId": {
"type": "string"
},
"principalType": {
"$ref": "#/definitions/PluginSharePrincipalType"
}
},
"required": [
"name",
"principalId",
"principalType"
],
"type": "object"
},
"PluginSharePrincipalType": {
"enum": [
"user",
"group",
"workspace"
],
"type": "string"
},
"PluginSource": {
"oneOf": [
{

File diff suppressed because one or more lines are too long


@@ -0,0 +1,8 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
/**
* Device-key algorithm reported at enrollment and signing boundaries.
*/
export type DeviceKeyAlgorithm = "ecdsa_p256_sha256";


@@ -0,0 +1,13 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
import type { DeviceKeyProtectionPolicy } from "./DeviceKeyProtectionPolicy";
/**
* Create a controller-local device key with a random key id.
*/
export type DeviceKeyCreateParams = {
/**
* Defaults to `hardware_only` when omitted.
*/
protectionPolicy?: DeviceKeyProtectionPolicy | null, accountUserId: string, clientId: string, };


@@ -0,0 +1,14 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
import type { DeviceKeyAlgorithm } from "./DeviceKeyAlgorithm";
import type { DeviceKeyProtectionClass } from "./DeviceKeyProtectionClass";
/**
* Device-key metadata and public key returned by create/public APIs.
*/
export type DeviceKeyCreateResponse = { keyId: string,
/**
* SubjectPublicKeyInfo DER encoded as base64.
*/
publicKeySpkiDerBase64: string, algorithm: DeviceKeyAlgorithm, protectionClass: DeviceKeyProtectionClass, };


@@ -0,0 +1,8 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
/**
* Platform protection class for a controller-local device key.
*/
export type DeviceKeyProtectionClass = "hardware_secure_enclave" | "hardware_tpm" | "os_protected_nonextractable";


@@ -0,0 +1,8 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
/**
* Protection policy for creating or loading a controller-local device key.
*/
export type DeviceKeyProtectionPolicy = "hardware_only" | "allow_os_protected_nonextractable";
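One plausible reading of how the two enums relate (a hypothetical helper, not part of the generated bindings): a key's protection class satisfies `hardware_only` only when it is hardware-backed, while `allow_os_protected_nonextractable` also accepts OS-protected keys.

```typescript
type DeviceKeyProtectionPolicy = "hardware_only" | "allow_os_protected_nonextractable";
type DeviceKeyProtectionClass =
  | "hardware_secure_enclave"
  | "hardware_tpm"
  | "os_protected_nonextractable";

// Hypothetical check: does a key's protection class satisfy a requested policy?
function classSatisfiesPolicy(
  cls: DeviceKeyProtectionClass,
  policy: DeviceKeyProtectionPolicy,
): boolean {
  if (cls === "os_protected_nonextractable") {
    // OS-protected keys are acceptable only when the policy allows them.
    return policy === "allow_os_protected_nonextractable";
  }
  return true; // hardware-backed classes satisfy either policy
}
```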


@@ -1,6 +1,8 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
import type { HookEventName } from "./HookEventName";
export type PluginHookSummary = { key: string, eventName: HookEventName, };
/**
* Fetch a controller-local device key public key by id.
*/
export type DeviceKeyPublicParams = { keyId: string, };


@@ -0,0 +1,14 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
import type { DeviceKeyAlgorithm } from "./DeviceKeyAlgorithm";
import type { DeviceKeyProtectionClass } from "./DeviceKeyProtectionClass";
/**
* Device-key public metadata returned by `device/key/public`.
*/
export type DeviceKeyPublicResponse = { keyId: string,
/**
* SubjectPublicKeyInfo DER encoded as base64.
*/
publicKeySpkiDerBase64: string, algorithm: DeviceKeyAlgorithm, protectionClass: DeviceKeyProtectionClass, };

@@ -0,0 +1,9 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
import type { DeviceKeySignPayload } from "./DeviceKeySignPayload";
/**
* Sign an accepted structured payload with a controller-local device key.
*/
export type DeviceKeySignParams = { keyId: string, payload: DeviceKeySignPayload, };

@@ -0,0 +1,54 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
import type { RemoteControlClientConnectionAudience } from "./RemoteControlClientConnectionAudience";
import type { RemoteControlClientEnrollmentAudience } from "./RemoteControlClientEnrollmentAudience";
/**
* Structured payloads accepted by `device/key/sign`.
*/
export type DeviceKeySignPayload = { "type": "remoteControlClientConnection", nonce: string, audience: RemoteControlClientConnectionAudience,
/**
* Backend-issued websocket session id that this proof authorizes.
*/
sessionId: string,
/**
* Origin of the backend endpoint that issued the challenge and will verify this proof.
*/
targetOrigin: string,
/**
* Websocket route path that this proof authorizes.
*/
targetPath: string, accountUserId: string, clientId: string,
/**
* Remote-control token expiration as Unix seconds.
*/
tokenExpiresAt: number,
/**
* SHA-256 of the controller-scoped remote-control token, encoded as unpadded base64url.
*/
tokenSha256Base64url: string,
/**
* Must contain exactly `remote_control_controller_websocket`.
*/
scopes: Array<string>, } | { "type": "remoteControlClientEnrollment", nonce: string, audience: RemoteControlClientEnrollmentAudience,
/**
* Backend-issued enrollment challenge id that this proof authorizes.
*/
challengeId: string,
/**
* Origin of the backend endpoint that issued the challenge and will verify this proof.
*/
targetOrigin: string,
/**
* HTTP route path that this proof authorizes.
*/
targetPath: string, accountUserId: string, clientId: string,
/**
* SHA-256 of the requested device identity operation, encoded as unpadded base64url.
*/
deviceIdentitySha256Base64url: string,
/**
* Enrollment challenge expiration as Unix seconds.
*/
challengeExpiresAt: number, };
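Both payload variants carry a SHA-256 digest "encoded as unpadded base64url" (`tokenSha256Base64url`, `deviceIdentitySha256Base64url`). A minimal sketch of producing that encoding with Node's built-in crypto — Node's `base64url` digest encoding is already unpadded, so no trimming is needed. The `sha256Base64url` helper name is ours, not part of this API:

```typescript
import { createHash } from "node:crypto";

// Compute a SHA-256 digest encoded as unpadded base64url, the encoding the
// tokenSha256Base64url and deviceIdentitySha256Base64url fields expect.
function sha256Base64url(input: string | Buffer): string {
  // Node's "base64url" encoding omits padding, matching the field docs.
  return createHash("sha256").update(input).digest("base64url");
}

// SHA-256 of the empty string; this matches the placeholder digest used in
// the round-trip tests elsewhere in this change.
console.log(sha256Base64url(""));
// → 47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU
```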

@@ -0,0 +1,18 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
import type { DeviceKeyAlgorithm } from "./DeviceKeyAlgorithm";
/**
* ASN.1 DER signature returned by `device/key/sign`.
*/
export type DeviceKeySignResponse = {
/**
* ECDSA signature DER encoded as base64.
*/
signatureDerBase64: string,
/**
* Exact bytes signed by the device key, encoded as base64. Verifiers must verify this byte
* string directly and must not reserialize `payload`.
*/
signedPayloadBase64: string, algorithm: DeviceKeyAlgorithm, };

@@ -3,8 +3,7 @@
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
import type { AbsolutePathBuf } from "../AbsolutePathBuf";
import type { AppSummary } from "./AppSummary";
import type { PluginHookSummary } from "./PluginHookSummary";
import type { PluginSummary } from "./PluginSummary";
import type { SkillSummary } from "./SkillSummary";
export type PluginDetail = { marketplaceName: string, marketplacePath: AbsolutePathBuf | null, summary: PluginSummary, description: string | null, skills: Array<SkillSummary>, hooks: Array<PluginHookSummary>, apps: Array<AppSummary>, mcpServers: Array<string>, };
export type PluginDetail = { marketplaceName: string, marketplacePath: AbsolutePathBuf | null, summary: PluginSummary, description: string | null, skills: Array<SkillSummary>, apps: Array<AppSummary>, mcpServers: Array<string>, };

@@ -1,6 +1,5 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
import type { PluginSharePrincipal } from "./PluginSharePrincipal";
export type PluginShareContext = { remotePluginId: string, shareUrl: string | null, creatorAccountUserId: string | null, creatorName: string | null, shareTargets: Array<PluginSharePrincipal> | null, };
export type PluginShareContext = { remotePluginId: string, creatorAccountUserId: string | null, creatorName: string | null, };

@@ -0,0 +1,8 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
/**
* Audience for a remote-control client connection device-key proof.
*/
export type RemoteControlClientConnectionAudience = "remote_control_client_websocket";

@@ -0,0 +1,8 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
/**
* Audience for a remote-control client enrollment device-key proof.
*/
export type RemoteControlClientEnrollmentAudience = "remote_control_client_enrollment";

@@ -79,6 +79,16 @@ export type { ConfiguredHookMatcherGroup } from "./ConfiguredHookMatcherGroup";
export type { ContextCompactedNotification } from "./ContextCompactedNotification";
export type { CreditsSnapshot } from "./CreditsSnapshot";
export type { DeprecationNoticeNotification } from "./DeprecationNoticeNotification";
export type { DeviceKeyAlgorithm } from "./DeviceKeyAlgorithm";
export type { DeviceKeyCreateParams } from "./DeviceKeyCreateParams";
export type { DeviceKeyCreateResponse } from "./DeviceKeyCreateResponse";
export type { DeviceKeyProtectionClass } from "./DeviceKeyProtectionClass";
export type { DeviceKeyProtectionPolicy } from "./DeviceKeyProtectionPolicy";
export type { DeviceKeyPublicParams } from "./DeviceKeyPublicParams";
export type { DeviceKeyPublicResponse } from "./DeviceKeyPublicResponse";
export type { DeviceKeySignParams } from "./DeviceKeySignParams";
export type { DeviceKeySignPayload } from "./DeviceKeySignPayload";
export type { DeviceKeySignResponse } from "./DeviceKeySignResponse";
export type { DynamicToolCallOutputContentItem } from "./DynamicToolCallOutputContentItem";
export type { DynamicToolCallParams } from "./DynamicToolCallParams";
export type { DynamicToolCallResponse } from "./DynamicToolCallResponse";
@@ -264,7 +274,6 @@ export type { PlanDeltaNotification } from "./PlanDeltaNotification";
export type { PluginAuthPolicy } from "./PluginAuthPolicy";
export type { PluginAvailability } from "./PluginAvailability";
export type { PluginDetail } from "./PluginDetail";
export type { PluginHookSummary } from "./PluginHookSummary";
export type { PluginInstallParams } from "./PluginInstallParams";
export type { PluginInstallPolicy } from "./PluginInstallPolicy";
export type { PluginInstallResponse } from "./PluginInstallResponse";
@@ -309,6 +318,8 @@ export type { ReasoningEffortOption } from "./ReasoningEffortOption";
export type { ReasoningSummaryPartAddedNotification } from "./ReasoningSummaryPartAddedNotification";
export type { ReasoningSummaryTextDeltaNotification } from "./ReasoningSummaryTextDeltaNotification";
export type { ReasoningTextDeltaNotification } from "./ReasoningTextDeltaNotification";
export type { RemoteControlClientConnectionAudience } from "./RemoteControlClientConnectionAudience";
export type { RemoteControlClientEnrollmentAudience } from "./RemoteControlClientEnrollmentAudience";
export type { RemoteControlConnectionStatus } from "./RemoteControlConnectionStatus";
export type { RemoteControlStatusChangedNotification } from "./RemoteControlStatusChangedNotification";
export type { RequestPermissionProfile } from "./RequestPermissionProfile";

@@ -77,7 +77,6 @@ macro_rules! experimental_type_entry {
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum ClientRequestSerializationScope {
Global(&'static str),
GlobalSharedRead(&'static str),
Thread { thread_id: String },
ThreadPath { path: PathBuf },
CommandExecProcess { process_id: String },
@@ -94,9 +93,6 @@ macro_rules! serialization_scope_expr {
($actual_params:ident, global($key:literal)) => {
Some(ClientRequestSerializationScope::Global($key))
};
($actual_params:ident, global_shared_read($key:literal)) => {
Some(ClientRequestSerializationScope::GlobalSharedRead($key))
};
($actual_params:ident, thread_id($params:ident . $field:ident)) => {
Some(ClientRequestSerializationScope::Thread {
thread_id: $actual_params.$field.clone(),
@@ -581,13 +577,6 @@ client_request_definitions! {
serialization: None,
response: v2::ThreadTurnsListResponse,
},
#[experimental("thread/turns/items/list")]
ThreadTurnsItemsList => "thread/turns/items/list" {
params: v2::ThreadTurnsItemsListParams,
// Explicitly concurrent: this primarily reads append-only rollout storage.
serialization: None,
response: v2::ThreadTurnsItemsListResponse,
},
/// Append raw Responses API items to the thread history without starting a user turn.
ThreadInjectItems => "thread/inject_items" {
params: v2::ThreadInjectItemsParams,
@@ -596,7 +585,7 @@ client_request_definitions! {
},
SkillsList => "skills/list" {
params: v2::SkillsListParams,
serialization: global_shared_read("config"),
serialization: global("config"),
response: v2::SkillsListResponse,
},
HooksList => "hooks/list" {
@@ -621,7 +610,7 @@ client_request_definitions! {
},
PluginList => "plugin/list" {
params: v2::PluginListParams,
serialization: global_shared_read("config"),
serialization: global("config"),
response: v2::PluginListResponse,
},
PluginRead => "plugin/read" {
@@ -659,6 +648,21 @@ client_request_definitions! {
serialization: None,
response: v2::AppsListResponse,
},
DeviceKeyCreate => "device/key/create" {
params: v2::DeviceKeyCreateParams,
serialization: global("device-key"),
response: v2::DeviceKeyCreateResponse,
},
DeviceKeyPublic => "device/key/public" {
params: v2::DeviceKeyPublicParams,
serialization: global("device-key"),
response: v2::DeviceKeyPublicResponse,
},
DeviceKeySign => "device/key/sign" {
params: v2::DeviceKeySignParams,
serialization: global("device-key"),
response: v2::DeviceKeySignResponse,
},
// File system requests are intentionally concurrent. Desktop already treats local
// file system operations as concurrent, and app-server remote fs mirrors that model.
FsReadFile => "fs/readFile" {
@@ -943,7 +947,7 @@ client_request_definitions! {
ConfigRead => "config/read" {
params: v2::ConfigReadParams,
serialization: global_shared_read("config"),
serialization: global("config"),
response: v2::ConfigReadResponse,
},
ExternalAgentConfigDetect => "externalAgentConfig/detect" {
@@ -1651,31 +1655,6 @@ mod tests {
Some(ClientRequestSerializationScope::Global("config"))
);
let skills_list = ClientRequest::SkillsList {
request_id: request_id(),
params: v2::SkillsListParams {
cwds: Vec::new(),
force_reload: false,
per_cwd_extra_user_roots: None,
},
};
assert_eq!(
skills_list.serialization_scope(),
Some(ClientRequestSerializationScope::GlobalSharedRead("config"))
);
let plugin_list = ClientRequest::PluginList {
request_id: request_id(),
params: v2::PluginListParams {
cwds: None,
marketplace_kinds: None,
},
};
assert_eq!(
plugin_list.serialization_scope(),
Some(ClientRequestSerializationScope::GlobalSharedRead("config"))
);
let plugin_uninstall = ClientRequest::PluginUninstall {
request_id: request_id(),
params: v2::PluginUninstallParams {
@@ -1726,7 +1705,7 @@ mod tests {
};
assert_eq!(
config_read.serialization_scope(),
Some(ClientRequestSerializationScope::GlobalSharedRead("config"))
Some(ClientRequestSerializationScope::Global("config"))
);
let account_read = ClientRequest::GetAccount {
@@ -1781,6 +1760,19 @@ mod tests {
Some(ClientRequestSerializationScope::Global("config"))
);
let device_key_create = ClientRequest::DeviceKeyCreate {
request_id: request_id(),
params: v2::DeviceKeyCreateParams {
protection_policy: None,
account_user_id: "user".to_string(),
client_id: "client".to_string(),
},
};
assert_eq!(
device_key_create.serialization_scope(),
Some(ClientRequestSerializationScope::Global("device-key"))
);
let add_credits_nudge = ClientRequest::SendAddCreditsNudgeEmail {
request_id: request_id(),
params: v2::SendAddCreditsNudgeEmailParams {
@@ -1850,23 +1842,10 @@ mod tests {
cursor: None,
limit: None,
sort_direction: None,
items_view: None,
},
};
assert_eq!(thread_turns_list.serialization_scope(), None);
let thread_turns_items_list = ClientRequest::ThreadTurnsItemsList {
request_id: request_id(),
params: v2::ThreadTurnsItemsListParams {
thread_id: "thread-1".to_string(),
turn_id: "turn-1".to_string(),
cursor: None,
limit: None,
sort_direction: None,
},
};
assert_eq!(thread_turns_items_list.serialization_scope(), None);
let mcp_resource_read = ClientRequest::McpResourceRead {
request_id: request_id(),
params: v2::McpResourceReadParams {

@@ -0,0 +1,181 @@
use schemars::JsonSchema;
use serde::Deserialize;
use serde::Serialize;
use ts_rs::TS;
/// Device-key algorithm reported at enrollment and signing boundaries.
#[derive(Serialize, Deserialize, Debug, Clone, Copy, PartialEq, Eq, JsonSchema, TS)]
#[serde(rename_all = "snake_case")]
#[ts(rename_all = "snake_case", export_to = "v2/")]
pub enum DeviceKeyAlgorithm {
EcdsaP256Sha256,
}
/// Platform protection class for a controller-local device key.
#[derive(Serialize, Deserialize, Debug, Clone, Copy, PartialEq, Eq, JsonSchema, TS)]
#[serde(rename_all = "snake_case")]
#[ts(rename_all = "snake_case", export_to = "v2/")]
pub enum DeviceKeyProtectionClass {
HardwareSecureEnclave,
HardwareTpm,
OsProtectedNonextractable,
}
/// Protection policy for creating or loading a controller-local device key.
#[derive(Serialize, Deserialize, Debug, Clone, Copy, PartialEq, Eq, JsonSchema, TS)]
#[serde(rename_all = "snake_case")]
#[ts(rename_all = "snake_case", export_to = "v2/")]
pub enum DeviceKeyProtectionPolicy {
HardwareOnly,
AllowOsProtectedNonextractable,
}
/// Create a controller-local device key with a random key id.
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct DeviceKeyCreateParams {
/// Defaults to `hardware_only` when omitted.
#[ts(optional = nullable)]
pub protection_policy: Option<DeviceKeyProtectionPolicy>,
pub account_user_id: String,
pub client_id: String,
}
/// Device-key metadata and public key returned by create/public APIs.
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct DeviceKeyCreateResponse {
pub key_id: String,
/// SubjectPublicKeyInfo DER encoded as base64.
pub public_key_spki_der_base64: String,
pub algorithm: DeviceKeyAlgorithm,
pub protection_class: DeviceKeyProtectionClass,
}
/// Fetch a controller-local device key public key by id.
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct DeviceKeyPublicParams {
pub key_id: String,
}
/// Device-key public metadata returned by `device/key/public`.
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct DeviceKeyPublicResponse {
pub key_id: String,
/// SubjectPublicKeyInfo DER encoded as base64.
pub public_key_spki_der_base64: String,
pub algorithm: DeviceKeyAlgorithm,
pub protection_class: DeviceKeyProtectionClass,
}
/// Current remote-control connection status and environment id exposed to clients.
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct RemoteControlStatusChangedNotification {
pub status: RemoteControlConnectionStatus,
pub environment_id: Option<String>,
}
#[derive(Serialize, Deserialize, Debug, Clone, Copy, PartialEq, Eq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(rename_all = "camelCase", export_to = "v2/")]
pub enum RemoteControlConnectionStatus {
Disabled,
Connecting,
Connected,
Errored,
}
/// Audience for a remote-control client connection device-key proof.
#[derive(Serialize, Deserialize, Debug, Clone, Copy, PartialEq, Eq, JsonSchema, TS)]
#[serde(rename_all = "snake_case")]
#[ts(rename_all = "snake_case", export_to = "v2/")]
pub enum RemoteControlClientConnectionAudience {
RemoteControlClientWebsocket,
}
/// Audience for a remote-control client enrollment device-key proof.
#[derive(Serialize, Deserialize, Debug, Clone, Copy, PartialEq, Eq, JsonSchema, TS)]
#[serde(rename_all = "snake_case")]
#[ts(rename_all = "snake_case", export_to = "v2/")]
pub enum RemoteControlClientEnrollmentAudience {
RemoteControlClientEnrollment,
}
/// Structured payloads accepted by `device/key/sign`.
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq, JsonSchema, TS)]
#[serde(tag = "type", rename_all = "camelCase")]
#[ts(tag = "type", export_to = "v2/")]
pub enum DeviceKeySignPayload {
/// Payload bound to one remote-control controller websocket `/client` connection challenge.
#[serde(rename_all = "camelCase")]
#[ts(rename_all = "camelCase")]
RemoteControlClientConnection {
nonce: String,
audience: RemoteControlClientConnectionAudience,
/// Backend-issued websocket session id that this proof authorizes.
session_id: String,
/// Origin of the backend endpoint that issued the challenge and will verify this proof.
target_origin: String,
/// Websocket route path that this proof authorizes.
target_path: String,
account_user_id: String,
client_id: String,
/// Remote-control token expiration as Unix seconds.
#[ts(type = "number")]
token_expires_at: i64,
/// SHA-256 of the controller-scoped remote-control token, encoded as unpadded base64url.
token_sha256_base64url: String,
/// Must contain exactly `remote_control_controller_websocket`.
scopes: Vec<String>,
},
/// Payload bound to a remote-control client `/client/enroll` ownership challenge.
#[serde(rename_all = "camelCase")]
#[ts(rename_all = "camelCase")]
RemoteControlClientEnrollment {
nonce: String,
audience: RemoteControlClientEnrollmentAudience,
/// Backend-issued enrollment challenge id that this proof authorizes.
challenge_id: String,
/// Origin of the backend endpoint that issued the challenge and will verify this proof.
target_origin: String,
/// HTTP route path that this proof authorizes.
target_path: String,
account_user_id: String,
client_id: String,
/// SHA-256 of the requested device identity operation, encoded as unpadded base64url.
device_identity_sha256_base64url: String,
/// Enrollment challenge expiration as Unix seconds.
#[ts(type = "number")]
challenge_expires_at: i64,
},
}
/// Sign an accepted structured payload with a controller-local device key.
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct DeviceKeySignParams {
pub key_id: String,
pub payload: DeviceKeySignPayload,
}
/// ASN.1 DER signature returned by `device/key/sign`.
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct DeviceKeySignResponse {
/// ECDSA signature DER encoded as base64.
pub signature_der_base64: String,
/// Exact bytes signed by the device key, encoded as base64. Verifiers must verify this byte
/// string directly and must not reserialize `payload`.
pub signed_payload_base64: String,
pub algorithm: DeviceKeyAlgorithm,
}
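`DeviceKeySignResponse` pairs an ASN.1 DER ECDSA signature with the exact signed bytes, and the doc comment requires verifiers to check those bytes directly rather than reserializing the payload. A hedged sketch of what such a verification could look like with Node's crypto, which consumes and produces DER-encoded ECDSA signatures by default; the locally generated P-256 key and fabricated payload stand in for a real device key and response:

```typescript
import { createSign, createVerify, generateKeyPairSync } from "node:crypto";

// Illustrative only: a local P-256 pair standing in for a device key.
const { publicKey, privateKey } = generateKeyPairSync("ec", {
  namedCurve: "P-256",
});

// Fabricate the fields a device/key/sign response would carry.
const signedPayload = Buffer.from(JSON.stringify({ domain: "codex" }));
const signer = createSign("SHA256");
signer.update(signedPayload);
const signatureDer = signer.sign(privateKey); // DER is Node's ECDSA default

const response = {
  signatureDerBase64: signatureDer.toString("base64"),
  signedPayloadBase64: signedPayload.toString("base64"),
  algorithm: "ecdsa_p256_sha256",
};

// Verify the exact signed bytes directly — do not reserialize the payload.
const verifier = createVerify("SHA256");
verifier.update(Buffer.from(response.signedPayloadBase64, "base64"));
const ok = verifier.verify(
  publicKey,
  Buffer.from(response.signatureDerBase64, "base64"),
);
console.log(ok);
```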

@@ -5,6 +5,7 @@ mod apps;
mod collaboration_mode;
mod command_exec;
mod config;
mod device_key;
mod experimental_feature;
mod feedback;
mod fs;
@@ -17,7 +18,6 @@ mod permissions;
mod plugin;
mod process;
mod realtime;
mod remote_control;
mod review;
mod thread;
mod thread_data;
@@ -29,6 +29,7 @@ pub use apps::*;
pub use collaboration_mode::*;
pub use command_exec::*;
pub use config::*;
pub use device_key::*;
pub use experimental_feature::*;
pub use feedback::*;
pub use fs::*;
@@ -41,7 +42,6 @@ pub use permissions::*;
pub use plugin::*;
pub use process::*;
pub use realtime::*;
pub use remote_control::*;
pub use review::*;
pub use shared::*;
pub use thread::*;

@@ -539,10 +539,8 @@ pub struct PluginSummary {
#[ts(export_to = "v2/")]
pub struct PluginShareContext {
pub remote_plugin_id: String,
pub share_url: Option<String>,
pub creator_account_user_id: Option<String>,
pub creator_name: Option<String>,
pub share_targets: Option<Vec<PluginSharePrincipal>>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
@@ -554,19 +552,10 @@ pub struct PluginDetail {
pub summary: PluginSummary,
pub description: Option<String>,
pub skills: Vec<SkillSummary>,
pub hooks: Vec<PluginHookSummary>,
pub apps: Vec<AppSummary>,
pub mcp_servers: Vec<String>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct PluginHookSummary {
pub key: String,
pub event_name: HookEventName,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]

@@ -1,23 +0,0 @@
use schemars::JsonSchema;
use serde::Deserialize;
use serde::Serialize;
use ts_rs::TS;
/// Current remote-control connection status and environment id exposed to clients.
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct RemoteControlStatusChangedNotification {
pub status: RemoteControlConnectionStatus,
pub environment_id: Option<String>,
}
#[derive(Serialize, Deserialize, Debug, Clone, Copy, PartialEq, Eq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(rename_all = "camelCase", export_to = "v2/")]
pub enum RemoteControlConnectionStatus {
Disabled,
Connecting,
Connected,
Errored,
}

@@ -95,59 +95,6 @@ fn turn_defaults_legacy_missing_items_view_to_full() {
assert_eq!(turn.items_view, TurnItemsView::Full);
}
#[test]
fn thread_turns_list_params_accepts_items_view() {
let params = serde_json::from_value::<ThreadTurnsListParams>(json!({
"threadId": "thr_123",
"cursor": null,
"limit": 25,
"sortDirection": "desc",
"itemsView": "notLoaded",
}))
.expect("thread turns list params should deserialize");
assert_eq!(params.thread_id, "thr_123");
assert_eq!(params.items_view, Some(TurnItemsView::NotLoaded));
}
#[test]
fn thread_turns_items_list_round_trips() {
let params = ThreadTurnsItemsListParams {
thread_id: "thr_123".to_string(),
turn_id: "turn_456".to_string(),
cursor: Some("cursor_1".to_string()),
limit: Some(50),
sort_direction: Some(SortDirection::Asc),
};
assert_eq!(
serde_json::to_value(&params).expect("serialize params"),
json!({
"threadId": "thr_123",
"turnId": "turn_456",
"cursor": "cursor_1",
"limit": 50,
"sortDirection": "asc",
})
);
let response = ThreadTurnsItemsListResponse {
data: vec![ThreadItem::ContextCompaction {
id: "item_1".to_string(),
}],
next_cursor: None,
backwards_cursor: Some("cursor_0".to_string()),
};
assert_eq!(
serde_json::to_value(&response).expect("serialize response"),
json!({
"data": [{"type": "contextCompaction", "id": "item_1"}],
"nextCursor": null,
"backwardsCursor": "cursor_0",
})
);
}
#[test]
fn thread_list_params_accepts_single_cwd() {
let params = serde_json::from_value::<ThreadListParams>(json!({
@@ -717,6 +664,181 @@ fn fs_read_file_params_round_trip() {
assert_eq!(decoded, params);
}
#[test]
fn device_key_create_params_round_trip_uses_protection_policy() {
let params = DeviceKeyCreateParams {
protection_policy: None,
account_user_id: "account-user-1".to_string(),
client_id: "cli_123".to_string(),
};
let value = serde_json::to_value(&params).expect("serialize device/key/create params");
assert_eq!(
value,
json!({
"accountUserId": "account-user-1",
"clientId": "cli_123",
"protectionPolicy": null,
})
);
let decoded = serde_json::from_value::<DeviceKeyCreateParams>(value)
.expect("deserialize device/key/create params");
assert_eq!(decoded, params);
let params = DeviceKeyCreateParams {
protection_policy: Some(DeviceKeyProtectionPolicy::AllowOsProtectedNonextractable),
account_user_id: "account-user-1".to_string(),
client_id: "cli_123".to_string(),
};
let value = serde_json::to_value(&params)
.expect("serialize device/key/create params with protection policy");
assert_eq!(
value,
json!({
"accountUserId": "account-user-1",
"clientId": "cli_123",
"protectionPolicy": "allow_os_protected_nonextractable",
})
);
}
#[test]
fn device_key_create_response_round_trips_protection_class() {
let response = DeviceKeyCreateResponse {
key_id: "dk_123".to_string(),
public_key_spki_der_base64: "MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAE".to_string(),
algorithm: DeviceKeyAlgorithm::EcdsaP256Sha256,
protection_class: DeviceKeyProtectionClass::OsProtectedNonextractable,
};
let value = serde_json::to_value(&response).expect("serialize device/key/create response");
assert_eq!(
value,
json!({
"keyId": "dk_123",
"publicKeySpkiDerBase64": "MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAE",
"algorithm": "ecdsa_p256_sha256",
"protectionClass": "os_protected_nonextractable",
})
);
let decoded = serde_json::from_value::<DeviceKeyCreateResponse>(value)
.expect("deserialize device/key/create response");
assert_eq!(decoded, response);
}
#[test]
fn device_key_sign_params_round_trip_uses_accepted_payload_enum() {
let params = DeviceKeySignParams {
key_id: "dk_123".to_string(),
payload: DeviceKeySignPayload::RemoteControlClientConnection {
nonce: "nonce-1".to_string(),
audience: RemoteControlClientConnectionAudience::RemoteControlClientWebsocket,
session_id: "wssess_123".to_string(),
target_origin: "https://chatgpt.com".to_string(),
target_path: "/api/codex/remote/control/client".to_string(),
account_user_id: "account-user-1".to_string(),
client_id: "cli_123".to_string(),
token_sha256_base64url: "47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU".to_string(),
token_expires_at: 1_700_000_000,
scopes: vec!["remote_control_controller_websocket".to_string()],
},
};
let value = serde_json::to_value(&params).expect("serialize device/key/sign params");
assert_eq!(
value,
json!({
"keyId": "dk_123",
"payload": {
"type": "remoteControlClientConnection",
"nonce": "nonce-1",
"audience": "remote_control_client_websocket",
"sessionId": "wssess_123",
"targetOrigin": "https://chatgpt.com",
"targetPath": "/api/codex/remote/control/client",
"accountUserId": "account-user-1",
"clientId": "cli_123",
"tokenSha256Base64url": "47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU",
"tokenExpiresAt": 1_700_000_000,
"scopes": ["remote_control_controller_websocket"],
},
})
);
let decoded = serde_json::from_value::<DeviceKeySignParams>(value)
.expect("deserialize device/key/sign params");
assert_eq!(decoded, params);
}
#[test]
fn device_key_sign_params_round_trip_uses_enrollment_payload() {
let params = DeviceKeySignParams {
key_id: "dk_123".to_string(),
payload: DeviceKeySignPayload::RemoteControlClientEnrollment {
nonce: "nonce-1".to_string(),
audience: RemoteControlClientEnrollmentAudience::RemoteControlClientEnrollment,
challenge_id: "rch_123".to_string(),
target_origin: "https://chatgpt.com".to_string(),
target_path: "/wham/remote/control/client/enroll".to_string(),
account_user_id: "account-user-1".to_string(),
client_id: "cli_123".to_string(),
device_identity_sha256_base64url: "47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU"
.to_string(),
challenge_expires_at: 1_700_000_000,
},
};
let value = serde_json::to_value(&params)
.expect("serialize device/key/sign params with enrollment payload");
assert_eq!(
value,
json!({
"keyId": "dk_123",
"payload": {
"type": "remoteControlClientEnrollment",
"nonce": "nonce-1",
"audience": "remote_control_client_enrollment",
"challengeId": "rch_123",
"targetOrigin": "https://chatgpt.com",
"targetPath": "/wham/remote/control/client/enroll",
"accountUserId": "account-user-1",
"clientId": "cli_123",
"deviceIdentitySha256Base64url": "47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU",
"challengeExpiresAt": 1_700_000_000,
},
})
);
let decoded = serde_json::from_value::<DeviceKeySignParams>(value)
.expect("deserialize device/key/sign params with enrollment payload");
assert_eq!(decoded, params);
}
#[test]
fn device_key_sign_response_returns_signed_payload_bytes() {
let response = DeviceKeySignResponse {
signature_der_base64: "MEUCIQD".to_string(),
signed_payload_base64: "eyJkb21haW4iOiJjb2RleA".to_string(),
algorithm: DeviceKeyAlgorithm::EcdsaP256Sha256,
};
let value = serde_json::to_value(&response).expect("serialize device/key/sign response");
assert_eq!(
value,
json!({
"signatureDerBase64": "MEUCIQD",
"signedPayloadBase64": "eyJkb21haW4iOiJjb2RleA",
"algorithm": "ecdsa_p256_sha256",
})
);
let decoded = serde_json::from_value::<DeviceKeySignResponse>(value)
.expect("deserialize device/key/sign response");
assert_eq!(decoded, response);
}
#[test]
fn fs_create_directory_params_round_trip_with_default_recursive() {
let params = FsCreateDirectoryParams {

@@ -6,11 +6,9 @@ use super::PermissionProfileSelectionParams;
use super::SandboxMode;
use super::SandboxPolicy;
use super::Thread;
use super::ThreadItem;
use super::ThreadSource;
use super::Turn;
use super::TurnEnvironmentParams;
use super::TurnItemsView;
use super::shared::v2_enum_from_core;
use codex_experimental_api_macros::ExperimentalApi;
use codex_protocol::config_types::Personality;
@@ -1007,9 +1005,6 @@ pub struct ThreadTurnsListParams {
/// Optional turn pagination direction; defaults to descending.
#[ts(optional = nullable)]
pub sort_direction: Option<SortDirection>,
/// How much item detail to include for each returned turn; defaults to summary.
#[ts(optional = nullable)]
pub items_view: Option<TurnItemsView>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
@@ -1027,36 +1022,6 @@ pub struct ThreadTurnsListResponse {
pub backwards_cursor: Option<String>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct ThreadTurnsItemsListParams {
pub thread_id: String,
pub turn_id: String,
/// Opaque cursor to pass to the next call to continue after the last item.
#[ts(optional = nullable)]
pub cursor: Option<String>,
/// Optional item page size.
#[ts(optional = nullable)]
pub limit: Option<u32>,
/// Optional item pagination direction; defaults to ascending.
#[ts(optional = nullable)]
pub sort_direction: Option<SortDirection>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct ThreadTurnsItemsListResponse {
pub data: Vec<ThreadItem>,
/// Opaque cursor to pass to the next call to continue after the last item.
/// If `None`, there are no more items to return.
pub next_cursor: Option<String>,
/// Opaque cursor to pass as `cursor` when reversing `sortDirection`.
/// This is only populated when the page contains at least one item.
pub backwards_cursor: Option<String>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]

View File

@@ -23,7 +23,3 @@ tracing-subscriber = { workspace = true }
tungstenite = { workspace = true }
url = { workspace = true }
uuid = { workspace = true, features = ["v4"] }
[lib]
test = false
doctest = false

@@ -7,7 +7,6 @@ license.workspace = true
[lib]
name = "codex_app_server_transport"
path = "src/lib.rs"
doctest = false
[lints]
workspace = true

View File

@@ -171,6 +171,14 @@ pub enum ConnectionOrigin {
RemoteControl,
}
impl ConnectionOrigin {
pub fn allows_device_key_requests(self) -> bool {
// Device-key endpoints are only for local connections that own the app-server instance.
// Do not include remote transports such as SSH or remote-control websocket connections.
matches!(self, Self::Stdio | Self::InProcess)
}
}
static CONNECTION_ID_COUNTER: AtomicU64 = AtomicU64::new(0);
fn next_connection_id() -> ConnectionId {

View File

@@ -15,7 +15,6 @@ path = "src/bin/notify_capture.rs"
[lib]
name = "codex_app_server"
path = "src/lib.rs"
doctest = false
[lints]
workspace = true
@@ -36,6 +35,7 @@ codex-cloud-requirements = { workspace = true }
codex-config = { workspace = true }
codex-core = { workspace = true }
codex-core-plugins = { workspace = true }
codex-device-key = { workspace = true }
codex-exec-server = { workspace = true }
codex-external-agent-migration = { workspace = true }
codex-external-agent-sessions = { workspace = true }
@@ -72,6 +72,7 @@ clap = { workspace = true, features = ["derive"] }
futures = { workspace = true }
serde = { workspace = true, features = ["derive"] }
serde_json = { workspace = true }
sha2 = { workspace = true }
tempfile = { workspace = true }
thiserror = { workspace = true }
time = { workspace = true }
@@ -87,19 +88,20 @@ tokio = { workspace = true, features = [
tokio-util = { workspace = true }
tracing = { workspace = true, features = ["log"] }
tracing-subscriber = { workspace = true, features = ["env-filter", "fmt", "json"] }
url = { workspace = true }
uuid = { workspace = true, features = ["serde", "v7"] }
[dev-dependencies]
app_test_support = { workspace = true }
base64 = { workspace = true }
axum = { workspace = true, default-features = false, features = [
"http1",
"json",
"tokio",
] }
base64 = { workspace = true }
core_test_support = { workspace = true }
codex-model-provider-info = { workspace = true }
codex-utils-cargo-bin = { workspace = true }
core_test_support = { workspace = true }
flate2 = { workspace = true }
hmac = { workspace = true }
opentelemetry = { workspace = true }
@@ -112,10 +114,8 @@ rmcp = { workspace = true, default-features = false, features = [
"transport-streamable-http-server",
] }
serial_test = { workspace = true }
sha2 = { workspace = true }
shlex = { workspace = true }
tar = { workspace = true }
tokio-tungstenite = { workspace = true }
tracing-opentelemetry = { workspace = true }
url = { workspace = true }
wiremock = { workspace = true }
shlex = { workspace = true }

View File

@@ -149,8 +149,7 @@ Example with notification opt-out:
- `thread/list` — page through stored rollouts; supports cursor-based pagination and optional `modelProviders`, `sourceKinds`, `archived`, `cwd`, and `searchTerm` filters. Each returned `thread` includes `status` (`ThreadStatus`), defaulting to `notLoaded` when the thread is not currently loaded.
- `thread/loaded/list` — list the thread ids currently loaded in memory.
- `thread/read` — read a stored thread by id without resuming it; optionally include turns via `includeTurns`. The returned `thread` includes `status` (`ThreadStatus`), defaulting to `notLoaded` when the thread is not currently loaded.
- `thread/turns/list` — experimental; page through a stored thread's turn history without resuming it; supports cursor-based pagination with `sortDirection`, `itemsView`, `nextCursor`, and `backwardsCursor`.
- `thread/turns/items/list` — experimental; reserved for paging full items for one turn. The API shape is present, but app-server currently returns an unsupported-method JSON-RPC error.
- `thread/turns/list` — experimental; page through a stored thread's turn history without resuming it; supports cursor-based pagination with `sortDirection`, `nextCursor`, and `backwardsCursor`.
- `thread/metadata/update` — patch stored thread metadata in sqlite; currently supports updating persisted `gitInfo` fields and returns the refreshed `thread`.
- `thread/memoryMode/set` — experimental; set a thread's persisted memory eligibility to `"enabled"` or `"disabled"` for either a loaded thread or a stored rollout; returns `{}` on success.
- `memory/reset` — experimental; clear the current `CODEX_HOME/memories` directory and reset persisted memory stage data in sqlite while preserving existing thread memory modes; returns `{}` on success.
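As an illustration of the read path described above, a `thread/read` request that also hydrates turns might look like this (the request id and thread id are illustrative):

```json
{ "method": "thread/read", "id": 23, "params": {
  "threadId": "thr_123",
  "includeTurns": true
} }
```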
@@ -209,10 +208,13 @@ Example with notification opt-out:
- `marketplace/remove` — remove a configured marketplace by name from the user marketplace config, and delete its installed marketplace root when one exists.
- `marketplace/upgrade` — upgrade all configured Git plugin marketplaces, or one named marketplace when `marketplaceName` is provided. Returns selected marketplace names, upgraded roots, and per-marketplace errors.
- `plugin/list` — list discovered plugin marketplaces and plugin state, including effective marketplace install/auth policy metadata, plugin `availability` (`AVAILABLE` by default or `DISABLED_BY_ADMIN` for remote plugins blocked upstream), fail-open `marketplaceLoadErrors` entries for marketplace files that could not be parsed or loaded, and best-effort `featuredPluginIds` for the official curated marketplace. `interface.category` uses the marketplace category when present; otherwise it falls back to the plugin manifest category (**under development; do not call from production clients yet**).
- `plugin/read` — read one plugin by `marketplacePath` plus `pluginName`, returning marketplace info, a list-style `summary`, manifest descriptions/interface metadata, and bundled skills/hooks/apps/MCP server names. Returned plugin skills include their current `enabled` state after local config filtering; bundled hooks are returned as lightweight declaration summaries keyed for correlation with `hooks/list`. Plugin app summaries also include `needsAuth` when the server can determine connector accessibility (**under development; do not call from production clients yet**).
- `plugin/read` — read one plugin by `marketplacePath` plus `pluginName`, returning marketplace info, a list-style `summary`, manifest descriptions/interface metadata, and bundled skills/apps/MCP server names. Returned plugin skills include their current `enabled` state after local config filtering. Plugin app summaries also include `needsAuth` when the server can determine connector accessibility (**under development; do not call from production clients yet**).
- `plugin/skill/read` — read remote plugin skill markdown on demand by `remoteMarketplaceName`, `remotePluginId`, and `skillName`. This lets clients preview uninstalled remote plugin skills without downloading the plugin bundle.
- `skills/changed` — notification emitted when watched local skill files change.
- `app/list` — list available apps.
- `device/key/create` — create or load a controller-local device signing key for an account/client binding. This local-key API is available only over local transports such as stdio and in-process; remote transports reject it. Hardware-backed providers are the target protection class; an OS-protected non-extractable fallback is allowed only with `protectionPolicy: "allow_os_protected_nonextractable"` and returns the reported `protectionClass`.
- `device/key/public` — return a device key's SPKI DER public key as base64 plus its `algorithm` and `protectionClass`.
- `device/key/sign` — sign one of the accepted structured payload variants with a controller-local device key. The only accepted payload today is `remoteControlClientConnection`, which binds a server-issued `/client` websocket challenge to the enrolled controller device without signing the bearer token itself; this is intentionally not an arbitrary-byte signing API.
- `remoteControl/status/changed` — notification emitted when the remote-control status or client-visible environment id changes. `status` is one of `disabled`, `connecting`, `connected`, or `errored`; `environmentId` is a string when the app-server has a current enrollment and `null` when that enrollment is cleared, invalidated, or remote control is disabled. Newly initialized app-server clients always receive the current status snapshot.
- `skills/config/write` — write user-level skill config by name or absolute path.
- `plugin/install` — install a plugin from a discovered marketplace entry, rejecting marketplace entries marked unavailable for install, install MCPs if any, and return the effective plugin auth policy plus any apps that still need auth (**under development; do not call from production clients yet**).
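As a sketch of the local-only device key flow above, a `device/key/create` request over stdio might look roughly like this; `protectionPolicy` is documented, but any account/client binding params are omitted here and the request id is illustrative:

```json
{ "method": "device/key/create", "id": 30, "params": {
  "protectionPolicy": "allow_os_protected_nonextractable"
} }
```

Remote transports reject this request rather than returning a key.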
@@ -425,14 +427,13 @@ Use `thread/read` to fetch a stored thread by id without resuming it. Pass `incl
Use `thread/turns/list` with `capabilities.experimentalApi = true` to page a stored thread's turn history without resuming it. By default, results are sorted descending so clients can start at the present and fetch older turns with `nextCursor`. The response also includes `backwardsCursor`; pass it as `cursor` on a later request with `sortDirection: "asc"` to fetch turns newer than the first item from the earlier page.
Every returned `Turn` includes `itemsView`, which tells clients whether the `items` array was omitted intentionally (`notLoaded`), contains only summary items (`summary`), or contains every item available from persisted app-server history (`full`). Pass `itemsView` to choose the returned detail level; omitted `itemsView` defaults to `"summary"`.
Every returned `Turn` includes `itemsView`, which tells clients whether the `items` array was omitted intentionally (`notLoaded`), contains only summary items (`summary`), or contains every item available from persisted app-server history (`full`). Current `thread/turns/list` responses return `full` turns.
```json
{ "method": "thread/turns/list", "id": 24, "params": {
"threadId": "thr_123",
"limit": 50,
"sortDirection": "desc",
"itemsView": "summary"
"sortDirection": "desc"
} }
{ "id": 24, "result": {
"data": [ ... ],
@@ -441,19 +442,6 @@ Every returned `Turn` includes `itemsView`, which tells clients whether the `ite
} }
```
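To fetch turns newer than the first item of that page, the returned `backwardsCursor` can then be replayed as `cursor` with the sort direction reversed (cursor value illustrative):

```json
{ "method": "thread/turns/list", "id": 27, "params": {
  "threadId": "thr_123",
  "cursor": "opaque-backwards-cursor",
  "sortDirection": "asc"
} }
```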
`thread/turns/items/list` is the planned hydration API for fetching full items for one turn:
```json
{ "method": "thread/turns/items/list", "id": 25, "params": {
"threadId": "thr_123",
"turnId": "turn_456",
"limit": 100,
"sortDirection": "asc"
} }
```
This method currently returns JSON-RPC `-32601` with message `thread/turns/items/list is not supported yet`.
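Until hydration ships, clients calling it should expect an error response of this shape:

```json
{ "id": 25, "error": {
  "code": -32601,
  "message": "thread/turns/items/list is not supported yet"
} }
```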
### Example: Update stored thread metadata
Use `thread/metadata/update` to patch sqlite-backed metadata for a thread without resuming it. Today this supports persisted `gitInfo`; omitted fields are left unchanged, while explicit `null` clears a stored value.
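A minimal `thread/metadata/update` call patching `gitInfo` might look like the following; the nested `gitInfo` field names shown here are illustrative, and per the semantics above, fields left out of the patch are unchanged while an explicit `null` clears the stored value:

```json
{ "method": "thread/metadata/update", "id": 26, "params": {
  "threadId": "thr_123",
  "gitInfo": { "branch": "main", "sha": null }
} }
```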

View File

@@ -49,7 +49,6 @@ use codex_app_server_protocol::RawResponseItemCompletedNotification;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::ServerNotification;
use codex_app_server_protocol::ServerRequestPayload;
use codex_app_server_protocol::SkillsChangedNotification;
use codex_app_server_protocol::ThreadGoalUpdatedNotification;
use codex_app_server_protocol::ThreadItem;
use codex_app_server_protocol::ThreadRealtimeClosedNotification;
@@ -194,13 +193,6 @@ pub(crate) async fn apply_bespoke_event_handling(
)
.await;
}
EventMsg::SkillsUpdateAvailable => {
outgoing
.send_server_notification(ServerNotification::SkillsChanged(
SkillsChangedNotification {},
))
.await;
}
EventMsg::McpStartupUpdate(update) => {
let (status, error) = match update.status {
codex_protocol::protocol::McpStartupStatus::Starting => {
@@ -2594,7 +2586,8 @@ mod tests {
config.model_provider.clone(),
config.codex_home.to_path_buf(),
Arc::new(codex_exec_server::EnvironmentManager::default_for_tests()),
),
)
.await,
);
let codex_core::NewThread {
thread_id: conversation_id,
@@ -3172,7 +3165,8 @@ mod tests {
config.model_provider.clone(),
config.codex_home.to_path_buf(),
Arc::new(codex_exec_server::EnvironmentManager::default_for_tests()),
),
)
.await,
);
let codex_core::NewThread {
thread_id: conversation_id,

View File

@@ -1,7 +1,6 @@
use codex_app_server_protocol::JSONRPCErrorError;
pub(crate) const INVALID_REQUEST_ERROR_CODE: i64 = -32600;
pub(crate) const METHOD_NOT_FOUND_ERROR_CODE: i64 = -32601;
pub const INVALID_PARAMS_ERROR_CODE: i64 = -32602;
pub(crate) const INTERNAL_ERROR_CODE: i64 = -32603;
pub(crate) const OVERLOADED_ERROR_CODE: i64 = -32001;
@@ -11,10 +10,6 @@ pub(crate) fn invalid_request(message: impl Into<String>) -> JSONRPCErrorError {
error(INVALID_REQUEST_ERROR_CODE, message)
}
pub(crate) fn method_not_found(message: impl Into<String>) -> JSONRPCErrorError {
error(METHOD_NOT_FOUND_ERROR_CODE, message)
}
pub(crate) fn invalid_params(message: impl Into<String>) -> JSONRPCErrorError {
error(INVALID_PARAMS_ERROR_CODE, message)
}

View File

@@ -64,6 +64,7 @@ use crate::outgoing_message::OutgoingMessage;
use crate::outgoing_message::OutgoingMessageSender;
use crate::outgoing_message::QueuedOutgoingMessage;
use crate::transport::CHANNEL_CAPACITY;
use crate::transport::ConnectionOrigin;
use crate::transport::OutboundConnectionState;
use crate::transport::route_outgoing_envelope;
use codex_analytics::AppServerRpcTransport;
@@ -81,12 +82,13 @@ use codex_config::CloudRequirementsLoader;
use codex_config::LoaderOverrides;
use codex_config::ThreadConfigLoader;
use codex_core::config::Config;
use codex_core::init_state_db_from_config;
use codex_core::resolve_installation_id;
use codex_exec_server::EnvironmentManager;
use codex_feedback::CodexFeedback;
use codex_login::AuthManager;
use codex_protocol::protocol::SessionSource;
pub use codex_rollout::StateDbHandle;
use codex_rollout::state_db::StateDbHandle;
pub use codex_state::log_db::LogDbLayer;
use tokio::sync::mpsc;
use tokio::sync::oneshot;
@@ -127,7 +129,7 @@ pub struct InProcessStartArgs {
pub feedback: CodexFeedback,
/// SQLite tracing layer used to flush recently emitted logs before feedback upload.
pub log_db: Option<LogDbLayer>,
/// Process-wide SQLite state handle shared with embedded app-server consumers.
/// Optional state DB handle to use for the in-process runtime.
pub state_db: Option<StateDbHandle>,
/// Environment manager used by core execution and filesystem operations.
pub environment_manager: Arc<EnvironmentManager>,
@@ -366,6 +368,10 @@ pub async fn start(args: InProcessStartArgs) -> IoResult<InProcessClientHandle>
async fn start_uninitialized(args: InProcessStartArgs) -> IoResult<InProcessClientHandle> {
let channel_capacity = args.channel_capacity.max(1);
let state_db = match args.state_db.clone() {
Some(state_db) => Some(state_db),
None => init_state_db_from_config(args.config.as_ref()).await,
};
let installation_id = resolve_installation_id(&args.config.codex_home).await?;
let (client_tx, mut client_rx) = mpsc::channel::<InProcessClientMessage>(channel_capacity);
let (event_tx, event_rx) = mpsc::channel::<InProcessServerEvent>(channel_capacity);
@@ -415,6 +421,12 @@ async fn start_uninitialized(args: InProcessStartArgs) -> IoResult<InProcessClie
);
let (processor_tx, mut processor_rx) = mpsc::channel::<ProcessorCommand>(channel_capacity);
let mut processor_handle = tokio::spawn(async move {
let Some(state_db) = state_db else {
warn!(
"in-process app-server state db initialization failed; shutting down processor task"
);
return;
};
let processor = Arc::new(MessageProcessor::new(MessageProcessorArgs {
outgoing: Arc::clone(&processor_outgoing),
analytics_events_client,
@@ -424,7 +436,7 @@ async fn start_uninitialized(args: InProcessStartArgs) -> IoResult<InProcessClie
environment_manager: args.environment_manager,
feedback: args.feedback,
log_db: args.log_db,
state_db: args.state_db,
state_db,
config_warnings: args.config_warnings,
session_source: args.session_source,
auth_manager,
@@ -434,7 +446,7 @@ async fn start_uninitialized(args: InProcessStartArgs) -> IoResult<InProcessClie
plugin_startup_tasks: crate::PluginStartupTasks::Start,
}));
let mut thread_created_rx = processor.thread_created_receiver();
let session = Arc::new(ConnectionSessionState::new());
let session = Arc::new(ConnectionSessionState::new(ConnectionOrigin::InProcess));
let mut listen_for_threads = true;
loop {
@@ -721,8 +733,14 @@ async fn start_uninitialized(args: InProcessStartArgs) -> IoResult<InProcessClie
#[cfg(test)]
mod tests {
use super::*;
use crate::error_code::INVALID_REQUEST_ERROR_CODE;
use codex_app_server_protocol::ClientInfo;
use codex_app_server_protocol::ConfigRequirementsReadResponse;
use codex_app_server_protocol::DeviceKeyPublicParams;
use codex_app_server_protocol::DeviceKeySignParams;
use codex_app_server_protocol::DeviceKeySignPayload;
use codex_app_server_protocol::RemoteControlClientConnectionAudience;
use codex_app_server_protocol::RemoteControlClientEnrollmentAudience;
use codex_app_server_protocol::SessionSource as ApiSessionSource;
use codex_app_server_protocol::ThreadStartParams;
use codex_app_server_protocol::ThreadStartResponse;
@@ -757,7 +775,7 @@ mod tests {
) -> InProcessClientHandle {
let codex_home = TempDir::new().expect("temp dir");
let config = Arc::new(build_test_config(codex_home.path()).await);
let state_db = codex_rollout::state_db::try_init(config.as_ref())
let state_db = init_state_db_from_config(config.as_ref())
.await
.expect("state db should initialize for in-process test");
let args = InProcessStartArgs {
@@ -814,6 +832,87 @@ mod tests {
.expect("in-process runtime should shutdown cleanly");
}
#[tokio::test]
async fn in_process_allows_device_key_requests_to_reach_device_key_api() {
let client = start_test_client(SessionSource::Cli).await;
const MALFORMED_KEY_ID_MESSAGE: &str = concat!(
"invalid device key payload: keyId must be dk_hse_, dk_tpm_, or dk_osn_ ",
"followed by unpadded base64url-encoded 32 bytes"
);
let requests = [
(
ClientRequest::DeviceKeyPublic {
request_id: RequestId::Integer(11),
params: DeviceKeyPublicParams {
key_id: String::new(),
},
},
MALFORMED_KEY_ID_MESSAGE,
),
(
ClientRequest::DeviceKeySign {
request_id: RequestId::Integer(12),
params: DeviceKeySignParams {
key_id: String::new(),
payload: DeviceKeySignPayload::RemoteControlClientConnection {
nonce: "nonce-123".to_string(),
audience:
RemoteControlClientConnectionAudience::RemoteControlClientWebsocket,
session_id: "wssess_123".to_string(),
target_origin: "https://chatgpt.com".to_string(),
target_path: "/api/codex/remote/control/client".to_string(),
account_user_id: "acct_123".to_string(),
client_id: "cli_123".to_string(),
token_expires_at: 4_102_444_800,
token_sha256_base64url: "47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU"
.to_string(),
scopes: vec!["remote_control_controller_websocket".to_string()],
},
},
},
MALFORMED_KEY_ID_MESSAGE,
),
(
ClientRequest::DeviceKeySign {
request_id: RequestId::Integer(13),
params: DeviceKeySignParams {
key_id: String::new(),
payload: DeviceKeySignPayload::RemoteControlClientEnrollment {
nonce: "nonce-123".to_string(),
audience:
RemoteControlClientEnrollmentAudience::RemoteControlClientEnrollment,
challenge_id: "rch_123".to_string(),
target_origin: "https://chatgpt.com".to_string(),
target_path: "/wham/remote/control/client/enroll".to_string(),
account_user_id: "acct_123".to_string(),
client_id: "cli_123".to_string(),
device_identity_sha256_base64url:
"47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU".to_string(),
challenge_expires_at: 4_102_444_800,
},
},
},
MALFORMED_KEY_ID_MESSAGE,
),
];
for (request, expected_message) in requests {
let error = client
.request(request)
.await
.expect("request transport should work")
.expect_err("request should be rejected");
assert_eq!(error.code, INVALID_REQUEST_ERROR_CODE);
assert_eq!(error.message, expected_message);
}
client
.shutdown()
.await
.expect("in-process runtime should shutdown cleanly");
}
#[tokio::test]
async fn in_process_start_uses_requested_session_source_for_thread_start() {
for (requested_source, expected_source) in [

View File

@@ -8,7 +8,6 @@ use codex_config::RemoteThreadConfigLoader;
use codex_config::ThreadConfigLoader;
use codex_core::config::Config;
use codex_core::resolve_installation_id;
use codex_exec_server::EnvironmentManagerArgs;
use codex_features::Feature;
use codex_login::AuthManager;
use codex_utils_cli::CliConfigOverrides;
@@ -51,11 +50,11 @@ use codex_config::TextRange as CoreTextRange;
use codex_core::ExecPolicyError;
use codex_core::check_execpolicy_for_warnings;
use codex_core::config::find_codex_home;
use codex_core::init_state_db_from_config;
use codex_exec_server::EnvironmentManager;
use codex_exec_server::ExecServerRuntimePaths;
use codex_feedback::CodexFeedback;
use codex_protocol::protocol::SessionSource;
use codex_rollout::state_db as rollout_state_db;
use codex_state::log_db;
use tokio::sync::mpsc;
use tokio::sync::oneshot;
@@ -93,6 +92,7 @@ mod outgoing_message;
mod request_processors;
mod request_serialization;
mod server_request_error;
mod skills_watcher;
mod thread_state;
mod thread_status;
mod transport;
@@ -419,15 +419,6 @@ pub async fn run_main_with_transport_options(
auth: AppServerWebsocketAuthSettings,
runtime_options: AppServerRuntimeOptions,
) -> IoResult<()> {
let environment_manager = Arc::new(
EnvironmentManager::new(EnvironmentManagerArgs::new(
ExecServerRuntimePaths::from_optional_paths(
arg0_paths.codex_self_exe.clone(),
arg0_paths.codex_linux_sandbox_exe.clone(),
)?,
))
.await,
);
let (transport_event_tx, mut transport_event_rx) =
mpsc::channel::<TransportEvent>(CHANNEL_CAPACITY);
let (outgoing_tx, mut outgoing_rx) = mpsc::channel::<OutgoingEnvelope>(CHANNEL_CAPACITY);
@@ -443,6 +434,17 @@ pub async fn run_main_with_transport_options(
)
})?;
let codex_home = find_codex_home()?;
let local_runtime_paths = ExecServerRuntimePaths::from_optional_paths(
arg0_paths.codex_self_exe.clone(),
arg0_paths.codex_linux_sandbox_exe.clone(),
)?;
let environment_manager = if loader_overrides.ignore_user_config {
EnvironmentManager::from_env(local_runtime_paths).await
} else {
EnvironmentManager::from_codex_home(codex_home.clone(), local_runtime_paths).await
}
.map(Arc::new)
.map_err(std::io::Error::other)?;
let config_manager = ConfigManager::new(
codex_home.to_path_buf(),
cli_kv_overrides.clone(),
@@ -489,9 +491,9 @@ pub async fn run_main_with_transport_options(
}
};
let state_db_result = rollout_state_db::try_init(&config).await;
let state_db_init_error = state_db_result.as_ref().err().map(ToString::to_string);
let state_db = state_db_result.ok();
let state_db = init_state_db_from_config(&config)
.await
.ok_or_else(|| std::io::Error::other("failed to initialize sqlite state db"))?;
if should_run_personality_migration {
let effective_toml = config.config_layer_stack.effective_config();
@@ -600,10 +602,12 @@ pub async fn run_main_with_transport_options(
let feedback_layer = feedback.logger_layer();
let feedback_metadata_layer = feedback.metadata_layer();
let log_db = state_db.clone().map(log_db::start);
let log_db_layer = log_db
.clone()
.map(|layer| layer.with_filter(Targets::new().with_default(Level::TRACE)));
let log_db = log_db::start(state_db.clone());
let log_db_layer = Some(
log_db
.clone()
.with_filter(Targets::new().with_default(Level::TRACE)),
);
let otel_logger_layer = otel.as_ref().and_then(|o| o.logger_layer());
let otel_tracing_layer = otel.as_ref().and_then(|o| o.tracing_layer());
let _ = tracing_subscriber::registry()
@@ -621,10 +625,6 @@ pub async fn run_main_with_transport_options(
}
}
let installation_id = resolve_installation_id(&config.codex_home).await?;
if let Some(err) = &state_db_init_error {
error!("failed to initialize sqlite state db: {err}");
}
let transport_shutdown_token = CancellationToken::new();
let mut transport_accept_handles = Vec::<JoinHandle<()>>::new();
@@ -669,25 +669,17 @@ pub async fn run_main_with_transport_options(
let auth_manager =
AuthManager::shared_from_config(&config, /*enable_codex_api_key_env*/ false).await;
let remote_control_config_enabled = config.features.enabled(Feature::RemoteControl);
let remote_control_enabled = remote_control_config_enabled && state_db.is_some();
if remote_control_config_enabled && state_db.is_none() {
error!("remote control disabled because sqlite state db is unavailable");
}
let remote_control_enabled = config.features.enabled(Feature::RemoteControl);
if transport_accept_handles.is_empty() && !remote_control_enabled {
return Err(std::io::Error::new(
ErrorKind::InvalidInput,
if remote_control_config_enabled && state_db.is_none() {
"no transport configured; remote control disabled because sqlite state db is unavailable"
} else {
"no transport configured; use --listen or enable remote control"
},
"no transport configured; use --listen or enable remote control",
));
}
let (remote_control_accept_handle, remote_control_handle) = start_remote_control(
config.chatgpt_base_url.clone(),
state_db.clone(),
Some(state_db.clone()),
auth_manager.clone(),
transport_event_tx.clone(),
transport_shutdown_token.clone(),
@@ -771,7 +763,7 @@ pub async fn run_main_with_transport_options(
config_manager,
environment_manager,
feedback: feedback.clone(),
log_db,
log_db: Some(log_db),
state_db: state_db.clone(),
config_warnings,
session_source,

View File

@@ -108,8 +108,9 @@ mod tests {
use codex_config::ThreadConfigLoadErrorCode;
use codex_config::ThreadConfigLoader;
use codex_config::ThreadConfigSource;
use codex_core::agent_graph_store_from_state_db;
use codex_core::config::ConfigOverrides;
use codex_core::init_state_db;
use codex_core::init_state_db_from_config;
use codex_core::thread_store_from_config;
use codex_exec_server::EnvironmentManager;
use codex_login::AuthManager;
@@ -174,18 +175,20 @@ mod tests {
.await?;
let auth_manager = AuthManager::from_auth_for_testing(CodexAuth::from_api_key("dummy"));
let state_db = init_state_db(&good_config)
let state_db = init_state_db_from_config(&good_config)
.await
.expect("refresh tests require state db");
let thread_store = thread_store_from_config(&good_config, Some(state_db.clone()));
let thread_store = thread_store_from_config(&good_config, state_db.clone());
let agent_graph_store = agent_graph_store_from_state_db(state_db.clone());
let thread_manager = Arc::new(ThreadManager::new(
&good_config,
auth_manager,
SessionSource::Exec,
Arc::new(EnvironmentManager::default_for_tests()),
/*analytics_events_client*/ None,
state_db,
thread_store,
Some(state_db.clone()),
agent_graph_store,
"11111111-1111-4111-8111-111111111111".to_string(),
));
thread_manager.start_thread(good_config).await?;

View File

@@ -17,6 +17,7 @@ use crate::request_processors::AppsRequestProcessor;
use crate::request_processors::CatalogRequestProcessor;
use crate::request_processors::CommandExecRequestProcessor;
use crate::request_processors::ConfigRequestProcessor;
use crate::request_processors::DeviceKeyRequestProcessor;
use crate::request_processors::ExternalAgentConfigRequestProcessor;
use crate::request_processors::FeedbackRequestProcessor;
use crate::request_processors::FsRequestProcessor;
@@ -34,8 +35,10 @@ use crate::request_processors::WindowsSandboxRequestProcessor;
use crate::request_serialization::QueuedInitializedRequest;
use crate::request_serialization::RequestSerializationQueueKey;
use crate::request_serialization::RequestSerializationQueues;
use crate::skills_watcher::SkillsWatcher;
use crate::thread_state::ThreadStateManager;
use crate::transport::AppServerTransport;
use crate::transport::ConnectionOrigin;
use crate::transport::RemoteControlHandle;
use async_trait::async_trait;
use codex_analytics::AnalyticsEventsClient;
@@ -59,6 +62,7 @@ use codex_app_server_protocol::experimental_required_message;
use codex_arg0::Arg0DispatchPaths;
use codex_chatgpt::workspace_settings;
use codex_core::ThreadManager;
use codex_core::agent_graph_store_from_state_db;
use codex_core::config::Config;
use codex_core::thread_store_from_config;
use codex_exec_server::EnvironmentManager;
@@ -158,6 +162,7 @@ pub(crate) struct MessageProcessor {
command_exec_processor: CommandExecRequestProcessor,
process_exec_processor: ProcessExecRequestProcessor,
config_processor: ConfigRequestProcessor,
device_key_processor: DeviceKeyRequestProcessor,
external_agent_config_processor: ExternalAgentConfigRequestProcessor,
feedback_processor: FeedbackRequestProcessor,
fs_processor: FsRequestProcessor,
@@ -176,6 +181,7 @@ pub(crate) struct MessageProcessor {
#[derive(Debug)]
pub(crate) struct ConnectionSessionState {
origin: ConnectionOrigin,
pub(crate) rpc_gate: Arc<ConnectionRpcGate>,
initialized: OnceLock<InitializedConnectionSessionState>,
}
@@ -190,13 +196,14 @@ pub(crate) struct InitializedConnectionSessionState {
impl Default for ConnectionSessionState {
fn default() -> Self {
Self::new()
Self::new(ConnectionOrigin::WebSocket)
}
}
impl ConnectionSessionState {
pub(crate) fn new() -> Self {
pub(crate) fn new(origin: ConnectionOrigin) -> Self {
Self {
origin,
rpc_gate: Arc::new(ConnectionRpcGate::new()),
initialized: OnceLock::new(),
}
@@ -206,6 +213,10 @@ impl ConnectionSessionState {
self.initialized.get().is_some()
}
fn allows_device_key_requests(&self) -> bool {
self.origin.allows_device_key_requests()
}
pub(crate) fn experimental_api_enabled(&self) -> bool {
self.initialized
.get()
@@ -245,7 +256,7 @@ pub(crate) struct MessageProcessorArgs {
pub(crate) environment_manager: Arc<EnvironmentManager>,
pub(crate) feedback: CodexFeedback,
pub(crate) log_db: Option<LogDbLayer>,
pub(crate) state_db: Option<StateDbHandle>,
pub(crate) state_db: StateDbHandle,
pub(crate) config_warnings: Vec<ConfigWarningNotification>,
pub(crate) session_source: SessionSource,
pub(crate) auth_manager: Arc<AuthManager>,
@@ -284,19 +295,22 @@ impl MessageProcessor {
// affect per-thread behavior, but they must not move newly started,
// resumed, or forked threads to a different persistence backend/root.
let thread_store = thread_store_from_config(config.as_ref(), state_db.clone());
let agent_graph_store = agent_graph_store_from_state_db(state_db.clone());
let thread_manager = Arc::new(ThreadManager::new(
config.as_ref(),
auth_manager.clone(),
session_source,
environment_manager,
Some(analytics_events_client.clone()),
Arc::clone(&thread_store),
state_db.clone(),
Arc::clone(&thread_store),
agent_graph_store.clone(),
installation_id,
));
thread_manager
.plugins_manager()
.set_analytics_events_client(analytics_events_client.clone());
let skills_watcher = SkillsWatcher::new(thread_manager.skills_manager(), outgoing.clone());
let pending_thread_unloads = Arc::new(Mutex::new(HashSet::new()));
let thread_state_manager = ThreadStateManager::new();
@@ -338,7 +352,7 @@ impl MessageProcessor {
Arc::clone(&config),
feedback,
log_db,
state_db.clone(),
Some(state_db.clone()),
);
let git_processor = GitRequestProcessor::new();
let initialize_processor = InitializeRequestProcessor::new(
@@ -388,7 +402,8 @@ impl MessageProcessor {
thread_watch_manager.clone(),
Arc::clone(&thread_list_state_permit),
thread_goal_processor.clone(),
state_db,
Some(state_db.clone()),
Arc::clone(&skills_watcher),
);
let turn_processor = TurnRequestProcessor::new(
auth_manager.clone(),
@@ -402,6 +417,7 @@ impl MessageProcessor {
thread_state_manager,
thread_watch_manager,
thread_list_state_permit,
Arc::clone(&skills_watcher),
);
if matches!(plugin_startup_tasks, crate::PluginStartupTasks::Start) {
// Keep plugin startup warmups aligned at app-server startup.
@@ -432,6 +448,7 @@ impl MessageProcessor {
arg0_paths,
config.codex_home.to_path_buf(),
);
let device_key_processor = DeviceKeyRequestProcessor::new(outgoing.clone(), state_db);
let fs_processor = FsRequestProcessor::new(
thread_manager
.environment_manager()
@@ -453,6 +470,7 @@ impl MessageProcessor {
command_exec_processor,
process_exec_processor,
config_processor,
device_key_processor,
external_agent_config_processor,
feedback_processor,
fs_processor,
@@ -759,6 +777,7 @@ impl MessageProcessor {
let serialization_scope = codex_request.serialization_scope();
let app_server_client_name = session.app_server_client_name().map(str::to_string);
let client_version = session.client_version().map(str::to_string);
let device_key_requests_allowed = session.allows_device_key_requests();
let error_request_id = connection_request_id.clone();
let rpc_gate = Arc::clone(&session.rpc_gate);
let processor = Arc::clone(self);
@@ -774,6 +793,7 @@ impl MessageProcessor {
request_context,
app_server_client_name,
client_version,
device_key_requests_allowed,
)
.await;
if let Err(error) = result {
@@ -784,9 +804,9 @@ impl MessageProcessor {
);
if let Some(scope) = serialization_scope {
let (key, access) = RequestSerializationQueueKey::from_scope(connection_id, scope);
let key = RequestSerializationQueueKey::from_scope(connection_id, scope);
self.request_serialization_queues
.enqueue(key, access, request)
.enqueue(key, request)
.await;
} else {
tokio::spawn(async move {
@@ -803,6 +823,7 @@ impl MessageProcessor {
request_context: RequestContext,
app_server_client_name: Option<String>,
client_version: Option<String>,
device_key_requests_allowed: bool,
) -> Result<(), JSONRPCErrorError> {
let connection_id = connection_request_id.connection_id;
let request_id = ConnectionRequestId {
@@ -850,6 +871,30 @@ impl MessageProcessor {
.config_requirements_read()
.await
.map(|response| Some(response.into())),
ClientRequest::DeviceKeyCreate { params, .. } => {
self.device_key_processor.create(
request_id.clone(),
params,
device_key_requests_allowed,
);
Ok(None)
}
ClientRequest::DeviceKeyPublic { params, .. } => {
self.device_key_processor.public(
request_id.clone(),
params,
device_key_requests_allowed,
);
Ok(None)
}
ClientRequest::DeviceKeySign { params, .. } => {
self.device_key_processor.sign(
request_id.clone(),
params,
device_key_requests_allowed,
);
Ok(None)
}
ClientRequest::FsReadFile { params, .. } => self
.fs_processor
.read_file(params)
@@ -1008,9 +1053,6 @@ impl MessageProcessor {
ClientRequest::ThreadTurnsList { params, .. } => {
self.thread_processor.thread_turns_list(params).await
}
ClientRequest::ThreadTurnsItemsList { params, .. } => {
self.thread_processor.thread_turns_items_list(params).await
}
ClientRequest::ThreadShellCommand { params, .. } => {
self.thread_processor
.thread_shell_command(&request_id, params)

View File
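The routing changes above thread a per-connection `device_key_requests_allowed` flag (read from `session.allows_device_key_requests()`) into each `device/key/*` handler. As a hedged, standalone sketch of that gating rule — the enum variants and rejection logic here are assumptions inferred from the `remote_control_origin_rejects_device_key_requests` test and its error message, not the real `ConnectionSessionState` implementation:

```rust
// Sketch of origin-based gating for device key requests.
// Assumption: only the RemoteControl origin is rejected; local
// transports (stdio, websocket) may mint and use device keys.
#[derive(Debug, PartialEq)]
enum ConnectionOrigin {
    Stdio,
    WebSocket,
    RemoteControl,
}

fn allows_device_key_requests(origin: &ConnectionOrigin) -> bool {
    // Device keys attest to local hardware, so remote transports
    // must not be able to create or sign with them.
    !matches!(origin, ConnectionOrigin::RemoteControl)
}

fn gate_error(method: &str) -> String {
    format!("{method} is not available over remote transports")
}

fn main() {
    assert!(allows_device_key_requests(&ConnectionOrigin::Stdio));
    assert!(allows_device_key_requests(&ConnectionOrigin::WebSocket));
    assert!(!allows_device_key_requests(&ConnectionOrigin::RemoteControl));
    assert_eq!(
        gate_error("device/key/sign"),
        "device/key/sign is not available over remote transports"
    );
    println!("gating ok");
}
```

Computing the flag once per request (rather than inside each handler) keeps the check in one place even though the three `device/key/*` methods spawn independently.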

@@ -6,16 +6,21 @@ use crate::config_manager::ConfigManager;
use crate::outgoing_message::ConnectionId;
use crate::outgoing_message::OutgoingMessageSender;
use crate::transport::AppServerTransport;
use crate::transport::ConnectionOrigin;
use anyhow::Result;
use app_test_support::create_mock_responses_server_repeating_assistant;
use app_test_support::write_mock_responses_config_toml;
use codex_analytics::AppServerRpcTransport;
use codex_app_server_protocol::ClientInfo;
use codex_app_server_protocol::ClientRequest;
use codex_app_server_protocol::DeviceKeySignParams;
use codex_app_server_protocol::DeviceKeySignPayload;
use codex_app_server_protocol::InitializeCapabilities;
use codex_app_server_protocol::InitializeParams;
use codex_app_server_protocol::InitializeResponse;
use codex_app_server_protocol::JSONRPCErrorError;
use codex_app_server_protocol::JSONRPCRequest;
use codex_app_server_protocol::RemoteControlClientConnectionAudience;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::ThreadStartParams;
use codex_app_server_protocol::ThreadStartResponse;
@@ -27,6 +32,7 @@ use codex_config::CloudRequirementsLoader;
use codex_config::LoaderOverrides;
use codex_core::config::Config;
use codex_core::config::ConfigBuilder;
use codex_core::init_state_db_from_config;
use codex_exec_server::EnvironmentManager;
use codex_feedback::CodexFeedback;
use codex_login::AuthManager;
@@ -116,6 +122,10 @@ struct TracingHarness {
impl TracingHarness {
async fn new() -> Result<Self> {
Self::new_with_origin(ConnectionOrigin::WebSocket).await
}
async fn new_with_origin(origin: ConnectionOrigin) -> Result<Self> {
let server = create_mock_responses_server_repeating_assistant("Done").await;
let codex_home = TempDir::new()?;
let config = Arc::new(build_test_config(codex_home.path(), &server.uri()).await?);
@@ -128,7 +138,7 @@ impl TracingHarness {
_codex_home: codex_home,
processor,
outgoing_rx,
session: Arc::new(ConnectionSessionState::new()),
session: Arc::new(ConnectionSessionState::new(origin)),
tracing,
};
@@ -187,6 +197,29 @@ impl TracingHarness {
read_response(&mut self.outgoing_rx, request_id).await
}
async fn request_error(
&mut self,
request: ClientRequest,
trace: Option<W3cTraceContext>,
) -> JSONRPCErrorError {
let request_id = match request.id() {
RequestId::Integer(request_id) => *request_id,
request_id => panic!("expected integer request id in test harness, got {request_id:?}"),
};
let mut request = request_from_client_request(request);
request.trace = trace;
self.processor
.process_request(
TEST_CONNECTION_ID,
request,
&AppServerTransport::Stdio,
Arc::clone(&self.session),
)
.await;
read_error(&mut self.outgoing_rx, request_id).await
}
async fn start_thread(
&mut self,
request_id: i64,
@@ -249,6 +282,9 @@ async fn build_test_processor(
outgoing_tx,
analytics_events_client.clone(),
));
let state_db = init_state_db_from_config(config.as_ref())
.await
.expect("tracing test processor requires state db");
let processor = Arc::new(MessageProcessor::new(MessageProcessorArgs {
outgoing,
analytics_events_client,
@@ -258,7 +294,7 @@ async fn build_test_processor(
environment_manager: Arc::new(EnvironmentManager::default_for_tests()),
feedback: CodexFeedback::new(),
log_db: None,
state_db: None,
state_db,
config_warnings: Vec::new(),
session_source: SessionSource::VSCode,
auth_manager,
@@ -453,6 +489,36 @@ async fn read_response<T: serde::de::DeserializeOwned>(
}
}
async fn read_error(
outgoing_rx: &mut mpsc::Receiver<crate::outgoing_message::OutgoingEnvelope>,
request_id: i64,
) -> JSONRPCErrorError {
loop {
let envelope = tokio::time::timeout(std::time::Duration::from_secs(5), outgoing_rx.recv())
.await
.expect("timed out waiting for error")
.expect("outgoing channel closed");
let crate::outgoing_message::OutgoingEnvelope::ToConnection {
connection_id,
message,
..
} = envelope
else {
continue;
};
if connection_id != TEST_CONNECTION_ID {
continue;
}
let crate::outgoing_message::OutgoingMessage::Error(error) = message else {
continue;
};
if error.id != RequestId::Integer(request_id) {
continue;
}
return error.error;
}
}
async fn read_thread_started_notification(
outgoing_rx: &mut mpsc::Receiver<crate::outgoing_message::OutgoingEnvelope>,
) {
@@ -631,6 +697,47 @@ fn thread_start_jsonrpc_span_exports_server_span_and_parents_children() -> Resul
)
}
#[tokio::test(flavor = "current_thread")]
#[serial(app_server_tracing)]
async fn remote_control_origin_rejects_device_key_requests() -> Result<()> {
let mut harness = TracingHarness::new_with_origin(ConnectionOrigin::RemoteControl).await?;
let error = harness
.request_error(
ClientRequest::DeviceKeySign {
request_id: RequestId::Integer(20_004),
params: DeviceKeySignParams {
key_id: "dk_123".to_string(),
payload: DeviceKeySignPayload::RemoteControlClientConnection {
nonce: "nonce-123".to_string(),
audience:
RemoteControlClientConnectionAudience::RemoteControlClientWebsocket,
session_id: "wssess_123".to_string(),
target_origin: "https://chatgpt.com".to_string(),
target_path: "/api/codex/remote/control/client".to_string(),
account_user_id: "acct_123".to_string(),
client_id: "cli_123".to_string(),
token_expires_at: 4_102_444_800,
token_sha256_base64url: "47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU"
.to_string(),
scopes: vec!["remote_control_controller_websocket".to_string()],
},
},
},
/*trace*/ None,
)
.await;
assert_eq!(error.code, crate::error_code::INVALID_REQUEST_ERROR_CODE);
assert_eq!(
error.message,
"device/key/sign is not available over remote transports"
);
harness.shutdown().await;
Ok(())
}
#[tokio::test(flavor = "current_thread")]
#[serial(app_server_tracing)]
async fn turn_start_jsonrpc_span_parents_core_turn_spans() -> Result<()> {

View File

@@ -11,10 +11,13 @@ use crate::outgoing_message::ConnectionRequestId;
use crate::outgoing_message::OutgoingMessageSender;
use crate::outgoing_message::RequestContext;
use crate::outgoing_message::ThreadScopedOutgoingMessageSender;
use crate::skills_watcher::SkillsWatcher;
use crate::thread_status::ThreadWatchManager;
use crate::thread_status::resolve_thread_status;
use chrono::DateTime;
use chrono::Duration as ChronoDuration;
use chrono::SecondsFormat;
use chrono::Utc;
use codex_analytics::AnalyticsEventsClient;
use codex_analytics::AnalyticsJsonRpcError;
use codex_analytics::InputError;
@@ -215,7 +218,6 @@ use codex_app_server_protocol::ThreadStartParams;
use codex_app_server_protocol::ThreadStartResponse;
use codex_app_server_protocol::ThreadStartedNotification;
use codex_app_server_protocol::ThreadStatus;
use codex_app_server_protocol::ThreadTurnsItemsListParams;
use codex_app_server_protocol::ThreadTurnsListParams;
use codex_app_server_protocol::ThreadTurnsListResponse;
use codex_app_server_protocol::ThreadUnarchiveParams;
@@ -273,6 +275,7 @@ use codex_core::exec::ExecCapturePolicy;
use codex_core::exec::ExecExpiration;
use codex_core::exec::ExecParams;
use codex_core::exec_env::create_env;
use codex_core::find_thread_name_by_id;
use codex_core::find_thread_path_by_id_str;
use codex_core::path_utils;
#[cfg(test)]
@@ -433,6 +436,7 @@ mod apps_processor;
mod catalog_processor;
mod command_exec_processor;
mod config_processor;
mod device_key_processor;
mod external_agent_config_processor;
mod feedback_processor;
mod fs_processor;
@@ -453,6 +457,7 @@ pub(crate) use apps_processor::AppsRequestProcessor;
pub(crate) use catalog_processor::CatalogRequestProcessor;
pub(crate) use command_exec_processor::CommandExecRequestProcessor;
pub(crate) use config_processor::ConfigRequestProcessor;
pub(crate) use device_key_processor::DeviceKeyRequestProcessor;
pub(crate) use external_agent_config_processor::ExternalAgentConfigRequestProcessor;
pub(crate) use feedback_processor::FeedbackRequestProcessor;
pub(crate) use fs_processor::FsRequestProcessor;
@@ -494,7 +499,6 @@ pub(crate) use self::thread_lifecycle::populate_thread_turns_from_history;
pub(crate) use self::thread_processor::thread_from_stored_thread;
#[cfg(test)]
pub(crate) use self::thread_summary::read_summary_from_rollout;
#[cfg(test)]
pub(crate) use self::thread_summary::summary_to_thread;
pub(crate) fn build_api_turns_from_rollout_items(items: &[RolloutItem]) -> Vec<Turn> {

View File

@@ -1,5 +1,4 @@
use super::*;
use futures::StreamExt;
#[derive(Clone)]
pub(crate) struct CatalogRequestProcessor {
@@ -10,8 +9,6 @@ pub(crate) struct CatalogRequestProcessor {
pub(super) workspace_settings_cache: Arc<workspace_settings::WorkspaceSettingsCache>,
}
const SKILLS_LIST_CWD_CONCURRENCY: usize = 5;
fn skills_to_info(
skills: &[codex_core::skills::SkillMetadata],
disabled_paths: &HashSet<AbsolutePathBuf>,
@@ -433,76 +430,56 @@ impl CatalogRequestProcessor {
.environment_manager()
.default_environment()
.map(|environment| environment.get_filesystem());
let mut data = futures::stream::iter(cwds.into_iter().enumerate())
.map(|(index, cwd)| {
let config = &config;
let extra_roots_by_cwd = &extra_roots_by_cwd;
let fs = fs.clone();
let plugins_manager = &plugins_manager;
let skills_manager = &skills_manager;
async move {
let (cwd_abs, config_layer_stack) = match self.resolve_cwd_config(&cwd).await {
Ok(resolved) => resolved,
Err(message) => {
let error_path = cwd.clone();
return (
index,
codex_app_server_protocol::SkillsListEntry {
cwd,
skills: Vec::new(),
errors: vec![codex_app_server_protocol::SkillErrorInfo {
path: error_path,
message,
}],
},
);
}
};
let extra_roots = extra_roots_by_cwd
.get(&cwd)
.map_or(&[][..], std::vec::Vec::as_slice);
let effective_skill_roots = if workspace_codex_plugins_enabled {
let plugins_input = config.plugins_config_input();
plugins_manager
.effective_skill_roots_for_layer_stack(
&config_layer_stack,
&plugins_input,
)
.await
} else {
Vec::new()
};
let skills_input = codex_core::skills::SkillsLoadInput::new(
cwd_abs.clone(),
effective_skill_roots,
config_layer_stack,
config.bundled_skills_enabled(),
);
let outcome = skills_manager
.skills_for_cwd_with_extra_user_roots(
&skills_input,
force_reload,
extra_roots,
fs,
)
.await;
let errors = errors_to_info(&outcome.errors);
let skills = skills_to_info(&outcome.skills, &outcome.disabled_paths);
(
index,
codex_app_server_protocol::SkillsListEntry {
cwd,
skills,
errors,
},
)
let mut data = Vec::new();
for cwd in cwds {
let (cwd_abs, config_layer_stack) = match self.resolve_cwd_config(&cwd).await {
Ok(resolved) => resolved,
Err(message) => {
let error_path = cwd.clone();
data.push(codex_app_server_protocol::SkillsListEntry {
cwd,
skills: Vec::new(),
errors: vec![codex_app_server_protocol::SkillErrorInfo {
path: error_path,
message,
}],
});
continue;
}
})
.buffer_unordered(SKILLS_LIST_CWD_CONCURRENCY)
.collect::<Vec<_>>()
.await;
data.sort_unstable_by_key(|(index, _)| *index);
let data = data.into_iter().map(|(_, entry)| entry).collect();
};
let extra_roots = extra_roots_by_cwd
.get(&cwd)
.map_or(&[][..], std::vec::Vec::as_slice);
let effective_skill_roots = if workspace_codex_plugins_enabled {
let plugins_input = config.plugins_config_input();
plugins_manager
.effective_skill_roots_for_layer_stack(&config_layer_stack, &plugins_input)
.await
} else {
Vec::new()
};
let skills_input = codex_core::skills::SkillsLoadInput::new(
cwd_abs.clone(),
effective_skill_roots,
config_layer_stack,
config.bundled_skills_enabled(),
);
let outcome = skills_manager
.skills_for_cwd_with_extra_user_roots(
&skills_input,
force_reload,
extra_roots,
fs.clone(),
)
.await;
let errors = errors_to_info(&outcome.errors);
let skills = skills_to_info(&outcome.skills, &outcome.disabled_paths);
data.push(codex_app_server_protocol::SkillsListEntry {
cwd,
skills,
errors,
});
}
Ok(SkillsListResponse { data })
}

View File
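The catalog diff above replaces a fan-out over cwds (`buffer_unordered(SKILLS_LIST_CWD_CONCURRENCY)` with results tagged by index and re-sorted) with a plain sequential loop. The order-preservation trick the removed code relied on — concurrent completion in arbitrary order, then sort by input index — can be sketched without the `futures` crate using OS threads; this is an illustrative analogue, not the production code:

```rust
use std::thread;

// Run f over items concurrently but return results in input order,
// by tagging each result with its index and sorting afterwards —
// the same trick the removed buffer_unordered path used.
fn process_in_order<T, R, F>(items: Vec<T>, f: F) -> Vec<R>
where
    T: Send + 'static,
    R: Send + 'static,
    F: Fn(T) -> R + Send + Sync + Copy + 'static,
{
    let handles: Vec<_> = items
        .into_iter()
        .enumerate()
        .map(|(index, item)| thread::spawn(move || (index, f(item))))
        .collect();
    let mut tagged: Vec<(usize, R)> = handles
        .into_iter()
        .map(|handle| handle.join().expect("worker panicked"))
        .collect();
    tagged.sort_unstable_by_key(|(index, _)| *index);
    tagged.into_iter().map(|(_, result)| result).collect()
}

fn main() {
    let out = process_in_order(vec![3, 1, 2], |x| x * 10);
    assert_eq!(out, vec![30, 10, 20]);
    println!("ordered ok");
}
```

Dropping the concurrency trades latency for simplicity: the sequential loop needs no index bookkeeping because results are pushed in input order by construction.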

@@ -46,6 +46,7 @@ use codex_login::AuthManager;
use codex_model_provider::create_model_provider;
use codex_plugin::PluginId;
use codex_protocol::config_types::WebSearchMode;
use codex_protocol::protocol::Op;
use serde_json::json;
use std::path::PathBuf;
@@ -377,22 +378,14 @@ impl ConfigRequestProcessor {
}
async fn reload_user_config(&self) {
let next_config = match self.load_latest_config(/*fallback_cwd*/ None).await {
Ok(config) => config,
Err(err) => {
tracing::warn!(
"failed to rebuild user config for runtime refresh: {}",
err.message
);
return;
}
};
let thread_ids = self.thread_manager.list_thread_ids().await;
for thread_id in thread_ids {
let Ok(thread) = self.thread_manager.get_thread(thread_id).await else {
continue;
};
thread.refresh_runtime_config(next_config.clone()).await;
if let Err(err) = thread.submit(Op::ReloadUserConfig).await {
tracing::warn!("failed to request user config reload: {err}");
}
}
}

View File

@@ -0,0 +1,359 @@
use std::fmt;
use std::future::Future;
use std::sync::Arc;
use crate::error_code::internal_error;
use crate::error_code::invalid_request;
use crate::outgoing_message::ConnectionRequestId;
use crate::outgoing_message::OutgoingMessageSender;
use async_trait::async_trait;
use base64::Engine;
use base64::engine::general_purpose::STANDARD;
use codex_app_server_protocol::ClientResponsePayload;
use codex_app_server_protocol::DeviceKeyAlgorithm;
use codex_app_server_protocol::DeviceKeyCreateParams;
use codex_app_server_protocol::DeviceKeyCreateResponse;
use codex_app_server_protocol::DeviceKeyProtectionClass;
use codex_app_server_protocol::DeviceKeyPublicParams;
use codex_app_server_protocol::DeviceKeyPublicResponse;
use codex_app_server_protocol::DeviceKeySignParams;
use codex_app_server_protocol::DeviceKeySignPayload;
use codex_app_server_protocol::DeviceKeySignResponse;
use codex_app_server_protocol::JSONRPCErrorError;
use codex_device_key::DeviceKeyBinding;
use codex_device_key::DeviceKeyBindingStore;
use codex_device_key::DeviceKeyCreateRequest;
use codex_device_key::DeviceKeyError;
use codex_device_key::DeviceKeyGetPublicRequest;
use codex_device_key::DeviceKeyInfo;
use codex_device_key::DeviceKeyProtectionPolicy;
use codex_device_key::DeviceKeySignRequest;
use codex_device_key::DeviceKeyStore;
use codex_device_key::RemoteControlClientConnectionAudience;
use codex_device_key::RemoteControlClientConnectionSignPayload;
use codex_device_key::RemoteControlClientEnrollmentAudience;
use codex_device_key::RemoteControlClientEnrollmentSignPayload;
use codex_rollout::state_db::StateDbHandle;
use codex_state::DeviceKeyBindingRecord;
#[derive(Clone)]
pub(crate) struct DeviceKeyRequestProcessor {
outgoing: Arc<OutgoingMessageSender>,
store: DeviceKeyStore,
}
impl DeviceKeyRequestProcessor {
pub(crate) fn new(outgoing: Arc<OutgoingMessageSender>, state_db: StateDbHandle) -> Self {
Self {
outgoing,
store: DeviceKeyStore::new(Arc::new(StateDeviceKeyBindingStore::new(state_db))),
}
}
pub(crate) fn create(
&self,
request_id: ConnectionRequestId,
params: DeviceKeyCreateParams,
device_key_requests_allowed: bool,
) {
self.spawn_request(
request_id,
"device/key/create",
device_key_requests_allowed,
move |store| async move { create_device_key(store, params).await },
);
}
pub(crate) fn public(
&self,
request_id: ConnectionRequestId,
params: DeviceKeyPublicParams,
device_key_requests_allowed: bool,
) {
self.spawn_request(
request_id,
"device/key/public",
device_key_requests_allowed,
move |store| async move { public_device_key(store, params).await },
);
}
pub(crate) fn sign(
&self,
request_id: ConnectionRequestId,
params: DeviceKeySignParams,
device_key_requests_allowed: bool,
) {
self.spawn_request(
request_id,
"device/key/sign",
device_key_requests_allowed,
move |store| async move { sign_device_key(store, params).await },
);
}
fn spawn_request<R, F, Fut>(
&self,
request_id: ConnectionRequestId,
method: &'static str,
device_key_requests_allowed: bool,
run_request: F,
) where
R: Into<ClientResponsePayload> + Send + 'static,
F: FnOnce(DeviceKeyStore) -> Fut + Send + 'static,
Fut: Future<Output = Result<R, JSONRPCErrorError>> + Send + 'static,
{
let store = self.store.clone();
let outgoing = Arc::clone(&self.outgoing);
tokio::spawn(async move {
let result = if !device_key_requests_allowed {
Err(invalid_request(format!(
"{method} is not available over remote transports"
)))
} else {
run_request(store).await
};
outgoing.send_result(request_id, result).await;
});
}
}
async fn create_device_key(
store: DeviceKeyStore,
params: DeviceKeyCreateParams,
) -> Result<DeviceKeyCreateResponse, JSONRPCErrorError> {
let info = store
.create(DeviceKeyCreateRequest {
protection_policy: protection_policy_from_params(params.protection_policy),
binding: DeviceKeyBinding {
account_user_id: params.account_user_id,
client_id: params.client_id,
},
})
.await
.map_err(map_device_key_error)?;
Ok(create_response_from_info(info))
}
async fn public_device_key(
store: DeviceKeyStore,
params: DeviceKeyPublicParams,
) -> Result<DeviceKeyPublicResponse, JSONRPCErrorError> {
let info = store
.get_public(DeviceKeyGetPublicRequest {
key_id: params.key_id,
})
.await
.map_err(map_device_key_error)?;
Ok(public_response_from_info(info))
}
async fn sign_device_key(
store: DeviceKeyStore,
params: DeviceKeySignParams,
) -> Result<DeviceKeySignResponse, JSONRPCErrorError> {
let signature = store
.sign(DeviceKeySignRequest {
key_id: params.key_id,
payload: payload_from_params(params.payload),
})
.await
.map_err(map_device_key_error)?;
Ok(DeviceKeySignResponse {
signature_der_base64: STANDARD.encode(signature.signature_der),
signed_payload_base64: STANDARD.encode(signature.signed_payload),
algorithm: algorithm_from_store(signature.algorithm),
})
}
struct StateDeviceKeyBindingStore {
state_db: StateDbHandle,
}
impl StateDeviceKeyBindingStore {
fn new(state_db: StateDbHandle) -> Self {
Self { state_db }
}
}
impl fmt::Debug for StateDeviceKeyBindingStore {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.debug_struct("StateDeviceKeyBindingStore")
.finish_non_exhaustive()
}
}
#[async_trait]
impl DeviceKeyBindingStore for StateDeviceKeyBindingStore {
async fn get_binding(&self, key_id: &str) -> Result<Option<DeviceKeyBinding>, DeviceKeyError> {
let state_db = self.state_db.clone();
state_db
.get_device_key_binding(key_id)
.await
.map(|record| {
record.map(|record| DeviceKeyBinding {
account_user_id: record.account_user_id,
client_id: record.client_id,
})
})
.map_err(|err| DeviceKeyError::Platform(err.to_string()))
}
async fn put_binding(
&self,
key_id: &str,
binding: &DeviceKeyBinding,
) -> Result<(), DeviceKeyError> {
let state_db = self.state_db.clone();
state_db
.upsert_device_key_binding(&DeviceKeyBindingRecord {
key_id: key_id.to_string(),
account_user_id: binding.account_user_id.clone(),
client_id: binding.client_id.clone(),
})
.await
.map_err(|err| DeviceKeyError::Platform(err.to_string()))
}
}
fn create_response_from_info(info: DeviceKeyInfo) -> DeviceKeyCreateResponse {
DeviceKeyCreateResponse {
key_id: info.key_id,
public_key_spki_der_base64: STANDARD.encode(info.public_key_spki_der),
algorithm: algorithm_from_store(info.algorithm),
protection_class: protection_class_from_store(info.protection_class),
}
}
fn public_response_from_info(info: DeviceKeyInfo) -> DeviceKeyPublicResponse {
DeviceKeyPublicResponse {
key_id: info.key_id,
public_key_spki_der_base64: STANDARD.encode(info.public_key_spki_der),
algorithm: algorithm_from_store(info.algorithm),
protection_class: protection_class_from_store(info.protection_class),
}
}
fn protection_policy_from_params(
protection_policy: Option<codex_app_server_protocol::DeviceKeyProtectionPolicy>,
) -> DeviceKeyProtectionPolicy {
match protection_policy
.unwrap_or(codex_app_server_protocol::DeviceKeyProtectionPolicy::HardwareOnly)
{
codex_app_server_protocol::DeviceKeyProtectionPolicy::HardwareOnly => {
DeviceKeyProtectionPolicy::HardwareOnly
}
codex_app_server_protocol::DeviceKeyProtectionPolicy::AllowOsProtectedNonextractable => {
DeviceKeyProtectionPolicy::AllowOsProtectedNonextractable
}
}
}
fn payload_from_params(payload: DeviceKeySignPayload) -> codex_device_key::DeviceKeySignPayload {
match payload {
DeviceKeySignPayload::RemoteControlClientConnection {
nonce,
audience,
session_id,
target_origin,
target_path,
account_user_id,
client_id,
token_sha256_base64url,
token_expires_at,
scopes,
} => codex_device_key::DeviceKeySignPayload::RemoteControlClientConnection(
RemoteControlClientConnectionSignPayload {
nonce,
audience: remote_control_client_connection_audience_from_protocol(audience),
session_id,
target_origin,
target_path,
account_user_id,
client_id,
token_sha256_base64url,
token_expires_at,
scopes,
},
),
DeviceKeySignPayload::RemoteControlClientEnrollment {
nonce,
audience,
challenge_id,
target_origin,
target_path,
account_user_id,
client_id,
device_identity_sha256_base64url,
challenge_expires_at,
} => codex_device_key::DeviceKeySignPayload::RemoteControlClientEnrollment(
RemoteControlClientEnrollmentSignPayload {
nonce,
audience: remote_control_client_enrollment_audience_from_protocol(audience),
challenge_id,
target_origin,
target_path,
account_user_id,
client_id,
device_identity_sha256_base64url,
challenge_expires_at,
},
),
}
}
fn remote_control_client_connection_audience_from_protocol(
audience: codex_app_server_protocol::RemoteControlClientConnectionAudience,
) -> RemoteControlClientConnectionAudience {
match audience {
codex_app_server_protocol::RemoteControlClientConnectionAudience::RemoteControlClientWebsocket => {
RemoteControlClientConnectionAudience::RemoteControlClientWebsocket
}
}
}
fn remote_control_client_enrollment_audience_from_protocol(
audience: codex_app_server_protocol::RemoteControlClientEnrollmentAudience,
) -> RemoteControlClientEnrollmentAudience {
match audience {
codex_app_server_protocol::RemoteControlClientEnrollmentAudience::RemoteControlClientEnrollment => {
RemoteControlClientEnrollmentAudience::RemoteControlClientEnrollment
}
}
}
fn algorithm_from_store(algorithm: codex_device_key::DeviceKeyAlgorithm) -> DeviceKeyAlgorithm {
match algorithm {
codex_device_key::DeviceKeyAlgorithm::EcdsaP256Sha256 => {
DeviceKeyAlgorithm::EcdsaP256Sha256
}
}
}
fn protection_class_from_store(
protection_class: codex_device_key::DeviceKeyProtectionClass,
) -> DeviceKeyProtectionClass {
match protection_class {
codex_device_key::DeviceKeyProtectionClass::HardwareSecureEnclave => {
DeviceKeyProtectionClass::HardwareSecureEnclave
}
codex_device_key::DeviceKeyProtectionClass::HardwareTpm => {
DeviceKeyProtectionClass::HardwareTpm
}
codex_device_key::DeviceKeyProtectionClass::OsProtectedNonextractable => {
DeviceKeyProtectionClass::OsProtectedNonextractable
}
}
}
fn map_device_key_error(error: DeviceKeyError) -> JSONRPCErrorError {
match &error {
DeviceKeyError::DegradedProtectionNotAllowed { .. }
| DeviceKeyError::HardwareBackedKeysUnavailable
| DeviceKeyError::KeyNotFound
| DeviceKeyError::InvalidPayload(_) => invalid_request(error.to_string()),
DeviceKeyError::Platform(_) | DeviceKeyError::Crypto(_) => {
internal_error(error.to_string())
}
}
}

View File
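The new `device_key_processor` file above persists key bindings through the `DeviceKeyBindingStore` trait, with `StateDeviceKeyBindingStore` adapting it onto the SQLite-backed `StateDbHandle`. The contract that adapter satisfies — upsert and fetch an `(account_user_id, client_id)` pair keyed by `key_id` — can be sketched synchronously with a `HashMap` standing in for the real async state db; this toy store is illustrative only:

```rust
use std::collections::HashMap;

#[derive(Clone, Debug, PartialEq)]
struct DeviceKeyBinding {
    account_user_id: String,
    client_id: String,
}

/// Toy in-memory analogue of StateDeviceKeyBindingStore: bindings are
/// upserted and looked up by key_id, mirroring put_binding/get_binding.
#[derive(Default)]
struct InMemoryBindingStore {
    bindings: HashMap<String, DeviceKeyBinding>,
}

impl InMemoryBindingStore {
    fn put_binding(&mut self, key_id: &str, binding: DeviceKeyBinding) {
        // Insert-or-replace, like the state db upsert.
        self.bindings.insert(key_id.to_string(), binding);
    }

    fn get_binding(&self, key_id: &str) -> Option<DeviceKeyBinding> {
        self.bindings.get(key_id).cloned()
    }
}

fn main() {
    let mut store = InMemoryBindingStore::default();
    let binding = DeviceKeyBinding {
        account_user_id: "acct_123".to_string(),
        client_id: "cli_123".to_string(),
    };
    store.put_binding("dk_123", binding.clone());
    assert_eq!(store.get_binding("dk_123"), Some(binding));
    assert_eq!(store.get_binding("dk_missing"), None);
    println!("binding store ok");
}
```

Keeping the binding behind a trait lets the signing path verify key-to-account bindings without caring whether they came from SQLite or a test double.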

@@ -72,13 +72,6 @@ impl FeedbackRequestProcessor {
{
tracing::info!(target: "feedback_tags", chatgpt_user_id);
}
if let Some(account_id) = self
.auth_manager
.auth_cached()
.and_then(|auth| auth.get_account_id())
{
tracing::info!(target: "feedback_tags", account_id);
}
let snapshot = self.feedback.snapshot(conversation_id);
let thread_id = snapshot.thread_id.clone();
let (feedback_thread_ids, sqlite_feedback_logs, state_db_ctx) = if include_logs {

View File

@@ -108,10 +108,8 @@ fn share_context_for_source(
.cloned()
.map(|remote_plugin_id| PluginShareContext {
remote_plugin_id,
share_url: None,
creator_account_user_id: None,
creator_name: None,
share_targets: None,
}),
MarketplacePluginSource::Git { .. } => None,
}
@@ -617,15 +615,6 @@ impl PluginRequestProcessor {
&visible_skills,
&outcome.plugin.disabled_skill_paths,
),
hooks: outcome
.plugin
.hooks
.into_iter()
.map(|hook| codex_app_server_protocol::PluginHookSummary {
key: hook.key,
event_name: hook.event_name.into(),
})
.collect(),
apps: app_summaries,
mcp_servers: outcome.plugin.mcp_server_names,
}
@@ -1475,15 +1464,8 @@ fn remote_plugin_share_context_to_info(
) -> PluginShareContext {
PluginShareContext {
remote_plugin_id: context.remote_plugin_id,
share_url: context.share_url,
creator_account_user_id: context.creator_account_user_id,
creator_name: context.creator_name,
share_targets: context.share_targets.map(|targets| {
targets
.into_iter()
.map(plugin_share_principal_from_remote)
.collect()
}),
}
}
@@ -1508,7 +1490,6 @@ fn remote_plugin_detail_to_info(
enabled: skill.enabled,
})
.collect(),
hooks: Vec::new(),
apps,
mcp_servers: Vec::new(),
}

View File

@@ -7,7 +7,7 @@ pub(crate) struct ThreadGoalRequestProcessor {
outgoing: Arc<OutgoingMessageSender>,
config: Arc<Config>,
thread_state_manager: ThreadStateManager,
state_db: Option<StateDbHandle>,
state_db: StateDbHandle,
}
impl ThreadGoalRequestProcessor {
@@ -16,7 +16,7 @@ impl ThreadGoalRequestProcessor {
outgoing: Arc<OutgoingMessageSender>,
config: Arc<Config>,
thread_state_manager: ThreadStateManager,
state_db: Option<StateDbHandle>,
state_db: StateDbHandle,
) -> Self {
Self {
thread_manager,
@@ -72,23 +72,6 @@ impl ThreadGoalRequestProcessor {
}
}
pub(crate) async fn pending_resume_goal_state(
&self,
thread: &CodexThread,
) -> (bool, Option<StateDbHandle>) {
let emit_thread_goal_update = self.config.features.enabled(Feature::Goals);
let thread_goal_state_db = if emit_thread_goal_update {
if let Some(state_db) = thread.state_db() {
Some(state_db)
} else {
self.state_db.clone()
}
} else {
None
};
(emit_thread_goal_update, thread_goal_state_db)
}
async fn thread_goal_set_inner(
&self,
request_id: ConnectionRequestId,
@@ -110,7 +93,7 @@ impl ThreadGoalRequestProcessor {
None => find_thread_path_by_id_str(
&self.config.codex_home,
&thread_id.to_string(),
self.state_db.as_deref(),
Some(self.state_db.as_ref()),
)
.await
.map_err(|err| {
@@ -275,7 +258,7 @@ impl ThreadGoalRequestProcessor {
None => find_thread_path_by_id_str(
&self.config.codex_home,
&thread_id.to_string(),
self.state_db.as_deref(),
Some(self.state_db.as_ref()),
)
.await
.map_err(|err| {
@@ -339,7 +322,7 @@ impl ThreadGoalRequestProcessor {
find_thread_path_by_id_str(
&self.config.codex_home,
&thread_id.to_string(),
self.state_db.as_deref(),
Some(self.state_db.as_ref()),
)
.await
.map_err(|err| {
@@ -348,9 +331,7 @@ impl ThreadGoalRequestProcessor {
.ok_or_else(|| invalid_request(format!("thread not found: {thread_id}")))?;
}
self.state_db
.clone()
.ok_or_else(|| internal_error("sqlite state db unavailable for thread goals"))
Ok(self.state_db.clone())
}
async fn emit_thread_goal_snapshot(&self, thread_id: ThreadId) {

View File

@@ -12,6 +12,7 @@ pub(super) struct ListenerTaskContext {
pub(super) thread_list_state_permit: Arc<Semaphore>,
pub(super) fallback_model_provider: String,
pub(super) codex_home: PathBuf,
pub(super) skills_watcher: Arc<SkillsWatcher>,
}
struct UnloadingState {
@@ -226,12 +227,22 @@ pub(super) async fn ensure_listener_task_running(
"thread {conversation_id} is closing; retry after the thread is closed"
)));
};
let config = conversation.config().await;
let environments = conversation.environment_selections().await;
let watch_registration = listener_task_context
.skills_watcher
.register_thread_config(
config.as_ref(),
listener_task_context.thread_manager.as_ref(),
&environments,
)
.await;
let (mut listener_command_rx, listener_generation) = {
let mut thread_state = thread_state.lock().await;
if thread_state.listener_matches(&conversation) {
return Ok(());
}
thread_state.set_listener(cancel_tx, &conversation)
thread_state.set_listener(cancel_tx, &conversation, watch_registration)
};
let ListenerTaskContext {
outgoing,
@@ -242,6 +253,7 @@ pub(super) async fn ensure_listener_task_running(
thread_list_state_permit,
fallback_model_provider,
codex_home,
..
} = listener_task_context;
let outgoing_for_task = Arc::clone(&outgoing);
tokio::spawn(async move {

View File

@@ -1,5 +1,4 @@
use super::*;
use crate::error_code::method_not_found;
const THREAD_LIST_DEFAULT_LIMIT: usize = 25;
const THREAD_LIST_MAX_LIMIT: usize = 100;
@@ -317,6 +316,7 @@ pub(crate) struct ThreadRequestProcessor {
pub(super) thread_goal_processor: ThreadGoalRequestProcessor,
pub(super) state_db: Option<StateDbHandle>,
pub(super) background_tasks: TaskTracker,
pub(super) skills_watcher: Arc<SkillsWatcher>,
}
impl ThreadRequestProcessor {
@@ -335,6 +335,7 @@ impl ThreadRequestProcessor {
thread_list_state_permit: Arc<Semaphore>,
thread_goal_processor: ThreadGoalRequestProcessor,
state_db: Option<StateDbHandle>,
skills_watcher: Arc<SkillsWatcher>,
) -> Self {
Self {
auth_manager,
@@ -351,6 +352,7 @@ impl ThreadRequestProcessor {
thread_goal_processor,
state_db,
background_tasks: TaskTracker::new(),
skills_watcher,
}
}
@@ -592,15 +594,6 @@ impl ThreadRequestProcessor {
.map(|response| Some(response.into()))
}
pub(crate) async fn thread_turns_items_list(
&self,
_params: ThreadTurnsItemsListParams,
) -> Result<Option<ClientResponsePayload>, JSONRPCErrorError> {
Err(method_not_found(
"thread/turns/items/list is not supported yet",
))
}
pub(crate) async fn thread_shell_command(
&self,
request_id: &ConnectionRequestId,
@@ -752,6 +745,7 @@ impl ThreadRequestProcessor {
thread_list_state_permit: self.thread_list_state_permit.clone(),
fallback_model_provider: self.config.model_provider_id.clone(),
codex_home: self.config.codex_home.to_path_buf(),
skills_watcher: Arc::clone(&self.skills_watcher),
}
}
@@ -849,6 +843,7 @@ impl ThreadRequestProcessor {
thread_list_state_permit: self.thread_list_state_permit.clone(),
fallback_model_provider: self.config.model_provider_id.clone(),
codex_home: self.config.codex_home.to_path_buf(),
skills_watcher: Arc::clone(&self.skills_watcher),
};
let request_trace = request_context.request_trace();
let config_manager = self.config_manager.clone();
@@ -1049,7 +1044,6 @@ impl ThreadRequestProcessor {
.collect()
};
let core_dynamic_tool_count = core_dynamic_tools.len();
let NewThread {
thread_id,
thread,
@@ -1615,8 +1609,11 @@ impl ThreadRequestProcessor {
.unarchive_thread(StoreArchiveThreadParams { thread_id })
.await
.map_err(|err| thread_store_archive_error("unarchive", err))?;
let (mut thread, _) =
thread_from_stored_thread(stored_thread, fallback_provider.as_str(), &self.config.cwd);
let summary = summary_from_stored_thread(stored_thread, fallback_provider.as_str())
.ok_or_else(|| {
internal_error(format!("failed to read unarchived thread {thread_id}"))
})?;
let mut thread = summary_to_thread(summary, &self.config.cwd);
thread.status = resolve_thread_status(
self.thread_watch_manager
@@ -2082,9 +2079,7 @@ impl ThreadRequestProcessor {
cursor,
limit,
sort_direction,
items_view,
} = params;
let items_view = items_view.unwrap_or(TurnItemsView::Summary);
let thread_uuid = ThreadId::from_string(&thread_id)
.map_err(|err| invalid_request(format!("invalid thread id: {err}")))?;
@@ -2113,7 +2108,7 @@ impl ThreadRequestProcessor {
} else {
None
};
let mut turns = reconstruct_thread_turns_for_turns_list(
let turns = reconstruct_thread_turns_for_turns_list(
&items,
self.thread_watch_manager
.loaded_status_for_thread(&thread_uuid.to_string())
@@ -2121,41 +2116,6 @@ impl ThreadRequestProcessor {
has_live_running_thread,
active_turn,
);
for turn in &mut turns {
match items_view {
TurnItemsView::NotLoaded => {
turn.items.clear();
turn.items_view = TurnItemsView::NotLoaded;
}
TurnItemsView::Summary => {
let first_user_message = turn
.items
.iter()
.find(|item| matches!(item, ThreadItem::UserMessage { .. }))
.cloned();
let final_agent_message = turn
.items
.iter()
.rev()
.find(|item| matches!(item, ThreadItem::AgentMessage { .. }))
.cloned();
turn.items = match (first_user_message, final_agent_message) {
(Some(user_message), Some(agent_message))
if user_message.id() != agent_message.id() =>
{
vec![user_message, agent_message]
}
(Some(user_message), _) => vec![user_message],
(None, Some(agent_message)) => vec![agent_message],
(None, None) => Vec::new(),
};
turn.items_view = TurnItemsView::Summary;
}
TurnItemsView::Full => {
turn.items_view = TurnItemsView::Full;
}
}
}
let page = paginate_thread_turns(
turns,
cursor.as_deref(),
@@ -2715,10 +2675,10 @@ impl ThreadRequestProcessor {
)));
};
let (emit_thread_goal_update, thread_goal_state_db) = self
.thread_goal_processor
.pending_resume_goal_state(existing_thread.as_ref())
.await;
let emit_thread_goal_update = self.config.features.enabled(Feature::Goals);
let thread_goal_state_db = emit_thread_goal_update
.then(|| self.state_db.clone())
.flatten();
let command = crate::thread_state::ThreadListenerCommand::SendThreadResumeResponse(
Box::new(crate::thread_state::PendingThreadResumeRequest {
@@ -2958,19 +2918,10 @@ impl ThreadRequestProcessor {
}
async fn attach_thread_name(&self, thread_id: ThreadId, thread: &mut Thread) {
if let Ok(stored_thread) = self
.thread_store
.read_thread(StoreReadThreadParams {
thread_id,
include_archived: true,
include_history: false,
})
.await
&& let Some(title) = stored_thread.name.as_deref().map(str::trim)
&& !title.is_empty()
&& stored_thread.preview.trim() != title
if let Some(title) =
title_from_state_db(&self.config, self.state_db.as_ref(), thread_id).await
{
set_thread_name_from_title(thread, title.to_string());
set_thread_name_from_title(thread, title);
}
}
@@ -3259,7 +3210,12 @@ impl ThreadRequestProcessor {
};
let stored_thread = read_result?;
let summary = summary_from_stored_thread(stored_thread, fallback_provider);
let summary =
summary_from_stored_thread(stored_thread, fallback_provider).ok_or_else(|| {
internal_error(
"failed to load conversation summary: thread is missing rollout path",
)
})?;
Ok(GetConversationSummaryResponse { summary })
}
@@ -3543,30 +3499,19 @@ fn normalize_thread_turns_status(
enum ThreadReadViewError {
InvalidRequest(String),
Unsupported(&'static str),
Internal(String),
}
fn thread_read_view_error(err: ThreadReadViewError) -> JSONRPCErrorError {
match err {
ThreadReadViewError::InvalidRequest(message) => invalid_request(message),
ThreadReadViewError::Unsupported(operation) => {
unsupported_thread_store_operation(operation)
}
ThreadReadViewError::Internal(message) => internal_error(message),
}
}
fn unsupported_thread_store_operation(operation: &'static str) -> JSONRPCErrorError {
method_not_found(format!("{operation} is not supported yet"))
}
fn thread_store_list_error(err: ThreadStoreError) -> JSONRPCErrorError {
match err {
ThreadStoreError::InvalidRequest { message } => invalid_request(message),
ThreadStoreError::Unsupported { operation } => {
unsupported_thread_store_operation(operation)
}
err => internal_error(format!("failed to list threads: {err}")),
}
}
@@ -3574,9 +3519,6 @@ fn thread_store_list_error(err: ThreadStoreError) -> JSONRPCErrorError {
fn thread_store_resume_read_error(err: ThreadStoreError) -> JSONRPCErrorError {
match err {
ThreadStoreError::InvalidRequest { message } => invalid_request(message),
ThreadStoreError::Unsupported { operation } => {
unsupported_thread_store_operation(operation)
}
ThreadStoreError::ThreadNotFound { thread_id } => {
invalid_request(format!("no rollout found for thread id {thread_id}"))
}
@@ -3599,7 +3541,6 @@ fn thread_turns_list_history_load_error(
ThreadStoreError::InvalidRequest { message } => {
ThreadReadViewError::InvalidRequest(message)
}
ThreadStoreError::Unsupported { operation } => ThreadReadViewError::Unsupported(operation),
err => ThreadReadViewError::Internal(format!(
"failed to load thread history for thread {thread_id}: {err}"
)),
@@ -3626,7 +3567,6 @@ fn thread_read_history_load_error(
ThreadStoreError::InvalidRequest { message } => {
ThreadReadViewError::InvalidRequest(message)
}
ThreadStoreError::Unsupported { operation } => ThreadReadViewError::Unsupported(operation),
err => ThreadReadViewError::Internal(format!(
"failed to load thread history for thread {thread_id}: {err}"
)),
@@ -3642,9 +3582,6 @@ fn conversation_summary_thread_id_read_error(
ThreadStoreError::InvalidRequest { message } if message == no_rollout_message => {
conversation_summary_not_found_error(conversation_id)
}
ThreadStoreError::Unsupported { operation } => {
unsupported_thread_store_operation(operation)
}
ThreadStoreError::ThreadNotFound { thread_id } if thread_id == conversation_id => {
conversation_summary_not_found_error(conversation_id)
}
@@ -3667,9 +3604,6 @@ fn conversation_summary_rollout_path_read_error(
) -> JSONRPCErrorError {
match err {
ThreadStoreError::InvalidRequest { message } => invalid_request(message),
ThreadStoreError::Unsupported { operation } => {
unsupported_thread_store_operation(operation)
}
err => internal_error(format!(
"failed to load conversation summary from {}: {}",
path.display(),
@@ -3684,9 +3618,6 @@ fn thread_store_write_error(operation: &str, err: ThreadStoreError) -> JSONRPCEr
invalid_request(format!("thread not found: {thread_id}"))
}
ThreadStoreError::InvalidRequest { message } => invalid_request(message),
ThreadStoreError::Unsupported { operation } => {
unsupported_thread_store_operation(operation)
}
err => internal_error(format!("failed to {operation}: {err}")),
}
}
@@ -3694,13 +3625,41 @@ fn thread_store_write_error(operation: &str, err: ThreadStoreError) -> JSONRPCEr
fn thread_store_archive_error(operation: &str, err: ThreadStoreError) -> JSONRPCErrorError {
match err {
ThreadStoreError::InvalidRequest { message } => invalid_request(message),
ThreadStoreError::Unsupported {
operation: unsupported_operation,
} => unsupported_thread_store_operation(unsupported_operation),
err => internal_error(format!("failed to {operation} thread: {err}")),
}
}
async fn title_from_state_db(
config: &Config,
state_db_ctx: Option<&StateDbHandle>,
thread_id: ThreadId,
) -> Option<String> {
if let Some(state_db_ctx) = state_db_ctx
&& let Some(metadata) = state_db_ctx.get_thread(thread_id).await.ok().flatten()
&& let Some(title) = distinct_title(&metadata)
{
return Some(title);
}
find_thread_name_by_id(&config.codex_home, &thread_id)
.await
.ok()
.flatten()
}
fn non_empty_title(metadata: &ThreadMetadata) -> Option<String> {
let title = metadata.title.trim();
(!title.is_empty()).then(|| title.to_string())
}
fn distinct_title(metadata: &ThreadMetadata) -> Option<String> {
let title = non_empty_title(metadata)?;
if metadata.first_user_message.as_deref().map(str::trim) == Some(title.as_str()) {
None
} else {
Some(title)
}
}
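The title helpers added above reduce to a small standalone rule: surface a stored title only when it is non-empty after trimming and does not merely echo the thread's first user message. A self-contained sketch (the `Metadata` struct here is illustrative, not the crate's `ThreadMetadata`):

```rust
// Illustrative stand-in for ThreadMetadata; only the fields the rule needs.
struct Metadata {
    title: String,
    first_user_message: Option<String>,
}

// Mirrors the distinct_title logic in the diff: empty titles and titles that
// duplicate the first user message are suppressed.
fn distinct_title(metadata: &Metadata) -> Option<String> {
    let title = metadata.title.trim();
    if title.is_empty() {
        return None;
    }
    if metadata.first_user_message.as_deref().map(str::trim) == Some(title) {
        return None;
    }
    Some(title.to_string())
}

fn main() {
    let echoed = Metadata {
        title: "  fix build  ".into(),
        first_user_message: Some("fix build".into()),
    };
    assert_eq!(distinct_title(&echoed), None);

    let named = Metadata {
        title: "Build pipeline repair".into(),
        first_user_message: Some("fix build".into()),
    };
    assert_eq!(distinct_title(&named), Some("Build pipeline repair".into()));
}
```

The echo check explains why `attach_thread_name` can drop the stored-thread read: a title equal to the preview/first message carries no extra signal, so only genuinely distinct titles are attached.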
fn set_thread_name_from_title(thread: &mut Thread, title: String) {
if title.trim().is_empty() || thread.preview.trim() == title.trim() {
return;
@@ -3764,8 +3723,8 @@ pub(crate) fn thread_from_stored_thread(
fn summary_from_stored_thread(
thread: StoredThread,
fallback_provider: &str,
) -> ConversationSummary {
let path = thread.rollout_path.unwrap_or_default();
) -> Option<ConversationSummary> {
let path = thread.rollout_path?;
let source = with_thread_spawn_agent_metadata(
thread.source,
thread.agent_nickname.clone(),
@@ -3776,7 +3735,7 @@ fn summary_from_stored_thread(
branch: git.branch,
origin_url: git.repository_url,
});
ConversationSummary {
Some(ConversationSummary {
conversation_id: thread.thread_id,
path,
preview: thread.first_user_message.unwrap_or(thread.preview),
@@ -3801,7 +3760,7 @@ fn summary_from_stored_thread(
cli_version: thread.cli_version,
source,
git_info,
}
})
}
#[allow(clippy::too_many_arguments)]

View File

@@ -409,7 +409,8 @@ mod thread_processor_behavior_tests {
history: None,
};
let summary = summary_from_stored_thread(stored_thread, "fallback");
let summary =
summary_from_stored_thread(stored_thread, "fallback").expect("summary should exist");
assert_eq!(
summary.timestamp.as_deref(),

View File

@@ -1,10 +1,5 @@
use super::*;
#[cfg(test)]
use chrono::DateTime;
#[cfg(test)]
use chrono::Utc;
#[cfg(test)]
pub(crate) async fn read_summary_from_rollout(
path: &Path,
@@ -208,7 +203,6 @@ pub(super) fn thread_response_sandbox_policy(
sandbox_policy.into()
}
#[cfg(test)]
fn parse_datetime(timestamp: Option<&str>) -> Option<DateTime<Utc>> {
timestamp.and_then(|ts| {
chrono::DateTime::parse_from_rfc3339(ts)
@@ -235,7 +229,6 @@ pub(super) fn thread_started_notification(mut thread: Thread) -> ThreadStartedNo
ThreadStartedNotification { thread }
}
#[cfg(test)]
pub(crate) fn summary_to_thread(
summary: ConversationSummary,
fallback_cwd: &AbsolutePathBuf,
@@ -264,7 +257,6 @@ pub(crate) fn summary_to_thread(
AbsolutePathBuf::relative_to_current_dir(path_utils::normalize_for_native_workdir(cwd))
.unwrap_or_else(|err| {
warn!(
conversation_id = %conversation_id,
path = %path.display(),
"failed to normalize thread cwd while summarizing thread: {err}"
);
@@ -282,7 +274,7 @@ pub(crate) fn summary_to_thread(
created_at: created_at.map(|dt| dt.timestamp()).unwrap_or(0),
updated_at: updated_at.map(|dt| dt.timestamp()).unwrap_or(0),
status: ThreadStatus::NotLoaded,
path: (!path.as_os_str().is_empty()).then_some(path),
path: Some(path),
cwd,
cli_version,
agent_nickname: source.get_nickname(),

View File

@@ -13,6 +13,7 @@ pub(crate) struct TurnRequestProcessor {
thread_state_manager: ThreadStateManager,
thread_watch_manager: ThreadWatchManager,
thread_list_state_permit: Arc<Semaphore>,
skills_watcher: Arc<SkillsWatcher>,
}
impl TurnRequestProcessor {
@@ -29,6 +30,7 @@ impl TurnRequestProcessor {
thread_state_manager: ThreadStateManager,
thread_watch_manager: ThreadWatchManager,
thread_list_state_permit: Arc<Semaphore>,
skills_watcher: Arc<SkillsWatcher>,
) -> Self {
Self {
auth_manager,
@@ -42,6 +44,7 @@ impl TurnRequestProcessor {
thread_state_manager,
thread_watch_manager,
thread_list_state_permit,
skills_watcher,
}
}
@@ -1087,6 +1090,7 @@ impl TurnRequestProcessor {
thread_list_state_permit: self.thread_list_state_permit.clone(),
fallback_model_provider: self.config.model_provider_id.clone(),
codex_home: self.config.codex_home.to_path_buf(),
skills_watcher: Arc::clone(&self.skills_watcher),
}
}

View File

@@ -6,7 +6,6 @@ use std::pin::Pin;
use std::sync::Arc;
use codex_app_server_protocol::ClientRequestSerializationScope;
use futures::future::join_all;
use tokio::sync::Mutex;
use tracing::Instrument;
@@ -44,61 +43,35 @@ pub(crate) enum RequestSerializationQueueKey {
},
}
#[derive(Clone, Copy, Debug, Eq, PartialEq)]
pub(crate) enum RequestSerializationAccess {
Exclusive,
SharedRead,
}
impl RequestSerializationQueueKey {
pub(crate) fn from_scope(
connection_id: ConnectionId,
scope: ClientRequestSerializationScope,
) -> (Self, RequestSerializationAccess) {
) -> Self {
match scope {
ClientRequestSerializationScope::Global(name) => {
(Self::Global(name), RequestSerializationAccess::Exclusive)
}
ClientRequestSerializationScope::GlobalSharedRead(name) => {
(Self::Global(name), RequestSerializationAccess::SharedRead)
}
ClientRequestSerializationScope::Thread { thread_id } => (
Self::Thread { thread_id },
RequestSerializationAccess::Exclusive,
),
ClientRequestSerializationScope::ThreadPath { path } => (
Self::ThreadPath { path },
RequestSerializationAccess::Exclusive,
),
ClientRequestSerializationScope::CommandExecProcess { process_id } => (
ClientRequestSerializationScope::Global(name) => Self::Global(name),
ClientRequestSerializationScope::Thread { thread_id } => Self::Thread { thread_id },
ClientRequestSerializationScope::ThreadPath { path } => Self::ThreadPath { path },
ClientRequestSerializationScope::CommandExecProcess { process_id } => {
Self::CommandExecProcess {
connection_id,
process_id,
},
RequestSerializationAccess::Exclusive,
),
ClientRequestSerializationScope::Process { process_handle } => (
Self::Process {
connection_id,
process_handle,
},
RequestSerializationAccess::Exclusive,
),
ClientRequestSerializationScope::FuzzyFileSearchSession { session_id } => (
Self::FuzzyFileSearchSession { session_id },
RequestSerializationAccess::Exclusive,
),
ClientRequestSerializationScope::FsWatch { watch_id } => (
Self::FsWatch {
connection_id,
watch_id,
},
RequestSerializationAccess::Exclusive,
),
ClientRequestSerializationScope::McpOauth { server_name } => (
Self::McpOauth { server_name },
RequestSerializationAccess::Exclusive,
),
}
}
ClientRequestSerializationScope::Process { process_handle } => Self::Process {
connection_id,
process_handle,
},
ClientRequestSerializationScope::FuzzyFileSearchSession { session_id } => {
Self::FuzzyFileSearchSession { session_id }
}
ClientRequestSerializationScope::FsWatch { watch_id } => Self::FsWatch {
connection_id,
watch_id,
},
ClientRequestSerializationScope::McpOauth { server_name } => {
Self::McpOauth { server_name }
}
}
}
}
@@ -125,24 +98,17 @@ impl QueuedInitializedRequest {
}
}
struct QueuedSerializedRequest {
access: RequestSerializationAccess,
request: QueuedInitializedRequest,
}
#[derive(Clone, Default)]
pub(crate) struct RequestSerializationQueues {
inner: Arc<Mutex<HashMap<RequestSerializationQueueKey, VecDeque<QueuedSerializedRequest>>>>,
inner: Arc<Mutex<HashMap<RequestSerializationQueueKey, VecDeque<QueuedInitializedRequest>>>>,
}
impl RequestSerializationQueues {
pub(crate) async fn enqueue(
&self,
key: RequestSerializationQueueKey,
access: RequestSerializationAccess,
request: QueuedInitializedRequest,
) {
let request = QueuedSerializedRequest { access, request };
let should_spawn = {
let mut queues = self.inner.lock().await;
match queues.get_mut(&key) {
@@ -168,27 +134,13 @@ impl RequestSerializationQueues {
async fn drain(self, key: RequestSerializationQueueKey) {
loop {
let requests = {
let request = {
let mut queues = self.inner.lock().await;
let Some(queue) = queues.get_mut(&key) else {
return;
};
match queue.pop_front() {
Some(request) => {
let access = request.access;
let mut requests = vec![request];
if access == RequestSerializationAccess::SharedRead {
while queue.front().is_some_and(|request| {
request.access == RequestSerializationAccess::SharedRead
}) {
let Some(request) = queue.pop_front() else {
break;
};
requests.push(request);
}
}
requests
}
Some(request) => request,
None => {
queues.remove(&key);
return;
@@ -196,7 +148,7 @@ impl RequestSerializationQueues {
}
};
join_all(requests.into_iter().map(|request| request.request.run())).await;
request.run().await;
}
}
}
@@ -206,7 +158,6 @@ mod tests {
use super::*;
use pretty_assertions::assert_eq;
use std::sync::Arc;
use tokio::sync::broadcast;
use tokio::sync::mpsc;
use tokio::sync::oneshot;
use tokio::time::Duration;
@@ -244,7 +195,6 @@ mod tests {
queues
.enqueue(
key.clone(),
RequestSerializationAccess::Exclusive,
QueuedInitializedRequest::new(Arc::clone(&gate), async move {
tx.send(value).expect("receiver should be open");
}),
@@ -280,7 +230,6 @@ mod tests {
queues
.enqueue(
RequestSerializationQueueKey::Global("blocked"),
RequestSerializationAccess::Exclusive,
QueuedInitializedRequest::new(gate(), async move {
let _ = blocked_rx.await;
}),
@@ -289,7 +238,6 @@ mod tests {
queues
.enqueue(
RequestSerializationQueueKey::Global("other"),
RequestSerializationAccess::Exclusive,
QueuedInitializedRequest::new(gate(), async move {
ran_tx.send(()).expect("receiver should be open");
}),
@@ -320,7 +268,6 @@ mod tests {
queues
.enqueue(
key.clone(),
RequestSerializationAccess::Exclusive,
QueuedInitializedRequest::new(Arc::clone(&live_gate), async move {
tx.send(FIRST_REQUEST_VALUE)
.expect("receiver should be open");
@@ -334,7 +281,6 @@ mod tests {
queues
.enqueue(
key.clone(),
RequestSerializationAccess::Exclusive,
QueuedInitializedRequest::new(closed_gate, async move {
tx.send(SECOND_REQUEST_VALUE)
.expect("receiver should be open");
@@ -347,7 +293,6 @@ mod tests {
queues
.enqueue(
key,
RequestSerializationAccess::Exclusive,
QueuedInitializedRequest::new(live_gate, async move {
tx.send(THIRD_REQUEST_VALUE)
.expect("receiver should be open");
@@ -391,7 +336,6 @@ mod tests {
queues
.enqueue(
key.clone(),
RequestSerializationAccess::Exclusive,
QueuedInitializedRequest::new(Arc::clone(&live_gate), async move {
tx.send(FIRST_REQUEST_VALUE)
.expect("receiver should be open");
@@ -405,7 +349,6 @@ mod tests {
queues
.enqueue(
key,
RequestSerializationAccess::Exclusive,
QueuedInitializedRequest::new(live_gate.clone(), async move {
tx.send(SECOND_REQUEST_VALUE)
.expect("receiver should be open");
@@ -442,241 +385,4 @@ mod tests {
None
);
}
#[tokio::test]
async fn same_key_shared_reads_run_concurrently() {
let queues = RequestSerializationQueues::default();
let key = RequestSerializationQueueKey::Global("test");
let (blocker_started_tx, blocker_started_rx) = oneshot::channel::<()>();
let (blocker_release_tx, blocker_release_rx) = oneshot::channel::<()>();
let (started_tx, mut started_rx) = mpsc::unbounded_channel();
let (release_tx, _) = broadcast::channel::<()>(/*capacity*/ 1);
queues
.enqueue(
key.clone(),
RequestSerializationAccess::Exclusive,
QueuedInitializedRequest::new(gate(), async move {
blocker_started_tx
.send(())
.expect("receiver should be open");
let _ = blocker_release_rx.await;
}),
)
.await;
timeout(queue_drain_timeout(), blocker_started_rx)
.await
.expect("blocker should start")
.expect("sender should be open");
for value in [FIRST_REQUEST_VALUE, SECOND_REQUEST_VALUE] {
let started_tx = started_tx.clone();
let mut release_rx = release_tx.subscribe();
queues
.enqueue(
key.clone(),
RequestSerializationAccess::SharedRead,
QueuedInitializedRequest::new(gate(), async move {
started_tx.send(value).expect("receiver should be open");
let _ = release_rx.recv().await;
}),
)
.await;
}
drop(started_tx);
blocker_release_tx
.send(())
.expect("blocker should still be waiting");
let mut started = Vec::new();
for _ in 0..2 {
started.push(
timeout(queue_drain_timeout(), started_rx.recv())
.await
.expect("timed out waiting for shared read")
.expect("sender should be open"),
);
}
assert_eq!(started, vec![FIRST_REQUEST_VALUE, SECOND_REQUEST_VALUE]);
release_tx
.send(())
.expect("shared reads should still be waiting");
}
#[tokio::test]
async fn exclusive_write_waits_for_running_shared_reads() {
let queues = RequestSerializationQueues::default();
let key = RequestSerializationQueueKey::Global("test");
let (blocker_started_tx, blocker_started_rx) = oneshot::channel::<()>();
let (blocker_release_tx, blocker_release_rx) = oneshot::channel::<()>();
let (read_started_tx, mut read_started_rx) = mpsc::unbounded_channel();
let (read_release_tx, _) = broadcast::channel::<()>(/*capacity*/ 1);
let (write_started_tx, write_started_rx) = oneshot::channel::<()>();
queues
.enqueue(
key.clone(),
RequestSerializationAccess::Exclusive,
QueuedInitializedRequest::new(gate(), async move {
blocker_started_tx
.send(())
.expect("receiver should be open");
let _ = blocker_release_rx.await;
}),
)
.await;
timeout(queue_drain_timeout(), blocker_started_rx)
.await
.expect("blocker should start")
.expect("sender should be open");
for value in [FIRST_REQUEST_VALUE, SECOND_REQUEST_VALUE] {
let read_started_tx = read_started_tx.clone();
let mut read_release_rx = read_release_tx.subscribe();
queues
.enqueue(
key.clone(),
RequestSerializationAccess::SharedRead,
QueuedInitializedRequest::new(gate(), async move {
read_started_tx
.send(value)
.expect("receiver should be open");
let _ = read_release_rx.recv().await;
}),
)
.await;
}
queues
.enqueue(
key.clone(),
RequestSerializationAccess::Exclusive,
QueuedInitializedRequest::new(gate(), async move {
write_started_tx.send(()).expect("receiver should be open");
}),
)
.await;
drop(read_started_tx);
blocker_release_tx
.send(())
.expect("blocker should still be waiting");
for _ in 0..2 {
timeout(queue_drain_timeout(), read_started_rx.recv())
.await
.expect("timed out waiting for shared read")
.expect("sender should be open");
}
let mut write_started_rx = Box::pin(write_started_rx);
timeout(shutdown_wait_timeout(), &mut write_started_rx)
.await
.expect_err("write should wait for running shared reads");
read_release_tx
.send(())
.expect("shared reads should still be waiting");
timeout(queue_drain_timeout(), &mut write_started_rx)
.await
.expect("write should start after shared reads finish")
.expect("sender should be open");
}
#[tokio::test]
async fn later_shared_reads_do_not_jump_ahead_of_queued_write() {
let queues = RequestSerializationQueues::default();
let key = RequestSerializationQueueKey::Global("test");
let (blocker_started_tx, blocker_started_rx) = oneshot::channel::<()>();
let (blocker_release_tx, blocker_release_rx) = oneshot::channel::<()>();
let (first_read_started_tx, first_read_started_rx) = oneshot::channel::<()>();
let (first_read_release_tx, first_read_release_rx) = oneshot::channel::<()>();
let (write_started_tx, write_started_rx) = oneshot::channel::<()>();
let (write_release_tx, write_release_rx) = oneshot::channel::<()>();
let (later_read_started_tx, later_read_started_rx) = oneshot::channel::<()>();
queues
.enqueue(
key.clone(),
RequestSerializationAccess::Exclusive,
QueuedInitializedRequest::new(gate(), async move {
blocker_started_tx
.send(())
.expect("receiver should be open");
let _ = blocker_release_rx.await;
}),
)
.await;
timeout(queue_drain_timeout(), blocker_started_rx)
.await
.expect("blocker should start")
.expect("sender should be open");
queues
.enqueue(
key.clone(),
RequestSerializationAccess::SharedRead,
QueuedInitializedRequest::new(gate(), async move {
first_read_started_tx
.send(())
.expect("receiver should be open");
let _ = first_read_release_rx.await;
}),
)
.await;
queues
.enqueue(
key.clone(),
RequestSerializationAccess::Exclusive,
QueuedInitializedRequest::new(gate(), async move {
write_started_tx.send(()).expect("receiver should be open");
let _ = write_release_rx.await;
}),
)
.await;
queues
.enqueue(
key.clone(),
RequestSerializationAccess::SharedRead,
QueuedInitializedRequest::new(gate(), async move {
later_read_started_tx
.send(())
.expect("receiver should be open");
}),
)
.await;
blocker_release_tx
.send(())
.expect("blocker should still be waiting");
timeout(queue_drain_timeout(), first_read_started_rx)
.await
.expect("first read should start")
.expect("sender should be open");
let mut write_started_rx = Box::pin(write_started_rx);
timeout(shutdown_wait_timeout(), &mut write_started_rx)
.await
.expect_err("write should wait for the first read");
let mut later_read_started_rx = Box::pin(later_read_started_rx);
timeout(shutdown_wait_timeout(), &mut later_read_started_rx)
.await
.expect_err("later read should wait behind the queued write");
first_read_release_tx
.send(())
.expect("first read should still be waiting");
timeout(queue_drain_timeout(), &mut write_started_rx)
.await
.expect("write should start after the first read")
.expect("sender should be open");
timeout(shutdown_wait_timeout(), &mut later_read_started_rx)
.await
.expect_err("later read should still wait while the write is running");
write_release_tx
.send(())
.expect("write should still be waiting");
timeout(queue_drain_timeout(), &mut later_read_started_rx)
.await
.expect("later read should start after the write")
.expect("sender should be open");
}
}
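With the shared-read mode removed, each serialization key is again a plain FIFO: the first enqueue for a fresh key spawns a drainer, and the drainer pops and runs exactly one request at a time until the queue empties. A single-threaded sketch of those semantics (names are illustrative; the real type is async and keyed by `RequestSerializationQueueKey`):

```rust
use std::cell::RefCell;
use std::collections::HashMap;
use std::collections::VecDeque;
use std::rc::Rc;

// Minimal keyed FIFO: requests are boxed closures, one queue per key.
struct Queues {
    inner: HashMap<&'static str, VecDeque<Box<dyn FnOnce()>>>,
}

impl Queues {
    fn new() -> Self {
        Self { inner: HashMap::new() }
    }

    // Returns true when the caller should start a drainer for this key,
    // i.e. the queue did not exist before this enqueue.
    fn enqueue(&mut self, key: &'static str, request: Box<dyn FnOnce()>) -> bool {
        let should_drain = !self.inner.contains_key(key);
        self.inner.entry(key).or_default().push_back(request);
        should_drain
    }

    // Run queued requests strictly one at a time, then drop the empty queue.
    fn drain(&mut self, key: &'static str) {
        while let Some(request) = self.inner.get_mut(key).and_then(VecDeque::pop_front) {
            request();
        }
        self.inner.remove(key);
    }
}

fn main() {
    let ran: Rc<RefCell<Vec<i32>>> = Rc::new(RefCell::new(Vec::new()));
    let mut queues = Queues::new();
    for value in [1, 2, 3] {
        let ran = Rc::clone(&ran);
        let spawn = queues.enqueue("global", Box::new(move || ran.borrow_mut().push(value)));
        // Only the first enqueue for a fresh key reports that a drainer is needed.
        assert_eq!(spawn, value == 1);
    }
    queues.drain("global");
    assert_eq!(*ran.borrow(), vec![1, 2, 3]);
}
```

The deleted tests covered the one behavior this model cannot express — batching consecutive `SharedRead` requests into a concurrent `join_all` — which is exactly the capability the diff removes.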

View File

@@ -0,0 +1,112 @@
use std::sync::Arc;
use std::time::Duration;
use crate::outgoing_message::OutgoingMessageSender;
use codex_app_server_protocol::ServerNotification;
use codex_app_server_protocol::SkillsChangedNotification;
use codex_core::ThreadManager;
use codex_core::config::Config;
use codex_core::file_watcher::FileWatcher;
use codex_core::file_watcher::FileWatcherSubscriber;
use codex_core::file_watcher::Receiver;
use codex_core::file_watcher::ThrottledWatchReceiver;
use codex_core::file_watcher::WatchPath;
use codex_core::file_watcher::WatchRegistration;
use codex_core::skills::SkillsLoadInput;
use codex_core::skills::SkillsManager;
use codex_protocol::protocol::TurnEnvironmentSelection;
use tracing::warn;
#[cfg(not(test))]
const WATCHER_THROTTLE_INTERVAL: Duration = Duration::from_secs(10);
#[cfg(test)]
const WATCHER_THROTTLE_INTERVAL: Duration = Duration::from_millis(50);
pub(crate) struct SkillsWatcher {
subscriber: FileWatcherSubscriber,
}
impl SkillsWatcher {
pub(crate) fn new(
skills_manager: Arc<SkillsManager>,
outgoing: Arc<OutgoingMessageSender>,
) -> Arc<Self> {
let file_watcher = match FileWatcher::new() {
Ok(file_watcher) => Arc::new(file_watcher),
Err(err) => {
warn!("failed to initialize skills file watcher: {err}");
Arc::new(FileWatcher::noop())
}
};
let (subscriber, rx) = file_watcher.add_subscriber();
Self::spawn_event_loop(rx, skills_manager, outgoing);
Arc::new(Self { subscriber })
}
pub(crate) async fn register_thread_config(
&self,
config: &Config,
thread_manager: &ThreadManager,
environments: &[TurnEnvironmentSelection],
) -> WatchRegistration {
let Some(environment_selection) = environments.first() else {
return WatchRegistration::default();
};
let Some(environment) = thread_manager
.environment_manager()
.get_environment(&environment_selection.environment_id)
else {
warn!(
"failed to register skills watcher for unknown environment `{}`",
environment_selection.environment_id
);
return WatchRegistration::default();
};
if environment.is_remote() {
return WatchRegistration::default();
}
let plugins_input = config.plugins_config_input();
let plugins_manager = thread_manager.plugins_manager();
let plugin_outcome = plugins_manager.plugins_for_config(&plugins_input).await;
let skills_input = SkillsLoadInput::new(
config.cwd.clone(),
plugin_outcome.effective_plugin_skill_roots(),
config.config_layer_stack.clone(),
config.bundled_skills_enabled(),
);
let roots = thread_manager
.skills_manager()
.skill_roots_for_config(&skills_input, Some(environment.get_filesystem()))
.await
.into_iter()
.map(|root| WatchPath {
path: root.path.into_path_buf(),
recursive: true,
})
.collect();
self.subscriber.register_paths(roots)
}
fn spawn_event_loop(
rx: Receiver,
skills_manager: Arc<SkillsManager>,
outgoing: Arc<OutgoingMessageSender>,
) {
let mut rx = ThrottledWatchReceiver::new(rx, WATCHER_THROTTLE_INTERVAL);
let Ok(handle) = tokio::runtime::Handle::try_current() else {
warn!("skills watcher listener skipped: no Tokio runtime available");
return;
};
handle.spawn(async move {
while rx.recv().await.is_some() {
skills_manager.clear_cache();
outgoing
.send_server_notification(ServerNotification::SkillsChanged(
SkillsChangedNotification {},
))
.await;
}
});
}
}
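The watcher's event loop relies on `ThrottledWatchReceiver` to collapse bursts of filesystem events into a single `SkillsChanged` notification per throttle window. A synchronous sketch of that coalescing behavior, assuming only std channels (this is an illustration, not the `codex_core::file_watcher` API):

```rust
use std::sync::mpsc;
use std::time::Duration;
use std::time::Instant;

// Block for one event, then swallow everything else that arrives within the
// throttle window, delivering a single coalesced notification.
fn recv_throttled(rx: &mpsc::Receiver<()>, interval: Duration) -> Option<()> {
    rx.recv().ok()?; // channel closed with nothing pending => no notification
    let deadline = Instant::now() + interval;
    loop {
        let now = Instant::now();
        if now >= deadline {
            return Some(());
        }
        match rx.recv_timeout(deadline - now) {
            Ok(()) => continue, // coalesce the burst
            Err(mpsc::RecvTimeoutError::Timeout) => return Some(()),
            // An event was already consumed, so a notification is still due.
            Err(mpsc::RecvTimeoutError::Disconnected) => return Some(()),
        }
    }
}

fn main() {
    let (tx, rx) = mpsc::channel();
    for _ in 0..5 {
        tx.send(()).unwrap();
    }
    drop(tx);
    let mut notifications = 0;
    while recv_throttled(&rx, Duration::from_millis(20)).is_some() {
        notifications += 1;
    }
    assert_eq!(notifications, 1); // five raw events, one throttled delivery
}
```

This matches the intent of the 10-second production interval (50 ms under `cfg(test)`): rapid saves inside a skill root trigger one cache clear and one notification, not one per write.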

View File

@@ -7,6 +7,7 @@ use codex_app_server_protocol::Turn;
use codex_app_server_protocol::TurnError;
use codex_core::CodexThread;
use codex_core::ThreadConfigSnapshot;
use codex_core::file_watcher::WatchRegistration;
use codex_protocol::ThreadId;
use codex_protocol::protocol::EventMsg;
use codex_protocol::protocol::RolloutItem;
@@ -77,6 +78,7 @@ pub(crate) struct ThreadState {
listener_command_tx: Option<mpsc::UnboundedSender<ThreadListenerCommand>>,
current_turn_history: ThreadHistoryBuilder,
listener_thread: Option<Weak<CodexThread>>,
watch_registration: WatchRegistration,
}
impl ThreadState {
@@ -91,6 +93,7 @@ impl ThreadState {
&mut self,
cancel_tx: oneshot::Sender<()>,
conversation: &Arc<CodexThread>,
watch_registration: WatchRegistration,
) -> (mpsc::UnboundedReceiver<ThreadListenerCommand>, u64) {
if let Some(previous) = self.cancel_tx.replace(cancel_tx) {
let _ = previous.send(());
@@ -99,6 +102,7 @@ impl ThreadState {
let (listener_command_tx, listener_command_rx) = mpsc::unbounded_channel();
self.listener_command_tx = Some(listener_command_tx);
self.listener_thread = Some(Arc::downgrade(conversation));
self.watch_registration = watch_registration;
(listener_command_rx, self.listener_generation)
}
@@ -109,6 +113,7 @@ impl ThreadState {
self.listener_command_tx = None;
self.current_turn_history.reset();
self.listener_thread = None;
self.watch_registration = WatchRegistration::default();
}
pub(crate) fn set_experimental_raw_events(&mut self, enabled: bool) {

View File

@@ -36,7 +36,7 @@ pub(crate) struct ConnectionState {
impl ConnectionState {
pub(crate) fn new(
_origin: ConnectionOrigin,
origin: ConnectionOrigin,
outbound_initialized: Arc<AtomicBool>,
outbound_experimental_api_enabled: Arc<AtomicBool>,
outbound_opted_out_notification_methods: Arc<RwLock<HashSet<String>>>,
@@ -45,7 +45,7 @@ impl ConnectionState {
outbound_initialized,
outbound_experimental_api_enabled,
outbound_opted_out_notification_methods,
session: Arc::new(ConnectionSessionState::new()),
session: Arc::new(ConnectionSessionState::new(origin)),
}
}
}

View File

@@ -6,8 +6,6 @@ license.workspace = true
[lib]
path = "lib.rs"
test = false
doctest = false
[lints]
workspace = true

View File

@@ -89,7 +89,6 @@ use codex_app_server_protocol::ThreadRollbackParams;
use codex_app_server_protocol::ThreadSetNameParams;
use codex_app_server_protocol::ThreadShellCommandParams;
use codex_app_server_protocol::ThreadStartParams;
use codex_app_server_protocol::ThreadTurnsItemsListParams;
use codex_app_server_protocol::ThreadTurnsListParams;
use codex_app_server_protocol::ThreadUnarchiveParams;
use codex_app_server_protocol::ThreadUnsubscribeParams;
@@ -523,15 +522,6 @@ impl McpProcess {
self.send_request("thread/turns/list", params).await
}
/// Send a `thread/turns/items/list` JSON-RPC request.
pub async fn send_thread_turns_items_list_request(
&mut self,
params: ThreadTurnsItemsListParams,
) -> anyhow::Result<i64> {
let params = Some(serde_json::to_value(params)?);
self.send_request("thread/turns/items/list", params).await
}
/// Send a `model/list` JSON-RPC request.
pub async fn send_list_models_request(
&mut self,

View File

@@ -3,41 +3,20 @@ use app_test_support::McpProcess;
use app_test_support::create_fake_rollout;
use app_test_support::rollout_path;
use app_test_support::to_response;
use codex_app_server::in_process;
use codex_app_server::in_process::InProcessStartArgs;
use codex_app_server_protocol::ClientInfo;
use codex_app_server_protocol::ClientRequest;
use codex_app_server_protocol::ConversationSummary;
use codex_app_server_protocol::GetConversationSummaryParams;
use codex_app_server_protocol::GetConversationSummaryResponse;
use codex_app_server_protocol::InitializeCapabilities;
use codex_app_server_protocol::InitializeParams;
use codex_app_server_protocol::JSONRPCError;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::RequestId;
use codex_arg0::Arg0DispatchPaths;
use codex_config::CloudRequirementsLoader;
use codex_config::LoaderOverrides;
use codex_core::config::ConfigBuilder;
use codex_exec_server::EnvironmentManager;
use codex_feedback::CodexFeedback;
use codex_protocol::ThreadId;
use codex_protocol::models::BaseInstructions;
use codex_protocol::protocol::SessionSource;
use codex_protocol::protocol::ThreadMemoryMode;
use codex_thread_store::CreateThreadParams;
use codex_thread_store::InMemoryThreadStore;
use codex_thread_store::ThreadEventPersistenceMode;
use codex_thread_store::ThreadPersistenceMetadata;
use codex_thread_store::ThreadStore;
use codex_utils_absolute_path::AbsolutePathBuf;
use pretty_assertions::assert_eq;
use std::path::Path;
use std::path::PathBuf;
use std::sync::Arc;
use tempfile::TempDir;
use tokio::time::timeout;
use uuid::Uuid;
const DEFAULT_READ_TIMEOUT: std::time::Duration = std::time::Duration::from_secs(10);
const FILENAME_TS: &str = "2025-01-02T12-00-00";
@@ -68,9 +47,7 @@ fn normalized_canonical_path(path: impl AsRef<Path>) -> Result<PathBuf> {
}
fn normalized_summary_path(mut summary: ConversationSummary) -> Result<ConversationSummary> {
if !summary.path.as_os_str().is_empty() {
summary.path = normalized_canonical_path(summary.path)?;
}
summary.path = normalized_canonical_path(&summary.path)?;
Ok(summary)
}
@@ -145,87 +122,6 @@ async fn get_conversation_summary_by_rollout_path_rejects_remote_thread_store()
Ok(())
}
#[tokio::test]
async fn get_conversation_summary_by_thread_id_reads_pathless_store_thread() -> Result<()> {
let codex_home = TempDir::new()?;
let store_id = Uuid::new_v4().to_string();
create_config_toml_with_in_memory_thread_store(codex_home.path(), &store_id)?;
let store = InMemoryThreadStore::for_id(store_id.clone());
let _in_memory_store = InMemoryThreadStoreId { store_id };
let thread_id = ThreadId::from_string("00000000-0000-4000-8000-000000000125")?;
store
.create_thread(CreateThreadParams {
thread_id,
forked_from_id: None,
source: SessionSource::Cli,
thread_source: None,
base_instructions: BaseInstructions::default(),
dynamic_tools: Vec::new(),
metadata: ThreadPersistenceMetadata {
cwd: None,
model_provider: "test-provider".to_string(),
memory_mode: ThreadMemoryMode::Disabled,
},
event_persistence_mode: ThreadEventPersistenceMode::default(),
})
.await?;
let loader_overrides = LoaderOverrides::without_managed_config_for_tests();
let config = ConfigBuilder::default()
.codex_home(codex_home.path().to_path_buf())
.fallback_cwd(Some(codex_home.path().to_path_buf()))
.loader_overrides(loader_overrides.clone())
.build()
.await?;
let client = in_process::start(InProcessStartArgs {
arg0_paths: Arg0DispatchPaths::default(),
config: Arc::new(config),
cli_overrides: Vec::new(),
loader_overrides,
cloud_requirements: CloudRequirementsLoader::default(),
thread_config_loader: Arc::new(codex_config::NoopThreadConfigLoader),
feedback: CodexFeedback::new(),
log_db: None,
state_db: None,
environment_manager: Arc::new(EnvironmentManager::default_for_tests()),
config_warnings: Vec::new(),
session_source: SessionSource::Cli,
enable_codex_api_key_env: false,
initialize: InitializeParams {
client_info: ClientInfo {
name: "codex-app-server-tests".to_string(),
title: None,
version: "0.1.0".to_string(),
},
capabilities: Some(InitializeCapabilities {
experimental_api: true,
..Default::default()
}),
},
channel_capacity: in_process::DEFAULT_IN_PROCESS_CHANNEL_CAPACITY,
})
.await?;
let result = client
.request(ClientRequest::GetConversationSummary {
request_id: RequestId::Integer(1),
params: GetConversationSummaryParams::ThreadId {
conversation_id: thread_id,
},
})
.await?
.expect("getConversationSummary should succeed");
let GetConversationSummaryResponse { summary } = serde_json::from_value(result)?;
assert_eq!(summary.conversation_id, thread_id);
assert_eq!(summary.path, PathBuf::new());
assert_eq!(summary.cwd, PathBuf::new());
assert_eq!(summary.model_provider, "test");
client.shutdown().await?;
Ok(())
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn get_conversation_summary_by_relative_rollout_path_resolves_from_codex_home() -> Result<()>
{
@@ -261,39 +157,3 @@ async fn get_conversation_summary_by_relative_rollout_path_resolves_from_codex_h
assert_eq!(normalized_summary_path(received.summary)?, expected);
Ok(())
}
struct InMemoryThreadStoreId {
store_id: String,
}
impl Drop for InMemoryThreadStoreId {
fn drop(&mut self) {
InMemoryThreadStore::remove_id(&self.store_id);
}
}
fn create_config_toml_with_in_memory_thread_store(
codex_home: &Path,
store_id: &str,
) -> std::io::Result<()> {
std::fs::write(
codex_home.join("config.toml"),
format!(
r#"
model = "mock-model"
approval_policy = "never"
sandbox_mode = "read-only"
experimental_thread_store = {{ type = "in_memory", id = "{store_id}" }}
model_provider = "mock_provider"
[model_providers.mock_provider]
name = "Mock provider for test"
base_url = "http://127.0.0.1:1/v1"
wire_api = "responses"
request_max_retries = 0
stream_max_retries = 0
"#
),
)
}
@@ -0,0 +1,119 @@
use super::connection_handling_websocket::connect_websocket;
use super::connection_handling_websocket::create_config_toml;
use super::connection_handling_websocket::read_error_for_id;
use super::connection_handling_websocket::read_response_for_id;
use super::connection_handling_websocket::send_initialize_request;
use super::connection_handling_websocket::send_request;
use super::connection_handling_websocket::spawn_websocket_server;
use anyhow::Result;
use app_test_support::McpProcess;
use app_test_support::create_mock_responses_server_sequence_unchecked;
use codex_app_server_protocol::RequestId;
use pretty_assertions::assert_eq;
use serde_json::json;
use tempfile::TempDir;
use tokio::time::Duration;
use tokio::time::timeout;
#[cfg(any(target_os = "macos", windows))]
const DEFAULT_READ_TIMEOUT: Duration = Duration::from_secs(60);
#[cfg(not(any(target_os = "macos", windows)))]
const DEFAULT_READ_TIMEOUT: Duration = Duration::from_secs(10);
async fn initialized_mcp(codex_home: &TempDir) -> Result<McpProcess> {
let mut mcp = McpProcess::new(codex_home.path()).await?;
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize()).await??;
Ok(mcp)
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn device_key_create_rejects_empty_account_user_id() -> Result<()> {
let codex_home = TempDir::new()?;
let mut mcp = initialized_mcp(&codex_home).await?;
let request_id = mcp
.send_raw_request(
"device/key/create",
Some(json!({
"accountUserId": "",
"clientId": "cli_123",
})),
)
.await?;
let error = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_error_message(RequestId::Integer(request_id)),
)
.await??;
assert_eq!(error.error.code, -32600);
assert_eq!(
error.error.message,
"invalid device key payload: accountUserId must not be empty"
);
Ok(())
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn device_key_methods_are_rejected_over_websocket() -> Result<()> {
let server = create_mock_responses_server_sequence_unchecked(Vec::new()).await;
let codex_home = TempDir::new()?;
create_config_toml(codex_home.path(), &server.uri(), "never")?;
let (mut process, bind_addr) = spawn_websocket_server(codex_home.path()).await?;
let mut ws = connect_websocket(bind_addr).await?;
send_initialize_request(&mut ws, /*id*/ 1, "device_key_ws_test").await?;
let initialize_response = read_response_for_id(&mut ws, /*id*/ 1).await?;
assert_eq!(initialize_response.id, RequestId::Integer(1));
let cases = [
(
"device/key/create",
json!({
"accountUserId": "acct_123",
"clientId": "cli_123",
}),
),
(
"device/key/public",
json!({
"keyId": "device-key-123",
}),
),
(
"device/key/sign",
json!({
"keyId": "device-key-123",
"payload": {
"type": "remoteControlClientConnection",
"nonce": "nonce-123",
"audience": "remote_control_client_websocket",
"sessionId": "wssess_123",
"targetOrigin": "https://chatgpt.com",
"targetPath": "/api/codex/remote/control/client",
"accountUserId": "acct_123",
"clientId": "cli_123",
"tokenExpiresAt": 4_102_444_800i64,
"tokenSha256Base64url": "47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU",
"scopes": ["remote_control_controller_websocket"],
},
}),
),
];
for (index, (method, params)) in cases.into_iter().enumerate() {
let id = 2 + index as i64;
send_request(&mut ws, method, id, Some(params)).await?;
let error = read_error_for_id(&mut ws, id).await?;
assert_eq!(error.error.code, -32600);
assert_eq!(
error.error.message,
format!("{method} is not available over remote transports")
);
}
process.kill().await?;
Ok(())
}
@@ -10,6 +10,7 @@ mod config_rpc;
mod connection_handling_websocket;
#[cfg(unix)]
mod connection_handling_websocket_unix;
mod device_key;
mod dynamic_tools;
mod experimental_api;
mod experimental_feature_list;

Some files were not shown because too many files have changed in this diff.