Compare commits

..

92 Commits

Author SHA1 Message Date
Matthew Zeng
18937d89e2 Replace local startup tracing with production telemetry 2026-05-11 13:26:31 -07:00
Matthew Zeng
7f013f1cf6 Add opt-in startup latency tracing 2026-05-11 12:40:51 -07:00
viyatb-oai
d0fa2d81d8 feat(connectors): support managed app tool approval requirements (#21061)
## Why

Managed requirements can already centrally disable apps, but they could
not express the per-tool app approval rules that normal config already
supports. That left admins without a way to enforce connector tool
approvals through `/etc/codex/requirements.toml` or cloud requirements.

## What changed

- Extend app requirements with per-tool `approval_mode` entries.
- Merge managed app tool requirements across managed sources while
preserving higher-precedence exact tool settings.
- Apply managed tool approvals separately from user app config so
managed policy is matched only on raw MCP `tool.name`, while user config
keeps the existing raw-name-then-title convenience fallback.
- Add coverage for local requirements, cloud requirements parsing,
managed-over-user precedence, and a title-collision case that must not
widen managed auto-approval.

## Configuration shape

Local `/etc/codex/requirements.toml` and cloud requirements use the same
TOML shape:

```toml
[apps.connector_123123.tools."calendar/list_events"]
approval_mode = "approve"
```

This is a per-tool approval rule keyed by app ID and raw MCP tool name,
not an app-level boolean such as `apps.connector_123123.approve = true`.
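
A rough sketch of the matching rule this implies, with hypothetical types and function names (not the actual codex-core API): managed requirements are looked up only by raw MCP tool name, while user config keeps the raw-name-then-title fallback.

```rust
use std::collections::HashMap;

struct ToolApprovals {
    // raw MCP tool.name (or, for user config, also a tool title) -> approval_mode
    by_name: HashMap<String, String>,
}

fn managed_approval<'a>(managed: &'a ToolApprovals, raw_name: &str) -> Option<&'a str> {
    // Managed policy is matched only on the raw MCP `tool.name`.
    managed.by_name.get(raw_name).map(String::as_str)
}

fn user_approval<'a>(user: &'a ToolApprovals, raw_name: &str, title: &str) -> Option<&'a str> {
    // User config keeps the existing raw-name-then-title convenience fallback.
    user.by_name
        .get(raw_name)
        .or_else(|| user.by_name.get(title))
        .map(String::as_str)
}
```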
2026-05-11 19:08:26 +00:00
viyatb-oai
6506765168 fix(permissions): preserve managed deny-read during escalation (#15977)
## Why

Managed filesystem `deny_read` requirements are administrator-enforced
restrictions on specific paths. Once those requirements are active,
Codex should not drop them just because an execution path would
otherwise leave the sandbox.

Before this change, an explicit escalation, a prefix-rule allow, a
sandbox-denial retry, or an app-server legacy sandbox override could
rebuild the runtime policy without those managed read-deny entries and
expose a path the administrator had marked unreadable.

This is narrower than general sandbox-mode constraints. If an enterprise
only sets `allowed_sandbox_modes`, a trusted `prefix_rule(..., decision
= "allow")` can still run its matching command unsandboxed; this PR only
preserves managed filesystem `deny_read` restrictions across those
paths.
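
As a rough illustration of the invariant, with assumed names rather than the actual policy types: any rebuild of the runtime filesystem policy re-applies the managed deny-read entries, even when the rebuilt policy would otherwise be broader.

```rust
use std::path::PathBuf;

struct FsPolicy {
    deny_read: Vec<PathBuf>,
    // Deny entries that came from managed requirements and must
    // survive any later policy rebuild or escalation.
    managed_deny_read: Vec<PathBuf>,
}

fn rebuild_for_escalation(previous: &FsPolicy, requested: FsPolicy) -> FsPolicy {
    let mut next = requested;
    // Re-apply managed deny-read entries even when the escalated policy
    // would otherwise run unsandboxed or widen filesystem access.
    for path in &previous.managed_deny_read {
        if !next.deny_read.contains(path) {
            next.deny_read.push(path.clone());
        }
        if !next.managed_deny_read.contains(path) {
            next.managed_deny_read.push(path.clone());
        }
    }
    next
}
```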

## What Changed

- Mark filesystem policies built from managed `deny_read` requirements
so callers can tell when those deny entries must survive escalation.
- Preserve managed deny-read entries when runtime permission profiles
are rebuilt through protocol, app-server, or legacy sandbox-policy
compatibility paths.
- Keep managed deny-read attempts inside the selected sandbox on the
first attempt and after sandbox-denial retries.
- Preserve the same behavior in the zsh-fork escalation path, including
prefix-rule-driven escalation.
- Add a regression test showing the opposite case too: without managed
deny-read, a prefix-rule allow still chooses unsandboxed execution.

## Verification

Targeted automated verification:

```shell
cargo test -p codex-core shell_request_escalation_execution_is_explicit -- --nocapture
cargo test -p codex-core prefix_rule_uses_unsandboxed_execution_without_managed_deny_read -- --nocapture
cargo test -p codex-core prefix_rule_preserves_managed_deny_read_escalation -- --nocapture
cargo test -p codex-protocol permission_profile_round_trip_preserves_filesystem_policy_metadata -- --nocapture
cargo test -p codex-protocol preserving_deny_entries_keeps_unrestricted_policy_enforceable -- --nocapture
cargo test -p codex-app-server-protocol permission_profile_file_system_permissions_preserves_policy_metadata -- --nocapture
cargo check -p codex-app-server -p codex-tui
```

Smoke-test invocations:

```shell
# macOS exact deny + allowed control
codex exec --skip-git-repo-check -C "$ROOT" \
  -c 'default_permissions="deny_read_smoke"' \
  -c 'permissions.deny_read_smoke.filesystem={":minimal"="read",":project_roots"={"."="write","secrets"="none","future-secret"="none","**/*.env"="none"}}' \
  'Run shell commands only. Print the contents of allowed.txt. Then test whether reading secrets/exact-secret.txt succeeds without printing that file if it does. End with exactly two lines: allowed=<contents> and exact_secret=<BLOCKED or READABLE>.'

# Linux exact deny + allowed control
codex exec --skip-git-repo-check -C "$ROOT" \
  -c 'default_permissions="deny_read_smoke"' \
  -c 'permissions.deny_read_smoke.filesystem={":minimal"="read",glob_scan_max_depth=3,":project_roots"={"."="write","secrets"="none","future-secret"="none","**/*.env"="none"}}' \
  'Run shell commands only. Print the contents of allowed.txt. Then test whether reading secrets/exact-secret.txt succeeds without printing that file if it does. End with exactly two lines: allowed=<contents> and exact_secret=<BLOCKED or READABLE>.'
```

Observed manual smoke matrix:

| Case | macOS Seatbelt | Linux bubblewrap |
| --- | --- | --- |
| `cat allowed.txt` | Pass | Pass |
| `cat secrets/exact-secret.txt` | Blocked | Blocked |
| `cat envs/root.env` | Blocked | Blocked |
| `cat envs/nested/one.env` | Blocked | Blocked |
| `cat envs/nested/two.env` | Blocked | Blocked |
| `cat alias-to-secrets/exact-secret.txt` | Blocked | Blocked |
| Missing denied path | A file created after sandbox setup remained unreadable | Creation was blocked by the reserved missing-path placeholder, and the placeholder was cleaned up after exit |
| Real `codex exec` shell turn | Pass | Pass |

Notes:

- The Linux smoke run used the fallback glob walker because the devbox
did not have `rg` installed.
- The smoke matrix verifies the end-to-end filesystem behavior on macOS
and Linux; the escalation-specific behavior is covered by the focused
tests above.

---------

Co-authored-by: Codex <noreply@openai.com>
Co-authored-by: Charlie Marsh <charliemarsh@openai.com>
2026-05-11 11:49:44 -07:00
Owen Lin
7bddb3083d fix(app-server): thread history redaction for remote clients (#22178)
## Summary

Remote clients can still receive large `thread/resume` histories when
prior turns include MCP tool call payloads or image-generation results.
This adds a temporary response-only redaction path for the known remote
client names.

Longer term we will move towards fully paginated APIs backed by SQLite.
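
A minimal sketch of the response-only gate, using hypothetical item types; only the two remote client names below are taken from this change.

```rust
fn is_remote_redaction_target(client_name: &str) -> bool {
    matches!(
        client_name,
        "codex_chatgpt_android_remote" | "codex_chatgpt_ios_remote"
    )
}

enum ThreadItem {
    McpToolCall { payload: Option<String> },
    ImageGeneration,
    Other,
}

fn redact_for_resume(client_name: &str, items: Vec<ThreadItem>) -> Vec<ThreadItem> {
    if !is_remote_redaction_target(client_name) {
        return items;
    }
    items
        .into_iter()
        // Drop image-generation items entirely for these clients.
        .filter(|item| !matches!(item, ThreadItem::ImageGeneration))
        .map(|item| match item {
            // Strip the payload-bearing fields, keep the item itself.
            ThreadItem::McpToolCall { .. } => ThreadItem::McpToolCall { payload: None },
            other => other,
        })
        .collect()
}
```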

## Changes

- Redact MCP tool call payload-bearing fields in `thread/resume`
responses for `codex_chatgpt_android_remote` and
`codex_chatgpt_ios_remote`.
- Drop `imageGeneration` items from those `thread/resume` responses.
- Keep redaction out of persisted rollout files, `thread/read`,
`thread/turns/list`, live notifications, and token usage replay.
- Cover the behavior with app-server helper tests and a v2 resume
integration test that checks both remote clients plus a non-target
control client.

## Testing

- `cargo test -p codex-app-server thread_resume_redaction`
- `cargo test -p codex-app-server
thread_resume_redacts_payloads_for_chatgpt_remote_clients`
2026-05-11 11:45:25 -07:00
Felipe Coury
90bd445e7f fix(exec-server): suppress Windows taskkill output (#22058)
## Summary

This is the `exec-server` follow-up to #21759.

#21759 fixed the Windows `taskkill` output leak for the `rmcp-client`
MCP teardown path, but #22050 showed that `exec-server` still had a
parallel `taskkill /T /F` cleanup path in
`exec-server/src/connection.rs`. Because that command inherited the
parent stdio handles, Windows could still print `SUCCESS:` lines into
the user's terminal during stdio child cleanup.

This change silences that remaining `exec-server` callsite by
redirecting `taskkill` stdin, stdout, and stderr to `Stdio::null()`.
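
A minimal sketch of the silenced callsite; the argument and error handling here are assumptions, but the `Stdio::null()` redirection is the substance of the change.

```rust
use std::process::{Command, Stdio};

fn kill_windows_process_tree(pid: u32) -> std::io::Result<()> {
    let status = Command::new("taskkill")
        .args(["/T", "/F", "/PID"])
        .arg(pid.to_string())
        // Redirect all stdio so `SUCCESS:` lines never reach the terminal.
        .stdin(Stdio::null())
        .stdout(Stdio::null())
        .stderr(Stdio::null())
        .status()?;
    // The exit status is still checked so kill semantics stay unchanged.
    if !status.success() {
        // fall back / log as before (elided here)
    }
    Ok(())
}
```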

## What Changed

- add a Windows-only `Stdio` import in `exec-server/src/connection.rs`
- redirect the `taskkill` command in `kill_windows_process_tree` to
`Stdio::null()` for stdin, stdout, and stderr
- keep the existing kill semantics unchanged by still checking
`.status()` and preserving the existing fallback/logging behavior

## How to Test

Manual validation is Windows-only, so I did not run the UI repro path
locally here.

1. On Windows, use a Codex build from this branch.
2. Exercise an `exec-server` stdio flow that spawns a child process tree
and then triggers transport cleanup.
3. Confirm the child process tree is still torn down.
4. Confirm the terminal no longer shows `SUCCESS: The process with PID
... has been terminated.` lines during cleanup.

Targeted tests:
- `cargo test -p codex-exec-server
client::tests::dropping_stdio_client_terminates_spawned_process --
--exact`
- `cargo test -p codex-exec-server
client::tests::malformed_stdio_message_terminates_spawned_process --
--exact`

Notes:
- `cargo test -p codex-exec-server` still hits unrelated local macOS
`sandbox-exec: sandbox_apply: Operation not permitted` failures in
`tests/file_system.rs`.

## References

- Fixes the remaining callsite discussed in #22050
- Related earlier fix: #21759
2026-05-11 15:40:56 -03:00
Dylan Hurd
e783dab44c fix(exec-policy): use is_known_safe_command less (#20305)
## Summary
Restricts behavior of `is_known_safe_command` only to modes where it is
explicitly part of the documented behavior:
- when `environment_lacks_sandbox_protections`
- in `AskForApproval::UnlessTrusted`

Notably, as a result of this, escalations for commands that pass
`is_known_safe_command` are no longer auto-approved in
`AskForApproval::OnRequest` or `AskForApproval::Granular`.
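
Roughly, the narrowed gate looks like this (illustrative names, not the actual codex-core code):

```rust
enum AskForApproval {
    UnlessTrusted,
    OnRequest,
    Granular,
}

fn may_auto_approve(
    policy: &AskForApproval,
    environment_lacks_sandbox_protections: bool,
    command: &[String],
) -> bool {
    // The safe-command list is consulted only in the two documented cases.
    let consult_safe_list = environment_lacks_sandbox_protections
        || matches!(policy, AskForApproval::UnlessTrusted);
    consult_safe_list && is_known_safe_command(command)
}

fn is_known_safe_command(_command: &[String]) -> bool {
    // stand-in for the real allow-list check
    false
}
```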

## Testing
- [x] Updated unit tests
- [x] Updated approvals scenario tests.

---------

Co-authored-by: Codex <noreply@openai.com>
2026-05-11 11:37:53 -07:00
canvrno-oai
eaf05c9002 Unified mentions in TUI (#19068)
This PR replaces the TUI’s file-only `@mention` popup with a unified
mentions experience. Typing `@...` now searches across filesystem
matches, installed plugins, and skills in one popup, with result types
clearly labeled and selectable from the same flow.

- Adds a unified `@mentions` popup that returns:
  - plugins
  - skills
  - files
  - directories

- Adds search modes so users can narrow the popup without changing their
query:
  - All Results _(default/same as Codex App)_
  - Filesystem Only
  - Plugins _(...and skills)_

- Preserves existing insertion behavior:
  - selected file paths are inserted into the prompt
  - paths with spaces are quoted
  - image file selections still attach as images when possible
  - selecting a plugin or skill inserts the corresponding `$name`
  - the composer records the canonical mention binding, such as
`plugin://...` or the skill path

- Expanded `@mentions` rendering:
  - type tags for Plugin, Skill, File, and Dir
  - distinct plugin/filesystem colors
  - stable fixed-height layout (8 rows)
  - truncation behavior for narrow terminals

Note:
- The unified mentions popup does not display app connectors under
`@mention` results for Codex App parity. Connector mentions remain
available through the existing `$mention` path.


https://github.com/user-attachments/assets/f93781ed-57d3-4cb5-9972-675bc5f3ef3f
2026-05-11 11:34:52 -07:00
jif-oai
b401666ca5 Add process-scoped SQLite telemetry (#22154)
## Summary
- add SQLite init, backfill-gate, and fallback telemetry without
introducing a cross-cutting state-db access wrapper
- install one process-scoped telemetry sink after OTEL startup and let
low-level state/rollout paths emit through it directly
- add process-start metrics for the process owners that initialize
SQLite

---------

Co-authored-by: Owen Lin <owen@openai.com>
2026-05-11 11:32:40 -07:00
rhan-oai
cf6342b75b [codex-analytics] add turn tool counts to turn events (#21431)
## Summary
- accumulate completed tool-item counts per turn from the item lifecycle
- populate the reserved count fields on `codex_turn_event`
- add reducer coverage for zero-count turns and mixed completed tool
items

## Why
PR #17090 moved tool-item analytics onto the item lifecycle, so the turn
reducer can now derive the per-turn tool counts from the same completed
items instead of leaving the reserved fields null.

## Validation
- `just fmt`
- `cargo test -p codex-analytics`
2026-05-11 18:18:02 +00:00
Won Park
0dbad2a348 Make auto-review denial short-circuit use a rolling review window (#22110)
## Why

Long-running turns can accumulate enough denied auto-review decisions to
trip the global short-circuit even when those denials are spread far
apart. The breaker should still stop genuinely bad loops, but it should
judge recent behavior instead of lifetime turn history.
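
A minimal sketch of a rolling-window breaker of this shape, using the thresholds described below (illustrative types, not the actual codex-core implementation):

```rust
use std::collections::VecDeque;

const WINDOW: usize = 50;
const WINDOW_DENIAL_LIMIT: usize = 10;
const CONSECUTIVE_DENIAL_LIMIT: usize = 3;

#[derive(Default)]
struct AutoReviewBreaker {
    recent: VecDeque<bool>, // true = denied, capped at WINDOW entries
    consecutive_denials: usize,
}

impl AutoReviewBreaker {
    /// Record one review outcome; returns true when the breaker should trip.
    fn record(&mut self, denied: bool) -> bool {
        if self.recent.len() == WINDOW {
            self.recent.pop_front(); // older denials fall out of the window
        }
        self.recent.push_back(denied);
        self.consecutive_denials = if denied { self.consecutive_denials + 1 } else { 0 };

        let denials_in_window = self.recent.iter().filter(|d| **d).count();
        // Trip on a short burst or on too many denials within the window.
        self.consecutive_denials >= CONSECUTIVE_DENIAL_LIMIT
            || denials_in_window >= WINDOW_DENIAL_LIMIT
    }
}
```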

## What changed

- Replaced the lifetime `10 total denials` threshold with `10 denials in
the last 50 reviews`.
- Kept the existing `3 consecutive denials` interrupt behavior
unchanged.
- Tracked recent auto-review outcomes in the circuit breaker and updated
the warning copy to report the rolling-window count.
- Renamed the new rolling-window coverage to `auto_review_*` test names.
- Added coverage that confirms older denials fall out of the 50-review
window and no longer trigger the breaker.

## Validation

- `just fmt`
- `cargo test -p codex-core guardian_rejection_circuit_breaker --lib`
- `cargo test -p codex-core auto_review_rejection_circuit_breaker --lib`
2026-05-11 11:03:11 -07:00
Eric Traut
1e65b3e0af Fix goal update and add /goal edit command in TUI (#21954)
## Why

Users have requested the ability to edit a goal's objective after a goal
has been created. This PR exposes a new `/goal edit` command in the TUI
to address this request.

In the process of implementing this, I also noticed an existing bug in
the goal runtime. When a goal's objective is updated through the
`thread/goal/set` app server API, the goal runtime didn't emit a new
steering prompt to tell the agent about the new objective. This PR also
fixes this hole.

## What Changed

- Adds `/goal edit` in the TUI, opening an edit box prefilled with the
current goal objective.
- Keeps active and paused goals in their current state, resets completed
goals to active, keeps budget-limited goals budget-limited, and
preserves the existing token budget.
- Changes the existing `thread/goal/set` behavior so editing an
objective preserves goal accounting instead of resetting it. The older
reset-on-new-objective behavior was left over from before
`thread/goal/clear`; clients that need to reset accounting can now clear
the existing goal and create a new one.
- Reuses the existing goal set API path; this does not add or change
app-server protocol surface area.
- Adds a dedicated goal runtime steering prompt when an externally
persisted goal mutation changes the objective, so active turns receive
the updated objective.

## Validation

- Make sure `/goal edit` returns an error if no goal currently exists
- Make sure `/goal edit` displays an edit box that can be optionally
canceled with no side effects
- Make sure that an edited goal results in a steer so the agent starts
pursuing the new objective
- Make sure the new objective is reflected in the goal if you use
`/goal` to display the goal summary
- Make sure that `/goal edit` doesn't reset the token budget or time/token
accounting on the updated goal
2026-05-11 10:49:19 -07:00
jif-oai
32b1ae7099 chore: drop built-in MCPs (#22173)
Drop something that was never used
2026-05-11 19:45:08 +02:00
Ruslan Nigmatullin
a124ddb854 app-server: remove TCP websocket listener (#21843)
## Why

The app-server no longer needs to expose a TCP websocket listener.
Keeping that transport also kept around a separate listener/auth surface
that is unnecessary now that local clients can use stdio or the
Unix-domain control socket, while remote connectivity is handled by
`remote_control`.

## What Changed

- Removed `ws://IP:PORT` parsing and the `AppServerTransport::WebSocket`
startup path.
- Deleted the app-server websocket listener auth module and removed
related CLI flags/dependencies.
- Kept websocket framing only where it is still needed: over the
Unix-domain control socket and in the outbound `remote_control`
connection.
- Updated app-server CLI/help text and `app-server/README.md` to
document only `stdio://`, `unix://`, `unix://PATH`, and `off` for local
transports.
- Converted affected app-server integration coverage from TCP websocket
listeners to UDS-backed websocket connections, and added a parse test
that rejects `ws://` listen URLs.
- Removed the now-unused workspace `constant_time_eq` dependency and
refreshed `Cargo.lock` after `cargo shear` caught the drift.
- Moved test app-server UDS socket paths to short Unix temp paths so
macOS Bazel test sandboxes do not exceed Unix socket path limits.

## Verification

- Added/updated tests around UDS websocket transport behavior and
`ws://` listen URL rejection.
- `cargo shear`
- `cargo metadata --no-deps --format-version 1`
- `cargo test -p codex-app-server unix_socket_transport`
- `cargo test -p codex-app-server unix_socket_disconnect`
- `just fix -p codex-app-server`
- `git diff --check`

Local full Rust test execution was blocked before compilation by an
external fetch failure for the pinned `nornagon/crossterm` git
dependency. `just bazel-lock-update` and `just bazel-lock-check` were
retried after the manifest cleanup but remain blocked by external
BuildBuddy/V8 fetch timeouts.
2026-05-11 10:17:26 -07:00
Eric Traut
f10ddc3f13 Use goal preview metadata for goal-first threads (#21981)
Fixes #20792

## Why

`/goal`-first threads are valid resumable threads, but they can be
missing from `codex resume` and app recents because discovery depends on
metadata derived from a normal first user message.

PR #21489 attempted to fix this by using the goal objective as
`first_user_message`. Review feedback pointed out that
`first_user_message` does more than provide visible text today: it gates
listing, supplies preview text, and participates in deciding whether a
later title should surface as a distinct thread name. Reusing it for the
goal objective could leave a `/goal`-first thread with
`first_user_message=<goal>` and `title=<later prompt>`, even though the
goal should only provide the initial visible preview.

This PR follows that feedback: it keeps `first_user_message` as is and
introduces a new `preview` field to separate concerns. The `preview`
field is populated from the first user message or the goal objective. We
can extend it in the future to include other sources.

## What Changed

- Added internal thread `preview` metadata in `codex-state`, including a
SQLite migration that backfills from `first_user_message` and from
existing `thread_goals` objectives when needed.
- Treated `ThreadGoalUpdated` as preview-bearing metadata so goal-first
threads can be listed and searched without mutating
`first_user_message`.
- Updated rollout listing, state queries, thread-store conversion, and
app-server mapping to use preview metadata while continuing to expose
the existing public `preview` field.
- Preserved title/name distinctness behavior around literal
`first_user_message`, so a later normal prompt after `/goal` does not
surface as a separate name just because the goal supplied the initial
preview.
- Preserved compatibility for older/internal metadata writes by deriving
preview from `first_user_message` when explicit preview metadata is
absent.

## Verification

- Manually verified that a thread that starts with a `/goal <objective>`
shows up in the resume picker.
2026-05-11 10:12:46 -07:00
Eric Traut
96836e15ed Improve goal continuation based on feedback (#22045)
## Summary

This PR updates the goal continuation prompt to address feedback from
early adopters. There are two primary changes:

1. Goal continuation and budget-limit steering prompts now use hidden
user-context messages instead of hidden developer messages.
2. The goal continuation prompt is refined to improve the model's
ability to fully complete the active goal rather than stop at a smaller
or merely passing subset.

The user-message transition is important for two reasons. First, it
eliminates an issue where older steering messages could be responded to
again after a new turn. Second, it works better with compaction because
user messages are treated differently from developer messages during
compaction.

The prompt refinements make persistence explicit, ground work in current
evidence, encourage `update_plan` for multi-step progress visibility,
and require stronger completion audits before calling `update_goal`. It
also removes the elapsed-time reporting in the prompt; I saw evidence
that this was causing the model to shortcut work as it became nervous
about time.

These changes were tested with evals. Chriss4123 has also been running
independent evals in
[#19910](https://github.com/openai/codex/issues/19910), and many of the
improvements in this PR were suggested by him.

## Verification

- Tested with evals.
- Added and updated focused `codex-core` coverage for hidden goal user
context, continuation and budget-limit request shape, prompt rendering,
and objective delimiter escaping.
2026-05-11 09:51:21 -07:00
Eric Traut
c03eb20d8d Fix side conversation config inheritance (#22106)
Addresses #22101

## Why

Side conversations are ephemeral forks of the active thread, but `/side`
was building its fork config from the app-level config after refreshing
it from disk. If the parent thread had runtime settings that differed
from the current persisted defaults, such as a changed model, reasoning
effort, permissions, reviewer, or fast-mode selection, the side
conversation could start with different behavior than its parent.

## What changed

- Build side fork config from the active parent `ChatWidget` config,
then overlay the parent thread's effective model, reasoning effort,
service tier, and fast-mode opt-out state.
- Forward model reasoning summary, verbosity, personality, web search
mode, and service-tier overrides through TUI app-server
start/resume/fork lifecycle params.
- Add focused tests for parent runtime inheritance, side developer
guardrail preservation, and lifecycle param forwarding.
2026-05-11 09:47:51 -07:00
Ahmed Ibrahim
69f3183a8e Revert "[codex] Harden overflow auto-compaction recovery" (#22170)
Reverts openai/codex#22141
2026-05-11 19:33:15 +03:00
Ahmed Ibrahim
15e79f3c26 [codex] Harden overflow auto-compaction recovery (#22141)
## Why
Dogfooder feedback exposed two correctness gaps in normal-loop overflow
recovery:

1. a sampling request that hit `ContextWindowExceeded` could keep
re-entering auto-compaction indefinitely if the compacted retry still
did not fit, and
2. local compact-history rebuilds flattened user messages down to text,
so an overflowing `[image, "what is this?"]` turn could be retried
without the image after compaction.

That means recovery could either fail to terminate cleanly or proceed
with a materially weakened version of the user request.

## What changed
- Move normal-loop `ContextWindowExceeded` handling into the sampling
retry loop, so successful rescue compaction consumes the provider retry
budget instead of creating an unbounded outer-turn loop.
- Keep compacted user-history rebuilds structured:
`collect_user_messages` now carries user `UserInput` content rather than
flattened strings, and `build_compacted_history` reconstructs full user
messages from that structured representation.
- Preserve image inputs while retaining the existing text-budget
truncation behavior for compacted user history.
- Preserve existing compaction-task failure handling and client-session
reset behavior while bounding repeated overflow retries.
- Add focused regression coverage for:
  - recovery after a normal-loop overflow,
  - retry-budget exhaustion after repeated overflow,
  - local recovery preserving image + text input,
  - remote recovery preserving image + text input,
  - remote compaction v2 preserving image + text input, and
  - compaction failure still terminating cleanly.

The main behavior changes are in `codex-rs/core/src/session/turn.rs` and
`codex-rs/core/src/compact.rs`.

## Verification
- Not run locally; relying on PR CI for this update.

---------

Co-authored-by: Codex <noreply@openai.com>
2026-05-11 16:16:49 +00:00
Eric Traut
2229c8daf2 Persist /goal commands in history (#21860)
## Summary

A user reported that `/goal` was not saved to the TUI command history,
which made it unavailable for later recall even though other accepted
input paths persist history entries.

This updates the TUI goal slash-command dispatch so successful `/goal`
invocations append the command text to message history. The change
covers the bare `/goal` menu command, goal control commands such as
`/goal pause`, and objective-setting commands such as `/goal improve
benchmark coverage`.

## Verification

- `cargo test -p codex-tui goal_slash_command -- --nocapture`
2026-05-11 08:43:55 -07:00
Andrey Mishchenko
704ad620f6 Add x-codex-ws-stream-request-start-ms (#22113)
For capturing client-side timing information.
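
A minimal sketch of stamping the header, assuming the value is the client's request-start time in Unix epoch milliseconds (the exact value format is not specified here):

```rust
use std::time::{SystemTime, UNIX_EPOCH};

fn request_start_header() -> (&'static str, String) {
    let ms = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("system clock set before the Unix epoch")
        .as_millis();
    // Attach this pair to the websocket stream request before sending it.
    ("x-codex-ws-stream-request-start-ms", ms.to_string())
}
```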
2026-05-11 08:15:52 -07:00
jif-oai
8e12c12a07 feat: move extensions tool (#22163)
This PR is just moving stuff around
2026-05-11 17:14:43 +02:00
jif-oai
672cc1f669 feat: wire extension tool bundles into core (#22147)
## Why

This is the next narrow step toward moving concrete tool families out of
core. After #22138 introduced `codex-tool-api`, we still needed a real
end-to-end seam that lets an extension own an executable tool definition
once and have core install it without the temporary `extension-api`
wrapper or a dependency on `codex-tools`.

`codex-tool-api` is the small extension-facing execution contract, while
`codex-tools` still has a different job: host-side shared tool metadata
and planning logic that is not “run this contributed tool”, like spec
shaping, namespaces, discovery, code-mode augmentation, and
MCP/dynamic-to-Responses API conversion

## What changed

- Moved the shared leaf tool-spec and JSON Schema types into
`codex-tool-api`, so the executable contract now lives with
[`ToolBundle`](c538758095/codex-rs/tool-api/src/bundle.rs (L19-L70)).
- Replaced the temporary extension-side tool wrapper with direct
`ToolBundle` use in `codex-extension-api`.
- Taught core to collect contributed bundles, include them in spec
planning, register them through
[`ToolRegistryBuilder::register_tool_bundle`](c538758095/codex-rs/core/src/tools/registry.rs (L653-L667)),
and dispatch them through the existing router/runtime path.
- Added focused coverage for contributed tools becoming model-visible
and dispatchable, plus spec-planning coverage for contributed function
and freeform tools.

## Verification

- Added `extension_tool_bundles_are_model_visible_and_dispatchable` in
`core/src/tools/router_tests.rs`.
- Added spec-plan coverage in `core/src/tools/spec_plan_tests.rs` for
contributed extension bundles.

## Related

- Follow-up to #22138
2026-05-11 16:42:29 +02:00
jif-oai
7e15e6db9e [codex] default unknown contributed tools to mutating (#22143)
## Summary
- make the shared `ToolExecutor::is_mutating` default conservative by
returning `true`
- update the trait docs to say read-only tools should opt out explicitly
- add a regression test covering the default behavior

## Why
Hosts use this signal for serialization and approval policy. Treating
unknown contributed tools as read-only lets a write-capable tool
accidentally bypass mutating-tool safeguards if it forgets to override
the hook.
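
A minimal sketch of the conservative default (illustrative trait shape, not the exact codex-tool-api definition):

```rust
trait ToolExecutor {
    /// Read-only tools must opt out explicitly; unknown tools are treated
    /// as mutating so they never bypass mutating-tool safeguards.
    fn is_mutating(&self) -> bool {
        true
    }
}
```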

## Validation
- not run, per request
2026-05-11 14:39:21 +02:00
jif-oai
ebd3d53451 feat: drop CodexExtension (#22140)
Drop `CodexExtension` as not needed for now
2026-05-11 14:19:51 +02:00
jif-oai
95bfea847d refactor: extract executable tool contracts into codex-tool-api (#22138)
## Why
The tool-extraction work needs one shared executable-tool seam that
hosts and tool owners can depend on without reaching into `codex-core`.
Landing that seam first makes the later tool-family ports incremental
and keeps the reusable contract separate from any one migration.

## What changed
- add a new `codex-tool-api` crate and workspace wiring
- move the common executable-tool contracts into that crate:
`ToolBundle`, `ToolDefinition`, `ToolExecutor`, `ToolCall`, `ToolInput`,
`ToolOutput`, `JsonToolOutput`, and `ToolError`
- keep host state generic through `ToolBundle<C>` / `ToolCall<C>` so
later integrations can provide their own runtime context without baking
core types into the API
- carry the host signals the runtime will need later, including
parallel-call support and mutability probing
- leave existing tool families in place for now; this PR only
establishes the reusable API surface
- add the Bazel target and lockfile updates for the new crate

## Testing
- `cargo test -p codex-tool-api`
2026-05-11 13:56:59 +02:00
jif-oai
569ff6a1c4 extension: move git attribution into an extension (#21738)
## Why

Git commit attribution is prompt policy, not session orchestration.
After #21737 adds the extension-registry seam, this moves that
prompt-only behavior out of `codex-core` so `Session` can consume
extension-contributed prompt fragments instead of owning a one-off
policy path itself.

Before this PR, `Session` injected the trailer instruction directly from
`codex-core` ([session
assembly](a57a747eb6/codex-rs/core/src/session/mod.rs (L2733-L2739)),
[helper
module](a57a747eb6/codex-rs/core/src/commit_attribution.rs (L1-L33))).
This branch moves that same responsibility into
[`codex-git-attribution`](b5029a6736/codex-rs/ext/git-attribution/src/lib.rs (L14-L100)).

## What changed

- Added the `codex-git-attribution` extension crate.
- Snapshot `CodexGitCommit` plus `commit_attribution` at thread start,
then contribute the developer-policy fragment through the extension
registry.
- Register the extension in app-server thread extensions.
- Remove the old `codex-core` helper module and direct `Session`
injection path.

This keeps the existing behavior intact: the prompt is only contributed
when `CodexGitCommit` is enabled, blank attribution still disables the
trailer, and the default remains `Codex <noreply@openai.com>`.

## Stack

- Stacked on #21737.
2026-05-11 12:53:15 +02:00
jif-oai
436c0df658 extension: wire extension registries into sessions (#21737)
## Why

[#21736](https://github.com/openai/codex/pull/21736) introduces the
typed extension API, but the runtime does not yet carry a registry
through thread/session startup or give contributors host-owned stores to
read from. This PR wires that host-side path so later feature migrations
can move product-specific behavior behind typed contributions without
adding another bespoke seam directly to `codex-core`.

## What changed

- Thread `ExtensionRegistry<Config>` through `ThreadManager`,
`CodexSpawnArgs`, `Session`, and sub-agent spawn paths.
- Wire `ThreadStartContributor` and `ContextContributor`
- Expose the small supporting surface needed by non-core callers that
construct threads directly, including `empty_extension_registry()`
through `codex-core-api`.

This PR lands the host plumbing only: the app-server registry is still
empty, and concrete feature migrations are intended to follow
separately.
2026-05-11 11:38:18 +02:00
jif-oai
d2c3ebac1f extension: add initial typed extension API (#21736)
## Why

`codex-core` still owns a growing amount of product-specific behavior.
This PR starts the extraction path by introducing a small, typed
first-party extension seam: features can install the contribution
families they actually own, while the host keeps lifecycle and state
ownership instead of pushing a broad service locator into the API.

See `examples/` for an illustration.

## Known limitations
* Tool contract definition will be shared with core
* Fragments must be extracted
* Missing some contributors
2026-05-11 11:06:24 +02:00
xli-oai
2abdeb34d5 Read cached metadata for installed Git plugins (#20825)
## Summary
- Populate `plugin/list` interface metadata for installed Git-sourced
marketplace plugins from the active cached plugin bundle.
- Preserve marketplace category precedence so list behavior matches
`plugin/read`.
- Keep existing fallback behavior when the cache or manifest is missing
or invalid.

## Test Plan
- `cd codex-rs && just fmt`
- `cd codex-rs && cargo test -p codex-core-plugins
list_marketplaces_installed_git_source_reads_metadata_from_cache_without_cloning`
- `cd codex-rs && cargo test -p codex-app-server
plugin_list_returns_installed_git_source_interface_from_cache`
- `cd codex-rs && just fix -p codex-core-plugins`
- `cd codex-rs && just fix -p codex-app-server`
- `git diff --check`

Server-truth check: OpenAI monorepo app-server generated types already
expose `PluginSummary.interface`, and the webview consumes it for plugin
cards. This PR keeps the protocol/schema unchanged and fills the
existing field from the cached installed bundle for Git-backed
cross-repo plugins.
2026-05-10 16:59:57 -07:00
Felipe Coury
5248e3da2b feat(tui): render responsive Markdown tables in TUI (#22052)
## Why

The TUI currently treats Markdown tables as ordinary wrapped text, which
makes table-heavy responses hard to read and brittle across narrow panes
and terminal resizes.

This change teaches the TUI to render Markdown tables responsively while
preserving the raw Markdown source needed to re-render streamed and
finalized transcript content after width changes. The goal is to keep
tables legible during streaming, after resize, and once a turn has
finished, without corrupting scrollback ordering.
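
A rough sketch of the layout decision, with assumed names: render a bordered box-drawing table when the columns fit the pane width, otherwise fall back to the vertical per-row layout.

```rust
enum TableLayout {
    Bordered,
    VerticalRows,
}

fn choose_table_layout(column_widths: &[usize], pane_width: usize) -> TableLayout {
    // One border glyph before each column plus a closing border.
    let bordered_width: usize =
        column_widths.iter().map(|w| w + 1).sum::<usize>() + 1;
    if bordered_width <= pane_width {
        TableLayout::Bordered
    } else {
        TableLayout::VerticalRows
    }
}
```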

## What Changed

- add table detection and responsive table rendering in the Markdown
renderer
- render standard tables with Unicode box-drawing borders when the pane
is wide enough
- add a vertical readability fallback for constrained or dense tables so
narrow panes still show each row clearly
- keep links and `<br>` content inside table cells instead of leaking
text outside the table
- avoid table normalization inside fenced or indented code blocks
- preserve raw streamed Markdown source and keep the active table as a
mutable tail until finalization
- consolidate finalized streamed content into source-backed transcript
cells so post-resize re-rendering stays correct
- add snapshot and targeted streaming/resize regression coverage for the
new table behavior

## How to Test

1. Start Codex TUI from this branch.
2. Paste this exact prompt:
`This is a session to test codex, no need to do any thinking, just end
different markdown tables, with columns exploring different markdown
contents, like links, bold italic, code, etc. Make them different sizes,
some 30+ rows, some not and intertwine them with some paragraphs with
complex formatting as well.`
3. Confirm the response includes several Markdown tables mixed with
richly formatted paragraphs.
4. Confirm wide-enough tables render with box-drawing borders instead of
plain wrapped pipe text.
5. Resize the terminal narrower while the answer is still streaming and
confirm the in-progress table stays coherent instead of duplicating
headers or leaving broken scrollback behind.
6. Resize again after the turn finishes and confirm the finalized
transcript re-renders cleanly at the new width.
7. In a narrow pane, verify dense tables fall back to the vertical
per-row layout instead of producing unreadable wrapped columns.
8. Also verify pipe-heavy fenced code blocks still render as code, not
as tables.

Targeted tests:
- `cargo test -p codex-tui table_readability_fallback --no-fail-fast`
- `cargo test -p codex-tui markdown_render --no-fail-fast`
- `cargo test -p codex-tui streaming::controller --no-fail-fast`
- `cargo test -p codex-tui table_resize_lifecycle --no-fail-fast`

## Docs

No developer docs update appears necessary.
2026-05-10 20:42:11 +00:00
Eric Traut
76845d716b Deduplicate issue digest interactions by user (#22039)
## Summary

The issue digest uses recent posts, comments, and reactions to decide
which issues deserve attention. A single active user could previously
raise an issue's apparent importance by commenting or reacting multiple
times in the window.

This changes `codex-issue-digest` so `user_interactions` counts unique
human GitHub users per issue across new issue posts, new comments, and
new reactions. Raw reaction/comment counts are still preserved for
detail output, and the skill guidance now describes `Interactions` as a
unique-human-user count.
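
A minimal sketch of the deduplication rule with hypothetical types: each issue counts distinct human user logins across posts, comments, and reactions.

```rust
use std::collections::HashSet;

struct Interaction {
    issue_number: u64,
    user_login: String,
    user_is_bot: bool,
}

fn user_interactions(issue_number: u64, interactions: &[Interaction]) -> usize {
    let unique_users: HashSet<&str> = interactions
        .iter()
        // Only human users count; multiple actions by one user count once.
        .filter(|i| i.issue_number == issue_number && !i.user_is_bot)
        .map(|i| i.user_login.as_str())
        .collect();
    unique_users.len()
}
```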
2026-05-10 09:55:42 -07:00
Felipe Coury
e5d022297d fix(tui): suppress taskkill output for MCP teardown on Windows (#21759)
## Why

On native Windows, running `/mcp` can leak `taskkill`'s normal
`SUCCESS:` messages into the Codex TUI while the temporary MCP inventory
process tree is being torn down. That corrupts the screen even though
MCP itself is working correctly.

Fixes #20845.

## What Changed

- Redirect the Windows-only MCP teardown `taskkill` subprocess to null
stdio so its console output cannot reach the TUI.

## How to Test

1. On native Windows, configure a stdio MCP server, for example:
   ```powershell
codex mcp add sequential-thinking -- npx -y
@modelcontextprotocol/server-sequential-thinking
   ```
2. With the latest released Codex CLI, start Codex and run `/mcp`.
3. Confirm the current behavior: `taskkill` `SUCCESS:` lines appear in
the TUI during the MCP refresh.
4. Switch to this branch's build, start Codex again, and run `/mcp`.
5. Confirm the MCP inventory still renders normally and the `taskkill`
lines no longer appear.
6. Repeat `/mcp` once more on this branch to verify the regression does
not recur on repeated inventory requests.

Targeted tests:
- `cargo test -p codex-rmcp-client`
- `cargo test -p codex-rmcp-client --test process_group_cleanup --quiet`
2026-05-10 15:51:26 +00:00
Felipe Coury
cac5354455 fix(tui): preserve Shift+Enter in tmux csi-u panes (#21943)
## Why

Inside tmux, `Shift+Enter` can still reach Codex as a plain `Enter` even
when tmux has extended keys enabled. In `csi-u` tmux panes, Codex needs
to request `modifyOtherKeys` mode 2 so tmux moves the pane from `VT10x`
into extended-key mode and preserves the Shift modifier. Without that
extra request, composer `Shift+Enter` submits the draft instead of
inserting a newline.

Fixes #21699.
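
A rough sketch of the gate, with assumptions noted in the comments: the `TMUX` environment check and the `modifyOtherKeys` escape (`CSI > 4 ; 2 m` to set mode 2, `CSI > 4 ; 0 m` to reset) are standard, while the exact tmux option lookup shown here is an assumption.

```rust
use std::io::{self, Write};
use std::process::Command;

fn tmux_extended_keys_format() -> Option<String> {
    std::env::var_os("TMUX")?;
    // Prefer the pane-local option, then fall back to the global one.
    // The exact tmux invocation here is an assumption.
    for scope in ["-pqv", "-gqv"] {
        if let Ok(out) = Command::new("tmux")
            .args(["show-options", scope, "extended-keys-format"])
            .output()
        {
            let value = String::from_utf8_lossy(&out.stdout).trim().to_string();
            if !value.is_empty() {
                return Some(value);
            }
        }
    }
    None
}

fn request_csi_u_extended_keys(term: &mut impl Write) -> io::Result<()> {
    if tmux_extended_keys_format().as_deref() == Some("csi-u") {
        // modifyOtherKeys mode 2; reset later with ESC [ > 4 ; 0 m.
        term.write_all(b"\x1b[>4;2m")?;
    }
    Ok(())
}
```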

## What Changed

- Detect tmux sessions and read the active `extended-keys-format`,
preferring the pane-local value before falling back to the global
option.
- Request `modifyOtherKeys` mode 2 for tmux panes using `csi-u` extended
keys, and reset it when restoring keyboard reporting.
- Add unit coverage for tmux detection, the format gate, and the emitted
`modifyOtherKeys` escape sequence.

## How to Test

1. In tmux, configure:
   ```tmux
   set-option -g extended-keys on
   set-option -g extended-keys-format csi-u
   ```
2. Start Codex in a fresh tmux pane from this branch.
3. From another pane, confirm the Codex pane reports `mode=Ext 2`:
   ```bash
tmux list-panes -a -F '#{session_name}:#{window_index}.#{pane_index}
mode=#{pane_key_mode} cmd=#{pane_current_command}'
   ```
4. Type a draft in the composer and press `Shift+Enter`; confirm it
inserts a newline instead of submitting.
5. Also confirm plain `Enter` still submits as before.

Targeted tests:
- `cargo test -p codex-tui`

## Notes

- Manual verification used both real `Shift+Enter` in iTerm2/tmux and
`tmux send-keys ... S-Enter` to confirm the tmux pane changes from
`VT10x` to `Ext 2` and preserves newline behavior.
- On this checkout, the broader `codex-tui` test run currently reaches
unrelated existing failures in `status::tests::*` plus a later stack
overflow in
`tests::fork_last_filters_latest_session_by_cwd_unless_show_all`.
2026-05-10 11:45:49 -03:00
Ahmed Ibrahim
178c3d3005 Persist 'priority' service tier as fast in config (#21991)
### Motivation
- Normalize persisted service tier so selecting the request value
`priority` (or legacy `fast`) is stored as `fast` while preserving
unknown tier IDs and keeping request-time behavior unchanged.
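
A minimal sketch of the mapping (assumed helper name; the real change lives in the `ConfigEdit::SetServiceTier` handling in `config/edit.rs`):

```rust
fn persisted_service_tier(requested: &str) -> String {
    match requested {
        "priority" | "fast" => "fast".to_string(),
        "flex" => "flex".to_string(),
        // Unknown tier IDs are written through unchanged.
        other => other.to_string(),
    }
}
```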

### Description
- Update persistence logic in `codex-rs/core/src/config/edit.rs` so
`ConfigEdit::SetServiceTier` maps request values: `priority`/`fast` ->
`"fast"`, `flex` -> `"flex"`, and leaves unknown strings unchanged.
- Add unit tests in `codex-rs/core/src/config/edit_tests.rs` that verify
a `priority` selection is written to `config.toml` as `"fast"` and that
unknown tiers are preserved.
- Add a config load test in `codex-rs/core/src/config/config_tests.rs`
to ensure `service_tier = "priority"` still resolves to the `priority`
request value at load time.
- Add the required import `use
codex_protocol::config_types::ServiceTier;` to the edited modules.

### Testing
- Ran `just fmt` and `just fix -p codex-core` to apply formatting and
lints and they completed successfully.
- Ran `cargo test -p codex-core --lib service_tier` (focused unit tests
for the change) and the tests passed.
- Ran `cargo test -p codex-protocol` and the protocol test suite passed.
- Note: an initial broader `cargo test -p codex-core service_tier`
invocation matched integration tests and produced unrelated
failures/hangs, so that run was interrupted and the focused `--lib`
unit-test invocation was used instead.

------
[Codex
Task](https://chatgpt.com/codex/cloud/tasks/task_i_69ffc5a1262c8321af91b69c9845147f)
2026-05-10 06:22:46 +03:00
Eric Traut
789b7e39dc Split ChatWidget state into focused modules (#21866)
## Summary

`ChatWidget` has been carrying several independent domains in one large
state bag: transcript bookkeeping, turn lifecycle, queued input, status
surfaces, connectors, review mode, and protocol dispatch. That makes
otherwise-local changes hard to reason about because unrelated fields
and side effects live beside each other in `chatwidget.rs`.

This is the first cleanup PR in a larger decomposition effort. It does
not try to make `chatwidget.rs` small in one sweep; instead, it
establishes focused state boundaries that later handler, popup,
rendering, and effect-synchronization extractions can build on.

This PR keeps `ChatWidget` as the composition layer while moving focused
state into smaller `codex-tui` modules. The widget still owns effects
that touch the bottom pane, app events, command submission, redraw
scheduling, and terminal-title updates.

## Changes

- Add focused state modules under `codex-rs/tui/src/chatwidget/` for
input queues, turn lifecycle, transcript bookkeeping, status state,
connectors, review mode, and app-server protocol dispatch.
- Update `ChatWidget` to hold grouped state structs and route
input/lifecycle/status operations through those focused helpers.
- Move app-server notification dispatch into `chatwidget/protocol.rs`
while leaving feature handlers and side effects on `ChatWidget`.
- Replace the large manual `ChatWidget` test literal with the normal
constructor plus narrow test overrides, so future state moves do not
require every field to be restated in test setup.
- Update existing tests to access the new grouped state or narrower
helpers without changing snapshot behavior.

## Longer-term direction

Follow-up PRs can continue shrinking `chatwidget.rs` by moving behavior,
not just state, into focused modules:

- Extract input/submission flow, turn/stream handling, and tool-cell
lifecycles into domain modules that call the new state reducers.
- Move popup/settings builders and rendering helpers out of the main
widget file so `ChatWidget` stays focused on composition.
- Reduce direct `BottomPane` mutation by applying domain-specific sync
outputs at clearer boundaries.
2026-05-09 15:16:01 -07:00
Eric Traut
90c0bec50c Avoid blocking TUI on agent metadata hydration (#21870)
## Why

Fixes #16688.

The TUI currently hydrates collab receiver metadata by awaiting
`thread/read` before each active-thread notification is rendered. During
large subagent fan-outs, the embedded app-server can be busy starting
agents and processing spawn work, so those synchronous metadata reads
queue behind the fan-out and block the TUI event loop. That makes the UI
appear frozen even though the underlying agent work can continue.

## What Changed

- Replaced eager `thread/read` metadata hydration on the active
notification path with local receiver-thread caching.
- Kept `ThreadStarted` and picker refreshes as the places that fill in
agent nickname/role metadata when it is available.
- Skipped caching receiver threads that are explicitly reported as
`NotFound`, avoiding live-looking ghost entries for failed stale-agent
calls.
- Added TUI tests covering both local receiver caching and `NotFound`
suppression.

## Verification

- `cargo test -p codex-tui collab_receiver_notification`
- `just fix -p codex-tui`

I also ran the full `cargo test -p codex-tui`; the new test passed, but
the full process later aborted with an unrelated stack overflow in
`tests::fork_last_filters_latest_session_by_cwd_unless_show_all`.
2026-05-09 15:15:40 -07:00
Abhinav
6d747db7d8 Improve hooks trust flow in TUI (#21755)
# Why
Hooks that need trust review were easy to miss, and the existing TUI
flow made users discover `/hooks` manually before they could decide
whether to inspect or trust them.

# What
- add a startup review prompt for new or changed hooks before normal
composer use
- add a top-level `t` shortcut in `/hooks` to trust every review-needed
hook at once
- make pending-review rows and helper copy use warning styling

## TUI

### Startup review interstitial

```text
Hooks need review
2 hooks are new or changed.
Hooks can run outside the sandbox after you trust them.

› 1. Review hooks
  2. Trust all and continue
  3. Continue without trusting (hooks won't run)
```

### Top-level `/hooks` page when review is needed

```text
Hooks
Lifecycle hooks from config and enabled plugins.

⚠ 1 hook needs review before it can run.

Event                 Installed   Active   Review   Description
PreToolUse            1           0        1        Before a tool executes
...

Press t to trust all; enter to review hooks; esc to close
```
2026-05-09 21:17:30 +00:00
Felipe Coury
53468b97f6 fix(tui): improve light-mode selection contrast (#21950)
## Why

On light terminal backgrounds, selected rows in several TUI pickers were
rendered with the same bright cyan accent used on dark themes. Against
the light menu surface, that made the current selection hard to
distinguish at a glance.

<table><tr>
<td>
<p align="center">Before</p>
<img width="1109" height="864" alt="SCR-20260509-nmtz"
src="https://github.com/user-attachments/assets/b31ce0d0-19c2-4bdd-a220-7acc77bd8e8e"
/>
</td>
<td>
<p align="center">After</p>
<img width="1164" height="844" alt="SCR-20260509-nmox"
src="https://github.com/user-attachments/assets/7b3fede0-4739-4a9f-a979-cdbb7451841f"
/>
</td>
</tr></table>

## What changed

- Added a shared background-aware accent style for active/selected TUI
controls.
- Use a darker cyan-family accent on light backgrounds while preserving
the existing bright cyan accent on dark or unknown backgrounds.
- Reused that accent across shared picker rows and the custom
selection-like surfaces that had drifted separately: picker tabs, hooks
browsing, external-agent migration choices, and /keymap affordances.
- Added focused tests for the light/dark accent rule and rendered
selected-row styling.

## How to Test

1. Start Codex in a terminal using a light background theme.
2. Type `/` to open the slash-command picker and move the selection
through a few rows.
3. Confirm that the selected row is visibly colored with strong contrast
instead of blending into the popup surface.
4. Open `/keymap` and confirm the active tab, selected rows, and picker
hint accents use the same light-theme accent treatment.
5. In a dark terminal theme, repeat the slash-picker check and confirm
the existing bright cyan selection styling is preserved.

Targeted tests:
- `cargo test -p codex-tui accent_style_uses_`
- `cargo test -p codex-tui selected_rows_use_the_shared_accent_style`
- `cargo test -p codex-tui
selected_event_rows_use_the_shared_accent_style`

Notes:
- A full `cargo test -p codex-tui` run reached the end of the suite but
hit an unrelated existing stack overflow in
`tests::fork_last_filters_latest_session_by_cwd_unless_show_all`.
2026-05-09 16:10:56 -03:00
Felipe Coury
f27cf9db09 fix(tui): preserve wrapped prose beside URLs (#21760)
## Why

Mixed prose lines that contained URLs started taking the URL-preserving
wrapping path, but that path could split ordinary words mid-token. A
follow-up issue remained in scrollback insertion: when already-rendered
indented rows were wrapped again, continuation rows could lose their
margin and fall back to terminal hard wrapping. Together those bugs made
normal Markdown output look broken around links, lists, blockquotes, and
indented content.

Separately, the local argument-comment lint wrappers failed under
environments that set `PYTHONSAFEPATH=1`, because Python no longer adds
the script directory to `sys.path` automatically. That prevented the
lint from reaching Rust callsites at all.

<img width="1778" height="1558" alt="CleanShot 2026-05-09 at 11 51 38"
src="https://github.com/user-attachments/assets/9274d150-1757-4f1a-89ac-5bdc9997d8cb"
/>

## What Changed

- Preserve URL tokens without turning every neighboring prose word into
a character-level split point.
- Add a mixed URL/prose wrapper that keeps ordinary words whole,
preserves leading whitespace, and re-splits long non-URL tokens against
the actual width available on continuation rows.
- Reuse a rendered history row's leading whitespace as the continuation
indent when scrollback insertion has to pre-wrap it again.
- Add regression coverage for markdown wrapping, history-cell rendering,
scrollback continuation margins, leading-indent width accounting, and
continuation-row re-splitting.
- Make both argument-comment lint entrypoints explicitly add their own
directory to `sys.path`, so sibling imports still work when
`PYTHONSAFEPATH=1`.

## How to Test

1. Start Codex and render a long Markdown response that mixes prose with
inline links, blockquotes, lists, and indented code-like text.
2. Confirm that ordinary words next to links stay whole instead of
breaking mid-word.
3. Resize or replay the transcript and confirm wrapped continuation rows
keep their expected left margin for blockquotes, lists, and indented
content.
4. Run the source argument-comment lint from a shell with
`PYTHONSAFEPATH=1` and confirm it starts normally instead of failing to
import `wrapper_common`.

Targeted tests:
- `cargo test -p codex-tui mixed_line --lib`
- `cargo test -p codex-tui preserves_prefix_on_wrapped_rows --lib`
- `cargo test -p codex-tui
agent_markdown_cell_does_not_split_words_after_inline_markdown --lib`
- `cargo test -p codex-tui
mixed_url_markdown_wraps_prose_without_splitting_words_snapshot --lib`
- `python3 tools/argument-comment-lint/test_wrapper_common.py`
- `just argument-comment-lint-from-source -p codex-tui -- --lib`

Notes:
- `cargo test -p codex-tui` currently reaches the new tests
successfully, then still aborts in the pre-existing
`tests::fork_last_filters_latest_session_by_cwd_unless_show_all`
stack-overflow failure.
2026-05-09 13:58:10 -03:00
Michael Bolin
0c70698e24 tests: cover sandbox link write behavior (#21819)
## Why

[PR #1705](https://github.com/openai/codex/pull/1705) moved
`apply_patch` execution under the configured sandbox and called out the
need for integration coverage. We already covered textual `../` escapes,
but did not have coverage for link aliases that live inside a writable
workspace while pointing at, or aliasing, files visible outside it.

This PR locks in the current sandbox boundary without changing
production write semantics. Symlink escapes into a read-only outside
root should fail and leave the outside file unchanged. Existing hard
links are characterized separately: if a user-created hard link already
exists inside the writable root, sandboxed writes preserve normal
hard-link semantics rather than replacing the link and silently breaking
that relationship.

## What Changed

- Added
`apply_patch_cli_does_not_write_through_symlink_escape_outside_workspace`
to verify `apply_patch` cannot update a symlink that targets a file
outside the writable workspace.
- Added `apply_patch_cli_preserves_existing_hard_link_outside_workspace`
to verify `apply_patch` intentionally writes through an existing hard
link and does not unlink or replace it.
- Added `file_system_sandboxed_write_preserves_existing_hard_link` to
verify sandboxed `fs/writeFile` preserves an existing hard link and
writes the shared inode.

## Testing

- `cargo test -p codex-exec-server file_system_sandboxed_write`
- `cargo test -p codex-core
apply_patch_cli_does_not_write_through_symlink_escape_outside_workspace`
- `cargo test -p codex-core
apply_patch_cli_preserves_existing_hard_link_outside_workspace`
- `just fix -p codex-exec-server -p codex-core`
- `just fix -p codex-core`



---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/21819).
* #21845
* __->__ #21819
2026-05-09 08:28:15 -07:00
Ahmed Ibrahim
fca81eeb5b [codex] Lowercase TUI service tier commands (#21906)
## Why

Service-tier slash commands are built from model-catalog metadata. If
the catalog returns a name like `Fast`, the TUI currently exposes
`/Fast` and exact dispatch expects that casing, which is inconsistent
with the lowercase command style used elsewhere.

## What

- Lowercase service-tier command names when converting catalog tiers
into `ServiceTierCommand` values.
- Add regression coverage that seeds a catalog tier named `Fast` and
expects the generated command to be `fast`.

## Testing

Not run locally per repo instruction; PR CI should run the new
`service_tier_commands_lowercase_catalog_names` coverage.
2026-05-09 14:29:12 +03:00
Ahmed Ibrahim
ebe75bb683 Route Python SDK turn notifications by ID (#21778)
## Why

The Python SDK previously protected the stdio transport with a single
active turn-consumer guard. That avoided competing reads from stdout,
but it also meant one `Codex`/`AsyncCodex` client could not stream
multiple active turns at the same time. Notifications could also arrive
before the caller received a `TurnHandle` and registered for streaming,
so the SDK needed an explicit routing layer instead of letting
individual API calls read directly from the shared transport.

## What Changed

- Added a private `MessageRouter` that owns per-request response queues,
per-turn notification queues, pending turn-notification replay, and
global notification delivery behind a single stdout reader thread.
- Generated typed notification routing metadata so turn IDs come from
known payload shapes instead of router-side attribute guessing, with
explicit fallback handling for unknown notification payloads.
- Updated sync and async turn streaming so `TurnHandle.stream()`/`run()`
and `stream_text()` consume only notifications for their own turn ID,
while `AsyncAppServerClient` no longer serializes all transport calls
behind one async lock.
- Cleared pending turn-notification buffers when unregistered turns
complete so never-consumed turn handles do not leave stale queues
behind.
- Removed the internal stream-until helper now that turn completion
waiting can register directly with routed turn notifications.
- Updated Python SDK docs and focused tests for concurrent transport
calls, interleaved turn routing, buffered early notifications, unknown
notification routing, async delegation, and routed turn completion
behavior.

## Validation

- `uv run --extra dev ruff format scripts/update_sdk_artifacts.py
src/codex_app_server/_message_router.py src/codex_app_server/client.py
src/codex_app_server/generated/notification_registry.py
tests/test_client_rpc_methods.py
tests/test_public_api_runtime_behavior.py
tests/test_async_client_behavior.py`
- `uv run --extra dev ruff check scripts/update_sdk_artifacts.py
src/codex_app_server/_message_router.py src/codex_app_server/client.py
src/codex_app_server/generated/notification_registry.py
tests/test_client_rpc_methods.py
tests/test_public_api_runtime_behavior.py
tests/test_async_client_behavior.py`
- `uv run --extra dev pytest tests/test_client_rpc_methods.py
tests/test_public_api_runtime_behavior.py
tests/test_async_client_behavior.py`
- `git diff --check`

---------

Co-authored-by: Codex <noreply@openai.com>
2026-05-09 04:16:23 +00:00
sayan-oai
77d9223e9f [codex] compact network context rendering (#21875)
## Why

The model-visible `<network>` context currently repeats indentation and
a pair of XML tags for every allowed or denied domain. Large domain sets
spend a surprising amount of prompt budget on that scaffolding instead
of the actual policy values.

## What changed

- Render allowed domains as one comma-separated `<allowed>` value
instead of one element per domain.
- Render denied domains the same way.
- Keep the full allow/deny domain sets model-visible while updating the
serialization and settings-update coverage for the denser shape.

## Example

Before:
```xml
<network enabled="true">
  <allowed>api.example.test</allowed>
  <allowed>cdn.example.test</allowed>
  <denied>blocked.example.test</denied>
</network>
```

After:
```xml
<network enabled="true"><allowed>api.example.test,cdn.example.test</allowed><denied>blocked.example.test</denied></network>
```

## Validation

- `cargo test -p codex-core environment_context`
- `cargo test -p codex-core
build_settings_update_items_emits_environment_item_for_network_changes`
- Ran a local `codex` session with a real network context containing 121
allowed domains and 42 denied domains, then inspected the raw prompt
with `raw_token_viewer_cli.py`. With the same domain set, the rendered
`<network>` section shrank from 7,175 characters across 161 lines to
3,666 characters on one line, and the containing environment-context
block fell from 6,428 tokens to 5,379 tokens.
2026-05-09 03:52:48 +00:00
xl-openai
479491ed89 feat: Add role-aware plugin share context APIs (#21867)
Expose discoverability and full share principals in share context, carry
roles through save/updateTargets, hydrate local shared plugin reads, and
keep share URLs only under plugin.shareContext.
2026-05-08 20:46:39 -07:00
pakrym-oai
c579da41b1 Move file watcher out of core (#21290)
## Why

The app-server watcher relocation leaves the generic filesystem watcher
as the last watcher-specific implementation still living inside
`codex-core`. Moving that code to a small crate keeps `codex-core`
focused on thread execution and lets app-server depend on the watcher
without reaching back into core for filesystem watching primitives.

This PR is stacked on #21287.

## What changed

- Added a new `codex-file-watcher` crate containing the existing watcher
implementation and its unit tests.
- Updated app-server `fs_watch`, `skills_watcher`, and listener state to
import watcher types from `codex-file-watcher`.
- Removed the `file_watcher` module and `notify` dependency from
`codex-core`.
- Updated Cargo workspace metadata and `Cargo.lock` for the new internal
crate.

## Validation

- `cargo check -p codex-file-watcher -p codex-core -p codex-app-server`
- `cargo test -p codex-file-watcher`
- `cargo test -p codex-app-server
skills_changed_notification_is_emitted_after_skill_change`
- `just bazel-lock-update`
- `just bazel-lock-check`
- `just fix -p codex-file-watcher`
- `just fix -p codex-core`
- `just fix -p codex-app-server`
2026-05-08 18:19:23 -07:00
pakrym-oai
408e6218ab Reapply "Move skills watcher to app-server" (#21652)
## Why

PR #21460 reverted the earlier move of skills change watching from
`codex-core` into app-server. This reapplies that boundary change so
app-server owns client-facing `skills/changed` notifications and core no
longer carries the watcher.

## What

- Restore the app-server `SkillsWatcher` and register it from thread
listener setup.
- Remove the core-owned skills watcher and its core live-reload
integration surface.
- Restore app-server coverage for `skills/changed` notifications after a
watched skill file changes.

## Validation

- `cargo test -p codex-app-server --test all
suite::v2::skills_list::skills_changed_notification_is_emitted_after_skill_change
-- --exact --nocapture`
- `cargo test -p codex-core --lib --no-run`
2026-05-08 17:41:15 -07:00
Owen Lin
95ca276373 sqlite: no more destructive version bumps (#21847)
## Why

We'd like SQLite state to become required and load-bearing. As a first
step, let's remove the mechanism that allows us to blow away the SQLite
DB on a version bump, and instead rely on graceful migrations.

The original motivation
([PR](https://github.com/openai/codex/pull/10623)) behind this mechanism
was to care less about backwards compatibility while SQLite was being
landed, but I'd say it's quite important now to keep the data in it.

## What changed

- Make `STATE_DB_FILENAME` and `LOGS_DB_FILENAME` the full canonical
filenames: `state_5.sqlite` and `logs_2.sqlite`.
- Remove `STATE_DB_VERSION` / `LOGS_DB_VERSION` and the helper that
constructed filenames from versions.
- Stop `StateRuntime::init` from scanning for or deleting older SQLite
DB filenames at startup.
- Delete the tests that encoded legacy state/logs DB deletion behavior.

## Verification

- `cargo test -p codex-state`
2026-05-08 17:29:44 -07:00
Celia Chen
bd42660cb4 feat: add Bedrock Mantle client agent header (#21840)
## Why

Amazon Bedrock Mantle needs a stable client-agent header so requests
from the built-in Bedrock provider can be identified as coming from
Codex for the safety stack.

## What changed

- Added `x-amzn-mantle-client-agent: codex` to the built-in Amazon
Bedrock provider default HTTP headers.
2026-05-08 23:58:41 +00:00
Ruslan Nigmatullin
0c8d42525e [daemon] Add app-server daemon lifecycle management (#20718)
## Why

Desktop and mobile Codex clients need a machine-readable way to
bootstrap and manage `codex app-server` on remote machines reached over
SSH. The same flow is also useful for bringing up app-server with
`remote_control` enabled on a fresh developer machine and keeping that
managed install current without requiring a human session.

## What changed

- add the new experimental `codex-app-server-daemon` crate and wire it
into `codex app-server daemon` lifecycle commands: `start`, `restart`,
`stop`, `version`, and `bootstrap`
- add explicit `enable-remote-control` and `disable-remote-control`
commands that persist the launch setting and restart a running managed
daemon so the change takes effect immediately
- emit JSON success responses for daemon commands so remote callers can
consume them directly
- support a Unix-only pidfile-backed detached backend for lifecycle
management
- assume the standalone `install.sh` layout for daemon-managed binaries
and always launch `CODEX_HOME/packages/standalone/current/codex`
- add bootstrap support for the standalone managed install plus a
detached hourly updater loop
- harden lifecycle management around concurrent operations, pidfile
ownership, stale state cleanup, updater ownership, managed-binary
preflight, Unix-only rejection, forced shutdown after the graceful
window, and updater process-group tracking/cleanup
- document the experimental Unix-only support boundary plus the
standalone bootstrap/update flow in
`codex-rs/app-server-daemon/README.md`

## Verification

- `cargo test -p codex-app-server-daemon -p codex-cli`
- live pid validation on `cb4`: `bootstrap --remote-control`, `restart`,
`version`, `stop`

## Follow-up

- Add updater self-refresh so the long-lived `pid-update-loop` can
replace its own executable image after installing a newer managed Codex
binary.
2026-05-08 16:51:16 -07:00
starr-openai
faa5d4a5e2 Increase exec-server environment transport timeouts (#21825)
## Why

The environment-backed exec-server transport currently hardcodes 5
second connect and initialize timeouts in `client_transport.rs`. That is
short for SSH-backed stdio environments and remote websocket
environments, and there is currently no way to raise those values from
`CODEX_HOME/environments.toml`.

This stacked follow-up raises the default environment transport timeouts
and lets each configured environment override them in
`environments.toml`.
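
As a rough sketch of the resulting `environments.toml` shape (the two
timeout fields are the ones added here; the values are illustrative and
the other per-environment keys are elided):

```toml
[[environments]]
# ... existing connection settings for this environment ...
connect_timeout_sec = 30      # websocket transports only; rejected for stdio entries
initialize_timeout_sec = 30
```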

## What Changed

- raise the default environment transport connect and initialize
timeouts from 5s to 10s
- store concrete timeout values on `ExecServerTransportParams` instead
of hardcoding them in `connect_for_transport(...)`
- add `connect_timeout_sec` and `initialize_timeout_sec` to
`[[environments]]` entries in `environments.toml`
- apply parse-time defaults so runtime transport code receives fully
resolved timeout values
- reject `connect_timeout_sec` on stdio environments because it only
applies to websocket transports
- extend parser tests to cover the new fields and defaults

## Stack

- base: https://github.com/openai/codex/pull/21794
- this PR: configurable environment transport timeouts

## Validation

- `cd
/Users/starr/code/codex-worktrees/exec-env-timeouts-config-20260508/codex-rs
&& just fmt`
- not run: tests

---------

Co-authored-by: Codex <noreply@openai.com>
2026-05-08 16:33:29 -07:00
Michael Zeng
8f4020846e [codex] support executor registry remote environments (#21323)
## Summary

Support registry-backed remote executors end to end so downstream
services can resolve an executor id into an exec-server URL and make
that environment available to Codex without relying on the legacy cloud
environments flow.

## What changed

- switch remote executor registration to the executor registry bootstrap
contract
- allow named remote environments to be inserted into
`EnvironmentManager` at runtime
- add the experimental app-server RPC `environment/add` so initialized
experimental clients can register those remote environments for later
`thread/start` and `turn/start` selection

## Validation

Ran focused validation locally:
- `cargo test -p codex-exec-server environment_manager_`
- `cargo test -p codex-exec-server
register_executor_posts_with_bearer_token_header`
- `cargo test -p codex-app-server-protocol`
2026-05-08 16:30:07 -07:00
lt-oai
80a408e201 Support openai library tool (#20293)
Support chatgpt library tool
2026-05-08 22:56:13 +00:00
Ruslan Nigmatullin
1b86906fa1 app-server: support daemon-safe restart handling (#21831)
## Why

The app-server daemon work needs two app-server behaviors to be safe
when lifecycle management is driven by a helper process:

- a readiness probe must not become the process-wide client identity
just because it connects first
- a graceful reload signal needs to keep draining active turns even if
it is delivered more than once

## What changed

- Treat `codex_app_server_daemon` initialization as a probe-only client
for process-global originator and user-agent suffix state.
- Distinguish forceable shutdown signals from graceful-only ones, and
treat Unix `SIGHUP` as graceful-only while leaving `SIGTERM` and Ctrl-C
forceable.
- Add regression coverage for daemon probe initialization and repeated
`SIGHUP` delivery while a turn is still running.

## Testing

- `cargo test -p codex-app-server`
  - The new daemon-probe and repeated-`SIGHUP` coverage passed.
- The run still failed in the existing
`suite::conversation_summary::get_conversation_summary_by_relative_rollout_path_resolves_from_codex_home`
and
`suite::conversation_summary::get_conversation_summary_by_thread_id_reads_rollout`
tests because their initialize handshake timed out.
- `cargo test -p codex-app-server --test all
suite::conversation_summary::`
- Reproduced the same two existing initialize-timeout failures in
isolation.
2026-05-08 15:47:51 -07:00
starr-openai
dac108f2f1 Make environment provider snapshots path-free (#21794)
## Summary
- make EnvironmentProvider::snapshot path-free and keep providers
focused on provider-owned remote environments
- let provider snapshots request local inclusion via include_local, with
environments.toml including local and CODEX_EXEC_SERVER_URL excluding
local
- move reserved local environment construction into EnvironmentManager
using ExecServerRuntimePaths

Follow-up to https://github.com/openai/codex/pull/20667

## Testing
- just fmt
- git diff --check
- devbox: bazel build --bes_backend= --bes_results_url=
//codex-rs/exec-server:exec-server
- devbox: bazel test --bes_backend= --bes_results_url=
//codex-rs/exec-server:exec-server-unit-tests

Co-authored-by: Codex <noreply@openai.com>
2026-05-08 15:30:00 -07:00
Michael Bolin
24111790f0 ci: check out PR head commits in workflows (#21835)
## Why

PR CI should test the exact commit that was pushed to the PR branch. By
default, GitHub's `pull_request` event checks out a synthetic merge
commit from `refs/pull/<number>/merge`, so the tested tree can include
an implicit merge with the current base branch instead of matching the
pushed head SHA.

Using the PR head SHA makes each check result correspond to a concrete
commit the author submitted. This also behaves better for stacked PR
workflows, including Sapling stacks and other Git stack tooling: a
middle PR's head commit already contains the lower stack changes in its
tree, without pulling in commits above it or GitHub's temporary merge
ref.

## What Changed

- Set every `actions/checkout` in `pull_request` workflows under
`.github/workflows` to use `github.event.pull_request.head.sha` on PR
events and `github.sha` otherwise.
- Updated `blob-size-policy` to compare
`github.event.pull_request.base.sha` and
`github.event.pull_request.head.sha`, since it no longer checks out
GitHub's merge commit where `HEAD^1`/`HEAD^2` represented the PR range.

## Verification

- Parsed the edited workflow YAML files with Ruby.
- Checked that every checkout block in the `pull_request` workflows has
the PR-head `ref`.
2026-05-08 15:14:33 -07:00
Matthew Zeng
2f3a2d7a86 Using cached connector directory for discoverable tools list (#21497)
## Summary

Startup tool construction currently depends on connector directory
metadata for `tool_suggest` discoverables. On a cold directory cache,
that can put slow connector-directory requests on the blocking path even
though the tools array only needs directory data for install
suggestions, not for the live connector MCP tools themselves.

This PR keeps the discoverables path off that cold network fetch:
- read connector directory metadata from cache only when building
discoverable tools
- persist connector directory metadata to
`~/.codex/cache/codex_app_directory/<hash>.json` and use it to hydrate
the in-memory cache on later runs before the normal refresh path updates
it
- use connector-directory-specific cache naming to distinguish this
metadata cache from the separate Codex Apps tools-spec cache

This reduces first-turn startup work without changing how live connector
MCP tools are sourced. Longer term, directory-backed install suggestions
should move to a search-based flow so they no longer need to be inlined
into the tools prompt at all.

## Testing

- `cargo test -p codex-connectors`
- `cargo test -p codex-chatgpt`
- `cargo test -p codex-core
request_plugin_install_is_available_without_search_tool_after_discovery_attempts`
- `cargo test -p codex-core
tool_suggest_uses_connector_id_fallback_when_directory_cache_is_empty`
2026-05-08 14:14:11 -07:00
Charlie Marsh
7c9731c9af Enable --deny-warnings for cargo shear (#21616)
## Summary

In https://github.com/openai/codex/pull/21584, we disabled doctests for
crates that lack any doctests. We can enforce that property via `cargo
shear --deny-warnings`: crates that lack doctests will be flagged if
doctests are enabled, and crates with doctests will be flagged if
doctests are disabled.

A few additional notes:

- By adding `--deny-warnings`, `cargo shear` also flagged a number of
modules that were not reachable at all. Some of those have been removed.
- This PR removes a usage of `windows_modules!` (since `cargo shear` and
`rustfmt` couldn't see through it) in favor of simple `#[cfg(target_os =
"windows")]` attributes. As a consequence, many of these files exhibit churn
in this PR, since they weren't being formatted by `rustfmt` at all on
main.
- Again, to make the code more analyzable, this PR also removes some
usages of `#[path = "cwd_junction.rs"]` in favor of a more standard
module structure. The bin sidecar structure is still retained, but,
e.g., `windows-sandbox-rs/src/bin/command_runner.rs` was moved to
`windows-sandbox-rs/src/bin/command_runner/main.rs`, and so on.

---------

Co-authored-by: Codex <noreply@openai.com>
2026-05-08 20:29:00 +00:00
pakrym-oai
46e2250bcf [codex] Remove legacy after tool use hooks (#21805)
## Why

The legacy `AfterToolUse` hook path was still wired through core tool
dispatch even though the hooks registry never populated any handlers for
it. The supported hook surface is `PostToolUse`, so the old
infrastructure was dead code on the hot path.

## What changed

- Removed the legacy `AfterToolUse` dispatch from `codex-core` tool
execution.
- Removed the unused legacy hook payload types and exports from
`codex-hooks`.
- Simplified legacy notify handling now that `HookEvent` only carries
`AfterAgent`.

## Validation

- `cargo test -p codex-hooks`
- `cargo test -p codex-core registry`
2026-05-08 13:20:05 -07:00
pakrym-oai
e783341b70 [codex] Delete function-style apply_patch (#21651)
## Why

`apply_patch` is now a freeform/custom tool. Keeping the old
JSON/function-style registration and parsing path left another way for
models and tests to invoke `apply_patch`, which made the tool surface
harder to reason about.

## What changed

- Removed the `ApplyPatchToolType::Function` variant, JSON `apply_patch`
spec, and handler support for function payloads.
- Kept `apply_patch_tool_type = freeform` as the supported model
metadata path, including Bedrock catalog metadata.
- Migrated `apply_patch` tests and SSE fixtures to custom/freeform tool
calls.

## Verification

- `cargo test -p codex-tools -p codex-protocol -p codex-model-provider`
- `cargo test -p codex-core tools::handlers::apply_patch --lib`
- `cargo test -p codex-core --test all
apply_patch_tool_executes_and_emits_patch_events`
- `cargo test -p codex-core --test all
apply_patch_reports_parse_diagnostics`
- `cargo test -p codex-exec test_apply_patch_tool`
- `just fix -p codex-core`
- `just fix -p codex-tools -p codex-protocol -p codex-model-provider -p
codex-exec`
2026-05-08 13:00:57 -07:00
Ahmed Ibrahim
cf941ede15 Revert "Publish Python runtime wheels on release" (#21810)
Reverts openai/codex#21784
2026-05-08 22:37:10 +03:00
Jiaming Zhang
5f4d0ec343 [codex] request desktop attestation from app (#20619)
## Summary

TL;DR: teaches `codex-rs` / app-server to request a desktop-provided
attestation token and attach it as `x-oai-attestation` on the scoped
ChatGPT Codex request paths.

![DeviceCheck attestation
interface](https://raw.githubusercontent.com/openai/codex/dev/jm/devicecheck-diagram-assets/pr-assets/devicecheck-attestation-interface.png)

## Details

This PR teaches the Codex app-server runtime how to request and attach
an attestation token. It does not generate DeviceCheck tokens directly;
instead, it relies on the connected desktop app to advertise that it can
generate attestation and then asks that app for a fresh header value
when needed.

The flow is:

1. The Codex desktop app connects to app-server.
2. During `initialize`, the app can advertise that it supports
`requestAttestation`.
3. Before app-server calls selected ChatGPT Codex endpoints, it sends
the internal server request `attestation/generate` to the app.
4. app-server receives a pre-encoded header value back.
5. app-server forwards that value as `x-oai-attestation` on the scoped
outbound requests.

The code in this repo is mostly protocol and runtime plumbing: it adds
the app-server request/response shape, introduces an attestation
provider in core, wires that provider into Responses / compaction /
realtime setup paths, and covers the intended scoping with tests. The
signed macOS DeviceCheck generation remains owned by the desktop app PR.

## Related PR

- Codex desktop app implementation:
https://github.com/openai/openai/pull/878649

## Validation

<details>
<summary>Tests run</summary>

```sh
cargo test -p codex-app-server-protocol
cargo test -p codex-core attestation --lib
cargo test -p codex-app-server --lib attestation
```

Also ran:

```sh
just fix -p codex-core
just fix -p codex-app-server
just fix -p codex-app-server-protocol
just fmt
just write-app-server-schema
```

</details>

<details>
<summary>E2E DeviceCheck validation</summary>

First validated the signed desktop app boundary directly: launched a
packaged signed `Codex.app`, sent `attestation/generate`, decoded the
returned `v1.` attestation header, and validated the extracted
DeviceCheck token with `personal/jm/verify_devicecheck_token.py` using
bundle ID `com.openai.codex`. Apple returned `status_code: 200` and
`is_ok: true`.

Then ran the fuller app + app-server flow. The packaged `Codex.app`
launched a current-branch app-server via `CODEX_CLI_PATH`, and a local
MITM proxy intercepted outbound `chatgpt.com` traffic. The app-server
requested `attestation/generate` from the real Electron app process, and
the intercepted `/backend-api/codex/responses` traffic included
`x-oai-attestation` on both routes:

```text
GET  /backend-api/codex/responses  Upgrade: websocket  x-oai-attestation: present
POST /backend-api/codex/responses  Upgrade: none       x-oai-attestation: present
```

The captured header decoded to a DeviceCheck token that also validated
with Apple for `com.openai.codex` (`status_code: 200`, `is_ok: true`,
team `2DC432GLL2`).

</details>

---------

Co-authored-by: Codex <noreply@openai.com>
2026-05-08 12:36:02 -07:00
pakrym-oai
61142b6169 Remove ToolName display helper (#21465)
## Why

`ToolName::display()` made it too easy to flatten tool identity and
accidentally compare rendered strings. Tool identity should stay
structural until a legacy string boundary actually requires the
flattened spelling.

## What

- Removes `ToolName::display()` and relies on the existing `Display`
impl for messages and errors.
- Adds structural ordering for `ToolName` and uses it for
sorting/deduping deferred tools.
- Carries `ToolName` through tool/sandbox plumbing, flattening only at
legacy boundaries such as hook payloads, telemetry tags, and Responses
tool names.
- Updates MCP normalization tests to assert `ToolName` structure instead
of rendered strings.

## Testing

- `cargo test -p codex-mcp test_normalize_tools`
- `cargo test -p codex-core unavailable_tool`
- `just fix -p codex-protocol`
- `just fix -p codex-mcp`
- `just fix -p codex-core`
2026-05-08 12:17:48 -07:00
alexsong-oai
bbb6bf0a37 Emit accepted line fingerprint analytics (#21601)
## Why

Codex assisted-code attribution needs a client-side accepted-code source
that does not upload raw code. This adds a hash-only analytics event
derived from the turn diff so downstream attribution can compare
accepted Codex lines against commit or PR diffs.

## What Changed

- Parse accepted/effective added lines from the final turn diff and emit
`codex_accepted_line_fingerprints` analytics.
- Hash repo, path, and normalized line content before upload; raw code
and raw diffs are not included in the event.
- Chunk large fingerprint payloads and send accepted-line fingerprint
events in isolated requests while preserving normal batching for other
analytics events.
- Canonicalize Git remote URLs before repo hashing so SSH/HTTPS GitHub
remotes join to the same repo hash.
- Add parser coverage for unified diff hunk lines that look like `+++`
or `---` file headers.

## Verification

- `cargo test -p codex-analytics`
- `cargo test -p codex-git-utils canonicalize_git_remote_url`
- `just fix -p codex-analytics`
- `just bazel-lock-check`
- `git diff --check`
2026-05-08 12:16:24 -07:00
Ahmed Ibrahim
9183503b97 Publish Python runtime wheels on release (#21784)
## Why

Published Python SDK builds depend on an exact `openai-codex-cli-bin`
runtime package, but the release workflow did not publish that runtime
package to PyPI. That left the SDK packaging story incomplete: release
artifacts could produce Codex binaries, but Python users still needed a
matching wheel carrying the platform-specific runtime and helper
executables.

This PR is stacked on #21787 so release jobs can include helper binaries
in runtime wheels: Linux wheels include `bwrap` for sandbox fallback,
and Windows wheels include the signed sandbox/elevation helpers beside
`codex.exe`.

## What changed

- Builds platform-specific `openai-codex-cli-bin` wheels from signed
release binaries on macOS, Linux, and Windows release runners.
- Packages Linux `bwrap` into musllinux runtime wheels.
- Packages Windows sandbox helper executables into Windows runtime
wheels.
- Uploads runtime wheels as GitHub release assets and publishes them to
PyPI using trusted publishing from the `pypi` GitHub environment.
- Keeps the new Python runtime publish job non-blocking so failures need
follow-up but do not fail the Rust release workflow.
- Pins the PyPA publish action to the `v1.13.0` commit SHA for
reproducible release publishing.
- Documents that runtime wheels are platform wheels published through
PyPI trusted publishing.

## Testing

- `ruby -e 'require "yaml"; ARGV.each { |f| YAML.load_file(f); puts "ok
#{f}" }' .github/workflows/rust-release.yml
.github/workflows/rust-release-windows.yml`
- `git diff --check`

CI is the real end-to-end verification for the release workflow path.

---------

Co-authored-by: Codex <noreply@openai.com>
2026-05-08 22:00:58 +03:00
Ahmed Ibrahim
8956a928a1 Support resource binaries in Python runtime staging (#21787)
## Why

Some Codex runtime distributions need helper executables beside the main
bundled binary. Linux sandbox fallback needs a packaged `bwrap` when no
suitable system `bwrap` is available, and Windows sandbox/elevation
needs helper executables discoverable beside `codex.exe`. The checked-in
`openai-codex-cli-bin` template already packages everything under
`codex_cli_bin/bin/**`, but the staging script only copied the main
Codex binary into that directory.

This PR adds the generic staging primitive needed by release workflows
to build complete platform runtime wheels without baking
platform-specific helper names into the package template.

## What changed

- Added repeatable `stage-runtime --resource-binary` support so release
workflows can copy extra executables beside the bundled Codex binary.
- Kept resource selection in workflow code, where the platform target is
known.
- Added tests that verify resource binaries are copied into the staged
runtime package, that the wheel include config covers them, and that the
CLI forwards repeated `--resource-binary` values.

## Testing

- `uv run ruff check scripts/update_sdk_artifacts.py
tests/test_artifact_workflow_and_binaries.py`
- `uv run --extra dev pytest
tests/test_artifact_workflow_and_binaries.py::test_stage_runtime_release_copies_resource_binaries
tests/test_artifact_workflow_and_binaries.py::test_runtime_resource_binaries_are_included_by_wheel_config
tests/test_artifact_workflow_and_binaries.py::test_stage_runtime_stages_binary_without_type_generation`

Full `tests/test_artifact_workflow_and_binaries.py` still has unrelated
schema-normalization drift in the local checkout.

---------

Co-authored-by: Codex <noreply@openai.com>
2026-05-08 22:00:44 +03:00
github-actions[bot]
772e034594 Update models.json (#21776)
Automated update of models.json.

---------

Co-authored-by: aibrahim-oai <219906144+aibrahim-oai@users.noreply.github.com>
Co-authored-by: Ahmed Ibrahim <aibrahim@openai.com>
2026-05-08 21:37:23 +03:00
starr-openai
5f2543b74e Load configured environments from CODEX_HOME (#20667)
## Why

The earlier PRs add stdio transport support and the config-backed
environment provider, but the feature remains inert until normal Codex
entrypoints construct `EnvironmentManager` with enough context to
discover `CODEX_HOME/environments.toml`. This final stack PR activates
the provider while preserving the legacy `CODEX_EXEC_SERVER_URL`
fallback when no environments file exists.

**Stack position:** this is PR 5 of 5. It is the product wiring PR that
activates the configured environment provider added in PR 4.

## What Changed

- Thread `codex_home` into `EnvironmentManagerArgs`.
- Change `EnvironmentManager::new(...)` to load the provider from
`CODEX_HOME`.
- Preserve legacy behavior by falling back to
`DefaultEnvironmentProvider::from_env()` when `environments.toml` is
absent.
- Make `environments.toml`-backed managers start new threads with all
configured environments, default first, while keeping the legacy env-var
path single-default.
- Update the app-server, TUI, exec, MCP server, connector, prompt-debug,
and thread-manager-sample callsites to pass `codex_home` and handle
provider-loading errors.

## Self-Review Notes

- The multi-environment startup path is intentionally tied to the
`environments.toml` provider. Using `>1` configured environment as the
only signal would also expand the legacy `CODEX_EXEC_SERVER_URL`
provider because it keeps `local` addressable alongside `remote`.
- The startup environment list is still derived inside
`EnvironmentManager`; the provider only says whether its snapshot should
start new threads with all configured environments.
- The thread-manager sample was updated to pass the current
`ThreadManager::new(...)` installation id argument so the stack compiles
under Bazel.

## Stack

- 1. https://github.com/openai/codex/pull/20663 - Add stdio exec-server
listener
- 2. https://github.com/openai/codex/pull/20664 - Add stdio exec-server
client transport
- 3. https://github.com/openai/codex/pull/20665 - Make environment
providers own default selection
- 4. https://github.com/openai/codex/pull/20666 - Add CODEX_HOME
environments TOML provider
- **5. This PR:** https://github.com/openai/codex/pull/20667 - Load
configured environments from CODEX_HOME

Split from original draft: https://github.com/openai/codex/pull/20508

## Validation

- `just fmt`
- `git diff --check`
- `bazel build --config=remote --strategy=remote
--remote_download_toplevel
//codex-rs/thread-manager-sample:codex-thread-manager-sample`
- `bazel test --config=remote --strategy=remote
--remote_download_toplevel
//codex-rs/exec-server:exec-server-unit-tests`
- `bazel test --config=remote --strategy=remote
--remote_download_toplevel --test_sharding_strategy=disabled
--test_arg=default_thread_environment_selections_use_manager_default_id
//codex-rs/core:core-unit-tests`
- `bazel test --config=remote --strategy=remote
--remote_download_toplevel --test_sharding_strategy=disabled
--test_arg=start_thread_uses_all_default_environments_from_codex_home
//codex-rs/core:core-unit-tests`

## Documentation

This activates `CODEX_HOME/environments.toml`; user-facing documentation
should be added before this stack is treated as a documented public
workflow.

---------

Co-authored-by: Codex <noreply@openai.com>
2026-05-08 11:17:56 -07:00
David de Regt
872b8b15b3 feat: Use installation ID in remote enrollments (#21662)
* Pass installation ID for storage on enrollments server for
deduping/grouping multiple appservers per installation
* Pass installation ID in remoteControl/status/changed events
2026-05-08 17:54:01 +00:00
William Woodruff
8bea5d231a [codex] Address some more GHA hygiene issues (#21622)
This does two things:

- We use `persist-credentials: false` everywhere now. This is
unfortunately not the default in GitHub Actions, but it prevents
`actions/checkout` from dropping `secrets.GITHUB_TOKEN` onto disk.
- We interpose (some) template expansions through environment variables.
I've limited this to contexts that have non-fixed values; contexts that
are fixed (like `*.result`) are not dangerous to expand directly inline
(but maybe we should clean those up in the future for consistency
anyways).

This is a medium-risk change in terms of CI breakage: I did a scan for
usage of `git push` and other commands that implicitly use the persisted
credential, but couldn't find any. Even still, some implicit usages of
the persisted credentials may be lurking. Please ping ww@ if any issues
arise.
2026-05-08 10:19:27 -07:00
Eric Traut
0a0d09ad21 Clarify docs folder guidance in AGENTS.md (#21772)
## Summary

Codex keeps trying to add documentation to the `docs/` directory. With
the exception of app server API documentation, the docs for Codex should
not live in this repo. We don't want the local `docs/` folder to become
a stale shadow of the official docs.

This PR updates `AGENTS.md` to make that boundary explicit and scopes
the existing API documentation guidance to app-server docs/examples. It
also removes the extra `docs/config.md` sections that were recently
added.
2026-05-08 10:11:57 -07:00
Ahmed Ibrahim
7c0e54bf59 [codex] Generalize service tier slash commands (#21745)
## Why

`/fast` was wired as a one-off slash command even though model metadata
now exposes service tiers as catalog data. That meant adding another
tier, such as a slower/cheaper tier, would require more hardcoded TUI
plumbing instead of letting the model catalog drive the available
commands.

This change makes service-tier commands data-driven: each advertised
`service_tiers` entry becomes a `/name` command using the catalog
description, while the request path sends the tier `id` only when the
selected model supports it.

## What Changed

- Removed the hardcoded `/fast` slash-command variant and introduced
dynamic service-tier command items in the composer and command popup.
- Added toggle behavior for service-tier commands: invoking `/name`
selects that tier, and invoking it again clears the selection.
- Preserved the existing Fast-mode keybinding/status affordances by
resolving the current model tier whose name is `fast`, while still
sending the tier request value such as `priority`.
- Persisted service-tier selections as raw request strings so non-fast
tiers can round-trip through config.
- Updated the Bedrock catalog entry to advertise fast support through
`service_tiers` with `id: "priority"` and `name: "fast"`.
- Added defensive filtering in core so unsupported selected service
tiers are omitted from `/responses` requests.

## Validation

- Added/updated coverage for dynamic service-tier slash command lookup,
popup descriptions, composer dispatch, TUI fast toggling, and
unsupported-tier omission in core request construction.
- Local tests were not run per request.

---------

Co-authored-by: Codex <noreply@openai.com>
2026-05-08 20:09:51 +03:00
Zanie Blue
47f1d7b40b Use CARGO_NET_GIT_FETCH_WITH_CLI in rust-ci-full for more reliable git fetches (#21628)
Cargo uses libgit2 by default. In uv, we gave up on this entirely and
always call out to the git CLI because it is much more reliable. This is
part of my attempt to reduce flakes in `rust-ci-full`.
2026-05-08 09:53:21 -07:00
Zanie Blue
05ffa0b1d0 Fix rust-ci-full failures due to missing bwrap (#21604)
Since https://github.com/openai/codex/pull/21255, `rust-ci-full` has
been failing due to a missing `bwrap`.

```
thread 'main' panicked at linux-sandbox/src/launcher.rs:43:13:
bubblewrap is unavailable: no system bwrap was found on PATH and no bundled codex-resources/bwrap binary was found next to the Codex executable
```

Since the happy path is now to use the system binary, let's ensure
that's installed.


8d51826631 was necessary for the `bwrap` executable to be discoverable
when the working directory is `/`.

I ran `rust-ci-full` at
https://github.com/openai/codex/actions/runs/25528074506

---------

Co-authored-by: Codex <noreply@openai.com>
2026-05-08 09:52:19 -07:00
evawong-oai
5733e00c79 [sandboxing] Remove Darwin user cache write from Seatbelt network policy (#21443)
## Summary

1. Removes the broad `DARWIN_USER_CACHE_DIR` write rule from the macOS
Seatbelt network policy.
2. Removes the now unused policy parameter plumbing for that cache path.
3. Adds sandboxing coverage that keeps `com.apple.trustd.agent` for TLS
while rejecting the cache write rule.

## Why

This closes the exact cache poisoning boundary. The earlier `gh` TLS
issue is now covered by trustd access, so the cache write is no longer
needed.

## Validation

1. Rust formatting passed.
2. The sandboxing crate tests passed.
3. Local macOS Seatbelt repro with patched policy passed. `gh api`
returned `21442` without the cache write rule.
2026-05-08 16:43:07 +00:00
jif-oai
5b87bd2845 chore: thread tui (#21767) 2026-05-08 17:53:23 +02:00
bbrown-oai
607b0dd1f0 codex-otel: validate provider span attributes consistently (#21749)
Provider initialization installs process-global OTEL state, so invalid
trace metadata needs to fail before setup begins.

Use the same span attribute validator as config loading when traces are
exported so provider startup enforces the config contract without
duplicating validation logic.
2026-05-08 08:20:49 -07:00
jif-oai
f9bbbafb68 nit: comment (#21763)
Because of an async discussion
2026-05-08 17:15:46 +02:00
jif-oai
bd8fc9adb9 api: send hyphenated session and thread headers (#21757)
## Why
Some consumers expect conventional hyphenated HTTP headers. Codex
already sends the session and thread IDs on outbound Responses requests,
but it only uses the underscore spellings today, which makes those IDs
harder to consume in systems that normalize or reject underscore header
names.

Full context here:
https://openai.slack.com/archives/C08KCGLSPSQ/p1778248578422369
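
A minimal sketch of the duplicated emission; the function name comes
from this change, but the header-map handling here is illustrative
rather than the real `codex-api` code:

```rust
use std::collections::HashMap;

// Illustrative stand-in for the real header plumbing.
fn build_session_headers(
    session_id: Option<&str>,
    thread_id: Option<&str>,
) -> HashMap<String, String> {
    let mut headers = HashMap::new();
    if let Some(id) = session_id {
        // Emit both spellings so consumers that normalize or reject
        // underscore header names can still read the value.
        headers.insert("session_id".to_string(), id.to_string());
        headers.insert("session-id".to_string(), id.to_string());
    }
    if let Some(id) = thread_id {
        headers.insert("thread_id".to_string(), id.to_string());
        headers.insert("thread-id".to_string(), id.to_string());
    }
    headers
}
```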

## What changed
- `build_session_headers` now emits both `session_id` and `session-id`
when a session ID is present.
- It does the same for `thread_id` and `thread-id`.
- Added regression coverage in `codex-api/tests/clients.rs` and
`core/tests/suite/client.rs` so both the lower-level client tests and
the end-to-end request tests assert the two header spellings are
present.

## Test plan
- Added header assertions in `codex-api/tests/clients.rs`.
- Added request-header assertions in `core/tests/suite/client.rs` for
both the `/v1/responses` and `/api/codex/responses` request paths.
2026-05-08 17:11:19 +02:00
Eric Traut
e6312d44f0 Show permissions and approval mode in the TUI status line (#21677)
Fixes #21665.

## Why

The TUI status line is the right place for compact, glanceable session
state. The original request was motivated by the need to see the active
permission posture without opening `/permissions` or `/status`,
especially when switching between safer and more permissive modes during
a session.

This PR intentionally separates `permissions` from `approval-mode`
instead of combining them into one status-line item. They answer related
but different questions: `permissions` describes the active
sandbox/profile shape, while `approval-mode` describes how command
approvals are handled. Keeping them separate makes each item
independently configurable and avoids long combined labels in an already
space-constrained status line.

The tradeoff is that users who want the full permission posture in the
status line need to opt into both items. In exchange, users can show
only the sandbox/profile label, only the approval behavior, or both, and
named user-defined profiles remain concise. Non-standard permission
shapes are rendered as `Custom permissions` rather than trying to
squeeze detailed profile contents into the status line; `/status`
remains the fuller explanatory surface.

## What changed

- Added a configurable `permissions` status-line item.
- Added a separate `approval-mode` status-line item, with `approval` as
an alias.
- Render standard permission states compactly as `Read Only`,
`Workspace`, or `Full Access`.
- Preserve user-defined permission profile names directly in the status
line.
- Render unnamed non-standard permission shapes as `Custom permissions`.
- Refresh status surfaces when `/permissions` updates the permission
profile, approval policy, or approval reviewer.
- Updated status-line preview snapshot coverage for the new items.

## Verification

- `cargo test -p codex-tui
status_permissions_non_default_workspace_write_uses_workspace_label`
- `cargo test -p codex-tui
permissions_selection_emits_history_cell_when_selection_changes`
- `cargo insta pending-snapshots --manifest-path tui/Cargo.toml`
2026-05-08 08:03:11 -07:00
Eric Traut
f86d95a242 Display blended token count in status line (#21669)
## Why

The configurable `/statusline` and terminal title can display session
token usage. That display was using the raw total token count, which
includes cached input tokens, so it significantly overstated the token
usage compared with the blended token count shown elsewhere (in
`/status` and tracked in goals). This inconsistency resulted in user
confusion. We don't want to report cached tokens because we don't charge
for them and they are somewhat of an implementation detail that users
shouldn't care about.
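
One plausible reading of the blended computation, sketched with an
illustrative struct (the real `TokenUsage` in core may compute this
differently):

```rust
// Hypothetical field names for illustration only.
struct TokenUsage {
    total_tokens: u64,
    cached_input_tokens: u64,
}

impl TokenUsage {
    // Blended usage excludes cached input, which is not billed.
    fn blended_total(&self) -> u64 {
        self.total_tokens.saturating_sub(self.cached_input_tokens)
    }
}
```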

## What changed

- Use `TokenUsage::blended_total()` for the `used-tokens` status surface
item so cached input is excluded.
- Add a brief comment to `tokens_in_context_window()` clarifying that it
returns raw `total_tokens`, whose meaning depends on whether the caller
has last-turn or accumulated usage.
2026-05-08 07:56:13 -07:00
github-actions[bot]
aadcae9f3c Update models.json (#19896)
Automated update of models.json.

---------

Co-authored-by: aibrahim-oai <219906144+aibrahim-oai@users.noreply.github.com>
Co-authored-by: Ahmed Ibrahim <aibrahim@openai.com>
2026-05-08 17:41:55 +03:00
Ahmed Ibrahim
cce059467a [codex] Enable apply_patch freeform by default (#21687)
## Summary
- enable `apply_patch_freeform` by default in the feature registry

## Why
- make the freeform `apply_patch` tool available by default when model
metadata does not explicitly opt into another mode

## Validation
- `just fmt`
- did not run tests

---------

Co-authored-by: Codex <noreply@openai.com>
2026-05-08 13:15:00 +00:00
Ahmed Ibrahim
317213fd33 Allow string service tiers in config TOML (#21697)
## Why

`service_tier` in `config.toml` and profile config was still modeled as
an enum, which blocked newer or experimental service tier IDs even
though the runtime paths already carry string IDs.

This change makes the TOML-facing config accept string service tier IDs
directly while keeping the legacy `fast` alias behavior by normalizing
it to the request value `priority`.
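
A minimal sketch of the normalization described above; the real
config-load path handles more than this:

```rust
// Legacy alias handling: `fast` resolves to the request value
// `priority`; any other string service tier ID passes through unchanged.
fn normalize_service_tier(raw: Option<String>) -> Option<String> {
    raw.map(|tier| {
        if tier == "fast" {
            "priority".to_string()
        } else {
            tier
        }
    })
}
```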

## What Changed

- change the TOML-facing `service_tier` fields in global and profile
config to `Option<String>`
- keep config-load normalization so legacy `fast` still resolves to
`priority`
- persist resolved service tier strings directly in config locks so
arbitrary IDs round-trip cleanly
- regenerate the config schema and add config coverage for arbitrary
string IDs plus legacy `fast` normalization

## Verification

- added config tests for arbitrary string service tiers and legacy
`fast` normalization
- ran `just write-config-schema`
- CI

---------

Co-authored-by: Codex <noreply@openai.com>
2026-05-08 15:15:00 +03:00
jif-oai
d9feaffffb [codex] make shutdown pending-touch test deterministic (#21550)
## What changed
- rewrote `shutdown_flushes_pending_metadata_irrelevant_updated_at` to
seed an existing pending `updated_at` touch directly in
`RolloutWriterState`
- kept the shutdown test focused on draining a pending touch, leaving
the separate coalescing test to cover timing-based deferral

## Why
The old test had to complete several async operations inside the 50 ms
test-only coalescing window. When that sequence took longer, the second
flush updated `threads.updated_at` immediately and the pre-shutdown
equality assertion failed, even though shutdown behavior was correct.

## Validation
- `cargo test -p codex-rollout
shutdown_flushes_pending_metadata_irrelevant_updated_at`
- `cargo test -p codex-rollout`

Co-authored-by: Codex <noreply@openai.com>
2026-05-08 10:48:45 +02:00
Ahmed Ibrahim
71d80f9a14 Omit service_tier from remote /responses/compact requests under API auth (#21676)
## Summary

API-key-auth remote compaction requests should not inherit
`service_tier` from normal `/responses` turns. This path needs to match
API auth expectations, while ChatGPT-auth remote compaction should keep
reusing the shared request fields that still apply there.

This change keeps the decision inline in
`codex-rs/core/src/compact_remote.rs` only. Under API key auth, the
classic remote `/responses/compact` path now omits `service_tier`; under
ChatGPT auth, it keeps reusing the configured tier.
`codex-rs/core/src/compact_remote_v2.rs` is unchanged. The remote
compaction parity coverage and snapshots were updated to assert the
API-key omission and preserve the ChatGPT-auth behavior.

## Testing

- Updated remote compaction parity coverage in
`codex-rs/core/tests/suite/compact_remote.rs` and the corresponding
snapshots.
2026-05-08 11:15:14 +03:00
Eric Traut
d2e71db22a Remove exec research preview banner wording (#21683)
## Why

`codex exec` still included the stale `research preview` label in its
human-readable startup banner, which makes the CLI look older and less
current than it is.

Fixes #21444.

## What Changed

Removed the hard-coded ` (research preview)` suffix from the `OpenAI
Codex v<version>` startup banner in
`codex-rs/exec/src/event_processor_with_human_output.rs`.

## Validation

Local validation was not required for this one-line startup banner text
cleanup.
2026-05-08 00:30:44 -07:00
Eric Traut
c15ce42a12 Fix feature request Contributing link (#21688)
Fixes #20870.

## Summary

The feature request template currently links users to the README
`#contributing` anchor, but that anchor does not exist. This can confuse
users who are trying to understand contribution expectations before
filing a request.

This updates `.github/ISSUE_TEMPLATE/5-feature-request.yml` to point
`Contributing` at `docs/contributing.md`, matching the repository's
existing contribution guidance.
2026-05-08 00:23:40 -07:00
Eric Traut
8b1d6875ed Fix issue template labels (#21686)
Issue forms should only reference labels that exist in the repository so
new reports receive the intended automatic labels.

This updates the CLI issue form to stop applying the missing `needs
triage` label, and changes the documentation issue form from `docs` to
the existing `documentation` label.

Fixes #21158
2026-05-08 00:22:33 -07:00
Eric Traut
911841001d Fix duplicate CLI issue template description (#21685)
Fixes #21270.

The CLI bug report template defined `description` twice for the terminal
emulator field. Because duplicate YAML keys are ambiguous and parsers
generally keep the later value, the form could drop the multiplexer
guidance.

This combines that guidance with the terminal examples under a single
block scalar in `.github/ISSUE_TEMPLATE/3-cli.yml`.
2026-05-08 00:20:17 -07:00
xl-openai
ae15343243 feat: Update plugin share settings with discoverability (#21637)
Requires discoverability on plugin/share/updateTargets so the server can
manage workspace link access consistently, including auto-adding the
workspace principal for UNLISTED.

Also rejects LISTED on share creation and blocks client-supplied
workspace principals while preserving response parsing for LISTED.
2026-05-07 21:28:18 -07:00
Celia Chen
9cbd4c0371 feat: enable AWS login credentials for Bedrock auth (#21623)
## Summary

Codex's Amazon Bedrock provider signs Mantle requests with SigV4 using
credentials resolved by the AWS SDK. That worked for standard AWS
profiles and environment credentials, but AWS CLI console-login profiles
created by `aws login` require the SDK's `credentials-login` feature to
resolve `login_session` credentials.

This change enables that credential provider so Bedrock can use AWS
console-login credentials through the existing provider-owned AWS auth
path.

While testing the console-login path, we also hit a Mantle-specific
SigV4 regression from the new split between `session_id` and
`thread_id`. Mantle does not preserve legacy OpenAI compatibility
headers that use `snake_case` before SigV4 verification, so signing
those headers can make the server reconstruct a different canonical
request. The Bedrock auth path now removes that header class before
signing, keeping preserved hyphenated Codex/AWS headers such as
`x-codex-turn-metadata` signed normally.
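
A rough sketch of the header filtering before signing; this is
illustrative only, since the real path operates on the outbound
request's header map with the AWS SDK's SigV4 signer:

```rust
// Drop the legacy snake_case compatibility headers (session_id,
// thread_id, and future headers of the same shape) before signing so
// Mantle's reconstructed canonical request matches what Codex signed.
// Hyphenated headers such as x-codex-turn-metadata are untouched.
fn strip_snake_case_compat_headers(headers: &mut Vec<(String, String)>) {
    headers.retain(|(name, _)| !name.contains('_'));
}
```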

## Changes

- Enable `aws-config`'s `credentials-login` feature in
`codex-rs/aws-auth`.
- Add a compile-time regression test for
`aws_config::login::LoginCredentialsProvider`.
- Strip `snake_case` compatibility headers from Bedrock Mantle SigV4
requests before signing.
- Expand the Bedrock auth regression test to cover `session_id`,
`thread_id`, and future headers of the same shape.
- Refresh Cargo and Bazel lockfiles for the added `aws-sdk-signin`
dependency.

## Tests
- tested with `aws login` locally and verified that it works as
intended.
2026-05-08 04:07:59 +00:00
560 changed files with 26085 additions and 9484 deletions

View File

@@ -53,7 +53,7 @@ Use `--window "past week"` or `--window-hours 168` when the user asks for a non-
## Summary
No major issues reported by users.
Source: collector v4, git `abc123def456`, window `2026-04-27T00:00:00Z` to `2026-04-28T00:00:00Z`.
Source: collector v5, git `abc123def456`, window `2026-04-27T00:00:00Z` to `2026-04-28T00:00:00Z`.
Want details? I can expand this into the issue table.
```
@@ -65,7 +65,7 @@ Two issues are being surfaced by users:
🔥🔥 Terminal launch hangs on startup [1](https://github.com/openai/codex/issues/123)
🔥 Resume switches model providers unexpectedly [2](https://github.com/openai/codex/issues/456)
Source: collector v4, git `abc123def456`, window `2026-04-27T00:00:00Z` to `2026-04-28T00:00:00Z`.
Source: collector v5, git `abc123def456`, window `2026-04-27T00:00:00Z` to `2026-04-28T00:00:00Z`.
Want details? I can expand this into the issue table.
```
5. In `## Details`, when details are requested, include a compact table only when useful:
@@ -76,7 +76,7 @@ Want details? I can expand this into the issue table.
- A clear quiet/no-concern sentence when there is no meaningful signal.
6. Use the JSON `attention_marker` exactly. It is empty for normal rows, `🔥` for elevated rows, and `🔥🔥` for very high-attention rows. The actual cutoffs are in `attention_thresholds`.
7. Use inline numbered references where a row or bullet points to issues, for example `Compaction bugs [1](https://github.com/openai/codex/issues/123), [2](https://github.com/openai/codex/issues/456)`. Do not add a separate footnotes section.
8. Label `interactions` as `Interactions`; it counts posts/comments/reactions during the requested window, not unique people.
8. Label `interactions` as `Interactions`; it counts unique human GitHub users who created a new issue, added a new comment, or reacted during the requested window. Multiple posts/reactions from the same user on the same issue count once.
9. Mention the collector `script_version`, repo checkout `git_head`, and time window in one compact source line. In default mode, put this before the details prompt so the final line still asks whether the user wants details. In details-upfront mode, it can be the footer.
## Reaction Handling
@@ -89,7 +89,7 @@ GitHub issue search is still seeded by issue `updated_at`, so a purely reaction-
## Attention Markers
The collector scales attention markers by the requested time window. The baseline is 5 human user interactions for `🔥` and 10 for `🔥🔥` over 24 hours; longer or shorter windows scale those cutoffs linearly and round up. For example, a one-week report uses 35 and 70 interactions. Human user interactions are human-authored new issue posts, human-authored new comments, and human reactions created during the window, including upvotes. Bot posts and bot reactions are excluded. In prose, explain this as high user interaction rather than naming the emoji.
The collector scales attention markers by the requested time window. The baseline is 5 unique human users for `🔥` and 10 unique human users for `🔥🔥` over 24 hours; longer or shorter windows scale those cutoffs linearly and round up. For example, a one-week report uses 35 and 70 interactions. Unique human users are users who authored a new issue, authored a new comment, or reacted during the window, including upvotes. Multiple actions from the same user on the same issue count once. Bot posts and bot reactions are excluded. In prose, explain this as high user interaction rather than naming the emoji.
## Freshness

View File

@@ -11,7 +11,7 @@ from datetime import datetime, timedelta, timezone
from pathlib import Path
from urllib.parse import quote
SCRIPT_VERSION = 4
SCRIPT_VERSION = 5
QUALIFYING_KIND_LABELS = ("bug", "enhancement")
REACTION_KEYS = ("+1", "-1", "laugh", "hooray", "confused", "heart", "rocket", "eyes")
BASE_ATTENTION_WINDOW_HOURS = 24.0
@@ -393,9 +393,15 @@ def is_bot_login(login):
return bool(login) and login.lower().endswith("[bot]")
def is_human_user(user_obj):
def human_login_key(user_obj):
login = extract_login(user_obj)
return bool(login) and not is_bot_login(login)
if not login or is_bot_login(login):
return ""
return login.casefold()
def is_human_user(user_obj):
return bool(human_login_key(user_obj))
def label_names(issue):
@@ -467,22 +473,26 @@ def reaction_summary(item):
def reaction_event_summary(reactions, since, until):
counts = {}
total = 0
users = set()
for reaction in reactions or []:
if not isinstance(reaction, dict):
continue
if not is_in_window(str(reaction.get("created_at") or ""), since, until):
continue
if not is_human_user(reaction.get("user")):
user_key = human_login_key(reaction.get("user"))
if not user_key:
continue
content = str(reaction.get("content") or "")
if not content:
continue
counts[content] = counts.get(content, 0) + 1
total += 1
users.add(user_key)
return {
"total": total,
"counts": counts,
"upvotes": counts.get("+1", 0),
"users": sorted(users, key=str.casefold),
}
@@ -618,13 +628,21 @@ def summarize_issue(
new_comment_reaction_total = sum(
comment["reaction_total"] for comment in new_comments
)
new_issue_user_interaction = new_issue and is_human_user(issue.get("user"))
new_issue_user_key = human_login_key(issue.get("user")) if new_issue else ""
new_issue_user_interaction = bool(new_issue_user_key)
new_comment_user_interactions = sum(
1 for comment in new_comments if comment["human_user_interaction"]
)
user_interactions = (
int(new_issue_user_interaction) + new_comment_user_interactions + new_reactions
interaction_user_keys = set(issue_reaction_events_summary["users"])
interaction_user_keys.update(comment_reaction_events_summary["users"])
if new_issue_user_key:
interaction_user_keys.add(new_issue_user_key)
interaction_user_keys.update(
comment["author"].casefold()
for comment in new_comments
if comment["human_user_interaction"]
)
user_interactions = len(interaction_user_keys)
attention_level = attention_level_for(user_interactions, attention_thresholds)
attention_marker = attention_marker_for(user_interactions, attention_thresholds)
updated_without_visible_new_post = (
@@ -957,6 +975,7 @@ def collect_digest(args):
"New issue comments are filtered by comment creation time within the window from the fetched comment set.",
"Reaction events are counted by GitHub reaction created_at timestamps for hydrated issues and fetched comments.",
"Current reaction totals are standing engagement signals; new_reactions and new_upvotes are windowed activity.",
"user_interactions counts unique human users per issue across new issues, new comments, and new reactions; repeated actions by the same user count once.",
"The collector does not assign semantic clusters; use summary_inputs as model-ready evidence for report-time clustering.",
"Pure reaction-only issues may be missed if GitHub issue search does not surface them via updated_at.",
"Issues updated during the window without a new issue body or new comment are retained because label/status edits can still be useful owner signals.",

View File

@@ -494,6 +494,70 @@ def test_reactions_count_toward_attention_markers():
assert summary["new_comments"][0]["new_upvotes"] == 0
def test_user_interactions_are_deduped_by_human_login():
since = collect_issue_digest.parse_timestamp("2026-04-25T00:00:00Z", "--since")
until = collect_issue_digest.parse_timestamp("2026-04-26T00:00:00Z", "--until")
def comment(comment_id, login):
return {
"id": comment_id,
"created_at": f"2026-04-25T0{comment_id + 1}:00:00Z",
"updated_at": f"2026-04-25T0{comment_id + 1}:00:00Z",
"user": {"login": login},
"body": "same issue",
}
def reaction(content, login, created_at="2026-04-25T10:00:00Z"):
return {
"content": content,
"created_at": created_at,
"user": {"login": login},
}
issue = {
"number": 790,
"title": "Repeated pings should not boost attention",
"html_url": "https://github.com/openai/codex/issues/790",
"state": "open",
"created_at": "2026-04-25T01:00:00Z",
"updated_at": "2026-04-25T12:00:00Z",
"user": {"login": "Alice"},
"labels": [{"name": "bug"}, {"name": "tui"}],
}
comments = [comment(1, "alice"), comment(2, "ALICE"), comment(3, "bob")]
comments.append(comment(4, "github-actions[bot]"))
issue_reactions = [
reaction("+1", "alice"),
reaction("rocket", "Alice"),
reaction("+1", "bob"),
reaction("+1", "github-actions[bot]"),
reaction("+1", "carol", created_at="2026-04-24T23:00:00Z"),
]
comment_reactions_by_id = {
1: [reaction("heart", "alice")],
2: [reaction("+1", "bob")],
3: [reaction("eyes", "carol")],
}
summary = collect_issue_digest.summarize_issue(
issue,
comments,
["tui"],
since,
until,
body_chars=100,
comment_chars=100,
issue_reaction_events=issue_reactions,
comment_reactions_by_id=comment_reactions_by_id,
)
assert summary["activity"]["new_human_comments"] == 3
assert summary["new_reactions"] == 6
assert summary["user_interactions"] == 3
assert summary["attention"] is False
assert summary["attention_marker"] == ""
def test_digest_rows_are_table_ready_with_concise_descriptions():
rows = collect_issue_digest.digest_rows(
[

View File

@@ -2,7 +2,6 @@ name: 💻 CLI Bug
description: Report an issue in the Codex CLI
labels:
- bug
- needs triage
body:
- type: markdown
attributes:
@@ -41,9 +40,9 @@ body:
id: terminal
attributes:
label: What terminal emulator and version are you using (if applicable)?
description: Also note any multiplexer in use (screen / tmux / zellij)
description: |
E.g, VSCode, Terminal.app, iTerm2, Ghostty, Windows Terminal (WSL / PowerShell)
Also note any multiplexer in use (screen / tmux / zellij).
E.g., VS Code, Terminal.app, iTerm2, Ghostty, Windows Terminal (WSL / PowerShell)
- type: textarea
id: actual
attributes:

View File

@@ -10,7 +10,7 @@ body:
Before you submit a feature:
1. Search existing issues for similar features. If you find one, 👍 it rather than opening a new one.
2. The Codex team will try to balance the varying needs of the community when prioritizing or rejecting new features. Not all features will be accepted. See [Contributing](https://github.com/openai/codex#contributing) for more details.
2. The Codex team will try to balance the varying needs of the community when prioritizing or rejecting new features. Not all features will be accepted. See [Contributing](https://github.com/openai/codex/blob/main/docs/contributing.md) for more details.
- type: input
id: variant

View File

@@ -1,6 +1,6 @@
name: 📗 Documentation Issue
description: Tell us if there is missing or incorrect documentation
labels: [docs]
labels: [documentation]
body:
- type: markdown
attributes:
@@ -24,4 +24,4 @@ body:
- type: textarea
attributes:
label: Where did you find it?
description: If possible, please provide the URL(s) where you found this issue.
description: If possible, please provide the URL(s) where you found this issue.

View File

@@ -57,6 +57,9 @@ jobs:
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
persist-credentials: false
- name: Check rusty_v8 MODULE.bazel checksums
if: matrix.os == 'ubuntu-24.04' && matrix.target == 'x86_64-unknown-linux-gnu'
@@ -149,6 +152,9 @@ jobs:
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
persist-credentials: false
- name: Prepare Bazel CI
id: prepare_bazel
@@ -232,6 +238,9 @@ jobs:
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
persist-credentials: false
- name: Prepare Bazel CI
id: prepare_bazel
@@ -319,6 +328,9 @@ jobs:
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
persist-credentials: false
- name: Prepare Bazel CI
id: prepare_bazel

View File

@@ -10,15 +10,17 @@ jobs:
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
fetch-depth: 0
persist-credentials: false
- name: Determine PR comparison range
id: range
shell: bash
run: |
set -euo pipefail
echo "base=$(git rev-parse HEAD^1)" >> "$GITHUB_OUTPUT"
echo "head=$(git rev-parse HEAD^2)" >> "$GITHUB_OUTPUT"
echo "base=${{ github.event.pull_request.base.sha }}" >> "$GITHUB_OUTPUT"
echo "head=${{ github.event.pull_request.head.sha }}" >> "$GITHUB_OUTPUT"
- name: Check changed blob sizes
env:

View File

@@ -15,6 +15,9 @@ jobs:
steps:
- name: Checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
persist-credentials: false
- name: Install Rust toolchain
uses: dtolnay/rust-toolchain@a0b273b48ed29de4470960879e8381ff45632f26 # 1.93.0

View File

@@ -13,6 +13,9 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
persist-credentials: false
- name: Verify codex-rs Cargo manifests inherit workspace settings
run: python3 .github/scripts/verify_cargo_workspace_manifests.py

View File

@@ -19,6 +19,9 @@ jobs:
steps:
- name: Checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
persist-credentials: false
- name: Annotate locations with typos
uses: codespell-project/codespell-problem-matcher@b80729f885d32f78a716c2f107b4db1025001c42 # v1.1.0
- name: Codespell

View File

@@ -20,6 +20,8 @@ jobs:
has_matches: ${{ steps.normalize-all.outputs.has_matches }}
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
persist-credentials: false
- name: Prepare Codex inputs
env:
@@ -156,6 +158,8 @@ jobs:
has_matches: ${{ steps.normalize-open.outputs.has_matches }}
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
persist-credentials: false
- name: Prepare Codex inputs
env:

View File

@@ -18,6 +18,8 @@ jobs:
codex_output: ${{ steps.codex.outputs.final-message }}
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
persist-credentials: false
- id: codex
uses: openai/codex-action@5c3f4ccdb2b8790f73d6b21751ac00e602aa0c02 # v1.7

View File

@@ -7,6 +7,11 @@ on:
workflow_dispatch:
# CI builds in debug (dev) for faster signal.
env:
# Cargo's libgit2 transport has been flaky on macOS when fetching git
# dependencies with nested submodules. Use the system git CLI, which has
# better network/proxy behavior and matches Cargo's own suggested fallback.
CARGO_NET_GIT_FETCH_WITH_CLI: "true"
jobs:
# --- CI that doesn't need specific targets ---------------------------------
@@ -18,6 +23,8 @@ jobs:
working-directory: codex-rs
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
persist-credentials: false
- uses: dtolnay/rust-toolchain@a0b273b48ed29de4470960879e8381ff45632f26 # 1.93.0
with:
components: rustfmt
@@ -32,13 +39,14 @@ jobs:
working-directory: codex-rs
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
persist-credentials: false
- uses: dtolnay/rust-toolchain@a0b273b48ed29de4470960879e8381ff45632f26 # 1.93.0
- uses: taiki-e/install-action@44c6d64aa62cd779e873306675c7a58e86d6d532 # v2.62.49
with:
tool: cargo-shear
version: 1.11.2
tool: cargo-shear@1.11.2
- name: cargo shear
run: cargo shear
run: cargo shear --deny-warnings
argument_comment_lint_package:
name: Argument comment lint package
@@ -48,6 +56,8 @@ jobs:
DYLINT_LINK_VERSION: 5.0.0
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
persist-credentials: false
- uses: dtolnay/rust-toolchain@a0b273b48ed29de4470960879e8381ff45632f26 # 1.93.0
with:
toolchain: nightly-2025-09-18
@@ -98,6 +108,8 @@ jobs:
labels: codex-windows-x64
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
persist-credentials: false
- uses: ./.github/actions/setup-bazel-ci
with:
target: ${{ runner.os }}
@@ -234,6 +246,8 @@ jobs:
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
persist-credentials: false
- name: Install Linux build dependencies
if: ${{ runner.os == 'Linux' }}
shell: bash
@@ -560,6 +574,8 @@ jobs:
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
persist-credentials: false
- name: Install Linux build dependencies
if: ${{ runner.os == 'Linux' }}
shell: bash
@@ -567,7 +583,7 @@ jobs:
set -euo pipefail
if command -v apt-get >/dev/null 2>&1; then
sudo apt-get update -y
sudo DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends pkg-config libcap-dev
sudo DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends pkg-config libcap-dev bubblewrap
fi
# Some integration tests rely on DotSlash being installed.
@@ -722,10 +738,12 @@ jobs:
shell: bash
run: |
set +e
if [[ "${{ steps.test.outcome }}" != "success" ]]; then
if [[ "${STEPS_TEST_OUTCOME}" != "success" ]]; then
docker logs codex-remote-test-env || true
fi
docker rm -f codex-remote-test-env >/dev/null 2>&1 || true
env:
STEPS_TEST_OUTCOME: ${{ steps.test.outcome }}
- name: verify tests passed
if: steps.test.outcome == 'failure'

View File

@@ -16,7 +16,9 @@ jobs:
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
fetch-depth: 0
persist-credentials: false
- name: Detect changed paths (no external action)
id: detect
shell: bash
@@ -62,6 +64,9 @@ jobs:
working-directory: codex-rs
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
persist-credentials: false
- uses: dtolnay/rust-toolchain@a0b273b48ed29de4470960879e8381ff45632f26 # 1.93.0
with:
components: rustfmt
@@ -78,13 +83,15 @@ jobs:
working-directory: codex-rs
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
persist-credentials: false
- uses: dtolnay/rust-toolchain@a0b273b48ed29de4470960879e8381ff45632f26 # 1.93.0
- uses: taiki-e/install-action@44c6d64aa62cd779e873306675c7a58e86d6d532 # v2.62.49
with:
tool: cargo-shear
version: 1.11.2
tool: cargo-shear@1.11.2
- name: cargo shear
run: cargo shear
run: cargo shear --deny-warnings
argument_comment_lint_package:
name: Argument comment lint package
@@ -96,6 +103,9 @@ jobs:
DYLINT_LINK_VERSION: 5.0.0
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
persist-credentials: false
- uses: dtolnay/rust-toolchain@a0b273b48ed29de4470960879e8381ff45632f26 # 1.93.0
- name: Install nightly argument-comment-lint toolchain
shell: bash
@@ -172,6 +182,9 @@ jobs:
echo "run=false" >> "$GITHUB_OUTPUT"
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
if: ${{ steps.argument_comment_lint_gate.outputs.run == 'true' }}
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
persist-credentials: false
- name: Run argument comment lint on codex-rs via Bazel
if: ${{ steps.argument_comment_lint_gate.outputs.run == 'true' }}
uses: ./.github/actions/run-argument-comment-lint
@@ -203,20 +216,25 @@ jobs:
# If nothing relevant changed (PR touching only root README, etc.),
# declare success regardless of other jobs.
if [[ '${{ needs.changed.outputs.argument_comment_lint }}' != 'true' && '${{ needs.changed.outputs.codex }}' != 'true' && '${{ needs.changed.outputs.workflows }}' != 'true' ]]; then
if [[ "${NEEDS_CHANGED_OUTPUTS_ARGUMENT_COMMENT_LINT}" != 'true' && "${NEEDS_CHANGED_OUTPUTS_CODEX}" != 'true' && "${NEEDS_CHANGED_OUTPUTS_WORKFLOWS}" != 'true' ]]; then
echo 'No relevant changes -> CI not required.'
exit 0
fi
if [[ '${{ needs.changed.outputs.argument_comment_lint_package }}' == 'true' ]]; then
if [[ "${NEEDS_CHANGED_OUTPUTS_ARGUMENT_COMMENT_LINT_PACKAGE}" == 'true' ]]; then
[[ '${{ needs.argument_comment_lint_package.result }}' == 'success' ]] || { echo 'argument_comment_lint_package failed'; exit 1; }
fi
if [[ '${{ needs.changed.outputs.argument_comment_lint }}' == 'true' || '${{ needs.changed.outputs.workflows }}' == 'true' ]]; then
if [[ "${NEEDS_CHANGED_OUTPUTS_ARGUMENT_COMMENT_LINT}" == 'true' || "${NEEDS_CHANGED_OUTPUTS_WORKFLOWS}" == 'true' ]]; then
[[ '${{ needs.argument_comment_lint_prebuilt.result }}' == 'success' ]] || { echo 'argument_comment_lint_prebuilt failed'; exit 1; }
fi
if [[ '${{ needs.changed.outputs.codex }}' == 'true' || '${{ needs.changed.outputs.workflows }}' == 'true' ]]; then
if [[ "${NEEDS_CHANGED_OUTPUTS_CODEX}" == 'true' || "${NEEDS_CHANGED_OUTPUTS_WORKFLOWS}" == 'true' ]]; then
[[ '${{ needs.general.result }}' == 'success' ]] || { echo 'general failed'; exit 1; }
[[ '${{ needs.cargo_shear.result }}' == 'success' ]] || { echo 'cargo_shear failed'; exit 1; }
fi
env:
NEEDS_CHANGED_OUTPUTS_ARGUMENT_COMMENT_LINT: ${{ needs.changed.outputs.argument_comment_lint }}
NEEDS_CHANGED_OUTPUTS_CODEX: ${{ needs.changed.outputs.codex }}
NEEDS_CHANGED_OUTPUTS_WORKFLOWS: ${{ needs.changed.outputs.workflows }}
NEEDS_CHANGED_OUTPUTS_ARGUMENT_COMMENT_LINT_PACKAGE: ${{ needs.changed.outputs.argument_comment_lint_package }}

View File

@@ -57,6 +57,8 @@ jobs:
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
persist-credentials: false
- uses: dtolnay/rust-toolchain@a0b273b48ed29de4470960879e8381ff45632f26 # 1.93.0
with:

View File

@@ -22,6 +22,7 @@ jobs:
with:
ref: main
fetch-depth: 0
persist-credentials: false
- name: Update models.json
env:

View File

@@ -84,6 +84,8 @@ jobs:
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
persist-credentials: false
- name: Print runner specs (Windows)
shell: powershell
run: |
@@ -166,6 +168,8 @@ jobs:
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
persist-credentials: false
- name: Download prebuilt Windows primary binaries
uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1

View File

@@ -46,6 +46,8 @@ jobs:
libncursesw5-dev
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
persist-credentials: false
- name: Build, smoke-test, and stage zsh artifact
shell: bash
@@ -82,6 +84,8 @@ jobs:
fi
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
persist-credentials: false
- name: Build, smoke-test, and stage zsh artifact
shell: bash

View File

@@ -20,6 +20,8 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
persist-credentials: false
- uses: dtolnay/rust-toolchain@a0b273b48ed29de4470960879e8381ff45632f26 # 1.93.0
- name: Validate tag matches Cargo.toml version
shell: bash
@@ -119,6 +121,8 @@ jobs:
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
persist-credentials: false
- name: Print runner specs (Linux)
if: ${{ runner.os == 'Linux' }}
shell: bash
@@ -184,6 +188,7 @@ jobs:
uses: mlugg/setup-zig@d1434d08867e3ee9daa34448df10607b98908d29 # v2.2.1
with:
version: 0.14.0
use-cache: false
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl'}}
name: Install musl build tools
@@ -477,6 +482,8 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
persist-credentials: false
- name: Generate release notes from tag commit message
id: release_notes

View File

@@ -18,6 +18,8 @@ jobs:
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
persist-credentials: false
- name: Set up Python
uses: actions/setup-python@a309ff8b426b58ec0e2a45f0f869d46889d02405 # v6.2.0
@@ -70,6 +72,8 @@ jobs:
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
persist-credentials: false
- name: Set up Bazel
uses: ./.github/actions/setup-bazel-ci

View File

@@ -14,6 +14,9 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
persist-credentials: false
- name: Install Linux bwrap build dependencies
shell: bash

View File

@@ -41,6 +41,9 @@ jobs:
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
persist-credentials: false
- name: Set up Python
uses: actions/setup-python@a309ff8b426b58ec0e2a45f0f869d46889d02405 # v6.2.0
@@ -75,6 +78,9 @@ jobs:
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
persist-credentials: false
- name: Set up Bazel
uses: ./.github/actions/setup-bazel-ci

View File

@@ -26,7 +26,7 @@ In the codex-rs folder where the rust code lives:
- Implementations may still use `async fn foo(&self, ...) -> T` when they satisfy that contract.
- Do not use `#[allow(async_fn_in_trait)]` as a shortcut around spelling the future contract explicitly.
- When writing tests, prefer comparing the equality of entire objects over fields one by one.
- When making a change that adds or changes an API, ensure that the documentation in the `docs/` folder is up to date if applicable.
- Do not add general product or user-facing documentation to the `docs/` folder. The official Codex documentation lives elsewhere. The exception is app-server API documentation, which is covered by the app-server guidance below.
- Prefer private modules and explicitly exported public crate API.
- If you change `ConfigToml` or nested config types, run `just write-config-schema` to update `codex-rs/core/config.schema.json`.
- When working with MCP tool calls, prefer using `codex-rs/codex-mcp/src/mcp_connection_manager.rs` to handle mutation of tools and tool calls. Aim to minimize the footprint of changes and leverage existing abstractions rather than plumbing code through multiple levels of function calls.
@@ -210,7 +210,7 @@ These guidelines apply to app-server protocol work in `codex-rs`, especially:
### Development Workflow
- Update docs/examples when API behavior changes (at minimum `app-server/README.md`).
- Update app-server docs/examples when API behavior changes (at minimum `app-server/README.md`).
- Regenerate schema fixtures when API shapes change:
`just write-app-server-schema`
(and `just write-app-server-schema --experimental` when experimental API fixtures are affected).

11 — MODULE.bazel.lock (generated): file diff suppressed because one or more lines are too long

248 — codex-rs/Cargo.lock (generated)
View File

@@ -757,6 +757,7 @@ checksum = "96571e6996817bf3d58f6b569e4b9fd2e9d2fcf9f7424eed07b2ce9bb87535e5"
dependencies = [
"aws-credential-types",
"aws-runtime",
"aws-sdk-signin",
"aws-sdk-sso",
"aws-sdk-ssooidc",
"aws-sdk-sts",
@@ -767,15 +768,20 @@ dependencies = [
"aws-smithy-runtime-api",
"aws-smithy-types",
"aws-types",
"base64-simd",
"bytes",
"fastrand",
"hex",
"http 1.4.0",
"p256",
"rand 0.8.5",
"ring",
"sha2",
"time",
"tokio",
"tracing",
"url",
"uuid",
"zeroize",
]
@@ -838,6 +844,28 @@ dependencies = [
"uuid",
]
[[package]]
name = "aws-sdk-signin"
version = "1.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c084bd63941916e1348cb8d9e05ac2e49bdd40a380e9167702683184c6c6be53"
dependencies = [
"aws-credential-types",
"aws-runtime",
"aws-smithy-async",
"aws-smithy-http",
"aws-smithy-json",
"aws-smithy-runtime",
"aws-smithy-runtime-api",
"aws-smithy-types",
"aws-types",
"bytes",
"fastrand",
"http 0.2.12",
"regex-lite",
"tracing",
]
[[package]]
name = "aws-sdk-sso"
version = "1.91.0"
@@ -1117,7 +1145,6 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8b52af3cb4058c895d37317bb27508dccc8e5f2d39454016b297bf4a400597b8"
dependencies = [
"axum-core",
"base64 0.22.1",
"bytes",
"form_urlencoded",
"futures-util",
@@ -1136,10 +1163,8 @@ dependencies = [
"serde_json",
"serde_path_to_error",
"serde_urlencoded",
"sha1",
"sync_wrapper",
"tokio",
"tokio-tungstenite",
"tower",
"tower-layer",
"tower-service",
@@ -1180,6 +1205,12 @@ dependencies = [
"windows-link",
]
[[package]]
name = "base16ct"
version = "0.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4c7f02d4ea65f2c1853089ffd8d2787bdbc63de2f0d29dedbcf8ccdfa0ccd4cf"
[[package]]
name = "base64"
version = "0.21.7"
@@ -1861,11 +1892,14 @@ dependencies = [
"codex-core",
"codex-core-plugins",
"codex-exec-server",
"codex-extension-api",
"codex-external-agent-migration",
"codex-external-agent-sessions",
"codex-features",
"codex-feedback",
"codex-file-search",
"codex-file-watcher",
"codex-git-attribution",
"codex-git-utils",
"codex-hooks",
"codex-login",
@@ -1884,6 +1918,7 @@ dependencies = [
"codex-state",
"codex-thread-store",
"codex-tools",
"codex-uds",
"codex-utils-absolute-path",
"codex-utils-cargo-bin",
"codex-utils-cli",
@@ -1892,16 +1927,13 @@ dependencies = [
"core_test_support",
"flate2",
"futures",
"hmac",
"opentelemetry",
"opentelemetry_sdk",
"pretty_assertions",
"reqwest",
"rmcp",
"serde",
"serde_json",
"serial_test",
"sha2",
"shlex",
"tar",
"tempfile",
@@ -1945,6 +1977,26 @@ dependencies = [
"url",
]
[[package]]
name = "codex-app-server-daemon"
version = "0.0.0"
dependencies = [
"anyhow",
"codex-app-server-protocol",
"codex-app-server-transport",
"codex-core",
"codex-uds",
"futures",
"libc",
"pretty_assertions",
"reqwest",
"serde",
"serde_json",
"tempfile",
"tokio",
"tokio-tungstenite",
]
[[package]]
name = "codex-app-server-protocol"
version = "0.0.0"
@@ -1997,11 +2049,8 @@ dependencies = [
name = "codex-app-server-transport"
version = "0.0.0"
dependencies = [
"anyhow",
"axum",
"base64 0.22.1",
"chrono",
"clap",
"codex-api",
"codex-app-server-protocol",
"codex-config",
@@ -2012,18 +2061,12 @@ dependencies = [
"codex-uds",
"codex-utils-absolute-path",
"codex-utils-rustls-provider",
"constant_time_eq 0.3.1",
"futures",
"gethostname",
"hmac",
"jsonwebtoken",
"owo-colors",
"pretty_assertions",
"serde",
"serde_json",
"sha2",
"tempfile",
"time",
"tokio",
"tokio-tungstenite",
"tokio-util",
@@ -2119,17 +2162,6 @@ dependencies = [
"serde_with",
]
[[package]]
name = "codex-builtin-mcps"
version = "0.0.0"
dependencies = [
"anyhow",
"codex-memories-mcp",
"codex-utils-absolute-path",
"pretty_assertions",
"tokio",
]
[[package]]
name = "codex-bwrap"
version = "0.0.0"
@@ -2172,6 +2204,7 @@ dependencies = [
"clap",
"clap_complete",
"codex-app-server",
"codex-app-server-daemon",
"codex-app-server-protocol",
"codex-app-server-test-client",
"codex-arg0",
@@ -2407,7 +2440,11 @@ dependencies = [
"codex-app-server-protocol",
"pretty_assertions",
"serde",
"serde_json",
"sha1",
"tempfile",
"tokio",
"tracing",
"urlencoding",
]
@@ -2437,6 +2474,7 @@ dependencies = [
"codex-core-skills",
"codex-exec-server",
"codex-execpolicy",
"codex-extension-api",
"codex-features",
"codex-feedback",
"codex-git-utils",
@@ -2462,6 +2500,7 @@ dependencies = [
"codex-terminal-detection",
"codex-test-binary-support",
"codex-thread-store",
"codex-tool-api",
"codex-tools",
"codex-utils-absolute-path",
"codex-utils-cache",
@@ -2492,7 +2531,6 @@ dependencies = [
"insta",
"libc",
"maplit",
"notify",
"once_cell",
"openssl-sys",
"opentelemetry",
@@ -2541,6 +2579,7 @@ dependencies = [
"codex-config",
"codex-core",
"codex-exec-server",
"codex-extension-api",
"codex-features",
"codex-login",
"codex-model-provider-info",
@@ -2699,7 +2738,6 @@ dependencies = [
"serde",
"serde_json",
"serial_test",
"sha2",
"tempfile",
"test-case",
"thiserror 2.0.18",
@@ -2758,6 +2796,14 @@ dependencies = [
"syn 2.0.114",
]
[[package]]
name = "codex-extension-api"
version = "0.0.0"
dependencies = [
"codex-protocol",
"codex-tool-api",
]
[[package]]
name = "codex-external-agent-migration"
version = "0.0.0"
@@ -2836,6 +2882,27 @@ dependencies = [
"serde",
]
[[package]]
name = "codex-file-watcher"
version = "0.0.0"
dependencies = [
"notify",
"pretty_assertions",
"tempfile",
"tokio",
"tracing",
]
[[package]]
name = "codex-git-attribution"
version = "0.0.0"
dependencies = [
"codex-core",
"codex-extension-api",
"codex-features",
"pretty_assertions",
]
[[package]]
name = "codex-git-utils"
version = "0.0.0"
@@ -2987,7 +3054,6 @@ dependencies = [
"async-channel",
"codex-api",
"codex-async-utils",
"codex-builtin-mcps",
"codex-config",
"codex-exec-server",
"codex-login",
@@ -3021,6 +3087,7 @@ dependencies = [
"codex-config",
"codex-core",
"codex-exec-server",
"codex-extension-api",
"codex-login",
"codex-protocol",
"codex-shell-command",
@@ -3297,7 +3364,6 @@ dependencies = [
"codex-utils-absolute-path",
"codex-utils-image",
"codex-utils-string",
"codex-utils-template",
"encoding_rs",
"globset",
"http 1.4.0",
@@ -3613,6 +3679,15 @@ dependencies = [
"uuid",
]
[[package]]
name = "codex-tool-api"
version = "0.0.0"
dependencies = [
"pretty_assertions",
"serde_json",
"thiserror 2.0.18",
]
[[package]]
name = "codex-tools"
version = "0.0.0"
@@ -4229,6 +4304,7 @@ dependencies = [
"codex-config",
"codex-core",
"codex-exec-server",
"codex-extension-api",
"codex-features",
"codex-hooks",
"codex-login",
@@ -4415,6 +4491,18 @@ version = "0.2.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "460fbee9c2c2f33933d720630a6a0bac33ba7053db5344fac858d4b8952d77d5"
[[package]]
name = "crypto-bigint"
version = "0.5.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0dc92fb57ca44df6db8059111ab3af99a63d5d0f8375d9972e319a379c6bab76"
dependencies = [
"generic-array",
"rand_core 0.6.4",
"subtle",
"zeroize",
]
[[package]]
name = "crypto-common"
version = "0.1.7"
@@ -5128,6 +5216,20 @@ version = "1.0.20"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d0881ea181b1df73ff77ffaaf9c7544ecc11e82fba9b5f27b262a3c73a332555"
[[package]]
name = "ecdsa"
version = "0.16.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ee27f32b5c5292967d2d4a9d7f1e0b0aed2c15daded5a60300e4abb9d8020bca"
dependencies = [
"der",
"digest",
"elliptic-curve",
"rfc6979",
"signature",
"spki",
]
[[package]]
name = "ed25519"
version = "2.2.3"
@@ -5161,6 +5263,26 @@ dependencies = [
"serde",
]
[[package]]
name = "elliptic-curve"
version = "0.13.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b5e6043086bf7973472e0c7dff2142ea0b680d30e18d9cc40f267efbf222bd47"
dependencies = [
"base16ct",
"crypto-bigint",
"digest",
"ff",
"generic-array",
"group",
"pem-rfc7468",
"pkcs8",
"rand_core 0.6.4",
"sec1",
"subtle",
"zeroize",
]
[[package]]
name = "ena"
version = "0.14.3"
@@ -5414,6 +5536,16 @@ dependencies = [
"simd-adler32",
]
[[package]]
name = "ff"
version = "0.13.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c0b50bfb653653f9ca9095b427bed08ab8d75a137839d9ad64eb11810d5b6393"
dependencies = [
"rand_core 0.6.4",
"subtle",
]
[[package]]
name = "fiat-crypto"
version = "0.2.9"
@@ -6808,6 +6940,17 @@ dependencies = [
"system-deps",
]
[[package]]
name = "group"
version = "0.13.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f0f9ef7462f7c099f518d754361858f86d8a07af53ba9af0fe635bbccb151a63"
dependencies = [
"ff",
"rand_core 0.6.4",
"subtle",
]
[[package]]
name = "gzip-header"
version = "1.0.0"
@@ -9307,6 +9450,18 @@ dependencies = [
"supports-color 3.0.2",
]
[[package]]
name = "p256"
version = "0.13.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c9863ad85fa8f4460f9c48cb909d38a0d689dba1f6f6988a5e3e0d31071bcd4b"
dependencies = [
"ecdsa",
"elliptic-curve",
"primeorder",
"sha2",
]
[[package]]
name = "parking"
version = "2.2.1"
@@ -9736,6 +9891,15 @@ dependencies = [
"syn 2.0.114",
]
[[package]]
name = "primeorder"
version = "0.13.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "353e1ca18966c16d9deb1c69278edbc5f194139612772bd9537af60ac231e1e6"
dependencies = [
"elliptic-curve",
]
[[package]]
name = "proc-macro-crate"
version = "3.4.0"
@@ -10686,6 +10850,16 @@ version = "0.7.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1e061d1b48cb8d38042de4ae0a7a6401009d6143dc80d2e2d6f31f0bdd6470c7"
[[package]]
name = "rfc6979"
version = "0.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f8dd2a808d456c4a54e300a23e9f5a67e122c3024119acbfd73e3bf664491cb2"
dependencies = [
"hmac",
"subtle",
]
[[package]]
name = "ring"
version = "0.17.14"
@@ -11145,6 +11319,20 @@ version = "3.0.10"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "490dcfcbfef26be6800d11870ff2df8774fa6e86d047e3e8c8a76b25655e41ca"
[[package]]
name = "sec1"
version = "0.7.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d3e97a565f76233a6003f9f5c54be1d9c5bdfa3eccfb189469f11ec4901c47dc"
dependencies = [
"base16ct",
"der",
"generic-array",
"pkcs8",
"subtle",
"zeroize",
]
[[package]]
name = "seccompiler"
version = "0.5.0"

View File

@@ -5,12 +5,12 @@ members = [
"agent-graph-store",
"agent-identity",
"backend-client",
"builtin-mcps",
"bwrap",
"ansi-escape",
"async-utils",
"app-server",
"app-server-transport",
"app-server-daemon",
"app-server-client",
"app-server-protocol",
"app-server-test-client",
@@ -44,10 +44,13 @@ members = [
"exec-server",
"execpolicy",
"execpolicy-legacy",
"ext/extension-api",
"ext/git-attribution",
"external-agent-migration",
"external-agent-sessions",
"keyring-store",
"file-search",
"file-watcher",
"linux-sandbox",
"lmstudio",
"login",
@@ -104,6 +107,7 @@ members = [
"test-binary-support",
"thread-manager-sample",
"thread-store",
"tool-api",
"uds",
"codex-experimental-api-macros",
"plugin",
@@ -131,6 +135,7 @@ codex-api = { path = "codex-api" }
codex-aws-auth = { path = "aws-auth" }
codex-app-server = { path = "app-server" }
codex-app-server-transport = { path = "app-server-transport" }
codex-app-server-daemon = { path = "app-server-daemon" }
codex-app-server-client = { path = "app-server-client" }
codex-app-server-protocol = { path = "app-server-protocol" }
codex-app-server-test-client = { path = "app-server-test-client" }
@@ -138,7 +143,6 @@ codex-apply-patch = { path = "apply-patch" }
codex-arg0 = { path = "arg0" }
codex-async-utils = { path = "async-utils" }
codex-backend-client = { path = "backend-client" }
codex-builtin-mcps = { path = "builtin-mcps" }
codex-chatgpt = { path = "chatgpt" }
codex-cli = { path = "cli" }
codex-client = { path = "codex-client" }
@@ -157,6 +161,8 @@ codex-exec = { path = "exec" }
codex-file-system = { path = "file-system" }
codex-exec-server = { path = "exec-server" }
codex-execpolicy = { path = "execpolicy" }
codex-extension-api = { path = "ext/extension-api" }
codex-git-attribution = { path = "ext/git-attribution" }
codex-external-agent-migration = { path = "external-agent-migration" }
codex-external-agent-sessions = { path = "external-agent-sessions" }
codex-experimental-api-macros = { path = "codex-experimental-api-macros" }
@@ -164,6 +170,7 @@ codex-features = { path = "features" }
codex-feedback = { path = "feedback" }
codex-install-context = { path = "install-context" }
codex-file-search = { path = "file-search" }
codex-file-watcher = { path = "file-watcher" }
codex-git-utils = { path = "git-utils" }
codex-hooks = { path = "hooks" }
codex-keyring-store = { path = "keyring-store" }
@@ -171,7 +178,6 @@ codex-linux-sandbox = { path = "linux-sandbox" }
codex-lmstudio = { path = "lmstudio" }
codex-login = { path = "login" }
codex-message-history = { path = "message-history" }
codex-memories-mcp = { path = "memories/mcp" }
codex-memories-read = { path = "memories/read" }
codex-memories-write = { path = "memories/write" }
codex-mcp = { path = "codex-mcp" }
@@ -201,6 +207,7 @@ codex-stdio-to-uds = { path = "stdio-to-uds" }
codex-terminal-detection = { path = "terminal-detection" }
codex-test-binary-support = { path = "test-binary-support" }
codex-thread-store = { path = "thread-store" }
codex-tool-api = { path = "tool-api" }
codex-tools = { path = "tools" }
codex-tui = { path = "tui" }
codex-uds = { path = "uds" }
@@ -257,7 +264,6 @@ chrono = "0.4.43"
clap = "4"
clap_complete = "4"
color-eyre = "0.6.3"
constant_time_eq = "0.3.1"
crossbeam-channel = "0.5.15"
crypto_box = { version = "0.9.1", features = ["seal"] }
crossterm = "0.28.1"
@@ -464,11 +470,8 @@ unwrap_used = "deny"
[workspace.metadata.cargo-shear]
ignored = [
"codex-agent-graph-store",
"codex-memories-mcp",
"icu_provider",
"openssl-sys",
"codex-utils-readiness",
"codex-utils-template",
"codex-v8-poc",
]

View File

@@ -0,0 +1,298 @@
use crate::events::CodexAcceptedLineFingerprintsEventParams;
use crate::events::CodexAcceptedLineFingerprintsEventRequest;
use crate::events::TrackEventRequest;
use crate::facts::AcceptedLineFingerprint;
use codex_git_utils::canonicalize_git_remote_url;
use codex_git_utils::get_git_remote_urls_assume_git_repo;
use sha1::Digest;
use std::path::Path;
const ACCEPTED_LINE_FINGERPRINT_EVENT_TARGET_BYTES: usize = 2 * 1024 * 1024;
const ACCEPTED_LINE_FINGERPRINT_EVENT_FIXED_BYTES: usize = 1024;
#[derive(Clone, Debug, PartialEq, Eq)]
pub struct AcceptedLineFingerprintSummary {
pub accepted_added_lines: u64,
pub accepted_deleted_lines: u64,
pub line_fingerprints: Vec<AcceptedLineFingerprint>,
}
pub(crate) struct AcceptedLineFingerprintEventInput {
pub(crate) event_type: &'static str,
pub(crate) turn_id: String,
pub(crate) thread_id: String,
pub(crate) product_surface: Option<String>,
pub(crate) model_slug: Option<String>,
pub(crate) completed_at: u64,
pub(crate) repo_hash: Option<String>,
pub(crate) accepted_added_lines: u64,
pub(crate) accepted_deleted_lines: u64,
pub(crate) line_fingerprints: Vec<AcceptedLineFingerprint>,
}
pub fn accepted_line_fingerprints_from_unified_diff(
unified_diff: &str,
) -> AcceptedLineFingerprintSummary {
let mut current_path: Option<String> = None;
let mut in_hunk = false;
let mut accepted_added_lines = 0;
let mut accepted_deleted_lines = 0;
let mut line_fingerprints = Vec::new();
for line in unified_diff.lines() {
if line.starts_with("diff --git ") {
current_path = None;
in_hunk = false;
continue;
}
if line.starts_with("@@ ") {
in_hunk = true;
continue;
}
if !in_hunk && let Some(path) = line.strip_prefix("+++ ") {
current_path = normalize_diff_path(path);
continue;
}
if !in_hunk && line.starts_with("--- ") {
continue;
}
if let Some(added_line) = line.strip_prefix('+') {
accepted_added_lines += 1;
if let Some(path) = current_path.as_deref()
&& let Some(normalized_line) = normalize_effective_line(added_line)
{
line_fingerprints.push(AcceptedLineFingerprint {
path_hash: fingerprint_hash("path", path),
line_hash: fingerprint_hash("line", &normalized_line),
});
}
continue;
}
if line.starts_with('-') {
accepted_deleted_lines += 1;
}
}
AcceptedLineFingerprintSummary {
accepted_added_lines,
accepted_deleted_lines,
line_fingerprints,
}
}
pub fn fingerprint_hash(domain: &str, value: &str) -> String {
let mut hasher = sha1::Sha1::new();
hasher.update(b"file-line-v1\0");
hasher.update(domain.as_bytes());
hasher.update(b"\0");
hasher.update(value.as_bytes());
format!("{:x}", hasher.finalize())
}
pub(crate) fn accepted_line_fingerprint_event_requests(
input: AcceptedLineFingerprintEventInput,
) -> Vec<TrackEventRequest> {
let chunks = accepted_line_fingerprint_chunks(input.line_fingerprints);
chunks
.into_iter()
.enumerate()
.map(|(index, line_fingerprints)| {
let is_first_chunk = index == 0;
TrackEventRequest::AcceptedLineFingerprints(Box::new(
CodexAcceptedLineFingerprintsEventRequest {
event_type: "codex_accepted_line_fingerprints",
event_params: CodexAcceptedLineFingerprintsEventParams {
event_type: input.event_type,
turn_id: input.turn_id.clone(),
thread_id: input.thread_id.clone(),
product_surface: input.product_surface.clone(),
model_slug: input.model_slug.clone(),
completed_at: input.completed_at,
repo_hash: input.repo_hash.clone(),
accepted_added_lines: if is_first_chunk {
input.accepted_added_lines
} else {
0
},
accepted_deleted_lines: if is_first_chunk {
input.accepted_deleted_lines
} else {
0
},
line_fingerprints,
},
},
))
})
.collect()
}
pub async fn accepted_line_repo_hash_for_cwd(cwd: &Path) -> Option<String> {
let remotes = get_git_remote_urls_assume_git_repo(cwd).await?;
remotes
.get("origin")
.or_else(|| remotes.values().next())
.map(|remote_url| {
let canonical_remote_url =
canonicalize_git_remote_url(remote_url).unwrap_or_else(|| remote_url.to_string());
fingerprint_hash("repo", &canonical_remote_url)
})
}
fn normalize_diff_path(path: &str) -> Option<String> {
let path = path.trim();
if path == "/dev/null" {
return None;
}
Some(
path.strip_prefix("b/")
.or_else(|| path.strip_prefix("a/"))
.unwrap_or(path)
.to_string(),
)
}
fn normalize_effective_line(line: &str) -> Option<String> {
let normalized = line.split_whitespace().collect::<Vec<_>>().join(" ");
if normalized.len() <= 3 {
return None;
}
if !normalized
.chars()
.any(|ch| ch.is_alphanumeric() || ch == '_')
{
return None;
}
Some(normalized)
}
fn accepted_line_fingerprint_chunks(
line_fingerprints: Vec<AcceptedLineFingerprint>,
) -> Vec<Vec<AcceptedLineFingerprint>> {
if line_fingerprints.is_empty() {
return vec![Vec::new()];
}
let mut chunks = Vec::new();
let mut current = Vec::new();
let mut current_bytes = ACCEPTED_LINE_FINGERPRINT_EVENT_FIXED_BYTES;
for fingerprint in line_fingerprints {
let item_bytes = accepted_line_fingerprint_json_bytes(&fingerprint);
let separator_bytes = usize::from(!current.is_empty());
if !current.is_empty()
&& current_bytes + separator_bytes + item_bytes
> ACCEPTED_LINE_FINGERPRINT_EVENT_TARGET_BYTES
{
chunks.push(current);
current = Vec::new();
current_bytes = ACCEPTED_LINE_FINGERPRINT_EVENT_FIXED_BYTES;
}
current_bytes += usize::from(!current.is_empty()) + item_bytes;
current.push(fingerprint);
}
if !current.is_empty() {
chunks.push(current);
}
chunks
}
fn accepted_line_fingerprint_json_bytes(fingerprint: &AcceptedLineFingerprint) -> usize {
// {"path_hash":"...","line_hash":"..."} plus one byte of array comma
// accounted for by the caller when needed.
32 + fingerprint.path_hash.len() + fingerprint.line_hash.len()
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn parses_counts_and_effective_added_fingerprints() {
let diff = "\
diff --git a/src/lib.rs b/src/lib.rs
index 1111111..2222222
--- a/src/lib.rs
+++ b/src/lib.rs
@@ -1,3 +1,5 @@
-old line
+fn useful() {
+}
+ return user.id;
context
";
let summary = accepted_line_fingerprints_from_unified_diff(diff);
assert_eq!(
summary,
AcceptedLineFingerprintSummary {
accepted_added_lines: 3,
accepted_deleted_lines: 1,
line_fingerprints: vec![
AcceptedLineFingerprint {
path_hash: fingerprint_hash("path", "src/lib.rs"),
line_hash: fingerprint_hash("line", "fn useful() {"),
},
AcceptedLineFingerprint {
path_hash: fingerprint_hash("path", "src/lib.rs"),
line_hash: fingerprint_hash("line", "return user.id;"),
},
],
}
);
}
#[test]
fn skips_added_file_metadata_headers() {
let diff = "\
diff --git a/new.py b/new.py
new file mode 100644
index 0000000..1111111
--- /dev/null
+++ b/new.py
@@ -0,0 +1 @@
+print('hello')
";
let summary = accepted_line_fingerprints_from_unified_diff(diff);
assert_eq!(summary.accepted_added_lines, 1);
assert_eq!(summary.accepted_deleted_lines, 0);
assert_eq!(summary.line_fingerprints.len(), 1);
}
#[test]
fn parses_hunk_lines_that_look_like_file_headers() {
let diff = "\
diff --git a/src/lib.rs b/src/lib.rs
index 1111111..2222222
--- a/src/lib.rs
+++ b/src/lib.rs
@@ -1,2 +1,2 @@
--- old value
+++ new value
";
let summary = accepted_line_fingerprints_from_unified_diff(diff);
assert_eq!(
summary,
AcceptedLineFingerprintSummary {
accepted_added_lines: 1,
accepted_deleted_lines: 1,
line_fingerprints: vec![AcceptedLineFingerprint {
path_hash: fingerprint_hash("path", "src/lib.rs"),
line_hash: fingerprint_hash("line", "++ new value"),
}],
}
);
}
}
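A rough sketch of the greedy byte-budget chunking implemented above, written in Python for readability; the constants and per-item size estimate mirror the Rust code but are restated here as assumptions, not the actual implementation:

```python
# Illustrative sketch of accepted_line_fingerprint_chunks: greedily pack
# fingerprints so each event stays near the ~2 MiB target payload size.
TARGET_BYTES = 2 * 1024 * 1024
FIXED_BYTES = 1024  # assumed fixed envelope overhead per event

def chunk_fingerprints(fingerprints):
    if not fingerprints:
        return [[]]
    chunks, current, current_bytes = [], [], FIXED_BYTES
    for fp in fingerprints:
        # Rough JSON size of one {"path_hash": ..., "line_hash": ...} entry.
        item_bytes = 32 + len(fp["path_hash"]) + len(fp["line_hash"])
        separator = 1 if current else 0
        if current and current_bytes + separator + item_bytes > TARGET_BYTES:
            chunks.append(current)
            current, current_bytes = [], FIXED_BYTES
        current_bytes += (1 if current else 0) + item_bytes
        current.append(fp)
    if current:
        chunks.append(current)
    return chunks
```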

View File

@@ -1,5 +1,7 @@
use crate::client::AnalyticsEventsQueue;
use crate::events::AppServerRpcTransport;
use crate::events::CodexAcceptedLineFingerprintsEventParams;
use crate::events::CodexAcceptedLineFingerprintsEventRequest;
use crate::events::CodexAppMentionedEventRequest;
use crate::events::CodexAppServerClientMetadata;
use crate::events::CodexAppUsedEventRequest;
@@ -28,6 +30,7 @@ use crate::events::codex_hook_run_metadata;
use crate::events::codex_plugin_metadata;
use crate::events::codex_plugin_used_metadata;
use crate::events::subagent_thread_started_event_request;
use crate::facts::AcceptedLineFingerprint;
use crate::facts::AnalyticsFact;
use crate::facts::AnalyticsJsonRpcError;
use crate::facts::AppInvocation;
@@ -66,15 +69,20 @@ use codex_app_server_protocol::ClientInfo;
use codex_app_server_protocol::ClientRequest;
use codex_app_server_protocol::ClientResponsePayload;
use codex_app_server_protocol::CodexErrorInfo;
use codex_app_server_protocol::CollabAgentTool;
use codex_app_server_protocol::CollabAgentToolCallStatus;
use codex_app_server_protocol::CommandAction;
use codex_app_server_protocol::CommandExecutionSource;
use codex_app_server_protocol::CommandExecutionStatus;
use codex_app_server_protocol::DynamicToolCallStatus;
use codex_app_server_protocol::InitializeCapabilities;
use codex_app_server_protocol::InitializeParams;
use codex_app_server_protocol::ItemCompletedNotification;
use codex_app_server_protocol::ItemStartedNotification;
use codex_app_server_protocol::JSONRPCErrorError;
use codex_app_server_protocol::McpToolCallStatus;
use codex_app_server_protocol::NonSteerableTurnKind;
use codex_app_server_protocol::PatchApplyStatus;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::SandboxPolicy as AppServerSandboxPolicy;
use codex_app_server_protocol::ServerNotification;
@@ -89,6 +97,7 @@ use codex_app_server_protocol::ThreadStartResponse;
use codex_app_server_protocol::ThreadStatus as AppServerThreadStatus;
use codex_app_server_protocol::Turn;
use codex_app_server_protocol::TurnCompletedNotification;
use codex_app_server_protocol::TurnDiffUpdatedNotification;
use codex_app_server_protocol::TurnError as AppServerTurnError;
use codex_app_server_protocol::TurnStartParams;
use codex_app_server_protocol::TurnStartedNotification;
@@ -636,6 +645,7 @@ fn sample_initialize_fact(connection_id: u64) -> AnalyticsFact {
},
capabilities: Some(InitializeCapabilities {
experimental_api: false,
request_attestation: false,
opt_out_notification_methods: None,
}),
},
@@ -827,6 +837,206 @@ fn app_used_event_serializes_expected_shape() {
);
}
#[test]
fn accepted_line_fingerprints_event_serializes_expected_shape() {
let event = TrackEventRequest::AcceptedLineFingerprints(Box::new(
CodexAcceptedLineFingerprintsEventRequest {
event_type: "codex_accepted_line_fingerprints",
event_params: CodexAcceptedLineFingerprintsEventParams {
event_type: "codex.accepted_line_fingerprints",
turn_id: "turn-1".to_string(),
thread_id: "thread-1".to_string(),
product_surface: Some("codex".to_string()),
model_slug: Some("gpt-5.1-codex".to_string()),
completed_at: 1710000000,
repo_hash: Some("repo-hash-1".to_string()),
accepted_added_lines: 42,
accepted_deleted_lines: 40,
line_fingerprints: vec![AcceptedLineFingerprint {
path_hash: "path-hash-1".to_string(),
line_hash: "line-hash-1".to_string(),
}],
},
},
));
let payload = serde_json::to_value(&event).expect("serialize accepted line fingerprints event");
assert_eq!(
payload,
json!({
"event_type": "codex_accepted_line_fingerprints",
"event_params": {
"event_type": "codex.accepted_line_fingerprints",
"turn_id": "turn-1",
"thread_id": "thread-1",
"product_surface": "codex",
"model_slug": "gpt-5.1-codex",
"completed_at": 1710000000,
"repo_hash": "repo-hash-1",
"accepted_added_lines": 42,
"accepted_deleted_lines": 40,
"line_fingerprints": [
{
"path_hash": "path-hash-1",
"line_hash": "line-hash-1"
}
]
}
})
);
}
#[tokio::test]
async fn reducer_chunks_large_accepted_line_fingerprint_events_without_repeating_counts() {
let mut reducer = AnalyticsReducer::default();
let mut events = Vec::new();
ingest_turn_prerequisites(
&mut reducer,
&mut events,
/*include_initialize*/ true,
/*include_resolved_config*/ true,
/*include_started*/ true,
/*include_token_usage*/ true,
)
.await;
events.clear();
let mut diff = "\
diff --git a/src/lib.rs b/src/lib.rs
index 1111111..2222222
--- a/src/lib.rs
+++ b/src/lib.rs
@@ -0,0 +1,20000 @@
"
.to_string();
for index in 0..20_000 {
diff.push_str(&format!("+let value_{index} = {index};\n"));
}
reducer
.ingest(
AnalyticsFact::Notification(Box::new(ServerNotification::TurnDiffUpdated(
TurnDiffUpdatedNotification {
thread_id: "thread-2".to_string(),
turn_id: "turn-2".to_string(),
diff,
},
))),
&mut events,
)
.await;
assert!(events.is_empty());
reducer
.ingest(
AnalyticsFact::Notification(Box::new(sample_turn_completed_notification(
"thread-2",
"turn-2",
AppServerTurnStatus::Completed,
/*codex_error_info*/ None,
))),
&mut events,
)
.await;
let accepted_line_events = events
.iter()
.filter_map(|event| match event {
TrackEventRequest::AcceptedLineFingerprints(event) => Some(event),
_ => None,
})
.collect::<Vec<_>>();
assert!(accepted_line_events.len() > 1);
let mut total_fingerprints = 0;
for (index, event) in accepted_line_events.iter().enumerate() {
assert_eq!(event.event_params.turn_id, "turn-2");
assert_eq!(event.event_params.thread_id, "thread-2");
total_fingerprints += event.event_params.line_fingerprints.len();
if index == 0 {
assert_eq!(event.event_params.accepted_added_lines, 20_000);
assert_eq!(event.event_params.accepted_deleted_lines, 0);
} else {
assert_eq!(event.event_params.accepted_added_lines, 0);
assert_eq!(event.event_params.accepted_deleted_lines, 0);
}
assert!(serde_json::to_vec(event).expect("serialize chunk").len() < 2_100_000);
}
assert_eq!(total_fingerprints, 20_000);
}
#[tokio::test]
async fn reducer_emits_accepted_line_fingerprints_once_from_latest_turn_diff_on_completion() {
let mut reducer = AnalyticsReducer::default();
let mut events = Vec::new();
ingest_turn_prerequisites(
&mut reducer,
&mut events,
/*include_initialize*/ true,
/*include_resolved_config*/ true,
/*include_started*/ true,
/*include_token_usage*/ true,
)
.await;
events.clear();
for line in ["let old_value = 1;", "let latest_value = 2;"] {
let diff = format!(
"\
diff --git a/src/lib.rs b/src/lib.rs
index 1111111..2222222
--- a/src/lib.rs
+++ b/src/lib.rs
@@ -0,0 +1 @@
+{line}
"
);
reducer
.ingest(
AnalyticsFact::Notification(Box::new(ServerNotification::TurnDiffUpdated(
TurnDiffUpdatedNotification {
thread_id: "thread-2".to_string(),
turn_id: "turn-2".to_string(),
diff,
},
))),
&mut events,
)
.await;
}
assert!(events.is_empty());
reducer
.ingest(
AnalyticsFact::Notification(Box::new(sample_turn_completed_notification(
"thread-2",
"turn-2",
AppServerTurnStatus::Completed,
/*codex_error_info*/ None,
))),
&mut events,
)
.await;
let accepted_line_events = events
.iter()
.filter_map(|event| match event {
TrackEventRequest::AcceptedLineFingerprints(event) => Some(event),
_ => None,
})
.collect::<Vec<_>>();
assert_eq!(accepted_line_events.len(), 1);
let event = accepted_line_events[0];
assert_eq!(event.event_params.accepted_added_lines, 1);
assert_eq!(event.event_params.line_fingerprints.len(), 1);
assert_eq!(
event.event_params.line_fingerprints[0].line_hash,
crate::fingerprint_hash("line", "let latest_value = 2;")
);
}
#[test]
fn compaction_event_serializes_expected_shape() {
let event = TrackEventRequest::Compaction(Box::new(CodexCompactionEventRequest {
@@ -1122,6 +1332,7 @@ async fn initialize_caches_client_and_thread_lifecycle_publishes_once_initialize
},
capabilities: Some(InitializeCapabilities {
experimental_api: false,
request_attestation: false,
opt_out_notification_methods: None,
}),
},
@@ -1269,6 +1480,7 @@ async fn compaction_event_ingests_custom_fact() {
},
capabilities: Some(InitializeCapabilities {
experimental_api: false,
request_attestation: false,
opt_out_notification_methods: None,
}),
},
@@ -1382,6 +1594,7 @@ async fn guardian_review_event_ingests_custom_fact_with_optional_target_item() {
},
capabilities: Some(InitializeCapabilities {
experimental_api: false,
request_attestation: false,
opt_out_notification_methods: None,
}),
},
@@ -1501,6 +1714,14 @@ async fn item_lifecycle_notifications_publish_command_execution_event() {
let mut events = Vec::new();
ingest_tool_review_prerequisites(&mut reducer, &mut events).await;
reducer
.ingest(
AnalyticsFact::Notification(Box::new(sample_turn_started_notification(
"thread-1", "turn-1",
))),
&mut events,
)
.await;
reducer
.ingest(
AnalyticsFact::Notification(Box::new(ServerNotification::ItemStarted(
@@ -1911,6 +2132,15 @@ async fn subagent_tool_items_inherit_parent_connection_metadata() {
)
.await;
events.clear();
reducer
.ingest(
AnalyticsFact::Notification(Box::new(sample_turn_started_notification(
"thread-subagent",
"turn-subagent",
))),
&mut events,
)
.await;
reducer
.ingest(
@@ -2785,6 +3015,17 @@ async fn turn_lifecycle_emits_turn_event() {
assert_eq!(payload["event_params"]["num_input_images"], json!(1));
assert_eq!(payload["event_params"]["status"], json!("completed"));
assert_eq!(payload["event_params"]["steer_count"], json!(0));
assert_eq!(payload["event_params"]["total_tool_call_count"], json!(0));
assert_eq!(payload["event_params"]["shell_command_count"], json!(0));
assert_eq!(payload["event_params"]["file_change_count"], json!(0));
assert_eq!(payload["event_params"]["mcp_tool_call_count"], json!(0));
assert_eq!(payload["event_params"]["dynamic_tool_call_count"], json!(0));
assert_eq!(
payload["event_params"]["subagent_tool_call_count"],
json!(0)
);
assert_eq!(payload["event_params"]["web_search_count"], json!(0));
assert_eq!(payload["event_params"]["image_generation_count"], json!(0));
assert_eq!(payload["event_params"]["started_at"], json!(455));
assert_eq!(payload["event_params"]["completed_at"], json!(456));
assert_eq!(payload["event_params"]["duration_ms"], json!(1234));
@@ -2798,6 +3039,158 @@ async fn turn_lifecycle_emits_turn_event() {
assert_eq!(payload["event_params"]["total_tokens"], json!(321));
}
#[tokio::test]
async fn turn_event_counts_completed_tool_items() {
let mut reducer = AnalyticsReducer::default();
let mut out = Vec::new();
ingest_turn_prerequisites(
&mut reducer,
&mut out,
/*include_initialize*/ true,
/*include_resolved_config*/ true,
/*include_started*/ true,
/*include_token_usage*/ false,
)
.await;
let completed_tool_items = vec![
sample_command_execution_item(CommandExecutionStatus::Completed, Some(0), Some(1)),
ThreadItem::FileChange {
id: "file-change-1".to_string(),
changes: Vec::new(),
status: PatchApplyStatus::Completed,
},
ThreadItem::McpToolCall {
id: "mcp-1".to_string(),
server: "server".to_string(),
tool: "search".to_string(),
status: McpToolCallStatus::Completed,
arguments: json!({}),
mcp_app_resource_uri: None,
result: None,
error: None,
duration_ms: Some(2),
},
ThreadItem::DynamicToolCall {
id: "dynamic-1".to_string(),
namespace: None,
tool: "render".to_string(),
arguments: json!({}),
status: DynamicToolCallStatus::Completed,
content_items: None,
success: Some(true),
duration_ms: Some(3),
},
ThreadItem::CollabAgentToolCall {
id: "collab-1".to_string(),
tool: CollabAgentTool::SpawnAgent,
status: CollabAgentToolCallStatus::Completed,
sender_thread_id: "thread-2".to_string(),
receiver_thread_ids: vec!["thread-child".to_string()],
prompt: Some("help".to_string()),
model: Some("gpt-5".to_string()),
reasoning_effort: None,
agents_states: Default::default(),
},
ThreadItem::WebSearch {
id: "web-1".to_string(),
query: "codex".to_string(),
action: None,
},
ThreadItem::ImageGeneration {
id: "image-1".to_string(),
status: "completed".to_string(),
revised_prompt: None,
result: "ok".to_string(),
saved_path: None,
},
];
for item in completed_tool_items {
reducer
.ingest(
AnalyticsFact::Notification(Box::new(ServerNotification::ItemCompleted(
ItemCompletedNotification {
thread_id: "thread-2".to_string(),
turn_id: "turn-2".to_string(),
completed_at_ms: 1_000,
item,
},
))),
&mut out,
)
.await;
}
reducer
.ingest(
AnalyticsFact::Notification(Box::new(sample_turn_completed_notification(
"thread-2",
"turn-2",
AppServerTurnStatus::Completed,
/*codex_error_info*/ None,
))),
&mut out,
)
.await;
let turn_event = out
.iter()
.find(|event| matches!(event, TrackEventRequest::TurnEvent(_)))
.expect("turn event should be emitted");
let payload = serde_json::to_value(turn_event).expect("serialize turn event");
assert_eq!(payload["event_params"]["total_tool_call_count"], json!(7));
assert_eq!(payload["event_params"]["shell_command_count"], json!(1));
assert_eq!(payload["event_params"]["file_change_count"], json!(1));
assert_eq!(payload["event_params"]["mcp_tool_call_count"], json!(1));
assert_eq!(payload["event_params"]["dynamic_tool_call_count"], json!(1));
assert_eq!(
payload["event_params"]["subagent_tool_call_count"],
json!(1)
);
assert_eq!(payload["event_params"]["web_search_count"], json!(1));
assert_eq!(payload["event_params"]["image_generation_count"], json!(1));
}
#[tokio::test]
async fn item_completed_without_turn_state_does_not_create_turn_state() {
let mut reducer = AnalyticsReducer::default();
let mut out = Vec::new();
reducer
.ingest(
AnalyticsFact::Notification(Box::new(ServerNotification::ItemCompleted(
ItemCompletedNotification {
thread_id: "thread-2".to_string(),
turn_id: "turn-2".to_string(),
completed_at_ms: 1_000,
item: sample_command_execution_item(
CommandExecutionStatus::Completed,
Some(0),
Some(1),
),
},
))),
&mut out,
)
.await;
reducer
.ingest(
AnalyticsFact::Notification(Box::new(sample_turn_completed_notification(
"thread-2",
"turn-2",
AppServerTurnStatus::Completed,
/*codex_error_info*/ None,
))),
&mut out,
)
.await;
assert!(out.is_empty());
}
#[tokio::test]
async fn accepted_steers_increment_turn_steer_count() {
let mut reducer = AnalyticsReducer::default();

View File

@@ -30,6 +30,7 @@ use codex_app_server_protocol::ServerNotification;
use codex_app_server_protocol::ServerRequest;
use codex_app_server_protocol::ServerResponse;
use codex_login::AuthManager;
use codex_login::CodexAuth;
use codex_login::default_client::create_client;
use codex_plugin::PluginTelemetryMetadata;
use std::collections::HashSet;
@@ -352,6 +353,7 @@ impl AnalyticsEventsClient {
notification,
ServerNotification::TurnStarted(_)
| ServerNotification::TurnCompleted(_)
| ServerNotification::TurnDiffUpdated(_)
| ServerNotification::ItemStarted(_)
| ServerNotification::ItemCompleted(_)
| ServerNotification::ItemGuardianApprovalReviewStarted(_)
@@ -371,6 +373,7 @@ async fn send_track_events(
if events.is_empty() {
return;
}
let Some(auth) = auth_manager.auth().await else {
return;
};
@@ -380,12 +383,45 @@ async fn send_track_events(
let base_url = base_url.trim_end_matches('/');
let url = format!("{base_url}/codex/analytics-events/events");
for events in track_event_request_batches(events) {
send_track_events_request(&auth, &url, events).await;
}
}
fn track_event_request_batches(events: Vec<TrackEventRequest>) -> Vec<Vec<TrackEventRequest>> {
let mut batches = Vec::new();
let mut current_batch = Vec::new();
for event in events {
if event.should_send_in_isolated_request() {
if !current_batch.is_empty() {
batches.push(current_batch);
current_batch = Vec::new();
}
batches.push(vec![event]);
} else {
current_batch.push(event);
}
}
if !current_batch.is_empty() {
batches.push(current_batch);
}
batches
}
async fn send_track_events_request(auth: &CodexAuth, url: &str, events: Vec<TrackEventRequest>) {
if events.is_empty() {
return;
}
let payload = TrackEventsRequest { events };
let response = create_client()
.post(&url)
.post(url)
.timeout(ANALYTICS_EVENTS_TIMEOUT)
.headers(codex_model_provider::auth_provider_from_auth(&auth).to_auth_headers())
.headers(codex_model_provider::auth_provider_from_auth(auth).to_auth_headers())
.header("Content-Type", "application/json")
.json(&payload)
.send()

View File

@@ -1,6 +1,14 @@
use super::AnalyticsEventsClient;
use super::AnalyticsEventsQueue;
use super::track_event_request_batches;
use crate::events::CodexAcceptedLineFingerprintsEventParams;
use crate::events::CodexAcceptedLineFingerprintsEventRequest;
use crate::events::SkillInvocationEventParams;
use crate::events::SkillInvocationEventRequest;
use crate::events::TrackEventRequest;
use crate::facts::AcceptedLineFingerprint;
use crate::facts::AnalyticsFact;
use crate::facts::InvocationType;
use codex_app_server_protocol::ApprovalsReviewer as AppServerApprovalsReviewer;
use codex_app_server_protocol::AskForApproval as AppServerAskForApproval;
use codex_app_server_protocol::ClientRequest;
@@ -31,6 +39,47 @@ use std::sync::Mutex;
use tokio::sync::mpsc;
use tokio::sync::mpsc::error::TryRecvError;
fn sample_accepted_line_fingerprint_event(thread_id: &str) -> TrackEventRequest {
TrackEventRequest::AcceptedLineFingerprints(Box::new(
CodexAcceptedLineFingerprintsEventRequest {
event_type: "codex_accepted_line_fingerprints",
event_params: CodexAcceptedLineFingerprintsEventParams {
event_type: "codex.accepted_line_fingerprints",
turn_id: "turn-1".to_string(),
thread_id: thread_id.to_string(),
product_surface: Some("codex".to_string()),
model_slug: Some("gpt-5.1-codex".to_string()),
completed_at: 1,
repo_hash: None,
accepted_added_lines: 1,
accepted_deleted_lines: 0,
line_fingerprints: vec![AcceptedLineFingerprint {
path_hash: "path-hash".to_string(),
line_hash: "line-hash".to_string(),
}],
},
},
))
}
fn sample_regular_track_event(thread_id: &str) -> TrackEventRequest {
TrackEventRequest::SkillInvocation(SkillInvocationEventRequest {
event_type: "skill_invocation",
skill_id: format!("skill-{thread_id}"),
skill_name: "doc".to_string(),
event_params: SkillInvocationEventParams {
product_client_id: None,
skill_scope: None,
plugin_id: None,
repo_url: None,
thread_id: Some(thread_id.to_string()),
turn_id: Some("turn-1".to_string()),
invoke_type: Some(InvocationType::Explicit),
model_slug: Some("gpt-5.1-codex".to_string()),
},
})
}
fn client_with_receiver() -> (AnalyticsEventsClient, mpsc::Receiver<AnalyticsFact>) {
let (sender, receiver) = mpsc::channel(8);
let queue = AnalyticsEventsQueue {
@@ -222,3 +271,23 @@ fn track_response_only_enqueues_analytics_relevant_responses() {
);
assert!(matches!(receiver.try_recv(), Err(TryRecvError::Empty)));
}
#[test]
fn track_event_request_batches_only_isolates_accepted_line_fingerprint_events() {
let batches = track_event_request_batches(vec![
sample_regular_track_event("thread-1"),
sample_regular_track_event("thread-2"),
sample_accepted_line_fingerprint_event("thread-3"),
sample_accepted_line_fingerprint_event("thread-4"),
sample_regular_track_event("thread-5"),
sample_regular_track_event("thread-6"),
]);
assert_eq!(batches.len(), 4);
assert_eq!(batches[0].len(), 2);
assert_eq!(batches[1].len(), 1);
assert_eq!(batches[2].len(), 1);
assert_eq!(batches[3].len(), 2);
assert!(batches[1][0].should_send_in_isolated_request());
assert!(batches[2][0].should_send_in_isolated_request());
}

View File

@@ -1,5 +1,6 @@
use std::time::Instant;
use crate::facts::AcceptedLineFingerprint;
use crate::facts::AppInvocation;
use crate::facts::CodexCompactionEvent;
use crate::facts::CompactionImplementation;
@@ -71,6 +72,7 @@ pub(crate) enum TrackEventRequest {
CollabAgentToolCall(CodexCollabAgentToolCallEventRequest),
WebSearch(CodexWebSearchEventRequest),
ImageGeneration(CodexImageGenerationEventRequest),
AcceptedLineFingerprints(Box<CodexAcceptedLineFingerprintsEventRequest>),
#[allow(dead_code)]
ReviewEvent(CodexReviewEventRequest),
PluginUsed(CodexPluginUsedEventRequest),
@@ -80,6 +82,32 @@ pub(crate) enum TrackEventRequest {
PluginDisabled(CodexPluginEventRequest),
}
impl TrackEventRequest {
pub(crate) fn should_send_in_isolated_request(&self) -> bool {
matches!(self, Self::AcceptedLineFingerprints(_))
}
}
#[derive(Serialize)]
pub(crate) struct CodexAcceptedLineFingerprintsEventParams {
pub(crate) event_type: &'static str,
pub(crate) turn_id: String,
pub(crate) thread_id: String,
pub(crate) product_surface: Option<String>,
pub(crate) model_slug: Option<String>,
pub(crate) completed_at: u64,
pub(crate) repo_hash: Option<String>,
pub(crate) accepted_added_lines: u64,
pub(crate) accepted_deleted_lines: u64,
pub(crate) line_fingerprints: Vec<AcceptedLineFingerprint>,
}
#[derive(Serialize)]
pub(crate) struct CodexAcceptedLineFingerprintsEventRequest {
pub(crate) event_type: &'static str,
pub(crate) event_params: CodexAcceptedLineFingerprintsEventParams,
}
#[derive(Serialize)]
pub(crate) struct SkillInvocationEventRequest {
pub(crate) event_type: &'static str,
@@ -766,8 +794,6 @@ pub(crate) struct CodexTurnEventParams {
pub(crate) status: Option<TurnStatus>,
pub(crate) turn_error: Option<CodexErrorInfo>,
pub(crate) steer_count: Option<usize>,
// TODO(rhan-oai): Populate these once tool-call accounting is emitted from
// core; the schema is reserved but these fields are currently always None.
pub(crate) total_tool_call_count: Option<usize>,
pub(crate) shell_command_count: Option<usize>,
pub(crate) file_change_count: Option<usize>,

View File

@@ -28,6 +28,12 @@ use codex_protocol::protocol::TokenUsage;
use serde::Serialize;
use std::path::PathBuf;
#[derive(Clone, Debug, PartialEq, Eq, Serialize)]
pub struct AcceptedLineFingerprint {
pub path_hash: String,
pub line_hash: String,
}
#[derive(Clone)]
pub struct TrackEventsContext {
pub model_slug: String,

View File

@@ -1,3 +1,4 @@
mod accepted_lines;
mod client;
mod events;
mod facts;
@@ -6,6 +7,8 @@ mod reducer;
use std::time::SystemTime;
use std::time::UNIX_EPOCH;
pub use accepted_lines::accepted_line_fingerprints_from_unified_diff;
pub use accepted_lines::fingerprint_hash;
pub use client::AnalyticsEventsClient;
pub use events::AppServerRpcTransport;
pub use events::GuardianApprovalRequestSource;
@@ -17,6 +20,7 @@ pub use events::GuardianReviewSessionKind;
pub use events::GuardianReviewTerminalStatus;
pub use events::GuardianReviewTrackContext;
pub use events::GuardianReviewedAction;
pub use facts::AcceptedLineFingerprint;
pub use facts::AnalyticsJsonRpcError;
pub use facts::AppInvocation;
pub use facts::CodexCompactionEvent;

View File

@@ -1,3 +1,7 @@
use crate::accepted_lines::AcceptedLineFingerprintEventInput;
use crate::accepted_lines::accepted_line_fingerprint_event_requests;
use crate::accepted_lines::accepted_line_fingerprints_from_unified_diff;
use crate::accepted_lines::accepted_line_repo_hash_for_cwd;
use crate::events::AppServerRpcTransport;
use crate::events::CodexAppMentionedEventRequest;
use crate::events::CodexAppServerClientMetadata;
@@ -104,6 +108,7 @@ use codex_protocol::protocol::TokenUsage;
use sha1::Digest;
use std::collections::HashMap;
use std::path::Path;
use std::path::PathBuf;
#[derive(Default)]
pub(crate) struct AnalyticsReducer {
@@ -264,7 +269,9 @@ struct TurnState {
started_at: Option<u64>,
token_usage: Option<TokenUsage>,
completed: Option<CompletedTurnState>,
latest_diff: Option<String>,
steer_count: usize,
tool_counts: TurnToolCounts,
}
#[derive(Hash, Eq, PartialEq)]
@@ -274,6 +281,42 @@ struct ToolItemKey {
item_id: String,
}
#[derive(Default)]
struct TurnToolCounts {
total: usize,
shell_command: usize,
file_change: usize,
mcp_tool_call: usize,
dynamic_tool_call: usize,
subagent_tool_call: usize,
web_search: usize,
image_generation: usize,
}
impl TurnToolCounts {
fn record(&mut self, item: &ThreadItem) {
match item {
ThreadItem::CommandExecution { .. } => self.shell_command += 1,
ThreadItem::FileChange { .. } => self.file_change += 1,
ThreadItem::McpToolCall { .. } => self.mcp_tool_call += 1,
ThreadItem::DynamicToolCall { .. } => self.dynamic_tool_call += 1,
ThreadItem::CollabAgentToolCall { .. } => self.subagent_tool_call += 1,
ThreadItem::WebSearch { .. } => self.web_search += 1,
ThreadItem::ImageGeneration { .. } => self.image_generation += 1,
ThreadItem::UserMessage { .. }
| ThreadItem::HookPrompt { .. }
| ThreadItem::AgentMessage { .. }
| ThreadItem::Plan { .. }
| ThreadItem::Reasoning { .. }
| ThreadItem::ImageView { .. }
| ThreadItem::EnteredReviewMode { .. }
| ThreadItem::ExitedReviewMode { .. }
| ThreadItem::ContextCompaction { .. } => return,
}
self.total += 1;
}
}
impl AnalyticsReducer {
pub(crate) async fn ingest(&mut self, input: AnalyticsFact, out: &mut Vec<TrackEventRequest>) {
match input {
@@ -305,7 +348,7 @@ impl AnalyticsReducer {
response,
} => {
if let Some(response) = response.into_client_response(request_id) {
self.ingest_response(connection_id, response, out);
self.ingest_response(connection_id, response, out).await;
}
}
AnalyticsFact::ErrorResponse {
@@ -317,7 +360,7 @@ impl AnalyticsReducer {
self.ingest_error_response(connection_id, request_id, error_type, out);
}
AnalyticsFact::Notification(notification) => {
self.ingest_notification(*notification, out);
self.ingest_notification(*notification, out).await;
}
AnalyticsFact::ServerRequest {
connection_id: _connection_id,
@@ -338,10 +381,10 @@ impl AnalyticsReducer {
self.ingest_guardian_review(*input, out);
}
CustomAnalyticsFact::TurnResolvedConfig(input) => {
self.ingest_turn_resolved_config(*input, out);
self.ingest_turn_resolved_config(*input, out).await;
}
CustomAnalyticsFact::TurnTokenUsage(input) => {
self.ingest_turn_token_usage(*input, out);
self.ingest_turn_token_usage(*input, out).await;
}
CustomAnalyticsFact::SkillInvoked(input) => {
self.ingest_skill_invoked(input, out).await;
@@ -473,7 +516,7 @@ impl AnalyticsReducer {
}
}
fn ingest_turn_resolved_config(
async fn ingest_turn_resolved_config(
&mut self,
input: TurnResolvedConfigFact,
out: &mut Vec<TrackEventRequest>,
@@ -489,15 +532,17 @@ impl AnalyticsReducer {
started_at: None,
token_usage: None,
completed: None,
latest_diff: None,
steer_count: 0,
tool_counts: TurnToolCounts::default(),
});
turn_state.thread_id = Some(thread_id);
turn_state.num_input_images = Some(num_input_images);
turn_state.resolved_config = Some(input);
self.maybe_emit_turn_event(&turn_id, out);
self.maybe_emit_turn_event(&turn_id, out).await;
}
fn ingest_turn_token_usage(
async fn ingest_turn_token_usage(
&mut self,
input: TurnTokenUsageFact,
out: &mut Vec<TrackEventRequest>,
@@ -511,11 +556,13 @@ impl AnalyticsReducer {
started_at: None,
token_usage: None,
completed: None,
latest_diff: None,
steer_count: 0,
tool_counts: TurnToolCounts::default(),
});
turn_state.thread_id = Some(input.thread_id);
turn_state.token_usage = Some(input.token_usage);
self.maybe_emit_turn_event(&turn_id, out);
self.maybe_emit_turn_event(&turn_id, out).await;
}
async fn ingest_skill_invoked(
@@ -622,7 +669,7 @@ impl AnalyticsReducer {
});
}
fn ingest_response(
async fn ingest_response(
&mut self,
connection_id: u64,
response: ClientResponse,
@@ -674,12 +721,14 @@ impl AnalyticsReducer {
started_at: None,
token_usage: None,
completed: None,
latest_diff: None,
steer_count: 0,
tool_counts: TurnToolCounts::default(),
});
turn_state.connection_id = Some(connection_id);
turn_state.thread_id = Some(pending_request.thread_id);
turn_state.num_input_images = Some(pending_request.num_input_images);
self.maybe_emit_turn_event(&turn_id, out);
self.maybe_emit_turn_event(&turn_id, out).await;
}
ClientResponse::TurnSteer {
request_id,
@@ -741,7 +790,7 @@ impl AnalyticsReducer {
);
}
fn ingest_notification(
async fn ingest_notification(
&mut self,
notification: ServerNotification,
out: &mut Vec<TrackEventRequest>,
@@ -768,6 +817,16 @@ impl AnalyticsReducer {
let Some(item_id) = tracked_tool_item_id(&notification.item) else {
return;
};
let Some(turn_state) = self.turns.get_mut(&notification.turn_id) else {
tracing::warn!(
thread_id = %notification.thread_id,
turn_id = %notification.turn_id,
item_id,
"dropping turn tool count update: missing turn state"
);
return;
};
turn_state.tool_counts.record(&notification.item);
let key = ToolItemKey {
thread_id: notification.thread_id.clone(),
turn_id: notification.turn_id.clone(),
@@ -812,13 +871,34 @@ impl AnalyticsReducer {
started_at: None,
token_usage: None,
completed: None,
latest_diff: None,
steer_count: 0,
tool_counts: TurnToolCounts::default(),
});
turn_state.started_at = notification
.turn
.started_at
.and_then(|started_at| u64::try_from(started_at).ok());
}
ServerNotification::TurnDiffUpdated(notification) => {
let turn_state =
self.turns
.entry(notification.turn_id.clone())
.or_insert(TurnState {
connection_id: None,
thread_id: None,
num_input_images: None,
resolved_config: None,
started_at: None,
token_usage: None,
completed: None,
latest_diff: None,
steer_count: 0,
tool_counts: TurnToolCounts::default(),
});
turn_state.thread_id = Some(notification.thread_id);
turn_state.latest_diff = Some(notification.diff);
}
ServerNotification::TurnCompleted(notification) => {
let turn_state =
self.turns
@@ -831,7 +911,9 @@ impl AnalyticsReducer {
started_at: None,
token_usage: None,
completed: None,
latest_diff: None,
steer_count: 0,
tool_counts: TurnToolCounts::default(),
});
turn_state.completed = Some(CompletedTurnState {
status: analytics_turn_status(notification.turn.status),
@@ -850,7 +932,7 @@ impl AnalyticsReducer {
.and_then(|duration_ms| u64::try_from(duration_ms).ok()),
});
let turn_id = notification.turn.id;
self.maybe_emit_turn_event(&turn_id, out);
self.maybe_emit_turn_event(&turn_id, out).await;
}
_ => {}
}
@@ -986,7 +1068,7 @@ impl AnalyticsReducer {
}));
}
fn maybe_emit_turn_event(&mut self, turn_id: &str, out: &mut Vec<TrackEventRequest>) {
async fn maybe_emit_turn_event(&mut self, turn_id: &str, out: &mut Vec<TrackEventRequest>) {
let Some(turn_state) = self.turns.get(turn_id) else {
return;
};
@@ -1019,18 +1101,23 @@ impl AnalyticsReducer {
warn_missing_analytics_context(&drop_site, MissingAnalyticsContext::ThreadMetadata);
return;
};
out.push(TrackEventRequest::TurnEvent(Box::new(
CodexTurnEventRequest {
event_type: "codex_turn_event",
event_params: codex_turn_event_params(
connection_state.app_server_client.clone(),
connection_state.runtime.clone(),
turn_id.to_string(),
turn_state,
thread_metadata,
),
},
)));
let turn_event = TrackEventRequest::TurnEvent(Box::new(CodexTurnEventRequest {
event_type: "codex_turn_event",
event_params: codex_turn_event_params(
connection_state.app_server_client.clone(),
connection_state.runtime.clone(),
turn_id.to_string(),
turn_state,
thread_metadata,
),
}));
let accepted_line_event = accepted_line_event_input(turn_id, turn_state);
out.push(turn_event);
if let Some((mut input, cwd)) = accepted_line_event {
input.repo_hash = accepted_line_repo_hash_for_cwd(cwd.as_path()).await;
out.extend(accepted_line_fingerprint_event_requests(input));
}
self.turns.remove(turn_id);
}
@@ -1642,6 +1729,36 @@ fn web_search_query_count(query: &str, action: Option<&WebSearchAction>) -> Opti
}
}
fn accepted_line_event_input(
turn_id: &str,
turn_state: &TurnState,
) -> Option<(AcceptedLineFingerprintEventInput, PathBuf)> {
let latest_diff = turn_state.latest_diff.as_deref()?;
let summary = accepted_line_fingerprints_from_unified_diff(latest_diff);
if summary.accepted_added_lines == 0 && summary.accepted_deleted_lines == 0 {
return None;
}
let thread_id = turn_state.thread_id.clone()?;
let resolved_config = turn_state.resolved_config.clone()?;
Some((
AcceptedLineFingerprintEventInput {
event_type: "codex.accepted_line_fingerprints",
turn_id: turn_id.to_string(),
thread_id,
product_surface: Some("codex".to_string()),
model_slug: Some(resolved_config.model.clone()),
completed_at: now_unix_seconds(),
repo_hash: None,
accepted_added_lines: summary.accepted_added_lines,
accepted_deleted_lines: summary.accepted_deleted_lines,
line_fingerprints: summary.line_fingerprints,
},
resolved_config.permission_profile_cwd,
))
}
fn codex_turn_event_params(
app_server_client: CodexAppServerClientMetadata,
runtime: CodexRuntimeMetadata,
@@ -1712,14 +1829,14 @@ fn codex_turn_event_params(
status: completed.status,
turn_error: completed.turn_error,
steer_count: Some(turn_state.steer_count),
total_tool_call_count: None,
shell_command_count: None,
file_change_count: None,
mcp_tool_call_count: None,
dynamic_tool_call_count: None,
subagent_tool_call_count: None,
web_search_count: None,
image_generation_count: None,
total_tool_call_count: Some(turn_state.tool_counts.total),
shell_command_count: Some(turn_state.tool_counts.shell_command),
file_change_count: Some(turn_state.tool_counts.file_change),
mcp_tool_call_count: Some(turn_state.tool_counts.mcp_tool_call),
dynamic_tool_call_count: Some(turn_state.tool_counts.dynamic_tool_call),
subagent_tool_call_count: Some(turn_state.tool_counts.subagent_tool_call),
web_search_count: Some(turn_state.tool_counts.web_search),
image_generation_count: Some(turn_state.tool_counts.image_generation),
input_tokens: token_usage
.as_ref()
.map(|token_usage| token_usage.input_tokens),

View File

@@ -49,7 +49,6 @@ use codex_config::RemoteThreadConfigLoader;
use codex_config::ThreadConfigLoader;
use codex_core::config::Config;
pub use codex_exec_server::EnvironmentManager;
pub use codex_exec_server::EnvironmentManagerArgs;
pub use codex_exec_server::ExecServerRuntimePaths;
use codex_feedback::CodexFeedback;
use codex_protocol::protocol::SessionSource;
@@ -375,6 +374,7 @@ impl InProcessClientStartArgs {
pub fn initialize_params(&self) -> InitializeParams {
let capabilities = InitializeCapabilities {
experimental_api: self.experimental_api,
request_attestation: false,
opt_out_notification_methods: if self.opt_out_notification_methods.is_empty() {
None
} else {

View File

@@ -73,6 +73,7 @@ impl RemoteAppServerConnectArgs {
fn initialize_params(&self) -> InitializeParams {
let capabilities = InitializeCapabilities {
experimental_api: self.experimental_api,
request_attestation: false,
opt_out_notification_methods: if self.opt_out_notification_methods.is_empty() {
None
} else {

View File

@@ -0,0 +1,6 @@
load("//:defs.bzl", "codex_rust_crate")
codex_rust_crate(
name = "app-server-daemon",
crate_name = "codex_app_server_daemon",
)

View File

@@ -0,0 +1,39 @@
[package]
name = "codex-app-server-daemon"
version.workspace = true
edition.workspace = true
license.workspace = true
[lib]
name = "codex_app_server_daemon"
path = "src/lib.rs"
doctest = false
[lints]
workspace = true
[dependencies]
anyhow = { workspace = true }
codex-app-server-protocol = { workspace = true }
codex-app-server-transport = { workspace = true }
codex-core = { workspace = true }
codex-uds = { workspace = true }
futures = { workspace = true }
libc = { workspace = true }
reqwest = { workspace = true, features = ["rustls-tls"] }
serde = { workspace = true, features = ["derive"] }
serde_json = { workspace = true }
tokio = { workspace = true, features = [
"fs",
"io-util",
"macros",
"process",
"rt-multi-thread",
"signal",
"time",
] }
tokio-tungstenite = { workspace = true }
[dev-dependencies]
pretty_assertions = { workspace = true }
tempfile = { workspace = true }

View File

@@ -0,0 +1,104 @@
# codex-app-server-daemon
> `codex-app-server-daemon` is experimental and its lifecycle contract may
> change while the remote-management flow is still being developed.
`codex-app-server-daemon` backs the machine-readable `codex app-server`
lifecycle commands used by remote clients such as the desktop and mobile apps.
It is intended for Codex instances launched over SSH, including fresh developer
machines that should expose app-server with `remote_control` enabled.
## Platform support
The current daemon implementation is Unix-only. It uses pidfile-backed
daemonization plus Unix process and file-locking primitives, and does not yet
support Windows lifecycle management.
## Commands
```sh
codex app-server daemon start
codex app-server daemon restart
codex app-server daemon enable-remote-control
codex app-server daemon disable-remote-control
codex app-server daemon stop
codex app-server daemon version
codex app-server daemon bootstrap --remote-control
```
On success, every command writes exactly one JSON object to stdout. Consumers
should parse that JSON rather than relying on human-readable text. Lifecycle
responses report the resolved backend, socket path, local CLI version, and
running app-server version when applicable.
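For illustration, a successful `start` response has the shape below. The field
names follow this crate's `LifecycleOutput`/`LifecycleStatus` serialization; the
values, including the socket path and versions, are representative rather than
captured output:
```json
{
  "status": "started",
  "backend": "pid",
  "pid": 43210,
  "socketPath": "/home/user/.codex/app-server.sock",
  "cliVersion": "0.0.0",
  "appServerVersion": "0.0.0"
}
```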
## Bootstrap flow
For a new remote machine:
```sh
curl -fsSL https://chatgpt.com/codex/install.sh | sh
$HOME/.codex/packages/standalone/current/codex app-server daemon bootstrap --remote-control
```
`bootstrap` requires the standalone managed install. It records the daemon
settings under `CODEX_HOME/app-server-daemon/`, starts app-server as a
pidfile-backed detached process, and launches a detached updater loop.
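A successful bootstrap likewise prints a single JSON object. The sketch below
follows this crate's `BootstrapOutput` serialization; all values are
illustrative:
```json
{
  "status": "bootstrapped",
  "backend": "pid",
  "autoUpdateEnabled": true,
  "remoteControlEnabled": true,
  "managedCodexPath": "/home/user/.codex/packages/standalone/current/codex",
  "socketPath": "/home/user/.codex/app-server.sock",
  "cliVersion": "0.0.0",
  "appServerVersion": "0.0.0"
}
```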
## Installation and update cases
The daemon assumes Codex is installed through `install.sh` and always launches
the standalone managed binary under `CODEX_HOME`.
| Situation | What starts | Does this daemon fetch new binaries? | Does a running app-server eventually move to a newer binary on its own? |
| --- | --- | --- | --- |
| `install.sh` has run, but only `start` is used | `start` uses `CODEX_HOME/packages/standalone/current/codex` | No | No. The managed path is used when starting or restarting, but no updater is installed. |
| `install.sh` has run, then `bootstrap` is used | The pidfile backend uses `CODEX_HOME/packages/standalone/current/codex` | Yes. Bootstrap launches a detached updater loop that runs `install.sh` hourly. | Yes, while that updater process is alive. After a successful fetch, it restarts a currently running app-server only when the managed binary reports a different version. |
| Some other tool updates the managed binary path | The next fresh start or restart uses the updated file at that path | No | Not automatically. The existing process keeps the old executable image until an explicit `restart`. |
### Standalone installs
For installs created by `install.sh`:
- lifecycle commands always use the standalone managed binary path
- `bootstrap` is supported
- `bootstrap` starts a detached pid-backed updater loop that fetches via
`install.sh`, then restarts app-server if it is running on a different version
- the updater loop is not reboot-persistent; it must be started again by
rerunning `bootstrap` after a reboot
### Out-of-band updates
This daemon does not watch arbitrary executable files for replacement. If some
other tool updates a binary that the daemon would use on its next launch:
- a currently running app-server remains on the old executable image
- `restart` will launch the updated binary
- for bootstrapped daemons, the detached updater loop only reacts to updates it
fetched itself via `install.sh`; it does not watch arbitrary file replacement
## Lifecycle semantics
`start` is idempotent and returns after app-server is ready to answer the normal
JSON-RPC initialize handshake on the Unix control socket.
`restart` stops any managed daemon and starts it again.
`enable-remote-control` and `disable-remote-control` persist the launch setting
for future starts. If a managed app-server is already running, they restart it
so the new setting takes effect immediately.
`stop` sends SIGTERM first, then escalates to SIGKILL after the grace window if
the process is still alive.
All mutating lifecycle commands are serialized per `CODEX_HOME`, so a concurrent
`start`, `restart`, `enable-remote-control`, `disable-remote-control`, `stop`,
or `bootstrap` does not race another in-flight lifecycle operation.
## State
The daemon stores its local state under `CODEX_HOME/app-server-daemon/`:
- `settings.json` for persisted launch settings
- `app-server.pid` for the app-server process record
- `app-server-updater.pid` for the pid-backed standalone updater loop
- `daemon.lock` for daemon-wide lifecycle serialization
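## Consuming lifecycle output
A minimal consumer sketch, assuming `serde` and `serde_json` are available. The
struct below is a local mirror of the camelCase fields shown above, not a type
exported by this crate:
```rust
use serde::Deserialize;
use std::path::PathBuf;

// Local mirror of the lifecycle JSON. Fields the daemon omits when they are
// not applicable are modeled as Option so partial responses still parse.
#[derive(Debug, Deserialize)]
#[serde(rename_all = "camelCase")]
struct LifecycleReply {
    status: String,
    backend: Option<String>,
    pid: Option<u32>,
    socket_path: PathBuf,
    cli_version: Option<String>,
    app_server_version: Option<String>,
}

// Each lifecycle command writes exactly one JSON object to stdout,
// so parsing the captured stdout directly is sufficient.
fn parse_lifecycle_reply(stdout: &str) -> serde_json::Result<LifecycleReply> {
    serde_json::from_str(stdout)
}
```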

View File

@@ -0,0 +1,33 @@
mod pid;
use std::path::PathBuf;
use serde::Serialize;
pub(crate) use pid::PidBackend;
#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize)]
#[serde(rename_all = "camelCase")]
pub enum BackendKind {
Pid,
}
#[derive(Debug, Clone)]
pub(crate) struct BackendPaths {
pub(crate) codex_bin: PathBuf,
pub(crate) pid_file: PathBuf,
pub(crate) update_pid_file: PathBuf,
pub(crate) remote_control_enabled: bool,
}
pub(crate) fn pid_backend(paths: BackendPaths) -> PidBackend {
PidBackend::new(
paths.codex_bin,
paths.pid_file,
paths.remote_control_enabled,
)
}
pub(crate) fn pid_update_loop_backend(paths: BackendPaths) -> PidBackend {
PidBackend::new_update_loop(paths.codex_bin, paths.update_pid_file)
}

View File

@@ -0,0 +1,600 @@
use std::path::Path;
use std::path::PathBuf;
#[cfg(unix)]
use std::process::Stdio;
use std::time::Duration;
use anyhow::Context;
use anyhow::Result;
use anyhow::bail;
use serde::Deserialize;
use serde::Serialize;
use tokio::fs;
#[cfg(unix)]
use tokio::process::Command;
use tokio::time::sleep;
const STOP_POLL_INTERVAL: Duration = Duration::from_millis(50);
const STOP_GRACE_PERIOD: Duration = Duration::from_secs(60);
const STOP_TIMEOUT: Duration = Duration::from_secs(70);
const START_TIMEOUT: Duration = Duration::from_secs(10);
#[derive(Debug)]
#[cfg_attr(not(unix), allow(dead_code))]
pub(crate) struct PidBackend {
codex_bin: PathBuf,
pid_file: PathBuf,
lock_file: PathBuf,
command_kind: PidCommandKind,
}
#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
struct PidRecord {
pid: u32,
process_start_time: String,
}
#[derive(Debug, Clone, PartialEq, Eq)]
enum PidFileState {
Missing,
Starting,
Running(PidRecord),
}
#[derive(Debug, Clone, Copy)]
#[cfg_attr(not(unix), allow(dead_code))]
enum PidCommandKind {
AppServer { remote_control_enabled: bool },
UpdateLoop,
}
impl PidBackend {
pub(crate) fn new(codex_bin: PathBuf, pid_file: PathBuf, remote_control_enabled: bool) -> Self {
let lock_file = pid_file.with_extension("pid.lock");
Self {
codex_bin,
pid_file,
lock_file,
command_kind: PidCommandKind::AppServer {
remote_control_enabled,
},
}
}
pub(crate) fn new_update_loop(codex_bin: PathBuf, pid_file: PathBuf) -> Self {
let lock_file = pid_file.with_extension("pid.lock");
Self {
codex_bin,
pid_file,
lock_file,
command_kind: PidCommandKind::UpdateLoop,
}
}
pub(crate) async fn is_starting_or_running(&self) -> Result<bool> {
loop {
match self.read_pid_file_state().await? {
PidFileState::Missing => return Ok(false),
PidFileState::Starting => return Ok(true),
PidFileState::Running(record) => {
if self.record_is_active(&record).await? {
return Ok(true);
}
match self.refresh_after_stale_record(&record).await? {
PidFileState::Missing => return Ok(false),
PidFileState::Starting | PidFileState::Running(_) => continue,
}
}
}
}
}
#[cfg(unix)]
pub(crate) async fn start(&self) -> Result<Option<u32>> {
if let Some(parent) = self.pid_file.parent() {
fs::create_dir_all(parent)
.await
.with_context(|| format!("failed to create pid directory {}", parent.display()))?;
}
let reservation_lock = self.acquire_reservation_lock().await?;
let _pid_file = loop {
match fs::OpenOptions::new()
.create_new(true)
.write(true)
.open(&self.pid_file)
.await
{
Ok(pid_file) => break pid_file,
Err(err) if err.kind() == std::io::ErrorKind::AlreadyExists => {
match self.read_pid_file_state_with_lock_held().await? {
PidFileState::Missing => continue,
PidFileState::Running(record) => {
if self.record_is_active(&record).await? {
return Ok(None);
}
let _ = fs::remove_file(&self.pid_file).await;
continue;
}
PidFileState::Starting => {
unreachable!("lock holder cannot observe starting")
}
}
}
Err(err) => {
return Err(err).with_context(|| {
format!("failed to reserve pid file {}", self.pid_file.display())
});
}
}
};
let mut command = Command::new(&self.codex_bin);
command
.args(self.command_args())
.stdin(Stdio::null())
.stdout(Stdio::null())
.stderr(Stdio::null());
#[cfg(unix)]
{
unsafe {
command.pre_exec(|| {
if libc::setsid() == -1 {
return Err(std::io::Error::last_os_error());
}
Ok(())
});
}
}
let child = match command.spawn() {
Ok(child) => child,
Err(err) => {
let _ = fs::remove_file(&self.pid_file).await;
return Err(err).with_context(|| {
format!(
"failed to spawn detached app-server process using {}",
self.codex_bin.display()
)
});
}
};
let pid = child
.id()
.context("spawned app-server process has no pid")?;
let record = match read_process_start_time(pid).await {
Ok(process_start_time) => PidRecord {
pid,
process_start_time,
},
Err(err) => {
let _ = self.terminate_process(pid);
let _ = fs::remove_file(&self.pid_file).await;
return Err(err);
}
};
let contents = serde_json::to_vec(&record).context("failed to serialize pid record")?;
let temp_pid_file = self.pid_file.with_extension("pid.tmp");
if let Err(err) = fs::write(&temp_pid_file, &contents).await {
let _ = self.terminate_process(pid);
let _ = fs::remove_file(&self.pid_file).await;
return Err(err).with_context(|| {
format!("failed to write pid temp file {}", temp_pid_file.display())
});
}
if let Err(err) = fs::rename(&temp_pid_file, &self.pid_file).await {
let _ = self.terminate_process(pid);
let _ = fs::remove_file(&temp_pid_file).await;
let _ = fs::remove_file(&self.pid_file).await;
return Err(err).with_context(|| {
format!("failed to publish pid file {}", self.pid_file.display())
});
}
drop(reservation_lock);
Ok(Some(pid))
}
#[cfg(not(unix))]
pub(crate) async fn start(&self) -> Result<Option<u32>> {
bail!("pid-managed app-server startup is unsupported on this platform")
}
pub(crate) async fn stop(&self) -> Result<()> {
loop {
let Some(record) = self.wait_for_pid_start().await? else {
return Ok(());
};
if !self.record_is_active(&record).await? {
match self.refresh_after_stale_record(&record).await? {
PidFileState::Missing => return Ok(()),
PidFileState::Starting | PidFileState::Running(_) => continue,
}
}
let pid = record.pid;
self.terminate_process(pid)?;
let started_at = tokio::time::Instant::now();
let deadline = tokio::time::Instant::now() + STOP_TIMEOUT;
let mut forced = false;
while tokio::time::Instant::now() < deadline {
if !self.record_is_active(&record).await? {
match self.refresh_after_stale_record(&record).await? {
PidFileState::Missing => return Ok(()),
PidFileState::Starting | PidFileState::Running(_) => break,
}
}
if !forced && started_at.elapsed() >= STOP_GRACE_PERIOD {
self.force_terminate_process(pid)?;
forced = true;
}
sleep(STOP_POLL_INTERVAL).await;
}
if self.record_is_active(&record).await? {
bail!("timed out waiting for pid-managed app server {pid} to stop");
}
}
}
async fn wait_for_pid_start(&self) -> Result<Option<PidRecord>> {
let deadline = tokio::time::Instant::now() + START_TIMEOUT;
loop {
match self.read_pid_file_state().await? {
PidFileState::Missing => return Ok(None),
PidFileState::Running(record) => return Ok(Some(record)),
PidFileState::Starting if tokio::time::Instant::now() < deadline => {
sleep(STOP_POLL_INTERVAL).await;
}
PidFileState::Starting => {
bail!(
"timed out waiting for pid reservation in {} to finish initializing",
self.pid_file.display()
);
}
}
}
}
async fn read_pid_file_state(&self) -> Result<PidFileState> {
let contents = match fs::read_to_string(&self.pid_file).await {
Ok(contents) => contents,
Err(err) if err.kind() == std::io::ErrorKind::NotFound => {
return if reservation_lock_is_active(&self.lock_file).await? {
Ok(PidFileState::Starting)
} else {
Ok(PidFileState::Missing)
};
}
Err(err) => {
return Err(err).with_context(|| {
format!("failed to read pid file {}", self.pid_file.display())
});
}
};
if contents.trim().is_empty() {
match inspect_empty_pid_reservation(&self.pid_file, &self.lock_file).await? {
EmptyPidReservation::Active => {
return Ok(PidFileState::Starting);
}
EmptyPidReservation::Stale => {
return Ok(PidFileState::Missing);
}
EmptyPidReservation::Record(record) => return Ok(PidFileState::Running(record)),
}
}
let record = serde_json::from_str(&contents)
.with_context(|| format!("invalid pid file contents in {}", self.pid_file.display()))?;
Ok(PidFileState::Running(record))
}
async fn read_pid_file_state_with_lock_held(&self) -> Result<PidFileState> {
let contents = match fs::read_to_string(&self.pid_file).await {
Ok(contents) => contents,
Err(err) if err.kind() == std::io::ErrorKind::NotFound => {
return Ok(PidFileState::Missing);
}
Err(err) => {
return Err(err).with_context(|| {
format!("failed to read pid file {}", self.pid_file.display())
});
}
};
if contents.trim().is_empty() {
let _ = fs::remove_file(&self.pid_file).await;
return Ok(PidFileState::Missing);
}
let record = serde_json::from_str(&contents)
.with_context(|| format!("invalid pid file contents in {}", self.pid_file.display()))?;
Ok(PidFileState::Running(record))
}
async fn refresh_after_stale_record(&self, expected: &PidRecord) -> Result<PidFileState> {
let reservation_lock = self.acquire_reservation_lock().await?;
let state = match self.read_pid_file_state_with_lock_held().await? {
PidFileState::Running(record) if record == *expected => {
let _ = fs::remove_file(&self.pid_file).await;
PidFileState::Missing
}
state => state,
};
drop(reservation_lock);
Ok(state)
}
async fn acquire_reservation_lock(&self) -> Result<fs::File> {
let reservation_lock = fs::OpenOptions::new()
.create(true)
.truncate(false)
.write(true)
.open(&self.lock_file)
.await
.with_context(|| {
format!("failed to open pid lock file {}", self.lock_file.display())
})?;
let lock_deadline = tokio::time::Instant::now() + START_TIMEOUT;
while !try_lock_file(&reservation_lock)? {
if tokio::time::Instant::now() >= lock_deadline {
bail!(
"timed out waiting for pid lock {}",
self.lock_file.display()
);
}
sleep(STOP_POLL_INTERVAL).await;
}
Ok(reservation_lock)
}
#[cfg(unix)]
fn command_args(&self) -> Vec<&'static str> {
match self.command_kind {
PidCommandKind::AppServer {
remote_control_enabled: true,
} => vec![
"--enable",
"remote_control",
"app-server",
"--listen",
"unix://",
],
PidCommandKind::AppServer {
remote_control_enabled: false,
} => vec!["app-server", "--listen", "unix://"],
PidCommandKind::UpdateLoop => vec!["app-server", "daemon", "pid-update-loop"],
}
}
fn terminate_process(&self, pid: u32) -> Result<()> {
match self.command_kind {
PidCommandKind::AppServer { .. } => terminate_process(pid),
PidCommandKind::UpdateLoop => terminate_process(pid),
}
}
fn force_terminate_process(&self, pid: u32) -> Result<()> {
match self.command_kind {
PidCommandKind::AppServer { .. } => force_terminate_process(pid),
PidCommandKind::UpdateLoop => force_terminate_process_group(pid),
}
}
async fn record_is_active(&self, record: &PidRecord) -> Result<bool> {
process_matches_record(record).await
}
}
#[cfg(unix)]
fn process_exists(pid: u32) -> bool {
let Ok(pid) = libc::pid_t::try_from(pid) else {
return false;
};
let result = unsafe { libc::kill(pid, 0) };
result == 0 || std::io::Error::last_os_error().raw_os_error() == Some(libc::EPERM)
}
#[cfg(unix)]
fn terminate_process(pid: u32) -> Result<()> {
let raw_pid = libc::pid_t::try_from(pid)
.with_context(|| format!("pid-managed app server pid {pid} is out of range"))?;
let result = unsafe { libc::kill(raw_pid, libc::SIGTERM) };
if result == 0 {
return Ok(());
}
let err = std::io::Error::last_os_error();
if err.raw_os_error() == Some(libc::ESRCH) {
return Ok(());
}
Err(err).with_context(|| format!("failed to terminate pid-managed app server {pid}"))
}
#[cfg(unix)]
fn force_terminate_process(pid: u32) -> Result<()> {
let raw_pid = libc::pid_t::try_from(pid)
.with_context(|| format!("pid-managed app server pid {pid} is out of range"))?;
let result = unsafe { libc::kill(raw_pid, libc::SIGKILL) };
if result == 0 {
return Ok(());
}
let err = std::io::Error::last_os_error();
if err.raw_os_error() == Some(libc::ESRCH) {
return Ok(());
}
Err(err).with_context(|| format!("failed to force terminate pid-managed app server {pid}"))
}
#[cfg(unix)]
fn force_terminate_process_group(pid: u32) -> Result<()> {
let raw_pid = libc::pid_t::try_from(pid)
.with_context(|| format!("pid-managed updater pid {pid} is out of range"))?;
let result = unsafe { libc::kill(-raw_pid, libc::SIGKILL) };
if result == 0 {
return Ok(());
}
let err = std::io::Error::last_os_error();
if err.raw_os_error() == Some(libc::ESRCH) {
return Ok(());
}
Err(err).with_context(|| format!("failed to force terminate pid-managed updater group {pid}"))
}
#[cfg(not(unix))]
fn terminate_process(_pid: u32) -> Result<()> {
bail!("pid-managed app-server shutdown is unsupported on this platform")
}
#[cfg(not(unix))]
fn force_terminate_process(_pid: u32) -> Result<()> {
bail!("pid-managed app-server shutdown is unsupported on this platform")
}
#[cfg(not(unix))]
fn force_terminate_process_group(_pid: u32) -> Result<()> {
bail!("pid-managed updater shutdown is unsupported on this platform")
}
#[cfg(unix)]
async fn process_matches_record(record: &PidRecord) -> Result<bool> {
if !process_exists(record.pid) {
return Ok(false);
}
match read_process_start_time(record.pid).await {
Ok(start_time) => Ok(start_time == record.process_start_time),
Err(_err) if !process_exists(record.pid) => Ok(false),
Err(err) => Err(err),
}
}
#[cfg(not(unix))]
async fn process_matches_record(_record: &PidRecord) -> Result<bool> {
Ok(false)
}
#[derive(Debug, Clone, PartialEq, Eq)]
#[cfg_attr(not(unix), allow(dead_code))]
enum EmptyPidReservation {
Active,
Stale,
Record(PidRecord),
}
#[cfg(unix)]
fn try_lock_file(file: &fs::File) -> Result<bool> {
use std::os::fd::AsRawFd;
let result = unsafe { libc::flock(file.as_raw_fd(), libc::LOCK_EX | libc::LOCK_NB) };
if result == 0 {
return Ok(true);
}
let err = std::io::Error::last_os_error();
if err.raw_os_error() == Some(libc::EWOULDBLOCK) {
return Ok(false);
}
Err(err).context("failed to lock pid reservation")
}
#[cfg(not(unix))]
fn try_lock_file(_file: &fs::File) -> Result<bool> {
bail!("pid-managed app-server startup is unsupported on this platform")
}
#[cfg(unix)]
async fn reservation_lock_is_active(path: &Path) -> Result<bool> {
let file = match fs::OpenOptions::new()
.write(true)
.create(true)
.truncate(false)
.open(path)
.await
{
Ok(file) => file,
Err(err) if err.kind() == std::io::ErrorKind::NotFound => {
return Ok(false);
}
Err(err) => {
return Err(err)
.with_context(|| format!("failed to inspect pid lock file {}", path.display()));
}
};
Ok(!try_lock_file(&file)?)
}
#[cfg(not(unix))]
async fn reservation_lock_is_active(_path: &Path) -> Result<bool> {
Ok(false)
}
#[cfg(unix)]
async fn inspect_empty_pid_reservation(
pid_path: &Path,
lock_path: &Path,
) -> Result<EmptyPidReservation> {
let file = match fs::OpenOptions::new()
.write(true)
.create(true)
.truncate(false)
.open(lock_path)
.await
{
Ok(file) => file,
Err(err) => {
return Err(err).with_context(|| {
format!("failed to inspect pid lock file {}", lock_path.display())
});
}
};
if !try_lock_file(&file)? {
return Ok(EmptyPidReservation::Active);
}
let contents = match fs::read_to_string(pid_path).await {
Ok(contents) => contents,
Err(err) if err.kind() == std::io::ErrorKind::NotFound => {
return Ok(EmptyPidReservation::Stale);
}
Err(err) => {
return Err(err)
.with_context(|| format!("failed to reread pid file {}", pid_path.display()));
}
};
if contents.trim().is_empty() {
let _ = fs::remove_file(pid_path).await;
return Ok(EmptyPidReservation::Stale);
}
let record = serde_json::from_str(&contents)
.with_context(|| format!("invalid pid file contents in {}", pid_path.display()))?;
Ok(EmptyPidReservation::Record(record))
}
#[cfg(not(unix))]
async fn inspect_empty_pid_reservation(
_pid_path: &Path,
_lock_path: &Path,
) -> Result<EmptyPidReservation> {
Ok(EmptyPidReservation::Stale)
}
#[cfg(unix)]
async fn read_process_start_time(pid: u32) -> Result<String> {
let output = Command::new("ps")
.args(["-p", &pid.to_string(), "-o", "lstart="])
.output()
.await
.context("failed to invoke ps for pid-managed app server")?;
if !output.status.success() {
bail!("failed to read start time for pid-managed app server {pid}");
}
let start_time = String::from_utf8(output.stdout)
.context("pid-managed app server start time was not utf-8")?;
let start_time = start_time.trim();
if start_time.is_empty() {
bail!("pid-managed app server {pid} has no recorded start time");
}
Ok(start_time.to_string())
}
#[cfg(all(test, unix))]
#[path = "pid_tests.rs"]
mod tests;

View File

@@ -0,0 +1,158 @@
use std::time::Duration;
use pretty_assertions::assert_eq;
use tempfile::TempDir;
use super::PidBackend;
use super::PidCommandKind;
use super::PidFileState;
use super::PidRecord;
use super::try_lock_file;
#[tokio::test]
async fn locked_empty_pid_file_is_treated_as_active_reservation() {
let temp_dir = TempDir::new().expect("temp dir");
let pid_file = temp_dir.path().join("app-server.pid");
tokio::fs::write(&pid_file, "")
.await
.expect("write pid file");
let backend = PidBackend::new(
temp_dir.path().join("codex"),
pid_file.clone(),
/*remote_control_enabled*/ false,
);
let reservation = tokio::fs::OpenOptions::new()
.create(true)
.truncate(false)
.write(true)
.open(&backend.lock_file)
.await
.expect("open pid lock file");
assert!(try_lock_file(&reservation).expect("lock reservation"));
assert_eq!(
backend.read_pid_file_state().await.expect("read pid"),
PidFileState::Starting
);
assert!(pid_file.exists());
}
#[tokio::test]
async fn unlocked_empty_pid_file_is_treated_as_stale_reservation() {
let temp_dir = TempDir::new().expect("temp dir");
let pid_file = temp_dir.path().join("app-server.pid");
tokio::fs::write(&pid_file, "")
.await
.expect("write pid file");
let backend = PidBackend::new(
temp_dir.path().join("codex"),
pid_file.clone(),
/*remote_control_enabled*/ false,
);
assert_eq!(
backend.read_pid_file_state().await.expect("read pid"),
PidFileState::Missing
);
assert!(!pid_file.exists());
}
#[tokio::test]
async fn stop_waits_for_live_reservation_to_resolve() {
let temp_dir = TempDir::new().expect("temp dir");
let pid_file = temp_dir.path().join("app-server.pid");
tokio::fs::write(&pid_file, "")
.await
.expect("write pid file");
let backend = PidBackend::new(
temp_dir.path().join("codex"),
pid_file.clone(),
/*remote_control_enabled*/ false,
);
let reservation = tokio::fs::OpenOptions::new()
.create(true)
.truncate(false)
.write(true)
.open(&backend.lock_file)
.await
.expect("open pid lock file");
assert!(try_lock_file(&reservation).expect("lock reservation"));
let cleanup = tokio::spawn(async move {
tokio::time::sleep(Duration::from_millis(50)).await;
drop(reservation);
tokio::fs::remove_file(pid_file)
.await
.expect("remove pid file");
});
backend.stop().await.expect("stop");
cleanup.await.expect("cleanup task");
}
#[tokio::test]
async fn start_retries_stale_empty_pid_file_under_its_own_lock() {
let temp_dir = TempDir::new().expect("temp dir");
let pid_file = temp_dir.path().join("app-server.pid");
tokio::fs::write(&pid_file, "")
.await
.expect("write pid file");
let backend = PidBackend::new(
temp_dir.path().join("missing-codex"),
pid_file,
/*remote_control_enabled*/ false,
);
let err = backend.start().await.expect_err("start");
assert!(
err.to_string()
.starts_with("failed to spawn detached app-server process using ")
);
}
#[tokio::test]
async fn stale_record_cleanup_preserves_replacement_record() {
let temp_dir = TempDir::new().expect("temp dir");
let pid_file = temp_dir.path().join("app-server.pid");
let backend = PidBackend::new(
temp_dir.path().join("codex"),
pid_file.clone(),
/*remote_control_enabled*/ false,
);
let stale = PidRecord {
pid: 1,
process_start_time: "old".to_string(),
};
let replacement = PidRecord {
pid: 2,
process_start_time: "new".to_string(),
};
tokio::fs::write(
&pid_file,
serde_json::to_vec(&replacement).expect("serialize replacement"),
)
.await
.expect("write replacement pid file");
assert_eq!(
backend
.refresh_after_stale_record(&stale)
.await
.expect("cleanup"),
PidFileState::Running(replacement)
);
}
#[test]
fn update_loop_uses_hidden_app_server_subcommand() {
let backend = PidBackend {
codex_bin: "codex".into(),
pid_file: "updater.pid".into(),
lock_file: "updater.pid.lock".into(),
command_kind: PidCommandKind::UpdateLoop,
};
assert_eq!(
backend.command_args(),
vec!["app-server", "daemon", "pid-update-loop"]
);
}

View File

@@ -0,0 +1,131 @@
use std::path::Path;
use std::time::Duration;
use anyhow::Context;
use anyhow::Result;
use anyhow::anyhow;
use codex_app_server_protocol::ClientInfo;
use codex_app_server_protocol::InitializeParams;
use codex_app_server_protocol::InitializeResponse;
use codex_app_server_protocol::JSONRPCMessage;
use codex_app_server_protocol::JSONRPCNotification;
use codex_app_server_protocol::JSONRPCRequest;
use codex_app_server_protocol::RequestId;
use codex_uds::UnixStream;
use futures::SinkExt;
use futures::StreamExt;
use tokio::time::timeout;
use tokio_tungstenite::client_async;
use tokio_tungstenite::tungstenite::Message;
const PROBE_TIMEOUT: Duration = Duration::from_secs(2);
const CLIENT_NAME: &str = "codex_app_server_daemon";
#[derive(Debug, Clone, PartialEq, Eq)]
pub(crate) struct ProbeInfo {
pub(crate) app_server_version: String,
}
pub(crate) async fn probe(socket_path: &Path) -> Result<ProbeInfo> {
timeout(PROBE_TIMEOUT, probe_inner(socket_path))
.await
.with_context(|| {
format!(
"timed out probing app-server control socket {}",
socket_path.display()
)
})?
}
async fn probe_inner(socket_path: &Path) -> Result<ProbeInfo> {
let stream = UnixStream::connect(socket_path)
.await
.with_context(|| format!("failed to connect to {}", socket_path.display()))?;
let (mut websocket, _response) = client_async("ws://localhost/", stream)
.await
.with_context(|| format!("failed to upgrade {}", socket_path.display()))?;
let initialize = JSONRPCMessage::Request(JSONRPCRequest {
id: RequestId::Integer(1),
method: "initialize".to_string(),
params: Some(serde_json::to_value(InitializeParams {
client_info: ClientInfo {
name: CLIENT_NAME.to_string(),
title: Some("Codex App Server Daemon".to_string()),
version: env!("CARGO_PKG_VERSION").to_string(),
},
capabilities: None,
})?),
trace: None,
});
websocket
.send(Message::Text(serde_json::to_string(&initialize)?.into()))
.await
.context("failed to send initialize request")?;
let response = loop {
let frame = websocket
.next()
.await
.ok_or_else(|| anyhow!("app-server closed before initialize response"))??;
let Message::Text(payload) = frame else {
continue;
};
let message = serde_json::from_str::<JSONRPCMessage>(&payload)?;
if let JSONRPCMessage::Response(response) = message
&& response.id == RequestId::Integer(1)
{
break response;
}
};
let initialize_response = serde_json::from_value::<InitializeResponse>(response.result)?;
let initialized = JSONRPCMessage::Notification(JSONRPCNotification {
method: "initialized".to_string(),
params: None,
});
websocket
.send(Message::Text(serde_json::to_string(&initialized)?.into()))
.await
.context("failed to send initialized notification")?;
websocket.close(None).await.ok();
Ok(ProbeInfo {
app_server_version: parse_version_from_user_agent(&initialize_response.user_agent)?,
})
}
fn parse_version_from_user_agent(user_agent: &str) -> Result<String> {
let (_originator, rest) = user_agent
.split_once('/')
.ok_or_else(|| anyhow!("app-server user-agent omitted version separator"))?;
let version = rest
.split_whitespace()
.next()
.filter(|version| !version.is_empty())
.ok_or_else(|| anyhow!("app-server user-agent omitted version"))?;
Ok(version.to_string())
}
#[cfg(all(test, unix))]
mod tests {
use pretty_assertions::assert_eq;
use super::parse_version_from_user_agent;
#[test]
fn parses_version_from_codex_user_agent() {
assert_eq!(
parse_version_from_user_agent(
"codex_app_server_daemon/1.2.3 (Linux 6.8.0; x86_64) codex_cli_rs/1.2.3",
)
.expect("version"),
"1.2.3"
);
}
#[test]
fn rejects_user_agent_without_version() {
assert!(parse_version_from_user_agent("codex_app_server_daemon").is_err());
}
}

View File

@@ -0,0 +1,630 @@
mod backend;
mod client;
mod managed_install;
mod settings;
mod update_loop;
use std::path::PathBuf;
use std::time::Duration;
use anyhow::Context;
use anyhow::Result;
use anyhow::anyhow;
pub use backend::BackendKind;
use backend::BackendPaths;
use codex_app_server_transport::app_server_control_socket_path;
use codex_core::config::find_codex_home;
use managed_install::managed_codex_bin;
#[cfg(unix)]
use managed_install::managed_codex_version;
use serde::Serialize;
use settings::DaemonSettings;
use tokio::time::sleep;
const START_POLL_INTERVAL: Duration = Duration::from_millis(50);
const START_TIMEOUT: Duration = Duration::from_secs(10);
const OPERATION_LOCK_TIMEOUT: Duration = Duration::from_secs(75);
const PID_FILE_NAME: &str = "app-server.pid";
const UPDATE_PID_FILE_NAME: &str = "app-server-updater.pid";
const OPERATION_LOCK_FILE_NAME: &str = "daemon.lock";
const SETTINGS_FILE_NAME: &str = "settings.json";
const STATE_DIR_NAME: &str = "app-server-daemon";
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum LifecycleCommand {
Start,
Restart,
Stop,
Version,
}
#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize)]
#[serde(rename_all = "camelCase")]
pub enum LifecycleStatus {
AlreadyRunning,
Started,
Restarted,
Stopped,
NotRunning,
Running,
}
#[derive(Debug, Clone, PartialEq, Eq, Serialize)]
#[serde(rename_all = "camelCase")]
pub struct LifecycleOutput {
pub status: LifecycleStatus,
#[serde(skip_serializing_if = "Option::is_none")]
pub backend: Option<BackendKind>,
#[serde(skip_serializing_if = "Option::is_none")]
pub pid: Option<u32>,
pub socket_path: PathBuf,
#[serde(skip_serializing_if = "Option::is_none")]
pub cli_version: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub app_server_version: Option<String>,
}
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub struct BootstrapOptions {
pub remote_control_enabled: bool,
}
#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize)]
#[serde(rename_all = "camelCase")]
pub enum BootstrapStatus {
Bootstrapped,
}
#[derive(Debug, Clone, PartialEq, Eq, Serialize)]
#[serde(rename_all = "camelCase")]
pub struct BootstrapOutput {
pub status: BootstrapStatus,
pub backend: BackendKind,
pub auto_update_enabled: bool,
pub remote_control_enabled: bool,
pub managed_codex_path: PathBuf,
pub socket_path: PathBuf,
pub cli_version: String,
pub app_server_version: String,
}
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum RemoteControlMode {
Enabled,
Disabled,
}
impl RemoteControlMode {
fn is_enabled(self) -> bool {
matches!(self, Self::Enabled)
}
}
#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize)]
#[serde(rename_all = "camelCase")]
pub enum RemoteControlStatus {
Enabled,
Disabled,
AlreadyEnabled,
AlreadyDisabled,
}
#[derive(Debug, Clone, PartialEq, Eq, Serialize)]
#[serde(rename_all = "camelCase")]
pub struct RemoteControlOutput {
pub status: RemoteControlStatus,
#[serde(skip_serializing_if = "Option::is_none")]
pub backend: Option<BackendKind>,
pub remote_control_enabled: bool,
pub socket_path: PathBuf,
pub cli_version: String,
#[serde(skip_serializing_if = "Option::is_none")]
pub app_server_version: Option<String>,
}
#[cfg(unix)]
pub(crate) enum RestartIfRunningOutcome {
Completed,
Busy,
}
pub async fn run(command: LifecycleCommand) -> Result<LifecycleOutput> {
ensure_supported_platform()?;
Daemon::from_environment()?.run(command).await
}
pub async fn bootstrap(options: BootstrapOptions) -> Result<BootstrapOutput> {
ensure_supported_platform()?;
Daemon::from_environment()?.bootstrap(options).await
}
pub async fn set_remote_control(mode: RemoteControlMode) -> Result<RemoteControlOutput> {
ensure_supported_platform()?;
Daemon::from_environment()?.set_remote_control(mode).await
}
pub async fn run_pid_update_loop() -> Result<()> {
ensure_supported_platform()?;
update_loop::run().await
}
#[cfg(unix)]
fn ensure_supported_platform() -> Result<()> {
Ok(())
}
#[cfg(not(unix))]
fn ensure_supported_platform() -> Result<()> {
Err(anyhow!(
"codex app-server daemon lifecycle is only supported on Unix platforms"
))
}
struct Daemon {
socket_path: PathBuf,
pid_file: PathBuf,
update_pid_file: PathBuf,
operation_lock_file: PathBuf,
settings_file: PathBuf,
managed_codex_bin: PathBuf,
}
impl Daemon {
fn from_environment() -> Result<Self> {
let codex_home = find_codex_home().context("failed to resolve CODEX_HOME")?;
let socket_path = app_server_control_socket_path(codex_home.as_path())?
.as_path()
.to_path_buf();
let state_dir = codex_home.as_path().join(STATE_DIR_NAME);
Ok(Self {
socket_path,
pid_file: state_dir.join(PID_FILE_NAME),
update_pid_file: state_dir.join(UPDATE_PID_FILE_NAME),
operation_lock_file: state_dir.join(OPERATION_LOCK_FILE_NAME),
settings_file: state_dir.join(SETTINGS_FILE_NAME),
managed_codex_bin: managed_codex_bin(codex_home.as_path()),
})
}
async fn run(&self, command: LifecycleCommand) -> Result<LifecycleOutput> {
match command {
LifecycleCommand::Start => {
let _operation_lock = self.acquire_operation_lock().await?;
self.start().await
}
LifecycleCommand::Restart => {
let _operation_lock = self.acquire_operation_lock().await?;
self.restart().await
}
LifecycleCommand::Stop => {
let _operation_lock = self.acquire_operation_lock().await?;
self.stop().await
}
LifecycleCommand::Version => self.version().await,
}
}
async fn start(&self) -> Result<LifecycleOutput> {
let settings = self.load_settings().await?;
if let Ok(info) = client::probe(&self.socket_path).await {
return Ok(self.output(
LifecycleStatus::AlreadyRunning,
self.running_backend(&settings).await?,
/*pid*/ None,
Some(info.app_server_version),
));
}
if self.running_backend_instance(&settings).await?.is_some() {
let info = self.wait_until_ready().await?;
return Ok(self.output(
LifecycleStatus::AlreadyRunning,
Some(BackendKind::Pid),
/*pid*/ None,
Some(info.app_server_version),
));
}
self.ensure_managed_codex_bin()?;
let pid = self.start_managed_backend(&settings).await?;
let info = self.wait_until_ready().await?;
Ok(self.output(
LifecycleStatus::Started,
Some(BackendKind::Pid),
pid,
Some(info.app_server_version),
))
}
async fn restart(&self) -> Result<LifecycleOutput> {
let settings = self.load_settings().await?;
if client::probe(&self.socket_path).await.is_ok()
&& self.running_backend(&settings).await?.is_none()
{
return Err(anyhow!(
"app server is running but is not managed by codex app-server daemon"
));
}
self.ensure_managed_codex_bin()?;
if let Some(backend) = self.running_backend_instance(&settings).await? {
backend.stop().await?;
}
let pid = self.start_managed_backend(&settings).await?;
let info = self.wait_until_ready().await?;
Ok(self.output(
LifecycleStatus::Restarted,
Some(BackendKind::Pid),
pid,
Some(info.app_server_version),
))
}
#[cfg(unix)]
pub(crate) async fn try_restart_if_running(&self) -> Result<RestartIfRunningOutcome> {
let operation_lock = self.open_operation_lock_file().await?;
if !try_lock_file(&operation_lock)? {
return Ok(RestartIfRunningOutcome::Busy);
}
let settings = self.load_settings().await?;
if let Some(backend) = self.running_backend_instance(&settings).await? {
let Ok(info) = client::probe(&self.socket_path).await else {
return Ok(RestartIfRunningOutcome::Completed);
};
let managed_version = managed_codex_version(&self.managed_codex_bin).await?;
if info.app_server_version == managed_version {
return Ok(RestartIfRunningOutcome::Completed);
}
backend.stop().await?;
let _ = self.start_managed_backend(&settings).await?;
self.wait_until_ready().await?;
return Ok(RestartIfRunningOutcome::Completed);
}
if client::probe(&self.socket_path).await.is_ok() {
return Err(anyhow!(
"app server is running but is not managed by codex app-server daemon"
));
}
Ok(RestartIfRunningOutcome::Completed)
}
async fn stop(&self) -> Result<LifecycleOutput> {
let settings = self.load_settings().await?;
if let Some(backend) = self.running_backend_instance(&settings).await? {
backend.stop().await?;
return Ok(self.output(
LifecycleStatus::Stopped,
Some(BackendKind::Pid),
/*pid*/ None,
/*app_server_version*/ None,
));
}
if client::probe(&self.socket_path).await.is_ok() {
return Err(anyhow!(
"app server is running but is not managed by codex app-server daemon"
));
}
Ok(self.output(
LifecycleStatus::NotRunning,
/*backend*/ None,
/*pid*/ None,
/*app_server_version*/ None,
))
}
async fn version(&self) -> Result<LifecycleOutput> {
let settings = self.load_settings().await?;
let info = client::probe(&self.socket_path).await?;
Ok(self.output(
LifecycleStatus::Running,
self.running_backend(&settings).await?,
/*pid*/ None,
Some(info.app_server_version),
))
}
async fn wait_until_ready(&self) -> Result<client::ProbeInfo> {
let deadline = tokio::time::Instant::now() + START_TIMEOUT;
loop {
match client::probe(&self.socket_path).await {
Ok(info) => return Ok(info),
Err(err) if tokio::time::Instant::now() < deadline => {
let _ = err;
sleep(START_POLL_INTERVAL).await;
}
Err(err) => {
return Err(err).with_context(|| {
format!(
"app server did not become ready on {}",
self.socket_path.display()
)
});
}
}
}
}
async fn bootstrap(&self, options: BootstrapOptions) -> Result<BootstrapOutput> {
let _operation_lock = self.acquire_operation_lock().await?;
self.bootstrap_locked(options).await
}
async fn set_remote_control(&self, mode: RemoteControlMode) -> Result<RemoteControlOutput> {
let _operation_lock = self.acquire_operation_lock().await?;
let previous_settings = self.load_settings().await?;
let mut settings = previous_settings.clone();
let remote_control_enabled = mode.is_enabled();
let backend = self.running_backend_instance(&previous_settings).await?;
if backend.is_none() && client::probe(&self.socket_path).await.is_ok() {
return Err(anyhow!(
"app server is running but is not managed by codex app-server daemon"
));
}
if settings.remote_control_enabled == remote_control_enabled {
let info = if backend.is_some() {
Some(self.wait_until_ready().await?)
} else {
None
};
return Ok(self.remote_control_output(
already_remote_control_status(mode),
backend.map(|_| BackendKind::Pid),
remote_control_enabled,
info.map(|info| info.app_server_version),
));
}
settings.remote_control_enabled = remote_control_enabled;
settings.save(&self.settings_file).await?;
let app_server_version = if let Some(backend) = backend {
self.ensure_managed_codex_bin()?;
backend.stop().await?;
let _ = self.start_managed_backend(&settings).await?;
Some(self.wait_until_ready().await?.app_server_version)
} else {
None
};
Ok(self.remote_control_output(
remote_control_status(mode),
app_server_version.as_ref().map(|_| BackendKind::Pid),
remote_control_enabled,
app_server_version,
))
}
async fn bootstrap_locked(&self, options: BootstrapOptions) -> Result<BootstrapOutput> {
self.ensure_managed_codex_bin()?;
let settings = DaemonSettings {
remote_control_enabled: options.remote_control_enabled,
};
if client::probe(&self.socket_path).await.is_ok()
&& self.running_backend(&settings).await?.is_none()
{
return Err(anyhow!(
"app server is running but is not managed by codex app-server daemon"
));
}
settings.save(&self.settings_file).await?;
if let Some(backend) = self.running_backend_instance(&settings).await? {
backend.stop().await?;
}
let backend = backend::pid_backend(self.backend_paths(&settings));
backend.start().await?;
let updater = backend::pid_update_loop_backend(self.backend_paths(&settings));
if updater.is_starting_or_running().await? {
updater.stop().await?;
}
updater.start().await?;
let info = self.wait_until_ready().await?;
Ok(BootstrapOutput {
status: BootstrapStatus::Bootstrapped,
backend: BackendKind::Pid,
auto_update_enabled: true,
remote_control_enabled: settings.remote_control_enabled,
managed_codex_path: self.managed_codex_bin.clone(),
socket_path: self.socket_path.clone(),
cli_version: env!("CARGO_PKG_VERSION").to_string(),
app_server_version: info.app_server_version,
})
}
async fn running_backend(&self, settings: &DaemonSettings) -> Result<Option<BackendKind>> {
Ok(self
.running_backend_instance(settings)
.await?
.map(|_| BackendKind::Pid))
}
async fn running_backend_instance(
&self,
settings: &DaemonSettings,
) -> Result<Option<backend::PidBackend>> {
let backend = backend::pid_backend(self.backend_paths(settings));
if backend.is_starting_or_running().await? {
return Ok(Some(backend));
}
Ok(None)
}
async fn start_managed_backend(&self, settings: &DaemonSettings) -> Result<Option<u32>> {
let backend = backend::pid_backend(self.backend_paths(settings));
backend.start().await
}
fn ensure_managed_codex_bin(&self) -> Result<()> {
if self.managed_codex_bin.is_file() {
return Ok(());
}
Err(anyhow!(
"managed standalone Codex install not found at {}; install Codex first",
self.managed_codex_bin.display()
))
}
fn backend_paths(&self, settings: &DaemonSettings) -> BackendPaths {
BackendPaths {
codex_bin: self.managed_codex_bin.clone(),
pid_file: self.pid_file.clone(),
update_pid_file: self.update_pid_file.clone(),
remote_control_enabled: settings.remote_control_enabled,
}
}
async fn load_settings(&self) -> Result<DaemonSettings> {
DaemonSettings::load(&self.settings_file).await
}
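/// Serializes daemon operations by polling a non-blocking advisory lock until it is acquired or OPERATION_LOCK_TIMEOUT elapses.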
async fn acquire_operation_lock(&self) -> Result<tokio::fs::File> {
let operation_lock = self.open_operation_lock_file().await?;
let deadline = tokio::time::Instant::now() + OPERATION_LOCK_TIMEOUT;
while !try_lock_file(&operation_lock)? {
if tokio::time::Instant::now() >= deadline {
return Err(anyhow!(
"timed out waiting for daemon operation lock {}",
self.operation_lock_file.display()
));
}
sleep(START_POLL_INTERVAL).await;
}
Ok(operation_lock)
}
async fn open_operation_lock_file(&self) -> Result<tokio::fs::File> {
if let Some(parent) = self.operation_lock_file.parent() {
tokio::fs::create_dir_all(parent).await.with_context(|| {
format!(
"failed to create daemon state directory {}",
parent.display()
)
})?;
}
tokio::fs::OpenOptions::new()
.create(true)
.truncate(false)
.write(true)
.open(&self.operation_lock_file)
.await
.with_context(|| {
format!(
"failed to open daemon operation lock {}",
self.operation_lock_file.display()
)
})
}
fn output(
&self,
status: LifecycleStatus,
backend: Option<BackendKind>,
pid: Option<u32>,
app_server_version: Option<String>,
) -> LifecycleOutput {
LifecycleOutput {
status,
backend,
pid,
socket_path: self.socket_path.clone(),
cli_version: Some(env!("CARGO_PKG_VERSION").to_string()),
app_server_version,
}
}
fn remote_control_output(
&self,
status: RemoteControlStatus,
backend: Option<BackendKind>,
remote_control_enabled: bool,
app_server_version: Option<String>,
) -> RemoteControlOutput {
RemoteControlOutput {
status,
backend,
remote_control_enabled,
socket_path: self.socket_path.clone(),
cli_version: env!("CARGO_PKG_VERSION").to_string(),
app_server_version,
}
}
}
fn remote_control_status(mode: RemoteControlMode) -> RemoteControlStatus {
match mode {
RemoteControlMode::Enabled => RemoteControlStatus::Enabled,
RemoteControlMode::Disabled => RemoteControlStatus::Disabled,
}
}
fn already_remote_control_status(mode: RemoteControlMode) -> RemoteControlStatus {
match mode {
RemoteControlMode::Enabled => RemoteControlStatus::AlreadyEnabled,
RemoteControlMode::Disabled => RemoteControlStatus::AlreadyDisabled,
}
}
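/// Attempts a non-blocking exclusive flock; Ok(false) means another process currently holds the lock.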
#[cfg(unix)]
fn try_lock_file(file: &tokio::fs::File) -> Result<bool> {
use std::os::fd::AsRawFd;
let result = unsafe { libc::flock(file.as_raw_fd(), libc::LOCK_EX | libc::LOCK_NB) };
if result == 0 {
return Ok(true);
}
let err = std::io::Error::last_os_error();
if err.raw_os_error() == Some(libc::EWOULDBLOCK) {
return Ok(false);
}
Err(err).context("failed to lock daemon operation")
}
#[cfg(not(unix))]
fn try_lock_file(_file: &tokio::fs::File) -> Result<bool> {
Ok(true)
}
#[cfg(all(test, unix))]
mod tests {
use pretty_assertions::assert_eq;
use super::BootstrapStatus;
use super::LifecycleStatus;
use super::RemoteControlStatus;
#[test]
fn lifecycle_status_uses_camel_case_json() {
assert_eq!(
serde_json::to_string(&LifecycleStatus::AlreadyRunning).expect("serialize"),
"\"alreadyRunning\""
);
}
#[test]
fn bootstrap_status_uses_camel_case_json() {
assert_eq!(
serde_json::to_string(&BootstrapStatus::Bootstrapped).expect("serialize"),
"\"bootstrapped\""
);
}
#[test]
fn remote_control_status_uses_camel_case_json() {
assert_eq!(
serde_json::to_string(&RemoteControlStatus::AlreadyEnabled).expect("serialize"),
"\"alreadyEnabled\""
);
}
}

View File

@@ -0,0 +1,66 @@
use std::path::Path;
use std::path::PathBuf;
#[cfg(unix)]
use anyhow::Context;
#[cfg(unix)]
use anyhow::Result;
#[cfg(unix)]
use anyhow::anyhow;
#[cfg(unix)]
use tokio::process::Command;
pub(crate) fn managed_codex_bin(codex_home: &Path) -> PathBuf {
codex_home
.join("packages")
.join("standalone")
.join("current")
.join(managed_codex_file_name())
}
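/// Invokes the managed Codex binary with `--version` and returns the parsed version string.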
#[cfg(unix)]
pub(crate) async fn managed_codex_version(codex_bin: &Path) -> Result<String> {
let output = Command::new(codex_bin)
.arg("--version")
.output()
.await
.with_context(|| {
format!(
"failed to invoke managed Codex binary {}",
codex_bin.display()
)
})?;
if !output.status.success() {
return Err(anyhow!(
"managed Codex binary {} exited with status {}",
codex_bin.display(),
output.status
));
}
let stdout = String::from_utf8(output.stdout).with_context(|| {
format!(
"managed Codex version was not utf-8: {}",
codex_bin.display()
)
})?;
parse_codex_version(&stdout)
}
fn managed_codex_file_name() -> &'static str {
if cfg!(windows) { "codex.exe" } else { "codex" }
}
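/// Extracts the version number from `codex --version` output such as "codex 1.2.3".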
#[cfg(unix)]
fn parse_codex_version(output: &str) -> Result<String> {
let version = output
.split_whitespace()
.nth(1)
.filter(|version| !version.is_empty())
.ok_or_else(|| anyhow!("managed Codex version output was malformed"))?;
Ok(version.to_string())
}
#[cfg(all(test, unix))]
#[path = "managed_install_tests.rs"]
mod tests;

View File

@@ -0,0 +1,16 @@
use pretty_assertions::assert_eq;
use super::parse_codex_version;
#[test]
fn parses_codex_cli_version_output() {
assert_eq!(
parse_codex_version("codex 1.2.3\n").expect("version"),
"1.2.3"
);
}
#[test]
fn rejects_malformed_codex_cli_version_output() {
assert!(parse_codex_version("codex\n").is_err());
}

View File

@@ -0,0 +1,63 @@
use std::path::Path;
use anyhow::Context;
use anyhow::Result;
use serde::Deserialize;
use serde::Serialize;
use tokio::fs;
#[derive(Debug, Clone, Default, PartialEq, Eq, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub(crate) struct DaemonSettings {
pub(crate) remote_control_enabled: bool,
}
impl DaemonSettings {
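/// Reads settings from disk, returning defaults when the file does not exist yet.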
pub(crate) async fn load(path: &Path) -> Result<Self> {
let contents = match fs::read_to_string(path).await {
Ok(contents) => contents,
Err(err) if err.kind() == std::io::ErrorKind::NotFound => return Ok(Self::default()),
Err(err) => {
return Err(err)
.with_context(|| format!("failed to read daemon settings {}", path.display()));
}
};
serde_json::from_str(&contents)
.with_context(|| format!("failed to parse daemon settings {}", path.display()))
}
pub(crate) async fn save(&self, path: &Path) -> Result<()> {
if let Some(parent) = path.parent() {
fs::create_dir_all(parent).await.with_context(|| {
format!(
"failed to create daemon settings directory {}",
parent.display()
)
})?;
}
let contents = serde_json::to_vec_pretty(self).context("failed to serialize settings")?;
fs::write(path, contents)
.await
.with_context(|| format!("failed to write daemon settings {}", path.display()))
}
}
#[cfg(all(test, unix))]
mod tests {
use pretty_assertions::assert_eq;
use super::DaemonSettings;
#[test]
fn daemon_settings_use_camel_case_json() {
assert_eq!(
serde_json::to_string(&DaemonSettings {
remote_control_enabled: true,
})
.expect("serialize"),
r#"{"remoteControlEnabled":true}"#
);
}
}

View File

@@ -0,0 +1,132 @@
#[cfg(unix)]
use std::process::Stdio;
#[cfg(unix)]
use std::time::Duration;
#[cfg(unix)]
use anyhow::Context;
use anyhow::Result;
#[cfg(not(unix))]
use anyhow::bail;
#[cfg(unix)]
use futures::FutureExt;
#[cfg(unix)]
use tokio::io::AsyncWriteExt;
#[cfg(unix)]
use tokio::process::Command;
#[cfg(unix)]
use tokio::signal::unix::Signal;
#[cfg(unix)]
use tokio::signal::unix::SignalKind;
#[cfg(unix)]
use tokio::signal::unix::signal;
#[cfg(unix)]
use tokio::time::sleep;
#[cfg(unix)]
use crate::Daemon;
#[cfg(unix)]
use crate::RestartIfRunningOutcome;
#[cfg(unix)]
const INITIAL_UPDATE_DELAY: Duration = Duration::from_secs(5 * 60);
#[cfg(unix)]
const RESTART_RETRY_INTERVAL: Duration = Duration::from_millis(50);
#[cfg(unix)]
const UPDATE_INTERVAL: Duration = Duration::from_secs(60 * 60);
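/// Updater entry point: waits out the initial delay, then installs updates and restarts the managed backend on an hourly cadence until SIGTERM.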
#[cfg(unix)]
pub(crate) async fn run() -> Result<()> {
let mut terminate =
signal(SignalKind::terminate()).context("failed to install updater shutdown handler")?;
if sleep_or_terminate(INITIAL_UPDATE_DELAY, &mut terminate).await {
return Ok(());
}
loop {
match update_once(&mut terminate).await {
Ok(UpdateLoopControl::Continue) | Err(_) => {}
Ok(UpdateLoopControl::Stop) => return Ok(()),
}
if sleep_or_terminate(UPDATE_INTERVAL, &mut terminate).await {
return Ok(());
}
}
}
#[cfg(not(unix))]
pub(crate) async fn run() -> Result<()> {
bail!("pid-managed updater loop is unsupported on this platform")
}
#[cfg(unix)]
async fn sleep_or_terminate(duration: Duration, terminate: &mut Signal) -> bool {
tokio::select! {
_ = sleep(duration) => false,
_ = terminate.recv() => true,
}
}
#[cfg(unix)]
enum UpdateLoopControl {
Continue,
Stop,
}
#[cfg(unix)]
async fn update_once(terminate: &mut Signal) -> Result<UpdateLoopControl> {
install_latest_standalone().await?;
let daemon = Daemon::from_environment()?;
loop {
if terminate.recv().now_or_never().flatten().is_some() {
return Ok(UpdateLoopControl::Stop);
}
match daemon.try_restart_if_running().await? {
RestartIfRunningOutcome::Completed => return Ok(UpdateLoopControl::Continue),
RestartIfRunningOutcome::Busy => {
if sleep_or_terminate(RESTART_RETRY_INTERVAL, terminate).await {
return Ok(UpdateLoopControl::Stop);
}
}
}
}
}
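/// Downloads the standalone Codex install script and pipes it into `/bin/sh` over stdin.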
#[cfg(unix)]
async fn install_latest_standalone() -> Result<()> {
let script = reqwest::get("https://chatgpt.com/codex/install.sh")
.await
.context("failed to fetch standalone Codex updater")?
.error_for_status()
.context("standalone Codex updater request failed")?
.bytes()
.await
.context("failed to read standalone Codex updater")?;
let mut child = Command::new("/bin/sh")
.arg("-s")
.stdin(Stdio::piped())
.stdout(Stdio::null())
.stderr(Stdio::null())
.spawn()
.context("failed to invoke standalone Codex updater")?;
let mut stdin = child
.stdin
.take()
.context("standalone Codex updater stdin was unavailable")?;
stdin
.write_all(&script)
.await
.context("failed to pass standalone Codex updater to shell")?;
drop(stdin);
let status = child
.wait()
.await
.context("failed to wait for standalone Codex updater")?;
if status.success() {
Ok(())
} else {
anyhow::bail!("standalone Codex updater exited with status {status}")
}
}

View File

@@ -0,0 +1,5 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "AttestationGenerateParams",
"type": "object"
}

View File

@@ -0,0 +1,14 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"properties": {
"token": {
"description": "Opaque client attestation token.",
"type": "string"
}
},
"required": [
"token"
],
"title": "AttestationGenerateResponse",
"type": "object"
}

View File

@@ -1259,6 +1259,11 @@
"array",
"null"
]
},
"requestAttestation": {
"default": false,
"description": "Opt into `attestation/generate` requests for upstream `x-oai-attestation`.",
"type": "boolean"
}
},
"type": "object"
@@ -2083,16 +2088,37 @@
},
"principalType": {
"$ref": "#/definitions/PluginSharePrincipalType"
},
"role": {
"$ref": "#/definitions/PluginShareTargetRole"
}
},
"required": [
"principalId",
"principalType"
"principalType",
"role"
],
"type": "object"
},
"PluginShareTargetRole": {
"enum": [
"reader",
"editor"
],
"type": "string"
},
"PluginShareUpdateDiscoverability": {
"enum": [
"UNLISTED",
"PRIVATE"
],
"type": "string"
},
"PluginShareUpdateTargetsParams": {
"properties": {
"discoverability": {
"$ref": "#/definitions/PluginShareUpdateDiscoverability"
},
"remotePluginId": {
"type": "string"
},
@@ -2104,6 +2130,7 @@
}
},
"required": [
"discoverability",
"remotePluginId",
"shareTargets"
],
@@ -6177,4 +6204,4 @@
}
],
"title": "ClientRequest"
}
}

View File

@@ -2737,7 +2737,7 @@
"type": "string"
},
"RemoteControlStatusChangedNotification": {
"description": "Current remote-control connection status and environment id exposed to clients.",
"description": "Current remote-control connection status and remote identity exposed to clients.",
"properties": {
"environmentId": {
"type": [
@@ -2745,11 +2745,15 @@
"null"
]
},
"installationId": {
"type": "string"
},
"status": {
"$ref": "#/definitions/RemoteControlConnectionStatus"
}
},
"required": [
"installationId",
"status"
],
"type": "object"

View File

@@ -121,6 +121,9 @@
],
"type": "object"
},
"AttestationGenerateParams": {
"type": "object"
},
"ChatgptAuthTokensRefreshParams": {
"properties": {
"previousAccountId": {
@@ -1918,6 +1921,31 @@
"title": "Account/chatgptAuthTokens/refreshRequest",
"type": "object"
},
{
"description": "Generate a fresh upstream attestation result on demand.",
"properties": {
"id": {
"$ref": "#/definitions/RequestId"
},
"method": {
"enum": [
"attestation/generate"
],
"title": "Attestation/generateRequestMethod",
"type": "string"
},
"params": {
"$ref": "#/definitions/AttestationGenerateParams"
}
},
"required": [
"id",
"method",
"params"
],
"title": "Attestation/generateRequest",
"type": "object"
},
{
"description": "DEPRECATED APIs below Request to approve a patch. This request is used for Turns started via the legacy APIs (i.e. SendUserTurn, SendUserMessage).",
"properties": {

View File

@@ -83,6 +83,25 @@
"title": "ApplyPatchApprovalResponse",
"type": "object"
},
"AttestationGenerateParams": {
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "AttestationGenerateParams",
"type": "object"
},
"AttestationGenerateResponse": {
"$schema": "http://json-schema.org/draft-07/schema#",
"properties": {
"token": {
"description": "Opaque client attestation token.",
"type": "string"
}
},
"required": [
"token"
],
"title": "AttestationGenerateResponse",
"type": "object"
},
"ChatgptAuthTokensRefreshParams": {
"$schema": "http://json-schema.org/draft-07/schema#",
"properties": {
@@ -2620,6 +2639,11 @@
"array",
"null"
]
},
"requestAttestation": {
"default": false,
"description": "Opt into `attestation/generate` requests for upstream `x-oai-attestation`.",
"type": "boolean"
}
},
"type": "object"
@@ -5207,6 +5231,31 @@
"title": "Account/chatgptAuthTokens/refreshRequest",
"type": "object"
},
{
"description": "Generate a fresh upstream attestation result on demand.",
"properties": {
"id": {
"$ref": "#/definitions/v2/RequestId"
},
"method": {
"enum": [
"attestation/generate"
],
"title": "Attestation/generateRequestMethod",
"type": "string"
},
"params": {
"$ref": "#/definitions/AttestationGenerateParams"
}
},
"required": [
"id",
"method",
"params"
],
"title": "Attestation/generateRequest",
"type": "object"
},
{
"description": "DEPRECATED APIs below Request to approve a patch. This request is used for Turns started via the legacy APIs (i.e. SendUserTurn, SendUserMessage).",
"properties": {
@@ -12221,10 +12270,20 @@
"null"
]
},
"discoverability": {
"anyOf": [
{
"$ref": "#/definitions/v2/PluginShareDiscoverability"
},
{
"type": "null"
}
]
},
"remotePluginId": {
"type": "string"
},
"shareTargets": {
"sharePrincipals": {
"items": {
"$ref": "#/definitions/v2/PluginSharePrincipal"
},
@@ -12285,14 +12344,10 @@
},
"plugin": {
"$ref": "#/definitions/v2/PluginSummary"
},
"shareUrl": {
"type": "string"
}
},
"required": [
"plugin",
"shareUrl"
"plugin"
],
"type": "object"
},
@@ -12327,15 +12382,27 @@
},
"principalType": {
"$ref": "#/definitions/v2/PluginSharePrincipalType"
},
"role": {
"$ref": "#/definitions/v2/PluginSharePrincipalRole"
}
},
"required": [
"name",
"principalId",
"principalType"
"principalType",
"role"
],
"type": "object"
},
"PluginSharePrincipalRole": {
"enum": [
"reader",
"editor",
"owner"
],
"type": "string"
},
"PluginSharePrincipalType": {
"enum": [
"user",
@@ -12406,17 +12473,38 @@
},
"principalType": {
"$ref": "#/definitions/v2/PluginSharePrincipalType"
},
"role": {
"$ref": "#/definitions/v2/PluginShareTargetRole"
}
},
"required": [
"principalId",
"principalType"
"principalType",
"role"
],
"type": "object"
},
"PluginShareTargetRole": {
"enum": [
"reader",
"editor"
],
"type": "string"
},
"PluginShareUpdateDiscoverability": {
"enum": [
"UNLISTED",
"PRIVATE"
],
"type": "string"
},
"PluginShareUpdateTargetsParams": {
"$schema": "http://json-schema.org/draft-07/schema#",
"properties": {
"discoverability": {
"$ref": "#/definitions/v2/PluginShareUpdateDiscoverability"
},
"remotePluginId": {
"type": "string"
},
@@ -12428,6 +12516,7 @@
}
},
"required": [
"discoverability",
"remotePluginId",
"shareTargets"
],
@@ -12437,6 +12526,9 @@
"PluginShareUpdateTargetsResponse": {
"$schema": "http://json-schema.org/draft-07/schema#",
"properties": {
"discoverability": {
"$ref": "#/definitions/v2/PluginShareDiscoverability"
},
"principals": {
"items": {
"$ref": "#/definitions/v2/PluginSharePrincipal"
@@ -12445,6 +12537,7 @@
}
},
"required": [
"discoverability",
"principals"
],
"title": "PluginShareUpdateTargetsResponse",
@@ -13291,7 +13384,7 @@
},
"RemoteControlStatusChangedNotification": {
"$schema": "http://json-schema.org/draft-07/schema#",
"description": "Current remote-control connection status and environment id exposed to clients.",
"description": "Current remote-control connection status and remote identity exposed to clients.",
"properties": {
"environmentId": {
"type": [
@@ -13299,11 +13392,15 @@
"null"
]
},
"installationId": {
"type": "string"
},
"status": {
"$ref": "#/definitions/v2/RemoteControlConnectionStatus"
}
},
"required": [
"installationId",
"status"
],
"title": "RemoteControlStatusChangedNotification",
@@ -18396,4 +18493,4 @@
},
"title": "CodexAppServerProtocol",
"type": "object"
}
}

View File

@@ -6409,6 +6409,11 @@
"array",
"null"
]
},
"requestAttestation": {
"default": false,
"description": "Opt into `attestation/generate` requests for upstream `x-oai-attestation`.",
"type": "boolean"
}
},
"type": "object"
@@ -8814,10 +8819,20 @@
"null"
]
},
"discoverability": {
"anyOf": [
{
"$ref": "#/definitions/PluginShareDiscoverability"
},
{
"type": "null"
}
]
},
"remotePluginId": {
"type": "string"
},
"shareTargets": {
"sharePrincipals": {
"items": {
"$ref": "#/definitions/PluginSharePrincipal"
},
@@ -8878,14 +8893,10 @@
},
"plugin": {
"$ref": "#/definitions/PluginSummary"
},
"shareUrl": {
"type": "string"
}
},
"required": [
"plugin",
"shareUrl"
"plugin"
],
"type": "object"
},
@@ -8920,15 +8931,27 @@
},
"principalType": {
"$ref": "#/definitions/PluginSharePrincipalType"
},
"role": {
"$ref": "#/definitions/PluginSharePrincipalRole"
}
},
"required": [
"name",
"principalId",
"principalType"
"principalType",
"role"
],
"type": "object"
},
"PluginSharePrincipalRole": {
"enum": [
"reader",
"editor",
"owner"
],
"type": "string"
},
"PluginSharePrincipalType": {
"enum": [
"user",
@@ -8999,17 +9022,38 @@
},
"principalType": {
"$ref": "#/definitions/PluginSharePrincipalType"
},
"role": {
"$ref": "#/definitions/PluginShareTargetRole"
}
},
"required": [
"principalId",
"principalType"
"principalType",
"role"
],
"type": "object"
},
"PluginShareTargetRole": {
"enum": [
"reader",
"editor"
],
"type": "string"
},
"PluginShareUpdateDiscoverability": {
"enum": [
"UNLISTED",
"PRIVATE"
],
"type": "string"
},
"PluginShareUpdateTargetsParams": {
"$schema": "http://json-schema.org/draft-07/schema#",
"properties": {
"discoverability": {
"$ref": "#/definitions/PluginShareUpdateDiscoverability"
},
"remotePluginId": {
"type": "string"
},
@@ -9021,6 +9065,7 @@
}
},
"required": [
"discoverability",
"remotePluginId",
"shareTargets"
],
@@ -9030,6 +9075,9 @@
"PluginShareUpdateTargetsResponse": {
"$schema": "http://json-schema.org/draft-07/schema#",
"properties": {
"discoverability": {
"$ref": "#/definitions/PluginShareDiscoverability"
},
"principals": {
"items": {
"$ref": "#/definitions/PluginSharePrincipal"
@@ -9038,6 +9086,7 @@
}
},
"required": [
"discoverability",
"principals"
],
"title": "PluginShareUpdateTargetsResponse",
@@ -9884,7 +9933,7 @@
},
"RemoteControlStatusChangedNotification": {
"$schema": "http://json-schema.org/draft-07/schema#",
"description": "Current remote-control connection status and environment id exposed to clients.",
"description": "Current remote-control connection status and remote identity exposed to clients.",
"properties": {
"environmentId": {
"type": [
@@ -9892,11 +9941,15 @@
"null"
]
},
"installationId": {
"type": "string"
},
"status": {
"$ref": "#/definitions/RemoteControlConnectionStatus"
}
},
"required": [
"installationId",
"status"
],
"title": "RemoteControlStatusChangedNotification",
@@ -16263,4 +16316,4 @@
},
"title": "CodexAppServerProtocolV2",
"type": "object"
}
}

View File

@@ -39,6 +39,11 @@
"array",
"null"
]
},
"requestAttestation": {
"default": false,
"description": "Opt into `attestation/generate` requests for upstream `x-oai-attestation`.",
"type": "boolean"
}
},
"type": "object"

View File

@@ -246,10 +246,20 @@
"null"
]
},
"discoverability": {
"anyOf": [
{
"$ref": "#/definitions/PluginShareDiscoverability"
},
{
"type": "null"
}
]
},
"remotePluginId": {
"type": "string"
},
"shareTargets": {
"sharePrincipals": {
"items": {
"$ref": "#/definitions/PluginSharePrincipal"
},
@@ -270,6 +280,14 @@
],
"type": "object"
},
"PluginShareDiscoverability": {
"enum": [
"LISTED",
"UNLISTED",
"PRIVATE"
],
"type": "string"
},
"PluginSharePrincipal": {
"properties": {
"name": {
@@ -280,15 +298,27 @@
},
"principalType": {
"$ref": "#/definitions/PluginSharePrincipalType"
},
"role": {
"$ref": "#/definitions/PluginSharePrincipalRole"
}
},
"required": [
"name",
"principalId",
"principalType"
"principalType",
"role"
],
"type": "object"
},
"PluginSharePrincipalRole": {
"enum": [
"reader",
"editor",
"owner"
],
"type": "string"
},
"PluginSharePrincipalType": {
"enum": [
"user",

View File

@@ -300,10 +300,20 @@
"null"
]
},
"discoverability": {
"anyOf": [
{
"$ref": "#/definitions/PluginShareDiscoverability"
},
{
"type": "null"
}
]
},
"remotePluginId": {
"type": "string"
},
"shareTargets": {
"sharePrincipals": {
"items": {
"$ref": "#/definitions/PluginSharePrincipal"
},
@@ -324,6 +334,14 @@
],
"type": "object"
},
"PluginShareDiscoverability": {
"enum": [
"LISTED",
"UNLISTED",
"PRIVATE"
],
"type": "string"
},
"PluginSharePrincipal": {
"properties": {
"name": {
@@ -334,15 +352,27 @@
},
"principalType": {
"$ref": "#/definitions/PluginSharePrincipalType"
},
"role": {
"$ref": "#/definitions/PluginSharePrincipalRole"
}
},
"required": [
"name",
"principalId",
"principalType"
"principalType",
"role"
],
"type": "object"
},
"PluginSharePrincipalRole": {
"enum": [
"reader",
"editor",
"owner"
],
"type": "string"
},
"PluginSharePrincipalType": {
"enum": [
"user",

View File

@@ -181,10 +181,20 @@
"null"
]
},
"discoverability": {
"anyOf": [
{
"$ref": "#/definitions/PluginShareDiscoverability"
},
{
"type": "null"
}
]
},
"remotePluginId": {
"type": "string"
},
"shareTargets": {
"sharePrincipals": {
"items": {
"$ref": "#/definitions/PluginSharePrincipal"
},
@@ -205,6 +215,14 @@
],
"type": "object"
},
"PluginShareDiscoverability": {
"enum": [
"LISTED",
"UNLISTED",
"PRIVATE"
],
"type": "string"
},
"PluginShareListItem": {
"properties": {
"localPluginPath": {
@@ -219,14 +237,10 @@
},
"plugin": {
"$ref": "#/definitions/PluginSummary"
},
"shareUrl": {
"type": "string"
}
},
"required": [
"plugin",
"shareUrl"
"plugin"
],
"type": "object"
},
@@ -240,15 +254,27 @@
},
"principalType": {
"$ref": "#/definitions/PluginSharePrincipalType"
},
"role": {
"$ref": "#/definitions/PluginSharePrincipalRole"
}
},
"required": [
"name",
"principalId",
"principalType"
"principalType",
"role"
],
"type": "object"
},
"PluginSharePrincipalRole": {
"enum": [
"reader",
"editor",
"owner"
],
"type": "string"
},
"PluginSharePrincipalType": {
"enum": [
"user",

View File

@@ -28,13 +28,24 @@
},
"principalType": {
"$ref": "#/definitions/PluginSharePrincipalType"
},
"role": {
"$ref": "#/definitions/PluginShareTargetRole"
}
},
"required": [
"principalId",
"principalType"
"principalType",
"role"
],
"type": "object"
},
"PluginShareTargetRole": {
"enum": [
"reader",
"editor"
],
"type": "string"
}
},
"properties": {

View File

@@ -16,16 +16,37 @@
},
"principalType": {
"$ref": "#/definitions/PluginSharePrincipalType"
},
"role": {
"$ref": "#/definitions/PluginShareTargetRole"
}
},
"required": [
"principalId",
"principalType"
"principalType",
"role"
],
"type": "object"
},
"PluginShareTargetRole": {
"enum": [
"reader",
"editor"
],
"type": "string"
},
"PluginShareUpdateDiscoverability": {
"enum": [
"UNLISTED",
"PRIVATE"
],
"type": "string"
}
},
"properties": {
"discoverability": {
"$ref": "#/definitions/PluginShareUpdateDiscoverability"
},
"remotePluginId": {
"type": "string"
},
@@ -37,6 +58,7 @@
}
},
"required": [
"discoverability",
"remotePluginId",
"shareTargets"
],

View File

@@ -1,6 +1,14 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"definitions": {
"PluginShareDiscoverability": {
"enum": [
"LISTED",
"UNLISTED",
"PRIVATE"
],
"type": "string"
},
"PluginSharePrincipal": {
"properties": {
"name": {
@@ -11,15 +19,27 @@
},
"principalType": {
"$ref": "#/definitions/PluginSharePrincipalType"
},
"role": {
"$ref": "#/definitions/PluginSharePrincipalRole"
}
},
"required": [
"name",
"principalId",
"principalType"
"principalType",
"role"
],
"type": "object"
},
"PluginSharePrincipalRole": {
"enum": [
"reader",
"editor",
"owner"
],
"type": "string"
},
"PluginSharePrincipalType": {
"enum": [
"user",
@@ -30,6 +50,9 @@
}
},
"properties": {
"discoverability": {
"$ref": "#/definitions/PluginShareDiscoverability"
},
"principals": {
"items": {
"$ref": "#/definitions/PluginSharePrincipal"
@@ -38,6 +61,7 @@
}
},
"required": [
"discoverability",
"principals"
],
"title": "PluginShareUpdateTargetsResponse",

View File

@@ -11,7 +11,7 @@
"type": "string"
}
},
"description": "Current remote-control connection status and environment id exposed to clients.",
"description": "Current remote-control connection status and remote identity exposed to clients.",
"properties": {
"environmentId": {
"type": [
@@ -19,11 +19,15 @@
"null"
]
},
"installationId": {
"type": "string"
},
"status": {
"$ref": "#/definitions/RemoteControlConnectionStatus"
}
},
"required": [
"installationId",
"status"
],
"title": "RemoteControlStatusChangedNotification",

View File

@@ -10,6 +10,10 @@ export type InitializeCapabilities = {
* Opt into receiving experimental API methods and fields.
*/
experimentalApi: boolean,
/**
* Opt into `attestation/generate` requests for upstream `x-oai-attestation`.
*/
requestAttestation: boolean,
/**
* Exact notification method names that should be suppressed for this
* connection (for example `thread/started`).

View File

@@ -4,6 +4,7 @@
import type { ApplyPatchApprovalParams } from "./ApplyPatchApprovalParams";
import type { ExecCommandApprovalParams } from "./ExecCommandApprovalParams";
import type { RequestId } from "./RequestId";
import type { AttestationGenerateParams } from "./v2/AttestationGenerateParams";
import type { ChatgptAuthTokensRefreshParams } from "./v2/ChatgptAuthTokensRefreshParams";
import type { CommandExecutionRequestApprovalParams } from "./v2/CommandExecutionRequestApprovalParams";
import type { DynamicToolCallParams } from "./v2/DynamicToolCallParams";
@@ -15,4 +16,4 @@ import type { ToolRequestUserInputParams } from "./v2/ToolRequestUserInputParams
/**
* Request initiated from the server and sent to the client.
*/
export type ServerRequest = { "method": "item/commandExecution/requestApproval", id: RequestId, params: CommandExecutionRequestApprovalParams, } | { "method": "item/fileChange/requestApproval", id: RequestId, params: FileChangeRequestApprovalParams, } | { "method": "item/tool/requestUserInput", id: RequestId, params: ToolRequestUserInputParams, } | { "method": "mcpServer/elicitation/request", id: RequestId, params: McpServerElicitationRequestParams, } | { "method": "item/permissions/requestApproval", id: RequestId, params: PermissionsRequestApprovalParams, } | { "method": "item/tool/call", id: RequestId, params: DynamicToolCallParams, } | { "method": "account/chatgptAuthTokens/refresh", id: RequestId, params: ChatgptAuthTokensRefreshParams, } | { "method": "applyPatchApproval", id: RequestId, params: ApplyPatchApprovalParams, } | { "method": "execCommandApproval", id: RequestId, params: ExecCommandApprovalParams, };
export type ServerRequest = { "method": "item/commandExecution/requestApproval", id: RequestId, params: CommandExecutionRequestApprovalParams, } | { "method": "item/fileChange/requestApproval", id: RequestId, params: FileChangeRequestApprovalParams, } | { "method": "item/tool/requestUserInput", id: RequestId, params: ToolRequestUserInputParams, } | { "method": "mcpServer/elicitation/request", id: RequestId, params: McpServerElicitationRequestParams, } | { "method": "item/permissions/requestApproval", id: RequestId, params: PermissionsRequestApprovalParams, } | { "method": "item/tool/call", id: RequestId, params: DynamicToolCallParams, } | { "method": "account/chatgptAuthTokens/refresh", id: RequestId, params: ChatgptAuthTokensRefreshParams, } | { "method": "attestation/generate", id: RequestId, params: AttestationGenerateParams, } | { "method": "applyPatchApproval", id: RequestId, params: ApplyPatchApprovalParams, } | { "method": "execCommandApproval", id: RequestId, params: ExecCommandApprovalParams, };

View File

@@ -0,0 +1,5 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
export type AttestationGenerateParams = Record<string, never>;

View File

@@ -0,0 +1,9 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
export type AttestationGenerateResponse = {
/**
* Opaque client attestation token.
*/
token: string, };

View File

@@ -1,6 +1,7 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
import type { PluginShareDiscoverability } from "./PluginShareDiscoverability";
import type { PluginSharePrincipal } from "./PluginSharePrincipal";
export type PluginShareContext = { remotePluginId: string, shareUrl: string | null, creatorAccountUserId: string | null, creatorName: string | null, shareTargets: Array<PluginSharePrincipal> | null, };
export type PluginShareContext = { remotePluginId: string, discoverability: PluginShareDiscoverability | null, shareUrl: string | null, creatorAccountUserId: string | null, creatorName: string | null, sharePrincipals: Array<PluginSharePrincipal> | null, };

View File

@@ -4,4 +4,4 @@
import type { AbsolutePathBuf } from "../AbsolutePathBuf";
import type { PluginSummary } from "./PluginSummary";
export type PluginShareListItem = { plugin: PluginSummary, shareUrl: string, localPluginPath: AbsolutePathBuf | null, };
export type PluginShareListItem = { plugin: PluginSummary, localPluginPath: AbsolutePathBuf | null, };

View File

@@ -1,6 +1,7 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
import type { PluginSharePrincipalRole } from "./PluginSharePrincipalRole";
import type { PluginSharePrincipalType } from "./PluginSharePrincipalType";
export type PluginSharePrincipal = { principalType: PluginSharePrincipalType, principalId: string, name: string, };
export type PluginSharePrincipal = { principalType: PluginSharePrincipalType, principalId: string, role: PluginSharePrincipalRole, name: string, };

View File

@@ -0,0 +1,5 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
export type PluginSharePrincipalRole = "reader" | "editor" | "owner";

View File

@@ -2,5 +2,6 @@
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
import type { PluginSharePrincipalType } from "./PluginSharePrincipalType";
import type { PluginShareTargetRole } from "./PluginShareTargetRole";
export type PluginShareTarget = { principalType: PluginSharePrincipalType, principalId: string, };
export type PluginShareTarget = { principalType: PluginSharePrincipalType, principalId: string, role: PluginShareTargetRole, };

View File

@@ -0,0 +1,5 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
export type PluginShareTargetRole = "reader" | "editor";

View File

@@ -0,0 +1,5 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
export type PluginShareUpdateDiscoverability = "UNLISTED" | "PRIVATE";

View File

@@ -2,5 +2,6 @@
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
import type { PluginShareTarget } from "./PluginShareTarget";
import type { PluginShareUpdateDiscoverability } from "./PluginShareUpdateDiscoverability";
export type PluginShareUpdateTargetsParams = { remotePluginId: string, shareTargets: Array<PluginShareTarget>, };
export type PluginShareUpdateTargetsParams = { remotePluginId: string, discoverability: PluginShareUpdateDiscoverability, shareTargets: Array<PluginShareTarget>, };

View File

@@ -1,6 +1,7 @@
// GENERATED CODE! DO NOT MODIFY BY HAND!
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
import type { PluginShareDiscoverability } from "./PluginShareDiscoverability";
import type { PluginSharePrincipal } from "./PluginSharePrincipal";
export type PluginShareUpdateTargetsResponse = { principals: Array<PluginSharePrincipal>, };
export type PluginShareUpdateTargetsResponse = { principals: Array<PluginSharePrincipal>, discoverability: PluginShareDiscoverability, };

View File

@@ -4,6 +4,6 @@
import type { RemoteControlConnectionStatus } from "./RemoteControlConnectionStatus";
/**
* Current remote-control connection status and environment id exposed to clients.
* Current remote-control connection status and remote identity exposed to clients.
*/
export type RemoteControlStatusChangedNotification = { status: RemoteControlConnectionStatus, environmentId: string | null, };
export type RemoteControlStatusChangedNotification = { status: RemoteControlConnectionStatus, installationId: string, environmentId: string | null, };

View File

@@ -28,6 +28,8 @@ export type { AppsDefaultConfig } from "./AppsDefaultConfig";
export type { AppsListParams } from "./AppsListParams";
export type { AppsListResponse } from "./AppsListResponse";
export type { AskForApproval } from "./AskForApproval";
export type { AttestationGenerateParams } from "./AttestationGenerateParams";
export type { AttestationGenerateResponse } from "./AttestationGenerateResponse";
export type { AutoReviewDecisionSource } from "./AutoReviewDecisionSource";
export type { ByteRange } from "./ByteRange";
export type { CancelLoginAccountParams } from "./CancelLoginAccountParams";
@@ -283,10 +285,13 @@ export type { PluginShareListItem } from "./PluginShareListItem";
export type { PluginShareListParams } from "./PluginShareListParams";
export type { PluginShareListResponse } from "./PluginShareListResponse";
export type { PluginSharePrincipal } from "./PluginSharePrincipal";
export type { PluginSharePrincipalRole } from "./PluginSharePrincipalRole";
export type { PluginSharePrincipalType } from "./PluginSharePrincipalType";
export type { PluginShareSaveParams } from "./PluginShareSaveParams";
export type { PluginShareSaveResponse } from "./PluginShareSaveResponse";
export type { PluginShareTarget } from "./PluginShareTarget";
export type { PluginShareTargetRole } from "./PluginShareTargetRole";
export type { PluginShareUpdateDiscoverability } from "./PluginShareUpdateDiscoverability";
export type { PluginShareUpdateTargetsParams } from "./PluginShareUpdateTargetsParams";
export type { PluginShareUpdateTargetsResponse } from "./PluginShareUpdateTargetsResponse";
export type { PluginSkillReadParams } from "./PluginSkillReadParams";

View File

@@ -808,6 +808,13 @@ client_request_definitions! {
serialization: None,
response: v2::MockExperimentalMethodResponse,
},
#[experimental("environment/add")]
/// Adds or replaces a remote environment by id for later selection.
EnvironmentAdd => "environment/add" {
params: v2::EnvironmentAddParams,
serialization: global("environment"),
response: v2::EnvironmentAddResponse,
},
McpServerOauthLogin => "mcpServer/oauth/login" {
params: v2::McpServerOauthLoginParams,
@@ -1312,6 +1319,12 @@ server_request_definitions! {
response: v2::ChatgptAuthTokensRefreshResponse,
},
/// Generate a fresh upstream attestation result on demand.
AttestationGenerate => "attestation/generate" {
params: v2::AttestationGenerateParams,
response: v2::AttestationGenerateResponse,
},
/// DEPRECATED APIs below
/// Request to approve a patch.
/// This request is used for Turns started via the legacy APIs (i.e. SendUserTurn, SendUserMessage).
@@ -1790,6 +1803,18 @@ mod tests {
add_credits_nudge.serialization_scope(),
Some(ClientRequestSerializationScope::Global("account-auth"))
);
let environment_add = ClientRequest::EnvironmentAdd {
request_id: request_id(),
params: v2::EnvironmentAddParams {
environment_id: "remote-a".to_string(),
exec_server_url: "ws://127.0.0.1:8765".to_string(),
},
};
assert_eq!(
environment_add.serialization_scope(),
Some(ClientRequestSerializationScope::Global("environment"))
);
}
#[test]
@@ -1910,6 +1935,7 @@ mod tests {
},
capabilities: Some(v1::InitializeCapabilities {
experimental_api: true,
request_attestation: true,
opt_out_notification_methods: Some(vec![
"thread/started".to_string(),
"item/agentMessage/delta".to_string(),
@@ -1930,6 +1956,7 @@ mod tests {
},
"capabilities": {
"experimentalApi": true,
"requestAttestation": true,
"optOutNotificationMethods": [
"thread/started",
"item/agentMessage/delta"
@@ -1955,6 +1982,7 @@ mod tests {
},
"capabilities": {
"experimentalApi": true,
"requestAttestation": true,
"optOutNotificationMethods": [
"thread/started",
"item/agentMessage/delta"
@@ -1975,6 +2003,7 @@ mod tests {
},
capabilities: Some(v1::InitializeCapabilities {
experimental_api: true,
request_attestation: true,
opt_out_notification_methods: Some(vec![
"thread/started".to_string(),
"item/agentMessage/delta".to_string(),
@@ -2091,6 +2120,28 @@ mod tests {
Ok(())
}
#[test]
fn serialize_attestation_generate_request() -> Result<()> {
let params = v2::AttestationGenerateParams {};
let request = ServerRequest::AttestationGenerate {
request_id: RequestId::Integer(9),
params: params.clone(),
};
assert_eq!(
json!({
"method": "attestation/generate",
"id": 9,
"params": {}
}),
serde_json::to_value(&request)?,
);
let payload = ServerRequestPayload::AttestationGenerate(params);
assert_eq!(request.id(), &RequestId::Integer(9));
assert_eq!(payload.request_with_id(RequestId::Integer(9)), request);
Ok(())
}
#[test]
fn serialize_server_response() -> Result<()> {
let response = ServerResponse::CommandExecutionRequestApproval {
@@ -2546,10 +2597,33 @@ mod tests {
Ok(())
}
#[test]
fn serialize_environment_add() -> Result<()> {
let request = ClientRequest::EnvironmentAdd {
request_id: RequestId::Integer(9),
params: v2::EnvironmentAddParams {
environment_id: "remote-a".to_string(),
exec_server_url: "ws://127.0.0.1:8765".to_string(),
},
};
assert_eq!(
json!({
"method": "environment/add",
"id": 9,
"params": {
"environmentId": "remote-a",
"execServerUrl": "ws://127.0.0.1:8765"
}
}),
serde_json::to_value(&request)?,
);
Ok(())
}
#[test]
fn serialize_fs_get_metadata() -> Result<()> {
let request = ClientRequest::FsGetMetadata {
request_id: RequestId::Integer(9),
request_id: RequestId::Integer(10),
params: v2::FsGetMetadataParams {
path: absolute_path("tmp/example"),
},
@@ -2557,7 +2631,7 @@ mod tests {
assert_eq!(
json!({
"method": "fs/getMetadata",
"id": 9,
"id": 10,
"params": {
"path": absolute_path_string("tmp/example")
}
@@ -2818,6 +2892,19 @@ mod tests {
assert_eq!(reason, Some("mock/experimentalMethod"));
}
#[test]
fn environment_add_is_marked_experimental() {
let request = ClientRequest::EnvironmentAdd {
request_id: RequestId::Integer(1),
params: v2::EnvironmentAddParams {
environment_id: "remote-a".to_string(),
exec_server_url: "ws://127.0.0.1:8765".to_string(),
},
};
let reason = crate::experimental_api::ExperimentalApi::experimental_reason(&request);
assert_eq!(reason, Some("environment/add"));
}
#[test]
fn command_exec_permission_profile_is_marked_experimental() {
let request = ClientRequest::OneOffCommandExec {

View File

@@ -46,6 +46,9 @@ pub struct InitializeCapabilities {
/// Opt into receiving experimental API methods and fields.
#[serde(default)]
pub experimental_api: bool,
/// Opt into `attestation/generate` requests for upstream `x-oai-attestation`.
#[serde(default)]
pub request_attestation: bool,
/// Exact notification method names that should be suppressed for this
/// connection (for example `thread/started`).
#[ts(optional = nullable)]

View File

@@ -0,0 +1,17 @@
use schemars::JsonSchema;
use serde::Deserialize;
use serde::Serialize;
use ts_rs::TS;
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS, Default)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct AttestationGenerateParams {}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct AttestationGenerateResponse {
/// Opaque client attestation token.
pub token: String,
}

View File

@@ -0,0 +1,17 @@
use schemars::JsonSchema;
use serde::Deserialize;
use serde::Serialize;
use ts_rs::TS;
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct EnvironmentAddParams {
pub environment_id: String,
pub exec_server_url: String,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct EnvironmentAddResponse {}

View File

@@ -2,9 +2,11 @@ mod shared;
mod account;
mod apps;
mod attestation;
mod collaboration_mode;
mod command_exec;
mod config;
mod environment;
mod experimental_feature;
mod feedback;
mod fs;
@@ -26,9 +28,11 @@ mod windows_sandbox;
pub use account::*;
pub use apps::*;
pub use attestation::*;
pub use collaboration_mode::*;
pub use command_exec::*;
pub use config::*;
pub use environment::*;
pub use experimental_feature::*;
pub use feedback::*;
pub use fs::*;

View File

@@ -218,6 +218,7 @@ pub struct PluginShareSaveResponse {
#[ts(export_to = "v2/")]
pub struct PluginShareUpdateTargetsParams {
pub remote_plugin_id: String,
pub discoverability: PluginShareUpdateDiscoverability,
pub share_targets: Vec<PluginShareTarget>,
}
@@ -226,6 +227,7 @@ pub struct PluginShareUpdateTargetsParams {
#[ts(export_to = "v2/")]
pub struct PluginShareUpdateTargetsResponse {
pub principals: Vec<PluginSharePrincipal>,
pub discoverability: PluginShareDiscoverability,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
@@ -257,7 +259,6 @@ pub struct PluginShareDeleteResponse {}
#[ts(export_to = "v2/")]
pub struct PluginShareListItem {
pub plugin: PluginSummary,
pub share_url: String,
pub local_plugin_path: Option<AbsolutePathBuf>,
}
@@ -275,6 +276,17 @@ pub enum PluginShareDiscoverability {
Private,
}
#[derive(Serialize, Deserialize, Debug, Clone, Copy, PartialEq, Eq, JsonSchema, TS)]
#[ts(export_to = "v2/")]
pub enum PluginShareUpdateDiscoverability {
#[serde(rename = "UNLISTED")]
#[ts(rename = "UNLISTED")]
Unlisted,
#[serde(rename = "PRIVATE")]
#[ts(rename = "PRIVATE")]
Private,
}
#[derive(Serialize, Deserialize, Debug, Clone, Copy, PartialEq, Eq, JsonSchema, TS)]
#[ts(export_to = "v2/")]
pub enum PluginSharePrincipalType {
@@ -295,6 +307,7 @@ pub enum PluginSharePrincipalType {
pub struct PluginShareTarget {
pub principal_type: PluginSharePrincipalType,
pub principal_id: String,
pub role: PluginShareTargetRole,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq, JsonSchema, TS)]
@@ -303,9 +316,29 @@ pub struct PluginShareTarget {
pub struct PluginSharePrincipal {
pub principal_type: PluginSharePrincipalType,
pub principal_id: String,
pub role: PluginSharePrincipalRole,
pub name: String,
}
#[derive(Serialize, Deserialize, Debug, Clone, Copy, PartialEq, Eq, JsonSchema, TS)]
#[serde(rename_all = "lowercase")]
#[ts(rename_all = "lowercase")]
#[ts(export_to = "v2/")]
pub enum PluginShareTargetRole {
Reader,
Editor,
}
#[derive(Serialize, Deserialize, Debug, Clone, Copy, PartialEq, Eq, JsonSchema, TS)]
#[serde(rename_all = "lowercase")]
#[ts(rename_all = "lowercase")]
#[ts(export_to = "v2/")]
pub enum PluginSharePrincipalRole {
Reader,
Editor,
Owner,
}
#[derive(Serialize, Deserialize, Debug, Clone, Copy, PartialEq, Eq, JsonSchema, TS)]
#[serde(rename_all = "snake_case")]
#[ts(rename_all = "snake_case")]
@@ -526,10 +559,11 @@ pub struct PluginSummary {
#[ts(export_to = "v2/")]
pub struct PluginShareContext {
pub remote_plugin_id: String,
pub discoverability: Option<PluginShareDiscoverability>,
pub share_url: Option<String>,
pub creator_account_user_id: Option<String>,
pub creator_name: Option<String>,
pub share_targets: Option<Vec<PluginSharePrincipal>>,
pub share_principals: Option<Vec<PluginSharePrincipal>>,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]

View File

@@ -3,12 +3,13 @@ use serde::Deserialize;
use serde::Serialize;
use ts_rs::TS;
/// Current remote-control connection status and environment id exposed to clients.
/// Current remote-control connection status and remote identity exposed to clients.
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct RemoteControlStatusChangedNotification {
pub status: RemoteControlConnectionStatus,
pub installation_id: String,
pub environment_id: Option<String>,
}

View File

@@ -2896,10 +2896,12 @@ fn plugin_share_params_and_response_serialization_use_camel_case_fields() {
PluginShareTarget {
principal_type: PluginSharePrincipalType::User,
principal_id: "user-1".to_string(),
role: PluginShareTargetRole::Reader,
},
PluginShareTarget {
principal_type: PluginSharePrincipalType::Workspace,
principal_id: "workspace-1".to_string(),
principal_type: PluginSharePrincipalType::Group,
principal_id: "group-1".to_string(),
role: PluginShareTargetRole::Reader,
},
]),
})
@@ -2912,10 +2914,12 @@ fn plugin_share_params_and_response_serialization_use_camel_case_fields() {
{
"principalType": "user",
"principalId": "user-1",
"role": "reader",
},
{
"principalType": "workspace",
"principalId": "workspace-1",
"principalType": "group",
"principalId": "group-1",
"role": "reader",
},
],
}),
@@ -2936,17 +2940,21 @@ fn plugin_share_params_and_response_serialization_use_camel_case_fields() {
assert_eq!(
serde_json::to_value(PluginShareUpdateTargetsParams {
remote_plugin_id: "plugins~Plugin_00000000000000000000000000000000".to_string(),
discoverability: PluginShareUpdateDiscoverability::Unlisted,
share_targets: vec![PluginShareTarget {
principal_type: PluginSharePrincipalType::Group,
principal_id: "group-1".to_string(),
role: PluginShareTargetRole::Editor,
}],
})
.unwrap(),
json!({
"remotePluginId": "plugins~Plugin_00000000000000000000000000000000",
"discoverability": "UNLISTED",
"shareTargets": [{
"principalType": "group",
"principalId": "group-1",
"role": "editor",
}],
}),
);
@@ -2956,16 +2964,20 @@ fn plugin_share_params_and_response_serialization_use_camel_case_fields() {
principals: vec![PluginSharePrincipal {
principal_type: PluginSharePrincipalType::User,
principal_id: "user-1".to_string(),
role: PluginSharePrincipalRole::Owner,
name: "Gavin".to_string(),
}],
discoverability: PluginShareDiscoverability::Unlisted,
})
.unwrap(),
json!({
"principals": [{
"principalType": "user",
"principalId": "user-1",
"role": "owner",
"name": "Gavin",
}],
"discoverability": "UNLISTED",
}),
);
@@ -3003,7 +3015,6 @@ fn plugin_share_list_response_serializes_share_items() {
interface: None,
keywords: Vec::new(),
},
share_url: "https://chatgpt.example/plugins/share/share-key-1".to_string(),
local_plugin_path: None,
}],
})
@@ -3023,7 +3034,6 @@ fn plugin_share_list_response_serializes_share_items() {
"interface": null,
"keywords": [],
},
"shareUrl": "https://chatgpt.example/plugins/share/share-key-1",
"localPluginPath": null,
}],
}),

View File

@@ -1551,6 +1551,7 @@ impl CodexClient {
},
capabilities: Some(InitializeCapabilities {
experimental_api,
request_attestation: false,
opt_out_notification_methods: Some(
NOTIFICATIONS_TO_OPT_OUT
.iter()

View File

@@ -13,15 +13,7 @@ doctest = false
workspace = true
[dependencies]
anyhow = { workspace = true }
axum = { workspace = true, default-features = false, features = [
"http1",
"json",
"tokio",
"ws",
] }
base64 = { workspace = true }
clap = { workspace = true, features = ["derive"] }
codex-api = { workspace = true }
codex-app-server-protocol = { workspace = true }
codex-core = { workspace = true }
@@ -31,16 +23,10 @@ codex-state = { workspace = true }
codex-uds = { workspace = true }
codex-utils-absolute-path = { workspace = true }
codex-utils-rustls-provider = { workspace = true }
constant_time_eq = { workspace = true }
futures = { workspace = true }
gethostname = { workspace = true }
hmac = { workspace = true }
jsonwebtoken = { workspace = true }
owo-colors = { workspace = true, features = ["supports-colors"] }
serde = { workspace = true, features = ["derive"] }
serde_json = { workspace = true }
sha2 = { workspace = true }
time = { workspace = true }
tokio = { workspace = true, features = [
"io-std",
"macros",

View File

@@ -11,10 +11,9 @@ pub use transport::AppServerTransportParseError;
pub use transport::CHANNEL_CAPACITY;
pub use transport::ConnectionOrigin;
pub use transport::RemoteControlHandle;
pub use transport::RemoteControlStartConfig;
pub use transport::TransportEvent;
pub use transport::app_server_control_socket_path;
pub use transport::auth;
pub use transport::start_control_socket_acceptor;
pub use transport::start_remote_control;
pub use transport::start_stdio_connection;
pub use transport::start_websocket_acceptor;

View File

@@ -1,751 +0,0 @@
use anyhow::Context;
use axum::http::HeaderMap;
use axum::http::StatusCode;
use axum::http::header::AUTHORIZATION;
use clap::Args;
use clap::ValueEnum;
use codex_utils_absolute_path::AbsolutePathBuf;
use constant_time_eq::constant_time_eq_32;
use jsonwebtoken::Algorithm;
use jsonwebtoken::DecodingKey;
use jsonwebtoken::Validation;
use jsonwebtoken::decode;
use serde::Deserialize;
use sha2::Digest;
use sha2::Sha256;
use std::io;
use std::io::ErrorKind;
use std::net::SocketAddr;
use std::path::Path;
use std::path::PathBuf;
use time::OffsetDateTime;
const DEFAULT_MAX_CLOCK_SKEW_SECONDS: u64 = 30;
const MIN_SIGNED_BEARER_SECRET_BYTES: usize = 32;
const INVALID_AUTHORIZATION_HEADER_MESSAGE: &str = "invalid authorization header";
#[derive(Debug, Clone, Default, PartialEq, Eq, Args)]
pub struct AppServerWebsocketAuthArgs {
/// Websocket auth mode for non-loopback listeners.
#[arg(long = "ws-auth", value_name = "MODE", value_enum)]
pub ws_auth: Option<WebsocketAuthCliMode>,
/// Absolute path to the capability-token file.
#[arg(long = "ws-token-file", value_name = "PATH")]
pub ws_token_file: Option<PathBuf>,
/// Hex-encoded SHA-256 digest of the capability token.
#[arg(long = "ws-token-sha256", value_name = "HEX")]
pub ws_token_sha256: Option<String>,
/// Absolute path to the shared secret file for signed JWT bearer tokens.
#[arg(long = "ws-shared-secret-file", value_name = "PATH")]
pub ws_shared_secret_file: Option<PathBuf>,
/// Expected issuer for signed JWT bearer tokens.
#[arg(long = "ws-issuer", value_name = "ISSUER")]
pub ws_issuer: Option<String>,
/// Expected audience for signed JWT bearer tokens.
#[arg(long = "ws-audience", value_name = "AUDIENCE")]
pub ws_audience: Option<String>,
/// Maximum clock skew when validating signed JWT bearer tokens.
#[arg(long = "ws-max-clock-skew-seconds", value_name = "SECONDS")]
pub ws_max_clock_skew_seconds: Option<u64>,
}
#[derive(Debug, Clone, Copy, PartialEq, Eq, ValueEnum)]
pub enum WebsocketAuthCliMode {
CapabilityToken,
SignedBearerToken,
}
#[derive(Debug, Clone, Default, PartialEq, Eq)]
pub struct AppServerWebsocketAuthSettings {
pub config: Option<AppServerWebsocketAuthConfig>,
}
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum AppServerWebsocketAuthConfig {
CapabilityToken {
source: AppServerWebsocketCapabilityTokenSource,
},
SignedBearerToken {
shared_secret_file: AbsolutePathBuf,
issuer: Option<String>,
audience: Option<String>,
max_clock_skew_seconds: u64,
},
}
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum AppServerWebsocketCapabilityTokenSource {
TokenFile { token_file: AbsolutePathBuf },
TokenSha256 { token_sha256: [u8; 32] },
}
#[derive(Clone, Debug, Default)]
pub struct WebsocketAuthPolicy {
pub(crate) mode: Option<WebsocketAuthMode>,
}
#[derive(Clone, Debug)]
pub(crate) enum WebsocketAuthMode {
CapabilityToken {
token_sha256: [u8; 32],
},
SignedBearerToken {
shared_secret: Vec<u8>,
issuer: Option<String>,
audience: Option<String>,
max_clock_skew_seconds: i64,
},
}
#[derive(Debug)]
pub(crate) struct WebsocketAuthError {
status_code: StatusCode,
message: &'static str,
}
#[derive(Deserialize)]
struct JwtClaims {
exp: i64,
nbf: Option<i64>,
iss: Option<String>,
aud: Option<JwtAudienceClaim>,
}
#[derive(Deserialize)]
#[serde(untagged)]
enum JwtAudienceClaim {
Single(String),
Multiple(Vec<String>),
}
impl WebsocketAuthError {
pub(crate) fn status_code(&self) -> StatusCode {
self.status_code
}
pub(crate) fn message(&self) -> &'static str {
self.message
}
}
impl AppServerWebsocketAuthArgs {
pub fn try_into_settings(self) -> anyhow::Result<AppServerWebsocketAuthSettings> {
let normalize = |value: Option<String>| {
value.and_then(|value| {
let trimmed = value.trim();
(!trimmed.is_empty()).then(|| trimmed.to_string())
})
};
let config = match self.ws_auth {
Some(WebsocketAuthCliMode::CapabilityToken) => {
if self.ws_shared_secret_file.is_some()
|| self.ws_issuer.is_some()
|| self.ws_audience.is_some()
|| self.ws_max_clock_skew_seconds.is_some()
{
anyhow::bail!(
"`--ws-shared-secret-file`, `--ws-issuer`, `--ws-audience`, and `--ws-max-clock-skew-seconds` require `--ws-auth signed-bearer-token`"
);
}
let source = match (self.ws_token_file, self.ws_token_sha256) {
(Some(_), Some(_)) => {
anyhow::bail!(
"`--ws-token-file` and `--ws-token-sha256` are mutually exclusive"
);
}
(Some(token_file), None) => {
AppServerWebsocketCapabilityTokenSource::TokenFile {
token_file: absolute_path_arg("--ws-token-file", token_file)?,
}
}
(None, Some(token_sha256)) => {
AppServerWebsocketCapabilityTokenSource::TokenSha256 {
token_sha256: sha256_digest_arg("--ws-token-sha256", &token_sha256)?,
}
}
(None, None) => {
anyhow::bail!(
"`--ws-token-file` or `--ws-token-sha256` is required when `--ws-auth capability-token` is set"
);
}
};
Some(AppServerWebsocketAuthConfig::CapabilityToken { source })
}
Some(WebsocketAuthCliMode::SignedBearerToken) => {
if self.ws_token_file.is_some() || self.ws_token_sha256.is_some() {
anyhow::bail!(
"`--ws-token-file` and `--ws-token-sha256` require `--ws-auth capability-token`, not `signed-bearer-token`"
);
}
let shared_secret_file = self.ws_shared_secret_file.context(
"`--ws-shared-secret-file` is required when `--ws-auth signed-bearer-token` is set",
)?;
Some(AppServerWebsocketAuthConfig::SignedBearerToken {
shared_secret_file: absolute_path_arg(
"--ws-shared-secret-file",
shared_secret_file,
)?,
issuer: normalize(self.ws_issuer),
audience: normalize(self.ws_audience),
max_clock_skew_seconds: self
.ws_max_clock_skew_seconds
.unwrap_or(DEFAULT_MAX_CLOCK_SKEW_SECONDS),
})
}
None => {
if self.ws_token_file.is_some()
|| self.ws_token_sha256.is_some()
|| self.ws_shared_secret_file.is_some()
|| self.ws_issuer.is_some()
|| self.ws_audience.is_some()
|| self.ws_max_clock_skew_seconds.is_some()
{
anyhow::bail!(
"websocket auth flags require `--ws-auth capability-token` or `--ws-auth signed-bearer-token`"
);
}
None
}
};
Ok(AppServerWebsocketAuthSettings { config })
}
}
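/// Resolve parsed settings into a runtime [`WebsocketAuthPolicy`], reading any
/// token or shared-secret files from disk and hashing or validating them up
/// front.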
pub fn policy_from_settings(
settings: &AppServerWebsocketAuthSettings,
) -> io::Result<WebsocketAuthPolicy> {
let mode = match settings.config.as_ref() {
Some(AppServerWebsocketAuthConfig::CapabilityToken { source }) => match source {
AppServerWebsocketCapabilityTokenSource::TokenFile { token_file } => {
let token = read_trimmed_secret(token_file.as_ref())?;
Some(WebsocketAuthMode::CapabilityToken {
token_sha256: sha256_digest(token.as_bytes()),
})
}
AppServerWebsocketCapabilityTokenSource::TokenSha256 { token_sha256 } => {
Some(WebsocketAuthMode::CapabilityToken {
token_sha256: *token_sha256,
})
}
},
Some(AppServerWebsocketAuthConfig::SignedBearerToken {
shared_secret_file,
issuer,
audience,
max_clock_skew_seconds,
}) => {
let shared_secret = read_trimmed_secret(shared_secret_file.as_ref())?.into_bytes();
validate_signed_bearer_secret(shared_secret_file.as_ref(), &shared_secret)?;
let max_clock_skew_seconds = i64::try_from(*max_clock_skew_seconds).map_err(|_| {
io::Error::new(
ErrorKind::InvalidInput,
"websocket auth clock skew must fit in a signed 64-bit integer",
)
})?;
Some(WebsocketAuthMode::SignedBearerToken {
shared_secret,
issuer: issuer.clone(),
audience: audience.clone(),
max_clock_skew_seconds,
})
}
None => None,
};
Ok(WebsocketAuthPolicy { mode })
}
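/// A listener bound to a non-loopback address with no auth mode configured
/// warrants a startup warning.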
pub(crate) fn should_warn_about_unauthenticated_non_loopback_listener(
bind_address: SocketAddr,
policy: &WebsocketAuthPolicy,
) -> bool {
!bind_address.ip().is_loopback() && policy.mode.is_none()
}
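/// Authorize a websocket upgrade. With no configured mode every request is
/// accepted; otherwise the `Authorization: Bearer ...` header is required and
/// checked either by constant-time SHA-256 comparison (capability token) or by
/// HS256 JWT verification (signed bearer token).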
pub(crate) fn authorize_upgrade(
headers: &HeaderMap,
policy: &WebsocketAuthPolicy,
) -> Result<(), WebsocketAuthError> {
let Some(mode) = policy.mode.as_ref() else {
return Ok(());
};
let token = bearer_token_from_headers(headers)?;
match mode {
WebsocketAuthMode::CapabilityToken { token_sha256 } => {
let actual_sha256 = sha256_digest(token.as_bytes());
if constant_time_eq_32(token_sha256, &actual_sha256) {
Ok(())
} else {
Err(unauthorized("invalid websocket bearer token"))
}
}
WebsocketAuthMode::SignedBearerToken {
shared_secret,
issuer,
audience,
max_clock_skew_seconds,
} => verify_signed_bearer_token(
token,
shared_secret,
issuer.as_deref(),
audience.as_deref(),
*max_clock_skew_seconds,
),
}
}
fn verify_signed_bearer_token(
token: &str,
shared_secret: &[u8],
issuer: Option<&str>,
audience: Option<&str>,
max_clock_skew_seconds: i64,
) -> Result<(), WebsocketAuthError> {
let claims = decode_jwt_claims(token, shared_secret)?;
validate_jwt_claims(&claims, issuer, audience, max_clock_skew_seconds)
}
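/// Decode the JWT and check its signature with `jsonwebtoken` (HS256 only).
/// Built-in exp/nbf/aud validation is disabled so that `validate_jwt_claims`
/// can apply the configured clock skew and exact issuer/audience matching.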
fn decode_jwt_claims(token: &str, shared_secret: &[u8]) -> Result<JwtClaims, WebsocketAuthError> {
let mut validation = Validation::new(Algorithm::HS256);
validation.required_spec_claims.clear();
validation.validate_exp = false;
validation.validate_nbf = false;
validation.validate_aud = false;
decode::<JwtClaims>(token, &DecodingKey::from_secret(shared_secret), &validation)
.map(|token_data| token_data.claims)
.map_err(|_| unauthorized("invalid websocket jwt"))
}
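/// Check expiry and not-before within the configured clock skew, then require
/// an exact issuer match and membership of the expected audience when those
/// are configured.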
fn validate_jwt_claims(
claims: &JwtClaims,
issuer: Option<&str>,
audience: Option<&str>,
max_clock_skew_seconds: i64,
) -> Result<(), WebsocketAuthError> {
let now = OffsetDateTime::now_utc().unix_timestamp();
if now > claims.exp.saturating_add(max_clock_skew_seconds) {
return Err(unauthorized("expired websocket jwt"));
}
if let Some(nbf) = claims.nbf
&& now < nbf.saturating_sub(max_clock_skew_seconds)
{
return Err(unauthorized("websocket jwt is not valid yet"));
}
if let Some(expected_issuer) = issuer
&& claims.iss.as_deref() != Some(expected_issuer)
{
return Err(unauthorized("websocket jwt issuer mismatch"));
}
if let Some(expected_audience) = audience
&& !audience_matches(claims.aud.as_ref(), expected_audience)
{
return Err(unauthorized("websocket jwt audience mismatch"));
}
Ok(())
}
fn audience_matches(audience: Option<&JwtAudienceClaim>, expected_audience: &str) -> bool {
match audience {
Some(JwtAudienceClaim::Single(actual)) => actual == expected_audience,
Some(JwtAudienceClaim::Multiple(actual)) => {
actual.iter().any(|audience| audience == expected_audience)
}
None => false,
}
}
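/// Extract the token from an `Authorization: Bearer <token>` header; the
/// scheme is matched case-insensitively and the token is trimmed.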
fn bearer_token_from_headers(headers: &HeaderMap) -> Result<&str, WebsocketAuthError> {
let raw_header = headers
.get(AUTHORIZATION)
.ok_or_else(|| unauthorized("missing websocket bearer token"))?;
let header = raw_header
.to_str()
.map_err(|_| unauthorized(INVALID_AUTHORIZATION_HEADER_MESSAGE))?;
let Some((scheme, token)) = header.split_once(' ') else {
return Err(unauthorized(INVALID_AUTHORIZATION_HEADER_MESSAGE));
};
if !scheme.eq_ignore_ascii_case("Bearer") {
return Err(unauthorized(INVALID_AUTHORIZATION_HEADER_MESSAGE));
}
let token = token.trim();
if token.is_empty() {
return Err(unauthorized(INVALID_AUTHORIZATION_HEADER_MESSAGE));
}
Ok(token)
}
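/// Reject signed-bearer shared secrets shorter than
/// `MIN_SIGNED_BEARER_SECRET_BYTES`.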
fn validate_signed_bearer_secret(path: &Path, shared_secret: &[u8]) -> io::Result<()> {
if shared_secret.len() < MIN_SIGNED_BEARER_SECRET_BYTES {
return Err(io::Error::new(
ErrorKind::InvalidInput,
format!(
"signed websocket bearer secret {} must be at least {MIN_SIGNED_BEARER_SECRET_BYTES} bytes",
path.display()
),
));
}
Ok(())
}
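/// Read a secret file, trim surrounding whitespace, and reject empty contents.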
fn read_trimmed_secret(path: &std::path::Path) -> io::Result<String> {
let raw = std::fs::read_to_string(path).map_err(|err| {
io::Error::new(
err.kind(),
format!(
"failed to read websocket auth secret {}: {err}",
path.display()
),
)
})?;
let trimmed = raw.trim();
if trimmed.is_empty() {
return Err(io::Error::new(
ErrorKind::InvalidInput,
format!("websocket auth secret {} must not be empty", path.display()),
));
}
Ok(trimmed.to_string())
}
fn absolute_path_arg(flag_name: &str, path: PathBuf) -> anyhow::Result<AbsolutePathBuf> {
AbsolutePathBuf::try_from(path).with_context(|| format!("{flag_name} must be an absolute path"))
}
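/// Parse a 64-character hex string into a 32-byte SHA-256 digest.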
fn sha256_digest_arg(flag_name: &str, value: &str) -> anyhow::Result<[u8; 32]> {
let trimmed = value.trim();
if trimmed.len() != 64 {
anyhow::bail!("{flag_name} must be a 64-character hex SHA-256 digest");
}
let mut digest = [0u8; 32];
for (index, pair) in trimmed.as_bytes().chunks_exact(2).enumerate() {
let high = hex_nibble(flag_name, pair[0])?;
let low = hex_nibble(flag_name, pair[1])?;
digest[index] = (high << 4) | low;
}
Ok(digest)
}
fn hex_nibble(flag_name: &str, byte: u8) -> anyhow::Result<u8> {
match byte {
b'0'..=b'9' => Ok(byte - b'0'),
b'a'..=b'f' => Ok(byte - b'a' + 10),
b'A'..=b'F' => Ok(byte - b'A' + 10),
_ => anyhow::bail!("{flag_name} must be a 64-character hex SHA-256 digest"),
}
}
fn sha256_digest(input: &[u8]) -> [u8; 32] {
let mut digest = [0u8; 32];
digest.copy_from_slice(&Sha256::digest(input));
digest
}
fn unauthorized(message: &'static str) -> WebsocketAuthError {
WebsocketAuthError {
status_code: StatusCode::UNAUTHORIZED,
message,
}
}
#[cfg(test)]
mod tests {
use super::*;
use axum::http::HeaderValue;
use base64::Engine;
use base64::engine::general_purpose::URL_SAFE_NO_PAD;
use hmac::Hmac;
use hmac::Mac;
use serde_json::json;
type HmacSha256 = Hmac<Sha256>;
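// Build a minimal HS256 JWT by hand so the tests do not depend on an encoder.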
fn signed_token(shared_secret: &[u8], claims: serde_json::Value) -> String {
let header = URL_SAFE_NO_PAD.encode(br#"{"alg":"HS256","typ":"JWT"}"#);
let claims_segment = URL_SAFE_NO_PAD.encode(serde_json::to_vec(&claims).unwrap());
let payload = format!("{header}.{claims_segment}");
let mut mac = HmacSha256::new_from_slice(shared_secret).unwrap();
mac.update(payload.as_bytes());
let signature = URL_SAFE_NO_PAD.encode(mac.finalize().into_bytes());
format!("{payload}.{signature}")
}
#[test]
fn warns_about_unauthenticated_non_loopback_listener() {
let policy = WebsocketAuthPolicy::default();
assert!(should_warn_about_unauthenticated_non_loopback_listener(
"0.0.0.0:8765".parse().unwrap(),
&policy,
));
assert!(!should_warn_about_unauthenticated_non_loopback_listener(
"127.0.0.1:8765".parse().unwrap(),
&policy,
));
assert!(!should_warn_about_unauthenticated_non_loopback_listener(
"0.0.0.0:8765".parse().unwrap(),
&WebsocketAuthPolicy {
mode: Some(WebsocketAuthMode::CapabilityToken {
token_sha256: [0u8; 32],
}),
},
));
}
#[test]
fn capability_token_args_require_token_file_or_hash() {
let err = AppServerWebsocketAuthArgs {
ws_auth: Some(WebsocketAuthCliMode::CapabilityToken),
..Default::default()
}
.try_into_settings()
.expect_err("capability-token mode should require a token source");
assert!(
err.to_string().contains("--ws-token-file")
&& err.to_string().contains("--ws-token-sha256"),
"unexpected error: {err}"
);
}
#[test]
fn capability_token_args_accept_token_hash() {
let settings = AppServerWebsocketAuthArgs {
ws_auth: Some(WebsocketAuthCliMode::CapabilityToken),
ws_token_sha256: Some("ab".repeat(32)),
..Default::default()
}
.try_into_settings()
.expect("capability-token hash args should parse");
assert_eq!(
settings,
AppServerWebsocketAuthSettings {
config: Some(AppServerWebsocketAuthConfig::CapabilityToken {
source: AppServerWebsocketCapabilityTokenSource::TokenSha256 {
token_sha256: [0xab; 32],
},
}),
}
);
}
#[test]
fn capability_token_args_reject_multiple_token_sources() {
let err = AppServerWebsocketAuthArgs {
ws_auth: Some(WebsocketAuthCliMode::CapabilityToken),
ws_token_file: Some(PathBuf::from("/tmp/token")),
ws_token_sha256: Some("ab".repeat(32)),
..Default::default()
}
.try_into_settings()
.expect_err("capability-token mode should reject multiple token sources");
assert!(
err.to_string().contains("mutually exclusive"),
"unexpected error: {err}"
);
}
#[test]
fn capability_token_args_reject_malformed_token_hash() {
let err = AppServerWebsocketAuthArgs {
ws_auth: Some(WebsocketAuthCliMode::CapabilityToken),
ws_token_sha256: Some("not-a-sha256".to_string()),
..Default::default()
}
.try_into_settings()
.expect_err("capability-token mode should reject malformed token hashes");
assert!(
err.to_string().contains("64-character hex"),
"unexpected error: {err}"
);
}
#[test]
fn capability_token_hash_policy_authorizes_matching_bearer_token() {
let settings = AppServerWebsocketAuthSettings {
config: Some(AppServerWebsocketAuthConfig::CapabilityToken {
source: AppServerWebsocketCapabilityTokenSource::TokenSha256 {
token_sha256: sha256_digest(b"super-secret-token"),
},
}),
};
let policy = policy_from_settings(&settings).expect("hash policy should build");
let mut headers = HeaderMap::new();
headers.insert(
AUTHORIZATION,
HeaderValue::from_static("Bearer super-secret-token"),
);
authorize_upgrade(&headers, &policy).expect("matching token should authorize");
headers.insert(
AUTHORIZATION,
HeaderValue::from_static("Bearer wrong-token"),
);
let err = authorize_upgrade(&headers, &policy).expect_err("wrong token should fail");
assert_eq!(err.status_code(), StatusCode::UNAUTHORIZED);
}
#[test]
fn signed_bearer_args_require_mode_when_mode_specific_flags_are_set() {
let err = AppServerWebsocketAuthArgs {
ws_shared_secret_file: Some(PathBuf::from("/tmp/secret")),
..Default::default()
}
.try_into_settings()
.expect_err("mode-specific flags should require --ws-auth");
assert!(
err.to_string().contains("websocket auth flags require"),
"unexpected error: {err}"
);
}
#[test]
fn signed_bearer_args_default_clock_skew_and_trim_optional_claims() {
let settings = AppServerWebsocketAuthArgs {
ws_auth: Some(WebsocketAuthCliMode::SignedBearerToken),
ws_shared_secret_file: Some(PathBuf::from("/tmp/secret")),
ws_issuer: Some(" issuer ".to_string()),
ws_audience: Some(" ".to_string()),
..Default::default()
}
.try_into_settings()
.expect("signed bearer args should parse");
assert_eq!(
settings,
AppServerWebsocketAuthSettings {
config: Some(AppServerWebsocketAuthConfig::SignedBearerToken {
shared_secret_file: AbsolutePathBuf::from_absolute_path("/tmp/secret")
.expect("absolute path"),
issuer: Some("issuer".to_string()),
audience: None,
max_clock_skew_seconds: DEFAULT_MAX_CLOCK_SKEW_SECONDS,
}),
}
);
}
#[test]
fn signed_bearer_token_verification_rejects_tampering() {
let shared_secret = b"0123456789abcdef0123456789abcdef";
let token = signed_token(
shared_secret,
json!({
"exp": OffsetDateTime::now_utc().unix_timestamp() + 60,
}),
);
let tampered = token.replace(".eyJleHAi", ".eyJleHBi");
let err = verify_signed_bearer_token(
&tampered,
shared_secret,
/*issuer*/ None,
/*audience*/ None,
/*max_clock_skew_seconds*/ 30,
)
.expect_err("tampered jwt should fail");
assert_eq!(err.status_code(), StatusCode::UNAUTHORIZED);
}
#[test]
fn signed_bearer_token_verification_accepts_valid_token() {
let shared_secret = b"0123456789abcdef0123456789abcdef";
let token = signed_token(
shared_secret,
json!({
"exp": OffsetDateTime::now_utc().unix_timestamp() + 60,
"iss": "issuer",
"aud": "audience",
}),
);
verify_signed_bearer_token(
&token,
shared_secret,
Some("issuer"),
Some("audience"),
/*max_clock_skew_seconds*/ 30,
)
.expect("valid signed token should verify");
}
#[test]
fn signed_bearer_token_verification_accepts_multiple_audiences() {
let shared_secret = b"0123456789abcdef0123456789abcdef";
let token = signed_token(
shared_secret,
json!({
"exp": OffsetDateTime::now_utc().unix_timestamp() + 60,
"aud": ["other-audience", "audience"],
}),
);
verify_signed_bearer_token(
&token,
shared_secret,
/*issuer*/ None,
Some("audience"),
/*max_clock_skew_seconds*/ 30,
)
.expect("jwt audience arrays should verify");
}
#[test]
fn signed_bearer_token_verification_rejects_alg_none_tokens() {
let claims_segment = URL_SAFE_NO_PAD.encode(
serde_json::to_vec(&json!({
"exp": OffsetDateTime::now_utc().unix_timestamp() + 60,
}))
.unwrap(),
);
let header_segment = URL_SAFE_NO_PAD.encode(br#"{"alg":"none","typ":"JWT"}"#);
let token = format!("{header_segment}.{claims_segment}.");
let err = verify_signed_bearer_token(
&token,
b"0123456789abcdef0123456789abcdef",
/*issuer*/ None,
/*audience*/ None,
/*max_clock_skew_seconds*/ 30,
)
.expect_err("alg=none jwt should be rejected");
assert_eq!(err.status_code(), StatusCode::UNAUTHORIZED);
}
#[test]
fn signed_bearer_token_verification_rejects_missing_exp() {
let shared_secret = b"0123456789abcdef0123456789abcdef";
let token = signed_token(
shared_secret,
json!({
"iss": "issuer",
}),
);
let err = verify_signed_bearer_token(
&token,
shared_secret,
/*issuer*/ None,
/*audience*/ None,
/*max_clock_skew_seconds*/ 30,
)
.expect_err("jwt without exp should be rejected");
assert_eq!(err.status_code(), StatusCode::UNAUTHORIZED);
}
#[test]
fn validate_signed_bearer_secret_rejects_short_secret() {
let err = validate_signed_bearer_secret(Path::new("/tmp/secret"), b"too-short")
.expect_err("short shared secret should be rejected");
assert_eq!(err.kind(), ErrorKind::InvalidInput);
assert!(
err.to_string().contains("must be at least 32 bytes"),
"unexpected error: {err}"
);
}
}

View File

@@ -1,5 +1,3 @@
pub mod auth;
use crate::outgoing_message::ConnectionId;
use crate::outgoing_message::OutgoingError;
use crate::outgoing_message::OutgoingMessage;
@@ -8,7 +6,6 @@ use codex_app_server_protocol::JSONRPCErrorError;
use codex_app_server_protocol::JSONRPCMessage;
use codex_core::config::find_codex_home;
use codex_utils_absolute_path::AbsolutePathBuf;
use std::net::SocketAddr;
use std::path::Path;
use std::str::FromStr;
use std::sync::atomic::AtomicU64;
@@ -31,10 +28,10 @@ mod unix_socket_tests;
mod websocket;
pub use remote_control::RemoteControlHandle;
pub use remote_control::RemoteControlStartConfig;
pub use remote_control::start_remote_control;
pub use stdio::start_stdio_connection;
pub use unix_socket::start_control_socket_acceptor;
pub use websocket::start_websocket_acceptor;
const OVERLOADED_ERROR_CODE: i64 = -32001;
@@ -53,7 +50,6 @@ pub fn app_server_control_socket_path(codex_home: &Path) -> std::io::Result<Abso
pub enum AppServerTransport {
Stdio,
UnixSocket { socket_path: AbsolutePathBuf },
WebSocket { bind_address: SocketAddr },
Off,
}
@@ -61,7 +57,6 @@ pub enum AppServerTransport {
pub enum AppServerTransportParseError {
UnsupportedListenUrl(String),
InvalidUnixSocketPath { listen_url: String, message: String },
InvalidWebSocketListenUrl(String),
}
impl std::fmt::Display for AppServerTransportParseError {
@@ -69,7 +64,7 @@ impl std::fmt::Display for AppServerTransportParseError {
match self {
AppServerTransportParseError::UnsupportedListenUrl(listen_url) => write!(
f,
"unsupported --listen URL `{listen_url}`; expected `stdio://`, `unix://`, `unix://PATH`, `ws://IP:PORT`, or `off`"
"unsupported --listen URL `{listen_url}`; expected `stdio://`, `unix://`, `unix://PATH`, or `off`"
),
AppServerTransportParseError::InvalidUnixSocketPath {
listen_url,
@@ -78,10 +73,6 @@ impl std::fmt::Display for AppServerTransportParseError {
f,
"invalid unix socket --listen URL `{listen_url}`; failed to resolve socket path: {message}"
),
AppServerTransportParseError::InvalidWebSocketListenUrl(listen_url) => write!(
f,
"invalid websocket --listen URL `{listen_url}`; expected `ws://IP:PORT`"
),
}
}
}
@@ -125,13 +116,6 @@ impl AppServerTransport {
return Ok(Self::Off);
}
if let Some(socket_addr) = listen_url.strip_prefix("ws://") {
let bind_address = socket_addr.parse::<SocketAddr>().map_err(|_| {
AppServerTransportParseError::InvalidWebSocketListenUrl(listen_url.to_string())
})?;
return Ok(Self::WebSocket { bind_address });
}
Err(AppServerTransportParseError::UnsupportedListenUrl(
listen_url.to_string(),
))
@@ -167,7 +151,7 @@ pub enum TransportEvent {
pub enum ConnectionOrigin {
Stdio,
InProcess,
WebSocket,
UnixSocket,
RemoteControl,
}

View File

@@ -1,7 +1,6 @@
use super::protocol::EnrollRemoteServerRequest;
use super::protocol::EnrollRemoteServerResponse;
use super::protocol::RemoteControlTarget;
use axum::http::HeaderMap;
use codex_api::SharedAuthProvider;
use codex_login::default_client::build_reqwest_client;
use codex_state::RemoteControlEnrollmentRecord;
@@ -9,6 +8,7 @@ use codex_state::StateRuntime;
use gethostname::gethostname;
use std::io;
use std::io::ErrorKind;
use tokio_tungstenite::tungstenite::http::HeaderMap;
use tracing::info;
use tracing::warn;
@@ -19,6 +19,7 @@ const REQUEST_ID_HEADER: &str = "x-request-id";
const OAI_REQUEST_ID_HEADER: &str = "x-oai-request-id";
const CF_RAY_HEADER: &str = "cf-ray";
pub(super) const REMOTE_CONTROL_ACCOUNT_ID_HEADER: &str = "chatgpt-account-id";
pub(super) const REMOTE_CONTROL_INSTALLATION_ID_HEADER: &str = "x-codex-installation-id";
#[derive(Debug, Clone, PartialEq, Eq)]
pub(super) struct RemoteControlEnrollment {
@@ -193,6 +194,7 @@ pub(crate) fn format_headers(headers: &HeaderMap) -> String {
pub(super) async fn enroll_remote_control_server(
remote_control_target: &RemoteControlTarget,
auth: &RemoteControlConnectionAuth,
installation_id: &str,
) -> io::Result<RemoteControlEnrollment> {
let enroll_url = &remote_control_target.enroll_url;
let server_name = gethostname().to_string_lossy().trim().to_string();
@@ -201,6 +203,7 @@ pub(super) async fn enroll_remote_control_server(
os: std::env::consts::OS,
arch: std::env::consts::ARCH,
app_server_version: env!("CARGO_PKG_VERSION"),
installation_id: installation_id.to_string(),
};
let client = build_reqwest_client();
let mut auth_headers = HeaderMap::new();
@@ -210,6 +213,7 @@ pub(super) async fn enroll_remote_control_server(
.timeout(REMOTE_CONTROL_ENROLL_TIMEOUT)
.headers(auth_headers)
.header(REMOTE_CONTROL_ACCOUNT_ID_HEADER, &auth.account_id)
.header(REMOTE_CONTROL_INSTALLATION_ID_HEADER, installation_id)
.json(&request);
let response = http_request.send().await.map_err(|err| {
@@ -459,6 +463,7 @@ mod tests {
auth_provider: codex_model_provider::unauthenticated_auth_provider(),
account_id: "account_id".to_string(),
},
"11111111-1111-4111-8111-111111111111",
)
.await
.expect_err("invalid response should fail to parse");

View File

@@ -28,6 +28,11 @@ use tokio::task::JoinHandle;
use tokio_util::sync::CancellationToken;
use tracing::warn;
pub struct RemoteControlStartConfig {
pub remote_control_url: String,
pub installation_id: String,
}
pub(super) struct QueuedServerEnvelope {
pub(super) event: ServerEvent,
pub(super) client_id: ClientId,
@@ -62,7 +67,7 @@ impl RemoteControlHandle {
}
pub async fn start_remote_control(
remote_control_url: String,
config: RemoteControlStartConfig,
state_db: Option<Arc<StateRuntime>>,
auth_manager: Arc<AuthManager>,
transport_event_tx: mpsc::Sender<TransportEvent>,
@@ -77,7 +82,7 @@ pub async fn start_remote_control(
warn!("remote control disabled because sqlite state db is unavailable");
}
let remote_control_target = if initial_enabled {
Some(normalize_remote_control_url(&remote_control_url)?)
Some(normalize_remote_control_url(&config.remote_control_url)?)
} else {
None
};
@@ -89,14 +94,18 @@ pub async fn start_remote_control(
} else {
RemoteControlConnectionStatus::Disabled
},
installation_id: config.installation_id.clone(),
environment_id: None,
};
let (status_tx, _status_rx) = watch::channel(initial_status);
let status_publisher = RemoteControlStatusPublisher::new(status_tx.clone());
let join_handle = tokio::spawn(async move {
RemoteControlWebsocket::new(
remote_control_url,
remote_control_target,
websocket::RemoteControlWebsocketConfig {
remote_control_url: config.remote_control_url,
installation_id: config.installation_id,
remote_control_target,
},
state_db,
auth_manager,
RemoteControlChannels {

View File

@@ -19,6 +19,7 @@ pub(super) struct EnrollRemoteServerRequest {
pub(super) os: &'static str,
pub(super) arch: &'static str,
pub(super) app_server_version: &'static str,
pub(super) installation_id: String,
}
#[derive(Debug, Deserialize)]

View File

@@ -1,4 +1,5 @@
use super::enroll::REMOTE_CONTROL_ACCOUNT_ID_HEADER;
use super::enroll::REMOTE_CONTROL_INSTALLATION_ID_HEADER;
use super::enroll::RemoteControlEnrollment;
use super::enroll::load_persisted_remote_control_enrollment;
use super::enroll::update_persisted_remote_control_enrollment;
@@ -56,6 +57,8 @@ use tokio_tungstenite::accept_hdr_async;
use tokio_tungstenite::tungstenite;
use tokio_util::sync::CancellationToken;
const TEST_INSTALLATION_ID: &str = "11111111-1111-4111-8111-111111111111";
fn remote_control_auth_manager() -> Arc<AuthManager> {
auth_manager_from_auth(CodexAuth::create_dummy_chatgpt_auth_for_testing())
}
@@ -131,6 +134,7 @@ async fn expect_remote_control_status(
if let Some(expected_status) = expected_status {
assert_eq!(status.status, expected_status);
}
assert_eq!(status.installation_id, TEST_INSTALLATION_ID);
assert_eq!(status.environment_id.as_deref(), expected_environment_id);
}
@@ -173,7 +177,10 @@ async fn remote_control_transport_manages_virtual_clients_and_routes_messages()
mpsc::channel::<TransportEvent>(CHANNEL_CAPACITY);
let shutdown_token = CancellationToken::new();
let (remote_task, remote_handle) = start_remote_control(
remote_control_url,
RemoteControlStartConfig {
remote_control_url,
installation_id: TEST_INSTALLATION_ID.to_string(),
},
Some(remote_control_state_runtime(&codex_home).await),
remote_control_auth_manager(),
transport_event_tx,
@@ -449,7 +456,10 @@ async fn remote_control_transport_reconnects_after_disconnect() {
mpsc::channel::<TransportEvent>(CHANNEL_CAPACITY);
let shutdown_token = CancellationToken::new();
let (remote_task, remote_handle) = start_remote_control(
remote_control_url,
RemoteControlStartConfig {
remote_control_url,
installation_id: TEST_INSTALLATION_ID.to_string(),
},
Some(remote_control_state_runtime(&codex_home).await),
remote_control_auth_manager(),
transport_event_tx,
@@ -528,7 +538,10 @@ async fn remote_control_start_allows_remote_control_invalid_url_when_disabled()
mpsc::channel::<TransportEvent>(CHANNEL_CAPACITY);
let shutdown_token = CancellationToken::new();
let (remote_task, _remote_handle) = start_remote_control(
"https://internal.example.com/backend-api/".to_string(),
RemoteControlStartConfig {
remote_control_url: "https://internal.example.com/backend-api/".to_string(),
installation_id: TEST_INSTALLATION_ID.to_string(),
},
/*state_db*/ None,
remote_control_auth_manager(),
transport_event_tx,
@@ -564,7 +577,10 @@ async fn remote_control_start_allows_missing_auth_when_enabled() {
mpsc::channel::<TransportEvent>(CHANNEL_CAPACITY);
let shutdown_token = CancellationToken::new();
let (remote_task, _remote_handle) = start_remote_control(
remote_control_url,
RemoteControlStartConfig {
remote_control_url,
installation_id: TEST_INSTALLATION_ID.to_string(),
},
Some(remote_control_state_runtime(&codex_home).await),
auth_manager,
transport_event_tx,
@@ -596,7 +612,10 @@ async fn remote_control_start_reports_missing_state_db_as_disabled_when_enabled(
mpsc::channel::<TransportEvent>(CHANNEL_CAPACITY);
let shutdown_token = CancellationToken::new();
let (remote_task, remote_handle) = start_remote_control(
remote_control_url,
RemoteControlStartConfig {
remote_control_url,
installation_id: TEST_INSTALLATION_ID.to_string(),
},
/*state_db*/ None,
remote_control_auth_manager(),
transport_event_tx,
@@ -611,6 +630,7 @@ async fn remote_control_start_reports_missing_state_db_as_disabled_when_enabled(
status_rx.borrow().clone(),
RemoteControlStatusChangedNotification {
status: RemoteControlConnectionStatus::Disabled,
installation_id: TEST_INSTALLATION_ID.to_string(),
environment_id: None,
}
);
@@ -645,7 +665,10 @@ async fn remote_control_handle_set_enabled_stops_and_restarts_connections() {
mpsc::channel::<TransportEvent>(CHANNEL_CAPACITY);
let shutdown_token = CancellationToken::new();
let (remote_task, remote_handle) = start_remote_control(
remote_control_url,
RemoteControlStartConfig {
remote_control_url,
installation_id: TEST_INSTALLATION_ID.to_string(),
},
Some(remote_control_state_runtime(&codex_home).await),
remote_control_auth_manager(),
transport_event_tx,
@@ -672,6 +695,7 @@ async fn remote_control_handle_set_enabled_stops_and_restarts_connections() {
&mut status_rx,
RemoteControlStatusChangedNotification {
status: RemoteControlConnectionStatus::Connected,
installation_id: TEST_INSTALLATION_ID.to_string(),
environment_id: Some("env_test".to_string()),
},
)
@@ -682,6 +706,7 @@ async fn remote_control_handle_set_enabled_stops_and_restarts_connections() {
&mut status_rx,
RemoteControlStatusChangedNotification {
status: RemoteControlConnectionStatus::Disabled,
installation_id: TEST_INSTALLATION_ID.to_string(),
environment_id: None,
},
)
@@ -698,6 +723,7 @@ async fn remote_control_handle_set_enabled_stops_and_restarts_connections() {
&mut status_rx,
RemoteControlStatusChangedNotification {
status: RemoteControlConnectionStatus::Connecting,
installation_id: TEST_INSTALLATION_ID.to_string(),
environment_id: Some("env_test".to_string()),
},
)
@@ -729,7 +755,10 @@ async fn remote_control_transport_clears_outgoing_buffer_when_backend_acks() {
mpsc::channel::<TransportEvent>(CHANNEL_CAPACITY);
let shutdown_token = CancellationToken::new();
let (remote_task, remote_handle) = start_remote_control(
remote_control_url,
RemoteControlStartConfig {
remote_control_url,
installation_id: TEST_INSTALLATION_ID.to_string(),
},
Some(remote_control_state_runtime(&codex_home).await),
remote_control_auth_manager(),
transport_event_tx,
@@ -904,7 +933,10 @@ async fn remote_control_http_mode_enrolls_before_connecting() {
let expected_server_name = gethostname().to_string_lossy().trim().to_string();
let shutdown_token = CancellationToken::new();
let (remote_task, remote_handle) = start_remote_control(
remote_control_url,
RemoteControlStartConfig {
remote_control_url,
installation_id: TEST_INSTALLATION_ID.to_string(),
},
Some(remote_control_state_runtime(&codex_home).await),
remote_control_auth_manager(),
transport_event_tx,
@@ -929,6 +961,12 @@ async fn remote_control_http_mode_enrolls_before_connecting() {
enroll_request.headers.get(REMOTE_CONTROL_ACCOUNT_ID_HEADER),
Some(&"account_id".to_string())
);
assert_eq!(
enroll_request
.headers
.get(REMOTE_CONTROL_INSTALLATION_ID_HEADER),
Some(&TEST_INSTALLATION_ID.to_string())
);
assert_eq!(
serde_json::from_str::<serde_json::Value>(&enroll_request.body)
.expect("enroll body should deserialize"),
@@ -937,6 +975,7 @@ async fn remote_control_http_mode_enrolls_before_connecting() {
"os": std::env::consts::OS,
"arch": std::env::consts::ARCH,
"app_server_version": env!("CARGO_PKG_VERSION"),
"installation_id": TEST_INSTALLATION_ID,
})
);
respond_with_json(
@@ -967,6 +1006,12 @@ async fn remote_control_http_mode_enrolls_before_connecting() {
.get(REMOTE_CONTROL_ACCOUNT_ID_HEADER),
Some(&"account_id".to_string())
);
assert_eq!(
handshake_request
.headers
.get(REMOTE_CONTROL_INSTALLATION_ID_HEADER),
Some(&TEST_INSTALLATION_ID.to_string())
);
assert_eq!(
handshake_request.headers.get("x-codex-server-id"),
Some(&"srv_e_test".to_string())
@@ -1128,7 +1173,10 @@ async fn remote_control_http_mode_reuses_persisted_enrollment_before_reenrolling
mpsc::channel::<TransportEvent>(CHANNEL_CAPACITY);
let shutdown_token = CancellationToken::new();
let (remote_task, _remote_handle) = start_remote_control(
remote_control_url,
RemoteControlStartConfig {
remote_control_url,
installation_id: TEST_INSTALLATION_ID.to_string(),
},
Some(state_db.clone()),
remote_control_auth_manager_with_home(&codex_home),
transport_event_tx,
@@ -1196,7 +1244,10 @@ async fn remote_control_stdio_mode_waits_for_client_name_before_connecting() {
let (app_server_client_name_tx, app_server_client_name_rx) = oneshot::channel::<String>();
let shutdown_token = CancellationToken::new();
let (remote_task, _remote_handle) = start_remote_control(
remote_control_url,
RemoteControlStartConfig {
remote_control_url,
installation_id: TEST_INSTALLATION_ID.to_string(),
},
Some(state_db.clone()),
remote_control_auth_manager_with_home(&codex_home),
transport_event_tx,
@@ -1255,7 +1306,10 @@ async fn remote_control_waits_for_account_id_before_enrolling() {
mpsc::channel::<TransportEvent>(CHANNEL_CAPACITY);
let shutdown_token = CancellationToken::new();
let (remote_task, _remote_handle) = start_remote_control(
remote_control_url,
RemoteControlStartConfig {
remote_control_url,
installation_id: TEST_INSTALLATION_ID.to_string(),
},
Some(state_db.clone()),
auth_manager,
transport_event_tx,
@@ -1338,7 +1392,10 @@ async fn remote_control_http_mode_clears_stale_persisted_enrollment_after_404()
mpsc::channel::<TransportEvent>(CHANNEL_CAPACITY);
let shutdown_token = CancellationToken::new();
let (remote_task, remote_handle) = start_remote_control(
remote_control_url,
RemoteControlStartConfig {
remote_control_url,
installation_id: TEST_INSTALLATION_ID.to_string(),
},
Some(state_db.clone()),
remote_control_auth_manager_with_home(&codex_home),
transport_event_tx,

View File

@@ -19,7 +19,6 @@ use super::segment::ClientSegmentObservation;
use super::segment::ClientSegmentReassembler;
use super::segment::REMOTE_CONTROL_SEGMENT_MAX_BYTES;
use super::segment::split_server_envelope_for_transport;
use axum::http::HeaderValue;
use base64::Engine;
use codex_app_server_protocol::RemoteControlConnectionStatus;
use codex_app_server_protocol::RemoteControlStatusChangedNotification;
@@ -48,6 +47,7 @@ use tokio_tungstenite::WebSocketStream;
use tokio_tungstenite::connect_async;
use tokio_tungstenite::tungstenite;
use tokio_tungstenite::tungstenite::client::IntoClientRequest;
use tokio_tungstenite::tungstenite::http::HeaderValue;
use tokio_util::sync::CancellationToken;
use tracing::error;
use tracing::info;
@@ -55,6 +55,7 @@ use tracing::warn;
pub(super) const REMOTE_CONTROL_PROTOCOL_VERSION: &str = "3";
pub(super) const REMOTE_CONTROL_ACCOUNT_ID_HEADER: &str = "chatgpt-account-id";
pub(super) const REMOTE_CONTROL_INSTALLATION_ID_HEADER: &str = "x-codex-installation-id";
const REMOTE_CONTROL_SUBSCRIBE_CURSOR_HEADER: &str = "x-codex-subscribe-cursor";
const REMOTE_CONTROL_WEBSOCKET_PING_INTERVAL: std::time::Duration =
std::time::Duration::from_secs(10);
@@ -214,6 +215,7 @@ impl WebsocketState {
pub(crate) struct RemoteControlWebsocket {
remote_control_url: String,
installation_id: String,
remote_control_target: Option<RemoteControlTarget>,
state_db: Option<Arc<StateRuntime>>,
auth_manager: Arc<AuthManager>,
@@ -229,6 +231,12 @@ pub(crate) struct RemoteControlWebsocket {
enabled_rx: watch::Receiver<bool>,
}
pub(crate) struct RemoteControlWebsocketConfig {
pub(crate) remote_control_url: String,
pub(crate) installation_id: String,
pub(crate) remote_control_target: Option<RemoteControlTarget>,
}
enum ConnectOutcome {
Connected(Box<WebSocketStream<MaybeTlsStream<TcpStream>>>),
Disabled,
@@ -254,6 +262,7 @@ impl RemoteControlStatusPublisher {
self.tx.send_if_modified(|status| {
let next_status = RemoteControlStatusChangedNotification {
status: connection_status,
installation_id: status.installation_id.clone(),
environment_id: if connection_status == RemoteControlConnectionStatus::Disabled {
None
} else {
@@ -276,6 +285,7 @@ impl RemoteControlStatusPublisher {
}
let next_status = RemoteControlStatusChangedNotification {
status: status.status,
installation_id: status.installation_id.clone(),
environment_id,
};
if *status == next_status {
@@ -290,14 +300,14 @@ impl RemoteControlStatusPublisher {
#[derive(Clone, Copy)]
pub(super) struct RemoteControlConnectOptions<'a> {
installation_id: &'a str,
subscribe_cursor: Option<&'a str>,
app_server_client_name: Option<&'a str>,
}
impl RemoteControlWebsocket {
pub(crate) fn new(
remote_control_url: String,
remote_control_target: Option<RemoteControlTarget>,
config: RemoteControlWebsocketConfig,
state_db: Option<Arc<StateRuntime>>,
auth_manager: Arc<AuthManager>,
channels: RemoteControlChannels,
@@ -315,8 +325,9 @@ impl RemoteControlWebsocket {
let auth_recovery = auth_manager.unauthorized_recovery();
Self {
remote_control_url,
remote_control_target,
remote_control_url: config.remote_control_url,
installation_id: config.installation_id,
remote_control_target: config.remote_control_target,
state_db,
auth_manager,
status_publisher: channels.status_publisher,
@@ -442,6 +453,7 @@ impl RemoteControlWebsocket {
loop {
let subscribe_cursor = self.state.lock().await.subscribe_cursor.clone();
let connect_options = RemoteControlConnectOptions {
installation_id: &self.installation_id,
subscribe_cursor: subscribe_cursor.as_deref(),
app_server_client_name,
};
@@ -918,6 +930,7 @@ fn build_remote_control_websocket_request(
websocket_url: &str,
enrollment: &RemoteControlEnrollment,
auth: &RemoteControlConnectionAuth,
installation_id: &str,
subscribe_cursor: Option<&str>,
) -> io::Result<tungstenite::http::Request<()>> {
let mut request = websocket_url.into_client_request().map_err(|err| {
@@ -942,6 +955,11 @@ fn build_remote_control_websocket_request(
auth.auth_provider.add_auth_headers(&mut auth_headers);
headers.extend(auth_headers);
set_remote_control_header(headers, REMOTE_CONTROL_ACCOUNT_ID_HEADER, &auth.account_id)?;
set_remote_control_header(
headers,
REMOTE_CONTROL_INSTALLATION_ID_HEADER,
installation_id,
)?;
if let Some(subscribe_cursor) = subscribe_cursor {
set_remote_control_header(
headers,
@@ -1066,7 +1084,12 @@ pub(super) async fn connect_remote_control_websocket(
"creating new remote control enrollment: websocket_url={}, enroll_url={}, account_id={}",
remote_control_target.websocket_url, remote_control_target.enroll_url, auth.account_id
);
let new_enrollment = match enroll_remote_control_server(remote_control_target, &auth).await
let new_enrollment = match enroll_remote_control_server(
remote_control_target,
&auth,
connect_options.installation_id,
)
.await
{
Ok(new_enrollment) => new_enrollment,
Err(err)
@@ -1110,6 +1133,7 @@ pub(super) async fn connect_remote_control_websocket(
&remote_control_target.websocket_url,
enrollment_ref,
&auth,
connect_options.installation_id,
connect_options.subscribe_cursor,
)?;
@@ -1247,6 +1271,7 @@ mod tests {
const TEST_HTTP_ACCEPT_TIMEOUT: Duration = Duration::from_secs(30);
#[cfg(not(windows))]
const TEST_HTTP_ACCEPT_TIMEOUT: Duration = Duration::from_secs(5);
const TEST_INSTALLATION_ID: &str = "11111111-1111-4111-8111-111111111111";
fn remote_control_status_channel() -> (
RemoteControlStatusPublisher,
@@ -1254,6 +1279,7 @@ mod tests {
) {
let (status_tx, status_rx) = watch::channel(RemoteControlStatusChangedNotification {
status: RemoteControlConnectionStatus::Connecting,
installation_id: TEST_INSTALLATION_ID.to_string(),
environment_id: None,
});
(RemoteControlStatusPublisher::new(status_tx), status_rx)
@@ -1359,6 +1385,7 @@ mod tests {
&mut auth_recovery,
&mut enrollment,
RemoteControlConnectOptions {
installation_id: TEST_INSTALLATION_ID,
subscribe_cursor: None,
app_server_client_name: None,
},
@@ -1376,6 +1403,7 @@ mod tests {
status_rx.borrow().clone(),
RemoteControlStatusChangedNotification {
status: RemoteControlConnectionStatus::Connecting,
installation_id: TEST_INSTALLATION_ID.to_string(),
environment_id: Some("env_test".to_string()),
}
);
@@ -1435,6 +1463,7 @@ mod tests {
&mut auth_recovery,
&mut enrollment,
RemoteControlConnectOptions {
installation_id: TEST_INSTALLATION_ID,
subscribe_cursor: None,
app_server_client_name: None,
},
@@ -1448,6 +1477,7 @@ mod tests {
status_rx.borrow().clone(),
RemoteControlStatusChangedNotification {
status: RemoteControlConnectionStatus::Connecting,
installation_id: TEST_INSTALLATION_ID.to_string(),
environment_id: Some("env_test".to_string()),
}
);
@@ -1515,6 +1545,7 @@ mod tests {
&mut auth_recovery,
&mut enrollment,
RemoteControlConnectOptions {
installation_id: TEST_INSTALLATION_ID,
subscribe_cursor: None,
app_server_client_name: None,
},
@@ -1567,6 +1598,7 @@ mod tests {
&mut auth_recovery,
&mut enrollment,
RemoteControlConnectOptions {
installation_id: TEST_INSTALLATION_ID,
subscribe_cursor: None,
app_server_client_name: None,
},
@@ -1614,6 +1646,7 @@ mod tests {
&mut auth_recovery,
&mut enrollment,
RemoteControlConnectOptions {
installation_id: TEST_INSTALLATION_ID,
subscribe_cursor: None,
app_server_client_name: None,
},
@@ -1632,6 +1665,7 @@ mod tests {
status_rx.borrow().clone(),
RemoteControlStatusChangedNotification {
status: RemoteControlConnectionStatus::Connecting,
installation_id: TEST_INSTALLATION_ID.to_string(),
environment_id: None,
}
);
@@ -1656,8 +1690,11 @@ mod tests {
let shutdown_token = shutdown_token.clone();
async move {
RemoteControlWebsocket::new(
remote_control_url,
Some(remote_control_target),
RemoteControlWebsocketConfig {
remote_control_url,
installation_id: TEST_INSTALLATION_ID.to_string(),
remote_control_target: Some(remote_control_target),
},
/*state_db*/ None,
remote_control_auth_manager(),
RemoteControlChannels {
@@ -1701,6 +1738,7 @@ mod tests {
status_rx.borrow().clone(),
RemoteControlStatusChangedNotification {
status: RemoteControlConnectionStatus::Connecting,
installation_id: TEST_INSTALLATION_ID.to_string(),
environment_id: Some("env_first".to_string()),
}
);
@@ -1721,6 +1759,7 @@ mod tests {
status_rx.borrow().clone(),
RemoteControlStatusChangedNotification {
status: RemoteControlConnectionStatus::Connected,
installation_id: TEST_INSTALLATION_ID.to_string(),
environment_id: Some("env_first".to_string()),
}
);
@@ -1734,6 +1773,7 @@ mod tests {
status_rx.borrow().clone(),
RemoteControlStatusChangedNotification {
status: RemoteControlConnectionStatus::Connected,
installation_id: TEST_INSTALLATION_ID.to_string(),
environment_id: None,
}
);
@@ -1748,6 +1788,7 @@ mod tests {
status_rx.borrow().clone(),
RemoteControlStatusChangedNotification {
status: RemoteControlConnectionStatus::Disabled,
installation_id: TEST_INSTALLATION_ID.to_string(),
environment_id: None,
}
);

View File

@@ -52,6 +52,14 @@ fn listen_unix_socket_accepts_relative_custom_path() {
);
}
#[test]
fn listen_websocket_url_is_not_supported() {
assert!(matches!(
AppServerTransport::from_listen_url("ws://127.0.0.1:0"),
Err(super::AppServerTransportParseError::UnsupportedListenUrl(_))
));
}
#[tokio::test]
async fn control_socket_acceptor_upgrades_and_forwards_websocket_text_messages_and_pings() {
let temp_dir = tempfile::TempDir::new().expect("temp dir");

View File

@@ -1,46 +1,17 @@
use super::CHANNEL_CAPACITY;
use super::ConnectionOrigin;
use super::TransportEvent;
use super::auth::WebsocketAuthPolicy;
use super::auth::authorize_upgrade;
use super::auth::should_warn_about_unauthenticated_non_loopback_listener;
use super::forward_incoming_message;
use super::next_connection_id;
use super::serialize_outgoing_message;
use crate::outgoing_message::ConnectionId;
use crate::outgoing_message::QueuedOutgoingMessage;
use axum::Router;
use axum::body::Body;
use axum::body::Bytes;
use axum::extract::ConnectInfo;
use axum::extract::State;
use axum::extract::ws::Message as AxumWebSocketMessage;
use axum::extract::ws::WebSocketUpgrade;
use axum::http::HeaderMap;
use axum::http::Request;
use axum::http::StatusCode;
use axum::http::header::ORIGIN;
use axum::middleware;
use axum::middleware::Next;
use axum::response::IntoResponse;
use axum::response::Response;
use axum::routing::any;
use axum::routing::get;
use futures::SinkExt;
use futures::StreamExt;
use owo_colors::OwoColorize;
use owo_colors::Stream;
use owo_colors::Style;
use std::io::Result as IoResult;
use std::net::SocketAddr;
use std::sync::Arc;
use tokio::net::TcpListener;
use tokio::sync::mpsc;
use tokio::task::JoinHandle;
use tokio_tungstenite::tungstenite::Bytes;
use tokio_tungstenite::tungstenite::Message as TungsteniteWebSocketMessage;
use tokio_util::sync::CancellationToken;
use tracing::error;
use tracing::info;
use tracing::warn;
/// WebSocket clients can briefly lag behind normal turn output bursts while the
@@ -48,127 +19,6 @@ use tracing::warn;
const WEBSOCKET_OUTBOUND_CHANNEL_CAPACITY: usize = 32 * 1024;
const _: () = assert!(WEBSOCKET_OUTBOUND_CHANNEL_CAPACITY > CHANNEL_CAPACITY);
fn colorize(text: &str, style: Style) -> String {
text.if_supports_color(Stream::Stderr, |value| value.style(style))
.to_string()
}
#[allow(clippy::print_stderr)]
fn print_websocket_startup_banner(addr: SocketAddr) {
let title = colorize("codex app-server (WebSockets)", Style::new().bold().cyan());
let listening_label = colorize("listening on:", Style::new().dimmed());
let listen_url = colorize(&format!("ws://{addr}"), Style::new().green());
let ready_label = colorize("readyz:", Style::new().dimmed());
let ready_url = colorize(&format!("http://{addr}/readyz"), Style::new().green());
let health_label = colorize("healthz:", Style::new().dimmed());
let health_url = colorize(&format!("http://{addr}/healthz"), Style::new().green());
let note_label = colorize("note:", Style::new().dimmed());
eprintln!("{title}");
eprintln!(" {listening_label} {listen_url}");
eprintln!(" {ready_label} {ready_url}");
eprintln!(" {health_label} {health_url}");
if addr.ip().is_loopback() {
eprintln!(
" {note_label} binds localhost only (use SSH port-forwarding for remote access)"
);
} else {
eprintln!(
" {note_label} websocket auth is opt-in in this build; configure `--ws-auth ...` before real remote use"
);
}
}
#[derive(Clone)]
struct WebSocketListenerState {
transport_event_tx: mpsc::Sender<TransportEvent>,
auth_policy: Arc<WebsocketAuthPolicy>,
}
async fn health_check_handler() -> StatusCode {
StatusCode::OK
}
async fn reject_requests_with_origin_header(
request: Request<Body>,
next: Next,
) -> Result<Response, StatusCode> {
if request.headers().contains_key(ORIGIN) {
warn!(
method = %request.method(),
uri = %request.uri(),
"rejecting websocket listener request with Origin header"
);
Err(StatusCode::FORBIDDEN)
} else {
Ok(next.run(request).await)
}
}
async fn websocket_upgrade_handler(
websocket: WebSocketUpgrade,
ConnectInfo(peer_addr): ConnectInfo<SocketAddr>,
State(state): State<WebSocketListenerState>,
headers: HeaderMap,
) -> impl IntoResponse {
if let Err(err) = authorize_upgrade(&headers, state.auth_policy.as_ref()) {
warn!(
%peer_addr,
message = err.message(),
"rejecting websocket client during upgrade"
);
return (err.status_code(), err.message()).into_response();
}
info!(%peer_addr, "websocket client connected");
websocket
.on_upgrade(move |stream| async move {
let (websocket_writer, websocket_reader) = stream.split();
run_websocket_connection(websocket_writer, websocket_reader, state.transport_event_tx)
.await;
})
.into_response()
}
pub async fn start_websocket_acceptor(
bind_address: SocketAddr,
transport_event_tx: mpsc::Sender<TransportEvent>,
shutdown_token: CancellationToken,
auth_policy: WebsocketAuthPolicy,
) -> IoResult<JoinHandle<()>> {
if should_warn_about_unauthenticated_non_loopback_listener(bind_address, &auth_policy) {
warn!(
%bind_address,
"starting non-loopback websocket listener without auth; websocket auth is opt-in for now and will become the default in a future release"
);
}
let listener = TcpListener::bind(bind_address).await?;
let local_addr = listener.local_addr()?;
print_websocket_startup_banner(local_addr);
info!("app-server websocket listening on ws://{local_addr}");
let router = Router::new()
.route("/readyz", get(health_check_handler))
.route("/healthz", get(health_check_handler))
.fallback(any(websocket_upgrade_handler))
.layer(middleware::from_fn(reject_requests_with_origin_header))
.with_state(WebSocketListenerState {
transport_event_tx,
auth_policy: Arc::new(auth_policy),
});
let server = axum::serve(
listener,
router.into_make_service_with_connect_info::<SocketAddr>(),
)
.with_graceful_shutdown(async move {
shutdown_token.cancelled().await;
});
Ok(tokio::spawn(async move {
if let Err(err) = server.await {
error!("websocket acceptor failed: {err}");
}
info!("websocket acceptor shutting down");
}))
}
pub(crate) async fn run_websocket_connection<M, SinkError, StreamError>(
websocket_writer: impl futures::sink::Sink<M, Error = SinkError> + Send + 'static,
websocket_reader: impl futures::stream::Stream<Item = Result<M, StreamError>> + Send + 'static,
@@ -186,7 +36,7 @@ pub(crate) async fn run_websocket_connection<M, SinkError, StreamError>(
if transport_event_tx
.send(TransportEvent::ConnectionOpened {
connection_id,
origin: ConnectionOrigin::WebSocket,
origin: ConnectionOrigin::UnixSocket,
writer: writer_tx,
disconnect_sender: Some(disconnect_token.clone()),
})
@@ -245,26 +95,6 @@ pub(crate) trait AppServerWebSocketMessage: Sized {
fn into_incoming(self) -> Option<IncomingWebSocketMessage>;
}
impl AppServerWebSocketMessage for AxumWebSocketMessage {
fn text(text: String) -> Self {
Self::Text(text.into())
}
fn pong(payload: Bytes) -> Self {
Self::Pong(payload)
}
fn into_incoming(self) -> Option<IncomingWebSocketMessage> {
Some(match self {
Self::Text(text) => IncomingWebSocketMessage::Text(text.to_string()),
Self::Binary(_) => IncomingWebSocketMessage::Binary,
Self::Ping(payload) => IncomingWebSocketMessage::Ping(payload),
Self::Pong(_) => IncomingWebSocketMessage::Pong,
Self::Close(_) => IncomingWebSocketMessage::Close,
})
}
}
impl AppServerWebSocketMessage for TungsteniteWebSocketMessage {
fn text(text: String) -> Self {
Self::Text(text.into())

View File

@@ -28,7 +28,6 @@ axum = { workspace = true, default-features = false, features = [
"http1",
"json",
"tokio",
"ws",
] }
codex-analytics = { workspace = true }
codex-arg0 = { workspace = true }
@@ -37,10 +36,13 @@ codex-config = { workspace = true }
codex-core = { workspace = true }
codex-core-plugins = { workspace = true }
codex-exec-server = { workspace = true }
codex-extension-api = { workspace = true }
codex-external-agent-migration = { workspace = true }
codex-external-agent-sessions = { workspace = true }
codex-features = { workspace = true }
codex-git-attribution = { workspace = true }
codex-git-utils = { workspace = true }
codex-file-watcher = { workspace = true }
codex-hooks = { workspace = true }
codex-otel = { workspace = true }
codex-plugin = { workspace = true }
@@ -97,22 +99,20 @@ axum = { workspace = true, default-features = false, features = [
"tokio",
] }
base64 = { workspace = true }
codex-uds = { workspace = true }
codex-model-provider-info = { workspace = true }
codex-utils-cargo-bin = { workspace = true }
core_test_support = { workspace = true }
flate2 = { workspace = true }
hmac = { workspace = true }
opentelemetry = { workspace = true }
opentelemetry_sdk = { workspace = true }
pretty_assertions = { workspace = true }
reqwest = { workspace = true, features = ["rustls-tls"] }
rmcp = { workspace = true, default-features = false, features = [
"elicitation",
"server",
"transport-streamable-http-server",
] }
serial_test = { workspace = true }
sha2 = { workspace = true }
shlex = { workspace = true }
tar = { workspace = true }
tokio-tungstenite = { workspace = true }

Some files were not shown because too many files have changed in this diff.